QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
77,993,854
| 5,057,022
|
Using PyPDF2 in a with Loop - Getting ValueError
|
<p>I've got some code that loops over a list of pdfs, reads in each pdf in a PdfReader object and appends all but the last page to a single PdfWriter object.</p>
<p>Here is my code:</p>
<pre><code>def remove_last_page():
sorted_filenames = sorted(glob.glob('Reading*.pdf'), key = sort_key)
writer = PyPDF2.PdfWriter()
for file_path in sorted_filenames:
with open(file_path,'rb') as file:
reader = PyPDF2.PdfReader(file)
for i in range(len(reader.pages) -1):
writer.add_page(reader.pages[i])
new_filename = 'Combined_Readings.pdf'
with open(new_filename,'wb') as new_file:
writer.write(new_file)
</code></pre>
<p>This gives me this error however:</p>
<p><a href="https://i.sstatic.net/R5Fff.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R5Fff.png" alt="enter image description here" /></a></p>
<p>which suggests that PyPDF2 is not interacting with the with block correctly.</p>
<p>I'm confident the pdfs are all fine.</p>
<p>Could someone point towards my error?</p>
|
<python>
|
2024-02-14 11:12:27
| 0
| 383
|
jolene
|
77,993,834
| 2,595,216
|
Incompatible types in assignment for function return union of types
|
<p>What is best way to fix mypy for such function?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union
def a(b: int) -> Union[int, str]:
if b:
return b
else:
return '2'
c: int = a(1)
d: str = a(0)
</code></pre>
<p>mypy result:</p>
<pre><code>error: Incompatible types in assignment (expression has type "int | str", variable has type "int") [assignment]
error: Incompatible types in assignment (expression has type "int | str", variable has type "str") [assignment]
</code></pre>
|
<python><mypy><python-typing>
|
2024-02-14 11:09:12
| 1
| 553
|
emcek
|
77,993,730
| 2,876,079
|
How to use indexed data frames with altair?
|
<p>Lets say I have an indexed pandas DataFrame <code>indexed_df</code>:</p>
<pre><code>df = pd.DataFrame(
[
{'id_foo': 1, 'energy_carrier': 'oil', '2000': 5, '2020': 10},
{'id_foo': 2, 'energy_carrier': 'electricity', '2000': 10, '2020': 20},
]
)
indexed_df = df.pivot_table(
columns='energy_carrier',
values=['2000', '2020'],
aggfunc='sum',
)
</code></pre>
<p><a href="https://i.sstatic.net/Rehq8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rehq8.png" alt="enter image description here" /></a></p>
<p>How can I use this DataFrame with altair and map the index to the x values?</p>
<p>Do I always need to reset the DataFrame to get a named column?</p>
<pre><code>x='named_column'
</code></pre>
<p>Or is there a way to tell altair to use the index?</p>
<pre><code>x='$index'
</code></pre>
<p>I would expect altair to use the index of an indexed DataFrame for x values by default. However, that does not seem to be the case. I tried to use</p>
<pre><code>x='index'
</code></pre>
<p>(or no specification for x) and do not get the results I would expect.</p>
<p>Related:</p>
<p><a href="https://stackoverflow.com/questions/77993663/how-to-create-stacked-bar-chart-with-altair-in-python">How to create stacked bar chart from different data format with altair in python?</a></p>
|
<python><pandas><altair>
|
2024-02-14 10:51:16
| 1
| 12,756
|
Stefan
|
77,993,663
| 2,876,079
|
How to create stacked bar chart from different data format with altair in python?
|
<p>What y-expression should I specify in the following example to create a stacked bar chart? Is there some expression like</p>
<pre><code>y="$all-other-columns"
</code></pre>
<p>or</p>
<pre><code>y="$group-of-columns"
</code></pre>
<p>Or would I need to reshape the data to <a href="https://altair-viz.github.io/user_guide/data.html#long-form-vs-wide-form-data" rel="nofollow noreferrer">long-form</a> to follow the example</p>
<p><a href="https://altair-viz.github.io/gallery/stacked_bar_chart.html" rel="nofollow noreferrer">https://altair-viz.github.io/gallery/stacked_bar_chart.html</a></p>
<p>and the suggestions in</p>
<p><a href="https://altair-viz.github.io/user_guide/data.html#converting-with-pandas" rel="nofollow noreferrer">https://altair-viz.github.io/user_guide/data.html#converting-with-pandas</a></p>
<pre><code>import altair as alt
import pandas as pd
df = pd.DataFrame(
[
{'id_foo': 1, 'energy_carrier': 'oil', '2000': 5, '2020': 10},
{'id_foo': 2, 'energy_carrier': 'electricity', '2000': 10, '2020': 20},
]
)
pivot_df = df.pivot_table(
columns='energy_carrier',
values=['2000', '2020'],
aggfunc='sum',
)
plot_colors = ['blue', 'green']
plot_df = pivot_df.reset_index()
colors = alt.Color('energy_carrier:N', scale=alt.Scale(range=plot_colors))
chart = (
alt.Chart(plot_df)
.mark_bar()
.encode(
x='index:O',
#y=?,
color=colors,
)
.properties(width=200)
)
chart.show()
</code></pre>
<p><a href="https://i.sstatic.net/DUJpF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DUJpF.png" alt="Expected" /></a></p>
<p>I want a result like this:</p>
<p><a href="https://i.sstatic.net/TlabI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TlabI.png" alt="enter image description here" /></a></p>
<p>Related:</p>
<p><a href="https://stackoverflow.com/questions/77993730/how-to-use-indexed-data-frames-with-altair">How to use indexed data frames with altair?</a></p>
|
<python><pandas><altair>
|
2024-02-14 10:40:09
| 1
| 12,756
|
Stefan
|
77,993,595
| 10,053,485
|
anaconda-navigator fails to launch, ImportError. Install was previously fine
|
<p>After functioning fine for nearly a year my Anaconda Navigator has stopped functioning after my Windows11 device updated itself overnight (unprompted, thank you Microsoft.)
Update: <em>2024-02 Cumulative Update for Windows 11 Version 23H2 for x64-based Systems (KB5034765)</em></p>
<p>It's unclear whether this is the direct cause, or whether an unrelated matter triggered the issue.</p>
<p>Jupyter Lab, which I use through Anaconda Navigator, was open at the time.</p>
<p>When opening the Navigator shortcut, a couple command prompt windows pop up (as was the case prior to today), but the navigator simply does not start. No error messages are displayed through this approach.</p>
<p>What I've already tried:
<a href="https://docs.anaconda.com/free/troubleshooting/#navigator-issues" rel="nofollow noreferrer">Most of the steps documented here</a>. <a href="https://docs.anaconda.com/free/navigator/troubleshooting/" rel="nofollow noreferrer">Previous link</a></p>
<ul>
<li>Deleted the <code>.condarc</code> file.</li>
<li>Launch <code>anaconda-navigator</code> directly through Anaconda Prompt.</li>
<li>Manually update with <code>conda update -n base anaconda-navigator</code></li>
<li>Updated urllib3 with <code>conda update urllib3</code> (reason below)</li>
<li>Executed <code>conda remove -n base anaconda-navigator</code> &
<code>conda install -n base anaconda-navigator</code></li>
<li>Restarted my device prior to and after all these steps.</li>
</ul>
<p>Launching <code>anaconda-navigator</code> through Anaconda Prompt does display an error, and this error has changed with the steps taken above.</p>
<p>The original error was <a href="https://docs.pydantic.dev/2.6/migration/#basesettings-has-moved-to-pydantic-settings" rel="nofollow noreferrer">related to Pydantic, with this link attached.</a> This error is no longer displayed.</p>
<p>The current Error is related to urllib3, full trace below:</p>
<pre><code>(base) C:\Users\user>anaconda-navigator
Traceback (most recent call last):
File "C:\Users\user\anaconda3\Scripts\anaconda-navigator-script.py", line 6, in <module>
from anaconda_navigator.app.main import main
File "C:\Users\user\anaconda3\lib\site-packages\anaconda_navigator\app\main.py", line 19, in <module>
from anaconda_navigator.app.start import start_app
File "C:\Users\user\anaconda3\lib\site-packages\anaconda_navigator\app\start.py", line 30, in <module>
from anaconda_navigator.widgets.main_window import MainWindow
File "C:\Users\user\anaconda3\lib\site-packages\anaconda_navigator\widgets\main_window\__init__.py", line 24, in <module>
from anaconda_navigator.api.anaconda_api import AnacondaAPI
File "C:\Users\user\anaconda3\lib\site-packages\anaconda_navigator\api\anaconda_api.py", line 29, in <module>
from anaconda_navigator.api.client_api import ClientAPI
File "C:\Users\user\anaconda3\lib\site-packages\anaconda_navigator\api\client_api.py", line 21, in <module>
import binstar_client
File "C:\Users\user\anaconda3\lib\site-packages\binstar_client\__init__.py", line 27, in <module>
from .requests_ext import NullAuth
File "C:\Users\user\anaconda3\lib\site-packages\binstar_client\requests_ext.py", line 11, in <module>
from urllib3.filepost import choose_boundary, iter_fields
ImportError: cannot import name 'iter_fields' from 'urllib3.filepost' (C:\Users\user\anaconda3\lib\site-packages\urllib3\filepost.py)
</code></pre>
<p>urllib3 version: 2.1.0</p>
<p>pydantic version: 2.6.1</p>
<p>Looks like iter_fields might be deprecated. Would downgrading urllib3 be sufficient to get Anaconda Navigator working again?</p>
|
<python><anaconda>
|
2024-02-14 10:30:16
| 0
| 408
|
Floriancitt
|
77,993,273
| 160,206
|
Protocol for arithmetic operation implemented on another class
|
<p>In Python, arithmetic operations between operands of different types can be implemented on either of the operand types either using for example <code>__add__</code> for the left-hand operand, or <code>__radd__</code> for the right-hand operand. I'm trying to write a protocol that expresses that a type supports addition with a specific other type. However, for one class that I don't control, the implementation of the addition is on the right-hand operand, and so it doesn't match the protocol, even though the operation is supported. The example below illustrates this:</p>
<pre class="lang-py prettyprint-override"><code>from abc import abstractmethod
from dataclasses import dataclass
from typing import Protocol
class SupportsArithmetic(Protocol):
@abstractmethod
def __add__(self, other: "Delta") -> "SupportsArithmetic":
...
@dataclass
class Delta:
amount: int
def __radd__(self, other: object) -> SupportsArithmetic:
if not isinstance(other, Instance):
return NotImplemented
return Instance(other.value + self.amount)
# I can't modify this class
@dataclass
class Instance:
value: int
def add_delta(lhs: SupportsArithmetic) -> None:
print(lhs + Delta(7))
add_delta(Instance(5))
</code></pre>
<p>The function <code>add_delta</code> works, but <code>mypy</code> complains that <code>Instance</code> doesn't implement the protocol when calling <code>add_delta</code>.</p>
<p>The real use-case is more complex. <code>Instance</code> is actually Python's <code>datetime</code> type, and <code>Delta</code> is <code>dateutil.relativedelta.relativedelta</code>. So I can't modify either type. I have created a type that can be used as a drop-in for <code>datetime</code> for certain operations, such as adding a <code>relativedelta</code> to it. I now wanted to create a protocol, so that I can annotate functions that take either a <code>datetime</code>, or my custom type.</p>
|
<python><python-typing>
|
2024-02-14 09:33:47
| 0
| 77,084
|
BjΓΆrn Pollex
|
77,993,223
| 3,433,875
|
Polar bar chart with rounded corners in Matplotlib?
|
<p>I am trying to recreate this radial bar chart in matplotlib:
<a href="https://i.sstatic.net/X2VJp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X2VJp.png" alt="enter image description here" /></a></p>
<p>I can do it without problems using:</p>
<pre><code> import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.patches import FancyBboxPatch
color_dict = { "Sweden":"#5375D4", "Denmark":"#A54836", "Norway":"#2B314D" }
colors = [ "#2B314D", "#A54836","#5375D4", ]
data = {
"year": [2004, 2022, 2004, 2022, 2004, 2022],
"countries" : ["Sweden", "Sweden", "Denmark", "Denmark", "Norway", "Norway"],
"sites": [13,15,4,10,5,8]
}
df= pd.DataFrame(data)
df['sub_total'] = df.groupby('year')['sites'].transform('sum')
df = df.sort_values(['countries', 'sites'], ascending=True ).reset_index(drop=True)
fig, ax = plt.subplots(figsize=(5,5), facecolor = "#FFFFFF", subplot_kw=dict(polar=True) )
x = len(df.year.unique())
countries = df.countries.unique()
for countries in countries:
y = df[df["countries"] == countries].sort_values("countries", ascending = False)["sites"].values
ax.barh(countries, np.radians(y*10))
ax.set_theta_zero_location('N')
ax.set_theta_direction(-1)
ax.set_rorigin(-1)
ax.set_thetamax(180)
</code></pre>
<p>Which produces: (i havent beautified it yet)
<a href="https://i.sstatic.net/QtcXj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QtcXj.png" alt="enter image description here" /></a></p>
<p>To create the round corners I was thinking of adding circles to both ends, but is there an easier way?</p>
<p>I have found this, but it doesnt work on polar charts:</p>
<p><a href="https://stackoverflow.com/questions/58425392/bar-chart-with-rounded-corners-in-matplotlib">Bar chart with rounded corners in Matplotlib?</a>
It generates this:
<a href="https://i.sstatic.net/Q3yU5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q3yU5.png" alt="enter image description here" /></a>
I want to do it in pure matplotlib (no plotly or seaborn,...)</p>
|
<python><matplotlib>
|
2024-02-14 09:25:37
| 1
| 363
|
ruthpozuelo
|
77,993,198
| 21,920,909
|
.venv indicator may not be present in the terminal prompt
|
<p>After updating, I encountered an error with Visual Studio Code that was working fine before.</p>
<pre class="lang-none prettyprint-override"><code>Python virtual environment was successfully activated, even though
"(.venv)" indicator may not be present in the terminal prompt.
</code></pre>
<p>When I installed venv in VS using Ctrl+Shift+P > Create Environment, I opened the terminal and it used to automatically add venv before the path. However, now it's not set and an error message appears.</p>
<p>then i used "notepad $PROFILE" command in cmd and add inside that Microsoft powershell profile file this text <code>.\.venv\Scripts\Activate.ps1</code></p>
<p>It automatically displays an error message when PowerShell is turned on, which I never wanted.</p>
<p><a href="https://i.sstatic.net/yvems.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yvems.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/KKQXE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KKQXE.png" alt="enter image description here" /></a></p>
<p>I tried almost everything, including reinstalling Visual Studio and Python, but nothing worked.
Then I synced my account to Visual Studio and it automatically came back again.
I even ran the PowerShell script</p>
<p><code>Set-ExecutionPolicy -Scope CurrentUser RemoteSigned</code></p>
<p>but still faced the issue.</p>
<p>Finally, I tried disabling all extensions from VS, but the problem persisted.</p>
|
<python><python-3.x><visual-studio-code>
|
2024-02-14 09:22:04
| 3
| 383
|
SSujitXX
|
77,993,188
| 7,026,806
|
How do I define a pandas coercion for a custom dtype?
|
<p>I have a special implementation of a <code>date</code> dtype. For various reasons it isn't a <code>date</code> since that has its own set of issues.</p>
<p>For Pandas I would like to just have it treated exactly as if it was a builtin <code>datetime</code>.</p>
<p>So far I haven't been able to get around <code>TypeError: <class '__main__.SpecialTime'> is not convertible to datetime </code></p>
<pre class="lang-py prettyprint-override"><code>from datetime import date, datetime
import numpy as np
import pandas as pd
from pandas.api.extensions import ExtensionDtype
class SpecialTime(ExtensionDtype):
name = "SpecialTime"
na_value = None
@classmethod
def construct_array_type(cls):
return pd.arrays.DatetimeArray
def __init__(self, year, month, day):
self._date = date(year, month, day)
def __str__(self):
return str(self._date)
def to_datetime(self):
return datetime.combine(self._date, datetime.min.time())
# add method to avoid TypeError: <class '__main__.SpecialTime'> is not convertible to datetime
def __array__(self, dtype=None):
return np.array([self.to_datetime()], dtype="datetime64[ns]")
pd.api.extensions.register_extension_dtype(SpecialTime)
df = pd.DataFrame(
{
"date_column": [
SpecialTime(2022, 1, 1),
SpecialTime(2023, 1, 2),
SpecialTime(2024, 1, 3),
]
}
)
print(df.dtypes)
print(df.convert_dtypes().dtypes)
print(df.astype("datetime64[ns]").dtypes)
</code></pre>
|
<python><pandas><dataframe>
|
2024-02-14 09:19:33
| 0
| 2,020
|
komodovaran_
|
77,993,073
| 2,828,099
|
python jira client and rate limit
|
<p>I am currently working on a script to do some bulk operations on our company jira instance, that saves plenty of time compared to doing it via the UI.</p>
<p>Unfortunately there is a very strict rate limiting in place. To not run into the limit accidentially, I have to wait 3 minutes after every 3 requests.</p>
<p>It could be more if I could read how to retrieve the response headers by jira, which contain the current rate limit details.</p>
<p>Unfortunately I didn't find a documentation yet how to read the rate limit headers (or even respect the ratelimit out of the box) via the python jira client (<a href="https://jira.readthedocs.io/" rel="nofollow noreferrer">https://jira.readthedocs.io/</a>).</p>
<p>Any idea if it's possible to catch the headers of the jira requests in any way?</p>
|
<python><jira><jira-rest-api><python-jira>
|
2024-02-14 09:02:04
| 1
| 1,653
|
peez80
|
77,993,017
| 11,013,499
|
finding the intersection of several curves in python
|
<p>I have 4 sets of data that try to fit curves on these data. for each of the sets of data, I used a code like below to fit a curve.</p>
<pre><code>center_540=np.sort(center_540,axis=0)
y=center_540[itr,:]
x=[0.2 ,0.4, 0.6, 0.8, 1, 2, 3, 4, 5,6]
popt, pcov,info,msg, ier= curve_fit(objective, x, y,full_output=True)
# summarize the parameter values
a, b, c ,d,e,f= popt
# plot input vs output
pyplot.scatter(x, y)
# define a sequence of inputs between the smallest and largest known inputs
x_line = np.linspace(min(x), max(x), 100)
# calculate the output for the range
y_line = objective(x_line, a, b, c,d,e,f)
# create a line plot for the mapping function
pyplot.plot(x_line, y_line, '--', color='red')
pyplot.xlabel('Rate')
pyplot.ylabel('PSNR')
err = np.dot(info['fvec'], info['fvec'])
plt.title('interpolation error resolution 540p: err=%f' %err)
pyplot.show()
</code></pre>
<p>after that, I tried to find the intersection of these curves. for this aim I used the following code that shows the intersection of each of the two curves:</p>
<pre><code>find_root(0.0063 , -0.1260 , 1.0146,-4.2147,10.4717,14.9473,0.0179 , -0.3148 , 2.1038,-6.7798,11.6871,15.5829)#1080 vs 720
find_root(0.0063 , -0.1260 , 1.0146,-4.2147,10.4717,14.9473,0.0189 , -0.3332 , 2.2272,-7.1543,12.0520,15.3168)# 1080 vs 540
find_root(0.0063 , -0.1260 , 1.0146,-4.2147,10.4717,14.9473,0.0178 , -0.3181 , 2.1672,-7.0761,11.6511,15.8349)#1080 vs 360
find_root(0.0179 , -0.3148 , 2.1038,-6.7798,11.6871,15.5829,0.0189 , -0.3332 , 2.2272,-7.1543,12.0520,15.3168) # 720 vs 540
find_root( 0.0179 , -0.3148 , 2.1038,-6.7798,11.6871,15.5829,0.0178 , -0.3181 , 2.1672,-7.0761,11.6511,15.8349)#720 vs 360
find_root(0.0189 , -0.3332 , 2.2272,-7.1543,12.0520,15.3168,0.0178 , -0.3181 , 2.1672,-7.0761,11.6511,15.8349)# 540 vs 360
</code></pre>
<p>the numbers in find_root functions are the values of the polynomial function in curve fitting.
this is the <code>find_root</code> function definition:</p>
<pre><code>def find_root(a1,b1,c1,d1,e1,f1,a2,b2,c2,d2,e2,f2):
coeff=[a1-a2,b1-b2,c1-c2,d1-d2,e1-e2,f1-f2]
print(np.roots(coeff))
</code></pre>
<p>but when I check the roots using the <code>find_root</code> function with the diagram in the first code I see some roots that do not exist in the diagram. what is the problem?
<a href="https://i.sstatic.net/ySDWD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ySDWD.png" alt="enter image description here" /></a></p>
<p>for example in the diagram we can see the intersections are lower than 1 but when I use find_root function I have these roots that some real roots con not be see in the diagram</p>
<pre><code>[ 7.19666881+0.j 4.11941032+2.23825508j 4.11941032-2.23825508j
1.14334481+0.j -0.30297219+0.j ]
[ 7.32487047+0.j 4.13676725+2.34581369j 4.13676725-2.34581369j
1.01965241+0.j -0.17361292+0.j ]
[ 7.48885498+0.j 4.24400787+2.72602922j 4.24400787-2.72602922j
1.09680306+0.j -0.36932595+0.j ]
[8.56875045+0.j 4.40845674+3.11694138j 4.40845674-3.11694138j
0.50716803+0.89895985j 0.50716803-0.89895985j]
[-47.61740723+0.j 7.2784646 +3.00665406j
7.2784646 -3.00665406j 0.95450723+0.j
-0.89402922+0.j ]
[ 6.88889702+1.76503189j 6.88889702-1.76503189j -0.72213743+2.48208939j
-0.72213743-2.48208939j 1.39375355+0.j ]
</code></pre>
|
<python><python-3.x><curve-fitting>
|
2024-02-14 08:51:51
| 1
| 1,295
|
david
|
77,992,980
| 1,753,640
|
regex to extract citation
|
<p>Given the following example text, I want to be able to extract the sentences associated to <strong>Citation:</strong>. So for the following text:</p>
<pre><code>1. **Fact:** The Tenant is obligated to obtain all consents required by law and provide copies to the Landlord upon request.
**Citation:** "to obtain all consents which the law requires and give copies to the Landlord on request;"
2. **Fact:** The Tenant must pay all rates, taxes, and outgoings related to the Property, including any imposed after the date of the Lease.
**Citation:** "To pay all rates, taxes and outgoings relating to the Property, including any which are imposed after the date of this Lease (even if of a novel nature)."
3. **Fact:** The Tenant is required to pay VAT on any payments made under the Lease and on any payments made by the Landlord that the Tenant agrees to reimburse.
**Citation:** "To pay VAT on any payment made by the Tenant under this Lease and (except to the extent that the Landlord can reclaim it) on any payment made by the Landlord where the Tenant agrees to reimburse the Landlord
</code></pre>
<p>I want regex to produce a list of the following:</p>
<pre><code>[
"to obtain all consents which the law requires and give copies to the Landlord on request;",
"To pay all rates, taxes and outgoings relating to the Property, including any which are imposed after the date of this Lease (even if of a novel nature).",
"To pay VAT on any payment made by the Tenant under this Lease and (except to the extent that the Landlord can reclaim it) on any payment made by the Landlord where the Tenant agrees to reimburse the Landlord"
]
</code></pre>
<p>This is my attempt using regex:</p>
<pre><code>regex = r"\*\*Citation\:\s*?\"(?P<citation>.*?)\""
citations = re.findall(regex, text, re.MULTILINE | re.DOTALL)
</code></pre>
<p>However all this returns is an empty list. Any help would be great.</p>
|
<python><regex>
|
2024-02-14 08:44:33
| 2
| 385
|
user1753640
|
77,992,967
| 22,437,609
|
How can i add white border after background color in Kivy via Pure Python?
|
<p>Below code works and i can clearly see the blue Color(0, 0.525, 0.953, 1) background of takim_isimleri_box BoxLayout
Base code:</p>
<pre><code>from kivy.graphics import Color, RoundedRectangle, Line
from kivy.uix.boxlayout import BoxLayout
takim_isimleri_box = BoxLayout()
with takim_isimleri_box.canvas.before:
Color(0, 0.525, 0.953, 1) # Background color
takim_isimleri_box.rect = RoundedRectangle(size=takim_isimleri_box.size, pos=takim_isimleri_box.pos, radius=[10, 10, 10, 10])
def update_rect2(instance, value):
instance.rect.size = instance.size
instance.rect.pos = instance.pos
takim_isimleri_box.bind(size=update_rect2, pos=update_rect2)
</code></pre>
<p>I want to add a white white border after background color in Kivy via Pure Python?
Howe can i do this?</p>
<p>What i tried, i asked it to Chatgpt it recommended me below code but did not worked.
ChatgPt:</p>
<pre><code>from kivy.graphics import Color, RoundedRectangle, Line
from kivy.uix.boxlayout import BoxLayout
takim_isimleri_box = BoxLayout()
with takim_isimleri_box.canvas.before:
Color(0, 0.525, 0.953, 1) # Background color
takim_isimleri_box.rect = RoundedRectangle(size=takim_isimleri_box.size, pos=takim_isimleri_box.pos, radius=[10, 10, 10, 10])
with takim_isimleri_box.canvas.after:
Color(1, 1, 1, 1) # White color for the border
Line(rectangle=(takim_isimleri_box.x, takim_isimleri_box.y, takim_isimleri_box.width, takim_isimleri_box.height), width=1.5)
def update_rect2(instance, value):
instance.rect.size = instance.size
instance.rect.pos = instance.pos
takim_isimleri_box.bind(size=update_rect2, pos=update_rect2)
</code></pre>
<p>Above code did not work. There is no white border. I only see blue background without white border.</p>
<p>Thanks very much</p>
|
<python><python-3.x><kivy><kivymd>
|
2024-02-14 08:42:38
| 1
| 313
|
MECRA YAVCIN
|
77,992,857
| 5,761,601
|
How to handle events from Azure Eventhub on multiple partitions one by one by an Azure function app locally
|
<p>While debugging a Python azure function app locally, that processes events from an Azure Eventhub with 32 paritions I would like to know whether it is possible to handle the events one by one.
It seems that whatever setting I set to limit the number of events that arrive are always equal to the number of partitions, in this case 32.</p>
<p>My <code>host.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[4.*, 5.0.0)"
},
"extensions": {
"eventHubs": {
"batchCheckpointFrequency": 1,
"eventProcessorOptions": {
"maxBatchSize": 1,
"prefetchCount": 1,
"maxConcurrentCalls": 1
},
"initialOffsetOptions": {
"type": "fromEnd"
}
}
}
}
</code></pre>
<p>The function app code:</p>
<pre class="lang-py prettyprint-override"><code>@app.event_hub_message_trigger(arg_name="azeventhub",
event_hub_name="eventhubname",
connection="EventHubConnectionString",
cardinality=func.Cardinality.ONE,
consumer_group="consumer_group_x")
async def event_processor(azeventhub: func.EventHubEvent):
event = azeventhub.get_body().decode('utf-8')
logging.info("Received from EventHub: %s", event)
try:
json_event = json.loads(event)
await process_event(json_event)
except Exception as e:
logging.error("Error while processing event: %s", e)
time.sleep(10) # it does not wait after each event
</code></pre>
<p>In the above code the sleep is only executed after it has processed 32 events, so I assume it sleeps 32 times 10 seconds at the same time.
The same behaviour happens when I make the code synchronous.</p>
<p>Is this not possible by design or are there settings that really can force this?</p>
|
<python><azure><azure-functions><azure-eventhub>
|
2024-02-14 08:17:44
| 1
| 557
|
warreee
|
77,992,562
| 5,810,717
|
How to group by and find new or disappearing items
|
<p>I am trying to assess in a sales database whether the # of advertisements has changed.
The example dataframe I am using is as such:</p>
<pre><code>df = pd.DataFrame({"offer-id": [1,1,2,2,3,4,5], "date": ["2024-02-10","2024-02-11","2024-02-10","2024-02-11","2024-02-11","2024-02-11","2024-02-10"], "price": [30,10,30,30,20,25,20]})
</code></pre>
<p>And looks like the below:</p>
<p><a href="https://i.sstatic.net/h5bJU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h5bJU.png" alt="Dataframe" /></a></p>
<p>I am now trying to get the # of items that were sold or newly added (I don't care which one, since once I have one the other should be failry easily computable).</p>
<p>E.g. in a perfect case the next piece of code tells me that on 10th of February 3 offers were online (ID 1, 2, and 5) and one was sold (ID 5)
Or alternatively, it tells me on 11th of February 4 offers are online, and 2 of them are new (from that, since I know the day before 5 were online I can also calculate that one must have sold)</p>
<p>Is there a simple way of doing this?
I have tried things like</p>
<pre><code>df.groupby(['date'])["offer-id"].agg({'nunique'})
</code></pre>
<p>but they are missing the "comparison to previous" timestep component.</p>
|
<python><pandas><group-by><comparison>
|
2024-02-14 07:09:24
| 1
| 363
|
fuΓballball
|
77,992,364
| 9,394,465
|
Processing a large csv files with pandas in a faster way
|
<p>I have a task of reading two large csv files (~5 millions of records, ~2Gig) every day and to process the intersecting and non-intersecting columns in different ways (which is irrelevant to the question). The problem I am facing is that the files are taking more time to load, but once they are loaded (via read_csv(..)), the processing takes just a few seconds. I am wondering:</p>
<ol>
<li>if I can improve the <strong>read_csv(..)</strong> time which btw skips both <em>header</em> and <em>footer</em> and has <em>multi-char</em> separator
<ul>
<li><code>pd.read_csv(input_file, sep='\x1D', engine='python', skiprows=1, skipfooter=1, header=0, dtype=str, keep_default_na=False)</code></li>
<li><em>Note</em>: the 'dtype=str' is to ensure pandas don't manipulate anything (i.e., rounding of fractions, inferring absolute num as float, etc.)</li>
<li><em>Note2</em>: I cannot load these CSVs in chunks as that would make finding the intersecting rows between two CSVs very harder and time consuming.</li>
</ul>
</li>
<li>Is there a way I can do this operation in a lesser RAM-sized system since I kept increasing the RAM in my current system whenever the app throws the "MemoryError: Unable to allocate 3.11 GiB for an array with shape (118, 8140983)..." error.</li>
</ol>
|
<python><pandas>
|
2024-02-14 06:10:00
| 1
| 513
|
SpaceyBot
|
77,992,176
| 4,399,016
|
Pandas idxmax - top n values
|
<p>I have this code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48], 'co2_emissions': [37.2, 19.66, 1712]}, index=['Pork', 'Wheat Products', 'Beef'])
df['Max'] = df.idxmax(axis=1, skipna=True, numeric_only=True)
df
</code></pre>
<p>I need to find the n largest values. <a href="https://stackoverflow.com/a/35872840/4399016">Here</a> there is a technique using apply/lambda.
But it returns error.</p>
<pre><code>df.apply(lambda s: s.abs().nlargest(2).index.tolist(), axis=1,skipna=True, numeric_only=True)
</code></pre>
<blockquote>
<p>TypeError: () got an unexpected keyword argument
'numeric_only'</p>
</blockquote>
<p>Is there any way to obtain top N results using idxmax? Is there any way to overcome this error got when using apply lambda method?</p>
|
<python><pandas><data-wrangling>
|
2024-02-14 05:05:01
| 2
| 680
|
prashanth manohar
|
77,992,160
| 4,277,485
|
TypeError: __init__() got an unexpected keyword argument 'aws_access_key_id'
|
<p>I want to read parquet files in S3 using polars, evidently the following code is working in python 3.9 but not in python 3.7:</p>
<pre><code>import polars as pl
storage_options = {
"aws_access_key_id": access_key,
"aws_secret_access_key": secret_key,
"aws_session_token": token
}
lazyframe = pl.scan_parquet(read_path, storage_options=storage_options)
</code></pre>
<p>error:</p>
<pre class="lang-none prettyprint-override"><code>File "/opt/py37/lib/python3.7/site-packages/s3fs/core.py", line 714,
in _iterdir<br>
await self.set_session()<br> File "/opt/py37/lib/python3.7/site-packages/s3fs/core.py", line 492, in
set_session<br>
self.session = aiobotocore.session.AioSession(**self.kwargs)<br> TypeError: __init__() got an unexpected keyword argument
'aws_access_key_id'
</code></pre>
|
<python><amazon-s3><parquet><python-polars>
|
2024-02-14 04:59:42
| 0
| 438
|
Kavya shree
|
77,992,101
| 10,292,638
|
How to get all elements as rows for each href in HTML and add it to a pandas dataframe?
|
<p>I am trying to fetch as rows the different values inside each href element from the following website: <a href="https://www.bmv.com.mx/es/mercados/capitales" rel="nofollow noreferrer">https://www.bmv.com.mx/es/mercados/capitales</a></p>
<p>There should be 1 row that matches each field on the provided headers for each different <code>href</code> element on the HTML file.</p>
<p>This is one of the portions of the HTML that I am trying to scrape:</p>
<pre><code>
<tbody>
<tr role="row" class="odd">
<td class="sorting_1"><a href="/es/mercados/cotizacion/1959">AC
</a></td><td><span class="series">*</span>
</td><td>03:20</td><td><span class="color-2">191.04
</span></td><td>191.32</td>
<td>194.51</td>
<td>193.92</td>
<td>191.01</td>
<td>380,544</td>
<td>73,122,008.42</td>
<td>2,793</td>
<td>-3.19</td><td>-1.64</td></tr><tr role="row" class="even">
<td class="sorting_1"><a href="/es/mercados/cotizacion/203">ACCELSA</a>
</td>
<td><span class="series">B</span>
</td><td>03:20</td><td>
<span class="">22.5</span></td><td>0</td>
<td>22.5</td><td>0</td><td>0
</td><td>3</td><td>67.20</td>
<td>1</td><td>0</td><td>0</td></tr>
<tr role="row" class="odd">
<td class="sorting_1">
<a href="/es/mercados/cotizacion/6096">ACTINVR</a></td>
<td><span class="series">B</span></td><td>03:20</td><td>
<span class="">15.13</span></td><td>0</td><td>15.13</td><td>0</td>
<td>0</td><td>13</td><td>196.69</td><td>4</td><td>0</td>
<td>0</td></tr><tr role="row" class="even"><td class="sorting_1">
<a href="/es/mercados/cotizacion/339083">AGUA</a></td>
<td><span class="series">*</span>
</td><td>03:20</td><td>
<span class="color-1">29</span>
</td><td>28.98</td><td>28.09</td>
<td>29</td><td>28</td><td>296,871</td>
<td>8,491,144.74</td><td>2,104</td><td>0.89</td>
<td>3.17</td></tr><tr role="row" class="odd"><td class="sorting_1">
<a href="/es/mercados/cotizacion/30">ALFA</a></td><td><span class="series">A</span></td>
<td>03:20</td>
<td><span class="color-2">13.48</span>
</td><td>13.46</td>
<td>13.53</td><td>13.62</td><td>13.32</td>
<td>2,706,398</td>
td>36,494,913.42</td><td>7,206</td><td>-0.07</td>
<td>-0.52</td>
</tr><tr role="row" class="even"><td class="sorting_1">
<a href="/es/mercados/cotizacion/7684">ALPEK</a></td><td><span class="series">A</span>
</td><td>03:20</td><td><span class="color-2">10.65</span>
</td><td>10.64</td><td>10.98</td><td>10.88</td><td>10.53</td>
<td>1,284,847</td><td>13,729,368.46</td><td>6,025</td><td>-0.34</td>
<td>-3.10</td></tr><tr role="row" class="odd"><td class="sorting_1">
<a href="/es/mercados/cotizacion/1729">ALSEA</a></td><td><span class="series">*</span>
</td><td>03:20</td><td><span class="color-2">65.08</span></td><td>64.94</td><td>65.44</td><td>66.78</td><td>64.66</td><td>588,826</td><td>38,519,244.51</td><td>4,442</td><td>-0.5</td><td>-0.76</td></tr>
<tr role="row" class="even"><td class="sorting_1">
<a href="/es/mercados/cotizacion/424518">ALTERNA</a></td><td><span class="series">B</span></td><td>03:20</td><td><span class="">1.5</span></td><td>0</td><td>1.5</td>
<td>0</td><td>0</td><td>2</td><td>3</td><td>1</td><td>0</td><td>0</td></tr><tr role="row" class="odd"><td class="sorting_1">
<a href="/es/mercados/cotizacion/1862">AMX</a></td>
<td><span class="series">B</span></td><td>03:20</td>
<td><span class="color-2">14.56</span></td><td>14.58</td>
<td>14.69</td><td>14.68</td><td>14.5</td><td>86,023,759</td>
<td>1,254,412,623.59</td><td>41,913</td><td>-0.11</td>
<td>-0.75</td></tr><tr role="row" class="even">
<td class="sorting_1"><a href="/es/mercados/cotizacion/6507">ANGELD</a>
</td><td><span class="series">10</span></td><td>03:20</td><td>
<span class="color-2">21.09</span>
</td><td>21.1</td><td>21.44</td><td>21.23</td><td>21.09</td>
<td>51,005</td><td>1,076,281.67</td>
<td>22</td><td>-0.34</td><td>-1.59</td></tr>
</tbody>
</code></pre>
<p>And my current code results into an empty <code>dataframe</code>:</p>
<pre><code># create empty pandas dataframe
import pandas as pd
import requests
from bs4 import BeautifulSoup
# get response code from webhost
page = requests.get('https://www.bmv.com.mx/es/mercados/capitales')
soup = BeautifulSoup(page.text, 'lxml')
#print(soup.p.text)
# yet it doesn't bring the expected rows!
print('Read html!')
# get headers
tbody = soup.find("thead")
tr = tbody.find_all("tr")
headers= [t.get_text().strip().replace('\n', ',').split(',') for t in tr][0]
#print(headers)
df = pd.DataFrame(columns=headers)
# fetch rows into pandas dataframe# You can find children with multiple tags by passing a list of strings
rows = soup.find_all('tr', {"role":"row"})
#rows
for row in rows:
cells = row.findChildren('td')
for cell in cells:
value = cell.string
#print("The value in this cell is %s" % value)
# append row in dataframe
</code></pre>
<p>I would like to know if it's possible to get a <code>pandas</code> dataframe whose fields are the ones portrayed in the headers list and the rows are each element from href.</p>
<p>For better perspective, the expected output should be equal to the table at the bottom of the provided website. Whose first row has the next schema:</p>
<pre><code>EMISORA SERIE HORA ΓLTIMO PPP ANTERIOR MΓXIMO MΓNIMO VOLUMEN IMPORTE OPS. VAR PUNTOS VAR %
AC * 3:20 191.04 191.32 194.51 193.92 191.01 380,544 73,122,008.42 2,793 -3.19 -1.64
</code></pre>
<p>Is this possible to create such dataset?</p>
|
<python><pandas><dataframe><web-scraping><python-requests>
|
2024-02-14 04:39:17
| 5
| 1,055
|
AlSub
|
77,992,011
| 964,267
|
How to resolve conflicts with conda packages
|
<p>I have this Dockerfile and I want to install all the Python dependencies which include opencv-python 4.7.0.68 in the requirements.txt file</p>
<pre><code># PyTorch image
...
RUN pip install -r requirements.txt
</code></pre>
<p>However, I am getting this error and I can't seem to resolve the issue except not installing the opencv-python 4.7.0.68</p>
<pre><code>55.09 Installing collected packages: voluptuous, tensorboard-plugin-wit, sentencepiece, library, easygui, tensorboard-data-server, safetensors, regex, pyasn1-modules, protobuf, opencv-python, oauthlib, multidict, markdown, lightning-utilities, grpcio, ftfy, frozenlist, entrypoints, cachetools, async-timeout, absl-py, yarl, requests-oauthlib, huggingface-hub, google-auth, aiosignal, torchmetrics, tokenizers, google-auth-oauthlib, diffusers, aiohttp, accelerate, transformers, timm, tensorboard, altair, pytorch-lightning, open-clip-torch
56.38 Found existing installation: protobuf 3.20.3
56.39 Uninstalling protobuf-3.20.3:
58.77 Successfully uninstalled protobuf-3.20.3
58.87 Attempting uninstall: opencv-python
58.87 Found existing installation: opencv-python 4.6.0
58.87 ERROR: Cannot uninstall opencv-python 4.6.0, RECORD file not found. Hint: The package was installed by conda.
</code></pre>
<p>Any idea?</p>
|
<python><docker><pytorch>
|
2024-02-14 03:59:29
| 1
| 750
|
ln9187
|
77,991,993
| 3,731,622
|
How to configure VSCode with pylance so that I can change conda environments?
|
<p>When I use VSCode with pylance, if I change conda environments I get the</p>
<blockquote>
<p>overriding the stdlib module ... Pylance (reportShadowedImports)</p>
</blockquote>
<p>warning.</p>
<p>There are lots of posts about this (e.g. <a href="https://github.com/microsoft/pylance-release/issues/5191" rel="nofollow noreferrer">https://github.com/microsoft/pylance-release/issues/5191</a>). Most of which suggest disabling the "reportShadowedImports":</p>
<pre><code>"python.analysis.diagnosticSeverityOverrides": {
"reportShadowedImports": "none"
</code></pre>
<p>Is there a way to configure VSCode with pylance so I can switch conda environments, not disable "reportShadowedImports", and not get the "overriding the stdlib module...Pylance (reportShadowedImports)" warnings?</p>
|
<python><visual-studio-code><conda><pylance>
|
2024-02-14 03:47:51
| 1
| 5,161
|
user3731622
|
77,991,777
| 1,477,064
|
NiceGui: Hide table header
|
<p>Following the <a href="https://nicegui.io/documentation/table" rel="nofollow noreferrer">documentation example</a>.</p>
<p>I wasn't able to figure out how to remove the table headers <code><th></code>, in this case the "Name" and "Age" row.</p>
<pre><code>columns = [
{'name': 'name', 'label': 'Name', 'field': 'name', 'required': True, 'align': 'left'},
{'name': 'age', 'label': 'Age', 'field': 'age', 'sortable': True},
]
rows = [
{'name': 'Alice', 'age': 18},
{'name': 'Bob', 'age': 21},
{'name': 'Carol'},
]
ui.table(columns=columns, rows=rows, row_key='name')
ui.run()
</code></pre>
|
<python><nicegui>
|
2024-02-14 02:06:08
| 1
| 4,849
|
xvan
|
77,991,709
| 10,576,557
|
Exception Handling for subprocess.run
|
<p>I am using the following code to test for error capture when sending a command to the shell:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/python3
'''Trying to get a better error capture for subprocess.run'''
import subprocess
import argparse
cmd = 'ixconfig'
useShell = False
if useShell:
myCommand = cmd
else:
myCommand = cmd.split()
try:
result = subprocess.run(myCommand,
capture_output=True,
check=True,
shell=useShell,
text=True,
timeout=1,
)
except subprocess.TimeoutExpired:
print('Timeout occurred')
result = argparse.Namespace(err=f'"{myCommand}" timed out')
except subprocess.CalledProcessError as subErr:
print('subprocess.CalledProcessError')
result = argparse.Namespace(err=subErr)
except subprocess.SubprocessError as subErr:
print('except subprocess.SubprocessError')
result = argparse.Namespace(err=subErr)
# except:
# result = 'Poor way to handle an exception'
print(f'{result=}')
</code></pre>
<p>For the most part it works. I can get a CalledProcessError when I try to list an invalid directory (<code>cmd = 'ls /var/log/xxx'</code>) or if I get a timeout (<code>cmd = 'sleep 5'</code>).</p>
<p>However, if I send a bad command, such as <code>cmd = 'ixconfig'</code> I get a traceback instead of capturing via SubprocessError:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/shepherd/prog/aws/BBEditRunTemp-testSubprocess.py", line 17, in <module>
result = subprocess.run(myCommand,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 489, in run
with Popen(*popenargs, **kwargs) as process:
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 1702, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ixconfig'
</code></pre>
<p>If I uncomment the straight <code>except:</code> section, the exception is captured properly but I know a bare-except is poor programming. How can I properly capture a proper runtime exception?</p>
|
<python><python-3.x><exception>
|
2024-02-14 01:31:24
| 1
| 569
|
shepster
|
77,991,701
| 558,639
|
How to install pipenv in "externally managed environment"
|
<p>I want to use pipenv for managing my virtual Python environments. In a freshly built Debian (on RPi4B):</p>
<pre><code> % pip install pipenv
</code></pre>
<p>fails with</p>
<pre><code>error: externally-managed-environment
Γ This environment is externally managed
</code></pre>
<p>and it goes on to suggest that I use a virtual environment. But that's why I'm installing pipenv in the first place!</p>
<p>I could not find updated info on how to install pipenv -- how does one resolve this chicken and egg problem?</p>
|
<python><pip><debian><raspberry-pi4><pipenv>
|
2024-02-14 01:29:06
| 2
| 35,607
|
fearless_fool
|
77,991,554
| 10,637,953
|
How is ContiguousSelection mode set in QListWiget using Python?
|
<p>How can we set the 'ContiguousSelection' mode for a QListWidget in Python?</p>
<p>My problem seems to be in accessing the QAbstractView modes, as given in the
QAbstractItemView docs.</p>
<p>See: <a href="https://doc.qt.io/qt-6/qabstractitemview.html#setSelectionModel" rel="nofollow noreferrer">https://doc.qt.io/qt-6/qabstractitemview.html#setSelectionModel</a></p>
<p>See: <a href="https://doc.qt.io/qt-6/qabstractitemview.html#selectionMode-prop" rel="nofollow noreferrer">https://doc.qt.io/qt-6/qabstractitemview.html#selectionMode-prop</a></p>
<p>(I am running on a Windows machine.)</p>
<p>Here is my MRC:</p>
<pre><code>import sys
from PyQt6.QtWidgets import (
QApplication, QWidget, QMainWindow,
QVBoxLayout, QListWidget, QAbstractItemView
)
class MainWindow(QMainWindow):
def __init__(self):
super(MainWindow, self).__init__()
self.setWindowTitle("My App")
self.widget = QListWidget()
self.setCentralWidget(self.widget)
self.widget.addItems(["One", "Two", "Three", "Four", "Five", "Six"])
self.widget.currentItemChanged.connect(self.index_changed)
print("\n>> self.widget.setSelectionMode(QAbstractItemView:ContiguousSelection)\n\
as listed in the docs, throws 'invalid Syntex' exception.")
#self.widget.setSelectionMode(QAbstractItemView:ContiguousSelection)
try:
self.widget.setSelectionMode(QAbstractItemView.ContiguousSelection)
except:
print(">> self.widget.setSelectionMode(QAbstractItemView.ContiguousSelection)\n Fails.")
print('-='*35,end='\n\n')
print("(By default, only one item can be selected at a time.)".center(80))
def index_changed(self, i): # Not an index, i is a QListWidgetItem
print(i.text())
app = QApplication([])
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>-- Variations I have tried --</p>
<p>self.widget.setSelectionMode(QAbstractItemView:ContiguousSelection)</p>
<p>This version, as listed in the docs, causes the compiler to throw an 'invalid Syntax' error.</p>
<p>self.widget.setSelectionMode(QAbstractItemView.ContiguousSelection)</p>
<p>This version fails with the error message: type object 'QAbstractItemView' has no attribute 'ContiguousSelection'.</p>
<p>Thank you for any help you can give.</p>
|
<python><qt><selection><qlistwidget>
|
2024-02-14 00:24:50
| 0
| 353
|
user10637953
|
77,991,512
| 15,982,771
|
How do I send a modal after a deferred button call in Pycord?
|
<p>I'm trying to defer a button call and then send a modal afterward in Pycord. Here's a portion of my code:</p>
<pre class="lang-py prettyprint-override"><code>class AddItemButton(Button): # Button class from discord.ui.Button
def __init__(self, title):
super().__init__(style=discord.ButtonStyle.blurple, label="Add Item", emoji="β")
self.title = title # Title is just something from the database to access things
async def callback(self, interaction):
await interaction.response.defer() # Defer the call
# -- Add deletion if there's one or more items in the database --
fetch_one = itemsExistInList(title=self.title, user_id=interaction.user.id)
if fetch_one and self.view.get_item("delete"):
self.view.add_item(ItemDeleteSelect(title=self.title, user_id=interaction.user.id))
await self.view.message.edit(view=self.view)
await interaction.response.send_modal(AddItemModal(message=self.view.message, title=self.title))
# ^ Attempt a response
</code></pre>
<p>Specifically this portion:</p>
<pre><code>await interaction.response.send_modal(AddItemModal(message=self.view.message, title=self.title))
</code></pre>
<p>When I try the above, I get the following error:</p>
<pre><code>discord.errors.InteractionResponded: This interaction has already been responded to before
</code></pre>
<p>Because the interaction has already been responded to, I decided to use a followup. A followup in Pycord is:</p>
<blockquote>
<p>Returns the followup webhook for followup interactions.</p>
</blockquote>
<p>When I attempted to exchange the previous line with:</p>
<pre class="lang-py prettyprint-override"><code> await interaction.followup.send_modal(AddItemModal(message=self.view.message, title=self.title))
</code></pre>
<p>It returned:</p>
<pre class="lang-py prettyprint-override"><code> await interaction.followup.send_modal(AddItemModal(message=self.view.message, title=self.title))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Webhook' object has no attribute 'send_modal'
</code></pre>
|
<python><pycord>
|
2024-02-14 00:11:07
| 1
| 1,128
|
Blue Robin
|
77,991,214
| 4,408,275
|
hashlib.sha256 returns different results on Linux
|
<p>I am hashing a file including its name from the path component with the code shown below<sup>1</sup>.
I am getting these strange results (hash value simplified for readability):</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Machine</th>
<th>Hash</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows 10 - local 1</td>
<td>abc</td>
</tr>
<tr>
<td>Windows 10 - local 2</td>
<td>abc</td>
</tr>
<tr>
<td>Windows 10 - CI - Machine 1</td>
<td>abc</td>
</tr>
<tr>
<td>Windows 10 - CI - Machine 2</td>
<td>abc</td>
</tr>
<tr>
<td>Linux RHEL 7 - local</td>
<td>abc</td>
</tr>
<tr>
<td>Linux RHEL 7 - CI</td>
<td>abc</td>
</tr>
<tr>
<td>Linux CentOS 7.9 - local</td>
<td>abc</td>
</tr>
<tr>
<td>Linux CentOS 7.9 - CI</td>
<td><strong>abc_linux_other</strong></td>
</tr>
</tbody>
</table></div>
<p>The interesting thing is</p>
<ul>
<li>it always works on Windows, doesn't matter the machine, the user, the patch level of windows etc.</li>
<li>on RHEL 7 it works, on my local machine, on the CI machine, and if I ssh in the CI machine and run it locally (as different user), it always works.</li>
<li>and then there is CentOS, where in CI, I get these strange results.</li>
</ul>
<p>On all machines, the same Python version, even the same Python installation is used!</p>
<p>I have absolutly no clue, where this behavior comes from and where I to look?</p>
<hr />
<p><sup>1</sup> Code</p>
<pre class="lang-py prettyprint-override"><code>import hashlib
from pathlib import Path
MAX_READ = 4096
some_file = Path("abc/abc.txt")
cs = hashlib.sha256()
buffer = some_file.name.encode("utf-8")
cs.update(buffer)
with open(some_file, "rb") as f:
while buffer:
buffer = f.read(MAX_READ)
cs.update(buffer)
</code></pre>
|
<python><hash><hashlib>
|
2024-02-13 22:32:08
| 0
| 1,419
|
user69453
|
77,991,137
| 1,351,691
|
python SSL error c:1000 using boto3 to connect to AWS
|
<p>Having an issue with python script (3.12) that previously worked that used boto3 to connect to AWS Glue and return table information.</p>
<p>The error message is:</p>
<p>"botocore.exceptions.SSLError: SSL validation failed for <a href="https://glue.us-west-2.amazonaws.com/" rel="nofollow noreferrer">https://glue.us-west-2.amazonaws.com/</a> [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)"</p>
<p>I use a separate script to first authenticate with AWS and then running code such as:</p>
<pre><code>import boto3
import json
import datetime
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
next_token = ""
client = boto3.client('glue', region_name ='us-west-2')
crawler_tables = []
while True:
response = client.get_tables(DatabaseName = 'XXXX', NextToken = next_token)
#print(response)
def myconverter(o):
if isinstance(o,datetime.datetime):
return o.__str__()
print(json.dumps(response, default=myconverter))
for tables in response['TableList']:
for columns in tables['StorageDescriptor']['Columns']:
crawler_tables.append(tables['Name'] + '|' + columns['Name']+ '|' + columns['Type'])
next_token = response.get('NextToken')
if next_token is None:
break
print(crawler_tables)
</code></pre>
<p>The error is new as the script functioned 8-12 months ago and believe that some security/network or workstation permissions have changed but not able to find the root cause.</p>
<p>I have tried to install certificates using the github Gist install Cerfificates.command But get the error:
OSError: [WinError 1314] A required privilege is not held by the client: '..\.venv\Lib\site-packages\certifi\cacert.pem' -> 'cert.pem'</p>
<p>I've checked the security on the folders and appears I should have the access needed (windows 10).</p>
<p>If this error and the above SSLError are actually tied together I've looked at options for moving the location of certs default directory c:\program files\common files\SSL\certs to some location I have full permissions to but not sure if going down the right path to resolve.</p>
<p>I appreciate input!</p>
|
<python><boto3><aws-glue>
|
2024-02-13 22:11:00
| 1
| 589
|
Jeff A
|
77,991,125
| 1,597,106
|
How can I stretch a list to be filled with its existing values distributed proportionally?
|
<p>I have a list like this:</p>
<pre><code>original_list = [10, 10, 2, 11, 11]
</code></pre>
<p>The values happen to be numbers, but they represent states. I need to "stretch" this list to a given length, and fill-in the missing values. If I were to stretch it to a length of 10 (i.e. double its length), it should look like this:</p>
<pre><code>[10, 10, 10, 10, 2, 2, 11, 11, 11, 11]
</code></pre>
<p>Since the stretched size is double the original length, there are two copies of each item. Originally there were 2 <code>10</code> values, now there are 4 <code>10</code> values, etc. They're in the same order.</p>
<p>In general, this will be used to stretch to larger lists, but it should work to shrink proportionally to a smaller list too.</p>
<p>I know I can do this as follows:</p>
<pre><code>def stretch_list(lst, length):
new_lst = []
ratio = len(lst) / length
for i in range(length):
index = int(i * ratio)
new_lst.append(lst[index])
return new_lst
original_list = [10, 10, 2, 11, 11]
stretched_list = stretch_list(original_list, 10)
</code></pre>
<p>Also, this could be shortened with list comprehension, although it's a bit hard to read:</p>
<pre><code>new_length = 10
stretched_list = [ original_list[int(i*len(original_list)/new_length)] for i in range(new_length) ]
</code></pre>
<p>However, I'm wondering if there are dedicated libraries for this type of operation? I've looked into <a href="https://numpy.org/doc/stable/reference/generated/numpy.resize.html" rel="nofollow noreferrer">numpy.resize</a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.interp.html" rel="nofollow noreferrer">numpy.interp</a>, but they don't seem to fit the bill - <code>resize</code> creates copies, and <code>interp</code> fills in with new values.</p>
|
<python>
|
2024-02-13 22:08:38
| 0
| 2,357
|
antun
|
77,991,106
| 8,491,510
|
Fill DataFrame column value with value from parent row
|
<p>Let's say that I have DataFrame like this:</p>
<pre><code>+-------+----------+---------+
| Key | Parent | Value |
|-------+----------+---------|
| Key1 | Key10 | 246 |
| Key2 | Key1 | None |
| Key3 | Key14 | 434 |
+-------+----------+---------+
</code></pre>
<p>Now I need to replace data in <code>Value</code> column (if it is None) with data from the <code>Parent</code> column (if parent exists - if not value should stay None). So expected results for above table should look like this:</p>
<pre><code>+-------+----------+---------+
| Key | Parent | Value |
|-------+----------+---------|
| Key1 | Key10 | 246 |
| Key2 | Key1 | 246 |
| Key3 | Key14 | 434 |
+-------+----------+---------+
</code></pre>
<p>DataFrame preparation code:</p>
<pre class="lang-py prettyprint-override"><code>data = [['Key1', "Key10", 246], ['Key2', "Key1", None], ['Key3', 'Key14', "434"]]
df = pd.DataFrame(data, columns=['Key', 'Parent', 'Value'])
</code></pre>
|
<python><pandas><dataframe>
|
2024-02-13 22:02:13
| 2
| 335
|
Karol Oleksy
|
77,991,049
| 9,915,864
|
Is there a way to print a formatted dictionary to a Python log file?
|
<p>I've got a logging handler setup that prints to stream and file. I'm working with a few dictionaries and modifying dictionaries. Is there a way to format a dictionary for the log file so that it shows up as one block rather than a line?</p>
<p>I've gone through a bunch of simpler formatting attempts and read through <a href="https://stackoverflow.com/questions/20111758/how-to-insert-newline-in-python-logging">How to insert newline in python logging?</a>. Before I try writing a custom formatter just for dictionaries, wanted to find out if there's a known way to do this. Thanks!</p>
<p>The desired output would be something like this:</p>
<pre><code>2024-02-13 13:27:03,685 [DEBUG] root: shelf_name = 'some shelf',
url = 'http://a_url',
creation_date = '02/12/2024'
2024-02-13 13:34:55,889 [DEBUG] root:
</code></pre>
<p>So is there any way to do this for a dictionary block?</p>
<p><strong>UPDATED: Removed the old extraneous iterations. I'm posting the closest acceptable result, but still would like to make the dictionary print as a single block</strong></p>
<pre class="lang-py prettyprint-override"><code>class Downloader:
def __init__(self, shelf_data) -> None:
shelf = ShelfData(shelf_data)
[logging.debug(f"shelf['{key}']: {val}") for key, val in shelf.__dict__.items()]
</code></pre>
<p>Log file:</p>
<pre><code>2024-02-13 16:29:18,024 [DEBUG] root: shelf['shelf_name']: test_shelf
2024-02-13 16:29:18,024 [DEBUG] root: shelf['url']: https://site/group/show/1865-scifi-and-fantasy-book-club
2024-02-13 16:29:18,024 [DEBUG] root: shelf['base_path']: C:\MyProjects\downloads
2024-02-13 16:29:18,024 [DEBUG] root: shelf['creation_date']: 02/12/2024
2024-02-13 16:29:18,039 [DEBUG] root: shelf['sort_order']: descend
2024-02-13 16:29:18,039 [DEBUG] root: shelf['books_per_page']: 100
2024-02-13 16:29:18,039 [DEBUG] root: shelf['download_dir']: None
2024-02-13 16:29:18,039 [DEBUG] root: shelf['book_count']: 37095
2024-02-13 16:29:18,039 [DEBUG] root: shelf['page_count']: 371
</code></pre>
<p>Also, might as well add my logger:</p>
<pre><code> logging_config = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'standard': {
'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
},
},
'handlers': {
'default_handler': {
'class': 'logging.FileHandler',
'level': 'DEBUG',
'formatter': 'standard',
'filename': os.path.join('logs', f'{log_name}.log'),
'encoding': 'utf-8'
},
'stream_handler': {
'class': 'logging.StreamHandler',
'level': 'DEBUG',
'formatter': 'standard',}
},
'loggers': {
'': {
'handlers': ['default_handler', 'stream_handler'],
'level': 'DEBUG',
'propagate': False
}
}
}
logging.config.dictConfig(logging_config)
</code></pre>
|
<python><python-logging>
|
2024-02-13 21:48:23
| 2
| 341
|
Meghan M.
|
77,990,969
| 3,837,778
|
Array value must start with "{" or dimension information when excute_values to jsonb column
|
<p>My codes</p>
<pre><code>from psycopg2.extras import Json
from psycopg2.extensions import register_adapter
register_adapter(dict, Json)
data = [{
'end_of_epoch_data': ['GasCoin', [{'Input': 5}, {'Input': 6}, {'Input': 7}]],
}]
def get_upsert_sql(schema: str, table: str, columns: str, primary_keys: list | tuple | set):
return f"""INSERT INTO {schema}.{table}
({', '.join(columns)}) VALUES %s
ON CONFLICT ({','.join(primary_keys)}) DO UPDATE
SET {', '.join([f"{col}=EXCLUDED.{col}" for col in columns if col not in primary_keys])}"""
def upsert(data: list, uri: str, schema: str, table: str, primary_keys: list | tuple | set):
connection = psycopg2.connect(uri)
cursor = connection.cursor()
try:
columns = data[0].keys()
query = get_upsert_sql(schema, table, columns, primary_keys)
values = [[d[col] for col in columns] for d in data]
execute_values(cursor, query, values)
connection.commit()
except Exception as e:
connection.rollback()
raise e
finally:
cursor.close()
connection.close()
</code></pre>
<p>but I got errors like</p>
<pre><code> File "/Users/tests/test_pg_write.py", line 47, in upsert
execute_values(cursor, query, values)
File "/Users/venv/lib/python3.9/site-packages/psycopg2/extras.py", line 1299, in execute_values
cur.execute(b''.join(parts))
psycopg2.errors.InvalidTextRepresentation: malformed array literal: "GasCoin"
LINE 2: (end_of_epoch_data) VALUES (ARRAY['GasCoin',ARRA...
^
DETAIL: Array value must start with "{" or dimension information.
</code></pre>
<p><code>end_of_epoch_data</code> is jsonb column in postgres table</p>
<p>any idea? thanks</p>
<p><strong>UPDATE</strong></p>
<p>it seems that the error is because i tried to write python list to jsonb column in pg table. but it seems that i can write <code>json.dumps(dataend_of_epoch_data)</code> which is str of python list into jsonb column of postgres table... is this right solution?</p>
|
<python><postgresql><psycopg2>
|
2024-02-13 21:30:22
| 1
| 9,056
|
BAE
|
77,990,956
| 5,837,992
|
Finding First and Last Values of a "Run" in Pandas
|
<p>I am trying to figure out the best way to get information on a "run" in pandas</p>
<p>The sample code below returns the following results:</p>
<p><a href="https://i.sstatic.net/8Lteh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Lteh.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
import numpy as np
df2 = pd.DataFrame({'Price':[0.0, 3.6, 9.3, 4.5, 2.9, 3.2, 1.0, 6.7, 8.7, 9.8, 3.4, .7, 2.2, 6.5, 3.4, 1.7, 9.4, 10.0], 'PriceDate':['2023-10-01', '2023-10-02', '2023-10-03', '2023-10-04', '2023-10-05', '2023-10-06', '2023-10-07', '2023-10-08', '2023-10-09', '2023-10-10', '2023-10-11', '2023-10-12', '2023-10-13', '2023-10-14', '2023-10-15', '2023-10-16', '2023-10-17']})
df2['Trend']=np.where(df2['Price']>df2['Price'].shift(),"UPTREND","DOWNTREND")
</code></pre>
<p>Now ignore for a second that the first value shouldn't have a trend value. The trend value simply shows if the current price is greater than the prior price (uptrend) or less than the lower price (downtrend).</p>
<p>What I want to know is</p>
<ol>
<li>What is the first date of any uptrend/downtrend</li>
<li>What is the last date of the uptrend/downtrend</li>
<li>What is the first price of the uptrend/downtrend</li>
<li>What is the last price of the uptrend/downtrend</li>
</ol>
<p>So - the first uptrend starts on 10/2 and ends on 10/3, first price is 3.6, last price is 9.3
Another uptrend started on 10/8 ended on 10/10 with a first price of 6.7 and an end price on 9.8</p>
<p>I'd also like to get the last price of the prior trend, so for example - that 10/8 record looks like this</p>
<p><a href="https://i.sstatic.net/Ro95s.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ro95s.jpg" alt="enter image description here" /></a></p>
<p>Any help will be greatly appreciated</p>
|
<python><pandas><series>
|
2024-02-13 21:26:48
| 2
| 1,980
|
Stumbling Through Data Science
|
77,990,939
| 2,658,228
|
Unable to install packages in virtual environment VS Code
|
<p>I'm trying to install packages in a newly created virtual environment in VS Code. The environment is created and selected (highlighted in red in the below image) but <code>pip</code> is installing packages in the global environment instead:</p>
<p><a href="https://i.sstatic.net/74TQT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/74TQT.png" alt="enter image description here" /></a></p>
<p>Looking at other questions on SO, most were because the global environment/interpreter were selected instead of the (newly created) virtual environment.</p>
<p>Do I need to "activate" the environment separately? VS Code documentation doesn't mention this and I'm assuming it is automatically activated at creation.</p>
|
<python><visual-studio-code><virtualenv>
|
2024-02-13 21:23:46
| 2
| 2,763
|
Gautam
|
77,990,870
| 3,448,136
|
Pylint in VSCode is not reading the configuration in settings.json
|
<p>I installed Pylint v2023.11.13481007 (pre-release) in VSCode 1.86.1 and it works in vanilla install, i.e., no config settings. I run VSCode on a Windows machine with a remote SSH connection to a Mac laptop, so the Pylint extension is enabled on the remote SSH host. The remote host is running Python 3.10.9 and Pylint Python 2.16.2.</p>
<p>The trouble is that I cannot get Pylint to read a custom config settings file. Here's what I have tried:</p>
<ul>
<li><p>I set two Pylint config variables in File > Preferences > Settings. Then I saw that vscode created a file on the remote mac laptop, ~/.vscode-server/data/Machine/settings.json, with the lines below. But Pylint did not use these settings.</p>
<pre><code> {
"pylint.args": [
"--max-line-length=99"
],
"pylint.lintOnChange": true
}
</code></pre>
</li>
<li><p>I restarted Pylint. I restarted VSCode. The settings are still not used.</p>
</li>
<li><p>I copied the settings.json file to $PROJECT/.vscode/settings.json and restarted. No change in behavior.</p>
</li>
<li><p>I reverted Pylint to the v2023.10.1 (release) version. Still no settings change. No change in behavior.</p>
</li>
</ul>
<p>Pylint is reporting lots of errors. Most of them are "line too long", because it is not using the custom setting. But it also only lints when I save a file. So both settings are ignored.</p>
<p>How do I get Pylint to use custom configuration setting in this environment?</p>
|
<python><visual-studio-code><pylint>
|
2024-02-13 21:06:49
| 1
| 2,490
|
Lee Jenkins
|
77,990,860
| 6,943,622
|
Async Programming vs Multi-threading for multiple reads to postgres
|
<p>So I have a python script that is going to read a million or so rows from a csv file. With the data from each row, I am going to make at most two and at least one sql query to a table in postgres. I would like to introduce some parallelism to speed up the process. I'm not yet an expert in that regard so I would like to know when to use multi-threading, multi-processing or async programming in python. Here's the current synchronous code below:</p>
<pre><code>def distribute_rows_to_files(file_path: str) -> None:
exists_file = "exists_data.csv"
c_not_exists_file = "c_not_exists_data.csv"
i_not_exists_file = "i_not_exists_data.csv"
exception_file = "exceptions.csv"
# Open the files before the loop to reduce overhead
exists_file_handle = open(exists_file, mode='a', newline='')
c_not_exists_file_handle = open(c_not_exists_file, mode='a', newline='')
i_not_exists_file_handle = open(i_not_exists_file, mode='a', newline='')
exception_handle = open(exception_file, mode='a', newline='')
with psycopg2.connect(**AUTHENTICATOR_PG_DB_CREDENTIALS) as conn:
with open(file_path, mode='r', newline='') as file:
reader = csv.reader(file)
next(reader) # Skip the header line
count = 0
for row in reader:
count += 1
if count == 100:
break
# Process each row here
i_code, t_id, __, ___, ____ = row
try:
cur = conn.cursor()
query = """
SELECT customer_id
FROM buckets
WHERE i_code = %(i_code)s
LIMIT 1
"""
cur.execute(query, {"i_code": i_code})
result = cur.fetchone()
cur.close()
if result:
try:
cur = conn.cursor()
second_query = """
SELECT EXISTS (
SELECT 1
FROM customers
WHERE customer_id = %(customer_id)s
AND t_id = %(t_id)s
)
"""
cur.execute(second_query, {"customer_id": result[0], "toe_id": toe_id})
exists = cur.fetchone()[0]
cur.close()
file_handle = exists_file_handle if exists else c_not_exists_file_handle
writer = csv.writer(file_handle)
writer.writerow(row)
except Exception as e:
row_with_exception = row + (str(e),)
writer = csv.writer(exception_handle)
writer.writerow(row_with_exception)
else:
writer = csv.writer(i_not_exists_file_handle)
writer.writerow(row)
except Exception as e:
row_with_exception = row + (str(e),)
writer = csv.writer(exception_handle)
writer.writerow(row_with_exception)
exists_file_handle.close()
c_not_exists_file_handle.close()
i_not_exists_file_handle.close()
exception_handle.close()
</code></pre>
<p>If I could get a reason for whatever approach to take and what the code would look like, that would be great! I've done some reading that suggests <code>asyncpg</code> over <code>psycopg2</code> for async work. But that's assuming the async programming route is the approach to take</p>
|
<python><postgresql><parallel-processing><python-asyncio>
|
2024-02-13 21:04:38
| 1
| 339
|
Duck Dodgers
|
77,990,552
| 1,904,995
|
Cannot install/run pip install -U sentence-transformers
|
<p>I've tried to uninstall and reinstall the packages and several other dependencies.</p>
<p>Here's my error:</p>
<pre><code> Getting requirements to build wheel ... error
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> [31 lines of output]
Traceback (most recent call last):
File "C:\Users\17605\PycharmProjects\pythonProject\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\17605\PycharmProjects\pythonProject\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\17605\PycharmProjects\pythonProject\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\17605\AppData\Local\Temp\pip-build-env-6gejxtmm\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\17605\AppData\Local\Temp\pip-build-env-6gejxtmm\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "C:\Users\17605\AppData\Local\Temp\pip-build-env-6gejxtmm\overlay\Lib\site-packages\setuptools\build_meta.py", line 480, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\17605\AppData\Local\Temp\pip-build-env-6gejxtmm\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 126, in <module>
File "C:\Users\17605\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 408, in check_call
retcode = call(*popenargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\17605\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 389, in call
with Popen(*popenargs, **kwargs) as p:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\17605\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\17605\AppData\Local\Programs\Python\Python312\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
Γ Getting requirements to build wheel did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I'm kind of new to python so could really use some help here. I'm also using PyCharm as an IDE but i don't think that mattes at all and i'm using the bash terminal inside the IDE.</p>
<p>How can I get this to work?</p>
|
<python><pip><sentence-transformers>
|
2024-02-13 19:50:25
| 1
| 568
|
Head
|
77,990,420
| 1,912,104
|
How to create average value by group and assign loop ID to generate a polygon in a data frame?
|
<p>My data looks like this:</p>
<pre class="lang-py prettyprint-override"><code>mydict = {'year' : [2010, 2010, 2011, 2011, 2010, 2010, 2011, 2011],
'region' : [1,1,1,1,2,2,2,2],
'group' : ['lower', 'upper', 'lower', 'upper', 'lower', 'upper', 'lower', 'upper'],
'var' : [10,20,30,40,50,60,70,80]}
pd.DataFrame(mydict)
</code></pre>
<p>It should be straightforward, the only note is for each year in each region, the value of <code>var</code> in <code>lower</code> group is always smaller than in <code>upper</code>.<br />
I need to do the following:</p>
<ul>
<li>Generate a new column <code>average</code>, which, for each year, is equal to the average of <code>var</code></li>
<li>Generate a new column <code>loop</code>, which, for each region and year, assign a pair of number to lower and upper group from 2 ends of the length of the region. That sounds awkward for a description, but the resulting data looks like this:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>newdict = {'year' : [2010, 2010, 2011, 2011, 2010, 2010, 2011, 2011],
'region' : [1,1,1,1,2,2,2,2],
'group' : ['lower', 'upper', 'lower', 'upper', 'lower', 'upper', 'lower', 'upper'],
'var' : [10,20,30,40,50,60,70,80],
'average' : [15,15,35,35,55,55,75,75],
'loop' : [0,3,1,2,0,3,1,2]}
pd.DataFrame(newdict)
</code></pre>
<p>So for the <code>loop</code> column, you see as group 1 has 4 elements, then the numbers are assigned to the rows in a way that tries to make a loop, which will be used to define a polygon. In this case, 0 is assigned to 2010-lower, 1 to 2011 lower, 2 to 2011 upper, and 3 to 2010 upper.</p>
|
<python><pandas>
|
2024-02-13 19:24:39
| 1
| 851
|
NonSleeper
|
77,990,385
| 19,816
|
Complex C++ lifetime issue in python bindings between C++ and numpy
|
<p>I'm looking for advice on how to handle a complex lifetime issue between C++ and numpy / Python. Sorry for the wall of text, but I wanted to provide as much context as possible.</p>
<p>I developed <a href="https://github.com/pthom/cvnp" rel="nofollow noreferrer">cvnp</a>, a library that offers casts between bindings between <code>cv::Mat</code> and <code>py::array</code> objects, so that the memory is shared between the two, when using pybind11.
It is originally based on <a href="https://stackoverflow.com/questions/60949451/how-to-send-a-cvmat-to-python-over-shared-memory">a SO answer</a> by <a href="https://stackoverflow.com/users/3962537/dan-ma%c5%a1ek">
Dan MaΕ‘ek
</a>. All is going well and the library is used in several projects, including <a href="https://robotpy.github.io/" rel="nofollow noreferrer">robotpy</a>, which is a Python library for the FIRST Robotics Competition.</p>
<hr />
<p>However, <a href="https://github.com/pthom/cvnp/issues/13" rel="nofollow noreferrer">an issue</a> was raised by a user, that deals with the lifetime of linked <code>cv::Mat</code> and <code>py::array</code> objects.</p>
<ul>
<li>In the direction <code>cv::Mat</code> -> <code>py::array</code>, all is well, as <a href="https://github.com/pthom/cvnp/blob/1064c2a465770bc1e3d14ce0fba4c64f57f4ab1e/cvnp/cvnp.cpp#L81-L99" rel="nofollow noreferrer">mat_to_nparray</a> will create a <code>py::array</code> that keeps a reference to the linked cv::Mat via a "capsule" (a python handle).</li>
<li>However, in the direction <code>py::array</code> -> <code>cv::Mat</code>, <a href="https://github.com/pthom/cvnp/blob/1064c2a465770bc1e3d14ce0fba4c64f57f4ab1e/cvnp/cvnp.cpp#L116C1-L132C1" rel="nofollow noreferrer">nparray_to_mat</a> the cv::Mat will access the data of the py::array, without any reference to the array (so that the lifetime of the py::array is not guaranteed to be the same as the cv::Mat)</li>
</ul>
<p>See mat_to_nparray:</p>
<pre class="lang-cpp prettyprint-override"><code>py::capsule make_capsule_mat(const cv::Mat& m)
{
return py::capsule(new cv::Mat(m)
, [](void *v) { delete reinterpret_cast<cv::Mat*>(v); }
);
}
pybind11::array mat_to_nparray(const cv::Mat& m)
{
return pybind11::array(detail::determine_np_dtype(m.depth())
, detail::determine_shape(m)
, detail::determine_strides(m)
, m.data
, detail::make_capsule_mat(m)
);
}
</code></pre>
<p>and nparray_to_mat:</p>
<pre class="lang-cpp prettyprint-override"><code>cv::Mat nparray_to_mat(pybind11::array& a)
{
...
cv::Mat m(size, type, is_not_empty ? a.mutable_data(0) : nullptr);
return m;
}
</code></pre>
<hr />
<p>This worked well so far, until a user wrote this:</p>
<ul>
<li>a bound c++ function that returns the same cv::Mat that was passed as an argument</li>
</ul>
<pre class="lang-cpp prettyprint-override"><code>m.def("test", [](cv::Mat mat) { return mat; });
</code></pre>
<ul>
<li>some python code that uses this function</li>
</ul>
<pre class="lang-py prettyprint-override"><code>img = np.zeros(shape=(480, 640, 3), dtype=np.uint8)
img = test(img)
</code></pre>
<p>In that case, a segmentation fault may occur, because the <code>py::array</code> object is destroyed before the <code>cv::Mat</code> object, and the <code>cv::Mat</code> object tries to access the data of the <code>py::array</code> object. However, the segmentation fault is not systematic, and depends on the OS + python version.</p>
<p>I was able to reproduce it in CI via <a href="https://github.com/pthom/cvnp/commit/88d903329ded9506d2320fd723126dab6518507b" rel="nofollow noreferrer">this commit</a> using ASAN.
The reproducing code is fairly simple:</p>
<pre class="lang-cpp prettyprint-override"><code>void test_lifetime()
{
// We need to create a big array to trigger a segfault
auto create_example_array = []() -> pybind11::array
{
constexpr int rows = 1000, cols = 1000;
std::vector<pybind11::ssize_t> a_shape{rows, cols};
std::vector<pybind11::ssize_t> a_strides{};
pybind11::dtype a_dtype = pybind11::dtype(pybind11::format_descriptor<int32_t>::format());
pybind11::array a(a_dtype, a_shape, a_strides);
// Set initial values
for(int i=0; i<rows; ++i)
for(int j=0; j<cols; ++j)
*((int32_t *)a.mutable_data(j, i)) = j * rows + i;
printf("Created array data address =%p\n%s\n",
a.data(),
py::str(a).cast<std::string>().c_str());
return a;
};
// Let's reimplement the bound version of the test function via pybind11:
auto test_bound = [](pybind11::array& a) {
cv::Mat m = cvnp::nparray_to_mat(a);
return cvnp::mat_to_nparray(m);
};
// Now let's reimplement the failing python code in C++
// img = np.zeros(shape=(480, 640, 3), dtype=np.uint8)
// img = test(img)
auto img = create_example_array();
img = test_bound(img);
// Let's try to change the content of the img array
*((int32_t *)img.mutable_data(0, 0)) = 14; // This triggers an error that ASAN catches
printf("img data address =%p\n%s\n",
img.data(),
py::str(img).cast<std::string>().c_str());
}
</code></pre>
<hr />
<p>I'm looking for advices on how to handle this issue. I see several options:</p>
<p>An ideal solution would be to</p>
<ul>
<li>call <code>pybind11::array.inc_ref()</code> when constructing the cv::Mat inside <code>nparray_to_mat</code></li>
<li>make sure that <code>pybind11::array.dec_ref()</code> is called when this particular instance will be destroyed.
However, I do not see how to do it.</li>
</ul>
<p>Note: I know that cv::Mat can use a custom allocator, but it is useless here, as the cv::Mat will not allocate the memory itself, but will use the memory of the py::array object.</p>
<p>Thanks for reading this far, and thanks in advance for any advice!</p>
|
<python><c++><numpy><opencv><pybind11>
|
2024-02-13 19:15:36
| 2
| 4,111
|
Pascal T.
|
77,990,365
| 5,031,446
|
VSCode Test Debugger not stopping at breakpoints when using coverage
|
<p>I've started to use <code>pytest-cov</code> for coverage reporting. Using VSCode.</p>
<p>This is how I set up my <code>pytest.ini</code> file so that every time I run tests from the VSCode test explorer, the coverage report gets updated:</p>
<pre><code>[pytest]
addopts = "--cov=src/ --cov-report=lcov:lcov.info --cov-report=term"
env =
TESTING=true
ENV=local
</code></pre>
<p>But I also want to be able to debug my tests and stop on breakpoints. As the VSCode docs <a href="https://code.visualstudio.com/docs/python/testing#_pytest-configuration-settings" rel="noreferrer">say</a></p>
<blockquote>
<p>Note If you have the pytest-cov coverage module installed, VS Code doesn't stop at breakpoints while debugging because pytest-cov is using the same technique to access the source code being run. To prevent this behavior, include --no-cov in pytestArgs when debugging tests, for example by adding "env": {"PYTEST_ADDOPTS": "--no-cov"} to your debug configuration. (See Debug Tests above about how to set up that launch configuration.) (For more information, see Debuggers and PyCharm in the pytest-cov documentation.)</p>
</blockquote>
<p>So my configuration for debugging tests in <code>launch.json</code> is like this:</p>
<pre><code> {
"name": "Python: Debug Tests",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"purpose": [
"debug-test"
],
"console": "integratedTerminal",
"justMyCode": true,
// "justMyCode": false,
"env": {
"ON_HEROKU": "0",
"PYTEST_ADDOPTS": "--no-cov",
"ECHO_SQL_QUERIES": "1"
},
},
</code></pre>
<p>Still, when debugging tests it doesn't stop on breakpoints even tho I set the PYTEST_ADDOPTS env var there. The only way to make it work is by commenting the <code>addopts</code> line in <code>pytest.ini</code></p>
<pre><code>[pytest]
# addopts = "--cov=src/ --cov-report=lcov:lcov.info --cov-report=term"
env =
TESTING=true
ENV=local
</code></pre>
<p>How can I make it behave the way I want without having to comment and uncomment that line in <code>pytest.ini</code>?</p>
|
<python><visual-studio-code><pytest><pytest-cov>
|
2024-02-13 19:12:16
| 3
| 393
|
Xoel
|
77,990,353
| 1,485,926
|
PyLong_Check() wrongly detecting PyBool type?
|
<p>I'm using <a href="https://docs.python.org/3/c-api/index.html" rel="nofollow noreferrer">Python C API</a> in my C++ program. I have a function like this one (simplified) that returns the a string with the type of a given PyObject passed as argument:</p>
<pre><code>#include <Python.h>
static std::string getPyObjectType(PyObject* obj)
{
if (PyLong_Check(obj))
{
return "long";
}
else if (PyFloat_Check(obj))
{
return "float";
}
else if (PyBool_Check(obj))
{
return "bool";
}
else
{
return "other";
}
}
</code></pre>
<p>The problem is that is not properly detecting boolean objects. When <code>obj</code> is a boolean, it returns <code>"long"</code> instead of <code>"bool"</code>. It is like <code>PyLong_Check()</code> were wrongly return true in this case...</p>
<p>If I use a breakpoint in the function to check the type of the <code>obj</code> in this case seems to be right (it shows <code>PyBool_Type</code>):</p>
<p><a href="https://i.sstatic.net/ntdo0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ntdo0.png" alt="enter image description here" /></a></p>
<p>What's the fail in my code?</p>
<p>Thanks in advance!</p>
|
<python><python-c-api>
|
2024-02-13 19:10:37
| 1
| 12,442
|
fgalan
|
77,990,093
| 7,700,802
|
Creating a pandas dataframe from a dictionary with unique structure
|
<p>I have this dictionary:</p>
<pre><code>{'CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb.pdf_Rebate-Count': 'Two rebate types',
'CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb.pdf_Rebate-Spec-CashCredit': 'Credit Note',
'CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb.pdf_Rebate-Cadence-First-StartDate': 'July 1, 2021',
'CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb.pdf_Rebate-Cadence-LastDate': 'July 15, 2023',
'CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb.pdf_Rebate-Cadence-CadenceCollection': 'Quarterly'}
</code></pre>
<p>As context for what I need to do is extract the first key and perform a split into two i.e.</p>
<pre><code>CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb
</code></pre>
<p>and</p>
<pre><code>_Rebate-Cadence-CadenceCollection
</code></pre>
<p>I then want to create a dataframe from these values that will look like this</p>
<pre><code>a = {'contract_id': ['CC OTH 00009438 2023 TR.2a1e3e6f-58c4-4166-93ea-96073626dccb'],
'Rebate-Count': ['Two rebate types'],
'Rebate-Spec-CashCredit': ['Credit Note'],
'Rebate-Cadence-First-StartDate': ['July 1, 2021'],
'Rebate-Cadence-LastDate': ['July 15, 2023'],
'Rebate-Cadence-CadenceCollection': ['Quarterly']}
df_test = pd.DataFrame(a)
</code></pre>
<p>Any help is greatly appreciated. I should of specified that the dictionary does contain multiple <code>contract_id</code>.</p>
<p>Here is another dictionary example:</p>
<pre><code>{'Rebate Agreement Final (Signed Document).pdf_Rebate-Exists': ['Yes'],
'Rebate Agreement Final (Signed Document).pdf_Rebate-Count': [],
'Rebate Agreement Final (Signed Document).pdf_Rebate-Spec-CashCredit': ['Cash Refund/Payment'],
'Rebate Agreement Final (Signed Document).pdf_Rebate-Cadence-First-StartDate': ['July 16, 2022'],
'Rebate Agreement Final (Signed Document).pdf_Rebate-Cadence-LastDate': ['July 15, 2023'],
'Rebate Agreement Final (Signed Document).pdf_Rebate-Cadence-CadenceCollection': ['Annual']}
</code></pre>
|
<python><pandas>
|
2024-02-13 18:18:26
| 5
| 480
|
Wolfy
|
77,989,853
| 3,828,463
|
ctypes.ArgumentError Don't know how to convert parameter 1
|
<p>I am getting this error (applying to x in the call to f in the definition of fun(x)):</p>
<pre><code>ctypes.ArgumentError: argument 1: <class 'TypeError'>: Don't know how to convert parameter 1
</code></pre>
<p>when running this code:</p>
<pre><code>import ctypes as ct
lib = ct.CDLL('x64\\Debug\\main2.dll')
f = getattr(lib,'MAIN2_MOD_mp_MAIN2')
f.restype = None
x = [2.0, 0.0]
def fun(x):
objf = 0.0
f(x,objf)
return objf
</code></pre>
<p>Any help greatly appreciated.</p>
<p>Update:
main2 is a Fortran subroutine:</p>
<pre><code>module main2_mod
contains
!dec$ attributes dllexport :: main2
subroutine main2(x,f)
implicit none
real(8), intent(inout) :: x(*)
real(8), intent( out) :: f
</code></pre>
|
<python><fortran>
|
2024-02-13 17:34:48
| 1
| 335
|
Adrian
|
77,989,852
| 824,954
|
Run custom function to generate choices in cookiecutter.json
|
<p>I need to generate choices for a cookiecutter at runtime that reflect the programs available on a system. The code to do so is β30 lines, and thus too complicated to be placed in <code>cookiecutter.json</code>. Where should it be placed and how should it be invoked?</p>
<p>e.g. <code>cookiecutter.json</code></p>
<pre><code>{
"project_name": "",
"program_version": ["generate_program_choices()"]
}
</code></pre>
<p>Note: this is for a data science cookiecutter, that will need to use the same version of the program throughout its life (a few months), the program is installed externally from the project, and the project will not be used in multiple locations, thus hardcoding this variable is acceptable.</p>
|
<python><jinja2><cookiecutter>
|
2024-02-13 17:34:37
| 1
| 2,484
|
Jonathon Vandezande
|
77,989,783
| 2,587,422
|
Custom sqlalchemy visitor not registered
|
<p>I have a Postgres DB maintained via SQLAlchemy/Alembic, to which I want to add a new column of type array. I'm trying to follow this page of the SQLAlchemy documentation: <a href="https://docs.sqlalchemy.org/en/20/core/compiler.html" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/core/compiler.html</a></p>
<p>The table is currently defined as:</p>
<pre class="lang-py prettyprint-override"><code>@compiles(CreateColumn, "postgres")
def use_identity(element, compiler, **kw):
text = compiler.visit_create_column(element, **kw)
return text.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY")
my_table = Table(
"my_table",
my_metadata,
Column("my_id", Biginteger, primary_key=True),
# ...
)
</code></pre>
<p>The column I'm trying to add is defined as:</p>
<pre class="lang-py prettyprint-override"><code>Column(
"authors",
ARRAY(String(128)),
nullable=False,
server_default=text("ARRAY[]::varchar[]"),
),
</code></pre>
<p>When I run <code>alembic revision --autogenerate</code>, it correctly creates the revsion file. However, when I then run <code>alembic upgrade head</code>, I see this error message:</p>
<pre><code>psycopg2.errors.SyntaxError: column "my_id" of relation "my_table" is an identity column
HINT: Use ALTER TABLE ... ALTER COLUMN ... DROP IDENTITY instead.
[SQL: ALTER TABLE my_table ALTER COLUMN my_id DROP DEFAULT]
</code></pre>
<p>Which sounds pretty clear to me. So in the same file I tried to add another compilation extension:</p>
<pre class="lang-py prettyprint-override"><code>@compiles(AlterColumn, "postgres")
def visit_alter_column(element, compiler, **kw):
text = compiler.visit_alter_column(element, **kw)
return text.replace("DROP DEFAULT", "DROP IDENTITY IF EXISTS")
</code></pre>
<p>However when rerunning Alembic I get the exact same error as before. It's as if the new compilation extension doesn't get registered. I also tried (slightly different, just to see if it got picked up):</p>
<pre class="lang-py prettyprint-override"><code>@compiles(AlterColumn, "postgres")
def visit_alter_column(element, compiler, **kw):
return "ALTER TABLE {element.table.name} ALTER COLUMN {element.column.name} DROP IDENTITY IF EXISTS"
</code></pre>
<p>But still no cigar. What am I doing wrong?</p>
|
<python><postgresql><sqlalchemy><alembic>
|
2024-02-13 17:22:23
| 1
| 315
|
Luigi D.
|
77,989,646
| 9,403,186
|
Connecting to an On-prem Database from within Snowflake
|
<p>Can a connection to an on-prem database (e.g. Sybase) be made directly from Snowflake, or does it need to be made external to Snowflake? E.g., I know I can make a connection to my DB in some AWS compute (EMR, Lambda, EC2) and then load that data into Snowflake by opening a connection to Snwoflake, but I'm not sure I can open the connection from within a Python script that runs in Snowflake itself (a Python stored procedure).</p>
<p>I know that 3rd party tools like Talend, Informatica, and Fivetran are often used for this, but I want to know if I can just use Python in Snowflake.</p>
<p>Thanks!</p>
|
<python><snowflake-cloud-data-platform><cloud><etl>
|
2024-02-13 16:58:35
| 1
| 633
|
Python Developer
|
77,989,616
| 3,502,079
|
pyqtgraph: how to make simple plot in vs code?
|
<p>I am new to pyqtgraph but I cannot get it to work. I run the following code in VS code as a jupyter cell:</p>
<pre><code># %%
import pyqtgraph as pg
import numpy as np
x = np.linspace(0,10)
y = np.linspace(0,10)
X, Y = np.meshgrid(x, y)
img = np.sin(X)*np.sin(Y)
pg.image(img)
</code></pre>
<p>Then I get a frozen cell that is not responding. How do I get a simple plot in pyqtgraph?</p>
<p><a href="https://i.sstatic.net/DgXcV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DgXcV.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><plot><pyqtgraph>
|
2024-02-13 16:53:44
| 1
| 392
|
AccidentalTaylorExpansion
|
77,989,391
| 541,729
|
How do I extract data from a document using the OpenAI API?
|
<p>I want to extract key terms from rental agreements.</p>
<p>To do this, I want to send the PDF of the contract to an AI service that must return some key terms in JSON format.</p>
<p>What are some of the different libraries and companies that can do this? So far, I've explored the OpenAI API, but it isn't as straightforward as I would have imagined.</p>
<p>When using the ChatGPT interface, it works very well, so I thought using the API should be equally simple.</p>
<p>It seems like I need to read the PDF text first and then send the text to OpenAI API.</p>
<p>Any other ideas to achieve this will be appreciated.</p>
|
<python><artificial-intelligence><openai-api><openai-assistants-api>
|
2024-02-13 16:13:32
| 1
| 7,341
|
Kritz
|
77,989,376
| 10,224,533
|
Using user defined functions in pandas eval
|
<p>I'm trying to use my custom function within pandas eval. It works properly for limited use:</p>
<pre class="lang-python prettyprint-override"><code>basic_df = DataFrame({"A":[1,2,3,4,5],"B":[20,40,60,100,90],
"C":["C1","C2","C3","C4","C5"],
})
def str_parse(element) -> str:
return str(element)
print(basic_df.eval("@str_parse(A+B+100)"))
</code></pre>
<p>But whenever I want to add some static string (to add string to string), it returns following result:</p>
<pre class="lang-python prettyprint-override"><code>basic_df.eval("@str_parse(A+B+100) + \"additional string\"",)
0 121
1 142
2 163
3 204
4 195
dtype: int64additional string.
</code></pre>
<p>How can i add string to string within creating additional column?</p>
|
<python><pandas><parsing><eval>
|
2024-02-13 16:11:07
| 1
| 303
|
Konstanty Maciej Lachowicz
|
77,989,371
| 893,254
|
Should Python properties use underscores as a name prefix?
|
<p>Take a look at <a href="https://realpython.com/python-getter-setter/" rel="nofollow noreferrer">this article</a>, which explains Python Properties.</p>
<p>(You can access it via archive.ph to get around the paywall.)</p>
<p>In the article the following code is presented:</p>
<pre><code># employee.py
from datetime import date
class Employee:
def __init__(self, name, birth_date, start_date):
self.name = name
self.birth_date = birth_date
self.start_date = start_date
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value.upper()
# ... etc ...
</code></pre>
<p>Note that in the function <code>__init__</code>, <code>self.name</code> is initialized <strong>without</strong> an underscore. The same is true for the other attributes, <code>birth_date</code> and <code>start_date</code>.</p>
<p>However, in the property getter and setters, the variable accessed is names with a preceeding underscore.</p>
<p><code>name</code> -> <code>_name</code></p>
<p>Is this a mistake, or is it intentional? Should the <code>__init__</code> function have initialized <code>_name</code> instead of <code>name</code>?</p>
|
<python><properties>
|
2024-02-13 16:10:10
| 2
| 18,579
|
user2138149
|
77,989,271
| 306,381
|
Unable to run Connexion 3 /w Flask - Middleware loads swagger twice
|
<p>I have created a openapi.yml file and initiated a connexion app and started the app without any errors.</p>
<p>My directory structure is as follows:</p>
<pre><code>/my_app
/app
__init__.py
/api
__init__.py
api.py # My API endpoint implementations
/models
__init__.py
models.py # My SQLAlchemy models
swagger.yml # OpenAPI specification
.env
config.py # Configuration settings
main.py # Entry point for the application
</code></pre>
<p>I am using the following dependencies:</p>
<pre><code>connexion[swagger-ui, uvicorn, flask]==3.0.5
annotated-types==0.6.0
anyio==3.7.1
autopep8==2.0.4
blinker==1.7.0
click==8.1.7
connexion[swagger-ui, uvicorn, flask]==3.0.5
Flask==3.0.1
flask-sqlacodegen==2.0.0
Flask-SQLAlchemy==3.1.1
gunicorn==21.2.0
h11==0.14.0
idna==3.6
inflect==7.0.0
itsdangerous==2.1.2
Jinja2==3.1.3
MarkupSafe==2.1.4
mysqlclient==2.2.1
packaging==23.2
pycodestyle==2.11.1
pydantic_core==2.16.1
pydantic==2.6.0
PyMySQL==1.1.0
python-dotenv==1.0.1
sniffio==1.3.0
SQLAlchemy==2.0.25
SQLAlchemy-serializer==1.4.1
starlette==0.32.0.post1
typing_extensions==4.9.0
uvicorn==0.27.1
Werkzeug==3.0.1
</code></pre>
<p>Inside the app/_<em>init_</em>.py I have defined the following function:</p>
<pre><code>def create_app(config_class=Config, **kwargs):
load_dotenv()
# Create the Connexion application
connex_app = connexion.FlaskApp(__name__, specification_dir='./')
connex_app.add_api("swagger.yml") #, base_path='/api/v1', arguments={'title': 'API v1'}, strict_validation=False, validate_responses=False, pythonic_params=True, name='api_v1_admin')
application = connex_app.app
# Load configuration from .env file
application.config.from_object(config_class)
application.config.update(kwargs)
# initialize db connection
from app.models import init_app
init_app(application)
return connex_app
</code></pre>
<p>From the main.py I am calling the following:</p>
<pre><code>if __name__ == "__main__":
app = create_app()
app.run(host="0.0.0.0", port=8080)
</code></pre>
<p>The app starts without any error and the logs suggest that the app loaded the openapi file correctly.</p>
<pre><code>INFO: Started server process [17447]
INFO: Waiting for application startup.
DEBUG:connexion.middleware.abstract:Adding /api/v1/agent...
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.middleware.abstract:Adding /api/v1/agent/{agentId}...
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json', 'application/xml']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json', 'application/xml']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.middleware.validation:Strict Request Validation: None
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json', 'application/xml']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.middleware.security:... Security: [{'admin_auth': ['write:agent', 'read:agent']}]
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)
</code></pre>
<p>I have verified that my openapi yml file is syntactically correct.</p>
<p>The issue occurs when I try to call an api like so: http://localhost:8080/api/v1/agent/1</p>
<p>Connexion middleware tries to register the openapi endpoints again. Its evident from the logs below:</p>
<pre><code>DEBUG:connexion.middleware.abstract:Adding /api/v1/agent...
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.operations.openapi3:consumes: ['application/json']
DEBUG:connexion.operations.openapi3:produces: ['application/json']
DEBUG:connexion.middleware.abstract:Adding /api/v1/agent/{agentId}...
DEBUG:connexion.operations.openapi3:consumes: []
DEBUG:connexion.operations.openapi3:produces: ['application/json']
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "xxx/venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/venv/lib/python3.11/site-packages/connexion/middleware/main.py", line 497, in __call__
self.app, self.middleware_stack = self._build_middleware_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/venv/lib/python3.11/site-packages/connexion/middleware/main.py", line 338, in _build_middleware_stack
app.add_api(
File "xxx/venv/lib/python3.11/site-packages/connexion/apps/flask.py", line 141, in add_api
self.app.register_blueprint(api.blueprint)
File "xxx/venv/lib/python3.11/site-packages/flask/sansio/scaffold.py", line 45, in wrapper_func
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "xxx/venv/lib/python3.11/site-packages/flask/sansio/app.py", line 599, in register_blueprint
blueprint.register(self, options)
File "xxx/venv/lib/python3.11/site-packages/flask/sansio/blueprints.py", line 310, in register
raise ValueError(
ValueError: The name '/api/v1' is already registered for a different blueprint. Use 'name=' to provide a unique name.
INFO: 127.0.0.1:51609 - "GET /api/v1/agent/1 HTTP/1.1" 500 Internal Server Error
</code></pre>
<p>Can anyone suggest why this is happening? Why connexion tries adding openapi's endpoints again?</p>
<p>Note: If I manually set routes, the project is working fine.</p>
|
<python><flask><swagger><openapi><connexion>
|
2024-02-13 15:54:02
| 0
| 473
|
Qedrix
|
77,989,265
| 4,699,441
|
Pydantic/JSON: do not serialize fields with default value
|
<p>I have a class with a member that has a default value.</p>
<p>I do not wish the default value to be part of the serialization.</p>
<p>Example:</p>
<pre><code>import json
from typing import List
from pydantic import BaseModel
from pydantic.json import pydantic_encoder
class Animal(BaseModel):
name: str
legs: int
tails: int = 1
class AnimalList(BaseModel):
animals: List[Animal]
animals = AnimalList(animals=[
Animal(name='dog', legs=4),
Animal(name='human', legs=2, tails=0)
])
j = json.dumps(animals, default=pydantic_encoder)
print(j)
animals = AnimalList(**json.loads(j))
for animal in animals.animals:
print(f"The {animal.name} has {animal.legs} legs and {animal.tails} tails.")
</code></pre>
<p>This produces the following output:</p>
<pre><code>{"animals": [{"name": "dog", "legs": 4, "tails": 1}, {"name": "human", "legs": 2, "tails": 0}]}
The dog has 4 legs and 1 tails.
The human has 2 legs and 0 tails.
</code></pre>
<p>I wish to see the following output instead (the default <code>"tails": 1</code> for the dog being removed):</p>
<pre><code>{"animals": [{"name": "dog", "legs": 4}, {"name": "human", "legs": 2, "tails": 0}]}
The dog has 4 legs and 1 tails.
The human has 2 legs and 0 tails.
</code></pre>
|
<python><json><pydantic><pydantic-v2>
|
2024-02-13 15:53:13
| 1
| 1,078
|
user66554
|
77,989,093
| 5,722,359
|
"isinstance() and not" vs "==" for checking x=()
|
<p>I want an if-statement to check if <code>x</code> is an empty tuple.</p>
<p>Is there any significant advantage of writing the if-statement as</p>
<pre class="lang-py prettyprint-override"><code>if isinstance(x, tuple) and not x:
</code></pre>
<p>vs</p>
<pre class="lang-py prettyprint-override"><code>if x == ():
</code></pre>
<p>given the latter is shorter to write and simpler to read?</p>
|
<python>
|
2024-02-13 15:25:46
| 2
| 8,499
|
Sun Bear
|
77,988,462
| 2,558,463
|
Azzure AD JWT validation: signature verification failed
|
<p>i stumbled in a problem, i have this JWT verification code:</p>
<pre><code>def validate_access_token(self, access_token):
jwks_url = f"{self.authority}/discovery/v2.0/keys"
# Fetch keys from the JWKS endpoint
response = requests.get(jwks_url)
jwks_data = response.json()
# Extract and store Azure AD public keys
azure_ad_keys = {}
for key in jwks_data["keys"]:
key_id = key["kid"]
# Extract the x509 certificate
x5c_cert = key["x5c"][0]
# Convert x509 certificate to PEM format
pem_cert = f"-----BEGIN CERTIFICATE-----\n{x5c_cert}\n-----END CERTIFICATE-----"
azure_ad_keys[key_id] = pem_cert
# Decode the access token
print(azure_ad_keys)
try:
token_header = jwt.get_unverified_header(access_token)
logging.debug(f"Token header: {token_header}")
token_kid = token_header["kid"]
token_alg = token_header["alg"]
public_key = azure_ad_keys[token_kid]
logging.debug(f"Token kid: {token_kid}, Token alg: {token_alg}, public key: {public_key}")
payload = jwt.decode(
token=access_token,
key=public_key,
algorithms=[token_alg])
logging.debug(f"Token payload: {payload}")
except JWTError as e:
raise HTTPException(status_code=401, detail=f"Invalid access token: {e}")
# Validate token claims
if payload["iss"] != f"{self.authority}/v2.0":
raise HTTPException(status_code=401, detail="Invalid token issuer")
if payload["aud"] != self.client_id:
raise HTTPException(status_code=401, detail="Invalid token audience")
if payload["exp"] < time.time():
raise HTTPException(status_code=401, detail="Access token expired")
return payload
</code></pre>
<p>it is returning "Invalid access token: Signature verification failed"</p>
<p>this is my JWT:</p>
<pre><code>eyJ0eXAiOiJKV1QiLCJub25jZSI6IjVETWxPSjV4T1BrZUZ4bnE3QVJVZDl2Uy1taGYwc09Qb290NGRaTlViVkkiLCJhbGciOiJSUzI1NiIsIng1dCI6ImtXYmthYTZxczh3c1RuQndpaU5ZT2hIYm5BdyIsImtpZCI6ImtXYmthYTZxczh3c1RuQndpaU5ZT2hIYm5BdyJ9.eyJhdWQiOiIwMDAwMDAwMy0wMDAwLTAwMDAtYzAwMC0wMDAwMDAwMDAwMDAiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC81NzA2ZTg5Ny02NWYyLTQ4YjMtODY2ZS0wZGY3YTE4M2Y0NjEvIiwiaWF0IjoxNzA3ODMxODcwLCJuYmYiOjE3MDc4MzE4NzAsImV4cCI6MTcwNzgzNjYyMCwiYWNjdCI6MCwiYWNyIjoiMSIsImFpbyI6IkUyVmdZSEQ0K1o1VHE0L2ZXZEJCNU5GSlE0YnlYRlBGTzJHRjZUK3JsVVNtNTk3S0NNNmZWeSt0SU5GVFhYcENVRnowWC9jdkFBPT0iLCJhbXIiOlsicHdkIl0sImFwcF9kaXNwbGF5bmFtZSI6IkF1dGggU2VydmljZSIsImFwcGlkIjoiODVmMmEzMTktMGUzOC00NTU0LWIwM2UtZjVkMzcyY2EzZjI5IiwiYXBwaWRhY3IiOiIxIiwiZmFtaWx5X25hbWUiOiJBZG1pbmlzdHJhdG9yIiwiZ2l2ZW5fbmFtZSI6Ik1PRCIsImlkdHlwIjoidXNlciIsImlwYWRkciI6IjExNi4wLjEuODEiLCJuYW1lIjoiTU9EIEFkbWluaXN0cmF0b3IiLCJvaWQiOiI4MGMxMzJkMi1iOTQzLTRlNGMtYjA4OS02MzY3ZmNlNjVkNDEiLCJwbGF0ZiI6IjUiLCJwdWlkIjoiMTAwMzIwMDM0ODBDMTQ4RSIsInJoIjoiMC5BU3NBbC1nR1ZfSmxzMGlHYmczM29ZUDBZUU1BQUFBQUFBQUF3QUFBQUFBQUFBRENBRjAuIiwic2NwIjoiRGlyZWN0b3J5LlJlYWQuQWxsIGVtYWlsIG9wZW5pZCBwcm9maWxlIFVzZXIuUmVhZCBVc2VyLlJlYWQuQWxsIFVzZXIuUmVhZEJhc2ljLkFsbCIsInNpZ25pbl9zdGF0ZSI6WyJrbXNpIl0sInN1YiI6InV2TG90Q2dERFF0eGtpdGE4UHU0V3hBd0FpYWFEOG5rOV91eXVQX2E3NG8iLCJ0ZW5hbnRfcmVnaW9uX3Njb3BlIjoiQVMiLCJ0aWQiOiI1NzA2ZTg5Ny02NWYyLTQ4YjMtODY2ZS0wZGY3YTE4M2Y0NjEiLCJ1bmlxdWVfbmFtZSI6ImFkbWluQE0zNjV4NzI2NTQ3NzYub25taWNyb3NvZnQuY29tIiwidXBuIjoiYWRtaW5ATTM2NXg3MjY1NDc3Ni5vbm1pY3Jvc29mdC5jb20iLCJ1dGkiOiIzcWtySm1XRWowYTRBQ0ZzS2l0b0FBIiwidmVyIjoiMS4wIiwid2lkcyI6WyI2MmU5MDM5NC02OWY1LTQyMzctOTE5MC0wMTIxNzcxNDVlMTAiLCJiNzlmYmY0ZC0zZWY5LTQ2ODktODE0My03NmIxOTRlODU1MDkiXSwieG1zX3N0Ijp7InN1YiI6Ik1zdTFwRFVQc3g0SGlaODItVDBTZlB3a29NSmNDN3hkNWVJRGRpU3FWY1UifSwieG1zX3RjZHQiOjE3MDYwODE5NTF9.sQKGfmYPFCarHwx1hMKbgJsTmlSMt8-Sg491w0DWWXE1ZjXBLdOSbuEJjTEpNj0VG-YSuN1SR-fs44D969bGaKD4UvZBAAm8gJIm5inkGp6-x_3WL3VMmUz3nMthgFKocNcxp7HwOqn7Gsj_oiTzNMveD5YgT3SXvrgXQQFzG1eNNT8WsN1NCa1nanFO789yaqXdjaDQbo5HKqyg9xwrKp-n6fpsFQjWXesr3GFZLHNt1tTMAnjOY_hzQ9xSVGdd7z7FAacM_NlLQZVhvD0iDp2D4nFXQWHAy9qDuk8u-6U9jcweQYubw68W2oP3gYvMqacZKTr8z70iE9sF0uujrQ
</code></pre>
<p>this is my debug log:</p>
<pre><code>DEBUG:root:Token header: {'typ': 'JWT', 'nonce': '5DMlOJ5xOPkeFxnq7ARUd9vS-mhf0sOPoot4dZNUbVI', 'alg': 'RS256', 'x5t': 'kWbkaa6qs8wsTnBwiiNYOhHbnAw', 'kid': 'kWbkaa6qs8wsTnBwiiNYOhHbnAw'}
DEBUG:root:Token kid: kWbkaa6qs8wsTnBwiiNYOhHbnAw, Token alg: RS256, public key: -----BEGIN CERTIFICATE-----
MIIC/TCCAeWgAwIBAgIICHb5qy8hKKgwDQYJKoZIhvcNAQELBQAwLTErMCkGA1UEAxMiYWNjb3VudHMuYWNjZXNzY29udHJvbC53aW5kb3dzLm5ldDAeFw0yNDAxMTUxODA0MTRaFw0yOTAxMTUxODA0MTRaMC0xKzApBgNVBAMTImFjY291bnRzLmFjY2Vzc2NvbnRyb2wud2luZG93cy5uZXQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3pDZdJ5acwD/5ysfZRt+19LTVAMCoeg9AWqxG1WTEbh7Jqac2VOXcNrTtBCSOk8Rsugu2C0wjY+vSmU7vFT1/3iaFt8r9QnpjQpxbGHAoyQKCfNwU5AXh4f8AgIcs4pry8+2G1yms1wKaNuSxblNgFmLq4uEUvD8eyMY7GErRVoNadaLM1V6q/NUHSO31V2Z+GzpmiHL/VvZa6x1p3U2ZIrOELvggOOUhoWiKT9kkl20s6CgjA5lMtbQzVQqFGta2PsCNUKcT/MGKWgAKbUisgz8/KYTXRwknpYXPb16niDtfrnEIRTrMnmggWJu+TpwopwU0HsUWNt6FhWnDkHFVAgMBAAGjITAfMB0GA1UdDgQWBBQLGQYqt7pRrKWQ25XWSi6lGN818DANBgkqhkiG9w0BAQsFAAOCAQEAtky1EYTKZvbTAveLmL3VCi+bJMjY5wyDO4Yulpv0VP1RS3dksmEALOsa1Bfz2BXVpIKPUJLdvFFoFhDqReAqRRqxylhI+oMwTeAsZ1lYCV4hTWDrN/MML9SYyeQ441Xp7xHIzu1ih4rSkNwrsx231GTfzo6dHMsi12oEdyn6mXavWehBDbzVDxbeqR+0ymhCgeYjIfCX6z2SrSMGYiG2hzs/xzypnIPnv6cBMQQDS4sdquoCsvIqJRWmF9ow79oHhzSTwGJj4+jEQi7QMTDR30rYiPTIdE63bnuARdgNF/dqB7n4ZJv566jvbzHpfCTqrJyj7Guvjr9i56NpLmz2DA==
-----END CERTIFICATE-----
</code></pre>
<p>although i have matching public key and algorithm, it still cannot verify the signature.
Any idea about this?</p>
|
<python><azure-active-directory><jwt><fastapi>
|
2024-02-13 13:54:57
| 1
| 511
|
galih
|
77,988,452
| 2,882,125
|
Assigning Pydantic Settings Fields not by alias even when environment variable is defined
|
<p>Iβd like to be able to create a Pydantic Settings object where the environment variable can be overriden if desired. In the example below the constructor call for <code>user_3</code> fails:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
import os
class User(BaseSettings):
model_config = SettingsConfigDict(populate_by_name=True)
name: str = Field(alias='full_name')
age: int
user_1 = User(full_name='John Doe', age=20)
print(user_1) # >> name='John Doe' age=20
user_2 = User(name='John Doe', age=20)
print(user_2) # >> name='John Doe' age=20
os.environ["full_name"] = "foo"
user_3 = User(name='John Doe', age=20) # >> error (see below)
print(user_3)
</code></pre>
<p>with this message:</p>
<pre><code>ValidationError: 1 validation error for User
name
Extra inputs are not permitted [type=extra_forbidden, input_value='John Doe', input_type=str]
For further information visit https://errors.pydantic.dev/2.6/v/extra_forbidden
</code></pre>
<p>Is there a way to set up the object such that <code>populate_by_name</code> works (<code>user_3</code> constructor call) even when an environment variable exists?</p>
<p>Thanks!</p>
<p>Related question - <a href="https://stackoverflow.com/questions/69433904/assigning-pydantic-fields-not-by-alias">Assigning Pydantic Fields not by alias</a></p>
|
<python><pydantic><pydantic-settings>
|
2024-02-13 13:52:54
| 1
| 390
|
RonenKi
|
77,988,426
| 8,284,452
|
Why did mamba keep older version of package's files in /pkgs directory upon update to newer version?
|
<p>I updated a python package using mamba to a newer version, but I found it kept the files of the older version in the <code>mamabforge/pkgs</code> directory. Why would it do this and what is the proper way to update packages so it completely removes the older version?</p>
<p>I was updating the <code>libcurl</code> package from 8.2.1 to 8.5.0 on the base mamba environment (yeah, I know altering the base environment is dumb, but I didn't set up this server in the first place and wouldn't have done it this way if I had a choice).</p>
<p><code>mamba update libcurl</code> (the solver came up with upgrading to 8.5.0)</p>
<p>After updating the package, when I run <code>mamba list</code> I see <code>libcurl 8.5.0</code> listed (and not 8.2.1). When I navigate to <code>/opt/mambaforge/pkgs/</code> and list the files in the directory, I see these directories/files listed for <code>libcurl</code>:</p>
<pre><code>libcurl-8.2.1-hca28451_0
libcurl-8.2.1-hca28451_0.conda
libcurl-8.5.0-hca28451_0
libcurl-8.5.0-hca28451_0.conda
</code></pre>
<p>Can I safely delete the <code>libcurl-8.2.1</code> related directory and .conda file? How do I clean this up?</p>
|
<python><conda><mamba><mini-forge><mambaforge>
|
2024-02-13 13:49:17
| 1
| 686
|
MKF
|
77,988,380
| 22,221,987
|
Convert local coordinate system to global system and vice versa
|
<p>I'm trying to convert local coordinate system to global coordinate system and vise versa.<br />
I have 3 position coordinates and 3 rotating angles, so my input is <code>(X, Y, Z, RX, RY, RZ)</code>.</p>
<p>I'm trying to understand the theory of this process, but I don't have enough experience in math science to extract information from math docs efficiently (I just don't understand them, but I'm trying).</p>
<p>So, can someone explain the coordinate system converting process in simple terms? I just feel a lack of examples.</p>
<p>I already read about rotating and shifting matrixes, but I can't understand how to use them to obtain new <code>X, Y, Z</code> and their <code>rotating angles</code> at the end... Just mess in my head.</p>
<p>In addition I'll be really thanks for the tips on researching this issue.</p>
<p><strong>UPD</strong>: Here is the illustration of the problem with example.
<a href="https://i.sstatic.net/RxHxZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RxHxZ.png" alt="enter image description here" /></a><br />
We have two coordinate systems. <code>A - world system</code>, <code>A' - local system</code>.<br />
Both systems has the same scale, but for example <code>A' system</code> has cutted axis, just to improve readability.</p>
<p>Magenta ball is an object, located in <code>system A'</code>.
His coordinates and rotation angles (let shorten it up to CRa) in this system are: <code>(1.2, 0.7, 0, 0, 0, 0)</code>.<br />
<code>System A'</code> is located in <code>system A</code>. <code>System A'</code> CRa in <code>system A</code> are: <code>(0, 3, 5, 90, 0, 0)</code>.</p>
<p>So,<br />
I have ball's CRa in <code>system A'</code>.<br />
I have <code>system A'</code> CRa in <code>system A</code>.</p>
<p><em>I need to find ball's CRa in <code>system A</code>.</em></p>
<p>(and vise versa, I need to find ball's CRa in <code>system A'</code> in case I have <code>system A'</code> CRa in <code>system A</code> and ball's CRa in <code>system A</code>.)</p>
|
<python><math><matrix><linear-algebra><coordinate-systems>
|
2024-02-13 13:44:11
| 1
| 309
|
Mika
|
77,988,374
| 1,014,217
|
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 112.00 MiB
|
<p>I have a big compute, 448gb of ram and 4 GPUS and I am executing this code:</p>
<pre><code>import torch
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType
from pyspark.sql import SparkSession
from transformers import pipeline
# Initialize the SparkSession
spark = SparkSession.builder \
.appName("MistralUDFExample") \
.getOrCreate()
# Initialize the pipeline outside the UDF
pipe = pipeline("text-generation", model="HuggingFaceH4/mistral-7b-sft-beta", torch_dtype=torch.bfloat16, device_map="auto")
@pandas_udf(StringType())
def mistral_udf(texts: pd.Series) -> pd.Series:
results = []
for text in texts:
messages = [
{
"role": "system",
"content": "You are a helpful assistant which only task is to analyze the text of emails sent to a customer service mailbox. From this text you must extract a category. No category labels will be provided. Email:",
},
{"role": "user", "content": text},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8, do_sample=True, temperature=0, top_k=50, top_p=0.95)
generated_text = outputs[0]["generated_text"]
results.append(generated_text)
return pd.Series(results)
# Example usage
input_df = spark.createDataFrame(pd.DataFrame({'texts': ['Terrible experience shopping online?', 'When I called, the human on the other side was vert friendly']}))
output_df = input_df.withColumn('generated_text', mistral_udf(input_df['texts']))
output_df.show(truncate=False)
</code></pre>
<p>THe first time it works, but when I try to execute it a second time I get the out of memory exception.</p>
<ol>
<li>How can I improve this code, so that it uses the loaded model and not load it every time? Or how can I fix the out of memory error.</li>
</ol>
<blockquote>
<p>torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate
112.00 MiB (GPU 1; 15.77 GiB total capacity; 3.16 GiB already allocated; 47.12 MiB free; 3.22 GiB reserved in total by PyTorch) If
reserved memory is >> allocated memory try setting max_split_size_mb
to avoid fragmentation. See documentation for Memory Management and
PYTORCH_CUDA_ALLOC_CONF</p>
</blockquote>
|
<python><pandas><apache-spark><pyspark><pytorch>
|
2024-02-13 13:42:51
| 0
| 34,314
|
Luis Valencia
|
77,988,116
| 8,848,025
|
Using regular expressions in Python to find specific word
|
<p>I have following lines (the order of lines can be different, there can be other similar lines as well). And I would like to replace <code>"sid"</code> with <code>"tempvalue"</code> taking into an account that <code>"sid"</code> can be surrounded by any symbol except for letters and digits. How to do that on Python using regular expression?</p>
<pre><code>lines = [
"VAR0=sid_host1; -",
"VAR1=sid; -",
"VAR2=psid; -",
"VAR3=sid_host1; -",
"VAR4=psid_host2; -",
"VAR5 = (file=/dir1/sid_host1/sid/trace/alert_sid.log)(database=sid)"
]
</code></pre>
<p>For line 0 desired result is: <code>"VAR0=tempvalue_host1; -"</code></p>
<p>for line 1: <code>"VAR1=tempvalue; -"</code></p>
<p>for line 3: <code>"VAR3=tempvalue_host1; -"</code></p>
<p>for line 5: <code>"VAR5 = (file=/dir1/tempvalue_host1/tempvalue/trace/alert_tempvalue.log)(database=tempvalue)"</code></p>
<p>Other lines must remain untouched.</p>
|
<python><python-re>
|
2024-02-13 13:01:32
| 1
| 720
|
Prokhozhii
|
77,988,008
| 14,636,729
|
How to read scalar, vector and matrix information in string format resembling Python syntax in R?
|
<p>I have strings that contain scalars, vectors, and matrices, e.g.</p>
<p>"[0,1],[[1,2],[3,4]],42"</p>
<p>And following the same example, that input in R should give me this:</p>
<pre><code>[[1]]
[1] 0 1
[[2]]
[,1] [,2]
[1,] 1 3
[2,] 2 4
[[3]]
[1] 42
</code></pre>
<p>So the notation is similar to that of Python where matrices are actually lists of lists.</p>
<p>Should I just write a parser for this, or has somebody already done that or is there a simple and neat trick to achieve this with no or only a little effort?</p>
|
<python><r><parsing><matrix>
|
2024-02-13 12:45:35
| 1
| 653
|
Aku-Ville LehtimΓ€ki
|
77,987,920
| 5,820,814
|
Pandas: Explode a list of nested dictionaries column and append as new rows
|
<p>Please consider the below dict for example:</p>
<pre><code>d2 = [{'event_id': 't1',
'display_name': 't1',
'form_count': 0,
'repetition_id': None,
'children': [{'event_id': 't_01',
'display_name': 't(1)',
'form_count': 1,
'repetition_id': 't1',
'children': [],
'forms': [{'form_id': 'f1',
'form_repetition_id': '1',
'form_name': 'fff1',
'is_active': True,
'is_submitted': False}]}],
'forms': []},
{'event_id': 't2',
'display_name': 't2',
'form_count': 0,
'repetition_id': None,
'children': [{'event_id': 't_02',
'display_name': 't(2)',
'form_count': 1,
'repetition_id': 't2',
'children': [{'event_id': 't_03',
'display_name': 't(3)',
'form_count': 1,
'repetition_id': 't3',
'children': [],
'forms': [{'form_id': 'f3',
'form_repetition_id': '1',
'form_name': 'fff3',
'is_active': True,
'is_submitted': False}]}],
'forms': [{'form_id': 'f2',
'form_repetition_id': '1',
'form_name': 'fff2',
'is_active': True,
'is_submitted': False}]}],
'forms': []}]
</code></pre>
<p>Above <code>d2</code> is a list of dicts, where <code>children</code> is a nested dict with same keys as the parent.</p>
<p>Also, <code>children</code> can have nesting upto multiple levels which is not possible to know upfront. So in short, I don't know how many times to keep exploding it.</p>
<p>Current df:</p>
<pre><code>In [54]: df11 = pd.DataFrame(d2)
In [55]: df11
Out[55]:
event_id display_name form_count repetition_id children forms
0 t1 t1 0 None [{'event_id': 't_01', 'display_name': 't(1)', ... []
1 t2 t2 0 None [{'event_id': 't_02', 'display_name': 't(2)', ... []
</code></pre>
<p>I want to flatten it in the below way.</p>
<p><strong>Expected output</strong>:</p>
<pre><code> event_id display_name form_count repetition_id children forms
0 t1 t1 0 None {'event_id': 't_01', 'display_name': 't(1)', '... []
1 t2 t2 0 None {'event_id': 't_02', 'display_name': 't(2)', '... []
0 t_01 t(1) 1 t1 [] [{'form_id': 'f1', 'form_repetition_id': '1', ...
1 t_02 t(2) 1 t2 {'event_id': 't_03', 'display_name': 't(3)', ... [{'form_id': 'f2', 'form_repetition_id': '1', ...
0 t_03 t(3) 0 t3 [] [{'form_id': 'f2', 'form_repetition_id': '1'}]
</code></pre>
<p>How do I know that how many nested children are there?</p>
<p>My attempt:</p>
<pre><code>In [58]: df12 = df11.explode('children')
In [64]: final = pd.concat([df12, pd.json_normalize(df12.children)])
In [72]: final
Out[72]:
event_id display_name form_count repetition_id children forms
0 t1 t1 0 None {'event_id': 't_01', 'display_name': 't(1)', '... []
1 t2 t2 0 None {'event_id': 't_02', 'display_name': 't(2)', '... []
0 t_01 t(1) 1 t1 [] [{'form_id': 'f1', 'form_repetition_id': '1', ...
1 t_02 t(2) 1 t2 [{'event_id': 't_03', 'display_name': 't(3)', ... [{'form_id': 'f2', 'form_repetition_id': '1', ...
</code></pre>
|
<python><pandas><nested>
|
2024-02-13 12:29:40
| 1
| 34,176
|
Mayank Porwal
|
77,987,566
| 3,077,037
|
How can I load excel in visio using python?
|
<p>I have an excel file containing columns:
Process step ID, Process step description, Next step ID, Connector label, Shape type, Function, Phase.</p>
<p>It has everything required. I want to load it in visio and map the columns to corresponding types and then generate flow. I can do it manually using opening visio-> data tab-> etc.</p>
<p>I want to automate the process using python and (win32com or comtypes).
But I am getting so many errors.</p>
<p>Here is the initial code I have been trying so far:</p>
<pre><code>import comtypes.client
import pywintypes
def create_visio_diagram(excel_file, process_id_col, description_col):
visio_app = comtypes.client.CreateObject("Visio.Application")
visio_app.Visible = True # Set to False if you don't want to see Visio GUI
# Create a new document using a generic empty template
doc = visio_app.Documents.Add("Blank Drawing.vst")
# Access the active page
page = doc.Pages.Item(1)
# Link the data from Excel to shapes on the page
try:
dataRecordset = page.DataRecordsets.Add("DataRecordset")
dataRecordset.GetRecordset(excel_file, 0, 1, "Sheet1")
dataRecordset.SetDataRowLinkedColumn("Process ID", process_id_col)
dataRecordset.SetDataRowLinkedColumn("Description", description_col)
dataRecordset.Refresh()
except pywintypes.com_error as e:
print(f"Error linking data: {e}")
# Save the document
doc.SaveAs("C:\\Path\\To\\Your\\Output\\File.vsdx")
# Close Visio
visio_app.Quit()
# Example usage
excel_file_path = "C:\\Path\\To\\Your\\Excel\\Workbook.xlsx"
process_id_column = "process_id"
description_column = "description"
create_visio_diagram(excel_file_path, process_id_column, description_column)
</code></pre>
|
<python><visio>
|
2024-02-13 11:30:40
| 1
| 940
|
Nazmul Hasan
|
77,987,511
| 7,454,177
|
Why is this type annotation not showing up in FastAPI?
|
<p>This is one of the endpoints in my FastAPI application:</p>
<pre><code>@router.get("/", response_model=Page[DeviceConfigSchemaOUT])
async def get_device_configs(
search: str | None = None,
page: int = 1,
size: int = 50,
) -> Any:
...
</code></pre>
<p>Which shows up like this in Swagger:</p>
<p><a href="https://i.sstatic.net/oXpAu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oXpAu.png" alt="endpoint without type annotation" /></a></p>
<p>Why is there no type on the search parameter? I tried changing the declaration to an Annotation and manually specifying the type, but the result stays the same. This is using the latest FastAPI version 109.2 and Python 3.12.1.</p>
|
<python><fastapi><python-typing>
|
2024-02-13 11:22:25
| 1
| 2,126
|
creyD
|
77,987,451
| 18,030,965
|
Django - How to use AJAX within a form
|
<p>I'm trying to use AJAX for the first time as we have a Django project but a required feature is for one form to behave more like a react app or SPA so I'm trying to use AJAX to add that functionality. I have the following template for the page. It has a form to let the user select from a dropdown what their highest education is and should have a modal appear to let the user input a qualification at a time to be added to the database and displayed on the table. The issue is when the user fills out the modal and clicks submit it seem's to perform just a normal GET request as can be seen below rather than the API request to create qualifications from the modal. AJAX seemingly isn't intercepting the request.</p>
<p>GET REQUEST</p>
<pre><code> "GET /applications/personal-details/prior-attainment/4/?csrfmiddlewaretoken=s54x6U2BczYThuoYKTryBZwUzdPDuaxmzzubamRRz2Lw4qX1ztjGnAP1XcryPHXP&subject=Maths&qualification_name=Nat+5&date_achieved=2024-02-13&level_grade=A HTTP/1.1" 200 7974
</code></pre>
<pre><code>{% extends 'base.html' %}
`{% block content %}
<div class="container mt-5">
<h2>Prior Attainment and Qualifications</h2>
<form id="priorAttainmentForm" method="post" action="{% url 'prior_attainment_view' application_id %}">
{% csrf_token %}
<div class="form-group">
<label for="highestQualification">Highest Qualification Level</label>
<select class="form-control" id="highestQualification" name="highest_qualification_level">
{% for value, label in form.highest_qualification_level.field.choices %}
<option value="{{ value }}" {% if form.highest_qualification_level.value == value %} selected {% endif %}>{{ label }}</option>
{% endfor %}
</select>
</div>
<hr>
<h3>Qualifications</h3>
<button type="button" class="btn btn-success mb-2" data-toggle="modal" data-target="#qualificationModal">Add Qualification</button>
<div id="qualificationTable">
<!-- Table gets populated by JavaScript -->
</div>
<hr>
<button type="submit" name="action" value="save_and_continue" class="btn btn-primary">Continue</button>
<button type="submit" name="action" value="save_and_return" class="btn btn-secondary">Save and Return</button>
</form>
<!-- Qualification Modal -->
<div class="modal fade" id="qualificationModal" tabindex="-1" role="dialog" aria-labelledby="qualificationModalLabel" aria-hidden="true">
<div class="modal-dialog" role="document">
<div class="modal-content">
<form id="qualificationForm">
<div class="modal-header">
<h5 class="modal-title" id="qualificationModalLabel">Add Qualification</h5>
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">&times;</span>
</button>
</div>
<div class="modal-body">
<!-- CSRF Token -->
<input type="hidden" name="csrfmiddlewaretoken" value="{{ csrf_token }}">
<div class="form-group">
<label for="qualificationSubject">Subject</label>
<input type="text" class="form-control" id="qualificationSubject" name="subject" required>
</div>
<div class="form-group">
<label for="qualificationName">Qualification Name</label>
<input type="text" class="form-control" id="qualificationName" name="qualification_name" required>
</div>
<div class="form-group">
<label for="qualificationDateAchieved">Date Achieved</label>
<input type="date" class="form-control" id="qualificationDateAchieved" name="date_achieved" required>
</div>
<div class="form-group">
<label for="qualificationLevelGrade">Level/Grade</label>
<input type="text" class="form-control" id="qualificationLevelGrade" name="level_grade" required>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>
<button type="submit" class="btn btn-primary">Save Qualification</button>
</div>
</form>
</div>
</div>
</div>
</div>
{% endblock %}
`
{% block extra_js %}
<script>
$(document).ready(function() {
const loadQualifications = () => {
// AJAX call to load qualifications
$.get("{% url 'qualifications_list_api' %}", function(data) {
let listHtml = '';
data.forEach(qualification => {
listHtml += `<div>${qualification.subject} - ${qualification.qualification_name} - ${qualification.date_achieved} - ${qualification.level_grade}</div>`;
});
$('#qualificationsList').html(listHtml);
});
};
$('#qualificationForm').on('submit', function(e) {
e.preventDefault();
const formData = $(this).serialize();
$.post("{% url 'qualification_create_api' %}", formData, function(response) {
if(response.success) {
$('#qualificationModal').modal('hide');
loadQualifications();
return false
} else {
alert('Error saving qualification');
return false
}
}).fail(function() {
alert('Failed to save qualification');
return false
});
});
loadQualifications();
});
</script>
{% endblock %}
</code></pre>
<p>Within the base.html I also have the basic structure below. (The script imports are a mess this is being built as a really quick demo for supervisors as a POC)</p>
<pre><code><!DOCTYPE html>
{% load static i18n %}
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% block title %}My Django Project{% endblock title %}</title>
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
<!-- Custom CSS -->
<!-- <link rel="stylesheet" href="{% static 'css/style.css' %}"> -->
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.22/css/jquery.dataTables.css">
<script type="text/javascript" src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
<script type="text/javascript" src="https://cdn.datatables.net/1.10.22/js/jquery.dataTables.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.22/css/dataTables.bootstrap4.min.css">
<script type="text/javascript" src="https://cdn.datatables.net/1.10.22/js/dataTables.bootstrap4.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-datepicker/1.9.0/css/bootstrap-datepicker.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-datepicker/1.9.0/js/bootstrap-datepicker.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
{% block extrahead %}{% endblock %}
</head>
<body>
<header>
</header>
<main role="main" class="container">
{% block content %}
{% endblock content %}
</main>
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.5.2/dist/umd/popper.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js"></script>
{% block extrajs %}{% endblock %}
</body>
</html>
</code></pre>
<p>URLS for adding Qualification in URLS.py</p>
<pre><code>path('/qualifications/add/', views.add_qualification, name='add_qualification'),
path('/qualifications/<int:qualification_id>/edit/', views.edit_qualification, name='edit_qualification'),
path('/qualifications/<int:qualification_id>/delete/', views.delete_qualification, name='delete_qualification'),
</code></pre>
<p>Views for the URLS</p>
<pre><code>def add_qualification(request):
if request.method == 'POST':
form = QualificationForm(request.POST)
if form.is_valid():
qualification = form.save()
return JsonResponse({'status': 'success', 'id': qualification.id})
else:
return JsonResponse({'status': 'error', 'errors': form.errors})
def edit_qualification(request, qualification_id):
if request.method == 'POST':
qualification = get_object_or_404(Qualification, pk=qualification_id)
form = QualificationForm(request.POST, instance=qualification)
if form.is_valid():
qualification = form.save()
return JsonResponse({'status': 'success', 'id': qualification.id})
else:
return JsonResponse({'status': 'error', 'errors': form.errors})
def delete_qualification(request, qualification_id):
if request.method == 'POST':
qualification = get_object_or_404(Qualification, pk=qualification_id)
qualification.delete()
return JsonResponse({'status': 'success'})
</code></pre>
<p>I've tried following multiple guides on setting up AJAX and also had some AI tools check over the code and all say it should seemingly work but the AJAX just does not seem to work and now been stuck on this for a couple of days. I'm expecting it to essentially just create the qualification using the route, to then close the modal and display the all qualification's. I do recognise it would have been much simpler to use React and I wish we did but for other reasons Django had to be used for the POC.</p>
<p>Edit: Further details,</p>
<p>The template contains 2 forms, the second one in the modal is what needs to perform the AJAX request. At current moment it sends a get request that can be seen above rather than an AJAX request. As far as I can tell its as if AJAX is not running in the first place to intercept the request and perform its own.</p>
|
<javascript><python><django><ajax><django-rest-framework>
|
2024-02-13 11:11:04
| 0
| 351
|
Jack Duffy
|
77,987,420
| 1,922,302
|
Get real count of solutions of MIP Gurobi model
|
<p>I have a MIP model that was already solved to some extent with a time limit, and now I want to get exactly one more solution.</p>
<p>One way to produce one additional solution is to keep doing this from the beginning by incrementing <code>model.Params.SolutionLimit</code> one by one.</p>
<p>But in my situation I don't know how many solutions were produced up until now, so how I can I set a reasonable <code>Params.SolutionLimit</code> that is exactly one higher than the number of solutions until now?</p>
<p><code>model.SolCount</code> does not help since that is capped by a certain limit (defaulting to 10)</p>
|
<python><optimization><gurobi><mixed-integer-programming>
|
2024-02-13 11:04:35
| 0
| 953
|
johk95
|
77,987,278
| 4,575,197
|
Feature selection using backward feature selection in scikit-learn and PCA
|
<p>i have calculated the scores of all the columns in my DF,which has 312 columns and 650 rows, using PCA with following code:</p>
<pre><code>all_pca=PCA(random_state=4)
all_pca.fit(tt)
all_pca2=all_pca.transform(tt)
plt.plot(np.cumsum(all_pca.explained_variance_ratio_) * 100)
plt.xlabel('Number of components')
plt.grid(which='both', linestyle='--', linewidth=0.5)
plt.xticks(np.arange(0, 330, step=25))
plt.yticks(np.arange(0, 110, step=10))
plt.ylabel('Explained variance (%)')
plt.savefig('elbow_plot.png', dpi=1000)
</code></pre>
<p>and the result is the following image.</p>
<p><a href="https://i.sstatic.net/4wYnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4wYnZ.png" alt="enter image description here" /></a></p>
<p>My main goal is to use only important features for <em><strong>Random forest regression, Gradient boosting, OLS regression and LASSO</strong></em>. As you can see, 100 columns describe 95.2% of the variance in my Dataframe.</p>
<p>Can I use this threshold (100 Columns) for backward feature selection?</p>
|
<python><pca><feature-selection>
|
2024-02-13 10:42:34
| 1
| 10,490
|
Mostafa Bouzari
|
77,987,277
| 6,649,591
|
How can I build an embedding encoder with FastAPI
|
<p>I just want to use a pre-trained open source embedding model from SentenceTransformer for encoding plain text.</p>
<p>The goal is to use swagger as GUI - put in a sentence and get out embeddings.</p>
<pre><code>from fastapi import Depends, FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer
embedding_model = SentenceTransformer("./assets/BAAI/bge-small-en")
app = FastAPI()
class EmbeddingRequest(BaseModel):
text: str
class EmbeddingResponse(BaseModel):
embeddings: float
@app.post("/embeddings", response_model=EmbeddingResponse)
async def get_embeddings(request: EmbeddingRequest, model: embedding_model):
embeddings_result = model.encode(request.text)
return EmbeddingResponse(embeddings=embeddings_result)
</code></pre>
|
<python><fastapi><word-embedding>
|
2024-02-13 10:42:23
| 1
| 487
|
Christian
|
77,987,186
| 350,685
|
Unable to get flask_mysqldb to work with python 3.x
|
<p>I have a <code>flask</code> application that uses <code>flask_mysqldb</code> to connect with database. I had imported <code>MySQLdb</code> and <code>flask_mysqldb</code> packages to use. Some update today morning screwed this. I had to replace <code>MySQLdb</code> with <code>pymysql</code> because it is not compatible with Python 3.x as per some threads here. The code is still not working because <code>flask_mysqldb</code> uses <code>MySQLdb</code> internally. I get the following error:</p>
<pre><code> from flask_mysqldb import MySQL
File "/opt/homebrew/lib/python3.11/site-packages/flask_mysqldb/__init__.py", line 1, in <module>
import MySQLdb
ModuleNotFoundError: No module named 'MySQLdb'
</code></pre>
<p>Any help on how I can resolve this is most welcome. I have been googling and going through different threads here and not getting anywhere.</p>
|
<python><mysql-python><flask-mysql>
|
2024-02-13 10:28:18
| 0
| 10,638
|
Sriram
|
77,987,163
| 5,346,843
|
Change logging level dynamically in Python
|
<p>I am logging messages from various modules and want to change the log level dynamically. The code snippet is</p>
<pre><code>import logging
logging.basicConfig(filename='my_log.log', filemode='w', encoding='utf-8', level=logging.WARNING, force=True)
# Code here that works so only worry about WARNINGs
# ....
# Code here has an issue so increase log level to DEBUG
logging.setLevel(logging.DEBUG)
</code></pre>
<p>This gives the error <code>AttributeError: module 'logging' has no attribute 'setLevel'</code>. I tried using the logger generated by <code>basicConfig</code>, e.g.</p>
<pre><code>L = logging.basicConfig(filename='my_log.log', filemode='w', encoding='utf-8', level=logging.INFO, force=True)
L.setLevel(logging.DEBUG)
</code></pre>
<p>but got the error <code>AttributeError: 'NoneType' object has no attribute 'logging'</code>. How can I do this?</p>
|
<python><logging>
|
2024-02-13 10:25:24
| 0
| 545
|
PetGriffin
|
77,986,991
| 1,189,330
|
Errno 111 Connection refused from web.archive.org
|
<p>Does web.archive.org have any limits?</p>
<p>I'm trying to pump pages out of webarchive with pause using python. But after less than a minute I get an error:</p>
<pre><code>ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
NewConnectionError Traceback (most recent call last) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7ae32e962fe0>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
MaxRetryError Traceback (most recent call last) MaxRetryError: HTTPSConnectionPool(host='web.archive.org', port=443): Max retries exceeded with url: /web/20230331062349/http://100vann.ru/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7ae32e962fe0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
517 raise SSLError(e, request=request)
518
--> 519 raise ConnectionError(e, request=request)
520
521 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='web.archive.org', port=443): Max retries exceeded with url: /web/20230331062349/http://100vann.ru/ (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7ae32e962fe0>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>I try to open by this code:</p>
<pre><code>headers = requests.utils.default_headers()
time.sleep(1)
headers.update(
{
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
}
)
webarchive = requests.get(url, headers=headers, timeout=15)
webarchive.raise_for_status()
if webarchive.status_code == 200:
</code></pre>
<p>How do services pump out a site of several thousand pages? If you put large pauses, it takes forever. What am I doing wrong?</p>
|
<python>
|
2024-02-13 09:54:37
| 1
| 1,304
|
IvanS
|
77,986,897
| 16,798,185
|
Dynamically derive dataframe names for assignment
|
<p>I have array of columns. For each of the column in the array, I need to perform same operation and create in a separate dataframe. So that later these dataframes can be merged/joined into one.</p>
<p>with below, getting error : cannot assign to expression here. Maybe you meant '==' instead of '='?</p>
<p>I get that the issue is due to we dynamically trying to derive the dataframe name using c +'_df_name' for which we are assigning the data.</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import lit
spark = SparkSession.builder.appName("test").getOrCreate()
data = [("John", 25), ("Alice", 30), ("Bob", 35)]
columns = ["name", "age"]
df = spark.createDataFrame(data, columns)
attributes = ["name","skill"]
for c in attributes:
c +'_df_name' = df.select(lit('xyz')) # simplified
</code></pre>
<p>So changed the above code as below:</p>
<pre><code>for c in attributes:
df_name = c +'_df_name'
print(df_name) # prints name_df_name, skill_df_name
df_name = df.select(lit('xyz')) # simplified
</code></pre>
<p>Now if change try to print the new dataframe value using below, getting error.name 'name_df_name' is not defined.</p>
<pre><code>name_df_name.show()
skill_df_name.show()
</code></pre>
<p>How do we dynamically generate unique dataframe names for each assignment during the iteration?</p>
|
<python><apache-spark><pyspark>
|
2024-02-13 09:39:46
| 2
| 377
|
user16798185
|
77,986,845
| 10,695,613
|
I am training an autoencoder for anomaly detection. How can I incorporate anomalous data in training using a custom loss function?
|
<p><a href="https://arxiv.org/pdf/1903.10709.pdf" rel="nofollow noreferrer">This</a> paper suggests the following algorithm that incorporates anomalous data points into the training process:</p>
<p>Algorithm 1 Autoencoding Binary Classifiers</p>
<pre><code>while not converged do:
Sample minibatch {(x1, y1), Β· Β· Β· , (xK , yK )} from dataset
Compute the gradient of ΞΈ as g0 (using custom loss function that incorporates the label of x)
Perform SGD-update for ΞΈ with gΞΈ
end while
</code></pre>
<p>I have the following code for an autoencoder implemented using keras:</p>
<pre><code>class AnomalyDetector(Model):
def __init__(self):
super(AnomalyDetector, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Dense(32, activation="relu"),
layers.Dense(16, activation="relu"),
layers.Dense(8, activation="relu")])
self.decoder = tf.keras.Sequential([
layers.Dense(16, activation="relu"),
layers.Dense(32, activation="relu"),
layers.Dense(140, activation="sigmoid")])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = AnomalyDetector()
</code></pre>
<p>My main problem is: how do I include the labels in a customly defined loss function if an autoencoder's input and output must be X. Assuming I encode the label as a feature, during inference, the label won't be available, so I am not sure how to implement the algorithm described by the paper.</p>
|
<python><tensorflow><keras>
|
2024-02-13 09:33:15
| 1
| 405
|
BovineScatologist
|
77,986,837
| 19,318,120
|
Celery Redis how to prevent task acknowledge if raised an exception
|
<p>I want to prevent celery from acknowledging task if it raised an exception</p>
<p>setting acks_late didn't make any difference</p>
<pre><code>@app.task(acks_late=True)
def send_mail(data):
raise Exception
</code></pre>
<p>I tried doing the following</p>
<pre><code>class CeleryTask(celery.Task):
def on_failure(self, exc, task_id, args, kwargs, einfo : ExceptionInfo):
app.send_task(self.name, args=args, kwargs=kwargs)
def run(self, *args, **kwargs):
pass
@app.task(base=CeleryTask)
def send_mail(data):
raise Exception
</code></pre>
<p>but that will cause an infinite recursion or I'm missing something for it to be an optimal solution</p>
<p>what is a better solution?</p>
|
<python><redis><celery>
|
2024-02-13 09:31:42
| 0
| 484
|
mohamed naser
|
77,986,775
| 10,053,485
|
Pydantic / FastAPI, how to set up a case-insensitive model with suggested values
|
<p>Using FastAPI I have set up a POST endpoint that takes a command, I want this command to be case insensitive, while still having suggested values (i.e. within the SwaggerUI docs)</p>
<p>For this, I have set up an endpoint with a <code>Command</code> class as a schema for the POST body parameters:</p>
<pre class="lang-py prettyprint-override"><code>@router.post("/command", status_code=HTTPStatus.ACCEPTED) # @router is a fully set up APIRouter()
async def control_battery(command: Command):
result = do_work(command.action)
return result
</code></pre>
<p>For <code>Command</code> I currently have 2 possible versions, which both do not have the full functionality I desire.</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import HTTPException
from pydantic import BaseModel, field_validator
from typing import Literal
## VERSION 1
class Command(BaseModel):
action: Literal["jump", "walk", "sleep"]
## VERSION 2
class Command(BaseModel):
action: str
@field_validator('action')
@classmethod
def validate_command(cls, v: str) -> str:
"""
Checks if command is valid and converts it to lower.
"""
if v.lower() not in {'jump', 'walk', 'sleep'}:
raise HTTPException(status_code=422, detail="Action must be either 'jump', 'walk', or 'sleep'")
return v.lower()
</code></pre>
<p>Version 1 is obviously not case sensitive, but has the correct 'suggested value' behaviour, as below.</p>
<img src="https://i.sstatic.net/ujg7N.png" width="300" />
<p>Whereas Version 2 has the correct case sensitivity and allows for greater control over the validation, but no longer shares suggested values with users of the schema. e.g., in the image above "jump" would be replaced with "string".</p>
<p>How do I combine the functionality of both of these approaches?</p>
|
<python><fastapi><pydantic>
|
2024-02-13 09:22:47
| 3
| 408
|
Floriancitt
|
77,986,643
| 1,980,208
|
how to execute multiple functions in pandas groupby processing
|
<p>I am calculating <code>pct_change</code> in pandas groupby but from the first element of each group. Hence i am using <code>cumprod()</code>. I already have a working code but it is little ugly. How can i use <code>pct_change()</code> and <code>cumprod()</code> in one liner.</p>
<p><strong>My code:</strong></p>
<pre><code>import pandas as pd
import numpy as np
data = [[1, 10], [2, 17], [3, 15],[4, 11], [5, 17], [6, 15]]
df = pd.DataFrame(data, columns=["id", "open"])
#normal
df['Normal'] =df['open'].pct_change().fillna(0).add(1).cumprod().sub(1).mul(100).round(2)
#groupby
df["Group_of_3"] = df.groupby(np.arange(len(df)) // 3 )["open"].pct_change().fillna(0).add(1)
df["Group_of_3"] = df.groupby(np.arange(len(df)) // 3 )["Group_of_3"].cumprod().sub(1).mul(100).round(2)
print(df)
</code></pre>
<p><strong>output</strong></p>
<pre><code> id open Normal Group_of_3
0 1 10 0.0 0.00
1 2 17 70.0 70.00
2 3 15 50.0 50.00
3 4 11 10.0 0.00
4 5 17 70.0 54.55
5 6 15 50.0 36.36
</code></pre>
|
<python><pandas>
|
2024-02-13 08:59:21
| 1
| 439
|
prem
|
77,986,459
| 6,839,853
|
how to decompress and write to disk lz4 compressed OS image
|
<p>I am trying to fetch an OS image with pycurl and write the decompressed data to disk. With gzip it is straight forward, only with lz4 formats I face issues, it seems the write_lz4(buf) decompresses and writes to disk, only when I try to resize the partition, I get an error:</p>
<blockquote>
<p>entries is 0 bytes, but this program supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Creating new GPT entries in memory.
The operation has completed successfully.
Error: Partition doesn't exist</p>
</blockquote>
<p>I could also manage it with io.Byitesio:</p>
<pre><code>if url.endswith('.lz4'):
with io.BytesIO() as output_buffer:
curl.setopt(pycurl.WRITEDATA, output_buffer)
curl.perform()
output_buffer.seek(0)
decompressed_data = lz4.frame.decompress(output_buffer.read())
disk.write(decompressed_data)
</code></pre>
<p>But it seems this step is unnecessary. I tried the direct approach but it didn't work. Here is the code:</p>
<pre><code>def write_to_disk(self, url, dev, proxy=None):
if os.path.isfile(dev):
size = os.path.getsize(dev)
with open(os.path.realpath(dev), 'wb') as disk:
disk.seek(0)
def write_gz(buf):
disk.write(d.decompress(buf))
def write_lz4(buf):
disk.write(lz4.decompress(buf))
try:
curl = pycurl.Curl()
curl.setopt(pycurl.URL, url)
if proxy is not False:
curl.setopt(pycurl.PROXY, proxy)
curl.setopt(pycurl.BUFFERSIZE, 1024)
if url.endswith('.lz4'):
curl.setopt(pycurl.WRITEFUNCTION, write_lz4)
elif url.endswith('.gz'):
d = zlib.decompressobj(zlib.MAX_WBITS | 32)
curl.setopt(pycurl.WRITEFUNCTION, write_gz)
curl.perform()
except pycurl.error:
return False
if os.path.isfile(dev):
disk.seek(size - 1)
disk.write(b"\0")
return True
</code></pre>
<p>Thanks</p>
|
<python><zlib><pycurl><lz4>
|
2024-02-13 08:24:58
| 1
| 625
|
Max
|
77,986,253
| 10,570,372
|
List is invariant because it violates LSP?
|
<p>Python's type system says that list is invariant, and that <code>List[int]</code> is not a subtype of <code>List[float]</code>.</p>
<p>Here's the Python code snippet:</p>
<pre class="lang-py prettyprint-override"><code>def append_pi(lst: List[float]) -> None:
lst += [3.14]
my_list = [1, 3, 5] # type: List[int]
append_pi(my_list) # Expected to be safe, but it's not
my_list[-1] << 5 # This operation fails because the last element is no longer an int
</code></pre>
<p>In <a href="https://peps.python.org/pep-0483/#subtype-relationships" rel="nofollow noreferrer">PEP 483</a>, Guido explained the following:</p>
<blockquote>
<p>Let us also consider a tricky example: If <code>List[int]</code> denotes the type formed by all lists containing only integer numbers, then it is not a subtype of <code>List[float]</code>, formed by all lists that contain only real numbers. The first condition of subtyping holds, but appending a real number only works with <code>List[float]</code> so that the second condition fails.</p>
</blockquote>
<p>I assumed that the second condition refers to the second criterion as per the subsumption criterion, where a type <code>S</code> is a subtype of <code>T</code> if it obeys function applicability. More specifically:</p>
<ul>
<li>The second criterion states that if <code>T</code> has <code>N</code> functions (methods), then <code>S</code> must have at least the same <code>N</code> functions (methods). Furthermore, the behaviour and semantics of the functions must be consistent when applied to either the subtype or supertype. This basically is obeying the rule of LSP.</li>
</ul>
<p>My question focuses on the apparent behavioral or semantic violation implicated in the operation of appending a float to a <code>List[int]</code>, which to me does not contravene the first part of the second criterion of function applicability (since both <code>List[int]</code> and <code>List[float]</code> support the same set of methods). I'm grappling with understanding how this specific operation, which effectively introduces a float into a list expected exclusively to contain integers, results in a failure to adhere to the LSP by altering the <code>List[int]</code>'s presumed type integrity. This seems to suggest a deeper, perhaps semantic, aspect of type safety and method applicability under Python's type system that I might be missing. I would appreciate help on laying concretely and rigorously on why is Guido's claim on violating the second condition true.</p>
|
<python><python-typing><liskov-substitution-principle><invariance>
|
2024-02-13 07:44:22
| 0
| 1,043
|
ilovewt
|
77,986,231
| 3,828,463
|
Linking Python to a complex Fortran model which needs to return to the caller
|
<p>I have a complex model written in Fortran which is a self contained entity which reads an input file, allocates memory for all the data structures, initializes the model, provides guesses to the solution, uses a derivative-based solver to solve the nonlinear equations, and populates the output structures, deallocates memory, and finishes.</p>
<p>I am able to call this complete model as a once through from Python fairly easily. This I do as follows:</p>
<pre><code>import sys
import ctypes as ct
lib = ct.CDLL('prog.dll')
f = getattr(lib,'M_EXFUN_mp_EXFUN')
f.argtypes = [ct.c_char_p,ct.c_int]
f.restype = None
x = b'x ' + sys.argv[1].encode('UTF-8') + b' debug=yes'
f(x,len(x))
</code></pre>
<p>However my next task is more complex. I want to use a Python optimizer which will call my model as an objective function. That is, the Python optimizer will pass the current values of the independent variables, X, to the Fortran model which will return the value of some defined objective function.</p>
<p>The Python code is pretty simple and looks as follows:</p>
<pre><code>def fun(x):
return f(x)
x0 = [2.0, 0.0]
res = minimize(fun, x0)
</code></pre>
<p>Where "minimize" is a function provided by the Python library I am using.</p>
<p>The definition of f(x) is deep in my Fortran model, as is the code for estimating the starting values of X. Clearly I don't want to call my entire model for each function evaluation, not least because it will stop and return, deallocating all data structures, but also because the function evaluation and starting point code are very small parts of the model.</p>
<p>What I need to do is run the Fortran code from Python (which is easy to do), but then that code has to pause and return control back to the Python code where that will start asking for the starting point and repeated function evaluations by coming back into the Fortran code.</p>
<p>Is this possible? Or am I stretching the Python <> Fortran interface capabilities?</p>
<p>Update#1: Here is a simplified example for what my Fortran code is doing. I have a DLL containing a subroutine main2 which allocates memory for the rest of the problem. It allocates memory for the model variables x(), and the model parameters, par(), and deallocates them on exit. Here is the code:</p>
<pre><code>module main2_mod
contains
!dec$ attributes dllexport :: main2
subroutine main2(cc)
use objfun_mod
implicit none
character(*), intent(in) :: cc
integer :: i, n
real(8) :: f
real(8), allocatable :: x(:), par(:)
n = 5
allocate(x(n),par(n))
x(:) = (/(i,i=1,n)/)
par(:) = (/(i**0.5,i=1,n)/)
call objfun(n,x,par,f)
write(6,'(<n>f6.1,f6.1)') (x(i),i=1,n), f
deallocate(x)
deallocate(par)
return
end
end module
</code></pre>
<p>It calls a subroutine objfun in another DLL testobj.dll. This calculates the objective function, f, given the latest values of x() together with the parameters par() set up in main1. Here is the code for objfun:</p>
<pre><code>module objfun_mod
contains
!dec$ attributes dllexport :: objfun
subroutine objfun(n,x,par,f)
implicit none
integer, intent(in) :: n
real(8), intent(in) :: x(n)
real(8), intent(in) :: par(n)
real(8), intent(out) :: f
integer :: i
real(8) :: ff
ff = 0.0
do i = 1, n
ff = ff + x(i)*par(i)
enddo
f = ff
return
end
end module
</code></pre>
<p>Here is the Python code that calls main2:</p>
<pre><code># for ctypes
import ctypes as ct
# load the DLL
lib = ct.CDLL('main2.dll')
# find entry point for function to be called
f = getattr(lib,'MAIN2_MOD_mp_MAIN2')
# specify its argument types
f.argtypes = [ct.c_char_p,ct.c_int]
# specify return type
f.restype = None
# set up the argument to be sent to the function
x = b'x '
# run the function with commandline (x) as an argument
f(x,len(x))
</code></pre>
<p>This runs fine and prints out X() and f.</p>
<p>The problem I am trying to solve now is using an optimizer to call main2 first (to allocate memory and set up the parameters) and then repeatedly call objfun with different values of x() to get the resultant value of f.</p>
<p>The Python code for this has to look something like:</p>
<pre><code>def fun(x):
return f(x)
x0 = [1.0, 2.0, 3.0, 4.0, 5.0]
res = minimize(fun, x0)
</code></pre>
<p>The definition of f(x) above must somehow refer to objfun in the Fortran. Secondly this code must also somehow call main2 to allocate the memory and set up the parameters that objfun requires. It must not be allowed to complete as this would deallocate the memory for x() and par() that objfun needs.</p>
<p>I suspect I need a callback from the Fortran back to Python, but this is where my knowledge is lacking.</p>
|
<python><fortran>
|
2024-02-13 07:40:09
| 0
| 335
|
Adrian
|
77,986,158
| 2,711,059
|
Getting Empty Result while trying to read S3 bucket file using boto3
|
<p>I am trying to access a CSV file from S3 bucket by using the following code in python</p>
<pre><code>s3=boto3.client('s3')
_data=s3.get_object(Bucket='ml-titanic-dataset',Key='test_data/test.csv')
_data['Body'].read().decode('utf-8')
</code></pre>
<p>I am getting the result as empty but i have data inside the mapped CSV file.</p>
<ol>
<li>Is there anything missing</li>
<li>Am i doing the decoding right</li>
</ol>
<p>when using print(_data), i get the following output</p>
<pre><code> {'ResponseMetadata': {'RequestId': 'YTTJTHCNKWKV72SA',
'HostId': 'GGzZuluP68nAiPEAKIpc6W8n9wQLhq+yb4phZS5DugcjmLkMlmw4FXl+R5jXDAw56VX6AqaOYk8=',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amz-id-2': 'GGzZuluP68nAiPEAKIpc6W8n9wQLhq+yb4phZS5DugcjmLkMlmw4FXl+R5jXDAw56VX6AqaOYk8=',
'x-amz-request-id': 'YTTJTHCNKWKV72SA', 'date': 'Tue, 13 Feb 2024 07:03:02 GMT',
'last-modified': 'Mon, 12 Feb 2024 16:31:53 GMT', 'etag': '"7533b82eae4b582610cbd68aa636b017"',
'x-amz-server-side-encryption': 'AES256', 'accept-ranges': 'bytes', 'content-type': 'text/csv',
'server': 'AmazonS3', 'content-length': '28629'}, 'RetryAttempts': 0},
'AcceptRanges': 'bytes', 'LastModified': datetime.datetime(2024, 2, 12, 16, 31, 53, tzinfo=tzutc()),
'ContentLength': 28629, 'ETag': '"7533b82eae4b582610cbd68aa636b017"',
'ContentType': 'text/csv', 'ServerSideEncryption': 'AES256', 'Metadata': {},
'Body': <botocore.response.StreamingBody object at 0x7fafcfb9caf0>}
</code></pre>
<p>assinging the data to the variable and accessing also give me empty result as shown</p>
<p><a href="https://i.sstatic.net/glB9L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/glB9L.png" alt="enter image description here" /></a></p>
|
<python><amazon-s3><boto3>
|
2024-02-13 07:25:57
| 0
| 5,268
|
Lijin Durairaj
|
77,985,988
| 10,795,596
|
how to avoid string with certain words before #
|
<p>I have to find strings doesn't have either of words(as word boundaries) abc, def or ghi anywhere before # using regex in python</p>
<p><strong>he is abc but # not xyz</strong> - no match</p>
<p><strong>he is good # but small</strong> - match</p>
<p><strong>he might ghi but # not abc will</strong> - no match</p>
<p><strong>he will help but # hope for def to come</strong> - match</p>
<p><strong>he is going for vabc but # not sure</strong> - match</p>
<p>This is the list that has strings</p>
<pre><code>l=["he is abc but # not xyz",
"he is good # but small",
"he might ghi but # not abc will",
"he will help but # hope for def to come",
"he is going for vabc but # not sure"]
</code></pre>
<p>It should print</p>
<pre><code>["he is good # but small", "he will help but # hope for def to come", "he is going for vabc but # not sure"]
</code></pre>
<p>Tried using negative lookahead concept in regex with</p>
<pre class="lang-bash prettyprint-override"><code>^(?!.*\b(?:abc|def|ghi)\b).*#
</code></pre>
<p>but couldn't achieve exact result</p>
|
<python><regex>
|
2024-02-13 06:51:35
| 1
| 332
|
Cassius Clay
|
77,985,422
| 9,776,466
|
Is regex in python different with regex in other system like PostgreSQL?
|
<p>I have this code to parsing street and house number in Python</p>
<pre><code>import re
def parse_street(address):
pattern = r'^(.*?)(?i)\b no\b'
match = re.match(pattern, address)
if match:
return match.group(1).strip()
else:
return None
def parse_housenumber(address):
pattern = r'(?i)\bno\.?\s*([\dA-Z]+)'
match = re.search(pattern, address)
print(match)
if match:
return match.group(1)
else:
return None
df['street'] = df['address'].apply(lambda x: pd.Series(parse_street(x)))
df['house_number'] = df['address'].apply(lambda x: pd.Series(parse_housenumber(x)))
</code></pre>
<p>Why I am getting error for running the query in postgresql with the same regex syntax query like:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT (regexp_matches('Jl. ABC No.01', '^(.*?)(?i)\b no\b'))[1]
</code></pre>
<p>The error message is:
<code>ERROR: invalid regular expression: quantifier operand invalid </code></p>
<p>Is regex in python different with regex in other system? How can I convert my python function to PostgreSQL syntax? I want to add "street" and "house_number" column from "address" column within the same table</p>
|
<python><regex><postgresql>
|
2024-02-13 03:24:58
| 1
| 561
|
Hermawan Wiwid
|
77,985,337
| 4,898,202
|
How can I calculate the difference between two named columns in a huge CSV file, then save the results to a second CSV file?
|
<p>I have a CSV file containing almost 200 million rows (gigabytes of data). It has only 5 columns. I want to iterate over the data and do simple calculations, first between columns, but then between rows.</p>
<p>Sample data:</p>
<pre><code>DateTime,Width,Length,Count,Age
01.01.2010 00:00:00,0.55,0.25,1,4
07.02.2010 00:00:01,0.53,0.28,2,3
21.02.2010 00:00:01,0.55,0.25,2,3
20.03.2010 00:00:01,0.55,0.25,1,3
09.05.2010 00:00:02,0.55,0.25,4,7
11.05.2010 00:00:02,0.5,0.3,3,5
</code></pre>
<p>I am using Python with Pandas to read the data in chunks, but I am not sure how to access each column in each row to perform basic arithmetic.</p>
<p>Here is my currently not working python:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
file_in = r"B:\Users\user\Documents\huge-dataset.csv"
file_out = r"B:\Users\user\Documents\aggregate.csv"
data = pd.read_csv(file_in, chunksize=100000)
for idx, chunk in enumerate(data):
for row in chunk:
print("row: ", row)
diff = row[1] - row[2]
data_out.append([row[0],diff])
if row[0] == 0:
prevrow = row
else:
rowdiff = row[1] - prevrow[1]
pd.write_csv(file_out, data_out)
</code></pre>
<p>I want to use the column names, for example:</p>
<p><code>ratio = row['Width']/row['Length']</code></p>
<p>I then want to compare each row with previous rows, for example:</p>
<p><code>width_diff = row['width'] - prev_row['width']</code></p>
<p>Any pointers/corrections?</p>
|
<python><pandas><dataframe><statistics><bigdata>
|
2024-02-13 02:47:00
| 1
| 1,784
|
skeetastax
|
77,985,260
| 1,887,281
|
Concurrency issue with writing batches via AIOFile - output file data getting corrupted
|
<p>I'm having trouble concurrently writing to a couple of files. I tried adding locks to prevent a race condition on the writes, and I verified that the data is valid when it enters the <code>write_batches</code> method, but when I check the data in either of the output files, it's all messed up. It appears that each write step is partially writing over the writes of a separate thread, so I get output where the first part of each line is randomly chopped up. Between invocations of <code>write_and_flush(..)</code>, I'm noticing that the contents of the output file are getting overwritten as well, so it seems like a combination of a concurrency issue and something I'm overlooking with the behavior of the write path.</p>
<p>Any ideas?</p>
<pre><code>import unittest
from langchain_community.chat_models import ChatOpenAI
from core.ChainFactory import ChainFactory
import asyncio
from aiofile import AIOFile
import pandas as pd
async def process_data(model, factory):
# Read the file
df = pd.read_csv(
"sitemap_data_raw",
header=None,
names=["Record"],
engine="python",
on_bad_lines="warn",
)
# Deduplicate the contents
df = df.drop_duplicates()
# Create locks for file writing to ensure exclusive access
q_lock = asyncio.Lock()
a_lock = asyncio.Lock()
async def process_batch(rows):
# Process a batch of rows concurrently
tasks = [
factory.build_qa_chain(model).ainvoke({"chunk": row.Record}) for row in rows
]
return await asyncio.gather(*tasks)
async def write_and_flush(writer, text, lock):
async with lock: # Ensure exclusive access to the writer
await writer.write(text)
await writer.fsync() # Ensure data is written and synchronized to disk
# Define a function to write batches to files, using different parameter names to avoid shadowing
async def write_batches(question_writer, answer_writer, results):
# Flatten results and write questions and answers to their respective files
for my_result in results:
for my_record in my_result:
await write_and_flush(
question_writer, my_record["question"] + "\n", q_lock
)
await write_and_flush(answer_writer, my_record["answer"] + "\n", a_lock)
# Open the output files
async with AIOFile("question_output.txt", "w") as q_writer, AIOFile(
"answer_output.txt", "w"
) as a_writer:
batch_size = 10
for i in range(0, len(df), batch_size):
batch_rows = df.iloc[i : i + batch_size].itertuples(index=False)
batch_results = await process_batch(batch_rows)
await write_batches(q_writer, a_writer, batch_results)
class TestDataPrep(unittest.TestCase):
def test_prepare_qa_dataset(self):
# Read sitemap_data_raw
model = ChatOpenAI(model_name="gpt-3.5-turbo-1106")
factory = ChainFactory()
# Run the async function
loop = asyncio.get_event_loop()
loop.run_until_complete(process_data(model, factory))
</code></pre>
|
<python><python-asyncio>
|
2024-02-13 02:17:28
| 1
| 5,144
|
devinbost
|
77,985,182
| 7,587,176
|
Reshaping DataFrame; Melt/Pivot
|
<p>Body:</p>
<p>Problem:</p>
<p>I have a wide DataFram that I'm struggling to reshape and aggregate using pandas.</p>
<p>Sample Data:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Unnamed: 1</th>
<th>Unnamed: 2</th>
<th>Unnamed: 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bezeichnung (Bezirksregion)</td>
<td>Straftaten \n-insgesamt-</td>
<td>Raub</td>
</tr>
<tr>
<td>Mitte</td>
<td>81178</td>
<td>823</td>
</tr>
<tr>
<td>Tiergarten SΓΌd</td>
<td>4113</td>
<td>37</td>
</tr>
<tr>
<td>Regierungsviertel</td>
<td>6251</td>
<td>41</td>
</tr>
<tr>
<td>Alexanderplatz</td>
<td>18999</td>
<td>163</td>
</tr>
</tbody>
</table></div>
<pre><code>import pandas as pd
from io import StringIO
# Sample data
data = """
,Unnamed: 1,Unnamed: 2,Unnamed: 3
3,Bezeichnung (Bezirksregion),Straftaten \\n-insgesamt-,Raub
4,Mitte,81178,823
5,Tiergarten SΓΌd,4113,37
6,Regierungsviertel,6251,41
7,Alexanderplatz,18999,163
"""
# Load data into a DataFrame
df = pd.read_csv(StringIO(data), index_col=0)
# Display the DataFrame
print("Loaded DataFrame:")
print(df)
</code></pre>
<p>Desired Output:</p>
<pre><code>desired_output = {
'Region': ['Mitte', 'Tiergarten SΓΌd', 'Regierungsviertel', 'Alexanderplatz'],
'Crimes': ['Straftaten -insgesamt-', 'Straftaten -insgesamt-', 'Raub', 'Raub'],
'count': [81178, 4113, 41.33, 163],
}
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Region</th>
<th>Crimes</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mitte</td>
<td>Straftaten -insgesamt-</td>
<td>81178</td>
</tr>
<tr>
<td>Tiergarten SΓΌd</td>
<td>Straftaten -insgesamt-</td>
<td>4113</td>
</tr>
<tr>
<td>Regierungsviertel</td>
<td>Raub</td>
<td>41.33</td>
</tr>
<tr>
<td>Alexanderplatz</td>
<td>Raub</td>
<td>163</td>
</tr>
</tbody>
</table></div>
<p>Current Approach:</p>
<pre><code>melted_df = pd.melt(df, id_vars=[df.columns[0]], var_name='crime', value_name='crime_count')
</code></pre>
<p>Issues:</p>
<p>Result not as expected:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th>crime</th>
<th>crime_count</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bezeichnung (Bezirksregion)</td>
<td>Unnamed: 2</td>
<td>Straftaten \n-insgesamt-</td>
</tr>
<tr>
<td>Mitte</td>
<td>Unnamed: 2</td>
<td>81178</td>
</tr>
<tr>
<td>Tiergarten SΓΌd</td>
<td>Unnamed: 2</td>
<td>4113</td>
</tr>
<tr>
<td>Regierungsviertel</td>
<td>Unnamed: 2</td>
<td>6251</td>
</tr>
<tr>
<td>Alexanderplatz</td>
<td>Unnamed: 2</td>
<td>18999</td>
</tr>
</tbody>
</table></div>
<p><strong>Assistance Requested:</strong></p>
<p>How can I reshape the DataFrame to achieve the desired output?</p>
|
<python><pandas><transformation><data-modeling>
|
2024-02-13 01:43:25
| 1
| 1,260
|
0004
|
77,985,008
| 816,566
|
How to migrate from python 2.7 urllib2.Request to python 3.10 urllib.requests.Request, and encode data in a compatible way?
|
<p>I'm trying to port some code from python 2.7 which uses <code>urllib2.Request</code>, to python 3.10, and use the newer <code>json.dumps()</code> function and the <code>urllib.requests.Request</code> class. My problem is that the server accepts Posts from the original code, but fails with an in internal 500 error when I use updated classes. I expect the issue is that I am not encoding my data into the format the server requires. Both versions of <code>post()</code> here are supposed to pass the bytes of a png image, within a dict, to a web service running on localhost.
Here is the original function, which works in python 2.7:</p>
<pre><code>def post(self, endpoint, data={}, params={}, headers=None):
try:
url = self.base_url + endpoint + "?" + urllib.urlencode(params)
data = json.dumps(data, False) # which parameter gets the False value?
headers = headers or {"Content-Type": "application/json", "Accept": "application/json"}
request = urllib2.Request(url=url, data=data, headers=headers)
response = urllib2.urlopen(request)
data = response.read()
data = json.loads(data)
return data
except Exception as ex:
logging.exception("ERROR: ApiClient.post")
</code></pre>
<p>Here are the docs from the python 2.7 <code>Request</code> <a href="https://%20https://docs.python.org/2.7/library/urllib2.htm" rel="nofollow noreferrer">https://docs.python.org/2.7/library/urllib2.htm</a>l</p>
<blockquote>
<p>class urllib2.Request(url[, data][, headers][, origin_req_host][,
unverifiable]) This class is an abstraction of a URL request.</p>
<p>url should be a string containing a valid URL.</p>
<p>data may be a string specifying additional data to send to the server,
or None if no such data is needed. Currently HTTP requests are the
only ones that use data; the HTTP request will be a POST instead of a
GET when the data parameter is provided. data should be a buffer in
the standard application/x-www-form-urlencoded format. The
urllib.urlencode() function takes a mapping or sequence of 2-tuples
and returns a string in this format.</p>
</blockquote>
<p>Here are the docs for python 3.10 <a href="https://docs.python.org/3.10/library/urllib.request.html#module-urllib.request" rel="nofollow noreferrer">https://docs.python.org/3.10/library/urllib.request.html#module-urllib.request</a></p>
<blockquote>
<p>class urllib.request.Request(url, data=None, headers={},
origin_req_host=None, unverifiable=False, method=None)</p>
<p>This class is an abstraction of a URL request.</p>
<p>url should be a string containing a valid URL.</p>
<p>data must be an object specifying additional data to send to the
server, or None if no such data is needed. Currently HTTP requests are
the only ones that use data. The supported object types include bytes,
file-like objects, and iterables of bytes-like objects. If no
Content-Length nor Transfer-Encoding header field has been provided,
HTTPHandler will set these headers according to the type of data.
Content-Length will be used to send bytes objects, while
Transfer-Encoding: chunked as specified in RFC 7230, Section 3.3.1
will be used to send files and other iterables. For an HTTP POST
request method, data should be a buffer in the standard
application/x-www-form-urlencoded format. The urllib.parse.urlencode()
function takes a mapping or sequence of 2-tuples and returns an ASCII
string in this format. It should be encoded to bytes before being used
as the data parameter.</p>
</blockquote>
<p>And this is my replacement function and decoder, which fails when I submit it to <code>urlopen()</code></p>
<pre><code>class BytesEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, bytes):
return base64.b64encode(o).decode('utf-8')
else:
return super().default(o)
def post(self, endpoint, dict_in={}, params={}, headers=None):
try:
url = self.base_url + endpoint + "?" + urllib.parse.urlencode(params)
# NOTE: In the original code, the string returned by json.dumps is sent directly to urllib2.Request.
# That changes in urllib.request.Request, where the data parameter specifies "bytes"
# Original line was "data = json.dumps(data, False)"
json_str: str = json.dumps(dict_in, sort_keys=True, indent=2, cls=BytesEncoder)
json_bytes: bytes = json_str.encode('utf-8')
headers = headers or {"Content-Type": "application/json", "Accept": "application/json"}
request_out = urllib.request.Request(url=url, data=json_bytes, headers=headers)
"""
FIXME: The following urlopen raises HTTPError
url=http://127.0.0.1:7860/sdapi/v1/img2img?, code=500, reason=Internal Server Error
#/components/schemas/StableDiffusionProcessingImg2Img
"""
response = urlopen(request_out)
response_json_str = response.read()
response_dict = json.loads(response_json_str)
return response_dict
except urllib.error.HTTPError as ex:
StabDiffAuto1111.LOGGER.exception("ERROR: ApiClient.post()")
message = "url=%s, code=%d, reason=%s" % (ex.url, ex.code, ex.reason)
StabDiffAuto1111.LOGGER.error(message)
StabDiffAuto1111.LOGGER.error(str(ex.headers))
</code></pre>
<p>My replacement function does several things differently. First, I pass a custom <code>BytesEncoder</code> to <code>json.dumps()</code>. If I don't use a custom encoder, then <code>json.dumps()</code> fails with a "cannot serialize bytes" error. But perhaps I should use something besides base64 to encode the bytes.
Next, instead of re-using whatever was passed as the "data" argument, I create intermediate variables, with the expected type hints. So all is explicitly clear.
Finally, I take the (unicode) string from <code>json.dumps</code>, and I encode that into utf-8 bytes. That was my understanding of the documentation sentence "It should be encoded to bytes before being used as the data parameter." But I expect this is what the server won't accept.</p>
<p>How do I post the data in the same format that the previous function did?</p>
|
<python><json><request><urllib>
|
2024-02-13 00:29:54
| 1
| 1,641
|
Charlweed
|
77,984,922
| 395,857
|
How can I get the embedding of a document in langchain?
|
<p>I use the langchain Python lib to create a vector store and retrieve relevant documents given a user query. How can I get the embedding of a document in the vector store?</p>
<p>E.g., in this code:</p>
<pre><code>import pprint
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.docstore.document import Document
model = "sentence-transformers/multi-qa-MiniLM-L6-cos-v1"
embeddings = HuggingFaceEmbeddings(model_name = model)
def main():
doc1 = Document(page_content="The sky is blue.", metadata={"document_id": "10"})
doc2 = Document(page_content="The forest is green", metadata={"document_id": "62"})
docs = []
docs.append(doc1)
docs.append(doc2)
for doc in docs:
doc.metadata['summary'] = 'hello'
pprint.pprint(docs)
db = FAISS.from_documents(docs, embeddings)
db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
query = "Which color is the sky?"
docs = new_db.similarity_search_with_score(query)
print('Retrieved docs:', docs)
print('Metadata of the most relevant document:', docs[0][0].metadata)
if __name__ == '__main__':
main()
</code></pre>
<hr />
<p>How can I get the embedding of documents <code>doc1</code> and <code>doc2</code>?</p>
<p>The code was tested with Python 3.11 with:</p>
<pre><code>pip install langchain==0.1.1 langchain_openai==0.0.2.post1 sentence-transformers==2.2.2 langchain_community==0.0.13 faiss-cpu==1.7.4
</code></pre>
|
<python><nlp><langchain><word-embedding>
|
2024-02-12 23:59:51
| 1
| 84,585
|
Franck Dernoncourt
|
77,984,857
| 23,393,520
|
Paho MQTT "Unsupported callback API version" error
|
<p>I am trying to implement Paho Python MQTT and connect to an online broker but the code seems to through an error.</p>
<blockquote>
<p>ValueError: Unsupported callback API version: version 2.0 added a callback_api_version, see migrations.md for details</p>
</blockquote>
<p>I was trying to implement a simple paho client example from the <a href="https://www.emqx.com/en/blog/how-to-use-mqtt-in-python" rel="nofollow noreferrer">given website</a>; the following will reproduce the issue:</p>
<pre><code>from paho.mqtt import client as mqtt_client
import random
broker = 'broker.emqx.io'
port = 1883
topic = "python/mqtt"
client_id = f'python-mqtt-{random.randint(0, 1000)}'
client = mqtt_client.Client(client_id)
</code></pre>
|
<python><mqtt><mosquitto><paho><hivemq>
|
2024-02-12 23:35:44
| 5
| 349
|
user330720
|
77,984,737
| 1,730,405
|
Python socket servers don't see abrupt disconnect when using docker network
|
<p>When running a socket server and client in python, an abrupt disconnect by the client is handled differently by the server when running both locally versus running both in docker containers and through a docker network. If it matters, all of these tests are done on a Linux machine.</p>
<p>Here is a simple socket server that serves incrementing bytes:</p>
<pre class="lang-py prettyprint-override"><code>import socket
import time
def main():
# Create a TCP/IP socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("0.0.0.0", 12345))
server_socket.listen(1)
print('Server bound and listening.')
try:
# Wait for a connection
print('Waiting for a connection...')
connection, client_address = server_socket.accept()
print('Connection established from:', client_address)
try:
# Serve incrementing bytes
current_byte = 0
while True:
connection.sendall(bytes([current_byte]))
print(f'Sent byte: {current_byte}')
current_byte = (current_byte + 1) % 256
time.sleep(1)
finally:
connection.close()
print('Connection closed.')
except KeyboardInterrupt:
print('Server interrupted by user, shutting down.')
finally:
# Clean up the server socket
server_socket.close()
print('Server socket closed.')
if __name__ == '__main__':
main()
</code></pre>
<p>And here is the corresponding client to consume the incrementing bytes:</p>
<pre class="lang-py prettyprint-override"><code>import socket
import time
import os
# Override when running in docker to point to the correct host.
SERVER_HOST = os.getenv("SERVER_HOST", "localhost")
def main():
# Wait for server to come up.
time.sleep(1)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((SERVER_HOST, 12345))
while True:
bytes_received = sock.recv(1)
print(f"Received bytes: {str(bytes_received)}")
if __name__ == "__main__":
main()
</code></pre>
<p>When I run these scripts locally (no docker containers and loopback), a <code>BrokenPipeError</code> is raised by <code>connection.sendall(...)</code> when I either Ctrl+C or kill the client.</p>
<p>Server:</p>
<pre><code>Server bound and listening.
Waiting for a connection...
Connection established from: ('127.0.0.1', 44778)
Sent byte: 0
Sent byte: 1
Sent byte: 2
Sent byte: 3
Connection closed.
Server socket closed.
Traceback (most recent call last):
File "/path/to/socket_server.py", line 43, in <module>
main()
File "/path/to/socket_server.py", line 23, in main
connection.sendall(bytes([current_byte]))
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>Client (interrupted with a user-input Ctrl+C):</p>
<pre><code>Received bytes: b'\x00'
Received bytes: b'\x01'
Received bytes: b'\x02'
^CTraceback (most recent call last):
File "/path/to/socket_client.py", line 21, in <module>
main()
File "/path/to/socket_client.py", line 17, in main
bytes_received = sock.recv(1)
^^^^^^^^^^^^
KeyboardInterrupt
</code></pre>
<p>However, when running in docker containers and over a docker network, the server never detects that the client disconnected. Here is a docker-compose file to spin up the services and connect them together:</p>
<pre><code>version: "3.9"
networks:
socket-net:
name: socket-net
external: false
services:
client:
image: python:3.11
volumes:
- ./scripts:/scripts
environment:
# Show prints without explicit flush
PYTHONUNBUFFERED: 1
SERVER_HOST: "server"
command: ["python", "/scripts/socket_client.py"]
networks:
- socket-net
server:
hostname: server
image: python:3.11
volumes:
- ./scripts:/scripts
environment:
# Show prints without explicit flush
PYTHONUNBUFFERED: 1
command: ["python", "/scripts/socket_server.py"]
networks:
- socket-net
</code></pre>
<p>When I interrupt the <code>socket_client</code> with a <code>docker kill</code>, the server doesn't always raise an exception:</p>
<pre><code>ebenevedes@machine:/path/to/socket_mre$ docker compose up --force-recreate
[+] Running 3/0
β Network socket-net Created 0.0s
β Container socket_mre-client-1 Created 0.0s
β Container socket_mre-server-1 Created 0.0s
Attaching to client-1, server-1
server-1 | Server bound and listening.
server-1 | Waiting for a connection...
server-1 | Connection established from: ('172.25.18.2', 54428)
server-1 | Sent byte: 0
client-1 | Received bytes: b'\x00'
client-1 | Received bytes: b'\x01'
server-1 | Sent byte: 1
client-1 | Received bytes: b'\x02'
server-1 | Sent byte: 2
client-1 | Received bytes: b'\x03'
server-1 | Sent byte: 3
client-1 exited with code 137 # Killed with `docker kill socket_mre-client-1` from a different terminal.
server-1 | Sent byte: 4
server-1 | Sent byte: 5
server-1 | Sent byte: 6
server-1 | Sent byte: 7
server-1 | Sent byte: 8
... # Server continues to send bytes without raising an Exception
</code></pre>
<ol>
<li>Why do sockets exhibit different behavior when connecting through loopback versus through a docker network?</li>
<li>How can I consistently detect a disconnecting client from the server, no matter what the network path is between the server and client? I have seen a similar issue with EKS K8s deployments, but don't have a minimum reproducible example for that case.</li>
</ol>
|
<python><docker><sockets><docker-compose>
|
2024-02-12 22:58:22
| 2
| 391
|
Elias Benevedes
|
77,984,729
| 21,313,539
|
ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)
|
<p>I ran into this problem when I was trying to import the following libraries and it is giving the error "ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)"</p>
<p>I ran this exact same code in the morning and it worked perfectly.</p>
<p>I did <code>!pip install llama_index</code></p>
<pre><code>from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms import HuggingFaceLLM
from llama_index.prompts.prompts import SimpleInputPrompt
</code></pre>
<p>I tried commenting out first line and faced same issue for HuggingFaceLLM module
Same issue for SimpleInputPrompt, got error "ModuleNotFoundError: No module named 'llama_index.prompts'"</p>
<p>First I faced the problem in a sagemaker notebook so I thought the issue was with the sagemaker notebook so I spun up a clean new notebook and I got the same error.
So, I tried the code in my local Jypiter notebook, google collab notebook, sagemaker studiolab notebook and I got the same error.</p>
|
<python><jupyter-notebook><amazon-sagemaker><huggingface><llama-index>
|
2024-02-12 22:56:20
| 7
| 471
|
lat
|
77,984,545
| 6,843,153
|
How to fulfill ABC class definition of abstract properties in child class in Python
|
<p>I have the following abc class:</p>
<pre><code>class Logger(ABC):
@abstractproperty
def filename_prefix(self):
pass
@abstractproperty
def trace_id_length(self):
pass
@abstractproperty
def run_id_length(self):
pass
def __init__(self):
self._load_history()
def _load_history(self):
try:
with open(
os.path.join(trace_history_dir, f"{self.filename_prefix}_history.json"),
"r",
) as f:
history = json.load(f)
except Exception as e:
if e.__class__.__name__ == "FileNotFoundError":
history = {"0".zfill(self.trace_id_length): "Init"}
with open(
os.path.join(
trace_history_dir, f"{self.filename_prefix}_history.json"
),
"w",
) as f:
json.dumps(history)
else:
raise e
return history
</code></pre>
<p>And the following child class:</p>
<pre><code>class CostsLogger(Logger):
def __init__(self):
self.filename_prefix = filename_prefix
self.trace_id_length = 4
self.run_id_length = 4
super.__init__()
</code></pre>
<p>The problem is that I'm getting this error when running:</p>
<p><code>Can't instantiate abstract class CostsLogger with abstract methods filename_prefix, run_id_length, trace_id_length</code></p>
<p>I don't want to force programmers to use <code>@property</code> when implementing a child class but keep the <code>@abstractporperty</code>'s in the abstract class definition to enforce their implementation, is there any way to avoid it?</p>
|
<python><abstract-class>
|
2024-02-12 22:01:47
| 0
| 5,505
|
HuLu ViCa
|
77,984,347
| 1,018,447
|
Pydantic 2: JSON serialize a set to a sorted list
|
<h3>Context</h3>
<p>In Pydantic 2, fields of type <code>set</code> are already JSON-serialized to lists. However, these lists are unordered. Or, more specifically, their items are ordered according to internal ordering of the original set.</p>
<p>Unfortunately, even when two sets contain the same items, their internal ordering might still be different. Consequently, serializing sets without explicitly ordering them produces nondeterministic results.</p>
<h3>Question</h3>
<p>I am looking for a way to configure a particular Pydantic 2 model to JSON-serialize all of its fields whose type is <code>set</code> to a sorted list first, before converting the outcome to string. I would like to avoid defining custom set type or adding a custom type annotation for every such attribute. The solution should be more generic because the number of attributes which might need this kind of handling is larger. Moreover, they might also be defined in subclasses of that particular model.</p>
<p>Is there a reasonably simple way to achieve this?</p>
<h3>Options</h3>
<p>It seems to me that using a <a href="https://docs.pydantic.dev/dev/api/functional_serializers/#pydantic.functional_serializers.model_serializer" rel="nofollow noreferrer">model serializer</a> is the most straightforward way to do it. But at the same time it seems cumbersome to me to loop through all the attributes, check their type, call the serialization routines for the items and then sort the results into a list. If possible, I would like to avoid that and leverage Pydantic's knowledge about the attribute types in some way.</p>
<p>In Pydantic 1, the desired effect could be achieved by using the <a href="https://docs.pydantic.dev/1.10/usage/exporting_models/#json_encoders" rel="nofollow noreferrer"><code>json_encoders</code></a> parameter of the configuration and defining a custom serialization function for all attributes of type <code>set</code>. However, in Pydantic 2 this option has been <a href="https://docs.pydantic.dev/dev/migration/#changes-to-pydanticbasemodel" rel="nofollow noreferrer">removed</a> due to "performance overhead and implementation complexity". It seems understandable.</p>
<p>I am not necessarily looking for a way to mimic the behavior of Pydantic 1. If there is a way to achieve similar effect using primary or recommended Pydantic 2 features, I would prefer to use it.</p>
<p>It seems that at some point of JSON serialization, Pydantic is converting the sets to lists anyway. In an ideal case, I would like to somehow tap into this conversion and merely call <code>sorted</code> on its outcome.</p>
|
<python><serialization><pydantic>
|
2024-02-12 21:11:24
| 1
| 957
|
Peter BaΕ‘ista
|
77,984,323
| 356,630
|
cross compiled numpy with clang depends on libgcc_s.so.1
|
<p>I am trying to cross compile numpy for an different platform using a clang-based cross compiler toolchain. I have built a custom version of python that runs on the target platform, and trying to use crossenv make a wheel of numpy. Even though the underlying compiler that I am using is clang (both host-python and build-python) the resulting numpy seems to have an gcc dependency libgcc_s.so.1.</p>
<p>I have some difficulty understanding why this happens and roughly what I can do to prevent this. I believe the problem is specific to numpy- I can't disclose specifics but I have a couple of packages like cffi that I was able to successfull cross compile.</p>
<p>Any suggestions of what I could try to prevent from linkking gcc dependencies, I assume it just thinks its using gcc because of crossenv.</p>
|
<python><numpy><clang><cross-env>
|
2024-02-12 21:06:06
| 1
| 1,618
|
Lawrence Kok
|
77,984,267
| 11,248,638
|
Expanding Rows in Pandas DataFrame based on Time Intervals, Accounting for Optional Breaks
|
<p>I have a Pandas DataFrame representing timesheets with start and end times, as well as optional rest and meal break intervals. My goal is to expand a single row into multiple rows with correct intervals. It's worth noting that:</p>
<ul>
<li>A timesheet might not have any breaks.</li>
<li>A timesheet might have only a rest break, only a meal break, or both.</li>
<li>The order of meal and rest breaks can vary (meal break can be both before and after rest break).</li>
</ul>
<p>Here's an example DataFrame:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>Start Time</th>
<th>End Time</th>
<th>Rest Break Start Time</th>
<th>Rest Break End Time</th>
<th>Meal Break Start Time</th>
<th>Meal Break End Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2024-01-26 07:59</td>
<td>2024-01-26 12:33</td>
<td>2024-01-26 10:43</td>
<td>2024-01-26 10:53</td>
<td>2024-01-26 12:03</td>
<td>2024-01-26 12:33</td>
</tr>
<tr>
<td>2</td>
<td>2024-01-26 14:29</td>
<td>2024-01-26 17:35</td>
<td>2024-01-26 16:33</td>
<td>2024-01-26 16:44</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>3</td>
<td>2024-01-26 08:02</td>
<td>2024-01-26 12:45</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>4</td>
<td>2024-01-26 09:15</td>
<td>2024-01-26 16:15</td>
<td>NaN</td>
<td>NaN</td>
<td>2024-01-26 12:15</td>
<td>2024-01-26 12:45</td>
</tr>
<tr>
<td>5</td>
<td>2024-01-26 09:10</td>
<td>2024-01-26 16:37</td>
<td>2024-01-26 15:43</td>
<td>2024-01-26 15:55</td>
<td>2024-01-26 13:06</td>
<td>2024-01-26 13:37</td>
</tr>
</tbody>
</table></div>
<p>The output I need:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Id</th>
<th>Category</th>
<th>Start Time</th>
<th>End Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Session</td>
<td>2024-01-26 07:59</td>
<td>2024-01-26 10:43</td>
</tr>
<tr>
<td>1</td>
<td>Rest Break</td>
<td>2024-01-26 10:43</td>
<td>2024-01-26 10:53</td>
</tr>
<tr>
<td>1</td>
<td>Session</td>
<td>2024-01-26 10:53</td>
<td>2024-01-26 12:03</td>
</tr>
<tr>
<td>1</td>
<td>Meal Break</td>
<td>2024-01-26 12:03</td>
<td>2024-01-26 12:33</td>
</tr>
<tr>
<td>2</td>
<td>Session</td>
<td>2024-01-26 14:29</td>
<td>2024-01-26 16:33</td>
</tr>
<tr>
<td>2</td>
<td>Rest Break</td>
<td>2024-01-26 16:33</td>
<td>2024-01-26 16:44</td>
</tr>
<tr>
<td>2</td>
<td>Session</td>
<td>2024-01-26 16:44</td>
<td>2024-01-26 17:35</td>
</tr>
<tr>
<td>3</td>
<td>Session</td>
<td>2024-01-26 08:02</td>
<td>2024-01-26 12:45</td>
</tr>
<tr>
<td>4</td>
<td>Session</td>
<td>2024-01-26 09:15</td>
<td>2024-01-26 12:15</td>
</tr>
<tr>
<td>4</td>
<td>Meal Break</td>
<td>2024-01-26 12:15</td>
<td>2024-01-26 12:45</td>
</tr>
<tr>
<td>4</td>
<td>Session</td>
<td>2024-01-26 12:45</td>
<td>2024-01-26 16:15</td>
</tr>
<tr>
<td>5</td>
<td>Session</td>
<td>2024-01-26 09:10</td>
<td>2024-01-26 13:06</td>
</tr>
<tr>
<td>5</td>
<td>Meal Break</td>
<td>2024-01-26 13:06</td>
<td>2024-01-26 13:37</td>
</tr>
<tr>
<td>5</td>
<td>Session</td>
<td>2024-01-26 13:37</td>
<td>2024-01-26 15:43</td>
</tr>
<tr>
<td>5</td>
<td>Rest Break</td>
<td>2024-01-26 15:43</td>
<td>2024-01-26 15:55</td>
</tr>
<tr>
<td>5</td>
<td>Session</td>
<td>2024-01-26 15:55</td>
<td>2024-01-26 16:37</td>
</tr>
</tbody>
</table></div>
<p>My current logic involves extracting all unique datetimes from each row, sorting them, and creating intervals. While this approach somewhat works, I'm struggling with assigning a 'Category' column to each interval.</p>
<pre><code>import pandas as pd
# Your original DataFrame
data = {'Id': [1, 2, 3, 4, 5],
'Start Time': ['2024-01-26 07:59', '2024-01-26 14:29', '2024-01-26 08:02', '2024-01-26 09:15', '2024-01-26 09:10'],
'End Time': ['2024-01-26 12:33', '2024-01-26 17:35', '2024-01-26 12:45', '2024-01-26 16:15', '2024-01-26 16:37'],
'Rest Break Start Time': ['2024-01-26 10:43', '2024-01-26 16:33', None, None, '2024-01-26 15:43'],
'Rest Break End Time': ['2024-01-26 10:53', '2024-01-26 16:44', None, None, '2024-01-26 15:55'],
'Meal Break Start Time': ['2024-01-26 12:03', None, None, '2024-01-26 12:15', '2024-01-26 13:06'],
'Meal Break End Time': ['2024-01-26 12:33', None, None, '2024-01-26 12:45', '2024-01-26 13:37']}
df = pd.DataFrame(data)
# Create an empty DataFrame to store the expanded rows
expanded_df = pd.DataFrame(columns=['Id', 'Start Time', 'End Time'])
# Iterate through each row of the original DataFrame
for index, row in df.iterrows():
id_value = row['Id']
start_time = pd.to_datetime(row['Start Time'])
end_time = pd.to_datetime(row['End Time'])
# Collect all times
times = {start_time, end_time}
for column in ['Rest Break Start Time', 'Rest Break End Time', 'Meal Break Start Time', 'Meal Break End Time']:
if not pd.isna(row[column]):
times.add(pd.to_datetime(row[column]))
# Sort the times
sorted_times = sorted(times)
# Create intervals
for i in range(len(sorted_times) - 1):
if sorted_times[i] != sorted_times[i + 1]:
expanded_df = expanded_df.append({'Id': id_value, 'Start Time': sorted_times[i], 'End Time': sorted_times[i + 1]}, ignore_index=True)
# Sort the expanded DataFrame by 'Id' and 'Start Time'
expanded_df = expanded_df.sort_values(by=['Id', 'Start Time']).reset_index(drop=True)
# Show the result
expanded_df
</code></pre>
<p>Here is the output:</p>
<p><a href="https://i.sstatic.net/JE4EP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JE4EP.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><datetime>
|
2024-02-12 20:52:13
| 1
| 401
|
Yara1994
|
77,984,234
| 3,007,075
|
pandas rolling window apply function to dataframe slice?
|
<p>Without indexing and for/while loops, I'd like to apply a function to a rolling window on my dataframe, that returns a row, for what will become a new dataframe (or extra columns on my previous one). The best method I've found so far is the one below:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
def foo(window):
print(window, "\n-------\n")
return np.array([[1,2,3]])
df = pd.DataFrame({"A": range(10), "B": range(10, 20), "C": range(20, 30)})
df[["D", "F", "G"]] = df.rolling(window=2, method="table").apply(foo,engine="numba", raw=True)
print(df)
</code></pre>
<p>The problems for me here are:</p>
<ol>
<li>It only works with the <code>raw</code> option (i.e. my function gats a <code>numpy.ndarray</code> not a dataframe), so I can't reference columsn by their named in the operations of the function being applied.</li>
<li>The return value needs to have the same number of columns as my input, while I may need to have many more input columns than output ones.</li>
</ol>
<p>Is there a default method for this? I know could use a while loop, but this would very messy in my code base, and likely less efficient.</p>
|
<python><pandas><dataframe>
|
2024-02-12 20:43:34
| 1
| 1,166
|
Mefitico
|
77,983,855
| 7,587,176
|
CKAN Data Extraction via API
|
<p><strong>Goal:</strong> Extract Berlin Crime Data via CKAN.</p>
<p>Can you anyone help me write a basic api request to validate that you can return the data documented here via CKAN? I am having trouble understanding how to adjust the url when it is not demo data. The goal is to extract berlin crime data : <a href="https://daten.berlin.de/tags/ckan" rel="nofollow noreferrer">https://daten.berlin.de/tags/ckan</a> and documentation for ckan here: <a href="https://docs.ckan.org/en/2.9/api/" rel="nofollow noreferrer">https://docs.ckan.org/en/2.9/api/</a></p>
<p>current code:</p>
<pre><code>import requests
# CKAN API URL for listing datasets
ckan_url = "http://demo.ckan.org/api/action/package_list"
# Parameters for querying datasets in the "berlin" group
#params = {"group": "berlin"}
params = {"package": "offenedaten"}
# Make the request
response = requests.get(ckan_url, params=params)
# Check if the request was successful (status code 200)
if response.status_code == 200:
# Extract the list of datasets from the response
datasets = response.json()["result"]
print(response.text)
# Print the list of datasets
# print("List of datasets in the 'berlin' group:")
# for dataset in datasets:
# print("-", dataset)
else:
print("Error:", response.status_code, response.text)
</code></pre>
|
<python><ckan><opendata>
|
2024-02-12 19:28:00
| 0
| 1,260
|
0004
|
77,983,736
| 15,176,150
|
Can MLFlow be used without the `with mlflow.start_run()` block?
|
<p>I want to track an entire notebook and log the parameters of cleaning steps that occur before training a model. I'd like to use <a href="https://mlflow.org/docs/latest/index.html" rel="nofollow noreferrer"><code>mlflow</code></a> to do it, but on all the docs it looks like you have to track models using this format:</p>
<pre><code>with mlflow.start_run():
...
</code></pre>
<p>Is there a way to track an entire notebook using mlflow without the <code>with</code> block?</p>
|
<python><machine-learning><mlflow><mlops>
|
2024-02-12 19:05:13
| 1
| 1,146
|
Connor
|
77,983,609
| 1,711,271
|
merge some columns in a Polars dataframe and duplicate the others
|
<p>I have a similar problem to <a href="https://stackoverflow.com/q/77982046/1711271">how to select all columns from a list in a polars dataframe</a>, but slightly different:</p>
<pre><code>import polars as pl
import numpy as np
import string
rng = np.random.default_rng(42)
nr = 3
letters = list(string.ascii_letters)
uppercase = list(string.ascii_uppercase)
words, groups = [], []
for i in range(nr):
word = ''.join([rng.choice(letters) for _ in range(rng.integers(3, 20))])
words.append(word)
group = rng.choice(uppercase)
groups.append(group)
df = pl.DataFrame(
{
"a_0": np.linspace(0, 1, nr),
"a_1": np.linspace(1, 2, nr),
"a_2": np.linspace(2, 3, nr),
"b_0": np.random.rand(nr),
"b_1": 2 * np.random.rand(nr),
"b_2": 3 * np.random.rand(nr),
"words": words,
"groups": groups,
}
)
print(df)
shape: (3, 8)
βββββββ¬ββββββ¬ββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬ββββββββββββββββββ¬βββββββββ
β a_0 β a_1 β a_2 β b_0 β b_1 β b_2 β words β groups β
β --- β --- β --- β --- β --- β --- β --- β --- β
β f64 β f64 β f64 β f64 β f64 β f64 β str β str β
βββββββͺββββββͺββββββͺβββββββββββͺβββββββββββͺβββββββββββͺββββββββββββββββββͺβββββββββ‘
β 0.0 β 1.0 β 2.0 β 0.653892 β 0.234362 β 0.880558 β OIww β W β
β 0.5 β 1.5 β 2.5 β 0.408888 β 0.213767 β 1.833025 β KkeB β Z β
β 1.0 β 2.0 β 3.0 β 0.423949 β 0.646378 β 0.116173 β NLOAgRxAtjWOHuQ β O β
βββββββ΄ββββββ΄ββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄ββββββββββββββββββ΄βββββββββ
</code></pre>
<p>I want again to concatenate the columns <code>a_0</code>, <code>a_1</code>,... into a column <code>a</code>, columns <code>b_0</code>, <code>b_1</code>,... into a column <code>b</code>. However, unlike the preceding question, this time <code>a = [a_0; a_1; ...]</code>. I.e., all the elements of <code>a_0</code> go first, followed by all the elements of <code>a_1</code>, etc. All the columns whose name doesn't end with a <code>_</code> followed by a digit (in this example, <code>words</code> and <code>groups</code>) must be duplicated enough times to match the length of <code>a</code>. Let <code>nr</code> and <code>nc</code> be the number of rows/columns in <code>df</code>. Then the output dataframe must have <code>m*nr</code> rows (<code>m=3</code> in this case) and <code>nc-2*(m-1)</code> columns, i.e.</p>
<pre><code>shape: (9, 4)
βββββββββββββββββββ¬βββββββββ¬ββββββ¬βββββββββββ
β words β groups β a β b β
β --- β --- β --- β --- β
β str β str β f64 β f64 β
βββββββββββββββββββͺβββββββββͺββββββͺβββββββββββ‘
β OIww β W β 0.0 β 0.653892 β
β KkeB β Z β 0.5 β 0.408888 β
β NLOAgRxAtjWOHuQ β O β 1.0 β 0.423949 β
β OIww β W β 1.0 β 0.234362 β
β KkeB β Z β 1.5 β 0.213767 β
β NLOAgRxAtjWOHuQ β O β 2.0 β 0.646378 β
β OIww β W β 2.0 β 0.880558 β
β KkeB β Z β 2.5 β 1.833025 β
β NLOAgRxAtjWOHuQ β O β 3.0 β 0.116173 β
βββββββββββββββββββ΄βββββββββ΄ββββββ΄βββββββββββ
</code></pre>
<p>How can I do that?</p>
|
<python><concatenation><python-polars>
|
2024-02-12 18:36:51
| 2
| 5,726
|
DeltaIV
|
77,983,552
| 2,384,982
|
How to slice 2D array from 3D array using a 2D array that represents the indices for the first dimension?
|
<p>Create a sample 3D array with shape (10, 33, 66)</p>
<p><code>data_3d = np.random.random((10, 33, 66))</code></p>
<p>Create a 2D index array with shape (33, 66) representing indices for the first dimension</p>
<p><code>index_2d = np.random.randint(0, 10, size=(33, 66))</code></p>
<p>How to slice a 2D array with shape (33,66) from the 3D array based on the index array?</p>
|
<python><arrays><numpy>
|
2024-02-12 18:25:25
| 1
| 371
|
Yu Guo
|
77,983,466
| 6,498,753
|
scipy lognorm does not converge to params
|
<p>I have manually fitted a lognormal distribution to my data:</p>
<pre><code>from scipy.stats import lognorm
sigma = 0.15
mu = 2
x_fit = np.linspace(x.min(), x.max(), 100)
y_fit = lognorm.pdf(x_fit, sigma, scale=np.exp(mu))
fig, ax = plt.subplots()
plt.plot(x_fit, y_fit, color='red')
</code></pre>
<p><a href="https://i.sstatic.net/uT8CM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uT8CM.png" alt="Histo" /></a></p>
<p>Problem is that scipy does not converge to sigma = 0.15, but it only converges to mu ~ 2. Plus, I am getting some inf as well...
Note: This is occurring even if I use [2, 0.15] as starting guess:</p>
<pre><code>from scipy.optimize import curve_fit
y, x = np.histogram(z_data, bins=100)
x = (x[1:]+x[:-1])/2 #Β bin centers
def fit_lognormal(x, mu, sigma):
return lognorm.pdf(x, sigma, scale=np.exp(mu))
p0 = [2, 0.15]
params = curve_fit(fit_lognormal, x, y, p0=p0)
params
</code></pre>
<p>This returns:</p>
<pre><code>(array([1.97713319e+00, 6.98238900e-05]),
array([[inf, inf],
[inf, inf]]))
</code></pre>
|
<python><scipy><curve-fitting><distribution><scipy-optimize>
|
2024-02-12 18:10:17
| 1
| 461
|
Roland
|
77,983,362
| 9,582,542
|
Unable to use scrapy to login into Instagram
|
<p>I am trying to use scrapy to log into Instagram to scan and read comments but I cant seem to be able to login looks likr the request gets blocked by a bot. Is there a better way to attempt the scrapy login?</p>
<pre><code>url = 'https://www.instagram.com/accounts/login'
request = scrapy.Request(url,method='GET')
fetch(request)
2024-02-12 12:39:37 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://www.instagram.com/accounts/login>
</code></pre>
<p>Yes I know I dont have user name and password declared here but I cant even get to the page source.</p>
|
<python><scrapy>
|
2024-02-12 17:47:37
| 0
| 690
|
Leo Torres
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.