QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,210,615 | 4,409,163 | How to reproduce visual division of pages with Gtk.TextView | <p>I'm working on a text editing application, initially aimed at serving literary writers. I'm using Gtk4, Python and Flatpak and I'm having some difficulties implementing things the way I want. So I have a few questions:</p>
<p>1 - I know that it is possible to display the same <strong>Gtk.TextBuffer</strong> in several <strong>Gtk.TextViews</strong>, would it be possible to link a <strong>Gtk.TextView</strong> to a subset of the content of a TextBuffer?</p>
<p>2 - Would it be possible to disable the built-in vertical Scrollbar of a <strong>Gtk.TextView</strong> and give it a fixed size? Something like an editable <strong>Gtk.Inscription</strong>?</p>
<p>What I'm trying to reproduce is the visual division of pages. Creating each page as a <strong>Gtk.TextView</strong> with its own <strong>Gtk.TextBuffer</strong> seems to make it difficult to maintain text flow control between one page and another.</p>
| <python><gtk><pygobject><gtk4> | 2023-10-01 13:16:03 | 0 | 544 | Kripto |
77,210,527 | 12,462,568 | Do Langchain’s `create_csv_agent` and `create_pandas_dataframe_agent` functions work with non-OpenAI LLMs | <p>Do Langchain's <code>create_csv_agent</code> and <code>create_pandas_dataframe_agent</code> functions work with non-OpenAI LLM models too, like Llama
2 and Vicuna? The only example I have seen in the documentation (in the links below) are only using OpenAI API.</p>
<p><code>create_csv_agent</code>:
<a href="https://python.langchain.com/docs/integrations/toolkits/csv" rel="noreferrer">https://python.langchain.com/docs/integrations/toolkits/csv</a></p>
<p><code>create_pandas_dataframe_agent</code>:
<a href="https://python.langchain.com/docs/integrations/toolkits/pandas" rel="noreferrer">https://python.langchain.com/docs/integrations/toolkits/pandas</a></p>
<p>Would really appreciate ANY input on this. Many thanks!</p>
| <python><openai-api><langchain><large-language-model><llama> | 2023-10-01 12:45:47 | 1 | 2,190 | Leockl |
77,210,245 | 10,499,751 | Pandas get the combined sum of each row and column | <p>I'm trying to add the "vertical" and "horizontal" sum for each cell.</p>
<p><a href="https://i.sstatic.net/OC9VS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OC9VS.png" alt="enter image description here" /></a></p>
<p>So far I've tried to get the sum of each row and column. But like this I'd have to iterate over every value and add them, which seems quite inefficient.</p>
<p>You could combine <code>sum()</code> and <code>sum(1)</code> to create a new dataframe but then the cell in question is counted twice.</p>
<p>Used <code>df</code> :</p>
<pre><code>df = pd.DataFrame([[1,3], [8,4]], columns=list("ab"))
</code></pre>
<p>Is there any simple operation to achieve the wanted result?</p>
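<p>One sketch that avoids double-counting: broadcast the column sums and row sums onto every cell and subtract the cell itself once, using the <code>df</code> above:</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 3], [8, 4]], columns=list("ab"))

# For each cell: column sum + row sum - the cell itself,
# so the cell is counted once instead of twice.
result = df.mul(-1).add(df.sum(axis=0), axis=1).add(df.sum(axis=1), axis=0)
print(result.values.tolist())  # [[12, 8], [13, 15]]
```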
| <python><python-3.x><pandas><dataframe> | 2023-10-01 11:18:26 | 1 | 646 | Lukas Neumann |
77,210,197 | 3,312,274 | flask error with sqlalchemy when doing rollback | <p>My error handler for 500 is throwing an error at <code>db.session.rollback</code>.</p>
<pre><code>@bp.app_errorhandler(500)
def internal_error(error):
    db.session.rollback()
    return render_template('errors/500.html'), 500
</code></pre>
<p>I tried to catch the error with try..except as follows:</p>
<pre><code>try:
    db.session.rollback()
except sqlalchemy.exc.InvalidRequestError:
    pass
finally:
    return render_template('errors/500.html'), 500
</code></pre>
<p>The error still persists with the result of not rendering the <code>500.html</code> template. Error is:</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: This session is in 'committed' state; no further SQL can be emitted within this transaction.
</code></pre>
<p>Is there a way to check if <code>db.session</code> is in a state where rollback is possible?</p>
<p><strong>EDIT:</strong></p>
<p>While it's a valid question, further debugging revealed that the error originates from an earlier <code>commit</code> in the code, rather than the <code>rollback</code>.</p>
| <python><flask><error-handling><sqlalchemy> | 2023-10-01 11:02:42 | 1 | 565 | JeffP |
77,210,131 | 4,430,968 | Can I force pip to install dependencies for x86_64 architecture in powershell to make Lambda layer? | <p>I'm trying to create a lambda layer that includes pydantic/pydantic_core (Lambda python 3.11, x86_64) but getting the following error: <code>"Unable to import module 'reddit_lambda': No module named 'pydantic_core._pydantic_core'"</code></p>
<p>For context, I'm installing this on a windows x64 machine.</p>
<p>From reading up about it <a href="https://github.com/pydantic/pydantic/discussions/6935" rel="noreferrer">here</a>, <a href="https://stackoverflow.com/questions/76650856/no-module-named-pydantic-core-pydantic-core-in-aws-lambda-though-library-is-i">here</a> and <a href="https://github.com/pydantic/pydantic/issues/6557" rel="noreferrer">here</a>, it seems that pip is installing pydantic for an incompatible architecture (win_amd64).</p>
<p>Install log:</p>
<pre><code>Using cached pydantic-2.4.2-py3-none-any.whl (395 kB)
Using cached pydantic_core-2.10.1-cp311-none-win_amd64.whl (2.0 MB)
</code></pre>
<p>My question:
Is there a way to force pip to install packages into a directory for a specific architecture in powershell?</p>
<p>I saw an old discussion <a href="https://unix.stackexchange.com/questions/82089/pip-install-cpu-you-selected-does-not-support-x86-64-instruction-set">here</a>, that this is possible for Linux but not sure how to do it on powershell.</p>
<p>FYI: I also tried adding pydantic_core from binary (tried <code>pydantic_core-2.10.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</code> and <code>pydantic_core-2.10.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</code>), but I'm a bit confused about which version to choose.</p>
| <python><powershell><aws-lambda><pydantic> | 2023-10-01 10:43:38 | 1 | 528 | Davis |
77,209,783 | 3,692,272 | Create single parent figure where the Y-axis represents depth | <p>The dataset below represents amplitudes recorded at varying depths across time.</p>
<pre><code>data = {
    'Time': [0, 2.5, 5, 7.5, 10],
    'Depth_63.5': [161, 143, 134, 147, 163],
    'Depth_64.5': [183, 190, 375, 255, 241],
    'Depth_65.5': [711, 727, 914, 756, 747],
}
</code></pre>
<p>I can plot the graph for time vs amplitudes as seen below.</p>
<p><a href="https://i.sstatic.net/tc7sJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tc7sJ.png" alt="enter image description here" /></a></p>
<p>However, what I wanted is the graph I have be plotted on a parent figure with Time on X-axis and Depth on Y-axis as seen below.
<a href="https://i.sstatic.net/EpcuC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EpcuC.png" alt="enter image description here" /></a></p>
<p>How do I achieve this result? My current code is seen below...</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd

# Sample data..
data = {
    'Time': [0, 2.5, 5, 7.5, 10],
    'Depth_63.5': [161, 143, 134, 147, 163],
    'Depth_64.5': [183, 190, 375, 255, 241],
    'Depth_65.5': [711, 727, 914, 756, 747],
}

# Convert data to a DataFrame
df = pd.DataFrame(data)

# Extract depth levels from column names
depth_levels = [col.split('_')[1] for col in df.columns if 'Depth_' in col]

# Create the parent figure with subplots
fig, ax = plt.subplots(figsize=(10, 6))

# Loop through each depth level and plot Time vs Amplitude on the same axes
for depth_level in depth_levels:
    ax.plot(df['Time'], df[f'Depth_{depth_level}'], label=f'Depth {depth_level}')

# Customize the plot appearance
ax.set_title('Depth vs Time vs Amplitude')
ax.set_xlabel('Time (us)')
ax.set_ylabel('Depth Levels')
ax.legend()
ax.grid(True)

# Show the parent figure
plt.show()
</code></pre>
<p>I think the solution would be something close to multiple Y-axes, where the secondary Y-axis depends on the primary one.</p>
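<p>One way to get Depth on the Y-axis is a wiggle-style plot: draw each trace at its own depth level and add the amplitude as a small offset around it. A sketch using the data above (the <code>scale</code> factor is an arbitrary choice for illustration, not from the question):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import pandas as pd

data = {
    'Time': [0, 2.5, 5, 7.5, 10],
    'Depth_63.5': [161, 143, 134, 147, 163],
    'Depth_64.5': [183, 190, 375, 255, 241],
    'Depth_65.5': [711, 727, 914, 756, 747],
}
df = pd.DataFrame(data)

fig, ax = plt.subplots(figsize=(10, 6))
scale = 0.001  # hypothetical amplitude-to-depth scaling for illustration
for col in df.columns:
    if not col.startswith('Depth_'):
        continue
    depth = float(col.split('_')[1])
    # Each amplitude trace is drawn as a small wiggle around its depth level.
    ax.plot(df['Time'], depth + scale * df[col], label=f'Depth {depth}')

ax.set_xlabel('Time (us)')
ax.set_ylabel('Depth')
ax.invert_yaxis()  # depth conventionally increases downward
ax.legend()
```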
| <python><matplotlib><time-series><axis> | 2023-10-01 08:40:43 | 2 | 992 | Umar Yusuf |
77,209,768 | 209,834 | How to merge pandas columns in a comma separated way | <p>I have a pandas dataframe like below</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Id</th>
<th style="text-align: left;">Other columns</th>
<th style="text-align: left;">Param1</th>
<th style="text-align: left;">Param2</th>
<th style="text-align: left;">Param3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">29</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">--</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">29</td>
<td style="text-align: left;">35</td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">29</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">74</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;">11</td>
<td style="text-align: left;">34</td>
</tr>
</tbody>
</table>
</div>
<p>I want to merge the last three columns using comma as a separator to make the dataframe look like below</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Id</th>
<th style="text-align: left;">Other columns</th>
<th style="text-align: left;">Params</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">Param1=29</td>
</tr>
<tr>
<td style="text-align: left;">--</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">Param1=29,Param2=35</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">Param1=29,Param3=74</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">Param2=11,Param3=34</td>
</tr>
</tbody>
</table>
</div>
<p>How do I achieve this using the <code>merge</code> function of the pandas dataframe?</p>
<h2>Edit1</h2>
<p>I used <code>Param1</code>, <code>Param2</code> and <code>Param3</code> as examples. The actual column names are different, e.g. <code>Period</code>, <code>Length</code>, etc. Therefore any solution that relies on finding columns via a regex or a LIKE-style match will not work.</p>
<p>I am able to create another hardcoded dataframe that holds the names of the parameters, if that can be of any use.</p>
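<p><code>merge</code> joins whole dataframes rather than combining columns; a row-wise string build is more direct. A sketch, where <code>param_cols</code> stands in for your hardcoded list of parameter column names:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Id": ["1", "--", "2"],
    "Param1": [29, None, 29],
    "Param2": [None, None, 35],
})
param_cols = ["Param1", "Param2"]  # hypothetical: supply your real column names

# Build "Name=value" pairs per row, skipping empty cells, joined by commas.
df["Params"] = df[param_cols].apply(
    lambda row: ",".join(f"{name}={int(val)}"
                         for name, val in row.items() if pd.notna(val)),
    axis=1,
)
df = df.drop(columns=param_cols)
print(df["Params"].tolist())  # ['Param1=29', '', 'Param1=29,Param2=35']
```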
| <python><pandas><dataframe> | 2023-10-01 08:35:14 | 3 | 8,498 | Suhas |
77,209,550 | 4,281,353 | Why does dividing columns by another column yield NaN? | <p>There is a Pandas dataframe <code>df</code>.</p>
<pre><code>df.info()
---
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3 entries, 1 to 3
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 3 non-null int64
1 1 3 non-null int64
2 2 3 non-null int64
dtypes: int64(3)
memory usage: 96.0 bytes
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">Survived</th>
<th style="text-align: right;">0</th>
<th style="text-align: right;">1</th>
<th style="text-align: right;">2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">Pclass</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">80</td>
<td style="text-align: right;">136</td>
<td style="text-align: right;">216</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">97</td>
<td style="text-align: right;">87</td>
<td style="text-align: right;">184</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">372</td>
<td style="text-align: right;">119</td>
<td style="text-align: right;">491</td>
</tr>
</tbody>
</table>
</div>
<p>Dividing the 1st and 2nd columns by the 3rd column causes NaN. Why is that?</p>
<pre><code>df[[0, 1]].div(df[[2]], axis=0)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">Survived</th>
<th style="text-align: right;">0</th>
<th style="text-align: right;">1</th>
<th style="text-align: right;">2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">Pclass</td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
</tr>
</tbody>
</table>
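<p>The all-NaN result is an alignment effect: <code>df[[2]]</code> is a one-column <em>DataFrame</em>, and DataFrame-by-DataFrame division aligns on column labels — columns <code>{0, 1}</code> and <code>{2}</code> share nothing, so every cell becomes NaN. Selecting the divisor as a <em>Series</em> with <code>df[2]</code> broadcasts down the rows instead. A sketch with the counts above:</p>

```python
import pandas as pd

df = pd.DataFrame({0: [80, 97, 372], 1: [136, 87, 119], 2: [216, 184, 491]},
                  index=[1, 2, 3])

# DataFrame / DataFrame: aligns on column labels -> all NaN.
nan_result = df[[0, 1]].div(df[[2]], axis=0)
# DataFrame / Series with axis=0: broadcasts row-wise as intended.
ok_result = df[[0, 1]].div(df[2], axis=0)
print(nan_result.isna().all().all())  # True
```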
</div> | <python><pandas><dataframe> | 2023-10-01 07:13:29 | 1 | 22,964 | mon |
77,209,527 | 2,696,565 | Plotly box plot: remove transparency | <p>Boxplots in plotly by default are semi-transparent, how can I turn this off please. This has been asked on <a href="https://github.com/plotly/plotly.py/issues/3404" rel="nofollow noreferrer">plotly forums</a> before, but I'm not quite sure how to implement the suggestion. Below is a simple box plot from plotly doc, if someone can please help me remove any transparency from this.</p>
<pre><code>import plotly.graph_objects as go
import numpy as np
np.random.seed(1)
y0 = np.random.randn(50) - 1
y1 = np.random.randn(50) + 1
fig = go.Figure()
fig.add_trace(go.Box(y=y0))
fig.add_trace(go.Box(y=y1))
fig.show()
</code></pre>
| <python><plotly> | 2023-10-01 07:00:26 | 1 | 629 | user2696565 |
77,209,292 | 4,838,216 | Why do these folders get created when I create a virtual env for python instead of just a single venv folder? | <p>Did something change with using <code>venv</code>? I tried to use this command to create a virtual env in <code>C:\Users\USER\Desktop\FOLDER1\FOLDER2\venv</code>, but instead it created two folders - <code>v2</code> and <code>Windows</code> in <code>C:\Users\USER\Desktop\FOLDER1\FOLDER2</code>. Why did it do this? Which one of these folders do I use? And why didn't it just put everything into a single <code>venv</code> folder like I wanted?</p>
<pre><code> python -m venv C:\Users\USER\Desktop\FOLDER1\FOLDER2\venv
</code></pre>
<p><a href="https://i.sstatic.net/9r22a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9r22a.png" alt="enter image description here" /></a></p>
| <python><python-3.x><windows><windows-10><python-venv> | 2023-10-01 04:53:39 | 0 | 2,326 | whatwhatwhat |
77,209,153 | 4,235,960 | How to dynamically create dataframes with a for loop | <p>My code currently looks like this:</p>
<pre><code>df_1 = portfolio_all[0].rename(columns={'Close': 'Close_1'} )
df_2 = portfolio_all[1].rename(columns={'Close': 'Close_2'} )
df_3 = portfolio_all[2].rename(columns={'Close': 'Close_3'} )
df_4 = portfolio_all[3].rename(columns={'Close': 'Close_4'} )
df_5 = portfolio_all[4].rename(columns={'Close': 'Close_5'} )
df_1['daily_return_1'] = df_1['Close_1'].pct_change(1)
df_2['daily_return_2'] = df_2['Close_2'].pct_change(1)
df_3['daily_return_3'] = df_3['Close_3'].pct_change(1)
df_4['daily_return_4'] = df_4['Close_4'].pct_change(1)
df_5['daily_return_5'] = df_5['Close_5'].pct_change(1)
df_1['perc_ret_1'] = (1 + df_1.daily_return_1).cumprod() - 1
df_2['perc_ret_2'] = (1 + df_2.daily_return_2).cumprod() - 1
df_3['perc_ret_3'] = (1 + df_3.daily_return_3).cumprod() - 1
df_4['perc_ret_4'] = (1 + df_4.daily_return_4).cumprod() - 1
df_5['perc_ret_5'] = (1 + df_5.daily_return_5).cumprod() - 1
</code></pre>
<p>Is there a way to dynamically create these dataframes in a for loop or something like that without having to write each dataframe as a line of code?</p>
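<p>A dict (or list) keyed by index avoids the numbered variables entirely — a sketch, with a small stand-in for <code>portfolio_all</code>:</p>

```python
import pandas as pd

# Hypothetical stand-in for portfolio_all: a list of DataFrames with a Close column.
portfolio_all = [pd.DataFrame({"Close": [10.0, 11.0, 12.1]}) for _ in range(5)]

dfs = {}
for i, raw in enumerate(portfolio_all, start=1):
    df = raw.rename(columns={"Close": f"Close_{i}"})
    df[f"daily_return_{i}"] = df[f"Close_{i}"].pct_change(1)
    df[f"perc_ret_{i}"] = (1 + df[f"daily_return_{i}"]).cumprod() - 1
    dfs[i] = df  # access as dfs[1] ... dfs[5] instead of df_1 ... df_5
```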
| <python><pandas><dataframe> | 2023-10-01 03:46:03 | 2 | 3,315 | adrCoder |
77,209,084 | 13,000,229 | Flask: How can I use `app` variable in the entire project? | <p>I would like to know how to use the <code>app</code> variable in any file in my project.</p>
<p>Tutorial: <a href="https://flask.palletsprojects.com/en/2.3.x/tutorial/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.3.x/tutorial/</a><br />
Code to create a variable <code>app</code>: <a href="https://github.com/pallets/flask/blob/2.3.3/examples/tutorial/flaskr/__init__.py" rel="nofollow noreferrer">https://github.com/pallets/flask/blob/2.3.3/examples/tutorial/flaskr/<strong>init</strong>.py</a></p>
<p>I tried this tutorial above. In this tutorial, <code>app = Flask(__name__)</code> is written in <code>__init__.py</code></p>
<pre><code>def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    #### DO SOMETHING
    return app
</code></pre>
<p>The problem is <code>__init__.py</code> does not expose <code>app</code> variable, so I cannot use <code>app</code> (such as <code>@app.route</code> and <code>app.logger</code>) in other files.</p>
<p>What is the right way to make <code>app</code> available in other files?</p>
<h4>Environment</h4>
<ul>
<li>Python 3.11.4</li>
<li>Flask 2.3.3</li>
</ul>
| <python><flask> | 2023-10-01 03:00:21 | 1 | 1,883 | dmjy |
77,209,020 | 21,305,238 | Using @singledispatchmethod on __eq__(): Signature of "__eq__" incompatible with supertype "object" | <p>Here's a minimal reproducible example of what I'm trying to do (<a href="https://mypy-play.net/?mypy=latest&python=3.11&flags=strict&gist=a620d21948f7f7c3c22d204619fd084c" rel="nofollow noreferrer">mypy playground</a>):</p>
<pre class="lang-py prettyprint-override"><code>from functools import singledispatchmethod

class C:
    value: int

    def __init__(self, value: int) -> None:
        self.value = value

    @singledispatchmethod
    def __eq__(self, other: object) -> bool:
        return NotImplemented

    @__eq__.register
    def _(self, other: C) -> bool:
        return self.value == other.value

    @__eq__.register
    def _(self, other: D) -> bool:
        return self.value == other.foo / 2
</code></pre>
<p><code>D</code> is actually irrelevant to the problem, but here it is if you need it:</p>
<pre class="lang-py prettyprint-override"><code>class D:
    value: int

    def __init__(self, value: int) -> None:
        self.value = value

    @property
    def foo(self) -> int:
        return 42 * self.value
</code></pre>
<p>mypy gives me the following error:</p>
<pre class="lang-none prettyprint-override"><code>main.py:12: error: Signature of "__eq__" incompatible with supertype "object" [override]
main.py:12: note: Superclass:
main.py:12: note: def __eq__(self, object, /) -> bool
main.py:12: note: Subclass:
main.py:12: note: singledispatchmethod[bool]
</code></pre>
<p>...which makes no sense at all. What can I do, other than using a <code>type: ignore</code> comment? I have read <a href="https://stackoverflow.com/q/68412432">this question</a>, but it is not about type hinting, and the only answer there <a href="https://mypy-play.net/?mypy=latest&python=3.11&flags=strict&gist=cb623ab64e39dc43849bcde78e2b2d5b" rel="nofollow noreferrer">leads to the same error</a> (assuming I did not do anything incorrectly).</p>
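<p>One workaround (not from the question) is to drop <code>singledispatchmethod</code> for this method and dispatch manually with <code>isinstance</code>, so <code>__eq__</code> keeps the exact <code>(self, object) -&gt; bool</code> signature the supertype expects — a runtime sketch:</p>

```python
class D:
    def __init__(self, value: int) -> None:
        self.value = value

    @property
    def foo(self) -> int:
        return 42 * self.value


class C:
    def __init__(self, value: int) -> None:
        self.value = value

    def __eq__(self, other: object) -> bool:
        # Manual dispatch preserves object.__eq__'s signature.
        if isinstance(other, C):
            return self.value == other.value
        if isinstance(other, D):
            return self.value == other.foo / 2
        return NotImplemented
```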
| <python><mypy><python-typing> | 2023-10-01 02:18:46 | 2 | 12,143 | InSync |
77,208,996 | 5,986,907 | How can I profile mypy? | <p>Mypy is taking an unreasonably long time to type check my small project. How can I profile it to find out where it's spending all its time? I tried <code>mypy -v</code>, which gives a bit more of an idea, but there aren't timestamps on the logs so it's not practical.</p>
| <python><profiling><mypy> | 2023-10-01 02:05:00 | 1 | 8,082 | joel |
77,208,975 | 4,235,960 | Merge 4 or more Dataframes | <p>I've read the following thread: <a href="https://stackoverflow.com/questions/44327999/how-to-merge-multiple-dataframes">How to merge multiple dataframes</a></p>
<pre><code>data_frames = [df1, df2, df3]
df_merged = reduce(lambda left, right: pd.merge(left, right, on=['DATE'], how='outer'),
                   data_frames)
</code></pre>
<p>But it only works for up to 3 dataframes. When I want to merge 4 or more, it doesn't work. How can I merge 4 or more dataframes in Python?</p>
<p>The problem seems to be that I have a column named "Closed" in all dataframes.</p>
<p>Here is the error I get</p>
<pre><code>/var/folders/51/_yntlxy96y7_vwm7t537w8700000gn/T/ipykernel_1021/437066834.py:18: FutureWarning: Passing 'suffixes' which cause duplicate columns {'High_x', 'Close_x', 'Open_x', 'Volume_x', 'Adj Close_x', 'Low_x'} in the result is deprecated and will raise a MergeError in a future version. df_merged = reduce(lambda left, right: pd.merge(left, right, on=['Date'], how='outer'), data_frames)
</code></pre>
<p>Here is the head of 4 dataframes used as input:</p>
<pre><code>print(portfolio_all[0].head)
print(portfolio_all[1].head)
print(portfolio_all[2].head)
print(portfolio_all[3].head)
<bound method NDFrame.head of Date Open High ... Close Adj Close Volume
0 2020-07-02 50.060001 70.800003 ... 69.410004 69.410004 18371900
1 2020-07-06 73.393997 96.510002 ... 81.190002 81.190002 13467900
2 2020-07-07 83.800003 89.379997 ... 78.790001 78.790001 4602800
3 2020-07-08 79.000000 79.389999 ... 68.510002 68.510002 3499200
4 2020-07-09 73.970001 79.910004 ... 77.010002 77.010002 4178700
.. ... ... ... ... ... ... ...
812 2023-09-25 11.820000 11.920000 ... 11.690000 11.690000 1568300
813 2023-09-26 11.550000 11.765000 ... 11.380000 11.380000 1489800
814 2023-09-27 11.500000 11.620000 ... 11.530000 11.530000 1635300
815 2023-09-28 11.590000 11.830000 ... 11.690000 11.690000 1452800
816 2023-09-29 11.890000 12.080000 ... 11.620000 11.620000 1563800
[817 rows x 7 columns]>
<bound method NDFrame.head of Date Open High ... Close Adj Close Volume
0 2020-08-27 23.100000 25.000000 ... 21.219999 21.219999 82219700
1 2020-08-28 23.980000 24.400000 ... 22.790001 22.790001 44847300
2 2020-08-31 22.690001 22.790001 ... 20.500000 20.500000 20816000
3 2020-09-01 20.980000 21.790001 ... 21.610001 21.610001 15291400
4 2020-09-02 21.990000 22.000000 ... 21.090000 21.090000 9090100
.. ... ... ... ... ... ... ...
773 2023-09-25 16.530001 16.790001 ... 16.690001 16.690001 16998300
774 2023-09-26 16.320000 16.809999 ... 16.219999 16.219999 6960300
775 2023-09-27 16.360001 16.690001 ... 16.690001 16.690001 8127900
776 2023-09-28 16.490000 17.420000 ... 17.219999 17.219999 11415300
777 2023-09-29 18.020000 18.459999 ... 18.360001 18.360001 11957500
[778 rows x 7 columns]>
<bound method NDFrame.head of Date Open High ... Close Adj Close Volume
0 2005-08-05 6.600000 15.121000 ... 12.254000 12.254000 226811000
1 2005-08-08 13.775000 15.398000 ... 11.550000 11.550000 154889000
2 2005-08-09 12.050000 12.530000 ... 9.610000 9.610000 86677000
3 2005-08-10 10.100000 10.350000 ... 9.175000 9.175000 49638000
4 2005-08-11 9.120000 10.050000 ... 9.790000 9.790000 73248000
... ... ... ... ... ... ...
4564 2023-09-25 130.009995 132.350006 ... 132.050003 132.050003 849500
4565 2023-09-26 132.059998 132.600006 ... 131.009995 131.009995 1062400
4566 2023-09-27 131.130005 131.740005 ... 131.529999 131.529999 1222000
4567 2023-09-28 131.389999 132.850006 ... 132.449997 132.449997 1074000
4568 2023-09-29 135.860001 136.529999 ... 134.350006 134.350006 1741400
[4569 rows x 7 columns]>
<bound method NDFrame.head of Date Open High ... Close Adj Close Volume
0 2015-04-16 31.000000 35.740002 ... 30.000000 30.000000 19763300
1 2015-04-17 29.770000 30.299999 ... 27.580000 27.580000 3965500
2 2015-04-20 28.770000 28.900000 ... 24.900000 24.900000 3076200
3 2015-04-21 24.969999 26.040001 ... 25.750000 25.750000 2184700
4 2015-04-22 26.000000 26.240000 ... 25.120001 25.120001 1442500
... ... ... ... ... ... ...
2125 2023-09-25 63.099998 65.250000 ... 64.720001 64.720001 2738300
2126 2023-09-26 64.290001 64.519997 ... 61.869999 61.869999 3961200
2127 2023-09-27 61.770000 63.160000 ... 61.889999 61.889999 3171400
2128 2023-09-28 61.299999 63.959999 ... 63.750000 63.750000 3666800
2129 2023-09-29 64.790001 65.870003 ... 64.580002 64.580002 3269900
[2130 rows x 7 columns]>
</code></pre>
| <python><dataframe><merge> | 2023-10-01 01:54:29 | 1 | 3,315 | adrCoder |
77,208,951 | 19,130,803 | Detect page change in dash | <p>I am developing a <code>dash</code> web application which is a multi-page app. Below is my structure</p>
<pre><code>- project/
  - pages/
    - home.py
    - contact.py
    - graph.py
  - app.py
  - index.py
</code></pre>
<p>In <code>app.py</code>, I have a component <code>dcc.Location(id="url", refresh=True),</code> to load different pages into a <code>html.Div(id="page-content")</code> using a callback. The pages load successfully as the user clicks on the sidebar <code>navlinks</code> for each page.</p>
<p>Say I am on the <code>home</code> page (localhost/home) and I now click to go to the <code>contact</code> page (localhost/contact). It works. Here, before leaving the <code>home</code> page, I want to save the data in a <code>dcc.Store</code> component. One way is to have a <code>hidden</code> button and activate this button's callback through JavaScript. For that, in the <code>assets</code> folder I have created a <code>custom-script.js</code></p>
<pre><code>document.addEventListener('visibilitychange', e => {
    if (document.visibilityState === 'visible') {
        console.log("user is focused on the page")
    } else {
        // code to trigger button click event to activate callback
        console.log("user left the page")
    }
});
</code></pre>
</code></pre>
<p>But what I am noticing is that even though I am switching from the <code>home</code> page to the <code>contact</code> page, <strong>the event is not firing</strong>. Maybe an update of the page content by a callback does not count as a refresh (even with refresh set to True). But if I do a hard refresh via the browser refresh button on the <code>home</code> page, <strong>the event is fired</strong>.</p>
<p>Is there any other way to detect a page change using dash or javascript?</p>
| <javascript><python><plotly-dash> | 2023-10-01 01:32:18 | 1 | 962 | winter |
77,208,902 | 5,032,387 | Invalid data binding expression when running AzureML pipeline | <p>I'm running an AzureML pipeline using the command line where the sole job (for now) is a sweep.</p>
<p>When I run
<code>run_id=$(az ml job create -f path_to_pipeline/pipeline.yaml --query name -o tsv -g grp_name -w ws-name)</code>,
I get the following error:</p>
<pre><code>ERROR: Met error <class 'Exception'>:{
    "result": "Failed",
    "errors": [
        {
            "message": "Invalid data binding expression: inputs.data, outputs.model_output, search_space.batch_size, search_space.learning_rate",
            "path": "command",
            "value": "python train.py --data_path ${{inputs.data}} --output_path ${{outputs.model_output}} --batch_size ${{search_space.batch_size}} --learning_rate ${{search_space.learning_rate}}"
        }
    ]
}
</code></pre>
<p>The pipeline yaml looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: pipeline_with_hyperparameter_sweep
description: Tune hyperparameters

settings:
  default_compute: azureml:compute-name # sub with your compute name

jobs:
  sweep_step:
    type: sweep
    inputs:
      data:
        type: uri_file
        path: azureml:code_train_data:1 # data store I created
    outputs:
      model_output:
    sampling_algorithm: random
    search_space:
      batch_size:
        type: choice
        values: [1, 5, 10, 15]
      learning_rate:
        type: loguniform
        min_value: -6.90775527898 # ln(0.001)
        max_value: -2.30258509299 # ln(0.1)
    trial:
      code: ../src
      command: >-
        python train.py
        --data_path ${{inputs.data}}
        --output_path ${{outputs.model_output}}
        --batch_size ${{search_space.batch_size}}
        --learning_rate ${{search_space.learning_rate}}
      environment: azureml:env_finetune_component:1
    objective:
      goal: maximize
      primary_metric: bleu_score
    limits:
      max_total_trials: 5
      max_concurrent_trials: 3
      timeout: 3600
      trial_timeout: 720
</code></pre>
<p>For the <code>train.py</code> file, note that I of course have a lot of actual code in the main function, but I commented it out with <code>pass</code> to check if it makes a difference, and the error is the same. So the problem is upstream with the bindings, not what's inside of train.</p>
<pre class="lang-py prettyprint-override"><code>import argparse

def main(args):
    pass

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_path")
    parser.add_argument("--output_path")
    parser.add_argument("--batch_size", type=int)
    parser.add_argument("--learning_rate", type=float)
    args = parser.parse_args()
    return args

if __name__ == "__main__":
    args = parse_args()
    main(args)
</code></pre>
</code></pre>
<p>If helpful, here's output when I run <code>az version</code>:</p>
<pre><code>{
  "azure-cli": "2.53.0",
  "azure-cli-core": "2.53.0",
  "azure-cli-telemetry": "1.1.0",
  "extensions": {
    "ml": "2.20.0"
  }
}
</code></pre>
| <python><azure-ml-pipelines> | 2023-10-01 01:00:55 | 2 | 3,080 | matsuo_basho |
77,208,687 | 4,324,307 | Why does `for x in arr` seem to evaluate multiple times? | <p><a href="https://leetcode.com/problems/permutations/" rel="nofollow noreferrer">https://leetcode.com/problems/permutations/</a></p>
<pre><code>class Solution:
    def permute(self, nums: List[int]) -> List[List[int]]:
        result = []
        set_num = set(nums)
        self.loop = 0

        def dfs(curr):
            self.loop += 1
            print(self.loop, curr, set_num)
            if not set_num:
                result.append(curr.copy())
                self.loop -= 1
                return
            set_num2 = set_num.copy()  # <<-- without the copy, wrong solution
            for num in set_num2:
                print(self.loop, 'picking', num)
                set_num.remove(num)
                curr.append(num)
                dfs(curr)
                print(self.loop, 'Adding', num)
                curr.pop()
                set_num.add(num)
            print(self.loop, 'recur end')
            self.loop -= 1

        dfs([])
        return result
</code></pre>
<p>if <code>nums = [0,1]</code> this is the print statement without the copy:</p>
<pre><code>1 [] {0, 1}
1 picking 0
2 [0] {1}
2 picking 1
3 [0, 1] set()
2 Adding 1
2 recur end
1 Adding 0
1 picking 1
2 [1] {0}
2 picking 0
3 [1, 0] set()
2 Adding 0
2 picking 0
3 [1, 0] set()
2 Adding 0
2 recur end
1 Adding 1
1 picking 1
2 [1] {0}
2 picking 0
3 [1, 0] set()
2 Adding 0
2 picking 0
3 [1, 0] set()
2 Adding 0
2 recur end
1 Adding 1
1 recur end
</code></pre>
<p><a href="https://docs.python.org/3/reference/compound_stmts.html#the-for-statement" rel="nofollow noreferrer">https://docs.python.org/3/reference/compound_stmts.html#the-for-statement</a>
Here it is stated that the for loop is evaluated once.</p>
<p>My understanding is that</p>
<pre><code>for i in range(len(any_list)):
    any_list.append(1)
</code></pre>
<p>would terminate since <code>len(any_list)</code> is only evaluated in the beginning.</p>
<p>But why does <code>for num in set_num2</code> seem to evaluate more than once?
And if it does evaluate itself multiple times, shouldn't the loop go into infinite loop?</p>
<p>What is going on here?</p>
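<p>The iterable expression really is evaluated only once per <code>for</code> statement — the repeated "picking" lines in the trace come from <em>separate recursive calls</em>, each running its own loop over its own copy. What mutating the shared <code>set_num</code> changes is which elements an already-running loop still sees (and mutating a set <em>while</em> iterating it raises). Two small demonstrations:</p>

```python
# 1) The iterable expression is evaluated exactly once per `for` statement.
calls = []
def make_iterable():
    calls.append(1)
    return [1, 2, 3]

for _ in make_iterable():
    pass
assert len(calls) == 1

# 2) Mutating a set during iteration is an error -- which is why the
#    copy (set_num2 = set_num.copy()) is needed before looping.
s = {0, 1, 2}
try:
    for x in s:
        s.remove(x)
except RuntimeError as exc:
    error_message = str(exc)
```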
| <python> | 2023-09-30 22:51:15 | 0 | 941 | zcahfg2 |
77,208,681 | 9,739,007 | Regex to find all class names from .css file | <p>I am working on a ticket for an open source project. There are over 800+ css files, and I need to get all the class names from all these files, and then check all the HTML files for these class names. The goal is to find which classes aren't being used anywhere in the project & delete them. So the 1st step is extracting all the class names properly.</p>
<p>The issue is I am having trouble constructing the correct Regex to get all the class names because some of the css files have class names that are not formatted properly, that are all on 1 line, jumbled like 1 big run-on sentence with minimal white spaces. So the Regex I created breaks on these occurrences and fails to extract the class names. I must create a more complex yet flexible Regex and I seek guidance.</p>
<p>Example of some of the formats:</p>
<pre class="lang-css prettyprint-override"><code>.class-Name-Here {
}
.class1,.class2,.class3 {
}
.class1 .class2 {
}
</code></pre>
<p>But then there are really messed up formats like this (real example):</p>
<pre class="lang-css prettyprint-override"><code>.toast-title{font-weight:700}.toast-message{word-wrap:break-word}.toast-message a,.toast-message label{color:#FFF}.toast-message a:hover{color:#CCC;text-decoration:none}.toast-close-button{position:relative;right:-.3em;top:-.3em;float:right;font-size:20px;font-weight:700;color:#FFF;-webkit-text-shadow:0 1px 0 #fff;text-shadow:0 1px 0 #fff;opacity:.8}.toast-top-center,.toast-top-full-width{top:0;right:0;width:100%}.toast-close-button:focus,.toast-close-button:hover{color:#000;text-decoration:none;cursor:pointer;opacity:.4}button.toast-close-button{padding:0;cursor:pointer;background:0 0;border:0;-webkit-appearance:none}.toast-bottom-center{bottom:0;right:0;width:100%}.toast-bottom-full-width{bottom:0;right:0;width:100%}.toast-top-left{top:12px;left:12px}.toast-top-right{top:12px;right:12px}.toast-bottom-right{right:12px;bottom:12px}.toast-bottom-left{bottom:12px;left:12px}#toast-container{position:fixed;z-index:999999}#toast-container *{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}#toast-container>div{position:relative;overflow:hidden;margin:0 0 6px;padding:15px 15px 15px 50px;width:300px;-moz-border-radius:3px;-webkit-border-radius:3px;border-radius:3px;background-position:15px center;background-repeat:no-repeat;-moz-box-shadow:0 0 12px #999;-webkit-box-shadow:0 0 12px #999;box-shadow:0 0 12px #999;color:#FFF;opacity:.8}#toast-container>:hover{-moz-box-shadow:0 0 12px #000;-webkit-box-shadow:0 0 12px #000;box-shadow:0 0 12px 
#000;opacity:1;cursor:pointer}#toast-container>.toast-info{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAGwSURBVEhLtZa9SgNBEMc9sUxxRcoUKSzSWIhXpFMhhYWFhaBg4yPYiWCXZxBLERsLRS3EQkEfwCKdjWJAwSKCgoKCcudv4O5YLrt7EzgXhiU3/4+b2ckmwVjJSpKkQ6wAi4gwhT+z3wRBcEz0yjSseUTrcRyfsHsXmD0AmbHOC9Ii8VImnuXBPglHpQ5wwSVM7sNnTG7Za4JwDdCjxyAiH3nyA2mtaTJufiDZ5dCaqlItILh1NHatfN5skvjx9Z38m69CgzuXmZgVrPIGE763Jx9qKsRozWYw6xOHdER+nn2KkO+Bb+UV5CBN6WC6QtBgbRVozrahAbmm6HtUsgtPC19tFdxXZYBOfkbmFJ1VaHA1VAHjd0pp70oTZzvR+EVrx2Ygfdsq6eu55BHYR8hlcki+n+kERUFG8BrA0BwjeAv2M8WLQBtcy+SD6fNsmnB3AlBLrgTtVW1c2QN4bVWLATaIS60J2Du5y1TiJgjSBvFVZgTmwCU+dAZFoPxGEEs8nyHC9Bwe2GvEJv2WXZb0vjdyFT4Cxk3e/kIqlOGoVLwwPevpYHT+00T+hWwXDf4AJAOUqWcDhbwAAAAASUVORK5CYII=)!important}#toast-container>.toast-error{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAHOSURBVEhLrZa/SgNBEMZzh0WKCClSCKaIYOED+AAKeQQLG8HWztLCImBrYadgIdY+gIKNYkBFSwu7CAoqCgkkoGBI/E28PdbLZmeDLgzZzcx83/zZ2SSXC1j9fr+I1Hq93g2yxH4iwM1vkoBWAdxCmpzTxfkN2RcyZNaHFIkSo10+8kgxkXIURV5HGxTmFuc75B2RfQkpxHG8aAgaAFa0tAHqYFfQ7Iwe2yhODk8+J4C7yAoRTWI3w/4klGRgR4lO7Rpn9+gvMyWp+uxFh8+H+ARlgN1nJuJuQAYvNkEnwGFck18Er4q3egEc/oO+mhLdKgRyhdNFiacC0rlOCbhNVz4H9FnAYgDBvU3QIioZlJFLJtsoHYRDfiZoUyIxqCtRpVlANq0EU4dApjrtgezPFad5S19Wgjkc0hNVnuF4HjVA6C7QrSIbylB+oZe3aHgBsqlNqKYH48jXyJKMuAbiyVJ8KzaB3eRc0pg9VwQ4niFryI68qiOi3AbjwdsfnAtk0bCjTLJKr6mrD9g8iq/S/B81hguOMlQTnVyG40wAcjnmgsCNESDrjme7wfftP4P7SP4N3CJZdvzoNyGq2c/HWOXJGsvVg+RA/k2MC/wN6I2YA2Pt8GkAAAAASUVORK5CYII=)!important}#toast-container>.toast-success{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAADsSURBVEhLY2AYBfQMgf///3P8+/evAIgvA/FsIF+BavYDDWMBGroaSMMBiE8VC7AZDrIFaMFnii3AZTjUgsUUWUDA8OdAH6iQbQEhw4HyGsPEcKBXBIC4ARhex4G4Bsjmwe
U1soIFaGg/WtoFZRIZdEvIMhxkCCjXIVsATV6gFGACs4Rsw0EGgIIH3QJYJgHSARQZDrWAB+jawzgs+Q2UO49D7jnRSRGoEFRILcdmEMWGI0cm0JJ2QpYA1RDvcmzJEWhABhD/pqrL0S0CWuABKgnRki9lLseS7g2AlqwHWQSKH4oKLrILpRGhEQCw2LiRUIa4lwAAAABJRU5ErkJggg==)!important}#toast-container>.toast-warning{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAGYSURBVEhL5ZSvTsNQFMbXZGICMYGYmJhAQIJAICYQPAACiSDB8AiICQQJT4CqQEwgJvYASAQCiZiYmJhAIBATCARJy+9rTsldd8sKu1M0+dLb057v6/lbq/2rK0mS/TRNj9cWNAKPYIJII7gIxCcQ51cvqID+GIEX8ASG4B1bK5gIZFeQfoJdEXOfgX4QAQg7kH2A65yQ87lyxb27sggkAzAuFhbbg1K2kgCkB1bVwyIR9m2L7PRPIhDUIXgGtyKw575yz3lTNs6X4JXnjV+LKM/m3MydnTbtOKIjtz6VhCBq4vSm3ncdrD2lk0VgUXSVKjVDJXJzijW1RQdsU7F77He8u68koNZTz8Oz5yGa6J3H3lZ0xYgXBK2QymlWWA+RWnYhskLBv2vmE+hBMCtbA7KX5drWyRT/2JsqZ2IvfB9Y4bWDNMFbJRFmC9E74SoS0CqulwjkC0+5bpcV1CZ8NMej4pjy0U+doDQsGyo1hzVJttIjhQ7GnBtRFN1UarUlH8F3xict+HY07rEzoUGPlWcjRFRr4/gChZgc3ZL2d8oAAAAASUVORK5CYII=)!important}#toast-container.toast-bottom-center>div,#toast-container.toast-top-center>div{width:300px;margin:auto}#toast-container.toast-bottom-full-width>div,#toast-container.toast-top-full-width>div{width:96%;margin:auto}.toast{background-color:#030303}.toast-success{background-color:#51A351}.toast-error{background-color:#BD362F}.toast-info{background-color:#2F96B4}.toast-warning{background-color:#F89406}.toast-progress{position:absolute;left:0;bottom:0;height:4px;background-color:#000;opacity:.4}.toast{opacity:1!important}.toast.ng-enter{opacity:0!important;transition:opacity .3s linear}.toast.ng-enter.ng-enter-active{opacity:1!important}.toast.ng-leave{opacity:1;transition:opacity .3s linear}.toast.ng-leave.ng-leave-active{opacity:0!important}@media all and (max-width:240px){#toast-container>div{padding:8px 8px 8px 50px;width:11em}#toast-container .toast-close-button{right:-.2em;top:-.2em}}@media all and (min-width:241px) and (max-width:480px){#toast-container>div{padding:8px 8px 8px 
50px;width:18em}#toast-container .toast-close-button{right:-.2em;top:-.2em}}@media all and (min-width:481px) and (max-width:768px){#toast-container>div{padding:15px 15px 15px 50px;width:25em}}/*!
</code></pre>
<p>I need to create a really flexible Regex that can get all these class names.</p>
<p>The issue is that my expression is getting quite complex, and I'm trying to create something more flexible.</p>
<p>I've done some analysis on the class names and they follow these general rules:</p>
<ol>
<li>start with . and can be chained several times:
Ex: <code>.Interesting-Complex.Class-Name2.Specialty_Class {</code></li>
<li>Uses <code>A-Z</code>, <code>a-z</code>, and <em>sometimes</em> <code>-</code> and <code>_</code> multiple times</li>
<li>Can end with either a newline, a comma, <code>{</code>, or <code>:</code></li>
<li>Sometimes are preceded by <code>}</code></li>
</ol>
<p>My current expression is: <code>((?<=}))?\.(\w+(-+?)\w+)+?((?={)|(?=\s)|(?=,)|(?=:))</code></p>
<p>But it does not work on simple cases such as:</p>
<pre class="lang-css prettyprint-override"><code>.glyphicon {
}
</code></pre>
<p>Because it thinks that the <code>-</code> is mandatory and so does not pick that up.
It is also failing on chained classes such as:
Ex: <code>.Interesting-Complex.Class-Name2.Specialty_Class {</code></p>
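Not an answer from the post, but a sketch of one simpler approach: CSS class identifiers cannot start with a digit, so anchoring the pattern on a letter/underscore after the dot keeps numeric tokens such as `.3em` (from `-.3em`) and `.8` (from `opacity:.8`) out of the matches, and because it never relies on whitespace it handles minified and chained selectors alike. A real CSS parser (e.g. `tinycss2`) would be more robust against dots inside quoted strings, but for a first pass:

```python
import re

# A dot followed by a CSS identifier: optional leading '-', then a
# letter/underscore, then letters, digits, '-' or '_'.  Digits cannot
# start an identifier, so '.3em' and '.8' are skipped automatically.
CLASS_RE = re.compile(r'\.(-?[_a-zA-Z][\w-]*)')

def extract_classes(css):
    return set(CLASS_RE.findall(css))

minified = (".toast-title{font-weight:700}"
            ".toast-message a,.toast-message label{color:#FFF}"
            ".toast-close-button{right:-.3em;top:-.3em;opacity:.8}"
            ".Interesting-Complex.Class-Name2.Specialty_Class {color:red}")
print(sorted(extract_classes(minified)))
```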
| <python><regex><regex-lookarounds> | 2023-09-30 22:49:10 | 2 | 1,205 | Jamie |
77,208,623 | 188,096 | Why am I getting invalid model error with "gpt-35-turbo-16k" model in Azure Open AI but not with "gpt-35-turbo"? | <p>I have built a very simple application using Azure Open AI, Langchain and Streamlit. Following is my code:</p>
<pre><code>from dotenv import load_dotenv,find_dotenv
load_dotenv(find_dotenv())
import streamlit as st
from langchain.llms import AzureOpenAI
from langchain.prompts import PromptTemplate
LLM = AzureOpenAI(max_tokens=1500, deployment_name="gpt-35-turbo-16k", model="gpt-35-turbo-16k")
prompt_template = """
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Question: {question}
"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["question"])
st.title('Experiment AzureOpenAI :)')
user_question = st.text_input('Your query here please')
if user_question:
prompt = prompt_template.format(question = user_question)
response = LLM(prompt)
st.write(response)
st.write('done')
</code></pre>
<p>When I run the above code, I am getting the following error back:</p>
<blockquote>
<p>InvalidRequestError: The completion operation does not work with the
specified model, gpt-35-turbo-16k. Please choose different model and
try again. You can learn more about which models can be used with each
operation here: <a href="https://go.microsoft.com/fwlink/?linkid=2197993" rel="nofollow noreferrer">https://go.microsoft.com/fwlink/?linkid=2197993</a>.</p>
</blockquote>
<p>However my code runs perfectly fine if I change the model from <code>gpt-35-turbo-16k</code> to <code>gpt-35-turbo</code>. So the following code works:</p>
<pre><code>LLM = AzureOpenAI(max_tokens=1500, deployment_name="gpt-35-turbo", model="gpt-35-turbo")
</code></pre>
<p>I am wondering why this error is occurring.</p>
<p>From this <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35" rel="nofollow noreferrer"><code>link</code></a>, the only difference I could see is that <code>gpt-35-turbo-16k</code> supports up to 16k input tokens whereas <code>gpt-35-turbo</code> supports up to 4k input tokens.</p>
| <python><azure><streamlit><langchain><azure-openai> | 2023-09-30 22:16:55 | 1 | 136,942 | Gaurav Mantri |
77,208,622 | 22,466,650 | How to summarize a json based on the type of objects it holds? | <p>My input is this json:</p>
<pre class="lang-py prettyprint-override"><code>object = {'name': 'John',
'age': 30,
'address': {'street': '123 Main St',
'city': 'Sampletown',
'zipcode': '12345',
'frequency': 12.55},
'interests': ['Python programming',
{'hobbies': ['Playing chess',
{'outdoor': {'activity1': 'Hiking', 'activity2': 'Cycling'}}]}],
'friends': [{'name': 'Alice', 'age': 28},
{'name': 'Bob', 'age': 32, 'hobbies': ['Hiking', 'Reading']}]}
</code></pre>
<p>And I'm trying to get this kind of output :</p>
<pre class="lang-py prettyprint-override"><code>{"str": 30, "dict": 6, "list": 4, "int": 3, "float": 1}
</code></pre>
<p>My code below gives me: <code>{'str': 8, 'int': 3, 'float': 1}</code>. How can I fix that?</p>
<pre><code>from collections import Counter
def get_the_type(o):
for key,val in o.items():
if not isinstance(val, (list, dict)):
yield type(val).__name__
elif isinstance(val, list):
for v in val:
if isinstance(v, dict):
yield from get_the_type(v)
else:
yield from get_the_type(val)
final_dict = dict(Counter(get_the_type(object)))
</code></pre>
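A hedged sketch of one possible fix (not from the original post): the desired numbers work out if every key and every value is counted recursively, including the container objects themselves, with only the top-level dict excluded:

```python
from collections import Counter

def iter_types(o, is_root=False):
    # count this node itself, except for the outermost object
    if not is_root:
        yield type(o).__name__
    if isinstance(o, dict):
        for k, v in o.items():
            yield from iter_types(k)   # keys count as strings
            yield from iter_types(v)
    elif isinstance(o, list):
        for v in o:
            yield from iter_types(v)

obj = {'name': 'John', 'age': 30,
       'address': {'street': '123 Main St', 'city': 'Sampletown',
                   'zipcode': '12345', 'frequency': 12.55},
       'interests': ['Python programming',
                     {'hobbies': ['Playing chess',
                                  {'outdoor': {'activity1': 'Hiking',
                                               'activity2': 'Cycling'}}]}],
       'friends': [{'name': 'Alice', 'age': 28},
                   {'name': 'Bob', 'age': 32,
                    'hobbies': ['Hiking', 'Reading']}]}

print(dict(Counter(iter_types(obj, is_root=True))))
```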
| <python><json> | 2023-09-30 22:16:23 | 1 | 1,085 | VERBOSE |
77,208,291 | 638,504 | How to use dynamic (growing) analytics data with Pandas | <p>I want to use Pandas for statistical analysis of analytics data. This data is not fixed, it's growing every day.</p>
<p>Saving the analytics data in a database just to convert it to a dataframe every time I run the analysis does not feel very efficient to me, as I need to run the analysis daily.</p>
<p>I can of course create the dataframe with first batch of data, then save the dataframe as a file. Afterwards open the file, append the new data to it and save it again.</p>
<p>Maybe there is better way for using Pandas with growing data? A mix between dataframe and database? Or maybe even another library?</p>
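One simple middle ground (a sketch, not a recommendation from the post): keep the raw data in an append-friendly flat file and only build the DataFrame at analysis time, so the daily job appends new rows without rewriting history. CSV is used here for portability; a columnar format such as Parquet (one file per day, read back together) is the usual step up for large data.

```python
import os
import tempfile
import pandas as pd

path = os.path.join(tempfile.mkdtemp(), "analytics.csv")

def append_batch(df, path):
    # write the header only when the file does not exist yet
    df.to_csv(path, mode="a", header=not os.path.exists(path), index=False)

append_batch(pd.DataFrame({"day": ["d1", "d1"], "clicks": [3, 5]}), path)
append_batch(pd.DataFrame({"day": ["d2"], "clicks": [7]}), path)

full = pd.read_csv(path)   # built fresh for each analysis run
print(len(full))
```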
| <python><pandas><analytics> | 2023-09-30 20:09:06 | 0 | 2,193 | ddofborg |
77,207,944 | 3,555,115 | Remove First 5 directories in a file path column from data frame | <p>My dataframe looks like</p>
<pre><code>print df_function
address function file_name
0xffffffff87954120 load_buf /a/b/c/d/e/f/...
0xffffffff87954121 load_buf /a/b/c/d/e/f/...
....
....
</code></pre>
<p>Column "file_name" isn't displaying the complete file path; it should look something like</p>
<pre><code>/a/b/c/d/e/f/file1.c
/a/b/c/d/e/f/file2.c
</code></pre>
<p>I don't need the first 5 directories in the "file_name" column; I only need the file path after skipping those first 5 directories.</p>
<p><strong>Desired Output:</strong></p>
<pre><code> address function file_name
0xffffffff87954120 load_buf f/file1.c
0xffffffff87954121 load_buf f/file2.c
....
....
</code></pre>
<p>I tried</p>
<pre><code>file_paths = df_function["file_name"].str.split("/")
file_paths = [x[5:] for x in file_paths]
new_file_paths = "".join(file_paths)
df_function["file_name"] = new_file_paths
</code></pre>
<p>I get error:</p>
<pre><code>new_file_paths = "".join(file_paths)
TypeError: sequence item 0: expected string, list found
</code></pre>
<p>Any suggestions please ?</p>
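A sketch of one fix (column names and paths taken from the question): the `TypeError` comes from calling `"".join` on a list of lists; doing the split, slice and join element-wise with the `.str` accessor avoids that. Note that the leading `/` makes the first split element an empty string, so skipping five directories means slicing from index 6.

```python
import pandas as pd

df_function = pd.DataFrame({
    "address": ["0xffffffff87954120", "0xffffffff87954121"],
    "function": ["load_buf", "load_buf"],
    "file_name": ["/a/b/c/d/e/f/file1.c", "/a/b/c/d/e/f/file2.c"],
})

# split gives ['', 'a', 'b', 'c', 'd', 'e', 'f', 'file1.c'];
# elements 0..5 are the empty string plus the 5 directories to drop
df_function["file_name"] = (df_function["file_name"]
                            .str.split("/")
                            .str[6:]
                            .str.join("/"))
print(df_function["file_name"].tolist())
```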
| <python> | 2023-09-30 18:18:05 | 2 | 750 | user3555115 |
77,207,555 | 7,695,845 | pkg_resources DeprecationWarning when running pytest? | <p>I am working on a Python library and I recently wanted to integrate <code>pytest</code> into my development workflow. I tried running a simple test that passed, but I got a warning:</p>
<pre><code>.venv\Lib\site-packages\pygame\pkgdata.py:25
D:\dev\python\physiscript\.venv\Lib\site-packages\pygame\pkgdata.py:25: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import resource_stream, resource_exists
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
</code></pre>
<p>From the warning message, it seems that this warning is coming from <code>pygame</code> which I use in my project (<code>pygame-ce</code> actually). However, when I run my app, I don't see this warning, and I never got this warning when working with <code>pygame</code> before. Here's the sample test:</p>
<pre class="lang-py prettyprint-override"><code># Implicitly imports pygame for other library functionality, and thus trigger the warning
import mylib
# sample test
def test_demo():
assert True
</code></pre>
<p>The library's <code>__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pygame
def print_version():
print(pygame.version.ver)
</code></pre>
<p>When I use the library normally, there is no warning and no problem:</p>
<pre class="lang-py prettyprint-override"><code>import mylib
mylib.print_version()
</code></pre>
<p>Can somebody explain why I get this warning with <code>pytest</code>, but not when I run the app normally?</p>
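For context (this is background, not from the post): a plain `python` run ignores `DeprecationWarning` by default, while pytest installs its own warning filters that display them, which is why the warning only shows up under pytest. If the goal is simply to silence this third-party warning, one common approach is a `filterwarnings` entry in `pytest.ini` (a sketch — adjust the message pattern to taste):

```ini
[pytest]
filterwarnings =
    ignore:pkg_resources is deprecated as an API:DeprecationWarning
```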
| <python><pytest> | 2023-09-30 15:15:38 | 2 | 1,420 | Shai Avr |
77,207,553 | 1,073,946 | Regex pattern - couples' names | <p>I am looking for a regex pattern that matches patterns like "Jane and John Smith" (I mean first-name and first-name last-name) in the middle of a text. It should not match "Jane Smith and John Smith".</p>
<p>Then I want to substitute all found "Jane and John Smith"s with "Jane Smith and John Smith" by name. I mean any word that matches <code>[A-Z][a-z]+</code>.</p>
<p>I wrote this:</p>
<pre><code>r'([A-Z][a-z]+)\s+and\s+([A-Z][a-z]+)\s+([A-Z][a-z]+)'
</code></pre>
<p>but this also matches "Jane Smith and John Smith". I do not know how to exclude "two words" before "and". I am using Python.
Any help would be much appreciated.</p>
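A sketch of one workaround (not from the post): the stock `re` module does not allow a variable-length lookbehind to rule out a surname before "and", but you can let the pattern optionally capture a capitalised word in that position and decide in a replacement function. Caveat: a capitalised sentence-initial word immediately before the first name will be mistaken for a first name, so this is only a rough heuristic.

```python
import re

PAT = re.compile(
    r'\b([A-Z][a-z]+)(\s+[A-Z][a-z]+)?\s+and\s+([A-Z][a-z]+)\s+([A-Z][a-z]+)\b'
)

def expand_couples(text):
    def repl(m):
        first1, maybe_surname, first2, last = m.groups()
        if maybe_surname:          # "Jane Smith and John Smith" - leave as is
            return m.group(0)
        return f"{first1} {last} and {first2} {last}"
    return PAT.sub(repl, text)

print(expand_couples("I met Jane and John Smith at the party."))
print(expand_couples("I met Jane Smith and John Smith at the party."))
```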
| <python><regex> | 2023-09-30 15:14:48 | 2 | 439 | Bahar S |
77,207,534 | 2,030,601 | Using Terminal, Launch VSCode into a project folder and open a project file | <p>I am writing a Python Script and I want to use <code>subprocess</code> - basically MacOS <code>terminal</code> (Flavor probably not important) to launch <code>VSCode</code> (installed) into a project directory e.g. <code>~/workspace/my/project/</code> and have it open a file within that folder e.g. <code>~/workspace/my/project/javascript-file.js</code>. How can I achieve this?</p>
<p>This is a self-answered entry to help others who may be looking for similar solutions when writing Python scripts as local development tools.</p>
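A sketch of what such a script might run (an assumption, not the accepted answer: it presumes the `code` command-line launcher is on PATH, which VSCode offers via "Shell Command: Install 'code' command in PATH"). Passing the folder opens the workspace and `--goto` opens the file inside it. The helper below only builds the argument list; a real script would then hand it to `subprocess.run`.

```python
import os

def vscode_command(project_dir, file_in_project):
    # expand ~ ourselves so subprocess does not depend on shell expansion
    return ["code",
            os.path.expanduser(project_dir),
            "--goto",
            os.path.expanduser(file_in_project)]

cmd = vscode_command("~/workspace/my/project",
                     "~/workspace/my/project/javascript-file.js")
print(cmd)
# a real script would then do: subprocess.run(cmd, check=True)
```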
| <python><macos><visual-studio-code><terminal><subprocess> | 2023-09-30 15:08:55 | 1 | 1,145 | Decoded |
77,207,529 | 10,431,629 | Finding the closest pair (within a tolerance) of tuple from a pandas data frame in O(N) time complexity or better | <p>The problem I am facing could be solved with a plain loop; however, I need to figure out a Pythonic solution with at most O(N) time complexity:</p>
<p>So the problem is this:</p>
<p>I have a data frame (which I call <code>lookup</code>) as follows (it may be a very big data frame of 100M+ rows). My A & B are the columns to search, M and N are the values I will search for, and finally column V is what I will extract. For the sake of argument I only show a few rows:</p>
<pre><code> lookup_df
A B M N V
a1 b1 2.6 12.7 200.6
u1 v1 4.5 19 145
a2 b2 3.2 15.9 100
a3 b3 5.5 21.5 45
a7 b7 6.8 41.8 90
a10 b10 70.0 120.5 123
</code></pre>
<p>Now, I have another data frame which I call <code>input</code>. It can have either the exact A & B column values or some new A and B values. The input may look like this; again I show a few rows to showcase the point:</p>
<pre><code> input_df
A B M N
u1 v1 4.5 19
u1 v1 3.0 16.2
a3 b3 5.5 21.5
a3 x1 7.0 41.5
x7 y2 69.8 120.1
</code></pre>
<p>The problem to solve: whenever I have an exact match between the lookup and input data frames on A, B, M, N, my V column gets the value from lookup as expected. However, in the second row of the input data frame, for example, A & B match but the M & N values differ. The nearest row (say 0.3 is my threshold) is row 3 of the lookup table, corresponding to the V for a2 and b2, so I pick that V. Likewise a3 and b3 in the input exactly match the lookup on the M and N columns, so I can pick up that V. For the last two rows of the input there are no matching rows in this sample lookup data frame, so I again go by the closest values in lookup, which for a3, x1 happen to be the V for a7, b7, and for x7, y2 the V for a10, b10.</p>
<p>Thus the output df for my input should look like:</p>
<pre><code> output_df
A B M N V
u1 v1 4.5 19 145
u1 v1 3.0 16.2 100
a3 b3 5.5 21.5 45
a3 x1 7.0 41.5 90
x7 y2 69.8 120.1 123
</code></pre>
<p>So my lookup data frame is enormous (100M+ rows) with all possible combinations, while my input data frame may not be that large, maybe 10000 rows. Given a threshold of, say, 0.3 for finding the nearest pair, how can I write an efficient search that scans the lookup table to produce my desired output?</p>
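Not an O(N) answer, but a brute-force baseline (data copied from the question) that makes the desired semantics concrete: exact (M, N) matches fall out automatically because their distance is zero, and everything else takes the lookup row with the smallest Euclidean distance in (M, N). For 100M lookup rows one would replace the inner scan with a spatial index (e.g. `scipy.spatial.cKDTree` built once over the M/N columns) to get near-logarithmic queries.

```python
import numpy as np
import pandas as pd

lookup_df = pd.DataFrame({
    "A": ["a1", "u1", "a2", "a3", "a7", "a10"],
    "B": ["b1", "v1", "b2", "b3", "b7", "b10"],
    "M": [2.6, 4.5, 3.2, 5.5, 6.8, 70.0],
    "N": [12.7, 19.0, 15.9, 21.5, 41.8, 120.5],
    "V": [200.6, 145.0, 100.0, 45.0, 90.0, 123.0],
})
input_df = pd.DataFrame({
    "A": ["u1", "u1", "a3", "a3", "x7"],
    "B": ["v1", "v1", "b3", "x1", "y2"],
    "M": [4.5, 3.0, 5.5, 7.0, 69.8],
    "N": [19.0, 16.2, 21.5, 41.5, 120.1],
})

pts = lookup_df[["M", "N"]].to_numpy()
vals = lookup_df["V"].to_numpy()

def nearest_v(m, n):
    # distance of (m, n) to every lookup row; exact matches give d == 0
    d = np.hypot(pts[:, 0] - m, pts[:, 1] - n)
    return vals[d.argmin()]

input_df["V"] = [nearest_v(m, n) for m, n in input_df[["M", "N"]].to_numpy()]
print(input_df)
```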
| <python><pandas><indexing><grouping><lookup> | 2023-09-30 15:06:20 | 1 | 884 | Stan |
77,207,347 | 4,619,822 | Porting BSGS (ECDSA) on rust | <p>Hi guys, I'm porting the Baby-step Giant-step algorithm to the Rust programming language, from the examples shown here:
<a href="https://github.com/ashutosh1206/Crypton" rel="nofollow noreferrer">https://github.com/ashutosh1206/Crypton</a>
The plain discrete-log version was ported successfully without problems; here is an example.</p>
<pre><code>use serde::{Deserialize, Serialize};
use num_bigint::BigUint;
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct BSGS(String);
pub struct Parallel(String);
use rayon::prelude::*;
use num_traits::{One, Zero};
use std::collections::HashMap;
use std::sync::{Mutex, Arc};
// https://github.com/ashutosh1206/Crypton/blob/master/Discrete-Logarithm-Problem/Algo-Baby-Step-Giant-Step/bsgs.py
impl BSGS {
// """
// Reference:
// To solve DLP: h = g^x % p and get the value of x.
// We use the property that x = i*m + j, where m = ceil(sqrt(n))
// :parameters:
// g : int/long
// Generator of the group
// h : int/long
// Result of g**x % p
// p : int/long
// Group over which DLP is generated. Commonly p is a prime number
// :variables:
// m : int/long
// Upper limit of baby steps / giant steps
// x_poss : int/long
// Values calculated in each giant step
// c : int/long
// Giant Step pre-computation: c = g^(-m) % p
// i, j : int/long
// Giant Step, Baby Step variables
// lookup_table: dictionary
// Dictionary storing all the values computed in the baby step
// """
pub fn new(bsgs: String) -> Self {
Self(bsgs)
}
pub fn run(g: &BigUint, h: &BigUint, p: &BigUint) -> Option<BigUint> {
let mod_size = p.bits();
println!("[+] Using BSGS algorithm to solve DLP");
println!("[+] Modulus size: {}. Warning! BSGS not space efficient\n", mod_size);
let m = (*&p - BigUint::one()).sqrt() + BigUint::one();
let mut lookup_table: HashMap<BigUint, BigUint> = HashMap::new();
// Baby Step
let mut j = BigUint::zero();
while &j < &m {
let key = g.modpow(&j, &p);
lookup_table.insert(key.clone(), j.clone());
j += BigUint::one();
}
// Giant Step pre-computation
let c = g.modpow(&(&m * (*&p - BigUint::from(2u32))), &p);
// Giant Steps
let mut i = BigUint::zero();
while &i < &m {
let temp = &(h * &c.modpow(&i, &p)) % p;
if let Some(j) = lookup_table.get(&temp) {
// x found
return Some(i * &m + j);
}
i += BigUint::one();
}
None
}
}
</code></pre>
<p>But I'm having a hard time porting the elliptic-curve version, defined as:</p>
<pre class="lang-python prettyprint-override"><code>from sage.all import *
def bsgs_ecdlp(P, Q, E):
if Q == E((0, 1, 0)):
return P.order()
if Q == P:
return 1
m = ceil(sqrt(P.order()))
lookup_table = {j*P: j for j in range(m)}
for i in range(m):
temp = Q - (i*m)*P
if temp in lookup_table:
return (i*m + lookup_table[temp]) % P.order()
return None
if __name__ == "__main__":
import random
E = EllipticCurve(GF(17), [2, 2])
try:
for i in range(100):
x = random.randint(2, 19)
assert bsgs_ecdlp(E((5, 1)), x*E((5, 1)), E) == x
except Exception as e:
print e
print "[-] Something's wrong!"
</code></pre>
<p>This part here is giving me headaches, as it uses Sage to do the point subtraction.
Since the <code>secp256k1</code> module in Rust follows a different pattern, what would be the optimal way to do it?</p>
<pre><code>temp = Q - (i*m)*P
</code></pre>
<p>my intuition tells me that the <code>(i*m)*P</code> part is just</p>
<pre><code> let secp = Secp256k1::new();
let pub_key = PublicKey::from_secret_key(&secp, &secret);
</code></pre>
<p>where secret is <code>(i*m)</code> operation.
but how to do Q - New_Pub_Key_points in rust ?</p>
| <python><rust><cryptography><sage><secp256k1> | 2023-09-30 14:16:13 | 0 | 1,598 | PrinceZee |
77,207,062 | 15,632,586 | What should be done to make my own weighted loss function for BertForSequenceClassification? | <p>I am trying to fine-tune SciBERT on a new dataset, but my dataset has a high imbalance among classes (the largest class has ~760 elements, while some classes have only 10-20 elements). Because of this, I want to use my own weighted loss function (computed from the logits) during fine-tuning; however, from what I have read, I would have to write the loss function myself with PyTorch, and I am not sure how to integrate such a class with my code.</p>
<p>Here is my current code though, for reference:</p>
<pre><code>import transformers
import torch
cuda = torch.device('cuda')
tokenizer = transformers.BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased",
max_length=512)
bert = transformers.BertForSequenceClassification.from_pretrained('scibert_single_label_model').to(cuda).train()
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, test_size=0.2, shuffle=True)
def _tokenize(instances: list[str]):
return tokenizer(instances, return_tensors='pt', padding='max_length', truncation=True, max_length=512).input_ids
def _encode_labels(labels):
""":labels: should be the `labels` column (a Series) of the DataFrame"""
return torch.Tensor(encoder.transform(labels))
x_train = _tokenize(train['text'].tolist())
y_train = _encode_labels(train[['label']])
NUM_EPOCHS = 6
from statistics import mean
from tqdm import tqdm
from torch.optim import AdamW
optim = AdamW(bert.parameters(), lr=2e-5, eps=1e-8)
for epoch in range(NUM_EPOCHS):
epoch_losses = []
for x, y in tqdm(_load_data(x_train, y_train, batch_size=10)):
bert.zero_grad()
out = bert(x, attention_mask=x.ne(tokenizer.pad_token_id).to(int), labels=y)
epoch_losses.append(out.loss.item())
out.loss.backward()
optim.step()
</code></pre>
<p>So, could you provide me with ideas about how I could add my own customized weighted loss function to this code?</p>
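Not a full solution, but a sketch (in NumPy, with made-up class counts) of the arithmetic that a class-weighted cross-entropy performs; `torch.nn.CrossEntropyLoss(weight=...)` implements the same reduction, so one way to integrate it into the loop above would be to build that loss once from inverse-frequency weights, call the model without the `labels=` argument, and apply the loss to `out.logits` and the integer targets instead of using `out.loss`:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    # numerically stable log-softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = class_weights[labels]                       # one weight per example
    picked = log_probs[np.arange(len(labels)), labels]
    # PyTorch normalises by the sum of the applied weights, not the batch size
    return -(w * picked).sum() / w.sum()

# hypothetical class counts -> inverse-frequency weights
counts = np.array([760.0, 100.0, 15.0])
weights = counts.sum() / (len(counts) * counts)

logits = np.array([[2.0, 0.1, 0.1],    # confident about the majority class
                   [0.1, 0.1, 0.4]])   # unsure about the rare class
labels = np.array([0, 2])
print(weighted_cross_entropy(logits, labels, weights))
```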
| <python><pytorch><huggingface-transformers><bert-language-model> | 2023-09-30 12:47:39 | 1 | 451 | Hoang Cuong Nguyen |
77,206,851 | 19,358,731 | Confused about pip dependency package version resolution | <p>On two separate computers running Windows 11, I am trying to install the <code>tflite-support</code> package (PyPI link: <a href="https://pypi.org/project/tflite-support/" rel="nofollow noreferrer">https://pypi.org/project/tflite-support/</a>)</p>
<p>Computer A is running Python 3.8.10 and computer B is running Python 3.10.11.</p>
<p>I install tflite-support using the following code:</p>
<pre><code>python -m venv TestEnv
TestEnv\Scripts\activate
python -m pip install tflite-support
</code></pre>
<p>On Computer A, tflite-support installed version is <code>0.4.3</code>.</p>
<p>On Computer B, tflite-support installed version is <code>0.1.0a1</code>, surprisingly.</p>
<p>I have two questions:</p>
<ul>
<li>why is the installed version 0.1.0a1 (not 0.4.3) on computer B?</li>
<li>looking on PyPI, the latest version of tflite-support indeed is 0.4.4 (from July 2023). Why is the installed version 0.4.3 (not 0.4.4) on computer A?</li>
</ul>
| <python><python-3.x><pip> | 2023-09-30 11:41:53 | 0 | 355 | andynewman |
77,206,483 | 6,394,722 | subprocess (launch ssh process in background) communicate hang if enable stderr | <p>I have the following code, which does the following:</p>
<ol>
<li>subprocess for <code>ssh -f -M</code> to let ssh launch a shared socket in background</li>
<li>Since the above runs in the background, the second ssh connection can just reuse the socket <code>/tmp/control-channel</code> to connect to the ssh server without a password.</li>
</ol>
<p><strong>test.py:</strong></p>
<pre><code>import subprocess
import os
import sys
import stat
ssh_user = "my_user" # change to your account
ssh_passwd = "my_password" # change to your password
try:
os.remove("/tmp/control-channel")
except:
pass
# prepare passwd file
file = open("./passwd","w")
passwd_content = f"#!/bin/sh\necho {ssh_passwd}"
file.write(passwd_content)
file.close()
os.chmod("./passwd", stat.S_IRWXU)
# setup shared ssh socket, put it in background
env = {'SSH_ASKPASS': "./passwd", 'DISPLAY':'', 'SSH_ASKPASS_REQUIRE':'force'}
args = ['ssh', '-f', '-o', 'LogLevel=ERROR', '-x', '-o', 'ConnectTimeout=30', '-o', 'ControlPersist=300', '-o', 'UserKnownHostsFile=/dev/null', '-o', 'StrictHostKeyChecking=no', '-o', 'ServerAliveInterval=15', '-MN', '-S', '/tmp/control-channel', '-p', '22', '-l', ssh_user, 'localhost']
process = subprocess.Popen(args, env=env,
stdout=subprocess.PIPE,
# stderr=subprocess.STDOUT, # uncomment this line to enable stderr will make subprocess hang
stdin=subprocess.DEVNULL,
start_new_session=True)
sout, serr = process.communicate()
print(sout)
print(serr)
# use shared socket
args2 = ['ssh', '-o', 'LogLevel=ERROR', '-o', 'ControlPath=/tmp/control-channel', '-p', '22', '-l', ssh_user, 'localhost', 'uname -a']
process2 = subprocess.Popen(args2,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
stdin=subprocess.DEVNULL)
content, _ = process2.communicate()
print(content)
</code></pre>
<p><strong>execution:</strong></p>
<pre><code>$ python3 test.py
b''
None
b'Linux shmachine 4.19.0-21-amd64 #1 SMP Debian 4.19.249-2 (2022-06-30) x86_64 GNU/Linux\n'
</code></pre>
<p>So far so good, just if I uncomment <code>stderr=subprocess.STDOUT</code> in the first subprocess, it will hang:</p>
<pre><code>$ python3 test.py
^CTraceback (most recent call last):
File "test.py", line 29, in <module>
sout, serr = process.communicate()
File "/usr/lib/python3.7/subprocess.py", line 926, in communicate
stdout = self.stdout.read()
KeyboardInterrupt
</code></pre>
<p>I wonder what's the problem here?</p>
<p>My environment:</p>
<pre><code>$ python3 --version
Python 3.7.3
$ ssh -V
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1n 15 Mar 2022
$ cat /etc/issue
Debian GNU/Linux 10 \n \l
</code></pre>
<p>UPDATE: I saw <a href="https://stackoverflow.com/questions/52116952/getting-race-condition-using-stderr-pipe-with-popen-communicate">this post</a> which similar to my issue, but no answer.</p>
<p>UPDATE2: Changing <code>communicate</code> to <code>wait</code> makes it work, but the pipe buffer that <code>wait</code> relies on is surely smaller than the memory that <code>communicate</code> can use, so I still wonder why I can't make it work with <code>communicate</code>.</p>
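A guess at the mechanism, reduced to a runnable sketch (plain `sh` standing in for ssh): `ssh -f` forks a long-lived background process that inherits the parent's stderr. With `stderr=subprocess.STDOUT` that background process keeps a write end of the merged pipe open, so `communicate()` never sees EOF until the master connection dies (ControlPersist). `wait()` works because it only waits for the foreground PID and never reads the pipe to EOF. Redirecting the background child's descriptors (or using `stderr=subprocess.DEVNULL`) avoids the hang:

```python
import subprocess
import time

def run_merged(cmd):
    start = time.time()
    p = subprocess.Popen(["sh", "-c", cmd],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    out, _ = p.communicate()
    return out, time.time() - start

# background child inherits the merged pipe -> EOF only after it exits
out1, t1 = run_merged("sleep 1 & echo done")
# background child's fds redirected -> EOF as soon as the shell exits
out2, t2 = run_merged("sleep 1 >/dev/null 2>&1 & echo done")

print(out1, round(t1, 1))   # pipe held open ~1s by the sleep
print(out2, round(t2, 1))   # returns almost immediately
```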
| <python><python-3.x><linux><ssh><subprocess> | 2023-09-30 09:45:53 | 2 | 32,101 | atline |
77,206,414 | 13,086,128 | Get elements multiplied by their frequencies from a dictionary | <p>Suppose there is a dictionary with 1 Million records.</p>
<p>a minimum reproducible example:</p>
<pre><code>d = {1:2, 2:4, 3:5}
</code></pre>
<p>here keys represent the elements and values represent their respective frequencies.</p>
<p>Now, I want to get all the elements in a list like:</p>
<pre><code>[1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3]
</code></pre>
<p>I did this:</p>
<pre><code>lst=[]
for key, freq in d.items():
for _ in range(freq):
lst.append(key)
print(lst)
#[1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3]
</code></pre>
<p>The problem is that this method is slow <code>O(N**2)</code>.</p>
<p>I can do the same thing with a list comprehension, but it will still have 2 nested loops.</p>
<p>What could be an elegant solution to this?</p>
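Not from the post, but one commonly used alternative: `collections.Counter` already models exactly this element-to-frequency mapping, and its `elements()` method expands it lazily, so there is no explicit nested loop in user code (the total work is still proportional to the output length, which is unavoidable since every element must be produced):

```python
from collections import Counter
from itertools import chain, repeat

d = {1: 2, 2: 4, 3: 5}

# Counter accepts a mapping of element -> count directly
lst = list(Counter(d).elements())
print(lst)

# equivalent itertools spelling
lst2 = list(chain.from_iterable(repeat(k, f) for k, f in d.items()))
```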
| <python><list><dictionary> | 2023-09-30 09:25:35 | 2 | 30,560 | Talha Tayyab |
77,206,297 | 13,498,175 | Proper way of inverting matrix with so small elements as 1e-20 | <p>I am generating a symmetric matrix whose element magnitudes depend on the length scale of the problem I am dealing with. Typically the matrix is 9x9, but a 6x6 version could also be used.</p>
<p>One example of the matrix is like this</p>
<pre><code>// A
2.8538093853e-21 7.1747731778e-22 7.1747731778e-22 7.9791996325e-22 7.9791996325e-22 3.0550862084e-22
7.1747731778e-22 2.8538093853e-21 7.1747731778e-22 7.9791996325e-22 3.0550862084e-22 7.9791996325e-22
7.1747731778e-22 7.1747731778e-22 2.8538093853e-21 3.0550862084e-22 7.9791996325e-22 7.9791996325e-22
7.9791996325e-22 7.9791996325e-22 3.0550862084e-22 7.1747731778e-22 3.0550862084e-22 3.0550862084e-22
7.9791996325e-22 3.0550862084e-22 7.9791996325e-22 3.0550862084e-22 7.1747731778e-22 3.0550862084e-22
3.0550862084e-22 7.9791996325e-22 7.9791996325e-22 3.0550862084e-22 3.0550862084e-22 7.1747731778e-22
</code></pre>
<p>It contains quite small values (order of 1e-22), and based on Numpy, their determinants and condition numbers are like this</p>
<pre><code>cond 18.552692061099496
det 8.466833864265427e-127
</code></pre>
<p>So the condition number is not that bad, but the determinant is very, very small. I therefore expected NumPy to have a hard time inverting this, giving some arbitrary solution depending on the internal tolerance of its numerical matrix-inversion scheme.</p>
<p>And as a very naive approach to improve this situation, I simply scaled this matrix by a certain value (Here, I took 1.3972450422e+21).</p>
<pre><code>// A2
3.9874710151e+00 1.0024916252e+00 1.0024916252e+00 1.1148897128e+00 1.1148897128e+00 4.2687040583e-01
1.0024916252e+00 3.9874710151e+00 1.0024916252e+00 1.1148897128e+00 4.2687040583e-01 1.1148897128e+00
1.0024916252e+00 1.0024916252e+00 3.9874710151e+00 4.2687040583e-01 1.1148897128e+00 1.1148897128e+00
1.1148897128e+00 1.1148897128e+00 4.2687040583e-01 1.0024916252e+00 4.2687040583e-01 4.2687040583e-01
1.1148897128e+00 4.2687040583e-01 1.1148897128e+00 4.2687040583e-01 1.0024916252e+00 4.2687040583e-01
4.2687040583e-01 1.1148897128e+00 1.1148897128e+00 4.2687040583e-01 4.2687040583e-01 1.0024916252e+00
</code></pre>
<p>And its condition number and determinant are</p>
<pre><code>cond 18.552692061099485
det 6.300231416456931
</code></pre>
<p>Now, obviously the condition number and the determinant looks normal.</p>
<p>But what I cannot understand at this moment is that the results from both cases are not that different in quality. The inverted matrix values are very similar, and A*invA shows a similar deviation (~1e-16) from the identity matrix.</p>
<pre><code>inv(A)
6.4337397752e+20 -2.7166291469e+18 -2.7166291469e+18 -5.6175766999e+20 -5.6175766999e+20 2.1049115695e+20
-2.7166291469e+18 6.4337397752e+20 -2.7166291469e+18 -5.6175766999e+20 2.1049115695e+20 -5.6175766999e+20
-2.7166291469e+18 -2.7166291469e+18 6.4337397752e+20 2.1049115695e+20 -5.6175766999e+20 -5.6175766999e+20
-5.6175766999e+20 -5.6175766999e+20 2.1049115695e+20 2.9200923433e+21 -4.3031775290e+20 -4.3031775290e+20
-5.6175766999e+20 2.1049115695e+20 -5.6175766999e+20 -4.3031775290e+20 2.9200923433e+21 -4.3031775290e+20
2.1049115695e+20 -5.6175766999e+20 -5.6175766999e+20 -4.3031775290e+20 -4.3031775290e+20 2.9200923433e+21
inv(A2)*fnorm
6.4337397752e+20 -2.7166291469e+18 -2.7166291469e+18 -5.6175766999e+20 -5.6175766999e+20 2.1049115695e+20
-2.7166291469e+18 6.4337397752e+20 -2.7166291469e+18 -5.6175766999e+20 2.1049115695e+20 -5.6175766999e+20
-2.7166291469e+18 -2.7166291469e+18 6.4337397752e+20 2.1049115695e+20 -5.6175766999e+20 -5.6175766999e+20
-5.6175766999e+20 -5.6175766999e+20 2.1049115695e+20 2.9200923433e+21 -4.3031775290e+20 -4.3031775290e+20
-5.6175766999e+20 2.1049115695e+20 -5.6175766999e+20 -4.3031775290e+20 2.9200923433e+21 -4.3031775290e+20
2.1049115695e+20 -5.6175766999e+20 -5.6175766999e+20 -4.3031775290e+20 -4.3031775290e+20 2.9200923433e+21
</code></pre>
<pre><code>check A*invA
1.0000000000e+00 -1.9428902931e-16 -2.7755575616e-17 -5.5511151231e-17 5.5511151231e-17 -5.5511151231e-17
-2.7755575616e-17 1.0000000000e+00 -5.5511151231e-17 -2.7755575616e-17 -5.5511151231e-17 0.0000000000e+00
1.6653345369e-16 0.0000000000e+00 1.0000000000e+00 2.7755575616e-17 5.5511151231e-17 0.0000000000e+00
1.1102230246e-16 3.3306690739e-16 0.0000000000e+00 1.0000000000e+00 1.6653345369e-16 5.5511151231e-17
-2.2204460493e-16 0.0000000000e+00 2.2204460493e-16 1.1102230246e-16 1.0000000000e+00 1.1102230246e-16
0.0000000000e+00 3.3306690739e-16 0.0000000000e+00 -2.7755575616e-17 0.0000000000e+00 1.0000000000e+00
check A2*invA2
1.0000000000e+00 -2.7755575616e-17 -1.1102230246e-16 4.1633363423e-17 6.9388939039e-17 2.7755575616e-17
-2.7755575616e-17 1.0000000000e+00 5.5511151231e-17 0.0000000000e+00 -2.7755575616e-17 0.0000000000e+00
5.5511151231e-17 0.0000000000e+00 1.0000000000e+00 0.0000000000e+00 -1.1102230246e-16 -5.5511151231e-17
-8.3266726847e-17 -1.1102230246e-16 0.0000000000e+00 1.0000000000e+00 2.7755575616e-17 5.5511151231e-17
-1.1102230246e-16 0.0000000000e+00 2.2204460493e-16 -5.5511151231e-17 1.0000000000e+00 0.0000000000e+00
-2.2204460493e-16 -7.2164496601e-16 -2.2204460493e-16 8.3266726847e-17 1.1102230246e-16 1.0000000000e+00
</code></pre>
<p>While a 1e-16 order of difference could be regarded as negligible, I am worried because these errors have signs. If this kind of error were to change the signs of the inverted matrix elements, I would get an inversely directed force in my code. Is there anything I can do about this matrix inversion?</p>
<p>Or maybe my problem is ill-posed in the first place, so there is essentially nothing I can do about it?</p>
<p>Here is the full python code I used here</p>
<pre><code>import numpy as np
def printA(A, n):
    for i in range(n):
        for j in range(n):
            print("{:.10e}".format(A[i,j]), end=" ")
        print("")
A = np.array([[2.8538093853e-21, 7.1747731778e-22, 7.1747731778e-22, 7.9791996325e-22, 7.9791996325e-22, 3.0550862084e-22],
[7.1747731778e-22, 2.8538093853e-21, 7.1747731778e-22, 7.9791996325e-22, 3.0550862084e-22, 7.9791996325e-22],
[7.1747731778e-22, 7.1747731778e-22, 2.8538093853e-21, 3.0550862084e-22, 7.9791996325e-22, 7.9791996325e-22],
[7.9791996325e-22, 7.9791996325e-22, 3.0550862084e-22, 7.1747731778e-22, 3.0550862084e-22, 3.0550862084e-22],
[7.9791996325e-22, 3.0550862084e-22, 7.9791996325e-22, 3.0550862084e-22, 7.1747731778e-22, 3.0550862084e-22],
[3.0550862084e-22, 7.9791996325e-22, 7.9791996325e-22, 3.0550862084e-22, 3.0550862084e-22, 7.1747731778e-22]])
print("A - cond / det")
printA(A,6)
print("cond {}".format(np.linalg.cond(A)))
print("det {}".format(np.linalg.det(A)))
#fnorm = 2.7944900844667493e+20
fnorm = 2.7944900844667493e+21 / 2.
A2 = A*fnorm
print("\nA2 = fnorm * A - cond / det")
print("fnorm {:.10e}".format(fnorm))
printA(A2,6)
print("cond {}".format(np.linalg.cond(A2)))
print("det {}".format(np.linalg.det(A2)))
print("\ninv(A)")
printA(np.linalg.inv(A),6)
print("\ninv(A2)*fnorm")
printA(np.linalg.inv(A2)*fnorm,6)
print("\ncheck A*invA")
printA(np.matmul(np.linalg.inv(A),A),6)
print("\ncheck A2*invA2")
printA(np.matmul(np.linalg.inv(A2),A2),6)
</code></pre>
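<p>A side note on the question above (a sketch, not specific to this exact matrix): the usual recommendation is to avoid forming the explicit inverse and to judge accuracy by the relative residual of a solve; a scalar rescaling does not change the condition number, which is why both variants behave essentially identically.</p>

```python
import numpy as np

# Small stand-in SPD matrix with tiny-magnitude entries (assumption:
# the real matrix is 6x6/9x9, but the point is the same).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]]) * 1e-22
b = np.ones(2) * 1e-22

# Prefer solve() over inv(): it factorizes A and solves directly,
# avoiding the explicit inverse.
x = np.linalg.solve(A, b)

# The meaningful accuracy measure is the relative residual, not
# how close A @ inv(A) is to the identity.
rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)

# Rescaling by a scalar leaves the condition number (and hence the
# achievable relative accuracy) essentially unchanged.
s = 1e21
print(rel_res < 1e-10, np.isclose(np.linalg.cond(A), np.linalg.cond(A * s)))
```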
| <python><numpy><matrix> | 2023-09-30 08:51:24 | 1 | 455 | Sangjun Lee |
77,206,182 | 11,608,962 | Uploading bytes object to s3 throwing ValueError: Fileobj must implement read | <p>I am trying to upload a base64 string representing a <code>jpeg</code> file format but getting <code>ValueError</code>.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>with open(base64_txt_file, 'r') as file:
    base64_content = file.read()
img_data = base64_content.encode()
img_content = base64.b64decode(img_data)
print(type(img_content))
object_name = 'sample.jpeg'
client.upload_fileobj(img_content, bucket_name, object_name, ExtraArgs={'ACL': 'public-read'})
</code></pre>
<p>Output:</p>
<pre class="lang-bash prettyprint-override"><code><class 'bytes'>
Traceback (most recent call last):
File "D:\temp-project\doSpaces-base64-jpeg.py", line 9
client.upload_fileobj(img_content, bucket_name, object_name, ExtraArgs={'ACL': 'public-read'})
File "D:\temp-project\venv\lib\site-packages\boto3\s3\inject.py", line 618, in upload_fileobj
raise ValueError('Fileobj must implement read')
ValueError: Fileobj must implement read
</code></pre>
<p>I aim to upload a <code>base64</code> format file to the s3 bucket and get a public URL. Please note that I do not want to store the file locally, so I am relying on the <code>base64</code> string since I expect this string to be received from the API directly.</p>
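<p>For reference, the usual fix for this error is to wrap the decoded bytes in <code>io.BytesIO</code>, which provides the <code>read()</code> method that <code>upload_fileobj</code> expects. A minimal sketch (the base64 payload below is hypothetical; the boto3 call is shown as a comment since it needs real credentials):</p>

```python
import base64
import io

# Hypothetical base64 payload standing in for the file read from disk.
b64_string = base64.b64encode(b"\xff\xd8\xff fake jpeg bytes").decode()

img_content = base64.b64decode(b64_string)  # plain bytes: no read() method
fileobj = io.BytesIO(img_content)           # file-like wrapper: has read()

# With boto3, the wrapper (not the raw bytes) is what upload_fileobj wants:
# client.upload_fileobj(fileobj, bucket_name, object_name,
#                       ExtraArgs={'ACL': 'public-read'})
print(hasattr(fileobj, "read"))
```

<p>Alternatively, <code>client.put_object(Bucket=..., Key=..., Body=img_content)</code> accepts raw bytes directly, with no wrapper needed.</p>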
| <python><amazon-web-services><amazon-s3> | 2023-09-30 08:10:30 | 1 | 1,427 | Amit Pathak |
77,206,151 | 1,618,465 | How to load multiple SSL certificates on server with python with sockets | <p>I'm creating a server that should support SSL. I have two pairs of signed cert and keyfile for two different domains.</p>
<p>To add both certs to the context I've tried two things:</p>
<ul>
<li>Calling twice to <code>context.load_cert_chain(certfile=certfile, keyfile=keyfile)</code></li>
<li>Concatenating both certfiles and keyfiles into one cerfile and one keyfile</li>
</ul>
<p>Neither attempt worked, since it seems the server uses just one of them. My understanding is that I can use Server Name Indication (SNI) to serve certificates for two domains on the same IP.</p>
<p>How can I make Server Name Indication (SNI) work with Python's <code>ssl</code> module? I guess the browsers send that info so the servers know which certificate to serve, right? How can I know which cert the client wants before calling <code>context.wrap_socket(csock, server_side=True)</code>?</p>
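<p>For what it's worth, the standard-library <code>ssl</code> module supports this via <code>SSLContext.sni_callback</code>: the server wraps the socket with a default context, and the callback swaps in a per-hostname context once the client's SNI arrives during the handshake. A rough sketch (the domain names and certificate paths are placeholders, so the <code>load_cert_chain</code> calls are commented out):</p>

```python
import ssl

# One context per domain, each loaded with its own cert/key pair.
ctx_a = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# ctx_a.load_cert_chain("a.example.crt", "a.example.key")
ctx_b = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# ctx_b.load_cert_chain("b.example.crt", "b.example.key")

contexts = {"a.example": ctx_a, "b.example": ctx_b}

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def sni_callback(ssl_sock, server_name, initial_ctx):
    # Called during the handshake, after the client sends SNI but before
    # certificate selection; swap in the context matching the hostname.
    ctx = contexts.get(server_name)
    if ctx is not None:
        ssl_sock.context = ctx
    return None  # None lets the handshake continue

default_ctx.sni_callback = sni_callback
# server_sock = default_ctx.wrap_socket(csock, server_side=True)
```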
| <python><sockets><ssl><sni> | 2023-09-30 07:58:53 | 1 | 1,961 | user1618465 |
77,205,923 | 14,154,784 | Simple Example of Multiple ManyToMany Model Relations in Django | <p>I'm writing a workout app and have a number of models that need to link together. Specifically, I have an Exercise model that needs to track the name of each exercise as well as the primary and secondary muscle groups targeted by the exercise. I also need a muscle group model, and these all need to link together in a ManyToMany fashion, e.g. any muscle group can be associated with the primary and/or secondary groups of multiple exercises, and vice versa.</p>
<p>How do I write the code to define the classes, as well as write a script to populate the database and test how to access various fields? <em><strong>Answered below to save others time. I spent way too long figuring this out.</strong></em></p>
| <python><django><django-models><django-orm> | 2023-09-30 06:37:52 | 1 | 2,725 | BLimitless |
77,205,618 | 3,204,212 | When a file is on the Windows clipboard, how can I (in Python) access its path? | <p><strong>I don't mean reading text from the clipboard.</strong></p>
<p>When a user right-clicks a file in Explorer and selects 'Copy', a value is placed on the clipboard that is not simply its path as a string. Pasting that value behaves differently in different contexts: in Explorer, it would create a new file; in Audacity, it copies the file contents into a new temp file; and in VS Code, it inserts the file's path. Copying a file's path to the clipboard doesn't have that same behavior. This value does not show up in Windows clipboard history because clipboard history seemingly tracks only strings.</p>
<p>The Python clipboard utilities I can find (tkinter, Pyperclip, and win32clipboard) do not register copies of this value at all when listening for new pastes, because they too are seemingly only listening for strings.</p>
<p>I want to listen for copies of this type of value and extract the path from them, the way VS Code does.</p>
<p>Is there a way I can do this in Python already? If not, can anyone recommend where I should start researching the clipboard API/syscalls?</p>
| <python><clipboard> | 2023-09-30 04:02:25 | 1 | 2,480 | GreenTriangle |
77,205,575 | 2,487,607 | seaborn 0.13.0 pointplot native_scale TypeError: pointplot() got an unexpected keyword argument 'native_scale' | <p>According to the <a href="https://seaborn.pydata.org/generated/seaborn.pointplot.html" rel="nofollow noreferrer">documentation</a> seaborn pointplot says:</p>
<blockquote>
<p>By default, this function treats one of the variables as categorical and draws data at ordinal positions (0, 1, … n) on the relevant axis. As of version 0.13.0, this can be disabled by setting native_scale=True.</p>
</blockquote>
<p>it goes on to say native_scale is :</p>
<blockquote>
<p>"new in version 0.13.0"</p>
</blockquote>
<p>I have seaborn version 0.13.0, but it throws unexpected keyword error when I pass <code>native_scale</code>. Here is a MWE in a Jupyter notebook:</p>
<pre><code>import seaborn as sns
from matplotlib import pyplot as plt
!pip3 show seaborn
</code></pre>
<p>returns:</p>
<blockquote>
<p>Name: seaborn
Version: 0.13.0
Summary: Statistical data visualization
Home-page:
Author:
Author-email: Michael Waskom mwaskom@gmail.com
License:
Location: //site-packages
Requires: matplotlib, numpy, pandas</p>
</blockquote>
<pre><code>datax = [ 10**i for i in range(5)]
datay = [ i for i in range(5)]
sns.pointplot(x = datax, y = datay, native_scale = True)
plt.show()
</code></pre>
<p>line starting with <code>sns...</code>returns error:</p>
<blockquote>
<p>TypeError: pointplot() got an unexpected keyword argument 'native_scale'</p>
</blockquote>
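<p>A common cause of this symptom is that <code>!pip3 show</code> inspects a different environment than the one the Jupyter kernel actually imports from. The check below (using <code>numpy</code> as a stand-in package, since the same diagnostic applies) compares the kernel's interpreter and the imported version against the installed metadata:</p>

```python
import importlib.metadata
import sys

import numpy as np  # stand-in for seaborn; the same check applies

print(sys.executable)                       # interpreter the kernel runs
print(np.__file__)                          # where the module was imported from
print(np.__version__)                       # version actually imported
print(importlib.metadata.version("numpy"))  # version the metadata reports

# If the imported version differs from what `pip3 show` reports, the
# kernel and pip are looking at different environments.
```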
| <python><matplotlib><seaborn><scale><pointplot> | 2023-09-30 03:28:48 | 0 | 8,557 | travelingbones |
77,205,208 | 1,889,720 | How do I make my Cloud Run Job last indefinitely? | <p>I have a Job in Cloud Run that ingests data from an external source, and writes that data into Firebase Firestore. I want this job to run indefinitely - 365 days per year, 24 hours per day.</p>
<p>As I understand, these Cloud Run jobs have a timeout. The timeout is the reason I migrated from Firebase Functions to Cloud Run. My job fails due to the timeout, and then retries with this Error:</p>
<pre><code>"Terminating task because it has reached the maximum timeout of 600 seconds. To change this limit, see https://cloud.google.com/run/docs/configuring/task-timeout"
</code></pre>
<p>After retrying, the job reconnects to the external source, and starts populating Firestore again. This means, aside from the short interruption, it behaves exactly as I want - as long as it doesn't run out of retries. I can increase the timeout and retries, but this seems like a pretty ugly hack. Also, having the big red error makes me sad 😞</p>
<p>What is the correct way to run a Cloud Run job indefinitely?</p>
| <python><error-handling><timeout><google-cloud-run><indefinite> | 2023-09-29 23:37:39 | 2 | 7,436 | Evorlor |
77,205,203 | 1,687,469 | How can I dynamically change TreeView columns without TreeView width growing beyond column widths? | <p>I'm trying to update a ttk.TreeView's columns, but the TreeView always expands beyond the column widths. The issue is shown in the screenshots below.</p>
<p>Here is a minimum reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import ttk


class DataView(ttk.Labelframe):
    """Widget that displays tabular data in a console-like view."""

    def __init__(self, master, max_lines=1000, **kwargs):
        super().__init__(master, **kwargs)
        self.rowconfigure(0, weight=1)
        self._column_headers = ()
        self._max_lines = max_lines
        self._tv = ttk.Treeview(self, show="headings", selectmode="none", columns=[])
        self._tv.grid(row=0, column=0, padx=2, pady=2, sticky=tk.NS)

    def set_column_headers(self, headers: tuple[str, ...]):
        self._column_headers = headers
        self._tv.config(columns=headers)
        for header in headers:
            self._tv.column(header, width=70, stretch=False)
            self._tv.heading(header, text=header)


def main():
    root = tk.Tk()
    root.title("Data View Test")
    root.geometry("800x600")
    root.columnconfigure(0, weight=1)
    root.rowconfigure(0, weight=1)
    data_view = DataView(root, text="Data View", max_lines=100)
    data_view.grid(row=0, column=0, sticky=tk.NSEW)
    data_view.set_column_headers(("Time", "Voltage", "Current"))
    data_view.after(3000, data_view.set_column_headers, ("Time", "Voltage", "Current", "Power"))
    root.mainloop()


if __name__ == "__main__":
    main()
</code></pre>
<p>The first call to <code>set_column_headers</code> works as expected:
<a href="https://i.sstatic.net/3YG91.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3YG91.png" alt="first set_column_headers call" /></a></p>
<p>However, the second call (and any subsequent calls) makes the treeview expand beyond the width of the columns:</p>
<p><a href="https://i.sstatic.net/0kvNV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kvNV.png" alt="second set_column_headers call" /></a></p>
<p>Stepping through the code, this seems to happen on <code>self._tv.config(columns=headers)</code></p>
<p>I would expect a result like the first screenshot, but with the additional "Power" column. Is there any way I can achieve this? This seems like a tkinter bug but I may be doing something wrong.</p>
<p>Python version: 3.11.4</p>
<p>OS: Windows</p>
<p><code>tkinter.TkVersion</code>: 8.6</p>
<p><code>tkinter.Tcl().call('info', 'patchlevel'))</code>: 8.6.12</p>
| <python><tkinter><treeview> | 2023-09-29 23:33:41 | 1 | 853 | Tur1ng |
77,205,123 | 1,773,216 | How do I slim down SBERT's sentence-transformers library? | <p>SBERT's (<a href="https://www.sbert.net/" rel="noreferrer">https://www.sbert.net/</a>) <code>sentence-transformers</code> library (<a href="https://pypi.org/project/sentence-transformers/" rel="noreferrer">https://pypi.org/project/sentence-transformers/</a>) is the most popular library for producing vector embeddings of text chunks in the Python open-source LLM ecosystem. It has a simple API but is a <strong>MASSIVELY large</strong> dependency. Where does all its bloat come from?</p>
<p>Below is a screenshot of building a base Docker container image with this tool which took over <code>11 mins</code> to build with a final image size of <code>7.5 GB</code>:</p>
<p><a href="https://i.sstatic.net/zdmwD.png" rel="noreferrer"><img src="https://i.sstatic.net/zdmwD.png" alt="base docker image" /></a></p>
<p>For reference, here is my <code>Dockerfile.base</code>:</p>
<pre><code>FROM python:3.11.2-slim-bullseye
RUN pip install --upgrade pip && pip install sentence-transformers
</code></pre>
<p>I anticipated that this is because it is installed with some models already pre-packaged, but when I tried their popular getting started snippet</p>
<pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode("vectorize this text")
</code></pre>
<p>it downloaded another few hundred MBs of additional files to my filesystem. So I'd like to find a way of slimming this down to just the packages I need. I believe the size of this library is largely due to the underlying <code>torch</code> dependencies (<code>6.9 GB</code>), which in turn take up a lot of disk space due to their underlying <code>nvidia-*</code> dependencies (where are these installed, by the way?)</p>
<p>Let's suppose I already have a model I've downloaded to my file system (i.e. <code>path/to/all-MiniLM-L6-v2</code> repo from HuggingFace), and all I want to do is run the code above on just a CPU. How can I install only the things I need without the bloat?</p>
<p>Now let's suppose I've picked a GPU to run this on. What are the next set of marginal dependencies I need to get this code to run without the bloat?</p>
| <python><pytorch><huggingface><large-language-model><sentence-transformers> | 2023-09-29 22:53:58 | 1 | 1,657 | nmurthy |
77,204,995 | 1,035,897 | How to specify version of module installed from git repo with pip install -r requirements.txt | <p>In my <strong>Python</strong> <strong>FastAPI</strong> app, I need <strong>Pydantic</strong> version 2 to solve <a href="https://stackoverflow.com/questions/77203896/list-of-union-only-accepts-first-type-of-union-with-pydantic/77204842">an issue</a> with <code>Unions</code> that is fixed in version 2 only.</p>
<p>Since <strong>FastAPI</strong> version 0.100.0 and onward supports <strong>Pydantic</strong> v2, I was first hopeful that depending on <strong>Pydantic</strong> v2 would work fine; however, another indirect dependency in my project called <strong>sqlmodel</strong> specifies that it only supports <strong>Pydantic</strong> 1.x.</p>
<p><strong>sqlmodel</strong> is incidentally <a href="https://sqlmodel.tiangolo.com/" rel="nofollow noreferrer">also written by tiangolo</a>, who is the <a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer">author of FastAPI</a>, and he stated that <strong>sqlmodel</strong> would get a version bump with support for <strong>Pydantic</strong> v2; <a href="https://github.com/tiangolo/sqlmodel/issues/532" rel="nofollow noreferrer">the discussion can be seen here</a>. But this has taken some time and has caused a lot of problems for many users, prompting someone to <a href="https://github.com/honglei/sqlmodel" rel="nofollow noreferrer">create a fork of sqlmodel, patching it to support <strong>Pydantic</strong> v2</a>.</p>
<p>I want to use this forked version of <strong>sqlmodel</strong> in my project to get <strong>Pydantic</strong> v2 support installed without the dependency problems. I know it is possible to specify a git repo directly in my <code>requirements.txt</code> file using the <code>-e git+https://github.com/author/project.git@branch_tag_or_hash#egg=packagename</code> syntax, however when I try this with the fork of <strong>sqlmodel</strong> I get an error because now <strong>sqlmodel</strong> has version "0" according to <strong>pip</strong>.</p>
<p>The exact syntax I have used in my requirements.txt looks like this:</p>
<pre><code>-e git+https://github.com/honglei/sqlmodel.git@main#egg=sqlmodel
</code></pre>
<p>and the exact error message after running pip install -r requirements.txt looks like this:</p>
<pre><code>ERROR: Cannot install sqlmodel 0 (from git+https://github.com/honglei/sqlmodel.git@main#egg=sqlmodel) because these package versions have conflicting dependencies.
</code></pre>
<p>As you can clearly see, pip thinks we are installing "version 0" of sqlmodel now which is clearly not the case.</p>
<p>So there are two questions;</p>
<ol>
<li><p>How can I convince pip that the git fork of <strong>sqlmodel</strong> is actually version <em>0.0.9</em> or some other arbitrary value that makes it be accepted as the latest version of the package during dependency resolution?</p>
</li>
<li><p>Taking a step back, are there any more elegant ways to resolve this? It seems quite radical and hacky to depend on a fork in github, but I don't see any other way of having FastAPI and Pydantic v2 in my app at this point.</p>
</li>
</ol>
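<p>One thing worth trying for the first question (hedged: whether this resolves the "version 0" problem depends on the fork's own package metadata) is pip's PEP 508 direct-reference syntax in <code>requirements.txt</code>, without <code>-e</code> and without the legacy <code>#egg=</code> fragment, so the version is taken from the package's own metadata during resolution:</p>

```text
sqlmodel @ git+https://github.com/honglei/sqlmodel.git@main
```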
| <python><version><fastapi><pydantic><sqlmodel> | 2023-09-29 22:09:15 | 2 | 9,788 | Mr. Developerdude |
77,204,947 | 19,907,524 | Is there a good way to compile .pyx cython files for AWS Lambda? | <p>So I'm currently compiling <code>.pyx</code> files using an AWS EC2 instance because I didn't find a way to do it locally on my MacBook. Is there an obvious solution to do this locally or do I just continue with how I'm doing it?</p>
<p>I thought maybe setting the <code>platform</code> kwarg to <code>'manylinux2014_x86_64'</code> in my <code>setup.py</code> might work but no luck.</p>
<p>code:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup
from Cython.Build import cythonize  # pip

script = 'some.pyx'

setup(
    ext_modules=cythonize(script),
    # platforms='manylinux2014_x86_64'
)
</code></pre>
<p>calling <code>python3 setup.py build_ext --inplace</code> from the terminal</p>
<p>p.s. I have spent a significant amount of time researching this, sadly. Sometimes the answer is just I'm a bad "Googler"</p>
| <python><cython> | 2023-09-29 21:51:49 | 0 | 551 | Daniel Olson |
77,204,874 | 820,011 | Is it possible to type a dict of type -> handler functions for that type in python? | <p>eg, imagine a class hierarchy rooted at <code>Base</code> (which is abstract) with subclasses <code>A</code> and <code>B</code>. I'd like to be able to do something like this:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T', bound=Base)
dispatch: dict[Type[T], Callable[[T], RetType]] = {
    A: a_handler,
    B: b_handler,
}
</code></pre>
<p>and have mypy know that <code>dispatch[type(x)](x)</code> is well typed (assuming <code>x: Base</code>).</p>
<p>This is not possible, because you can't have generics at the top level under mypy; it complains that <code>T</code> doesn't mean anything. However, I'd like to avoid a chain of isinstance checks:</p>
<pre class="lang-py prettyprint-override"><code>if isinstance(x, A):
    return a_handler(x)
elif isinstance(x, B):
    return b_handler(x)
...
</code></pre>
<p>if it's possible.</p>
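<p>For the record, one well-trodden alternative that gives the same type-to-handler table semantics with full type-checker support is <code>functools.singledispatch</code>, which builds and maintains the dispatch mapping for you (a minimal sketch with stand-in classes):</p>

```python
from functools import singledispatch


class Base: ...
class A(Base): ...
class B(Base): ...


@singledispatch
def handle(x: Base) -> str:
    # Fallback for subclasses without a registered handler.
    raise NotImplementedError(type(x).__name__)

@handle.register
def _(x: A) -> str:
    return "handled A"

@handle.register
def _(x: B) -> str:
    return "handled B"

print(handle(A()))  # -> handled A
print(handle(B()))  # -> handled B
```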
| <python><generics><mypy> | 2023-09-29 21:28:48 | 1 | 2,535 | ben w |
77,204,822 | 1,187,621 | GEKKO IPOPT trajectory propagation | <p>I am trying to propagate a spacecraft to optimize the time of flight using IPOPT in GEKKO/Python. Here is the code for my GEKKO model:</p>
<pre><code>m = GEKKO()
#manipulating variables and initial guesses
al_a = m.MV(value = -1, lb = -2, ub = 2)
al_a.STATUS = 1
l_e = m.MV(value = 0.001, lb = 0, ub = 10**6)
l_e.STATUS = 1
l_i = m.MV(value = 1, lb = 0, ub = 10**6)
l_i.STATUS = 1
#variables and initial guesses
a = m.Var(value = oe_i[0], lb = oe_i[0] - 6378000, ub = oe_f[0] + 6378000, name = 'sma')
e = m.Var(value = oe_i[1], lb = 0, ub = 1, name = 'ecc')
i = m.Var(value = oe_i[2], lb = 0, ub = math.radians(90), name = 'inc')
Om = m.Var(value = oe_i[3], lb = 0, ub = math.radians(360), name = 'raan')
om = m.Var(value = oe_i[4], lb = 0, ub = math.radians(360), name = 'ap')
nu = m.Var(value = oe_i[5], lb = 0, ub = math.radians(360), name = 'ta')
mass = m.Var(value = m0, lb = 0, ub = m0, name = 'mass')
#objective function
tf = m.FV(1.2 * ((m0 - mass)/dm), lb = 0, ub = t_max)
tf.STATUS = 1
#propagate
t = 0
while t <= tf:
    deltas, Tp = propagate(a, e, i, Om, om, nu, mass)
    m.Equation(Tp * a.dt() == (deltas[0] * delta_t * deltas[7]))
    m.Equation(Tp * e.dt() == (deltas[1] * delta_t * deltas[7]))
    m.Equation(Tp * i.dt() == (deltas[2] * delta_t * deltas[7]))
    m.Equation(Tp * Om.dt() == (deltas[3] * delta_t * deltas[7]))
    m.Equation(Tp * om.dt() == (deltas[4] * delta_t * deltas[7]))
    m.Equation(nu.dt() == deltas[5] * delta_t)
    m.Equation(Tp * mass.dt() == (deltas[6] * delta_t * deltas[7]))
    t = t + delta_t
#starting constraints
m.fix(a, pos = 0, val = oe_i[0])
m.fix(e, pos = 0, val = oe_i[1])
m.fix(i, pos = 0, val = oe_i[2])
m.fix(Om, pos = 0, val = oe_i[3])
m.fix(om, pos = 0, val = oe_i[4])
m.fix(nu, pos = 0, val = oe_i[5])
m.fix(mass, pos = 0, val = m0)
#boundary constraints
m.fix(a, pos = len(m.time) - 1, val = oe_f[0])
m.fix(e, pos = len(m.time) - 1, val = oe_f[1])
m.fix(i, pos = len(m.time) - 1, val = oe_f[2])
m.fix(Om, pos = len(m.time) - 1, val = oe_f[3])
m.fix(om, pos = len(m.time) - 1, val = oe_f[4])
m.fix(nu, pos = len(m.time) - 1, val = oe_f[5])
m.fix(mass, pos = len(m.time) - 1, val = 0)
m.time = np.linspace(0,0.2,100)
m.Obj(tf)
m.options.IMODE = 6 # non-linear model
m.options.SOLVER = 3 # solver (IPOPT)
m.options.MAX_ITER = 15000
m.options.RTOL = 1e-7
m.options.OTOL = 1e-7
m.open_folder()
m.solve(display=False) # Solve
print('Optimal time: ' + str(tf.value[0]))
m.solve()
m.open_folder('infeasibilities.txt')
</code></pre>
<p>I know that the problem I am having is related to the propagate part. I want to propagate from the orbit corresponding to <strong>oe_i</strong> in 3600 s increments (<strong>delta_t</strong>) from time 0 to the final time (which is my objective function), achieving the orbit corresponding to <strong>oe_f</strong>, using the <strong>propagate</strong> function, which relies on the variations in the manipulation variables.</p>
<p>I had originally tried propagating without any sort of loop to go from 0 to the end time, and the model ran fine, but never found a solution. Looking at that code, I realized that it was not actually propagating the orbit over a long period of time, which is why I added the loop. I tried a <strong>for loop</strong> first, but had similar problems with errors about <strong>tf</strong> not being an <strong>int</strong>. I tried looping time to reach the calculated value for <strong>tf</strong> (<strong>1.2 * ((m0 - mass)/dm)</strong>), but had the problem with <strong>mass</strong> not being able to be used in the calculations.</p>
<p>If anyone is able to point out where I am going wrong in trying to do my propagation or to an example similar to what I am attempting, I would appreciate it.</p>
<p>Thanks!</p>
| <python><gekko><ipopt> | 2023-09-29 21:17:28 | 1 | 437 | pbhuter |
77,204,672 | 1,173,913 | Paho MQTT in browser with jupyterlite | <p>I'm attempting to run Paho MQTT in a browser using jupyterlite and pyodide. I was able to install paho using micropip (see example in link below). But the paho client times out when attempting to connect. I tried using websockets but this also failed with timeout. I tried ports 443, 9001 but that failed.</p>
<blockquote>
<p>client = mqtt_client.Client(client_id, transport='websockets')</p>
</blockquote>
<p>Broker is <a href="https://mqtt.eclipseprojects.io/" rel="nofollow noreferrer">https://mqtt.eclipseprojects.io/</a></p>
<p>Here is the notebook <a href="https://discoverling.github.io/morse-decoder/lab/index.html" rel="nofollow noreferrer">example</a> (see morse_decoder.ipynb) which you can run on a browser.</p>
<p>I import as follows:</p>
<pre><code>import pyodide_kernel
import micropip
await micropip.install(['numpy', 'scikit-learn', 'https://www.piwheels.org/simple/paho-mqtt/paho_mqtt-1.6.1-py3-none-any.whl#sha256=c44b3dd1b298894c44e3e842f7b0ca3ebe01f628a6f229c881b2324613d7bba7'])
pyodide_kernel.__version__
</code></pre>
<p>Client code:</p>
<pre><code>from paho.mqtt import client as mqtt_client
# MQTT settings
client_id = f'publish-{random.randint(0, 1000)}'
def connect_mqtt(broker, port):
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id, transport='websockets')
    client.on_connect = on_connect
    client.connect(broker, port)
    return client
</code></pre>
<p>Thanks for your assistance.</p>
| <python><mqtt><paho><pyodide><jupyterlite> | 2023-09-29 20:37:19 | 0 | 581 | Graham |
77,204,626 | 2,034,048 | Flask 404 errors | <p>I am getting 404 errors from a Flask app with Blueprints. This is on Windows 10 running the latest PyCharm and python 3.11.</p>
<p>I have seen several, usualy many years old, version of this question. I have a Flask app that was not complete but was "working" in early development. I was put on a series of other projects and am now back on the Flask app again. I updated python, Flask and its dependencies.</p>
<p>When I launch the app in the PyCharm debugger, it runs and shows me the expected startup info:</p>
<pre><code>FLASK_APP = my_app/my_app.py
FLASK_ENV = development
FLASK_DEBUG = 0
In folder C:/Users/ME/PycharmProjects/Project_folder
import sys; print('Python %s on %s' % (sys.version, sys.platform))
C:\Users\ME\PycharmProjects\Project_folder\venv\Scripts\python.exe C:/JetBrains/PyCharm-2023.2.1/plugins/python/helpers/pydev/pydevd.py --module --multiprocess --qt-support=auto --client 127.0.0.1 --port 50785 --file flask run
Connected to pydev debugger (build 232.9559.58)
[2023-09-29 14:31:12,134] INFO in my_app: STARTING APP
* Serving Flask app 'my_app/my_app.py'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
</code></pre>
<p>My project layout -- edited for brevity</p>
<pre><code>:.
| my_app.py
| CONFIG.ini
| requirements.txt
+---app
| | local_settings.py
| | settings.py
| | __init__.py
| +---models
| +---static
| +---templates
| +---views
| | | admin.py
| | | API.py
| | | main.py
| | | utilities.py
| | | __init__.py
</code></pre>
<p>I define Blueprints in: <code>admin.py</code>, <code>API.py</code> and <code>main.py</code>. The admin and API sections are only stubs at this moment.</p>
<p><code>my_app.py</code> looks like:</p>
<pre><code>from app import create_app

my_app = None
try:
    my_app = create_app()
    my_app.logger.info(f'STARTING APP')
except Exception as e:
    msg = repr(e)
    if my_app:
        my_app.logger.error(f'{repr(msg)}')
    print(repr(msg))
</code></pre>
<p><code>app/__init__.py</code> looks like the following. Notice the <code>register_blueprints(app)</code> call in <code>create_app()</code>; it comes from <code>app/views/utilities.py</code> below. It is registering the blueprints because I can put a break point after the function call and see them in the <code>app</code> object along with the rules.</p>
<pre><code>import os
from dotenv import load_dotenv
from flask import Flask
from flask_mail import Mail
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy
from flask_user import UserManager
from flask_wtf.csrf import CSRFProtect
from app.models.user import User, UserRegisterForm, UserProfileForm, UserLoginForm
from app.views.utilities import register_api, register_blueprints
# Instantiate Flask
app = Flask(__name__)
csrf_protect = CSRFProtect()
mail = Mail()
db = SQLAlchemy()
migrate = Migrate()
user_manager = None
def create_app():
    try:
        load_dotenv()
        app.config.from_object("app.settings")
        app.config.from_object("app.local_settings")

        # Set up an error-logger to send emails to app.config.ADMINS
        init_file_error_handler(app)
        init_email_error_handler(app)

        csrf_protect.init_app(app)
        mail.init_app(app)
        db.init_app(app)
        migrate.init_app(app, db)

        # Setup Flask-User
        global user_manager
        user_manager = UserManager(app, db, User)
        user_manager.RegisterFormClass = UserRegisterForm
        user_manager.EditUserProfileFormClass = UserProfileForm
        user_manager.LoginFormClass = UserLoginForm

        register_blueprints(app)
        return app
    except Exception as e:
        print(repr(e))
        raise e


@app.context_processor
def context_processor():
    return dict(user_manager=user_manager)


def init_file_error_handler(my_app):
    ...


def init_email_error_handler(my_app):
    ...
</code></pre>
<p><code>utilities.py</code>:</p>
<pre><code>from app.views.API import API_blueprint
from app.views.admin import admin_blueprint
from app.views.main import main_blueprint

def register_blueprints(my_app):
    try:
        my_app.register_blueprint(main_blueprint)
        my_app.register_blueprint(admin_blueprint)
        my_app.register_blueprint(API_blueprint)
    except Exception as e:
        raise

def register_api(my_app, view, endpoint, url, pk="id", pk_type="uuid"):
    ...
</code></pre>
<p><code>main.py</code> has the basic home page and other things:</p>
<pre><code>import os
import random
from datetime import timedelta

from flask import redirect, render_template, session
from flask import request, url_for, Blueprint
from flask_user import current_user, login_required

main_blueprint = Blueprint("main", __name__, template_folder="templates")

# The Home page is accessible to anyone
@main_blueprint.route("/")
def home_page():
    from app import app
    ...
    return render_template("main/home_page.html")
</code></pre>
<p>The PyCharm debugger starts; I can break on the <code>create_app()</code> call in <code>my_app.py</code>, step over it, and examine the <code>my_app</code> object, which seems to be correctly populated as far as I can tell (without being an expert on such things).</p>
<p>Clicking the URL in the PyCharm debug window brings up a browser, but the page reports a 404 error. No other errors are thrown as far as I can tell.</p>
<p>Ideas?</p>
<p><strong>Minor edit</strong> FLASK_DEBUG should have been set to 0 above.</p>
<p><strong>Update</strong>
I just learned about the <code>flask routes</code> command. In the debugger I can see that the blueprints have been added, but when I run the routes command I get almost nothing:</p>
<pre><code>> flask --app .\my_app\app routes
Endpoint Methods Rule
-------- ------- -----------------------
static GET /static/<path:filename>
</code></pre>
| <python><flask><pycharm> | 2023-09-29 20:25:45 | 1 | 2,579 | 7 Reeds |
77,204,503 | 5,420,846 | Polars - Split LazyFrame collection based on key column | <h2>Problem</h2>
<p>I have a fairly large LazyFrame that crashes even if I use <code>.collect(streaming=True)</code>. In order to split up the calculation into smaller subsets handleable in memory, I'd like to separate the table into multiple tables, based on the value of a "Key Column".</p>
<hr />
<h2>Example</h2>
<pre><code>shape: (3, 213,231,322)
┌───────────────────────────────┬─────────┬────────────┐
│ Timestamp ┆ Temp ┆ Location │
│ --- ┆ --- ┆ --- │
│ pl.Datetime ┆ str ┆ str │
╞═══════════════════════════════╪═════════╪════════════╡
│ 2023-08-01 23:06:99.512383 ┆ 19 ┆ Dallas │
│ 2023-08-01 23:21:01.818792 ┆ 20 ┆ Austin │
│ ... ┆ ... ┆ ... │
│ 2023-08-30 23:23:00.238093 ┆ 21 ┆ New York │
└───────────────────────────────┴─────────┴────────────┘
</code></pre>
<p>To put things into concrete terms: looking at the table above, I have a massive LazyFrame with high-fidelity weather data for a bunch of cities. With one column being "Location", I'd like to split that big one into <code>N</code> smaller ones, one for each city. Then I would run <code>.collect_all(streaming=True)</code> on those instead. The keys aren't known beforehand, so we can't hard-code the for loop.</p>
<hr />
<h2>What I Have</h2>
<p>The way I'm doing this right now is I run an initial collection that is heavily filtered to get a set of unique keys.</p>
<p>Then I run <code>.collect_all()</code> on a list of LazyFrames filtered by key. This is a stripped-down, minimally runnable (sort of) version of exactly how my code works right now:</p>
<pre><code>import polars as pl

# Load initial master data, use .cache() to speed up individual symbols' collection
master_lf = pl.scan_csv(...).cache()

# Get unique keys:
keys = master_lf.select(pl.col('key_col')).unique(subset='key_col').collect()['key_col']

# Create Symbol LazyFrames
symbol_lfs = [master_lf.filter(pl.col('key_col') == k) for k in keys]

# Collect Them All
symbol_dfs = pl.collect_all(symbol_lfs, streaming=True)
</code></pre>
<hr />
<h2>What I'd Like</h2>
<p>That initial collect is annoying. Seems like we can optimize even further no? Is there a way to avoid it?</p>
<hr />
<h2>Edit:</h2>
<p>I found something called <a href="https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.partition_by.html" rel="nofollow noreferrer"><code>partition_by</code></a> that does exactly that, but for DataFrames. Seems like Polars needs a streaming version of that.</p>
| <python><python-polars> | 2023-09-29 19:55:07 | 1 | 1,988 | Joe |
77,204,421 | 10,197,791 | How can I upload a file to Synology NAS using python with Synology NAS API? | <p>I am trying to create a python script to upload an xlsx file to synology NAS using the official Synology NAS API and the python requests module. I have been able to successfully log in and display filestation information using the SYNO.API.Auth and SYNO.FileStation.List apis respectively, but for some reason I can not get the SYNO.FileStation.Upload api to work. Here is my current attempt:</p>
<pre><code>file_path = 'path to xlsx file'

# API endpoint for file upload
upload_url = f'{url}/webapi/entry.cgi'

# The file to be uploaded
files = {'file': open(file_path, 'rb')}

# Parameters for the file upload request
upload_params = {
    'api': 'SYNO.FileStation.Upload',
    'version': '2',
    'method': 'upload',
    'path': '/home/Drive',
    'create_parents': 'true',
    '_sid': sid,
}

try:
    # Sending the file upload request
    upload_response = requests.post(upload_url, params=upload_params, files=files)
    upload_data = upload_response.json()
except Exception as e:
    print(f'Error uploading file: {e}')
</code></pre>
<p>When I execute the script it seems to just hang indefinitely when it tries to make the POST request. Am I missing something or what other approaches can I try to upload a file to synology NAS using python?</p>
| <python><python-requests><synology><nas> | 2023-09-29 19:37:04 | 1 | 375 | NicLovin |
77,204,417 | 3,291,993 | How to show xaxis ticklabels in a grid | <p>How to show xaxis ticklabels below the grid?</p>
<pre><code>ax.set_xticklabels([0.1, '', '', '', 0.5, '', '', '', '', 1.0], minor=True)
</code></pre>
<p>When I run the code I get the figure below.</p>
<p><a href="https://i.sstatic.net/iRLhP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iRLhP.png" alt="enter image description here" /></a></p>
<p>If I change the line of code from</p>
<pre><code>ax.set_xticks(np.arange(0, len(diameter_labels), 1))
</code></pre>
<p>to</p>
<pre><code>ax.set_xticks(np.arange(0, len(diameter_labels), 1), minor=True)
</code></pre>
<p>This time I got the following figure.</p>
<p><a href="https://i.sstatic.net/8Iq8f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Iq8f.png" alt="enter image description here" /></a></p>
<p>In fact, I want to get the following figure.</p>
<p><a href="https://i.sstatic.net/1BCzA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1BCzA.jpg" alt="enter image description here" /></a></p>
<pre><code>import matplotlib.pyplot as plt
import os
import numpy as np

if __name__ == "__main__":
    fig, ax = plt.subplots(figsize=(10, 3))

    diameter_labels = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
    row_labels = ['circle']

    ax.grid(which="major", color="white", linestyle='-', linewidth=3)
    ax.set_aspect(1.0)

    for row_index, row_label in enumerate(row_labels):
        for diameter_index, diameter_label in enumerate(diameter_labels):
            circle = plt.Circle((diameter_index + 0.5, row_index + 0.5), radius=(diameter_label / (2 * 1.09)), color='gray', fill=True)
            ax.add_artist(circle)

    ax.set_xlim([0, len(diameter_labels)])
    ax.set_xticklabels([])
    ax.set_xticks(np.arange(0, len(diameter_labels), 1))
    ax.tick_params(axis='x', which='minor', length=0, labelsize=30)
    ax.set_xticklabels([0.1, '', '', '', 0.5, '', '', '', '', 1.0], minor=True)
    ax.xaxis.set_ticks_position('bottom')
    ax.tick_params(
        axis='x',        # changes apply to the x-axis
        which='major',   # both major and minor ticks are affected
        bottom=False,    # ticks along the bottom edge are off
        top=False)       # labels along the bottom edge are off
    ax.xaxis.set_label_position('top')
    ax.set_xlabel('Test', fontsize=40, labelpad=5)

    ax.set_ylim([0, len(row_labels)])
    ax.set_yticklabels([])
    ax.tick_params(axis='y', which='minor', length=0, labelsize=12)
    ax.set_yticks(np.arange(0, len(row_labels), 1))
    ax.tick_params(
        axis='y',
        which='major',
        left=False)

    ax.grid(which='major', color='black', linestyle='-', linewidth=1)

    filename = 'test.png'
    figureFile = os.path.join('/Users/burcakotlu/Desktop', filename)
    fig.savefig(figureFile)
    plt.close()
</code></pre>
| <python><matplotlib><axis-labels> | 2023-09-29 19:35:33 | 1 | 1,147 | burcak |
77,204,385 | 9,766,795 | Signature for this request is not valid. - Binance API | <p>I'm getting the following error when using the binance-connector with python:</p>
<pre><code> "price": 20348.91, "quantity": 4.914e-05, "side": "BUY", "symbol": "BTCUSDT", "timeInForce": "GTC", "timestamp": 1696014956799, "type": "LIMIT", "signature": "..."}}
{'error': {'code': -1022, 'msg': 'Signature for this request is not valid.'}
</code></pre>
<p>This is my python code (partially):</p>
<pre class="lang-py prettyprint-override"><code>qty_to_buy: float = self.USER_TRADE_BALANCE / current_bid_price
qty_to_buy = float("{0:.8f}".format(qty_to_buy))  # only 8 digits allowed after the comma

self.currentlyBuying = True
self.buying_bid_price = current_bid_price

self.binance_client.new_order(
    symbol=self.TRADE_SYMBOL,
    side="BUY",
    type="LIMIT",
    timeInForce="GTC",
    quantity=qty_to_buy,
    price=current_bid_price,
)
</code></pre>
<p><code>qty_to_buy</code> has 8 digits after the comma; there's nothing wrong with it, since I'm respecting the ticker rules. I'm trying to buy at the current bid price.</p>
<p>I'm using the Binance testnet, where I've created a new API key (and secret), so they are both correct.</p>
<p><em><strong>What am I doing wrong? How do I fix this?</strong></em>.</p>
<p><em><strong>I've tried</strong></em> making a new API key twice and nothing happened, I've increased recvWindow to 50000 and it still didn't work. What else am I doing wrong ?</p>
| <python><binance><python-binance> | 2023-09-29 19:27:14 | 1 | 632 | David |
77,204,366 | 9,112,151 | How to perform selectinload behind many-to-many? | <p>With code below:</p>
<pre><code>import logging

from faker import Faker
from sqlalchemy import create_engine, ForeignKey, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship, Session, selectinload

engine = create_engine("sqlite:///:memory:")
fake = Faker('ru_RU')

logging.basicConfig()
logger = logging.getLogger('sqlalchemy.engine')
logger.setLevel(logging.INFO)

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    first_name: Mapped[str]
    last_name: Mapped[str]
    permissions: Mapped[list["Permission"]] = relationship(secondary="user_permission", back_populates="users")

class Level(Base):
    __tablename__ = "level"

    id: Mapped[int] = mapped_column(autoincrement=True, primary_key=True)
    code: Mapped[str]
    permissions: Mapped["Permission"] = relationship(back_populates="level")

class Permission(Base):
    __tablename__ = "permission"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    code: Mapped[str]
    level_id: Mapped[int] = mapped_column(ForeignKey("level.id"))
    level: Mapped["Level"] = relationship(back_populates="permissions")
    users: Mapped[list["User"]] = relationship(secondary="user_permission", back_populates="permissions")

class UserPermission(Base):
    __tablename__ = "user_permission"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("user.id"))
    permission_id: Mapped[int] = mapped_column(ForeignKey("permission.id"))

Base.metadata.create_all(engine)

with Session(engine) as session:
    level = Level(code="red")
    permission1 = Permission(code="create", level=level)
    permission2 = Permission(code="delete", level=level)
    user = User(first_name=fake.first_name(), last_name=fake.last_name())
    user.permissions.extend([permission1, permission2])
    session.add(user)
    session.commit()

with Session(engine) as session:
    stmt = select(User).options(selectinload(Permission.level))  # this fails with error
    users = session.scalars(stmt).all()
    users[0].permissions[0].level  # this makes sql-request
</code></pre>
<p>I'd like to perform join <code>Permission.level</code>. But code:</p>
<pre><code>stmt = select(User).options(selectinload(Permission.level))
</code></pre>
<p>fails with error:</p>
<pre><code>Traceback (most recent call last):
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/sa/m2m.py", line 71, in <module>
    users = session.scalars(stmt).all()
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2337, in scalars
    return self._execute_internal(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2120, in _execute_internal
    result: Result[Any] = compile_state_cls.orm_execute_statement(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
    result = conn.execute(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1412, in execute
    return meth(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 483, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1627, in _execute_clauseelement
    compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 671, in _compile_w_cache
    compiled_sql = self._compiler(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 288, in _compiler
    return dialect.statement_compiler(dialect, self, **kw)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 1425, in __init__
    Compiled.__init__(self, dialect, statement, **kwargs)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 866, in __init__
    self.string = self.process(self.statement, **compile_kwargs)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 911, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 143, in _compiler_dispatch
    return meth(self, **kw)  # type: ignore  # noqa: E501
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 4576, in visit_select
    compile_state = select_stmt._compile_state_factory(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/base.py", line 678, in create_for_statement
    return klass.create_for_statement(statement, compiler, **kw)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 1116, in create_for_statement
    opt.process_compile_state(self)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/strategy_options.py", line 905, in process_compile_state
    self._process(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/strategy_options.py", line 1097, in _process
    loader.process_compile_state(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/strategy_options.py", line 1649, in process_compile_state
    keys = self._prepare_for_compile_state(
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/strategy_options.py", line 1994, in _prepare_for_compile_state
    self._raise_for_no_match(parent_loader, mapper_entities)
  File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/strategy_options.py", line 1566, in _raise_for_no_match
    raise sa_exc.ArgumentError(
sqlalchemy.exc.ArgumentError: Mapped class Mapper[Permission(permission)] does not apply to any of the root entities in this query, e.g. Mapper[User(user)]. Please specify the full path from one of the root entities to the target attribute.
</code></pre>
<p>What's the <code>full path</code>? How do I fix the error?</p>
| <python><sqlalchemy> | 2023-09-29 19:22:38 | 1 | 1,019 | Альберт Александров |
77,204,171 | 1,889,720 | How do I make a Dockerfile for Cloud Run? | <p>I am deploying and running Python Jobs to Cloud Run. I can successfully deploy and run my jobs, and they behave as expected.</p>
<p>I am adding <code>import requests</code> to my job, and it is giving me the error:</p>
<pre><code>Traceback (most recent call last):
File "/workspace/main.py", line 3, in <module>
import requests
ModuleNotFoundError: No module named 'requests'
</code></pre>
<p>Which led me to <a href="https://stackoverflow.com/a/60599991/1889720">https://stackoverflow.com/a/60599991/1889720</a>.</p>
<p>I created a <code>Dockerfile</code>:</p>
<pre><code>FROM python:3
RUN pip install requests
</code></pre>
<p>My job still successfully deploys and runs, and I no longer receive the error. However, my <code>main.py</code> no longer runs. It just immediately exits:</p>
<pre><code>Container called exit(0).
</code></pre>
<p>I get the same results of an early exit if my <code>Dockerfile</code> is simply:</p>
<pre><code>FROM python:3
</code></pre>
<p>It feels like it's overwriting something?</p>
<p>The addition of the <code>Dockerfile</code> is the only change I made, so I suspect I am creating it wrong. I have it in my <code>jobs</code> directory alongside <code>main.py</code>.</p>
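<p>For reference, my (possibly wrong) understanding is that a Dockerfile usually also copies the source into the image and declares a command to run, something like the sketch below; the paths and entrypoint are guesses on my part:</p>

```dockerfile
FROM python:3

WORKDIR /app

# Copy the job source into the image (assuming main.py lives next to this Dockerfile)
COPY . .

RUN pip install requests

# Declare what the container should execute when it starts
CMD ["python", "main.py"]
```

<p>Without a <code>CMD</code> or <code>ENTRYPOINT</code>, I'd expect the container to have nothing to execute, which might explain the immediate <code>exit(0)</code>, but I'm not sure.</p>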
<p>How do I correctly set up a <code>Dockerfile</code> for a Cloud Run Job?</p>
| <python><python-requests><dockerfile><google-cloud-run> | 2023-09-29 18:40:13 | 2 | 7,436 | Evorlor |
77,203,986 | 20,530,490 | Airflow Snowflake Operator Error - AttributeError: 'dict' object has no attribute 'sfqid' | <p>I've recently upgraded our MWAA instance from Airflow 2.2.2 to 2.4.3 but i'm now getting the below error whenever I use the <a href="https://airflow.apache.org/docs/apache-airflow-providers-snowflake/3.0.0/_api/airflow/providers/snowflake/operators/snowflake/index.html#airflow.providers.snowflake.operators.snowflake.SnowflakeCheckOperator" rel="nofollow noreferrer">SnowflakeCheckOperator</a> or <a href="https://airflow.apache.org/docs/apache-airflow/1.10.14/_api/airflow/hooks/dbapi_hook/index.html#airflow.hooks.dbapi_hook.DbApiHook.get_first" rel="nofollow noreferrer">get_first</a> function (which the Snowflake python connector inherits from dbapi_hook).</p>
<pre><code>query_id = cur.sfqid
AttributeError: 'dict' object has no attribute 'sfqid'
</code></pre>
<p>What's odd is I don't get this error when using the SnowflakeOperator or the run function (dbapi_hook).</p>
<p>Below is my test DAG code, requirements.txt, and errors:-</p>
<h3>🧱 DAG</h3>
<pre><code>from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator

with DAG(
    "TEST_DAG",
    schedule_interval="*/10 * * * *",
    start_date=datetime(2023, 9, 29),
    catchup=False,
) as dag:
    # Check to see if a Snowflake table contains this DAG name.
    check_snowflake_table = SnowflakeCheckOperator(
        task_id="check_snowflake_table",
        snowflake_conn_id="snowflake_conn-non-live",
        sql="SELECT COUNT(*) FROM DEV_MONITORING.NZO_STATE.TBL_DAG_MESSAGE_QUEUE WHERE DAG_NAME = 'TEST_DAG'",
    )
</code></pre>
<h3>✏️ Log</h3>
<pre><code>[2023-09-29, 17:43:54 UTC] {{sql.py:780}} INFO - Executing SQL check: SELECT COUNT(*) FROM DEV_MONITORING.NZO_STATE.TBL_DAG_MESSAGE_QUEUE WHERE DAG_NAME = 'TEST_DAG'
[2023-09-29, 17:43:54 UTC] {{base.py:71}} INFO - Using connection ID 'snowflake_conn-non-live' for task execution.
[2023-09-29, 17:43:54 UTC] {{connection.py:329}} INFO - Snowflake Connector for Python Version: 3.2.0, Python Version: 3.10.8, Platform: Linux-4.14.322-244.536.amzn2.x86_64-x86_64-with-glibc2.26
[2023-09-29, 17:43:54 UTC] {{connection.py:1069}} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2023-09-29, 17:43:54 UTC] {{connection.py:1087}} INFO - Setting use_openssl_only mode to False
[2023-09-29, 17:43:55 UTC] {{cursor.py:804}} INFO - query: [ALTER SESSION SET autocommit=False]
[2023-09-29, 17:43:55 UTC] {{cursor.py:817}} INFO - query execution done
[2023-09-29, 17:43:55 UTC] {{cursor.py:959}} INFO - Number of results in first chunk: 1
[2023-09-29, 17:43:55 UTC] {{snowflake.py:328}} INFO - Running statement: SELECT COUNT(*) FROM DEV_MONITORING.NZO_STATE.TBL_DAG_MESSAGE_QUEUE WHERE DAG_NAME = 'TEST_DAG', parameters: None
[2023-09-29, 17:43:55 UTC] {{cursor.py:804}} INFO - query: [SELECT COUNT(*) FROM DEV_MONITORING.NZO_STATE.TBL_DAG_MESSAGE_QUEUE WHERE DAG_NA...]
[2023-09-29, 17:43:55 UTC] {{cursor.py:817}} INFO - query execution done
[2023-09-29, 17:43:55 UTC] {{cursor.py:959}} INFO - Number of results in first chunk: 1
[2023-09-29, 17:43:55 UTC] {{snowflake.py:338}} INFO - Statement execution info - COUNT(*)
[2023-09-29, 17:43:55 UTC] {{connection.py:659}} INFO - closed
[2023-09-29, 17:43:55 UTC] {{connection.py:665}} INFO - No async queries seem to be running, deleting session
[2023-09-29, 17:43:55 UTC] {{taskinstance.py:1851}} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/common/sql/operators/sql.py", line 781, in execute
records = self.get_db_hook().get_first(self.sql, self.parameters)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/common/sql/hooks/sql.py", line 264, in get_first
return self.run(sql=sql, parameters=parameters, handler=fetch_one_handler)
File "/usr/local/airflow/.local/lib/python3.10/site-packages/airflow/providers/snowflake/hooks/snowflake.py", line 341, in run
query_id = cur.sfqid
AttributeError: 'dict' object has no attribute 'sfqid'
</code></pre>
<h3>📄 Requirements.txt</h3>
<pre><code>apache-airflow[amazon,celery,ftp,http,imap,sqlite,mysql,opsgenie]==2.4.3
gnupg==2.3.1
xmltodict
pydantic
setuptools
pipenv
apache-airflow-providers-snowflake==3.0.0
apache-airflow-providers-common-sql==1.7.2
sqlparse==0.4.2
snowflake-sqlalchemy==1.5.0
snowflake-connector-python==3.2.0
apache-airflow-providers-http
apache-airflow-providers-slack
apache-airflow-providers-slack[http]
</code></pre>
<p>I've checked for package conflicts, upgraded the connector, etc. As mentioned, I can connect fine to Snowflake and run SQL queries, but some functions have stopped working because they can't find the <code>sfqid</code>... But when I use the <a href="https://airflow.apache.org/docs/apache-airflow/1.10.14/_api/airflow/hooks/dbapi_hook/index.html#airflow.hooks.dbapi_hook.DbApiHook.run" rel="nofollow noreferrer">run</a> function, it works fine and it even prints:-</p>
<pre><code>[2023-09-29, 17:16:28 UTC] {{snowflake.py:343}} INFO - Snowflake query id: 01af512c-0202-dc1e-0001-8052018e474a
</code></pre>
<p>They've not depreciated the functions i'm using, so i'm really stuck!</p>
| <python><sqlalchemy><snowflake-cloud-data-platform><airflow><mwaa> | 2023-09-29 18:03:43 | 1 | 371 | CallumO |
77,203,958 | 2,410,605 | Using Python Selenium 4.13, still getting "your version of chrome not supported" message | <p>I just updated and verified that I am using Selenium 4.13. It's my understanding that Selenium Manager has been used for everything after Selenium 4.10, so I wouldn't have to worry about downloading the latest chromedriver every time a new Chrome version came out. I'm still in the learning stages with Python, so apparently I'm just not understanding something correctly.</p>
<p>Below is a screenshot of what I'm seeing. I'm importing the Selenium webdriver, and the first thing I do is assign it to a variable, but it comes back telling me that my current browser version isn't supported. What am I not understanding? Any help would be very much appreciated.</p>
<p><a href="https://i.sstatic.net/b5M1j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b5M1j.png" alt="enter image description here" /></a></p>
| <python><selenium-webdriver><selenium-chromedriver><seleniummanager> | 2023-09-29 17:58:51 | 1 | 657 | JimmyG |
77,203,897 | 5,084,560 | changing early_stopping_rounds on xgboost doesn't affect the performance. what's wrong? | <p>I have a binary classification dataset and I use XGBoost. I changed the <code>early_stopping_rounds</code> value and refit; it gave the same results every time. I've shared screenshots below. What is the reason for the identical results?</p>
<p><strong>early_stopping_rounds=10</strong>
<a href="https://i.sstatic.net/KPnpz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPnpz.png" alt="early_stopping_rounds=10" /></a></p>
<p><strong>early_stopping_rounds=16</strong>
<a href="https://i.sstatic.net/HX52T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HX52T.png" alt="early_stopping_rounds=16" /></a></p>
<p><strong>early_stopping_rounds=30</strong>
<a href="https://i.sstatic.net/8srfe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8srfe.png" alt="early_stopping_rounds=30" /></a></p>
<p><strong>and lastly epoch eval_metric plots:</strong>
<a href="https://i.sstatic.net/C3pPe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C3pPe.png" alt="enter image description here" /></a></p>
| <python><optimization><xgboost><hyperparameters> | 2023-09-29 17:46:23 | 1 | 305 | Atacan |
77,203,896 | 1,035,897 | List of Union only accepts first type of union with Pydantic | <p>With the following python type definitions:</p>
<pre><code>import pydantic
from typing import List, Union
import sys
import pprint

class TypeA(pydantic.BaseModel):
    name: str = "Type A"
    type_a_spesific: float = 1337

class TypeB(pydantic.BaseModel):
    name: str = "Type B"
    type_b_spesific: str = "bob"

class ReturnObject(pydantic.BaseModel):
    parameters: List[Union[TypeA, TypeB]]

parameters = list()
parameters.append(TypeA(name="item 1", type_a_spesific=69))
parameters.append(TypeB(name="item 2", type_b_spesific="lol"))

ret = ReturnObject(parameters=parameters)

print(f"Python version: {pprint.pformat(sys.version_info)}")
print(f"Pydantic version: {pydantic.__version__}")
print(ret.json(indent=3))
print(ret)
</code></pre>
<p>The version output looks like this:</p>
<pre><code>Python version: sys.version_info(major=3, minor=11, micro=5, releaselevel='final', serial=0)
Pydantic version: 1.10.11
</code></pre>
<p>I would expect the parameter list to contain two objects with different types, like this:</p>
<h2>Expected</h2>
<pre><code>{
"parameters": [
{
"name": "item 1",
"type_a_spesific": 69
},
{
"name": "item 2",
"type_b_spesific": "lol"
}
]
}
parameters=[TypeA(name='item 1', type_a_spesific=69.0), TypeB(name='item 2', type_b_spesific="lol")]
</code></pre>
<p>However what comes out instead is this:</p>
<h2>Actual</h2>
<pre><code>{
"parameters": [
{
"name": "item 1",
"type_a_spesific": 69.0
},
{
"name": "item 2",
"type_a_spesific": 1337
}
]
}
parameters=[TypeA(name='item 1', type_a_spesific=69.0), TypeA(name='item 2', type_a_spesific=1337)]
</code></pre>
<p>For some reason the <strong>List[ Union[ TypeA, TypeB ]]</strong> does not work. <em>What gives</em>?</p>
| <python><list><union><python-typing><pydantic> | 2023-09-29 17:46:08 | 1 | 9,788 | Mr. Developerdude |
77,203,810 | 2,520,186 | Pandas DataFrame: Replace a column by a list with a new column name | <p>I have a Pandas <code>DataFrame</code> and want to replace a column with a new one. The column should have a new name and a list gives its content. I can do this in two different ways. See the code below.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'Name': ['Alice','Bob','Charlie'],
'Age': [18,19,20]
})
print('df=\n',df,'\n')
marks = [50,60,90]
# Drop and add
df1 = df.drop(columns = ['Age'])
df1['Marks'] = marks
print('df1=\n',df1,'\n')
print()
# Rename and assign
df1 = df.rename(columns={'Age':'Marks'})
df1 = df1.assign(Marks=marks)
print('df1=\n',df1)
</code></pre>
<p>Output:</p>
<pre><code> df=
Name Age
0 Alice 18
1 Bob 19
2 Charlie 20
df1=
Name Marks
0 Alice 50
1 Bob 60
2 Charlie 90
df1=
Name Marks
0 Alice 50
1 Bob 60
2 Charlie 90
</code></pre>
<p>My question is: is there a more elegant way, or a one-liner, to perform this "replace column" operation?</p>
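<p>One chained form I tried is below, though I'm not sure it's the elegant idiom I'm after:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Name': ['Alice', 'Bob', 'Charlie'],
    'Age': [18, 19, 20]
})
marks = [50, 60, 90]

# Chain drop and assign: remove the old column and add the new one in a single expression
df1 = df.drop(columns=['Age']).assign(Marks=marks)
print(df1)
```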
| <python><python-3.x><pandas><dataframe> | 2023-09-29 17:29:35 | 3 | 2,394 | hbaromega |
77,203,702 | 6,260,154 | Get only folder names using glob.glob * pattern | <p>I am using the line of code below to recursively find the folders that contain a file whose name ends with <code>.log</code>.</p>
<pre><code>glob.iglob(f"{ROOT}/**/*.log", recursive = True)
</code></pre>
<p>Now, this code gives me an iterator that yields complete paths. But I want to return only the folder names (relative to <code>ROOT</code>), not the whole path.</p>
<p>If this is how my directory looks like:</p>
<pre><code><ROOT>/
├── a/
│ ├── a.log
├── b/
│ └── b.log
├── c/
│ ├── c.log
│ └── cd/
│ └── cd.log
</code></pre>
<p>I want only the folder names in list like this:</p>
<pre><code>['a', 'b', 'c', 'c/cd']
</code></pre>
<p>Is there a way to get only folder names?</p>
| <python><glob> | 2023-09-29 17:09:17 | 3 | 1,016 | Tony Montana |
77,203,666 | 1,856,291 | Is it possible to downcast SwigObject to a concrete type? | <p>I have a couple of C files:</p>
<pre class="lang-c prettyprint-override"><code>/* mid.h */
#ifndef mid_h
#define mid_h

#include <stdlib.h>

typedef struct PtrRec *Ptr, PtrStruct;

#endif /* mid_h */
</code></pre>
<pre class="lang-c prettyprint-override"><code>/* left.h */
#ifndef left_h
#define left_h

#include "mid.h"

Ptr create_left();
void print_left(Ptr);

#endif /* left_h */
</code></pre>
<pre class="lang-c prettyprint-override"><code>/* left.c */
#include "left.h"

#include <stdio.h>

struct PtrRec {
    Ptr next;
    int g;
};

void print_left(Ptr l) {
    printf("%zu left{%p}->%p %d\n", sizeof(*l), l, l->next, l->g);
}

Ptr create_left() {
    Ptr rec;
    rec = calloc(1, sizeof(*rec));
    rec->g = 33;
    return rec;
}
</code></pre>
<pre class="lang-c prettyprint-override"><code>/* right.h */
#ifndef right_h
#define right_h

#include "mid.h"

Ptr create_right();
void print_right(Ptr);

#endif /* right_h */
</code></pre>
<pre class="lang-c prettyprint-override"><code>/* right.c */
#include "right.h"

#include <stdio.h>

struct PtrRec {
    int i, j, k;
    Ptr next;
};

void print_right(Ptr r) {
    printf("%zu right {%p}->%p %d %d %d\n", sizeof(*r), r, r->next, r->i, r->j, r->k);
}

Ptr create_right() {
    Ptr rec;
    rec = calloc(1, sizeof(*rec));
    rec->i = 31;
    rec->j = 34;
    rec->k = 36;
    return rec;
}
</code></pre>
<p>I am making a library and importing the interface via SWIG.</p>
<pre class="lang-c prettyprint-override"><code>%module asd

%{
#include "mid.h"
#include "left.h"
#include "right.h"
%}

%include "mid.h"
%include "left.h"
%include "right.h"

%init %{
Py_Initialize();
%}
</code></pre>
<p>And assemble it via:</p>
<pre class="lang-bash prettyprint-override"><code>PYTHON_INCLUDE="/usr/include/python3.11/"

swig -v -python -o asd_wrap.c all.i
clang --shared -I${PYTHON_INCLUDE} -o _asd.so left.c right.c asd_wrap.c
</code></pre>
<p>So I can import it from python and call some functions:</p>
<pre class="lang-py prettyprint-override"><code>from asd import *
left = create_left()
right = create_right()
print_left(left)
print_right(right)
</code></pre>
<p>So, is there a way to extend this code to make the two <code>PtrRec</code> variants distinguishable from the Python side? Is there any way to downcast them and attach an extra method accessible from Python for both, for example a <code>get_size</code> method so they could at least print their sizes? Is this possible with such a setup, or will it require actually making them structures with different names and explicitly putting them into the header files?</p>
| <python><c><pointers><polymorphism><swig> | 2023-09-29 17:01:53 | 1 | 529 | Sugar |
77,203,655 | 12,407,899 | How to draw a box plot with white face color on top of a violin plot and swarm plot | <p>I want to draw the following box-plot with white face color overlaid on a violin plot and swarmplot, something similar to the following plot:</p>
<p><a href="https://i.sstatic.net/aeeW3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aeeW3.png" alt="enter image description here" /></a></p>
<p>This is the following code that I was trying:</p>
<pre><code>ax = sns.violinplot(x="variable",
y="value",
data=pd.melt(box_plot_df),
inner=None,
width=0.9,
edgecolor="black",
scale="width",
palette="Set1")
border_colors = ["red", "blue"]
# Set transparency for all violins
for i in range(len(ax.collections)):
ax.collections[i].set_alpha(0.4)
ax.collections[i].set_edgecolor(border_colors[i])
ax = sns.boxplot(x="variable",
y="value",
width= 0.3,
medianprops={"color": "black", "linewidth" :2, "linestyle" : "--"},
boxprops={'facecolor': 'white'},
data=pd.melt(box_plot_df),
)
for patch in ax.patches:
r, g, b, a = patch.get_facecolor()
patch.set_facecolor((1.0, 1.0, 1.0, 1.0))
ax = sns.swarmplot(x="variable",
y="value",
hue=None,
data=pd.melt(box_plot_df),
palette="Set1",
size=5,
marker="o")
</code></pre>
<p>I was getting the following output:</p>
<p><a href="https://i.sstatic.net/DrKjf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DrKjf.png" alt="enter image description here" /></a></p>
<p>I was struggling to get an absolutely white face color for each box. I am not sure why I am not getting pure white.</p>
| <python><matplotlib><seaborn><boxplot><violin-plot> | 2023-09-29 16:59:25 | 0 | 1,014 | Aditya Bhattacharya |
77,203,546 | 8,040,369 | Label Encoding of Categorical values for Future df | <p>I am building a model where <code>LabelEncoding</code> of 2 categorical columns is a better approach. So I had implemented the same on the <code>train_df</code> and finalized the model.</p>
<p>And for predicting on the <code>test_df</code>, I would fit the encoder for the 2 categorical columns on <code>train_df</code> and then transform the values in <code>test_df</code>, something like below:</p>
<pre><code>from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_df)
le.transform(test_df)
</code></pre>
<p>Now I have to save and give the <code>.pkl</code> file of the model to some other team. If in this case, they want to use the model, do they have to fit the labelencoding on <code>train_df</code> again and then transform on their new data?</p>
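<p>The usual approach is to persist the <em>fitted</em> encoder together with the model, so the other team never has to re-fit on <code>train_df</code>. A hedged sketch of the idea, using <code>pickle</code> with a minimal stand-in class so it is runnable anywhere (a real fitted sklearn <code>LabelEncoder</code> pickles the same way; the class and file names here are illustrative):</p>

```python
import pickle

# Minimal stand-in with the same fit/transform surface as sklearn's
# LabelEncoder; a real fitted LabelEncoder instance pickles the same way.
class TinyLabelEncoder:
    def fit(self, values):
        self.classes_ = sorted(set(values))
        self._index = {v: i for i, v in enumerate(self.classes_)}
        return self

    def transform(self, values):
        return [self._index[v] for v in values]

le = TinyLabelEncoder().fit(["red", "green", "blue", "green"])

# Persist the *fitted* encoder next to the model .pkl ...
blob = pickle.dumps(le)

# ... so the other team just loads and transforms; no re-fitting needed.
le_loaded = pickle.loads(blob)
codes = le_loaded.transform(["blue", "red"])
```

<p>In practice you would <code>pickle.dump</code> each fitted <code>LabelEncoder</code> (or bundle the encoders and the model in one dict) and ship that file alongside the model's <code>.pkl</code>.</p>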
| <python><machine-learning><scikit-learn><label-encoding> | 2023-09-29 16:40:48 | 1 | 787 | SM079 |
77,203,530 | 4,529,487 | Python Popen Subprocess - interacting with executable not possible until stdin is closed | <p>I am trying to interact with a command-line executable, but I am stuck.
I would like to print out every line the .exe produces on stdout,
until a line contains "condition"; then it should write "y\n" to its stdin.</p>
<p>This works, but as soon as I write to stdin, I don't get any stdout output anymore. I only get the output if I close stdin, but afterwards I am not able to interact anymore because the .exe stops.</p>
<p>Any hints?
Thanks!</p>
<pre><code> proc = Popen([test.exe], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
while True:
line = proc.stdout.readline()
print(line)
if not line: break
if "condition" in line.decode(): break
proc.stdin.write('{}\n'.format("y\n").encode('utf-8'))
proc.stdin.close()
while True:
line = proc.stdout.readline()
print(line)
if not line: break
if "Group" in line.decode(): break
proc.wait()
</code></pre>
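<p>A common cause is pipe buffering: rather than closing stdin, write to it and <code>flush()</code>, and keep both ends line-buffered. A hedged sketch of that pattern against a tiny stand-in child process (the <code>python -c</code> child just mimics a prompt/response tool; a real .exe may additionally buffer its own output when piped, which this cannot fix from the parent side):</p>

```python
import sys
from subprocess import Popen, PIPE, STDOUT

# Stand-in child: prints a prompt, waits for an answer, prints again.
child_code = (
    "import sys\n"
    "print('condition met?'); sys.stdout.flush()\n"
    "ans = sys.stdin.readline().strip()\n"
    "print('got', ans); sys.stdout.flush()\n"
)

proc = Popen([sys.executable, "-u", "-c", child_code],
             stdin=PIPE, stdout=PIPE, stderr=STDOUT,
             text=True, bufsize=1)          # line-buffered text pipes

lines = []
line = proc.stdout.readline()
lines.append(line)
if "condition" in line:
    proc.stdin.write("y\n")
    proc.stdin.flush()                      # flush instead of close
line = proc.stdout.readline()
lines.append(line)
proc.wait()
```

<p>The key difference from the question's code is <code>proc.stdin.flush()</code> in place of <code>proc.stdin.close()</code>, so the pipe stays open for further interaction.</p>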
| <python><subprocess><stdout><stdin><popen> | 2023-09-29 16:38:42 | 1 | 675 | Manuel |
77,203,353 | 2,123,706 | Add column to existing SQL table using sqlalchemy | <p>I have a sqlalchemy connection and an existing table</p>
<pre><code>database_con = f'mssql://@{server}/{database}?driver={driver}'
engine = create_engine(database_con)
con = engine.connect()
tbl = '#temp'
df = pd.DataFrame({'col1':[1,2,3], 'col2':['a','b','c']})
df.to_sql(
name=tbl,
con=con,
if_exists="append",
index=False
)
</code></pre>
<p>I have new data, with new columns, and want to append it to the existing SQL table. I read <a href="https://stackoverflow.com/questions/7300948/add-column-to-sqlalchemy-table">add column to SQLAlchemy Table</a> stating that it is not possible, but that was 2011.</p>
<p>My workaround is to <code>pd.concat</code> the new df to the existing one, drop the existing table in SQL, then write the concatenated table in its place</p>
<pre><code>df = pd.DataFrame({'column1':['test_20230925'], 'column2':[234], 'column3':[234.56]})
df_new = pd.concat([data,df])
drop_query = f""" drop table {tbl}"""
con.execute(text(drop_query))
con.commit()
df_new.to_sql(
name=tbl,
con=con,
if_exists="append",
index=False
)
</code></pre>
<p>This is fine for small data, but I have some tables that are 10m rows, and it is horribly inefficient to read it in memory, drop, append, and write again to get the new columns in place.</p>
<p>Can anyone suggest if there is a more efficient method?</p>
<p>I also tried:</p>
<pre><code>alter_query = f""" alter table {tbl} add column New_Col varchar(255)"""
con.execute(text(alter_query))
con.commit()
</code></pre>
<p>with the error:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near the keyword 'column'. (156) (SQLExecDirectW)")
[SQL: alter table #temp add column New_Col varchar(255)]
</code></pre>
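<p>For what it's worth, T-SQL does not use the <code>COLUMN</code> keyword with <code>ADD</code>, which is what the syntax error points at; <code>ALTER TABLE #temp ADD New_Col varchar(255)</code> should alter the table in place without rewriting 10m rows. A hedged sketch of the same in-place idea against an in-memory SQLite database, so it is runnable anywhere (only the exact <code>ADD</code> syntax differs per dialect):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE temp (col1 INTEGER, col2 TEXT)")
con.executemany("INSERT INTO temp VALUES (?, ?)", [(1, "a"), (2, "b")])

# Add the new column in place -- no read/concat/drop/rewrite cycle.
# SQLite/MySQL accept ADD COLUMN; SQL Server wants just ADD:
#   ALTER TABLE #temp ADD New_Col varchar(255)
con.execute("ALTER TABLE temp ADD COLUMN New_Col TEXT")
con.execute("UPDATE temp SET New_Col = 'x' WHERE col1 = 1")

cols = [d[0] for d in con.execute("SELECT * FROM temp").description]
```

<p>Existing rows get <code>NULL</code> in the new column, which a single <code>UPDATE</code> (or the next append) can fill.</p>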
| <python><sqlalchemy> | 2023-09-29 16:05:26 | 1 | 3,810 | frank |
77,203,229 | 18,878,905 | Compare 1 independent vs many dependent variables using plotly in a horizontal plot | <p>I'm looking for the answer to the same question as on <a href="https://stackoverflow.com/questions/31966494/compare-1-independent-vs-many-dependent-variables-using-seaborn-pairplot-in-an-h">Compare 1 independent vs many dependent variables using seaborn pairplot in an horizontal plot</a> but using Plotly.</p>
<p>I've checked <a href="https://plotly.com/python/splom/" rel="nofollow noreferrer">https://plotly.com/python/splom/</a> but it doesn't seem to have 'y_vars' or 'x_vars' arguments.</p>
<p>Any help welcome!</p>
| <python><plotly> | 2023-09-29 15:43:22 | 0 | 389 | david serero |
77,203,214 | 1,820,665 | python-daemon pidfile is not created | <p>I am implementing a Python daemon that uses <code>python-daemon</code> to daemonize.</p>
<p>A stripped down minimal example would be:</p>
<pre><code>import daemon
import daemon.pidfile
import threading
import syslog
import signal
class Runner:
def run(self):
syslog.syslog("Running something")
def scheduleNextRun(self):
self.run()
self.timer = threading.Timer(3, self.scheduleNextRun)
self.timer.start()
def terminate(self, signum, frame):
syslog.syslog("Received {}".format(signal.Signals(signum).name))
if self.timer:
syslog.syslog("Stopping the timer")
self.timer.cancel()
syslog.syslog("Will now terminate")
def setup():
runner = Runner()
signal.signal(signal.SIGTERM, runner.terminate)
signal.signal(signal.SIGINT, runner.terminate)
runner.scheduleNextRun()
with daemon.DaemonContext(pidfile = daemon.pidfile.PIDLockFile("/var/run/test.pid")):
setup()
</code></pre>
<p>The daemon starts up and writes to syslog, and it also shuts down when receiving SIGTERM. However, no pidfile is created.</p>
<p>I tried different ways to invoke the <code>DaemonContext</code> whilst searching for a solution, but none led to a pidfile being created:</p>
<p>Both</p>
<pre><code>...
import lockfile
...
with daemon.DaemonContext(pidfile = lockfile.FileLock("/var/run/test.pid")):
...
</code></pre>
<p>and (using <code>pidfile.py</code> from <a href="https://github.com/bmhatfield/python-pidfile" rel="nofollow noreferrer">https://github.com/bmhatfield/python-pidfile</a>)</p>
<pre><code>...
from pidfile import PidFile
...
with daemon.DaemonContext(pidfile = PidFile("/var/run/test.pid")):
...
</code></pre>
<p>do work, but I never get a pidfile.</p>
<p>What's the correct way to get a pidfile a well-behaved daemon has to create?</p>
| <python><pid><python-daemon><lockfile> | 2023-09-29 15:41:07 | 1 | 1,774 | Tobias Leupold |
77,202,887 | 7,158,019 | Cannot install mysqlclient on Elastic BeanStalk Instance | <p>How can I get pip to install mysqlclient on an Amazon Linux 2/Python 3.8 instance?</p>
<p>I am deploying a Django application to Elastic Bean Stalk. While the deployment process is running <code>pip install -r requirements.txt</code>, it fails and I get this error on eb-engine.log.</p>
<pre><code> × Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
Trying pkg-config --exists mysqlclient
Command 'pkg-config --exists mysqlclient' returned non-zero exit status 1.
Trying pkg-config --exists mariadb
Command 'pkg-config --exists mariadb' returned non-zero exit status 1.
Traceback (most recent call last):
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
...
File "<string>", line 154, in <module>
File "<string>", line 48, in get_config_posix
File "<string>", line 27, in find_package_name
Exception: Can not find valid pkg-config name.
Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
enter code here
</code></pre>
<p>I know it has to do with having the mysqlclient build dependencies installed. So I have these packages defined in my .ebextensions package file. I have found multiple answers suggesting specifying <code>mariadb-devel.x86_64</code>, <code>mariadb-devel</code>, <code>mariadb-server</code>, etc., as packages to install to fix the error; however, even with the packages installed, I still run into the above error.</p>
<p>I also installed these packages by running these platform prebuild hook commands, but it still does not fix the problem.</p>
<pre><code>wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sudo yum localinstall mysql80-community-release-el7-3.noarch.rpm
</code></pre>
<p>Has the package needed to install <code>mysqlclient</code> on Amazon Linux 2 changed recently, as suggested <a href="https://stackoverflow.com/questions/62111066/mysqlclient-installation-error-in-aws-elastic-beanstalk">in these answers</a> (the answers do not work for me)? Am I specifying the wrong packages?</p>
<pre><code>packages:
yum:
python3-devel: []
MySQL-python: []
mariadb-devel.x86_64: []
mariadb-server: []
gcc: []
gawk: []
polkit: []
gcc-c++: []
make: []
pkgconfig: []
</code></pre>
| <python><amazon-web-services><amazon-ec2><amazon-elastic-beanstalk><mysql-python> | 2023-09-29 14:45:52 | 1 | 669 | ytobi |
77,202,849 | 8,938,220 | python dbf query between two dates | <p>I am using the <a href="http://pypi.python.org/pypi/dbf/" rel="nofollow noreferrer">dbf module</a>. I need to search (query) all the products where the invoice date is between 2023-01-23 and 2023-01-25, including both dates.</p>
<p>I have dbf table ( product.dbf ) with below data</p>
<pre><code>invno invdt hsn mname packg
2010 2023-01-21 30086666 Medicine01 10'S
2010 2023-01-23 30044444 Medicine02 10'S
2010 2023-01-23 30043333 Medicine03 15'S
2010 2023-01-23 30049999 Medicine04 15'S
2011 2023-01-24 30049999 Medicine05 15'S
2011 2023-01-24 30049999 Medicine06 15'S
2011 2023-01-24 30049444 Medicine07 30'ML
2011 2023-01-24 30049333 Medicine08 30'ML
2011 2023-01-26 30049111 Medicine09 30'ML
2011 2023-01-26 30049111 Medicine09 30'ML
2012 2023-01-31 30042333 Medicine10 60'S
2012 2023-01-31 30042234 Medicine11 15'S
</code></pre>
<p>I am trying the code below, but it is not working as expected.
I get an exception when the exact date is not present in the table; for example, if 25-01-2023 is not there, the exception below is raised:</p>
<blockquote>
<p><strong>dbf.NotFoundError: 'dbf.Index.index_search(x): x not in Index'</strong></p>
</blockquote>
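<p>Since an exact-match search raises when the key is absent, one workaround (sketched here with stand-in data, independent of the dbf API) is to pull the index keys into a sorted list and use <code>bisect</code> to find the range boundaries; missing endpoints then degrade gracefully instead of raising:</p>

```python
import bisect
import datetime

# Stand-in for the indexed records: one (invdt, row) pair per record.
rows = sorted([
    (datetime.date(2023, 1, 21), "Medicine01"),
    (datetime.date(2023, 1, 23), "Medicine02"),
    (datetime.date(2023, 1, 23), "Medicine03"),
    (datetime.date(2023, 1, 24), "Medicine05"),
    (datetime.date(2023, 1, 26), "Medicine09"),
])
keys = [d for d, _ in rows]

lo = datetime.date(2023, 1, 23)
hi = datetime.date(2023, 1, 25)          # 25th is absent: no exception here

start = bisect.bisect_left(keys, lo)     # first record >= lo
stop = bisect.bisect_right(keys, hi)     # first record > hi
matched = [name for _, name in rows[start:stop]]
```

<p>The same trick applies to the invoice-number range; build <code>keys</code> from <code>rec.invno</code> instead of <code>rec.invdt</code>.</p>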
<pre><code>import dbf
import datetime
table = dbf.Table('product.dbf')
table.open(dbf.READ_ONLY)
index = table.create_index(lambda rec: rec.invdt)
# search between 2023-01-23 to 2023-01-25, include both dates.
fromdate = index.index_search(match=(datetime.datetime.strptime('23-01-2023','%d-%m-%Y').date()))
todate = index.index_search(match=(datetime.datetime.strptime('25-01-2023','%d-%m-%Y').date()))
for row in index[fromdate:todate]:
print(row[0],row[1],row[2],row[3],row[4])
# Similarly, the code below also throws an exception for an invoice range
# where the from or to invoice number (here invoice 2015) is not found.
index = table.create_index(lambda rec: rec.invno)
# search between 2010 to 2015 include both invno.
frominv = index.index_search(match=(2010,))
toinv = index.index_search(match=(2015,))
table.close()
</code></pre>
| <python><dbf> | 2023-09-29 14:41:26 | 2 | 343 | Ravi Kannan |
77,202,787 | 9,484,595 | Grouping in assigning colors to plotly histogram | <p>I am trying to get started with using Dash and I have a simple problem. I have a data table</p>
<pre><code>Index, Product, Customer_Age, Revenue
1, A, 12, 10
2, B, 99, 12
</code></pre>
<p>and so on.</p>
<p>I want to plot the revenue per product as a histogram. However, I want to have the bars split into different colors for different age groups, say groups of 10. How do I achieve this? At the moment, I have</p>
<pre><code>from dash import Dash, html, dash_table, dcc
import plotly.express as px, pandas as pd
data = pd.read_csv('data.csv')
app = Dash(__name__)
app.layout = html.Div([
dcc.Graph(figure=px.histogram(data, x='Product', y='Revenue', histfunc='sum', color='Customer_Age')),
])
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>which assigns a different color to every age. Also, the colors are quite random, and not a nice continuous sequence of colors. Is there an elegant way to achieve what I want?</p>
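<p>One common approach is to bin the ages into a new column first (e.g. with <code>pd.cut</code> or integer division) and pass that column as <code>color=</code>, optionally with <code>category_orders</code> and a continuous-looking <code>color_discrete_sequence</code> for a tidy palette. The binning itself, sketched here without pandas (the column name <code>Age_Group</code> is illustrative):</p>

```python
def age_group(age, width=10):
    """Map an age to a decade label like '10-19'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

ages = [12, 99, 34, 38]
groups = [age_group(a) for a in ages]

# With pandas/plotly this would look roughly like:
#   data['Age_Group'] = data['Customer_Age'].apply(age_group)
#   px.histogram(data, x='Product', y='Revenue', histfunc='sum',
#                color='Age_Group')
```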
| <python><pandas><plotly><histogram><plotly-dash> | 2023-09-29 14:31:43 | 1 | 893 | Bubaya |
77,202,743 | 2,535,207 | How to efficiently implement forward fill in pytorch | <p>How can I efficiently implement forward-fill logic (inspired by pandas <code>ffill</code>) for a tensor shaped NxLxC (batch, sequence dimension, channel)? Because each channel sequence is independent, this is equivalent to working with a tensor shaped (N*C)xL.</p>
<p>The computation should keep the torch variable so that the actual output is differentiable.</p>
<p>I managed to make something with advanced indexing, but it is O(L**2) in memory and number of operations, so it is neither efficient nor GPU-friendly.</p>
<hr />
<p>Example:</p>
<p>Assuming you have the sequence <code>[0,1,2,0,0,3,0,4,0,0,0,5,6,0]</code> in a tensor shaped <code>1x14</code> the fill forward will give you the sequence <code>[0,1,2,2,2,3,3,4,4,4,4,5,6,6]</code>.</p>
<p>Another example, shaped <code>2x4</code>, is <code>[[0, 1, 0, 3], [1, 2, 0, 3]]</code>, which should be forward filled into <code>[[0, 1, 1, 3], [1, 2, 2, 3]]</code>.</p>
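<p>For reference, the standard linear-time formulation keeps, for each position, the index of the last non-zero entry seen so far (a running cumulative max over <code>where(t != 0, arange(L), 0)</code>) and then gathers; in PyTorch that would be roughly <code>idx = torch.cummax(torch.where(t != 0, torch.arange(L), 0), dim=-1).values</code> followed by <code>t.gather(-1, idx)</code> (a sketch, not the author's method). The same algorithm on plain lists, to show what the vectorized version computes:</p>

```python
def ffill_row(row):
    # Vectorized equivalent:
    #   idx = cummax(where(row != 0, arange(L), 0))
    #   out = row[idx]
    out, last = [], 0          # 'last' = index of last non-zero so far
    for i, v in enumerate(row):
        if v != 0:
            last = i
        out.append(row[last])
    return out

seq = [0, 1, 2, 0, 0, 3, 0, 4, 0, 0, 0, 5, 6, 0]
filled = ffill_row(seq)
```

<p>This keeps everything O(L) per row and maps directly onto a differentiable gather in torch.</p>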
<hr />
<p>Method used today:</p>
<p>We use the following code that is highly unoptimized but still faster than non vectorized loops:</p>
<pre><code>def last_zero_sequence_start_indices(t: torch.Tensor) -> torch.Tensor:
"""
Given a 3D tensor `t`, this function returns a two-dimensional tensor where each entry represents
the starting index of the last contiguous sequence of zeros up to and including the current index.
If there's no zero at the current position, the value is the tensor's length.
In essence, for each position in `t`, the function pinpoints the beginning of the last contiguous
sequence of zeros up to that position.
Args:
- t (torch.Tensor): Input tensor with shape [Batch, Channel, Time].
Returns:
- torch.Tensor: Three-dimensional tensor with shape [Batch, Channel, Time] indicating the starting position of
the last sequence of zeros up to each index in `t`.
"""
# Create a mask indicating the start of each zero sequence
start_of_zero_sequence = (t == 0) & torch.cat([
torch.full(t.shape[:-1] + (1,), True, device=t.device),
t[..., :-1] != 0,
], dim=2)
# Duplicate this mask into a TxT matrix
duplicated_mask = start_of_zero_sequence.unsqueeze(2).repeat(1, 1, t.size(-1), 1)
# Extract the lower triangular part of this matrix (including the diagonal)
lower_triangular = torch.tril(duplicated_mask)
# For each row, identify the index of the rightmost '1' (start of the last zero sequence up to that row)
indices = t.size(-1) - 1 - lower_triangular.int().flip(dims=[3]).argmax(dim=3)
return indices
</code></pre>
| <python><pytorch> | 2023-09-29 14:26:48 | 5 | 2,682 | Jeremy Cochoy |
77,202,550 | 1,726,779 | how to mock a function in pytest monkeypatch? | <p>I am trying to mock a simple function, but I get an AttributeError and have no clue how to fix it.</p>
<pre><code>def my_function():
return "original"
def mock_my_function():
return "mocked"
def test_my_function(monkeypatch):
monkeypatch.setattr(__name__, 'my_function', mock_my_function)
assert my_function() == "mocked"
</code></pre>
<p>What should I put in __name__'s place to make it work?</p>
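<p><code>monkeypatch.setattr</code> needs an object (or a dotted <code>"module.attr"</code> string), not the module's name as a plain string; passing the actual module object via <code>sys.modules[__name__]</code> works because <code>my_function()</code> inside the module resolves through that module's globals. A sketch of the mechanism without pytest (pytest's monkeypatch does the same <code>setattr</code> plus automatic teardown):</p>

```python
import sys

def my_function():
    return "original"

def mock_my_function():
    return "mocked"

# What monkeypatch.setattr(sys.modules[__name__], 'my_function', ...)
# does under the hood; pytest also undoes it after the test.
module = sys.modules[__name__]
original = getattr(module, "my_function")
setattr(module, "my_function", mock_my_function)
patched_result = my_function()             # -> "mocked"
setattr(module, "my_function", original)   # teardown
restored_result = my_function()            # -> "original"
```

<p>So in the test itself, <code>monkeypatch.setattr(sys.modules[__name__], 'my_function', mock_my_function)</code> (or the string form <code>monkeypatch.setattr('your_module.my_function', mock_my_function)</code>, with the real module name) should work.</p>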
| <python><pytest><monkeypatching> | 2023-09-29 13:59:24 | 2 | 2,460 | vmp |
77,202,430 | 13,944,524 | understanding ContextVar object in Python | <p>Here is a simple echo server that works correctly as expected - by "correctly" I mean, in the output we see that every user gets its own unique address when echoing inside <code>listen_for_messages</code>.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from contextvars import ContextVar
class Server:
user_address = ContextVar("user_address")
def __init__(self, host: str, port: int):
self.host = host
self.port = port
async def start_server(self):
server = await asyncio.start_server(self._on_connected, self.host, self.port)
await server.serve_forever()
def _on_connected(self, reader, writer):
self.user_address.set(writer.get_extra_info("peername"))
asyncio.create_task(self.listen_for_messages(reader))
async def listen_for_messages(self, reader):
while data := await reader.readline():
print(f"Got message {data} from {self.user_address.get()}")
async def main():
server = Server("127.0.0.1", 9000)
await server.start_server()
asyncio.run(main())
</code></pre>
<p>Now look at where we set the context var object's value. My question is: why does it work correctly?</p>
<p>To my understanding, every "<code>Task</code>" will have its own copy of the <code>ContextVar</code>. If the line:</p>
<pre class="lang-py prettyprint-override"><code>self.user_address.set(writer.get_extra_info("peername"))
</code></pre>
<p>was inside <code>listen_for_messages</code> coroutine, I would not ask this question, since we explicitly created a new task for it. Even <a href="https://docs.python.org/3/library/asyncio-stream.html#asyncio.start_server" rel="nofollow noreferrer">if <code>_on_connected</code> was a coroutine function</a>, I wouldn't ask too. because:</p>
<blockquote>
<p><code>client_connected_cb</code> can be a plain callable or a coroutine function; if it is a <strong>coroutine function, it will be automatically scheduled as a Task</strong>.</p>
</blockquote>
<p>But it's inside the <code>_on_connected</code> (plain) function and it's not a new task. Before creating the new <code>listen_for_messages</code> task, we have a parent Task of <code>main()</code> (which the <code>.run()</code> function created). I guessed it shouldn't work and that each call would simply override the value (because it's all part of a single parent Task).</p>
<p>I would appreciate it if you could tell me which part of my understanding is wrong.</p>
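<p>To my understanding (hedged, not authoritative): asyncio runs each scheduled loop callback in its own copy of the current context, and <code>asyncio.create_task</code> snapshots the context at creation time, so the value set in <code>_on_connected</code> is captured by that connection's <code>listen_for_messages</code> task and a later <code>set</code> never leaks into a task that was already created. A minimal stdlib demonstration of the snapshot behavior:</p>

```python
import asyncio
from contextvars import ContextVar

user = ContextVar("user")
seen = []

async def listener(name):
    await asyncio.sleep(0.01)
    seen.append((name, user.get()))

def on_connected(name, addr):
    # Mimics _on_connected: set, then spawn a task.  create_task()
    # copies the *current* context, so each task keeps its own value.
    user.set(addr)
    return asyncio.create_task(listener(name))

async def main():
    t1 = on_connected("a", ("127.0.0.1", 111))
    t2 = on_connected("b", ("127.0.0.1", 222))  # overwrites our context...
    await asyncio.gather(t1, t2)                # ...but t1 kept its copy

asyncio.run(main())
```

<p>Even though the second <code>set</code> overwrites the value in the calling context, the first task already holds an independent snapshot with the first address.</p>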
| <python><python-3.x><asynchronous><python-asyncio><python-contextvars> | 2023-09-29 13:40:46 | 1 | 17,004 | S.B |
77,202,380 | 3,731,823 | How to install tensorflow-gpu from anaconda? | <p>I have run some very basic steps (<a href="https://anaconda.org/conda-forge/tensorflow-gpu" rel="noreferrer">tensorflow-gpu</a> is currently at 2.12.1):</p>
<pre><code>conda create --name py311_tf212 python=3.11 numpy numba scipy spyder pandas
conda activate py311_tf212
time conda install -c conda-forge tensorflow-gpu
</code></pre>
<p>After 3 hours of thinking and printing a few thousand lines of package dependencies, the installation fails.<br />
My system features are:</p>
<ul>
<li>Ubuntu 18.04,</li>
<li>feature:/linux-64::__glibc==2.27,</li>
<li>CUDA 11.8,</li>
<li>Nvidia driver 520.61.05.</li>
</ul>
<p>I'm not sure what other information is relevant. I'd be happy to get any tips.</p>
<p><strong>Edit 2023-12-19</strong>, I gave this a new go. This time I specified the conda-forge channel at the env creation step:</p>
<pre><code>$ time conda create --name py311_tf2_test -c conda-forge python=3.11 numpy numba scipy spyder pandas tensorflow-gpu
...
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
...
Package expat conflicts for:
numpy -> pypy3.9[version='>=7.3.13'] -> expat[version='>=2.2.9,<3.0.0a0|>=2.3.0,<3.0a0|>=2.4.1,<3.0a0|>=2.4.8,<3.0a0|>=2.4.9,<3.0a0|>=2.5.0,<3.0a0|>=2.4.7,<3.0a0']
scipy -> pypy3.9[version='>=7.3.13'] -> expat[version='>=2.2.9,<3.0.0a0|>=2.3.0,<3.0a0|>=2.4.1,<3.0a0|>=2.4.8,<3.0a0|>=2.4.9,<3.0a0|>=2.5.0,<3.0a0']
pandas -> pypy3.9[version='>=7.3.13'] -> expat[version='>=2.2.9,<3.0.0a0|>=2.3.0,<3.0a0|>=2.4.1,<3.0a0|>=2.4.8,<3.0a0|>=2.4.9,<3.0a0|>=2.5.0,<3.0a0']
spyder -> python[version='>=3.12,<3.13.0a0'] -> expat[version='>=2.5.0,<3.0a0']
...
Package typing_extensions conflicts for:
numba -> importlib-metadata -> typing_extensions[version='>=3.6.4']
spyder -> ipython[version='>=8.12.2,<9.0.0,!=8.17.1'] -> typing_extensions[version='>=3.10|>=3.10.0|>=3.7|>=3.6.4']
...
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.27=0
- feature:/linux-64::__unix==0=0
- feature:|@/linux-64::__glibc==2.27=0
- feature:|@/linux-64::__unix==0=0
- numba -> libgcc-ng[version='>=10.3.0'] -> __glibc[version='>=2.17']
- numpy -> libgcc-ng[version='>=10.3.0'] -> __glibc[version='>=2.17']
- pandas -> libgcc-ng[version='>=10.3.0'] -> __glibc[version='>=2.17']
- python=3.11 -> libgcc-ng[version='>=11.2.0'] -> __glibc[version='>=2.17']
- scipy -> libgfortran-ng -> __glibc[version='>=2.17']
- spyder -> ipython[version='>=8.12.2,<9.0.0,!=8.17.1'] -> __linux
- tensorflow-gpu -> tensorflow==2.15.0=cuda118py39h5387621_0 -> __cuda
- tensorflow-gpu -> tensorflow==2.6.2=cuda111py37hf54207c_2 -> __glibc[version='>=2.17']
Your installed version is: 2.27
real 32m40.392s
</code></pre>
<p>I'm not sure how to interpret this; is the installed glibc version an issue here? Most packages seem to be happy with 2.17 or newer. Oddly, tensorflow-gpu has dependencies on both <code>tensorflow==2.15.0</code> and <code>tensorflow==2.6.2</code>.</p>
| <python><tensorflow><anaconda> | 2023-09-29 13:33:13 | 4 | 4,208 | NikoNyrh |
77,202,326 | 685,683 | How do I get smooth text with Python's PIL? | <p>In my script I open a tiff image and have to paste some text on it in the bottom right corner with PIL. So far so good. But the quality of the text is unusable! The text should be white. Instead it has pixels cut out. It looks as though, when copying the text onto the original image, there is a cutoff on the alpha channel. Even when setting the mask attribute in Image.paste there is no difference.</p>
<p>I thought this was something basic. Is there a way to get usable text on an image in PIL? Or is there a better package?</p>
<p>I am on a MacBook Pro Ventura 13.4.1, PIL 10.0.1, Python 3.8</p>
<pre><code>from PIL import Image as PILImage, ImageDraw, ImageFont
import math
padding = (10, 16)
img = PILImage.open('/path/to/tiff/image.tiff')
draw = ImageDraw.Draw(img)
text = 'Holi Moli, this looks terrible'
font = ImageFont.truetype('/path/to/font/FreeMono.ttf', 24)
text_img = PILImage.new('RGB', img.size, (255, 255, 255))
textImg = ImageDraw.Draw(text_img)
# get size of rendered text
bbox = draw.textbbox((0,0), text, font)
# calculate position of text background
bgX = img.width - padding[0] - bbox[2]
bgY = img.height - padding[1] - bbox[3]
textX = img.width - math.ceil(padding[0] * 0.5) - bbox[2]
textY = img.height - math.ceil( padding[1] * 0.5) - bbox[3]
draw.text(xy=(textX, textY), text=text, font=font, fill=(255, 255, 255, 0))
img.paste(im=draw._image, box=(0, 0), mask=draw._image)
img.show()
</code></pre>
<p>SOLVED: Two things have to be considered. As suggested in @MarkSetchell's answer I did not need to draw the text onto another context. But the real problem with the badly rendered text came from using RGBA Mode.</p>
<p>I do not know why, but it seems that there is a problem with the alpha channel when drawing text in RGBA mode, no matter whether I draw it directly onto the opened image or copy it from another context. Most importantly: I could solve my issue by using RGB mode on the image.</p>
<p><a href="https://i.sstatic.net/JizXS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JizXS.png" alt="Enlarged added text on black background of tiff" /></a></p>
<p><a href="https://i.sstatic.net/nIpV2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nIpV2.png" alt="The whole image with text" /></a></p>
<p>Sample images including exif data from this repo: <a href="https://github.com/ianare/exif-samples/tree/master/tiff" rel="nofollow noreferrer">https://github.com/ianare/exif-samples/tree/master/tiff</a></p>
| <python><python-imaging-library> | 2023-09-29 13:26:03 | 1 | 311 | habsi |
77,202,124 | 6,243,129 | PyCharm unable to debug the Python Flask project | <p>I have a simple Flask-based project in PyCharm. I am trying to debug it by right-clicking and selecting the debug option, but I keep getting the error below:</p>
<pre><code>Connected to pydev debugger (build 232.9559.58)
* Serving Flask app 'app'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8000
* Running on http://192.168.0.29:8000
Press CTRL+C to quit
* Restarting with stat
C:\Python311\python.exe: can't open file 'C:\\Program': [Errno 2] No such file or directory
Process finished with exit code 2
</code></pre>
<p>I am unable to understand why and what it is looking for in <code>'C:\\Program'</code>. I am able to run the file perfectly fine and only having issues in debug. What can I try next?</p>
| <python><flask><debugging><pycharm> | 2023-09-29 12:57:31 | 2 | 7,576 | S Andrew |
77,201,890 | 12,015,249 | Read/write for different key-pairs in the same dictionary at the same time will have race condition? | <p>Basically I have one single Python dictionary for people to read/write, let's call it dictionary <code>a</code>, and I have two async functions that act on this dictionary: <code>write</code> and <code>delete</code>. <br>
The write function is Discord-server based, which means the key-pair structure is <code>server_id: some_text</code>; a server can only write/delete its own key-pair and cannot edit another server's key-pair. One server can have at most one key-pair. A possible example of the <code>write/delete</code> functions could be</p>
<pre class="lang-py prettyprint-override"><code>async def write(..., text):
a[server_id] = text
...
async def delete(...):
del a[server_id]
...
</code></pre>
<p><strong>So, the question is: if two servers modify one dictionary at the exact same moment, though with different key-pairs, is there a possibility of a race condition that could lead to data corruption?</strong> <br>
One possible solution could be adding a lock, like</p>
<pre class="lang-py prettyprint-override"><code>lock = asyncio.Lock()
# example for write, similiar for delete
async def write(..., text):
async with lock:
a[server_id] = text
...
</code></pre>
<p><strong>But since these two functions might be called at a really high frequency: although a lock can solve any race condition that might exist, it could theoretically leave more and more tasks waiting in the background, since each task has to wait for the lock one by one. Is this idea correct?</strong> <br>
Any suggestion or comment is welcome. Thank you!</p>
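<p>To my understanding (a hedged sketch, not an authoritative answer): since asyncio is single-threaded, a task can only be interrupted at an <code>await</code>, so a plain <code>a[server_id] = text</code> or <code>del a[server_id]</code> with no <code>await</code> between read and write cannot interleave with another task; a lock only becomes necessary once a read-modify-write spans an await. Many concurrent per-key writers without a lock:</p>

```python
import asyncio

a = {}

async def write(server_id, text):
    await asyncio.sleep(0)        # yield first, like real handler work
    a[server_id] = text           # single statement, no await inside:
                                  # no other task can interleave here

async def delete(server_id):
    await asyncio.sleep(0)
    a.pop(server_id, None)

async def main():
    await asyncio.gather(*(write(i, f"text-{i}") for i in range(100)))
    await asyncio.gather(*(delete(i) for i in range(0, 100, 2)))

asyncio.run(main())
```

<p>And on the queuing worry: a lock held only around <code>a[server_id] = text</code> is released almost immediately, so any backlog would come from overall load rather than from the lock itself.</p>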
| <python><dictionary><discord><python-asyncio><race-condition> | 2023-09-29 12:20:42 | 1 | 646 | TimG233 |
77,201,862 | 12,466,687 | How to install Java manually using .rpm files on cloud (streamlit) using python script? | <p>I am new to <code>unix</code> & <code>cloud</code> (I am not from an IT background) and am not sure how to install <code>Java</code> manually in the cloud. I am trying to install Java on <code>streamlit</code> to be able to use <code>Pyspark</code>.</p>
<p>I have tried the code below but am not sure how to <strong>install</strong> the <code>downloaded</code> <code>.rpm</code> file with <code>python</code>.</p>
<p><a href="https://os-explore.streamlit.app/" rel="nofollow noreferrer">App_link</a></p>
<p><a href="https://github.com/johnsnow09/os_explore/blob/main/app.py" rel="nofollow noreferrer">Github_link</a></p>
<pre><code>import streamlit as st
import pandas as pd
import os
import sys
import wget
# import tarfile
import platform
</code></pre>
<p><strong>1. Download Java</strong></p>
<pre><code># for linux from: https://crunchify.com/where-is-java-installed-on-my-mac-osx-system/
f = wget.download("https://download.oracle.com/otn-pub/java/jdk/11.0.2+9/f51449fcd52f4d52b93a989c5c56ed3c/jdk-11.0.2_linux-x64_bin.rpm")
st.write(" Java Downloaded")
</code></pre>
<blockquote>
<p>Java Downloaded</p>
</blockquote>
<p><strong>2. Cross check Platform</strong></p>
<pre><code>st.write("Checking platform: ",platform.system())
</code></pre>
<blockquote>
<p>Checking platform: Linux</p>
</blockquote>
<p><strong>3. Checking Directories & files</strong></p>
<pre><code>st.write("Current Directory: ",os.getcwd())
st.write("Directory list: ",os.listdir())
</code></pre>
<blockquote>
<p>Current Directory: /mount/src/os_explore</p>
</blockquote>
<p>Directory list:</p>
<blockquote>
<p>[ 0:"requirements.txt" 1:"app.py" 2:"README.md" 3:".git"
4:"jdk-11.0.2_linux-x64_bin.rpm" 5:"packages.txt" 6:".streamlit" ]</p>
</blockquote>
<p><strong>4. Installing using .rpm file</strong></p>
<pre><code># from: https://stackoverflow.com/questions/49484772/install-rpm-or-msi-file-through-python-script
import rpm
import subprocess
package_path = '/mount/src/os_explore/jdk-11.0.2_linux-x64_bin.rpm'
# command = ['rpm', '-Ivh', package_path]
command = ['rpm', '-ivh', package_path]
p = subprocess.Popen(command)
p.wait()
if p.returncode == 0:
print("OK")
else:
print("Something went wrong")
</code></pre>
<p>Getting Errors from here onwards</p>
<p><a href="https://i.sstatic.net/0lwW0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0lwW0.png" alt="enter image description here" /></a></p>
<p><strong>Error in text:</strong></p>
<blockquote>
<p>ImportError: Failed to import system RPM module. Make sure RPM Python
bindings are installed on your system.</p>
<p>2023-09-30 05:04:01.256 Uncaught app exception</p>
<p>Traceback (most recent call last):</p>
<p>File
"/home/adminuser/venv/lib/python3.9/site-packages/rpm/<strong>init</strong>.py",
line 102, in </p>
<pre><code>_shim_module_initializing_
</code></pre>
<p>NameError: name '<em>shim_module_initializing</em>' is not defined</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last):</p>
<p>File
"/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py",
line 565, in _run_script</p>
<pre><code>exec(code, module.__dict__)
</code></pre>
<p>File "/mount/src/os_explore/app.py", line 61, in </p>
<pre><code>import rpm
</code></pre>
<p>File
"/home/adminuser/venv/lib/python3.9/site-packages/rpm/<strong>init</strong>.py",
line 105, in </p>
<pre><code>initialize()
</code></pre>
<p>File
"/home/adminuser/venv/lib/python3.9/site-packages/rpm/<strong>init</strong>.py",
line 94, in initialize</p>
<pre><code>raise ImportError(
</code></pre>
<p>ImportError: Failed to import system RPM module. Make sure RPM Python
bindings are installed on your system.</p>
</blockquote>
<p><strong>I would really appreciate help here</strong>, as I have been stuck on installing Java for days and am sick & tired of it. My PySpark code is ready in my local app, but I am unable to install Java on the cloud.</p>
| <python><java><pyspark><streamlit> | 2023-09-29 12:17:01 | 2 | 2,357 | ViSa |
77,201,842 | 9,370,018 | How to read specific frames from a video using OpenCV's Cuda Video Reader in Python? | <p>I've compiled OpenCV 4.8.0 with Cuda 10.2, cudNN 7.6.5.32, Video_Codec_SDK_12.1.14, and am running it on Python 3.9.13. I'm trying to read specific frames from a video file (video.mkv) using OpenCV's Cuda Video Reader, similar to how I would with the CPU-based VideoCapture.</p>
<p>Here's how I would do it using VideoCapture:</p>
<pre class="lang-py prettyprint-override"><code>cap.set(cv2.CAP_PROP_POS_FRAMES, target_frame)
</code></pre>
<p>For Cuda Video Reader, I currently have this code working:</p>
<pre class="lang-py prettyprint-override"><code>video_reader = cv2.cudacodec.createVideoReader('video.mkv')
fps_received, fps = video_reader.get(cv2.CAP_PROP_FPS)
total_frames_received, total_frames = video_reader.get(cv2.CAP_PROP_FRAME_COUNT)
for second in range(1, int(total_frames // fps) + 1):
    target_frame = int(second * fps)
    # TODO: Read only the target_frame and skip the rest
</code></pre>
<p>I consulted the <a href="https://docs.opencv.org/4.8.0/db/ded/classcv_1_1cudacodec_1_1VideoReader.html" rel="nofollow noreferrer">VideoReader API Reference</a> and attempted to use the <a href="https://docs.opencv.org/4.8.0/db/ded/classcv_1_1cudacodec_1_1VideoReader.html#aa0b71fd98fa3bce1f3bd834c8679ff1b" rel="nofollow noreferrer">set()</a> method, in which I can pass a <a href="https://docs.opencv.org/4.8.0/d0/d61/group__cudacodec.html#ga1ad08d8369c460158ad361779fab4753" rel="nofollow noreferrer">VideoReaderPropertys</a>. The only property that made sense to me was the PROP_DECODED_FRAME_IDX which is described as "Index for retrieving the decoded frame using retrieve()."</p>
<pre><code>set_success = video_reader.setVideoReaderProps(propertyId=0, propertyVal=target_frame) #PROP_DECODED_FRAME_IDX
print(set_success) # This is always False
has_frame, frame = video_reader.nextFrame()
</code></pre>
<ol>
<li>How can I read a specific frame in a video using the Cuda Video Reader?</li>
<li>Is there a property similar to CAP_PROP_POS_FRAMES for the Cuda version?</li>
</ol>
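<p>In case seeking turns out to be unsupported by <code>cudacodec</code>, one fallback is to decode sequentially with <code>nextFrame()</code> and keep only the frames whose index is a target. The index bookkeeping alone can be sketched in pure Python (the actual decoding calls are elided; the numbers are illustrative):</p>

```python
# Pure-Python sketch of the frame-skipping bookkeeping (no OpenCV calls).
fps, total_frames = 25.0, 200
targets = {int(second * fps) for second in range(1, int(total_frames // fps) + 1)}

kept = []
for frame_index in range(total_frames):   # one nextFrame() call per iteration
    if frame_index in targets:            # process only the target frames
        kept.append(frame_index)

assert kept == [25, 50, 75, 100, 125, 150, 175]
```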
| <python><c++><opencv><video> | 2023-09-29 12:13:14 | 0 | 992 | Vaio |
77,201,787 | 22,466,650 | How to create a scatter plot with normalized marker transparencies | <p>I have this dataframe :</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
rng = np.random.default_rng(seed=111)
rints = rng.integers(low=0, high=1000, size=(5,5))
df = pd.DataFrame(rints)
0 1 2 3 4
0 474 153 723 169 713
1 505 364 658 854 767
2 718 109 141 797 463
3 968 130 246 495 197
4 450 338 83 715 787
</code></pre>
<p>And I'm trying to plot it as it is and set the size of markers and their transparency :</p>
<pre><code>ox = np.arange(len(df))
x = np.tile(ox[:, np.newaxis], (1, len(ox)))
y = np.tile(ox, (len(df), 1))
plt.scatter(x, y, marker='o', color='tab:orange', ec='k', ls='--', s=df.values)
for ix, iy, v in zip(x.ravel(), y.ravel(), df.values.ravel()):
    plt.annotate(str(v), (ix, iy), textcoords='offset points', xytext=(0, 10), ha='center')
plt.axis("off")
plt.margins(y=0.2)
plt.show()
</code></pre>
<p>Only two problems :</p>
<ol>
<li>The plot doesn't reflect the real dataframe shape; I feel like it is transposed.</li>
<li>I don't know how to adjust the transparency of the scatter points as I did with the size
<ul>
<li>Just proportional to the value of the cell (like I did with size). Light color for lower values and strong color for higher ones.</li>
</ul>
</li>
</ol>
<p><a href="https://i.sstatic.net/c4FzD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c4FzD.png" alt="enter image description here" /></a></p>
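<p>For the transparency part, one approach (a sketch; recent matplotlib versions accept an array for <code>alpha</code> in <code>scatter</code>) is to min-max normalize the cell values to [0, 1]:</p>

```python
import numpy as np

# Min-max normalize the cell values to [0, 1] so they can serve as per-point alphas.
vals = np.array([474, 153, 723, 169, 713], dtype=float)
alpha = (vals - vals.min()) / (vals.max() - vals.min())

assert alpha.min() == 0.0 and alpha.max() == 1.0
# e.g. plt.scatter(x, y, s=vals, alpha=alpha) with matplotlib >= 3.4
```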
| <python><pandas><matplotlib><scatter-plot><transparency> | 2023-09-29 12:05:04 | 1 | 1,085 | VERBOSE |
77,201,740 | 9,018,649 | Can someone explain how Spark rdd.map decides to read lines instead of words from a text file? | <p>I have the following code:</p>
<pre><code>lines = sc.textFile("file:///SparkCourse/ml-100k/u.data")
ratings = lines.map(lambda x: x.split()[2])
result = ratings.countByValue()
</code></pre>
<p>According to this, 'x' ends up representing a line in the text file.
Excerpt from the text file:</p>
<pre><code>196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244 51 2 880606923
166 346 1 886397596
</code></pre>
<p>Why/how does map split this up per line and return a line as 'x'?</p>
<ul>
<li>As opposed to returning each individual token based on whitespace?</li>
<li>Does <em>map</em> always look for line breaks? Can we specify any other delimiter?</li>
</ul>
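<p>For intuition: <code>sc.textFile</code> already yields one RDD element per line (split on newlines), and <code>map</code> simply runs once per element — it never looks at delimiters itself. A plain-Python sketch of the same flow:</p>

```python
# What sc.textFile produces: one element per line of the file.
lines = ["196 242 3 881250949", "186 302 3 891717742", "22 377 1 878887116"]

# lines.map(lambda x: x.split()[2]) applies the lambda to each whole line.
ratings = [x.split()[2] for x in lines]

assert ratings == ["3", "3", "1"]
```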
| <python><apache-spark><functional-programming> | 2023-09-29 11:57:34 | 2 | 411 | otk |
77,201,383 | 10,430,394 | Store telegram_send API keys on a Linux machine and use them in Python while keeping them private? | <p>I have a bit of a headscratcher on my hands that I am not sure what to do with (or if it is even possible). I have written a script for batch runs of a program called ORCA (quantum chemistry calculation software). It runs on a PC that is available to the entire working group at my uni.</p>
<p>These calculations can take very long to execute, and it would be nice if I could implement an optional function that lets people receive updates on Telegram about the progress of their calculations and their results. So the general idea is as follows:</p>
<p>First, encrypt the Telegram API token using gpg. User sets their passphrase.</p>
<pre class="lang-bash prettyprint-override"><code>echo "TELEGRAM_TOKEN" | gpg --symmetric --armor --output tgt_USER_NAME.asc
</code></pre>
<p>Store the .asc files for all users in some common location, like</p>
<pre class="lang-bash prettyprint-override"><code>~/.asc/usrid1.asc
~/.asc/usrid2.asc
...
</code></pre>
<p>Call the optional commandline option -f (follow) from the <code>run_orca.sh</code> script:</p>
<pre class="lang-bash prettyprint-override"><code>./run_orca -f usrid
</code></pre>
<p>The user is prompted to enter their private passphrase.
If the condition is triggered, a Python script containing the <code>telegram_send</code> code is run.</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
# defaults
folder="inpsfolder"
extension="inp"
follow=""
while getopts "i:x:f:" opt; do
    case $opt in
        (i) folder="$OPTARG";;
        (x) extension="$OPTARG";;
        (f) follow="$OPTARG" ;; # user types in their id (something like jdoe)
    esac
done
token_path="$HOME/.asc/${follow}.asc"  # note: ~ does not expand inside double quotes
token="$(gpg --decrypt "${token_path}")"

if [ -n "$follow" ]
then
    python /path/to/script.py -t "$token" # telegram_send code is run and sends updates to the user.
fi
</code></pre>
<p>Is this safe? The users should just be able to echo out their own telegram API token if they know their personal passphrase, right? Is this even a good idea?</p>
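<p>The <code>script.py</code> side is not shown above; a hypothetical sketch of how it might receive the decrypted token via <code>-t</code> (the argument name and helper function are assumptions for illustration, not part of the original setup):</p>

```python
import argparse

# Hypothetical argument handling for the script invoked as:
#   python script.py -t "$token"
def parse_token(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("-t", "--token", required=True,
                        help="decrypted Telegram API token")
    return parser.parse_args(argv).token

assert parse_token(["-t", "123456:ABC-DEF"]) == "123456:ABC-DEF"
```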
| <python><bash><telegram> | 2023-09-29 10:57:31 | 0 | 534 | J.Doe |
77,201,374 | 1,586,108 | Alembic - Attempting to autogenerate revision throws "sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.psycopg" | <p>Edit: The outputs below were run on Windows with Git bash.</p>
<hr />
<p>I am new to Alembic and SQLAlchemy.</p>
<p>I have been tasked with adding Alembic support to an existing FastAPI + SQLAlchemy application. To this end, I have done some preliminary setup by calling <code>alembic init</code> and updated the <code>alembic.ini</code> and <code>env.py</code> files.</p>
<p>Then I tried to call <code>alembic revision --autogenerate -m "Initial revision"</code> but Alembic throws this error:</p>
<pre><code>$ alembic revision --autogenerate -m "Initial revision"
Traceback (most recent call last):
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\Scripts\alembic.exe\__main__.py", line 7, in <module>
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\config.py", line 632, in main
CommandLine(prog=prog).main(argv=argv)
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\config.py", line 626, in main
self.run_cmd(cfg, options)
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\config.py", line 603, in run_cmd
fn(
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\command.py", line 236, in revision
script_directory.run_env()
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\script\base.py", line 582, in run_env
util.load_python_file(self.dir, "env.py")
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\util\pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\alembic\util\pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Desmond\Projects\application-git\extraction_accuracy\api\accuracy\alembic\env.py", line 8, in <module>
from database.session import DatabaseModelBase
File "C:\Users\Desmond\Projects\application-git\extraction_accuracy\api\accuracy\.\database\session.py", line 8, in <module>
engine = create_engine(
File "<string>", line 2, in create_engine
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\sqlalchemy\util\deprecations.py", line 309, in warned
return fn(*args, **kwargs)
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\sqlalchemy\engine\create.py", line 534, in create_engine
entrypoint = u._get_entrypoint()
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\sqlalchemy\engine\url.py", line 655, in _get_entrypoint
cls = registry.load(name)
File "C:\Users\Desmond\AppData\Local\Programs\Python\Python310\lib\site-packages\sqlalchemy\util\langhelpers.py", line 343, in load
raise exc.NoSuchModuleError(
sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.psycopg
</code></pre>
<p>I have checked and I have psycopg installed in my global packages:</p>
<pre><code>$ pip list | grep psyco
psycopg 3.1.12
psycopg-binary 3.1.12
</code></pre>
<p>I have also updated the relevant details in the alembic.ini and env.py files:</p>
<p>alembic.ini:</p>
<pre><code>sqlalchemy.url = postgresql://%(USERNAME)s:%(PASSWORD)s@%(HOST)s:%(PORT)s/%(DB_NAME)s
</code></pre>
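<p>For reference, the <code>%(VAR)s</code> placeholders in <code>alembic.ini</code> are plain <code>configparser</code> interpolation, which is what <code>set_section_option</code> feeds. A stdlib-only illustration (the values are made up):</p>

```python
import configparser

# Standard configparser BasicInterpolation resolves %(name)s within a section,
# which is how alembic.ini's sqlalchemy.url placeholder substitution works.
cfg = configparser.ConfigParser()
cfg.read_string("""
[alembic]
HOST = localhost
PORT = 5432
DB_NAME = mydb
USERNAME = user
PASSWORD = secret
sqlalchemy.url = postgresql://%(USERNAME)s:%(PASSWORD)s@%(HOST)s:%(PORT)s/%(DB_NAME)s
""")

assert cfg["alembic"]["sqlalchemy.url"] == "postgresql://user:secret@localhost:5432/mydb"
```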
<p>env.py (environment variables have been created in the current shell as well):</p>
<pre><code>config = context.config
section = config.config_ini_section

config.set_section_option(section, "HOST", os.environ.get("DATABASE_HOST"))
config.set_section_option(section, "PORT", os.environ.get("DATABASE_PORT"))
config.set_section_option(section, "DB_NAME", os.environ.get("DATABASE_NAME"))
config.set_section_option(section, "USERNAME", os.environ.get("USERNAME"))
config.set_section_option(section, "PASSWORD", os.environ.get("PASSWORD"))

target_metadata = DatabaseModelBase.metadata
</code></pre>
<p>The DatabaseModelBase in the stack trace is an instance of declarative_model() in session.py.</p>
<p>Please help.</p>
| <python><sqlalchemy><alembic> | 2023-09-29 10:55:26 | 1 | 323 | SargentD |
77,200,902 | 2,030,026 | Why does this function passed as an argument to another function return None? | <pre class="lang-py prettyprint-override"><code>def run_my_func(runner, *args, **kwargs):
    runner(*args)

def my_func(name):
    return name
out = run_my_func(my_func, "my name is john", kwargs=None)
print(out)
</code></pre>
<p>Why does this print out <code>None</code>?</p>
<p>If I print out the <code>name</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>def run_my_func(runner, *args, **kwargs):
    runner(*args)

def my_func(name):
    print(name)
    return name
out = run_my_func(my_func, "my name is john", kwargs=None)
print(out)
</code></pre>
<p>I get the output</p>
<pre class="lang-bash prettyprint-override"><code>my name is john
None
</code></pre>
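<p>For reference, any Python function call that finishes without executing an explicit <code>return</code> evaluates to <code>None</code> — and <code>run_my_func</code> never returns the value of <code>runner(*args)</code>:</p>

```python
def wrapper(fn, *args):
    fn(*args)            # result is computed, then discarded

def wrapper_fixed(fn, *args):
    return fn(*args)     # result is propagated to the caller

assert wrapper(len, "abc") is None        # no explicit return -> None
assert wrapper_fixed(len, "abc") == 3
```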
| <python><function> | 2023-09-29 09:40:44 | 3 | 1,928 | himmip |
77,200,817 | 1,050,187 | Generalized Goertzel algorithm for better peak detection than FFT: shifted frequencies with real data? | <p>I have translated the <strong>Generalized</strong> Goertzel algorithm into Python from the Matlab code that can be found <a href="https://it.mathworks.com/matlabcentral/fileexchange/35103-generalized-goertzel-algorithm" rel="nofollow noreferrer">here</a>.</p>
<p>I have trouble using it on real data, and only on real data: generating a "synthetic" test signal with 5 sine components, Goertzel returns the correct frequencies, obviously with better accuracy than the FFT; the two techniques agree.</p>
<p>However, if I provide market data on N samples, the FFT gives the lowest frequency f = 1/N as expected, while Goertzel returns all frequencies higher than 1. The time frame of the data is 1 hour, but the data carries no timestamp labels (it could just as well be seconds), so my expectation is that the two ways of computing the frequency transform should return, apart from different accuracies, the same harmonics in the frequency domain.</p>
<p>Why does one method return the expected lowest frequency while the other returns frequencies greater than 1 with real data?</p>
<pre><code>import numpy as np

def goertzel_general_shortened(x, indvec, maxes_tollerance=100):
    # Check input arguments
    if len(indvec) < 1:
        raise ValueError('Not enough input arguments')
    if not isinstance(x, np.ndarray) or x.size == 0:
        raise ValueError('X must be a nonempty numpy array')
    if not isinstance(indvec, np.ndarray) or indvec.size == 0:
        raise ValueError('INDVEC must be a nonempty numpy array')
    if np.iscomplex(indvec).any():
        raise ValueError('INDVEC must contain real numbers')

    lx = len(x)
    x = x.reshape(lx, 1)  # forcing x to be a column vector

    # Initialization
    no_freq = len(indvec)
    y = np.zeros((no_freq,), dtype=complex)

    # Computation via second-order system
    for cnt_freq in range(no_freq):
        # Precompute constants
        pik_term = 2 * np.pi * indvec[cnt_freq] / lx
        cos_pik_term2 = 2 * np.cos(pik_term)
        cc = np.exp(-1j * pik_term)  # complex constant
        # State variables
        s0 = 0
        s1 = 0
        s2 = 0
        # Main loop
        for ind in range(lx - 1):
            s0 = x[ind] + cos_pik_term2 * s1 - s2
            s2 = s1
            s1 = s0
        # Final computations
        s0 = x[lx - 1] + cos_pik_term2 * s1 - s2
        y[cnt_freq] = s0 - s1 * cc
        # Complex multiplication substituting the last iteration
        # and correcting the phase for potentially non-integer valued
        # frequencies at the same time
        y[cnt_freq] = y[cnt_freq] * np.exp(-1j * pik_term * (lx - 1))

    return y
</code></pre>
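<p>Note that <code>pik_term = 2 * np.pi * indvec[cnt_freq] / lx</code> treats <code>indvec</code> as a (possibly fractional) DFT bin index k, whereas <code>np.fft.fftfreq</code> reports frequency in cycles per sample (k / N). A sketch of the conversion under that reading (N and the range below are illustrative, matching the SPY example):</p>

```python
import numpy as np

N = 800                        # number of samples, as in the SPY example
k = np.arange(0, 20, 0.001)    # what the code passes as indvec: bin indices
f_cycles_per_sample = k / N    # the unit that np.fft.fftfreq reports

# bin k = 1 corresponds to the FFT's first non-zero frequency, 1/N
assert np.isclose(f_cycles_per_sample[1000], 1.0 / N)
```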
<p>Here are the charts of the FFT and Goertzel transforms for the synthetic 5-component test signal:</p>
<p><a href="https://i.sstatic.net/UrC2X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UrC2X.png" alt="enter image description here" /></a></p>
<p>here the Goertzel one</p>
<p><a href="https://i.sstatic.net/ydCU8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ydCU8.png" alt="enter image description here" /></a></p>
<p>the original frequencies were</p>
<pre><code>signal_frequencies = [30.5, 47.4, 80.8, 120.7, 133]
</code></pre>
<p>Instead, if I try to download market data</p>
<pre><code>data = yf.download("SPY", start="2022-01-01", end="2023-12-31", interval="1h")
</code></pre>
<p>and try to transform the data['Close'] of SPY, this is what I get with the FFT transform with N = 800 samples</p>
<p><a href="https://i.sstatic.net/xsM4z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xsM4z.png" alt="enter image description here" /></a></p>
<p>and the rebuilt signal on the first 2 components (not so good)</p>
<p><a href="https://i.sstatic.net/274Je.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/274Je.png" alt="enter image description here" /></a></p>
<p>and this is what I get with the Goertzel transform</p>
<p><a href="https://i.sstatic.net/lt0TY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lt0TY.png" alt="enter image description here" /></a></p>
<p>Note that the first peaks for the FFT are below 0.005, while for Goertzel they are above 1.</p>
<p>This is the way in which I tested the FFT on SPY data</p>
<pre><code>import yfinance as yf
import numpy as np
import pandas as pd
import plotly.graph_objs as go
from plotly.subplots import make_subplots
def analyze_and_plot(data, num_samples, start_date, num_harmonics):
    num_harmonics = num_harmonics * 1
    # Select the data in the specified interval
    original_data = data
    data = data[data.index >= start_date]
    # Extract the desired samples
    data = data.head(num_samples)

    # Compute the FFT of the "Close" values
    fft_result = np.fft.fft(data["Close"].values)
    frequency_range = np.fft.fftfreq(len(fft_result))
    print("Frequencies: ")
    print(frequency_range)
    print("N Frequencies: ")
    print(len(frequency_range))
    print("First frequencies magnitude: ")
    print(np.abs(fft_result[0:num_harmonics]))

    # Find the dominant harmonics
    # top_harmonics = np.argsort(np.abs(fft_result))[::-1][:num_harmonics]
    top_harmonics = np.argsort(np.abs(fft_result[0:400]))[::-1][1:(num_harmonics + 1)]  # skip first one
    print("Top harmonics: ")
    print(top_harmonics)
    # top_harmonics = [1, 4]#, 8, 5, 9]

    # Create the spectrum chart
    spectrum_trace = go.Scatter(x=frequency_range, y=np.abs(fft_result), mode='lines', name='FFT Spectrum')
    fig_spectrum = go.Figure(spectrum_trace)
    fig_spectrum.update_layout(title="FFT Spectrum", xaxis=dict(title="Frequency"), yaxis=dict(title="Magnitude"))

    # Compute the reconstruction based on the first N harmonics
    reconstructed_signal = np.zeros(len(data))
    time = np.linspace(0, num_samples, num_samples, endpoint=False)
    for harmonic_index in top_harmonics[:num_harmonics]:
        amplitude = np.abs(fft_result[harmonic_index])  # .real
        phase = np.angle(fft_result[harmonic_index])
        frequency = frequency_range[harmonic_index]
        reconstructed_signal += amplitude * np.cos(2 * np.pi * frequency * time + phase)

    zeros = np.zeros(len(original_data) - 2 * len(data))
    reconstructed_signal = np.concatenate((reconstructed_signal, reconstructed_signal), axis=0)
    reconstructed_signal = np.concatenate((reconstructed_signal, zeros), axis=0)
    original_data['reconstructed_signal'] = reconstructed_signal
    # reconstructed_signal = np.fft.ifft(fft_result[top_harmonics[:num_harmonics]])
    # Convert the complex values to real values for the reconstruction
    # reconstructed_signal_real = reconstructed_signal.real

    # Create the second chart with two subplots
    fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1, subplot_titles=("Original Close", "Reconstructed Close"))
    # Add the original "Close" trace to the first subplot
    fig.add_trace(go.Scatter(x=original_data.index, y=original_data["Close"], mode="lines", name="Original Close"), row=1, col=1)
    # Add the reconstruction trace to the second subplot
    fig.add_trace(go.Scatter(x=original_data.index, y=original_data['reconstructed_signal'], mode="lines", name="Reconstructed Close"), row=2, col=1)
    # Update the layout of the second chart
    fig.update_xaxes(title_text="Time", row=2, col=1)
    fig.update_yaxes(title_text="Value", row=1, col=1)
    fig.update_yaxes(title_text="Value", row=2, col=1)
    # Update the overall layout
    fig.update_layout(title="Close Analysis and Reconstruction")

    # Show the spectrum chart
    fig_spectrum.show()
    # fig.update_layout(xaxis = dict(type="category"))
    # Update the X-axis layout to include all the data
    # fig.update_xaxes(range=[original_data.index.min(), original_data.index.max()], row=2, col=1)
    fig.update_xaxes(type="category", row=1, col=1)
    fig.update_xaxes(type="category", row=2, col=1)
    # Show the second chart with the subplots
    fig.show()

# Example usage
data = yf.download("SPY", start="2022-01-01", end="2023-12-31", interval="1h")
analyze_and_plot(data, num_samples=800, start_date="2023-01-01", num_harmonics=2)
</code></pre>
<p>as well as the test of SPY data on Goertzel</p>
<pre><code>import yfinance as yf
import numpy as np
import pandas as pd
import plotly.graph_objs as go
from plotly.subplots import make_subplots
from scipy.signal import argrelmax
def analyze_and_plot(data, num_samples, start_date, num_harmonics):
    # Select the data in the specified interval
    original_data = data
    data = data[data.index >= start_date]
    # Extract the desired samples
    data = data.head(num_samples)

    # Desired frequencies
    frequency_range = np.arange(0, 20, 0.001)

    # Compute the frequency spectrum using the Goertzel function
    transform = goertzel_general_shortened(data['Close'].values, frequency_range)
    harmonics_amplitudes = np.abs(transform)

    # Create the spectrum chart
    spectrum_trace = go.Scatter(x=frequency_range, y=harmonics_amplitudes, mode='lines', name='FFT Spectrum')
    fig_spectrum = go.Figure(spectrum_trace)
    fig_spectrum.update_layout(title="Frequency Spectrum", xaxis=dict(title="Frequency"), yaxis=dict(title="Magnitude"))
    # Show the spectrum chart
    fig_spectrum.show()

    peaks_indexes = argrelmax(harmonics_amplitudes, order=10)[0]  # find indexes of peaks
    peak_frequencies = frequency_range[peaks_indexes]
    peak_amplitudes = harmonics_amplitudes[peaks_indexes]
    print('peaks_indexes')
    print(peaks_indexes[0:30])
    print('peak_frequencies')
    print(peak_frequencies[0:30])
    print('peak_amplitudes')
    print(peak_amplitudes[0:30])

    lower_freq_sort_peak_indexes = np.sort(peaks_indexes)[0:num_harmonics]  # lower indexes <--> lower frequencies
    higher_amplitudes_sort_peak_indexes = peaks_indexes[np.argsort(harmonics_amplitudes[peaks_indexes])[::-1]][0:num_harmonics]
    print('higher_amplitudes_sort_peak_indexes')
    print(higher_amplitudes_sort_peak_indexes[0:10])
    # used_indexes = lower_freq_sort_peak_indexes
    used_indexes = higher_amplitudes_sort_peak_indexes

    # Create the reconstructed signal from the peaks
    time = np.linspace(0, num_samples, num_samples, endpoint=False)
    reconstructed_signal = np.zeros(len(time), dtype=float)
    print('num_samples')
    print(num_samples)
    print('time[0:20]')
    print(time[0:20])
    print('reconstructed_signal')
    print(reconstructed_signal[0:10])
    for index in used_indexes:
        phase = np.angle(transform[index])
        amplitude = np.abs(transform[index])
        frequency = frequency_range[index]
        print('phase')
        print(phase)
        print('amplitude')
        print(amplitude)
        print('frequency')
        print(frequency)
        reconstructed_signal += amplitude * np.sin(2 * np.pi * frequency * time + phase)

    # Extract the real part of the reconstructed signal
    reconstructed_signal_real = reconstructed_signal
    print('reconstructed_signal[1]')
    print(reconstructed_signal[1])
    print('reconstructed_signal.shape')
    print(reconstructed_signal.shape)

    zeros = np.zeros(len(original_data) - 2 * num_samples)
    reconstructed_signal_real = np.concatenate((reconstructed_signal_real, reconstructed_signal_real), axis=0)
    print('reconstructed_signal_real.shape')
    print(reconstructed_signal_real.shape)
    reconstructed_signal_real = np.concatenate((reconstructed_signal_real, zeros), axis=0)
    print('reconstructed_signal_real.shape')
    print(reconstructed_signal_real.shape)
    original_data['reconstructed_signal'] = reconstructed_signal_real

    # Create the second chart with two subplots
    fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.1, subplot_titles=("Original Close", "Reconstructed Close"))
    # Add the original "Close" trace to the first subplot
    fig.add_trace(go.Scatter(x=original_data.index, y=original_data["Close"], mode="lines", name="Original Close"), row=1, col=1)
    # Add the reconstruction trace to the second subplot
    fig.add_trace(go.Scatter(x=original_data.index, y=original_data['reconstructed_signal'], mode="lines", name="Reconstructed Close"), row=2, col=1)
    # Update the layout of the second chart
    fig.update_xaxes(title_text="Time", row=2, col=1)
    fig.update_yaxes(title_text="Value", row=1, col=1)
    fig.update_yaxes(title_text="Value", row=2, col=1)
    # Update the overall layout
    fig.update_layout(title="Close Analysis and Reconstruction")
    # fig.update_layout(xaxis = dict(type="category"))
    # Update the X-axis layout to include all the data
    # fig.update_xaxes(range=[original_data.index.min(), original_data.index.max()], row=2, col=1)
    fig.update_xaxes(type="category", row=1, col=1)
    fig.update_xaxes(type="category", row=2, col=1)
    # Show the second chart with the subplots
    fig.show()

# Example usage
analyze_and_plot(data, num_samples=800, start_date="2023-01-01", num_harmonics=2)
</code></pre>
<p><strong>Edit 30/09/2023</strong></p>
<p>I tried to normalize the SPY data as suggested in the answers, but the problem is still there. Here is the resulting chart:</p>
<p><a href="https://i.sstatic.net/Ruc0s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ruc0s.png" alt="enter image description here" /></a></p>
| <python><fft><frequency-analysis><goertzel-algorithm> | 2023-09-29 09:25:00 | 1 | 623 | fede72bari |
77,200,794 | 10,413,428 | Use match case to handle exceptions | <p>I have a method that receives an exception of type Exception as a parameter and I want to filter which exceptions to show to the user and which to show as general errors.</p>
<p>Since I have more than 10 of these exceptions in the real code, I wanted to use a match case that filters out the different cases.</p>
<p>I only want to match on the type of the exceptions, and if possible I don't want to use an if/elif for each exception, as the code inside each branch would be the same.</p>
<pre class="lang-py prettyprint-override"><code>def handle_exception(self, exception: Exception):
    # Build the error message based on which exception was caught
    match exception:
        case MyCustomExceptionOne() \
                | MyCustomExceptionTwo() \
                | MyCustomExceptionThree():
            show_error_message(exception)
        case _:
            show_error_message("Error occurred")
</code></pre>
<p>Am I missing something?</p>
| <python><python-3.x> | 2023-09-29 09:22:07 | 1 | 405 | sebwr |
77,200,781 | 5,010,875 | SQL Alchemy results to Excel conversion in my Flask project | <p>I am a Python novice. I am writing a simple Flask app. I am running into a problem when I try to generate an Excel report from a SQLAlchemy query result. The error I get is:</p>
<blockquote>
<p>TypeError: Value must be a list, tuple, range or generator, or a dict. Supplied value is <class 'dict_keys'></p>
</blockquote>
<p>Can you please help? Thanks</p>
<pre class="lang-py prettyprint-override"><code>from openpyxl import Workbook
from flask import Response
from app.models import DataModelTbl #inherits from DB.Model
wb = Workbook()
ws = wb.active
# this returns the Flask SQLAlchemy database results
ResultSet = DataModelTbl.query.all()
rows = [u.__dict__ for u in ResultSet]
# Add headers to the Excel sheet
headers = rows[0].keys()
# I get the error here ->
# TypeError: Value must be a list, tuple, range or generator, or a dict.
# Supplied value is <class 'dict_keys'>
ws.append(headers)
# Add data to the Excel sheet
for row_data in rows:
    ws.append(list(row_data.values()))
# Create a response object for the Excel file
open_xml = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
response = Response(content_type=open_xml)
response.headers['Content-Disposition'] = 'attachment; filename=data.xlsx'
# Save the Excel workbook to the response
wb.save(response)
return response
</code></pre>
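<p>The error message itself points at the mismatch: <code>dict_keys</code> is none of the types <code>openpyxl</code> accepts, so converting with <code>list(...)</code> first should satisfy the check — a stdlib-only illustration:</p>

```python
# The shape of the failing value: dict_keys is not a list/tuple/range/dict.
rows = [{"id": 1, "name": "alice"}]
headers = rows[0].keys()

assert not isinstance(headers, (list, tuple, range, dict))
assert list(headers) == ["id", "name"]   # ws.append(list(headers)) would be accepted
```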
| <python><flask><flask-sqlalchemy> | 2023-09-29 09:20:02 | 1 | 347 | Steve |
77,200,580 | 7,648,377 | django.db.utils.InterfaceError: connection already closed on unit tests with postgres | <p>My unit tests are working perfectly on sqlite but I'm trying to make them work on postgres. I'm using python 3.6 and Django 2.2.28.</p>
<p>The unit test fails when Django tries to authenticate the user:</p>
<pre><code>.tox/commonenv/lib/python3.6/site-packages/django/http/response.py:169: in set_cookie
self.cookies[key] = value
/usr/local/lib/python3.6/http/cookies.py:518: in __setitem__
if isinstance(value, Morsel):
.tox/commonenv/lib/python3.6/site-packages/django/utils/functional.py:256: in inner
self._setup()
.tox/commonenv/lib/python3.6/site-packages/django/utils/functional.py:392: in _setup
self._wrapped = self._setupfunc()
.tox/commonenv/lib/python3.6/site-packages/django/contrib/auth/middleware.py:24: in <lambda>
request.user = SimpleLazyObject(lambda: get_user(request))
.tox/commonenv/lib/python3.6/site-packages/django/contrib/auth/middleware.py:12: in get_user
request._cached_user = auth.get_user(request)
.tox/commonenv/lib/python3.6/site-packages/django/contrib/auth/__init__.py:189: in get_user
user = backend.get_user(user_id)
.tox/commonenv/lib/python3.6/site-packages/django/contrib/auth/backends.py:102: in get_user
user = UserModel._default_manager.get(pk=user_id)
.tox/commonenv/lib/python3.6/site-packages/django/db/models/manager.py:82: in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
.tox/commonenv/lib/python3.6/site-packages/django/db/models/query.py:402: in get
num = len(clone)
.tox/commonenv/lib/python3.6/site-packages/django/db/models/query.py:256: in __len__
self._fetch_all()
.tox/commonenv/lib/python3.6/site-packages/django/db/models/query.py:1242: in _fetch_all
self._result_cache = list(self._iterable_class(self))
.tox/commonenv/lib/python3.6/site-packages/django/db/models/query.py:55: in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
.tox/commonenv/lib/python3.6/site-packages/django/db/models/sql/compiler.py:1140: in execute_sql
cursor = self.connection.cursor()
.tox/commonenv/lib/python3.6/site-packages/django/db/backends/base/base.py:256: in cursor
return self._cursor()
.tox/commonenv/lib/python3.6/site-packages/django/db/backends/base/base.py:235: in _cursor
return self._prepare_cursor(self.create_cursor(name))
.tox/commonenv/lib/python3.6/site-packages/django/db/utils.py:89: in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
.tox/commonenv/lib/python3.6/site-packages/django/db/backends/base/base.py:235: in _cursor
return self._prepare_cursor(self.create_cursor(name))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0x7fd3e015f710>, name = None
def create_cursor(self, name=None):
if name:
# In autocommit mode, the cursor will be used outside of a
# transaction, hence use a holdable cursor.
cursor = self.connection.cursor(name, scrollable=False, withhold=self.connection.autocommit)
else:
> cursor = self.connection.cursor()
E django.db.utils.InterfaceError: connection already closed
.tox/commonenv/lib/python3.6/site-packages/django/db/backends/postgresql/base.py:223: InterfaceError
</code></pre>
<p>My deps are:</p>
<pre><code>deps = pip == 20.2.4
setuptools == 45.1.0
pytest == 6.1.2
pytest-django == 4.4.0
faker == 5.0.0
factory_boy == 2.12.0
pytest-factoryboy == 2.0.3
pytest-cov == 2.8.1
pytest-xdist == 1.34.0
pylint == 2.4.2
pylint-django == 2.0.11
pylint-celery == 0.3
</code></pre>
<p>My unit test:</p>
<pre><code>@pytest.mark.django_db
@pytest.mark.small
@pytest.mark.parametrize("url", [
"/site-url/preview/{}/",
"/console/sites/{0}/pages/{1}"
])
def test_renders_userid(admin_client, content_version_factory, url):
version = content_version_factory()
if "{1}" in url:
url = url.format(version.page.site.pk, version.page.pk)
else:
url = url.format(version.uuid)
assert admin_client.cookies.get("userId") is None
response = admin_client.get(url)
assert response.status_code == 200
response_user_id = response.cookies.get("userId")
assert response_user_id is not None
assert response_user_id.value == "admin"
admin_user_id = admin_client.cookies.get("userId")
assert admin_user_id is not None
assert admin_user_id.value == "admin"
admin_client.cookies["userId"] = "some.user"
response = admin_client.get(url)
assert response.status_code == 200
response_user_id = response.cookies.get("userId")
assert response_user_id is None
assert admin_user_id is not None
assert admin_user_id.value == "some.user"
admin_client.logout()
assert admin_client.cookies.get("userId") is None
</code></pre>
| <python><django><pytest> | 2023-09-29 08:49:20 | 1 | 2,762 | Andrei Lupuleasa |
77,200,508 | 1,818,059 | Multiply PDF pages from different documents, with blanks inbetween | <p>I have been asked to create a PDF document consisting of (many) duplicate copies of the first page of different documents. I need to have blank pages in between.</p>
<p>Example: given PDF documents <code>A</code> and <code>B</code> my new document could be</p>
<pre class="lang-none prettyprint-override"><code>A1 A1 A1 ... A1 blank blank blank B1 B1 B1 B1 ... B1
<---- N times-> <-- N blanks ---> <----- N times -->
</code></pre>
<p>Python is my initial go to tool, but not a requirement. Just don't have access to C etc.</p>
<p>I have the initial part working. My problem is: how do I <strong>add the blank pages</strong>?
PdfFileMerger has no method for adding a blank page.</p>
<p>I would prefer not to create a separate PDF document just for this purpose. I will not modify source PDFs to add a blank page as <a href="https://stackoverflow.com/questions/72903196/how-do-i-insert-a-blank-page-between-files-using-pypdf2-pdfmerger">suggested here</a></p>
<p>I should note that at the moment, A and B are single-page PDFs. But I could potentially be required to take a certain page from the source doc.</p>
<p>The library of choice is open, just found PdfFileMerger to be working.</p>
<pre class="lang-py prettyprint-override"><code>from PyPDF2 import PdfFileMerger
merged = PdfFileMerger()
# example: 20 of each A and B
docs = [(20, "A.pdf"),
(20, "B.pdf") ]
blankpagecount = 3
for p in docs:
numpages = p[0]
docname = p[1]
for i in range(numpages):
merged.append(docname)
# --- no property for adding blank page
#for i in range(blankpagecount):
# merged.addBlankPage()
merged.write("mergeddoc2.pdf")
merged.close()
</code></pre>
| <python><pdf> | 2023-09-29 08:35:57 | 4 | 1,176 | MyICQ |
77,200,419 | 8,219,760 | Bulk create many-to-many objects to self | <p>Given the model</p>
<pre class="lang-py prettyprint-override"><code>class MyTable(Model):
close = ManyToManyField("MyTable")
</code></pre>
<p>How can I bulk-create objects for this relation?</p>
<p>For a relation between two different tables, one could use</p>
<pre class="lang-py prettyprint-override"><code>db_payload = [MyTable.close.through(tablea_id=x, tableb_id=y) for x, y in some_obj_list]
MyTable.close.through.objects.bulk_create(db_payload)
</code></pre>
<p>What would the keyword arguments be in the case where the relation is to the table itself?</p>
| <python><django> | 2023-09-29 08:21:02 | 1 | 673 | vahvero |
77,200,077 | 9,099,423 | Doesn't follow PyTorch's LRScheduler API - Rewrite function into lambda (lr_lambda) | <p>Does anyone know how to rewrite the following <code>lr_lambda</code> <code>def</code> into a single lambda?</p>
<pre><code>import math
from torch.optim.lr_scheduler import LambdaLR
def cosine_scheduler(optimizer, training_steps, warmup_steps):
def lr_lambda(current_step):
if current_step < warmup_steps:
return current_step / max(1, warmup_steps)
progress = current_step - warmup_steps
progress /= max(1, training_steps - warmup_steps)
return max(0.0, 0.5 * (1.0 + math.cos(math.pi * progress)))
return LambdaLR(optimizer, lr_lambda)
</code></pre>
<p>I can only come up with the first part</p>
<pre><code>from torch.optim.lr_scheduler import LambdaLR
def cosine_scheduler(optimizer, training_steps, warmup_steps):
lambda current_step : current_step / max(1, warmup_steps) if current_step < warmup_steps else ...
return LambdaLR(optimizer, lr_lambda)
</code></pre>
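For what it's worth, the whole piecewise schedule can be collapsed into one lambda. The sketch below checks the values standalone, without torch (the step counts are made up purely to exercise it); the same `lr_lambda` can then be handed to `LambdaLR(optimizer, lr_lambda)`:

```python
import math

# hypothetical step counts, just to exercise the lambda
training_steps, warmup_steps = 100, 10

lr_lambda = lambda step: (
    step / max(1, warmup_steps)
    if step < warmup_steps
    else max(0.0, 0.5 * (1.0 + math.cos(
        math.pi * (step - warmup_steps) / max(1, training_steps - warmup_steps)
    )))
)

print(lr_lambda(0))    # 0.0 at the start of warmup
print(lr_lambda(10))   # 1.0 at the end of warmup (cos(0) term)
print(lr_lambda(100))  # decays toward 0.0 at the end of training
```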
| <python><pytorch><pytorch-lightning> | 2023-09-29 07:13:48 | 1 | 2,109 | W Kenny |
77,200,020 | 128,618 | sum the list of dictionary if there are the same key using pandas | <pre><code>import pandas as pd
customer1 = {'name': 'John Smith',"qty": 10, 'income': 35, 'email': 'john.smith1@email.com'}
customer2 = {'name': 'John Smith', "qty": 10,'income': 28, 'phone': '555-555-5555',"other": "something", 'email': 'john.smith@email.com'}
customer3 = {'name': 'Bob Johnson',"qty": 10,'income': 20, 'address': '123 Main St', 'email': 'bob.johnson@email.com',"c2":"abc","c3":"edf"}
customer4 = {'name': 'Joe Johnson', "qty": 10,'income': 8, 'address': '123 Main St', 'email': 'bob.johnson@email.com',"c2":"abc","c3":"edf"}
data = [customer1, customer2, customer3, customer4]
df = pd.DataFrame.from_dict(data)
print(df)
</code></pre>
<p>I want to sum <code>qty</code> and <code>income</code> for rows with the same name; the result will look like</p>
<pre><code>result = [
{'name': 'John Smith', "qty": 20,'income': 63, 'phone': '555-555-5555',"other": "something", 'email': 'john.smith@email.com','email2': 'john.smith1@email.com'},
{'name': 'Bob Johnson',"qty": 10,'income': 20, 'address': '123 Main St', 'email': 'bob.johnson@email.com',"c2":"kanel","c3":"pong"},
{'name': 'Joe Johnson', "qty": 10,'income': 8, 'address': '123 Main St', 'email': 'bob.johnson@email.com',"c2":"abc","c3":"edf"}
]
</code></pre>
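A hedged sketch of one way to get there with `groupby`/`agg`: sum the numeric columns and keep the last non-null value of everything else. It assumes the fourth customer was meant to be `customer4` (the question defines `customer3` twice), and it does not keep both of John Smith's email addresses as separate `email`/`email2` columns:

```python
import pandas as pd

customers = [
    {'name': 'John Smith', 'qty': 10, 'income': 35, 'email': 'john.smith1@email.com'},
    {'name': 'John Smith', 'qty': 10, 'income': 28, 'phone': '555-555-5555',
     'other': 'something', 'email': 'john.smith@email.com'},
    {'name': 'Bob Johnson', 'qty': 10, 'income': 20, 'address': '123 Main St',
     'email': 'bob.johnson@email.com', 'c2': 'abc', 'c3': 'edf'},
    {'name': 'Joe Johnson', 'qty': 10, 'income': 8, 'address': '123 Main St',
     'email': 'bob.johnson@email.com', 'c2': 'abc', 'c3': 'edf'},
]
df = pd.DataFrame(customers)

# sum the numeric columns per name; keep the last non-null value of the rest
agg = {col: 'sum' if col in ('qty', 'income') else 'last'
       for col in df.columns if col != 'name'}
out = df.groupby('name', as_index=False, sort=False).agg(agg)

print(out[['name', 'qty', 'income']].to_dict('records'))
```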
| <python><python-3.x><pandas> | 2023-09-29 07:02:47 | 1 | 21,977 | tree em |
77,199,972 | 742,033 | LLM model is very slow | <p>I'm running a 34b <a href="https://huggingface.co/Phind/Phind-CodeLlama-34B-v2" rel="nofollow noreferrer">LLM model</a> on an nvidia g5.8xlarge instance (1 Nvidia A10G GPU, 24GB GPU RAM, 32 vCPU, 128GB RAM)</p>
<p>Here is the code for the inference</p>
<pre><code>from transformers import AutoTokenizer,LlamaForCausalLM, AutoConfig, AutoModelForCausalLM
from accelerate import infer_auto_device_map, init_empty_weights
import torch
model_path = "Phind/Phind-CodeLlama-34B-v2"
model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto", offload_folder="offload", torch_dtype=torch.float16, offload_state_dict = True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
def generate_one_completion(prompt: str):
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)
# Generate
generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=384, do_sample=True, top_p=0.75, top_k=10, temperature=0.1)
completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
completion = completion.replace(prompt, "").split("\n\n\n")[0]
return completion
text = "Hello, how are you?"
print(generate_one_completion(text))
</code></pre>
<ol>
<li>Loading the checkpoint shards takes 4 minutes. How can this be sped up?</li>
<li>Even simple inferences take 60+ seconds, and code infilling/prompting takes 10+ minutes. Can this be sped up on this EC2 instance?</li>
</ol>
| <python><machine-learning><large-language-model><llama> | 2023-09-29 06:54:14 | 0 | 2,642 | Abhijith |
77,199,725 | 1,581,090 | Redirection of python script output to file not working | <p>I have a simple python file on a Linux VM</p>
<pre><code>import time
while True:
print("test")
time.sleep(10)
</code></pre>
<p>which I start with</p>
<pre><code>python test.py > test.log
</code></pre>
<p>but even after a longer time that file is empty</p>
<pre><code>-rwxrwx--- 1 root vboxsf 0 Sep 29 07:57 test.log
</code></pre>
<p>Is the output cached or something? How can I make the output go into the file right away?</p>
<p>I am pretty sure I forgot something very easy...</p>
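This is stdio block buffering: when stdout is redirected to a file or pipe (rather than a terminal), CPython buffers output in large chunks, so nothing hits the file until the buffer fills or the process exits. Running with `python -u`, or printing with `flush=True`, forces each line out immediately. A self-contained sketch (the temporary file names are generated, not the ones from the question):

```python
import os
import subprocess
import sys
import tempfile
import textwrap
import time

# child script equivalent to the question's test.py, but with flush=True
script = textwrap.dedent("""\
    import time
    print("test", flush=True)  # flush=True pushes the line past the block buffer
    time.sleep(30)
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    script_path = f.name
log_path = script_path + ".log"

with open(log_path, "w") as log:
    # equivalent of: python test.py > test.log
    child = subprocess.Popen([sys.executable, script_path], stdout=log)
    time.sleep(2)  # the child has printed but is still sleeping
    with open(log_path) as check:
        content = check.read()
    child.kill()
    child.wait()

os.unlink(script_path)
os.unlink(log_path)
print(repr(content))  # 'test\n' is already on disk; without flush it would be ''
```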
| <python><linux> | 2023-09-29 06:00:28 | 1 | 45,023 | Alex |
77,199,597 | 843,400 | Mypy complaining "unsupported right operand type for in ("object") and "object" has no attribute "get" but interpreter runs fine? | <p>I have some really simple Python code running and I don't understand why mypy is complaining about it. I am kind of new to using Python, so maybe it's obvious, but I haven't had much luck.</p>
<p>I have this dict:</p>
<pre><code> DEFAULTS = {
"environments": {
"beta": {
"level": 10
},
"prod": {
"level": 20
}
},
"contact": {
"username": "blah",
"type": "test",
"name": "blah2"
}
}
</code></pre>
<p>If I drop it in the interpreter and mess with it in the console, I can do things like</p>
<pre><code> stage = # get stage from somewhere
if stage in DEFAULTS.get("environments"):
# Do something
</code></pre>
<p>But if I do the exact same thing in my "real" code, I get a mypy error that says:
<code>error: Unsupported right operand type for in ("object")</code>.</p>
<p>Similarly, if I try to do:
<code>something = DEFAULTS.get("contact").get("username")</code>, it works just fine from my interpreter, but mypy says <code>error: "object" has no attribute "get"</code></p>
<p>Any help would be greatly appreciated!</p>
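What mypy is likely doing: the two top-level values have different nested types, so it joins them to `object`, and `.get(...)` then returns `object | None`, which supports neither `in` nor `.get`. A sketch of one common fix, assuming you are happy to loosen the inner values to `Any`: an explicit annotation plus a `None` guard.

```python
from typing import Any, Dict

# the annotation tells mypy the values are dicts, not bare objects
DEFAULTS: Dict[str, Dict[str, Any]] = {
    "environments": {
        "beta": {"level": 10},
        "prod": {"level": 20},
    },
    "contact": {
        "username": "blah",
        "type": "test",
        "name": "blah2",
    },
}

stage = "beta"  # stand-in for however stage is obtained
environments = DEFAULTS.get("environments")
# narrowing the Optional before use also satisfies strict-optional checking
if environments is not None and stage in environments:
    level = environments[stage]["level"]
    print(level)  # 10
```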
| <python><mypy> | 2023-09-29 05:24:09 | 1 | 3,906 | CustardBun |
77,199,304 | 7,709,727 | How to use mock.patch with wraps on a class method? | <p>I am using <code>unittest.mock.patch</code> with <code>wraps</code> to mock a function while being able to access the original function. For example:</p>
<pre class="lang-py prettyprint-override"><code>from unittest import mock
def f(x, y):
return x * y
print(f(2, 3)) # output: 6
original_f = f
def m(x, y):
return original_f(x, y + 1)
with mock.patch('__main__.f', wraps=m) as mk:
print(f(2, 3)) # output: 8
</code></pre>
<p>However, a similar program that tries to mock a class method does not work:</p>
<pre class="lang-py prettyprint-override"><code>from unittest import mock
class A:
def __init__(self, x):
self.x = x
def f(self, y):
return self.x * y
a = A(2)
print(a.f(3)) # output: 6
original_f = A.f
def m(self, y):
return original_f(self, y + 1)
with mock.patch('__main__.A.f', wraps=m) as mk:
print(a.f(3)) # Exception
</code></pre>
<p>The error is:</p>
<pre><code>Traceback (most recent call last):
File "a.py", line 18, in <module>
print(a.f(3))
^^^^^^
File "/usr/lib/python3.11/unittest/mock.py", line 1118, in __call__
return self._mock_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/unittest/mock.py", line 1122, in _mock_call
return self._execute_mock_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/unittest/mock.py", line 1192, in _execute_mock_call
return self._mock_wraps(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: m() missing 1 required positional argument: 'y'
</code></pre>
<p>It appears that the mock package tries to call <code>m(self=3, y=<undefined>)</code>, so the exception happens.</p>
<p>Why does <code>mock.patch</code> with <code>wraps</code> fail to work on class methods? How should I fix my program?</p>
<p><strong>Edit:</strong></p>
<p>Here is a more complicated working example, to demonstrate that</p>
<ol>
<li>The <code>a.f(3)</code> call cannot be modified.</li>
<li>The tester (<code>run_test</code>) cannot access <code>a</code> in testee (<code>test_target</code>).</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from unittest import mock
class A:
# This class cannot be changed
def __init__(self, x):
self.x = x
def f(self, y):
return self.x * y
def test_target():
# This function cannot be changed
a = A(2)
b = A(3)
return a.f(3), b.f(3)
def run_test():
print(test_target()) # output: (6, 9)
original_f = A.f
def m(self, y):
return original_f(self, y + 1)
with mock.patch('__main__.A.f', wraps=m) as mk:
#with mock.patch.object(A, 'f', wraps=m) as mk:
print(test_target()) # Exception
if __name__ == '__main__':
run_test()
</code></pre>
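One known workaround (not claimed to be the only one): patch with `autospec=True` and pass the wrapper as `side_effect` instead of `wraps`. An autospecced method mock forwards `self` explicitly, so the wrapper's signature lines up:

```python
from unittest import mock

class A:
    def __init__(self, x):
        self.x = x
    def f(self, y):
        return self.x * y

original_f = A.f

def m(self, y):
    # call the saved original with a tweaked argument
    return original_f(self, y + 1)

a = A(2)
# autospec=True makes the patched attribute behave like a real method:
# instance calls pass `self` through to side_effect
with mock.patch.object(A, "f", autospec=True, side_effect=m):
    result = a.f(3)

print(result)  # 8
```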
| <python><mocking> | 2023-09-29 03:32:55 | 1 | 1,570 | Eric Stdlib |
77,198,918 | 5,942,100 | Tricky calculation refresh based on columns using Pandas | <p>The logic is to add each 'new_r_aa' value to the running total that starts from the first value under 'aa_cumul'. For example, the code takes 3 (the first value under 'aa_cumul') and adds 8 from the 'new_r_aa' column, giving 3+8 = 11, then 11+9 = 20, 20+8 = 28, 28+8 = 36, and so on. The first row of each group should remain unchanged from the original input.</p>
<p><strong>Data</strong></p>
<pre><code>data = {
'city': ['NY', 'NY', 'NY', 'NY', 'NY', 'CA'],
'ID': ['AA', 'AA', 'AA', 'AA', 'AA', 'AA'],
'quarter': ['2024Q1', '2024Q2', '2024Q3', '2024Q4', '2025Q1', '2024Q1'],
'cml_bb_racks': [6, 13, 18, 20, 30, 5],
'r_aa_bx': [0, 2, 3, 4, 2, 1],
'cml_aa_bx': [1, 3, 6, 10, 12, 2],
'BB_AA_Bx_Ratio': [6.0, 4.333333, 3.0, 2.0, 2.5, 2.5],
'expected_aa_bx_delta': [1.81, 2.856537, 2.395498, 0.0, 2.1, 2.0],
'total aa': [1.810000, 4.856537, 5.395498, 4.000000, 4.100000, 3.000000],
'total round aa': [2.0, 6.0, 6.0, 4.0, 6.0, 4.0],
'new_r_aa': [2.0, 8.0, 9.0, 8.0, 8.0, 5.0],
'aa_cumul': [3.0, 0.0, 0.0, 0.0, 0.0, 6.0]
}
df = pd.DataFrame(data)
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>data = {
'city': ['NY', 'NY', 'NY', 'NY', 'NY', 'CA'],
'ID': ['AA', 'AA', 'AA', 'AA', 'AA', 'AA'],
'quarter': ['2024Q1', '2024Q2', '2024Q3', '2024Q4', '2025Q1', '2024Q1'],
'cml_bb_racks': [6, 13, 18, 20, 30, 5],
'r_aa_bx': [0, 2, 3, 4, 2, 1],
'cml_aa_bx': [1, 3, 6, 10, 12, 2],
'BB_AA_Bx_Ratio': [6.0, 4.333333, 3.0, 2.0, 2.5, 2.5],
'expected_aa_bx_delta': [1.81, 2.856537, 2.395498, 0.0, 2.1, 2.0],
'total aa': [1.800000, 4.856537, 5.395498, 4.000000, 4.100000, 3.000000],
'total round aa': [2, 6, 6, 4, 6, 4],
'new_r_aa': [2.0, 8.0, 9.0, 8.0, 8.0, 5.0],
'aa_cumul': [3.0, 11.0, 20.0, 28.0, 36.0, 6.0]
}
df = pd.DataFrame(data)
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>df['aa_cumul'] = df.groupby(['city', 'ID'])['new_r_aa'].transform(lambda x: x.cumsum().add(x.iloc[0])).fillna(df['new_r_aa'])
</code></pre>
<p>However, this code is not applying the correct reset. Any suggestion is helpful.</p>
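A hedged sketch of one vectorized per-group fix: take the running sum of `new_r_aa` within each `(city, ID)` group, subtract the first row's own contribution, and anchor it on the first row's existing `aa_cumul` value, which leaves the first row of every group untouched:

```python
import pandas as pd

data = {
    "city": ["NY", "NY", "NY", "NY", "NY", "CA"],
    "ID": ["AA", "AA", "AA", "AA", "AA", "AA"],
    "new_r_aa": [2.0, 8.0, 9.0, 8.0, 8.0, 5.0],
    "aa_cumul": [3.0, 0.0, 0.0, 0.0, 0.0, 6.0],
}
df = pd.DataFrame(data)

g = df.groupby(["city", "ID"])
df["aa_cumul"] = (
    g["new_r_aa"].cumsum()              # running total of new_r_aa within the group
    - g["new_r_aa"].transform("first")  # drop the first row's own contribution
    + g["aa_cumul"].transform("first")  # anchor on the first row's existing value
)

print(df["aa_cumul"].tolist())  # [3.0, 11.0, 20.0, 28.0, 36.0, 6.0]
```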
| <python><pandas><numpy> | 2023-09-29 03:01:09 | 1 | 4,428 | Lynn |
77,198,918 | 5,924,264 | SQL query from date column (represented as an int) returns a datetime.date object | <p>I have a sql table with a single row. It has a column <code>date</code> with the entry <code>20230906</code>.
In the program I am using, when I do:</p>
<pre><code>query = "select max(date) from table_name"
print("res max", cursor.execute(query).fetchall()[0][0])
query = "select date from table_name order by date asc"
print("res all", cursor.execute(query).fetchall()[0][0])
</code></pre>
<p>it prints</p>
<pre><code>('res max', 20230906)
('res all', datetime.date(2023, 9, 6))
</code></pre>
<p>I tried to reproduce this in ipython on the same database file that the program accessed and i cannot reproduce this (in ipython, the second print is also an integer <code>20230906</code>). Does anyone know how the second was converted to a <code>datetime.date</code> object?</p>
<pre><code>def query_sql(conn, query):
return conn.cursor().execute(query).fetchall()
</code></pre>
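A plausible explanation, stated as an assumption about the program you could not inspect: its connection is opened with `detect_types=sqlite3.PARSE_DECLTYPES` and a converter is registered for the declared column type `date`. Converters fire only when the result column carries a declared type; `max(date)` is an expression with no declared type, so it comes back raw. A self-contained reproduction (the YYYYMMDD converter here is hypothetical):

```python
import datetime
import sqlite3

def convert_yyyymmdd(raw: bytes) -> datetime.date:
    # hypothetical converter for dates stored as YYYYMMDD integers
    s = raw.decode()
    return datetime.date(int(s[:4]), int(s[4:6]), int(s[6:8]))

sqlite3.register_converter("date", convert_yyyymmdd)

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE table_name (date date)")
conn.execute("INSERT INTO table_name VALUES (20230906)")

cur = conn.cursor()
res_max = cur.execute("SELECT max(date) FROM table_name").fetchone()[0]
res_all = cur.execute("SELECT date FROM table_name ORDER BY date ASC").fetchone()[0]

print(res_max)  # 20230906  (expression column: no declared type, converter skipped)
print(res_all)  # 2023-09-06  (declared type 'date': converter applied)
```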
| <python><sql><sqlite><datetime> | 2023-09-29 00:35:24 | 0 | 2,502 | roulette01 |
77,198,874 | 20,771,478 | Send mail from Outlook with Python and Graph API - Authentication Problem | <p>I have written a Python script that uses the <a href="https://github.com/vgrem/Office365-REST-Python-Client" rel="nofollow noreferrer">office365</a> library and loads Excel files from SharePoint to my local system. From there I push selected content via SQL Alchemy to a SQL server.</p>
<p>This process is to run daily as a routine gathering data for a report. The problem with Excel however is that people work with it. Therefore I expect things to go wrong sometimes. For example, a file not being available or just having a name that is slightly off (and not caught by my regex).</p>
<p>Luckily the report users are also the file providers. Therefore I would like to implement checks in my script that automatically send problems to the file providers (and to me) if the script can't do its job.</p>
<p>To begin with, I want to send a mail from my outlook account. To do so I want to use the Microsoft Graph API.</p>
<p>I have found my tenant ID and went to entra.microsoft.com to create an application. I gave the application the Mail.Send and Mail.Send.Shared permission and I created a client secret.</p>
<p>With that, I was able to write the following code.</p>
<pre><code>import msal
import requests
import json
TENANT_ID = 'Tenant ID'
AUTHORITY_URL = 'https://login.microsoftonline.com/' + TENANT_ID
CLIENT_ID = 'Client ID'
CLIENT_SECRET = 'Client Secret'
SCOPES = ["https://graph.microsoft.com/.default"]
app = msal.ConfidentialClientApplication(
client_id=CLIENT_ID,
client_credential=CLIENT_SECRET,
authority=AUTHORITY_URL)
result = app.acquire_token_for_client(scopes=SCOPES)
userId = "My Outlook Mail Adress"
endpoint = f'https://graph.microsoft.com/v1.0/users/{userId}/sendMail'
toUserEmail = "My Outlook Mail Adress"
email_msg = {'Message': {'Subject': "Test Sending Email from Python",
'Body': {'ContentType': 'Text', 'Content': "This is a test email."},
'ToRecipients': [{'EmailAddress': {'Address': toUserEmail}}]
},
'SaveToSentItems': 'true'}
r = requests.post(endpoint,
headers={'Authorization': 'Bearer ' + result['access_token']},
json=email_msg)
if r.ok:
print('Sent email successfully')
else:
print(r.json())
</code></pre>
<p>However, the Graph API replies with:</p>
<blockquote>
<p>{'error': {'code': 'ErrorAccessDenied', 'message': 'Access is denied.
Check credentials and try again.'}}</p>
</blockquote>
<p>So I guess, I do something wrong when delegating my users permissions to the application. <a href="https://learn.microsoft.com/en-us/azure/active-directory/develop/permissions-consent-overview?WT.mc_id=Portal-Microsoft_AAD_RegisteredApps" rel="nofollow noreferrer">Here</a> is the Microsoft documentation I found about this topic.</p>
<p>Do you know how I have to change my code in order to get permission to send the mail?</p>
<p>As far as I can see, I am doing things exactly as described in <a href="https://stackoverflow.com/questions/76805337/python-send-email-using-graph-api-and-office365-rest-python-client">this</a> question's answer. But what worked for them, doesn't do the trick for me.</p>
<p><a href="https://stackoverflow.com/questions/75650693/microsoft-graph-apis-access-is-denied-check-credentials-and-try-again/75658620#75658620">This</a> question is also closely related. The only difference that I can spot is that the person who answered gave the application the Mail.Send permission on an application level and I would like to work with the Delegated Level because the other is not enabled by our organization.</p>
<p><a href="https://i.sstatic.net/JC2Au.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JC2Au.png" alt="enter image description here" /></a></p>
<p>Edit as reply to the answer from Jesse:
When I run your code I need to ask for approval. Is this avoidable?</p>
<p><a href="https://i.sstatic.net/LlH65.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LlH65.png" alt="enter image description here" /></a></p>
| <python><microsoft-graph-api><office365> | 2023-09-29 00:12:19 | 1 | 458 | Merlin Nestler |
77,198,755 | 3,326,666 | Expand list of tuples | <p>I have a big list of tuples (20-30 million entries). Each sub-tuple holds three pieces of information, <code>(start, end, value)</code>, and is supposed to represent a single position in a sequence. However, the program generating this data has concatenated adjacent positions when the value is the same, so the entries look like this:</p>
<pre><code>[..., (71, 72, -0.2998250126838684),
(72, 73, -0.2858070135116577),
(73, 74, -0.29049500823020935),
(74, 75, -0.3044680058956146),
(75, 76, -0.28386199474334717),
(76, 80, -0.27730199694633484), # PROBLEM: end - start > 1
(80, 81, -0.2726449966430664),
(81, 82, -0.26151400804519653),
(82, 84, -0.2679719924926758), # PROBLEM: end - start > 1
(84, 85, -0.24273300170898438),
(85, 86, -0.23799900710582733),
(86, 87, -0.24745100736618042),
(87, 88, -0.2568419873714447),
(88, 89, -0.2585819959640503), ...]
</code></pre>
<p>To fix this I would like to create new entries in the list which separate these tuples which represent multiple positions into new tuples which represent just one position.
I thus desire an output like this:</p>
<pre><code>(..., (71, 72, -0.2998250126838684),
(72, 73, -0.2858070135116577),
(73, 74, -0.29049500823020935),
(74, 75, -0.3044680058956146),
(75, 76, -0.28386199474334717),
(76, 77, -0.27730199694633484), # New
(77, 78, -0.27730199694633484), # New
(78, 79, -0.27730199694633484), # New
(79, 80, -0.27730199694633484), # New
(80, 81, -0.2726449966430664),
(81, 82, -0.26151400804519653),
(82, 83, -0.2679719924926758), # New
(83, 84, -0.2679719924926758), # New
(84, 85, -0.24273300170898438),
(85, 86, -0.23799900710582733),
(86, 87, -0.24745100736618042),
(87, 88, -0.2568419873714447),
(88, 89, -0.2585819959640503), ...)
</code></pre>
<p>To do this, with my list of tuples called <code>bwList</code> I have done the following:</p>
<pre><code>replacementTuple = ()
for t in bwList:
if t[1] - t[0] == 1:
replacementTuple = replacementTuple + (t,)
else:
numNewTuples = t[1] - t[0]
st, ed = range(t[0], t[1]), range(t[0] + 1, t[1] + 1)
for m in range(numNewTuples):
replacementTuple = replacementTuple + ((st[m], ed[m], t[2]) ,)
</code></pre>
<p>Here the output is a tuple of tuples, not a list. I don't mind too much either way.</p>
<p>This approach appears to work, but is very slow!</p>
<p>Is there a way to speed this up?</p>
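The slowdown is the repeated tuple concatenation: each `replacementTuple + (...)` copies the entire accumulator, making the loop quadratic. A single list comprehension does the expansion in one linear pass (shown on a small slice of the sample data):

```python
bw_list = [
    (75, 76, -0.28386199474334717),
    (76, 80, -0.27730199694633484),  # spans four positions
    (80, 81, -0.2726449966430664),
]

# one (start, start + 1, value) triple per position covered by each input tuple
expanded = [
    (pos, pos + 1, value)
    for start, end, value in bw_list
    for pos in range(start, end)
]

print(len(expanded))  # 6
print(expanded[2])    # (77, 78, -0.27730199694633484)
```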
| <python><for-loop><tuples> | 2023-09-28 23:22:50 | 2 | 1,597 | G_T |
77,198,734 | 5,942,100 | Tricky create calculation that pulls in retro values using Pandas | <p>I have a dataset where I would like to create a new column called 'aa_cumul'. For each city and ID, its first value is the sum of the first 'new_r_aa' value (2) and the first 'cml_aa_bx' value (1), which gives 3.
From there we take the cumulative sum of 'aa_cumul' and 'new_r_aa'
(3+8 = 11, 11+9 = 20, etc.)</p>
<p><strong>Data</strong></p>
<pre><code>import pandas as pd
data = {
'city': ['NY', 'NY', 'NY', 'NY', 'NY', 'CA'],
'ID': ['AA', 'AA', 'AA', 'AA', 'AA', 'AA'],
'cml_aa_bx': [1, 3, 6, 10, 12, 2],
'new_r_aa': [2, 6, 9, 8, 6, 5]
}
df = pd.DataFrame(data)
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>data = {
'city': ['NY', 'NY', 'NY', 'NY', 'NY', 'CA'],
'ID': ['AA', 'AA', 'AA', 'AA', 'AA', 'AA'],
'cml_aa_bx': [1, 3, 6, 10, 12, 2],
'new_r_aa': [2, 6, 9, 8, 6, 5],
'aa_cumul': [3, 11, 20, 28, 34, 6]
}
</code></pre>
<p><strong>Doing</strong></p>
<pre><code># Initialize the 'new cuml aa' column
new_cuml_aa = []
# Initialize the first value in 'new cuml aa' with the sum of the first value in 'new r aa' and 'cml_aa_bx'
new_cuml_aa.append(df['new_r_aa'][0] + df['cml_aa_bx'][0])
# Loop through the DataFrame to calculate 'new cuml aa' values
for i in range(1, len(df)):
new_cuml_aa_value = new_cuml_aa[i - 1] + df['new_r_aa'][i]
new_cuml_aa.append(new_cuml_aa_value)
</code></pre>
<p>However, this is giving me the wrong values/output. Any suggestion is appreciated</p>
| <python><pandas><numpy> | 2023-09-28 23:12:17 | 2 | 4,428 | Lynn |