QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,809,683 | 8,849,755 | Plotly Heatmap texttemplate not working with customdata | <p>I am trying to use custom data in the <code>texttemplate</code> of a <code>Heatmap</code> in Plotly, but it does not seem to work. I am using Python 3.10.12 and Plotly 5.15.0. Below is a MWE:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
import numpy
df = px.data.medals_wide(indexed=True)
fig = px.imshow(df)
customdata = numpy.zeros(df.shape)
fig.data[0].customdata = customdata
fig.data[0].hovertemplate = 'Custom data: %{customdata}' # This works fine.
fig.data[0].texttemplate = '%{customdata}' # This fails.
fig.write_html('deleteme.html',include_plotlyjs='cdn')
</code></pre>
<p>Produces:
<a href="https://i.sstatic.net/JsGF8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JsGF8.png" alt="enter image description here" /></a></p>
<p>Expected:
<a href="https://i.sstatic.net/iPLZn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iPLZn.png" alt="enter image description here" /></a></p>
| <python><plotly><custom-data-attribute> | 2023-08-01 08:14:29 | 1 | 3,245 | user171780 |
76,809,660 | 6,218,849 | How do I structure/design a multipage dash application to avoid duplicate callback errors? | <p>I am trying to refactor my one-page dashboard app into a multipage app in order to provide an "About" page where I lay out some details about the dashboard. However, I run into duplicate callback errors. Note that</p>
<ol>
<li>When I first start the application the root page loads fine and works as expected.</li>
<li>If I refresh the page I get duplicate callback errors</li>
<li>If I navigate to the about page I get duplicate callback errors</li>
</ol>
<p>I provide below a full MWE which reproduces the error.</p>
<h1>My thoughts</h1>
<p>I think the issue lies with the fact that I am getting the <code>Dash</code> instance in the pages via <code>app = dash.get_app()</code>. Whenever the duplicate callback error occurs, I notice in the callback tree debugger tool that the connections have doubled.</p>
<p>I think I need to pass the <code>app</code> to the utility functions, since I need them to set up the callback. I decided to put the callback inside the utility function, because</p>
<ol>
<li>I am generating a lot of identical-looking components where the figure data and titles are different. In this way I can generate all these components by calling the same utility function, passing the relevant parameters.</li>
<li>I think the code is simpler when the callback for updating the figure lives inside the function responsible for generating the figure.</li>
</ol>
<h1>Tree structure</h1>
<pre><code>.
├── main.py
├── pages
│ ├── about.py
│ └── home.py
└── utils.py
</code></pre>
<h1><code>utils.py</code></h1>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
import numpy as np
from dash import Dash, Patch, html, dcc
from dash.dependencies import Input, Output
def sine(xs: np.array) -> np.array:
return 100 * np.sin(xs)
def make_figure(app: Dash) -> html.Div:
fig = go.Figure()
fig.add_trace(go.Scatter(
x=None,
y=None,
name="Randomized Data"
))
fig.update_layout(height=800, template="plotly_dark")
@app.callback(
Output("plotly-figure", "figure"),
Input("dcc-interval", "n_intervals")
)
def update(n_clicks: int) -> Patch:
xs = np.linspace(0, 6*np.pi, 100)
ys = sine(xs) * np.random.random(size=len(xs))
patch = Patch()
patch["data"][0]["x"] = xs
patch["data"][0]["y"] = ys
return patch
return html.Div(children=[
dcc.Graph(figure=fig, id="plotly-figure")
])
def make_random_title(app: Dash) -> html.Div:
adjectives = [
"Affable",
"Affluent",
"Deprived",
"Destitute",
"Elaborate",
"Evocative",
"Frivolous",
"Gritty",
"Immense"
]
nouns = [
"Crackdown",
"Culprit",
"Discrepancy",
"Endorsement",
"Enmity",
"Gravity",
"Immunity",
"Juncture",
"Legislation",
"Offspring"
]
@app.callback(
Output("random-title", "children"),
Input("dcc-interval", "n_intervals")
)
def update(n_intervals: int) -> html.H1:
return html.H1(f"{np.random.choice(adjectives)} {np.random.choice(nouns)}")
return html.Div(children=[
f"{np.random.choice(adjectives)} {np.random.choice(nouns)}"
], id="random-title")
</code></pre>
<h1><code>pages/about.py</code></h1>
<pre class="lang-py prettyprint-override"><code>import dash
dash.register_page(__name__, path="/about")
from dash import html
def layout() -> html.Div:
return html.Div(children=[
html.H1("About Page")
])
</code></pre>
<h1><code>pages/home.py</code></h1>
<pre class="lang-py prettyprint-override"><code>import dash
import utils
dash.register_page(__name__, path="/")
app = dash.get_app()
from dash import html
def layout() -> html.Div:
return html.Div(children=[
utils.make_random_title(app),
utils.make_figure(app)
])
</code></pre>
<h1><code>main.py</code></h1>
<pre class="lang-py prettyprint-override"><code>import dash
import dash_bootstrap_components as dbc
from dash import Dash, html, dcc
def main() -> Dash:
app = Dash(__name__, use_pages=True, external_stylesheets=[dbc.themes.SLATE])
app.layout = html.Div(children=[
dash.page_container,
dcc.Interval(id="dcc-interval", n_intervals=0, interval=1000)
])
return app
if __name__ == "__main__":
app = main()
app.run(debug=True)
</code></pre>
<h1><code>requirements.txt</code></h1>
<pre><code>plotly
dash
numpy
dash-bootstrap-components
</code></pre>
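<p>For context, the refactor I am considering (a sketch, assuming Dash >= 2.0 where the module-level <code>dash.callback</code> decorator registers against the running app) is to register callbacks once at import time instead of inside the layout helpers, so re-rendering a page cannot register them twice:</p>
<pre class="lang-py prettyprint-override"><code># utils.py (sketch)
import numpy as np
from dash import Patch, callback, dcc, html
from dash.dependencies import Input, Output


def make_figure() -> html.Div:
    # Build the component only; no callback registration here.
    return html.Div(children=[dcc.Graph(id="plotly-figure")])


# Registered exactly once, when the module is first imported.
@callback(
    Output("plotly-figure", "figure"),
    Input("dcc-interval", "n_intervals")
)
def update_figure(n_intervals: int) -> Patch:
    xs = np.linspace(0, 6 * np.pi, 100)
    ys = 100 * np.sin(xs) * np.random.random(size=len(xs))
    patch = Patch()
    patch["data"][0]["x"] = xs
    patch["data"][0]["y"] = ys
    return patch
</code></pre>
<p>But I am unsure whether this scales to many identical components with different parameters, which is why I structured the code the way I did above.</p>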
| <python><plotly-dash> | 2023-08-01 08:11:08 | 2 | 710 | Yoda |
76,809,593 | 2,242,558 | Normalizing Nmap Results from File with Python | <p>What I am trying to achieve is the following:</p>
<p>Step 1: Scan the Target IP Address with the following <code>nmap</code> command:</p>
<pre><code>nmap_scan_command = f"nmap -p- -sC -sV -oG output.txt {target}"
result = subprocess.run(nmap_scan_command, shell=True, check=True, capture_output=True, text=True)
</code></pre>
<p>the <code>result</code> variable could contain the following text:</p>
<pre><code>Host: 192.168.0.1 () Status: Up
Host: 192.168.0.1 () 22/open/tcp//tcpwrapped///, 80/open/tcp//tcpwrapped///, 443/open/tcp//ssl|http//Golang net|http server (Go-IPFS json-rpc or InfluxDB API)/, 5671/open/tcp//http//Golang net|http server (Go-IPFS json-rpc or InfluxDB API)/, 49363/open/tcp///// Ignored State: filtered (65530) # Nmap done at Mon Jul 31 16:14:16 2023 -- 1 IP address (1 host up) scanned in 551.90 seconds
</code></pre>
<p>Step 2: Parse the results and extract open ports and running services on them so I can save them to a dictionary:</p>
<pre><code>grep_command = "grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' output.txt"
result2 = subprocess.run(grep_command, shell=True, check=True, capture_output=True, text=True)
</code></pre>
<p>With the above command in Step 2, I can get the lines that contain IPs, but how do I separate the open ports for each IP and store them in a dictionary for further processing?</p>
<p>Maybe <code>result2</code> contains results for more than one IP address. I am pretty sure there could be a one-liner for this, but I am not sure how to achieve it...</p>
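<p>For reference, the post-processing I have in mind (a plain-Python sketch against the grepable format shown above; the field layout is my assumption) is:</p>
<pre><code>import re

ports_by_ip = {}
with open("output.txt") as fh:
    for line in fh:
        # Grepable lines look like: "Host: <ip> () <port>/<state>/<proto>//<service>//..., ..."
        match = re.match(r"Host:\s+(\d{1,3}(?:\.\d{1,3}){3})\s+\(\)\s+(\d+/.*)", line)
        if not match:
            continue
        ip, ports_field = match.groups()
        for entry in ports_field.split(","):
            fields = entry.strip().split("/")
            if len(fields) >= 5 and fields[1] == "open":
                ports_by_ip.setdefault(ip, []).append((int(fields[0]), fields[4]))

print(ports_by_ip)  # e.g. {'192.168.0.1': [(22, 'tcpwrapped'), ...]}
</code></pre>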
| <python><python-3.x><regex> | 2023-08-01 08:00:07 | 0 | 367 | chrysst |
76,809,432 | 9,506,773 | txt.gz file isn't in my published pypi package | <p>I have a <code>.txt.gz</code> in my repo which should be part of a pypi package. When I publish a new pypi release and inspect the installed package in my environment, I can see that the file is not part of the package. Why isn't this file in the package?</p>
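<p>For reference, the sketch I am about to try (assuming a setuptools build backend; the package and path names below are placeholders) is to declare the non-Python file explicitly, since setuptools does not package it by default:</p>
<pre><code># pyproject.toml
[tool.setuptools.package-data]
"mypackage" = ["*.txt.gz"]

# MANIFEST.in (for sdists)
include mypackage/*.txt.gz
</code></pre>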
| <python><continuous-integration><pypi> | 2023-08-01 07:35:55 | 0 | 3,629 | Mike B |
76,809,394 | 3,099,733 | How to specify a default value for unspecified variable in Python string templates? | <p>Consider the following code</p>
<pre class="lang-py prettyprint-override"><code>from string import Template
s = Template('$a $b')
print(s.safe_substitute(a='hello')) # $b is not substituted
print(s.substitute(a='hello')) # raise error
</code></pre>
<p>What I want is to be able to specify a default value for all the unspecified variables in the template, for example an empty string ''.</p>
<p>The reason I need this feature is that there is a case where both the template and the variables are provided by users. They may provide some placeholders in the template string as an injection point in case someone else needs to use them, but if the user doesn't provide a value for them, those placeholders should be ignored. And I don't need to specify different default values for different variables, just a common default value for all missing variables.</p>
<p>I don't have to stick to the string template module, but I don't want to use a heavy solution like <code>jinja2</code> either. I just want a lightweight template solution to get things done.</p>
<p>The method of providing a default dictionary doesn't work in my case, as the template is provided by other users, so I don't know the possible variables beforehand.</p>
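<p>The closest I have gotten so far is this sketch (it relies on <code>Template.substitute</code> accepting a mapping, so a <code>defaultdict</code> can supply the common default without knowing the variable names in advance):</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
from string import Template

s = Template('$a $b')
# Every variable missing from the mapping falls back to str() == ''.
print(s.substitute(defaultdict(str, a='hello')))  # prints "hello "
</code></pre>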
| <python><stringtemplate> | 2023-08-01 07:31:14 | 2 | 1,959 | link89 |
76,808,985 | 4,858,867 | cx_Freeze with RecursionError: maximum recursion depth exceeded in __instancecheck__ | <p>I'm trying to build an executable from a directory containing a set of Python <a href="https://dash.plotly.com/" rel="nofollow noreferrer">Dash Plotly</a> files. To achieve this, I've used the Python package <a href="https://cx-freeze.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">cx_Freeze</a>. However, after running the command below</p>
<pre><code>python setup.py build
</code></pre>
<p>I've got the following error</p>
<p><a href="https://i.sstatic.net/RcOCQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RcOCQ.png" alt="enter image description here" /></a></p>
<p>It looks like this error comes from a Python package itself. Does anyone have experience with this? Thank you for your help.</p>
| <python><plotly-dash><cx-freeze> | 2023-08-01 06:25:18 | 1 | 383 | Kai-Chun Lin |
76,808,151 | 16,702,151 | are pytorch and torch compatible in code? | <p>I followed the website instructions:</p>
<pre><code>pip install transformers torch accelerate
</code></pre>
<p>After the installation, the following code works</p>
<pre><code>import torch
</code></pre>
<p>But the performance was slow on a Mac M1. I then installed</p>
<pre><code>pip install pytorch
</code></pre>
<p>Without any code modification, the program seems okay. However, are we supposed to make any code changes to go from <code>torch</code> to <code>pytorch</code>?</p>
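<p>For reference, the check I am using so far (a sketch, assuming torch >= 1.12 where the Apple-silicon MPS backend is exposed) is:</p>
<pre><code>import torch

print(torch.__version__)
# On an M1, speed usually depends on whether the MPS backend is used,
# not on which package name was installed.
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(device)
</code></pre>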
| <python><pytorch> | 2023-08-01 02:27:48 | 0 | 642 | SBMVNO |
76,808,065 | 2,813,606 | Treemap with variable # of categories (Plotly) | <p>I am working on a dashboard that uses a dataset for a treemap that looks like the following:</p>
<pre><code>import pandas as pd
import plotly.express as px
col1 = ['United States','Mexico','Canada','Argentina','France','Portugal','Italy','Spain','Brazil','Peru']
col2 = [4,6,6,9,10,3,20,15,16,2]
df = pd.DataFrame({
    'country': col1,
    'count': col2
})
</code></pre>
<p>I will be using a filter that will change the countries and counts. Here is some code that will produce the treemap:</p>
<pre><code>tree_fig = px.treemap(
    df,
    path=['country'],
    values='count',
    template='plotly_dark',
    color='country',
    color_discrete_map={
        df['country'][0]: '#626ffb',
        df['country'][1]: '#b064fc',
        df['country'][2]: '#ef563b',
        df['country'][3]: '#f45498',
        df['country'][4]: '#ff94fc',
        df['country'][5]: '#a8f064',
        df['country'][6]: '#24cce6',
        df['country'][7]: '#ffa45c',
        df['country'][8]: '#00cc96',
        df['country'][9]: '#fff000'
    }
)

tree_fig.update_traces(
    hovertemplate='Count=%{value}'
)

tree_fig.show()
</code></pre>
<p>This all works, but when using the filters there will be situations where I have fewer than 10 countries in the dataframe. How can I rewrite the <code>color_discrete_map</code> argument to account for a varying number of categories?</p>
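<p>For reference, the direction I am leaning towards (a sketch, building the mapping from a fixed palette so it works for any number of rows) is:</p>
<pre><code>palette = ['#626ffb', '#b064fc', '#ef563b', '#f45498', '#ff94fc',
           '#a8f064', '#24cce6', '#ffa45c', '#00cc96', '#fff000']

# Pair however many countries survive the filter with the palette,
# cycling if there are ever more categories than colors.
color_map = {country: palette[i % len(palette)]
             for i, country in enumerate(df['country'])}

tree_fig = px.treemap(
    df,
    path=['country'],
    values='count',
    template='plotly_dark',
    color='country',
    color_discrete_map=color_map
)
</code></pre>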
| <python><pandas><colors><plotly><treemap> | 2023-08-01 01:56:57 | 2 | 921 | user2813606 |
76,807,810 | 1,991,502 | Pylint complains about imports | <p>When I try to import Protocol from the typing module, I get the error "unable to import typing (import-error)". It's error E0401. For example, I have a file <code>test_import.py</code> with the single line</p>
<pre><code>from typing import Protocol
</code></pre>
<p>The code itself runs fine.</p>
<p>I'm using: Python version 3.10.11, pylint version 2.17.5</p>
<p>I run pylint from the command line on Windows as follows</p>
<pre><code>C:\Path\To\Python310\Scripts\pylint.exe test_import.py
</code></pre>
<p>or alternatively</p>
<pre><code>C:\Path\To\Python310\python.exe -m pylint test_import.py
</code></pre>
<p>I similarly run python on the command line</p>
<pre><code>C:\Path\To\Python310\python.exe test_import.py
</code></pre>
| <python><pylint> | 2023-08-01 00:21:02 | 0 | 749 | DJames |
76,807,795 | 632,943 | How can I use a spellchecker to add back a missing ñ? | <p>How can I use a spellcheck to correctly identify that a word is missing ñ?</p>
<p>I have tried to use <code>autocorrect</code>, but it will not detect that the ñ is missing</p>
<pre><code>from autocorrect import Speller
spell = Speller(lang='es')
print(spell('gatto'))   # gato
print(spell('ano'))     # ano
print(spell('manana'))  # manana
</code></pre>
<p>I have also tried <code>spellchecker</code>, but that does not detect that the word is spelt wrong</p>
<pre><code>from spellchecker import SpellChecker
spell = SpellChecker(language='es')
misspelled = ["gatto", "manana", "ano"]
misspelled = spell.unknown(misspelled)
for word in misspelled:
    print(word, spell.correction(word))

# Output:
# gatto gato
</code></pre>
| <python><spell-checking><autocorrect> | 2023-08-01 00:16:08 | 1 | 1,168 | chadb |
76,807,787 | 2,929,914 | Python Polars - how to build a supersession conversion table | <p>The scenario is as follow:</p>
<ul>
<li><p>At my store I sell items that can be replaced by other items (i.e. have a supersession). For example, until a certain date I may have had for sale the item 'A', which was eventually replaced by a new item 'B'.</p>
</li>
<li><p>These supersessions can happen successively. This means that 'A' can be replaced by 'B', which can be replaced by 'C', which can be replaced by 'D'.</p>
</li>
<li><p>More than one product can be replaced by a common new product. This means that 'B' can be replaced by 'C', but item 'K' can also be replaced by 'C'.</p>
</li>
</ul>
<p>Starting from a DataFrame with the old and the new items code, my goal is to add another column to this DataFrame with the latest item code of each item.</p>
<p>With the examples above, that means that starting from:</p>
<p><a href="https://i.sstatic.net/BIT2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BIT2n.png" alt="enter image description here" /></a></p>
<p>I need to end up with:</p>
<p><a href="https://i.sstatic.net/fEbxK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fEbxK.png" alt="enter image description here" /></a></p>
<p>Note: this question is not duplicated with <a href="https://stackoverflow.com/questions/70821516/power-bi-dynamically-build-a-supersession-conversion-table">this one</a>. That question I asked when I was trying to accomplish this in PowerBI. The question is now for Polars application.</p>
<p>Although I came up with a solution, I'm really not satisfied with it, because:</p>
<ul>
<li><p>I loop through all items on the DataFrame (number_of_items)² times.</p>
</li>
<li><p>As far as I'm aware it is not a good practice to loop through items in a DataFrame/Series using the for/while loop (Polars has tons of built-in methods to do this, so why can't I use one of them)?</p>
</li>
<li><p>Polars <a href="https://pola-rs.github.io/polars/py-polars/html/reference/series/api/polars.Series.set_at_idx.html" rel="nofollow noreferrer">documentation</a> says that Series.set_at_idx() (since renamed to scatter(), which I used in my solution) is frequently an anti-pattern, as it can block optimisation (predicate pushdown, etc).</p>
</li>
</ul>
<p>So, any idea how I can perform this task with a cleaner and more performant approach?</p>
<p>Follows my solution:</p>
<pre><code># Import Polars.
import polars as pl

# Create the sample DataFrame.
df = pl.DataFrame(
    {
        'Old_code': ['A', 'B', 'C', 'K'],
        'New_code': ['B', 'C', 'D', 'C']
    }
)

# Retrieve the New_code column.
new_PN_col = df.get_column('New_code')

# Checks if the old code appears on the new code column.
df = df.with_columns(
    pl.col('Old_code').is_in(new_PN_col).alias('Has_chained')
)

# Breaks the DataFrame into Series.
old_code_Series = df.get_column('Old_code')
new_code_Series = df.get_column('New_code')
has_chain = df.get_column('Has_chained')

# Retrieve the DataFrame height.
size = df.height

# Loop through all items.
for i in range(size):
    # Checks if the item has chained supersession.
    if has_chain[i]:
        # Retrieves the old and new item's code.
        old_1 = old_code_Series[i]
        new_1 = new_code_Series[i]
        # Loop through all items again.
        for k in range(size):
            # Finds where we need to replace the item code.
            if new_code_Series[k] == old_1:
                # Updates the item code.
                new_code_Series.scatter(k, new_1)

# Concat the original DataFrame with the updated Series horizontally.
df = pl.concat(
    [
        df,
        new_code_Series.rename('Last_new_code').to_frame()
    ],
    how='horizontal'
).select(pl.col('*').exclude('Has_chained'))
</code></pre>
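<p>For context, the loop-free direction I am experimenting with (a sketch of a join-until-fixed-point approach, assuming the supersession chains contain no cycles) is:</p>
<pre><code>import polars as pl

df = pl.DataFrame({
    'Old_code': ['A', 'B', 'C', 'K'],
    'New_code': ['B', 'C', 'D', 'C'],
})

out = df.with_columns(pl.col('New_code').alias('Last_new_code'))
while True:
    # Follow one supersession step for every row at once.
    stepped = out.join(
        df.select(['Old_code', 'New_code']),
        left_on='Last_new_code', right_on='Old_code', how='left',
    )
    if stepped['New_code_right'].null_count() == stepped.height:
        break  # no code chains any further
    out = stepped.with_columns(
        pl.coalesce('New_code_right', 'Last_new_code').alias('Last_new_code')
    ).drop('New_code_right')

print(out)  # A -> D, B -> D, C -> D, K -> D
</code></pre>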
| <python><python-polars> | 2023-08-01 00:13:04 | 1 | 705 | Danilo Setton |
76,807,753 | 4,067,676 | possible to specify platform dependent package data in pyproject.toml | <p>The title kind of says it all. If I have a pyproject.toml file:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.setuptools.package-data]
"somemodule.somepackage" = ["somedataforwindowsonly.ext"] # can I do this only for windows or linux or whatever?
</code></pre>
<p>can I specify the package data separately for each platform?</p>
| <python><setuptools><pyproject.toml> | 2023-08-01 00:00:41 | 0 | 3,849 | Vince W. |
76,807,734 | 14,546,482 | Not seeing sting "NULL" values go into Snowflake stored procedure from python connector | <p>I have a FAST API app that is connecting to snowflake via sqlalchemy session maker. The route is pointing to a dynamic stored procedure where the front-end react JS passes in data to express sever via a form which then passes data to fastapi.</p>
<p>What is happening is that if I dont pass in a value in the form then in snowflake that value is being passed in as a " " value inside my stored procedure. I need it to pass a NULL and not an empty string.</p>
<p>My current setup: my fast API endpoint is using the provided data to fill in a stored procedure and then executing the stored procedure and returning the results to the express server then to a react frontend table.</p>
<p>When a user doesn't provide a form field item, the data gets passed in as None to my FastAPI. I wrote a function that checks whether there is a value, and if there is NO VALUE, converts it to the string "NULL".</p>
<p>This is the logic for that process (<code>value</code> comes from a pydantic checker):</p>
<pre><code>value = value if value is not None else "NULL"
</code></pre>
<p>I then take all the values and construct the stored procedure query.</p>
<p>First a few noteworthy items:</p>
<p>I added the <code>.replace()</code> calls because I found that items weren't getting <code>' '</code> around them, <code>{ }</code> characters were getting placed inside the string value, and when NULL was entered it was really entered as 'NULL'. All of these scenarios caused syntax errors, so adding the replaces fixed them.</p>
<p>The hard-coded NULL at position 5 is a value that I don't want to send to Snowflake, so I just leave it as NULL. That NULL is appearing correctly in the final query!</p>
<pre><code>stored_procedure= f"CALL {environment}_DB.SCHEMA.PROCNAME(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % ({data1},{data2},{data3},{data4},"NULL",{data5},{data6},{data7},{data8})
stored_procedure = stored_procedure.replace('"', "'").replace("{", "").replace("}", "").replace("'NULL'", "NULL")
results = db.execute(stored_procedure)
## where db = db: Session = Depends(get_db)
</code></pre>
<p>Using the example where data1 = First Name</p>
<pre><code>data1 = "TIM" json is being sent to fast api
</code></pre>
<p>The stored procedure that's being constructed and being sent to Snowflake is:</p>
<pre><code>{environment}_DB.SCHEMA.PROCNAME('TIM','','','',NULL,'','','','')
</code></pre>
<p>I want/expect this to be passed to snowflake: (No '' values)</p>
<pre><code>{environment}_DB.SCHEMA.PROCNAME('TIM',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
</code></pre>
<p>The Snowflake query is also adding <code>ilike = ''</code> items to each null value, and I'm not sure why.</p>
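<p>For reference, the alternative I am starting to look at (a sketch, assuming the driver binds Python <code>None</code> as SQL <code>NULL</code>, which bound parameters normally do) is to stop building the string myself and bind parameters instead; the parameter names here are placeholders and the real call has more arguments:</p>
<pre><code>from sqlalchemy import text

stmt = text(f"CALL {environment}_DB.SCHEMA.PROCNAME(:p1, :p2, :p3, :p4, :p5)")
# None (not the string "NULL") becomes SQL NULL when bound.
params = {"p1": "TIM", "p2": None, "p3": None, "p4": None, "p5": None}
results = db.execute(stmt, params)
</code></pre>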
<p>Any help would be appreciated, thanks!</p>
| <python><reactjs><express><snowflake-cloud-data-platform><fastapi> | 2023-07-31 23:53:03 | 1 | 343 | aero8991 |
76,807,694 | 678,572 | Dependencies not installed in conda environment within a Docker container | <p>This exact Dockerfile was working 2 weeks ago and I can't figure out why it's not working now. Nothing has changed in the script.</p>
<p>Here's the <a href="https://github.com/jolespin/veba/blob/main/install/docker/Dockerfile" rel="nofollow noreferrer">Dockerfile</a></p>
<pre><code># v2023.6.6
# =================================
# Miniconda3
# =================================
FROM continuumio/miniconda3
ARG ENV_NAME
SHELL ["/bin/bash","-l", "-c"]
WORKDIR /root/
# Data
RUN mkdir -p /volumes/input
RUN mkdir -p /volumes/output
RUN mkdir -p /volumes/database
# Retrieve VEBA repository
RUN mkdir -p veba/
COPY ./install/ veba/install/
COPY ./src/ veba/src/
COPY ./VERSION veba/VERSION
COPY ./LICENSE veba/LICENSE
# Install Miniconda
RUN /opt/conda/bin/conda init bash && \
/opt/conda/bin/conda config --add channels jolespin && \
/opt/conda/bin/conda config --add channels bioconda && \
/opt/conda/bin/conda config --add channels conda-forge && \
/opt/conda/bin/conda update conda -y && \
# /opt/conda/bin/conda install -c conda-forge mamba -y && \ # Mamba adds about 450 MB to image
# /opt/conda/bin/mamba init bash && \
/opt/conda/bin/conda clean -afy
# =================================
# Add conda bin to path
ENV PATH /opt/conda/bin:$PATH
# Create environment
RUN conda env create -n ${ENV_NAME} -f veba/install/environments/${ENV_NAME}.yml
# RUN mamba env create -n ${ENV_NAME} -f veba/install/environments/${ENV_NAME}.yml
# Add environment scripts to environment bin
RUN /bin/bash veba/install/update_environment_scripts.sh veba/
# # Add contents to path
# ENV PATH /opt/conda/envs/${ENV_NAME}/bin:$PATH
# Set up environment
RUN echo "conda activate ${ENV_NAME}" >> ~/.bashrc
# Set entrypoint to bash
ENTRYPOINT ["bash", "-l", "-c"]
</code></pre>
<p>Here's my <a href="https://github.com/jolespin/veba/blob/main/install/environments/VEBA-preprocess_env.yml" rel="nofollow noreferrer">environment.yml</a> file</p>
<p>Last time I built this image, my Dockerfile built exactly as expected. Now it's not even installing pandas. I tried launching a Docker container and manually installing. No errors were thrown, but it didn't install any of the dependencies.</p>
<p>This is from within the Docker container in an interactive session:</p>
<pre><code>(base) root@c3be34361e98:~# conda env create -n test_env -f veba/install/environments/VEBA-preprocess_env.yml
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 23.5.0
latest version: 23.7.2
Please update conda by running
$ conda update -n base -c conda-forge conda
Or to minimize the number of packages updated during conda update use
conda install conda=23.7.2
Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate test_env
#
# To deactivate an active environment, use
#
# $ conda deactivate
(base) root@c3be34361e98:~# conda activate test_env
(test_env) root@c3be34361e98:~# conda env list
# conda environments:
#
base /opt/conda
test_env * /opt/conda/envs/test_env
(test_env) root@c3be34361e98:~# conda list
# packages in environment at /opt/conda/envs/test_env:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2023.7.22 hbcca054_0 conda-forge
dependencies 7.7.0 pyhd8ed1ab_0 conda-forge
ld_impl_linux-64 2.40 h41732ed_0 conda-forge
libexpat 2.5.0 hcb278e6_1 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 13.1.0 he5830b7_0 conda-forge
libgomp 13.1.0 he5830b7_0 conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libsqlite 3.42.0 h2797004_0 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libzlib 1.2.13 hd590300_5 conda-forge
ncurses 6.4 hcb278e6_0 conda-forge
openssl 3.1.1 hd590300_1 conda-forge
pip 23.2.1 pyhd8ed1ab_0 conda-forge
python 3.11.4 hab00c5b_0_cpython conda-forge
readline 8.2 h8228510_1 conda-forge
setuptools 68.0.0 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h27826a3_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
wheel 0.41.0 pyhd8ed1ab_0 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
</code></pre>
<p>My question:</p>
<p><strong>Why is my conda not installing the dependencies within the Docker container?</strong></p>
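<p>One detail I noticed while writing this up: <code>conda list</code> above shows a package literally named <code>dependencies</code> (version 7.7.0), which, as far as I understand, is what gets installed when the YAML structure makes <code>dependencies</code> parse as a package name rather than as the section header. A minimal well-formed file would look like this sketch (the names are placeholders):</p>
<pre><code>name: test_env
channels:
  - conda-forge
  - bioconda
dependencies:
  - python=3.11
  - pandas
</code></pre>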
| <python><bash><docker><conda><miniconda> | 2023-07-31 23:40:35 | 1 | 30,977 | O.rka |
76,807,671 | 3,261,292 | BeautifulSoup closes automatically html tags that are unclosed | <p>I have an issue with BeautifulSoup. Whenever I parse an HTML input, it closes HTML tags that weren't closed (e.g. <code><input></code>, or tags that weren't closed by mistake).</p>
<p>For example:</p>
<pre><code>from bs4 import BeautifulSoup
tags = BeautifulSoup('<span id="100" class="test">', "html.parser")
print(str(tags))
</code></pre>
<p>Prints:</p>
<pre><code><span id="100" class="test"></span>
</code></pre>
<p>My main goal here is to preserve the original shape of the HTML input after parsing it.</p>
<p>I found that it's possible by using <a href="https://stackoverflow.com/questions/53704600/beautifulsoup-parser-adds-unnecessary-closing-html-tags">"XML" parser</a> instead of "html.parser", but I am looking to solve this for "html.parser".</p>
| <python><html><parsing><beautifulsoup> | 2023-07-31 23:31:16 | 1 | 5,527 | Minions |
76,807,598 | 12,361,700 | Better having loops inside or outside tf.functions | <p>I don't quite see anything in the TF documentation regarding loops and performance, so I wanted to know which one is better:</p>
<ol>
<li>loop inside:</li>
</ol>
<pre><code>@tf.function
def f(iters):
    for i in tf.range(iters):
        ...

f(10)
</code></pre>
<ol start="2">
<li>loop outside:</li>
</ol>
<pre><code>@tf.function
def f():
    ...

for i in range(10):
    f()
</code></pre>
<p>In other words, will the conditional branching of the for loop hurt performance inside the graph?</p>
| <python><tensorflow> | 2023-07-31 23:06:47 | 0 | 13,109 | Alberto |
76,807,581 | 10,881,963 | How can I plot sequence segmentation results in a color plot? | <p>I am new to this field of sequence segmentation. What I want to do is something like this:
Given a temporal sequence (i.e. a video), I assign a label to each 1-second interval of this video along the time axis. I can have two lists:</p>
<pre><code>(1) GroundTruth_List = ['sit', 'sit', 'sit', 'order_coco', 'order_coco', 'drink_coco']
(2) Predicted_List = ['sit', 'sit', 'sit', 'order_coco', 'order_coco', 'drink_milk']
</code></pre>
<p>I hope to roughly visualize how many wrong predictions were made for this video.
After searching online, I think matplotlib might be the right direction.
Can someone provide some example code for visualizing the above two lists? For example, two horizontal bars representing the two lists: if the segment class is 'sit', we plot it in blue, but for other classes we plot the segment in a different color automatically.</p>
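<p>To make the goal concrete, here is the kind of plot I am after (a minimal sketch I drafted; colors per label are assigned automatically from a colormap):</p>
<pre><code>import matplotlib.pyplot as plt

gt = ['sit', 'sit', 'sit', 'order_coco', 'order_coco', 'drink_coco']
pred = ['sit', 'sit', 'sit', 'order_coco', 'order_coco', 'drink_milk']

# One fixed color per distinct label.
labels = sorted(set(gt) | set(pred))
cmap = plt.get_cmap('tab10')
colors = {label: cmap(i) for i, label in enumerate(labels)}

fig, ax = plt.subplots(figsize=(8, 2))
for row, seq in enumerate([gt, pred]):
    for t, label in enumerate(seq):
        # One colored 1-second block per label.
        ax.barh(row, width=1, left=t, color=colors[label], edgecolor='white')
ax.set_yticks([0, 1])
ax.set_yticklabels(['GroundTruth', 'Predicted'])
ax.set_xlabel('time (s)')
plt.show()
</code></pre>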
| <python><matplotlib><visualization><video-processing> | 2023-07-31 23:03:33 | 0 | 461 | Jim Wang |
76,807,462 | 10,634,126 | Using solved reCAPTCHA to scrape a redirect with Python Requests | <p>I am trying to scrape a page using Python Requests. <a href="https://isoms.lasallecounty.org/portal/Jail" rel="nofollow noreferrer">https://isoms.lasallecounty.org/portal/Jail</a> is intercepted with a reCAPTCHA page (<a href="https://isoms.lasallecounty.org/portal/imahuman" rel="nofollow noreferrer">https://isoms.lasallecounty.org/portal/imahuman</a>). In a manual test, upon completing this reCAPTCHA there is a POST using a <code>g-recaptcha-response</code> and a <code>__RequestVerificationToken</code> which redirects to the initial URL and allows a successful request.</p>
<p>I am using anti-captcha to successfully collect the <code>g-recaptcha-response</code> param and BeautifulSoup to collect the verification token, but I am unsure how to use these to then successfully collect the redirected jail page.</p>
<p>Including commented code below; the final request, whether made to the /imahuman or to the /Jail URL, and whether made with or without <code>allow_redirects=True</code>, fails.</p>
<pre><code># make an initial request and collect verification token
headers = {
    "Referer": "https://isoms.lasallecounty.org/portal/imahuman",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36"
}
session = requests.Session()
response = session.get("https://isoms.lasallecounty.org/portal/Jail") # redirects to /portal/imahuman
token = BeautifulSoup(response.text, "lxml").find("input", {"name": "__RequestVerificationToken"})["value"]
site_key = BeautifulSoup(response.text, "lxml").find("div", {"id": "recaptcha"})["data-sitekey"]
# anticaptcha solver
solver = recaptchaV3Proxyless()
solver.set_verbose(1)
solver.set_key(os.getenv("ANTICAPTCHA_API_KEY"))
solver.set_website_url("https://isoms.lasallecounty.org/portal/imahuman")
solver.set_website_key(site_key)
solver.set_min_score(0.9)
g_response = solver.solve_and_return_solution()
if g_response != 0:
    print("success!")
else:
    print("task finished with error " + solver.error_code)
# plug everything in params dict
data = {
    "g-recaptcha-response": g_response,
    "__RequestVerificationToken": token
}
# try to post imahuman page
response = session.post("https://isoms.lasallecounty.org/portal/imahuman", headers=headers, data=data, allow_redirects=True)
</code></pre>
| <python><web-scraping><http-redirect><python-requests><recaptcha> | 2023-07-31 22:27:26 | 1 | 909 | OJT |
76,807,440 | 3,261,292 | BeautifulSoup shuffles the attributes of html tags | <p>I have an issue with BeautifulSoup. Whenever I parse an HTML input, it changes the order of the attributes (e.g. class, id) of the HTML tags.</p>
<p>For example:</p>
<pre><code>from bs4 import BeautifulSoup
tags = BeautifulSoup('<span id="100" class="test"></span>', "html.parser")
print(str(tags))
</code></pre>
<p>Prints:</p>
<pre><code><span class="test" id="100"></span>
</code></pre>
<p>As you can see, the <code>class</code> and <code>id</code> order was changed. How can I prevent such behavior?</p>
<p>I am unfamiliar with web development, but I know that the order of the attributes doesn't matter.</p>
<p>My main goal here is to preserve the original shape of the HTML input after parsing it because I want to loop through the tags and match them (at character-level) with other HTML texts.</p>
| <python><html><parsing><beautifulsoup> | 2023-07-31 22:19:35 | 1 | 5,527 | Minions |
76,807,248 | 7,061,265 | Poetry add always takes 5 minutes to resolve dependencies | <p>I have a fairly large project, and every time I want to install a dependency it takes about 5 minutes on an M1 MacBook Pro. Can someone please help me with my constraints, and with debugging where the dependency resolver is spending its time?</p>
<p><em>pyproject.toml</em></p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.10"
streamlit = "^1.15.2"
openai = "0.27.0"
python-decouple = "^3.6"
requests = "^2.28.1"
glom = "^22.1.0"
parse = "^1.19.0"
html2text = "^2020.1.16"
pandas = "^2.0.1"
google-cloud-firestore = "^2.7.0"
replicate = "^0.4.0"
fastapi = "^0.85.0"
uvicorn = {extras = ["standard"], version = "^0.18.3"}
firebase-admin = "^6.0.0"
# mediapipe for M1 macs
mediapipe-silicon = {version = "^0.8.11", markers = "platform_machine == 'arm64'", platform = "darwin"}
# mediapipe for others
mediapipe = {version = "^0.8.11", markers = "platform_machine != 'arm64'"}
furl = "^2.1.3"
itsdangerous = "^2.1.2"
pytest = "^7.2.0"
google-cloud-texttospeech = "^2.12.1"
Wand = "^0.6.10"
readability-lxml = "^0.8.1"
transformers = "^4.24.0"
stripe = "^5.0.0"
python-multipart = "^0.0.5"
html-sanitizer = "^1.9.3"
plotly = "^5.11.0"
sentry-sdk = "^1.12.0"
httpx = "^0.23.1"
pyquery = "^1.4.3"
redis = "^4.5.1"
pytest-xdist = "^3.2.0"
requests-html = "^0.10.0"
pdftotext = "^2.2.2"
"pdfminer.six" = "^20221105"
google-api-python-client = "^2.80.0"
oauth2client = "^4.1.3"
tiktoken = "^0.3.2"
google-cloud-translate = ">=2.0.1,<2.1.0"
google-cloud-speech = "^2.21.0"
yt-dlp = "^2023.3.4"
llama-index = "^0.5.27"
nltk = "^3.8.1"
Jinja2 = "^3.1.2"
Django = "^4.2"
django-phonenumber-field = {extras = ["phonenumberslite"], version = "^7.0.2"}
gunicorn = "^20.1.0"
psycopg2-binary = "^2.9.6"
whitenoise = "^6.4.0"
black = "^23.3.0"
django-extensions = "^3.2.1"
pytest-django = "^4.5.2"
celery = "^5.3.1"
qrcode = "^7.4.2"
opencv-contrib-python = "^4.7.0.72"
numpy = "^1.25.0"
pyzbar = "^0.1.9"
gspread = "^5.10.0"
hashids = "^1.3.1"
langcodes = "^3.3.0"
language-data = "^1.1"
simplejson = "^3.19.1"
[tool.poetry.group.dev.dependencies]
watchdog = "^2.1.9"
ipython = "^8.5.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<pre><code>$ poetry --version
Poetry (version 1.5.1)
$ python --version
Python 3.10.5
</code></pre>
<pre><code>$ poetry add google-cloud-documentai
Using version ^2.18.0 for google-cloud-documentai
Updating dependencies
Resolving dependencies... (344.7s)
Package operations: 1 install, 9 updates, 0 removals
• Updating certifi (2022.12.7 -> 2023.5.7)
• Updating charset-normalizer (2.0.12 -> 3.2.0)
• Updating packaging (21.3 -> 23.1)
• Updating urllib3 (1.26.13 -> 1.26.16)
• Updating requests (2.27.1 -> 2.31.0)
• Updating cryptography (3.4.6 -> 41.0.2)
• Updating pygments (2.14.0 -> 2.15.1)
• Updating pyyaml (6.0 -> 6.0.1)
• Updating pyjwt (2.4.0 -> 2.8.0)
• Installing google-cloud-documentai (2.18.0)
Writing lock file
</code></pre>
| <python><python-poetry> | 2023-07-31 21:30:41 | 0 | 8,696 | Dev Aggarwal |
76,807,224 | 11,141,816 | sympy simplify function contained a bug in sign | <p>I was using sympy to verify some hand calculations; the version was from Anaconda:</p>
<pre><code>import sympy as sp
sp.__version__
'1.11.1'
</code></pre>
<p>running in a Jupyter notebook on Windows 11.</p>
<p>The code ran as follows</p>
<pre><code>from sympy import Function
eta = Function('eta')
from sympy import *
s,c,h=symbols('s c h',real=True)
A,B =symbols("A B")
q=exp(-2*pi*exp(s))
chi_0=eta(s)**(-1) * q**(0-(c-1)/24)*(1-q)
</code></pre>
<p>The following code</p>
<pre><code>print( chi_0.diff(s).subs((eta(s)**(-1)).diff(s),-B) .subs(eta(s)**(-1),A) .subs(s,0) )
</code></pre>
<p>produced the correct results</p>
<pre><code>print( chi_0.diff(s).subs((eta(s)**(-1)).diff(s),-B) .subs(eta(s)**(-1),A) .subs(s,0) )
-2*pi*A*(1/24 - c/24)*(1 - exp(-2*pi))*exp(-2*pi*(1/24 - c/24)) + 2*pi*A*exp(-2*pi)*exp(-2*pi*(1/24 - c/24)) - B*(1 - exp(-2*pi))*exp(-2*pi*(1/24 - c/24))
</code></pre>
<p>Notice the sign in the term <code>-2*pi*A*(1/24 - c/24)*(1 - exp(-2*pi))*exp(-2*pi*(1/24 - c/24))</code>. However, when I tried to cancel the <code>*exp(-2*pi*(1/24 - c/24))</code>,</p>
<pre><code>#bug
print( (( chi_0.diff(s).subs((eta(s)**(-1)).diff(s),-B) .subs(eta(s)**(-1),A) .subs(s,0) )/exp(pi*(c-1)/12) ).simplify() )

# Output:
# (-pi*A*(1 - exp(2*pi))*(c - 1)/12 + 2*pi*A + B*(1 - exp(2*pi)))*exp(-2*pi)
</code></pre>
<p>Notice that two of the terms <code>(-pi*A*(1 - exp(2*pi))*(c - 1)/12 </code> and <code>B*(1 - exp(2*pi)))*exp(-2*pi)</code> obtained the wrong sign, while the sign of the <code> 2*pi*A</code> remained correct.</p>
<p>Though an expansion statement could fix the result,</p>
<pre><code>print( (( chi_0.diff(s).subs((eta(s)**(-1)).diff(s),-B) .subs(eta(s)**(-1),A) .subs(s,0) ).expand()/exp(pi*(c-1)/12) ).simplify() )

# Output:
# -pi*A*c*exp(-2*pi)/12 + pi*A*c/12 - pi*A/12 + 25*pi*A*exp(-2*pi)/12 - B + B*exp(-2*pi)
</code></pre>
<p>it was not useful.</p>
<p>How did this happen?</p>
| <python><sympy><bug-reporting> | 2023-07-31 21:25:38 | 1 | 593 | ShoutOutAndCalculate |
76,807,205 | 6,743,506 | Preserving different color text when thresholding with OpenCV-Python | <p>I have an image with mostly light text on a dark background. Some of the text is a darker color (purple).</p>
<p>I'm using opencv-python to manipulate the image for better OCR parsing.</p>
<p>There is a little more processing that happens before this, but I feel like the processing steps giving me trouble are as follows.</p>
<p>The image gets converted to grayscale</p>
<p><a href="https://i.sstatic.net/n1ky9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n1ky9.png" alt="enter image description here" /></a></p>
<p><code>cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)</code></p>
<p>The image then gets inverted (this seems to keep the final text clearer)</p>
<p><a href="https://i.sstatic.net/p7K5X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p7K5X.png" alt="enter image description here" /></a></p>
<p><code>cv2.bitwise_not(img)</code></p>
<p>The image then gets run through threhold</p>
<p><a href="https://i.sstatic.net/02aJ2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/02aJ2.png" alt="enter image description here" /></a></p>
<p><code>cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]</code></p>
<p>You can see I'm totally losing the darker text. Switching to an adaptive threshold does preserve the text better but creates a ton of noise (the background appears flat black but is not).</p>
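<p>For reference, the direction I am considering (a sketch; the HSV bounds are guesses for the purple and would need tuning) is to combine the Otsu result with a saturation mask, so colored pixels are kept as text regardless of their gray level:</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread('input.png')

# Otsu on the inverted grayscale keeps the light text, as before.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
inverted = cv2.bitwise_not(gray)
otsu = cv2.threshold(inverted, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Separately mask saturated (colored) pixels so the purple text survives.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
color_mask = cv2.inRange(hsv, np.array([0, 80, 40]), np.array([179, 255, 255]))

# Force colored pixels to text (black) in the thresholded result.
combined = cv2.bitwise_and(otsu, cv2.bitwise_not(color_mask))
</code></pre>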
<p>Any thoughts on how I can modify my current thresholding to preserve that darker text?</p>
| <python><opencv><image-processing><ocr><image-thresholding> | 2023-07-31 21:19:20 | 1 | 1,754 | Bryant Makes Programs |
76,806,920 | 425,895 | Invalid parameter 'logisticregression' for estimator Pipeline. GridSearchCV and ColumnTransformer | <p>I'm trying to perform a GridSearchCV including a pipeline.
I want to impute and standardize the numerical variables, and just impute the categorical ones.</p>
<p>I've tried to do it like this:</p>
<pre><code>numeric_cols = X_train.select_dtypes(include=['float64', 'int']).columns.to_list()
cat_cols = X_train.select_dtypes(include=['object', 'category']).columns.to_list()

numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='most_frequent'))])

preprocessor = ColumnTransformer(
    transformers=[
        ('numeric', numeric_transformer, numeric_cols),
        ('cat', categorical_transformer, cat_cols)
    ],
    remainder='passthrough',
    verbose_feature_names_out=False
).set_output(transform="pandas")

completo = Pipeline(steps=[
    ("preprocessor", preprocessor),
    ("classifier", LogisticRegression(solver='liblinear', penalty='l2', max_iter=100, class_weight='balanced'))
])

params = dict(logisticregression__C=np.logspace(-4, 16, 21, base=1.5))

grid_search_lr2 = GridSearchCV(completo, params, scoring='roc_auc')
results_lr2 = grid_search_lr2.fit(X_train, y_train)
</code></pre>
<p>But it produces an error:</p>
<blockquote>
<p>Invalid parameter 'logisticregression' for estimator
Pipeline(steps=[('preprocessor',
ColumnTransformer(remainder='passthrough',...</p>
</blockquote>
<p>What is the proper way to do it?
Maybe I don't need to use ColumnTransformer, or maybe it's a problem with the way I'm introducing the LogisticRegression.</p>
<p>I don't have any problem when I use a simpler pipeline (the same for all data types) and make_pipeline.</p>
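<p>While writing this up, I started to suspect the grid keys must use the step names I gave the <code>Pipeline</code> ("preprocessor" and "classifier"), not the lowercase class name that <code>make_pipeline</code> would generate. The renaming I plan to test (sketch):</p>
<pre><code># The step is named "classifier" in the Pipeline above, so the
# parameter key is "classifier__C" rather than "logisticregression__C".
params = {'classifier__C': np.logspace(-4, 16, 21, base=1.5)}

grid_search_lr2 = GridSearchCV(completo, params, scoring='roc_auc')
results_lr2 = grid_search_lr2.fit(X_train, y_train)
</code></pre>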
| <python><pandas><scikit-learn><pipeline> | 2023-07-31 20:23:29 | 1 | 7,790 | skan |
76,806,742 | 1,312,850 | Create Mock class with values taken from a dictionary | <p>I have a dictionary whose keys match the attributes of a class that I want to mock.
How can I create a Mock class from this dict?</p>
<p>For example, let's say I have the class</p>
<pre><code>class User:
    def __init__(self, amount, enabled):
        self.amount = amount
        self.enabled = enabled
</code></pre>
<p>Now, I would like to create a Mock</p>
<pre><code>values1 = {'amount': 1, 'enabled': False}
values2 = {'amount': 2, 'enabled': True}

user_mock1 = MagicMock(spec=User)
user_mock2 = MagicMock(spec=User)
</code></pre>
<p>How can I create the mocks from the values of the dicts?</p>
<p>I have a dict with a lot of values, so I do not want to do this:</p>
<pre><code>user_mock1.amount = values1['amount']
</code></pre>
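<p>The closest one-liner I have found so far is this sketch (as far as I know, <code>Mock</code> applies extra keyword arguments as attributes via <code>configure_mock</code>):</p>
<pre><code>user_mock1 = MagicMock(spec=User, **values1)
user_mock2 = MagicMock(spec=User, **values2)

assert user_mock1.amount == 1
assert user_mock2.enabled is True
</code></pre>
<p>Is this the idiomatic way, or is there something better?</p>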
| <python><mocking> | 2023-07-31 19:51:05 | 1 | 902 | Gonzalo |
76,806,718 | 16,635,269 | Cannot query Azure Data Warehouse tables using Python in a GitHub action | <h1>Background</h1>
<p>I am able to query various tables from my company's Azure Data Warehouse locally in Visual Studio Code with the following function:</p>
<pre class="lang-py prettyprint-override"><code>def query_all_from_table(self, database, table):
connection_string = (
'DRIVER=SQL Server;'
'SERVER=azr-warehouse;'
f'DATABASE={database};'
f'UID={os.getenv("AZURE_USER")};'
f'PWD={os.getenv("AZURE_PW")};'
'TRUSTED_CONNECTION=YES;'
'Connection Timeout=60;'
)
try:
# Connect to Azure Data Warehouse
conn = pyodbc.connect(connection_string)
# Query all
sql_query = f"SELECT * FROM {table}"
# Execute the query
data = pd.read_sql(sql_query, conn)
conn.close()
return data
except for pyodbc.Error as e:
print('Error connecting to Azure Data Warehouse:', e)
return pd.DataFrame()
</code></pre>
<p>Where <code>AZURE_USER</code> and <code>AZURE_PW</code> are my company SSO credentials.</p>
<h1>Problem</h1>
<p>When I try to use this function in a Github action, I get the following error:</p>
<pre><code>Error connecting to Azure Data Warehouse: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'SQL Server' : file not found (0) (SQLDriverConnect)")
</code></pre>
<p>Below is my YML file for the action:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Update
on:
push:
branches:
- main
schedule:
- cron: '0 8 * * *'
workflow_dispatch: # Allow manual triggering from GitHub UI
jobs:
run_query:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Install ODBC Driver for SQL Server
run: sudo apt-get install -y unixodbc-dev odbcinst odbcinst1debian2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.x
- name: Install dependencies
run: pip install -r requirements.txt
- name: Run Python script
env:
AZURE_USER_HAGEN: ${{ secrets.AZURE_USER_HAGEN }}
AZURE_PW_HAGEN: ${{ secrets.AZURE_PW_HAGEN }}
FIREBASE_CREDS: ${{ secrets.FIREBASE_CREDS }}
run: python update.py
</code></pre>
<h1>My Attempts at Solution</h1>
<ul>
<li>tried using <code>ODBC Driver 17 for SQL Server</code> as the driver name, but I get a timeout error</li>
</ul>
<pre><code>Error connecting to Azure Data Warehouse: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
</code></pre>
<ul>
<li>tried appending <code>.database.windows.net</code> to the server name, but I get the same timeout error (see the install sketch after this list)</li>
</ul>
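<p>For reference, the install step I plan to try next (a sketch based on Microsoft's documented Ubuntu instructions; 'SQL Server' as a driver name only exists on Windows, so a Linux runner needs <code>msodbcsql17</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Install ODBC Driver 17 for SQL Server
  run: |
    curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
    curl https://packages.microsoft.com/config/ubuntu/22.04/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
    sudo apt-get update
    sudo ACCEPT_EULA=Y apt-get install -y msodbcsql17 unixodbc-dev
</code></pre>
<p>Though even with the driver installed, the timeout with <code>ODBC Driver 17 for SQL Server</code> makes me suspect the runner simply cannot reach <code>azr-warehouse</code>, which looks like an internal hostname.</p>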
| <python><azure><github-actions> | 2023-07-31 19:46:48 | 1 | 301 | Fruity Fritz |
76,806,672 | 9,194,957 | 'function' object has no attribute 'encode' error using AWS SES + Python SMTP | <p>I am trying to send an email using AWS SES from Python's smtplib, but I am receiving this error.</p>
<pre><code>'function' object has no attribute 'encode'
</code></pre>
<p>the email sending looks like this</p>
<pre><code>import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import common.config_constants as constants


class Email_util:
    def __init__(self) -> None:
        self.smtp_server = 'email-smtp.us-east-1.amazonaws.com'
        self.smtp_port = constants.SMTP_PORT
        self.workmail_user = constants.SMTP_USER
        self.workmail_pass = constants.SMTP_PASS
        self.sender = constants.FORGOT_PASSWORD_EMAIL_HANDLER

    def send_email(self, receiver, subject, body):
        msg = MIMEMultipart()
        msg['From'] = self.send_email
        msg['To'] = receiver
        msg['Subject'] = subject

        # Attach the email body
        msg.attach(MIMEText(body, 'plain'))

        try:
            # Establish a connection to Amazon WorkMail SMTP server
            server = smtplib.SMTP(self.smtp_server, self.smtp_port)
            server.starttls()
            # Login to Amazon WorkMail
            server.login(self.workmail_user, self.workmail_pass)
            # Send the email
            server.sendmail(self.sender, receiver, msg.as_string())
            print("Email sent successfully!")
            return True
        except Exception as e:
            print(f"Error sending email: {e}")
            return False
</code></pre>
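<p>One line that looks suspicious to me while re-reading this: <code>msg['From'] = self.send_email</code> assigns the bound method (a function) rather than the sender address, which would explain the <code>'function' object has no attribute 'encode'</code> message. The fix I am about to test (sketch):</p>
<pre><code>msg['From'] = self.sender  # not self.send_email, which is the method itself
</code></pre>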
<p>Any help is appreciated. Thanks in advance!</p>
| <python><amazon-web-services><email><smtp><amazon-ses> | 2023-07-31 19:36:58 | 1 | 461 | Biswas Sampad |
76,806,639 | 15,547,292 | Python/VS Code: How to achieve IDE highlighting & auto-completion for namespaces dynamically created using `globals()` | <p>In Python, I am dynamically creating a namespace by modifying <a href="https://docs.python.org/3.8/library/functions.html#globals" rel="nofollow noreferrer"><code>globals()</code></a>.</p>
<p>For some background, I'm masking/filtering another file to get rid of some unwanted members and wrap foreign functions with a mutex. The masked file contains auto-generated bindings for a relatively large C library that is not thread-compatible.</p>
<p>However, this confuses VS code syntax highlighting and auto-completion: the wrapper file's members, which are added dynamically on import time, are not recognized, and the namespace is seen as empty.</p>
<p>As a workaround, I thought about filtering the original namespace in place rather than masking it, but would prefer to preserve it as-is so we can still access the unmodified namespace if desired. Also, this way the IDE's namespace info would still be incorrect: dynamic changes would not be taken into account and excluded members would still be shown.</p>
<p>As a last resort, supposedly I could write some code to generate a PEP 484 stub file (<code>.pyi</code>), but don't like the idea of another dynamic file that is effectively unnecessary if only the IDE were smarter.</p>
<p>Is there an option to make VS code agnostic of <code>globals()</code>, or any other workaround to make it recognize the dynamic members?</p>
| <python><visual-studio-code><highlight><completion> | 2023-07-31 19:28:58 | 0 | 2,520 | mara004 |
76,806,544 | 4,414,359 | 'PGTypeCompiler' object has no attribute 'visit_SUPER' | <p>Very little hope here :)
But I'm getting this error when trying to upload a pandas table with a JSON column, as described in this other SO post:
<a href="https://stackoverflow.com/questions/68940899/how-to-push-dict-column-into-redshift-super-type-column-using-pandas-to-sql">How to push `dict` column into Redshift SUPER type column using `pandas.to_sql`?</a></p>
<p><code>AttributeError: 'PGTypeCompiler' object has no attribute 'visit_SUPER'</code></p>
<p>I tried googling the error and it came back with all of two results, so I guess I'm like the third person ever to run into this.
But if anyone by chance knows anything about this, I'd super appreciate any help.</p>
<pre><code>from psycopg2.extensions import register_adapter
from psycopg2.extras import Json
from sqlalchemy_redshift import dialect
from sqlalchemy import create_engine, types
import pandas as pd
import json
register_adapter(dict, Json)
register_adapter(list, Json)
RS_creds = {}
RS_creds['host'] = %env RS_DATA_HOST
RS_creds['user'] = %env RS_DATA_USER
RS_creds['pass'] = %env RS_DATA_PASS
RS_creds['port'] = %env RS_DATA_PORT
RS_creds['db'] = %env RS_DATA_DB
test = pd.DataFrame([[1, json.dumps({"a": "A1", "b": "B1"})], [2, json.dumps({"a": "A2", "b": "B2"})]])
test.columns = ['x', 'y']
test_dict_types = {'x': types.INTEGER(), 'y': dialect.SUPER()}
url = f"postgresql://{RS_creds['user']}:{RS_creds['pass']}@{RS_creds['host']}:5439/{RS_creds['db']}"
engine = create_engine(url)
test.to_sql('test', engine, schema='test', index=False, dtype=test_dict_types)
</code></pre>
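<p>One thing I noticed while writing this up: the engine URL starts with <code>postgresql://</code>, so SQLAlchemy compiles with the plain Postgres dialect, which would have no <code>visit_SUPER</code>. The URL I plan to try instead (a sketch, assuming sqlalchemy-redshift registers the <code>redshift+psycopg2</code> dialect name):</p>
<pre><code>url = f"redshift+psycopg2://{RS_creds['user']}:{RS_creds['pass']}@{RS_creds['host']}:5439/{RS_creds['db']}"
engine = create_engine(url)
</code></pre>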
<p>traceback:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py:77, in _generate_compiler_dispatch.<locals>._compiler_dispatch(self, visitor, **kw)
76 try:
---> 77 meth = getter(visitor)
78 except AttributeError as err:
AttributeError: 'PGTypeCompiler' object has no attribute 'visit_SUPER'
The above exception was the direct cause of the following exception:
UnsupportedCompilationError Traceback (most recent call last)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:4534, in DDLCompiler.visit_create_table(self, create, **kw)
4533 try:
-> 4534 processed = self.process(
4535 create_column, first_pk=column.primary_key and not first_pk
4536 )
4537 if processed is not None:
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:499, in Compiled.process(self, obj, **kwargs)
498 def process(self, obj, **kwargs):
--> 499 return obj._compiler_dispatch(self, **kwargs)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py:82, in _generate_compiler_dispatch.<locals>._compiler_dispatch(self, visitor, **kw)
81 else:
---> 82 return meth(self, **kw)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:4568, in DDLCompiler.visit_create_column(self, create, first_pk, **kw)
4566 return None
-> 4568 text = self.get_column_specification(column, first_pk=first_pk)
4569 const = " ".join(
4570 self.process(constraint) for constraint in column.constraints
4571 )
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/base.py:2734, in PGDDLCompiler.get_column_specification(self, column, **kwargs)
2733 else:
-> 2734 colspec += " " + self.dialect.type_compiler.process(
2735 column.type,
2736 type_expression=column,
2737 identifier_preparer=self.preparer,
2738 )
2739 default = self.get_column_default_string(column)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:533, in TypeCompiler.process(self, type_, **kw)
532 def process(self, type_, **kw):
--> 533 return type_._compiler_dispatch(self, **kw)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py:79, in _generate_compiler_dispatch.<locals>._compiler_dispatch(self, visitor, **kw)
78 except AttributeError as err:
---> 79 return visitor.visit_unsupported_compilation(self, err, **kw)
81 else:
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:536, in TypeCompiler.visit_unsupported_compilation(self, element, err, **kw)
535 def visit_unsupported_compilation(self, element, err, **kw):
--> 536 util.raise_(
537 exc.UnsupportedCompilationError(self, element),
538 replace_context=err,
539 )
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/util/compat.py:211, in raise_(***failed resolving arguments***)
210 try:
--> 211 raise exception
212 finally:
213 # credit to
214 # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
215 # as the __traceback__ object creates a cycle
UnsupportedCompilationError: Compiler <sqlalchemy.dialects.postgresql.base.PGTypeCompiler object at 0x162ac95a0> can't render element of type SUPER (Background on this error at: https://sqlalche.me/e/14/l7de)
The above exception was the direct cause of the following exception:
CompileError Traceback (most recent call last)
Cell In[38], line 1
----> 1 test.to_sql('test', \
2 engine, \
3 schema = 'wbx_data_persistent', \
4 index = False, \
5 if_exists = 'replace', \
6 dtype = test_dict_types)
File /opt/homebrew/lib/python3.10/site-packages/pandas/core/generic.py:2987, in NDFrame.to_sql(self, name, con, schema, if_exists, index, index_label, chunksize, dtype, method)
2830 """
2831 Write records stored in a DataFrame to a SQL database.
2832
(...)
2983 [(1,), (None,), (2,)]
2984 """ # noqa:E501
2985 from pandas.io import sql
-> 2987 return sql.to_sql(
2988 self,
2989 name,
2990 con,
2991 schema=schema,
2992 if_exists=if_exists,
2993 index=index,
2994 index_label=index_label,
2995 chunksize=chunksize,
2996 dtype=dtype,
2997 method=method,
2998 )
File /opt/homebrew/lib/python3.10/site-packages/pandas/io/sql.py:695, in to_sql(frame, name, con, schema, if_exists, index, index_label, chunksize, dtype, method, engine, **engine_kwargs)
690 elif not isinstance(frame, DataFrame):
691 raise NotImplementedError(
692 "'frame' argument should be either a Series or a DataFrame"
693 )
--> 695 return pandas_sql.to_sql(
696 frame,
697 name,
698 if_exists=if_exists,
699 index=index,
700 index_label=index_label,
701 schema=schema,
702 chunksize=chunksize,
703 dtype=dtype,
704 method=method,
705 engine=engine,
706 **engine_kwargs,
707 )
File /opt/homebrew/lib/python3.10/site-packages/pandas/io/sql.py:1728, in SQLDatabase.to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, dtype, method, engine, **engine_kwargs)
1678 """
1679 Write records stored in a DataFrame to a SQL database.
1680
(...)
1724 Any additional kwargs are passed to the engine.
1725 """
1726 sql_engine = get_engine(engine)
-> 1728 table = self.prep_table(
1729 frame=frame,
1730 name=name,
1731 if_exists=if_exists,
1732 index=index,
1733 index_label=index_label,
1734 schema=schema,
1735 dtype=dtype,
1736 )
1738 total_inserted = sql_engine.insert_records(
1739 table=table,
1740 con=self.connectable,
(...)
1747 **engine_kwargs,
1748 )
1750 self.check_case_sensitive(name=name, schema=schema)
File /opt/homebrew/lib/python3.10/site-packages/pandas/io/sql.py:1631, in SQLDatabase.prep_table(self, frame, name, if_exists, index, index_label, schema, dtype)
1619 raise ValueError(f"The type of {col} is not a SQLAlchemy type")
1621 table = SQLTable(
1622 name,
1623 self,
(...)
1629 dtype=dtype,
1630 )
-> 1631 table.create()
1632 return table
File /opt/homebrew/lib/python3.10/site-packages/pandas/io/sql.py:838, in SQLTable.create(self)
836 raise ValueError(f"'{self.if_exists}' is not valid for if_exists")
837 else:
--> 838 self._execute_create()
File /opt/homebrew/lib/python3.10/site-packages/pandas/io/sql.py:824, in SQLTable._execute_create(self)
821 def _execute_create(self):
822 # Inserting table into database, add to MetaData object
823 self.table = self.table.to_metadata(self.pd_sql.meta)
--> 824 self.table.create(bind=self.pd_sql.connectable)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/schema.py:962, in Table.create(self, bind, checkfirst)
960 if bind is None:
961 bind = _bind_or_error(self)
--> 962 bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/engine/base.py:3228, in Engine._run_ddl_visitor(self, visitorcallable, element, **kwargs)
3226 def _run_ddl_visitor(self, visitorcallable, element, **kwargs):
3227 with self.begin() as conn:
-> 3228 conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/engine/base.py:2211, in Connection._run_ddl_visitor(self, visitorcallable, element, **kwargs)
2204 def _run_ddl_visitor(self, visitorcallable, element, **kwargs):
2205 """run a DDL visitor.
2206
2207 This method is only here so that the MockConnection can change the
2208 options given to the visitor so that "checkfirst" is skipped.
2209
2210 """
-> 2211 visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py:524, in ExternalTraversal.traverse_single(self, obj, **kw)
522 meth = getattr(v, "visit_%s" % obj.__visit_name__, None)
523 if meth:
--> 524 return meth(obj, **kw)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py:895, in SchemaGenerator.visit_table(self, table, create_ok, include_foreign_key_constraints, _is_metadata_operation)
891 if not self.dialect.supports_alter:
892 # e.g., don't omit any foreign key constraints
893 include_foreign_key_constraints = None
--> 895 self.connection.execute(
896 # fmt: off
897 CreateTable(
898 table,
899 include_foreign_key_constraints= # noqa
900 include_foreign_key_constraints, # noqa
901 )
902 # fmt: on
903 )
905 if hasattr(table, "indexes"):
906 for index in table.indexes:
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1380, in Connection.execute(self, statement, *multiparams, **params)
1376 util.raise_(
1377 exc.ObjectNotExecutableError(statement), replace_context=err
1378 )
1379 else:
-> 1380 return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py:80, in DDLElement._execute_on_connection(self, connection, multiparams, params, execution_options)
77 def _execute_on_connection(
78 self, connection, multiparams, params, execution_options
79 ):
---> 80 return connection._execute_ddl(
81 self, multiparams, params, execution_options
82 )
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1469, in Connection._execute_ddl(self, ddl, multiparams, params, execution_options)
1465 schema_translate_map = exec_opts.get("schema_translate_map", None)
1467 dialect = self.dialect
-> 1469 compiled = ddl.compile(
1470 dialect=dialect, schema_translate_map=schema_translate_map
1471 )
1472 ret = self._execute_context(
1473 dialect,
1474 dialect.execution_ctx_cls._init_ddl,
(...)
1478 compiled,
1479 )
1480 if self._has_events or self.engine._has_events:
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:503, in ClauseElement.compile(self, bind, dialect, **kw)
498 url = util.preloaded.engine_url
499 dialect = url.URL.create(
500 self.stringify_dialect
501 ).get_dialect()()
--> 503 return self._compiler(dialect, **kw)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py:32, in _DDLCompiles._compiler(self, dialect, **kw)
28 def _compiler(self, dialect, **kw):
29 """Return a compiler appropriate for this ClauseElement, given a
30 Dialect."""
---> 32 return dialect.ddl_compiler(dialect, self, **kw)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:464, in Compiled.__init__(self, dialect, statement, schema_translate_map, render_schema_translate, compile_kwargs)
462 if self.can_execute:
463 self.execution_options = statement._execution_options
--> 464 self.string = self.process(self.statement, **compile_kwargs)
466 if render_schema_translate:
467 self.string = self.preparer._render_schema_translates(
468 self.string, schema_translate_map
469 )
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:499, in Compiled.process(self, obj, **kwargs)
498 def process(self, obj, **kwargs):
--> 499 return obj._compiler_dispatch(self, **kwargs)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py:82, in _generate_compiler_dispatch.<locals>._compiler_dispatch(self, visitor, **kw)
79 return visitor.visit_unsupported_compilation(self, err, **kw)
81 else:
---> 82 return meth(self, **kw)
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py:4544, in DDLCompiler.visit_create_table(self, create, **kw)
4542 first_pk = True
4543 except exc.CompileError as ce:
-> 4544 util.raise_(
4545 exc.CompileError(
4546 util.u("(in table '%s', column '%s'): %s")
4547 % (table.description, column.name, ce.args[0])
4548 ),
4549 from_=ce,
4550 )
4552 const = self.create_table_constraints(
4553 table,
4554 _include_foreign_key_constraints=create.include_foreign_key_constraints, # noqa
4555 )
4556 if const:
File /opt/homebrew/lib/python3.10/site-packages/sqlalchemy/util/compat.py:211, in raise_(***failed resolving arguments***)
208 exception.__cause__ = replace_context
210 try:
--> 211 raise exception
212 finally:
213 # credit to
214 # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
215 # as the __traceback__ object creates a cycle
216 del exception, replace_context, from_, with_traceback
CompileError: (in table 'test', column 'y'): Compiler <sqlalchemy.dialects.postgresql.base.PGTypeCompiler object at 0x162ac95a0> can't render element of type SUPER
</code></pre>
| <python><sqlalchemy><amazon-redshift><psycopg2> | 2023-07-31 19:08:12 | 1 | 1,727 | Raksha |
76,806,414 | 4,414,359 | Why doesn't sqlalchemy_redshift work in my Jupyter? | <p>I have a feeling it's not happy with the Python version, but I'm not sure how to change it. I went into the Kernel options and there is a venv there, but it's also 3.10.</p>
<p><a href="https://i.sstatic.net/f7tRh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f7tRh.png" alt="enter image description here" /></a></p>
| <python><jupyter-notebook><sqlalchemy> | 2023-07-31 18:49:48 | 1 | 1,727 | Raksha |
76,806,323 | 19,838,445 | Make sure an abstract method is a coroutine when implemented | <p>First of all, I'm aware <a href="https://stackoverflow.com/questions/47555934/how-require-that-an-abstract-method-is-a-coroutine">it is developers' responsibility</a> to make sure you define your method as a coroutine when implementing a child class:</p>
<pre class="lang-py prettyprint-override"><code>class MyBase(ABC):
@abstractclassmethod
def the_coroutine(self):
"""
I want this to be a coroutine
"""
class ImplBaseA(MyBase):
async def the_coroutine(self):
return "awaited"
class ImplBaseB(MyBase):
def the_coroutine(self):
# some condition that happens quite often
if True:
raise ValueError("From the first glance I was even awaited")
return "not the coroutine"
</code></pre>
<p>But how do I prevent this issue from occurring in the calling code?</p>
<pre class="lang-py prettyprint-override"><code>await a.the_coroutine()
# When inspecting the code it seems like it does the right thing
await b.the_coroutine()
# and when raising exception it really does
</code></pre>
<p>Should I use mypy or some similar tool?
What's the pythonic way of making sure the implementation is a coroutine (and not a regular function)?</p>
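<p>To make the question concrete, here is one direction I've been toying with: a runtime check in <code>__init_subclass__</code> using <code>inspect.iscoroutinefunction</code>. This is just a sketch, and I don't know whether it is considered pythonic:</p>
<pre class="lang-py prettyprint-override"><code>import inspect
from abc import ABC

class MyBase(ABC):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Reject implementations that are not declared with `async def`
        if not inspect.iscoroutinefunction(cls.the_coroutine):
            raise TypeError(f"{cls.__name__}.the_coroutine must be a coroutine function")
</code></pre>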
| <python><python-3.x><python-asyncio><mypy><abc> | 2023-07-31 18:33:52 | 1 | 720 | GopherM |
76,806,308 | 13,764,814 | openpyxl table ref using matrix dimensions | <p>If I have a matrix (2D list) that I am writing to an Excel doc, how can I state the table ref in A1 cell notation using the dimensions of the array?</p>
<p>for example:</p>
<pre class="lang-py prettyprint-override"><code>headers = ['first', 'second', 'third']
data = [
[1,2,3],
[1,2,3]
]
</code></pre>
<p>I know that the headers span A1:C1 and the table spans rows 1:3, so I can deduce that the table ref is A1:C3. But for a much larger matrix, for example a table with range A1:CD75, I am not sure how to properly calculate that range knowing only the dimensions of the array.</p>
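<p>For reference, this is the kind of helper I'm imagining, built on <code>openpyxl.utils.get_column_letter</code> and assuming the table always starts at A1 (untested sketch):</p>
<pre class="lang-py prettyprint-override"><code>from openpyxl.utils import get_column_letter

headers = ['first', 'second', 'third']
data = [
    [1, 2, 3],
    [1, 2, 3]
]

n_cols = len(headers)
n_rows = len(data) + 1  # +1 for the header row
ref = f"A1:{get_column_letter(n_cols)}{n_rows}"
print(ref)  # A1:C3
</code></pre>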
| <python><openpyxl> | 2023-07-31 18:31:29 | 1 | 311 | Austin Hallett |
76,806,290 | 9,983,652 | why this try except doesn't work when renaming pandas column? | <p>I am trying to rename one of the pandas columns, using try/except for the cases with different column names.</p>
<p>Here is the data before using try and except.</p>
<pre><code>df_data=pd.read_csv(file_GG,sep='\t')
print('data_early')
print(df_data)
df_data=df_data.rename(columns={'MD':'WellMD'})
df_data
data_early
MD Shallow_Res_ohmm
0 1031.7 8.14
1 1031.8 10.04
2 1031.9 10.11
WellMD Shallow_Res_ohmm
0 1031.7 8.14
1 1031.8 10.04
2 1031.9 10.11
3 1032.0 7.61
4 1032.1 5.12
</code></pre>
<p>Now I'd like to rename the other column, which might have different names, so I use try and except:</p>
<pre><code>try:
df_data=df_data.rename(columns={'Deep_Res_ohmm':'new_name'})
except:
print('testing different column name')
df_data=df_data.rename(columns={'Shallow_Res_ohmm':'new_name'})
print('df_after')
print(df_data)
df_after
WellMD Shallow_Res_ohmm
0 1031.7 8.14
1 1031.8 10.04
2 1031.9 10.11
</code></pre>
<p>So you can see the code never goes to the except section and the column name remains the same.</p>
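<p>To illustrate what confuses me, here is a minimal sketch of the behaviour I'm seeing: <code>rename</code> seems to silently ignore mapping keys that don't match any column, instead of raising:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Shallow_Res_ohmm': [8.14, 10.04]})

# No 'Deep_Res_ohmm' column exists, yet no exception is raised;
# the frame is returned unchanged, so the except branch never runs.
out = df.rename(columns={'Deep_Res_ohmm': 'new_name'})
print(out.columns.tolist())  # ['Shallow_Res_ohmm']
</code></pre>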
| <python><pandas> | 2023-07-31 18:29:40 | 1 | 4,338 | roudan |
76,806,209 | 16,011,842 | Trouble with decrypting from PHP openssl_encrypt with Python pyca/cryptography | <p>I need to encrypt a message in PHP and then later decrypt it with a Python script. The problem is that the decryption is not working (or maybe I'm doing the encryption wrong).</p>
<p>Here is the PHP test script.</p>
<pre><code><?php
$pass = "password";
$salt = "salt";
$data = "here is a test message";
$cipher = "aes-256-cbc";
$options = OPENSSL_RAW_DATA;
$rnds = 1024;
$klen = 64;
$key = hash_pbkdf2("sha256", $pass, $salt, $rnds, $klen, FALSE);
echo("Key = '" . $key . "'<br>");
$iv = openssl_random_pseudo_bytes(16);
$enc = openssl_encrypt($data, $cipher, $key, $options, $iv);
echo("IV = '" . base64_encode($iv) . "'<br>");
echo("Encrypted = '" . base64_encode($enc) . "'<br>");
?>
</code></pre>
<p>The output will be different each time but here is an example output</p>
<pre><code>Key = '231afb7dcd2e860cfd58ab13372bd12c923076c3598a121960320f6fec8a5698'
IV = 'zBSQYEZ5p10COkbRC9O32Q=='
Encrypted = 'tLMg1MXNGwPgVdPIOiKwkS8WVVWpcAiZUT8FlSL8LO8='
</code></pre>
<p>And here is the Python test script using the above values.</p>
<pre><code>import base64
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
kdf = PBKDF2HMAC(
algorithm=hashes.SHA256(),
length=32,
salt=b'salt',
iterations=1024
)
key = kdf.derive(b'password')
print(key.hex()) #231afb7dcd2e860cfd58ab13372bd12c923076c3598a121960320f6fec8a5698
iv = base64.b64decode('zBSQYEZ5p10COkbRC9O32Q==')
enc = base64.b64decode('tLMg1MXNGwPgVdPIOiKwkS8WVVWpcAiZUT8FlSL8LO8=')
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
decryptor = cipher.decryptor()
plain = decryptor.update(enc) + decryptor.finalize()
print(plain)
#b'\x0f"\xb1\x99\x00\xd8\x06\xe3\xe1))\xb06\xab\x83\x8d\x89\t\x014\x8a\xfb\x9b\xb6\x0e\ns\xd9\xe2\x97\xf9n'
</code></pre>
<p>The PBKDF2 key derivation outputs match between the two scripts, and so does the IV, obviously. But there seems to be something basic I'm overlooking, because it's not all that complicated code.</p>
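<p>One sanity check I ran on the Python side (sketch): since aes-256-cbc expects a 32-byte key, I compared the length of the raw key bytes with the length of their hex representation, in case one side is actually using the hex string itself as the key:</p>
<pre><code>key_hex = '231afb7dcd2e860cfd58ab13372bd12c923076c3598a121960320f6fec8a5698'
print(len(bytes.fromhex(key_hex)))  # 32 raw bytes
print(len(key_hex))                 # 64 characters as an ASCII hex string
</code></pre>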
| <python><php> | 2023-07-31 18:19:09 | 0 | 329 | barneyAgumble |
76,806,165 | 4,414,359 | ProgrammingError: (psycopg2.errors.UndefinedObject) type "json" does not exist | <p>I'm trying to upload a pandas table with a json column into redshift. Here is a simplified version.</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.types import JSON
import json
import pandas as pd
RS_creds = {}
RS_creds['host'] = %env RS_DATA_HOST
RS_creds['user'] = %env RS_DATA_USER
RS_creds['pass'] = %env RS_DATA_PASS
RS_creds['port'] = %env RS_DATA_PORT
RS_creds['db'] = %env RS_DATA_DB
test = pd.DataFrame([[1, json.dumps({"a": "A1", "b": "B1"})], [2, json.dumps({"a": "A2", "b": "B2"})]])
test.columns = ['x', 'y']
url = f"postgresql://{RS_creds['user']}:{RS_creds['pass']}@{RS_creds['host']}:5439/{RS_creds['db']}"
conn = create_engine(url)
test.to_sql('test', conn, schema = 'test', index = False, dtype={'y': JSON})
</code></pre>
<p>This throws an error:</p>
<pre><code>ProgrammingError: (psycopg2.errors.UndefinedObject) type "json" does not exist
[SQL:
CREATE TABLE test.test (
x BIGINT,
y JSON
)
]
</code></pre>
<p>My actual table is uploaded from MySQL, and I tried adding things like</p>
<p><code>from psycopg2.extras import Json</code></p>
<p>but no luck.</p>
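<p>As a fallback I can get the upload to work by storing the serialized JSON as plain text (sketch below, using <code>sqlalchemy.types.Text</code>), but I'd prefer a real JSON-ish column type if Redshift supports one:</p>
<pre><code>from sqlalchemy.types import Text

test.to_sql('test', conn, schema='test', index=False, dtype={'y': Text})
</code></pre>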
| <python><sql><json><pandas><amazon-redshift> | 2023-07-31 18:13:08 | 1 | 1,727 | Raksha |
76,806,042 | 3,516,936 | RetinaFace python package gives ValueError: The channel dimension of the inputs should be defined | <pre><code>from retinaface import RetinaFace
RetinaFace.detect_faces('img.jpg') # this line gives following error
RetinaFace.build_model() # this also gives same error
</code></pre>
<p>I get</p>
<pre><code>ValueError: The channel dimension of the inputs should be defined. The input_shape
received is (None, None, None, 9), where axis -3 (0-based) is the channel dimension,
which found to be 'None'.
</code></pre>
<p>Tensorflow version is 2.9.1 and python is 3.7/3.10</p>
<p>RetinaFace version is 0.0.13</p>
<p>The same function works in Google Colab but not in my Jupyter notebook.</p>
| <python><deep-learning><computer-vision><valueerror><face-detection> | 2023-07-31 17:51:42 | 0 | 4,498 | Neo |
76,805,975 | 713,200 | How to access a variable in python script from another python script class avoiding circular module import? | <p>Here is the basic structure of my code</p>
<pre><code>#master_la.py
global dname
class CommonSetup:
    def get_device(self):
dname="BNV023"
</code></pre>
<p>And I have the script where I want to access the variable <code>dname</code></p>
<pre><code>#salve_la.py
from foo.bar import master_la
class LaUtils:
name = master_la.dname
</code></pre>
<p>This code is not working. How can I resolve this? With this code I'm getting an error like:</p>
<pre><code>AttributeError: module 'foo.bar.master_la' has no attribute 'dname'
</code></pre>
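<p>For what it's worth, the only variant I could get to work is rebinding the module-level name with <code>global</code> inside the method (sketch below), but I'd like to understand why my original version fails:</p>
<pre><code>#master_la.py
dname = None  # module-level default, so the attribute always exists

class CommonSetup:
    def get_device(self):
        global dname  # rebind the module-level name, not a local
        dname = "BNV023"
</code></pre>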
| <python><python-3.x><global-variables> | 2023-07-31 17:41:31 | 1 | 950 | mac |
76,805,967 | 11,898,085 | SQL insert into error: parameters are of unsupported type | <p>I'm writing my first script with <code>sqlite</code> and have not been able to insert a fictional observation into the data with <code>insert into</code>. Here is my attempt:</p>
<pre class="lang-py prettyprint-override"><code>from pandas import read_csv, read_sql_query
from pathlib import Path
from sqlite3 import connect
Path('database.db').touch()
conn = connect('database.db')
c = conn.cursor()
c.execute(
'CREATE TABLE IF NOT EXISTS spiders ('
'record_id INT, month INT, day INT, year INT,'
'plot_id INT, species_id CHAR(2), sex CHAR(1), hindfoot_length FLOAT,'
' weight FLOAT'
')'
)
data = read_csv('surveys.csv')
data = data.dropna()
data.to_sql('spiders', conn, if_exists='append', index=False)
query_1 = read_sql_query('SELECT * FROM spiders', conn)
print('The first five rows of the data:\n')
print(query_1.head())
c.execute(
'INSERT INTO spiders ('
'record_id, month, day, year, plot_id, species_id, sex, '
'hindfoot_length, weight'
') VALUES (0, 7, 31, 2023, 2, "DM", "F", 99.0, 60.0)', conn
)
</code></pre>
<p>Here is the output with traceback:</p>
<pre><code>The first five rows of the data:
record_id month day year plot_id species_id sex hindfoot_length weight
0 63 8 19 1977 3 DM M 35.0 40.0
1 64 8 19 1977 7 DM M 37.0 48.0
2 65 8 19 1977 4 DM F 34.0 29.0
3 66 8 19 1977 4 DM F 35.0 46.0
4 67 8 19 1977 7 DM M 35.0 36.0
Traceback (most recent call last):
File ~/mambaforge/envs/spyder-env/lib/python3.11/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
exec(code, globals, locals)
File ~/Desktop/data/sql_demo.py:38
c.execute(
ProgrammingError: parameters are of unsupported type
</code></pre>
<p>What is wrong in my script?</p>
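<p>My understanding (possibly wrong) is that the second positional argument to <code>execute</code> is reserved for query parameters rather than a connection, as in this sketch:</p>
<pre class="lang-py prettyprint-override"><code>c.execute(
    'INSERT INTO spiders ('
    'record_id, month, day, year, plot_id, species_id, sex, '
    'hindfoot_length, weight'
    ') VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)',
    (0, 7, 31, 2023, 2, 'DM', 'F', 99.0, 60.0)
)
</code></pre>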
| <python><sql><pandas><sqlite> | 2023-07-31 17:39:48 | 1 | 936 | jvkloc |
76,805,911 | 3,851,407 | `ModbusSerialClient` from `pymodbus.client.sync` vs. `pymodbus.client` | <p>I've been using pymodbus to communicate with a temperature sensor. Previously, I used:</p>
<pre class="lang-py prettyprint-override"><code>from pymodbus.client.sync import ModbusSerialClient
</code></pre>
<p>and everything worked fine.</p>
<p>After a recent update, this was no longer possible (<code>pymodbus.client.sync</code> seems to have been removed, and I was getting an import error) and I had to switch to:</p>
<pre class="lang-py prettyprint-override"><code>from pymodbus.client import ModbusSerialClient
</code></pre>
<p>The import now works, but none of my previous commands do. I've just spent the last 3 hours scouring the documentation and can't work out <em>why</em> this no longer works. I can communicate with my device using <code>minimalmodbus</code>, so I am confident that the device is working fine.</p>
<p>What have I missed? Any ideas?!</p>
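<p>In case it helps, this is roughly the 2.x-style usage my code was built around (a sketch from memory, so the port and register addresses are placeholders):</p>
<pre class="lang-py prettyprint-override"><code>from pymodbus.client.sync import ModbusSerialClient  # no longer importable

client = ModbusSerialClient(method='rtu', port='/dev/ttyUSB0',
                            baudrate=9600, timeout=1)
client.connect()
result = client.read_holding_registers(0, 2, unit=1)  # 'unit' kwarg, 2.x style
client.close()
</code></pre>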
| <python><serial-port><pymodbus> | 2023-07-31 17:29:30 | 1 | 4,107 | oscarbranson |
76,805,746 | 6,315,736 | Python numpy np.setdiff1d giving error "TypeError: Cannot compare structured arrays unless they have a common dtype." | <p>I have been searching for this error but did not find anything related to np.setdiff1d with a type error. It would really help if you could let me know why this error occurs and how I can resolve it. Below is my sample code snippet:</p>
<pre><code>import pandas as pd
import numpy as np
data1 = {'a' : [32,156], 'b' :[56,177]}
data2 = {'c' : [12,32,12,45,32,45], 'd' :[11,56,76,43,44,45], 'e': [111,156,176,143,144,145], 'f':[411,456,476,443,444,445] }
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
## converting to array
npdf1= df1.to_records(index=False)
npdf2= df2.to_records(index=False)
diff = np.setdiff1d(npdf1,npdf2[['c','e']])
# Above line gave error "TypeError: Cannot compare structured arrays unless they have a common dtype. I.e. `np.result_type(arr1, arr2)` must be defined."
# npdf1 >> gives below
# rec.array([( 32, 56), (156, 177)],
# dtype=[('a', '<i8'), ('b', '<i8')])
# npdf2[['c','e']] >> gives below
# rec.array([(12, 111), (32, 156), (12, 176), (45, 143), (32, 144),
# (45, 145)],
# dtype={'names': ['c', 'e'], 'formats': ['<i8', '<i8'], 'offsets': [0, 16], 'itemsize': 32})
## Above the format is matching i8 but still not sure why the error.
## So as a work round I thought to converted the record arrays to normal numpy arrays
npdf1 = np.array(npdf1)
df2a = df2[['c','e']]
npdf2a = df2a.to_records(index=False)
npdf2a = np.array(npdf2a)
diff = np.setdiff1d(npdf1,npdf2a)
# Still get the error "TypeError: Cannot compare structured arrays unless they have a common dtype. I.e. `np.result_type(arr1, arr2)` must be defined."
</code></pre>
| <python><arrays><dataframe><numpy><typeerror> | 2023-07-31 17:03:09 | 1 | 729 | user1412 |
76,805,676 | 19,130,803 | How to replace loop with "if" and "elif" but without "else" by list comprehension | <p>I am trying to use list comprehension for the code below:</p>
<pre><code># Input
values = [1, 2, 3, 4]
</code></pre>
<pre><code># Using for loop
ans = []
for value in values:
if value == 1:
ans.append(value + value)
elif value == 4:
ans.append(value * value)
print(f"Ans: {ans}")
</code></pre>
<pre><code># Using list comprehension
ans = [
(value + value)
if value == 1
else (value * value) if value == 4
for value in values
]
print(f"Ans: {ans}")
</code></pre>
<p>But getting error for list comprehension as below:</p>
<blockquote>
<p>SyntaxError: expected 'else' after 'if' expression</p>
</blockquote>
<p>I tried defining the <code>else</code> part like below, but it did not work:</p>
<pre><code>else pass
else continue
</code></pre>
<p>The output should be <code>[2, 16]</code>. Other values are skipped.</p>
<p>What am I missing?</p>
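<p>To be explicit about the shape I'm after: a comprehension that both transforms and filters. Something like this sketch seems to give <code>[2, 16]</code> when I try it, but I'm unsure whether it's the idiomatic way:</p>
<pre><code>ans = [value + value if value == 1 else value * value
       for value in values
       if value in (1, 4)]
print(f"Ans: {ans}")  # Ans: [2, 16]
</code></pre>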
| <python> | 2023-07-31 16:53:00 | 2 | 962 | winter |
76,805,568 | 2,505,650 | Extract list from 'script type="text/json' in HTML file with Python | <p>I have HTML files containing the following structures:</p>
<pre><code><script type="text/json" placeholder-data="1">
{"name":"john","args":{"id":989,"hobbies":["gardening","boxing","guitar","hunting"],"is_rookie":false}}
</script>
</code></pre>
<p>I am only interested in the list named 'hobbies'. How can I extract it with Python?</p>
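<p>For reference, this is the direction I started sketching, assuming the <code>placeholder-data="1"</code> attribute uniquely identifies the tag (untested, and <code>page.html</code> is a placeholder file name):</p>
<pre><code>import json
from bs4 import BeautifulSoup

with open('page.html') as f:
    soup = BeautifulSoup(f, 'html.parser')

tag = soup.find('script', attrs={'placeholder-data': '1'})
data = json.loads(tag.string)
hobbies = data['args']['hobbies']
print(hobbies)  # ['gardening', 'boxing', 'guitar', 'hunting']
</code></pre>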
| <python><json><web-scraping><beautifulsoup> | 2023-07-31 16:37:51 | 2 | 1,381 | user2505650 |
76,805,550 | 206,253 | How to encapsulate pandas plotting in a function? | <p>I want to produce a number of graphs of a time series focusing on different periods.</p>
<p>Here is a small sample of my data:</p>
<pre><code>data = {'CAT': {0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
5: 1.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
10: 1.0,
11: 1.0},
'month': {0: 201401,
1: 201402,
2: 201403,
3: 201404,
4: 201405,
5: 201406,
6: 201407,
7: 201408,
8: 201409,
9: 201410,
10: 201411,
11: 201412},
'year': {0: '2014',
1: '2014',
2: '2014',
3: '2014',
4: '2014',
5: '2014',
6: '2014',
7: '2014',
8: '2014',
9: '2014',
10: '2014',
11: '2014'},
'appl': {0: 0,
1: 1,
2: 2,
3: 3,
4: 0,
5: 1,
6: 2,
7: 3,
8: 0,
9: 1,
10: 2,
11: 3}}
</code></pre>
<p>I am using this code</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.dates as mdates

merged_df_2014 = sample[sample.year == '2014']
tmp = (sample.groupby(['CAT', pd.to_datetime(sample['month'], format='%Y%m')])
['appl'].mean().reset_index(name='probability of application')
)
ax = sns.lineplot(data=tmp, x='month', y='probability of application', hue='CAT')
ax.tick_params(axis='x', rotation=45)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
</code></pre>
<p>to produce this graph:
<a href="https://i.sstatic.net/N3FBL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N3FBL.png" alt="enter image description here" /></a></p>
<p>I wanted to encapsulate this into a function:</p>
<pre><code>def plot_year (df, y):
df = df[df.year == y]
tmp = (df.groupby(['CAT', pd.to_datetime(df['month'], format='%Y%m')])
['appl'].mean().reset_index(name='probability of application')
)
ax = sns.lineplot(data=tmp, x='month', y='probability of application', hue='CAT')
ax.tick_params(axis='x', rotation=45)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
return ax
</code></pre>
<p>so that I can call it with different parameters and plot a slightly different graph (e.g. by supplying a different year I should plot a different part of the time series).</p>
<p>but when I tried to call this function like this:</p>
<pre><code>plot_year (sample, 2014)
</code></pre>
<p>or this</p>
<pre><code>ax = plot_year (sample, 2014)
</code></pre>
<p>I am getting an empty plot with a wrong period on the x axis:</p>
<p><a href="https://i.sstatic.net/x1LrJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x1LrJ.png" alt="enter image description here" /></a></p>
<p>How can I properly encapsulate and call this graphing function so that I can use the same code with different parameters to produce different graphs?</p>
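<p>One detail that may matter (sketch of a quick check): in my data the <code>year</code> column holds strings, while in the function call I passed the integer <code>2014</code>:</p>
<pre><code>print(sample.year.dtype)              # object (strings)
print((sample.year == 2014).any())    # False, so the filtered frame is empty
print((sample.year == '2014').any())  # True
</code></pre>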
| <python><pandas><plot> | 2023-07-31 16:34:42 | 1 | 3,144 | Nick |
76,805,527 | 21,420,742 | How to create a new column that gets count by groupby in pandas | <p>I have a dataset with employee history, including Name, ID number, Job, Manager, Manager ID, and Termed (1 = termed, 0 = active). Here is a sample.</p>
<p>df =</p>
<pre><code>Name Emp_ID Job Manager Manager_ID Termed
Adam 100 Sales Steve 103 0
Beth 101 Sales Steve 103 0
Rick 102 Tech John 106 0
Steve 103 Sales Mgr. Lisa 110 0
Drake 104 Tech John 106 1
Sarah 105 Sales Steve 103 1
John 106 Tech Mgr. Rodger 122 0
Mike 107 Sales Steve 103 1
</code></pre>
<p>I would like to find out how many terminated employees (<code>Termed</code> = 1) each manager has.
The code I tried was <code>df['term_count'] = df.groupby('Manager_ID')['Termed'].sum()</code>.
When I try this code I get <strong>NaN</strong> values and I'm not sure why. This is the desired output.</p>
<p>df =</p>
<pre><code>Name Emp_ID Job Manager Manager_ID Termed term_count
Adam 100 Sales Steve 103 0 0
Beth 101 Sales Steve 103 0 0
Rick 102 Tech John 106 0 0
Steve 103 Sales Mgr. Lisa 110 0 2
Drake 104 Tech John 106 1 0
Sarah 105 Sales Steve 103 1 0
John 106 Tech Mgr. Rodger 122 0 1
Mike 107 Sales Steve 103 1 0
</code></pre>
<p>Any suggestions would be great thank you!!</p>
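<p>For what it's worth, the direction I've been experimenting with is mapping the per-manager sums back onto each employee's own ID (sketch; I'm not sure this is the right approach). My suspicion is that my original assignment produced NaN because the groupby result is indexed by <code>Manager_ID</code>, not by the frame's row index:</p>
<pre><code>term_by_mgr = df.groupby('Manager_ID')['Termed'].sum()
df['term_count'] = df['Emp_ID'].map(term_by_mgr).fillna(0).astype(int)
</code></pre>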
| <python><pandas><dataframe><group-by> | 2023-07-31 16:31:15 | 2 | 473 | Coding_Nubie |
76,805,513 | 10,574,250 | Assign multiple columns in pd.assign using dictionary comprehension | <p>I am trying to take the difference between column pairs and create new columns, named with the column name plus '_diff', using pd.assign and a dictionary comprehension.</p>
<p>A sample df looks like this:</p>
<pre><code>df
A B C D E F
0 2 1 3 5 2 2
1 3 4 5 6 3 5
</code></pre>
<p>My mapping of which columns takes which difference looks like this:</p>
<pre><code>column_mapping = {
'A': 'B',
'C': 'D',
'E': 'F'}
</code></pre>
<p>I have tried to create a kwargs dictionary comprehension for an assign method as such:</p>
<pre><code>kwargs = {key+'_diff': lambda df: eval(f"(df['{key}'] - df['{value}']) / df['{key}']") for key, value in zip(column_mapping.keys(), column_mapping.values())}
</code></pre>
<p>I also tried</p>
<pre><code>kwargs = {key+'_diff': lambda df: (df[key] - df[value]) / df[key] for key, value in zip(column_mapping.keys(), column_mapping.values())}
</code></pre>
<p>This gives a mapping of the lambda functions, which I pass to .assign:</p>
<pre><code>df.assign(**kwargs)
</code></pre>
<p>This does work, however it produces all the diff columns with different names but the exact same numbers, which are the difference of columns E and F:</p>
<p>The resulting df looks like this:</p>
<pre><code>df
A B C D E F a_diff c_diff e_diff
0 2 1 3 5 2 2 0 0 0
1 3 4 5 6 3 5 -2 -2 -2
</code></pre>
<p>I think this should be possible and feel I am close, but I believe every lambda ends up evaluating with the same (last) mapping pair instead of its own. Could someone please point out what I am doing wrong here?</p>
<p>If anything is unclear please let me know.</p>
<p>Thanks</p>
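<p>My current suspicion is the classic late-binding closure issue, so I also tried binding the loop variables as default arguments (sketch below), which seems to give per-column results, though I don't fully understand why the original fails:</p>
<pre><code>kwargs = {
    key + '_diff': (lambda df, k=key, v=value: (df[k] - df[v]) / df[k])
    for key, value in column_mapping.items()
}
df = df.assign(**kwargs)
</code></pre>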
| <python><pandas><lambda><assign> | 2023-07-31 16:28:57 | 2 | 1,555 | geds133 |
76,805,455 | 7,231,968 | Dataframe Pandas: make new column by partial string match based on columns of another dataframe | <p>I have two dataframes</p>
<p><code>df1</code> looks like this (relevant columns):</p>
<pre><code> brand product_line size profile
0 winter hdw2 21.r LINEN
1 stone r294 31x5 WAXY
2 han th22 3t NaN
3 winter trjj 5t LINEN
4 stone ael2 2d LINEN
</code></pre>
<p>I have another dataframe with many columns; the relevant columns are shown below:</p>
<pre><code>df2
</code></pre>
<pre><code> NAME PRODUCT_LINE SIZE MAKE
0 winter+ hdw2 21.r 0.2
1 stoneas r294 31x5 0.5
2 han th22 3t 3
3 winter trjj 5t 1
4 stone ael2 2d 34
</code></pre>
<p>Now basically the columns in the first dataframe map to <code>NAME</code>, <code>PRODUCT_LINE</code> and <code>SIZE</code> respectively. The columns in both dataframes have been passed through <code>str.lower()</code> and <code>str.replace(" ", "")</code>.</p>
<p>Now I want the <code>MAKE</code> column in the first dataframe such that this condition is true for all the rows. I can do it row by row, but it's taking a lot of time. Is there a way to do it quickly on the entire column without looping through the dataframe? I want an efficient solution.</p>
<p>Below is how I was doing it in a loop:</p>
<pre><code>spec = df2[(df2['NAME'].str.lower().str.replace(' ', '').str.contains(row.brand.lower().replace(' ', '')))
    & (df2['PRODUCT_LINE'].str.lower().str.replace(' ', '').str.contains(row.product_line.lower().replace(' ', '')))
& (df2['SIZE'].str.lower().str.replace(' ', '').str.contains(row.size.lower().replace(' ', ''))) ]
</code></pre>
<p>So basically, when the above condition is true for a row, it should fetch the corresponding MAKE from <code>df2</code> and place it in <code>df1</code>. Also, this should only happen when the <code>profile</code> column in <code>df1</code> is not <code>NaN</code>.</p>
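<p>The vectorised shape I've been sketching (untested; the <code>'index'</code> helper column comes from <code>reset_index</code> and is my own addition) is a cross join followed by one pass of containment checks, keeping the first match per row:</p>
<pre><code>import pandas as pd

cand = df1.reset_index().merge(df2, how='cross')
mask = (
    pd.Series([b in n for b, n in zip(cand['brand'], cand['NAME'])])
    & pd.Series([p in q for p, q in zip(cand['product_line'], cand['PRODUCT_LINE'])])
    & pd.Series([s in t for s, t in zip(cand['size'], cand['SIZE'])])
)
best = cand[mask].drop_duplicates('index').set_index('index')['MAKE']
df1.loc[df1['profile'].notna(), 'MAKE'] = df1.index.to_series().map(best)
</code></pre>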
| <python><pandas><dataframe><numpy> | 2023-07-31 16:19:33 | 1 | 323 | SunAns |
76,805,338 | 10,027,628 | GitHub Action Configuration for Pyproject.toml in Subfolder | <p>I am trying to get my pytest suite running on pushes to my repository. The issue I am facing is that I have my <em>pyproject.toml</em> file in the subfolder app and not in the root directory. How can I move to the folder app before the step <em>Install dependencies</em>? I hope this would fix the issues.</p>
<p>This is my GitHub Action workflow.yaml file:</p>
<pre><code># .github/workflows/app.yaml
name: PyTest
on: push
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Switch to Current Branch
run: git checkout ${{ env.BRANCH }}
# Setup Python (faster than using Python container)
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: "3.x"
# install & configure poetry
- name: Install Poetry
uses: snok/install-poetry@v1
with:
virtualenvs-create: true
virtualenvs-in-project: true
installer-parallel: true
# load cached venv if cache exists
- name: Load cached venv
id: cached-poetry-dependencies
uses: actions/cache@v3
with:
path: .venv
key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
# install dependencies if cache does not exist
- name: Install dependencies
if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
run: poetry install --no-interaction --no-root
# install root project
- name: Install project
run: poetry install --no-interaction
# run test suite
- name: Run tests
run: |
export PYTHONPATH=$PWD
poetry run pytest -v -cov=pp_from_python
</code></pre>
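<p>What I've been experimenting with (sketch, not yet verified) is a job-level default working directory, so that every <code>run</code> step executes inside <code>app</code>:</p>
<pre><code>jobs:
  test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: app
</code></pre>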
| <python><pytest><github-actions> | 2023-07-31 16:04:18 | 1 | 377 | Christoph H. |
76,805,337 | 9,318,323 | Python send email using Graph API and Office365-REST-Python-Client | <p>I am trying to send an email using Graph API and Python. I tried doing it with graph explorer and it worked. I found this example: <a href="https://github.com/vgrem/Office365-REST-Python-Client#working-with-outlook-api" rel="nofollow noreferrer">https://github.com/vgrem/Office365-REST-Python-Client#working-with-outlook-api</a></p>
<pre><code>from office365.graph_client import GraphClient
client = GraphClient(acquire_token_func)
client.me.send_mail(
subject="Meet for lunch?",
body="The new cafeteria is open.",
to_recipients=["fannyd@contoso.onmicrosoft.com"]
).execute_query()
</code></pre>
<p>Here's my code:</p>
<pre><code>import msal
dict_ = {'client_id': 'foo', 'secret': 'bar', 'tenant_id': 'etc'}
def acquire_token():
authority_url = f'https://login.microsoftonline.com/{dict_["tenant_id"]}'
app = msal.ConfidentialClientApplication(
authority=authority_url,
client_id=dict_["client_id"],
client_credential=dict_["secret"]
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
return token
from office365.graph_client import GraphClient
client = GraphClient(acquire_token)
client.me.send_mail(
subject="Meet for lunch?",
body="The new cafeteria is open.",
to_recipients=['elon.musk@company.com']
).execute_query()
</code></pre>
<p>Even though it's exactly like in the example I still get:</p>
<pre><code>TypeError: send_mail() got an unexpected keyword argument 'subject'
</code></pre>
<p>Can you help me fix this or provide a different way of sending an email?</p>
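<p>In case it changes the answer: I'd also be fine with calling the Graph REST endpoint directly with the token I already acquire, something like this sketch (<code>sender@company.com</code> is a placeholder; my understanding is that <code>/me</code> doesn't apply to the client-credentials flow, so I target a specific mailbox):</p>
<pre><code>import requests

token = acquire_token()['access_token']
payload = {
    "message": {
        "subject": "Meet for lunch?",
        "body": {"contentType": "Text", "content": "The new cafeteria is open."},
        "toRecipients": [{"emailAddress": {"address": "elon.musk@company.com"}}],
    },
    "saveToSentItems": "true",
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/users/sender@company.com/sendMail",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
</code></pre>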
| <python><azure><microsoft-graph-api><office365><microsoft-graph-mail> | 2023-07-31 16:04:16 | 2 | 354 | Vitamin C |
76,805,316 | 125,244 | Plotly : Keep areas between lines filled | <p>I have a Pandas dataframe containing 10 columns and I want to plot 3 of those columns using plotly.express</p>
<p>If 2 or 3 lines are plotted I want the area between them to be filled with a transparent color (rgba-value with alpha being low).
If just 1 line plotted I don't want the plot to be filled to zero-axis.</p>
<p>My problem is that I can fill the area between the HIGH and LOW lines and it remains filled if I remove MIDDLE, but it goes wrong otherwise.</p>
<pre><code># Draw aRange using a dataframe containing a, aHigh, aLow
HIGH = "rgb(255,0,0,0)"
LOW = "rgb(0,0,255)"
MIDDLE = "rgb(0,255,0)"
FILL = "rgba(255, 172, 167, 0.2)"
maxRange = df.aHigh.max() # Maximum price within aHigh
minRange = df.aLow[df.aLow>0].min() # Minimum price within aLow skipping None and zero values
extraRange = 0.05 * (maxRange - minRange) # Take 5% extra space for the range above and below
fig = go.Figure()
fig.add_trace(go.Scatter(x=df.date, y=df.aHigh, mode='lines', name='High', line_color=HIGH,
fill=None))
fig.add_trace(go.Scatter(x=df.date, y=df.aLow, mode='lines', name='Low', line_color=LOW,
fill='tonexty', fillcolor=FILL))
fig.add_trace(go.Scatter(x=df.date, y=df.a, mode='lines', name='Middle', line_color=MIDDLE,
fill=None))
fig.update_layout(title = "Ticker", xaxis_title = 'Date', yaxis_title = 'Price',
yaxis_range = [minRange - extraRange, maxRange + extraRange])
fig.update_yaxes(fixedrange=False)
fig.show()
</code></pre>
<p>The pictures show various results after hiding one of the lines.
The second and third pictures show undesired results; the first and fourth pictures are OK.</p>
<p>How can I achieve what I want?</p>
<p><a href="https://i.sstatic.net/bVZHu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bVZHu.png" alt="All OK" /></a></p>
<p><a href="https://i.sstatic.net/VEmhM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VEmhM.png" alt="After hiding HIGH" /></a></p>
<p><a href="https://i.sstatic.net/Lw8eX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lw8eX.png" alt="After hiding LOW" /></a>
<a href="https://i.sstatic.net/kMibG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kMibG.png" alt="After hiding MIDDLE (OK)" /></a></p>
| <python><plotly><scatter-plot> | 2023-07-31 16:02:11 | 1 | 1,110 | SoftwareTester |
76,805,262 | 5,013,084 | altair: boxplot with one observation | <p>I am working on a dashboard for a project where I am receiving updated data on a daily basis (meaning that some of the dataframes are only poorly populated).</p>
<p>I would like to visualize the different groups via boxplots, but I noticed that during the beginning of the project, sometimes the underlying dataframes only contain one entry.</p>
<p>Here is my question: if I am visualizing a boxplot with only one data point, it does not show anything (see code and screenshots below). Is it possible to show a single point for a lone observation, in the way outliers are shown? Thank you very much.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import altair as alt
df1 = pd.DataFrame({"a": [1,3,2,5,4,5,3,6,4,3,5,10]})
alt.Chart(df1).mark_boxplot().encode(
alt.X("a")
)
</code></pre>
<p><a href="https://i.sstatic.net/hvlVx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hvlVx.png" alt="df1" /></a></p>
<pre class="lang-py prettyprint-override"><code>df2 = pd.DataFrame({"b": [1]})
alt.Chart(df2).mark_boxplot().encode(
alt.X("b")
)
</code></pre>
<p><a href="https://i.sstatic.net/uhm8K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uhm8K.png" alt="df2" /></a></p>
| <python><altair> | 2023-07-31 15:55:29 | 0 | 2,402 | Revan |
76,805,196 | 4,518,180 | How can I set up the default redirect URL in Airflow? | <p>I am currently running Airflow within a docker compose, and it is accessible at <code>http://localhost:8081</code>. However, whenever I access this URL, I am automatically redirected to <code>http://localhost:8081/login/?next=http%3A%2F%2Flocalhost%3A8081%2Fhome</code>. This redirection is defined as the page to go after login, and it takes me to <code>http://localhost:8081/home</code>.</p>
<p>I want to change the default redirect URL to a different page after login. Is there a way to achieve this by modifying the configuration files like <code>airflow.cfg</code> or <code>webserver_config.py</code>? Any guidance on the specific configuration changes needed to set up a new default redirect URL would be greatly appreciated. Thank you!</p>
| <python><flask><airflow><airflow-webserver> | 2023-07-31 15:46:13 | 1 | 3,108 | mkUltra |
76,805,184 | 5,416,228 | Add a column based on a condition in Pandas | <p>I have the following df.</p>
<pre><code>data = {
'policyId': ['X','X','X','X', 'Y','Y','Y','Y', 'Z', 'Z','Z','Z'],
'element_id': [242, 243, 241, 257, 242, 243, 241, 257, 242, 243, 241, 257],
'element_selected': [True, True, True, True, False, True, True, True, True, True, True, False]
}
df = pd.DataFrame(data)
</code></pre>
<p>All <code>policyIds</code> have the same <code>element_id</code>s. What changes is whether each <code>element_id</code> is selected or not (<code>element_selected</code> indicates that). I want to add a <code>contract_type</code> column to the df. If <code>element_id</code> 242 has been selected, the <code>contract_type</code> should be "CONTENT". If <code>element_id</code> 257 has been selected, then the <code>contract_type</code> should be "PRIVATE LIABILITY". If <code>element_id</code>s 242 and 257 have both been selected, then the <code>contract_type</code> should be "CONTENT + PRIVATE LIABILITY".</p>
<p>The problem is that each <code>policyId</code> should have a single <code>contract_type</code> value on all of its rows, and I do not know how to implement this logic.</p>
<p>Expected output</p>
<pre><code>policyId element_id element_selected contract_type
X 242 True CONTENT + PRIVATE LIABILITY
X 243 True CONTENT + PRIVATE LIABILITY
X 241 True CONTENT + PRIVATE LIABILITY
X 257 True CONTENT + PRIVATE LIABILITY
Y 242 False PRIVATE LIABILITY
Y 243 True PRIVATE LIABILITY
Y 241 True PRIVATE LIABILITY
Y 257 True PRIVATE LIABILITY
Z 242 True CONTENT
Z 243 True CONTENT
Z 241 True CONTENT
Z 257 False CONTENT
</code></pre>
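<p>A sketch of the per-policy logic I have in mind, derived from the rules above (the <code>label</code> helper is my own naming); it maps one label back onto every row of the policy:</p>
<pre><code>def label(g):
    sel = set(g.loc[g['element_selected'], 'element_id'])
    if {242, 257} <= sel:
        return 'CONTENT + PRIVATE LIABILITY'
    if 242 in sel:
        return 'CONTENT'
    if 257 in sel:
        return 'PRIVATE LIABILITY'
    return None

df['contract_type'] = df['policyId'].map(df.groupby('policyId').apply(label))
</code></pre>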
| <python><pandas> | 2023-07-31 15:44:14 | 5 | 675 | Rods2292 |
76,805,171 | 601,976 | How wide can a Python bitwise reference be? | <p>Theoretical, because I haven't yet found a definitive answer...</p>
<p>In Python, I want to reference a variable of bitwise flags of arbitrary length, such as checking</p>
<pre><code>if (bvar & 1<<135):
pass
</code></pre>
<p>Can this be handled under the covers, or do I need to use special typing or some other explicit mechanism?</p>
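<p>For what it's worth, a quick experiment suggests plain ints already handle this, since Python ints are arbitrary precision; what I'd like is confirmation that this is guaranteed rather than an implementation detail:</p>
<pre><code>bvar = 1 << 135
print(bool(bvar & (1 << 135)))  # True
print(bvar.bit_length())        # 136
</code></pre>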
| <python><binary><bit-manipulation> | 2023-07-31 15:42:23 | 0 | 2,010 | RoUS |
76,805,082 | 3,919,277 | Parsing xml file with Python using root.iter does not list text | <p>I am trying to use Python to parse an xml file. I would like to identify text which occurs between specified xml tags.</p>
<p>The code I am running is</p>
<pre class="lang-py prettyprint-override"><code>
import xml.etree.ElementTree as ET
tree = ET.parse('020012_doctored.xml')
root = tree.getroot()
for w in root.iter('w'):
print(w.text)
</code></pre>
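<p>One thing I noticed while experimenting (not sure whether it's the cause): the file declares a default namespace, and this namespace-qualified variant does print the words for me, sketch:</p>
<pre class="lang-py prettyprint-override"><code>for w in root.iter('{http://www.talkbank.org/ns/talkbank}w'):
    print(w.text)
</code></pre>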
<p>The xml file is as follows. It's a complex file with quite a loose structure, which combines elements of sequence and hierarchy (and I have simplified it for the purposes of this query), but there clearly is a "w" tag, which should be getting picked up by the code.</p>
<p>Thanks.</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<CHAT xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.talkbank.org/ns/talkbank"
xsi:schemaLocation="http://www.talkbank.org/ns/talkbank https://talkbank.org/software/talkbank.xsd"
Media="020012" Mediatypes="audio"
DesignType="long"
ActivityType="toyplay"
GroupType="TD"
PID="11312/c-00018213-1"
Version="2.20.0"
Lang="eng"
Options="bullets"
Corpus="xxxx"
Date="xxxx-xx-xx"
>
<Participants>
<participant
id="MOT"
name="Mother"
role="Mother"
language="eng"
sex="female"
/>
</Participants>
<comment type="Date">15-APR-1999</comment>
<u who="INV" uID="u0">
<w untranscribed="untranscribed">www</w>
<t type="p"></t>
<media
start="7.639"
end="9.648"
unit="s"
/>
<a type="addressee">MOT</a>
</u>
<u who="MOT" uID="u1">
<w untranscribed="untranscribed">www</w>
<t type="p"></t>
<media
start="7.640"
end="9.455"
unit="s"
/>
<a type="addressee">INV</a>
</u>
<u who="CHI" uID="u2">
<w untranscribed="unintelligible">xxx</w>
<w formType="family-specific">choo_choos<mor type="mor"><mw><pos><c>fam</c></pos><stem>choo_choos</stem></mw><gra type="gra" index="1" head="0" relation="INCROOT"/></mor></w>
<t type="p"><mor type="mor"><mt type="p"/><gra type="gra" index="2" head="1" relation="PUNCT"/></mor></t>
<postcode>I</postcode>
<media
start="10.987"
end="12.973"
unit="s"
/>
<a type="comments">looking at pictures of trains</a>
</u>
</CHAT>
</code></pre>
| <python><xml><elementtree> | 2023-07-31 15:29:36 | 3 | 421 | Nick Riches |
76,805,028 | 4,267,439 | scipy genextreme fit returns different parameters from MATLAB gev fit function on the same data | <p>I'm trying to port some code from MATLAB to Python and I realized the <code>gevfit</code> function in MATLAB seems to behave differently from scipy's <code>genextreme</code>, so I put together this minimal example:</p>
<p>MATLAB</p>
<pre><code>% Create the MATLAB array
x = [0.5700, 0.8621, 0.9124, 0.6730, 0.5524, 0.7608, 0.2150, 0.5787, ...
0.7210, 0.7826, 0.8181, 0.5449, 0.7501, 1.1301, 0.7784, 0.5378, ...
0.9550, 0.9623, 0.6865, 0.6863, 0.6153, 0.4372, 0.5485, 0.6318, ...
0.5501, 0.8333, 0.8044, 0.9111, 0.8560, 0.6178, 1.0688, 0.7535, ...
0.7554, 0.7123, 0.7589, 0.8415, 0.7586, 0.3865, 0.3087, 0.7067];
disp(x);
parmHat = gevfit(x);
disp('Estimated parameters (A, B):');
disp(parmHat);
</code></pre>
<blockquote>
<p>Estimated parameters (A, B): -0.3351 0.1962 0.6466</p>
</blockquote>
<p>PYTHON</p>
<pre><code>import numpy as np
import scipy.stats as stats
x = np.array([0.5700, 0.8621, 0.9124, 0.6730, 0.5524, 0.7608, 0.2150, 0.5787,
0.7210, 0.7826, 0.8181, 0.5449, 0.7501, 1.1301, 0.7784, 0.5378,
0.9550, 0.9623, 0.6865, 0.6863, 0.6153, 0.4372, 0.5485, 0.6318,
0.5501, 0.8333, 0.8044, 0.9111, 0.8560, 0.6178, 1.0688, 0.7535,
0.7554, 0.7123, 0.7589, 0.8415, 0.7586, 0.3865, 0.3087, 0.7067])
# Fit the GEV distribution to the data
parameters3 = stats.genextreme.fit(x)
print("Estimated GEV parameters:", parameters3)
</code></pre>
<blockquote>
<p>Estimated GEV parameters: (1.0872284332032054, 0.534605335200113,
0.6474387313912493)</p>
</blockquote>
<p>I'd expect the same parameters, but results are totally different. Any help?</p>
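<p>One difference I'm aware of (though it doesn't seem to explain these numbers): scipy's <code>genextreme</code> uses the opposite sign convention for the shape parameter, so MATLAB's <code>k</code> should correspond to scipy's <code>c = -k</code>. To compare the two fits on equal footing I sketched a log-likelihood check (the sign flip is my assumption):</p>
<pre><code>c_matlab = 0.3351  # -k, flipping MATLAB's shape of -0.3351
ll_matlab = np.sum(stats.genextreme.logpdf(x, c_matlab, loc=0.6466, scale=0.1962))
ll_scipy = np.sum(stats.genextreme.logpdf(x, *parameters3))
print(ll_matlab, ll_scipy)
</code></pre>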
| <python><matlab><scipy> | 2023-07-31 15:23:15 | 1 | 2,825 | rok |
76,804,936 | 13,978,463 | Using greedy behavior to match string after x times of occurrences | <p>I have a data frame that looks like this:</p>
<pre><code>id ec_number_clean euclidean_clean cluster label is_akker eggNOG_OGs COG_category Description GOs EC CAZy
04XYNu_00699 3.2.1.52 0.0968 49 non-singleton akkermansia COG3525@1|root,COG3525@2|Bacteria,46UY2@74201|Verrucomicrobia,2IU6A@203494|Verrucomicrobiae G Glycoside hydrolase, family 20, catalytic core - 3.2.1.52 GH20
</code></pre>
<p>The column that I'm interested in is the <code>eggNOG_OGs</code>. This column has a particular format that is not always the same in all rows. Here an example:</p>
<pre><code>COG3525@1|root,COG3525@2|Bacteria,46UY2@74201|Verrucomicrobia,2IU6A@203494|Verrucomicrobiae
COG3525@1|root,COG3525@2|Bacteria
COG3525@1|root,KOG2499@2759|Eukaryota,38D1Y@33154|Opisthokonta,3NUJ9@4751|Fungi,3QMST@4890|Ascomycota,216QI@147550|Sordariomycetes,3TDHM@5125|Hypocreales,3G4R2@34397|Clavicipitaceae
COG3525@1|root,KOG2499@2759|Eukaryota,3ZBNG@5878|Ciliophora
</code></pre>
<p>As you can see, the pattern to follow here is the "|" (pipe) in the string.
My code uses regex to find the last occurrence of the "|" and create a new column with the string that is immediately after the last occurrence of the "|".</p>
<p>Now, I need to do something slightly different. Instead of the last occurrence, I need to stop after 3 occurrences of the "|", for example, based on the four lines just above this text, the new column must contain this information on each row:</p>
<pre><code>Verrucomicrobia
Bacteria
Opisthokonta
Ciliophora
</code></pre>
<p>Here, there is a little detail: sometimes there is no third occurrence of "|". In that case, just take the string after the last occurrence. For that reason, in the second line, I put <code>Bacteria</code>, due to the absence of a third occurrence of "|".</p>
<p>Here is my code, that works perfectly to find the string after the last occurrences of "|":</p>
<pre><code># Read file
input_file_1 = sys.argv[1]
output_file_1 = sys.argv[2]
# .*: match any character (except newlines), this is based on the "greedily regex method"
# \|: match the last occurrence of "|"
# ([^|]+)$: capture everything after the last occurrences of "|", so in this case everything that start with "|".
# The [^|]+ means one or more characters that are not "|". Finally, the $ matches the end of the string.
searching_root = r'.*\|([^|]+)$'
def searching_taxonomy(text):
"""
:param text: pattern that is search
:return: the first not None string
"""
# Search for pattern
match = re.search(searching_root, text)
# If match is not None, return the first match
# Remove any leading and trailing whitespace characters
return match.group(1).strip() if match else None
# Define data frame
df_input = pd.read_csv(input_file_1, header=0, sep="\t")
# Create a new column and apply the function above to append the matches
df_input['eggnog_taxonomy'] = df_input['eggNOG_OGs'].apply(searching_taxonomy)
</code></pre>
<p>I do not know if the regex pattern that I'm using has a particular name, but I know that it has a "greedy behavior". However, I think my goal is more like a strict greedy behavior, because I need the string right after the third occurrence of "|" but nothing more, and if there are fewer than three occurrences, just the one after the last.</p>
<p>Any idea how to modify only the pattern, maybe by combining some regex techniques?
Maybe add an if statement based on the number of occurrences; however, I want to check first whether it is possible with the regex alone.</p>
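<p>The closest I've come is a two-step fallback rather than a single pattern (sketch; it seems to produce the four expected values on the sample lines above):</p>
<pre><code>def searching_taxonomy(text):
    # Token right after the 3rd '|' (up to the next ',' or '|'), if present
    match = re.match(r'(?:[^|]*\|){3}([^|,]+)', text)
    if match:
        return match.group(1).strip()
    # Fallback: token after the last '|'
    match = re.search(r'\|([^|]+)$', text)
    return match.group(1).strip() if match else None
</code></pre>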
| <python><regex> | 2023-07-31 15:09:43 | 1 | 425 | Someone_1313 |
76,804,871 | 4,409,950 | Create, save, and load spatial index using GeoPandas | <p>I want to create a spatial index using GeoPandas once and save it to a file instead of recreating it every time. How do I do this?</p>
| <python><geopandas> | 2023-07-31 15:00:20 | 2 | 565 | Tyler |
76,804,798 | 4,100,282 | Correct use of buttons in streamlit | <p>I'm a new user of streamlit, and I'm surprised by the interaction between two buttons.</p>
<p>Here is a simple example where one button populates the dataframe with arbitrary data, and another button computes the sum of a column in a dataframe. After I generate data using the first button, clicking on the second button resets the data before computing the sum. By contrast, when I edit the dataframe contents manually, these changes are not lost.</p>
<p>Again, I'm surprised that streamlit "cares" whether the list to sum was generated one way or another.</p>
<ul>
<li>What am I missing? Why does this behavior make sense?</li>
<li>What is the simplest way to achieve what I want (first generate data using the first button, then process it using another button)?</li>
</ul>
<p>MWE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import streamlit as st
df = pd.DataFrame({
'foo': pd.Series([1,2], dtype = 'float'),
})
if st.button('Generate data'):
df = pd.DataFrame({
'foo': pd.Series([3,4,5], dtype = 'float'),
})
myinput = st.data_editor(df)
if st.button('Compute sum'):
st.write(myinput.sum()['foo'])
</code></pre>
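<p>For completeness, here is the <code>session_state</code> variant I've started experimenting with (my attempt; I'm not sure it's the intended pattern):</p>
<pre class="lang-py prettyprint-override"><code>if 'df' not in st.session_state:
    st.session_state.df = pd.DataFrame({
        'foo': pd.Series([1, 2], dtype='float'),
    })

if st.button('Generate data'):
    st.session_state.df = pd.DataFrame({
        'foo': pd.Series([3, 4, 5], dtype='float'),
    })

myinput = st.data_editor(st.session_state.df)

if st.button('Compute sum'):
    st.write(myinput.sum()['foo'])
</code></pre>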
| <python><streamlit> | 2023-07-31 14:53:05 | 0 | 305 | Mathieu |
76,804,674 | 5,405,669 | Execute script that uses my local package - ImportErrors | <p>My project was up-and-running for a while running in a kubernetes container... until, I decided to "clean-up" my use of the <code>sys.add</code> calls that I had at the top of my modules. This included describing my dependencies in <code>pyproject.toml</code>, and all-together ditching <code>setup.py</code>; it imported setup tools, called <code>setup()</code> <code>when __main__</code>.</p>
<p>The design intent is not to run anything in <code>/app/tnc</code> as a script, but rather to treat it as a collection of modules, i.e. a package. The only part of the codebase that serves as a <code>__main__</code> is the <code>api.py</code> file; it initializes and fires up flask.</p>
<h3>Implementation</h3>
<p>I have a lean deployment setup that consists of the following:</p>
<ol>
<li>the core library in <code>/opt/venv</code></li>
<li>my package <code>/app/tnc</code></li>
<li>and the entry point <code>/app/bin/api</code></li>
</ol>
<p>I kick-off the flask app with: <code>python /app/bin/api</code>.</p>
<p>The <em>build</em> takes place in the <code>python:3.11-slim</code> docker image. Here I install the recommended gcc and specify the following in the dockerfile:</p>
<pre><code># build
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY pyproject.toml pyproject.toml
RUN pip3 install -e .   # aside: better would be to use python -m pip install -e .
</code></pre>
<p>I then copy the following from the build into my <em>runtime</em> image.</p>
<pre><code># runtime
ENV PATH "/opt/venv/bin:$PATH"
ENV PYTHONPATH "/opt/venv/bin:/app/tnc"
COPY --chown=appuser:appuser bin bin
COPY --chown=appuser:appuser tnc tnc
COPY --chown=appuser:appuser config.py config.py
COPY --from=builder /opt/venv/ /opt/venv
</code></pre>
<p>As I mentioned, in the kubernetes deployment I fire-up the container with:</p>
<pre><code>command: ["python3"]
args: ["bin/api"]
</code></pre>
<h3>My observations working to find the solution</h3>
<p>Firing up the container in such a way that I can run the python REPL:</p>
<ul>
<li><code>import flask</code> generates <code>AttributeError ...replace(' -> None', '')</code></li>
<li><em>remove</em> <code>/app/tnc</code> from the <code>PYTHONPATH</code>, <code>import flask</code> generates <code>ModuleNotFound ... no tnc</code></li>
</ul>
<h4><code>AttributeError ...replace(' -> None', '')</code></h4>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/venv/lib/python3.10/site-packages/werkzeug/__init__.py", line 2, in <module>
from .test import Client as Client
File "/opt/venv/lib/python3.10/site-packages/werkzeug/test.py", line 35, in <module>
from .sansio.multipart import Data
File "/opt/venv/lib/python3.10/site-packages/werkzeug/sansio/multipart.py", line 19, in <module>
class Preamble(Event):
File "/usr/local/lib/python3.10/dataclasses.py", line 1175, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
File "/usr/local/lib/python3.10/dataclasses.py", line 1093, in _process_class
str(inspect.signature(cls)).replace(' -> None', ''))
AttributeError: module 'inspect' has no attribute 'signature'
</code></pre>
<h3><code>ModuleNotFoundError: No module named 'tnc'</code></h3>
<pre class="lang-bash prettyprint-override"><code>appuser@tnc-py-deployment-set-1:/app$ echo $PYTHONPATH
/opt/venv/bin
appuser@tnc-py-deployment-set-1:/app$ echo $PATH
/opt/venv/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
appuser@tnc-py-deployment-set-1:/app$ python -m /app/bin/api
/opt/venv/bin/python: No module named /app/bin/api
appuser@tnc-py-deployment-set-1:/app$ python /app/bin/api
Traceback (most recent call last):
File "/app/bin/api", line 12, in <module>
from tnc.s3 import S3Session
ModuleNotFoundError: No module named 'tnc'
</code></pre>
<h3>The project structure</h3>
<pre><code>├── bin
│ └── api
├── config.py
├── pyproject.toml
└── tnc
├── __init__.py
├── data
│ ├── __init__.py
│ ├── download.py
│ ├── field_types.py
│ └── storage_providers
├── errors.py
├── inspect
│ ├── __init__.py
│ └── etl_time_index.py
├── test
│ ├── __init__.py
│ └── test_end-to-end.py
├── utils.py
└── www
├── __init__.py
└── routes
├── __init__.py
├── feedback.py
├── livez.py
└── utils.py
</code></pre>
<h4>pyproject.toml</h4>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
where = ["./"]
exclude = [ "res", "notes" ]
dependencies = [ ... with version specs ]
</code></pre>
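<p>One diagnostic I still plan to run inside the runtime container, in case the venv was built against a different interpreter than the one in the final image, or something on <code>PYTHONPATH</code> is shadowing the stdlib (sketch):</p>
<pre class="lang-py prettyprint-override"><code>import sys
print(sys.version)   # interpreter actually running
print(sys.prefix)    # environment it resolved to

import inspect
print(inspect.__file__)  # if this points under /app/tnc, my own 'inspect'
                         # package is shadowing the standard library module
</code></pre>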
| <python><dockerfile><python-venv><python-install> | 2023-07-31 14:38:40 | 1 | 916 | Edmund's Echo |
76,804,639 | 6,930,340 | Concatenate column levels to an existing multi-column pandas dataframe | <p>I have two dataframes with a multi-column index. Both dataframes have exactly the same number of columns.</p>
<pre><code>import pandas as pd
columns_df1 = pd.MultiIndex.from_tuples([
('A', 1, 'X', 'Y', 'Z'),
('B', 2, 'X', 'Y', 'Z'),
('C', 3, 'X', 'Y', 'Z')
], names=['level1', 'level2', 'level3', 'level4', 'level5'])
df1 = pd.DataFrame([[1, 2, 3]], columns=columns_df1)
columns_df2 = pd.MultiIndex.from_tuples([
('D', 4, 'P', 'Q', 'R'),
('E', 5, 'P', 'Q', 'R'),
('F', 6, 'P', 'Q', 'R')
], names=['level6', 'level7', 'level8', 'level9', 'level10'])
df2 = pd.DataFrame([[4, 5, 6]], columns=columns_df2)
print(df1)
print(df2)
level1 A B C
level2 1 2 3
level3 X X X
level4 Y Y Y
level5 Z Z Z
0 1 2 3
level6 D E F
level7 4 5 6
level8 P P P
level9 Q Q Q
level10 R R R
0 4 5 6
</code></pre>
<p>I need to add the last two column levels (level9 and level10) of <code>df2</code> to <code>df1</code>. What's the best way to do this?</p>
<p>Expected result:</p>
<pre><code>level1 A B C
level2 1 2 3
level3 X X X
level4 Y Y Y
level5 Z Z Z
level9 Q Q Q
level10 R R R
0 1 2 3
</code></pre>
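<p>The direction I've sketched so far (untested) is rebuilding df1's MultiIndex with the level9/level10 values taken positionally from df2's columns, but it feels clumsy:</p>
<pre><code>extra9 = df2.columns.get_level_values('level9')
extra10 = df2.columns.get_level_values('level10')
new_tuples = [t + (q, r) for t, q, r in zip(df1.columns, extra9, extra10)]
df1.columns = pd.MultiIndex.from_tuples(
    new_tuples, names=list(df1.columns.names) + ['level9', 'level10']
)
</code></pre>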
| <python><pandas> | 2023-07-31 14:34:33 | 3 | 5,167 | Andi |
76,804,506 | 22,221,987 | Struggling with multiprocessing in python3 | <p>I'm trying to launch a receiver process. This is a class which just receives data from a socket; for a minimal example it will just write rows to a CSV file.
I will send the exit signal through the pipe. After this signal the while loop in the separate process will end and the writer process will finish.</p>
<pre><code>import multiprocessing
import time
import datetime
import csv
from multiprocessing import Process, Pipe
class CSVWriter:
def __init__(self, pipe: multiprocessing.Pipe):
self.pipe = pipe
def write(self):
with open(f'RTD.csv', 'w') as csvfile:
writer = csv.DictWriter(csvfile, delimiter=',', fieldnames=['date'])
while self.pipe.recv() != 'exit':
writer.writerow({'date': (datetime.datetime.now())})
time.sleep(1)
print('exit csv')
pipe_receiver, pipe_sender = Pipe()
def write_csv(pipe):
csv_writer = CSVWriter(pipe=pipe)
csv_writer.write()
process = Process(target=write_csv, args=(pipe_receiver,))
process.start()
pipe_sender.send(input())
process.join()
process.close()
</code></pre>
<p>But, contrary to my expectations, it doesn't create the file. Or, if I enter the wrong exit key, it creates an empty file.</p>
<p>I've been struggling with the pipe for around 2 days and can't understand the problem. I'd really appreciate any information.</p>
<p>P.S. I'm using processes because I need to receive the data from the socket really fast (in the example I just write the data to a file), so I can't use threads.</p>
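<p>To check my mental model of <code>Pipe</code> (quick sketch): <code>recv()</code> blocks until a message arrives, so I'd expect the loop body to run once per <code>send()</code>, which may be part of my confusion:</p>
<pre><code>from multiprocessing import Pipe

r, s = Pipe()
s.send('hello')
print(r.recv())  # 'hello', returns immediately
# r.recv()       # would now block until the other end sends again
</code></pre>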
| <python><python-3.x><multiprocessing><python-multiprocessing> | 2023-07-31 14:18:54 | 1 | 309 | Mika |
76,804,014 | 2,069,099 | Poetry: how to define fallback for a specific package | <p>Imagine you develop both a backend and a frontend. When creating the <code>pyproject.toml</code> for the frontend, you may like to specify a dependency based on path (and <code>develop</code> mode). Then, when the frontend is deployed on your server, the installation should fall back to the "official" backend PyPI package.</p>
<p>Something like:</p>
<pre><code>[tool.poetry.dependencies]
python = ">=3.11"
my-backend = {path = "../my-backend", develop = true} ## develop mode if available
my-backend = {version = "^3.0.0"} ## Prod mode fallback since relative path won't exist
</code></pre>
<p>Is that somehow possible?</p>
| <python><python-poetry> | 2023-07-31 13:15:22 | 0 | 3,517 | Nic |
76,804,004 | 1,678,780 | "RuntimeError: CustomJob resource has not been created" when creating Vertex AI CustomJob | <p>I try to create a Vertex AI CustomJob similar to the example from <a href="https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob</a></p>
<pre class="lang-py prettyprint-override"><code>import time
from google.cloud import aiplatform
worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-4",
},
"replica_count": 1,
"container_spec": {
"image_uri": "eu.gcr.io/somexistingimage",
"command": ["python", "myscript.py", "test", "--var"],
"args": [],
},
}
]
job = aiplatform.CustomJob(
display_name="job_{}".format(round(time.time())),
worker_pool_specs=worker_pool_specs,
project="my-project",
staging_bucket="gs://some-bucket",
)
</code></pre>
<p>Now when I inspect the job, practically all fields (create_time, display_name, end_time, ...) contain the following text:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "..../lib/python3.9/site-packages/google/cloud/aiplatform/base.py", line 686, in display_name
self._assert_gca_resource_is_available()
File "..../lib/python3.9/site-packages/google/cloud/aiplatform/base.py", line 1332, in _assert_gca_resource_is_available
raise RuntimeError(
RuntimeError: CustomJob resource has not been created.
</code></pre>
<p>Environment:
python 3.9.16
google-cloud-aiplatform 1.28.1</p>
<p>I'm logged in and default application auth is set correctly, as I can submit <code>CustomContainerTrainingJob</code>s. Just not <code>CustomJob</code>s.</p>
<p>I cannot find anything on this error. How can I fix this?</p>
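<p>For completeness: I never call anything on <code>job</code> after the constructor, because I assumed constructing the object also creates the resource. Reading the docs again, I suspect something like this is required first (sketch):</p>
<pre class="lang-py prettyprint-override"><code>job.submit()  # or job.run(sync=True) to block until the job finishes
print(job.resource_name)
</code></pre>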
| <python><google-cloud-vertex-ai> | 2023-07-31 13:14:57 | 1 | 1,216 | GenError |
76,803,977 | 813,946 | Saving pandas dataframe containing vertical tabs | <p>I've got a program that saves data from a database to an Excel table. It worked well so far, but it turned out to have a problem with texts including a vertical tab (ASCII 0x0B). How can I make the program more robust? Is there an easy way to add some parameters to <code>pd.ExcelWriter</code> or <code>df.to_excel</code> that solves the problem?</p>
<p>This is the example code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(data=dict(a=["x", "y\vz"], b=[1, 2]))
df2 = pd.DataFrame(data=dict(a=["x\vxx", "yz"], b=[11, 22]))
writer = pd.ExcelWriter("output.xlsx", engine='openpyxl')
for sheet_name, df in (("first", df1), ("second", df2)):
df.to_excel(writer, index=False, sheet_name=sheet_name)
writer.close()
</code></pre>
<p>The Excel file has several worksheets and many columns, many of which are text. I know that I could do something like this:</p>
<pre class="lang-py prettyprint-override"><code>for sheet_name, df in (("first", df1), ("second", df2)):
df["a"] = df.a.apply(lambda x: x.replace("\v", "\n"))
df.to_excel(writer, index=False, sheet_name=sheet_name)
</code></pre>
<p>or even doing this on each string column, but is there any simpler solution?</p>
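<p>To be explicit, the generalisation I am trying to avoid would be a sketch like this (assuming replacing <code>\v</code> with <code>\n</code> is acceptable for every text column):</p>
<pre class="lang-py prettyprint-override"><code>for sheet_name, df in (("first", df1), ("second", df2)):
    # clean every string (object) column before writing
    str_cols = df.select_dtypes(include="object").columns
    df[str_cols] = df[str_cols].apply(lambda s: s.str.replace("\v", "\n", regex=False))
    df.to_excel(writer, index=False, sheet_name=sheet_name)
</code></pre>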
<p>(Python 3.11, pandas 1.5.3, openpyxl 3.1.2)</p>
| <python><pandas><openpyxl> | 2023-07-31 13:11:23 | 1 | 1,982 | Arpad Horvath -- Слава Україні |
76,803,863 | 6,026,338 | extra input feature stateIn "LSTM state input" after model generation from Create ML Application | <p>I just generated an .mlmodel from the Create ML application with this feature set. The image below shows the selected input features for training the model.</p>
<p><a href="https://i.sstatic.net/bCg3W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCg3W.png" alt="feature set for model training" /></a></p>
<p>After my training finished, the model preview shows an extra input feature vector with the name stateIn (LSTM state input). The image below shows the preview of all input feature vectors, including this LSTM input.</p>
<p><a href="https://i.sstatic.net/EdBO0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EdBO0.png" alt="Extra input vector" /></a></p>
<p>So, I have no idea why this is added to the model prediction or what should be passed to this parameter. Below is the class that was automatically generated when I dragged the .mlmodel into Xcode. I need an explanation of the <code>stateIn</code> input vector.</p>
<pre><code>import CoreML
/// Model Prediction Input Type
@available(macOS 10.13, iOS 11.0, tvOS 11.0, watchOS 4.0, *)
class ReloadInput : MLFeatureProvider {
/// accX window input as 100 element vector of doubles
var accX: MLMultiArray
/// accY window input as 100 element vector of doubles
var accY: MLMultiArray
/// accZ window input as 100 element vector of doubles
var accZ: MLMultiArray
/// gyroX window input as 100 element vector of doubles
var gyroX: MLMultiArray
/// gyroY window input as 100 element vector of doubles
var gyroY: MLMultiArray
/// gyroZ window input as 100 element vector of doubles
var gyroZ: MLMultiArray
/// LSTM state input as 400 element vector of doubles
var stateIn: MLMultiArray
var featureNames: Set<String> {
get {
return ["accX", "accY", "accZ", "gyroX", "gyroY", "gyroZ", "stateIn"]
}
}
func featureValue(for featureName: String) -> MLFeatureValue? {
if (featureName == "accX") {
return MLFeatureValue(multiArray: accX)
}
if (featureName == "accY") {
return MLFeatureValue(multiArray: accY)
}
if (featureName == "accZ") {
return MLFeatureValue(multiArray: accZ)
}
if (featureName == "gyroX") {
return MLFeatureValue(multiArray: gyroX)
}
if (featureName == "gyroY") {
return MLFeatureValue(multiArray: gyroY)
}
if (featureName == "gyroZ") {
return MLFeatureValue(multiArray: gyroZ)
}
if (featureName == "stateIn") {
return MLFeatureValue(multiArray: stateIn)
}
return nil
}
init(accX: MLMultiArray, accY: MLMultiArray, accZ: MLMultiArray, gyroX: MLMultiArray, gyroY: MLMultiArray, gyroZ: MLMultiArray, stateIn: MLMultiArray) {
self.accX = accX
self.accY = accY
self.accZ = accZ
self.gyroX = gyroX
self.gyroY = gyroY
self.gyroZ = gyroZ
self.stateIn = stateIn
}
@available(macOS 12.0, iOS 15.0, tvOS 15.0, watchOS 8.0, *)
convenience init(accX: MLShapedArray<Double>, accY: MLShapedArray<Double>, accZ: MLShapedArray<Double>, gyroX: MLShapedArray<Double>, gyroY: MLShapedArray<Double>, gyroZ: MLShapedArray<Double>, stateIn: MLShapedArray<Double>) {
self.init(accX: MLMultiArray(accX), accY: MLMultiArray(accY), accZ: MLMultiArray(accZ), gyroX: MLMultiArray(gyroX), gyroY: MLMultiArray(gyroY), gyroZ: MLMultiArray(gyroZ), stateIn: MLMultiArray(stateIn))
}
}
</code></pre>
| <python><swift><coreml><createml> | 2023-07-31 12:57:27 | 0 | 1,604 | Qazi Ammar |
76,803,710 | 10,418,143 | Add a Classification Head on Top of Huggingface Vilt Model | <p>I want to add a classification layer in pytorch on top of the <a href="https://huggingface.co/docs/transformers/model_doc/vilt" rel="nofollow noreferrer">huggingface vilt transformer</a>, so that I can classify my text labels.</p>
<p>Generally, in the normal setting, ViLT takes an (image, question) pair and outputs the answer to the question after a forward pass.</p>
<p>I want to make the task a classification task instead of a text generation task. I have a set of labels, and I want ViLT to tell me which label has the highest probability of being the answer to the given question.</p>
<p>I'm completely new to the transformers and have very little idea of how this task can be achieved. Can someone please help me?</p>
<p>I checked this medium <a href="https://towardsdatascience.com/adding-custom-layers-on-top-of-a-hugging-face-model-f1ccdfc257bd" rel="nofollow noreferrer">blog</a> but couldn't make sense out of it.</p>
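<p>To make my intent concrete, this is roughly the shape I imagine (an untested sketch; the checkpoint name <code>dandelin/vilt-b32-mlm</code> and the use of the pooled output are my assumptions, not something from the docs):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import ViltModel

class ViltClassifier(torch.nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        self.vilt = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
        # linear head mapping the pooled multimodal embedding to label scores
        self.head = torch.nn.Linear(self.vilt.config.hidden_size, num_labels)

    def forward(self, **inputs):
        pooled = self.vilt(**inputs).pooler_output
        return self.head(pooled)  # logits over my label set
</code></pre>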
| <python><deep-learning><pytorch><huggingface-transformers><huggingface> | 2023-07-31 12:37:58 | 0 | 352 | user10418143 |
76,803,694 | 11,028,689 | How to make the equivalent of Tensorflow sequential model in PyTorch? | <p>I have a simple shallow model in Tensorflow for multiclass classification.</p>
<pre><code>model = tf.keras.models.Sequential([
tf.keras.layers.Dense(64, input_shape = (384,), activation = "relu"),
tf.keras.layers.Dense(36, activation="softmax", use_bias = False)
])
model.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_13 (Dense) (None, 64) 24640
dense_14 (Dense) (None, 36) 2304
</code></pre>
<p>I want to replicate it in PyTorch where I am using softmax in the forward function.</p>
<pre><code># base model - it works but gives v low accuracy. need to add more layers.
class LR(torch.nn.Module):
def __init__(self, n_features):
super(LR, self).__init__()
self.lr = torch.nn.Linear(n_features, 36)
nn.ReLU()
def forward(self, x):
out = torch.softmax(self.lr(x),dim = 1,dtype=None)
return out
# parameters
n_features = 384
n_classes = 36
optim = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
accuracy_fn = Accuracy(task="multiclass", num_classes=36)
</code></pre>
<p>I have tried modifying my base model like so and it gives me an error later on.</p>
<pre><code>class LR(torch.nn.Module):
def __init__(self, n_features):
super(LR, self).__init__()
self.lr = torch.nn.Linear(n_features, 64)
self.lr = nn.ReLU()
self.lr = torch.nn.Linear(64, 36)
def forward(self, x):
out = torch.softmax(self.lr(x),dim = 1,dtype=None)
return out
epochs = 15
def train(model, optim, criterion, x, y, epochs=epochs):
for e in range(1, epochs + 1):
optim.zero_grad()
out = model(x)
loss = criterion(out, y)
loss.backward()
optim.step()
print(f"Loss at epoch {e}: {loss.data}")
return model
model = LR(n_features)
model = train(model, optim, criterion, X_train, y_train)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[175], line 18
12 # acc = accuracy_fn(out, y)
13 # acc.backward()
14 # optim.step()
15 # print(f"Acc at epoch {e}: {acc.data}")
16 return model
---> 18 model = train(model, optim, criterion, X_train, y_train)
Cell In[175], line 6, in train(model, optim, criterion, x, y, epochs)
4 for e in range(1, epochs + 1):
5 optim.zero_grad()
----> 6 out = model(x)
7 loss = criterion(out, y)
8 loss.backward()
File /usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
Cell In[172], line 10, in LR.forward(self, x)
9 def forward(self, x):
---> 10 out = torch.softmax(self.lr(x),dim = 1,dtype=None)
11 return out
File /usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (155648x384 and 64x36)
</code></pre>
<p>my data shapes are:</p>
<pre><code>X_train.shape
torch.Size([155648, 384])
y_train.shape
torch.Size([155648]).
</code></pre>
<p>Can someone give me a hand with my class LR and train function?</p>
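<p>For what it's worth, my understanding is that the layer stack itself could be written with <code>torch.nn.Sequential</code> like this (a sketch; softmax is left out on the assumption that <code>nn.CrossEntropyLoss</code> expects raw logits):</p>
<pre><code>model = torch.nn.Sequential(
    torch.nn.Linear(384, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 36, bias=False),  # mirrors use_bias=False in Keras
)
</code></pre>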
| <python><tensorflow><machine-learning><keras><pytorch> | 2023-07-31 12:35:50 | 2 | 1,299 | Bluetail |
76,803,595 | 3,318,528 | Python - Find a phrase inside a long text | <p>What's the most efficient way of finding a phrase inside a longer text with Python?
What I would like to do is find the complete phrase, but if it's not found, split it into smaller parts and try to find those, down to single words.</p>
<p>For example, I have a text:</p>
<blockquote>
<p>Paragraphs are the building blocks of papers. Many students define paragraphs in terms of length: a paragraph is a group of at least five sentences, a paragraph is half a page long, etc...There are many techniques for brainstorming; whichever one you choose, this stage of paragraph development cannot be skipped.</p>
</blockquote>
<p>I want to find the phrase: <code>there is a group of students</code></p>
<p>The entire phrase as it is will not be found, but its smaller parts will. So it should find:</p>
<ul>
<li>there</li>
<li>is a group of</li>
<li>students</li>
</ul>
<p>Is this even possible? If so, what's the most efficient algorithm to achieve it?</p>
<p>I tried some recursive functions, but they are not able to find these sub-parts of the phrase; either they find the entire phrase or they just find the single words.</p>
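<p>To illustrate the behaviour I am after, a greedy word-run sketch like this is the kind of thing I have in mind (untested against edge cases, and surely not the most efficient):</p>
<pre><code>def find_parts(text: str, phrase: str) -> list:
    text = text.lower()            # case-insensitive matching
    words = phrase.lower().split()
    found, i = [], 0
    while i < len(words):
        # grow the longest run of consecutive words still present in the text
        j = i
        while j < len(words) and " ".join(words[i:j + 1]) in text:
            j += 1
        if j > i:
            found.append(" ".join(words[i:j]))
            i = j
        else:
            i += 1                 # word not found anywhere; skip it
    return found
</code></pre>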
| <python><text><nlp> | 2023-07-31 12:24:05 | 4 | 335 | Val |
76,803,447 | 323,128 | GroupBy Spark Dataframe and manipulate aggregated data as string | <p>The transformation is happening in an AWS Glue Spark job.
In the example below I group rows by “item_guid” and “item_name” and aggregate the “option” column into a collection set. A collection set is an array; however, later I will need to map it to a Postgres database, so I need to turn that array into a string.
Thus,</p>
<pre><code>array_to_string_df = grouped_df.withColumn("option", concat_ws(',', col("option")))
</code></pre>
<p>will transform the options into comma-separated strings.
However, for Postgres, where the options column has type text[], the string must be enclosed in curly braces and should look like:
{90000,86000,81000}</p>
<p>The question: how can I, in the last step of the transformation, turn the options value into a “{90000,86000,81000}”-enclosed string?
It seems like a simple trick, but I couldn't come up with an elegant solution.</p>
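<p>What I had been experimenting with (using the <code>grouped_df</code> from the code example below) is wrapping the joined string in braces with <code>concat</code> and <code>lit</code>, along these lines (a sketch of the idea, not necessarily the elegant version I am hoping for):</p>
<pre><code>from pyspark.sql.functions import concat, concat_ws, lit, col

array_to_pg_string_df = grouped_df.withColumn(
    "option",
    concat(lit("{"), concat_ws(",", col("option")), lit("}"))
)
</code></pre>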
<p>Code Example:</p>
<pre><code>from pyspark.sql.functions import collect_list, collect_set, concat_ws, col, lit
simpleData = [("001","1122","YPIA_PROD",90000),
("002","1122","YPIA_PROD",86000),
("003","1122","YPIA_PROD",81000),
("004","1122","YPIA_ABC",90000),
("005","1133","YPIA_PROD",99000),
("006","1133","YPIA_PROD",83000),
("007","1144","YPIA_PROD",79000),
("008","1144","YPIA_PROD",80000),
("009","1144","YPIA_ABC",91000)
]
rrd = spark.sparkContext.parallelize(simpleData)
df = rrd.toDF(["id","item_guid","item_name","option"])
df.show()
grouped_df = df.groupby("item_guid", "item_name").agg(collect_set("option").alias("option"))
array_to_string_df = grouped_df.withColumn("option", concat_ws(',', col("option")))
grouped_df.show()
array_to_string_df.show()
</code></pre>
<p>DF show output:</p>
<pre><code>+---+----------+---------+------+
| id| item_guid|item_name|option|
+---+----------+---------+------+
|001| 1122|YPIA_PROD| 90000|
|002| 1122|YPIA_PROD| 86000|
|003| 1122|YPIA_PROD| 81000|
|004| 1122| YPIA_ABC| 90000|
|005| 1133|YPIA_PROD| 99000|
|006| 1133|YPIA_PROD| 83000|
|007| 1144|YPIA_PROD| 79000|
|008| 1144|YPIA_PROD| 80000|
|009| 1144| YPIA_ABC| 91000|
+---+----------+---------+------+
+----------+---------+--------------------+
| item_guid|item_name| option|
+----------+---------+--------------------+
| 1133|YPIA_PROD| [83000, 99000]|
| 1122|YPIA_PROD|[90000, 86000, 81...|
| 1122| YPIA_ABC| [90000]|
| 1144|YPIA_PROD| [79000, 80000]|
| 1144| YPIA_ABC| [91000]|
+----------+---------+--------------------+
+----------+---------+-----------------+
|item_guid |item_name| option|
+----------+---------+-----------------+
| 1133|YPIA_PROD| 83000,99000|
| 1122|YPIA_PROD|90000,86000,81000|
| 1122| YPIA_ABC| 90000|
| 1144|YPIA_PROD| 79000,80000|
| 1144| YPIA_ABC| 91000|
+----------+---------+-----------------+
</code></pre>
| <python><apache-spark><aws-glue> | 2023-07-31 12:06:22 | 2 | 4,244 | Maxim |
76,803,356 | 12,176,250 | assigning dates using regex, converting them with strptime and then applying to dataframe using lambda. Code works but pytest is failing? | <p>I was wondering if I could get some help with this code. The function assigns dates using regex, converts them with strptime and then applies them to a dataframe using a lambda. The code works, but pytest is failing and I have no idea why.</p>
<p>Here is the code for you to look at:</p>
<pre><code>from datetime import datetime, date
import re
import pandas as pd
def conv_date(dte: str) -> date:
acceptable_mappings = {
"\d{4}-\d{2}-\d{2}\s\d{2}\:\d{2}\:\d{2}": "%Y-%m-%d %H:%M:%S",
}
for regex in acceptable_mappings.keys():
if re.fullmatch(regex, dte):
return datetime.strptime(dte, acceptable_mappings[regex]).date()
raise Exception(f"Expected date in one of supported formats, got {dte}")
def full_list_parse(unclean_list: list) -> list:
return [conv_date(dte) for dte in unclean_list]
mock_dict = [
{"name": "xx", "role": "loves only-fans", "date": "2023-07-26 12:46:21"},
]
df = pd.DataFrame(mock_dict)
if __name__ == "__main__":
print(df)
df['date_clean'] = df['date'].apply(lambda x: conv_date(x))
print(df)
</code></pre>
<p>When I run the above code I get back what is desired, namely the date converted to 2023-07-26:</p>
<pre><code> name role date
0 xx loves only-fans 2023-07-26 12:46:21
name role date date_clean
0 xx loves only-fans 2023-07-26 12:46:21 2023-07-26
</code></pre>
<p>And here is my test:</p>
<pre><code>from x import *
import pytest
class TestConvDate:
classical_programming_with_datetime = "2023-07-26 12:46:21"
def test_classical_programming_with_datetime(self):
assert datetime(2023, 7, 26) == conv_date(self.classical_programming_with_datetime)
def test_error_is_raised(self):
with pytest.raises(Exception):
conv_date("some issue")
</code></pre>
<p>But when I run this test, I get this error:</p>
<pre><code>Expected :datetime.date(2023, 7, 26)
Actual :datetime.datetime(2023, 7, 26, 0, 0)
</code></pre>
<p>Do I need to assert differently? I thought my function handles the time portion of the datetime. Where are these zeros coming from? I tried changing the assert to:</p>
<p><code>assert datetime(2023, 7, 26, 0, 0) == conv_date(self.classical_programming_with_datetime)</code></p>
<p>but I'm still getting the same error! Any ideas?</p>
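<p>For reference, a minimal comparison of the two types involved (a sketch of what I believe is going on):</p>
<pre><code>from datetime import date, datetime

# conv_date returns a date (no time part), while the test builds a datetime;
# a date and a datetime never compare equal, hence the "extra" zeros shown
print(datetime(2023, 7, 26) == date(2023, 7, 26))  # False
print(date(2023, 7, 26) == date(2023, 7, 26))      # True
</code></pre>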
<p>Thanks</p>
| <python><pandas><regex><lambda><pytest> | 2023-07-31 11:54:19 | 1 | 346 | Mizanur Choudhury |
76,803,230 | 14,368,631 | Unbound TypeVar variable in overloaded class | <p>I have the following code, which is a simplified version of an entity component system in Python:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from typing import TYPE_CHECKING, TypeVar, overload
if TYPE_CHECKING:
from collections.abc import Generator
T = TypeVar("T")
T1 = TypeVar("T1")
T2 = TypeVar("T2")
class Registry:
def __init__(self) -> None:
self._next_game_object_id = 0
self._components: dict[type[T], set[int]] = {}
self._game_objects: dict[int, dict[type[T], T]] = {}
@overload
def get_components(
self,
component: type[T],
) -> Generator[tuple[int, T], None, None]:
...
@overload
def get_components(
self,
component: type[T],
component_two: type[T1],
) -> Generator[tuple[int, T, T1], None, None]:
...
@overload
def get_components(
self,
component: type[T],
component_two: type[T1],
component_three: type[T2],
) -> Generator[tuple[int, T, T1, T2], None, None]:
...
def get_components(self, *components: type[T]) -> Generator[tuple[int, tuple[T, ...]], None, None]:
game_object_ids = set.intersection(
*(self._components[component] for component in components),
)
for game_object_id in game_object_ids:
yield game_object_id, tuple(
self._game_objects[game_object_id][component]
for component in components
)
</code></pre>
<p>However, I'm getting some mypy errors which I cannot figure out. One of them says that the TypeVar <code>T</code> is unbound, and another says that <code>get_components</code> does not accept all possible arguments of signatures 1, 2, and 3. How can I fix these errors?</p>
| <python><python-typing><mypy> | 2023-07-31 11:36:29 | 1 | 328 | Aspect11 |
76,803,228 | 2,030,026 | Manually decorating function with extend_schema results in RecursionError | <p>I have a <code>HistoricalModelViewSet</code> class which is used in all other view sets (<code>class StuffViewSet(HistoricalModelViewSet)</code>); it adds a couple of <code>/history</code> endpoints (a DRF thing, <a href="https://www.django-rest-framework.org/api-guide/viewsets/#marking-extra-actions-for-routing" rel="nofollow noreferrer">https://www.django-rest-framework.org/api-guide/viewsets/#marking-extra-actions-for-routing</a>).</p>
<p>Then I'm using <code>drf-spectacular</code> to generate a <code>/swagger</code> view, and there is a different structure to the <code>/history</code> endpoint than the rest of the view set.</p>
<p>So I dynamically decorate the function with <code>extend_schema</code> using a serializer set in the child view:</p>
<pre class="lang-py prettyprint-override"><code>class HistoricalModelViewSet(viewsets.ModelViewSet):
def __init__(self, **kwargs):
self.list_history = extend_schema(
responses={200: self.historical_serializer_class(many=True)},
)(self.list_history)
@action(detail=False, methods=["get"], url_path="history", url_name="list-history-list")
def list_history(self, request):
serializer = self.historical_serializer_class(history_items, many=True, context=context)
return Response(serializer.data)
</code></pre>
<p>This all works fine; I can view <code>/history</code> for all models.</p>
<p>However, when viewing <code>/swagger</code>, everything works fine a couple of times, but then after about 5 reloads it errors with:</p>
<pre class="lang-bash prettyprint-override"><code>myapp-api-1 | 2023-07-31 11:23:16,553 ERROR django.request log 2577 140001382692608 Internal Server Error: /myapp/schema/
myapp-api-1 | Traceback (most recent call last):
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
myapp-api-1 | response = get_response(request)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
myapp-api-1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
myapp-api-1 | return view_func(*args, **kwargs)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/django/views/generic/base.py", line 84, in view
myapp-api-1 | return self.dispatch(request, *args, **kwargs)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
myapp-api-1 | response = self.handle_exception(exc)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
myapp-api-1 | self.raise_uncaught_exception(exc)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
myapp-api-1 | raise exc
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
myapp-api-1 | response = handler(request, *args, **kwargs)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/views.py", line 83, in get
myapp-api-1 | return self._get_schema_response(request)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/views.py", line 91, in _get_schema_response
myapp-api-1 | data=generator.get_schema(request=request, public=self.serve_public),
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/generators.py", line 268, in get_schema
myapp-api-1 | paths=self.parse(request, public),
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/generators.py", line 239, in parse
myapp-api-1 | operation = view.schema.get_operation(
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/utils.py", line 422, in get_operation
myapp-api-1 | return super().get_operation(path, path_regex, path_prefix, method, registry)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/utils.py", line 422, in get_operation
myapp-api-1 | return super().get_operation(path, path_regex, path_prefix, method, registry)
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/utils.py", line 422, in get_operation
myapp-api-1 | return super().get_operation(path, path_regex, path_prefix, method, registry)
myapp-api-1 | [Previous line repeated 482 more times]
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/openapi.py", line 61, in get_operation
myapp-api-1 | if self.is_excluded():
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/utils.py", line 427, in is_excluded
myapp-api-1 | return super().is_excluded()
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/utils.py", line 427, in is_excluded
myapp-api-1 | return super().is_excluded()
myapp-api-1 | File "/usr/local/lib/python3.10/site-packages/drf_spectacular/utils.py", line 427, in is_excluded
myapp-api-1 | return super().is_excluded()
myapp-api-1 | [Previous line repeated 437 more times]
myapp-api-1 | RecursionError: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>It errors on <code>/schema</code>, where <code>drf-spectacular</code> generates the Swagger schema.</p>
<p>Why does it work a couple of times and then error?</p>
<p>If I comment out the code in <code>__init__</code> everything works, but I have the wrong output JSON in <code>/swagger</code>.</p>
<p>But why would this method of decorating not work?</p>
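<p>My current suspicion, reduced to a framework-free sketch (the <code>wrap</code> function here is hypothetical and stands in for <code>extend_schema</code>; the assumption is that its effect lands on shared, class-level schema state rather than only on the instance):</p>
<pre class="lang-py prettyprint-override"><code>def wrap(f):
    def inner(*args, **kwargs):
        return f(*args, **kwargs)
    return inner

class Demo:
    def __init__(self):
        # runs once per instantiation (i.e. once per request in DRF), so
        # anything shared between instances gets wrapped again and again
        self.handler = wrap(self.handler)

    def handler(self):
        return "ok"
</code></pre>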
| <python><django><django-rest-framework><drf-spectacular> | 2023-07-31 11:36:06 | 0 | 1,928 | himmip |
76,803,186 | 12,415,855 | Change fields in pdf using pypdf? | <p>I am trying to update entry fields in a PDF using the following code with the Python module pypdf. First I read the PDF file and get all available fields on its first page.</p>
<pre><code>from pypdf import PdfReader, PdfWriter
reader = PdfReader("exmpl3.pdf")
writer = PdfWriter()
page = reader.pages[0]
fields = reader.get_fields()
for k,v in enumerate(fields):
print (k, v)
writer.add_page(page)
writer.update_page_form_field_values(
writer.pages[0], {0: "some filled in text"}
)
with open("filled-out.pdf", "wb") as output_stream:
writer.write(output_stream)
</code></pre>
<p>But when I run this program I get the following error message (output from two runs shown):</p>
<pre><code>C:\DEV\Python-Diverses\pypdf>python exmpl3.py
Traceback (most recent call last):
File "C:\DEV\Python-Diverses\pypdf\exmpl3.py", line 13, in <module>
writer.update_page_form_field_values(
File "C:\Users\WRSPOL\AppData\Local\Programs\Python\Python39\lib\site-packages\pypdf\_writer.py", line 946, in update_page_form_field_values
raise PyPdfError("No /AcroForm dictionary in PdfWriter Object")
pypdf.errors.PyPdfError: No /AcroForm dictionary in PdfWriter Object
C:\DEV\Python-Diverses\pypdf>python exmpl3.py
0 form1[0].#subform[0].TextField1[0]
1 form1[0].#subform[0].TextField1[1]
2 form1[0].#subform[0].TextField1[2]
3 form1[0].#subform[0].TextField1[3]
4 form1[0].#subform[0].TextField1[4]
5 form1[0].#subform[0].TextField1[5]
6 form1[0].#subform[0].TextField1[6]
7 form1[0].#subform[0].TextField1[7]
8 form1[0].#subform[0].TextField1[8]
9 form1[0].#subform[0].CheckBox1[0]
10 form1[0].#subform[0].CheckBox2[0]
11 form1[0].#subform[0].CheckBox3[0]
12 form1[0].#subform[0].CheckBox4[0]
13 form1[0].#subform[0].CheckBox5[0]
14 form1[0].#subform[0].SSN[0]
15 form1[0].#subform[0].HouseholdIncome1[0]
16 form1[0].#subform[0].HouseholdIncome1[1]
17 form1[0].#subform[0].HouseholdIncome2[0]
18 form1[0].#subform[0].HouseholdIncome3[0]
19 form1[0].#subform[0].DateSigned[0]
20 form1[0].#subform[0].SignatureField1[0]
21 form1[0].#subform[0]
22 form1[0].#subform[1].Text38[0]
23 form1[0].#subform[1].RadioButtonList[0]
24 form1[0].#subform[1].DateRecordUpdated[0]
25 form1[0].#subform[1].DateSigned[1]
26 form1[0].#subform[1].SignatureField1[1]
27 form1[0].#subform[1].DateNotified[0]
28 form1[0].#subform[1]
29 form1[0]
Traceback (most recent call last):
File "C:\DEV\Python-Diverses\pypdf\exmpl3.py", line 13, in <module>
writer.update_page_form_field_values(
File "C:\Users\WRSPOL\AppData\Local\Programs\Python\Python39\lib\site-packages\pypdf\_writer.py", line 946, in update_page_form_field_values
raise PyPdfError("No /AcroForm dictionary in PdfWriter Object")
pypdf.errors.PyPdfError: No /AcroForm dictionary in PdfWriter Object
</code></pre>
<p>How can I update the fields in this PDF?</p>
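<p>One workaround I am considering is copying the whole document into the writer instead of a single page, on the assumption that <code>PdfWriter.append</code> carries the <code>/AcroForm</code> dictionary along (untested sketch; the field name is taken from the listing above):</p>
<pre><code>from pypdf import PdfReader, PdfWriter

reader = PdfReader("exmpl3.pdf")
writer = PdfWriter()
writer.append(reader)  # copy all pages, hopefully including /AcroForm

writer.update_page_form_field_values(
    writer.pages[0],
    {"form1[0].#subform[0].TextField1[0]": "some filled in text"},
)
with open("filled-out.pdf", "wb") as output_stream:
    writer.write(output_stream)
</code></pre>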
| <python><pypdf> | 2023-07-31 11:28:21 | 4 | 1,515 | Rapid1898 |
76,803,183 | 1,173,884 | Loading list of snakemake inputs from file | <p>In Snakemake, is there a better way to write the following?
I have a file <code>inputs.txt</code> that contains paths to the real input files, and I would like this file to be regarded as one of the inputs of a rule.</p>
<pre class="lang-py prettyprint-override"><code>rule all:
input: "test.txt"
def get_inputs(wildcards):
fh = open("inputs.txt")
return fh.read().splitlines()
rule test:
input:
"inputs.txt", # unfortunately this won't trigger the exception
additional=get_inputs
output:
"test.txt"
shell:
"echo {input.additional} > {output}"
</code></pre>
<p>My problem with this code is that when <code>inputs.txt</code> does not exist, it throws an exception on the <code>open()</code> call. I would prefer if it crashed before entering the <code>get_inputs</code> function.</p>
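<p>The closest I have come is a parse-time guard at the top of the Snakefile (a sketch; it does fail before <code>get_inputs</code> runs, because Snakefiles execute top-level Python when they are parsed, but it feels like a workaround rather than the idiomatic solution):</p>
<pre class="lang-py prettyprint-override"><code>import os

if not os.path.exists("inputs.txt"):
    raise FileNotFoundError("inputs.txt must exist before the workflow runs")
</code></pre>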
| <python><snakemake> | 2023-07-31 11:28:14 | 1 | 3,766 | Jindra Helcl |
76,803,053 | 10,248,483 | How to take difference between last value and first value of 30 min window, and keep other columns as well? | <p>Parquet:
<a href="https://drive.google.com/file/d/1Ylq4tP9ck5bp8mla5GzwUZpRHtyOS62U/view?usp=sharing" rel="nofollow noreferrer">Link to the sample dataset</a> <br />
CSV formatted:
<a href="https://drive.google.com/file/d/1sZlz-z_cwHwPIVRi0Q9OTA9cokKLx4Wg/view?usp=sharing" rel="nofollow noreferrer">Link to the sample dataset</a></p>
<p>Updated dataset: <a href="https://drive.google.com/file/d/16r-hB3VhnlIG7XtZjtTynyyfqQETvqfm/view?usp=sharing" rel="nofollow noreferrer">Link to new dataset</a></p>
<p>The expected result should <strong>contain the difference between the last value and the first value of each 30-minute sample window</strong> (<strong>from the sample data shown below</strong>), that is:</p>
<pre><code> Item Value AgentStatus OriginalTimestamp AgentTimeStamp
0 Channel1.Device1.Tag1 847 good 2023-07-28T13:09:00.0098328+09:00 2023-07-28T13:09:00.0000000
1 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:00.0408696+09:00 2023-07-28T13:09:00.0000000
2 Channel1.Device1.Tag1 848 good 2023-07-28T13:09:05.0138770+09:00 2023-07-28T13:09:05.0000000
3 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:05.0454734+09:00 2023-07-28T13:09:05.0000000
4 Channel1.Device1.Tag1 849 good 2023-07-28T13:09:10.0073605+09:00 2023-07-28T13:09:10.0000000
5 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:10.0379516+09:00 2023-07-28T13:09:10.0000000
6 Channel1.Device1.Tag1 850 good 2023-07-28T13:09:15.0074263+09:00 2023-07-28T13:09:15.0000000
7 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:15.0387691+09:00 2023-07-28T13:09:15.0000000
8 Channel1.Device1.Tag1 851 good 2023-07-28T13:09:20.0176840+09:00 2023-07-28T13:09:20.0000000
9 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:20.0329268+09:00 2023-07-28T13:09:20.0000000
10 Channel1.Device1.Tag1 852 good 2023-07-28T13:09:25.0070191+09:00 2023-07-28T13:09:25.0000000
11 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:25.0384699+09:00 2023-07-28T13:09:25.0000000
12 Channel1.Device1.Tag1 853 good 2023-07-28T13:09:30.0109244+09:00 2023-07-28T13:09:30.0000000
13 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:30.0417249+09:00 2023-07-28T13:09:30.0000000
14 Channel1.Device1.Tag1 854 good 2023-07-28T13:09:35.0118763+09:00 2023-07-28T13:09:35.0000000
15 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:35.0429050+09:00 2023-07-28T13:09:35.0000000
16 Channel1.Device1.Tag1 855 good 2023-07-28T13:09:40.0027594+09:00 2023-07-28T13:09:40.0000000
17 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:40.0340476+09:00 2023-07-28T13:09:40.0000000
18 Channel1.Device1.Tag1 856 good 2023-07-28T13:09:45.0029277+09:00 2023-07-28T13:09:45.0000000
19 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:45.0336946+09:00 2023-07-28T13:09:45.0000000
20 Channel1.Device1.Tag1 857 good 2023-07-28T13:09:50.0153041+09:00 2023-07-28T13:09:50.0000000
21 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:50.0459796+09:00 2023-07-28T13:09:50.0000000
22 Channel1.Device1.Tag1 858 good 2023-07-28T13:09:55.0103680+09:00 2023-07-28T13:09:55.0000000
23 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:55.0412343+09:00 2023-07-28T13:09:55.0000000
24 Channel1.Device1.Tag1 859 good 2023-07-28T13:10:00.0095407+09:00 2023-07-28T13:10:00.0000000
25 Channel1.Device1.Tag2 0 good 2023-07-28T13:10:00.0395870+09:00 2023-07-28T13:10:00.0000000
26 Channel1.Device1.Tag1 860 good 2023-07-28T13:10:05.0069727+09:00 2023-07-28T13:10:05.0000000
27 Channel1.Device1.Tag2 0 good 2023-07-28T13:10:05.0374699+09:00 2023-07-28T13:10:05.0000000
28 Channel1.Device1.Tag1 861 good 2023-07-28T13:10:10.0113827+09:00 2023-07-28T13:10:10.0000000
29 Channel1.Device1.Tag2 0 good 2023-07-28T13:10:10.0431140+09:00 2023-07-28T13:10:10.0000000
30 Channel1.Device1.Tag1 862 good 2023-07-28T13:10:15.0024582+09:00 2023-07-28T13:10:15.0000000
31 Channel1.Device1.Tag2 0 good 2023-07-28T13:10:15.0338704+09:00 2023-07-28T13:10:15.0000000
32 Channel1.Device1.Tag2 0 good 2023-07-28T13:11:15.0338704+09:01 2023-07-28T13:11:15.0000000
33 Channel1.Device1.Tag2 0 good 2023-07-28T13:12:15.0338704+09:02 2023-07-28T13:12:15.0000000
34 Channel1.Device1.Tag2 0 good 2023-07-28T13:15:15.0338704+09:03 2023-07-28T13:15:15.0000000
35 Channel1.Device1.Tag2 0 good 2023-07-28T13:20:15.0338704+09:04 2023-07-28T13:20:15.0000000
36 Channel1.Device1.Tag2 0 good 2023-07-28T13:21:15.0338704+09:05 2023-07-28T13:21:15.0000000
37 Channel1.Device1.Tag2 0 good 2023-07-28T13:22:15.0338704+09:06 2023-07-28T13:22:15.0000000
38 Channel1.Device1.Tag2 0 good 2023-07-28T13:25:15.0338704+09:07 2023-07-28T13:25:15.0000000
39 Channel1.Device1.Tag2 878 bad 2023-07-28T13:30:15.0338704+09:08 2023-07-28T13:30:15.0000000
40 Channel1.Device1.Tag2 0 good 2023-07-28T13:31:15.0338704+09:09 2023-07-28T13:31:15.0000000
</code></pre>
<p>Expected result/calculation:</p>
<p>After sampling within 30 mins, the result will contain data from these two items as shown:
<a href="https://i.sstatic.net/XjLQj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XjLQj.png" alt="enter image description here" /></a></p>
<ul>
<li>For each Item (for example, <strong>Channel1.Device1.Tag2</strong>):
<strong>last value - first value of the 30-minute window</strong>, i.e.:</li>
</ul>
<p>Last Row Value (878):</p>
<pre><code>39 Channel1.Device1.Tag2 878 bad 2023-07-28T13:30:15.0338704+09:08 2023-07-28T13:30:15.0000000
</code></pre>
<p>First row value (0):</p>
<pre><code>1 Channel1.Device1.Tag2 0 good 2023-07-28T13:09:00.0408696+09:00 2023-07-28T13:09:00.0000000
</code></pre>
<p><strong>Expected result/ dataframe:</strong></p>
<p><a href="https://i.sstatic.net/xBNfm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xBNfm.png" alt="enter image description here" /></a></p>
<p>As seen, the values 878 and 3 correspond to Channel1.Device1.Tag2 and Channel1.Device1.Tag1, taking the difference (last value - first value): (878 - 0) and (850 - 847), whereas the other column values are retained.</p>
<p><strong>Update:</strong></p>
<p>The new dataset was run through the provided solution from mozway, and the results show a slight deviation (instead of taking the diff between 00:30:00 and 00:00:00) because of duplicate timestamp values for each item tag name. How can I resolve this, or how can I skip the duplicate value in the row?</p>
<p>I have to add this image, to explain:
<a href="https://i.sstatic.net/PyGK1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PyGK1.png" alt="enter image description here" /></a></p>
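<p>For reference, the direction I have been attempting looks like this (a sketch, assuming <code>AgentTimeStamp</code> is parsed to datetime first):</p>
<pre><code>import pandas as pd

df["AgentTimeStamp"] = pd.to_datetime(df["AgentTimeStamp"])
out = (
    df.set_index("AgentTimeStamp")
      .groupby("Item")["Value"]
      .resample("30min")
      .agg(lambda s: s.iloc[-1] - s.iloc[0] if len(s) else None)
      .reset_index()
)
</code></pre>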
| <python><pandas><dataframe> | 2023-07-31 11:13:25 | 2 | 369 | Nishad Nazar |
76,803,003 | 21,445,669 | How to bundle a program with auto-py-to-exe without changing PATH? | <p>I want to include an external program with my Python program that I'll be turning into an .exe. The program is Antiword, which is used by <code>textract</code> to parse .doc files. If I have Antiword on my machine and modify the PATH environment variable to point to it, textract will use it correctly. (as suggested in <a href="https://stackoverflow.com/questions/51727237/reading-doc-file-in-python-using-antiword-in-windows-also-docx">this SO post</a>).<br />
However, I don't want my users to download Antiword separately and manually modify PATH. Can I include something in my main.py file, or during the auto-py-to-exe process so that Antiword comes bundled with it? Another acceptable solution would be for the program to add a new variable to PATH titled <code>MY_PROGRAM_ANTIWORD_PATH</code> with the correct location relative to the .exe.</p>
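<p>The runtime-PATH variant I have in mind would look something like this near the top of <code>main.py</code> (a sketch; the <code>antiword</code> folder name is my assumption, and <code>sys._MEIPASS</code> is the unpack directory that PyInstaller-based bundles such as auto-py-to-exe set):</p>
<pre><code>import os
import sys

# folder holding the exe (or the PyInstaller unpack dir when bundled)
base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
os.environ["PATH"] += os.pathsep + os.path.join(base, "antiword")
</code></pre>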
| <python><text-extraction><auto-py-to-exe><antiword> | 2023-07-31 11:07:55 | 0 | 549 | mkranj |
76,802,748 | 11,028,689 | How to use encrypted TenSEAL ckksvectors in Tensorflow model? | <p>I am trying to encrypt my data using the TenSEAL library in Python by following a tutorial (<a href="https://github.com/OpenMined/TenSEAL/blob/main/tutorials/Tutorial%202%20-%20Working%20with%20Approximate%20Numbers.ipynb" rel="nofollow noreferrer">https://github.com/OpenMined/TenSEAL/blob/main/tutorials/Tutorial%202%20-%20Working%20with%20Approximate%20Numbers.ipynb</a>)</p>
<p>I have converted my arrays to lists so that the CKKS vectors work, and I run encryption on them like so:</p>
<pre><code>X_traintf = X_train.tolist()
X_testtf = X_train.tolist()
y_traintf = y_train.tolist()
y_testtf = y_train.tolist()
t_start = time()
enc_x_test = [ts.ckks_vector(ctx_eval, x) for x in X_testtf]
enc_y_test = [ts.ckks_vector(ctx_eval, y) for y in y_testtf]
t_end = time()
print(f"Encryption of the test-set took {int(t_end - t_start)} seconds")
</code></pre>
<p>Then I tried to use them as inputs for my TensorFlow model and got this error:</p>
<pre><code>model.fit(enc_x_train, enc_y_train,
epochs=epochs,
validation_data=(enc_x_test, enc_y_test))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[20], line 1
----> 1 model.fit(enc_x_train, enc_y_train,
2 epochs=epochs,
3 validation_data=(enc_x_test, enc_y_test))
File /usr/local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /usr/local/lib/python3.10/site-packages/keras/src/engine/data_adapter.py:1105, in select_data_adapter(x, y)
1102 adapter_cls = [cls for cls in ALL_ADAPTER_CLS if cls.can_handle(x, y)]
1103 if not adapter_cls:
1104 # TODO(scottzhu): This should be a less implementation-specific error.
-> 1105 raise ValueError(
1106 "Failed to find data adapter that can handle input: {}, {}".format(
1107 _type_name(x), _type_name(y)
1108 )
1109 )
1110 elif len(adapter_cls) > 1:
1111 raise RuntimeError(
1112 "Data adapters should be mutually exclusive for "
1113 "handling inputs. Found multiple adapters {} to handle "
1114 "input: {}, {}".format(adapter_cls, _type_name(x), _type_name(y))
1115 )
ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'tenseal.tensors.ckksvector.CKKSVector'>"}), (<class 'list'> containing values of types {"<class 'tenseal.tensors.ckksvector.CKKSVector'>"})
</code></pre>
<p>I have also tried to convert them into tensorflow tensors like so:</p>
<pre><code>X_train_tf = tf.convert_to_tensor(enc_x_train)
y_train_tf = tf.convert_to_tensor(enc_y_train)
X_test_tf = tf.convert_to_tensor(enc_x_test)
y_test_tf = tf.convert_to_tensor(enc_y_test)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[22], line 1
----> 1 X_train_tf = tf.convert_to_tensor(enc_x_train)
2 y_train_tf = tf.convert_to_tensor(enc_y_train)
3 X_test_tf = tf.convert_to_tensor(enc_x_test)
File /usr/local/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File /usr/local/lib/python3.10/site-packages/tensorflow/python/framework/constant_op.py:98, in convert_to_eager_tensor(value, ctx, dtype)
96 dtype = dtypes.as_dtype(dtype).as_datatype_enum
97 ctx.ensure_initialized()
---> 98 return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (<tenseal.tensors.ckksvector.CKKSVector object at 0x7f78b8273df0>) with an unsupported type (<class 'tenseal.tensors.ckksvector.CKKSVector'>) to a Tensor.
</code></pre>
<p>Is there any way for these CKKS vectors to be integrated into a TensorFlow model?</p>
| <python><tensorflow><keras><encryption><deep-learning> | 2023-07-31 10:31:12 | 0 | 1,299 | Bluetail |
76,802,576 | 8,452,469 | how to estimate pose of single marker in opencv-python 4.8.0? | <p>As shown in this <a href="https://stackoverflow.com/questions/74964527/attributeerror-module-cv2-aruco-has-no-attribute-dictionary-get">Question</a>, there are API changes in the aruco package.
I managed to detect the markers and draw them with drawDetectedMarkers by following the accepted answer, but I can't find 'estimatePoseSingleMarkers' among either the cv2.aruco or the cv2.aruco.ArucoDetector attributes.
I still get the error messages:</p>
<p><code>module 'cv2.aruco' has no attribute 'estimatePoseSingleMarkers'</code></p>
<p>or</p>
<p><code>'cv2.aruco.ArucoDetector' object has no attribute 'estimatePoseSingleMarkers'</code></p>
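<p>The workaround I keep seeing suggested is calling <code>cv2.solvePnP</code> per detected marker, roughly like this (a sketch; <code>marker_len</code>, <code>camera_matrix</code> and <code>dist_coeffs</code> are assumed to be known calibration values):</p>
<pre><code>import numpy as np
import cv2

# 3D corners of a square marker of side marker_len, centred at the origin
obj_points = np.array(
    [
        [-marker_len / 2,  marker_len / 2, 0],
        [ marker_len / 2,  marker_len / 2, 0],
        [ marker_len / 2, -marker_len / 2, 0],
        [-marker_len / 2, -marker_len / 2, 0],
    ],
    dtype=np.float32,
)
for c in corners:  # corners as returned by ArucoDetector.detectMarkers
    ok, rvec, tvec = cv2.solvePnP(obj_points, c.reshape(-1, 2), camera_matrix, dist_coeffs)
</code></pre>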
| <python><opencv><aruco> | 2023-07-31 10:06:53 | 1 | 304 | M lab |
76,802,512 | 13,921,399 | Pass array containing float singletons and arrays into a njit function | <p>Consider the following function:</p>
<pre class="lang-py prettyprint-override"><code>from numba import njit
import numpy as np
def get_rate(arr: np.ndarray) -> float:
v = 1
for i in arr:
if isinstance(i, float):
v *= (1 - i / 100)
else:
j = np.random.randint(0, len(i))
v *= (1 - i[j] / 100)
return v
</code></pre>
<p>The function iterates over an array and cumulatively applies either an exact rate or a rate picked at random from a collection.</p>
<p>Unfortunately, I have the following input array:</p>
<pre class="lang-py prettyprint-override"><code>a = np.array([1.0, np.array([2.0, 0.0, 2.0, 3.0]), 3.0, 2.0, 4.0, 10.0, np.array([4.0, 3.0, 1.5, 2.0])])
</code></pre>
<p>You can assume that each entry is either a float or an array with length four. The function works fine as it is. However, as my whole code infrastructure is built upon njit, I have to rely on <code>numba</code>'s just-in-time compiler <code>njit</code> for this function as well. But I do not know whether <code>njit</code> can handle arrays that contain float singletons and array data types. It would also be fine to convert the input array to a Dict.</p>
<pre class="lang-py prettyprint-override"><code>from numba.typed import Dict
d = Dict.empty(key_type=np.int64, value_type=...)
</code></pre>
<p>My solution so far:</p>
<p>I convert the array to a matrix, where I zip each row according to the longest row. Then, I can just use the function <code>get_rate</code> decorated by <code>njit</code>.</p>
<p>Matrix conversion:</p>
<pre class="lang-py prettyprint-override"><code>ls = list(map(lambda v: list(v) if isinstance(v, np.ndarray) else [v], list(a)))
a = np.array(list(zip_longest(*ls, fillvalue=np.nan))).T
a = pd.DataFrame(a).fillna(method="ffill", axis=1).values
</code></pre>
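<p>On top of that padded matrix, the decorated function I am aiming for would be a sketch like this (every row has the same width after the forward-fill, and a duplicated singleton picks itself at random, so the result should be unchanged):</p>
<pre class="lang-py prettyprint-override"><code>@njit
def get_rate_matrix(mat):
    v = 1.0
    for i in range(mat.shape[0]):
        # each row holds four candidates; singletons are simply repeated
        j = np.random.randint(0, mat.shape[1])
        v *= (1 - mat[i, j] / 100)
    return v
</code></pre>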
<p>Can I somehow specify the <code>dtype</code> argument to make <code>numba</code> accept such arrays with different data types? Is there a pure numba approach to get the same result?</p>
| <python><arrays><numpy><numba> | 2023-07-31 09:58:38 | 0 | 1,811 | ko3 |
76,802,374 | 9,561,443 | while doing subset sum whose sum equals to B i am not able to get correct result | <p>The code is very simple: I am either taking or not taking an element from array A, and with <code>left</code> I keep track of the remaining sum. It is an <em>InterviewBit</em> problem.</p>
<p><strong>Subset Sum Problem!</strong></p>
<pre><code>class Solution:
def solve(self, A, B):
dp = []
for _ in range(len(A)+1):
dp.append([-1]*(B+1))
def rec(index,left):
if left < 0 or index==len(A):
return False
elif left == 0:
return True
if dp[index][left] != -1:
return dp[index][left]
dp[index][left] = rec(index+1,left) or rec(index+1,left-A[index])
return dp[index][left]
if rec(0,B):
return 1
else:
return 0
</code></pre>
<p>Why is this code giving wrong output? If left < 0 or index == n, then I return False. But the one below is correct:</p>
<pre><code>class Solution:
def solve(self, A, B):
dp = [[-1 for _ in range(B+1)] for _ in range(len(A)+1)]
def rec(index, left):
if index == len(A):
if left == 0:
return True
else:
return False
if dp[index][left] != -1:
return dp[index][left]
# Excluding the current element
exclude = rec(index + 1, left)
# Including the current element
include = False
if left - A[index] >= 0:
include = rec(index + 1, left - A[index])
dp[index][left] = exclude or include
return dp[index][left]
return int(rec(0, B))
</code></pre>
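<p>Putting my understanding into a sketch: the first version fails exactly when left == 0 coincides with index == len(A), because the False branch is checked first. Reordering the base cases should give:</p>
<pre><code>def solve(A, B):
    dp = [[-1] * (B + 1) for _ in range(len(A) + 1)]

    def rec(index, left):
        if left == 0:                      # success must win, even at index == len(A)
            return True
        if left < 0 or index == len(A):
            return False
        if dp[index][left] == -1:
            dp[index][left] = rec(index + 1, left) or rec(index + 1, left - A[index])
        return dp[index][left]

    return 1 if rec(0, B) else 0
</code></pre>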
| <python><recursion><dynamic-programming> | 2023-07-31 09:41:58 | 1 | 1,017 | Sanjay |
76,801,901 | 5,868,293 | Calculate the mass of a distribution around a point, python | <p>I have the following data:</p>
<pre><code>ll = [25.553885868617463,
1.4285714285714288,
5.0,
14.142857142857142,
2.714285430908202,
-4.428571428571429,
3.428571428571429,
2.8571428571428568,
5.857142857142858,
-2.0,
8.571428571428573,
1.4285714285714288,
1.857142333984374,
21.714285714285715,
3.1428571428571423,
2.428571428571427,
-0.2857142857142856,
-3.0,
4.142857687813894,
0.7142857142857135,
0.714285507202149,
-1.9999995858328674,
4.857142464773997,
2.8571428571428577,
-6.714285714285714,
3.57142848423549,
15.999999901907785,
0.14285714285714413,
-3.0,
0.5687830243791847,
7.857142900739401,
-3.0,
9.0,
2.428571428571427,
2.0000001634870266,
0.7999999999999998,
-5.7142857142857135,
3.1428571428571423,
0.14285714285714235,
22.5,
18.571428527832033,
2.7142857142857135,
0,
3.1428571428571423,
13.142857142857146,
10.428571428571427,
30.71428684779576,
0,
0.2857140350341787,
3.571428571428571,
2.0,
24.428570175170897,
2.428571428571429,
-0.3333333333333339,
4.2857142857142865,
-8.000000216166178,
15.57142857142857,
2.2857142857142856,
8.71428565979004,
0.8571428571428577,
2.1428570447649276,
1.0,
5.000000991821288,
4.714285714285715,
6.0,
2.8571428571428577,
1.6666666666666679,
1.9987989153180798,
12.714285714285715,
9.85714340209961,
7.71428658621652,
-5.857142857142858,
15.857142857142858,
4.428571428571429,
0.5676193237304688,
1.2857142857142847,
0.14285705566406248,
3.428570938110351,
5.142857142857142,
-1.2857142857142856,
-1.0,
11.714285714285715,
-0.7142857142857144,
0.714285888671875,
-1.0,
9.428571428571429,
4.428571428571429,
-2.428571428571429,
-20.571428571428573,
4.0,
1.1428571428571432,
2.2857142857142847,
19.0,
15.142857142857142,
5.571428451538086,
7.428571428571427,
1.0,
4.285714481898715,
3.7142853546142582,
-3.7142854309082036]
</code></pre>
<p>I can create the histogram</p>
<pre><code>import pandas as pd
import plotly.express as px
px.histogram(pd.DataFrame(ll, columns=['val']), x='val', nbins=100)
</code></pre>
<p><a href="https://i.sstatic.net/3CgLs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3CgLs.png" alt="enter image description here" /></a></p>
<p>How can I calculate the mass/density of the distribution around a point?</p>
<p>In other words, I want to be able to calculate the mass <strong>between the 2 red lines</strong>, for different <code>thr</code> &<code>centre</code>:</p>
<pre><code>thr = 0.5
centre = 0
fig = px.histogram(pd.DataFrame(ll, columns=['val']), x='val', nbins=100)
fig.add_vline(x=centre+thr, line_color='red')
fig.add_vline(x=centre-thr, line_color='red')
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/z5em5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z5em5.png" alt="enter image description here" /></a></p>
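<p>To be concrete about "mass", this is the quantity I mean (a small numpy sketch):</p>
<pre><code>import numpy as np

vals = np.asarray(ll)
inside = np.abs(vals - centre) <= thr
mass = inside.sum()       # count of samples between the two red lines
density = inside.mean()   # the same thing as a fraction of all samples
</code></pre>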
| <python><pandas><distribution> | 2023-07-31 08:34:01 | 3 | 4,512 | quant |
76,801,844 | 9,768,260 | How to activate the host's python virtual environment in docker container | <p>I am using a volume mount to read and write the host's Python venv directly in a docker container. If the host has already activated this venv, do I need to activate it again in the docker container?</p>
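<p>For concreteness, a sketch of the setup (paths are placeholders): the venv is mounted and its interpreter is called directly, since activation mostly just prepends the venv's <code>bin</code> directory to <code>PATH</code>:</p>
<pre><code>docker run -v /host/project/.venv:/opt/venv my-image /opt/venv/bin/python app.py
</code></pre>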
| <python><docker><python-venv> | 2023-07-31 08:23:21 | 2 | 7,108 | ccd |
76,801,838 | 3,294,994 | python: type hinting for sorted(set(lst)) | <p>How do I type-hint <code>T</code> to make the type checkers happy? <code>x</code> needs to be comparable and hashable, but not sure how to express that.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
T = TypeVar("T")
def f(x: list[T]) -> list[T]:
return sorted(set(x))
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ mypy script.py
script.py:7: error: Value of type variable "SupportsRichComparisonT" of "sorted" cannot be "T" [type-var]
Found 1 error in 1 file (checked 1 source file)
$ pyright script.py
WARNING: there is a new pyright version available (v1.1.317 -> v1.1.318).
Please install the new version or set PYRIGHT_PYTHON_FORCE_VERSION to `latest`
script.py
script.py:7:19 - error: Argument of type "set[T@f]" cannot be assigned to parameter "__iterable" of type "Iterable[SupportsRichComparisonT@sorted]" in function "sorted"
"set[T@f]" is incompatible with "Iterable[SupportsRichComparisonT@sorted]"
TypeVar "_T_co@Iterable" is covariant
Type "T@f" cannot be assigned to type "SupportsRichComparison"
Type "T@f" cannot be assigned to type "SupportsRichComparison"
"object*" is incompatible with protocol "SupportsDunderLT[Any]"
"object*" is incompatible with protocol "SupportsDunderGT[Any]" (reportGeneralTypeIssues)
</code></pre>
<p><code>Python 3.9.16</code>, <code>mypy 1.2.0</code>, <code>pyright 1.1.317</code></p>
<p>I feel like I want a compound type that is <code>Hashable</code> and <code>Comparable</code>... using <code>Protocol</code>...?</p>
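<p>The <code>Protocol</code> I am imagining would be a sketch along these lines (set membership only needs <code>__hash__</code>, which <code>object</code> already provides, so the bound mainly has to satisfy <code>sorted</code>):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Protocol, TypeVar

class Sortable(Protocol):
    def __lt__(self, other: Any) -> bool: ...

T = TypeVar("T", bound=Sortable)

def f(x: list[T]) -> list[T]:
    return sorted(set(x))
</code></pre>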
| <python><python-typing> | 2023-07-31 08:22:02 | 1 | 846 | obk |
76,801,796 | 336,527 | How to make pd.DataFrame print list values in cells as a proper python list? | <pre><code>> print(pd.DataFrame({'a': [['x', 'y']]}))
a
0 [x, y]
</code></pre>
<p>Even though the DataFrame cell contains a list just as expected, when printed, it is represented as <code>[x, y]</code> instead of <code>['x', 'y']</code>. This happens even if I use <code>repr()</code>.</p>
<p>Can I tell pandas to print list values properly, e.g.:</p>
<pre><code> a
0 ["x", "y"]
</code></pre>
<p>And why does this happen anyway? Python has a convention that <code>repr</code> should be as close to reversible as possible (i.e., <code>ast.literal_eval(repr(x))</code> should work with minimal tweaks whenever possible). This behavior is quite confusing and makes debugging unrelated errors very tricky (I actually thought there was a bug in my code causing the list to be stored in the DataFrame as a string).</p>
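<p>The closest I have found is mapping <code>repr</code> over the cells purely for display (a sketch; I understand <code>applymap</code> is being replaced by <code>DataFrame.map</code> in newer pandas):</p>
<pre><code>> print(pd.DataFrame({'a': [['x', 'y']]}).applymap(repr))
            a
0  ['x', 'y']
</code></pre>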
| <python><pandas> | 2023-07-31 08:16:30 | 2 | 52,663 | max |
76,801,733 | 8,219,760 | Take non-null value from one of several columns in polars | <p>Given the dataframe</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"a": [1, None, None],
"b": [None, 2, None],
"c": [None, None, 3],
"non_relevant": [1, 2, 3],
})
</code></pre>
<p>I wish to produce the result</p>
<pre class="lang-py prettyprint-override"><code>result = pl.DataFrame(
{
"a": [1, 2, 3],
"non_relevant": [1, 2, 3],
}
)
</code></pre>
<p>With knowledge of</p>
<pre class="lang-py prettyprint-override"><code>remapping = {"a": ("a", "b", "c"), "other_key": ("aN", "bN", "cN")}
</code></pre>
<p>I would like this function to throw if a combination ever results in multiple non-null values. In my use case, there are a lot of columns to be combined this way.</p>
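<p>For the simple part I believe <code>pl.coalesce</code> does the combining (sketch below); it is the throw-on-multiple-non-null validation I am unsure how to express:</p>
<pre class="lang-py prettyprint-override"><code>result = df.select(
    pl.coalesce("a", "b", "c").alias("a"),
    "non_relevant",
)
</code></pre>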
| <python><python-polars> | 2023-07-31 08:04:33 | 0 | 673 | vahvero |
76,801,641 | 4,451,315 | keep elements in list of lists which appear at least twice | <p>Say I start with</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
ser = pl.Series([[1,2,1,4], [3, 3, 3, 4], [1,2,3,4]])
</code></pre>
<p>How can I filter each list so it only has elements which appear at least twice?</p>
<p>Expected output:</p>
<pre class="lang-py prettyprint-override"><code>shape: (3,)
Series: '' [list[i64]]
[
[1, 1]
[3, 3, 3]
[]
]
</code></pre>
<p>Is there a way to do this in polars, without using <code>apply</code>?</p>
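<p>The closest expression-only attempt I have is this sketch with <code>list.eval</code> (I am not certain it behaves correctly in all cases, hence the question):</p>
<pre class="lang-py prettyprint-override"><code>out = ser.list.eval(pl.element().filter(pl.element().is_duplicated()))
</code></pre>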
| <python><python-polars> | 2023-07-31 07:50:40 | 4 | 11,062 | ignoring_gravity |
76,801,637 | 250,003 | This code works on PyCharm but not on VSCode | <p>This question is 3 years old: <a href="https://stackoverflow.com/questions/64962922/code-is-working-in-pycharm-but-not-in-visual-studio-code">Code is working in pyCharm but not in Visual Studio Code</a></p>
<p>So, maybe this is the reason why it doesn't work anymore. Anyway, I have this project structure:</p>
<p><a href="https://i.sstatic.net/gBFsI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gBFsI.png" alt="enter image description here" /></a></p>
<p>It works perfectly when I run "main.py" under "01SpaceInvaders", but the same code in vscode returns this error:</p>
<pre class="lang-none prettyprint-override"><code>Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
File "C:\Users\salva\Documents\VS Code\pysandbox\game\01SpaceInvaders\main.py", line 5, in <module>
from game.sdfengine.locals import *
ModuleNotFoundError: No module named 'game'
</code></pre>
<p>For sure, there is a configuration missing in my VSCode, but I don't know what. Any clue?</p>
<p>This is how I execute the code from VSCode:</p>
<ol>
<li>I open a new terminal</li>
<li>Go to <code>01SpaceInvaders</code></li>
<li><code>py main.py</code></li>
</ol>
<p>Following the comments, it seems to be a VSCode configuration problem:</p>
<p>Where can I find the file to edit and a guide to understand how to edit it?</p>
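<p>From what I have gathered so far, the file would be <code>.vscode/settings.json</code> in the workspace root, with something like this (a sketch; the assumption is that the workspace folder is <code>pysandbox</code>, the directory containing the <code>game</code> package, and that a new terminal is opened afterwards):</p>
<pre><code>{
    "terminal.integrated.env.windows": {
        "PYTHONPATH": "${workspaceFolder}"
    }
}
</code></pre>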
| <python><visual-studio-code><pycharm> | 2023-07-31 07:50:22 | 2 | 715 | Salvatore Di Fazio |
76,801,460 | 2,201,789 | python error unrecognized arguments when run specific function | <p>I have a script, and I just want to run a specific function within it:</p>
<pre><code>python testfile.py _test_one_xml --password Test123! --username TESTER
</code></pre>
<p>The cmd returned error:</p>
<pre><code>testfile.py: error: unrecognized arguments: _test_one_xml
</code></pre>
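<p>For clarity, what I expected was some positional dispatch like the following sketch (hypothetical; the script below defines no positional argument at all, which I suspect is why argparse rejects <code>_test_one_xml</code>):</p>
<pre><code>parser.add_argument('function', nargs='?', help="name of the function to run")
args = parser.parse_args()
if args.function:
    globals()[args.function]()   # hypothetical dispatch by name
</code></pre>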
<p>Script:</p>
<pre><code> #!/usr/bin/python3
"""
ABSTRACT: Package for load tickets
COMPILER: Python3.4
"""
import os # import python package operating system
import sys # import python package system
import datetime # import python package date time
import shutil # import python package shell utilities
import glob # import python package glob
import argparse # import python package argument parser
import requests # import python package for l1tc requests
import time # import python package time
import xml.etree.ElementTree as ET # import python package for xml handling
from requests.packages import urllib3 # import python package urllib3 for request
# suspress ssl verification warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# main method for loading tickets to prjABC
def _load_tickets(): # method definition
    # parse command line arguments
    parser = argparse.ArgumentParser(description="Loads tickets to prjABC.", formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('-p', '--password', help="set password", default='tester!')
    parser.add_argument('-u', '--username', help="set user name", default='tester')
    args = parser.parse_args()
    # check if user name or password are valid
    if args.username == '' or args.password == '':
        print('Username or password is missing.')
        sys.exit()
    # load tickets
    _load_tickets_from_prjABC_directory(args.username, args.password)
# method loads prjABC test plans for check if product and version in ticket matches
# then all test suites and test cases will be loaded to get STP-DB ID
# if Test ID from ticket and STP-DB ID in prjABC are equal ticket is valid
def _load_tickets_from_prjABC_directory(sUserName, sPassword):
    # local variables
    sDirTransfer = os.path.normpath(os.path.join('\\\\RSINT.NET', 'DATA', 'MU', 'SATURN', 'PROJECT', '1ES', 'EMIL', 'SW', 'TEST', 'Doc', 'STP', 'transfer'))
    sDirprjABCNew = os.path.normpath(os.path.join(sDirTransfer, 'prjABC', 'new'))
    sDirprjABCInProgress = os.path.normpath(os.path.join(sDirTransfer, 'prjABC', 'inprogress'))
    sDirprjABCAssessed = os.path.normpath(os.path.join(sDirTransfer, 'prjABC', 'assessed'))
    sDirprjABCFailed = os.path.normpath(os.path.join(sDirTransfer, 'prjABC', 'failed'))
    sDirprjABCInvalid = os.path.normpath(os.path.join(sDirTransfer, 'prjABC', 'invalid'))
    sUrlServer = 'https://test123.net/resources/'
    sProjectArea = 'Project ONE'
    sUrlProjectArea = 'https://test123.netProjectArea/_aruiy997Fiyr3213fef'
    sUrlTestPlan = sUrlServer + sProjectArea + '/testplan'
    lprjABCTestPlanName = []
    lprjABCTestPlanID = []
    lprjABCTestCaseExecutionRecord = []
    dicprjABCTestCaseList = {}
    dicprjABCTestCaseID = {}
    dicprjABCTestCaseName = {}
    iIdx = 0
    bValid = True
    # print info
    print('Dir prjABC New: '+sDirprjABCNew)
    print('Load prjABC Test Plan Names and IDs.')
    lProducts = ['A', 'C', 'Z']
    for sProduct in lProducts:
        # create request for single product
        sUrlProduct = sUrlTestPlan + '?fields=feed/entry/content/testplan[category/@term="Instrument" and category/@value="' + sProduct + '" and state!="com.ibm.prjABC.planning.common.retired"]/*'
        # load prjABC test plan names and ids
        lProductprjABCTestPlanName = []
        lProductprjABCTestPlanID = []
        lProductprjABCTestPlanName, lProductprjABCTestPlanID = _get_all_test_plans(sUrlProduct, sUserName, sPassword)
        # append results to global test plan lists
        lprjABCTestPlanID = lprjABCTestPlanID + lProductprjABCTestPlanID
        lprjABCTestPlanName = lprjABCTestPlanName + lProductprjABCTestPlanName
    # initialize lists containing test cases
    for name in lprjABCTestPlanName:
        dicprjABCTestCaseList[name] = []
        iIdx += 1
    # loop over all tickets in prjABC new directory
    for sTicket in glob.glob(sDirprjABCNew + os.sep + "*.tic"):
        # get modification date of ticket and current time
        sTimeMod = time.ctime(os.path.getmtime(sTicket))
        sTimeMod = sTimeMod.split(' ')
        sTimeMod = sTimeMod[3]
        sTimeNow = str(datetime.datetime.now())
        sTimeNow = sTimeNow.split(' ')
        sTimeNow = sTimeNow[1].split('.')
        sTimeNow = sTimeNow[0]
        # skip current ticket if modification time is now
        if sTimeMod == sTimeNow:
            continue
        print('Ticket: '+sTicket)
        bValid = False
        # load values from current ticket
        dicTicket = _get_ticket_values(sTicket)
        # check if result file is a memory file
        if dicTicket['File'].endswith('.mem') or dicTicket['File Type'] == 'Memory File':
            print('Skip ticket containing memory data.')
        #elif int(dicTicket['Version Sideline']) > 99:
        #    print('Skip ticket for release sidelines.')
        elif dicTicket['ConfTest Version'] != '-':
            print('Skip ticket from Confidence Test.')
        else:
            # set test plan name
            sprjABCTestPlanName = dicTicket['Product'] + ' ' + dicTicket['Version'] + ' MU'
            sprjABCTestPlanID = ''
            print('prjABC Test Plan Name: ' + sprjABCTestPlanName)
            print('Searching for Test Plan \"' + sprjABCTestPlanName + '\".')
            iIdx = 0
            # loop over all test plans
            for name in lprjABCTestPlanName:
                # check if current test plan matches product and version
                if name == sprjABCTestPlanName:
                    # get current test plan id
                    sprjABCTestPlanID = lprjABCTestPlanID[iIdx]
                    # check if test cases for current test plan was not loaded
                    if len(dicprjABCTestCaseList.get(name)) == 0:
                        lTestSuites = _get_all_test_suites_from_test_plan(sprjABCTestPlanID, sUserName, sPassword)
                        for TestSuites in lTestSuites:
                            print('Test Suites: ' + TestSuites)
                            lCases = _get_all_test_cases_from_test_suite(TestSuites, sUserName, sPassword)
                            for sCase in lCases:
                                dicprjABCTestCaseList[name].append(sCase)
                        lCases = _get_all_test_cases_from_test_plan(sprjABCTestPlanID, sUserName, sPassword)
                        for sCase in lCases:
                            dicprjABCTestCaseList[name].append(sCase)
                iIdx += 1
            # check if test cases for current test plan was not loaded
            if not dicprjABCTestCaseList.get(sprjABCTestPlanName):
                print('Test Plan \"' + sprjABCTestPlanName + '\" was not found.')
            else:
                print('Test Plan \"' + sprjABCTestPlanName + '\" was found.')
                print('Searching for Test Case with STP-ID \"' + dicTicket['Test ID'] + '\" (' + str(int(dicTicket['Test ID'], 16)) + ').')
                # loop over all test cases from current test plan
                for sTC in dicprjABCTestCaseList.get(sprjABCTestPlanName):
                    # check if current test case was not loaded
                    if not dicprjABCTestCaseID.get(sTC):
                        # load stp-db id and name from current test case
                        iprjABCSTPID, sprjABCTestName = _get_all_infos_from_test_case(sTC, sUserName, sPassword)
                        # save stp-db id and name from current test case
                        dicprjABCTestCaseID [sTC] = iprjABCSTPID
                        dicprjABCTestCaseName[sTC] = sprjABCTestName
                    print('Test Case: '+dicprjABCTestCaseName.get(sTC)+', Test ID: '+str(dicprjABCTestCaseID.get(sTC)))
                    # check if stp-db id is test id from ticket
                    if dicprjABCTestCaseID.get(sTC) == str(int(dicTicket['Test ID'], 16)):
                        print('Test Case was found.')
                        bValid = True
                        sState, sContent = _get_ticket_file_content(os.path.normpath(os.path.join(sDirprjABCNew, dicTicket['File'])))
                        sState = _assess_special_tests(dicTicket['Test ID'], sState, sContent)
                        _test_one_xml(sprjABCTestPlanID, sTC, dicTicket, sState, sContent, sUrlServer, sProjectArea, sUrlProjectArea, sUserName, sPassword)
        # set tag
        sTag = dicTicket['Version Major'] + '.' + dicTicket['Version Minor'] + '.' + dicTicket['Version Revision'] + '.' + dicTicket['Version Sideline']
        # use version as tag for all not ES-MAIN devices
        if dicTicket['Version Major'] == '' and dicTicket['Version Minor'] == '' and dicTicket['Version Revision'] == '' and dicTicket['Version Sideline'] == '' and not dicTicket['Version'] == '':
            sTag = dicTicket['Version']
        # check is ticket is invalid
        if bValid == False:
            # set destination directory
            sDirDst = os.path.normpath(os.path.join(sDirprjABCInvalid, dicTicket['Product'], sTag))
            # create directory structure
            _create_directory_structure(sDirprjABCInvalid, dicTicket['Product'], sTag)
        # check if ticket is valid
        else:
            # check state of ticket to determine output directory
            if sState == 'com.ibm.prjABC.execution.common.state.inprogress':
                # set destination directory
                sDirDst = os.path.normpath(os.path.join(sDirprjABCInProgress, dicTicket['Product'], sTag))
                # create directory structure
                _create_directory_structure(sDirprjABCInProgress, dicTicket['Product'], sTag)
            # check state of ticket to determine output directory
            if sState == 'com.ibm.prjABC.execution.common.state.passed':
                # set destination directory
                sDirDst = os.path.normpath(os.path.join(sDirprjABCAssessed, dicTicket['Product'], sTag))
                # create directory structure
                _create_directory_structure(sDirprjABCAssessed, dicTicket['Product'], sTag)
            # check state of ticket to determine output directory
            if sState == 'com.ibm.prjABC.execution.common.state.failed':
                # set destination directory
                sDirDst = os.path.normpath(os.path.join(sDirprjABCFailed, dicTicket['Product'], sTag))
                # create directory structure
                _create_directory_structure(sDirprjABCFailed, dicTicket['Product'], sTag)
        # copy ticket and result file to destination directory
        shutil.copy(sTicket, sDirDst)
        shutil.copy(os.path.normpath(os.path.join(sDirprjABCNew, dicTicket['File'])), sDirDst)
        # delete ticket and result file in source directory
        try:
            os.remove(sTicket)
            os.remove(os.path.normpath(os.path.join(sDirprjABCNew, dicTicket['File'])))
        except OSError:
            print("Error: Could not delete file. Maybe it's used by another process.")
# method writes xml with test case result data and uploads it to prjABC
def _test_one_xml(sTestPlan, sTestCase, dicTicket, sState, sContent, sUrlServer, sProjectArea, sUrlProjectArea, username, password):
    username = 'tester'
    password = 'tester!'
    # determine current date and time and format it
    sDateTime = str(datetime.datetime.now())
    sDateTime = sDateTime[:-3] + 'Z'
    sDateTime = sDateTime.replace(' ', 'T')
    # set content for xml file
    sCreator = username.upper()
    sOwner = dicTicket['Tester'].upper()
    sMachine = os.environ['COMPUTERNAME']
    sTestCaseExecutionRecord = _get_test_case_execution_record(sTestPlan, sTestCase, dicTicket, sUrlServer, sProjectArea, username, password)
    # check if test case execution record was found
    if sTestCaseExecutionRecord is None:
        print("Test Case Record was not found.")
        return
    # get title and test script
    sTitle = _get_title_from_test_case_execution_record(sTestCaseExecutionRecord, username, password)
    sTestScript = _get_test_script_from_test_case(sTestCase, username, password)
    sDescription = ''
    sExceptedResult = ''
    # use computer name from ticket if exist
    if not dicTicket['PC Name'] == '':
        sMachine = dicTicket['PC Name']
    # set tag from ticket data and get build record
    sTag = 'ES-MAIN_' + dicTicket['Version Major'] + '.' + dicTicket['Version Minor'] + '.' + dicTicket['Version Revision'] + '.' + dicTicket['Version Sideline']
    sBuildRecord = _get_build_record(sUrlServer, sProjectArea, sUrlProjectArea, sTag, username, password)
    # check if test script exists
    if not sTestScript == '':
        # get description and expected result from test script
        sDescription, sExceptedResult = _get_description_from_test_script(sTestScript, username, password)
    # test case results
    url = sUrlServer + sProjectArea + '/executionresult'
    headers = {'Content-Type':'application/x-www-form-urlencoded'}
    fobj = open("testcaseresult.xml", "r")
    xml = fobj.read()
    fobj.close()
    print(xml)
    r = requests.post(url, headers=headers, data=xml, auth=(username, password), verify=False)
    print (r.content)
    # remove test case results
    if os.path.exists("testcaseresult.xml"):
        os.remove("testcaseresult.xml")
def _main(): # main method definition
    _load_tickets() # call method _load_tickets
if __name__ == '__main__': # check if load_tickets.py is called by repl
    _main() # call main method
</code></pre>
| <python><python-3.x> | 2023-07-31 07:24:00 | 0 | 1,201 | user2201789 |
76,801,382 | 10,546,710 | Parsing data from bash table with empty fields | <p>I am currently trying to parse some data from bash tables, and I found strange behavior when parsing data where some columns are empty. For example,</p>
<p>I have data like this:</p>
<pre><code>containerName ipAddress memoryMB name numberOfCpus status
--------------- --------------- ---------- ------- -------------- ----------
TEST_VM 192.168.150.111 8192 TEST_VM 4 POWERED_ON
</code></pre>
<p>and sometimes like this</p>
<pre><code>containerName ipAddress memoryMB name numberOfCpus status
--------------- ----------- ---------- ---------------------- -------------- -----------
TEST_VM_second 3072 TEST_VM_second_renamed 1 POWERED_OFF
</code></pre>
<p>I tried with Python and with bash, but got the same results. I need the "name" data, but when I use bash,
for example <code>awk '{print $4}'</code>, the first table prints the expected result:</p>
<pre><code>name
-------
TEST_VM
</code></pre>
<p>but in the second table it prints:</p>
<pre><code>name
----------------------
1
</code></pre>
<p>The same results with Python:</p>
<pre><code>df_info = pd.read_table(StringIO(table), delim_whitespace=True)
df_info = df_info.drop(0)
pd.set_option('display.max_colwidth', None)
print(df_info['name'], df_info['containerName'])
</code></pre>
<p>Output:</p>
<pre><code>
1 TEST_VM
Name: name, dtype: object 1 TEST_VM
Name: containerName, dtype: object
1 1
Name: name, dtype: object 1 TEST_VM_second
Name: containerName, dtype: object
</code></pre>
<p>Maybe someone knows how to handle the case where the ipAddress field is empty?</p>
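<p>One approach that seems to handle the empty field, sketched under the assumption that the columns always stay aligned with the dashed ruler line, is to split on fixed column widths instead of whitespace, e.g. with <code>pandas.read_fwf</code>:</p>
<pre><code>import pandas as pd
from io import StringIO

table = """containerName   ipAddress        memoryMB   name                    numberOfCpus   status
--------------- ---------------  ---------- ----------------------  -------------- -----------
TEST_VM_second                   3072       TEST_VM_second_renamed  1              POWERED_OFF
"""

# read_fwf infers the column boundaries from the whitespace layout, so an
# empty ipAddress cell stays in its own column instead of shifting the
# later fields to the left
df_info = pd.read_fwf(StringIO(table))
df_info = df_info.drop(0)  # drop the dashed separator row
print(df_info['name'].iloc[0])  # TEST_VM_second_renamed
</code></pre>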
| <python><pandas><awk> | 2023-07-31 07:10:40 | 3 | 334 | robotiaga |
76,801,327 | 1,652,954 | How to read { and ( from .ini configs file | <p>I have the below-posted password, and it is saved in a <code>.ini</code> config file. The following characters are marked/highlighted in red: <code>(</code> and <code>{</code>.
How can I read them?</p>
<p>Here is how I try to read the <code>.ini</code> file:</p>
<pre><code>config = configparser.ConfigParser()
config.read('configs.ini')
</code></pre>
<p><strong>configs.ini</strong>:</p>
<pre><code>CLIENT_SECRET = P^L|Tf5x&9_2TIStIM({IdN^hObZ/-v2{KB(3
</code></pre>
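<p>For reference, a minimal sketch of reading such a value, assuming the file gets a section header (the <code>[credentials]</code> name below is hypothetical), since <code>configparser</code> refuses files without one, and with interpolation disabled so that <code>%</code> sequences in secrets can never trip it up (<code>(</code> and <code>{</code> themselves need no escaping):</p>
<pre><code>import configparser

# configs.ini (note the required section header):
# [credentials]
# CLIENT_SECRET = P^L|Tf5x&9_2TIStIM({IdN^hObZ/-v2{KB(3

config = configparser.ConfigParser(interpolation=None)
config.read('configs.ini')
print(config['credentials']['CLIENT_SECRET'])
</code></pre>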
| <python><python-3.x><ini> | 2023-07-31 07:02:31 | 2 | 11,564 | Amrmsmb |
76,801,264 | 7,212,273 | Can the same name fields of sub-class and parent-class exist independently in python? | <p>If this is possible, how do I access the parent class's same-name field? Please complete the code.</p>
<pre><code># python 2/3
class Parent:
def __init__(self):
self.x = 1
class Child(Parent):
def __init__(self):
super(Child, self).__init__()
self.x = 2
def getX(self):
# how to code here ?
super_x = None
print("super.x={} Child.x={}".format(super_x, self.x))
Child().getX()
</code></pre>
<p>As a contrast, I also post the following java code snippet, and it can correctly run.</p>
<pre><code>// java code
public class Parent {
int x=1;
}
public class Child extends Parent {
int x = 2;
void getX() {
System.out.println("super.x=" + super.x + " Child.x=" + x);
}
public static void main(String[] args) {
// output "super.x=1 Child.x=2"
(new Child()).getX();
}
}
</code></pre>
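<p>For what it's worth, Python instance attributes live in a single per-object <code>__dict__</code>, unlike Java fields, so <code>self.x = 2</code> in <code>Child.__init__</code> overwrites the value that <code>Parent.__init__</code> stored; there is no separate parent slot left to read. A quick check (a sketch demonstrating the overwrite, not a way to recover the old value):</p>
<pre><code>class Parent:
    def __init__(self):
        self.x = 1

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()  # sets self.x = 1 ...
        self.x = 2                     # ... then overwrites the same attribute

c = Child()
print(c.__dict__)  # {'x': 2} -- one attribute only, the parent's value is gone
</code></pre>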
| <python><inheritance> | 2023-07-31 06:50:25 | 2 | 305 | yichudu |
76,801,024 | 10,366,334 | Why does `recv()` of the connection from python multiprocessing block? | <p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pipe, Process
def f(conn):
conn.close()
if __name__ == "__main__":
c1, c2 = Pipe()
p = Process(target=f, args=(c1,))
p.start()
print(111)
c2.send(123)
print(222)
print(c2.poll(5))
print(333)
c2.recv()
print(444)
p.join()
</code></pre>
<p>And the code blocks in <code>c2.recv()</code>. According to the <a href="https://docs.python.org/3/library/multiprocessing.html#connection-objects" rel="nofollow noreferrer">python doc</a>:</p>
<blockquote>
<p>recv()</p>
<p>Return an object sent from the other end of the connection using send(). Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.</p>
</blockquote>
<p>But my code does not raise EOFError even though the other end <code>c1</code> has been closed. Why?</p>
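<p>One detail that may explain this: after <code>p.start()</code> the parent process still holds its own open handle to <code>c1</code>, so closing the copy inside the child does not fully close that end of the pipe, and <code>recv()</code> keeps waiting. A sketch of the usual fix, closing the parent's copy as well:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pipe, Process

def f(conn):
    conn.close()

if __name__ == "__main__":
    c1, c2 = Pipe()
    p = Process(target=f, args=(c1,))
    p.start()
    c1.close()  # drop the parent's handle so the other end is really closed
    try:
        c2.recv()
    except EOFError:
        print("EOFError raised, as documented")
    p.join()
</code></pre>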
| <python><python-multiprocessing> | 2023-07-31 06:05:11 | 0 | 314 | Nicolás_Tsu |
76,801,014 | 105,315 | lxml xpath syntax to access the ancestor of an XML element of specific depth? | <p>I am trying to access ancestors of depth 3 in an XML file, i.e. for element <code>/a/b/c/d/e/f</code>, I want to get element <code>c</code>.</p>
<p>Here is my more realistic example input file:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<Project xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:QDA-XML:project:1.0">
<Sources>
<TextSource name="document example">
<Description />
<PlainTextSelection>
<Description />
<Coding>
<CodeRef targetGUID="a2a627dd-f7e7-4fc7-b8db-918e3ad50450" />
</Coding>
</PlainTextSelection>
</TextSource>
<VideoSource name="myvideo">
<Transcript>
<SyncPoint/>
<SyncPoint/>
<TranscriptSelection>
<Description />
<Coding>
<CodeRef targetGUID="a2a627dd-f7e7-4fc7-b8db-918e3ad50450" />
</Coding>
</TranscriptSelection>
</Transcript>
<VideoSelection>
<Coding>
<CodeRef targetGUID="a2a627dd-f7e7-4fc7-b8db-918e3ad50450" />
</Coding>
</VideoSelection>
</VideoSource>
</Sources>
<Notes>
<Note name="some text">
<Description />
<PlainTextSelection>
<Description />
<Coding>
<CodeRef targetGUID="a2a627dd-f7e7-4fc7-b8db-918e3ad50450" />
</Coding>
</PlainTextSelection>
</Note>
</Notes>
</Project>
</code></pre>
<p>In this case for instance, I want to access the elements <code>Note</code>, <code>TextSource</code> and <code>VideoSource</code> that are ancestors of <code>CodeRef</code> elements.</p>
<p>I have the following working code, but am wondering if there is a nicer way to go about it, perhaps using Xpath syntax:</p>
<pre><code>import lxml.etree as ET
tree = ET.parse('coderef_examples/project_simplified.xml')
root = tree.getroot()
for i in root.findall('.//CodeRef', root.nsmap):
p = tree.getelementpath(i)
p = p.replace('{urn:QDA-XML:project:1.0}', '')
print('namespace-free path: ', p)
p = tree.getpath(i) # Xpath
s = '/'.join(p.split('/')[:4]) # Xpath of depth 3
print('xpath string: ', s)
ancestor = root.xpath(s)[0]
print('source tag: ', ancestor.tag, ', source name: ', ancestor.get('name'))
</code></pre>
<p>Output:</p>
<pre><code>namespace-free path: Sources/TextSource/PlainTextSelection/Coding/CodeRef
xpath string: /*/*[1]/*[1]
source tag: {urn:QDA-XML:project:1.0}TextSource , source name: document example
namespace-free path: Sources/VideoSource/Transcript/TranscriptSelection/Coding/CodeRef
xpath string: /*/*[1]/*[2]
source tag: {urn:QDA-XML:project:1.0}VideoSource , source name: myvideo
namespace-free path: Sources/VideoSource/VideoSelection/Coding/CodeRef
xpath string: /*/*[1]/*[2]
source tag: {urn:QDA-XML:project:1.0}VideoSource , source name: myvideo
namespace-free path: Notes/Note/PlainTextSelection/Coding/CodeRef
xpath string: /*/*[2]/*
source tag: {urn:QDA-XML:project:1.0}Note , source name: some text
</code></pre>
<p>Can it be done directly in Xpath? (ideally in a way that is independent of the tag of the ancestor and the depth of the CodeRef elements)</p>
<p><strong>edit: Solution based on Conal Tuohy's answer:</strong></p>
<pre><code>import lxml.etree as ET
tree = ET.parse('coderef_examples/project_simplified.xml')
root = tree.getroot()
for ancestor in root.xpath('/*/*/*[descendant::qda:CodeRef]', namespaces={'qda': 'urn:QDA-XML:project:1.0'}):
print('source tag: ', ancestor.tag, ', source name: ', ancestor.get('name'))
</code></pre>
<p>Much faster and much more efficient. It may not print an entry for every CodeRef node, but since I only want the unique ancestors, it is even better.</p>
| <python><xml><xpath><lxml> | 2023-07-31 06:02:54 | 1 | 522 | KIAaze |
76,800,991 | 3,847,117 | How do I use the IPython kernel programmatic interface? | <p>I have the following code which should programmatically run a jupyter kernel.</p>
<p>Here's an example:</p>
<pre><code>from jupyter_client import KernelManager
from queue import Empty
from jupyter_client.blocking import BlockingKernelClient
# maybe the kernel needs time to boot up?
km = KernelManager(kernel_name="python3")
client = km.client()
client = km.blocking_client()
client.start_channels()
import time
time.sleep(1)
msg_id = client.execute("???")
print(msg_id)
while True:
import time
time.sleep(1)
print("sleeping ...")
try:
reply = client.get_shell_msg(timeout=1)
print("reply")
print(reply)
if reply['parent_header']['msg_id'] == msg_id:
if reply['content']['status'] == 'error':
print({"error_type": "failed_to_execute", 'error': '\n'.join(reply['content']['traceback'])})
break
except Empty:
print("shell_empty")
# pass
try:
iopub_msg = client.get_iopub_msg(timeout=1)
print("iopub_msg")
print(iopub_msg)
break
except Empty:
print("iopub_empty")
</code></pre>
<p>but in practice, I just get a message id, and then both channels just return empty forever. What am I doing wrong?</p>
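<p>What appears to be missing above is <code>km.start_kernel()</code>: the snippet never launches a kernel process, which would leave both channels silent forever. A minimal sketch of the full sequence (the <code>print('hello')</code> payload is just a placeholder):</p>
<pre><code>from jupyter_client import KernelManager

km = KernelManager(kernel_name="python3")
km.start_kernel()                  # actually launch the kernel process
client = km.blocking_client()
client.start_channels()
client.wait_for_ready(timeout=30)  # block until the kernel answers kernel_info

msg_id = client.execute("print('hello')")
reply = client.get_shell_msg(timeout=10)
print(reply['parent_header']['msg_id'] == msg_id, reply['content']['status'])

client.stop_channels()
km.shutdown_kernel()
</code></pre>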
| <python><python-3.x><jupyter> | 2023-07-31 05:58:35 | 0 | 8,659 | Foobar |
76,800,927 | 7,106,508 | How do you terminate threading using Pyinput | <p>I realize this question has been asked many times, but I have a very difficult time understanding threading and the <code>with</code> statement, and consequently I cannot understand the other answers given. I was able to accomplish this task about 5 years ago, but my threading skills have since fallen into disuse. In any case, I'm trying to register the x,y coordinates of a mouse click, store the values in globals, then terminate the keyboard listener, then do other things. In the code below, I cannot get the code to advance to where it says <code>p ('done')</code>, even after I press the 'e' button. When I pause the code and inspect the value of the <code>done</code> variable, it outputs 1, so I don't understand why the code is not advancing to the next line. Again, I want to reiterate: please do not assume that I understand what you're saying when you talk about <code>threading</code> or <code>with</code> statements. I have only a very vague notion of what these things do.</p>
<pre><code>from pynput import keyboard, mouse
done = 0
xa = 0
ya = 0
p = print
def on_click(x, y, button, pressed):
global xa, ya
print (x,y)
xa = x
ya = y
def on_press(key):
global done
if key.char == 'e':
done = 1
def listen():
global done
with keyboard.Listener(on_press=on_press) as kl, \
mouse.Listener(on_click=on_click) as ml:
kl.join()
ml.join()
while 1:
if done:
p ('done')
kl.terminate()
ml.terminate()
p ('you did it')
return
</code></pre>
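<p>For reference, a sketch of one way this is commonly structured. The two assumptions are that returning <code>False</code> from a pynput callback stops that listener, and that listeners are stopped with <code>.stop()</code> (they have no <code>terminate()</code> method):</p>
<pre><code>from pynput import keyboard, mouse

xa = ya = 0

def on_click(x, y, button, pressed):
    global xa, ya
    xa, ya = x, y

def on_press(key):
    if getattr(key, 'char', None) == 'e':
        return False  # returning False stops the keyboard listener

ml = mouse.Listener(on_click=on_click)
ml.start()                         # runs in its own background thread
with keyboard.Listener(on_press=on_press) as kl:
    kl.join()                      # blocks here until on_press returns False
ml.stop()                          # stop the mouse listener explicitly
print('done', xa, ya)
</code></pre>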
| <python><multithreading><keyboard> | 2023-07-31 05:44:17 | 1 | 304 | bobsmith76 |
76,800,900 | 13,838,385 | How can I specify an editor to use for the pip config edit --editor command that contains spaces in the file path? | <p><code>pip config edit --editor notepad</code> works.</p>
<p><code>pip config edit --editor "C:\Program Files\Notepad++\notepad++.exe"</code> does not work.</p>
<p>How can I assign NPP as the editor without adding it to path?</p>
<p>Any help appreciated!</p>
| <python><pip> | 2023-07-31 05:37:27 | 1 | 577 | fmotion1 |
76,800,766 | 6,847,222 | Error pulling file from SFTP using SSIS (execute package) | <p>I have received the below error when the package is executed through SQL Server Agent:</p>
<blockquote>
<p>Executed as user: <strong>/</strong>. Traceback (most recent call last): File
"E:/file//import_file_.py", line 18, in with
pysftp.Connection(host=myHostname, username=myUsername,
password=myPassword, cnopts=cnopts) as sftp:<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "
C:\Users\tt\AppData\Local\Programs\Python\Python311\Lib\site-packages\pysftp\__init__.py",
line 140, in __init__ self._start_transport(host, port) File "
C:\Users\tt\AppData\Local\Programs\Python\Python311\Lib\site-packages\pysftp\__init__.py",
line 176, in _start_transport self._transport =
paramiko.Transport((host, port))<br />
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File
"C:\Users\tt\AppData\Local\Programs\Python\Python311\Lib\site-packages\paramiko\transport.py",
line 448, in __init__ raise SSHException(
paramiko.ssh_exception.SSHException: Unable to connect to
sftp.site.com: [WinError 10060] A connection attempt failed because
the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to
respond Microsoft (R) SQL Server Execute Package Utility Version
15.0.4312.2 for 64-bit Copyright (C) 2019 Microsoft. All rights reserved. Started: 14:42:00 Error: 2023-07-31 14:42:43.27<br />
Code: 0xC0029151 Source: Execute Process Task Execute Process Task
Description: In Executing "
C:\Users\tt\AppData\Local\Programs\Python\Python311\python.exe" "
E:/file//import_file_.py" at "E:\tt\file\data", The process exit
code was "1" while the expected was "0". End Error DTExec: The
package execution returned DTSER_FAILURE (1). Started: 14:42:00
Finished: 14:42:43 Elapsed: 42.828 seconds. The package execution
failed. The step failed.</p>
</blockquote>
<p>And below is my code:</p>
<pre><code>from msilib.schema import Directory
import pysftp
import os
import glob
import fnmatch
from datetime import date, timedelta
#assuming that I have a connection to the SFTP server
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
myHostname = 'sftp.mm.com'
myUsername = 'name'
myPassword = '**'
with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword, cnopts=cnopts) as sftp:
print("Connection Success")
remotefilepath='/data.zip/'
localfilepath= r'E:\\data.zip'
sftp.get(remotefilepath,localfilepath)
print("File transfer Success")
</code></pre>
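<p>Since <code>WinError 10060</code> is a plain TCP connect timeout, a quick reachability check run as the same account/machine the Agent job uses may help narrow this down to a firewall/proxy/DNS issue; a sketch assuming the server listens on the default SSH port 22:</p>
<pre><code>import socket

# raises socket.timeout / ConnectionRefusedError if the host is unreachable
sock = socket.create_connection(('sftp.mm.com', 22), timeout=10)
sock.close()
print('TCP connection to port 22 succeeded')
</code></pre>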
| <python><ssis> | 2023-07-31 05:01:30 | 0 | 357 | MSM |
76,800,759 | 18,029,617 | Integrating Matplotlib Charts into Angular Web Application | <p>Recently, I've been working with the <strong>matplotlib</strong> Python library to fulfill a requirement of <strong>integrating matplotlib charts into a web application built on Angular</strong>.</p>
<p>Initially, my plan was to serve the matplotlib chart image to the web application from a <strong>Python Flask server</strong>. In this requirement, the matplotlib chart is updated every second, so a new image file has to be created every second and sent to the web application. This approach seems inefficient and unsuitable for meeting the requirements.</p>
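<p>A variation on that plan which at least avoids the per-second file churn would be to render each frame into an in-memory buffer and return it straight from Flask; a sketch, assuming the chart can be redrawn on demand (the plotted data here is a placeholder):</p>
<pre><code>import io
from flask import Flask, Response
import matplotlib
matplotlib.use('Agg')  # headless backend for server-side rendering
import matplotlib.pyplot as plt

app = Flask(__name__)

@app.route('/chart.png')
def chart():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [1, 4, 2])   # placeholder data, redrawn per request
    buf = io.BytesIO()
    fig.savefig(buf, format='png')  # render to memory, no file on disk
    plt.close(fig)                  # free the figure between requests
    return Response(buf.getvalue(), mimetype='image/png')
</code></pre>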
<p>Now, I'm actively searching for a better solution to integrate matplotlib with the web application. If anyone has any insights or suggestions on how to achieve this, please let me know.</p>
<p>if there is a way to <strong>visualize matplotlib charts using Grafana panels</strong>. If anyone has experience or knowledge in this area, Please guide me to achieve my requirement.</p>
| <python><matplotlib> | 2023-07-31 05:00:06 | 0 | 691 | Bennison J |
76,800,597 | 9,906,395 | RuntimeErro Importing Langchain | <p>I am getting the following error while trying to import langchain - any suggestions of what could have gone wrong?</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [4], in <cell line: 1>()
----> 1 import langchain
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\__init__.py:6, in <module>
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
16 VectorDBQAWithSourcesChain,
17 )
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\__init__.py:11, in <module>
2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
(...)
8 LLMSingleActionAgent,
9 )
10 from langchain.agents.agent_iterator import AgentExecutorIterator
---> 11 from langchain.agents.agent_toolkits import (
12 create_csv_agent,
13 create_json_agent,
14 create_openapi_agent,
15 create_pandas_dataframe_agent,
16 create_pbi_agent,
17 create_pbi_chat_agent,
18 create_spark_dataframe_agent,
19 create_spark_sql_agent,
20 create_sql_agent,
21 create_vectorstore_agent,
22 create_vectorstore_router_agent,
23 create_xorbits_agent,
24 )
25 from langchain.agents.agent_types import AgentType
26 from langchain.agents.conversational.base import ConversationalAgent
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\agent_toolkits\__init__.py:6, in <module>
2 from langchain.agents.agent_toolkits.amadeus.toolkit import AmadeusToolkit
3 from langchain.agents.agent_toolkits.azure_cognitive_services import (
4 AzureCognitiveServicesToolkit,
5 )
----> 6 from langchain.agents.agent_toolkits.csv.base import create_csv_agent
7 from langchain.agents.agent_toolkits.file_management.toolkit import (
8 FileManagementToolkit,
9 )
10 from langchain.agents.agent_toolkits.gmail.toolkit import GmailToolkit
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\agent_toolkits\csv\base.py:4, in <module>
1 from typing import Any, List, Optional, Union
3 from langchain.agents.agent import AgentExecutor
----> 4 from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
5 from langchain.schema.language_model import BaseLanguageModel
8 def create_csv_agent(
9 llm: BaseLanguageModel,
10 path: Union[str, List[str]],
11 pandas_kwargs: Optional[dict] = None,
12 **kwargs: Any,
13 ) -> AgentExecutor:
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\agent_toolkits\pandas\base.py:18, in <module>
16 from langchain.agents.mrkl.base import ZeroShotAgent
17 from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
---> 18 from langchain.agents.types import AgentType
19 from langchain.callbacks.base import BaseCallbackManager
20 from langchain.chains.llm import LLMChain
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\types.py:5, in <module>
3 from langchain.agents.agent import BaseSingleActionAgent
4 from langchain.agents.agent_types import AgentType
----> 5 from langchain.agents.chat.base import ChatAgent
6 from langchain.agents.conversational.base import ConversationalAgent
7 from langchain.agents.conversational_chat.base import ConversationalChatAgent
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\chat\base.py:6, in <module>
3 from pydantic import Field
5 from langchain.agents.agent import Agent, AgentOutputParser
----> 6 from langchain.agents.chat.output_parser import ChatOutputParser
7 from langchain.agents.chat.prompt import (
8 FORMAT_INSTRUCTIONS,
9 HUMAN_MESSAGE,
10 SYSTEM_MESSAGE_PREFIX,
11 SYSTEM_MESSAGE_SUFFIX,
12 )
13 from langchain.agents.utils import validate_tools_single_input
File ~\Anaconda3\envs\rioxarray\lib\site-packages\langchain\agents\chat\output_parser.py:12, in <module>
7 from langchain.schema import AgentAction, AgentFinish, OutputParserException
9 FINAL_ANSWER_ACTION = "Final Answer:"
---> 12 class ChatOutputParser(AgentOutputParser):
13 """Output parser for the chat agent."""
15 pattern = re.compile(r"^.*?`{3}(?:json)?\n(.*?)`{3}.*?$", re.DOTALL)
File ~\Anaconda3\envs\rioxarray\lib\site-packages\pydantic\main.py:229, in pydantic.main.ModelMetaclass.__new__()
File ~\Anaconda3\envs\rioxarray\lib\site-packages\pydantic\fields.py:491, in pydantic.fields.ModelField.infer()
File ~\Anaconda3\envs\rioxarray\lib\site-packages\pydantic\fields.py:421, in pydantic.fields.ModelField.__init__()
File ~\Anaconda3\envs\rioxarray\lib\site-packages\pydantic\fields.py:542, in pydantic.fields.ModelField.prepare()
File ~\Anaconda3\envs\rioxarray\lib\site-packages\pydantic\fields.py:804, in pydantic.fields.ModelField.populate_validators()
File ~\Anaconda3\envs\rioxarray\lib\site-packages\pydantic\validators.py:723, in find_validators()
RuntimeError: no validator found for <class 're.Pattern'>, see `arbitrary_types_allowed` in Config
</code></pre>
| <python><langchain> | 2023-07-31 04:02:58 | 3 | 1,122 | Filippo Sebastio |
76,800,552 | 10,735,143 | ModuleNotFoundError when running Python code in a Docker container with copied python environment | <p>I am facing an issue with running my Python code in a Docker container. The problem is that I have a slow internet connection on my server, which causes the <code>pip install -r requirement.txt</code> command to take a long time and even raise a timeout error.</p>
<p>To overcome this issue, I decided to copy my local Python environment into the Docker container and run the code using that environment. However, I encountered a <code>ModuleNotFoundError</code> indicating that the required module (in this case, <code>fastapi</code>) is not found within the container.</p>
<p>I have created a Dockerfile with the following content:</p>
<pre class="lang-bash prettyprint-override"><code>FROM python:3.10-slim
WORKDIR /app/
COPY . /app
ENV PATH="/app/envcache/bin:$PATH"
CMD ["python", "app.py"]
</code></pre>
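<p>Note that a copied virtual environment is generally not portable (it hard-codes paths and the host's Python build), which would explain the <code>ModuleNotFoundError</code>. An alternative that also sidesteps the slow in-container download, sketched here with a hypothetical <code>wheels/</code> directory: download the wheels once on a machine with a good connection, then install offline inside the image:</p>
<pre class="lang-bash prettyprint-override"><code># on the dev machine (good connection), vendor the wheels next to the code:
#   pip download -r requirement.txt -d wheels/

FROM python:3.10-slim
WORKDIR /app/
COPY . /app
# offline install: resolve packages only from the vendored wheels
RUN pip install --no-index --find-links=/app/wheels -r requirement.txt
CMD ["python", "app.py"]
</code></pre>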
| <python><docker> | 2023-07-31 03:45:43 | 2 | 634 | Mostafa Najmi |
76,800,465 | 8,606,164 | Define a 'minimum' for pandas.DataFrame.resample() that is lower than current dataframe's minimum TimedeltaIndex | <p>I have a dataframe for a protocol that tracks the value of 2 settings every minute of a test. For example:</p>
<pre><code>In [1]: df = pd.DataFrame(
{
"time": [
pd.Timedelta(1, unit="min"),
pd.Timedelta(2, unit="min"),
pd.Timedelta(3, unit="min"),
pd.Timedelta(4, unit="min"),
pd.Timedelta(5, unit="min"),
],
"setting_1": [4.0, 4.0, 6.0, 6.0, 8.0],
"setting_2": [1.0, 2.0, 3.0, 4.0, 5.0],
}
).set_index("time")
In [2]: df.head()
Out[2]:
setting_1 setting_2
time
0 days 00:01:00 4.0 1.0
0 days 00:02:00 4.0 2.0
0 days 00:03:00 6.0 3.0
0 days 00:04:00 6.0 4.0
0 days 00:05:00 8.0 5.0
</code></pre>
<p>I need to join this dataframe with another which contains the outcome measures of the test but the data in that dataframe is sampled every 10s. Thus, I expand <code>df</code> so the timedelta index increases by 10s and the missing values are backfilled.</p>
<pre><code>In [3]: df = df.resample("10S").bfill()
In [4]: df.head()
Out[4]:
setting_1 setting_2
time
0 days 00:01:00 4.0 1.0
0 days 00:01:10 4.0 1.0
0 days 00:01:20 4.0 1.0
0 days 00:01:30 4.0 1.0
0 days 00:01:40 4.0 1.0
</code></pre>
<p>However, I want the index to start at a timedelta of 10s (i.e., <code>0 days 00:00:10</code>) rather than the <code>0 days 00:01:00</code> which is the minimum value in the csv file the data is imported from. As the values of <code>setting_1</code> and <code>setting_2</code> over that first minute are represented by the values at 1min, they should also be backfilled.</p>
<p>I currently solve this by concatenating a new dataframe containing a single row with an index of <code>pd.Timedelta(10, unit="s")</code> and the column values corresponding to the 1 min row to the original <code>df</code>. I can then use <code>.resample().bfill()</code> as before to obtain what I need.</p>
<pre><code>In [5]: df = pd.concat(
[
df,
pd.DataFrame(
{
"time": [pd.Timedelta(10, unit="s")],
"setting_1": [df.iloc[0, 0]
"setting_2": [df.iloc[0, 1]
}
).set_index("time")
]
)
In [6]: df
Out[6]:
setting_1 setting_2
time
0 days 00:01:00 4.0 1.0
0 days 00:02:00 4.0 2.0
0 days 00:03:00 6.0 3.0
0 days 00:04:00 6.0 4.0
0 days 00:05:00 8.0 5.0
0 days 00:00:10 4.0 1.0
In [7]: df = df.resample("10S").bfill()
In [8]: df.head()
Out[8]:
setting_1 setting_2
time
0 days 00:00:10 4.0 1.0
0 days 00:00:20 4.0 1.0
0 days 00:00:30 4.0 1.0
0 days 00:00:40 4.0 1.0
0 days 00:00:50 4.0 1.0
</code></pre>
<p>Is there a better way to achieve this without the need for the intermediary step of concatenating a dummy dataframe? I.e., is there a way to resample below the current minimum TimedeltaIndex by defining some sort of 'minimum' setting for the <code>.resample()</code> method?</p>
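<p>One way to avoid the dummy-row step entirely, sketched under the assumption that the target grid is always a fixed 10s spacing, is to build the full index up front and let <code>reindex</code> do the backfill:</p>
<pre><code>full_index = pd.timedelta_range(
    start=pd.Timedelta(10, unit="s"), end=df.index.max(), freq="10S", name="time"
)
# entries before the first observation are backfilled from the 1 min row
df = df.reindex(full_index, method="bfill")
</code></pre>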
| <python><pandas><pandas-resample> | 2023-07-31 03:10:13 | 1 | 769 | Braden |