QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,842,588 | 8,376,001 | I cannot import Web3 in python in colab.research.google.com | <p>I am trying to use the Web3 library in <a href="https://colab.research.google.com/" rel="nofollow noreferrer">colab.research.google.com/</a> with the following code:
<code>from web3 import Web3</code>. I installed it beforehand with <code>!pip install web3</code>, but I am getting the error shown below:</p>
<blockquote>
<p>ContextualVersionConflict: (protobuf 3.20.3 (/usr/local/lib/python3.10/dist-packages), Requirement.parse('protobuf>=4.21.6'), {'web3'})</p>
</blockquote>
<p>I searched and found <a href="https://stackoverflow.com/questions/74365129/contextualversionconflict-error-message-when-using-from-google-cloud-import-vis">this</a> and <a href="https://stackoverflow.com/questions/76048240/why-i-getcontextualversionconflict-error-when-i-am-importing-web3">this</a>, but I could not find a proper solution.</p>
<p><strong>Update</strong></p>
<p>Here is the output of <code>pip freeze</code></p>
<pre><code>absl-py==1.4.0
aiohttp==3.8.5
aiosignal==1.3.1
alabaster==0.7.13
albumentations==1.2.1
altair==4.2.2
anyio==3.7.1
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
array-record==0.4.0
arviz==0.15.1
astropy==5.2.2
astunparse==1.6.3
async-timeout==4.0.2
attrs==23.1.0
audioread==3.0.0
autograd==1.6.2
Babel==2.12.1
backcall==0.2.0
beautifulsoup4==4.11.2
bitarray==2.8.0
bleach==6.0.0
blinker==1.4
blis==0.7.10
blosc2==2.0.0
bokeh==3.1.1
branca==0.6.0
build==0.10.0
CacheControl==0.13.1
cachetools==5.3.1
catalogue==2.0.9
certifi==2023.7.22
cffi==1.15.1
chardet==4.0.0
charset-normalizer==2.0.12
chex==0.1.7
click==8.1.6
click-plugins==1.1.1
cligj==0.7.2
cloudpickle==2.2.1
cmake==3.25.2
cmdstanpy==1.1.0
colorcet==3.0.1
colorlover==0.3.0
community==1.0.0b1
confection==0.1.0
cons==0.4.6
contextlib2==21.6.0
contourpy==1.1.0
convertdate==2.4.0
cryptography==3.4.8
cufflinks==0.17.3
cvxopt==1.3.1
cvxpy==1.3.2
cycler==0.11.0
cymem==2.0.7
Cython==0.29.36
cytoolz==0.12.2
dask==2022.12.1
datascience==0.17.6
db-dtypes==1.1.1
dbus-python==1.2.18
debugpy==1.6.6
decorator==4.4.2
defusedxml==0.7.1
distributed==2022.12.1
distro==1.7.0
dlib==19.24.2
dm-tree==0.1.8
docutils==0.18.1
dopamine-rl==4.0.6
duckdb==0.8.1
earthengine-api==0.1.361
easydict==1.10
ecos==2.0.12
editdistance==0.6.2
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0-py3-none-any.whl#sha256=0964370218b7e1672a30ac50d72cdc6b16f7c867496f1d60925691188f4d2510
entrypoints==0.4
ephem==4.1.4
et-xmlfile==1.1.0
eth-abi==4.1.0
eth-account==0.9.0
eth-hash==0.5.2
eth-keyfile==0.6.1
eth-keys==0.4.0
eth-rlp==0.3.0
eth-typing==3.4.0
eth-utils==2.2.0
etils==1.4.0
etuples==0.3.9
exceptiongroup==1.1.2
fastai==2.7.12
fastcore==1.5.29
fastdownload==0.0.7
fastjsonschema==2.18.0
fastprogress==1.0.3
fastrlock==0.8.1
filelock==3.12.2
Fiona==1.9.4.post1
firebase-admin==5.3.0
Flask==2.2.5
flatbuffers==23.5.26
flax==0.7.0
folium==0.14.0
fonttools==4.41.1
frozendict==2.3.8
frozenlist==1.4.0
fsspec==2023.6.0
future==0.18.3
gast==0.4.0
gcsfs==2023.6.0
GDAL==3.4.3
gdown==4.6.6
gensim==4.3.1
geographiclib==2.0
geopandas==0.13.2
geopy==2.3.0
gin-config==0.5.0
glob2==0.7
google==2.0.3
google-api-core==2.11.1
google-api-python-client==2.84.0
google-auth==2.17.3
google-auth-httplib2==0.1.0
google-auth-oauthlib==1.0.0
google-cloud-bigquery==3.10.0
google-cloud-bigquery-connection==1.12.1
google-cloud-bigquery-storage==2.22.0
google-cloud-core==2.3.3
google-cloud-datastore==2.15.2
google-cloud-firestore==2.11.1
google-cloud-functions==1.13.1
google-cloud-language==2.9.1
google-cloud-storage==2.8.0
google-cloud-translate==3.11.2
google-colab @ file:///colabtools/dist/google-colab-1.0.0.tar.gz#sha256=0885853d84f852df4da0d294de7c7d02c701dd982d3280a456c3b4a12dc5e859
google-crc32c==1.5.0
google-pasta==0.2.0
google-resumable-media==2.5.0
googleapis-common-protos==1.59.1
googledrivedownloader==0.4
graphviz==0.20.1
greenlet==2.0.2
grpc-google-iam-v1==0.12.6
grpcio==1.56.2
grpcio-status==1.48.2
gspread==3.4.2
gspread-dataframe==3.3.1
gym==0.25.2
gym-notices==0.0.8
h5netcdf==1.2.0
h5py==3.8.0
hexbytes==0.3.1
holidays==0.29
holoviews==1.15.4
html5lib==1.1
httpimport==1.3.1
httplib2==0.21.0
humanize==4.6.0
hyperopt==0.2.7
idna==3.4
imageio==2.25.1
imageio-ffmpeg==0.4.8
imagesize==1.4.1
imbalanced-learn==0.10.1
imgaug==0.4.0
importlib-metadata==4.6.4
importlib-resources==6.0.0
imutils==0.5.4
inflect==6.0.5
iniconfig==2.0.0
intel-openmp==2023.2.0
ipykernel==5.5.6
ipython==7.34.0
ipython-genutils==0.2.0
ipython-sql==0.4.1
ipywidgets==7.7.1
itsdangerous==2.1.2
jax==0.4.13
jaxlib @ https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.4.13+cuda11.cudnn86-cp310-cp310-manylinux2014_x86_64.whl#sha256=af30095a0adf342b837a0ed9607e13177ee66f4e654c031a383aa546cd21d815
jeepney==0.7.1
jieba==0.42.1
Jinja2==3.1.2
joblib==1.3.1
jsonpickle==3.0.1
jsonschema==4.3.3
jupyter-client==6.1.12
jupyter-console==6.1.0
jupyter-server==1.24.0
jupyter_core==5.3.1
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.8
kaggle==1.5.16
keras==2.12.0
keyring==23.5.0
kiwisolver==1.4.4
langcodes==3.3.0
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy_loader==0.3
libclang==16.0.6
librosa==0.10.0.post2
lightgbm==3.3.5
linkify-it-py==2.0.2
lit==16.0.6
llvmlite==0.39.1
locket==1.0.0
logical-unification==0.4.6
lru-dict==1.2.0
LunarCalendar==0.0.9
lxml==4.9.3
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.7.1
matplotlib-inline==0.1.6
matplotlib-venn==0.11.9
mdit-py-plugins==0.4.0
mdurl==0.1.2
miniKanren==1.0.3
missingno==0.5.2
mistune==0.8.4
mizani==0.8.1
mkl==2019.0
ml-dtypes==0.2.0
mlxtend==0.22.0
more-itertools==9.1.0
moviepy==1.0.3
mpmath==1.3.0
msgpack==1.0.5
multidict==6.0.4
multipledispatch==1.0.0
multitasking==0.0.11
murmurhash==1.0.9
music21==8.1.0
natsort==8.3.1
nbclient==0.8.0
nbconvert==6.5.4
nbformat==5.9.1
nest-asyncio==1.5.6
networkx==3.1
nibabel==4.0.2
nltk==3.8.1
notebook==6.4.8
numba==0.56.4
numexpr==2.8.4
numpy==1.22.4
oauth2client==4.1.3
oauthlib==3.2.2
opencv-contrib-python==4.7.0.72
opencv-python==4.7.0.72
opencv-python-headless==4.8.0.74
openpyxl==3.0.10
opt-einsum==3.3.0
optax==0.1.7
orbax-checkpoint==0.3.1
osqp==0.6.2.post8
packaging==23.1
palettable==3.3.3
pandas==1.5.3
pandas-datareader==0.10.0
pandas-gbq==0.17.9
pandocfilters==1.5.0
panel==1.2.1
param==1.13.0
parsimonious==0.9.0
parso==0.8.3
partd==1.4.0
pathlib==1.0.1
pathy==0.10.2
patsy==0.5.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.4.0
pip-tools==6.13.0
platformdirs==3.9.1
plotly==5.13.1
plotnine==0.10.1
pluggy==1.2.0
polars==0.17.3
pooch==1.6.0
portpicker==1.5.2
prefetch-generator==1.0.3
preshed==3.0.8
prettytable==0.7.2
proglog==0.1.10
progressbar2==4.2.0
prometheus-client==0.17.1
promise==2.3
prompt-toolkit==3.0.39
prophet==1.1.4
proto-plus==1.22.3
protobuf==4.23.4
psutil==5.9.5
psycopg2==2.9.6
ptyprocess==0.7.0
py-cpuinfo==9.0.0
py4j==0.10.9.7
pyarrow==9.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycocotools==2.0.6
pycparser==2.21
pycryptodome==3.18.0
pyct==0.5.0
pydantic==1.10.12
pydata-google-auth==1.8.1
pydot==1.4.2
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
pyerfa==2.0.0.3
pygame==2.5.0
Pygments==2.14.0
PyGObject==3.42.1
PyJWT==2.3.0
pymc==5.1.2
PyMeeus==0.5.12
pymystem3==0.2.0
PyOpenGL==3.1.7
pyparsing==3.1.0
pyproj==3.6.0
pyproject_hooks==1.0.0
pyrsistent==0.19.3
PySocks==1.7.1
pytensor==2.10.1
pytest==7.2.2
python-apt==0.0.0
python-dateutil==2.8.2
python-louvain==0.16
python-slugify==8.0.1
python-utils==3.7.0
pytz==2022.7.1
pyunormalize==15.0.0
pyviz-comms==2.3.2
PyWavelets==1.4.1
PyYAML==6.0.1
pyzmq==23.2.1
qdldl==0.1.7.post0
qudida==0.0.4
regex==2022.10.31
requests==2.27.1
requests-oauthlib==1.3.1
requirements-parser==0.5.0
rich==13.4.2
rlp==3.0.0
rpy2==3.4.2
rsa==4.9
scikit-image==0.19.3
scikit-learn==1.2.2
scipy==1.10.1
scs==3.2.3
seaborn==0.12.2
SecretStorage==3.3.1
Send2Trash==1.8.2
shapely==2.0.1
six==1.16.0
sklearn-pandas==2.2.0
smart-open==6.3.0
sniffio==1.3.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soundfile==0.12.1
soupsieve==2.4.1
soxr==0.3.5
spacy==3.5.4
spacy-legacy==3.0.12
spacy-loggers==1.0.4
Sphinx==5.0.2
sphinxcontrib-applehelp==1.0.4
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
SQLAlchemy==2.0.19
sqlparse==0.4.4
srsly==2.4.7
statsmodels==0.13.5
sympy==1.11.1
tables==3.8.0
tabulate==0.9.0
tblib==2.0.0
tenacity==8.2.2
tensorboard==2.12.3
tensorboard-data-server==0.7.1
tensorflow==2.12.0
tensorflow-datasets==4.9.2
tensorflow-estimator==2.12.0
tensorflow-gcs-config==2.12.0
tensorflow-hub==0.14.0
tensorflow-io-gcs-filesystem==0.32.0
tensorflow-metadata==1.13.1
tensorflow-probability==0.20.1
tensorstore==0.1.40
termcolor==2.3.0
terminado==0.17.1
text-unidecode==1.3
textblob==0.17.1
tf-slim==1.1.0
thinc==8.1.10
threadpoolctl==3.2.0
tifffile==2023.7.18
tinycss2==1.2.1
toml==0.10.2
tomli==2.0.1
toolz==0.12.0
torch @ https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl#sha256=a7a49d459bf4862f64f7bc1a68beccf8881c2fa9f3e0569608e16ba6f85ebf7b
torchaudio @ https://download.pytorch.org/whl/cu118/torchaudio-2.0.2%2Bcu118-cp310-cp310-linux_x86_64.whl#sha256=26692645ea061a005c57ec581a2d0425210ac6ba9f923edf11cc9b0ef3a111e9
torchdata==0.6.1
torchsummary==1.5.1
torchtext==0.15.2
torchvision @ https://download.pytorch.org/whl/cu118/torchvision-0.15.2%2Bcu118-cp310-cp310-linux_x86_64.whl#sha256=19ca4ab5d6179bbe53cff79df1a855ee6533c2861ddc7389f68349d8b9f8302a
tornado==6.3.1
tqdm==4.65.0
traitlets==5.7.1
triton==2.0.0
tweepy==4.13.0
typer==0.9.0
types-setuptools==68.0.0.3
typing_extensions==4.7.1
tzlocal==5.0.1
uc-micro-py==1.0.2
uritemplate==4.1.1
urllib3==1.26.16
vega-datasets==0.9.0
wadllib==1.3.6
wasabi==1.1.2
wcwidth==0.2.6
web3==6.8.0
webcolors==1.13
webencodings==0.5.1
websocket-client==1.6.1
websockets==11.0.3
Werkzeug==2.3.6
widgetsnbextension==3.6.4
wordcloud==1.8.2.2
wrapt==1.14.1
xarray==2022.12.0
xarray-einstats==0.6.0
xgboost==1.7.6
xlrd==2.0.1
xyzservices==2023.7.0
yarl==1.9.2
yellowbrick==1.5
yfinance==0.2.25
zict==3.0.0
zipp==3.16.2
</code></pre>
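<p><strong>Update 2</strong></p>
<p>The freeze above already shows <code>protobuf==4.23.4</code>, so I suspect a stale 3.20.3 copy is still the one being imported. The workaround I am currently trying (not yet confirmed) is a forced reinstall followed by a runtime restart:</p>
<pre><code>!pip install --force-reinstall "protobuf>=4.21.6"
# then Runtime -> Restart runtime, so the already-imported protobuf 3.20.3 is dropped
</code></pre>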
| <python><import><web3py> | 2023-08-05 16:33:45 | 2 | 1,234 | Mohamed Rahouma |
76,842,563 | 131,050 | Prefetching an iterator of 128-dim array to device | <p>I'm having trouble using <code>flax.jax_utils.prefetch_to_device</code> for the simple function below. I'm loading the SIFT 1M dataset, and converting the arrays to jnp arrays.</p>
<p>I then want to prefetch the iterator of 128-dim arrays.</p>
<pre><code>import tensorflow_datasets as tfds
import tensorflow as tf
import jax
import jax.numpy as jnp
import itertools
import jax.dlpack
import jax.tools.colab_tpu
import flax
def _sift1m_iter():
def prepare_tf_data(xs):
def _prepare(x):
dl_arr = tf.experimental.dlpack.to_dlpack(x)
jax_arr = jax.dlpack.from_dlpack(dl_arr)
return jax_arr
return jax.tree_util.tree_map(_prepare, xs['embedding'])
ds = tfds.load('sift1m', split='database')
it = map(prepare_tf_data, ds)
#it = flax.jax_utils.prefetch_to_device(it, 2) => this causes an error
return it
</code></pre>
<p>However, when I run this code, I get an error:</p>
<pre><code>ValueError: len(shards) = 128 must equal len(devices) = 1.
</code></pre>
<p>I'm running this on a CPU-only device, but from the error it seems like the shape of the data I'm passing into <code>prefetch_to_device</code> is wrong.</p>
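<p>For what it's worth, <code>prefetch_to_device</code> seems to shard the leading axis across devices, so on a single device I would apparently need a leading axis of length 1. A sketch of the reshape I'm considering (untested):</p>
<pre><code>def shard_for_devices(x):
    n = jax.local_device_count()  # 1 on this CPU-only setup
    # prefetch_to_device shards axis 0 across devices, so make axis 0 == device count
    return x.reshape((n, -1) + x.shape[1:])

it = (shard_for_devices(batch) for batch in it)
it = flax.jax_utils.prefetch_to_device(it, 2)
</code></pre>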
| <python><numpy><jax><flax> | 2023-08-05 16:28:10 | 1 | 13,910 | jeffreyveon |
76,842,455 | 1,942,868 | Cancel create when data is wrong | <p>I have this class-based view, which is connected to a <code>Model</code>:</p>
<pre><code>class DrawingViewSet(viewsets.ModelViewSet):
queryset = m.Drawing.objects.all()
serializer_class = s.DrawingSerializer
def create(self, request, *args, **kwargs):
request.data._mutable = True
request.data['update_user'] = request.user.id
request.data['create_user'] = request.user.id
try:
isDataGood(request.data)
.
.
return Response(serializer.data)
except:
logger.error("data is not good")
return Response(data={'error': 'This file is not pdf'}, status=406)
</code></pre>
<p>In the <code>create</code> method, I check the data with <code>isDataGood</code>.</p>
<p>What I want is the behavior below:</p>
<ul>
<li><p>When data is not good, row will not be created.</p>
</li>
<li><p>When data is good, row in the model will be created.</p>
</li>
</ul>
<p>However, even when the data is not good, a row is created in the <code>Drawing</code> model.</p>
<p>I would like to cancel creating the row.</p>
<p>Is there any way to do this?</p>
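<p><strong>Edit</strong>: to make it concrete, this is a sketch of the behavior I'm after, validating before anything is saved (assuming <code>isDataGood</code> returns a boolean):</p>
<pre><code>def create(self, request, *args, **kwargs):
    if not isDataGood(request.data):
        logger.error("data is not good")
        return Response(data={'error': 'This file is not pdf'}, status=406)
    # only good data reaches the parent create(), which performs the save
    return super().create(request, *args, **kwargs)
</code></pre>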
| <python><django> | 2023-08-05 16:00:14 | 2 | 12,599 | whitebear |
76,842,381 | 20,088,885 | Is it possible to automatically generate all users that are related to another user in Odoo? | <p>I'm learning Odoo SaaS right now and I'm trying to create a relational database. You can imagine it as an organizational tree table; I don't know the best word for it, but it's like that.</p>
<p>So for example, I created a <strong>HEAD PERSONNEL</strong>, and Brandon Freeman is working under him; I can also see people working under Brandon.</p>
<p><a href="https://i.sstatic.net/ktyqc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ktyqc.png" alt="enter image description here" /></a></p>
<p>And then if I view Brandon Freeman's profile, I can see on the head tab that <strong>HEAD Personnel</strong> is there automatically, and under him his two personnel.</p>
<p><a href="https://i.sstatic.net/FTAFw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FTAFw.png" alt="enter image description here" /></a></p>
<p>Is this possible to do in Odoo SaaS? I'm having a difficult time experimenting with this since I can't post on their community forum, and the functions are already built in.</p>
| <python><odoo><saas> | 2023-08-05 15:42:41 | 1 | 785 | Stykgwar |
76,842,366 | 3,247,006 | pytest -k vs pytest -m in Pytest | <p>I created and used custom markers <code>orange</code>, <code>apple</code> and <code>pineapple</code> as shown below:</p>
<pre class="lang-ini prettyprint-override"><code># "pytest.ini"
[pytest]
markers =
orange: Orange marker
apple: Apple marker
pineapple: Pineapple marker
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.mark.orange
def test1():
assert True
@pytest.mark.apple
def test2():
assert True
@pytest.mark.pineapple
def test3():
assert True
</code></pre>
<p>Then, I ran <code>pytest</code> with <code>-k orange</code> and <code>-m orange</code> as shown below:</p>
<pre class="lang-none prettyprint-override"><code>pytest -k orange
</code></pre>
<pre class="lang-none prettyprint-override"><code>pytest -m orange
</code></pre>
<p>Then, there was exactly the same output, as shown below:</p>
<pre class="lang-none prettyprint-override"><code>=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 3 items / 2 deselected / 1 selected
tests\test_store.py . [100%]
============= 1 passed, 2 deselected in 0.09s =============
</code></pre>
<p>So, what is the difference between <code>-k</code> and <code>-m</code> in Pytest?</p>
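<p>My current understanding, which I'd like to have confirmed, is that <code>-k</code> does substring matching on test keywords (names, class/file names, and also marker names, which is why both commands matched here), while <code>-m</code> evaluates a marker expression only. A sketch that should separate the two:</p>
<pre class="lang-none prettyprint-override"><code>pytest -k test1   # matches the test *name* test1, regardless of markers
pytest -m test1   # matches nothing: "test1" is not a registered marker
</code></pre>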
| <python><unit-testing><testing><pytest><pytest-markers> | 2023-08-05 15:38:51 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,842,309 | 2,566,565 | Migrating App Engine project to Cloud NDB: local dev_appserver accessing production cloud rather than local datastore | <p>I am trying to migrate a Python 2.7 App Engine project from NDB to Cloud NDB as part of the migration process to Python 3.</p>
<p>After following the <a href="https://cloud.google.com/appengine/migration-center/standard/python/migrate-to-cloud-ndb" rel="nofollow noreferrer">Cloud NDB migration instructions</a>, just running the dev_appserver as before now results in accessing the cloud rather than the local datastore. I see Google's <a href="https://cloud.google.com/appengine/migration-center/standard/python/migrate-to-cloud-ndb#testing" rel="nofollow noreferrer">instructions for ensuring one accesses the local data</a>, but I guess I don't understand how to use this in practice.</p>
<p>Assuming I have to use the datastore emulator to prevent this, I run dev_appserver with the flag <code>--support_datastore_emulator true</code>. This results in a successful conversion of my local datastore data into the sqllite format, but still queries the cloud.</p>
<p>I then set the required environment variables in app.yaml: DATASTORE_DATASET, DATASTORE_PROJECT_ID, DATASTORE_EMULATOR_HOST, DATASTORE_EMULATOR_HOST_PATH, DATASTORE_HOST (the values match the output of <code>gcloud beta emulators datastore env-init</code>). Running it complains that DATASTORE_APP_ID is not set, so I set it as well.</p>
<p>Everything now launches with a confirmation message that the emulator is being used, but trying to access the datastore results in "BadArgumentError: Could not import googledatastore. This library must be installed with version >= 6.0.0 to use the Cloud Datastore API." After installing that, I get a never-ending series of additional installation requirements and module conflicts... it's a mess, and this isn't listed in the documentation anyway.</p>
<p>How can I get dev_appserver (with or without the datastore emulator) to access local data rather than the cloud? Sadly, I have now spent days trying to make this work.</p>
| <python><google-app-engine><pycharm><dev-appserver> | 2023-08-05 15:23:36 | 2 | 728 | Dev93 |
76,842,245 | 5,743,692 | Google Speech-to-Text API Speaker Diarization with Python .long_running_recognize() method | <p>I was following the answer in this <a href="https://stackoverflow.com/questions/59052891/speaker-diarization-when-using-python-speech-recognition">question</a>. But my audio is more than 1 min, so I have to use the <code>.long_running_recognize(config, audio)</code> method instead of <code>.recognize(config, audio)</code>. Here is the code:</p>
<pre><code>from pathlib import Path
# https://cloud.google.com/python/docs/reference/speech/latest/google.cloud.speech_v1p1beta1.services.speech.SpeechClient
from google.cloud import speech_v1p1beta1 as speech
from google.cloud import storage
def file_upload(client, file: Path, bucket_name: str = 'wav_files_ua_eu_standard'):
# https://stackoverflow.com/questions/62125584/file-upload-using-pythonlocal-system-to-google-cloud-storage#:~:text=You%20can%20do%20it%20in%20this%20way%2C%20from,string%20of%20text%20blob.upload_from_string%20%28%27this%20is%20test%20content%21%27%29
bucket = client.get_bucket(bucket_name)
blob = bucket.blob(file.name)
# Uploading from local file without open()
blob.upload_from_filename(file)
# https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.blob.Blob
uri3 = 'gs://' + blob.id[:-(len(str(blob.generation)) + 1)]
print(F"{uri3=}")
return uri3
client = speech.SpeechClient()
client_bucket = storage.Client(project='my-project-id-is-hidden')
speech_file_name = R"C:\Users\vasil\OneDrive\wav_samples\wav_sample_phone_call.wav"
speech_file = Path(speech_file_name)
if speech_file.exists:
uri = file_upload(client_bucket, speech_file)
# audio = speech.RecognitionAudio(content=content)
audio = speech.RecognitionAudio(uri=uri)
diarization_config = speech.SpeakerDiarizationConfig(
enable_speaker_diarization=True,
min_speaker_count=2,
max_speaker_count=3,
)
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
sample_rate_hertz=8000,
language_code="ru-RU", # "uk-UA", "ru-RU",
# alternative_language_codes=["uk-UA", ],
diarization_config=diarization_config,
)
print("Waiting for operation to complete...")
# response = client.recognize(config=config, audio=audio)
response = client.long_running_recognize(config=config, audio=audio)
    result = response.result()  # long_running_recognize returns an operation; wait for it to finish
    words_info = result.results
# Printing out the output:
for word_info in words_info[0].alternatives[0].words:
print(f"word: '{word_info.word}', speaker_tag: {word_info.speaker_tag}")
</code></pre>
<p>The differences are</p>
<ul>
<li>I have to upload the file for recognition and get the URI of the uploaded file</li>
<li>use <code>speech.RecognitionAudio(uri=uri</code>) - not <code>.RecognitionAudio(content=content)</code></li>
<li>use <code>client.long_running_recognize(config=config, audio=audio)</code> - not <code>client.recognize(config=config, audio=audio)</code></li>
</ul>
<p>So the code is working, but the result has no information about diarization labels.
What am I doing wrong? Here is the output; speaker_tag is always equal to zero.</p>
<pre><code>word: 'Алло', speaker_tag: 0
word: 'здравствуйте', speaker_tag: 0
word: 'Я', speaker_tag: 0
word: 'хочу', speaker_tag: 0
word: 'котёнок', speaker_tag: 0
word: 'Ты', speaker_tag: 0
word: 'очень', speaker_tag: 0
word: 'классная', speaker_tag: 0
word: 'Спасибо', speaker_tag: 0
word: 'приятно', speaker_tag: 0
word: 'что', speaker_tag: 0
word: 'вы', speaker_tag: 0
word: 'и', speaker_tag: 0
word: 'Хорошего', speaker_tag: 0
word: 'вам', speaker_tag: 0
word: 'дня', speaker_tag: 0
word: 'сегодня', speaker_tag: 0
word: 'Спасибо', speaker_tag: 0
word: 'до', speaker_tag: 0
word: 'свидания', speaker_tag: 0
</code></pre>
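<p><strong>Update</strong></p>
<p>I have since read that with diarization enabled, the <em>final</em> element of <code>results</code> is supposed to aggregate all words with their speaker tags, so perhaps I should be reading <code>results[-1]</code> instead of <code>results[0]</code> (untested sketch):</p>
<pre><code>for word_info in result.results[-1].alternatives[0].words:
    print(f"word: '{word_info.word}', speaker_tag: {word_info.speaker_tag}")
</code></pre>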
| <python><google-cloud-platform><audio><speech-to-text><diarization> | 2023-08-05 15:07:08 | 1 | 451 | Vasyl Kolomiets |
76,841,714 | 13,023,647 | Formatting text in a sent message in smtplib | <p>To send an email, I use a certain function:</p>
<pre><code>def send_email(self, subject, to_addr, from_addr, content):
body_text = ''
for cnt in content:
body_text += str(cnt)
BODY = '\r\n'.join((
'From: %s' % from_addr,
'To: %s' % to_addr,
'Subject: %s' % subject,'', body_text
))
server = smtplib.SMTP(self.host)
server.sendmail(from_addr, to_addr, BODY.encode('utf-8'))
server.quit()
</code></pre>
<p>As you can see, the message body is in the <code>body_text</code> variable.</p>
<p>The message is generated in this piece of code:</p>
<pre><code>class Event():
def __init__(self, path):
self.path = path
self.time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
def __str__(self):
return ('Time: ' + self.time + '\tPath: '+ '"' + self.path + '"' + '\n')
</code></pre>
<p>Could you please tell me how I can format a message sent by mail?</p>
<p>I would like to be able to make the text bold, italic, or underlined, and to insert a hyperlink, with all of this displayed correctly in the letter itself.</p>
<p>Now if I use the <code>color</code> class in the function where the text is formed:</p>
<pre><code>class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
def __str__(self):
return (color.BOLD + 'Time: ' + color.END + self.time + '\tPath: '+ '"' + self.path + '"' + '\n')
</code></pre>
<p>then the following characters will be displayed in the incoming message:</p>
<p><a href="https://i.sstatic.net/nydkK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nydkK.png" alt="enter image description here" /></a></p>
<p>After changes, if you do <code>print(message_str)</code>:</p>
<p><a href="https://i.sstatic.net/gD4Jh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gD4Jh.png" alt="enter image description here" /></a></p>
| <python><python-3.x><smtplib> | 2023-08-05 12:49:33 | 2 | 374 | Alex Rebell |
76,841,614 | 10,973,108 | How can I create a dict of dicts in recursion with a provided list of keys | <p>I need to create a dict of dicts while iterating over a list of keys.</p>
<p>For example, I have this dict and list:</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {'root': {}}
my_keys = ['foo', 'bar', 'lorem', 'ipsum']
</code></pre>
<p>I want to create a function that returns the following dict:</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {
'root': {
'foo': {
'bar': {
'lorem': {
'ipsum': {}
}
}
}
}
}
</code></pre>
<p>I think it needs recursion, but I'm stuck on the logic.</p>
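<p>For reference, the furthest I got is this rough sketch; I can't tell whether it is correct or idiomatic:</p>
<pre class="lang-py prettyprint-override"><code>def nest(d, keys):
    # descend into d, creating an empty dict for each key in turn
    if keys:
        nest(d.setdefault(keys[0], {}), keys[1:])
    return d

my_dict = {'root': {}}
nest(my_dict['root'], my_keys)
# {'root': {'foo': {'bar': {'lorem': {'ipsum': {}}}}}}
</code></pre>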
| <python><logic> | 2023-08-05 12:22:32 | 2 | 348 | Daniel Bailo |
76,841,459 | 3,305,301 | How can I select or alias a duckdb relation column which has an aggregate function in its column name using the Python-API? | <p>The DuckDB Python API lets you compose complex queries by building it up from chained functions on a relation. For example, to do a group by, one can do a simple select, and then use the aggregate function on the select relation like this:</p>
<pre><code>rel = duckdb.sql('select date, businessunit, pnl from tbl')
rel = rel.aggregate('date, sum(pnl)')
</code></pre>
<p>This will create a new relation where all <code>pnl</code> is grouped by date and the column name for the grouped <code>pnl</code> is the string "sum(pnl)".</p>
<p>Now we have a problem because we can no longer select this column named "sum(pnl)" using the Python API. DuckDB can no longer differentiate between the "sum"-command on a column named "pnl" and a column called "sum(pnl)."</p>
<p>Referencing the column by its name gives an error because DuckDB thinks you want to sum the column.</p>
<p><code>rel["date"]</code> works and gives you the date column,</p>
<p><code>rel["sum(pnl)"]</code> errors with:</p>
<blockquote>
<p>BinderException: Binder Error: Referenced column "pnl" not found in FROM clause!<br />
Candidate bindings: "query_relation.sum(pnl)"</p>
</blockquote>
<p><code>rel["pnl"]</code> errors with Attribute error because the column is now called "sum(pnl)" not "pnl".</p>
<p>Aliasing the grouped columns would resolve the issue, but the Python API does not seem to give an option to do this.</p>
<p>Also quoting the column name does not work, because duckdb now thinks you are referencing a string and not a column:</p>
<blockquote>
<p>AttributeError: This relation does not contain a column by the name of '"sum(PnL)"'.</p>
</blockquote>
<p>How can I either reference the column or alias it using the Python API?</p>
<p>(I could probably use pure SQL throughout, but the question is about the Python client API.)</p>
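<p>One thing I have not ruled out: the string passed to <code>aggregate</code> looks like it is parsed as SQL expressions, so perhaps an inline <code>AS</code> alias is accepted there. A sketch of what I mean (unverified):</p>
<pre><code>rel = duckdb.sql('select date, businessunit, pnl from tbl')
rel = rel.aggregate('date, sum(pnl) AS total_pnl')  # alias inside the SQL expression
rel['total_pnl']  # would then be referencable by a plain name
</code></pre>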
| <python><sql><alias><duckdb> | 2023-08-05 11:38:05 | 0 | 1,027 | tomanizer |
76,841,174 | 11,141,816 | How to find the polynomial form coefficients of x with the presence of exp(1/x) | <p>Consider the following expression</p>
<pre><code>from sympy import *
a,b,c,x=symbols('a,b,c,x',real=True)
expr=a+(a+(a+b)*x+c*(a+b)*x**2 )*exp(a*x/c)
</code></pre>
<p>I wanted to simplify this expression such that it's in the "almost" polynomial form, i.e.</p>
<pre><code>a+a*exp(a*x/c) + (a+b)*exp(a*x/c)*x + c*(a+b)*exp(a*x/c)*x**2
</code></pre>
<p>where one can extract the "coefficients" of the terms of <code>x**n</code>.</p>
<pre><code>a+a*exp(a*x/c)
(a+b)*exp(a*x/c)
c*(a+b)*exp(a*x/c)
</code></pre>
<p>Though it's not a trivial task, I suspect there might be a built-in function for this, but I did not find one.</p>
<pre><code>expr.series(x)
</code></pre>
<p>returned a series expansion, which expanded the <code>exp(a*x/c)</code> with respect to <code>x</code> and complicated things.</p>
<pre><code>from sympy import poly
poly(expr,x)
</code></pre>
<p>also failed, because</p>
<pre><code>PolynomialError: exp(a*x/c) contains an element of the set of generators.
</code></pre>
<p>I also tried</p>
<pre><code>expr.rewrite(x)
</code></pre>
<p>which did not do anything.</p>
<p>How do I find the polynomial-form coefficients of x in this type of expression? Is there a built-in function for it?</p>
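<p>The closest I have gotten is <code>collect</code>, which appears to treat <code>exp(a*x/c)</code> as part of the coefficients instead of expanding it (sketch; I am not sure how robust this is):</p>
<pre><code>from sympy import collect, expand

parts = collect(expand(expr), x, evaluate=False)
# hoping for a dict keyed by powers of x, roughly:
# {1: a + a*exp(a*x/c), x: (a + b)*exp(a*x/c), x**2: c*(a + b)*exp(a*x/c)}
# (the coefficients may come back unfactored)
</code></pre>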
| <python><sympy> | 2023-08-05 10:21:23 | 2 | 593 | ShoutOutAndCalculate |
76,840,819 | 1,850,007 | TypeError while operating on list indices in Python | <p>I have the following code:</p>
<pre><code>len(parameters) / 2
</code></pre>
<p>When this value is used as a slice index, the code returns the error:</p>
<pre><code>TypeError: slice indices must be integers or None or have an __index__ method
</code></pre>
<p>Why is this the case, especially since the length of the list I gave is an int which is divisible by 2 (in fact, I check for this)?</p>
<p>I would think integers have exact representations in Python, so len(a)/2 should always return an int.</p>
<p>Any help in this matter is welcome.</p>
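<p>For illustration, my current understanding of <code>/</code> versus <code>//</code> in Python 3:</p>
<pre><code>parameters = list(range(10))
half = len(parameters) / 2       # 5.0 -> float, even though 10 is divisible by 2
half_int = len(parameters) // 2  # 5 -> int (floor division)
print(parameters[:half_int])     # works; parameters[:half] raises the TypeError above
</code></pre>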
| <python><python-3.x><list><typeerror><slice> | 2023-08-05 08:43:24 | 2 | 1,062 | Lost1 |
76,840,704 | 10,137,268 | How to do crossvalidation finetuning of a Transformer model? | <p>I want to run 10-fold crossvalidation on the finetuning process of a huggingface transformer trainer (<a href="https://github.com/universal-ie/UIE" rel="nofollow noreferrer">UIE</a>). I'm experimenting with the UIE model, which is built on the standard transformer architecture.</p>
<p>To achieve 10-fold crossvalidation I need to split my dataset into 10 distinct folds, such that the training and evaluation sets rotate through the folds. This image explains it for k=5 (<a href="https://miro.medium.com/v2/resize:fit:4600/1*RCAAhv_IFVDK1aLMMsF2uw.png" rel="nofollow noreferrer">Source</a>):
<a href="https://i.sstatic.net/MyvsO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MyvsO.png" alt="https://miro.medium.com/v2/resize:fit:4600/1*RCAAhv_IFVDK1aLMMsF2uw.png" /></a></p>
<p>But the Trainer class wants an <code>eval_dataset</code> (<a href="https://github.com/universal-ie/UIE/blob/main/run_uie_finetune.py" rel="nofollow noreferrer">Code</a>).</p>
<pre><code> trainer = ConstraintSeq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics if training_args.predict_with_generate else None,
decoding_type_schema=record_schema,
decoding_format=data_args.decoding_format,
source_prefix=prefix,
task=data_args.task,
)
</code></pre>
<p>I know that the validation dataset is not the test set, because at the end of the script the test results are evaluated:</p>
<pre><code>if training_args.do_predict:
logger.info("*** Test ***")
test_results = trainer.predict(
test_dataset,
metric_key_prefix="test",
max_length=data_args.val_max_target_length,
num_beams=data_args.num_beams,
)
test_metrics = test_results.metrics
test_metrics["test_loss"] = round(test_metrics["test_loss"], 4)
output_test_result_file = os.path.join(training_args.output_dir, "test_results_seq2seq.txt")
</code></pre>
<p>I'm getting errors when I try to pass an empty validation dataset as JSON or set <code>eval_dataset</code> to None. So it appears to me that the Trainer really requires an eval dataset. So I have the following questions:</p>
<ol>
<li>I don't understand why I need a validation set in the first place; I thought that a transformer trains solely on the training set and evaluates the trained model on the test set.</li>
<li>If the trainer really requires a validation set, how would I do the crossvalidation? Most examples show crossvalidation on two-way train/test splits, but not on train/val/test splits.</li>
</ol>
<p>I implemented and ran it like this and it worked fine, but I think that's missing the point of crossvalidation, since the test set is always the same (see the fold-rotation sketch after the screenshot):
<a href="https://i.sstatic.net/grQHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/grQHO.png" alt="My Crossvalidation implementation" /></a></p>
| <python><huggingface-transformers><cross-validation> | 2023-08-05 08:11:17 | 1 | 531 | Paul Erlenmeyer |
76,840,582 | 11,741,232 | Paste image on interactive map using two known points | <p>I have an interesting open-ended problem.</p>
<p>I have an image like the following:</p>
<p><a href="https://i.sstatic.net/JwWb9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JwWb9.png" alt="enter image description here" /></a></p>
<p>The red and purple dots are points with known GPS location. I am trying to find the GPS location of the green point. A simple approximation could be made by pasting this image onto an interactive map (like Folium), aligning the red and purple dots with the GPS locations they are known to hold, and just clicking on the green dot to get its point. This approximation is good enough if these points are close to one another in real life, e.g. dots are just ~30 m away from each other.</p>
<p>However, the math to align this picture with a map programmatically seems very hard, even though the image is indeed uniquely constrained to a position. I'm looking for suggestions that can decrease the complexity of this problem. Any help would be appreciated. I don't want solutions that find the GPS position of other points on the image; otherwise I could just get the GPS position of the target. I want to use the data from the map explicitly.</p>
<p>I've added a Python tag because that is my preference, but it can be something else.</p>
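<p>My current line of thinking, for what it's worth: two point correspondences pin down a similarity transform (translation, rotation, uniform scale), and at ~30 m scales lon/lat is approximately planar, so complex arithmetic gives a compact sketch (all coordinate values below are made up):</p>
<pre><code># pixel coords as complex numbers; pixel y grows downward, so negate it,
# otherwise the fitted transform can come out mirrored
p1, p2 = complex(120, -80), complex(410, -300)  # red dot, purple dot
q1, q2 = complex(-71.0601, 42.3581), complex(-71.0598, 42.3584)  # their (lon, lat)

a = (q2 - q1) / (p2 - p1)   # one complex factor encodes rotation + scale
b = q1 - a * p1             # translation
green = complex(250, -190)  # pixel location of the green dot
target = a * green + b
print(target.real, target.imag)  # approximate lon, lat of the green dot
</code></pre>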
| <python><geometry><geolocation><gps><folium> | 2023-08-05 07:32:34 | 2 | 694 | kevinlinxc |
76,840,054 | 12,091,935 | segmentation fault when running cvxpy in modelica buildings python module | <p>I am not sure if this is a CVXPy, Modelica, Python, or maybe even a C problem (I believe CVXPy and/or the SCS solver call functions written in C). I am simulating a microgrid using OpenModelica with a Python module that calls a CVXPy function as a control algorithm. I tested the function using Python 3.8 (Modelica Buildings does not use newer versions of Python) and it runs without error; however, when I run it in OpenModelica I get a segmentation fault and it crashes. Is there a way to adjust memory in OpenModelica, or what else may be causing the error? Note: for the optimization to work, it needs to gather data from Modelica for the first 96 iterations; the optimizer is not called until the 97th iteration, and in the standalone Python test it runs fine. Both Modelica and CVXPy work independently; I only see the problem when Modelica tries to run CVXPy.</p>
<p>Python function test:</p>
<p>Input:</p>
<pre><code>for i in range(100):
tou_cvx_one_day([abs(df.building_load[i]), abs(df.solar[i]), 0.5, i * 900])
</code></pre>
<p>Output:</p>
<pre><code>===============================================================================
CVXPY
v1.3.1
===============================================================================
(CVXPY) Aug 01 02:03:02 PM: Your problem has 480 variables, 960 constraints, and 0 parameters.
(CVXPY) Aug 01 02:03:02 PM: It is compliant with the following grammars: DCP, DQCP
(CVXPY) Aug 01 02:03:02 PM: (If you need to solve this problem multiple times, but with different data, consider using parameters.)
(CVXPY) Aug 01 02:03:02 PM: CVXPY will first compile your problem; then, it will invoke a numerical solver to obtain a solution.
-------------------------------------------------------------------------------
Compilation
-------------------------------------------------------------------------------
(CVXPY) Aug 01 02:03:02 PM: Compiling problem (target solver=SCS).
(CVXPY) Aug 01 02:03:02 PM: Reduction chain: Dcp2Cone -> CvxAttr2Constr -> ConeMatrixStuffing -> SCS
(CVXPY) Aug 01 02:03:02 PM: Applying reduction Dcp2Cone
(CVXPY) Aug 01 02:03:02 PM: Applying reduction CvxAttr2Constr
(CVXPY) Aug 01 02:03:02 PM: Applying reduction ConeMatrixStuffing
(CVXPY) Aug 01 02:03:03 PM: Applying reduction SCS
(CVXPY) Aug 01 02:03:03 PM: Finished problem compilation (took 5.538e-01 seconds).
-------------------------------------------------------------------------------
Numerical solver
-------------------------------------------------------------------------------
(CVXPY) Aug 01 02:03:03 PM: Invoking solver SCS to obtain a solution.
------------------------------------------------------------------
SCS v3.2.3 - Splitting Conic Solver
(c) Brendan O Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem: variables n: 483, constraints m: 1248
cones: z: primal zero / dual free vars: 288
l: linear vars: 960
settings: eps_abs: 1.0e-05, eps_rel: 1.0e-05, eps_infeas: 1.0e-07
alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
max_iters: 100000, normalize: 1, rho_x: 1.00e-06
acceleration_lookback: 10, acceleration_interval: 10
lin-sys: sparse-direct-amd-qdldl
nnz(A): 1917, nnz(P): 0
------------------------------------------------------------------
iter | pri res | dua res | gap | obj | scale | time (s)
------------------------------------------------------------------
0| 4.50e+02 1.96e+01 2.85e+03 1.43e+03 1.00e-01 1.59e-02
250| 4.05e-01 6.32e-03 1.90e+00 -1.81e+01 9.53e-03 2.55e-02
500| 2.89e-01 3.57e-03 1.79e+00 -1.95e+01 9.53e-03 3.37e-02
750| 1.07e-01 2.15e-03 2.78e+00 -2.09e+01 9.53e-03 4.31e-02
1000| 1.14e-01 1.38e-03 4.14e-01 -2.29e+01 9.53e-03 5.34e-02
1250| 8.00e-02 1.09e-03 1.10e+00 -2.40e+01 9.53e-03 6.46e-02
1500| 5.70e-02 1.43e-03 1.39e+00 -2.44e+01 9.53e-03 7.35e-02
1750| 3.43e+02 3.29e+00 1.20e+02 4.40e+01 9.53e-03 8.15e-02
2000| 4.36e-02 7.66e-04 8.12e-01 -2.36e+01 9.53e-03 8.88e-02
2250| 2.97e-02 5.15e-04 9.12e-01 -2.37e+01 9.53e-03 9.60e-02
2500| 2.24e-02 5.13e-04 9.24e-01 -2.38e+01 9.53e-03 1.03e-01
2750| 3.10e-02 4.82e-04 2.37e+00 -2.56e+01 9.53e-03 1.11e-01
3000| 3.38e-02 2.01e-04 1.73e+00 -2.53e+01 9.53e-03 1.18e-01
3250| 3.62e-02 3.74e-04 1.10e+00 -2.50e+01 9.53e-03 1.26e-01
3500| 1.48e-02 2.73e-04 1.64e-01 -2.45e+01 9.53e-03 1.33e-01
3750| 2.95e-02 2.56e-04 8.51e-01 -2.49e+01 9.53e-03 1.40e-01
4000| 2.20e-02 2.00e-04 4.30e-02 -2.45e+01 9.53e-03 1.47e-01
4250| 2.46e-02 2.71e-04 2.70e-01 -2.46e+01 9.53e-03 1.55e-01
4500| 2.12e-03 3.13e-05 2.26e-01 -2.44e+01 9.53e-03 1.62e-01
4750| 1.29e-03 3.72e-05 2.15e-01 -2.46e+01 9.53e-03 1.69e-01
5000| 6.15e-03 6.53e-05 2.88e-01 -2.46e+01 9.53e-03 1.76e-01
5250| 2.33e-03 4.50e-05 2.99e-01 -2.43e+01 9.53e-03 1.83e-01
5500| 4.44e-03 4.87e-05 2.29e-01 -2.46e+01 9.53e-03 1.90e-01
5750| 3.85e-03 3.49e-05 1.83e-01 -2.44e+01 9.53e-03 1.97e-01
6000| 8.10e-03 1.17e-04 5.12e-01 -2.47e+01 9.53e-03 2.05e-01
6250| 1.59e-02 5.62e-05 1.94e-01 -2.46e+01 9.53e-03 2.13e-01
6500| 1.24e-02 3.67e-05 9.37e-02 -2.44e+01 9.53e-03 2.25e-01
6750| 2.92e-03 2.45e-05 6.63e-02 -2.45e+01 9.53e-03 2.35e-01
7000| 2.63e-03 1.49e-05 9.56e-02 -2.44e+01 9.53e-03 2.42e-01
7250| 2.33e-03 1.58e-05 9.63e-02 -2.45e+01 9.53e-03 2.50e-01
7500| 4.15e+00 3.96e-02 2.37e+00 -2.33e+01 9.53e-03 2.58e-01
7750| 9.16e-04 1.74e-05 1.05e-01 -2.45e+01 9.53e-03 2.65e-01
8000| 1.63e-03 3.81e-05 1.82e-01 -2.44e+01 9.53e-03 2.72e-01
8250| 1.02e-03 6.16e-05 1.25e-02 -2.45e+01 9.53e-03 2.79e-01
8500| 3.78e-04 5.75e-06 3.63e-02 -2.45e+01 9.53e-03 2.86e-01
8750| 3.77e-04 3.33e-06 2.26e-04 -2.45e+01 9.53e-03 2.94e-01
------------------------------------------------------------------
status: solved
timings: total: 2.94e-01s = setup: 8.91e-03s + solve: 2.85e-01s
lin-sys: 2.00e-01s, cones: 2.23e-02s, accel: 1.19e-02s
------------------------------------------------------------------
objective = -24.469205
------------------------------------------------------------------
-------------------------------------------------------------------------------
Summary
-------------------------------------------------------------------------------
(CVXPY) Aug 01 02:03:03 PM: Problem status: optimal
(CVXPY) Aug 01 02:03:03 PM: Optimal value: -2.447e+01
(CVXPY) Aug 01 02:03:03 PM: Compilation took 5.538e-01 seconds
(CVXPY) Aug 01 02:03:03 PM: Solver (including time spent in interface) took 3.015e-01 seconds
</code></pre>
<p>Openmodelica buildings output:</p>
<blockquote>
<p>Process crashed Simulation process failed. Exited with code 11.
/tmp/OpenModelica_sigi-laptop/OMEdit/conf_paper_microgrid/conf_paper_microgrid -port=45477 -logFormat=xmltcp -override=startTime=0,stopTime=3.15351e+07,stepSize=899.974,tolerance=1e-06,solver=dassl,outputFormat=mat,variableFilter=.* -r=/tmp/OpenModelica_sigi-laptop/OMEdit/conf_paper_microgrid/conf_paper_microgrid_res.mat -w -lv=LOG_STATS -inputPath=/tmp/OpenModelica_sigi-laptop/OMEdit/conf_paper_microgrid -outputPath=/tmp/OpenModelica_sigi-laptop/OMEdit/conf_paper_microgrid
The initialization finished successfully without homotopy method.
Process crashed
=============================================================================== CVXPY v1.3.2 =============================================================================== (CVXPY) Aug 04 06:37:08 PM: Your problem has 480 variables, 960 constraints, and 0 parameters. (CVXPY) Aug 04 06:37:08 PM: It is compliant with the following grammars: DCP, DQCP (CVXPY) Aug 04 06:37:08 PM: (If you need to solve this problem multiple times, but with different data, consider using parameters.) (CVXPY) Aug 04 06:37:08 PM: CVXPY will first compile your problem; then, it will invoke a numerical solver to obtain a solution. ------------------------------------------------------------------------------- Compilation ------------------------------------------------------------------------------- (CVXPY) Aug 04 06:37:08 PM: Compiling problem (target solver=SCS). (CVXPY) Aug 04 06:37:08 PM: Reduction chain: Dcp2Cone -> CvxAttr2Constr -> ConeMatrixStuffing -> SCS (CVXPY) Aug 04 06:37:08 PM: Applying reduction Dcp2Cone (CVXPY) Aug 04 06:37:08 PM: Applying reduction CvxAttr2Constr (CVXPY) Aug 04 06:37:08 PM: Applying reduction ConeMatrixStuffing (CVXPY) Aug 04 06:37:08 PM: Applying reduction SCS (CVXPY) Aug 04 06:37:08 PM: Finished problem compilation (took 5.609e-01 seconds). ------------------------------------------------------------------------------- Numerical solver ------------------------------------------------------------------------------- (CVXPY) Aug 04 06:37:08 PM: Invoking solver SCS to obtain a solution.
Limited backtrace at point of segmentation fault /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7fa45551d420] /home/sigi-laptop/.local/lib/python3.8/site-packages/_scs_direct.cpython-38-x86_64-linux-gnu.so(QDLDL_factor+0x82)[0x7fa4203f1722] /home/sigi-laptop/.local/lib/python3.8/site-packages/_scs_direct.cpython-38-x86_64-linux-gnu.so(+0x4b0e)[0x7fa4203ecb0e] /home/sigi-laptop/.local/lib/python3.8/site-packages/_scs_direct.cpython-38-x86_64-linux-gnu.so(scs_init_lin_sys_work+0x580)[0x7fa4203ed280] /home/sigi-laptop/.local/lib/python3.8/site-packages/_scs_direct.cpython-38-x86_64-linux-gnu.so(scs_init+0x4c3)[0x7fa4203fde33] /home/sigi-laptop/.local/lib/python3.8/site-packages/_scs_direct.cpython-38-x86_64-linux-gnu.so(+0x17dc1)[0x7fa4203ffdc1] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x2491b9)[0x7fa4568491b9] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyObject_Call+0x74)[0x7fa4568a8994] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7fa45667aa7a] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7fa4567cae4b] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7fa4568a8124] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_FastCallDict+0x79)[0x7fa4568aa3a9] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_Call_Prepend+0xcd)[0x7fa4568aa52d] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x243d47)[0x7fa456843d47] /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x2491b9)[0x7fa4568491b9]</p>
</blockquote>
<p>Thank you for the help.</p>
| <python><python-3.x><modelica><openmodelica><cvxpy> | 2023-08-05 03:37:05 | 0 | 435 | Luis Enriquez-Contreras |
76,840,036 | 2,537,486 | Is there a way to suppress legend entry when plotting directly from pandas? | <p>I need to do multiple plot operations on a Matplotlib axis <code>ax</code>. The data are in a pandas DataFrame, so it is natural to use multiple plotting commands:</p>
<pre><code>df.plot(..., ax=ax)
</code></pre>
<p>But for some of those, I don't want a label entry in the legend. I have tried to add <code>label="_nolegend_"</code> but that does not work.</p>
<p>The only workaround I found is to first extract the data from the DataFrame, then plot them in a loop with individual <code>ax.plot</code> commands, adding a <code>label</code> parameter where I want it in the legend. This is tedious.</p>
<p>NOTE: I am not asking how to remove the legend altogether.</p>
<p>EDIT: After @Vitalizzare posted his nice answer (thank you!): Of course, I tried that because it's in the documentation, but it does not work in my case. It isn't easy to understand why, but I think it has to do with the number of curves plotted. Consider this code:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

dg = pd.DataFrame(data=np.random.rand(400).reshape(10,40))
fig1, ax = plt.subplots(figsize=(12, 4))
dg.iloc[1:8].plot(ax=ax,style='.-',color='lightgray',legend=False)
dg[[32,33]].iloc[1:8].plot(ax=ax)
</code></pre>
<p>Something very strange happens here. I get inconsistent results! Sometimes I get a figure with NO legend. I have no idea why. I am running mpl 3.5.1 and pandas 1.4.2. In my original notebook, I cannot get the correct behavior.</p>
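<p>The only other workaround I have found so far is to rebuild the legend by hand after plotting, which at least avoids re-plotting everything (sketch):</p>
<pre><code>dg.iloc[1:8].plot(ax=ax, style='.-', color='lightgray', legend=False)
dg[[32, 33]].iloc[1:8].plot(ax=ax, legend=False)
ax.legend(handles=ax.get_lines()[-2:])  # legend built only from the last two lines
</code></pre>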
| <python><pandas><matplotlib> | 2023-08-05 03:24:08 | 1 | 1,749 | germ |
76,839,846 | 7,077,532 | groupby() Col A, count Col C, and count unique column B | <p>I have a dataframe with multiple columns. I want to group by column A (which is a person's name). Then I want to count the <strong>total</strong> number of rows in column C grouped by column A. I also want to count the number of <strong>unique</strong> values in column B grouped by column A.</p>
<p>Is there a way to do this in Python?</p>
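<p>For a concrete sketch of the kind of output I'm after, I put together this attempt with named aggregation; is this the idiomatic way? (Column names are mine.)</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'A': ['ann', 'ann', 'bob'],
                   'B': ['x', 'x', 'y'],
                   'C': [1, 2, 3]})

out = df.groupby('A').agg(c_rows=('C', 'count'),      # total rows of C per person
                          b_unique=('B', 'nunique'))  # distinct values of B per person
print(out)
</code></pre>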
| <python><pandas><group-by><count><pivot> | 2023-08-05 01:52:01 | 1 | 5,244 | PineNuts0 |
76,839,745 | 2,192,824 | How to import the module in python | <p>I'm having the following project structure</p>
<p><a href="https://i.sstatic.net/XJ4oD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XJ4oD.png" alt="enter image description here" /></a></p>
<p>In the file task_functions.py, I want to import the function defined in utility_functions.py and this is what I have in the files</p>
<p>task_functions.py</p>
<pre><code>from utility.utility_functions import utility_func
def task_function() -> None:
    print("I'm calling utility function now")
utility_func()
</code></pre>
<p>utility_functions.py</p>
<pre><code>def utility_func() -> None:
print('This is my utility function')
</code></pre>
<p>When I have the following launch.json, I was able to run it in debug mode.</p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"env": {"PYTHONPATH": "${workspaceRoot}"},
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
]
}
</code></pre>
<p>However, if I run the file task_functions.py in 'Run' mode in VS Code, it always throws the exception "ModuleNotFoundError: No module named 'utility'". I tried printing out the working directory when running the script; it points to the root directory, that is, "DIRECTORY_IMPORT". I also tried changing the current directory using os.chdir(..) to move one directory up, or to another directory, and it still didn't work. I guess I'm misunderstanding something fundamental here. What is the right way to do the import? Thanks!</p>
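<p><strong>Update</strong>: the only workaround I have found so far is to push the project root onto <code>sys.path</code> by hand before the import, which feels wrong (sketch, assuming <code>task_functions.py</code> sits one folder below the project root):</p>
<pre><code>import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parents[1]))  # project root

from utility.utility_functions import utility_func
</code></pre>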
| <python><python-3.x><import><python-import><importerror> | 2023-08-05 00:50:47 | 1 | 417 | Ames ISU |
76,839,521 | 3,446,927 | What alternative deployment options are there for Azure Functions that remove build automation with Oryx? | <p>I am trying to deploy an Azure Function to a vnet attached Azure App Service and am receiving the following error:</p>
<pre><code>Error: System.AggregateException: Http request to retrieve the SDKs available to download from 'https://oryx-cdn.microsoft.io' failed. Please ensure that your network configuration allows traffic to required Oryx dependencies, as documented in 'https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies'
</code></pre>
<p>The error message is clear and provides a good link to documentation that explains what needs to change in my network environment to enable the App Service to reach out to the internet to pull the required Oryx dependencies.</p>
<p>Are there any other options for me to deploy my Function App that eliminate the requirement to change my networking configuration, or is enabling Oryx build automation the only way I will be able to deploy my Function App?</p>
| <python><azure><azure-functions><azure-web-app-service> | 2023-08-04 23:03:49 | 1 | 539 | Joe Plumb |
76,839,479 | 5,431,734 | bypassing interactive mode detection in python | <p>I am importing a third-party package, and that causes some statements to get printed to the console, in Jupyter notebooks, etc., because inside the <code>__init__.py</code> of that package there is a check like:</p>
<pre><code>if hasattr(sys,'ps1'):
print('hello world')
</code></pre>
<p>Is there any way I could circumvent this from my end?</p>
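<p>The best I have come up with so far is to hide <code>sys.ps1</code> just for the duration of the import and restore it afterwards (sketch; <code>thirdparty</code> stands in for the real package):</p>
<pre><code>import sys

ps1 = getattr(sys, 'ps1', None)
if ps1 is not None:
    del sys.ps1          # the package's hasattr(sys, 'ps1') check now fails

try:
    import thirdparty    # placeholder for the noisy package
finally:
    if ps1 is not None:
        sys.ps1 = ps1    # put interactive-mode detection back
</code></pre>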
| <python> | 2023-08-04 22:50:12 | 1 | 3,725 | Aenaon |
76,839,449 | 12,300,981 | Why do my contour maps look different depending on data input? | <p>I'm generating contour maps from a single maximum assuming a normal distribution. However, for some reason, my contour maps look different depending on the value of the single maximum. I have 3 examples of this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
data_1=[8.58 ,8.584 ,8.569 ,8.538]
data_2=[119.614, 119.633, 119.697, 119.96 ]
#data_1=[8.601 ,8.605 ,8.607 ,8.624]
#data_2=[120.976 ,120.988 ,120.961, 120.988]
#data_1=[8.901, 8.907 ,8.902, 8.89 ]
#data_2= [121.547, 121.56, 121.547 ,121.514]
x_axis=np.linspace((min(data_1)-0.01),(max(data_1)+0.01),100)
y_axis=np.linspace((min(data_2)-0.01),(max(data_2)+0.01),100)
spread=[[2e-6, 1e-9], [1e-9, 2e-6]]
x,y=np.meshgrid(x_axis,y_axis)
pos=np.dstack((x,y))
pdf_sum=np.zeros_like(x)
for points in np.stack((data_1,data_2),axis=-1):
pdf_sum+=(multivariate_normal(points,spread)).pdf(pos)
plt.contour(x,y,pdf_sum,levels=5,colors='black')
plt.show()
</code></pre>
<p>Comparing the different datasets shows weird behavior. The 2nd set looks best, but I don't know what the black line in the first data set is, or why some of the contours in the 3rd data set only have 3 levels instead of 5.</p>
<p>Edit: It appears there is also a large circle around my contours, which I don't quite understand (I believe the black lines are coming from this).</p>
<p><a href="https://i.sstatic.net/AkbGP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AkbGP.png" alt="First Data Set" /></a></p>
<p><a href="https://i.sstatic.net/JWlyC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JWlyC.png" alt="Second Data Set" /></a></p>
<p><a href="https://i.sstatic.net/67qrB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/67qrB.png" alt="Third Data Set" /></a></p>
<p><a href="https://i.sstatic.net/y79Ca.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y79Ca.png" alt="enter image description here" /></a></p>
| <python><numpy><matplotlib><scipy> | 2023-08-04 22:41:06 | 1 | 623 | samman |
76,839,366 | 7,385,563 | tf_rep.export_graph(tf_model_path): KeyError: 'input.1' | <p>I am trying to convert an <code>onnx</code> model to <code>tflite</code>, and I'm facing an error executing the line <code>tf_rep.export_graph(tf_model_path)</code>. This question was asked on SO before, but no answer provided a definitive solution.</p>
<p>Requirements installed: <code>tensorflow: 2.12.0</code>, <code>onnx 1.14.0</code>, <code>onnx-tf 1.10.0</code>, <code>Python 3.10.12</code></p>
<pre><code> import torch
import onnx
import tensorflow as tf
import onnx_tf
from torchvision.models import resnet50
# Load the PyTorch ResNet50 model
pytorch_model = resnet50(pretrained=True)
pytorch_model.eval()
# Export the PyTorch model to ONNX format
input_shape = (1, 3, 224, 224)
dummy_input = torch.randn(input_shape)
onnx_model_path = 'resnet50.onnx'
torch.onnx.export(pytorch_model, dummy_input, onnx_model_path, opset_version=12, verbose=False)
# Load the ONNX model
onnx_model = onnx.load(onnx_model_path)
# Convert the ONNX model to TensorFlow format
    tf_model_path = 'resnet50.pb'
from onnx_tf.backend import prepare
tf_rep = prepare(onnx_model)
tf_rep.export_graph(tf_model_path) #ERROR
</code></pre>
<p>Error:</p>
<pre><code>WARNING:absl:`input.1` is not a valid tf.function parameter name. Sanitizing to `input_1`.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-f35b83c104b8> in <cell line: 8>()
6 tf_model_path = 'resnet50'
7 tf_rep = prepare(onnx_model)
----> 8 tf_rep.export_graph(tf_model_path)
35 frames
/usr/local/lib/python3.10/dist-packages/onnx_tf/handlers/backend/conv_mixin.py in tf__conv(cls, node, input_dict, transpose)
17 do_return = False
18 retval_ = ag__.UndefinedReturnValue()
---> 19 x = ag__.ld(input_dict)[ag__.ld(node).inputs[0]]
20 x_rank = ag__.converted_call(ag__.ld(len), (ag__.converted_call(ag__.ld(x).get_shape, (), None, fscope),), None, fscope)
21 x_shape = ag__.converted_call(ag__.ld(tf_shape), (ag__.ld(x), ag__.ld(tf).int32), None, fscope)
KeyError: in user code:
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/backend_tf_module.py", line 99, in __call__ *
output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node,
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/backend.py", line 347, in _onnx_node_to_tensorflow_op *
return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/handlers/handler.py", line 59, in handle *
return ver_handle(node, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/handlers/backend/conv.py", line 15, in version_11 *
return cls.conv(node, kwargs["tensor_dict"])
File "/usr/local/lib/python3.10/dist-packages/onnx_tf/handlers/backend/conv_mixin.py", line 29, in conv *
x = input_dict[node.inputs[0]]
KeyError: 'input.1'
</code></pre>
| <python><deep-learning><tensorflow-lite><onnx> | 2023-08-04 22:14:57 | 2 | 691 | afsara_ben |
76,839,162 | 602,506 | How to denote a generic method constrained by class' generic typevar? | <p>I'm not sure how to express the equivalent of this Java generic by using Python's type hinting. The interface has a generic, and the method has its own generic that must extend the class's generic:</p>
<pre class="lang-java prettyprint-override"><code>public interface Example<T> {
<T_SUB extends T> T_SUB doSomething(T_SUB input);
}
</code></pre>
<p>An example implementation class, showing that the method must accept things that extend <code>Serializable</code>, and the return type is always equal to the type of the parameter.</p>
<pre class="lang-java prettyprint-override"><code>public class Impl implements Example<Serializable> {
@Override
public <T_SUB extends Serializable> T_SUB doSomething(T_SUB input) {
return input;
}
}
</code></pre>
<p>And example usages:</p>
<pre class="lang-java prettyprint-override"><code>Impl impl = new Impl();
StringBuilder result = impl.doSomething(new StringBuilder());
Duration result2 = impl.doSomething(Duration.ofMinutes(1));
</code></pre>
<p>I've tried various approaches in Python, hoping to find one that passes mypy checking, but the answer is evading me.</p>
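<p>For reference, the closest I have gotten is a module-level <code>TypeVar</code> whose bound is fixed to the concrete type, which covers the <code>Impl</code> case but not the generic interface (as far as I can tell, a <code>TypeVar</code> bound cannot reference the class's own <code>TypeVar</code>):</p>
<pre><code>from typing import TypeVar

class Serializable:  # stand-in for Java's Serializable
    ...

T_SUB = TypeVar("T_SUB", bound=Serializable)

class Impl:
    def do_something(self, input: T_SUB) -> T_SUB:
        return input  # the caller gets back exactly the subtype it passed in
</code></pre>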
| <python><python-typing> | 2023-08-04 21:22:54 | 0 | 1,160 | Shannon |
76,839,139 | 1,554,020 | How to copy elements of one pytorch tensor at given indices into another tensor without intermediate allocation or looping | <p>Given</p>
<pre><code>import torch
a: torch.Tensor
b: torch.Tensor
assert a.shape[1:] == b.shape[1:]
idx = torch.randint(b.shape[0], [a.shape[0]])
</code></pre>
<p>I want to do</p>
<pre><code>b[...] = a[idx]
</code></pre>
<p>But without the intermediate buffer produced by <code>a[idx]</code> and without looping over <code>idx</code>. How do I do this?</p>
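<p><strong>Edit</strong>: <code>torch.index_select</code> appears to accept an <code>out=</code> argument; would something like the line below count? (Note it requires <code>idx</code> to index rows of <code>a</code> and to have length <code>b.shape[0]</code>, so the <code>randint</code> arguments above may need swapping.)</p>
<pre><code>torch.index_select(a, 0, idx, out=b)  # gathers rows of a at idx directly into b
</code></pre>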
| <python><pytorch><slice><tensor> | 2023-08-04 21:15:13 | 1 | 14,259 | yuri kilochek |
76,838,859 | 3,246,693 | Dataframe column with quoted CSV to named dataframe columns | <p>I am pulling some JSON-formatted log data out of my SIEM and into a pandas dataframe. I am able to easily convert the JSON into multiple columns within the dataframe, but there is a "message" field in the JSON that contains a quoted CSV, like this:</p>
<pre><code># dummy data
dfMyData = pd.DataFrame({"_raw": [\
"""{"timestamp":1691096387000,"message":"20230803 20:59:47,ip-123-123-123-123,mickey,321.321.321.321,111111,10673010,type,,'I am a, quoted, string, with commas,',0,,","logstream":"Blah1","loggroup":"group 1"}""",
"""{"timestamp":1691096386000,"message":"20230803 21:00:47,ip-456-456-456-456,mouse,654.654.654.654,222222,10673010,type,,'I am another quoted string',0,,","logstream":"Blah2","loggroup":"group 2"}"""
]})
# Column names for the _raw.message field that is generated.
MessageColumnNames = ["Timestamp","dest_host","username","src_ip","port","number","type","who_knows","message_string","another_number","who_knows2","who_knows3"]
# Convert column to json object/dict
dfMyData['_raw'] = dfMyData['_raw'].map(json.loads)
# convert JSON into columns within the dataframe
dfMyData = pd.json_normalize(dfMyData.to_dict(orient='records'))
</code></pre>
<p>I've seen this done before with <code>str.split()</code> to split on columns and then concat it back to the original dataframe, however the <code>str.split</code> method doesn't handle quoted values within the CSV. <code>pd.read_csv</code> can handle the quoted CSV correctly, but I can't figure out how to apply it across the dataframe and expand the output of that into new dataframe columns.</p>
<p>Additionally, when I split <code>dfMyData['_raw.message']</code> out into new columns, I'd also like to supply a list of column names for the data and have the new columns be created with those names.</p>
<p>Anyone know of an easy way to split a quoted CSV string in a dataframe column into new named columns within the dataframe?</p>
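<p>Roughly what I'm imagining is feeding the whole column through <code>pd.read_csv</code> in one go (an untested sketch; I haven't managed to get from here to a clean solution):</p>
<pre><code>import io

csv_blob = "\n".join(dfMyData['_raw.message'])
split_cols = pd.read_csv(
    io.StringIO(csv_blob),
    header=None,
    names=MessageColumnNames,
    quotechar="'",  # the embedded CSV uses single quotes
)
dfMyData = pd.concat([dfMyData.reset_index(drop=True), split_cols], axis=1)
</code></pre>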
| <python><pandas><dataframe> | 2023-08-04 20:05:44 | 3 | 803 | user3246693 |
76,838,830 | 10,853,071 | Converting timestamp to date and to period | <p>I know that, looking at this example code, it might seem inefficient, but on the original DF I must apply a regex function to a column, so I need to do it through <code>iterrows</code>.</p>
<p>My question is: how do I convert the <code>date</code> column of the <code>data3</code> dataframe to a <code>period('M')</code> column? I want to run a groupby on it.</p>
<pre><code>import pandas as pd
import datetime as dt
ts = dt.datetime.now()
data = pd.DataFrame({
    'status': ['pending', 'pending', 'pending'],
    'brand': ['brand_1', 'brand_2', 'brand_3'],
    'date': [pd.Timestamp(dt.datetime.now()), pd.Timestamp(dt.datetime.now()), pd.Timestamp(dt.datetime.now())]})

data2 = list()
for index, row in data.iterrows():
    a = row['status']
    b = row['brand']
    c = row['date'].date()
    data2.append((a, b, c))
data3 = pd.DataFrame(data=data2,columns=['status','brand','date'])
</code></pre>
<pre><code> status brand date
0 pending brand_1 2023-08-04
1 pending brand_2 2023-08-04
2 pending brand_3 2023-08-04
</code></pre>
<p>So, when I then try to run a groupby, I can't get it working.</p>
<pre><code>a = data3.groupby([data3['date'].dt.to_period('M')], observed=True).aggregate({'brand':'count'})
</code></pre>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[26], line 1
----> 1 a = data3.groupby([data3['date'].dt.to_period('M')], observed=True).aggregate({'brand':'count'})
2 a
File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\generic.py:5989, in NDFrame.__getattr__(self, name)
5982 if (
5983 name not in self._internal_names_set
5984 and name not in self._metadata
5985 and name not in self._accessors
5986 and self._info_axis._can_hold_identifiers_and_holds_name(name)
5987 ):
5988 return self[name]
-> 5989 return object.__getattribute__(self, name)
File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\accessor.py:224, in CachedAccessor.__get__(self, obj, cls)
221 if obj is None:
222 # we're accessing the attribute of the class, i.e., Dataset.geo
223 return self._accessor
--> 224 accessor_obj = self._accessor(obj)
225 # Replace the property with the accessor object. Inspired by:
226 # https://www.pydanny.com/cached-property.html
227 # We need to use object.__setattr__ because we overwrite __setattr__ on
228 # NDFrame
...
577 elif is_period_dtype(data.dtype):
578 return PeriodProperties(data, orig)
--> 580 raise AttributeError("Can only use .dt accessor with datetimelike values")
AttributeError: Can only use .dt accessor with datetimelike values
</code></pre>
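<p>My guess is that after <code>iterrows</code> the column holds plain <code>datetime.date</code> objects (dtype <code>object</code>), so <code>.dt</code> no longer applies; what I'm unsure about is whether converting back first is the right fix:</p>
<pre><code># guessing: restore a datetime64 dtype before using the .dt accessor
data3['date'] = pd.to_datetime(data3['date'])
a = data3.groupby(data3['date'].dt.to_period('M'), observed=True).aggregate({'brand': 'count'})
</code></pre>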
| <python><pandas> | 2023-08-04 19:58:23 | 1 | 457 | FábioRB |
76,838,751 | 9,855,588 | PySpark monotonically_increasing_id results differ locally and on AWS EMR | <p>I created a small function that would assign a composite id to each row to essentially group rows into smaller subsets, given a subset size. Locally on my computer the logic works flawlessly. Once I deploy and test the Spark application using PySpark on AWS EMR, the results are completely different.</p>
<p>Subset logic:</p>
<pre><code>from pyspark.sql.functions import col, floor, monotonically_increasing_id

partition_column = "partition"
partitioned_df = dataframe.withColumn(
    partition_column, floor(monotonically_increasing_id() / subset_length)
)
partitioned_df_ids = (
    partitioned_df.select(partition_column)
    .distinct()
    .rdd.flatMap(lambda x: x)
    .collect()
)
for partition_id in partitioned_df_ids:
    temp_df = partitioned_df.filter(col(partition_column) == partition_id)
    dataframe_refs.append(temp_df)
</code></pre>
<p>Given this function and a dataframe with 77,700 rows, if I set the subset length to 50,000 for example, I would get 2 smaller dataframes. One with 50,000 rows and the other with 27,700 rows.
However, when I run the same thing on AWS EMR, I'm seeing roughly 26 much smaller subsets, none larger than about 3,200 rows.</p>
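<p>My working theory (unconfirmed) is that <code>monotonically_increasing_id()</code> encodes the partition index in the upper bits of the id, so the ids are only contiguous within a partition; locally everything fits in one partition, while on EMR the data is spread across many. A quick way to inspect this:</p>
<pre><code>from pyspark.sql.functions import monotonically_increasing_id, spark_partition_id

dataframe.select(spark_partition_id(), monotonically_increasing_id()).show()
</code></pre>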
<p>Possible solution (for review):</p>
<pre><code>import math

from pyspark.sql import Window
from pyspark.sql.functions import col, lit, row_number

dataframe_refs = []
partition_window = Window.orderBy(lit(1))
ranges_to_subset_by = []
num_of_rows = dataframe.count()
num_of_splits = math.ceil(num_of_rows / subset_length)
remainder = num_of_rows
start = 0
for _ in range(num_of_splits):
    print(_)
    end = start + subset_length if _ != num_of_splits - 1 else start + remainder
    ranges_to_subset_by.append(
        (start + 1, end)
    )
    remainder -= subset_length
    start = end
print(ranges_to_subset_by)
df = dataframe.withColumn("row_number", row_number().over(partition_window))
df.show()
for start, stop in ranges_to_subset_by:
    dataframe_refs.append(df.filter(col("row_number").between(start, stop)))
</code></pre>
| <python><apache-spark><pyspark><amazon-emr><distributed-computing> | 2023-08-04 19:39:41 | 1 | 3,221 | dataviews |
76,838,740 | 726,730 | QGraphicsView focusEvent on proxy widget | <p>In PyQt5 I have a <code>QGraphicsView</code> widget, and in its scene I use <code>addWidget</code> to add a <code>QFrame</code> as a proxy widget.</p>
<p>Is there any way to call a method whenever the <code>QFrame</code> receives focus (for example via a mouse click)?</p>
<p>Example code:</p>
<p><strong>File: run_me.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from focus import Ui_Dialog
from PyQt5 import QtWidgets, QtCore, QtGui
from win32api import GetSystemMetrics
import sys
import time
class Run_me:
def __init__(self):
self.app = QtWidgets.QApplication(sys.argv)
self.Dialog = QtWidgets.QDialog()
self.ui = Ui_Dialog()
self.ui.setupUi(self.Dialog)
self.Dialog.showMaximized()
self.max_width = int(0.9*GetSystemMetrics(0))
self.max_height = int(0.85*GetSystemMetrics(1))
self.Dialog.setFixedWidth(self.max_width)
self.Dialog.setFixedHeight(self.max_height)
self.center()
self.frame = QtWidgets.QFrame()
self.verticalLayout = QtWidgets.QVBoxLayout(self.frame)
self.lineedit = QtWidgets.QLineEdit(self.frame)
self.verticalLayout.addWidget(self.lineedit)
self.scene = QtWidgets.QGraphicsScene()
self.ui.graphicsView.setScene(self.scene)
self.ui.graphicsView.setFixedHeight(self.ui.graphicsView.height())
self.ui.graphicsView.setFixedWidth(self.ui.graphicsView.width())
#self.ui.graphicsView.setSceneRect(0, 0, self.scene_width, self.total_height)
self.proxy = self.scene.addWidget(self.frame)
self.proxy.setPos(100,100)
self._filter = Filter(self)
self.frame.installEventFilter(self._filter)
sys.exit(self.app.exec())
def center(self):
screen_width = GetSystemMetrics(0)
screen_height = GetSystemMetrics(1)
dw = self.app.desktop()
taskbar_height = dw.screenGeometry().height() - dw.availableGeometry().height()
taskbar_width = dw.screenGeometry().width() - dw.availableGeometry().width()
if taskbar_height<100:
available_screen_height = screen_height - taskbar_height
available_screen_width = screen_width
else:
available_screen_height = screen_height
available_screen_width = screen_width - taskbar_width
qdialog_width = self.Dialog.frameSize().width()
qdialog_height = self.Dialog.frameSize().height()
x = (available_screen_width - qdialog_width) / 2
y = (available_screen_height - qdialog_height) / 2
width = self.Dialog.size().width()
height = self.Dialog.size().height()
self.Dialog.move(int(x),int(y))
self.Dialog.setFixedWidth(width)
self.Dialog.setFixedHeight(height)
def frame_focused(self):
print("Frame focused")
class Filter(QtCore.QObject):
def __init__(self,main_self):
super().__init__()
self.main_self = main_self
def eventFilter(self, widget, event):
# FocusIn event
if event.type() == QtCore.QEvent.FocusIn:
self.main_self.frame_focused()
return False
else:
# we don't care about other events
return False
if __name__ == "__main__":
program = Run_me()
</code></pre>
<p><strong>File focus.py</strong></p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'telephone_calls.ui'
#
# Created by: PyQt5 UI code generator 5.15.9
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(599, 351)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(Dialog.sizePolicy().hasHeightForWidth())
Dialog.setSizePolicy(sizePolicy)
self.verticalLayout = QtWidgets.QVBoxLayout(Dialog)
self.verticalLayout.setObjectName("verticalLayout")
self.graphicsView = QtWidgets.QGraphicsView(Dialog)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.graphicsView.sizePolicy().hasHeightForWidth())
self.graphicsView.setSizePolicy(sizePolicy)
self.graphicsView.setStyleSheet("QGraphicsView#graphicsView{\n"
" border: 1px solid #ABABAB;\n"
" background:white;\n"
"}")
self.graphicsView.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustIgnored)
self.graphicsView.setObjectName("graphicsView")
self.verticalLayout.addWidget(self.graphicsView)
self.retranslateUi(Dialog)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
_translate = QtCore.QCoreApplication.translate
Dialog.setWindowTitle(_translate("Dialog", "Στοιχεία ηχητικών κλήσεων"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Dialog = QtWidgets.QDialog()
ui = Ui_Dialog()
ui.setupUi(Dialog)
Dialog.show()
sys.exit(app.exec_())
</code></pre>
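<p>For completeness: I also wondered whether the filter could be installed on the proxy itself (a <code>QGraphicsProxyWidget</code> is a <code>QObject</code>, so the call at least runs), but I have not verified that it receives the focus events:</p>
<pre class="lang-py prettyprint-override"><code># untested alternative to self.frame.installEventFilter(self._filter)
self.proxy.installEventFilter(self._filter)
</code></pre>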
| <python><pyqt5><qgraphicsview> | 2023-08-04 19:38:10 | 1 | 2,427 | Chris P |
76,838,648 | 3,247,006 | @pytest.mark.skip vs @pytest.mark.xfail in Pytest | <p>I have a <a href="https://docs.pytest.org/en/7.4.x/reference/reference.html#pytest-mark-skip" rel="nofollow noreferrer">@pytest.mark.skip</a>-marked <code>test1()</code> and a <a href="https://docs.pytest.org/en/7.4.x/reference/reference.html#pytest-mark-xfail" rel="nofollow noreferrer">@pytest.mark.xfail</a>-marked <code>test2()</code>, whose assertions are both <code>True</code>, as shown below:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.mark.skip
def test1():
assert True
@pytest.mark.xfail
def test2():
assert True
</code></pre>
<p>Then I ran <code>pytest</code> and got the output shown below:</p>
<pre class="lang-none prettyprint-override"><code>$ pytest
=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 2 items
tests\test_store.py sX [100%]
============== 1 skipped, 1 xpassed in 0.10s ==============
</code></pre>
<p>Next, I have a <a href="https://docs.pytest.org/en/7.4.x/reference/reference.html#pytest-mark-skip" rel="nofollow noreferrer">@pytest.mark.skip</a>-marked <code>test1()</code> and a <a href="https://docs.pytest.org/en/7.4.x/reference/reference.html#pytest-mark-xfail" rel="nofollow noreferrer">@pytest.mark.xfail</a>-marked <code>test2()</code>, whose assertions are both <code>False</code>, as shown below:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.mark.skip
def test1():
assert False
@pytest.mark.xfail
def test2():
assert False
</code></pre>
<p>Then I ran <code>pytest</code> again; this time <code>test2</code> is reported as <code>xfailed</code> instead of <code>xpassed</code>:</p>
<pre class="lang-none prettyprint-override"><code>$ pytest
=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 2 items
tests\test_store.py sx [100%]
============== 1 skipped, 1 xfailed in 0.24s ==============
</code></pre>
<p>So, what is the difference between <code>@pytest.mark.skip</code> and <code>@pytest.mark.xfail</code>?</p>
| <python><unit-testing><testing><pytest><pytest-markers> | 2023-08-04 19:17:46 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,838,619 | 1,106,484 | How to disable scientific notation for small numbers in FastAPI + Postgres | <p>In my PostgreSQL database, I have a numeric type column with very small numbers (e.g. <code>0.000000022385</code>). I have a FastAPI endpoint which returns this data like so:</p>
<pre><code>@app.get("/data", response_model=List[models.Data_Read], tags=["DataModel"])
async def getall_data():
with Session(database.engine) as session:
query = select(models.data_row)
results = session.exec(query).all()
return results
</code></pre>
<p>When I access this endpoint's return value in my React front-end, it shows up like <code>2.2385e-8</code>. I want to avoid all such instances. I have tried to do it on the front-end but haven't found any robust method. Instead, I have to apply a workaround to every single such value. Is there any way I can achieve this in FastAPI or PostgreSQL?</p>
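<p>One idea I've been considering (untested; the field name is a guess) is forcing plain-decimal rendering at the response-model level via Pydantic's <code>json_encoders</code>, though that turns the value into a string:</p>
<pre><code>from decimal import Decimal

from pydantic import BaseModel


class Data_Read(BaseModel):
    value: Decimal  # hypothetical field holding the small number

    class Config:
        # format(d, 'f') renders 0.000000022385 instead of 2.2385E-8
        json_encoders = {Decimal: lambda d: format(d, 'f')}
</code></pre>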
| <python><reactjs><postgresql><fastapi> | 2023-08-04 19:11:16 | 1 | 1,648 | Uzair A. |
76,838,569 | 16,912,844 | Python MySQL Connection Issue With `mysql-connector-python` | <p>I am trying to use the <code>mysql-connector-python</code> Python package to connect to a remote MySQL server. I am getting the below error. I followed the simple <a href="https://github.com/mysql/mysql-connector-python#getting-started" rel="nofollow noreferrer">Getting Started</a> code from GitHub.</p>
<pre><code>2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
</code></pre>
<p>I looked through many articles and forums and tried many methods, but it still gives the same error.</p>
<ul>
<li>I set the <code>bind-address</code> to <code>0.0.0.0</code></li>
<li>Firewall set to allow MySQL port (tested with replication and connection with another server)</li>
<li>I scanned the machine with nmap and made sure the port (<code>3306</code>) that I am connecting to is open.</li>
<li>I tried setting <code>skip-name-resolve</code> for <code>[mysqld]</code></li>
</ul>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>config = {
'user': '<user>',
'password': '<password>',
'host': '<IP>',
'port': <port>,
'database': '<database>'
}
try:
cnx = mysql.connector.connect(**config)
cursor = cnx.cursor()
query = ('SELECT * FROM student')
print(f'Executing Query: {query}')
cursor.execute(query)
for id, name in cursor:
print(f'ID: {id}, Name: {name}')
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print('Something is wrong with your user name or password')
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print('Database does not exist')
else:
print(err)
else:
cursor.close()
cnx.close()
</code></pre>
| <python><mysql> | 2023-08-04 19:02:51 | 1 | 317 | YTKme |
76,838,537 | 8,648,710 | Jax: generating random numbers under **JIT** | <p>I have a setup where I need to generate some random number that is consumed by <code>vmap</code> and then <code>lax.scan</code> later on:</p>
<pre class="lang-py prettyprint-override"><code>def generate_random(key: Array, upper_bound: int, lower_bound: int) -> int:
...
return num.astype(int)
def forward(key: Array, input: Array) -> Array:
k = generate_random(key, 1, 5)
computation = model(.., k, ..)
...
# Computing the forward pass
output = jax.vmap(forward, in_axes=.....
</code></pre>
<p>But attempting to convert <code>num</code> from a <code>jax.Array</code> to an <code>int32</code> causes the <code>ConcretizationError</code>.</p>
<p>This can be reproduced through this <strong>minimal example</strong>:</p>
<pre class="lang-py prettyprint-override"><code>@jax.jit
def t():
return jnp.zeros((1,)).item().astype(int)
o = t()
o
</code></pre>
<p>JIT requires that all the manipulations be of the Jax type.</p>
<p>But <code>vmap</code> uses JIT implicitly. And I would prefer to keep it for performance reasons.</p>
<hr />
<h3>My Attempt</h3>
<p>This was my hacky attempt:</p>
<pre class="lang-py prettyprint-override"><code>@partial(jax.jit, static_argnums=(1, 2))
def get_rand_num(key: Array, lower_bound: int, upper_bound: int) -> int:
key, subkey = jax.random.split(key)
random_number = jax.random.randint(subkey, shape=(), minval=lower_bound, maxval=upper_bound)
return random_number.astype(int)
def react_forward(key: Array, input: Array) -> Array:
k = get_rand_num(key, 1, MAX_ITERS)
# forward pass the model without tracking grads
intermediate_array = jax.lax.stop_gradient(model(input, k)) # THIS LINE ERRORS OUT
...
return ...
a = jnp.zeros((300, 32)).astype(int)
rndm_keys = jax.random.split(key, a.shape[0])
jax.vmap(react_forward, in_axes=(0, 0))(rndm_keys, a).shape
</code></pre>
<p>This involves creating <code>batch_size</code> (= <code>a.shape[0]</code>) subkeys, one per <code>vmap</code> lane, so that each call gets its own random number.</p>
<p>But it doesn't work, because <code>k</code> is being cast from <code>jax.Array</code> to <code>int</code>.</p>
<p>But making these changes:</p>
<blockquote>
<pre><code>- k = get_rand_num(key, 1, MAX_ITERS)
+ k = 5 # any hardcoded int
</code></pre>
</blockquote>
<p>Works perfectly. Clearly, the sampling is causing the problem here...</p>
<hr />
<h3>Clarifications</h3>
<p>To avoid making this an X-Y problem, I'll define precisely what I want:</p>
<p>I'm implementing a version of stochastic depth; basically, my <code>model</code>'s forward pass can accept a <code>depth: int</code> at runtime which is the length of a <code>scan</code> run internally - specifically, the <code>xs = jnp.arange(depth)</code> for the <code>scan</code>.</p>
<p>I want my architecture to flexibly adapt to different depths. Therefore, at training time, I need a way to produce pseudorandom numbers that would equal the <code>depth</code>.</p>
<p>So I require a function that, on <strong>every</strong> call (as is the case under <code>vmap</code>), returns a different number, sampled within some bound: <code>depth ∈ [1, max_iters]</code>.</p>
<p>The function has to be <code>jit</code>-able (an implicit requirement of <code>vmap</code>) and has to produce an <code>int</code>, as that's what's fed into <code>jnp.arange</code> later (<em>Workarounds that directly get <code>generate_random</code> to produce an <code>Array</code> of <code>jnp.arange(depth)</code> without converting to a static value might be possible</em>)</p>
<blockquote>
<p>(I have no idea honestly how others do this; this seems like a common enough want, especially if one's dealing with sampling during train time)</p>
</blockquote>
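<p>The only alternative I can come up with is to always scan <code>MAX_ITERS</code> steps and mask, along these lines (a sketch; <code>step_fn</code> is a stand-in for one iteration of the model):</p>
<pre class="lang-py prettyprint-override"><code>def forward_masked(key: Array, x: Array) -> Array:
    k = jax.random.randint(key, (), 1, MAX_ITERS)

    def body(carry, i):
        new = step_fn(carry)                       # one model iteration (placeholder)
        return jnp.where(i < k, new, carry), None  # freeze the carry after k steps

    out, _ = jax.lax.scan(body, x, jnp.arange(MAX_ITERS))  # static length, traces fine
    return out
</code></pre>
<p>But that wastes compute on the frozen steps, so I'd still prefer a genuinely dynamic length if one exists.</p>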
<p>I've attached the error traceback generated by my "hacky solution attempt" if that helps...</p>
<pre class="lang-bash prettyprint-override"><code>---------------------------------------------------------------------------
ConcretizationTypeError Traceback (most recent call last)
<ipython-input-32-d6ff062f5054> in <cell line: 16>()
14 a = jnp.zeros((300, 32)).astype(int)
15 rndm_keys = jax.random.split(key, a.shape[0])
---> 16 jax.vmap(react_forward, in_axes=(0, 0))(rndm_keys, a).shape
[... skipping hidden 3 frame]
4 frames
<ipython-input-32-d6ff062f5054> in react_forward(key, input)
8 k = get_rand_num(key, 1, MAX_ITERS)
9 # forward pass the model without tracking grads
---> 10 intermediate_array = jax.lax.stop_gradient(model(input, iters_to_do=k))
11 # n-k passes, but track the gradient this time
12 return model(input, MAX_ITERS - k, intermediate_array)
[... skipping hidden 12 frame]
<ipython-input-22-4760d53eb89c> in __call__(self, input, iters_to_do, prev_thought)
71 #interim_thought = self.main_block(interim_thought)
72
---> 73 interim_thought = self.iterate_for_steps(interim_thought, iters_to_do, x)
74
75 return self.out_head(interim_thought)
[... skipping hidden 12 frame]
<ipython-input-22-4760d53eb89c> in iterate_for_steps(self, interim_thought, iters_to_do, x)
56 return self.main_block(interim_thought), None
57
---> 58 final_interim_thought, _ = jax.lax.scan(loop_body, interim_thought, jnp.arange(iters_to_do))
59 return final_interim_thought
60
/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py in arange(start, stop, step, dtype)
2286 util.check_arraylike("arange", start)
2287 if stop is None and step is None:
-> 2288 start = core.concrete_or_error(None, start, "It arose in the jnp.arange argument 'stop'")
2289 else:
2290 start = core.concrete_or_error(None, start, "It arose in the jnp.arange argument 'start'")
/usr/local/lib/python3.10/dist-packages/jax/_src/core.py in concrete_or_error(force, val, context)
1379 return force(val.aval.val)
1380 else:
-> 1381 raise ConcretizationTypeError(val, context)
1382 else:
1383 return force(val)
ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: traced array with shape int32[].
It arose in the jnp.arange argument 'stop'
This BatchTracer with object id 140406974192336 was created on line:
<ipython-input-32-d6ff062f5054>:8 (react_forward)
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.ConcretizationTypeError
</code></pre>
<p>Really appreciate you helping me out here. Cheers!</p>
| <python><random><equinox><jax> | 2023-08-04 18:56:45 | 1 | 1,257 | neel g |
76,838,532 | 1,880,182 | Start terminal when a function is called in Python | <p>My Python program runs in the background. I need to open a terminal when a specific function is called and print on this terminal. How to do it?</p>
<p>For example:</p>
<pre><code>def example_function():
    <exit from the background and open a terminal>
    print("Something")
    <exit from the function, terminal might stay there>
</code></pre>
<p>Example view of the terminal:</p>
<pre><code>Something
[user@localhost ~]$
</code></pre>
| <python><function><terminal> | 2023-08-04 18:55:52 | 0 | 541 | Eftal Gezer |
76,838,487 | 10,416,012 | Are numpy logical element wise operations broken for pandas 2.0? (np.logical_or) | <p>I have code that was working until I updated to pandas 2.0. I checked the changelog and saw that they changed the behavior of <code>logical_or</code> (<a href="https://github.com/pandas-dev/pandas/pull/37374" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/pull/37374</a>), but it doesn't seem to be exactly the same thing, so this is a very unexpected error for me.</p>
<pre><code>import numpy as np
import pandas as pd

a = pd.DataFrame({"or": [False, True], "a": [True, True], "b": [True, False]})
np.logical_or(a[["a", "b"]], a[2 * ["or"]])
</code></pre>
<p>Before 2.0 it used to perform an element-wise "or"; now it fails directly with:</p>
<pre><code>ValueError: cannot reindex on an axis with duplicate labels
</code></pre>
<p>Or, if the second operand is given two different labels, it is even worse: it doesn't fail, but concatenates both in an extremely erratic way.</p>
<p>Is this a known bug in pandas/numpy, or is it intended? Is there a known efficient alternative?</p>
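<p>The only workaround I've found so far is dropping down to raw numpy arrays, which bypasses the label alignment (but loses the index and column names):</p>
<pre><code>np.logical_or(a[["a", "b"]].to_numpy(), a[2 * ["or"]].to_numpy())
</code></pre>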
| <python><pandas><numpy> | 2023-08-04 18:48:06 | 1 | 2,235 | Ziur Olpa |
76,838,315 | 11,580,131 | Accessing C pointers to vertices in Blender's Python API | <p>I'm currently making a render engine in C and C++ for Blender. I want to access the vertices of a mesh from C via a pointer, to reduce the time spent in Python and avoid unneeded data duplication.</p>
<p>I am aware that, for objects derived from the <code>ID</code> class, an <code>as_pointer()</code> method is available. However, when I tried to use <code>as_pointer()</code> on the vertices collection of a mesh like so:</p>
<pre class="lang-py prettyprint-override"><code>mesh = bpy.context.object.data
pointer = mesh.vertices.as_pointer()
</code></pre>
<p>I received an error stating that <code>bpy_prop_collection</code> object has no attribute <code>as_pointer</code>, which makes sense as <code>bpy_prop_collection</code> isn't derived from <code>ID</code>.</p>
<p>The documentation states that the type of <code>vertices</code> is "MeshVertices bpy_prop_collection of MeshVertex, (readonly)" (<a href="https://docs.blender.org/api/current/bpy.types.Mesh.html#bpy.types.Mesh.vertices" rel="nofollow noreferrer">doc</a>), and <code>MeshVertices</code> should be able to return a pointer, but this isn't the runtime type of either <code>vertices</code> or its elements.</p>
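<p>For what it's worth, the <code>Mesh</code> datablock itself does derive from <code>ID</code>, so the call below runs and returns an address; what I cannot work out is how to get from there to the vertex positions without hard-coding Blender's internal struct layout:</p>
<pre class="lang-py prettyprint-override"><code>mesh_ptr = mesh.as_pointer()  # address of the underlying Mesh struct (opaque from Python)
</code></pre>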
<p>As a workaround, I've been retrieving the vertices data into a numpy array which is then passed onto my C library, as shown in the following example code:</p>
<pre class="lang-py prettyprint-override"><code>import bpy
import numpy as np
import ctypes
obj = bpy.context.object # Suppose 'obj' is the mesh object
mesh = obj.data
# Allocate an array and copy the data, then set the pointer
# of the struct to the array
vert_array = np.zeros((len(mesh.vertices) * 3), dtype=np.float32)
mesh.vertices.foreach_get("co", vert_array)
vert_ptr = vert_array.ctypes.data_as(ctypes.POINTER(ctypes.c_float))
# Pass the pointer
lib = ctypes.CDLL("bin/libengine.so")
lib.load_vert.argtypes = [ctypes.POINTER(ctypes.c_float)]
lib.load_vert(vert_ptr)
</code></pre>
<p>However, this approach duplicates the vertex data in memory (once in Blender's internal data structures, and once in the numpy array) and requires processing that could be avoided.</p>
<p>I've looked into Blender's source code and noticed that the underlying C/C++ API does allow direct memory access. By looking at the <code>BKE_mesh_vert_positions</code> and <code>CustomData_get_layer_named</code> functions, we see that the vertices are stored in a contiguous data block:</p>
<pre class="lang-c prettyprint-override"><code>BLI_INLINE const float (*BKE_mesh_vert_positions(const Mesh *mesh))[3]
{
  return (const float(*)[3])CustomData_get_layer_named(&mesh->vdata, CD_PROP_FLOAT3, "position");
}
</code></pre>
<pre class="lang-c prettyprint-override"><code>const void *CustomData_get_layer_named(const CustomData *data,
const eCustomDataType type,
const char *name)
{
int layer_index = CustomData_get_named_layer_index(data, type, name);
if (layer_index == -1) {
return nullptr;
}
return data->layers[layer_index].data;
}
</code></pre>
<p>This means that we could have, at least in theory, a pointer to the data.</p>
<p>Is there a method in the Python API to expose these pointers or a way to work with the C/C++ API from Python to get this memory directly, without having to compile a custom version of Blender?</p>
<p>Any guidance on how to directly access these pointers or alternative solutions that avoid memory duplication would be highly appreciated.</p>
| <python><ctypes><blender><bpy> | 2023-08-04 18:16:11 | 1 | 979 | Elzaidir |
76,838,279 | 652,528 | How to create a column with the value based on the condition of another column in pandas? | <p>I'm trying this</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(range(0, 10))
df[1] = df[0] % 2 == 0
df[2] = 1 if df[1] else df[0]
</code></pre>
<p>which gives me this error</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_2233547/1119079756.py in ?()
1 df = pd.DataFrame(range(0, 10))
2 df[1] = df[0] % 2 == 0
----> 3 df[2] = 1 if df[1] else df[0]
~/.local/lib/python3.11/site-packages/pandas/core/generic.py in ?(self)
1464 @final
1465 def __nonzero__(self) -> NoReturn:
-> 1466 raise ValueError(
1467 f"The truth value of a {type(self).__name__} is ambiguous. "
1468 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1469 )
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
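<p>I suspect I need a vectorized construct such as <code>np.where</code> instead of the Python ternary, something like the line below, but I'm not sure it's the idiomatic way:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

df[2] = np.where(df[1], 1, 0)  # element-wise ternary over the boolean column
</code></pre>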
| <python><pandas><dataframe> | 2023-08-04 18:10:13 | 0 | 6,449 | geckos |
76,838,192 | 1,371,116 | How to apply multiple filters in an aggregation in polars while staying in streaming mode | <p>I have a large-ish dataset, and I would like to run several aggregations on it without loading the whole thing into memory. Specifically, I have a list of simple filters of the form <code>pl.col('x1') == 'y1'</code>, and I'd like get the result of my aggregation under each of these filters separately.</p>
<p>I can get the desired result by running</p>
<pre class="lang-py prettyprint-override"><code>dataset.filter(
pl.col(x) == y
).groupby(pl.col('a')).agg(
pl.col('b').sum()
).collect(streaming=True)
</code></pre>
<p>for each <code>x</code> and <code>y</code> separately, but I have somewhere on the order of a hundred different filters, so taking this many passes over the dataset is very time-consuming.</p>
<p>I expected that I should be able to get the desired result by doing something like</p>
<pre class="lang-py prettyprint-override"><code>dataset.groupby(pl.col('a')).agg(
*[pl.col('b').filter(pl.col(x) == y).sum().alias(f'{x}={y}') for x, y in filters]
).collect(streaming=True)
</code></pre>
<p>but while this works for a small subset of the data, as soon as I try to run it on the full dataset (even with a single filter), the process consumes all memory on the machine and dies. I assume this means that polars is not running this query in streaming mode, even though all individual components I'm using should be streaming-compatible.</p>
<p>Is there a way to get my desired results using a single pass in streaming mode? Is this something that polars doesn't yet support?</p>
| <python><python-polars> | 2023-08-04 17:55:19 | 1 | 3,616 | Isaac |
76,838,097 | 508,222 | New dimension with coordinates in xarray apply_ufunc | <p>I have a 3D array of signals with dimensions <code>('experiment', 'trial', 'time')</code>.
I am trying to vectorize the computation of Welch's periodogram for each trial of each experiment using <code>xr.apply_ufunc</code> with <code>scipy.signal.welch</code>, but cannot get the dimension handling to work. <code>scipy.signal.welch</code> returns two arrays, the frequencies and the PSD/power spectrum.</p>
<p>Creating random data:</p>
<pre><code>import numpy as np
import scipy.signal as sig
import xarray as xr

data = xr.DataArray(
    np.random.random((3, 2, 1024)),
    dims=['experiment', 'trial', 'time'],
    coords={
        'experiment': np.arange(3),
        'trial': np.arange(2),
        'time': np.arange(1024),
    },
)
<p>Now applying scipy.sig.welch for 1D input:</p>
<pre><code>ret = xr.apply_ufunc(
    sig.welch,
    data,
    input_core_dims=[['trial', 'time']],
    output_core_dims=[['frequency'], []],
    vectorize=True,
)
</code></pre>
<p>throws</p>
<blockquote>
<pre><code>TypeError: only length-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  ...
  numpy/lib/function_base.py", line 2506, in _vectorize_call_with_signature
    output[index] = result
ValueError: setting an array element with a sequence.
</code></pre>
</blockquote>
<p>Probably the numpy vectorization expects the returned values to be scalars?
Since sig.welch can work on 2D arrays, another attempt:</p>
<pre><code>ret = xr.apply_ufunc(
    sig.welch,
    data,
    input_core_dims=[['trial', 'time']],
    output_core_dims=[['trial', 'frequency'], []],
    vectorize=True,
)
</code></pre>
<p>and this throws:</p>
<blockquote>
<pre><code>...
  numpy/lib/function_base.py", line 2050, in _update_dim_sizes
    raise ValueError(
ValueError: 1-dimensional argument does not have enough dimensions for all core dimensions ('dim2', 'dim0')
</code></pre>
</blockquote>
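<p>I also wondered whether <em>both</em> outputs need the <code>frequency</code> core dimension, since <code>welch</code> returns the frequency vector alongside the PSD, i.e. something like the call below, but I haven't been able to confirm this is the right pattern:</p>
<pre><code>ret = xr.apply_ufunc(
    sig.welch,
    data,
    input_core_dims=[['time']],
    output_core_dims=[['frequency'], ['frequency']],
    vectorize=True,
)
</code></pre>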
<p>Is there a way to do the vectorization, or must one loop through the top level dimension?</p>
| <python><multidimensional-array><vectorization><python-xarray> | 2023-08-04 17:37:01 | 1 | 919 | subhacom |
76,838,071 | 1,795,924 | How do I define a dependencies.py file for an ML Endpoint instance for inference in Azure Machine Learning Studio? | <p>So, when I create an Endpoint instance for inferencing, it asks me for a <em>scoring_script.py</em> file (which I provide, no problems), but inside of it, I have a dependency that must be met.</p>
<p>My instance is crashing, because the image I've selected for <a href="https://en.wikipedia.org/wiki/Machine_learning" rel="nofollow noreferrer">ML</a> work doesn't have all Azure SDK dependencies, and I need to add custom dependencies. This "dependencies / add file" button asks me for a Python file, not a requirements text file or a <a href="https://en.wikipedia.org/wiki/Conda_(package_manager)" rel="nofollow noreferrer">Conda</a> <a href="https://en.wikipedia.org/wiki/YAML" rel="nofollow noreferrer">YAML</a> file, so I don't know how to define this script.</p>
<p>How can I specify these dependencies in a script? I couldn't find it in the documentation.</p>
<p><a href="https://i.sstatic.net/pnEWQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pnEWQ.png" alt="Dependencies" /></a></p>
| <python><azure><machine-learning><azure-devops> | 2023-08-04 17:32:21 | 1 | 7,893 | Ericson Willians |
76,837,908 | 1,552,172 | Azure Function v2 Python deployed functions are not showing | <p>Locally the functions debug just fine, but if I deploy via VS Code to my Azure Function App I get "No HTTP Triggers found", and the DevOps pipeline does not deploy triggers either.</p>
<p>I have "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" set locally and as a function app setting.</p>
<p>Code is appropriately decorated</p>
<pre><code>@app.route(route="functionname", auth_level=func.AuthLevel.FUNCTION)
def functioname(req: func.HttpRequest) -> func.HttpResponse:
</code></pre>
<p>Deployments succeed both ways but no functions show</p>
<p>Azure Pipeline shows correct files:
<a href="https://i.sstatic.net/LSPEw.png" rel="noreferrer"><img src="https://i.sstatic.net/LSPEw.png" alt="enter image description here" /></a></p>
<p>Azure function app files show function_app.py at the root folder</p>
<p>test function</p>
<pre><code>import logging

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)


@app.function_name("personas")
@app.route(route="character-managment/personas")
def personas(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    return func.HttpResponse("ok", status_code=200)
</code></pre>
<p>Folder structure</p>
<p><a href="https://i.sstatic.net/gk1IF.png" rel="noreferrer"><img src="https://i.sstatic.net/gk1IF.png" alt="enter image description here" /></a></p>
<p>Works locally</p>
<p><a href="https://i.sstatic.net/cgOH4.png" rel="noreferrer"><img src="https://i.sstatic.net/cgOH4.png" alt="enter image description here" /></a></p>
| <python><azure><azure-functions> | 2023-08-04 17:02:09 | 2 | 676 | user1552172 |
76,837,748 | 21,420,742 | How to remove duplicated values by row but keeping the row and specific row values | <p>I have a dataset and would like to replace duplicated values with nothing (blank them out) while keeping those rows. Here is what I have:</p>
<p>df =</p>
<pre><code> id column_a column_b column_c name
101 abc def ghi adam
101 abc def ghi brook
101 abc def ghi chris
</code></pre>
<p>I would like to keep the top row untouched, but for all other rows with the same <code>id</code> value, blank out the duplicated values without deleting the rows, leaving only the <code>id</code> and <code>name</code> columns populated. Like this:</p>
<pre><code>id column_a column_b column_c name
101 abc def ghi adam
101 brook
101 chris
</code></pre>
<p>I did <code>df['column_a'] = np.where(df['id'] == df['id'].shift(1), '', df['column_a'])</code> and this seems to have worked; I was just trying to find a more effective way to do so.</p>
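<p>What I'm hoping for is something that handles all the value columns at once, perhaps along these lines (untested):</p>
<pre><code>value_cols = ['column_a', 'column_b', 'column_c']
df.loc[df['id'].duplicated(), value_cols] = ''
</code></pre>
<p>Thank you.</p>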
| <python><python-3.x><pandas><dataframe><numpy> | 2023-08-04 16:33:07 | 1 | 473 | Coding_Nubie |
76,837,716 | 4,505,998 | How to slice a 2D tensor using a 1D tensor instead of scalar | <p>Normally, you can slice a 2D tensor like this <code>slice = t[:, :k]</code> where k is an integer. Is it possible to do something like this but with k being a 1-dimensional vector of integers with the number of items that I want to obtain for each row?</p>
<p>Masking the items with 0's or NaN would also be fine.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>k = torch.Tensor([1,2,3])
t = torch.Tensor([1,1,1], [2,2,2], [3,3,3])
# perform some operations and the result should be
# 1 - -
# 2 2 -
# 3 3 3
</code></pre>
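<p>I suspect a broadcast comparison mask is the way to go, something like the sketch below, but I'm not sure it's the intended approach:</p>
<pre class="lang-py prettyprint-override"><code>mask = torch.arange(t.shape[1]) < k.unsqueeze(1)  # (rows, cols) keep-mask
result = t.masked_fill(~mask, float('nan'))       # or multiply by mask to zero out
</code></pre>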
| <python><pytorch> | 2023-08-04 16:28:35 | 1 | 813 | David Davó |
76,837,612 | 10,331,422 | Good way to view matrices and higher dimensional arrays in VScode | <p>When working with PyTorch/numpy and similar packages, is there a good way to view matrices (or, in general, arrays with two or more dimensions) in debug mode, similar to the way MATLAB (or even PyCharm, if I remember correctly) presents them?
This is, for example, a PyTorch tensor, which is very confusing -- opening <code>H</code> here gives me the same thing again and again.
<a href="https://i.sstatic.net/DS7Pm.png" rel="noreferrer"><img src="https://i.sstatic.net/DS7Pm.png" alt="PyTorch Tensor" /></a></p>
<p>As opposed to Matlab, where I can watch it like that:
<a href="https://i.sstatic.net/bcfEM.png" rel="noreferrer"><img src="https://i.sstatic.net/bcfEM.png" alt="Matlb array" /></a></p>
<p>Would appreciate any help with that!</p>
| <python><numpy><matlab><pytorch><pycharm> | 2023-08-04 16:12:54 | 3 | 617 | MRm |
76,837,579 | 7,321,700 | Grouping values inside of a dict of Dataframes | <p><strong>Scenario:</strong> I have one dict of Dataframes. Each of those Dataframes contains the data for one year (2017 to 2022). They each have two columns, the Code and the Value (where the value column name is simply the year of that given Dataframe).</p>
<p><strong>Input Data Sample:</strong></p>
<pre><code>Code 2017
33200 6957
33200 151906
33200 142025
33200 729494
33200 68842
32420 153499
32320 1756310
32320 33949
32310 81860
32310 56127
32200 165520
</code></pre>
<p>Each of the Dataframes has the same list of codes; the only difference is the year.</p>
<p><strong>Expected output:</strong></p>
<pre><code>Code 2017
33200 1099224
32420 153499
32320 1790259
32310 137987
32200 165520
</code></pre>
<p><strong>Objective:</strong> I am trying to do a groupby on Code to sum the values for each code (similar to a SUMIF).</p>
<p><strong>Issue:</strong> When I run the code below, the output dictionary is exactly the same as the input.</p>
<p><strong>Code:</strong></p>
<pre><code>year_list_1 = [2017, 2018, 2019, 2020, 2021, 2022]

sales_dict_2 = {}
for year_var in year_list_1:
    sales_dict_2[year_var] = sales_dict[year_var].groupby('Code', as_index=False)[[year_var]].sum()  # where sales_dict is the dictionary mentioned above
</code></pre>
<p><strong>Question:</strong> Why is this code outputting the same DF as the input DF?</p>
| <python><pandas><dataframe><dictionary> | 2023-08-04 16:08:04 | 1 | 1,711 | DGMS89 |
76,837,560 | 15,637,940 | Inherited class is using methods from abstract parent in APScheduler job | <p>I have a problem where <a href="https://github.com/agronholm/apscheduler" rel="nofollow noreferrer">APScheduler</a> jobs are not running correctly: they use methods from the abstract parent class even though they have their own implementations. This does not happen when the function is launched directly via a command trigger. All of this is inside a Telegram API bot. The error seems to appear only with <code>RedisJobStore</code>.</p>
<pre><code>from abc import ABC, abstractmethod
from datetime import datetime, timedelta
import asyncio
import logging

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.redis import RedisJobStore

logging.basicConfig(filename='logfile.log', level=logging.INFO)
logging.getLogger('apscheduler').setLevel(logging.DEBUG)


class MyAbstractClass(ABC):
    @classmethod
    @abstractmethod
    def get_required_members(cls):
        raise NotImplementedError

    @classmethod
    async def initiate_all(cls):
        members = cls.get_required_members()
        logging.info(f'Got {members=}')
        ...


class MyImplementation(MyAbstractClass):
    @classmethod
    def get_required_members(cls):
        return ['Alex', 'Anna']

    @classmethod
    def append_to_scheduler(cls, scheduler: AsyncIOScheduler, run_date: datetime):
        return scheduler.add_job(
            func=cls.initiate_all,
            trigger='date',
            run_date=run_date
        )


redis_job_store = RedisJobStore()
scheduler = AsyncIOScheduler(jobstores={'default': redis_job_store}, logger=logging.getLogger())
scheduler.start()

run_date = datetime.now() + timedelta(seconds=5)
MyImplementation.append_to_scheduler(scheduler=scheduler, run_date=run_date)

asyncio.get_event_loop().run_forever()
</code></pre>
<p>Error:</p>
<pre><code>Job "MyAbstractClass.initiate_all (trigger: date[2023-08-04 19:01:03 MSK], next run at: 2023-08-04 19:01:03 MSK)" raised an exception
Traceback (most recent call last):
File "/media/russich555/hdd/Programming/Freelance/YouDo/29.2pilot/venv/lib/python3.11/site-packages/apscheduler/executors/base_py3.py", line 30, in run_coroutine_job
retval = await job.func(*job.args, **job.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/russich555/hdd/Programming/Freelance/YouDo/29.2pilot/mre/api.py", line 13, in initiate_all
members = cls.get_required_members()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/russich555/hdd/Programming/Freelance/YouDo/29.2pilot/mre/api.py", line 9, in get_required_members
raise NotImplementedError
NotImplementedError
</code></pre>
<hr />
<p>P.S. I posted this as <a href="https://github.com/agronholm/apscheduler/issues/767" rel="nofollow noreferrer">issue</a></p>
| <python><apscheduler> | 2023-08-04 16:05:13 | 1 | 412 | 555Russich |
76,837,194 | 9,751,892 | mypy error: Unsupported operand types for + ("Self" and "A") [operator] in Python dataclasses | <p>I'm working on a project that relies on strict type hinting. The minimal code example that I'm working on will return this mypy error:</p>
<pre><code>error.py:15: error: Unsupported operand types for + ("Self" and "A") [operator]
return self + rhs.to_A()
^~~~~~~~~~
</code></pre>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from dataclasses import dataclass
from typing_extensions import Self
@dataclass
class A:
num: int = 0
def __add__(self, rhs: Self) -> Self:
return type(self)(self.num + rhs.num)
def add_B(self, rhs: B) -> Self:
return self + rhs.to_A()
@dataclass
class B:
num: int
def to_A(self) -> A:
return A(self.num)
</code></pre>
<p>Can someone explain to me why this is the case?</p>
| <python><types><mypy> | 2023-08-04 15:11:36 | 1 | 715 | Thomas |
76,837,160 | 1,471,980 | how do you insert data frame to ms sql table faster | <p>I need to insert a big (200k-row) data frame into an MS SQL table. When I do a line-by-line insert, it takes a very long time. I have tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pyodbc
import numpy as np
engine = create_engine("mssql+pyodbc://server1/<database>?driver=odbc drvier 17 for sql server?trusted_connection=yes")
df.to_sql('<db_table_name>', engine, if_exists='append')
</code></pre>
<p>Is there an option for commit and connection close?</p>
<p>It seems that <code>df.to_sql</code> executes without raising any errors.</p>
<p>I tried setting the <code>chunksize</code> argument as well, and the result was the same: no errors, but no insertion.</p>
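<p>I have also seen <code>fast_executemany</code> mentioned for the pyodbc dialect, but I am not sure whether wiring it in is as simple as this:</p>
<pre class="lang-py prettyprint-override"><code># untested: same connection string as above
engine = create_engine(conn_str, fast_executemany=True)
df.to_sql('<db_table_name>', engine, if_exists='append', index=False, chunksize=10_000)
</code></pre>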
| <python><sql-server><pandas> | 2023-08-04 15:06:43 | 5 | 10,714 | user1471980 |
76,837,018 | 211,858 | Parametrize endpoints in FastApi | <p>I have a collection of GET endpoints named <code>/endpoint1/{id}</code>, <code>/endpoint2/{id}</code> that do essentially the same thing. See the below toy example. I'm trying to factor out the redundant <code>get1</code> and <code>get2</code> into a single <code>get</code>. The problem is that the swagger docs for <code>endpoint3/{id}</code> show the documentation from the docstring of the partial function, as can be seen by running:</p>
<pre><code>uvicorn main:app --reload
</code></pre>
<pre><code>from functools import partial

from fastapi import FastAPI

app = FastAPI()


@app.get("/endpoint1/{id}")
def get1(id: str) -> str:
    return "endpoint1" + id


@app.get("/endpoint2/{id}")
def get2(id: str) -> str:
    return "endpoint2" + id


def get(endpoint: str, id: str) -> str:
    return endpoint + id


app.get("/endpoint3/{id}")(partial(get, endpoint="endpoint3"))
</code></pre>
<p>It's not an option for me to have a single GET endpoint with two arguments <code>endpoint</code> and <code>id</code>. What is the proper way to refactor this code and to rapidly create this family of GET endpoints? Thank you.</p>
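<p>What I'm effectively after is a factory along these lines (sketch), unless there is a more idiomatic FastAPI mechanism:</p>
<pre><code>def make_get(endpoint: str):
    def get(id: str) -> str:
        return endpoint + id
    get.__name__ = f"get_{endpoint}"  # so each route gets its own name in the docs
    return get


for ep in ("endpoint3", "endpoint4"):
    app.get(f"/{ep}/{{id}}")(make_get(ep))
</code></pre>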
| <python><fastapi> | 2023-08-04 14:51:02 | 1 | 1,459 | hwong557 |
76,836,871 | 722,553 | How do I get tox to use interpreters installed by pyenv when using tox installed via pipx? | <p>I installed tox globally via <a href="https://pypa.github.io/pipx/" rel="nofollow noreferrer">pipx</a> as follows:</p>
<pre class="lang-bash prettyprint-override"><code>pipx install tox
tox --version
4.6.4
</code></pre>
<p>I have installed Python 3.10 via <a href="https://github.com/pyenv/pyenv" rel="nofollow noreferrer">pyenv</a> as follows:</p>
<pre class="lang-bash prettyprint-override"><code>pyenv install 3.10.12
</code></pre>
<p>However, when I run <code>tox</code> with a <code>py310</code> environment, I get the following error message:</p>
<pre><code>skipped because could not find python interpreter with spec(s): py310
</code></pre>
<p>How can I get tox (installed via pipx) to recognise the versions of Python I installed via pyenv?</p>
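<p>One thing I suspect (but have not confirmed) is that tox only discovers interpreters that are visible on <code>PATH</code>, and pyenv only exposes a version through its shims once it is selected, e.g.:</p>
<pre class="lang-bash prettyprint-override"><code># guess: expose 3.10 via the shims alongside the default version
pyenv global 3.11.4 3.10.12
python3.10 --version
</code></pre>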
| <python><pyenv><tox><pipx> | 2023-08-04 14:31:27 | 2 | 3,593 | Dawngerpony |
76,836,799 | 20,122,390 | Should the data be validated independently in each microservice? | <p>I have an application in Python and FastAPI with a microservices architecture, so I perform data validation with Pydantic. That means a lot of Pydantic schemas get replicated across microservices. But suppose for now I only have two microservices: "backend" and "ds-backend".
"backend" takes the role of a gateway, and "ds-backend" is a microservice in charge of doing some things with pandas on data that comes from "backend".
So I have a Pydantic schema in "backend" to validate the data that I will serve to the frontend:</p>
<pre><code>from typing import Optional

from pydantic import BaseModel


class Monitoring(BaseModel):
    type: Optional[EnumType]  # EnumType is a project-defined enum
    average: int
    max: int
    min: int
</code></pre>
<p>Normally, I would have this same schema in "ds-backend", where I also validate the output data (so the data would be validated twice: once in "ds-backend" and again in "backend"). Should it be done like this? Or what is the correct approach?
On the one hand, resources are wasted validating data that has already been validated (Pydantic also allows creating models without validation, which could be applied here). But on the other hand, perhaps in the future "ds-backend" will have to communicate with other microservices, so it would be necessary to define where validation happens and where it does not.</p>
| <python><validation><microservices><fastapi><pydantic> | 2023-08-04 14:21:58 | 1 | 988 | Diego L |
76,836,604 | 13,615,317 | Python3.8 Type-hinting A ctypes function | <p><strong>Solution must support Python3.8</strong></p>
<ul>
<li>What am I doing wrong?</li>
<li>Is there a simpler way to express what I want (the return value of <code>get_ctypes_func()</code> is a callable, which takes params <code>arg_types</code> and returns <code>ret_type</code>)? I feel my solution is over-engineered.</li>
<li>Is what I want even possible in 3.8? Or any python version for that matter?</li>
</ul>
<p>I have the following:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
from collections.abc import Sequence
from typing import (
TypeVar, Protocol, Generic, Optional, cast as TYPE_CAST
)
_T = TypeVar('_T', bound=type)
_T = TypeVar('_U', covariant=True, bound=type)
class CTypesCallable(Protocol[_T, _U]):
# work around a bug in mypy:
# error: "CTypesCallable[_T, _U]" has no attribute "__name__"
__name__: str
@property
def argtypes(self) -> Sequence[type[_T]]: ...
def __call__(self, *args: _T) -> _U: ...
class CTypesFunction(Generic[_T, _U]):
def __init__(self, function: CTypesCallable[_T, _U]) -> None:
self.function = function
return
def __call__(self, *args: _T) -> _U:
# do some pre-processing/arg validation
ret = self.function(*args)
# do some post-processing and return validation
return ret
def get_ctypes_func(
name: str,
arg_types: Sequence[type[_T]],
ret_type: Optional[type[_U]] = None
) -> CTypesFunction[_T, _U]:
if ret_type is None:
ret_type = ctypes.c_uint
func = getattr(some_c_lib, name)
return CTypesFunction(TYPE_CAST(CTypesCallable[_T, _U], func))
</code></pre>
<p>which is then used like so:</p>
<pre class="lang-py prettyprint-override"><code>ret = get_ctypes_func(
'foo',
[SomePythonType, int, float]
)(SomePythonType(), 1, 2.0) # ret is int
</code></pre>
<p>Yet I get the following errors:</p>
<pre><code>error: Incompatible types in assignment (expression has type
"Type[c_uint]", variable has type "Optional[Type[_U]]") [assignment]
ret_type = ctypes.c_uint
^~~~~~~~~~~~~
error: Need type annotation for "ret" [var-annotated]
ret = get_ctypes_func('foo', [SomePythonType, int, float])(SomePythonType(), 1, 2.0)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: Argument 1 to "__call__" of "CTypesFunction" has incompatible
type "SomePythonType"; expected <nothing> [arg-type]
foo = get_ctypes_func('foo', [SomePythonType, int, float])(SomePythonType(), 1, 2.0)
^~~~~~~~~~~~~~~~
error: Argument 2 to "__call__" of "CTypesFunction" has incompatible
type "int"; expected <nothing> [arg-type]
foo = get_ctypes_func('foo', [SomePythonType, int, float])(SomePythonType(), 1, 2.0)
^
error: Argument 3 to "__call__" of "CTypesFunction" has incompatible
type "float"; expected <nothing> [arg-type]
foo = get_ctypes_func('foo', [SomePythonType, int, float])(SomePythonType(), 1, 2.0)
^~~
</code></pre>
<p><code>_T</code> and <code>_U</code> should represent <em>types</em>, but I am not convinced that I have set them up correctly. I need to support Python3.8, so I cannot use <code>ParamSpec</code> and friends.</p>
| <python><ctypes><python-3.8><python-typing> | 2023-08-04 13:54:35 | 0 | 1,150 | Jacob Faib |
76,836,503 | 10,452,700 | How can I reproduce an animation for a rolling window over time? | <p>I'm experimenting with 1D time-series data and trying to reproduce the following approach via animation over my own data in a Google Colab notebook.</p>
<p>It's about reproducing, for the <a href="https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/" rel="nofollow noreferrer">STS transformation</a> (implemented by the <code>series_to_supervised()</code> function with <code>n_in=9</code> look-back steps into the past), an animation equivalent to the <a href="https://cienciadedatos.net/documentos/py27-time-series-forecasting-python-scikitlearn.html" rel="nofollow noreferrer"><em>Backtesting with refit and fixed training size (rolling origin)</em></a> approach introduced by the <a href="/questions/tagged/skforecast" class="post-tag" title="show questions tagged 'skforecast'" aria-label="show questions tagged 'skforecast'" rel="tag" aria-labelledby="tag-skforecast-tooltip-container">skforecast</a> package. It's mainly about visualizing the <code>train</code>/<code>test</code> window selection over the actual time-series data <code>y</code>: a fixed-size training window that rolls forward, refits, and predicts the next step(s).</p>
<p><img src="https://d33wubrfki0l68.cloudfront.net/f9e6d3495ba5437512a3ff12ac0bdef7fa1745ae/7ef53/images/backtesting_refit_fixed_train_size.gif" alt="ani" /></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
print(pd.__version__)
# Generate univariate (1D) time-series data into pandas DataFrame
import numpy as np
np.random.seed(123) # for reproducibility and get reproducible results
df = pd.DataFrame({
"TS_24hrs": np.arange(0, 274),
"count" : np.abs(np.sin(2 * np.pi * np.arange(0, 274) / 7) + np.random.normal(0, 100.1, size=274)) # generate sesonality
})
#df = pd.read_csv('/content/U2996_24hrs_.csv', header=0, index_col=0).values
print(f"The raw data {df.shape}")
#print(f"The raw data columns {df.columns}")
# visulize data
import matplotlib.pyplot as plt
fig, ax = plt.subplots( figsize=(10,4))
# plot data
df['count'].plot(label=f'data or y', c='red' )
#df['count'].plot(label=f'data', linestyle='--')
plt.xticks([0, 50, 100, 150, 200, 250, df['TS_24hrs'].iloc[-1]], visible=True, rotation="horizontal")
plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left")
plt.title('Plot of data')
plt.ylabel('count', fontsize=15)
plt.xlabel('Timestamp [24hrs]', fontsize=15)
plt.grid()
plt.show()
# slecet train/test data using series_to_supervised (STS)
from pandas import DataFrame, concat
def series_to_supervised( data, n_in, n_out=1, dropnan=True):
"""
Frame a time series as a supervised learning dataset.
Arguments:
data: Sequence of observations as a list or NumPy array.
n_in: Number of lag observations as input (X).
n_out: Number of observations as output (y).
dropnan: Boolean whether or not to drop rows with NaN values.
Returns:
Pandas DataFrame of series framed for supervised learning.
"""
n_vars = 1 if type(data) is list else data.shape[1]
df = pd.DataFrame(data)
cols = list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
# put it all together
agg = concat(cols, axis=1)
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg.values
values=series_to_supervised(df, n_in=9)
data_x,data_y =values[:, :-1], values[:, -1]
print(data_x.shape)
print(data_y.shape)
# define animation function
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.animation import FuncAnimation
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(40, 8))
plt.subplots_adjust(bottom=0.25)
plt.xticks(fontsize=12)
ax.set_xticks(range(0, len(data_y), 9))
ax.set_yticks(range(0, 2500, 200))
data_y = pd.Series(data_y)
data_y.plot(color='r', linestyle='-', label="y")
ax.set_title('Time Series')
ax.set_xlabel('Time')
ax.set_ylabel('Value')
ax.legend(loc="upper left")
ax.grid(True, which='both', linestyle='-', linewidth=3)
ax.set_facecolor('gainsboro')
ax.spines['bottom'].set_position('zero')
ax.spines['left'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
nested_list = list(trainX_tss)
lines = [ax.plot([], [], color='g', linestyle='-')[0] for _ in range(len(trainX_tss))]
def init():
for line in lines:
line.set_data([], [])
return lines
def update(frame):
for i, line in enumerate(lines):
data = pd.Series(nested_list[i], index=range(frame + i, frame + i + 9))
line.set_data([], [])
line.set_data(data.index, data)
return lines
# define animation setup
anim = FuncAnimation(fig, update,
frames=len(nested_list) - 9,
init_func=init,
interval=500,
blit=True,
repeat=False)
# Save animation (.gif))
anim.save('BrowniamMotion.gif', writer = "pillow", fps=10 )
# visulize animation in GoogleColab Notebook
# suppress final output
plt.close(0)
HTML(anim.to_html5_video())
</code></pre>
<p><a href="https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/" rel="nofollow noreferrer">STS transformation</a>:</p>
<blockquote>
<p><em><strong>Multi-Step or Sequence Forecasting</strong>
A different type of forecasting problem is using past observations to forecast a sequence of future observations. This may be called sequence forecasting or multi-step forecasting.</em></p>
</blockquote>
<p>So far, this is the output I've managed to produce, which is clearly wrong:</p>
<p><img src="https://i.imgur.com/dmuIwIA.gif" alt="img" /></p>
| <python><matplotlib><time-series><sliding-window><matplotlib-animation> | 2023-08-04 13:42:53 | 1 | 2,056 | Mario |
76,836,454 | 10,232,932 | Avoid for loops over column values in a pandas dataframe with a function | <p>I have the following structure of a dataframe:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Level': ["a", "b", "c"], 'Kontogruppe': ["a", "a", "b"],
                   'model': ["alpha", "beta", "alpha"], 'MSE': [0, 1, 1],
                   'actual_value': [1, 2, 3], 'forecast_value': [2, 2, 2]})
</code></pre>
<p>For this dataframe I run several functions, for example:</p>
<pre><code>def metrics(df):
    df_map = pd.DataFrame({'Level': ["a"], 'Kontogruppe': ["a"],
                           'model': ["alpha"], 'MSE': [0]})
    for i in df['Level'].unique():
        for j in df['Kontogruppe'].unique():
            for k in df['model'].unique():
                df_lkm = df.loc[(df['Level'] == i) & (df['Kontogruppe'] == j) &
                                (df['model'] == k)]
                if df_lkm.empty:
                    out_MSE = 10000000000
                else:
                    out_MSE = sum(df_lkm['actual_value']) / sum(df_lkm['forecast_value'])
                df_map_map = pd.DataFrame({'Level': [i], 'Kontogruppe': [j], 'model': [k],
                                           'out_MSE': [out_MSE]})
                df_map = pd.concat([df_map, df_map_map])
    df = pd.merge(df, df_map, how='left', on=['Level', 'Kontogruppe', 'model'])
    return df

df = metrics(df)
</code></pre>
<p>So basically, I loop over the unique column values and filter the dataframe based on them.
For every combination of Level, Kontogruppe and model, the value 'out_MSE' is calculated over all matching entries of actual_value and forecast_value, and is then appended as a value for every row in a new column.</p>
<p>Is there a more efficient way to do this?
Is there any pythonic way in general to avoid these for loops? My dataframe is big and this costs a lot of performance.</p>
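<p>For context, this is the direction I have been experimenting with: a groupby/transform sketch with no explicit loops, assuming the goal really is the ratio of the group sums as in my function above (the 10000000000 placeholder for missing combinations is not reproduced here):</p>
<pre><code># sketch: compute sum(actual)/sum(forecast) per (Level, Kontogruppe, model)
# group and broadcast it back to every row
grouped = df.groupby(['Level', 'Kontogruppe', 'model'])
df['out_MSE'] = (grouped['actual_value'].transform('sum')
                 / grouped['forecast_value'].transform('sum'))
</code></pre>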
| <python><pandas><dataframe> | 2023-08-04 13:36:25 | 1 | 6,338 | PV8 |
76,836,403 | 3,091,161 | TypeError when using super() in a dataclass with slots=True | <p>I have a dataclass with (kind of) a getter method.</p>
<p>This code works as expected:</p>
<pre><code>from dataclasses import dataclass

@dataclass()
class A:
    def get_data(self):
        # get some values from object's fields
        # do some calculations
        return "a calculated value"

@dataclass()
class B(A):
    def get_data(self):
        data = super().get_data()
        return data + " (modified)"

b = B()
print(b.get_data())  # a calculated value (modified)
</code></pre>
<p>However, if I add <code>slots=True</code>, I get a <code>TypeError</code>:</p>
<pre><code>from dataclasses import dataclass

@dataclass(slots=True)
class A:
    def get_data(self):
        return "a calculated value"

@dataclass(slots=True)
class B(A):
    def get_data(self):
        data = super().get_data()
        return data + " (modified)"

b = B()
print(b.get_data())  # TypeError: super(type, obj): obj must be an instance or subtype of type
</code></pre>
<p>The error vanishes if I use the old-style super(), contrary to <a href="https://peps.python.org/pep-3135/" rel="noreferrer">pep-3135</a>:</p>
<pre><code>from dataclasses import dataclass

@dataclass(slots=True)
class A:
    def get_data(self):
        return "a calculated value"

@dataclass(slots=True)
class B(A):
    def get_data(self):
        data = super(B, self).get_data()
        return data + " (modified)"

b = B()
print(b.get_data())  # a calculated value (modified)
</code></pre>
<p>Why does this happen, and how can it be fixed the right way?</p>
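<p>For what it's worth, a small check I ran that seems to support the idea that <code>slots=True</code> makes the decorator build a replacement class (so the class cell captured by zero-argument <code>super()</code> would still point at the discarded original):</p>
<pre><code>from dataclasses import dataclass

class Probe:
    pass

original = Probe
Probe = dataclass(slots=True)(Probe)
print(Probe is original)  # False: a brand new class object was created
</code></pre>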
| <python><python-dataclasses><slots> | 2023-08-04 13:29:54 | 1 | 1,693 | enkryptor |
76,836,294 | 6,710,525 | Type hints for a function that has both named and unnamed kwargs | <p>My function definition contains both named and unnamed kwargs:</p>
<pre><code>def safe_format(text: str, max_errors: int = 10, **kwargs: str) -> None:
    print(text)
    print(max_errors)
    print(kwargs)
</code></pre>
<p>If I call it without specifying max_errors:</p>
<pre><code>safe_format(some_str, **some_dict)
</code></pre>
<p>I do get the expected result (some_str is printed, then 10, then some_dict). Yet mypy is unhappy and believes I'm trying to use some_dict as a value for max_errors:</p>
<pre><code>Argument 2 to "safe_format" has incompatible type "**Dict[str, str]"; expected "int"
</code></pre>
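<p>For reference, a minimal reproduction of the call site (the names here are placeholders; my real dict is also a plain <code>Dict[str, str]</code>):</p>
<pre><code>from typing import Dict

some_str: str = "hello {name}"
some_dict: Dict[str, str] = {"name": "world"}
safe_format(some_str, **some_dict)  # mypy flags this line
</code></pre>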
<p>Is there a specific syntax I can use for mypy to recognize what I'm doing?</p>
| <python><mypy> | 2023-08-04 13:16:30 | 1 | 1,475 | totooooo |
76,835,859 | 275,088 | Error installing fastavro==1.7.3 on MacOS, Python 3.10 | <p>I have a weird error when trying to install <code>fastavro==1.7.3</code> (as part of <code>poetry install</code>) on a <code>pyenv</code>-managed Python 3.10.12. However, it installs fine on the same machine using Python 3.11.4. Any idea what is happening in 3.10?</p>
<pre><code>$ pip install fastavro==1.7.3
Collecting fastavro==1.7.3
Using cached fastavro-1.7.3.tar.gz (791 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: fastavro
Building wheel for fastavro (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for fastavro (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [150 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-13.5-arm64-cpython-310
creating build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_schema_common.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_schema_py.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_logical_writers_py.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/json_read.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/write.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_write_common.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_write_py.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/__init__.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_read_py.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/types.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/json_write.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_read_common.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_validate_common.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_logical_readers_py.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/utils.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/logical_writers.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/_validation_py.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/__main__.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/logical_readers.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/const.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/schema.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/read.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
copying fastavro/validation.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
creating build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/binary_decoder.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/__init__.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/binary_encoder.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/parser.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/symbols.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/json_encoder.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
copying fastavro/io/json_decoder.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/io
creating build/lib.macosx-13.5-arm64-cpython-310/fastavro/repository
copying fastavro/repository/__init__.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/repository
copying fastavro/repository/flat_dict.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/repository
copying fastavro/repository/base.py -> build/lib.macosx-13.5-arm64-cpython-310/fastavro/repository
copying fastavro/py.typed -> build/lib.macosx-13.5-arm64-cpython-310/fastavro
running build_ext
Error compiling Cython file:
------------------------------------------------------------
...
writer_schema, named_schemas, offset, size, return_record_name, return_record_name_override
)
class Block:
def __init__(
^
------------------------------------------------------------
fastavro/_read.pyx:976:4: Compiler crash in AnalyseDeclarationsTransform
File 'ModuleNode.py', line 203, in analyse_declarations: ModuleNode(_read.pyx:1:0,
doc = 'Python code for reading AVRO files',
full_module_name = 'fastavro._read')
File 'Nodes.py', line 393, in analyse_declarations: StatListNode(_read.pyx:10:0)
File 'Nodes.py', line 393, in analyse_declarations: StatListNode(_read.pyx:975:0)
File 'Nodes.py', line 5121, in analyse_declarations: PyClassDefNode(_read.pyx:975:0,
name = 'Block')
File 'Nodes.py', line 393, in analyse_declarations: StatListNode(_read.pyx:976:4)
File 'Nodes.py', line 2710, in analyse_declarations: CFuncDefNode(_read.pyx:976:4,
args = [...]/11,
modifiers = [...]/0,
outer_attrs = [...]/2,
overridable = True,
visibility = 'private')
File 'Nodes.py', line 2721, in declare_cpdef_wrapper: CFuncDefNode(_read.pyx:976:4,
args = [...]/11,
modifiers = [...]/0,
outer_attrs = [...]/2,
overridable = True,
visibility = 'private')
File 'Nodes.py', line 2787, in call_self_node: CFuncDefNode(_read.pyx:976:4,
args = [...]/11,
modifiers = [...]/0,
outer_attrs = [...]/2,
overridable = True,
visibility = 'private')
Compiler crash traceback from this point on:
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/Cython/Compiler/Nodes.py", line 2787, in call_self_node
type_entry = self.type.args[0].type.entry
AttributeError: 'PyObjectType' object has no attribute 'entry'
Compiling fastavro/_read.pyx because it changed.
[1/1] Cythonizing fastavro/_read.pyx
Traceback (most recent call last):
File "/Users/test/.venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/test/.venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/test/.venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 416, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 401, in _build_with_temp_dir
self.run_setup()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 37, in <module>
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 107, in setup
return distutils.core.setup(**attrs)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 346, in run
self.run_command("build")
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 131, in run
self.run_command(cmd_name)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 345, in run
self.build_extensions()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 467, in build_extensions
self._build_extensions_serial()
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 493, in _build_extensions_serial
self.build_extension(ext)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/Cython/Distutils/build_ext.py", line 122, in build_extension
new_ext = cythonize(
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1134, in cythonize
cythonize_one(*args)
File "/private/var/folders/98/t4j615w14737x8f1qv8rynlm0000gq/T/pip-build-env-itnk3zqh/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1301, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: fastavro/_read.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for fastavro
Failed to build fastavro
ERROR: Could not build wheels for fastavro, which is required to install pyproject.toml-based projects
</code></pre>
| <python><pip><cython><python-3.10><fastavro> | 2023-08-04 12:25:08 | 1 | 16,548 | planetp |
76,835,729 | 386,861 | How to filter out numbers from a dataframe column in Pandas | <p>I've got a Pandas dataframe that I've almost cleaned with its various functions.</p>
<p>The bit I can't work out is which <code>apply</code>-style operation will remove the numbers from the 'region' column.</p>
<p>England6 should be England.</p>
<pre><code> region admissions_total admissions_count_men admissions_count_women adm_per_100_000_all adm_per_male adm_per_female year
11 England6 710562 243565 466978 1322.8 963.1 1685.2 2019
13 North East 24410 8645 15765 932 684 1177 2019
14 Darlington 645 295 355
</code></pre>
<p>What can I do next?</p>
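<p>This is the direction I have been experimenting with, although I have not verified it covers every case in my data:</p>
<pre><code># sketch: strip any digits from the region names with a regex replace
df['region'] = df['region'].str.replace(r'\d+', '', regex=True)
</code></pre>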
| <python><pandas> | 2023-08-04 12:05:56 | 0 | 7,882 | elksie5000 |
76,835,527 | 18,771,355 | TkInter 'invalid command "after" script' by using destroy(): is it safe to use quit() and exit()? | <p>I am using a mix of <code>customtkinter</code> and <code>tkinter</code> to program my software, based on an MVC architecture.
However, when quitting using the regular <code>[x]</code> button, I get the following error repeated multiple times for different lambdas:</p>
<pre class="lang-py prettyprint-override"><code> while executing
"140292574995648"
(command for "WM_DELETE_WINDOW" window manager protocol)
invalid command name "140292575264640update"
while executing
"140292575264640update"
("after" script)
</code></pre>
<p>And even though my tk app closes, the Python interpreter in PyCharm keeps running.</p>
<p>I read in <a href="https://stackoverflow.com/questions/26168967/invalid-command-name-while-executing-after-script">this first post</a> and in <a href="https://stackoverflow.com/questions/75811765/invalid-command-name-1775568714624update-in-customtkinter">this second post</a> that it has to do with the <code>destroy()</code> method when <code>after()</code> callbacks are still pending after the destruction of the main root, coupled with the fact that, to quote the second post: <code>customtkinter uses the after command to handle the animation of the button. However, in between the button being pressed and released you're destroying the root window. That makes it impossible for the after command to run since it has been destroyed.</code></p>
<p>Here is a sample of my code:</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
import customtkinter
import params
from VIEW.MainView import MainView
class App(customtkinter.CTk):
def __init__(self):
super().__init__()
self.withdraw()
self.title(f'FireLearn GUI v{params.version}')
view = MainView(self)
self.after(0, self.deiconify)
def onClosure(app):
app.quit()
exit()
if __name__ == '__main__':
app = App()
app.protocol('WM_DELETE_WINDOW', partial(onClosure, app)) # root is your root window
app.mainloop()
</code></pre>
<p>Note that the <code>after()</code> call in <code>__init__()</code> is the only one I wrote in my whole code, and removing it doesn't change much.</p>
<p>I tried to avoid using <code>destroy()</code> by handling the closure differently: in my <code>onClosure()</code> function I first use <code>quit()</code> to close the app, then <code>exit()</code> to end all remaining processes. This works fine.</p>
<p>But am I 'in the right' to do this? Is it bad practice? Am I missing some important sub-process, or killing processes in a bad way by doing so? I'd like to do things properly.</p>
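<p>For comparison, this is the alternative I considered but have not fully tested: cancelling the pending <code>after()</code> callbacks before destroying, assuming Tcl's <code>after info</code> lists all of them:</p>
<pre><code>def onClosureAlternative(app):
    # cancel every callback still scheduled with after(), then destroy
    for after_id in app.tk.eval('after info').split():
        app.after_cancel(after_id)
    app.destroy()
</code></pre>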
<p>Thanks for the help.</p>
| <python><tkinter><process><customtkinter> | 2023-08-04 11:38:30 | 0 | 316 | Willy Lutz |
76,835,497 | 369,287 | Unable to install packages in python virtual environment in Ubuntu | <p>On my Ubuntu system I have Python 3.8. I created a virtual environment using Python 3.7.
I used the command below to create the virtual environment.</p>
<pre><code>sudo virtualenv -p /usr/bin/python3.7 venv
</code></pre>
<p>Then I activated the virtual environment using the command <code>source venv/bin/activate</code>. After that, when I try to install boto3==1.24.35 from requirements.txt using the command</p>
<pre><code>sudo pip install -r requirements.txt
</code></pre>
<p>I get the message Requirement already satisfied: boto3==1.24.35 in /usr/local/lib/python3.8/dist-packages. It does not install boto3 under the virtual environment, so when I execute the application in the virtual environment I get the error "ModuleNotFoundError: No module named 'boto3'".</p>
<p>Why is pip install not using the virtual environment's Python version? Please help me resolve this problem.</p>
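<p>For completeness, this is the variant I have not yet tried: dropping sudo so that the venv's own pip, rather than the system one, handles the install:</p>
<pre><code>source venv/bin/activate
pip install -r requirements.txt    # or explicitly: venv/bin/pip install -r requirements.txt
</code></pre>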
| <python><python-3.x><ubuntu> | 2023-08-04 11:33:31 | 0 | 708 | user369287 |
76,835,389 | 2,251,058 | Fastapi enabling Cors gives - error error serving tap http server: http: Server closed | <p>Adding CORS to Python FastAPI Framework is giving this error</p>
<pre><code>error error serving tap http server: http: Server closed
</code></pre>
<p>Configuration:</p>
<pre><code>def get_app() -> FastAPI:
    """
    Returns the FastAPI ASGI object that will be later consumed by an uvicorn worker. This is where you register
    dependencies, route prefixes, swagger tags, and associate routers to the application.
    """
    app_config = get_config()  # get config and initialise logging
    app = FastAPI(title=APPLICATION_NAME, version="0.1.0")
    log.info(f"Starting {APPLICATION_NAME} version={app_config.app_version}!")
    app.include_router(home.router, tags=["home"])
    app.include_router(health.router, tags=["health"])
    app.include_router(content_relevancy_interface.router, tags=["content-relevancy-interface"])
    app.include_router(copperfield.router, tags=["copperfield"])

    origins = [
        "https://abcc.com",
        "http://abcc.net",
        "http://localhost",
        "http://localhost:8080",
    ]
    app.add_middleware(
        CORSMiddleware,
        allow_origins=origins,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    app.middleware('http')(catch_exceptions_middleware)  # register global exception handler
    slash_patcher(app)
    return app
</code></pre>
<p>This is how it is run in a shell script</p>
<pre><code>export APP_RUN="gunicorn -c /etc/gunicorn/config.py seo_automation_common_apis.fastapi_app:get_app"
.
.
Some other setup
# Start app
$APP_RUN
</code></pre>
<p>It seems the error is coming from istio -</p>
<p><a href="https://github.com/istio/istio/issues/44445" rel="nofollow noreferrer">https://github.com/istio/istio/issues/44445</a></p>
<p><a href="https://github.com/istio/istio/issues/44244" rel="nofollow noreferrer">https://github.com/istio/istio/issues/44244</a></p>
<p>Is istio used internally in FastAPI?</p>
<p>Let me know if any other detail is needed/missing.</p>
<p>Any help is appreciated</p>
| <python><python-3.x><fastapi><fastapi-middleware> | 2023-08-04 11:14:54 | 0 | 3,287 | Akshay Hazari |
76,835,326 | 4,350,650 | Dynamically populating column in Polars | <p>I am trying to translate a function I use to calculate skew over each feature of a dataframe.
I calculate 2 skews, one for the feature when <code>date == d</code>, and one over the data when <code>date <= d</code>.
I was able to translate the first line of the following code but not the second.
The code to recreate a dataframe:</p>
<pre><code>import random

import pandas as pd
date = [random.choice([1,2,3]) for x in range(0,100)]
feature1 = [random.gauss(0,1) for x in range(0,100)]
feature2 = [random.gauss(0,2) for x in range(0,100)]
df = pd.DataFrame({"date":date,'feature1':feature1,'feature2':feature2})
</code></pre>
<p>Here is the pandas function:</p>
<pre><code>import itertools

import scipy.stats

def features_augment(df):
    dff = df.copy()
    for col, d in itertools.product(dff.columns[2:], dff.date.unique()):
        dff.loc[dff.date == d, 'sk_' + col] = scipy.stats.skew(dff.loc[dff.date == d, col])   # translated in polars
        dff.loc[dff.date == d, 'rsk_' + col] = scipy.stats.skew(dff.loc[dff.date <= d, col])  # couldn't translate
    return dff
</code></pre>
<p>Here is my polars function so far:</p>
<pre><code>def pl_feature_augment(df):
    pl_df = pl.from_pandas(df)
    # calculate skew for each feature on each date
    sk = pl_df.groupby("date").agg(pl.all().exclude("id", "volvol").skew().prefix("sk_"))
    pl_df = pl_df.join(sk, "date")
    for d in pl_df.select(pl.col("date")).unique():
        # doesn't work
        pl_df.with_columns(pl.when(pl.col("date") <= d).then(pl.all().exclude("id", "vol", "volvol").skew()).prefix("rsk_"))
    return pl_df.to_pandas()
</code></pre>
<p>I managed to avoid the <code>for</code> loop for the first expression but can't figure out how to translate the second one even with a <code>for</code> loop...</p>
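<p>The closest I have come for the second expression is the sketch below: evaluating the cumulative skew once per unique date and joining the results back. Here I only exclude "date" to match the sample dataframe above; my real code would exclude the id columns as well, and I have not verified the dtype of the joined key:</p>
<pre><code>dates = sorted(pl_df["date"].unique().to_list())
rsk = pl.concat([
    pl_df.filter(pl.col("date") <= d)
         .select(pl.all().exclude("date").skew().prefix("rsk_"))
         .with_columns(pl.lit(d).alias("date"))  # may need a cast to match the date dtype
    for d in dates
])
pl_df = pl_df.join(rsk, on="date")
</code></pre>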
| <python><python-polars> | 2023-08-04 11:03:39 | 1 | 2,099 | Mayeul sgc |
76,834,944 | 19,626,271 | Error: Failed building wheel for pysha3 while trying to install it or use conda | <p>I am trying to setup a Jupyter Notebook data analytics project using GraphSense, and I am having pysha3 problems. I use Windows 11, both Python 3.11 and 3.10, the newest Conda version 23.7.2. One of the steps is I need to run the following command to create an environment from a YAML file:</p>
<p><code>conda env create -f environment.yml</code></p>
<p>The file can be found under C:\GitHubRepo*my_environment*\, and I changed directory to there.
It starts downloading and extracting packages, and successfully prepares, executes, and verifies; then there is an "Installing pip dependencies" problem:</p>
<pre><code>Building wheel for pysha3 (setup.py): started
Building wheel for pysha3 (setup.py): finished with status 'error'
Running setup.py clean for pysha3
Successfully built graphsense-python
Failed to build pysha3
Pip subprocess error:
Running command git clone --filter=blob:none --quiet https://github.com/graphsense/graphsense-python 'C:\Users\*me*\AppData\Local\Temp\pip-req-build-bdcsrhjs'
error: subprocess-exited-with-error
...
C:\Users\*me*\anaconda3\envs\*my_environment*\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.31.31103\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pysha3
ERROR: Could not build wheels for pysha3, which is required to install pyproject.toml-based projects
</code></pre>
<p>I noticed that it tries to open files from my environment for the procedure, so I quit and just tried to install pysha3 with pip install pysha3 under my user directory. First I tried with Python 3.11, which didn't work; based on other posts I switched to Python 3.10 with Anaconda and ran it in the Anaconda Prompt, and still:</p>
<pre><code>Building wheel for pysha3 (setup.py) ... error
error: subprocess-exited-with-error
python setup.py bdist_wheel did not run successfully.
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.31.31103\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DPY_WITH_KECCAK=1 -IC:\Users\*me*\anaconda3\envs\py310\include -IC:\Users\*me*\anaconda3\envs\py310\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.31.31103\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /TcModules/_sha3/sha3module.c /Fobuild\temp.win-amd64-cpython-310\Release\Modules/_sha3/sha3module.obj
sha3module.c
C:\Users\*me*\anaconda3\envs\py310\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
Building wheel for pysha3 (setup.py) ... error
error: subprocess-exited-with-error
error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.31.31103\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
</code></pre>
<p>So this doesn't work either. What can I do?</p>
| <python><python-3.x><installation><pip><anaconda> | 2023-08-04 10:10:00 | 3 | 395 | me9hanics |
76,834,821 | 8,930,751 | Scheduling Python calculations using Schedule Library | <p>I need code to schedule my Python calculations, which perform some operation every two hours. So I have written this code to test how it works, and scheduled it to run every minute for testing.</p>
<pre><code>import schedule
from datetime import datetime

def do_job():
    print("Inside do_job at ", datetime.now().strftime("%H:%M:%S"))

schedule.every(1).minutes.do(do_job)

print("Starting at ", datetime.now().strftime("%H:%M:%S"))

while True:
    schedule.run_pending()
</code></pre>
<p>This code works as expected: the do_job() function is called every minute. The only issue is that do_job is not called as soon as I run the script; it first executes a minute later. This is the output when I run the above script.</p>
<blockquote>
<p>Starting at 15:15:52<br />
Inside do_job at 15:16:52<br />
Inside do_job at 15:17:52</p>
</blockquote>
<p>Is there a way to specify in the schedule that the job should also run immediately, at the first minute?</p>
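<p>For reference, the workaround I am currently considering is simply calling the job once before entering the loop (and sleeping between checks, since I suspect the bare loop spins at full CPU):</p>
<pre><code>import time

do_job()  # run once immediately
schedule.every(1).minutes.do(do_job)

while True:
    schedule.run_pending()
    time.sleep(1)  # yield the CPU between checks
</code></pre>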
| <python><schedule> | 2023-08-04 09:54:47 | 1 | 2,416 | CrazyCoder |
76,834,699 | 9,488,023 | Combine two Pandas dataframes if unique values found in some columns | <p>I have two Pandas dataframes in Python that looks something like this example:</p>
<pre><code>import pandas as pd

df_test = pd.DataFrame(data=None, columns=['file', 'comments', 'priority', 'id'])
df_test.file = ['file_1', 'file_1_v2', 'file_2_old', 'file_2_new', 'file_3', 'file_4_v1', 'file_4_v2']
df_test.comments = ['abc', 'def', 'xyz', 'ghi', 'pqr', 'uvw', 'pqr']
df_test.priority = [10, 100, 15, 25, 50, 20, 40]
df_test.id = [1, 1, 2, 2, 3, 4, 4]
df_test2 = pd.DataFrame(data=None, columns = ['file', 'comments', 'priority', 'id'])
df_test2.file = ['file_1', 'file_1_test', 'file_2_old', 'file_3_s', 'file_4_v3', 'file_4_v4']
df_test2.comments = ['abd', 'def', 'xya', 'pqr', 'uvw', 'pqr']
df_test2.priority = [10, 110, 35, 50, 20, 50]
df_test2.id = [1, 1, 2, 3, 4, 4]
</code></pre>
<p>What I want to do is add the rows from the second dataframe to the first one, but only if a row has unique values in either 'comments' or 'priority' compared to all entries in the first dataframe with the same 'id' value.</p>
<p>In the example, it would mean that we would add row 1, 2, 3, and 6 from the second dataframe, since these have unique values in those two columns for the given 'id' number.</p>
<p>I suppose I could concatenate the two and drop duplicates like this:</p>
<pre><code>df_test3 = pd.concat([df_test, df_test2])
df_test3 = df_test3.drop_duplicates(subset = ['comments', 'priority'], keep = 'first')
</code></pre>
<p>But this also drops row 6 from the second dataframe, because it is not unique compared to a row with a different 'id' number. What I want is for it to check whether the rows are unique only within their own 'id' number. Any help on how to do this would be really appreciated!</p>
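<p>The closest I have come so far is including 'id' in the subset, which scopes the duplicate check to each id; this appears to give the result I described above, but I am unsure whether it is the idiomatic approach:</p>
<pre><code># sketch: add 'id' to the subset so rows are only compared within the same id
df_test3 = pd.concat([df_test, df_test2])
df_test3 = df_test3.drop_duplicates(subset=['id', 'comments', 'priority'], keep='first')
</code></pre>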
| <python><pandas><dataframe><merge> | 2023-08-04 09:37:20 | 1 | 423 | Marcus K. |
76,834,671 | 18,904,265 | What is the most efficient way to get data from InfluxDB (v2) into polars? | <p>I want to retrieve data from InfluxDB (self hosted, Version 2.0) and process it in python using polars. For communication I am using the <a href="https://github.com/influxdata/influxdb-client-python" rel="nofollow noreferrer">InfluxDBClient</a> for InfluxDB v2. At the moment I am doing it using this:</p>
<pre class="lang-py prettyprint-override"><code>
from dotenv import load_dotenv
from influxdb_client import InfluxDBClient
import polars
import os
load_dotenv()
client = InfluxDBClient(
url=os.getenv("url"), token=os.getenv("token"), org=os.getenv("org"))
query_api = client.query_api()
flux_string = 'from(bucket: "test_bucket") |> range(start:-2y) |> drop(columns: ["_start","_stop"])'
data_frame = query_api.query_data_frame(flux_string)
polars_data = polars.from_pandas(data_frame)
</code></pre>
<p>For around 80 000 rows of data, this takes about 1 to 2 seconds (varies a bit). Is there a more efficient way of getting the data? This will be part of a web app for internal use, so I'd want to avoid any long processing times.</p>
<p>I also tried some of the other methods offered by the influxdb-client package; maybe there is some way to take one of these other outputs and get it into polars? Further info about the functions: <a href="https://influxdb-client.readthedocs.io/en/stable/api.html#influxdb_client.QueryApi.query_csv" rel="nofollow noreferrer">docs</a></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>function</th>
<th>processing time</th>
<th>return type</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>query_raw()</code></td>
<td>0.5 s</td>
<td>HTTPResponse object (converts to string)</td>
</tr>
<tr>
<td><code>query_data_frame()</code></td>
<td>1.5 s</td>
<td>pandas DataFrame</td>
</tr>
<tr>
<td><code>query_data_frame_stream()</code></td>
<td>0.6 s</td>
<td>generator object</td>
</tr>
<tr>
<td><code>query_csv()</code></td>
<td>0.5 s</td>
<td>csv iterator</td>
</tr>
<tr>
<td><code>query_stream()</code></td>
<td>0.7 s</td>
<td>generator object</td>
</tr>
</tbody>
</table>
</div>
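<p>One direction I have been experimenting with, although I have not verified it against all annotated-CSV edge cases, is feeding the raw response straight into polars and skipping the pandas conversion entirely. This assumes the <code>query_raw()</code> response exposes its body via <code>.data</code> and that stripping the '#' annotation lines leaves parseable CSV:</p>
<pre class="lang-py prettyprint-override"><code>import io

raw = query_api.query_raw(flux_string).data.decode("utf-8")
# drop the Flux annotation lines (they start with '#') before parsing
csv_text = "\n".join(line for line in raw.splitlines() if not line.startswith("#"))
polars_data = polars.read_csv(io.StringIO(csv_text))
</code></pre>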
<p>As always, thank you a lot for any help!</p>
| <python><influxdb><python-polars> | 2023-08-04 09:34:11 | 0 | 465 | Jan |
76,834,622 | 7,848,740 | Python pyserial use a lot of CPU for simple task on Celery | <p>I'm using Pyserial inside a Celery worker to write every <em>150ms</em> a packet of max 20 bytes on the serial</p>
<p>Literally the celery worker does nothing except send data on the serial with <code>ser.write(packet)</code> with <code>ser = serial.Serial(COM, 38400, timeout=0.1)</code></p>
<h3>Celery worker</h3>
<pre><code>@shared_task(bind=True)
def start_serial_acquisition(self, idObj, porta_COM):
    ser = serial.Serial(porta_COM, 38400, timeout=0.1)
    packet = b'\x80\x80\x80\x80\x80\x80\x80\x80\x80\x80\x80'

    time_start = time.monotonic()
    while True:
        time_stop = time.monotonic()
        if time_stop - time_start > 0.149:
            ser.write(packet)
            time_start = time.monotonic()
</code></pre>
<p>The main issue is that this simple process takes an entire core of my two-core CPU, meaning 50% of the CPU is used just to send data on the serial port. The specs of my PC are</p>
<pre><code>PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
</code></pre>
<p>I'm using Python 3.10, the latest version of pyserial, and Celery v5.3.0.</p>
<p>When I stop writing to the serial port, the CPU basically goes idle, with a consumption between 1 and 2%.</p>
<p>I can't find the issue and would like to optimize this. I'm using a <a href="https://it.farnell.com/ftdi/usb-rs485-we-1800-bt/cavo-usb-rs485-convertitore-seriale/dp/1740357?gclid=EAIaIQobChMIl_nD5dfCgAMVCN7tCh2GkQhmEAQYAyABEgLtAfD_BwE&mckv=_dc%7Cpcrid%7C%7Cplid%7C%7Ckword%7C%7Cmatch%7C%7Cslid%7C%7Cproduct%7C1740357%7Cpgrid%7C%7Cptaid%7C&CMP=KNC-GIT-GEN-SHOPPING-PMAX-Low-Roas-Short-Title-Test-21-Dec-22&gross_price=true" rel="nofollow noreferrer">USB - RS485 converter</a> as the serial interface to communicate the data.</p>
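<p>For what it's worth, this is the variant I am considering: sleeping for the remainder of each 150 ms period instead of spinning on <code>time.monotonic()</code>, which I expect would bring the CPU back towards idle:</p>
<pre><code>next_send = time.monotonic()
while True:
    ser.write(packet)
    next_send += 0.150
    delay = next_send - time.monotonic()
    if delay > 0:
        time.sleep(delay)  # yield the CPU instead of busy-waiting
</code></pre>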
| <python><celery><pyserial><rs485> | 2023-08-04 09:27:51 | 1 | 1,679 | NicoCaldo |
76,834,604 | 1,460,514 | How to update an attribute value in same date format | <p>I want to change the <code>publicationDateTime="2023-07-31T07:02:59+00:00"</code> attribute.
My XML is:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" standalone="no"?><Research xmlns="http://www.rixml.org/2005/3/RIXML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" createDateTime="2023-07-31T07:02:16+00:00" language="eng" researchID="GPS-4409687-0" xsi:schemaLocation="http://www.rixml.org/2005/3/RIXML http://www.rixml.org/assets/documents/schemas/RIXML-2_4.xsd">
<Product productID="12345-0">
<Source>
<Organization primaryIndicator="Yes" type="SellSideFirm">
<OrganizationID idType="VendorCode">ABP</OrganizationID>
<OrganizationName nameType="Display">ABCDF</OrganizationName>
</Organization>
</Source>
<Content>
<Title>Novice</Title>
</Content>
<Context external="Yes">
<ProductDetails periodicalIndicator="No" publicationDateTime="2023-07-31T07:02:59+00:00">
<ProductCategory productCategory="Support"/>
</ProductDetails>
</Context>
</Product>
</Research>
</code></pre>
<p>This is my code</p>
<pre><code>import os
import time
import uuid
import xml.etree.ElementTree as ET

ET.register_namespace('', "http://www.rixml.org/2005/3/RIXML")

OUTPUT_FOLDER = "OUTPUT/"
input_folder = "INPUT/"

all_files = os.listdir(input_folder)
xml_files = {f: f for f in all_files if f.endswith(".xml")}
xml_files_keys = list(xml_files.keys())
xml_files_keys.sort()

for file_name in xml_files_keys:
    print(file_name)
    xmlTree = ET.parse(input_folder + file_name)
    root = xmlTree.getroot()
    print(root)
    print(root.attrib)
    for child in root:
        print(child.attrib)
        pid = '2023-08-04T08:02:59+00:00'
        print(pid)
        child.set('publicationDateTime', pid)
    xmlTree.write(OUTPUT_FOLDER + file_name)
    print("written")
</code></pre>
<p>I am not able to update the attribute. It gets added at the root level instead.</p>
<p>Please suggest how to update it at the correct location.</p>
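<p>The closest I have come is the sketch below, targeting the ProductDetails element directly via a namespaced search instead of iterating over the root's immediate children, but I have not verified it against my full set of files:</p>
<pre><code>pid = '2023-08-04T08:02:59+00:00'
for details in root.iter("{http://www.rixml.org/2005/3/RIXML}ProductDetails"):
    details.set("publicationDateTime", pid)
xmlTree.write(OUTPUT_FOLDER + file_name)
</code></pre>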
<p>I am new to Python; apologies if this is a very obvious question.</p>
| <python><xml><elementtree><xml-attribute> | 2023-08-04 09:25:54 | 1 | 1,589 | Sudarshan kumar |
76,834,383 | 16,222,048 | How do I view code while resolving merge conflicts in PyCharm | <p>Say I have a repo with 2 files:</p>
<p><br />
<em>source.py</em></p>
<pre><code>def foo() -> int:
return 1
</code></pre>
<p><em>conflict.py</em></p>
<pre><code>from source import foo
def bar() -> int:
return foo()
</code></pre>
<hr />
<p>Now, I create and try to resolve a merge conflict:</p>
<p><br />
<em>conflict.py</em></p>
<pre><code>from source import foo
def bar() -> float:
return foo()
</code></pre>
<hr />
<p>I get the expected dialog box and am able to resolve the conflict:</p>
<p><a href="https://i.sstatic.net/Pm0bd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pm0bd.png" alt="Resolve merge conflict" /></a></p>
<hr />
<p>Now how do I review the code and check the actual return type of <code>foo()</code>?</p>
<p>The merge conflict resolve window is fixed on top. Command/ctrl+clicking <code>foo()</code> does not work. Even opening a second clone of my repo doesn't escape this intrusive behaviour.</p>
<p>This behaviour is the same for macOS and Windows. I'm using PyCharm 2022.2.3 Professional.</p>
| <python><pycharm><merge-conflict-resolution> | 2023-08-04 08:53:33 | 2 | 371 | Angelo van Meurs |
76,834,245 | 8,849,755 | Making room for larger title in plotly | <p>I am doing some plots in which I would like to add some text info. Normally this is just a single line and I use the <code>title</code> field for this, and it works perfectly out of the box. Now I want to add some more information and the title is overlapping the plot, which is ugly. Here is an example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy
import plotly.express as px
px.ecdf(
numpy.random.randn(99),
title = 'Title<br><sup>Subtitle</sup><br><sup>Some info that I want to display for this plot</sup><br><sup>Some other info I would like to hardcode here</sup>',
).write_html('plot.html',include_plotlyjs='cdn')
</code></pre>
<p><a href="https://i.sstatic.net/GdP9f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdP9f.png" alt="enter image description here" /></a></p>
<p>How can I compress the plot vertically, from the top, such that there is more room for this larger title?</p>
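<p>For reference, the only lever I have found so far is the top margin, though I don't know if it is the intended mechanism (the default top margin should be 100 px):</p>
<pre class="lang-py prettyprint-override"><code>fig = px.ecdf(
    numpy.random.randn(99),
    title = 'Title<br><sup>Subtitle</sup><br><sup>Some info that I want to display for this plot</sup><br><sup>Some other info I would like to hardcode here</sup>',
)
fig.update_layout(margin=dict(t=140))  # t: top margin in pixels
fig.write_html('plot.html', include_plotlyjs='cdn')
</code></pre>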
| <python><plotly> | 2023-08-04 08:32:30 | 1 | 3,245 | user171780 |
76,834,173 | 2,604,247 | How Does Celery AsyncResult Function Know Which Broker or Backend to Query? | <p>This is how I am sending a task to the celery workers and capturing the resulting task id. The <code>app</code> variable, i.e. the celery app, has the redis broker and backend url set as part of its properties. This is working fine.</p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery
REDIS_BROKER:str='redis://127.0.0.1:6369'
app: Celery = Celery(backend=REDIS_BROKER, broker=REDIS_BROKER)
task: str = app.send_task(name='add_two_numbers',
args=[28, 93]).id
# task='574b72ad-4512-4e0b-a14b-f56a2f725374' # An example
</code></pre>
<p>When I check the task status and result, this also works, when I thought it should not (which is why I am asking the question).</p>
<pre class="lang-py prettyprint-override"><code>from celery.result import AsyncResult
status: str = AsyncResult(id='574b72ad-4512-4e0b-a14b-f56a2f725374').state
result: int = AsyncResult(id='574b72ad-4512-4e0b-a14b-f56a2f725374').result
</code></pre>
<p>My question is: <em>why</em> does this work? There can be any number of redis brokers running on any IP address, which is why I supply the socket address of the broker to the <code>app</code> variable. Shouldn't <code>AsyncResult</code> require the task id, broker URL and backend URL to fetch the task status/result for me?
Or does it have some kind of invisible connection with the <code>app</code> object in the script through which it gets those broker and backend URLs?</p>
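<p>For reference, I expected to have to wire the app in explicitly, something like this (the <code>app</code> keyword does exist on <code>AsyncResult</code>):</p>
<pre class="lang-py prettyprint-override"><code>status: str = AsyncResult(id='574b72ad-4512-4e0b-a14b-f56a2f725374', app=app).state
</code></pre>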
<p>When I look this up, most tutorials and answers couple Django and Celery together, but I am using Celery without Django, purely as a task processor without any web-development component. Hence the question.</p>
| <python><redis><celery><task><messagebroker> | 2023-08-04 08:21:37 | 1 | 1,720 | Della |
76,834,156 | 18,928,131 | Merge dataframes in a dictionary based on the first dataframe | <p>I have multiple dataframes in a dictionary with this structure (the value should depict the dataframe):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import requests
ADWARE_MALWARE="https://raw.githubusercontent.com/ScriptTiger/scripttiger.github.io/master/alts/domains/blacklist.txt"
FAKENEWS="https://raw.githubusercontent.com/ScriptTiger/scripttiger.github.io/master/alts/domains/blacklist-f.txt"
GAMBLING="https://raw.githubusercontent.com/ScriptTiger/scripttiger.github.io/master/alts/domains/blacklist-g.txt"
PORN="https://raw.githubusercontent.com/ScriptTiger/scripttiger.github.io/master/alts/domains/blacklist-p.txt"
SOCIAL="https://raw.githubusercontent.com/ScriptTiger/scripttiger.github.io/master/alts/domains/blacklist-s.txt"
class Blocklist:
def req(self, url: str) -> list:
req = requests.get(url)
lst = []
if req.status_code == 200:
read_data = req.content
read_data = read_data.decode('utf-8')
for line in read_data.splitlines():
if not line.startswith("#"):
lst.append(line)
return lst
else:
raise Exception('Website not available: Error ', req.status_code)
def create_df(self, blocklist: list, blocklist_name: str) -> pd.DataFrame:
df = pd.DataFrame({'Domain': blocklist, 'Blocklist Name': blocklist_name})
return df
def insert(self):
dic = {
ADWARE_MALWARE: "ads_malware",
FAKENEWS: "fakenews",
GAMBLING: "gambling",
PORN: "porn",
SOCIAL: "social"
}
d = {}
for key, value in dic.items():
blocklist = Blocklist().req(key)
d[value] = Blocklist().create_df(blocklist, value)
print(d[value])
</code></pre>
<pre><code># Current dataframes
Domain Blocklist Name
0 ck.getcookiestxt.com ads_malware
1 eu1.clevertap-prod.com ads_malware
2 wizhumpgyros.com ads_malware
Domain Blocklist Name
0 ck.getcookiestxt.com fakenews
1 eu1.clevertap-prod.com fakenews
2 wizhumpgyros.com fakenews
3 yournationnews.com fakenews
4 yournewswire.com fakenews
Domain Blocklist Name
0 ck.getcookiestxt.com gambling
1 eu1.clevertap-prod.com gambling
2 wizhumpgyros.com gambling
3 zebrabet.com.au gambling
4 zenitbet.com gambling
Domain Blocklist Name
0 ck.getcookiestxt.com porn
1 eu1.clevertap-prod.com porn
2 wizhumpgyros.com porn
3 www.zetton-av.com porn
4 www.zeus-web.net porn
Domain Blocklist Name
0 ck.getcookiestxt.com social
1 eu1.clevertap-prod.com social
2 wizhumpgyros.com social
3 match.com social
4 mbga.jp social
Domain Blocklist Name
0 ck.getcookiestxt.com fakenews
1 eu1.clevertap-prod.com fakenews
2 wizhumpgyros.com fakenews
3 yournationnews.com fakenews
4 yournewswire.com fakenews
</code></pre>
<pre><code># Expected dataframes
Domain Blocklist Name
0 ck.getcookiestxt.com ads_malware
1 eu1.clevertap-prod.com ads_malware
2 wizhumpgyros.com ads_malware
Domain Blocklist Name
0 yournationnews.com fakenews
1 yournewswire.com fakenews
Domain Blocklist Name
0 zebrabet.com.au gambling
1 zenitbet.com gambling
Domain Blocklist Name
0 www.zetton-av.com porn
1 www.zeus-web.net porn
Domain Blocklist Name
0 match.com social
1 mbga.jp social
Domain Blocklist Name
0 yournationnews.com fakenews
1 yournewswire.com fakenews
</code></pre>
<p>All of the other dataframes after <code>adware_malware</code> contain the data of <code>adware_malware</code>. This structure does not come from me appending new data to existing data; the source of the data is simply built that way. I want to remove these shared rows in order to prevent duplicates. I'm not sure how to implement this with pandas <code>pd.merge</code>.</p>
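<p>The closest I have gotten is the anti-join sketch below, dropping every domain that already appears in the <code>ads_malware</code> frame (assuming <code>insert()</code> is changed to return the dict <code>d</code>), though I am unsure whether this is the pandas-idiomatic way compared to <code>pd.merge</code>:</p>
<pre class="lang-py prettyprint-override"><code>base_domains = set(d["ads_malware"]["Domain"])
for name, frame in d.items():
    if name != "ads_malware":
        d[name] = frame[~frame["Domain"].isin(base_domains)].reset_index(drop=True)
</code></pre>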
| <python><pandas><dataframe> | 2023-08-04 08:18:46 | 1 | 304 | Jan |
76,834,138 | 3,240,790 | Spark sql throws error while reading CDF enabled DELTA table in Azure databricks | <p>I am trying to run the query below in a Python notebook inside Azure Databricks:</p>
<pre><code>tab = 'db.t1'
df = spark.sql(f"SELECT MAX(_commit_version) as max_version FROM table_changes({tab},0)")
df.first()["max_version"]
</code></pre>
<p>But it throws error as below</p>
<pre><code>AnalysisException: [UNRESOLVED_COLUMN.WITHOUT_SUGGESTION] A column or function parameter with name `t1` cannot be resolved. ; line 1 pos 62;
'Project ['MAX('_commit_version) AS max_version#5374]
+- 'UnresolvedTableValuedFunction [table_changes], ['db.t1, 0]
</code></pre>
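<p>One variant I have not yet verified is passing the table name as a quoted string literal inside the SQL text:</p>
<pre><code>df = spark.sql(f"SELECT MAX(_commit_version) AS max_version FROM table_changes('{tab}', 0)")
</code></pre>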
<p>Can someone help me?</p>
| <python><apache-spark-sql><azure-databricks><delta-lake> | 2023-08-04 08:15:36 | 1 | 3,629 | Surender Raja |
76,834,090 | 11,776,529 | Speed Up Concatenating Bin Files with Pandas and Exporting | <p>I currently have about 1,500 .bin.txt files that I am analysing using Excel Powerquery; however, the loading of the data is very slow (15+ minutes) so I decided to create a short python script to combine all the bin files first, then read them all together in powerquery.</p>
<p>I have the following script, but the time to concatenate all these files is over 20 minutes, and the full dataset will be 10 or more times the size of these 1,500 files. Is there any way to speed this up?</p>
<pre><code>import os
import tkinter as tk
from tkinter import filedialog

import pandas as pd

def combineBinFiles():
    root = tk.Tk()
    root.withdraw()

    # Get bin files folder
    folder_selected = filedialog.askdirectory()
    print(folder_selected)
    os.chdir(folder_selected)
    files = os.listdir(folder_selected)
    print(files)

    df = pd.DataFrame()
    temp_df = pd.DataFrame(columns=['Timestamp', 'Wind speed', 'Own consumption'])
    for file in files:
        if file.endswith('.bin.txt'):
            print("Reading file: " + file)
            # Format file based on space delimiter
            temp_df = pd.read_csv(file, delimiter=" ", header=0)
            # Extract date from column name
            date = temp_df.columns[0]
            # Concatenate date to the beginning of each timestamp before adding it to the dataframe
            temp_df[date] = temp_df[date].apply(lambda x: date + ' ' + x)
            df = pd.concat([df, temp_df], axis=0, ignore_index=True)
    df.to_csv('combinedBinFile.csv', index=False)
    print(df)

combineBinFiles()
</code></pre>
<p>The files are formatted as: Timestamp, Wind speed, and Consumption. Each file has the date as the timestamp header, and the rest of the column is the exact time (without the date). So in the code I concatenate the date to each time value before adding it to the overall dataframe.</p>
<p><strong>Edit</strong>: Bin files example; each file looks like this, just with different data and dates.</p>
<pre><code>14_07_2023 .WindSpeed .Power
17 50 00 006 10,53 0
17 50 00 016 10,53 0
17 50 00 026 10,53 0
17 50 00 036 10,53 0
17 50 00 046 10,53 0
17 50 00 056 10,53 0
</code></pre>
<p>Modified code with multithreading and chunked CSV writing:</p>
<pre><code>import os
import queue
import threading
import tkinter as tk
from tkinter import filedialog

import pandas as pd

def worker(q, df_list):
    while not q.empty():
        file = q.get()
        if file.endswith('.bin.txt'):
            print("Reading file: " + file)
            temp_df = pd.read_csv(file, delimiter="\t", header=0, engine='python')
            date = temp_df.columns[0]
            temp_df[date] = temp_df[date].apply(lambda x: date + ' ' + x)
            # Delete the last column:
            temp_df = temp_df.iloc[:, :-1]
            temp_df.columns = ['Timestamp', 'Wind speed', 'Own consumption']
            df_list.append(temp_df)
        q.task_done()

def combineBinFilesThreaded():
    root = tk.Tk()
    root.withdraw()
    folder_selected = filedialog.askdirectory()
    print(folder_selected)
    os.chdir(folder_selected)
    files = os.listdir(folder_selected)
    print(files)

    df_list = []
    q = queue.Queue()

    # Create 4 worker threads
    for i in range(4):
        t = threading.Thread(target=worker, args=(q, df_list))
        t.daemon = True
        t.start()

    # Put the files in the queue
    for file in files:
        q.put(file)

    # Wait for all the tasks in the queue to be processed
    q.join()
    print("Done joining")

    # Combine all the dataframes
    df = pd.concat(df_list, axis=0, ignore_index=True)
    print("Done concatenating")

    chunksize = 100000
    # Remove any existing combinedBinFile so the append mode below doesn't duplicate data
    if os.path.exists('combinedBinFile.csv'):
        os.remove('combinedBinFile.csv')
    for i in range(0, len(df), chunksize):
        print("Writing chunk: " + str(i))
        df.iloc[i:i+chunksize].to_csv('combinedBinFile.csv', index=False, mode='a', header=(i == 0))  # write the header only once
    print("Done writing")
</code></pre>
| <python><pandas><excel><dataframe> | 2023-08-04 08:09:05 | 1 | 355 | Alexander |
76,833,587 | 8,868,327 | How to create a mixin for `django_filters.FilterSet`? | <p>I have a bunch of <code>FilterSet</code>s to which I'd like to add the same new filters, but whenever I do something like the code below, I get an error saying that <code>FilterSet.Meta</code> must specify a model, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>class ModifiedAtMixin:
modified_at_until = django_filters.DateTimeFilter(method="modified_until")
def modified_until(self, queryset, name, value):
return queryset.filter(modified_at__lte=value)
class Meta:
fields = ("modified_at_until",)
class FooFilterSet(ModifiedAtMixin, django_filters.rest_framework.FilterSet):
created_at_until = django_filters.DateTimeFilter(method="created_until")
def created_until(self, queryset, name, value):
return queryset.filter(created_at__lte=value)
class Meta:
model = Foo
fields = ModifiedAtMixin.Meta.fields + ("created_at_until",)
</code></pre>
<p>For reference, I also tried changing the order of the parent classes in <code>FooFilterSet</code>, and it made no difference.</p>
<p>How can I create a reusable mixin such as <code>ModifiedAtMixin</code>?</p>
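<p>For reference, this is the variant I considered but have not verified: keeping the mixin Meta-less and exposing the shared field names as a plain class attribute instead:</p>
<pre class="lang-py prettyprint-override"><code>class ModifiedAtMixin:
    mixin_fields = ("modified_at_until",)  # plain tuple instead of a Meta attribute
    modified_at_until = django_filters.DateTimeFilter(method="modified_until")

    def modified_until(self, queryset, name, value):
        return queryset.filter(modified_at__lte=value)

# and in the concrete FilterSet's Meta:
#     fields = ModifiedAtMixin.mixin_fields + ("created_at_until",)
</code></pre>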
| <python><django> | 2023-08-04 06:54:08 | 2 | 992 | EDG956 |
76,833,578 | 139,150 | not able to append a zip file | <p>I have this function, which works and replaces the content of a given file inside the archive.</p>
<pre><code>import os
import tempfile
import zipfile

def updateZip(zipname, filename, data):
    # generate a temp file
    tmpfd, tmpname = tempfile.mkstemp(dir=os.path.dirname(zipname))
    os.close(tmpfd)

    # create a temp copy of the archive without filename
    with zipfile.ZipFile(zipname, 'r') as zin:
        with zipfile.ZipFile(tmpname, 'w') as zout:
            zout.comment = zin.comment  # preserve the comment
            for item in zin.infolist():
                if item.filename != filename:
                    zout.writestr(item, zin.read(item.filename))

    # replace with the temp archive
    os.remove(zipname)
    os.rename(tmpname, zipname)

    # now add filename with its new data
    with zipfile.ZipFile(zipname, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(filename, data)
</code></pre>
<p>If I call the function twice as shown below, then the text in the file is "second note".</p>
<pre><code>updateZip('acor_en-US.dat', 'DocumentList.xml', 'first note')
updateZip('acor_en-US.dat', 'DocumentList.xml', 'second note')
</code></pre>
<p>But I need "first note" <strong>and</strong> "second note" in that file.
In other words I am not able to append a zip file.</p>
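<p>The closest I have come is the wrapper sketch below, which reads the current content first and passes old plus new data to <code>updateZip</code>; it works in a quick test, but it feels like a workaround:</p>
<pre><code>def appendToZip(zipname, filename, data):
    try:
        with zipfile.ZipFile(zipname) as zf:
            old = zf.read(filename).decode() + '\n'
    except KeyError:  # filename not present in the archive yet
        old = ''
    updateZip(zipname, filename, old + data)
</code></pre>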
| <python><python-zipfile> | 2023-08-04 06:52:22 | 1 | 32,554 | shantanuo |
76,833,182 | 8,353,711 | Create a DataFrame from a dictionary with list of dict's in multiple keys | <p>Instead of normalizing twice (creating two DataFrames for one dictionary) and merging them, is there a more efficient way to create a DataFrame from the data below directly?</p>
<p><strong>Data:</strong></p>
<pre><code>dummy_data = {
"key1": "val1",
"key2": "val2",
"key3list": [
{
"key3nestedkey1": "val3list1nested1",
"key3nestedkey2": "val3list1nested2"
},
{
"key3nestedkey1": "val3list2nested1",
"key3nestedkey2": "val3list2nested2"
}
],
"key4list": [
{
"key4nestedkey1": "val4list1nested1",
"key4nestedkey2": "val4list1nested2"
},
{
"key4nestedkey1": "val4list2nested1",
"key4nestedkey2": "val4list2nested2"
}
]
}
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>key3_df = pd.json_normalize(dummy_data, "key3list", ["key1", "key2"])
key4_df = pd.json_normalize(dummy_data, "key4list", ["key1", "key2"])
df = pd.merge(key3_df, key4_df, on=["key1", "key2"])
</code></pre>
<p><strong>Expected & Actual Output:</strong></p>
<pre><code>>>> df
key3nestedkey1 key3nestedkey2 key1 key2 key4nestedkey1 key4nestedkey2
0 val3list1nested1 val3list1nested2 val1 val2 val4list1nested1 val4list1nested2
1 val3list1nested1 val3list1nested2 val1 val2 val4list2nested1 val4list2nested2
2 val3list2nested1 val3list2nested2 val1 val2 val4list1nested1 val4list1nested2
3 val3list2nested1 val3list2nested2 val1 val2 val4list2nested1 val4list2nested2
</code></pre>
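<p>For reference, the only loop-free alternative I have come up with so far builds the cross product by hand; the column order may differ from the output above:</p>
<pre><code>import itertools

rows = [
    {'key1': dummy_data['key1'], 'key2': dummy_data['key2'], **k3, **k4}
    for k3, k4 in itertools.product(dummy_data['key3list'], dummy_data['key4list'])
]
df = pd.DataFrame(rows)
</code></pre>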
| <python><pandas> | 2023-08-04 05:37:28 | 0 | 5,588 | shaik moeed |
76,833,161 | 19,500,571 | Row-wise concatenation of a numpy-array | <p>I have a numpy array consisting of strings and floats, and I want to concatenate each row of this array into a single string. For this I use map, as shown here:</p>
<pre><code>import numpy as np
arr = np.array([["this", "is", "test", "nr", 1],
["this", "is", "test", "nr", 2],
["this", "is", "test", "nr", 3],
["this", "is", "test", "nr", 4]])
result = list(map(lambda x: x[0] + " " + x[1] + " " + x[2] + " " + x[3] + " " + str(x[4]), arr))  # list() forces the lazy map to run
</code></pre>
<p>My question is: is this the fastest way to do it? Or does a native numpy method exist that is faster than map?</p>
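<p>For context, these are the two alternatives I plan to benchmark against map; both still call Python-level code per row, so I am not sure either counts as truly native:</p>
<pre><code>joined = np.apply_along_axis(' '.join, 1, arr)   # numpy helper, one join per row
joined2 = [' '.join(row) for row in arr]         # plain list comprehension
</code></pre>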
| <python><numpy> | 2023-08-04 05:33:29 | 1 | 469 | TylerD |
76,833,143 | 10,262,805 | How can I split csv file read in langchain | <p>This is my setup for <code>langchain</code>:</p>
<pre><code>from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter=RecursiveCharacterTextSplitter(chunk_size=100,
chunk_overlap=20,
length_function=len)
</code></pre>
<p>Now I need to read a CSV file:</p>
<pre><code>import csv

with open("test.csv") as f:
    # test is an iterator
    test = csv.reader(f, delimiter=",")
</code></pre>
<p>This does not work, because <code>test</code> is an iterator:</p>
<pre><code># object of type '_csv.reader' has no len()
chunks=text_splitter.create_documents(test)
</code></pre>
<p><code>text_splitter.create_documents</code> accepts text. If I read a <code>.txt</code> file and pass its contents, it works. So I need to convert the <code>_csv.reader</code> object to <code>str</code>. I tried</p>
<pre><code>chunks = text_splitter.create_documents("".join(test))
</code></pre>
<p>I get</p>
<pre><code>ValueError: I/O operation on closed file.
</code></pre>
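<p>The closest I have come is materializing the rows as strings while the file is still open, assuming <code>create_documents</code> accepts a list of texts:</p>
<pre><code>with open("test.csv") as f:
    rows = [", ".join(row) for row in csv.reader(f, delimiter=",")]

chunks = text_splitter.create_documents(rows)
</code></pre>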
| <python><csv><openai-api><langchain><large-language-model> | 2023-08-04 05:28:29 | 2 | 50,924 | Yilmaz |
76,833,106 | 11,681,306 | python requirements.txt, individual lines to a target | <p>I'd like to move these lines into my requirements.txt, but I couldn't find a way to do it:</p>
<pre><code>pip install ExistingLibFromPypi==10.0.0.1 --target= /Users/myuser/Documents/github/venv/lib/python3.11/site-packages/PyU4V_versions/
pip install ExistingLibFromPypi==9.0.0.1 --target= /Users/myuser/Documents/github/venv/lib/python3.11/site-packages/PyU4V_versions/
</code></pre>
<p>The reason is that there's no parity between the two versions, so I need them both installed; there's no way around it.
I am creating a wrapper that automatically chooses the appropriate one.</p>
<p>Ideally -since I am not the only user/installer- I'd like to have those on requirements.txt, but am not sure if there's a way to achieve that: simply replacing the line</p>
<p><code>ExistingLibFromPypi==10.0.0.1</code></p>
<p>with
<code>ExistingLibFromPypi==10.0.0.1 --target= /Users/myuser/Documents/github/venv/lib/python3.11/site-packages/PyU4V_versions/</code></p>
<p>does not work (target seems to be an invalid flag).</p>
<p>I know in the worst case I can still install those manually after the requirements.txt, but ideally I'd like to have them included in the file.</p>
<p>Any help is appreciated!</p>
| <python><python-3.x><pip><requirements.txt> | 2023-08-04 05:16:32 | 1 | 309 | Fabri Ba |
76,833,005 | 1,747,834 | Why are my subprocesses running sequentially, rather than all at once? | <p>My script has two loops:</p>
<ul>
<li>in the first one multiple <code>ssh</code>-processes are launched, one for each remote machine;</li>
<li>in the second I collect and print each process' stderr, stdout, and examine its <code>returncode</code></li>
</ul>
<p>I see that they all run in sequence -- confirmed by the printed output of the <code>date</code> commands on each -- whereas I expected them to run in parallel... What am I doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>pipes = {}
myhostname = os.uname()[1].lower()
for node in Nodes:
if node.lower() == myhostname:
cmd = '/bin/bash'
else:
cmd = ('ssh -q '
'-o PasswordAuthentication=no '
'-o StrictHostKeyChecking=no '
'%s /bin/bash') % node
pipe = subprocess.Popen(cmd.split(' '),
stdin = subprocess.PIPE, stdout = subprocess.PIPE,
stderr = subprocess.PIPE, close_fds = True,
encoding = locale.getlocale()[1])
pipe.stdin.write("""
date
sleep 1
date
exit""")
pipes[node] = pipe
logging.info('%d processes spawn, collecting outputs', len(pipes))
errCount = 0
for (node, pipe) in pipes.items():
out, err = pipe.communicate()
if err:
logging.warn('%s stderr: %s', node, err.strip())
if out:
logging.info('%s stdout: %s', node, out.strip())
if pipe.returncode != 0:
errCount += 1
logging.error('Got exit code %d from %s, increasing '
'error-count to %d', pipe.returncode, node, errCount)
</code></pre>
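<p>One thing I plan to test (my own assumption, sketched below and not verified yet): flushing stdin right after writing, in case the commands sit in Python's write buffer until <code>communicate()</code> runs and are therefore delivered to each <code>ssh</code> only one at a time:</p>
<pre class="lang-py prettyprint-override"><code>    pipe.stdin.write("""
date
sleep 1
date
exit""")
    pipe.stdin.flush()  # assumption: push the commands to the child immediately instead of buffering
    pipes[node] = pipe
</code></pre>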
| <python><parallel-processing><subprocess> | 2023-08-04 04:46:15 | 3 | 4,246 | Mikhail T. |
76,832,971 | 12,883,297 | Get the total count of True and False at groupby level and total count of True only, False only and Combination of True-False from the dataframe | <p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["A",20,True],["C",21,True],["B",20,False],["A",21,False],["B",20,False],["A",20,False]],columns=["id1","id2","val1"])
</code></pre>
<pre><code>id1 id2 val1
A 20 True
C 21 True
B 20 False
A 21 False
B 20 False
A 20 False
</code></pre>
<p>I need the total count of True and False at the <code>id1</code> and <code>id2</code> groupby level.</p>
<p><strong>Expected Output1</strong></p>
<pre><code>df_out1 = pd.DataFrame([["A",20,1,1],["A",21,0,1],["B",20,0,2],["C",21,1,0]],columns=["id1","id2","Total_True","Total_False"])
</code></pre>
<pre><code>id1 id2 Total_True Total_False
A 20 1 1
A 21 0 1
B 20 0 2
C 21 1 0
</code></pre>
<p>I also need another output that tells how many <code>id1</code>/<code>id2</code> combinations have all True values, how many have all False values, and how many have both True and False values.</p>
<p><strong>Expected Output2</strong></p>
<pre><code>df_out2 = pd.DataFrame([["All_True",1],["All_False",2],["Both_TrueFalse",1]],columns=["Type","Total_id"])
</code></pre>
<pre><code>Type Total_id
All_True 1
All_False 2
Both_TrueFalse 1
</code></pre>
<p>How can I do this in pandas?</p>
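<p>For the first output, a sketch of the direction I've been exploring (untested beyond this toy frame; the named-aggregation style is an assumption on my part):</p>
<pre><code># summing booleans counts the True values; inverting first counts the False ones
counts = (
    df.groupby(['id1', 'id2'])['val1']
      .agg(Total_True='sum', Total_False=lambda s: (~s).sum())
      .reset_index()
)
</code></pre>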
| <python><python-3.x><pandas><dataframe> | 2023-08-04 04:36:12 | 2 | 611 | Chethan |
76,832,955 | 6,385,767 | Python Plotly dropdown not working in HTML, dropdown event is not getting triggered | <p>I have created a Plotly chart in Python with a dropdown and exported the chart to JSON, which I store in a database and then render in HTML. In the HTML page the dropdown within the chart does not work: when I select any dropdown value, the chart does not change according to that selection. Following is my Python Plotly code:</p>
<pre><code>import pandas as pd
import plotly.express as px
import json
# Create the data table
data = {
'Country': ['United States', 'United States', 'United States', 'United States', 'United States', 'United States',
'United States', 'United States', 'United States', 'China', 'China', 'China', 'China', 'China', 'China',
'China', 'China', 'China', 'India', 'India', 'India', 'India', 'India', 'India', 'India', 'India',
'India', 'Indonesia', 'Indonesia', 'Indonesia', 'Indonesia', 'Indonesia', 'Indonesia'],
'State': ['California', 'California', 'California', 'Texas', 'Texas', 'Texas', 'New York', 'New York', 'New York',
'Guangdong', 'Guangdong', 'Guangdong', 'Shandong', 'Shandong', 'Shandong', 'Henan', 'Henan', 'Henan',
'Uttar Pradesh', 'Uttar Pradesh', 'Uttar Pradesh', 'Maharashtra', 'Maharashtra', 'Maharashtra', 'Bihar',
'Bihar', 'Bihar', 'Jakarta', 'Jakarta', 'Jakarta', 'West Java', 'West Java', 'West Java'],
'Race': ['White', 'Hispanic', 'Asian', 'White', 'Hispanic', 'African-American', 'White', 'African-American', 'Asian',
'Han Chinese', 'Zhuang', 'Yao', 'Han Chinese', 'Hui', 'Manchu', 'Han Chinese', 'Uighur', 'Korean',
'Indo-Aryan', 'Dravidian', 'Mongoloid', 'Indo-Aryan', 'Dravidian', 'Mongoloid', 'Indo-Aryan', 'Dravidian',
'Mongoloid', 'Javanese', 'Sundanese', 'Betawi', 'Sundanese', 'Javanese', 'Betawi'],
'Population': [20000000, 10000000, 8000000, 15000000, 5000000, 3000000, 12000000, 8000000, 4000000, 100000000,
5000000, 3000000, 80000000, 6000000, 4000000, 75000000, 4000000, 3000000, 180000000, 120000000,
10000000, 100000000, 60000000, 5000000, 90000000, 75000000, 5000000, 10000000, 7000000, 4000000,
25000000, 10000000, 4000000],
'Weight %': [50, 25, 20, 60, 20, 12, 45, 30, 15, 90, 4.5, 2.5, 85, 6, 4, 88, 4.7, 3.5, 55, 37, 3, 60, 35, 3, 50,
40, 4, 40, 28, 16, 60, 25, 10]
}
df = pd.DataFrame(data)
# Create the treemap
fig = px.treemap(df, path=['Country', 'State', 'Race'], values='Population', color='Population', color_continuous_scale='RdBu', title='Population Treemap')
# Set the color bar title
fig.update_layout(coloraxis_colorbar_title='<b>Population</b>')
# Set the title
fig.update_layout(title={'text': '<b>Population by Race in Top States of Different Countries</b>', 'font': {'color': '#283347'}})
# Customize hover labels
fig.data[0].hovertemplate = '<b>%{label}</b><br>Population: %{value:,}<br>Weight %: %{customdata:.2f}%'
# Remove parent and ID fields from hover
fig.data[0].hovertemplate = fig.data[0].hovertemplate.replace('%{parent}<br>', '').replace('%{id}<br>', '')
# Display the treemap
# fig.show()
# Create dropdown options
dropdown_options = [
{'label': 'Group by Country', 'value': ['Country', 'State', 'Race']},
{'label': 'Group by Race', 'value': ['Race', 'Country', 'State']}
]
# Define a callback function to update the treemap based on dropdown selection
def update_treemap(selected_option):
    fig.update_traces(path=selected_option)
# Add dropdown menu
update_menus = [
{
'buttons': [
{'method': 'relayout', 'label': option['label'], 'args': [{'treemap.path': option['value']}]}
for option in dropdown_options
],
'direction': 'down',
'showactive': True,
'x': 0.1,
'y': 1.07
}
]
fig.update_layout(updatemenus=update_menus)
# Save the chart data as JSON
chart_data_json = fig.to_json()
#store chart_data_json to postgres
</code></pre>
<p>Then I store the JSON output in an RDBMS and render it in HTML. Following is the HTML file:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Plotly Chart</title>
<!-- Include plotly.js -->
<script src="https://cdn.plot.ly/plotly-latest.min.js"></script>
</head>
<body>
<!-- Container for the chart -->
<div id="chart-container"></div>
<!-- Load and render the chart using JavaScript -->
<script>
var chartData = '{"data":[{"branchvalues":"total","customdata":[[8000000.0],[3000000.0],[8000000.0],[4000000.0],[4000000.0],[4000000.0],[75000000.0],[60000000.0],[120000000.0],[100000000.0],[75000000.0],[80000000.0],[10000000.0],[5000000.0],[6000000.0],[90000000.0],[100000000.0],[180000000.0],[10000000.0],[10000000.0],[3000000.0],[4000000.0],[5000000.0],[5000000.0],[10000000.0],[7000000.0],[25000000.0],[4000000.0],[20000000.0],[12000000.0],[15000000.0],[3000000.0],[5000000.0],[80882352.94117647],[14842105.263157895],[92907407.4074074],[68902439.02439025],[7857142.857142857],[82575757.57575758],[9333333.333333334],[71688888.8888889],[11260869.56521739],[151290322.58064517],[19000000.0],[79057142.85714285],[115155038.75968993],[15100000.0],[12317647.05882353]],"domain":{"x":[0.0,1.0],"y":[0.0,1.0]},"hovertemplate":"\\u003cb\\u003e%{label}\\u003c\\u002fb\\u003e\\u003cbr\\u003ePopulation: %{value:,}\\u003cbr\\u003eWeight %: %{customdata:.2f}%","ids":["United States\\u002fNew York\\u002fAfrican-American","United States\\u002fTexas\\u002fAfrican-American","United States\\u002fCalifornia\\u002fAsian","United States\\u002fNew York\\u002fAsian","Indonesia\\u002fJakarta\\u002fBetawi","Indonesia\\u002fWest Java\\u002fBetawi","India\\u002fBihar\\u002fDravidian","India\\u002fMaharashtra\\u002fDravidian","India\\u002fUttar Pradesh\\u002fDravidian","China\\u002fGuangdong\\u002fHan Chinese","China\\u002fHenan\\u002fHan Chinese","China\\u002fShandong\\u002fHan Chinese","United States\\u002fCalifornia\\u002fHispanic","United States\\u002fTexas\\u002fHispanic","China\\u002fShandong\\u002fHui","India\\u002fBihar\\u002fIndo-Aryan","India\\u002fMaharashtra\\u002fIndo-Aryan","India\\u002fUttar Pradesh\\u002fIndo-Aryan","Indonesia\\u002fJakarta\\u002fJavanese","Indonesia\\u002fWest Java\\u002fJavanese","China\\u002fHenan\\u002fKorean","China\\u002fShandong\\u002fManchu","India\\u002fBihar\\u002fMongoloid","India\\u002fMaharashtra\\u002fMongoloid","India\\u002fUttar Pradesh\\u002fMongoloid","Indonesia\\u002fJakarta\\u002fSundanese","Indonesia\\u002fWest Java\\u002fSundanese","China\\u002fHenan\\u002fUighur","United States\\u002fCalifornia\\u002fWhite","United States\\u002fNew York\\u002fWhite","United States\\u002fTexas\\u002fWhite","China\\u002fGuangdong\\u002fYao","China\\u002fGuangdong\\u002fZhuang","India\\u002fBihar","United States\\u002fCalifornia","China\\u002fGuangdong","China\\u002fHenan","Indonesia\\u002fJakarta","India\\u002fMaharashtra","United States\\u002fNew York","China\\u002fShandong","United States\\u002fTexas","India\\u002fUttar Pradesh","Indonesia\\u002fWest Java","China","India","Indonesia","United States"],"labels":["African-American","African-American","Asian","Asian","Betawi","Betawi","Dravidian","Dravidian","Dravidian","Han Chinese","Han Chinese","Han Chinese","Hispanic","Hispanic","Hui","Indo-Aryan","Indo-Aryan","Indo-Aryan","Javanese","Javanese","Korean","Manchu","Mongoloid","Mongoloid","Mongoloid","Sundanese","Sundanese","Uighur","White","White","White","Yao","Zhuang","Bihar","California","Guangdong","Henan","Jakarta","Maharashtra","New York","Shandong","Texas","Uttar Pradesh","West Java","China","India","Indonesia","United 
States"],"marker":{"coloraxis":"coloraxis","colors":[8000000.0,3000000.0,8000000.0,4000000.0,4000000.0,4000000.0,75000000.0,60000000.0,120000000.0,100000000.0,75000000.0,80000000.0,10000000.0,5000000.0,6000000.0,90000000.0,100000000.0,180000000.0,10000000.0,10000000.0,3000000.0,4000000.0,5000000.0,5000000.0,10000000.0,7000000.0,25000000.0,4000000.0,20000000.0,12000000.0,15000000.0,3000000.0,5000000.0,80882352.94117647,14842105.263157895,92907407.4074074,68902439.02439025,7857142.857142857,82575757.57575758,9333333.333333334,71688888.8888889,11260869.56521739,151290322.58064517,19000000.0,79057142.85714285,115155038.75968993,15100000.0,12317647.05882353]},"name":"","parents":["United States\\u002fNew York","United States\\u002fTexas","United States\\u002fCalifornia","United States\\u002fNew York","Indonesia\\u002fJakarta","Indonesia\\u002fWest Java","India\\u002fBihar","India\\u002fMaharashtra","India\\u002fUttar Pradesh","China\\u002fGuangdong","China\\u002fHenan","China\\u002fShandong","United States\\u002fCalifornia","United States\\u002fTexas","China\\u002fShandong","India\\u002fBihar","India\\u002fMaharashtra","India\\u002fUttar Pradesh","Indonesia\\u002fJakarta","Indonesia\\u002fWest Java","China\\u002fHenan","China\\u002fShandong","India\\u002fBihar","India\\u002fMaharashtra","India\\u002fUttar Pradesh","Indonesia\\u002fJakarta","Indonesia\\u002fWest Java","China\\u002fHenan","United States\\u002fCalifornia","United States\\u002fNew York","United States\\u002fTexas","China\\u002fGuangdong","China\\u002fGuangdong","India","United States","China","China","Indonesia","India","United States","China","United States","India","Indonesia","","","",""],"values":[8000000,3000000,8000000,4000000,4000000,4000000,75000000,60000000,120000000,100000000,75000000,80000000,10000000,5000000,6000000,90000000,100000000,180000000,10000000,10000000,3000000,4000000,5000000,5000000,10000000,7000000,25000000,4000000,20000000,12000000,15000000,3000000,5000000,170000000,38000000,108000000,82000000,21000000,165000000,24000000,90000000,23000000,310000000,39000000,280000000,645000000,60000000,85000000],"type":"treemap"}],"layout":{"template":{"data":{"histogram2dcontour":[{"type":"histogram2dcontour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"choropleth":[{"type":"choropleth","colorbar":{"outlinewidth":0,"ticks":""}}],"histogram2d":[{"type":"histogram2d","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmap":[{"type":"heatmap","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmapgl":[{"type":"heatmapgl","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333
333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"contourcarpet":[{"type":"contourcarpet","colorbar":{"outlinewidth":0,"ticks":""}}],"contour":[{"type":"contour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"surface":[{"type":"surface","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"mesh3d":[{"type":"mesh3d","colorbar":{"outlinewidth":0,"ticks":""}}],"scatter":[{"fillpattern":{"fillmode":"overlay","size":10,"solidity":0.2},"type":"scatter"}],"parcoords":[{"type":"parcoords","line":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolargl":[{"type":"scatterpolargl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"bar":[{"error_x":{"color":"#2a3f5f"},"error_y":{"color":"#2a3f5f"},"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"bar"}],"scattergeo":[{"type":"scattergeo","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolar":[{"type":"scatterpolar","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"histogram":[{"marker":{"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"histogram"}],"scattergl":[{"type":"scattergl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatter3d":[{"type":"scatter3d","line":{"colorbar":{"outlinewidth":0,"ticks":""}},"marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermapbox":[{"type":"scattermapbox","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterternary":[{"type":"scatterternary","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattercarpet":[{"type":"scattercarpet","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"carpet":[{"aaxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"baxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"type":"carpet"}],"table":[{"cells":{"fill":{"color":"#EBF0F8"},"line":{"color":"white"}},"header":{"fill":{"color":"#C8D4E3"},"line":{"color":"white"}},"type":"table"}],"barpolar":[{"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"barpolar"}],"pie":[{"automargin":true,"type":"pie"}]},"layout":{"autotypenumbers":"strict","colorway":["#636efa","#EF553B","#00cc96","#ab63fa","#FFA15A","#19d3f3","#FF6692","#B6E880","#FF97FF","#FECB52"],"font":{"color":"#2a3f5f"},"hovermode":"closest","hoverlabel":{"align":"left"},"paper_bgcolor":"white","plot_bgcolor":"#E5ECF6","polar":{"bgcolor":"#E5ECF6","angularaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"radialaxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"ternary":{"bgcolor":"#E5ECF6","aaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"baxis":{"gridcolor":"white","linecolor":"white","ticks":""},"ca
xis":{"gridcolor":"white","linecolor":"white","ticks":""}},"coloraxis":{"colorbar":{"outlinewidth":0,"ticks":""}},"colorscale":{"sequential":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"sequentialminus":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"diverging":[[0,"#8e0152"],[0.1,"#c51b7d"],[0.2,"#de77ae"],[0.3,"#f1b6da"],[0.4,"#fde0ef"],[0.5,"#f7f7f7"],[0.6,"#e6f5d0"],[0.7,"#b8e186"],[0.8,"#7fbc41"],[0.9,"#4d9221"],[1,"#276419"]]},"xaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"yaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"scene":{"xaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"yaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"zaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2}},"shapedefaults":{"line":{"color":"#2a3f5f"}},"annotationdefaults":{"arrowcolor":"#2a3f5f","arrowhead":0,"arrowwidth":1},"geo":{"bgcolor":"white","landcolor":"#E5ECF6","subunitcolor":"white","showland":true,"showlakes":true,"lakecolor":"white"},"title":{"x":0.05},"mapbox":{"style":"light"}}},"coloraxis":{"colorbar":{"title":{"text":"\\u003cb\\u003ePopulation\\u003c\\u002fb\\u003e"}},"colorscale":[[0.0,"rgb(103,0,31)"],[0.1,"rgb(178,24,43)"],[0.2,"rgb(214,96,77)"],[0.3,"rgb(244,165,130)"],[0.4,"rgb(253,219,199)"],[0.5,"rgb(247,247,247)"],[0.6,"rgb(209,229,240)"],[0.7,"rgb(146,197,222)"],[0.8,"rgb(67,147,195)"],[0.9,"rgb(33,102,172)"],[1.0,"rgb(5,48,97)"]]},"legend":{"tracegroupgap":0},"title":{"text":"\\u003cb\\u003ePopulation by Race in Top States of Different Countries\\u003c\\u002fb\\u003e","font":{"color":"#283347"}},"updatemenus":[{"buttons":[{"args":[{"treemap.path":["Country","State","Race"]}],"label":"Group by Country","method":"relayout"},{"args":[{"treemap.path":["Race","Country","State"]}],"label":"Group by Race","method":"relayout"}],"direction":"down","showactive":true,"x":0.1,"y":1.07}]}}';
chartData = JSON.parse(chartData);
// Render the chart using plotly.js
Plotly.newPlot('chart-container', chartData.data, chartData.layout);
</script>
</body>
</html>
</code></pre>
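<p>While debugging I also sketched this alternative on the Python side (untested in my pipeline): pre-build one treemap trace per grouping and let the dropdown toggle trace visibility with <code>method='update'</code>, since <code>px.treemap</code> computes the ids/labels/parents at figure-creation time and <code>'relayout'</code> cannot rebuild them:</p>
<pre><code>import plotly.graph_objects as go

paths = {'Group by Country': ['Country', 'State', 'Race'],
         'Group by Race': ['Race', 'Country', 'State']}
fig = go.Figure()
for i, (label, path) in enumerate(paths.items()):
    trace = px.treemap(df, path=path, values='Population').data[0]
    trace.visible = (i == 0)  # show only the first grouping initially
    fig.add_trace(trace)
buttons = [{'label': label, 'method': 'update',
            'args': [{'visible': [j == i for j in range(len(paths))]}]}
           for i, label in enumerate(paths)]
fig.update_layout(updatemenus=[{'buttons': buttons, 'direction': 'down',
                                'x': 0.1, 'y': 1.07}])
</code></pre>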
| <javascript><python><plotly><plotly.js> | 2023-08-04 04:30:26 | 0 | 642 | Ravindra Gupta |
76,832,944 | 2,525,940 | Non-matching colors for arrows in matplotlib 3D quiver plot | <p>I'm having this weird problem with coloring the arrows on a matplotlib 3d quiver plot.</p>
<p>The following minimal example correctly colors the base (from the scatter plot), the shaft, and the head of each arrow with the same color:</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
sp = 0.501
x, y, z = np.meshgrid(
np.arange(-0.5, 1, sp), np.arange(-0.5, 1, sp), np.arange(-0.5, 1, sp)
)
X = x.ravel()
Y = y.ravel()
Z = z.ravel()
# Make some direction and size data for the arrows
C = np.random.random(X.shape)
U = Y * C
V = np.zeros_like(U)
W = np.zeros_like(U)
# Color
np.random.seed(0)
col = np.random.random(X.shape)
# Normalize
c1 = (col - col.min()) / col.ptp()
cm1 = plt.cm.hsv(c1)
# Repeat colors for each body line and two head lines
c3 = np.concatenate((c1, np.repeat(c1, 2)))
cm3 = plt.cm.hsv(c3)
fig = plt.figure()
ax3 = fig.add_subplot(111, projection="3d")
ax3.scatter(X, Y, Z, s=5, c=cm1)
ax3.quiver(X, Y, Z, U, V, W, length=0.5, colors=cm3)
plt.show()
</code></pre>
<p>And gives this plot</p>
<p><a href="https://i.sstatic.net/VCjzN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCjzN.png" alt="colors match" /></a></p>
<p>If I change the first line to <code>sp = 0.500</code>, the colors no longer match:</p>
<p><a href="https://i.sstatic.net/ujTMk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ujTMk.png" alt="colors do not match" /></a></p>
<p>In each case the same values in the <code>cm3</code> array are passed to the <code>colors</code> parameter of <code>quiver</code></p>
| <python><matplotlib><matplotlib-3d> | 2023-08-04 04:26:09 | 1 | 499 | elfnor |
76,832,790 | 1,089,161 | testing equality of Eq (not Expr) in SymPy | <p><a href="https://stackoverflow.com/users/8387458/yourhomicidalape">YourHomicidalApe</a> asked a question which is essentially this:</p>
<p>Given two strings that represent equations that are converted to SymPy <code>Eq</code> instances, how would you tell if they represent the same equations? (The "same" meaning that algebraic manipulation of each equation could be done to obtain equivalent expressions, perhaps in some canonical form.)</p>
<p>For example, the following equations should compare the same. What tools can be used to do this? It is <a href="https://stackoverflow.com/questions/37112738/sympy-comparing-expressions">understood</a> that the <code>==</code> cannot be used since that tests for structural equality and the <code>equals</code> method apparently doesn't work with instances of <code>Eq</code> unless they are trivially the same:</p>
<pre class="lang-py prettyprint-override"><code>>>> from sympy.parsing.sympy_parser import T
>>> from sympy.abc import x, y
>>> Eq(y, x).equals(Eq(x, y))
True
>>> Eq(2*y, x).equals(Eq(x/2, y))
False
</code></pre>
<p>What suggestions do you have for dealing with testing the mathematical equivalence of equations like the ones shown below?</p>
<pre class="lang-py prettyprint-override"><code>>>> from sympy.parsing import *
>>> a = parse_expr('y=x^2+.5', transformations=T[:])
>>> b = parse_expr('2*y=2x^2+1', transformations=T[:])
>>> a==b
False
>>> a.equals(b)
False
>>> e1, e2 = [i.rewrite(Add) for i in (a, b)]
>>> e1.equals(e2)
False
</code></pre>
<p>Does anyone else deal with such expressions, perhaps in the context of getting input from beginning algebra students that are to be tested against a known answer?</p>
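<p>For reference, one direction I have been playing with (a sketch; it assumes neither side-difference simplifies to zero) is to compare the <code>lhs - rhs</code> forms up to a constant multiple:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import simplify

e1 = a.lhs - a.rhs
e2 = b.lhs - b.rhs
ratio = simplify(e1 / e2)
# the two equations are equivalent if one difference is a nonzero
# constant multiple of the other
equivalent = ratio.is_constant() and not ratio.equals(0)
</code></pre>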
| <python><sympy><equality> | 2023-08-04 03:26:49 | 1 | 19,565 | smichr |
76,832,666 | 9,768,260 | How to execute a Python CLI script through a k8s DNS address | <p>Given a test k8s pod like the one below, how can I execute test.py through the k8s DNS address?</p>
<pre><code># dockerfile.test -- has test.py under the /test folder
FROM base image
RUN apt-get update
RUN apt-get install -y python3-pip
COPY . /test
WORKDIR /test
RUN pip3 install -r requirements.txt

# test.yaml -- has the DNS name http://test-api.test-system.svc.cluster.local:8080
apiVersion: v1
kind: Namespace
metadata:
  name: test-system
---
apiVersion: v1
kind: Service
metadata:
  name: test-api
  namespace: test-system
spec:
  selector:
    app: test
  type: ClusterIP
  ports:
    - name: test-api
      port: 8080
      targetPort: test-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test-system
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:                  # this level was missing; containers belong under template.spec
      containers:
        - name: test-api
          image: test-image
          ports:
            - name: test-api       # named port that the Service's targetPort references
              containerPort: 8080
</code></pre>
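<p>I realize the DNS name only resolves to the Service; something still has to listen on port 8080 inside the pod. A sketch of what I have in mind (assumes <code>flask</code> is added to requirements.txt and the Dockerfile gets a <code>CMD</code> to start this server; <code>test.main()</code> is a hypothetical entry point in test.py):</p>
<pre><code># server.py -- expose test.py over HTTP so other pods can call
# http://test-api.test-system.svc.cluster.local:8080/run
from flask import Flask
import test  # hypothetical: assumes test.py defines main()

app = Flask(__name__)

@app.route("/run")
def run():
    return str(test.main())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
</code></pre>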
| <python><kubernetes> | 2023-08-04 02:48:40 | 1 | 7,108 | ccd |
76,832,444 | 6,436,545 | Polars: aggregating a subset of rows | <p>Suppose I have a data set with a group column, subgroup column, and value column:</p>
<pre><code>import polars as pl
df = pl.DataFrame(dict(
    grp = ['A', 'A', 'A', 'B', 'B', 'B'],
    subgroup = ['x', 'x', 'y', 'x', 'x', 'y'],
    value = [1, 2, 3, 4, 5, 6]
))
┌─────┬──────────┬───────┐
│ grp ┆ subgroup ┆ value │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═════╪══════════╪═══════╡
│ A ┆ x ┆ 1 │
│ A ┆ x ┆ 2 │
│ A ┆ y ┆ 3 │
│ B ┆ x ┆ 4 │
│ B ┆ x ┆ 5 │
│ B ┆ y ┆ 6 │
└─────┴──────────┴───────┘
</code></pre>
<p>I'd like to calculate the mean <code>value</code> for each <code>group</code>, and also calculate the mean of the values where <code>subgroup</code> is "x". In polars, the most succinct way I've found to do this is:</p>
<pre><code>(
    df
    .group_by('grp')
    .agg(
        pl.col('value').mean().alias('mean_all'),
        pl.when(pl.col('subgroup') == 'x').then(pl.col('value')).mean().alias('mean_x')
    )
)
┌─────┬──────────┬────────┐
│ grp ┆ mean_all ┆ mean_x │
│ --- ┆ --- ┆ --- │
│ str ┆ f64 ┆ f64 │
╞═════╪══════════╪════════╡
│ A ┆ 2.0 ┆ 1.5 │
│ B ┆ 5.0 ┆ 4.5 │
└─────┴──────────┴────────┘
</code></pre>
<p>This works fine, but the call to <code>when...then</code> inside <code>agg</code> produces a warning about this not being a valid aggregation (even though the chain is finished with <code>mean</code>). Is there a more idiomatic or elegant way to perform this operation from inside a polars chain?</p>
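<p>For comparison, a variant I've been eyeing (a sketch; I haven't verified whether it avoids the warning) that uses <code>filter</code> inside the aggregation instead of <code>when...then</code>:</p>
<pre><code>(
    df
    .group_by('grp')
    .agg(
        pl.col('value').mean().alias('mean_all'),
        # restrict the column to the 'x' subgroup before taking the mean
        pl.col('value').filter(pl.col('subgroup') == 'x').mean().alias('mean_x'),
    )
)
</code></pre>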
| <python><python-polars> | 2023-08-04 01:31:03 | 1 | 11,997 | jdobres |
76,832,421 | 1,187,621 | Python GEKKO Using Manipulated Variables | <p>Using Python GEKKO with IPOPT for Poliastro/Astropy, I have the following:</p>
<pre><code># Manipulated variables and initial guesses
launch = m.MV(value = 2460159.5, lb = 2460159.5, ub = 2460525.5)
launch.STATUS = 1
flyby = m.MV(value = 2460424.5, lb = 2460342.5, ub = 2460525.5)
flyby.STATUS = 1
arrival = m.MV(value = 2460694.5, lb = 2460480.5, ub = 2460845.5)
arrival.STATUS = 1
</code></pre>
<p>In my calculations, I have:</p>
<pre><code># Dates
date_launchE = Time(launch, format="jd", scale="utc").tdb
date_flyby = Time(flyby, format="jd", scale="utc").tdb # Mars
date_arrivalE = Time(arrival, format="jd", scale="utc").tdb
</code></pre>
<p>However, I am getting an error on 'launch', telling me:</p>
<blockquote>
<p>ValueError: Input values did not match the format class jd: TypeError:
for jd class, input should be doubles, string, or Decimal, and second
values are only allowed for doubles.</p>
</blockquote>
<p>Does anyone have an idea of how I can access the value of 'launch' (i.e., 2460159.5) to convert the time format as needed for the subsequent code?</p>
<p>I have tried looking into <code>launch</code> with a <code>print()</code> call, but it just prints <code>p1</code>.</p>
| <python><gekko> | 2023-08-04 01:21:00 | 1 | 437 | pbhuter |
76,832,348 | 160,808 | Grpc requiring glibc 2.33 debian buster on arm cpu | <pre><code>Traceback (most recent call last):
File "/home/steven/GassistPi/src/main.py", line 25, in <module>
from google.cloud import speech
File "/home/steven/env/lib/python3.7/site-packages/google/cloud/speech.py", line 19, in <module>
from google.cloud.speech_v1 import SpeechClient
File "/home/steven/env/lib/python3.7/site-packages/google/cloud/speech_v1/__init__.py", line 17, in <module>
from google.cloud.speech_v1.gapic import speech_client
File "/home/steven/env/lib/python3.7/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 22, in <module>
import google.api_core.gapic_v1.client_info
File "/home/steven/env/lib/python3.7/site-packages/google/api_core/gapic_v1/__init__.py", line 16, in <module>
from google.api_core.gapic_v1 import config
File "/home/steven/env/lib/python3.7/site-packages/google/api_core/gapic_v1/config.py", line 23, in <module>
import grpc
File "/home/steven/env/lib/python3.7/site-packages/grpc/__init__.py", line 22, in <module>
from grpc import _compression
File "/home/steven/env/lib/python3.7/site-packages/grpc/_compression.py", line 15, in <module>
from grpc._cython import cygrpc
ImportError: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.33' not found (required by /home/steven/env/lib/python3.7/site-packages/grpc/_cython/cygrpc.cpython-37m-arm-linux-gnueabihf.so)
</code></pre>
<p>I am trying to set up the GassistPi project, but I am getting the error above when I run the main Python script.</p>
<p>The default glibc version that came with Debian Buster is 2.28. I am running this in a Python virtual environment. Is there a way I can tell the virtual environment to use the glibc I have compiled and installed in a different folder?</p>
<p>I have tested the glibc install by running the testrun script against my hello-world C program. It works.</p>
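<p>The direction I'm experimenting with (a sketch; the prefix and loader file name are from my local build and may differ) is to launch Python through the newer glibc's dynamic loader explicitly:</p>
<pre><code># assuming the new glibc was configured with --prefix=/opt/glibc-2.33
/opt/glibc-2.33/lib/ld-linux-armhf.so.3 \
    --library-path /opt/glibc-2.33/lib:/lib/arm-linux-gnueabihf \
    /home/steven/env/bin/python3 /home/steven/GassistPi/src/main.py
</code></pre>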
| <python><python-3.x><glibc> | 2023-08-04 00:52:45 | 1 | 2,311 | Ageis |
76,832,226 | 13,119,730 | Stuck at deadlock when reading stdout - Popen and subprocess | <p>I'm trying to manage a process using Python's <code>subprocess</code> module.</p>
<p>Let's say I have a Python script that runs forever, printing <code>Running x seconds.</code> every second, and after 5 seconds the app asks me whether it should continue.</p>
<p>I would like to run this process using <code>Popen</code>.</p>
<pre><code>process = subprocess.Popen(['python3', 'test.py'],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           text=True)

# Now the process is running, I want to read all the stdout
lines = []
while True:
    line = process.stdout.readline()  # when there is nothing more to read, readline() just blocks in a deadlock
    if not line:
        break  # nothing more to read
    lines.append(line)
</code></pre>
<p>But as the example shows, when reading the stdout of the running process, <code>readline()</code> blocks in a deadlock as soon as there is nothing left to read.</p>
<p>Is there any way to read the stdout output to the end without deadlocking?</p>
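<p>A sketch of one direction I'm considering (reading in a background thread and giving up after a period of silence; the 1-second timeout is my own choice):</p>
<pre><code>import queue
import threading

def _drain(pipe, q):
    # read lines until the child closes its stdout (i.e. exits)
    for line in iter(pipe.readline, ''):
        q.put(line)

q = queue.Queue()
threading.Thread(target=_drain, args=(process.stdout, q), daemon=True).start()

lines = []
while True:
    try:
        lines.append(q.get(timeout=1.0))  # stop once nothing arrives for 1s
    except queue.Empty:
        break
</code></pre>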
| <python><process><subprocess><python-multiprocessing> | 2023-08-03 23:55:33 | 0 | 387 | Jakub Zilinek |
76,832,124 | 807,797 | Where was python installed? | <p><strong>How can I find out where Python was installed on a Windows 11 machine, so that I can use that path to add Python to the PATH variable?</strong></p>
<p>The documentation I have found on this assumes that the user can already use the <code>python</code> command in the CLI. But in this case, the CLI cannot find Python yet because it has not been added to the PATH.</p>
<p>Also, I looked closely in Windows file explorer and was not able to find Python in Program Files, Program Files (x86), the root of the C drive, or any of the many other places that I looked.</p>
<p>Here are the commands that first install python and then try to check the resulting Python version.</p>
<pre><code>PS C:\Users\Administrator> C:\temp\python-3.11.4-amd64.exe /passive InstallAllUsers=0 InstallLauncherAllUsers=0 PrependPath=1 Include_test=0
PS C:\Users\Administrator> python --version
python : The term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python --version
+ ~~~~~~
+ CategoryInfo : ObjectNotFound: (python:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
PS C:\Users\Administrator>
</code></pre>
<p>The /passive flag above results in the python install GUI launching while the install happens, and then closing without giving any error message, so it seems clear that the install command is indeed running. ... If you can suggest any alterations to the command to elucidate any possible error message, I would be happy to try your suggested fully automated commands that might leave better logs.</p>
<p>This is on an Amazon EC2 instance being accessed remotely using an RDP file, if that makes any difference. This provisioning process must be completely automated.</p>
<p>PowerShell is running as administrator while invoking the above commands.</p>
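<p>For what it's worth, these are the two places I plan to look next (a sketch; per-user installs with <code>InstallAllUsers=0</code> normally land under LocalAppData, and the installer records its location in the registry per PEP 514):</p>
<pre><code># search the default per-user install location
Get-ChildItem "$env:LocalAppData\Programs\Python" -Recurse -Filter python.exe
# or read the install path the installer wrote to the registry
Get-ItemProperty 'HKCU:\Software\Python\PythonCore\3.11\InstallPath'
</code></pre>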
| <python><windows><powershell> | 2023-08-03 23:25:10 | 1 | 9,239 | CodeMed |
76,831,986 | 11,152,224 | Pyngrok: SSL certificate verify failed while downloading ngrok | <p>I want to tunnel my FastAPI server, so I installed pyngrok using Poetry and created/activated a virtual env.</p>
<p>Here is my pyproject.toml:</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.11"
fastapi = "^0.100.1"
uvicorn = "^0.23.2"
pyngrok = "^6.0.0"
httpx = "^0.24.1"
</code></pre>
<p>Then in the virtual env I run the command <code>ngrok http 9000</code> and get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 1348, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1282, in request
self._send_request(method, url, body, headers, encode_chunked)
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1328, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1277, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1037, in _send_output
self.send(msg)
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 975, in send
self.connect()
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1454, in connect
self.sock = self._context.wrap_socket(self.sock,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 1075, in _create
self.do_handshake()
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 1346, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:992)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Lib\site-packages\pyngrok\installer.py", line 117, in install_ngrok
download_path = _download_file(url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Lib\site-packages\pyngrok\installer.py", line 261, in _download_file
response = urlopen(url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 519, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 536, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 1391, in https_open
return self.do_open(http.client.HTTPSConnection, req,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sergey\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 1351, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:992)>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Scripts\ngrok.exe\__main__.py", line 7, in <module>
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Lib\site-packages\pyngrok\ngrok.py", line 527, in main
run(sys.argv[1:])
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Lib\site-packages\pyngrok\ngrok.py", line 513, in run
install_ngrok(pyngrok_config)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Lib\site-packages\pyngrok\ngrok.py", line 100, in install_ngrok
installer.install_ngrok(pyngrok_config.ngrok_path, pyngrok_config.ngrok_version)
File "C:\Users\sergey\AppData\Local\pypoetry\Cache\virtualenvs\fast-api-telegram-bot-a-NRO46l-py3.11\Lib\site-packages\pyngrok\installer.py", line 121, in install_ngrok
raise PyngrokNgrokInstallError("An error occurred while downloading ngrok from {}: {}".format(url, e))
pyngrok.exception.PyngrokNgrokInstallError: An error occurred while downloading ngrok from https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-windows-amd64.zip: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:992)>
</code></pre>
<p>I tried opening the link <a href="https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-windows-amd64.zip" rel="nofollow noreferrer">https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-windows-amd64.zip</a> from the traceback in a browser and it downloaded the zip; I can open it and run ngrok.exe, but that didn't help. Is there any way to solve this?</p>
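<p>One workaround I'm considering (a sketch; the binary path is hypothetical) is pointing pyngrok at the manually downloaded ngrok.exe so it skips its own download step:</p>
<pre><code>from pyngrok import conf, ngrok

conf.get_default().ngrok_path = r"C:\tools\ngrok\ngrok.exe"  # hypothetical path to the unzipped binary
tunnel = ngrok.connect(9000, "http")
print(tunnel.public_url)
</code></pre>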
| <python><python-poetry><pyngrok> | 2023-08-03 22:40:56 | 1 | 569 | WideWood |
76,831,889 | 22,212,435 | Is it possible to make the return statement dependent on the type of the assignment target in Python? | <p>It is hard to formulate a question… I want to create a function that can return, for example, a string OR a list, where what is returned depends on the type of the variable the value is assigned to later. For example, consider the following code attempt:</p>
<pre><code>def do_something():
    some_list = [1, 4, 7, 8]
    some_text = "Hello"
    return some_text or some_list
text: str = do_something() # want text to become some_text (i.e. "Hello")
List: list[int] = do_something() # List should take some_list value (so [1, 4, 7, 8])
print(text, List) # want it to print < Hello [1, 4, 7, 8] >
</code></pre>
<p>I know that the code above will not work, for obvious reasons. I also know that I can return a list or a tuple in the return statement, like <code>return some_text, some_list</code>. But then I would need to state which value to take, and I really don't want to write something like <code>text: str = do_something()[0]</code>.</p>
<p>That is because I want the function to return a main value (the function's main task) plus something additional that is less important and fully optional. Using a tuple, to me, implies both are equally important, but they are not; by default the function should return the first value. Can I do that in Python, or is it completely impossible?</p>
| <python><return> | 2023-08-03 22:13:27 | 2 | 610 | Danya K |
76,831,835 | 3,247,006 | How to override the templates of `django-two-factor-auth` for Django Admin? | <p>I'm trying to override the templates of <a href="https://github.com/jazzband/django-two-factor-auth" rel="nofollow noreferrer">django-two-factor-auth</a> for Django Admin, but I don't know how to do it. (I don't have a frontend in Django; my frontend is <code>Next.js</code> and my backend is Django.)</p>
<p>This is my django project:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| |-settings.py
| └-urls.py
|-my_app1
| |-models.py
| |-admin.py
| └-urls.py
└-templates
</code></pre>
<p>And, how I set <code>django-two-factor-auth</code> following <a href="https://django-two-factor-auth.readthedocs.io/en/stable/installation.html" rel="nofollow noreferrer">the doc</a> is first, I installed <code>django-two-factor-auth[phonenumbers]</code>:</p>
<pre class="lang-none prettyprint-override"><code>pip install django-two-factor-auth[phonenumbers]
</code></pre>
<p>Then, set these apps below to <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#installed-apps" rel="nofollow noreferrer">INSTALLED_APPS</a>, <code>OTPMiddleware</code> to <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-MIDDLEWARE" rel="nofollow noreferrer">MIDDLEWARE</a>, <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#login-url" rel="nofollow noreferrer">LOGIN_URL</a> and <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#login-redirect-url" rel="nofollow noreferrer">LOGIN_REDIRECT_URL</a> in <code>core/settings.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "core/settings.py"
INSTALLED_APPS = [
...
'django_otp', # Here
'django_otp.plugins.otp_static', # Here
'django_otp.plugins.otp_totp', # Here
'two_factor' # Here
]
...
MIDDLEWARE = [
...
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django_otp.middleware.OTPMiddleware', # Here
...
]
LOGIN_URL = 'two_factor:login' # Here
# this one is optional
LOGIN_REDIRECT_URL = 'admin:index' # Here
...
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
BASE_DIR / 'templates',
],
...
},
]
</code></pre>
<p>Then, set the path below to <code>core/urls.py</code>:</p>
<pre class="lang-py prettyprint-override"><code># "core/urls.py"
...
from two_factor.urls import urlpatterns as tf_urls
urlpatterns = [
...
path('', include(tf_urls)) # Here
]
</code></pre>
<p>Finally, migrate:</p>
<pre class="lang-none prettyprint-override"><code>python manage.py migrate
</code></pre>
<p>And this is the <strong>Login</strong> page:</p>
<pre class="lang-none prettyprint-override"><code>http://localhost:8000/account/login/
</code></pre>
<p><a href="https://i.sstatic.net/EO9yS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EO9yS.png" alt="enter image description here" /></a></p>
<p>My questions:</p>
<ol>
<li>How can I override the templates of <code>django-two-factor-auth</code> for Django Admin? (A sketch of what I'd try is shown after this list.)</li>
<li>Are there any recommended customizations for the templates of <code>django-two-factor-auth</code> for Django Admin?</li>
</ol>
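<p>For question 1, this is the layout I would expect to work based on Django's normal template-override rules (a sketch; the template names are taken from the <code>two_factor</code> package source, and whether the admin flow picks them up is exactly what I'm unsure about):</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
|-my_app1
└-templates
  └-two_factor
    |-_base.html        (overrides two_factor/_base.html)
    └-core
      └-login.html      (overrides two_factor/core/login.html)
</code></pre>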
| <python><django><django-templates><django-admin><django-two-factor-auth> | 2023-08-03 22:02:09 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,831,562 | 13,838,385 | If I run a Python script from Powershell via the call operator ($val = & python myscript.py) - How can I pass in an array to $val from Python? | <p>Here's a sample script:</p>
<pre><code>import os
import sys
import pathlib
import json
from contextlib import redirect_stderr
from fontTools import ttLib
from fontmeta import FontMeta
# Check for commandline argument
if len(sys.argv) == 1:
    print('No argument was supplied.')
    exit(0)
fontfile = sys.argv[1]
def font_name(font_path, name_idx):
    font = ttLib.TTFont(font_path, ignoreDecompileErrors=True)
    with redirect_stderr(None):
        names = font['name'].names
    details = {}
    for x in names:
        if x.langID == 0 or x.langID == 1033:
            try:
                details[x.nameID] = x.toUnicode()
            except UnicodeDecodeError:
                details[x.nameID] = x.string.decode(errors='ignore')
    # details[4] = Full Name
    # details[1] = Family Name
    # details[2] = Style Name
    return details[name_idx]
meta_instance = FontMeta(fontfile)
metadata = meta_instance.get_full_data()
fontFullName = font_name(fontfile,4)
fontFamily = font_name(fontfile,1)
fontStyle = font_name(fontfile,2)
fontVers = metadata[5]['value']
fontVers = fontVers.replace('Version ', "v")
fontLang = metadata[1]['language']['value']
fontUniqueID = metadata[3]['value']
fontPostscriptName = metadata[6]['value']
fontPostscriptEncoding = metadata[6]['encoding']['value']
fontDesigner = metadata[9]['value']
fontLicenseURL = metadata[14]['value']
print('Full Name: ' + fontFullName)
print('Family: ' + fontFamily)
print('Style: ' + fontStyle)
print('Version: ' + fontVers)
print('Language: ' + fontLang)
print('UniqueID: ' + fontUniqueID)
print('License URL: ' + fontLicenseURL)
print('Font Designer: ' + fontDesigner)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Full Name: Sharp Sans Bold
Family: Sharp Sans
Style: Bold
Version: v1.001
Language: English/United States
UniqueID: 1.001;2016;SHRP;SharpSans-Bold
License URL: http://www.sharptype.co
Font Designer: Lucas Sharp
</code></pre>
<p>ps1:</p>
<pre><code>& "D:\Dev\Python\00 VENV\FontTools\Scripts\Activate.ps1"
$val = python "D:\Dev\Python\Font Scripts\GetFontInfo.py" "D:\Fonts\00 Test\SharpSans-Bold.otf"
Write-Host "`$val:" $val -ForegroundColor Green
</code></pre>
<p>Right now the Python code just prints values, and my PS script echoes the printed values as strings. Is there a way to pass these values to PowerShell other than just printing them, i.e. as an array?</p>
<p>Or should I return JSON and parse it in PowerShell?</p>
<p>Any help appreciated.</p>
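<p>The direction I'm leaning (a sketch, untested): emit a single JSON object from Python and parse it with <code>ConvertFrom-Json</code>; the <code>-join</code> guards against Windows PowerShell 5.1 handing ConvertFrom-Json one line at a time:</p>
<pre><code>import json

# replace the print statements with one JSON object
print(json.dumps({
    'FullName': fontFullName,
    'Family': fontFamily,
    'Style': fontStyle,
    'Version': fontVers,
}))
</code></pre>
<pre><code>$raw = & python "D:\Dev\Python\Font Scripts\GetFontInfo.py" "D:\Fonts\00 Test\SharpSans-Bold.otf"
$val = ($raw -join "`n") | ConvertFrom-Json
$val.FullName
</code></pre>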
| <python><powershell> | 2023-08-03 20:57:47 | 2 | 577 | fmotion1 |
76,831,492 | 2,525,857 | Autoreload in IPython | <p>I have used autoreload before and it worked. For some reason, I am getting an error in my new setup. Maybe I am missing something.</p>
<pre><code>import utility as ut
%load_ext autoreload
%autoreload 2
</code></pre>
<p>When I run the above, I get the following error.</p>
<pre><code>The autoreload module is not an IPython extension.
UsageError: Line magic function `%autoreload` not found.
</code></pre>
| <python><autoreload> | 2023-08-03 20:42:11 | 0 | 351 | deb |
76,831,432 | 16,155,502 | Setting local time and date conversion in Python | <p>I run the following code on my local system and on a server and get different results.</p>
<p>How can I set the time zone so that I get the same result regardless of the system or server settings?</p>
<pre><code>from datetime import datetime

print(datetime(2023, 6, 6, 0, 0).timestamp())
</code></pre>
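<p>A sketch of one option (assuming UTC is an acceptable reference zone): anchoring the datetime to an explicit time zone makes <code>timestamp()</code> independent of the machine's local settings:</p>
<pre><code>from datetime import datetime, timezone

# tz-aware datetimes convert to the same epoch value on any machine
print(datetime(2023, 6, 6, 0, 0, tzinfo=timezone.utc).timestamp())
</code></pre>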
| <python><django> | 2023-08-03 20:28:12 | 1 | 351 | ali |
76,831,383 | 11,140,420 | SMTPServerDisconnected in Google App Engine Standard (but it works in Flexible) | <p>I am trying to switch my Django (Python) app from GAE flexible environment to Standard.</p>
<p>Everything works fine except sending emails via Gmail SMTP: the same code works in the flexible environment but fails in the standard one.</p>
<p>My app.yaml file includes:</p>
<pre><code> inbound_services:
- mail
- mail_bounce
</code></pre>
<p>My settings are:</p>
<pre><code> EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp-relay.gmail.com'
EMAIL_HOST_USER = 'hidden@hidden.com'
EMAIL_HOST_PASSWORD = 'hidden'
DEFAULT_FROM_EMAIL = 'hidden@hidden.com'
SERVER_EMAIL = 'hidden@hidden.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
</code></pre>
<p>The strange thing is that SMTP with the same email account works fine in another project of mine on GAE standard with the same settings.</p>
| <python><django><google-cloud-platform><google-app-engine><smtp> | 2023-08-03 20:17:37 | 0 | 711 | Vit Amin |
76,831,373 | 10,627,413 | Wrapping each item in my dataset in double quotes is not carrying through to the CSV | <p>This is driving me nuts.</p>
<p><strong>Input:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">id</th>
<th style="text-align: right;">name</th>
<th style="text-align: right;">last_name</th>
<th style="text-align: right;">datetime</th>
<th style="text-align: right;">lat</th>
<th style="text-align: right;">long</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">Stack</td>
<td style="text-align: right;">Overflow</td>
<td style="text-align: right;">2024-01-01</td>
<td style="text-align: right;">40.21324</td>
<td style="text-align: right;">39.6969</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p><strong>Desired output:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">id</th>
<th style="text-align: right;">name</th>
<th style="text-align: right;">last_name</th>
<th style="text-align: right;">datetime</th>
<th style="text-align: right;">lat</th>
<th style="text-align: right;">long</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">"1"</td>
<td style="text-align: right;">"Stack"</td>
<td style="text-align: right;">"Overflow"</td>
<td style="text-align: right;">"2024-01-01"</td>
<td style="text-align: right;">"40.21324"</td>
<td style="text-align: right;">"39.6969"</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>Issue: I am able to wrap each item in double quotes in Jupyter Notebook, but when I write to CSV I lose the double quotes.
I have a dataset with a bunch of different fields.
For example:</p>
<pre><code>123, Peter, 40.83429, 2023-10-23
</code></pre>
<p>I want to wrap every field with a double quote:</p>
<pre><code> "123", "Peter", "40.83429", "2023-10-23"
</code></pre>
<p>My code is below, along with what happens when I read the CSV back in Jupyter Notebook.</p>
<p>Code to add double quotes:</p>
<pre><code>all_cols = list(df)
for i in all_cols:
    df[i] = '"' + df[i] + '"'
</code></pre>
<p>Code to write that to the CSV:</p>
<pre><code>df.to_csv('test1.csv')
</code></pre>
<p>However, when I open my CSV file it's STILL not wrapped in double quotes. What am I doing wrong? Thank you SO MUCH in advance; this is driving me absolutely nuts.</p>
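<p>For reference, the variant I'm about to try (a sketch): drop the manual quoting loop and let pandas quote every field on output instead:</p>
<pre><code>import csv

# QUOTE_ALL tells the csv writer to wrap every field in double quotes
df.to_csv('test1.csv', index=False, quoting=csv.QUOTE_ALL)
</code></pre>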
| <python><list> | 2023-08-03 20:14:56 | 2 | 366 | Maggie Liu |
76,831,243 | 13,838,385 | Python keeps telling me that I'm passing in more than one argument, when I'm not? | <p>Python Version: 3.11.4</p>
<p>I'm using a VENV called <code>FontTools</code> with all my dependencies installed.</p>
<p>Script:</p>
<pre><code>import os.path
import sys
from fontTools import ttLib
from fontmeta import FontMeta
if len(sys.argv) > 1:
    print("Usage: Only one argument required: [FONT]")
    exit(1)
fontfile = sys.argv[1]
meta_instance = FontMeta(fontfile)
metadata = meta_instance.get_full_data()
print(metadata)
</code></pre>
<p>Usage:
<code>D:\Dev\Python\Font Scripts> python getFontMeta.py "SharpSans-Bold.otf"</code></p>
<p>Returns:
<code>Usage: Only one argument required: [FONT]</code></p>
<p>I inspected <code>sys.argv[0]</code>, and it's set to the name of my script: <code>getFontMeta.py</code>.
<code>sys.argv[1]</code> is set to the font I passed in.</p>
<p>Why is this happening? This script worked before, is this new behavior in Python v3.11.4?</p>
<p>Thanks for any help.</p>
| <python><python-3.x> | 2023-08-03 19:51:59 | 0 | 577 | fmotion1 |
76,831,055 | 11,770,286 | How to change the bar width while keeping an even space around all bars | <p>I'm trying to make a bar chart where there is equal space around all bars using <code>pandas</code>. When I don't specify a <code>width</code> this works fine out of the box. The problem is that when I specify the <code>width</code>, the margin on the left and right of the chart doesn't change, which makes the space around the left-most and right-most bar bigger than for the others. I've tried adjusting the margin with <code>ax.margins(x=0)</code> but this has no effect. How can I keep an even space for all bars?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('TkAgg')
print(matplotlib.__version__) # 3.5.3
print(pd.__version__) # 1.3.5
def grid_in_between(ax):
    """Create gridlines in between (major) data axis values using minor gridlines

    Args:
        ax: Matplotlib axes
    """
    ticks = ax.get_xticks()
    ax.set_xticks(np.array(ticks[:-1]) + np.diff(ticks) / 2, minor=True)
    ax.grid(visible=True, axis='x', which='minor')
    ax.grid(visible=False, axis='x', which='major')
    ax.tick_params(which='minor', length=0, axis='x')
df = pd.DataFrame({'value': range(8)})
fig, ax = plt.subplots(1, 2)
df.plot.bar(ax=ax[0])
df.plot.bar(ax=ax[1], width=.95)
grid_in_between(ax[0])
grid_in_between(ax[1])
ax[0].set_title('Evenly spaced')
ax[1].set_title('Parameter width\nmakes first and last space bigger')
ax[1].margins(x=0) # no effect
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/EvaMD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EvaMD.png" alt="example" /></a></p>
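<p>A sketch of the workaround I'm testing: since the bars sit at integer positions with width <code>w</code>, the interior gap is <code>1 - w</code>, so setting xlim to <code>(w/2 - 1, n - w/2)</code> should reproduce that gap at both edges (my own derivation, so please double-check):</p>
<pre class="lang-py prettyprint-override"><code>def equalize_edge_space(ax, n_bars, width):
    # interior gap between adjacent bars is (1 - width); match it at both edges
    ax.set_xlim(width / 2 - 1, n_bars - width / 2)

equalize_edge_space(ax[1], len(df), 0.95)
</code></pre>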
| <python><pandas><matplotlib><bar-chart> | 2023-08-03 19:20:52 | 2 | 3,271 | Wouter |