QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,166,767
| 1,659,362
|
Improve Pandas performance for very large dataframes?
|
<p>I have a few Pandas dataframes with several million rows each. The dataframes have columns containing JSON objects, each with 100+ fields. I have a set of 24 functions that run sequentially on the dataframes, process the JSON (for example, compute some string distance between two fields in the JSON) and return a JSON with some new fields added. After all 24 functions execute, I get a final JSON which is then usable for my purposes.</p>
<p>I am wondering what the best ways are to speed up processing for this dataset. A few things I have considered and read up on:</p>
<ul>
<li>It is tricky to vectorize because many operations are not as straightforward as "subtract this column's values from another column's values".</li>
<li>I read up on some of the Pandas documentation and a few options indicated are Cython (may be tricky to convert the string edit distance to Cython, especially since I am using an external Python package) and Numba/JIT (but this is mentioned to be best for numerical computations only).</li>
<li>Possibly controlling the number of threads could be an option. The 24 functions can mostly operate without any dependencies on each other.</li>
</ul>
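Since the 24 functions are said to be mostly independent, one option worth sketching is a process pool. The function names and JSON fields below are hypothetical placeholders, not the actual pipeline:

```python
# Minimal sketch: run independent per-record transforms in a process pool.
# The two functions stand in for the real 24; names and fields are made up.
from concurrent.futures import ProcessPoolExecutor

def add_name_length(record: dict) -> dict:
    # Stand-in for one JSON-processing step.
    record["name_length"] = len(record.get("name", ""))
    return record

def add_upper_name(record: dict) -> dict:
    # Another stand-in step, independent of the first.
    record["name_upper"] = record.get("name", "").upper()
    return record

def process_records(records, funcs, workers=4):
    # Apply each function across all records in parallel;
    # chunksize amortizes inter-process overhead on large inputs.
    for func in funcs:
        with ProcessPoolExecutor(max_workers=workers) as pool:
            records = list(pool.map(func, records, chunksize=1000))
    return records

if __name__ == "__main__":
    data = [{"name": "alice"}, {"name": "bob"}]
    out = process_records(data, [add_name_length, add_upper_name], workers=2)
```

Whether this helps depends on how much work each function does per row relative to pickling cost; it is a sketch, not a measured recommendation.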
|
<python><pandas><dataframe><performance>
|
2023-01-19 00:47:16
| 1
| 549
|
hologram
|
75,166,765
| 3,247,006
|
"formfield_overrides" vs "formfield_for_dbfield()" vs "form" vs "get_form()" to change the width of the field in Django Admin
|
<p>For example, there is <strong><code>Person</code> model</strong> below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
class Person(models.Model):
name = models.CharField(max_length=20)
age = models.PositiveSmallIntegerField()
def __str__(self):
return self.name
</code></pre>
<p>Then, if using <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.formfield_overrides" rel="nofollow noreferrer">formfield_overrides</a>, <a href="https://github.com/django/django/blob/4593bc5da115f2e808a803a4ec24104b6c7a6152/django/contrib/admin/options.py#L149" rel="nofollow noreferrer">formfield_for_dbfield()</a>, <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.InlineModelAdmin.form" rel="nofollow noreferrer">form</a> or <a href="https://docs.djangoproject.com/en/4.1/ref/contrib/admin/#django.contrib.admin.ModelAdmin.get_form" rel="nofollow noreferrer">get_form()</a> as shown below:</p>
<p><code>formfield_overrides</code>:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
from django.db import models
from django import forms
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
formfield_overrides = { # Here
models.PositiveSmallIntegerField: {
'widget': forms.NumberInput(attrs={'style': 'width:50ch'})
},
}
</code></pre>
<p><code>formfield_for_dbfield()</code>:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin): # Here
def formfield_for_dbfield(self, db_field, request, **kwargs):
field = super().formfield_for_dbfield(db_field, request, **kwargs)
if db_field.name == 'age':
field.widget.attrs['style'] = 'width: 50ch'
return field
</code></pre>
<p><code>form</code>:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
from django import forms
class PersonForm(forms.ModelForm):
age = forms.CharField(
widget=forms.NumberInput(attrs={'style':'width:50ch'})
)
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
form = PersonForm # Here
</code></pre>
<p><code>get_form()</code>:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Person
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin): # Here
def get_form(self, request, obj=None, **kwargs):
form = super().get_form(request, obj, **kwargs)
form.base_fields['age'].widget.attrs['style'] = 'width: 50ch;'
return form
</code></pre>
<p>Then, I can change the width of <strong><code>age</code> field</strong> on <strong>"Add" and "Change" pages</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/7Z78I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7Z78I.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Ekpq7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ekpq7.png" alt="enter image description here" /></a></p>
<p>Now, are there any differences between <code>formfield_overrides</code>, <code>formfield_for_dbfield()</code>, <code>form</code> and <code>get_form()</code> to change the width of the field in Django Admin?</p>
|
<python><django><django-models><django-admin><django-widget>
|
2023-01-19 00:47:12
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,166,575
| 687,739
|
Binary incompatibility after Cythoning on Mac M2
|
<p>I have a custom module that includes a <code>setup.py</code> file.</p>
<p>I run <code>python setup.py build_ext --inplace</code> in a Conda-built virtual environment on an M2 Mac.</p>
<p>I get the following error at runtime (truncated):</p>
<p><code>ValueError: libs.portfolio_manager.lib.adjustment.Adjustment size changed, may indicate binary incompatibility. Expected 56 from C header, got 24 from PyObject</code></p>
<p>The directory structure is something like this:</p>
<pre><code>/src
/libs
/portfolio_manager
/lib/
__init__.py
adjustment.pyx
adjustment.pxd
setup.py
</code></pre>
<p>Here are the relevant parts of the <code>setup.py</code> file:</p>
<pre><code>import os
import versioneer
from setuptools import Extension, setup
class LazyBuildExtCommandClass(dict):
"""
Lazy command class that defers operations requiring Cython and numpy until
they've actually been downloaded and installed by setup_requires.
"""
def __contains__(self, key):
return key == "build_ext" or super(LazyBuildExtCommandClass, self).__contains__(
key
)
def __setitem__(self, key, value):
if key == "build_ext":
raise AssertionError("build_ext overridden!")
super(LazyBuildExtCommandClass, self).__setitem__(key, value)
def __getitem__(self, key):
if key != "build_ext":
return super(LazyBuildExtCommandClass, self).__getitem__(key)
import numpy
from Cython.Distutils import build_ext as cython_build_ext
# Cython_build_ext isn't a new-style class in Py2.
class build_ext(cython_build_ext, object):
"""
Custom build_ext command that lazily adds numpy's include_dir to
extensions.
"""
def build_extensions(self):
"""
Lazily append numpy's include directory to Extension includes.
This is done here rather than at module scope because setup.py
may be run before numpy has been installed, in which case
importing numpy and calling `numpy.get_include()` will fail.
"""
numpy_incl = numpy.get_include()
for ext in self.extensions:
ext.include_dirs.append(numpy_incl)
super(build_ext, self).build_extensions()
return build_ext
ext_modules = [
Extension(
name="src.libs.portfolio_manager.lib.adjustment",
sources=["src/libs/portfolio_manager/lib/adjustment.pyx"],
define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
),
]
setup(
ext_modules=ext_modules,
cmdclass=LazyBuildExtCommandClass(versioneer.get_cmdclass()),
package_data={
root.replace(os.sep, "."): ["*.pyi", "*.pyx", "*.pxi", "*.pxd"]
for root, dirnames, filenames in os.walk("src/libs/portfolio_manager")
if "__pycache__" not in root
},
)
</code></pre>
<p>Everything I read about this error has to do with an unsuppressed warning in a past version of Numpy.</p>
<p>What is this error caused by and how do I fix it?</p>
|
<python><cython>
|
2023-01-19 00:14:56
| 0
| 15,646
|
Jason Strimpel
|
75,166,572
| 9,470,078
|
Numpy split that returns an ndarray
|
<p>Is there a method analogous to <code>numpy.split</code> that returns a <code>numpy.ndarray</code> instead of a <code>list</code>? Assuming that the array splits evenly is fine (to prevent jagged arrays).</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>x = np.arange(9.0)
print(np.split(x, 3))
# [array([0., 1., 2.]), array([3., 4., 5.]), array([6., 7., 8.])]
print(np.???(x, 3))
# array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]])
</code></pre>
<p>I'd rather not stack the list given from <code>np.split</code> together for performance reasons.</p>
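For an even split, one candidate is a plain reshape, which returns a view rather than copying (a sketch of one option, not necessarily the only one):

```python
import numpy as np

x = np.arange(9.0)
# reshape returns a view on the original buffer when the split is even,
# so no per-chunk copies are made (unlike stacking np.split's list).
result = x.reshape(3, -1)
# array([[0., 1., 2.],
#        [3., 4., 5.],
#        [6., 7., 8.]])
```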
|
<python><numpy>
|
2023-01-19 00:14:29
| 1
| 1,157
|
Monolith
|
75,166,505
| 9,543,330
|
Keep the same name until a value is True in another pandas column
|
<p>I have a dataframe with 3 columns: <code>session_id</code>, <code>name</code>, <code>reset_flag</code>.</p>
<p>I need to make a new column, <code>new_name</code>, where the new name will be set to the first <code>name</code> where <code>reset_flag=True</code>, and then it will continue as that name WITHIN that session, until there is a new <code>reset_flag</code>.</p>
<p>I'm not really sure of the best way to approach this.</p>
<p>EDIT: I thought of a way to do it with df.iterrows(), by storing into a list and then appending, but it seems very bulky. Is there a more efficient 'pandas' way?</p>
<p>Sample expected output</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>session_id</th>
<th>name</th>
<th>reset_flag</th>
<th>new_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_1</td>
<td>TRUE</td>
<td>some_name_1</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_1</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_1</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_2</td>
<td>TRUE</td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_2</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_2</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_3</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_3</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_4</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_4</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_4</td>
<td></td>
<td>some_name_2</td>
</tr>
<tr>
<td>06c97a-bc7-6cc-29f-65978ee8d</td>
<td>some_name_5</td>
<td>TRUE</td>
<td>some_name_5</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_1</td>
<td>TRUE</td>
<td>some_name_1</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_1</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_1</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_2</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_2</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_2</td>
<td></td>
<td>some_name_1</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_3</td>
<td>TRUE</td>
<td>some_name_3</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_3</td>
<td></td>
<td>some_name_3</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_4</td>
<td></td>
<td>some_name_3</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_4</td>
<td></td>
<td>some_name_3</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_4</td>
<td></td>
<td>some_name_3</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_5</td>
<td>TRUE</td>
<td>some_name_5</td>
</tr>
<tr>
<td>3943d5-e1e-63e-6c4-aa1899bd9</td>
<td>some_name_6</td>
<td></td>
<td>some_name_5</td>
</tr>
</tbody>
</table>
</div>
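One vectorized sketch: keep <code>name</code> only where the flag is set, then forward-fill within each session. This assumes <code>reset_flag</code> holds booleans (blank rows as NaN/None); the sample data here is abbreviated:

```python
import pandas as pd

# Abbreviated stand-in for the table above.
df = pd.DataFrame({
    "session_id": ["a", "a", "a", "a", "b", "b"],
    "name": ["n1", "n1", "n2", "n3", "n1", "n2"],
    "reset_flag": [True, None, True, None, True, None],
})

# Keep `name` only on flagged rows, then carry the last flagged name
# forward within each session.
df["new_name"] = (
    df["name"]
    .where(df["reset_flag"] == True)
    .groupby(df["session_id"])
    .ffill()
)
```

This avoids iterrows entirely; each session restarts the fill because the groupby boundary blocks carry-over between sessions.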
|
<python><pandas>
|
2023-01-19 00:03:53
| 3
| 1,094
|
yulGM
|
75,166,456
| 9,749,124
|
How to set colours with big contrast on Matplotlib scatter plot
|
<p>I want to plot a scatter plot of my clusters. I have done it with this:</p>
<pre><code>figure(figsize=(22, 25), dpi = 80)
plt.scatter(reduced_features[:, 0], reduced_features[:,1], c = kmeans.predict(vec_matrix_pca), s = 7)
plt.scatter(reduced_cluster_centers[:, 0], reduced_cluster_centers[:, 1], marker = 'x', s = 120, c = 'r')
plt.grid()
plt.show()
</code></pre>
<p>This is the result:</p>
<p><a href="https://i.sstatic.net/F0ujD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0ujD.jpg" alt="enter image description here" /></a></p>
<p>The thing is, as you can see, it is very hard to tell the clusters apart.
How can I select a different color map, or colours with higher contrast?
FYI, I have 150 clusters.</p>
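With 150 clusters no qualitative palette has enough distinct entries, but one sketch is to sample a continuous colormap and shuffle the samples so neighbouring cluster ids don't get near-identical shades (an illustration with synthetic data, not the questioner's actual arrays):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

n_clusters = 150
rng = np.random.default_rng(0)

# Sample 150 evenly spaced colours from a continuous map, then shuffle
# so clusters with adjacent ids are not adjacent hues.
colors = plt.cm.hsv(np.linspace(0, 1, n_clusters, endpoint=False))
rng.shuffle(colors)
cmap = matplotlib.colors.ListedColormap(colors)

# Synthetic stand-ins for reduced_features and the predicted labels.
labels = rng.integers(0, n_clusters, size=1000)
points = rng.normal(size=(1000, 2))
plt.scatter(points[:, 0], points[:, 1], c=labels, cmap=cmap, s=7)
```

Even so, 150 genuinely distinguishable colours is near the limit of what a single plot can convey; faceting or highlighting a few clusters at a time may work better.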
|
<python><matplotlib>
|
2023-01-18 23:55:13
| 1
| 3,923
|
taga
|
75,166,348
| 640,558
|
group by removing column I'd like to group by in pandas
|
<p>I'm trying to take a list of lists, load it into pandas, and sum it up by one value.</p>
<p>My list of list:</p>
<pre><code>[['she', 'walked', 4],
['she', 'my', 3],
['she', 'dog', 2],
['she', 'to', 1],
['sniffed', 'I', 5],
['sniffed', 'walked', 4],
['sniffed', 'my', 3],
['sniffed', 'dog', 2],
['sniffed', 'to', 1]]
</code></pre>
<p>I create the dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(distanceList, columns = ['word1', 'word2', 'weight'])
</code></pre>
<p>The result looks weird (the extra left-hand column is just pandas' default integer index):</p>
<pre><code> word1 word2 weight
0 I walked 5
1 I my 4
2 I dog 3
3 I to 2
4 I the 1
... ... ... ...
1135 I walked 5
1136 I my 4
1137 I dog 3
1138 I to 2
1139 I the 1
1140 rows × 3 columns
</code></pre>
<p>But when I sum it, it seems to concatenate the words.
I used this:</p>
<pre><code>df.groupby('weight').sum()
word1 word2
weight
1 Iwalkedmydogtotheparkandshesniffedgrassthenrol... thethethethethetotototototototototototototothe...
2 Iwalkedmydogtotheparkandshesniffedgrassthenrol... totototodogdogdogdogdogdogdogdogdogdogdogdogdo...
3 Iwalkedmydogtotheparkandshesniffedgrassthenrol... dogdogdogmymymymymymymymymymymymymymymymydogdo...
4 Iwalkedmydogtotheparkandshesniffedgrassthenrol... mymywalkedwalkedwalkedwalkedwalkedwalkedwalked...
5 Iwalkedmydogtotheparkandshesniffedgrassthenrol... walkedIIIIIIIIIIIIIIIIIIwalkedIIIIIIIIIIIIIIII...
</code></pre>
<p>What I want is if I have:</p>
<pre><code>dog, cat, 1
dog, cat, 5
dog, rabbit, 1
</code></pre>
<p>then the result is:</p>
<pre><code>dog, cat, 6
dog, rabbit, 1
</code></pre>
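Grouping by the two word columns, rather than by <code>weight</code>, gives that aggregation; a minimal sketch on the dog/cat example:

```python
import pandas as pd

df = pd.DataFrame(
    [["dog", "cat", 1], ["dog", "cat", 5], ["dog", "rabbit", 1]],
    columns=["word1", "word2", "weight"],
)

# Group on the key columns and sum the numeric column;
# as_index=False keeps word1/word2 as regular columns.
result = df.groupby(["word1", "word2"], as_index=False)["weight"].sum()
#   word1   word2  weight
# 0   dog     cat       6
# 1   dog  rabbit       1
```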
|
<python><pandas>
|
2023-01-18 23:35:13
| 1
| 26,167
|
Lostsoul
|
75,166,262
| 875,295
|
When is the fork of preload done in gunicorn?
|
<p>I'm trying to understand something that I don't think is covered in the gunicorn docs.
For the <code>preload_app</code> feature, the docs say <code>Load application code before the worker processes are forked.</code>, but when is the process actually forked, and how does gunicorn know when it should fork?</p>
<p>For instance, if my fastapi code looks like this with a <code>main.py</code> file:</p>
<pre><code># some code goes here
...
app = FastAPI(title=x)
@app.get("/")
async def root():
return "ok"
</code></pre>
<p>and if I start it like so:</p>
<pre><code>gunicorn main:app --host 0.0.0.0 --port 8080 --workers 4 --preload
</code></pre>
<p>At what point will the initial process actually be forked?
Is it before the creation of the "app" object?
Is it at the end of the execution of the <code>main.py</code> file?</p>
|
<python><fastapi><gunicorn>
|
2023-01-18 23:20:01
| 0
| 8,114
|
lezebulon
|
75,166,187
| 306,296
|
How do I determine that a folder really exists in a Streamed Google Drive?
|
<p>I have a Google Drive that had a large number of changes accidentally made to its file and folder structure externally, which I am trying to undo. I am trying to run a Python script on a Windows machine (disconnected from the Internet, so not currently syncing said Google Drive) that has a previous version of the folder structure preserved, to get a list of all folders and files in the drive. However, the issue is that most files and folders in the Drive are "streamed", which is to say they are not really on my machine, and only exist as markers.</p>
<p>I tried a number of methods, including built-in OS commands (like dir), to list the folder structure, but they all hang up on specific folders. I attempted to use Python <code>os.walk()</code> as well as <code>os.scandir()</code>, but the same thing happens. For certain folders, even though every method confirms their existence (<code>os.path.exists()</code>, etc), when they are scanned, the script hangs up, and after a long while, times out with "[WinError 2] The system cannot find the file specified".</p>
<p>Is there a way to tell, without a timeout, whether these folders <strong>really</strong> exist before I get stuck scanning them?</p>
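There may be no reliable way to distinguish the placeholders up front, but one workaround sketch is to attempt each listing in a helper thread with a short deadline, so hanging folders are skipped instead of blocking the scan (caveat: the stuck thread itself cannot be killed and may linger until the process exits):

```python
import concurrent.futures
import os

def listdir_with_timeout(path, timeout=5.0):
    """Return os.listdir(path), or None if it hangs past `timeout` seconds."""
    ex = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return ex.submit(os.listdir, path).result(timeout=timeout)
    except (concurrent.futures.TimeoutError, OSError):
        return None  # treat as "not really there"
    finally:
        ex.shutdown(wait=False)  # don't block on a stuck listing thread
```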
|
<python><os.walk><scandir>
|
2023-01-18 23:06:00
| 0
| 484
|
allenrabinovich
|
75,166,140
| 14,355,404
|
How to connect to Azure Devops git repo through databricks?
|
<p>I want to create a Python notebook in Databricks that will do the following:</p>
<ol>
<li>Connect to Azure Devops Git repo</li>
<li>Make a couple of changes in a YAML file</li>
<li>Commit the changes in the master branch</li>
<li>Push the changes back to repo</li>
</ol>
<p>I tried the below code to achieve step 1:</p>
<pre><code>import git
repo_url = <repo url>
pat_token = <token>
dbfs_path ='<dbfs_path>'
repo = git.Repo.clone_from(repo_url, dbfs_path, branch='master')
</code></pre>
<p>But I got the below error:</p>
<pre><code>GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git clone --branch=master -v <repo_url> <dbfs_path>
stderr: 'Cloning into <dbfs_path>...
fatal: could not read Username for <repo_url>: No such device or address
</code></pre>
<p>I am not sure where to provide the username.</p>
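GitPython shells out to <code>git</code>, which tries to prompt for credentials it cannot obtain in a non-interactive notebook. A commonly suggested workaround is to embed the PAT in the clone URL itself; a sketch with placeholder values (org, project, and repo names are assumptions to fill in):

```python
# Build an authenticated Azure DevOps clone URL (all values are placeholders).
pat_token = "<token>"
org, project, repo = "<org>", "<project>", "<repo>"

# Azure DevOps accepts a PAT embedded in the URL's credential position.
repo_url = f"https://{pat_token}@dev.azure.com/{org}/{project}/_git/{repo}"

# Then, with GitPython (not executed here, since it needs network access):
# import git
# git.Repo.clone_from(repo_url, "<dbfs_path>", branch="master")
```

Note that embedding a token in a URL can leak it into logs and remote config; a credential helper or Databricks Repos integration may be safer in practice.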
|
<python><git><azure-devops><databricks-repos>
|
2023-01-18 22:58:39
| 0
| 389
|
user19930511
|
75,165,837
| 7,984,318
|
pandas how to count boolean column value and the distinct count of other columns at the same time
|
<p>I have a DataFrame <code>df</code>, which you can create by running the following code:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
month status_review supply_review case_id
2023-01-01 False False 12
2023-01-01 True True 33
2022-12-01 False True 45
2022-12-01 True True 45
2022-12-01 False False 44
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
</code></pre>
<p>How can I count how many <code>status_review</code> and <code>supply_review</code> values are True in each month, and also the number of distinct cases in each month?</p>
<p>The output should look like the following:</p>
<pre><code> month # of true status_review # of true supply_review # of case
2023-01-01 1 1 2
2022-12-01 1 2 2
</code></pre>
<p>I have tried both:</p>
<pre><code>df.groupby("month").sum()
df.groupby('month').agg('sum')
</code></pre>
<p>But the output is:</p>
<pre><code> status_review supply_review case_id
month
2022-12-01 1 2 134
2023-01-01 1 1 45
</code></pre>
<p>The <code>case_id</code> sum is not what I want; I want the distinct count of <code>case_id</code>. How can I achieve the desired output?</p>
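One sketch using named aggregation, which lets each output column use a different function (summing booleans counts the True values, and <code>nunique</code> gives the distinct count):

```python
import pandas as pd
from io import StringIO

data = """
month status_review supply_review case_id
2023-01-01 False False 12
2023-01-01 True True 33
2022-12-01 False True 45
2022-12-01 True True 45
2022-12-01 False False 44
"""
df = pd.read_csv(StringIO(data.strip()), sep=r"\s+")

out = df.groupby("month", as_index=False).agg(
    true_status_review=("status_review", "sum"),   # True counts as 1
    true_supply_review=("supply_review", "sum"),
    n_cases=("case_id", "nunique"),                # distinct, not summed
)
```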
|
<python><python-3.x><pandas><dataframe>
|
2023-01-18 22:18:10
| 2
| 4,094
|
William
|
75,165,794
| 3,728,901
|
Cannot install keras on Windows OS
|
<p>Environment: Windows 11 x64, running CMD as Administrator with the command:</p>
<pre><code>pip install keras
</code></pre>
<p>Console log</p>
<pre><code>Microsoft Windows [Version 10.0.22621.1105]
(c) Microsoft Corporation. All rights reserved.
C:\Windows\System32>conda install -c conda-forge tensorflow
Collecting package metadata (current_repodata.json): failed
CondaSSLError: OpenSSL appears to be unavailable on this machine. OpenSSL is required to
download and install packages.
Exception: HTTPSConnectionPool(host='conda.anaconda.org', port=443): Max retries exceeded with url: /conda-forge/win-64/current_repodata.json (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
C:\Windows\System32>pip install keras
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/keras/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/keras/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/keras/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/keras/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/keras/
Could not fetch URL https://pypi.org/simple/keras/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/keras/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /keras/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /keras/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /keras/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /keras/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /keras/
Could not fetch URL https://pypi.ngc.nvidia.com/keras/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.ngc.nvidia.com', port=443): Max retries exceeded with url: /keras/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: Could not find a version that satisfies the requirement keras (from versions: none)
ERROR: No matching distribution found for keras
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
Could not fetch URL https://pypi.ngc.nvidia.com/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.ngc.nvidia.com', port=443): Max retries exceeded with url: /pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
C:\Windows\System32>
</code></pre>
<p><a href="https://i.sstatic.net/qtDh3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qtDh3.png" alt="enter image description here" /></a></p>
<p>How can I install keras successfully in this context?</p>
|
<python><tensorflow><keras><pip>
|
2023-01-18 22:12:16
| 1
| 53,313
|
Vy Do
|
75,165,781
| 3,763,616
|
How to round numbers in place in a string in python
|
<p>I'd like to take some numbers that are in a string in Python, round them to <strong>2 decimal places</strong> in place, and return the string. So, for example, if there is:</p>
<pre><code>"The values in this string are 245.783634 and the other value is: 25.21694"
</code></pre>
<p>I'd like to have the string read:</p>
<pre><code>"The values in this string are 245.78 and the other value is: 25.22"
</code></pre>
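A regex-substitution sketch: find each decimal literal and reformat the match in place (this assumes plain positive decimals like those in the example; scientific notation or signs would need a richer pattern):

```python
import re

def round_numbers_in_string(text: str, places: int = 2) -> str:
    # Replace each decimal literal with its rounded form, leaving the
    # surrounding text untouched.
    return re.sub(
        r"\d+\.\d+",
        lambda m: f"{float(m.group()):.{places}f}",
        text,
    )

s = "The values in this string are 245.783634 and the other value is: 25.21694"
round_numbers_in_string(s)
# 'The values in this string are 245.78 and the other value is: 25.22'
```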
|
<python><rounding>
|
2023-01-18 22:09:16
| 4
| 489
|
Drthm1456
|
75,165,770
| 5,032,387
|
Beta distribution with bounds at [0.1, 0.5]
|
<p>I'd like to construct a beta distribution where the mu and sigma are 0.28 and 0.003, respectively, and the distribution is bound at [0.1, 0.5].</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
from scipy import stats
mu = 0.28
stdev = 0.003
lb = 0.1
ub = 0.5
def calc_alpha(x, y):
res = x * ((x*(1-x) / y**2)-1)
return res
def calc_beta(x, y):
res = (1 - x) * ((x*(1-x) / y**2)-1)
return res
alpha_ = calc_alpha(mu, stdev)
beta_ = calc_beta(mu, stdev)
x = np.linspace(lb, ub, 100000)
y = stats.beta(alpha_, beta_, loc = lb, scale = ub).pdf(x)
fig = px.line(x=x, y=y)
fig.show()
</code></pre>
<p>This seems to work. However, as a test, I sample from the same distribution, and I calculate the mean and standard deviation of the sample and get different values than what I started out with.</p>
<p>Also, the min and max of these values aren't in the range I want, so I'm pretty sure that I'm not using <code>loc</code> and <code>scale</code> correctly.</p>
<pre><code>beta_rands = stats.beta(alpha_, beta_, loc = lb, scale = ub).rvs(1000000)
beta_rands.mean()  # 0.24
beta_rands.std()   # 0.0014
beta_rands.min()   # 0.232
beta_rands.max()   # 0.247
</code></pre>
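A likely fix sketch: for a beta distribution on [lb, ub], <code>scale</code> should be the width <code>ub - lb</code> (not <code>ub</code>), and mu/sigma must be mapped onto the unit interval before computing alpha and beta:

```python
from scipy import stats

mu, stdev = 0.28, 0.003
lb, ub = 0.1, 0.5
scale = ub - lb  # width of the support, not its upper end

# Express the target moments on the standard [0, 1] beta support.
mu_n = (mu - lb) / scale
sd_n = stdev / scale

# Standard moment-matching for the beta distribution.
common = mu_n * (1 - mu_n) / sd_n**2 - 1
alpha_ = mu_n * common
beta_ = (1 - mu_n) * common

dist = stats.beta(alpha_, beta_, loc=lb, scale=scale)
# dist.mean() ≈ 0.28, dist.std() ≈ 0.003, support is [0.1, 0.5]
```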
|
<python><scipy><beta-distribution>
|
2023-01-18 22:07:20
| 2
| 3,080
|
matsuo_basho
|
75,165,751
| 6,282,633
|
How to safely cast a python variable to a literal type?
|
<p>Say I have an arbitrary value, how do I check that the value is a valid value for a given literal type?</p>
<p>Some explanation and examples of what I expected:</p>
<pre class="lang-py prettyprint-override"><code>KnownFormats = Literal["json", "py", "txt"]
def do_something(format: KnownFormats): ...
def is_known_format(format: Any): ...
</code></pre>
<p>I expected this to work, similar to the <code>isinstance</code> method:</p>
<pre class="lang-py prettyprint-override"><code>value = str() # some runtime value
# similar to: if isinstance(value, KnownFormats):
if is_known_format(value):
# This is not allowed:
# Type "str" cannot be assigned to type "KnownFormats"
do_something(value)
</code></pre>
<p>Or something like this:</p>
<pre class="lang-py prettyprint-override"><code>def cast_format(str: Any) -> Optional[KnownFormats]: ...
unknown: None = cast_format("what?")
json_format: Literal["json"] = cast_format("json")
</code></pre>
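The first shape can be sketched with <code>typing.get_args</code> plus a <code>TypeGuard</code> (in <code>typing</code> from Python 3.10; <code>typing_extensions</code> backports it for older versions):

```python
from typing import Any, Literal, get_args

try:  # TypeGuard lives in typing from Python 3.10 onward
    from typing import TypeGuard
except ImportError:
    from typing_extensions import TypeGuard

KnownFormats = Literal["json", "py", "txt"]

def is_known_format(value: Any) -> TypeGuard[KnownFormats]:
    # get_args(KnownFormats) == ("json", "py", "txt")
    return value in get_args(KnownFormats)

def do_something(format: KnownFormats) -> str:
    return f"handling {format}"

value: Any = "json"
if is_known_format(value):
    # Inside this branch, type checkers narrow `value` to KnownFormats.
    do_something(value)
```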
|
<python><python-typing>
|
2023-01-18 22:04:25
| 1
| 674
|
Michael Chen
|
75,165,745
| 1,330,719
|
Cannot determine if type of field in a Pydantic model is of type List
|
<p>I am trying to automatically convert a Pydantic model to a DB schema. To do that, I am recursively looping through a Pydantic model's fields to determine the type of field.</p>
<p>As an example, I have this simple model:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from pydantic import BaseModel
class TestModel(BaseModel):
tags: List[str]
</code></pre>
<p>I am recursing through the model using the <code>__fields__</code> property as described here: <a href="https://docs.pydantic.dev/usage/models/#model-properties" rel="nofollow noreferrer">https://docs.pydantic.dev/usage/models/#model-properties</a></p>
<p>If I do <code>type(TestModel).__fields__['tags']</code> I see:</p>
<pre class="lang-py prettyprint-override"><code>ModelField(name='tags', type=List[str], required=True)
</code></pre>
<p>I want to programmatically check if the <code>ModelField</code> type has a <code>List</code> origin. I have tried the following, and none of them work:</p>
<ul>
<li><code>type(TestModel).__fields__['tags'].type_ is List[str]</code></li>
<li><code>type(TestModel).__fields__['tags'].type_ == List[str]</code></li>
<li><code>typing.get_origin(type(TestModel).__fields__['tags'].type_) is List</code></li>
<li><code>typing.get_origin(type(TestModel).__fields__['tags'].type_) == List</code></li>
</ul>
<p>Frustratingly, this does return <code>True</code>:</p>
<ul>
<li><code>type(TestModel).__fields__['tags'].type_ is str</code></li>
</ul>
<p>What is the correct way for me to confirm a field is a <code>List</code> type?</p>
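A likely explanation sketch: in Pydantic v1, <code>ModelField.type_</code> is the *inner* element type (hence <code>str</code> comparing equal), while <code>outer_type_</code> keeps <code>List[str]</code>; and <code>get_origin</code> returns the runtime class <code>list</code>, never <code>typing.List</code>. The typing half can be shown standalone (the Pydantic field access is left as a comment since it depends on the installed version):

```python
from typing import List, get_origin

# With Pydantic v1, this would be (assumption, version-dependent):
#   field = TestModel.__fields__["tags"]
#   annotation = field.outer_type_   # List[str]; field.type_ is just str
annotation = List[str]

# get_origin unwraps the generic alias to the runtime class `list`.
origin = get_origin(annotation)
assert origin is list          # this is the comparison that succeeds
assert origin is not List      # typing.List itself is never the origin
```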
|
<python><python-typing><pydantic>
|
2023-01-18 22:03:55
| 2
| 1,269
|
rbhalla
|
75,165,736
| 3,728,901
|
Collecting package metadata (current_repodata.json): failed
|
<p>Environment: Windows 11 x64, run CMD as Administrator:</p>
<pre><code>conda install -c conda-forge tensorflow
</code></pre>
<p>Error</p>
<pre><code>Microsoft Windows [Version 10.0.22621.1105]
(c) Microsoft Corporation. All rights reserved.
C:\Windows\System32>conda install -c conda-forge tensorflow
Collecting package metadata (current_repodata.json): failed
CondaSSLError: OpenSSL appears to be unavailable on this machine. OpenSSL is required to
download and install packages.
Exception: HTTPSConnectionPool(host='conda.anaconda.org', port=443): Max retries exceeded with url: /conda-forge/win-64/current_repodata.json (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
C:\Windows\System32>
</code></pre>
<p><a href="https://i.sstatic.net/ps4vi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ps4vi.png" alt="enter image description here" /></a></p>
<p>How can I install tensorflow successfully with conda?</p>
|
<python><tensorflow><anaconda><conda>
|
2023-01-18 22:02:39
| 0
| 53,313
|
Vy Do
|
75,165,690
| 3,611,472
|
Keep the sum of two objects updated as one (or both) objects gets updated
|
<p>I want to create a class in Python that implements the <code>__add__</code> method, allowing two objects of the same class to be summed. Let's call this class <code>Indicator</code> and the two objects to sum <code>ind1</code> and <code>ind2</code>. The <code>Indicator</code> object has only one property, <code>elements</code>, which is a dictionary of integer values.</p>
<p>My implementation of <code>__add__</code> combines the <code>elements</code> properties of the two objects, summing the values that share a key.</p>
<pre><code>from __future__ import annotations
import copy
class Indicator:
def __init__(self, elements={}):
self.elements = elements
def __add__(self, other: Indicator):
new = copy.deepcopy(self)
        new.elements = {k: self.elements.get(k, 0) + other.elements.get(k, 0) for k in set(self.elements) | set(other.elements)}
return new
ind1 = Indicator({1:1,2:2,3:3})
ind2 = Indicator({1:1,2:2})
new = ind1 + ind2
print('ind1: ',ind1.elements)
print('ind2: ',ind2.elements)
print(new.elements) # {1: 2, 2: 4, 3: 3}
</code></pre>
<p>I would like <code>__add__</code> to return an object whose <code>elements</code> property gets updated as one or both objects in the summation get updated along the code flow.</p>
<p>For example,</p>
<pre><code>ind1.elements[4] = 4
print(new.elements) # I would like this to be {1: 2, 2: 4, 3: 3, 4:4}
ind1.elements[1] = 3
print(new.elements) # I would like this to be {1: 4, 2: 4, 3: 3, 4:4}
</code></pre>
<p>How can I do it?</p>
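One design sketch that avoids manual syncing altogether: have <code>__add__</code> return a lazy view whose <code>elements</code> is recomputed from its operands on every access (the <code>SumView</code> helper class is an invented name for illustration, not the only possible design):

```python
from __future__ import annotations

class Indicator:
    def __init__(self, elements: dict | None = None):
        # Avoid a shared mutable default argument.
        self.elements = {} if elements is None else elements

    def __add__(self, other: Indicator) -> SumView:
        return SumView(self, other)

class SumView(Indicator):
    """Lazy sum: recomputes its elements from the operands on each access,
    so mutations to either operand are always reflected."""

    def __init__(self, left: Indicator, right: Indicator):
        self._left, self._right = left, right

    @property
    def elements(self) -> dict:
        l, r = self._left.elements, self._right.elements
        return {k: l.get(k, 0) + r.get(k, 0) for k in set(l) | set(r)}

ind1 = Indicator({1: 1, 2: 2, 3: 3})
ind2 = Indicator({1: 1, 2: 2})
new = ind1 + ind2
ind1.elements[4] = 4
# new.elements now reflects the change: {1: 2, 2: 4, 3: 3, 4: 4}
```

Because a `SumView` is itself an `Indicator`, sums chain (`(ind1 + ind2) + ind3`) without the bookkeeping lists in the EDIT; the trade-off is that `elements` is recomputed on every read.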
<p><strong>EDIT</strong></p>
<p>First of all, let me thank all the users who posted a comment/answer. Following the suggestions given in the comments and answers, I came up with the following solution. The idea is to add two lists as properties of <code>Indicator</code>: <code>self.adds</code> and <code>self.linked</code>.</p>
<ul>
<li><p>The list <code>self.adds</code> collects the addends of the summation. It gets filled up when <code>__add__</code> is called. So, in the example below, <code>ind1.adds is []</code> and <code>ind2.adds is []</code> since both objects don't arise from a sum. On the contrary, <code>new.adds is [ind1,ind2]</code></p>
</li>
<li><p>The list <code>self.linked</code> collects all those object that needs to be updated whenever <code>self</code> gets updated. In the example below, <code>ind1.linked is [new]</code> and <code>ind2.linked is [new]</code>.</p>
</li>
</ul>
<p>I am not completely satisfied with this solution. For example, it fails to work if we sum up three objects and then modify one of them. I can try to fix the code, but I am wondering if I am doing something unconventional. Any thoughts? The code is the following</p>
<pre><code>from __future__ import annotations
import copy

class Indicator:
    def __init__(self, elements=None):
        if elements is None:
            self._elements = {}
        else:
            self._elements = elements
        self.adds = []
        self.linked = []

    @property
    def elements(self):
        return self._elements

    @elements.setter
    def elements(self, value):
        self._elements = value
        for i in range(len(self.linked)):
            el = self.linked[i]
            el.update()

    def update(self):
        summation = self.adds[0]
        for obj in self.adds[1:]:
            summation = summation.__add__(obj)
        self._elements = summation.elements

    def __add__(self, other: Indicator):
        new = copy.deepcopy(self)
        self.linked.append(new)
        other.linked.append(new)
        new.adds = [self, other]
        new._elements = {k: self.elements.get(k, 0) + other.elements.get(k, 0)
                         for k in set(self.elements) | set(other.elements)}
        return new

ind1 = Indicator({1: 1, 2: 2, 3: 3})
ind2 = Indicator({1: 1, 2: 2})
new = ind1 + ind2
print('ind1: ', ind1.elements)
print('ind2: ', ind2.elements)
print(new.elements)  # {1: 2, 2: 4, 3: 3}
ind1.elements = {0: 0, 1: 3}
print('Updating ind1: ', new.elements == (ind1 + ind2).elements)
ind2.elements = {0: 0, 7: 9}
print('Updating ind2: ', new.elements == (ind1 + ind2).elements)
</code></pre>
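For what it's worth, one conventional alternative to bookkeeping the `adds`/`linked` lists is to make the sum a lazy view that recomputes from its addends on every access, so chained sums of any length stay consistent automatically. A sketch (the `LazySum` name is my own, not from any library):

```python
class Indicator:
    """Holds a dict of integer values; '+' returns a live view of the sum."""
    def __init__(self, elements=None):
        self.elements = {} if elements is None else elements

    def __add__(self, other):
        return LazySum(self, other)


class LazySum:
    """A view over any number of Indicators; recomputed on each access."""
    def __init__(self, *addends):
        self.addends = addends

    def __add__(self, other):
        # Summing a view with another object just extends the addend list,
        # so three-way (or longer) sums keep working.
        return LazySum(*self.addends, other)

    @property
    def elements(self):
        total = {}
        for ind in self.addends:
            for k, v in ind.elements.items():
                total[k] = total.get(k, 0) + v
        return total


ind1 = Indicator({1: 1, 2: 2, 3: 3})
ind2 = Indicator({1: 1, 2: 2})
new = ind1 + ind2
print(new.elements)   # {1: 2, 2: 4, 3: 3}
ind1.elements[4] = 4  # mutating an addend is reflected in the view
print(new.elements)   # {1: 2, 2: 4, 3: 3, 4: 4}
```

The trade-off is that `new.elements` is recomputed on every access instead of being cached, but there is no state to keep in sync.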
|
<python><class>
|
2023-01-18 21:55:44
| 3
| 443
|
apt45
|
75,165,684
| 13,218,664
|
Is there a way to connect to the database after the initial connection is made using mysql-connector?
|
<p>I am trying to connect to a MYSQL instance using <code>mysql.connector</code> for python. Once the initial connection is established, I want to connect to the database as input by the user.
Here is my implementation:</p>
<pre class="lang-py prettyprint-override"><code>...
import mysql.connector as sqlcon
connection = sqlcon.connect(
user=conn_info['user'], password=conn_info['password'], host=conn_info['host'], auth_plugin='mysql_native_password')
#conn_info is a dictionary holding the values
cursor = connection.cursor()
db_name = input("Enter db name")
...
</code></pre>
<p>Is there a way to connect to <code>db_name</code> after this step?</p>
|
<python><mysql><mysql-connector-python>
|
2023-01-18 21:55:08
| 1
| 342
|
ag2byte
|
75,165,452
| 5,623,899
|
VSCode and Jupyter Notebooks on WSL2. Previewing a notebook and then closing the window results in a "unsaved changes" warning. What is this?
|
<p>VSCode and Jupyter Notebooks on WSL2. Previewing a notebook and then closing the window results in a "unsaved changes" warning. What is this?</p>
<p>What did I do?</p>
<ul>
<li>open a project with vs code <code>code /path/to/project</code></li>
<li><code>git pull</code> to make sure everything is up to date</li>
<li>use file explore to find a <code>.ipynb</code> file</li>
<li>click once on a notebook file
<ul>
<li>file opens with name in italic <em>file.ipynb</em></li>
<li>file then changes to straight with unsaved changes notifier <code>file.ipynb *</code></li>
</ul>
</li>
<li>click the "x" close button for the file</li>
<li>"Do you want to save changes"</li>
</ul>
<p>I didn't change anything. What is this and how do I fix it?</p>
|
<python><visual-studio-code><jupyter-notebook>
|
2023-01-18 21:25:41
| 0
| 5,218
|
SumNeuron
|
75,165,431
| 12,436,050
|
Regex to extract substring from pandas DataFrame column
|
<p>I have following column in a DataFrame.</p>
<pre class="lang-py prettyprint-override"><code>col1
['SNOMEDCT_US:32113001', 'UMLS:C0265660']
['UMLS:C2674738', 'UMLS:C2674739']
['UMLS:C1290857', 'SNOMEDCT_US:118930001', 'UMLS:C123455']
</code></pre>
<p>I would like to extract the value after UMLS: and store it in another column.
I am trying the following lines of code, but I am not getting the expected output.</p>
<pre class="lang-py prettyprint-override"><code>df['col1'].str.extract(r'\['.*UMLS:(.*)]')
</code></pre>
<p>The expected output is:</p>
<pre class="lang-py prettyprint-override"><code>col1 col2
['SNOMEDCT_US:32113001', 'UMLS:C0265660'] C0265660
['UMLS:C2674738', 'UMLS:C2674739'] C2674738, C2674739
['UMLS:C1290857', 'SNOMEDCT_US:118930001', 'UMLS:C123455'] C1290857, C123455
</code></pre>
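A sketch of one approach: `str.findall` with `str.join` collects every UMLS code per row (`extract` only returns the first capture), and `astype(str)` makes it work whether the column holds real Python lists or their string form:

```python
import pandas as pd

df = pd.DataFrame({
    "col1": [
        ["SNOMEDCT_US:32113001", "UMLS:C0265660"],
        ["UMLS:C2674738", "UMLS:C2674739"],
        ["UMLS:C1290857", "SNOMEDCT_US:118930001", "UMLS:C123455"],
    ]
})

# findall returns every match per row; join turns the list into "a, b"
df["col2"] = df["col1"].astype(str).str.findall(r"UMLS:(\w+)").str.join(", ")
print(df["col2"].tolist())  # ['C0265660', 'C2674738, C2674739', 'C1290857, C123455']
```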
|
<python><pandas><string><list><dataframe>
|
2023-01-18 21:22:46
| 2
| 1,495
|
rshar
|
75,165,383
| 15,781,591
|
How make stacked bar chart from dataframe in python
|
<p>I have the following dataframe:</p>
<pre><code> Color Level Proportion
-------------------------------------
0 Blue 1 0.1
1 Blue 2 0.3
2 Blue 3 0.6
3 Red 1 0.2
4 Red 2 0.5
5 Red 3 0.3
</code></pre>
<p>Here I have 2 color categories, where each color category has 3 levels, and each entry has a proportion; the proportions sum to 1 for each color category. I want to make a stacked bar chart from this dataframe that has 2 stacked bars, one for each color category. Within each of those stacked bars will be the proportion for each level, all summing to 1. So while the bars will be "stacked" differently, the complete bars will all have the same length of 1.</p>
<p>I have tried this:</p>
<pre><code>df.plot(kind='bar', stacked=True)
</code></pre>
<p>I then get this stacked bar chart, which is not what I want:</p>
<p><a href="https://i.sstatic.net/NH01h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NH01h.png" alt="enter image description here" /></a></p>
<p>I want 2 stacked bars, and so a stacked bar for "Blue" and a stacked bar for "Red", where these bars are "stacked" by the proportions, with the colors of these stacks corresponding to each level. And so both of these bars would be of length 1 along the x-axis, which would be labelled "proportion". How can I fix my code to create this stacked bar chart?</p>
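A sketch of the usual fix: pivot to one row per color and one column per level, then let pandas stack the columns (shown with `barh` so the proportion runs along the x-axis; the `Agg` backend line is only there for headless runs):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line when running interactively
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "Color": ["Blue", "Blue", "Blue", "Red", "Red", "Red"],
    "Level": [1, 2, 3, 1, 2, 3],
    "Proportion": [0.1, 0.3, 0.6, 0.2, 0.5, 0.3],
})

# One row per color, one column per level -> each bar stacks to 1.
wide = df.pivot(index="Color", columns="Level", values="Proportion")
ax = wide.plot(kind="barh", stacked=True)
ax.set_xlabel("Proportion")
plt.tight_layout()
```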
|
<python><pandas><matplotlib><bar-chart>
|
2023-01-18 21:16:41
| 2
| 641
|
LostinSpatialAnalysis
|
75,165,351
| 6,342,337
|
How can I return progress_hook for yt_dlp using FastAPI to end user?
|
<p>Relevant portion of my code looks something like this:</p>
<pre><code>@directory_router.get("/youtube-dl/{relative_path:path}", tags=["directory"])
def youtube_dl(relative_path, url, name=""):
    """
    Download
    """
    relative_path, _ = set_path(relative_path)
    logger.info(f"{DATA_PATH}{relative_path}")
    if name:
        name = f"{DATA_PATH}{relative_path}/{name}.%(ext)s"
    else:
        name = f"{DATA_PATH}{relative_path}/%(title)s.%(ext)s"
    ydl_opts = {
        "outtmpl": name,
        # "quiet": True
        "logger": logger,
        "progress_hooks": [yt_dlp_hook],
        # "force-overwrites": True
    }
    with yt.YoutubeDL(ydl_opts) as ydl:
        try:
            ydl.download([url])
        except Exception as exp:
            logger.info(exp)
            return str(exp)
</code></pre>
<p>I am using this webhook/endpoint to allow an Angular app to accept url/name input and download a file to a folder. I am able to log (e.g. with <code>logger.info</code>) the values from the yt_dlp_hook, something like this:</p>
<pre><code>def yt_dlp_hook(download):
    """
    download Hook
    Args:
        download (_type_): _description_
    """
    global TMP_KEYS
    if download.keys() != TMP_KEYS:
        logger.info(f'Status: {download["status"]}')
        logger.info(f'Dict Keys: {download.keys()}')
        TMP_KEYS = download.keys()
    logger.info(download)
</code></pre>
<p>Is there a way to stream a string of relevant variables like ETA, download speed etc. etc. to the front end? Is there a better way to do this?</p>
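One pattern worth sketching (not wired into your app, so names like `progress_q` and `sse_events` are my own): have the hook push trimmed status dicts onto a queue, and expose a second endpoint that streams them as Server-Sent Events — in FastAPI that generator would be wrapped in `StreamingResponse(sse_events(), media_type="text/event-stream")`. Everything below is stdlib, with the hook calls simulated:

```python
import json
import queue

progress_q = queue.Queue()  # holds status dicts; None is the stop sentinel

def yt_dlp_hook(download):
    # Keep only the fields the front end cares about.
    progress_q.put({
        "status": download.get("status"),
        "eta": download.get("eta"),
        "speed": download.get("speed"),
    })
    if download.get("status") == "finished":
        progress_q.put(None)  # tell the stream to stop

def sse_events():
    """Yield Server-Sent-Event frames until the sentinel arrives."""
    while True:
        item = progress_q.get()
        if item is None:
            break
        yield f"data: {json.dumps(item)}\n\n"

# Simulated hook calls, roughly as yt-dlp would make them during a download:
yt_dlp_hook({"status": "downloading", "eta": 12, "speed": 1024.0})
yt_dlp_hook({"status": "finished", "eta": 0, "speed": None})
frames = list(sse_events())
print(frames[0].strip())  # data: {"status": "downloading", "eta": 12, "speed": 1024.0}
```

On the Angular side an `EventSource` (or `fetch` with a streaming reader) can consume those frames and update an ETA/speed display. For multiple concurrent downloads you would need one queue per download task rather than a module-level one.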
|
<python><fastapi><yt-dlp>
|
2023-01-18 21:12:49
| 1
| 1,837
|
ScipioAfricanus
|
75,165,283
| 1,148,979
|
Modbus home assistant and python struct (little endian, big endian, negative values from uint16)
|
<p>I have a question regarding to modbus settings. I have read the documentation carefully, tried to search some topics, but unfortunately I did not find an answer to my problem.</p>
<p>I have a heat pump, which is able to communicate through modbus. In the past without HA I had my own application on ESP8266, reading the data, uploading them, etc. Now I would like to move it into HA. I found the modbus protocol is implemented in HA, which is great.</p>
<p>Now, in my custom app I had to read the registry and modify the respond for each value as the device has MSB implementation, let me provide an example:</p>
<p>From the device documentation, the only things I know (and it seems to be enough as I was able to implement the app) are: The heat pump communicates on ..... IP address, on .... port. It uses slave and the slave ID is .... All values are represented in MSB (most significant byte). Now about the values, for example outside temperature is on address 0, type is read only and the scale is 100 (for modbus in configuration file it should be 0.01 - let's ignore it for now), unit is <strong>°</strong>C.</p>
<p>So, my configuration look like:</p>
<pre><code># Modbus configuration
modbus:
  - name: ...
    type: tcp
    host: ...
    port: ...
    delay: 5
    timeout: 5
    sensors:
      - name: Heat pump outside temperature
        address: 0
        slave: 1
        input_type: holding
        device_class: temperature
        state_class: measurement
        data_type: uint16
        unique_id: "ac_heating_outside_temp"
</code></pre>
<p>This results in a value of 65436 on this entity, which is obviously wrong. The real value at this moment is -1. The register holds a two's-complement number: 65436 - 65536 = -100, and multiplied by 0.01 (or divided by 100) that is -1.00, i.e. the -1 degree I need. Well, in my C app, I have been doing this recalculation on my own (in bytes). Unfortunately I have no idea how to do that in "our" modbus yaml description.</p>
<p>I have been looking to SWAP, DATA_TYPE as well as STRUCTURE in the documentation: <a href="https://www.home-assistant.io/integrations/modbus/" rel="nofollow noreferrer">DOCUMENTATION (documentation link)</a> unfortunately nothing is working for me. I know I have to set custom data_type if I would like to provide structure, but defining the custom type and ">I" in the structure requires 2 registries to read, but the address of the entity is 0, which is 1 registry. Even like that I tried that, but I am not able to get the proper value. Having the data_type to uint16 with the swap byte or even swap word does not seems to work. I tried to play (out of necessity) with uint8, 2 registries and swap together, but no combination leads to the proper result. Python struct documentation: <a href="https://docs.python.org/3.8/library/struct.html" rel="nofollow noreferrer">https://docs.python.org/3.8/library/struct.html</a></p>
<p>Can anybody help me with this one?</p>
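For what it's worth, since the register holds a two's-complement value, on the HA side `data_type: int16` (still one register, no swap needed, together with `scale: 0.01`) should do this reinterpretation for you. The bit-level conversion it performs can be checked with `struct`:

```python
import struct

raw = 65436  # unsigned register value read from address 0
# Reinterpret the same 16 bits as a signed (two's-complement) integer:
signed = struct.unpack(">h", struct.pack(">H", raw))[0]
print(signed)          # -100
print(signed * 0.01)   # -1.0  (documentation scale 100 -> modbus scale 0.01)
```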
|
<python><endianness><modbus><home-assistant><uint16>
|
2023-01-18 21:04:52
| 2
| 1,048
|
tomdelahaba
|
75,165,254
| 4,133,188
|
Row-wise sorting a batch of pytorch tensors by column value
|
<p>I would like to sort each row in a <code>bxmxn</code> pytorch tensor (where <code>b</code> represents the batch size) by the k-th column value in each row. So my input tensor is <code>bxmxn</code>, and my output tensor is also <code>bxmxn</code> with the rows of each <code>mxn</code> tensor rearranged based on the k-th column value.</p>
<p>For example, if my original tensor is:</p>
<pre><code>a = torch.as_tensor([[[1, 3, 7, 6], [9, 0, 6, 2], [3, 0, 5, 8]], [[1, 0, 1, 0], [2, 1, 0, 3], [0, 0, 6, 1]]])
</code></pre>
<p>My sorted tensor should be:</p>
<pre><code>sorted_dim = 1 # sort by rows, preserving each row
sorted_column = 2 # sort rows on value of 3rd column of each row
sorted_a = torch.as_tensor([[[3, 0, 5, 8], [9, 0, 6, 2], [1, 3, 7, 6]], [[2, 1, 0, 3], [1, 0, 1, 0], [0, 0, 6, 1]]])
</code></pre>
<p>Thanks!</p>
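A sketch using `argsort` on the k-th column and `gather` to reorder whole rows (standard PyTorch ops, shown on the example tensor from the question):

```python
import torch

a = torch.as_tensor([[[1, 3, 7, 6], [9, 0, 6, 2], [3, 0, 5, 8]],
                     [[1, 0, 1, 0], [2, 1, 0, 3], [0, 0, 6, 1]]])
k = 2  # sort rows by their value in column k

# Per-batch order of the rows, by the k-th column value:
order = torch.argsort(a[:, :, k], dim=1)                   # shape (b, m)
# Broadcast that row order across all n columns and gather whole rows:
sorted_a = torch.gather(a, 1, order.unsqueeze(-1).expand(-1, -1, a.size(2)))
```

Passing `descending=True` to `argsort` reverses the order if needed.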
|
<python><pytorch>
|
2023-01-18 21:01:58
| 1
| 771
|
BeginnersMindTruly
|
75,165,180
| 10,705,248
|
How to fit and calculate conditional probability of copula in Python
|
<p>I would like to fit a copula to a dataframe with 2 columns: <code>a</code> and <code>b</code>. Then I have to calculate the conditional probability of <code>a</code> < 0 when <code>b</code> < -1 (i.e. P(a<0|b<-1)).</p>
<p>I have tried the following code in python using the library <code>copulas</code>; I am able to fit the copula to the data but I am not sure about calculating the cdf:</p>
<pre><code>import pandas
import copulas.multivariate

df = pandas.read_csv("filename")
cop = copulas.multivariate.GaussianMultivariate()
cop.fit(df)
</code></pre>
<p>I know the function <code>cdf</code> can calculate the conditional probability but I am not fully sure how to use that here.</p>
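One workable route, sketched below: draw a large sample from the fitted model (`cop.sample(n)` returns a DataFrame with the original column names) and estimate the conditional probability as a ratio of indicator means. Since I don't have your fitted copula, the sample here is stand-in independent normal data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Stand-in for cop.sample(500_000): two independent standard-normal columns.
samples = pd.DataFrame(rng.standard_normal((500_000, 2)), columns=["a", "b"])

cond = samples["b"] < -1
# P(a < 0 | b < -1) = P(a < 0 and b < -1) / P(b < -1)
p = ((samples["a"] < 0) & cond).mean() / cond.mean()
print(round(p, 2))  # close to 0.5 here, since the stand-in columns are independent
```

With the real fitted copula you would replace the stand-in DataFrame by `cop.sample(500_000)` and the estimate converges as the sample size grows; an analytic answer would instead need the conditional distribution of the fitted Gaussian copula.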
|
<python><cdf><probability-distribution>
|
2023-01-18 20:54:35
| 1
| 854
|
lsr729
|
75,165,064
| 4,418,481
|
Make Dash callback return results while it is still executing
|
<p>I'm creating a Dash app and in some parts of it, I have a button and a loader.</p>
<p>Once the user clicks the button, I want the loader to appear while the calculations are performed (might take 1-2 minutes) and when it's done to remove the loader.</p>
<p>In order to do so, I created a callback that has the button click as its input and the loader's style as its output.</p>
<p>However, in this single callback I want to change the loader's style twice (first to show it when the calculation starts, and second to hide it once the calculations are done)</p>
<p>The problems are (from my understanding):</p>
<ol>
<li>The callback only knows to return a result once it is ended.</li>
<li>I can't create another callback for the output of the slider because Dash says I can't have multiple outputs:
<a href="https://i.sstatic.net/y0OcZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y0OcZ.png" alt="enter image description here" /></a></li>
</ol>
<p>The code I used is something like this:</p>
<pre><code>@app.callback(
    Output("container", "children"),
    Output("lottie-loader", "style"),   # ---> initial state is that the loader is hidden
    Input("scan-button", "n_clicks"),
    State("container", "children")
)
def show_on_map(n_clicks, children):
    if (n_clicks > 0):
        style = {'display': 'none'}     # ---> it is here because if I put it in the return it will be executed right on startup
        # LONG CALCULATIONS
        return children, style          # ---> should return to be hidden


@app.callback(
    Output("lottie-loader", "style"),
    Input("scan-button", "n_clicks"),
)
def update_style(n_clicks):
    if (n_clicks > 0):
        style = {'display': 'block'}    # ---> tried to make the style visible
        return style
</code></pre>
<p>Given those limitations, is there any way to, for example, click the button and have the loader update immediately while the calculations are still running in the callback, without waiting for the return?</p>
|
<python><callback><plotly-dash>
|
2023-01-18 20:41:22
| 2
| 1,859
|
Ben
|
75,165,026
| 4,029,467
|
How to keep html entity such as `☒` intact in beautiful soup?
|
<p>My goal is to read an html document using beautiful soup, add <code>ids</code> to some tags and write the html back to file.</p>
<p>The html document has html entities such as <code>&#9746</code> representing <code>☒</code>. When I create a beautiful soup object, the html entity gets converted to <code>☒</code>. When I write the soup back to html using <code>str(soup)</code>, the html file contains <code>☒</code> instead of <code>&#9746</code>. Opening this in a browser yields <code>☒</code> when I want <code>☒</code>.</p>
<p>I tried using <code>str(soup.encode(formatter='html'))</code>, where it did convert to UTF-8 encoding, but the html in the browser shows <code>\xe2\x98\x92</code>.</p>
<p>I'm guessing there is something simple that I'm missing. Any thoughts on how to keep the special characters in the original document intact after processing it in Beautiful Soup?</p>
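One trick worth sketching, using only the stdlib codec machinery on top of Beautiful Soup: after editing, re-encode the output string with the `xmlcharrefreplace` error handler, which turns every non-ASCII character back into a numeric entity (the `id` edit below is just an example mutation):

```python
from bs4 import BeautifulSoup

html = "<p>&#9746; rejected</p>"
soup = BeautifulSoup(html, "html.parser")
soup.p["id"] = "status-1"  # stand-in for your id-adding step

# str(soup) contains the literal character; re-escape anything non-ASCII:
out = str(soup).encode("ascii", "xmlcharrefreplace").decode("ascii")
print(out)  # <p id="status-1">&#9746; rejected</p>
```

This produces `&#9746;` rather than the original form byte-for-byte if the source used a named entity, but browsers render both identically.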
|
<python><html><beautifulsoup>
|
2023-01-18 20:37:21
| 1
| 399
|
kyc12
|
75,164,984
| 13,488,334
|
Python - View & remove all unused dependencies in requirements.txt
|
<p>Somewhere along the way, the requirements.txt in my Python application has become extremely bloated. There are 100+ dependencies listed and iteratively going through it and removing each dependency until the application breaks is not an option.</p>
<p>Does anyone know of a tool that can show which packages in the requirements.txt are being used during runtime? If no tool exists, how has anyone solved this problem in a more efficient way than deleting packages one-by-one?</p>
|
<python><dependencies><python-packaging><requirements.txt>
|
2023-01-18 20:32:51
| 2
| 394
|
wisenickel
|
75,164,945
| 7,984,318
|
pandas how to count column boolean value that based on group
|
<p>I have a DataFrame <code>df</code>; you can create it by running the following code:</p>
<pre><code>import pandas as pd
from io import StringIO
df = """
month status_review supply_review
2023-01-01 False False
2023-01-01 True True
2022-12-01 False True
2022-12-01 True True
2022-12-01 False False
"""
df= pd.read_csv(StringIO(df.strip()), sep='\s\s+', engine='python')
</code></pre>
<p>How can I count how many <code>status_review</code> and <code>supply_review</code> values are True in each month?</p>
<p>The output should look like the following:</p>
<pre><code> month # of true status_review # of true supply_review
2023-01-01 1 1
2022-12-01 1 2
</code></pre>
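Since `read_csv` parses those columns as booleans and `True` sums as 1, a plain `groupby().sum()` gives the per-month counts — a sketch (the `# of true` column names are produced with `add_prefix`):

```python
import pandas as pd
from io import StringIO

data = """
month         status_review    supply_review
2023-01-01    False            False
2023-01-01    True             True
2022-12-01    False            True
2022-12-01    True             True
2022-12-01    False            False
"""
df = pd.read_csv(StringIO(data.strip()), sep=r"\s\s+", engine="python")

out = (df.groupby("month", sort=False)[["status_review", "supply_review"]]
         .sum()                      # True counts as 1, False as 0
         .add_prefix("# of true ")
         .reset_index())
# 2023-01-01 -> status: 1, supply: 1
# 2022-12-01 -> status: 1, supply: 2
```

`sort=False` keeps the months in first-appearance order, matching the expected output.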
|
<python><pandas><dataframe>
|
2023-01-18 20:28:46
| 3
| 4,094
|
William
|
75,164,872
| 3,507,584
|
Bar polar with areas proportional to values
|
<p>Based on <a href="https://stackoverflow.com/questions/75126470/barplot-based-on-coloured-sectors/75128286#75128286">this question</a> I have the plot below.
The issue is that plotly misaligns the proportion between plot area and data value. I mean, higher values (e.g. going from 0.5 to 0.6) lead to a large increase in area (the big dark green block), whereas going from 0 to 0.1 is barely noticeable (even though the actual data increment is the same 0.1).</p>
<p><a href="https://i.sstatic.net/6ZGE0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6ZGE0.jpg" alt="Plot" /></a></p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
df = px.data.wind()
df_test = df[df["strength"]=='0-1']
df_test_sectors = pd.DataFrame(columns=df_test.columns)
## this only works if each group has one row
for direction, df_direction in df_test.groupby('direction'):
    frequency_stop = df_direction['frequency'].tolist()[0]
    frequencies = np.arange(0.1, frequency_stop+0.1, 0.1)
    df_sector = pd.DataFrame({
        'direction': [direction]*len(frequencies),
        'strength': ['0-1']*len(frequencies),
        'frequency': frequencies
    })
    df_test_sectors = pd.concat([df_test_sectors, df_sector])
df_test_sectors = df_test_sectors.reset_index(drop=True)

df_test_sectors['direction'] = pd.Categorical(
    df_test_sectors['direction'],
    df_test.direction.tolist()  # sort the directions into the same order as those in df_test
)
df_test_sectors['frequency'] = df_test_sectors['frequency'].astype(float)
df_test_sectors = df_test_sectors.sort_values(['direction', 'frequency'])
fig = px.bar_polar(df_test_sectors, r='frequency', theta='direction', color='frequency', color_continuous_scale='YlGn')
fig.show()
</code></pre>
<p>Is there any way to make the block areas proportional to the data values, to keep a more "truthful" alignment between the aesthetics and the actual data? So the closer to the center, the "longer" the blocks, such that the areas of all blocks are equal? Is there any option in Plotly for this?</p>
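For what it's worth, equal-area blocks follow from geometry: a sector between radii r1 and r2 has area proportional to r2² − r1², so placing block boundaries at sqrt(frequency) makes every 0.1-wide frequency step cover the same area. A numpy sketch of the transform (the plotly relabelling at the end is indicative only, against a hypothetical `fig`):

```python
import numpy as np

frequencies = np.arange(0.1, 0.8, 0.1)   # stacked block boundaries in data units
radii = np.sqrt(frequencies)             # radial position of each boundary

# plotly stacks bar_polar r-values additively, so feed it per-block heights:
heights = np.diff(np.concatenate(([0.0], radii)))

# Sanity check: every annular block now covers the same area (prop. to r2^2 - r1^2):
areas = np.diff(np.concatenate(([0.0], radii)) ** 2)
print(np.allclose(areas, 0.1))  # True

# Keep the radial axis labelled in original units, e.g.:
# fig.update_polars(radialaxis=dict(tickvals=np.sqrt([0.2, 0.4, 0.6]).tolist(),
#                                   ticktext=["0.2", "0.4", "0.6"]))
```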
|
<python><plotly><bar-chart><polar-coordinates><plotly-express>
|
2023-01-18 20:19:31
| 1
| 3,689
|
User981636
|
75,164,809
| 5,212,614
|
How can we do Dense Rank on a field in a dataframe?
|
<p>I am trying to dense rank a field, from the highest number (lowest rank) to the lowest number (highest rank). I tried this.</p>
<pre><code>df['DenseRank'] = df['CountsOfCircuits'].rank(ascending=False)
</code></pre>
<p>The max number I have in 'CountsOfCircuits' is 804. I want this to be ranked as 1, but I'm getting 402.50! How can I dense rank a field, with the top number having a dense rank of 1 and the bottom having the max dense rank?</p>
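`rank()` defaults to `method='average'`, which averages the positions of tied values and yields fractional ranks; `method='dense'` gives consecutive integer ranks, with the maximum value ranked 1 when `ascending=False`. A sketch:

```python
import pandas as pd

df = pd.DataFrame({"CountsOfCircuits": [804, 500, 500, 12]})
df["DenseRank"] = (df["CountsOfCircuits"]
                   .rank(method="dense", ascending=False)
                   .astype(int))  # dense ranks are whole numbers, so int is safe
print(df["DenseRank"].tolist())  # [1, 2, 2, 3]
```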
|
<python><python-3.x><dataframe><rank>
|
2023-01-18 20:12:54
| 0
| 20,492
|
ASH
|
75,164,772
| 15,376,262
|
Expand list of dates by incrementing dates by one day in python
|
<p>In Python I have a list of dates as strings:</p>
<pre><code>dates = ['2022-01-01', '2022-01-08', '2022-01-21']
</code></pre>
<p>I would like to increment these dates by one day and add them to this list, like so:</p>
<pre><code>dates_new = ['2022-01-01', '2022-01-02', '2022-01-08', '2022-01-09', '2022-01-21', '2022-01-22']
</code></pre>
<p>What is the best way to achieve this?</p>
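A sketch with the stdlib `datetime` module: parse each string, add a `timedelta`, and format back:

```python
from datetime import datetime, timedelta

dates = ["2022-01-01", "2022-01-08", "2022-01-21"]

dates_new = []
for d in dates:
    nxt = datetime.strptime(d, "%Y-%m-%d") + timedelta(days=1)
    dates_new.extend([d, nxt.strftime("%Y-%m-%d")])

print(dates_new)
# ['2022-01-01', '2022-01-02', '2022-01-08', '2022-01-09', '2022-01-21', '2022-01-22']
```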
|
<python><list><date>
|
2023-01-18 20:09:25
| 3
| 479
|
sampeterson
|
75,164,716
| 4,133,188
|
Finding closest matches (by distance metric) in two batches of pytorch tensors
|
<p>I am trying to find the closest matches between two batches of pytorch tensors. Assuming I have a batch of <code>mxn</code> tensors with batch size <code>b1</code> and a batch of <code>mxn</code> tensors with batch size <code>b2</code>, I would like to find:</p>
<ul>
<li>The distance between each <code>mxn</code> tensor in batch <code>b1</code> and each <code>mxn</code> tensor in batch <code>b2</code>. This distance matrix would be of size <code>b1xb2</code>.</li>
<li>For each tensor in <code>b1</code>, I would like the batch index of the closest (by distance) tensor in <code>b2</code>.</li>
</ul>
<p>I define distance as the sum of the elementwise squared Euclidean distance between corresponding elements in each tensor. For example, if the first tensor in <code>b1</code> (i.e. batch index = 0) is <code>[[a, b, c], [d, e, f], [g, h, i], [j, k, l]]</code> and the first tensor in <code>b2</code> (i.e. batch index = 0) is <code>[[z, y, x], [w, v, u], [t, s, r]]</code>, then the distance between <code>b1</code> and <code>b2</code> is: (a-z)^2 + (b-y)^2 + (c-x)^2 + (d-w)^2 + (e-v)^2 +(f-u)^2 +...+(l-r)^2</p>
<p>Here's what I have tried:</p>
<pre><code>a = torch.rand((3, 3, 4))
b = torch.rand((5, 3, 4))
flat_a = torch.flatten(a, start_dim = 1)
flat_b = torch.flatten(b, start_dim = 1)
torch.cdist(flat_a, flat_b)
</code></pre>
<p>Which gives me a <code>3x5</code> matrix that I hope is correct. And I would now like to return the batch indices of the <code>3x4</code> tensors in <code>b</code> that are the closest matches to the tensors in <code>a</code>.</p>
<p>Thanks</p>
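If the `cdist` matrix is right (note it holds the Euclidean distance, i.e. the square root of your sum-of-squares metric — a monotone transform, so the argmin is unchanged), the last step is an `argmin` over dim 1. A sketch with a planted copy so the expected match is known:

```python
import torch

a = torch.rand((3, 3, 4))
b = torch.rand((5, 3, 4))
b[4] = a[0]  # plant an exact copy so a[0]'s best match is known

dist = torch.cdist(torch.flatten(a, start_dim=1),
                   torch.flatten(b, start_dim=1))  # shape (3, 5)
closest = dist.argmin(dim=1)  # for each tensor in a: batch index of nearest in b
print(closest[0].item())      # 4 -- the planted copy
```

If you want the squared-distance values themselves, square `dist` (or use `torch.cdist(..., p=2) ** 2`); the indices stay the same.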
|
<python><pytorch>
|
2023-01-18 20:04:19
| 1
| 771
|
BeginnersMindTruly
|
75,164,640
| 707,145
|
Facing issues when running a shiny app for Python locally
|
<p>I want to run <a href="/questions/tagged/shiny" class="post-tag" title="show questions tagged 'shiny'" aria-label="show questions tagged 'shiny'" rel="tag" aria-labelledby="shiny-container">shiny</a> app for <a href="/questions/tagged/python" class="post-tag" title="show questions tagged 'python'" aria-label="show questions tagged 'python'" rel="tag" aria-labelledby="python-container">python</a> locally. The following code works like a charm:</p>
<pre><code>cd/d D:/app1
py
from shiny import run_app
run_app()
</code></pre>
<p>However, when I try the following code, it does not work.</p>
<pre><code>cd/d D:
py
from shiny import run_app
run_app("app1:app")
</code></pre>
<p>Any hints, please.</p>
<p>I am following the <a href="https://shiny.rstudio.com/py/api/reference/shiny.run_app.html" rel="nofollow noreferrer">shiny.run_app documentation</a>.</p>
|
<python><cmd><py-shiny>
|
2023-01-18 19:56:17
| 1
| 24,136
|
MYaseen208
|
75,164,598
| 5,224,236
|
how to handle sign-in popups with python selenium
|
<p>Using selenium I need to handle this Chrome popup on a virtual machine, even when the remote desktop connection to it is closed.</p>
<p><a href="https://i.sstatic.net/ep1s9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ep1s9.png" alt="enter image description here" /></a></p>
<p>I manage to get the credentials auto-filled by Chrome using my own user profile as described <a href="https://stackoverflow.com/a/67389309/5224236">here</a>.</p>
<p>Using pyautogui to press "enter" works when I am on the VM. The challenge is to make it work after I disconnect.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
import time
from datetime import datetime, date
import dateutil.relativedelta
import numpy as np
import pandas as pd
import keyboard
import os
import pyautogui
pyautogui.FAILSAFE = False # trying to keep sending keystrokes when remote desktop minimized or closed
chrome_driver = "path/to/exe"
# initial connection
rooturl = 'https://myurl'
options = webdriver.ChromeOptions()
options.add_argument(r"--user-data-dir=C:\Users\MYUSER\AppData\Local\Google\Chrome\User Data")
options.add_argument(r'--profile-directory=Default')
driver = webdriver.Chrome(chrome_driver, chrome_options=options)
driver.implicitly_wait(20)
driver.maximize_window()
driver.get(rooturl)
windtitles = [x.title for x in pyautogui.getAllWindows()]
chromewindidx = [x for x, y in enumerate(windtitles) if 'identityproviderinternal' in y][0]
chromewind = pyautogui.getAllWindows()[chromewindidx]
chromewind.minimize()
chromewind.maximize()
time.sleep(1)
if True:
    pyautogui.press('enter')  # not working when I am not on the remote desktop
time.sleep(15)
</code></pre>
<p>I also tried using <code>https://user:password@myurl</code> but it doesn't work on this particular system.</p>
<p>I also tried using <code>driver.switch_to.alert.accept()</code> which gives <code>NoAlertPresentException: Message: no such alert</code></p>
<p>All solutions welcome but what I <em>think</em> I'd be after is some (<code>javascript</code>?) script to press this OK button using <code>driver.execute</code>, <code>driver.execute_script</code> or similar</p>
|
<javascript><python><selenium>
|
2023-01-18 19:51:38
| 3
| 6,028
|
gaut
|
75,164,545
| 396,014
|
ImportError: cannot import 'Node2Vec'
|
<p>I am trying to use node2vec and I can't get past the import section:</p>
<pre><code>import networkx as nx
from node2vec import Node2Vec
</code></pre>
<p>Second line throws error</p>
<pre><code>Traceback (most recent call last):
File "node2vec2.py", line 2, in <module>
from node2vec import Node2Vec
ImportError: cannot import name 'Node2Vec'
</code></pre>
<p>I found <a href="https://github.com/eliorc/node2vec/issues/21" rel="nofollow noreferrer">this thread</a> on the Git repository for the library. I didn't follow everything they were saying but it seemed this was some problem with how the library was installed. So I checked the directory C:\Python36\Lib\site-packages. There is a node2vec folder. The script that's in there is named node2vec.py not Node2Vec.py but changing that on the import statement didn't change anything.</p>
<p>To be certain I'm not running some zombie install I executed python with an explicit path. No help.</p>
<p>Toward the end of that git entry it says</p>
<pre><code>Last time I resolved it by cloning the repository and navigating to the code folder and put:
pip install .
</code></pre>
<p>And that reportedly fixed it for someone else. But I don't understand what he means by "the code folder." Is that the folder where my script is being run from?</p>
|
<python><git>
|
2023-01-18 19:46:39
| 1
| 1,001
|
Steve
|
75,164,413
| 5,223,033
|
For a given wikipedia article, find all wikipedia articles containing hyperlink to the input article in the text
|
<p>Let me try to explain my problem: For a Wikipedia article url, Let's say Yann LeCun (<a href="https://en.wikipedia.org/wiki/Yann_LeCun" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Yann_LeCun</a>), I would like to retrieve URLs of wikipedia articles that contains a word with this hyperlink. In this case, for example, one of the returned URLS can be the URL of the Meta AI article (<a href="https://en.wikipedia.org/wiki/Meta_AI" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Meta_AI</a>) because inside that article there is this text:</p>
<p><em>"FAIR was directed by New York University's <a href="https://en.wikipedia.org/wiki/Yann_LeCun" rel="nofollow noreferrer">Yann LeCun</a>, a deep learning Professor and Turing Award winner."</em></p>
<p>Is there any kind of API or python code to do something like that? I've seen the "What links here" tool available in Wikipedia, but unfortunately not all the articles in its output list have text with hyperlinks to the input article. Thanks in advance</p>
|
<python><wikipedia><wikipedia-api>
|
2023-01-18 19:32:34
| 1
| 1,844
|
zwlayer
|
75,164,370
| 11,670,455
|
Python Polars: how to convert a list of dictionaries to polars dataframe without using pandas
|
<p>I have a list of dictionaries like this:</p>
<pre><code>[{"id": 1, "name": "Joe", "lastname": "Bloggs"}, {"id": 2, "name": "Bob", "lastname": "Wilson"}]
</code></pre>
<p>And I would like to transform it to a polars dataframe. I've tried going via pandas but if possible, I'd like to avoid using pandas.</p>
<p>Any thoughts?</p>
|
<python><python-polars>
|
2023-01-18 19:28:43
| 2
| 379
|
Frank Jimenez
|
75,164,339
| 14,167,846
|
Subplots to include radar plot
|
<p>I'm running into issues with some subplots. I've provided some sample code to generate the types of plots I would like to create. I'd like these to be the same size, side by side.</p>
<p>I'm having a really hard time figuring out how to create the subplots though. I keep running into some issues with the thetagrids here. This is what I've tried. I can get these to work separately, but can't figure out how to combine them. Eventually I might want a third plot as well.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
## Plot 1
x1 = np.array([0, 1, 2, 3])
y1 = np.array([7, 2, 4, 2])
plt.subplot(1, 2, 1)
plt.figure(figsize=(5, 5))
plt.scatter(x1, y1)
# plt.show()
### Plot 2
# make up data for plot
polar_list = ['a', 'b', 'c', 'd', 'a']
polar_points = [4, 3, 6, 7, 4]
# modify lists for plots
label_loc = np.linspace(start=0, stop=2 * np.pi, num=len(polar_list))
plt.figure(figsize=(5, 5))
plt.subplot(1, 2, 2, polar=True)
plt.plot(label_loc, polar_points, label='DataLable')
plt.title('DataLable comparison', size=20, y=1.05)
lines, labels = plt.thetagrids(np.degrees(label_loc), labels=polar_list)
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/7cuIo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7cuIo.png" alt="Basically what i'm going for" /></a></p>
|
<python><matplotlib><subplot><radar-chart>
|
2023-01-18 19:25:59
| 1
| 545
|
pkpto39
|
75,164,321
| 10,256,274
|
get xml comment using python
|
<p>I am trying to get the comment from my xml file, but do not know how. The reason is that the xml currently doesn't have time data; it is located in the comment. I want to grab <code>18 JAN 2023 1:40:55 PM</code> and convert it into an epoch timestamp. Can someone help me?</p>
<pre><code><!-- My Movies, Generated 18 JAN 2023 1:40:55 PM -->
<collection shelf="New Arrivals">
<movie title="Enemy Behind">
<type>War, Thriller</type>
<format>DVD</format>
<year>2003</year>
<rating>PG</rating>
<stars>10</stars>
<description>Talk about a US-Japan war</description>
</movie>
<movie title="Transformers">
<type>Anime, Science Fiction</type>
<format>DVD</format>
<year>1989</year>
<rating>R</rating>
<stars>8</stars>
<description>A scientific fiction</description>
</movie>
</collection>
</code></pre>
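`xml.etree.ElementTree` drops comments by default, and this comment sits outside the root element anyway, so a regex over the raw text plus `strptime` is the least fussy route. A sketch (the format string is assumed from the sample; `%b` matches "JAN" case-insensitively):

```python
import re
from datetime import datetime, timezone

xml_text = """<!-- My Movies, Generated 18 JAN 2023 1:40:55 PM -->
<collection shelf="New Arrivals">...</collection>"""

comment = re.search(r"<!--(.*?)-->", xml_text, re.S).group(1)
stamp = re.search(r"\d{1,2} [A-Za-z]{3} \d{4} \d{1,2}:\d{2}:\d{2} [AP]M",
                  comment).group(0)
dt = datetime.strptime(stamp, "%d %b %Y %I:%M:%S %p")
print(dt)  # 2023-01-18 13:40:55

# Epoch timestamp -- pick the zone the file was generated in; UTC shown here:
epoch = dt.replace(tzinfo=timezone.utc).timestamp()
print(int(epoch))  # 1674049255
```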
|
<python><xml>
|
2023-01-18 19:24:15
| 1
| 429
|
Nelly Yuki
|
75,164,206
| 12,875,823
|
FastAPI + pytest unable to clean Django ORM
|
<p>I'm creating a FastAPI project that integrates with the Django ORM. When running pytest though, the PostgreSQL database is not rolling back the transactions. Switching to SQLite, the SQLite database is not clearing the transactions, but it is tearing down the db (probably because SQLite uses in-memory db). I believe pytest-django is not calling the rollback method to clear the database.</p>
<p>In my pytest.ini, I have the <code>--reuse-db</code> flag on.</p>
<p>Here's the repo: <a href="https://github.com/Andrew-Chen-Wang/fastapi-django-orm" rel="nofollow noreferrer">https://github.com/Andrew-Chen-Wang/fastapi-django-orm</a>, which includes pytest-django and pytest-asyncio. If anyone's done this with Flask, that would help too.</p>
<p>Assuming you have PostgreSQL:</p>
<p>Steps to reproduce:</p>
<ol>
<li><code>sh bin/create_db.sh</code> which creates a new database called <code>testorm</code></li>
<li><code>pip install -r requirements/local.txt</code></li>
<li><code>pytest tests/</code></li>
</ol>
<p>The test is calling a view that creates a new record in the database tables and tests whether there is an increment in the number of rows in the table:</p>
<pre class="lang-py prettyprint-override"><code># In app/core/api/a_view.py
@router.get("/hello")
async def hello():
await User.objects.acreate(name="random")
return {"message": f"Hello World, count: {await User.objects.acount()}"}
# In tests/conftest.py
import pytest
from httpx import AsyncClient
from app.main import fast
@pytest.fixture()
def client() -> AsyncClient:
return AsyncClient(app=fast, base_url="http://test")
# In tests/test_default.py
async def test_get_hello_view(client):
"""Tests whether the view can use a Django model"""
old_count = await User.objects.acount()
assert old_count == 0
async with client as ac:
response = await ac.get("/hello")
assert response.status_code == 200
new_count = await User.objects.acount()
assert new_count == 1
assert response.json() == {"message": "Hello World, count: 1"}
async def test_clears_database_after_test(client):
"""Testing whether Django clears the database"""
await test_get_hello_view(client)
</code></pre>
<p>The first test case passes but the second doesn't. If you re-run pytest, the first test case also starts not passing because the test database is not clearing the transaction from the first run.</p>
<p>I adjusted the test to not include the client call, but it seems like pytest-django is simply not creating a transaction around the Django ORM, because the db is not being cleared for each test:</p>
<pre class="lang-py prettyprint-override"><code>async def test_get_hello_view(client):
"""Tests whether the view can use a Django model"""
old_count = await User.objects.acount()
assert old_count == 0
await User.objects.acreate(name="test")
new_count = await User.objects.acount()
assert new_count == 1
async def test_clears_database_after_test(client):
"""Testing whether Django clears the database"""
await test_get_hello_view(client)
</code></pre>
<p>How should I clear the database for each test case?</p>
|
<python><django><pytest><fastapi>
|
2023-01-18 19:14:43
| 1
| 998
|
acw
|
75,163,923
| 17,696,880
|
Remove from a list of strings, those strings that have only empty spaces or that are made up of less than 3 alphanumeric characters
|
<pre class="lang-py prettyprint-override"><code>import re
sentences_list = ['Hay 5 objetos rojos sobre la mesada de ahí.', 'Debajo de la mesada hay 4 objetos', '', ' ', "\taa!", '\t\n \n', '\n ', 'ai\n ', 'Salto rapidamente!!!', 'y la vio volar', '!', ' aa', 'aa', 'día']
#The problem with this is that there are several cases that need to be eliminated
# and the complexity to figure that out should be resolved with a regex.
sentences_list = [i for a,i in enumerate(sentences_list) if i != ' ']
print(repr(sentences_list)) #print the already filtered list to verify
</code></pre>
<p>I got these strings from a sentence splitter; the problem is that some of them aren't really sentences, or aren't really linguistically significant units.</p>
<ul>
<li><p>Those strings that have less than 3 alphanumeric characters (that is, 2 characters or less) must be eliminated from the list.</p>
</li>
<li><p>Those strings that are empty <code>""</code> or <code>" "</code> , or that are made up of single symbols <code>"...!"</code>, <code>";"</code>, <code>".\n"</code>, <code>"\taa!"</code> must be eliminated from the list.</p>
</li>
<li><p>Those strings that have only escape characters and nothing else, except symbols or that have less than 3 alphanumeric characters, for example <code>"\t\n ab ."</code> , <code>"\n ."</code>, <code>"\n"</code> must be eliminated from the list.</p>
</li>
</ul>
<p>This is how the correct list should look after filtering out the elements that do not meet the conditions:</p>
<pre><code>['Hay 5 objetos rojos sobre la mesada de ahí.', 'Debajo de la mesada hay 4 objetos', 'Salto rapidamente!!!', 'y la vio volar', 'día']
</code></pre>
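<p>One regex-based sketch: count the word characters in each string (<code>\w</code> matches letters, digits, and accented characters such as <code>í</code> in Unicode mode) and keep only strings with at least 3 of them; that single rule covers all three bullet points:</p>

```python
import re

sentences_list = ['Hay 5 objetos rojos sobre la mesada de ahí.',
                  'Debajo de la mesada hay 4 objetos', '', ' ', "\taa!",
                  '\t\n \n', '\n ', 'ai\n ', 'Salto rapidamente!!!',
                  'y la vio volar', '!', ' aa', 'aa', 'día']

# Keep a string only if it contains 3 or more word characters.
filtered = [s for s in sentences_list if len(re.findall(r'\w', s)) >= 3]
print(filtered)
```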
|
<python><python-3.x><regex><list><regex-group>
|
2023-01-18 18:44:56
| 2
| 875
|
Matt095
|
75,163,544
| 10,924,836
|
Calculating of tolerance
|
<p>I am working with one data set that contains values with different decimal places. The data and code are shown below:</p>
<pre><code>data = {
'value':[9.1,10.5,11.8,
20.1,21.2,22.8,
9.5,10.3,11.9,
]
}
df = pd.DataFrame(data, columns = ['value'])
</code></pre>
<p>Which gives the following dataframe:</p>
<pre class="lang-none prettyprint-override"><code> value
0 9.1
1 10.5
2 11.8
3 20.1
4 21.2
5 22.8
6 9.5
7 10.3
8 11.9
</code></pre>
<p>Now I want to add a new column with the title <code>adjusted</code>. I want to calculate this column with the <code>numpy.isclose</code> function with a tolerance of 2 (plus or minus 2). In the end I expect to have the results shown in the next table:</p>
<pre class="lang-none prettyprint-override"><code> value adjusted
0 9.1 10
1 10.5 10
2 11.8 10
3 20.1 21
4 21.2 21
5 22.8 21
6 9.5 10
7 10.3 10
8 11.9 10
</code></pre>
<p>I tried with this line but I get only results such true and false and also this is only for one value (10) not for all values.</p>
<pre><code>np.isclose(df1['value'],10,atol=2)
</code></pre>
<p>So can anybody help me solve this problem and calculate the tolerance for the values 10 and 21 in one line?</p>
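<p>A sketch that keeps <code>np.isclose</code> and handles both target values in one pass: build one boolean mask per target and let <code>np.select</code> pick the matching target (rows matching no target fall back to <code>np.select</code>'s default of 0, so pass a default if that can happen):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [9.1, 10.5, 11.8, 20.1, 21.2, 22.8, 9.5, 10.3, 11.9]})

targets = [10, 21]
# One boolean mask per target, each with the +/-2 tolerance.
conditions = [np.isclose(df['value'], t, atol=2) for t in targets]
df['adjusted'] = np.select(conditions, targets)
print(df)
```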
|
<python><pandas><numpy>
|
2023-01-18 18:12:46
| 1
| 2,538
|
silent_hunter
|
75,163,524
| 859,227
|
Clearing the line before carriage return
|
<p>In the following code, I would like to show the progress of two loops with the iteration number for the outer loop and dots for the inner loop.</p>
<pre><code>import time
def foo():
for j in range(0,100):
if j % 10 == 0:
print('.', end='', flush=True, sep='')
time.sleep(0.2)
for i in range(0,100):
if i % 10 == 0:
print('\r', i, end='', flush=True, sep='')
time.sleep(1)
foo()
</code></pre>
<p>Although at <code>i==10</code> it goes to the beginning of the line, it doesn't remove the dots. So, it actually overwrites the dots. See this figure:</p>
<p><a href="https://i.sstatic.net/KbMAi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KbMAi.png" alt="enter image description here" /></a></p>
<p>Is there a way to clean the line and go to the start point with <code>\r</code>?</p>
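<p>On terminals that understand ANSI escape codes, <code>\x1b[2K</code> ("erase entire line") printed together with <code>\r</code> clears the dots before the counter is written. A minimal sketch (sleeps shortened):</p>

```python
import time

CLEAR_LINE = '\r\x1b[2K'  # carriage return + ANSI "erase entire line"

def foo():
    for j in range(0, 100):
        if j % 10 == 0:
            print('.', end='', flush=True)
            time.sleep(0.01)
    for i in range(0, 100):
        if i % 10 == 0:
            print(CLEAR_LINE, i, end='', flush=True, sep='')
            time.sleep(0.01)

foo()
print()
```

<p>If the terminal does not support ANSI codes, overprinting with spaces, e.g. <code>print('\r' + ' ' * 20 + '\r', i, ...)</code>, achieves the same effect.</p>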
|
<python>
|
2023-01-18 18:10:43
| 0
| 25,175
|
mahmood
|
75,163,508
| 4,706,745
|
How to create a regular expression to replace a url?
|
<p>I'm trying to create a regular expression using re.sub() that can replace a URL inside a string, for example:</p>
<pre><code>tool(id='merge_tool', server='http://localhost:8080')
</code></pre>
<p>I created a regular expression, but it returns a truncated string, as shown below.</p>
<pre><code> a = "https:192.168.1.1:8080"
re.sub(r'http\S+', a, "tool(id='merge_tool', server='http://localhost:8080')")
</code></pre>
<p>results:</p>
<pre><code> "tool(id='merge_tool', server='https:192.168.1.1"
</code></pre>
<p>Or if I provide this URL:</p>
<pre><code> b = 'https:facebook.com'
re.sub(r'http\S+', b, "tool(id='merge_tool', server='http://localhost:8080')")
</code></pre>
<p>Results:</p>
<pre><code> "tool(id='merge_tool', server='https:facebook.com"
</code></pre>
<p>How to fix this so that it can return the entire string after replacing the URL?</p>
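<p>The truncation happens because the greedy <code>\S+</code> also swallows the closing <code>')</code> characters, which then disappear in the replacement. A sketch of a narrower pattern that stops before the closing single quote:</p>

```python
import re

s = "tool(id='merge_tool', server='http://localhost:8080')"

# Match from "http" up to (but not including) the next single quote.
result = re.sub(r"http[^']*", "https:192.168.1.1:8080", s)
print(result)  # tool(id='merge_tool', server='https:192.168.1.1:8080')
```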
|
<python><regex>
|
2023-01-18 18:09:09
| 1
| 4,217
|
jax
|
75,163,472
| 3,352,254
|
Calculate the rolling mean of every n-th element over an m-element window in python
|
<p>Suppose I have a vector like so:</p>
<pre><code>s = pd.Series(range(50))
</code></pre>
<p>The rolling mean over, let's say, a 2-element window is easily calculated:</p>
<pre><code>s.rolling(window=2, min_periods=2).mean()
</code></pre>
<pre><code>0 NaN
1 0.5
2 1.5
3 2.5
4 3.5
5 4.5
6 5.5
7 6.5
8 7.5
9 8.5
...
</code></pre>
<p>Now I don't want the window to use the 2 adjacent elements; instead I want to take e.g. every third element, still using only the last 2 of them. It would result in this vector:</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 1.5 -- (3+0)/2
4 2.5 -- (4+1)/2
5 3.5 -- (5+2)/2
6 4.5 -- ...
7 5.5
8 6.5
9 7.5
...
</code></pre>
<p>How can I achieve this efficiently?</p>
<p>Thanks!</p>
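<p>For a 2-element window spaced n=3 apart, one efficient sketch avoids <code>rolling</code> entirely: averaging the series with a shifted copy of itself gives exactly <code>(s[i] + s[i-3]) / 2</code>, with NaN in the first 3 positions:</p>

```python
import pandas as pd

s = pd.Series(range(50))
n = 3  # spacing between window elements

result = (s + s.shift(n)) / 2
print(result.head(10))
```

<p>For an m-element window this generalizes to <code>sum(s.shift(k * n) for k in range(m)) / m</code>.</p>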
|
<python><pandas><window><rolling-computation>
|
2023-01-18 18:05:39
| 2
| 825
|
smaica
|
75,163,465
| 11,869,866
|
Pydantic model with Union field with one option mark as deprecated
|
<p>I have some Pydantic models with fields that are unions of different models.
I'm looking for a way to mark some members of the union as deprecated in my FastAPI-generated docs.</p>
<p>I can make the whole field deprecated with :
<code>Field(default=None, deprecated=True)</code></p>
<p>but I find no way to do it for one of the possible values. In the following example, is it possible to tag SimpleUser as deprecated for the Log model and generate the docs accordingly?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union
from pydantic import BaseModel, Field
class Admin(BaseModel):
name: str
class SimpleUser(BaseModel):
age: int
class Log(BaseModel):
user: Union[Admin, SimpleUser, None] = Field(default=None)
</code></pre>
|
<python><fastapi><swagger-ui><jsonschema><pydantic>
|
2023-01-18 18:04:29
| 1
| 1,339
|
Bastien B
|
75,163,410
| 9,118,312
|
pandas groupby().apply() grouping the same group again and again under different names
|
<p>I have a pandas data frame made up of various columns. Among them are 'branch' and 'barcode', by which I would like to group the dataframe and apply a function. Something I have done thousands of times before.</p>
<p>But this time it is showing behavior I've never seen before. Instead of sending each group to the function it sends the same group over and over. Only the name of the group changes as expected.</p>
<p>To showcase the problem, I'm printing out the group name (which contains the changing groupby keys) and the barcode and branch of the first row, which should be the same as the name but isn't.</p>
<p>Here's the basic code:</p>
<pre><code>def main_features(df):
print(df.name)
print(df[['barcode', 'branch']].iloc[0])
df5 = df4.groupby(['branch', 'barcode']).apply(main_features)
</code></pre>
<p>Note the output:</p>
<pre><code>(1, 90162800)
barcode 90162800
branch 1
Name: 1, dtype: int64
(1, 38000232176)
barcode 90162800
branch 1
Name: 3, dtype: int64
(1, 38000232183)
barcode 90162800
branch 1
Name: 4, dtype: int64
(1, 3014260280772)
barcode 90162800
branch 1
Name: 18, dtype: int64
(1, 3014260289287)
barcode 90162800
branch 1
Name: 19, dtype: int64
(1, 4015400562818)
barcode 90162800
branch 1
Name: 44, dtype: int64
(1, 4015400563747)
barcode 90162800
branch 1
Name: 45, dtype: int64
(1, 4015400563846)
barcode 90162800
branch 1
Name: 46, dtype: int64
(1, 4015400564324)
...
...
...and so on
</code></pre>
<p>Note that the barcode and branch are changing in the df.name. But the actual branch and barcode are constant. Weirdest Pandas behavior ever.</p>
<p>Any ideas?</p>
|
<python><pandas><group-by>
|
2023-01-18 18:00:11
| 1
| 608
|
Bigga
|
75,163,396
| 8,795,358
|
How to convert JSON string (with double quotes in its values) to python dictionary
|
<p>I have some JSON files like this:</p>
<pre><code>{
"@context": "http://schema.org",
"@type": "Product",
"name": "ADIZERO ADIOS PRO 2 Löparskor",
"@id": "adidas-adizero-adios-pro-2-loparskor",
"color": "Lila",
"description": "Example text "Best Comfort" an other example text.",
"brand": {
"@type": "Thing",
"name": "adidas"
},
"audience": {
"@type": "Audience",
"name": "Herr, Dam"
}
}
</code></pre>
<p>I know it is not valid JSON, since the description value contains unescaped inner double quotes, but how can I manipulate this string with Python so that <code>json.loads()</code> works?</p>
<p>I'm thinking about some regular expressions to remove these inner double quotes. Is that possible?</p>
<p>BTW: It's not possible to manipulate the source JSON files.</p>
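<p>A regex sketch, under the assumption that the stray quotes only occur inside the <code>description</code> value and that the value ends with <code>",</code> at the end of its line: capture the value, escape the inner double quotes, then parse with <code>json.loads</code>:</p>

```python
import json
import re

raw = '''{
  "name": "ADIZERO",
  "description": "Example text "Best Comfort" an other example text.",
  "color": "Lila"
}'''

def escape_inner(m):
    # Escape every double quote inside the captured value.
    return m.group(1) + m.group(2).replace('"', r'\"') + m.group(3)

fixed = re.sub(r'("description":\s*")(.*)(",\s*$)', escape_inner, raw, flags=re.M)
data = json.loads(fixed)
print(data['description'])
```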
|
<python><json><string>
|
2023-01-18 17:59:12
| 1
| 359
|
Tanhaeirad
|
75,163,224
| 16,937,053
|
New variable in pandas conditioned on two variables where one variable transcend multiple rows
|
<p>I want to add a column <code>col3</code> to my data frame <code>df</code> with the binary outcome <code>yes</code> or <code>no</code> .</p>
<p>The issue is that the values in <code>col3</code> should be conditioned on <code>col1</code> and <code>col2</code> in the sense that the outcome will be <code>yes</code> if the value for <code>col2</code> is also <code>yes</code> for all instances of a unique value in <code>col1</code>. In case one or more values are <code>no</code> in <code>col2</code> then the corresponding row in <code>col3</code> should also be <code>no</code>.</p>
<p>A simple example of the logic.</p>
<pre><code>import pandas as pd
df={"col1": [1,1,1,2,3,3,4,4], "col2": ["yes","no","yes","no","yes","yes","yes","no"]}
df = pd.DataFrame(data=df)
</code></pre>
<pre><code> col1 col2
0 1 yes
1 1 no
2 1 yes
3 2 no
4 3 yes
5 3 yes
6 4 yes
7 4 no
</code></pre>
<p>The desired outcome.</p>
<pre><code>df_new
col1 col2 col3
0 1 yes no
1 1 no no
2 1 yes no
3 2 no no
4 3 yes yes
5 3 yes yes
6 4 yes no
7 4 no no
</code></pre>
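<p>One sketch using <code>groupby</code> + <code>transform</code>, which broadcasts a per-group scalar back onto every row of that group:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 1, 1, 2, 3, 3, 4, 4],
                   "col2": ["yes", "no", "yes", "no", "yes", "yes", "yes", "no"]})

# col3 is "yes" only when every col2 value within the col1 group is "yes".
df["col3"] = df.groupby("col1")["col2"].transform(
    lambda s: "yes" if (s == "yes").all() else "no")
print(df)
```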
|
<python><pandas><if-statement><conditional-statements>
|
2023-01-18 17:42:46
| 2
| 341
|
Marco Liedecke
|
75,163,188
| 6,694,814
|
Python - jinja2 template picks up only the first record from the data
|
<p>I would like an on-click feature where a circle comes up when clicking on the marker.
So far I've developed the class below, which includes the relevant elements:</p>
<pre><code>df = pd.read_csv("survey.csv")
class Circle(MacroElement):
def __init__(self):
for i,row in df.iterrows():
rad = int(df.at[i, 'radius'])
def __init__(self,
popup=None
):
super(Circle, self).__init__()
self._name = 'Circle',
self.radius = rad * 1560
self._template = Template(u"""
{% macro script(this, kwargs) %}
var circle_job = L.circle();
function newCircle(e){
circle_job.setLatLng(e.latlng).addTo({{this._parent.get_name()}});
circle_job.setRadius({{this.radius}});
circle_job.setStyle({
color: 'black',
fillcolor: 'black'
});
};
{{this._parent.get_name()}}.on('click', newCircle);
{% endmacro %}
""") # noqa
for i,row in df.iterrows():
lat =df.at[i, 'lat']
lng = df.at[i, 'lng']
sp = df.at[i, 'sp']
phone = df.at[i, 'phone']
role = df.at[i, 'role']
rad = int(df.at[i, 'radius'])
popup = '<b>Phone: </b>' + str(df.at[i,'phone'])
job_range = Circle()
if role == 'Contractor':
fs.add_child(
folium.Marker(location=[lat,lng],
tooltip=folium.map.Tooltip(
text='<strong>Contact surveyor</strong>',
style=("background-color: lightgreen;")),
popup=popup,
icon = folium.Icon(color='darkred', icon='glyphicon-user'
)
)
)
fs.add_child (
folium.Marker(location=[lat,lng],
popup=popup,
icon = folium.DivIcon(html="<b>" + sp + "</b>",
class_name="mapText_contractor",
icon_anchor=(30,5))
#click_action = js_f
)
)
fs.add_child(job_range)
</code></pre>
<p>which works but unfortunately takes into account only the very first record.</p>
<p><a href="https://i.sstatic.net/mxnMc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mxnMc.png" alt="enter image description here" /></a></p>
<p>How could I make these pop-up circles use the radius of the corresponding input row (as given in the CSV file)?</p>
|
<python><pandas><jinja2><folium>
|
2023-01-18 17:39:18
| 1
| 1,556
|
Geographos
|
75,163,082
| 9,070,040
|
Fix pyright warning "Import [module] could not be resolved"?
|
<p>I have the following <code>Projects</code> folder structure:</p>
<p><a href="https://i.sstatic.net/bJX0v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bJX0v.png" alt="enter image description here" /></a></p>
<p>and the file <code>Tasks/Scripts/test.py</code> (shown below) imports <code>util.py</code> from <code>Libs/PyLibs</code>:</p>
<p><a href="https://i.sstatic.net/GtGz6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GtGz6.png" alt="enter image description here" /></a></p>
<p>The file <code>test.py</code> executes fine without any issue, but I cannot get rid of the import warning (I am using latest neovim with lsp/mason/null-ls plugins). Similar issues were reported <a href="https://stackoverflow.com/questions/60820640/how-to-set-a-root-directory-for-pyright">here</a> and <a href="https://stackoverflow.com/questions/59108805/fixing-import-module-could-not-be-resolved-in-pyright">here</a>, but none of the methods suggested there, for example putting a <code>pyrightconfig.json</code> or <code>pyproject.toml</code> file in the project root with appropriate <a href="https://github.com/Microsoft/pyright/blob/main/docs/configuration.md" rel="nofollow noreferrer">config settings</a>, is working for me, maybe because I have a somewhat more complex folder structure.</p>
<p>Any help is greatly appreciated. Thanks!</p>
|
<python><neovim><pyright><nvim-lspconfig>
|
2023-01-18 17:29:50
| 1
| 671
|
Manojit
|
75,163,008
| 12,114,641
|
Efficient way to open and close file with while loop in python
|
<p>I'm writing a crawler that visits pages on a website and collects links, which I write to a file. I can think of the two options below. I'm using the first method right now, which I know is inefficient because the file is opened and closed in every loop iteration, but it is safe in the sense that the data is written to the file, and if the code crashes for some reason I'll still have it.</p>
<p>I'm not sure about the second method. What if it crashes and the file can't be closed properly — will I still have data written to the file?</p>
<p>Is there any other more efficient way to achieve this?</p>
<p>I'm only writing the pseudo code.</p>
<p><strong>Method 1: collect all urls on a page and write it in the file, close the file and repeat</strong></p>
<pre><code>def crawl(max_pages):
# do stuff
while(page <= max_pages):
#do stuff
with open(FILE_NAME, 'a') as f:
f.write(profile_url + '\n')
f.close()
</code></pre>
<p><strong>Method 2: Keep file opened, collect urls from all pages and close it in the very end</strong></p>
<pre><code>crawl(300)
def crawl(max_pages):
# do stuff
with open(FILE_NAME, 'a') as f:
while(page <= max_pages):
#do stuff
f.write(profile_url + '\n')
f.close()
crawl(300)
</code></pre>
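<p>Method 2 is generally safe: a <code>with</code> block closes the file even when an exception propagates out of it, and buffered data is flushed on close. If the worry is a hard crash (power loss, <code>kill -9</code>), an explicit <code>flush()</code> per page is a middle ground: one open/close, but data handed to the OS after every write. A sketch (the URL values are placeholders):</p>

```python
def crawl(max_pages, path):
    with open(path, 'a') as f:  # opened once, closed automatically
        for page in range(1, max_pages + 1):
            profile_url = f'https://example.com/profile/{page}'  # placeholder
            f.write(profile_url + '\n')
            f.flush()  # push buffered data to the OS after each page
```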
|
<python>
|
2023-01-18 17:23:22
| 2
| 1,258
|
Raymond
|
75,162,837
| 974,555
|
How can I use filter predicates for a geometry column?
|
<p>When reading a DataFrame from a parquet file using <code>pandas.read_parquet</code>, I can use the <a href="https://arrow.apache.org/docs/python/generated/pyarrow.parquet.ParquetDataset.html" rel="nofollow noreferrer">pyarrow filters</a> API to read only a subset of rows. This equally works for geopandas. For example:</p>
<pre class="lang-py prettyprint-override"><code>gdf = geopandas.read_parquet(files, filters=[("station", "in", ["sjisjka", "kaitum"]))
</code></pre>
<p>How can I use this for a geometry column? I could do it explicitly:</p>
<pre class="lang-py prettyprint-override"><code>gdf = geopandas.read_parquet(files)
pol = shapely.geometry.Polygon(((65, 15), (65, 25), (70, 25), (70, 15), (65, 15)))
selection = gdf[gdf["location"].within(pol)]
</code></pre>
<p>but this is relatively slow (even if using dask/dask-geopandas). Is there a way to apply such a location search directly using an arrow filter?</p>
<pre class="lang-py prettyprint-override"><code>gdf = dask_geopandas.read_parquet(files, filters=[("location", "within", pol)])
</code></pre>
<p>doesn't work (unsurprisingly), failing with:</p>
<pre><code>ValueError: "('location', 'within', <shapely.geometry.polygon.Polygon object at 0x7fd5ffa7ce80>)" is not a valid operator in predicates.
</code></pre>
|
<python><geopandas><pyarrow>
|
2023-01-18 17:09:54
| 0
| 26,981
|
gerrit
|
75,162,832
| 913,098
|
How to look at a previous pytest run's results in PyCharm?
|
<p>I am running a long Pytest folder, with hundreds of tests.</p>
<p>Some of them fail.</p>
<p>When I fix one, I want to run it and still have the results of the previous long run available. What currently happens is that PyCharm only shows the last run's results.</p>
<p>How can I show the results of a previous pytest run in PyCharm?</p>
|
<python><debugging><pycharm><pytest>
|
2023-01-18 17:09:21
| 1
| 28,697
|
Gulzar
|
75,162,788
| 966,179
|
Is it possible to use `__getattribute__`-generated methods in a subclass?
|
<p>I have a class whose methods may or may not be auto-generated. I want to be able to call these methods from a subclass, but can't figure out how to do that.</p>
<p>Consider this code:</p>
<pre><code>class Upgrader:
max = 99
def _upgrade_no_op(self, db):
if self.version < Upgrader.max:
self.version += 1
def __getattribute__(self, key):
try:
return super().__getattribute__(key)
except AttributeError:
if key.startswith("upgrade_v"):
nr = int(key[9:])
if 0 <= nr < self.max:
return self._upgrade_no_op
raise
class Foo(Upgrader):
version = 1
def upgrade_v2(self):
# None of these work:
# Upgrader.upgrade_v2(self)
# super().upgrade_v2()
# super(Foo, self).upgrade_v2()
# x = Upgrader.__getattribute__(self, "upgrade_v2")  # bound to Foo.upgrade_v2
breakpoint()
Foo().upgrade_v2()
</code></pre>
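<p>One way out is to make the fallback reachable by name: use <code>__getattr__</code> (only invoked when normal lookup fails, so real methods like <code>Foo.upgrade_v2</code> take precedence), and have the subclass call the shared fallback directly instead of going through <code>super()</code>. A simplified sketch (version bookkeeping reduced to an instance attribute, the <code>db</code> parameter dropped):</p>

```python
class Upgrader:
    max = 99

    def __init__(self):
        self.version = 1

    def _upgrade_no_op(self):
        if self.version < Upgrader.max:
            self.version += 1

    def __getattr__(self, key):
        # Only called when normal attribute lookup fails.
        if key.startswith("upgrade_v"):
            nr = int(key[9:])
            if 0 <= nr < self.max:
                return self._upgrade_no_op
        raise AttributeError(key)


class Foo(Upgrader):
    def upgrade_v2(self):
        self._upgrade_no_op()  # reuse the generated behaviour explicitly
        # ...v2-specific work goes here...
```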
<p>In my real code there are one or two other classes between <code>Foo</code> and <code>Upgrader</code>, but you get the idea.</p>
<p>Is there a way to do this, preferably without writing my own version of <code>super()</code>?</p>
|
<python><inheritance>
|
2023-01-18 17:06:07
| 1
| 2,622
|
Matthias Urlichs
|
75,162,495
| 1,298,416
|
Comparing numbered urls in django template
|
<p>In urls.py I have:</p>
<pre><code>path("viewer/<str:case>", views.viewer, name="viewer"),
</code></pre>
<p>This works when I go to the viewer:</p>
<pre><code><a class="nav-link dropdown-toggle {% if request.resolver_match.url_name == "viewer" %}active{% endif %}">
</code></pre>
<p>Now, there is a submenu in the nav bar that lists cases.
I need to know which specific page I'm on to make one of the menu items active:</p>
<pre><code>{% for item in cases %}
<li>
<a class="dropdown-item {% if request.get_full_path == "/viewer/{{ item.id }}" %}active{% endif %}" href="/viewer/{{ item.id }}">{{ item.patient_name }}</a>
</li>
</code></pre>
<p>request.get_full_path returns /viewer/47, for example, and one of the items' ids is 47. I've tried different combinations instead of "/viewer/{{ item.id }}"; nothing works.</p>
|
<python><django><django-templates>
|
2023-01-18 16:41:42
| 1
| 341
|
user1298416
|
75,162,381
| 1,274,613
|
Python pattern for read-only attributes in a typed context
|
<p>Our code is quite strict in separating private, protected, and public attributes of our Python objects, following the convention that private attributes start with <code>__</code> (and are thus mangled to include the class name), protected attributes start with <code>_</code> and public attributes don't start with <code>_</code>.</p>
<p>However, a frequent pattern we have is wanting to expose an attribute as privately writable, but publicly readable, and subject to static type annotations. This is further complicated by the fact that we extensively use the <code>overrides</code> package to typecheck the methods of subclasses.</p>
<p>Our code is thus littered with</p>
<pre><code>class C:
self.__attribute: T
def __init__(self, attribute: T):
self.__attribute = attribute
@property
def attribute(self) -> T:
return self.__attribute
</code></pre>
<p>in place of what could be simple data classes.</p>
<p>Is there a good pattern to minimize the boilerplate? What about</p>
<pre><code>class B(metaclass=ABCMeta):
@property
@abstractmethod
def weird_attribute(self):
raise NotImplementedError
class D(B):
self.__weird_attribute: T
def __init__(self, wattribute: T):
self.__weird_attribute = wattribute
@property # type: ignore
@overrides
def weird_attribute(self) -> T:
return self.__weird_attribute
class E(B):
@property # type: ignore
@overrides
def weird_attribute(self) -> T:
return 1
</code></pre>
<p>This style really bugs me, because we try to use static type checking to have a good grasp of our code – and then this bad pattern requires a <code># type: ignore</code> because properties cannot be decorated and overrides cannot be properties. And it's not even concise.</p>
<p>Is there a way out?</p>
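<p>One escape hatch, assuming the attributes only need to be set once in <code>__init__</code>: a frozen dataclass gives publicly readable, statically typed attributes with no property boilerplate (writes from inside the class can still go through <code>object.__setattr__</code> if ever needed):</p>

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class C:
    attribute: int  # publicly readable; assignment raises at runtime

c = C(attribute=5)
print(c.attribute)  # 5
try:
    c.attribute = 6  # also flagged by mypy
except FrozenInstanceError:
    print("read-only")
```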
|
<python><mypy><private-members>
|
2023-01-18 16:32:25
| 1
| 6,472
|
Anaphory
|
75,162,377
| 17,473,587
|
Apply function to specific element's value of a list of dictionaries
|
<pre><code>tbl_headers = db_admin.execute("SELECT name, type FROM PRAGMA_TABLE_INFO(?);", table_name)
</code></pre>
<p><code>tbl_headers</code> is same below:</p>
<pre><code>[{'name': 'id', 'type': 'INTEGER'}, {'name': 'abasdfasd', 'type': 'TEXT'}, {'name': 'sx', 'type': 'TEXT'}, {'name': 'password', 'type': 'NULL'}, {'name': 'asdf', 'type': 'TEXT'}]
</code></pre>
<p>I need to apply the <code>hash_in()</code> function to the <code>'name'</code> values in the dictionary elements of the above list.</p>
<h2>Have tried these:</h2>
<pre><code>tbl_headers = [hash_in(i['name']) for i in tbl_headers]
</code></pre>
<p>This drops the dictionaries and returns only a list of 'name' values:</p>
<pre><code>['sxtw001c001h', 'sxtw001c001r001Z001e001c001r001Z001a001Z', 'sxtw001w001r', 'sxtw001c001q001n001v001r001r001Z001o', 'sxtw001e001c001r001Z']
</code></pre>
<p>OR</p>
<pre><code>tbl_headers = map(hash_in, tbl_headers)
</code></pre>
<p>Returns error.</p>
<h2>Update</h2>
<p>The output I am looking for is:</p>
<pre><code>[{'name': hash_in('id'), 'type': 'INTEGER'}, {'name': hash_in('abasdfasd'), 'type': 'TEXT'}, {'name': hash_in('sx'), 'type': 'TEXT'}, {'name': ('password'), 'type': 'NULL'}, {'name': ('asdf'), 'type': 'TEXT'}]
</code></pre>
<p>Appreciate you.</p>
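<p>A sketch that keeps the dictionaries intact and transforms only the <code>'name'</code> values (the <code>hash_in</code> below is a stand-in; substitute the real function):</p>

```python
# Stand-in for the real hash_in function.
def hash_in(s):
    return 'hashed_' + s

tbl_headers = [{'name': 'id', 'type': 'INTEGER'},
               {'name': 'sx', 'type': 'TEXT'}]

# Rebuild each dict, replacing only the 'name' value.
tbl_headers = [{**d, 'name': hash_in(d['name'])} for d in tbl_headers]
print(tbl_headers)
```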
|
<python><python-3.x><dictionary>
|
2023-01-18 16:32:10
| 1
| 360
|
parmer_110
|
75,162,293
| 14,724,837
|
How to read the rest in Pretty MIDI?
|
<p>I am using <a href="https://craffel.github.io/pretty-midi/" rel="nofollow noreferrer">Pretty MIDI</a> in Python to read MIDI pitch and length. However, I found that it skips rests and jumps directly to the next note. For example, this is the score.
<a href="https://i.sstatic.net/kYSE0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kYSE0.png" alt="enter image description here" /></a></p>
<p>After reading the "G note" in the third measure, it jumps to the "C note" in the fifth measure with no intervening time. In other words, the rest is ignored automatically.
Here is the result that pretty_midi outputs.</p>
<p><a href="https://i.sstatic.net/mmloI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mmloI.png" alt="enter image description here" /></a></p>
<p>Does anyone know how to solve this problem?
Thanks a lot.</p>
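<p>pretty_midi notes carry absolute <code>start</code>/<code>end</code> times, so the rests are recoverable as the gaps between consecutive notes. A library-free sketch of that idea over (start, end) pairs (with pretty_midi you would feed it <code>[(n.start, n.end) for n in instrument.notes]</code>):</p>

```python
def find_rests(notes):
    """notes: list of (start, end) times in seconds, sorted by start."""
    rests = []
    for (_, end1), (start2, _) in zip(notes, notes[1:]):
        if start2 > end1:  # a gap between consecutive notes is a rest
            rests.append((end1, start2))
    return rests

# Two adjacent notes, then a one-second rest before the third note.
print(find_rests([(0.0, 1.0), (1.0, 2.0), (3.0, 4.0)]))  # [(2.0, 3.0)]
```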
|
<python><midi>
|
2023-01-18 16:25:51
| 0
| 689
|
Megan
|
75,162,222
| 17,561,414
|
How to change the schema of the spark dataframe
|
<p>I am reading a JSON file with <code>spark.read.json</code> and it automatically gives me the dataframe with schema but is it possible to change the schema of exisiting Dataframe with the below schema?</p>
<pre><code>schema = StructType([StructField("_links", MapType(StringType(), MapType(StringType(), StringType()))),
StructField("identifier", StringType()),
StructField("enabled", BooleanType()),
StructField("family", StringType()),
StructField("categories", ArrayType(StringType())),
StructField("groups", ArrayType(StringType())),
StructField("parent", StringType()),
StructField("values", MapType(StringType(), ArrayType(MapType(StringType(), StringType())))),
StructField("created", StringType()),
StructField("updated", StringType()),
StructField("associations", MapType(StringType(), MapType(StringType(), ArrayType(StringType())))),
StructField("quantified_associations", MapType(StringType(), IntegerType())),
StructField("metadata", MapType(StringType(), StringType()))])
</code></pre>
|
<python><apache-spark><pyspark>
|
2023-01-18 16:19:42
| 1
| 735
|
Greencolor
|
75,161,984
| 1,977,614
|
Passing range of numbers from terminal to Python script
|
<p>I have a python script which is executed from terminal as</p>
<p><code>script.py 0001</code></p>
<p>where <code>0001</code> indicates the subcase to be run. If I have to run different subcases, then I use</p>
<p><code>script.py 0001 0002</code></p>
<p>The question is how to specify a range as input. Let's say I want to run <code>0001..0008</code>. I found that <code>seq -w 0001 0008</code> outputs what I desire. How do I pipe this to Python as input from the terminal? Or is there a different way to get this done?</p>
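<p>On the shell side, command substitution already does this: <code>script.py $(seq -w 0001 0008)</code> expands into eight separate arguments before Python even starts. Alternatively, the script itself can accept a range token; a sketch (the <code>0001-0008</code> syntax is an assumption):</p>

```python
def expand_args(args):
    """Expand tokens like '0001-0003' into zero-padded subcase numbers."""
    out = []
    for arg in args:
        if '-' in arg:
            lo, hi = arg.split('-')
            out.extend(str(i).zfill(len(lo)) for i in range(int(lo), int(hi) + 1))
        else:
            out.append(arg)
    return out

# In the real script: subcases = expand_args(sys.argv[1:])
print(expand_args(['0001-0003', '0007']))  # ['0001', '0002', '0003', '0007']
```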
|
<python><bash><terminal><sequence>
|
2023-01-18 16:02:39
| 2
| 5,858
|
SKPS
|
75,161,980
| 143,931
|
Change pandas default string format for Timestamps
|
<p><strong>TL;DR:</strong></p>
<p>Is there a way to change the <em>default</em> string representation of <a href="https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.html" rel="nofollow noreferrer">pandas' <code>Timestamp</code>s</a>?</p>
<p><strong>Long version:</strong></p>
<p>For example, say I was not interested in the seconds of the timestamps and I have a large number of <code>print</code> statements. By default, pandas will print the timestamps including seconds:</p>
<pre><code>import pandas as pd
ts = pd.Timestamp('2017-01-01T12')
print(f"Timestamp: {ts}")
# Timestamp: 2017-01-01 12:00:00
</code></pre>
<p>I can change this by passing a format string:</p>
<pre><code>print(f"Timestamp: {ts.strftime('%Y-%m-%d %H:%M')}")
# Timestamp: 2017-01-01 12:00
</code></pre>
<p>However, I would need to add the <code>strftime</code> call and format string explicitly for every timestamp in every <code>print</code>. Is there a way to change the default format that is used to represent timestamps (other than wrapping them in a function to do the string conversion myself)?</p>
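<p>There is no pandas option for a scalar <code>Timestamp</code>'s default <code>str()</code>, but since <code>Timestamp</code> derives from <code>datetime</code> it supports the format-spec mini-language, so the <code>strftime</code> call can at least be folded into each f-string:</p>

```python
import pandas as pd

ts = pd.Timestamp('2017-01-01T12')

# Timestamp implements __format__, so a format spec works inside the f-string:
print(f"Timestamp: {ts:%Y-%m-%d %H:%M}")  # Timestamp: 2017-01-01 12:00
```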
|
<python><pandas>
|
2023-01-18 16:02:24
| 0
| 8,472
|
fuenfundachtzig
|
75,161,850
| 10,574,250
|
Writing a parquet file from python that is compatible for SQL/Impala
|
<p>I am trying to write a pandas Dataframe to a parquet file that is compatible with a table in Impala but am struggling to find a solution.</p>
<p>My df has 3 columns</p>
<pre><code>code int64
number float
name object
</code></pre>
<p>When I write this to a parquet file and load it into Impala, the Python schema is preserved and the load fails. I would like the parquet file to be saved with the following schema:</p>
<pre><code>code int
number decimal(36,18)
name string
</code></pre>
<p>I tried this:</p>
<pre><code>env_schema = """
code int
number decimal(36,18)
name string
"""
df.to_parquet(f'path', index=False, schema=env_schema)
</code></pre>
<p>but get the following error:</p>
<pre><code>Argument 'schema' has incorrect type (expected pyarrow.lib.Schema, got str)
</code></pre>
<p>Does anyone know how I could achieve this? Thanks</p>
|
<python><apache-spark><impala><pyarrow>
|
2023-01-18 15:51:21
| 1
| 1,555
|
geds133
|
75,161,667
| 1,518,100
|
why implement abstractmethod as staticmethod
|
<p>I'm learning python design patterns from github repo <a href="https://github.com/faif/python-patterns" rel="nofollow noreferrer">faif/python-patterns</a> and found the example <a href="https://github.com/faif/python-patterns/blob/master/patterns/behavioral/chain_of_responsibility.py#L52" rel="nofollow noreferrer">chain_of_responsibility</a> implements <em>abstractmethod</em> <code>check_range</code> as <em>staticmethod</em>.</p>
<p>My question is, is there any benefit other than not having to type <code>self</code>?</p>
<p>Simplify code is</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class A(ABC):
@abstractmethod
def foo(self, x):
pass
class B(A):
@staticmethod
def foo(x):
print("B.foo", x)
# both the two works
B.foo(1)
b = B()
b.foo(2)
</code></pre>
|
<python><inheritance><abstract>
|
2023-01-18 15:36:32
| 1
| 4,435
|
Lei Yang
|
75,161,551
| 9,657,938
|
Failed to load server-wide module `web`
|
<p>I'm facing this error when I try to run Odoo 15 on a Debian 11 server behind nginx.
Here is the Odoo log file:</p>
<pre><code>2023-01-18 16:35:48,178 48668 INFO ? odoo: database: odoo15@default:default
2023-01-18 16:35:48,181 48668 CRITICAL ? odoo.modules.module: Couldn't load module web
2023-01-18 16:35:48,181 48668 CRITICAL ? odoo.modules.module: cannot import name 'replace_exceptions' from 'odoo.tools' (/opt/odoo15/odoo/odoo/tools/__init__.py)
2023-01-18 16:35:48,181 48668 ERROR ? odoo.service.server: Failed to load server-wide module `web`.
The `web` module is provided by the addons found in the `openerp-web` project.
Maybe you forgot to add those addons in your addons_path configuration.
Traceback (most recent call last):
File "/opt/odoo15/odoo/odoo/service/server.py", line 1210, in load_server_wide_modules
odoo.modules.module.load_openerp_module(m)
File "/opt/odoo15/odoo/odoo/modules/module.py", line 396, in load_openerp_module
__import__('odoo.addons.' + module_name)
File "/usr/lib/python3/dist-packages/odoo/addons/web/__init__.py", line 4, in <module>
from . import controllers
File "/usr/lib/python3/dist-packages/odoo/addons/web/controllers/__init__.py", line 4, in <module>
from . import binary
File "/usr/lib/python3/dist-packages/odoo/addons/web/controllers/binary.py", line 22, in <module>
from odoo.tools import file_open, file_path, replace_exceptions
ImportError: cannot import name 'replace_exceptions' from 'odoo.tools' (/opt/odoo15/odoo/odoo/tools/__init__.py)
2023-01-18 16:35:48,291 48668 INFO ? odoo.addons.base.models.ir_actions_report: You need Wkhtmltopdf to print a pdf version of the reports.
2023-01-18 16:35:48,435 48668 INFO ? odoo.service.server: HTTP service (werkzeug) running on localhost:8069
</code></pre>
<blockquote>
<p>and here is my odoo15.service file which located in /etc/systemd/system/</p>
</blockquote>
<pre><code>[Unit]
Description=Odoo
Documentation=http://www.odoo.com
[ Service]
Type=simple
User=odoo15
ExecStart= /usr/bin/python3 /opt/odoo15/odoo/odoo-bin -c /etc/odoo.conf
[Install]
WantedBy=default.target
</code></pre>
<blockquote>
<p>and here is my odoo.conf which is created in /etc/</p>
</blockquote>
<pre><code>[options]
; This is the password that allows database operations:
admin_passwd = adminpassword
db_host = False
db_port = False
db_user = odoo15
db_password = False
xmlrpc_interface = 127.0.0.1
proxy_mode = True
addons_path = /opt/odoo15/odoo/addons
logfile = /var/log/odoo/odoo.log
</code></pre>
|
<python><odoo><odoo-15><odoo-enterprise>
|
2023-01-18 15:27:09
| 1
| 369
|
Sideeg MoHammed
|
75,161,513
| 2,396,640
|
Can't pause python process using debug
|
<p>I have a python script which starts multiple sub processes using these lines :</p>
<pre><code>for elm in elements:
t = multiprocessing.Process(target=sub_process,args=[elm])
threads.append(t)
t.start()
for t in threads:
t.join()
</code></pre>
<p>Sometimes, for some reason the thread halts and the script never finishes.
I'm trying to use VSCode debugger to find the problem and check where in the thread itself it stuck but I'm having issues pausing these sub processes because when I click the pause in the debugger window:
<a href="https://i.sstatic.net/CPhYZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CPhYZ.png" alt="enter image description here" /></a></p>
<p>It will pause the main thread and some other threads that are running properly but it won't pause the stuck sub process.
Even when I try to pause the threads manually one by one using the Call Stack window, I can still pause only the working threads and not the stuck one.
<a href="https://i.sstatic.net/pJZhN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pJZhN.png" alt="enter image description here" /></a></p>
<p>Please help me figure this out. It's hard to debug because whatever makes the process get stuck doesn't happen every time.</p>
|
<python><vscode-debugger>
|
2023-01-18 15:23:59
| 2
| 369
|
user2396640
|
75,161,498
| 4,391,249
|
Is there a more succinct way to type hint `Union[NDArray[np.float64], Sequence[float]`?
|
<p>Suppose there is a function that takes numpy arrays as inputs, but it's okay if a user passes in lists (which is often the case). What's a right way to hint that:</p>
<ol>
<li>The function does numpy-based arithmetic.</li>
<li>But it's okay to provide lists (or sequences).</li>
<li>The underlying data type is <code>T</code>. So it could be that <code>T</code> is <code>np.float64</code>.</li>
</ol>
<p>Is <code>Union[NDArray[np.float64], Sequence[float]]</code> the most succinct way short of making a type alias to that?</p>
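One option worth knowing here: numpy ships <code>numpy.typing.ArrayLike</code>, an alias for "anything coercible to an array" — it is more succinct, though broader than the exact union (it also admits scalars and nested sequences). A type alias keeps the exact union reusable. A sketch of both (the <code>mean_*</code> function names are illustrative):

```python
from typing import Sequence, Union
import numpy as np
import numpy.typing as npt

# Option 1: numpy's own "anything coercible to an array" alias
def mean_a(x: npt.ArrayLike) -> float:
    return float(np.mean(np.asarray(x, dtype=np.float64)))

# Option 2: a reusable alias for the exact union from the question
FloatArray = Union[npt.NDArray[np.float64], Sequence[float]]

def mean_b(x: FloatArray) -> float:
    return float(np.mean(np.asarray(x, dtype=np.float64)))

print(mean_a([1.0, 2.0, 3.0]))  # plain lists work too
```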
|
<python><numpy><typing>
|
2023-01-18 15:23:21
| 0
| 3,347
|
Alexander Soare
|
75,161,027
| 1,584,906
|
Accessing UVC camera controls from PyGST
|
<p>I'm using PyGST to display the feed from a UVC webcam inside a PyQt application. I can access some camera controls, such as brightness and contrast, directly using the corresponding properties of the <code>v4l2src</code> elements. However, I'd like to access additional controls, namely focus, available through <code>v4l2-ctl</code>. My understanding is that such controls should be accessible through the <code>extra-controls</code> property (<code>extra_controls</code> in Python) of the <code>v4l2src</code> element. However, the property is empty at runtime after the pipeline is started. What am I missing?</p>
<p>EDIT: minimal sample</p>
<pre><code>import sys
from PyQt5 import QtWidgets as qtw
from PyQt5 import QtCore as qtc
from PyQt5 import QtGui as qtg
from PyQt5 import uic
import subprocess
import re
#GStreamer libraries
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstVideo', '1.0')
from gi.repository import Gst, GObject, GstVideo
Gst.init(None)
class WebcamTest(qtw.QMainWindow):
def __init__(self):
super().__init__()
self.resize(640,480)
self.gst_video = VideoWidget()
self.setCentralWidget(self.gst_video)
self.gst_video.prepare()
class VideoWidget(qtw.QWidget):
def __init__(self):
super(VideoWidget, self).__init__()
self.setAttribute(qtc.Qt.WA_NativeWindow)
self.windowId = self.winId()
qtw.qApp.sync()
self.pipeline = None
def prepare(self):
pipeline = "v4l2src device=/dev/video0 name=source ! image/jpeg, width=640, height=480, framerate=30/1, format=MJPG ! jpegdec ! videoconvert ! xvimagesink"
self.pipeline = Gst.parse_launch(pipeline)
self.source = self.pipeline.get_child_by_name('source')
bus = self.pipeline.get_bus()
bus.add_signal_watch()
bus.enable_sync_message_emission()
bus.connect('sync-message::element', self.on_sync_message)
self.pipeline.set_state(Gst.State.PLAYING)
print(f'source properties: {self.source.list_properties()}')
print(f'brightness={self.source.props.brightness}')
print(f'extra-controls={self.source.props.extra_controls}')
def on_sync_message(self, bus, msg):
message_name = msg.get_structure().get_name()
# qtc.qDebug(message_name)
if message_name == 'prepare-window-handle':
win_id = self.windowId
assert win_id
imagesink = msg.src
imagesink.set_window_handle(win_id)
def dispose(self):
if (self.pipeline):
self.pipeline.set_state(Gst.State.NULL)
self.pipeline = None
def isPlaying(self):
return self.pipeline.current_state == Gst.State.PLAYING
if __name__ == '__main__':
app = qtw.QApplication(sys.argv)
main_ui = WebcamTest()
main_ui.move(200,100)
main_ui.show()
app.exec_()
</code></pre>
<p>List of available controls from the camera:</p>
<pre><code>$ v4l2-ctl -l
brightness 0x00980900 (int) : min=-64 max=64 step=1 default=0 value=0
contrast 0x00980901 (int) : min=0 max=64 step=1 default=32 value=32
saturation 0x00980902 (int) : min=0 max=128 step=1 default=64 value=64
hue 0x00980903 (int) : min=-40 max=40 step=1 default=0 value=0
white_balance_temperature_auto 0x0098090c (bool) : default=1 value=1
white_balance_red_component 0x0098090e (int) : min=1 max=500 step=1 default=100 value=100 flags=inactive
white_balance_blue_component 0x0098090f (int) : min=1 max=500 step=1 default=100 value=100 flags=inactive
gamma 0x00980910 (int) : min=72 max=500 step=1 default=100 value=100
gain 0x00980913 (int) : min=0 max=100 step=1 default=0 value=0
power_line_frequency 0x00980918 (menu) : min=0 max=2 default=1 value=1
hue_auto 0x00980919 (bool) : default=0 value=0
white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
sharpness 0x0098091b (int) : min=0 max=6 step=1 default=3 value=3
backlight_compensation 0x0098091c (int) : min=0 max=2 step=1 default=1 value=1
exposure_auto 0x009a0901 (menu) : min=0 max=3 default=3 value=3
exposure_absolute 0x009a0902 (int) : min=1 max=5000 step=1 default=157 value=157 flags=inactive
exposure_auto_priority 0x009a0903 (bool) : default=0 value=1
focus_absolute 0x009a090a (int) : min=1 max=1023 step=1 default=1 value=688
focus_auto 0x009a090c (bool) : default=0 value=0
zoom_continuous 0x009a090f (int) : min=0 max=0 step=0 default=0 value=0 flags=write-only
privacy 0x009a0910 (bool) : default=0 value=0
iris_absolute 0x009a0911 (int) : min=0 max=0 step=0 default=0 value=0
iris_relative 0x009a0912 (int) : min=0 max=0 step=0 default=0 value=0 flags=write-only
pan_speed 0x009a0920 (int) : min=0 max=0 step=0 default=0 value=0
tilt_speed 0x009a0921 (int) : min=0 max=0 step=0 default=0 value=0
</code></pre>
<p>EDIT: Found a way to change a parameter, for instance this snippet will activate the auto focus:</p>
<pre><code> extra_controls = Gst.Structure.new_from_string('i,focus_auto=1')
self.source.set_property('extra_controls', extra_controls)
</code></pre>
<p>However, I still don't understand how to query current values of other controls (similarly to what I can get with <code>v4l2-ctl -l</code>).</p>
|
<python><gstreamer><webcam><pygst>
|
2023-01-18 14:48:03
| 0
| 1,465
|
Wolfy
|
75,160,906
| 46,634
|
Python code for generating valid BIP-39 mnemonic words for a bitcoin wallet not working
|
<p>I am trying to generate valid <a href="https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki" rel="nofollow noreferrer">BIP-39</a> mnemonic <a href="https://github.com/bitcoin/bips/blob/master/bip-0039/english.txt" rel="nofollow noreferrer">words</a> for a bitcoin wallet in Python, but I am encountering an issue with the generated words being rejected by verification <a href="https://iancoleman.io/bip39/#english" rel="nofollow noreferrer">tools</a>. I have followed the guidelines outlined in the BIP-39 standard, but the 24th word, which serves as a checksum of the others, is causing the mnemonic to be deemed incorrect. I have searched for solutions and checked other people's code, but I have yet to find a solution. Can someone please help me understand what I am doing wrong and how to fix it?</p>
<p>At the end of the message I shall write a few examples of 24 words that are not acceptable, but are the results of the program</p>
<pre><code>from hashlib import sha256
import secrets
#following the instructions here: https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki
#depending on the number of words, we take the value for ENT, and CS.a
word_number=24
size_ENT=256
size_CS=int(size_ENT/32)
with open("Bip39-wordlist.txt", "r") as wordlist_file:
words = [word.strip() for word in wordlist_file.readlines()]
#First, an initial entropy of ENT bits is generated.
n_bytes=int(size_ENT/8)
random_bytes = secrets.token_bytes(n_bytes)
random_bits = ''.join(['{:08b}'.format(b) for b in random_bytes])
INITIAL_ENTROPY = random_bits[:size_ENT]
assert(len(INITIAL_ENTROPY)==size_ENT)
encoded=INITIAL_ENTROPY.encode('utf-8')
hash=sha256(encoded).digest()
bhash=''.join(format(byte, '08b') for byte in hash)
assert(len(bhash)==256)
#the first ENT / 32 bits of its SHA256 hash
CS=bhash[:size_CS]
#This checksum is appended to the end of the initial entropy.
FINAL_ENTROPY=INITIAL_ENTROPY+CS
assert(len(FINAL_ENTROPY)==size_ENT+size_CS)
#Next, these concatenated bits are split into groups of 11 bits,
# each encoding a number from 0-2047, serving as an index into a wordlist.
for t in range(word_number):
#split into groups of 11 bits,
extracted_bits=FINAL_ENTROPY[11*t:11*(t+1)]
# each encoding a number from 0-2047,
word_index=int(extracted_bits,2)
#serving as an index into a wordlist.
if t==0: words_extracted= words[word_index]
else: words_extracted+=' '+words[word_index]
print (words_extracted)
</code></pre>
<p>Output incorrect examples:</p>
<p>kitten oak breeze dismiss breeze reduce stem symbol trend input thunder old burden brisk level hard luggage alarm upper creek deputy desert diesel primary</p>
<p>wave flee narrow notable budget hamster layer potato menu security wall shove save mobile badge nephew blouse major cute park margin entry drink mask</p>
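A likely culprit, offered as a sketch rather than a certainty: BIP-39 computes the checksum over the raw entropy <em>bytes</em>, while the code above hashes the UTF-8 text of the '0'/'1' string (<code>INITIAL_ENTROPY.encode('utf-8')</code>). Hashing the bytes directly would look like this (word lookup omitted; the indices are what the wordlist is indexed with):

```python
from hashlib import sha256
import secrets

word_count = 24
ent_bits = 256
cs_bits = ent_bits // 32  # 8 checksum bits for 256-bit entropy

# Generate entropy and keep the *raw bytes* around
entropy_bytes = secrets.token_bytes(ent_bits // 8)
entropy_bits = ''.join(f'{b:08b}' for b in entropy_bytes)

# BIP-39 hashes the raw entropy bytes, not the '0'/'1' text of the bits
checksum_bits = ''.join(f'{b:08b}' for b in sha256(entropy_bytes).digest())[:cs_bits]

final_bits = entropy_bits + checksum_bits
assert len(final_bits) == ent_bits + cs_bits

# 24 groups of 11 bits, each an index into the 2048-word BIP-39 list
indices = [int(final_bits[11 * i:11 * (i + 1)], 2) for i in range(word_count)]
print(indices)
```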
|
<python><hash><cryptography><checksum><bitcoin>
|
2023-01-18 14:38:42
| 1
| 3,261
|
Pietro Speroni
|
75,160,904
| 9,323,635
|
Create a history dataframe to save historical values python
|
<p>I need to create a history dataframe (history_df) in Python that stores all the content of another dataframe (current_df), which is refreshed every hour.</p>
<p>The contents of current_df are overwritten on each refresh because its primary key is the Robot ID column, which is why I need history_df to keep everything.</p>
<p>For instance, if current_df has this data at 1:00 pm:</p>
<pre><code>Robot ID Distance to finish line (km)
AB2 2
FG3 7
GJ7 56
</code></pre>
<p>And like this at 2:00 pm:</p>
<pre><code>Robot ID Distance to finish line (km)
AB2 0,5
FG3 3
GJ7 20
HHV 2
</code></pre>
<p>I would need history_df to store all the rows on current_df:</p>
<pre><code>Robot ID Distance to finish line (km)
AB2 2
FG3 7
GJ7 56
AB2 0,5
FG3 3
GJ7 20
HHV 2
</code></pre>
<p>Is there a way to do this?</p>
<p>I looked up the write_dataframe() function, but I'm not sure whether it removes duplicate values.</p>
<p>Thank you and kind regards.</p>
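The question mentions pyspark, but the append idea is easiest to sketch in plain pandas (in Spark the analogue would be a union of the two frames). A sketch, with the snapshot data taken from the question; no deduplication happens, so every hourly row survives:

```python
import pandas as pd

snapshots = []  # collect each hourly copy of current_df

snap1 = pd.DataFrame({"Robot ID": ["AB2", "FG3", "GJ7"],
                      "Distance to finish line (km)": [2.0, 7.0, 56.0]})
snap2 = pd.DataFrame({"Robot ID": ["AB2", "FG3", "GJ7", "HHV"],
                      "Distance to finish line (km)": [0.5, 3.0, 20.0, 2.0]})

snapshots.append(snap1)   # at 1:00 pm
snapshots.append(snap2)   # at 2:00 pm

# concat keeps duplicate Robot IDs on purpose -- every snapshot row survives
history_df = pd.concat(snapshots, ignore_index=True)
print(len(history_df))  # 7
```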
|
<python><dataframe><pyspark>
|
2023-01-18 14:38:38
| 1
| 771
|
HRDSL
|
75,160,879
| 11,542,205
|
Paginate data from two tables that are sorted by the creation date and located in different databases
|
<p>Table A is in a PostgreSQL database, while Table B is in a MongoDB database.</p>
<p>I want to paginate the data in both tables programmatically and without merging them, and the result pages should be ordered by the <code>creation_date</code> attribute in both tables.</p>
<hr />
<p>Example:</p>
<p>Schema of table A in database X:
<code>A(id, name, creation_date, ...)</code></p>
<p>and it contains:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>creation_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1</td>
<td>...</td>
<td>2023-01-18 15:00</td>
</tr>
<tr>
<td>A2</td>
<td>...</td>
<td>2023-01-18 14:00</td>
</tr>
<tr>
<td>A3</td>
<td>...</td>
<td>2023-01-18 13:00</td>
</tr>
<tr>
<td>A4</td>
<td>...</td>
<td>2023-01-18 11:00</td>
</tr>
<tr>
<td>A5</td>
<td>...</td>
<td>2023-01-18 10:00</td>
</tr>
<tr>
<td>A6</td>
<td>...</td>
<td>2023-01-18 08:00</td>
</tr>
</tbody>
</table>
</div>
<p>Schema of table B in database Y:
<code>B(id, name, creation_date, ...)</code></p>
<p>and it contains:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>creation_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>B1</td>
<td>...</td>
<td>2023-01-18 12:00</td>
</tr>
<tr>
<td>B2</td>
<td>...</td>
<td>2023-01-18 09:00</td>
</tr>
<tr>
<td>B3</td>
<td>...</td>
<td>2023-01-18 07:00</td>
</tr>
<tr>
<td>B4</td>
<td>...</td>
<td>2023-01-18 06:00</td>
</tr>
<tr>
<td>B5</td>
<td>...</td>
<td>2023-01-18 05:00</td>
</tr>
<tr>
<td>B6</td>
<td>...</td>
<td>2023-01-18 04:00</td>
</tr>
</tbody>
</table>
</div>
<p>Let's say the items per page is 5, the result of each page should be:</p>
<p>Page 1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>creation_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1</td>
<td>...</td>
<td>2023-01-18 15:00</td>
</tr>
<tr>
<td>A2</td>
<td>...</td>
<td>2023-01-18 14:00</td>
</tr>
<tr>
<td>A3</td>
<td>...</td>
<td>2023-01-18 13:00</td>
</tr>
<tr>
<td>B1</td>
<td>...</td>
<td>2023-01-18 12:00</td>
</tr>
<tr>
<td>A4</td>
<td>...</td>
<td>2023-01-18 11:00</td>
</tr>
</tbody>
</table>
</div>
<p>Page 2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>creation_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>A5</td>
<td>...</td>
<td>2023-01-18 10:00</td>
</tr>
<tr>
<td>B2</td>
<td>...</td>
<td>2023-01-18 09:00</td>
</tr>
<tr>
<td>A6</td>
<td>...</td>
<td>2023-01-18 08:00</td>
</tr>
<tr>
<td>B3</td>
<td>...</td>
<td>2023-01-18 07:00</td>
</tr>
<tr>
<td>B4</td>
<td>...</td>
<td>2023-01-18 06:00</td>
</tr>
</tbody>
</table>
</div>
<p>Page 3:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>creation_date</th>
</tr>
</thead>
<tbody>
<tr>
<td>B5</td>
<td>...</td>
<td>2023-01-18 05:00</td>
</tr>
<tr>
<td>B6</td>
<td>...</td>
<td>2023-01-18 04:00</td>
</tr>
</tbody>
</table>
</div><hr />
<p>What is the most efficient solution to do this using code?
It makes no difference what programming language is used. I'm just looking for an optimized algorithm and hopefully it can be converted into any language.</p>
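The question is language-agnostic, so here is a Python sketch of one common approach: for page k, fetch the top k × page_size rows from each store (each already ordered by <code>creation_date</code> descending), merge the two sorted streams, and slice out the requested page. The <code>fetch_*</code> callables stand in for the real SQL/Mongo queries and are assumptions; dates are simplified to integers:

```python
import heapq

def page_across_sources(sources, page, per_page):
    """sources: callables fetch(limit) -> rows sorted by creation_date desc.
    Fetch page*per_page rows from each source, merge, slice the page."""
    need = page * per_page
    fetched = [src(need) for src in sources]
    # heapq.merge with reverse=True merges streams sorted largest-first
    merged = heapq.merge(*fetched, key=lambda r: r["creation_date"], reverse=True)
    rows = list(merged)
    return rows[(page - 1) * per_page: page * per_page]

# hour-of-day stands in for creation_date
a = [{"id": f"A{i}", "creation_date": d}
     for i, d in enumerate([15, 14, 13, 11, 10, 8], start=1)]
b = [{"id": f"B{i}", "creation_date": d}
     for i, d in enumerate([12, 9, 7, 6, 5, 4], start=1)]

fetch_a = lambda n: a[:n]  # placeholder for ORDER BY ... DESC LIMIT n
fetch_b = lambda n: b[:n]

print([r["id"] for r in page_across_sources([fetch_a, fetch_b], 1, 5)])
# ['A1', 'A2', 'A3', 'B1', 'A4']
```

Each page over-fetches (page × per_page rows per source), which is the usual trade-off of merge pagination without a shared cursor; a keyset cursor per source avoids that at the cost of more bookkeeping.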
|
<javascript><python><java><mongodb><postgresql>
|
2023-01-18 14:36:48
| 0
| 1,248
|
Tarek Hammami
|
75,160,861
| 1,512,250
|
converting feet-inch string DataFrame column to centimeters
|
<p>I have a column of basketball players height:</p>
<pre><code>0 6-10
1 6-9
2 7-2
3 6-1
4 6-6
...
4545 6-11
4546 7-1
4547 6-1
4548 7-1
4549 6-3
</code></pre>
<p>I want to convert the values from feet to cm.
I made a split: <code>player_data['height'].str.split('-')</code>, and received a Series of arrays with separate feet and inches:</p>
<pre><code>0 [6, 10]
1 [6, 9]
2 [7, 2]
3 [6, 1]
4 [6, 6]
...
4545 [6, 11]
4546 [7, 1]
4547 [6, 1]
4548 [7, 1]
4549 [6, 3]
</code></pre>
<p>Now I try to convert values to float:</p>
<pre><code>df = player_data['height'].str.split('-').astype(float)
</code></pre>
<p>But I receive an error: <code>ValueError: setting an array element with a sequence.</code>
What am I doing wrong?</p>
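The <code>astype(float)</code> call fails because each element is now a <em>list</em> of two strings, not a scalar. One way around it, as a sketch: split with <code>expand=True</code> so feet and inches become separate integer columns, then combine them (sample values taken from the question):

```python
import pandas as pd

heights = pd.Series(["6-10", "6-9", "7-2", "6-1", "6-6"])

# expand=True yields a DataFrame with one column per part,
# which *can* be cast to int column-wise
parts = heights.str.split("-", expand=True).astype(int)

# feet*12 + inches gives total inches; one inch is 2.54 cm
cm = (parts[0] * 12 + parts[1]) * 2.54
print(cm.tolist())
```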
|
<python><arrays><pandas><series>
|
2023-01-18 14:34:58
| 1
| 3,149
|
Rikki Tikki Tavi
|
75,160,843
| 5,561,649
|
Can exceptions still interrupt the execution when they are wrapped in an appropriate try/except block?
|
<p>I have some debugging code (involving <code>debugpy.connect()</code>) in a custom package, which is expected to raise an exception if the debug server isn't currently launched. The thing is, that code is wrapped in a <code>try / except Exception</code> clause, as I don't want it to interrupt the execution of the programs that call it when I'm not debugging (it's just there if I need it).</p>
<p>But then, when <code>pip install</code> tries to install that package in my virtual environment, it fails at that point, with a traceback that makes it look as if the exception wasn't caught.</p>
<p>This seems particularly weird to me. Outside of <code>pip install</code>, it works as expected and the exception is caught, and the program keeps running.</p>
<p>I've also tried with a bare <code>except</code> clause; it also lets the exception "escape". I tried researching this on Google, looking at all the possible ways an exception might not be properly caught, and I couldn't find anything.</p>
<p>Do you know what might be causing this?</p>
|
<python><exception><pip><debugpy>
|
2023-01-18 14:33:43
| 0
| 550
|
LoneCodeRanger
|
75,160,816
| 4,491,532
|
Bokeh iterator callbacks
|
<p>I just found this interesting example of using easily Bokeh widgets in a notebook.</p>
<p><a href="https://github.com/bokeh/bokeh/blob/3.0.3/examples/output/jupyter/push_notebook/Jupyter%20Interactors.ipynb" rel="nofollow noreferrer">https://github.com/bokeh/bokeh/blob/3.0.3/examples/output/jupyter/push_notebook/Jupyter%20Interactors.ipynb</a></p>
<p>It works just fine. However, I do not understand how to get the actual values of the three sliders (w, A, Phi) after the plot is modified. I suppose I have to use callbacks but I do not know how. Can someone suggest some code? Thanks.</p>
|
<python><jupyter-notebook><bokeh>
|
2023-01-18 14:31:48
| 1
| 357
|
polgia0
|
75,160,757
| 10,197,418
|
How to properly set binary flags in a Python polars dataframe
|
<p>When implementing a binary flag column in Python <strong>polars v0.15.15</strong>, I came across some seemingly weird behavior. Given a df</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"col1": [0,1,2,3],
"flag": [0,0,0,0]
})
</code></pre>
<p>I set the flag by <code>or</code>-ing the current flag value, e.g. 2</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_column(
pl.when((pl.col("col1") < 1) | (pl.col("col1") >= 3))
.then(pl.col("flag") | 2) # set flag b0010
.otherwise(pl.col("flag"))
)
print(df)
shape: (4, 2)
┌──────┬──────┐
│ col1 ┆ flag │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞══════╪══════╡
│ 0 ┆ 2 │
│ 1 ┆ 0 │
│ 2 ┆ 0 │
│ 3 ┆ 2 │
└──────┴──────┘
</code></pre>
<p>So far so good, however when <em>adding another flag</em>, I get something unexpected:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_column(
pl.when(pl.col("col1") > -1)
.then(pl.col("flag") | 4) # also set flag b0100
.otherwise(pl.col("flag"))
)
print(df)
shape: (4, 2)
┌──────┬──────┐
│ col1 ┆ flag │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞══════╪══════╡
│ 0 ┆ 6 │
│ 1 ┆ 6 │ # <-- ?! 0 | 4 is 4, not 6
│ 2 ┆ 6 │ # <-- ?! 0 | 4 is 4, not 6
│ 3 ┆ 6 │
└──────┴──────┘
</code></pre>
<p>Why are all flags now 6? I'd expect <code>[6, 4, 4, 6]</code></p>
<p>Doing it the other way around (set flag 4, then flag 2), the result is as expected:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({"col1": [0,1,2,3], "flag": [0,0,0,0]})
df = df.with_column(
pl.when(pl.col("col1") > -1)
.then(pl.col("flag") | 4)
.otherwise(pl.col("flag"))
)
df = df.with_column(
pl.when((pl.col("col1") < 1) | (pl.col("col1") >= 3))
.then(pl.col("flag") | 2)
.otherwise(pl.col("flag"))
)
print(df)
shape: (4, 2)
┌──────┬──────┐
│ col1 ┆ flag │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞══════╪══════╡
│ 0 ┆ 6 │
│ 1 ┆ 4 │
│ 2 ┆ 4 │
│ 3 ┆ 6 │
└──────┴──────┘
</code></pre>
<p>What's going on here, what am I missing?</p>
|
<python><binary><python-polars>
|
2023-01-18 14:27:48
| 1
| 26,076
|
FObersteiner
|
75,160,501
| 7,525,747
|
Modify the elements of a list inside a for loop (Python equivalent of Matlab code with a nested loop)
|
<p>I have the following MATLAB code (adapted from Programming and Numerical Methods in MATLAB by Otto&Denier, page 75)</p>
<pre><code>clear all
p = input('Enter the power you require: ');
points = p+2;
n = 1:points;
for N = n
sums(N) = 0;
for j = 1:N
sums(N) = sums(N)+j^p;
end
end
</code></pre>
<p>The output for 3 as the given value of p is the following list</p>
<pre><code>>> sums
sums =
1 9 36 100 225
</code></pre>
<p>I have written the following Python code (maybe not the most Pythonic way), trying to follow the MATLAB instructions as closely as possible.</p>
<pre><code>p = int(input('Enter the power you require: '))
points = p+2
n = range(points)
for N in range(1, len(n)+1):
sums = [0]*N
for index, item in list(enumerate(sums)):
sums[index] = item+index**p
</code></pre>
<p>Nevertheless the output is not same list. I have tried to replace the inner loop with</p>
<pre><code>for j in range(1,N+1):
sums[N] = sums[N]+j**p
</code></pre>
<p>but this results in an index error. Thanks in advance for any suggestions.</p>
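One way to mirror the MATLAB logic is to keep the 1-based ranges explicit and build the list as you go, rather than re-creating it inside the loop (<code>p</code> is hardcoded instead of read from <code>input()</code> for this sketch):

```python
p = 3            # hardcoded instead of input() for the sketch
points = p + 2

sums = []
for N in range(1, points + 1):       # MATLAB's N = 1:points
    total = 0
    for j in range(1, N + 1):        # MATLAB's j = 1:N
        total += j ** p
    sums.append(total)

print(sums)  # [1, 9, 36, 100, 225]
```

The original Python attempt resets <code>sums</code> to zeros on every outer iteration and iterates over <code>sums</code> itself instead of <code>1..N</code>, which is why the output diverges from MATLAB's.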
|
<python><list><loops><matlab>
|
2023-01-18 14:09:02
| 2
| 579
|
Dimitris
|
75,160,262
| 7,184,301
|
Mock a Lambda Layer in AWS Lambda Function
|
<p>I want to unit test my AWS Lambda function. The problem is, the Lambda functions rely on Lambda Layers, which are only available in the AWS Lambda environment</p>
<pre><code>import os
from lambda_layer import function  # this works in AWS Lambda, but not locally
def lambda_handler(event, context):
result = function(param1, param2)
print(result)
....
</code></pre>
<p>In the unit test:</p>
<pre><code>from unittest import TestCase
from unittest import mock
#this is where I need help:
with mock.patch(...... replace the lambda_layer with some mocked value or path to lambda layer???
from path.to.lambda import lambda_hander as under_test
class TestStuff(TestCase):
def test_lambda_handler(self):
#given, when then....
</code></pre>
<p>Error message: E ModuleNotFoundError: No module named 'lambda_layer'
.... obviously. But how can I fix this?</p>
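One common pattern, offered as a sketch: register a stand-in module in <code>sys.modules</code> <em>before</em> the handler module is imported, so its <code>from lambda_layer import function</code> line resolves against the mock. Module names follow the question; the handler import itself is left commented because <code>path.to.lambda</code> is a placeholder (and <code>lambda</code> is a Python keyword):

```python
import sys
from unittest import mock

# Register a fake module before the handler module is imported,
# so "from lambda_layer import function" resolves during that import.
fake_layer = mock.MagicMock()
fake_layer.function.return_value = "mocked-result"
sys.modules["lambda_layer"] = fake_layer

# Now this import would succeed in the test module:
# from my_package.handler import lambda_handler as under_test

from lambda_layer import function  # resolves to the mock
print(function(1, 2))  # "mocked-result"
```

An alternative for local runs is to add the layer's source directory to <code>sys.path</code> so the real code is importable, but the <code>sys.modules</code> trick keeps the test isolated from the layer's implementation.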
|
<python><aws-lambda><mocking><python-unittest><aws-lambda-layers>
|
2023-01-18 13:50:21
| 1
| 387
|
James
|
75,160,191
| 11,252,809
|
Poetry is being overwritten in docker and I can't see why - can you spot problem?
|
<p>Here's the error after I try to run poetry directly (I gave up on PATH whilst troubleshooting) in a docker container:</p>
<p><code>Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "~/.local/share/pypoetry/venv/bin/poetry": stat ~/.local/share/pypoetry/venv/bin/poetry: no such file or directory: unknown</code></p>
<p>Dockerfile:</p>
<pre><code>FROM python:3.10
# to run poetry directly as soon as it's installed
ENV PATH="/root/.local/bin:$PATH"
# INSTALL POETRY
RUN apt-get update \
&& apt-get install -y curl \
&& curl -sSL https://install.python-poetry.org | python3 -
# set work directory
WORKDIR /app
# Install dependencies
COPY poetry.lock pyproject.toml /app/
RUN ~/.local/share/pypoetry/venv/bin/poetry install --no-root
# copy files to Container
COPY ./. /app
CMD ["~/.local/share/pypoetry/venv/bin/poetry", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>docker-compose.yml</p>
<pre><code># Docker compose file for nginx and letsencrypt
version: "3.10"
services:
fastapi:
build:
context: .
dockerfile: Dockerfile
container_name: fastapi
ports:
- "80:80"
- "443:443"
volumes:
- .:/app
</code></pre>
<p>What's strange is that poetry actually builds and installs all those dependencies first; I watch them get installed. Then when I try to use poetry at the command line, the build errors out with the message above. Somehow, by that point in the build, the poetry installation is gone?</p>
|
<python><docker><python-poetry>
|
2023-01-18 13:44:51
| 2
| 565
|
phil0s0pher
|
75,160,154
| 15,637,940
|
create dataclass with optional attribute
|
<p>I'm trying create dataclass with optional attribute <code>is_complete</code>:</p>
<pre><code>from dataclasses import dataclass
from typing import Optional
@dataclass(frozen=True)
class MyHistoricCandle:
open: float
high: float
low: float
close: float
volume: int
time: datetime
is_complete: Optional[bool]
</code></pre>
<p>But when i init <code>MyHistoricCandle</code> object without <code>is_complete</code> attribute:</p>
<pre><code>MyHistoricCandle(open=1, high=1, low=1, close=1, volume=1, time=datetime.now())
</code></pre>
<p>Getting this error:</p>
<pre><code>TypeError: MyHistoricCandle.__init__() missing 1 required positional argument: 'is_complete'
</code></pre>
<p>Question: Is it even possible to create a dataclass with an optional attribute? I tried
<code>is_complete: Optional[bool] = None</code>, but sometimes I don't want to add this field at all, rather than setting it to <code>None</code>.</p>
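A dataclass field cannot be truly absent from <code>__init__</code> — giving it a default is the idiomatic way to make the argument optional. A sketch of the class from the question with the default added (if <code>None</code> must stay distinguishable from "not provided", a module-level sentinel object as the default serves that purpose):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class MyHistoricCandle:
    open: float
    high: float
    low: float
    close: float
    volume: int
    time: datetime
    # the default makes the constructor argument optional
    is_complete: Optional[bool] = None

candle = MyHistoricCandle(open=1, high=1, low=1, close=1,
                          volume=1, time=datetime.now())
print(candle.is_complete)  # None
```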
|
<python><python-3.x><python-dataclasses><python-3.10>
|
2023-01-18 13:41:53
| 1
| 412
|
555Russich
|
75,160,109
| 2,340,127
|
Python3 and pytest: jinja2.exceptions.TemplateNotFound:
|
<p>I'm creating some unit tests and I'm facing this issue:</p>
<blockquote>
<p>FAILED test_views_home.py::test_index -
jinja2.exceptions.TemplateNotFound: index.html</p>
</blockquote>
<p>this is part of my testing code:</p>
<pre><code>templates = Jinja2Templates('templates')
def test_index():
@router.get('/', include_in_schema=False)
async def index(request: Request):
return templates.TemplateResponse('index.html', {'request': request})
client = TestClient(router)
response = client.get("/")
assert response.status_code == 200
assert '<!DOCTYPE html>' in response.text
</code></pre>
<p>and this is my folder structure:</p>
<pre><code>root/
├── views/
│ ├── index.py
├── templates/
│ ├── index.html
├── tests/
│ ├── test_sorullo.py
</code></pre>
<p>I got:</p>
<blockquote>
<p>FAILED test_views_home.py::test_index -
jinja2.exceptions.TemplateNotFound: index.html</p>
</blockquote>
<p>I'm guessing I'm getting this line wrong:</p>
<pre><code>templates = Jinja2Templates('templates')
</code></pre>
<p>but I couldn't figure it out, and I didn't find anything similar when searching. What am I doing wrong?</p>
<p>Thanks!</p>
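One likely cause, offered as a sketch: <code>Jinja2Templates('templates')</code> resolves the directory relative to the current working directory, and pytest is often launched from the project root or the <code>tests/</code> folder, so the bare name misses. Anchoring the path to the file itself avoids that — shown here with plain jinja2 (starlette's <code>Jinja2Templates</code> takes the same directory argument); the <code>globals()</code> fallback only exists so the sketch also runs interactively:

```python
from pathlib import Path
import jinja2

# Resolve 'templates' relative to this test file instead of whatever
# directory pytest happened to be launched from.
base_dir = (Path(__file__).resolve().parent.parent
            if "__file__" in globals() else Path.cwd())
templates_dir = base_dir / "templates"

loader = jinja2.FileSystemLoader(templates_dir)
env = jinja2.Environment(loader=loader)
print(loader.searchpath)  # absolute path ending in .../templates
```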
|
<python><jinja2><pytest>
|
2023-01-18 13:38:56
| 2
| 581
|
emboole
|
75,160,060
| 1,103,911
|
Conditional assignment to multiple columns in pandas
|
<p>Using pandas 1.42</p>
<p>Having a DataFrame with 5 columns: A, B, C, D, E</p>
<p>I need to assign values from columns D and E to columns A and B if the value of column C is true.
I want to achieve this in one line using the <code>.loc</code> method.</p>
<h4>example</h4>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>4</td>
<td>True</td>
<td>7</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>5</td>
<td>False</td>
<td>8</td>
<td>11</td>
</tr>
<tr>
<td>3</td>
<td>6</td>
<td>True</td>
<td>9</td>
<td>12</td>
</tr>
</tbody>
</table>
</div><h4>expected result</h4>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td>7</td>
<td>10</td>
<td>True</td>
<td>7</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>5</td>
<td>False</td>
<td>8</td>
<td>11</td>
</tr>
<tr>
<td>9</td>
<td>12</td>
<td>True</td>
<td>9</td>
<td>12</td>
</tr>
</tbody>
</table>
</div>
<pre><code>df = pd.DataFrame(
{'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [True, False, True],
'D': [7, 8, 9],
'E': [10, 11, 12]}
)
df.loc[df['C'], ['A', 'B']] = df[['D', 'E']]
</code></pre>
<h4>actual result</h4>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td>nan</td>
<td>nan</td>
<td>True</td>
<td>7</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>5</td>
<td>False</td>
<td>8</td>
<td>11</td>
</tr>
<tr>
<td>nan</td>
<td>nan</td>
<td>True</td>
<td>9</td>
<td>12</td>
</tr>
</tbody>
</table>
</div><h4>workaround I figured</h4>
<pre><code>df.loc[df['C'], ['A', 'B']] = (df.D[df.C], df.E[df.C])
</code></pre>
<p>It seems pandas doesn't handle the to-be-assigned values correctly when they come as a DataFrame, but gets it right if you pack them as a tuple of Series.
Am I getting the syntax wrong, or is this a bug in pandas?</p>
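A sketch of why this happens and one common fix: pandas aligns the right-hand side by both index <em>and column labels</em>, and since columns D/E never match A/B, every aligned cell comes up missing, hence the NaNs. Stripping the labels with <code>.to_numpy()</code> sidesteps the alignment:

```python
import pandas as pd

df = pd.DataFrame(
    {'A': [1, 2, 3],
     'B': [4, 5, 6],
     'C': [True, False, True],
     'D': [7, 8, 9],
     'E': [10, 11, 12]}
)

# .to_numpy() drops the D/E column labels, so no label alignment
# happens and values are assigned positionally
df.loc[df['C'], ['A', 'B']] = df.loc[df['C'], ['D', 'E']].to_numpy()
print(df[['A', 'B']].values.tolist())  # [[7, 10], [2, 5], [9, 12]]
```

The tuple-of-Series workaround succeeds for a related reason: each Series aligns only on the row index, which does match.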
|
<python><pandas><dataframe>
|
2023-01-18 13:35:27
| 1
| 588
|
Eliy Arlev
|
75,159,821
| 13,455,916
|
Installing Python 3.11.1 on a docker container
|
<p>I want to use <code>debian:bullseye</code> as a base image and then install a specific Python version - i.e. 3.11.1. At the moment I am just learning docker and linux.</p>
<p>From what I understand I can either:</p>
<ol>
<li>Download and compile sources</li>
<li>Install binaries (using apt-get)</li>
<li>Use a Python base image</li>
</ol>
<p>I have come across countless questions on here and articles online. Do I use <a href="https://launchpad.net/%7Edeadsnakes/+archive/ubuntu/nightly/+packages" rel="noreferrer">deadsnakes</a>? What version do I need? Are there any official python distributions (<a href="https://askubuntu.com/questions/1398568/installing-python-who-is-deadsnakes-and-why-should-i-trust-them">who is deadsnakes anyway</a>)?</p>
<p>But ultimately I want to know the best means of getting Python on there. I don't want to use a Python base image - I am curious about the steps involved. Compiling from source is far beyond my current know-how - one for another day.</p>
<p>Currently I am rolling with the following:</p>
<pre><code>FROM debian:bullseye
RUN apt update && apt upgrade -y
RUN apt install software-properties-common -y
RUN add-apt-repository "ppa:deadsnakes/ppa"
RUN apt install python3.11
</code></pre>
<p>This fails with:</p>
<pre><code>#8 1.546 E: Unable to locate package python3.11
#8 1.546 E: Couldn't find any package by glob 'python3.11'
</code></pre>
<p>Ultimately it's not about this particular error - it's about finding a good way of getting a specific Python version onto my container.</p>
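<p>One caveat worth noting: the deadsnakes PPA targets Ubuntu releases, so <code>add-apt-repository ppa:deadsnakes/ppa</code> finds no matching packages on Debian. On <code>debian:bullseye</code> the usual route for a pinned version like 3.11.1 is building from source. A hedged sketch (the build-dependency list below is the commonly cited one for CPython, not verified against this exact image):</p>

```dockerfile
FROM debian:bullseye

# Build dependencies commonly needed to compile CPython
# (assumption: add further -dev packages if your code needs more stdlib modules)
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential wget ca-certificates \
        libssl-dev zlib1g-dev libbz2-dev libreadline-dev \
        libsqlite3-dev libffi-dev liblzma-dev \
    && rm -rf /var/lib/apt/lists/*

# Download, build, and install Python 3.11.1; `make altinstall` installs
# `python3.11` without overwriting the distro's `python3`
RUN wget https://www.python.org/ftp/python/3.11.1/Python-3.11.1.tgz \
    && tar xzf Python-3.11.1.tgz \
    && cd Python-3.11.1 \
    && ./configure --enable-optimizations \
    && make -j"$(nproc)" \
    && make altinstall
```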
|
<python><docker>
|
2023-01-18 13:17:00
| 3
| 347
|
andrewthedev
|
75,159,784
| 1,794,714
|
Convert python pandas iterator and string concat into pyspark
|
<p>I am attempting to move a process from Pandas into Pyspark, but I am a complete novice in the latter. Note: This is an EDA process so I am not too worried about having it as a loop for now, I can optimise that at a later date.</p>
<p>Set up:</p>
<pre><code>import pandas as pd
import numpy as np
import pyspark.pandas as ps
</code></pre>
<p>Dummy Data:</p>
<pre><code>df = ps.DataFrame({'id': ['ID_01', 'ID_02', 'ID_02', 'ID_03', 'ID_03'], 'name': ['Jack', 'John', 'John', 'James', 'Jamie']})
df_pandas = df.to_pandas()
df_spark = df.to_spark()
df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
</tr>
</thead>
<tbody>
<tr>
<td>ID_01</td>
<td>Jack</td>
</tr>
<tr>
<td>ID_02</td>
<td>John</td>
</tr>
<tr>
<td>ID_02</td>
<td>John</td>
</tr>
<tr>
<td>ID_03</td>
<td>James</td>
</tr>
<tr>
<td>ID_03</td>
<td>Jamie</td>
</tr>
</tbody>
</table>
</div>
<p>Pandas code:</p>
<pre><code>unique_ids = df_pandas['id'].unique()
for unique_id in unique_ids:
names = '; '.join(sorted(df_pandas[df_pandas['id'] == unique_id]['name'].unique()))
df.loc[df['id'] == unique_id, 'name'] = names
df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
</tr>
</thead>
<tbody>
<tr>
<td>ID_01</td>
<td>Jack</td>
</tr>
<tr>
<td>ID_02</td>
<td>John</td>
</tr>
<tr>
<td>ID_02</td>
<td>John</td>
</tr>
<tr>
<td>ID_03</td>
<td>James; Jamie</td>
</tr>
<tr>
<td>ID_03</td>
<td>James; Jamie</td>
</tr>
</tbody>
</table>
</div>
<p>This last table is the desired output. However, I am having issues achieving this in PySpark. This is where I have got to:</p>
<pre><code>unique_ids = df_spark.select('id').distinct().collect()
for unique_id in unique_ids:
names = df_spark.filter(df_spark.id == unique_id.id).select('name').distinct()
</code></pre>
<p>I am then unsure how to do the next steps; i.e. how to concatenate the resulting single column DataFrame, nor how to ensure the correct replacement.</p>
<p>I have investigated the following sources, with no success (likely due to my inexperience in PySpark):</p>
<ul>
<li><a href="https://stackoverflow.com/questions/73907438/concatenate-all-pyspark-dataframe-columns-into-one-string-column">This</a> answer shows how to concatenate columns and not rows</li>
<li><a href="https://stackoverflow.com/questions/50311732/pyspark-equivalence-of-df-loc">This</a> answer might be helpful for the <code>loc</code> conversion (but I have not managed to get there yet)</li>
<li><a href="https://stackoverflow.com/questions/41788919/concatenating-string-by-rows-in-pyspark">This</a> answer initially proved promising, since it would remove the need for the loop as well, but I could not figure out how to do the <code>distinct</code> and <code>sort</code> equivalents on the <code>collect_list</code> output object</li>
</ul>
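<p>Not a full answer, but the pandas loop above collapses to a single groupby-transform, which may help frame the Spark version (in PySpark the analogous pieces would be <code>collect_set</code>, <code>sort_array</code>, and <code>array_join</code> over a window partitioned by <code>id</code>; that part is untested here):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': ['ID_01', 'ID_02', 'ID_02', 'ID_03', 'ID_03'],
                   'name': ['Jack', 'John', 'John', 'James', 'Jamie']})
# One pass instead of a loop over unique ids:
# join the sorted unique names within each id group
df['name'] = df.groupby('id')['name'].transform(
    lambda s: '; '.join(sorted(s.unique())))
print(df['name'].tolist())
```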
|
<python><pandas><apache-spark><pyspark>
|
2023-01-18 13:13:57
| 1
| 391
|
FitzKaos
|
75,159,782
| 672,305
|
ValidationError: pass multi-level dictionary instead of single dict
|
<p>Is there a way to raise a ValidationError with a nested dict of errors? For example:</p>
<pre><code>raise ValidationError({
    "index_errors": {"index1": {
        "test_con": "Error text example",
        "test_con2": "Error text example2"
    }}
})
</code></pre>
<p>I am getting 'ValidationError' object has no attribute 'error_list'</p>
|
<python><django>
|
2023-01-18 13:13:45
| 1
| 865
|
pikk
|
75,159,744
| 17,473,587
|
How to Negate in this Jinja if condition?
|
<p>I have this in template:</p>
<pre><code>{% if cell %}{% set cell = "b" %}{% endif %}
</code></pre>
<p>What is the negation of the above conditional?</p>
<p>This does not work:</p>
<pre><code>{% if !cell %}
</code></pre>
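<p>For the record, Jinja uses Python-style boolean operators, so the negation is <code>not</code> rather than <code>!</code>. A minimal check:</p>

```python
from jinja2 import Template

tpl = Template("{% if not cell %}empty{% else %}set{% endif %}")
print(tpl.render(cell=""))   # falsy value takes the `not` branch
print(tpl.render(cell="b"))  # truthy value takes the else branch
```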
|
<python><jinja2>
|
2023-01-18 13:10:19
| 1
| 360
|
parmer_110
|
75,159,721
| 5,919,632
|
How to export Enterprise Architect diagram links to csv using python?
|
<p>I want to export Enterprise Architect model relation links to CSV file.</p>
<p>I'm doing it using Python in the following way.</p>
<pre class="lang-py prettyprint-override"><code>import sys
import win32com.client

def ea():
    try:
        eaApp = win32com.client.Dispatch('EA.App')
        eaRep = eaApp.Repository  # was: eaRep = eaRep.Repository (typo)
    except:
        sys.exit()
    try:
        eaRep.OpenFile2("C:\\path-to-model.eap", 1, 0)
        package_guid = "{ABC34Hs-*****}"
        dia = eaRep.getDiagramByGUID(package_guid)
        res = eaRep.SQLQuery(f"select * from t_diagram where ea_guid = '{package_guid}'")
        print(res)
        for do in dia.diagramobjects:
            elem = eaRep.getElementByID(do.elementId)
            if elem.name == "my-diagram":
                print(elem.Name, elem.Type, do.left, do.right, do.top)
                print(elem.Notes)
    except:
        pass

ea()
</code></pre>
</code></pre>
<p>Here I'm only getting model diagram details like geometry points some more info like Package-ID, Diagram-Id, Type etc.</p>
<ol>
<li>How can we extract all relation links of model to CSV/Excel file using Python?</li>
<li>Is Python really a good language choice for EA link exporting work? I could not see well-maintained documentation for <code>win32com.client</code>.</li>
</ol>
<p>Thanks</p>
|
<python><enterprise-architect>
|
2023-01-18 13:08:16
| 2
| 647
|
Akash Pagar
|
75,159,677
| 6,564,294
|
I do not want to write and read the same document in python
|
<p>I have pdf files where I want to extract info only from the first page. My solution is to:</p>
<ol>
<li>Use PyPDF2 to read from S3 and save only the first page.</li>
<li>Read the same one-paged-pdf I saved, convert to byte64 and analyse it on AWS Textract.</li>
</ol>
<p>It works but I do not like this solution. What is the need to save and still read the exact same file? Can I not use the file directly at runtime?</p>
<p>Here is what I have done that I don't like:</p>
<pre><code>from PyPDF2 import PdfReader, PdfWriter
from io import BytesIO
import boto3
def analyse_first_page(bucket_name, file_name):
s3 = boto3.resource("s3")
obj = s3.Object(bucket_name, file_name)
fs = obj.get()['Body'].read()
pdf = PdfReader(BytesIO(fs), strict=False)
writer = PdfWriter()
page = pdf.pages[0]
writer.add_page(page)
# Here is the part I do not like
with open("first_page.pdf", "wb") as output:
writer.write(output)
with open("first_page.pdf", "rb") as pdf_file:
encoded_string = bytearray(pdf_file.read())
#Analyse text
textract = boto3.client('textract')
response = textract.detect_document_text(Document={"Bytes": encoded_string})
return response
analyse_first_page(bucket, file_name)
</code></pre>
<p>Is there no AWS way to do this? Is there no better way to do this?</p>
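<p>On the second question: the intermediate file can be avoided with an in-memory buffer. <code>PdfWriter.write()</code> accepts a stream, so writing into <code>io.BytesIO</code> and calling <code>getvalue()</code> yields the bytes Textract needs. A sketch of the stdlib pattern (the PyPDF2/Textract calls themselves stay as in the question):</p>

```python
from io import BytesIO

buf = BytesIO()
# In the real code this line would be: writer.write(buf)
buf.write(b"%PDF-1.4 first page only")
encoded_string = buf.getvalue()  # bytes, ready for Document={"Bytes": ...}
print(encoded_string[:4])
```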
|
<python><amazon-web-services><pdf><amazon-textract>
|
2023-01-18 13:05:19
| 1
| 324
|
Chukwudi
|
75,159,675
| 1,378,055
|
Installing Open3d-Ml with Pytorch (on MacOs)
|
<p>I created a <code>virtualenv</code> with <code>python 3.10</code> and installed open3d and PyTorch according to the instructions on open3d-ml webpage: <a href="https://github.com/isl-org/Open3D-ML" rel="nofollow noreferrer">Open3d-ML</a> but when I tested it with <code>import open3d.ml.torch</code> I get the error:
<code>Exception: Open3D was not built with PyTorch support!</code></p>
<p><strong>Steps to reproduce</strong></p>
<pre><code>python3.10 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install open3d
pip install torch torchvision torchaudio
</code></pre>
<p><strong>Error</strong></p>
<pre><code>% python -c "import open3d.ml.torch as ml3d"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/xx/.venv/lib/python3.10/site-packages/open3d/ml/torch/__init__.py", line 34, in <module>
raise Exception('Open3D was not built with PyTorch support!')
Exception: Open3D was not built with PyTorch support!
</code></pre>
<p><strong>Environment:</strong></p>
<pre><code>% python3 --version
Python 3.10.9
% pip freeze
open3d==0.16.1
torch==1.13.1
torchaudio==0.13.1
torchvision==0.14.1
</code></pre>
<p><strong>OS</strong></p>
<pre><code>macOS 12.6
Kernel Version: Darwin 21.6.0
</code></pre>
<p>I also checked below similar issues but they don't have answers:</p>
<p><a href="https://github.com/isl-org/Open3D/discussions/5849" rel="nofollow noreferrer">https://github.com/isl-org/Open3D/discussions/5849</a></p>
<p><a href="https://github.com/isl-org/Open3D-ML/issues/557" rel="nofollow noreferrer">https://github.com/isl-org/Open3D-ML/issues/557</a></p>
<p><a href="https://stackoverflow.com/questions/65794655/open3d-ml-and-pytorch">Open3D-ML and pytorch</a></p>
<p>According to this issue <a href="https://github.com/isl-org/Open3D/discussions/5849" rel="nofollow noreferrer">5849</a> the problem can't be related only to MacOs because, in a docker with Ubuntu20.04, there is a similar error.</p>
<p>Does anyone know how we can tackle this?</p>
|
<python><python-3.x><pytorch><open3d>
|
2023-01-18 13:05:12
| 3
| 445
|
Bruce
|
75,159,579
| 4,495,238
|
Data not getting written to Joining Column
|
<p>The following Python Apache Beam code is writing only null values to the BigQuery field <code>sum_rpp_million</code> instead of the computed sum. All other columns are getting loaded as expected.</p>
<p>I am expecting that it should write Sum calculated at PCollection <code>data_sum</code> to all records of Pcollection <code>data_loading</code>.</p>
<p>Please help me in identifying where code is going wrong.</p>
<pre><code> data_loading = (
p1
| 'ReadData' >> beam.io.ReadFromText(input, skip_header_lines =1)
| 'SplitData' >> beam.Map(lambda x: x.split(';'))
| 'FormatToDict' >> beam.Map(lambda x: {"country_code": x[1], "unique_code": x[2], "name": x[3], "geom": x[4], "population": None if x[5]=='' else round(float(x[5])), "households":None if x[6]=='' else round(float(x[6])), "rpp_million": float(x[7]) if x[7] != '' else None, "rppc_million": (0 if x[8]=='' else float(x[7]))+(0 if x[9]=='' else float(x[9])), "pp_million": (None if x[10]=='' else float(x[10])), "sum_rpp_million": None})
)
data_sum = (
data_loading
| 'ExtractColumn' >> beam.Map(lambda x: 0 if x['rpp_million']==None else x['rpp_million'])
| 'SumFieldC2' >> beam.CombineGlobally(sum)
| 'AddSumField' >> beam.MapTuple(lambda record, sum_c2: {**record, 'sum_rpp_million': sum_c2})
)
combined_data = (data_loading, data_sum) | beam.Flatten()
#---------------------Type = audit----------------------------------------------------------------------------------------------------------------------
result = (
combined_data
| 'Write-Audit' >> beam.io.WriteToBigQuery(
table='bqdata124',
dataset=dataset_id,
project=project,
schema=table_schema_Audit,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE
))
</code></pre>
<p>Currently my code is populating the data as given below:
<a href="https://i.sstatic.net/a0kfT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a0kfT.png" alt="enter image description here" /></a></p>
<p>The expectation is that the field <code>sum_rpp_million</code> should contain <code>Sum(rpp_million)</code> in every record: if <code>Sum(rpp_million)</code> equals <code>100</code>, then <code>100</code> should be populated in all records under the field <code>sum_rpp_million</code>.</p>
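<p>A note on the approach: <code>CombineGlobally(sum)</code> yields a PCollection holding one float, so <code>MapTuple</code> over it cannot work, and <code>Flatten</code> then mixes two incompatible collections. The usual Beam pattern is to feed the combined sum back into a <code>Map</code> over <code>data_loading</code> as a <code>beam.pvalue.AsSingleton</code> side input. In plain Python terms (a sketch of the data flow only, not runnable Beam code):</p>

```python
# Stand-in for the PCollection of parsed records
records = [{'rpp_million': 40.0}, {'rpp_million': None}, {'rpp_million': 60.0}]

# Equivalent of ExtractColumn + CombineGlobally(sum)
total = sum(r['rpp_million'] or 0 for r in records)

# Equivalent of Map(..., total=beam.pvalue.AsSingleton(data_sum)):
# stamp the one global sum onto every record
combined = [dict(r, sum_rpp_million=total) for r in records]
print(combined[0]['sum_rpp_million'])
```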
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-01-18 12:56:13
| 1
| 699
|
Vibhor Gupta
|
75,159,453
| 542,270
|
Specifying local relative dependency in pyproject.toml
|
<p>I have the following project structure:</p>
<pre><code>root
- sample/
- src/
- tests/
- pyproject.toml
- libs/
- lol/
- src/
- tests/
- pyproject.toml
</code></pre>
<p>I'd like to specify <code>lol</code> as a dependency for <code>sample</code> in <code>sample/pyproject.toml</code>. How it can be done? I've tried:</p>
<pre><code>dependencies = [
"lol @ file://libs/lol"
]
</code></pre>
<p>But it gives me:</p>
<pre><code>ValueError: non-local file URIs are not supported on this platform: 'file://libs/lol'
</code></pre>
<p>and that's ok however I cannot put absolute path here since this is going to be shared code. Same for <code>file://./lib/lol</code>.</p>
<p>What can be done about that? Can I use env variables here, or some placeholders? I don't want to use tools like poetry.</p>
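<p>One caveat and a hedged workaround: PEP 508 direct references in <code>[project] dependencies</code> require absolute <code>file://</code> URLs, so relative paths don't fit in the metadata itself. A common pattern is to keep the local path out of <code>pyproject.toml</code> and install the sibling package alongside, with paths resolved relative to wherever pip is invoked:</p>

```shell
# Run from the sample/ directory of the shared checkout
pip install ../libs/lol   # editable variant: pip install -e ../libs/lol
pip install .
```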
|
<python><pip><pyproject.toml>
|
2023-01-18 12:47:11
| 5
| 85,464
|
Opal
|
75,159,336
| 245,362
|
Using Box with PyO3
|
<p>I have a struct in Rust that works like a linked list that I want to expose to Python. The struct has a <code>parent</code> field, which references a parent node of the same type. I need to wrap this in a <code>Box</code>, since Rust complains about needing indirection if I don't, but then PyO3 gives the following error:</p>
<pre><code>required for `Box<ListNode>` to implement `pyo3::FromPyObject<'_>`
required for `Box<ListNode>` to implement `PyFunctionArgument<'_, '_>`
</code></pre>
<p>A simplified version of the struct looks like below:</p>
<pre class="lang-rust prettyprint-override"><code>#[pyclass]
#[derive(Clone)]
pub struct ListNode {
pub parent: Option<Box<ListNode>>,
}
#[pymethods]
impl ListNode {
#[new]
pub fn new(parent: Option<Box<ListNode>>) -> ListNode {
ListNode { parent }
}
}
</code></pre>
<p>What do I need to do to implement <code>FromPyObject</code> for <code>Box</code>? Or is there a more correct way to resolve this? It seems like the error occurs whenever there's a <code>Box</code> in Rust, no matter what the contents of the <code>Box</code> are.</p>
<p>EDIT: The full error output from <code>cargo check</code> is below:</p>
<pre><code>error[E0277]: the trait bound `Box<ListNode>: PyClass` is not satisfied
--> src/ListNode.rs:14:1
|
14 | #[pymethods]
| ^^^^^^^^^^^^ the trait `PyClass` is not implemented for `Box<ListNode>`
...
17 | pub fn new(parent: Option<Box<ListNode>>) -> ListNode {
| ------ required by a bound introduced by this call
|
= note: required for `Box<ListNode>` to implement `pyo3::FromPyObject<'_>`
= note: required for `Box<ListNode>` to implement `PyFunctionArgument<'_, '_>`
note: required by a bound in `extract_optional_argument`
--> /me/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-0.17.3/src/impl_/extract_argument.rs:104:8
|
104 | T: PyFunctionArgument<'a, 'py>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `extract_optional_argument`
error[E0277]: the trait bound `Box<ListNode>: PyClass` is not satisfied
--> src/ListNode.rs:17:24
|
17 | pub fn new(parent: Option<Box<ListNode>>) -> ListNode {
| ^^^^^^ the trait `PyClass` is not implemented for `Box<ListNode>`
|
= note: required for `Box<ListNode>` to implement `pyo3::FromPyObject<'_>`
= note: required for `Box<ListNode>` to implement `PyFunctionArgument<'_, '_>`
</code></pre>
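<p>For comparison, a pattern that sidesteps the <code>FromPyObject for Box&lt;...&gt;</code> problem is to hold the parent as a Python-managed reference (<code>Py&lt;ListNode&gt;</code>) instead of a Rust <code>Box</code>: <code>Py&lt;T&gt;</code> provides the indirection the compiler asks for, and PyO3 implements extraction for it. An untested sketch, not a drop-in fix (note it drops <code>derive(Clone)</code>):</p>

```rust
use pyo3::prelude::*;

// Py<ListNode> is a reference-counted handle into the Python heap, so the
// struct has indirection without Box, and PyO3 can extract it as an argument.
#[pyclass]
pub struct ListNode {
    pub parent: Option<Py<ListNode>>,
}

#[pymethods]
impl ListNode {
    #[new]
    pub fn new(parent: Option<Py<ListNode>>) -> ListNode {
        ListNode { parent }
    }
}
```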
|
<python><rust><pyo3>
|
2023-01-18 12:36:11
| 1
| 563
|
David Chanin
|
75,159,310
| 9,106,985
|
Abaqus Python: Accessing XYDataFromHistory at particular nodes
|
<p>I have defined the following in an attempt to export HISTORY OUTPUT data at specified nodes from abaqus odb file. It is not clear to me how to resolve this error. Any suggestions?</p>
<pre><code>from odbAccess import *

def main():
    odb = openOdb('name.odb')
    new_list = ['Spatial acceleration: A1 at Node 84735155 in NSET SENSOR1',
                'Spatial acceleration: A2 at Node 84735155 in NSET SENSOR2']
    results = []
    for i in range(len(new_list)):
        f = XYDataFromHistory(odb=odb,
                              outputVariableName=new_list[i],
                              steps=('Step-4', ), name='test{}'.format(i))
        results.append(f)

main()
</code></pre>
<p><strong>Error</strong></p>
<pre><code> Traceback (most recent call last):
File "odb_processing_SSD_acceleration_export_v4.py", line 66, in <module>
main()
File "odb_processing_SSD_acceleration_export_v4.py", line 32, in main
f=XYDataFromHistory(odb=odb,
NameError: global name 'XYDataFromHistory' is not defined
</code></pre>
|
<python><abaqus><abaqus-odb>
|
2023-01-18 12:34:22
| 1
| 575
|
shoggananna
|
75,159,278
| 4,865,723
|
How to get a Series's parent DataFrame in Pandas?
|
<p>Does a <code>pandas.Series</code> instance know its parent <code>pandas.DataFrame</code> when it comes from one?</p>
<p>Example:</p>
<pre><code>import pandas
df = pandas.DataFrame({'col': range(10)})
series_column = df.col
print('My parent is {}'.format(series_column.parent))
# or
print('My parent is {}'.format(df.col.parent))
</code></pre>
<p>My goal is to make the signature of a method easier.</p>
<pre><code>def foobar(data: DataFrame, column: str):
return data[column].do_something()
# I would like to save one argument
def foobar(column: pandas.Series):
return column.parent[column].do_something()
</code></pre>
<p>Here is a more real world example:</p>
<pre><code>def frequency(data: pandas.DataFrame, column: str, dropna: bool = False):
tab = data[column].value_counts(dropna=dropna)
# sort index if it is an ordered category
if data[column].dtype.name == 'category':
if data[column].cat.ordered:
tab = tab.sort_index()
# Series to DataFrame
tab = tab.to_frame()
# two column MultiIndex
a = random_label()
tab[a] = column
tab = tab.reset_index()
tab = tab.set_index([a, column])
tab.index.names = (None, None)
tab.columns = ['n']
return tab
</code></pre>
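<p>As far as I know, a <code>Series</code> keeps no public reference back to its parent <code>DataFrame</code>. For this particular <code>frequency</code> helper, though, the Series alone carries everything actually used (values, dtype, <code>name</code>), so the signature can shrink without needing a parent. A trimmed sketch of that idea:</p>

```python
import pandas as pd

def frequency(column: pd.Series, dropna: bool = False) -> pd.Series:
    # value_counts and the ordered-category check only need the column itself
    tab = column.value_counts(dropna=dropna)
    if isinstance(column.dtype, pd.CategoricalDtype) and column.dtype.ordered:
        tab = tab.sort_index()
    return tab

df = pd.DataFrame({'col': ['a', 'b', 'a']})
print(frequency(df['col']).to_dict())
```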
|
<python><pandas>
|
2023-01-18 12:31:58
| 0
| 12,450
|
buhtz
|
75,159,093
| 8,849,071
|
How to make only a method generic in a python class
|
<p>I have the following base class to represent fields:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
@dataclass
class BaseClass(Generic[T]):
def validate(self, value: T):
raise NotImplementedError
</code></pre>
<p>I also have an enum to represent the available implementations of this class:</p>
<pre class="lang-py prettyprint-override"><code>class Types(Enum):
A = auto()
B = auto()
@staticmethod
def from_instance(instance: BaseClass) -> "Types":
if isinstance(instance, ClassA):
return Types.A
if isinstance(instance, ClassB):
return Types.B
raise ValueError("Not supported")
</code></pre>
<p>Now, from these class, I have several implementations:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class ClassA(BaseClass[str]):
def validate(self, value: str):
pass
@dataclass
class ClassB(BaseClass[int]):
def validate(self, value: int):
pass
</code></pre>
<p>After this setup, I have another class to store a list of <code>BaseClass</code>:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Container:
instances: List[BaseClass]
def get_by_type(self, type: Types) -> List[BaseClass]:
return [instance for instance in self.instances if type == Types.from_instance(instance)]
</code></pre>
<p>At the end I have the following code and the following error:</p>
<pre class="lang-py prettyprint-override"><code>def function(fields_from_class_a: List[ClassA]):
print(fields_from_class_a)
container = Container(instances=[ClassA(), ClassB()])
fields = container.get_by_type(Types.A)
# throws error:
# Argument 1 to "function" has incompatible type "List[BaseClass[Any]]"; expected "List[ClassA]"
function(fields)
</code></pre>
<p>So my question is, can I modify the code in such a way that the method <code>get_by_type</code> is correctly typed?</p>
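<p>One way to make the lookup precisely typed is to dispatch on the class itself rather than the enum, so a <code>TypeVar</code> bound to <code>BaseClass</code> can flow through to the return type. A self-contained sketch (renamed <code>get_by_class</code> to mark it as an alternative, not the original method):</p>

```python
from dataclasses import dataclass
from typing import Generic, List, Type, TypeVar

T = TypeVar("T")
C = TypeVar("C", bound="BaseClass")

@dataclass
class BaseClass(Generic[T]):
    def validate(self, value: T):
        raise NotImplementedError

@dataclass
class ClassA(BaseClass[str]):
    def validate(self, value: str):
        pass

@dataclass
class ClassB(BaseClass[int]):
    def validate(self, value: int):
        pass

@dataclass
class Container:
    instances: List[BaseClass]

    # The return type is tied to the class argument, so
    # container.get_by_class(ClassA) is typed as List[ClassA]
    def get_by_class(self, cls: Type[C]) -> List[C]:
        return [i for i in self.instances if isinstance(i, cls)]

container = Container(instances=[ClassA(), ClassB()])
fields = container.get_by_class(ClassA)
print(len(fields))
```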
|
<python><python-3.6><mypy>
|
2023-01-18 12:15:34
| 1
| 2,163
|
Antonio Gamiz Delgado
|
75,158,916
| 10,413,428
|
fstring float to int with leading zeros
|
<p>I need to generate a string from a float which is always the length of 5.
For example:</p>
<pre class="lang-py prettyprint-override"><code>input_number: float = 2.22
output_str = "00222"
</code></pre>
<p>The float is never larger than 999.xx and can have an arbitrary number of decimal places.
I came up with the following code, but I suspect that what I have in mind can be done in a more Pythonic way.</p>
<p>My solution:</p>
<pre class="lang-py prettyprint-override"><code>input_number = 343.2423423
input_rounded = round(input_number, 2)
input_str = str(input_rounded)
input_str = input_str.replace(".","")
input_int = int(input_str)
output_str = f"{input_int:05d}"
</code></pre>
<p>More examples:</p>
<p>343.2423423 -> "34324" <br />
23.3434343 -> "02334"</p>
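<p>The whole pipeline appears to collapse to one expression: scale by 100, round to an int, and zero-pad in the f-string (a sketch; at exact <code>.xx5</code> boundaries the rounding may differ slightly from the two-step original):</p>

```python
def to_padded(number: float) -> str:
    # number * 100 keeps two decimal places' worth of precision;
    # round() drops the rest and :05d zero-pads to width 5
    return f"{round(number * 100):05d}"

print(to_padded(2.22))         # expect "00222"
print(to_padded(343.2423423))  # expect "34324"
print(to_padded(23.3434343))   # expect "02334"
```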
|
<python><f-string>
|
2023-01-18 12:00:11
| 3
| 405
|
sebwr
|
75,158,879
| 12,965,658
|
Python regex for url
|
<p>I need help to create regex for the url.</p>
<p>The part https://test/ is fixed and will be in all samples.</p>
<p>I want a regex for the part after <code>test/</code>: it must start with a letter from a to j (case-insensitive) and can have any digits or characters after it.</p>
<p>Valid samples:</p>
<pre><code>https://test/abacus/b/
https://test/horse/1/3/
</code></pre>
<p>Invalid samples:</p>
<pre><code>https://test/zoo
</code></pre>
<p>so far I have tried:</p>
<pre><code>^https://test/[a-j][a-zA-Z0-9]*$
</code></pre>
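<p>The posted pattern is close; since the valid samples contain slashes, the trailing character class needs <code>/</code>, and the first letter can allow both cases. A sketch:</p>

```python
import re

# [a-jA-J]: the path must start with a letter a-j, case-insensitive;
# [a-zA-Z0-9/]*: any letters, digits, or further path segments after it
pattern = re.compile(r"^https://test/[a-jA-J][a-zA-Z0-9/]*$")

print(bool(pattern.match("https://test/abacus/b/")))   # True
print(bool(pattern.match("https://test/horse/1/3/")))  # True
print(bool(pattern.match("https://test/zoo")))         # False
```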
|
<python><regex>
|
2023-01-18 11:55:51
| 1
| 909
|
Avenger
|
75,158,684
| 11,308,029
|
How to send event to Sentry without Sentry-SDK?
|
<p>I need to run a Python script without external dependencies, so I cannot use the SDK for Python. I also do not want to call external tools like sentry-cli from the script for that purpose.</p>
<p>I simply need to send two events to a specific project, using the DSN.</p>
<p>I cannot find this on Google or in the <a href="https://docs.sentry.io/api/" rel="nofollow noreferrer">API reference</a> (there are only methods for listing/retrieving issues/events, not for sending them).</p>
<p>So my question is: how do I send an event to a Sentry project using the DSN?</p>
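<p>For what it's worth, Sentry's event-store endpoint and auth header are both derivable from the DSN (<code>https://&lt;key&gt;@&lt;host&gt;/&lt;project_id&gt;</code>), so plain <code>urllib</code> can post an event. A stdlib-only sketch: the parsing below runs as-is, while the actual POST is left commented since it needs a live DSN (the DSN shown is hypothetical):</p>

```python
import json
import urllib.parse
import urllib.request

def store_endpoint(dsn: str):
    """Derive the event-store URL and auth header from a DSN."""
    u = urllib.parse.urlsplit(dsn)
    project_id = u.path.rsplit("/", 1)[-1]
    url = f"{u.scheme}://{u.hostname}/api/{project_id}/store/"
    auth = f"Sentry sentry_version=7, sentry_key={u.username}"
    return url, auth

# Hypothetical DSN for illustration only
url, auth = store_endpoint("https://abc123@o0.ingest.sentry.io/42")
print(url)
# req = urllib.request.Request(
#     url,
#     data=json.dumps({"message": "hello from stdlib"}).encode(),
#     headers={"X-Sentry-Auth": auth, "Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```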
|
<python><logging><alert><monitoring><sentry>
|
2023-01-18 11:39:33
| 1
| 702
|
Archirk
|
75,158,670
| 7,216,834
|
How to merge a list of dictionaries into one dictionary in Python, where a value becomes a list if the key has different values?
|
<p>I have a list of dictionaries,</p>
<pre><code>lst = [{'A':1,'B':2,'C':4},{'A':2,'B':2,'C':4},{'A':3,'B':2,'C':4}]
</code></pre>
<p>I want to merge this into one dictionary and put the values inside a list if a key has different values.</p>
<pre><code>desired_output = {'A': [1, 2, 3], 'B': 2, 'C': 4}
</code></pre>
<p>I tried but it was resulting in something like,</p>
<pre><code>{'A': [1, 2, 3], 'B': [2], 'C': [4]}
</code></pre>
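<p>A two-step approach works: collect every value per key, then unwrap keys whose values are all identical (a sketch; note that <code>sorted(set(...))</code> orders the list rather than keeping first-seen order):</p>

```python
lst = [{'A': 1, 'B': 2, 'C': 4}, {'A': 2, 'B': 2, 'C': 4}, {'A': 3, 'B': 2, 'C': 4}]

# Collect all values seen for each key
collected = {}
for d in lst:
    for k, v in d.items():
        collected.setdefault(k, []).append(v)

# Keep a list only when a key saw more than one distinct value
merged = {k: sorted(set(v)) if len(set(v)) > 1 else v[0]
          for k, v in collected.items()}
print(merged)  # {'A': [1, 2, 3], 'B': 2, 'C': 4}
```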
|
<python><python-3.x><dictionary>
|
2023-01-18 11:37:57
| 1
| 1,325
|
Jennifer Therese
|