QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
75,435,329
| 12,243,638
|
fillna multiple columns of a dataframe with corresponding columns of another dataframe pandas
|
<p>There is a dataframe <code>df_1</code> which has some nan values. These nan values should be filled by values from another dataframe df_2 with correspond to same column and row.</p>
<pre><code>df_1 = pd.DataFrame([
[0.1, 2, 55, 0,np.nan],
[0.2, 4, np.nan, 1,99],
[0.3, np.nan, 22, 5,88],
[0.4, np.nan, np.nan, 4,77]
],
columns=list('ABCDE'))
df_2 = pd.DataFrame([
[0.1, 2, 55, 0.5],
[0.2, 4, 6, 1],
[0.3, 7, 22, 5],
],
columns=list('ABCD'))
</code></pre>
<p>The output is expected as:</p>
<pre><code> A B C D E
0 0.1 2.0 55.0 0 NaN
1 0.2 4.0 6.0 1 99.0
2 0.3 7.0 22.0 5 88.0
3 0.4 NaN NaN 4 77.0
</code></pre>
<p>I tried with df_1 = df_1.fillna(df_2). But it does not fill up the nans. Is it any way to fix it?</p>
|
<python><pandas><dataframe>
|
2023-02-13 11:34:26
| 2
| 500
|
EMT
|
75,435,290
| 9,173,710
|
opencv single edge detection and contour extraction without closed contour
|
<p>I'm having trouble extracting a contour from this and other similar streak images.<br />
<a href="https://i.sstatic.net/1NC9m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1NC9m.png" alt="enter image description here" /></a></p>
<p>First I tried this:</p>
<pre class="lang-py prettyprint-override"><code>edges = cv2.Canny(cvimg, 150, 600)
plt.subplot(111)
plt.imshow(cvimg,cmap = 'gray')
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# get the contour that has the least mean y value, which should be the upper most contour
avg_y = [cont.T[1].mean() for cont in contours]
idx = np.argmin(avg_y)
contour = np.asarray(contours[idx]).squeeze().T # reshape and reorient for plotting
plt.plot(contour[0], contour[1], color="r")
plt.show()
</code></pre>
<p>On first glance this solution works:<br />
<a href="https://i.sstatic.net/zJG48.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zJG48m.png" alt="contour" /></a><a href="https://i.sstatic.net/21Vgz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/21Vgzm.png" alt="edge" /></a><br />
But here both the upper and underside of the canny edge are part of a closed-loop contour. I cannot easily separate the two sides. At least nothing comes to mind.</p>
<p>My second try was to threshold the image and not use canny at all:</p>
<pre class="lang-py prettyprint-override"><code>_, cvimg = cv2.threshold(cvimg, 50, 255, type=cv2.THRESH_OTSU)
contours, _ = cv2.findContours(cvimg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# get the contour that has the least mean y value, which should be the upper most contour
avg_y = [cont.T[1].mean() for cont in contours]
idx = np.argmin(avg_y)
contour = np.asarray(contours[idx]).squeeze().T
plt.plot(contour[0], contour[1], color="r")
plt.show()
</code></pre>
<p>But that also yields a closed contour:</p>
<p><a href="https://i.sstatic.net/QUamOm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QUamOm.png" alt="enter image description here" /></a></p>
<p>As the last try I used the canny filtered image and just took the first nonzero index in every column:</p>
<pre class="lang-py prettyprint-override"><code>edges = cv2.Canny(cvimg, 150, 600)
contour = np.argmax(edges>0, axis=0)
plt.subplot(111)
plt.imshow(cvimg,cmap = 'gray')
plt.plot(contour, color="r")
plt.show()
</code></pre>
<p>This is also not ideal, since there is too much being detected on both sides.<br />
<a href="https://i.sstatic.net/5PQuHm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5PQuHm.png" alt="enter image description here" /></a></p>
<p>How can I get just the upper V shape contour, so I don't have to do postprocessing which might not work for all my data? Is there a way to tell OpenCV to not loop around the Canny edges? Or am I just too dumb to use the library correctly?<br />
Thanks!</p>
|
<python><opencv><edge-detection>
|
2023-02-13 11:31:30
| 1
| 1,215
|
Raphael
|
75,435,280
| 4,662,490
|
python/regex: match letter only or letter followed by number
|
<p>I want to split this string 'AB4F2D' in ['A', 'B4', 'F2', 'D'].
Essentially, if character is a letter, return the letter, if character is a number return previous character plus present character (luckily there is no number >9 so there is never a X12).</p>
<p>I have tried several combinations but I am not able to find the correct one:</p>
<pre><code>def get_elements(input_string):
patterns = [
r'[A-Z][A-Z0-9]',
r'[A-Z][A-Z0-9]|[A-Z]',
r'\D|\D\d',
r'[A-Z]|[A-Z][0-9]',
r'[A-Z]{1}|[A-Z0-9]{1,2}'
]
for p in patterns:
elements = re.findall(p, input_string)
print(elements)
</code></pre>
<p>results:</p>
<pre><code>['AB', 'F2']
['AB', 'F2', 'D']
['A', 'B', 'F', 'D']
['A', 'B', 'F', 'D']
['A', 'B', '4F', '2D']
</code></pre>
<p>Can anyone help? Thanks</p>
|
<python><regex>
|
2023-02-13 11:30:20
| 2
| 423
|
Marco Di Gennaro
|
75,434,950
| 2,876,079
|
How to correctly consider multi index column names for pandas concat?
|
<p>I have two DataFrames that are indexed by the columns <code>id_a</code>, <code>id_b</code> but in different order:</p>
<pre><code>foo = pd.DataFrame([{'id_a': 1, 'id_b': 1, 'value': 1}])
foo.set_index(['id_a','id_b'], inplace=True)
baa = pd.DataFrame([{'id_b': 2, 'id_a': 1, 'value': 10}])
baa.set_index(['id_b', 'id_a'], inplace=True)
</code></pre>
<p>If I concatenate those two DataFrames:</p>
<pre><code>qux = pd.concat([foo, baa])
</code></pre>
<p>I would expect the result to contain the entries</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_a</th>
<th>id_b</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>2</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
<p>However, I get</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_a</th>
<th>id_b</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
<p><code>id_a</code> gets a non-existing new value "2"</p>
<p>(At least I would expect to get a warning, that the order of the id columns is different.)</p>
<p>=> How can I concatenate those two indexed DataFrames, correctly considering the index column names?</p>
<p>Following variants did not help:</p>
<pre><code>qux = pd.concat([foo, baa], names=['id_a', 'id_b'])
</code></pre>
<p>Maybe it would work somehow using the options <code>keys</code> and/or <code>levels</code> but I did not manage to do so.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html</a></p>
|
<python><pandas><concatenation><multi-index>
|
2023-02-13 10:58:11
| 1
| 12,756
|
Stefan
|
75,434,907
| 2,463,112
|
how to find and export each cell datatype into a csv using python
|
<p>What is the best way to loop through each line in CSV file and find and export the datatype of each cell into csv file?</p>
<p>Here is the python script I am using with pandas library,</p>
<pre><code>import pandas as pd
out = pd.read_csv("C:\\Users\\Sridhar\\Downloads\\Leads.csv")
dict(out)
</code></pre>
|
<python><pandas><dask>
|
2023-02-13 10:52:36
| 1
| 2,248
|
sridharnetha
|
75,434,883
| 6,468,436
|
I can not figure out why I can not start this simlple python script
|
<p>My directory looks like this</p>
<p><a href="https://i.sstatic.net/gRklB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gRklB.png" alt="enter image description here" /></a></p>
<p>When I start directly with PyCharm it works.
But when I try to start the script with a commandline I get this error messsage</p>
<pre><code> > python .\PossibilitiesPlotter.py
Traceback (most recent call last):
File "C:\Users\username\PycharmProjects\SwapMatrixPlotter\possibilitiesplotter\PossibilitiesPlotter.py", line 7, in <module>
from plotterresources.PlotterProps import PlotterProps
ModuleNotFoundError: No module named 'plotterresources'
</code></pre>
<p>This is how the import looks from my main class PossibilitesPlotter.py</p>
<pre><code>import sys
sys.path.append("plotterresources/PlotterProps.py")
from csv import reader
from pathlib import Path
from plotterresources.PlotterProps import PlotterProps
from possibilitiesplotter.PossibilitiesGraph import PossibilitiesGraph
from possibilitiesplotter.PossibilitiesModel import PossibilitiesModel
class PossibilitiesPlotter:
</code></pre>
|
<python><pycharm>
|
2023-02-13 10:49:55
| 1
| 325
|
User1751
|
75,434,833
| 344,162
|
Create private composite Github action that runs a python script and is called from another private repo
|
<p>I have a Python script that I want to run as a final deployment step from several of my repos. Since this script uses the Github API I wanted to abstract all of this logic into its own github action (thus its own separate repo). <a href="https://github.blog/changelog/2022-12-14-github-actions-sharing-actions-and-reusable-workflows-from-private-repositories-is-now-ga/" rel="nofollow noreferrer">Since Dec-22</a> it's possible to share between private repos to all Github users. I've already updated my action repo with the appropriate setting to be accessed from internal repos to the organization.</p>
<p>I have two repos:</p>
<ul>
<li><code>org/my-action</code>: the github action repo, containing the python script</li>
<li><code>org/core</code>: repo from which I want to trigger <code>my-action</code></li>
</ul>
<p>My <code>action.yml</code> from <code>org/my-action</code> looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Release to Calendar
inputs:
calendar-id:
description: 'Unique identifier'
required: true
runs:
using: "composite"
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
shell: "bash"
run: pip install -r requirements.txt
- name: Create calendar event
shell: "bash"
run: python main.py ${{inputs.calendar-id}}
</code></pre>
<p>The workflow in <code>org/core</code> that invokes my action looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>name: release to calendar
on:
workflow_dispatch
jobs:
to-calendar:
runs-on: ubuntu-latest
steps:
- uses: org/my-action@main
with:
calendar-id: 'some value'
</code></pre>
<p>However, the <code>actions/checkout</code> step starts checking out my <code>org/core</code> repo instead, and then running pip installs the requirements from the <code>core</code> repo as well.</p>
<p>To fix this, I updated my action to look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>name: Release to Calendar
inputs:
calendar-id:
description: 'Unique identifier'
required: true
runs:
using: "composite"
steps:
- uses: actions/checkout@v3
with:
repository: 'org/my-action'
ref: 'main'
path: 'calendar-action'
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
shell: "bash"
run: pip install -r calendar-action/requirements.txt
- name: Create calendar event
shell: "bash"
run: python calendar-action/main.py ${{inputs.calendar-id}}
</code></pre>
<p>But now I get the message: <code>Error: fatal: repository 'https://github.com/org/my-action' not found</code>.</p>
<p>I understand that I could [create a PAT][3] and use that as a secret in one of the repos (not sure which one) but is this the only way to go? It seems redundant to create a token that will give repo access to "itself", in a sense. It is possible that I'm misunderstanding fundamentally how actions work: Is there a more elegant way to achieve this?</p>
|
<python><github><continuous-integration><github-actions>
|
2023-02-13 10:44:55
| 0
| 1,065
|
Gustavo Puma
|
75,434,737
| 707,145
|
shiny for Python using add_layer for Popus from ipyleaflet
|
<p>I want to use <code>m.add_layer</code> for <code>Popus</code> from <a href="/questions/tagged/ipyleaflet" class="post-tag" title="show questions tagged 'ipyleaflet'" aria-label="show questions tagged 'ipyleaflet'" rel="tag" aria-labelledby="ipyleaflet-container">ipyleaflet</a> in <a href="/questions/tagged/shiny" class="post-tag" title="show questions tagged 'shiny'" aria-label="show questions tagged 'shiny'" rel="tag" aria-labelledby="shiny-container">shiny</a> for <a href="/questions/tagged/python" class="post-tag" title="show questions tagged 'python'" aria-label="show questions tagged 'python'" rel="tag" aria-labelledby="python-container">python</a> (<a href="https://ipyleaflet.readthedocs.io/en/latest/layers/popup.html" rel="noreferrer">as given here</a>). However, it is not working as expected. My minimum working example is given below:</p>
<pre><code>from shiny import App, render, ui
from shinywidgets import output_widget, reactive_read, register_widget
from ipywidgets import HTML
from ipyleaflet import Map, Marker, Popup
app_ui = ui.page_fluid(
output_widget("m")
)
def server(input, output, session):
center = (52.204793, 360.121558)
m = Map(center=center, zoom=9, close_popup_on_click=False)
message1 = HTML()
message1.value = "Try clicking the marker!"
# Popup with a given location on the map:
popup = Popup(
location=center,
child=message1,
close_button=False,
auto_close=False,
close_on_escape_key=False
)
m.add_layer(popup) # This line is not working
register_widget("m", m)
app = App(app_ui, server)
</code></pre>
<p>Wondering what basic am I missing here?</p>
|
<python><html><ipyleaflet><py-shiny>
|
2023-02-13 10:35:11
| 2
| 24,136
|
MYaseen208
|
75,434,681
| 4,151,075
|
Type hint decorator for sync & async functions
|
<p>How can I type hint decorator that is meant to be used for both sync & async functions?<br />
I've tried something like below, but <code>mypy</code> raises errors:</p>
<pre class="lang-py prettyprint-override"><code>x/decorator.py:130: error: Incompatible types in "await" (actual type "Union[Awaitable[Any], R]", expected type "Awaitable[Any]") [misc]
x/decorator.py:136: error: Incompatible return value type (got "Union[Awaitable[Any], R]", expected "R") [return-value]
</code></pre>
<pre class="lang-py prettyprint-override"><code>def log_execution_time(foo: Callable[P, AR | R]) -> Callable[P, AR | R]:
module: Any = inspect.getmodule(foo)
module_spec: Any = module.__spec__ if module else None
module_name: str = module_spec.name if module_spec else foo.__module__ # noqa
@contextmanager
def log_timing():
start = time()
try:
yield
finally:
exec_time_ms = (time() - start) * 1000
STATS_CLIENT.timing(
metric_key.FUNCTION_TIMING.format(module_name, foo.__name__),
exec_time_ms,
)
async def async_inner(*args: P.args, **kwargs: P.kwargs) -> R:
with log_timing():
result = await foo(*args, **kwargs) <- error
return result
def sync_inner(*args: P.args, **kwargs: P.kwargs) -> R:
with log_timing():
result = foo(*args, **kwargs)
return result <- error
if inspect.iscoroutinefunction(foo):
return wraps(foo)(async_inner)
return wraps(foo)(sync_inner)
</code></pre>
<p>I know there's a trick like this:</p>
<pre class="lang-py prettyprint-override"><code> if inspect.iscoroutinefunction(foo):
async_inner: foo # type: ignore[no-redef, valid-type]
return wraps(foo)(async_inner)
sync_inner: foo # type: ignore[no-redef, valid-type]
return wraps(foo)(sync_inner)
</code></pre>
<p>But I was hoping that there's a way to properly type hint this.</p>
<p>I'm on python 3.10.10.</p>
<p>PS. I forgot to say that it's important that PyCharm picks it up & suggests proper types.</p>
|
<python><python-typing><python-decorators>
|
2023-02-13 10:30:02
| 1
| 1,269
|
Marek
|
75,434,668
| 1,422,096
|
Update a chart in realtime with matplotlib
|
<p>I'd like to update a plot by redrawing a new curve (with 100 points) in real-time.</p>
<p>This works:</p>
<pre><code>import time, matplotlib.pyplot as plt, numpy as np
fig = plt.figure()
ax = fig.add_subplot(111)
t0 = time.time()
for i in range(10000000):
x = np.random.random(100)
ax.clear()
ax.plot(x, color='b')
fig.show()
plt.pause(0.01)
print(i, i/(time.time()-t0))
</code></pre>
<p>but there is only ~10 FPS, which seems slow.</p>
<p>What is the standard way to do this in Matplotlib?</p>
<p>I have already read <a href="https://stackoverflow.com/questions/4098131/how-to-update-a-plot-in-matplotlib">How to update a plot in matplotlib</a> and <a href="https://stackoverflow.com/questions/11874767/how-do-i-plot-in-real-time-in-a-while-loop-using-matplotlib">How do I plot in real-time in a while loop using matplotlib?</a> but these cases are different because they <em>add a new point to an existing plot</em>. In my use case, I need to redraw everything and keep 100 points.</p>
|
<python><matplotlib>
|
2023-02-13 10:28:52
| 1
| 47,388
|
Basj
|
75,434,493
| 20,615,590
|
Using terminal on Jupyter Lab not working
|
<p>I am working on Jupyter Lab, and I opened the terminal. I tried to install the module <code>spacy</code>, but when I came to the part where it said <code>$ python ...</code>, it said <code>/bin/bash: python: command not found</code>.</p>
<p>My code:</p>
<pre><code>$ pip install -U pip setuptools wheel
$ pip install -U spacy
$ python -m spacy download en_core_web_sm
</code></pre>
<p>The output:</p>
<hr />
<pre><code>Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pip in /usr/local/lib/python3.8/dist-packages (23.0)
Requirement already satisfied: setuptools in /home/repl/.local/lib/python3.8/site-packages (67.2.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.8/dist-packages (0.38.4)
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: spacy in /home/repl/.local/lib/python3.8/site-packages (3.5.0)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.8/dist-packages (from spacy) (3.0.8)
Requirement already satisfied: typer<0.8.0,>=0.3.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (0.7.0)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.10.2)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.28.1)
Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in /usr/local/lib/python3.8/dist-packages (from spacy) (0.10.1)
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.23.2)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (21.3)
Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (3.3.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from spacy) (3.1.2)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.0.7)
Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in /home/repl/.local/lib/python3.8/site-packages (from spacy) (3.0.12)
Requirement already satisfied: srsly<3.0.0,>=2.4.3 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.4.5)
Requirement already satisfied: pathy>=0.10.0 in /home/repl/.local/lib/python3.8/site-packages (from spacy) (0.10.1)
Requirement already satisfied: smart-open<7.0.0,>=5.2.1 in /usr/local/lib/python3.8/dist-packages (from spacy) (5.2.1)
Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.8/dist-packages (from spacy) (2.0.8)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (4.64.0)
Requirement already satisfied: thinc<8.2.0,>=8.1.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (8.1.5)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.0.9)
Requirement already satisfied: setuptools in /home/repl/.local/lib/python3.8/site-packages (from spacy) (67.2.0)
Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.8/dist-packages (from spacy) (1.0.3)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/dist-packages (from packaging>=20.0->spacy) (3.0.9)
Requirement already satisfied: typing-extensions>=4.1.0 in /usr/local/lib/python3.8/dist-packages (from pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4->spacy) (4.3.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2.8)
Requirement already satisfied: charset-normalizer<3,>=2 in /usr/local/lib/python3.8/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2.0.12)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (2019.11.28)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests<3.0.0,>=2.13.0->spacy) (1.25.8)
Requirement already satisfied: confection<1.0.0,>=0.0.1 in /usr/local/lib/python3.8/dist-packages (from thinc<8.2.0,>=8.1.0->spacy) (0.0.3)
Requirement already satisfied: blis<0.8.0,>=0.7.8 in /usr/local/lib/python3.8/dist-packages (from thinc<8.2.0,>=8.1.0->spacy) (0.7.9)
Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.8/dist-packages (from typer<0.8.0,>=0.3.0->spacy) (8.1.3)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->spacy) (2.1.1)
/bin/bash: python: command not found
</code></pre>
<hr />
<p>And the whole point of this was to load the <code>spacy</code> <code>'en_core_web_sm'</code> tool.</p>
<p>If this is unfixable, then what other ways are there to load the <code>en_core_web_sm</code> model?</p>
|
<python><powershell><terminal><spacy><jupyter-lab>
|
2023-02-13 10:12:03
| 0
| 423
|
Pythoneer
|
75,434,485
| 11,668,258
|
Neo4j python driver | AttributeError: 'Session' object has no attribute 'execute_read'
|
<p>In the Neo4j Python code, I have an issue while query from Neo4j DB, with getting below error:</p>
<blockquote>
<p>AttributeError: 'Session' object has no attribute 'execute_read'</p>
</blockquote>
<p>I want to convert the results of cypher queries into a JSON format.I am using the neo4j python library to connect the graph DB.</p>
<p>Here is my current code:</p>
<pre><code>cypherQuery="MATCH (n:BusinessApplication) RETURN n"
#Run cypher query
with driver.session() as session:
results = session.execute_read(lambda tx: tx.run(cypherQuery).data())
driver.close()
</code></pre>
|
<python><python-3.x><neo4j><neo4j-python-driver>
|
2023-02-13 10:11:38
| 1
| 916
|
Tono Kuriakose
|
75,434,453
| 13,568,193
|
If dublicate key exists then update in postgres from python pandas
|
<p>I have a table name stock with (item,date) as index in Postgres database which record monthly record. Then I have pandas dataframe name stock_df which calculate monthly stock.
Now I need to append that stock_df in table stock in postgres in such way that.</p>
<pre><code>If stock_df has same item and same date like in stock table, it should update table and if it is new record it should be append.
</code></pre>
<p>I tried this:-</p>
<pre><code>stock_df.to_sql(name="stock", con=conn_postgres, if_exists='update', schema='arpan', index=False)
</code></pre>
<p>But this code check all the column and if it is dublicate it update or append the table which is different from what I desire My requirement is:- If stock_df item and date match the index of stock(item,date) then it should update data and if it didn't then it should append.</p>
|
<python><pandas><postgresql>
|
2023-02-13 10:08:02
| 0
| 383
|
Arpan Ghimire
|
75,434,294
| 12,931,358
|
Cannot copy/move file from remote SFTP server to local machine by Paramiko code running on remote SSH server
|
<p>I want to copy a file from my SFTP server to local computer. However, when I run my code, it didn't show any error while I still cannot find my file on local computer. My code like that:</p>
<pre><code>import paramiko
host_name ='10.110.100.8'
user_name = 'abc'
password ='xyz'
port = 22
remote_dir_name ='/data/.../PMC1087887_00003.jpg'
local_dir_name = 'D:\..\pred.jpg'
t = paramiko.Transport((host_name, port))
t.connect(username=user_name, password=password)
sftp = paramiko.SFTPClient.from_transport(t)
sftp.get(remote_dir_name,local_dir_name)
</code></pre>
<p>I have found the main problem. If I run my code in local in VS Code, it works. But when I login in my server by SSH in VS Code, and run my code on server, I found that my file appeared in current code folder (for example <code>/home/.../D:\..\pred.jpg</code>) and its name is <code>D:\..\pred.jpg</code>. How to solve this problem if I want to run code on server and download file to local?</p>
|
<python><ssh><sftp><paramiko>
|
2023-02-13 09:51:55
| 1
| 2,077
|
4daJKong
|
75,434,262
| 9,475,509
|
nbviewer can not display vpython result
|
<p>If a Jupyter Notebook file is rendering too long in GitHub, it is recommended <a href="https://towardsdatascience.com/jupyter-notebook-not-rendering-on-github-heres-a-simple-solution-e51aa6ca29b6" rel="nofollow noreferrer">here</a> to show it in <a href="https://nbviewer.org/" rel="nofollow noreferrer">nbviewer</a>, since it is <a href="https://stackoverflow.com/a/74319406/9475509">an issue with GitHub backend</a>.</p>
<p>From <a href="https://github.com/jupyter/nbviewer/blob/main/nbviewer/templates/faq.md#can-nbviewer-run-my-python-julia-r-scala-etc-notebooks" rel="nofollow noreferrer">here</a>, which I do not understand</p>
<blockquote>
<p>nbviewer does not execute notebooks. It only renders the inputs and outputs saved in a notebook document as a web page.</p>
</blockquote>
<p>I make a notebook importing <a href="https://vpython.org/" rel="nofollow noreferrer">vpython</a>, which is rendered locally, but can not be rendered in GitHub <a href="https://github.com/dudung/py-jupyter-nb/blob/main/src/import/external/vpython/basic/sphere_3.ipynb" rel="nofollow noreferrer">here</a> nor in nbviewer <a href="https://nbviewer.org/github/dudung/py-jupyter-nb/blob/main/src/import/external/vpython/basic/sphere_3.ipynb" rel="nofollow noreferrer">here</a>.</p>
<p>It also does not work with <a href="https://mybinder.org/" rel="nofollow noreferrer">binder</a> as <a href="https://hub.gke2.mybinder.org/user/dudung-py-jupyter-nb-qnn1mtpg/doc/tree/src/import/external/vpython/basic/sphere_3.ipynb" rel="nofollow noreferrer">here</a>.</p>
<p>Is there any solution to display it correctly online?</p>
|
<python><github><jupyter-notebook><vpython>
|
2023-02-13 09:47:56
| 1
| 789
|
dudung
|
75,434,180
| 5,703,081
|
Saving a dictionary with string keys and bytes values to file in Python
|
<p>I am writing a python program which should be able to run on Windows, iOS and Android.</p>
<p>To handle logins in I have a <code>dict</code> <code>{key_string: hashed_password_bytes}</code> Currently I manually convert any strings to binary and concatenate them all into a binary string I parse when loading credentials from file. Once it's in a readable binary format file I/O is pretty trivial</p>
<p>The code is pretty lengthy (below). <strong>Does anyone know of a package which can write dictionaries to file, not using clear text, that does the same job?</strong></p>
<p>Packages I have tried but couldn't find a suitable method of using them</p>
<ul>
<li>Pickle. Dropped due to the code injection risk <a href="https://docs.python.org/3/library/pickle.html" rel="nofollow noreferrer">https://docs.python.org/3/library/pickle.html</a></li>
<li>YAML, JSON etc as I don't want to use clear text solutions for storing credentials.</li>
<li>Struct requires that you know the length of variables and string variables before opening.</li>
<li>Netstruct allows variable length strings however you I couldn't figure out how to unpack dictionaries into it or the required format string to return an unknown number of a mixture of strings and bytes variables.</li>
<li>Pyvault would take the pain out of this task but unfortunately it requires sqlcipher and I don't want to install that on every device that the program runs on.</li>
</ul>
<p>Current solution (print calls will obviously be removed after a healthy amount of debugging):</p>
<pre><code>import scrypt
import os
MAX_TIME = 0.5 # Set here for ease of changing
FILE_001_DAT_LOCATION = "Source/001.dat"
DATA_LENGTH = 64 # Data length for hashing password
ENCODING_STRING = 'utf-8' # Used in all string encode and decode
PACKING_SIZE_STRING_LENGTH = 4 # The number of bytes used by the size indicator for the proceeding username/password
SIZE_STRING_FORMAT = '{:0>4}' # Used to format the size integer for packing dictionaries
# Turns {string: bytes} into bytes for writing to file
# Format is username1size, username1, password1size, password1, username2size...
def pack_credentials(credentials: dict):
bytes_out = b""
print("pack_credentials() called. Starting loop")
# Loop over every key and add keys
for key in credentials.keys():
# Pack username
print(f"Encoding username: {key}")
encoded_username = key.encode(ENCODING_STRING)
username_size = len(encoded_username)
print(f"Encoded username has size: {username_size}")
username_size_string = SIZE_STRING_FORMAT.format(username_size)
bytes_out += username_size_string.encode(ENCODING_STRING) + encoded_username
# Pack hashed password
print(f"Encoding password: {str(credentials[key])}")
password_size = len(credentials[key])
print(f"Hashed password has size: {password_size}")
password_size_string = SIZE_STRING_FORMAT.format(password_size)
bytes_out += password_size_string.encode(ENCODING_STRING) + credentials[key]
print("Packing complete. Returning bytes.")
return bytes_out
# Hash a raw password and return - for adding new credentials to the dictionary
def hash_password(password):
return scrypt.encrypt(os.urandom(DATA_LENGTH), password, maxtime=MAX_TIME)
# Turns {string: bytes} into bytes for writing to file
# Format is username1size, username1, password1size, password1, username2size...
def unpack_credentials(bytes_in: bytes):
credentials_out = {} # Defined here to add to in loop
slice_start = 0 # Starting index for next slice from bytes_in
slice_end = slice_start + PACKING_SIZE_STRING_LENGTH # Ending index for next slice from bytes_in
bytes_in_length = len(bytes_in)
print("unpack_credentials() called. Starting loop")
# Loop over every key and add keys
while slice_end < bytes_in_length:
# Unpack username
print(f"Unpacking username size")
username_length_bytes = bytes_in[slice_start:slice_end]
print(f"Username size: {username_length_bytes}")
print(f"Unpacking username")
slice_start = slice_end
slice_end = slice_end + int(username_length_bytes.decode(ENCODING_STRING))
username_bytes = bytes_in[slice_start:slice_end]
print(f"Username: {username_bytes.decode(ENCODING_STRING)}")
# Pack hashed password
print(f"Unpacking password size")
slice_start = slice_end
slice_end = slice_end + PACKING_SIZE_STRING_LENGTH
password_length_bytes = bytes_in[slice_start:slice_end]
print(f"Password size: {password_length_bytes.decode(ENCODING_STRING)}")
slice_start = slice_end
slice_end = slice_end + int(password_length_bytes.decode(ENCODING_STRING))
print(f"Unpacking password")
password_bytes = bytes_in[slice_start:slice_end]
#Prepare for next iteration
slice_start = slice_end
slice_end = slice_end + PACKING_SIZE_STRING_LENGTH
#Populate dict
credentials_out[username_bytes.decode(ENCODING_STRING)] = password_bytes
print("Unpacking complete. Returning dict.")
return credentials_out
if __name__ == '__main__':
# appGUI = MyApp()
# appGUI.run()
original_credentials = {"User 1 boiiiiiii": hash_password("password1wEDFRAwerwq rq"),
"Sonic the hedgehog": hash_password("password2qwergasdfghwerterwqgdfsg"),
"Misty water colour memories": hash_password("password3qdaserfgwqeryhger5wtywrthsfdghsd"),
"Niko the big black dog": hash_password("password4eadgrasewrdgerwqttgeqrwtgdfgewrtyertertgeertet")}
credentials_binary = pack_credentials(original_credentials)
new_credentials = unpack_credentials(credentials_binary)
print(
f"Do password hashes match? {original_credentials['User 1 boiiiiiii'] == new_credentials['User 1 boiiiiiii']}")
</code></pre>
|
<python><python-3.x><io><credentials>
|
2023-02-13 09:40:19
| 0
| 628
|
Flash_Steel
|
75,434,178
| 854,585
|
Convolve a grid of delta functions, with random positional offsets at the pixel level, with a kernel, in Python
|
<p>The convolution of a regular grid of Dirac delta functions with a kernel is pretty standard:</p>
<pre><code>import numpy as np
deltas = np.zeros((2048, 2048))
deltas[8::16,8::16] = 1
# Construct a Gaussian kernel
x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
gauss = np.exp(-(x*x + y*y)/2)
from scipy.signal import convolve2d
convolved_image = convolve2d(deltas, gauss)
</code></pre>
<p>Then you get all your Gaussians on a regular grid. An FFT convolution would be a lot faster and is as easy to implement, but that is not my issue.</p>
<p>Now, in order to generate input data for a statistical test, the Gaussians should not be centered exactly at the middle of a pixel, but have a random position anywhere on a pixel. So the chance of its peak being positioned in the lower left corner of a pixel should be equal to its peak being positioned at the center of pixel.</p>
<p>Besides these random positional fluctuations at the pixel level, I would still like to produce a regular grid of Gaussian, so from a distance it would still look like a perfectly regular grid.</p>
<p>How would you code this in Python?</p>
|
<python><random><scipy><statistics><convolution>
|
2023-02-13 09:40:04
| 1
| 409
|
Alex van Houten
|
75,434,165
| 653,770
|
How do I extend/pad a CDF plot (using seaborn.ecdfplot) to the x-axis limits?
|
<p>This code:</p>
<pre><code>N = 1000
df = pd.DataFrame({
'distribution_1': np.random.randn(N),
'distribution_2': np.random.randn(N)
})
df['distribution_2'] = df['distribution_2'] / 2
df = df.melt(value_vars=['distribution_1', 'distribution_2'], value_name='value', var_name='distribution')
g = sns.ecdfplot(data=df, x='value', hue='distribution')
g.set(ylim=(-.1, 1.1))
</code></pre>
<p>yields a figure like this:</p>
<p><a href="https://i.sstatic.net/Tk0F0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tk0F0.png" alt="Example of the orange CDF not being extended to the limits of the axis" /></a></p>
<p>The CDFs of both distributions do not extend to the limits of the x-axis. I would like to know how to do this. In ggplot2 there is a boolean flag called <code>pad</code> that does this (see e.g. <a href="https://ggplot2.tidyverse.org/reference/stat_ecdf.html" rel="nofollow noreferrer">REF</a>).</p>
<p>This is also possible in seaborn? (I was unable to find it ...)</p>
|
<python><pandas><ggplot2><seaborn>
|
2023-02-13 09:38:53
| 1
| 1,292
|
packoman
|
75,434,074
| 4,495,790
|
How to interpret cost in case of K-Prototypes in kmodes, Python?
|
<p>I'm doing a clustering on mixed (numerical and categorical) type data with <a href="https://github.com/nicodv/kmodes" rel="nofollow noreferrer">kmodes.kprototypes</a> in Python:</p>
<pre><code>from kmodes.kprototypes import KPrototypes
kp = KPrototypes(n_clusters=5, init='Cao')
kp.fit(df, categorical=categorical_column_list)
</code></pre>
<p>After done, I would like to evaluate/compare the results. As I work with mixed type data, I cannot use any of <a href="https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation" rel="nofollow noreferrer">sklearn's usual clustering evaluation methods</a> (they're for numerical data only as I understood). Instead, KPrototypes provides built in <code>.cost_</code> attribute with some kind of aggregated similarities (it's <a href="https://github.com/nicodv/kmodes/blob/7481a88841e67d99d47287ea7b8a36f4a14d5344/kmodes/kprototypes.py#L95" rel="nofollow noreferrer">source code docstring</a> declares that it is "<em>defined as the sum distance of all points to their respective cluster centroids</em>").</p>
<p>I'm not sure how to interpret this cost. Based on my poor understanding of the code, it is indeed the summary of distance values for each data points, but this would mean that the higher <code>n_clusters</code> is, the lower <code>cost_</code> would be (and indeed, this is the tendency) and in extreme (but absurd) case of <code>n_clusters = len(X)</code> <code>cost_</code> would be the lowest.</p>
<p>What actually <code>cost_</code> is here, and could it be used for usual evaluation? If not, what metrics to use here?</p>
|
<python><cluster-analysis><evaluation>
|
2023-02-13 09:30:19
| 1
| 459
|
Fredrik
|
75,433,926
| 242,944
|
Python construct a URL with username and password credentials
|
<p>I'm trying to port some basic code from JavaScript to Python. In JavaScript the URL class makes this trivial:</p>
<pre><code>const url = new URL('https://python.org');
url.username = 'foo';
url.password = 'bar';
console.log(url.href)
</code></pre>
<blockquote>
<p>https://foo:bar@python.org/</p>
</blockquote>
<p>How do I do this <em>natively</em> in Python? In other words, how can this be done using core libraries (without any third party libraries e.g. the <code>requests</code> module)?</p>
<p>Thanks in advance!</p>
|
<python><python-3.x><url>
|
2023-02-13 09:15:20
| 1
| 1,812
|
Casey
|
75,433,811
| 990,639
|
Unable to create URI with whitespace in MarkLogic
|
<p>I have created a Marklogic transform which tries to convert some URL encoded characters: [ ] and whitespace when ingesting data into database. This is the xquery code:</p>
<pre><code>xquery version "1.0-ml";
module namespace space = "http://marklogic.com/rest-api/transform/space-to-space";
declare function space:transform(
$context as map:map,
$params as map:map,
$content as document-node()
) as document-node()
{
let $puts := (
xdmp:log($params),
xdmp:log($context),
map:put($context, "uri", fn:replace(map:get($context, "uri"), "%5B+", "[")),
map:put($context, "uri", fn:replace(map:get($context, "uri"), "%5D+", "]")),
map:put($context, "uri", fn:replace(map:get($context, "uri"), "%20+", " ")),
xdmp:log($context)
)
return $content
};
</code></pre>
<p>When I tried this with my python code below</p>
<pre><code>def upload_document(self, inputContent, uri, fileType, database, collection):
if fileType == 'XML':
headers = {'Content-type': 'application/xml'}
fileBytes = str.encode(inputContent)
elif fileType == 'TXT':
headers = {'Content-type': 'text/*'}
fileBytes = str.encode(inputContent)
else:
headers = {'Content-type': 'application/octet-stream'}
fileBytes = inputContent
endpoint = ML_DOCUMENTS_ENDPOINT
params = {}
if uri is not None:
encodedUri = urllib.parse.quote(uri)
endpoint = endpoint + "?uri=" + encodedUri
if database is not None:
params['database'] = database
if collection is not None:
params['collection'] = collection
params['transform'] = 'space-to-space'
req = PreparedRequest()
req.prepare_url(endpoint, params)
response = requests.put(req.url, data=fileBytes, headers=headers, auth=HTTPDigestAuth(ML_USER_NAME, ML_PASSWORD))
print('upload_document result: ' + str(response.status_code))
if response.status_code == 400:
print(response.text)
</code></pre>
<p>The following lines are from the xquery logging:</p>
<ol>
<li><p>2023-02-13 16:59:00.067 Info: {}</p>
</li>
<li><p>2023-02-13 16:59:00.067 Info:
{"input-type":"application/octet-stream",
"uri":"/Judgment/26856/supportingfiles/[TEST] 57_image1.PNG", "output-type":"application/octet-stream"}</p>
</li>
<li><p>2023-02-13 16:59:00.067 Info:
{"input-type":"application/octet-stream",
"uri":"/Judgment/26856/supportingfiles/[TEST] 57_image1.PNG", "output type":"application/octet-stream"}</p>
</li>
<li><p>2023-02-13 16:59:00.653 Info: Status 500: REST-INVALIDPARAM: (err:FOER0000)
Invalid parameter: invalid uri:
/Judgment/26856/supportingfiles/[TEST] 57_image1.PNG</p>
</li>
</ol>
|
<python><rest><marklogic>
|
2023-02-13 09:03:04
| 2
| 1,147
|
Eugene
|
75,433,717
| 18,949,720
|
module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' when importing segmentation_models
|
<p>I am working on google colab with the segmentation_models library. It worked perfectly the first week using it, but now it seems that I can't import the library anymore. Here is the error message, when I execute import segmentation_models as sm :</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-6f48ce46383f> in <module>
1 import tensorflow as tf
----> 2 import segmentation_models as sm
3 frames
/usr/local/lib/python3.8/dist-packages/efficientnet/__init__.py in init_keras_custom_objects()
69 }
70
---> 71 keras.utils.generic_utils.get_custom_objects().update(custom_objects)
72
73
AttributeError: module 'keras.utils.generic_utils' has no attribute 'get_custom_objects'
</code></pre>
<p>Colab uses tensorflow version 2.11.0.</p>
<p>I did not find any information about this particular error message. Does anyone know where the problem may come from ?</p>
|
<python><tensorflow><keras><google-colaboratory><image-segmentation>
|
2023-02-13 08:54:42
| 6
| 358
|
Droidux
|
75,433,358
| 6,930,340
|
How to handle Unsupported operand types in mypy when type should be clear?
|
<p>Consider the following toy example.</p>
<pre><code>class MyClass:
def __init__(self, optional_arg: float | None = None) -> None:
self.optional_arg = optional_arg
self._parse_args()
def _parse_args(self) -> None:
if self.optional_arg is None:
self.optional_arg = 0.5
def multiply(self) -> float:
# mypy is complaining with the below statement:
# error: Unsupported operand types for * ("float" and "None") [operator]
# Right operand is of type "Optional[float]"
return 0.5 * self.optional_arg
</code></pre>
<p>After instantiating this class with <code>optional_arg=None</code>, it should be clear that <code>self.optional_arg</code> will be set to a <code>float</code> due to the <code>_parse_args</code> method.</p>
<p>In my view, it should be clear that when calling the method <code>multiply</code>, it will return a <code>float</code>. However, <code>mypy</code> is still complaining that <code>self.optional_arg</code> might be <code>None</code>.</p>
<p>What is a pythonic way to tell <code>mypy</code> that <code>self.optional_arg</code> can't possibly be <code>None</code>?</p>
|
<python><mypy>
|
2023-02-13 08:17:35
| 2
| 5,167
|
Andi
|
75,433,252
| 2,531,161
|
Python annotation for list of specific types
|
<p>I would like to annotate a <code>list</code> of a <code>str</code> and an <code>int</code>, e.g.:</p>
<pre><code> state = ['mystring', 4]
</code></pre>
<p>I can't use tuple, becuase I have to change the integer without changing the reference of the state (it is passed to functions, which also have to modify it). <em>Note that this is a typical tuple case, just since <code>int</code> is immutable in python, ...</em></p>
<pre><code>def myfunc(state: ?) -> bool:
...
state[1] += 1
...
return success
</code></pre>
<p>What is the best way to annotate it? I tried</p>
<pre><code> state: list[str, int]
</code></pre>
<p>(similarly to a tuple) which compiles well, but <code>mypy</code> throws an exception, that <code>list</code> expects only a single argument. I can also use</p>
<pre><code> state: list[str|int]
</code></pre>
<p>But this is a "different type" for different use cases.</p>
|
<python><list><python-typing>
|
2023-02-13 08:06:30
| 1
| 500
|
FERcsI
|
75,433,241
| 19,238,204
|
Python Pandas Error: 'DataFrame' object has no attribute 'Date'
|
<p>I am trying this (<a href="https://towardsdatascience.com/analyzing-world-stock-indices-performance-in-python-610df6a578f" rel="nofollow noreferrer">https://towardsdatascience.com/analyzing-world-stock-indices-performance-in-python-610df6a578f</a>) on Jupyter Notebook with Python 3.9.13</p>
<p>The full code:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import yfinance as yf
# Retrieving List of World Major Stock Indices from Yahoo! Finance
df_list = pd.read_html('https://finance.yahoo.com/world-indices/')
majorStockIdx = df_list[0]
majorStockIdx.head()
tickerData = yf.Ticker('^GSPC')
tickerDf1 = tickerData.history(period='1d', start='2010-1-1', end='2020-10-1')
stock_list = []
for s in majorStockIdx.Symbol: # iterate for every stock indices
# Retrieve data from Yahoo! Finance
tickerData = yf.Ticker(s)
tickerDf1 = tickerData.history(period='1d', start='2010-1-1', end='2020-9-30')
# Save historical data
tickerDf1['ticker'] = s # don't forget to specify the index
stock_list.append(tickerDf1)
# Concatenate all data
msi = pd.concat(stock_list, axis = 0)
# categorize each index by the region
region_idx={ 'US & Canada' : ['^GSPC', '^DJI', '^IXIC', '^RUT','^GSPTSE'],
'Latin America' : ['^BVSP', '^MXX', '^IPSA'],
'East Asia' : ['^N225', '^HSI', '000001.SS', '399001.SZ', '^TWII', '^KS11'],
'ASEAN & Oceania' : ['^STI', '^JKSE', '^KLSE','^AXJO', '^NZ50'],
'South & West Asia' : ['^BSESN', '^TA125.TA'],
'Europe' : ['^FTSE', '^GDAXI', '^FCHI', '^STOXX50E','^N100', '^BFX']
}
# make a new column for the region.
def getRegion(ticker):
for k in region_idx.keys():
if ticker in region_idx[k]:
return k
msi['region']= msi.ticker.apply(lambda x: getRegion(x))
# Get the data for 4 Jan 2010
begRef = msi.loc[msi.Date == '2010-01-04']
def retBegin(ticker, val):
start_val = begRef.loc[begRef.ticker == ticker, 'Close'].values[0]
return (val/start_val - 1) * 100
msi['chBegin'] = msi.apply(lambda x: retBegin(x.ticker, x.Close), axis = 1)
# Transform the data to be ticker column-wise
chBegin = msi.groupby(['Date', 'ticker'])['chBegin'].first().unstack()
# Fill null values with the values on the row before
chBegin = chBegin.fillna(method='bfill')
fig, axes = plt.subplots(3,2, figsize=(12, 8),sharex=True)
pagoda = ["#965757", "#D67469", "#4E5A44", "#A1B482", '#EFE482', "#99BFCF"] # for coloring
for i, k in enumerate(region_idx.keys()):
# Iterate for each region
ax = axes[int(i/2), int(i%2)]
for j,t in enumerate(region_idx[k]):
# Iterate and plot for each stock index in this region
ax.plot(chBegin.index, chBegin[t], marker='', linewidth=1, color = pagoda[j])
ax.legend([ticker[t] for t in region_idx[k]], loc='upper left', fontsize=7)
ax.set_title(k, fontweight='bold')
fig.text(0.5,0, "Year", ha="center", va="center", fontweight ="bold")
fig.text(0,0.5, "Price Change/Return (%)", ha="center", va="center", rotation=90, fontweight ="bold")
fig.suptitle("Price Change/Return for Major Stock Indices based on 2010", fontweight ="bold",y=1.05, fontsize=14)
fig.tight_layout()
</code></pre>
<p>I am using Jupyter Notebook and the problem occurs at this section:</p>
<pre><code># Get the data for 4 Jan 2010
begRef = msi.loc[msi.date == '2010-01-04']
def retBegin(ticker, val):
start_val = begRef.loc[begRef.ticker == ticker, 'Close'].values[0]
return (val/start_val - 1) * 100
msi['chBegin'] = msi.apply(lambda x: retBegin(x.ticker, x.Close), axis = 1)
# Transform the data to be ticker column-wise
chBegin = msi.groupby(['Date', 'ticker'])['chBegin'].first().unstack()
# Fill null values with the values on the row before
chBegin = chBegin.fillna(method='bfill')
</code></pre>
<p>I get:</p>
<p>AttributeError: 'DataFrame' object has no attribute 'Date'</p>
<p>How to fix this?</p>
<p>this is the full error:</p>
<pre><code>AttributeError Traceback (most recent call last)
Input In [14], in <cell line: 2>()
1 # Get the data for 4 Jan 2010
----> 2 begRef = msi.loc[msi.Date == '2010-01-04']
3 def retBegin(ticker, val):
4 start_val = begRef.loc[begRef.ticker == ticker, 'Close'].values[0]
File ~/.julia/conda/3/lib/python3.9/site-packages/pandas/core/generic.py:5575, in NDFrame.__getattr__(self, name)
5568 if (
5569 name not in self._internal_names_set
5570 and name not in self._metadata
5571 and name not in self._accessors
5572 and self._info_axis._can_hold_identifiers_and_holds_name(name)
5573 ):
5574 return self[name]
-> 5575 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'Date'
</code></pre>
<p>I hope someone here can help me out. Is the package incorrect? the code incorrect? or something else?</p>
|
<python><pandas>
|
2023-02-13 08:05:11
| 2
| 435
|
Freya the Goddess
|
75,433,217
| 9,092,563
|
How do you add environment variables to a python scrapy project? dotenv didn't work
|
<p>I'm having trouble incorporating an IP address into a format string in my Python Scrapy project. I was trying to use python-dotenv to store sensitive information, such as server IPs, in a .env file and load it into my project, instead of hardcoding it.</p>
<p>I added python-dotenv to the settings.py file of my Scrapy project, but when I run a function that should use the values stored in os, I get an error saying that it can't detect dotenv. Can someone help me understand why this is happening and how to properly incorporate an IP address in a format string using python-dotenv in a Python Scrapy project?</p>
|
<python><scrapy><environment-variables><python-dotenv>
|
2023-02-13 08:02:23
| 1
| 692
|
rom
|
75,433,179
| 2,604,247
|
How to form an OPCUA connection in python from server IP address, port, security policy and credentials?
|
<p>I have never used OPC-UA before, but now faced with a task where I have to pull data from a OPC-UA machine to push to a SQL database using python. I can handle the database part, but how to basically connect to the OPCUA server when I have only the following fields available?</p>
<ul>
<li>IP address 192.168.38.94</li>
<li>Port 8080</li>
<li>Security policy: Basic256</li>
<li>Username: della_client</li>
<li>Password: amorphous@#</li>
</ul>
<p>Some tutorials I saw directly use a url, but is there any way to form the URL from these parameters, or should I ask the machine owners something more specific to be able to connect? I just want to be sure of what I need before I approach him.</p>
<p>Related, how to use the same parameters in the application called UA-Expert to verify the connections as well? Is it possible?</p>
<p>If it is relevant, I am using python 3.10 on Ubuntu 22.04.</p>
|
<python><opc-ua><connectivity><opc>
|
2023-02-13 07:58:38
| 1
| 1,720
|
Della
|
75,433,136
| 14,673,832
|
How to convert s-expression which is in string into a tuple in Python
|
<p>I have an S-expression in Python which I need to convert it into tuple with operators (add,multiply) inside the semi-colon ie as a string.</p>
<p>I have the following code snippet:</p>
<p>This code works fine, but the requirement of the work is that the user doesnot input tuple like <code>('add', ('multiply', 3, 4), 5)</code> instead pass an s-expression like <code>"(add (multiply 2 3) 2)"</code> from the system command line.</p>
<p>The command to put an input will be like this:</p>
<pre><code>python calc.py "(add (multiply 2 3) 2)"
</code></pre>
<p>For now I have already defined expression in the code, but the users will be giving the expression and we fetch it using <code>sys.argv</code> . So how to convert the user input <code>"(add (multiply 2 3) 2)"</code> to <code>('add', ('multiply', 3, 4), 5)</code> so that I can use the above code easily.</p>
<p><strong>Update:</strong></p>
<p>I tried the following code but it didnt give the desired result.</p>
<pre><code>def from_expression_string(expression_string):
tokens = expression_string.strip().split()
#['(add', '(multiply', '2', '3)', '2)']
stack = []
for token in tokens:
if token == '(':
# print("hello")
pass
elif token == ')':
args = [stack.pop(), stack.pop()]
stack.append((stack.pop(), *reversed(args)))
else:
try:
stack.append(int(token))
except ValueError:
stack.append(token)
return stack[0]
</code></pre>
<p>The output of this snippet gives <code>(add</code>. The above code seems easy to understand but is not working.</p>
|
<python><command-line><tuples><s-expression>
|
2023-02-13 07:53:37
| 1
| 1,074
|
Reactoo
|
75,433,096
| 6,113,142
|
Combine two dataframes cell-wise by a third func mapping dataframe
|
<p>Consider <code>df_a</code> and <code>df_b</code> where</p>
<ol>
<li><code>df_a.index == df_b.index</code></li>
<li><code>df_a.columns == df_b.columns</code></li>
<li><code>df_a.dtypes == df_b.dtypes == float</code></li>
</ol>
<p>Now I have <s>another dataframe</s>, <code>op</code>, a <code>pd.Series</code> where</p>
<ol>
<li><code>op.index == df_a.index</code></li>
<li><code>op.shape[0] == 1</code>, ie, <code>op</code> is a vector (<code>pd.Series</code>) and each row (element) is of the form <code>operator(a: float, b: float) -> bool</code></li>
</ol>
<p>I want to achieve <code>df_a <op> df_b</code>, ie, each cell of <code>a</code> and <code>b</code> must be compared using <code>operator</code> and the result <code>df_op</code> should have shape identical to <code>df_a</code> and <code>df_b</code>.</p>
<p>EDIT: here's a minimal example of what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>from operator import gt, lt
from numpy import nan
df_a = pd.DataFrame({'CAT': {'marketCapTTM': 33.39465142383641, 'shareholdersEquityPerShareTTM': 81.05974830429687, 'tangibleBookValuePerShareTTM': 5.221448839220668, 'bookValuePerShareTTM': 12.643766377289978, 'netIncomePerShareTTM': 34.43608541699433, 'dividendYieldTTM': nan, 'earningsYieldTTM': nan, 'priceToSalesRatioTTM': nan, 'revenuePerShareTTM': nan, 'ROE': 1.6988369839109645, 'enterpriseValueTTM': nan, 'debtToEquityTTM': 2.8755516480792895, 'freeCashFlowPerShareTTM': 11.318374596873056, 'operatingCashFlowPerShareTTM': 3.296170855785674}, 'KO': {'marketCapTTM': 16.19804346060541, 'shareholdersEquityPerShareTTM': 13.520338019619583, 'tangibleBookValuePerShareTTM': 1.4791311212081197, 'bookValuePerShareTTM': 5.3419302547561, 'netIncomePerShareTTM': 44.63557734663028, 'dividendYieldTTM': 33.504017101396435, 'earningsYieldTTM': nan, 'priceToSalesRatioTTM': 11.13681987256062, 'revenuePerShareTTM': 27.529850258482526, 'ROE': nan, 'enterpriseValueTTM': 39.04183307431106, 'debtToEquityTTM': nan, 'freeCashFlowPerShareTTM': nan, 'operatingCashFlowPerShareTTM': nan}, 'DIS': {'marketCapTTM': 10.350071360171354, 'shareholdersEquityPerShareTTM': 92.3074779922466, 'tangibleBookValuePerShareTTM': 2.489890153696122, 'bookValuePerShareTTM': 16.84014609959747, 'netIncomePerShareTTM': 29.892799370630662, 'dividendYieldTTM': 7.276474595382507, 'earningsYieldTTM': 38.39492540086857, 'priceToSalesRatioTTM': 4.0478509770567825, 'revenuePerShareTTM': 9.658344729379438, 'ROE': 45.11442699745053, 'enterpriseValueTTM': nan, 'debtToEquityTTM': nan, 'freeCashFlowPerShareTTM': nan, 'operatingCashFlowPerShareTTM': nan}})
df_b = pd.DataFrame({'CAT': {'marketCapTTM': 29.887325295389743, 'shareholdersEquityPerShareTTM': 31.83889927186704, 'tangibleBookValuePerShareTTM': 27.134180811823384, 'bookValuePerShareTTM': 10.849504294710492, 'netIncomePerShareTTM': 20.887572108177135, 'dividendYieldTTM': nan, 'earningsYieldTTM': nan, 'priceToSalesRatioTTM': nan, 'revenuePerShareTTM': nan, 'ROE': 25.230080182979187, 'enterpriseValueTTM': nan, 'debtToEquityTTM': 50.175058716128994, 'freeCashFlowPerShareTTM': 39.21225330073516, 'operatingCashFlowPerShareTTM': 25.26732056715597}, 'KO': {'marketCapTTM': 35.57854672737116, 'shareholdersEquityPerShareTTM': 52.098967463491945, 'tangibleBookValuePerShareTTM': 22.943564836479496, 'bookValuePerShareTTM': 7.022975757514489, 'netIncomePerShareTTM': 4.90371517588241, 'dividendYieldTTM': 2.1442957674601324, 'earningsYieldTTM': nan, 'priceToSalesRatioTTM': 64.68716099305611, 'revenuePerShareTTM': 9.960264176165484, 'ROE': nan, 'enterpriseValueTTM': 11.32154660489711, 'debtToEquityTTM': nan, 'freeCashFlowPerShareTTM': nan, 'operatingCashFlowPerShareTTM': nan}, 'DIS': {'marketCapTTM': 5.935286159527827, 'shareholdersEquityPerShareTTM': 17.255701701169624, 'tangibleBookValuePerShareTTM': 37.50072163718486, 'bookValuePerShareTTM': 14.009615847232455, 'netIncomePerShareTTM': 17.91946520859328, 'dividendYieldTTM': 2.2431492946899283, 'earningsYieldTTM': 47.90549865927282, 'priceToSalesRatioTTM': 38.315078361282225, 'revenuePerShareTTM': 1.7762807962951885, 'ROE': 44.23368129207099, 'enterpriseValueTTM': nan, 'debtToEquityTTM': nan, 'freeCashFlowPerShareTTM': nan, 'operatingCashFlowPerShareTTM': nan}})
op = pd.Series({'marketCapTTM': <built-in function gt>, 'shareholdersEquityPerShareTTM': <built-in function gt>, 'tangibleBookValuePerShareTTM': <built-in function gt>, 'bookValuePerShareTTM': <built-in function gt>, 'netIncomePerShareTTM': <built-in function gt>, 'dividendYieldTTM': <built-in function gt>, 'earningsYieldTTM': <built-in function gt>, 'priceToSalesRatioTTM': <built-in function gt>, 'revenuePerShareTTM': <built-in function gt>, 'ROE': <built-in function gt>, 'enterpriseValueTTM': <built-in function gt>, 'debtToEquityTTM': <built-in function gt>, 'freeCashFlowPerShareTTM': <built-in function gt>, 'operatingCashFlowPerShareTTM': <built-in function gt>})
print(df_a.shape, df_b.shape, op.shape, type(df_a.shape), type(df_b.shape), type(op.shape), df_a.columns, op.index)
</code></pre>
<p>Stats are as follows:</p>
<pre class="lang-bash prettyprint-override"><code>(14, 3) (14, 3) (14,) <class 'pandas.core.frame.DataFrame'> <class 'pandas.core.frame.DataFrame'> <class 'pandas.core.series.Series'> Index(['CAT', 'KO', 'DIS'], dtype='object') Index(['marketCapTTM', 'shareholdersEquityPerShareTTM',
'tangibleBookValuePerShareTTM', 'bookValuePerShareTTM',
'netIncomePerShareTTM', 'dividendYieldTTM', 'earningsYieldTTM',
'priceToSalesRatioTTM', 'revenuePerShareTTM', 'ROE',
'enterpriseValueTTM', 'debtToEquityTTM', 'freeCashFlowPerShareTTM',
'operatingCashFlowPerShareTTM'],
dtype='object', name='indicator')
</code></pre>
<p>Desired output is generated (currently) using:</p>
<pre class="lang-py prettyprint-override"><code>signals = _funcs.reset_index().apply(
lambda row: row.direction(
df_a.loc[row.indicator],
df_b.loc[row.indicator]
),
axis=1
).set_index(_funcs.index)
</code></pre>
<p>and results in:</p>
<pre class="lang-bash prettyprint-override"><code> CAT KO DIS
indicator
marketCapTTM True False True
shareholdersEquityPerShareTTM True False True
tangibleBookValuePerShareTTM False False False
bookValuePerShareTTM True False True
netIncomePerShareTTM True True True
</code></pre>
|
<python><pandas>
|
2023-02-13 07:47:57
| 1
| 350
|
Pratik K.
|
75,433,011
| 9,648,374
|
Convert column containing '.' in dataframe to dictonary
|
<p>I want to convert df to df1</p>
<pre><code>df = pd.DataFrame({'A': [1], 'in.1': [8977], 'in.2': [8977], 'B': [
{
"C.i": 87387460,
"C.j":233
}]})
df1 = pd.DataFrame({'A': [1], 'in':{'1': [8977], '2': [8977]}, 'B': [
{"C":{
"i": 87387460,
"j":233}
}]})
</code></pre>
<p>I tried using recursive function but no luck.</p>
<p>My Code:</p>
<pre><code>def convert_df(df):
if df.shape[0] == 0:
return []
elif df.shape[0] == 1 and df.shape[1] == 1:
return df.iloc[0, 0]
elif df.shape[1] == 1:
return [convert_df(pd.DataFrame(val)) for val in df[df.columns[0]].tolist()]
else:
return [{col_name: convert_df(pd.DataFrame(val)) for col_name, val in row.to_dict().items()} for i, row in df.iterrows()]
</code></pre>
|
<python><pandas><dataframe>
|
2023-02-13 07:35:49
| 2
| 489
|
Isha Nema
|
75,432,454
| 17,696,880
|
Split this string into substrings that are in middle of certain patterns, and within them a pattern and 3 different words, or more, from it?
|
<p>I was trying several ways to divide these strings according to the separators that are in the <code>separator_symbols</code> variable, but only if the content in the middle meets the fact that there is a substring that meets the sequence of the pattern <code>"\(\(VERB\)\s*\ w+(?:\s+\w+)*\)"</code> and that also finds within that substring 3 different words of that pattern (I interpret the word as a sequence of text, with uppercase or/and lowercase letters, which is separated from the rest of the text by at least one whitespace)</p>
<pre class="lang-py prettyprint-override"><code>import re
def this_substring_has_a_verb_substring(substring):
pattern = r"\(\(VERB\)\s*\w+(?:\s+\w+)*\)"
return re.search(pattern, substring) is not None
#example 1
input_string = 'El árbol ((VERB es)) grande, las hojas ((VERB)son) doradas y ((VERB)son) secas, los juegos del parque ((VERB)estan) algo oxidados y ((VERB)es) peligroso subirse a ellos'
#example 2
input_string = 'hay que ((VERB) correr), ((VERB)saltar), ((VERB)volar) y ((VERB)caminar) para llegar a ese lugar',
separator_symbols = r'(?:(?:,|;|\.|)\s*y\s+|,\s*|;\s*)(?:[A-Z]|l[oa]s|la|[eé]l)'
</code></pre>
<p>In order to divide these strings and obtain the outputs that are at the end of this question, I have tried 2 ways to achieve it, although I found limitations in both.</p>
<ul>
<li>As a first option try to create a very generic pattern that is enclosed in the middle of the symbols of the pattern stored in separator_symbols or that is limited by the start or end of the original string.</li>
</ul>
<pre><code>#OPTION 1
# "\(\(VERB\)\s*\w+(?:\s+\w+)*\)" #((VERB)asfdgfg)
# "((?:(?:\w+))?){3}" # 3 words
#the identification pattern should not tolerate so many possibilities, since it would be useless in many cases in its role of information validation
captured_sentence_part = r"(.)*"
substrings = re.findall(separator_symbols + captured_sentence_part + r'(?:' + separator_symbols + r'|$)', string)
</code></pre>
<ul>
<li>In the second option try to use the <code>split()</code> method to separate the words in a list, and then use the <code>len()</code> function to count the number of items in this list of hypothetical divisions of the original input string, although in reality everything is done in an auxiliary variable, since you still have to confirm with an if that it meets both conditions</li>
</ul>
<p>But the problem with this option 2 is that it's too complicated to put the split results together and put them together in a list as seen at the end of the question</p>
<pre><code>#OPTION 2
pattern = r'(?:(?:,|;|\.|)\s*y\s+|,\s*|;\s*)(?:[A-Z]|l[oa]s|la|[eé]l)'
sub_sentences_list = re.split(pattern, input_string )
for i_sub_input_text in sub_sentences_list:
words = i_sub_input_text.split()
word_count = len(words)
#conditions validation
if(word_count > int(number_of_words) + 1) and this_substring_has_a_verb_substring(i_sub_input_text) == True:
print("extraction!")
</code></pre>
<p>The outputs in each of the examples should look like these lists with the divisions of the original string:</p>
<pre><code>#for example 1:
['El árbol ((VERB)es) grande,',
'las hojas ((VERB)son) doradas y ((VERB)son) secas,',
'los juegos del parque ((VERB)estan) algo oxidados y ((VERB)es) peligroso subirse a ellos']
#for example 2:
['hay que ((VERB) correr), ((VERB)saltar), ((VERB)volar) y ((VERB)caminar) para llegar a ese lugar']
</code></pre>
<p>Note that in example 2, the string was not split, since it did not meet the condition of having present a sequence <code>((VERB) )</code> and 3 other words from it, in the middle of the <code>separator_symbols</code></p>
<p>What method would be the most recommended? And how should I fix it to get this list as output?</p>
|
<python><python-3.x><regex><split><regex-group>
|
2023-02-13 06:21:02
| 0
| 875
|
Matt095
|
75,432,168
| 188,331
|
Juypter Lab is running with GPU (claimed to be), but nvidia-smi said otherwise
|
<p>Here is the output of <code>nvidia-smi</code> when the GPU-intensive codes are running:</p>
<pre><code>$ nvidia-smi
Mon Feb 13 10:20:42 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11 Driver Version: 525.60.11 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:81:00.0 Off | Off |
| 0% 47C P8 28W / 450W | 8MiB / 24564MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:C1:00.0 Off | Off |
| 0% 36C P8 29W / 450W | 8MiB / 24564MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1947 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1947 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
</code></pre>
<p>it shows 0% utilization for both units and the script does not show up in the Processes list, which means that the GPU is idle.</p>
<p>The check on the running device is:</p>
<pre><code>import torch
# show PyTorch version
print(torch.__version__)
# Check if CUDA is available
print('Is CUDA available?', torch.cuda.is_available())
</code></pre>
<p>And the output of the above is:</p>
<pre><code>1.13.1+cu117
Is CUDA available? True
</code></pre>
<p>The concerned GPU codes are available <a href="https://github.com/shivanraptor/machine-learning-study/blob/main/RNN%20Cantonese%20Test%20001.ipynb" rel="nofollow noreferrer">here</a>. Sorry for not pasting the whole chunk of codes here, as it is too long.</p>
<hr />
<p><em>UPDATE</em>: The concerned code is as follows:</p>
<pre><code>class Net(nn.Module):
device = torch.device("cuda") # I added this
def __init__(self, n_vocab, embedding_dim, hidden_dim, dropout=0.2):
super(Net, self).__init__()
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim # dim = dimension
embedding_dim.to(device) # I added this
self.embeddings = nn.Embedding(n_vocab, embedding_dim)
# LSTM Layer (input_size, hidden_size)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, dropout=dropout)
# Fully connected layer, change "Hidden State" Linear to output
self.hidden2out = nn.Linear(hidden_dim, n_vocab)
def forward(self, seq_in):
seq_in.to(device) # I added this
embeddings = self.embeddings(seq_in.t())
lstm_out, _ = self.lstm(embeddings)
ht = lstm_out[-1]
out = self.hidden2out(ht)
return out
</code></pre>
<p>The <code>RuntimeError</code> occurs at the line <code>embeddings = self.embeddings(seq_in.t())</code>.</p>
<p>The full <code>RuntimeError</code> is as follows:</p>
<blockquote>
<p>RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)</p>
</blockquote>
<p>How can I modify the <code>Net</code> class in order to make it working again?</p>
|
<python><pytorch><gpu><jupyter-lab>
|
2023-02-13 05:29:26
| 1
| 54,395
|
Raptor
|
75,432,155
| 10,620,003
|
Put the values in the middle of the bars in histogram plot
|
<p>I have a histogram plot and I want to add the values of the VAL in the middle of the bar plot with a color which are fitted with the color of the bar. Thank you. Like the following image I only use the black color to show the number <a href="https://i.sstatic.net/OToRB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OToRB.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import seaborn as sns
VAL = [8, 4, 5, 20]
objects = ['h', 'b', 'c', 'a']
y_pos = np.arange(len(objects))
cmap = plt.get_cmap('RdYlGn_r')
norm = plt.Normalize(vmin=min(VAL), vmax=max(VAL))
ax = sns.barplot(x=VAL, y=objects, hue=VAL, palette='RdYlGn_r', dodge=False)
plt.yticks(y_pos, objects)
plt.show()
</code></pre>
|
<python><matplotlib><histogram>
|
2023-02-13 05:26:29
| 2
| 730
|
Sadcow
|
75,432,101
| 19,950,360
|
how to create plotly histogram with two columns
|
<p>I have dataframe like this</p>
<pre><code>df = pd.DataFrame({'_3321131460': ['col1', 'col1', 'col2'], '_3952604542': ['col1', 'col2', 'col2']})
df
</code></pre>
<p>So want to create graph like this
<a href="https://i.sstatic.net/egOcD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/egOcD.png" alt="enter image description here" /></a></p>
<p>one graph but x are two columns and legent have two columns data</p>
<p>And y is count But know how to do that</p>
<p>So one columns histogram, another columns histogram in one graph</p>
<p>But have same legend</p>
|
<python><graph><plotly>
|
2023-02-13 05:15:43
| 1
| 315
|
lima
|
75,432,098
| 4,420,797
|
OSError: [Errno 28] No space left on device: Unzip file on HPC Cloud Server
|
<p>I am unzip file of 35.8 GB and have a <code>1.7 TB</code> space in my Virtual machine to extract zip file. Also, I already give all security permission to execute it but still stuck on <code>no space left</code></p>
<p>Directory is<code> /home2/coremax</code> and you can see <code>10.100.201.21:/cloudhome/coremax 2097152000 692367040 1404784960 34% /home2/coremax</code> there are lots of space. I am searching on Google but this issue but did not find a solution</p>
<p><img src="https://user-images.githubusercontent.com/11488932/220792843-4e5f573e-6fa1-4d9b-968b-a184c8f233d9.png" alt="image" /></p>
<p><strong>RAM utilization</strong></p>
<p><img src="https://user-images.githubusercontent.com/11488932/220793935-89c6b834-bde8-415d-afb0-54924ec1758a.png" alt="image" /></p>
<p><strong>Traceback</strong></p>
<pre><code>/home2/coremax/Documents/ocr_dataset/datasets/SynthText/SynthText
Unpacking SynthText: 33%|███▎ | 252421/772875 [2:13:24<4:35:03, 31.54it/s]
Traceback (most recent call last):
File "/home2/coremax/Documents/doctr/references/recognition/original_pytorch_train.py", line 521, in <module>
main(args)
File "/home2/coremax/Documents/doctr/references/recognition/original_pytorch_train.py", line 324, in main
synth_train = SynthText(
File "/home2/coremax/Documents/doctr/doctr/datasets/synthtext.py", line 114, in __init__
tmp_img.save(os.path.join(reco_folder_path, f"{reco_images_counter}.png"))
File "/home2/coremax/anaconda3/envs/doctr_hpc/lib/python3.9/site-packages/PIL/Image.py", line 2350, in save
fp = builtins.open(filename, "w+b")
OSError: [Errno 28] No space left on device: '/home2/coremax/Documents/ocr_dataset/datasets/SynthText/SynthText/SynthText_recognition_train/2097151.png'
</code></pre>
|
<python><zip><extract><torch><unzip>
|
2023-02-13 05:15:17
| 1
| 2,984
|
Khawar Islam
|
75,431,947
| 10,964,685
|
Display interactive metrics within dbc.Card - Dash
|
<p>I'm aiming to include interactive counts as metrics. Specifically, insert total sums within cards that will changed depending on the variable selected from a nav bar.</p>
<p>Using below, the Navbar is used to display the proportion of successes and failures as a pie chart. I want to use the corresponding sums of <em>success_count</em> and <em>failure_count</em> as a metric to be displayed as a number.</p>
<p>Is it possible to return these values and display them within the designated dbc.Card?</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objs as go
import dash
from dash import dcc
from dash import html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output
# This dataframe has 244 lines, but 4 distinct values for `day`
url="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/spacex_launch_dash.csv"
spacex_df = pd.read_csv(url)
spacex_df.rename(columns={'Launch Site':'Site'}, inplace=True)
external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP]
app = dash.Dash(__name__, external_stylesheets = external_stylesheets)
success_card = dbc.Card(
[
dbc.CardHeader("Some header"),
dbc.CardBody(
[
html.H6("Success Count", className="card-title"),
html.P("###total count of successes", className="card-text"),
]
),
],
className='text-center m-4'
)
failure_card = dbc.Card(
[
dbc.CardHeader("Some header"),
dbc.CardBody(
[
html.H6("Failure Count", className="card-title"),
html.P("##total count of failures", className="card-text"),
]
),
],
className='text-center m-4'
)
nav_bar = html.Div([
html.P("site-dropdown:"),
dcc.Dropdown(
id='Site',
value='Site',
options=[{'value': x, 'label': x}
for x in ['CCAFS LC-40', 'CCAFS SLC-40', 'KSC LC-39A', 'VAFB SLC-4E']],
clearable=False
),
])
app.layout = dbc.Container([
dbc.Row([
dbc.Col(html.Div(nav_bar, className="bg-secondary h-100"), width=2),
dbc.Col([
dbc.Row([
dbc.Col(success_card),
]),
dbc.Row([
dbc.Col(dcc.Graph(id = 'pie-chart'), style={
"padding-bottom": "10px",
},),
]),
dbc.Row([
# insert pie chart
dbc.Col(dcc.Graph(id = "bar-chart")),
]),
], width=5),
dbc.Col([
dbc.Row([
dbc.Col(failure_card),
]),
dbc.Row([
# insert bar chart
dbc.Col(dcc.Graph()),
], className="h-100"),
], width=5),
])
], fluid=True)
@app.callback(
Output("pie-chart", "figure"),
[Input("Site", "value")])
def generate_chart(value):
pie_data = spacex_df[spacex_df['Site'] == value]
success_count = sum(pie_data['class'] == 0)
failure_count = sum(pie_data['class'] == 1)
fig = go.Figure(data=[go.Pie(labels=['success','failure'], values=[success_count, failure_count])])
fig.update_layout(title=f"Site: {value}")
return fig
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
|
<python><plotly-dash><dash-bootstrap-components>
|
2023-02-13 04:40:13
| 1
| 392
|
jonboy
|
75,431,791
| 2,782,382
|
Finding (unique) set of values across columns
|
<p>I have a Polars Dataframe that looks like the below</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'id': [1, 2, 3],
'attribute': [True, True, False],
'val1': ['A', 'A', 'A'],
'val2': ['A', 'A', 'B'],
'val3': ['A', 'B', 'C'],
})
</code></pre>
<p>I would like to create a new column that is the set of the values in val1, val2, and val3.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(pl.struct('^val.*$').alias('set'))
df = df.with_columns(pl.col('set').map_elements(lambda x: list(set(x.values()))))
</code></pre>
<pre><code>shape: (3, 6)
┌─────┬───────────┬──────┬──────┬──────┬─────────────────┐
│ id ┆ attribute ┆ val1 ┆ val2 ┆ val3 ┆ set │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ bool ┆ str ┆ str ┆ str ┆ list[str] │
╞═════╪═══════════╪══════╪══════╪══════╪═════════════════╡
│ 1 ┆ true ┆ A ┆ A ┆ A ┆ ["A"] │
│ 2 ┆ true ┆ A ┆ A ┆ B ┆ ["A", "B"] │
│ 3 ┆ false ┆ A ┆ B ┆ C ┆ ["C", "A", "B"] │
└─────┴───────────┴──────┴──────┴──────┴─────────────────┘
</code></pre>
<p>However, with the <code>map_elements</code>, it is predictable slow. Is there a way to do this using native Polars functionality?</p>
|
<python><dataframe><python-polars>
|
2023-02-13 03:58:22
| 1
| 1,353
|
Chris
|
75,431,639
| 18,022,759
|
Python list converting oddly to NumPy array
|
<p>I have a python <code>list</code>:</p>
<pre><code>[[([1, 20230112060000], [10000, 20230112060000]),
([1, 20230108060000], [7000, 20230109060000]),
([3, 20221229060000], [6929, 20221229060000]),
([1, 20221227060000], [3900, 20221227060000]),
([1, 20221226060000], [6500, 20221226060000]),
([1, 20221221060000], [4400, 20221222060000]),
([1, 20221216060000], [3888, 20221216060000]),
([1, 20221205060000], [5998, 20221205060000]),
([1, 20221128060000], [5000, 20221128060000]),
([1, 20221127060000], [5000, 20221127060000]),
([1, 20221123060000], [5666, 20221123060000]),
([1, 20221122060000], [6000, 20221122060000]),
([1, 20221120060000], [4300, 20221120060000]),
([1, 20221118060000], [4998, 20221118060000]),
([1, 20221028050000], [2700, 20221028050000]),
([1, 20221027050000], [5000, 20221027050000]),
([1, 20221022050000], [4300, 20221022050000]),
([1, 20221019050000], [4498, 20221019050000]),
([1, 20221018050000], [3500, 20221018050000]),
([2, 20221015050000], [3899, 20221015050000]),
([1, 20221011050000], [4500, 20221011050000]),
([2, 20221008050000], [4850, 20221008050000]),
([2, 20221007050000], [5898, 20221007050000]),
([1, 20221004050000], [7499, 20221004050000]),
([1, 20221001050000], [3400, 20221001050000]),
...
[],
[([2, 20230206060000], [357500, 20230206060000])],
[([2, 20230206060000], [357500, 20230206060000]),
([6, 20230205060000], [353833, 20230205060000])],
...]
</code></pre>
<p>But when I try to convert it to a NumPy array something weird happens:</p>
<pre><code>import numpy as np
a = [...] # the above list
b = np.array(a)
</code></pre>
<p><code>b</code>:</p>
<pre><code>array([list([([1, 20230112060000], [10000, 20230112060000]), ([1, 20230108060000], [7000, 20230109060000]), ([3, 20221229060000], [6929, 20221229060000]), ([1, 20221227060000], [3900, 20221227060000]), ([1, 20221226060000], [6500, 20221226060000]), ([1, 20221221060000], [4400, 20221222060000]), ([1, 20221216060000], [3888, 20221216060000]), ([1, 20221205060000], [5998, 20221205060000]), ([1, 20221128060000], [5000, 20221128060000]), ([1, 20221127060000], [5000, 20221127060000]), ([1, 20221123060000], [5666, 20221123060000]), ([1, 20221122060000], [6000, 20221122060000]), ([1, 20221120060000], [4300, 20221120060000]), ([1, 20221118060000], [4998, 20221118060000]), ([1, 20221028050000], [2700, 20221028050000]), ([1, 20221027050000], [5000, 20221027050000]), ([1, 20221022050000], [4300, 20221022050000]), ([1, 20221019050000], [4498, 20221019050000]), ([1, 20221018050000], [3500, 20221018050000]), ([2, 20221015050000], [3899, 20221015050000]), ([1, 20221011050000], [4500, 20221011050000]), ([2, 20221008050000], [4850, 20221008050000]), ([2, 20221007050000], [5898, 20221007050000]), ([1, 20221004050000], [7499, 20221004050000]), ([1, 20221001050000], [3400, 20221001050000]), ([1, 20220928050000], [5000, 20220929050000]), ([1, 20220926050000], [3000, 20220926050000]), ([1, 20220925050000], [4500, 20220925050000]), ([1, 20220922050000], [4000, 20220922050000]), ([1, 20220920050000], [5000, 20220920050000]), ([1, 20220916050000], [8000, 20220916050000]), ([2, 20220915050000], [6625, 20220915050000]), ([2, 20220914050000], [4500, 20220914050000]), ([1, 20220903050000], [10000, 20220903050000]), ([1, 20220821050000], [8600, 20220821050000]), ([2, 20220820050000], [37500, 20220820050000]), ([1, 20220819050000], [30000, 20220819050000]), ([2, 20220818050000], [13999, 20220818050000]), ([1, 20220816050000], [4000, 20220817050000]), ([1, 20220815050000], [4000, 20220815050000])]),
list([]), list([([1, 20230112060000], [10000, 20230112060000])]),
...,
list([([1, 20230123060000], [5745, 20230123060000]), ([1, 20230105060000], [13000, 20230105060000]), ([1, 20221228060000], [6000, 20221228060000]), ([2, 20221227060000], [6000, 20221227060000]), ([1, 20221222060000], [8571, 20221222060000]), ([1, 20221218060000], [8250, 20221218060000]), ([1, 20221216060000], [8000, 20221216060000]), ([1, 20221213060000], [7500, 20221213060000]), ([1, 20221210060000], [3500, 20221210060000]), ([1, 20221109060000], [6500, 20221109060000])]),
list([([1, 20230123060000], [5745, 20230123060000]), ([1, 20230105060000], [13000, 20230105060000]), ([1, 20221228060000], [6000, 20221228060000]), ([2, 20221227060000], [6000, 20221227060000]), ([1, 20221222060000], [8571, 20221222060000]), ([1, 20221218060000], [8250, 20221218060000]), ([1, 20221216060000], [8000, 20221216060000]), ([1, 20221213060000], [7500, 20221213060000]), ([1, 20221210060000], [3500, 20221210060000]), ([1, 20221109060000], [6500, 20221109060000]), ([1, 20220909050000], [9999, 20220909050000])]),
list([([1, 20230123060000], [5745, 20230123060000]), ([1, 20230105060000], [13000, 20230105060000]), ([1, 20221228060000], [6000, 20221228060000]), ([2, 20221227060000], [6000, 20221227060000]), ([1, 20221222060000], [8571, 20221222060000]), ([1, 20221218060000], [8250, 20221218060000]), ([1, 20221216060000], [8000, 20221216060000]), ([1, 20221213060000], [7500, 20221213060000]), ([1, 20221210060000], [3500, 20221210060000]), ([1, 20221109060000], [6500, 20221109060000]), ([1, 20220909050000], [9999, 20220909050000]), ([1, 20220901050000], [8444, 20220901050000])])],
dtype=object)
</code></pre>
<p>For some reason, the tuples and the lists are not converted properly. Because of this, <code>b</code> does not function like a normal NumPy array because all of the items are objects. I know I could go through and covert all of the <code>tuples</code> to <code>lists</code>, but is there a way to force NumPy to convert everything properly?</p>
<p>By the way, by converted properly I mean instead of:</p>
<pre><code>array([list([()])])
</code></pre>
<p>It should be converted like:</p>
<pre><code>array([[[]]])
</code></pre>
|
<python><arrays><python-3.x><numpy>
|
2023-02-13 03:16:27
| 1
| 976
|
catasaurus
|
75,431,636
| 9,773,920
|
Reorder sheets of an existing excel file in S3 - Python Lambda
|
<p>I am generating an excel file with multiple sheets in the below order:</p>
<pre><code>["COMP1_hires","COMP2_hires","COMP1_det","COMP2_det"]
</code></pre>
<p>I want to re-order the sheets in the below order:</p>
<pre><code>["COMP1_hires","COMP1_det","COMP2_hires","COMP2_det"]
</code></pre>
<p>My file is generated in a specific folder in S3. Is there a way to change the sheet order during or after the excel file generation?</p>
<p>Below is my current working code. This code generates an excel file with separate sheet for each component:</p>
<pre><code> filename = 'exported_comp_file_'+today
with io.BytesIO() as output:
with pd.ExcelWriter(output, engine='xlsxwriter') as writer:
for comp in components_sep:
df1= df[df['component']==comp]
df1.to_excel(writer, sheet_name=comp, index=False)
writer.save()
data = output.getvalue()
s3 = boto3.resource('s3')
s3.Bucket('demo bucket').put_object(Key='myfolder/'+filename+'.xlsx', Body=data)
</code></pre>
<p>comp in the above code is ["COMP1_hires","COMP2_hires","COMP1_det","COMP2_det"]. So an excel file with sheet for every component above is generated. How to reorder the sheets in the file?</p>
|
<python><pandas><amazon-s3><aws-lambda><openpyxl>
|
2023-02-13 03:14:07
| 0
| 1,619
|
Rick
|
75,431,596
| 847,200
|
Parallel fitting of single tf.keras.Model on single GPU
|
<p>I'm trying to train a single model with a huge dataset on a single GPU. I don't mind setting smaller batch size, since I'll be running this for a while anyway, but what bothers me is that the GPU is utilized only by 25% when training the model. I have plenty of spare CPU and GPU capacity that could be used to fit the data faster. There seem to be a lot of computing capability lying on the table, and I could just traing 3-4 of these models in parallel. What I would like to do instead is to somehow run multiple copies of the same model.fit on a single gpu to consume the excess capacity and speed up training.</p>
<p>How do I do this?</p>
|
<python><tensorflow><keras><gpu>
|
2023-02-13 03:04:56
| 0
| 25,467
|
Arsen Zahray
|
75,431,587
| 10,380,766
|
type hinting with Unions and collectables 3.9 or greater
|
<p>I've been on Python 3.8 for quite some time. I usually type hint with the convention:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Union
some_stuff: List[Union[int, str, float, List[str]]] = [98, "Fido", -34.925, ["Phantom", "Tollbooth"]]
</code></pre>
<p>I understand that with python 3.9 or greater you can type hint lists and collectible like: <code>some_ints: list[int]</code></p>
<p>If you have a union of types that a variable can occupy do you still need to import the <code>Union</code> class? like so:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union
some_stuff: list[Union[int, str, float, list[str]]] = [98, "Fido", -34.925, ["Phantom", "Tollbooth"]]
</code></pre>
<p>Or is there a new convention for Union type hinting as well?</p>
|
<python><python-typing>
|
2023-02-13 03:03:02
| 2
| 1,020
|
Hofbr
|
75,431,355
| 2,005,559
|
generating synthetic data using scikit-learn for ML
|
<p>I have data at some given temperature [30, 40,45...].</p>
<p>Is it possible to generate synthetic data for other temperatures using scikit-learn or any other library?</p>
<p>I am using the existing data and the python code to get the mean plot.</p>
<pre><code>#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
data = pd.read_csv("trialdata.csv")
# data[(np.abs(stats.zscore(data)) < 3).any(axis=1)]
# print(data)
data = data.groupby("Temp").mean()
data["Temp"] = [30, 40, 45, 50, 55, 60]
print(data)
data.plot.line(y="Er", x="Temp", use_index=True, style="o-")
plt.ylabel("Er")
plt.tight_layout()
plt.show()
</code></pre>
<p>I want to generate data for other temperatures eg [35, 65,70] etc for machine learning training set.</p>
|
<python><machine-learning>
|
2023-02-13 01:58:43
| 1
| 3,260
|
BaRud
|
75,431,224
| 3,261,292
|
Getting "Method Not Allowed" when using fastapi
|
<p>I am trying to run the following code but I am getting a <code>{"detail":"Method Not Allowed"}</code> error when I open <code>http://localhost:8000/predict</code>.</p>
<p>The code:</p>
<pre><code>from fastapi import FastAPI
app = FastAPI()
@app.post("/predict")
def predict_(request: str):
return {"message": 'Hellooo!'}
</code></pre>
<p>What's the problem with my code?</p>
<p>I searched online for similar examples but I got the same error! For example this one: <a href="https://codehandbook.org/post-data-fastapi/" rel="nofollow noreferrer">https://codehandbook.org/post-data-fastapi/</a></p>
|
<python><http-post><fastapi>
|
2023-02-13 01:22:41
| 1
| 5,527
|
Minions
|
75,431,149
| 7,376,511
|
subclass inheriting annotations from parent class for a function
|
<pre><code>class A:
def test(self, value: int, *args: str, **kwargs: int) -> str:
pass
class B(A)
def test(self, value, *args, **kwargs):
# do some stuff
super().test(value)
</code></pre>
<p>Is there a way to tell mypy that the subclass's <code>test</code> has identical typing as the parent class?</p>
<p>I ask this because some the typing of some methods I'm inheriting require a lot of imports. One example would be <code>requests.Session.get</code>. If I'm just writing a wrapper that manipulates the headers before sending everything to the proper function, how can I tell mypy to just consider the annotations of the parent class for a specific function?</p>
|
<python><mypy>
|
2023-02-13 01:02:22
| 2
| 797
|
Some Guy
|
75,430,987
| 3,343,783
|
Cannot add a column (pandas `Series`) to a Dask `DataFrame` without introducing `NaN`
|
<p>I am constructing a Dask <code>DataFrame</code> from a <code>numpy</code> array and after this I would like to add a column from a <code>pandas</code> <code>Series</code>.</p>
<p>Unfortunately the resulting dataframe contains <code>NaN</code> values, and I am not able to understand where the error lies.</p>
<pre><code>from dask.dataframe.core import DataFrame as DaskDataFrame
import dask.dataframe as dd
import pandas as pd
import numpy as np
xy = np.random.rand(int(3e6), 2)
c = pd.Series(np.random.choice(['a', 'b', 'c'], int(3e6)), dtype='category')
# alternative 1 -> # lot of values of x, y are NaN
table: DaskDataFrame = dd.from_array(xy, columns=['x', 'y'])
table['c'] = dd.from_pandas(c, npartitions=1)
print(table.compute())
# alternative 2 -> # lot of values of c are NaN
table: DaskDataFrame = dd.from_array(xy, columns=['x', 'y'])
table['c'] = dd.from_pandas(c, npartitions=table.npartitions)
print(table.compute())
</code></pre>
<p>Any help is appreciated.</p>
|
<python><pandas><dask-dataframe>
|
2023-02-13 00:14:06
| 1
| 3,486
|
Nisba
|
75,430,842
| 9,365,845
|
Is it possible to prevent circular imports in SQLAlchemy and still have models in different files?
|
<p>I'm new to SQLAlchemy and I'm trying to build models in different files. My model looks like this :</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Optional
from uuid import UUID, uuid4
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy.orm import relationship
from database.database import Base
class User(Base):
__tablename__ = "user"
id: Mapped[UUID] = mapped_column(primary_key=True, default=uuid4)
name: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
email: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
password: Mapped[str] = mapped_column(nullable=False)
bets: Mapped[List["Bet"]] = relationship(back_populates="user")
scores: Mapped[List["Score"]] = relationship(back_populates="user")
</code></pre>
<p>When I try to make a request on my API, I get the following error :</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: One or more mappers failed to initialize - can't
proceed with initialization of other mappers. Triggering mapper: 'Mapper[User(user)]'.
Original exception was: When initializing mapper Mapper[User(user)], expression 'Bet'
failed to locate a name ('Bet'). If this is a class name, consider adding this
relationship() to the <class 'models.User.User'> class after both dependent classes have
been defined.
</code></pre>
<p>So then I tried to update my model the following way :</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Optional
from uuid import UUID, uuid4
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy.orm import relationship
from database.database import Base
from models.Bet import Bet
from models.Score import Score
class User(Base):
__tablename__ = "user"
id: Mapped[UUID] = mapped_column(primary_key=True, default=uuid4)
name: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
email: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
password: Mapped[str] = mapped_column(nullable=False)
discord: Mapped[Optional[str]] = mapped_column(unique=True, index=True)
points: Mapped[Optional[int]] = mapped_column(nullable=False, default=0)
bets: Mapped[List[Bet]] = relationship(back_populates="user")
scores: Mapped[List[Score]] = relationship(back_populates="user")
</code></pre>
<p>But then I have a <strong>circular error</strong> issue :</p>
<pre><code>ImportError: cannot import name 'Bet' from partially initialized module 'models.Bet'
(most likely due to a circular import)
</code></pre>
<p>I'd welcome some help to understand what I'm missing and what are the good practices to work around that issue.</p>
<p>Here's the structure of my project :</p>
<pre><code>├── models
│ ├── Bet.py
│ ├── Race.py
│ ├── Result.py
│ ├── Rider.py
│ ├── Score.py
│ ├── Stage.py
│ ├── Team.py
│ ├── User.py
│ ├── __init__.py
</code></pre>
<p><code>Bet.py</code> imports <code>User.py</code> but <code>User.py</code> needs to import <code>Bet.py</code> as well. Here's how Bet.py looks like :</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
from uuid import UUID, uuid4
from sqlalchemy import ForeignKey
from sqlalchemy.orm import Mapped, mapped_column, relationship
from database.database import Base
from models.Race import Race
from models.Rider import Rider
from models.Stage import Stage
from models.User import User
class Bet(Base):
__tablename__ = "bet"
id: Mapped[UUID] = mapped_column(primary_key=True, default=uuid4)
fk_user: Mapped[UUID] = mapped_column(ForeignKey("user.id"))
fk_stage: Mapped[Optional[UUID]] = mapped_column(ForeignKey("stage.id"))
fk_race: Mapped[UUID] = mapped_column(ForeignKey("race.id"))
fk_rider: Mapped[UUID] = mapped_column(ForeignKey("rider.id"))
race: Mapped[Race] = relationship(back_populates="bets")
rider: Mapped[Rider] = relationship(back_populates="bets")
stage: Mapped[Stage] = relationship(back_populates="bets")
user: Mapped[User] = relationship(back_populates="bets")
</code></pre>
|
<python><sqlalchemy>
|
2023-02-12 23:32:39
| 2
| 329
|
RomainM
|
75,430,780
| 8,513,648
|
Emulating python msvcrt module's kbhit() on Unix
|
<p>I am trying to write python code for Unix/Posix systems that emulates the <code>kbhit()</code> and <code>getch()</code> functions in the Windows <code>msvcrt</code> module. There's actually a few solutions online on how to do this; my problem is that none of the <code>kbhit</code> solutions I've found (even on stackoverflow) work correctly for me.</p>
<p>I will explain in more detail:</p>
<p>Basically, the code to emulate <code>getch</code> is straightforward:</p>
<pre><code>def getch():
ch = sys.stdin.read(1)
return ch
</code></pre>
<p>And the code to emulate <code>kbhit</code> is... well... simple enough using the <code>select</code> module:</p>
<pre><code>def kbhit():
results = select.select([sys.stdin], [], [], 0)
return results[0] != []
</code></pre>
<p>However, as programmers who've tried to read one character at a time have eventually discovered, you need to set your terminal settings to allow for unbuffered input. Usually something like this is used before calling <code>getch()</code> or <code>kbhit()</code>:</p>
<pre><code>import os, select, sys, termios
fd = sys.stdin.fileno()
old_term = termios.tcgetattr(fd) # (For restoring later.)
# Create new unbufferd terminal settings to use:
new_term = termios.tcgetattr(fd)
new_term[3] = (new_term[3] & ~termios.ICANON & ~termios.ECHO)
termios.tcsetattr(fd, termios.TCSAFLUSH, new_term)
</code></pre>
<p>At exit, we can use something like this to restore the old terminal settings:</p>
<pre><code>termios.tcsetattr(fd, termios.TCSAFLUSH, old_term)
</code></pre>
<p>The problem with this is that occasionally the <code>kbhit()</code> function returns <code>False</code> even when there is input waiting to be read.</p>
<p>To demonstrate, I've created a proof-of-concept program that you can run yourself (on a Unix/Posix) system to see the problem first-hand:</p>
<pre><code>#!/usr/bin/env python3
# File: proof-of-concept-kbhit.py
import atexit
import os
import select
import sys
import termios
import time
def getch():
ch = sys.stdin.read(1)
return ch
def kbhit():
results = select.select([sys.stdin], [], [], 0)
return results[0] != []
def main():
# Save the terminal settings:
fd = sys.stdin.fileno()
old_term = termios.tcgetattr(fd)
# Create new unbufferd terminal settings to use:
new_term = termios.tcgetattr(fd)
new_term[3] = (new_term[3] & ~termios.ICANON & ~termios.ECHO)
termios.tcsetattr(fd, termios.TCSAFLUSH, new_term)
# Reset original terminal settings on exit:
atexit.register(termios.tcsetattr, fd, termios.TCSAFLUSH, old_term)
seconds_to_wait = 10
print()
print(f"Type a short word or phrase in the next {seconds_to_wait} seconds.")
print("(Your typed letters will not show up right away; don't stop typing.)")
print(flush=True)
while True:
print(f" >>> Seconds left to wait: {seconds_to_wait} \r", end='', flush=True)
if seconds_to_wait <= 0:
break
time.sleep(1)
seconds_to_wait -= 1
print()
print()
print('Now your typed letters will be retrieved\n'
'and printed at a rate of one per second.')
print('A dot (".") should also print out every second,\n'
'whether you type anything or not.')
print('(Hit CTRL-C to exit this program.)')
print(flush=True)
time.sleep(1)
while True:
time.sleep(1)
print('.', end='', flush=True)
if kbhit():
ch = getch()
print('\n', ch, sep='', end='', flush=True)
if __name__ == '__main__':
main()
</code></pre>
<p>To see the problem I'm seeing, simply run this program on a Unix/Posix system. You will immediately prompted to type a short word or phrase in the next ten seconds. Then:</p>
<ol>
<li>Type <kbd>Hello, world</kbd> (or simply <kbd>Hello</kbd>). (You don't need to hit <kbd>Enter</kbd>.)</li>
<li>Wait out the ten seconds.</li>
<li>You will see text that tells you that the program will then begin to print out your text one letter at a time, at a rate of one letter per second.</li>
<li>Watch as the letter <code>H</code> gets printed.</li>
<li>Note that no other letters get printed, even after ten seconds or more. (The dots are still printing at the rate of one dot per second, but there are no letters being printed.)</li>
<li>Now, press the letter <kbd>z</kbd>.</li>
<li>Note that the remaining letters you typed (that is, <code>ello, world</code>) start showing up, including the <kbd>z</kbd> you typed in the previous step.</li>
<li>If you are able to quickly type three <code>w</code>s in a row (that is, <kbd>w</kbd><kbd>w</kbd><kbd>w</kbd>) between two printings of <code>.</code>, then you will see that only one of the <code>w</code>s is picked up, while the others are temporarily ignored.</li>
<li>Press any other letter (for example, <kbd>g</kbd>) to see the other two <code>w</code>s reported, as well as that <code>g</code>.</li>
</ol>
<p>What's going on?</p>
<p>The documentation for <code>select.select</code> says that <code>select.select()</code> will return any given file descriptors in the first argument that have input waiting to be read (more specifically, that are "ready for reading").</p>
<p>For whatever reason, the code <code>results = select.select([sys.stdin], [], [], 0)</code> keeps setting <code>results[0]</code> to an empty list, even though <code>ello, world</code> is still waiting to be read.</p>
<p>And <code>ello, world</code> is still indeed in there, as evidenced by the fact that pressing an additional letter will cause <code>ello, world</code> to be detected and printed out.</p>
<p>As far as I can tell, <code>getch()</code> is working correctly; it's <code>kbhit()</code> that is incorrectly returning <code>False</code> when input is waiting.</p>
<p>Does anyone know why <code>kbhit()</code> (or, rather, the <code>select()</code> call inside it) is failing to detect the <code>ello, world</code> waiting to be read? Does anyone know how to fix this? (I wonder if it's some terminal setting that I haven't specified correctly.)</p>
<p>(I ran this program on a Linux platform, on MacOS, and a NetBSD platform, and the program had the same issue on all three platforms.)</p>
<hr />
<p>Addendum (2023-02-13):</p>
<p>Just to make sure it's clear, I want my emulated <code>kbhit()</code> function to be non-blocking, just like the <code>kbhit()</code> function in the <code>msvcrt</code> Python module for Windows.</p>
<p>In other words, if input is waiting, it should immediately return True. But it there is no input waiting, it should immediately return False, without blocking to wait for input to arrive.</p>
<p>And to verify that <code>kbhit()</code> is non-blocking, you should see dots (".") print out every second, continuously, whether or not there is input waiting. So if you leave the program alone, you should see a continuous stream of dots, like:</p>
<pre><code>.............................
</code></pre>
<p>(If you fail to see this growing stream of dots, it probably means that <code>kbhit()</code> is blocking execution while it waits for keyboard input. This blocking behavior is not what I want, as <code>msvcrt.kbhit()</code> is not a blocking call.)</p>
<p>This is my proof of concept code that uses GIZ's code for the <code>kbhit()</code> function. As I've stated as a comment in his reply, when I run this the <code>kbhit()</code> function blocks, so if there's no input waiting, the program will freeze until new input is made. However, GIZ says that when tested, the code does not freeze for him/her.</p>
<pre><code>#!/usr/bin/env python3
# File: proof-of-concept-kbhit-using-fcntl.py
import atexit
import fcntl
import os
import select
import struct
import sys
import termios
import time
def getch():
ch = sys.stdin.read(1)
return ch
# This old function has been replaced by the newer function that follows.
def kbhit_old():
results = select.select([sys.stdin], [], [], 0)
return results[0] != []
# This is the newer function that is replacing the older function above.
def kbhit():
return bool(fcntl.ioctl(sys.stdin.fileno(), termios.FIONREAD, struct.pack('I', 0)))
def main():
# Save the terminal settings:
fd = sys.stdin.fileno()
old_term = termios.tcgetattr(fd)
# Create new unbufferd terminal settings to use:
new_term = termios.tcgetattr(fd)
new_term[3] = (new_term[3] & ~termios.ICANON & ~termios.ECHO)
termios.tcsetattr(fd, termios.TCSAFLUSH, new_term)
# Reset original terminal settings on exit:
atexit.register(termios.tcsetattr, fd, termios.TCSAFLUSH, old_term)
seconds_to_wait = 10
print()
print(f"Type a short word or phrase in the next {seconds_to_wait} seconds.")
print("(Your typed letters will not show up right away; don't stop typing.)")
print(flush=True)
while True:
print(f" >>> Seconds left to wait: {seconds_to_wait} \r", end='', flush=True)
if seconds_to_wait <= 0:
break
time.sleep(1)
seconds_to_wait -= 1
print()
print()
print('Now your typed letters will be retrieved\n'
'and printed at a rate of one per second.')
print('A dot (".") should also print out every second,\n'
'whether you type anything or not.')
print('(Hit CTRL-C to exit this program.)')
print(flush=True)
time.sleep(1)
while True:
time.sleep(1)
print('.', end='', flush=True)
if kbhit():
ch = getch()
print('\n', ch, sep='', end='', flush=True)
if __name__ == '__main__':
main()
</code></pre>
<p>If anyone wants to test this code to see if <code>kbhit()</code> freezes for them (and then report back as a comment), maybe we can figure out why we're seeing differences.</p>
<hr />
<p>Thank you for any help!</p>
|
<python><unix><tty>
|
2023-02-12 23:14:53
| 1
| 1,941
|
J-L
|
75,430,663
| 8,010,921
|
streaming real time audio over UDP
|
<p>I have been willing to develop an application which transmits real-time audio over a local network with the lowest latency possible in python.</p>
<p>So I ran into <a href="https://pyshine.com/How-to-send-audio-from-PyAudio-over-socket/" rel="nofollow noreferrer">this program</a> (the last UDP version) and I have been trying to tweak a few variables to achieve the lowest latency possible, which I guess can be pretty low in a <code>localhost</code> environment. However removing the 5 seconds wait time and reducing the <code>CHUNK</code> size down to 1024 still results in an audible latency.</p>
<p>Does the <code>BUFF_SIZE</code> and the sleep time of the <code>server_socket.sendto()</code> loop have to do with it?
Are they related to each other?</p>
<p>It seems that with a <code>BUFF_SIZE</code> lower than 4096 (which is approximately the latency that I experience) the client does not playback the stream.</p>
<p>Is it then related to the client's <code>queue</code>?</p>
<p>My MAIN QUESTION would be:
how do you fine tune these parameters to achieve the lowest latency possible?</p>
<p>Thank you very much!</p>
|
<python><sockets>
|
2023-02-12 22:51:15
| 1
| 327
|
ddgg
|
75,430,562
| 21,046,803
|
problem with itertools permutations in combination with english_words_set in a for loop
|
<p>I want to get all possible words with 5,6,7 characters from a random string: <code>word="ualargd"</code></p>
<p>I do the following:</p>
<ul>
<li>A for loop to change the length of my permutations.</li>
</ul>
<pre><code>for i in range(5,len(word)+1):
mywords=permutations(word, i)
</code></pre>
<ul>
<li>The items created by the permutation get compared with an english dictionary set.</li>
</ul>
<p>This works fine as long as i dont use the for loop.</p>
<p>The for loop creates the permutations <code>mywords</code> correctly. Once it contains strings like "aura" with 4 letters in the next loop "gudar" with 5 letters...</p>
<p>But unfortunately as soon as my loop leaves the first loop no word gets found anymore in the english dictionary. How come? If i replace the i value by hand in <code>mywords=permutations(word, i)</code> everything works fine.</p>
<pre><code>#output #expected output
5 5
neeld neeld
dense dense
leden leden
neele neele
neese neese
needs needs
6 6
7 lendee
needle
selene
lensed
sendee
7
needles
</code></pre>
<p>The code for reproduction looks like this:</p>
<pre><code>from itertools import permutations
from english_words import get_english_words_set
word="dseeenl"
web2lowerset = get_english_words_set(['web2'], lower=True)
for i in range(5,len(word)+1):
print(i)
mywords=permutations(word, i)
for item in set(mywords):
word=''.join(item)
if word in web2lowerset:
print(word)
</code></pre>
|
<python><for-loop><set><python-itertools>
|
2023-02-12 22:19:18
| 1
| 1,539
|
tetris programming
|
75,430,498
| 107,083
|
Options for Bundling a Python web server in Electron?
|
<p>What are some options for packaging a python web server (i.e. flask, tornado, quart, sanic, etc.) that uses numpy inside an electron application?</p>
<p>I took a look at PyOxidizer, but could not find any examples in the documentation to suggest that this is possible with it.</p>
|
<python><numpy><electron><electron-packager><pyoxidizer>
|
2023-02-12 22:06:35
| 0
| 18,077
|
chaimp
|
75,430,433
| 3,533,030
|
StableBaselines3 / PPO / model rollsout but does not learn?
|
<p>When a model <em>learns</em> there is:</p>
<ol>
<li>A rollout phase</li>
<li>A learning phase</li>
</ol>
<p>My models are <em>rolling out</em> but they never show a <em>learning phase</em>. This is apparent both in the text output in a <code>jupyter Notebook</code> in <code>vscode</code> as well as in <code>tensorboard</code>.</p>
<p>I built a very simple <code>environment</code> and tried many more timesteps. What I discovered was:</p>
<blockquote>
<p>If there are too few timesteps, the model never displays that it learns</p>
</blockquote>
<ul>
<li>What is the minimum number of timesteps to learn?</li>
<li>Is this the same for all environments or does it depend upon your environment?</li>
</ul>
<pre><code>import time
tic = time.perf_counter()
log_path = os.path.join('Training', 'Logs')
model = PPO("MlpPolicy", env, verbose=1, tensorboard_log=log_path)
modelRtn = model.learn(total_timesteps=1000, progress_bar=True)
toc = time.perf_counter()
print("Elapsed time: " + str(toc-tic) + " sec")
</code></pre>
|
<python><tensorboard><stable-baselines>
|
2023-02-12 21:55:18
| 1
| 449
|
user3533030
|
75,430,313
| 1,102,514
|
Is it possible to enqueue an instance method, as opposed to a function, using Python-RQ?
|
<p>The examples provided in the <a href="https://python-rq.org/docs" rel="nofollow noreferrer">Python-RQ</a> documentation consistently show functions being enqueued, using <code>queue.enqueue()</code>. For example:</p>
<p><code>job = q.enqueue(count_words_at_url, 'http://nvie.com')</code></p>
<p>Is it possible to enqueue a method of a particular object instance? And if so, is this considered reasonable practise?</p>
<p>For example:</p>
<pre><code>member = Member.objects.get(id=142)
job = q.enqueue(member.send_email())
</code></pre>
<p>If the answer to either of the above questions is no, what is the reccommended pattern to approach this situation?</p>
<p>One possiblity I had considered was to create a helper function, that is independent of the Member model. For example:</p>
<pre><code>def send_member_email_bg_task(member_id):
Member.objects.get(id=member_id).send_email()
</code></pre>
<p>and enqueue it as follows:</p>
<pre><code>job = q.enqueue(send_member_email_bg_task, member.id)
</code></pre>
<p>I would be grateful for suggestions on the best way to approach this.
Thanks</p>
|
<python><redis><task-queue><python-rq><django-rq>
|
2023-02-12 21:31:58
| 1
| 1,401
|
Scratcha
|
75,430,291
| 11,731,185
|
Django static files not found in production 1.2
|
<p>My code is working fine in the testing environment (DEBUG=TRUE) but when I switch to DEBUG=FALSE, static files are not being loaded.</p>
<p><strong>To run the application I follow</strong>:</p>
<pre><code>python3 manage.py collectstatic --no-input --clear
python3 manage.py runserver
</code></pre>
<p><strong>settings.py</strong>:</p>
<pre><code>from pathlib import Path, os
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/4.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-z$w^9$+no7vh2hbu#d6j7b3rx!+m=ombxy9=h%u&cr_=0v)ojv'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['.vercel.app', '.alisolanki.com', '.now.sh']
# Application definition
INSTALLED_APPS = [
'home',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'sentiment_analysis.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'sentiment_analysis.wsgi.application'
# Database
# https://docs.djangoproject.com/en/4.0/ref/settings/#databases
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.sqlite3',
# 'NAME': BASE_DIR / 'db.sqlite3',
# }
# }
# Password validation
# https://docs.djangoproject.com/en/4.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Kolkata'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.0/howto/static-files/
STATIC_URL = 'static/'
# Default primary key field type
# https://docs.djangoproject.com/en/4.0/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
# Added by me
import os
STATICFILES_DIRS = os.path.join(BASE_DIR, 'static'),
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles_build', 'static')
</code></pre>
<p>Here, the <code>python3 manage.py collectstatic</code> command correctly fetches my static files and stores it to staticfiles_build/static. However none of my static files are being loaded in production.</p>
|
<python><django><django-static>
|
2023-02-12 21:27:54
| 1
| 1,706
|
Ali Solanki YouTuber
|
75,430,286
| 8,587,334
|
HTML Email content overwrite preview text in email Python
|
<p>I'm working on a Django app that sends emails, here's my code:</p>
<pre><code> message = MIMEMultipart('alternative')
message["From"] = formataddr(("CustomName", ""))
message["Subject"] = subject
message["Text"] = "My tagline"
template_path = os.path.join(BASE_DIR, f"templates/email/{template}")
preview_text = "Welcome to site, complete your registration"
html_text = render_to_string(template_path, context)
preview = MIMEText(preview_text, 'plain')
html = MIMEText(html_text, 'html')
message.attach(preview)
message.attach(html)
self.server.sendmail(EMAIL_HOST_USER, to_email, message.as_string())
self.server.quit()
</code></pre>
<p>The HTML Django Template file is:</p>
<pre><code>{% block html_body %}
<!DOCTYPE html>
<html lang="en">
<head>
<title>This is a preview {{ name }}</title>
</head>
<h1>Welcome {{ name }}</h1>
</html>
{% endblock html_body %}
</code></pre>
<p>What I want to achieve is something like LikedIn, but the problem is that the preview text is overwritten by the html content.</p>
<p>Here's what I want:
<a href="https://i.sstatic.net/X4kyO.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X4kyO.jpg" alt="enter image description here" /></a></p>
<p>And here's what the code do:
<a href="https://i.sstatic.net/8mvXq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8mvXq.jpg" alt="enter image description here" /></a></p>
<p>As you can see the highlighted text is not the "preview" but the html content.
Note that what I want to achieve is that the preview text will be visible only in the notification and not attached to the html email once opened in the app.</p>
<p>Thank you</p>
|
<python><email><smtp>
|
2023-02-12 21:25:48
| 0
| 582
|
PietroPutelli
|
75,430,161
| 11,236,470
|
Cursor.count() gives AttributeError in pymongo 4.3.3
|
<p>As the title suggests, I am trying to use <code>count()</code> with a <code>find()</code> on a collection but it keeps throwing the error <code>AttributeError: 'Cursor' object has no attribute 'count'</code>.</p>
<p>For reference, I went through <a href="https://stackoverflow.com/questions/4415514/in-mongodbs-pymongo-how-do-i-do-a-count">this question</a> but <code>count_documents()</code> seems to be tehre for colelctions themselves, and not cursors. The other option mentioned was <code>len(list(cursor))</code> but I can't use that as it consumes the cursor itself (can't iterate over it after this). I went through a couple more answers, but these seem to be the main ways out.</p>
<p>Moreover, my pymongo version is <code>4.3.3</code> which can't be changed due to some restrictions.</p>
<p>Is there any operation I can perform directly on <code>Cursor</code> which doesn't consume it?</p>
<h4>Sample code</h4>
<pre class="lang-py prettyprint-override"><code>def temp(col):
return col.find().count()
print(temp(collection))
</code></pre>
<p>Thanks!</p>
|
<python><pymongo><database-cursor>
|
2023-02-12 21:02:47
| 2
| 388
|
Eagle
|
75,430,107
| 15,900,832
|
Python regex non-capturing not working within capturing groups
|
<p>I was working a regex expression in Python to extract groups. I am correctly extracting the 3 groups I want (symbol, num, atom). However, the 'symbol' group should <strong>not</strong> have the '[' or ']' as I am using 'non-capturing' notation <code>(?:..)</code> per python's docs (<a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">https://docs.python.org/3/library/re.html</a>).</p>
<p>Am I understanding non-capturing wrong, or is this a bug?</p>
<p>Thanks!</p>
<pre><code>import re
result = re.match(r'(?P<symbol>(?:\[)(?P<num>[0-9]{0,3})(?P<atom>C)(?:\]))', '[12C]')
print(result.groups())
# ('[12C]', '12', 'C')
# expected: ('12C', '12', 'C')
</code></pre>
|
<python><regex>
|
2023-02-12 20:52:18
| 1
| 633
|
basil_man
|
75,430,008
| 148,668
|
Can pip install editable avoid recompiling from scratch?
|
<p>I'm developing a <a href="https://github.com/libigl/libigl-python-bindings/" rel="noreferrer">python module</a> with a long pybind11 C++ extension compilation step invoked via cmake in <code>setup.py</code>.</p>
<p>When making changes to a C++ file, cmake invoked via <code>python setup.py develop</code> will avoid recompiling units whose dependent files have not changed. However, invoking <code>setup.py</code> directly ignores the settings in my <code>pyproject.toml</code> and I understand that the modern way to do a developmental build is with <code>python -m pip install -e .</code></p>
<p>While <code>pip install -e</code> successfully builds, it unfortunately starts from scratch inside a clean temporary directory every invocation. Is there a way to instruct <code>pip install -e</code> to maintain my CMakeCache.txt and compilation dependency tracking?</p>
<p>(And/or does this somehow indicate that I misunderstand <code>pip install -e</code> or am using it incorrectly?)</p>
<p><em>This <a href="https://stackoverflow.com/questions/73849192/python-pip-how-to-do-an-editable-package-install-with-extension-modules">previous unanswered question</a> is quite similar sounding. Perhaps, I have the added detail that my <code>python setup.py develop</code> is working in this regard.</em></p>
|
<python><cmake><pip><setup.py><pybind11>
|
2023-02-12 20:33:49
| 0
| 6,336
|
Alec Jacobson
|
75,429,774
| 4,404,805
|
Django: Return values based on condition using SerializerMethodField
|
<p>I have the tables: <code>User</code>, <code>Event</code> and <code>EventTicket</code>. I have to return total transactions and total tickets sold of each event. I am trying to achieve this using Django serializers. Using only serializers, I need to find the following:</p>
<pre><code> 1. total tickets sold: count of items in EventTicket table for each event
2. total transactions: sum of total_amount of items in EventTicket table for each event where payment_status: 1
</code></pre>
<p><strong>models.py:</strong></p>
<pre><code>class User(AbstractUser):
first_name = models.CharField(max_length=50,blank=False)
middle_name = models.CharField(max_length=50,blank=True)
last_name = models.CharField(max_length=50,blank=True)
class Event(models.Model):
name = models.CharField(max_length=100, null=False, blank=False)
user = models.ForeignKey(User, related_name='%(class)s_owner', on_delete=models.CASCADE)
description = models.TextField(null=False, blank=False)
date_time = models.DateTimeField()
class EventTicket(models.Model):
user = models.ForeignKey(User,on_delete=models.CASCADE)
event = models.ForeignKey(Event,on_delete=models.CASCADE, related_name='event_tickets')
payment_status = models.SmallIntegerField(default=0, null=False, blank=False) ## payment_success: 1, payment_failed: 0
total_amount = models.FloatField()
date_time = models.DateTimeField()
</code></pre>
<p>My desired output is:</p>
<pre><code>"ticket_details": [
{
"event_id": 1,
"event_name": Event-1,
"total_transactions": 10000, ## Sum of all ticket amounts of event_id: 1, where payment_status: 1
"total_tickets": 24, ## Count of all tickets that belong to event_id: 1
},
{
"event_id": 2,
"event_name": Event-2,
"total_transactions": 10000, ## Sum of all ticket amounts of event_id: 2, where payment_status: 1
"total_tickets": 24, ## Count of all tickets that belong to event_id: 2
}]
</code></pre>
<p>This is what I have done:</p>
<p><strong>serializers.py:</strong></p>
<pre><code>class EventListSerializer(serializers.ModelSerializer):
# ticket_id = serializers.IntegerField(source='id')
# event_id = serializers.PrimaryKeyRelatedField(source='event', read_only=True)
# event_name = serializers.PrimaryKeyRelatedField(source='event.name', read_only=True)
total_transactions = serializers.SerializerMethodField()
total_tickets = serializers.SerializerMethodField()
class Meta:
model = Event
fields = ('total_transactions', 'total_tickets')
def get_total_transactions(self, obj):
return obj.event_tickets.all().aggregate(sum('total_amount'))['total_amount__sum']
def get_total_tickets(self, obj):
return obj.event_tickets.all().count()
</code></pre>
<p>No matter whatever I try, I am always getting this error: <code>'EventTicket' object has no attribute 'event_tickets'</code></p>
|
<python><django><django-models><django-serializer>
|
2023-02-12 19:52:28
| 2
| 1,207
|
Animeartist
|
75,429,759
| 4,570,472
|
W&B - How to construct custom string in sweep based on sweep parameters
|
<p>I want to use W&B sweep parameters to dynamically construct a command line argument. How can I do this?</p>
<p>Modifying <a href="https://docs.wandb.ai/guides/sweeps/faq" rel="nofollow noreferrer">the W&B's example</a>, suppose I have the following sweep:</p>
<pre><code>program:
train.py
method: grid
parameters:
batch_size:
values: [8, 10, 12]
lr:
values: [0.0001, 0.001]
command:
- ${env}
- python3
- ${program}
- ${args}
</code></pre>
<p>Suppose I want to pass in an additional flag like <code>--output_dir /home/users/me/outputs/bs={args.bs}_lr={args.lr}</code>. How can I do this?</p>
|
<python><wandb>
|
2023-02-12 19:49:59
| 1
| 2,835
|
Rylan Schaeffer
|
75,429,684
| 14,715,170
|
How to get value from the different django model in DTL?
|
<p>I am having 3 different models - <code>User</code>, <code>Thread</code> and <code>UserProfile</code>.</p>
<p><code>User</code> model contains information like <code>ID, First_name and Last_name</code>.</p>
<p><code>Thread</code> model contains information like</p>
<pre><code>class Thread(models.Model):
first_person = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True, related_name='thread_first_person')
second_person = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True,related_name='thread_second_person')
updated = models.DateTimeField(auto_now=True)
</code></pre>
<p>and <code>UserProfile</code> model,</p>
<pre><code>class UserProfile(models.Model):
custom_user = models.OneToOneField(CustomUser, on_delete=models.CASCADE)
picture = models.ImageField(default='profile_image/pro.png', upload_to='profile_image', blank=True)
</code></pre>
<p>when I am trying to get all the threads from <code>Thread</code> model and pass it from views.py to my HTML template then I can access <code>User</code> model fields like -</p>
<p><code>{{ thread.second_person.ID}} {{ thread.second_person.First_name}}</code></p>
<p>But how can I access <code>picture</code> field from <code>UserProfile</code> with the help of <code>custom_user</code> ?</p>
|
<python><python-3.x><django><django-models><jinja2>
|
2023-02-12 19:38:29
| 1
| 334
|
sodmzs1
|
75,429,596
| 2,947,435
|
OpenAI GPT-3 API: How to parse the response into an ordered list or dictionary?
|
<p>GPT-3 is amazing, but parsing its results is a bit of a headache, or I'm missing something here?
For example, I'm asking GPT-3 to write something about "digital marketing" and it's returning back some interesting stuff:</p>
<pre><code>\n\n1. Topic: The Benefits of Digital Marketing \nHeadlines: \na. Unlocking the
Potential of Digital Marketing \nb. Harnessing the Power of Digital Marketing for
Your Business \nc. How to Maximize Your Return on Investment with Digital Marketing
\nd. Exploring the Benefits of a Comprehensive Digital Marketing Strategy \ne.
Leveraging Technology to Take Your Business to the Next Level with Digital Marketing
\n\n2. Topic: Social Media Strategies for Effective Digital Marketing \nHeadlines:
\na. Crafting an Engaging Social Media Presence for Maximum Impact \nb. How to Reach
and Engage Your Target Audience Through Social Media Platforms \nc. Optimizing Your
Content Strategy for Maximum Reach on Social Media Platforms \nd. Utilizing Paid
Advertising Strategies on Social Media Platforms \t\t\t\t\t\t\t e .Analyzing
and Improving Performance Across Multiple Social Networks\n\n3. Topic: SEO Best
Practices for Effective Digital Marketing Headlines: a .Understanding Search
Engine Algorithms and Optimizing Content Accordingly b .Developing an Effective
SEO Strategy That Delivers Results c .Leveraging Keywords and Metadata For Maximum
Visibility d .Exploring Advanced SEO Techniques To Increase Traffic e .Analyzing
Performance Data To Improve Rankings\n\n4Topic : Email Campaigns For Successful
Digital Marketin g Headlines : a .Creating Compelling Email Campaigns That Drive
Results b .Optimizing Email Deliverability For Maximum Impact c .Utilizing Automation
Tools To Streamline Email Campaign Management d .Measuring Performance And Analyzing
Data From Email Campaigns e .Exploring Creative Ways To Increase Open Rates On
Emails\n\n5Topic : Mobile Advertising Strategies For Effective Digita l Marketin g
Headlines : a ..Maximizing Reach With Mobile Ads b ..Understanding User Behavior On
Mobile Devices c ..Optimizing Ads For Different Screen Sizes d ..Leveraging Location-
Based Targeting To Increase Relevance e ..Analyzing Performance Data From Mobile Ads
</code></pre>
<p>As you can see, it's sent me back a list of topics related to "digital marketing" with some headlines (apparently from a to e). I see some line breaks and tabulation here and there. So my first reflex was to split the text on the line breaks, but it looks like the format is not equal everywhere, as there are very few line breaks in the second half of the response (which make it inaccurate).
What I'd like to do, is reformatting the output, so I can have a kind of list of topics and headlines. Something like this:</p>
<pre><code>[
{"Topic 1": ["headline 1", "headline 2","..."]},
{"Topic 2": ["headline 1", "headline 2","..."]},
{"Topic 3": ["headline 1", "headline 2","..."]}
]
</code></pre>
<p>Maybe there is a parameter to send over withing my request, but I didn't find anything in the doc. So I guess my best bet is to reformat using <code>regex</code>. Here I see a pattern <code>Topic:</code>and <code>Headlines:</code> but it's not always the case. What is consistent is the number prefixing each element <code>(like Ì., II., 1., 2. or a., b.)</code> but sometimes it looks like <code>a ..</code> (you can see that at the end of the response for example.</p>
<p>Any idea how to do that? (I'm using python for that, but can adapt from another language)</p>
|
<python><openai-api><gpt-3>
|
2023-02-12 19:24:01
| 2
| 870
|
Dany M
|
75,429,566
| 11,815,307
|
Stray Python processes in memory on MacOS 11.6
|
<p>I have a Macbook Air M1 from 2020 with MacOS Big Sur (11.6). I regularly use Python with Jupyter notebooks, or from the terminal. To install Python, I use Anaconda3 for MacOS Apple Silicon. I often use Python from different <code>conda</code> environments.</p>
<p>After I close all windows and running python processes, and quit every application, the Activity Monitor application says that I have numerous Python processes in memory. These processes do not take any CPU, only just 10s Mb of memory. I occasionally quit the processes with activity monitor, but then they slowly build up again over time.</p>
<p>Why are these processes here? What can I do to prevent them from building up and taking memory? Is this a bug?</p>
<p><a href="https://i.sstatic.net/tLq5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tLq5n.png" alt="stray processes" /></a></p>
|
<python><macos><anaconda><conda>
|
2023-02-12 19:20:04
| 1
| 699
|
nicholaswogan
|
75,429,559
| 661,720
|
Ren'Py : ModuleNotFoundError: No module named 'netrc'
|
<p>Trying to plug openai's API to Ren'Py (to make characters answer to the player)</p>
<p>I installed the openai python module in Ren'Py using this pip command :</p>
<blockquote>
<p>pip install --target /Users/...../GameName/game/python-packages netrc</p>
</blockquote>
<p>And then inside the game I use this to import the module (as expalined in the Ren'Py documentation here : <a href="https://www.renpy.org/doc/html/python.html" rel="nofollow noreferrer">https://www.renpy.org/doc/html/python.html</a></p>
<blockquote>
<pre><code>init python:
import openai
</code></pre>
</blockquote>
<p>But I get this error :</p>
<blockquote>
<p>File "python-packages/aiohttp/helpers.py", line 9, in
ModuleNotFoundError: No module named 'netrc'</p>
</blockquote>
<p>I guess it means that Ren'Py runs a custom python environment without netrc, but I don't know how to install netrc as a module in Ren'Py</p>
<p>Any help would be greatly appreciated, I'll gladly open source this GPT-3 powered Ren'Py project once it starts working.</p>
|
<python><openai-api><renpy>
|
2023-02-12 19:19:34
| 3
| 1,349
|
Taiko
|
75,429,473
| 4,015,156
|
Memory overwriting other memory issue in Numba?
|
<p>Take the following script:</p>
<pre><code>import numba
@numba.njit
def foo():
lst1 = [[1,2,3,4,5], [6,7,8,9,10]]
for i in range(300):
lst2 = []
lst2.append(1)
del lst1[1]
del lst1[0]
lst1.append([1,2,3,4,5])
lst1.append([6,7,8,9,10])
if len(lst2) > 1:
print("This should never print!")
print("Length of lst2(should not ever be greater than 1): ", len(lst2))
for i in range(200):
foo()
</code></pre>
<p>If you run this, you will get that sometimes the prints happen even though they shouldn't. 'lst2' should never have length more than 1 since it only gets appended to once and then reset in each iteration. It seems that appending to lst1 is somehow "interfering" with lst2. My suspicion is some kind of memory problem where 'lst1' appending is overwriting other memory? But I am not sure.. When I run this, the length of lst2 somehow ends up being 5 sometimes which is super weird.. Thank you for the help!</p>
|
<python><list><memory><numba>
|
2023-02-12 19:06:05
| 0
| 517
|
TheMAAAN
|
75,429,196
| 4,446,853
|
VS Code Jupyter notebook dark theme for interactive elements
|
<p>I'm using a dark theme for entire VS Code and I set <code>"jupyter.themeMatplotlibPlots": true</code> to also make the matplotlib plots dark. This setup works great as long as I don't use any <strong>interactive</strong> elements.</p>
<h2>Examples of bad behaviour</h2>
<p>For example if I run <code>%matplotlib widget</code> (you need to <code>python -m pip install ipympl</code> for this to work) I get an ugly white border around the plot:
<a href="https://i.sstatic.net/fJTOa.png" rel="noreferrer"><img src="https://i.sstatic.net/fJTOa.png" alt="enter image description here" /></a></p>
<p>Also the widgets from <code>ipywidgets</code> don't respect the dark theme:
<a href="https://i.sstatic.net/iAbCA.png" rel="noreferrer"><img src="https://i.sstatic.net/iAbCA.png" alt="enter image description here" /></a></p>
<p>Same goes for elements from <code>IPython.display</code>:
<a href="https://i.sstatic.net/VOkOh.png" rel="noreferrer"><img src="https://i.sstatic.net/VOkOh.png" alt="enter image description here" /></a></p>
<p>Above examples respect the dark theme in Google's Collab so there must be a way to tell these widgets to use the dark theme. But I can't find a good way to do so.</p>
<h2>Unfruitful attempts of fixing the problem</h2>
<p>I tried running <code>set_nb_theme('chesterish')</code> from <code>jupyterthemes.stylefx</code> and it did change the background slightly but the white borders around the widgets remained unchanged.</p>
<p><a href="https://i.sstatic.net/Cwo0m.png" rel="noreferrer"><img src="https://i.sstatic.net/Cwo0m.png" alt="enter image description here" /></a></p>
<p>I noticed that the <code>set_nb_theme</code> function returns a <code>IPython.core.display.HTML</code> object which contains some CSS so I tried to make my own:</p>
<pre class="lang-py prettyprint-override"><code>from IPython.display import HTML
HTML('''
<style>
.jupyter-matplotlib {
background-color: #000;
}
.widget-label, .jupyter-matplotlib-header{
color: #fff;
}
.jupyter-button {
background-color: #333;
color: #fff;
}
</style>
''')
</code></pre>
<p>and this indeed made the interactive plot dark:</p>
<p><a href="https://i.sstatic.net/ElYiv.png" rel="noreferrer"><img src="https://i.sstatic.net/ElYiv.png" alt="enter image description here" /></a></p>
<p>I'm not fully satisfied with this solution bacause to get the dark theme I have to add some lengthy code to the beginning of each notebook (the VS Code setting <code>"jupyter.runStartupCommands"</code> doesn't work since the HTML object must be in a cell output) and I would have to write custom CSS for every widget that I use (the above solution does not fix the slider or audio widget).</p>
<h3>The Question</h3>
<p>So my qusetion is: How to make the interactive widgets use their dark mode stylesheets inside VS Code notebook in the most elegant way possible?</p>
|
<python><matplotlib><jupyter-notebook><ipython><ipympl>
|
2023-02-12 18:19:02
| 1
| 759
|
Nejc Jezersek
|
75,429,095
| 10,197,418
|
Inconsistency when parsing year-weeknum string to date
|
<p>When parsing year-weeknum strings, I came across an inconsistency when comparing the results from <code>%W</code> and <code>%U</code> (<a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes" rel="nofollow noreferrer">docs</a>):</p>
<p><strong>What works:</strong></p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
print("\nISO:") # for reference...
for i in range(1,8): # %u is 1-based
print(datetime.strptime(f"2019-01-{i}", "%G-%V-%u"))
# ISO:
# 2018-12-31 00:00:00
# 2019-01-01 00:00:00
# 2019-01-02 00:00:00
# ...
# %U -> week start = Sun
# first Sunday 2019 was 2019-01-06
print("\n %U:")
for i in range(0,7):
print(datetime.strptime(f"2019-01-{i}", "%Y-%U-%w"))
# %U:
# 2019-01-06 00:00:00
# 2019-01-07 00:00:00
# 2019-01-08 00:00:00
# ...
</code></pre>
<p><strong>What is unexpected</strong>:</p>
<pre class="lang-py prettyprint-override"><code># %W -> week start = Mon
# first Monday 2019 was 2019-01-07
print("\n %W:")
for i in range(0,7):
print(datetime.strptime(f"2019-01-{i}", "%Y-%W-%w"))
# %W:
# 2019-01-13 00:00:00 ## <-- ?! expected 2019-01-06
# 2019-01-07 00:00:00
# 2019-01-08 00:00:00
# 2019-01-09 00:00:00
# 2019-01-10 00:00:00
# 2019-01-11 00:00:00
# 2019-01-12 00:00:00
</code></pre>
<p>The date jumping from 2019-01-13 to 2019-01-07? What's going on here? I don't see any ambiguities in the <a href="https://www.timeanddate.com/calendar/?year=2019&country=9" rel="nofollow noreferrer">calendar for 2019</a>... I also tried to parse the same dates in <code>rust</code> with chrono, and it fails for the <code>%W</code> directive -> <a href="https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=a38796a60d86028977c56f76f0e386be" rel="nofollow noreferrer">playground example</a>. A jump backwards in Python and an error in Rust, what am I missing here?</p>
|
<python><date><datetime>
|
2023-02-12 18:04:12
| 2
| 26,076
|
FObersteiner
|
75,429,055
| 3,136,710
|
How do I indicate a user's input has ended when writing to a file in Python?
|
<p>I believe in C we could do ctrl+z for EOF (ctrl+z worked for this), but how do I do this in Python? I'm using <code>'0'</code> temporarily. I'm using Windows 10 with Python 3.11.</p>
<pre><code>#create a text file and write some lines to it
f = open("newfile.txt",'w')
line = ""
while line != '0': #I'm not sure what to use to indicate the end of the user's input
line = input()
f.write(line + '\n')
f.close()
</code></pre>
|
<python>
|
2023-02-12 17:58:16
| 1
| 652
|
Spellbinder2050
|
75,429,049
| 2,710,983
|
Python Selenium: Getting StaleElementReferenceException on click for SPA website
|
<p>I'm trying to extract 1000 usernames and the team rosters for each username in a list from this SPA (single page application) sports website (requires free login): <a href="https://rumble.otmnft.com/contest/live?contest=Rumble" rel="nofollow noreferrer">https://rumble.otmnft.com/contest/live?contest=Rumble</a></p>
<p>I've tried numerous strategies to try to avoid the common StaleElementReferenceException problem in Selenium including <code>try / except</code> block, <code>WebDriverWait</code> and <code>time.sleep(5)</code> among many others, but none have worked. Would be amazing if someone knows an alternative solution which could work as I've been trying to solve this by myself for quite a few days now.</p>
<p>Here is the relevant part of the latest attempted code which still throws StaleElementReferenceException below:</p>
<pre><code>parent_dict = {}
parent_dict['data'] = []
USERNAME_XPATH = "//span[@class='hidden lg:flex font-semibold']"
ROSTER_XPATH = "//span[@class='text-white hidden lg:flex font-semibold']"
WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.XPATH, "//table")))
username_elements = driver.find_elements(By.XPATH, USERNAME_XPATH)
try:
for usr_element in username_elements:
usr_element.click()
except StaleElementReferenceException:
# Find the element again and continue with the loop
WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.XPATH, USERNAME_XPATH)))
username_elements = driver.find_elements(By.XPATH, USERNAME_XPATH)
time.sleep(5)
for usr_element in username_elements:
usr_element.click()
WebDriverWait(driver, 50).until(EC.presence_of_element_located((By.XPATH, ROSTER_XPATH)))
roster_elements = driver.find_elements(By.XPATH, ROSTER_XPATH)
roster = [cell.text for cell in roster_elements]
TITLE_USER_XPATH = "//div[@class='flex flex-col space-y-2']/span"
title_user = driver.find_element(By.XPATH, TITLE_USER_XPATH)
child_dict = {"username": title_user.text, "roster": roster }
print("CHILD_DICT", child_dict)
parent_dict['data'].append(child_dict)
with open('data_v2.json', 'w') as outfile:
json.dump(parent_dict, outfile)
print("PARENT_DICT", parent_dict)
return parent_dict
</code></pre>
<p>In the browser I can see that the script is able to click on the very first username but it fails when trying click on the second username, but reports the error for the line <code>usr_element.click()</code> both inside the <code>try</code> and <code>except</code> blocks</p>
<p>Full error trace is:</p>
<pre><code>Traceback (most recent call last):
File "/Users/danpro12/Dropbox/01 FREE TIME STUFF/01 WEB DEVELOPMENT/12 DAPPER RELATED/otm_python_scraper/otm_specific/utils.py", line 23, in sub_func
usr_element.click()
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/webelement.py", line 88, in click
self._execute(Command.CLICK_ELEMENT)
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/webelement.py", line 396, in _execute
return self._parent.execute(command, params)
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference of <span class="hidden lg:flex font-semibold"> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:180:5
StaleElementReferenceError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:461:5
element.resolveElement@chrome://remote/content/marionette/element.sys.mjs:674:11
evaluate.fromJSON@chrome://remote/content/marionette/evaluate.sys.mjs:255:31
evaluate.fromJSON@chrome://remote/content/marionette/evaluate.sys.mjs:263:29
receiveMessage@chrome://remote/content/marionette/actors/MarionetteCommandsChild.sys.mjs:74:34
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danpro12/Dropbox/01 FREE TIME STUFF/01 WEB DEVELOPMENT/12 DAPPER RELATED/otm_python_scraper/otm_specific/rumble_client.py", line 53, in <module>
sub_func(driver)
File "/Users/danpro12/Dropbox/01 FREE TIME STUFF/01 WEB DEVELOPMENT/12 DAPPER RELATED/otm_python_scraper/otm_specific/utils.py", line 30, in sub_func
usr_element.click()
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/webelement.py", line 88, in click
self._execute(Command.CLICK_ELEMENT)
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/webelement.py", line 396, in _execute
return self._parent.execute(command, params)
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 429, in execute
self.error_handler.check_response(response)
File "/Users/danpro12/.virtualenvs/otm/lib/python3.9/site-packages/selenium/webdriver/remote/errorhandler.py", line 243, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference of <span class="hidden lg:flex font-semibold"> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:180:5
StaleElementReferenceError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:461:5
element.resolveElement@chrome://remote/content/marionette/element.sys.mjs:674:11
evaluate.fromJSON@chrome://remote/content/marionette/evaluate.sys.mjs:255:31
evaluate.fromJSON@chrome://remote/content/marionette/evaluate.sys.mjs:263:29
receiveMessage@chrome://remote/content/marionette/actors/MarionetteCommandsChild.sys.mjs:74:34
</code></pre>
|
<python><selenium-webdriver><xpath>
|
2023-02-12 17:57:14
| 1
| 476
|
neo5_50
|
75,428,748
| 258,572
|
Building Pybind11 with setuptools for NEON intrinsics
|
<p>I am trying to compile/bind a python extension written in C++ that uses NEON intrinsics using
setuptools build of PyBind11. But it keep giving me errors.</p>
<p>(arm_neon.h:28:2: error: "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard"
#error "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard")</p>
<p>To reproduce:
clone <a href="https://github.com/pybind/python_example" rel="nofollow noreferrer">https://github.com/pybind/python_example</a>
Add #include <arm_neon.h> to the main.cpp</p>
<p>Then I tried to install/build it using pip, this gives me the following error:</p>
<p>arm_neon.h:28:2: error: "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard"
#error "NEON intrinsics not available with the soft-float ABI. Please use -mfloat-abi=softfp or -mfloat-abi=hard"</p>
<p>So, I tried to add these options to the compiler flags by defining:
extra_compile_args=["-mfloat-abi=hard", "-O3", "-mcpu=native"]</p>
<p>But It still fails, and I see from the output:
clang: warning: argument unused during compilation: '-mfloat-abi=hard' [-Wunused-command-line-argument]</p>
<p>However there are some gcc parts in the output as well, so I tried to force the clang++ compiler by
setting:
os.environ["CC"] = "clang++"
In the top of setup.py</p>
<p>However I still get the same error.
(I have also tried a bunch of other tricks, but I feel that im just searching in the wrong direction so I will not list these).</p>
<p>I can compile an stand alone c++ file with clang so it seems like im doing something wrong with the setuptools configurations.</p>
<p>I am running a Macbook Pro M2.</p>
|
<python><setuptools><pybind11><neon>
|
2023-02-12 17:13:48
| 1
| 351
|
Ooki
|
75,428,729
| 3,520,363
|
No files found - print folders and file list using a service google account
|
<p>I enabled the google drive api, created a service account, added the service account email to the folder and successfully generated the .json credentials
But when I try to print the contents of the temp2 folder I get "no files found"
I'm attempting to do it this way</p>
<pre><code>import logging
from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
creds = Credentials.from_service_account_file('bayfiles-779393795f34.json', scopes=['https://www.googleapis.com/auth/drive'])
service = build('drive', 'v3', credentials=creds)
query = "mimeType='application/vnd.google-apps.folder' or mimeType='application/vnd.google-apps.document' or mimeType='application/vnd.google-apps.spreadsheet' or mimeType='application/vnd.google-apps.presentation' and trashed = false and parents in '16x7o7CNCscM-lFqnKaXwUf1Bv-OTQK0W'"
results = service.files().list(q=query).execute()
items = results.get("files", [])
if not items:
logger.debug("No files found")
else:
# Print file names
for item in items:
logger.debug(f'The service account has access to the file "{item["name"]}" with ID "{item["id"]}"')
</code></pre>
<p>The strange thing is that the number of requests appears to me in the API page of my project</p>
<p><a href="https://i.sstatic.net/KDxCS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KDxCS.png" alt="enter image description here" /></a></p>
<p>but through the code it is returned to me as if there were no files inside the temp2 folder, but it's wrong.</p>
|
<python><google-cloud-platform>
|
2023-02-12 17:10:56
| 1
| 380
|
user3520363
|
75,428,689
| 3,120,501
|
Flickering windows for Matplotlib animation
|
<p>I'm trying to create a simulator in Python with Matplotlib as the GUI. I'd like to have a number of different figures, all individually animated - i.e. one showing the main simulation window, another showing graphs, etc. I'm using the Matplotlib set_data() function to animate the plots, rather than clearing and re-drawing everything each time, which is working well, however all of the windows are flickering, and I'd really like to avoid this if possible (I've looked at this question: <a href="https://stackoverflow.com/questions/51133678/matplotlib-figure-flickering-when-updating">Matplotlib Figure flickering when updating</a>, but the answer didn't work for me unfortunately).</p>
<p>The best way to do this might be using the Matplotlib FuncAnimation class, however I'd like to drive the animation externally (e.g. by calling an update function as and when I want to update the plots), whereas FuncAnimation seems to 'have control' of the clock, which isn't appropriate for my use case. I've tried the 'blitting' approach described here: <a href="https://stackoverflow.com/questions/11002338/how-do-i-re-draw-only-one-set-of-axes-on-a-figure-in-python">How do I re-draw only one set of axes on a figure in python?</a>, but this doesn't seem to stop the flickering, although I've very new to animation so I'm probably not doing it correctly.</p>
<p>A few observations:</p>
<ul>
<li>When there's just one window (figure) open, the flickering doesn't happen. E.g. if there are two Matplotlib windows flickering and then one is closed, the one left open will start running much more smoothly (and will no longer flicker).</li>
<li>A window with an animated figure will flicker even if the second window doesn't contain an animated figure - it seems to be related to there being multiple Matplotlib windows open.</li>
</ul>
<p>Does anyone know how I can fix this? Here's a minimal working example of my problem:</p>
<pre><code>import matplotlib.pyplot as plt
class View:
def __init__(self):
plt.ion()
self.sim_fig, self.sim_ax = plt.subplots(num="Simulator")
self.energy_fig, self.energy_ax = plt.subplots(num="Energy")
# Set axis limits
x_min, x_max = (-3, 6)
y_min, y_max = (-6, 3)
self.sim_ax.set_xlim(x_min, x_max)
self.sim_ax.set_ylim(y_min, y_max)
self.sim_ax.axis('equal')
self.energy_ax.set_xlim(0, 200)
self.energy_ax.set_ylim(-20, 20)
self.times = []
self.energies = []
# Create plot objects for animation
self.posn_pt, = self.sim_ax.plot([], [], 'bo')
self.energy_data, = self.energy_ax.plot([], [], '.-')
plt.show(block=False)
def update(self, time, x, y, energy):
# Allows non-constant updates between steps
self.times.append(time)
# Update dot position
self.posn_pt.set_xdata(x)
self.posn_pt.set_ydata(y)
# Append energy
self.energies.append(energy)
self.energy_data.set_data(self.times, self.energies)
plt.draw()
view = View()
from random import uniform
x = 0
y = 0
energy = 0
time = 0
# Simulator is a bit like this (but obviously much more complicated!)
# Time between steps is variable (might not be constant)
view.update(time, x, y, energy)
for i in range(200):
x += uniform(-0.02, 0.05)
y += uniform(-0.05, 0.02)
energy += uniform(-1, 1)
time += uniform(0, 1)
view.update(time, x, y, energy)
plt.pause(0.05)
input("Simulation finished")
</code></pre>
<p>... and with my attempt to 'blit'...</p>
<pre><code>import matplotlib.pyplot as plt
class View:
def __init__(self):
plt.ion()
self.sim_fig, self.sim_ax = plt.subplots(num="Simulator")
self.sim_fig.canvas.draw()
self.sim_background = self.sim_fig.canvas.copy_from_bbox(self.sim_ax.bbox)
self.energy_fig, self.energy_ax = plt.subplots(num="Energy")
self.energy_fig.canvas.draw()
self.energy_background = self.energy_fig.canvas.copy_from_bbox(self.energy_ax.bbox)
# Set axis limits
x_min, x_max = (-3, 6)
y_min, y_max = (-6, 3)
self.sim_ax.set_xlim(x_min, x_max)
self.sim_ax.set_ylim(y_min, y_max)
self.sim_ax.axis('equal')
self.energy_ax.set_xlim(0, 200)
self.energy_ax.set_ylim(-20, 20)
self.times = []
self.energies = []
# Create plot objects for animation
self.posn_pt, = self.sim_ax.plot([], [], 'bo')
self.energy_data, = self.energy_ax.plot([], [], '.-')
# Part of the flickerless-attempt animation code
self.sim_fig.canvas.draw()
self.energy_fig.canvas.draw()
plt.show(block=False)
def update(self, time, x, y, energy):
# For flickerless plotting
self.sim_fig.canvas.restore_region(self.sim_background)
self.energy_fig.canvas.restore_region(self.energy_background)
# Allows non-constant updates between steps
self.times.append(time)
# Update dot position
self.posn_pt.set_xdata(x)
self.posn_pt.set_ydata(y)
# Append energy
self.energies.append(energy)
self.energy_data.set_data(self.times, self.energies)
self.sim_ax.draw_artist(self.posn_pt)
self.energy_ax.draw_artist(self.energy_data)
self.sim_fig.canvas.blit(self.sim_ax.bbox)
self.energy_fig.canvas.blit(self.energy_ax.bbox)
view = View()
from random import uniform
x = 0
y = 0
energy = 0
time = 0
# Simulator is a bit like this (but obviously much more complicated!)
# Time between steps is variable (might not be constant)
view.update(time, x, y, energy)
for i in range(200):
x += uniform(-0.02, 0.05)
y += uniform(-0.05, 0.02)
energy += uniform(-1, 1)
time += uniform(0, 1)
view.update(time, x, y, energy)
plt.pause(0.05)
input("Simulation finished")
</code></pre>
<p>I'd also like to be able to dynamically rescale the axes, which some answers have said complicates things.</p>
|
<python><matplotlib><animation><simulation><blit>
|
2023-02-12 17:03:54
| 1
| 528
|
LordCat
|
75,428,616
| 4,404,805
|
Django - Find sum and count of values in child table using serializers
|
<p>I have two tables: <code>Event</code> and <code>EventTicket</code>. I have to return total transactions and total tickets sold of each event. I am trying to achieve this using Django serializers. Using only serializers, I need to find the following:</p>
<pre><code> 1. total tickets sold: count of items in EventTicket table for each event
2. total transactions: sum of total_amount of items in EventTicket table for each event where payment_status: 1
</code></pre>
<p>I read about <code>SerializerMethodField</code> but couldn't find a solution specific to this scenario.</p>
<pre><code>class Event(models.Model):
name = models.CharField(max_length=100, null=False, blank=False)
description = models.TextField(null=False, blank=False)
date_time = models.DateTimeField()
class EventTicket(models.Model):
user = models.ForeignKey(User,on_delete=models.CASCADE)
event = models.ForeignKey(Event,on_delete=models.CASCADE)
payment_status = models.SmallIntegerField(default=0, null=False, blank=False) ## payment_success: 1, payment_failed: 0
total_amount = models.FloatField()
date_time = models.DateTimeField()
</code></pre>
<p>My desired output is:</p>
<pre><code>"ticket_details": [
{
"event_id": 1,
"event_name": Event-1,
"total_transactions": 10000, ## Sum of all ticket amounts of event_id: 1, where payment_status: 1
"total_tickets": 24, ## Count of all tickets that belong to event_id: 1
},
{
"event_id": 2,
"event_name": Event-2,
"total_transactions": 10000, ## Sum of all ticket amounts of event_id: 2, where payment_status: 1
"total_tickets": 24, ## Count of all tickets that belong to event_id: 2
}]
</code></pre>
<p>This is what I have done:</p>
<p><strong>models.py:</strong></p>
<pre><code>class EventTicket(models.Model):
event = models.ForeignKey(Event,on_delete=models.CASCADE, related_name='event_tickets')
</code></pre>
<p><strong>serializers.py:</strong></p>
<pre><code>class EventListSerializer(serializers.ModelSerializer):
# ticket_id = serializers.IntegerField(source='id')
# event_id = serializers.PrimaryKeyRelatedField(source='event', read_only=True)
# event_name = serializers.PrimaryKeyRelatedField(source='event.name', read_only=True)
total_transactions = serializers.SerializerMethodField()
total_tickets = serializers.SerializerMethodField()
class Meta:
model = Event
fields = ('total_transactions', 'total_tickets')
def get_total_transactions(self, obj):
return obj.event_tickets.all().aggregate(sum('total_amount'))['total_amount__sum']
def get_total_tickets(self, obj):
return obj.event_tickets.count()
</code></pre>
|
<python><django><django-serializer>
|
2023-02-12 16:52:41
| 2
| 1,207
|
Animeartist
|
75,428,512
| 1,519,304
|
matplotlib.plt adds an unwanted line when using fill_between
|
<p>Question about using <code>fill_between</code>:</p>
<p>Using <code>fill_between</code> in the following snippet adds an unwanted vertical and horizontal line. Is it because of the parametric equation? How can I fix it?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 2 * np.pi, 100)
x = 16 * np.sin(t)**3
y = 13 * np.cos(t) - 5 * np.cos(2*t) - 2*np.cos(3*t) - np.cos(4*t)
plt.plot(x,y, 'red', linewidth=2)
plt.fill_between(x, y, color = 'red', alpha = 0.2)
</code></pre>
<p>Here is the outcome:</p>
<p><a href="https://i.sstatic.net/edTND.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/edTND.jpg" alt="Please notice the horizontal and vertical lines" /></a></p>
|
<python><matplotlib>
|
2023-02-12 16:34:09
| 1
| 1,073
|
NNsr
|
75,428,484
| 726,730
|
QTreeWidget clear empty space at the bottom of widget
|
<p>Is this example i have a QTreeWidget with 4 columns. The last column is filled by QFrames.</p>
<p><strong>File ui.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets
import sys
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
app.setStyle("Windows")
treeWidget = QtWidgets.QTreeWidget()
treeWidget.headerItem().setText(0, "Α/Α")
treeWidget.headerItem().setText(1,"Τύπος")
treeWidget.headerItem().setText(2,"Τίτλος")
treeWidget.headerItem().setText(3,"Προεπισκόπιση")
treeWidget.setStyleSheet("QTreeWidget::item{height:60px;}")
l = []
for i in range(0,30):
l.append(QtWidgets.QTreeWidgetItem(["1","1","1","1"]))
treeWidget.addTopLevelItems(l) # add everything to the tree
treeWidget.show()
right_height = treeWidget.header().height()
for el in l:
right_height += treeWidget.visualItemRect(el).height()
print(right_height)
sys.exit(app.exec_())
</code></pre>
<p>Output (after scrolling to the bottom of QTreeWidget):</p>
<p><a href="https://i.sstatic.net/IkYwF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IkYwF.png" alt="enter image description here" /></a></p>
<p>The desired total height of ScrollArea (inside QTreeWidget) is 1823 and it's calculated as the sum of header height and height of each line.</p>
<p>As you can see there is empty space after last row in QTreeWidget. This problem doesn't appear after resizing QDialog manually.</p>
<p>Edit: <a href="https://stackoverflow.com/questions/62950235/qtreeview-and-scrollarea-size">This</a> may be usefull.</p>
|
<python><pyqt5><qtreewidget>
|
2023-02-12 16:30:57
| 1
| 2,427
|
Chris P
|
75,428,181
| 8,953,248
|
Is there official documentation for python's len() function warning that it can sometimes return apparently wrong values for some strings?
|
<pre><code>$ python
Python 3.8.10 (default, Nov 14 2022, 12:59:47)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> for name in ["Blue Oyster Cult", "Blue Öyster Cult", "Spinal Tap", "Spın̈al Tap"]:
... print(f'{len(name):3d} {name}')
...
16 Blue Oyster Cult
16 Blue Öyster Cult
10 Spinal Tap
11 Spın̈al Tap
>>> quit()
</code></pre>
<p>I'm not asking for an explanation of this behaviour, I'm asking for any official documentation for the <code>len()</code> function itself saying that it will return a seemingly wrong answer for the last case.</p>
|
<python>
|
2023-02-12 15:42:08
| 3
| 618
|
Ray Butterworth
|
75,428,125
| 3,045,351
|
Package geobuf returning NoneType when trying to convert .pbf to json
|
<p>I am downloading a .pbf file from the following location:</p>
<pre><code>https://api.mapbox.com/v4/tableau-enterprise.ctcaz7rr/4/7/5.vector.pbf?sku=101NmEGuqERal&access_token=pk.eyJ1IjoidGFibGVhdS1lbnRlcnByaXNlIiwiYSI6ImNrY29iZjN6MzA4ZzgycHF6MHd0cXhyaXoifQ.rtfFAKyzk-qMYue5C-8RVA
</code></pre>
<p>This returns a JSON blob as per the below:</p>
<pre><code>{"bounds":[-171.5625,-55.37911,178.59375,81.093214],"center":[120.585938,23.563592,9],"created":1608167491585,"created_by_client":"studio","description":"admin2-lines_z49.mbtiles","filesize":56315904,"format":"pbf","generator":"tile-join v1.36.0","generator_options":"tippecanoe -f -o admin2-labels_z49.mbtiles -r0 -Z4 -z9 -l admin-2-labels-j admin2_label_mb_stage2.geojson; tippecanoe -f -o admin2-lines_z49.mbtiles -r0 -Z4 -z9 -l admin-2-lines admin2_mb_merge.geojson; tile-join -o admin-2-join_z9.mbtiles admin2-lines_z49.mbtiles admin2-labels_z49.mbtiles","id":"tableau-enterprise.ctcaz7rr","mapbox_logo":true,"maxzoom":9,"minzoom":4,"modified":1608167491377,"name":"admin_2_z9-9pepve","private":true,"scheme":"xyz","tilejson":"2.2.0","tiles":["https://a.tiles.mapbox.com/v4/tableau-enterprise.ctcaz7rr/{z}/{x}/{y}.vector.pbf?access_token=pk.eyJ1IjoidGFibGVhdS1lbnRlcnByaXNlIiwiYSI6ImNrY29iZjN6MzA4ZzgycHF6MHd0cXhyaXoifQ.rtfFAKyzk-qMYue5C-8RVA","https://b.tiles.mapbox.com/v4/tableau-enterprise.ctcaz7rr/{z}/{x}/{y}.vector.pbf?access_token=pk.eyJ1IjoidGFibGVhdS1lbnRlcnByaXNlIiwiYSI6ImNrY29iZjN6MzA4ZzgycHF6MHd0cXhyaXoifQ.rtfFAKyzk-qMYue5C-8RVA"],"type":"overlay","upload_id":"ckis5d9e5082z23phtcg5sol1","vector_layers":[{"description":"","fields":{"default":"String","id":"Number","name_de":"String","name_en":"String","name_es":"String","name_fr":"String","name_ja":"String","name_ko":"String","name_pt":"String","name_zh":"String","name_zh-Hans":"String","parent_0":"String","worldview":"String"},"id":"admin-2-labels-j","maxzoom":9,"minzoom":4,"source":"tableau-enterprise.ctcaz7rr","source_name":"admin_2_z9-9pepve"},{"description":"","fields":{"worldview":"String"},"id":"admin-2-lines","maxzoom":9,"minzoom":4,"source":"tableau-enterprise.ctcaz7rr","source_name":"admin_2_z9-9pepve"}],"version":"2","webpage":"https://a.tiles.mapbox.com/v4/tableau-enterprise.ctcaz7rr/page.html?access_token=pk.eyJ1IjoidGFibGVhdS1lbnRlcnByaXNlIiwiYSI6ImNrY29iZjN6MzA4ZzgycHF6MHd0cXhyaXoifQ.rtfFAKyzk-qMYue5C-8RVA"}
</code></pre>
<p>Playing around with <code>Chrometools</code>, I have managed to download the <code>.pbf</code> file locally. I am then attempting to unpack this to convert this back to <code>geojson</code> using <code>geobuf</code> as per the below:</p>
<pre><code>import geobuf
with open('C:\\Users\\<mydir>\\Downloads\\5.vector.pbf', 'rb') as f:
contents = f.read()
f.close()
my_json = geobuf.decode(contents) # Geobuf string -> GeoJSON or TopoJSON
print('********', my_json)
</code></pre>
<p>...however, all I am getting back from that is a NoneType object. What am I doing wrong?</p>
<p>Thanks</p>
|
<python><python-3.x><geojson><osm.pbf>
|
2023-02-12 15:31:22
| 0
| 4,190
|
gdogg371
|
75,428,029
| 1,883,795
|
*minimal, fast* least-squares function for python
|
<p>Is there a pre-existing library that does the equivalent of</p>
<pre><code>B = np.linalg.inv(X.T @ X) @ X.T @ y
return B
</code></pre>
<p>and nothing else?? I want something fast and minimal (i.e.: no pvalues, no standard errors). Obviously I've already done it above, but I don't want to write my own unit tests or handle all the corner cases :)</p>
|
<python><least-squares>
|
2023-02-12 15:17:57
| 1
| 3,582
|
generic_user
|
75,427,978
| 2,131,659
|
How to survive revert_mainfile() in memory?
|
<p>I'm trying to reload the blend file inside the loop in my script. Modal operator doesn't work, because after scene reload the operator's instance dies.</p>
<p>So, I create my own class, which has my loop in generator, which yields after calling
revert_mainfile(), and returning control to it from blender's callbacks (chained load_post, scene_update_pre).</p>
<p>Once it fires ok, but on the 2nd iteration it closes blender with AV.</p>
<p>What am I doing wrong?</p>
<p>Here is the code:</p>
<pre><code>"""
The script shows how to have the iterator with Operator.modal()
iterating with stored generator function to be able to reload blend file
and wait for blender to load the scene asynchronously with callback to have
a valid context to execute the next iteration.
see also: https://blender.stackexchange.com/questions/17960/calling-delete-operator-after-revert-mainfile
Sadly, a modal operator doesn't survive the scene reload, so use stored runner
instance, getting control back with callbacks.
"""
import bpy
#from bpy.props import IntProperty, FloatProperty
import time
from bpy.app.handlers import persistent
ITERATIONS = 3
my_start_time = 0.0 # deb for my_report()
@persistent
def my_report(mess):
print ("deb@ {:.3f} > {}".format(time.time() - my_start_time, mess))
@persistent
def scene_update_callback(scene):
""" this fires after actual scene reloading """
my_report("scene_update_callback start")
# self-removal, only run once
bpy.app.handlers.scene_update_pre.remove(scene_update_callback)
# register that we have updated the scene
MyShadowOperator.scene_loaded = True
# emulate event for modal()
MyShadowOperator.instance.modal()
my_report("scene_update_callback finish")
pass
# after this, it seems we have nowhere to return, and an AV occurs :-(
@persistent
def load_post_callback(dummy):
""" it's called immediately after revert_mainfile() before any actual loading """
my_report("load_post_callback")
# self-removal, so it isn't called again
bpy.app.handlers.load_post.remove(load_post_callback)
# use a scene update handler to delay execution of code
bpy.app.handlers.scene_update_pre.append(scene_update_callback)
class MyShadowOperator():
""" This must survive after the file reload. """
instance = None
scene_loaded = False
def __init__(self, operator):
__class__.instance = self
# store all of the original operator parameters (in case of any)
self.params = operator.as_keywords()
# create the generator (no code from it is executing here)
self.generator = self.processor()
def modal(self):
""" here we are on each iteration """
my_report("Shadow modal start")
return next(self.generator)
def processor(self): #, context
""" this is a generator function to process chunks.
returns {'FINISHED'} or {'RUNNING_MODAL'} """
my_report("processor init")
for i in range(ITERATIONS):
my_report(f"processor iteration step {i}")
# here we reload the main blend file
# first registering the callback
bpy.app.handlers.load_post.append(load_post_callback)
__class__.scene_loaded = False
my_report("processor reloading the file")
bpy.ops.wm.revert_mainfile()
# the scene is already loaded
my_report("processor after reloading the file")
# yield something which will not confuse ops.__call__()
if i == ITERATIONS-1:
yield {} #'FINISHED'
else:
yield {} #'RUNNING_MODAL'
return
class ShadowOperatorRunner(bpy.types.Operator):
"""Few times reload original file, and create an object. """
bl_idname = "object.shadow_operator_runner"
bl_label = "Simple Modal Operator"
def execute(self, context):
""" Here we start from the UI """
global my_start_time
my_start_time = time.time() # deb
my_report("runner execute() start")
# create and init a shadow operator
shadow = MyShadowOperator(self)
# start the 1st iteration
shadow.modal()
# and just leave and hope the next execution in shadow via the callback
my_report("runner execute() return")
return {'FINISHED'} # no sense to keep it modal because it dies anyway
def register():
bpy.utils.register_class(ShadowOperatorRunner)
def unregister():
bpy.utils.unregister_class(ShadowOperatorRunner)
if __name__ == "__main__":
register()
# test call
bpy.ops.object.shadow_operator_runner('EXEC_DEFAULT')
</code></pre>
<p>And this is console output:</p>
<pre><code>deb@ 0.000 > runner execute() start
deb@ 0.001 > Shadow modal start
deb@ 0.001 > processor init
deb@ 0.002 > processor iteration step 0
deb@ 0.003 > processor reloading the file
Read blend: D:\mch\MyDocuments\scripts\blender\modal_operator_reloading_blend\da
ta\runner_modal_operator.blend
deb@ 0.026 > load_post_callback
deb@ 0.027 > processor after reloading the file
deb@ 0.028 > runner execute() return
deb@ 0.031 > scene_update_callback start
deb@ 0.031 > Shadow modal start
deb@ 0.033 > processor iteration step 1
deb@ 0.034 > processor reloading the file
Read blend: D:\mch\MyDocuments\scripts\blender\modal_operator_reloading_blend\da
ta\runner_modal_operator.blend
deb@ 0.057 > load_post_callback
deb@ 0.060 > processor after reloading the file
deb@ 0.060 > scene_update_callback finish
Error : EXCEPTION_ACCESS_VIOLATION
Address : 0x00007FF658DDC2F7
Module : D:\Program files\Blender Foundation\blender-2.79.0\blender.exe
</code></pre>
<p>Blender version:</p>
<pre><code>Blender 2.79 (sub 7)
build date: 28/07/2019
build time: 17:48
build commit date: 2019-06-27
build commit time: 10:41
build hash: e045fe53f1b0
build platform: Windows
build type: Release
</code></pre>
<p>After the last line of scene_update_callback() there is nothing to return to (if I watch the call stack in the debugger), it just closes blender with the Access Violation error.</p>
|
<python><blender><reload>
|
2023-02-12 15:10:14
| 1
| 319
|
Mechanic
|
75,427,829
| 127,508
|
Why does the same hash function of Python and Rust produce different result for the same string?
|
<p>TL;DR:</p>
<p>With the same parameters both hash functions produce the same results. There are few pre-conditions have to be met to achieve that.</p>
<p>I am building a system that has parts in Rust and Python. I need a hashing library that produces the same values for the same input on both ends. I thought that Python and Rust also uses SipHash 1-3 so I have tried to use that.</p>
<p>Python:</p>
<pre><code>>>> import ctypes
>>> ctypes.c_size_t(hash(b'abcd')).value
14608482441665817778
>>> getsizeof(ctypes.c_size_t(hash(b'abcd')).value)
36
>>> type(b'abcd')
<class 'bytes'>
</code></pre>
<p>Rust:</p>
<pre><code>use hashers::{builtin::DefaultHasher};
use std::hash::{Hash, Hasher};
pub fn hash_str(s: &str) -> u64 {
let mut hasher = DefaultHasher::new();
s.hash(&mut hasher);
hasher.finish()
}
pub fn hash_bytes(b: &[u8]) -> u64 {
let mut hasher = DefaultHasher::new();
b.hash(&mut hasher);
hasher.finish()
}
fn test_hash_str() {
let s1: &str = "abcd";
let h1: u64 = hash_str(s1);
assert_eq!(h1, 13543138095457285553);
}
#[test]
fn test_hash_bytes() {
let b1: &[u8] = "abcd".as_bytes();
let h1: u64 = hash_bytes(b1);
assert_eq!(h1, 18334232741324577590);
}
</code></pre>
<p>Unfortunately I am not able to produce the same values on both end. Is there a way to get the same values somehow?</p>
<p>UPDATE:</p>
<p>After checkin Python's implementation there was a detail that I originally missed, so that Python uses a kind of random salt for every run. This means that the result I got from the Python function could not be the same as the Rust version.</p>
<p>This can be disabled with PYTHONHASHSEED=0 python ...</p>
<p>However this still does not make Python produce the same vales as the Rust version. I have tried custom SipHash implementations on both end. The results are consistent on both ends:</p>
<p>Both use siphasher::sip::SipHasher13; and DefaultHasher produces the same outputs. The result for a String is the same as for the &str but different for the .as_bytes() version.</p>
<pre><code> #[test]
fn test_hash_string() {
let s1: String = "abcd".to_string();
let h1: u64 = hash_string(s1);
assert_eq!(h1, 13543138095457285553);
}
#[test]
fn test_hash_str() {
let s1: &str = "abcd";
let h1: u64 = hash_str(s1);
assert_eq!(h1, 13543138095457285553);
}
#[test]
fn test_hash_bytes() {
let b1: &[u8] = "abcd".as_bytes();
let h1: u64 = hash_bytes(b1);
assert_eq!(h1, 18334232741324577590);
}
</code></pre>
<p>On Python side after disabling the randomization:</p>
<pre><code> sh = SipHash(c=1, d=3)
h = sh.auth(0, "abcd")
assert h == 16416137402921954953
</code></pre>
|
<python><rust><hash>
|
2023-02-12 14:47:37
| 1
| 8,822
|
Istvan
|
75,427,671
| 4,865,723
|
Test for instantiation of a mocked class in Python?
|
<p>I want to make my tests "real" unittests in a strict meaning. There shouldn't be any dependencies. All dependencies should be mocked.</p>
<p>The class <code>Bar</code> in module <code>bar</code> need to be tested. But in some situations it will have a member of type <code>Foo</code> from module <code>foo</code>. The goal is that the unittest does not have to import <code>foo</code> or <code>foo.Foo</code>.</p>
<p>My mocking seems to work. But I'm not sure how to test if <code>Bar()</code> now does have tried to instantiate <code>foo.Foo()</code>. My <code>assert_called()</code> in the below example failed with <code>Expected 'mock' to have been called.</code>. And that assert also doesn't give me all information's I want.</p>
<p>This is the test code:</p>
<pre><code>#!/usr/bin/env python3
import unittest
from unittest import mock
import bar
# The goal ist to test "Bar" wihtout importing "foo.Foo" as an extra dependency
class MyTest(unittest.TestCase):
def test_bar(self):
mock_foo = mock.Mock()
bar.foo = mock_foo
b = bar.Bar(7)
print('')
print(f'{mock_foo=}')
print(f'{b.y=}')
# Of course this doesn't work, because "foo" is not imported
# self.assertIsInstance(b.y, foo.Foo)
print(mock_foo.__dict__)
mock_foo.assert_called() # failes
def test_int(self):
b = bar.Bar(8)
self.assertIsInstance(b.y, int)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>Here comes module <code>bar</code>:</p>
<pre><code>import foo
class Bar:
def __init__(self, y):
if y == 7:
self.y = foo.Foo(7)
else:
self.y = y
</code></pre>
<p>And this is module <code>foo</code>:</p>
<pre><code>class Foo:
def __init__(self, x):
self.x = x
</code></pre>
|
<python><unit-testing><mocking>
|
2023-02-12 14:23:03
| 1
| 12,450
|
buhtz
|
75,427,538
| 1,574,054
|
RegularGridInterpolator excruciatingly slow compared to interp2d
|
<p>Consider the following code example:</p>
<pre><code># %%
import numpy
from scipy.interpolate import interp2d, RegularGridInterpolator
x = numpy.arange(9000)
y = numpy.arange(9000)
z = numpy.random.randint(-1000, high=1000, size=(9000, 9000))
f = interp2d(x, y, z, kind='linear', copy=False)
f2 = RegularGridInterpolator((x, y), z, "linear")
mx, my = np.meshgrid(x, y)
M = np.stack([mx, my], axis=-1)
# %%
%timeit f(x, y)
# %%
%timeit f2(M)
</code></pre>
<p>It sets up some example interpolators using <code>scipy.interpolate.interp2d</code> and <code>scipy.interpolate.RegularGridInterpolator</code>. The output of the two cells above is</p>
<blockquote>
<p>1.09 s ± 4.38 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)</p>
</blockquote>
<p>and</p>
<blockquote>
<p>10 s ± 17.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)</p>
</blockquote>
<p>respectively.</p>
<p>The <code>RegularGridInterpolator</code> is about 10 times slower than the <code>interp2d</code>. The problem is that <code>interp2d</code> has been <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html#scipy-interpolate-interp2d" rel="nofollow noreferrer">marked as deprecated in scipy 1.10.0</a>. And new code should use <code>RegularGridInterpolator</code>. This seems a bit strange to me since it would be such a bad replacement. Is there maybe a problem in my code example above? How can I speed this interpolation process up?</p>
|
<python><scipy><interpolation><deprecation-warning><linear-interpolation>
|
2023-02-12 14:04:28
| 3
| 4,589
|
HerpDerpington
|
75,427,436
| 3,360,096
|
Mocker.patch a function when unit testing a Flask blueprint
|
<p>I've got a blueprint file <code>/views/index.py</code>:</p>
<pre><code>from flask import Blueprint, render_template
index = Blueprint('index', __name__)
def auth():
return "dog"
@index.route('/')
def index_view():
return render_template(
'index.html', user=auth())
</code></pre>
<p>This is initialized fine from <code>/main.py</code>:</p>
<pre><code>from flask import Flask
from views.index import index
from views.login import login
app = Flask(__name__)
app.register_blueprint(index)
</code></pre>
<p>How can I mock the auth() function in my blueprint to return an override like "cat"?</p>
|
<python><python-mock>
|
2023-02-12 13:48:33
| 1
| 762
|
penitent_tangent
|
75,426,952
| 6,564,294
|
Strange pandas behaviour. character is found where it does not exist
|
<p>I aim to write a function to apply to an entire dataframe. Each column is checked to see if it contains the currency symbol '$' and remove it.</p>
<p>Surprisingly, a case like:</p>
<pre><code>import pandas as pd
dates = pd.date_range(start='2021-01-01', end='2021-01-10').strftime('%d-%m-%Y')
print(dates)
</code></pre>
<p>output:</p>
<p><code>Index(['01-01-2021', '02-01-2021', '03-01-2021', '04-01-2021', '05-01-2021', '06-01-2021', '07-01-2021', '08-01-2021', '09-01-2021', '10-01-2021'], dtype='object')</code></p>
<p>But when I do:</p>
<pre><code>dates.str.contains('$').all()
</code></pre>
<p>It returns <code>True</code>. Why???</p>
|
<python><pandas>
|
2023-02-12 12:22:08
| 1
| 324
|
Chukwudi
|
75,426,868
| 2,723,298
|
TensorFlow GPU problem 'libnvinfer.so.7' and ' 'libnvinfer.so.7'' could not load
|
<p>I installed TensorFlow under <a href="https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux#WSL_2" rel="nofollow noreferrer">WSL 2</a>, <a href="https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_22.04_LTS_(Jammy_Jellyfish)" rel="nofollow noreferrer">Ubuntu 22.04</a> (Jammy Jellyfish), I followed the instructions in <em><a href="https://www.tensorflow.org/install/pip" rel="nofollow noreferrer">Install TensorFlow with pip</a></em>.</p>
<p>*I also installed Nvidia drivers for Windows and in my other WSL 2, I use GPU-supported simulation program.</p>
<p>Everything seemed OK. I didn't get any error message during installation, but when I imported TensorFlow in Python 3, I got this error:</p>
<pre class="lang-none prettyprint-override"><code>2023-02-12 14:49:58.544771: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvrtc.so.11.0: cannot open shared object file: No such file or directory
2023-02-12 14:49:58.544845: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-02-12 14:49:58.544874: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
</code></pre>
<p>I searched my libnvinfer_plugin.so.7 files:</p>
<pre class="lang-none prettyprint-override"><code>sudo find / -name libnvinfer.so.7 2> /dev/null
</code></pre>
<p>and I found them in this directory:</p>
<pre class="lang-none prettyprint-override"><code>cat /usr/lib/x86_64-linux-gnu/libnvinfer.so.7
</code></pre>
<p>and I added this directory to LD_LIBRARY_PATH like in <em><a href="https://stackoverflow.com/questions/74956134/could-not-load-dynamic-library-libnvinfer-so-7">Could not load dynamic library 'libnvinfer.so.7'</a></em>, but nothing changed. Still TensorFlow is working, but I can't use the GPU.</p>
<p>nvidia-smi:</p>
<pre class="lang-none prettyprint-override"><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01 Driver Version: 516.94 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 Off | N/A |
| N/A 43C P0 22W / N/A | 0MiB / 6144MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
</code></pre>
<p>nvcc--version:</p>
<pre class="lang-none prettyprint-override"><code>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
</code></pre>
<p>*The TensorFlow version is: 2.11.0</p>
<p>So, how can I fix this problem?</p>
|
<python><tensorflow><gpu>
|
2023-02-12 12:08:30
| 2
| 538
|
BARIS KURT
|
75,426,783
| 655,404
|
Interfacing MIDI device (Akai LPD8 mk2) from Python - works for reading but not sending
|
<h1>Preface</h1>
<p>I have an <a href="https://www.akaipro.com/lpd8-mk2" rel="nofollow noreferrer">Akai LPD8 mk2</a> which I would like to interface from Python.</p>
<p>There are two projects that (I believe) provide interfaces for the predecessor (mk1):</p>
<ul>
<li><a href="https://github.com/zetof/LPD8" rel="nofollow noreferrer">https://github.com/zetof/LPD8</a></li>
<li><a href="https://github.com/DrLuke/python-lpd8" rel="nofollow noreferrer">https://github.com/DrLuke/python-lpd8</a></li>
</ul>
<p>I found I needed to make a few small changes to get the reading to work with these libs (*example below, not sure how relevant this is and it could easily be because I misunderstand something).</p>
<p>However, no matter what I tried, I could not get the sending to the device to work. Specifically, changing the state (on/off) of the pads of the device from Python. This state is indicated by the color of the pads' lighting.</p>
<h1>Question</h1>
<p>A minimal example of what I thought should work is the following:</p>
<p>With the very nice <a href="https://mido.readthedocs.io" rel="nofollow noreferrer">mido</a> library, I would like to read a few state changes of the pads. Then send them back with a small delay (similar idea here: <a href="https://stackoverflow.com/a/29501455/655404">https://stackoverflow.com/a/29501455/655404</a>):</p>
<pre class="lang-py prettyprint-override"><code>from time import sleep
import mido
def get_ioport_name(search):
# see footnote **
names = mido.get_ioport_names()
names = set(n for n in names if search in n)
assert len(names) == 1
return names.pop()
def get_messages(port):
return list(port.iter_pending())
name = get_ioport_name("LPD8")
port = mido.open_ioport(name) # see footnote ***
# ensure that there are no messages in the queue
get_messages(port)
#pause
input("Press a few pads, then press Enter to continue...")
msgs = get_messages(port)
print(f"Recorded these messages:\n{msgs}\n")
print("Echoing these messages:")
for m in msgs:
print(m)
port.send(m)
sleep(1)
</code></pre>
<p>This runs and gives no error messages. However, the expected color changes on the pads do <strong>not</strong> happen.</p>
<p><strong>What am I doing wrong?</strong></p>
<p>I have tried with different <a href="https://mido.readthedocs.io/en/latest/backends/index.html" rel="nofollow noreferrer">backends</a> for mido (rtmidi, portmidi, pygame) as well as using <a href="https://pypi.org/project/python-rtmidi/" rel="nofollow noreferrer">rtmidi</a> directly, all with the same result.</p>
<h1>Footnotes</h1>
<h3>*Example for changes needed:</h3>
<p>I had to change the constant's block here <a href="https://github.com/zetof/LPD8/blob/master/lpd8/lpd8.py#L16" rel="nofollow noreferrer">https://github.com/zetof/LPD8/blob/master/lpd8/lpd8.py#L16</a> to:</p>
<pre class="lang-py prettyprint-override"><code> NOTE_ON = 144+9
NOTE_OFF = 128+9
CTRL = 176
PGM_CHG = 192+9
</code></pre>
<h3>**Port names</h3>
<p><code>names = mido.get_ioport_names()</code> gives:</p>
<pre class="lang-py prettyprint-override"><code>['Midi Through:Midi Through Port-0 14:0',
'LPD8 mk2:LPD8 mk2 MIDI 1 20:0',
'Midi Through:Midi Through Port-0 14:0',
'LPD8 mk2:LPD8 mk2 MIDI 1 20:0']
</code></pre>
<p>and <code>set(names)</code> gives:</p>
<pre class="lang-py prettyprint-override"><code>{'LPD8 mk2:LPD8 mk2 MIDI 1 20:0', 'Midi Through:Midi Through Port-0 14:0'}
</code></pre>
<p>The result is also the same for <code>mido.get_input_names()</code> and <code>mido.get_output_names()</code>.</p>
<h3>***Ports in mido</h3>
<p>From what I understand mido has three port classes:</p>
<pre class="lang-py prettyprint-override"><code>open_input() # only receive
open_output() # only send
open_ioport() # both (and what I used above)
</code></pre>
<p>If I change from ioport to using an input/output pair, the result is the same:</p>
<pre class="lang-py prettyprint-override"><code>port_in = mido.open_input(name)
port_out = mido.open_output(name)
# ensure that there are no messages in the queue
get_messages(port_in)
#pause
input("Press a few pads, then press Enter to continue...")
msgs = get_messages(port_in)
print(f"Recorded these messages:\n{msgs}\n")
print("Echoing these messages:")
for m in msgs:
print(m)
port_out.send(m)
sleep(1)
</code></pre>
|
<python><midi>
|
2023-02-12 11:53:50
| 0
| 1,906
|
NichtJens
|
75,426,621
| 10,613,037
|
How to write to the cell below level 0 of a multiindex
|
<p>I've got a df with a MultiIndex like so</p>
<pre class="lang-py prettyprint-override"><code>nums = np.arange(5)
key = ['kfc'] * 5
mi = pd.MultiIndex.from_arrays([key,nums])
df = pd.DataFrame({'rent': np.arange(10,60,10)})
df.set_index(mi)
</code></pre>
<pre><code>
rent
kfc 0 10
1 20
2 30
3 40
4 50
</code></pre>
<p>How can I write to the cell below <code>kfc</code>, I want to add meta info e.g. The address or the monthly rent</p>
<pre><code> rent
kfc 0 10
NYC 1 20
2 30
3 40
4 50
</code></pre>
|
<python><pandas>
|
2023-02-12 11:22:35
| 1
| 320
|
meg hidey
|
75,426,519
| 2,487,835
|
Map array values to unique sequential values sorted by occurences
|
<p>In my ML app, I use an output 1D np.array Y to color code a scatterplot dots.
I need to bring a variety of widely distributed integer values to sequential integers to utilize better distribution of colors in the colormap.</p>
<p>What I did is this:</p>
<pre><code>def normalize(Y):
U = np.unique(Y)
for i in range(U.size):
Y[Y==U[i]] = i
return Y
</code></pre>
<p>Which replaces them with indices in array's unique'd form.</p>
<p>I wonder if there is a way to do this more efficiently with numpy.
There's got to be a powerful one-liner somewhere out there</p>
<p>*Another thing I could not figure out how to do is to have the sequential values sorted accordingly to the number of corresponding occurences in Y, so that distribution of clustering was obvious on the plot.</p>
|
<python><numpy><numpy-ndarray>
|
2023-02-12 11:05:33
| 4
| 3,020
|
Lex Podgorny
|
75,426,461
| 12,574,341
|
Check if generator is empty without consuming it
|
<p>I filter a location for entities other than the player:</p>
<pre class="lang-py prettyprint-override"><code>entities = filter(lambda entity: entity != player, location.entities)
</code></pre>
<p>If there are any entities, print each one of them:</p>
<pre class="lang-py prettyprint-override"><code>if any(entities):
print('Entites nearby:')
for entity in entities:
print(entity)
</code></pre>
<p>However, using <code>any()</code> consumes the elements in the generator, as the subsequent for loop iterates zero times.</p>
<p>Is there a way to check if a generator is empty or check its length without resorting to lists or having to manually reset the iterator?</p>
|
<python><iterator>
|
2023-02-12 10:52:26
| 0
| 1,459
|
Michael Moreno
|
75,426,142
| 1,497,139
|
pure text non latex results for python bibtex parser
|
<p>see also <a href="https://github.com/sciunto-org/python-bibtexparser/issues/352" rel="nofollow noreferrer">https://github.com/sciunto-org/python-bibtexparser/issues/352</a></p>
<p>currently i am doing:</p>
<pre class="lang-py prettyprint-override"><code>doi=DOI(self.doi)
meta_bibtex=doi.fetchBibtexMeta()
bd=bibtexparser.loads(meta_bibtex)
btex=bd.entries[0]
</code></pre>
<p>using the DOI helper class below. I was hoping to simplify my life since the citeproc result looks quite complicated and i'd love to have some cleanup in e.g. authors and titles.</p>
<p>The bibtexparser does a great job but i don'want a latex result but just clear text.</p>
<p>E.g for 10.1145/800001.811672 i get</p>
<pre><code>The structure of the {\\textquotedblleft}the{\\textquotedblright}-multiprogramming system
</code></pre>
<p>While the plain text</p>
<pre><code>The structure of the "the"-multiprogramming system
</code></pre>
<p>would be better for my use case. Is this already possible with the current bibtexparser or a feature request?</p>
<p><strong>doi.py</strong></p>
<pre class="lang-py prettyprint-override"><code>'''
Created on 2023-02-12
@author: wf
'''
import urllib.request
import json
from dataclasses import dataclass
@dataclass
class DOI:
"""
get DOI data
"""
doi:str
def fetchMeta(self,headers:dict)->dict:
"""
get the metadata for my doi
Args:
headers(dict): the headers to use
Returns:
dict: the metadata according to the given headers
"""
url=f"https://doi.org/{self.doi}"
req=urllib.request.Request(url,headers=headers)
response=urllib.request.urlopen(req)
encoding = response.headers.get_content_charset('utf-8')
content = response.read()
text = content.decode(encoding)
return text
def fetchBibtexMeta(self)->dict:
"""
get the meta data for my doi by getting the bibtext JSON
result for the doi
Returns:
dict: metadata
"""
headers= {
'Accept': 'application/x-bibtex; charset=utf-8'
}
text=self.fetchMeta(headers)
return text
def fetchCiteprocMeta(self)->dict:
"""
get the meta data for my doi by getting the Citeproc JSON
result for the doi
see https://citeproc-js.readthedocs.io/en/latest/csl-json/markup.html
Returns:
dict: metadata
"""
headers= {
'Accept': 'application/vnd.citationstyles.csl+json; charset=utf-8'
}
text=self.fetchMeta(headers)
json_data=json.loads(text)
return json_data
</code></pre>
|
<python><bibtex>
|
2023-02-12 09:55:05
| 1
| 15,707
|
Wolfgang Fahl
|
75,426,108
| 1,413,799
|
Is there a way to do conditional parsing with python's struct.unpack?
|
<p>I'm trying to parse out a byte-oriented structure (GMTI dwell data in AEDP-7 format) where the presence of some of the fields is indicated by a 64-bit flag field at the start of the structure.</p>
<p>Because it works so well for earlier parts, I'd like to use Python's <code>struct</code> module to do the parsing. However I'd need to find a way to do conditional parsing based on the bits that are set in the flag field.</p>
<p>So far I have code that looks like this:</p>
<pre><code>DWELL_SEGMENT_INITIAL_FORMAT = ">QHHBHIIII"
DWELL_SEGMENT_INITIAL_SIZE = struct.calcsize(DWELL_SEGMENT_INITIAL_FORMAT)
def unpackDwellSegmentData(segmentData):
dwellSegment = struct.unpack(DWELL_SEGMENT_INITIAL_FORMAT, segmentData[0:DWELL_SEGMENT_INITIAL_SIZE])
print(dwellSegment)
</code></pre>
<p>The next field (D10 - the tenth field in the dwell segment) is conditional, based on the value of byte 6 bit 7 in the bit mask. I could just switch to open coding the parsing logic based on the values in the bit mask, but would prefer not do that if there is a declarative way do handle this.</p>
<p>Is there a way to parse out this value (and the 39 following it) with <code>struct.unpack</code> declarative approach, and if so, what does it look like?</p>
|
<python><parsing>
|
2023-02-12 09:46:48
| 0
| 702
|
BradHards
|
75,426,081
| 10,613,037
|
How to attach a groupby aggregate to the original dataframe where the aggregate is placed in a new column at the bottom of each group
|
<p>I've got a dataframe <code>df = pd.DataFrame({'A':[1,1,2,2],'values':np.arange(10,30,5)})</code></p>
<p>How can I group by <code>A</code> to get the sum of <code>values</code>, where the sum is placed in a new column <code>sum_values_A</code>, but only once at the bottom of each group. e.g.</p>
<pre><code> A values sum_values_A
0 1 10 NaN
1 1 15 25
2 2 20 NaN
3 2 25 45
</code></pre>
<p>I tried</p>
<pre><code>df['sum_values_A'] = df.groupby('A')['values'].transform('sum')
df['sum_values_A'] = df.groupby('A')['sum_values_A'].unique()
</code></pre>
<p>But couldn't find a way to get the unique sums to be sorted at the bottom of each group</p>
|
<python><pandas><dataframe>
|
2023-02-12 09:41:06
| 2
| 320
|
meg hidey
|
75,425,973
| 16,220,410
|
extract data from each pdf in a folder
|
<p>pic below is the code and output of my pdf, what would like is to extract the data in each pdf file in a folder and paste into an excel file, below is also my initial python code</p>
<p><a href="https://i.sstatic.net/PeMel.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PeMel.png" alt="enter image description here" /></a></p>
<p>update with code from Jorj but error on [words = page.get_text("words", sort=True):]</p>
<pre><code> import os
import fitz
import pandas as pd
# Specify the folder containing the PDF files
folder = r'C:\Users\WORK\Downloads\Invoice Verification'
# Create an empty list to store the data
data = []
# Iterate through each PDF file in the folder
with fitz.open(filename) as doc:
filename = doc.name
for page in doc:
words = page.get_text("words", sort=True):
for i, word in enumerate(words):
if word[4] == "Date:" # item 4 is the actual word string
date = words[i + 1][4] # the date as a string
elif word[4] == "Total:"
total = float(words[i + 1][4])
elif word[4] == "Invoice": # here we need to skip over "No."
invoice_no = words[i + 2][4]
# Add the data to the list
data.append([filename, invoice_no, date, total])
# Convert the data to a DataFrame
df = pd.DataFrame(data, columns=["File Name", "Invoice No.", "Date", "Total"])
# Write the DataFrame to an Excel spreadsheet
df.to_excel("invoice_data.xlsx", index=False)
</code></pre>
<p>link to file - <a href="https://ufile.io/r2qbw2yz" rel="nofollow noreferrer">https://ufile.io/r2qbw2yz</a></p>
<p>desired output
<a href="https://i.sstatic.net/Eq9Pt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Eq9Pt.png" alt="enter image description here" /></a></p>
|
<python><pdf>
|
2023-02-12 09:20:54
| 2
| 1,277
|
k1dr0ck
|
75,425,967
| 4,935,567
|
Keep installed packages between patch updates
|
<p>Using <em>pyenv</em> is there a way to keep packages installed for a version when upgrading to a different patch release? E.g. suppose I installed <em>IPython</em> with <code>3.11.1</code> and now am installing <code>3.11.2</code>, but want to keep the IPython. My own guess is it's impossible as there are different directories for different versions at a higher level not just minor releases, but maybe I'm missing something.</p>
|
<python><python-3.x><pyenv><pyenv-virtualenv>
|
2023-02-12 09:19:45
| 0
| 2,618
|
Masked Man
|
75,425,931
| 2,532,496
|
Python Link TKinter distribution at runtime
|
<p>How can I link to a non-default Tkinter (Tk/Tcl) distribution at runtime? The following will link automatically to the default Tkinter. I am on macOS and have custom Tk.Frameworks and Tcl.Framework. I want to be able to link to them when I run my script.</p>
<pre><code>import tkinter as tk
root= tk.Tk()
canvas1 = tk.Canvas(root, width = 300, height = 300)
canvas1.pack()
def hello ():
label1 = tk.Label(root, text= 'Hello World!', fg='blue', font=('helvetica', 12, 'bold'))
canvas1.create_window(150, 200, window=label1)
button1 = tk.Button(text='Click Me', command=hello, bg='brown',fg='white')
canvas1.create_window(150, 150, window=button1)
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-02-12 09:14:06
| 0
| 445
|
Kelly o'Brian
|
75,425,731
| 1,982,032
|
Why the data type in last columns is str instead of float?
|
<p>All the data fomr column B to column K are numbers stored as text in excel file.<br />
<a href="https://i.sstatic.net/JPq3h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JPq3h.png" alt="enter image description here" /></a></p>
<p>I have uploaded the excel file in dropbox as a sample to test with.<br />
<a href="https://www.dropbox.com/s/fwltvuaq36mfy0p/tsm.xlsx?dl=0" rel="nofollow noreferrer">sample data text</a></p>
<p>Download it and save in <code>/tmp/tsm.xlsx</code>.</p>
<p><a href="https://www.dropbox.com/s/fwltvuaq36mfy0p/tsm.xlsx?dl=0" rel="nofollow noreferrer">tsm.xlsx for testing</a></p>
<p>I find that data type in the last cloumn K is str,column from B till J are all in numbers type after reading it into a dataframe:</p>
<pre><code>import pandas as pd
sexcel = '/tmp/tsm.xlsx'
df = pd.read_excel(sexcel,sheet_name='ratios_annual')
row_num = len(df)
for id in range(row_num):
print('the data type in last column--K is',type(df.iloc[id,-1]))
print('the data type in column--J is',type(df.iloc[id,-2]))
the data type in last column--K is <class 'str'>
the data type in column--J is <class 'numpy.float64'>
the data type in last column--K is <class 'str'>
the data type in column--J is <class 'numpy.float64'>
</code></pre>
<p>It's obvious that from column B till column K are all <code>number stored as text</code> when opening it in excel.Why the types differ when i read it into a dataframe?<br />
Please download the sample data and have a check.</p>
|
<python><excel><pandas>
|
2023-02-12 08:27:25
| 5
| 355
|
showkey
|
75,425,729
| 18,143,359
|
Issue with QR Code reading with OpenCV or Pyzbar
|
<p>I'm trying to decode a QR Code using Python, or any other language to be honest. I am also familiar with Javascript or PHP, but Python seemed to be the most appropriate one for this task.</p>
<p>This is part of a bigger piece of code that I am writing for a little challenge. I need to extract a password from the QR Code. I've tried using a QR Code reader on my phone and I can get the password so I can confirm that there is no issue with the QR Code itself.</p>
<p>Here is the QRCode:
<a href="https://i.sstatic.net/cmzvu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cmzvu.png" alt="QRCode" /></a></p>
<p>And the string to retrieve is "The key is /qrcod_OMevpf".</p>
<p>So far I've tried using two different python libraries. Open CV and Pyzbar, with the following codes:</p>
<p><strong>OpenCV</strong></p>
<pre><code> image = cv2.imread(imgAbsolutePath)
qrCodeDetector = cv2.QRCodeDetector()
decodedText, points, _ = qrCodeDetector.detectAndDecode(image)
if points is not None:
# QR Code detected handling code
print("QR code detected")
print(decodedText)
else:
print("QR code not detected")
</code></pre>
<p>Which prints "QR code detected" and then an empty string.</p>
<p><strong>Pyzbar</strong></p>
<pre><code>qr = decode(Image.open('result.png'), symbols=[ZBarSymbol.QRCODE])
print(qr)
</code></pre>
<p>Which prints "[]"</p>
<p>Do you know why these don't work or can you suggest any other libraries that works ?
Thanks</p>
|
<python><opencv><qr-code><zbar>
|
2023-02-12 08:26:51
| 2
| 639
|
Emilien
|
75,425,390
| 13,349,539
|
Positional operator not working in MongoDB with array elements FastAPI
|
<p>I have a document that looks like this:</p>
<pre><code>{
"_id": "cc3a8d7f-5962-47e9-a3eb-09b0a57c9fdb",
"isDeleted": false,
"user": {
"timestamp": "2023-02-12",
"name": "john",
"surname": "doe",
"email": "a.s@ug.bilkent.edu.tr",
"phone": "+012345678912",
"age": 25,
"gender": "female",
"nationality": "smth",
"universityMajor": "ENGINEERING",
"preferences": null,
"highPrivacy": false,
},
"postings": [
{
"id": "f61b103d-8118-4054-8b24-b26e2f4febc4",
"isDeleted": false,
"timestamp": "2023-02-12",
"houseType": "apartment",
"totalNumOfRoommates": 5,
"location": {
"neighborhood": "Oran",
"district": "Çankaya",
"city": "Adana"
},
"startDate": "2022-11-10",
"endDate": "2022-11-15",
"postingType": "House Sharer",
"title": "House sharer post 1",
"description": "This is house sharer post 1",
"price": 2500,
"houseSize": "2 + 0"
},
{
"id": "b7d34113-1b13-4265-ba9b-766accecd267",
"isDeleted": false,
"timestamp": "2023-02-12",
"houseType": "apartment",
"totalNumOfRoommates": 5,
"location": {
"neighborhood": "Dikmen",
"district": "Çankaya",
"city": "Adana"
},
"startDate": "2022-09-13",
"endDate": "2023-12-24",
"postingType": "House Seeker",
"startPrice": 2002,
"endPrice": 2500
}
],
}
</code></pre>
<p>Each posting object has an ID. I am trying to "delete" (setting the property isDeleted to True, rather than actual deletion) the post whose ID is specified in the code below:</p>
<pre><code>@router.delete('/{id}', response_description='Deletes a single posting')
async def deletePost(id: str):
update_result = await dbConnection.update_one({"postings.id": id, "postings.isDeleted" : False},
{"$set" : {"postings.$.isDeleted" : True} })
if update_result.modified_count == 1:
return Response(status_code=status.HTTP_204_NO_CONTENT)
else:
raise HTTPException(status_code=404, detail=f"Post {id} not found or has already been deleted")
</code></pre>
<p>The issue is that the first document (the one with ID <em>f61b103d-8118-4054-8b24-b26e2f4febc4</em>) is being "deleted" even when I supply the ID <em>b7d34113-1b13-4265-ba9b-766accecd267</em> to the function. If I hit the endpoint again with the same ID, it "deletes" the array elements in order regardless of which ID I supply. Even though I am using the positional operator to set the specific element's property <em>isDeleted</em> to <em>True</em>.</p>
<p>What exactly could be the problem here?</p>
<p>Here is a link with the earlier setup in Mongo Playground: <a href="https://mongoplayground.net/p/03HSkwDUPUE" rel="nofollow noreferrer">https://mongoplayground.net/p/03HSkwDUPUE</a></p>
<p>P.S although @ray's answer does work in MongoPlayground, I had to change a couple of things in the query with FastAPI, for anyone interested, the working query is below:</p>
<pre><code>update_result = await dbConnection.update_one(
{"postings.id": id},
{
"$set": {
"postings.$[p].isDeleted": True
}
},
upsert=True,
array_filters=[
{
"p.id": id,
"p.isDeleted": False
}]
)
</code></pre>
|
<python><mongodb><nosql><fastapi>
|
2023-02-12 06:59:11
| 1
| 349
|
Ahmet-Salman
|
75,425,311
| 2,403,819
|
tox is not able to recognize python versions installed by pyvenv
|
<p>I am trying to learn how to use tox for python unit testing on multiple python installations. I have used <code>pyenv</code> to install multiple python installations to the <code>/home/username/.pyenv/versions/version_number/bin/python</code>, where <code>version_number</code> could be 3.9.16, or 3.10.9. I have added the following lines to my <code>pyproject.toml</code> file. I have a simple test directory structure that looks like</p>
<pre><code>hello
|_ pyproject.toml
|_ hello
| |_ __init__.py
| |_ main.py
|_tests
| |_ __init__.py
|_ test_main.py
</code></pre>
<p>Among other lines in my <code>pyproject.toml</code> file I have the following <code>tox</code> config lines</p>
<pre><code>[tool.pytest.ini_options]
testpaths = ["tests"]
console_output_style = "progress"
[tool.tox]
legacy_tox_ini = """
[tox]
env_list = py39, py310, mypy
[testenv]
deps =
pytest
[testenv:py39]
basepython = /home/jonwebb/.pyenv/versions/3.9.16/bin/python
[testenv:py310]
basepython = /home/jonwebb/.pyenv/versions/3.10.9/bin/python
[testenv:mypy]
deps = mypy
commands = mypy hello
"""
</code></pre>
<p>When I try to run tox from the uppermost <code>hello</code> directory I get an error stating;</p>
<pre><code>py39: failed with env name py39 conflicting with base python /home/username/.pyenv/version/3.9.16/bin/python
py310: failed with env name py310 conflicting with base python /home/username/.pyenv/versions/3.10.9/bin/python
</code></pre>
<p>It appears that <code>tox</code> is having problems with my python installations created with <code>pyenv</code>. How can I fix this installation so it recognizes my python installations installed with <code>pyenv</code>? It is probably worth noting that this code implementation is using a local .venv folder created with <code>poetry</code>.</p>
|
<python><pyenv><tox>
|
2023-02-12 06:39:13
| 1
| 1,829
|
Jon
|
75,425,177
| 1,344,144
|
Is it possible to check the exact type of parameterized generics at runtime in Python ^3.10?
|
<p>I have some custom type definitions using parameterized generics. Now at runtime, I want to find out if a variable does have this custom type. Here's a minimal example:</p>
<pre><code>>>> from collections.abc import Callable
>>> MyType = Callable[[int], int]
>>> my_fun: MyType = lambda x: x
>>> isinstance(my_fun, MyType)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: isinstance() argument 2 cannot be a parameterized generic
</code></pre>
<p>Is there any way to achieve this currently?</p>
|
<python><generics><python-typing>
|
2023-02-12 05:57:21
| 1
| 1,101
|
idleherb
|
75,425,140
| 16,853,253
|
Flask app showing error showing attribute error when accessing foreignkey value
|
<p>AttributeError: 'int' object has no attribute 'username' . I don't know why this happening.
In this <code>return jsonify([{'products':c.products, "user": c.user.username} for c in cart])</code> <code>c.user.username</code> throws Attribute error.</p>
<p>Code is below</p>
<p>#Views</p>
<pre><code>@app.route('/cart-all')
def show_cart():
user = current_user
cart = Cart.query.filter_by(user=user.id).all()
return jsonify([{'products':c.products, "user": c.user.username} for c in cart])
</code></pre>
<p>#Models</p>
<pre><code>class User(db.Model, UserMixin):
id = db.Column(db.Integer(), primary_key=True)
username = db.Column(db.String(length=20), unique=True, nullable=False)
email = db.Column(db.String(length=50), unique=True, nullable=False)
password = db.Column(db.String(length=60), nullable=False)
is_admin = db.Column(db.Boolean, default=False)
cart = db.relationship(
"Cart", cascade="all, delete, delete-orphan", lazy=True
)
</code></pre>
<pre><code>class Cart(db.Model):
id = db.Column(db.Integer(), primary_key=True)
user = db.Column(db.Integer(), db.ForeignKey("user.id"), nullable=False)
products = db.Column(db.Integer(), db.ForeignKey("product.id"), nullable=False)
quantity = db.Column(db.Integer())
def __repr__(self):
return self.user.username
</code></pre>
|
<python><sqlalchemy>
|
2023-02-12 05:47:39
| 1
| 387
|
Sins97
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.