QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,763,737 | 12,470,058 | How to get the list of errors from pylint? | <p>I have a Python script called <code>my_code.py</code>. I am using:</p>
<pre><code>import pylint
pylint.run_pylint(argv=["my_code.py"])
</code></pre>
<p>How can I get all the errors and store them in a <strong>list</strong> in the above snippet?</p>
| <python><python-3.x><pylint> | 2023-07-25 14:28:33 | 3 | 368 | Bsh |
76,763,575 | 4,249,338 | Feather read failure from pandas with pd.cut data | <p>The following read-write sequence through feather fails with the error:</p>
<p><code>ArrowInvalid: Ran out of field metadata, likely malformed</code></p>
<pre class="lang-py prettyprint-override"><code>dg = pd.DataFrame({'A': pd.cut((1, 2, 3), bins=[0, 1, 2])})
file_name = 'myfile.feather'
dg.to_feather(file_name)
dg=pd.read_feather(file_name)
</code></pre>
<p>(it's surprising to me that the <code>to_feather</code> step doesn't complain).</p>
<p>I checked that the following works fine, which suggests it's not just a categorical variable limitation; it must be something related to the type returned by <code>pd.cut</code>.</p>
<pre class="lang-py prettyprint-override"><code>dg = pd.DataFrame({"A": list("123321")}, dtype="category")
file_name = 'myfile.feather'
dg.to_feather(file_name)
dg=pd.read_feather(file_name)
</code></pre>
<p>I don't want to lose the categorical metadata resulting from my cut when I persist the DataFrame through feather. Any recommendations?</p>
<p>I am using pandas 1.5.3, pyarrow 11.0.0 and python 3.8.13 on linux.</p>
| <python><pandas><feather> | 2023-07-25 14:08:27 | 2 | 656 | gg99 |
76,763,556 | 10,617,728 | Error: reduceRegions is not a function in Google Earth Engine | <p>I'm trying to get population density data from Google Earth Engine, but I get an error.</p>
<blockquote>
<p>Line 17: gpw_roi.reduceRegions is not a function</p>
</blockquote>
<pre><code>// Load GPW population count dataset
var gpw = ee.ImageCollection('CIESIN/GPWv4/population-count');
// Define the area of interest (AOI) using a polygon geometry
var aoi = ee.Geometry.Polygon([
[-74.1, 40.6],
[-74.1, 40.7],
[-73.9, 40.7],
[-73.9, 40.6],
]);
// Filter the GPW dataset to the AOI
var gpw_roi = gpw.filterBounds(aoi);
// Function to calculate population for each feature in the collection
var calculatePopulation = function(feature) {
var population = gpw_roi.reduceRegions({
reducer: ee.Reducer.sum(),
collection: feature.geometry(),
scale: 1000, // Scale in meters (adjust based on your area size and resolution)
});
return feature.set('population_count', population.first().get('population-count'));
};
// Map the function over the FeatureCollection and calculate population for each area
var populationCounts = gpw_roi.map(calculatePopulation);
// Print the population counts
print('Population Counts:', populationCounts);
</code></pre>
<p>What could be wrong with the script? How can I get the population count?</p>
| <python><google-earth-engine> | 2023-07-25 14:06:32 | 1 | 1,199 | Shadow Walker |
76,763,512 | 3,423,825 | How to completely delete a Django application which has dependencies on another app? | <p>My Django project has several applications and I would like to delete one of them. To do that, I removed the models of this app and everything was fine, but after I removed the application from INSTALLED_APPS and deleted its files, Django complained about dependencies with another application.</p>
<pre><code>django.db.migrations.exceptions.NodeNotFoundError: Migration anotherapp.0009_entitystatus dependencies reference nonexistent parent node ('apptodelete', '0002_alter_anotherapp_code')
</code></pre>
<p>I've tried to reverse migrations for the app I want to remove with the <code>zero</code> option and Django unapplied migrations, but then after I remove the files it keeps complaining because dependencies are still present in the migration files of the other application.</p>
<pre><code>$ python manage.py migrate apptodelete zero
</code></pre>
<p>What is the proper way to do this?</p>
| <python><django> | 2023-07-25 14:01:32 | 0 | 1,948 | Florent |
76,763,380 | 10,507,036 | How to implement resizable rectangle in PySide6 QGraphicsView | <p>I am trying unsuccessfully to implement a resizable rectangle QGraphicsItem in the PySide6 QGraphics framework by inheriting from <code>QGraphicsRectItem</code>. In the <code>ResizableRectItem</code> class, I create resize handles that are themselves <code>QGraphicsRectItems</code> and children of the <code>ResizableRectItem</code>, which allows them to be translated together with their parent item.</p>
<p>The problem I encounter and can't seem to solve is that I want these handles to appear only when the rectangle is selected with the mouse. Showing and hiding them works fine (by overriding the <code>itemChange</code> method), but the handles can then only be moved while the rectangle is NOT selected; while it is selected, they cannot be moved.</p>
<p>Moreover, I don't know how, once I get the handles to move, I can propagate the movement of a specific handle to the parent rectangle, so as to resize its shape accordingly. I initially thought about using signals/slots, however this mechanism is unavailable since <code>QGraphicsItem</code> does not inherit from <code>QObject</code>. I read that there is also a <code>QGraphicsObject</code> to provide signals/slots, but I suspect there might be a more elegant solution which I'm not seeing at the moment. I would be glad if someone could help me out here. I have googled this question extensively and not found a satisfactory answer. Thank you in advance!</p>
<pre><code>import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QPen, QColor, QBrush
from PySide6.QtWidgets import QApplication, QMainWindow, QGraphicsView, QGraphicsScene, QGraphicsRectItem, QGraphicsItem
class ResizableRectItem(QGraphicsRectItem):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.setPen(QPen(Qt.black))
self.setBrush(QBrush(Qt.gray))
self.setFlags(QGraphicsItem.ItemIsMovable | QGraphicsItem.ItemIsSelectable)
self.handleSize = 8
self.handles = {}
self.directions = ['topLeft', 'topRight', 'bottomLeft', 'bottomRight']
self.createHandles()
def createHandles(self):
rect = self.rect()
pen = QPen(QColor(0, 0, 0))
for direction in self.directions:
handle = QGraphicsRectItem(-self.handleSize/2, -self.handleSize/2, self.handleSize, self.handleSize, self)
handle.setPen(pen)
handle.setFlags(QGraphicsItem.ItemIsMovable)
handle.setVisible(False)
            # Use getattr to call the corner method by name, e.g. rect.topLeft()
handle.setPos(getattr(rect, direction)())
self.handles[direction] = handle
def itemChange(self, change, value):
# Intercept selection event to change visibility of handles
if change == QGraphicsItem.GraphicsItemChange.ItemSelectedChange:
for handle in self.handles.values():
handle.setVisible(bool(value))
# Pass to original method to handle all other changes
return super().itemChange(change, value)
class MainWindow(QMainWindow):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.setGeometry(500, 500, 500, 500)
self.view = QGraphicsView()
self.scene = QGraphicsScene()
self.view.setScene(self.scene)
rectItem = ResizableRectItem(100, 100, 200, 200)
self.scene.addItem(rectItem)
self.setCentralWidget(self.view)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
| <python><qt><qgraphicsview><pyside6> | 2023-07-25 13:46:15 | 0 | 2,066 | sunnytown |
76,763,351 | 7,357,166 | How to create nested dictionaries from a data frame in Python? | <p>I have a pandas data frame like this example and want to transform it into a list of python dictionaries with nested dictionaries. I need this format to process the data with a given API.</p>
<p>Basically, it should convert some columns to a dict per row, and the address columns to a nested dict with the address details.</p>
<p>Sample DF</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Last Name</th>
<th>Street</th>
<th>City</th>
<th>Postcode</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>Doe</td>
<td>Milky Way 1</td>
<td>Star City</td>
<td>31415</td>
</tr>
<tr>
<td>Jim</td>
<td>Beam</td>
<td>Moonshine Rd 8</td>
<td>Sin City</td>
<td>12345</td>
</tr>
<tr>
<td>Joe</td>
<td>Biden</td>
<td>1600 Pennsylvania Avenue NW</td>
<td>Washington, D.C.</td>
<td>20500</td>
</tr>
</tbody>
</table>
</div>
<p>Sample list of dicts</p>
<pre><code>lst = [{'Name': 'John',
'Last Name': 'Doe',
'Address': {'Street': 'Milky Way 1',
'City': 'Star City',
'Postcode': '31415'}},
{'Name': 'Jim',
'Last Name': 'Beam',
'Address': {'Street': 'Moonshine Rd 8',
'City': 'Sin City',
'Postcode': '12345'}},
{'Name': 'Joe',
'Last Name': 'Biden',
'Address': {'Street': '1600 Pennsylvania Avenue NW',
'City': 'Washington, D.C.',
'Postcode': '20500'}},
]
</code></pre>
<p>Reading through other posts, I found the opposite process (to convert dicts with nested dicts to a Pandas data frame) but that is not what I need. I also read that you should not iterate through the rows of a pandas data frame, but that is what I usually would try as I am new to pandas.</p>
<p>Is there a more elegant way to use pandas tools and group columns?</p>
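<p>To make the target shape concrete, here is the transformation I have in mind on the sample data above; it works, but I suspect there is a more idiomatic pandas way:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["John", "Jim"],
    "Last Name": ["Doe", "Beam"],
    "Street": ["Milky Way 1", "Moonshine Rd 8"],
    "City": ["Star City", "Sin City"],
    "Postcode": ["31415", "12345"],
})

address_cols = ["Street", "City", "Postcode"]
# build the flat part and the nested part separately, then zip them together
records = df.drop(columns=address_cols).to_dict("records")
addresses = df[address_cols].to_dict("records")
for record, address in zip(records, addresses):
    record["Address"] = address
```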
| <python><pandas> | 2023-07-25 13:42:58 | 2 | 422 | Empusas |
76,763,318 | 12,100,211 | How to select multiple related objects of a single object at once with Django ForeignKey | <p>So I have two classes: one is Book and the other is Category. Initially I added the foreign key to Book, pointing to Category.</p>
<pre><code>class Category(models.Model):
class Meta:
verbose_name_plural = "Categories"
category = models.CharField(max_length=20)
def __str__(self):
return self.category
class Book(models.Model):
book_title = models.CharField(max_length=20)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
def __str__(self):
return self.book_title
</code></pre>
<p>And it works fine: I was able to select a category for a book inside the admin panel.</p>
<blockquote>
<p>But now I want to select books from the category section, so that I can select all the books of a single category at once. For that, I changed the code in the following way:</p>
</blockquote>
<pre><code>class Book(models.Model):
book_title = models.CharField(max_length=20)
def __str__(self):
return self.book_title
class Category(models.Model):
class Meta:
verbose_name_plural = "Categories"
category = models.CharField(max_length=20)
book = models.ForeignKey(Book, on_delete=models.PROTECT)
def __str__(self):
return self.category
</code></pre>
<blockquote>
<p>The code above raises lots of errors.</p>
</blockquote>
<p>Suppose I have four books in the Book table and one category in the Category table. I want to relate all those books to that category by just opening a single category. What happens right now is that I have to open each book entry and relate it to its category. Instead, I want a drop-down menu inside a category's admin page where I can select all the books of that category. It is still one-to-many, since a single book relates to a single category but one category can have many books.
Is this possible?</p>
| <python><django> | 2023-07-25 13:39:38 | 1 | 303 | Kanha Tomar |
76,763,238 | 2,173,773 | How to use QtBot.waitSignal() in pytest-qt? | <p>I am using <code>pytest-qt</code> and its <a href="https://pytest-qt.readthedocs.io/en/latest/signals.html" rel="nofollow noreferrer"><code>waitSignal()</code></a> method to test a <code>QDialog</code> but it is not working. Here is a minimal example (my real application is more complex):</p>
<pre><code>import sys
from PyQt6.QtWidgets import (
QApplication, QDialog, QGridLayout, QPushButton
)
def get_dialog(app):
dialog = QDialog()
dialog.resize(300,200)
dialog.setWindowTitle("Test dialog")
layout = QGridLayout()
button1 = QPushButton("&Ok", dialog)
def ok_pressed():
dialog.done(0)
button1.clicked.connect(ok_pressed)
layout.addWidget(button1, 0, 0)
button2 = QPushButton("&Cancel", dialog)
def cancel_pressed():
dialog.done(1)
button2.clicked.connect(cancel_pressed)
layout.addWidget(button2, 0, 1)
dialog.setLayout(layout)
dialog.open()
return dialog, button1
def main():
app = QApplication(sys.argv)
get_dialog(app)
app.exec()
def test_dialog(qtbot, qapp):
app = qapp
dialog, button1 = get_dialog(app)
button1.click()
with qtbot.waitSignal(dialog.finished, timeout=2000):
pass
assert True
if __name__ == '__main__':
main()
</code></pre>
<p>Running this example with pytest gives:</p>
<pre><code>$ pytest t.py
============================================================================ test session starts ============================================================================
platform linux -- Python 3.10.4, pytest-7.3.2, pluggy-1.0.0
PyQt6 6.5.1 -- Qt runtime 6.5.0 -- Qt compiled 6.5.1
rootdir: /home/hakon/test/python/pytest-qt/qdialog-cancel/pre
plugins: mock-3.11.1, qt-4.2.0, anyio-3.7.1
collected 1 item
t.py F [100%]
================================================================================= FAILURES ==================================================================================
________________________________________________________________________________ test_dialog ________________________________________________________________________________
qtbot = <pytestqt.qtbot.QtBot object at 0x7fea78d47a30>, qapp = <PyQt6.QtWidgets.QApplication object at 0x7fea78d5c1f0>
def test_dialog(qtbot, qapp):
app = qapp
dialog, button1 = get_dialog(app)
button1.click()
> with qtbot.waitSignal(dialog.finished, timeout=2000):
E pytestqt.exceptions.TimeoutError: Signal finished(int) not emitted after 2000 ms
t.py:36: TimeoutError
--------------------------------------------------------------------------- Captured Qt messages ----------------------------------------------------------------------------
QtWarningMsg: Could not connect "org.freedesktop.IBus" to globalEngineChanged(QString)
========================================================================== short test summary info ==========================================================================
FAILED t.py::test_dialog - pytestqt.exceptions.TimeoutError: Signal finished(int) not emitted after 2000 ms
============================================================================= 1 failed in 2.02s =============================================================================
</code></pre>
<p>so the signal <code>dialog.finished</code> is not emitted or I am not using <code>waitSignal()</code> correctly. However, if I use <a href="https://pytest-qt.readthedocs.io/en/latest/wait_until.html" rel="nofollow noreferrer"><code>waitUntil()</code></a> instead of <code>waitSignal()</code> it works fine:</p>
<pre><code>def test_dialog(qtbot, qapp):
app = qapp
dialog, button1 = get_dialog(app)
dialog_done = False
def dialog_done_cb():
nonlocal dialog_done
dialog_done = True
dialog.finished.connect(dialog_done_cb)
button1.click()
qtbot.waitUntil(lambda: dialog_done)
assert True
</code></pre>
<p>Since this works, it also indicates that the <code>dialog.finished</code> signal is emitted and I am not using <code>waitSignal()</code> correctly. Any idea what might be the problem?</p>
| <python><pyqt><pytest><pytest-qt> | 2023-07-25 13:31:05 | 1 | 40,918 | Håkon Hægland |
76,763,186 | 10,975,692 | What is the best way to wait for a synchronous condition with python asyncio? | <p>I have a synchronous condition which I need to wait for.
Currently I do active waiting like this:</p>
<pre class="lang-py prettyprint-override"><code>while my_synchronous_condition_is_not_fulfilled():
    await asyncio.sleep(0.001)
</code></pre>
<p>This works, but I'm sure it is not very performant, since every call of <code>asyncio.sleep</code> adds overhead. And choosing a bigger value like <code>asyncio.sleep(1)</code> is also not a good option, because we do not want to wait longer than necessary.
What is the best way to do this?</p>
<p>Full code:</p>
<pre class="lang-py prettyprint-override"><code>async def calculate_in_subprocess(func, *args, **kwargs):
rx, tx = Pipe(duplex=False) # receiver & transmitter ; Pipe is one-way only
process = Process(target=_inner, args=(tx, func, *args), kwargs=kwargs)
process.start()
while not rx.poll(): # do not use process.is_alive() as condition here
await asyncio.sleep(0.001)
result = rx.recv()
process.join() # this blocks synchronously! make sure that process is terminated before you call join()
rx.close()
if isinstance(result, Exception):
raise result
return result
def _inner(tx, fun, *a, **kw_args) -> None:
""" This runs in another process. """
event_loop = None
if inspect.iscoroutinefunction(fun):
event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(event_loop)
try:
if event_loop is not None:
res = event_loop.run_until_complete(fun(*a, **kw_args))
else:
res = fun(*a, **kw_args)
except Exception as ex:
tx.send(ex)
else:
tx.send(res)
</code></pre>
<p>Example usage:</p>
<pre class="lang-py prettyprint-override"><code>import time
import asyncio
def f(value: int) -> int:
time.sleep(10) # a long taking synchronous blocking calculation
return 2 * value
asyncio.run(calculate_in_subprocess(func=f, value=42))
</code></pre>
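<p>One alternative I have been considering (not sure it is the best approach) is to hand the blocking <code>recv()</code> off to a thread with <code>run_in_executor</code> instead of polling, roughly:</p>

```python
import asyncio
from multiprocessing import Pipe

async def recv_from_pipe(rx):
    # off-load the blocking recv() to the default thread pool, so the
    # event loop stays free until data actually arrives; no polling needed
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, rx.recv)
```

<p>The coroutine would then simply <code>await recv_from_pipe(rx)</code> instead of looping on <code>rx.poll()</code>.</p>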
| <python><python-3.x><python-asyncio> | 2023-07-25 13:24:36 | 1 | 1,500 | DarkMath |
76,763,107 | 8,430,629 | Keras Tuner keyError "mae" for multiple output neural network | <p>I am trying to tune hyperparameters with KerasTuner. The neural network has two outputs, and I record the error for each output, e.g. <code>model = tf.keras.models.Model(inputs=inputs, outputs=[out1, out2])</code>. The tuning process is here:</p>
<pre><code> tuner = keras_tuner.BayesianOptimization(
hypermodel=wrapped_model,
objective="mae",
max_trials=max_trials, overwrite=True, hyperparameters=hyperparameters)
</code></pre>
<p><strong>The code works when I have a single output neural network</strong>, but with two outputs it seems to give <code>KeyError: 'mae'</code>.</p>
<p>The architecture is below</p>
<pre><code>optimizer = tf.keras.optimizers.Adam(learning_rate=hp_lr,
beta_1=0.9,beta_2=0.999,epsilon=1e-07,decay=0)
# compile model
model.compile(loss='mae',optimizer=optimizer,metrics=['mae','mse','mape'])
</code></pre>
<p>During the tuning process, it usually shows the "best mae so far" after each trial (it does this in the single-output version), but here it says None.</p>
| <python><tensorflow><keras> | 2023-07-25 13:15:43 | 1 | 312 | Governor |
76,763,044 | 4,437,911 | Conditional hiding/showing endpoint's swagger doc with Flask-RESTX | <p>I am trying to conditionally hide/show an endpoint's swagger doc with Flask-RESTX. I am aware of using <code>@api.doc(False)</code> to disable it (as shown in <a href="https://stackoverflow.com/q/26142997/4437911">Hide endpoints in UI of Flask restful Swagger</a>); however, passing a boolean as in <code>@api.route("/register", doc=True)</code> fails with the error:</p>
<pre><code>kwargs["route_doc"] = self._build_doc(cls, doc)
File "python3.9/site-packages/flask_restx/namespace.py", line 115, in _build_doc
unshortcut_params_description(doc)
File "/python3.9/site-packages/flask_restx/namespace.py", line 355, in unshortcut_params_description
if "params" in data:
TypeError: argument of type 'bool' is not iterable
</code></pre>
<p>I wonder if it's possible to hide/show an endpoint's swagger doc based on a condition variable.</p>
| <python><swagger><flask-restx> | 2023-07-25 13:08:34 | 1 | 699 | barha |
76,762,901 | 8,458,083 | fetchPypi doesn't fetch the right url to load a .whl file to build a package | <p>I've followed these <a href="https://nixos.wiki/wiki/Packaging/Python" rel="nofollow noreferrer">instructions</a> (paragraph: build from source):</p>
<pre><code>{
description = "virtual environment with python and streamlit";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
python=pkgs.python311;
xTuring = pkgs.python3Packages.buildPythonPackage rec {
pname = "xTuring";
version = "v0.1.6";
format = "wheel";
src = pkgs.python3Packages.fetchPypi rec {
inherit pname version format;
sha256 = "a9e78c46b807f3a14f567e151feaed993085dad601a5b5826db258afd8699914";
dist = python;
python = "py3";
#abi = "none";
#platform = "any";
};
#propagatedBuildInputs = with python; [ setproctitle ];
};
f = ps: with ps;[
ipython
matplotlib
pandas
];
pip_python_packages= python.withPackages(f);
myDevTools = [
pip_python_packages
pkgs.streamlit
xTuring
];
in rec {
devShells.default = pkgs.mkShell {
buildInputs = myDevTools;
};
});
}
</code></pre>
<p>error message:</p>
<blockquote>
<p>trying
<a href="https://files.pythonhosted.org/packages/py3/x/xTuring/xTuring-v0.1.6-py3-none-any.whl" rel="nofollow noreferrer">https://files.pythonhosted.org/packages/py3/x/xTuring/xTuring-v0.1.6-py3-none-any.whl</a>
blabla
fails</p>
</blockquote>
<p>This failure is understandable, since <a href="https://pypi.org/project/xturing/#files" rel="nofollow noreferrer">the package page</a> says that the URL should be <a href="https://files.pythonhosted.org/packages/7e/c5/f38749c4f5121fdd79e669f0e39d8c5f03949cbcc57ff4ffc8f4d56a0dcc/xturing-0.1.6-py3-none-any.whl" rel="nofollow noreferrer">https://files.pythonhosted.org/packages/7e/c5/f38749c4f5121fdd79e669f0e39d8c5f03949cbcc57ff4ffc8f4d56a0dcc/xturing-0.1.6-py3-none-any.whl</a><br />
instead of</p>
<p><a href="https://files.pythonhosted.org/packages/py3/x/xTuring/xTuring-v0.1.6-py3-none-any.whl" rel="nofollow noreferrer">https://files.pythonhosted.org/packages/py3/x/xTuring/xTuring-v0.1.6-py3-none-any.whl</a></p>
<p>I've followed the instructions, so why doesn't fetchPypi fetch the right URL?</p>
| <python><nix><nix-flake> | 2023-07-25 12:52:28 | 1 | 2,017 | Pierre-olivier Gendraud |
76,762,842 | 12,057,138 | Unable to save/overwrite an Excel file in Python | <p>I have the following code; it gets a CSV and should save it as an Excel file.
If the file already exists, it should overwrite it with the new CSV.</p>
<pre><code>import pandas as pd
from openpyxl import Workbook
from io import StringIO
def save_to_excel(csv, type):
csv_data_frame = pd.read_csv(StringIO(csv))
excel_file = pd.ExcelWriter(f"{type}_results.xlsx", engine='openpyxl')
excel_file.book = Workbook()
csv_data_frame.to_excel(excel_file, index=False, sheet_name="results")
excel_file.save()
excel_file.close()
</code></pre>
<p>However, I am getting the following warnings:</p>
<pre><code>FutureWarning: Setting the `book` attribute is not part of the public API, usage can give unexpected or corrupted results and will be removed in a future version
excel_file.book = Workbook()
save is not part of the public API, usage can give unexpected results and will be removed in a future version
excel_file.save()
</code></pre>
<p>I'm not sure what the public API for these actions is. Can anyone assist?</p>
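<p>For what it's worth, if I drop <code>ExcelWriter</code> entirely and let <code>to_excel</code> manage the file, it seems to create and overwrite the file without warnings, but I don't know whether that is the recommended way:</p>

```python
import pandas as pd
from io import StringIO

def save_to_excel(csv_text, kind):
    # DataFrame.to_excel creates the file and silently overwrites an
    # existing one, so no explicit ExcelWriter/Workbook handling is needed
    df = pd.read_csv(StringIO(csv_text))
    df.to_excel(f"{kind}_results.xlsx", index=False, sheet_name="results")
```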
| <python><pandas><openpyxl> | 2023-07-25 12:45:37 | 2 | 688 | PloniStacker |
76,762,740 | 3,225,420 | Matplotlib Mosaic Share Axes Labels and Ticks | <p>I am trying to share the x and y axes across the subplots in a figure using <a href="https://matplotlib.org/stable/gallery/subplots_axes_and_figures/mosaic.html" rel="nofollow noreferrer">mosaic</a> in Matplotlib.</p>
<p>Example from link:</p>
<pre><code>mosaic = """
AB
CD
"""
fig = plt.figure(layout="constrained")
ax_dict = fig.subplot_mosaic(mosaic)
</code></pre>
<p>But this yields the x and y axis ticks and labels on all four axes objects.</p>
<p>I want the y axis ticks to be on the same scale with only labels on the A and C axes going down the left hand side of the figure.</p>
<p>Similarly I want the x-axis ticks and labels going across the bottom of the figure on the C and D axes, with axes objects above to be on the same scale and only have labels across the bottom.</p>
<p>Using other axes creation methods this format has worked:</p>
<pre><code># share x and y
ax3 = plt.subplot(313, sharex=ax1, sharey=ax1)
</code></pre>
<p>But when trying this approach I get the following exception:</p>
<pre><code>AttributeError: 'Rectangle' object has no property 'sharex'
</code></pre>
<p>What should I try next?</p>
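<p>For the record, one direction I was about to try is linking the axes after creation with <code>Axes.sharex</code>/<code>sharey</code> and then hiding the inner tick labels with <code>label_outer()</code>; I'm not sure this is the intended way:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

mosaic = """
AB
CD
"""
fig = plt.figure(layout="constrained")
axs = fig.subplot_mosaic(mosaic)

# link every axes to "A" after creation, then keep tick labels
# only on the left column and the bottom row
for name in ("B", "C", "D"):
    axs[name].sharex(axs["A"])
    axs[name].sharey(axs["A"])
for ax in axs.values():
    ax.label_outer()
```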
| <python><matplotlib><subplot> | 2023-07-25 12:32:41 | 1 | 1,689 | Python_Learner |
76,762,678 | 14,498,998 | TypeError: MyUserManager.create_superuser() missing 1 required positional argument: 'username' | <p>I'm trying to create a superuser using the command <code>python manage.py createsuperuser</code>, but I can't. Here's the error: "TypeError: MyUserManager.create_superuser() missing 1 required positional argument: 'username'".</p>
<p>Here's my code:</p>
<pre><code>class MyUserManager(UserManager):
    def create_superuser(self, username: str, email: str | None, password: str | None, **extra_fields: Any) -> Any:
        self.username = extra_fields['phone']
        REQUIRED_FIELDS = ['username']
        return super().create_superuser(username, email, password, **extra_fields)
    def create_user(self, username: str, email: str | None = ..., password: str | None = ..., **extra_fields: Any) -> Any:
        username = extra_fields['phone']
        return super().create_user(username, email, password, **extra_fields)
</code></pre>
| <python><django><django-forms> | 2023-07-25 12:24:08 | 2 | 313 | Alin |
76,762,628 | 1,145,011 | requests.get method takes huge time for zip file in python | <p>I have a website with a huge number of PDFs, zip files, images, PPTs, and HTML links. I use the Python requests <code>get</code> method to check that these files (PDFs, zips, images, links, etc.) are not broken. But when the passed link is a very large zip file, the <code>get</code> method takes a long time to return the response. Is there an alternative way to check that the passed link (image, zip file, HTML page) is not broken?</p>
<pre><code>import requests

response = requests.get(pageURL)
if response.status_code == 404:
    print("Broken Link")
</code></pre>
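<p>One idea I had (not yet tested against my site) is to use a HEAD request so the response body is never downloaded, with a fallback for servers that reject HEAD; <code>link_ok</code> is just a name I made up for the sketch:</p>

```python
import requests

def link_ok(url, timeout=10):
    # a HEAD request returns only the status line and headers, so even a
    # huge zip file is never downloaded
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    if resp.status_code in (405, 501):  # server does not support HEAD
        # fall back to a streamed GET: headers arrive first, body is not read
        resp = requests.get(url, stream=True, timeout=timeout)
        resp.close()
    return resp.status_code != 404
```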
| <python><python-requests> | 2023-07-25 12:17:28 | 1 | 1,551 | user166013 |
76,762,353 | 5,931,672 | tf recall precision not working for class_id 0 | <p>So, I am quite puzzled. I have the following arrays:</p>
<pre><code>import tensorflow as tf
import numpy as np
y_true = np.array(
[1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1,
1, 1, 1, 1, 1])
y_pred = np.array(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1])
</code></pre>
<p>You can see that the prediction never assigns class 0. Therefore, the recall for class 0 should be 0. So I compute it like this:</p>
<pre><code>m = tf.keras.metrics.Recall(class_id=0)
m.update_state(y_true=y_true, y_pred=y_pred)
m.result().numpy()
</code></pre>
<p>And it returns <code>1.0</code>! As if it is still giving me the recall for class 1. So my question is, how can I get the right value? This question is also valid for Precision. However, precision also gives 1, which is not the case even for class 1.</p>
<p>For reference, the expected results are:</p>
<pre><code>from sklearn.metrics import precision_score, recall_score
recall_score(y_true=y_true, y_pred=y_pred, pos_label=0)
precision_score(y_true=y_true, y_pred=y_pred, pos_label=0)
</code></pre>
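<p>To make explicit which number I expect, here is the plain NumPy computation of per-class recall (the helper name is mine, just for illustration):</p>

```python
import numpy as np

def recall_for_class(y_true, y_pred, cls):
    # recall = TP / (TP + FN), i.e. the fraction of samples whose true
    # label is cls that were also predicted as cls
    mask = y_true == cls
    return float((y_pred[mask] == cls).sum() / mask.sum())
```

<p>On the arrays above this gives 0.0 for class 0 and 1.0 for class 1, matching scikit-learn.</p>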
| <python><tensorflow><precision-recall> | 2023-07-25 11:41:11 | 0 | 4,192 | J Agustin Barrachina |
76,762,185 | 10,962,766 | Matching specific Geonames IDs with Wikidata IDs using Pywikibot | <p>I have an extensive list of Geonames IDs for which I want to find the matching Wikidata IDs. I would like to use Pywikibot and, if possible, iterate over the list.</p>
<p>The SPARQL query for an individual Geonames ID would be:</p>
<pre><code>SELECT DISTINCT ?item ?itemLabel WHERE {
SERVICE wikibase:label { bd:serviceParam wikibase:language "de". }
{
SELECT DISTINCT ?item WHERE {
?item p:P1566 ?statement0.
?statement0 (ps:P1566) "2867714".
}
}
}
</code></pre>
<p>2867714 is the Geonames ID for Munich, and running the query via the following script returns the correct Wikidata ID:</p>
<pre><code>import pywikibot
from pywikibot import pagegenerators as pg
# read query file
with open('C:\\Users\\p70076654\\Downloads\\SPARQL_mapGeonamesID.rq', 'r') as query_file:
QUERY = query_file.read()
#print(QUERY)
# create generator based on query
# returns an iterator that produces a sequence of values when iterated over
# useful when creating large sequences of values
wikidata_site = pywikibot.Site("wikidata", "wikidata")
generator = pg.WikidataSPARQLPageGenerator(QUERY, site=wikidata_site)
print(generator)
# OUTPUT: <generator object WikidataSPARQLPageGenerator.<locals>.<genexpr> at 0x00000169FAF3FD10>
# iterate over generator
for item in generator:
print(item)
</code></pre>
<p>The correct output returned is: <code>wikidata:Q32664319</code></p>
<p>Ideally, I want to replace the specific ID with a variable so I can feed in IDs from my list successively. I checked the <a href="https://pypi.org/project/pywikibot/" rel="nofollow noreferrer">Pywikibot documentation</a> but could not find information on my specific use case. How can I replace the individual ID with a variable and iterate over my ID list?</p>
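<p>What I have in mind is templating the ID into the query string and building one query per list entry (untested with Pywikibot itself; <code>geonames_ids</code> is a stand-in for my actual list):</p>

```python
# Doubled braces {{ }} survive str.format; {geonames_id} is substituted.
QUERY_TEMPLATE = """
SELECT DISTINCT ?item ?itemLabel WHERE {{
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "de". }}
  {{
    SELECT DISTINCT ?item WHERE {{
      ?item p:P1566 ?statement0.
      ?statement0 (ps:P1566) "{geonames_id}".
    }}
  }}
}}
"""

def build_query(geonames_id):
    return QUERY_TEMPLATE.format(geonames_id=geonames_id)

geonames_ids = ["2867714", "2950159"]  # placeholder list
queries = [build_query(gid) for gid in geonames_ids]
```

<p>Each query could then be passed to <code>pg.WikidataSPARQLPageGenerator</code> in turn, as in the script above.</p>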
| <python><wikidata><pywikibot> | 2023-07-25 11:19:05 | 1 | 498 | OnceUponATime |
76,761,963 | 5,439,470 | load dvc files from and to cache using python api | <p>At the moment I use this code to load a certain revision of my data into a pandas DataFrame. Since my data is stored in an Azure storage account, this requires downloading the data from the remote location on every run.</p>
<pre><code>data_url = dvc.api.get_url(path="data/data.csv", rev="data-v1")
data = pd.read_csv(data_url)
</code></pre>
<p>Is there a way to use the dvc api to</p>
<ol>
<li>check if the file is already in the cache</li>
<li>If the file is not in the cache load it there</li>
<li>return the path to the cache file</li>
</ol>
<p>This way I would not depend on a connection to Azure for most of my test runs and would not create as much traffic. At <a href="https://dvc.org/doc/api-reference" rel="nofollow noreferrer">https://dvc.org/doc/api-reference</a> I did not find the necessary functions for this.</p>
| <python><caching><version-control><dvc> | 2023-07-25 10:48:49 | 1 | 1,303 | jan-seins |
76,761,923 | 3,701,393 | How to remove "Export" action menu? | <p>I would like to remove "Export" in action menu in Odoo 14 Community Edition.</p>
<p>I want to remove it for all views at once if possible; otherwise, one by one for each required model or view would be fine.</p>
<p><a href="https://i.sstatic.net/tUJkl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tUJkl.png" alt="Export action menu" /></a></p>
<p>I tried:</p>
<pre><code><xpath expr="//tree" position="attributes">
<attribute name="export_xlsx">false</attribute>
</xpath>
</code></pre>
<p>on an individual model, but it doesn't work.</p>
<p>I also tried overriding the Sidebar in JavaScript; that doesn't work either.</p>
| <javascript><python><xml><odoo><odoo-14> | 2023-07-25 10:44:07 | 3 | 6,768 | holydragon |
76,761,876 | 2,307,570 | Why is "await ..." not the same as "a = ..." followed by "await a"? | <p>The following code illustrates how to use coroutines with <code>asyncio</code>.</p>
<p>The output of <code>consecutive</code> and <code>parallel_fail</code> is:</p>
<pre><code>SLEEP for 2.5 seconds # 0
back after 2.5 seconds # 2.5
SLEEP for 2 seconds # 2.5
back after 2 seconds # 4.5
</code></pre>
<p>(printed after 0, 2.5, 2.5 and 4.5 seconds)</p>
<p>And the output of <code>parallel</code> is:</p>
<pre><code>SLEEP for 2.5 seconds # 0
SLEEP for 2 seconds # 0
back after 2 seconds # 2
back after 2.5 seconds # 2.5
</code></pre>
<p>The outputs of <code>consecutive</code> and <code>parallel</code> are expected.<br>
But why is <code>parallel_fail</code> like <code>consecutive</code>, and not like <code>parallel</code>?</p>
<p>One would expect that <code>await something</code> is equivalent to <code>a = something</code> followed by <code>await a</code>, right?</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def f(n):
    print(f'SLEEP for {n} seconds')
    await asyncio.sleep(n)
    print(f'back after {n} seconds')

async def consecutive():
    print('consecutive:')
    await f(2.5)
    await f(2)

async def parallel():
    print('parallel:')
    a = asyncio.create_task(f(2.5))
    b = asyncio.create_task(f(2))
    await a
    await b

async def parallel_fail():
    print('parallel fail:')
    await asyncio.create_task(f(2.5))
    await asyncio.create_task(f(2))

asyncio.run(consecutive())
print('----------------------')
asyncio.run(parallel())
print('----------------------')
asyncio.run(parallel_fail())
</code></pre>
| <python><python-asyncio> | 2023-07-25 10:39:22 | 1 | 1,209 | Watchduck |
76,761,812 | 293,003 | How to measure time spent inside Python trio coroutine? | <p>For testing purposes, I would like to measure the time that is spent on blocking execution of a coroutine (i.e, excluding the time for which it is suspended).</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import trio
import time

async def under_test():
    await trio.sleep(2)
    time.sleep(3)

async def measure():
    with measure_blocking_time() as ctx:  # or something like that
        await under_test()
    assert ctx.elapsed == 3

trio.run(measure)
</code></pre>
<p>How do I do that?</p>
<p>(There seems to be a somewhat <a href="https://stackoverflow.com/questions/73028924/how-to-measure-time-spent-in-blocking-code-while-using-asyncio-in-python?rq=4">hacky way to do this when using asyncio</a> - hopefully it can be done more nicely in Trio?)</p>
| <python><python-trio> | 2023-07-25 10:31:02 | 1 | 2,489 | Nikratio |
76,761,800 | 1,137,529 | Concurrency with event loop (async/await) in Python | <p>I'm a bit confused. Asyncio has a single-threaded event loop. Every coroutine runs in it. If I have a coroutine that doesn't have any await/yield, such a coroutine should be atomic. Why should I ever synchronize access to a global variable (as an example of a shared resource)?</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

global_var = 0

async def coroutine1():
    global global_var
    global_var += 1

async def coroutine2():
    global global_var
    global_var -= 1

async def main():
    await asyncio.gather(coroutine1(), coroutine2())

asyncio.run(main())
</code></pre>
<p>I don't see any problem.</p>
<p>Furthermore, if the coroutine does yield control, it yields it to the event loop, which resumes another coroutine (if available). So, in this case as well, no synchronization is required.</p>
<p>What I got wrong?</p>
<p><strong>EDIT</strong>:</p>
<pre class="lang-py prettyprint-override"><code>
import asyncio
import time

async def task1():
    await asyncio.sleep(5)
    print("Task 1 done")

async def task2():
    await asyncio.sleep(7)
    print("Task 2 done")

async def main():
    t_0 = time.time()
    await asyncio.gather(task1(), task2())
    print((int)(time.time() - t_0))
    print("All tasks completed")

asyncio.run(main())
</code></pre>
<p>Output:</p>
<p>Task 1 done</p>
<p>Task 2 done</p>
<p>7</p>
<p>All tasks completed</p>
<p>Hmm, it takes max(5,7)=7 seconds. It seems that yielding control not only happens without noticeable overhead, but the tasks also "seem" to progress concurrently. I still don't get it...</p>
<p><strong>EDIT</strong>:</p>
<p>On one hand they run on the event loop sequentially; on the other hand they "seem" to run concurrently. How is this effect achieved?</p>
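<p>For completeness, here is a small experiment I tried (assuming <code>await asyncio.sleep(0)</code> forces a suspension point): as soon as an <code>await</code> sits inside a read-modify-write, the tasks can interleave and an update can be lost:</p>

```python
import asyncio

counter = 0

async def unsafe_increment():
    global counter
    value = counter          # read
    await asyncio.sleep(0)   # suspension point: the loop may switch tasks here
    counter = value + 1      # write back a possibly stale value

async def main():
    await asyncio.gather(unsafe_increment(), unsafe_increment())

asyncio.run(main())
print(counter)  # 1, not 2: one update was lost
```

<p>So the atomicity only holds between awaits, which is apparently where synchronization would come back into play.</p>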
| <python><python-3.x><async-await><concurrency><python-asyncio> | 2023-07-25 10:29:46 | 1 | 5,823 | alexsmail |
76,761,795 | 12,466,687 | Unable to remove MultiIndex from dataframe after pivoting data in Python | <p>I am trying to <code>pivot</code> on a two-level categorical variable (<code>status_type</code>) in <code>python</code> <code>pandas</code>, but it results in a <code>multiindex</code> dataframe. I would like this to be a normal dataframe but couldn't figure out how. I would appreciate any help.</p>
<pre><code>import pandas as pd
import numpy as np

# data
test_df = pd.DataFrame({"id": np.arange(0, 8),
                        "Window": np.random.rand(8),
                        "status_type": ["snap","perf","snap","perf","snap","perf","snap","perf"],
                        "status_level": [1, 2, 20, 35, 10, 5, 42, 9],
                        })
test_df
</code></pre>
<p>Pivot:</p>
<pre><code>test_df.pivot(index=['id','Window'],columns='status_type',values='status_level')
</code></pre>
<p>This results in a MultiIndex DataFrame:
<a href="https://i.sstatic.net/bjuDY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjuDY.png" alt="result" /></a></p>
<p>I have tried below code to reset it into normal dataframe and remove <code>status_type</code> column but it didn't work.</p>
<pre><code>(test_df
 .pivot(index=['id','Window'], columns='status_type', values='status_level')
 .reset_index()
 .drop('status_type', axis = 1)
)
</code></pre>
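<p>For reference, here is a sketch of the result I am after; using <code>rename_axis</code> is just a guess on my part, since <code>status_type</code> seems to be the columns-axis <em>name</em> rather than an actual column (which would explain why <code>drop</code> can't remove it):</p>

```python
import pandas as pd
import numpy as np

test_df = pd.DataFrame({"id": np.arange(0, 8),
                        "Window": np.random.rand(8),
                        "status_type": ["snap", "perf"] * 4,
                        "status_level": [1, 2, 20, 35, 10, 5, 42, 9]})

flat = (test_df
        .pivot(index=['id', 'Window'], columns='status_type', values='status_level')
        .reset_index()
        .rename_axis(None, axis=1))  # clear the leftover columns-axis name

print(flat.columns.tolist())  # ['id', 'Window', 'perf', 'snap']
```

<p>After that, the frame looks like a plain dataframe with ordinary columns.</p>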
| <python><pandas><dataframe> | 2023-07-25 10:29:00 | 2 | 2,357 | ViSa |
76,761,686 | 11,937,776 | How to specify a local TTF file as a layout font in Plotly (Python) on PaaS? | <p>I have a <strong>TrueType</strong> font file that I want to use as the layout font in a chart created with <strong>Plotly</strong> (<em>Python</em>). However, I can't set the font manually in the system because the chart is generated on <strong>PaaS</strong>.</p>
<p>I have tried using <code>fig.update_layout(font_family=f"assets/{font_name}.ttf")</code>, but it doesn't work and just changes the font to the standard one (<em>Open Sans</em>). I have also attempted to install the font using the link (like in this <a href="https://stackoverflow.com/a/73318757/11937776">answer</a>), but that doesn't work either.</p>
<hr />
<p>Here's a code snippet for reference:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.express as px
df = pd.DataFrame({"date": date, "data": data})
fig = px.area(df, x='date', y="data")
fig.update_layout(font_family=f"assets/{font_name}.ttf") # This line works incorrectly
fig.write_image(f"images/{file_name}.png")
</code></pre>
<hr />
<p><strong>How can I specify a local font file to use in my chart, or is there another way to set the font using code?</strong></p>
| <python><fonts><plotly> | 2023-07-25 10:15:53 | 2 | 441 | m3r1v3 |
76,761,310 | 14,729,820 | How to skip adding a newline in a data frame? | <p>I have the code below to merge the columns of two dataframes into one dataframe: the first one contains the predicted text, and the other contains the ground-truth values.</p>
<pre><code>import pandas as pd
from google.colab import drive
drive.mount('/content/drive')

def load_jsonl(text_path):
    return pd.read_json(
        path_or_buf=text_path,
        lines=True
    )

# get the path/directory
working_dir = "/content/drive/MyDrive/Class_B/"
df = load_jsonl(f'{working_dir}labels.jsonl')

# Using readlines()
file1 = open(f'{working_dir}3rd_col.txt', 'r')
Lines = file1.readlines()

col_3rd = pd.DataFrame(Lines, columns=['Ground_truth'])
result = pd.concat([df, col_3rd], axis=1)

import json
reddit = result.to_dict(orient="records")
print(type(reddit), len(reddit))
with open(f"{working_dir}Class_B.jsonl", "w") as f:
    for line in reddit:
        f.write(json.dumps(line, ensure_ascii=False) + "\n")
</code></pre>
<p>The input files are in JSON Lines format:<br />
<code>labels.jsonl</code></p>
<pre><code>{"image_name": "1.JPG", "text": "Flattery is words of kindness for a"}
{"image_name": "2.JPG", "text": "potential favor."}
</code></pre>
<p>The third column file, <code>3rd_col.txt</code>:</p>
<pre><code>Flattery is words of kindness for a
potential favor.
</code></pre>
<p>I got the resulting file as below <code>Class_B.jsonl</code>:</p>
<pre><code>{"image_name": "1.JPG", "text": "Flattery is words of kindness for a", "Ground_truth": "Flattery is words of kindness for a \n"}
{"image_name": "2.JPG", "text": "potential favor.", "Ground_truth": "potential favor. \n"}
</code></pre>
<p>The expected results should look like this, without <code>\n</code>:</p>
<pre><code>{"image_name": "1.JPG", "text": "Flattery is words of kindness for a", "Ground_truth": "Flattery is words of kindness for a"}
{"image_name": "2.JPG", "text": "potential favor.", "Ground_truth": "potential favor."}
</code></pre>
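<p>A sketch of the kind of cleanup I am thinking of, on a made-up sample (whether stray trailing spaces should also be stripped is an assumption on my part):</p>

```python
import pandas as pd

# Made-up stand-in for file1.readlines(): readlines() keeps the trailing
# newline (and here a stray space) on every line, which then ends up in
# the "Ground_truth" values of the JSON lines.
lines = ["Flattery is words of kindness for a \n", "potential favor. \n"]

col_3rd = pd.DataFrame(lines, columns=["Ground_truth"])
col_3rd["Ground_truth"] = col_3rd["Ground_truth"].str.strip()

print(col_3rd["Ground_truth"].tolist())
```

<p>Doing this before the <code>pd.concat</code> step should keep the <code>\n</code> out of the output file.</p>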
| <python><pandas><dataframe><file> | 2023-07-25 09:31:16 | 1 | 366 | Mohammed |
76,761,266 | 3,502,079 | Why can't I use __file__ in Spyder? | <p>I want to get the directory that my current .py file is saved in so that I can set the current working directory to this directory programmatically. Like:</p>
<pre><code>import os
file_path = ???
os.chdir( file_path )
</code></pre>
<p>I thought you could use <code>__file__</code> to achieve this, but it says <code>name '__file__' is not defined</code>.</p>
| <python><spyder> | 2023-07-25 09:24:56 | 1 | 392 | AccidentalTaylorExpansion |
76,761,239 | 16,591,513 | Postgres refuses to find existing table, why? | <p>I have a Postgres database and the following tables inside of it, created using the Python Alembic migration tool. Everything looks great at first, but when trying to access any of the given tables, it throws: <code>Did not find any relation named</code>.</p>
<pre><code> List of relations
Schema | Name | Type | Owner
--------+-----------------------------+----------+----------
public | CreditTransactions | table | postgres
public | CustomerApplications | table | postgres
public | CustomerApplications_ID_seq | sequence | postgres
public | alembic_version | table | postgres
(4 rows)
</code></pre>
<pre><code>\d CustomerTransactions
</code></pre>
<p>Result:
<code>Did not find any relation named "CustomerTransactions".</code></p>
<pre><code>\d CustomerApplications
</code></pre>
<p>Result:
<code>Did not find any relation named "CustomerApplications".</code></p>
<p>This is what my tables look like:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, Integer, Boolean, Float
from sqlalchemy.orm import declarative_base

Model = declarative_base()

class CreditTransaction(Model):
    __tablename__ = "CreditTransactions"

    ID = Column(Integer, unique=True, primary_key=True, index=True, nullable=False)
    customer_id = Column(Integer, unique=True, primary_key=True)
    bad = Column(Boolean, default=False)

class CustomerApplication(Model):
    __tablename__ = "CustomerApplications"

    ID = Column(Integer, unique=True, primary_key=True, index=True, nullable=False)
    email = Column(Integer, unique=True, nullable=False)
    annual_income = Column(Float, nullable=False)
    total_children = Column(Integer, nullable=True)
    age = Column(Integer, nullable=False)
    has_realty = Column(Boolean, default=False)
    has_car = Column(Boolean, default=False)
    has_mobile_phone = Column(Boolean, default=False)
<p>Alembic Migrations seems to be okay, as I don't see any errors.</p>
<p>What in your opinion can cause this problem?</p>
| <python><sqlalchemy><alembic> | 2023-07-25 09:20:24 | 3 | 449 | CraZyCoDer |
76,761,232 | 1,268,100 | Dictionary parameter only initialised on first call to function | <p>I have a directory walker that I invoke in a loop for specified directories:</p>
<pre><code>fileDataBySize = walkDirectory( directoryToProcess, ... )
</code></pre>
<p>The walker then invokes itself recursively, collecting the file data, and returning it when done:</p>
<pre><code>def walkDirectory(path, fileDataBySize=dict(), ...):
    ...
    if ...:
        fileDataBySize = walkDirectory( subpath, fileDataBySize, ... )
    ...
    return fileDataBySize
</code></pre>
<p>The <code>fileDataBySize</code> dictionary clearly needs to be reinitialised at the start of each top level invocation of <code>walkDirectory()</code>, but whilst the code outlined above initialises it fine on the first call, it then manages to retain the contents for subsequent top level invocations.</p>
<p>Obviously this is easy to fix, but I'm wondering how this is being done, and what I've missed in my understanding of the language.</p>
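<p>Here is a stripped-down reproduction of what I suspect is the same effect, separate from the walker itself:</p>

```python
def collect(item, acc=dict()):
    # The default `dict()` is evaluated once, at function definition time,
    # so every call that omits `acc` shares the very same dictionary.
    acc[item] = True
    return acc

first = collect("a")
second = collect("b")         # silently reuses the dict from the first call
fresh = collect("c", dict())  # passing a new dict starts from scratch

print(second)           # {'a': True, 'b': True}
print(first is second)  # True
print(fresh)            # {'c': True}
```

<p>The accumulation across calls mirrors what I am seeing with <code>fileDataBySize</code>.</p>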
| <python> | 2023-07-25 09:19:06 | 0 | 1,717 | Ian |
76,761,210 | 2,953,995 | KL and JS Divergence analysis of PDFs of numbers | <p>I am playing around with divergence metrics, and I found that if I implement the calculations on my own or rely on the built-in libs, I get two different numbers. Now, I don't know whether it is me or the built-in function that is doing something wrong.</p>
<p>For simple analysis, I came up with the following toy example in Python:</p>
<pre><code>import numpy as np
import pandas as pd
#two arrays of "events"
p=np.array([1,2,2,3,5,4,2,3,2,3,4,2,1,1,1,2,2,3,5,4,2,3,2,3,4,2,1,1,1,2,2,3,5,4,2,3,1,1,2,3,4,2,1,1,1,2,2,2,2,1,2,2,3,5,4,2,2,1,2,2,3])
q=np.array([2,3,2,4,2,3,2,3,2,3,2,3,2,2,2,2,1,2,2,3,5,4,2,3,1,1,2,3,4,2,1,1,1,2,2,3,5,4,2,3,1,1,2,3,4,2,1,1,1,2,2,2,2,1,2,2,3,5,4])
</code></pre>
<p>I was told that since I will be comparing the two PDFs and calculating divergence metrics for them, the "sample space" for each of them should be the same. Hence, I take all possible values taken by both <code>p</code> and <code>q</code> and use that to calculate the PDFs.
This is my simple function to calculate the PDF for an array of data over a sample-space array:</p>
<pre><code>def create_prob_dist(data: np.array, sample_space: np.array):
    #number of all events
    sum_of_events = sample_space.size
    #get the counts of each event via pandas.crosstab()
    data_counts = pd.crosstab(index='counts', columns=data)
    #create probabilities for each event
    prob_dist = dict()
    for i in sample_space:
        if i in data_counts:
            prob_dist[i] = (data_counts[i]['counts']) / sum_of_events
        else:
            prob_dist[i] = 0
    return prob_dist
</code></pre>
<p>To calculate the PDFs with the function, I do these steps:</p>
<pre><code>#get all possible discrete events from p and q
px=np.array(list(set(p))) #we use set here to remove duplicates
qx=np.array(list(set(q))) #we use set here to remove duplicates
#create all possible discrete events of both p and q
mx=np.concatenate([px,qx]) #concatenate first
mx=np.array(list(set(mx))) #remove duplicates
mx.sort() #then sort
#create true PDFs of p and q using mx
p_pdf=create_prob_dist(p, mx)
q_pdf=create_prob_dist(q, mx)
#get the probability values only from the dictionary
p_pdf=np.array(list(p_pdf.values()))
q_pdf=np.array(list(q_pdf.values()))
</code></pre>
<p>Then, I can plot the PDFs and the results are in line with my expectations:</p>
<pre><code>plt.figure()
plt.plot(mx, q_pdf, 'g', label="Q")
plt.plot(mx, p_pdf, 'r', label="P")
plt.legend(loc="upper right")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/FEjFm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FEjFm.png" alt="PDF of p and q" /></a></p>
<p>So, once I have the PDFs and also view them, I have a sense of what I would expect from a divergence calculation. In fact, in this case, we should expect something much closer to 0 than 1.</p>
<h2>KL divergence</h2>
<p>I followed the equation of KL divergence as this:</p>
<p><a href="https://i.sstatic.net/VTGis.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VTGis.png" alt="KL divergence metric" /></a></p>
<p>Accordingly, I created this simple function:</p>
<pre><code>import math

def KL_divergence(P: np.array, Q: np.array):
    KL = 0
    for i, x in enumerate(P):
        if (Q[i] != 0) and (x != 0):  #avoid dividing with 0 and avoid having 0 in math.log()
            KL += x * math.log(x / Q[i])
    return KL
</code></pre>
<p>Note, in my case, P and Q are already prepared well (meaning they were calculated with the same sample space)</p>
<p>I compared my calculation with built-in functions:</p>
<pre><code>from scipy.special import kl_div,rel_entr
print("KL divergence of p and q : {}".format(KL_divergence(p_pdf,q_pdf)))
kl_divergence=kl_div(p_pdf,q_pdf)
print("KL divergence (lib) of p and q : {}".format(sum(kl_divergence)))
print("KL divergence (lib2) of p and q: {}".format(sum(rel_entr(p_pdf, q_pdf))))
</code></pre>
<p>and I get the following output:</p>
<pre><code>KL divergence of p and q : 0.4900499180923177
KL divergence (lib) of p and q : 0.09004991809231755
KL divergence (lib2) of p and q: 0.4900499180923177
</code></pre>
<p>The rel_entr() gives the same metric as mine, but the kl_div() gives something totally different.</p>
<p>What do you think? Which one is the right one and why?</p>
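<p>To pin down the difference, here is a minimal standard-library re-check of the two formulas as I understand them from scipy's docs (<code>rel_entr(x, y) = x*log(x/y)</code>, while <code>kl_div(x, y) = x*log(x/y) - x + y</code>); the example vectors are made up and deliberately do not sum to the same value:</p>

```python
import math

p_pdf = [0.4, 0.3, 0.3]    # sums to 1.0
q_pdf = [0.5, 0.25, 0.15]  # deliberately sums to 0.9

# rel_entr(x, y) = x * log(x / y)
rel_entr_sum = sum(x * math.log(x / y)
                   for x, y in zip(p_pdf, q_pdf) if x > 0 and y > 0)

# kl_div(x, y) = x * log(x / y) - x + y  (the extra terms keep it non-negative)
kl_div_sum = sum(x * math.log(x / y) - x + y
                 for x, y in zip(p_pdf, q_pdf) if x > 0 and y > 0)

# The two sums differ by exactly sum(q) - sum(p), so they only agree
# when both inputs are proper probability vectors summing to 1.
print(round(kl_div_sum - rel_entr_sum, 10))  # -0.1
print(round(sum(q_pdf) - sum(p_pdf), 10))    # -0.1
```

<p>So if the two functions disagree on my data, it may simply mean that my "PDFs" don't sum to 1, which would point back at the normalisation inside <code>create_prob_dist()</code>.</p>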
<h2>JS divergence</h2>
<p>Since JS divergence is a normalized/balanced version of KL divergence, I also calculated that and compared it to built-in functions.
I found two slightly different definitions. One is just doing a bi-directional KL divergence comparison and getting the average.
The other one, from Wikipedia, uses a mixture distribution, described as:
<a href="https://i.sstatic.net/2oXhN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2oXhN.png" alt="JS divergence" /></a></p>
<p>Accordingly, I have the following implementations for JS (using my KL functions)</p>
<pre><code>#the simple JS
def JS_divergence(P: np.array, Q: np.array):
    KL_P_Q = KL_divergence(P, Q)
    KL_Q_P = KL_divergence(Q, P)
    JS = (KL_P_Q + KL_Q_P) / 2
    return JS

# Wikipedia version
def mod_JS_divergence(P: np.array, Q: np.array):
    #create M
    M = (P + Q) / 2  #sum the two distributions then get average
    KL_P_Q = KL_divergence(P, M)
    KL_Q_P = KL_divergence(Q, M)
    JS = (KL_P_Q + KL_Q_P) / 2
    return JS
</code></pre>
<p>And this is the code to get the results, which include the use of the built-in function too.</p>
<pre><code>from scipy.spatial.distance import jensenshannon
print("JS divergence of p and q : {}".format(JS_divergence(p_pdf,q_pdf)))
print("mod JS divergence of p and q : {}".format(mod_JS_divergence(p_pdf,q_pdf)))
js_divergence=jensenshannon(p_pdf,q_pdf)
print("JS divergence (lib) of p and q : {}".format(js_divergence))
</code></pre>
<p>output:</p>
<pre><code>JS divergence of p and q : 0.08763662020764684
mod JS divergence of p and q : 0.021872274274735898
JS divergence (lib) of p and q : 0.041044079757403054
</code></pre>
<p>I am more concerned now about my JS divergence calculations as none of my functions return the same outcome as the built-in one.</p>
<p>My question is again the same: what am I doing wrong? what is the way the built-in function differs from my calculations? Do you guys have any idea?</p>
| <python><numpy><probability><probability-density><probability-distribution> | 2023-07-25 09:16:14 | 1 | 322 | cs.lev |
76,761,190 | 4,660,492 | How to define PYTHONPATH for VSCode's integrated test discovery | <p>I've been trying to make VSCode's test discovery (using pytest) find my tests, but without success. The discovery fails because pytest cannot resolve where my modules are coming from, hence I need to add the correct folder to the PYTHONPATH. Everything works nicely when I use the terminal: I just export the correct PYTHONPATH, and when I run "pytest" it works.
However, I'm unable to make VSCode use that PYTHONPATH when running the integrated test discovery.
Using a ".env" file in the root folder, with the PYTHONPATH defined, DOES NOT WORK (as suggested everywhere on the internet). What is suggested here: <a href="https://stackoverflow.com/questions/63385586/how-to-integrate-vscode-with-pytest-test-discovery-fails">How to integrate VSCode with pytest ('test discovery fails')?</a> also does not work.</p>
<p>My current workaround is to export the PYTHONPATH in the shell, start VSCode from that same shell session and then it works. I would like for it to just work with the ".env" file for example.</p>
<p>Is there a way?</p>
<p>EDIT:</p>
<p>Just figured out that adding a "pytest.ini" to the root folder, where I use the config "pythonpath" <a href="https://docs.pytest.org/en/latest/reference/reference.html#confval-pythonpath" rel="nofollow noreferrer">https://docs.pytest.org/en/latest/reference/reference.html#confval-pythonpath</a> works as well.
Still I find it highly frustrating that VSCode does not seem to use the ".env" file.</p>
| <python><visual-studio-code><pytest> | 2023-07-25 09:13:36 | 2 | 965 | kidman01 |
76,761,064 | 12,730,406 | Regex - Capture all text up until Capital Letter across new lines? | <p>I have the following string in python:</p>
<pre><code>sample_string = """STEVE SMITH, AMERICAN DAD : Good morning, good
afternoon, usa . Before I hand over to Homer, I want to give a quick reminder of the cartoons we are making.
Numbers in the presentation today. Our focus is now on reported num bers, but we will call out and
specify notable items . we like films and ultimately benefit your schedule going
forward . Homer, over to you.
HOMER SIMPSON, HEAD OF SIMPSON HOUSEHOLD: Thanks, Steve, and good morning in China,
good afternoon in welcome to our viewership results call . Beans is
going to lead the presentation, but I’d like to make some opening comments.
We’ve announced about 1000 hours viewing time, so our strategy is working."""
</code></pre>
<p>The string spans multiple lines and sentences. I am trying to write a regex to capture two groups: one group is the person speaking, the other is the text associated with them.</p>
<p>So trying to get an output like:</p>
<pre><code>output_string_1 = ("""STEVE SMITH, AMERICAN DAD""", """Good morning, good
afternoon, usa . Before I hand over to Homer, I want to give a quick reminder of the cartoons we are making.
Numbers in the presentation today. Our focus is now on reported num bers, but we will call out and
specify notable items . we like films and ultimately benefit your schedule going
forward . Homer, over to you. """ )
output_string_2 = ("""HOMER SIMPSON, HEAD OF SIMPSON HOUSEHOLD""", """Thanks, Steve, and good morning in China,
good afternoon in welcome to our viewership results call . Beans is
going to lead the presentation, but I’d like to make some opening comments.
We’ve announced about 1000 hours viewing time, so our strategy is working.""" )
</code></pre>
<p>The output can be a list of tuples instead of what I have done above.</p>
<p>I have the following regex so far:</p>
<p><code>^([^a-z:]+?)\s*:\s*(.|\n?)</code></p>
<p>The text should stop capturing the next group when it encounters a new speaker who is always identified by CAPITAL LETTERS and a COLON :</p>
<p>So I think a positive lookahead is needed. Any ideas?</p>
<p>Python 3.9.x is being used with the <code>re</code> package.</p>
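<p>For what it's worth, here is a sketch of the kind of pattern I have been experimenting with, on a simplified, made-up transcript (the speaker-name character class of capitals, spaces, and commas is an assumption):</p>

```python
import re

# Made-up, simplified transcript with the same shape as the real one.
transcript = """ALICE EXAMPLE, HOST: Hello everyone.
Some more lines here.
BOB SAMPLE, GUEST: Thanks, Alice.
Closing remarks."""

# A speaker line starts with an all-caps run (letters, spaces, commas)
# followed by a colon; the speech is everything up to the next such
# line (positive lookahead) or the end of the string.
pattern = re.compile(
    r"^([A-Z][A-Z ,]+?)\s*:\s*(.*?)(?=^[A-Z][A-Z ,]+?\s*:|\Z)",
    re.MULTILINE | re.DOTALL,
)

pairs = pattern.findall(transcript)
for speaker, speech in pairs:
    print(repr(speaker), "->", repr(speech.strip()))
```

<p>The lookahead is what stops each speech at the next ALL-CAPS-plus-colon line.</p>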
| <python><regex> | 2023-07-25 08:58:08 | 2 | 1,121 | Beans On Toast |
76,761,029 | 4,865,723 | Non-capturing groups in regex not working as expected | <p>The following match pattern looks for the character <code>/</code> that does not have a blank space before it, but has a blank, a dot, or a line ending after it.</p>
<pre><code>>>> import re
>>> re.search(r"[^ ]/([ .]|$)", "Foo /markup/ bar")
<re.Match object; span=(10, 13), match='p/ '>
</code></pre>
<p>I'm not interested only in the <code>/</code> and its position. Here I do use a simplified regex as an MWE. In the original I'm not able to just do <code>pos = m.start() + 1</code> to get the position of <code>/</code>.</p>
<p>I assume <em>non-capturing groups</em> (<code>(?:)</code>) are the way to do this, but I can't get them to work. The result I expect would be:</p>
<pre><code><re.Match object; span=(11, 11), match='/'>
</code></pre>
<p>What am I doing wrong here?</p>
<pre><code>>>> re.search(r"(?:[^ ])/(?:[ .]|$)", "Foo /markup/ bar")
<re.Match object; span=(10, 13), match='p/ '>
</code></pre>
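<p>For comparison, here is a lookaround-based attempt that does produce the match object I expect (lookarounds match at a position without consuming characters, unlike non-capturing groups, which still consume what they match):</p>

```python
import re

# Negative lookbehind: no space before "/"; lookahead: space, dot, or end after it.
m = re.search(r"(?<! )/(?=[ .]|$)", "Foo /markup/ bar")
print(m)  # matches only the "/" itself, at index 11
```

<p>Here the match covers just the single <code>/</code> character, so <code>m.start()</code> is its position directly.</p>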
| <python><regex><capturing-group> | 2023-07-25 08:54:32 | 1 | 12,450 | buhtz |
76,760,913 | 264,136 | How to check if a substring is NOT present in a string | <p>In my Python code, I want to make sure that <code>x pause output</code> does not occur in a string, where x is any number greater than 0.</p>
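<p>A minimal sketch of what I mean (the assumption that the number has no leading zeros is mine):</p>

```python
import re

def has_pause_output(text: str) -> bool:
    # Matches "<x> pause output" where x is an integer greater than 0.
    return re.search(r"\b[1-9]\d* pause output\b", text) is not None

print(has_pause_output("3 pause output seen"))  # True  -> string must be rejected
print(has_pause_output("0 pause output seen"))  # False -> string is fine
print(has_pause_output("no pauses at all"))     # False -> string is fine
```

<p>So the check I want is essentially <code>not has_pause_output(s)</code>.</p>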
| <python> | 2023-07-25 08:41:18 | 2 | 5,538 | Akshay J |
76,760,906 | 5,431,734 | Installing mamba on a machine with conda | <p>I don't know if I have missed it, but this is not clear to me. I already have Miniconda on my machine and I now want to install Mamba. How should this be done? Am I supposed to download/run the correct Mambaforge installer? Does it co-exist happily side by side with conda, or are you supposed to have just one of them on your system? Doing something like <code>conda install mamba</code> is not recommended.</p>
| <python><conda><mamba> | 2023-07-25 08:40:15 | 2 | 3,725 | Aenaon |
76,760,802 | 14,072,456 | Position Label In Center Without Filling The Entire Layout in PyQt5 | <p>I'm creating an app in PyQt5 and I want my label to be in the center of the window, without it filling the entire space. So <code>Some Label</code> should be where <code>Hello</code> is as seen below, but I don't want <code>Some Label</code> to fill up the entire space.</p>
<p><a href="https://i.sstatic.net/67mUm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/67mUm.png" alt="Image" /></a></p>
<p>I thought perhaps I could put <code>Some Label</code> in a container that fills up the entire space, and center <code>Some Label</code> inside the container just like <code>Hello</code>, but I just can't get this to work.</p>
<p>Here's my code:</p>
<pre><code>import sys
from PyQt5 import QtCore
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QVBoxLayout

def main():
    app = QApplication(sys.argv)
    win = QWidget()
    win.setFixedSize(225, 150)

    label1 = QLabel("Hello")
    label1.setStyleSheet('border: 1px solid red')

    label2 = QLabel("Some Label", label1)
    label2.setAlignment(QtCore.Qt.AlignCenter)
    label2.setStyleSheet('background: red')

    label1.setAlignment(QtCore.Qt.AlignCenter)

    layout = QVBoxLayout()
    layout.addWidget(label1)
    win.setLayout(layout)

    win.show()
    sys.exit(app.exec_())

main()
</code></pre>
<p>Any ideas?</p>
| <python><pyqt5><qvboxlayout> | 2023-07-25 08:28:32 | 1 | 339 | Itsjul1an |
76,760,764 | 2,953,995 | Getting univariate probability densitiy function for a dataset of IP addresses | <p>I have two simple datasets having <em>10k IP addresses</em> encoded as <em>Integers</em> (so the data is discrete and can take any number range between 1 and 4B).</p>
<p>FYI: One dataset is a real dataset captured at a network, while the other one is a synthetic one. At the end of the day, I want to see how good the synthetic one is (generated via AI/ML) compared to the real one. But I am pretty stuck at the beginning:D</p>
<p>Since the dataset's distribution is unknown yet not following any well-known distribution, I want to calculate the PDF of them (and later compare how similar they are).</p>
<p>My two datasets are termed <code>p</code> and <code>q</code>, both arrays of IP addresses (as integers).</p>
<p>I am not an expert in probability theory, so please, bear with me :)</p>
<p>Since I want to compare the two probabilities eventually, to calculate the PDFs of them, I take all possible events (i.e., IP addresses) present in <code>p</code> and <code>q</code>. For this, I do the following in Python using <code>numpy</code>:</p>
<pre><code>import numpy as np
import pandas as pd

q = np.array(real_data_1m.srcip)
p = np.array(syn_data_1m.srcip)

#get all possible discrete events from p and q
px = np.array(list(set(p)))  #use set here to remove duplicates
qx = np.array(list(set(q)))  #use set here to remove duplicates

#concatenate px and qx
mx = np.concatenate([px, qx])
mx.sort()  #sort them, as they are anyway integers
mx = np.array(list(set(mx)))  #remove duplicates by creating a set
#mx.reshape((len(mx),1))  #reshape from 1D to nD, where n=len(mx)
</code></pre>
<p>Then, to calculate the PDF, I created a simple function <code>create_prob_dist()</code> to help towards this goal.</p>
<pre><code>def create_prob_dist(data: np.array, sample_space: np.array):
    #number of all events
    sum_of_events = sample_space.size
    #get the counts of each event via pandas.crosstab()
    data_counts = pd.crosstab(index='counts', columns=data)
    #create probabilities for each event
    prob_dist = dict()
    for i in sample_space:
        if i in data_counts:
            prob_dist[i] = (data_counts[i]['counts']) / sum_of_events
        else:
            prob_dist[i] = 0
    return prob_dist
</code></pre>
<p>This function does not return the PDF itself. At this stage, it returns a Python dictionary, where the keys are the possible IP addresses that are represented in both <code>p</code> and <code>q</code>, i.e., in <code>mx</code>. The corresponding values, therefore, are the probability of each of them. Something like: dict[2130706433]=0.05, meaning the probability of IP address 127.0.0.1 in the dataset is 0.05.</p>
<p>After I have this dictionary of probabilities, I try to plot it, but then come my problems:</p>
<pre><code>#create true PDFs of p and q using mx
p_pdf=create_prob_dist(p, mx)
q_pdf=create_prob_dist(q, mx)
#get the probability values only from the dictionary
p_pdf=np.array(list(p_pdf.values())) #already sorted according to mx
q_pdf=np.array(list(q_pdf.values())) #already sorted according to mx
plt.figure()
plt.plot(mx, q_pdf, 'g', label="Q")
plt.plot(mx, p_pdf, 'r', label="P")
plt.legend(loc="upper right")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/69CKa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/69CKa.png" alt="The PDF plot does not look good" /></a></p>
<p>I know there should be a problem around the scales or something, but I could not get my head around it.</p>
<p>What am I doing wrong? Is it a wrong Python call or is the calculation of the PDF wrong?</p>
<p>Btw., the pure histogram of the <code>p</code> and <code>q</code> looks like this:</p>
<pre><code># plot a histogram of the two datasets to have a quick look at them
plt.hist(np.array(syn_data_1m.srcip), bins=100)
plt.hist(np.array(real_data_1m.srcip),bins=100, alpha=.5)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/9k7g2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9k7g2.png" alt="Histogram of the two datasets" /></a></p>
| <python><numpy><probability-density><probability-distribution><probability-theory> | 2023-07-25 08:23:02 | 1 | 322 | cs.lev |
76,760,719 | 4,119,226 | Custom validation for FastAPI with Pydantic - how to access request data (like HTTP headers)? | <p>Suppose I have the following hello world-ish example:</p>
<pre><code>from dataclasses import dataclass
from typing import Union

from fastapi import FastAPI

@dataclass
class Item:
    name: str
    price: float
    description: Union[str, None] = None
    tax: Union[float, None] = None

app = FastAPI()

@app.post("/items/")
async def create_item(item: Item):
    return item
</code></pre>
<p>I want to validate the <code>item.name</code> with some custom logic based on the value of HTTP Host Header. Let's say I only allow names that are equal to the exact value of that header. How can I achieve that?</p>
<p>Perhaps the question could be rephrased to "How to let custom Pydantic validator access request data in FastAPI?".</p>
<p>I see that making I/O calls in Pydantic validators is generally <a href="https://stackoverflow.com/questions/72379689/fastapi-access-redis-cache-inside-of-pydantic-validator">discouraged</a>, but for my usecase I don't plan to query anything outside my application.</p>
| <python><fastapi><pydantic> | 2023-07-25 08:16:42 | 1 | 1,261 | blahblah |
76,760,704 | 6,241,554 | Python docstring: inconsistent leading whitespace error without new line | <p>my docstring:</p>
<pre class="lang-py prettyprint-override"><code>def some_f():
    """
    1. First story

        >>> str(5)
        if you want to see the value:
        >>> print(str(5))
    2. Another story
        bla bla
    """
</code></pre>
<p>When using doctest, I'm getting:</p>
<blockquote>
<p>ValueError: line 6 of the docstring for File.Class.some_f has inconsistent leading whitespace: '2. Another story'</p>
</blockquote>
<p>I've read (<a href="https://stackoverflow.com/questions/40918168/docstring-has-inconsistent-leading-whitespace">here</a> and <a href="https://stackoverflow.com/questions/18772991/python-doctest-with-newline-characters-inconsistent-leading-whitespace-error">here</a>) that this problem may occur when one use <code>\n</code> or other special character in the docstring, but it's not the case here. No idea why this is happenning and how to fix that. In fact it expects me to move the second point to the right, since this is working properly:</p>
<pre class="lang-py prettyprint-override"><code>def some_f():
    """
    1. First story

        >>> str(5)
        if you want to see the value:
        >>> print(str(5))
        2. Another story
        bla bla
    """
</code></pre>
<p>But it's not what I want.</p>
| <python><special-characters><docstring><doctest> | 2023-07-25 08:14:36 | 1 | 1,841 | Piotr Wasilewicz |
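What happens in the question above: doctest treats the lines that follow a `>>>` example as its expected output, and those lines must be indented at least as far as the `>>>` itself; `2. Another story` sits at a shallower indent, hence the ValueError. A blank line terminates the expected output, so inserting one after each example block lets the text return to the original indentation. A sketch (which also adds the expected output `'5'` so the example itself passes):

```python
# A blank line after each doctest example ends its "expected output" section,
# so the following prose may use a smaller indent without the ValueError.
import doctest

def some_f():
    """
    1. First story
        >>> str(5)
        '5'

    2. Another story
        bla bla
    """

# The parser now accepts the docstring and finds exactly one example.
examples = doctest.DocTestParser().get_examples(some_f.__doc__)
print(len(examples))  # 1
```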
76,760,682 | 1,702,592 | Tensorflow 2.13.1, no matching distribution found for tensorflow-text 2.13.0 | <p>I am trying to install the latest Tensorflow models 2.13.1 (<a href="https://pypi.org/project/tf-models-official/2.13.1/#history" rel="noreferrer">pip install tf-models-official==2.13.1</a>), with Python 3.11. There seems to be an issue with Cython and PyYAML not playing nice together since last week in Tensorflow models 2.13.0, so it won't install.</p>
<p>But 2.13.1 is giving me an error that the corresponding tensorflow-text version 2.13.0 is not found.
The error I am receiving is as follows:</p>
<pre><code>(tensorflow-env) username@DESKTOP:~/projects/tensorflow/models-master/research$ pip install tf-models-official==2.13.1
INFO: pip is looking at multiple versions of tf-models-official to determine which version is compatible with other requirements. This could take a while.
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11; 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10; 1.7.2 Requires-Python >=3.7,<3.11; 1.7.3 Requires-Python >=3.7,<3.11; 1.8.0 Requires-Python >=3.8,<3.11; 1.8.0rc1 Requires-Python >=3.8,<3.11; 1.8.0rc2 Requires-Python >=3.8,<3.11; 1.8.0rc3 Requires-Python >=3.8,<3.11; 1.8.0rc4 Requires-Python >=3.8,<3.11; 1.8.1 Requires-Python >=3.8,<3.11
ERROR: Could not find a version that satisfies the requirement tensorflow-text~=2.13.0 (from tf-models-official) (from versions: 2.12.0rc0, 2.12.0, 2.12.1, 2.13.0rc0)
ERROR: No matching distribution found for tensorflow-text~=2.13.0
</code></pre>
<p>But the release history on pypi.org shows that the 2.13.0 version of tensorflow-text is out: <a href="https://pypi.org/project/tensorflow-text/2.13.0/#history" rel="noreferrer">https://pypi.org/project/tensorflow-text/2.13.0/#history</a></p>
<p>What am I doing wrong?</p>
| <python><python-3.x><tensorflow><pip><tensorflow2.0> | 2023-07-25 08:11:14 | 4 | 4,350 | Karl Johan Vallner |
76,760,676 | 9,272,737 | Odoo 16 plugin creation: override an existing function | <p>I'm very new at Odoo addon development, yet I'm trying to create a simple Odoo 16 plugin that will just override a few methods from the built-in addons and skip their functionality. The plugin installs successfully, but won't override the code or actually log anything.</p>
<p>In particular, I'm trying to override the action_notify function in the addons/mail/models/mail_activity.py model and add some extra code to the existing one, by fully replacing the method. My plugin structure is the following:</p>
<pre><code>.
├── __init__.py
├── __manifest__.py
├── models
│ ├── __init__.py
│ ├── mail_message.py
│ └── task.py
└── security
└── ir.model.access.csv
</code></pre>
<p>my <code>./__init__.py</code> file:</p>
<pre><code>from . import models
</code></pre>
<p>my <code>models/__init__.py</code> file:</p>
<pre><code>from . import task, mail_activity
</code></pre>
<p>my <code>models/mail_activity.py</code> file:</p>
<pre><code>from odoo import models
import logging
_logger = logging.getLogger(__name__)
class MailActivity(models.Model):
_inherit = 'mail.activity'
def action_notify(self):
_logger.info("LOG: _task_message_auto_subscribe_notify method invoked")
return
</code></pre>
<p>at this point i tried:</p>
<ul>
<li>to use the pass command as the only content of the new action_notify() method</li>
<li>to use the logger</li>
<li>to just add some arbitrary code</li>
</ul>
<p>but nothing seems to work, short of editing the main Odoo code directly. Could someone please help me find the correct way to override the base addon function?</p>
<p>Thanks in advance</p>
| <python><python-3.x><odoo><odoo-14><odoo-16> | 2023-07-25 08:10:27 | 0 | 303 | Fed C |
76,760,559 | 9,110,646 | What is the most elegant and efficient way to encode strings to numeric values in pandas? | <p>One solution would be to use pandas.DataFrame.apply. But is there a more efficient way?
The following pattern is applied in the examples: AA = 0.0, AB = 0.5, BB = 1.0.</p>
<p><strong>Input Table</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Col1</th>
<th style="text-align: right;">Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Sample1</td>
<td style="text-align: center;">AB</td>
<td style="text-align: right;">BB</td>
</tr>
<tr>
<td style="text-align: left;">Sample2</td>
<td style="text-align: center;">AA</td>
<td style="text-align: right;">AB</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Output Table</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Col1</th>
<th style="text-align: right;">Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Sample1</td>
<td style="text-align: center;">0.5</td>
<td style="text-align: right;">1.0</td>
</tr>
<tr>
<td style="text-align: left;">Sample2</td>
<td style="text-align: center;">0.0</td>
<td style="text-align: right;">0.5</td>
</tr>
</tbody>
</table>
</div>
<pre><code>import pandas as pd
table_input = pd.DataFrame({'Col1': ["AB", "AA"],
'Col2': ["BB", "AB"]},
index=['Sample1', 'Sample2'])
table_output = pd.DataFrame({'Col1': [0.5, 0.0],
'Col2': [1.0, 0.5]},
index=['Sample1', 'Sample2'])
# Please insert solution here...
</code></pre>
| <python><pandas><dataframe><encoding><vectorization> | 2023-07-25 07:56:59 | 0 | 423 | Pm740 |
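One vectorized option for the question above, a sketch using the stated AA/AB/BB pattern: `DataFrame.replace` with a mapping dict, which avoids a Python-level `apply` loop entirely.

```python
import pandas as pd

mapping = {"AA": 0.0, "AB": 0.5, "BB": 1.0}
df = pd.DataFrame({'Col1': ["AB", "AA"], 'Col2': ["BB", "AB"]},
                  index=['Sample1', 'Sample2'])

encoded = df.replace(mapping)  # one vectorized pass over all columns
print(encoded)
```

A per-column `df[col].map(mapping)` is an alternative; unlike `replace`, `map` turns any value missing from the dict into NaN instead of passing it through.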
76,760,484 | 10,780,974 | No colours in legend when using zero handlelength with colours from a colourmap | <p>I'm trying to plot multiple lines, using the colours from the viridis colourmap. I want to set the <code>handlelength=0</code> in the legend, and when I do, the legend is now totally blank. If I don't mess around with <code>handlelength</code>, then it works fine (so the legend reads with the value, and then a line in the correct colour). And this works if I set the colours individually (so <code>c='C'+str(i)</code> for example).</p>
<p>Code below. matplotlib version is 3.6.2, Python is 3.9.13.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
energy = np.linspace(40., 75., 36)
einds = np.linspace(0, 35, 8).astype(int)
fig, ax = plt.subplots(1, 1, figsize=(4,3), dpi=150)
for i in range(0, len(einds)):
ax.plot(energy, energy+i, c=plt.cm.viridis([einds[i]/einds.max()]),
label='{:.0f}keV'.format(energy[einds[i]]))
ax.grid()
leg=ax.legend(handlelength=0, handletextpad=0, fancybox=1,
framealpha=1, loc='upper right', labelcolor='linecolor')
for item in leg.legendHandles:
item.set_visible(False)
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/03wxA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/03wxA.png" alt="enter image description here" /></a></p>
| <python><matplotlib><legend><colormap> | 2023-07-25 07:46:33 | 1 | 374 | Steven Thomas |
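A hedged sketch of one workaround for the question above: calling the colormap with a 1-element list, `plt.cm.viridis([x])`, yields a `(1, 4)` array rather than a plain RGBA tuple, which `labelcolor='linecolor'` does not handle well; passing a scalar instead has worked in similar setups (and matches the observation that string colours like `'C0'` work).

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

energy = np.linspace(40., 75., 36)
einds = np.linspace(0, 35, 8).astype(int)

fig, ax = plt.subplots(1, 1, figsize=(4, 3), dpi=150)
for i in range(len(einds)):
    # scalar argument -> plain RGBA tuple instead of a (1, 4) array
    ax.plot(energy, energy + i, c=plt.cm.viridis(einds[i] / einds.max()),
            label='{:.0f}keV'.format(energy[einds[i]]))
leg = ax.legend(handlelength=0, handletextpad=0, fancybox=1,
                framealpha=1, loc='upper right', labelcolor='linecolor')
# the attribute was renamed legendHandles -> legend_handles around matplotlib 3.7
for item in getattr(leg, "legend_handles", None) or leg.legendHandles:
    item.set_visible(False)
fig.canvas.draw()
```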
76,760,480 | 11,071,831 | Pandas using groupby on a groupby object | <p>I have a timeseries dataframe with Date, Symbol and some values with granularity of 1 min data.
I want to resample it to 5 min buckets. Each symbol has to be grouped separately. I have attached sample data at the end; use <code>pd.DataFrame.from_dict()</code> to get the data into a dataframe. <code>Symbol</code> is meant to represent stock market tickers (more specifically options on the same ticker). Real data may contain hundreds of Symbols.</p>
<p>Resampling a single symbol is easy</p>
<pre><code>df = df.groupby(pd.Grouper(key='Date',freq='5Min')).agg({'Open': 'first',
'High': 'max',
'Low': 'min',
'Close': 'last',}).reset_index()
</code></pre>
<p>But doing it for multiple symbols is not so straightforward. What I am thinking of is that first I group them by <code>Symbol</code> and then group them again with <code>Date</code> but I have no idea how this would work because you can't chain groupby</p>
<p>pseudo code would be something like</p>
<p><code>df.groupby('Symbol').groupby(pd.Grouper(key='Date',freq='5Min')).agg...</code></p>
<pre><code>{'Date': {0: '2018-01-01 09:15:00',
1: '2018-01-01 09:16:00',
2: '2018-01-01 09:17:00',
3: '2018-01-01 09:18:00',
4: '2018-01-01 09:19:00',
5: '2018-01-01 09:20:00',
6: '2018-01-01 09:21:00',
7: '2018-01-01 09:22:00',
8: '2018-01-01 09:23:00',
9: '2018-01-01 09:24:00',
10: '2018-01-01 09:25:00',
11: '2018-01-01 09:15:00',
12: '2018-01-01 09:16:00',
13: '2018-01-01 09:17:00',
14: '2018-01-01 09:18:00',
15: '2018-01-01 09:19:00',
16: '2018-01-01 09:20:00',
17: '2018-01-01 09:21:00',
18: '2018-01-01 09:22:00',
19: '2018-01-01 09:23:00',
20: '2018-01-01 09:24:00',
21: '2018-01-01 09:25:00'},
'Open': {0: 10531.7,
1: 10523.0,
2: 10525.8,
3: 10522.6,
4: 10521.05,
5: 10524.2,
6: 10526.85,
7: 10529.95,
8: 10533.2,
9: 10530.95,
10: 11528.4,
11: 11531.7,
12: 11523.0,
13: 11525.8,
14: 11522.6,
15: 11521.05,
16: 11524.2,
17: 11526.85,
18: 11529.95,
19: 11533.2,
20: 11530.95,
21: 11528.4},
'High': {0: 10533.7,
1: 10527.5,
2: 10526.15,
3: 10522.95,
4: 10523.8,
5: 10530.25,
6: 10530.3,
7: 10533.55,
8: 10534.8,
9: 10531.25,
10: 11529.1,
11: 11533.7,
12: 11527.5,
13: 11526.15,
14: 11522.95,
15: 11523.8,
16: 11530.25,
17: 11530.3,
18: 11533.55,
19: 11534.8,
20: 11531.25,
21: 11529.1},
'Low': {0: 10518.35,
1: 10522.25,
2: 10522.6,
3: 10519.65,
4: 10520.9,
5: 10523.65,
6: 10526.8,
7: 10529.95,
8: 10530.1,
9: 10527.6,
10: 11526.1,
11: 11518.35,
12: 11522.25,
13: 11522.6,
14: 11519.65,
15: 11520.9,
16: 11523.65,
17: 11526.8,
18: 11529.95,
19: 11530.1,
20: 11527.6,
21: 11526.1},
'Close': {0: 10523.15,
1: 10525.95,
2: 10522.6,
3: 10521.35,
4: 10523.7,
5: 10526.45,
6: 10530.3,
7: 10533.3,
8: 10530.5,
9: 10528.45,
10: 11527.45,
11: 11523.15,
12: 11525.95,
13: 11522.6,
14: 11521.35,
15: 11523.7,
16: 11526.45,
17: 11530.3,
18: 11533.3,
19: 11530.5,
20: 11528.45,
21: 11527.45},
'Symbol': {0: 'A',
1: 'A',
2: 'A',
3: 'A',
4: 'A',
5: 'A',
6: 'A',
7: 'A',
8: 'A',
9: 'A',
10: 'B',
11: 'B',
12: 'B',
13: 'B',
14: 'B',
15: 'B',
16: 'B',
17: 'B',
18: 'B',
19: 'B',
20: 'B',
21: 'B'}}
</code></pre>
| <python><pandas> | 2023-07-25 07:45:30 | 2 | 440 | Charizard_knows_to_code |
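The pseudo code in the question above can be expressed as a single `groupby` with both keys, the `Symbol` column plus a `pd.Grouper` on `Date`. A sketch on simplified data:

```python
import pandas as pd

df = pd.DataFrame({
    'Date': pd.to_datetime(['2018-01-01 09:15', '2018-01-01 09:16',
                            '2018-01-01 09:21', '2018-01-01 09:15']),
    'Symbol': ['A', 'A', 'A', 'B'],
    'Open': [10, 11, 12, 20], 'High': [15, 16, 17, 25],
    'Low': [5, 6, 7, 15], 'Close': [12, 13, 14, 22],
})

# Group by ticker and by 5-minute bucket in one pass
out = (df.groupby(['Symbol', pd.Grouper(key='Date', freq='5min')])
         .agg({'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last'})
         .reset_index())
print(out)
```

`df.groupby('Symbol').resample('5min', on='Date')` is an equivalent spelling of the same operation.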
76,760,475 | 5,743,692 | Can I activate another conda env inside python code? | <p>This is a copy of <a href="https://stackoverflow.com/questions/54430434/activate-another-conda-env-inside-python-code">this question</a>, because the previous answer wasn't appropriate for me. The situation is that I have to use different <a href="https://pytorch.org/get-started/previous-versions/#conda-1" rel="nofollow noreferrer">PyTorch</a> versions in my code for some recognitions. So while talking with GPT, I assembled this code.</p>
<pre><code>import os
import subprocess
def get_env_list():
envs = []
output = subprocess.check_output(['conda', 'env', 'list'])
for line in output.splitlines():
line = line.decode('utf-8')
if line.startswith('#') or not line:
continue
env = line.split()[0]
envs.append(env)
return envs
def set_environment(env_name, valid_envs=get_env_list()):
if env_name in valid_envs:
os.system(F'conda activate {env_name}')
else:
raise ValueError('Invalid environment name')
def get_current_env():
env = os.environ.get('CONDA_DEFAULT_ENV')
if env is None:
return 'base'
else:
return env
if __name__ == '__main__':
print(get_env_list())
print(get_current_env())
d = {1 : 1}
print(d)
set_environment('AMZ')
print(get_current_env())
print(d)
</code></pre>
<p>But it does not work.
The output is</p>
<pre><code>['base', 'AMZ', 'Amazon', 'lilit', 'lilit_clone', 'sCool']
lilit_clone
{1: 1}
lilit_clone
{1: 1}
</code></pre>
<p>I started in the <code>lilit_clone</code> env and I am still in it.
So, how can I resolve my situation?</p>
| <python><pytorch><subprocess><conda><python-venv> | 2023-07-25 07:44:59 | 0 | 451 | Vasyl Kolomiets |
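Regarding the question above: a child process cannot change its parent's environment, so `os.system('conda activate ...')` spawns a shell that activates and immediately exits, leaving the running interpreter (and objects like `d`) untouched. The usual workaround is to run the code that needs the other PyTorch version as a subprocess under that env's interpreter. A hedged sketch; the `AMZ` path in the comment is a hypothetical example:

```python
import subprocess
import sys

def run_in_env(python_path: str, code: str) -> str:
    """Run `code` under the given interpreter and return its stdout."""
    result = subprocess.run([python_path, "-c", code],
                           capture_output=True, text=True, check=True)
    return result.stdout.strip()

# For a conda env, point at its interpreter (hypothetical path):
#   run_in_env("~/anaconda3/envs/AMZ/bin/python",
#              "import torch; print(torch.__version__)")
# `conda run -n AMZ python -c ...` is an equivalent spelling.
# Demo with the current interpreter:
print(run_in_env(sys.executable, "print('hello from subprocess')"))
```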
76,760,443 | 2,013,747 | using pydantic causes stdlib dataclass frozen+slots behavior to change (read-only attribute error). why? | <p>Here is a minimal example of my dataclass usage:</p>
<pre><code>from dataclasses import dataclass
@dataclass(slots=True, frozen=True)
class MyStdLibDataclass:
x : int = 0
d1 = MyStdLibDataclass(x=100)
assert d1.x == 100
</code></pre>
<p>The above code works. However, if I add the following code to the same file:</p>
<pre><code>from pydantic import BaseModel
class MyModel(BaseModel):
y : MyStdLibDataclass|None = None
d2 = MyStdLibDataclass(x=101) # error here
assert d2.x == 101
</code></pre>
<p>I get the error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Ross\Desktop\dataclass_bug.py", line 15, in <module>
d2 = MyStdLibDataclass(x=101)
File "<string>", line 3, in __init__
AttributeError: 'MyStdLibDataclass' object attribute 'x' is read-only
</code></pre>
<p>What am I doing wrong?</p>
<p>It appears that there is some side effect of using a stdlib dataclass in a pydantic model. Does pydantic patch the stdlib dataclass implementation?</p>
<p>Python 3.10.7, pydantic 2.0.2</p>
| <python><pydantic> | 2023-07-25 07:41:18 | 1 | 4,240 | Ross Bencina |
76,760,261 | 9,330,812 | How to avoid the issue "ValueError: ZIP does not support timestamps before 1980" in ZipFile write() in python 3.10 | <p>Recently I encountered the issue <code>ValueError: ZIP does not support timestamps before 1980</code> while trying to write a zip file. Searched on google, and most of the solutions tell me to change the file timestamp using "<code>os.utime()</code>".<br />
For some other reasons, I find the answer not ideal for me.<br />
Are there any other options to fix the issue or work around it?
Thanks in advance.</p>
<p>PS: I found an answer later and put it below; hope it helps someone.</p>
| <python><zip> | 2023-07-25 07:16:09 | 1 | 421 | MadHatter |
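For reference, one stdlib-only option for the question above (Python 3.8+) is `strict_timestamps=False`, which clamps pre-1980 modification times to 1980-01-01 in the archive instead of raising, and leaves the file on disk untouched:

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "old.txt")
    with open(path, "w") as f:
        f.write("hello")
    os.utime(path, (0, 0))  # mtime = 1970-01-01, i.e. before 1980

    zpath = os.path.join(d, "out.zip")
    # strict_timestamps=False clamps the stored timestamp instead of raising
    with zipfile.ZipFile(zpath, "w", strict_timestamps=False) as zf:
        zf.write(path, "old.txt")

    with zipfile.ZipFile(zpath) as zf:
        stamp = zf.getinfo("old.txt").date_time

print(stamp)  # (1980, 1, 1, 0, 0, 0)
```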
76,760,094 | 11,162,983 | RuntimeError: shape '[4, 3]' is invalid for input of size 3 | <p>I tried to compute the Euler angles from the quaternion using this code:</p>
<pre><code>def compute_euler_angles_from_quaternion(quaternions, sequence='xyz'):
batch_size = quaternions.shape[0]
print("Batch size:", batch_size)
q = quaternions.detach().cpu().numpy() # Convert to NumPy array
rotations = Rotation.from_quat(q)
print("q shape & type :", q.shape, type(q))
print("q :", q)
euler_angles = rotations.as_euler(sequence, degrees=False)
print("Shape of euler_angles:", euler_angles.shape, euler_angles)
euler_angles = torch.tensor(euler_angles, device=quaternions.device)
euler_angles = euler_angles.view(batch_size, 3)
return euler_angles
</code></pre>
<p>But I got this issue when the batch sizes == 1 or 2, or 16:</p>
<pre><code>......
Batch size: 2
q shape & type : (2, 4) <class 'numpy.ndarray'>
q : [[-0.02834811 0.38461712 -0.17447884 0.9059931 ]
[-0.08619151 0.40973052 0.17108667 0.8918639 ]]
Shape of euler_angles: (2, 3) [[-0.25826138 0.75739155 -0.48375319]
[-0.02085263 0.86383674 0.3694439 ]]
Batch size: 2
q shape & type : (2, 4) <class 'numpy.ndarray'>
q : [[-8.2189016e-02 -1.8607294e-04 3.9553974e-02 9.9583155e-01]
[-1.4399211e-01 5.9701569e-02 3.8889091e-02 9.8701048e-01]]
Shape of euler_angles: (2, 3) [[-0.16445085 0.00613125 0.07889206]
[-0.285834 0.12941251 0.06011334]]
Batch size: 4
q shape & type : (4,) <class 'numpy.ndarray'>
q : [-0.07577483 0.17546612 -0.10769336 0.9756393 ]
Shape of euler_angles: (3,) [-0.19766829 0.3321353 -0.25311127]
Traceback (most recent call last):
File "test_quat.py", line 147, in <module>
euler = utils.compute_euler_angles_from_quaternion(
File "/home/redhwan/2/HPE/quat/utils.py", line 304, in compute_euler_angles_from_quaternion
euler_angles = euler_angles.view(batch_size, 3)
RuntimeError: shape '[4, 3]' is invalid for input of size 3
</code></pre>
<p>Please help.</p>
<p>Thank you in advance.</p>
| <python><numpy><scipy><touch> | 2023-07-25 06:50:43 | 0 | 987 | Redhwan |
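A hedged sketch of a likely fix for the question above: when the slice collapses, `q` arrives with shape `(4,)`, so `Rotation.from_quat` treats it as one quaternion and returns Euler angles of shape `(3,)` while `batch_size` was read as 4. Forcing a batch dimension with `reshape(-1, 4)` keeps the shapes consistent (the torch-specific parts are left out here):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def compute_euler_angles_from_quaternion(quaternions, sequence='xyz'):
    # always shape (batch, 4), even for a single quaternion
    q = np.asarray(quaternions, dtype=float).reshape(-1, 4)
    return Rotation.from_quat(q).as_euler(sequence, degrees=False)

# A single quaternion now yields shape (1, 3) instead of (3,)
print(compute_euler_angles_from_quaternion([0, 0, 0, 1]).shape)  # (1, 3)
```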
76,760,019 | 5,300,978 | SecurityError: Permission denied to access property "pageXOffset" on cross-origin object | <p>I am trying to take a screenshot using Selenium with Python 3.9, but suddenly I got this error:</p>
<pre><code>Message: SecurityError: Permission denied to access property "pageXOffset" on cross-origin object
</code></pre>
<p>I've tried to use:</p>
<pre><code>profile.setPreference("privacy.trackingprotection.enabled", false)
</code></pre>
<p>But it does not work. How to fix it?</p>
| <python><selenium-webdriver><firefox><geckodriver><cross-origin-read-blocking> | 2023-07-25 06:37:12 | 1 | 1,324 | M. Mariscal |
76,759,808 | 5,567,893 | How to move duplicate rows into columns allocating new column names? | <p>Although I found a similar question here (<a href="https://stackoverflow.com/questions/35927220/how-to-move-duplicate-rows-into-columns-with-python">How to move duplicate rows into columns with python</a>), I get an error with the <code>columns</code> parameter.</p>
<p>In my case, I have a dataframe <code>df</code> and want to change it to <code>df1</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=[['A',1],['A',2],['B',3],['C',4],['C',5],['C',6]],columns=['Name','Value'])
df
# Name Value
#0 A 1
#1 A 2
#2 B 3
#3 C 4
#4 C 5
#5 C 6
df1
# Name 0 1 2
#0 A 1 2 Nan
#1 B 3 Nan Nan
#2 C 4 5 6
</code></pre>
<p>When I tried to run the code, it returned an error as below:</p>
<pre class="lang-py prettyprint-override"><code>df.pivot(index='Name', columns=range(max(df.pivot_table(columns=['Name'], aggfunc='size'))), values='Value')
#KeyError: 3
</code></pre>
<p>I don't know why it couldn't allocate the column names automatically. Can anyone tell me where I should fix the problem in the above code?</p>
| <python><pandas><dataframe> | 2023-07-25 05:58:09 | 1 | 466 | Ssong |
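A sketch of one fix for the question above: `pivot`'s `columns` argument must name a column, not a range, which is why the call raises. Numbering the duplicates first with `groupby(...).cumcount()` and pivoting on that number produces the desired layout:

```python
import pandas as pd

df = pd.DataFrame(data=[['A', 1], ['A', 2], ['B', 3], ['C', 4], ['C', 5], ['C', 6]],
                  columns=['Name', 'Value'])

df1 = (df.assign(n=df.groupby('Name').cumcount())   # 0, 1, 2, ... within each Name
         .pivot(index='Name', columns='n', values='Value')
         .reset_index())
print(df1)
```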
76,759,767 | 15,320,579 | Select i+4 elements from a python dictionary based on a keyword | <p>I have a Python dictionary as follows:</p>
<pre><code>ip_dict = {
"doc_1": "ADMINISTRATION LIABILITY COVERAGE PART CG7023 1096 EXCL-ASBESTOS",
"doc_2": "DIRECT BILL L7F6 20118 INSURED COPY ACP GLDO 7285650787 919705952 43 0001404",
"doc_3": "What Contractor Additional Insured LIABILITY CG 20 10 04 13 THIS ENDORSEMENT CHANGES",
"doc_4": "That portion of \"your work\" out of which the 1. Required by the contract or agreement",
"doc_5": "LIABILITY CG 20 10 04 13 Contractor Additional Insured THIS ENDORSEMENT CHANGES THE POLICY",
"doc_6": "That portion of \"your work\" out of which the 1. Contractor Additional Insured Required",
"doc_7": "LIABILITY CG 20 26 04 13 THIS ENDORSEMENT CHANGES THE POLICY.",
"doc_8": "COMMERCIAL GENERAL LIABILITY CG 21 87 0115 THIS ENDORSEMENT CHANGES THE POLICY.",
"doc_9": "Page 2 of 2 ACP GLDO7285650787 L7F6 20118 CG 21 87 01 15 B. The following definitions are added",
"doc_10": "POLICY NUMBER: THIS ENDORSEMENT CHANGES THE POLICY. COMMERCIAL GENERAL LIABILITY CG 25 03 05 09 ",
"doc_11": "Page 2 of 2 ACP GLDO7285650787 L7F6 20118 CG 25 03 05 09 B"
}
</code></pre>
<p>Now I want to <strong>search for the keyword</strong> <code>Contractor Additional Insured</code> in the values and, if found, <strong>extract that element plus the next 4 consecutive elements</strong> appearing after it and store them in a new dictionary. So my output would look something like this:</p>
<pre><code>op_dict = {
"doc_3": "What Contractor Additional Insured LIABILITY CG 20 10 04 13 THIS ENDORSEMENT CHANGES",
"doc_4": "That portion of \"your work\" out of which the 1. Required by the contract or agreement",
"doc_5": "LIABILITY CG 20 10 04 13 Contractor Additional Insured THIS ENDORSEMENT CHANGES THE POLICY",
"doc_6": "That portion of \"your work\" out of which the 1. Contractor Additional Insured Required",
"doc_7": "LIABILITY CG 20 26 04 13 THIS ENDORSEMENT CHANGES THE POLICY.",
"doc_8": "COMMERCIAL GENERAL LIABILITY CG 21 87 0115 THIS ENDORSEMENT CHANGES THE POLICY.",
"doc_9": "Page 2 of 2 ACP GLDO7285650787 L7F6 20118 CG 21 87 01 15 B. The following definitions are added",
"doc_10": "POLICY NUMBER: THIS ENDORSEMENT CHANGES THE POLICY. COMMERCIAL GENERAL LIABILITY CG 25 03 05 09 ",
}
</code></pre>
<p>Here the keyword appears in the third element <code>doc_3</code>, so we consider 4 elements after <code>doc_3</code> i.e. <code>doc_4</code>, <code>doc_5</code>, <code>doc_6</code>, <code>doc_7</code>. Hence elements till <code>doc_7</code> will be considered.</p>
<p>Now next the keyword appears in <code>doc_5</code>. Hence 4 elements after <code>doc_5</code> (which are <code>doc_6</code>, <code>doc_7</code>, <code>doc_8</code>, <code>doc_9</code>).</p>
<p>Similarly next the keyword appears in <code>doc_6</code> so the next 4 consecutive elements will be selected (<code>doc_7</code>, <code>doc_8</code>, <code>doc_9</code>, <code>doc_10</code>).</p>
<p>Any help is appreciated!</p>
| <python><python-3.x><dictionary><for-loop><dictionary-comprehension> | 2023-07-25 05:49:15 | 3 | 787 | spectre |
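A sketch of one approach to the question above: walk the keys in order, record the index of every match plus the next four indices, and rebuild a dict from the union of those index ranges (dicts preserve insertion order in Python 3.7+). The toy data below stands in for the real documents:

```python
def select_with_context(d, keyword, n_after=4):
    keys = list(d)
    keep = set()
    for i, key in enumerate(keys):
        if keyword in d[key]:
            # the matching element plus up to n_after following elements
            keep.update(range(i, min(i + n_after + 1, len(keys))))
    return {keys[i]: d[keys[i]] for i in sorted(keep)}

docs = {f"doc_{i}": ("Contractor Additional Insured" if i in (3, 5, 6) else "text")
        for i in range(1, 12)}
print(list(select_with_context(docs, "Contractor Additional Insured")))
# ['doc_3', 'doc_4', 'doc_5', 'doc_6', 'doc_7', 'doc_8', 'doc_9', 'doc_10']
```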
76,759,702 | 2,153,235 | How to access Python help on [Py]Spark's "toDF"? | <p>I am trying to spin up on Python and PySpark. I followed <a href="https://sparkbyexamples.com/pyspark/install-pyspark-in-anaconda-jupyter-notebook/?expand_article=1" rel="nofollow noreferrer">this page</a> on installing and checking PySpark in Anaconda on Windows. The following checking code works:</p>
<pre><code>>>> import findspark
>>> findspark.init()
>>> findspark.find()
'C:\\Users\\User.Name\\anaconda3\\envs\\py39\\lib\\site-packages\\pyspark'
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.appName('SparkExamples.com').getOrCreate()
>>> data = [("Java","20000"), ("Python","100000"), ("Scala","3000")]
>>> columns = ["language","users_count"]
>>> df = spark.createDataFrame(data).toDF(*columns)
>>> df.show()
+--------+-----------+
|language|users_count|
+--------+-----------+
| Java| 20000|
| Python| 100000|
| Scala| 3000|
+--------+-----------+
</code></pre>
<p>I tried accessing the online help for the methods <code>createDataFrame</code> and <code>toDF</code>. Getting help on <code>createDataFrame</code> was straightforward: <code>help(spark.createDataFrame)</code>.</p>
<p>I haven't been able to access the online help for <code>toDF</code>:</p>
<pre><code>>>> help(spark.toDF)
AttributeError: 'SparkSession' object has no attribute 'toDF'
>>> help(DataFrame.toDF)
NameError: name 'DataFrame' is not defined
>>> help(spark.DataFrame.toDF)
AttributeError: 'SparkSession' object has no attribute 'DataFrame'
>>> help(DataFrame)
NameError: name 'DataFrame' is not defined
>>> help(spark.DataFrame)
AttributeError: 'SparkSession' object has no attribute 'DataFrame'
</code></pre>
<p><strong>(1) How is the documentation accessed?</strong></p>
<p><strong>(2) Is there a scheme for accessing the help that one can infer based on the checking code above?</strong></p>
| <python><apache-spark><pyspark> | 2023-07-25 05:35:52 | 1 | 1,265 | user2153235 |
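The scheme for the question above: `toDF` is defined on `pyspark.sql.DataFrame`, not on `SparkSession`, so help must be requested through the class (or an instance) that owns the method, e.g. `from pyspark.sql import DataFrame; help(DataFrame.toDF)`, or `help(df.toDF)` on the `df` built earlier (assuming a standard pyspark install). The same class-attribute pattern works for any method; a stdlib sketch:

```python
import pydoc

# help(dict.update) prints this text; render_doc returns it as a string.
doc = pydoc.render_doc(dict.update, renderer=pydoc.plaintext)
print(doc.splitlines()[0])
```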
76,759,600 | 10,982,755 | How to capture SIGTERM signal in a flask server? | <p>I'm currently running a flask server with uwsgi. It is a single process and multi threaded server. I'd like to perform some activities on the pod when k8s sends a SIGTERM or SIGINT signal. However when I try to do the following inside my class,</p>
<pre><code>def __register_signal_handlers(self):
signal.signal(signal.SIGINT, self.__exit_gracefully)
signal.signal(signal.SIGTERM, self.__exit_gracefully)
</code></pre>
<p>It raises <code>ValueError: Signal must be registered on the main thread</code>.
I also checked uwsgi's signal registration docs and they don't mention anything about SIGTERM or SIGINT.</p>
<p>How do I capture the above mentioned signal so that I can perform some activities when a pod is terminated by k8s?</p>
| <python><multithreading><kubernetes><flask><uwsgi> | 2023-07-25 05:06:18 | 1 | 617 | Vaibhav |
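Regarding the question above: Python only allows `signal.signal` on the main thread, so the handlers must be registered at module import time (where the Flask app object is created), not from worker-thread code. Note also that uwsgi traps SIGTERM itself and treats it as a reload by default; the `--die-on-term` option changes that to a shutdown. A minimal stdlib sketch of the handler pattern:

```python
import os
import signal

received = []

def on_shutdown(signum, frame):
    # pod-termination cleanup would go here
    received.append(signal.Signals(signum).name)

# Must run on the main thread, e.g. at module import time
signal.signal(signal.SIGTERM, on_shutdown)
signal.signal(signal.SIGINT, on_shutdown)

os.kill(os.getpid(), signal.SIGTERM)  # simulate k8s sending SIGTERM (POSIX)
print(received)  # ['SIGTERM']
```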
76,759,568 | 336,527 | Why does pandas.DataFrame.apply return identical rows? | <p>Why does this code produce a DF with three identical rows?</p>
<pre><code>df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
print(df.apply(lambda s: [s], axis=1))
</code></pre>
<p>Output:</p>
<pre><code>0 [[3, z]]
1 [[3, z]]
2 [[3, z]]
dtype: object
</code></pre>
| <python><pandas> | 2023-07-25 04:58:33 | 4 | 52,663 | max |
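What happens in the question above: for speed, `apply(axis=1)` reuses a single Series object across rows, so storing a bare reference to `s` leaves every cell pointing at the same (last-filled) object. Copying breaks the aliasing; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

# keep a snapshot of each row, not a reference to the reused buffer
out = df.apply(lambda s: [s.copy()], axis=1)
print(out.iloc[0][0].tolist())  # [1, 'x']
```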
76,759,473 | 19,157,137 | Unable to Access pytest Test Features Without Modifying sys.path | <p>I am working on a Python project where I have the following project structure:</p>
<pre><code>project_directory/
├── src/
│ └── my_functions.py # Contains functions to test
├── tests/
│ └── test_my_functions.py # pytest test file
└── other_files_and_folders/ # Additional files or folders in the project
</code></pre>
<p>In the <code>project_directory</code>, I have a <code>src</code> directory containing the source code with functions that I want to test using pytest. The <code>tests</code> directory holds the pytest test file <code>test_my_functions.py</code>, which contains the test functions.</p>
<p>Initially, to import and access the functions from the <code>src</code> directory in the <code>test_my_functions.py</code> file, I used the following code to modify the Python path:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import os
# Get the path of the current directory (where test_my_functions.py is located)
current_directory = os.path.dirname(os.path.abspath(__file__))
# Add the parent directory of 'src' to the Python path
parent_directory = os.path.dirname(current_directory)
sys.path.append(parent_directory)
</code></pre>
<p>While this approach works, I am seeking a cleaner and more efficient way to access the pytest test features without explicitly modifying the Python path in the test script. I want to ensure that my test script can import the modules from the <code>src</code> directory seamlessly while following best practices and avoiding any potential pitfalls.</p>
<p>Is there a more elegant solution to achieve this without explicitly modifying the Python path in the test script? I am looking for insights and suggestions on how to accomplish this while maintaining a clean and maintainable project structure. Thank you for your help!</p>
| <python><unit-testing><testing><pytest><system> | 2023-07-25 04:33:47 | 2 | 363 | Bosser445 |
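One common path-free option for the question above, a config fragment assuming pytest >= 7: declare `src` on pytest's `pythonpath` ini setting in `pyproject.toml`, so test modules can simply `import my_functions`. Installing the project with `pip install -e .` is the other standard route.

```toml
# pyproject.toml
[tool.pytest.ini_options]
pythonpath = ["src"]
```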
76,759,390 | 6,005,206 | pd.offsets.WeekOfMonth() behavior in Pandas | <p>With <code>pd.offsets.WeekOfMonth()</code> <a href="https://pandas.pydata.org/docs/reference/api/pandas.tseries.offsets.WeekOfMonth.html" rel="nofollow noreferrer">here</a>:</p>
<ul>
<li><code>2020-08-10</code> (i.e. <code>Monday</code>) offsets to <code>2020-08-12</code> as expected i.e. <code>Wednesday</code> of 2nd week (i.e. <code>week=1</code>)</li>
</ul>
<p>Trying to understand why doesn't:</p>
<ul>
<li><code>2020-08-01</code> offset to <code>2020-08-05</code> instead of <code>2020-08-12</code>.</li>
<li><code>2020-08-21</code> offset to <code>2020-08-26</code> instead of <code>2020-09-09</code>.</li>
</ul>
<p><code>[Python 3.9.13; pandas: 2.0.1]</code></p>
<p>Code:</p>
<pre><code>import pandas as pd
data = pd.Series(
[pd.Timestamp('2020-08-01 01:01:01.001001001'), # Saturday
pd.Timestamp('2020-08-10 01:01:01.001001001'), # Monday
pd.Timestamp('2020-08-21 01:01:01.001001001')]) # Friday
print(data)
print()
</code></pre>
<p>Offset:</p>
<pre><code>w = data + pd.offsets.WeekOfMonth(week=1, weekday=2)
print(w)
</code></pre>
<p>Output:</p>
<pre><code>0 2020-08-01 01:01:01.001001001 # Saturday
1 2020-08-10 01:01:01.001001001 # Monday
2 2020-08-21 01:01:01.001001001 # Friday
dtype: datetime64[ns]
0 2020-08-12 01:01:01.001001001 # Wednesday
1 2020-08-12 01:01:01.001001001 # Wednesday
2 2020-09-09 01:01:01.001001001 # Wednesday
dtype: datetime64[ns]
</code></pre>
| <python><pandas><pd.offsets> | 2023-07-25 04:09:50 | 2 | 1,893 | Nilesh Ingle |
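The reason behind the behavior above: `WeekOfMonth(week=1, weekday=2)` defines a single anchor per month, the Wednesday of the second week (first Wednesday plus one week), and adding the offset rolls forward to the next anchor. August 2020's anchor is the 12th (first Wednesday Aug 5 + 1 week), so both Aug 1 and Aug 10 land on Aug 12; Aug 21 is already past it and rolls to September's anchor, Sep 9. Aug 5 is never a target because it is the first-week Wednesday, not the second-week one. A sketch:

```python
import pandas as pd

off = pd.offsets.WeekOfMonth(week=1, weekday=2)  # "second Wednesday" anchor

print(pd.Timestamp('2020-08-01') + off)  # 2020-08-12 (next anchor)
print(pd.Timestamp('2020-08-21') + off)  # 2020-09-09 (August anchor already passed)
print(off.is_on_offset(pd.Timestamp('2020-08-05')))  # False: first-week Wednesday
print(off.is_on_offset(pd.Timestamp('2020-08-12')))  # True: the monthly anchor
```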
76,759,193 | 1,203,797 | Why does MinMaxScaler's fit_transform function drop some customer_ids from the dataframe? | <p>The goal of the code below is to scale all columns (except <code>customer_id</code>) inside <code>df_filtered</code> to the 0-1 range and save the output to <code>df_scaled</code>, while preserving all <code>customer_id</code> values.</p>
<pre><code># Separate customer_id column and feature columns
customer_ids = df_filtered['customer_id']
features = df_filtered.drop('customer_id', axis=1)
# Transform the feature columns
scaler = MinMaxScaler()
scaled_features = scaler.fit_transform(features)
# Create a new DataFrame with the scaled features and customer_id column
df_scaled = pd.DataFrame(scaled_features, columns=features.columns)
df_scaled['customer_id'] = customer_ids
# Reorder the columns (optional, to match the original DataFrame)
df_scaled = df_scaled[['customer_id', 'trx_cnt', 'gtv', 'service_cnt', 'active_day_cnt', 'recency']]
</code></pre>
<p>However, I noticed ~10% of <code>customer_id</code> inside <code>df_scaled</code> becomes NaN, hence I'm losing the identifier. The row numbers persist but the value of <code>customer_id</code> becomes NaN.</p>
<p>Why is this happening and is this expected?</p>
<p>If not, could you point out the way to fix this?</p>
<p>Thanks!</p>
| <python><dataframe><scikit-learn> | 2023-07-25 03:01:07 | 1 | 10,958 | Blaze Tama |
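A likely cause of the NaNs above (hedged, since `df_filtered` isn't shown): `fit_transform` returns a bare NumPy array, so `pd.DataFrame(scaled_features, ...)` gets a fresh `RangeIndex`, while `customer_ids` still carries `df_filtered`'s original (filtered, non-contiguous) index. The assignment then aligns by index and yields NaN wherever the labels don't match; the scaler itself drops nothing. Reusing the original index fixes it:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Toy stand-in for df_filtered: filtering left a non-contiguous index
df_filtered = pd.DataFrame({'customer_id': [101, 102, 103],
                            'trx_cnt': [1.0, 5.0, 9.0]},
                           index=[10, 20, 30])

features = df_filtered.drop('customer_id', axis=1)
scaled = MinMaxScaler().fit_transform(features)

# Keep the original index labels so customer_id aligns instead of going NaN
df_scaled = pd.DataFrame(scaled, columns=features.columns,
                         index=df_filtered.index)
df_scaled['customer_id'] = df_filtered['customer_id']
print(df_scaled)
```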
76,759,158 | 5,960,363 | Type hinting a JSON object in Python | <p>I'd like to type-hint JSON objects with an unknown or changing structure (pulled in from external API). I'd like to avoid using <code>Any</code> or solutions like <code>cast()</code> as much as possible.</p>
<p>I believe the right hint is:</p>
<pre><code>Json: TypeAlias = dict[str, "Json"] | list["Json"] | str | int | float | bool | None
</code></pre>
<h3>Problem</h3>
<p>I find this hint often doesn't work. The following code sample recreates the problem.</p>
<pre class="lang-py prettyprint-override"><code>import requests
from typing_extensions import TypeAlias
Json: TypeAlias = dict[str, "Json"] | list["Json"] | str | int | float | bool | None
res: requests.models.Response = requests.get(
"https://randomuser.me/api/?results=5&nat=gb"
)
data: Json = res.json()
results: Json = data["results"]
</code></pre>
<p>On <code>data["results"]</code> I get the following complaints from mypy:</p>
<blockquote>
<p>No overload variant of "<code>__getitem__</code>" of "list" matches argument type "str" [call-overload]</p>
</blockquote>
<blockquote>
<p>Possible overload variants:
def <code>__getitem__</code>(self, SupportsIndex, /) -> Json
def <code>__getitem__</code>(self, slice, /) -> list[Json]</p>
</blockquote>
<blockquote>
<p>Value of type "dict[str, Json] | list[Json] | str | int | float | bool | None" is not indexable [index]</p>
</blockquote>
<h3>Question</h3>
<p>What am I doing wrong? I've managed to find <a href="https://github.com/python/typing/issues/182#issuecomment-1320974824" rel="nofollow noreferrer">this GitHub issue</a> which may well contain the solution, but if so, I'm not yet good enough with types to see it.</p>
<p>Thanks!</p>
| <python><json><mypy><python-typing> | 2023-07-25 02:47:00 | 2 | 852 | FlightPlan |
76,759,128 | 5,032,387 | TypeError when running compute that includes map_blocks and reduce | <p>I am having difficulty diagnosing the cause of the error. My code involves running a convolution (with <code>map_blocks</code>) over some arrays if they belong to the same group of variables, otherwise just record the 2-dim array. I then do an <code>argmax</code> operation and add the result to a list, that we then concatenate.</p>
<p>I tried running compute with <code>scheduler='single-threaded'</code> argument, to help debug, but I still wasn't able to see the cause of the error.</p>
<pre class="lang-py prettyprint-override"><code>import dask.array as da
from functools import reduce
import numpy as np
size = 100000
vals = da.linspace(0, 1, size)
nvars = 12
test = da.random.uniform(low=0, high=1, size=(100000, nvars, size), chunks=(100, nvars, size))
# number of total unique items corresponds to nvars
var_lookup = {
    'a': [0, 1],
    'b': [0, 1],
    'c': [0],
    'd': [0, 1],
    'e': [0],
    'f': [0, 1, 2],
    'g': [0],
}
# Iterates over all 0 dimension coordinates
# and convolves relevant values from x and y
def custom_convolve(x,y):
    temp_lst = []
    for i in range(x.shape[0]):
        a = da.fft.rfft(x[i])
        b = da.fft.rfft(y[i])
        conv_res = da.fft.irfft(a * b, n = size)
        temp_lst.append(conv_res)
    res = da.stack(temp_lst, axis=0)
    return res
n_groups = len(var_lookup.keys())
counter = 0
group_cols = []
for i in var_lookup.keys():
    grp = var_lookup[i]
    # if group consists of 1 value, then just record that 2-dim array
    if len(grp)==1:
        temp = test[:,counter,:]
        counter += 1
    else:
        test_list = []
        for _ in var_lookup[i]:
            test_list.append(test[:, counter, :])
            counter += 1
        temp = reduce(lambda x, y: da.map_blocks(custom_convolve, x, y, dtype='float32'), test_list)
    res = vals[da.argmax(temp, axis=1)]
    group_cols.append(res)
loc = da.stack(group_cols, axis=1)
</code></pre>
<p>Error when running compute:</p>
<pre><code>res = loc.compute()
</code></pre>
<p>Traceback for error from the last line is long, but the end is here</p>
<pre><code>File c:\Users\x\lib\site-packages\dask\array\slicing.py:990, in check_index(axis, ind, dimension)
    987 elif ind is None:
    988     return
--> 990 elif ind >= dimension or ind < -dimension:
    991     raise IndexError(
    992         f"Index {ind} is out of bounds for axis {axis} with size {dimension}"
    993     )
TypeError: '>=' not supported between instances of 'str' and 'int'
</code></pre>
<p>Maybe the <code>reduce</code> function coupled with <code>map_blocks</code> is causing the problem?</p>
<p><strong>Debug attempt update 1:</strong></p>
<p>I used pdb, converted the code to a .py file, changed compute argument to scheduler='single-threaded'), added a set_trace to right after the <code>for i</code> line and stepped through. It only errors out when I get to the compute step with the same error, so not helpful.</p>
<p><strong>Debug attempt update 2:</strong></p>
<p>I've identified the exact line that gives the problem.
I simplified the code a little to make sure that it wasn't the reduce function and got rid of the loops.</p>
<pre><code>size = 10000
x_vals = da.linspace(0, 1, 1000)
test = da.random.uniform(low=0, high=1, size=(size,4,1000), chunks=(size / 10, 1, 1000))
def simple_convolve(x, y):
    temp_lst = []
    for i in range(x.shape[0]):
        a = da.fft.rfft(x[i])
        b = da.fft.rfft(y[i])
        conv_res = da.fft.irfft(a * b, n = size)
        temp_lst.append(conv_res)
    res = da.stack(temp_lst, axis=0)
    return res
res = da.map_blocks(simple_convolve, test[:,0], test[:,1], dtype='float32')
temp = x_vals[da.argmax(res, axis=1)]
</code></pre>
<p>We get an error here. If we drill in, then the error
actually comes from running this</p>
<pre><code>da.argmax(res, axis=1)
</code></pre>
<p>Since the error is saying I'm comparing a string and an integer, I checked that res has no nulls and no infinity values:</p>
<pre><code># btw don't understand why just 1 compute still returns a dask array
da.isnan(res).sum().compute().compute()
0
(~da.isfinite(res)).sum().compute().compute()
0
</code></pre>
| <python><numpy><dask> | 2023-07-25 02:32:57 | 1 | 3,080 | matsuo_basho |
76,759,072 | 6,646,421 | SQLAlchemy Automap not loading table | <p>I am using SQLAlchemy version 2.0.19 (latest public release).</p>
<p>I am trying to map existing tables as documented in <a href="https://docs.sqlalchemy.org/en/20/orm/extensions/automap.html#basic-use" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/extensions/automap.html#basic-use</a></p>
<p>I created a SQLite table called <code>user</code>:</p>
<pre><code>sqlite3 /tmp/mydatabase.db
SQLite version 3.39.5 2022-10-14 20:58:05
Enter ".help" for usage hints.
sqlite>
sqlite> .schema user
CREATE TABLE user (id int, a int);
sqlite>
</code></pre>
<p>I am trying to automap this table using SQLAlchemy but the table is not available in <code>Base.classes</code> <a href="https://docs.sqlalchemy.org/en/20/orm/extensions/automap.html#basic-use" rel="nofollow noreferrer">as documented</a>:</p>
<pre><code>from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine
Base = automap_base()
# engine
engine = create_engine("sqlite:////tmp/mydatabase.db")
# reflect the tables
Base.prepare(autoload_with=engine)
print(Base.classes.keys())
# mapped classes are now created with names by default
# matching that of the table name.
User = Base.classes.user
</code></pre>
<p><code>print(Base.classes.keys())</code> returns <code>[]</code>.</p>
<p><code>User = Base.classes.user</code> shows the table not mapped:</p>
<pre><code>Traceback (most recent call last):
  File "/***/venv/lib/python3.10/site-packages/sqlalchemy/util/_collections.py", line 186, in __getattr__
    return self._data[key]
KeyError: 'user'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/***/test.py", line 17, in <module>
    User = Base.classes.user
  File "/***/venv/lib/python3.10/site-packages/sqlalchemy/util/_collections.py", line 188, in __getattr__
    raise AttributeError(key)
AttributeError: user
</code></pre>
<p>I am a little confused because the example is pretty straightforward and nonetheless it does not appear to work.</p>
| <python><sqlalchemy> | 2023-07-25 02:15:07 | 1 | 3,538 | Gab |
76,759,069 | 11,642,691 | How to make resizable windows with glut? | <p>I am following the sample programs in the OpenGL Superbible, and they make it sound like windows created with <em>glutCreateWindow</em> will be resizable. I am using my own Python version of their Listing 2.2 below, and my windows are never resizable. I am using PyOpenGL 3.1.7 with freeglut on macOS Ventura (I think the behavior was the same before I installed freeglut).</p>
<p>Here is the listing:</p>
<pre><code>import sys

from OpenGL.GLUT import *
from OpenGL.GL import *
from OpenGL.GLU import *
def change_size(w, h):
    glViewport(0, 0, w, h)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    aspect_ratio = w/h
    if w <= h:
        glOrtho(-100, 100, -100/aspect_ratio, 100/aspect_ratio, 1, -1)
    else:
        glOrtho(-100*aspect_ratio, 100*aspect_ratio, -100, 100, 1, -1)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()

def render_scene():
    glClear(GL_COLOR_BUFFER_BIT)
    glColor(1,0,0,0)
    glRectf(-25,25,25,-25)
    glFlush()

def setup_rc():
    glClearColor(0, 0, 1, 1)

def main():
    glutInit(sys.argv)
    glutInitDisplayMode(GLUT_SINGLE|GLUT_RGBA)
    glutCreateWindow("Simple")
    glutDisplayFunc(render_scene)
    glutReshapeFunc(change_size)
    setup_rc()
    glutMainLoop()
main()
</code></pre>
<p>Is there an easy way to create the main window so it is resizable?</p>
| <python><macos><pyopengl><freeglut><glutcreatewindow> | 2023-07-25 02:14:11 | 1 | 305 | PaulM |
76,758,979 | 22,070,773 | Need help finding which script Python runs on startup | <p>I'm trying to find which Python script loads whenever python is first run. For example, if you run <code>python -v</code>, you see some output like:</p>
<pre><code># installing zipimport hook
import zipimport # builtin
# installed zipimport hook
import site # precompiled from /lib/python2.7/site.pyo
import os # precompiled from /lib/python2.7/os.pyo
import errno # builtin
import posix # builtin
import posixpath # precompiled from /lib/python2.7/posixpath.pyo
import stat # precompiled from /lib/python2.7/stat.pyo
import genericpath # precompiled from /lib/python2.7/genericpath.pyo
import warnings # precompiled from /lib/python2.7/warnings.pyo
import linecache # precompiled from /lib/python2.7/linecache.pyo
import types # precompiled from /lib/python2.7/types.pyo
import UserDict # precompiled from /lib/python2.7/UserDict.pyo
import _abcoll # precompiled from /lib/python2.7/_abcoll.pyo
import abc # precompiled from /lib/python2.7/abc.pyo
import _weakrefset # precompiled from /lib/python2.7/_weakrefset.pyo
import _weakref # builtin
import copy_reg # precompiled from /lib/python2.7/copy_reg.pyo
import traceback # precompiled from /lib/python2.7/traceback.pyo
import sysconfig # precompiled from /lib/python2.7/sysconfig.pyo
import re # precompiled from /lib/python2.7/re.pyo
import sre_compile # precompiled from /lib/python2.7/sre_compile.pyo
import _sre # builtin
import sre_parse # precompiled from /lib/python2.7/sre_parse.pyo
import sre_constants # precompiled from /lib/python2.7/sre_constants.pyo
import _sysconfigdata # precompiled from /lib/python2.7/_sysconfigdata.pyo
import encodings # directory /lib/python2.7/encodings
import encodings # precompiled from /lib/python2.7/encodings/__init__.pyo
import codecs # precompiled from /lib/python2.7/codecs.pyo
import _codecs # builtin
import encodings.aliases # precompiled from /lib/python2.7/encodings/aliases.pyo
import encodings.utf_8 # precompiled from /lib/python2.7/encodings/utf_8.pyo
</code></pre>
<p>I want to find the script that is running these commands, in order to lazy load some of the modules for use when running in a low memory environment.
I think it might be built into <code>libpython2.7.a</code> which makes it difficult to lazy load.
I have found that a lot of these modules are object files such as</p>
<pre><code>posixmodule.o errnomodule.o pwdmodule.o _sre.o _codecsmodule.o _weakref.o zipimport.o symtablemodule.o xxsubtype.o
</code></pre>
<p>Which then gets compiled to produce the final <code>libpython2.7.a</code>.</p>
<p>What I would like is some place in the source, perhaps even before building it, where I can replace:</p>
<p><code>import linecache</code></p>
<p>or whichever the imports that can be deferred after loading, to</p>
<p><code>linecache = lazy_import.lazy_module("linecache")</code>.</p>
<p>Does anyone know where/if this script is in the Python source code?
Thanks</p>
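<p>(Side note for anyone reading this on Python 3: the deferral I'm after already exists in the standard library as <code>importlib.util.LazyLoader</code>, which postpones executing a module until its first attribute access. A sketch, using <code>json</code> as a stand-in for a heavy module:)</p>

```python
import importlib.util
import sys

def lazy_module(name):
    # Recipe from the importlib docs: wrap the real loader in LazyLoader
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # does NOT run the module's code yet
    return module

lazy_json = lazy_module("json")      # no module code executed so far
result = lazy_json.dumps({"a": 1})   # first attribute access triggers the real import
```

<p>That doesn't help for the startup imports baked into the interpreter itself, which is what I'm asking about.</p>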
| <python><initialization><lazy-loading> | 2023-07-25 01:50:15 | 1 | 451 | got here |
76,758,943 | 8,328,007 | S3 Glacier Instant Retrieval via boto3 | <p>I am using boto3 to successfully copy S3 objects to GLACIER_IR storage class with the following code:</p>
<p><code>copy_source = {'Bucket': obj['Bucket'], 'Key': obj['Key']}</code></p>
<p><code>s3.copy(CopySource=copy_source, Bucket=bucket, Key=filename_w_timestamp, StorageClass='GLACIER_IR')</code></p>
<p>However, when I use a very similar code to retrieve the above file back, I get the following error:</p>
<p><code>botocore.errorfactory.ObjectNotInActiveTierError: An error occurred (ObjectNotInActiveTierError) when calling the CopyObject operation: The source object of the COPY operation is not in the active tier and is only stored in Amazon Glacier.</code></p>
<p>Here is the piece of code I am using to attempt retrieval:</p>
<pre><code>s3.copy(CopySource=copy_source, Bucket=bucket, Key=filename_w_timestamp)
</code></pre>
<p>Can someone please help out with what I am doing wrong here?</p>
| <python><amazon-web-services><amazon-s3><boto3> | 2023-07-25 01:39:36 | 1 | 346 | python_enthusiast |
76,758,816 | 10,941,410 | Does it make sense to add some in-memory cache when using AWS lambda? | <p>Let's say that I have a value stored in Redis that is periodically updated. Now, let's say that I want to avoid fetching this value every time I need it in my application: I could cache the value in memory, since it's guaranteed that the value won't change between one periodic update and the next.</p>
<p>Something similar to <a href="https://github.com/jazzband/django-constance/blob/master/constance/backends/redisd.py#L55" rel="nofollow noreferrer">this</a> implementation.</p>
<p>Given the ephemeral nature of serverless, would this make any sense in an AWS lambda environment? I can imagine that it doesn't for cold start, but does it for warm starts?!</p>
| <python><caching><aws-lambda><redis><in-memory-cache> | 2023-07-25 00:46:29 | 2 | 305 | Murilo Sitonio |
76,758,741 | 16,319,191 | Create binary columns after groupby based on occurrence | <p>An empty df with the particular columns of interest (col1-5):</p>
<pre><code>dfw_columns = pd.DataFrame({
    "col1": [],
    "col2": [],
    "col3": [],
    "col4": [],
    "col5": []
})
</code></pre>
<p>The df with the actual entries:</p>
<pre><code>df = pd.DataFrame({
    "Name": ["abc", "abc", "abc", "def", "def", "ghi", "ghi"],
    "colids": ["col1", "col33", np.nan, "col5", "col1", "col2", np.nan]
})
</code></pre>
<p>Place values in the dfw_columns based on occurrence (1 or 0) in df for each Name and colids.</p>
<p>Desired output (after filling the empty dfw_columns)</p>
<pre><code>desireddf = pd.DataFrame({
    "Name": ["abc", "def", "ghi"],
    "col1": [1, 1, 0],
    "col2": [0, 0, 1],
    "col3": [0, 0, 0],
    "col4": [0, 0, 0],
    "col5": [0, 1, 0]
})
desireddf
</code></pre>
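<p>For what it's worth, the closest I've gotten is a sketch built on <code>pd.crosstab</code> plus a reindex onto the fixed column list (I don't know whether this is the idiomatic approach):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Name": ["abc", "abc", "abc", "def", "def", "ghi", "ghi"],
    "colids": ["col1", "col33", np.nan, "col5", "col1", "col2", np.nan],
})

wanted = ["col1", "col2", "col3", "col4", "col5"]
out = (
    pd.crosstab(df["Name"], df["colids"])   # counts per Name/colid; NaNs are dropped
    .reindex(columns=wanted, fill_value=0)  # keep only col1-5, add missing ones as 0
    .clip(upper=1)                          # occurrence flag instead of a count
    .reset_index()
)
out.columns.name = None
```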
| <python><pandas><dataframe> | 2023-07-25 00:21:00 | 2 | 392 | AAA |
76,758,543 | 16,319,191 | Unlist (ungroup) a string df col into separate rows | <p>I used the command below to group. Now I want to 'ungroup' and get back the original df.</p>
<p>example</p>
<pre><code>original_df = pd.DataFrame({
    "id": ["abc", "abc", "abc", "def"],
    "col2": ["adam", "eve", "john", "john"]
})
</code></pre>
<pre><code>original_df.groupby('id').agg(
    unique_count_col2=('col2', 'nunique'),
    unique_values_col2=('col2', 'unique')
).reset_index()
</code></pre>
<pre><code>desired_df = pd.DataFrame({
    "id": ["abc", "abc", "abc", "def"],
    "col2": ["adam", "eve", "john", "john"]
})
</code></pre>
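<p>My current best guess is that <code>explode</code> on the aggregated column reverses the grouping, as long as <code>unique</code> didn't collapse any duplicates within a group (sketch):</p>

```python
import pandas as pd

original_df = pd.DataFrame({
    "id": ["abc", "abc", "abc", "def"],
    "col2": ["adam", "eve", "john", "john"],
})

grouped = original_df.groupby("id").agg(
    unique_values_col2=("col2", "unique")
).reset_index()

# each element of the per-group array becomes its own row again
back = (
    grouped.explode("unique_values_col2")
    .rename(columns={"unique_values_col2": "col2"})
    .reset_index(drop=True)
)
```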
| <python><pandas><group-by> | 2023-07-24 23:17:04 | 1 | 392 | AAA |
76,758,542 | 166,838 | Django: How do I serialize a nested relationship when the nested target has inherited subtypes? | <p>How can I create a serializer for a nested relationship, with a transaction, where the nested relationship has several child types? I’ve read through the documentation and that doesn’t seem to be well-covered.</p>
<p>We have an application for insuring homeowners, with the following structure:</p>
<pre class="lang-py prettyprint-override"><code>class Homeowner(models.Model):
    name = models.CharField(max_length = 100, null = True, blank = True)
    property = models.ForeignKey(Property, null = True)

class Property(models.Model):
    address = models.CharField(max_length = 500, null = True, blank = True)
    # lots more removed. Not an abstract model.

class House(Property):
    number_of_floors = models.IntegerField(default = 1)

class Condo(Property):
    has_doorman = models.BooleanField(default = False)
</code></pre>
<p>(This is strictly for show; the different home types have a <em>lot</em> of discount or liability fields.) For years, the way we’ve done this is that we’ve entered the property by hand, and then entered the homeowner as two separate stages, linking them with a search-for-properties dialog on the homeowner page. It’s always been a zero-or-one-to-one relationship: A property may have zero or one homeowners, and a homeowner may have zero-or-one properties.</p>
<p>The orders have come down to create a unified experience that the homeowner can fill out themself: enter the whole thing, homeowner and property, and validate them together, rolling everything back if anything fails. Sadly, too much depends on the existing structure to re-arrange the models. As far as I know (it’s been a few years since I wrestled with Django), if I just had a property it would be as simple as:</p>
<pre class="lang-py prettyprint-override"><code>class PolicySerializer(serializers.ModelSerializer):
    property = PropertySerializer()

    class Meta:
        model = Homeowner
        fields = ['name', 'property']

    def create(self, validated_data):
        property_data = validated_data.pop('property')
        property = Property.objects.create(**property_data)
        homeowner = Homeowner.objects.create(property=property, **validated_data)
        return homeowner
</code></pre>
<p>It’s fairly trivial to detect what kind of property form the homeowner filled out, but once I know that, how do I write the <code>create()</code> method to route to the right kind of Property subtype model?</p>
<p>And if the Homeowner object fails to validate, how do I ensure the Property object isn’t just left lying around? Is that automatically handled now, or should I decorate the <code>create()</code> method with <code>@transaction.atomic</code>?</p>
| <python><django><django-rest-framework> | 2023-07-24 23:16:45 | 1 | 16,441 | Elf Sternberg |
76,758,415 | 2,386,605 | Pydantic issue for tuple length | <p>I have the following model in pydantic (Version 2.0.3)</p>
<pre><code>from typing import Tuple
from pydantic import BaseModel
class Model(BaseModel):
    test_field: Tuple[int]
</code></pre>
<p>But when I enter</p>
<pre><code>model = Model(test_field=(1,2))
</code></pre>
<p>I get as error:</p>
<pre><code>Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "/Users/tobi/Documents/scraiber/z_legacy/fastapi_test_app/venv/lib/python3.10/site-packages/pydantic/main.py", line 150, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Model
test_field
  Tuple should have at most 1 item after validation, not 2 [type=too_long, input_value=(1, 2), input_type=tuple]
    For further information visit https://errors.pydantic.dev/2.0.3/v/too_long
</code></pre>
<p>Do you know how I can fix that?</p>
| <python><fastapi><pydantic> | 2023-07-24 22:38:45 | 2 | 879 | tobias |
76,758,283 | 19,157,137 | Using pytest to test classes and functions in Jupyter Notebook Files | <p>I am facing an issue while attempting to import the jupyter notebook file <code>MyNotebook.ipynb</code> into a pytest test script (<code>test_add.py</code>). The test script aims to test the <code>add</code> function defined in the <code>MyNotebook.ipynb</code> Jupyter Notebook. However, I encounter an <code>ImportError</code> with the error message:</p>
<pre><code>ImportError while importing test module '/app/test_add.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.11/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
test_add.py:1: in <module>
    from MyNotebook import add
ModuleNotFoundError: No module named 'MyNotebook'
</code></pre>
<p>Steps Taken:</p>
<ol>
<li>I have created a Jupyter Notebook named <code>MyNotebook.ipynb</code> containing the <code>add</code> function implementation as follows:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def add(a, b):
    return a + b
</code></pre>
<ol start="2">
<li>I created a pytest test script named <code>test_add.py</code>, which aims to test the <code>add</code> function from <code>MyNotebook.ipynb</code>:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from MyNotebook import add

def test_add():
    assert add(2, 3) == 5
    assert add(0, 0) == 0
    assert add(-1, 1) == 0
</code></pre>
<ol start="3">
<li>I ran pytest in the project directory using the command <code>pytest</code>. However, I encountered an <code>ImportError</code> with the message <code>ModuleNotFoundError: No module named 'MyNotebook'</code>.</li>
</ol>
<p>Additional Notes:</p>
<ul>
<li>I have already ensured that the <code>MyNotebook.ipynb</code> and <code>test_add.py</code> files are in the same directory.</li>
<li>The Python version used is 3.11.4, and pytest version is 7.4.0.</li>
<li>I suspect there might be an issue with how the Jupyter notebook is imported in <code>test_add.py</code>, as it tries to import from a module 'MyNotebook' but can't find it.</li>
</ul>
<p>I am seeking assistance to resolve the import error and successfully run the pytest test script to test the <code>add</code> function in <code>MyNotebook.ipynb</code>. Thank you for any insights or suggestions.</p>
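<p>In case it helps clarify what I'm attempting: the fallback I've been sketching reads the <code>.ipynb</code> JSON directly and execs its code cells into a namespace (stdlib only; shown here against a throwaway notebook built on the fly. I'm aware packages such as <code>import-ipynb</code> and <code>testbook</code> exist, but I don't know which is recommended):</p>

```python
import json
import os
import tempfile

def load_notebook_namespace(path):
    # .ipynb files are JSON; run each code cell in one shared namespace
    with open(path) as f:
        nb = json.load(f)
    ns = {}
    for cell in nb["cells"]:
        if cell["cell_type"] == "code":
            exec("".join(cell["source"]), ns)
    return ns

# build a throwaway notebook equivalent to MyNotebook.ipynb
nb = {"cells": [{"cell_type": "code",
                 "source": ["def add(a, b):\n", "    return a + b\n"]}],
      "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
fd, path = tempfile.mkstemp(suffix=".ipynb")
with os.fdopen(fd, "w") as f:
    json.dump(nb, f)

ns = load_notebook_namespace(path)
add = ns["add"]
```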
| <python><unit-testing><testing><jupyter-notebook><pytest> | 2023-07-24 22:01:46 | 0 | 363 | Bosser445 |
76,758,183 | 14,250,641 | Map one dataframe to another based on position falling within a range | <p>I have a dataframe (600k rows)</p>
<pre><code>index chr position other_cols
1 1 100 ...
2 1 2100
3 1 3300
4 2 4
5 2 2200
6 3 8420
</code></pre>
<p>I have another dataframe df2 (1.2 million rows):</p>
<pre><code>index chr start stop
1 1 1 100
2 1 2000 3000
3 1 4000 8000
4 2 1 1500
5 3 20 40000
</code></pre>
<p>I want my final dataframe to basically map df1 to df2 by finding the positions that lie within one of the regions. :</p>
<pre><code>index chr position matched_start matched_end other_cols (from df1)
1 1 100 1 100 ...
2 1 2100 2000 3000
3 2 4 1 1500
4 3 8420 20 40000
</code></pre>
<p>Here's the code I've tried running, but my laptop keeps crashing everytime I try to run on the entire dfs (only works if I use small subset). I also tried doing it in chunks, but it is also crashing my laptop.</p>
<pre><code># Merge the DataFrames based on the 'chromosome' column
merged_df = pd.merge(df1, df2, on='chr', how='inner')
# Filter the rows where variant_position lies within the start/end ranges
filtered_df = merged_df[(merged_df['position'] >= merged_df['start']) & (merged_df['position'] <= merged_df['stop'])]
</code></pre>
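<p>For comparison, the other direction I've seen suggested is <code>pd.merge_asof</code>, which matches each position to the nearest interval start per chromosome instead of materialising the full many-to-many merge (a sketch on the toy data above; it assumes the intervals within a chromosome don't overlap):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"chr": [1, 1, 1, 2, 2, 3],
                    "position": [100, 2100, 3300, 4, 2200, 8420]})
df2 = pd.DataFrame({"chr": [1, 1, 1, 2, 3],
                    "start": [1, 2000, 4000, 1, 20],
                    "stop": [100, 3000, 8000, 1500, 40000]})

# match each position to the closest start at or below it, per chromosome
merged = pd.merge_asof(
    df1.sort_values("position"),
    df2.sort_values("start"),
    left_on="position", right_on="start",
    by="chr", direction="backward",
)
# keep only positions that actually fall inside the matched interval
result = merged[merged["position"] <= merged["stop"]].sort_values(["chr", "position"])
```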
| <python><pandas><dataframe><dataset><mapping> | 2023-07-24 21:41:26 | 1 | 514 | youtube |
76,758,098 | 2,887,833 | Pass the content of a variable instead of variable name | <p>I have this code</p>
<pre><code>response = sf.list_state_machines(maxResults=2)
</code></pre>
<p>I want to:</p>
<ul>
<li>make <code>list_state_machines</code> a passed-in argument to the function, named <code>action_cmd</code></li>
<li>make <code>maxResults=2</code> a passed-in argument to the function, named <code>payload</code></li>
</ul>
<p>Then I can do the following:</p>
<p><code>response = sf.action_cmd(payload)</code></p>
<p>This does not work, of course. How do I pass these variables so <code>sf</code> recognizes the contents of <code>action_cmd</code> as the name of the command and passes the contents of <code>payload</code> as input to the function?</p>
<p>I have no control over the function I am calling</p>
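<p>To make the shape of what I'm after concrete (with a stand-in object, since I can't show the real <code>sf</code> client), I believe the answer involves looking the method up by name and unpacking the payload:</p>

```python
class FakeClient:
    # stand-in for the real boto3 Step Functions client
    def list_state_machines(self, maxResults=10):
        return {"called_with": maxResults}

sf = FakeClient()
action_cmd = "list_state_machines"
payload = {"maxResults": 2}

# look the method up by name, then unpack the payload as keyword arguments
response = getattr(sf, action_cmd)(**payload)
```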
| <python> | 2023-07-24 21:21:11 | 1 | 1,287 | efultz |
76,758,084 | 1,818,713 | How to make a generator of bytes instead of writing to a file from pyarrow for fastapi | <p>Let's say generically my setup is like this:</p>
<pre class="lang-py prettyprint-override"><code>import io

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
import pyarrow as pa
import pyarrow.ipc as ipc
app = FastAPI()
@app.get("/api/getdata")
async def getdata():
    table = pa.Table.from_pydict({
        "name": ["Alice", "Bob", "Charlie"],
        "age": [25, 30, 22]})

    ### Not really sure what goes here
    ## something like this...
    sink = io.BytesIO()
    with ipc.new_file(sink, table.schema) as writer:
        for batch in table.to_batches():
            writer.write(batch)
    sink.seek(0)
    return StreamingResponse(content=sink, media_type="application/vnd.apache.arrow.file")
</code></pre>
<p>This <em>works</em> but I'm copying the whole table to BytesIO first? It seems like what I need to do is make a generator that yields whatever <code>writer.write(batch)</code> writes to the Buffer instead of actually writing it but I don't know how to do that. I tried using the <code>pa.BufferOutputStream</code> instead of BytesIO but I can't put that in as a return object for fastapi.</p>
<p>My goal is to be able to get the data on the js side like this...</p>
<pre class="lang-py prettyprint-override"><code>import { tableFromIPC } from "apache-arrow";
const table = await tableFromIPC(fetch("/api/getdata"));
console.table([...table]);
</code></pre>
<p>In my approach, this works, I'd just like to know if there's a way to do this without the copying.</p>
<p>I tried doing small batches but it still didn't work like this:</p>
<pre><code>@app.get("/")
def get_table():
    table_or_ds = ds.dataset(file_or_files, format=format)
    def gen_out():
        with BytesIO() as sink:
            with ipc.new_file(sink, table_or_ds.schema) as writer:
                for batch in table_or_ds.to_batches():
                    writer.write_batch(batch)
                    yield sink.getvalue()
                    sink.seek(0)
                    sink.truncate()
            yield sink.getvalue()
    return StreamingResponse(gen_out(), media_type="application/vnd.apache.arrow.file")
</code></pre>
<p>The issue is that <code>sink.truncate</code> doesn't clear any memory so it doesn't help.</p>
| <python><fastapi><pyarrow><apache-arrow> | 2023-07-24 21:18:47 | 3 | 19,938 | Dean MacGregor |
76,758,016 | 5,032,387 | Slicing using broadcasting in dask | <p>I want to get the values of array x where the index is taken from contents of another array filt. So the output needs to be some of the values from x, equivalent to the length of filt.</p>
<pre><code>import dask.array as da
import numpy as np
x = np.linspace(0,5,10).reshape(10,1)
# shape is (1,3)
filt = np.array([[2,3,5]])
# get the x values coinciding to the 2nd, 3rd and 5th index
x[filt]
</code></pre>
<p>This works in numpy. How would I get it to work in dask? Currently errors out with an AssertionError.</p>
<pre><code>x = da.linspace(0,5,10).reshape(10,1)
filt = da.array([[2,3,5]])
x[filt]
</code></pre>
| <python><arrays><dask> | 2023-07-24 21:05:12 | 1 | 3,080 | matsuo_basho |
76,757,987 | 6,077,239 | Polars: inefficiency of over expression | <p>I found out that at least for the scenario below, doing <code>over</code> is much slower (2~3x) than doing <code>group_by/agg</code> + <code>explode</code>. And, the results are exactly the same.</p>
<p>Based on this finding, I have the following questions:</p>
<ul>
<li>Is such behaviour as expected? If so, should we always do a 2-step procedure (<code>group_by/agg</code> + <code>explode</code>) instead of using <code>over</code> directly?</li>
<li>Or, does this mean that there may be some room to optimize <code>over</code>?</li>
<li>Or, the performance between these two approaches really depends on the problem setup and users should try to see which approach fits the better?</li>
</ul>
<pre><code>import time
import numpy as np
import polars as pl
from polars.testing import assert_frame_equal
## setup
rng = np.random.default_rng(1)
nrows = 20_000_000
df = pl.DataFrame(
    dict(
        id=rng.integers(1, 50, nrows),
        id2=rng.integers(1, 500, nrows),
        v=rng.normal(0, 1, nrows),
        v1=rng.normal(0, 1, nrows),
        v2=rng.normal(0, 1, nrows),
        v3=rng.normal(0, 1, nrows),
        v4=rng.normal(0, 1, nrows),
        v5=rng.normal(0, 1, nrows),
        v6=rng.normal(0, 1, nrows),
        v7=rng.normal(0, 1, nrows),
        v8=rng.normal(0, 1, nrows),
        v9=rng.normal(0, 1, nrows),
        v10=rng.normal(0, 1, nrows),
    )
)
## over
start = time.perf_counter()
res = (
    df.lazy()
    .select(
        "id",
        "id2",
        *[
            (pl.col(f"v{i}") - pl.col(f"v{i}").mean().over("id", "id2"))
            / pl.col(f"v{i}").std().over("id", "id2")
            for i in range(1, 11)
        ],
    )
    .collect()
)
print(
    time.perf_counter() - start
)
# 8.541702497983351
## groupby/agg + explode
start = time.perf_counter()
res2 = (
    df.lazy()
    .group_by("id", "id2")
    .agg(
        (pl.col(f"v{i}") - pl.col(f"v{i}").mean()) / pl.col(f"v{i}").std()
        for i in range(1, 11)
    )
    .explode(pl.exclude("id", "id2"))
    .collect()
)
print(
    time.perf_counter() - start
)
# 3.1841439900454134
## compare results
assert_frame_equal(res.sort(pl.all()), res2.sort(pl.all()))
</code></pre>
| <python><python-polars> | 2023-07-24 20:58:54 | 1 | 1,153 | lebesgue |
76,757,888 | 6,447,563 | Does OpenCV-python guarantee that a contour remains the same even if other parts of an image change? | <p>Consider this situation: you have two images.</p>
<p>Image 1</p>
<p><a href="https://i.sstatic.net/uPUbX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uPUbX.png" alt="enter image description here" /></a></p>
<p>Image 2</p>
<p><a href="https://i.sstatic.net/bRh5q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bRh5q.png" alt="enter image description here" /></a></p>
<p>Yes, the upper-left figure is the same in both cases. Yes, the other (non-overlapping) figures might be different. Can I be sure that if my figure does not overlap with other stuff, then <code>str(cv2.findContours(...))</code> yields the same order of points?</p>
| <python><image-processing><computer-vision><contour><opencv> | 2023-07-24 20:41:50 | 0 | 1,233 | sixtytrees |
76,757,795 | 1,911,036 | Check if FITS file has valid WCS | <p>I am looking for a way to check if there is valid WCS data in the FITS file header or not.</p>
<p>When creating a WCS from a FITS file with no WCS inside (ds9 does not see any WCS, for example), I surprisingly got no warnings or errors.</p>
<pre><code>from astropy import wcs
from astropy.io import fits
hdu = fits.open(path_to_fit_file)
header = hdu[0].header
w = wcs.WCS(header)
print(w)
</code></pre>
<p>This code gave</p>
<pre><code>WCS Keywords
Number of WCS axes: 2
CTYPE : '' ''
CRVAL : 0.0 0.0
CRPIX : 0.0 0.0
PC1_1 PC1_2 : 1.0 0.0
PC2_1 PC2_2 : 0.0 1.0
CDELT : 1.0 1.0
NAXIS : 1920 1200
</code></pre>
<p><code>wcs.validate</code> also does not return a clear answer, only some warnings.</p>
<p>Here is a full header:</p>
<pre><code>SIMPLE = T / C# FITS: 07/16/2023 02:02:13
BITPIX = 16
NAXIS = 2 / Dimensionality
NAXIS1 = 1920
NAXIS2 = 1200
BLKLEVEL= 0 /
CAMID = '926f8841ad8ccd139' /
OBJCTALT= 34.4417644433851 /
GPS_EU = 3868.9 / EndShutterMicroSeconds
OBJECT = 'Sa11 ' /
EQUINOX = 2023.53719335833 /
OBJCTAZ = 215.096223700444 /
EXTEND = T / Extensions are permitted
BZERO = 32768 /
BSCALE = 1 /
ROWORDER= 'TOP-DOWN' /
EXPTIME = 3 / seconds
XPIXSZ = 5.86 / microns, includes binning if any
YPIXSZ = 5.86 / microns, includes binning if any
XBINNING= 1 /
YBINNING= 1 /
CCD-TEMP= -15.1 / C
FRAMETYP= 'Light ' /
SWCREATE= 'SharpCap v4.0.9268.0, 64 bit' /
DATE-OBS= '2023-07-15T23:02:10.0038605' / GPS:Start Exposure
DATE-END= '2023-07-15T23:02:13.1115415' / System Clock:Frame Received
DATE-OB2= '2023-07-15T23:02:13.0038689' / GPS:End Exposure
DATE-AVG= '2023-07-15T23:02:11.5038647' / GPS:Mid Exposure
FOCALLEN= 300 /
RA = 281.387381627622 / Epoch : JNOW
DEC = 0.507833420011808 / Epoch : JNOW
OBJCTRA = '18 45 32.000' / Epoch : JNOW
OBJCTDEC= '+00 30 28.000' / Epoch : JNOW
END
</code></pre>
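<p>The crude fallback I've considered is checking for the mandatory WCS keywords by hand before constructing the <code>WCS</code> object (a sketch over a plain keyword mapping, so it won't capture every WCS convention; I've also seen <code>WCS.has_celestial</code> mentioned, but I haven't confirmed it covers this case):</p>

```python
def has_basic_wcs(header, naxis=2):
    """Heuristic: require a non-empty CTYPEi plus CRVALi/CRPIXi for each axis."""
    for i in range(1, naxis + 1):
        ctype = header.get(f"CTYPE{i}", "")
        if not str(ctype).strip():
            return False
        if f"CRVAL{i}" not in header or f"CRPIX{i}" not in header:
            return False
    return True

# the header in the question has none of these keywords
no_wcs = {"SIMPLE": True, "NAXIS": 2, "NAXIS1": 1920, "NAXIS2": 1200}
with_wcs = {"CTYPE1": "RA---TAN", "CTYPE2": "DEC--TAN",
            "CRVAL1": 281.4, "CRVAL2": 0.5, "CRPIX1": 960, "CRPIX2": 600}
```

<p>An <code>astropy.io.fits.Header</code> supports the same mapping interface, so this should work on the header above, but I'd prefer something built into astropy.</p>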
| <python><astropy><fits> | 2023-07-24 20:23:03 | 0 | 333 | Viktor |
76,757,495 | 141,789 | Compress secp256k1 elliptic curve x, y coordinate strings into hex public key | <h1>For context</h1>
<p>I am using the <code>azurerm_key_vault_key</code> terraform resource to create a secp256k1 key. I can output the X and Y coordinates for this key as strings that look like this (they appear to be base64url-encoded, but I cannot find any docs that clearly define this):</p>
<pre><code>"x": "ARriqkpHlC1Ia1Tk86EM_bqH_9a88Oh2zMYF3fUUGJw"
"y": "wTYd3CEiwTk1n-lFPdpZ51P4Z0EzlVNXLvJMY-k55pQ"
</code></pre>
<h1>The problem</h1>
<p>I need to convert this into a public key in hexadecimal format. I have tried the following in Python:</p>
<pre><code>import ecdsa
import binascii
import base64
def base64url_to_bytes(base64url_string):
    padding = '=' * (4 - (len(base64url_string) % 4))
    base64_string = base64url_string.replace('-', '+').replace('_', '/') + padding
    return base64.b64decode(base64_string)

def compress_point(x, y):
    """Compresses the point (x, y) to a 33-byte compressed public key."""
    prefix = "02" if y % 2 == 0 else "03"
    return prefix + x

def decompress_point(compressed_key):
    """Decompresses the compressed public key to (x, y)."""
    prefix = compressed_key[:2]
    x = compressed_key[2:]
    y = ecdsa.ellipticcurve.Point(ecdsa.SECP256k1.curve, int(x, 16), int(prefix, 16))
    return x, y.y()
x_value = base64url_to_bytes("ARriqkpHlC1Ia1Tk86EM_bqH_9a88Oh2zMYF3fUUGJw")
y_value = base64url_to_bytes("wTYd3CEiwTk1n-lFPdpZ51P4Z0EzlVNXLvJMY-k55pQ")
x, y = int.from_bytes(x_value, byteorder='big'), int.from_bytes(y_value, byteorder='big')
# Compress the point
compressed_key = compress_point(format(x, 'x'), y)
print(compressed_key)
decompressed_x, decompressed_y = decompress_point(compressed_key)
print("Decompressed X:", decompressed_x)
print("Decompressed Y:", decompressed_y)
</code></pre>
<p>It compresses, but <code>decompress_point</code> throws an assertion error, which suggests that I am doing something wrong either in the compression or decompression step.</p>
<p>Here is the full output of the script above:</p>
<pre><code>0211ae2aa4a47942d486b54e4f3a10cfdba87ffd6bcf0e876ccc605ddf514189c
Traceback (most recent call last):
  File "get_public_key3.py", line 32, in <module>
    decompressed_x, decompressed_y = decompress_point(compressed_key)
  File "get_public_key3.py", line 19, in decompress_point
    y = ecdsa.ellipticcurve.Point(ecdsa.SECP256k1.curve, int(x, 16), int(prefix, 16))
  File "/home/ubuntu/.local/lib/python3.8/site-packages/ecdsa/ellipticcurve.py", line 1090, in __init__
    assert self.__curve.contains_point(x, y)
AssertionError
</code></pre>
<p>I am not at all familiar with the packages in use here and the code above has mostly been put together from various bits and pieces I could find online.</p>
<p>Could someone provide me with a definitive answer on how to do this?</p>
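<p>For reference, a minimal sketch of producing the hex encodings directly from the base64url coordinates (assuming the key really is a secp256k1 point; note that <code>format(x, 'x')</code> drops leading zero bytes, so fixed-width padding is used here):</p>

```python
import base64

def b64url_to_int(s):
    # restore base64url padding, then decode big-endian
    s += "=" * (-len(s) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(s), "big")

x = b64url_to_int("ARriqkpHlC1Ia1Tk86EM_bqH_9a88Oh2zMYF3fUUGJw")
y = b64url_to_int("wTYd3CEiwTk1n-lFPdpZ51P4Z0EzlVNXLvJMY-k55pQ")

# uncompressed SEC1 form: 04 || X || Y, each zero-padded to 32 bytes
uncompressed = "04" + format(x, "064x") + format(y, "064x")

# compressed SEC1 form: 02/03 prefix from the parity of Y, then X only
compressed = ("02" if y % 2 == 0 else "03") + format(x, "064x")

print(uncompressed)
print(compressed)
```

<p>Note that the AssertionError in <code>decompress_point</code> comes from passing <code>int(prefix, 16)</code> (i.e. 2 or 3) as the Y coordinate; actual decompression would instead solve y^2 = x^3 + 7 (mod p) for secp256k1 and pick the root whose parity matches the prefix.</p>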
| <python><ecdsa><elliptic-curve><python-cryptography> | 2023-07-24 19:26:27 | 1 | 5,842 | Karl |
76,757,470 | 15,098,472 | merge two scatter markers into one for the legend in matplotlib | <p>Assume I have a function that represents a path of a driving car:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# positions of the car
x = np.linspace(0, 20, 100)
y = np.sin(x)
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(x, y)
</code></pre>
<p><a href="https://i.sstatic.net/sUB8g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sUB8g.png" alt="enter image description here" /></a></p>
<p>The car stops at certain positions and I want to mark these positions in the graph using the pause symbol, which some of you are certainly familiar with. The symbol is simply two vertical lines. Unfortunately, matplotlib does not have a marker that looks like that; however, it does come with the vertical line as a marker. So, as a plan B, I came up with the following implementation:</p>
<pre><code># position where the car stops
x_stop = np.array([1, 5, 10])
y_stop = np.sin(x_stop)
# to create a stop symbol we shift the vertical line from matplot
shift = 0.1
ax.scatter(x_stop - shift, y_stop, marker='|', color='r', s=225, label='stop')
ax.scatter(x_stop + shift, y_stop, marker='|', color='r', s=225, label='stop')
plt.legend(loc='best')
</code></pre>
<p><a href="https://i.sstatic.net/q54NV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q54NV.png" alt="enter image description here" /></a></p>
<p>The issue with this is that the legend obviously does not make sense. What I would like to have is the two markers next to each other in the legend, just like they appear in the graph on the path of the car. Is there a way to achieve this? Any other approach or a fix for my problem is fine!</p>
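<p>One possible fix is a sketch using <code>HandlerTuple</code>: leave both scatter calls unlabeled and hand the legend a tuple of the two handles, so a single legend entry draws both markers:</p>

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerTuple

x = np.linspace(0, 20, 100)
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(x, np.sin(x))

x_stop = np.array([1, 5, 10])
y_stop = np.sin(x_stop)
shift = 0.1
h1 = ax.scatter(x_stop - shift, y_stop, marker="|", color="r", s=225)
h2 = ax.scatter(x_stop + shift, y_stop, marker="|", color="r", s=225)

# one legend entry built from both scatter handles
ax.legend([(h1, h2)], ["stop"],
          handler_map={tuple: HandlerTuple(ndivide=None)},
          loc="best")
```

<p>Another option worth trying is a mathtext marker such as <code>marker=r'$||$'</code>, which draws both bars as a single marker so the legend needs no special handling.</p>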
| <python><matplotlib> | 2023-07-24 19:21:01 | 1 | 574 | kklaw |
76,757,314 | 6,447,563 | Efficient algorithm to check if two arrays have the same values in Python. The values are guaranteed to be in the same order | <p>I have two images with white blobs of target objects on a black background. I need to exclude all blobs from set 2 if they overlap with a blob in set 1. My bruteforce approach is shown below, but it is inefficient.</p>
<p>Now, I can compute all contours of the initial mask and of the <code>mask * (1 - occupied)</code>. This yields two outer arrays of inner arrays of coordinates. If an inner array from the first set has a match among the inner arrays of the second set, then that contour should be included; else it is skipped. I would like a way to turn an array of coordinates (a mutable type) into some sort of hash, then take the set intersection of the hashes, then use <code>cv2.drawContours</code> on the arrays that survived the intersection. Would this work? Is there a more efficient way to do this?</p>
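<p>For what it's worth, since <code>ndarray</code> objects are not hashable, the hashing step described above can use <code>tobytes()</code> — a sketch, assuming a shared blob produces byte-identical contour arrays in both passes:</p>

```python
import numpy as np

def contour_key(contour):
    # contours are small int32 arrays; their raw bytes make a hashable key
    return contour.tobytes()

# toy stand-ins for two cv2.findContours results
contours1 = [np.array([[0, 0], [0, 5], [5, 5]], dtype=np.int32)]
contours2 = [np.array([[0, 0], [0, 5], [5, 5]], dtype=np.int32),
             np.array([[9, 9], [9, 12], [12, 12]], dtype=np.int32)]

occupied = {contour_key(c) for c in contours1}
survivors = [c for c in contours2 if contour_key(c) not in occupied]
print(len(survivors))  # only the contour absent from set 1 remains
```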
<hr />
<p>Progress so far. Brute-force approach:</p>
<pre><code>occupied_mask = fill_countours(contours0) # numpy array of a mask of higher priority contours
contours, hierarchy = cv2.findContours(thresh2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
mask_of_a_layer = np.zeros(occupied_mask.shape)
for i in range(len(contours)):
tmp_mask = np.zeros(occupied_mask.shape)
cv2.drawContours(tmp_mask, contours, i, color=1, thickness=cv2.FILLED)
    if np.sum(tmp_mask) == np.sum(tmp_mask * (1 - occupied_mask)):
mask_of_a_layer = (mask_of_a_layer + tmp_mask).clip(0,1)
occupied_mask = (occupied_mask + mask_of_a_layer).clip(0,1)
</code></pre>
<p>However, this is very inefficient: takes 10 seconds rather than ms.</p>
<p><a href="https://i.sstatic.net/jTIAV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jTIAV.png" alt="enter image description here" /></a></p>
<p>On this picture we see a case of a partial overlap. The desired output should be</p>
<p><a href="https://i.sstatic.net/FzEvc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FzEvc.png" alt="enter image description here" /></a></p>
| <python><opencv><image-processing><computer-vision> | 2023-07-24 18:56:23 | 4 | 1,233 | sixtytrees |
76,757,307 | 9,937,874 | Pandas built in function to handle concatenated data | <p>I have the following data set.</p>
<pre><code>id_number type amount date
1 employer contributioncontribution $100$200 1/1/2023
2 employer contributioncontribution $100$200 2/1/2023
</code></pre>
<p>The data should look like this</p>
<pre><code>id_number type amount date
1 employer contribution $100 1/1/2023
1 contribution $200 1/1/2023
2 employer contribution $100 2/1/2023
2 contribution $200 2/1/2023
</code></pre>
<p>Are there any built in pandas features to deconvolute this data? I feel like the best approach is to just parse each row and create a new data frame from the text.</p>
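<p>There is no single built-in for this, but as a sketch, <code>Series.str.findall</code> plus <code>DataFrame.explode</code> (which accepts multiple columns since pandas 1.3) can split the concatenated cells without hand-parsing each row:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id_number": [1, 2],
    "type": ["employer contributioncontribution"] * 2,
    "amount": ["$100$200"] * 2,
    "date": ["1/1/2023", "2/1/2023"],
})

# split each concatenated cell into a list of its parts
df["type"] = df["type"].str.findall(r"(?:employer )?contribution")
df["amount"] = df["amount"].str.findall(r"\$\d+")

# explode both list columns in one call (pandas >= 1.3)
out = df.explode(["type", "amount"], ignore_index=True)
print(out)
```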
| <python><pandas><dataframe> | 2023-07-24 18:54:56 | 0 | 644 | magladde |
76,757,298 | 6,674,235 | Unable to renew lock on session-enabled Azuer Servicebus Queue | <p>I have checked every post on this site as well as the errors reported on the azure python SDK github page but I cannot figure this out. I am getting:</p>
<pre><code>SessionLockLostError: The lock on the session has expired. Callers should request the session again.
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "azure_pipeline.py", line 119, in __call__
await receiver.complete_message(msg)
File "/root/.pyenv/versions/3.10.9/envs/env/lib/python3.10/site-packages/azure/servicebus/aio/_servicebus_receiver_async.py", line 806, in complete_message
await self._settle_message_with_retry(message, MESSAGE_COMPLETE)
File "/root/.pyenv/versions/3.10.9/envs/env/lib/python3.10/site-packages/azure/servicebus/aio/_servicebus_receiver_async.py", line 440, in _settle_message_with_retry
self._check_live()
File "/root/.pyenv/versions/3.10.9/envs/env/lib/python3.10/site-packages/azure/servicebus/aio/_base_handler_async.py", line 237, in _check_live
raise SessionLockLostError(error=self._session.auto_renew_error)
azure.servicebus.exceptions.SessionLockLostError: The lock on the session has expired. Callers should request the session again.
</code></pre>
<p>My code is below. I have checked all the async samples and tried everything with the AutoLockRenewer() including passing it to the get_queue_receiver etc. I have also tried both</p>
<pre><code>received_msgs = await receiver.get_messages()
and
async for msg in receiver
</code></pre>
<p>I have also tried putting the auto_lock_renewer.register() code before and after the message loop and other places. I have also tried renewing the message instead of the session but that does not work for session-enabled queues.</p>
<p>Right now, the</p>
<blockquote>
<p>msg_body = await self.run(msg)</p>
</blockquote>
<p>is just calling time.sleep(120) and returning a string.</p>
<pre><code> async def __call__(self):
receiver = self.servicebus_client.get_queue_receiver(
queue_name=self.request_queue,
session_id=NEXT_AVAILABLE_SESSION,
receive_mode=ServiceBusReceiveMode.PEEK_LOCK,
)
async with receiver:
try:
async with AutoLockRenewer(
max_lock_renewal_duration=300,
on_lock_renew_failure=self.my_callback,
) as auto_lock_renewer:
auto_lock_renewer.register(receiver, receiver.session)
# received_msgs = await receiver.receive_messages()
async for msg in receiver:
logger.info(
f"Session locked until: {receiver.session.locked_until_utc}"
)
logger.info(f"Received msg {msg}")
msg_body = await self.run(msg)
sender = self.servicebus_client.get_queue_sender(
queue_name=self.reply_queue, session_id=msg.session_id
)
async with sender:
message_to_send = ServiceBusMessage(
msg_body, session_id=msg.session_id
)
await sender.send_messages(message_to_send)
logger.info("Message successfully sent!")
# complete the message so that the message is removed from the queue
await receiver.complete_message(msg)
except Exception as e:
logger.info("CLOSING RECEIVER")
await receiver.close()
tb = traceback.format_exc()
logger.error(f"Failed to process message with exception {e} \n {tb}")
</code></pre>
| <python><azureservicebus><azure-servicebus-queues> | 2023-07-24 18:53:15 | 1 | 654 | James Steele |
76,757,289 | 13,801,302 | How to set marker size/color in go.Scattergeo? | <p>I want to set the marker size or color according to a list of values. The markers should be shown as in the following scatter plot, for example:
<a href="https://i.sstatic.net/6B4Ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6B4Ds.png" alt="enter image description here" /></a></p>
<p>Another example is this:
<a href="https://i.sstatic.net/4ONB4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ONB4.png" alt="enter image description here" /></a></p>
<p>My code looks like this:</p>
<pre><code>fig = go.Figure(data=go.Scattergeo(
#lon = tmp[1],
#lat = tmp[0],
mode = 'markers',
lon = long,
lat = lat,
))
fig.update_layout(
geo_scope = 'europe',
width = 1200,
height = 800,
)
fig.update_geos(showsubunits=True)
fig.show()
</code></pre>
<p>My current result:
<a href="https://i.sstatic.net/l57ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l57ds.png" alt="enter image description here" /></a></p>
<p>What do I have to add to manipulate each point according to my list of values <em>counts</em>?</p>
| <python><plotly><figure> | 2023-07-24 18:52:27 | 0 | 621 | Christian01 |
76,757,241 | 11,388,321 | I'm trying to turn this python script into a web app with flask | <p>I am trying to turn my script into a web app. It uses WordPress and ChatGPT to generate articles based on keywords and what you type in the input boxes, and allows the user to instantly upload to drafts on their WordPress site.
I am running into an issue:</p>
<p><strong>updated</strong></p>
<pre><code># app.py
from flask import Flask, render_template, request, flash
from flask_wtf import FlaskForm
from wtforms import StringField, TextAreaField, SubmitField
from wtforms.validators import InputRequired
import openai
import requests
import random
app = Flask(__name__)
app.config['SECRET_KEY'] = 'your_secret_key'
# Set your OpenAI API key
openai.api_key = ""
class InputForm(FlaskForm):
api_key = StringField('OpenAI API Key', validators=[InputRequired()])
input_text = TextAreaField('Input Text', validators=[InputRequired()])
keywords = TextAreaField('Keywords (comma-separated)', validators=[InputRequired()])
wp_base_url = StringField('WordPress Base URL', validators=[InputRequired()])
wp_username = StringField('WordPress Username', validators=[InputRequired()])
wp_password = StringField('WordPress Password', validators=[InputRequired()])
submit = SubmitField('Submit')
# Array holding wordpress author IDs
authorList = ["1", "2"]
# You can call the "gpt-3.5-turbo" model
modelEngine = "gpt-3.5-turbo"
# WordPress information
wp_base_url = ""
wp_username = ""
wp_password = ""
# Post creator function
def create_post(inputTitleSent, outputText):
randomAuthor = random.choice(authorList)
post_status = "draft"
headers = {
"Content-Type": "application/x-www-form-urlencoded"
}
post = {
"title": inputTitleSent,
"content": outputText,
"status": post_status,
"author": randomAuthor,
        "categories": "6"
}
url = wp_base_url + "/wp-json/wp/v2/posts"
response = requests.post(url, data=post, headers=headers, auth=(wp_username, wp_password))
return response
# Post title creator function
def create_title(outputText):
response = openai.ChatCompletion.create(
model=modelEngine,
messages=[{"role": "user", "content": f"Write a title for this article:\n\n {outputText}"}],
n=1
)
create_title = response.choices[0].message.content
create_title = create_title.replace('"', '')
return create_title
@app.route('/', methods=['GET', 'POST'])
def index():
global wp_base_url, wp_username, wp_password # Move the global declaration here
form = InputForm()
if form.validate_on_submit():
# Get user input from the form
data = request.get_json()
api_key = data.get('api_key')
input_text = data.get('input_text')
keywords = data.get('keywords')
wp_base_url = data.get('wp_base_url')
wp_username = data.get('wp_username')
wp_password = data.get('wp_password')
try:
response = openai.ChatCompletion.create(
model=modelEngine,
messages=[{"role": "user", "content": "Testing API Key"}],
n=1
)
if 'choices' not in response or not response['choices']:
flash("Invalid API Key. Please check and try again.", "error")
return render_template('index.html', form=form, data=None)
# ... (rest of the code remains the same)
except requests.exceptions.RequestException as e:
flash("Error occurred while connecting to the OpenAI API. Please check your internet connection and try again.", "error")
return render_template('index.html', form=form, data=None)
except Exception as e:
flash("An unexpected error occurred. Please try again later.", "error")
return render_template('index.html', form=form, data=None)
return render_template('index.html', form=form, data=None)
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>The requests are getting sent to the server in debug mode, but the script does not run when pressing submit and the inputs don't render. How can I fix this?</p>
<p>index.html <strong>(updated)</strong>:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>OpenAI WordPress Web App</title>
</head>
<body>
<h1>Input your information:</h1>
<form id="inputForm" method="POST">
{{ form.csrf_token }}
{{ form.api_key.label }} {{ form.api_key(id='api_key', size=60) }}<br>
{{ form.input_text.label }} {{ form.input_text(id='input_text', rows=10, cols=60) }}<br>
{{ form.keywords.label }} {{ form.keywords(id='keywords', rows=3, cols=60) }}<br>
{{ form.wp_base_url.label }} {{ form.wp_base_url(id='wp_base_url', size=60) }}<br>
{{ form.wp_username.label }} {{ form.wp_username(id='wp_username', size=60) }}<br>
{{ form.wp_password.label }} {{ form.wp_password(id='wp_password', size=60) }}<br>
{{ form.submit() }}
</form>
{% if data %}
<h1>Generated Posts:</h1>
{% for post in data %}
<h2>{{ post.title }}</h2>
<p>{{ post.content }}</p>
<hr>
{% endfor %}
{% endif %}
<script>
document.getElementById('inputForm').addEventListener('submit', function (event) {
event.preventDefault(); // Prevent the form from submitting normally
var api_key = document.getElementById('api_key').value;
var input_text = document.getElementById('input_text').value;
var keywords = document.getElementById('keywords').value;
var wp_base_url = document.getElementById('wp_base_url').value;
var wp_username = document.getElementById('wp_username').value;
var wp_password = document.getElementById('wp_password').value;
var xhr = new XMLHttpRequest();
xhr.open('POST', '/');
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onreadystatechange = function () {
if (xhr.readyState === XMLHttpRequest.DONE) {
if (xhr.status === 200) {
// Request successful, update the page if necessary
var response = JSON.parse(xhr.responseText);
if (response.error) {
alert('Error: ' + response.error);
} else {
alert('Data processed successfully!');
// Optionally, you can update the page with the generated posts here
}
} else {
// Handle error here
alert('Error occurred. Please try again later.');
}
}
};
var data = JSON.stringify({
api_key: api_key,
input_text: input_text,
keywords: keywords,
wp_base_url: wp_base_url,
wp_username: wp_username,
wp_password: wp_password
});
xhr.send(data);
});
</script>
</body>
</html>
</code></pre>
| <python><flask><openai-api> | 2023-07-24 18:43:26 | 1 | 810 | overdeveloping |
76,757,217 | 12,300,981 | LMFIT vs. Scipy Why am I getting different results in minimize | <p>In short, I have data I'm trying to fit, and LMFIT and Scipy give 2 different solutions, with LMFIT being significantly worse.</p>
<p>Just some detail as to what is going on:</p>
<p>The "model" is a population weighted average. The "populations" are derived from 4 variables (the adjustable parameters). There are 2 independent "models" (n and h), but these are combined at the very end. The residual is a combination of the residuals of both models against their respective data.</p>
<p>Assume the math is correct. The question is then why they produce different solutions when given identical functions, variables, bounds, and methods.</p>
<p>Given this MVE</p>
<pre><code>import numpy as np
import scipy.optimize as so
from scipy.sparse.linalg import lsmr
from lmfit import minimize,Parameters
from lmfit.printfuncs import report_fit
data_1=[[117.417, 117.423, 117.438, 117.501], [124.16, 124.231, 124.089, 124.1], [115.632, 115.645, 115.828, 115.947], [118.314, 118.317, 118.287, 118.228], [108.407, 108.419, 108.396, 108.564]]
data_2=[[9.05, 9.044, 9.057, 9.079], [9.178, 9.167, 9.16, 9.176], [7.888, 7.893, 7.911, 7.895], [7.198, 7.202, 7.197, 7.213], [7.983, 7.976, 7.979, 8.02]]
def get_populations_lmfit(initial,io):
k,k1,x,y=initial['kvar'],initial['k1var'],initial['xvar'],initial['yvar']
kx,k1x=k*x,k1*x
ky,k1y=k*y,k1*y
kxy,k1xy=k*x*y,k1*x*y
partial_free_concentration_WT=(np.sqrt((k*k1)**2+(8*io*k*k1)+(8*io*k*k1**2))-(k*k1))/(4*(1+k1))
partial_closed_concentration_WT=(((4*io)+(4*io*k1))/(4*(1+k1)**2))-(partial_free_concentration_WT/(1+k1))
partial_open_concentration_WT=k1*partial_closed_concentration_WT
partial_free_concentration_L273A=(np.sqrt((kx*k1x)**2+(8*io*kx*k1x)+(8*io*kx*k1x**2))-(kx*k1x))/(4*(1+k1x))
partial_closed_concentration_L273A=(((4*io)+(4*io*k1x))/(4*(1+k1x)**2))-(partial_free_concentration_L273A/(1+k1x))
partial_open_concentration_L273A=k1x*partial_closed_concentration_L273A
partial_free_concentration_I272A=(np.sqrt((ky*k1y)**2+(8*io*ky*k1y)+(8*io*ky*k1y**2))-(ky*k1y))/(4*(1+k1y))
partial_closed_concentration_I272A=(((4*io)+(4*io*k1y))/(4*(1+k1y)**2))-(partial_free_concentration_I272A/(1+k1y))
partial_open_concentration_I272A=k1y*partial_closed_concentration_I272A
partial_free_concentration_ILAA=(np.sqrt((kxy*k1xy)**2+(8*io*kxy*k1xy)+(8*io*kxy*k1xy**2))-(kxy*k1xy))/(4*(1+k1xy))
partial_closed_concentration_ILAA=(((4*io)+(4*io*k1xy))/(4*(1+k1xy)**2))-(partial_free_concentration_ILAA/(1+k1xy))
partial_open_concentration_ILAA=k1xy*partial_closed_concentration_ILAA
local_chi2=0
for experimental_shifts_n,experimental_shifts_h in zip(data_1,data_2):
populations=np.array([[partial_free_concentration_WT,partial_open_concentration_WT,partial_closed_concentration_WT],[partial_free_concentration_L273A,partial_open_concentration_L273A,partial_closed_concentration_L273A],[partial_free_concentration_I272A,partial_open_concentration_I272A,partial_closed_concentration_I272A],[partial_free_concentration_ILAA,partial_open_concentration_ILAA,partial_closed_concentration_ILAA]])
experimental_shifts_n=(np.array([experimental_shifts_n])/10)*800
experimental_shifts_h=(np.array([experimental_shifts_h]))*800
least_squared_fit_n=lsmr(populations/io,experimental_shifts_n,maxiter=10)
least_squared_fit_h=lsmr(populations/io,experimental_shifts_h,maxiter=10)
local_chi2+=((least_squared_fit_n[3])**2+least_squared_fit_h[3]**2)
return local_chi2
def get_populations_scipy(initial,io):
k,k1,x,y=initial[0],initial[1],initial[2],initial[3]
kx,k1x=k*x,k1*x
ky,k1y=k*y,k1*y
kxy,k1xy=k*x*y,k1*x*y
partial_free_concentration_WT=(np.sqrt((k*k1)**2+(8*io*k*k1)+(8*io*k*k1**2))-(k*k1))/(4*(1+k1))
partial_closed_concentration_WT=(((4*io)+(4*io*k1))/(4*(1+k1)**2))-(partial_free_concentration_WT/(1+k1))
partial_open_concentration_WT=k1*partial_closed_concentration_WT
partial_free_concentration_L273A=(np.sqrt((kx*k1x)**2+(8*io*kx*k1x)+(8*io*kx*k1x**2))-(kx*k1x))/(4*(1+k1x))
partial_closed_concentration_L273A=(((4*io)+(4*io*k1x))/(4*(1+k1x)**2))-(partial_free_concentration_L273A/(1+k1x))
partial_open_concentration_L273A=k1x*partial_closed_concentration_L273A
partial_free_concentration_I272A=(np.sqrt((ky*k1y)**2+(8*io*ky*k1y)+(8*io*ky*k1y**2))-(ky*k1y))/(4*(1+k1y))
partial_closed_concentration_I272A=(((4*io)+(4*io*k1y))/(4*(1+k1y)**2))-(partial_free_concentration_I272A/(1+k1y))
partial_open_concentration_I272A=k1y*partial_closed_concentration_I272A
partial_free_concentration_ILAA=(np.sqrt((kxy*k1xy)**2+(8*io*kxy*k1xy)+(8*io*kxy*k1xy**2))-(kxy*k1xy))/(4*(1+k1xy))
partial_closed_concentration_ILAA=(((4*io)+(4*io*k1xy))/(4*(1+k1xy)**2))-(partial_free_concentration_ILAA/(1+k1xy))
partial_open_concentration_ILAA=k1xy*partial_closed_concentration_ILAA
local_chi2=0
for experimental_shifts_n,experimental_shifts_h in zip(data_1,data_2):
populations=np.array([[partial_free_concentration_WT,partial_open_concentration_WT,partial_closed_concentration_WT],[partial_free_concentration_L273A,partial_open_concentration_L273A,partial_closed_concentration_L273A],[partial_free_concentration_I272A,partial_open_concentration_I272A,partial_closed_concentration_I272A],[partial_free_concentration_ILAA,partial_open_concentration_ILAA,partial_closed_concentration_ILAA]])
experimental_shifts_n=(np.array([experimental_shifts_n])/10)*800
experimental_shifts_h=(np.array([experimental_shifts_h]))*800
least_squared_fit_n=lsmr(populations/io,experimental_shifts_n,maxiter=10)
least_squared_fit_h=lsmr(populations/io,experimental_shifts_h,maxiter=10)
local_chi2+=((least_squared_fit_n[3])**2+least_squared_fit_h[3]**2)
return local_chi2
io=270000
params=Parameters()
params.add('kvar',value=500,min=0,max=np.inf)
params.add('k1var',value=0.02,min=0,max=np.inf)
params.add('xvar',value=7,min=0,max=np.inf)
params.add('yvar',value=30,min=0,max=np.inf)
lmfit_solution=minimize(get_populations_lmfit,params,args=(io,),method='nelder',max_nfev=1000)
scipy_solution=so.minimize(get_populations_scipy,args=(io,), x0=np.array([500,0.02,7,30]),bounds=((0,np.inf),)*4,method='Nelder-Mead',options={'maxiter':1000})
print(report_fit(lmfit_solution))
print(scipy_solution)
</code></pre>
<p>You will note the solutions from scipy vs. lmfit are completely different. Not only that, the chi2 for scipy is significantly better than lmfit's. I don't quite understand why, however. The setup is practically identical; the conditions, bounds, and methods are the same. So why do they give different results? I.e., why is LMFIT's solution so much worse?</p>
<p>Now I know you can give LMFIT an array of residuals instead of the sum like I have, and while this does improve the solutions a bit, it's still worse than Scipy, with a significantly worse chi2.</p>
| <python><numpy><scipy><minimize><lmfit> | 2023-07-24 18:40:04 | 1 | 623 | samman |
76,757,194 | 395,857 | Why do I get the error "Unrecognized request argument supplied: functions" when using `functions` when calling Azure OpenAI GPT? | <p>I'm trying to use <code>functions</code> when calling Azure OpenAI GPT, as documented in <a href="https://platform.openai.com/docs/api-reference/chat/create#chat/create-functions" rel="noreferrer">https://platform.openai.com/docs/api-reference/chat/create#chat/create-functions</a></p>
<p>I use:</p>
<pre><code>import openai
openai.api_type = "azure"
openai.api_base = "https://XXXXXXXX.openai.azure.com/"
openai.api_version = "2023-06-01-preview"
openai.api_key = os.getenv("OPENAI_API_KEY")
response = openai.ChatCompletion.create(
engine="gpt-35-turbo-XXX",
    model="gpt-35-turbo-0613-XXXX",
messages=messages,
functions=functions,
function_call="auto",
)
</code></pre>
<p>but I get the error:</p>
<pre><code>openai.error.InvalidRequestError:
Unrecognized request argument supplied: functions
</code></pre>
<p>Why?</p>
<hr />
<p>Data to run the example code above (<code>messages</code> and <code>functions</code> need to be defined):</p>
<pre><code>messages = [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}]
functions = [
{
"name": "fetch_pages",
"description": "Fetch the content of specified pages from the document.",
"parameters": {
"type": "object",
"properties": {
"pages": {
"type": "array",
"items": {
"type": "number"
},
"description": "The list of pages to fetch."
}
},
"required": ["pages"]
}
},
{
"name": "fetch_section",
"description": "Fetch the content of a specified section.",
"parameters": {
"type": "object",
"properties": {
"section_title": {
"type": "string",
"description": "The title of the section to fetch."
}
},
"required": ["section_title"]
}
},
{
"name": "search",
"description": "Search the document for a string query.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search term."
}
},
"required": ["query"]
}
}
]
</code></pre>
| <python><azure><gpt-3><azure-openai><large-language-model> | 2023-07-24 18:35:33 | 2 | 84,585 | Franck Dernoncourt |
76,757,147 | 4,399,016 | Converting a JSON Response into pandas dataframe | <p>I have this code that returns a json response.</p>
<pre><code>import requests
url = "https://usda.library.cornell.edu/api/v1/publication/findAll"
payload = {}
headers = {
'Authorization': 'Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOjE0NDc1fQ.YpppSpjBnGfHv8DBEiWCrmv-KgtF3MLzniHF89hW9L8'
}
response = requests.request("GET", url, headers=headers, data=payload)
</code></pre>
<p>I am trying to use json_normalize as shown <a href="https://stackoverflow.com/a/44802232/4399016">here</a> to get a pandas data frame.</p>
<pre><code>data = response.text
df = json_normalize(data)
print (df)
</code></pre>
<p>I am getting a <code>NotImplementedError</code>.</p>
<p>What am I doing wrong?</p>
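<p>A sketch of the parsing step, using a stand-in payload since the real API's response shape may differ — <code>json_normalize</code> needs parsed Python objects (dicts/lists), not the raw string that <code>response.text</code> returns:</p>

```python
import json
import pandas as pd

# stand-in for response.text; response.json() would do this parsing for you
raw = '[{"id": 1, "title": "Crop Production"}, {"id": 2, "title": "Hogs"}]'
records = json.loads(raw)

df = pd.json_normalize(records)
print(df)
```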
| <python><json><pandas><python-requests> | 2023-07-24 18:28:21 | 1 | 680 | prashanth manohar |
76,757,141 | 1,056,563 | Referencing python class variables in other class variables | <p>Consider these two snippets, both of which include class variable that references the prior class variable:</p>
<pre><code>class A:
X = 1
Y = X+1
</code></pre>
<p>The first class parses and runs:</p>
<pre><code>In [8]: a = A()
In [9]: a.Y
Out[9]: 2
</code></pre>
<p>Now consider a second class:</p>
<pre><code>class TestDataSource:
import os
CONFIG_DIR = os.path.join(os.getcwd(),"configs")
FPATHS = [f"{CONFIG_DIR}/schema{x}.yml" for x in range(1, 5)]
</code></pre>
<p>The second does not parse:</p>
<pre><code>NameError Traceback (most recent call last)
Cell In[12], line 1
----> 1 class TestDataSource:
2 import os
4 CONFIG_DIR = os.path.join(os.getcwd(),"configs")
Cell In[12], line 5, in TestDataSource()
2 import os
4 CONFIG_DIR = os.path.join(os.getcwd(),"configs")
----> 5 FPATHS = [f"{CONFIG_DIR}/schema{x}.yml" for x in range(1, 5)]
Cell In[12], line 5, in <listcomp>(.0)
2 import os
4 CONFIG_DIR = os.path.join(os.getcwd(),"configs")
----> 5 FPATHS = [f"{CONFIG_DIR}/schema{x}.yml" for x in range(1, 5)]
NameError: name 'CONFIG_DIR' is not defined
</code></pre>
<p>I have tried variations on it including including the class Name :</p>
<pre><code>FPATHS = [TestDataSource.CONFIG_DIR + "/schema{x}.yml" for x in range(1, 5)]
</code></pre>
<p>This gives:</p>
<pre><code>NameError: name 'TestDataSource' is not defined
</code></pre>
<p>So why and how does the first <code>class A</code> work while the second fails under every variation I can come up with?</p>
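<p>Two sketches of workarounds (assuming the goal is simply a list of config paths): a plain <code>for</code> loop executes directly in the class namespace, unlike a comprehension, and a module-level helper can receive the class attribute as an argument:</p>

```python
import os

# 1) a plain for loop runs in the class namespace, so CONFIG_DIR is visible
class TestDataSourceLoop:
    CONFIG_DIR = os.path.join(os.getcwd(), "configs")
    FPATHS = []
    for x in range(1, 5):
        FPATHS.append(os.path.join(CONFIG_DIR, f"schema{x}.yml"))
    del x  # avoid leaving the loop variable behind as a class attribute

# 2) a module-level helper takes the value as an argument,
#    sidestepping the comprehension's separate scope
def _paths(base):
    return [os.path.join(base, f"schema{x}.yml") for x in range(1, 5)]

class TestDataSourceHelper:
    CONFIG_DIR = os.path.join(os.getcwd(), "configs")
    FPATHS = _paths(CONFIG_DIR)

print(TestDataSourceLoop.FPATHS == TestDataSourceHelper.FPATHS)
```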
| <python> | 2023-07-24 18:27:18 | 0 | 63,891 | WestCoastProjects |
76,756,981 | 11,197,957 | An elegant means of installing APT packages within a Python Setuptools framework | <p>I have created a <strong>custom PIP package</strong>, with the assistance of <strong>Setuptools</strong>. At one point in the code, my package does something like this:</p>
<pre class="lang-py prettyprint-override"><code>subprocess.run(["srm", path_to_file], check=True)
</code></pre>
<p>The difficulty with this is that, while I can assume a Linux OS in my case, <code>srm</code> is not a built-in command. You will need to run something like <code>sudo apt install secure-delete</code> at least once. Therefore, I would like that install command to run whenever I call <code>pip install my_package</code>.</p>
<p>This is easy in principle but difficult in practice. <a href="https://stackoverflow.com/questions/20288711/post-install-script-with-python-setuptools">This question with its answers</a> lays out how to run any script as part of a Setuptools installation. In practice, this is not a viable strategy for running <code>sudo</code> commands because:</p>
<ol>
<li>There is no way to communicate with the user via print statements within the Setuptools installation process.</li>
<li>As a result of (1), the user just gets a password prompt - without any context or explanation.</li>
</ol>
<p>A few potential solutions:</p>
<ul>
<li>Force the Setuptools install process to display print statements - but how to do this?</li>
<li>Add a <code>my-package-install-specials</code> script to the package - but how to communicate to the user that running this script is part of the installation process?</li>
<li>Check that <code>secure-delete</code> is installed every time I invoke <code>srm</code> - but is this not bad practice?</li>
</ul>
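<p>For the last option, a sketch of a runtime guard (the message text is illustrative) that at least turns a missing dependency into an actionable error:</p>

```python
import shutil
import subprocess

def secure_remove(path_to_file):
    # check for the binary up front so the failure explains what to do
    if shutil.which("srm") is None:
        raise RuntimeError(
            "srm is not installed; run: sudo apt install secure-delete"
        )
    subprocess.run(["srm", path_to_file], check=True)
```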
| <python><setuptools><apt> | 2023-07-24 18:00:14 | 0 | 734 | Tom Hosker |
76,756,959 | 4,008,056 | Check for loading spinner or content in Python Playwright | <p>I have a React app that fetches data from an API, and I'm building out tests with Python Playwright.</p>
<p>My problem is that in situations where the UI is the result of an API call, there are three possible states:</p>
<ul>
<li>Loading</li>
<li>Error message</li>
<li>Success (unknown number of elements)</li>
</ul>
<p>I'm trying to find an elegant way to check for the various states.</p>
<p><a href="https://playwright.dev/python/docs/api/class-frame#frame-wait-for-load-state" rel="nofollow noreferrer">From the docs</a>, apparently it's not recommended to wait until the network is idle, so I need a way to check for when the loading spinner has been cleared and either check for the error message or start iterating over the loaded content locators.</p>
<p>This seems like a common use case, but I can't find much in the way of recommendations.</p>
| <python><playwright> | 2023-07-24 17:56:44 | 2 | 13,465 | Toby |
76,756,706 | 104,950 | Streamlit session_state and execution model | <pre><code>import streamlit as st
st.title('Counter Example')
if 'count' not in st.session_state:
print(f'initializing count!')
st.session_state.count = 0
increment = st.button('Increment')
if increment:
st.session_state.count += 1
st.write('Count = ', st.session_state.count)
</code></pre>
<p>In the above code, I would expect to see the print statement 'initializing count!' only once. But I see it many times in a single execution run with only one active tab. Can someone explain why?</p>
| <python><streamlit> | 2023-07-24 17:14:01 | 0 | 38,691 | Amir Afghani |
76,756,402 | 4,246,716 | converting a dictionary to a pandas dataframe | <p>I have a dictionary as follows:</p>
<pre><code>{'result': '{"maid":{"0":"0365206d-e97d-4ab0-aa63-64091e66a1a4","1":"0955bbcc-3a83-4c64-8170-f5deb799a5ba","2":"0570ba29-1ee8-4bc6-a12c-c2b706d805c8"},"category":{"0":"EventHall","1":"SuperMarket","2":"Bank"},"geo_behavior":{"0":null,"1":null,"2":null},"polygonid":{"0":2332,"1":2332,"2":2332},"places":{"0":"Shri Sai Dj","1":"D Mart","2":"Bank of Baroda"},"age":{"0":38.0,"1":18.0,"2":37.0},"gender":{"0":0.0,"1":0.0,"2":0.0},"mobile":{"0":null,"1":null,"2":null},"make":{"0":"oppo","1":"vivo","2":"oneplus"},"deviceprice":{"0":160.0,"1":190.0,"2":392.0},"weight":{"0":2.5684166749,"1":2.0,"2":2.0},"__index_level_0__":{"0":0,"1":1,"2":2}}'}
</code></pre>
<p>I am trying to parse this dictionary as a dataframe. I tried the following:</p>
<pre><code>l=k.get("result")
m=json.loads(l)
maid=m.get('maid')
cate=m.get('category')
geobh=m.get('geo_behavior')
pid=m.get('polygonid')
places=m.get('places')
age=m.get('age')
gender=m.get('gender')
mobile=m.get('mobile')
make=m.get('make')
deviceprice=m.get('deviceprice')
weight=m.get('weight')
</code></pre>
<p>This gives me individual dictionaries to merge into a dataframe. I also tried</p>
<pre><code>pd.json_normalize
pd.DataFrame.from_records(
</code></pre>
<p>but not helping. I am wondering if there is a simpler method to convert the <code>result</code> to a dictionary.</p>
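<p>Since <code>result</code> is already a JSON string in pandas' column-oriented (<code>{"col": {"row": value}}</code>) layout, a hedged shortcut is to parse it once and hand the whole dict to the <code>DataFrame</code> constructor (sample trimmed from the question's data):</p>

```python
import json

import pandas as pd

# Trimmed stand-in for the question's dictionary: the "result" value is
# column-oriented JSON, which the DataFrame constructor accepts directly.
k = {'result': '{"maid":{"0":"a-1","1":"b-2"},'
               '"category":{"0":"EventHall","1":"SuperMarket"},'
               '"age":{"0":38.0,"1":18.0}}'}

df = pd.DataFrame(json.loads(k['result']))
```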
| <python><pandas> | 2023-07-24 16:27:38 | 1 | 3,045 | Apricot |
76,756,325 | 6,494,707 | ModuleNotFoundError: No module named 'cv_bridge.boost.cv_bridge_boost' | <p>I have already installed the <code>ros noetic</code> distribution and run <code>sudo apt-get install ros-noetic-cv-bridge</code> successfully. However, when I try to run the following code in PyCharm, I get the following error:</p>
<pre><code>import os
import argparse
import pdb
import cv2
import rosbag
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
bag_file = './bag_files/20230707_152832.bag'
output_dir= './frames/rgb_bag_output'
image_topic = '/device_0/sensor_1/Color_0/image/data'
# image_topic ='sensor_msgs/Image'
bag = rosbag.Bag(bag_file, "r")
bridge = CvBridge()
#gg =bag.read_messages(topics = '/device_0/sensor_1/Color_0/image/data')
# bag.get_message_count()
count = 0
for topic, msg, t in bag.read_messages(topics=image_topic):
cv_img = bridge.imgmsg_to_cv2(msg, desired_encoding= "rgb8")#"passthrough")
cv2.imwrite(os.path.join(output_dir, "frame%06i.png" % count), cv_img)
print("Wrote image %i" % count)
count += 1
bag.close()
</code></pre>
<p>the error:</p>
<pre><code>Traceback (most recent call last):
File "/home/es/PycharmProjects/2-Process-RGBD/extract_bag_frame.py", line 32, in <module>
cv_img = bridge.imgmsg_to_cv2(msg, desired_encoding= "rgb8")#"passthrough")
File "/home/es/anaconda3/envs/hsi-env/lib/python3.8/site-packages/cv_bridge/core.py", line 163, in imgmsg_to_cv2
dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
File "/home/es/anaconda3/envs/hsi-env/lib/python3.8/site-packages/cv_bridge/core.py", line 99, in encoding_to_dtype_with_channels
return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))
File "/home/es/anaconda3/envs/hsi-env/lib/python3.8/site-packages/cv_bridge/core.py", line 91, in encoding_to_cvtype2
from cv_bridge.boost.cv_bridge_boost import getCvType
ModuleNotFoundError: No module named 'cv_bridge.boost.cv_bridge_boost'
</code></pre>
<p>How can I resolve this issue?</p>
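<p>This error usually means the compiled <code>cv_bridge_boost</code> extension (built for the system <code>/opt/ros</code> Python) isn't importable from the Anaconda interpreter, so the usual fixes are running the script with the system Python after sourcing the ROS setup, or rebuilding <code>cv_bridge</code> against the conda Python. As a workaround for simple encodings such as <code>rgb8</code>, the conversion can be done without <code>cv_bridge</code> at all — a hedged sketch (the fake message class only stands in for <code>sensor_msgs/Image</code>):</p>

```python
import numpy as np

def imgmsg_to_numpy(msg):
    """Hypothetical replacement for CvBridge.imgmsg_to_cv2 for 'rgb8' images:
    reinterprets the message's raw byte buffer using its height and width."""
    img = np.frombuffer(msg.data, dtype=np.uint8)
    return img.reshape((msg.height, msg.width, 3))

# Minimal stand-in for a sensor_msgs/Image message to exercise the helper.
class FakeMsg:
    height, width = 2, 2
    data = bytes(range(12))

frame = imgmsg_to_numpy(FakeMsg())
```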
| <python><ros><robotics><ros2><rosbag> | 2023-07-24 16:17:37 | 0 | 2,236 | S.EB |
76,756,249 | 12,657,753 | How to integrate stable_baselines3 with dagshub and MLflow? | <p>I am trying to integrate stable_baselines3 with DagsHub and MLflow. I am new to MLOps.</p>
<p>Here is a sample code that is easy to run:</p>
<pre><code>import mlflow
import gym
from gym import spaces
import numpy as np
from stable_baselines3 import PPO
import os
os.environ['MLFLOW_TRACKING_USERNAME'] = "correct_dagshub_username"
os.environ['MLFLOW_TRACKING_PASSWORD'] = "correct_dagshub_token"
os.environ['MLFLOW_TRACKING_URI'] = "correct_URL"
# Create a simple custom gym environment
class SimpleEnv(gym.Env):
def __init__(self):
super(SimpleEnv, self).__init__()
self.action_space = spaces.Discrete(3)
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(4,))
def step(self, action):
return np.array([0, 0, 0, 0]), 0, False, {}
def reset(self):
return np.array([0, 0, 0, 0])
# Create and train the model
env = SimpleEnv()
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1000)
# Save the model using MLflow
mlflow.log_artifact("model.zip")
# Load the model from MLflow using the captured run_id
run_id = mlflow.active_run().info.run_id
loaded_model = mlflow.pyfunc.load_model(f"runs:/{run_id}/model")
</code></pre>
<p>The problem is that I always get this error:</p>
<pre><code>---------------------------------------------------------------------------
MlflowException Traceback (most recent call last)
Cell In[13], line 11
6 # Now the model is saved to MLflow with the corresponding run_id
7
8 # Step 5: Load the model from MLflow
9 run_id = mlflow.active_run().info.run_id
---> 11 loaded_model = mlflow.pytorch.load_model(f"runs:/{run_id}/model")
File ~\anaconda3\envs\metatrader\lib\site-packages\mlflow\pytorch\__init__.py:698, in load_model(model_uri, dst_path, **kwargs)
637 """
638 Load a PyTorch model from a local file or a run.
639
(...)
694 predict X: 30.0, y_pred: 60.48
695 """
696 import torch
--> 698 local_model_path = _download_artifact_from_uri(artifact_uri=model_uri, output_path=dst_path)
699 pytorch_conf = _get_flavor_configuration(model_path=local_model_path, flavor_name=FLAVOR_NAME)
700 _add_code_from_conf_to_system_path(local_model_path, pytorch_conf)
File ~\anaconda3\envs\metatrader\lib\site-packages\mlflow\tracking\artifact_utils.py:100, in _download_artifact_from_uri(artifact_uri, output_path)
94 """
95 :param artifact_uri: The *absolute* URI of the artifact to download.
96 :param output_path: The local filesystem path to which to download the artifact. If unspecified,
97 a local output path will be created.
98 """
99 root_uri, artifact_path = _get_root_uri_and_artifact_path(artifact_uri)
--> 100 return get_artifact_repository(artifact_uri=root_uri).download_artifacts(
101 artifact_path=artifact_path, dst_path=output_path
102 )
File ~\anaconda3\envs\metatrader\lib\site-packages\mlflow\store\artifact\runs_artifact_repo.py:125, in RunsArtifactRepository.download_artifacts(self, artifact_path, dst_path)
110 def download_artifacts(self, artifact_path, dst_path=None):
111 """
112 Download an artifact file or directory to a local directory if applicable, and return a
113 local path for it.
(...)
123 :return: Absolute path of the local filesystem location containing the desired artifacts.
124 """
--> 125 return self.repo.download_artifacts(artifact_path, dst_path)
File ~\anaconda3\envs\metatrader\lib\site-packages\mlflow\store\artifact\artifact_repo.py:200, in ArtifactRepository.download_artifacts(self, artifact_path, dst_path)
197 failed_downloads[path] = repr(e)
199 if failed_downloads:
--> 200 raise MlflowException(
201 message=(
202 "The following failures occurred while downloading one or more"
203 f" artifacts from {self.artifact_uri}: {failed_downloads}"
204 )
205 )
207 return os.path.join(dst_path, artifact_path)
MlflowException: The following failures occurred while downloading one or more artifacts from URL/artifacts: {'model': 'MlflowException("API request to some api', port=443): Max retries exceeded with url: some_url (Caused by ResponseError(\'too many 500 error responses\'))")'}
</code></pre>
<p>Stable_baselines3 saves the model as a zip file. I can see the artifact in MLflow, but whatever I do, I cannot load the model from MLflow.
I also tried it with</p>
<pre><code>loaded_model = mlflow.pytorch.load_model(model_uri)
</code></pre>
<p>Any help would be greatly appreciated</p>
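<p>One thing worth checking, sketched below with a dependency-free stand-in for the artifact store (the helper names are hypothetical, not the MLflow API): the snippet logs <code>model.zip</code> at the run root but loads <code>runs:/&lt;id&gt;/model</code>, and a mismatch between the logged artifact path and the requested one is a common cause of these download failures. Note also that <code>mlflow.log_artifact("model.zip")</code> needs the file to exist on disk first, e.g. after <code>model.save("model.zip")</code>.</p>

```python
import os
import tempfile
import zipfile

# Hypothetical in-memory artifact store standing in for the MLflow server:
# the path used to log an artifact must match the path used to fetch it.
store = {}

def log_artifact(local_path, artifact_path):
    with open(local_path, 'rb') as f:
        store[artifact_path + '/' + os.path.basename(local_path)] = f.read()

def download_artifact(artifact_path, dst_dir):
    data = store[artifact_path]  # a mismatched path fails here, like the 500s above
    out = os.path.join(dst_dir, os.path.basename(artifact_path))
    with open(out, 'wb') as f:
        f.write(data)
    return out

src_dir = tempfile.mkdtemp()
model_zip = os.path.join(src_dir, 'model.zip')
with zipfile.ZipFile(model_zip, 'w') as z:
    z.writestr('policy.pth', 'weights')   # stand-in for the saved SB3 model

log_artifact(model_zip, 'model')                           # stored as model/model.zip
restored = download_artifact('model/model.zip', tempfile.mkdtemp())
```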
| <python><mlflow><stable-baselines><mlops><dagshub> | 2023-07-24 16:06:45 | 1 | 663 | TheGainadl |
76,756,132 | 1,788,318 | Python client library for Language Server Protocol (LSP) | <p>I need to interact with a language server (Eclipse JDT LS) implementing the Language Server Protocol (LSP), and I need it for a Python project that will be used to analyze different programming languages, starting from Java.</p>
<p>I've been looking around for a client library in Python that would fit my purpose, any suggestions?</p>
<p>If it was very easy I wouldn't mind writing one, but I feel the world doesn't need yet another library if a good one exists already.</p>
| <python><language-server-protocol> | 2023-07-24 15:48:37 | 2 | 849 | Federico Bonelli |
76,756,118 | 8,458,083 | How to build a pip python package from the github repository with nix | <p>I gave the right repo and version,
but according to the error, the source can't be downloaded:</p>
<pre><code>{
description = "virtual environment with python and streamlit";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
python=pkgs.python311;
xTuring = pkgs.python3Packages.buildPythonPackage rec {
name = "xTuring";
version = "0.16";
src = pkgs.fetchFromGitHub {
owner = "stochasticai";
repo = "${name}";
rev = "${version}";
#sha256 = "1ibrwal80z27c2mh9hx85idmzilx6cpcmgc15z3lyz57bz0krigb";
};
};
f = ps: with ps;[
ipython
matplotlib
pandas
];
pip_python_packages= python.withPackages(f);
myDevTools = [
pip_python_packages
pkgs.streamlit
xTuring
];
in rec {
devShells.default = pkgs.mkShell {
buildInputs = myDevTools;
};
});
}
</code></pre>
<blockquote>
<p>warning: found empty hash, assuming
'sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=' error: builder
for '/nix/store/lnpnyfdq132wl3g82h66v8qldg2mnbw0-source.drv' failed
with exit code 1;
last 8 log lines:
>
> trying <a href="https://github.com/stochasticai/xTuring/archive/0.16.tar.gz" rel="nofollow noreferrer">https://github.com/stochasticai/xTuring/archive/0.16.tar.gz</a>
> % Total % Received % Xferd Average Speed Time Time Time Current
> Dload Upload Total Spent Left Speed
> 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
> 0 14 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
> curl: (22) The requested URL returned error: 404
> error: cannot download source from any mirror
For full logs, run 'nix log /nix/store/lnpnyfdq132wl3g82h66v8qldg2mnbw0-source.drv'. error: 1
dependencies of derivation
'/nix/store/p90maxb3jvyry1m57zypil2ybq1ssbvx-python3.10-xTuring.drv'
failed to build error: 1 dependencies of derivation
'/nix/store/3ad8nfn68zljv1rnrzh2665s7yz7fg0p-nix-shell-env.drv' failed
to build</p>
</blockquote>
<p>According to the error message, nix tried to download <a href="https://github.com/stochasticai/xTuring/archive/0.16.tar.gz" rel="nofollow noreferrer">https://github.com/stochasticai/xTuring/archive/0.16.tar.gz</a>.</p>
<p>Why?</p>
| <python><nix> | 2023-07-24 15:46:58 | 1 | 2,017 | Pierre-olivier Gendraud |
76,756,039 | 14,222,845 | How to color each individual excel cell within specified columns in a pandas data frame? | <p>I have a Pandas data frame with 4 columns. How would I color <strong>each</strong> cell of some specified columns based on some preset criteria before outputting the data frame to an excel file?</p>
<pre><code># Example Data frame
df = pd.DataFrame({'A':[1,15,10,47,35],
'B':["Mac","Mac","Mac","Mac","Mac"],
'C':["Dog","Dog","Cat","Dog","Tiger"],
'D':["CDN", "USD", "CDN", "Pe", "Dr"]
})
</code></pre>
<p>I want to color each element in columns 'B', 'C', 'D' based on the relative frequency of each respective element within the column. For example, the relative frequency of "CDN" in the 'D' column is 2/5 = 0.4.</p>
<p>These are my criteria for the color based on the relative frequency:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Relative frequency</th>
<th>Color</th>
</tr>
</thead>
<tbody>
<tr>
<td>Greater than or equal to 0.90</td>
<td>Green</td>
</tr>
<tr>
<td>Less than 0.90 and greater than or equal to 0.30</td>
<td>Yellow</td>
</tr>
<tr>
<td>Less than 0.30</td>
<td>Red</td>
</tr>
</tbody>
</table>
</div>
<p>Since the relative frequency of "CDN" within the 'D' column is 0.4, then, that cell would be assigned a background color of Yellow.</p>
<p>I know how to find the relative frequency of each element within a column and how to develop the conditional for color based on the relative frequency. What I don't know is how to assign a color to each individual cell at a time. I looked at <a href="https://stackoverflow.com/questions/39299509/coloring-cells-in-excel-with-pandas">coloring cells in excel with pandas</a> but the problem is that it assigns the background-color to the entire data frame, not a single cell.</p>
<p>This is what the output excel file from the example should look like:</p>
<p><a href="https://i.sstatic.net/dThr5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dThr5.png" alt="enter image description here" /></a></p>
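<p>One hedged approach (the function name is illustrative) is to compute the per-column relative frequencies once and let <code>Styler.apply</code> map every cell to its own style string, which colors cells individually rather than the whole frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 15, 10, 47, 35],
                   'B': ['Mac'] * 5,
                   'C': ['Dog', 'Dog', 'Cat', 'Dog', 'Tiger'],
                   'D': ['CDN', 'USD', 'CDN', 'Pe', 'Dr']})

def freq_color(col):
    # Style one column: each cell's fill depends on the relative frequency
    # of its value within that column.
    freq = col.value_counts(normalize=True)
    def pick(value):
        f = freq[value]
        if f >= 0.90:
            return 'background-color: green'
        if f >= 0.30:
            return 'background-color: yellow'
        return 'background-color: red'
    return col.map(pick)

colors = freq_color(df['D'])
# With jinja2/openpyxl installed, the same function drives the export:
# df.style.apply(freq_color, subset=['B', 'C', 'D']).to_excel('out.xlsx')
```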
| <python><pandas><excel><colors> | 2023-07-24 15:39:03 | 2 | 330 | Diamoniner12345 |
76,756,024 | 13,978,463 | Search for the lower value in a repetitive pattern in column | <p>I defined a function in Python that gets the lower value after comparing float numbers in a data frame with a particular configuration.
Here is my data frame:</p>
<pre><code>xzjSqz_01292 3.2.1.133/0.0001
xzjSqz_01546 3.2.1.52/0.0840
xzjSqz_01559 3.2.1.52/0.0195,2.4.1.25/0.0087
WP_250821872.1 1.2.1.3/0.0032,3.2.1.52/0.0021,1.2.1.28/0.0017
WP_250822533.1 3.2.1.52/0.1338
</code></pre>
<p>As you can see, some rows in the second column have this pattern (which is okay):</p>
<pre><code>3.2.1.133/0.0001
</code></pre>
<p>But others have this one (which is not okay):</p>
<pre><code>3.2.1.52/0.0195,2.4.1.25/0.0087
or
1.2.1.3/0.0032,3.2.1.52/0.0021,1.2.1.28/0.0017
</code></pre>
<p>My goal is to check all rows that are not okay, and compare the values after the "/". For example,</p>
<pre><code>3.2.1.52/0.0195,2.4.1.25/0.0087
</code></pre>
<p>I need to compare the values and select the lower one:</p>
<pre><code>0.0195 versus 0.0087;
the lower one is 0.0087
</code></pre>
<p>So, after that, I need to put back that value in the column, but with their respective "code", in this case, is:</p>
<pre><code>2.4.1.25
</code></pre>
<p>So the updated data frame:</p>
<pre><code>xzjSqz_01292 3.2.1.133/0.0001
xzjSqz_01546 3.2.1.52/0.0840
xzjSqz_01559 2.4.1.25/0.0087
WP_250821872.1 1.2.1.28/0.0017
WP_250822533.1 3.2.1.52/0.1338
</code></pre>
<p>My current code first takes all the rows where the pattern <code>r'(\d+\.\d+\.\d+\.\d+)/(\d+\.\d+)'</code> appears more than once, and then applies a function to get the lower value.
My code:</p>
<pre><code>df_input = pd.read_csv(input_file_1, header=None, sep="\t")
# Regular expression to match the pattern you want to exclude
exclude_pattern = r'(\d+\.\d+\.\d+\.\d+)/(\d+\.\d+)'
# Specify the column where you want to check if the pattern is more than one time
mask = df_input[1].str.count(exclude_pattern) > 1
# Create a new data frame with the filtered data
result_df = df_input[mask]
# print(result_df)
def get_lower_value(row):
values = re.findall(r'/(\d+\.\d+)', row[1])
if len(values) >= 2:
lower_value = min(float(value) for value in values)
return f"{row[0]} {values[values.index(str(lower_value)) - 1]}/{lower_value}"
return None
df_input[1] = df_input.apply(get_lower_value, axis=1)
</code></pre>
<p>However, I got an error:</p>
<pre><code>return f"{row[0]} {values[values.index(str(lower_value)) - 1]}/{lower_value}"
ValueError: '0.009' is not in list
</code></pre>
<p>The error is strange because all the floats have four digits after the decimal point, and my real data frame contains many values like 0.009X, where X is any digit. However, there is no plain 0.009 in my actual data frame, as the error claims.</p>
<p>Any idea about what is going on?
Is there any unexpected function behavior?
Is there a better way to reach my goal?</p>
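<p>The <code>ValueError</code> comes from the <code>str(float)</code> round-trip: <code>str(0.0090)</code> is <code>'0.009'</code>, which no longer matches the four-decimal substring, so <code>values.index(...)</code> fails. A hedged sketch (function name illustrative) that avoids the lookup entirely by keeping the code/value pairs together and comparing the original substrings:</p>

```python
import re

def keep_lowest(cell):
    # Split "code/value" pairs and keep the pair with the smallest value.
    # Comparing the original substrings avoids the str(float) round-trip
    # that turns "0.0090" into "0.009" and breaks list lookups.
    pairs = re.findall(r'(\d+(?:\.\d+){3})/(\d+\.\d+)', cell)
    code, value = min(pairs, key=lambda p: float(p[1]))
    return f"{code}/{value}"

result = keep_lowest("3.2.1.52/0.0195,2.4.1.25/0.0087")
```

<p>Applied with <code>df_input[1] = df_input[1].map(keep_lowest)</code>, single-pair rows pass through unchanged, so the pre-filtering <code>mask</code> step is no longer needed.</p>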
| <python> | 2023-07-24 15:35:50 | 5 | 425 | Someone_1313 |
76,756,007 | 2,622,368 | python request how to send \u instead of \\u | <p>demo</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = "http://127.0.0.1:5000"
payload = {r"\u005F\u005F\u0069\u006E\u0069\u0074\u005F\u005F": {}}
res = requests.post(url + '/register', json=payload)
</code></pre>
<p>The request data is always modified to <code>\\u005F\\u005F\\u0069\\u006E\\u0069\\u0074\\u005F\\u005F</code></p>
<p>I don't want <code>\</code> to become <code>\\</code></p>
<p><a href="https://i.sstatic.net/acfaU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/acfaU.png" alt="enter image description here" /></a></p>
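<p>This is expected: the raw string <code>r"\u005F"</code> is six literal characters, and when <code>requests</code> serializes the payload with <code>json=</code>, <code>json.dumps</code> escapes each backslash to keep the JSON valid. A hedged sketch of both behaviours — to put a bare <code>\u</code> escape on the wire, build the JSON body yourself and send it via <code>data=</code> with a JSON content type:</p>

```python
import json

# r"\u005F" is six literal characters (backslash, u, 0, 0, 5, F), so
# json.dumps escapes the backslash, which is why the payload shows \\u.
key = r"\u005F\u005F"
wire = json.dumps({key: {}})

# To send a bare \u escape, hand-build the body: this Python literal is
# the 22-character string {"\u005F\u005F": {}} on the wire, and the
# receiver's JSON parser decodes \u005F to "_".
body = '{"\\u005F\\u005F": {}}'
decoded = json.loads(body)
# requests.post(url + '/register', data=body,
#               headers={'Content-Type': 'application/json'})
```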
| <python> | 2023-07-24 15:34:11 | 1 | 829 | wgf4242 |
76,755,905 | 4,794 | Unsupported object type in Tensorflow tutorial | <p>I'm trying to follow a tutorial on <a href="https://www.tensorflow.org/tutorials/keras/regression" rel="nofollow noreferrer">regression with Tensorflow</a>, but the code from the tutorial throws an error. Is the tutorial out of date, and how can I fix it?</p>
<p>Here's the code from the start of the tutorial:</p>
<pre><code>import numpy as np
import pandas as pd
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
print(tf.__version__)
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset = dataset.dropna()
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')
print(dataset.tail())
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
print(train_dataset.describe().transpose()[['mean', 'std']])
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(np.array(train_features))
</code></pre>
<p>When I run that, the last line raises this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/don/.config/JetBrains/PyCharm2023.1/scratches/scratch2.py", line 36, in <module>
normalizer.adapt(np.array(train_features))
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/keras/layers/preprocessing/normalization.py", line 286, in adapt
super().adapt(data, batch_size=batch_size, steps=steps)
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/keras/engine/base_preprocessing_layer.py", line 246, in adapt
data_handler = data_adapter.DataHandler(
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/keras/engine/data_adapter.py", line 1260, in __init__
self._adapter = adapter_cls(
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/keras/engine/data_adapter.py", line 246, in __init__
x, y, sample_weights = _process_tensorlike((x, y, sample_weights))
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/keras/engine/data_adapter.py", line 1140, in _process_tensorlike
inputs = tf.nest.map_structure(_convert_single_tensor, inputs)
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/tensorflow/python/util/nest.py", line 917, in map_structure
structure[0], [func(*x) for x in entries],
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/tensorflow/python/util/nest.py", line 917, in <listcomp>
structure[0], [func(*x) for x in entries],
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/keras/engine/data_adapter.py", line 1135, in _convert_single_tensor
return tf.convert_to_tensor(x, dtype=dtype)
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/don/.local/share/virtualenvs/zero-play-9HEKD3Xj/lib/python3.10/site-packages/tensorflow/python/framework/constant_op.py", line 103, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).
</code></pre>
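<p>The tutorial code itself appears fine; in recent pandas versions <code>pd.get_dummies</code> returns <code>bool</code> columns, so the mixed <code>bool</code>/<code>float</code> frame becomes an object-dtype array that <code>tf.convert_to_tensor</code> rejects. A hedged sketch of the fix on a trimmed frame (cast to a numeric dtype before calling <code>adapt</code>):</p>

```python
import numpy as np
import pandas as pd

# Trimmed stand-in for the tutorial's feature frame: one float column plus
# one-hot dummies. Newer pandas emits bool dummies, and np.array on the
# mixed frame yields dtype=object, which TensorFlow cannot convert.
df = pd.DataFrame({'Horsepower': [130.0, 165.0], 'Origin': ['USA', 'Japan']})
df = pd.get_dummies(df, columns=['Origin'], prefix='', prefix_sep='')
features = np.asarray(df.astype('float32'))
# normalizer.adapt(features)  # now a plain float32 array, so adapt() succeeds
```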
| <python><tensorflow><regression> | 2023-07-24 15:21:40 | 1 | 56,676 | Don Kirkby |
76,755,903 | 4,019,495 | VSCode Pylance: failing to recognize subtype of Pydantic StrictStr as str | <p>I have the following code typed into a python file in VS Code.</p>
<pre><code>from typing import Dict
from pydantic import StrictStr
class StrictStrSub(StrictStr):
pass
def f(d: Dict[StrictStrSub, str]):
e: Dict[str, str] = d
def h(d: Dict[StrictStr, str]):
e: Dict[str, str] = d
</code></pre>
<p>VS Code identifies one problem with this file (red underline):
<a href="https://i.sstatic.net/3ICiQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3ICiQ.png" alt="enter image description here" /></a></p>
<p>But it seems to me this is erroneous, due to the combination of</p>
<ol>
<li><code>StrictStr</code> does not have this error</li>
<li><code>StrictStrSub</code> is a subclass of <code>StrictStr</code>.</li>
</ol>
<p>Questions.</p>
<ul>
<li>Does anyone else observe this issue?</li>
<li>Am I correct in saying this is an issue? If so, how should I resolve?</li>
</ul>
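<p>A likely explanation, sketched below: pydantic v1 aliases <code>StrictStr</code> to plain <code>str</code> for type checkers (roughly <code>if TYPE_CHECKING: StrictStr = str</code>), so <code>Dict[StrictStr, str]</code> <em>is</em> <code>Dict[str, str]</code> to Pylance, while a real subclass is a distinct type — and <code>Dict</code> is invariant in its key type, so the error on <code>f</code> is arguably correct behaviour. <code>typing.cast</code> is one way to state the intended widening (a hedged sketch with an illustrative subclass):</p>

```python
from typing import Dict, cast

class MyStr(str):
    pass

def widen(d: Dict[MyStr, str]) -> Dict[str, str]:
    # Dict is invariant in both parameters, so a checker rejects the plain
    # assignment; cast() documents that treating the keys as str is
    # intended. It is a no-op at runtime.
    return cast(Dict[str, str], d)

out = widen({MyStr('a'): 'b'})
```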
| <python><visual-studio-code><pydantic> | 2023-07-24 15:21:30 | 0 | 835 | extremeaxe5 |