QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,063,506 | 459,745 | Curses does not print output | <p>I am new to curses so am a little confused why I don't see any output from this little "hello world":</p>
<pre class="lang-py prettyprint-override"><code>import curses
import time


def main(window: curses.window):
    window.addstr(1, 1, "Hello, world")
    time.sleep(3)


if __name__ == "__main__":
    curses.wrapper(main)
</code></pre>
<p>When I ran it, the screen was blank for a couple of seconds, but I never saw the text at coordinate (1, 1). After that, the screen was restored to what it was before running the script.</p>
<p>How do I fix this so the text shows?</p>
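<p>For reference, a common cause of this symptom is that <code>addstr()</code> only writes to curses' in-memory buffer; nothing reaches the terminal until <code>refresh()</code> is called (input calls such as <code>getch()</code> also trigger a refresh). A minimal sketch of the fixed callback, not verified against the asker's exact environment:</p>

```python
import curses
import time

def main(window: curses.window) -> None:
    window.addstr(1, 1, "Hello, world")
    window.refresh()  # flush the buffered output to the terminal
    time.sleep(3)

# run with curses.wrapper(main) from a real terminal
```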
| <python><python-curses> | 2023-09-08 00:32:19 | 1 | 41,381 | Hai Vu |
77,063,487 | 6,421,708 | How do I always return the value from an enum as a string | <p>I am using Python 3.9.6. I am trying to write more readable code by using enums. The enums are being used for REST API output. In the example below, the output from the enum MUST be a string that looks like MON, in all caps. In short, I always want the output from the enum to be its value, returned as an actual string.</p>
<p>#1 below is how I 'want' to use the enum: passing a list of days to a function. Unfortunately, the object detail is printed.</p>
<p>#5 and #6 below have the correct output, but I find the usage clumsy and would prefer the usage in #1.</p>
<p>Is there a way to always return the enum value as a string?</p>
<pre><code>from enum import Enum


class Days(Enum):
    Mon = 'MON'

    def __str__(self):
        print('*******')
        return f'{self.value}'

    # def __format__(self, spec):
    #     return f'{self.value}'

    # under any condition I want to ALWAYS
    # return the enum value as a string


print('1:')  # Use enum --> (output NOT correct)
print([Days.Mon])

print('2:')  # Force usage of __str__
print(f'{Days.Mon}')

print('3: ')  # This works because it calls __str__
print(Days.Mon)

print('4:')  # Example list of strings (correct)
print(['Mon'])

print('5:')  # Don't want to force my users to do this
print(str([Days.Mon.value]))

print('6:')  # Don't want to do this
print([f'{Days.Mon}'])
</code></pre>
<p>Output looks like this:</p>
<pre><code>1:
[<Days.Mon: 'MON'>]
2:
*******
MON
3:
*******
MON
4:
['Mon']
5:
['MON']
6:
*******
['MON']
</code></pre>
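<p>For context, one approach that produces the output wanted in #1 is to override <code>__repr__</code> as well, since printing a list renders each element with <code>repr()</code>, not <code>str()</code>. A sketch (assuming Python 3.9, as in the question):</p>

```python
from enum import Enum

class Days(Enum):
    Mon = 'MON'

    def __str__(self):
        return self.value

    def __repr__(self):
        # lists (and other containers) render elements via repr()
        return repr(self.value)

print([Days.Mon])  # ['MON']
```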
| <python><python-3.x><enums> | 2023-09-08 00:21:58 | 1 | 5,191 | Keith |
77,063,327 | 258,418 | Python typehinting class decorator | <p>I am trying to implement the equivalent of the <code>@contextlib.contextmanager</code> decorator for classes. When I researched this topic, it became apparent that I was not the first needing these and used the approach from <a href="https://discuss.python.org/t/yield-based-contextmanager-for-classes/8453" rel="nofollow noreferrer">https://discuss.python.org/t/yield-based-contextmanager-for-classes/8453</a>.</p>
<p>Here is the implementation, with my attempt to add typehinting, unfortunately it breaks autocompletions: pylance does not know what <code>__enter__()</code> returns, and thinks that an internal class of the decorated class becomes a function. I.e. <code>with Example() as e: e.<autocomplete></code> brings no joy.</p>
<pre><code>from __future__ import annotations

import contextlib
from typing import Iterator, Protocol, Type, TypeVar, cast

# based on https://discuss.python.org/t/yield-based-contextmanager-for-classes/8453

T = TypeVar("T", covariant=True)
D = TypeVar("D", covariant=True)


class ClassYieldContextmanagerProtocol(Protocol[T, D]):
    def __contextmanager__(self: D) -> Iterator[T]:
        ...


class ClassYieldContextmanagerTransformedProtocol(Protocol[T, D]):
    def __contextmanager__(self: D) -> Iterator[T]:
        ...

    def __enter__(self: D) -> T:
        ...

    def __exit__(self: D, e_type, e, tb) -> T:
        ...


C = TypeVar("C", bound=ClassYieldContextmanagerProtocol, covariant=True)


# def contextmanager(
#     cls: Type[C[T]],
# ) -> Type[C[T]]:
# def contextmanager(
#     cls: Type[C],
# ) -> Type[C]:
def contextmanager(
    cls: Type[ClassYieldContextmanagerProtocol[T, D]],
) -> Type[ClassYieldContextmanagerTransformedProtocol[T, D]]:
    """Yield-based contextmanager."""
    contextmanager: contextlib.AbstractContextManager[T]  # | None = None # check this

    def __enter__(self: D) -> T:  # noqa: N807 intend to declare magic method
        nonlocal contextmanager
        contextmanager = self.__contextmanager__()
        return contextmanager.__enter__()

    def __exit__(  # noqa: N807 intend to declare magic method
        self: D, exc_type, exc_value, traceback
    ) -> bool | None:
        return contextmanager.__exit__(exc_type, exc_value, traceback)

    cls = cast(Type[ClassYieldContextmanagerTransformedProtocol[T, D]], cls)
    cls.__enter__ = __enter__  # type: ignore[attr-defined]
    cls.__contextmanager__ = contextlib.contextmanager(cls.__contextmanager__)  # type: ignore[assignment]
    cls.__exit__ = __exit__  # type: ignore[attr-defined]
    return cls


@contextmanager
class Example:
    def open(self):
        ...

    def close(self):
        ...

    class Internal:
        def __init__(self, parent: Example):
            self._parent = parent

        def operation(self):
            pass

    def __contextmanager__(self) -> Iterator[Example.Internal]:
        self.open()
        yield Example.Internal(self)
        self.close()


Example.Internal  # Pylance shows me that Internal is a function here

with Example() as e:
    x = e  # pylance infers x: Any instead of Example.Internal
</code></pre>
<p>In the code above I had various attempts at typehinting:</p>
<ol>
<li>Specify only <code>ClassYieldContextmanagerProtocol</code>, with a single parameter.
mypy tells me that I do not match the protocol, since <code>self</code> is of a different type</li>
</ol>
<pre><code>playground.py:57: error: Argument 1 to "contextmanager" has incompatible type "type[Example]"; expected "ClassYieldContextmanagerProtocol[Internal]" [arg-type]
playground.py:57: note: Following member(s) of "Example" have conflicts:
playground.py:57: note: Expected:
playground.py:57: note: def __contextmanager__() -> Iterator[Internal]
playground.py:57: note: Got:
playground.py:57: note: def __contextmanager__(self: Example) -> Iterator[Internal]
</code></pre>
<ol start="2">
<li>Use a bound <code>TypeVar C</code>.
I am not sure how to bind T now, so that T has the type which is returned by <code>__contextmanager__</code> and should be returned by <code>__enter__</code></li>
<li>Add a second <code>TypeVar D</code> for the <code>ClassYieldContextmanagerProtocol</code> to make the type of self generic.
Now it is impossible to work with the type vars, the Protocol expects covariance, but when used as a parameter that is not allowed.
<pre><code>playground.py:43: error: Cannot use a covariant type variable as a parameter [misc]
playground.py:45: error: "D" has no attribute "__contextmanager__" [attr-defined]
playground.py:49: error: Cannot use a covariant type variable as a parameter [misc]
playground.py:83: error: "Example" has no attribute "__enter__" [attr-defined]
playground.py:83: error: "Example" has no attribute "__exit__" [attr-defined]
</code></pre>
</li>
</ol>
<h2>Summary</h2>
<p>How can I typehint the class-<code>contextmanager</code> decorator [1] so that autocompletion with pylance does not break? (Bonus points if it works for mypy too, since that is used for typechecking, but the developer convenience of keeping autocompletion is paramount.)</p>
<p>While an alternative implementation as an inheritable class might be possible, I would prefer the decorator model. (And learning how to tell mypy that <code>attr-defined</code> is ok in a class decorator.)</p>
<p>[1] I guess I should rename it to avoid confusion with contextlib.contextmanager, but I do not want to regenerate all the error messages</p>
| <python><python-typing><mypy><pyright> | 2023-09-07 23:14:11 | 0 | 5,003 | ted |
77,063,308 | 13,968,392 | Equivalent of pandas .append() method, which allows method chaining | <p>Now that <code>append()</code> is removed in pandas 2.0, what is a short alternative to <code>append()</code> allowing method chaining?
The <a href="https://pandas.pydata.org/docs/whatsnew/v2.0.0.html" rel="nofollow noreferrer">"What’s new in 2.0.0"</a> section of pandas says:</p>
<blockquote>
<p>Removed deprecated <code>Series.append()</code>, <code>DataFrame.append()</code>, use <code>concat()</code>
instead <a href="https://github.com/pandas-dev/pandas/issues/35407" rel="nofollow noreferrer">(GH 35407)</a></p>
</blockquote>
<p>I am looking for something like below to add one row, but within a method chain.</p>
<pre><code>import pandas as pd

df = pd.DataFrame([[4, 5, 6],
                   [7, 8, 9],
                   ])

df.loc[df.index.size] = [10, 20, 30]
</code></pre>
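<p>One commonly suggested replacement keeps the chain going with <code>pipe()</code> and a small <code>concat()</code>-based helper; a sketch (the helper name is illustrative, not a pandas API):</p>

```python
import pandas as pd

def append_row(df, row):
    """concat()-based stand-in for the removed append(), usable inside a chain."""
    return pd.concat([df, pd.DataFrame([row], columns=df.columns)],
                     ignore_index=True)

result = (
    pd.DataFrame([[4, 5, 6], [7, 8, 9]])
    .pipe(append_row, [10, 20, 30])
)
```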
| <python><pandas><method-chaining> | 2023-09-07 23:07:39 | 1 | 2,117 | mouwsy |
77,063,265 | 915,989 | Credential and location issues with bigquery_datatransfer Python APIs | <p>I am an Owner of a GCP project. As such, in the BigQuery UI, it is trivially simple for me to copy a dataset (as a one-time operation) in the <code>us</code> multi-region as a new dataset with a different name that's also in the <code>us</code> multi-region.</p>
<p>However, after scouring the docs, when I try to approximate performing the same task using the Python SDK, two issues arise:</p>
<ul>
<li><p>Unlike all other BigQuery API calls using the GCP Python SDK, my local <code>gcloud auth</code> login isn't automatically used. Instead, I have to <code>export GOOGLE_APPLICATION_CREDENTIALS=<path to json file></code>. While OK for a POC, that is concerning because this code will be deployed as a <code>beam.DoFn</code> at the start of a Dataflow Pipeline Template, and it is not clear how to do the same in that context. I would vastly prefer to have the code automatically use the service account in its runtime context (be that Google auth on my laptop or our Dataflow service account in GCP), like all other BigQuery calls do. (And yes, I ensured that <code>gcloud auth login && gcloud auth application-default login</code> succeeded immediately before trying.) [UPDATE] - This works perfectly using <code>DataflowRunner</code>, and uses the expected service account. It's only the dev ENV that's hosed, requiring a downloaded credentials JSON file.</p>
</li>
<li><p>With the credentials file supplied, I then get the error: "BigQuery Data Transfer Service does not yet support location: us". However, that can't be true, because I'm able to perform the Copy Dataset just fine in the GCP UI. [UPDATE] After something like 15 tries, this suddenly started working. Based upon <code>git</code> history, nothing relevant changed. So, this is no longer an issue.</p>
</li>
</ul>
<p>Here's the (comment-annotated) WIP code so far; any pointers are very much appreciated:</p>
<pre><code>import argparse
import logging
from datetime import date, timedelta, datetime
import time
import re

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions
from google.cloud import bigquery, bigquery_datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp


class BackupDataset(beam.DoFn):
    def __init__(self, bq_analytics_dataset_option, bq_backup_dataset_ttl_days_option, project):
        self.bq_analytics_dataset = bq_analytics_dataset_option
        self.bq_backup_dataset_ttl_days = bq_backup_dataset_ttl_days_option
        self.project = project

    def create_backup_dataset(self, bq_client, destination_dataset_id):
        logging.info(f"Creating backup dataset with name: {destination_dataset_id} ...")
        query_job = bq_client.query(f"CREATE SCHEMA IF NOT EXISTS {destination_dataset_id}")
        query_job.result()  # Wait for job to finish (it usually does immediately, but not always)

    def create_transfer_config(self, bq_client, bqdt_client, source_dataset_id):
        date_s = "{:%Y_%m_%d}".format(date.today())
        destination_dataset_id = f"{source_dataset_id}_{date_s}"
        self.create_backup_dataset(bq_client, destination_dataset_id)

        display_name = f"Backup of {source_dataset_id} on {date_s}"
        logging.info(f"Creating transfer_config with display name: {display_name} ...")
        transfer_config = bigquery_datatransfer_v1.TransferConfig(
            destination_dataset_id=destination_dataset_id,
            display_name=display_name,
            data_source_id="cross_region_copy",
            params={
                "source_project_id": self.project,
                "source_dataset_id": source_dataset_id,
                "overwrite_destination_table": True
            },
            schedule_options={
                "disable_auto_scheduling": True
            }  # run this only once - the default is recurring
        )

        # In order for this call to not puke with "Failed to find a valid credential. The field 'version_info' or 'service_account_name' must be specified.",
        # the GOOGLE_APPLICATION_CREDENTIALS ENV var must be set to point to a downloaded service account key file on the local machine.
        # Despite the error text, this call does NOT accept a `service_account_name` param.
        # FIXME: Find a way for this to work without ^^ for ease of developer maintenance
        remote_transfer_config = bqdt_client.create_transfer_config(
            parent=bqdt_client.common_project_path(self.project),
            transfer_config=transfer_config
        )
        logging.info(f"Created transfer_config with name: {remote_transfer_config.name} ...")
        return remote_transfer_config

    def run_transfer_config(self, bqdt_client, config):
        logging.info(f"Running transfer config with name {config.name} ...")
        start_time = Timestamp(seconds=int(time.time()))
        request = bigquery_datatransfer_v1.types.StartManualTransferRunsRequest(
            {"parent": config.name, "requested_run_time": start_time}
        )
        bqdt_client.start_manual_transfer_runs(request, timeout=360)

    # ...
</code></pre>
| <python><google-bigquery> | 2023-09-07 22:51:58 | 0 | 1,193 | aec |
77,063,081 | 2,153,235 | Does importing a namespace package also import the modules in the directory? | <p><a href="https://chrisyeh96.github.io/2017/08/08/definitive-guide-python-imports.html#importing-packages" rel="nofollow noreferrer">This</a> explanation of package imports says that importing a package <code>packB</code> without an <code>__init__.py</code> file doesn't do anything because <code>__init__.py</code> tells the import process what other <code>*.py</code> files to import.</p>
<p>I'm at a loss as to how that is useful. I'm a Python newbie, but everything I've read online says that such an "implicit namespace package" lacks <code>__init__.py</code> files because there may be <em>multiple</em> <code>packB</code> folders whose contents are to be combined into a common <code>packB</code> namespace.</p>
<p>Is this simply an error in the page cited above, or is there a clearer explanation that I just haven't found?</p>
| <python><namespace-package> | 2023-09-07 21:58:29 | 1 | 1,265 | user2153235 |
77,063,075 | 1,199,464 | How to avoid pytest mocked data conflicting between tests | <p>I have two commands (django management commands) which load data from an API, then do something based on the data they collect. So certain function names are the same for consistency, like <code>get_api_data</code> to gather data and <code>get_specific_event</code> to get an event from the data.</p>
<p>In the test module for each command I have a function that returns a dictionary to represent the data from the API which the command expects. For example;</p>
<pre><code>#test_command_one.py
import pytest

from core.management.commands.command_one import Command as CommandOne


def get_data(current_event=None, next_event=1):
    return {
        "current": current_event,
        "next": next_event,
        "data": [
            {
                "id": i,
                "time": time,
            }
            for i, time in enumerate(
                [
                    "2023-08-11 17:30:00+00:00",
                    "2023-08-18 17:15:00+00:00",
                    "2023-08-25 17:30:00+00:00",
                    "2023-09-01 17:30:00+00:00",
                    "2023-09-16 10:00:00+00:00",
                ],
                start=1,
            )
        ],
    }


@pytest.mark.django_db
class TestCommandOne:
    def test_get_current_event(self, mocker):
        cmd = CommandOne()
        m_events = mocker.patch.object(cmd, "get_api_data")
        m_events.return_value = get_data(next_event=1)

        assert cmd.get_current_event() is None
</code></pre>
<pre><code>#test_command_two.py
from io import StringIO

from django.core.management import call_command

from project.management.commands.command_two import Command


def get_events():
    return {
        "data": [
            {
                "id": i,
                "time": time,
                "time2": time,
                "time3": time,
            }
            for i, time in enumerate(
                [
                    "2023-08-11 17:30:00+00:00",
                    "2023-08-18 17:15:00+00:00",
                    "2023-08-25 17:30:00+00:00",
                    "2023-09-01 17:30:00+00:00",
                    "2023-09-16 10:00:00+00:00",
                ],
                start=1,
            )
        ],
    }


def test_valid_output(mocker):
    mocker.patch.object(
        Command, "get_api_data", return_value=get_events()
    )
    out2 = StringIO()
    call_command("command_two", stdout=out2)

    assert out2.getvalue() == "test output"
</code></pre>
<p>Running the tests in isolation, they pass. But when running the whole test suite, I'm seeing:</p>
<pre><code>FAILED project/tests/test_command_two.py::test_valid_output - KeyError: 'time2'
</code></pre>
<p>So the data being used by the second test is still the return value of <code>m_events = mocker.patch.object(cmd, "get_api_data")</code> from the first test. But switching the order in which the tests run still fails on the same issue, so it's not simply that the mock from one test carries over to the next.</p>
<p>How do I either clear these mocks at the beginning of a test, or ensure that the new mock is used?</p>
<p>I'd expect that each time you define <code>mocker.patch.object(cmd, "get_api_data")</code> for a test, its specified return value is the one used for the context of that test.</p>
| <python><pytest><pytest-mock> | 2023-09-07 21:57:17 | 0 | 12,944 | markwalker_ |
77,063,027 | 2,192,824 | How to convert the hour difference in timezone info to the abbreviated timezone name | <p>This is a followup question to</p>
<p><a href="https://stackoverflow.com/questions/77056774/how-can-i-convert-the-time-stamp-with-timezone-info-to-utc-time-in-python/77056831#77056831">How can I convert the time stamp with timezone info to utc time in python</a></p>
<p>I was able to get the time string like this "2023-09-06T22:02:44-07:00" through calling</p>
<pre><code>datetime.now().astimezone().isoformat(timespec='seconds')
</code></pre>
<p>I wonder whether there is a way to get the time string like "2023-09-06T22:02:44-PDT"? And also convert the time string to UTC time -- something like "2023-09-07T05:02:44-UTC"? Thanks!</p>
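<p>One possible approach (a sketch, with the caveat that <code>%Z</code> output for the local zone is platform-dependent and may be a full zone name rather than an abbreviation like PDT) is <code>strftime</code>'s <code>%Z</code> directive:</p>

```python
from datetime import datetime, timezone

now = datetime.now().astimezone()
local_s = now.strftime('%Y-%m-%dT%H:%M:%S-%Z')  # e.g. ...-PDT on some platforms
utc_s = now.astimezone(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S-%Z')  # ...-UTC
```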
| <python><timezone><timestamp-with-timezone> | 2023-09-07 21:45:33 | 1 | 417 | Ames ISU |
77,062,958 | 7,169,895 | Python QStyledItemDelegate - How do I properly align the text | <p>This question is almost identical to <a href="https://stackoverflow.com/questions/15778029/qstyleditemdelegatepaint-why-is-my-text-not-properly-aligned">this one</a>. However, the solution did not work for me, and that question concerns an older version of Qt. I am using a <code>QStyledItemDelegate</code> to color certain cells based on their value. However, after these cells are colored, the text in them shifts far to the upper left. Using alignment flags helps, but the text is still far to the left (maybe only a space or two). In the code below, a table is created and the columns next to <code>GOAT</code> are highlighted to demonstrate this. We can see the text is 'more' to the left than the other cells. How do I fix this? <code>translate</code> moves the whole cell, not just the text. Is there a way to just move the text?</p>
<pre><code>import sys

import pandas as pd
from PySide6 import QtCore, QtWidgets
from PySide6.QtCore import Qt
from PySide6.QtGui import QPen, QBrush, QColor
from PySide6.QtWidgets import QStyledItemDelegate


class TableModel(QtCore.QAbstractTableModel):
    def __init__(self, data):
        super(TableModel, self).__init__()
        self._dfDisplay = data
        self._data = data

    def data(self, index, role):
        if role == Qt.DisplayRole:
            value = self._data[index.column()][index.row()]
            return str(value)

    def rowCount(self, index):
        return self._data.shape[0]

    def columnCount(self, index):
        return self._data.shape[1]

    def headerData(self, col, orientation, role):
        if orientation == Qt.Orientation.Horizontal:
            if role == Qt.ItemDataRole.DisplayRole:
                return str(self._dfDisplay.columns[col])
        return None


class MyDelegate(QStyledItemDelegate):
    def paint(self, painter, option, index):
        super().paint(painter, option, index)
        painter.setPen(QPen(Qt.GlobalColor.black, 3))
        # Highlight our bm and tbm ROE cells
        if index.siblingAtColumn(0).data() in ["GOAT"]:
            if index.column() > 0:
                # Set the fill color
                brush = QBrush(QColor('#90EE90'))
                brush.setStyle(Qt.SolidPattern)
                painter.setBrush(brush)
                # Set the outline color
                painter.setPen(Qt.GlobalColor.white)
                painter.drawRect(option.rect)
                # Draw the text
                painter.setPen(Qt.GlobalColor.black)
                painter.translate(1, 0)
                painter.drawText(option.rect, Qt.AlignmentFlag.AlignLeft | Qt.AlignmentFlag.AlignVCenter, str(index.data()))


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.table = QtWidgets.QTableView()
        self.delegate = MyDelegate()
        self.table.setItemDelegate(self.delegate)
        # CustomHeaderView is not defined in this snippet:
        # header_view = CustomHeaderView(QtCore.Qt.Orientation.Horizontal)
        # self.table.setHorizontalHeader(header_view)
        data = pd.DataFrame([["GOAT", "Giraffe", "Potatoe", "Another"],
                             [77, 33, 111111, 233],
                             [50, 70, 89, 100000]
                             ])
        print(data)
        self.model = TableModel(data)
        self.table.setModel(self.model)
        self.setCentralWidget(self.table)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
</code></pre>
| <python><pyside6> | 2023-09-07 21:27:18 | 1 | 786 | David Frick |
77,062,953 | 20,591,261 | Exclude datetime from GroupBy sum | <p>I'm looking for a way to use groupby on a dataframe that has datetime64 columns, but I'm getting this error:</p>
<pre><code> dfsum = df.groupby(['City']).sum()
TypeError: datetime64 type does not support sum operations
</code></pre>
<p>I tried using:</p>
<pre><code>desired = df.select_dtypes(include=[int, float, object])
dfsum_city = desired.groupby(['City']).sum()
dfsum_city.reset_index(inplace=True)
</code></pre>
<p>But I don't think this is the right way to solve the problem. Any advice?</p>
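<p>For reference, a common alternative to <code>select_dtypes</code> is the <code>numeric_only</code> flag on the aggregation itself (available on <code>sum()</code> in recent pandas); a minimal sketch with data shaped like the table below:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "City": ["Boston (MA)", "Boston (MA)", "Austin (TX)"],
    "Total Price": [700.0, 14.95, 11.99],
    "Order Date": pd.to_datetime(["2019-01-22", "2019-01-28", "2019-01-25"]),
})

dfsum = df.groupby("City").sum(numeric_only=True)  # non-numeric columns are dropped
```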
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Order ID</th>
<th>Product</th>
<th>Quantity Ordered</th>
<th>Price Each</th>
<th>Order Date</th>
<th>Purchase Address</th>
<th>day</th>
<th>month</th>
<th>year</th>
<th>Total Price</th>
<th>City</th>
</tr>
</thead>
<tbody>
<tr>
<td>141234</td>
<td>iPhone</td>
<td>1</td>
<td>700</td>
<td>2019-01-22 21:25:00</td>
<td>944 Walnut St, Boston, MA 02215</td>
<td>22</td>
<td>1</td>
<td>2019</td>
<td>700</td>
<td>Boston (MA)</td>
</tr>
<tr>
<td>141235</td>
<td>Lightning Charging Cable</td>
<td>1</td>
<td>14.95</td>
<td>2019-01-28 14:15:00</td>
<td>185 Maple St, Portland, OR 97035</td>
<td>28</td>
<td>1</td>
<td>2019</td>
<td>14.95</td>
<td>Portland (OR)</td>
</tr>
<tr>
<td>141236</td>
<td>Wired Headphones</td>
<td>2</td>
<td>11.99</td>
<td>2019-01-17 13:33:00</td>
<td>538 Adams St, San Francisco, CA 94016</td>
<td>17</td>
<td>1</td>
<td>2019</td>
<td>23.98</td>
<td>San Francisco (CA)</td>
</tr>
<tr>
<td>141237</td>
<td>27in FHD Monitor</td>
<td>1</td>
<td>149.99</td>
<td>2019-01-05 20:33:00</td>
<td>738 10th St, Los Angeles, CA 90001</td>
<td>5</td>
<td>1</td>
<td>2019</td>
<td>149.99</td>
<td>Los Angeles (CA)</td>
</tr>
<tr>
<td>141238</td>
<td>Wired Headphones</td>
<td>1</td>
<td>11.99</td>
<td>2019-01-25 11:59:00</td>
<td>387 10th St, Austin, TX 73301</td>
<td>25</td>
<td>1</td>
<td>2019</td>
<td>11.99</td>
<td>Austin (TX)</td>
</tr>
</tbody>
</table>
</div> | <python><pandas><dataframe><datetime> | 2023-09-07 21:26:11 | 2 | 1,195 | Simon |
77,062,830 | 4,904,821 | Fastest way to update a few million JSON values in postgres | <p>I have a Postgres table <code>User</code>. User has many attributes, including <code>preferences</code>, a JSON (JSONB) dictionary.</p>
<p>I have a CSV file with a few million user IDs, every one of which needs to update the dictionary with the following query:</p>
<pre><code>UPDATE "User"
SET preferences = jsonb_set(
    jsonb_set(
        preferences,
        '{setting1}',
        'true'::jsonb,
        true
    ),
    '{setting2}',
    'true'::jsonb,
    true
)
WHERE id = '{my-user-id}';
</code></pre>
<p>This basically just updates 2 fields within the dictionary in a single query.</p>
<p>My problem is that in testing, running this sequentially with a python script yielded only a few hundred updates per second, which would take many hours to complete.</p>
<p>How can I speed this up?</p>
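<p>For reference, a common pattern for this class of problem is one set-based <code>UPDATE</code> joined against the whole id list, instead of millions of single-row statements. A hedged sketch (table/column names follow the question; the top-level <code>||</code> merge matches the two nested <code>jsonb_set</code> calls only for top-level keys, as here):</p>

```python
# Illustrative only: builds the batched statement; ids would be streamed from the CSV.
ids = ["user-id-1", "user-id-2"]  # placeholder values

sql = """
UPDATE "User" AS u
SET preferences = u.preferences || '{"setting1": true, "setting2": true}'::jsonb
FROM unnest(%(ids)s::text[]) AS t(id)
WHERE u.id = t.id;
"""
# e.g. with psycopg2: cur.execute(sql, {"ids": ids})  -- one round trip per batch
```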
| <python><database><postgresql><database-performance> | 2023-09-07 21:03:00 | 1 | 2,355 | David Ferris |
77,062,828 | 5,769,814 | Shapely for pixel geometry | <p>I'm using the <code>shapely</code> library to handle my geometry-based computation. However, I'm working with integer geometry (such as pixels), which unfortunately means the <code>shapely</code> computations are off-by-one. For instance, normally this is the expected result:</p>
<pre><code>>>> import shapely
>>> mp = shapely.MultiPoint([(1,1), (2,2)])
>>> shapely.box(*mp.bounds).area
1.0
</code></pre>
<p>However, with pixel geometry, this rectangle should have an area of 4 (since it contains the pixels <code>(1,1)</code>, <code>(1,2)</code>, <code>(2,1)</code>, <code>(2,2)</code>).</p>
<p>I tried to fix this by subclassing <code>MultiPoint</code>:</p>
<pre><code>class Cluster(shapely.MultiPoint):
    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        if not all(all(isinstance(c, int) for c in coord) for coord in self):
            raise ValueError("Cluster coordinates must be integers.")

    @property
    def bounds(self):
        bounds = super().bounds
        return bounds[0], bounds[1], bounds[2] + 1, bounds[3] + 1
</code></pre>
<p>However, this didn't seem to do anything, as even <code>bounds</code> is still printing the same thing:</p>
<pre><code>>>> c = Cluster([(1,1), (2,2)])
>>> c.bounds
(1.0, 1.0, 2.0, 2.0)
>>> shapely.box(*c.bounds).area
1.0
</code></pre>
<p>Also, pylint is warning me that the methods <code>coords</code> and <code>xy</code> are abstract but not implemented in my class, which makes no sense to me, since <code>MultiPoint</code> can be instantiated, so it shouldn't contain any unimplemented abstract methods.</p>
<p>Is there a <code>shapely</code> alternative that handles pixel geometry? Alternatively, why is my overridden <code>bounds</code> property not returning the values I want?</p>
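<p>One likely explanation for the non-working override: shapely 2.x geometries are immutable, C-backed objects created via <code>__new__</code>, so subclass construction may hand back a plain <code>MultiPoint</code> and the Python-level property is never consulted. A hedged workaround is to apply the +1 in a helper instead of a subclass:</p>

```python
import shapely

def pixel_bounds(geom):
    """Bounds treating each integer coordinate as a full 1x1 pixel."""
    minx, miny, maxx, maxy = geom.bounds
    return (minx, miny, maxx + 1, maxy + 1)

mp = shapely.MultiPoint([(1, 1), (2, 2)])
area = shapely.box(*pixel_bounds(mp)).area  # covers (1,1), (1,2), (2,1), (2,2)
```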
| <python><python-3.x><properties><shapely> | 2023-09-07 21:02:44 | 1 | 1,324 | Mate de Vita |
77,062,266 | 12,708,740 | Pivot df without introducing NaNs | <p>I have a dataframe that I am trying to pivot. However, I am accidentally introducing NaNs in my rows.</p>
<p>Code:</p>
<pre><code>category = ['animal', 'animal', 'animal', 'fruit', 'fruit', 'fruit', 'veggie', 'veggie', 'veggie']
obj = ['animal_1', 'animal_2', 'animal_3', 'fruit_1', 'fruit_2', 'fruit_3', 'veggie_1', 'veggie_2', 'veggie_3']
df = pd.DataFrame(list(zip(category, obj)), columns=['category', 'object'])
# pivot the DataFrame
result = df.pivot(columns='category', values='object')
</code></pre>
<p>Instead I need something that would be like the output of this code. Importantly, the below does not have any NaNs.</p>
<pre><code>animals=['animal_1', 'animal_2', 'animal_3']
fruit=['fruit_1', 'fruit_2', 'fruit_3']
veggies=['veggie_1', 'veggie_2', 'veggie_3']
pd.DataFrame(list(zip(animals, fruit, veggies)), columns=['animal', 'fruit', 'veggie'])
</code></pre>
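<p>For context, the NaNs appear because <code>pivot</code> keeps the original row index; one common remedy is to pivot on a per-category counter built with <code>cumcount</code> (a sketch, not the only possible approach):</p>

```python
import pandas as pd

category = ['animal', 'animal', 'animal', 'fruit', 'fruit', 'fruit',
            'veggie', 'veggie', 'veggie']
obj = ['animal_1', 'animal_2', 'animal_3', 'fruit_1', 'fruit_2', 'fruit_3',
       'veggie_1', 'veggie_2', 'veggie_3']
df = pd.DataFrame({'category': category, 'object': obj})

result = (
    df.assign(row=df.groupby('category').cumcount())
      .pivot(index='row', columns='category', values='object')
)
```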
| <python><pandas><dataframe> | 2023-09-07 19:12:26 | 1 | 675 | psychcoder |
77,062,239 | 835,523 | Pandas read_csv skiprows=1 skips entire file | <p>I am calling pandas <code>read_csv(... skiprows=1)</code> and instead of skipping the first row it's skipping every line. I suspect it's because there's an issue with the line ending character it expects vs what is there.</p>
<p>Does anyone know how to make this more robust?</p>
<p><EDIT> My file uses \r\n, but when I set <code>lineterminator='\r\n'</code> I get "only length-1 line terminators supported"</p>
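<p>For what it's worth, the default C parser already handles \r\n natively (so <code>lineterminator</code> should normally be left unset); the "length-1" error combined with every line being skipped is consistent with the file actually using bare \r endings, which can be reproduced and handled like this (a sketch, not verified against the actual file):</p>

```python
import io
import pandas as pd

raw = "junk header line\ra,b\r1,2\r"  # '\r'-only (old-Mac style) line endings
df = pd.read_csv(io.StringIO(raw), skiprows=1, lineterminator='\r')
```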
| <python><pandas> | 2023-09-07 19:06:59 | 1 | 4,741 | Steve |
77,062,121 | 11,069,811 | download with gdown a file in a folder with original name | <p>I am trying to download a file called file.zip from Google Drive with gdown in Colab:</p>
<pre><code>!gdown url -O /out
</code></pre>
<p>When checking the out folder, the file name is something like zips2dkqipb0ntmp instead of file.zip. I also tried:</p>
<pre><code>gdown.download(url, "/out")
</code></pre>
<p>with equally bad results.</p>
| <python><google-colaboratory> | 2023-09-07 18:42:21 | 0 | 407 | molo32 |
77,062,103 | 7,347,774 | How to remove double quotes in all Snowpark DataFrame column names? | <p>I'm trying to save my Pandas DataFrame into Snowpark DataFrame with this code:</p>
<blockquote>
<pre><code>pdf = pd.DataFrame(...)
pdf.columns = pdf.columns.str.replace(' ', '')
sdf = session.createDataFrame(pdf)
sdf.write.saveAsTable('MY_TABLE_NAME', mode="overwrite")
</code></pre>
</blockquote>
<p>Columns in Pandas DataFrame only contain letters and underscores after removing spaces.
Unfortunately, at the end I get a Snowflake table with column names with double quotes.</p>
<p>How to save the table without the double quotes in the column names?</p>
<p><strong>Edit</strong></p>
<p>Solutions that worked:</p>
<ol>
<li>Uppercase all column names by ADITYA PAWAR:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>pdf.columns = pdf.columns.str.upper().str.replace(' ', '')
sdf = session.createDataFrame(pdf)
sdf.write.saveAsTable('MY_TABLE_NAME', mode="overwrite")
</code></pre>
<ol start="2">
<li>Use <code>write_pandas()</code> by Aek:</li>
</ol>
<pre class="lang-py prettyprint-override"><code># Temporary table
session.write_pandas(pdf, 'MY_TABLE_NAME', auto_create_table=True, table_type="temporary", schema='MY_SCHEMA_NAME')
# Permanent table
session.write_pandas(pdf, 'MY_TABLE_NAME', auto_create_table=True, schema='MY_SCHEMA_NAME')
</code></pre>
| <python><pandas><snowflake-cloud-data-platform> | 2023-09-07 18:39:37 | 1 | 1,055 | Piotr K |
77,062,080 | 1,996,760 | Can't make Python Stockfish weaker | <p>I’ve written chess software in Python which uses Stockfish via this library:
<a href="https://pypi.org/project/stockfish/" rel="nofollow noreferrer">https://pypi.org/project/stockfish/</a></p>
<p>The software works and plays well. Too well actually…</p>
<p>My idea was that the app could offer a level / ELO choice, but I just can’t scale it down. It doesn’t matter if I use Stockfish or Fairy Stockfish; seemingly none of the parameters has any effect, especially not the “Skill Level” or “Elo” parameters.</p>
<p>I tried multiple combinations; this is <em>only an example</em>:</p>
<pre><code>from stockfish import Stockfish

params = {
    "Debug Log File": "",
    "Contempt": 0,
    "Min Split Depth": 0,
    "Threads": 2,  # More threads will make the engine stronger, but should be kept at less than the number of logical processors on your computer.
    "Ponder": False,
    "Hash": 512,  # Default size is 16 MB. It's recommended that you increase this value, but keep it as some power of 2. E.g., if you're fine using 2 GB of RAM, set Hash to 2048 (11th power of 2).
    "MultiPV": 1,
    "Skill Level": 1,
    "Move Overhead": 10,
    "Minimum Thinking Time": 20,
    "Slow Mover": 100,
    "UCI_Chess960": False,
    "UCI_LimitStrength": True,
    "UCI_Elo": 1000
}

#stockfish = Stockfish(path="/usr/games/stockfish", parameters=params)
stockfish = Stockfish(path="/home/python/chess/fairy-stockfish_x86-64", parameters=params)
</code></pre>
| <python><pypi><chess><python-chess><stockfish> | 2023-09-07 18:35:08 | 3 | 305 | igoemon |
77,062,058 | 8,834,335 | How to fill in a dataframe piecemeal in Python | <p>I have a dataframe Results with columns ID and Prediction, and a subfunction Calc that returns a dataframe of ID, pred for some subset of the IDs at a time. I would like to fill in Results as predictions come in. In R, this would be simply:</p>
<pre><code>for (x in cycles) {
  temp <- Calc(x)
  Results[temp, on = "ID", Prediction := pred]
}
</code></pre>
<p>How do I do this in Python? When I try using merge with something like <code>Results = Results.merge(temp, on=['ID'], how='left')</code> it tries to fill the whole column, so in the second loop I end up with Results having columns ID, Prediction_x, Prediction_y. One solution might be to make a new table and append temp in each cycle, then merge at the end, but I feel like there has to be a better way...</p>
<p>EDIT: Full worked example using not_speshal's solution:</p>
<pre><code>Results = pd.DataFrame({'ID': [1,2,3,4]})
Results["Prediction"] = np.nan
Results = Results.set_index("ID")
def Calc(x):
if x == 1:
return pd.DataFrame({'ID': [1,2], 'Prediction': [5,6]})
if x == 2:
return pd.DataFrame({'ID': [3,4], 'Prediction': [7,8]})
cycles = [1,2]
for x in cycles:
temp = Calc(x)
Results = Results.fillna(temp.set_index("ID"))
### Results Output ###
Prediction
ID
1 5.0
2 6.0
3 7.0
4 8.0
</code></pre>
| <python><pandas><merge> | 2023-09-07 18:30:15 | 1 | 468 | Sinnombre |
77,061,999 | 2,550,810 | Reducing augmented matrices in SymPy | <p>A common task in Linear Algebra (101) is to row-reduce an "augmented matrix" <code>[A | b]</code> where <code>b</code> is the right hand side of the linear system <code>Ax=b</code>. Often, we want to row reduce using symbolic variables for <code>b</code>. The entries in the row-reduced <code>b</code> will reveal any compatibility conditions on the components of <code>b</code> that are needed to obtain a solution to the linear system.</p>
<p>If <code>A</code> has full row rank and <code>b</code> contains symbolic variables, the Sympy function <code>rref()</code> will return the expected results</p>
<pre><code>import sympy as sp
from pprint import pprint
b = sp.symbols('b_(0:3)',real=True)
A = sp.Matrix(2,2,[1,0,-1,1])
B = sp.Matrix(2,1,b[0:2])
A_aug = sp.Matrix.hstack(A,B)
# Augmented matrix
print("Augmented matrix [A,b] (A has full row rank)")
pprint(A_aug)
print("")
# Row reduced Echelon form (correct/expected)
print("Row-reduced form (correct/expected)")
pprint(A_aug.rref())
</code></pre>
<p>The results are as expected :</p>
<pre><code>Augmented matrix [A,b] (A has full row rank)
Matrix([
[ 1, 0, b_0],
[-1, 1, b_1]])
Row-reduced form (correct/expected)
(Matrix([
[1, 0, b_0],
[0, 1, b_0 + b_1]]), (0, 1))
</code></pre>
<p>However, when trying this on a matrix <code>A</code> that does not have full row rank, the row-reduction is too aggressive, and SymPy will reduce the column(s) with symbolic variables to obtain additional pivot rows.</p>
<pre><code># A does not have full row rank
A = sp.Matrix(3,2,[1,0,-1,1,0,-1])
B = sp.Matrix(3,1,b)
A_aug = sp.Matrix.hstack(A,B)
print("Augmented matrix (A is row-rank deficient)")
pprint(A_aug)
print("")
print("Row-reduced form (Sympy)")
pprint(A_aug.rref())
</code></pre>
<p>Results :</p>
<pre><code>Augmented matrix (A is row-rank deficient)
Matrix([
[ 1, 0, b_0],
[-1, 1, b_1],
[ 0, -1, b_2]])
Row-reduced form (Sympy)
(Matrix([
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]), (0, 1, 2))
</code></pre>
<p>Sympy interprets that last column as a "pivot column", even though in Linear algebra 101, the final step that would turn the last column into a pivot column is ignored.</p>
<p>The results I expect/was hoping for are :</p>
<pre><code>Row-reduced form (expected)
Matrix([
[1, 0, b_0],
[0, 1, b_0 + b_1],
[0, 0, b_0 + b_1 + b_2]])
</code></pre>
<p>The last entry in the last column is useful, since it gives us a compatibility condition on the entries in the right hand side vector.</p>
<p>It seems that what is needed is a way to indicate potential pivot columns ahead of time so that SymPy ignores non-pivot columns when row-reducing.</p>
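<p>One workaround I have been experimenting with (a sketch; it leans on the <code>iszerofunc</code> parameter of <code>rref()</code>, which decides pivot candidacy, rather than on any feature documented for this purpose): report any entry that still contains a <code>b</code> symbol as "zero", so the augmented column can never be chosen as a pivot:</p>

```python
import sympy as sp

b = sp.symbols('b_0:3', real=True)
A = sp.Matrix(3, 2, [1, 0, -1, 1, 0, -1])
A_aug = A.row_join(sp.Matrix(3, 1, b))

# iszerofunc is consulted when rref() searches for a pivot; claiming
# that symbolic entries are "zero" stops it from pivoting on them,
# while the entries themselves still get carried along by the row ops.
no_symbolic_pivots = lambda e: True if e.has(*b) else e.is_zero
reduced, pivots = A_aug.rref(iszerofunc=no_symbolic_pivots)

print(pivots)
print(reduced)  # last row should expose the compatibility condition
```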
<p><strong>UPDATE:</strong> Sympy <code>rref()</code> will produce the expected results for matrices with full row rank.</p>
<pre><code>A_aug = [A|b]
Matrix([
[1, 2, 3, 5, b_1],
[2, 4, 8, 12, b_2],
[3, 6, 7, 12, b_3]])
A_aug.rref()
Matrix([
[1, 2, 0, 0, -6*b_1 + b_2/2 + 2*b_3],
[0, 0, 1, 0, -6*b_1 + 3*b_2/2 + b_3],
[0, 0, 0, 1, 5*b_1 - b_2 - b_3]])
</code></pre>
<p>If <code>A</code> is missing pivot rows, Sympy will reduce symbolic variables in the augmented columns to produce the missing pivot rows.</p>
| <python><matrix><sympy> | 2023-09-07 18:20:22 | 4 | 1,610 | Donna |
77,061,994 | 12,708,740 | Generate combination of subset N from multiple lists | <p>I have 4 lists of elements and am trying to generate combinations of elements from those lists. Specifically, I'd like each combination to have 1 element from each list. However, I would only like each combination to have 3 elements in it. I would also like to record which elements are left out.</p>
<p>Example lists:</p>
<pre><code>list1 = ['a', 'b', 'c']
list2 = ['m', 'n', 'o']
list3 = ['x', 'y', 'z']
list4 = ['q', 'r', 's']
</code></pre>
<p>Example desired output, showing just a few of the rows; I would like all combinations:</p>
<pre><code>combos = [[['a', 'm', 'x'], 'q'],
[['a', 'n', 'r'], 'z'],
[['s', 'z', 'o'], 'a']]
df = pd.DataFrame(combos, columns = ['combo', 'extra'])
</code></pre>
<p>Importantly, the "extras" should be sampled from all lists.</p>
<p>I am currently uncertain if order matters to me or not for the combinations, but if the code is easy to explain for permutations that would also be great. Thank you!</p>
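<p>A sketch of what I have in mind with <code>itertools</code> (assuming every element of the skipped list should be paired with every trio, and that order inside a trio follows the list order):</p>

```python
from itertools import combinations, product

list1 = ['a', 'b', 'c']
list2 = ['m', 'n', 'o']
list3 = ['x', 'y', 'z']
list4 = ['q', 'r', 's']
lists = [list1, list2, list3, list4]

combos = []
# Choose which 3 of the 4 lists contribute one element each; the
# remaining list supplies the left-out "extra" element.
for kept in combinations(range(len(lists)), 3):
    (skipped,) = set(range(len(lists))) - set(kept)
    for trio in product(*(lists[i] for i in kept)):
        for extra in lists[skipped]:
            combos.append([list(trio), extra])

print(len(combos))  # 4 list choices * 3**3 trios * 3 extras = 324
```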
| <python><combinations><permutation> | 2023-09-07 18:19:20 | 1 | 675 | psychcoder |
77,061,913 | 425,871 | Why does quadratic interpolation on four points produce a cubic plot? | <p>Consider the following Python program:</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
# Define your four data points
x = np.array([0, 1, 2, 3]) # x-coordinates of the points
y = np.array([0, 1, 0, 1]) # y-coordinates of the points
# Create the interpolation function with quadratic interpolation
f = interp1d(x, y, kind='quadratic')
# Generate a finer grid of x values for plotting
x_interp = np.linspace(min(x), max(x), 1000)
# Compute the corresponding y values using the interpolation function
y_interp = f(x_interp)
# Plot the data points and the interpolated curve
plt.scatter(x, y, label='Data Points')
plt.plot(x_interp, y_interp, label='Quadratic Interpolation')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.title('Quadratic Interpolation with 4 Data Points')
plt.show()
</code></pre>
<p>This is the resulting plot:</p>
<p><a href="https://i.sstatic.net/KP09Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KP09Y.png" alt="enter image description here" /></a></p>
<p>That is not quadratic; rather, it appears to be a cubic polynomial.</p>
<p><strong>What is happening inside <code>interp1d()</code> and why is the resulting curve not quadratic?</strong></p>
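<p>A quick sanity check (my own reasoning, not from the SciPy docs): a single quadratic has only three degrees of freedom, so no single quadratic can pass through these four points at all; whatever <code>interp1d</code> returns therefore has to be piecewise:</p>

```python
import numpy as np

x = np.array([0, 1, 2, 3])
y = np.array([0, 1, 0, 1])

# Least-squares fit of one quadratic: it cannot reproduce y exactly,
# which shows the interpolant must be a piecewise (spline) quadratic.
coeffs = np.polyfit(x, y, 2)
fitted = np.polyval(coeffs, x)
residual = np.abs(fitted - y).max()
print(residual)  # clearly nonzero
```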
| <python><scipy><interpolation> | 2023-09-07 18:04:04 | 1 | 6,464 | Steve |
77,061,900 | 10,461,632 | How to convert a dataframe into a tree data structure | <p>I have a pandas dataframe that I want to convert into a tree data structure so that I can then use something like <a href="https://mui.com/x/react-tree-view/" rel="nofollow noreferrer">TreeView</a> to show the tree structure in my application.</p>
<p>Preferably, I want to use recursion to build the data structure. I have tried without recursion, but am coming up short because I can't get the fully nested structure I'm after. Everything I've researched has examples for tree structure recursion, but I'm stuck because I have a list of tree structures.</p>
<p>The expected output is this:</p>
<pre><code>[
{
"section": "1.0",
"title": "Main section",
"children": [
{
"section": "1.1",
"title": "One subsection"
},
{
"section": "1.2",
"title": "Another subsection"
},
{
"section": "1.3",
"title": "And another subsection"
}
]
},
{
"section": "2.0",
"title": "The second main section",
"children": [
{
"section": "2.1",
"title": "A subsection of the second main section",
"children": [
{
"section": "2.1.1",
"title": "A sub subsection"
},
{
"section": "2.1.2",
"title": "Another sub subsection",
"children": [
{
"section": "2.1.2.1",
"title": "I am a deep subsection"
},
{
"section": "2.1.2.2",
"title": "I am another deep subsection"
}
]
}
]
}
]
},
]
</code></pre>
<p>Here's some working sample code and the output of it:</p>
<pre><code>import pandas as pd
import json
data = [
['1.0', 'Main section'],
['1.1', 'One subsection'],
['1.2', 'Another subsection'],
['1.3', 'And another subsection'],
['2.0', 'The second main section'],
['2.1', 'A subsection of the second main section'],
['2.1.1', 'A sub subsection'],
['2.1.2', 'Another sub subsection'],
['2.1.2.1', 'I am a deep subsection'],
['2.1.2.2', 'I am another deep subsection'],
]
df = pd.DataFrame(data, columns=['section', 'title'])
# Add parent column
def add_parent(val):
parent = val.split('.')[:-1]
if len(parent) == 1:
parent.append('0')
parent = '.'.join(parent)
if parent == val:
return None
return parent
df['parent'] = df['section'].apply(lambda x: add_parent(x))
records = []
for row in df.to_dict(orient='records'):
if row['section'] in df['parent'].unique():
row['children'] = []
for value, dataframe in df.groupby('parent'):
if row['section'] == value:
for val in dataframe.to_dict(orient='records'):
val.pop('parent')
row['children'].append(val)
row.pop('parent')
records.append(row)
json_str = json.dumps(records, indent=2)
</code></pre>
<pre><code>print(json_str)
[
{
"section": "1.0",
"title": "Main section",
"children": [
{
"section": "1.1",
"title": "One subsection"
},
{
"section": "1.2",
"title": "Another subsection"
},
{
"section": "1.3",
"title": "And another subsection"
}
]
},
{
"section": "2.0",
"title": "The second main section",
"children": [
{
"section": "2.1",
"title": "A subsection of the second main section"
}
]
},
{
"section": "2.1",
"title": "A subsection of the second main section",
"children": [
{
"section": "2.1.1",
"title": "A sub subsection"
},
{
"section": "2.1.2",
"title": "Another sub subsection"
}
]
},
{
"section": "2.1.2",
"title": "Another sub subsection",
"children": [
{
"section": "2.1.2.1",
"title": "I am a deep subsection"
},
{
"section": "2.1.2.2",
"title": "I am another deep subsection"
}
]
}
]
</code></pre>
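<p>For comparison, here is the kind of recursive builder I was aiming for (a sketch; <code>build_tree</code> and <code>parent_of</code> are my own hypothetical helper names):</p>

```python
import pandas as pd

data = [
    ['1.0', 'Main section'],
    ['1.1', 'One subsection'],
    ['1.2', 'Another subsection'],
    ['1.3', 'And another subsection'],
    ['2.0', 'The second main section'],
    ['2.1', 'A subsection of the second main section'],
    ['2.1.1', 'A sub subsection'],
    ['2.1.2', 'Another sub subsection'],
    ['2.1.2.1', 'I am a deep subsection'],
    ['2.1.2.2', 'I am another deep subsection'],
]
df = pd.DataFrame(data, columns=['section', 'title'])

def parent_of(section):
    # Same parent rule as add_parent() above: '1.1' -> '1.0', '2.1.1' -> '2.1'
    parts = section.split('.')[:-1]
    if len(parts) == 1:
        parts.append('0')
    parent = '.'.join(parts)
    return None if parent == section else parent

df['parent'] = df['section'].map(parent_of)

def build_tree(df, parent=None):
    # Collect every row whose parent matches, then recurse into each
    # row's own children; leaf nodes simply omit the 'children' key.
    rows = df[df['parent'].isna()] if parent is None else df[df['parent'] == parent]
    nodes = []
    for _, row in rows.iterrows():
        node = {'section': row['section'], 'title': row['title']}
        children = build_tree(df, row['section'])
        if children:
            node['children'] = children
        nodes.append(node)
    return nodes

tree = build_tree(df)
```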
| <python><recursion> | 2023-09-07 18:02:11 | 1 | 788 | Simon1 |
77,061,724 | 1,471,980 | how do you style data frame in Pandas | <p>I have this data frame:</p>
<pre><code>df
Server Env. Model Percent_Utilized
server123 Prod Cisco. 50
server567. Prod Cisco. 80
serverabc. Prod IBM. 100
serverdwc. Prod IBM. 45
servercc. Prod Hitachi. 25
Avg 60
server123Uat Uat Cisco. 40
server567u Uat Cisco. 30
serverabcu Uat IBM. 80
serverdwcu Uat IBM. 45
serverccu Uat Hitachi 15
Avg 42
</code></pre>
<p>I need to apply style to this data frame based on Percent_Utilized column. I have this solution so far:</p>
<pre><code>def color(val):
if pd.isnull(val):
return
elif val > 80:
background_color = 'red'
elif val > 50 and val <= 80:
background_color = 'yellow'
else:
background_color = 'green'
return 'background-color: %s' % background_color
def color_for_avg_row(row):
styles = [''] * len(row)
if row['Server'] == 'Avg':
if row['Percent_Utilized'] > 80:
color = 'background-color: red'
elif row['Percent_Utilized'] > 50:
color = 'background-color: yellow'
else:
color = 'background-color: green'
styles = [color for _ in row.index]
return pd.Series(styles, index=row.index)
df_new = (df.style
.apply(color_for_avg_row, axis=1)
.applymap(color, subset=["Percent_Utilized"]))
df_new
</code></pre>
<p>The <code>pd.isnull(val)</code> check lets the <code>color</code> function skip over missing values, but now I get a different error:</p>
<pre><code> AttributeError: 'NoneType oject has no attribute 'rstrip'.
</code></pre>
<p>I think the error is raised when the styles are applied to produce <code>df_new</code>.</p>
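<p>For what it's worth, my current guess at a fix (a sketch): <code>Styler</code> callbacks must return a CSS string for every cell, so the bare <code>return</code> for NaN, which hands back <code>None</code>, would need to become an empty string:</p>

```python
import pandas as pd

def color(val):
    # Returning None from a Styler callback is what triggers
    # "AttributeError: 'NoneType' object has no attribute 'rstrip'";
    # an empty string means "no styling" and keeps Styler happy.
    if pd.isnull(val):
        return ''
    if val > 80:
        return 'background-color: red'
    if val > 50:
        return 'background-color: yellow'
    return 'background-color: green'

# usage stays the same as above:
# df.style.apply(color_for_avg_row, axis=1).applymap(color, subset=['Percent_Utilized'])
```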
| <python><pandas><pandas-styles> | 2023-09-07 17:32:54 | 1 | 10,714 | user1471980 |
77,061,446 | 630,544 | Why does Python unittest auto-discovery not work when running in a subprocess? | <p>I'd like to be able to run Python's <code>unittest</code> module programmatically via a subprocess (e.g. <code>subprocess.Popen()</code>, <code>subprocess.run()</code>, <code>asyncio.create_subprocess_exec()</code>) and have it auto-discover tests.</p>
<p>I do not want to run the tests by importing the <code>unittest</code> module into my script, because I would like the same code to be able to run <em>any</em> arbitrary command from the command line, and I'd like to avoid handling running tests differently than other commands.</p>
<h2>Example Code</h2>
<p>Here is a GitHub repository with code that illustrates the issue I'm seeing: <a href="https://github.com/sscovil/python-subprocess" rel="nofollow noreferrer">https://github.com/sscovil/python-subprocess</a></p>
<p>For completeness, I'll include it here as well.</p>
<pre><code>.
├── src
│ ├── __init__.py
│ └── example
│ ├── __init__.py
│ └── runner.py
└── test
├── __init__.py
└── example
├── __init__.py
└── runner_test.py
</code></pre>
<p><strong>src/example/runner.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import os
import shutil
import subprocess
import unittest
from subprocess import CompletedProcess, PIPE
from typing import Final, List
UNIT_TEST_CMD: Final[str] = "python -m unittest discover test '*_test.py' --locals -b -c -f"
def _parse_cmd(cmd: str) -> List[str]:
"""Helper function that splits a command string into a list of arguments with a full path to the executable."""
args: List[str] = cmd.split(" ")
args[0] = shutil.which(args[0])
return args
async def async_exec(cmd: str, *args, **kwargs) -> int:
"""Runs a command using asyncio.create_subprocess_exec() and logs the output."""
cmd_args: List[str] = _parse_cmd(cmd)
process = await asyncio.create_subprocess_exec(*cmd_args, stdout=PIPE, stderr=PIPE, *args, **kwargs)
stdout, stderr = await process.communicate()
if stdout:
print(stdout.decode().strip())
else:
print(stderr.decode().strip())
return process.returncode
def popen(cmd: str, *args, **kwargs) -> int:
"""Runs a command using subprocess.call() and logs the output."""
cmd_args: List[str] = _parse_cmd(cmd)
with subprocess.Popen(cmd_args, stdout=PIPE, stderr=PIPE, text=True, *args, **kwargs) as process:
stdout, stderr = process.communicate()
if stdout:
print(stdout.strip())
else:
print(stderr.strip())
return process.returncode
def run(cmd: str, *args, **kwargs) -> int:
"""Runs a command using subprocess.run() and logs the output."""
cmd_args: List[str] = _parse_cmd(cmd)
process: CompletedProcess = subprocess.run(cmd_args, stdout=PIPE, stderr=PIPE, check=True, *args, **kwargs)
if process.stdout:
print(process.stdout.decode().strip())
else:
print(process.stderr.decode().strip())
return process.returncode
def unittest_discover() -> unittest.TestResult:
"""Runs all tests in the given directory that match the given pattern, and returns a TestResult object."""
start_dir = os.path.join(os.getcwd(), "test")
pattern = "*_test.py"
tests = unittest.TextTestRunner(buffer=True, failfast=True, tb_locals=True, verbosity=2)
results = tests.run(unittest.defaultTestLoader.discover(start_dir=start_dir, pattern=pattern))
return results
def main():
"""Runs the example."""
print("\nRunning tests using asyncio.create_subprocess_exec...\n")
asyncio.run(async_exec(UNIT_TEST_CMD))
print("\nRunning tests using subprocess.Popen...\n")
popen(UNIT_TEST_CMD)
print("\nRunning tests using subprocess.run...\n")
run(UNIT_TEST_CMD)
print("\nRunning tests using unittest.defaultTestLoader...\n")
unittest_discover()
if __name__ == "__main__":
main()
</code></pre>
<p><strong>test/example/runner_test.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import unittest
from src.example.runner import async_exec, popen, run, unittest_discover
class AsyncTestRunner(unittest.IsolatedAsyncioTestCase):
async def test_async_call(self):
self.assertEqual(await async_exec("echo Hello"), 0)
class TestRunners(unittest.TestCase):
def test_popen(self):
self.assertEqual(popen("echo Hello"), 0)
def test_run(self):
self.assertEqual(run("echo Hello"), 0)
def test_unittest_discover(self):
results = unittest_discover()
self.assertEqual(results.testsRun, 4) # There are 4 test cases in this file
if __name__ == "__main__":
unittest.main()
</code></pre>
<h2>Expected Behavior</h2>
<p>When running tests from the command line, Python's <code>unittest</code> module auto-discovers tests in the <code>test</code> directory:</p>
<pre class="lang-bash prettyprint-override"><code>python -m unittest discover test '*_test.py' --locals -bcf
....
----------------------------------------------------------------------
Ran 4 tests in 0.855s
OK
</code></pre>
<h2>Actual Behavior</h2>
<p>...but it fails to auto-discover tests when that same command is run using Python's <code>subprocess</code> module:</p>
<pre class="lang-bash prettyprint-override"><code>$ python -m src.example.runner
Running tests using asyncio.create_subprocess_exec...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Running tests using subprocess.Popen...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Running tests using subprocess.run...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Running tests using unittest.defaultTestLoader...
test_async_call (example.runner_test.AsyncTestRunner.test_async_call) ... ok
test_popen (example.runner_test.TestRunners.test_popen) ... ok
test_run (example.runner_test.TestRunners.test_run) ... ok
test_unittest_discover (example.runner_test.TestRunners.test_unittest_discover) ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.864s
OK
</code></pre>
<p>Note that the <code>unittest.defaultTestLoader</code> test runner works as expected, because it is explicitly using the <code>unittest</code> module to run the other tests. However, when running tests using <code>asyncio.create_subprocess_exec</code>, <code>subprocess.Popen</code>, or <code>subprocess.run</code>, as if using the CLI from the command line, the tests are not auto-discovered.</p>
<h2>Different Python Versions</h2>
<p>If you have Docker installed, you can run the tests in a container using any version of Python you like. For example:</p>
<h3>Python 3.11 on Alpine Linux</h3>
<pre class="lang-bash prettyprint-override"><code>docker run -it --rm -v $(pwd):$(pwd) -w $(pwd) --name test python:3.11-alpine python3 -m src.example.runner
</code></pre>
<h3>Python 3.10 on Ubuntu Linux</h3>
<pre class="lang-bash prettyprint-override"><code>docker run -it --rm -v $(pwd):$(pwd) -w $(pwd) --name test python:3.10 python3 -m src.example.runner
</code></pre>
<p>In every version I tried, from 3.8 to 3.11, I saw the same results.</p>
<h2>Question</h2>
<p>Why does Python <code>unittest</code> auto-discovery not work when running in a subprocess?</p>
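<p>One detail I noticed while preparing this question (an observation, not a confirmed diagnosis): the single quotes around the pattern are shell syntax. The shell strips them before <code>unittest</code> sees the argument, but the naive <code>cmd.split(" ")</code> in <code>_parse_cmd</code> keeps them, so without a shell the pattern arrives as the literal string <code>'*_test.py'</code>, quote characters included, which matches no files:</p>

```python
import shlex

cmd = "python -m unittest discover test '*_test.py' --locals -b -c -f"

# Naive split keeps the shell quoting as literal characters...
naive = cmd.split(" ")
# ...while shlex.split() removes quotes the way a POSIX shell would.
posix = shlex.split(cmd)

print(naive[5])  # the quotes are still part of the pattern argument
print(posix[5])  # the quotes are gone, as when run from a shell
```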
| <python><python-3.x><subprocess><python-unittest><autodiscovery> | 2023-09-07 16:42:51 | 2 | 4,007 | Shaun Scovil |
77,061,416 | 9,271,275 | groupby and compare timestamps in each group pandas | <p>i have the following pandas dataframe:</p>
<pre class="lang-none prettyprint-override"><code>id | start | end |
---|---------------------|--------------------|
TA | 2022-05-20 06:30:36 | 2022-05-20 09:58:52|
TA | 2022-05-20 08:47:13 | 2022-05-20 08:57:47|
TA | 2022-05-20 08:44:11 | 2022-05-20 10:15:14|
TA | 2022-06-10 07:45:11 | 2022-06-10 10:15:14|
TA | 2022-06-10 07:55:11 | 2022-06-10 11:15:14|
BA | 2022-05-24 08:48:12 | 2022-05-24 10:57:27|
BA | 2022-05-24 10:48:29 | 2022-05-24 12:08:54|
RG | 2022-05-31 07:57:26 | 2022-05-31 08:09:46|
RG | 2022-05-31 08:06:50 | 2022-05-31 08:08:49|
RG | 2022-05-31 08:07:51 | 2022-05-31 08:18:37|
</code></pre>
<p>For each id, I want to check whether a row's start timestamp falls between the start and end timestamps of another row in the same group; if it does, I take the lowest value from the start column and the highest value from the end column for those overlapping rows. The resulting dataframe will look as follows:</p>
<pre class="lang-none prettyprint-override"><code>id | start | end |
---|---------------------|--------------------|
TA | 2022-05-20 06:30:36 | 2022-05-20 10:15:14|
TA | 2022-06-10 07:45:11 | 2022-06-10 11:15:14|
BA | 2022-05-24 08:48:12 | 2022-05-24 12:08:54|
RG | 2022-05-31 07:57:26 | 2022-05-31 08:18:37|
</code></pre>
<p>There might be rows in a group whose timestamps do not overlap; those will remain as they are, but rows that overlap, as in the example above, will be reduced and grouped. Can anyone suggest an optimal way to achieve this in Python?</p>
<p><strong>Update</strong><br />
Data is sorted by <em>id</em> and <em>start</em>.</p>
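<p>One direction I have been exploring (a sketch of the interval-merging trick: compare each start with the running maximum end per id, and start a new block whenever the start lies past it):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'id':    ['TA', 'TA', 'TA', 'TA', 'TA', 'BA', 'BA', 'RG', 'RG', 'RG'],
    'start': pd.to_datetime(['2022-05-20 06:30:36', '2022-05-20 08:47:13',
                             '2022-05-20 08:44:11', '2022-06-10 07:45:11',
                             '2022-06-10 07:55:11', '2022-05-24 08:48:12',
                             '2022-05-24 10:48:29', '2022-05-31 07:57:26',
                             '2022-05-31 08:06:50', '2022-05-31 08:07:51']),
    'end':   pd.to_datetime(['2022-05-20 09:58:52', '2022-05-20 08:57:47',
                             '2022-05-20 10:15:14', '2022-06-10 10:15:14',
                             '2022-06-10 11:15:14', '2022-05-24 10:57:27',
                             '2022-05-24 12:08:54', '2022-05-31 08:09:46',
                             '2022-05-31 08:08:49', '2022-05-31 08:18:37']),
})

df = df.sort_values(['id', 'start'], kind='stable')

# A row opens a new block when its start lies past the running maximum
# end seen so far within the same id (or when the id changes).
running_end = df.groupby('id')['end'].cummax()
new_block = (df['start'] > running_end.shift()) | (df['id'] != df['id'].shift())

out = (df.groupby([df['id'], new_block.cumsum()])
         .agg(start=('start', 'min'), end=('end', 'max'))
         .droplevel(1)
         .reset_index())
```

<p>Note the output rows come back sorted by id rather than in the original id order.</p>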
| <python><pandas><group-by> | 2023-09-07 16:38:19 | 2 | 407 | Hanif |
77,061,401 | 4,299,527 | How to properly use Modulo in Rabin Karp algorithm? | <p>I am trying to solve leetcode problem <a href="https://leetcode.com/problems/repeated-dna-sequences/description/" rel="nofollow noreferrer">187. Repeated DNA Sequences</a> using Rabin Karp algorithm with rolling hash approach. At first, I solved the problem without using any MOD operations like below.</p>
<pre><code>class Solution:
def calculate_hash(self, prime, text):
hash_value = 0
for i in range(len(text)):
hash_value = hash_value + (ord(text[i]) * pow(prime, i))
return hash_value
def recalculate_hash(self, prime, old_hash, text, index, L):
new_hash = old_hash - ord(text[index - 1])
new_hash /= prime
new_hash = new_hash + (ord(text[index + L - 1]) * pow(prime, L - 1))
return new_hash
def findRepeatedDnaSequences(self, s: str) -> List[str]:
L, s_len = 10, len(s)
if s_len <= L:
return []
prime = 7
seen, res = set(), set()
old_hash = self.calculate_hash(prime, s[0:L])
seen.add(old_hash)
for i in range(1, s_len - L + 1):
new_hash = self.recalculate_hash(prime, old_hash, s, i, L)
if new_hash in seen:
res.add(s[i:i+L])
seen.add(new_hash)
old_hash = new_hash
return list(res)
</code></pre>
<p>In the above approach, I used the little endian approach and I had to use division during the calculation of the rolling hash value. However, I came across to this <a href="https://stackoverflow.com/a/50822610/4299527">answer</a> in StackOverlow, where the big endian approach is suggested like below.</p>
<p><a href="https://i.sstatic.net/Znm0o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Znm0o.png" alt="enter image description here" /></a></p>
<p>I tried to apply the approach but am not getting the desired answer. This is my attempt:</p>
<pre><code>class Solution:
def calculate_hash(self, prime, text, L, MOD):
hash_value = 0
for i in range(len(text)):
p_power = pow(prime, L - i - 1, MOD)
hash_value = (hash_value + (ord(text[i]) * p_power)) % MOD
return hash_value
def recalculate_hash(self, prime, old_hash, text, index, L, MOD):
p_power = pow(prime, L - 1, MOD)
new_hash = (old_hash * prime)
new_hash = (new_hash - (ord(text[index - 1]) * p_power) + ord(text[index + L - 1])) % MOD
return new_hash
def findRepeatedDnaSequences(self, s: str) -> List[str]:
L, s_len = 10, len(s)
if s_len <= L:
return []
prime = 7
MOD = 2**31 - 1
seen, res = set(), set()
old_hash = self.calculate_hash(prime, s[0:L], L, MOD)
seen.add(old_hash)
for i in range(1, s_len - L + 1):
new_hash = self.recalculate_hash(prime, old_hash, s, i, L, MOD)
if new_hash in seen:
res.add(s[i:i+L])
seen.add(new_hash)
old_hash = new_hash
return list(res)
</code></pre>
<p><strong>What am I missing here during the MOD calculation?</strong></p>
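<p>While preparing this question I also wrote a small self-check of the rolling update I <em>think</em> the big-endian scheme requires (subtract the outgoing character's <code>prime**(L-1)</code> term <em>before</em> multiplying by <code>prime</code>; in <code>recalculate_hash</code> above the subtraction happens after the multiply, so the outgoing weight would need to be <code>prime**L</code> instead):</p>

```python
def direct_hash(text, prime, mod):
    # Big-endian polynomial hash: the first character carries the
    # highest power of the prime.
    h = 0
    for ch in text:
        h = (h * prime + ord(ch)) % mod
    return h

def roll(old_hash, out_ch, in_ch, prime, length, mod):
    # 1) subtract the outgoing character's term (weight prime**(length-1))
    # 2) shift the remaining terms up by one power
    # 3) append the incoming character at weight prime**0
    h = (old_hash - ord(out_ch) * pow(prime, length - 1, mod)) % mod
    return (h * prime + ord(in_ch)) % mod

s = "ACGTACGTACCCCCAAAAA"
L, prime, MOD = 10, 7, 2**31 - 1

h = direct_hash(s[:L], prime, MOD)
ok = True
for i in range(1, len(s) - L + 1):
    h = roll(h, s[i - 1], s[i + L - 1], prime, L, MOD)
    ok = ok and h == direct_hash(s[i:i + L], prime, MOD)
```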
| <python><hash><modulo><rabin-karp> | 2023-09-07 16:36:08 | 1 | 12,152 | Setu Kumar Basak |
77,061,276 | 5,287,011 | Dataframe filtering with multiple conditions | <p>I have a data frame (transfers) that I need to filter based on its comparison with another data frame (make_costs).</p>
<p>transfers:
<a href="https://i.sstatic.net/H8mCN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H8mCN.png" alt="enter image description here" /></a></p>
<p>make_costs:</p>
<p><a href="https://i.sstatic.net/g8Smh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g8Smh.png" alt="enter image description here" /></a></p>
<p>Here is what I need:
Remove SourceEquipmentID and DestinationEquipmentID combinations when TransferType is 'Make' or 'MakeTransport' and a combination of SourceEquipmentID x DestinationEquipmentID is not found in make_costs.</p>
<p>Here is my code:</p>
<pre><code>mask = ((~transfers['TransferType'].isin(['Make', 'MakeTransport'])) | (transfers[['SourceEquipmentID', 'DestinationEquipmentID']] <= make_costs[['SourceEquipmentID', 'DestinationEquipmentID']].values))
transfers1 = transfers[mask]
</code></pre>
<p>I am getting the following error:</p>
<pre><code>236 mask = ((~transfers['TransferType'].isin(['Make', 'MakeTransport'])) |
--> 237 (transfers[['SourceEquipmentID', 'DestinationEquipmentID']] <= make_costs[[
238 'SourceEquipmentID', 'DestinationEquipmentID']].values))
File ~/anaconda3/lib/python3.10/site-packages/pandas/core/ops/common.py:81, in _unpack_zerodim_and_defer.<locals>.new_method(self, other)
77 return NotImplemented
79 other = item_from_zerodim(other)
---> 81 return method(self, other)
</code></pre>
<p>File ~/anaconda3/lib/python3.10/site-packages/pandas/core/arraylike.py:52, in OpsMixin.<strong>le</strong>(self, other)
50 @unpack_zerodim_and_defer("<strong>le</strong>")
51 def <strong>le</strong>(self, other):
---> 52 return self._cmp_method(other, operator.le)
File ~/anaconda3/lib/python3.10/site-packages/pandas/core/frame.py:7442, in DataFrame._cmp_method(self, other, op)
7439 def _cmp_method(self, other, op):
7440 axis: Literal<a href="https://i.sstatic.net/H8mCN.png" rel="nofollow noreferrer">1</a> = 1 # only relevant for Series other case
-> 7442 self, other = ops.align_method_FRAME(self, other, axis, flex=False, level=None)
7444 # See GH#4537 for discussion of scalar op behavior
7445 new_data = self._dispatch_frame_op(other, op, axis=axis)</p>
<pre><code>File ~/anaconda3/lib/python3.10/site-packages/pandas/core/ops/__init__.py:288, in align_method_FRAME(left, right, axis, flex, level)
285 right = to_series(right[0, :])
287 else:
--> 288 raise ValueError(
289 "Unable to coerce to DataFrame, shape "
290 f"must be {left.shape}: given {right.shape}"
291 )
293 elif right.ndim > 2:
294 raise ValueError(
295 "Unable to coerce to Series/DataFrame, "
296 f"dimension must be <= 2: {right.shape}"
297 )
ValueError: Unable to coerce to DataFrame, shape must be (81, 2): given (6, 2)
</code></pre>
<p>Both data frames have different shapes by design. I am trying to eliminate from "transfers" all SourceEquipmentID x DestinationEquipmentID combinations that do not exist in the "make_costs" data frame.</p>
<p>Please, help!!</p>
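<p>In case it clarifies the intent, here is a sketch on tiny hypothetical data using a left merge with <code>indicator=True</code> (the column names match mine; the values are made up):</p>

```python
import pandas as pd

# Hypothetical miniature data; only the column names match my real frames
transfers = pd.DataFrame({
    'TransferType': ['Make', 'MakeTransport', 'Load', 'Make'],
    'SourceEquipmentID': [1, 1, 2, 3],
    'DestinationEquipmentID': [10, 11, 12, 13],
})
make_costs = pd.DataFrame({
    'SourceEquipmentID': [1, 3],
    'DestinationEquipmentID': [10, 13],
})

# indicator=True adds a _merge column telling us whether each
# Source x Destination pair exists in make_costs
merged = transfers.merge(
    make_costs[['SourceEquipmentID', 'DestinationEquipmentID']].drop_duplicates(),
    on=['SourceEquipmentID', 'DestinationEquipmentID'],
    how='left', indicator=True)

# Keep a row if it is not Make/MakeTransport, or if its pair was found
keep = (~merged['TransferType'].isin(['Make', 'MakeTransport'])
        | (merged['_merge'] == 'both'))
result = merged[keep].drop(columns='_merge')
```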
| <python><dataframe><filtering> | 2023-09-07 16:14:58 | 1 | 3,209 | Toly |
77,061,268 | 7,169,895 | Is there a way to access the WebExtension's history to access browser history (not session history) | <p>I saw <a href="https://stackoverflow.com/questions/22414735/save-and-load-browser-history-for-selenium">this</a> question which navigates the session history. It links to the WebExtension history object <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/history" rel="nofollow noreferrer">here</a>. I was wondering how one uses this to get a list of history items in Python.</p>
<p>The following code gives me an error that search is not found in the history object despite the documentation saying it is:</p>
<pre><code>from selenium.webdriver.firefox import webdriver
driver = webdriver.WebDriver()
script = 'history.search({})'
driver.execute_script(script)
</code></pre>
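<p>Note: <code>execute_script</code> runs page JavaScript, where <code>history</code> is the session-history object rather than the WebExtension <code>browser.history</code> API, so <code>search</code> is genuinely absent there. One alternative I have been considering (a sketch; the path is hypothetical): Firefox keeps browsing history in the profile's <code>places.sqlite</code>, which can be read directly:</p>

```python
import sqlite3

def read_history(places_db_path, limit=10):
    """Return (url, title, last_visit_date) rows from a Firefox profile's
    places.sqlite, newest first. Copy the file first if the browser is
    still running, since Firefox keeps the database locked."""
    con = sqlite3.connect(places_db_path)
    try:
        return con.execute(
            "SELECT url, title, last_visit_date FROM moz_places "
            "ORDER BY last_visit_date DESC LIMIT ?", (limit,)).fetchall()
    finally:
        con.close()

# e.g. read_history('/path/to/selenium-profile/places.sqlite')
```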
| <python><selenium-webdriver> | 2023-09-07 16:13:44 | 0 | 786 | David Frick |
77,061,230 | 217,844 | Python / asyncio: How to test GraphQL subscriptions? | <p>I have created a GraphQL API on AWS AppSync and I need to run tests to verify things work as expected.</p>
<p>I have implemented a command line API client in Python with <a href="https://pypi.org/project/gql" rel="nofollow noreferrer"><code>gql</code></a> to interact with the API:</p>
<ul>
<li>I can run queries ✅</li>
<li>I can run mutations ✅</li>
<li>I can start a subscription, run a matching mutation in a second Terminal tab and see the expected message from the subscription about the change in the first tab ✅</li>
</ul>
<p>Alternatively, I can use the AWS AppSync console to perform the same actions.</p>
<p>Further, I have created a suite of tests that use <a href="https://pypi.org/project/pytest" rel="nofollow noreferrer"><code>pytest</code></a> along with various plugins (most prominently <a href="https://pypi.org/project/pytest-asyncio" rel="nofollow noreferrer"><code>pytest-asyncio</code></a>):</p>
<ul>
<li>Tests for queries and mutations were straight-forward to implement; they work fine. ✅</li>
</ul>
<p>However, I am struggling with implementing the subscription tests: As opposed to queries and mutations, a subscription run by itself obviously doesn't really do anything, it only responds to changes to the API resource it is subscribed to (typically triggered by a mutation); testing a subscription therefore needs to take the following actions:</p>
<ol>
<li>start the subscription under test</li>
<li>run a mutation that changes the API resource the subscription is subscribed to</li>
<li>assert the subscription reports the expected API resource change</li>
<li>stop the subscription under test</li>
</ol>
<p>There is sample code in the <a href="https://gql.readthedocs.io/en/stable/advanced/async_advanced_usage.html" rel="nofollow noreferrer"><code>gql</code> docs</a> for running multiple GraphQL queries in parallel using <a href="https://docs.python.org/3/library/asyncio.html" rel="nofollow noreferrer"><code>asyncio</code></a>, but unfortunately, that example just keeps running until stopped manually (e.g. using <code>CTRL-C</code> or so) - which obviously is not an option for a test suite that should run non-interactively.</p>
<h3>My problem:</h3>
<p>I don't know how to use <code>asyncio</code> to implement the steps above.</p>
<p>Subscriptions are run asynchronously, see for example <code>execute_subscription1</code> in the <code>gql</code> docs linked above. How do I do step 1 / start a subscription (which implies <code>await</code>ing it, doesn't it ?) - while staying unblocked in the main thread, so I can do step 2 / run a mutation ? I tried using <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.to_thread" rel="nofollow noreferrer"><code>asyncio.to_thread(...)</code></a>, but from what I understand, using that with <code>await asyncio.gather(...)</code> as shown in the linked Python docs (and my sample code below), i.e. running a function in a separate thread doesn't seem to mean I'm decoupled from it, does it ? I'm still blocked waiting for the subscription function to finish (which never happens...)</p>
<p>I <em>think</em> I should be able to figure out step 3 by myself - but looking ahead to step 4, it seems I need to create a <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.Task" rel="nofollow noreferrer"><code>Task</code></a> and <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.cancel" rel="nofollow noreferrer"><code>cancel()</code></a> that, right ?</p>
<p>One further problem: The entire system doesn't seem exactly super fast: Running any operation takes a couple of seconds at least and a subscription doesn't seem to start listening immediately, but takes a little to start up. Therefore, the mutation might already have happened before the subscription was ready.
I might need some mechanism to ensure step 2 is only kicked off once step 1 has fully started up. I <em>think</em> <a href="https://docs.python.org/3/library/asyncio-sync.html#asyncio.Event" rel="nofollow noreferrer"><code>asyncio.Event</code></a> or <a href="https://docs.python.org/3/library/asyncio-sync.html#asyncio.Condition" rel="nofollow noreferrer"><code>asyncio.Condition</code></a> might help - but considering the amount of question marks over my head right now, it seems advisable to run all this by the SO swarm first.</p>
<h3>My question:</h3>
<p>How do I run an asynchronous function (the subscription), then run the mutation (sync? async?), and then cancel the asynchronous one - all in the same process (possibly in different threads)?</p>
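<p>To make the shape of the question concrete, here is a minimal sketch of the task/event pattern I <em>think</em> I need (with stand-in coroutines instead of the real gql calls; the real subscription would iterate over the client's async generator):</p>

```python
import asyncio

async def subscribe(ready: asyncio.Event, received: list):
    # Stand-in for the real gql subscription loop
    ready.set()                       # step 1 complete: we are listening
    while True:
        await asyncio.sleep(0.01)     # pretend to wait for a message
        received.append('event')

async def mutate():
    await asyncio.sleep(0.05)         # stand-in for the real mutation
    return 'mutated'

async def main():
    ready = asyncio.Event()
    received = []
    task = asyncio.create_task(subscribe(ready, received))  # step 1
    await ready.wait()                # don't mutate before the sub is live
    outcome = await mutate()          # step 2
    await asyncio.sleep(0.1)          # step 3: give messages time to arrive
    task.cancel()                     # step 4
    try:
        await task
    except asyncio.CancelledError:
        pass
    return outcome, received

outcome, received = asyncio.run(main())
print(outcome, len(received) > 0)
```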
<h3>What I have so far:</h3>
<p>The code I have so far is rather involved, it's not easy to reduce it to a digestable form - let alone present a concise, self-contained working example here (which anyway mandates access to the AppSync API); trying anyway to give some idea:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import asyncio
from gql import Client, gql
from gql.transport import aiohttp, appsync_auth, appsync_websockets
# authenticate with AWS Cognito to get a JWT
access_token = _get_access_token()
# get this from AWS AppSync console
endpoint = '<some random ID>'
host = f'{endpoint}.appsync-api.us-east-1.amazonaws.com'
url_gql = f'https://{host}/graphql'
url_wss = f'wss://{endpoint}.appsync-realtime-api.us-east-1.amazonaws.com/graphql'
auth = appsync_auth.AppSyncJWTAuthentication(host=host, jwt=access_token)
def run_subscription():
query = gql('''
subscription mySubscription {
...
}
''')
transport = appsync_websockets.AppSyncWebsocketsTransport(auth=auth, url=url_wss)
client = Client(transport=transport)
print ('run_subscription')
for result in client.subscribe(query):
print (f'subscription result: {result}')
# from https://gql.readthedocs.io/en/stable/usage/basic_usage.html:
# basic example won’t work if you have an asyncio event loop running
# --> I _think_ I need to run asynchronously:
# https://gql.readthedocs.io/en/stable/async/async_usage.html
async def run_mutation():
query = gql('''
mutation myMutation($mut_var: SomeType!) {
...
}
''')
params = {"mut_var": {
... (data for SomeType) ...
}}
print ('run_mutation')
transport = aiohttp.AIOHTTPTransport(auth=auth, url=url_gql)
async with Client(transport=transport) as session:
result = await session.execute(query, variable_values=params)
print (f'mutation result: {result}')
async def main():
# THIS IS WHAT I AM STRUGGLING WITH:
# I don't think I can just start both functions together...
await asyncio.gather(asyncio.to_thread(run_subscription),
run_mutation())
    # TODO: how do I stop the subscription here?
asyncio.run(main())
</code></pre>
<p>Output:</p>
<pre><code>run_mutation
run_subscription
mutation result: ... (matches expectation) ...
</code></pre>
<p>From what I understand, <code>run_subscription</code> and <code>run_mutation</code> are started simultaneously and the subscription is not ready to listen yet, hence no output from it.</p>
<h3>More thoughts:</h3>
<p>I am somewhat surprised I haven't been able to find a lot of online resources about this (JavaScript example <a href="https://speckle.systems/blog/testing-gql-subs" rel="nofollow noreferrer">here</a>) - doesn't anyone test their subscriptions ?!? Also, am I at least heading in the general right direction ? I thought about pivoting the entire approach and start the subscription as a pytest fixture, but that really seems weird, sort of makes the subject under test a part of the test environment...</p>
<p>I would greatly appreciate any help on this. Thank you very much.</p>
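To make the orchestration I'm aiming for concrete, here is a gql-free sketch of the start / wait-until-ready / mutate / cancel pattern (the <code>fake_*</code> coroutines are placeholders I made up, not real gql calls):

```python
import asyncio

async def fake_subscription(ready: asyncio.Event, results: list) -> None:
    # Stand-in for the gql subscription loop: signal readiness first,
    # then keep "receiving" events until cancelled.
    ready.set()
    while True:
        await asyncio.sleep(0.01)
        results.append("event")

async def fake_mutation() -> str:
    # Stand-in for the gql mutation.
    await asyncio.sleep(0.05)
    return "mutated"

async def main():
    ready = asyncio.Event()
    results: list = []
    task = asyncio.create_task(fake_subscription(ready, results))  # step 1: does NOT block
    await ready.wait()               # only mutate once the listener is up
    outcome = await fake_mutation()  # step 2: run the mutation
    await asyncio.sleep(0.05)        # step 3: give the subscription time to observe events
    task.cancel()                    # step 4: stop the subscription
    try:
        await task
    except asyncio.CancelledError:
        pass
    return outcome, results

outcome, results = asyncio.run(main())
```

If this is the right shape, the real version would presumably replace the placeholders with the gql subscription loop and mutation call.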
| <python><asynchronous><concurrency><graphql><python-asyncio> | 2023-09-07 16:08:59 | 1 | 9,959 | ssc |
77,061,105 | 17,873,096 | ValueError: The 'ydata_profiling' package was not installed in a way that PackageLoader understands | <p>I compiled an app using <code>PyInstaller</code> and got this error popup when opening the compiled .exe file:</p>
<pre><code>Traceback (most recent call last):
File "myuniverse.py", line 133, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "ydata_profiling\__init__.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "ydata_profiling\compare_reports.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "ydata_profiling\profile_report.py", line 33, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "ydata_profiling\report\presentation\flavours\html\__init__.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "ydata_profiling\report\presentation\flavours\html\alerts.py", line 2, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "ydata_profiling\report\presentation\flavours\html\templates.py", line 11, in <module>
File "jinja2\loaders.py", line 323, in __init__
ValueError: The 'ydata_profiling' package was not installed in a way that PackageLoader understands.
</code></pre>
<p>What did NOT work:</p>
<ul>
<li>Adding the ydata_profiling path to the PyInstaller command via <code>-p=""</code></li>
<li>Adding the 'Initializing Jinja' code block like in <a href="https://github.com/ydataai/ydata-profiling/issues/942" rel="nofollow noreferrer">this page</a> to the ydata_profiling <code>__init__</code> file source</li>
</ul>
| <python><jinja2><pyinstaller> | 2023-09-07 15:50:06 | 1 | 366 | vERISBABY. |
77,061,093 | 3,246,693 | Efficiently merging dataframe rows based on multiple columns and substrings | <p>I am working on analyzing sudo logs from a large number of *nix systems. One issue I've run into is that every now and then someone runs a massively long command (typically a bunch of concatenated commands) and sudo logs it as multiple events in syslog, due to event size limitations in both sudo and syslog, so I am trying to find a good way to reassemble the full command.</p>
<p>I am pulling the data out of my SIEM and loading it into a dataframe, and I'm able to easily parse out the necessary columns (e.g. account, server, command, etc.); however, I am running into an issue flattening/combining the commands.</p>
<p>The following code works, but is painfully slow on a large number of records.</p>
<pre><code>import pandas as pd

# Load some dummy data
data = [
["1694030392144", "server1", "bob" , "/home/bob/", "command=a bunch of commands here; " ],
["1694030392145", "server1", "bob" , "/home/bob/", "(command continued) more commands here; " ],
["1694030392146", "server1", "bob" , "/home/bob/", "(command continued) even commands here; " ],
["1694030392147", "server1", "bob" , "/home/bob/", "(command continued) yet more commands here"],
["1694030392148", "server9", "bob" , "/home/bob/", "(command continued) WTF" ],
["1694030392149", "server2", "bob" , "/home/bob/", "command=a new command" ],
["1694030392150", "server3", "bob" , "/home/bob/", "command=I did something; " ],
["1694030392151", "server3", "bob" , "/home/bob/", "(command continued) I did another thing" ],
["1694030392152", "server2", "fred", "/" , "command=a new command" ],
["1694030392153", "server1", "todd", "/tmp/" , "command=I did something; " ],
["1694030392154", "server1", "todd", "/tmp/" , "(command continued) I did another thing" ]
]
df = pd.DataFrame(data, columns=['epoch', 'server', 'account', 'pwd', 'command'])
# Data is typically in the correct order when loaded, but sorting just to be safe.
df.sort_values(['account','server','epoch'], inplace=True, ignore_index=True)
# Initialise tracking variables so the first comparison doesn't raise a NameError
curServer = curAccount = None
FirstLoop = True
for index, row in df.iterrows():
# If the server or account changed, or it begins with "command =" again...
    if curServer != row["server"] or curAccount != row["account"] or \
            row['command'].startswith("command="):
if FirstLoop == True:
FirstLoop = False
else:
df.at[(curCommandIndex), 'command'] = curCommand
# Index of current command grouping
curCommandIndex = index
# Starting building the full command and replace unecessary strings
if row['command'].startswith("(command continued)"):
curCommand = row['command'].replace("(command continued) ", "")
elif row['command'].startswith("command="):
curCommand = row['command'].replace("command=", "")
else:
curCommand = row['command']
# Otherwise concat the commands together
else:
if row['command'].startswith("(command continued)"):
curCommand = curCommand + row['command'].replace("(command continued) ", "")
elif row['command'].startswith("command="):
curCommand = curCommand + row['command'].replace("command=", "")
else:
curCommand = curCommand + row['command']
# Drop the row after concating it onto the command
df.drop(index, inplace=True)
curAccount = row['account']
    curServer = row['server']

# After the loop, write the last group's reassembled command back
df.at[curCommandIndex, 'command'] = curCommand
</code></pre>
<p>Is there a faster, more pandas'esque, way of accomplishing this?</p>
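For context, one direction I've been experimenting with (a sketch on a trimmed copy of the dummy data, not validated against real logs) replaces the loop with a cumulative group id and a groupby:

```python
import pandas as pd

# Same shape as the dummy data above, trimmed for brevity
data = [
    ["1694030392144", "server1", "bob", "/home/bob/", "command=a bunch of commands here; "],
    ["1694030392145", "server1", "bob", "/home/bob/", "(command continued) more commands here"],
    ["1694030392149", "server2", "bob", "/home/bob/", "command=a new command"],
    ["1694030392153", "server1", "todd", "/tmp/", "command=I did something; "],
    ["1694030392154", "server1", "todd", "/tmp/", "(command continued) I did another thing"],
]
df = pd.DataFrame(data, columns=["epoch", "server", "account", "pwd", "command"])
df.sort_values(["account", "server", "epoch"], inplace=True, ignore_index=True)

# A row starts a new command group when it begins with "command=" or
# when the (account, server) pair changes relative to the previous row.
new_group = (
    df["command"].str.startswith("command=")
    | (df[["account", "server"]] != df[["account", "server"]].shift()).any(axis=1)
)
df["group"] = new_group.cumsum()

# Strip the markers, then concatenate each group's fragments
cleaned = (
    df["command"]
    .str.replace("(command continued) ", "", regex=False)
    .str.replace("command=", "", regex=False)
)
result = (
    df.assign(command=cleaned)
    .groupby("group", as_index=False)
    .agg({"epoch": "first", "server": "first", "account": "first",
          "pwd": "first", "command": "".join})
)
```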
| <python><python-3.x><pandas><dataframe> | 2023-09-07 15:48:21 | 1 | 803 | user3246693 |
77,060,997 | 4,877,683 | Dynamic Image Outputs in Gradio | <p>I want to create a Gradio Application which takes in a query as the input and displays image content in text outputs and the corresponding images in image outputs.
You can assume there will always be 4 results to output.</p>
<p>My code looks like this:</p>
<pre><code> def gradio_fn(query):
content, image_paths = answer_question(query) #This is where the image paths are generated
return content
output_texts = [gr.Textbox() for i in range(4)]
input_text = gr.Textbox()
demo = gr.Interface(
fn=gradio_fn,
inputs=input_text,
outputs=output_texts
)
demo.launch(share = True)
</code></pre>
<p>where 'image_paths' are paths to the images that I want to output
e.g. '/dbfs/FileStore/d/llm_0.png'</p>
<p>Currently I've only tried to output the text i.e. 'content' and its working (See picture below)</p>
<p><a href="https://i.sstatic.net/zM8k6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zM8k6.png" alt="enter image description here" /></a></p>
<p>Now I want to add image outputs next to the text outputs. From my understanding i need to pass the image path to the gradio image function e.g. gr.Image(value= '/dbfs/FileStore/d/llm_0.png')</p>
<p>How can I achieve that when my image paths are generated only after the submit button is clicked?</p>
| <python><image><visualization><huggingface><gradio> | 2023-09-07 15:34:35 | 0 | 703 | Danish Zahid Malik |
77,060,949 | 850,781 | Marking unused parameters in functions passed as arguments | <p>I have a higher order function:</p>
<pre><code>def mymap(f,...):
...
x = f(a, logger)
...
</code></pre>
<p>and I need to pass to it <code>f</code> that <em>needs</em> only one argument:</p>
<pre><code>def bigfun(...):
...
def f(a, logger):
return a
mymap(f, ...)
...
</code></pre>
<p>the code above works fine, but <code>pylint</code> complains about</p>
<pre><code>Unused argument 'logger' (unused-argument)
</code></pre>
<p>If I prefix the unused parameter of <code>f</code> with <code>_</code>:</p>
<pre><code>def bigfun(...):
...
def f(a, _logger):
return a
mymap(f, ...)
...
</code></pre>
<p>the code breaks with</p>
<pre><code>TypeError: bigfun.<locals>.f() got an unexpected keyword argument 'logger'
</code></pre>
<p>I can, of course, add <code># pylint: disable=unused-argument</code> to <code>f</code>, but is this TRT?</p>
| <python><pylint><optional-parameters><unused-variables> | 2023-09-07 15:26:14 | 1 | 60,468 | sds |
77,060,929 | 5,274,291 | Cognito not running custom auth challenge | <p>I have created in Cognito the following custom challenge triggers in Python. They are identical to the AWS ones but written in Python.</p>
<p>Define auth challenge:</p>
<pre class="lang-py prettyprint-override"><code>def lambda_handler(event, context):
if (len(event['request']['session']) == 1 and event['request']['session'][0]['challengeName'] == "SRP_A"):
event['response']['issueTokens'] = False
event['response']['failAuthentication'] = False
event['response']['challengeName'] = "PASSWORD_VERIFIER"
elif (len(event['request']['session']) == 2 and event['request']['session'][1]['challengeName'] == "PASSWORD_VERIFIER" and event['request']['session'][1]['challengeResult'] is True):
event['response']['issueTokens'] = False
event['response']['failAuthentication'] = False
event['response']['challengeName'] = "CUSTOM_CHALLENGE"
elif (len(event['request']['session']) == 3 and event['request']['session'][2]['challengeName'] == "CUSTOM_CHALLENGE" and event['request']['session'][2]['challengeResult'] is True):
event['response']['issueTokens'] = False
event['response']['failAuthentication'] = False
event['response']['challengeName'] = "CUSTOM_CHALLENGE"
elif (len(event['request']['session']) == 4 and event['request']['session'][3]['challengeName'] == "CUSTOM_CHALLENGE" and event['request']['session'][3]['challengeResult'] is True):
event['response']['issueTokens'] = True
event['response']['failAuthentication'] = False
else:
event['response']['issueTokens'] = False
event['response']['failAuthentication'] = True
return event
</code></pre>
<p>Create auth-challenge:</p>
<pre class="lang-py prettyprint-override"><code>def lambda_handler(event, context):
if event['request']['challengeName'] != "CUSTOM_CHALLENGE":
return event
if len(event['request']['session']) == 2:
event['response']['publicChallengeParameters'] = {}
event['response']['privateChallengeParameters'] = {}
event['response']['publicChallengeParameters']['captchaUrl'] = "url/123.jpg"
event['response']['privateChallengeParameters']['answer'] = "5"
if len(event['request']['session']) == 3:
event['response']['publicChallengeParameters'] = {}
event['response']['privateChallengeParameters'] = {}
event['response']['publicChallengeParameters']['securityQuestion'] = "Who is your favorite team mascot?"
event['response']['privateChallengeParameters']['answer'] = "Peccy"
return event
</code></pre>
<p>And verify auth challenge:</p>
<pre class="lang-py prettyprint-override"><code>def lambda_handler(event, context):
if event['request']['privateChallengeParameters']['answer'] == event['request']['challengeAnswer']:
event['response']['answerCorrect'] = True
else:
event['response']['answerCorrect'] = False
return event
</code></pre>
<p>I also added to the Cognito User Pool Client authorization flow the permission: <code>ALLOW_CUSTOM_AUTH</code>.</p>
<p>And then I created the following Python client to test my custom Lambdas:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
from pycognito import AWSSRP
session = boto3.Session(profile_name="aws-profile")
cognito = session.client('cognito-idp')
cognito_user_pool_id = "xx-xxxx-x_xxxxxxxxx"
cognito_user_pool_client_id = "xxxxxxxxxxxxxxxxxxxxxxxxxx"
username = "xxxx"
password = "xxxx"
aws_srp = AWSSRP(
username=username,
password=password,
pool_id=cognito_user_pool_id,
client_id=cognito_user_pool_client_id,
client=cognito
)
print("\nInitiate CUSTOM_AUTH with SRP_A challenge")
auth_params = aws_srp.get_auth_params()
resp = cognito.initiate_auth(
AuthFlow='CUSTOM_AUTH',
AuthParameters={**auth_params, **{
"CHALLENGE_NAME": "SRP_A"
}},
ClientId=cognito_user_pool_client_id
)
print(resp)
print("\nRespond to PASSWORD_VERIFIER challenge")
assert resp["ChallengeName"] == "PASSWORD_VERIFIER"
challenge_response = aws_srp.process_challenge(resp["ChallengeParameters"], auth_params)
resp = cognito.respond_to_auth_challenge(
ClientId=cognito_user_pool_client_id,
ChallengeName="PASSWORD_VERIFIER",
ChallengeResponses={**challenge_response}
)
print(resp)
</code></pre>
<p>However when I run this code I get the access and ID tokens.</p>
<p>It's worth mentioning the <code>define auth challenge</code> lambda function is executing fine for the <code>SRP_A</code> flow. The issue is that after I send back the <code>PASSWORD_VERIFIER</code>, I was expecting to receive the first CUSTOM_CHALLENGE I have defined, but instead I'm getting the tokens.</p>
<p>Does anybody know what that could be?</p>
| <python><boto3><amazon-cognito> | 2023-09-07 15:23:06 | 1 | 1,578 | João Pedro Schmitt |
77,060,927 | 1,164,246 | Selenium: getting the latest HTML code after external modification to the web page | <p>I have a web page that is modified externally. For example, new text is written to text boxes, checkmarks are checked, etc. I would imagine that these changes are reflected in the HTML code of the web page. If so, I would like to use <code>Selenium</code> to retrieve the most modified HTML of the web page that encompass the changes.</p>
<p>I have tried using <code>driver.page_source</code> but it seems like it's returning the default HTML code (before any changes). I have also tried <code>driver.execute_script("return document.documentElement.outerHTML;")</code> but no luck. Wondering if you have any suggestions for me.</p>
| <python><html><selenium-webdriver> | 2023-09-07 15:22:56 | 0 | 6,057 | Daniel |
77,060,689 | 1,354,517 | how to ensure tensorflow gpu version is in use with docker image? | <p>I am using TensorFlow 2.13 with the help of the Dockerfile available in the TensorFlow models/research/object_detection/dockerfiles folder.</p>
<p>The Dockerfile contents are:</p>
<pre><code>FROM tensorflow/tensorflow:latest-gpu
ARG DEBIAN_FRONTEND=noninteractive
# Install apt dependencies
RUN apt-get update && apt-get install -y \
git \
gpg-agent \
python3-cairocffi \
protobuf-compiler \
python3-pil \
python3-lxml \
python3-tk \
python3-opencv \
libssl-dev \
software-properties-common \
wget
WORKDIR /home/tensorflow
## Copy this code (make sure you are under the ../models/research directory)
COPY models/research/. /home/tensorflow/models
# Compile protobuf configs
RUN (cd /home/tensorflow/models/ && protoc object_detection/protos/*.proto --python_out=.)
WORKDIR /home/tensorflow/models/
RUN cp object_detection/packages/tf2/setup.py ./
ENV PATH="/home/tensorflow/.local/bin:${PATH}"
RUN python -m pip install -U pip
RUN python -m pip install .
COPY scripts /home/tensorflow/
COPY workspace /home/tensorflow/
#ENTRYPOINT ["python", "object_detection/model_main_tf2.py"]
</code></pre>
<p>After all due steps the docker image runs. My host machine has GPU GTX 1650. I tried to test if my tensorflow installation is using GPU using test_tf.py as below</p>
<pre><code>import tensorflow as tf
if tf.test.gpu_device_name():
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
print("Please install GPU version of TF")
</code></pre>
<p>Here is the output from test_tf.py</p>
<pre><code>root@5433479cb167:/home/tensorflow# python3 test_tf.py
2023-09-07 14:43:50.620651: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-07 14:43:53.994502: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: UNKNOWN ERROR (34)
Please install GPU version of TF
</code></pre>
<p>Using Docker is critical for my work, so I need to figure out a solution. From the message, my understanding is that TF detects the GPU but is unable to use it. The last message is confusing since the base image in use is <code>FROM tensorflow/tensorflow:latest-gpu</code>.</p>
<p>I started training on a small dataset (50 images) and it seems to be using my CPU to its full extent. My training loop is stuck with the following message on the console:</p>
<pre><code>I0907 14:31:03.622151 140609981511424 api.py:460] feature_map_spatial_dims: [(128, 128), (64, 64), (32, 32), (16, 16), (8, 8)]
I0907 14:31:10.580329 140609981511424 api.py:460] feature_map_spatial_dims: [(128, 128), (64, 64), (32, 32), (16, 16), (8, 8)]
I0907 14:31:16.743497 140609981511424 api.py:460] feature_map_spatial_dims: [(128, 128), (64, 64), (32, 32), (16, 16), (8, 8)]
I0907 14:31:23.568284 140609981511424 api.py:460] feature_map_spatial_dims: [(128, 128), (64, 64), (32, 32), (16, 16), (8, 8)]
</code></pre>
<p>Prior to this run my training loop had crashed with the same 4 msgs on console and I had to reduce my batch size which allows the training to now continue. But I have no loss messages on console yet.</p>
<p>Please suggest any steps I can take to address / resolve these issues.</p>
<p>Update: after about an hour my training loop produced a loss statement, which confirms all is well with CPU-based training:</p>
<pre><code>INFO:tensorflow:Step 100 per-step time 30.199s
I0907 15:21:21.773339 140617841817408 model_lib_v2.py:705] Step 100 per-step time 30.199s
INFO:tensorflow:{'Loss/classification_loss': 0.16712503,
'Loss/localization_loss': 0.101843126,
'Loss/regularization_loss': 0.29896417,
'Loss/total_loss': 0.56793237,
'learning_rate': 0.0141663505}
I0907 15:21:21.820583 140617841817408 model_lib_v2.py:708] {'Loss/classification_loss': 0.16712503,
'Loss/localization_loss': 0.101843126,
'Loss/regularization_loss': 0.29896417,
'Loss/total_loss': 0.56793237,
'learning_rate': 0.0141663505}
</code></pre>
<p>Still I am looking for a solution to my GPU woes.</p>
| <python><python-3.x><docker><tensorflow><tensorflow2.0> | 2023-09-07 14:52:42 | 1 | 1,117 | Gautam |
77,060,452 | 12,730,406 | color bar chart by condition if values exist? | <p>I have this dataframe:</p>
<pre><code>df = pd.DataFrame({
'year': [2022,2022,2022,2022,2022,2023,2023],
'source': ['youtube', 'youtube', 'facebook', 'facebook', 'facebook', 'google', 'google'],
'score': [10,20,100,200,300,90,70],
'rating': ['small', 'large', 'small', 'medium', 'large', 'medium', 'large']})
</code></pre>
<p>I am trying to create a stacked bar chart for each of the different <strong>source values</strong> - so there will be 3 plots, one for each of the values in the source column.</p>
<p>I am trying to set colors for the bar charts via a colors mapping argument in <code>df.plot</code> - but not all the values appear e.g. youtube for 2022 does not have a value for <strong>medium</strong> hence the color mapping causes an error.</p>
<p>here is my color mapping:</p>
<pre><code>color_map = {'small': 'yellow', 'medium': 'green', 'large':'blue'}
</code></pre>
<p>How can i ensure that when making the plots there is no error and it handles missing cases fine?</p>
<p>My code for plot is below:</p>
<pre><code>for company in df['source'].unique():
# filter to make plot of company only
temp_df = df[df.source == company]
temp_df = temp_df.pivot_table(index=temp_df.year, columns=['source', 'rating'], values='score', aggfunc='sum')
color_map = {'small': 'yellow', 'medium': 'green', 'large':'blue'}
fig, ax = plt.subplots(1,1)
ax = temp_df.plot.bar(stacked=True, figsize=(10, 6), ylabel='scores', xlabel='dates', title='Scores', ax = ax,color = color_map)
</code></pre>
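For reference, this is the direction I'm experimenting with (a sketch, not verified to be the idiomatic fix): build the color list from the rating level of the columns that actually exist in each pivot, instead of passing the dict directly:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import pandas as pd

df = pd.DataFrame({
    'year': [2022, 2022, 2022, 2022, 2022, 2023, 2023],
    'source': ['youtube', 'youtube', 'facebook', 'facebook', 'facebook', 'google', 'google'],
    'score': [10, 20, 100, 200, 300, 90, 70],
    'rating': ['small', 'large', 'small', 'medium', 'large', 'medium', 'large']})
color_map = {'small': 'yellow', 'medium': 'green', 'large': 'blue'}

colors_used = {}
for company in df['source'].unique():
    temp_df = df[df.source == company].pivot_table(
        index='year', columns=['source', 'rating'], values='score', aggfunc='sum')
    # One color per column that actually exists, so a missing rating can't KeyError
    colors = [color_map[r] for r in temp_df.columns.get_level_values('rating')]
    colors_used[company] = colors
    temp_df.plot.bar(stacked=True, color=colors, figsize=(10, 6),
                     ylabel='scores', xlabel='dates', title='Scores')
```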
| <python><pandas><matplotlib> | 2023-09-07 14:22:15 | 1 | 1,121 | Beans On Toast |
77,060,395 | 8,030,794 | Get column based on changing bool values | <p>I have df like this</p>
<pre><code>id bool
0 1
1 1
2 1
3 0
4 0
5 1
</code></pre>
<p>And then I need to get a column with the id and the new bool value, at each point where the value of bool changes.
Like this:</p>
<pre><code>id bool
0 1
3 0
5 1
</code></pre>
<p>I can do this by using a for loop, but is there an easier way?</p>
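For reference, a shift-based sketch I'm considering, which compares each row with the previous one (my assumption, not tested against all edge cases):

```python
import pandas as pd

df = pd.DataFrame({'id': [0, 1, 2, 3, 4, 5], 'bool': [1, 1, 1, 0, 0, 1]})

# Keep only rows whose 'bool' differs from the previous row;
# the first row always survives because shift() puts NaN there.
changes = df[df['bool'] != df['bool'].shift()]
```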
| <python><pandas><dataframe> | 2023-09-07 14:13:21 | 1 | 465 | Fresto |
77,060,380 | 447,738 | Find how many times a DataFrame cell value appears in another DataFrame, with a tolerance | <p>Given two DataFrames with the same columns.</p>
<p>dfA</p>
<pre>
Index Price
0 10.21
1 12.21
</pre>
<p>dfB</p>
<pre>
Index Price
0 10.21
1 10.24
2 11.32
3 12.21
</pre>
<p>I want to add a column to dfA with the number of times each value appears in dfB, but with a tolerance of, let's say, 1%.</p>
<p>Result</p>
<pre>
Index Price Occurrences
0 10.21 2
1 12.21 1
</pre>
<p>Is it still possible to avoid iterations? Maybe using <code>merge_asof</code> and <code>grouping</code>?</p>
<p>P.S. This is an amendment to my other <a href="https://stackoverflow.com/questions/77045763/filter-a-pandas-dataframe-if-cell-values-exist-in-another-dataframe-but-with-a-r/77045897?noredirect=1#comment135838786_77045897">question</a>, but posted in a different thread since it addresses a slightly different issue.</p>
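To illustrate the tolerance I mean, here is a NumPy broadcasting sketch (reading "1%" as relative to each dfA price; that assumption may need adjusting):

```python
import numpy as np
import pandas as pd

dfA = pd.DataFrame({"Price": [10.21, 12.21]})
dfB = pd.DataFrame({"Price": [10.21, 10.24, 11.32, 12.21]})

a = dfA["Price"].to_numpy()
b = dfB["Price"].to_numpy()
# Pairwise |b - a| compared against 1% of each dfA price, no explicit loop
dfA["Occurrences"] = (np.abs(b[None, :] - a[:, None]) <= 0.01 * a[:, None]).sum(axis=1)
```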
| <python><pandas> | 2023-09-07 14:11:13 | 4 | 2,357 | cksrc |
77,060,367 | 678,321 | How can I use a Quarto variable to include a code block? | <p>I am trying to de-clutter a Quarto markdown document, using variables to include code blocks. I've created a <code>_variables.yml</code> file with</p>
<pre><code>view:
arvores: |
{python}
#| echo: false
#| label: tbl-test
#| tbl-cap: Volumes
Markdown(tabulate(data,headers=data.columns,showindex=False))
</code></pre>
<p>with the following code in the <code>report.qmd</code> file:</p>
<pre><code>{{< var view.arvores >}}
</code></pre>
<p>But the code is not executed. Is it possible to do this? Do I need to escape any character? Would it be easier to use Quarto <strong>includes</strong> instead of <strong>variables</strong>?</p>
| <python><yaml><markdown><quarto> | 2023-09-07 14:09:33 | 1 | 1,708 | Hugo |
77,060,363 | 1,162,465 | Adding OAuth2 token based authentication in my fast api code and swagger | <p>I have a FastAPI Python application with a few GET requests, and a Ping-SSO-based identity provider. A Bearer token is passed in the header of every API call. How do I do authentication (and, though not required, ideally authorization as well) with the Bearer token passed in the header?</p>
<pre><code>from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
import uvicorn
from feast import FeatureStore
app = FastAPI()
store=FeatureStore()
origins = [
"http://localhost",
"http://localhost:8080",
]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.get("/")
async def main():
return {"message": "Hello World"}
@app.get('/list_data_sources')
async def list_data_sources():
data_sources = store.list_data_sources()
data_source_names = [ds.name for ds in data_sources]
return (data_source_names)
@app.get('/list_entities')
def list_entities():
entities = store.list_entities()
entity_names = [entity.name for entity in entities]
return (entity_names)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=5000)
</code></pre>
<p>Now once the token validation is implemented, I need my Swagger page to also be updated with Bearer token Authorization.
<a href="https://i.sstatic.net/XwTea.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XwTea.png" alt="swagger" /></a></p>
<p>I am using Ping-SSO OpenId.</p>
| <python><oauth-2.0><swagger><fastapi> | 2023-09-07 14:09:02 | 0 | 537 | slaveCoder |
77,060,310 | 10,440,076 | About the inputs of the Wasserstein Distance W1 | <p><strong>NOTE: I wrote the same question on <a href="https://math.stackexchange.com/questions/4765025/about-the-inputs-of-the-wasserstein-distance-w-1">https://math.stackexchange.com/questions/4765025/about-the-inputs-of-the-wasserstein-distance-w-1</a>, and since I did not get any comment or answer, I am posting it here, since it overlaps with topics on Stack Overflow.</strong></p>
<p>In math, you calculate the Wasserstein Distance W1 between two probability measures P and Q by using the CDFs (or the inverse CDFs) of those two probability measures, i.e. F and G (or F<sup>-1</sup> and G<sup>-1</sup>).</p>
<p>In Python (please see <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html" rel="nofollow noreferrer">scipy.stats.wasserstein_distance</a>), you use the "<em>Values observed in the (empirical) distributions</em>" as inputs to calculate the Wasserstein Distance W1. Therefore:</p>
<ol>
<li>What are the "<em>Values observed in the (empirical) distributions</em>" mentioned in Python guidelines as inputs for calculating W1? I mean, do they refer to the empirical estimations of Probability Density Functions, i.e. histograms, or to the empirical Cumulative Distribution Functions (eCDFs)?</li>
<li>How are the inputs used in Python related to the two probability measures P and Q?</li>
</ol>
| <python><scipy><probability-distribution> | 2023-09-07 14:02:05 | 1 | 305 | Ommo |
77,060,299 | 6,937,465 | Join elements of a nested list based on condition | <p>I have a nested array called <code>element_text</code> in the form of, for example:</p>
<pre class="lang-py prettyprint-override"><code>[[1, 'the'], [1, 'quick brown'], [2, 'fox jumped'], [2, 'over'], [2, 'the'], [3, 'lazy goat']]
</code></pre>
<p>And would like to concatenate the elements in the array and return a new array called <code>page_text</code> as so:</p>
<pre class="lang-py prettyprint-override"><code>[[1, 'the quick brown'], [2, 'fox jumped over the'], [3, 'lazy goat']]
</code></pre>
<p>So, if the first number is the same, join the second text strings together with a space in between.</p>
<p>I've tried:</p>
<pre class="lang-py prettyprint-override"><code>page_text = []
for i in element_text:
#join the list of strings together if the page number is the same
if i[0] == i[0]:
text = " ".join(i[1])
page_text.append([i[0], text])
</code></pre>
<p>But this just returns the same array as what was there in the first place.</p>
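For comparison, here is a stdlib sketch I was pointed towards, using <code>itertools.groupby</code> (which groups consecutive items sharing a key), that produces the output I'm after:

```python
from itertools import groupby

element_text = [[1, 'the'], [1, 'quick brown'], [2, 'fox jumped'],
                [2, 'over'], [2, 'the'], [3, 'lazy goat']]

# Group consecutive sublists sharing the same first element,
# then join their text parts with a space.
page_text = [[key, ' '.join(text for _, text in group)]
             for key, group in groupby(element_text, key=lambda item: item[0])]
# [[1, 'the quick brown'], [2, 'fox jumped over the'], [3, 'lazy goat']]
```

Note that <code>groupby</code> only merges consecutive runs, so the input needs to be ordered by page number (mine is).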
<p>Any help appreciated!</p>
<p>Thanks,</p>
<p>Carolina</p>
| <python><arrays> | 2023-09-07 14:00:58 | 5 | 426 | Carolina Karoullas |
77,060,140 | 16,253,390 | Pandas pivot + date slicing: group by periods of time with partial overlap | <p>I am trying to find a way to 'pivot' my pandas dataframe, but keeping my index by sliced dates. The end goal is to create a range for each index in which each attribute and its values are matched.</p>
<p>I reached the expected output using for loops and other non-vectorized ways, but I am looking for a vectorized solution since my input dataframe might be quite big.</p>
<p>I am using python 3.11 and pandas>=2.0.0.</p>
<p>Here is an input example :</p>
<pre><code> index attribute start_date end_date value
0 index_1 attribute_1 2022-01-01 2022-02-01 1
1 index_1 attribute_1 2022-02-01 2023-01-01 2
2 index_1 attribute_2 2022-01-01 2023-01-01 3
3 index_2 attribute_3 2022-01-01 2023-01-01 4
4 index_3 attribute_4 2022-01-01 2023-01-01 5
</code></pre>
<p>What I am trying to obtain is this :</p>
<pre><code> index start_date end_date attribute_1 attribute_2 attribute_3 attribute_4
0 index_1 2022-01-01 2022-02-01 1 3 None None
1 index_1 2022-02-01 2023-01-01 2 3 None None
2 index_2 2022-01-01 2023-01-01 None None 4 None
3 index_3 2022-01-01 2023-01-01 None None None 5
</code></pre>
<p>Here is a dictionary to reproduce the input dataframe :</p>
<pre><code>from datetime import datetime
{
"index": ["index_1", "index_1", "index_1", "index_2", "index_3"],
"attribute": ["attribute_1", "attribute_1", "attribute_2", "attribute_3", "attribute_4"],
"start_date": [datetime(2022, 1, 1), datetime(2022, 2, 1), datetime(2022, 1, 1), datetime(2022, 1, 1), datetime(2022, 1, 1)],
"end_date": [datetime(2022, 2, 1), datetime(2023, 1, 1), datetime(2023, 1, 1), datetime(2023, 1, 1), datetime(2023, 1, 1)],
"value": [1, 2, 3, 4, 5]
}
</code></pre>
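For reference, the closest I've come to a mostly-vectorized approach (a sketch; behaviour beyond this toy example is unverified): split each index's timeline at every observed boundary, attach each attribute's value to the sub-intervals it fully covers, then pivot:

```python
from datetime import datetime
import pandas as pd

df = pd.DataFrame({
    "index": ["index_1", "index_1", "index_1", "index_2", "index_3"],
    "attribute": ["attribute_1", "attribute_1", "attribute_2", "attribute_3", "attribute_4"],
    "start_date": [datetime(2022, 1, 1), datetime(2022, 2, 1), datetime(2022, 1, 1),
                   datetime(2022, 1, 1), datetime(2022, 1, 1)],
    "end_date": [datetime(2022, 2, 1), datetime(2023, 1, 1), datetime(2023, 1, 1),
                 datetime(2023, 1, 1), datetime(2023, 1, 1)],
    "value": [1, 2, 3, 4, 5]})

# 1. All boundary dates per index, in order
bounds = (df.melt(id_vars="index", value_vars=["start_date", "end_date"], value_name="b")
            .drop_duplicates(["index", "b"])
            .sort_values(["index", "b"]))
# 2. Consecutive boundaries form the sub-intervals
bounds["end"] = bounds.groupby("index")["b"].shift(-1)
intervals = (bounds.dropna(subset=["end"])
                   .rename(columns={"b": "start"})[["index", "start", "end"]])
# 3. Attach each attribute row to every sub-interval it fully covers, then pivot
m = intervals.merge(df, on="index")
m = m[(m["start_date"] <= m["start"]) & (m["end"] <= m["end_date"])]
out = (m.pivot_table(index=["index", "start", "end"], columns="attribute", values="value")
        .reset_index())
```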
| <python><pandas><group-by><python-datetime><pandas-explode> | 2023-09-07 13:40:55 | 1 | 375 | Odhian |
77,060,066 | 7,656,163 | return an error page from a subthread if the subthread fails | <p>As you can see from my below simplified code/flask app, I am starting a new thread, which calls a sagemaker endpoint, then the old thread redirects to a static page "/completed", which says something like "currently loading". If the sagemaker endpoint works correctly without error, we're good to go. But if the sagemaker endpoint doesn't work, I want my flask app to redirect to another page that says something like "error!". Originally I thought I'd need to get back to the original thread from the new thread, but this doesn't seem possible. Are there any recommendations on how I can return an error page from a sub thread inside a flask app? I'm really trying to stay within the Flask framework, and avoid using a bunch of JS.
Please note, I cannot simply join the threads with</p>
<pre><code>sm_ep.join()
</code></pre>
<p>as I want to immediately redirect to the "/completed" page.</p>
<pre><code>@app.route('/get_question', methods=['POST', 'GET'])
def post_question():
    @copy_current_request_context
    def ask(questions, sentences):
        # call SM endpoint, perform inference, then write to a folder/return results
        # if this code errors, I need to return an "Error" to the UI
        ...

    sm_ep = threading.Thread(target=ask, args=(questions, sentences))
    sm_ep.start()
    return redirect("/completed")  # returns a static page so the user isn't waiting
</code></pre>
<p>EDIT 1:
I've also looked into the flask-executor library, but that doesn't seem to solve my problem either.</p>
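<p>To make the failure path concrete, here is a minimal sketch of the direction I've been considering (hypothetical: the shared dict and job id are my own invention, and a real app would probably need a proper store). The worker records its outcome in shared state, and the "/completed" route would poll that state and render the error page once it sees a failure:</p>

```python
import threading

status = {}  # hypothetical shared job-state store; a real app might use a DB or cache

def ask(job_id):
    """Stand-in for the SageMaker call; records success/failure instead of returning."""
    try:
        raise RuntimeError("endpoint failed")  # simulate the endpoint erroring
    except Exception:
        status[job_id] = "error"
    else:
        status[job_id] = "done"

t = threading.Thread(target=ask, args=("job-1",))
t.start()
t.join()  # joined here only to make the demo deterministic; Flask would poll instead
```

<p>The "/completed" page could then refresh periodically (e.g. via a meta-refresh tag) and switch to an error template once <code>status[job_id] == "error"</code>.</p>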
| <python><flask> | 2023-09-07 13:28:51 | 0 | 326 | bnicholl |
77,060,037 | 4,957,620 | Connection to Neo4j is failing with BoltSecurityError: [SSLCertVerificationError] | <p>I have a python script with the following code:</p>
<pre><code>from neo4j import GraphDatabase
from neo4j.debug import watch
watch("neo4j")
uri = "neo4j+s://XXX.databases.neo4j.io:7687"
driver = GraphDatabase.driver(uri, auth=("neo4j", "password"))
</code></pre>
<p>I run the script in my local computer. The connection fails with the following logs:</p>
<pre><code>[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,583 [#0000] _: <POOL> created, routing address IPv4Address(('XXX.databases.neo4j.io', 7687))
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,583 [#0000] _: <WORKSPACE> resolve home database
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,583 [#0000] _: <POOL> attempting to update routing table from IPv4Address(('XXX.databases.neo4j.io', 7687))
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,583 [#0000] _: <RESOLVE> in: XXX.databases.neo4j.io:7687
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,695 [#0000] _: <RESOLVE> dns resolver out: xx.xx.xxx.xx:7687
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,696 [#0000] _: <POOL> _acquire router connection, database=None, address=ResolvedIPv4Address(('xx.xx.xxx.xx', 7687))
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,696 [#0000] _: <POOL> trying to hand out new connection
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,701 [#0000] C: <OPEN> xx.xx.xxx.xx:7687
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,735 [#EA4B] C: <SECURE> XXX.databases.neo4j.io
[DEBUG ] [Thread 140704683963968] [Task 4387747392 ] 2023-09-07 09:16:20,777 [#0000] S: <CONNECTION FAILED> BoltSecurityError: [SSLCertVerificationError] Connection Failed. Please ensure that your database is listening on the correct host and port and that you have enabled encryption if required. Note that the default encryption setting has changed in Neo4j 4.0. See the docs for more information. Failed to establish encrypted connection. (code 1: Operation not permitted)
</code></pre>
<p>Specifically:</p>
<pre><code>BoltSecurityError: [SSLCertVerificationError] Connection Failed. Please ensure that your database is listening on the correct host and port and that you have enabled encryption if required. Note that the default encryption setting has changed in Neo4j 4.0. See the docs for more information. Failed to establish encrypted connection. (code 1: Operation not permitted)
</code></pre>
<p>It's very strange that using the same URI and credentials I'm able to connect using the desktop app Neo4j Desktop from my computer.<a href="https://i.sstatic.net/jV76V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jV76V.png" alt="enter image description here" /></a></p>
<p>But for some reason the code fails... any ideas why this could be happening?</p>
| <python><python-3.x><neo4j><graph-databases> | 2023-09-07 13:25:38 | 1 | 2,882 | chris |
77,059,938 | 22,466,650 | What's the logic behind "cumsum" to make flags, compute counts and form groups? | <p>Without further ado, my input (<code>s1</code>) and expected output (<code>df</code>) are below:</p>
<pre><code>#INPUT
s1 = pd.Series(['a', np.nan, 'b', 'c', np.nan, np.nan, 'd', np.nan]).rename('col1')
#EXPECTED-OUTPUT
s2 = pd.Series([1, 2, 3, 3, 4, 4, 5, 6]).rename('col2') # flag the transition null>notnull or vice-versa
s3 = pd.Series([0, 1, 0, 0, 2, 3, 0, 4]).rename('col3') # counter of the null values
df = pd.concat([s1, s2, s3], axis=1)
col1 col2 col3
0 a 1 0
1 NaN 2 1
2 b 3 0
3 c 3 0
4 NaN 4 2
5 NaN 4 3
6 d 5 0
7 NaN 6 4
</code></pre>
<p>I tried plenty of combinations of <code>cumsum</code> and masks, but without any success. That's probably because I'm missing the underlying logic. What questions do I need to ask myself before building the chain that produces these series?</p>
<p>Any help would be greatly appreciated!</p>
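<p>To show the shape of answer I'm hoping for, here is my best (unverified) guess at the kind of two-liner that might exist, built from the null-mask:</p>

```python
import numpy as np
import pandas as pd

s1 = pd.Series(['a', np.nan, 'b', 'c', np.nan, np.nan, 'd', np.nan], name='col1')

m = s1.isna()
# col2: a new flag value starts whenever the null-status flips, so compare the
# mask with its shifted self and cumsum the transition points
col2 = m.ne(m.shift()).cumsum()
# col3: running count of nulls, reset to 0 on non-null rows
col3 = m.cumsum().where(m, 0)
```

<p>Is this the intended way of thinking about it, or is there a more canonical pattern?</p>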
| <python><pandas> | 2023-09-07 13:12:28 | 1 | 1,085 | VERBOSE |
77,059,740 | 12,458,212 | PySpark - How to 'sc.parallelize' a function that yields a generator (Pickling Error)? | <p>My specific use case is quite different, but this simple example is able to replicate my issue.</p>
<pre><code>def foo(inp):
    yield 2 * inp

def bar(result):
    x = next(result)
    return x

inp = range(0, 10000)
results = []
for f in inp:
    results.append(foo(f))

rdd_sim = sc.parallelize(results, numSlices=4)
results = rdd_sim.map(lambda x: bar(x)).collect()
</code></pre>
<p>Here I have a function that yields a gen object. The gen object is then used as an input to the function bar. I'm trying to parallelize this entire process but am getting the following error:</p>
<pre><code>TypeError: cannot pickle 'generator' object
</code></pre>
<p>Due to memory considerations, a generator is more appropriate for my use case, so would appreciate any ideas here.</p>
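<p>One direction I've been sketching (unverified): ship plain inputs and build the generator on the executor side, so no generator object ever crosses the pickling boundary. The Spark lines are commented out here because they need a live <code>SparkContext</code>:</p>

```python
def foo(inp):
    yield 2 * inp

def bar(result):
    return next(result)

# Generators would be created inside the mapped function on the workers,
# so only plain ints get pickled:
# rdd = sc.parallelize(range(10000), numSlices=4)
# results = rdd.map(lambda x: bar(foo(x))).collect()

# the per-element composition itself works fine locally:
out = [bar(foo(x)) for x in range(5)]
```

<p>I'm unsure whether this preserves the memory benefits I'm after, since the inputs are still materialized up front.</p>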
| <python><apache-spark><pyspark><databricks> | 2023-09-07 12:45:31 | 1 | 695 | chicagobeast12 |
77,059,737 | 22,496,572 | Adding tensors rows to a tensor given possibly repeating row indices | <p>In PyTorch, is there a batch version of the following (in-place) operations?</p>
<pre><code># v is a tensor of shape (n, m)
# w is a tensor of shape (k, m)
# indices is a tensor of shape (k, )

def f_sum(v, w, indices):
    for i, w0 in zip(indices, w):
        v[i] += w0

def f_average(v, w, indices):
    for i, w0 in zip(indices, w):
        v[i] += (1 / sum(indices == i)) * w0
</code></pre>
<p>Maybe, I think (?), the first operation is equivalent to <code>v[indices] += w</code> when it is guaranteed that <code>indices</code> has no repetitions, but we don't assume that.</p>
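<p>For reference, my current best guess at a batched equivalent (unverified beyond this toy case) uses <code>index_add_</code>, which does accumulate duplicate indices:</p>

```python
import torch

n, m = 4, 3
v = torch.zeros(n, m)
w = torch.ones(5, m)
indices = torch.tensor([0, 0, 1, 2, 2])

# batched f_sum: index_add_ accumulates rows of w into v at the given indices
v_sum = v.clone()
v_sum.index_add_(0, indices, w)

# batched f_average: pre-scale each row of w by 1/count of its target index
counts = torch.bincount(indices, minlength=n).clamp(min=1).to(w.dtype)
v_avg = v.clone()
v_avg.index_add_(0, indices, w / counts[indices].unsqueeze(1))
```

<p>I'm not sure whether this is the idiomatic way, or whether <code>scatter_add_</code> / <code>scatter_reduce_</code> would be preferred.</p>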
| <python><pytorch> | 2023-09-07 12:44:35 | 1 | 371 | Sasha |
77,059,654 | 8,256,981 | In numpy genfromtxt, how to use `names = True`? | <p>This is a follow up question to my previous question:</p>
<p><a href="https://stackoverflow.com/questions/77053670/in-numpy-genfromtxt-missing-values-filling-values-excludelist-deletechars-an">In numpy genfromtxt, missing_values, filling_values, excludelist, deletechars and replace_space are not working properly</a></p>
<p>This is my test.csv file, where "A 1" and "A+2" are headers:</p>
<pre><code>A 1,A+2
test& ,1
skip,
#,
N/A,NA
</code></pre>
<p>With this Jupyter code:</p>
<pre><code>import numpy as np
test = np.genfromtxt("test.csv",
                     delimiter=',',
                     dtype=str,
                     names=None)
test
</code></pre>
<p>I get this output:</p>
<pre><code>array([['A 1', 'A+2'],
['test& ', '1'],
['skip', ''],
['N/A', 'NA']], dtype='<U10')
</code></pre>
<p>Here <code>dtype='<U10'</code> is reasonable, because there are 10 characters in <code>'test& '</code>.</p>
<p>But when I change <code>names = None</code> to <code>names = True</code>, I get this output. I understand that the headers <code>A 1</code> and <code>A+2</code> have been changed to <code>A_1</code> and <code>A2</code>. But why is it <code>'<U'</code>?</p>
<pre><code>array([('', ''), ('', ''), ('', '')], dtype=[('A_1', '<U'), ('A2', '<U')])
</code></pre>
| <python><numpy> | 2023-09-07 12:33:19 | 2 | 491 | maxloo |
77,059,630 | 16,420,204 | Python Polars: Conditional Join by Date Range | <p>First of all, there seem to be some similar questions answered already. However, I couldn't find this specific case, where the conditional columns are also part of the join columns:</p>
<p>I have two dataframes:</p>
<pre><code>df1 = pl.DataFrame({"timestamp": ['2023-01-01 00:00:00', '2023-05-01 00:00:00', '2023-10-01 00:00:00'], "value": [2, 5, 9]})
df1 = df1.with_columns(
    pl.col("timestamp").str.to_datetime().alias("timestamp"),
)
┌─────────────────────┬───────┐
│ timestamp ┆ value │
│ --- ┆ --- │
│ datetime[μs] ┆ i64 │
╞═════════════════════╪═══════╡
│ 2023-01-01 00:00:00 ┆ 2 │
│ 2023-05-01 00:00:00 ┆ 5 │
│ 2023-10-01 00:00:00 ┆ 9 │
└─────────────────────┴───────┘
df2 = pl.DataFrame({"date_start": ['2022-12-31 00:00:00', '2023-01-02 00:00:00'], "date_end": ['2023-04-30 00:00:00', '2023-05-05 00:00:00'], "label": [0, 1]})
df2 = df2.with_columns(
    pl.col("date_start").str.to_datetime().alias("date_start"),
    pl.col("date_end").str.to_datetime().alias("date_end"),
)
┌─────────────────────┬─────────────────────┬───────┐
│ date_start ┆ date_end ┆ label │
│ --- ┆ --- ┆ --- │
│ datetime[μs] ┆ datetime[μs] ┆ i64 │
╞═════════════════════╪═════════════════════╪═══════╡
│ 2022-12-31 00:00:00 ┆ 2023-04-30 00:00:00 ┆ 0 │
│ 2023-01-02 00:00:00 ┆ 2023-05-05 00:00:00 ┆ 1 │
└─────────────────────┴─────────────────────┴───────┘
</code></pre>
<p>I want to join <code>label</code> from the second <code>polars.DataFrame</code> (df2) onto the first one (df1), but only when the value of <code>timestamp</code> (<code>polars.Datetime</code>) falls within the date range given by <code>date_start</code> and <code>date_end</code>, respectively. <br>
Since I basically want a <code>left join</code> on df1, the column <code>label</code> should be <code>None</code> when the <code>timestamp</code> value isn't covered by df2 at all.</p>
<p>The tricky part for me is that there isn't an actual <code>on</code> key for df2, since it's a range of dates.</p>
| <python><join><python-polars> | 2023-09-07 12:29:06 | 4 | 1,029 | OliverHennhoefer |
77,059,569 | 11,325,478 | WSGI vs ASGI server with hybrid sync/async Django app | <p>I have a Django app with multiple async views doing some HTTP requests, but some parts are still synchronous, like most of the middleware.
So there will always be a small performance penalty from the sync/async switch when handling incoming requests.</p>
<p>It is possible to run this app on either a WSGI or an ASGI gunicorn server (using the uvicorn worker), but I don't really understand which one is better for a hybrid sync/async Django app.
In both cases it seems that there is blocking code running in a thread.</p>
| <python><django><asynchronous><gunicorn><asgi> | 2023-09-07 12:20:57 | 0 | 338 | Grum |
77,059,419 | 10,232,932 | split up column value into empty column values in a dataframe | <p>I have a dataframe <code>df</code>:</p>
<pre><code>columnA columnB columnC
A A 10
A B NaN
A C 20
B A 30
B C NaN
A D NaN
D C 15
</code></pre>
<p>How can I fill the <code>NaN</code> values so that each non-<code>NaN</code> value is divided evenly over the run of missing entries before it (including the row that already holds the value)? So in my case the output would be:</p>
<pre><code>columnA columnB columnC
A A 10
A B 10
A C 10
B A 30
B C 5
A D 5
D C 5
</code></pre>
<p>Further explanation: here 20 was divided by 2, giving 10, and 15 was divided by 3, giving 5.</p>
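<p>A sketch of the grouping I have in mind (untested beyond this example, and it assumes every run of <code>NaN</code>s is terminated by exactly one non-<code>NaN</code> row): treat each run plus its terminating row as one group, then divide the group's value by the group size.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "columnA": ["A", "A", "A", "B", "B", "A", "D"],
    "columnB": ["A", "B", "C", "A", "C", "D", "C"],
    "columnC": [10, np.nan, 20, 30, np.nan, np.nan, 15],
})

# a group = a run of NaNs together with the non-NaN row that ends it;
# a new group starts on every row that follows a non-NaN row
grp = df["columnC"].notna().shift(fill_value=True).cumsum()

g = df.groupby(grp)["columnC"]
df["columnC"] = g.transform("last") / g.transform("size")
```

<p>Here <code>transform("last")</code> skips <code>NaN</code>, so it broadcasts the terminating value over the whole group. Is there a cleaner way?</p>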
| <python><pandas> | 2023-09-07 11:59:53 | 2 | 6,338 | PV8 |
77,059,354 | 3,801,530 | Pydantic with a field of type ENUM | <p>I want to lowercase the value that reaches a model field of an enum type, before it is assigned.</p>
<p>I show an example.</p>
<pre class="lang-py prettyprint-override"><code>class DeviceType(str, Enum):
    BASIC = "basic"
    PROFESIONAL = "profesional"

class Device(BaseModel):
    id: UUID
    model: constr(to_lower=True)
    created_at: datetime = Field(alias="createdAt")
    owner_id: UUID = Field(alias="ownerId")
    type: DeviceType

info = {..., "type": "Basic", ...}
device = Device(**info)  # Error

info = {..., "type": "basic", ...}
device = Device(**info)  # No Error
</code></pre>
<p>The same thing I do for the <strong>model</strong> field, I want to do for the <strong>type</strong> field. But since it is not of type string, I cannot do exactly the same.</p>
<p>What I want is to prevent the model from failing if the value is <em>Basic</em> or <em>BASIC</em>. And my ENUM type is <em>basic</em>, all lowercase.</p>
<p>I want to use something from pydantic for the <strong>type</strong> field, just as I do for the <strong>model</strong> field.</p>
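<p>One fallback I've been considering, using plain <code>enum</code> only (I haven't confirmed that every pydantic version routes validation through the enum constructor): <code>Enum</code> supports a <code>_missing_</code> hook that runs when a value lookup fails, which can retry case-insensitively.</p>

```python
from enum import Enum

class DeviceType(str, Enum):
    BASIC = "basic"
    PROFESIONAL = "profesional"

    @classmethod
    def _missing_(cls, value):
        # called when the plain by-value lookup fails; retry lowercased
        if isinstance(value, str):
            for member in cls:
                if member.value == value.lower():
                    return member
        return None  # fall through to the normal ValueError
```

<p>Would this be considered the proper pydantic-friendly approach, or is a validator preferred?</p>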
| <python><pydantic> | 2023-09-07 11:51:02 | 2 | 890 | RodriKing |
77,059,083 | 11,790,979 | Unexpected behaviour when re-organising directory (python, shutil) | <p>I have a function that reorganises a directory of locally saved data, but it's behaving in an unexpected way. There are 3 folders, <code>current</code>, <code>previous</code> and <code>tma</code> (two months ago), and with the function below I expect it to delete the existing <code>tma</code>, rename <code>current</code> -> <code>previous</code> and <code>previous</code> -> <code>tma</code>, and then re-create and populate (with another function) a <code>current</code> folder.</p>
<pre class="lang-py prettyprint-override"><code>import os
import shutil

__PATH = os.getcwd()

def reorganise_directory():
    print(__PATH)
    destinations = ['current', 'previous', 'tma']
--->shutil.rmtree(os.path.join(__PATH, f'/data/{destinations[-1]}'))
    shutil.move(os.path.join(__PATH, f'/data/{destinations[1]}'), os.path.join(__PATH, f'/data/{destinations[-1]}'))
    shutil.move(os.path.join(__PATH, f'/data/{destinations[0]}'), os.path.join(__PATH, f'/data/{destinations[1]}'))
    return
</code></pre>
<p>This is the error I recieve:</p>
<p><a href="https://i.sstatic.net/qnY7L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qnY7L.png" alt="enter image description here" /></a></p>
<p>It appears that, for some reason, the <code>__PATH</code> variable, which prints out <code>C:\dev\python\smog_usage_stats</code>, is being reduced to just <code>C:\</code> in the line indicated by the arrow. I'm not sure why.</p>
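<p>While debugging I put together this check of <code>os.path.join</code>'s semantics (using <code>ntpath</code>, the Windows flavour of <code>os.path</code>, so it runs anywhere); I suspect the leading slash in my <code>f'/data/...'</code> components is relevant:</p>

```python
import ntpath  # Windows implementation of os.path, importable on any platform

# A component starting with a separator is "rooted": join keeps the drive
# but throws away the directory part of everything before it.
result = ntpath.join("C:\\dev\\python\\smog_usage_stats", "/data/tma")
```

<p>If my reading is right, that would explain the observed <code>C:\</code>.</p>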
| <python><shutil> | 2023-09-07 11:12:06 | 1 | 713 | nos codemos |
77,059,057 | 257,742 | Adding multiple LoRa safetensors to my HuggingFace model in Python | <p>Suppose I use this script to load one fine-tuned model: (example taken from <a href="https://towardsdatascience.com/hugging-face-diffusers-can-correctly-load-lora-now-a332501342a3" rel="nofollow noreferrer">https://towardsdatascience.com/hugging-face-diffusers-can-correctly-load-lora-now-a332501342a3</a>)</p>
<pre><code>import torch
from diffusers import StableDiffusionPipeline

text2img_pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/deliberate-v2",
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda:0")

lora_path = "<path/to/lora.safetensors>"  # only one tensor, not a folder
text2img_pipe.load_lora_weights(lora_path)
</code></pre>
<p>This adds one safetensors file. How can I load multiple safetensors? I tried the <code>use_safetensors</code> argument when instantiating the <code>StableDiffusionPipeline</code>, but it is unclear where I should put the safetensors folder I have. I get this error:</p>
<blockquote>
<p>OSError: Could not found the necessary <code>safetensors</code> weights in {'vae/diffusion_pytorch_model.safetensors',
'text_encoder/pytorch_model.bin', 'safety_checker/model.safetensors', 'vae/diffusion_pytorch_model.bin',
'text_encoder/model.safetensors', 'unet/diffusion_pytorch_model.bin', 'safety_checker/pytorch_model.bin',
'unet/diffusion_pytorch_model.safetensors'} (variant=None)</p>
</blockquote>
<p>I have also tried loading the weights one after the other, but the results suggest that the previously loaded weights are not kept.</p>
| <python><model><safe-tensors><diffusers> | 2023-09-07 11:09:29 | 2 | 1,960 | D.Giunchi |
77,058,992 | 17,561,414 | alternative of koalas in pyspark? use dataframe in sql statement databricks | <p>My goal is to use the <code>df</code> in the SQL statement like below</p>
<p>creating <code>df</code></p>
<pre><code>df = spark.readStream.format("delta") \
    .option("readChangeFeed", "true") \
    .table("mdp_prd.bronze.nrq_customerassetproperty_autoloader_nodups")
</code></pre>
<p>using this <code>df</code> in <code>sql</code> statement</p>
<pre><code>%sql
CREATE OR REPLACE VIEW test_test as
Select *
from {df}
</code></pre>
<p>I found a solution in this <a href="https://gbamezai.medium.com/azure-databricks-run-sql-commands-on-dataframe-ba85d89fc3fc#:%7E:text=So%20naturally%20when%20I%20learnt,pandas%20API%20on%20Apache%20Spark." rel="nofollow noreferrer">article</a>, which uses the <code>koalas</code> module to provide a drop-in replacement for pandas (the SQL above works if I import koalas). But I'm wary of koalas: I suspect it will increase processing time, and I don't see the point of a pandas-style package in Databricks when I can use PySpark.</p>
<p>Any other alternatives or work arounds to this?</p>
| <python><apache-spark><pyspark><databricks> | 2023-09-07 11:01:50 | 0 | 735 | Greencolor |
77,058,822 | 8,511,822 | How to specify the Python version such that it is added to the wheel file name? | <p>I have this in my pyproject.toml file:</p>
<pre class="lang-ini prettyprint-override"><code>requires-python = ">=3.8.1,<3.9"
</code></pre>
<p>I know this does not specify what Python version is added to the wheel file name, but I cannot figure out how to do it using <a href="https://hatch.pypa.io/latest/config/build/" rel="nofollow noreferrer">hatch build</a>.</p>
| <python><hatch> | 2023-09-07 10:39:34 | 0 | 1,642 | rchitect-of-info |
77,058,608 | 7,657,658 | Disable Automatic Code Suggestion in Jupyter for magic and path | <p>Whenever I write code in JupyterLab (or Notebook), it prompts annoying suggestions regarding magics and paths, even without pressing TAB. Example:</p>
<p><a href="https://i.sstatic.net/0zaST.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0zaST.png" alt="annoying prompt" /></a></p>
<p>This usually happens when the caret hits a closing bracket or parentheses. However
I don't have this problem in IPython.</p>
<p>I reverted the Jupyter user configuration to the defaults, but the issue persists. Do you have a suggestion on how to solve this problem?
Reverting to suggesting only on a TAB press would be ideal. I wonder whether this comes from the Jupyter config or from an LSP such as Jedi (LSP is disabled in my experimental config).</p>
<h2>Details</h2>
<p>Chrome
Ubuntu 22.04
jupyterlab 4.0.4</p>
| <python><jupyter-notebook><jupyter><jedi> | 2023-09-07 10:07:04 | 0 | 1,156 | MCMZL |
77,058,582 | 2,912,349 | Reversibly disconnect matplotlib's own key press handling | <p>I am trying to temporarily and reversibly disconnect matplotlib's own key press events, i.e. the keyboard shortcuts ("s" -> save, "q" -> quit, etc).</p>
<p>This used to be the way to do this:</p>
<pre class="lang-py prettyprint-override"><code>fig.canvas.mpl_disconnect(fig.canvas.manager.key_press_handler_id)
# Do stuff
fig.canvas.manager.key_press_handler_id = \
    fig.canvas.mpl_connect('key_press_event', fig.canvas.manager.key_press)
</code></pre>
<p>However, the method <code>fig.canvas.manager.key_press</code> no longer seems to exist in recent matplotlib versions. How do I temporarily disable matplotlib's event handling now?</p>
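<p>One workaround I'm considering (unverified across versions; my assumption is that the built-in handler resolves shortcuts through the <code>keymap.*</code> rcParams at event time): temporarily blank out all keymaps, then restore them.</p>

```python
import matplotlib

# snapshot every keymap entry, e.g. keymap.save = ['s', 'ctrl+s']
keymaps = [k for k in matplotlib.rcParams if k.startswith("keymap.")]
saved = {k: list(matplotlib.rcParams[k]) for k in keymaps}

for k in keymaps:
    matplotlib.rcParams[k] = []  # no keys bound -> shortcuts do nothing

# ... do stuff with matplotlib's own shortcuts disabled ...

matplotlib.rcParams.update(saved)  # restore the original bindings
```

<p>If that assumption about event-time lookup is wrong, this obviously won't help, hence the question.</p>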
| <python><matplotlib> | 2023-09-07 10:03:52 | 1 | 12,703 | Paul Brodersen |
77,058,441 | 765,269 | pass memoryview to win32.ReadFile for Pyserial readinto() function | <p>I want to create a proper readinto function for the pyserial package (for windows). pyserial implements readinto() by calling read() which defeats the purpose of readinto. Adapting the <a href="https://github.com/pyserial/pyserial/blob/master/serial/serialwin32.py" rel="nofollow noreferrer">existing read() function</a> seemed straightforward to accept a memoryview:</p>
<pre class="lang-py prettyprint-override"><code>def serial_read_into(serial_object, buf, size) -> int:
    if not serial_object.is_open:
        raise PortNotOpenError()
    if size == 0:
        return 0

    win32.ResetEvent(serial_object._overlapped_read.hEvent)
    flags = win32.DWORD()
    comstat = win32.COMSTAT()
    if not win32.ClearCommError(serial_object._port_handle, ctypes.byref(flags), ctypes.byref(comstat)):
        raise SerialException("ClearCommError failed ({!r})".format(ctypes.WinError()))

    length = min(comstat.cbInQue, size) if serial_object.timeout == 0 else size
    if length == 0:
        return 0

    # buf = ctypes.create_string_buffer(n)  <---- Note this in the read() implementation
    rc = win32.DWORD()
    read_ok = win32.ReadFile(
        serial_object._port_handle,
        buf,
        size,
        ctypes.byref(rc),
        ctypes.byref(serial_object._overlapped_read))
    if not read_ok and win32.GetLastError() not in (win32.ERROR_SUCCESS, win32.ERROR_IO_PENDING):
        raise SerialException("ReadFile failed ({!r})".format(ctypes.WinError()))

    result_ok = win32.GetOverlappedResult(
        serial_object._port_handle,
        ctypes.byref(serial_object._overlapped_read),
        ctypes.byref(rc),
        True)
    if not result_ok:
        if win32.GetLastError() != win32.ERROR_OPERATION_ABORTED:
            raise SerialException("GetOverlappedResult failed ({!r})".format(ctypes.WinError()))

    read = rc.value
    return read
</code></pre>
<p><strong>Problem</strong>: The win32.ReadFile() function does not accept a memoryview. How can I pass the memoryview to the win32.ReadFile() function?
The error is
<code>ctypes.ArgumentError: argument 2: <class 'TypeError'>: wrong type</code></p>
<p>I have a memoryview, because I want to receive into a multiprocessing.SharedMemory region without extra copying.</p>
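<p>For illustration, here is the zero-copy wrapping I was hoping ctypes would do implicitly (a platform-independent sketch; whether <code>ReadFile</code> accepts the resulting array this way is exactly my question): <code>from_buffer</code> creates a ctypes array that shares the memoryview's writable memory.</p>

```python
import ctypes

shared = bytearray(16)          # stand-in for a multiprocessing.SharedMemory buffer
buf = memoryview(shared)

# ctypes rejects a raw memoryview, but a ctypes array can wrap the same
# writable memory without copying:
c_buf = (ctypes.c_char * len(buf)).from_buffer(buf)

ctypes.memset(c_buf, 0x41, 4)   # simulate a native function writing into it
```

<p>Writes through <code>c_buf</code> land in the original buffer, so no extra copy is made.</p>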
| <python><pyserial> | 2023-09-07 09:44:43 | 1 | 422 | user765269 |
77,058,416 | 471,478 | An argument that can be a combination of enum.IntFlag | <p>Given an <code>enum.IntFlag</code>:</p>
<pre><code>class Flag(enum.IntFlag):
    FOO = 1
    BAR = 2
    QUX = 4
</code></pre>
<p>Can I express a type that says "Flag or any combination of Flags"?</p>
<pre><code>def func(accepts: CombinationOf[Flag]):
    ...
</code></pre>
<p>such that all these would be valid:</p>
<pre><code>func(Flag.FOO)
func(Flag.FOO | Flag.BAR)
</code></pre>
<p>but this one for instance would not:</p>
<pre><code>func(8)
</code></pre>
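<p>While experimenting I noticed that a combination of members is itself an instance of the flag class at runtime, so perhaps annotating with plain <code>Flag</code> is already what I want (I haven't confirmed how every type checker treats the literal-<code>int</code> case):</p>

```python
import enum

class Flag(enum.IntFlag):
    FOO = 1
    BAR = 2
    QUX = 4

def func(accepts: Flag) -> Flag:
    return accepts

combo = Flag.FOO | Flag.BAR  # runtime type is Flag, not plain int
```

<p>Static checkers should then reject <code>func(8)</code>, since <code>int</code> is not a subtype of <code>Flag</code>, even though <code>IntFlag</code> members are int-like.</p>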
| <python><enums><python-typing> | 2023-09-07 09:41:36 | 1 | 12,364 | scravy |
77,058,049 | 951,139 | Random integer generation performance optimization | <p>I am looking to generate random integers on the interval [0, n) as fast as possible:</p>
<pre><code>from random import Random

rand = Random(123)

def rand_func(max: int, random: Random):
    return int(random.random() * max)

for a in range(10_000_000):  # perf_counter 9.7s
    ax = rand_func(456, rand)

for b in range(10_000_000):  # perf_counter 3.9s
    bx = int(rand.random() * 456)
</code></pre>
<p>My experience in other languages leaves me surprised at how expensive this function call seems to be.</p>
<p>Is there a method, pattern or decorator I could use to further optimize speed while keeping the code simple and usable?</p>
<p>I've looked at some other less simple solutions. numpy <code>random.integers</code> is very fast at returning a large array of results, but consuming the results becomes more complex and less ideal.</p>
<p>(Note: I'm aware of the bias in the random integer sampling method above, but it is acceptable for my requirements.)</p>
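<p>The only further micro-optimization I've found so far (a rough, machine-dependent sketch): pre-bind the method so each call skips the attribute lookups, and use a default argument to make that name a local inside the function:</p>

```python
from random import Random

rand = Random(123)
rnd = rand.random  # bind once; avoids the rand.random attribute lookup per call

def draw(n, _rnd=rnd):  # default-arg trick makes the lookup local
    return int(_rnd() * n)

samples = [draw(456) for _ in range(1000)]
```

<p>Is there anything substantially faster than this without dropping to numpy batching?</p>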
| <python><random> | 2023-09-07 08:54:31 | 1 | 2,742 | TVOHM |
77,057,967 | 4,530,214 | How to enable Legend picking with a scatter plot | <p>The following example <a href="https://matplotlib.org/stable/gallery/event_handling/legend_picking.html" rel="nofollow noreferrer">from the matplotlib doc</a> shows how to make a legend "pickable", such that you can toggle the lines visibility by clicking on their legend:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1)
y1 = 2 * np.sin(2*np.pi*t)
y2 = 4 * np.sin(2*np.pi*2*t)

fig, ax = plt.subplots()
ax.set_title('Click on legend line to toggle line on/off')
line1, = ax.plot(t, y1, lw=2, label='1 Hz')
line2, = ax.plot(t, y2, lw=2, label='2 Hz')
leg = ax.legend(fancybox=True, shadow=True)

lines = [line1, line2]
lined = {}  # Will map legend lines to original lines.
for legline, origline in zip(leg.get_lines(), lines):
    legline.set_picker(True)  # Enable picking on the legend line.
    lined[legline] = origline

def on_pick(event):
    # On the pick event, find the original line corresponding to the legend
    # proxy line, and toggle its visibility.
    legline = event.artist
    origline = lined[legline]
    visible = not origline.get_visible()
    origline.set_visible(visible)
    # Change the alpha on the line in the legend, so we can see what lines
    # have been toggled.
    legline.set_alpha(1.0 if visible else 0.2)
    fig.canvas.draw()

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()
</code></pre>
<p>Specifically, the call <code>leg.get_lines()</code> retrieves the legend's line handles. When plotting scatterplots, this method does not retrieve the scatter handles, as shown by the following:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
x = np.random.rand(50)
y = np.random.rand(50)
fig, ax = plt.subplots()
scatter = ax.scatter(x, y, label='Scatter Plot')
line = ax.plot(x,y, label="Line plot")
legend = ax.legend()
lines_from_legend = legend.get_lines()
print(lines_from_legend) # [<matplotlib.lines.Line2D object at 0x0000027C11058670>]
</code></pre>
<p>I am trying to make a legend pickable/toggle-able like the example from the docs, such that it works on both line plots and scatterplots, from a given <code>Axes</code> only, with no reference to the previously plotted lines/scatters. Basically, I am missing an equivalent of <code>leg.get_lines()</code> for scatters:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.collections

# for a given axes, and nothing else:
leg = ax.legend(fancybox=True, shadow=True)

# for lines
lines = [ax.lines[i] for i in range(len(ax.lines))]  # [line1, line2]
leglines = leg.get_lines()

# for scatters
scatters = [i for i in ax.get_children() if isinstance(i, matplotlib.collections.PathCollection)]
legscatters = ????
</code></pre>
<p>Matplotlib version: 3.6.3</p>
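<p>The closest thing I've found so far (unverified as a full solution): the legend keeps one proxy artist per entry, lines and collections alike, in <code>legendHandles</code> (renamed to <code>legend_handles</code> in newer versions, if I'm not mistaken):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.collections
import matplotlib.pyplot as plt
import numpy as np

x = np.random.rand(50)
y = np.random.rand(50)

fig, ax = plt.subplots()
ax.scatter(x, y, label="Scatter Plot")
ax.plot(x, y, label="Line plot")
leg = ax.legend()

# one proxy per legend entry, covering lines *and* scatter collections
handles = getattr(leg, "legend_handles", None) or leg.legendHandles
```

<p>Can these proxies be made pickable the same way as the line proxies in the doc example?</p>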
| <python><matplotlib><legend> | 2023-09-07 08:43:54 | 0 | 546 | mocquin |
77,057,836 | 16,648,033 | Reason to return updated stack along with top element from pop() method | <p><a href="https://github.com/google/jax/blob/fae98733aa08d407ce678d6034f4a888171fa1c5/jax/_src/lax/stack.py#L63" rel="nofollow noreferrer">This</a>:</p>
<pre><code> def pop(self) -> tuple[Any, Stack]:
"""Pops from the stack, returning an (elem, updated stack) pair."""
</code></pre>
<p>What is the reason behind returning the updated stack along with the top element from the <code>pop()</code> method?</p>
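<p>My current understanding, illustrated with a toy persistent stack that is purely my own hypothetical example (nothing to do with JAX internals): if the data structure is treated as immutable, <code>pop()</code> cannot mutate <code>self</code>, so the "stack after popping" has to be handed back as a new value.</p>

```python
from typing import Any

class Stack:
    """Toy persistent stack: every operation returns a new Stack."""
    def __init__(self, items=()):
        self._items = tuple(items)

    def push(self, x) -> "Stack":
        return Stack(self._items + (x,))

    def pop(self) -> tuple[Any, "Stack"]:
        # the original stack is left untouched; callers use the returned one
        return self._items[-1], Stack(self._items[:-1])

s = Stack().push(1).push(2)
top, rest = s.pop()
```

<p>Is immutability (e.g. for tracing/functional transforms) indeed the motivation here?</p>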
| <python><stack><jax> | 2023-09-07 08:24:09 | 1 | 409 | vtm11 |
77,057,832 | 17,561,414 | readstream display dataframe does nto work | <p>I have loaded the delta bronze table using autoloader in databricks.</p>
<p>Now I want to read this table and display it. But I get the following error when <code>display(df)</code> runs.</p>
<p>my code:</p>
<pre><code>df = spark.readStream.format("delta") \
    .option("readChangeFeed", "true") \
    .table("mdp_prd.bronze.nrq_customerassetproperty_autoloader_nodups")
</code></pre>
<p>Error:</p>
<pre><code>Py4JError: An error occurred while calling z:com.databricks.backend.daemon.driver.DisplayHelper.getStreamName. Trace:
py4j.security.Py4JSecurityException: Method public static java.lang.String com.databricks.backend.daemon.driver.DisplayHelper.getStreamName() is not whitelisted on class class com.databricks.backend.daemon.driver.DisplayHelper
	at py4j.security.WhitelistingPy4JSecurityManager.checkCall(WhitelistingPy4JSecurityManager.java:473)
	at py4j.Gateway.invoke(Gateway.java:305)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
	at java.lang.Thread.run(Thread.java:750)
</code></pre>
| <python><apache-spark><pyspark><azure-databricks> | 2023-09-07 08:23:53 | 1 | 735 | Greencolor |
77,057,794 | 10,232,932 | conditional line color results in ValueError | <p>I am trying to plot a line graph with matplotlib.pyplot and have a dataframe df with the shape <code>(42,7)</code>. The dataframe has the following structure (only showing relevant columns):</p>
<pre><code>timepoint value point
2021-01-01 10 0
2021-02-01 20 0
....
2021-11-01 10 0
2021-12-01 50 1
2022-01-01 60 1
...
</code></pre>
<p>I try to plot with conditional colors in the following way (so that each value with point=0 is blue and each value with point=1 is red):</p>
<pre><code>import numpy as np
col = np.where(df['point'] == 0, 'b', 'r')
plt.plot(df['timepoint'], df['value'], c=col)
plt.show()
</code></pre>
<p>and I get the error massage:</p>
<blockquote>
<p>ValueError: array(['b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b',
'b', 'b', 'b',
'b', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r',
'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'r',
'r', 'r', 'r'], dtype='<U1') is not a valid value for color</p>
</blockquote>
<p>Looking at the question <a href="https://stackoverflow.com/questions/53531429/valueerror-invalid-rgba-argument-what-is-causing-this-error">ValueError: Invalid RGBA argument: What is causing this error?</a>, I don't find a solution there, since my color array already has a matching shape: <code>col.shape</code> is <code>(42,)</code>.</p>
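<p>The fallback I've been sketching (not what I'd call elegant, and the line visually breaks between color groups): since a single line seems to take only one color, plot one line per group instead:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "timepoint": pd.date_range("2021-01-01", periods=6, freq="MS"),
    "value": [10, 20, 30, 40, 50, 60],
    "point": [0, 0, 0, 1, 1, 1],
})

fig, ax = plt.subplots()
for flag, color in [(0, "b"), (1, "r")]:
    sub = df[df["point"] == flag]
    ax.plot(sub["timepoint"], sub["value"], c=color)  # one color per call
```

<p>Is there a way to pass the per-point color array directly, the way <code>scatter</code> allows?</p>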
| <python><pandas><matplotlib><colors><valueerror> | 2023-09-07 08:17:30 | 2 | 6,338 | PV8 |
77,057,654 | 1,725,871 | Add Content-Type header to Very simple SimpleHTTPRequestHandler without extending the class | <p>I have an extremely simple http server setup for local testing:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
from functools import partial
from http.server import SimpleHTTPRequestHandler, test
import os
HTTP_DIR = os.getcwd()
SimpleHTTPRequestHandler.extensions_map = {k: v + ';charset=UTF-8' for k, v in SimpleHTTPRequestHandler.extensions_map.items()}
test(HandlerClass=partial(SimpleHTTPRequestHandler, directory=HTTP_DIR), port=8000, bind='0.0.0.0')
</code></pre>
<p>I noticed that the <code>Content-Type</code> header is not sent for CSS files. A little digging suggested it might have something to do with <code>SimpleHTTPRequestHandler.extensions_map</code>, but as I understand it, <a href="https://docs.python.org/3/library/http.server.html" rel="nofollow noreferrer">this only holds overrides for the system mappings</a>, so I assumed the system mappings would be picked up automatically.</p>
<blockquote>
<p>Changed in version 3.9: This dictionary is no longer filled with the default system mappings, but only contains overrides.</p>
</blockquote>
<p>As I understand it, the system defaults should be added automatically, but the dump of <code>print(SimpleHTTPRequestHandler.extensions_map.items())</code> is this list:</p>
<pre class="lang-py prettyprint-override"><code>dict_items([('.gz', 'application/gzip;charset=UTF-8'), ('.Z', 'application/octet-stream;charset=UTF-8'), ('.bz2', 'application/x-bzip2;charset=UTF-8'), ('.xz', 'application/x-xz;charset=UTF-8')])
</code></pre>
<p>I do see that I could <a href="https://www.programcreek.com/python/?code=watir%2Fnerodia%2Fnerodia-master%2Fnerodia%2Fsupport%2Fwebserver.py" rel="nofollow noreferrer">extend the functionality and add custom endpoints</a>, but that would make the implementation more extensive than desired...</p>
<h1>EDIT</h1>
<p>For some reason <code>.js</code> files <em>do</em> get <code>application/javascript</code></p>
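<p>My working hypothesis (untested against the server itself): for anything not in <code>extensions_map</code>, the handler falls back to the <code>mimetypes</code> module, and on some systems CSS is missing there (on Windows it can depend on the registry). If that's right, registering the type directly might be enough:</p>

```python
import mimetypes

# make sure the stdlib fallback knows about CSS before the server starts
mimetypes.add_type("text/css", ".css")

guessed, _encoding = mimetypes.guess_type("style.css")
```

<p>That would also explain why <code>.js</code> works: JavaScript is apparently registered while CSS, on my system, is not.</p>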
| <python><simplehttpserver><simplehttprequesthandler> | 2023-09-07 07:55:35 | 1 | 3,442 | JoSSte |
77,057,463 | 967,501 | How do I get the amount of bandwidth used during the execution of a Linux program? | <p>I need to understand how much bandwidth is used by a program which I execute from a shell on Linux, via subprocess.run in Python.</p>
<p>Maybe a solution already exists, something similar to <code>time</code>?</p>
<pre><code>petur@petur:~$ time foobar
foobar 0.00s user 0.00s system 80% cpu 0.003 total
</code></pre>
<p>If not, what libraries would I need create such a utility? (To gather the total incoming and outgoing bandwidth during the execution of a program (including all of its threads/tasks)).</p>
| <python><c><python-3.x><linux> | 2023-09-07 07:32:17 | 1 | 4,503 | Pétur Ingi Egilsson |
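There is no drop-in `time`-style wrapper in the standard library; tools like `nethogs` or cgroup accounting can attribute traffic to a single process, while a pure-stdlib approach can only bracket system-wide counters around the run. A rough Linux-only sketch of the latter (it over-counts if anything else uses the network concurrently):

```python
import subprocess

def read_net_bytes():
    """Sum RX/TX byte counters over all interfaces from /proc/net/dev (Linux-only)."""
    rx = tx = 0
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            fields = line.split(":")[1].split()
            rx += int(fields[0])                # received bytes
            tx += int(fields[8])                # transmitted bytes
    return rx, tx

def run_measured(cmd):
    """Run cmd via subprocess.run and return (rx_delta, tx_delta) in bytes."""
    rx0, tx0 = read_net_bytes()
    subprocess.run(cmd)
    rx1, tx1 = read_net_bytes()
    return rx1 - rx0, tx1 - tx0
```

For per-process attribution (including threads), the kernel-side options are eBPF/`nethogs`-style accounting or running the command inside a network namespace and reading that namespace's counters.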
77,057,444 | 3,104,974 | Plot sklearn's DecisionBoundaryDisplay for Classifier With More Than 2 Features | <h2>Observation</h2>
<p>There are <a href="https://stackoverflow.com/questions/76876844/color-regions-in-a-scatter-plot/76877301#76877301">examples</a> using the <a href="https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html" rel="nofollow noreferrer">iris dataset</a> of how to plot decision boundaries using <a href="https://scikit-learn.org/stable/modules/generated/sklearn.inspection.DecisionBoundaryDisplay.html#sklearn.inspection.DecisionBoundaryDisplay" rel="nofollow noreferrer"><code>sklearn.inspection.DecisionBoundaryDisplay</code></a> on a model trained on only 2 (arbitrary) features out of 4 available. Of course that is not the classifier I want to end up with, since I want to include all 4 features into the model.</p>
<p>However, when including all features <code>DecisionBoundaryDisplay.from_estimator</code> raises</p>
<blockquote>
<p>ValueError: Input X contains NaN.</p>
<p>KNeighborsClassifier does not accept missing values encoded as NaN natively....</p>
</blockquote>
<p>The documentation describes the signature of <code>DecisionBoundaryDisplay.from_estimator(estimator, X, ...)</code> where <code>X</code> is said to be <em>of shape (n_samples, 2). Input data that should be only 2-dimensional.</em></p>
<p>But obviously it need not only be 2-dimensional, but have only 2 features.</p>
<p>I also tried to train the model on all features, but pass only 2 of them to <code>from_estimator</code>, which raises:</p>
<blockquote>
<p>ValueError: X has 2 features, but RobustScaler is expecting 4 features as input.</p>
</blockquote>
<h2>Question</h2>
<p>Is there any way of plotting a 2-d intersection of the decision boundary of a higher dimensional model?</p>
<h2>Code Example</h2>
<p>This is based on sklearn's <a href="https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html" rel="nofollow noreferrer">Classifier Comparison</a> article</p>
<pre><code># -*- coding: utf-8 -*-
import seaborn as sns
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
classifiers = {
"KNN": KNeighborsClassifier(n_neighbors=5),
"RBF SVM": SVC(kernel="rbf"),
}
intersections = [
["sepal length (cm)", "sepal width (cm)"],
["petal length (cm)", "petal width (cm)"],
]
fig, axs = plt.subplots(len(classifiers), len(intersections),
figsize=(4 * len(intersections), 4 * len(classifiers)))
for i, (name, mdl) in enumerate(classifiers.items()):
clf = make_pipeline(RobustScaler(), mdl)
clf.fit(X_train, y_train)
for j, cols in enumerate(intersections):
sns.scatterplot(X_test, x=cols[0], y=cols[1], hue=y_test, ax=axs[i, j])
DecisionBoundaryDisplay.from_estimator(clf, X, alpha=0.2, ax=axs[i, j])
</code></pre>
| <python><plot><scikit-learn><classification> | 2023-09-07 07:29:27 | 0 | 6,315 | ascripter |
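One common workaround (a sketch, not something `DecisionBoundaryDisplay` does for you): build the 2-D grid yourself, pin the remaining features to a fixed value such as their median, and call `predict` on the full-width grid, then contour the result. The grid-building step needs no sklearn at all:

```python
import numpy as np

def slice_grid(X, i, j, n=50):
    """Grid over features i and j of X (shape (m, d)); every other column
    is pinned to its median so a d-feature model can predict on the slice."""
    base = np.median(X, axis=0)
    xi = np.linspace(X[:, i].min(), X[:, i].max(), n)
    xj = np.linspace(X[:, j].min(), X[:, j].max(), n)
    gi, gj = np.meshgrid(xi, xj)
    grid = np.tile(base, (n * n, 1))
    grid[:, i] = gi.ravel()
    grid[:, j] = gj.ravel()
    return gi, gj, grid
```

With a fitted pipeline `clf`, something like `Z = clf.predict(grid).reshape(gi.shape)` followed by `ax.contourf(gi, gj, Z, alpha=0.2)` would then play the role of `from_estimator` for that 2-D intersection — assuming a median slice is an acceptable stand-in for the held-out features.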
77,057,362 | 350,685 | Python code stops updating Mysql database randomly | <p>I am trying to learn Python by building myself a trading bot kind of application. This also uses a mysql database. I have a class for handling all database operations. The class is structured like so:</p>
<pre><code>class BotDB:
def __init__(self, dbUsername, dbPassword):
print("Initializing database.")
self.__dbUsername = dbUsername
self.__dbPassword = dbPassword
# Function to connect to db.
def __connectToDatabase(self):
print("Connecting to database")
try:
self.__botDB = mysql.connector.connect(
host="localhost",
port=3306,
user=self.__dbUsername,
password=self.__dbPassword
)
self.__botDBCursor = self.__botDB.cursor(buffered=True)
return True
except Exception as e:
print("Something went wrong in connecting to database.")
print(e)
return False
</code></pre>
<p>The self.__botDBCursor and self.__botDB get used in all different functions within BotDB class. There are functions that update a specific table within the database quite frequently. I find that the database stops being updated after a time. Quite randomly too and without errors. I have gone through some of the threads here and</p>
<ul>
<li>I am calling <code>commit</code> on the update transactions.</li>
</ul>
<p>My questions:</p>
<ul>
<li>A lot of the other threads point out that it is important to close the connections once done. But I am re-using the object once created. Do I still need to close and re-open the connections?</li>
<li>Are there better ways of diagnosing or handling this? Like ORMs for Python that can manage connections ?</li>
</ul>
| <python><mysql> | 2023-09-07 07:16:27 | 0 | 10,638 | Sriram |
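A common cause of this symptom is MySQL's `wait_timeout` silently dropping idle connections, so long-lived objects typically either ping/reconnect before each query or wrap queries in a retry. A dependency-free sketch of the retry shape — the `_reconnect` hook is hypothetical and would re-run `__connectToDatabase` in the class above:

```python
import functools
import time

def with_reconnect(retries=2, delay=0.5, exceptions=(Exception,)):
    """Re-run a DB method after re-opening the connection if it fails
    (e.g. 'MySQL server has gone away' after an idle timeout)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return fn(self, *args, **kwargs)
                except exceptions:
                    if attempt == retries:
                        raise
                    time.sleep(delay)
                    self._reconnect()   # hypothetical hook re-opening the connection
        return wrapper
    return decorator

class _FlakyDB:
    """Toy stand-in for BotDB: the first query raises, mimicking a dropped
    connection, and succeeds once the reconnect hook has run."""
    def __init__(self):
        self.calls = 0
        self.reconnects = 0
    def _reconnect(self):
        self.reconnects += 1
    @with_reconnect(retries=2, delay=0.0, exceptions=(ConnectionError,))
    def fetch_orders(self):
        self.calls += 1
        if self.calls == 1:
            raise ConnectionError("MySQL server has gone away")
        return ["order-1"]

db = _FlakyDB()
result = db.fetch_orders()
```

An ORM such as SQLAlchemy handles this with connection pooling (`pool_pre_ping=True`), which is the more robust answer to the second question.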
77,057,081 | 5,563,977 | In a Python Poetry project, what's the difference between {include = "b", from = "a"} and {include = "a/b"} in the packages section? | <p>I'm trying to build a Python package using Poetry and have some sub-packages I need to include.</p>
<p>Using the <code>include</code> directive in pyproject.toml, I'm struggling to understand what's the difference between this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
...
packages = [
{include = "b", from = "a"},
]
</code></pre>
<p>and this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
...
packages = [
{include = "a/b"},
]
</code></pre>
<p>It seems to give the exact same result. When would I use one over the other?</p>
| <python><python-3.x><python-packaging><python-poetry> | 2023-09-07 06:31:03 | 0 | 1,085 | MatanRubin |
77,056,859 | 464,618 | How do I create a dataset with a file resource on the Humanitarian Data Exchange (HDX) using the HDX Python API library? | <p>I would like to create a dataset the <a href="https://data.humdata.org/" rel="nofollow noreferrer">Humanitarian Data Exchange</a> (HDX) with a single resource, a csv file. I would like to use the <a href="https://github.com/OCHA-DAP/hdx-python-api" rel="nofollow noreferrer">HDX Python API</a>. I have looked at the <a href="https://hdx-python-api.readthedocs.io/en/latest/" rel="nofollow noreferrer">documentation</a> but need a more complete example of how to do it. How can I create the dataset?</p>
| <python><dataset><hdx> | 2023-09-07 05:39:07 | 1 | 1,352 | mcarans |
77,056,770 | 2,216,718 | Pandas Convert a column in the format "yyyy-MM-ddTHH:mm:ss.SSSZ" to datetime object | <p>I have a dataframe which has a column containing strings of the format "yyyy-MM-ddTHH:mm:ss.SSSZ"</p>
<p>When I do the following operation</p>
<pre><code>df["updatedTimeDateTime"] = pd.to_datetime(df["last_update_date_time"], format='%Y-%m-%dT%H:%M:%S.%fZ')
</code></pre>
<p>I get the following error</p>
<pre><code>ValueError: time data 'last_update_date_time' does not match format '%Y-%m-%dT%H:%M:%S.%fZ' (match)
</code></pre>
<p>I ran the command</p>
<pre><code>print (df.loc[pd.to_datetime(df["token_last_update_date_time"], format='%Y-%m-%dT%H:%M:%S.%fZ', errors='coerce').isna(),'token_last_update_date_time'])
</code></pre>
<p>and got the following output</p>
<pre><code>0 token_last_update_date_time
Name: token_last_update_date_time, dtype: object
</code></pre>
<p>On running the following command</p>
<pre><code>print (pd.to_datetime(df["token_last_update_date_time"], format='%Y-%m-%dT%H:%M:%S.%fZ', errors='coerce'))
</code></pre>
<p>I got the output</p>
<pre><code>0 NaT
1 2023-08-29 18:37:17.686
2 2023-08-29 19:23:25.107
3 2023-08-29 19:10:14.758
4 2023-08-29 18:34:19.377
</code></pre>
| <python><pandas><datetime> | 2023-09-07 05:15:20 | 2 | 862 | Mohit Shah |
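The `coerce` output is telling: the only unparseable value is the literal string `last_update_date_time`, i.e. the header row appears to have been read as data (for instance via `header=None` or a concatenated file). Dropping that row — or coercing and then `dropna()` — should let the original `format=` call succeed. The per-value check is plain `strptime` underneath:

```python
from datetime import datetime

def parse_iso_ms(s):
    """Parse 'YYYY-MM-DDTHH:MM:SS.mmmZ'; return None for values that do
    not match (such as a stray header string that slipped into the data)."""
    try:
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")
    except ValueError:
        return None
```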
77,056,731 | 13,060,649 | Django istartswith performance | <p>I have a model with a column <code>name</code> and I am running a query with <code>istartswith</code>, using PostgreSQL. I have added a <code>django.db.models.Index</code> with a PostgreSQL opclass:</p>
<pre><code>Index(fields=('name',), name="partial_name_pat_idx", opclasses=("varchar_pattern_ops",),
condition=Q(published=True))
</code></pre>
<p>But I can't figure out how to make use of the index for case-insensitive search. I have tried the Django model function <code>Upper('name')</code>, but it is not supported in the <code>fields</code> argument. If I pass <code>Upper(F('name'))</code> as an expression, Django restricts me from using <code>opclasses=("varchar_pattern_ops",)</code>. Is there any way I can create an index for queries that use the <code>LIKE</code> operator and the <code>UPPER</code> function on the field <code>name</code>?</p>
| <python><django><postgresql><django-models><indexing> | 2023-09-07 05:04:09 | 1 | 928 | suvodipMondal |
77,056,468 | 7,035,448 | Numba Dispatch error when Number of keyword args > 3 for nested numba calls | <p>This error happens when the keyword-only marker <code>*</code> is used in the function definition. I can start with three function definition cases: the first two pass, and the third, which is a minor modification of the second, fails. Maybe <code>*</code> is not supported, but the error is interesting and I would like to understand the cause.</p>
<p>Numba version: '0.56.4'
Python version: '3.9.17'</p>
<h2>Pass Test 1</h2>
<pre class="lang-py prettyprint-override"><code>import numba as nb
def test_1(a, b, c, d, e, f, g):
return a + b + c + d + e + f + g
test_1 = nb.njit(test_1)
def test_2(a, b, c, d, e, f, g):
return test_1(a, b, c, d, e, f, g)
test_2 = nb.njit(test_2)
test_2(1, 2, 3, 4, 5, 6, 7)
</code></pre>
<h2>Pass Test 2</h2>
<pre class="lang-py prettyprint-override"><code>import numba as nb
def test_1(a, b, c, *, d, e, f):
return a + b + c + d + e + f
test_1 = nb.njit(test_1)
def test_2(a, b, c, d, e, f, g):
return test_1(a, b, c, d, e, f)
test_2 = nb.njit(test_2)
test_2(1, 2, 3, 4, 5, 6, 7)
</code></pre>
<h2>Fail Test 3</h2>
<pre class="lang-py prettyprint-override"><code>import numba as nb
def test_1(a, b, c, *, d, e, f, g):
return a + b + c + d + e + f
test_1 = nb.njit(test_1)
def test_2(a, b, c, d, e, f, g):
return test_1(a, b, c, d, e, f, g)
test_2 = nb.njit(test_2)
test_2(1, 2, 3, 4, 5, 6, 7)
</code></pre>
<h3>Error traceback</h3>
<pre><code>---------------------------------------------------------------------------
TypingError Traceback (most recent call last)
Cell In[10], line 9
7 test_2 = nb.njit(test_2)
8 # test_1(1, 2, 3, 4, 5, 6, 7)
----> 9 test_2(1, 2, 3, 4, 5, 6, 7)
File ~/.cache/pypoetry/virtualenvs/indices-ldm-post-process-danSbNDA-py3.9/lib/python3.9/site-packages/numba/core/dispatcher.py:467, in _DispatcherBase._compile_for_args(self, *args, **kws)
464 msg = (f"{str(e).rstrip()} \n\nThis error may have been caused "
465 f"by the following argument(s):\n{args_str}\n")
466 e.patch_message(msg)
--> 467 error_rewrite(e, 'typing')
468 except errors.UnsupportedError as e:
469 # Something unsupported is present in the user code, add help info
470 error_rewrite(e, 'unsupported_error')
File ~/.cache/pypoetry/virtualenvs/indices-ldm-post-process-danSbNDA-py3.9/lib/python3.9/site-packages/numba/core/dispatcher.py:409, in _DispatcherBase._compile_for_args..error_rewrite(e, issue_type)
407 raise e
408 else:
--> 409 raise e.with_traceback(None)
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Internal error at .
tuple index out of range
During: resolving callee type: type(CPUDispatcher())
During: typing of call at /tmp/ipykernel_3435313/3644502072.py (6)
Enable logging at debug level for details.
</code></pre>
<p>I have raised an issue at <a href="https://github.com/numba/numba/issues/9185" rel="nofollow noreferrer">Numba github</a></p>
| <python><debugging><numba><python-typing> | 2023-09-07 03:35:00 | 0 | 1,845 | eroot163pi |
77,056,398 | 5,179,643 | List comprehension to create a Pandas dataframe column based on values of 4 other other columns | <p>I have a Pandas dataframe that looks like this:</p>
<pre><code>df = pd.DataFrame({
'company_1': ['McDonalds', 'Mercedes', 'Apple'],
    'company_2': ['Wendys', 'BMW', 'Microsoft'],
'company_1_price' : [3, 1, 4],
'company_2_price' : [2, 8, 3]
})
</code></pre>
<p>I'd like to add a column to this <code>df</code> named <code>more_expensive</code>, which will be the company that is more expensive.</p>
<p>For example, the <code>df</code> would look as follows:</p>
<pre><code> company_1 company_2 company_1_price company_2_price more_expensive
0 McDonalds Wendys 2 3 Wendys
1 Mercedes BWM 7 4 Mercedes
2 Apple Microsoft 9 3 Apple
</code></pre>
<p>How would I use list comprehension to add this column?</p>
<p>Thank you.</p>
| <python><pandas> | 2023-09-07 03:16:00 | 1 | 2,533 | equanimity |
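A sketch of the requested comprehension on plain lists — with a DataFrame, zipping the four columns works the same way since each column is iterable. Note that ties here arbitrarily favor `company_2`; adjust the comparison if ties should go the other way:

```python
def more_expensive(c1, c2, p1, p2):
    """Pick the pricier company per row; assumes equal-length sequences."""
    return [a if pa > pb else b for a, b, pa, pb in zip(c1, c2, p1, p2)]
```

In pandas terms this would be `df["more_expensive"] = more_expensive(df["company_1"], df["company_2"], df["company_1_price"], df["company_2_price"])`.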
77,056,341 | 1,471,980 | how do you modify styled data frame in Pandas | <p>I have this data frame:</p>
<pre><code>df
Server Env. Model Percent_Utilized
server123 Prod Cisco. 50
server567. Prod Cisco. 80
serverabc. Prod IBM. 100
serverdwc. Prod IBM. 45
servercc. Prod Hitachi. 25
Avg 60
server123Uat Uat Cisco. 40
server567u Uat Cisco. 30
serverabcu Uat IBM. 80
serverdwcu Uat IBM. 45
serverccu Uat Hitachi 15
Avg 42
</code></pre>
<p>I have style applied to this df as follows:</p>
<pre><code>def color(val):
if val > 80:
color = 'red'
elif val > 50 and val <= 80:
color = 'yellow'
else:
color = 'green'
return 'background-color: %s' % color
df_new = df.style.applymap(color, subset=["Percent_Utilized"])
</code></pre>
<p>I need to add % at the end of the numbers on Percent_Utilized columns:</p>
<p>resulting data frame need to look something like this:</p>
<pre><code>df_new
Server Env. Model Percent_Utilized
server123 Prod Cisco. 50%
server567. Prod Cisco. 80%
serverabc. Prod IBM. 100%
serverdwc. Prod IBM. 45%
servercc. Prod Hitachi. 25%
Avg 60%
server123Uat Uat Cisco. 40%
server567u Uat Cisco. 30%
serverabcu Uat IBM. 80%
serverdwcu Uat IBM. 45%
serverccu Uat Hitachi 15%
Avg 42%
</code></pre>
<p>when I do this:</p>
<pre><code>df_new['Percent_Utilized'] = df_new['Percent_Utilized'].astype(str) + '%'
</code></pre>
<p>I get this error:</p>
<p>TypeError: 'Styler' object is not subscriptable.</p>
| <python><pandas><pandas-styles> | 2023-09-07 02:58:38 | 1 | 10,714 | user1471980 |
77,056,176 | 11,124,121 | How to use with_columns in LazyGroupBy object in polars? | <p>I am trying to calculate the lagged difference of a column, grouped by the <code>id</code> variable. However,</p>
<p>when I tried to run the following code:</p>
<pre><code>ad.v2.group_by('id').with_columns(
diff = pl.col('Movement_Time_clear') - pl.col('Movement_Time_clear').diff()
)
</code></pre>
<p>An error was raised:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'LazyGroupBy' object has no attribute 'with_columns'
</code></pre>
<p>What is the cause of this error?</p>
| <python><python-polars> | 2023-09-07 02:02:04 | 1 | 853 | doraemon |
77,056,099 | 5,179,643 | How to get the max count of groups using Pandas groupby, using alphabetical order to break any ties | <p>I have a Pandas dataframe that looks like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'person': ['Mike', 'Mike', 'Mike', 'Bob', 'Bob', 'Bob', 'Susan', 'Cindy', 'Paul', 'Paul', 'Jon', 'Larry', 'Cindy', 'Larry', 'Larry', 'David', 'David', 'David', 'Eric', 'Cindy', 'Paul'],
'city': ['New York', 'New York', 'New York', 'New York', 'New York', 'New York', 'New York', 'London', 'London', 'London', 'London', 'Sydney', 'Sydney', 'Sydney', 'Sydney', 'Sydney', 'Sydney', 'Sydney', 'Tokyo', 'Tokyo', 'Tokyo']
})
</code></pre>
<p>For each city, I'd like to return the person with the max count within that city. In cases of a tie, I'd like to use the alphabetical order (closest to 'A') of the person.</p>
<p>The desired dataframe would look as follows:</p>
<pre><code>city person
New York Bob
London Paul
Sydney David
Tokyo Cindy
</code></pre>
<p>I believe I can do this using <code>groupby()</code> and <code>idxmax()</code>, but I'm not sure how.</p>
<p>Any assistance would be greatly appreciated.</p>
<p>Thanks!</p>
| <python><pandas><group-by> | 2023-09-07 01:37:52 | 3 | 2,533 | equanimity |
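One pandas route is `df.groupby(['city', 'person']).size()` sorted by count descending and name ascending, then `.first()` per city. The selection logic itself — highest count wins, ties broken by alphabetical order — is easy to verify dependency-free:

```python
from collections import Counter

def top_person_per_city(pairs):
    """pairs: iterable of (city, person). For each city return the person
    with the highest count, breaking ties alphabetically."""
    counts = Counter(pairs)
    best = {}
    for (city, person), n in counts.items():
        cur = best.get(city)
        if cur is None or n > cur[0] or (n == cur[0] and person < cur[1]):
            best[city] = (n, person)
    return {city: person for city, (n, person) in best.items()}
```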
77,056,061 | 1,522,308 | Problem implementing delays using elapsed time (timestamps)? | <p>My use case requirements:</p>
<ul>
<li>Requirement 1: Loop through Task A, Task B, Task C</li>
<li>Requirement 2: Skip Task A if performed within the last 5 minutes.</li>
<li>Requirement 3: If Task A not performed within the last 5 minutes, do not skip during the next loop (or add Task A to a queue?)</li>
</ul>
<p>I'm using the timestamp approach:</p>
<pre><code>import time
from time import perf_counter_ns
intDelayIntervalSecs = 300/1000
tsLastIteration = perf_counter_ns()/1000000000
while True:
if time.time() - tsLastIteration > intDelayIntervalSecs:
# perform task A here
tsLastIteration = perf_counter_ns()/1000000000
else:
# perform task B here
# perform task C here
</code></pre>
<p>I'm avoiding <code>time.sleep(n)</code>, because that will park the program for 5 minutes and block the other tasks.</p>
<p>I'm avoiding threads (<code>Event.wait()</code>, etc.) because when the delay interval elapses, I don't want the program to abandon the current task and jump directly to the delayed task.</p>
<p>I've looked at numerous Python references. This elapsed time approach is NEVER mentioned. This makes me wonder, is there a nuance or problem with this code?</p>
<p>If there is another way to code these requirements, thanks in advance.</p>
| <python><delay><sleep> | 2023-09-07 01:24:23 | 0 | 342 | torpedo51 |
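One nuance worth noting in the snippet above: the timestamp is taken from `perf_counter_ns()` but compared against `time.time()`, which are two unrelated clocks. A hedged sketch of the same elapsed-time approach using a single monotonic clock throughout:

```python
import time

def make_gate(interval):
    """Return a zero-argument callable reporting True at most once per
    `interval` seconds, measured on one monotonic clock."""
    last = time.monotonic() - interval          # fire on the first check
    def due():
        nonlocal last
        now = time.monotonic()
        if now - last >= interval:
            last = now
            return True
        return False
    return due

task_a_due = make_gate(300)                     # 5-minute spacing for task A
```

Inside the loop, `if task_a_due(): ... # task A` replaces the manual bookkeeping; tasks B and C run every iteration regardless, which satisfies requirement 3 because the gate fires on the first check after the interval elapses rather than on a fixed schedule.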
77,056,035 | 5,319,180 | Why would you want to create more than one event loops in asyncio? | <p>Why not just use the default one always? are there any usecases for creating multiple event loops?</p>
| <python><asynchronous><python-asyncio> | 2023-09-07 01:11:18 | 2 | 429 | D.B.K |
77,056,021 | 8,869,570 | How to get the most recent dated file (filename of the form YYYY-MM-DD-filename.txt) in a directory | <p>Inside a directory <code>dir2search</code>, I have many files of the form</p>
<pre><code>YYYY-MM-DD-orders.txt
</code></pre>
<p>where YYYY is the 4 digit year, MM is the 2 digit month (01 for January, 02 for February, etc..) and DD is the 2 digit day of the month (01 for the first, 02 for the second, etc..).</p>
<p>How can I use python to get the most recently dated file?</p>
<p>Currently, I use</p>
<pre><code> def all_orders_files(dir2search) :
return glob(os.path.join(dir2search, '????-??-??-orders.txt'))
    def most_recent_date(dir2search) :
        # str2date is a custom function I have that converts a date string to a datetime object
        return max(str2date(os.path.basename(filename)[:10])
                   for filename in all_orders_files(dir2search))
</code></pre>
<p>I don't know how efficient this is, but I also don't know of another way to do it. I would like to profile against some other methods.</p>
| <python> | 2023-09-07 01:08:51 | 2 | 2,328 | 24n8 |
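Since the `YYYY-MM-DD` prefix sorts lexicographically in chronological order, the date strings can be compared directly — no parsing needed, and a single `max()` pass is about as efficient as this gets short of caching. A self-contained sketch, with a throwaway directory for demonstration:

```python
import os
import tempfile
from glob import glob

def most_recent_orders_file(dir2search):
    """Return the path whose YYYY-MM-DD prefix is latest, or None if empty.
    ISO date prefixes sort lexicographically in date order, so max() on
    the basename prefix needs no datetime conversion at all."""
    files = glob(os.path.join(dir2search, "????-??-??-orders.txt"))
    return max(files, key=lambda p: os.path.basename(p)[:10]) if files else None

# Tiny self-check on a throwaway directory
demo = tempfile.mkdtemp()
for name in ("2023-01-31-orders.txt", "2023-09-06-orders.txt", "2022-12-01-orders.txt"):
    open(os.path.join(demo, name), "w").close()
latest = most_recent_orders_file(demo)
```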
77,056,014 | 5,437,918 | How to import entire own module in Python as an alias? | <p>I have a directory structure like the following:</p>
<pre><code>my-package/
config/
__init__.py
constants.py
logging.py
app.py
</code></pre>
<p>From within <code>config/__init__.py</code>, I'd like to do an import along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>import .constants as const
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>from .constants import * as const
</code></pre>
<p>But of course neither syntax is valid.</p>
<p>Just wanted to ask if anyone has any ideas on how to achieve this.</p>
<p>I looked through several other related SO posts, and none of them seem to address this specific case.</p>
| <python><python-import><relative-import> | 2023-09-07 01:06:51 | 1 | 1,070 | Jethro Cao |
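The standard spelling for a relative "import-as" of a whole submodule is `from . import constants as const`. A self-contained demonstration that builds a throwaway package mirroring the question's layout (the name `cfgdemo_pkg` is invented for the demo) and imports it:

```python
import importlib
import os
import sys
import tempfile

# Build cfgdemo_pkg/ with constants.py and an __init__.py that does the
# relative import-as the question is after.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "cfgdemo_pkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "constants.py"), "w") as f:
    f.write("ANSWER = 42\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from . import constants as const\n")

sys.path.insert(0, root)
cfg = importlib.import_module("cfgdemo_pkg")
```

`from .constants import * as const` has no valid equivalent — star imports cannot be aliased — but `from . import constants as const` gives the same effect with an explicit namespace.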
77,055,917 | 5,319,180 | Why use SQS/AMQ in python when you can just use asyncio.create_task? | <p>Accepting the downside that you're now unable to independently scale your API service and workers, you do have the upside of lower network latency, since nothing needs to be enqueued into a task queue and nothing needs to be polled from a remote one.</p>
<p>Are there any notable reasons to still use SQS in the context of python?</p>
| <python><asynchronous><celery><python-asyncio><amazon-sqs> | 2023-09-07 00:24:52 | 0 | 429 | D.B.K |
77,055,702 | 5,179,643 | How to add non grouped column to the output of Pandas groupby() | <p>I have a Pandas dataframe df that looks like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'student': ['Bob', 'Sally', 'Rich', 'Melissa', 'Len', 'Sue', 'Jon', 'Sandy',
'William', 'Sara'],
'year': [2020, 2020, 2020, 2021, 2021, 2021, 2021, 2022, 2022, 2022],
'gpa': [2.9, 3.7, 3.2, 3.8, 3.8, 3.1, 3.2, 2.7, 3.6, 3.9]})
df
</code></pre>
<p>Output:</p>
<pre><code> student year gpa
0 Bob 2020 2.9
1 Sally 2020 3.7
2 Rich 2020 3.2
3 Melissa 2021 3.8
4 Len 2021 3.8
5 Sue 2021 3.1
6 Jon 2021 3.2
7 Sandy 2022 2.7
8 William 2022 3.6
9 Sara 2022 3.9
</code></pre>
<p>I'd like to use <code>groupby()</code> to get the highest GPA per year <em><strong>and</strong></em> include the student associated with that GPA value.</p>
<p>The desired dataframe would look as follows:</p>
<pre><code>year gpa student
2020 3.7 Sally
2021 3.8 Len
2021 3.8 Melissa # <--- notice that there is a tie
2022 3.9 Sara
</code></pre>
<p>I tried using the following:</p>
<pre><code>result = df.groupby(['year'])[['gpa']].max().reset_index()
result
year gpa
0 2020 3.7
1 2021 3.8
2 2022 3.9
</code></pre>
<p>But, this does not give me <code>student</code>.</p>
<p>How would I add <code>student</code> to the output?</p>
| <python><pandas><group-by> | 2023-09-06 23:03:38 | 3 | 2,533 | equanimity |
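One pandas idiom that keeps ties is a boolean mask: `df[df['gpa'] == df.groupby('year')['gpa'].transform('max')]`. The same logic, sketched dependency-free for easy verification:

```python
def rows_with_max_gpa(rows):
    """rows: list of (student, year, gpa). Keep every row whose gpa equals
    the maximum for its year, preserving input order (so ties survive)."""
    best = {}
    for _, year, gpa in rows:
        best[year] = max(best.get(year, float("-inf")), gpa)
    return [r for r in rows if r[2] == best[r[1]]]
```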
77,055,665 | 15,491,705 | plt.show adds extra legend labelspacing between rows if there are subscripts, but fig.savefig does not? | <p>When I plot a figure and legend using the following code</p>
<pre><code>import matplotlib.pyplot as plt
# Create sample data
x = [1, 2, 3, 4]
y1 = [1, 2, 3, 4]
y2 = [4, 3, 2, 1]
y3 = [2, 3, 2, 1]
# Create the main figure
fig, ax = plt.subplots()
# Plot the lines
line1, = ax.plot(x, y1, label='$\\mathdefault{H_{12}\\ 5}$')
line2, = ax.plot(x, y2, label='$\\mathdefault{H_{12}\\ 5}$')
line3, = ax.plot(x, y3, label='$\\mathdefault{H_{12}\\ 5}$')
line3, = ax.plot(x, y3, label='$\\mathdefault{H_{12}\\ 5}$')
line3, = ax.plot(x, y3, label='$\\mathdefault{H_{12}\\ 5}$')
line3, = ax.plot(x, y3, label='$\\mathdefault{H_{12}\\ 5}$')
line3, = ax.plot(x, y3, label='$\\mathdefault{H_{12}\\ 5}$')
# Create a legend for the lines below the figure
fig.legend(loc='lower center', bbox_to_anchor=(0.5, 0), ncol=1, fontsize=8, labelspacing=0.5)
# Set labels and title
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_title('Multiple Lines with Legend Below')
# Adjust layout for better appearance
plt.tight_layout()
plt.subplots_adjust(bottom=0.4)
plt.show()
</code></pre>
<p>I get different amount of legend labelspacing between the rows of the legend in the popped-up window versus the saved figure. The figure in the screen shotted pop-up window in spyder Automatic backend (left) has more spacing than the figure saved as png (but also with other file formats) using <code>fig.savefig(filepath,dpi=600, transparent=False,bbox_inches='tight')</code> (right)
<a href="https://i.sstatic.net/1sknY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1sknY.png" alt="enter image description here" /></a></p>
<p>I overlayed the figures and the font(sizes) are exactly the same. I figured that this issue only shows up when the legend labels have subscripts. It seems that <code>plt.show()</code> does generate extra space for subscripts while fig.savefig does not. Is this a bug, or how can I/we fix it. Anyone any ideas to solve this or work around? Any way to let <code>plt.show()</code> just ignore the fact that there are subscripts?</p>
<p>Following the comment from @Jody Klimak, my tex related rcParams are:</p>
<pre><code>'text.antialiased': True,'text.color': 'black','text.hinting': 'force_autohint','text.hinting_factor': 8,'text.kerning_factor': 0,'text.latex.preamble': '\\usepackage{amsmath}\\usepackage{amssymb} ''\\usepackage{sfmath}','text.usetex': False,'mathtext.bf': 'sans:bold','mathtext.cal': 'cursive','mathtext.default': 'it','mathtext.fallback': 'cm','mathtext.fontset': 'dejavusans','mathtext.it': 'sans:italic','mathtext.rm': 'sans','mathtext.sf': 'sans','mathtext.tt': 'monospace',
</code></pre>
| <python><matplotlib> | 2023-09-06 22:51:42 | 1 | 321 | Maurits Houck |
77,055,261 | 12,415,855 | Python - Email-Sending with smtplib for yahoo not possible? | <p>I am trying to send an email using the following code:</p>
<pre><code>from email.mime.text import MIMEText
import smtplib
from dotenv import load_dotenv
import os
import sys
if __name__ == '__main__':
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fn = os.path.join(path, ".env")
load_dotenv(fn)
LOGIN_EMAIL = os.environ.get("LOGIN_EMAIL")
LOGIN_PW = os.environ.get("LOGIN_PW")
SMTP_SERVER = os.environ.get("SMTP_SERVER")
SMTP_PORT = os.environ.get("SMTP_PORT")
MAIL_TO = os.environ.get("MAIL_TO")
headlineTxt = "The Headline"
msgTxt = "This is some Text"
s = smtplib.SMTP (SMTP_SERVER, str(int(SMTP_PORT)))
print (s.ehlo ()) # Check if OK - Response 250 means connection is ok
print (s.starttls ()) # Check if OK
print (s.login (LOGIN_EMAIL, LOGIN_PW)) # Check if OK
msg = MIMEText (msgTxt)
msg['Subject'] = headlineTxt
msg['From'] = LOGIN_EMAIL
s.sendmail (LOGIN_EMAIL, MAIL_TO, msg.as_string ())
s.quit ()
</code></pre>
<p>When i run this code with a gmail-account:</p>
<pre><code>LOGIN_EMAIL = myEMail@gmail.com
LOGIN_PW = myPW
SMTP_SERVER = smtp.gmail.com
SMTP_PORT = 587
MAIL_TO = myEMail@gmx.at
</code></pre>
<p>this works fine with this output</p>
<pre><code>(250, b'smtp.gmail.com at your service, [2a02:1748:dd5c:8830:25e4:8eed:cc33:1bd4]\nSIZE 35882577\n8BITMIME\nSTARTTLS\nENHANCEDSTATUSCODES\nPIPELINING\nCHUNKING\nSMTPUTF8')
(220, b'2.0.0 Ready to start TLS')
(235, b'2.7.0 Accepted')
</code></pre>
<p>but when i try it with a yahoo-account:</p>
<pre><code>LOGIN_EMAIL = myEMail@yahoo.com
LOGIN_PW = myPW
SMTP_SERVER = smtp.mail.yahoo.com
SMTP_PORT = 587
MAIL_TO = myEMail@yahoo.com
</code></pre>
<p>it didn't work and I get the following error:</p>
<pre><code>(250, b'hermes--production-ir2-5cc57b9c45-hvdt8 Hello DESKTOP-8CPPRED.fritz.box [185.17.14.8])\nPIPELINING\nENHANCEDSTATUSCODES\n8BITMIME\nSIZE 41697280\nSTARTTLS')
(220, b'2.0.0 Ready to start TLS')
Traceback (most recent call last):
File "G:\DEV\Python-Diverses\smtplib\sendEmail.py", line 22, in <module>
print (s.login (LOGIN_EMAIL, LOGIN_PW)) # Check if OK
File "C:\Users\marku\AppData\Local\Programs\Python\Python310\lib\smtplib.py", line 739, in login
(code, resp) = self.auth(
File "C:\Users\marku\AppData\Local\Programs\Python\Python310\lib\smtplib.py", line 642, in auth
(code, resp) = self.docmd("AUTH", mechanism + " " + response)
File "C:\Users\marku\AppData\Local\Programs\Python\Python310\lib\smtplib.py", line 432, in docmd
return self.getreply()
File "C:\Users\marku\AppData\Local\Programs\Python\Python310\lib\smtplib.py", line 405, in getreply
raise SMTPServerDisconnected("Connection unexpectedly closed")
smtplib.SMTPServerDisconnected: Connection unexpectedly closed
</code></pre>
<p>How can I send emails using a Yahoo address?</p>
| <python><email><smtplib> | 2023-09-06 20:59:35 | 0 | 1,515 | Rapid1898 |
77,055,222 | 3,832,377 | How do I "wrap" a Python function without losing type information? | <p>I'm trying to provide some higher level logic to a Python function that calls an external library, e.g:</p>
<pre class="lang-py prettyprint-override"><code>class Example():
def my_func(self, *args, **kwargs):
return library.func(*args, **kwargs)
</code></pre>
<p>Depending on the result of the Python library, we might want to do different things, e.g:</p>
<pre class="lang-py prettyprint-override"><code>class Example():
def my_func(self, *args, **kwargs):
try:
return library.func(*args, **kwargs)
except LibraryRateLimitError:
# Simplified version of backoff:
time.sleep(2)
return self.get(*args, **kwargs)
</code></pre>
<p>Unfortunately, this new function loses <em>all</em> typing. The <code>library.func</code> is complex, with dozens of arguments, so I don't want to type it manually. Is there a way to statically tell the analyser that the higher level function takes in the same arguments as the lower-level one?</p>
| <python><types> | 2023-09-06 20:52:21 | 0 | 8,977 | Alexander Craggs |
77,055,063 | 2,058,333 | PostgreSQL changes table name to lowercase | <p>I am running this command with <code>sqlalchemy==1.4.31</code>.</p>
<pre><code>with get_postgres_db() as postgres_session:
r = postgres_session.execute("SELECT * FROM shop.Clients;")
</code></pre>
<p>and it fails with</p>
<pre><code>ProgrammingError: (psycopg2.errors.UndefinedTable) relation "shop.clients" does not exist
</code></pre>
<p>but <code>shop.Clients</code> exists:</p>
<pre><code>In [30]: with get_postgres_db() as postgres_session:
...: r = postgres_session.execute("SELECT * FROM information_schema.tables WHERE table_schema = 'shop';")
...:
In [31]:
In [31]: r.all()
Out[31]:
[('DB', 'shop', 'Clients', 'BASE TABLE', None, None, None, None, None, 'YES', 'NO', None),
...]
</code></pre>
<p>and the table exists.</p>
<p>It appears that the table name in the query is being folded to lowercase.
How can I disable this behavior?</p>
<p>Initially I wanted to create a decorator that clears all tables with <code>DELETE FROM Clients;</code> and thats how I stumbled upon this.</p>
| <python><postgresql><sqlalchemy> | 2023-09-06 20:19:19 | 0 | 5,698 | El Dude |
77,054,882 | 2,225,895 | Changing color of tkinter button | <p>I want to update the background color of a button when I have clicked on it. But nothing happens. Instead I tried this minimal code snippet, but only the foreground is changed, not the background:</p>
<pre><code>from tkinter import *
def demoColorChange():
button1.configure(bg='red', fg='blue')
parent = Tk()
parent.geometry('500x500')
button1 = Button(parent, text = 'click me!', command= demoColorChange )
button1.pack()
parent.mainloop()
</code></pre>
<p>This example code can also be found here: <a href="https://www.educba.com/tkinter-button-color/" rel="nofollow noreferrer">https://www.educba.com/tkinter-button-color/</a></p>
<p>This is on Mac Ventura and Python 3.11.5.
Is there some update I don't know about?</p>
<p><strong>Update</strong></p>
<p>Installing tkmacosx and adding the import</p>
<pre><code>from tkmacosx import Button
</code></pre>
<p>made it work!</p>
| <python><tkinter> | 2023-09-06 19:49:21 | 0 | 1,916 | El_Loco |
77,054,769 | 5,091,329 | SQLAlchemy 1:1 mapping | <p>Using python and SQLAlchemy 2.0 with PostgreSQL, I am trying to create a simple one to one mapping between two tables. The code is:</p>
<pre><code>class Base(orm.DeclarativeBase):
pk: orm.Mapped[uuid.UUID] = orm.mapped_column(
primary_key=True,
default=uuid.uuid4,
)
class AAA(Base):
__tablename__ = "aaa"
name: orm.Mapped[str]
# 1:1 mapping with BBB
linked_bbb: orm.Mapped["BBB"] = relationship(back_populates="linked_aaa", lazy="selectin")
class BBB(Base):
__tablename__ = "bbb"
name: orm.Mapped[str]
# 1:1 mapping with AAA
linked_aaa: orm.Mapped["AAA"] = relationship(back_populates="linked_bbb", lazy="selectin")
</code></pre>
<p>The code runs and produces this schema in Postgres:</p>
<pre><code>my_db=> \d aaa
Table "public.aaa"
Column | Type | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
name | character varying | | not null |
pk | uuid | | not null |
Indexes:
"aaa_pkey" PRIMARY KEY, btree (pk)
my_db=> \d bbb
Table "public.bbb"
Column | Type | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
name | character varying | | not null |
pk | uuid | | not null |
Indexes:
"bbb_pkey" PRIMARY KEY, btree (pk)
</code></pre>
<p>I was expecting that each table would have a column linking it back to the other. What am I missing?</p>
| <python><sqlalchemy> | 2023-09-06 19:32:34 | 0 | 1,437 | Jim Archer |
77,054,713 | 10,225,070 | Need help making a 3D surface plot a 4D surface plot with color as separate dimension | <p>As the title says. I have tried so many different ways of doing this. I have 4 vectors of length 48.</p>
<pre><code>X: [ 25 25 25 25 25 25 50 50 50 50 50 50 75 75 75 75 75 75
100 100 100 100 100 100 125 125 125 125 125 125 150 150 150 150 150 150
175 175 175 175 175 175 200 200 200 200 200 200]
Y: [ 100 250 500 1000 1500 2000 100 250 500 1000 1500 2000 100 250
500 1000 1500 2000 100 250 500 1000 1500 2000 100 250 500 1000
1500 2000 100 250 500 1000 1500 2000 100 250 500 1000 1500 2000
100 250 500 1000 1500 2000]
Z: [ 0.20900428 0.51286209 1.03853414 3.28220448 4.6407558 7.34891026
0.2765902 0.7604821 1.76022537 5.10049512 8.61249235 12.96447849
0.2623122 0.98286221 2.5040107 6.2533442 11.0721308 15.36910634
0.32121766 0.97078288 2.66376145 7.51123161 12.98652091 20.21016505
0.38653798 1.21371622 3.30200138 7.93705671 17.20774968 28.97923372
0.46758823 1.23861806 3.72943289 8.38099084 19.04535632 32.7009341
0.44258697 1.42894619 3.96008332 10.45831311 22.98130064 31.32277734
0.4507597 1.7036628 4.69553339 10.92697349 25.68610439 45.02457106]
C: [38.96 39.48 40.34 41.04 41.08 41.06 39.76 40.62 40.88 41.06 41.04 41.2
39.22 40.48 40.98 41.2 41.26 41.16 40.2 40.78 40.68 41.26 41.26 41.32
39.96 40.56 40.86 41.26 41.26 41.52 40.36 40.6 41.22 41.26 41.78 41.7
39.24 40.8 41.26 41.4 41.92 41.62 39.74 41.06 41.24 41.56 41.94 42.06]
</code></pre>
<p>This code snippet</p>
<pre><code>X = overall_results.num_generations.values
Y = overall_results.population_size.values
Z = overall_results.avg_time.values
C = overall_results.avg_reward.values
# note, C is not used as this is meant to be a fully working example
fig = plt.figure(figsize=(8,6))
ax = Axes3D(fig, auto_add_to_figure=False)
fig.add_axes(ax)
surf = ax.plot_trisurf(X, Y, Z, cmap=cm.jet, linewidth=.2)
ax.view_init(elev=5, azim=-140)
colorbar = fig.colorbar(surf, ax=ax, pad=0.1, shrink=.5, ticks=[5, 10, 15, 20, 25, 30], format="%d")
colorbar.ax.set_yticklabels(['<= 5', '10', '15', '20', '25', '>= 30'])
plt.title('GA Time Analysis by Population Size and Number of Generations')
plt.show()
</code></pre>
<p>produces this figure</p>
<p><a href="https://i.sstatic.net/ZHUiF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZHUiF.png" alt="enter image description here" /></a></p>
<p>The color is mapped to Z, and the various methods I have tried to incorporate C all throw errors. This also uses trisurf, and the polycount is very low.</p>
<p>This code snippet</p>
<pre><code>X = overall_results.num_generations.values
Y = overall_results.population_size.values
Z = overall_results.avg_time.values
C = overall_results.avg_reward.values
# Define a finer grid for interpolation
new_X = np.linspace(X.min(), X.max(), 100)
new_Y = np.linspace(Y.min(), Y.max(), 100)
new_X, new_Y = np.meshgrid(new_X, new_Y)
# Perform interpolation
new_Z = griddata((X, Y), Z, (new_X, new_Y), method='linear')
# Create the 3D plot
fig = plt.figure(figsize=(8, 8))
ax = Axes3D(fig, auto_add_to_figure=False)
fig.add_axes(ax)
surf = ax.plot_surface(new_X, new_Y, new_Z, cmap=cm.jet, antialiased=True)
ticks = np.linspace(Z.min(), Z.max(), 10)
#ticks = [5, 10, 15, 20, 25, 30, 35, 40, 45]
colorbar = fig.colorbar(surf, ax=ax, pad=0.1, shrink=0.35, ticks=ticks, format="%d")
#colorbar.ax.set_yticklabels(['<= 5', '10', '15', '20', '25', '>= 30'])
ax.view_init(elev=8, azim=-150)
plt.title('GA Time Analysis by Population Size and Number of Generations')
ax.set_xlabel('Number of Generations', labelpad=12, fontsize=14)
ax.set_ylabel('Population Size', labelpad=12, fontsize=14)
ax.zaxis.set_rotate_label(False)
ax.set_zlabel('Running Time (seconds)', rotation=90, fontsize=14)
plt.show()
</code></pre>
<p>produces this figure</p>
<p><a href="https://i.sstatic.net/uQn3r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uQn3r.png" alt="enter image description here" /></a></p>
<p>A much better looking figure, and uses plot_surface instead of trisurf. But again, I'm not able to use C to set the color bar. I've looked at</p>
<p><a href="https://stackoverflow.com/questions/32461452/plot-3d-surface-with-colormap-as-4th-dimension-function-of-x-y-z">Plot 3d surface with colormap as 4th dimension, function of x,y,z</a></p>
<p><a href="https://stackoverflow.com/questions/14995610/how-to-make-a-4d-plot-with-matplotlib-using-arbitrary-data?noredirect=1&lq=1">How to make a 4d plot with matplotlib using arbitrary data</a></p>
<p><a href="https://stackoverflow.com/questions/6539944/color-matplotlib-plot-surface-command-with-surface-gradient/6543777#6543777">Color matplotlib plot_surface command with surface gradient</a></p>
<p>libraries used</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from scipy.interpolate import griddata
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import Normalize
</code></pre>
<p>Now, I'm able to create a scatter plot that does what I want, minus the surface, like so</p>
<pre><code>X = overall_results.num_generations.values
Y = overall_results.population_size.values
Z = overall_results.avg_time.values
C = overall_results.avg_reward.values
cmap = plt.get_cmap('jet') # You can choose any colormap you prefer
norm = Normalize(vmin=C.min(), vmax=C.max())
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
img = ax.scatter(X, Y, Z, c=C, s=100, cmap=cmap, norm=norm, alpha=1.0)
#plt.scatter(x, y, c=x, cmap=cmap, s=350, alpha=.7)
plt.xlabel('Average Reward', fontsize=14)
plt.ylabel('Running Time', fontsize=14)
cbar = fig.colorbar(img, pad=.1, shrink=.5)
cbar.set_label('Average Reward', fontsize=14, labelpad=10)
ax.view_init(elev=20, azim=-140)
plt.show()
</code></pre>
<p>Which produces this figure</p>
<p><a href="https://i.sstatic.net/rjvfx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rjvfx.png" alt="enter image description here" /></a></p>
<p>But I'd like this effect as a surface.</p>
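A hedged sketch of the surface version: grid C the same way as Z, convert it to RGBA with the colormap, and hand that to plot_surface via facecolors= (with shade=False so the colours are not darkened); the colorbar then comes from a separate ScalarMappable built over C's range. Synthetic arrays of the question's shape (48 points) stand in for overall_results:

```python
# Hedged sketch: drive the surface colour with C (the 4th dimension) instead of
# Z by interpolating C onto the same grid and passing cmap(norm(new_C)) as
# facecolors. Synthetic X/Y/Z/C stand in for the real data.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless-safe; drop this line for interactive use
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize
from scipy.interpolate import griddata

X = np.repeat(np.arange(25, 201, 25), 6).astype(float)   # 8 x 6 = 48 points
Y = np.tile([100, 250, 500, 1000, 1500, 2000], 8).astype(float)
Z = np.sqrt(X * Y) / 10.0          # stand-in for avg_time
C = 39.0 + (X + Y) / 1000.0        # stand-in for avg_reward

new_X, new_Y = np.meshgrid(np.linspace(X.min(), X.max(), 50),
                           np.linspace(Y.min(), Y.max(), 50))
new_Z = griddata((X, Y), Z, (new_X, new_Y), method="linear")
new_C = griddata((X, Y), C, (new_X, new_Y), method="linear")  # C gridded like Z

jet = plt.get_cmap("jet")
norm = Normalize(vmin=C.min(), vmax=C.max())
face_colors = jet(norm(new_C))     # RGBA array: colour now encodes C, not Z

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(new_X, new_Y, new_Z, facecolors=face_colors,
                antialiased=True, shade=False)

# plot_surface no longer knows about C, so build the colorbar separately
mappable = cm.ScalarMappable(norm=norm, cmap=jet)
mappable.set_array(C)
fig.colorbar(mappable, ax=ax, pad=0.1, shrink=0.5, label="Average Reward")
ax.view_init(elev=8, azim=-150)
fig.savefig("surface_4d.png")
plt.close(fig)
```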
| <python><matplotlib><surface><matplotlib-3d><4d> | 2023-09-06 19:18:37 | 1 | 413 | darrahts |
77,054,593 | 1,311,449 | File manipulation through Microsoft Graph API | <p>I have an Excel inside a Sharepoint which I'm perfectly able to read making a call to Graph API from Python using msal library. I'm stuck in trying to update this file. I gave the application the <code>Files.ReadWrite.All</code> permission and I can see it decoding the token through <a href="https://jwt.ms" rel="nofollow noreferrer">jwt.ms</a>:</p>
<pre><code>{
...
"roles": [
"Files.ReadWrite.All"
]
...
}
</code></pre>
<p>These are the permissions of my application: <a href="https://i.sstatic.net/preGt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/preGt.png" alt="App permissions" /></a></p>
<p>Despite this, when I try to send the <code>PATCH</code> HTTP request to the file passing the structure to update, I get a 403 AccessDenied error; it says that the operation can't be performed. It's not a problem with the structure I'm passing, as it works fine in the <a href="https://developer.microsoft.com/en-us/graph/graph-explorer" rel="nofollow noreferrer">Graph Explorer</a>. It seems to be a permissions problem but I don't know which other permission might be missing...</p>
<p>How can I find out where the problem is?</p>
<hr />
<p><strong>UPDATE</strong></p>
<p>Here's the code I'm using:</p>
<p>client instance generation:</p>
<pre><code>client = msal.ConfidentialClientApplication(
client_id
,authority = authority
,client_credential = client_secret
)
</code></pre>
<p>token request:</p>
<pre><code>tk = client.acquire_token_for_client(scopes = ['https://graph.microsoft.com/.default'])
</code></pre>
<p>the simple HTTP request:</p>
<pre><code>requests.request(
method = 'PATCH'
,headers = {'Authorization': 'Bearer ' + tk['access_token']}
,url = 'https://graph.microsoft.com/v1.0/sites/{}/drive/items/{}/workbook/worksheets(\'{}\')/range(address=\'B2:C{}\')'.format(CONFIGS['app']['sharepoint_id'], CONFIGS['app']['workbook_id'], sheet, (CONFIGS['app']['nRows'] + 1))
,json = {...}
)
</code></pre>
<p>where <code>json</code> has a format like this (I'm emptying some cells):</p>
<pre><code>{
"values": [
["", ""]
],
"formulas": [
[null, null]
],
"numberFormat": [
[null, null]
]
}
</code></pre>
<hr />
<p><strong>UPDATE 2</strong></p>
<p>I have set all readwrite permissions that exists on files API as you can see in this screenshot, still having a 403:</p>
<p><a href="https://i.sstatic.net/NDfAO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NDfAO.png" alt="enter image description here" /></a></p>
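Not an answer to the permission question itself (with app-only tokens, SharePoint content often additionally needs <code>Sites.ReadWrite.All</code> or a <code>Sites.Selected</code> grant, which is an assumption worth testing), but for debugging it can help to build the request once and inspect it. A hedged sketch that only constructs the PATCH, including the optional <code>workbook-session-id</code> header; all ids, names and the token are placeholders:

```python
# Hedged sketch: build (but do not send) the workbook PATCH so the URL and
# headers can be inspected offline. Attaching the call to an explicit workbook
# session via the workbook-session-id header is one thing worth trying.
# All ids, names and the token below are placeholders, not real values.
from typing import Optional
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SITE_ID, ITEM_ID, SHEET = "site-id", "item-id", "Sheet1"   # placeholders
TOKEN = "eyJ-placeholder"                                   # placeholder token

def build_patch(session_id: Optional[str] = None) -> requests.PreparedRequest:
    headers = {"Authorization": f"Bearer {TOKEN}"}
    if session_id:
        headers["workbook-session-id"] = session_id
    url = (f"{GRAPH}/sites/{SITE_ID}/drive/items/{ITEM_ID}"
           f"/workbook/worksheets('{SHEET}')/range(address='B2:C10')")
    req = requests.Request("PATCH", url, headers=headers,
                           json={"values": [["", ""]]})
    return req.prepare()   # actually sending would use requests.Session().send(...)

prepared = build_patch(session_id="placeholder-session")
```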
| <python><microsoft-graph-api><azure-ad-msal> | 2023-09-06 18:55:11 | 1 | 656 | Mark |
77,054,449 | 3,158,876 | Connecting to MongoDB with requests library | <p>I have a situation where I need to pass some data to MongoDB but am limited in the libraries I can access and so can only use requests. This is what I have so far, but it's always throwing a 'No connection adapters were found' error. I've confirmed that I have manual access to the server. I've also tried this both with and without setting the connection first, but get the same error either way.</p>
<pre><code>data = [10,20,30,40,50,60]
df = pd.DataFrame(data, columns=['Numbers'])
# Create a connection to the MongoDB database
connection = requests.Session()
# Post the document to the MongoDB database
response = connection.post(r'mongodb+srv://username:password@name.aaaa.mongodb.net/db_name/collection_name', json=df.to_dict())
</code></pre>
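One reading of the error: requests only speaks HTTP(S), and <code>mongodb+srv://</code> is not an HTTP URL, so there is no connection adapter for it. If only requests is available, an HTTPS endpoint such as the Atlas Data API is one option (assumption: the cluster is on Atlas with the Data API enabled). A hedged sketch that builds the payload offline; the URL and key in the commented-out call are placeholders:

```python
# Hedged sketch: convert the DataFrame to a list of documents and shape an
# Atlas Data API insertMany payload. Only the payload construction runs here;
# the commented-out POST needs a real app id and api key (placeholders below).
import pandas as pd

df = pd.DataFrame([10, 20, 30, 40, 50, 60], columns=["Numbers"])

# orient="records" gives [{"Numbers": 10}, ...] instead of the default
# {"Numbers": {0: 10, ...}} shape produced by df.to_dict()
documents = df.to_dict(orient="records")

payload = {
    "dataSource": "name",              # placeholder cluster name
    "database": "db_name",
    "collection": "collection_name",
    "documents": documents,
}

# import requests
# response = requests.post(
#     "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/insertMany",
#     headers={"api-key": "<api-key>"},
#     json=payload,
# )
```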
| <python><mongodb><python-requests> | 2023-09-06 18:31:07 | 0 | 315 | Benjamin Brannon |
77,054,430 | 1,471,980 | How do I change the background color of a cell in Pandas? | <p>I have this DataFrame:</p>
<pre><code>Server Env. Model Percent_Utilized
server123 Prod Cisco. 50
server567. Prod Cisco. 80
serverabc. Prod IBM. 100
serverdwc. Prod IBM. 45
servercc. Prod Hitachi. 25
Avg 60
server123Uat Uat Cisco. 40
server567u Uat Cisco. 30
serverabcu Uat IBM. 80
serverdwcu Uat IBM. 45
serverccu Uat Hitachi 15
Avg 42
</code></pre>
<p>I need to change the background color of each cell in the Percent_Utilized column based on a condition: the background color needs to be red, yellow or green depending on the number. I also need to highlight the whole Avg row in red, yellow or green based on the Avg value under Percent_Utilized.</p>
<p>I tried this for changing the cell values under Percent_Utilized column:</p>
<pre><code>def color(val):
if val > 80:
color = 'red'
elif val > 50 & val < 80:
color = 'yellow'
elif val < 50:
color = 'green'
return 'background-color: %s' % color
df.style.applymap(color, subset=["Percent_Utilized"])
</code></pre>
<p>I get this error:</p>
<pre><code>UnboundLocalError: local variable 'color' referenced before assignment
</code></pre>
<p>I am not sure how to go about changing the color of the whole row that starts with "Avg". If the Avg value is greater than 80 then the whole line should be red, if it is between 50 and 80 it should be yellow, and if under 50 then the whole line should be green.</p>
<p>I also need to write this to an excel file with the same color background.</p>
| <python><pandas> | 2023-09-06 18:27:32 | 3 | 10,714 | user1471980 |
77,054,285 | 11,748,924 | convert pip requirements.txt to conda requirements | <p>For <code>pip</code>, required libraries are stored in <code>requirements.txt</code>.</p>
<p>But in the same workspace, I'm using a <code>conda</code> virtual environment.</p>
<p>Here is my <code>requirements.txt</code></p>
<pre><code>flask
werkzeug
Jinja2
watchdog
mongoengine
pyjwt
opencv-stubs
facenet-pytorch
Pillow
numpy
ultralytics
pylint
python-dotenv
</code></pre>
<p>What is the equivalent command for <code>pip install -r requirements.txt</code> with <code>conda</code>?</p>
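The closest direct equivalent is <code>conda install --file requirements.txt</code>, but that only resolves packages that exist on the configured conda channels; pip-only packages such as facenet-pytorch will fail there. A common pattern is an <code>environment.yml</code> that delegates those to pip. The channel, environment name, and the exact pip/conda split below are assumptions worth verifying against your channels:

```yaml
# environment.yml -- hypothetical split; check which packages your channels carry
name: myenv
channels:
  - conda-forge
dependencies:
  - python=3.10
  - flask
  - werkzeug
  - jinja2
  - numpy
  - pillow
  - pylint
  - python-dotenv
  - watchdog
  - mongoengine
  - pyjwt
  - pip
  - pip:
      - facenet-pytorch   # pip-only packages go under the pip: key
      - ultralytics
      - opencv-stubs
```

The environment is then created with <code>conda env create -f environment.yml</code> (or updated in place with <code>conda env update -f environment.yml</code>).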
| <python><pip><conda> | 2023-09-06 18:02:52 | 1 | 1,252 | Muhammad Ikhwan Perwira |
77,054,224 | 5,437,090 | Pandas vs Dask sort columns and index of string and number | <p><strong>Given</strong>:</p>
<p>Small sample pandas dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
import dask.dataframe as dd
df = pd.DataFrame({"usr": ["ip1", "ip7", "ip12", "ip4"], "colB": [1, 2, 3, 0], "ColA": [3, np.nan, 7, 1]}, dtype="float32").set_index("usr")
colB ColA
usr
ip1 1.0 3.0
ip7 2.0 NaN
ip12 3.0 7.0
ip4 0.0 1.0
</code></pre>
<p>I can sort this dataframe for both index and columns using <code>sort_index</code> and <code>reindex</code> as follows:</p>
<pre><code>df_s = df.sort_index(key=lambda x: ( x.to_series().str[2:].astype(int) )) # sort index
df_s = df_s.reindex(columns=sorted(df_s.columns)) # sort columns
ColA colB
usr
ip1 3.0 1.0
ip4 1.0 0.0
ip7 NaN 2.0
ip12 7.0 3.0
</code></pre>
<p><strong>Problem</strong>:</p>
<p>My real dataset is a large dataframe and I use Dask to benefit from parallel computing. Since <code>sort_index</code> does not exist in Dask, I try to use <code>sort_values</code> as follows:</p>
<pre><code>ddf = dd.from_pandas(df, npartitions=2)
ddf_s = ddf.map_partitions(lambda inp_ddf: inp_ddf.sort_values( ["usr"], ascending=True) ).compute()
</code></pre>
<p>But I get completely different results compared to my <code>df_s</code>. Neither index nor columns got sorted properly.</p>
<pre><code> ColA colB
usr
ip1 3.0 1.0
ip4 1.0 0.0
ip7 NaN 2.0
ip12 7.0 3.0
</code></pre>
<p>How do I have to sort index and columns in Dask?</p>
<p>Cheers,</p>
| <python><pandas><dask> | 2023-09-06 17:51:02 | 1 | 1,621 | farid |
77,054,156 | 13,350,341 | Render forms conditionally on checked radio button - Jinja2 templates - FastAPI | <p>I'm pretty new to both <em>Jinja templating</em> and <em>javascript</em>; I'm trying to build an application in FastAPI which renders some Jinja templates.</p>
<p>I have added a pair of radio buttons to my template and I would like one of the two to trigger the rendering of a couple of <em>forms</em>.</p>
<p>Following the answer to <a href="https://stackoverflow.com/questions/60952260/how-to-expose-a-form-when-a-radio-button-is-checked">How to expose a form when a radio button is checked?</a>, I've tried to proceed as such (I've defined the <code>.hidden</code> style in a proper <code>style.css</code> file):</p>
<pre><code>...
<div class="row mt-5 mb-3">
<div>
<form method="post">
<input type="radio" id="oneway-trip" name="trip-type" placeholder="One-way trip" checked/>
<label>One-way trip</label>
<br>
<input type="radio" id="round-trip" name="trip-type" placeholder="Round trip"/>
<label>Round trip</label>
</form>
</div>
</div>
...
<div class="row mt-5 mb-3">
<div>
<form class="hidden" method="post">
<input type="text" placeholder="from" name="dep-loc-cb" value="{{departure_location_comeback}}">
</form>
<form class="hidden" method="post">
<input type="text" placeholder="to" name="arr-loc-cb" value="{{arrival_location_comeback}}">
</form>
<script>
const form1 = document.querySelector("form[name='dep-loc-cb']");
const form2 = document.querySelector("form[name='arr-loc-cb']");
document.querySelector('#round-trip').addEventListener('change',(event)=>{
if (event.target.checked){
form1.classList.remove("hidden");
form2.classList.remove("hidden");
} else {
form1.classList.add("hidden");
form2.classList.add("hidden");
}
});
</script>
</div>
</div>
</div>
</code></pre>
<p>Namely, I would like the radio button identified by <code>id='round-trip'</code> to trigger the rendering of the two forms identified by names <code>name='dep-loc-cb'</code> and <code>name='arr-loc-cb'</code>.</p>
<p>However, nothing is rendered whenever I click on the second radio button. I've tried to play around with my code on <a href="https://developer.mozilla.org/en-US/play" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/play</a> and I've seen the following error being returned</p>
<blockquote>
<p>TypeError: Cannot read properties of null (reading 'classList')</p>
</blockquote>
<p>which would imply (I guess) the constant <code>form1</code> being null.</p>
<p>Can somebody help? Thank you!</p>
| <javascript><python><jinja2><fastapi> | 2023-09-06 17:39:01 | 1 | 3,157 | amiola |
77,054,031 | 595,305 | Sending data in both directions between Python and lengthy Rust module? | <p>Pleasantly surprised at the ease of integrating Rust modules into a calling Python app using PyO3.</p>
<p>But the next thing I want to understand is whether it's possible to exchange data between a potentially long-running Rust module and the Python code. A typical case would be where Python is handling the GUI (e.g. PyQt), and a user may want to terminate this long-running Rust-module midstream. So hopefully a graceful interrupt mechanism which the Rust module could detect. "Graceful" would tend to mean not using <code>SIGINT</code>, I assume...</p>
<p>But there's also a need for Rust to be able to send out signals in the other direction: a typical case would be a progress indicator to be updated in the GUI. But in another scenario actual significant objects might need to "spun off" from the Rust code and "caught" by the Python code.</p>
<p>Is any of this possible? I can think of an incredibly clunky mechanism: files. If the Python code wants to interrupt, it creates a particular file on disk. The lengthy Rust code is constantly checking to see whether such a file exists, and what the instruction is. Such a file mechanism could also be used for data flowing in the other direction.</p>
<p>But I'm hoping there's a better method than this. I've done some searching but not found anything very obvious. An intriguing comment said "Maybe use <a href="https://docs.rs/crossbeam-channel/latest/crossbeam_channel/" rel="nofollow noreferrer">crossbeam-channel</a> to allow communication between Python and Rust, enabling Python to send a signal/Interrupt to the Rust program?". I had a look at crossbeam-channel, but couldn't see anything addressing this sort of scenario.</p>
<p>Naturally I also looked at PyO3. There doesn't seem to be anything about communication between the respective threads/processes (NB I'm not yet clear whether a called Rust module is in fact running in a different <em>process</em> to the calling Python, but I assume so).</p>
<p><em><strong>Later</strong></em><br>
According to my experiments, it turns out that, unless you arrange things otherwise (e.g. by using Python <code>subprocess</code>, <code>multiprocessing</code>, etc.), a Rust-to-Python module compiled using <code>maturin develop</code> runs <strong>in the same process</strong> as the calling Python code.<br>
The significance of this for this question is that it may be possible to use inter-<strong>thread</strong> communication. zeromq, which uses sockets, seems adequate for me to be getting on with (and it is obviously a form of inter-thread communication), and I have no idea how such "superior" within-process inter-thread comms might work. Maybe a suitable expert might have an idea.</p>
<p>It is also intriguing to wonder how the GIL fits into this, when you have a PyO3 (maturin) module running in one thread, and some Python code running in another thread of the same process...</p>
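Since (per the update) the module runs in the same process, the pattern can be prototyped purely in Python: a shared cancel flag the worker polls, and a queue carrying progress the other way. In PyO3 the flag could be, for instance, an AtomicBool behind a pyclass — an assumption about design, not a PyO3 API. The Python function below stands in for the long-running Rust code:

```python
# Sketch of the same-process, two-direction pattern: Python sets a cancel flag
# ("graceful interrupt", no SIGINT); the long-running worker polls it and
# streams progress back over a queue (e.g. to drive a GUI progress bar).
import queue
import threading
import time

cancel = threading.Event()                     # Python -> worker: stop request
progress: "queue.Queue[int]" = queue.Queue()   # worker -> Python: progress ticks
result = {}

def long_running_worker(steps):
    """Stands in for the lengthy Rust function; polls the flag each iteration."""
    for i in range(steps):
        if cancel.is_set():
            return "interrupted"      # graceful, mid-stream exit
        progress.put(i)
        time.sleep(0.01)              # pretend to do real work
    return "done"

t = threading.Thread(target=lambda: result.update(status=long_running_worker(1000)))
t.start()
time.sleep(0.05)                      # a few progress ticks arrive meanwhile
cancel.set()                          # user pressed Cancel in the GUI
t.join()
```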
| <python><multithreading><rust><signals><pyo3> | 2023-09-06 17:12:26 | 1 | 16,076 | mike rodent |