QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,723,439 | 14,923,024 | Get a list of all Python released versions to date | <p>I need a reliable way to retrieve a list of all released versions of Python (only final releases) to date.</p>
<p>Right now I'm relying on the tags of the CPython GitHub repository (see my added answer), but this seems a bit hacky. Isn't there an official way (e.g. a web-API) to fetch all current Python releases?</p>
| <python><version><release> | 2023-07-19 16:32:09 | 1 | 457 | AAriam |
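The tag-based approach can be sketched as follows, assuming final-release tags follow CPython's `vX.Y.Z` scheme while pre-releases carry suffixes such as `rc1` or `b4`. The sample tag list below is illustrative; in practice it would come from the CPython repository's tags:

```python
import re

# Hypothetical sample of CPython tag names; in practice these would come
# from the repository's tag list rather than being hard-coded.
tags = ["v3.12.0", "v3.12.0rc1", "v3.11.4", "v2.7.18", "v3.8.0b4", "v3.9.1"]

# Final releases only: vX.Y.Z with nothing after the patch number,
# which excludes alpha/beta/rc suffixes.
final_release = re.compile(r"^v\d+\.\d+\.\d+$")
releases = [t for t in tags if final_release.match(t)]
print(releases)  # ['v3.12.0', 'v3.11.4', 'v2.7.18', 'v3.9.1']
```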
76,723,421 | 1,585,507 | Google pubsub client can't subscribe with Locust | <p>I'm trying to implement a <code>GCloudSubUser</code> for the Python library Locust. I'm implementing this custom User because I need the user to:</p>
<ul>
<li>send a message to a topic of Google Pub/sub</li>
<li>wait for my API's response, which will be sent to an output topic</li>
</ul>
<p>So far this is what I have:</p>
<pre class="lang-py prettyprint-override"><code>import os
from google.cloud import pubsub_v1
from locust import User
class GCloudSubUser(User):
abstract = True
def __init__(self, environment) -> None:
super().__init__(environment)
self.client = GCloudSubClient(environment)
class GCloudSubClient:
def __init__(self, environment):
self.environment = environment
def publish(self):
# ...some code to publish to a topic...
project_id = os.getenv("PUBSUB_PROJECT_ID")
subscription_id = os.getenv("PUBSUB_TOPIC_ID")
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)
future = subscriber.subscribe(subscription_path, callback=callback)
print("I reached this point")
def callback(message: pubsub_v1.subscriber.message.Message) -> None:
print(f"Received {message}.")
message.ack()
</code></pre>
<p>The problem I have is that it seems Locust prevents me from subscribing. This line blocks forever and I never see the log message:</p>
<pre class="lang-py prettyprint-override"><code>future = subscriber.subscribe(subscription_path, callback=callback)
</code></pre>
<p>I'm able to subscribe to the output topic if I don't use Locust, and I can see messages flowing in.
Would you know why this is happening?</p>
| <python><publish-subscribe><google-cloud-pubsub><locust> | 2023-07-19 16:29:00 | 0 | 5,739 | JPFrancoia |
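Locust monkey-patches the standard library with gevent, which is a plausible reason the subscriber's background gRPC threads never get scheduled and `subscribe()` appears to block. One common mitigation, sketched below with stand-in names and no Pub/Sub dependency, is to isolate blocking work on a real OS thread and hand results back over a queue:

```python
import queue
import threading

# Stand-in for the blocking streaming pull; in the real code this would be
# the subscriber callback feeding messages out. All names are illustrative.
results = queue.Queue()

def fake_streaming_pull(out):
    # Runs on a plain OS thread, outside any gevent-patched context
    for payload in ("msg-1", "msg-2"):
        out.put(payload)

worker = threading.Thread(target=fake_streaming_pull, args=(results,), daemon=True)
worker.start()
worker.join()

# The gevent side only ever touches the queue, never the blocking client
received = [results.get(timeout=1) for _ in range(2)]
print(received)  # ['msg-1', 'msg-2']
```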
76,723,387 | 22,212,435 | Why doesn't a function bound to an Entry widget's KeyPress return the current text? | <p>Sorry if this is a duplicate or obvious question. I simply wrote this:</p>
<pre><code>import tkinter as tk

root = tk.Tk()


def on_key_pressed(e: tk.Event):
    print(text_field.get())


text_field = tk.Entry()
text_field.pack()
text_field.bind("<KeyPress>", on_key_pressed)
</code></pre>
<p>When I press a key, it outputs the previously typed string, without the key that has just been pressed. I guess this is because the KeyPress event fires before the Entry has registered the new change.</p>
<p>Am I right, or what will be a better explanation of this?</p>
| <python><python-3.x><tkinter> | 2023-07-19 16:23:48 | 1 | 610 | Danya K |
76,723,365 | 3,371,250 | How to calculate the maximum occurrence in a rolling window? | <p>Say I have a table as follows:</p>
<pre><code>--------------------------------------------------
| Type | Incident ID | Date of incident|
--------------------------------------------------
| A | 1 | 2022-02-12 |
| A | 2 | 2022-02-14 |
| A | 3 | 2022-02-14 |
| A | 4 | 2022-02-14 |
| A | 5 | 2022-02-16 |
| A | 6 | 2022-02-17 |
| A | 7 | 2022-02-19 |
| A | 8 | 2022-02-19 |
| A | 7 | 2022-02-19 |
| A | 8 | 2022-02-19 |
... ... ...
| B | 1 | 2022-02-12 |
| B | 2 | 2022-02-12 |
| B | 3 | 2022-02-13 |
... ... ...
--------------------------------------------------
</code></pre>
<p>This is a list of different types of incidents. Every incident has a type, an id and a date, at which it occurred. This is just an example to help understand my goal.</p>
<p>What I want is - for a given range, e.g. 5 days - the maximum value that a rolling sum over these incidents would become:</p>
<p>So I would start with all elements that fall into the first 5 days and accumulate the occurrences: 6.</p>
<pre><code>2022-02-12 - 2022-02-17: 6
</code></pre>
<p>Rolling the window forward by one day removes all elements of the first day from the sum (here -1), and no element for the next day in line gets added. The next value would be 5.</p>
<pre><code>2022-02-13 - 2022-02-18: 5
</code></pre>
<p>6 > 5. So 6 is still the maximum occurrence of incidents in a 5-day window.</p>
<p>Continue for the complete time range.</p>
<p>This is not that hard to achieve, but how would I do this in a very efficient manner for millions of elements? In short: I want to create a moving window of a fixed date range (e.g. 5 days), count all occurrences for each window and report the maximum value that was reached for any window.</p>
<p>By the way I am using sqlalchemy, but I would also be interested in plain sql.</p>
<p>An appropriate test set would be this:</p>
<pre><code>test_data_small = {'Id': [1, 2, 3, 4, 5,
6, 7, 8, 9, 10,
0, 1, 2, 3],
'Type': ['A', 'A', 'A', 'A',
'A', 'A', 'A', 'A',
'A', 'A', 'B', 'B',
'B', 'B'],
'Date': [
'2022-02-12', '2022-02-14',
'2022-02-14', '2022-02-14',
'2022-02-16', '2022-02-17',
'2022-02-19', '2022-02-19',
'2022-02-19', '2022-02-19',
'2022-02-16', '2022-02-12',
'2022-02-12', '2022-02-13']
}
</code></pre>
<p>I am connecting to a table via sqlalchemy like so:</p>
<pre><code>incidents = select(
incidents.c.type,
incidents.c.id,
incidents.c.date
).subquery()
result = self.connection.execute(incidents).fetchall()
</code></pre>
<p>Is it even possible in plain sql? Maybe I should use pandas in order to apply a rolling window?</p>
| <python><sql><pandas><sqlalchemy><rolling-computation> | 2023-07-19 16:21:07 | 1 | 571 | Ipsider |
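One pandas approach: aggregate to daily counts per type, then take a time-based rolling sum. Note the question's example window 2022-02-12 to 2022-02-17 spans six calendar days, so `'6D'` is used below. This is a sketch against the question's test data (the `Id` column is not needed for the count), not a tuned solution for millions of rows:

```python
import pandas as pd

test_data_small = {
    "Type": ["A"] * 10 + ["B"] * 4,
    "Date": ["2022-02-12", "2022-02-14", "2022-02-14", "2022-02-14",
             "2022-02-16", "2022-02-17", "2022-02-19", "2022-02-19",
             "2022-02-19", "2022-02-19", "2022-02-16", "2022-02-12",
             "2022-02-12", "2022-02-13"],
}
df = pd.DataFrame(test_data_small)
df["Date"] = pd.to_datetime(df["Date"])

# Daily counts per type, then a time-based rolling sum per type;
# the max of that rolling sum is the busiest window for each type.
counts = df.groupby(["Type", "Date"]).size().rename("n").reset_index()
max_per_type = (
    counts.set_index("Date")
          .groupby("Type")["n"]
          .apply(lambda s: s.rolling("6D").sum().max())
)
print(max_per_type.to_dict())
```

For type A the busiest six-day span is 2022-02-14 through 2022-02-19 with 9 incidents; for type B it is 4.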
76,723,303 | 12,596,824 | Replacing string column in pandas based on multiple string conditions | <p>I have a dataframe with one column that contains a list of countries. I basically want to transform it to a new column that says "Inside US" if the row contains either United States or Puerto Rico, otherwise "Outside US".
How can I do this in pandas?</p>
<p><strong>Expected input:</strong></p>
<pre><code>countries
United States, Japan
China
Brazil, South Africa
Puerto Rico, Spain
United States, Vietnam
Madagascar
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code>countries
Inside US
Outside US
Outside US
Inside US
Inside US
Outside US
</code></pre>
<p><strong>My attempt:</strong>
The following code gives me a True/False Series which I'm struggling to use. Also, I'm not sure if this is the best way to start.</p>
<pre><code>df['countries'].str.contains('United States|Puerto Rico')
</code></pre>
| <python><pandas> | 2023-07-19 16:13:52 | 2 | 1,937 | Eisen |
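The boolean Series from `str.contains` is already usable: feeding it to `numpy.where` maps True/False onto the two labels. A sketch against the expected input:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"countries": [
    "United States, Japan", "China", "Brazil, South Africa",
    "Puerto Rico, Spain", "United States, Vietnam", "Madagascar",
]})

# The True/False mask from str.contains selects between the two labels
mask = df["countries"].str.contains("United States|Puerto Rico")
df["countries"] = np.where(mask, "Inside US", "Outside US")
print(df["countries"].tolist())
# ['Inside US', 'Outside US', 'Outside US', 'Inside US', 'Inside US', 'Outside US']
```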
76,723,222 | 14,967,088 | Get id of inserted object after "ON CONFLICT DO NOTHING" with SQLite on SQLAlchemy | <p>Using SQLAlchemy I have the following models (<code>App <- MarketItem <- MarketItemPriceHistory</code>):</p>
<pre><code>from sqlalchemy import (
    Column,
    Integer,
    Float,
    String,
    ForeignKey,
    UniqueConstraint
)
from sqlalchemy.orm import relationship, declarative_base

Base = declarative_base()


class App(Base):
    __tablename__ = "App"

    appid = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

    def __repr__(self):
        return f"App(appid={self.appid}, name='{self.name}')"


class MarketItem(Base):
    __tablename__ = "MarketItem"

    id = Column(Integer, primary_key=True, autoincrement=True)
    appid = Column(Integer, ForeignKey("App.appid"), nullable=False)
    markethashname = Column(String, nullable=False)

    app = relationship("App")

    __table_args__ = (
        UniqueConstraint("appid", "markethashname", name="unique_marketitem"),
    )

    def __repr__(self):
        return f"MarketItem(id={self.id}, appid={self.appid}, markethashname='{self.markethashname}')"


class MarketItemPriceHistory(Base):
    __tablename__ = 'MarketItemPriceHistory'

    id = Column(Integer, primary_key=True, autoincrement=True)
    marketitemid = Column(Integer, ForeignKey('MarketItem.id'), nullable=False)
    price = Column(Float, nullable=False)
    timestamp = Column(Integer, nullable=False)

    market_item = relationship("MarketItem")
</code></pre>
<p>And the following code:</p>
<pre><code>from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Insert
from sqlalchemy.dialects.sqlite.dml import OnConflictDoNothing

import models


def get_session(uri: str):
    engine = create_engine(uri, echo=True)

    # By default, SQLite doesn't throw an error when a foreign key references
    # a non-existent id.
    with engine.connect() as connection:
        connection.execute(text('PRAGMA foreign_keys = ON'))

    # Create all models
    models.Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    return Session()


@compiles(Insert)
def suffix_inserts(insert, compiler, **kw):
    # This will basically add "ON CONFLICT DO NOTHING" at the end of an insert
    # statement
    insert._post_values_clause = OnConflictDoNothing()
    return compiler.visit_insert(insert, **kw)


session = get_session("sqlite:///__mydatabase.db")

app = models.App(appid=753, name="Steam")
session.add(app)
session.flush()

market_item = models.MarketItem(appid=app.appid, markethashname="HASH NAME 0")
session.add(market_item)
session.flush()

market_item_price_history = models.MarketItemPriceHistory(marketitemid=market_item.id, price=10.5, timestamp=101010)
session.add(market_item_price_history)

session.commit()
</code></pre>
<p>After creating each object I add them (<code>session.add(obj)</code>) and then flush (<code>session.flush()</code>) so that objects that have a foreign key can reference them (<code>models.MarketItem(appid=app.appid, ...)</code>)</p>
<p>This works fine the first time you run the code, but the second time you get the following error when trying to create a <code>MarketItemPriceHistory</code> object:</p>
<pre><code>sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) FOREIGN KEY constraint failed
[SQL: INSERT INTO "MarketItemPriceHistory" (marketitemid, price, timestamp) VALUES (?, ?, ?) ON CONFLICT DO
NOTHING]
[parameters: (0, 10.5, 101010)]
</code></pre>
<p>I have the following questions:</p>
<ol>
<li>Why does the error occur when trying to create a <code>MarketItemPriceHistory</code> object and not a <code>MarketItem</code> object?</li>
<li>Why is it that in the error message (<code>[parameters: (<id>=0, <price>=10.5, <timestamp>=101010)]</code>) <code>id</code> is set to 0? SQLite starts table ids from 1, not 0 so it makes no sense to me as for why it's 0.</li>
<li>How can I fix this so that it works as expected? (i.e. it fetches the actual id of the <code>MarketItem</code> object when trying to create the <code>MarketItemPriceHistory</code> object). Ideally I'd like to <em>not</em> have to check if the row exists in the database to then be able to create the object</li>
</ol>
| <python><sqlite><sqlalchemy> | 2023-07-19 16:03:12 | 0 | 741 | qwerty_url |
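The `0` in the error parameters is consistent with the ORM never receiving a real primary key: after a do-nothing insert no row is written, so the flushed object falls back to a default instead of the existing row's id. A plain-`sqlite3` sketch of one way to always obtain the real id, using `INSERT OR IGNORE` (SQLite's spelling of do-nothing-on-conflict) followed by a `SELECT`; translating this into a SQLAlchemy query after the flush is left as an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE MarketItem ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " appid INTEGER NOT NULL,"
    " markethashname TEXT NOT NULL,"
    " UNIQUE (appid, markethashname))"
)

def get_or_create(conn, appid, name):
    # The follow-up SELECT returns the id whether or not the insert happened,
    # so the stale lastrowid of an ignored insert is never relied upon.
    conn.execute(
        "INSERT OR IGNORE INTO MarketItem (appid, markethashname) VALUES (?, ?)",
        (appid, name),
    )
    return conn.execute(
        "SELECT id FROM MarketItem WHERE appid = ? AND markethashname = ?",
        (appid, name),
    ).fetchone()[0]

first = get_or_create(conn, 753, "HASH NAME 0")
second = get_or_create(conn, 753, "HASH NAME 0")  # conflict: same id returned
print(first, second)  # 1 1
```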
76,723,033 | 6,464,041 | How to pass DAG params to a KubernetesPodOperator as an argument? | <p>Whenever the DAG is triggered with a custom JSON, those values should be used in <code>{{ params }}</code>. I'd like to send this dictionary with all its keys and sub-dicts to a task which will process those values and check whether they are correct.</p>
<p>I tried sending it untransformed, using <code>json.loads</code>, and replacing the whitespace, but nothing seems to work: <code>argparse</code> can't recognize the argument, even though it works locally. Somehow Airflow doesn't respect the changes I tried to make to this value.</p>
<p>How should I change the code in the DAG to be able to send the <code>{{ params }}</code> through argparse?</p>
<p>This is how the two scripts look using only minimal code (also without imports):</p>
<p>airflow dag:</p>
<pre><code>with DAG(
    "pipeline",
    render_template_as_native_obj=True,  # so that {{ params }} is a dict
) as dag:
    prevalidation = KubernetesPodOperator(
        task_id="pre-validation",
        name="pre-validation",
        cmds=["python3"],
        arguments=[
            "-m",
            "prevalidation",
            "--config",
            "{{ params }}",  # the params dict
        ],
    )
</code></pre>
<p>prevalidation.py:</p>
<pre><code>if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--config")
args, unknown = parser.parse_known_args()
# won't get here when using from DAG, locally it works
</code></pre>
| <python><kubernetes><airflow> | 2023-07-19 15:40:11 | 0 | 1,358 | GΓ‘bor Fekete |
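A sketch of the receiving side, assuming the DAG renders the params dict as a JSON string, for example via Jinja's `tojson` filter (`"{{ params | tojson }}"`), rather than the bare `{{ params }}`, whose repr-style rendering with single quotes is not valid JSON:

```python
import json
from argparse import ArgumentParser

# Let argparse decode the JSON in one step
parser = ArgumentParser()
parser.add_argument("--config", type=json.loads)

# Stand-in for what the rendered DAG argument could look like once the
# dict is serialised as JSON; the keys below are illustrative.
args = parser.parse_args(["--config", '{"threshold": 5, "nested": {"a": 1}}'])
print(args.config)  # {'threshold': 5, 'nested': {'a': 1}}
```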
76,723,027 | 4,835,204 | How to draw 2D Gaussian blob on an OpenCV image? | <p>There are various available examples with a formula for a 2D Gaussian Blob and drawing it via Pyplot, for example:<br>
<a href="https://stackoverflow.com/questions/7687679/how-to-generate-2d-gaussian-with-python">How to generate 2D gaussian with Python?</a><br> and<br> <a href="https://stackoverflow.com/questions/28342968/how-to-plot-a-2d-gaussian-with-different-sigma">How to plot a 2d gaussian with different sigma?</a><br><br>
I'm attempting to change this over to OpenCV (in Python).<br><br>
Some requirements are:<br></p>
<ul>
<li>ability to specify different height and width for the blob, i.e. ability to make the blob an ellipse (not always a circle)</li>
<li>ability to specify the center point of the blob in the original image</li>
<li>the value at the exact center of the blob should be 255, and the values should work their way down to 0 towards the edge of the blob</li>
<li>rotation is not necessary</li>
</ul>
<p>The final image (depending on settings of course) should look something like this:
<a href="https://i.sstatic.net/KBpCv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KBpCv.png" alt="enter image description here" /></a></p>
<p>In the context of CenterNet (which is my use case for this) the result (image with a Gaussian blob on it) is called a "Heatmap" so that's the term I'm going to use in code for the image.</p>
<p>Here is what I have so far:</p>
<pre><code>import numpy as np
import cv2


def main():
    # suppress numpy printing in scientific notation
    np.set_printoptions(suppress=True)

    hm_width = 1600
    hm_height = 1000

    # create blank heatmap (OpenCV image)
    heatmap = np.zeros((hm_height, hm_width), dtype=np.uint8)

    blob_height = 100
    blob_width = 300

    blob_center_x = 1000
    blob_center_y = 400

    # Create a 2D Gaussian blob
    x, y = np.meshgrid(np.linspace(0, 1, blob_width), np.linspace(0, 1, blob_height))

    print('\n' + 'x: ')
    print(x.dtype)
    print(x.shape)
    print('min = ' + str(np.min(x)) + ' (s/b 0.0)')
    print('max = ' + str(np.max(x)) + ' (s/b 1.0)')
    print(x)

    print('\n' + 'y: ')
    print(y.dtype)
    print(y.shape)
    print('min = ' + str(np.min(y)) + ' (s/b 0.0)')
    print('max = ' + str(np.max(y)) + ' (s/b 1.0)')
    print(y)

    # gaussian_blob = 1.0 / (2.0 * np.pi * blob_width * blob_height) * np.exp(-((x - blob_center_x)**2.0 / (2. * blob_width**2.0) + (y - blob_center_y)**2.0 / (2. * blob_height**2.0)))

    gaussian_x_term = np.power(x - blob_center_x, 2.0) / np.power(blob_width, 2.0)
    gaussian_y_term = np.power(y - blob_center_y, 2.0) / np.power(blob_height, 2.0)
    gaussian_blob = np.exp(-1.0 * (gaussian_x_term + gaussian_y_term))

    print('\n' + 'gaussian_blob before: ')
    print(gaussian_blob.dtype)
    print(gaussian_blob.shape)
    print('min = ' + str(np.min(gaussian_blob)) + ' (s/b 0.0)')
    print('max = ' + str(np.max(gaussian_blob)) + ' (s/b 1.0)')
    print(gaussian_blob)

    # scale up the gaussian blob from the 0.0 to 1.0 range to the 0 to 255 range
    gaussian_blob = gaussian_blob * 255.0
    gaussian_blob = np.clip(gaussian_blob, a_min=0.0, a_max=255.0)
    gaussian_blob = np.rint(gaussian_blob)
    gaussian_blob = np.clip(gaussian_blob, a_min=0, a_max=255)
    gaussian_blob = gaussian_blob.astype(np.uint8)

    print('\n' + 'gaussian_blob after: ')
    print(gaussian_blob.dtype)
    print(gaussian_blob.shape)
    print('min = ' + str(np.min(gaussian_blob)) + ' (s/b 0)')
    print('max = ' + str(np.max(gaussian_blob)) + ' (s/b 255)')
    print(gaussian_blob)

    # show the blob via OpenCV
    cv2.imshow('gaussian blob', gaussian_blob)

    # add the gaussian blob image to the heatmap
    blob_left_edge_loc = round(blob_center_x - (0.5 * blob_width))
    blob_right_edge_loc = round(blob_center_x + (0.5 * blob_width))
    blob_top_edge_loc = round(blob_center_y - (0.5 * blob_height))
    blob_bottom_edge_loc = round(blob_center_y + (0.5 * blob_height))

    heatmap[blob_top_edge_loc:blob_bottom_edge_loc, blob_left_edge_loc:blob_right_edge_loc] = gaussian_blob

    # show the heatmap
    cv2.imshow('heatmap', heatmap)
    cv2.waitKey()
# end function


if __name__ == '__main__':
    main()
</code></pre>
<p>Currently both images come out almost blank, and based on the output:</p>
<pre><code>x:
float64
(100, 300)
min = 0.0 (s/b 0.0)
max = 1.0 (s/b 1.0)
[[0. 0.00334448 0.00668896 ... 0.99331104 0.99665552 1. ]
[0. 0.00334448 0.00668896 ... 0.99331104 0.99665552 1. ]
[0. 0.00334448 0.00668896 ... 0.99331104 0.99665552 1. ]
...
[0. 0.00334448 0.00668896 ... 0.99331104 0.99665552 1. ]
[0. 0.00334448 0.00668896 ... 0.99331104 0.99665552 1. ]
[0. 0.00334448 0.00668896 ... 0.99331104 0.99665552 1. ]]
y:
float64
(100, 300)
min = 0.0 (s/b 0.0)
max = 1.0 (s/b 1.0)
[[0. 0. 0. ... 0. 0. 0. ]
[0.01010101 0.01010101 0.01010101 ... 0.01010101 0.01010101 0.01010101]
[0.02020202 0.02020202 0.02020202 ... 0.02020202 0.02020202 0.02020202]
...
[0.97979798 0.97979798 0.97979798 ... 0.97979798 0.97979798 0.97979798]
[0.98989899 0.98989899 0.98989899 ... 0.98989899 0.98989899 0.98989899]
[1. 1. 1. ... 1. 1. 1. ]]
gaussian_blob before:
float64
(100, 300)
min = 6.880118208869318e-12 (s/b 0.0)
max = 7.240508138966562e-12 (s/b 1.0)
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
gaussian_blob after:
uint8
(100, 300)
min = 0 (s/b 0)
max = 0 (s/b 255)
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
</code></pre>
<p>it seems I'm not calculating the Gaussian blob quite right, but I'm not sure how to resolve this. Suggestions?</p>
| <python><opencv><image-processing><gaussian> | 2023-07-19 15:39:34 | 2 | 3,840 | cdahms |
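The blank output is consistent with mixing coordinate systems: `blob_center_x = 1000` is in heatmap coordinates, while the meshgrid runs from 0 to 1, so the exponent is enormously negative and everything rounds to 0. A sketch that computes the blob in its own local pixel coordinates instead; the sigma choices are illustrative (one sixth of each dimension), and the `cv2.imshow` calls are omitted so only NumPy is needed:

```python
import numpy as np

blob_height, blob_width = 100, 300

# Local pixel coordinates for the patch, centred on the patch itself
y, x = np.mgrid[0:blob_height, 0:blob_width].astype(float)
cx, cy = (blob_width - 1) / 2.0, (blob_height - 1) / 2.0

# Illustrative spread: sigma = one sixth of each dimension keeps the tails
# near 0 at the patch edges while the centre stays at 255
sigma_x, sigma_y = blob_width / 6.0, blob_height / 6.0

blob = np.exp(-((x - cx) ** 2 / (2 * sigma_x ** 2)
                + (y - cy) ** 2 / (2 * sigma_y ** 2)))
blob = np.rint(blob * 255.0).astype(np.uint8)

print(blob.shape, blob.max(), blob[0, 0])  # (100, 300) 255 0
# The patch can then be pasted into the heatmap exactly as in the question:
# heatmap[top:bottom, left:right] = blob
```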
76,723,017 | 317,460 | What is faster in Python: generating a UUID, using an LRU cache, or retrieving from a dict? | <p>I need to generate a few hundred UUIDs and then I need to reuse each UUID a few thousand times.</p>
<p>What will give me better performance?</p>
<p>Option 1: Generate the uuid every time from the input?
Option 2: Use Python's lru_cache(maxsize=None) around the method generating the uuid?
Option 3: Store the uuid in a dictionary and retrieve it (primitive cache)?</p>
| <python><uuid><python-lru-cache> | 2023-07-19 15:37:18 | 2 | 3,627 | RaamEE |
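Rather than guessing, the three options can be timed directly; a sketch below (timings are machine-dependent, so none are asserted). Both caches return a stable UUID per key and skip regeneration, and on typical machines `lru_cache` and a plain dict tend to be in the same ballpark, both much cheaper than calling `uuid4()` every time:

```python
import functools
import timeit
import uuid

@functools.lru_cache(maxsize=None)
def cached_uuid(key: str) -> uuid.UUID:
    # Option 2: lru_cache memoises one uuid4() per distinct key
    return uuid.uuid4()

plain_cache = {}

def dict_uuid(key: str) -> uuid.UUID:
    # Option 3: a primitive dict cache
    if key not in plain_cache:
        plain_cache[key] = uuid.uuid4()
    return plain_cache[key]

# Timings are machine-dependent; printed, not asserted
print("uuid4 each time:", timeit.timeit(uuid.uuid4, number=100_000))
print("lru_cache:      ", timeit.timeit(lambda: cached_uuid("a"), number=100_000))
print("dict cache:     ", timeit.timeit(lambda: dict_uuid("a"), number=100_000))
```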
76,722,996 | 4,865,723 | Use pathlib.Path as search pattern? | <p>I construct a path object with a wildcard in it. Can I use this somehow as a search pattern for <code>pathlib.Path.glob()</code>?</p>
<pre><code>pattern = pathlib.Path('/') / 'home' / 'user' / '*' / 'folder'
result = pathlib.Path.glob(pattern)
</code></pre>
| <python><pathlib> | 2023-07-19 15:35:28 | 1 | 12,450 | buhtz |
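`glob()` is an instance method on a concrete base path, and the wildcard belongs in the pattern string rather than in the `Path` itself. A sketch on a temporary tree standing in for `/home/user`:

```python
import pathlib
import tempfile

# A temporary tree standing in for /home/user
base = pathlib.Path(tempfile.mkdtemp())
for user in ("user1", "user2"):
    (base / user / "folder").mkdir(parents=True)

# The wildcard lives in the pattern string passed to the instance method
result = sorted(base.glob("*/folder"))
print([p.relative_to(base).as_posix() for p in result])
# ['user1/folder', 'user2/folder']
```

For the question's path this would be `pathlib.Path('/home/user').glob('*/folder')`.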
76,722,814 | 11,922,765 | python dataframe check a non-existing column or row | <p>I am supposed to get certain data from the user, and sometimes I don't. That is when the code breaks:</p>
<p>code:</p>
<pre><code>def check_user_data(meta_df = pd.DataFrame(),
                    params = ['param1','param2']):
    if meta_df['alpha'] in params:
        print('Alpha is available')
    if meta_df['beta'] in params:
        print('Beta is available')


user_df = pd.Series(index=['alpha'],data=['alpha1'])
check_user_data(meta_df = user_df,
                params = ['alpha1','beta1'])
</code></pre>
<p>Present output:</p>
<pre><code>Alpha is available
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
KeyError: 'beta'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[47], line 5, in check_user_data(meta_df, params)
----> 5 if meta_df['beta'] in params:
6 print('Beta is available')
KeyError: 'beta'
</code></pre>
| <python><pandas><dataframe> | 2023-07-19 15:16:22 | 2 | 4,702 | Mainland |
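Checking membership in the index (or in `df.columns` for a DataFrame) before subscripting avoids the `KeyError`. A sketch reworking the function to guard each lookup; the list return value is illustrative:

```python
import pandas as pd

def check_user_data(meta_df: pd.Series, params: list) -> list:
    # Guard each lookup: `in meta_df.index` avoids the KeyError for labels
    # the user never supplied (for a DataFrame, use `key in df.columns`).
    available = []
    for key in ("alpha", "beta"):
        if key in meta_df.index and meta_df[key] in params:
            available.append(key)
    return available

user_df = pd.Series(index=["alpha"], data=["alpha1"])
print(check_user_data(user_df, ["alpha1", "beta1"]))  # ['alpha']
```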
76,722,794 | 696,206 | Define variable parameter type hints for Callables in Python | <p>I have a class that may contain a function that will be called elsewhere. Depending on what's calling it, it could have a variable number of arguments. One owner might be calling it with the arguments <code>(event, int, str)</code> and another might be calling it with <code>(event, str, bool, dict)</code>. There are validations that occur later to ensure that the signature matches what is needed by the owner. For type hinting purposes, I need to ensure that the signature being passed matches anything starting with our <code>Event</code> object, with anything after being fine and dandy. As a result, functions like <code>def foo(event: Event, a: int, b: bool)</code> and <code>def (event: ClickEvent, c: dict, *args, **kwargs) -> typing.Coroutine[Any, Any, str]</code> are both absolutely valid.</p>
<p>Given the following example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import *
from dataclasses import dataclass
PARAMS = ParamSpec("PARAMS")
@dataclass
class Event:
field1: int
field2: bool
name: str
def get_name(self) -> str:
return self.name
class SomeCaller:
...
class Element:
...
class ClickEvent(Event):
def __init__(self, field1: int, field2: int, target: str):
super().__init__(field1=field1, field2=field2, name="click")
self.__target = target
@property
def target(self) -> str:
return self.__target
# The type hint in question
HANDLER = Callable[
[
Event,
PARAMS
], Union[Any, Coroutine]
]
def control(event: Event, *args, **kwargs) -> bool:
pass
def false_control(event: ClickEvent, *args, **kwargs) -> bool:
pass
async def async_function(event: Event, arg1: int, arg2: int, *args, **kwargs):
return 9
def function(event: ClickEvent, arg1: int, arg2: int, *args, **kwargs):
return 7
def other_function(event: Event, caller: SomeCaller, element: Element):
return 8
class EventHandlerWhatsit:
def __init__(self, handler: HANDLER):
self.__handler = handler
control_value = EventHandlerWhatsit(control)
false_control_value = EventHandlerWhatsit(false_control)
async_function_value = EventHandlerWhatsit(async_function)
function_value = EventHandlerWhatsit(function)
other_function_value = EventHandlerWhatsit(other_function)
def main():
print("This works")
if __name__ == "__main__":
main()
</code></pre>
<p>Typing hinting warnings appear on the declaration of <code>false_control_value</code>, <code>async_function_value</code>, <code>function_value</code>, and <code>other_function_value</code>, all with warnings like <code>Expected type '(Event, ParamSpec("PARAMS")) -> Coroutine | Any' (matched generic type '(Event, ParamSpec("PARAMS")) -> Coroutine | Any'), got '(event: Event, arg1: int, arg2: int, args: tuple[Any, ...], kwargs: dict[str, Any]) -> Any' instead </code>. The declaration of <code>control_value</code> presents no issue. The assignment of <code>self.__handler = handler</code> within the initializer in <code>EventHandlerWhatsit</code> also shows a warning of <code>Expected type '(Event, ParamSpec("PARAMS")) -> Coroutine | Any', got '(Event, ParamSpec("PARAMS")) -> Coroutine | Any' instead </code>, which I find odd.</p>
<p>All it needs to indicate is "Something that may be called as long as it starts with a parameter that is a subclass of <code>Event</code>". The names don't matter. I've played with the definition of <code>HANDLER</code> in all sorts of ways, such as defining <code>*args</code> and <code>**kwargs</code> as <code>Tuple[Any, ...]</code> and <code>Dict[str, Any]</code> (with and without <code>Optional</code>), with the extra parameters, and still end up with the same sorts of warnings.</p>
<p>I'm stuck in python 3.8 and I'm editing in PyCharm, which shows the warnings.</p>
<p>Any ideas?</p>
<p><strong>EDIT</strong>:</p>
<p><a href="https://stackoverflow.com/users/19770795/daniil-fajnberg">@Daniil Fajnberg</a> and <a href="https://stackoverflow.com/users/14401160/suterliakov-supports-strike">@SUTerliakov</a> provided perfect answers in the comments:</p>
<pre class="lang-py prettyprint-override"><code>HANDLER = Callable[
Concatenate[
Event,
PARAMS
], Union[Any, Coroutine]
]
</code></pre>
<p><code>Concatenate</code> allows the annotation to match with slightly different values than what is in the input definition.</p>
<p>Take the following invalid code:</p>
<pre class="lang-py prettyprint-override"><code>def test_inner(arg: Callable[[str, int, P], Any]):
pass
def test_input(i: str, j: int, q: str = None, *args, **kwargs):
pass
def test_outer():
test_inner(test_input)
</code></pre>
<p>The linter will trigger a warning because the existence of the optional <code>q</code> parameter does not fit the expectation of the <code>arg</code> parameter in <code>test_inner</code>. Changing the definition of <code>arg</code> in <code>test_inner</code> to look like <code>arg: Callable[Concatenate[str, int, P], Any]</code>, however, and the linter is just fine.</p>
<p><strong>A word of warning:</strong> <code>ParamSpec</code> and <code>Concatenate</code> were introduced in Python 3.10. If you need to use an older version due to environment constraints, use the <code>typing_extensions</code> package to provide that functionality.</p>
| <python><pycharm><python-typing> | 2023-07-19 15:14:20 | 1 | 715 | Tubbs |
76,722,706 | 2,128,799 | Prefetching or Selecting a single model from a ManyToMany field for use in a Serializer | <p>We have a model with a ManyToMany through table like such</p>
<pre><code>class Person(models.Model):
    name = models.CharField(max_length=50)


class Group(models.Model):
    name = models.CharField(max_length=128)
    members = models.ManyToManyField(
        Person,
        through="Membership",
        through_fields=("group", "person"),
    )


class Membership(models.Model):
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
</code></pre>
<p>On our <code>Group</code> view we would like to create a single-model access to the <code>Membership</code> model based on <code>request.user</code> so that our serializer can access fields of the pivot table like</p>
<pre><code>class GroupSerializer(serializers.ModelSerializer):
    user_name = serializers.CharField(source='memberships.user.name')
</code></pre>
<p>I have tried a query such as</p>
<pre><code>Group.objects.filter(user=request.user).annotate(
request_user_membership=Subquery(Membership.objects.filter(group_id=OuterRef('id'), user_id=request.user.id))
)
</code></pre>
<p>so that I might be able to reference the single object like</p>
<pre><code>class GroupSerializer(serializers.ModelSerializer):
    user_name = serializers.CharField(source='request_user_membership.user.name')
</code></pre>
<p>however it does not seem that you can use subqueries like this.</p>
<p>This seems like a common problem so I was hoping you all might have some ideas.</p>
<p>Any help is greatly appreciated.</p>
| <python><django><django-rest-framework><django-orm> | 2023-07-19 15:04:35 | 1 | 1,294 | Dash Winterson |
76,722,680 | 14,044,486 | What is the best way to combine conda with standard python packaging tools (e.g. requirements.txt or pyproject.toml files)? | <p>I work in scientific computing, and I am effectively forced to use conda to install certain other maintained packages to do my job. If I want to work on my own package, I need a way for it to play nice with both the conda dependency solver and pip. I would want to simply <code>conda install</code> the local package and use the conda dependency solver so that its compatible with the other software. However, I would also want to be able to otherwise <code>pip install</code> the package and/or upload it to PyPI.</p>
<p>Is there a way to develop a standardized python package (using, e.g., <code>pyproject.toml</code> and/or <code>requirements.txt</code> files) compatible with a conda environment? I have searched and haven't found a clear prescription on how to do so.</p>
<p>For conda, one could also locally specify the required dependencies <a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file" rel="noreferrer">in a <code>*.yml</code> file</a>, but this option is not compatible with installation via pip. One would have to maintain dependencies in <em>both</em> a <code>*.yml</code> file as well as a <code>requirements.txt</code> file. This duplication results in manual maintenance and is error-prone.</p>
<p>Note that the <a href="https://docs.conda.io/projects/conda-build/en/main/resources/commands/conda-develop.html" rel="noreferrer">conda develop</a> command is officially supported by anaconda and on the surface looks like it could be used to address this problem; however, it is <a href="https://github.com/conda/conda-build/issues/4251" rel="noreferrer">effectively deprecated</a> and as of this writing doesn't seem to be supported on python 3.11.</p>
| <python><pip><conda><pyproject.toml> | 2023-07-19 15:02:03 | 1 | 593 | Drphoton |
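One widely used pattern, offered as a sketch rather than an official prescription: keep all Python dependencies in <code>pyproject.toml</code> and let <code>environment.yml</code> list only the conda-specific pieces, delegating the package itself to pip so nothing is duplicated. Package names below are placeholders:

```yaml
# environment.yml -- sketch; package names are placeholders
name: myenv
channels:
  - conda-forge
dependencies:
  - python=3.11
  - some-conda-only-package   # deps that must come from conda
  - pip
  - pip:
      - -e .   # installs the local package; its deps come from pyproject.toml
```

With this layout, <code>conda env create -f environment.yml</code> builds the conda environment and then hands the local package (and its pyproject.toml dependencies) to pip, while a plain <code>pip install .</code> still works outside conda.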
76,722,678 | 4,551,325 | Multi-indexed Pandas Dataframe division | <p>Consider a multi-indexed dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
df1 = pd.DataFrame(np.random.rand(3, 3), index=['a', 'b', 'c'], columns=['col1', 'col2', 'col3'])
df2 = pd.DataFrame(np.random.rand(3, 3), index=['a', 'b', 'c'], columns=['col1', 'col2', 'col3'])
df3 = pd.DataFrame(np.random.rand(3, 3), index=['a', 'b', 'c'], columns=['col1', 'col2', 'col3'])
df = pd.concat([df1, df2, df3], axis=0, keys=['A', 'B', 'C'])
</code></pre>
<p>Which gives <code>df</code>:</p>
<pre><code> col1 col2 col3
A a 0.893752 0.554021 0.492867
b 0.319270 0.263366 0.542281
c 0.082265 0.635637 0.796405
B a 0.954748 0.684624 0.488293
b 0.485414 0.966693 0.211348
c 0.411648 0.989666 0.028412
C a 0.701327 0.025172 0.320882
b 0.073527 0.060885 0.111406
c 0.169269 0.627686 0.438393
</code></pre>
<p>(your numbers will differ)</p>
<p>How do I:</p>
<ul>
<li>divide row (A, b) and (A, c) by (A, a)</li>
<li>divide row (B, b) and (B, c) by (B, a)</li>
<li>divide row (C, b) and (C, c) by (C, a)</li>
</ul>
<p>...in one call?</p>
<p>My attempt:</p>
<pre><code>idx = pd.IndexSlice
ratio_list = [df.loc[idx[x,:], :].div(df.loc[idx[x,'a'], :]) for x in ['A', 'B', 'C']]
ratio = pd.concat(ratio_list, axis=0)
</code></pre>
<p>Which gives <code>ratio</code>:</p>
<pre><code> col1 col2 col3
A a 1.000000 1.000000 1.000000
b 0.357225 0.475371 1.100259
c 0.092044 1.147315 1.615864
B a 1.000000 1.000000 1.000000
b 0.508422 1.412005 0.432830
c 0.431159 1.445560 0.058186
C a 1.000000 1.000000 1.000000
b 0.104840 2.418784 0.347188
c 0.241355 24.936324 1.366214
</code></pre>
<hr />
<p>See below answers from @Smordy and @ouroboros1. Both are excellent. <code>groupby-transform</code> is more concise, but <code>np.repeat</code> is definitely more performant when the size of the dataframe is big.</p>
<pre><code>nrow = 100  # iterate through this
df = pd.DataFrame(np.random.rand(nrow*3, 3),
                  columns=['col1', 'col2', 'col3'],
                  index=pd.MultiIndex.from_product([[*'ABC'], ['row' + str(ii) for ii in range(0, nrow)]]))
# pandas `groupby-transform` from @Smordy
%timeit df.div(df.groupby(level=0).transform('first'))
# `numpy.repeat` from @ouroboros1
%timeit df.div(np.repeat(df.loc[(slice(None), ['row0']), :].to_numpy(), nrow, axis=0))
</code></pre>
<p>Results:</p>
<p><a href="https://i.sstatic.net/Di3cz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Di3cz.png" alt="enter image description here" /></a></p>
<p>Here it seems time-factor plateaus around 3x. If we increase the number of columns, the time-factor will go up again.</p>
| <python><pandas><multi-index> | 2023-07-19 15:01:41 | 3 | 1,755 | data-monkey |
76,722,564 | 10,938,315 | Batch process Kafka messages | <p>I want to write my Kafka messages to JSONL files, each of which should contain a fixed number of lines (let's say 2). My producer currently writes 3 messages at a time, so I should get two JSONL files: one with 2 events and a second one with 1 event.</p>
<p>My producer only runs for 3 events (it's just an example project) while my consumer should run as long as it finds messages.</p>
<p>Right now I lose the third event and my consumer only writes if I interrupt the program manually because it can't reach <code>if records</code> and <code>self.running = False</code>. How can I fix the while loop and batch logic?</p>
<pre><code>import os
import json
from datetime import datetime

from kafka import KafkaConsumer


class WikiConsumer:
    def __init__(self, topic: str, server: str, group_id: str, output_dir: str) -> None:
        self.consumer = KafkaConsumer(
            topic, bootstrap_servers=server, group_id=group_id
        )
        self.running = True
        self.output_dir = output_dir

    def consume_messages(self, batch_size: int):
        records = []
        while self.running:
            for message in self.consumer:
                if message.value != b'""':
                    json_str = message.value.decode("utf-8")
                    json_obj = json.loads(json.loads(json_str))
                    records.append(json_obj)
                    print(f"records before write: {records}")
                    if len(records) >= batch_size:
                        self.write_to_jsonl(records)
                        records = []
                        print(f"records after write: {records}")
        if records:
            self.write_to_jsonl(records)
            print("remainder written")
        self.running = False

    def write_to_jsonl(self, records):
        now = datetime.now()
        timestamp = now.strftime("%Y-%m-%d-%H-%M-%S")
        filename = f"data_{timestamp}.jsonl"
        with open(
            os.path.join(self.output_dir, filename), "a"
        ) as f:
            for record in records:
                json.dump(record, f)
                f.write("\n")

    def run(self) -> None:
        self.consume_messages(2)


if __name__ == "__main__":
    output_directory = "my_dir/output/"
    consumer = WikiConsumer(
        "my_project", "localhost:9092", "project-group", output_directory
    )
    consumer.run()
</code></pre>
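<p>As a side note, the batch/flush logic can be reasoned about in isolation from Kafka. A minimal stdlib-only sketch (my own) of a generator that yields full batches and then the remainder once the source is exhausted — with kafka-python, a <em>finite</em> source per iteration can be obtained via <code>consumer.poll(timeout_ms=...)</code> instead of the never-ending <code>for message in self.consumer</code> iterator:</p>

```python
from typing import Iterable, Iterator, List


def batched(source: Iterable, batch_size: int) -> Iterator[List]:
    """Yield lists of up to batch_size items; the last batch may be smaller."""
    batch: List = []
    for item in source:
        batch.append(item)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # the remainder is reachable because the source is finite
        yield batch


print(list(batched([1, 2, 3], 2)))  # → [[1, 2], [3]]
```

<p>The key point: the "remainder" branch only runs when iteration actually ends, which never happens with the blocking consumer iterator.</p>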
| <python><apache-kafka><kafka-python> | 2023-07-19 14:47:12 | 1 | 881 | Omega |
76,722,560 | 2,110,805 | Concatenate flattened layers with Keras | <p>I'm having a hard time getting a model to fit. I'm not sure if it is an input problem or a model problem. It looks like the concatenate layer is at fault, but after flattening all layers, concatenation should be fine, no? My guess is that it's about the <code>sec_input / sec_flatten</code> layers, since it works if I remove them.</p>
<pre><code>tf.__version__: '2.12.0'
</code></pre>
<p>1 - I'm getting this error on <code>model.fit(sg_train, epochs=100)</code>:</p>
<pre><code>Exception has occurred: InvalidArgumentError
Graph execution error:
(...)
Node: 'gradient_tape/conv2D_3_inputs/concat/ConcatOffset'
All dimensions except 1 must match. Input 2 has shape [32 1] and doesn't match input 0 with shape [1 936].
[[{{node gradient_tape/conv2D_3_inputs/concat/ConcatOffset}}]] [Op:__inference_train_function_2217]
</code></pre>
<p>2 - This is my model:</p>
<pre><code>nat_input = tf.keras.layers.Input(
shape=(window, nat_col_len, spectrum_layers), name="nat_input"
)
nat_conv_1 = tf.keras.layers.Conv2D(
conv_filters, kernel_1, activation="relu", padding=padding, name="nat_conv_1"
)(nat_input)
nat_batch_norm_1 = tf.keras.layers.BatchNormalization()(nat_conv_1)
nat_pooling_1 = tf.keras.layers.MaxPooling2D((2, 2), name="nat_pooling_1")(
nat_batch_norm_1
)
nat_conv_2 = tf.keras.layers.Conv2D(
conv_filters, kernel_1, activation="relu", padding=padding, name="nat_conv_2"
)(nat_pooling_1)
nat_batch_norm_2 = tf.keras.layers.BatchNormalization()(nat_conv_2)
nat_pooling_2 = tf.keras.layers.MaxPooling2D((2, 2), name="nat_pooling_2")(
nat_conv_2
)
nat_flatten = tf.keras.layers.Flatten(name="nat_flatten")(nat_pooling_2)
act_input = tf.keras.layers.Input(
shape=(window, act_col_len, spectrum_layers), name="act_input"
)
act_conv_1 = tf.keras.layers.Conv2D(
conv_filters, kernel_1, activation="relu", padding=padding, name="act_conv_1"
)(act_input)
act_batch_norm_1 = tf.keras.layers.BatchNormalization()(act_conv_1)
act_pooling_1 = tf.keras.layers.MaxPooling2D((2, 2), name="act_pooling_1")(
act_batch_norm_1
)
act_conv_2 = tf.keras.layers.Conv2D(
conv_filters, kernel_1, activation="relu", padding=padding, name="act_conv_2"
)(act_pooling_1)
act_batch_norm_2 = tf.keras.layers.BatchNormalization()(act_conv_2)
act_pooling_2 = tf.keras.layers.MaxPooling2D((2, 2), name="act_pooling_2")(
act_conv_2
)
act_flatten = tf.keras.layers.Flatten(name="act_flatten")(act_pooling_2)
sec_input = tf.keras.layers.Input(shape=(window, 1), name="sec_input")
sec_flatten = tf.keras.layers.Flatten(name="sec_flatten")(sec_input)
concat = tf.keras.layers.concatenate(
[nat_flatten, act_flatten, sec_flatten], name="concat"
)
dense_1 = tf.keras.layers.Dense(128, activation="relu", name="dense_1")(concat)
dropout_1 = tf.keras.layers.Dropout(0.2, name="dropout_1")(dense_1)
ouput = tf.keras.layers.Dense(1, name="output")(dropout_1)
model = tf.keras.Model(
inputs=[nat_input, act_input, sec_input],
outputs=ouput,
name="conv2D_3_inputs",
)
model.compile(
optimizer="adam",
loss=tf.keras.losses.MeanSquaredError(),
metrics=["mae", tf.keras.metrics.RootMeanSquaredError()],
)
</code></pre>
<p><a href="https://i.sstatic.net/rgzmC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rgzmC.png" alt="enter image description here" /></a></p>
<p>3 - And this is my inputs shapes (one sample), from a generator:</p>
<pre><code># X_train_nat_i.shape: (1, 32, 470, 1)
# X_train_act_i.shape: (1, 32, 480, 1)
# X_train_sec_i.shape: (32,)
# y_train_i.shape: (1,)
return (X_train_nat_i, X_train_act_i, X_train_sec_i), y_train_i
</code></pre>
<p>It's certainly a stupid mistake. I hope you will spot it. Any hint is appreciated.</p>
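<p>One hedged observation (an assumption on my part, not verified against your generator): Keras treats the leading axis as the batch axis, so <code>X_train_sec_i</code> with shape <code>(32,)</code> looks like a batch of 32 samples while the other inputs have batch size 1. Reshaping it to <code>(1, 32, 1)</code> would match the declared <code>Input(shape=(window, 1))</code>:</p>

```python
import numpy as np

window = 32
X_train_sec_i = np.random.rand(window)               # shape (32,) — read as batch of 32
X_train_sec_i = X_train_sec_i.reshape(1, window, 1)  # one sample of shape (32, 1)
print(X_train_sec_i.shape)  # (1, 32, 1)
```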
| <python><tensorflow><keras> | 2023-07-19 14:46:58 | 0 | 14,653 | Cyrille |
76,722,536 | 1,668,622 | With Python-textual (package) how do I linearly switch between different 'screens'? | <p>With <a href="https://github.com/Textualize/textual" rel="noreferrer">textual</a> I'd like to build a simple program which presents me with different options I can choose from using <code>OptionList</code>, but one by one, e.g.</p>
<p>First "screen":</p>
<pre><code>what do you want to buy (Car/Bike)?
+---------+
| Car |
| > Bike |
+---------+
</code></pre>
<blockquote>
<p>bike</p>
</blockquote>
<p>And after I pressed/clicked on "Bike" I'd like to see the second 'screen' (with potentially different widgets):</p>
<pre><code>electric (yes/no)?
+---------+
| Yes |
| > No |
+---------+
</code></pre>
<blockquote>
<p>No</p>
</blockquote>
<p>The following code shows me the first list of options but I have no idea how to proceed:</p>
<pre class="lang-py prettyprint-override"><code>from textual.app import App, ComposeResult
from textual.widgets import Footer, Header, OptionList, Static
from textual import events, on
class SelectType(Static):
def compose(self) -> ComposeResult:
yield OptionList(
"Car",
"Bike",
)
@on(OptionList.OptionSelected)
def selected(self, *args):
return None # What to do here?
class MainProgram(App[None]):
def compose(self) -> ComposeResult:
yield Header()
yield Footer()
yield SelectType()
MainProgram().run()
</code></pre>
<p>What to do now? I crawled the tutorial, guides, and examples, but it looks like they all show how to build <em>one</em> set of widgets, and I didn't find a way to make a transition between one input screen and another.</p>
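<p>Textual does have a <code>Screen</code> class plus <code>App.push_screen</code>/<code>pop_screen</code> for exactly this kind of linear flow (see its "Screens" guide). Since a TUI can't run here, the sketch below only models the flow as plain data — each <code>OptionSelected</code> handler would look up the next screen in a table like this (all names are made up):</p>

```python
from typing import Optional

# hypothetical flow table: screen name -> (question, options)
FLOW = {
    "what": ("what do you want to buy?", ["Car", "Bike"]),
    "electric": ("electric?", ["Yes", "No"]),
}
ORDER = ["what", "electric"]  # the linear sequence of screens


def next_screen(current: str) -> Optional[str]:
    """Return the screen to show after `current`, or None when the flow is done."""
    i = ORDER.index(current)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None


print(next_screen("what"))      # electric
print(next_screen("electric"))  # None
```

<p>In real Textual code, the <code>selected()</code> handler would call <code>self.app.push_screen(...)</code> with the next screen instance, and the final screen would hand back the collected answers.</p>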
| <python><rich><python-textual> | 2023-07-19 14:43:33 | 2 | 9,958 | frans |
76,722,474 | 8,251,318 | Requests Post object Python | <p>I'm trying to make a request to a given endpoint:</p>
<pre class="lang-py prettyprint-override"><code>requests.post(url = accessTokenUrl, json = body, auth = HTTPBasicAuth(clientId, clientSecret))
</code></pre>
<p>and keep getting a 400 back from the server.</p>
<p>I want to be able to print the object that requests.post(params) builds <em>BEFORE</em> it sends; it would look something like this when printed:</p>
<pre><code>Request Headers
Content-Type: application/x-www-form-urlencoded
Authorization: Basic Y134123123441324dfas1YzI4Nzg5NTE0
Accept: */*
Host: test-lm.lb.com
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Content-Length: 46
Request Body
grant_type: "client_credentials"
scope: "test-admin"
</code></pre>
<p>Is this possible?</p>
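<p>Yes — <code>requests</code> can build the request without sending it via <code>requests.Request(...).prepare()</code>; the resulting <code>PreparedRequest</code> exposes the final headers and body. A sketch (the URL and credentials are placeholders; nothing is sent over the network until a <code>Session</code> sends the prepared object):</p>

```python
import requests
from requests.auth import HTTPBasicAuth

req = requests.Request(
    "POST",
    "https://test-lm.example.com/token",   # placeholder URL
    data={"grant_type": "client_credentials", "scope": "test-admin"},
    auth=HTTPBasicAuth("client_id", "client_secret"),
)
prepared = req.prepare()  # builds headers/body, sends nothing

print(prepared.method, prepared.url)
for name, value in prepared.headers.items():
    print(f"{name}: {value}")
print(prepared.body)  # grant_type=client_credentials&scope=test-admin
```

<p>Incidentally, the dump you expect shows <code>Content-Type: application/x-www-form-urlencoded</code>, while <code>json=</code> sends <code>application/json</code> — comparing the prepared request against the expected dump (e.g. using <code>data=</code> as above) may well explain the 400.</p>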
| <python><http><python-requests> | 2023-07-19 14:36:24 | 0 | 877 | Matthew |
76,722,288 | 5,692,005 | How to compute the inner integral with quad? | <p>I am reading <a href="https://www.scirp.org/journal/paperinformation.aspx?paperid=96020" rel="nofollow noreferrer">the paper by Floris</a>β and try to compute the numerical value of the normalization constant <code>C_I</code> from eq. (20).</p>
<p>From the equation</p>
<p><a href="https://i.sstatic.net/L64KS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L64KS.png" alt="2]" /></a></p>
<p>and property of transition probability density function (PDF)
<a href="https://i.sstatic.net/ERYRK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ERYRK.png" alt="enter image description here" /></a>, the normalization constant is:</p>
<p><a href="https://i.sstatic.net/UvyqB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UvyqB.png" alt="enter image description here" /></a></p>
<p>In my case, <code>m_1(x)=-(a*x+b*x**3)</code>, <code>m_2(x)= c+ sigma**2*x</code>,
<code>a=1.0, b=0.5, c = 1.0, sigma = 1.0</code>.</p>
<p>I have used the <a href="https://docs.scipy.org/doc/scipy/tutorial/integrate.html" rel="nofollow noreferrer">quad()</a> function in my attempt for the inner integral:</p>
<pre><code>import numpy as np
exp = np.exp
inf = np.inf
log = np.log
sqrt = np.sqrt
from scipy.integrate import quad
a = 1.0; b = 0.5; c = 1.0; sigma = 1.0
tspan = np.linspace(0.0, 2.5, 5001)
x0 = 0.1
def m_1(x, t):
    return -(a*x + b*x**3)

def m_2(x, t):
    return (c + sigma**2 * x)

def inner_integrand(x0, x, t):
    return 2 * m_1(x,t) / m_2(x,t)**2
inner = quad(inner_integrand, -inf, inf, args=(x0, tspan))[0]
print('inner_integrand=', inner)
# inner_integrand= -1.693896469019819
</code></pre>
<p>and I have obtained the result <code>inner_integrand= 0.33223140495839776</code>.</p>
<p>When I have used <a href="https://www.wolframalpha.com/input?i=integrate%202*%28-%28x%2Bx%5E3%2F2%29%29%2F%281%2Bx%29%5E2%20dx%20from%20x%3D-inf%20to%20inf" rel="nofollow noreferrer">Wolfram Alpha</a>, the result is that the integral does not converge.</p>
<p><strong>Question.</strong> How to compute the inner integral?</p>
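<p>A hedged note: in the paper's Eq. (20) the inner integral runs from <code>x0</code> to <code>x</code> (it sits inside the outer integrand), not over the whole real line. Over <code>(-inf, inf)</code> it indeed diverges — the integrand grows like <code>-2b·s</code> for large <code>|s|</code> and is singular at <code>s = -1</code> where <code>m_2</code> vanishes (for <code>c = sigma = 1</code>). A sketch with finite limits, assuming that reading of the formula:</p>

```python
import numpy as np
from scipy.integrate import quad

a, b, c, sigma = 1.0, 0.5, 1.0, 1.0
x0 = 0.1


def inner_integrand(s):
    # 2*m_1(s)/m_2(s)**2 with the question's drift and diffusion
    return 2 * (-(a * s + b * s**3)) / (c + sigma**2 * s) ** 2


def inner_integral(x):
    # integrate from x0 up to x, as in the exponent of the stationary density
    val, _ = quad(inner_integrand, x0, x)
    return val


print(inner_integral(0.5))
```

<p>The outer integral for <code>C_I</code> would then integrate <code>exp(inner_integral(x)) / m_2(x)</code> over the state space on which the density is supported.</p>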
| <python><integral><quad> | 2023-07-19 14:15:59 | 1 | 1,126 | Nick |
76,722,172 | 386,861 | Dataclasses: trying to understand them in python | <p>I'm learning classes in python. Also trying to learn dataclasses.</p>
<pre><code>from dataclasses import dataclass

class Person():
    name: str
    age: int
    height: float
    email: str

person = Person('Joe', 25, 1.85, 'joe@dataquest.io')
print(person.name)
</code></pre>
<p>Error:</p>
<pre><code>TypeError: Person() takes no arguments
</code></pre>
<p>Unsure why.</p>
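<p>For reference, a likely explanation (a hedged guess based on the import that is never used): the <code>@dataclass</code> decorator is what generates <code>__init__</code> from the annotations. Without it, <code>Person</code> is a plain class whose annotations are just class-level type hints, so it takes no constructor arguments:</p>

```python
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    age: int
    height: float
    email: str


person = Person("Joe", 25, 1.85, "joe@dataquest.io")
print(person.name)  # Joe
```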
| <python><python-dataclasses> | 2023-07-19 14:03:59 | 1 | 7,882 | elksie5000 |
76,722,077 | 19,163,024 | Why doesn't langchain ConversationalRetrievalChain remember the chat history, even though I added it to the chat_history parameter? | <p>Studying AI and LangChain, I was trying to make a conversational chatbot. So far so good: I managed to feed it custom texts and it answers questions based on the text, but for some reason it doesn't remember the previous answers. From <a href="https://stackoverflow.com/questions/76264205/in-langchain-why-conversationalretrievalchain-not-remembering-the-chat-history">this question</a>, it appears that <code>ConversationalRetrievalChain</code> needs to take the chat_history parameter to retain memories, but even though I supply it, it still can't remember anything. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>history = []

def ask(question: str):
    chat = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), memory=memory)
    answer = chat({"question": question, "chat_history": history})["answer"]
    history.append((question, answer))
    print(answer)
    return answer

ask("Who is Bound by this Agreement?")  # Answers correctly
ask("What did I ask in previous question?")  # Doesn't remember
</code></pre>
<p>I have verified that the chat history is indeed recorded into the <code>history</code> list. So why doesn't the model remember what came before?
<a href="https://i.sstatic.net/3VNlk.png" rel="noreferrer"><img src="https://i.sstatic.net/3VNlk.png" alt="enter image description here" /></a></p>
| <python><artificial-intelligence><chatbot><langchain> | 2023-07-19 13:54:31 | 2 | 432 | ΠΠ»Π°Π΄ΠΈΡΠ»Π°Π² ΠΠΎΡΠΎΠ»Ρ |
76,722,046 | 10,863,083 | How to color each 3 consecutive rows of dataframe with the same color and write output to excel file | <p>After getting a dataframe, I want to color the background of every three consecutive rows with the same color, using the xlsxwriter library or any other library.
I tried the following code, but unfortunately it gave me a single color (yellow):</p>
<pre><code>def highlight_cells():
    return ['background-color: yellow']
</code></pre>
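<p>A sketch using the pandas <code>Styler</code> (assumption: alternating two fill colors every 3 rows is the goal; the hex codes are arbitrary). <code>Styler.apply</code> with <code>axis=1</code> receives one row at a time, so the band can be derived from the row's position:</p>

```python
import pandas as pd


def band_color(row, colors=("#FFF2CC", "#D9EAD3")):
    # same color for each block of 3 consecutive rows (assumes a default RangeIndex)
    color = colors[(row.name // 3) % len(colors)]
    return [f"background-color: {color}"] * len(row)


df = pd.DataFrame({"a": range(7), "b": range(7)})
# styling + export (needs jinja2 for Styler and openpyxl/xlsxwriter for the writer):
# df.style.apply(band_color, axis=1).to_excel("banded.xlsx", engine="xlsxwriter")
print(band_color(df.loc[3]))  # ['background-color: #D9EAD3', 'background-color: #D9EAD3']
```

<p>The export line is commented out because it needs optional dependencies; the banding logic itself is just the integer division on the row position.</p>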
| <python><pandas><background-color><xlsxwriter> | 2023-07-19 13:51:12 | 1 | 417 | baddy |
76,721,981 | 559,827 | Is it possible for __new__ to modify the arguments that __init__ sees? | <p>Suppose I have a class whose <code>__init__</code> argument has signature <code>(self, *args, foo=42, **kwargs)</code></p>
<p>Is it possible for the class's <code>__new__</code> method to modify the value of the <code>foo</code> keyword argument that the <code>__init__</code> method eventually sees?</p>
<p>I thought that something like this would do it:</p>
<pre><code>import sys

class Wibble(object):
    def __new__(cls, *args, foo=42, **kwargs):
        if kwargs.pop('use_floats_only', False):
            print('Warning: using floats only!', file=sys.stderr)
            return cls.__new__(cls, *args, foo=float(foo), **kwargs)
        return super().__new__(cls)

    def __init__(self, *args, foo=42, **kwargs):
        self.foo = foo
</code></pre>
<p>but:</p>
<pre><code>>>> print(type(Wibble(use_floats_only=True).foo))
Warning: using floats only!
<class 'int'>
</code></pre>
<p>In other words, even though <code>__new__</code>'s <code>if</code> block got executed, the <code>__init__</code> method for the returned instance still got the default value for the <code>foo</code> keyword argument.</p>
<p>This leads me to suspect that, no matter what <code>__new__</code> may do, it cannot affect the value of <code>foo</code> that <code>__init__</code> sees. Is this suspicion correct?</p>
<p>The only purpose of this question is to gain a better understanding of the Python object model.</p>
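<p>The suspicion is essentially right: <code>type.__call__</code> invokes <code>cls.__new__(cls, *args, **kwargs)</code> and then <code>cls.__init__(instance, *args, **kwargs)</code> with the <em>same</em> arguments, so nothing <code>__new__</code> does to its local <code>kwargs</code> reaches <code>__init__</code>. The documented interception point is a metaclass overriding <code>__call__</code> — a sketch:</p>

```python
class FloatFooMeta(type):
    def __call__(cls, *args, **kwargs):
        # rewrite the arguments once, before BOTH __new__ and __init__ see them
        if kwargs.pop("use_floats_only", False):
            kwargs["foo"] = float(kwargs.get("foo", 42))
        return super().__call__(*args, **kwargs)


class Wibble(metaclass=FloatFooMeta):
    def __init__(self, *args, foo=42, **kwargs):
        self.foo = foo


print(type(Wibble(use_floats_only=True).foo))  # <class 'float'>
print(type(Wibble().foo))                      # <class 'int'>
```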
| <python> | 2023-07-19 13:43:21 | 1 | 35,691 | kjo |
76,721,955 | 16,371,459 | Scrape image URLs from a dynamic scrollable web page whose images disappear when it is scrolled elsewhere | <p>I have to scrape a dynamic scrollable website and extract image URLs. The problem is that when the page scrolls to a new location, the images from the other locations vanish from the DOM. How can I extract all the image URLs?</p>
<p>I am using the following Python Selenium code, but it doesn't extract all the URLs from the page:</p>
<pre><code>wd.get('https://dynamicWebPage')
scroll_pause_time = 2
screen_height = wd.execute_script("return window.innerHeight")
last_scroll_height = wd.execute_script("return document.body.scrollHeight")
items = []

while True:
    wd.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(scroll_pause_time)
    new_scroll_height = wd.execute_script("return document.body.scrollHeight")
    print(new_scroll_height, last_scroll_height)
    if new_scroll_height == last_scroll_height:
        break
    last_scroll_height = new_scroll_height

elements = wd.find_elements(By.XPATH, "//a")
items.extend([element.text for element in elements])
</code></pre>
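<p>Since the page virtualizes its content, elements that scrolled out of view are gone by the time the loop ends — so the usual fix is to harvest <em>inside</em> the loop after every scroll and deduplicate, e.g. into a set. The Selenium calls can't run here, so this sketch only shows the accumulation pattern, with a stand-in list playing the role of <code>find_elements</code> results:</p>

```python
# stand-in for "what the DOM exposes after each scroll" (hypothetical data)
visible_per_scroll = [
    ["img1.jpg", "img2.jpg"],
    ["img2.jpg", "img3.jpg"],   # img1 already unloaded by the virtual scroller
    ["img3.jpg", "img4.jpg"],
]

seen = set()
ordered = []
for visible in visible_per_scroll:  # in Selenium: one iteration per scroll step
    for url in visible:             # e.g. wd.find_elements(By.TAG_NAME, "img") -> get_attribute("src")
        if url not in seen:
            seen.add(url)
            ordered.append(url)

print(ordered)  # ['img1.jpg', 'img2.jpg', 'img3.jpg', 'img4.jpg']
```

<p>In the real script, the harvest step replaces the single <code>find_elements</code> call that currently runs only after the loop.</p>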
| <python><selenium-webdriver><web-scraping><dynamic> | 2023-07-19 13:41:13 | 1 | 318 | Basir Mahmood |
76,721,864 | 1,581,090 | How to fix this AttributeError related to telnetlib3 in python on windows? | <p>On Windows 10 I want to write an async-free version of <code>telnetlib3</code> with Python 3.10.11. I created the code attached at the end of my question, which seems to work fine for creating a new telnet connection and reading data. However, when I use the <code>write</code> method of the <code>Telnet3</code> class I get the following error:</p>
<pre><code>Traceback (most recent call last):
session.write("test")
File "C:\Users\Test\example_telnet3_sync.py", line 66, in write
response = loop.run_until_complete(_write(self.writer, command))
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "C:\Users\Test\example_telnet3_sync.py", line 41, in _write
writer.write(command + "\r\n")
File "C:\Users\Test\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telnetlib3\stream_writer.py", line 2614, in write
self._write(self.encode(string, errors))
File "C:\Users\Test\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telnetlib3\stream_writer.py", line 1735, in _write
self._transport.write(buf)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\asyncio\proactor_events.py", line 365, in write
self._loop_writing(data=bytes(data))
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\asyncio\proactor_events.py", line 401, in _loop_writing
self._write_fut = self._loop._proactor.send(self._sock, data)
AttributeError: 'NoneType' object has no attribute 'send'
</code></pre>
<p>I am <strong>not</strong> familiar with <code>telnetlib3</code> or <code>asyncio</code>. Is there a way to fix this error?</p>
<p>Here is the complete code:</p>
<pre><code>import asyncio

import telnetlib3


async def _open(host, port):
    reader, writer = await telnetlib3.open_connection(host, port)
    data = await asyncio.wait_for(reader.read(4096), timeout=2)
    return reader, writer, data


async def _read(reader, expected="myprompt >>"):
    reply = ""
    while True:
        data = await reader.read(4096)
        if data:
            reply += data
            if expected in reply:
                break
    return reply


async def _read_timeout(reader, timeout=2):
    try:
        return await asyncio.wait_for(_read(reader), timeout=timeout)
    except (asyncio.exceptions.TimeoutError, RuntimeError):
        print("TimeoutError while reading from telnet!")
        return None


async def _write(writer, command):
    writer.write(command + "\r\n")


class Telnet3:
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.reader = None
        self.writer = None
        self.message = ""

    def connect(self):
        loop = asyncio.new_event_loop()
        self.reader, self.writer, self.message = loop.run_until_complete(_open(self.host, self.port))
        loop.close()

    def read(self):
        loop = asyncio.new_event_loop()
        response = loop.run_until_complete(_read_timeout(self.reader))
        loop.close()
        return response

    def write(self, command):
        loop = asyncio.new_event_loop()
        loop.run_until_complete(_write(self.writer, command))
        loop.close()

    def write_read(self, command):
        self.write(command)
        return self.read()


if __name__ == "__main__":
    session = Telnet3("100.200.10.10", 9000)
    session.connect()
    session.write("test")
</code></pre>
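<p>A hedged reading of the traceback: <code>loop.close()</code> inside <code>connect()</code> is the likely culprit. The transport/protocol created by <code>open_connection</code> stays bound to that (now closed) loop, so later writes go through a dead proactor (<code>self._loop._proactor</code> is <code>None</code>). The usual fix pattern is one persistent loop per wrapper object, closed only when done — demonstrated here with plain asyncio, no telnet required:</p>

```python
import asyncio


class SyncWrapper:
    """Run coroutines synchronously on ONE long-lived event loop."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()  # created once, reused for every call

    def run(self, coro):
        return self.loop.run_until_complete(coro)

    def close(self):
        self.loop.close()  # only at the very end of the session


async def add(a, b):
    await asyncio.sleep(0)
    return a + b


w = SyncWrapper()
print(w.run(add(1, 2)))  # 3
print(w.run(add(3, 4)))  # 7 — same loop, still open
w.close()
```

<p>Applied to <code>Telnet3</code>, that would mean storing the loop in <code>__init__</code>, using it in <code>connect</code>/<code>read</code>/<code>write</code>, and adding an explicit <code>close()</code> method.</p>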
| <python><windows-10><python-asyncio><telnet><telnetlib3> | 2023-07-19 13:28:52 | 1 | 45,023 | Alex |
76,721,643 | 761,620 | Plot with varying yrange without a log scale or an inset | <p>My question was closed yesterday because of a misunderstanding.</p>
<p>I have the figure below and want to zoom in around y=-.1 to y=.4 so that n=2..4 can be clearly seen. How can I "zoom in" on that range?
<a href="https://i.sstatic.net/1t18o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1t18o.png" alt="enter image description here" /></a></p>
<p>If n=1 is not present, the other curves look like this:</p>
<p><a href="https://i.sstatic.net/z5xid.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z5xid.png" alt="enter image description here" /></a>
I do <strong>not</strong> want to use an inset; rather, I want to have something like this:</p>
<p><a href="https://i.sstatic.net/2YsPR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2YsPR.png" alt="enter image description here" /></a></p>
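<p>What the last picture shows is usually called a "broken axis": two stacked subplots sharing x, each with its own y-range, with the spines between them hidden (this is the approach in matplotlib's "Broken Axis" gallery example). A sketch, with made-up curves standing in for n=1..4:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 14, 200)
curves = [10 ** (2 - n) * np.exp(-x / 3) for n in range(1, 5)]  # stand-in data

fig, (ax_top, ax_bot) = plt.subplots(
    2, 1, sharex=True, gridspec_kw={"height_ratios": [1, 2], "hspace": 0.05}
)
for y in curves:
    ax_top.plot(x, y)
    ax_bot.plot(x, y)

ax_top.set_ylim(0.4, 10)      # n=1 lives up here
ax_bot.set_ylim(-0.1, 0.4)    # zoomed band where n=2..4 are visible
ax_top.spines["bottom"].set_visible(False)
ax_bot.spines["top"].set_visible(False)
ax_top.tick_params(labelbottom=False, bottom=False)

fig.savefig("broken_axis.png")
```

<p>The gallery example additionally draws small diagonal "cut" marks between the two axes; the y-limits above are the only part specific to this question.</p>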
| <python><matplotlib> | 2023-07-19 13:06:14 | 1 | 1,419 | ziulfer |
76,721,557 | 3,423,825 | How to modify a file with sed? | <p>I need to modify a file in a Docker image. I would like to change line 2357 of a Python module <code>mod.py</code> with sed. How can I do that?</p>
<pre><code> raise ExchangeError(self.id + ' ' + errorText + '(#' + errorCode + ')')
</code></pre>
<p>The desired output would be:</p>
<pre><code> raise ExchangeError(self.id + '(#' + errorCode + ')')
</code></pre>
<p>or</p>
<pre><code> raise ExchangeError(self.id)
</code></pre>
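<p>A sketch with GNU sed, addressing the line by number and deleting just the unwanted segment. It is demonstrated on a 2-line stand-in file; against the real image you would use the address <code>2357</code> and the actual path to <code>mod.py</code> (e.g. in a Dockerfile: <code>RUN sed -i "2357s/..." /path/to/mod.py</code>):</p>

```shell
# build a stand-in file; line 2 plays the role of line 2357 in mod.py
cat > /tmp/mod_demo.py <<'EOF'
# placeholder line 1
        raise ExchangeError(self.id + ' ' + errorText + '(#' + errorCode + ')')
EOF

# on the addressed line only, drop the " + ' ' + errorText" segment
sed -i "2s/ + ' ' + errorText//" /tmp/mod_demo.py

cat /tmp/mod_demo.py
```

<p>For the shorter variant <code>raise ExchangeError(self.id)</code>, extend the pattern to remove everything between <code>self.id</code> and the closing parentheses. It's worth checking the target first with <code>sed -n '2357p' mod.py</code>.</p>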
| <python><docker><ubuntu><sed> | 2023-07-19 12:58:00 | 4 | 1,948 | Florent |
76,721,325 | 10,419,999 | regex to extract non constant characters from a string | <p>How do I extract the non-constant parts of a string using regex? The string is</p>
<blockquote>
<p>archives/latest/pipelines/<strong>my-page</strong>/pages/<strong>content/page1/</strong>,</p>
</blockquote>
<p>Here I have to <strong>remove</strong> '<em>archives/latest/pipelines</em>' and '<em>pages/</em>', which repeat in multiple strings.</p>
<p>I have extracted <strong>my-page</strong> using the regex below:</p>
<pre><code>(?<=archives\/latest\/pipelines\/)[^\/]*(?=\/pages\/)
</code></pre>
<p>but I am not sure how to extract <strong>/my-page/content/page1/</strong>.</p>
<pre><code>i/p : archives/latest/pipelines/my-page/pages/content/page1/
o/p : /my-page/content/page1/
</code></pre>
<p>please help</p>
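<p>A sketch using <code>re.sub</code> with capture groups, so the two constant pieces are dropped and everything else is kept (the anchors assume the string always has this exact prefix layout):</p>

```python
import re

s = "archives/latest/pipelines/my-page/pages/content/page1/"
out = re.sub(r"^archives/latest/pipelines(/[^/]+)/pages(/.*)$", r"\1\2", s)
print(out)  # /my-page/content/page1/
```

<p>Group 1 captures <code>/my-page</code>, group 2 captures <code>/content/page1/</code>, and the replacement glues them back together.</p>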
| <python><regex> | 2023-07-19 12:31:57 | 1 | 4,912 | Shijith |
76,721,320 | 5,969,463 | Reading Excel File With Pandas and Retaining Original Row Number | <p>I have the following code that reads an Excel file into a Pandas structure:</p>
<pre><code>try:
    book = xlrd.open_workbook(file_contents=filecontent)
    file = pd.read_excel(book)
    ...
rows = file.values
</code></pre>
<p>Later, I do something like this</p>
<pre><code>rows = sorted(
    rows,
    key=lambda vs: '' if not nan_to_none(vs[columns_map['order_name']]) else vs[columns_map['order_name']]
)
</code></pre>
<p>When processing <code>rows</code>, I would like to throw errors that indicate the line numbers where they occurred in the original Excel file. My thinking was that I should retain those line numbers somehow and later attach them to the errors. How can I do that, or is there another way to achieve this?</p>
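<p>One hedged approach: a fresh <code>pd.read_excel</code> result has a <code>RangeIndex</code> aligned with the sheet order, so the original Excel row can be recorded as a column <em>before</em> any sorting (assuming one header row, data row <code>i</code> of the frame came from Excel row <code>i + 2</code>). Names like <code>order_name</code> are stand-ins for the question's columns:</p>

```python
import pandas as pd

df = pd.DataFrame({"order_name": ["b", None, "a"]})  # stand-in for pd.read_excel(...)
df["excel_row"] = df.index + 2  # +1 for 1-based rows, +1 for the header row

records = sorted(
    df.to_dict("records"),
    key=lambda r: r["order_name"] or "",
)
for r in records:
    if r["order_name"] is None:
        # error messages can now cite the original location
        print(f"missing order_name at Excel row {r['excel_row']}")
```

<p>Because the row number travels with each record, sorting or filtering later doesn't lose the mapping back to the file.</p>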
| <python><pandas><excel><dataframe> | 2023-07-19 12:31:44 | 0 | 5,891 | MadPhysicist |
76,721,024 | 2,106,911 | Python import modules solution for scripts | <p>My project structure looks like</p>
<p>--src/core</p>
<p>--src/bin</p>
<p>I have a bunch of scripts in <code>src/bin</code> that I want to run, and they import from <code>src/core</code>.</p>
<p>When I run them from PyCharm, everything works fine.</p>
<p>When I run the same script from my terminal, I get a <code>ModuleNotFoundError</code>.</p>
<p>The difference is in the PYTHONPATH (PyCharm adds the root directory at runtime; the venv does not).</p>
<p>My current hack is to add this to all the scripts I want to call:</p>
<pre><code>import sys, os
root_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../.."))
sys.path.append(root_path)
</code></pre>
<p>What is a more elegant solution to this?</p>
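<p>One common alternative (a sketch with hypothetical module names): keep <code>src</code> a package and run the scripts as modules from the project root with <code>python -m</code>, so absolute imports resolve without any <code>sys.path</code> patching. The snippet builds a throwaway reproduction of the layout to show it working:</p>

```shell
# minimal reproduction of the src/core + src/bin layout (hypothetical names)
cd "$(mktemp -d)"
mkdir -p src/core src/bin
touch src/__init__.py src/core/__init__.py src/bin/__init__.py
echo "VALUE = 42" > src/core/settings.py
printf 'from src.core.settings import VALUE\nprint(VALUE)\n' > src/bin/run.py

python3 -m src.bin.run   # prints 42 without touching sys.path
```

<p>The more durable fix is to add a <code>pyproject.toml</code> and <code>pip install -e .</code> into the venv, which makes the package importable from anywhere, matching what PyCharm does implicitly.</p>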
| <python><import><pycharm><python-venv> | 2023-07-19 11:53:52 | 1 | 459 | Shubham |
76,720,975 | 11,759,533 | CWE based Test Suite for Python | <p>Is there a widely known test suite with test cases based on CWEs — like the <strong>Juliet Test Suite for C/C++</strong> from <strong>NIST</strong> — for Python? I was not able to find a ready-to-use solution, only frameworks like unittest with which you can build your own collection.</p>
<p>The purpose would be to evaluate the performance of different static code analysis tools.</p>
| <python><testing><automated-tests> | 2023-07-19 11:46:32 | 0 | 443 | huondui |
76,720,881 | 3,371,250 | How to perform a join on a already joined query? | <p>Say I have a query like so:</p>
<pre><code>subquery = select(table1.c.id,
table1.c.type,
table1.c.some_category,
table2.c.some_other_category).join(table2,
table1.c.id== table2.c.id)
</code></pre>
<p>I want to use this query to perform another join on a third table.
Like so:</p>
<pre><code># Fetch data
another_query = session.query(table3.c.id,
table3.c.aa,
table3.c.bb,
table3.c.cc,
table3.c.dd).subquery()
join = select(another_query.c.id,
another_query.c.aa).join(subquery,
another_query.c.id== subquery.c.id)
result = session.execute(join).fetchmany(1000)
</code></pre>
<p>I get the following error: Join target, typically a FROM expression, or ORM relationship attribute expected, got <sqlalchemy.sql.selectable.Select object.</p>
<p>How can I reuse the mentioned select subquery in the join statement?</p>
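<p>A sketch of the usual fix (assumes SQLAlchemy 1.4+): call <code>.subquery()</code> on the joined <code>Select</code> first, which turns it into a FROM-clause object that <code>join()</code> accepts, and join against its <code>.c</code> columns. Table and column names below are placeholders:</p>

```python
import sqlalchemy as sa

meta = sa.MetaData()
table1 = sa.Table("t1", meta, sa.Column("id", sa.Integer), sa.Column("type", sa.String))
table2 = sa.Table("t2", meta, sa.Column("id", sa.Integer), sa.Column("cat", sa.String))
table3 = sa.Table("t3", meta, sa.Column("id", sa.Integer), sa.Column("aa", sa.String))

# the original joined query, wrapped with .subquery() so it becomes a join target
subq = (
    sa.select(table1.c.id, table1.c.type, table2.c.cat)
    .join(table2, table1.c.id == table2.c.id)
    .subquery()
)

stmt = sa.select(table3.c.id, table3.c.aa).join(subq, table3.c.id == subq.c.id)
print(stmt)  # compiles without the "Join target ... expected" error
```

<p>The error in the question arises because a bare <code>Select</code> is not itself a FROM expression; <code>.subquery()</code> (or <code>.cte()</code>) performs that conversion.</p>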
| <python><join><select><sqlalchemy> | 2023-07-19 11:36:08 | 1 | 571 | Ipsider |
76,720,857 | 8,223,979 | Format python virtual environment name on bash prompt without root access | <p>Normally you would simply change PS1 in /bin/activate. The problem is that the file is read-only. How would you do that from .bash_profile, for example?</p>
<p>This is part of the /bin/activate file:</p>
<pre><code>if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
_OLD_VIRTUAL_PS1="${PS1:-}"
PS1="(python3.9.13_) ${PS1:-}"
export PS1
fi
</code></pre>
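<p>The activate script you pasted already contains the hook for this: when <code>VIRTUAL_ENV_DISABLE_PROMPT</code> is set, activate leaves PS1 alone, and you can format the name yourself in <code>~/.bash_profile</code> from the <code>VIRTUAL_ENV</code> variable it exports. A sketch (the prompt layout after the venv name is illustrative):</p>

```shell
# in ~/.bash_profile — activate skips its own PS1 edit when this is set
export VIRTUAL_ENV_DISABLE_PROMPT=1

venv_ps1() {
    # derive the displayed name from the venv path set by activate
    if [ -n "$VIRTUAL_ENV" ]; then
        printf '(%s) ' "$(basename "$VIRTUAL_ENV")"
    fi
}

PS1='$(venv_ps1)\u@\h:\w\$ '
```

<p>Because the function is evaluated each time the prompt is drawn, the name appears and disappears as you activate/deactivate, with no write access to the activate script needed.</p>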
| <python><environment-variables><environment> | 2023-07-19 11:33:09 | 1 | 1,097 | Caterina |
76,720,815 | 11,747,861 | apply operations from string list to one (or more) column(s) in polars | <p>I would need to apply multiple simple operations (sum/mean/max/min/median etc) to a single column. Is there a way to write that concisely without repeating myself?</p>
<p>Right now I would need to write all these manually,</p>
<pre class="lang-py prettyprint-override"><code>df.select(pl.col("a").max(), pl.col("b").mean(), pl.col("b").min())
</code></pre>
<p>Whereas in pandas I could pass a list of operations (<code>["max", "min", "mean"]</code>) to <code>agg</code>.</p>
<p>I looked through the Polars documentation and the internet and couldn't find anything.</p>
| <python><dataframe><python-polars> | 2023-07-19 11:27:42 | 4 | 2,757 | Mark Wang |
76,720,751 | 125,673 | Cannot install openpyxl | <p>When I attempt to run my python code, I get this error message</p>
<pre><code>ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.
</code></pre>
<p>So I try to install openpyxl:</p>
<pre><code>pip install --upgrade openpyxl
WARNING: Ignoring invalid distribution -orch (d:\anaconda\lib\site-packages)
Requirement already satisfied: openpyxl in d:\anaconda\lib\site-packages (3.1.2)
Requirement already satisfied: et-xmlfile in d:\anaconda\lib\site-packages (from openpyxl) (1.1.0)
WARNING: Ignoring invalid distribution -orch (d:\anaconda\lib\site-packages)
</code></pre>
<p>This makes no difference, I still get the same error message.</p>
<p>I am using PyCharm and Python files.</p>
| <python><pycharm><openpyxl> | 2023-07-19 11:21:03 | 1 | 10,241 | arame3333 |
76,720,657 | 2,515,265 | Gunicorn error: Socket error processing request | <p>I have upgraded my Dash application to use gunicorn 21.1.0 (from 20.1.0) and I'm getting this unexpected error in the application log when submitting a request from the browser:</p>
<pre><code>[2023-07-19 20:52:51 +1000] [56562] [ERROR] Socket error processing request.
Traceback (most recent call last):
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 285, in handle
keepalive = self.handle_request(req, conn)
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 357, in handle_request
util.reraise(*sys.exc_info())
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/util.py", line 641, in reraise
raise value
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 343, in handle_request
resp.write(item)
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/http/wsgi.py", line 326, in write
self.send_headers()
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/http/wsgi.py", line 322, in send_headers
util.write(self.sock, util.to_bytestring(header_str, "latin-1"))
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/gunicorn/util.py", line 299, in write
sock.sendall(data)
OSError: [Errno 9] Bad file descriptor
</code></pre>
<p>My application is configured to work with multiple processes. Is this a gunicorn bug or has my configuration broken some new assumptions?</p>
| <python><plotly-dash><gunicorn> | 2023-07-19 11:08:17 | 0 | 2,657 | Javide |
76,720,604 | 10,780,715 | PyPolars, get value from column based on value in another column without for loop | <p>Using PyPolars I'm trying to create a new column containing the value of a column chosen among several based on a condition.</p>
<p>The condition is expressed in a dictionary. The following code should be clear enough to describe more precisely what I'm looking for.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl


def do_that_mapping(lf: pl.LazyFrame) -> pl.LazyFrame:
    my_map = {
        "A": "col_1",
        "B": "col_2",
        "C": "col_2",
        "D": "col_3",
        # many more values to map
    }
    for k, v in my_map.items():
        # ofc I don't want to use that in a loop, I'm looking for a way to execute
        # the following line with a native method of Polars and remove the
        # Python iteration on `my_map`
        lf = lf.with_columns(pl.when(pl.col("col_val") == k).then(pl.col(v)).alias("new_col"))
    return lf


x = pl.LazyFrame(
    data={
        "col_val": ["A", "B", "C", "D"],
        "col_1": [22, 1, 54, 82],
        "col_2": [1, 32, 7, 8],
        "col_3": [4, 6, 90, 3],
    },
    schema={
        "col_val": pl.String,
        "col_1": pl.Int16,
        "col_2": pl.Int16,
        "col_3": pl.Int16,
    },
)

x_with_new_col = x.pipe(do_that_mapping)
</code></pre>
| <python><dataframe><python-polars> | 2023-07-19 11:01:31 | 3 | 575 | mlisthenewcool |
76,720,599 | 20,920,790 | How to force local minimums plotting under graph in adjust_text? | <p>I got this graph:</p>
<pre><code># graph plot
plt.plot(
df_for_pred['date'],
df_for_pred['mentee_per_mentor'],
color="r"
)
plt.title('Mentee per mentor dynamic')
plt.grid(False)
# graph_annotates contains local highs and lows
labels = [plt.text(graph_annotates['date'][i], graph_annotates['mentee_per_mentor'][i],
f"{graph_annotates['mentee_per_mentor'][i]:.2f}") for i in graph_annotates.index]
adjust_text(labels)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/18crp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/18crp.png" alt="enter image description here" /></a></p>
<p>How can I force the labels for local lows to sit below the graph (and highs above it)?</p>
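<p>A sketch without <code>adjust_text</code>, using plain <code>annotate</code> with a vertical offset and alignment chosen from whether the point is a low or a high (the extrema coordinates are made up):</p>

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
extrema = [(1, 0.2, "low"), (2, 0.8, "high")]  # hypothetical (x, y, kind) triples
for x, y, kind in extrema:
    below = kind == "low"
    ax.annotate(
        f"{y:.2f}", (x, y),
        textcoords="offset points",
        xytext=(0, -10 if below else 10),
        ha="center",
        va="top" if below else "bottom",  # anchor the text away from the curve
    )
```

<p>With the placement fixed this way, <code>adjust_text</code> may no longer be needed at all; if it is, it can still be run afterwards on just the labels that overlap.</p>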
| <python><matplotlib> | 2023-07-19 11:00:58 | 1 | 402 | John Doe |
76,720,386 | 14,954,327 | is there a way to search a huggingface Repository for a specific filename? | <p>I'd like to search a huggingface repository for a specific filename, without having to clone it first as it is a rather large repo with thousands of files.</p>
<p>I couldn't find a way to do it with the web interface, I installed the <em>python</em> package <code>huggingface_hub</code> and looked into <code>huggingface_hub.Repository</code> and <code>huggingface_hub.HfFileSystem</code> without success.</p>
<p>If a search query isn't possible, maybe I could at least retrieve the list of files?</p>
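<p>The Hub API can list file paths without cloning: <code>HfApi().list_repo_files(repo_id)</code> returns just the paths, and <code>HfFileSystem</code> (being fsspec-based) offers <code>glob</code> directly. Since the network call can't run here, this sketch separates the offline filtering step; the repo id is a placeholder:</p>

```python
from fnmatch import fnmatch


def filter_paths(paths, pattern):
    """Offline part: match repo paths against a glob-style pattern."""
    return [p for p in paths if fnmatch(p, pattern)]


# online part (assumes huggingface_hub is installed):
#   from huggingface_hub import HfApi
#   paths = HfApi().list_repo_files("some-org/some-repo")
paths = ["config.json", "model-00001-of-00002.safetensors", "README.md"]
print(filter_paths(paths, "*.safetensors"))  # ['model-00001-of-00002.safetensors']
```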
| <python><search><huggingface><huggingface-hub> | 2023-07-19 10:36:25 | 1 | 960 | codekoriko |
76,720,256 | 12,436,050 | Group by and Join excludes one column in the output dataframe in Python 3.7 | <p>I have a dataframe with following columns.</p>
<pre><code>col1 col2 col3 col4 col5 col6
A20 hghjfg jhdf A20.1 abcd direct
A20 hghjfg jhdf A20.2 edfg direct
A20 hghjfg jhdf A20.3 rtzu direct
</code></pre>
<p>I would like to group by this dataframe and join the unique values from the other columns. I am expecting the following dataframe:</p>
<pre><code>col1 col2 col3 col4 col5 col6
A20 hghjfg jhdf A20.1 | A20.2 | A20.3 abcd | edfg | rtzu direct
</code></pre>
<p>I am using the following Python code to do this.</p>
<pre><code>join_unique = lambda x: ' | '.join(x.unique())
df.groupby(['col1'], as_index=False).agg(join_unique)
</code></pre>
<p>However, when I do this only col1, col2, col3, col4 and col6 are in the output.</p>
<pre><code>col1 col2 col3 col4 col6
A20 hghjfg jhdf A20.1 | A20.2 | A20.3 direct
</code></pre>
<p>Why col5 is not there. How can I include it in the final dataframe.</p>
<p>Any help is highly appreciated</p>
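One hedged workaround (not from the question, and an assumption about the cause: some pandas versions silently drop "nuisance" columns whose aggregation raises, e.g. when a column contains NaN that <code>' | '.join</code> cannot handle) is to spell out the aggregation per column and cast to string first, so no column can be dropped:

```python
import pandas as pd

df = pd.DataFrame({
    'col1': ['A20'] * 3,
    'col2': ['hghjfg'] * 3,
    'col3': ['jhdf'] * 3,
    'col4': ['A20.1', 'A20.2', 'A20.3'],
    'col5': ['abcd', 'edfg', 'rtzu'],
    'col6': ['direct'] * 3,
})

def join_unique(s):
    # cast to str and drop NaN so the join never raises
    return ' | '.join(pd.unique(s.dropna().astype(str)))

out = df.groupby('col1', as_index=False).agg({
    c: join_unique for c in ['col2', 'col3', 'col4', 'col5', 'col6']
})
```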
| <python><pandas><join><group-by> | 2023-07-19 10:20:20 | 0 | 1,495 | rshar |
76,720,204 | 11,235,680 | How to plot a combination of data as a heatmap | <p>I have a dataframe that represents all the combinations of data sources and the number of common data points for each combination:</p>
<p>here's how to load a simplified dataframe:</p>
<pre><code>data = { 's1': [True, False, False], 's2': [True, True, True], 's3': [False, False,
True], 's4': [False, True, False], 'count': [2, 2, 2] }
df = pd.DataFrame(data)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>s1</th>
<th>s2</th>
<th>s3</th>
<th>s4</th>
<th>count</th>
</tr>
</thead>
<tbody>
<tr>
<td>True</td>
<td>True</td>
<td>False</td>
<td>False</td>
<td>2</td>
</tr>
<tr>
<td>False</td>
<td>True</td>
<td>False</td>
<td>True</td>
<td>2</td>
</tr>
<tr>
<td>False</td>
<td>True</td>
<td>True</td>
<td>False</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>The first line says that we have 2 data points common to source 1 and 2 and that aren't available in source 3 and 4.</p>
<p>I'm trying to make it more "readable" with a plot, possibly a heatmap, because as you can imagine the real data has many more combinations. But I can't figure out the right transformation to reach that objective.</p>
<p>how can I achieve that?</p>
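One possible transformation (a sketch, not necessarily the only layout): label each row by the sources it combines and spread the count across the participating source columns, which yields a matrix ready to feed into a heatmap:

```python
import pandas as pd

data = {'s1': [True, False, False], 's2': [True, True, True],
        's3': [False, False, True], 's4': [False, True, False],
        'count': [2, 2, 2]}
df = pd.DataFrame(data)

sources = ['s1', 's2', 's3', 's4']
# row labels name the participating sources, e.g. "s1 & s2"
labels = df[sources].apply(lambda r: ' & '.join(c for c in sources if r[c]), axis=1)
# cells hold the count where the source participates, 0 otherwise
heat = df[sources].astype(int).mul(df['count'], axis=0)
heat.index = labels
# then plot e.g. with: import seaborn as sns; sns.heatmap(heat, annot=True)
```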
| <python><pandas><seaborn><heatmap> | 2023-07-19 10:12:52 | 2 | 316 | Bouji |
76,720,188 | 149,818 | Enumerate partial sums using Z3 | <p>My business goal is to search, through a really big array of possible values, for a selection whose partial sum doesn't exceed a specific number. Partial sums are not distinguished (I mean <code>A[1] + A[3]</code> is the same as <code>A[3] + A[1]</code>). The following is a minimal example that models the real problem, but with some concerns:</p>
<pre><code>import z3
nums = [6, 1, 2, 3, 4, 7, 8, 10, 11, 0, 0, 0, 0, 0, 0] # Problem #1 - padding 0
GOAL = 14
xis = z3.Ints('x1 x2 x3 x4 x5 x6') # indexes to point inside array 'a'
sol = z3.Optimize()
sol.add(z3.Distinct(*xis))
sol.add( [z3.And( x >=0, x < len(nums)) for x in xis] )
# Problem #2 - how correctly apply AtLeast/AtMost statements
#sol.add( z3.AtLeast( *[x >=0 for x in xis], 1 ))
#sol.add( z3.AtMost( *[x >=0 for x in xis], 4 ))
a = z3.Array('a', z3.RealSort(), z3.IntSort())
for i, r in enumerate(nums):
a = z3.Store(a, i, r)
s = z3.Real('S')
sol.add(z3.Sum([z3.Select(a, x) for x in xis]) == s)
sol.add(s <= GOAL)
sol.maximize(s) # I need this to ensure to be close as possible to GOAL
while sol.check() == z3.sat:
model = sol.model()
print("==="*10)
print(model)
for x in xis: # Problem #3 - O^2 loop for exclusion
excl = model[x]
sol.add(z3.And(*(x != excl for x in xis)))
</code></pre>
<p>I've placed comments with problems that I see there:</p>
<p><strong>Problem #1</strong> and <strong>#2</strong>: the sum can be composed of 4..6 items, which is why I had to add padding <code>0</code>s to ensure the indexes can achieve this. I know about <code>AtLeast</code> and <code>AtMost</code> but have no idea how to leverage them here.</p>
<p><strong>Problem #3</strong>: after each successful evaluation I need an O(n^2) loop to forbid x1..x6 from reusing already used indexes. Is there something like <code>NOT IN</code> to simplify the uniqueness check?</p>
| <python><z3><z3py> | 2023-07-19 10:10:24 | 1 | 23,762 | Dewfy |
76,720,187 | 12,775,432 | Resampling data from 6 min to 5min with nan | <p>I have a linear interpolation problem with nans in my data. I have instantaneous measurements that I want to resample from 6 min intervals to 5 min intervals.</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(zip(['10:00','10:06','10:12','10:18','10:24'],
[1, 2, 3, 0.5, 2.5], [0, np.nan, 5, 2.5, 10]),
columns=['date','column_a','column_b'])
df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True)
df = df.set_index('date')
print(df)
column_a column_b
date
2023-07-19 10:00:00 1.0 0.0
2023-07-19 10:06:00 2.0 NaN
2023-07-19 10:12:00 3.0 5.0
2023-07-19 10:18:00 0.5 2.5
2023-07-19 10:24:00 2.5 10.0
</code></pre>
<p>I used this code, but at 10:05 there is supposed to be NaN instead of a value. Thanks for helping.</p>
<pre><code>print(df.resample('1Min').interpolate(method='linear', limit=5).resample('5Min').asfreq())
column_a column_b
date
2023-07-19 10:00:00 1.000000 0.000000
2023-07-19 10:05:00 1.833333 2.083333 <--- here should be nan
2023-07-19 10:10:00 2.666667 NaN
2023-07-19 10:15:00 1.750000 3.750000
2023-07-19 10:20:00 1.166667 5.000000
</code></pre>
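One hedged way to get this (assuming the intended rule is: a 5-minute point is only valid if both bracketing 6-minute observations are non-NaN) is to upsample, interpolate, and then mask every minute whose previous or next original observation was NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(zip(['10:00', '10:06', '10:12', '10:18', '10:24'],
                      [1, 2, 3, 0.5, 2.5], [0, np.nan, 5, 2.5, 10]),
                  columns=['date', 'column_a', 'column_b'])
df['date'] = pd.to_datetime(df['date'])
df = df.set_index('date')

up = df.resample('1min').asfreq().interpolate(method='linear')
obs = df.notna().reindex(up.index)   # True/False at observations, NaN in between
# a minute is valid only if the surrounding observations are both non-NaN
valid = (obs.ffill().fillna(False) & obs.bfill().fillna(False)).astype(bool)
out = up.where(valid).resample('5min').asfreq()
```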
| <python><pandas><linear-interpolation><pandas-resample> | 2023-07-19 10:10:13 | 2 | 640 | pyaj |
76,720,158 | 4,913,660 | Numpy - Efficiently compute a function (1-d array) over a grid | <p>So I have a function <code>f(x)</code> whose argument <code>x</code> is a row array of dimension <code>k</code>.
The function can also be given an array with more than one row and is optimized to operate row-wise on arrays, which is clearly much faster than iterating over the rows and calling the function on each.
Now I would like to apply the function on a grid covering a k-dimensional space.
Let us say k = 3.
Then</p>
<pre><code>import numpy as np

N_DIV = 2
x0 = np.linspace(0,1,N_DIV)
x1 = np.linspace(0,1,N_DIV)
x2 = np.linspace(0,1,N_DIV)
</code></pre>
<p>and I would like to compute the function for all combinations such as</p>
<pre><code>x0 x1 x2
0 0 0
0 0 0.5
0 0.5 0
0 0.5 0.5
</code></pre>
<p>etc.</p>
<p>I thought about using <code>np.meshgrid</code> so</p>
<pre><code>xx, yy, zz = np.meshgrid(x0,x1,x2)
</code></pre>
<p>but what next? The brutal approach</p>
<pre><code>prev_array = np.array([0, 0, 0])
for i in range(N_DIV):
for j in range(N_DIV):
for k in range(N_DIV):
prev_array = np.vstack((prev_array,
np.array([xx[i,j,k],yy[i,j,k],zz[i,j,k]])))
</code></pre>
<p>cannot be right, any suggestions please?
I would like to efficiently compute the function <code>f</code> over a grid covering the k-dimensional space, thanks
*** EDIT</p>
<p>The post <a href="https://stackoverflow.com/questions/22774726/numpy-evaluate-function-on-a-grid-of-points">Evaluate function on a grid of points</a> has been suggested as a solution, but I fail to see how it could answer my question. They have a <code>f(x,y)</code> of two scalar variables, and I see how the idea <code>result = func(xaxis[:,None], yaxis[None,:])</code>. But my function takes a row vector as input, so {x,y}, and hence the idea above seems not directly applicable, to me at least, thanks again</p>
<p><em><strong>BACKGROUND - What I am trying to achieve</strong></em></p>
<p>Say I have a function of 3 variables</p>
<pre><code>def func(x,y,z):
return x**3 - 3*y + z**2
</code></pre>
<p>and I want to plot it as a function of say<code>(x,y)</code>, for a fixed value of <code>z</code>.</p>
<p>I could do</p>
<pre><code>N_DIV = 30
x =np.linspace(0,5,N_DIV)
y =np.linspace(0,5,N_DIV)
z =np.linspace(0,5,N_DIV)
xx , yy, zz = np.meshgrid(x,y,z)
W = func(xx,yy,zz)
import matplotlib.pyplot as plt
# Plot the surface
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
surf = ax.plot_surface(xx[:,:,1],yy[:,:,1],W[:,:,1], cmap="Spectral",
linewidth=0)
plt.show()
</code></pre>
<p>worls fine and the repeated function evaluations are fast.
But, if my function is defined as</p>
<pre><code>def func_vect(x):
return x[0]**3 - 3*x[1] + x[2]**2
</code></pre>
<p>how to achieve the same result? That is, creating an array <code>W</code> of output results, ready to plot using as before?</p>
<p>The brute force approach would be to create it by looping, but I am also confused by the following</p>
<pre><code>def func2(x,y):
return x**3 - 3*y
xx2 , yy2 = np.meshgrid(x,y)
W2 = func2(xx2,yy2)
W2_loop = np.zeros((N_DIV, N_DIV))
for i in range(N_DIV):
for j in range(N_DIV):
W2_loop[i,j] = func2(x[j],y[i])
np.isclose(W2,W2_loop)
</code></pre>
<p>returns all <code>True</code>, but I cannot figure out how to make it work in three dimensions, as (<em><strong>SECOND FUNDAMENTAL ISSUE</strong></em>)</p>
<pre><code>W_loop = np.zeros((N_DIV, N_DIV,N_DIV))
for i in range(N_DIV):
for j in range(N_DIV):
for k in range(N_DIV):
W_loop[i,j,k] = func(x[k],y[j],z[i])
</code></pre>
<p>is different from the <code>W</code> created above.</p>
<p>Thanks</p>
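Under the assumption that <code>func_vect</code> is vectorized over its components (i.e. <code>x[0]</code>, <code>x[1]</code>, <code>x[2]</code> may each be whole arrays), one sketch is to build all grid points with <code>meshgrid</code>, flatten them into rows, and feed the transposed point list to the function in a single call:

```python
import numpy as np

def func_vect(x):
    return x[0] ** 3 - 3 * x[1] + x[2] ** 2

N_DIV = 4
x = np.linspace(0, 5, N_DIV)
y = np.linspace(0, 5, N_DIV)
z = np.linspace(0, 5, N_DIV)

# (N, N, N, 3) grid of coordinates; indexing='ij' keeps x on axis 0, y on 1, z on 2
grid = np.stack(np.meshgrid(x, y, z, indexing='ij'), axis=-1)
pts = grid.reshape(-1, 3)                       # one row per grid point
W = func_vect(pts.T).reshape(N_DIV, N_DIV, N_DIV)
```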
| <python><arrays><numpy> | 2023-07-19 10:07:33 | 1 | 414 | user37292 |
76,720,156 | 15,450,772 | pip install old version with optional dependencies | <p>Is there a way to install an old version of a package with optional dependencies?</p>
<p>For example, I can <code>pip install pandas[xml]</code> that would install the current pandas version with the xml extra dependencies.</p>
<p>Nonetheless, when I do <code>pip install pandas==1.4.4[xml]</code>, the following error appears: <code>ERROR: Extras after version '==1.4.4[xml]'.</code> From the <a href="https://pandas.pydata.org/pandas-docs/version/1.4/getting_started/install.html#xml" rel="nofollow noreferrer">archived documentation of pandas</a> the [xml] extra dependencies already existed in version 1.4.4.</p>
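For what it's worth, pip does accept extras together with a pinned version as long as the extras come <em>before</em> the version specifier, and the requirement is usually quoted so the shell does not expand the brackets (a sketch; the package and version are just the question's example):

```shell
pip install "pandas[xml]==1.4.4"
```

The same `package[extra]==version` form works inside a `requirements.txt` line.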
| <python><python-3.x><pip> | 2023-07-19 10:07:18 | 1 | 451 | KevinYanesG |
76,720,127 | 16,688,854 | np.where gives error message but correct result | <p>The following code returns an error message I don't understand, although the output is correct.</p>
<pre><code>import numpy as np

a = np.array([1, 4, 9])
bool_1 = a > 5
bool_2 = a < 5
a = np.where(bool_1, -np.sqrt(a), a)
array([ 1., 4., -3.])
a = np.where(bool_2, np.sqrt(a), a)
array([ 1., 2., -3.])
</code></pre>
<p>But with this warning:</p>
<pre><code>RuntimeWarning: invalid value encountered in sqrt
a = np.where(bool_2, np.sqrt(a), a)
</code></pre>
<p>The boolean masks are computed beforehand so that they are not impacted by the first <code>np.where</code>. Why do I get this warning?</p>
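A hedged explanation sketch: both value arguments of <code>np.where</code> are fully evaluated before the selection happens, so <code>np.sqrt(a)</code> still runs on the -3 produced by the first step. Sanitizing the unused entries before the sqrt avoids the warning without changing the result:

```python
import numpy as np

a = np.array([1.0, 4.0, 9.0])
bool_1 = a > 5
bool_2 = a < 5
a = np.where(bool_1, -np.sqrt(a), a)    # a is now [1., 4., -3.]
safe = np.where(bool_2, a, 0.0)         # replace entries we won't use before sqrt
a = np.where(bool_2, np.sqrt(safe), a)  # no RuntimeWarning
```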
| <python><numpy> | 2023-07-19 10:03:26 | 0 | 337 | Antoine101 |
76,720,014 | 3,062,260 | create a numpy array with zeros and ones at random positions BUT at least one or more '1's on a given sub axis | <p>I am looking to make a numpy array randomly initialized with zeros and ones. I found the following question <a href="https://stackoverflow.com/questions/63451108/create-a-numpy-array-with-zeros-and-ones-at-random-positions">here</a> that describes the basic random array and how to control the dimensions. However, I need my array to have at least a single '1' in each sub array of the nested axis. See example:</p>
<pre><code>import numpy as np
size = (3, 5, 5)
proba_0 = 0.7
n_positions = np.random.choice([0,1], size=size, p=[proba_0, 1-proba_0])
print(n_positions)
[[[0 1 1 0 0]
[0 0 0 0 0]
[0 0 0 1 1]
[1 0 0 0 0]
[0 1 0 0 0]]
[[0 1 1 1 1]
[0 0 1 1 0]
[1 0 0 1 1]
[0 0 1 0 0]
[0 1 0 1 1]]
[[0 0 0 0 1]
[0 0 1 0 1]
[0 0 0 0 1]
[0 0 0 1 0]
[0 0 0 0 0]]]
</code></pre>
<p>The issue here is that at the following position in this array <code>n_positions[0][1]</code> the data is populated ONLY with zeros. I need there to be at least one '1' in each row on axis 2. I can increase the probability of 1s occurring, but this doesn't eliminate the risk.</p>
<p>I could make this with a loop or a comprehension, getting numpy to generate a random number of ones between 1 and 5 and then filling out with zeros, but it's very slow. I am hoping there is a more numpy-friendly built-in method to achieve this?</p>
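One vectorized sketch (with an assumed trade-off: forcing one extra 1 slightly biases the overall probability of 1s upward): draw the array as before, then pick one random position per row of the last axis and set it to 1 with <code>np.put_along_axis</code>:

```python
import numpy as np

size = (3, 5, 5)
proba_0 = 0.7
rng = np.random.default_rng(0)
n_positions = rng.choice([0, 1], size=size, p=[proba_0, 1 - proba_0])

# choose one column per row of the last axis and force it to 1
idx = rng.integers(0, size[-1], size=size[:-1])
np.put_along_axis(n_positions, idx[..., None], 1, axis=-1)
```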
| <python><numpy><random> | 2023-07-19 09:50:56 | 1 | 1,644 | user3062260 |
76,719,963 | 4,913,660 | Numpy Slicing: equivalence of [: , 2:3, 1] and slice() notation | <p>I am failing to understand how the function slice() relates to the usual Numpy indexing notation.</p>
<pre><code>import numpy as np

test = np.random.rand(4,4,4)
test[:,0:1,2]
>>> array([[0.73897606],
[0.68005618],
[0.32831257],
[0.36882484]])
</code></pre>
<p>but I cannot see what is about to go on now</p>
<pre><code>test[slice(None),slice(0,1),slice(2,3)]
>>>> array([[[0.73897606]],
[[0.68005618]],
[[0.32831257]],
[[0.36882484]]])
</code></pre>
<p>Some experiments seem to confirm that <code>slice(None)</code> is equivalent to <code>:</code>, <code>slice(0,1)</code> is equivalent to <code>0:1</code>, but <code>slice(2,3)</code> is equivalent to <code>2</code>, as an index.</p>
<p>How to describe the <code>[:,0:1,2]</code> slicing using the <code>slice()</code> function?</p>
<p>Could somebody please give us a hint? I also do not get where the shape of the second output comes from, many thanks</p>
<p><strong>EDIT - ADDITIONAL BACKGROUND</strong></p>
<p>What I would like to be able to do is dynamically slice an array, given an input.
For example, given an array <code>S</code> with shape (10,10,10,10), the user might select two variables to plot over, keeping the other two fixed at a selected location.
For example, (surface) plot over the first and third, keeping the second and fourth fixed at indices, say (2, 3).
I can pass then the array <code>S[:,2,:,3]</code>.</p>
<p>Following the answer in <a href="https://stackoverflow.com/questions/24398708/slicing-a-numpy-array-along-a-dynamically-specified-axis">Dynamically slice an array</a>, I though I would have something like</p>
<pre><code>axis_to_plot_1 = 0
axis_to_plot_2 = 2
axis_to_slice_1 = 1
index_to_consider_1 = 2
axis_to_slice_2 = 3
index_to_consider_2 = 3
slc = [slice(None)] * len(S.shape)
slc[axis_to_slice_1] = slice(index_to_consider_1, index_to_consider_1 +1)
slc[axis_to_slice_2] = slice(index_to_consider_2, index_to_consider_2+1)
</code></pre>
<p>this would solve my issue, I could <code>slice()</code> for both the <code>:</code> (variable to consider when plotting) and "i" indexing cases (section on remaining dimensions).</p>
<p>How is the above dynamical slicing over two axes best implemented, any ideas please?</p>
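A sketch of the dynamic version: a plain integer entry in the index tuple drops that axis (like <code>[:, 2, :, 3]</code>), while <code>slice(i, i+1)</code> keeps it with length 1, which is exactly the shape difference observed above. So using integers for the fixed axes reproduces the usual notation:

```python
import numpy as np

S = np.random.rand(4, 5, 6, 7)
axis_to_slice_1, index_to_consider_1 = 1, 2
axis_to_slice_2, index_to_consider_2 = 3, 3

slc = [slice(None)] * S.ndim
slc[axis_to_slice_1] = index_to_consider_1   # integer index -> axis is dropped
slc[axis_to_slice_2] = index_to_consider_2
view = S[tuple(slc)]                         # same as S[:, 2, :, 3]
```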
| <python><numpy> | 2023-07-19 09:46:05 | 1 | 414 | user37292 |
76,719,932 | 1,900,563 | DST-agnostic time in Python | <p>I have a scheduling class that takes a <code>datetime.time</code> object and runs a task every day at the specified time.</p>
<pre class="lang-py prettyprint-override"><code>import time
import datetime as dt
class Scheduler:
def set_time(self, day_time):
self._day_time = day_time
self._last_execution = None
def run(self, f):
while True:
time.sleep(10)
now = dt.datetime.now(dt.UTC)
today = now.date()
# Don't run on weekends
if today.weekday() > 4:
continue
# First execution: it can run anytime within the first 5 minutes after the expected time
if self._last_execution is None:
begin = dt.datetime.combine(today, self._day_time)
end = begin + dt.timedelta(minutes=5)
if begin <= now < end:
self._last_execution = now
f()
else:
continue
# Later executions
scheduled = dt.datetime.combine(today, self._day_time)
if self._last_execution < scheduled and now >= scheduled:
self._last_execution = now
f()
scheduler = Scheduler()
scheduler.set_time(dt.time(8, 30, tzinfo=my_timezone))
scheduler.run(lambda: print("This is printed every day at 8:30"))
</code></pre>
<p>Assume that this service stays up for years.</p>
<p>Obviously, the problem is the <code>my_timezone</code> variable. I would like this task to be run every day at the same time, no matter whether we are in DST or not (which by the way means that if my task is to be run at 2:30 in the morning, it will be run twice when moving from DST to non-DST, and skipped when moving from non-DST to DST).</p>
<p>Internally, <code>scheduler</code> compares the time specified at set up time with <code>dt.datetime.now(dt.UTC)</code>.</p>
<p>I wonder what is the best way to deal with this. Is there a <code>tzinfo</code> I can specify to account for DST?</p>
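One hedged direction using only the standard library (<code>zoneinfo</code>, Python ≥ 3.9): recompute the target instant every day by combining that day's date with the wall-clock time in the chosen zone, then convert to UTC for the comparison; <code>zoneinfo</code> applies the correct DST offset for each date (the zone here is an arbitrary example):

```python
from datetime import date, datetime, time, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Paris")

def scheduled_utc(day: date) -> datetime:
    # "8:30 wall clock" on that day, whatever the DST offset happens to be
    return datetime.combine(day, time(8, 30), tzinfo=tz).astimezone(timezone.utc)

winter = scheduled_utc(date(2023, 1, 19))   # UTC+1 -> 07:30 UTC
summer = scheduled_utc(date(2023, 7, 19))   # UTC+2 -> 06:30 UTC
```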
| <python><datetime><timezone><dst> | 2023-07-19 09:41:43 | 2 | 2,406 | Spiros |
76,719,916 | 5,986,164 | Find combination of continuous ranges returning max sum | <p>I have an SQL Table resembling the following</p>
<pre><code>-- Create the table
CREATE TABLE YourTableName (
val FLOAT,
p300 FLOAT,
p100 FLOAT
);
-- Insert values into the table
INSERT INTO YourTableName (val, p300, p100)
VALUES
(2295.91836734693400, -1.370, -2.340),
(1538.77551020407994, -0.035, 0.135),
(1269.68503937007615, -0.041, 0.300),
(-1130.38277511960990, -0.160, -0.075),
(1004.27350427350345, -0.070, 0.030),
(-2396.37305699481525, -0.210, 0.580),
(1632.46268656716000, -0.090, 0.290)
</code></pre>
<p>I need to find the combination of continuous ranges for columns p100 and p300 that return the max possible result in column val. This example is for 3 columns, but my real world case has more columns and more rows.</p>
<p>I first wrote a script finding the max sum subarray for each property separately. This worked. Then I tried finding the max sum submatrix between p100 and p300, but I realized that this wouldn't work, as each range has to be continuous and I can only order the matrix one way.</p>
| <python><c#><sql-server><math><statistics> | 2023-07-19 09:39:42 | 1 | 317 | Georgi Lubomirov |
76,719,861 | 160,665 | How can I passively check if a user session is still active when using SAML? | <p>tl;dr: How can a SAML SP validate incoming requests <em>after</em> the initial POST request received from the IdP?</p>
<hr />
<p>NOTE: This is a fairly high-level SAML question and the small code examples below are non-functional and serve for illustration. I would like to understand the "best practice" when using SAML in an SPA exchanging data via a REST API (the service provider).</p>
<hr />
<p>I have a JS frontend and a Python back-end. When authenticating, the JS-app triggers a SAML flow on the back-end. This works as intended:</p>
<ul>
<li>The browser asks the service-provider (the Python app) to start a login</li>
<li>The SP response with a redirect to the SAML IdP</li>
<li>The browser redirects, user authenticates and the IdP sends a POST response back to the SP</li>
</ul>
<p>All that works fine and I know that user has successfully logged in. I remember basic info using a JWT token in the browser's local-storage.</p>
<p>When I make a request to the Python app, I would like to verify that the session has not been terminated inside the IdP. Especially for critical operations.</p>
<p>From my investigation I gather that this is what the "passive" SAML request is useful for. I'm using the OneLogin Python library for this. But I don't understand how to use these passive requests. When I do the following, all I get is a new redirect URL:</p>
<pre class="lang-py prettyprint-override"><code>auth = build_onelogin_auth_request()
redirect_url = auth.login(is_passive=True)
# ^-- what should I do with this URL?
</code></pre>
<p>I can send that URL back to the browser, making it redirect to the IdP. But that disrupts the user. And it may be a "lossy" redirect, losing headers and the POST payload along the way. I'm not sure if those can be "bundled" into the SAML response. It feels iffy.</p>
<p>I would prefer if the SP could simply "ask" the IdP whether the given user-session is still valid and whether the rest of the code can continue. For example:</p>
<pre class="lang-py prettyprint-override"><code>auth = build_onelogin_auth_request()
is_authed = do_something(auth)
# ^-- What would this need to be?
if not is_authed:
return 401 # (or 403, whichever is appropriate in the context)
</code></pre>
<p>Another thing I don't fully grasp is how the SP can make the link between the initial login-request and any followup "check" as discussed above. I can see that the SAML IdP is keeping a session-id. So the IdP can properly identify returning clients. But the SP does not have access to that data. I can see some IDs in the SAML exchanges. But those seem to be request/response IDs. Can they be used by the SP to link an auth-request to an existing session?</p>
<p>Maybe I'm also totally on the wrong path and I would appreciate a reorientation.</p>
<p>For now, from the perspective of the SP I only see the authentication state when the IdP sends the POST response. For any subsequent exchanges between the front-end application (JS) and the SP I am missing that info. I could use expirations on the JWT token and force a fresh SAML exchange when it runs out. But that feels like using the wrong tool for the job.</p>
| <python><saml><saml-2.0> | 2023-07-19 09:34:01 | 0 | 22,091 | exhuma |
76,719,835 | 4,451,521 | pycuda cannot find the kernel cuModuleGetFunction failed: named symbol not found | <p>I have the following script that tries to paint a rectangle on an image</p>
<pre><code>import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule
def draw_square(image_gpu, image_width,image_height, x, y, width, height, color):
block_dim = (16, 16) # CUDA block dimensions
grid_dim_x = (image_width + block_dim[0] - 1) // block_dim[0] # CUDA grid dimensions (x-axis)
grid_dim_y = (image_height + block_dim[1] - 1) // block_dim[1] # CUDA grid dimensions (y-axis)
mod = SourceModule("""
__global__ void draw_square_kernel(unsigned char *image, int image_width, int x, int y, int width, int height, unsigned char *color)
{
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
if (row >= y && row < y + height && col >= x && col < x + width)
{
int pixel_idx = row * image_width * 3 + col * 3;
image[pixel_idx] = color[0];
image[pixel_idx + 1] = color[1];
image[pixel_idx + 2] = color[2];
}
}
""", no_extern_c=True)
draw_square_kernel = mod.get_function("draw_square_kernel")
draw_square_kernel(image_gpu, np.int32(image_width), np.int32(x), np.int32(y),
np.int32(width), np.int32(height), cuda.In(color, block=block_dim, grid=(grid_dim_x, grid_dim_y)))
# Load the image
image_path = 'Lena.png' # Replace with the path to your image
image = cv2.imread(image_path)
# Convert the image to the RGB format
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Upload the image to the GPU
image_gpu = cuda.to_device(image_rgb)
# Define the square coordinates
x, y = 100, 100 # Top-left corner coordinates
width, height = 200, 200 # Width and height of the square
# Define the color of the square (Green in this example)
color = np.array([0, 255, 0], dtype=np.uint8)
# Draw a square on the GPU image
draw_square(image_gpu, image_rgb.shape[1], image_rgb.shape[0],x, y, width, height, color)
# Download the modified image from the GPU
image_with_square = np.empty_like(image_rgb)
cuda.memcpy_dtoh(image_with_square, image_gpu)
# Convert the image back to the BGR format for display
image_with_square_bgr = cv2.cvtColor(image_with_square, cv2.COLOR_RGB2BGR)
# Display the image with the square
cv2.imshow('Image with Square', image_with_square_bgr)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>However when I try to run it I get</p>
<pre><code> python 3_rectangle6.py
Traceback (most recent call last):
File "/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/PyCuda/practice/3_rectangle_pycuda3.py", line 52, in <module>
draw_square(image_gpu, image_rgb.shape[1], image_rgb.shape[0],x, y, width, height, color)
File "/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/PyCuda/practice/3_rectangle_pycuda3.py", line 29, in draw_square
draw_square_kernel = mod.get_function("draw_square_kernel")
File "/miniconda3/envs/py39Cuda2/lib/python3.9/site-packages/pycuda/compiler.py", line 332, in get_function
return self.module.get_function(name)
pycuda._driver.LogicError: cuModuleGetFunction failed: named symbol not found
</code></pre>
<p>As you can see this is my sixth attempt, and still draw_square_kernel is not recognized...</p>
| <python><cuda><pycuda> | 2023-07-19 09:30:22 | 1 | 10,576 | KansaiRobot |
76,719,816 | 10,083,382 | Populate Pandas DataFrame using Backfill | <p>I make predictions on six dates using increments: <code>increments = [0, 3, 6, 10, 15, 21]</code>. So for any start date, a prediction is made on day + 0, day + 3, and so on. Suppose that the predicted df can be generated using the code below.</p>
<pre><code>import pytz
import pandas as pd
from datetime import datetime, timedelta

timezone = pytz.timezone('US/Pacific')
start_date_pst = datetime.now(timezone).date() + timedelta(days=4)
increments = [0, 3, 6, 10, 15, 21]
start_dates_list = []
for i in increments:
next_date = start_date_pst + pd.Timedelta(days=i)
start_dates_list.append(next_date)
testing_df = pd.DataFrame(start_dates_list*2, columns = ['START_DATE'])
testing_df['START_DATE'] = pd.to_datetime(testing_df['START_DATE'])
testing_df['Code'] = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']
testing_df['Preds'] = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200]
</code></pre>
<p>So for the dates that are not prediction days (for example, the days between day + 0 and day + 3), I want to use the data from day + 3, replicate it for the two days in between, and change the dates. I can achieve the desired output using the code below.</p>
<pre><code>for i in range(0, len(increments)-1):
start_date = start_date_pst + pd.Timedelta(days=increments[i]+1)
end_date = start_date_pst + pd.Timedelta(days=increments[i+1]-1)
data_date = start_date_pst + pd.Timedelta(days=increments[i+1])
date_range = pd.date_range(start_date, end_date).tolist()
for date in date_range:
temp_df = testing_df[testing_df.START_DATE == str(data_date)]
temp_df['START_DATE'] = date
testing_df = pd.concat([testing_df, temp_df])
print(testing_df.START_DATE.unique())
testing_df = testing_df.reset_index(drop=True)
testing_df = testing_df.sort_values(by='START_DATE')
</code></pre>
<p>Although I get the desired results from the above code, I want to vectorise it as I will need to run it on a larger data set. What could be a more efficient approach to reach the desired output?</p>
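A hedged vectorized sketch using <code>pd.merge_asof</code> with <code>direction='forward'</code> (two assumptions for illustration: a fixed start date instead of <code>pytz</code>'s "now" for reproducibility, and a per-date group key <code>grp</code> so that both prediction sets survive the merge):

```python
import pandas as pd

base = pd.Timestamp('2024-01-01')
increments = [0, 3, 6]
dates = [base + pd.Timedelta(days=i) for i in increments]
t = pd.DataFrame({'START_DATE': dates * 2,
                  'Code': list('ABCDEF'),
                  'Preds': [100, 200, 300, 400, 500, 600]})

t = t.sort_values('START_DATE', kind='stable').reset_index(drop=True)
t['grp'] = t.groupby('START_DATE').cumcount()      # 0/1: which prediction set

# one row per (day, set); merge each onto the next available prediction day
days = pd.date_range(t['START_DATE'].min(), t['START_DATE'].max())
grid = pd.MultiIndex.from_product([days, t['grp'].unique()],
                                  names=['START_DATE', 'grp']).to_frame(index=False)
filled = pd.merge_asof(grid.sort_values('START_DATE'), t,
                       on='START_DATE', by='grp', direction='forward')
```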
| <python><pandas><vectorization> | 2023-07-19 09:28:30 | 1 | 394 | Lopez |
76,719,531 | 726,730 | Compatibility problems between PyQt5 and qtpy | <p><strong>File: test_script.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtWidgets, QtCore, QtGui
</code></pre>
<pre class="lang-sh prettyprint-override"><code>pip install qtpy
</code></pre>
<pre class="lang-sh prettyprint-override"><code>pyinstaller --onedir test_script.py
</code></pre>
<pre class="lang-sh prettyprint-override"><code>cd dist/test_script
test_script.exe
</code></pre>
<p>Error:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "test_script.py", line 1, in <module>
ImportError: DLL load failed: The specified module could not be found.
[7508] Failed to execute script 'test_script' due to unhandled exception!
</code></pre>
<p>The problem is related to qtpy because if i uninstall qtpy with: <code>pip uninstall qtpy</code> and then i run <code>pyinstaller test_script.py</code> then the exe run with no error.</p>
<p>How can I solve this?</p>
<p><strong>Edit:</strong> the qtpy module is a requirement of the qtwidgets module (this is why it was needed). Editing the qtwidgets module (replacing qtpy with PyQt5, Signal with pyqtSignal, and Slot with pyqtSlot) and renaming the qtwidgets module to extra_qtwidgets solved the error in my case.</p>
| <python><pyqt5><pyinstaller><qtpy> | 2023-07-19 08:54:23 | 0 | 2,427 | Chris P |
76,719,448 | 2,490,497 | unevenly spaced time series: moving sum in python pandas | <p>I am trying to figure out how to apply a rolling function to unevenly spaced time series data.</p>
<p>The column defining the spacing between the values to be summed, called <code>id1</code>, is an integer and could represent arbitrary time units (nanoseconds, hours, years, etc.).</p>
<p>In the simplified example of my problem below, I am trying to apply <code>sum</code> over a window of length 3 of the index. <code>idf.rolling(3).sum()</code> seems to be ignoring the index in this case. How can I make it respect the integer index?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
'id1': [0,1,2,5,6,8],
'v1': [1,2,3,4,5,6]
})
idf = df.set_index('id1', drop=True)
idf.rolling(3).sum()
# v1
#id1
#0 NaN
#1 NaN
#2 6.0
#5 9.0
#6 12.0
#8 15.0
</code></pre>
<p>Expected results are:</p>
<pre><code># v1
#id1
#0 NaN
#1 NaN
#2 6.0
#5 4.0
#6 9.0
#8 11.0
</code></pre>
<p>I am looking for the most efficient solution in terms of time and memory. Therefore expanding original data by filling in missing index entries is not an option because index may be very sparse and materializing empty NaN entries will lead to an out of memory error.</p>
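One hedged trick, since pandas' value-based rolling windows are implemented for datetime-like indexes: map the integer index to a <code>TimedeltaIndex</code>, roll with a <code>'3s'</code> window (which covers the current point plus anything within the previous 2 units), and mask the leading positions where such a window cannot yet be complete, to mimic the NaN lead-in:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id1': [0, 1, 2, 5, 6, 8],
                   'v1': [1, 2, 3, 4, 5, 6]})
idf = df.set_index('id1')

tdf = idf.set_axis(pd.to_timedelta(idf.index, unit='s'))
out = tdf.rolling('3s').sum()                 # window = (t-3s, t]
# time-based windows use min_periods=1; blank positions whose window would
# extend before the start of the data
out.loc[out.index < out.index.min() + pd.Timedelta('2s')] = np.nan
out.index = idf.index                         # back to the integer index
```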
| <python><pandas><time-series><rolling-computation><moving-average> | 2023-07-19 08:43:59 | 2 | 16,756 | jangorecki |
76,719,401 | 5,960,363 | Does Pydantic work with Pyre? ("Uninitialized attribute" when type checking BaseModel) | <h2>Problem:</h2>
<p>I'm using Pydantic to structure my code, and Pyre (aka pyre-check) to type check. In the following sample, the code works and mypy doesn't complain, but Pyre gives an error:</p>
<blockquote>
<p>Uninitialized attribute [13]: Attribute <code>first</code> is declared in class <code>Name</code> to have type <code>str</code> but is never initialized.</p>
</blockquote>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from pydantic import BaseModel
class Name(BaseModel):
first: str
n = Name(first="test")
print(n)
</code></pre>
<h2>Is Pyre incompatible with Pydantic, or is this user error on my part?</h2>
<p>I understand Pyre wants to see the attribute initialized (for example <code>first: str = "Bob"</code>) but setting such an equality would indicate to Pydantic that the field is optional (it's not).</p>
<p><strong>Other solutions I've considered and discarded:</strong></p>
<ul>
<li>Pyre doesn't complain if I make <code>Name</code> a dataclass (but then I lose Pydantic features)</li>
<li>Adding Field to each attribute eg <code> first: str = Field(..., alias="first_name")</code> - this seems hacky and labor intensive (there are many such BaseModel classes in my code)</li>
</ul>
<p>Thanks for your help!</p>
| <python><mypy><python-typing><pydantic><pyre-check> | 2023-07-19 08:37:57 | 0 | 852 | FlightPlan |
76,719,392 | 4,489,998 | Type hinting overriden __new__ in a subclass of a generic class | <p>Consider the following:</p>
<pre><code>from typing import TypeVar, Generic
T = TypeVar("T")
class A():
pass
class B(A, Generic[T]):
def __new__(cls, x: T):
return super().__new__(cls)
class C(B[T]):
def __new__(cls, x: T):
return super().__new__(cls, x)
</code></pre>
<p>On the last line, mypy raises the following errors:</p>
<pre><code># Argument 1 to "__new__" of "B" has incompatible type "type[C[T]]"; expected "type[B[T]]" [arg-type]
# Argument 2 to "__new__" of "B" has incompatible type "T"; expected "T" [arg-type]
</code></pre>
<ul>
<li>Overriding <code>__new__</code> this way seems normal, apart from the generics. I know <code>type[C]</code> is compatible with <code>type[B]</code>, so why is this not the case with <code>type[C[T]]</code> and <code>type[B[T]]</code> ? I don't think this has anything to do with variance (though I did try).</li>
<li>I don't understand the second error at all</li>
</ul>
<p>I haven't found any similar SO questions, and read through <a href="https://mypy.readthedocs.io/en/stable/generics.html" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/generics.html</a> to no avail.</p>
<p>EDIT: it could also be linked to this: <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides</a>, as I'm overriding <code>__new__</code> with an argument type that is probably more specific (even though I didn't manual typehint it), but it's not the same error and seems necessary for <code>__new__</code> to work correctly.</p>
| <python><python-typing><mypy> | 2023-07-19 08:37:17 | 0 | 2,185 | TrakJohnson |
76,719,299 | 12,131,472 | Do we need to write the dataframe name several times when referring to different columns of the same dataframe? | <p>This should be a quite basic question, but I haven't found anything; maybe it's simply not feasible.</p>
<p>I have a Python pandas dataframe df with columns A, B, C. I want to create a new column D like this: <code>df['D'] = df['A']+df['B']*df['C']</code>. My question is: do I have to write "df" four times? Since the columns are in the same dataframe, is there an easy way to write "df" only once?</p>
<p>something like:</p>
<pre><code>df['D'] = ['A']+['B']*['C']
</code></pre>
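A hedged sketch using <code>DataFrame.eval</code>, which lets an expression reference columns by bare name so that <code>df</code> appears only once (column names taken from the question):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]})

# eval evaluates the expression against the frame's own columns,
# so "df" is written only once.
df["D"] = df.eval("A + B * C")
print(df["D"].tolist())  # [16, 26]
```

There is also an in-place form, <code>df.eval("D = A + B * C", inplace=True)</code>, which avoids the assignment entirely.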
| <python><pandas><dataframe> | 2023-07-19 08:25:14 | 1 | 447 | neutralname |
76,719,175 | 16,688,854 | np.sqrt with integers and where condition returns wrong results | <p>I am getting weird results from numpy sqrt method when applying it on an array of integers with a <code>where</code> condition. See below.</p>
<p>With integers:</p>
<pre><code>a = np.array([1, 4, 9])
np.sqrt(a, where=(a>5))
Out[3]: array([0. , 0.5, 3. ])
</code></pre>
<p>With floats:</p>
<pre><code>a = np.array([1., 4., 9.])
np.sqrt(a, where=(a>5))
Out[25]: array([1., 4., 3.])
</code></pre>
<p>Is it a bug or a misunderstanding of how this method works?</p>
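With <code>where=</code> and no <code>out=</code> argument, the positions where the condition is False are left uninitialized, which explains the arbitrary-looking values. Supplying a pre-initialized <code>out</code> buffer makes the unselected positions well defined; a sketch with the array from the question:

```python
import numpy as np

a = np.array([1, 4, 9])
# Positions where the condition is False keep the value already in `out`
# (here 0.0), instead of whatever garbage was in an uninitialized buffer.
result = np.sqrt(a, where=a > 5, out=np.zeros_like(a, dtype=float))
print(result)  # [0. 0. 3.]
```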
| <python><numpy><sqrt> | 2023-07-19 08:08:21 | 2 | 337 | Antoine101 |
76,719,011 | 12,579,308 | How to Disable Ray in Python with a Mock module for Debugging and Profiling | <p>I am working on a Python project that extensively uses asyncio and ray. In my code, I have several methods prefixed with async for asynchronous execution, and I also utilize the ray.remote decorator to parallelize certain classes. However, I now need to disable Ray temporarily to debug and profile my code effectively.</p>
<p>To achieve this, I plan to create a mock Ray class that can serve as a replacement for the original ray module. By using this mock class, I aim to seamlessly switch between the actual Ray implementation and the mock version without modifying the rest of the code. The idea is to use the same method implementations from the original classes, ensuring code reusability and easier maintenance.</p>
<p>Here's an example of the class I want to mock:</p>
<pre class="lang-py prettyprint-override"><code>@ray.remote
class A:
def method1():
return 3
</code></pre>
<p>And I currently use this class as follows:</p>
<pre class="lang-py prettyprint-override"><code>obj = A.remote()
result = await obj.method1.remote()
</code></pre>
<p>To proceed with my plan, I would like to create a RayMock class that can be utilized as a replacement for the original ray module. Here's how I envision using it:</p>
<pre class="lang-py prettyprint-override"><code>from ray_mock import RayMock
ray = RayMock()
</code></pre>
<p>By integrating this RayMock class into my codebase, I expect to retain the original functionality while being able to disable Ray for debugging and profiling purposes.</p>
<p>I would greatly appreciate any insights or code examples on how to implement the RayMock class and utilize it as described above. Thank you in advance!</p>
| <python><ray> | 2023-07-19 07:47:14 | 0 | 341 | Oguz Hanoglu |
76,718,907 | 17,896,651 | Add a "not" filter on Django admin | <p>So I have this model:</p>
<pre><code>class Follower(models.Model):
language = models.ManyToManyField(
"instagram_data.Language", verbose_name=_("language_code_name"), blank=True)
class Language(models.Model):
code_name = models.CharField(max_length=6,
null=True,
blank=True,
unique=True,
db_index=True,
verbose_name='language_code_name')
def __str__(self):
return self.code_name
</code></pre>
<p>So in my admin panel I use:
('language__code_name', MultiSelectDropdownFilter),
<a href="https://i.sstatic.net/1heOH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1heOH.png" alt="enter image description here" /></a></p>
<p>which is nice.</p>
<p>But I also want to have a "filter OUT" checkbox for languages.</p>
<p>I want to have a "remove from results" if language in ['ar', 'bg'...]</p>
<p>What is the proper way to do this without slowing everything down?</p>
| <python><django><admin> | 2023-07-19 07:32:07 | 1 | 356 | Si si |
76,718,870 | 5,835,338 | How to validate SQL query with unbalanced quotes | <p>We want to parse MySQL queries in a Python project and detect invalid queries based on unbalanced quotes, incorrect quote placement, or column filter values with bad quoting.</p>
<p>Example:</p>
<ol>
<li>"select * from city where name='x and type=y;" -> Wrong query</li>
<li>"select * from city where name='x and type=y';" -> Wrong query</li>
</ol>
<p>We tried using sqlparse and sqlglot and sqlvalidator</p>
<pre><code>import sqlglot
from sqlglot import exp, parse_one
try:
sqlglot.transpile("select * from city where name='x and type=y';")
except sqlglot.errors.ParseError as e:
print(e.errors)
</code></pre>
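None of those parsers treat a lone quote as fatal on its own, so a small pre-check can help. A hedged sketch that only detects unbalanced single quotes (it catches example 1; example 2 is syntactically balanced, so judging it wrong still requires schema or semantic knowledge that a quote check alone cannot provide):

```python
def has_balanced_single_quotes(sql: str) -> bool:
    """Walk the statement and track whether we are inside a '...' literal."""
    in_string = False
    i = 0
    while i < len(sql):
        if sql[i] == "'":
            if in_string and i + 1 < len(sql) and sql[i + 1] == "'":
                i += 2  # '' is an escaped quote inside a literal
                continue
            in_string = not in_string
        i += 1
    return not in_string

# Example 1 from the question: one dangling quote -> unbalanced
print(has_balanced_single_quotes("select * from city where name='x and type=y;"))
```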
| <python><mysql><validation><sql-parser><sqlglot> | 2023-07-19 07:25:59 | 0 | 1,244 | Vaibhav |
76,718,805 | 1,049,569 | Insert a geometry in a PostGres DB from a GeoJSON text with Python | <p>I've got a GeoJSON file and I need to import the geometries into a Postgres Db with Python.
I tried this:</p>
<pre><code>...
cursor = conn.cursor()
data = json.load(json_file)
for row in data['features']:
geometry = row['geometry']
params = (geometry)
cursor.callproc('ST_GeomFromGeoJSON', params)
wkb_geometry = cursor.fetchone()[0]
sql = "INSERT INTO tbl(wkb_geometry) VALUES (%s)"
val = (wkb_geometry)
cursor.execute(sql, val)
conn.commit()
...
</code></pre>
<p>I get this error:</p>
<pre><code>function st_geomfromgeojson(typt => unknown, coordinates => numeric[]) does not exist
LINE 1: SELECT * FROM ST_GeomFromGeoJSON("type":='Polygon',"coordina...
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
</code></pre>
<p>Any idea? I don't know how to add explicit type casts as suggested.</p>
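One hedged sketch (table and column names taken from the question; the sample geometry is made up): serialize the geometry dict to text with <code>json.dumps</code> and let PostGIS do the conversion inside the INSERT itself, instead of calling the function through <code>callproc</code> with a dict:

```python
import json

# Hypothetical geometry as it would come out of a GeoJSON feature
geometry = {"type": "Polygon",
            "coordinates": [[[0, 0], [0, 1], [1, 1], [0, 0]]]}

# ST_GeomFromGeoJSON expects a single *text* argument, so serialize the dict
# and call the function inside the SQL statement.
sql = "INSERT INTO tbl(wkb_geometry) VALUES (ST_GeomFromGeoJSON(%s))"
params = (json.dumps(geometry),)  # note the comma: a 1-tuple, not bare parens

# cursor.execute(sql, params)  # requires a live psycopg2 connection
print(params[0])
```

The original <code>val = (wkb_geometry)</code> has the same 1-tuple pitfall: without the trailing comma it is just a parenthesized value, not a tuple.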
| <python><postgresql> | 2023-07-19 07:15:21 | 1 | 3,828 | kiks73 |
76,718,676 | 13,605,609 | Django Custom Complex Field with multiple properties | <p>Example Json</p>
<pre><code> {
"name": "asdfg",
"amount": {
"amount": 5000,
"forexAmount": 10,
"rateOfExchange": 500,
"currency": "$",
"isDebit": false
}
}
</code></pre>
<p>I want to create a custom field which accepts an amount
and lets me perform calculations like ModelName.objects.filter(amount.isDebit=True).aggregate(Sum('amount.amount')).</p>
<p>I am using the following code.
The issue with this field is that it creates duplicate columns (amount_amount).</p>
<p>Django Model</p>
<pre><code>from django.db import models
class Ledger(models.Model):
name = models.CharField(max_length=100)
amount = AmountField()
</code></pre>
<p>Custom Django Model Field</p>
<pre><code>from typing import Any, Type
from django.db import models
from django.db.models import Model
from django.db.models.query_utils import DeferredAttribute
from django.db.models.utils import AltersData
from django.utils.translation import gettext_lazy as _
class Amount():
amount:float
forexAmount:float|None
rateOfExchange:float|None
currency:str|None
isDebit:bool
def __init__(self,amount,isDebit,currency=None,forexAmount=None,rateOfExchange=None) -> None:
self.amount = amount
self.forexAmount = forexAmount
self.rateOfExchange = rateOfExchange
self.isDebit = isDebit
self.currency = currency
class FieldAmount(Amount,AltersData):
def __init__(self, instance, field, name):
super().__init__(None, name)
self.instance = instance
self.field = field
self.storage = field.storage
self._committed = True
class AmountDescriptor(DeferredAttribute):
def __get__(self,instance,cls=None):
if instance is None:
return self
data = instance.__dict__
amount = super().__get__(instance, cls)
# if(isinstance(amount,Amount)) or amount is None:
# attr = self.field.attr_class(instance, self.field, amount)
# data[self.field.attname] = attr
# pass
# else :
# val = Amount(5,False)
# data[self.field.attname] = val
return data[self.field.attname]
def __set__(self, instance, value):
if(isinstance(value,Amount)):
data = instance.__dict__
data[self.field.attname] = value
data[self.field.attname+"_amount"] = value.amount
data[self.field.attname+"_forexAmount"] = value.forexAmount
class AmountField(models.Field):
attr_class = FieldAmount
descriptor_class = AmountDescriptor
description = _("Tally Amount")
def __init__(self, *args: Any, **kwargs: Any) -> None:
super().__init__(*args, **kwargs)
def deconstruct(self) -> Any:
return super().deconstruct()
def from_db_value(self,value):
if value is None:
return value
return value
def to_python(self, value: Any) -> Any:
return super().to_python(value)
def contribute_to_class(self, cls: type[Model], name: str, private_only: bool = ...) -> None:
if not hasattr(self, "_amount_field"):
amount_field = models.DecimalField(decimal_places=5,max_digits=15)
amount_field.contribute_to_class(cls,"amount_amount")
self._amount_field = amount_field
super().contribute_to_class(cls, name)
setattr(cls, name, self.descriptor_class(self))
</code></pre>
<p>Error:
<code>django.db.utils.OperationalError: duplicate column name: amount_amount</code></p>
<p>References :</p>
<ul>
<li><a href="https://docs.djangoproject.com/en/4.2/howto/custom-model-fields/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.2/howto/custom-model-fields/</a></li>
<li><a href="https://stackoverflow.com/questions/55229112/how-to-store-a-complex-number-in-django-model">How to store a complex number in Django model</a></li>
</ul>
| <python><django> | 2023-07-19 06:53:57 | 1 | 1,045 | sai vineeth |
76,718,550 | 5,368,083 | PyDantic setter callbacks | <p>What I'm trying to achieve is a functionality where I can add and remove callbacks from a PyDantic (>2.0) model.</p>
<p>With the standard Python dictionary, it would look something like this</p>
<pre class="lang-py prettyprint-override"><code>class DictionaryWithCallbacks(dict):
"""
A standard Python dictionary with support for callbacks that get executed when a key is set
The callable accepts three arguments:
- The dictionary itself
- The key that is about to be set
- The value that is about to be set
This means that the callback has the current state of the dictionary, plus the new key and value that are about to
be set, allowing delta calculations between the state and new value. The value is set only after the callback is
executed.
"""
def __init__(self, *args, **kwargs):
self._callbacks = defaultdict(list)
super().__init__(*args, **kwargs)
def __setitem__(self, key, value):
"""
Set item and call callbacks
:param key: Key to set
:param value: Value to set
:return: None
"""
for callback in self._callbacks[key]:
callback(self, key, value)
super().__setitem__(key, value)
def register_set_callback(self, key, callback: t.Callable[['DictionaryWithCallbacks', str, object], None]):
"""
Register callback for key
:param key: Key
:param callback: Callback
:return: None
"""
self._callbacks[key].append(callback)
def unregister_set_callback(self, key, callback: t.Callable[['DictionaryWithCallbacks', str, object], None]):
"""
Unregister callback for key
:param key: Key
:param callback: Callback
:return: None
"""
with suppress(ValueError):
self._callbacks[key].remove(callback)
</code></pre>
<p>How can I achieve something similar in PyDantic?</p>
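One approach, sketched here without Pydantic so the hook itself is clear: fire the callbacks from <code>__setattr__</code>, which is the same interception point you could override on a Pydantic v2 model (with <code>model_config = ConfigDict(validate_assignment=True)</code> so assignments are still validated). This is a plain-Python illustration of the pattern, not Pydantic's own API:

```python
from collections import defaultdict

class ModelWithCallbacks:
    """Per-attribute set-callbacks, fired before the new value is stored,
    so a callback still sees the old state (mirroring the dict version)."""

    def __init__(self, **fields):
        object.__setattr__(self, "_callbacks", defaultdict(list))
        for name, value in fields.items():
            object.__setattr__(self, name, value)

    def __setattr__(self, name, value):
        for callback in self._callbacks[name]:
            callback(self, name, value)  # old value still visible on self
        object.__setattr__(self, name, value)

    def register_set_callback(self, name, callback):
        self._callbacks[name].append(callback)

    def unregister_set_callback(self, name, callback):
        if callback in self._callbacks[name]:
            self._callbacks[name].remove(callback)
```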
| <python><pydantic> | 2023-07-19 06:33:58 | 0 | 12,767 | bluesummers |
76,718,476 | 11,893,427 | How to get the id with the maximum number of values in a CSV in Python | <p>I have a CSV with two columns, "id" and "result". An "id" can appear in the CSV repeatedly with the same or different values. I want to group the CSV by "id" and find the id with the longest list of "result" values.</p>
<p><strong>Input CSV</strong></p>
<pre><code>id, result
Test10001, 400
Test10001, 404
Test10001, 200
Test10002, 404
Test10002, 404
Test10003, 400
</code></pre>
<p>I thought of using dataframes and have done upto below so far.</p>
<pre><code>grouped_data = data.groupby('id')['result'].apply(list)
</code></pre>
<p><strong>output - grouped csv</strong></p>
<pre><code>id
Test10001 [400, 404, 200]
Test10002 [404, 404]
Test10003 [400]
</code></pre>
<p>Now I want to get which "id" has the longest list. I am having trouble getting the length of each, as the type of <strong>grouped_data</strong> is <class 'pandas.core.series.Series'>.</p>
<p>Please assist me on this. Thank you in advance!</p>
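A hedged sketch (sample data from the question): <code>grouped_data</code> is a Series of lists, so applying <code>len</code> gives a Series of lengths whose <code>idxmax()</code> is the id you want:

```python
import pandas as pd

data = pd.DataFrame({
    "id": ["Test10001", "Test10001", "Test10001",
           "Test10002", "Test10002", "Test10003"],
    "result": [400, 404, 200, 404, 404, 400],
})

grouped_data = data.groupby("id")["result"].apply(list)
lengths = grouped_data.apply(len)   # Series of list lengths, indexed by id
longest_id = lengths.idxmax()
print(longest_id)  # Test10001
```

If only the id is needed, <code>data.groupby("id").size().idxmax()</code> gets there without materializing the lists at all.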
| <python><pandas><dataframe><csv> | 2023-07-19 06:20:54 | 3 | 429 | Indi |
76,718,403 | 20,508,530 | Python invalid syntax in wsgi.py while hosting Django app | <p>I am using <code>Django 4.2.2</code> and <code>Python 3.10</code>, and I am getting syntax errors in <code>wsgi.py</code> while hosting the server on AWS.</p>
<p>This is what I get in error.log:</p>
<pre><code>
[wsgi:error] mod_wsgi (pid=2084): Failed to exec Python script file '/var/www/project/wshp/wshp/wsgi.py'.
[wsgi:error] mod_wsgi (pid=2084): Exception occurred processing WSGI script '/var/www/project/wshp/wshp/wsgi.py'.
[wsgi:error] Traceback (most recent call last):
[wsgi:error] File "/var/www/project/wshp/wshp/wsgi.py", line 13, in <module>
[wsgi:error] from django.core.wsgi import get_wsgi_application
[wsgi:error] File "/var/www/project/wshp/Env/lib/python3.10/site-packages/django/__init__.py", line 1, in <module>
[wsgi:error] from django.utils.version import get_version
[wsgi:error] File "/var/www/project/wshp/Env/lib/python3.10/site-packages/django/utils/version.py", line 7, in <module>
[wsgi:error] from django.utils.regex_helper import _lazy_re_compile
[wsgi:error] File "/var/www/project/wshp/Env/lib/python3.10/site-packages/django/utils/regex_helper.py", line 10, in <module>
[wsgi:error] from django.utils.functional import SimpleLazyObject
[wsgi:error] File "/var/www/project/wshp/Env/lib/python3.10/site-packages/django/utils/functional.py", line 265
[wsgi:error] if (_wrapped := self._wrapped) is empty:
[wsgi:error] ^
[wsgi:error] SyntaxError: invalid syntax
</code></pre>
<p>If I run migrations or <code>runserver</code> there is no issue.</p>
<p>I cannot see what I have to upgrade and don't understand why this <code>get_wsgi_application</code> import fails on the walrus operator.
Please help me understand what I am doing wrong.</p>
| <python><django><python-3.10> | 2023-07-19 06:09:11 | 2 | 325 | Anonymous |
76,718,181 | 4,277,485 | Python pandas replace abc-1 to abc-01 in a column | <p>I am using python pandas and here is the dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">sl.no</th>
<th style="text-align: center;">data_col1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">321</td>
<td style="text-align: center;">abc-1</td>
</tr>
<tr>
<td style="text-align: left;">324</td>
<td style="text-align: center;">abc-2</td>
</tr>
<tr>
<td style="text-align: left;">326</td>
<td style="text-align: center;">abc-3</td>
</tr>
<tr>
<td style="text-align: left;">328</td>
<td style="text-align: center;">abc-4</td>
</tr>
<tr>
<td style="text-align: left;">330</td>
<td style="text-align: center;">abc-5</td>
</tr>
<tr>
<td style="text-align: left;">330</td>
<td style="text-align: center;">abc-12</td>
</tr>
<tr>
<td style="text-align: left;">331</td>
<td style="text-align: center;">xyz-1</td>
</tr>
</tbody>
</table>
</div>
<p>I want to replace abc- followed by a single digit with abc-01, abc-02, abc-03; data that does not start with abc should remain the same.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">sl.no</th>
<th style="text-align: center;">data_col1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">321</td>
<td style="text-align: center;">abc-01</td>
</tr>
<tr>
<td style="text-align: left;">324</td>
<td style="text-align: center;">abc-02</td>
</tr>
<tr>
<td style="text-align: left;">326</td>
<td style="text-align: center;">abc-03</td>
</tr>
<tr>
<td style="text-align: left;">328</td>
<td style="text-align: center;">abc-04</td>
</tr>
<tr>
<td style="text-align: left;">330</td>
<td style="text-align: center;">abc-05</td>
</tr>
<tr>
<td style="text-align: left;">330</td>
<td style="text-align: center;">abc-12</td>
</tr>
<tr>
<td style="text-align: left;">331</td>
<td style="text-align: center;">xyz-1</td>
</tr>
</tbody>
</table>
</div>
<p>I am new to Python and need some input, using df.replace() or any other short method.</p>
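A hedged sketch with <code>Series.str.replace</code> and a callable replacement (sample values from the question): <code>zfill(2)</code> pads single digits and leaves two-digit numbers alone, and the <code>^abc-</code> anchor skips everything else:

```python
import pandas as pd

df = pd.DataFrame({"data_col1": ["abc-1", "abc-2", "abc-12", "xyz-1"]})

# Zero-pad the digits only for values of the form "abc-<digits>".
df["data_col1"] = df["data_col1"].str.replace(
    r"^abc-(\d+)$",
    lambda m: "abc-" + m.group(1).zfill(2),
    regex=True,
)
print(df["data_col1"].tolist())  # ['abc-01', 'abc-02', 'abc-12', 'xyz-1']
```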
| <python><pandas><string><replace><regexp-replace> | 2023-07-19 05:18:46 | 1 | 438 | Kavya shree |
76,717,974 | 8,726,488 | python regular expression - split by dict value | <p>This is my data :</p>
<pre><code>data= 'JUN 2023 02 20 INFO : data1 = data2 data3 = data4 {"app":[{"key":"app1","value":"100"},{"key":"app2","value":"200"},{"key":"app3","value":"300"}]}'
</code></pre>
<p>Based on the pattern, I wrote this: <code>print(re.findall(r'(?:\w+ :)(\w+)\s*=\s*(.*?)(?=\s*\w+\s*=|$)', text))</code>. After running this code, I get an empty list. I tried multiple patterns without getting the expected result.</p>
<p>My expected result is</p>
<pre><code>[ 'data1' = 'data2',
'data3' = 'data4'
'app1' = '100',
'app2' = '200',
'app3' = '300',
]
</code></pre>
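A single regex struggles here because the line mixes plain <code>key = value</code> pairs with a JSON payload. A hedged two-step sketch that produces the pairs listed above: split off the JSON at the first brace, parse the plain pairs with a regex, and parse the payload with <code>json.loads</code>:

```python
import json
import re

data = 'JUN 2023 02 20 INFO : data1 = data2 data3 = data4 {"app":[{"key":"app1","value":"100"},{"key":"app2","value":"200"},{"key":"app3","value":"300"}]}'

# Everything before the first "{" is plain text; the rest is JSON.
json_start = data.index("{")
text_part, json_part = data[:json_start], data[json_start:]

pairs = dict(re.findall(r"(\w+)\s*=\s*(\w+)", text_part))
for item in json.loads(json_part)["app"]:
    pairs[item["key"]] = item["value"]

print(pairs)
```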
| <python><regex><python-re> | 2023-07-19 04:23:55 | 2 | 3,058 | Learn Hadoop |
76,717,876 | 2,315,911 | How to show a web content (graph) using `Julia` in Jupyter Notebook | <p>Consider the following <code>Python</code> code to show the content of a URL in Jupyter Notebook (it is a graph from <a href="https://fred.stlouisfed.org" rel="nofollow noreferrer">FRED</a>)</p>
<pre><code>from IPython.display import IFrame
url = 'https://fred.stlouisfed.org/graph/graph-landing.php?g=BOLN&width=420&height=320'
IFrame(src=url, width=450, height=330)
</code></pre>
<p>I want to achieve the same result using <code>Julia</code> in Jupyter Notebook. I tried <a href="https://github.com/jbn/IJuliaPortrayals.jl" rel="nofollow noreferrer">IJuliaPortrayals</a>, but I guess it is not compatible with the latest <code>Julia</code>. Can anyone suggest a solution?</p>
| <python><julia> | 2023-07-19 03:51:43 | 1 | 1,300 | Spring |
76,717,831 | 19,106,406 | How to bundle Python C extension libraries in an iOS app for App Store submission? | <p>I'm developing an iOS application using Kivy and Python, which includes the <code>pymongo</code> and <code>bson</code> libraries. These libraries have some C extensions, which get compiled into <code>.so</code> files (<code>bson/_cbson.cpython-310-darwin.so</code> and <code>pymongo/_cmessage.cpython-310-darwin.so</code>).</p>
<p>When I try to submit my app to the App Store, I receive an error stating:</p>
<pre><code>Asset validation failed
Invalid bundle structure. The binary file is not permitted. Your app cannot contain standalone executables or libraries, other than a valid CFBundleExecutable of supported bundles.
</code></pre>
<p>I understand that this is due to the App Store not allowing standalone dynamic libraries. However, I'm not sure how to resolve this issue.</p>
<p>Here are the options chatgpt offered:</p>
<ol>
<li><strong>Static linking</strong>: Linking these libraries statically into my main app executable.</li>
<li><strong>Bundling the dynamic libraries</strong>: Including these dynamic libraries in my main app bundle and adjusting the code to refer to the new location.</li>
<li><strong>Switching to pure Python libraries</strong>: Although this may not provide the functionality I need.</li>
</ol>
<p>Does anyone have experience with this or suggestions on the best way to proceed? Are there other options I should consider?</p>
<p>Any help would be greatly appreciated!</p>
| <python><ios><mongodb><kivy><pymongo> | 2023-07-19 03:36:46 | 0 | 301 | cryptotheo |
76,717,805 | 3,875,388 | How can I show a Python package's installed packages | <p>If I install some Python packages, like <code>opencv-python</code>, it will install the <code>cv2</code> package.</p>
<p>But before I look at OpenCV's documentation, how can I inspect the <code>opencv-python</code> package and find out what it installed?</p>
<p>Like <code>pip info opencv-python</code>, but it will not print the installed packages.</p>
<p>Update:</p>
<p>I found that the pip install location <code>/usr/local/lib/python3.11/site-packages/opencv_python-4.7.0.72.dist-info/top_level.txt</code> contains the top-level packages (i.e. the installed packages). I could write a script to parse this file, but does Python have a built-in utility to print this info?</p>
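Python's stdlib can read that same metadata without hand-parsing the file: <code>importlib.metadata</code> exposes each distribution's <code>top_level.txt</code>, with a fallback to the recorded file list. A sketch, demonstrated on <code>pip</code> since <code>opencv-python</code> may not be installed here:

```python
from importlib.metadata import distribution

def top_level_packages(dist_name):
    """Top-level import names a distribution installs (e.g. 'cv2' for opencv-python)."""
    dist = distribution(dist_name)
    text = dist.read_text("top_level.txt")
    if text:
        return sorted(text.split())
    # Some wheels ship no top_level.txt; infer from the installed files instead.
    return sorted({f.parts[0] for f in (dist.files or [])
                   if len(f.parts) > 1 and f.parts[0].isidentifier()})

print(top_level_packages("pip"))
```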
| <python> | 2023-07-19 03:31:38 | 3 | 603 | user3875388 |
76,717,774 | 4,451,521 | Why I cannot print inside a kernel in Pycuda? | <p>I have the following start of a code</p>
<pre><code>import numpy as np
from pycuda import driver, gpuarray
from pycuda.compiler import SourceModule
import pycuda.autoinit
MATRIX_SIZE = 3
matrix_mul_kernel = """
__global__ void Matrix_Mul_Kernel(float *d_a, float *d_b, float *d_c)
{
int tx = threadIdx.x;
int ty = threadIdx.y;
float value = 0;
int s=5;
printf("X %d Y \\n",s);
for (int i = 0; i < %(MATRIX_SIZE)s; ++i) {
float d_a_element = d_a[ty * %(MATRIX_SIZE)s + i];
float d_b_element = d_b[i * %(MATRIX_SIZE)s + tx];
value += d_a_element * d_b_element;
}
d_c[ty * %(MATRIX_SIZE)s + tx] = value;
} """
matrix_mul = matrix_mul_kernel % {'MATRIX_SIZE': MATRIX_SIZE}
mod = SourceModule(matrix_mul)
</code></pre>
<p>Regarding the printf inside the kernel: if I do <code>printf("hello");</code> it works fine, but when I try to print an integer (I was trying to print <code>tx</code> and <code>ty</code>, but any integer would do) an error appears:</p>
<pre><code>Traceback (most recent call last):
File "/media/cbe421fe-1303-4821-9392-a849bfdd00e2/MyStudy/PyCuda/9_matrix_mul.py", line 26, in <module>
matrix_mul = matrix_mul_kernel % {'MATRIX_SIZE': MATRIX_SIZE}
TypeError: %d format: a number is required, not dict
</code></pre>
<p>Why is this code failing?</p>
<p>Previously when no constant was used, I could print the thread x and y</p>
<p>EDIT:
Even stranger when I do this</p>
<pre><code>printf("X %s Y \\n",5);
</code></pre>
<p>It does not fail but prints this</p>
<pre><code>X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
X {'MATRIX_SIZE': 3} Y
</code></pre>
<p>So apparently, no matter the variable, it is always interpreted as the dictionary {'MATRIX_SIZE': 3}, hence the error. The question is why.</p>
<p>What is happening here?</p>
| <python><cuda><pycuda> | 2023-07-19 03:21:39 | 1 | 10,576 | KansaiRobot |
76,717,670 | 5,319,542 | ModuleNotFoundError: No module named 'svntools' | <p>I am new to Python, and when I run a Python script it gives me the following error.</p>
<pre><code> import svntools.svntools
ModuleNotFoundError: No module named 'svntools'
</code></pre>
<p>Can anybody help me to resolve this?</p>
<p>My script is like,</p>
<pre><code>import svntools.svntools
from svntools.svntools import *
import svntools.commontools
from svntools.commontools import *
</code></pre>
<p>When running</p>
<pre><code>D:\>pip install svntools
ERROR: Could not find a version that satisfies the requirement svntools (from versions: none)
ERROR: No matching distribution found for svntools
</code></pre>
| <python> | 2023-07-19 02:51:54 | 0 | 603 | chk.buddi |
76,717,563 | 3,833,632 | Why does my Python script using locks to pause on asynchronous keyboard input get stuck only on long pauses | <p>I have a Python program that has a run loop animating the screen with tkinter. I am trying to implement a feature where if a key is pressed the animation pauses until the key is released.</p>
<p>I am using pynput keyboard to asynchronously listen for key commands and threading locks to try and create this behavior.</p>
<p>My first attempt is not working well.</p>
<p>If you briefly press a key everything works fine. However if you hold that key down for a bit the program gets permanently stuck with no error.</p>
<pre><code>from pynput import keyboard
import threading
import time
pauseLock = threading.Lock()
def on_press(key):
try:
pauseLock.acquire()
print("Paused")
except AttributeError:
pass
def on_release(key):
pauseLock.release()
print("Unpaused")
if key == keyboard.Key.esc:
return False
# Collect events until released
listener = keyboard.Listener(
on_press=on_press,
on_release=on_release)
listener.start()
while True:
time.sleep(0.01) # in seconds
pauseLock.acquire()
print("Hello")
pauseLock.release()
</code></pre>
<p>I guess I don't understand why this is happening and how to get around it. It must be something specific to Pythons implementation of threads and locks because I feel like this would work well in other languages.</p>
| <python><concurrency><locking> | 2023-07-19 02:19:54 | 1 | 715 | CalebK |
76,717,550 | 4,115,378 | Neo4j query works in neo4j desktop but not python driver | <p>My code below returns an empty list for <code>result</code>. However when I paste the query in neo4j desktop and replace $keyword with <code>"power control"</code>, it runs and returns a series of keyword frequency by year. I have no issue with authenticating to the database as far as I know, no errors. Anything seems out of ordinary?</p>
<pre><code>from neo4j import GraphDatabase
class KeywordPublicationFrequency:
def __init__(self, uri, user, password):
self.driver = GraphDatabase.driver(uri, auth=(user, password))
def close(self):
self.driver.close()
def get_publication_frequency(self, keyword):
with self.driver.session() as session:
result = session.read_transaction(self._create_and_return_query, keyword)
print(result)
for record in result:
print("Year: ", record['Year'], "Publications: ", record['Publications'])
@staticmethod
def _create_and_return_query(tx, keyword):
query = """
MATCH (k:KEYWORD {name: $keyword})-[:LABEL_BY]-(p:PUBLICATION)
RETURN p.year as Year, count(p) as Publications
ORDER BY Year ASC
"""
result = tx.run(query, keyword=keyword)
return list(result)
if __name__ == "__main__":
kp_frequency = KeywordPublicationFrequency("bolt://localhost:7687", "my username", "my password")
kp_frequency.get_publication_frequency("power control")
kp_frequency.close()
</code></pre>
| <python><neo4j> | 2023-07-19 02:14:46 | 1 | 1,364 | A1122 |
76,717,356 | 5,340,833 | Finding unique values in two ArrayLists | <p>For example,
<code>A = [10010,10020,99948]</code>,
and each element of list A maps to at most two values, one value, or null.</p>
<p>Take B as the list of mapped values below:</p>
<blockquote>
<ul>
<li>10010 = []. --> expected output is 10010</li>
<li>10020 =[10020,10020] -> expected output is null</li>
<li>99948 = [99948,99948] -> expected output is null</li>
</ul>
</blockquote>
<p>so final output should be 10010.</p>
<p>My code works fine, but I need a better solution, below O(n).</p>
<pre><code>var list = new ArrayList()
for ( val in A){
var B = val.Valus.toList()
if ( not B.contain(val)){
list.add(val)
}
}
</code></pre>
<p>Here output is : 10010</p>
<p>If A has 2000+ values, performance degrades. Can we do better?</p>
| <python><java><guidewire><gosu> | 2023-07-19 01:12:20 | 2 | 2,292 | Py-Coder |
76,717,307 | 3,324,136 | Heroku crashes with error "bash: line 1: app/main:app: No such file or directory" | <p>I have a Django app that I am hosting on Heroku connected via Github. However, every time I try to start the app, I am receiving the error <code>app[uvicorn.1]: bash: line 1: app/main:app: No such file or directory</code>.</p>
<p>Looking up the problem, it seems that it is in the wrong directory, but even changing the <code>app/main:app</code> location still gives the same error.</p>
<p>The Procfile contents are: <code>uvicorn app/main:app --reload --host 0.0.0.0 --port=${PORT}</code>.</p>
<p>I have attached a screenshot of the file hierarchy as well to help understand the structure.</p>
<p><a href="https://i.sstatic.net/r8xWw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r8xWw.png" alt="enter image description here" /></a></p>
<p>I tried changing the Procfile to <code>uvicorn main:app --reload --host 0.0.0.0 --port=${PORT}</code> but still received the error: <code>bash: line 1: main:app: command not found</code></p>
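uvicorn takes a dotted module path (<code>app.main:app</code>), not a filesystem path, and each Procfile line needs a process-type prefix such as <code>web:</code>. A hedged sketch, assuming <code>main.py</code> lives in an <code>app</code> package (with an <code>__init__.py</code>) and defines an ASGI callable named <code>app</code>; <code>--reload</code> is a development flag and is best dropped on Heroku:

```
web: uvicorn app.main:app --host 0.0.0.0 --port ${PORT}
```

Without the <code>web:</code> prefix, Heroku treats the whole line as a command name, which matches the "No such file or directory" message.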
| <python><django><heroku><heroku-api> | 2023-07-19 00:51:29 | 1 | 417 | user3324136 |
76,717,299 | 5,032,387 | Recombine arrays obtained from subsetting on some of the dimensions of original array | <p>I have a 3-dim array, which I subset based on 2 of the 3 dimensions</p>
<pre><code>import dask.array as da
import numpy as np
np.random.seed(40)
test_arr = np.random.normal(size=(2,3,4))
array([[[-0.6075477 , -0.12613641, -0.68460636, 0.92871475],
[-1.84440103, -0.46700242, 2.29249034, 0.48881005],
[ 0.71026699, 1.05553444, 0.0540731 , 0.25795342]],
[[ 0.58828165, 0.88524424, -1.01700702, -0.13369303],
[-0.4381855 , 0.49344349, -0.19900912, -1.27498361],
[ 0.29349415, 0.10895031, 0.03172679, 1.27263986]]])
bool_check = test_arr[:,:,0] < 0.6
array([[ True, True, False],
[ True, True, True]])
# shape is (5,4)
arr1 = test_arr[bool_check]
# shape is (1,4)
arr2 = test_arr[~bool_check]
</code></pre>
<p>Note that I would rather have made <code>test_arr</code> a dask array from the start, but dask doesn't allow me to subset in this way like numpy does.</p>
<p>Now imagine in my actual use-case I do a bunch of manipulations that are irrelevant here and then want to reconstitute <code>arr1</code> and <code>arr2</code> into <code>arr3</code> by subset assignment. How would I do it?</p>
<pre><code>arr3 = da.zeros_like(test_arr)
# this gives an error
arr3[da.from_array(bool_check)] = arr1
</code></pre>
<pre><code>ValueError: Boolean index assignment in Dask expects equally shaped arrays.
</code></pre>
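One workaround, sketched below: do the boolean recombination in NumPy, where mixed-shape boolean assignment is allowed, and only wrap the finished array back into dask afterwards (e.g. with <code>da.from_array(arr3)</code>):

```python
import numpy as np

np.random.seed(40)
test_arr = np.random.normal(size=(2, 3, 4))
bool_check = test_arr[:, :, 0] < 0.6

arr1 = test_arr[bool_check]    # shape (5, 4)
arr2 = test_arr[~bool_check]   # shape (1, 4)

# NumPy accepts boolean-mask assignment with differently shaped sources,
# so the recombination round-trips exactly.
arr3 = np.zeros_like(test_arr)
arr3[bool_check] = arr1
arr3[~bool_check] = arr2

print(np.array_equal(arr3, test_arr))  # True
```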
| <python><numpy><subset><dask> | 2023-07-19 00:49:54 | 0 | 3,080 | matsuo_basho |
76,717,224 | 251,840 | CSV to SQLite error using csvs_to_sqlite package | <p>I'm using <a href="https://github.com/simonw/csvs-to-sqlite/tree/main" rel="nofollow noreferrer">csvs_to_sqlite</a> to import CSV files into an SQLite database. The package uses Python Click as a CLI but I've figured out how to call the function in my Python script:</p>
<pre><code>from csvs_to_sqlite import cli
import click
@click.command()
@click.pass_context
def convert_csv(ctx):
args = {"paths":"bigmac.csv", "dbname": "bigmac.db"}
ctx.invoke(cli.cli, **args)
def main():
convert_csv()
if __name__ == "__main__":
main()
</code></pre>
<p>I get this error:</p>
<pre><code>Could not load ./bigmac.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-log2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-arcsinh.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-arctanh.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-sin.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-cos.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-cbrt.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-arctan.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-cosh.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-expm1.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-sinh.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-tanh.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-log10.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-arcsin.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-arccos.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-log1p.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-log.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-exp2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-arccosh.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-tan.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/core/tests/data/umath-validation-set-exp.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/philox-testset-1.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/philox-testset-2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/sfc64-testset-1.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/sfc64-testset-2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/mt19937-testset-2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/mt19937-testset-1.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/pcg64-testset-1.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/pcg64-testset-2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/pcg64dxsm-testset-1.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Could not load ./venv/lib/python3.8/site-packages/numpy/random/tests/data/pcg64dxsm-testset-2.csv: read_csv() got an unexpected keyword argument 'error_bad_lines'
Loaded 0 dataframes
Created bigmac.db from 31 CSV files
</code></pre>
<p>An empty SQLite database is created with no tables. The CSV file is not corrupted (I was able to import and query it in DuckDB). Why is the conversion failing?</p>
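<p>For what it's worth, I verified that the keyword itself is the problem: pandas 2.x removed <code>error_bad_lines</code> in favour of <code>on_bad_lines</code> (added in pandas 1.3), so any tool still passing the old keyword fails on every file. A minimal check, independent of the conversion tool:</p>

```python
import io
import pandas as pd

# "3,4,5" has too many fields for the two-column header.
csv_data = "a,b\n1,2\n3,4,5\n6,7\n"

# pandas >= 1.3 accepts on_bad_lines; pandas 2.0 removed the old
# error_bad_lines/warn_bad_lines keywords entirely.
df = pd.read_csv(io.StringIO(csv_data), on_bad_lines="skip")
print(len(df))  # 2 -- the malformed line was skipped
```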
| <python><sqlite><csv> | 2023-07-19 00:18:03 | 0 | 4,393 | jwesonga |
76,717,206 | 4,298,591 | Patching a method that is referenced in a constants file, whose constant is pulled into a method under test | <p>I've got a constants file which contains configuration that developers can modify as needed. It's broken apart for readability and so the structure looks something like this:</p>
<pre><code>from app.configuration.foo.main import FOO_RULES
MAIN_RULES = {
'foo': FOO_RULES
}
</code></pre>
<p>FOO_RULES is defined as:</p>
<pre><code>from app.helpers.foo.action_helpers import ActionHelpers
FOO_RULES = {
'bar': {
'action_func': ActionHelpers.perform_action,
},
}
</code></pre>
<p>I have a separate file/class that imports in <code>MAIN_RULES</code> and dynamically parses through the configuration when invoked. When an <code>action_func</code> is defined (it <em>may</em> be None) it is invoked with a consistent set of arguments. Something to the effect of:</p>
<pre><code>from app.configuration.main import MAIN_RULES
class RuleRunner:
def __init__(self, rule_book):
self.__rule_book = MAIN_RULES
def run_rule(self, rule, val_1, val_2):
rule = self.__rule_book[rule]
action_func = rule['action_func']
if action_func is not None:
action_func(val_1, val_2)
</code></pre>
<p>The expectation is that we write tests for this configuration since it is a critical portion of our app. What I'm struggling with is I don't want to mock out the configuration (<code>MAIN_RULES</code>) since it lives in the source code and I want to test it all properly (and seems redundant to just do a test for a constants file that isn't just repeating the constant in the file). However, I do want to mock out the call to <code>ActionHelpers.perform_action</code> as that is properly unit tested in the test file for action helpers. How do I properly mock out <code>ActionHelpers.perform_action</code> without mocking out the entirety of the configuration? Most of the suggestions are only trying to mock out classes/functions directly imported into the file, but this is imported down the chain.</p>
<p>Example test:</p>
<pre><code>from app.runners import RuleRunner
import mock
@mock.patch('??????.ActionHelpers.perform_action')
def test_run_role_foo_bar_rule(action_helper_perform_action):
val_1 = 'fizz'
val_2 = 'buzz'
RuleRunner('foo').run_rule('bar', val_1, val_2)
action_helper_perform_action.assert_called_with(val_1, val_2)
</code></pre>
<p>Obviously <code>??????</code> is likely some module I need to use depending on if this is possible. Any guidance is appreciated.</p>
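<p>To illustrate the complication with simplified (made-up) names: the configuration dict captures the function object at import time, so patching the class attribute afterwards doesn't change the reference already stored in the dict:</p>

```python
from unittest import mock

class Helpers:
    @staticmethod
    def act(a, b):
        return "real"

RULES = {"action_func": Helpers.act}  # reference captured at import time

with mock.patch.object(Helpers, "act", return_value="mocked"):
    print(Helpers.act(1, 2))           # the class attribute is replaced: "mocked"
    print(RULES["action_func"](1, 2))  # still "real": the dict holds the
                                       # original function object
```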
| <python> | 2023-07-19 00:10:29 | 1 | 802 | Ben |
76,717,001 | 11,092,636 | Group anagrams big O doesn't seem to hold up, is there a mistake? | <p>The goal is to group words in the same list if they are anagrams. Words can only contain <code>a</code> <code>b</code> <code>c</code> or <code>d</code>.</p>
<p>Input:</p>
<p><code>["abc","acb","add","dda","ddd","aaa"]</code></p>
<p>Should give output:</p>
<p><code>[['abc', 'acb'], ['add', 'dda'], ['ddd'], ['aaa']]</code></p>
<p><code>n</code> will denote the number of strings in my input.</p>
<p><code>m</code> will denote the length of each string in my input.</p>
<p>The first solution is the more natural one and is <code>O(n * mlogm) time</code>:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
def groupAnagrams(strs: list[str]) -> list[list[str]]: # O(n * mlogm) time
mapping = defaultdict(list)
for s in strs: # O(n)
sig = "".join(sorted(s)) # O(m*logm)
mapping[sig].append(s)
return list(mapping.values())
print(groupAnagrams(["abc","acb","add","dda","ddd","aaa"]))
</code></pre>
<p>The second solution uses a trick making it <code># O(n * m) time</code>:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
def groupAnagrams2(strs: list[str]) -> list[list[str]]: # O(n * m) time
res = defaultdict(list)
for s in strs: # O(n)
count = [0]*4
for c in s: # O(m)
count[ord(c) - ord("a")] += 1
res[tuple(count)].append(s) # O(4)
return list(res.values())
print(groupAnagrams2(["abc","acb","add","dda","ddd","aaa"]))
</code></pre>
<p>Both solutions seem to be working.</p>
<p>On small lists and on big lists, solution 1 is faster even though, asymptotically, solution 2 should be faster. Does anyone know why?</p>
<p>Small lists:</p>
<pre class="lang-py prettyprint-override"><code>groupAnagrams(["abc","acb","add","dda","ddd","aaa"]) -> 2.21 Β΅s Β± 23.8 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each)
groupAnagrams2(["abc","acb","add","dda","ddd","aaa"]) -> 3.02 Β΅s Β± 18.1 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each)
</code></pre>
<p>Big lists:</p>
<pre class="lang-py prettyprint-override"><code>import random
random_strings = ["".join([chr(random.randint(97, 100)) for _ in range(100000)]) for _ in range(1000)]
groupAnagrams(random_strings) -> 5.48 s Β± 33 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
groupAnagrams2(random_strings) -> 7.13 s Β± 83.5 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
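<p>One variant I also tried (my own sketch): keep the O(n * m) counting but let <code>str.count</code> do the inner loop in C, since I suspect the per-character Python-level loop is what makes solution 2 slower in practice:</p>

```python
def groupAnagrams3(strs: list[str]) -> list[list[str]]:  # O(n * m) time
    res = {}
    for s in strs:
        # str.count runs in C, so the O(m) work per string avoids the
        # Python-level per-character loop of solution 2.
        key = tuple(s.count(c) for c in "abcd")
        res.setdefault(key, []).append(s)
    return list(res.values())

print(groupAnagrams3(["abc", "acb", "add", "dda", "ddd", "aaa"]))
# [['abc', 'acb'], ['add', 'dda'], ['ddd'], ['aaa']]
```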
| <python><hashmap><time-complexity> | 2023-07-18 23:06:41 | 1 | 720 | FluidMechanics Potential Flows |
76,716,982 | 420,157 | Example for Py_BuildValue with O& | <p>I am trying to find simple reference examples for Py_BuildValue with O& as an argument. Could you please share any examples or references?</p>
<p>Especially the usage of the converter and the value we pass along with this.</p>
<p>I tried a converter function which returns a PyObject* and accepts a void* as input. Unfortunately, this is giving me a seg fault.</p>
<p>Reference:
<a href="http://web.mit.edu/people/amliu/vrut/python/ext/buildValue.html" rel="nofollow noreferrer">http://web.mit.edu/people/amliu/vrut/python/ext/buildValue.html</a></p>
| <python><python-3.x><python-c-api> | 2023-07-18 23:00:01 | 1 | 777 | Maverickgugu |
76,716,917 | 5,032,387 | Boolean comparison of array with vector to yield a matrix | <p>I have the following vectors:</p>
<pre><code>import numpy as np
x = np.array([0.4, 0.6])
y = np.array([0, 0.2, 0.4, 0.6, 0.8, 1])
</code></pre>
<p>What operation would get me a matrix that looks like this?</p>
<pre><code>False False True False False False
False False False True False False
</code></pre>
<p>In other words, all Falses except where the values from x appear in y by row. Note that I can't have for loops because this is a toy example for my use case case that deals with large matrices.</p>
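<p>For reference, my current loop-free attempt uses broadcasting; I'm not sure it's the canonical way:</p>

```python
import numpy as np

x = np.array([0.4, 0.6])
y = np.array([0, 0.2, 0.4, 0.6, 0.8, 1])

# x[:, None] has shape (2, 1); broadcasting against y's shape (6,) yields a
# (2, 6) boolean matrix with no Python loops.  np.isclose instead of ==
# sidesteps floating-point representation differences.
mask = np.isclose(x[:, None], y)
print(mask)
```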
| <python><numpy> | 2023-07-18 22:39:49 | 1 | 3,080 | matsuo_basho |
76,716,898 | 34,935 | Is python dict insertion order preserved after deleting elements? | <p>This <a href="https://stackoverflow.com/a/39980744">StackOverflow answer</a> says python dicts keep insertion order of keys as of python 3.7. The comments to that answer discuss the implementation details of what happens when a key is deleted. I'd like to know: what does the language spec guarantee about key order in the face of deletes (preferably with a link)?</p>
<p>Based on the discussion, I bet it guarantees insertion order of the undeleted elements, but I've been unable to find confirmation.</p>
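<p>To make the question concrete, this is the observable CPython 3.7+ behaviour that I'd like the language spec to confirm:</p>

```python
# Deleting a key preserves the order of the remaining keys, and
# re-inserting a deleted key appends it at the end (observed on CPython).
d = {"a": 1, "b": 2, "c": 3, "d": 4}
del d["b"]
print(list(d))   # ['a', 'c', 'd']
d["b"] = 9
print(list(d))   # ['a', 'c', 'd', 'b']
```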
| <python><python-3.x><dictionary> | 2023-07-18 22:34:41 | 1 | 21,683 | dfrankow |
76,716,887 | 16,872,665 | How can I get the data from my Survey123 form using Python | <p>I am trying to access my ESRI Survey123 results that aren't Everyone (public) accessible. I'm using the <code>arcgis</code> Python package on a machine that does not have ArcGIS installed. Unfortunately, I can only access public results in my account (and elsewhere). How can I retrieve the data when it's not public?</p>
<p><em>Additional information:</em> the survey was created without using any existing ESRI features, layers, or other stuff - just opened Survey123 online and created the survey</p>
<pre class="lang-py prettyprint-override"><code>import arcgis
from arcgis.gis import GIS
# ESRI Survey123 API endpoint
survey123_api_url = 'https://www.arcgis.com'
survey123_username = '<my_username>'
survey123_password = '<my_password>'
# Get a list of non-public Survey123 data
survey_item_id = '88d7e11f82fa44c0a52db4ba435b86ff' # A random ID
gis = GIS(survey123_api_url, survey123_username, survey123_password)
# Use SurveyManager to see everything available
survey_manager = arcgis.apps.survey123.SurveyManager(gis)
print(survey_manager.surveys) # only contains public items
# Try to get a non-public item
sbi = survey_manager.get(survey_item_id)
print(sbi) # only contains item when it's public
sr = gis.content.search('owner:<my account name>')
print(sr) # also only contains public items
</code></pre>
<p>Note: <a href="https://community.esri.com/t5/python-questions/access-non-public-results-from-survey123-outside/m-p/1305982/thread-id/68094#M68202" rel="nofollow noreferrer">original post</a> in ESRI community</p>
| <python><download><arcgis> | 2023-07-18 22:32:13 | 1 | 314 | ChrisSc |
76,716,552 | 1,880,182 | Finding intersection and projection points for a set of labelled coordinates | <p>I am working on a project where I need to find the minimum x and minimum y coordinates for a set of labelled points. Additionally, I need to determine the intersection point of the two lines passing through the minimum x and minimum y coordinates.</p>
<p>I have a list of points, where each point is represented as a list in the following format:</p>
<pre><code>[[[x1, y1], [x2, y2], [x3, y3], [x4, y4]], [x_center, y_center], label]
</code></pre>
<p>Here, <code>[[x1, y1], [x2, y2], [x3, y3], [x4, y4]]</code> represents the four corners of a rectangle enclosing the labelled point. <code>[x_center, y_center]</code> represents the coordinates of the centre of the rectangle, and the <code>label</code> is a numerical value associated with the point.</p>
<p>I need to find the minimum x and minimum y coordinates of the labelled points, and then determine the intersection point of the lines passing through the minimum x and minimum y coordinates. Additionally, I want the projection points to have the same format as the zero points, including the label coordinates.</p>
<p>I have tried implementing a function <code>find_actual_points(points, pixel_tolerance=1)</code> to achieve this, but I am having trouble getting the correct results. Here is my current implementation:</p>
<pre><code>def find_actual_points(points, pixel_tolerance=1):
sorted_points_by_x = sorted(points, key=lambda point: point[1][0]) # Sort points by X center coordinate
x_zero_point = sorted_points_by_x[0] # Point with the minimum X center coordinate
sorted_points_by_y = sorted(points, key=lambda point: point[1][1]) # Sort points by Y center coordinate
y_zero_point = sorted_points_by_y[-1] # Point with the minimum Y center coordinate
x1, y1 = x_zero_point[1]
x2, y2 = y_zero_point[1]
if abs(y1 - y2) <= pixel_tolerance:
raise Exception("Lines are parallel")
intersection_x = x1 - ((x1 - x2) * y1) / (y1 - y2)
intersection_y = y1 - ((y1 - y2) * x1) / (x1 - x2)
# Calculate label ratios for the axes
x_label_ratio = (y1 - intersection_x) / (y1 - x1)
y_label_ratio = (y1 - intersection_y) / (y1 - x1)
# Add the intersection point of the minimum x and minimum y lines
actual_points = []
actual_points.append([[intersection_x, intersection_y], [x_zero_point[2], y_zero_point[2]]])
# Calculate projection points for the other labeled coordinates
for point in points:
coords = point[1]
label = point[2]
# Calculate the x-axis projection of the minimum x and minimum y lines intersection
x_projection = intersection_x + (coords[0] - intersection_x) * x_label_ratio
# Calculate the y-axis projection of the minimum x and minimum y lines intersection
y_projection = intersection_y + (coords[1] - intersection_y) * y_label_ratio
actual_points.append([[x_projection, y_projection], label])
return actual_points
</code></pre>
<p>When I run this code with the provided example points:</p>
<pre><code>[[[[4.071428571428571, 217.07142857142858], [19.92857142857143, 217.07142857142858], [19.92857142857143, 230.92857142857142], [4.071428571428571, 230.92857142857142]], [12, 224], 1.0],
[[[4.071428571428571, 301.07142857142856], [19.92857142857143, 301.07142857142856], [19.92857142857143, 314.92857142857144], [4.071428571428571, 314.92857142857144]], [12, 308], 0.0],
[[[8.0, 50.0], [17.0, 50.0], [17.0, 63.0], [8.0, 63.0]], [12, 56], 3.0],
[[[9.0, 130.0], [18.0, 130.0], [18.0, 142.0], [9.0, 142.0]], [13, 136], 2.0],
[[[18.0, 305.0], [28.0, 305.0], [28.0, 316.0], [18.0, 316.0]], [23, 310], 0.0],
[[[132.0, 303.0], [153.0, 303.0], [153.0, 319.0], [132.0, 319.0]], [142, 311], 20.0],
[[[251.0, 303.0], [273.0, 303.0], [273.0, 319.0], [251.0, 319.0]], [262, 311], 40.0],
[[[370.0, 303.0], [391.0, 303.0], [391.0, 319.0], [370.0, 319.0]], [380, 311], 60.0],
[[[489.0, 305.0], [508.0, 305.0], [508.0, 318.0], [489.0, 318.0]], [498, 311], 80.0]]
</code></pre>
<p>I do not get the expected results.</p>
<p>Also, this is the visualisation of the points, which might be helpful.</p>
<p><a href="https://i.sstatic.net/zcrFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zcrFd.png" alt="Also, this is the visualisation of the points, which might be helpful." /></a></p>
<p>I would greatly appreciate any guidance or suggestions on how to fix the code and obtain the desired results. Thank you in advance!</p>
<p>EDIT: I need something like that.</p>
<p><a href="https://i.sstatic.net/Vxj2Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vxj2Q.png" alt="EDIT: I need something like that." /></a></p>
<p>EDIT: For the example list I have provided, the following points are on the same line.</p>
<p>For the line parallel to the Y-axis:</p>
<pre><code>[[[4.071428571428571, 217.07142857142858], [19.92857142857143, 217.07142857142858], [19.92857142857143, 230.92857142857142], [4.071428571428571, 230.92857142857142]], [12, 224], 1.0],
[[[4.071428571428571, 301.07142857142856], [19.92857142857143, 301.07142857142856], [19.92857142857143, 314.92857142857144], [4.071428571428571, 314.92857142857144]], [12, 308], 0.0],
[[[8.0, 50.0], [17.0, 50.0], [17.0, 63.0], [8.0, 63.0]], [12, 56], 3.0],
[[[9.0, 130.0], [18.0, 130.0], [18.0, 142.0], [9.0, 142.0]], [13, 136], 2.0]
</code></pre>
<p>For the line parallel to the X-axis,</p>
<pre><code> [[[18.0, 305.0], [28.0, 305.0], [28.0, 316.0], [18.0, 316.0]], [23, 310], 0.0],
[[[132.0, 303.0], [153.0, 303.0], [153.0, 319.0], [132.0, 319.0]], [142, 311], 20.0],
[[[251.0, 303.0], [273.0, 303.0], [273.0, 319.0], [251.0, 319.0]], [262, 311], 40.0],
[[[370.0, 303.0], [391.0, 303.0], [391.0, 319.0], [370.0, 319.0]], [380, 311], 60.0],
[[[489.0, 305.0], [508.0, 305.0], [508.0, 318.0], [489.0, 318.0]], [498, 311], 80.0]
</code></pre>
<p>And I need to achieve the minimum labelled X point and minimum labelled Y point, which are</p>
<pre><code>[[[18.0, 305.0], [28.0, 305.0], [28.0, 316.0], [18.0, 316.0]], [23, 310], 0.0]
[[[4.071428571428571, 301.07142857142856], [19.92857142857143, 301.07142857142856], [19.92857142857143, 314.92857142857144], [4.071428571428571, 314.92857142857144]], [12, 308], 0.0]
</code></pre>
<p>respectively.</p>
<p>And I think the orthogonal intersection of the centre points of the two lines will be <code>[12, 310]</code>.</p>
<p>And the projections of the points on the line parallel to the X-axis will be like [X point, 310], and the projections of the points on the line parallel to the Y-axis will be like [12, Y point].</p>
<p>I am having trouble finding the minimum point on the line parallel to the X-axis and the minimum point on the line parallel to the Y-axis. Please note that one of the labels might be greater than the other. I need a general solution. I think I can do the rest.</p>
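<p>To pin down the expected origin, here's a minimal sketch with the two groups already separated by hand (the general axis-splitting is exactly the part I still need):</p>

```python
# Centre/label pairs copied from the example, separated by axis by hand.
vertical = [([12, 224], 1.0), ([12, 308], 0.0), ([12, 56], 3.0), ([13, 136], 2.0)]
horizontal = [([23, 310], 0.0), ([142, 311], 20.0), ([262, 311], 40.0),
              ([380, 311], 60.0), ([498, 311], 80.0)]

y_zero = min(vertical, key=lambda p: p[1])    # min label on the Y-parallel line
x_zero = min(horizontal, key=lambda p: p[1])  # min label on the X-parallel line

# Orthogonal intersection: x from the vertical line, y from the horizontal one.
origin = [y_zero[0][0], x_zero[0][1]]
print(origin)  # [12, 310]
```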
| <python><python-3.x> | 2023-07-18 21:05:08 | 2 | 541 | Eftal Gezer |
76,716,497 | 670,446 | Trying to set up loggers in packages files using __name__, but __name__ is __main__ | <p>I have a hierarchy for a package like this:</p>
<pre><code>test_script.py
package_name/
__init__.py
functionality_1.py
functionality_2.py
</code></pre>
<p>For testing purposes, in addition to the functions in functionality_1.py, there is a section to run it as main. I use that for debugging and developing that functionality of the package. At the bottom of functionality_1.py is a standard main like this:</p>
<pre><code>if __name__ == "__main__":
# Do some stuff
</code></pre>
<p>I would like logging from the functions in <strong>functionality_1.py</strong> to use a logger <code>package_name.functionality_1</code>, and <strong>functionality_2.py</strong> use a logger named <code>package_name.functionality_2</code></p>
<p>I tried what I've seen in examples, using</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
<p>but if I'm running file functionality_1.py with <code>python -m package_name.functionality_1</code>, the logger is always named <code>__main__</code>.</p>
<p>I'd rather not hardcode logger names, but I'm not sure what the best way to do this is.</p>
<p>Where do you create loggers, so each xxxx.py has its own logger? <code>__init__.py</code> does seem like the right place.</p>
<p>Is it bad form to put a main function in my package files?</p>
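<p>The closest workaround I've found so far (I'm not sure it's idiomatic) derives the dotted name from <code>__spec__</code>, which <code>python -m</code> still populates even though <code>__name__</code> becomes <code>"__main__"</code>:</p>

```python
import logging

def module_logger_name() -> str:
    # Under "python -m package_name.functionality_1", __name__ becomes
    # "__main__", but runpy still records the dotted path in __spec__.name.
    g = globals()
    name = g.get("__name__", "__main__")
    spec = g.get("__spec__")
    if name == "__main__" and spec is not None:
        return spec.name
    return name

logger = logging.getLogger(module_logger_name())
```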
| <python><python-3.x><logging> | 2023-07-18 20:55:37 | 2 | 3,739 | bpeikes |
76,716,481 | 14,044,445 | Measuring the smoothness of a curve | <p>In deep learning, accuracy curves are crucial for evaluating a model's performance. Typically, an accuracy curve resembles a logarithmic function, although the reasons for this are beyond the scope of this question. Large spikes in the accuracy curve can indicate issues such as an inappropriate batch size. Let's examine these curves:</p>
<p><a href="https://i.sstatic.net/OIwZq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OIwZq.png" alt="enter image description here" /></a></p>
<p>Here, I have plotted a function (log(x)) with different random noises of alpha, using this code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def f(x, alpha):
return np.log(x) + alpha * np.random.normal(size=x.size)
def main():
x = np.linspace(0.5, 3)
for i in [0, 0.1, 0.3]:
plt.plot(x, f(x, alpha=i),label=fr'$\alpha$ = {i}')
plt.legend()
plt.show()
if __name__ == '__main__':
main()
</code></pre>
<p>My objective is to determine the smoothness of these curves in order to infer the original alpha value. The only thing that hints at a solution to me is the integral of the squared second derivative, but I think there might be a better, more accurate approach.</p>
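<p>Concretely, this is the finite-difference version of that integral idea (my own rough sketch; the normalisation is ad hoc):</p>

```python
import numpy as np

def roughness(x, y):
    # Approximate the integral of the squared second derivative with
    # finite differences; larger values indicate a less smooth curve.
    d2 = np.gradient(np.gradient(y, x), x)
    return np.sum(d2 ** 2) * (x[1] - x[0])

x = np.linspace(0.5, 3, 50)
rng = np.random.default_rng(0)
smooth = np.log(x)
noisy = np.log(x) + 0.3 * rng.normal(size=x.size)
print(roughness(x, smooth) < roughness(x, noisy))  # True
```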
<p>Can anyone suggest a solution?</p>
| <python><machine-learning><curve> | 2023-07-18 20:52:52 | 0 | 364 | Amirhossein Rezaei |
76,716,480 | 20,122,390 | How can I count and fetch records from a model in a single query with Tortoise-ORM? | <p>I have an application in which I use Tortoise-orm to interact with a postgresql database. Then I have the following model:</p>
<pre><code>class Dat(models.Model):
id = fields.IntField(pk=True)
name = fields.CharField(max_length=255, unique=False, null=False)
ext = fields.CharEnumField(enum_type=ExtEnum, null=False)
granularity = fields.CharEnumField(enum_type=Granularity, null=False)
status_code = fields.IntField(null=False)
status_info = fields.CharField(max_length=255, unique=False, null=False)
created_at = fields.DatetimeField(auto_now_add=True)
last_modified = fields.DatetimeField(auto_now=True)
company = fields.ForeignKeyField(
"models.Company",
related_name="dat_company",
on_delete=CASCADE,
null=False,
)
user = fields.ForeignKeyField(
"models.User",
related_name="dat_user",
on_delete=CASCADE,
null=False,
)
class Meta:
table = "dat"
</code></pre>
<p>As you can see, there is a one-to-many relationship between User and Dat.
For one feature of my application, I need to fetch all the records from Dat (applying an optional filter) together with the "name" field of the related User table. In addition, I apply an offset and limit, and I also need to count all the records (without taking the offset and limit into account).
So, I do it with the following approach:</p>
<pre><code>async def get_all_with_count(
self,
payload: dict = None,
skip: int = 0,
limit: int = 10,
):
query = self.model.filter(**payload)
model = (
await query.prefetch_related(
Prefetch(
"user", queryset=User.all().only("uid", "name"), to_attr="user_info"
),
)
.offset(skip)
.limit(limit)
)
total = await query.count()
return model, total
</code></pre>
<p>My code works, but I have to do two queries to get what I want. Is there any way to do it with only one query?</p>
| <python><database><postgresql><orm><tortoise-orm> | 2023-07-18 20:52:49 | 0 | 988 | Diego L |
76,716,367 | 2,829,961 | How to match column values and extract indices in siuba? | <h2>Objective and data</h2>
<p>My goal is to look for the values of <code>preceding</code> in <code>vehicle_id</code> at a given <code>frame_id</code> and extract the corresponding value of <code>v_vel</code> in a new column called <code>preceding_vel</code>. I want to use the <code>siuba</code> python package for this purpose. Following is my dataframe:</p>
<pre><code> import pandas as pd
df_mini_dict = {'vehicle_id': {884: 2, 885: 2, 886: 2, 14148: 44, 14149: 44, 14150: 44},
'frame_id': {884: 338, 885: 339, 886: 340, 14148: 338, 14149: 339, 14150: 340},
'preceding': {884: 44, 885: 44, 886: 44, 14148: 3355, 14149: 3355, 14150: 3355},
'v_vel': {884: 6.299857770322456, 885: 6.427411525504063, 886: 6.590098168958994, 14148: 7.22883474245701, 14149: 6.973590500351793, 14150: 6.727721962795176}}
df_mini = pd.DataFrame.from_dict(df_mini_dict)
</code></pre>
<h2>Working R solution</h2>
<p>I can achieve the objective by using the following code:</p>
<pre><code>df_mini <- structure(list(vehicle_id = c(2L, 2L, 2L, 44L, 44L, 44L),
frame_id = c(338L, 339L, 340L, 338L, 339L, 340L),
preceding = c(44L, 44L, 44L, 3355L, 3355L, 3355L),
v_vel = c(6.29985777032246, 6.42741152550406,
6.59009816895899, 7.22883474245701,
6.97359050035179, 6.72772196279518),
preceding_vel = c(7.22883474245701, 6.97359050035179,
6.72772196279518, NA, NA, NA)),
class = c("tbl_df", "tbl", "data.frame"),
row.names = c(NA, -6L))
library(dplyr)
df_mini <- df_mini |>
dplyr::group_by(frame_id) |>
dplyr::mutate(preceding_vel = v_vel[match(preceding, vehicle_id)]) |>
dplyr::ungroup()
</code></pre>
<h2>Python attempt</h2>
<p>Essentially, I am trying to do in <code>siuba</code> what <code>dplyr</code> is doing but it seems that I need to use <code>index()</code> to do what <code>match</code> does. I tried the following unsuccessfully:</p>
<pre><code>def match(x, table):
indicez = []
for i in x:
indicez.append(table.index(i))
return indicez
from siuba import *
df_mini = (
df_mini
>> group_by(_.frame_id) # grouping by frame id
>> mutate(preceding_vel = _.v_vel[match(_.preceding, _.vehicle_id)])
)
TypeError: 'Symbolic' object is not iterable
</code></pre>
<p>Please guide me on the best way to define the <code>match</code> function, or suggest something else to meet the objective. Thanks.</p>
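<p>In case it clarifies the target, this is the plain-pandas equivalent of what I'm trying to express in siuba (a self-join on frame_id/preceding; v_vel values rounded):</p>

```python
import pandas as pd

df_mini = pd.DataFrame({
    "vehicle_id": [2, 2, 2, 44, 44, 44],
    "frame_id": [338, 339, 340, 338, 339, 340],
    "preceding": [44, 44, 44, 3355, 3355, 3355],
    "v_vel": [6.2999, 6.4274, 6.5901, 7.2288, 6.9736, 6.7277],
})

# Self-join matching (frame_id, preceding) against (frame_id, vehicle_id):
# each row picks up the v_vel of its preceding vehicle in the same frame,
# and rows whose preceding vehicle is absent get NaN -- the same result
# as dplyr's match().
lookup = df_mini[["frame_id", "vehicle_id", "v_vel"]].rename(
    columns={"vehicle_id": "preceding", "v_vel": "preceding_vel"}
)
out = df_mini.merge(lookup, on=["frame_id", "preceding"], how="left")
print(out["preceding_vel"].tolist())
```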
| <python><r><pandas><dplyr> | 2023-07-18 20:34:00 | 2 | 6,319 | umair durrani |
76,716,231 | 4,175,822 | Why does mypy say that my return type is incompatible (subset of union types)? | <p>Why does mypy say that my return type is incompatible?
Here is my code:</p>
<pre><code>from __future__ import annotations
import typing
import dataclasses
import immutabledict
class Unset:
pass
unset: Unset = Unset()
OUTPUT_BASE_TYPES = typing.Union[
immutabledict.immutabledict[str, 'OUTPUT_BASE_TYPES'],
str,
int,
float,
bool,
None,
typing.Tuple['OUTPUT_BASE_TYPES', ...],
]
@dataclasses.dataclass
class ApiResponse:
body: typing.Union[OUTPUT_BASE_TYPES, Unset] = unset
class MyDict(immutabledict.immutabledict[str, typing.Union[None, str]]):
pass
@dataclasses.dataclass
class CustomApiResponse(ApiResponse):
body: MyDict
inst = ApiResponse(body=1)
other_inst = CustomApiResponse(body=MyDict({}))
</code></pre>
<p>mypy reports:</p>
<p><code>mypy union.py union.py:36: error: Incompatible types in assignment (expression has type "MyDict", base class "ApiResponse" defined the type as "Union[OUTPUT_BASE_TYPES, Unset]") [assignment]</code></p>
<p>But the value type of the immutabledict, typing.Union[None, str], is a subset of the union of types OUTPUT_BASE_TYPES,
so why is this failing?</p>
<p>How do I get it to work? What do I need to change?</p>
<h3>mypy version</h3>
<ul>
<li>Mypy version used: mypy 1.4.1 (compiled: yes)</li>
<li>Mypy command-line flags: none, <code>mypy union.py</code></li>
<li>Mypy configuration options from <code>mypy.ini</code> N/A</li>
<li>Python version used: Python 3.7.12 (default, Sep 20 2022, 17:18:51)
[Clang 10.0.1 (clang-1001.0.46.4)] on darwin</li>
</ul>
| <python><mypy><python-typing> | 2023-07-18 20:09:47 | 1 | 2,821 | spacether |
76,716,206 | 1,676,393 | for inner product in pytorch, `dot` and `inner` give wildly different results with dtype = torch.bfloat16 | <p>I have two 1-dimensional PyTorch tensors (of type <code>bfloat16</code>), and I want to compute their inner/dot product.</p>
<p>Should I use <code>torch.dot</code> or <code>torch.inner</code>? I thought it shouldn't really matter which one, but they give me wildly different results. I experimented with some other methods too, and found that some behave like <code>torch.dot</code> and some behave like <code>torch.inner</code> (see comments in code below). Why is this? And <strong>which is the right one to use?</strong></p>
<pre class="lang-py prettyprint-override"><code>import torch
torch.manual_seed(17) # set seed for replication
dtype=torch.bfloat16
# make two 1d tensors
a = torch.rand([10000], dtype=dtype)
b = torch.rand([10000], dtype=dtype)
x1 = torch.dot(a,b)
# or, equivalently:
# x1 = torch.matmul(a,b)
# x1 = a @ b
x2 = torch.inner(a,b)
# or equivalently:
# x2 = (a*b).sum(-1)
# x2 = torch.mul(a,b).sum(-1)
# x2 = torch.matmul(a.unsqueeze(0),
# b.unsqueeze(-1)).squeeze()
</code></pre>
<p>The results are not equal. They're not even close.</p>
<pre class="lang-py prettyprint-override"><code>print(f"""{x1 = }\n{x2 = }
{torch.equal(x1, x2) = }
{torch.isclose(x1, x2) = }""")
</code></pre>
<blockquote>
<pre><code>x1 = tensor(256., dtype=torch.bfloat16)
x2 = tensor(2464., dtype=torch.bfloat16)
torch.equal(x1, x2) = False
torch.isclose(x1, x2) = tensor(False)
</code></pre>
</blockquote>
<p>However, if I set <code>dtype=torch.float</code> instead of <code>bfloat16</code>, they end up <em>nearly</em> the same (still some differences due, I suppose, to numerical instability).</p>
<blockquote>
<pre><code>x1 = tensor(2477.7292, dtype=torch.bfloat16)
x2 = tensor(2477.7295, dtype=torch.bfloat16)
torch.equal(x1, x2) = False
torch.isclose(x1, x2) = tensor(True)
</code></pre>
</blockquote>
<p>What is the best way to get the inner product reliably, if the type is/may be bfloat16?</p>
<hr />
<p>EDIT:</p>
<p>Python 3.10.10 on CPU.</p>
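<p>EDIT: My current suspicion (unverified) is low-precision accumulation: once a running bfloat16 sum reaches 256, its spacing is 2.0 and the small products no longer register, which would match x1 getting stuck at exactly 256. The same saturation is easy to show with NumPy's float16, which just hits the wall later:</p>

```python
import numpy as np

# float16 has an 11-bit significand, so at 2048 the gap between adjacent
# representable values is 2.0 and adding anything < 1.0 is simply lost.
# bfloat16 (8-bit significand) hits the same wall much earlier, at 256.
s = np.float16(2048.0)
print(s + np.float16(0.5) == s)  # True: the small addend is absorbed
```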
| <python><pytorch><precision><matrix-multiplication> | 2023-07-18 20:06:05 | 1 | 1,121 | postylem |
76,716,123 | 9,443,671 | How can I merge independent json files which each contain dataframes? | <p>Let's say I have a bunch of json files where each contains a dataframe of the same format:</p>
<p>e.g.</p>
<pre><code>dir/1.json
dir/2.json
dir/3.json
...
</code></pre>
<p>And each dataframe looks like this:</p>
<pre><code>text_input liked_vote disliked_vote
text True False
text True False
text True False
</code></pre>
<p>Each dataframe has either <code>liked_vote = True</code> or <code>disliked_vote=True</code> but not both! I.e. if one column is <code>True</code> then the other is <code>False</code>.</p>
<p>What I'm trying to do is recursively merge the json files until I reach a dataframe that has <code>disliked_vote</code> as True. For example, if <code>1.json</code>, <code>2.json</code> and <code>3.json</code> all have <code>liked_vote = True</code> and <code>4.json</code> has <code>disliked_vote=True</code>, then I'd like to merge all four json files into a single dataframe (e.g. <code>pd.concat([df1, df2, df3, df4])</code>), set <code>liked_vote = True</code> on the merged dataframe, and then delete the json files and save the merged result into <code>1.json</code>.</p>
<p>In summary, I want to split on <code>disliked_vote=True</code> and re-merge the dataframes/json files. I'm really not sure how to go about doing this... can anyone help?</p>
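<p>To make the intended behaviour concrete, here's a rough sketch of the run-merging I have in mind (the numbered-file assumption and names are mine; deleting files and resetting the vote flags is left out):</p>

```python
from pathlib import Path
import pandas as pd

def merge_runs(directory):
    # Walk the numbered json files in order, buffering liked-only frames
    # until one containing a disliked vote appears, then concatenate the
    # whole run into a single dataframe.
    buffer, merged = [], []
    for path in sorted(Path(directory).glob("*.json"), key=lambda p: int(p.stem)):
        df = pd.read_json(path)
        buffer.append(df)
        if df["disliked_vote"].any():
            merged.append(pd.concat(buffer, ignore_index=True))
            buffer = []
    if buffer:  # trailing liked-only files with no closing dislike
        merged.append(pd.concat(buffer, ignore_index=True))
    return merged
```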
| <python><json><pandas><dataframe> | 2023-07-18 19:52:44 | 1 | 687 | skidjoe |
76,716,100 | 11,370,582 | Check if both start and end date exist in column and filter dataframe - python, pandas | <p>I have a dataset containing multiple <code>IDs</code>, and <code>Targets</code> that spans a time frame of 2 months, available here - <a href="https://pastebin.com/UeyZ4uZu" rel="nofollow noreferrer">https://pastebin.com/UeyZ4uZu</a></p>
<pre><code>start_date = '01-01-19'
end_date = '02-28-19'
</code></pre>
<p>I need to filter out any data that does not span that entire timeframe, i.e. there is data on <code>01-01-19</code> and there is data on <code>02-28-19</code>. There does not need to be data for every day in between. For example in the sample dataset here:</p>
<pre><code>df = pd.DataFrame({'names':['jim','jim','jim','jim','jim','jim','jim','jim','jim',
'bob','bob','bob','bob','bob','bob',
'sara','sara','sara','sara','sara','sara','sara','sara','sara','sara'],
'dates':['01-01-19','01-02-19','01-03-19','01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19',
'01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19',
'01-01-19','01-02-19','01-03-19','01-04-19','01-05-19','01-06-19','01-07-19','01-08-19','01-09-19','01-10-19']})
</code></pre>
<p><code>jim</code> and <code>sara</code> would be kept, even though <code>jim</code> is missing <code>01-04-19</code> and <code>bob</code> would be dropped as he does not have data for <code>01-01-19</code>. I previously asked a similar question here: <a href="https://stackoverflow.com/questions/76689751/filer-by-dates-that-start-after-a-specific-time-pandas-python/76690135?noredirect=1#comment135210303_76690135">Filer by Dates that Start After a Specific Time - pandas, python</a>, and got the solution:</p>
<pre><code>start = df.Date.min()
end = df.Date.max()
num_days = (end - start).days + 1
# If start/end is fixed date and not by min/max,
# add filter to make sure it won't start/end on the wrong dates
# df = df[(df.Date >= start) & (df.Date <= end)]
df = df.loc[df.groupby('ID').Date.transform('nunique') == num_days]
</code></pre>
<p>Which was correct for the question asked, but I realized it is filtering out additional data that I need to keep. The primary goal is to keep data that exists between the <code>start_date</code> and <code>end_date</code> and throw out anything that does not contain those bookends.</p>
<p>Something along the lines of:</p>
<pre><code>dfin = df.loc[df.groupby('ID').Date.isin([start_date,end_date])]
</code></pre>
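<p>For what it's worth, here is a small working sketch of the bookend check on the sample frame above (column names follow the sample, <code>names</code>/<code>dates</code>; the real data would use <code>ID</code>/<code>Date</code>, and only two of the sample rows per person are shown):</p>

```python
import pandas as pd

start_date, end_date = '01-01-19', '01-10-19'

df = pd.DataFrame({'names': ['jim', 'jim', 'bob', 'bob'],
                   'dates': ['01-01-19', '01-10-19', '01-05-19', '01-10-19']})

# For each group, check that both bookend dates appear; then keep
# only the rows belonging to groups that pass the check.
has_both = df.groupby('names')['dates'].apply(
    lambda s: {start_date, end_date}.issubset(set(s))
)
kept = df[df['names'].map(has_both)]
```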
| <python><pandas><date><filter><group-by> | 2023-07-18 19:48:04 | 1 | 904 | John Conor |
76,715,925 | 10,685,529 | Pytest, vscode test explorer and integrated terminal | <p>I have trouble figuring out pytest with regard to <code>pythonpath</code> and the VS Code test explorer.</p>
<p>My project structure looks like this:</p>
<pre><code>analytics
└── worker
    ├── src
    │   ├── __init__.py
    │   ├── data
    │   │   ├── __init__.py
    │   │   └── connection.py
    │   ├── main.py
    │   └── ...
    ├── tests
    │   ├── integration
    │   │   └── ...
    │   └── unit
    │       ├── conftest.py
    │       └── ...
    ├── Dockerfile
    └── Dockerfile.dev
</code></pre>
<p>And my pyproject.toml looks like this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.pytest.ini_options]
# ------------------------------- TOOL:PYTEST -------------------------------
pythonpath = ["./worker", "./worker/src"]
addopts = "-v"
python_files = "test_*.py"
</code></pre>
<p>In <code>conftest.py</code> I have the following import:</p>
<pre class="lang-py prettyprint-override"><code>from src.data.connection import Connection
</code></pre>
<p>In this scenario my integrated terminal lets me run <code>pytest</code> without errors, and I don't get any errors in <code>conftest.py</code>. But I get errors in the VS Code test explorer.</p>
<p>If I move the <code>tests</code> folder to the root of the project, and change the import in <code>conftest.py</code> to:</p>
<pre class="lang-py prettyprint-override"><code>from worker.src.data.connection import Connection
</code></pre>
<p>I don't get errors in the integrated terminal or the test explorer, but I get import errors in <code>conftest.py</code>.</p>
<p>And if I prepend <code>worker.</code> to <code>from src.data.connection import Connection</code> I get errors in the integrated terminal, but not in the test explorer or in <code>conftest.py</code>.</p>
<p>Can someone explain how I'm supposed to structure my project, and how I can get the test explorer to work?</p>
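<p>For reference, one arrangement that often keeps the terminal run and the test explorer in agreement is a single import root, so that <code>src.data.connection</code> resolves the same way from both entry points. This is a sketch, not the only valid layout, and it assumes pytest's rootdir is the project root where <code>pyproject.toml</code> lives and that the test explorer uses the same interpreter:</p>

```ini
[tool.pytest.ini_options]
# A single entry: putting `worker` on sys.path makes `src` importable
# as a package from both the terminal and the test explorer.
pythonpath = ["worker"]
testpaths = ["worker/tests"]
addopts = "-v"
python_files = "test_*.py"
```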
| <python><visual-studio-code><pytest> | 2023-07-18 19:18:35 | 1 | 1,353 | Lewi Uberg |
76,715,788 | 3,623,723 | Pipenv creates virtualenv, then complains about not finding suitable Python, but it runs anyway | <p>I compiled Python 3.7.17 with <code>pyenv</code>:</p>
<pre class="lang-bash prettyprint-override"><code> $ env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.7.17
Downloading Python-3.7.17.tar.xz...
-> https://www.python.org/ftp/python/3.7.17/Python-3.7.17.tar.xz
Installing Python-3.7.17...
patching file Doc/library/ctypes.rst
patching file Lib/test/test_unicode.py
patching file Modules/_ctypes/_ctypes.c
patching file Modules/_ctypes/callproc.c
patching file Modules/_ctypes/ctypes.h
patching file setup.py
patching file 'Misc/NEWS.d/next/Core and Builtins/2020-06-30-04-44-29.bpo-41100.PJwA6F.rst'
patching file Modules/_decimal/libmpdec/mpdecimal.h
patching file setup.py
Installed Python-3.7.17 to /home/username/.pyenv/versions/3.7.17
</code></pre>
<p>Now I'm trying to set up a pipenv using that version, and it seems to work...</p>
<pre class="lang-bash prettyprint-override"><code>$ pipenv install python 3.7.17
Creating a virtualenv for this project...
Pipfile: /home/username/my_env/Pipfile
Using /home/username/.pyenv/versions/3.7.17/bin/python3.7m (3.7.17) to create virtualenv...
⠹ Creating virtual environment...created virtual environment CPython3.7.17.final.0-64 in 136ms
creator CPython3Posix(dest=/home/username/.local/share/virtualenvs/my_env-_8TM2tw_, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/username/.local/share/virtualenv)
added seed packages: pip==23.1.2, setuptools==68.0.0, wheel==0.40.0
  activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
✔ Successfully created virtual environment!
Virtualenv location: /home/username/.local/share/virtualenvs/my_env-_8TM2tw_
</code></pre>
<p>...until it does not?</p>
<pre class="lang-bash prettyprint-override"><code>Installing python...
Error: An error occurred while installing python!
Error text:
ERROR: Could not find a version that satisfies the requirement python (from versions: none)
ERROR: No matching distribution found for python
✘ Installation Failed
</code></pre>
<p>...but then it works anyway:</p>
<pre class="lang-bash prettyprint-override"><code>$ pipenv shell
Launching subshell in virtual environment...
. /home/username/.local/share/virtualenvs/my_env-_8TM2tw_/bin/activate
username@TUD259847-ubuntu:~/my_env$ . /home/username/.local/share/virtualenvs/my_env-_8TM2tw_/bin/activate
(my_env) username@TUD259847-ubuntu:~/my_env$ python
Python 3.7.17 (default, Jul 18 2023, 20:23:41)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
</code></pre>
<p>I could just accept this and move on, but I'd like to understand what's going on here and make sure that Python is working correctly, because further down the line I'm getting cryptic errors trying to compile wxpython (see <a href="https://stackoverflow.com/questions/76714994/recover-config-log-after-failed-pip-install">here</a>), and I want to rule this issue out as the cause of those mystery compiler problems...</p>
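<p>If I'm reading the log right, the trailing arguments were parsed as package names: the venv was created fine from the pinned interpreter, and the failure is pip looking for a PyPI package literally called <code>python</code>, which matches the "No matching distribution found for python" error. A sketch of what I suspect was the intended invocation, using pipenv's <code>--python</code> option to select the interpreter instead:</p>

```bash
# "pipenv install python 3.7.17" tries to pip-install packages named
# "python" and "3.7.17"; the interpreter is pinned via --python:
pipenv --python 3.7.17 install
```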
| <python><virtualenv><pipenv> | 2023-07-18 18:53:35 | 1 | 3,363 | Zak |
76,715,719 | 1,079,017 | How to resolve "The 'pip' distribution was not" when using pyenv on Mac | <p>I'm attempting to install codalab worksheets cli. I have a Mac M2 machine, with a python version 3.10 installed through homebrew. I tried just <code>pip install</code>ing the codalab package, but it didn't work and some digging showed that codalab might not work yet on python 3.10. In one of the <a href="https://github.com/codalab/codalab-worksheets/blob/master/.github/workflows/release.yml" rel="nofollow noreferrer">codalab CI</a> yamls I saw that python version 3.7 is specified, so I installed <code>pyenv</code>, configured my python homebrew installation in it, and got python 3.7. Running <code>pyenv versions</code> I get:</p>
<pre><code> system
3.7.17
3.8.17
* 3.10 --> /opt/homebrew/opt/python@3.10 (set by /Users/a/.pyenv/version)
</code></pre>
<p>(I also tried 3.8). However, doing <code>pip install codalab</code> after the right python version was chosen with <code>pyenv global ..</code> fails with the following error:</p>
<pre><code> pkg_resources.DistributionNotFound: The 'pip' distribution was not found and is required by the application
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
</code></pre>
<p>When I run <code>pip --version</code> and <code>python --version</code> I get the expected values:</p>
<pre><code>pip 23.2 from /Users/a/.pyenv/versions/3.7.17/lib/python3.7/site-packages/pip (python 3.7)
Python 3.7.17
</code></pre>
<p>I looked this error up online, and people always seem to have pip missing from their <code>PATH</code> or something, but mine is there (in the pyenv shims).
Thanks for any help.</p>
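<p>Not a definite fix, but a few checks that often narrow this kind of mismatch down, all assuming pyenv's shims come first on <code>PATH</code>:</p>

```bash
# Confirm pip and python resolve to the same pyenv version
pyenv which python
pyenv which pip
python -m pip --version        # invoke pip through the interpreter itself, bypassing stale shims
python -m ensurepip --upgrade  # repair pip's distribution metadata inside this version
pyenv rehash                   # regenerate shims after (re)installing entry points
```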
| <python><pip><python-3.7><pyenv> | 2023-07-18 18:40:45 | 0 | 6,148 | rel-s |
76,715,704 | 95,245 | using pandas date_range with an unspecified end date | <p>Is there a typical way to use pandas date_range to specify an open-ended period of time?</p>
<p>For example, a contract where the end date is determined by some event, so only the start date is known when the agreement is created.</p>
<p>** UPDATE **</p>
<p>I wound up flagging an unspecified end date like so</p>
<pre><code>pd.date_range(pd.Timestamp.now(), freq="N", periods=1)
</code></pre>
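<p>For anyone finding this later, a sketch of the alternative convention I considered — storing the end as <code>NaT</code> until the terminating event occurs, and only then materializing a concrete range (dates here are arbitrary examples):</p>

```python
import pandas as pd

# Represent "no end date yet" with NaT on the record itself.
contract = pd.Series({'start': pd.Timestamp('2023-07-18'), 'end': pd.NaT})
open_ended = pd.isna(contract['end'])

# Once the terminating event supplies an end date, build the real range.
contract['end'] = pd.Timestamp('2023-07-25')
days = pd.date_range(contract['start'], contract['end'], freq='D')
```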
| <python><pandas><date-range> | 2023-07-18 18:38:33 | 0 | 12,921 | Berryl |