QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,564,880 | 1,422,096 | Built-in decorator to log method calls | <p>The following code (or custom solutions from <a href="https://stackoverflow.com/questions/5103735/better-way-to-log-method-calls-in-python">Better way to log method calls in Python?</a> or solutions with <code>inspect</code>) works to log method calls:</p>
<pre><code>import logging, sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logger = logging.getLogger()
def log(func):
    def logged(*args, **kwargs):
        logger.info(f"{func.__name__} starting with {args=} {kwargs=}...")
        result = func(*args, **kwargs)
        logger.info(f"{func.__name__} finished.")
        return result
    return logged

class A:
    @log
    def foo(self, x, y):
        pass

    @log
    def bar(self, x, y):
        1/0

a = A()
a.foo(2, y=3)
a.bar(2, y=3)
</code></pre>
<p>Output:</p>
<pre><code>INFO:root:foo starting with args=(<__main__.A object at 0x0000000002784EE0>, 2) kwargs={'y': 3}...
INFO:root:foo finished.
INFO:root:bar starting with args=(<__main__.A object at 0x0000000002784EE0>, 2) kwargs={'y': 3}...
...
</code></pre>
<p>Is there a built-in solution in current Python versions (either in <code>logging</code> or in another module)?</p>
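<p>As an aside while reading the decorator above: wrapping without <code>functools.wraps</code> hides the original function's metadata (<code>__name__</code>, <code>__doc__</code>). A minimal sketch of the same decorator with it added (still a custom solution, not the built-in being asked about):</p>

```python
import functools
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logger = logging.getLogger()

def log(func):
    @functools.wraps(func)  # keeps __name__, __doc__, etc. on the wrapper
    def logged(*args, **kwargs):
        logger.info(f"{func.__name__} starting with {args=} {kwargs=}...")
        result = func(*args, **kwargs)
        logger.info(f"{func.__name__} finished.")
        return result
    return logged

@log
def add(x, y):
    return x + y

print(add(2, 3), add.__name__)  # 5 add
```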
| <python><python-decorators><python-logging> | 2023-11-28 14:55:30 | 1 | 47,388 | Basj |
77,564,689 | 512,480 | VScode attach to unprepared remote python process? | <p>I have a Python process running on a Raspberry Pi elsewhere in my house. It has been working smoothly enough, long enough, that I quit running it in debug mode and just ran it the normal way, directly from the Raspberry Pi. I did not use debugpy. Now it shows evidence of a bug. I'd like to attach from my development computer, have a debugging session, and see what's going on in there. Is this possible?</p>
<p>Even if not, if I had started it with debugpy, I'd like to know for future reference: what setup steps would it take?</p>
| <python><visual-studio-code><remote-debugging> | 2023-11-28 14:29:38 | 1 | 1,624 | Joymaker |
77,564,517 | 3,521,180 | How to dynamically insert a random value into a column in Excel through pandas? | <p>I have the two functions below:</p>
<pre><code>import pandas as pd
import random
import string
def random_alphanumeric(length, hyphen_interval=4):
    characters = string.ascii_letters + string.digits
    random_value = ''.join(random.choice(characters) for _ in range(length))
    return '-'.join(random_value[i:i + hyphen_interval] for i in range(0, len(random_value), hyphen_interval))

def process_excel(xl_input_file, xl_output_file):
    df = pd.read_excel(xl_input_file)
    df['Value'] = pd.to_numeric(df['Value'], errors='coerce')
    updated_rows = []
    for index, row in df.iterrows():
        updated_rows.append(pd.DataFrame([row], columns=df.columns))
        if row['Value'] != 0:
            new_row = row.copy()
            new_row['Value'] = -row['Value']
            updated_rows.append(pd.DataFrame([new_row], columns=df.columns))
            new_row['ID'] = random_alphanumeric(16, hyphen_interval=4)
            new_row['gla'] = '2100-abc'
    updated_df = pd.concat(updated_rows, ignore_index=True)
    updated_df.to_excel(xl_output_file, index=False)
# Example usage
xl_input_file = 'input.xlsx'
xl_output_file = 'updated_file.xlsx'
process_excel(xl_input_file, xl_output_file)
</code></pre>
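<p>For readers skimming: the helper itself behaves as intended, so the missing <code>ID</code> values are not its fault. A standalone check of just that function:</p>

```python
import random
import string

def random_alphanumeric(length, hyphen_interval=4):
    characters = string.ascii_letters + string.digits
    random_value = ''.join(random.choice(characters) for _ in range(length))
    return '-'.join(random_value[i:i + hyphen_interval]
                    for i in range(0, len(random_value), hyphen_interval))

value = random_alphanumeric(16, hyphen_interval=4)
print(value)       # e.g. 'aB3d-9xYz-Qw2e-R7tU'
print(len(value))  # 19: 16 characters plus 3 hyphens
```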
<p>Functionality of the functions:</p>
<ol>
<li>I have an excel file with columns <code>ID</code>, <code>gla</code>, <code>Value</code>, and 4 more columns.</li>
<li>The <code>Value</code> column holds numeric data which can be negative or positive, e.g. 23245 or -7989 for any row.</li>
<li>The <code>process_excel()</code> function is supposed to convert the corresponding positive value to negative, and vice versa; i.e. for the example in point 2, for 23245 there will be -23245, and for -7989 it would be 7989. The function is doing this conversion.</li>
</ol>
<p>My challenge is that I also want to enter a randomly generated value in the <code>ID</code> column and a hard-coded value in the <code>gla</code> column. I have written a reusable function called <code>random_alphanumeric()</code> that generates an alphanumeric value, and I call it within <code>process_excel()</code>. But it doesn't seem to have any effect in the new excel file that is generated. Please suggest what the issue in my code could be.</p>
<p>Note: the rest of the 4 columns will be empty.</p>
| <python><python-3.x><pandas><dataframe> | 2023-11-28 14:03:30 | 1 | 1,150 | user3521180 |
77,564,403 | 11,342,139 | How to resolve PulseAudio connection test failed: Failed to connect to pulseaudio server? | <p>For multiple days now I have been trying to set up a Python service that runs on Debian and streams music via Bluetooth. The Bluetooth part works fine; I am using <code>dbus</code>. The problem comes with the <code>pulsectl</code> PulseAudio client running on the Debian computer. The error is below. It keeps saying that it cannot find the <code>/run/user/1001/pulse</code> directory, but when I do <code>ls</code> it is already there.</p>
<pre><code>Failed to create secure directory (/run/user/1001/pulse): No such file or directory
Nov 28 14:32:59 triton python3[550]: ERROR app.root - PulseAudio connection test failed: Failed to connect to pulseaudio server
Nov 28 14:32:59 triton python3[550]: ERROR app.root - Cannot connect to PulseAudio server. Exiting...
</code></pre>
<p>I have some logic to first check whether <code>pulsectl</code> is connected or not, and this is where it breaks:</p>
<pre><code>def test_pulseaudio_connection():
    try:
        with pulsectl.Pulse("test-connection") as pulse:
            return True
    except Exception as e:
        logging.error(f"PulseAudio connection test failed: {e}")
        return False

if __name__ == "__main__":
    os.environ["XDG_RUNTIME_DIR"] = f"/run/user/{os.getuid()}"
    # First, test the PulseAudio connection
    if not test_pulseaudio_connection():
        logging.error("Cannot connect to PulseAudio server. Exiting...")
        exit(1)  # Exits the program if PulseAudio is not running
</code></pre>
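<p>The "Failed to create secure directory" message usually means the directory is missing, wrongly owned, or too permissive from the service's point of view, even when it looks fine from your shell. A stdlib-only diagnostic sketch (the helper name and checks are my own, not part of pulsectl):</p>

```python
import os
import stat
import tempfile

def check_runtime_dir(path, uid):
    """Hypothetical diagnostic: does PulseAudio's runtime dir exist, belong
    to this uid, and have owner-only permissions as pulse expects?"""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return f"{path} does not exist (from this process's point of view)"
    if st.st_uid != uid:
        return f"{path} is owned by uid {st.st_uid}, not {uid}"
    if stat.S_IMODE(st.st_mode) & 0o077:
        return f"{path} is accessible by other users (mode {oct(st.st_mode)})"
    return "ok"

# demo on a throwaway directory instead of /run/user/1001
with tempfile.TemporaryDirectory() as d:
    os.chmod(d, 0o700)
    print(check_runtime_dir(d, os.getuid()))  # ok
```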
<p>I have also set up the systemd service file, passing an environment file and choosing a user to run it, because I read that PulseAudio should not be run by root.</p>
<pre><code>[Service]
ExecStart=/usr/bin/python3 /opt/backend/src/app.py
WorkingDirectory=/opt/backend/src
Environment="XDG_RUNTIME_DIR=/run/user/1001"
Environment="ENVIRONMENT=development"
User=user
Group=audio
Group=root
Restart=always
[Device]
DeviceAllow=/dev/ttyRS232_A rw
[Install]
WantedBy=multi-user.target
</code></pre>
<p>I have no idea why it keeps showing this error.</p>
| <python><debian><dbus><pulseaudio> | 2023-11-28 13:47:48 | 1 | 1,046 | Angel Hadzhiev |
77,564,191 | 860,233 | FastAPI Websocket messages are being queued up and only being sent at the end of the user session | <p>For some reason, some of my websocket messages are only being sent when a session is over or right before a websocket becomes disconnected. I have connection and session management code and I'm making sure to pass around the same websocket object, even verifying the ID as a debugging step. I am relatively new to Python, so I'm at my wit's end here!</p>
<p>The websocket send in the connect function of my main class works instantly, but the websocket call in the <code>new_group_chat_received_message</code> function of my <code>AutogenChat</code> class is only queued and not sent.</p>
<p>Here is my code:</p>
<p><strong>main.py</strong></p>
<pre><code>class ConnectionManager:
    def __init__(self):
        self.active_connections: Dict[str, AutogenChat] = {}

    async def connect(self, autogen_chat: AutogenChat, client_id: str):
        await autogen_chat.websocket.accept()
        logger.info(f"attempting to connect: {client_id}")  # Log when a connection is opened
        if client_id in self.active_connections:
            # Handle existing connection, e.g., close it or notify the client
            existing_chat = self.active_connections[client_id]
            await existing_chat.websocket.close(code=1001, reason="New connection made")
        self.active_connections[client_id] = autogen_chat
        await autogen_chat.websocket.send_text("connected")  # this websocket send appears instantly in my client

    async def disconnect(self, autogen_chat: AutogenChat, client_id: str):
        # await autogen_chat.client_receive_queue.put_nowait("DO_FINISH")
        print(f"autogen_chat {autogen_chat.client_id} disconnected")
        self.active_connections[client_id].websocket = None
        self.active_connections.pop(client_id)

manager = ConnectionManager()

@app.get("/")
async def home():
    return "Welcome Home"

@app.get("/get-chat-id")
async def get_chat_id():
    client_id = str(uuid.uuid1())
    while client_id in client_ids:
        client_id = str(uuid.uuid1())
    client_ids.append(client_id)
    return JSONResponse({"client_id": client_id})

@app.websocket("/autogen/{client_id}")
async def websocket_endpoint(websocket: WebSocket, client_id: str):
    # clients[client_id] = websocket
    autogen_chat = AutogenChat(websocket=websocket, client_id=client_id, logger=logger)
    await manager.connect(autogen_chat, client_id)
    try:
        await manager.active_connections[client_id].connect(client_id)
    except Exception as e:
        logger.error(f"Error in WebSocket connection with client_id {client_id}: {e}")  # Log exceptions
    finally:
        try:
            logger.info(
                f"Closing WebSocket connection with client_id {client_id}")  # Log when a connection is closed
            await manager.disconnect(autogen_chat, client_id)
        except Exception as e:
            logger.error(f"Error while closing WebSocket connection with client_id {client_id}: {e}")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
<p><strong>AutogenChat class</strong></p>
<pre><code>class AutogenChat:
    def __init__(self, client_id=None, websocket=None, logger=None):
        self.groupchat = None
        self.websocket = websocket
        self.logger = logger
        self.client_id = client_id
        self.product_manager_assistant = autogen.AssistantAgent(
            name="product_manager",
            llm_config=llm_config,
            system_message="Your job is...."
        )
        self.ux_designer_assistant = autogen.AssistantAgent(
            name="user_experience_designer",
            llm_config=llm_config,
            system_message="Your job is to..."
        )
        self.user_proxy = UserProxyWebAgent(
            name="human_admin",
            human_input_mode="ALWAYS",
            max_consecutive_auto_reply=5,
            default_auto_reply="APPROVED",
            is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
            code_execution_config=False,
            llm_config=llm_config,
            system_message="""A human admin. Interact with the team to receive feedback on ideas. Plan execution needs to be approved by this admin.
            Only say APPROVED in most cases, and say TERMINATE when nothing is to be done further. Do not say others."""  # similarly...
        )
        try:
            self.user_proxy.set_websocket(self.websocket)
            self.user_proxy.set_logger(self.logger)
        except Exception as e:
            logger.error(f"Error in processing: {e}")

    async def new_group_chat_received_message(self, recipient, messages, sender, config):
        if messages:
            content = messages[-1]['content']
            # print ("message is ")
            # print [messages[-1]]
            if 'name' in messages[-1]:
                name = messages[-1]['name']
                # if (name != "human_admin"):
                payload = {'message': content, 'sender': name}
                json_payload = json.dumps(payload)
                try:
                    print("before trying")
                    await self.websocket.send_text(json_payload)  # This websocket send is queued up and only appears in the client after the session ends
                    print("after trying")
                except Exception as e:
                    self.logger.error(f"Error in sending websocket message: {e}")  # Log exceptions
                self.logger.info(f"Sent message from new group chat received: {json_payload}")
        return False, None

    async def start_group_chat(self, message):
        self.user_proxy.register_reply([autogen.Agent, None], reply_func=self.new_group_chat_received_message,
                                       config={"callback": None})
        self.ux_designer_assistant.register_reply([autogen.Agent, None],
                                                  reply_func=self.new_group_chat_received_message,
                                                  config={"callback": None})
        self.product_manager_assistant.register_reply([autogen.Agent, None],
                                                      reply_func=self.new_group_chat_received_message,
                                                      config={"callback": None})
        self.groupchat = autogen.GroupChat(
            agents=[self.user_proxy, self.ux_designer_assistant, self.product_manager_assistant],
            messages=[], max_round=12)
        self.manager = GroupChatManagerWeb(groupchat=self.groupchat,
                                           llm_config=llm_config,
                                           human_input_mode="ALWAYS")
        await self.user_proxy.a_initiate_chat(
            self.manager,
            clear_history=True,
            message=message
        )
        self.user_proxy.stop_reply_at_receive(self.manager)

    async def send_message(self, message):
        print("sending message " + message)
        await self.user_proxy.a_send(message,
                                     self.manager)
        return self.user_proxy.last_message()["content"]

    async def connect(self, client_id):
        await self.websocket.send_text("Connecting to websocket...")
        while True:
            data = await self.websocket.receive_text()
            # future_calls = asyncio.gather(receive_from_client(autogen_chat, client_id))
            self.logger.info(f"WebSocket connection established with client_id: {client_id}")
            if data:
                message = json.loads(data)
                content = message["message"]["content"]
                message_type = message["message"]["messageType"]
                if message_type == "start":
                    print("we're in start")
                    # await autogen_chat.start_single_chat(content)
                    await self.start_group_chat(content)
                elif message_type == "feedback":
                    print("we're in feedback")
                    await self.send_message(content)
                elif content == "DO_FINISH":
                    break
</code></pre>
<p>I'm also wondering if I should invest time in understanding asyncio better, something I've seen around but haven't quite tried to wrap my head around yet.</p>
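<p>On that asyncio point: an awaited <code>send_text</code> only goes on the wire once the event loop regains control, and any long synchronous call inside a coroutine holds everything back until it finishes. A stdlib-only sketch of that effect (no FastAPI or websockets involved):</p>

```python
import asyncio
import time

events = []

async def sender():
    # stands in for the coroutine doing websocket sends
    for i in range(3):
        events.append(f"send {i}")
        await asyncio.sleep(0)  # yield so the loop can actually flush work

async def blocking_worker():
    # stands in for a long synchronous call (e.g. a blocking LLM request)
    events.append("work start")
    time.sleep(0.2)  # the event loop is stuck here for the whole 200 ms
    events.append("work end")

async def main():
    await asyncio.gather(blocking_worker(), sender())

asyncio.run(main())
print(events)  # every "send i" entry appears only after "work end"
```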
<p>Thanks!</p>
| <python><reactjs><fastapi> | 2023-11-28 13:17:52 | 0 | 930 | Glenncito |
77,564,148 | 15,222,211 | How can I retrieve the mandatory attributes of a Pydantic object? | <p>How to identify mandatory attributes in a Pydantic object.
In the following example, the method <code>mandatory_attributes</code> does it manually.
Is it possible to identify all mandatory attributes automatically?</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field
class MyClass(BaseModel):
    mandatory1: str = Field(description="mandatory")
    mandatory2: str = Field(description="mandatory")
    optional: str = Field(default="", description="optional")

    def mandatory_attributes(self):
        """Need to fix this method to identify mandatory attributes automatically."""
        return ["mandatory1", "mandatory2"]

obj = MyClass(mandatory1="a", mandatory2="b", optional="c")
result = obj.mandatory_attributes()
assert result == ["mandatory1", "mandatory2"]
</code></pre>
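<p>Assuming Pydantic v2, where every model class exposes <code>model_fields</code> and each <code>FieldInfo</code> has an <code>is_required()</code> method, the hard-coded list can be derived automatically. A sketch, to be checked against your exact Pydantic version:</p>

```python
from pydantic import BaseModel, Field

class MyClass(BaseModel):
    mandatory1: str = Field(description="mandatory")
    mandatory2: str = Field(description="mandatory")
    optional: str = Field(default="", description="optional")

    def mandatory_attributes(self):
        # model_fields maps field name -> FieldInfo; is_required() is True
        # for fields that have no default value
        return [name for name, info in type(self).model_fields.items()
                if info.is_required()]

obj = MyClass(mandatory1="a", mandatory2="b")
print(obj.mandatory_attributes())  # ['mandatory1', 'mandatory2']
```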
| <python><pydantic> | 2023-11-28 13:11:22 | 1 | 814 | pyjedy |
77,564,110 | 4,451,521 | How can I make a histogram with two consecutive bars in plotly? | <p>I have the following code:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Sample data (replace this with your actual DataFrame)
data = {
'CU': [1.5, 2.3, 1.8, 3.2, 2.5, 2.0, 3.8, 3.0],
'ER': [0.2, 0.5, np.nan, 0.7, 0.8, 0.4, 0.9, 0.6],
}
df = pd.DataFrame(data)
print(data)
# Create a new column 'Valid_CU' where CU values are replaced with NaN if ER is NaN
df['Valid_CU'] = df['CU'].where(~df['ER'].isna())
hist_values, bin_edges, patches = plt.hist([df['CU'], df['Valid_CU']], bins=3, label=['Total Data', 'Valid Data'])
plt.xlabel('CU Values')
plt.ylabel('Frequency')
plt.title('Histogram of CU Values with Valid Data Counts')
plt.legend()
print("hist", hist_values, "bin edges", bin_edges, "patches", patches)
# Show the plot
plt.show()
</code></pre>
<p>with this I get:</p>
<p><a href="https://i.sstatic.net/t157v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t157v.png" alt="enter image description here" /></a></p>
<p>This is very nice! I have tried to do the same thing with Plotly without success. Plotly histograms give ugly representations and, worst of all, the values are wrong.</p>
<p>How can I do the above in Plotly?</p>
<p>(Notice that in there, the number of bins is customizable. I want the same thing in Plotly.)</p>
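<p>One workaround worth noting (my own suggestion, not from the question): compute the bins yourself with numpy, using edges shared by both series, then feed the counts to two <code>go.Bar</code> traces with <code>barmode='group'</code>. The binning part alone, which keeps the number of bins customizable:</p>

```python
import numpy as np

cu = np.array([1.5, 2.3, 1.8, 3.2, 2.5, 2.0, 3.8, 3.0])
valid = np.array([1.5, 2.3, 3.2, 2.5, 2.0, 3.8, 3.0])  # rows where ER is not NaN

# shared edges so both series are binned identically (bins=3, as in the question)
edges = np.histogram_bin_edges(cu, bins=3)
total_counts, _ = np.histogram(cu, bins=edges)
valid_counts, _ = np.histogram(valid, bins=edges)
centers = (edges[:-1] + edges[1:]) / 2  # x positions for the go.Bar traces

print(total_counts.sum(), valid_counts.sum())  # 8 7
```

<p>The <code>centers</code> and the two count arrays can then be passed as <code>x</code> and <code>y</code> to the bar traces.</p>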
<h2>Post script</h2>
<p>I would rather the following code not influence any answers; I include it as a report of what I tried. The most I could do was:</p>
<pre><code>fig = go.Figure()
fig.add_trace(go.Bar(x=df['CU'], y=[1] * len(df), name='Total Data'))
fig.add_trace(go.Bar(x=df['Valid_CU'], y=[1] * len(df), name='Valid Data'))
fig.update_layout(
    title_text='Count of CU Values with Valid Data Counts',
    xaxis_title_text='CU Values',
    yaxis_title_text='Count',
    # barmode='overlay',  # overlay bars
    barmode='group'
)
# Show the plot
fig.show()
</code></pre>
<p>But here the number of bins is not customizable.</p>
| <python><plotly> | 2023-11-28 13:03:35 | 1 | 10,576 | KansaiRobot |
77,563,999 | 12,851,171 | Passing arguments to SparkKubernetesOperator | <p>I am using Spark with Airflow, but I am not able to pass arguments. I have tried multiple ways; please suggest the right way to do this.</p>
<p>dag.py file:</p>
<pre><code>base_operator = SparkKubernetesOperator(
    application_file="spark-pi.yaml",
    task_id='segment_tag_refresh_process',
    namespace="spark-jobs",
    api_group="sparkoperator.k8s.io",
    api_version="v1beta2",
    parms={"ID": '1'},
    dag=dag
)
</code></pre>
<p>spark-pi.yaml</p>
<pre><code>apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-create-file
spec:
  type: Scala
  mode: cluster
  image: imagefilename
  imagePullSecrets:
    - sparkairlow
  imagePullPolicy: IfNotPresent
  mainClass: org.apache.spark.examples.
  mainApplicationFile: local:///data/processing.py
  arguments: {{ parms.ID }}
  sparkVersion: 3.5.0
  sparkConf:
    spark.eventLog.enabled: "true"
    spark.eventLog.dir: /data/logs
  ....
  other configurations
  ....
</code></pre>
<p>In <code>processing.py</code> I read the arguments via <code>sys.argv</code>:</p>
<pre><code>import sys
print("**********",sys.argv)
</code></pre>
<p>But I am not able to find the arguments there.</p>
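<p>For reference, a quick way to confirm what actually reaches the driver script is a small guard around <code>sys.argv</code> (the helper name is mine, purely illustrative):</p>

```python
import sys

def get_job_id(argv):
    # argv[0] is the script path; the first real argument follows it
    if len(argv) < 2:
        raise SystemExit(f"expected an ID argument, got only {argv!r}")
    return argv[1]

# simulated: this is what argv would look like if the template rendered "1"
print(get_job_id(["processing.py", "1"]))  # 1
```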
<p>If anything is missing, please ask and I'll update.</p>
| <python><apache-spark><kubernetes><pyspark><airflow> | 2023-11-28 12:45:13 | 1 | 303 | Shivam Gupta |
77,563,897 | 3,935,797 | BERTopic classifying over a quarter of documents in outlier topic -1 | <p>I am running BERTopic with default options</p>
<pre><code>import pandas as pd
from sentence_transformers import SentenceTransformer
import time
import pickle
from bertopic import BERTopic
llm_mod = "all-MiniLM-L6-v2"
model = SentenceTransformer(llm_mod)
embeddings = model.encode(skills_augmented, show_progress_bar=True)
bertopic_model = BERTopic(verbose=True)
</code></pre>
<p>I have a dataset of 40,000 documents that are only one short sentence. 13,573 of the documents get placed in the -1 topic (below distribution across top 5 topics).</p>
<pre><code>-1 13573
0 1593
1 1043
2 628
3 627
</code></pre>
<p>From the documentation: "The -1 refers to all outliers and should typically be ignored." Is there a parameter I can use to get fewer documents in -1? Perhaps get a more even distribution across topics? Would running k-means be better?</p>
| <python><nlp><bert-language-model><topic-modeling> | 2023-11-28 12:32:26 | 1 | 1,028 | RM- |
77,563,738 | 12,163,252 | groupby and aggregate on the resulting groups | <p>I have a dataframe with a column Status which values are either 'OPEN' or 'CLOSED'. I would like to do a <code>groupby()</code> on multiple columns and use the following rule for that Status column: <em>if there is one or more 'OPEN' values in the group then the aggregate should return 'OPEN' else 'CLOSED'</em></p>
<p>I tried the following:</p>
<pre><code>df_agg = (df.groupby(['col1', 'col2', 'col3'], as_index=False)
            .agg({'col4': 'sum', 'Status': lambda x: np.where(x == 'OPEN', 'OPEN', 'CLOSED')})
            .reset_index(drop=True))
</code></pre>
<p>but this returns lists like ['OPEN', 'CLOSED'], ['OPEN', 'CLOSED', 'CLOSED'] and so on. Is there a better way using the <code>agg()</code> function to return a single value of 'OPEN' or 'CLOSED' rather than doing the <code>groupby()</code> and then doing again something like below?</p>
<pre><code>np.where(df_agg['Status'].str.contains('OPEN'), 'OPEN', 'CLOSED')
</code></pre>
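<p>One pattern that avoids the post-hoc <code>np.where</code> is an aggregation function that already returns a scalar per group. A sketch on toy data of my own, not claimed to be the fastest option:</p>

```python
import pandas as pd

# hypothetical toy frame with the same column names as the question
df = pd.DataFrame({
    'col1': ['a', 'a', 'b'],
    'col2': [1, 1, 2],
    'col3': ['x', 'x', 'y'],
    'col4': [10, 5, 7],
    'Status': ['OPEN', 'CLOSED', 'CLOSED'],
})

# the lambda collapses each group's Status series to a single value:
# 'OPEN' if the group contains at least one 'OPEN', else 'CLOSED'
df_agg = (df.groupby(['col1', 'col2', 'col3'], as_index=False)
            .agg(col4=('col4', 'sum'),
                 Status=('Status', lambda s: 'OPEN' if s.eq('OPEN').any() else 'CLOSED')))
print(df_agg)
```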
| <python><pandas><group-by> | 2023-11-28 12:07:09 | 2 | 453 | Hotone |
77,563,636 | 2,641,187 | Python type hints for type promotion | <p>Consider a function that performs type promotion, e.g. a simple multiplication of two numbers that can both be either <code>int</code> or <code>float</code>:</p>
<pre class="lang-py prettyprint-override"><code>def mul(a: int | float, b: int | float): # return type?
    return a * b
</code></pre>
<p>This function returns <code>float</code>, except in the case where both <code>a</code> and <code>b</code> are <code>int</code>.</p>
<p>How can I properly and concisely annotate the return type? I know I can do this with <code>@overload</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload

@overload
def mul(a: int, b: int) -> int: ...
@overload
def mul(a: float, b: int | float) -> float: ...
@overload
def mul(a: int | float, b: float) -> float: ...
def mul(a, b):
    return a * b
</code></pre>
<p>but this is very verbose and requires many overloads for something I would imagine some "type function" should handle. In C++ this could be done e.g. with <a href="https://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error" rel="noreferrer">SFINAE</a>. Is there something similar I can do in Python in terms of a generic function along the lines of</p>
<pre class="lang-py prettyprint-override"><code>def mul(a: T1, b: T2) -> promote_types(T1, T2):
    return a * b
</code></pre>
<p>that also works with TypeVars? I don't expect anything built in that already works for <code>int</code> and <code>float</code>, but some technique perhaps?</p>
<p>Notes:</p>
<ul>
<li><p>I know about the recommendation to just annotate everything taking an <code>int</code> with <code>float</code>, but my setting has more complicated TypeVars, the choice of <code>int</code> and <code>float</code> here is just a simple example.</p>
</li>
<li><p>I know I can just do <code>Union[int, float]</code>, but I need it to be specific. Depending on the exact types the function is called with, the return type must be exact too, not a union.</p>
</li>
</ul>
| <python><mypy><python-typing> | 2023-11-28 11:50:21 | 3 | 931 | Darkdragon84 |
77,563,490 | 4,350,650 | Can I assign a specific environment/docker image to my Kaggle notebook? | <p>I want to create a new Kaggle notebook that relies on the same environment as an older Kaggle notebook of mine.<br />
<strong>Why?</strong> Because I want to use the commit & save function that lets you run things in the background, without it re-running my entire original notebook, which would do a lot of unnecessary compute for my test.</p>
<p>From my understanding, when you create a new Kaggle notebook it is assigned the most recent environment as of today. You can then choose to have it locked, or keep your environment evolving by staying on the latest available version.<br />
However, is it possible to assign <strong>an older version</strong> of the environment? Basically, on the docker image I get today, the code I want to test does not run because of some library version conflicts. Hundreds of libraries have been updated within a month, so pinpointing the exact origin of the bug is extremely tedious. I already tried to roll back the libraries I suspected initially, without success.<br />
The solution I have now is basically commenting out my whole notebook to keep just the cells I want to run for the test, but that is also not efficient.</p>
| <python><docker><jupyter-notebook><kaggle> | 2023-11-28 11:25:51 | 0 | 2,099 | Mayeul sgc |
77,563,309 | 3,751,931 | shap force plots for multiclass problems | <p>I am trying to get the force plots for a given test example, for all classes, to show in the same plot in a multiclass classification problem.</p>
<p>My best attempt:</p>
<pre><code>explainer = shap.TreeExplainer(model)
shap_test = explainer.shap_values(x_test)
fig, axs = plt.subplots(6, 1, figsize=(10, 5 * 6))
for i in range(6):
    axs[i].set_title(f"Waterfall Plot - Class {i}")
    decision_plot = shap.force_plot(
        explainer.expected_value[i], shap_test[0][i], x_test.columns, show=False
    )
plt.show()
</code></pre>
<p>Unfortunately this still fails miserably:
<a href="https://i.sstatic.net/Jd3cM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jd3cM.png" alt="enter image description here" /></a></p>
<p>Does anyone have a solution for this?</p>
| <python><machine-learning><shap><xai> | 2023-11-28 10:56:02 | 0 | 2,391 | shamalaia |
77,563,236 | 6,221,742 | GPT4All prompt size | <p>Name: gpt4all
Version: 2.0.2</p>
<p>I am trying to query my PostgreSQL database using the GPT4All package.
Below is the code:</p>
<pre><code>from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_experimental.sql import SQLDatabaseChain
from langchain import SQLDatabase
from langchain.llms import GPT4All
import os
username = "postgres"
password = "password"
host = "127.0.0.1" # internal IP
port = "5432"
mydatabase = "reporting_db"
pg_uri = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{mydatabase}"
my_db = SQLDatabase.from_uri(pg_uri)
PROMPT = """
Given an input question, first create a syntactically correct postgresql query to run,
then look at the results of the query and return the answer.
The question: {question}
"""
path = "./models/mistral-7b-openorca.Q4_0.gguf"
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=path,
              callbacks=callbacks,
              n_threads=3,
              max_tokens=5162,
              verbose=True
              )

db_chain = SQLDatabaseChain.from_llm(llm=llm,
                                     db=my_db,
                                     verbose=True
                                     )

question = "Describe the table Sales"
answer = db_chain.run(PROMPT.format(question=question))
print(answer)
</code></pre>
<p>but I am getting the following error,</p>
<pre><code>ERROR: sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at
or near "ERROR"
LINE 1: ERROR: The prompt size exceeds the context window size and c...
^
[SQL: ERROR: The prompt size exceeds the context window size and cannot be processed.]
(Background on this error at: https://sqlalche.me/e/20/f405)
</code></pre>
<p>Is there a parameter I should change in order to overcome this limitation?</p>
| <python><langchain><py-langchain><gpt4all> | 2023-11-28 10:45:53 | 1 | 339 | AndCh |
77,563,053 | 8,037,521 | Installing local dependency with pip install | <p>I am using the Setuptools build backend for a <code>pyproject.toml</code>-based project.</p>
<p>I have this project structure:</p>
<pre><code>- project_root
- dependency_lib
- pyproject.toml
- * other files *
- src
- package_name
- __init__.py
* different .py files *
- README.md
- pyproject.toml
</code></pre>
<p>Now I want to install my local package in <code>src/package_name</code> using <code>pip install .</code> or <code>pip install -e .</code> In <code>pyproject.toml</code>, I have different dependencies which are working fine. However, the local package also depends on <code>dependency_lib</code>, which is a separate package with its own <code>pyproject.toml</code>, and which is added as a git submodule. I want <code>dependency_lib</code> to be installed automatically when I use <code>pip install .</code>.</p>
<p>Can I do this by specifying dependencies in <code>pyproject.toml</code>? If so, how?</p>
<p>I saw a suggestion to add the dependency as <code>"dependency_lib@ file:///{$PROJECT_ROOT}/dependency_lib"</code>, but this fails due to unknown <code>$PROJECT_ROOT</code>. Is there a way to get the path automatically, based on the location of <code>pyproject.toml</code>? I would prefer not to add it manually to my environment variables.</p>
<p>Alternatively, is there a better way to structure the project (e.g. putting <code>dependency_lib</code> somewhere else) so that this works more simply?</p>
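<p>One workaround (an assumption about the tooling, not something <code>project.dependencies</code> supports portably, since PEP 508 direct references there must be absolute URLs): list the submodule in a requirements file and install from the project root, letting pip resolve the relative path from the current working directory:</p>

```text
# requirements.txt, placed next to pyproject.toml
./dependency_lib
-e .
```

<p>installed with <code>pip install -r requirements.txt</code> run from the project root.</p>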
| <python><setuptools><python-packaging> | 2023-11-28 10:20:23 | 0 | 1,277 | Valeria |
77,562,638 | 9,974,205 | How can I find pairs and triplets of values in a pandas dataframe | <p>I have a pandas dataframe in Python that contains the following pair of columns.</p>
<p>I need to count how many times pairs and triplets of values appear, with and without considering order. As an example, let's say I have a dataframe with two columns, <code>Classification</code> and <code>Individual</code>, and the following toy data:</p>
<pre><code>data = {
    'Classification': [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5],
    'Individual': ['A', 'A', 'B', 'B', 'A', 'A', 'B', 'C', 'C', 'C', 'A', 'A', 'A', 'B', 'B', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'A', 'A', 'B', 'B', 'B']
}
</code></pre>
<p>Now, I want to arrive at the following results:</p>
<pre><code>Classification  ValueSeries  TimesClassification  PercentageClassification
1               AB           5                    1
2               AB           5                    1
3               AC           2                    0.4
3               AB           5                    1
3               ABC          3                    0.6
4               AB           5                    1
4               BC           2                    0.4
4               ABC          3                    0.6
5               AC           2                    0.4
5               AB           5                    1
5               ABC          3                    0.6
</code></pre>
<p>That is, for each value of Classification, the unordered pairs and triplets contained within.</p>
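<p>The counting itself can be prototyped with just the stdlib before worrying about the DataFrame plumbing. A sketch on smaller toy data of my own, treating each classification as the set of individuals it contains:</p>

```python
from collections import Counter
from itertools import combinations

# hypothetical smaller data, same shape as the question's
data = {
    'Classification': [1, 1, 2, 2, 3, 3, 3],
    'Individual':     ['A', 'B', 'A', 'B', 'A', 'B', 'C'],
}

# individuals present in each classification, deduplicated
members = {}
for cls, ind in zip(data['Classification'], data['Individual']):
    members.setdefault(cls, set()).add(ind)

# count, across classifications, each unordered pair and triplet
counts = Counter()
for inds in members.values():
    for r in (2, 3):
        counts.update(combinations(sorted(inds), r))

n = len(members)
for combo, times in sorted(counts.items()):
    print(''.join(combo), times, times / n)
```

<p>The same <code>Counter</code> logic can then be fed from <code>df.groupby('Classification')['Individual']</code>.</p>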
| <python><pandas><dataframe><dataset><series> | 2023-11-28 09:24:00 | 1 | 503 | slow_learner |
77,562,470 | 7,959,614 | Merge dictionaries with same key using ChainMap | <p>I have two dictionaries</p>
<pre><code>a = {'123': {'player': 1,
             'opponent': 2},
     '18': {'player': 10,
            'opponent': 12}
     }

b = {'123': {'winner': 1},
     '180': {'winner': 2}
     }
</code></pre>
<p>The goal is to merge them together and get one dictionary that looks as follows:</p>
<pre><code>{'123': {'player': 1,
         'opponent': 2,
         'winner': 1},
 '18': {'player': 10,
        'opponent': 12},
 '180': {'winner': 2}
 }
</code></pre>
<p>I would like to use <code>collections.ChainMap</code> for this purpose. I tried the following</p>
<pre><code>>>> from collections import ChainMap
>>> print(dict(ChainMap(a, b)))
{'123': {'player': 1, 'opponent': 2}, '180': {'winner': 2}, '18': {'player': 10, 'opponent': 12}}
</code></pre>
<p>In the <a href="https://docs.python.org/3/library/collections.html#collections.ChainMap" rel="nofollow noreferrer">docs</a> they create a new class as follows:</p>
<pre><code>class DeepChainMap(ChainMap):
    'Variant of ChainMap that allows direct updates to inner scopes'

    def __setitem__(self, key, value):
        for mapping in self.maps:
            if key in mapping:
                mapping[key] = value
                return
        self.maps[0][key] = value

    def __delitem__(self, key):
        for mapping in self.maps:
            if key in mapping:
                del mapping[key]
                return
        raise KeyError(key)
>>> print(dict(DeepChainMap(a, b)))
{'123': {'player': 1, 'opponent': 2}, '180': {'winner': 2}, '18': {'player': 10, 'opponent': 12}}
</code></pre>
<p>How can I modify the class and get the desired output?</p>
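<p>For what it's worth, the desired shape can also be reached without subclassing at all, by building one <code>ChainMap</code> per top-level key. A sketch, in case modifying <code>DeepChainMap</code> turns out to be more trouble than it's worth:</p>

```python
from collections import ChainMap

a = {'123': {'player': 1, 'opponent': 2},
     '18': {'player': 10, 'opponent': 12}}
b = {'123': {'winner': 1},
     '180': {'winner': 2}}

# one ChainMap per top-level key; missing keys fall back to an empty dict
merged = {k: dict(ChainMap(a.get(k, {}), b.get(k, {})))
          for k in a.keys() | b.keys()}
print(merged['123'])  # contains player, opponent and winner
```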
| <python><dictionary><mapping> | 2023-11-28 08:59:11 | 3 | 406 | HJA24 |
77,562,252 | 14,771,666 | How to find the probability cutoff that maximize inner product of two tensors? | <p>I have two tensors:</p>
<pre><code>import torch
target = torch.randint(2, (3,5)) #tensor of 0s & 1s
pred = torch.rand(3, 5) #tensor of prob
# transformed_pred = ?
</code></pre>
<p>How can I choose a cutoff probability to transform <code>pred</code> into a tensor of 0s & 1s (<code>transformed_pred</code>) so that the dot product between <code>target</code> and <code>transformed_pred</code> is maximized?</p>
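<p>A side note worth prototyping: because <code>target</code> is non-negative, the plain dot product is maximized by the most permissive cutoff (everything set to 1), so a score such as accuracy or F1 is probably what is really wanted. A brute-force threshold search, using numpy in place of torch purely for illustration:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.integers(0, 2, size=(3, 5))  # stands in for the torch 0/1 tensor
pred = rng.random((3, 5))                 # stands in for the probability tensor

# try every distinct predicted probability as a candidate cutoff
best = max(
    (float((target * (pred >= t)).sum()), float(t))
    for t in np.unique(pred)
)
print(best)  # (score, cutoff); the lowest cutoff attains the maximum here
```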
<p>Thanks!</p>
| <python><pytorch><probability><dot-product> | 2023-11-28 08:21:39 | 1 | 368 | Kaihua Hou |
77,562,167 | 3,336,412 | SQLAlchemy maps UUID primary key to string | <p>I'm working with SQLModel and I created the following base table and table:</p>
<pre class="lang-py prettyprint-override"><code>class GUIDModel(PydanticBase):
    """
    Provides a base mixin for tables with GUID as primary key
    """

    guid: Optional[UUID] = Field(
        ...,
        primary_key=True,
        description=DescriptionConstants.GUID,
        sa_column=Column(
            "guid",
            UNIQUEIDENTIFIER,
            nullable=False,
            primary_key=True,
            server_default=text("newsequentialid()"),
        ),
    )

class Project(GUIDModel):
    name: str = Field(max_length=255, description=DescriptionConstants.NAME)
<p>So everything works, but when I try to get a row with SQLAlchemy, it returns the GUID as a string instead of a UUID:</p>
<pre class="lang-py prettyprint-override"><code>def test_get_project(self):
with Session(__get_engine()) as session:
project: Project = Projects._get_project(session)
self.assertEqual(type(project.guid), uuid.UUID)
</code></pre>
<p>Error:</p>
<pre><code><class 'uuid.UUID'> != <class 'str'>
Expected :<class 'str'>
Actual :<class 'uuid.UUID'>
<Click to see difference>
</code></pre>
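<p>As a stopgap on the Python side (a sketch of a workaround, not the SQLModel-level fix), the string coming back from the driver can be coerced by hand, since <code>uuid.UUID</code> accepts the canonical hyphenated form:</p>

```python
import uuid

# Hypothetical value, assuming the driver returns the GUID as a string
raw_guid = "6f9619ff-8b86-d011-b42d-00c04fc964ff"

guid = raw_guid if isinstance(raw_guid, uuid.UUID) else uuid.UUID(raw_guid)
```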
| <python><sqlalchemy><uuid><sqlmodel> | 2023-11-28 08:04:27 | 1 | 5,974 | Matthias Burger |
77,562,148 | 11,267,783 | Issue using GridSpec and colorbar with Matplotlib | <p>I want to create a specific plot using GridSpec. However, with my code, the colorbars are changing the width_ratios.</p>
<p><a href="https://i.sstatic.net/jdKz0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jdKz0.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
data1 = np.random.rand(10,10)
data2 = np.random.rand(10,10)
x = np.linspace(1,10,10)
fig = plt.figure(constrained_layout=True)
gs = GridSpec(2,2,figure=fig, width_ratios=[10,1])
ax = fig.add_subplot(gs[0,0])
img = plt.imshow(data1, aspect='auto')
plt.colorbar(location='left')
ax = fig.add_subplot(gs[0,1])
ax.plot(x, data1[0,:])
ax = fig.add_subplot(gs[1,:])
img = plt.imshow(data2, aspect='auto')
plt.colorbar(location='right')
plt.show()
</code></pre>
<p>I would like the 1d plot to be more on the right and the second 2d plot to be extended on the left. (I don't want the colorbars to both be on the left.)</p>
| <python><matplotlib> | 2023-11-28 08:02:13 | 1 | 322 | Mo0nKizz |
77,561,864 | 9,798,210 | How to use pdf document in the agent using Langchain | <p>My code uses "wikipedia" to search for the relevant content. Below is the code</p>
<h1>Load tools</h1>
<pre><code>tools = load_tools(
["wikipedia"],
llm=llm)
agent = initialize_agent(
tools,
llm,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose=False
)
out = agent(f"Does {var_1} cause {var_2} or the other way around?.")
</code></pre>
<p>Instead of "wikipedia", I want to use my own pdf document that is available in my local. Can anyone help me in doing this?</p>
<p>I have tried using the below code</p>
<pre><code>from langchain.document_loaders import PyPDFium2Loader
loader = PyPDFium2Loader("hunter-350-dual-channel.pdf")
data = loader.load()
</code></pre>
<p>but I am not sure how to include this in the agent.</p>
| <python><langchain><large-language-model><pdfium> | 2023-11-28 07:00:38 | 1 | 1,835 | merkle |
77,561,851 | 10,633,596 | Getting HTTP Error 403 Forbidden when making call to GitLab server using Python code | <p>I have written a small Python script to get the storage usage of a GitLab repository. However, when I run my code I get this <code>403 Forbidden</code> error. This repo, say <strong>test_repo_1</strong>, is under <strong>test_group_1</strong>. I have created tokens with the Maintainer role both at the project level and at the group level (group access token) and used each of them separately in my Python script, but I get the same 403 result.</p>
<p><strong>Python code:-</strong></p>
<pre><code>import gitlab
GITLAB_URL = "https://gitlab.com"
GL_GROUP_TOKEN = '<Group token & Project access token>'
def grant_access(gl_project_id):
gl = gitlab.Gitlab(GITLAB_URL, private_token = GL_GROUP_TOKEN)
project = gl.projects.get(gl_project_id)
storage = project.storage.get()
print("storage::",storage)
def main():
gl_project_id='8724648'
role_requested=''
grant_access(gl_project_id)
main()
</code></pre>
<p><strong>Error log:-</strong></p>
<pre><code>python3 storage.py
Traceback (most recent call last):
File "/home/nairv/.local/lib/python3.7/site-packages/gitlab/exceptions.py", line 336, in wrapped_f
return f(*args, **kwargs)
File "/home/nairv/.local/lib/python3.7/site-packages/gitlab/mixins.py", line 154, in get
server_data = self.gitlab.http_get(self.path, **kwargs)
File "/home/nairv/.local/lib/python3.7/site-packages/gitlab/client.py", line 829, in http_get
"get", path, query_data=query_data, streamed=streamed, **kwargs
File "/home/nairv/.local/lib/python3.7/site-packages/gitlab/client.py", line 797, in http_request
response_body=result.content,
gitlab.exceptions.GitlabHttpError: 403: 403 Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "storage.py", line 20, in <module>
main()
File "storage.py", line 18, in main
grant_access(gl_project_id)
File "storage.py", line 12, in grant_access
storage = project.storage.get()
File "/home/nairv/.local/lib/python3.7/site-packages/gitlab/v4/objects/projects.py", line 1257, in get
return cast(ProjectStorage, super().get(**kwargs))
File "/home/nairv/.local/lib/python3.7/site-packages/gitlab/exceptions.py", line 338, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
gitlab.exceptions.GitlabGetError: 403: 403 Forbidden
</code></pre>
<p><strong>Group access token scopes:-</strong>
<a href="https://i.sstatic.net/CKU4r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CKU4r.png" alt="enter image description here" /></a></p>
<p><strong>Project access token scopes:-</strong>
<a href="https://i.sstatic.net/z35ZG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z35ZG.png" alt="enter image description here" /></a></p>
| <python><gitlab-api><python-gitlab> | 2023-11-28 06:56:43 | 1 | 1,574 | vinod827 |
77,561,774 | 1,467,552 | How does Polars auto-cache mechanism work on LazyFrames? | <p>As stated <a href="https://twitter.com/RitchieVink/status/1570477690297147392/photo/1" rel="noreferrer">here</a>, Polars introduced an auto-cache mechanism for LazyFrames that occur multiple times in the logical plan, so the user does not have to perform the cache explicitly.<br />
However, while trying to examine this new mechanism, I encountered scenarios in which the auto-cache is not applied optimally:</p>
<p><strong>Without explicit cache:</strong></p>
<pre><code>import polars as pl
df1 = pl.DataFrame({'id': [0,5,6]}).lazy()
df2 = pl.DataFrame({'id': [0,8,6]}).lazy()
df3 = pl.DataFrame({'id': [7,8,6]}).lazy()
df4 = df1.join(df2, on='id')
print(pl.concat([df4.join(df3, on='id'), df1,
df4]).explain())
</code></pre>
<p>We get the logical plan:</p>
<pre><code>UNION
PLAN 0:
INNER JOIN:
LEFT PLAN ON: [col("id")]
INNER JOIN:
LEFT PLAN ON: [col("id")]
CACHE[id: a4bcf9591fefc837, count: 3]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
RIGHT PLAN ON: [col("id")]
CACHE[id: 8cee8e3a6f454983, count: 1]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
END INNER JOIN
RIGHT PLAN ON: [col("id")]
DF ["id"]; PROJECT */1 COLUMNS; SELECTION: "None"
END INNER JOIN
PLAN 1:
CACHE[id: a4bcf9591fefc837, count: 3]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
PLAN 2:
INNER JOIN:
LEFT PLAN ON: [col("id")]
CACHE[id: a4bcf9591fefc837, count: 3]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
RIGHT PLAN ON: [col("id")]
CACHE[id: 8cee8e3a6f454983, count: 1]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
END INNER JOIN
END UNION
</code></pre>
<p><strong>With explicit cache:</strong></p>
<pre><code>import polars as pl
df1 = pl.DataFrame({'id': [0,5,6]}).lazy()
df2 = pl.DataFrame({'id': [0,8,6]}).lazy()
df3 = pl.DataFrame({'id': [7,8,6]}).lazy()
df4 = df1.join(df2, on='id').cache()
print(pl.concat([df4.join(df3, on='id'), df1,
df4]).explain())
</code></pre>
<p>We get the logical plan:</p>
<pre><code>UNION
PLAN 0:
INNER JOIN:
LEFT PLAN ON: [col("id")]
CACHE[id: 290661b0780, count: 18446744073709551615]
FAST_PROJECT: [id]
INNER JOIN:
LEFT PLAN ON: [col("id")]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
RIGHT PLAN ON: [col("id")]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
END INNER JOIN
RIGHT PLAN ON: [col("id")]
DF ["id"]; PROJECT */1 COLUMNS; SELECTION: "None"
END INNER JOIN
PLAN 1:
DF ["id"]; PROJECT */1 COLUMNS; SELECTION: "None"
PLAN 2:
CACHE[id: 290661b0780, count: 18446744073709551615]
FAST_PROJECT: [id]
INNER JOIN:
LEFT PLAN ON: [col("id")]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
RIGHT PLAN ON: [col("id")]
DF ["id"]; PROJECT 1/1 COLUMNS; SELECTION: "None"
END INNER JOIN
END UNION
</code></pre>
<p>You can see that with the explicit cache, we get a more optimal plan because the join of <code>df1</code> and <code>df2</code> is performed only once.</p>
<p>Why doesn't Polars' auto-cache mechanism detect the repeated use of the join and apply the cache by itself? What am I missing?</p>
<p>Thanks.</p>
| <python><dataframe><python-polars> | 2023-11-28 06:34:49 | 2 | 1,170 | barak1412 |
77,561,524 | 9,983,652 | when to use feature selection when doing hyperparameter optimization? | <p>I am using skopt (scikit-optimize) to find the best hyperparameters for a random forest model. I have a lot of features. To avoid overfitting, I'd like to add feature selection, e.g. with RFE. But I am not sure when to apply the feature selection: during each iteration, for each combination of parameters being evaluated, or after finding the best parameters?</p>
<p>My main question is:</p>
<ul>
<li>If I add RFE in each iteration, then each iteration has different features, so the model is not consistent?</li>
<li>If I move RFE to after finding the best parameters, then during hyperparameter optimization, since there are a lot of features, the model might overfit in each iteration?</li>
</ul>
<p>For example, below is my current skopt codes without feature selection</p>
<pre><code>def optimize(params,param_names,x,y):
"""
The main optimization function.
This function takes all the arguments from the search space
and training features and targets. It then initializes
the models by setting the chosen parameters and runs
cross-validation and returns a negative accuracy score
:param params: list of params from gp_minimize
:param param_names: list of param names. order is important!
:param x: training data
:param y: labels/targets
:return: negative accuracy after 5 folds
"""
# convert params to dictionary
params = dict(zip(param_names, params))
# initialize model with current parameters
model = ensemble.RandomForestClassifier(**params)
# initialize stratified k-fold
kf = model_selection.StratifiedKFold(n_splits=5)
# initialize accuracy list
accuracies = []
# loop over all folds
for idx in kf.split(X=x, y=y):
train_idx, test_idx = idx[0], idx[1]
xtrain = x[train_idx]
ytrain = y[train_idx]
xtest = x[test_idx]
ytest = y[test_idx]
# fit model for current fold
model.fit(xtrain, ytrain)
#create predictions
preds = model.predict(xtest)
# calculate and append accuracy
fold_accuracy = metrics.accuracy_score(ytest,preds )
accuracies.append(fold_accuracy)
# return negative accuracy
return -1 * np.mean(accuracies)
from functools import partial
from skopt import gp_minimize
from skopt import space
# define a parameter space
param_space = [
# max_depth is an integer between 3 and 10
space.Integer(3, 15, name="max_depth"),
# n_estimators is an integer between 50 and 1500
space.Integer(100, 1500, name="n_estimators"),
# criterion is a category. here we define list of categories
space.Categorical(["gini", "entropy"], name="criterion"),
# you can also have Real numbered space and define a
# distribution you want to pick it from
space.Real(0.01, 1, prior="uniform", name="max_features")
]
# make a list of param names
# this has to be same order as the search space
# inside the main function
param_names = [
"max_depth",
"n_estimators",
"criterion",
"max_features"
]
# by using functools partial, i am creating a
# new function which has same parameters as the
# optimize function except for the fact that
# only one param, i.e. the "params" parameter is
# required. this is how gp_minimize expects the
# optimization function to be. you can get rid of this
# by reading data inside the optimize function or by
# defining the optimize function here.
optimization_function = partial(
optimize, # having 4 argument, so need 3 input inside this partial function
param_names=param_names,
x=X,
y=y
)
# now we call gp_minimize from scikit-optimize
# gp_minimize uses bayesian optimization for
# minimization of the optimization function.
# we need a space of parameters, the function itself,
# the number of calls/iterations we want to have
result = gp_minimize(
optimization_function,
dimensions=param_space,
n_calls=15,
n_random_starts=10,
verbose=10
)
# create best params dict and print it
best_params = dict(
zip(
param_names,
result.x
)
)
print('Best parameters after skopt optimization are')
print(best_params)
</code></pre>
<p>Now I am thinking of adding RFE to the code above by modifying the <code>optimize()</code> function so that RFE is applied to the model fitting for each fold, like below. I am not sure if this is the correct way to do it. Could you please give me some suggestions? Thanks</p>
<pre><code>
def feature_selection_RFE(model,X,y,num_features):
rfe=RFE(estimator=model,n_features_to_select=num_features)
rfe.fit(X,y)
X_transformed=rfe.transform(X)
# summarize all features
col_index_features_selected=[]
for i in range(X.shape[1]):
if rfe.support_[i]==True: # selected feature
col_index_features_selected.append(i)
return (X_transformed,col_index_features_selected)
def optimize(params,param_names,x,y):
"""
The main optimization function.
This function takes all the arguments from the search space
and training features and targets. It then initializes
the models by setting the chosen parameters and runs
cross-validation and returns a negative accuracy score
:param params: list of params from gp_minimize
:param param_names: list of param names. order is important!
:param x: training data
:param y: labels/targets
:return: negative accuracy after 5 folds
"""
# convert params to dictionary
params = dict(zip(param_names, params))
# initialize model with current parameters
model = ensemble.RandomForestClassifier(**params)
# initialize stratified k-fold
kf = model_selection.StratifiedKFold(n_splits=5)
# initialize accuracy list
accuracies = []
# loop over all folds
for idx in kf.split(X=x, y=y):
train_idx, test_idx = idx[0], idx[1]
xtrain = x[train_idx]
ytrain = y[train_idx]
        (x_train_reduced_features,col_index_features_selected)=feature_selection_RFE(model,xtrain, ytrain,5)
xtest = x[test_idx]
ytest = y[test_idx]
# fit model for current fold
# model.fit(xtrain, ytrain)
model.fit(x_train_reduced_features, ytrain)
#create predictions
preds = model.predict(xtest[col_index_features_selected])
# calculate and append accuracy
fold_accuracy = metrics.accuracy_score(ytest,preds )
accuracies.append(fold_accuracy)
# return negative accuracy
return -1 * np.mean(accuracies)
</code></pre>
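<p>One way to keep the selection inside each fold without hand-rolling it - a sketch on toy data, assuming plain scikit-learn rather than skopt - is to make RFE a pipeline step; cross-validation then re-fits the selector on every training split and applies the same fitted selector to that fold's test rows:</p>

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=60, n_features=12, random_state=42)

pipe = Pipeline([
    # re-fit inside every CV training fold, so test rows never leak in
    ("rfe", RFE(RandomForestClassifier(n_estimators=10, random_state=42),
                n_features_to_select=5)),
    ("clf", RandomForestClassifier(n_estimators=10, random_state=42)),
])

scores = cross_val_score(pipe, X, y, cv=3)
```

<p>The selected feature set can indeed differ between folds; that is expected - cross-validation estimates the performance of the whole procedure (selection plus model), not of one fixed feature list. A final feature set comes from refitting the pipeline once on all training data with the chosen hyperparameters.</p>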
| <python><scikit-optimize> | 2023-11-28 05:25:33 | 0 | 4,338 | roudan |
77,561,384 | 10,200,497 | dateutil.parser._parser.ParserError: hour must be in 0..23: | <p>The dataframe is on my drive:</p>
<pre><code>url = 'https://drive.google.com/file/d/12R6nMvN81GJHSBEElP8NiXXkwtuv04wf/view?usp=sharing'
url = 'https://drive.google.com/uc?id=' + url.split('/')[-2]
df = pd.read_csv(url)
sym time online_price
0 1000FLOKIUSDT 00:04.9 0.02567
1 1000FLOKIUSDT 00:31.7 0.02527
2 1000FLOKIUSDT 59:23.4 0.03638
3 1000FLOKIUSDT 59:23.4 0.03554
4 1000FLOKIUSDT 58:59.2 0.03552
... ... ... ...
3640 ZRXUSDT 00:58.8 0.30240
3641 ZRXUSDT 00:19.7 0.42730
3642 ZRXUSDT 00:19.7 0.42920
3643 ZRXUSDT 00:19.7 0.43130
3644 ZRXUSDT 00:19.7 0.42810
</code></pre>
<p>I want to convert column <code>time</code> to datetime. I have tried several ways but all of them lead to the error shown in the title of the post.</p>
<p>My desired format for date is: <code>format='%Y-%m-%d %H:%M:%S'</code></p>
<p>These are the ways that I have tried:</p>
<pre><code>df['time'] = pd.to_datetime(df.time)
df['time'] = pd.to_datetime(df.time, format='%Y-%m-%d %H:%M:%S')
df['time'] = pd.to_datetime(df.time, infer_datetime_format=True)
</code></pre>
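<p>A sketch assuming the values such as <code>59:23.4</code> are minutes:seconds.tenths rather than hours:minutes (which would explain the "hour must be in 0..23" error from <code>%H</code>): parse with <code>%M:%S.%f</code>:</p>

```python
import pandas as pd

s = pd.Series(["00:04.9", "00:31.7", "59:23.4"])

# %M/%S/%f instead of %H/%M/%S; pandas fills in a default date (1900-01-01)
parsed = pd.to_datetime(s, format="%M:%S.%f")
```

<p>This yields timestamps on a placeholder date, so the desired <code>%Y-%m-%d %H:%M:%S</code> form only makes sense once a real calendar date is attached from elsewhere in the data.</p>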
| <python><pandas> | 2023-11-28 04:38:06 | 1 | 2,679 | AmirX |
77,561,341 | 13,771,657 | Getting GPS boundaries for each hexbin in a python plotly 'hexbin_mapbox' heat map - Both centroid GPS point and GPS points for each corner of hexbin | <p>I have created a hexbin "heat map" in Python using plotly by mapping a number of locations (using GPS latitude / longitude), along with the value of each location. See code below for sample df and hexbin figure plot.</p>
<p><strong>Data Desired</strong></p>
<p>When I mouse-over each hexbin, I can see the average value contained within that hexbin. But what I want is a way to download into a pandas df the following info for each hexbin:</p>
<ul>
<li>Average value in each hexbin (already calculated per the code below, but currently only accessible to me by mousing over each and every hexbin; I want to be able to download it into a df)</li>
<li>Centroid GPS coordinate for each hexbin</li>
<li>GPS coordinates for each corner of the hexbin (i.e., latitude and longitude for each of the six corners of each hexbin)</li>
</ul>
<p><strong>My Question</strong></p>
<p>How can I download the data described in the bullets above into a pandas df?</p>
<p><strong>Code example</strong></p>
<pre><code># Import dependencies
import pandas as pd
import numpy as np
import plotly.figure_factory as ff
import plotly.express as px
# Create a list of GPS coordinates
gps_coordinates = [[32.7792, -96.7959, 10000],
[32.7842, -96.7920, 15000],
[32.8021, -96.7819, 12000],
[32.7916, -96.7833, 26000],
[32.7842, -96.7920, 51000],
[32.7842, -96.7920, 17000],
[32.7792, -96.7959, 25000],
[32.7842, -96.7920, 19000],
[32.7842, -96.7920, 31000],
[32.7842, -96.7920, 40000]]
# Create a DataFrame with the GPS coordinates
df = pd.DataFrame(gps_coordinates, columns=['LATITUDE', 'LONGITUDE', 'Value'])
# Print the DataFrame
display(df)
# Create figure using 'df_redfin_std_by_year_and_acreage_bin' data
fig = ff.create_hexbin_mapbox(
data_frame=df, lat='LATITUDE', lon='LONGITUDE',
nx_hexagon=2,
opacity=0.2,
labels={"color": "Dollar Value"},
color='Value',
agg_func=np.mean,
color_continuous_scale="Jet",
zoom=14,
min_count=1, # This gets rid of boxes for which we have no data
height=900,
width=1600,
show_original_data=True,
original_data_marker=dict(size=5, opacity=0.6, color="deeppink"),
)
# Create the map
fig.update_layout(mapbox_style="open-street-map")
fig.show()
</code></pre>
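<p>On the geometry part alone - a sketch assuming you can obtain each hexagon's centroid and circumradius from the figure, and treating latitude/longitude as planar (reasonable at city scale, increasingly distorted over large areas) - the six corners sit at 60-degree steps around the centroid:</p>

```python
import math

def hex_corners(lat, lon, radius_deg, pointy_top=True):
    """Six (lat, lon) corners of a regular hexagon around a centroid."""
    offset = 30 if pointy_top else 0  # pointy-top vs flat-top orientation
    return [
        (lat + radius_deg * math.sin(math.radians(offset + 60 * k)),
         lon + radius_deg * math.cos(math.radians(offset + 60 * k)))
        for k in range(6)
    ]

corners = hex_corners(32.7842, -96.7920, 0.003)  # hypothetical radius in degrees
```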
| <python><pandas><plotly><gps><latitude-longitude> | 2023-11-28 04:23:15 | 2 | 528 | BGG16 |
77,561,306 | 5,719,396 | Spark reads in parallel but writes partitions with only one core | <p>I have a 300 GB csv file that I am writing to an existing iceberg table using a local Spark cluster. My context is initialized like this:</p>
<pre><code>.master("local[*]") \
.config("spark.driver.memory", "30g") \
.config("spark.executor.memory", "15g") \
.config("spark.dynamicAllocation.enabled", "true") \
.config("spark.sql.catalog.my_catalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO") \
.config("spark.hadoop.fs.s3a.fast.upload", "true") \
</code></pre>
<p>After the transformations, I create a <code>partition_key</code> so that my dataframe is split into roughly equal sized partitions. File sizes in S3 end up being ~ 200 MB. During the first stage of execution (reading, application of the schema, transformations, and partitioning) Spark works very fast, completing 2470 tasks in 20 minutes, with 300 GB input, 50 GB shuffle write, and uses all 10 local cores.</p>
<p>However, as soon as the second stage starts (the actual writing to S3), Spark writes using only <em>one</em> core (note: this is not throttled by internet connection).</p>
<pre><code>df.repartition("partition_key") \
.write \
.format("iceberg") \
.mode("append") \
.option("path", "s3://my_bucket/my_db/") \
.partitionBy(["partition_key"]) \
.saveAsTable("glue_catalog.my_db.data")
</code></pre>
<p>I have read that because I am using a local Spark instance, I have virtually only one executor (my local jvm instance), which throttles writing. However, that doesn't make sense because it does not throttle reading (which uses all cores in parallel). How can writing partitions be sped up and parallelized?</p>
<p>I tried adding dynamic allocation but this did not help:</p>
<pre><code>.config("spark.dynamicAllocation.enabled", "true") \
</code></pre>
<p>Is there a reason that read parallelization happens by default in Spark whereas write parallelization with partitions does not?</p>
| <python><apache-spark><pyspark><apache-iceberg> | 2023-11-28 04:12:08 | 1 | 7,566 | iskandarblue |
77,561,280 | 5,718,551 | Const pydantic field value | <p>I have the below Pydantic model</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import pydantic
class MealsService(pydantic.BaseModel):
class MealItem(pydantic.BaseModel):
course: str
name: str
quantity: int
unitPrice: float | None
type: str = "meals"
items: list[MealItem]
time: datetime.time | None
class CanapesService(pydantic.BaseModel):
class CanapeItem(pydantic.BaseModel):
name: str
quantity: int
unitPrice: float | None
type: str = "canapes"
items: list[CanapeItem]
time: datetime.time | None
class Event(pydantic.BaseModel):
services: list[MealsService | CanapesService]
</code></pre>
<p>Given the following JSON payload</p>
<pre class="lang-json prettyprint-override"><code>{
"services": [
{
"type": "canapes",
"items": [],
"time": null
}
]
}
</code></pre>
<p>Pydantic incorrectly parses this as an instance of <code>MealsService</code>, not <code>CanapesService</code>. This makes sense, since without any of the nested field values, the two look identical. Can I specify that the "type" field ("meals" or "canapes") must be matched?</p>
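<p>Yes - this is what discriminated (tagged) unions are for. A sketch with the models trimmed to just the tag field (an assumption for brevity; the real fields stay as above), using <code>Literal</code> types and a <code>discriminator</code>:</p>

```python
from typing import Annotated, Literal, Union

import pydantic

class MealsService(pydantic.BaseModel):
    type: Literal["meals"] = "meals"

class CanapesService(pydantic.BaseModel):
    type: Literal["canapes"] = "canapes"

# The discriminator tells Pydantic to dispatch on the "type" tag
Service = Annotated[
    Union[MealsService, CanapesService],
    pydantic.Field(discriminator="type"),
]

class Event(pydantic.BaseModel):
    services: list[Service]

event = Event(services=[{"type": "canapes"}])
```

<p>With the discriminator in place, the payload parses as <code>CanapesService</code> because its tag says so, instead of falling through to the first union member that happens to validate.</p>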
| <python><pydantic> | 2023-11-28 04:03:08 | 2 | 944 | Inigo Selwood |
77,561,224 | 14,250,641 | Extracting Sequences from Padded DNA Sequences | <p>I have a DataFrame with two columns, 'padded_seq' and 'seq', where 'padded_seq' contains padded DNA sequences, and 'seq' contains target sequences.</p>
<p>Simple example DataFrame:</p>
<pre><code> padded_seq seq
0 AATTGGC TTG
1 AATTGGC TTG
2 AATTGGC TTG
3 CGTACGC TAC
4 CGTACGC TAC
5 CGTACGC TAC
</code></pre>
<p>I need to extract sequences from 'padded_seq' based on the 'seq' column, considering a window of 4 nucleotides around each character in 'seq'. For each character in 'seq', I want to create a new row with the extracted sequence.</p>
<p>Here's the expected output (for taking +/-2 nucleotides):</p>
<pre><code> padded_seq seq single_letter_padded (separated for clarity)
0 AATTGGC TTG AA T TG
0 AATTGGC TTG AT T GG
0 AATTGGC TTG TT G GC
1 CGTACGC TAC CG T AC
1 CGTACGC TAC GT A CG
1 CGTACGC TAC TA C GC
</code></pre>
<p>I would appreciate help finding an efficient way, as I have millions of rows.
Here's what I have (the kernel keeps dying, though):</p>
<pre><code># Function to extract sequences based on the given window size
half_window=250 #250 per side
def extract_sequences(df, half_window=250):
padded_single_nucleotides_list = []
for y in range(len(df['padded_sequences'])):
for x in range(df['seq_len'][y]):
padded_single_nucleotides_list.append(df['padded_sequences'][y][(half_window+x)-half_window:(half_window+x)+half_window+1])
return padded_single_nucleotides_list
#to make sure the seq_df is the correct num of rows-- shouldn't have to do this (use explode?), not sure how to get around it
seq_df=seq_df.loc[np.repeat(seq_df.index, seq_df['seq_len'])].reset_index(drop=True)
# Apply the function to each row and explode the list into separate rows
seq_df['single_nucl_padded'] = extract_sequences(seq_df, half_window=250)
# Display the result
seq_df
</code></pre>
<p>Any guidance or code snippets would be appreciated!</p>
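<p>The window arithmetic reduces to a plain slice - shown here on the toy +/-2 example, assuming the target sequence sits exactly in the middle of the padding with <code>half_window</code> characters on each side - because the character at position <code>i</code> of <code>seq</code> lines up with position <code>half_window + i</code> of <code>padded_seq</code>:</p>

```python
half_window = 2
padded_seq, seq = "AATTGGC", "TTG"

# window around character i of seq: padded_seq[i : i + 2*half_window + 1]
windows = [padded_seq[i:i + 2 * half_window + 1] for i in range(len(seq))]
```

<p>Generating these windows row by row (e.g. from a generator feeding a new DataFrame) keeps memory proportional to the output, instead of duplicating whole rows with <code>np.repeat</code> first.</p>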
| <python><pandas> | 2023-11-28 03:37:44 | 0 | 514 | youtube |
77,560,983 | 4,451,521 | Plot with colors depending on data | <p>I would like to plot some data but with colors depending on certain conditions. Ideally I would like to do it in both plotly and matplotlib (separate scripts)</p>
<p><strong>The data</strong></p>
<p>For example I have the following data</p>
<pre><code>import pandas as pd
data = {
'X': [1, 2, 3, 4, 5,6,7,8,9,10],
'Y': [5, 4, 3, 2, 1,2,3,4,5,5],
'XL': [2, None, 4, None, None,None,4,5,None,3],
'YL': [3, None, 2, None, None,None,5,6,None,4],
'XR': [None, 4, None, 1, None,None,None,4,5,4],
'YR': [None, 3, None, 5, None,None,None,3,4,4]
}
df = pd.DataFrame(data)
</code></pre>
<p><strong>The simple plots</strong></p>
<p>So with matplotlib</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# Plot X, Y
ax.plot(df['X'], df['Y'], linestyle='-', marker='o')
# Update plot settings
ax.set_title('Trajectory Plot')
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
# Show the plot
plt.show()
</code></pre>
<p>and with plotly</p>
<pre><code>import plotly.graph_objects as go
# Create a scatter plot
fig = go.Figure(data=go.Scatter(x=df['X'], y=df['Y'], mode='lines+markers'))
# Update layout for better visibility
fig.update_layout(
title='Trajectory Plot',
xaxis_title='X-axis',
yaxis_title='Y-axis',
)
# Show the plot
fig.show()
</code></pre>
<p><strong>The problem</strong></p>
<p>I would like to modify the scripts so that I can use a different color depending on the existence or not of the <code>(XL,YL)</code> and <code>(XR,YR)</code> pairs.</p>
<ul>
<li>Grey: none exist</li>
<li>Red: Only XL,YL exists</li>
<li>Blue: Only XR,YR exists</li>
<li>Green: Both exists</li>
</ul>
<p>In the end it should be like this (pardon the crude picture, I painted over the original blue lines)</p>
<p>How can I add this in matplotlib and plotly?</p>
<p><a href="https://i.sstatic.net/CB0kO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CB0kO.png" alt="enter image description here" /></a></p>
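<p>The piece common to both libraries is turning the four conditions into a per-point colour column; a sketch with <code>numpy.select</code> (the colour names are placeholders):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "XL": [2, None, 4, None],
    "XR": [None, 4, 5, None],
})

has_l = df["XL"].notna()
has_r = df["XR"].notna()
df["color"] = np.select(
    [has_l & has_r, has_l, has_r],  # checked in order; first match wins
    ["green", "red", "blue"],
    default="grey",
)
```

<p>With the colour column in hand, matplotlib needs each consecutive pair of points plotted as its own segment with that colour, and plotly similarly takes per-segment traces (or a coloured marker array if per-marker colouring is enough).</p>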
| <python><matplotlib><plotly> | 2023-11-28 02:18:31 | 2 | 10,576 | KansaiRobot |
77,560,739 | 2,985,331 | scikit-learn FeatureUnion never exits | <p>I have two versions of a pipeline, one of which runs, one of which doesn't.</p>
<p>Version 1.
This runs reasonably quickly: approximately 4 hours on a machine with 32G and 16 cores. In this version I am doing a differential methylation analysis outside of the pipeline to select several hundred variables from a set of more than 300K.</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(df, cancerType, test_size=0.2, random_state=42)
rfeFeatureSelection = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1))
randomForest = RandomForestClassifier(random_state=42)
stratified_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# Create the pipeline with feature selection and model refinement
pipeline = Pipeline([
("featureSelection", rfeFeatureSelection),
('modelRefinement', randomForest)
])
search = GridSearchCV(pipeline,
param_grid=parameterGrid,
scoring='accuracy',
cv=stratified_cv,
verbose=2,
n_jobs=-1,
pre_dispatch='2*n_jobs',
error_score='raise',
)
search.fit(X_train, y_train)
</code></pre>
<p>Version 2.
I would prefer the differential methylation step to be done inside the cross validation process though, so that the initial selection of variables is not seen by the test or validation sets. So I wrote a custom classifier that does the differential methylation analysis. The number of variables returned sometimes differs by one or two so I put this step in a FeatureUnion step with the RecursiveFeatureElimination, following what's been done <a href="https://scikit-learn.org/stable/auto_examples/compose/plot_feature_union.html" rel="nofollow noreferrer">here</a>. My pipeline now looks like this:</p>
<pre><code> X_train, X_test, y_train, y_test = train_test_split(df, cancerType, test_size=0.2, random_state=42)
differentialMethylation = DifferentialMethylation(truthValues = y_train, name=name)
rfeFeatureSelection = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1))
randomForest = RandomForestClassifier(random_state=42)
combinedFeatures = FeatureUnion([
("differentialMethylation", differentialMethylation),
("rfeFeatureSelection", rfeFeatureSelection)
])
stratified_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# Create the pipeline with combined feature selection and model refinement
pipeline = Pipeline([
("featureSelection", combinedFeatures),
('modelRefinement', randomForest)
])
search = GridSearchCV(pipeline,
param_grid=parameterGrid,
scoring='accuracy',
cv=stratified_cv,
verbose=2,
n_jobs=-1,
pre_dispatch='2*n_jobs',
error_score='raise',
)
search.fit(X_train, y_train)
</code></pre>
<p>This code will get through the DifferentialMethylation classifier - I've got logging statements that spit out what's happening immediately before it passes data to the rfeFeatureSelection step. If I set the verbosity to 1 in rfeFeatureSelection, it definitely gets to rfeFeatureSelection, but it never exits; it will sit there happily outputting this overnight and never finish.</p>
<pre><code>[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.2s finished
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.2s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
</code></pre>
<p>So I am assuming I am doing something wrong with the FeatureUnion, but can't for the life of me figure out what.</p>
<p>What am I doing wrong?</p>
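<p>One thing worth checking, independent of the hang itself: <code>FeatureUnion</code> does not chain its parts - it fits each transformer on the same input and concatenates their outputs side by side, so the two selectors' features are unioned (with overlap duplicated), not applied sequentially. A sketch with hypothetical stand-in selectors:</p>

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import FeatureUnion

rng = np.random.RandomState(42)
X = rng.rand(20, 10)
y = rng.randint(0, 2, 20)

union = FeatureUnion([
    ("k3", SelectKBest(f_classif, k=3)),
    ("k4", SelectKBest(f_classif, k=4)),
])
Xt = union.fit_transform(X, y)  # 3 + 4 columns drawn from the same 10 inputs
```

<p>If the differential methylation filter is meant to run first and RFE second on its output, successive <code>Pipeline</code> steps express that; <code>FeatureUnion</code> is for parallel feature blocks.</p>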
| <python><scikit-learn> | 2023-11-28 00:38:46 | 1 | 451 | Ben |
77,560,697 | 9,973,177 | websockets : ConnectionClosedOK: received 1000 (OK); then sent 1000 (OK) | <p>I'm trying to exchange data between 2 processes using websockets but I get a ConnectionClosedOK after the first request. I tried a few suggestions but can't really figure out how to fix it... as far as I understand, the error happens on <code>res = await asyncio.wait_for(websocket.recv(), timeout=10)</code></p>
<p>The client code is here:</p>
<pre><code># https://stackoverflow.com/questions/70864604/websockets-exceptions-connectionclosedok-code-1000-ok-no-reason
# https://stackoverflow.com/questions/75780416/websockets-in-fastapi-connectionclosedok-received-1000-ok
import time
import json
import asyncio
import websockets
import nest_asyncio
nest_asyncio.apply()
async def ping(websocket):
while True:
await websocket.send('{"message":"PING"}')
print('------ ping')
await asyncio.sleep(5)
async def get_data(websocket, key):
order = {'key' : key}
json_data = json.dumps(order)
await websocket.send(json_data)
res = await asyncio.wait_for(websocket.recv(), timeout=10)
res = json.loads(res)
return res
async def main():
keeprunning = True
i = 0
#wss = websockets.serve(handle_request, "localhost", 8766)
uri = "ws://localhost:8765"
#websocket = await websockets.connect(uri, ping_interval = None)
async for websocket in websockets.connect(uri, timeout=15, ping_timeout=None, ping_interval=None):
#task = asyncio.create_task(ping(websocket))
while keeprunning and i < 10:
i = i + 1
print(f'counter {i}')
res = await get_data(websocket, 'key')
print(f"res: {res}")
#time.sleep(5.0)
print('main exit')
if __name__ == "__main__":
#asyncio.run(main())
asyncio.get_event_loop().run_until_complete(main())
</code></pre>
<p>The server code is here:</p>
<pre><code>import json
import asyncio
import websockets
import nest_asyncio
nest_asyncio.apply()
async def handle_request(websocket):
message = await websocket.recv()
data = json.loads(message)
order = {'key' : data['key'], 'val1' : 1.0, 'val2' : 2.0 }
json_data = json.dumps(order)
await websocket.send(json_data)
print("handle_request")
async def main():
async with websockets.serve(handle_request, "localhost", 8765, ping_interval=None):
print('world running...')
await asyncio.Future() # run forever
print('world done')
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
| <python> | 2023-11-28 00:25:26 | 1 | 952 | Will |
77,560,538 | 113,538 | How to print a char ** var from python LLDB | <p>I'm trying to print out a char ** using the python LLDB lib.
To be clear it the char **argv from the main function in a C program.
It should have 3 strings in the array from the input.</p>
<p>When launching the C program and stopping right after
main(int argc, char **argv)
in Xcode I can just do p argv[0] and p argv[1] and I get the correct arg strings to print.</p>
<p>In python I have a function to print the first 2 argv strings</p>
<pre class="lang-py prettyprint-override"><code>def print_argv(argv: lldb.SBValue, target:SBTarget):
pointer = argv.Dereference()
summary = pointer.GetSummary().strip('\"')
print(summary)
str_len = len(summary)
next_str_address:int = pointer.GetLoadAddress() + str_len + 1
pointer_type = pointer.GetType()
addr = SBAddress(next_str_address, target)
next_value = target.CreateValueFromAddress("argv", addr, pointer_type)
summary = next_value.GetSummary().strip('\"')
</code></pre>
<p>But the second input isn't there.</p>
<p>I've tried a number of different ways but I can't seem to get the second string. I tried using argv.GetChildAtIndex and that works for 0 but not 1 or 2. I noticed that the addresses are offset by the string length, so I've tried to update the pointers, again no luck. I checked that the inputs are really there by printing them out from the C program.</p>
<p>Edit:
Here is how to set up the call to the print function</p>
<pre><code>debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
target = debugger.CreateTargetWithFileAndArch (str(binary_path), lldb.LLDB_ARCH_DEFAULT)
assert target, "Failed to create target"
target.BreakpointCreateByName ("main", target.GetExecutable().GetFilename())
launch_info = lldb.SBLaunchInfo(str_args)
launch_info.SetWorkingDirectory(os.getcwd())
error = lldb.SBError()
process = target.Launch(launch_info, error)
assert process, "Failed to launch process"
for _ in range(1000):# Putting an upper limit on the number of functions to trace
state = process.GetState()
if state == lldb.eStateStopped:
for thread in process:
            frame = thread.frames[0]
            function_name = frame.GetFunctionName()
if function_name in function_names:
for arg in frame.arguments:
if arg.name == "argv":
                    print_argv(arg, target)
process.Continue()
</code></pre>
<p>Solution:
Turns out you need to pass can_create_synthetic=True.</p>
<p><code>child = argv.GetChildAtIndex(1, lldb.eNoDynamicValues, True)</code></p>
| <python><c><lldb> | 2023-11-27 23:31:10 | 2 | 895 | Brian |
77,560,516 | 4,398,966 | does __new__ override __init__ in python | <p>I have the following code:</p>
<pre><code>class Demo:
def __new__(self):
self.__init__(self)
print("Demo's __new__() invoked")
def __init__(self):
print("Demo's __init__() invoked")
class Derived_Demo(Demo):
def __new__(self):
print("Derived_Demo's __new__() invoked")
def __init__(self):
print("Derived_Demo's __init__() invoked")
def main():
obj1 = Derived_Demo()
obj2 = Demo()
main()
</code></pre>
<p>I'm trying to understand the order of execution:</p>
<ol>
<li><p><code>__new__</code> in the derived class is called first</p>
</li>
<li><p>why isn't <code>__init__</code> in the derived class called next?</p>
</li>
</ol>
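The behaviour follows one rule of object construction (standard Python semantics, not specific to this snippet): after `Cls(...)` calls `Cls.__new__`, Python invokes `__init__` automatically only if `__new__` returned an instance of `Cls`. Both `__new__` methods in the question return `None`, so no automatic `__init__` call ever happens; the only `__init__` output you see is the one `Demo.__new__` invokes by hand. (Note also that the first parameter of `__new__` is really the class, conventionally spelled `cls`, even though the question spells it `self`.) A minimal sketch:

```python
# record which hooks run, so the order is visible without relying on prints
calls = []

class WithReturn:
    def __new__(cls):
        calls.append("WithReturn.__new__")
        # returning an instance of cls is what triggers the automatic __init__
        return super().__new__(cls)

    def __init__(self):
        calls.append("WithReturn.__init__")

class NoReturn:
    def __new__(cls):
        calls.append("NoReturn.__new__")
        # falls off the end, returns None, so __init__ is skipped entirely

    def __init__(self):
        calls.append("NoReturn.__init__")  # never reached via NoReturn()

WithReturn()
NoReturn()
print(calls)
# ['WithReturn.__new__', 'WithReturn.__init__', 'NoReturn.__new__']
```

So `__new__` does not "override" `__init__`; it simply never hands control to it unless it returns an instance of the class.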
| <python><init><derived-class><base-class> | 2023-11-27 23:23:49 | 3 | 15,782 | DCR |
77,560,502 | 3,195,451 | Pandas .any() vs. Python any() on Dataframe | <p>What is the reason to prefer the Pandas implementation of <code>.any()</code> over Python's builtin <code>any()</code> when used on a DataFrame? Is there a performance reason, since Pandas DataFrames are column-major? My hunch is that the Pandas method is implemented in such a way that it is faster for column-based reads, in expectation. Can anyone confirm?</p>
<p>Why this:</p>
<pre><code>if df.any():
</code></pre>
<p>instead of this:</p>
<pre><code>if any(df):
</code></pre>
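Beyond performance, the two calls do not even compute the same thing, which is worth establishing first: iterating a DataFrame yields its column labels, so the builtin `any()` checks label truthiness and never reads a single value, while `DataFrame.any()` reduces over the values (column-wise by default, over the columnar storage, which is also where any performance edge would come from, though I have not benchmarked it). A small sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 0], "b": [0, 1]})

# iterating a DataFrame yields its COLUMN LABELS, so the builtin any()
# only asks whether some column *name* is truthy; it never reads the values
builtin_result = any(df)          # True: "a" and "b" are non-empty strings

# DataFrame.any() reduces over the values, one column at a time by default
per_column = df.any()             # a -> False, b -> True
whole_frame = df.any().any()      # True iff any value in the frame is truthy
```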
| <python><pandas><dataframe><any> | 2023-11-27 23:19:00 | 2 | 2,499 | Daniel |
77,560,495 | 875,295 | Possible non optimization in list comprehension | <p>If I do the following:</p>
<pre><code>import sys
for i in range(30):
print(sys.getsizeof([1 for _ in range(i)]))
</code></pre>
<p>I get outputs that are powers of 2, whereas if I do this:</p>
<pre><code>for i in range(30):
print(sys.getsizeof([1]*i))
</code></pre>
<p>I get smaller sizes that are linearly increasing.</p>
<p>I'm trying to understand: it seems that in the first example, Python is re-creating and growing the array in the list-comprehension loop as it iterates over range(i). In the second example Python seems able to allocate the final array only once, at the correct dimension. Why is it not able to figure it out in the first example? In my precise case it's easy to see that <code>range(i)</code> will always generate an array of size i</p>
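That reading matches CPython's behaviour as I understand it: a comprehension executes as a loop of appends, and list append over-allocates geometrically; the comprehension machinery never consults `len(range(i))` up front, whereas `[1] * i` knows the result length and allocates exactly once. A CPython-specific check of the relationship:

```python
import sys

n = 20

# a comprehension is executed as a loop of appends, so CPython grows the
# list in over-allocated steps and never asks range(n) for its length
grown = sys.getsizeof([1 for _ in range(n)])

# [1] * n knows the final length up front and allocates exactly once
exact = sys.getsizeof([1] * n)

print(grown >= exact)  # True on CPython: the grown list carries spare capacity
```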
| <python><list-comprehension> | 2023-11-27 23:18:11 | 0 | 8,114 | lezebulon |
77,560,464 | 9,582,542 | MongoDB bulk loading 36000 files | <p>I am a newbie to MongoDB. Currently I have a dataframe with the full paths to 36000 JSON files. I downloaded mongoimport and placed it in my install bin directory. I would like to create a loop of some sort to load all the files into my local install of MongoDB. Should I do this in Python, or does mongoimport have a feature that can be leveraged for this?</p>
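As far as I know mongoimport has no built-in mode for importing an arbitrary list of files, so a small Python driver looping over the dataframe is the usual approach. A hedged sketch; the database/collection names and the dataframe column name are placeholders:

```python
import subprocess
from pathlib import Path

def build_import_cmd(json_path, db="mydb", collection="docs"):
    # one mongoimport invocation per file; db and collection are placeholders
    return [
        "mongoimport",
        "--db", db,
        "--collection", collection,
        "--file", str(json_path),
    ]

cmd = build_import_cmd(Path("data/0001.json"))

# driving it from the dataframe of paths would then look like:
# for path in df["full_path"]:
#     subprocess.run(build_import_cmd(path), check=True)
```

With `check=True`, `subprocess.run` raises if any single import fails, so a bad file stops the loop instead of being silently skipped.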
| <python><mongodb><mongoimport> | 2023-11-27 23:09:09 | 1 | 690 | Leo Torres |
77,560,371 | 4,980,705 | Pandas Apply using input from first row and not from each row | <p>I have the following table and want to use the RoadSegmentOrigin as input for a new column OriginCoordinates</p>
<pre><code>RoadSegmentOrigin,RoadSegmentDest,trip_id,planned_duration
AREI2,JD4,107_1_D_1,32
JD4,PNG4,107_1_D_1,55
PNG4,TVA2,107_1_D_1,55
</code></pre>
<p>This is what I'm using:</p>
<pre><code>df_RoadSegments["OriginCoordinates"] = df_RoadSegments.apply(lambda x: GetStopsCoordinates(df_Stops, x["RoadSegmentOrigin"]), axis=1)
</code></pre>
<p>But the result looks as if GetStopsCoordinates is using only the first RoadSegmentOrigin as input and is not updating for every row:</p>
<pre><code> RoadSegmentOrigin ... OriginCoordinates
0 AREI2 ... 41.1591084955401,-8.55577748652738
1 JD4 ... 41.1591084955401,-8.55577748652738
2 PNG4 ... 41.1591084955401,-8.55577748652738
</code></pre>
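`apply` with `axis=1` does pass each row in turn, which the toy sketch below confirms (the lookup table and coordinate values are made up). If every row comes back with the first row's coordinates, the likely culprit is inside `GetStopsCoordinates` itself:

```python
import pandas as pd

df = pd.DataFrame({"RoadSegmentOrigin": ["AREI2", "JD4", "PNG4"]})

# stand-in for GetStopsCoordinates: a plain lookup (coordinates are made up)
coords = {
    "AREI2": "41.1591,-8.5558",
    "JD4": "41.1600,-8.6000",
    "PNG4": "41.1700,-8.6100",
}

df["OriginCoordinates"] = df.apply(
    lambda row: coords.get(row["RoadSegmentOrigin"]), axis=1
)

print(df["OriginCoordinates"].nunique())  # 3: apply saw every row
```

Since apply does feed each row in turn, the repeated first value most likely comes from GetStopsCoordinates, for example a cached result, a shadowed variable, or the function reading df_Stops with a fixed key instead of its second argument.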
| <python><pandas><dataframe> | 2023-11-27 22:45:00 | 0 | 717 | peetman |
77,560,354 | 13,142,245 | Locust: How to specify volume of requests in time period? | <p>I'm interested in using Locust to load test an endpoint. From the <a href="https://docs.locust.io/en/stable/writing-a-locustfile.html#environment-attribute" rel="nofollow noreferrer">documentation</a> (see the third "Note" section):</p>
<blockquote>
<p>For example, if you want Locust to run 500 task iterations per second at peak load, you could use wait_time = constant_throughput(0.1) and a user count of 5000.</p>
</blockquote>
<p><code>constant_throughput</code> appears to be an expected variable name in scope of classes that inherit from User.</p>
<pre class="lang-py prettyprint-override"><code>from locust import User, task, between
class MyUser(User):
@task
def my_task(self):
print("executing my_task")
wait_time = between(0.5, 10)
</code></pre>
<p>However, it's ambiguous how user count should be parameterized to Locust.</p>
<p>What's the right way to go about this?</p>
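For what it's worth, the user count is not declared in the locustfile at all: it is a runtime parameter (the `-u`/`--users` CLI flag or the web UI field, as I read recent Locust versions). The relationship between user count and `constant_throughput` is plain arithmetic; a tiny sketch (the helper name is mine):

```python
def users_for_target_rps(target_rps, per_user_rps=0.1):
    # constant_throughput(x) caps each simulated user at x task runs per
    # second, so the user count is just target rate / per-user rate
    return int(target_rps / per_user_rps)

print(users_for_target_rps(500))  # 5000, matching the docs' example
```

So with `wait_time = constant_throughput(0.1)` in the User class, launching with something like `locust -f locustfile.py --headless -u 5000 -r 100` (flags assumed from recent Locust versions) should yield roughly 500 task iterations per second at steady state.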
| <python><locust> | 2023-11-27 22:41:27 | 1 | 1,238 | jbuddy_13 |
77,560,247 | 5,549,107 | How to return json object with QHttpServer | <p>I am trying to create a simple REST API with QHttpServer in Python. I had checked examples in C++, but I couldn't get it to work in Python. When <code>callback_api</code> returns a string, it is visible as plain text, but any other return value creates a response with an empty body and status code 200. Moreover, status codes other than 200 do not have any effect. What is the correct way to return JSON and/or the desired status code?</p>
<pre><code>import sys
from PySide6.QtWidgets import QApplication
from PySide6.QtHttpServer import QHttpServer,QHttpServerRequest, QHttpServerResponse, QHttpServerResponder
from PySide6.QtNetwork import QHostAddress
from PySide6.QtCore import QJsonArray, QJsonValue
def test(req: QHttpServerRequest):
d = {"key": "value"}
return QHttpServerResponse(d)
app = QApplication(sys.argv)
server = QHttpServer()
server.listen(QHostAddress("127.0.0.1"), 5005)
server.route("/api", test)
app.exec()
</code></pre>
| <python><qt><pyside><pyside6> | 2023-11-27 22:16:50 | 1 | 304 | Reactionic |
77,560,118 | 901,426 | python 'socket' has no attribute 'CAN_J1939' | <p>just making a pretty straightforward J1939 reader with the <code>socket</code> library:</p>
<pre class="lang-py prettyprint-override"><code>import socket
with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s:
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
addr = "can3", socket.J1939_NO_NAME, socket.J1939_NO_PGN, socket.J1939_NO_ADDR
s.bind(addr)
while True:
data, addr = s.recvfrom(128)
print(f'{addr[3]:02x} {addr[2]:05x}')
for j in range(len(data)):
if j % 8 == 0 and j != 0:
print(f'\n{j:05x} ')
print(f'{j:02x}')
print('\n')
</code></pre>
<p>running this give me this error:</p>
<blockquote>
<p>AttributeError: module 'socket' has no attribute 'CAN_J1939'</p>
</blockquote>
<p>i'm not sure why the heck this error is firing. i have <code>can-utils</code> installed and a couple other J1939 libs from when i wrote the Buildroot, so i don't know why this is popping up. plus, i'm running v3.10 on the SoC and <code>CAN_J1939</code> has been available since v3.5. :P don't even know how to troubleshoot this one... O_o</p>
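A guess worth ruling out first: the socket constants are conditional on how the interpreter itself was built. CPython only exposes `CAN_J1939` when it was compiled against kernel headers that define J1939 support, so a Buildroot Python built with trimmed or old headers can lack the attribute even at 3.10, regardless of can-utils being installed. A runtime probe makes this explicit:

```python
import socket

# CPython only defines socket.CAN_J1939 when it was compiled against kernel
# headers that provide J1939 support, so availability is a property of the
# interpreter build, not just of the Python version
has_j1939 = hasattr(socket, "CAN_J1939")
print(has_j1939)
```

If `has_j1939` is False, the fix lives in the Buildroot toolchain/kernel headers used when building Python, not in the script.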
| <python><can-bus><j1939> | 2023-11-27 21:45:42 | 0 | 867 | WhiteRau |
77,560,096 | 6,368,549 | Python - Parsing / extracting sections using Python | <p>was hoping someone could give me some suggestions.</p>
<p>So, I have a large HTML document. I need to extract data between 2 tags. It's a dynamic document, so it can be different each time. But there are a couple of constants. The starting point of extraction will be the section that starts with "Notes to Unaudited Condensed". You can see the section ID from the table of contents:</p>
<pre><code> <a href="#a1NatureofOperations_790426"><span style="font-style:normal;font-weight:normal;">Notes to Unaudited Condensed Consolidated Financial Statements</span></a></p></td>
</code></pre>
<p>Basically I want to extract all content up until the next section ID, which always starts with "Item 2.":</p>
<pre><code> <a href="#ITEM2MANAGEMENTSDISCUSSIONANDANALYSIS_77"><span style="font-style:normal;font-weight:normal;">Item 2.</span></a></p></td>
</code></pre>
<p>So, is there a way for me to get the tag ID from the anchor, and then I can search the document for that tag ID as the start / end of the parsing that is needed?</p>
<p>Or, perhaps there is some other Python HTML parser which can do much of the work for me?</p>
<p>Thanks!</p>
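The two-step idea (read the fragment ids out of the table-of-contents hrefs, then slice the body between the elements carrying those ids) can be sketched with just the stdlib. This is a toy: the HTML is synthetic, and real filings wrap the anchor text in spans and vary attributes, so a proper parser such as BeautifulSoup or lxml applying the same logic would be more robust:

```python
import re

# toy stand-in for the filing HTML: table-of-contents anchors whose href
# fragments match id attributes further down in the body
html = (
    '<a href="#Notes_790426">Notes to Unaudited Condensed ...</a>'
    '<a href="#Item2_77">Item 2.</a>'
    '<div id="Notes_790426">notes section body here</div>'
    '<div id="Item2_77">item 2 body here</div>'
)

# step 1: recover the two fragment ids from the TOC anchors by their text
start_id = re.search(r'href="#([^"]+)">[^<]*Notes to Unaudited', html).group(1)
end_id = re.search(r'href="#([^"]+)">[^<]*Item 2\.', html).group(1)

# step 2: slice the raw document between the two id="..." occurrences
section = html[html.index(f'id="{start_id}"'):html.index(f'id="{end_id}"')]
```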
| <python><html><parsing> | 2023-11-27 21:40:31 | 1 | 853 | Landon Statis |
77,560,095 | 10,620,003 | Build the different df from one df based on the ids columns in a df with multiple Nans | <p>I have a dataframe which has multiple Nan values. I want to build three different dataframe from it. Here is an example of my df:</p>
<pre><code>df = pd.DataFrame({'a':[10, np.nan, np.nan, 22, np.nan], 'b':[23, 12, 7, 4, np.nan], 'c':[13, np.nan, np.nan, np.nan, 65]})
a b c
0  10.0   23.0   13.0
1 NaN 12.0 NaN
2 NaN 7.0 NaN
3  22.0   4.0   NaN
4 NaN NaN 65.0
</code></pre>
<p>I want to assign an id to the df based on this rule:
from a non-NaN cell in column <code>a</code> until the next non-NaN cell, the rows share the same id. For example, in this df we only have 2 ids (1, 2): rows 0-2 have id=1, and the others have id=2.
So, based on this I want to build the following dataframes.</p>
<p>id and column a:</p>
<pre><code> id a
0  1  10
1 2 22
</code></pre>
<p>id and column b :</p>
<pre><code> id b
0 1 23
1 1 12
2 1 7
3 2 4
</code></pre>
<p>id and column c:</p>
<pre><code> id c
0 1 13
1 2 65
</code></pre>
<p>Could you please help me with that? Thanks</p>
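Assuming the id boundaries come from column `a` (consistent with the example: a new id starts at every non-NaN value of `a`), the ids can be generated with a cumulative sum over the not-null mask, and each output frame is then a drop-NaN over one column. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [10, np.nan, np.nan, 22, np.nan],
                   'b': [23, 12, 7, 4, np.nan],
                   'c': [13, np.nan, np.nan, np.nan, 65]})

# a new id starts at every non-NaN value in column 'a'
ids = df['a'].notna().cumsum()          # 1, 1, 1, 2, 2

# one small frame per column: pair the values with the ids, drop NaN rows
per_col = {
    col: pd.DataFrame({'id': ids, col: df[col]})
           .dropna(subset=[col])
           .reset_index(drop=True)
    for col in df.columns
}

# per_col['a'] -> (1, 10), (2, 22)
# per_col['b'] -> (1, 23), (1, 12), (1, 7), (2, 4)
# per_col['c'] -> (1, 13), (2, 65)
```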
| <python><pandas><dataframe> | 2023-11-27 21:40:27 | 3 | 730 | Sadcow |
77,559,996 | 6,930,340 | Define dependency in pyproject.toml depending on operating system | <p>I am using <code>pyproject.toml</code> according to PEP 631 (i.e. NO <code>poetry</code>, I am using <code>pdm</code>). I need to specify a dependency version according to the operating system.</p>
<p>So far, I tried something like this:</p>
<pre><code>[project]
...
dependencies = [
"kaleido==0.0.3; platform_system!='Windows'",
"kaleido==0.1.0post1; platform_system=='Windows'",
]
</code></pre>
<p>What I want to achieve is that <code>kaleido</code> will be installed in version 0.0.3 if the operating system is NOT Windows. On the other hand, if Windows is used, then I want <code>kaleido</code> to be installed in version <code>0.1.0post1</code>.</p>
<p>What I learned is that the first row will always be ignored, that is, if I am installing the package on a linux machine, <code>kaleido v0.0.3</code> won't be installed. On a Windows machine <code>kaleido v0.1.0post1</code> will successfully be installed.</p>
<p>If I switch the two code lines, the behaviour will be the opposite, i.e. <code>kaleido v0.0.3</code> will be installed on the linux machine while no <code>kaleido</code> will be installed on a Windows machine.</p>
<p>The actual question is, what is the correct syntax in the <code>pyproject.toml</code> to work on all operating systems?</p>
| <python><python-packaging><pyproject.toml><pdm> | 2023-11-27 21:19:04 | 0 | 5,167 | Andi |
77,559,948 | 22,212,435 | Event button release doesn't work if another event happens | <p>The problem occurs when, while a mouse button is pressed and not yet released, another click <code>event</code> happens and interrupts the current <code>event</code>. I will try to show this with the following code example:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
l = tk.Label(bg='red', width=30, height=30)
l.pack(fill='both', padx=100, pady=100)
l.bind('<Button-1>', lambda e: print('pressed'))
l.bind('<ButtonRelease-1>', lambda e: print('release'))
root.mainloop()
</code></pre>
<p>To reproduce this error:</p>
<ol>
<li>Click and hold the left mouse button on the red label.</li>
<li>Move the cursor to the white space that is on the <code>root</code>. The mouse button must remain pressed.</li>
<li>Now press and release the other button while still holding the left mouse button.</li>
<li>Release the left mouse button on the white space.</li>
</ol>
<p>Note: In the 4'th step it is important to stay outside the red <code>label</code> otherwise everything will work.</p>
<p>So for some reason the <code>ButtonRelease-1</code> event will not happen, most likely because another event happened during the hold process, but it is still not clear for me why.</p>
<p>I want to avoid this. Maybe it is possible to block other events during this process or force other events to work together with this event. I tried to use <code>break</code>, also tried to use <code>add='+'</code> for the bind functions, but none of that works.</p>
| <python><tkinter> | 2023-11-27 21:05:10 | 1 | 610 | Danya K |
77,559,867 | 3,906,786 | How to nest a list of attrs classes into an attrs class | <p>I have a list of dicts and I'd like to use <code>python-attrs</code> to convert them into classes.</p>
<p>Here's the sample data:</p>
<pre class="lang-ini prettyprint-override"><code>[[characters]]
first_name = 'Duffy'
last_name = 'Duck'
[[characters]]
first_name = 'Bugs'
last_name = 'Bunny'
[[characters]]
first_name = 'Sylvester'
last_name = 'Pussycat'
[[characters]]
first_name = 'Elmar'
last_name = 'Fudd'
[[characters]]
first_name = 'Tweety'
last_name = 'Bird'
[[characters]]
first_name = 'Sam'
last_name = 'Yosemite'
[[characters]]
first_name = 'Wile E.'
last_name = 'Coyote'
[[characters]]
first_name = 'Road'
last_name = 'Runner'
</code></pre>
<p>This will then turn into a dictionary after reading the content:</p>
<pre class="lang-py prettyprint-override"><code>{'characters': [{'first_name': 'Duffy', 'last_name': 'Duck'},
{'first_name': 'Bugs', 'last_name': 'Bunny'},
{'first_name': 'Sylvester', 'last_name': 'Pussycat'},
{'first_name': 'Elmar', 'last_name': 'Fudd'},
{'first_name': 'Tweety', 'last_name': 'Bird'},
{'first_name': 'Sam', 'last_name': 'Yosemite'},
{'first_name': 'Wile E.', 'last_name': 'Coyote'},
{'first_name': 'Road', 'last_name': 'Runner'}]}
</code></pre>
<p>My classes look like this:</p>
<pre class="lang-py prettyprint-override"><code>@define(kw_only=True)
class Character:
first_name: str
last_name: str
@define
class LooneyToons:
characters: List[Character] = field(factory=list, converter=Character)
</code></pre>
<p>But it does not work:
<code>TypeError: Character.__init__() takes 1 positional argument but 2 were given</code></p>
<p>Of course I could modify the class a bit and use this code (which works):</p>
<pre class="lang-py prettyprint-override"><code>@define
class LooneyToons:
characters: List[Character]
> LooneyToons([Character(**x) for x in d['characters']])
LooneyToons(characters=[Character(first_name='Duffy', last_name='Duck'), Character(first_name='Bugs', last_name='Bunny'), Character(first_name='Sylvester', last_name='Pussycat'), Character(first_name='Elmar', last_name='Fudd'), Character(first_name='Tweety', last_name='Bird'), Character(first_name='Sam', last_name='Yosemite'), Character(first_name='Wile E.', last_name='Coyote'), Character(first_name='Road', last_name='Runner')])
</code></pre>
<p>But it would be more elegant (from my point of view) to handle this within the class <code>LooneyToons</code> by just giving <code>d['characters']</code> as argument to the class.</p>
<p>Any hints for me? I already checked out <code>cattrs</code> but I don't get the point on how it may be useful in my case.</p>
| <python><python-3.x><python-attrs> | 2023-11-27 20:50:30 | 2 | 983 | brillenheini |
77,559,790 | 5,473,533 | Python Tkinter TreeviewSelect event got called twice (it should only be called once) | <p>I am having a problem with the Tkinter Treeview code below. The code generates a Combobox and a treeview.</p>
<p>Every time I select a value in the Combobox, it automatically updates the values in the treeview, like the following:</p>
<p><a href="https://i.sstatic.net/LclUu.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LclUu.jpg" alt="enter image description here" /></a></p>
<p>And the last value in the treeview will be automatically selected:</p>
<p><a href="https://i.sstatic.net/zb2wK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zb2wK.jpg" alt="enter image description here" /></a></p>
<p>When one row of the treeview is selected, it triggers an <code><<TreeviewSelect>></code> event handled in the function <code>__getUpdateTypeSelected</code>. I only expect <code>__getUpdateTypeSelected</code> to be called once every time I select a new Combobox value. However, it is called twice if there are already values in the treeview.</p>
<p>Here is the output everytime I select a new item in Combobox</p>
<pre><code>Update Group
Update Type
Update Group
Update Type
Update Type
Update Group
Update Type
Update Type
</code></pre>
<p>The reason it is called twice seems to come from the treeview <code>delete</code> function: it appears that <code>delete</code> itself causes a tree <code>selection</code> change. I already tried to disable it using <code>self.selection_event_enabled = False</code>; however, the <code>__getUpdateTypeSelected</code> function is still called twice. May I ask if you have some suggestions? Thank you.</p>
<pre><code>import tkinter as tk
from tkinter import ttk
# ===
# APP
# ===
class App(tk.Tk):
# -------
# Initial
# -------
def __init__(self):
super().__init__()
self.accounts_dict = {}
self.accounts_dict['A'] = {"Aa": {1, 2, 3}, "Ab": {2, 3, 4}}
self.accounts_dict['B'] = {"Ba": {3, 4, 5}, "Bb": {4, 5, 6}}
self.accounts_dict['C'] = {"Ca": {5, 6, 7}, "Cb": {6, 7, 8}}
self.selection_event_enabled = True # Flag to control selection event
self.__create_group()
self.__create_tree_type()
# -----------------
# Create tree group
# -----------------
def __create_group(self):
self.groupCombo = ttk.Combobox(self, values=list(self.accounts_dict.keys()), state="readonly")
self.groupCombo.bind('<<ComboboxSelected>>', self.__getUpdateGroup)
self.groupCombo.grid(row=1, column=0, padx=5, pady=5, sticky=tk.E)
# ----------------
# Create tree type
# ----------------
def __create_tree_type(self):
self.tree_type = ttk.Treeview(self, height=6, selectmode='browse')
self.tree_type.heading('#0', text='Type', anchor='w')
self.tree_type.column('#0', anchor='w', width=200, minwidth=100)
self.tree_type.grid(row=0, column=0, columnspan=6, padx=5, pady=5, sticky=tk.NSEW)
# Bind item selection
self.tree_type.bind('<<TreeviewSelect>>', self.__getUpdateTypeSelected)
def __getUpdateGroup(self, event):
print("Update Group")
# Temporarily disable the selection event
self.selection_event_enabled = False
# Remove previous items in tree_type
for item in self.tree_type.get_children():
self.tree_type.delete(item)
# Get insert new items to tree_type
onegroup = self.groupCombo.get()
for onetype in self.accounts_dict[onegroup].keys():
self.tree_type.insert('', tk.END, text=onetype)
children = self.tree_type.get_children()
if children:
self.tree_type.selection_set(children[-1])
# Re-enable the selection event
self.selection_event_enabled = True
# --------------------------------
# Event when tree_type is selected
# --------------------------------
def __getUpdateTypeSelected(self, event):
# Process the selection event only if enabled
if self.selection_event_enabled:
print("Update Type")
# Your code here
pass
# ====
# Main
# ====
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>Update:</p>
<p>I found that if I add <code>self.update()</code> after the treeview <code>delete</code> loop, it is only called once.</p>
<pre><code> # Remove previous items in tree_type
for item in self.tree_type.get_children():
self.tree_type.delete(item)
self.update()
</code></pre>
| <python><tkinter><treeview> | 2023-11-27 20:35:29 | 1 | 523 | Hao Shi |
77,559,452 | 1,492,613 | Why does inplace=True behave differently on a single-column view than on multiple columns? | <p>I have been heavily using scipy and numpy for many years. Their performance is very important, so we normally avoid copying data whenever possible.
I have also used pandas superficially for more than 10 years. I always thought inplace=True meant no copy of the data during the operation.</p>
<p>However, I recently encountered a <a href="https://stackoverflow.com/questions/77552859/inplace-mask-does-not-work-is-this-expected-or-it-is-a-bug">strange problem with mask(inplace=True)</a>, which made me doubt that understanding.
I also came across several discussions:</p>
<p><a href="https://stackoverflow.com/questions/45570984/in-pandas-is-inplace-true-considered-harmful-or-not">In pandas, is inplace = True considered harmful, or not?</a></p>
<p><a href="https://stackoverflow.com/questions/43893457/understanding-inplace-true-in-pandas">Understanding inplace=True in pandas</a> (this post almost just repeat the points and claims from the previous post)</p>
<p>There, people argue that inplace has no value in most cases, because it will copy the data behind the scenes anyway.</p>
<p>However, I think that argument is wrong (at least for pandas 2.x) according to the documentation: <a href="https://pandas.pydata.org/docs/user_guide/copy_on_write.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/copy_on_write.html</a></p>
<pre><code>import pandas as pd
print(pd.options.mode.copy_on_write)
# False
</code></pre>
<p>Apparently CoW is not on by default, so I really doubt the claim that it always copies behind the scenes. This also goes against my experience: when I use inplace=True, it definitely reduces my memory usage in most cases. I can verify this with memory_profiler.</p>
<p>Here is my test in IPython; the peak memory is not important here, and "increment" means how much memory was used by that line:</p>
<pre><code>%load_ext memory_profiler
import numpy as np
df_big = pd.DataFrame(np.random.randn(int(1e7),3), columns=["A", "B", "C"])
str_source = ["foofoofoo", "barbarbar", "hello world"]
df_big['STR'] = np.random.choice(str_source, df_big.shape[0])
display(df_big.memory_usage().sum()/1024**2) # show 305MB
df_big2 = df_big.copy()
%%memit
df_big2.replace("barbarbar", "rabrabrab", inplace=True)
# peak memory: 1778.38 MiB, increment: 76.25 MiB
%%memit
df_big3 = df_big.replace("barbarbar", "rabrabrab", inplace=False)
# peak memory: 2007.23 MiB, increment: 305.16 MiB
%%memit
df_big2.loc[:, "STR"].replace("foofoofoo", "oofoofoof", inplace=True)
# peak memory: 1778.55 MiB, increment: 76.38 MiB
%%memit
df_big2.loc[:, "STR"] = df_big2.loc[:, "STR"].replace("oofoofoof", "foofoofoo", inplace=False)
# peak memory: 1854.80 MiB, increment: 152.50 MiB
</code></pre>
<p><strong>It is very clear that replace(inplace=True) reduces memory usage a lot.</strong> 76MB vs 305MB when replacing over the entire dataframe, or 76MB vs 152MB when replacing in only one column.
<strong>So inplace does have real value for those of us who often work with GB-sized dataframes.</strong></p>
<p>From <a href="https://github.com/pandas-dev/pandas/issues/16529" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/16529</a>, its author also acknowledged that more functions actually benefit from inplace, if you look at the list under:</p>
<blockquote>
<p>Should be able to not copy memory (under discussion on what to do)</p>
</blockquote>
<p>That list is far longer than the list of operations that surely gain no benefit.</p>
<p>So my question here is: which functions, under what conditions, actually copy even when inplace=True?</p>
<p>Is there any official source about this? Or is there a way to tell whether a function actually creates a copy behind the scenes or not?</p>
<p>Or maybe this has something to do with the .loc[] not the inplace argument?</p>
<p>I thought .loc[] always returns a view without copying: <a href="https://pandas.pydata.org/docs/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/indexing.html#returning-a-view-versus-a-copy</a></p>
<p>However, that seems not to be the case: if I do df.loc[:, 'A'].mask(df.loc[:, 'A']<0, inplace=True), it actually changes the df.</p>
<p>If I do df.loc[:, ['A', 'B']].mask(df.loc[:, 'A']<0, inplace=True), it does not raise any error about modifying a copy of the data, yet it does not modify the df at all, which indicates it actually modified a copy of the view.</p>
<p>The same thing happens with .where(), .replace(), and maybe even more.</p>
<p>Let's use the same example, but simply change "STR" to ["STR"], so now we return a sub-dataframe instead of a Series:</p>
<pre><code>%%memit
df_big2.loc[:, ["STR"]].replace("foofoofoo", "oofoof123", inplace=True)
# peak memory: 3325.38 MiB, increment: 152.50 MiB
</code></pre>
<p>Now, we can tell from the memory usage that it indeed created a copy in this case. But surprisingly it does not raise the SettingWithCopyWarning, which makes me doubt it is the expected behaviour.</p>
<p>So the behaviour is very strange right here: is there a stable rule about the cases in which .loc[] makes a copy?</p>
<p>The increased memory size, 76.3MB, which is the STR column size, indicates that the underlying Series still gets copied before modification. But it is strange that .loc[:, ["STR"]] needs to copy it twice. Even more strange, the .loc[:, "STR"].replace(inplace=False) also needs to copy it twice. Doesn't Python already have a very mature way to handle this? Like a tuple in a list, where a double copy rarely happens. At this point I feel either there is some bug, or pandas uses memory in a very, very inefficient way.</p>
| <python><pandas><dataframe> | 2023-11-27 19:29:22 | 0 | 8,402 | Wang |
77,559,433 | 18,377,883 | Can't install command-not-found on pip | <p>I want to install the pip package command-not-found, which is defined in my requirements.txt like this: <code>command-not-found==0.3</code>, on my Ubuntu machine, but I get this error:</p>
<pre><code>ERROR: No matching distribution found for command-not-found==0.3
</code></pre>
<p>I heard that there is an apt package <code>python-commandnotfound</code>, but I can't install that either</p>
<pre><code>E: Unable to locate package python-commandnotfound
</code></pre>
<p>I also tried <code>apt search</code> after an <code>apt update</code>, but both returned nothing. I'm also getting the same error in my GitHub Actions and at work.</p>
| <python><ubuntu><pip> | 2023-11-27 19:25:21 | 2 | 1,681 | vince |
77,559,227 | 5,013,143 | How to get an instance attribute which is the name of the instance itself? | <p>I have a Python class named <code>myClass</code> having one argument or more, say <code>a</code>, and an instance which I call <code>obj</code>, as follows:</p>
<pre><code>class myClass:
def __init__(self, a):
self.a = a
...
def method1(self):
...
obj = myClass(3)
</code></pre>
<p>I want <code>obj</code> to have an <em>attribute</em> which is a string with the name of the instance itself, even without passing that name as instance argument, as follows:</p>
<pre><code>>>> obj = myClass(3)
>>> obj.name
"obj"
</code></pre>
<p>From my experience, there is no way to get it via ChatGPT. Could you tell me how I can do it?</p>
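As background (standard Python semantics): an object carries no record of the names bound to it, since one instance can be referenced by many variables or by none, so there is no built-in attribute for this. The closest thing is a best-effort reverse scan of a namespace; a sketch:

```python
class MyClass:
    def __init__(self, a):
        self.a = a

obj = MyClass(3)

# an object has no built-in link back to its variable name(s); the best
# effort is scanning a namespace for bindings that point at this object
names = [name for name, value in globals().items() if value is obj]
print(names)  # ['obj'] (when run at module top level)
```

Libraries that appear to know their own variable name typically do some variant of this scan, and it breaks down as soon as the object has zero or several names bound to it.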
| <python> | 2023-11-27 18:40:21 | 1 | 7,483 | Stefano Fedele |
77,559,176 | 15,781,591 | Unable to set custom timestamp xticks in seaborn lineplot [python] | <p>My dataframe looks like this:
<a href="https://i.sstatic.net/7xOMv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7xOMv.png" alt="enter image description here" /></a></p>
<p>We see category values for a given Date/Time timestamp, showing year, month, day, and the time.</p>
<p>Plotted as a lineplot in seaborn, the x-axis looks like this:
<a href="https://i.sstatic.net/4tGVf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4tGVf.png" alt="enter image description here" /></a></p>
<p>Here we see the Date/Time x-axis tick marks showing the month, day, and hour.</p>
<p>What I am trying to accomplish is to have the Date/Time x-tick marks reformatted to just show the time, so starting at 00:00, with 6 total tick marks, and so for the whole study day, continuing with 04:00, 08:00, 12:00, 16:00, 20:00, 00:00.</p>
<p>I tried using this code to reset the date/time format:</p>
<pre><code>x_dates = df['Date/Time'].dt.strftime('%H:%M').sort_values().unique()
dataplot.set_xticklabels(labels=x_dates, rotation=45, ha='right')
</code></pre>
<p>And now my x-axis ticks look like this:
<a href="https://i.sstatic.net/bTSk7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bTSk7.png" alt="enter image description here" /></a></p>
<p>I do not understand why the hours go from 00:30 to 04:30. Does that mean that my study only lasted until 4:30 PM? I do not know if this is supposed to be military time or not. My study data lasts for 24 hours, so I do not understand why this code changes the data to end at 04:30.</p>
<p>And then I try to change the number of x-axis tick marks to 6 by adding:</p>
<pre><code>dataplot.set_xticks(6)
</code></pre>
<p>But this just results in this error:</p>
<pre><code>TypeError: object of type 'int' has no len()
</code></pre>
<p>How can I fix my date/time string formatting for my x-ticks so that it just shows the time throughout the 24-hour period, without the year, month, or day, and as 6-tick marks? I am using seaborn with Python.</p>
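One caveat up front: `set_xticklabels` just stamps a list of strings onto whatever tick positions already exist, so the sorted unique times end up attached to the wrong positions, which would explain the shifted 00:30 to 04:30 labels. Locating and formatting ticks through `matplotlib.dates` keeps labels bound to the data; a sketch with synthetic timestamps (this assumes the x-axis holds real datetimes, as seaborn's lineplot normally produces for a datetime column; if Date/Time were string-typed and treated as categorical, convert with `pd.to_datetime` first):

```python
import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

# synthetic 24-hour study at 30-minute resolution
times = pd.date_range("2023-06-01 00:00", "2023-06-02 00:00", freq="30min")
fig, ax = plt.subplots()
ax.plot(times, range(len(times)))

# place a tick every 4 hours on the real datetimes and render only clock
# time; unlike set_xticklabels, labels stay attached to the data positions
ax.xaxis.set_major_locator(mdates.HourLocator(interval=4))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))

fig.canvas.draw()
labels = [t.get_text() for t in ax.get_xticklabels()]
```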
| <python><datetime><seaborn><strftime> | 2023-11-27 18:31:01 | 1 | 641 | LostinSpatialAnalysis |
77,558,995 | 9,462,829 | install tensorflow wheel through requirements.txt | <p>I'm working on a project and my processor is old, so I need to install a specific Tensorflow wheel, from here: <a href="https://github.com/fo40225/tensorflow-windows-wheel/tree/master/2.9.0/py39/CPU%2BGPU/cuda117cudnn8sse2" rel="nofollow noreferrer">https://github.com/fo40225/tensorflow-windows-wheel/tree/master/2.9.0/py39/CPU%2BGPU/cuda117cudnn8sse2</a></p>
<p>The thing is, I'm not even able to install it using <code>pip</code>:</p>
<pre><code>pip install "git+ssh://git@github.com/fo40225/tensorflow-windows-wheel/tree/master/2.9.0/py39/CPU%2BGPU/cuda117cudnn8sse2"
</code></pre>
<p>which brings:</p>
<pre><code> fatal: remote error:
is not a valid repository name
</code></pre>
<p>But what I want to actually achieve is to add it to my requirement.txt, so if someone knows how to achieve that, it'd appreciated</p>
<p>Thanks!</p>
<p>Edit:</p>
<p>after downloading all the files I'm getting this:</p>
<p><a href="https://i.sstatic.net/vji8d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vji8d.png" alt="enter image description here" /></a></p>
<p>Should I compress these files into a .whl file of my own?</p>
<p>Tried <code>wheel pack tensorflow</code> but I'm getting <code>No .dist-info directories found in tensorflow</code>. Any ideas?</p>
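<p>For reference, pip's <code>requirements.txt</code> format accepts direct references to wheel files — either a local path or a URL (the names below are hypothetical placeholders, not the actual wheel from that repository):</p>

```text
# requirements.txt — direct wheel references (filenames hypothetical)
./wheels/tensorflow-2.9.0-cp39-cp39-win_amd64.whl
# or, as a direct URL reference:
# tensorflow @ https://example.com/path/to/tensorflow-2.9.0-cp39-cp39-win_amd64.whl
```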
| <python><tensorflow><github> | 2023-11-27 17:53:22 | 1 | 6,148 | Juan C |
77,558,894 | 9,026,704 | How to upload a file larger than 32MB from django admin panel hosted in Cloud Run? | <h2>Problem</h2>
<p>I need to upload zip files with a size of ~800MB to a GCP Cloud Storage bucket using Django's admin panel. My Django API is hosted in Cloud Run, and I just found out Cloud Run has a request size limit of 32MB. I want to know if there's any workaround for this.</p>
<h2>What I tried</h2>
<p>I have tried using signed URLs and chunking the file in 30MB chunks, but got the same "413: Request too large" error. It seems that when I upload the file from the admin panel, it still sends a request with the whole file instead of sending it chunked. Here's my code up until now, which works perfectly from my local machine using a Cloud SQL proxy:</p>
<h4>utils.py:</h4>
<pre><code>from django.conf import settings
from google.cloud import storage
def generate_signed_url(file_name, chunk_number):
    storage_client = storage.Client(credentials=settings.GS_CREDENTIALS)
    bucket = storage_client.bucket(settings.GS_BUCKET_NAME)
    blob_name = f'zip_files/{file_name}_chunk_{chunk_number}'
    blob = bucket.blob(blob_name)
    url = blob.generate_signed_url(
        version="v4",
        expiration=3600,
        method="PUT"
    )
    return url

def combine_chunks(bucket_name, blob_name, chunk_count):
    storage_client = storage.Client(credentials=settings.GS_CREDENTIALS)
    bucket = storage_client.bucket(bucket_name)
    # Create a list of the blobs (chunks) to be composed
    blobs = [bucket.blob(f'zip_files/{blob_name}_chunk_{i}') for i in range(chunk_count)]
    # Create a new blob to hold the composed object
    composed_blob = bucket.blob(f'zip_files/{blob_name}')
    # Compose the chunks into a single object
    composed_blob.compose(blobs)
    # Delete the chunks
    for blob in blobs:
        blob.delete()
</code></pre>
<h4>forms.py:</h4>
<pre><code>from django import forms
from django.conf import settings
from .models import Photoset
from .utils import generate_signed_url, combine_chunks
import requests
class PhotosetAdminForm(forms.ModelForm):
    class Meta:
        model = Photoset
        fields = ['name', 'price', "store_photo", "left_photo", "central_photo", "right_photo", "video_length", "photo_count", 'zip_file']

    def save(self, commit=True):
        instance = super().save(commit=False)
        file = self.cleaned_data.get('zip_file')
        if file:
            # Split the file into chunks
            chunk_size = 1024 * 1024 * 30  # 30MB
            chunks = [file.read(chunk_size) for _ in range(0, len(file), chunk_size)]
            # Upload each chunk separately
            for i, chunk in enumerate(chunks):
                signed_url = generate_signed_url(file.name, i)
                headers = {'Content-Type': file.content_type}
                response = requests.put(signed_url, data=chunk, headers=headers)
                if response.status_code != 200:
                    raise ValueError('A chunk could not be uploaded.')
            # Combine the chunks into a single file
            combine_chunks(settings.GS_BUCKET_NAME, file.name, len(chunks))
        if commit:
            instance.save()
        return instance
</code></pre>
<h4>admin.py:</h4>
<pre><code>...
@admin.register(Photoset)
class PhotosetAdmin(admin.ModelAdmin):
    list_display = ('id', 'name', 'price', 'insert_date', 'last_modified_date')
    list_display_links = ('name',)
    list_filter = ('models',)
    search_fields = ('name', 'price')
    list_per_page = 25
    form = PhotosetAdminForm
...
</code></pre>
<p>zip_file is a FileField in my Photoset model.</p>
| <python><django><google-cloud-platform><google-cloud-run> | 2023-11-27 17:36:18 | 0 | 337 | Nicholas Kemp |
77,558,692 | 7,615,684 | Change Bitbucket's Private repo to Public repo using RESTAPI | <p>I want to convert few of my private repositories to public repo.</p>
<pre><code>import requests
workspace = ''
base_url = 'https://api.bitbucket.org/2.0/'
session = requests.Session()
session.auth = (username, password)
headers = {
'Content-Type': 'application/json'
}
data = {
"type": "repository",
"is_private": False
}
url = f'https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}'
res = session.put(url, json=data, headers=headers)
print(res, res.reason)
</code></pre>
<p>But I got <code><Response [400]> Bad Request</code>. Should I pass any other data in request body?</p>
| <python><rest><python-requests><bitbucket> | 2023-11-27 17:04:04 | 1 | 1,356 | NavaneethaKrishnan |
77,558,411 | 1,210,665 | Python regex for matching next line if present | <p>I have to match some lines as below.</p>
<p><em>Case 1</em>:</p>
<pre><code>[01:32:12.036,000] <tag> label: val3. STATUS = 0x1
[01:32:12.036,001] <tag> label: val3. MISC = 0x8
[02:58:34.971,000] <tag> label: val2. STATUS = 0x2
</code></pre>
<p><em>Case 2</em>:</p>
<pre><code>[01:32:12.036,000] <tag> label: val3. STATUS = 0x1
[02:58:34.971,000] <tag> label: val2. STATUS = 0x2
[01:32:12.036,001] <tag> label: val2. MISC = 0x6
</code></pre>
<p>The line that has <code>MISC</code> value is optional and may be missing. The line with <code>STATUS</code> will always preceed <code>MISC</code> line and is always present.</p>
<p>To match this I am using regex like this: <code>"label: val(\d+). STATUS = (0x[0-9a-fA-F]+)(.*?(label: val(\d+). MISC = (0x[0-9a-fA-F]+)))?"</code></p>
<p>This is working for <em>Case 1</em> and is correctly reporting the values. The output for the matched groups is as below:</p>
<pre><code>MATCH 1
[0] 3
[1] 0x1
[2]
[01:32:12.036,001] <tag> label: val3. MISC = 0x8
[3] label: val3. MISC = 0x8
[4] 3
[5] 0x8
MATCH 2
[0] 2
[1] 0x2
[2]
[3]
[4]
[5]
</code></pre>
<p>But for <em>Case 2</em>, this is skipping the second <code>STATUS</code> in line 2, as below:</p>
<pre><code>Match 1
[0] 3
[1] 0x1
[2]
[02:58:34.971,000] <tag> label: val2. STATUS = 0x2
[01:32:12.036,001] <tag> label: val2. MISC = 0x6
[3] label: val2. MISC = 0x6
[4] 2
[5] 0x6
</code></pre>
<p>I needed 2 matches here also, with first match not reporting <code>MISC</code>.
What am I doing wrong here?</p>
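<p>For reference, a sketch of one way to express this — tying the two label numbers together with a backreference and, because <code>.</code> does not match a newline, restricting the optional <code>MISC</code> part to the line immediately after the <code>STATUS</code> line:</p>

```python
import re

# \1 forces the MISC label number to match the STATUS one; since '.' does not
# match '\n', the optional part can only consume the very next line.
pattern = re.compile(
    r"label: val(\d+)\. STATUS = (0x[0-9a-fA-F]+)"
    r"(?:\n.*?label: val\1\. MISC = (0x[0-9a-fA-F]+))?"
)

case2 = """[01:32:12.036,000] <tag> label: val3. STATUS = 0x1
[02:58:34.971,000] <tag> label: val2. STATUS = 0x2
[01:32:12.036,001] <tag> label: val2. MISC = 0x6"""

print(pattern.findall(case2))
# [('3', '0x1', ''), ('2', '0x2', '0x6')]
```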
| <python><python-3.x><regex> | 2023-11-27 16:21:04 | 2 | 914 | VinayChoudhary99 |
77,558,347 | 4,980,705 | Use Python to calculate timing between bus stops | <p>The following is an example of the .csv file of thousands of lines I have for the different bus lines.</p>
<p><strong>Table:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>trip_id</th>
<th>arrival_time</th>
<th>departure_time</th>
<th>stop_id</th>
<th>stop_sequence</th>
<th>stop_headsign</th>
</tr>
</thead>
<tbody>
<tr>
<td>107_1_D_1</td>
<td>6:40:00</td>
<td>6:40:00</td>
<td>AREI2</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>107_1_D_1</td>
<td>6:40:32</td>
<td>6:40:32</td>
<td>JD4</td>
<td>2</td>
<td></td>
</tr>
<tr>
<td>107_1_D_1</td>
<td>6:41:27</td>
<td>6:41:27</td>
<td>PNG4</td>
<td>3</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p><strong>Raw Data:</strong></p>
<pre><code>trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign
107_1_D_1,6:40:00,6:40:00,AREI2,1,
107_1_D_1,6:40:32,6:40:32,JD4,2,
107_1_D_1,6:41:27,6:41:27,PNG4,3,
</code></pre>
<p>I want to create a table or dataframe with one row for each road segment, calculating the time between consecutive <code>arrival_time</code> values.</p>
<p>Expected result:</p>
<p><a href="https://i.sstatic.net/VqlX9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VqlX9.png" alt="enter image description here" /></a></p>
<p>Note that other trip_ids may share the same RoadSegment.</p>
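<p>A minimal sketch of a per-trip shift/diff approach on the sample rows above (column names as in the CSV):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'trip_id': ['107_1_D_1', '107_1_D_1', '107_1_D_1'],
    'arrival_time': ['6:40:00', '6:40:32', '6:41:27'],
    'stop_id': ['AREI2', 'JD4', 'PNG4'],
    'stop_sequence': [1, 2, 3],
})
df['arrival_time'] = pd.to_timedelta(df['arrival_time'])
df = df.sort_values(['trip_id', 'stop_sequence'])

# Build one row per road segment: previous stop -> current stop, per trip
df['from_stop'] = df.groupby('trip_id')['stop_id'].shift()
df['seconds'] = df.groupby('trip_id')['arrival_time'].diff().dt.total_seconds()
segments = df.dropna(subset=['from_stop'])[['trip_id', 'from_stop', 'stop_id', 'seconds']]
print(segments)
```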
| <python><pandas><numpy> | 2023-11-27 16:11:19 | 3 | 717 | peetman |
77,558,217 | 390,897 | Blender Python Package Cannot be Resolved in VSCode | <p>I'm trying to get Blender's Python library (bpy) working with VSCode, but for some reason, despite having the correct interpreter selected, VSCode cannot resolve the package. What's strange is that Jupyter autocomplete has no issues.</p>
<p>Other libraries, such as Numpy, import without any issues.</p>
<p>Note that BPY also imports several other modules, such as bmesh, bpy_types, and mathutils. None of those resolve either. The integrated terminal has no issues importing any of these packages.</p>
<p>Does anyone have any recommendations for how to force VSCode to recognize BPY?</p>
<p><a href="https://i.sstatic.net/qiURH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qiURH.png" alt="broken imports" /></a></p>
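<p>One thing worth checking (a guess — the path below is hypothetical and depends on the Blender install) is pointing Pylance at Blender's bundled modules via <code>settings.json</code>:</p>

```json
{
    "python.analysis.extraPaths": [
        "/path/to/blender/4.0/scripts/modules"
    ]
}
```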
| <python><python-import> | 2023-11-27 15:48:46 | 1 | 33,893 | fny |
77,558,059 | 4,249,338 | How to check if a path is a relative symlink in python? | <p><code>os</code> package has
<code>os.path.islink(path)</code>
to check if <code>path</code> is a symlink.
How do I determine if a file located at <code>path</code> is a <em>relative</em> symlink?</p>
<p>Note that <code>path</code> can be an absolute path (i.e. <code>path == os.path.abspath(path)</code>), e.g. <code>/home/user/symlink</code> that is a relative symlink to <code>../</code> which is relative. As opposed to a link to <code>/home/user</code>, which points to the same directory but is absolute.</p>
| <python> | 2023-11-27 15:24:39 | 1 | 656 | gg99 |
77,557,930 | 8,270,512 | dataframe to string for posting in a jira comment | <p>I am trying to create a string representation of a pandas DataFrame that can be formatted as a table in JIRA comments. My issue arises when newline characters (\n) in the string are not being correctly interpreted by JIRA, when I try to paste the string.</p>
<p>Code:
Here's the Python function I'm using to convert a DataFrame to a JIRA-formatted table string:</p>
<pre><code>import pandas as pd
def dataframe_to_jira_string(df):
    jira_lines = []
    header = "||" + "||".join([f"*{col}*" for col in df.columns]) + "||"
    jira_lines.append(header)
    for index, row in df.iterrows():
        first_col = f"|{index}"
        other_cols = "|".join(row.astype(str).values)
        row_str = f"{first_col}|{other_cols}|"
        jira_lines.append(row_str)
    jira_table_string = "\n".join(jira_lines)
    return jira_table_string
# Sample DataFrame
data = {
'CR': [4312, 1432, 7, 41, 1, 9, 11],
'False': [13, 1, 7, 0, 30, 2, 4],
'Miss': [5285, 250, 15, 45, 0, 57, 2],
'True': [732, 48, 105, 19, 63, 10, 9]
}
index = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
df = pd.DataFrame(data, index=index)
jira_table_string = dataframe_to_jira_string(df)
</code></pre>
<p>Question:
How can I ensure that the newline characters in the string are preserved and correctly interpreted by JIRA when pasted?
Is there a specific way I should format the string or handle newline characters in Python to make it compatible with JIRA's text processing?
Any advice or insights on how to solve this issue would be greatly appreciated!</p>
| <python><jira> | 2023-11-27 15:06:30 | 0 | 1,141 | ZakS |
77,557,712 | 5,959,601 | Continous Update of Python Qt Plot | <p>Consider the sample PySide6 code below.
I have a class <strong>SampleWindow</strong> which generates some set of values continuously (x and y).
I would also like to plot and visualize them in parallel, for which the <strong>ScatterPlotDialog</strong> class has been written.
However, I am having issues with the continuous updating/refresh of the plot.</p>
<p>I am not sure if this is an issue of <code>pyqtgraph</code> OR it's a pythonic implementation issue.
Any suggestions to improve or solve the issue ?</p>
<pre><code>from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QPushButton, QDialog
from PySide6.QtCore import QTimer
import pyqtgraph as pg
import random
class ScatterPlotDialog(QDialog):
    def __init__(self, x, y):
        super().__init__()
        self.setWindowTitle("Scatter Plot Dialog")
        self.setup_ui()
        # Create initial empty scatter plot
        self.scatter_plot = pg.ScatterPlotItem()
        self.plot_widget.addItem(self.scatter_plot)
        # Set initial scatter plot data
        self.scatter_plot.setData(x=x, y=y)

    def setup_ui(self):
        layout = QVBoxLayout(self)
        self.plot_widget = pg.PlotWidget(self)
        layout.addWidget(self.plot_widget)
        self.setLayout(layout)

class SampleWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Sample Window")
        self.setup_ui()
        self.x = []  # Initialize x and y values as empty lists
        self.y = []
        self.timer = QTimer()
        self.timer.timeout.connect(self.update_data)
        self.timer.start(1000)  # Update data every 1 second

    def setup_ui(self):
        self.button = QPushButton("Open Scatter Plot Dialog", self)
        self.button.clicked.connect(self.open_scatter_plot_dialog)

    def update_data(self):
        # Generate new random data points and update x and y values
        self.x = [random.uniform(0, 10) for _ in range(10)]
        self.y = [random.uniform(0, 10) for _ in range(10)]

    def open_scatter_plot_dialog(self):
        dialog = ScatterPlotDialog(self.x, self.y)
        dialog.exec()

if __name__ == "__main__":
    app = QApplication([])
    window = SampleWindow()
    window.show()
    app.exec()
</code></pre>
| <python><pyside><pyqtgraph> | 2023-11-27 14:37:06 | 1 | 449 | StanGeo |
77,557,706 | 11,040,661 | FastAPI protecting shared variable with multiprocessing lock | <p>My objective is to have a stateful app that is optimized for latency, albeit going against the 12-factor pattern. Thus, I need a state manager that can safely handle concurrency/race conditions to a certain extent. Take a look at the example below:</p>
<pre class="lang-py prettyprint-override"><code>import copy
from fastapi import FastAPI
from multiprocessing import Lock
app = FastAPI()

@app.get("/path1")
async def handler1():  # simple counter init/return api
    states = StateManager.global_instance
    if states.get("some-state") is None:
        states.set("some-state", 0)
    return {"status": states.get("some-state")}

@app.get("/path2")
async def handler2():  # counter increment path
    states = StateManager.global_instance
    counter = states.get("some-state")
    counter += 1
    states.set("some-state", counter)
    return {"status": states.get("some-state")}

class StateManager:
    def __init__(self, app):
        self._state = app.state  # starlette shared state

    def get(self, key):
        with self._state.StateManagerLock:
            try:
                retval = getattr(self._state, key)
                retval = copy.deepcopy(retval)
            except AttributeError:
                retval = None
        return retval

    def set(self, key: str, value: any):
        self._state.StateManagerLock.acquire(block=True)
        setattr(self._state, key, copy.deepcopy(value))
        self._state.StateManagerLock.release()
        return self

    @classmethod
    def _bootstrap(cls, app):
        app.state.StateManagerLock = Lock()
        cls.global_instance = cls(app=app)

StateManager._bootstrap(app)  # must run after the class is defined
</code></pre>
<p>Am I right to assume that the <code>state.StateManagerLock</code> is the same lock that is used when this script is executed with uvicorn with <code>--workers</code> set to >1? Which is to say that <code>set</code> is atomic while <code>get</code> method is a read lock common to uvicorn worker pool.</p>
| <python><fastapi><uvicorn><multiprocess><starlette> | 2023-11-27 14:35:51 | 0 | 1,111 | Silver Flash |
77,557,539 | 13,641,680 | How to run Python Azure SDK in a Github Actions workflow with federated credentials authentication | <p>I have a Python tool that wants to interact with Azure SDK (list and upload blobs). Typically I would use something like:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
creds = DefaultAzureCredential()
client = BlobServiceClient(..., credential=creds)
</code></pre>
<p>Since authentication relies on the user currently logged in via the CLI, I would like to run this script in a GitHub Actions workflow.</p>
<p>The catch is that the authentication method in the GitHub action <strong>MUST</strong> be via federated credentials.</p>
<p>I have a service principal with the federated identity configured, I can authenticate and login using the <code>azure/login</code> action. The problem is that in the step where the Python script must be run, it seems that the context of the logged-in CLI from the previous step is not carried over because I see the following error:</p>
<pre class="lang-bash prettyprint-override"><code>azure.core.exceptions.HttpResponseError: This request is not authorized to perform this operation using this permission.
</code></pre>
<p>My workflow looks like this (simplified):</p>
<pre class="lang-yaml prettyprint-override"><code>jobs:
  execute:
    runs-on: ubuntu-latest
    steps:
      # checkout and other stuff
      # ...
      - name: AZ Login
        uses: Azure/login@v1
        with:
          client-id: ${{ vars.AZURE_CLIENT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
      # setup python and install packages
      # ...
      - name: Run script
        run: |
          .venv/bin/python script.py
<p>Question is: how can I propagate the credentials from the <code>azure/login</code> step into the Python runtime environment?</p>
| <python><azure><federated-identity><azure-sdk> | 2023-11-27 14:11:02 | 1 | 1,750 | everspader |
77,557,529 | 10,985,257 | Building Debian Python Package with Poetry | <p>I try to build a debian package from the python package <code>python-pam</code>.</p>
<p>I followed mostly this <a href="https://gist.github.com/gh-888/0c69a350a8bcfff0c12b2d1c553b6fb6" rel="nofollow noreferrer">gist</a>.</p>
<p>Because the description is based on <code>setup.py</code>, but <code>python-pam</code> is based on <code>pyproject.toml</code> I tried to combine the information of this question: <a href="https://stackoverflow.com/questions/63304163/how-to-create-a-deb-package-for-a-python-project-without-setup-py">How to create a deb package for a python project without setup.py</a></p>
<p>I updated my <code>control</code> file to:</p>
<pre class="lang-yaml prettyprint-override"><code>Source: python-pam
Section: python
Priority: optional
Maintainer: username <username@mail.com>
Uploaders:
 username <username@mail.com>
Build-Depends:
 debhelper-compat (= 13),
 dh-python,
 python3-all,
 pybuild-plugin-pyproject,
 python3-poetry-core,
Standards-Version: 4.5.1
Homepage: https://pypi.org/project/python-pam/
Rules-Requires-Root: no
Vcs-Git: https://github.com/FirefighterBlu3/python-pam.git
Vcs-Browser: https://github.com/FirefighterBlu3/python-pam
X-Python3-Version: >= 3.7
Package: python3-python-pam
Architecture: all
Multi-Arch: foreign
Depends:
 python3-six,
 ${misc:Depends},
 ${python3:Depends}
Description: python-pam for authentication in python
 Python pam module supporting py3 (and py2) for Linux type systems (!windows)
</code></pre>
<p>and reduced the <code>rules</code> to the following minimal:</p>
<pre><code>#!/usr/bin/make -f
# You must remove unused comment lines for the released package.
#export DH_VERBOSE = 1
%:
	dh $@ --with python3 --buildsystem=pybuild
</code></pre>
<p>If I try to run:</p>
<pre class="lang-bash prettyprint-override"><code>gbp buildpackage --git-pristine-tar
</code></pre>
<p>I get an error because the ssh-key I previously used to sign the tag isn't bound to my user name. I might be able to figure this out by myself, but I am struggling with my simple rules file.</p>
<p>As far as I understand, the rules file describes the steps necessary to build the package. I have used poetry in the past, but only for building PyPI packages. At this point I am not sure how to continue the rules file, or what else I might be missing. The json package mentioned in the other question seems to do a lot of other stuff that python-pam doesn't provide.</p>
| <python><debian><packaging> | 2023-11-27 14:09:12 | 0 | 1,066 | MaKaNu |
77,557,459 | 6,562,240 | Dynamically Adding Columns to DataFrame based on str.split() output | <p>I currently have a dataframe with one column containing contact names such as:</p>
<pre><code>Andrew Jones
James
Hugh Peter Michael
</code></pre>
<p>I am trying to split the names into mutiple columns so that I can separate first names from last names:</p>
<pre><code>df[['Name Part One', 'Name Part Two', 'Name Part Three', 'Name Part Four']] = df['Contact Person'].str.split(' ', expand=True)
</code></pre>
<p>However this is giving the error:</p>
<blockquote>
<p>ValueError: Columns must be same length as key</p>
</blockquote>
<p>Which I believe must be related to the fact that for some Contact People, there are only 1 parts to the name, others 2, and so on.</p>
<p>How can I dynamically add columns depending on how many parts are split out?</p>
| <python><pandas><split> | 2023-11-27 13:57:37 | 1 | 705 | Curious Student |
77,557,453 | 3,336,423 | Scipy: Bessel iterative filtering different than single shot filtering | <p>I have this piece of code that applies a Bessel filter:</p>
<p><strong>One-shot version:</strong></p>
<pre><code>import scipy.signal
fc_bessel = 0.14 # [Hz]
ordre_bessel = 3
b,a = scipy.signal.bessel(ordre_bessel, fc_bessel, 'low', analog=False, output='ba', fs=300)
filter_once = scipy.signal.lfilter(b, a, input_data)
</code></pre>
<p>Since I will eventually receive my data in real time, I need to adapt this code to process each <code>input_data</code> sample on the fly, maintaining the filter state in a <code>zi</code> variable. So I wrote:</p>
<p><strong>Iterative version:</strong></p>
<pre><code>import scipy.signal
fc_bessel = 0.14 # [Hz]
ordre_bessel = 3
b,a = scipy.signal.bessel(ordre_bessel, fc_bessel, 'low', analog=False, output='ba', fs=300)
z = scipy.signal.lfilter_zi(b, a)
filter_iter = []
for input_value in input_data:
    filtered_value, z = scipy.signal.lfilter(b, a, [input_value], zi=z)
    filter_iter.append(filtered_value[0])
</code></pre>
<p>However, the outputs are completely different (if the first value is 0, <code>filter_once[0]</code> is <code>0</code> but <code>filter_iter[0]</code> is <code>0.999...</code>).</p>
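<p>For comparison, a sketch based on the assumption that the one-shot <code>lfilter</code> call starts from zero initial conditions, while <code>lfilter_zi</code> returns the steady state for a step input — initialising the iterative state with zeros instead should make the two match:</p>

```python
import numpy as np
import scipy.signal

b, a = scipy.signal.bessel(3, 0.14, 'low', analog=False, output='ba', fs=300)

rng = np.random.default_rng(0)
x = rng.standard_normal(200)

filter_once = scipy.signal.lfilter(b, a, x)

# Zero initial state reproduces lfilter's default initial conditions
z = np.zeros(max(len(a), len(b)) - 1)
filter_iter = []
for v in x:
    y, z = scipy.signal.lfilter(b, a, [v], zi=z)
    filter_iter.append(y[0])
```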
| <python><scipy><filtering> | 2023-11-27 13:56:41 | 1 | 21,904 | jpo38 |
77,557,429 | 10,909,217 | Validate a function call without calling a function | <p>I'd like to know if I can use pydantic to check whether arguments <em>would</em> be fit for calling a type hinted function, but <em>without</em> calling the function.</p>
<p>For example, given</p>
<pre><code>kwargs = {'x': 1, 'y': 'hi'}
def foo(x: int, y: str, z: Optional[list] = None):
pass
</code></pre>
<p>I want to check whether <code>foo(**kwargs)</code> would be fine according to the type hints. So basically, what <code>pydantic.validate_call</code> does, without calling <code>foo</code>.</p>
<p>My research led me to <a href="https://github.com/pydantic/pydantic/issues/2127" rel="nofollow noreferrer">this</a> github issue. The solution works, but it relies on <code>validate_arguments</code>, which is deprecated. It cannot be switched for <code>validate_call</code>, because its return value doesn't have a <code>vd</code> attribute, making the line <code>validated_function = f.vd</code> fail.</p>
| <python><pydantic> | 2023-11-27 13:53:01 | 2 | 1,290 | actual_panda |
77,557,426 | 14,777,704 | Is it an efficient and a good practice to pass huge pandas dataframes as function parameters and also return them from functions? | <p>I apologize for asking a dumb question but kindly advise me regarding this.</p>
<p><strong>Scenario a: I need to write 1.5GB pandas dataframes as csv files into SharePoint location.</strong></p>
<p>Approach 1:</p>
<pre><code># function to write dataframe to sharepoint location
def writeData(largeDataframe, uname, pwd, relpath, filename):
    baseCtx = getSharePointContext(uname, pwd)  # another function to authenticate and get sharepoint context
    target_folder = baseCtx.web.get_folder_by_server_relative_url(f"Shared Documents/{relpath}")
    buffer = io.BytesIO()
    largeDataframe.to_csv(buffer, index=False, encoding='utf-8', lineterminator='\n')
    buffer.seek(0)
    file_content = buffer.read()
    target_folder.upload_file(filename, file_content).execute_query()

if __name__ == '__main__':
    lisFiles = ["aa.csv", "bb.csv", ...]  # list having 100 csv files
    for file in lisFiles:
        df = pd.read_csv(file)
        # do something with df
        writeData(df, uname, pwd, relpath, filename)
</code></pre>
<p>Approach 2:</p>
<pre><code>if __name__ == '__main__':
    lisFiles = ["aa.csv", "bb.csv", ...]  # list having 100 csv files
    for file in lisFiles:
        df = pd.read_csv(file)
        baseCtx = getSharePointContext(uname, pwd)  # another function to authenticate and get sharepoint context
        target_folder = baseCtx.web.get_folder_by_server_relative_url(f"Shared Documents/{relpath}")
        buffer = io.BytesIO()
        df.to_csv(buffer, index=False, encoding='utf-8', lineterminator='\n')
        buffer.seek(0)
        file_content = buffer.read()
        target_folder.upload_file(filename, file_content).execute_query()
</code></pre>
<p>Is there any difference in efficiency or performance? Which approach is recommended for the best performance and speed?</p>
<p><strong>Scenario b: I need to read nearly 1.5GB csv files stored in SharePoint into pandas dataframes.</strong></p>
<p>Approach 1:</p>
<pre><code># function to read dataframe from sharepoint location
def readData(url, uname, pwd):
    largeResult = None
    ctx = getSharePointContext(uname, pwd)  # another function to authenticate and get sharepoint context
    web = ctx.web
    ctx.load(web)
    ctx.execute_query()
    response = File.open_binary(ctx, url)
    bytes_file_obj = io.BytesIO()
    bytes_file_obj.write(response.content)
    bytes_file_obj.seek(0)
    largeResult = pd.read_csv(bytes_file_obj, dtype=str, encoding='utf-8')
    return largeResult

if __name__ == '__main__':
    df1 = readData(url1, uname, pwd)
</code></pre>
<p>Approach 2:</p>
<pre><code>if __name__ == '__main__':
    ctx = getSharePointContext(uname, pwd)  # another function to authenticate and get sharepoint context
    web = ctx.web
    ctx.load(web)
    ctx.execute_query()
    response = File.open_binary(ctx, url1)
    bytes_file_obj = io.BytesIO()
    bytes_file_obj.write(response.content)
    bytes_file_obj.seek(0)
    df1 = pd.read_csv(bytes_file_obj, dtype=str, encoding='utf-8')
</code></pre>
<p>Is there any difference in efficiency or performance? Which approach is recommended for the best performance and speed?</p>
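<p>One relevant detail: passing a DataFrame into a function does not copy it — Python passes object references — so the extra function call in Approach 1 should not by itself cost anything. A quick sketch to illustrate:</p>

```python
import pandas as pd

def takes_df(frame):
    # No copy is made when a DataFrame is passed as an argument:
    # 'frame' is simply another name for the same object.
    return frame

df = pd.DataFrame({'a': range(1_000_000)})
assert takes_df(df) is df
```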
| <python><pandas><dataframe> | 2023-11-27 13:52:28 | 2 | 375 | MVKXXX |
77,557,158 | 12,276,279 | How can I add arrows in maps using geopandas and matplotlib with the arrows having color gradients? | <p>I want to show trade flow between countries in a map.
For this, I want to add 1 or 2 arrows to and from each country, as shown below.
The arrows should have color gradients, which represent certain numeric values.</p>
<p>1.How can I add arrows on top of countries in maps using geopandas or other packages?</p>
<ol start="2">
<li>How do I add color gradients to those arrows using color maps? I also need the legend for the color values.</li>
</ol>
<p><a href="https://i.sstatic.net/RobrA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RobrA.png" alt="enter image description here" /></a></p>
| <python><matplotlib><geospatial><spatial> | 2023-11-27 13:09:48 | 1 | 1,810 | hbstha123 |
77,557,107 | 4,404,805 | Python: int() returning unexpected results | <p>I have a scenario where I cannot use Python's inbuilt rounding functions so I have written a custom rounding function but it's not giving the expected result for a particular usecase. This is what I have tried (I have kept the output as the comments):</p>
<pre><code>import pandas as pd
def custom_rounder(value, scale):
    multiplier = 10 ** scale
    if isinstance(value, pd.Series):
        print("number: ", value)
        # number:
        # 0    583.065
        # 1     74.040
        # Name: ItemCost, dtype: float64
        print("multiplier: ", multiplier)
        # multiplier: 100
        print("number * multiplier: ", value.apply(lambda number: number * multiplier))
        # number * multiplier:
        # 0    58306.5
        # 1     7404.0
        # Name: ItemCost, dtype: float64
        print("number * multiplier + 0.5 :", value.apply(lambda number: number * multiplier + 0.5))
        # number * multiplier + 0.5 :
        # 0    58307.0
        # 1     7404.5
        # Name: ItemCost, dtype: float64
        print("int(number * multiplier + 0.5) :", value.apply(lambda number: int(number * multiplier + 0.5)))
        # int(number * multiplier + 0.5) :
        # 0    58306
        # 1     7404
        # Name: ItemCost, dtype: int64
        return value.apply(lambda number: int(number * multiplier + 0.5) / float(multiplier))
    elif isinstance(value, pd.DataFrame):
        return value.apply(lambda number: number.apply(lambda x: (x * multiplier + 0.5) / float(multiplier)))
    else:
        rounded_value = int(value * multiplier + 0.5) / float(multiplier)
        return rounded_value

data = {'ItemID': [1, 2],
        'ItemDescription': ['ItemA', 'ItemB'],
        'ItemCost': [945.0, 120.0]}
df = pd.DataFrame(data)
#   ItemID ItemDescription  ItemCost
#        1           ItemA     945.0
#        2           ItemB     120.0

print(custom_rounder(df['ItemCost'] * 0.617, 2))
# 0    583.06
# 1     74.04
# Name: ItemCost, dtype: float64
</code></pre>
<p>When I perform:</p>
<ol>
<li><code>int(583.065 * 100 + 0.5)</code> - I get the expected output as <code>58307</code></li>
<li><code>value.apply(lambda number: int(number * multiplier + 0.5))</code> - I get unexpected output as <code>58306</code> instead of <code>58307</code>. In this case, <code>int(number * multiplier + 0.5)</code> is behaving strangely.</li>
</ol>
<p>I have been trying to solve this issue for the past few days but unfortunately I couldn't find a solution. What could be the reason behind this strange behavior?</p>
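<p>A related detail worth checking: the literal <code>583.065</code> and the computed product <code>945.0 * 0.617</code> are not necessarily the same binary double, which alone can flip the <code>int(... + 0.5)</code> result:</p>

```python
from decimal import Decimal

computed = 945.0 * 0.617   # what the Series actually holds
literal = 583.065          # what gets typed when testing by hand

print(Decimal(computed))   # the exact binary value of the product
print(Decimal(literal))    # the exact binary value of the literal
print(int(computed * 100 + 0.5), int(literal * 100 + 0.5))
```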
| <python><pandas><dataframe><apply><series> | 2023-11-27 13:03:04 | 0 | 1,207 | Animeartist |
77,557,037 | 2,725,810 | The phenomenon of AWS Lambda function running time not affected by cold starts | <p>I have a script to keep two Lambda functions warm:</p>
<pre class="lang-py prettyprint-override"><code>PING_DELAY = 60
last_ping = -PING_DELAY
while True:
    try:
        timer = Timer()
        timestamp_str = \
            datetime.now().strftime("%d.%m.%Y %H:%M:%S.%f")[:-3]
        print(f"Ping at {timestamp_str} ", end='', flush=True)
        response = lambda_client.invoke(
            FunctionName='myfunc',
            InvocationType='RequestResponse',
            Payload=json.dumps({'command': 'ping'}))
        payload = json.load(response['Payload'])
        if 'errorMessage' in payload:
            raise Exception(payload['errorMessage'])
        else:
            my_time = timer.stop()
            stats = payload['stats']
            print(f"took {my_time}ms. n_cold: {stats['n_cold']} total_init: {stats['total_init']}ms", flush=True)
    except Exception as e:
        print(f"AWS Lambda submit failed: {e}", flush=True)
    time.sleep(PING_DELAY)
</code></pre>
<p>Every minute, it invokes the <code>myfunc</code> Lambda function, which in turn invokes in parallel 109 instances of another Lambda function. Based on the responses, <code>myfunc</code> reports in its response how many instances experienced a cold start and the total initialization time. The relevant code for both functions is shown in the Appendix below. Here is part of the output of this script:</p>
<pre class="lang-none prettyprint-override"><code>Ping at 27.11.2023 11:01:23.180 took 581ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:02:23.821 took 527ms. n_cold: 2 total_init: 11531ms
Ping at 27.11.2023 11:03:24.408 took 486ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:04:24.954 took 511ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:05:25.525 took 592ms. n_cold: 1 total_init: 6416ms
Ping at 27.11.2023 11:06:26.119 took 525ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:07:26.704 took 502ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:08:27.236 took 605ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:09:27.901 took 546ms. n_cold: 0 total_init: 0ms
Ping at 27.11.2023 11:10:28.503 took 497ms. n_cold: 1 total_init: 6396ms
Ping at 27.11.2023 11:11:29.060 took 489ms. n_cold: 0 total_init: 0ms
</code></pre>
<p>What does not make sense to me, is that <code>myfunc</code> returns a response in less than a second even when some of the instances experience a cold start, which ensues about six seconds of initialization. Only on rare occasions do I get what I would expect when there are cold starts:</p>
<pre class="lang-none prettyprint-override"><code>Ping at 27.11.2023 12:09:04.775 took 6525ms. n_cold: 6 total_init: 36829ms
</code></pre>
<p><strong>The question:</strong> How can we explain the phenomenon of AWS Lambda function running time seemingly not affected by cold starts?</p>
<p><strong>Appendix.</strong> For reference, the <code>myfunc</code> script processes the stats in the responses as follows:</p>
<pre class="lang-py prettyprint-override"><code>stats = response['stats']
init = stats['init_time']
if stats['cold_start']:
n_cold += 1
assert(init > 0)
total_init += init
else:
assert(init == 0)
</code></pre>
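<p>For completeness, the fan-out in <code>myfunc</code> is essentially a thread pool over the invoke call plus the aggregation loop above; a simplified, self-contained stand-in (a local function replaces the real Lambda invocation, and the numbers are made up) behaves like this:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def invoke_worker(i):
    # stand-in for the real Lambda Invoke call; instance 0 fakes a cold start
    return {'stats': {'cold_start': i == 0, 'init_time': 6400 if i == 0 else 0}}

n_cold, total_init = 0, 0
with ThreadPoolExecutor(max_workers=109) as pool:
    for response in pool.map(invoke_worker, range(109)):
        stats = response['stats']
        if stats['cold_start']:
            n_cold += 1
            total_init += stats['init_time']

print(n_cold, total_init)  # 1 6400
```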
<p>The Lambda function invoked by <code>myfunc</code> is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import json
import time
import pickle
def ms_now():
return int(time.time_ns() / 1000000)
class Timer():
def __init__(self):
self.start = ms_now()
def stop(self):
return ms_now() - self.start
timer = Timer()
started_init = ms_now()
cold_start = True
init_time = timer.stop()
... # Some heavy machine learning models are loaded here
def compute(event):
... # The actual computation is here
def lambda_handler(event, context):
global cold_start
global init_time
global started_init
stats = {'cold_start': cold_start,
'started_init': started_init,
'init_time': init_time}
cold_start = False
init_time = 0
started_init = 0
stats['started'] = ms_now()
result = compute(event)
stats['finished'] = ms_now()
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'result': result,
'stats': stats
}
</code></pre>
<p>P.S. The <a href="https://repost.aws/questions/QUtNm0H7HfQ2-hST9uNz_zwA/the-phenomenon-of-aws-lambda-function-running-time-not-affected-by-cold-starts" rel="nofollow noreferrer">question at re:Post</a></p>
| <python><aws-lambda> | 2023-11-27 12:50:38 | 1 | 8,211 | AlwaysLearning |
77,556,881 | 3,256,651 | Display plotly go figure in widget | <p>I would like to make a widget in jupyter lab with a plotly go figure.</p>
<p>I started by copying this example: <a href="https://plotly.com/python/figurewidget-app/?_ga=2.180976559.943274886.1701077393-1452099185.1695387650" rel="nofollow noreferrer">https://plotly.com/python/figurewidget-app/?_ga=2.180976559.943274886.1701077393-1452099185.1695387650</a></p>
<p>The widgets display, but the figure does not. How do I get the figure to show as well?</p>
<p>I report the code below. I just added the display part.</p>
<pre><code>import datetime
import numpy as np
import pandas as pd
from IPython.display import display
import plotly.graph_objects as go
from ipywidgets import widgets
df = pd.read_csv(
'https://raw.githubusercontent.com/yankev/testing/master/datasets/nycflights.csv')
df = df.drop(df.columns[[0]], axis=1)
month = widgets.IntSlider(
value=1.0,
min=1.0,
max=12.0,
step=1.0,
description='Month:',
continuous_update=False
)
use_date = widgets.Checkbox(
description='Date: ',
value=True,
)
container = widgets.HBox(children=[use_date, month])
textbox = widgets.Dropdown(
description='Airline: ',
value='DL',
options=df['carrier'].unique().tolist()
)
origin = widgets.Dropdown(
options=list(df['origin'].unique()),
value='LGA',
description='Origin Airport:',
)
# Assign an empty figure widget with two traces
trace1 = go.Histogram(x=df['arr_delay'], opacity=0.75, name='Arrival Delays')
trace2 = go.Histogram(x=df['dep_delay'], opacity=0.75, name='Departure Delays')
g = go.FigureWidget(data=[trace1, trace2],
layout=go.Layout(
title=dict(
text='NYC FlightDatabase'
),
barmode='overlay'
))
def validate():
if origin.value in df['origin'].unique() and textbox.value in df['carrier'].unique():
return True
else:
return False
def response(change):
if validate():
if use_date.value:
filter_list = [i and j and k for i, j, k in
zip(df['month'] == month.value, df['carrier'] == textbox.value,
df['origin'] == origin.value)]
temp_df = df[filter_list]
else:
filter_list = [i and j for i, j in
zip(df['carrier'] == 'DL', df['origin'] == origin.value)]
temp_df = df[filter_list]
x1 = temp_df['arr_delay']
x2 = temp_df['dep_delay']
with g.batch_update():
g.data[0].x = x1
g.data[1].x = x2
g.layout.barmode = 'overlay'
g.layout.xaxis.title = 'Delay in Minutes'
g.layout.yaxis.title = 'Number of Delays'
origin.observe(response, names="value")
textbox.observe(response, names="value")
month.observe(response, names="value")
use_date.observe(response, names="value")
container2 = widgets.HBox([origin, textbox])
widg = widgets.VBox([container,
container2,
g])
display(widg)
</code></pre>
| <python><plotly><jupyter-lab><ipywidgets> | 2023-11-27 12:24:22 | 0 | 1,922 | esperluette |
77,556,682 | 6,562,240 | Lambda Function "unhashable type: Series" error | <p>I am attempting to set the value of a column equal to "United States of America" if the first 3 characters of another column equal "USA", else I want to preserve the existing value of the column.</p>
<pre><code>df['Country'] = df['Country ID'].map(lambda x: 'United States of America' if x[:3] == 'USA' else df['Country'])
</code></pre>
<p>However, this gives the error:</p>
<blockquote>
<p>unhashable type: 'Series'</p>
</blockquote>
<p>I am unsure how to refer back to the column's original value if the if condition is not met.</p>
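<p>To narrow it down, the <code>else</code> branch of the lambda returns the entire <code>Country</code> column rather than the row's scalar value (minimal reproduction with made-up data):</p>

```python
import pandas as pd

df = pd.DataFrame({'Country ID': ['USA001', 'CAN001'],
                   'Country': ['', 'Canada']})

fix = lambda x: 'United States of America' if x[:3] == 'USA' else df['Country']

# the else branch hands back the whole column, not a single value
print(type(fix('CAN001')))
```

<p>so presumably the error comes from pandas receiving a whole <code>Series</code> where it expects a scalar.</p>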
| <python><pandas><lambda> | 2023-11-27 11:53:19 | 0 | 705 | Curious Student |
77,556,547 | 4,069,931 | Bumpversion optional dev version | <p>I am using the Bumpversion utility (<a href="https://github.com/peritus/bumpversion" rel="nofollow noreferrer">https://github.com/peritus/bumpversion</a>) to increase the version of my application.</p>
<p>I want to be able to make a <code>dev</code> build, i.e. <strong>optionally</strong> add a suffix to a regular version.
With the current approach, I can't run <code>bumpversion dev</code>; it says:</p>
<p><code>ValueError: The part has already the maximum value among ['dev'] and cannot be bumped.</code></p>
<pre><code>[bumpversion]
current_version = 1.5.3
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(-(?P<dev>.*)-(?P<build>\d+))?
serialize =
{major}.{minor}.{patch}-{dev}-{build}
{major}.{minor}.{patch}
[bumpversion:part:dev]
values =
dev
[bumpversion:part:build]
first_value = 1
</code></pre>
<p>How to create <code>dev-{build}</code> version optionally?</p>
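<p>For what it's worth, the <code>parse</code> pattern itself accepts both serializations when checked standalone with Python's <code>re</code> module, so the problem seems to be in the part configuration rather than in the regex:</p>

```python
import re

parse = r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(-(?P<dev>.*)-(?P<build>\d+))?"

plain = re.fullmatch(parse, "1.5.3")        # plain release
dev = re.fullmatch(parse, "1.5.3-dev-1")    # dev build with suffix

print(plain.groupdict())
print(dev.groupdict())
```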
| <python><version><bump2version> | 2023-11-27 11:31:57 | 1 | 874 | Hardwired |
77,556,489 | 21,034,926 | Tkinter - Create table using database fetched data | <p>I'm learning Python and I want to make a GUI using the Tkinter module. I'm reading some articles about its usage, and now I'm trying to create a simple table that needs to be populated with data fetched from a Supabase instance. I've written this code, but when I try to debug it the application crashes. How do I need to fix it to get a table that shows the desired information? How do I add a label for each column?</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Entry, END
from tkinter import ttk
from supabase import create_client, Client

# url and key are defined elsewhere
supabase: Client = create_client(url, key)
#
response = supabase.table('pdvList').select('*').execute()
# Creting application window
window = Tk()
window.title("Test screen")
window.geometry("300x300")
# Creating table
for r in range(len(response.data)):
for c in range(15):
e = Entry(window, width=20, fg='blue', font=('Arial', 16, 'bold'))
e.grid(row=r, column=c)
e.insert(END, response.data)
#window.resizable(False, False)
#
frame = ttk.Frame(window, padding=10)
#
frame.grid(row=r, column=c)
#
window.mainloop()
</code></pre>
| <python><tkinter><supabase> | 2023-11-27 11:20:48 | 1 | 501 | OHICT |
77,556,413 | 9,974,205 | How can I extract subchains of names from pandas dataframe | <p>Following <a href="https://stackoverflow.com/questions/77555573/how-can-i-extract-chains-of-customers-from-a-pandas-dataframe">this question</a>, I have the following code capable of finding chains of customers in a database per delivery person sortie.</p>
<pre><code>import pandas as pd
data = {
'DateTime': ['01/01/2023 09:00:00', '01/01/2023 09:10:00', '01/01/2023 09:15:00',
'01/01/2023 12:00:00', '01/01/2023 12:00:10', '01/01/2023 12:15:00',
'01/01/2023 15:00:00', '01/01/2023 15:05:10', '01/01/2023 15:15:00',
'01/01/2023 15:30:10', '01/01/2023 15:35:15', '01/01/2023 18:00:00',
'01/01/2023 18:00:00', '01/01/2023 18:00:00', '01/01/2023 18:00:00'
, '01/01/2023 18:00:00'],
'SortieNumber': [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4],
'CustomerName': ['Josh', 'Alice', 'Robert',
'Anna', 'Anna', 'Robert',
'Josh', 'Alice', 'Robert', 'Robert', 'Robert',
'Josh', 'Alice', 'Robert', 'Anna', 'Anna'],
'ProductCode': ['001', '002', '002', '001', '003', '003', '004', '003', '001', '002', '003', '003', '003', '003', '003', '003']
}
df = pd.DataFrame(data)
a=df.sort_values(by=['SortieNumber', 'DateTime'])
a=a.loc[lambda d: d[['SortieNumber', 'CustomerName']].ne(d[['SortieNumber', 'CustomerName']].shift()).any(axis=1)]
a=a.groupby('SortieNumber')['CustomerName'].agg('-'.join).value_counts(normalize=True)
print(a)
</code></pre>
<p>The output of this code is</p>
<pre><code>Josh-Alice-Robert 0.50
Anna-Robert 0.25
Josh-Alice-Robert-Anna 0.25
Name: CustomerName, dtype: float64
</code></pre>
<p>Which can work with a small database, such as the one presented in the sample code above.</p>
<p>However, I would like to modify this code for the subchains to be considered as well, e.g., I would like</p>
<pre><code>Josh-Alice-Robert 0.75
Josh-Alice 0.75
Alice-Robert 0.75
Anna-Robert 0.25
Robert-Anna 0.25
Josh-Alice-Robert-Anna 0.25
Name: CustomerName, dtype: float64
</code></pre>
<p>as output, since I want the chain <code>Josh-Alice-Robert</code> present in <code>Josh-Alice-Robert-Anna</code> to be counted as a <code>Josh-Alice-Robert</code> chain as well.</p>
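<p>To make the definition precise: by "subchain" I mean any contiguous window of at least two customers within a sortie's chain. A pure-Python helper that enumerates them (illustrative sketch, using names from the sample data):</p>

```python
def subchains(chain, min_len=2):
    """All contiguous windows of chain with at least min_len elements."""
    n = len(chain)
    return ['-'.join(chain[i:i + w])
            for w in range(min_len, n + 1)
            for i in range(n - w + 1)]

print(subchains(['Josh', 'Alice', 'Robert', 'Anna']))
```

<p>Counting these per sortie and normalising by the number of sorties would then give the frequencies above (note this also yields windows like <code>Alice-Robert-Anna</code> that I left out of the expected output).</p>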
<p>Can this be done?</p>
| <python><pandas><database><dataframe><series> | 2023-11-27 11:07:38 | 1 | 503 | slow_learner |
77,556,210 | 13,861,187 | Django, How to access request data in include template | <p>In my Django project,
I have some templates that are included in the base template using the {% include %} tag.
How do I access request data in these included templates?
How do I access context variables in these included templates?</p>
<p>I was expecting that request data is accessible in any template.</p>
| <python><django><templates><request><include> | 2023-11-27 10:35:16 | 1 | 888 | Pycm |
77,556,102 | 9,072,753 | How to make chunked streaming PUT request to BaseHTTPRequestHandler not loose bytes? | <p>Consider the following MCVE, that runs a python program that:</p>
<ul>
<li>sets the global variable <code>FIXME</code> during startup from the first command line argument</li>
<li>in one thread starts a HTTP server
<ul>
<li>that listens to chunked PUT requests using <code>select</code> on the <code>self.rfile.fileno()</code></li>
<li>and prints the received data</li>
</ul>
</li>
<li>in the main thread it runs a client for that HTTP server
<ul>
<li>if FIXME is true, sleeps for 100 milliseconds</li>
<li>then sends a chunked PUT request with a single line of data using the <code>requests</code> module</li>
</ul>
</li>
</ul>
<pre><code>#!/usr/bin/env python
import os
import select
import sys
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from typing import Iterator
import requests
ip = "127.0.0.1"
port = 10000
FIXME = True
class MyServer(BaseHTTPRequestHandler):
def do_PUT(self):
inputfd = self.rfile.fileno()
os.set_blocking(inputfd, False)
while 1:
print("SERVER SELECT")
sel = select.select([inputfd], [], [inputfd])
if inputfd in sel[2]:
print(f"SERVER {inputfd} errored")
break
if inputfd not in sel[0]:
print(f"SERVER read file descriptor closed")
break
chunk: bytes = os.read(inputfd, 8192)
print(f"SERVER RECV {len(chunk)} {chunk!r}")
if len(chunk) == 0:
print(f"SERVER len(chunk) == 0")
break
if chunk:
print(f"{chunk!r}")
if b"0\r\n\r\n" in chunk:
# terminating chunk
break
self.send_response(200)
self.end_headers()
def server():
with ThreadingHTTPServer((ip, int(port)), MyServer) as webServer:
webServer.serve_forever()
class Client:
def get_chunks_to_write(self) -> Iterator[bytes]:
if FIXME:
time.sleep(0.1)
print(f"CLIENT write")
yield b"123"
def main(self):
requests.put(f"http://{ip}:{port}", data=self.get_chunks_to_write())
def cli():
threading.Thread(target=server, daemon=True).start()
# Wait for server startup
time.sleep(0.5)
Client().main()
if __name__ == "__main__":
FIXME = int(sys.argv[1])
cli()
</code></pre>
<p>When FIXME is set, the execution is fine:</p>
<pre><code>$ ./test.py 1
SERVER SELECT
CLIENT write
SERVER RECV 13 b'3\r\n123\r\n0\r\n\r\n'
b'3\r\n123\r\n0\r\n\r\n'
127.0.0.1 - - [27/Nov/2023 11:14:01] "PUT / HTTP/1.1" 200 -
</code></pre>
<p>However, when <code>FIXME</code> is false, the execution blocks and the bytes are lost (it takes about 10 tries to reproduce with this MCVE; it happens every time in the real program):</p>
<pre><code>$ ./1.py 0
CLIENT write
SERVER SELECT
</code></pre>
<p>What can I do to remove the extra sleep? What synchronization is missing? What is happening?</p>
<p>I <em>think</em> this happens because the client closes the connection or transfers the bytes before the server can enter <code>select</code>. But I do not know how that could influence anything, as <code>select</code> should still allow processing the remaining bytes on the socket.</p>
<p>I tried searching the net. Setting <code>set_blocking(inputfd, True)</code> and calling <code>os.read(inputfd, 1)</code> also does <em>not</em> read the transferred bytes; I assume they were already read into a buffer by <code>BaseHTTPRequestHandler</code>. How can I access them?</p>
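<p>I can reproduce that buffering effect in isolation: once the buffered reader (the same kind of object as <code>self.rfile</code>) has pulled bytes from the socket, <code>os.read</code> on the raw descriptor no longer sees them (a socketpair stands in for the HTTP connection):</p>

```python
import os
import socket

a, b = socket.socketpair()
reader = a.makefile("rb")            # buffered reader, like self.rfile
b.sendall(b"3\r\n123\r\n0\r\n\r\n")

first = reader.read(1)               # pulls a whole buffer-full from the socket
os.set_blocking(a.fileno(), False)
try:
    rest = os.read(a.fileno(), 8192) # bypasses the reader's buffer entirely
except BlockingIOError:
    rest = b""                       # nothing left at the descriptor level

print(first, rest)                   # the remaining bytes look "lost"...
print(reader.peek())                 # ...but still sit in the reader's buffer
```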
| <python><linux><http><select> | 2023-11-27 10:18:23 | 1 | 145,478 | KamilCuk |
77,555,936 | 252,194 | Error when using checkinstall to create Python 3.12 installer on Debian 11 | <p>I am using checkinstall 1.6.3 to create a .deb installer for Python 3.12 on Debian 11.</p>
<p>I used this description as an outline for doing it and had it working in the past for Python 3.11 on the same system:
<a href="https://stackoverflow.com/questions/63314253/how-to-install-python3-8-using-checkinstall-on-debian-10">How to Install python3.8 using checkinstall on debian 10?</a></p>
<p>I downloaded and unpacked the tarball from python.org, installed dependencies, and ran the following commands:</p>
<pre><code>./configure --enable-optimizations --enable-shared --prefix=/usr
make -j $(nproc)
make -n altinstall > make_altinstall.sh
chmod +x make_altinstall.sh
sudo checkinstall -D ./make_altinstall.sh
</code></pre>
<p>checkinstall creates a lot of output, so I only include from the point where the error appears:</p>
<pre><code>ERROR: Exception:
Traceback (most recent call last):
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/cli/req_command.py", line 248, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/commands/install.py", line 324, in run
session = self.get_default_session(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/cli/req_command.py", line 98, in get_default_session
self._session = self.enter_context(self._build_session(options))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/cli/req_command.py", line 125, in _build_session
session = PipSession(
^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/network/session.py", line 342, in __init__
self.headers["User-Agent"] = user_agent()
^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/network/session.py", line 175, in user_agent
setuptools_dist = get_default_environment().get_distribution("setuptools")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/metadata/importlib/_envs.py", line 188, in get_distribution
return next(matches, None)
^^^^^^^^^^^^^^^^^^^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/metadata/importlib/_envs.py", line 183, in <genexpr>
matches = (
^
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/metadata/base.py", line 612, in iter_all_distributions
for dist in self._iter_distributions():
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/metadata/importlib/_envs.py", line 176, in _iter_distributions
for dist in finder.find_eggs(location):
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/metadata/importlib/_envs.py", line 144, in find_eggs
yield from self._find_eggs_in_dir(location)
File "/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl/pip/_internal/metadata/importlib/_envs.py", line 115, in _find_eggs_in_dir
with os.scandir(location) as it:
^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: ''
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/thomas/Downloads/Python-3.12.0/Lib/ensurepip/__main__.py", line 5, in <module>
sys.exit(ensurepip._main())
^^^^^^^^^^^^^^^^^
File "/home/thomas/Downloads/Python-3.12.0/Lib/ensurepip/__init__.py", line 284, in _main
return _bootstrap(
^^^^^^^^^^^
File "/home/thomas/Downloads/Python-3.12.0/Lib/ensurepip/__init__.py", line 200, in _bootstrap
return _run_pip([*args, *_PACKAGE_NAMES], additional_paths)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/thomas/Downloads/Python-3.12.0/Lib/ensurepip/__init__.py", line 101, in _run_pip
return subprocess.run(cmd, check=True).returncode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/thomas/Downloads/Python-3.12.0/Lib/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/thomas/Downloads/Python-3.12.0/python', '-W', 'ignore::DeprecationWarning', '-c', '\nimport runpy\nimport sys\nsys.path = [\'/tmp/tmpklvy9437/pip-23.2.1-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/tmp/tmpklvy9437\', \'--root\', \'/\', \'--upgrade\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' returned non-zero exit status 2.
**** Installation failed. Aborting package creation.
Cleaning up...OK
Bye.
</code></pre>
<p>It seems like the error may have something to do with pip or setuptools. Any tips on how to fix this issue are much appreciated.</p>
| <python><python-3.x><pip><debian><checkinstall> | 2023-11-27 09:49:56 | 0 | 401 | tompi |
77,555,911 | 2,820,289 | Pandas: group dataframe on variables depending on threshold | <p>I have a dataframe, and I want to group it on some variables <code>a</code>, <code>b</code>, <code>c</code>. If I group the dataset by <code>a</code>, and count the number of rows for all levels of <code>a</code>, if some level of <code>a</code> has fewer rows than some threshold, I do not want to continue grouping on <code>b</code> and <code>c</code> (<strong>for these levels of <code>a</code></strong>). So, I basically want to stop grouping if some group threshold is reached, but continue grouping for the others.</p>
<pre><code>import pandas as pd
import numpy as np
df=pd.DataFrame({'a':[1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2],
'b':[1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
'c':[1, 1, 1, 2, 2, 2, 3, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2]
})
df_desired=pd.DataFrame({'a':[1, 1, 1, 2],
'b':[1, 1, 2, 2],
'c':[1, 2, np.nan, 2],
'group_size':[3, 3, 3, 9]
})
</code></pre>
<p>My attempt thus far is not very elegant, so I wanted to see if this could be done in a better way?</p>
<pre><code>group_cols = ['a', 'b', 'c']
grouped = list()
n_threshold = 3
df_list = list()
df_tmp = df.copy()
for g in group_cols:
    grouped.append(g)
    df_grp = df_tmp.groupby(grouped).agg(n_rows=(g, 'size')).reset_index()
    df_lt = df_grp.loc[df_grp.n_rows <= n_threshold, ]
    df_list.append(df_lt)
    df_gt = df_grp.loc[df_grp.n_rows > n_threshold, ]
    df_tmp = pd.merge(df_tmp, df_gt[grouped], on=grouped, how='inner')

# append the still-too-large groups only once, after the deepest level,
# otherwise the coarser levels show up as duplicates in the result
df_list.append(df_gt)
pd.concat(df_list)
</code></pre>
| <python><pandas><group-by> | 2023-11-27 09:46:29 | 1 | 685 | Helen |
77,555,841 | 9,564,152 | How to retry failed tasks in celery chunks or celery starmap | <p>I would like to know how to configure these tasks correctly so that the failed tasks can be retried. One thing I have noted is that if one of the tasks in the chunk fails, the whole chunk (which I assume is executed by celery.starmap) fails.</p>
<p>Here are some related questions I have checked:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/30653216/retry-behavior-for-celery-tasks-within-a-chunk" rel="nofollow noreferrer">Retry behavior for celery tasks within a chunk</a></li>
<li><a href="https://stackoverflow.com/questions/65217268/continue-when-task-in-celery-chunks-fails" rel="nofollow noreferrer">Continue when task in celery chunks fails</a></li>
</ul>
<pre><code>@celery_app.task
def get_item_asteroids_callback(parent_item_id):
pass
@celery_app.task(
ignore_result=False,
autoretry_for=(requests.exceptions.ReadTimeout, TimeoutError, urllib3.exceptions.ReadTimeoutError, InterfaceError),
retry_kwargs={'max_retries': 3},
retry_backoff=True
)
def get_item_asteroid(work_id):
"""
Does some stuff that can raise some exceptions.
"""
raise requests.exceptions.ReadTimeout('Test Exception')
@celery_app.task(ignore_result=False)
def get_item_asteroids(parent_item_id):
item_ids = [1,2,3.......]
item_ids = [(item_id,) for item_id in item_ids]
chunk_size = min(len(item_ids), 128)
tasks = [get_item_asteroid.chunks(item_ids, chunk_size).group()]
chord(tasks)(get_item_asteroids_callback.si(parent_item_id))
</code></pre>
<p>None of the failed tasks is retried (The Retried count is 0)
<a href="https://i.sstatic.net/b8QdM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b8QdM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/cyLnw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cyLnw.png" alt="enter image description here" /></a></p>
| <python><django><celery> | 2023-11-27 09:36:31 | 0 | 583 | Eric O. |
77,555,742 | 5,873,325 | Streamlit : Dynamically open multiple pdfs files in different tabs | <p>I have a Streamlit application where I can upload some pdf files and then select the files I want to work on from a select box and they will be displayed in the main panel. Now instead of displaying all the selected pdfs one after the other, I wonder if it’s possible to dynamically open every selected file in a new tab with the file name as its title. Of course, if I un-select the file, the tab should disappear.</p>
<p>Here's what I have tried so far, but it's not working as I want. Any suggestions ?</p>
<pre><code>import streamlit as st
import os
import fitz  # PyMuPDF (PyPDF2's PdfReader has no page_count/get_pixmap as used below)
from io import BytesIO
# Function to read PDF and return content
def read_pdf(file_path):
    # Replace with your PDF reading logic
    pdf = fitz.open(file_path)
    return pdf
# Function to display PDF content
def display_pdf(pdf):
num_pages = pdf.page_count
# Display navigation input for selecting a specific page
page_number = st.number_input(
"Enter page number", value=1, min_value=1, max_value=num_pages
)
# Display an image for the selected page
image_bytes = pdf[page_number - 1].get_pixmap().tobytes()
st.image(image_bytes, caption=f"Page {page_number} of {num_pages}")
# Main Streamlit app code
st.title("PDF Viewer App")
# Get uploaded files
selected_files = st.file_uploader("Upload PDF files", type=["pdf"], accept_multiple_files=True)
# List to store content of all pages for each file
all_files_content = []
# Dictionary to store selected file content
selected_file_content = {}
# Iterate over uploaded files
for uploaded_file in selected_files:
file_content = uploaded_file.read()
temp_file_path = f"./temp/{uploaded_file.name}"
os.makedirs(os.path.dirname(temp_file_path), exist_ok=True)
with open(temp_file_path, "wb") as temp_file:
temp_file.write(file_content)
if uploaded_file.type == "application/pdf":
# Read PDF and store content
pdf = read_pdf(temp_file_path)
all_files_content.append(pdf)
# Display PDF content
selected_file_content[uploaded_file.name] = pdf
# Cleanup: Remove the temporary file
os.remove(temp_file_path)
# Create tabs dynamically for each file
with st.tabs(list(selected_file_content.keys())):
for file_name, pdf in selected_file_content.items():
display_pdf(pdf)
</code></pre>
| <python><file-upload><tabs><streamlit> | 2023-11-27 09:20:32 | 2 | 640 | Mejdi Dallel |
77,555,698 | 9,846,358 | Cannot click the <svg></svg> and input the value | <p>I cannot click the <code><svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">...</svg></code>, input the value "2023-10-27" into <code><input class="opacity-0 absolute top-0 left-0 w-full h-full" type="date" max="2023-11-27" value="2023-10-27"></code>, and then click "Apply".</p>
<p>Four steps I want to automate with Selenium:<br>
1) click the <code><svg></svg></code> element<br>
2) input the date value<br>
3) click "Apply"<br>
4) print the page source</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Setup the WebDriver
driver = webdriver.Firefox()
# Open the page
driver.get("https://www.investing.com/equities/tencent-holdings-hk-historical-data")
# Wait for the SVG element and click it
svg_element = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, "//*[@id='__next']/div[2]/div[2]/div[2]/div[1]/div[2]/div[2]/div[2]/div[2]/span/svg/g/path")))
svg_element.click()
# Find the date input, clear it, and set a new value
date_input = WebDriverWait(driver, 10).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "input[type='date']")))
date_input.clear()
date_input.send_keys("2020-01-01")
# Find and click the "Apply" button
apply_button = WebDriverWait(driver, 10).until(
EC.element_to_be_clickable((By.XPATH, "//button[text()='Apply']")))
apply_button.click()
# print page_source
page_source = driver.page_source.encode('utf-8')
page_source = page_source.decode('utf-8')
print(page_source)
# Remember to close the driver
driver.quit()
</code></pre>
<p>I cannot get any result after running the code. Please help with this, thanks!</p>
<p>Error Log:</p>
<pre><code>---------------------------------------------------------------------------
TimeoutException Traceback (most recent call last)
<ipython-input-3-325ef3fa9d48> in <module>
17
18 # Wait for the SVG element and click it
---> 19 svg_element = WebDriverWait(driver, 10).until(
20 EC.element_to_be_clickable((By.XPATH, "//*[@id='__next']/div[2]/div[2]/div[2]/div[1]/div[2]/div[2]/div[2]/div[2]/span/svg/g/path")))
21 svg_element.click()
~/.local/lib/python3.8/site-packages/selenium/webdriver/support/wait.py in until(self, method, message)
78 if time.time() > end_time:
79 break
---> 80 raise TimeoutException(message, screen, stacktrace)
81
82 def until_not(self, method, message=''):
TimeoutException: Message:
</code></pre>
| <python><css><selenium-webdriver><xpath> | 2023-11-27 09:14:21 | 1 | 797 | Mary |
77,555,606 | 2,005,869 | How best to create an intake catalog from a collection of CSV files? | <p>I'm trying to figure out the best way to create an intake catalog from a collection of CSV files, where I want each CSV file to be an individual <code>source</code>.</p>
<p>I can create a <code>catalog.yml</code> for one CSV by doing:</p>
<pre class="lang-py prettyprint-override"><code>import intake
source1 = intake.open_csv('states_1.csv')
source1.name = 'states1'
with open('catalog.yml', 'w') as f:
f.write(str(source1.yaml()))
</code></pre>
<p>which produces the valid:</p>
<pre class="lang-yaml prettyprint-override"><code>sources:
states1:
args:
urlpath: states_1.csv
description: ''
driver: intake.source.csv.CSVSource
metadata: {}
</code></pre>
<p>but if I do</p>
<pre class="lang-py prettyprint-override"><code>import intake
source1 = intake.open_csv('states_1.csv')
source1.name = 'states1'
source2 = intake.open_csv('states_2.csv')
source2.name = 'states2'
with open('catalog.yml', 'w') as f:
f.write(str(source1.yaml()))
f.write(str(source2.yaml()))
</code></pre>
<p>of course this fails because the catalog has a duplicate <code>sources</code> entry:</p>
<pre class="lang-yaml prettyprint-override"><code>sources:
states1:
args:
urlpath: states_1.csv
description: ''
driver: intake.source.csv.CSVSource
metadata: {}
sources:
states2:
args:
urlpath: states_2.csv
description: ''
driver: intake.source.csv.CSVSource
metadata: {}
</code></pre>
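<p>The obvious workaround is to strip the duplicate <code>sources:</code> header at the string level before writing, but that feels fragile (sketch with the YAML inlined as plain strings):</p>

```python
cat1 = """\
sources:
  states1:
    args:
      urlpath: states_1.csv
    driver: intake.source.csv.CSVSource
"""
cat2 = cat1.replace("states1", "states2").replace("states_1", "states_2")

# drop the second top-level "sources:" line and concatenate the entries
merged = cat1 + cat2.split("sources:\n", 1)[1]
print(merged)
```

<p>This produces a single <code>sources</code> mapping, but I would rather use a supported API than splice YAML by hand.</p>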
<p>I'm guessing there must be a better way to go about this, like perhaps by instantiating a catalog object, adding source objects and then writing the catalog? But I couldn't find the methods to accomplish this.</p>
<p>What is the best practice for accomplishing this?</p>
| <python><csv><intake> | 2023-11-27 08:57:18 | 1 | 16,655 | Rich Signell |
77,555,573 | 9,974,205 | How can I extract chains of customers from a Pandas dataframe? | <p>I have a pandas dataframe with information about the deliveries done by a delivery person. In this pandas dataframe there are four columns. The first one is <code>DateTime</code>, the second one is <code>SortieNumber</code>, the third one is <code>CustomerName</code> and the fourth one is <code>ProductCode</code>.</p>
<p>I want to study this pandas dataframe and find chains within it. I want to find out if this delivery person delivers to the same customers in the same order in each sortie. I don’t care about the ordered products.
The first rows of the data frame are something like this:</p>
<pre><code>DateTime SortieNumber CustomerName ProductCode
01/01/2023 09:00:00 1 Josh 001
01/01/2023 09:10:00 1 Alice 002
01/01/2023 09:15:00 1 Robert 002
01/01/2023 12:00:00 2 Anna 001
01/01/2023 12:00:10 2 Anna 003
01/01/2023 12:15:00 2 Robert 003
01/01/2023 15:00:00 3 Josh 004
01/01/2023 15:05:10 3 Alice 003
01/01/2023 15:15:00 3 Robert 001
01/01/2023 15:30:10 3 Robert 002
01/01/2023 15:35:15 3 Robert 003
</code></pre>
<p>From this data, I want to say that the chain <code>Josh-Alice-Robert</code> happens in 2 of the 3 sorties, <code>Anna-Robert</code> happens in one of the three sorties and so on for the remaining rows.</p>
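<p>One detail worth spelling out: consecutive deliveries to the same customer (Anna appears twice in a row in sortie 2) should collapse into a single element of the chain, e.g. with <code>itertools.groupby</code>:</p>

```python
from itertools import groupby

sortie_2 = ['Anna', 'Anna', 'Robert']   # consecutive repeats collapse
chain = '-'.join(name for name, _ in groupby(sortie_2))
print(chain)  # Anna-Robert
```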
<p>Can this be done?</p>
| <python><pandas><database><dataframe><series> | 2023-11-27 08:52:09 | 2 | 503 | slow_learner |
77,555,529 | 3,484,477 | Where can one find the open source Python implementation of `cv2.warpAffine`? | <p>Where in the OpenCV sources (<code>https://github.com/opencv/opencv</code>) is the implementation behind the Python-exposed modules, for example for <code>cv2.warpAffine</code>?</p>
| <python><opencv> | 2023-11-27 08:44:11 | 1 | 1,643 | Meysam Sadeghi |
77,555,527 | 21,049,944 | How to effectively create duplicate rows in polars? | <p>I am trying to transfer my pandas code into polars, but I am having difficulties duplicating rows (I need this for my PyVista visualizations).
In pandas I did the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({ "key": [1, 2, 3], "value": [4, 5, 6] })
df["key"] = df["key"].apply(lambda x: 2*[x])
df = df.explode("key",
ignore_index=False
)
</code></pre>
<p>In polars I tried</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({ "key": [1, 2, 3], "value": [4, 5, 6] })
df.with_columns(
(pl.col("key").map_elements(lambda x: [x]*2))
.explode()
)
</code></pre>
<p>but it raises:</p>
<blockquote>
<p>ShapeError: unable to add a column of length 6 to a DataFrame of height 3</p>
</blockquote>
<p>I also tried to avoid <code>map_elements</code> using</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
(pl.col("key").cast(pl.List(float))*2)
.explode()
)
</code></pre>
<p>but it only raises:</p>
<blockquote>
<p>InvalidOperationError: can only do arithmetic operations on Series of the same size; got 3 and 1</p>
</blockquote>
<p>Any idea how to do this?</p>
| <python><dataframe><python-polars> | 2023-11-27 08:43:28 | 3 | 388 | Galedon |
77,555,492 | 1,113,579 | Python: True fire-and-forget POST request without waiting for response even in separate thread | <p>In Python 3.10, I want to send some data to an API Server using the POST method without waiting for response. I have seen many solutions which use a separate thread to make the POST request, but my requirement is not to wait for response <strong>even in the separate thread</strong>. I want a true fire-and-forget API where I am able to send the required data to the API and close the connection immediately. Let the API take its own time to process, and let no other resource on the client side get used in processing the response (not even in separate thread).</p>
<p>Is this possible?</p>
<p>Should I use a combination of separate thread and timeout (long enough just to send the data), or are there any caveats in this approach where some requests may not reach the server? Are there any better ways without using a sub-process or threading, either in my code or in any library's code?</p>
<p>I am okay with using a lower-level library if it allows just sending the data without handling or waiting for the response.</p>
| <python><multithreading><python-requests><fire-and-forget> | 2023-11-27 08:37:46 | 1 | 1,276 | AllSolutions |
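A lower-level sketch for the question above, dropping to a raw socket so the process never issues a `recv()`: the call blocks only for the TCP handshake and the send, then closes. The function name and headers are illustrative, and note the caveat that `sendall` only guarantees the bytes reached the local socket buffer; a plain-HTTP endpoint is assumed (TLS would need `ssl` wrapping).

```python
import socket

def fire_and_forget_post(host: str, port: int, path: str, body: bytes) -> None:
    """Write a raw HTTP POST and close the socket without reading the reply."""
    request = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/octet-stream\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii") + body
    # create_connection blocks only for the TCP handshake and the send;
    # no recv() is ever issued, so nothing waits on the response.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
```

Because the response is never read, any server-side error goes unnoticed, which is the inherent trade-off of fire-and-forget.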
77,555,312 | 2,601,515 | Langchain / ChromaDB: Why does VectorStore return so many duplicates? | <pre><code>import os
from langchain.llms import OpenAI
import bs4
import langchain
from langchain import hub
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
os.environ["OPENAI_API_KEY"] = "KEY"
loader = UnstructuredFileLoader(
'path_to_file'
)
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=200, add_start_index=True
)
all_splits = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 6})
retrieved_docs = retriever.get_relevant_documents(
"What is X?"
)
</code></pre>
<p>This returns:</p>
<pre><code>[Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}),
Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}),
Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}),
Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}),
Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932}),
Document(page_content="...", metadata={'source': 'path_to_text', 'start_index': 16932})]
</code></pre>
<p>These all appear to be the same document.</p>
<p>When I first ran this code in Google Colab/Jupyter Notebook, it returned different documents...as I ran it more, it started returning the same documents. Makes me feel like this is a database issue, where the same entry is being inserted into the database with each run.</p>
<p>How do I return 6 different unique documents?</p>
| <python><openai-api><langchain><py-langchain><chromadb> | 2023-11-27 08:01:32 | 3 | 401 | narcissa |
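The symptom in the question above is consistent with re-running `Chroma.from_documents` against the same persisted collection, inserting identical chunks each time; the usual fix is to create the collection once (or reset it) rather than on every run. As a stopgap, retrieved results can also be de-duplicated client-side. A sketch with an illustrative stand-in for langchain's `Document` class (for a self-contained example only):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in for langchain's Document, for illustration only."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def dedupe(docs):
    """Keep only the first document for each (source, start_index, text) key."""
    seen = set()
    unique = []
    for doc in docs:
        key = (doc.metadata.get("source"),
               doc.metadata.get("start_index"),
               doc.page_content)
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique
```

The same `dedupe` helper would work on the real langchain `Document` objects, since it only touches `page_content` and `metadata`.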
77,555,284 | 16,405,935 | Read two sheets and concat | <p>I have one Excel workbook with two sheets, represented by the DataFrames below:</p>
<pre><code>import pandas as pd
sheet_1 = pd.DataFrame({'KIFRS BS / IS': ['', 'CCY Type :', 'COA Code', '11000000000'],
'Unnamed: 1': ['', 'EQU', 'COA English Name', 'ABC'],
'Unnamed: 2': ['', '', 'Base Currency Exchange Amt.', 1000],
'Unnamed: 3': ['26-11-2023', '', '', ''],
'Unnamed: 4': ['', 'Hanoi', '', '']})
sheet_2 = pd.DataFrame({'KIFRS BS / IS': ['', 'CCY Type :', 'COA Code', '11000000000'],
'Unnamed: 1': ['', 'EQU', 'COA English Name', 'ABC'],
'Unnamed: 2': ['', '', 'Base Currency Exchange Amt.', 1200],
'Unnamed: 3': ['26-11-2023', '', '', ''],
'Unnamed: 4': ['', 'Hochiminh', '', '']})
</code></pre>
<p>and data will look like below:</p>
<pre><code> KIFRS BS / IS Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4
0 26-11-2023
1 CCY Type : EQU Hanoi
2 COA Code COA English Name Base Currency Exchange Amt.
3 11000000000 ABC 1000
</code></pre>
<p>Below is my expected output:</p>
<pre><code>output = pd.DataFrame({'COA Code': ['11000000000', '11000000000'],
'COA English Name': ['ABC', 'ABC'],
'Base Currency Exchange Amt.': [1000, 1200],
'Base Date': ['26-11-2023', '26-11-2023'],
'Branch': ['Hanoi', 'Hochiminh']})
COA Code COA English Name Base Currency Exchange Amt. Base Date Branch
0 11000000000 ABC 1000 26-11-2023 Hanoi
1 11000000000 ABC 1200 26-11-2023 Hochiminh
</code></pre>
<p>To make expected Output, I did something as below:</p>
<pre><code>base_date_1 = sheet_1.iloc[0,3]
branch_1 = sheet_1.iloc[1,4]
sheet_1.columns = sheet_1.iloc[2]
sheet_1 = sheet_1[3:]
sheet_1['Base Date'] = base_date_1
sheet_1['Branch'] = branch_1
base_date_2 = sheet_2.iloc[0,3]
branch_2 = sheet_2.iloc[1,4]
sheet_2.columns = sheet_2.iloc[2]
sheet_2 = sheet_2[3:]
sheet_2['Base Date'] = base_date_2
sheet_2['Branch'] = branch_2
output_2 = pd.concat([sheet_1, sheet_2], axis=0, ignore_index=True)
output_2
</code></pre>
<p>It worked, but if there are more sheets I would need to define everything manually. I tried the code below, but it raised <code>AttributeError: 'dict' object has no attribute 'iloc'</code>. What should I do in this case? Thank you.</p>
<pre><code>import pandas as pd
sheet_name = ['sheet_1', 'sheet_2']
for x in sheet_name:
df = pd.read_excel('filename.xlsx', sheet_name = sheet_name)
base_date_1 = df.iloc[0,3]
branch_1 = df.iloc[1,4]
df.columns = df.iloc[2]
df = df[3:]
df['Base Date'] = base_date_1
df['Branch'] = branch_1
</code></pre>
| <python><pandas> | 2023-11-27 07:55:15 | 1 | 1,793 | hoa tran |
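The `AttributeError` in the question above arises because `pd.read_excel` returns a dict of DataFrames (keyed by sheet name) when `sheet_name` is a list or `None`. A sketch that wraps the question's own reshaping steps in a function and loops over that dict's values (function names are illustrative):

```python
import pandas as pd

def tidy_sheet(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the question's reshaping steps to one raw sheet."""
    df = df.copy()
    base_date = df.iloc[0, 3]          # e.g. '26-11-2023'
    branch = df.iloc[1, 4]             # e.g. 'Hanoi'
    df.columns = df.iloc[2]            # promote the third row to the header
    df = df.iloc[3:].copy()
    df["Base Date"] = base_date
    df["Branch"] = branch
    return df

def tidy_workbook(sheets: dict) -> pd.DataFrame:
    """sheets is the dict that pd.read_excel(..., sheet_name=None) returns."""
    return pd.concat([tidy_sheet(s) for s in sheets.values()], ignore_index=True)

# Usage with a real file (untested sketch):
# sheets = pd.read_excel("filename.xlsx", sheet_name=None)  # dict of DataFrames
# output = tidy_workbook(sheets)
```

With `sheet_name=None` every sheet is read, so new sheets are picked up without changing the code.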
77,554,773 | 2,251,058 | Send Image File to slack using http request sending blank image | <p>I am trying to send a graph to a Slack channel using an HTTP request, but I only see a blank PNG.
If I use the Slack WebClient to send the same graph it succeeds, but not with an HTTP request.</p>
<pre><code>data_anomalies_trend_pd = data_anomalies_trend.toPandas()
plt.figure(figsize=(10, 6))
plt.plot(data_anomalies_trend_pd['date'], data_anomalies_trend_pd['visitors'], marker='o', linestyle='-', color='b')
plt.title('Visitor Trend Over Time')
plt.xlabel('Date')
plt.ylabel('Visitors')
plt.xticks(rotation=45) # Rotate x-axis labels for better readability
plt.tight_layout()
plt.show()
plt.savefig("visitor_trend.png")
with open("visitor_trend.png", "rb") as image_file:
payload = {
'channels': channel_id,
'filetype': 'png',
'filename': 'visitor_trend.png',
'title': 'visitor trend'
}
headers = {
'Authorization': f'Bearer {slack_bot_token}'
}
response = requests.post(
url='https://slack.com/api/files.upload',
data=payload,
headers=headers,
files={'file': image_file}
)
</code></pre>
<p>I need to use an HTTP request instead of the WebClient, since I have restrictions on installing the slack package.</p>
| <python><python-3.x><slack> | 2023-11-27 05:58:26 | 1 | 3,287 | Akshay Hazari |
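A Slack-independent culprit worth ruling out in the question above: `plt.savefig` is called after `plt.show()`. In a non-interactive script, `show()` renders and then discards the current figure, so the later `savefig` commonly writes an empty canvas. Saving before showing (or holding an explicit `fig` handle) avoids this; a minimal sketch with illustrative data:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot([1, 2, 3], [10, 30, 20], marker="o", linestyle="-", color="b")
ax.set_title("Visitor Trend Over Time")
fig.tight_layout()
fig.savefig("visitor_trend.png")  # save BEFORE any plt.show()
plt.close(fig)
```

If the saved file is non-blank locally but blank on Slack, only then is the upload itself the problem.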
77,554,764 | 14,534,480 | Replace values in pandas column with lists by values in other dataframe | <p>I have a dataframe with a column that contains lists of object ids; the aliases for those objects are stored in another dataframe. I want to replace each <code>id</code> in the list column of the first dataframe with the corresponding <code>alias</code> from the second.</p>
<p>First:</p>
<pre><code>df = pd.DataFrame({'objects': ['', '', '']})
df.objects[0] = []
df.objects[1] = ['2309b0ec-047f-43ef',
'a6150d80-49fa-11e9',
'c103f8b0-4ac0-11e9',
'e8c9db30-4ac0-11e9']
df.objects[2] = ['b14424df-1a46-4e30']
</code></pre>
<pre><code>print(df)
objects
0 []
1 [2309b0ec-047f-43ef, a6150d8...
2 [b14424df-1a46-4e30]
</code></pre>
<p>Second:</p>
<pre><code>objects = pd.DataFrame({'id': ['2309b0ec-047f-43ef',
'a6150d80-49fa-11e9',
'c103f8b0-4ac0-11e9',
'e8c9db30-4ac0-11e9',
'b14424df-1a46-4e30'],
'alias': ['first', 'second', 'third', 'fourth', 'fifth']})
</code></pre>
<pre><code>print(objects)
id alias
0 2309b0ec-047f-43ef first
1 a6150d80-49fa-11e9 second
2 c103f8b0-4ac0-11e9 third
3 e8c9db30-4ac0-11e9 fourth
4 b14424df-1a46-4e30 fifth
</code></pre>
<p>Desire output:</p>
<pre><code> objects
0 []
1 [first, second, third, fourth]
2 [fifth]
</code></pre>
| <python><pandas> | 2023-11-27 05:55:08 | 2 | 377 | Kirill Kondratenko |
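One way to approach the question above is to build a plain `id -> alias` dict from the second dataframe and map it over each list with `apply`, a sketch using the question's own data:

```python
import pandas as pd

df = pd.DataFrame({"objects": [[],
                               ["2309b0ec-047f-43ef", "a6150d80-49fa-11e9",
                                "c103f8b0-4ac0-11e9", "e8c9db30-4ac0-11e9"],
                               ["b14424df-1a46-4e30"]]})
objects = pd.DataFrame({"id": ["2309b0ec-047f-43ef", "a6150d80-49fa-11e9",
                               "c103f8b0-4ac0-11e9", "e8c9db30-4ac0-11e9",
                               "b14424df-1a46-4e30"],
                        "alias": ["first", "second", "third", "fourth", "fifth"]})

id_to_alias = objects.set_index("id")["alias"].to_dict()
# .get(i, i) keeps any id that has no alias instead of raising a KeyError
df["objects"] = df["objects"].apply(lambda ids: [id_to_alias.get(i, i) for i in ids])
print(df)
```

Dict lookups inside the comprehension keep this fast even for long lists, since only one index build is paid up front.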
77,554,491 | 11,748,924 | Catching MessageError exception from userdata.get() in Google Colaboratory secrets | <p>I'm trying to use the userdata.get() function in Google Colab to retrieve a secret, and I want to handle the case where the secret doesn't exist. Here's my code:</p>
<pre><code>from google.colab import userdata
try:
userdata.get('mySecret')
except MessageError:
print("The mySecret doesn't exists")
</code></pre>
<p>But I got this NameError:</p>
<pre><code>---------------------------------------------------------------------------
MessageError Traceback (most recent call last)
<ipython-input-7-32b2ab9e6be3> in <cell line: 2>()
2 try:
----> 3 userdata.get('mySecret')
4 except MessageError:
3 frames
MessageError: Error: Secret mySecret does not exist.
During handling of the above exception, another exception occurred:
NameError Traceback (most recent call last)
<ipython-input-7-32b2ab9e6be3> in <cell line: 2>()
2 try:
3 userdata.get('mySecret')
----> 4 except MessageError:
5 print("The mySecret doesn't exists")
NameError: name 'MessageError' is not defined
</code></pre>
<p>However, I'm encountering a NameError instead of catching the MessageError. The error message suggests that the secret doesn't exist, but I'm having trouble handling it properly.</p>
<p>Could someone please help me understand what I'm doing wrong? How can I catch the MessageError and print the desired message when the secret is not found?</p>
<p>I expect it to catch the error and print:</p>
<pre><code>The mySecret doesn't exists
</code></pre>
| <python><google-colaboratory> | 2023-11-27 04:10:06 | 1 | 1,252 | Muhammad Ikhwan Perwira |
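The `NameError` in the question above happens because an `except` clause can only name an exception class that exists in the current namespace; `MessageError` must be imported before it can be caught (recent Colab builds also reportedly expose a dedicated `userdata.SecretNotFoundError`). A generic, self-contained sketch of the pattern, with stand-in names (the `MessageError` class and `SECRETS` dict here are illustrative, not Colab's real objects):

```python
class MessageError(Exception):
    """Stand-in for Colab's MessageError, purely for illustration."""

SECRETS = {"otherSecret": "value"}  # illustrative secret store

def get_secret(name):
    if name not in SECRETS:
        raise MessageError(f"Error: Secret {name} does not exist.")
    return SECRETS[name]

def get_secret_or_none(name):
    # The except clause names a class that IS defined in this namespace,
    # so no NameError occurs while handling the original exception.
    try:
        return get_secret(name)
    except MessageError:
        print(f"The {name} doesn't exist")
        return None
```

In Colab itself, the equivalent step is importing the real exception class (or catching a broader base class) before the `try`/`except` block.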
77,554,393 | 15,781,591 | How to make line segments transparent based on value using seaborn in python | <p>I am using the following example code presented by user "dnswlt" for creating a multiple lineplot using seaborn in python:</p>
<pre><code>num_rows = 20
years = list(range(1990, 1990 + num_rows))
data_preproc = pd.DataFrame({
'Year': years,
'A': np.random.randn(num_rows).cumsum(),
'B': np.random.randn(num_rows).cumsum(),
'C': np.random.randn(num_rows).cumsum(),
'D': np.random.randn(num_rows).cumsum()})
sns.lineplot(x='Year', y='value', hue='variable',
data=pd.melt(data_preproc, ['Year']))
</code></pre>
<p>This produces the following multiple lineplot, with a different colored solid line for each category:
<a href="https://i.sstatic.net/KQ68R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KQ68R.png" alt="enter image description here" /></a></p>
<p>What I am trying to do is to make each value in the lineplot transparent based on whether it falls outside of the range -2 to 2, with the hues preserved. For example, if the red D line ever falls outside of this range, that part of the line would turn a transparent pink. Is there a simple and straightforward way to accomplish this using seaborn, such as a specific conditional styling argument?</p>
| <python><plot><seaborn> | 2023-11-27 03:30:26 | 2 | 641 | LostinSpatialAnalysis |
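Seaborn's `lineplot` has no per-segment alpha argument as far as I know, but the effect in the question above can be sketched in plain matplotlib with a `LineCollection`, giving every consecutive segment its own RGBA color and fading segments whose endpoints leave the range (function names are illustrative):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from matplotlib.colors import to_rgba

def segment_colors(y, color, lo=-2.0, hi=2.0, faded_alpha=0.25):
    """One RGBA per consecutive segment; faded if either endpoint leaves [lo, hi]."""
    y = np.asarray(y, dtype=float)
    outside = (y[:-1] < lo) | (y[:-1] > hi) | (y[1:] < lo) | (y[1:] > hi)
    rgba = np.tile(to_rgba(color), (len(y) - 1, 1))
    rgba[outside, 3] = faded_alpha
    return rgba

def plot_faded(ax, x, y, color, **kwargs):
    pts = np.column_stack([np.asarray(x, float), np.asarray(y, float)])
    segments = np.stack([pts[:-1], pts[1:]], axis=1)
    lc = LineCollection(segments, colors=segment_colors(y, color, **kwargs))
    ax.add_collection(lc)
    ax.autoscale()
    return lc
```

Calling `plot_faded` once per variable (with the hue seaborn would have used) reproduces the multi-line plot with per-segment transparency.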
77,554,214 | 2,307,441 | Transpose df columns based on the column names in Pandas | <p>I have a pandas dataframe as below and I would like to transpose the df based on the column names.</p>
<p>I want to transpose all the columns that have the suffixes _1, _2, _3 & _4. I have 100+ values in my df. Below is the example data.</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
data = {
# One Id Column
'id':[1],
#Other columns
'c1':['1c'],
'c2':['2c'],
'c3':['3c'],
#IC indicator
'oc_1':[1],
'oc_2':[0],
'oc_3':[1],
'oc_4':[1],
#GC Indicator
'gc_1':['T1'],
'gc_2':['T2'],
'gc_3':['T3'],
'gc_4':['T4'],
#PF Indicator
'pf_1':['PF1'],
'pf_2':['PF2'],
'pf_3':['PF3'],
'pf_4':['PF4'],
#Values
'V1_1':[11],
'V1_2':[12],
'V1_3':[13],
'V1_4':[14],
'S1_1':[21],
'S1_2':[22],
'S1_3':[23],
'S1_4':[24]
}
df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.sstatic.net/KT9Km.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KT9Km.png" alt="enter image description here" /></a></p>
<p>I need to transpose this df as below output</p>
<p><a href="https://i.sstatic.net/cyhhh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cyhhh.png" alt="enter image description here" /></a></p>
<p>I tried below code:</p>
<pre><code>standard_cols = ['id','c1','c2','c3']
value_cols = ['V1_1','V1_2','V1_3','V1_4','S1_1','S1_2','S1_3','S1_4']
result_cols = standard_cols+['OC','GC','PF','Var','value']
melted_df = pd.melt(df, id_vars=standard_cols + ['oc_1','oc_2','oc_3','oc_4','gc_1','gc_2','gc_3','gc_4','pf_1','pf_2','pf_3','pf_4'],
value_vars=value_cols,var_name='Var',value_name='value')
print(melted_df)
</code></pre>
| <python><pandas><dataframe> | 2023-11-27 02:16:30 | 2 | 1,075 | Roshan |
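For suffixed column groups like the ones in the question above, `pd.wide_to_long` is often simpler than `melt`: it unpivots every stub/suffix pair in one call. A hedged sketch assuming the suffixes are consistently `_1` through `_4` (helper name and parameter defaults are illustrative):

```python
import pandas as pd

def to_long(df, stubs=("oc", "gc", "pf", "V1", "S1"), ids=("id", "c1", "c2", "c3")):
    """Unpivot columns like oc_1..oc_4 into one row per suffix."""
    long_df = pd.wide_to_long(df, stubnames=list(stubs), i=list(ids),
                              j="Var", sep="_")
    return long_df.reset_index()
```

Each stub becomes one output column (`oc`, `gc`, `pf`, `V1`, `S1`) and `Var` carries the suffix, matching the layout in the screenshot.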
77,553,886 | 8,755,792 | PyTorch distributed from two ec2 instances hangs | <pre><code># env_vars.sh on rank 0 machine
#!/bin/bash
export MASTER_PORT=23456
export MASTER_ADDR=... # same as below, private ip of machine 0
export WORLD_SIZE=2
export GLOO_SOCKET_IFNAME=enX0
export RANK=0
# env_vars.sh on rank 1 machine
#!/bin/bash
export MASTER_PORT=23456
export MASTER_ADDR=... # same as above
export WORLD_SIZE=2
export GLOO_SOCKET_IFNAME=enX0
export RANK=1
# on rank 0 machine
$ ifconfig
enX0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet ... netmask 255.255.240.0 broadcast ...
inet6 ... prefixlen 64 scopeid 0x20<link>
ether ... txqueuelen 1000 (Ethernet)
RX packets 543929 bytes 577263126 (550.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 203942 bytes 21681067 (20.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 12 bytes 1020 (1020.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12 bytes 1020 (1020.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
$ conda activate pytorch_env
$ . env_vars.sh
$ python
>>> import torch.distributed
>>> torch.distributed.init_process_group('gloo')
# Do the same on rank 0 machine
</code></pre>
<p>After 30 seconds or so, machine 0 outputs the following, and machine 1 just continues to hang.</p>
<pre><code>[E ProcessGroupGloo.cpp:138] Gloo connectFullMesh failed with [/opt/conda/conda-bld/pytorch_1699449045860/work/third_party/gloo/gloo/transport/tcp/pair.cc:144] no error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/miniconda3/envs/pytorch_env/lib/python3.9/site-packages/torch/distributed/c10d_logger.py", line 74, in wrapper
func_return = func(*args, **kwargs)
File "/home/ec2-user/miniconda3/envs/pytorch_env/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1155, in init_process_group
default_pg, _ = _new_process_group_helper(
File "/home/ec2-user/miniconda3/envs/pytorch_env/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1293, in _new_process_group_helper
backend_class = ProcessGroupGloo(backend_prefix_store, group_rank, group_size, timeout=timeout)
RuntimeError: Gloo connectFullMesh failed with [/opt/conda/conda-bld/pytorch_1699449045860/work/third_party/gloo/gloo/transport/tcp/pair.cc:144] no error
</code></pre>
<p>I can connect to the rank 0 machine from the rank 1 machine:</p>
<pre><code># rank 0 machine
nc -lk 23456
# rank 1 machine
telnet … 23456 # use private ip address of rank 0 machine
Trying ...
Connected to …
Escape character is '^]'.
ping
# rank 0 machine
ping
</code></pre>
<p>If I run all the same commands from two shells of the rank 0 machine (modifying one of them with <code>export RANK=1</code>), <code>init_process_group</code> completes execution as expected.</p>
<p>A user posted <a href="https://discuss.pytorch.org/t/strange-behaviour-of-gloo-tcp-transport/66651/22?page=2" rel="nofollow noreferrer">here</a> about the same error, which they said they solved by resetting <code>GLOO_SOCKET_IFNAME</code> and <code>TP_SOCKET_IFNAME</code>. Trying to do a similar thing on my machine didn't succeed.</p>
| <python><pytorch><distributed-computing> | 2023-11-26 23:32:25 | 1 | 1,246 | Eric Auld |
77,553,830 | 12,279,326 | Updating False values in a list based on the previous iteration | <p>I have a list of lists:</p>
<pre><code>['col1', False, False, False, False, False]
['col1', 'col2', False, False, False, False]
['col1', False, 'col3a', False, False, False]
['col1', False, 'col3b', False, False, False]
['col1', False, False, 'col4', False, False]
['col1', False, False, 'col4', False, False]
</code></pre>
<p>As I go through each row, I'm looking to see if the current item is <code>False</code> and if the item in the previous row (same pos) is not <code>False</code>, I'll replace the current item with the previous one.</p>
<p>So the new list of lists would be:</p>
<pre><code>list_of_lists = [
['col1', False, False, False, False, False],
['col1', 'col2', False, False, False, False],
['col1', 'col2', 'col3a', False, False, False],
['col1', 'col2', 'col3b', False, False, False],
['col1', 'col2', 'col3b', 'col4', False, False],
['col1', 'col2', 'col3b', 'col4', False, False],
]
</code></pre>
<p>Here is my code:</p>
<pre><code>for row_num in range(len(list_of_lists)):
display_list = []
if row_num == 0:
continue
for col_num in range(len(list_of_lists[row_num])):
current_row = list_of_lists[row_num][col_num]
previous_row = list_of_lists[row_num - 1][col_num]
if current_row == False and previous_row != False:
display_list.append(previous_row)
else:
display_list.append(current_row)
print(display_list)
</code></pre>
<p>It incorrectly outputs:</p>
<pre><code>['col1', False, False, False, False, False]
['col1', 'col2', False, False, False, False]
['col1', 'col2', 'col3a', False, False, False]
['col1', False, 'col3b', False, False, False]
['col1', False, 'col3b', 'col4', False, False]
['col1', False, False, 'col4', False, False]
</code></pre>
<p>Why is it only revising some of the values? When a <code>False</code> value is replaced, the next iteration does not seem to know it changed. For example:</p>
<pre><code>['col1', 'col2', 'col3a', False, False, False]
['col1', False, 'col3b', False, False, False]
</code></pre>
<p>Why does the corresponding item in the 2nd list not know that <code>'col2'</code> was updated, and use it to replace the <code>False</code> in that position?</p>
| <python> | 2023-11-26 23:16:30 | 3 | 948 | dimButTries |
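The code in the question above compares each row against the original `list_of_lists`, but the filled-in values only ever land in the throw-away `display_list`, so the next iteration never sees them. Writing the result back into the row (here by mutating in place) fixes it; a minimal sketch:

```python
def forward_fill(rows):
    """Replace a False cell with the value from the same column one row up."""
    for r in range(1, len(rows)):
        for c in range(len(rows[r])):
            if rows[r][c] is False and rows[r - 1][c] is not False:
                rows[r][c] = rows[r - 1][c]  # write back so later rows see it
    return rows
```

Because the mutation happens in `rows` itself, each row's fills are visible when the following row is processed.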
77,553,816 | 9,213,600 | Keras RBF (radial basis functions) layer for image classification is not working | <p>I want to build an image classifier using an RBF layer, Keras, and the fashionMNIST dataset. The model compiles correctly, the only issue I have is that the model is not learning anything, basically classifies any image to some label it comes up with ending up with 10% accuracy all the time (please note that the fashionMNIST is a balanced dataset of 10 classes).</p>
<p>Here is the implementation</p>
<pre><code>from keras.layers import Layer
from keras import backend as K
from keras.utils import to_categorical
from keras.initializers import RandomUniform, Initializer, Constant
import numpy as np
from keras.datasets import mnist
from keras.layers import Dense, Flatten
from keras.models import Sequential
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
class InitCentersRandom(Initializer):
def __init__(self, X):
self.X = X
def __call__(self, shape, dtype=None):
assert shape[1] == self.X.shape[1]
idx = np.random.randint(self.X.shape[0], size=shape[0])
return self.X[idx, :]
class RBFLayer(Layer):
def __init__(self, output_dim, initializer=None, betas=1.0, **kwargs):
self.output_dim = output_dim
self.init_betas = betas
if not initializer:
self.initializer = RandomUniform(0.0, 1.0)
else:
self.initializer = initializer
super(RBFLayer, self).__init__(**kwargs)
def build(self, input_shape):
self.centers = self.add_weight(name='centers',
shape=(self.output_dim, input_shape[1]),
initializer=self.initializer,
trainable=True)
self.betas = self.add_weight(name='betas',
shape=(self.output_dim,),
initializer=Constant(value=self.init_betas),
trainable=True)
super(RBFLayer, self).build(input_shape)
def call(self, x):
C = K.expand_dims(self.centers)
H = K.transpose(C - K.transpose(x))
return K.exp(-self.betas * K.sum(H**2, axis=1))
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
def get_config(self):
config = {
'output_dim': self.output_dim
}
base_config = super(RBFLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
# Load and preprocess the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
X = x_train.astype('float32') / 255.0
X = X.reshape((len(X), -1))
y = to_categorical(y_train, num_classes=10)
# Create the RBFLayer model
rbflayer = RBFLayer(20,
initializer=InitCentersRandom(X),
betas=2.0,
input_shape=X.shape[1:])
model = Sequential()
model.add(rbflayer)
model.add(Dense(10, activation='softmax'))
model.summary()
# Compile and train the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(X, y, batch_size=32, epochs=2, verbose=True)
</code></pre>
<p>And for testing the model I am using:</p>
<pre><code># Assuming you have a trained model 'model' and test data 'x_test' and 'y_test'
# Preprocess the test data
x_test = x_test.astype('float32') / 255.0
x_test = x_test.reshape((len(x_test), -1))
y_test_categorical = to_categorical(y_test, num_classes=10)
# Evaluate the model on the test data
loss = model.evaluate(x_test, y_test_categorical, verbose=0)
# Make predictions on the test data
y_pred = model.predict(x_test)
y_pred_classes = np.argmax(y_pred, axis=1)
# Calculate accuracy
correct_predictions = np.sum(y_pred_classes == y_test)
total_samples = len(y_test)
accuracy = correct_predictions / total_samples
print(f'Test Loss: {loss:.4f}')
print(f'Test Accuracy: {accuracy * 100:.2f}%')
</code></pre>
<p>I attempted different hyperparameters, RBF kernels, initial weights, etc., but the model always ends up giving ~10% accuracy.</p>
| <python><machine-learning><keras><scikit-learn><deep-learning> | 2023-11-26 23:09:33 | 0 | 1,104 | Ali H. Kudeir |
77,553,676 | 16,988,223 | Dask dataframe condition to compare column doesn't working | <p>I'm trying to compare the column names of a dask dataframe in order to change a column's data type, but my condition is never true:</p>
<pre><code>column_name = "names"
print(f"Col Name: {column_name} \n") # names
# Change column datatype:
for i in dataframe_for_db.select_dtypes(include='object').columns.tolist():
if (dataframe_for_db[i] == column_name).any().compute() :
# Change column datatype
print("Column found. changing datatype : ")
dataframe_for_db[i] = dataframe_for_db[i].astype(str)
print(dataframe_for_db.dtypes)
</code></pre>
<p>As you can see, I'm iterating over all the object columns, but this doesn't work. The column name is set above.</p>
<p>This condition is never executed:</p>
<pre><code>if (dataframe_for_db[i] == column_name).any().compute() :
</code></pre>
<p>I want to change the column datatypes because I then need to store this dataframe in my database. I will only create the table name in the db and then run my script, which will create the columns of my table.</p>
<p>Any ideas on how to fix this problem would be appreciated. Thanks so much.</p>
| <python><dataframe><dask> | 2023-11-26 22:13:32 | 1 | 429 | FreddicMatters |
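In the question above, `dataframe_for_db[i] == column_name` compares the column's <em>values</em> to the string `"names"`, not the column's <em>name</em>, so the condition never holds. Comparing the names directly needs no `compute()` at all. A sketch using pandas as a stand-in (the same logic applies unchanged to a dask DataFrame, since column names are plain Python strings in both):

```python
import pandas as pd  # stand-in here; a dask DataFrame works the same way

def cast_column_to_str(df, column_name):
    for col in df.select_dtypes(include="object").columns:
        if col == column_name:           # compare the column NAME, not its values
            df[col] = df[col].astype(str)
    return df
```

With dask, only the `astype` result stays lazy; the name comparison itself is an ordinary in-memory string check.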
77,553,523 | 1,960,266 | tokenizer and parser returns wrong answer for a postfix notation | <p>I have coded a tokenizer and a recursive parser for a postfix expression. My code is the following:</p>
<pre><code>import re
token_patterns = [
('OPERATOR', r'[+\-*/]'),
('NUMBER', r'\d+'),
('WHITESPACE', r'\s+'),
]
def tokenize(source_code):
tokens = []
source_code = source_code.strip()
while source_code:
matched = False
for token_type, pattern in token_patterns:
match = re.match(pattern, source_code)
if match:
value = match.group(0)
tokens.append((token_type, value))
source_code = source_code[len(value):].lstrip()
matched = True
break
if not matched:
raise ValueError(f"Invalid character in source code: {source_code[0]}")
return tokens
def parse_expression(tokens):
if not tokens:
return None
token_type, value = tokens.pop(0)
if token_type == 'NUMBER':
return int(value)
elif token_type == 'OPERATOR':
if value in ('+', '-', '*', '/'):
right = parse_expression(tokens)
left = parse_expression(tokens)
return (value, left, right)
else:
raise ValueError(f"Unexpected operator: {value}")
else:
raise ValueError(f"Unexpected token: {token_type}")
def evaluate_expression(expression):
if isinstance(expression, int):
return expression
elif isinstance(expression, tuple):
operator, left, right = expression
if operator == '+':
return evaluate_expression(left) + evaluate_expression(right)
elif operator == '-':
return evaluate_expression(left) - evaluate_expression(right)
elif operator == '*':
return evaluate_expression(left) * evaluate_expression(right)
elif operator == '/':
return evaluate_expression(left) / evaluate_expression(right)
else:
raise ValueError(f"Invalid expression: {expression}")
def main():
source_code = "2 3 4 * +"
tokens = tokenize(source_code)
parsed_expression = parse_expression(tokens)
print(f"Source code: {source_code}")
print(f"Parsed expression: {parsed_expression}")
result = evaluate_expression(parsed_expression)
print(f"Result: {result}")
if __name__ == "__main__":
main()
</code></pre>
<p>The part of the tokenize function is working correctly, giving me:</p>
<pre><code>[('NUMBER', '2'), ('NUMBER', '3'), ('NUMBER', '4'), ('OPERATOR', '*'), ('OPERATOR', '+')]
</code></pre>
<p>but I would like to return in the</p>
<pre><code>print(f"Parsed expression: {parsed_expression}")
</code></pre>
<p>something like this:</p>
<pre><code>Parsed expression: ('+', 2, ('*', 3, 4))
</code></pre>
<p>However, it only prints 2.
Also, the result should be 14, but I again got 2. I am not able to find the mistake.
Any help?</p>
| <python><parsing><tokenize><recursive-descent> | 2023-11-26 21:19:57 | 1 | 3,477 | Little |
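In the question above, `tokens.pop(0)` consumes from the <em>front</em>, so the first token (`NUMBER '2'`) is returned immediately as the whole parse. A postfix expression parses naturally from the <em>back</em>, where the root operator sits last; popping from the end also yields the right operand first, matching the existing right-then-left order. A sketch of the corrected parser:

```python
def parse_postfix(tokens):
    """Parse a postfix token list (as produced by tokenize) from the END."""
    if not tokens:
        raise ValueError("Unexpected end of input")
    token_type, value = tokens.pop()      # pop() removes the LAST token
    if token_type == 'NUMBER':
        return int(value)
    if token_type == 'OPERATOR':
        right = parse_postfix(tokens)     # the right operand comes off first
        left = parse_postfix(tokens)
        return (value, left, right)
    raise ValueError(f"Unexpected token: {token_type}")
```

The existing `evaluate_expression` then works unchanged on the resulting tree.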
77,553,414 | 12,193,952 | Refactoring FastAPI application: Best practices for handling overlapping parameters | <p>I am working on a <strong>Python</strong> application. It is a <strong>FastAPI</strong>-based API and its main purpose is to provide calculations controlled by received <strong>query parameters</strong>. The project has grown a lot and the number of available API parameters is around 150. I wanted to do some refactoring and ended up with a dilemma: "how to do things in a better way".</p>
<p>I am using <code>Python 3.10</code>, pydantic <code>2.5.0</code> and FastAPI <code>0.104.0</code></p>
<h2>Detailed description</h2>
<p>All received parameters are stored inside a <strong>dataclass</strong>. This class is passed to every function that needs to read any parameter. I was unable to split this large class into several smaller ones, because the parameters often overlap in purpose.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
...
@dataclass
class GenChart:
"""
The following model defines all arguments that are accepted by generate_charts
"""
sid: int = 1
time: str = "1m"
market: str = None
...
@app.get('/generate')
async def generate(config_object: GenChart = Depends()):
response = generate(config_object=config_object)
return response
</code></pre>
<p>I am also <strong>adding</strong> some new arguments to the class in the code so flexibility of <strong>dataclass</strong> suits my needs. The total number of arguments can raise up to 200.</p>
<pre class="lang-py prettyprint-override"><code>def generate(config_object: GenChart):
# Add new arguments (example)
config_object.foo = "boo"
</code></pre>
<p>I wanted to use <strong>Pydantic BaseModel</strong>; however, since it does not let me add arbitrary new fields after definition, it would be necessary to define all arguments on init. This would expose those arguments inside the docs and might cause some confusion when reading them later.</p>
<p><a href="https://i.sstatic.net/dNZ6I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dNZ6I.png" alt="config object passing schema" /></a></p>
<h2>Question</h2>
<ul>
<li>Is this the right pythonic way?</li>
<li>What and how can I improve my code architecture?</li>
<li>Should I rather use Basemodel with (<em>somehow</em>) hidden arguments from the API docs?</li>
</ul>
<h3>Additional Q&A</h3>
<ul>
<li>Q: Why there are so many arguments?
<ul>
<li>A: The application serves as a "backtester". It allows to test multiple settings and variables and their combinations. It is an internal tool used by "trained users".</li>
</ul>
</li>
</ul>
<hr />
<p><em>Please excuse me if the question is not clear, I have been working on this topic for a week and it is very difficult to formulate it simply. I do my best to modify the question so that it is as clear as possible. At the same time, I would be glad to be referred to any resources from which I can gain and learn something.</em></p>
| <python><python-3.x><class><pydantic> | 2023-11-26 20:39:09 | 0 | 873 | FN_ |