QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 |
|---|---|---|---|---|---|---|---|---|
77,115,778 | 9,221,448 | How to start a server on a remote machine? | <p>I'm trying to set up a cluster of EC2 instances that I can use for "big data" modeling. I'm trying to stay close to the metal and would prefer to not use existing frameworks (e.g. dask, DataBricks, etc).</p>
<p>I am able to stand up the instances using boto3, and I'm using paramiko to SSH and copy files to each one. I have written a TCP server that I'd like to launch on each worker machine, but when I issue the command (in Bash) to launch the server, my session hangs waiting for the server to complete. Below is a somewhat stripped-down version of what I'm trying to do. I'd like to loop over instances in a <code>for</code> loop, starting a server on each one with a call to <code>start_server</code>.</p>
<pre><code>import paramiko
import os
def start_server(instance, server_code_fname: str, port: int, rsa_key):
"""
Args:
instance: A boto3 'Instance' handle to a machine running on EC2.
server_code_fname: The name of a file containing the TCP server code to
run on the remote machine.
port: The port the remote machine should listen on.
rsa_key: Security token needed to make the SSH connection.
Expected Effect: Should start a server on the remote machine and return
fairly quickly.
Observed Effect: Server starts, but the SSH session hangs -- presumably
waiting (forever) for the server to complete, and blocking me from
launching the server on the NEXT machine.
"""
dest = "/tmp/" + str(os.path.basename(server_code_fname))
copy_local_file_to_remote(
instance,
local_path=server_code_fname,
remote_path=dest,
rsa_key=rsa_key)
ip = instance.private_ip_address
# ----------------------------------------------------------------------
# This is the command that hangs
command = f"nohup python3 {dest} --port {port} --ip {ip} & "
# ----------------------------------------------------------------------
run_bash_command(instance, command, rsa_key)
def copy_local_file_to_remote(instance,
local_path: str,
remote_path: str,
rsa_key):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=instance.public_dns_name,
username="ec2-user",
pkey=rsa_key,
port=22)
scp = ssh.open_sftp()
scp.put(local_path, remote_path)
scp.close()
def run_bash_command(instance, command, rsa_key):
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname=instance.public_dns_name,
username="appropriate_username",
pkey=rsa_key)
_, stdout, stderr = ssh_client.exec_command(command)
ssh_client.close()
return stdout, stderr
</code></pre>
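<p>For context, from what I have read, the usual reason <code>exec_command</code> blocks on a backgrounded process is that <code>nohup</code>'s stdout/stderr still point at the SSH channel; redirecting them is a commonly suggested workaround. A minimal sketch of building such a command (paths and values hypothetical, not tested against EC2):</p>

```python
import shlex

def detached_command(script_path: str, port: int, ip: str) -> str:
    # Redirect stdout/stderr away from the SSH channel so the remote
    # shell can detach and exec_command can return immediately.
    return (f"nohup python3 {shlex.quote(script_path)} "
            f"--port {port} --ip {ip} > /dev/null 2>&1 &")

cmd = detached_command("/tmp/server.py", 9000, "10.0.0.5")
print(cmd)
```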
| <python><bash><amazon-ec2><boto3><paramiko> | 2023-09-15 23:27:08 | 1 | 505 | Steven Scott |
77,115,609 | 9,900,084 | Format time axis to only display tick labels after a certain date | <p>I am attempting to format the time axis in subplots so that the xtick label appears only after the start of the data.</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import date
import matplotlib.pyplot as plt
from matplotlib.dates import MonthLocator, num2date
from matplotlib.ticker import FuncFormatter
xs = [pd.date_range(f'{y}-07-01', '2021-12-31', freq='M')
for y in range(2016, 2019)]
ys = [np.random.rand(len(x)) for x in xs]
fig, axs = plt.subplots(3, 1, figsize=(10, 8))
for ax, x, y in zip(axs, xs, ys):
ax.plot(x, y)
# Custom formatting function
date_min = x[0].date()
def custom_date_formatter(x, pos):
dt = num2date(x)
if dt.date() < date_min:
return ''
elif dt.month == 1:
return dt.strftime('%Y')
else:
return dt.strftime('%b').upper()
ax.xaxis.set_major_locator(MonthLocator((1, 4, 7, 10)))
ax.xaxis.set_major_formatter(FuncFormatter(custom_date_formatter))
ax.tick_params(axis='x', labelsize=8)
ax.set_xlim(date(2016, 7, 1))
</code></pre>
<p>But I got a plot where all axis display identical xtick labels:</p>
<p><a href="https://i.sstatic.net/HpdKd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HpdKd.png" alt="enter image description here" /></a></p>
<p>But I want them to be:</p>
<pre><code>JUL - OCT - 2017 - APR - JUL - OCT - 2018 .....
..... JUL - OCT - 2018 - APR - JUL - OCT - 2019 .....
............ JUL - OCT - 2019 - APR - JUL - OCT - 2020 .....
</code></pre>
<p><a href="https://i.sstatic.net/ET0nB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ET0nB.png" alt="enter image description here" /></a></p>
<p>How should I fix the <code>custom_date_formatter</code> function to achieve that?</p>
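<p>For what it's worth, the symptom looks consistent with the classic late-binding closure pitfall: all three <code>custom_date_formatter</code> functions close over the same <code>date_min</code> variable, so by draw time they all see the value from the last loop iteration. A matplotlib-free sketch of the difference, where a default argument binds the value per iteration:</p>

```python
# Late binding: every closure reads n when it is called, after the loop ended.
shared = [lambda v: v * n for n in range(3)]
# Early binding: the n=n default freezes the current value into each function.
bound = [lambda v, n=n: v * n for n in range(3)]

print([f(10) for f in shared])  # every closure sees the final n
print([f(10) for f in bound])
```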
| <python><matplotlib> | 2023-09-15 22:20:43 | 1 | 2,559 | steven |
77,115,542 | 2,975,438 | AgentExecutor and ModuleNotFoundError/NameError | <p><code>AgentExecutor</code> should be able to install python packages.</p>
<p>Here is code snippet:</p>
<pre><code>from langchain.agents import ConversationalChatAgent, AgentExecutor
from langchain.agents import Tool
from langchain.agents import create_pandas_dataframe_agent
from langchain.chat_models import AzureChatOpenAI
#df = pd.read_csv(...)
#llm = AzureChatOpenAI(...)
agent_analytics_node = create_pandas_dataframe_agent(
llm,
df,
verbose=True,
reduce_k_below_max_tokens=True,
max_execution_time = 20,
early_stopping_method="generate",
)
tool_analytics_node = Tool(
name='Analytics Node',
func=agent_analytics_node.run
)
tools = [tool_analytics_node]
chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
agent=chat_agent,
tools=tools,
return_intermediate_steps=True,
handle_parsing_errors=True,
verbose=True,
)
prompt='can you help understand mood of customers'
response = executor(prompt)
</code></pre>
<p>Here is the current output from the agent:</p>
<pre><code>> Entering new AgentExecutor chain...
Thought: The question seems to be asking for the sentiment polarity of the 'survey_comment' column in the dataframe. The sentiment polarity is a measure that lies between -1 and 1. Negative values indicate negative sentiment and positive values indicate positive sentiment. The TextBlob library in Python can be used to calculate sentiment polarity. However, before applying the TextBlob function, we need to ensure that the TextBlob library is imported. Also, the 'dropna()' function is used to remove any NaN values in the 'survey_comment' column before applying the TextBlob function.
Action: python_repl_ast
Action Input: import TextBlob
Observation: ModuleNotFoundError: No module named 'TextBlob'
Thought:The TextBlob library is not imported. I need to import it from textblob module.
Action: python_repl_ast
Action Input: from textblob import TextBlob
Observation:
Thought:Now that the TextBlob library is imported, I can apply it to the 'survey_comment' column to calculate the sentiment polarity.
Action: python_repl_ast
Action Input: df['survey_comment'].dropna().apply(lambda x: TextBlob(x).sentiment.polarity)
Observation: NameError: name 'TextBlob' is not defined
</code></pre>
<p>Also, <code>TextBlob</code> is installed in the Python environment.</p>
| <python><langchain> | 2023-09-15 21:58:50 | 1 | 1,298 | illuminato |
77,115,394 | 6,658,422 | How to change the headings and column numbers in a pysimplegui Table element | <p>I cannot find out how I can update the headings in a Table element or change the number of columns.</p>
<p>I tried creating two table elements and switching them, but that did not work.</p>
<pre><code>import PySimpleGUI as sg
values = [["John Doe", 30], ["Jane Doe", 25], ["Peter Smith", 40]]
# Create a table with 3 rows and 2 columns
table1 = sg.Table(
values=values,
headings=["Name", "Age"],
auto_size_columns=False,
)
table2 = sg.Table(
values=values,
headings=["XXXXX", "YYYY"],
auto_size_columns=False,
)
table = table1
# Create a layout and show the window
layout = [[table], [sg.Button("Flip", key="-FLIP-"), sg.Push(), sg.Quit()]]
window = sg.Window("Table Example", layout)
while True:
event, values = window.read()
print(event, values)
if event in [sg.WIN_CLOSED, "Quit"]:
break
if event == "-FLIP-":
# update headings in table
pass
window.close()
</code></pre>
<p>I have looked at various examples but they all create new tables in a <code>sg.window()</code>, and do not update an existing table.</p>
<p>Thank you very much for pointers to where I can find out more.</p>
| <python><pysimplegui> | 2023-09-15 21:16:57 | 1 | 2,350 | divingTobi |
77,115,274 | 13,968,392 | copy a dataframe to a new variable with method chaining | <p>Is it possible to copy a dataframe in the middle of a method chain to a new variable?
Something like:</p>
<pre><code>import pandas as pd
df = (pd.DataFrame([[2, 4, 6],
[8, 10, 12],
[14, 16, 18],
])
.assign(something_else=100)
.div(2)
.copy_to_new_variable(df_imag) # Imaginated method to copy df to df_imag.
.div(10)
)
</code></pre>
<p><code>print(df_imag)</code> would then return:</p>
<pre><code> 0 1 2 something_else
0 1.0 2.0 3.0 50.0
1 4.0 5.0 6.0 50.0
2 7.0 8.0 9.0 50.0
</code></pre>
<p><code>.copy_to_new_variable(df_imag)</code> could be replaced by <code>df_imag = df.copy()</code>, but this would mean breaking up the method chain.</p>
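<p>The closest thing I have found so far is abusing <code>.pipe</code> with a side-effecting helper — a sketch (the helper name is made up):</p>

```python
import pandas as pd

snapshots = {}

def stash(df, key):
    # Side effect: store a copy of the chain at this point, then pass df on.
    snapshots[key] = df.copy()
    return df

df = (pd.DataFrame([[2, 4, 6],
                    [8, 10, 12],
                    [14, 16, 18],
                    ])
      .assign(something_else=100)
      .div(2)
      .pipe(stash, 'df_imag')  # stand-in for the imagined method
      .div(10)
      )

df_imag = snapshots['df_imag']
print(df_imag)
```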
| <python><pandas><method-chaining> | 2023-09-15 20:46:27 | 3 | 2,117 | mouwsy |
77,115,046 | 13,562,186 | JVM DLL not found. FileNotFoundError: [Errno 2] | <p>Trying to explore using Tabula in Python on a PDF in Visual Studio Code on macOS.</p>
<pre><code>import pandas as pd
import tabula
dfs = tabula.read_pdf("/Users/TEST.pdf", pages = 1)
len(dfs)
</code></pre>
<p>When I run the code however I get the following error:</p>
<p>FileNotFoundError: [Errno 2] JVM DLL not found: /Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home/lib/jli/libjli.dylib</p>
<p><a href="https://i.sstatic.net/VDib1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VDib1.png" alt="enter image description here" /></a></p>
<p>I have installed Java via Homebrew and via a .pkg, both apparently successfully, and I can run a simple Java program in Visual Studio Code just fine. So it is installed, but I don't really know how to solve the above error despite a few attempts.</p>
<p><a href="https://i.sstatic.net/PrK9g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PrK9g.png" alt="enter image description here" /></a></p>
<p>I am really new to python and installing packages so if you think you can answer, please walk me through like I'm 5 years old.</p>
<p>UPDATE:</p>
<pre><code>import os
# Set the JAVA_HOME environment variable to the Java installation directory
os.environ["JAVA_HOME"] = "/opt/homebrew/opt/openjdk/libexec/openjdk.jdk"
import pandas as pd
import tabula
dfs = tabula.read_pdf("/Users/NickCoding/Desktop/TEST.pdf", pages = 1)
len(dfs)
</code></pre>
<p>This allows the code to work, however I feel that this is a botched solution.</p>
<p>How do I get it to work in the virtual environment?</p>
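<p>One approach I have seen suggested (unverified) to scope the workaround to the virtual environment is to append the export to the venv's <code>activate</code> script, so the variable is set whenever the venv is active:</p>

```shell
# Appended to <venv>/bin/activate -- the path below is from my machine,
# adjust to wherever Homebrew put the JDK.
export JAVA_HOME="/opt/homebrew/opt/openjdk/libexec/openjdk.jdk"
```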
| <python><java><visual-studio-code><dll><jvm> | 2023-09-15 19:56:52 | 1 | 927 | Nick |
77,115,016 | 10,227,815 | How to mock in Python API testing | <p>I'm trying to mock a sample function below. Somehow I'm getting an error.
For example,
I have a class, <code>myclass.py</code>, as below:</p>
<pre><code>import os, requests
class MyClass:
def __init__(self, login_url):
self.username = os.environ.get('username')
self.password = os.environ.get('password')
self.login_url = login_url
self.auth_credentials = {
'name': self.username,
'password': self.password
}
def get_access_token(self):
token = ""
headers = {
'Authorization': f'Bearer {token}',
'Content-Type': 'application/json',
'Accept': 'application/json'
}
response = requests.post(self.login_url, json=self.auth_credentials, headers=headers, verify=False)
access_token = response.json().get('access_token')
return access_token
</code></pre>
<p>Now, I'm having unit-test as below <code>unit_test.py</code>, which is actually throwing an error:</p>
<pre><code>import os
import requests
import unittest
from unittest.mock import Mock, patch, MagicMock
import MyClass
class TestModule(unittest.TestCase):
# @patch('MyClass.get_access_token')
def test_execute(self):
# Test/Mock API:
_obj = MyClass(login_url="https://www.my-website.com/login/")
mock_access_token = "abc123def456ghi789"
with patch('MyClass.get_access_token') as _access:
_access.return_value.status_code = 200
_access.return_value.json.return_value = mock_access_token
response = _obj.get_access_token()
self.assertEqual(response.status_code, 200)
self.assertEqual(response, mock_access_token)
if __name__ == "__main__":
unittest.main()
</code></pre>
<p>So, what am I missing in <code>unit_test.py</code>?</p>
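<p>My current understanding of the pattern I should be aiming for — patching the HTTP call rather than <code>get_access_token</code> itself — shown with a stand-in for <code>requests</code> so the sketch is self-contained (all names here are illustrative, not my real module):</p>

```python
from unittest.mock import patch

class FakeRequests:
    # Stand-in for the requests module used by the class under test.
    @staticmethod
    def post(url, **kwargs):
        raise RuntimeError("no network in tests")

requests = FakeRequests()

class MyClass:
    def __init__(self, login_url):
        self.login_url = login_url
        self.auth_credentials = {}

    def get_access_token(self):
        response = requests.post(self.login_url, json=self.auth_credentials)
        return response.json().get('access_token')

# Patch the collaborator, not the method under test.
with patch.object(FakeRequests, 'post') as mock_post:
    mock_post.return_value.json.return_value = {'access_token': 'abc123def456'}
    token = MyClass("https://www.my-website.com/login/").get_access_token()

print(token)
```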
| <python><unit-testing><mocking><python-unittest><python-unittest.mock> | 2023-09-15 19:50:14 | 2 | 303 | SHM |
77,114,915 | 226,081 | python: how to use type annotations and docstring formats together | <p>Now that we have type annotations in Python, I'm wondering how to document code with type annotations AND docstring formats.</p>
<p>Specifically, it seems redundant to specify the argument types as an annotation AND <code>:type:</code> in the docstring, as in the reStructuredText-format example below. The descriptions in the docstring format are still useful.</p>
<pre class="lang-py prettyprint-override"><code>
def my_function(db: Session, name: str) -> str:
"""Some function
:param db: a database connection
:type db: Session
:param name: some name
:type name: str
:return: return something
:rtype: str
"""
</code></pre>
<p>One option might be to write the function like this:</p>
<pre class="lang-py prettyprint-override"><code>
def my_function(db: Session, name: str) -> str:
"""Some function
:param db: a database connection
:param name: some name
:return: return something
"""
</code></pre>
<p>How do I write DRY docstrings for code with type annotations, so that it is most compatible with IDE type inference tools (e.g. VSCode), linters, mypy, auto-documentation tools (e.g. Sphinx), and other community-based tools for generating and introspecting documentation?</p>
| <python><python-typing><docstring> | 2023-09-15 19:25:44 | 1 | 10,861 | Joe Jasinski |
77,114,887 | 239,801 | How can I fix this mypy "Incompatible return value type" for my Enum-like class in my Python project? | <p>In our project we never use <code>Enum</code>; we use <code>SimpleEnum</code>, which is defined like this:</p>
<pre><code>class SimpleEnum:
@classmethod
def values(cls):
return {
getattr(cls, name)
for name in dir(cls)
if not name.startswith("_") and not callable(getattr(cls, name))
}
</code></pre>
<p>Now for something like this:</p>
<pre><code>class MySimpleEnum(SimpleEnum):
completed = "completed"
parse = "parse"
def foo() -> MySimpleEnum:
return MySimpleEnum.parse
</code></pre>
<p>I get mypy error:</p>
<pre><code>error: Incompatible return value type (got "str", expected "MySimpleEnum") [return-value]
</code></pre>
<p>This is a super-simplified use case from a project with hundreds of thousands of lines. I can't just switch to <code>Enum</code>. How can I change <code>SimpleEnum</code> to allow type hinting like I'm trying to do (if that's even possible)? The change must not alter the behavior of <code>SimpleEnum</code> (since it's used hundreds of times in the project).</p>
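<p>To make the runtime behaviour concrete: the members really are plain strings, which is exactly why mypy rejects the annotation:</p>

```python
class SimpleEnum:
    @classmethod
    def values(cls):
        return {
            getattr(cls, name)
            for name in dir(cls)
            if not name.startswith("_") and not callable(getattr(cls, name))
        }

class MySimpleEnum(SimpleEnum):
    completed = "completed"
    parse = "parse"

# Unlike enum.Enum, attribute access yields the raw value, not a member object.
print(type(MySimpleEnum.parse))
print(MySimpleEnum.values())
```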
| <python><python-3.x><mypy> | 2023-09-15 19:18:58 | 2 | 23,198 | AlexV |
77,114,815 | 5,378,816 | Pylint is not suggesting the walrus operator. Why? | <p>I was going to ask if there is a pylint-style code analyzer capable of suggesting the use of the <code>:=</code> operator in places where it might improve the code. However, it looks like such a test was added to pylint two years ago -> <a href="https://github.com/pylint-dev/pylint/pull/4876" rel="nofollow noreferrer">github PR (merged)</a>.</p>
<p>Anyway, I have never seen such a suggestion, not even for this example from the linked PR:</p>
<pre><code>x = 2
if x:
print(x)
# -----
# if (x := 2):
# print(x)
# -----
</code></pre>
<p>This feature has been available since Python 3.8. (I'm using recent Python and pylint versions.) I thought I had to enable it somehow, but the help says:</p>
<blockquote>
<p><strong>--py-version <py_version></strong>
Minimum Python version to use for version dependent checks. Will
default to the version used to run pylint.</p>
</blockquote>
<p>What is wrong? Why is there no <code>consider-using-assignment-expr</code> from <code>pylint</code>?</p>
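<p>For the record, my current guess (unverified) is that the check lives in an optional extension that has to be loaded — and possibly enabled — explicitly, along these lines in <code>.pylintrc</code>:</p>

```ini
[MASTER]
load-plugins=pylint.extensions.code_style

[MESSAGES CONTROL]
enable=consider-using-assignment-expr
```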
| <python><pylint><python-assignment-expression> | 2023-09-15 19:07:11 | 1 | 17,998 | VPfB |
77,114,575 | 22,466,650 | How to consider the nesting level when looping through a nested list? | <p>My input is a nested list:</p>
<pre><code>data = ['aaa', ['eee', ['ccc', 'zzz', ['fff']], 'yyy']]
</code></pre>
<p>And I need to rearrange it in a very specific way. Let me put some comments to explain the logic:</p>
<pre><code>expected_output = [
['aaa', 'eee'], # bcz eee is the first non-list item inside the next list that follows aaa
['aaa', 'yyy'], # bcz yyy is the second non-list item inside the next list that follows aaa
['eee', 'ccc'], # bcz ccc is the first non-list item inside the next list that follows eee
['eee', 'zzz'], # bcz zzz is the second non-list item inside the next list that follows eee
['zzz', 'fff'], # bcz fff is the first non-list item inside the next list that follows zzz
]
</code></pre>
<p>I tried some sort of recursive function, but it's not good at all:</p>
<pre><code>def my_func(item):
output = []
for i in item:
if isinstance(i, list):
yield i
else:
output.append([i, my_func(i)])
</code></pre>
<p>Here is the wrong output I'm getting:</p>
<pre><code>print(list(my_func(data)))
[['eee', ['ccc', 'zzz', ['fff']], 'yyy']]
</code></pre>
<p>The first item of the input list is always a non-list. Also the order of the items in each output pair matters, but not the order in which the pairs are printed/returned relative to each other.</p>
<p>How should I approach this problem?</p>
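<p>In case it helps frame answers, my current mental model of the walk is: pair each non-list item with the top-level non-list items of the list that immediately follows it, then recurse into every sublist. A sketch of that idea:</p>

```python
def pair_up(items):
    out = []
    for prev, nxt in zip(items, items[1:]):
        if isinstance(nxt, list):
            if not isinstance(prev, list):
                # prev is followed by a list: pair it with that list's
                # top-level non-list items
                out += [[prev, child] for child in nxt
                        if not isinstance(child, list)]
            out += pair_up(nxt)  # descend into every sublist
    return out

data = ['aaa', ['eee', ['ccc', 'zzz', ['fff']], 'yyy']]
print(pair_up(data))
```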
| <python> | 2023-09-15 18:11:23 | 4 | 1,085 | VERBOSE |
77,114,567 | 16,717,009 | How do I obtain a key-value pair from a Python dictionary by referencing a specific key | <p>Am I missing something obvious?
Given <code>d = {'a': 1, 'b': 2, 'c': 10}</code>,
how do I get
<code>{'b': 2}</code> or even <code>('b', 2)</code> or <code>['b', 2]</code> just by providing <code>'b'</code>?
Using a comprehension to iterate through the whole dictionary seems wildly inefficient and unnecessary, since the whole point of a dictionary is to provide quick access by key.</p>
<p>My actual use for this is in a larger statement where I want to pass a single item from one dictionary appended to another dictionary, like so:
<code>a = func({**d1['b'], **d2})</code>
but of course that doesn't work, because <code>d1['b']</code> is not a dictionary.</p>
<p>I can do this with
<code>result = {'b': d['b']}</code>
but this seems clunky, especially when <code>'b'</code> is actually a long string. Isn't there a cleaner way?</p>
| <python><dictionary> | 2023-09-15 18:09:52 | 2 | 343 | MikeP |
77,114,481 | 210,388 | Capturing inner string in lark using regex | <p>Ok, so, this is probably a bit of a convoluted solution and I should probably use something simple, but it is what it is.</p>
<p>I have a huge text I want to parse using python's lark and I'm piecing together certain parts, and now I'm trying to get the text within the quotes.</p>
<p>Using regex that would be something like:</p>
<pre><code>([\"\'])(.*)(\1)
</code></pre>
<ul>
<li>capture first quote</li>
<li>capture all</li>
<li>until you capture the last quote that is the same as first</li>
</ul>
<p>However, adding that to Lark grammar, for instance like this:</p>
<pre><code>grammar = r"""
start: contained_string
contained_string: quoted | parenthesis | braces | brackets
quoted : /([\"\'])(.*)(\1)/
parenthesis : "(" CONT_STRING+ ")"
braces : "{" CONT_STRING+ "}"
brackets : "[" CONT_STRING+ "]"
CONT_STRING: /[\w.,!?:\- ]/
"""
</code></pre>
<p>And test it out on a list like this:</p>
<pre><code>samples = ['"sample"', "'sam'ple'", """'samp"le'"""]
</code></pre>
<p>I get these as outputs:</p>
<pre><code>"sample"
'sam'ple'
'samp"le'
</code></pre>
<p>Which is OK: it doesn't get tripped up by the extra quote, either of the same type or a different one. But it keeps the outer quotes. Usually I could specify that I want the second group from the captured ones, but I'm not sure how to do that within lark like this.</p>
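<p>Outside of lark, group 2 of the same pattern already yields the inner text, so the question is really just how to surface that group through the grammar:</p>

```python
import re

# Same pattern as in the grammar; group 2 is the text between the quotes.
pattern = re.compile(r"([\"'])(.*)\1")

samples = ['"sample"', "'sam'ple'", '\'samp"le\'']
print([pattern.fullmatch(s).group(2) for s in samples])
```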
| <python><python-3.x><lark-parser> | 2023-09-15 17:53:19 | 0 | 393 | thevoiddancer |
77,114,363 | 2,148,414 | Snowpark UDF with Row input type | <p>I would like to define a Snowpark UDF with input type <a href="https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/latest/api/snowflake.snowpark.Row" rel="nofollow noreferrer">snowflake.snowpark.Row</a>.
The reason for this is that I would like to mimic the <code>pandas.apply</code> approach
where I can define my business logic in some class, and then apply the logic to each row of the Snowpark dataframe. Each column can be easily mapped to a class attribute with <a href="https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/latest/api/snowflake.snowpark.Row.asDict" rel="nofollow noreferrer">asDict</a></p>
<p>For example (running from the Snowflake Python worksheet):</p>
<pre><code>import snowflake.snowpark as snowpark
from snowflake.snowpark.functions import udf
from snowflake.snowpark import Row
from snowflake.snowpark.types import IntegerType
from dataclasses import dataclass
@dataclass
class MyEvent:
attribute1: str = 'dummy'
attribute2: str = 'unknown'
def someCalculation(self) -> int:
return len(self.attribute1) + len(self.attribute2.strip())
def testSomeCalculation():
inputDict = {'attribute1': 'foo',
'attribute2': 'baz'}
event = MyEvent(**inputDict)
print(event.someCalculation())
def main(session: snowpark.Session):
some_logic = udf(lambda row: MyEvent(**(row.asDict())).someCalculation()
, return_type=IntegerType()
, input_types=[Row])
</code></pre>
<p>However, when I try to use snowpark.Row as input type, I get an <code>unsupported data type</code>:</p>
<pre><code>File "snowflake/snowpark/_internal/udf_utils.py", line 972, in create_python_udf_or_sp
input_sql_types = [convert_sp_to_sf_type(arg.datatype) for arg in input_args]
File "snowflake/snowpark/_internal/udf_utils.py", line 972, in <listcomp>
input_sql_types = [convert_sp_to_sf_type(arg.datatype) for arg in input_args]
File "snowflake/snowpark/_internal/type_utils.py", line 195, in convert_sp_to_sf_type
raise TypeError(f"Unsupported data type: {datatype.__class__.__name__}")
TypeError: Unsupported data type: type
</code></pre>
<p>I see that all the UDF examples use basic types from <code>snowpark.types</code>.
Is there any fundamental reason why the input type cannot be a snowpark.Row ?</p>
<p>I know I could list all <code>MyEvent</code> attributes explicitly in <code>input_types=[]</code>,
but that is going to be error-prone and defeat the purpose of designing
my code around a class representing my business object.</p>
| <python><snowflake-cloud-data-platform><user-defined-functions> | 2023-09-15 17:32:51 | 2 | 394 | user2148414 |
77,114,354 | 11,163,122 | Getting base Python interpreter in Makefile on Windows or POSIX | <p>I am trying to make a <code>Makefile</code> that has the path to system Python in an environment variable, that works on Windows and POSIX systems.</p>
<p>The below works on macOS and Windows WSL, but if you have a virtual environment activated, it returns the <code>venv</code>'s path (not system Python's path):</p>
<pre><code>SYSTEM_PYTHON = $(shell which python3)
</code></pre>
<p>The below works regardless of <code>venv</code> being activated, but it doesn't work on Windows (due to Windows not using <code>bin</code>, and using <code>\\</code>):</p>
<pre><code>SYSTEM_PYTHON = $(shell python3 -c "import sys; print(sys.base_prefix)")/bin/python3
</code></pre>
<p>How can I make a <code>SYSTEM_PYTHON</code> env var that works on both Windows and POSIX, regardless if a virtual environment is previously activated?</p>
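<p>To make the question concrete, here is the kind of resolution I am trying to express in one place — a Python sketch that picks whichever base-interpreter layout exists (the candidate paths are my assumption):</p>

```python
import os
import sys

base = sys.base_prefix  # the interpreter outside any venv
candidates = [
    os.path.join(base, "bin", "python3"),  # POSIX
    os.path.join(base, "python.exe"),      # Windows
    os.path.join(base, "bin", "python"),   # some POSIX installs
]
system_python = next((p for p in candidates if os.path.exists(p)),
                     sys.executable)  # fallback
print(system_python)
```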
| <python><windows><macos><makefile><gnu-make> | 2023-09-15 17:31:23 | 1 | 2,961 | Intrastellar Explorer |
77,114,337 | 1,584,906 | What does the "RuntimeError: Cannot enter into task ... while another task ... is being executed" mean in Python? | <p>I'm trying to troubleshoot this error in a fairly complex Python 3.9 project with PyQt 5.15 and <code>qasync</code> (error sanitized for privacy):</p>
<pre><code>RuntimeError: Cannot enter into task <Task pending name='Task-11911' coro=<*** running at ***:454> cb=[asyncSlot.<locals>._error_handler() at /mnt/data/opt/lib/python3.9/site-packages/qasync/__init__.py:778, <TaskWakeupMethWrapper object at 0x7f64291790>()]> while another task <Task pending name='Task-11910' coro=<*** running at ***.py:1038> cb=[asyncSlot.<locals>._error_handler() at /mnt/data/opt/lib/python3.9/site-packages/qasync/__init__.py:778]> is being executed.
</code></pre>
<p>I'm not sure I understand the error, and a quick search online only returns links to specific packages, not an explanation of the error itself. What does the error mean in practice?</p>
| <python><pyqt5><python-asyncio><qasync> | 2023-09-15 17:29:07 | 0 | 1,465 | Wolfy |
77,113,964 | 852,385 | Why does this Python list comprehension give list index out of range error | <pre class="lang-py prettyprint-override"><code>grid = [[]]
[j for j in range(len(grid[i])) for i in range(len(grid))]
</code></pre>
<p>gives <code>IndexError: list index out of range</code>.</p>
<p>I think <code>i</code> can be only <code>0</code> and <code>[j for j in range(len(grid[0]))]</code> runs fine without any problem. So I am confused.</p>
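<p>For comparison, swapping the clauses into outer-to-inner order makes it run, since <code>i</code> is then defined before <code>grid[i]</code> is evaluated (the <code>for</code> clauses in a comprehension nest left to right):</p>

```python
grid = [[], [10, 20]]
# The first for-clause is the outer loop, so i exists when grid[i] is read.
flat = [j for i in range(len(grid)) for j in range(len(grid[i]))]
print(flat)
```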
| <python><python-3.x> | 2023-09-15 16:21:03 | 1 | 4,232 | Helin Wang |
77,113,627 | 7,077,532 | Python Function: Loop Through a "Key:Value" Dictionary And Append The Last "Value" Word To an Output String Based on the Dictionary's Key | <p>Let's say I have the following dictionary:</p>
<pre><code>sorted_dict = {1: 'You', 2: 'teeth', 3: 'are', 4: 'mood', 5: 'test', 6: 'helpful'}
</code></pre>
<p>Each word is associated with a number and I want to arrange the words into a pyramid like the example below:</p>
<pre><code>1
2 3
4 5 6
</code></pre>
<p>The trick here is I only want the words corresponding to the numbers at the end of each line (1, 3, 6). All other words are discarded. So:</p>
<pre><code>1: You
3: are
6: helpful
</code></pre>
<p>In the end I want to return a string associated with numbers 1, 3, 6. In my example the desired output would be:</p>
<pre><code>"You are helpful"
</code></pre>
<p>I've started trying to write the function for this (see below). But I don't know how to get the pyramid pattern 1, 3, 6 in my loop to get the right words out. Please note this is a small example but I want my function (with this pyramid pattern) to run on a dictionary that potentially has thousands of words.</p>
<pre><code>def test(input_list):
output_str = ""
i = 1
for key, value in sorted_dict.items():
print(key, value)
# i += ?
# if statement here to only append string to output_str if it equals the right i number
output_str += ' ' + value
return output_str
</code></pre>
<p>My current function above returns <code>' You teeth are mood test helpful'</code>, which is incorrect.</p>
<p>How do I fix this?</p>
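<p>One observation that might help: the line-end keys (1, 3, 6, 10, ...) are the triangular numbers — each one is a growing step past the last — which is the part my loop is missing. A sketch of generating them against my example:</p>

```python
sorted_dict = {1: 'You', 2: 'teeth', 3: 'are', 4: 'mood', 5: 'test', 6: 'helpful'}

words = []
key, step = 0, 1
while True:
    key += step          # 1, 3, 6, 10, ... (end of each pyramid row)
    if key not in sorted_dict:
        break
    words.append(sorted_dict[key])
    step += 1

result = ' '.join(words)
print(result)
```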
| <python><function><loops><dictionary><number-sequence> | 2023-09-15 15:32:05 | 1 | 5,244 | PineNuts0 |
77,113,617 | 9,900,084 | Format time axis year as number and month as uppercase abbr | <p>I'm trying to find a way to format the datetime axis to display years as numbers and months as uppercase letters. Unfortunately, Python's <code>strftime</code> formatting doesn't provide uppercase month names (e.g. APR). I need to format it myself, but I haven't been able to figure out how. Since I only want to show Apr, Jul, and Oct, I used a <code>MonthLocator</code> in a <code>ConciseDateFormatter</code>. I have tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.dates import MonthLocator
from matplotlib.dates import ConciseDateFormatter
from matplotlib.ticker import FuncFormatter
x = pd.date_range('2016-01-01', '2021-12-31', freq='M')
y = np.random.rand(len(x))
fig, ax = plt.subplots(figsize=(10, 8))
locator = MonthLocator((1, 4, 7, 10))
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(
ConciseDateFormatter(
locator, formats=['%Y', '%b', '%d', '%H:%M', '%H:%M', '%S.%f']
)
)
ax.tick_params(axis='x', labelsize=8)
ax.plot(x, y)
</code></pre>
<p><a href="https://i.sstatic.net/xgslf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xgslf.png" alt="enter image description here" /></a></p>
<p>But I want the axis to be</p>
<pre><code>2016 - APR - JUL - OCT - 2017 - APR - JUL -OCT - 2018 -
</code></pre>
<p>I've tried <a href="https://stackoverflow.com/questions/49397437/matplotlib-uppercase-tick-labels">a few ways</a>, but none of them works. Once I use a user-defined <code>FuncFormatter</code>, it will no longer respect my <code>ConciseDateFormatter</code>. I'm wondering what the easiest way is for me to achieve my desired outcome. Should I simply write a formatter function that includes an if statement? If the year and month are both equal to 1, it should display the year number; otherwise, it should show the month name. If this is the case, how can I accomplish it?</p>
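<p>The if-statement I describe at the end can at least be prototyped independently of matplotlib (locale permitting for <code>%b</code>):</p>

```python
from datetime import date

def label_for(d):
    # Year number at January ticks, uppercase month abbreviation otherwise.
    return d.strftime('%Y') if d.month == 1 else d.strftime('%b').upper()

print([label_for(date(2016, m, 1)) for m in (1, 4, 7, 10)])
```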
| <python><matplotlib> | 2023-09-15 15:30:00 | 1 | 2,559 | steven |
77,113,589 | 5,722,359 | The `w.destroy()` method does not eradicate the `tkinter` widget? | <p>I submitted the following commands in the Python interpreter and discovered that the <code>w.destroy()</code> method of a <code>tkinter</code> widget does not eradicate (i.e. wipe out the existence of) the <code>tkinter</code> widget.</p>
<pre><code>>>> root=tk.Tk()
>>> a = ttk.Button(root, text="Button")
>>> a.grid(row=0, column=0)
>>> a
<tkinter.ttk.Button object .!button>
>>> a.destroy()
>>> a
<tkinter.ttk.Button object .!button>
>>> a.grid(row=0, column=0)
Traceback (most recent call last):
File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode
exec(code, self.locals)
File "<pyshell#23>", line 1, in <module>
File "/usr/lib/python3.10/tkinter/__init__.py", line 2522, in grid_configure
self.tk.call(
_tkinter.TclError: bad window path name ".!button"
>>> b
Traceback (most recent call last):
File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode
exec(code, self.locals)
File "<pyshell#24>", line 1, in <module>
NameError: name 'b' is not defined
>>> del a
>>> a
Traceback (most recent call last):
File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode
exec(code, self.locals)
File "<pyshell#26>", line 1, in <module>
NameError: name 'a' is not defined
>>> root.destroy()
>>> root
<tkinter.Tk object .>
>>> del root
>>> root
Traceback (most recent call last):
File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode
exec(code, self.locals)
File "<pyshell#30>", line 1, in <module>
NameError: name 'root' is not defined
>>>
</code></pre>
<p>Quoting <a href="https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/universal.html" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>w.destroy()
Calling w.destroy() on a widget w destroys w and all its children.
</code></pre>
<p>In the above example, although <code>a.destroy()</code> did remove the appearance of the <code>ttk.Button</code> and not let <code>a</code> run other tkinter methods, yet <code>a</code> is still recognised as a <code><tkinter.ttk.Button object .!button></code> object, i.e. the existence of <code>a</code> was not eradicated from the Python Interpreter. Only after deleting <code>a</code> via the <code>del a</code> command caused the <code>NameError: name 'a' is not defined</code> exception when <code>a</code> was called. The same outcome is seen in <code>root.destroy()</code>.</p>
<p>I think the <code>tkinter</code> package maintainers need to include a <code>del w</code> command (here, <code>w</code> refers to any tkinter widget) to its <code>.destroy()</code> method. If not, tkinter users presently need to run a <code>del w</code> command after every <code>w.destroy()</code> method to ensure cleanliness in their Python codes. Is my reasoning sound?</p>
<p>Following the thoughts in my comment, below is an example of a class method eradicating a class attribute:</p>
<pre><code>>>> class App(tk.Frame):
def __init__(self, master, **kw):
super().__init__(master, bg="pink")
self.master = master
self.bn = tk.Button(self, text="Button", bg="cyan", width=10, height=2)
self.bn.grid(row=0, column=0)
def eradicate(self):
del self.bn
>>> root = tk.Tk()
>>> root.geometry("150x150+10+10")
''
>>> app = App(root)
>>> app.grid(row=0, column=0)
>>> root.columnconfigure(0, weight=1)
>>> root.rowconfigure(0, weight=1)
>>> app.eradicate()
>>> app.bn
Traceback (most recent call last):
File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode
exec(code, self.locals)
File "<pyshell#116>", line 1, in <module>
AttributeError: 'App' object has no attribute 'bn'
</code></pre>
| <python><tkinter><tcl> | 2023-09-15 15:25:40 | 1 | 8,499 | Sun Bear |
77,113,551 | 3,261,772 | TRL SFTTrainer - llama2 finetuning on Alpaca - dataset_text_field | <p>I am trying to finetune the Llama2 model using the Alpaca dataset. I have loaded the model in 4-bit and applied the PEFT config to the model for LoRA training. Then I am trying to use TRL's SFTTrainer to fine-tune the model.</p>
<p>The train_dataset is</p>
<pre><code>Dataset({
features: ['instruction', 'input', 'output', 'input_ids', 'attention_mask'],
num_rows: 50002
})
</code></pre>
<p>This is the error that I get:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[28], line 3
1 # Step 8 :Set supervised fine-tuning parameters
2 from transformers import DataCollatorForLanguageModeling
----> 3 trainer = SFTTrainer(
4 model=model,
5 train_dataset=train_data,
6 #eval_dataset=val_data,
7 #peft_config=peft_config,
8 #dataset_text_field="train",
9 max_seq_length=max_seq_length,
10 tokenizer=tokenizer,
11 args=training_arguments,
12 #packing=True,
13 #data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
14 )
ValueError: You passed `packing=False` to the SFTTrainer, but you didn't pass a `dataset_text_field` or `formatting_func` argument.
</code></pre>
<p>If I instead pass <code>packing=True</code>, I get this error:</p>
<pre><code>ValueError: You need to pass a `dataset_text_field` or `formatting_func` argument to the SFTTrainer if you want to use the `ConstantLengthDataset`.
</code></pre>
<p>If I provide the <code>dataset_text_field</code>, which I do not know the correct value for, and try the "train" or "text" keywords, I get this error:</p>
<pre><code>ValueError: the `--group_by_length` option is only available for `Dataset`, not `IterableDataset
</code></pre>
<p>I would appreciate it if someone could help me understand the "dataset_text_field", and where ConstantLengthDataset is set (does it come from packing?). I also tried packing = False while providing dataset_text_field as 'train' and 'text', and those are incorrect.</p>
<p>Based on the documentation:</p>
<pre><code>dataset_text_field (Optional[str]): The name of the text field of the dataset, in case this is passed by a user, the trainer will automatically create a ConstantLengthDataset based on the dataset_text_field argument.
</code></pre>
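For context on what that documentation means: <code>dataset_text_field</code> names a single dataset column that already contains the full training text. The dataset above has no such column (only <code>instruction</code>/<code>input</code>/<code>output</code>), which is why a <code>formatting_func</code> that assembles one is the usual alternative. A hedged sketch follows; the column names are taken from the dataset shown, the prompt template is an assumption, and depending on the TRL version the function may need to accept a batch and return a list of strings:

```python
def formatting_func(example):
    """Build one training string from Alpaca-style columns
    ('instruction', 'input', 'output', as in the dataset above)."""
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}"
    return prompt

# Hypothetical usage (not run here):
# trainer = SFTTrainer(model=model, train_dataset=train_data,
#                      formatting_func=formatting_func, ...)

sample = {"instruction": "Add the numbers.", "input": "2 and 3", "output": "5"}
print(formatting_func(sample))
```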
| <python><large-language-model><llama> | 2023-09-15 15:18:13 | 1 | 1,205 | Hamid K |
77,113,251 | 1,020,139 | How to get the root directory of the project using pytest? | <p>I have defined the following fixture, which removes everything from the path after the root directory (<code>myapp</code>).</p>
<p>This is necessary as <code>pytest</code> changes the current working directory (cwd) when executing the tests (why?).</p>
<p>However, this approach is not great, as it fails whenever more than one <code>myapp</code> entity exists in the path.</p>
<p>How can I get the path of the root directory using <code>pytest</code>?</p>
<pre><code>def _get_project_root_path() -> str:
return re.sub(r"/(myapp)(?:/.*)?", r"/\1", os.getcwd())
@pytest.fixture(scope="session")
def project_root_path():
return _get_project_root_path()
</code></pre>
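Two directions worth sketching, under stated assumptions. pytest itself exposes its computed root as <code>request.config.rootpath</code> (pytest 6.1+); independently, walking up to a marker file avoids the regex and the duplicate-<code>myapp</code> problem entirely. The <code>find_project_root</code> helper and the <code>pyproject.toml</code> marker below are illustrative choices, not part of the question's code:

```python
from pathlib import Path

def find_project_root(start: Path, marker: str = "pyproject.toml") -> Path:
    """Walk upwards from `start` to the first directory containing `marker`."""
    for candidate in [start, *start.parents]:
        if (candidate / marker).exists():
            return candidate
    raise FileNotFoundError(f"no {marker} found above {start}")

# In a conftest.py, pytest's own notion of the root could back the fixture:
# @pytest.fixture(scope="session")
# def project_root_path(request):
#     return request.config.rootpath
```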
| <python><pytest> | 2023-09-15 14:33:58 | 2 | 14,560 | Shuzheng |
77,113,157 | 718,057 | how to assign users during a data migration in django? | <p>I know this seems like it should be a no-brainer and should work without fail, but I am getting a crazy error during a migration which is currently running during pytest initialization.</p>
<p>The error I am getting is:</p>
<pre><code>ValueError: Cannot assign "<User: redshred_default_contact>": "Client.primary_contact" must be a "User" instance.
</code></pre>
<p>When I stop in the debugger on the offending line, I get the following:</p>
<pre class="lang-py prettyprint-override"><code>-> isinstance(default_user, User)
True
-> isinstance(default_user, get_user_model())
True
</code></pre>
<p>My migration is as follows:</p>
<pre class="lang-py prettyprint-override"><code># Generated by Django 4.2.4 on 2023-09-14 18:13
from django.db import migrations
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group, User
from django.conf import settings
def assign_default_client(apps, schema_editor):
Client = apps.get_model("apiserver_v2", "Client")
client = Client.objects.get(slug="default-client")
get_user_model().objects.update(account__client=client)
def create_default_client(apps, schema_editor):
(default_user, _) = User.objects.get_or_create(username="default_contact")
default_user.save()
Client = apps.get_model("apiserver_v2", "Client")
# BUG: this is where the bug is occurring
client = Client.objects.create(
name="Default Client",
slug="default-client",
primary_contact=default_user,
domain="mydomain.com",
)
(group, _) = Group.objects.get_or_create(name="default_contact_group")
client.new_user_group = group
client.save()
class Migration(migrations.Migration):
dependencies = [
("apiserver_v2", "0026_account"),
("authtoken", "0003_tokenproxy"),
]
operations = [
migrations.RunPython(create_default_client, migrations.RunPython.noop),
migrations.RunPython(assign_default_client, migrations.RunPython.noop),
]
</code></pre>
<p>My Client model is:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth import get_user_model
from django.db import models
from shortuuid.django_fields import ShortUUIDField
class Client(models.Model):
"""A Client represents one of our clients.
Clients are used for accounting purposes. All transactions are associated with a client.
"""
id = ShortUUIDField(
help_text="A string identifier for this client",
length=22, max_length=44, primary_key=True
)
primary_contact = models.ForeignKey(
get_user_model(),
related_name="primary_contact_for",
null=True,
on_delete=models.SET_NULL
)
name = models.TextField(
null=False,
blank=False,
help_text="The name of the client.",
)
domain = models.TextField(
null=True,
blank=True,
help_text="New users with this email suffix will be added to this client. Leave blank to allow any. ",
)
new_user_group = models.ForeignKey(
"auth.Group",
related_name="client_for",
null=True,
on_delete=models.SET_NULL,
help_text="The group that new users will be added to when they sign up."
)
slug = models.SlugField(
max_length=55,
blank=False,
unique=True,
help_text="The slug name of this client. Label can be used to generate the slug. Will be used for RESTful lookup.",
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
updated_by = models.ForeignKey(
get_user_model(),
related_name="clients_updated",
null=True,
on_delete=models.SET_NULL,
)
created_by = models.ForeignKey(
get_user_model(),
null=True,
related_name="clients_created",
on_delete=models.SET_NULL,
)
class Meta:
ordering = ["name"]
def __str__(self):
"""Print the name and primary contact"""
return f"{self.name} <{self.primary_contact}>"
</code></pre>
| <python><django><django-rest-framework><pytest><pytest-django> | 2023-09-15 14:21:08 | 1 | 626 | Skaman Sam |
77,113,085 | 13,508,045 | Table reference in formula breaks excel file | <p>I want to use a formula for a cell. Inside the formula I reference a table called <code>tblTest</code>. After running the script I try to open the excel file but I get an error, that there are problems with the file and if it should be restored or not. So, it seems like the problem is because I reference a table.</p>
<h4>Code</h4>
<pre class="lang-py prettyprint-override"><code>import openpyxl
wb = openpyxl.load_workbook("Planung.xlsx")
ws = wb["Zeitplan"]
ws["B6"].value = '=INDEX(FILTER(tblTest,tblTest[Packet]="A"),1)'
wb.save("test.xlsx")
</code></pre>
<h4>Reproduction Steps</h4>
<ul>
<li>Create file called <code>Planung.xlsx</code></li>
<li>Create sheet called <code>Zeitplan</code></li>
<li>Create a second sheet and there create a table called <code>tblTest</code> with a column called <code>Packet</code></li>
<li>Run code</li>
<li>Try open <code>test.xlsx</code> (should output error)</li>
</ul>
<h4>Edit</h4>
<p>What I noticed is that when I set <code>=tblTest</code> as the value, I can open the file, but the cell shows the error <code>#NAME?</code>. Only when I click into the cell and press Enter in the formula bar does it work. It feels like Excel can't find the table even though it's there.</p>
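As a diagnostic sketch (not a fix): openpyxl registers tables per worksheet, so listing <code>ws.tables</code> shows whether the table definition is present from openpyxl's point of view. The sheet and table below are built in memory purely for illustration:

```python
from openpyxl import Workbook
from openpyxl.worksheet.table import Table

wb = Workbook()
ws = wb.active
ws.append(["Packet", "Value"])
ws.append(["A", 1])

# register a table over the data range, as Excel itself would store it
ws.add_table(Table(displayName="tblTest", ref="A1:B2"))
print(list(ws.tables))  # the table names this worksheet knows about
```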
| <python><excel><openpyxl> | 2023-09-15 14:11:14 | 0 | 1,508 | codeofandrin |
77,113,035 | 3,225,109 | Error when passing audio file to curl command for Whisper Open AI model | <p>Here is my <code>main.py</code> file</p>
<pre><code>from flask import Flask, request, jsonify
from transformers import pipeline
app = Flask(__name__)
# Create a pipeline for automatic speech recognition (ASR)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-large")
@app.route('/recognize', methods=['POST'])
def recognize():
try:
# Get the audio file from the request
audio_file = request.files['audio']
# Perform automatic speech recognition on the audio
transcription = asr(audio_file.read())
# Return the transcription as JSON response
response = {"transcription": transcription}
return jsonify(response)
except Exception as e:
return jsonify({"error": str(e)})
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
</code></pre>
<p>I start my app with <code>python3 main.py</code> command.</p>
<pre><code>python3 main.py
* Serving Flask app 'main'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
* Running on http://10.16.4.81:5000
Press CTRL+C to quit
127.0.0.1 - - [15/Sep/2023 11:12:53] "POST /recognize HTTP/1.1" 200 -
127.0.0.1 - - [15/Sep/2023 11:15:01] "POST /recognize HTTP/1.1" 200 -
127.0.0.1 - - [15/Sep/2023 11:15:55] "POST /recognize HTTP/1.1" 200 -
127.0.0.1 - - [15/Sep/2023 11:17:48] "POST /recognize HTTP/1.1" 200 -
127.0.0.1 - - [15/Sep/2023 11:25:05] "POST /recognize HTTP/1.1" 200 -
</code></pre>
<p>Then from another terminal window I ssh to the same server where my app is running and I try to use curl to pass a flac audio file but I get an error.</p>
<pre><code>curl -X POST -F "audio=@/home/ubuntu/speech.flac" http://10.16.4.81:5000/recognize
curl: (26) Failed to open/read local data from file/application
</code></pre>
<p>I tried using a file from the internet but got the same output:</p>
<pre><code>curl -X POST -F "audio=@https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac" http://localhost:5000/recognize
curl: (26) Failed to open/read local data from file/application
</code></pre>
<p>Can someone help me to sort this out please ?</p>
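One observation that may help: curl's <code>@</code> prefix in <code>-F audio=@...</code> only reads local files, so passing a URL after <code>@</code> fails with error 26 just as an unreadable local path would. A remote file would need to be downloaded first (e.g. <code>curl -L -o mlk.flac URL</code>) before being posted. A quick readability check of the same idea, using the paths from the question (on another machine they would differ):

```python
import os

candidates = [
    "/home/ubuntu/speech.flac",  # local path: must exist and be readable
    "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",  # a URL is not a path
]

for path in candidates:
    readable = os.path.exists(path) and os.access(path, os.R_OK)
    print(path, "->", readable)  # curl -F name=@path needs this to be True
```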
| <python><huggingface-transformers><flac> | 2023-09-15 14:04:47 | 1 | 983 | Tomas.R |
77,113,019 | 6,665,586 | Create string enum from IntEnum | <p>I have an IntEnum like this</p>
<pre><code>from enum import IntEnum

class TransactionTypeIntEnum(IntEnum):
UNKNOWN = 0
ADMINISTRATIVE = 1
PAYMENT = 2
TRADE = 3
</code></pre>
<p>I want to create a string enum that has the same names of the IntEnum like this</p>
<pre><code>class TransactionType(Enum):
UNKNOWN = "UNKNOWN"
ADMINISTRATIVE = "ADMINISTRATIVE"
PAYMENT = "PAYMENT"
TRADE = "TRADE"
</code></pre>
<p>The solution I found is to create a method that iterates over the IntEnum elements names</p>
<pre><code>class BaseEnum(Enum):
@classmethod
def from_iterable(cls: Type[TBaseEnum], name: str, values: Iterable[str]) -> Type[TBaseEnum]:
return cls(name, {value: value for value in values})
TBaseEnum = TypeVar("TBaseEnum", bound=BaseEnum, covariant=True)
TransactionType = BaseEnum.from_iterable("TransactionType", [element.name for element in TransactionTypeIntEnum])
</code></pre>
<p>The problem that I found with this approach is that I lost autocompletion of TransactionType elements. To know what are the elements of TransactionType, I have to go into TransactionTypeIntEnum.</p>
<p>Is there a way to create the string enum without losing autocompletion?</p>
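One pragmatic compromise, sketched below: keep the members written out explicitly (so editors can autocomplete them) and enforce that the two enums stay in sync with an import-time assertion or a unit test, instead of generating the class dynamically. The assertion line is the only addition beyond the code already shown:

```python
from enum import Enum, IntEnum

class TransactionTypeIntEnum(IntEnum):
    UNKNOWN = 0
    ADMINISTRATIVE = 1
    PAYMENT = 2
    TRADE = 3

class TransactionType(Enum):
    UNKNOWN = "UNKNOWN"
    ADMINISTRATIVE = "ADMINISTRATIVE"
    PAYMENT = "PAYMENT"
    TRADE = "TRADE"

# fails loudly at import time if the two enums ever drift apart
assert {m.name for m in TransactionType} == {m.name for m in TransactionTypeIntEnum}
```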
| <python><enums> | 2023-09-15 14:02:38 | 1 | 1,011 | Henrique Andrade |
77,112,968 | 2,123,706 | Remove numerics in column prior to value_counts | <p>I have a DataFrame of strings. I want a quick summary of the string columns, and am using describe as below:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col1':['1','2','C','T','A','00400'],
'col2':['3241','H2','C8','T4','123','0000']})
df['col1'].value_counts()
pd.to_numeric(df['col1'], errors='coerce').describe()
</code></pre>
<p>but I also want to get a value count of the non numerics. How can I do that?</p>
<p>In this case, <code>df['col1'].value_counts(non_numerics)</code> would yield:</p>
<pre><code>A 1
C 1
T 1
</code></pre>
<p>As my data has 1m rows, I would like to remove the numerics prior to completing the value counts.</p>
<p>Any suggestions?</p>
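One way to do that, sketched on the question's own sample data: coerce once with <code>errors='coerce'</code> and reuse the resulting NaN mask to select the non-numeric rows before counting, so the 1m-row column is only coerced a single time:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['1', '2', 'C', 'T', 'A', '00400'],
                   'col2': ['3241', 'H2', 'C8', 'T4', '123', '0000']})

as_num = pd.to_numeric(df['col1'], errors='coerce')
non_numeric_counts = df.loc[as_num.isna(), 'col1'].value_counts()
print(non_numeric_counts)            # C, T, A each appear once
numeric_summary = as_num.describe()  # the numeric side, as in the question
```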
| <python><pandas><count> | 2023-09-15 13:56:16 | 3 | 3,810 | frank |
77,112,921 | 10,722,752 | how to get a function to return multiple plots as output | <p>So I am performing time series forecasting using XGBoost. I am doing cross validation with TimeSeriesSplit, splitting my data into 6 folds.</p>
<p>The requirement is I need to write a function that can output 6 plots, one for each fold.</p>
<p>Sample Data:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame({'date' : pd.date_range(start = '2022-01-01', end = '2023-06-30'),
'actual' : np.random.randint(0, 10, size = 546),
'predicted' : np.random.randint(0, 14, size = 546)})
df.set_index('date', inplace = True)
</code></pre>
<p>I can generate the Actual vs Predicted plot for 1 fold using:</p>
<pre><code>ax = df.loc[(df.index > '2022-05-31') & (df.index < '2022-07-1')]['actual'].plot(figsize = (15, 5), title = 'Actual vs Pred')
df.loc[(df.index > '2022-05-31') & (df.index < '2022-07-1')]['predicted'].plot(style = '.')
plt.legend(['Actual', 'Predicted'])
plt.show()
</code></pre>
<p>or for two date ranges as below:</p>
<pre><code>bool1 = (df.index > '2022-05-31') & (df.index < '2022-07-1')
bool2 = (df.index > '2022-10-31') & (df.index < '2022-12-1')
ax = df.loc[bool1]['actual'].plot(figsize = (15, 5), title = 'Actual vs Pred')
df.loc[bool1]['predicted'].plot(style = '.')
plt.legend(['Actual', 'Predicted'])
plt.show()
ax = df.loc[bool2]['actual'].plot(figsize = (15, 5), title = 'Actual vs Pred')
df.loc[bool2]['predicted'].plot(style = '.')
plt.legend(['Actual', 'Predicted'])
plt.show()
</code></pre>
<p>But I need to write a function that can output 6 such plots through a for loop because every time the function has to calculate the <code>predicted</code> column using <code>XGBoost</code> on the particular cross validation fold split by the <code>TimeSeriesSplit</code>.</p>
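A sketch of such a function, using synthetic data in place of the real folds. The XGBoost fit/predict step is only a placeholder comment, since it depends on the user's pipeline, and the two date windows stand in for the six <code>TimeSeriesSplit</code> boundaries:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def plot_fold(df, start, end, title="Actual vs Pred"):
    """Return one Actual-vs-Predicted figure for the given date window."""
    fold = df.loc[(df.index > start) & (df.index < end)]
    fig, ax = plt.subplots(figsize=(15, 5))
    fold["actual"].plot(ax=ax, title=title)
    fold["predicted"].plot(ax=ax, style=".")
    ax.legend(["Actual", "Predicted"])
    return fig

rng = np.random.default_rng(0)
df = pd.DataFrame(
    {"actual": rng.integers(0, 10, 546), "predicted": rng.integers(0, 14, 546)},
    index=pd.date_range("2022-01-01", "2023-06-30"),
)

# one figure per fold; with TimeSeriesSplit, each (start, end) would come from
# a split, and df["predicted"] would be refit with XGBoost before plotting
windows = [("2022-05-31", "2022-07-01"), ("2022-10-31", "2022-12-01")]
figs = [plot_fold(df, s, e) for s, e in windows]
print(len(figs))  # 2
```

Returning the figure objects (rather than calling <code>plt.show()</code> inside the function) keeps the function usable both in loops and in notebooks.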
| <python><pandas><matplotlib> | 2023-09-15 13:48:45 | 1 | 11,560 | Karthik S |
77,112,878 | 5,552,507 | Annotation in a plotly polar plot | <p>I'm trying to add annotation arrows to a plotly polar plot, but doing so produces another, underlying cartesian plot:</p>
<p><a href="https://i.sstatic.net/ahPxw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ahPxw.png" alt="arrow added to cartesian plot not polar" /></a></p>
<p><strong>Question 1:</strong> is there a way to add annotations to polar plots? This page from 2019 suggests not; is that still the current status?: <a href="https://community.plotly.com/t/adding-annotations-to-polar-scatterplots/704" rel="nofollow noreferrer">https://community.plotly.com/t/adding-annotations-to-polar-scatterplots/704</a></p>
<p><strong>Question 2:</strong> is there a hack to produce arrows on a polar plot? (e.g. overlay an aligned, transparent cartesian plot and hide its axes? low-level drawing of arrows as lines directly in the polar plot?)</p>
<p>The code producing the above image is here:</p>
<pre><code>import plotly.express as px
import plotly.graph_objects as go
import pandas as pd
df = pd.DataFrame()
fig = px.scatter_polar(
df,
theta=None,
r=None,
range_theta=[-180, 180],
start_angle=0,
direction="counterclockwise",
template="plotly_white",
)
x_end = [1, 2, 2]
y_end = [3, 5, 4]
x_start = [0, 1, 3]
y_start = [4, 4, 4]
list_of_all_arrows = []
for x0, y0, x1, y1 in zip(x_end, y_end, x_start, y_start):
arrow = go.layout.Annotation(dict(
x=x0,
y=y0,
xref="x", yref="y",
text="",
showarrow=True,
axref="x", ayref='y',
ax=x1,
ay=y1,
arrowhead=3,
arrowwidth=1.5,
arrowcolor='rgb(255,51,0)', )
)
list_of_all_arrows.append(arrow)
fig.update_layout(annotations=list_of_all_arrows)
fig.show()
</code></pre>
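One possible hack for Question 2: keep the annotations but place them in <em>paper</em> coordinates (<code>xref='paper'</code>/<code>yref='paper'</code>), converting each polar point into the unit square the polar subplot occupies. The converter below is a sketch: it assumes the polar axes form a circle centred at (0.5, 0.5) filling the figure, so with margins or a custom subplot <code>domain</code> the centre and radius would need adjusting:

```python
import math

def polar_to_paper(r, theta_deg, r_max, cx=0.5, cy=0.5, radius=0.5):
    """Map a polar point (r, theta in degrees) to figure ('paper') coordinates,
    assuming the polar subplot is a circle centred at (cx, cy) with the given
    radius in paper units."""
    rad = math.radians(theta_deg)
    frac = r / r_max
    return cx + radius * frac * math.cos(rad), cy + radius * frac * math.sin(rad)

x, y = polar_to_paper(r=5, theta_deg=90, r_max=5)
# These values could then feed go.layout.Annotation(x=x, y=y,
# xref='paper', yref='paper', ...) with pixel-offset arrow tails.
print(x, y)
```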
<p><strong>Edit:</strong></p>
<p>In the end, I would like to have something like this, with annotations positioned relative to the polar plot:
<a href="https://i.sstatic.net/GxPOS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GxPOS.png" alt="enter image description here" /></a></p>
| <python><plotly><polar-coordinates><polar-plot> | 2023-09-15 13:44:17 | 2 | 307 | PiWi |
77,112,728 | 8,869,570 | Is there a way to use os.path.exists with a wildcard string? | <p>I have a directory <code>dir</code> and inside the directory there may be files with filenames of the form:</p>
<pre><code>YYYY-MM-DD-data.db
</code></pre>
<p>where <code>YYYY-MM-DD</code> can be any year, month, day.</p>
<p>Usually I use <code>os.path.exists</code> to check for the existence of a path, but in this case I need to check if any of such files exist. I don't think <code>os.path.exists</code> is directly compatible with a wildcard match? Can someone advise on how to do this?</p>
<p>My current approach is:</p>
<pre><code>filename_pattern = "*-data.db"
matching_files = glob.glob(os.path.join("/path/to/dir", filename_pattern))
assert matching_files
</code></pre>
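The glob approach works; if only existence matters (not the full list of matches), <code>glob.iglob</code> with <code>next()</code> stops at the first hit instead of materialising every match. A small sketch (the helper name is illustrative):

```python
import glob
import os

def any_matching(dirpath: str, pattern: str = "*-data.db") -> bool:
    """True if at least one file in dirpath matches pattern."""
    return next(glob.iglob(os.path.join(dirpath, pattern)), None) is not None
```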
| <python> | 2023-09-15 13:21:58 | 1 | 2,328 | 24n8 |
77,112,343 | 2,051,392 | Wordpress won't run a python script but will execute shell commands | <p>Tried to keep it as simple as possible to understand what the problem is</p>
<p>Test.py</p>
<pre><code>print('It works')
</code></pre>
<p>TestPage.php</p>
<pre><code> /*
* Template Name: My Template
* Does stuff
* @package PackageName
* @author Author
* @copyright Copyright
*/
?>
<?php get_header(); ?>
<?php
$RESULT=shell_exec('python3 ./Test.py');
echo 'Result:'.$RESULT;
?>
<?php get_footer(); ?>
</code></pre>
<p><strong>What have I tried</strong></p>
<ul>
<li><p>To rule out permissions as the issue, I set both files to <code>777</code> and made the www-data user their owner and group</p>
</li>
<li><p>When I run <code>php TestPage.php</code> I get the output <code>Result:It works</code>.
However, when I load the template in WordPress, all I get is <code>Result:</code> as the output</p>
</li>
<li><p>I also tried putting in the fully qualified directory path to both python3 and the python file</p>
</li>
<li><p>Where it gets really interesting is if I instead do this in the php <code>echo shell_exec('python3 -c "print(\'It works\')"')</code> or any other shell command, it will execute as expected. So it's as if there's a block on running actual files</p>
</li>
</ul>
<p><strong>Is there a specific WordPress or Apache setting required to allow it to run Python files?</strong> All of the examples I see online are just as simple as mine.</p>
| <python><php><wordpress> | 2023-09-15 12:23:47 | 1 | 9,875 | DotNetRussell |
77,112,120 | 329,829 | How to get non expanded matrix from sympy | <p>I have this sympy code:</p>
<pre><code>import sympy as sp

# Define matrices and vectors using the more straightforward approach
A_matrix = sp.MatrixSymbol('A', 3, 2).as_explicit()
x_vector = sp.MatrixSymbol('x', 2, 1).as_explicit()
y_vector = sp.MatrixSymbol('y', 3, 1).as_explicit()
# Define the summation index symbol
i = sp.Symbol('i', integer=True)
# Redefine the function using the sum notation
f_sum = sp.Sum((A_matrix * x_vector - y_vector)[i, 0]**2, (i, 0, 2))
# Compute the gradient using the sum notation
gradient_f_sum = [f_sum.diff(x_vector[j, 0]) for j in range(2)]
# Simplify the gradient expression
gradient_f_sum_simplified = [sp.simplify(expr) for expr in gradient_f_sum]
gradient_f_sum_simplified
</code></pre>
<p>which gives me this as output:</p>
<p><a href="https://i.sstatic.net/f8BBI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f8BBI.png" alt="enter image description here" /></a></p>
<p>How do I get the non-expanded form of this equation? It should be <code>2*A.T*(A*x_vector - y_vector)</code>.</p>
<hr />
<p>Update 1:</p>
<p>Removing the <code>.as_explicit</code>:</p>
<pre><code># Define matrices and vectors using the more straightforward approach
A_matrix = sp.MatrixSymbol('A', 3, 2)
x_vector = sp.MatrixSymbol('x', 2, 1)
y_vector = sp.MatrixSymbol('y', 3, 1)
# Define the summation index symbol
i = sp.Symbol('i', integer=True)
# Redefine the function using the sum notation
f_sum = sp.Sum((A_matrix * x_vector - y_vector)[i, 0]**2, (i, 0, 2))
# Compute the gradient using the sum notation
gradient_f_sum = [f_sum.diff(x_vector[j, 0]) for j in range(2)]
# Simplify the gradient expression
gradient_f_sum_simplified = [sp.simplify(expr) for expr in gradient_f_sum]
gradient_f_sum_simplified
</code></pre>
<p><a href="https://i.sstatic.net/dfAuw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dfAuw.png" alt="enter image description here" /></a></p>
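A possibly simpler route, sketched below: with the <code>.as_explicit()</code> calls dropped, SymPy can differentiate the whole matrix expression symbolically, which keeps the result in unexpanded matrix form (mathematically <code>2*A.T*(A*x - y)</code>) rather than element-wise sums. The numeric substitution at the end is only there to illustrate the identity:

```python
import sympy as sp

A = sp.MatrixSymbol('A', 3, 2)
x = sp.MatrixSymbol('x', 2, 1)
y = sp.MatrixSymbol('y', 3, 1)

residual = A * x - y
f = residual.T * residual   # 1x1 matrix expression: ||Ax - y||^2
grad = f.diff(x)            # stays in matrix form, e.g. 2*A.T*(A*x - y)
print(grad)
```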
| <python><sympy> | 2023-09-15 11:47:45 | 1 | 5,232 | Olivier_s_j |
77,111,993 | 714,122 | Why is pip install libvirt-python failing and saying I need libvirt installed, even though libvirt is installed? | <p>I am running RHEL 9.2 with Python v3.9.16, and I am trying to install the <a href="https://pypi.org/project/libvirt-python/" rel="nofollow noreferrer">libvirt-python</a> Python package. I have tried installing using pip and pipenv, and I get the same error every time.</p>
<p>I have the libvirt and KVM packages installed (via dnf) and working. I am able to interact with VMs with virsh through the cli.</p>
<p>I have read the other similar posts on stack, and those solutions didn't work for me. Any suggestions on what might be the issue?</p>
<pre><code>$ pip install libvirt-python
Defaulting to user installation because normal site-packages is not writeable
Collecting libvirt-python
Downloading libvirt-python-9.7.0.tar.gz (132 kB)
ββββββββββββββββββββββββββββββββββββββββ 132.1/132.1 kB 6.0 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: libvirt-python
Building wheel for libvirt-python (pyproject.toml) ... error
error: subprocess-exited-with-error
Γ Building wheel for libvirt-python (pyproject.toml) did not run successfully.
β exit code: 1
β°β> [51 lines of output]
running bdist_wheel
running build
running build_py
Package 'libvirt' was not found
Traceback (most recent call last):
File "/home/bill/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/bill/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/bill/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 434, in build_wheel
return self._build_with_temp_dir(
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 419, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 284, in <module>
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-cx5clem5/normal/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 364, in run
self.run_command("build")
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 131, in run
self.run_command(cmd_name)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/tmp/pip-build-env-cx5clem5/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "<string>", line 166, in run
File "<string>", line 35, in check_minimum_libvirt_version
File "/usr/lib64/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pkg-config', '--print-errors', '--atleast-version=0.9.11', 'libvirt']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for libvirt-python
Failed to build libvirt-python
ERROR: Could not build wheels for libvirt-python, which is required to install pyproject.toml-based projects
</code></pre>
<p>Here are the libvirt related packages I have installed:</p>
<pre><code>$ sudo dnf list | grep libvirt
libvirt.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-client.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-config-network.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-config-nwfilter.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-interface.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-network.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-nodedev.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-nwfilter.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-qemu.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-secret.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-core.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-disk.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-iscsi.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-logical.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-mpath.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-rbd.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-driver-storage-scsi.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-daemon-kvm.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-dbus.x86_64 1.4.1-5.el9 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-glib.x86_64 4.0.0-3.el9 @rhel-9-for-x86_64-appstream-eus-rpms
libvirt-libs.x86_64 9.0.0-10.3.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
pcp-pmda-libvirt.x86_64 6.0.1-5.el9_2 @rhel-9-for-x86_64-appstream-eus-rpms
python3-libvirt.x86_64 9.0.0-1.el9 @rhel-9-for-x86_64-appstream-eus-rpms
fence-virtd-libvirt.x86_64 4.10.0-43.el9 rhel-9-for-x86_64-appstream-eus-rpms
fence-virtd-libvirt.x86_64 4.10.0-43.el9 rhel-9-for-x86_64-appstream-rpms
libvirt-nss.x86_64 9.0.0-10.3.el9_2 rhel-9-for-x86_64-appstream-eus-rpms
libvirt-nss.x86_64 9.0.0-10.3.el9_2 rhel-9-for-x86_64-appstream-rpms
</code></pre>
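The last step of the traceback is the giveaway: <code>pkg-config ... libvirt</code> fails, meaning the libvirt <em>development</em> files are missing even though the runtime packages are installed; none of the packages in the dnf list above is <code>libvirt-devel</code>. A hedged sketch of the usual fix on RHEL 9 (package names assume the standard repositories):

```shell
# Install the headers and build tooling the source build needs
sudo dnf install libvirt-devel python3-devel gcc pkgconf-pkg-config

# Sanity check: this is the exact probe the failing build runs
pkg-config --print-errors --atleast-version=0.9.11 libvirt

pip install libvirt-python
```

Note that the already-installed <code>python3-libvirt</code> package in the list above provides the same bindings for the system interpreter, which may make the pip build unnecessary.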
| <python><rhel><kvm><libvirt><rhel9> | 2023-09-15 11:30:22 | 0 | 1,135 | Dan Largo |
77,111,961 | 10,101,636 | Identify Duplicate and Non-Dup records in a dataframe | <p>I have a dataframe which contains both duplicate and distinct records. I need to identify which records are duplicates and which are distinct, and split them into 2 separate dataframes.</p>
<p>Input:</p>
<pre><code>custid | cust_name | loc | prodid
1234 | John | US | P133
1234 | John | US | P133
1234 | John | US | P133
5678 | Mike | CHN | P456
4325 | Peter | RUS | P247
3458 | Andy | IND | P764
3458 | Andy | IND | P764
</code></pre>
<p>Output:
DF 1 (Dups):</p>
<pre><code>custid | cust_name | loc | prodid
1234 | John | US | P133
1234 | John | US | P133
1234 | John | US | P133
3458 | Andy | IND | P764
3458 | Andy | IND | P764
</code></pre>
<p>DF2 (Non Dups):</p>
<pre><code>custid | cust_name | loc | prodid
5678 | Mike | CHN | P456
4325 | Peter | RUS | P247
</code></pre>
<p>Can someone please help.</p>
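The question is tagged pyspark, but the splitting logic is easiest to see in pandas, where <code>duplicated(keep=False)</code> flags <em>every</em> row of a duplicated group; the sketch below reproduces the sample data. In PySpark a similar split can be built by counting rows per group over all columns (e.g. <code>groupBy(*df.columns).count()</code> plus a join), though that variant is not shown here:

```python
import pandas as pd

df = pd.DataFrame({
    "custid":    [1234, 1234, 1234, 5678, 4325, 3458, 3458],
    "cust_name": ["John", "John", "John", "Mike", "Peter", "Andy", "Andy"],
    "loc":       ["US", "US", "US", "CHN", "RUS", "IND", "IND"],
    "prodid":    ["P133", "P133", "P133", "P456", "P247", "P764", "P764"],
})

# keep=False marks every member of a duplicated group, not only the repeats
is_dup = df.duplicated(keep=False)
dups = df[is_dup]        # DF 1: all rows that occur more than once
non_dups = df[~is_dup]   # DF 2: rows that occur exactly once
print(len(dups), len(non_dups))  # 5 2
```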
| <python><dataframe><apache-spark><pyspark> | 2023-09-15 11:25:43 | 4 | 403 | Matthew |
77,111,916 | 403,425 | Django model validation which uses a ManyToMany field | <p>I have a model like this:</p>
<pre class="lang-py prettyprint-override"><code>class Offer(models.Model):
condition_products = models.ManyToManyField(
"shop.Product",
blank=True,
help_text="User needs to own at least one of these products.",
)
condition_owned_after = models.DateField(
blank=True,
null=True,
help_text="User needs to own at least one products after this date.",
verbose_name="Owned after date",
)
</code></pre>
<p>It's for a webshop where we can create offers with certain conditions. One of the conditions is that the user needs to already own one or more products, and a second condition is that the product needs to be owned after a certain date.</p>
<p>I want to add validation to the model so that <code>condition_owned_after</code> can only be filled in when <code>condition_products</code> is not empty. But when I try to access <code>condition_products</code> in the <code>clean</code> method I get a ValueError from Django.</p>
<pre class="lang-py prettyprint-override"><code>class Offer(models.Model):
# ...
def clean(self):
if self.condition_owned_after and not self.condition_products.all():
raise ValidationError(
{
"condition_owned_after": "Cannot set condition_owned_after without also setting condition_products"
}
)
def save(self, *args, **kwargs):
self.full_clean()
return super().save(*args, **kwargs)
</code></pre>
<p>This results in the following error:</p>
<p><code>ValueError: "<Offer: Offer object (None)>" needs to have a value for field "id" before this many-to-many relationship can be used</code></p>
<p>I understand the error, but I don't understand how to work around it. How can I validate my model, when the validation depends on the value of this M2M field?</p>
<p>I know that I can create a <code>ModelForm</code> and assign it to the <code>ModelAdmin</code> and do the validation there, so at least adding items via the admin UI is validated. But that doesn't validate models getting created or saved outside of the admin UI of course.</p>
| <python><django><django-models><django-validation> | 2023-09-15 11:19:50 | 0 | 5,828 | Kevin Renskers |
77,111,904 | 7,128,910 | Azure http function times out when triggered by other function but works when triggered directly | <p>I have an Azure http function that almost always times out when called by another Azure function but always works when called directly. More info:</p>
<p>I have two Azure functions in Python:</p>
<ul>
<li>a time triggered (scheduled) function</li>
<li>an http triggered function</li>
</ul>
<p>The time triggered function fetches a list of items and then calls the http triggered function for each of the items, where the item is passed as a string argument. It uses python <em>requests</em> and the function URL of the http triggered function. The http triggered function then runs some calculations that take 1-2 seconds.</p>
<p>If the time triggered function runs, it tries to call the http triggered function, but 95% of the time the http function times out with a <code>504.0 GatewayTimeout</code> after 5 minutes and doesn't return anything, causing the time triggered function to time out as well (since more than 5 minutes have passed). Weirdly enough, sometimes it seems to work even though the parameters didn't change.</p>
<p>If I call the http function directly through the Azure portal or via python/curl using the <em>exact same</em> parameters (i.e. the exact same string argument) and the <em>exact same</em> function URL it always works and returns values after 1-2 seconds, leading me to think that there is no error in my code that prevents function execution but rather something else.</p>
<p>Any ideas what might be causing this problem and what steps I could take to try to fix it?</p>
<p>Edit:</p>
<p>Time triggered function (shortened):</p>
<pre><code>import datetime
import logging
import azure.functions as func
import requests
import os
import time
def main(mytimer: func.TimerRequest) -> None:
utc_timestamp = (
datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
)
items_without_data = ["item1", "item2"]
cloud_function_url = os.environ.get("MY_CF_URL")
for i, item in enumerate(items_without_data):
r = requests.post(
cloud_function_url,
json={"item_id": item},
timeout=300,
)
if r.status_code != 200:
logging.error(
f"cloud function call failed. {r.text}, for item {item}"
)
# make sure we don't hit the rate limit of 40 calls per minute
if i % 19 == 0:
time.sleep(60)
</code></pre>
<p>HTTP triggered function:</p>
<pre><code>import azure.functions as func
import logging
import json
import SomeOtherFunctionThatCallsAnExternalAPI
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info("CF processed a request.")
item_id = req.params.get("item_id")
if not item_id:
try:
req_body = req.get_json()
except ValueError:
pass
else:
item_id = req_body.get("item_id")
logging.info(f"Request for item {item_id}")
if not item_id:
logging.error("Error: item_id missing")
return func.HttpResponse(
"required parameter item_id is missing", status_code=400
)
logging.info(f"Getting XXX for item {item_id}")
IDD = SomeOtherFunctionThatCallsAnExternalAPI()
response_data, error = IDD.get_xxx(
item_id=item_id, write_to_db=True
)
if not response_data:
logging.error(f"Error: {error}")
return func.HttpResponse(error, status_code=500)
logging.info(f"Processed request for item {item_id}")
return func.HttpResponse(
json.dumps(response_data), mimetype="application/json", status_code=200
)
</code></pre>
| <python><azure><azure-functions> | 2023-09-15 11:17:16 | 1 | 1,261 | Nik |
77,111,774 | 22,466,650 | How to hide horizontal monotonic sequence of numbers? | <p>My input is <code>df</code>:</p>
<pre><code> COLUMN_1 COLUMN_2 COLUMN_3 COLUMN_4
0 0 1 0 2
1 1 1 2 3
2 1 2 3 2
3 1 2 4 5
4 4 5 8 8
</code></pre>
<p>And I wish to hide (horizontally, from the left up to, but not including, the right end of the run) monotonic sequences with a difference equal to 1. For example, if a row contains <code>[4, 5, 8, 8]</code> (like the last one), the concerned sequence is <code>[4, 5)</code>, so we need to hide the number <code>4</code> with an empty string.</p>
<p>My expected output is this :</p>
<pre><code> COLUMN_1 COLUMN_2 COLUMN_3 COLUMN_4
0 1 0 2
1 1 3
2 3 2
3 2 5
4 5 8 8
</code></pre>
<p>Explanation :</p>
<p><a href="https://i.sstatic.net/mdyQl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdyQl.png" alt="enter image description here" /></a></p>
<p>I tried the code below, but I don't think I'm on the right track since I get a weird boolean dataframe.</p>
<pre><code>df.diff(axis=1).eq(1).iloc[:, ::-1].cummax(axis=1).replace(True, '').iloc[:, ::-1]
</code></pre>
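To make the rule concrete, here is a sketch consistent with the expected output above: a cell is hidden exactly when its right-hand neighbour is one greater than it (the last column is therefore always kept):

```python
import pandas as pd

df = pd.DataFrame({
    "COLUMN_1": [0, 1, 1, 1, 4],
    "COLUMN_2": [1, 1, 2, 2, 5],
    "COLUMN_3": [0, 2, 3, 4, 8],
    "COLUMN_4": [2, 3, 2, 5, 8],
})

# Keep a cell only when its right-hand neighbour is NOT exactly one greater;
# otherwise replace it with an empty string.
out = df.where(df.shift(-1, axis=1).ne(df + 1), "")
print(out)
```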
| <python><pandas> | 2023-09-15 10:57:16 | 2 | 1,085 | VERBOSE |
77,111,770 | 6,734,243 | How to find subdependency version? | <p>In the pydata-sphinx-theme, we realized that Sphinx version is pinned to 5.3.0 when the latest one is 7.0.2 (<a href="https://github.com/pydata/pydata-sphinx-theme/issues/1441" rel="nofollow noreferrer">https://github.com/pydata/pydata-sphinx-theme/issues/1441</a>).</p>
<p>We checked that we are not pinning it in our own dependency list:</p>
<pre class="lang-ini prettyprint-override"><code>[project.optional-dependencies]
doc = [
"numpydoc",
"myst-nb",
"linkify-it-py", # for link shortening
"rich",
"sphinxext-rediraffe",
"sphinx-sitemap",
"sphinx-autoapi",
# For examples section
"ablog>=0.11.0rc2",
"jupyter_sphinx",
"pandas",
"plotly",
"matplotlib",
"numpy",
"xarray",
"sphinx-copybutton",
"sphinx-design",
"sphinx-togglebutton",
"jupyterlite-sphinx",
"sphinxcontrib-youtube",
"sphinx-favicon>=1.0.1",
# Install nbsphinx in case we want to test it locally even though we can't load
# it at the same time as MyST-NB.
"nbsphinx",
"ipyleaflet",
"colorama",
]
</code></pre>
<p>How can I pinpoint which of our dependencies has a hard upper bound?</p>
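One stdlib-only way to locate the culprit is to scan the metadata of every installed distribution for a requirement on the target project (the third-party pipdeptree tool offers a reverse dependency view as well); <code>who_requires</code> is a hypothetical helper name:

```python
import re
from importlib import metadata

def who_requires(target: str) -> dict:
    """Map each installed distribution to the requirement it declares on `target`."""
    hits = {}
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # A requirement looks like "sphinx>=4,<5.4" or "sphinx; extra == 'doc'":
            # split off everything after the bare project name.
            name = re.split(r"[\s;<>=!~\[(]", req, maxsplit=1)[0]
            if name.lower() == target.lower():
                hits[dist.metadata["Name"]] = req
    return hits

print(who_requires("sphinx"))
```

Any distribution whose requirement string carries a <code>&lt;</code> or <code>==</code> on Sphinx is the one pinning it.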
| <python><dependencies> | 2023-09-15 10:56:51 | 1 | 2,670 | Pierrick Rambaud |
77,111,647 | 2,011,659 | Why does default argument have a value that is not given explicitly? | <p>I have this minimal example:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
def __init__(self, args = []):
print(args)
self.data = args
for _ in range(2):
foo = Foo()
foo.data.append(True)
</code></pre>
<p>I would expect that variable <code>foo</code> gets assigned a fresh <code>Foo</code> instance with empty data on every loop execution. Instead it outputs:</p>
<pre><code>[]
[True]
</code></pre>
<p>Why does the second call to <code>Foo</code>'s constructor receive an <code>args</code> that is not <code>[]</code>?</p>
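For comparison, the variant I expected to be equivalent uses <code>None</code> as a sentinel, so a fresh list is created on every call instead of reusing the single list object bound to the default:

```python
class Foo:
    def __init__(self, args=None):
        # `[]` as a default would be evaluated once, at function definition
        # time; with the None sentinel a fresh list is created per call.
        self.data = [] if args is None else args

foos = [Foo() for _ in range(2)]
foos[0].data.append(True)
```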
| <python> | 2023-09-15 10:34:27 | 0 | 1,249 | user2011659 |
77,111,621 | 6,884,119 | Create Google Calendar invites using service account securely | <p>I created a service account using my Enterprise Google Workspace account and I need to automate calendar invites creation using Python.</p>
<p>I have added the service account email into the calendar's shared people and when I try to get the calendar using <code>service.calendarList().list().execute()</code> it works fine.</p>
<p>However, if I try to send an invite it doesn't work, and I get the error below:</p>
<p><code>googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/calendar/v3/calendars/xxxxxx%group.calendar.google.com/events?alt=json returned "You need to have writer access to this calendar.". Details: "[{'domain': 'calendar', 'reason': 'requiredAccessLevel', 'message': 'You need to have writer access to this calendar.'}]"></code></p>
<p>Looking at the <a href="https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority" rel="nofollow noreferrer">docs</a> I found that I need to delegate domain-wide authority to the service account for this to work, but my company isn't allowing this service account to have this type of access because of security issues.</p>
<p>So I wanted to know if there is any way that I could do this without delegating domain-wide access to this service account? Delegating domain-wide access lets me impersonate anyone in the domain, so it's a security issue. I want the service account to be able to impersonate just the parent Google account from where it was created.</p>
<p>Below is the full code that I used to get the calendar and create the invite</p>
<pre><code>from google.oauth2 import service_account
from googleapiclient.discovery import build
class GoogleCalendar:
SCOPES = [
"https://www.googleapis.com/auth/calendar",
"https://www.googleapis.com/auth/calendar.events",
]
def __init__(self, credentials, calendar_id) -> None:
credentials = service_account.Credentials.from_service_account_file(
credentials, scopes=self.SCOPES
)
self.service = build("calendar", "v3", credentials=credentials)
self.id = calendar_id
def get_calendar_list(self):
return self.service.calendarList().list().execute()
def add_calendar(self):
entry = {"id": self.id}
return self.service.calendarList().insert(body=entry).execute()
def create_invite(self):
event = {
"summary": "Google I/O 2015",
"location": "800 Howard St., San Francisco, CA 94103",
"description": "A chance to hear more about Google's developer products.",
"start": {
"dateTime": "2023-09-16T09:00:00-07:00",
"timeZone": "America/Los_Angeles",
},
"end": {
"dateTime": "2023-09-16T17:00:00-07:00",
"timeZone": "Indian/Mauritius",
},
"attendees": [{"email": "myemail@domain.com"}],
}
event = self.service.events().insert(calendarId=self.id, body=event).execute()
work_cal_id = "xxxxx@group.calendar.google.com"
cal = GoogleCalendar(
credentials="work.json",
calendar_id=work_cal_id
)
cal.add_calendar()
print(cal.get_calendar_list())
cal.create_invite()
</code></pre>
| <python><google-cloud-platform><google-calendar-api><google-api-python-client><service-accounts> | 2023-09-15 10:29:50 | 2 | 2,243 | Mervin Hemaraju |
77,111,486 | 5,422,354 | How to plot a tree in Python with node labels? | <p>I would like to reproduce the following figure using any tool in Python:</p>
<p><a href="https://i.sstatic.net/eVhQS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVhQS.png" alt="A tree with node labels" /></a></p>
<p>(This is a part of figure 5 page 75 in [^Furnival1974] that I created using Inkscape).</p>
<p>I would like not only to reproduce this tree, but also to draw other trees programmatically in Python.
Using <a href="http://etetoolkit.org/" rel="nofollow noreferrer">ETE</a>, I wrote this:</p>
<pre class="lang-py prettyprint-override"><code>from ete3 import Tree
t = Tree("(12345, (1234, (1235, (123)), (1245, (124, 125, (12))), (1345, (134, (135, (13))), 145), 2345));")
t.show()
</code></pre>
<p>which produces:</p>
<p><a href="https://i.sstatic.net/rINSo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rINSo.png" alt="A tree with ETE" /></a></p>
<p>The fact that the tree is horizontal is not the issue. Another issue is that I was not able to set the labels: the input argument is a string, and a single string cannot define both the node and its label. The main problem for me is that the nodes have no labels. The tool seems to know only about leaves, while I need labels on internal nodes as well. How can I create the required tree using Python?</p>
<p>[^Furnival1974]: Furnival, G. M., & Wilson, R. W. (1974). Regression by leaps and bounds. Technometrics. 16:499-511.</p>
| <python><plot><tree> | 2023-09-15 10:08:37 | 1 | 1,161 | Michael Baudin |
77,111,482 | 4,245,446 | Calling a common function across multiple files in a directory in Python | <p>I have a python project with the following structure</p>
<pre><code>- my-project/
- my_project/
- __init__.py
- modules_charts/
- chart_a.py
- chart_b.py
- ...
- main.py
- poetry.lock
- pyproject.toml
- README.md
</code></pre>
<p>My goal is to have a dynamic <code>modules_charts</code> folder that contains an unbounded list of Python files with non-standard naming. Each file contains the same function called <code>create_chart()</code>.</p>
<p>I want to be able to call all those <code>create_chart()</code> functions from the <code>main.py</code> script.</p>
<p>I want to avoid updating <code>main.py</code> every time I add a new file within the <code>modules_charts</code> folder.</p>
<p>Until now I was using the following code in <code>main.py</code>:</p>
<pre><code>for file_name in sorted(os.listdir("modules_charts")):
if not file_name.endswith(".py"):
continue
module_name = file_name[:-3]
module = import_module("." + module_name, package="modules_charts")
    module.create_chart()
</code></pre>
<p>I have 2 concerns about this working solution:</p>
<ol>
<li>It generates an issue if I am running my script with <code>python my_project/main.py</code></li>
<li>There might be another way of doing it by using <code>import</code> and <code>__init__.py</code></li>
</ol>
<p>What would be the proper way to achieve this so that it works regardless of the directory from which I run my script?</p>
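One hedged approach is to resolve the package through its import name (via <code>__path__</code>) instead of through the working directory; <code>call_in_all_submodules</code> is a hypothetical helper, and it assumes the folder is importable as a package (i.e. it has an <code>__init__.py</code>):

```python
import importlib
import pkgutil

def call_in_all_submodules(package_name: str, func_name: str) -> dict:
    """Import every module of `package_name` and call `func_name` where defined.

    The package is located through its import name, so the current working
    directory does not matter as long as the package itself is importable.
    """
    package = importlib.import_module(package_name)
    results = {}
    for info in sorted(pkgutil.iter_modules(package.__path__), key=lambda m: m.name):
        module = importlib.import_module(f"{package_name}.{info.name}")
        func = getattr(module, func_name, None)
        if callable(func):
            results[info.name] = func()
    return results
```

For this project that would presumably be invoked as <code>call_in_all_submodules("my_project.modules_charts", "create_chart")</code>.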
| <python><python-import><python-module> | 2023-09-15 10:08:05 | 1 | 13,964 | Weedoze |
77,111,471 | 3,127,087 | Why a python's yield does not require a 2nd call to resume? | <p>I have some Python code:</p>
<pre class="lang-python prettyprint-override"><code># file: your_app/views.py
import contextlib
import inspect
from airium import Airium
from django.http import HttpResponse
@contextlib.contextmanager
def basic_body(a: Airium, useful_name: str = ''):
"""Works like a Django/Ninja template."""
a('<!DOCTYPE html>')
with a.html(lang='en'):
with a.head():
a.meta(charset='utf-8')
a.meta(content='width=device-width, initial-scale=1', name='viewport')
# do not use CSS from this URL in a production, it's just for an educational purpose
a.link(href='https://unpkg.com/@picocss/pico@1.4.1/css/pico.css', rel='stylesheet')
a.title(_t=f'Hello World')
with a.body():
with a.div():
with a.nav(klass='container-fluid'):
with a.ul():
with a.li():
with a.a(klass='contrast', href='./'):
a.strong(_t="β¨ Foo Bar")
with a.ul():
with a.li():
a.a(klass='contrast', href='#', **{'data-theme-switcher': 'auto'}, _t='Auto')
with a.li():
a.a(klass='contrast', href='#', **{'data-theme-switcher': 'light'}, _t='Light')
with a.li():
a.a(klass='contrast', href='#', **{'data-theme-switcher': 'dark'}, _t='Dark')
with a.header(klass='container'):
with a.hgroup():
a.h1(_t=f"You're on the {useful_name}")
a.h2(_t="It's a page made by our automatons with a power of steam engines.")
with a.main(klass='container'):
yield # This is the point where main content gets inserted
with a.footer(klass='container'):
with a.small():
margin = 'margin: auto 10px;'
a.span(_t='Β© Airium HTML generator example', style=margin)
# do not use JS from this URL in a production, it's just for an educational purpose
a.script(src='https://picocss.com/examples/js/minimal-theme-switcher.js')
def index(request) -> HttpResponse:
a = Airium()
with basic_body(a, f'main page: {request.path}'):
with a.article():
a.h3(_t="Hello World from Django running Airium")
with a.p().small():
a("This bases on ")
with a.a(href="https://picocss.com/examples/company/"):
a("Pico.css / Company example")
with a.p():
a("Instead of a HTML template, airium has been used.")
a("The whole body is generated by a template "
"and the article code looks like that:")
with a.code().pre():
a(inspect.getsource(index))
return HttpResponse(bytes(a)) # from django.http import HttpResponse
</code></pre>
<p>This code uses a <strong>yield</strong> so that <strong>basic_body</strong> can be shared across multiple pages, with <strong>main</strong> filled according to the needs of the specific page.</p>
<p>I thought that in order to resume a yielded function you had to call it again (<a href="https://www.geeksforgeeks.org/use-yield-keyword-instead-return-keyword-python/" rel="nofollow noreferrer">see here</a>), but I cannot see a 2nd call to <strong>basic_body</strong> here. What's going on? And how does the <strong>yield</strong> resume work?</p>
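The missing piece, as far as I can tell, is that <code>@contextlib.contextmanager</code> wraps the generator in a context manager whose <code>__enter__</code> calls <code>next()</code> once (running up to the <code>yield</code>) and whose <code>__exit__</code> calls <code>next()</code> a second time (running the rest), and the <code>with</code> statement issues both calls for you. A minimal sketch:

```python
import contextlib

trace = []

@contextlib.contextmanager
def basic_body():
    trace.append("header")   # executed by __enter__, i.e. the first next(gen)
    yield                    # the generator is suspended here
    trace.append("footer")   # executed by __exit__, i.e. the second next(gen)

with basic_body():
    trace.append("main")     # runs while the generator is suspended
```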
| <python><python-3.x> | 2023-09-15 10:06:40 | 1 | 937 | arennuit |
77,111,446 | 1,584,915 | Ensuring that aio_pika consumer runs forever alongside FastAPI | <p>I wrote an <a href="https://aio-pika.readthedocs.io/en/latest/" rel="nofollow noreferrer"><em>aio_pika</em></a> consumer task that is supposed to run forever in a <a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer"><em>FastAPI</em></a> app. This task is part of a manager object which implements a <a href="https://github.com/python-websockets/websockets/issues/653#issuecomment-858306542" rel="nofollow noreferrer">pub/sub pattern</a>:</p>
<pre class="lang-py prettyprint-override"><code>from aio_pika import connect_robust
from aio_pika.abc import AbstractIncomingMessage
class MyManager(object):
def __init__(self):
self.waiter = asyncio.Future()
def publish(self, value):
waiter, self.waiter = self.waiter, asyncio.Future()
waiter.set_result((value, self.waiter))
async def subscribe(self):
waiter = self.waiter
while True:
value, waiter = await waiter
yield value
__aiter__ = subscribe
async def on_message(self, message: AbstractIncomingMessage) -> None:
try:
async with message.process():
# do whatever deserialization of the received item
item = json.loads(message.body)
# share the item with subscribers
self.publish(item)
except Exception as e:
logger.error(e, exc_info=True)
async def run(self):
connection = await connect_robust(
settings.amqp_url,
loop=asyncio.get_running_loop()
)
channel = await connection.channel()
my_queue = await channel.get_queue('my-queue')
await my_queue.consume(self.on_message)
await asyncio.Future()
await connection.close()
</code></pre>
<p>This consumer tasks is created as during startup of the FastAPI app:</p>
<pre class="lang-py prettyprint-override"><code>my_manager = asyncio.Future()
@app.on_event("startup")
async def on_startup():
my_manager.set_result(MyManager())
task = asyncio.create_task((await my_manager).run())
</code></pre>
<p>Note that the manager is only instantiated during <code>on_startup</code> to ensure that there is an existing <em>asyncio</em> loop.</p>
<p>Unfortunately, the task stops working after a few weeks/months. I was unable to log what event caused this. I am not sure whether the task crashes or whether the connection to the AMQP server dropped without ever reconnecting. I am not even sure how/where to catch/log the issue.</p>
<p>What could possibly be the cause of this issue and how to fix it?</p>
<p>As an additional context, the manager is used in a <a href="https://github.com/sysid/sse-starlette" rel="nofollow noreferrer">Server Sent Events</a> route:</p>
<pre class="lang-py prettyprint-override"><code>@router.get('/items')
async def items_stream(request: Request):
async def event_publisher():
try:
aiter = (await my_manager).__aiter__()
while True:
task = asyncio.create_task(aiter.__anext__())
event = await asyncio.shield(task)
yield dict(data=event)
except asyncio.CancelledError as e:
print(f'Disconnected from client (via refresh/close) {request.client}')
raise e
return EventSourceResponse(event_publisher())
</code></pre>
<p>The async iterator is shielded to prevent <a href="https://stackoverflow.com/q/75665038/1584915">the issue described here</a>.</p>
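In the meantime, a hedged workaround I am considering, regardless of the root cause, is wrapping <code>run()</code> in a supervisor coroutine that logs the exception and restarts with backoff (<code>supervise</code> is a hypothetical helper):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def supervise(factory, *, delay=5, max_delay=300):
    """Run `await factory()` forever, logging crashes and restarting with backoff."""
    current = delay
    while True:
        try:
            await factory()
            current = delay          # clean return: reset the backoff
        except asyncio.CancelledError:
            raise                    # let real cancellation propagate
        except Exception:
            logger.exception("consumer task crashed; restarting in %.3fs", current)
        await asyncio.sleep(current)
        current = min(current * 2, max_delay)
```

On startup this would become <code>task = asyncio.create_task(supervise(manager.run))</code>, so a dropped connection or an unexpected exception is at least logged instead of silently killing the consumer.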
| <python><python-asyncio><fastapi><pika><python-pika> | 2023-09-15 10:02:05 | 1 | 1,708 | DurandA |
77,111,442 | 4,417,894 | How to access a user defined QObject that is instantiated in another thread in python | <p>I've never had this issue using C++, but recently I've been using QML/PyQt5/Python and I've run across the following error:</p>
<pre class="lang-bash prettyprint-override"><code>QQmlEngine: Illegal attempt to connect to MyClass(0x1fd42731e50) that is in a different thread than the QML engine QQmlApplicationEngine(0x1fd41562f70.
</code></pre>
<p>I have a worker thread that creates a <strong>QObject</strong> that I've defined. The worker is spawned by a controller object which is set as a context property.</p>
<p>Here is an M.R.E.:</p>
<pre><code>import QtQuick 2.15
import QtQuick.Controls 2.15
ApplicationWindow {
visible: true
width: 400
height: 600
title: "Clock"
Text {
id: backend_data
anchors.centerIn: parent
text: backend.my_class.value.toString()
}
}
</code></pre>
<pre class="lang-py prettyprint-override"><code>from PyQt5.QtGui import QGuiApplication
from PyQt5.QtQml import QQmlApplicationEngine
from PyQt5.QtCore import QObject, pyqtSignal, pyqtProperty, QThread, QTimer, pyqtSlot
import sys
import copy
class MyClass(QObject):
value_changed = pyqtSignal()
def __init__(self, value):
QObject.__init__(self)
self._value = value
@pyqtProperty(int, notify=value_changed)
def value(self):
return self._value
class Worker(QObject):
my_class_signal = pyqtSignal(QObject)
def __init__(self, *args, **kwargs):
QObject.__init__(self, *args, **kwargs)
self._i = 0
self.timer=QTimer()
self.timer.timeout.connect(self.send_my_class)
self.timer.start(1000)
self._my_class = MyClass(0)
@pyqtSlot()
def send_my_class(self):
self._i += 1
self._my_class = MyClass(self._i)
self.my_class_signal.emit(self._my_class)
class Backend(QObject):
my_class_changed = pyqtSignal()
def __init__(self, *args, **kwargs):
QObject.__init__(self, *args, **kwargs)
self._my_class = MyClass(0)
self._thread = QThread()
self._worker = Worker()
self._worker.moveToThread(self._thread)
self._worker.my_class_signal.connect(self.receive_my_class)
self._thread.start()
@pyqtSlot(QObject)
def receive_my_class(self, myclass):
self._my_class = myclass
self.my_class_changed.emit()
@pyqtProperty(QObject, notify=my_class_changed)
def my_class(self):
return self._my_class
if __name__ == '__main__':
app = QGuiApplication(sys.argv)
backend = Backend()
engine = QQmlApplicationEngine()
engine.quit.connect(app.quit)
engine.rootContext().setContextProperty('backend', backend)
engine.load('./example.qml')
if len(engine.rootObjects()) > 0:
sys.exit(app.exec())
</code></pre>
<p>What should I be doing differently?</p>
| <python><qt><pyqt5><qml> | 2023-09-15 10:01:47 | 1 | 1,010 | emg184 |
77,111,377 | 20,088,885 | Why Do I get Odoo Server Error after creating XML files | <p>I get an error, so after adding my <code>views/xml</code> and enabling the super user to access the <code>custom module</code>. I get an error of</p>
<blockquote>
<p><code>RPC_ERROR Odoo Server Error</code></p>
</blockquote>
<p>I tried creating a new database and account, and changing my odoo.conf:</p>
<pre><code>[options]
; This is the password that allows database operations:
; admin_passwd = admin
db_host = False
db_port = False
db_user = odoo16v1
db_password = odoo16v1
addons_path = server/addons , server/customaddons , C:/Odoo/odoo16/server/customaddons
default_productivity_apps = True
http_port = 8050
</code></pre>
<p>But what happens is that when I install the new app to my new database and refresh the app, I get <code>cancel install</code> instead:</p>
<pre><code>RPC_ERROR
Odoo Server Error
Traceback (most recent call last):
File "C:\Odoo\odoo16\server\odoo\api.py", line 984, in get
cache_value = field_cache[record._ids[0]]
KeyError: 335
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Odoo\odoo16\server\odoo\fields.py", line 1160, in __get__
value = env.cache.get(record, self)
File "C:\Odoo\odoo16\server\odoo\api.py", line 991, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: 'ir.actions.act_window(335,).search_view'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Odoo\odoo16\server\odoo\http.py", line 1584, in _serve_db
return service_model.retrying(self._serve_ir_http, self.env)
File "C:\Odoo\odoo16\server\odoo\service\model.py", line 133, in retrying
result = func()
File "C:\Odoo\odoo16\server\odoo\http.py", line 1611, in _serve_ir_http
response = self.dispatcher.dispatch(rule.endpoint, args)
File "C:\Odoo\odoo16\server\odoo\http.py", line 1815, in dispatch
result = self.request.registry['ir.http']._dispatch(endpoint)
File "C:\Odoo\odoo16\server\odoo\addons\base\models\ir_http.py", line 154, in _dispatch
result = endpoint(**request.params)
File "C:\Odoo\odoo16\server\odoo\http.py", line 697, in route_wrapper
result = endpoint(self, *args, **params_ok)
File "C:\Odoo\odoo16\server\odoo\addons\web\controllers\action.py", line 34, in load
action = request.env[action_type].sudo().browse([action_id]).read()
File "C:\Odoo\odoo16\server\odoo\addons\base\models\ir_actions.py", line 272, in read
result = super(IrActionsActWindow, self).read(fields, load=load)
File "C:\Odoo\odoo16\server\odoo\models.py", line 2986, in read
return self._read_format(fnames=fields, load=load)
File "C:\Odoo\odoo16\server\odoo\models.py", line 3165, in _read_format
vals[name] = convert(record[name], record, use_name_get)
File "C:\Odoo\odoo16\server\odoo\models.py", line 5898, in __getitem__
return self._fields[key].__get__(self, type(self))
File "C:\Odoo\odoo16\server\odoo\fields.py", line 1209, in __get__
self.compute_value(recs)
File "C:\Odoo\odoo16\server\odoo\fields.py", line 1387, in compute_value
records._compute_field_value(self)
File "C:\Odoo\odoo16\server\odoo\models.py", line 4222, in _compute_field_value
fields.determine(field.compute, self)
File "C:\Odoo\odoo16\server\odoo\fields.py", line 97, in determine
return needle(*args)
File "C:\Odoo\odoo16\server\odoo\addons\base\models\ir_actions.py", line 240, in _compute_search_view
fvg = self.env[act.res_model].get_view(act.search_view_id.id, 'search')
File "C:\Odoo\odoo16\server\odoo\api.py", line 537, in __getitem__
return self.registry[model_name](self, (), ())
File "C:\Odoo\odoo16\server\odoo\modules\registry.py", line 190, in __getitem__
return self.models[model_name]
KeyError: 'hospital.patient'
The above server error caused the following client error:
RPC_ERROR: Odoo Server Error
at makeErrorFromResponse (http://localhost:8069/web/assets/121-df8555b/web.assets_backend.min.js:992:163)
at XMLHttpRequest.<anonymous> (http://localhost:8069/web/assets/121-df8555b/web.assets_backend.min.js:1000:13)
</code></pre>
<p>What I did:</p>
<p>I tried changing the database account and reinstalling again, but when I try to search for the app I get <code>cancel install</code>.</p>
<p>oom_hospital/__manifest__.py</p>
<pre><code>{
'name': 'Hospital Management V1',
'author': 'odoodev16v1',
'data': [
'views/menu.xml',
'views/patient_view.xml'
]
}
</code></pre>
<p>oom_hospital/views/menu.xml</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
<menuitem
id="menu_hospital_root"
name="Hospital"
sequence="0"
/>
<menuitem
id="menu_patient_master"
name="Patient Details"
parent="menu_hospital_root"
sequence="0"
/>
</odoo>
</code></pre>
<p>oom_hospital/views/patient_view.xml</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="action_hospital_patient" model="ir.actions.act_window">
<field name="name">Patients</field>
<field name="type">ir.actions.act_window</field>
<field name="res_model">hospital.patient</field>
<field name="view_mode">tree,form</field>
<field name="context">{}</field>
<field name="help" type="html">
<p class="no_view_nocontent_smilling_face">
Create your first patient
</p>
</field>
</record>
<menuitem
id="menu_patient"
name="Patient"
action="action_hospital_patient"
parent="menu_patient_master"
sequence="0"
/>
</odoo>
</code></pre>
| <python><odoo><erp> | 2023-09-15 09:51:28 | 0 | 785 | Stykgwar |
77,111,328 | 1,797,689 | How do I put a primary key constraint in BigQuery using the Python API? | <p>I am creating a BigQuery table using the Python API. I want to make <code>user_id</code> as the primary key. How do I add this constraint using the Python API?</p>
<pre><code>from google.cloud import bigquery

client = bigquery.Client()  # assumes default credentials
table_id = "my-project.my_dataset.users"  # hypothetical fully qualified table id

def create_table():
schema = [
bigquery.SchemaField("user_id", "STRING", mode="REQUIRED"),
bigquery.SchemaField("first_name", "STRING"),
bigquery.SchemaField("last_name", "STRING"),
bigquery.SchemaField("age", "INTEGER")
]
table = bigquery.Table(table_id, schema=schema)
table = client.create_table(table)
</code></pre>
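At the time of writing, BigQuery primary keys are unenforced constraints and appear to be set through DDL rather than through the <code>bigquery.Table</code> schema object (newer client versions may expose table constraints directly; worth checking). A hedged sketch that builds such a DDL statement, which would then be executed with <code>client.query(ddl).result()</code>; the helper name and table id are assumptions:

```python
def ddl_with_primary_key(table_id: str, columns, pk: str) -> str:
    """Render a CREATE TABLE statement carrying an unenforced primary key."""
    cols = ",\n  ".join(f"{name} {typ}" for name, typ in columns)
    return (
        f"CREATE TABLE `{table_id}` (\n"
        f"  {cols},\n"
        f"  PRIMARY KEY ({pk}) NOT ENFORCED\n"
        f")"
    )

ddl = ddl_with_primary_key(
    "my-project.my_dataset.users",   # hypothetical table id
    [("user_id", "STRING NOT NULL"), ("first_name", "STRING"),
     ("last_name", "STRING"), ("age", "INT64")],
    pk="user_id",
)
print(ddl)
```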
| <python><google-bigquery> | 2023-09-15 09:43:12 | 1 | 5,509 | khateeb |
77,111,224 | 1,410,969 | Find path of pip installed package, when it is no longer located at that path | <p>I install a package locally using</p>
<pre><code>python -m pip install -r requirements.txt
</code></pre>
<p>Now, a user (unintentionally) moves or renames a parent folder containing the package.</p>
<p>How can I find the path where the package was installed?</p>
<p>I have tried</p>
<ul>
<li><code>pip list</code></li>
<li><code>pip freeze</code></li>
<li><code>importlib.import_module(package_name)</code></li>
<li><code>pkg_resources.get_distribution(package_name)</code></li>
<li><code>importlib.util.find_spec(package_name)</code></li>
</ul>
<p>All of the above methods give the same output as if the package had never been installed.
I need to find the location that the package had originally during the <code>pip install</code>. I cannot change the installation procedure; it has to be purely post hoc.</p>
| <python><pip><requirements.txt> | 2023-09-15 09:30:33 | 1 | 613 | Atnas |
77,111,096 | 1,959,753 | Dask Distributed - Stateful global parameters shared by worker methods | <p>I am using Dask to setup a cluster. For now I am setting up both the scheduler and the workers on localhost.</p>
<pre><code>cluster = SSHCluster(["localhost", "localhost"],
connect_options={"known_hosts": None},
worker_options={"n_workers": params["n_workers"], },
scheduler_options={"port": 0, "dashboard_address": ":8797"},)
client = Client(cluster)
</code></pre>
<p>Is there a way to create stateful global parameters that can be initialized on the workers' side and be used by any worker_methods that are subsequently assigned to be computed on the workers?</p>
<p>I have found the client.register_worker_plugin method.</p>
<pre><code>def read_only_data(jsonfilepath):
    with open(jsonfilepath, "r") as read_file:
        return json.load(read_file)
def main():
cluster = SSHCluster(params) # simplified
client = Client(cluster)
plugin = read_only_data(jsonfilepath)
client.register_worker_plugin(plugin, name="read-only-data")
</code></pre>
<p>However, <code>read_only_data</code> runs on the client side, hence the loaded data is copied over to the workers (and not initialized on the workers' side). While this may be acceptable for small data, if the data set is massive this will incur additional communication overhead (unless I am missing something conceptually).</p>
<p>Let's say <code>read_only_data</code> and the file in <code>jsonfilepath</code> were available on the workers' side. We could call it from <code>worker_method_1</code> and <code>worker_method_2</code>, which contain some arbitrary logic; however, this means it would have to be called every time the worker methods run. Is there some "initialization" event/method that happens on the workers' side, right after worker creation and before the assignment of the worker methods, that would allow us to pre-load some data structures as stateful global parameters, commonly shared among subsequent invocations of the worker methods?</p>
<p><strong>Update</strong></p>
<p>After trying @mdurant's suggestions with a JSON file of 280 MB, the code gets stuck on <code>client.replicate()</code> for over an hour. Loading the file in a single process without Dask takes less than 20 seconds. In the dashboard, the workers are each using approx. 2 GB, and the memory keeps increasing. There is also network activity being recorded.</p>
<p><a href="https://i.sstatic.net/mo05M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mo05M.png" alt="Dask Dashboard" /></a></p>
<p>All of a sudden, the script crashed with the below:</p>
<blockquote>
<p>distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool.replicate local=tcp://192.168.0.6:38498 remote=tcp://192.168.0.6:46171>: TimeoutError: [Errno 10] Connection timed out</p>
</blockquote>
<p>The memory usage is excessive. The only data involved is the 280 MB JSON file. 280 MB x 6 workers should amount to approx. 1.7 GB, and definitely not 2 GB on each worker.</p>
<p>I suspect the JSON is being copied to all workers.
<a href="https://distributed.dask.org/en/stable/api.html#distributed.Client.replicate" rel="nofollow noreferrer">Dask's documentation</a> also states that the data is copied onto the workers:</p>
<blockquote>
<p>replicate -> Set replication of futures within network.
Copy data onto many workers. This helps to broadcast frequently accessed data and can improve resilience.
This performs a tree copy of the data throughout the network individually on each piece of data. This operation blocks until complete. It does not guarantee replication of data to future workers.</p>
</blockquote>
<p>Still, this does not explain the excessive memory usage, nor why Dask does not manage to copy 280 MB to 6 workers in under an hour.</p>
| <python><cluster-computing><dask><dask-distributed> | 2023-09-15 09:10:59 | 2 | 736 | Jurgen Cuschieri |
77,111,073 | 4,139,024 | How to add line breaks to all existing hover texts in a plotly figure | <p>I am computing a plotly figure in an external library, i.e. I have a function that returns a fig object.</p>
<p>However, the hover texts are very long and I want to add line breaks. How can I do that?</p>
<p>The entire code is not so easy to reproduce, but I am using this:</p>
<pre class="lang-py prettyprint-override"><code>import textwrap
from bertopic import BERTopic
from umap import UMAP
from sentence_transformers import SentenceTransformer
docs = [
# ...
]
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(docs)
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs, embeddings)
reduced_embeddings = UMAP(n_neighbors=10, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)
fig = topic_model.visualize_documents(docs, reduced_embeddings=reduced_embeddings, width=1920, height=1080)
def wrap_text(text, delimiter='<br>', after=80):
if text is None:
return text
    return textwrap.fill(text, width=after).replace('\n', delimiter)
fig.update_traces(
text=[wrap_text(text) for text in fig.data[0].hovertext],
hoverinfo='text'
)
</code></pre>
| <python><plotly> | 2023-09-15 09:07:48 | 0 | 3,338 | timbmg |
77,111,066 | 13,508,045 | Outer group removes multiple inner groups | <p>I want to have multiple groups on outline <code>level 1</code> and over these groups I want to have 1 group which is on outline <code>level 2</code>, like this:</p>
<p><a href="https://i.sstatic.net/ZkowQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZkowQ.png" alt="Groups Example" /></a></p>
<p>But with my code the outer group always <strong>overwrites</strong> or removes the inner groups:</p>
<p><a href="https://i.sstatic.net/lg3q1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lg3q1.png" alt="Groups_not_working" /></a></p>
<h4>Code</h4>
<pre class="lang-py prettyprint-override"><code>import openpyxl
wb = openpyxl.load_workbook("Planung.xlsx")
ws = wb["Zeitplan"]
ws.sheet_properties.outlinePr.summaryBelow = False
ws.row_dimensions.group(9, 11, outline_level=1)
ws.row_dimensions.group(13, 13, outline_level=1)
ws.row_dimensions.group(16, 17, outline_level=1)
ws.row_dimensions.group(7, 17, outline_level=2)
wb.save('test.xlsx')
</code></pre>
<h4>Reproduction steps</h4>
<ul>
<li>Create file called <code>Planung.xlsx</code></li>
<li>Create sheet called <code>Zeitplan</code> in the created workbook</li>
<li>Run code</li>
<li>Open <code>test.xlsx</code></li>
</ul>
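<p>A likely cause: <code>row_dimensions.group</code> sets the outline level of every row in its range, so the outer call made last overwrites the inner levels. In Excel's outline model the outer group takes the <em>lower</em> level and nested groups the higher one. A sketch of the reordered calls (inverting the levels from the snippet above):</p>

```python
from openpyxl import Workbook
import os

wb = Workbook()
ws = wb.active
ws.sheet_properties.outlinePr.summaryBelow = False

# Outer group first, at the LOWER outline level ...
ws.row_dimensions.group(7, 17, outline_level=1)
# ... then the inner groups at the higher level, so they are not overwritten.
ws.row_dimensions.group(9, 11, outline_level=2)
ws.row_dimensions.group(13, 13, outline_level=2)
ws.row_dimensions.group(16, 17, outline_level=2)

wb.save("test.xlsx")
```

<p>Rows covered only by the outer group keep level 1, while rows inside the sub-groups end at level 2, which is the nesting the first screenshot shows.</p>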
| <python><excel><openpyxl> | 2023-09-15 09:06:26 | 1 | 1,508 | codeofandrin |
77,111,050 | 1,640,614 | How to get the number of leaves and nodes in a trained sklearn.tree.DecisionTreeClassifier? | <p>I have trained a DecisionTreeClassifier and I would like to know how many leaves and nodes it has.</p>
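<p>A sketch using a fitted classifier: <code>get_n_leaves()</code> reports the leaves, and the fitted <code>tree_</code> attribute exposes the total node count:</p>

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

n_leaves = clf.get_n_leaves()    # number of leaf nodes
n_nodes = clf.tree_.node_count   # total nodes (internal + leaves)
print(n_leaves, n_nodes)
```

<p>Since sklearn trees are binary, the counts are linked: <code>node_count == 2 * n_leaves - 1</code>.</p>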
| <python><scikit-learn><decision-tree> | 2023-09-15 09:03:58 | 1 | 2,272 | Ferro |
77,110,893 | 2,673,013 | LineCollection, PatchCollection redrawing | <p>I am trying to have a Jupyter notebook with a matplotlib plot in which a <code>LineCollection</code> and a <code>PatchCollection</code> (of circles) can be changed by a slider:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection, CircleCollection, PatchCollection
from matplotlib.patches import Circle
from ipywidgets import interact, FloatSlider, Layout
%matplotlib notebook
fig = plt.figure()
ax = plt.axes()
lines = LineCollection(np.array([[[0,0],[1,1]]]))
circles = PatchCollection([Circle((.5,.5),.5)])
ax.add_collection(lines)
ax.add_collection(circles)
#lines.set_segments([[[0,1],[1,0]]])
#circles.set_paths([Circle((.5,.5),.1)])
interact(lambda s: circles.set_paths([Circle((.5,.5),s)]),
s = FloatSlider(min = 0, max = .5, step=.01))
interact(lambda t: lines.set_segments([[[0,t],[1,1-t]]]),
t = FloatSlider(min = 0, max = 1, step=.01))
</code></pre>
<p>It works fine for the <code>LineCollection</code> but changing the parameter for the <code>PatchCollection</code> does not make the figure be redrawn. Changing the function for the <code>PatchCollection</code> to include <code>figs.canvas.draw()</code> fixes the problem, so I thought <code>PatchCollection.set_paths()</code> unlike <code>LineCollection.set_segments()</code> does not trigger a redraw. <s>However, in the commented line <code>circles.set_paths()</code> <em>does</em> seem to trigger a redraw.</s> Can somebody explain to me what's going on? If possible I'd like consistent behavior.</p>
| <python><matplotlib><jupyter-notebook> | 2023-09-15 08:40:19 | 1 | 343 | Stefan Witzel |
77,110,748 | 2,998,077 | To extract texts in selected page(s) from PDF | <p>Using pdfminer / pdfminer.six, I wish to extract the text from a PDF.</p>
<p>When trying to extract the texts on selected page(s) only, it gives an error:</p>
<pre><code>AttributeError: 'generator' object has no attribute 'seek'
# from this line "parser = PDFParser(page_selected)"
</code></pre>
<p>What's the right way to extract the texts in selected page(s) only?</p>
<p>Here is the code I have:</p>
<pre><code>from io import StringIO
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfparser import PDFParser
import pdfminer.high_level as hl
working_folder = "C:\\temp\\"
output_string = StringIO()
with open(working_folder + 'AU.pdf', 'rb') as in_file:
page_selected = hl.extract_pages(in_file, page_numbers=[1]) # second page
parser = PDFParser(page_selected) # error line
doc = PDFDocument(parser)
rsrcmgr = PDFResourceManager()
device = TextConverter(rsrcmgr, output_string, laparams=LAParams())
interpreter = PDFPageInterpreter(rsrcmgr, device)
for page in PDFPage.create_pages(doc):
interpreter.process_page(page)
print(output_string.getvalue())
</code></pre>
| <python><pdf><extract><pdfminer><pdfminersix> | 2023-09-15 08:20:55 | 1 | 9,496 | Mark K |
77,110,705 | 12,415,855 | Scrolling down with selenium on website not possible? | <p>I am trying to scroll down on a website using selenium with the following code:</p>
<pre><code>import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
if __name__ == '__main__':
os.environ['WDM_LOG'] = '0'
options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
link = "https://unabated.com/mlb/odds"
driver.get (link)
driver.execute_script("window.scrollTo(0, 20000)")
input("Press!")
driver.quit()
</code></pre>
<p>But when the site is opened, the scroll down on the website does not happen.
How can I scroll down on this page?</p>
| <python><selenium-webdriver> | 2023-09-15 08:14:12 | 1 | 1,515 | Rapid1898 |
77,110,563 | 12,711,388 | How to set heatmap to grayscale and annotate with a mask | <p>I have a matrix A:</p>
<pre><code>A = np.array([[ 0. , 0.00066748, -0.00097412, -0.00748846, 0.00305338],
[-0.00157652, 0. , 0.0048117 , 0.01069083, -0.0137888 ],
[-0.00713212, -0.00170574, 0. , 0.00096385, 0.00212367],
[-0.00186541, 0.00351104, -0.00590608, 0. , -0.00448311],
[-0.00929146, 0.00157808, 0.01300444, -0.00078593, 0. ]])
</code></pre>
<p>With the following code I create a heatmap that assigns blank (white) to zero values, and green for positive values and red for negative ones:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
rdgn = sns.diverging_palette(h_neg=10, h_pos=130, s=99, l=55, sep=3, as_cmap=True)
fig = plt.figure()
sns.heatmap(A, mask=(A == 0), cmap=rdgn, center=0)
plt.xticks(np.arange(0, 5, 1) + 0.5, [i + 1 for i in range(5)])
plt.yticks(np.arange(0, 5, 1) + 0.5, [i + 1 for i in range(5)], rotation=0)
plt.tick_params(axis='both', which='major', labelsize=10, labelbottom=False, bottom=False, top=False, labeltop=True)
plt.show()
</code></pre>
<p>I obtain this:</p>
<p><a href="https://i.sstatic.net/5EL9I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5EL9I.png" alt="enter image description here" /></a></p>
<p>Now my goal is to convert this picture to greyscale only. One way would be to assign darker colors to reds and lighter to greens, but I would like to highlight the transition from negative to positive making the zero values very different from others, ideally keeping them blank. If I try with</p>
<pre><code>fig = plt.figure()
sns.heatmap(A, mask=(A == 0), cmap = 'gray', center = 0)
plt.xticks(np.arange(0, 5, 1) + 0.5, [i + 1 for i in range(5)])
plt.yticks(np.arange(0, 5, 1) + 0.5, [i + 1 for i in range(5)], rotation=0)
plt.tick_params(axis='both', which='major', labelsize=10, labelbottom=False, bottom=False, top=False, labeltop=True)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/qXYgD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qXYgD.png" alt="enter image description here" /></a>
I do not get what I want. With <code>mask=(A == 0)</code> I can keep the zeros blank, but this is not reflected in the colorbar on the right. What I would like to do is basically "split" the colors in two: from pitch black to "half" grey for negatives (from the farthest away to the closest to zero), white for zeros, and from white back to "half" grey for positives (from the closest to the farthest away from zero). Is there a way to achieve this? I am open to any suggestion on how to tackle the problem.</p>
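<p>One sketch of the "dark to white to dark" idea: build a custom grayscale diverging colormap, so the colorbar itself reflects the split (the anchor points below are one choice among many):</p>

```python
from matplotlib.colors import LinearSegmentedColormap

# Black at the extreme negatives, white at zero, black again at the
# extreme positives; positions are fractions of the normalized range.
graydiv = LinearSegmentedColormap.from_list(
    "graydiv", [(0.0, "black"), (0.5, "white"), (1.0, "black")]
)

rgba_low = graydiv(0.0)   # pitch black
rgba_mid = graydiv(0.5)   # (near) white
```

<p>Used as <code>sns.heatmap(A, mask=(A == 0), cmap=graydiv, center=0)</code>, the masked zeros stay blank and the colorbar shows the white midpoint. Note that equally extreme negatives and positives then share the same gray; if they must stay distinguishable, anchor the two ends at different gray levels.</p>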
| <python><matplotlib><seaborn><heatmap><grayscale> | 2023-09-15 07:50:21 | 2 | 377 | user9875321__ |
77,110,560 | 16,420,204 | Polars: Calculate time difference from first element in each group of consecutive labels | <p>I have a <code>polars.DataFrame</code> like:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
βββββββββββββββββββββββββββ¬ββββββββββ
β timestamp β group β
β --- β --- β
β datetime[ΞΌs, UTC] β str β
βββββββββββββββββββββββββββͺββββββββββ‘
β 2009-04-18 11:30:00 UTC β group_1 β
β 2009-04-18 11:40:00 UTC β group_1 β
β 2009-04-18 11:50:00 UTC β group_1 β
β 2009-04-18 12:00:00 UTC β group_2 β
β 2009-04-18 12:10:00 UTC β group_2 β
β 2009-04-18 12:20:00 UTC β group_1 β # <- reappearance of group_1
β 2009-04-18 12:30:00 UTC β group_1 β # <- reappearance of group_1
βββββββββββββββββββββββββββ΄ββββββββββ
""")
</code></pre>
<p>I want to calculate the time difference between the timestamp of the first element in each group to the timestamp of the elements in a group.</p>
<p><strong>Important is, that 'group' is defined as a (chronologically) consecutive appearance of the same group label.</strong></p>
<p>Like in the example shown group labels can occur later in time with the same group label but should by then be treated as a new group.</p>
<p>With that, the result should look something like this:</p>
<pre><code>βββββββββββββββββββββββββββ¬ββββββββββ¬βββββββββββ
β timestamp β group β timediff β
β --- β --- β --- β
β datetime[ΞΌs, UTC] β str β int(?) β
βββββββββββββββββββββββββββͺββββββββββͺβββββββββββ‘
β 2009-04-18 11:30:00 UTC β group_1 β 0 β
β 2009-04-18 11:40:00 UTC β group_1 β 10 β
β 2009-04-18 11:50:00 UTC β group_1 β 20 β
β 2009-04-18 12:00:00 UTC β group_2 β 0 β
β 2009-04-18 12:10:00 UTC β group_2 β 10 β
β 2009-04-18 12:20:00 UTC β group_1 β 0 β # <- reappearance of group_1
β 2009-04-18 12:30:00 UTC β group_1 β 10 β # <- reappearance of group_1
βββββββββββββββββββββββββββ΄ββββββββββ΄βββββββββββ
</code></pre>
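<p>The crux is a run-length group id: consecutive runs of the same label get distinct ids, so a reappearing label starts a new group. In plain Python the idea looks like this (toy timestamps expressed in minutes):</p>

```python
from itertools import groupby

rows = [(690, "g1"), (700, "g1"), (710, "g1"),
        (720, "g2"), (730, "g2"),
        (740, "g1"), (750, "g1")]  # g1 reappears: must start a NEW run

timediffs = []
# groupby (without sorting) groups CONSECUTIVE equal labels only,
# which is exactly the run-length semantics wanted here.
for _, run in groupby(rows, key=lambda r: r[1]):
    run = list(run)
    first = run[0][0]
    timediffs += [t - first for t, _ in run]

print(timediffs)
```

<p>In Polars the equivalent would be along the lines of <code>(pl.col("timestamp") - pl.col("timestamp").first()).over(pl.col("group").rle_id())</code> (untested sketch; <code>rle_id</code> requires a recent Polars version).</p>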
| <python><dataframe><datetime><group-by><python-polars> | 2023-09-15 07:50:01 | 3 | 1,029 | OliverHennhoefer |
77,110,540 | 21,534,356 | 'HttpClient' object has no attribute 'execute' clickhouse python | <p>I'm trying to execute a command with the ClickHouse client in Python as I have seen in the documentation, but it keeps giving me the error
<code>'HttpClient' object has no attribute 'execute'</code>.</p>
<pre><code>from clickhouse_driver import Client
CLICKHOUSE_CLOUD_HOSTNAME = 'localhost'
CLICKHOUSE_CLOUD_USER = 'username'
CLICKHOUSE_CLOUD_PASSWORD = 'password'
client = clickhouse_connect.get_client(
host=CLICKHOUSE_CLOUD_HOSTNAME, port=8123, username=CLICKHOUSE_CLOUD_USER, password=CLICKHOUSE_CLOUD_PASSWORD)
import random
data=[]
import datetime
# using now() to get current time
current_time = datetime.datetime.now()
for x in range(1000):
    data.append((x, get_random_string(random.randint(5, x+6)), get_random_string(random.randint(5, x+6)), get_random_string(random.randint(5, x+6)), current_time))
client.execute("INSERT INTO test FORMAT JSONEachRow", data)
</code></pre>
<p>. Stack trace:</p>
<ol start="0">
<li>DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000bfc4fe4 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>DB::Exception::Exception(PreformattedMessage&&, int) @ 0x00000000075f8360 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>DB::AccessControl::authenticate(DB::Credentials const&, Poco::Net::IPAddress const&) const @ 0x00000000102190bc in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>DB::Session::authenticate(DB::Credentials const&, Poco::Net::SocketAddress const&) @ 0x00000000112e0698 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>DB::TCPHandler::runImpl() @ 0x00000000121a3620 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>DB::TCPHandler::run() @ 0x00000000121b4d28 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>Poco::Net::TCPServerConnection::start() @ 0x00000000146bc044 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>Poco::Net::TCPServerDispatcher::run() @ 0x00000000146bd540 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>Poco::PooledThread::run() @ 0x000000001482ee3c in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001482cfe4 in /opt/bitnami/clickhouse/bin/clickhouse</li>
<li>start_thread @ 0x0000000000007648 in /lib/aarch64-linux-gnu/libpthread-2.31.so</li>
<li>? @ 0x00000000000d1fdc in /lib/aarch64-linux-gnu/libc-2.31.so</li>
</ol>
| <python><clickhouse> | 2023-09-15 07:46:47 | 0 | 360 | ali heydarabadii |
77,110,144 | 10,712,335 | How to set the `running` attribute of concurrent.future.Future() object | <p>To learn the usage of asyncio, I have written the following code:</p>
<pre><code>import asyncio
import multiprocessing as mp
import time
from threading import Thread

class MultiProcessor:
def __init__(self, n) -> None:
self._loop = asyncio.new_event_loop()
self._bg_thread = Thread(target=loop_forever, args=(self._loop, ), daemon=True)
self._bg_thread.start()
def submit(self, f, *args, **kwargs) -> asyncio.Future:
fut = asyncio.run_coroutine_threadsafe(run_one_process(f, args, kwargs), self._loop)
return fut
async def run_one_process(f, args, kwargs):
q = mp.Queue()
kwargs['q'] = q
p = mp.Process(target=f, args=args, kwargs=kwargs,daemon=True)
p.start()
while p.is_alive():
await asyncio.sleep(0.1)
res = q.get()
p.close()
return res
def test_func(a, b, **kwargs):
print('start running')
time.sleep(2)
res = a + b
kwargs['q'].put(res)
def loop_forever(loop):
asyncio.set_event_loop(loop)
loop.run_forever()
if __name__ == "__main__":
pool = MultiProcessor(4)
start = time.perf_counter()
fut = pool.submit(test_func, 1, 2)
while not fut.done():
print(fut.running())
time.sleep(0.1)
</code></pre>
<p>from the console I can see that the coroutine is indeed running, but when I print <code>fut.running()</code>, it always returns <code>False</code>. How can I set the <code>running</code> attr of the <code>Future</code> object?</p>
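<p>For context: <code>running()</code> on a <code>concurrent.futures.Future</code> only becomes true when an executor calls <code>set_running_or_notify_cancel()</code> on it, and the futures returned by <code>run_coroutine_threadsafe</code> never make that transition, so polling <code>done()</code> (or blocking on <code>result()</code>) is the intended pattern. A minimal sketch:</p>

```python
import asyncio
import threading

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def work():
    await asyncio.sleep(0.2)
    return 42

fut = asyncio.run_coroutine_threadsafe(work(), loop)
assert not fut.done()            # still pending from the caller's view
result = fut.result(timeout=5)   # blocks the caller until the coroutine finishes
loop.call_soon_threadsafe(loop.stop)
print(result)
```

<p>Manually forcing the future into the RUNNING state from outside would break the executor contract, which is why no public setter exists.</p>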
| <python><python-3.x><python-asyncio> | 2023-09-15 06:40:14 | 1 | 489 | Lionnel104 |
77,110,088 | 5,357,095 | Python function to create an oracle db insert prepared statement | <p>I'm trying to insert some a record into Oracle DB table. I build a DB connector in python and have the required functions to build prepared statements, however the insert keeps failing with the error:</p>
<p><code>"oracledb.exceptions.DatabaseError: DPY-4009: 0 positional bind values are required but 13 were provided"</code></p>
<p>Here is my function:</p>
<pre><code> def insert(self, schema: str, table_name: str, column_names: List, values: tuple):
"""
Inserts one row into the specified table.
:param table_name: The name of the table to insert data into.
:param values: A tuple containing the values to insert into a row.
"""
column_name_placeholders = ', '.join(f'"{columns}"' for columns in column_names)
values = self.replace_none_with_null(values)
statement = f"INSERT INTO {schema}.{table_name} ({column_name_placeholders}) VALUES ({column_name_placeholders})"
cursor = self.get_connection().cursor()
cursor.execute(statement, values)
cursor.commit()
cursor.close()
</code></pre>
<p>I replaced 'None' values with empty strings ''. The length of the values list is also equal to 13 (same as the '?' placeholders), including the null strings.</p>
<p>I created a test function to test what statement is being prepared and it does prepare the right statement:</p>
<pre><code>INSERT INTO random_table ("col1", "col2", "col3", "col4", "col5", "col6", "col7", "col8", "col9", "col10", "col11", "col12", "col13") VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?);
</code></pre>
<p>What could I be doing wrong?
Also, is the above the best practice for creating prepared statements in Python, or should I use bind variables?</p>
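<p>For what it's worth, python-oracledb does not accept <code>?</code> placeholders; it expects named or positional bind variables such as <code>:1</code>, <code>:2</code>, which matches the "0 positional bind values are required" message. A sketch of building the statement (schema and table names are illustrative):</p>

```python
from typing import List

def build_insert(schema: str, table_name: str, column_names: List[str]) -> str:
    cols = ", ".join(f'"{c}"' for c in column_names)
    # Positional bind placeholders :1, :2, ... instead of '?'
    binds = ", ".join(f":{i}" for i in range(1, len(column_names) + 1))
    return f"INSERT INTO {schema}.{table_name} ({cols}) VALUES ({binds})"

stmt = build_insert("myschema", "random_table", ["col1", "col2", "col3"])
print(stmt)
# then: cursor.execute(stmt, values)  # values being the 13-tuple
```

<p>Note also that commit lives on the connection (<code>connection.commit()</code>), not on the cursor.</p>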
| <python><oracle-database> | 2023-09-15 06:28:27 | 1 | 877 | Alex Bloomberg |
77,109,805 | 13,443,954 | How to use locator in evaluate's javascript in Playwright python | <p>I have the following script running:</p>
<pre class="lang-py prettyprint-override"><code>def test_combobox_selected_by_label(page):
page.goto("http://www.tizag.com/htmlT/htmlselect.php")
element = page.get_by_role("combobox")
element.select_option("Colorado -- CO")
    selected_label = page.evaluate("document.getElementsByName('selectionField')[0].options[document.getElementsByName('selectionField')[0].selectedIndex].text")
assert selected_label == "Colorado -- CO"
</code></pre>
<p>I'm using JavaScript to extract the selected option's label text.</p>
<p>How could I reformat this to reuse the previously defined locator (<code>get_by_role("combobox")</code>) stored in the element variable?</p>
<p>In short, how can I use the locator variable (element) <strong>in evaluate's JavaScript</strong>?</p>
<p>Locators should be:</p>
<pre class="lang-py prettyprint-override"><code>page.get_by_role("combobox")
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>page.locator("[name=selectionField]").first
</code></pre>
<p>Note: you may run the above script, this is a live page.</p>
| <javascript><python><playwright><playwright-python> | 2023-09-15 05:23:28 | 1 | 333 | M AndrΓ‘s |
77,109,773 | 14,154,784 | Why Are Some Beautiful Soup Elements Accessed Using Dictionary Syntax But Others Using Object Syntax? | <p><strong>Context:</strong> I have the following little query in Beautiful Soup, and then build a list comprehension full of tuples from it. It works great:</p>
<pre><code>tags = soup.find_all('span', {'class': 'tags-links'})
title_text_list = [(tag['title'], tag.text) for tag in tags]
</code></pre>
<p><strong>Question:</strong> Why do we access the title like a dictionary, but the text with object notation? Why not do both with object notation, or both like a dictionary?</p>
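<p>In short: a <code>Tag</code> stores its HTML attributes in a dict (<code>tag.attrs</code>) and implements <code>__getitem__</code> over that dict, while <code>.text</code> is a computed property assembled from the tag's children. A small sketch:</p>

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<span class="tags-links" title="python">py</span>', "html.parser")
tag = soup.span

assert tag["title"] == "python"       # attribute: dict-style lookup into tag.attrs
assert tag.attrs["title"] == "python" # the underlying dict is visible directly
assert tag.text == "py"               # rendered text: an object property
assert tag.get("missing") is None     # .get() avoids KeyError, like dict.get
```

<p>One wrinkle: multi-valued attributes such as <code>class</code> come back as a list (<code>tag["class"] == ["tags-links"]</code>), not a string.</p>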
| <python><html><web-scraping><beautifulsoup><syntax> | 2023-09-15 05:15:47 | 1 | 2,725 | BLimitless |
77,109,565 | 3,862,607 | Python3 Multiprocessing, Mac Out Of Application Memory For Linear Search Using 4 Cores | <p>I am writing a program to run a linear search using multiple processes versus using a single process.</p>
<p>When I run the following program, my Mac pops up an "Out of Application Memory" error, and it doesn't even seem to start the first process.</p>
<p><a href="https://i.sstatic.net/sONG6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sONG6.png" alt="out of memory error" /></a></p>
<p>When running a smaller <code>nums</code> array it loads the processes but it's always slower.</p>
<p>Is there something I am missing or doing wrong?</p>
<p>I'm trying to test at scale because at a smaller size of <code>nums</code> the performance improvement may not exist due to the overhead with processes.</p>
<p>I am running this on an Apple MacBook Pro with an M2 chip, 16 GB of RAM, and 12 cores.</p>
<pre><code>import random
import math
import time
from multiprocessing import Process, Value
import os
def fastLinearSearch(nums, startIndex, randNumToFind, chunkSize, foundNumber, start_time):
print('start index: ' + str(startIndex))
print('chunk size: ' + str(chunkSize))
if foundNumber == randNumToFind:
print('done with ' + str(os.getpid()))
return
for i in range(startIndex, startIndex + chunkSize):
if foundNumber == randNumToFind:
print('done with ' + str(os.getpid()))
return
if randNumToFind == nums[i]:
print('Found number ' + str(randNumToFind))
print("Took " + str(time.time() - start_time) + " seconds")
foundNumber = randNumToFind
print('done with ' + str(os.getpid()))
return
print("done and didn't find anything with pid: " + str(os.getpid()))
if __name__ == '__main__':
nums = []
print('Generating nums array')
for i in range(1000000000):
nums.append(i)
print('Nums array generated')
# mix up numbers
# print("Mixing up numbers...")
# random.shuffle(nums)
# print('Mixed up numbers')
randNum = math.floor(random.random() * len(nums))
# regular linear search
print("Linear search for number " + str(randNum))
start_time = time.time()
for i in range(len(nums)):
if (nums[i] == randNum):
print("Found " + str(randNum))
print("Took " + str(time.time() - start_time) + " seconds")
break
# multiprocessing linear search
print("Fast linear search for number " + str(randNum))
start_index_1 = 0
processes_count = 4
start_index_2 = math.floor(len(nums) / processes_count) * 1
start_index_3 = math.floor(len(nums) / processes_count) * 2
start_index_4 = math.floor(len(nums) / processes_count) * 3
chunkSize = math.floor(len(nums) / processes_count)
start_time = time.time()
foundNumber = Value('d', -1) # start at -1, a change shows it was found
p1 = Process(target=fastLinearSearch, args=(
nums, start_index_1, randNum, chunkSize, foundNumber, start_time))
p2 = Process(target=fastLinearSearch, args=(
nums, start_index_2, randNum, chunkSize, foundNumber, start_time))
p3 = Process(target=fastLinearSearch, args=(
nums, start_index_3, randNum, chunkSize, foundNumber, start_time))
p4 = Process(target=fastLinearSearch, args=(
nums, start_index_4, randNum, chunkSize, foundNumber, start_time))
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
</code></pre>
| <python><multithreading><parallel-processing><multiprocessing> | 2023-09-15 04:12:03 | 1 | 1,899 | Drew Gallagher |
77,109,554 | 219,153 | Simple MQTT subscriber using Python paho-mqtt works on LInux, but fails on Windows 11 | <p>This Python 3.11 script:</p>
<pre><code>import paho.mqtt.client as mqtt
def onMQTT(client, userdata, message):
print(message.payload)
if __name__ == '__main__':
topic = 'top'
client = mqtt.Client()
client.connect('localhost', 1883)
client.message_callback_add(topic, onMQTT)
(result, mid) = client.subscribe(topic, qos=2)
if result == mqtt.MQTT_ERR_SUCCESS:
print(f'Subscribed to: {topic}.')
client.loop_forever()
</code></pre>
<p>works as expected on Linux. On Windows 11, it prints <code>Subscribed to: top.</code>, but fails to receive topics, while <code>mosquitto_sub -t 'top'</code> command receives <code>top</code> topics with no problem. Is there anything different I have to do, in order to make this script work on Windows 11?</p>
| <python><linux><windows><mqtt> | 2023-09-15 04:08:45 | 1 | 8,585 | Paul Jurczak |
77,109,398 | 3,419,510 | Failing to Import Files Compiled from Protobuf in Python | <p>My directory structure is as follows:</p>
<pre><code> test
|-test.py
|-test.proto
|-test_pb2.py
|-__init__.py
|-comm
|-comm.proto
|-comm_pb2.py
|-__init__.py
</code></pre>
<p>Both __init__.py files are empty,
and <strong>test.proto</strong> looks like this:</p>
<pre><code>package test;
import "comm/comm.proto";
message Test{
optional comm.Foo foo = 1;
}
</code></pre>
<p>and <strong>comm.proto</strong> is like this:</p>
<pre><code>package comm;
message Foo{}
</code></pre>
<p>I successfully used the command <em>protoc --python_out=. -I. comm.proto</em> in the <strong>comm</strong> directory to compile <strong>comm.proto</strong>, and <em>protoc --python_out=. -I. test.proto</em> in the <strong>test</strong> directory to compile <strong>test.proto</strong>,
but when I tried to import <em><strong>test_pb2</strong></em> in <em><strong>test.py</strong></em>, I encountered this error:</p>
<pre><code>TypeError: Couldn't build proto file into descriptor pool: Depends on file 'comm/comm.proto', but it has not been loaded
</code></pre>
<p>Can someone help me identify the reason and provide a solution, please?</p>
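<p>A likely explanation, sketched as commands: the descriptor names must match across files. Compiling <code>comm.proto</code> from inside <code>comm/</code> registers it under the name <code>comm.proto</code>, while <code>test.proto</code> declares a dependency on <code>comm/comm.proto</code>. Compiling both with one consistent import root should resolve it:</p>

```shell
# Run both commands from the `test/` directory, so they share the same -I root
# and comm.proto is registered as "comm/comm.proto", matching the import.
protoc -I. --python_out=. comm/comm.proto
protoc -I. --python_out=. test.proto
```

<p>The generated <code>comm_pb2.py</code> still lands in <code>comm/</code>, but its descriptor is now registered under the path <code>test.proto</code> expects.</p>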
| <python><protocol-buffers><protobuf-python> | 2023-09-15 03:15:09 | 1 | 357 | charlie |
77,108,948 | 2,023,445 | How can I debug a hang in polars? | <p>I have a python script that uses polars to scan through many thousands of parquet files in s3, do a join, and write the result to a local parquet file. Because of memory constraints, I have to run it in batches. It runs for an varying amount of time, and then just hangs. I can give a brief overview of the operations I'm carrying out below, but my main question is: what can I do to debug this?</p>
<p>I have tried using <code>pl.Config.set_verbose(True)</code>, and it outputs a lot of logging about what's going on until it just stops and hangs. Ctrl-C does not quit it. Are there any other ways to even find out where it's hanging? Alternatively, is there a better way of carrying this out?</p>
<p>Summary of the program:
I have a local parquet file, reference.parquet, that contains a list of valid IDs; call it id_1. On s3, I have many directories each with two files of interest, file_a.parquet and file_b.parquet. file_a has two columns, id_1 and id_a; file_b has two columns, id_a and id_b. All of these IDs can appear multiple times. Ultimately, I want to end up with a global mapping table of every pair of id_1 and id_b that appears in all the files, with id_a being a join key between the two files, and id_1 appearing in reference.parquet.</p>
<ol>
<li>I get the reference table as a cached lazy dataframe:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>ref = pl.scan_parquet("reference.parquet").select(pl.col("id_1")).cache()
</code></pre>
<ol start="2">
<li>Given an s3 base directory, create lazy dataframes for both file_a and file_b:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def join_files(basedir):
# basedir is something like "s3://bucket/dirname"
df_a = (pl.scan_parquet(os.path.join(basedir, "file_a.parquet"))
.join(ref, on="id_1"))
df_b = (pl.scan_parquet(os.path.join(basedir, "file_b.parquet")))
return (df_a.join(df_b, on="id_a")
.select(["id_1", "id_b"])
.unique())
</code></pre>
<ol start="3">
<li>For a batch of directories, use join_files to make a batch of lazy dataframes, concat them all, and return the unique values:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def process_batch(dirs):
dfs = [join_files(d) for d in dirs]
# concatenate the dataframes, then execute.
return pl.concat(dfs, rechunk=False).unique().collect()
</code></pre>
<p>Step 3 is where it inevitably hangs, when the concatenated lazy frames are being collected. Some variable number of batches will run, and then one will hang. When rerun, that batch will succeed, then a later one will hang.</p>
<p>Any theories or advice would be appreciated.</p>
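<p>One stdlib way to see where a hang sits: <code>faulthandler</code> can dump every thread's Python-level stack after a timeout, and it keeps working even when Ctrl-C does not, since the dump is written by a watchdog outside normal signal handling. A sketch (the timeout value is arbitrary):</p>

```python
import faulthandler
import tempfile

# Arm a watchdog: if the process is still alive after 600 s, every thread's
# Python stack is written to stderr, showing which call collect() is stuck in.
faulthandler.dump_traceback_later(timeout=600, exit=False)

# ... run process_batch(dirs) here, then disarm once it returns:
faulthandler.cancel_dump_traceback_later()

# The same dump can also be taken on demand (to any real file descriptor):
with tempfile.TemporaryFile("w+") as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    snapshot = f.read()
```

<p>If the hang turns out to be inside polars' native threads, an external sampler such as <code>py-spy dump --pid &lt;pid&gt;</code> can show the native-side stacks as well.</p>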
| <python><python-polars> | 2023-09-14 23:55:31 | 2 | 324 | jsc |
77,108,738 | 4,414,359 | How to filter pandas DF on concatenation of columns? | <p>How to filter pandas DF on concatenation of columns?</p>
<p>For example:</p>
<pre><code>import pandas as pd
data = {'product_name': ['laptop', 'laptop', 'printer', 'tablet', 'desk', 'chair', 'chair', 'chair'],
'price': [1200, 1000, 150, 300, 450, 200, 100, 120],
'color': ['white', 'black', 'white', 'black', 'brown', 'red', 'grey', 'black']
}
df = pd.DataFrame(data)
</code></pre>
<p>There's a list of conditions:
<code>[('laptop', 'black'), ('chair', 'red'), ('chair', 'grey')]</code></p>
<p>How do I filter it to only return rows that match those conditions? Assume it's a big df with many more columns, with thousands of rows and thousands of conditions like that.</p>
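<p>One sketch: treat each row's pair of key columns as a tuple and use <code>Series.isin</code>, which scales to thousands of conditions because the lookup is hash-based (column names taken from the example):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "product_name": ["laptop", "laptop", "printer", "chair", "chair", "chair"],
    "price": [1200, 1000, 150, 200, 100, 120],
    "color": ["white", "black", "white", "red", "grey", "black"],
})
conditions = [("laptop", "black"), ("chair", "red"), ("chair", "grey")]

# Pair up the key columns row-wise, then keep rows whose pair is allowed.
keys = pd.Series(list(zip(df["product_name"], df["color"])), index=df.index)
out = df[keys.isin(conditions)]
print(out)
```

<p>An alternative for many key columns is <code>df[df.set_index(["product_name", "color"]).index.isin(conditions)]</code>, which avoids building the intermediate Series of tuples by hand.</p>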
| <python><pandas> | 2023-09-14 22:39:06 | 1 | 1,727 | Raksha |
77,108,654 | 5,821,028 | How to convert y-axis tick label numbers to letters | <p>For the code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = list(map(lambda n:str(n), np.random.choice(np.arange(1,50), 1000)))
fig, ax = plt.subplots(1,1)
sns.countplot(y=data,ax=ax)
ax.set_yticklabels(ax.get_yticklabels(), fontsize=5)
plt.show()
</code></pre>
<p>I got the plot below. The y-axis labels are string-typed 1~2 digit integers. I want to convert them to letters like</p>
<p>1->AL, 2->AK, 4->AZ, 5->AR, ...</p>
<p>I tried using <code>ax.set_yticks(ylabel, converted ylabel)</code>, but it did not work. How can I do that?</p>
<p><a href="https://i.sstatic.net/ifFgS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ifFgS.png" alt="" /></a></p>
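<p><code>set_yticklabels</code> expects the new strings, so the tick texts can simply be passed through a mapping first. A sketch (the number-to-state-code mapping below is illustrative and would need to be completed):</p>

```python
state_codes = {"1": "AL", "2": "AK", "4": "AZ", "5": "AR"}  # extend as needed

def relabel(tick_texts, mapping):
    # Unknown labels pass through unchanged, so the axis keeps every entry.
    return [mapping.get(t, t) for t in tick_texts]

print(relabel(["1", "2", "3", "4"], state_codes))
```

<p>Applied to the plot, this would be <code>ax.set_yticklabels(relabel([t.get_text() for t in ax.get_yticklabels()], state_codes), fontsize=5)</code>.</p>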
| <python><matplotlib><seaborn><yticks><countplot> | 2023-09-14 22:15:12 | 2 | 1,125 | Jihyun |
77,108,531 | 17,309,108 | Are there existing solutions to unpack `stdin` as arguments? | <p>I would like to have the option to pass positional arguments via <code>stdin</code>. I have written my own <code>argparse.Action</code> subclass to work around the problem, which does work, but it would be desirable to avoid reinventing the wheel. Are there existing third-party or built-in solutions to automatically handle a hyphen as a source of the argument set?</p>
<p>My class:</p>
<pre class="lang-py prettyprint-override"><code>class ExtendStdin(Action):
def __call__(
self,
parser: ArgumentParser,
namespace: Namespace,
values: Sequence[str | Path],
option_string: Optional[str] = None,
) -> None:
if "-" in values:
if len(values) == 1:
content = sys.stdin.read()
sep = "\0" if "\0" in content else "\n"
values = list(map(existing_filepath, content.split(sep)))
else:
raise ValueError("Specification of both stdin & paths is not allowed.")
setattr(namespace, self.dest, values)
</code></pre>
<p><code>existing_filepath</code> is the function checking that a file path exists.</p>
<p>I am aware of the <code>argparse.FileType</code> feature, allowing me to convert a hyphen to the default <code>stdin</code> or <code>stdout</code>. But this treats a hyphen as one file-like argument instead of the source of an argument set.</p>
<hr />
<h2>Update 15.09.23</h2>
<p>The question touches upon <code>argparse</code> extensions or packages that add auxiliary functionality for parsing arguments.<br />
I know about the <code>shlex</code> module, which was suggested in the comments and allows handling strings as sets of shell arguments and vice versa. However, the question is not about how to split the standard input stream.</p>
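I'm not aware of a widely-used built-in for this, so as a hedged alternative to the custom `Action`, the hyphen handling can be pulled out into a small testable helper that runs before (or inside) the action; the `stream` parameter is an assumption added here to make it injectable.

```python
import sys

def expand_stdin(values, stream=None):
    """If the only positional value is '-', replace it with items read from stdin.

    Items are split on NUL if present, otherwise on newlines; empty trailing
    items are dropped. Mixing '-' with explicit values is rejected.
    """
    stream = stream if stream is not None else sys.stdin
    if "-" not in values:
        return list(values)
    if len(values) != 1:
        raise ValueError("Specification of both stdin & paths is not allowed.")
    content = stream.read()
    sep = "\0" if "\0" in content else "\n"
    return [item for item in content.split(sep) if item]
```

The action's `__call__` would then reduce to `setattr(namespace, self.dest, [existing_filepath(v) for v in expand_stdin(values)])`.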
| <python><python-3.x><stdin> | 2023-09-14 21:40:32 | 0 | 782 | Vovin |
77,108,320 | 7,700,802 | TypeError: can only join an iterable (testing invoking a SageMaker endpoint for computer vision classification) | <p>I built an out-of-box <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="nofollow noreferrer">CNN</a> and deployed using AWS Step Functions. I have these custom functions for the endpoint:</p>
<pre><code>def input_fn(data, content_type):
'''
take in image
'''
if content_type == 'application/json':
img = Image.open(io.BytesIO(data))
img_arr = np.array(img)
resized_arr = cv2.resize(img_arr, (img_size, img_size))
return resized_arr[None,...]
else:
        raise RuntimeError("{} type is not supported by this endpoint.".format(content_type))
def model_fn():
'''
Return model
'''
client = boto3.client('s3')
client.download_file(Bucket=s3_bucket_name, Key='model/kcvg_cv_model.h5', Filename='kcvg_cv_model.h5')
model = tf.keras.saving.load_model('kcvg_cv_model.h5')
return model
def predict_fn(img_dir):
model = model_fn()
data = input_fn(img_dir)
prob = model.predict(data)
return np.argmax(prob, axis=-1)
</code></pre>
<p>When I run this code</p>
<pre><code>from sagemaker.predictor import RealTimePredictor
from sagemaker.serializers import JSONSerializer
endpoint_name = 'odi-ds-belt-vision-cv-kcvg-endpoint-Final-Testing4'
# Read image into memory
payload = None
with open("117.jpg", 'rb') as f:
payload = f.read()
predictor = RealTimePredictor(endpoint_name = endpoint_name, sagemaker_session=sm_sess, serializer=JSONSerializer)
inference_response = predictor.predict(data=payload)
print (inference_response)
</code></pre>
<p>I get the following error</p>
<pre class="lang-none prettyprint-override"><code>The class RealTimePredictor has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[14], line 11
8 payload = f.read()
10 predictor = RealTimePredictor(endpoint_name = endpoint_name, sagemaker_session=sm_sess, serializer=JSONSerializer)
---> 11 inference_response = predictor.predict(data=payload)
12 print (inference_response)
File ~/anaconda3/envs/odi-ds/lib/python3.9/site-packages/sagemaker/base_predictor.py:177, in Predictor.predict(self, data, initial_args, target_model, target_variant, inference_id, custom_attributes)
129 def predict(
130 self,
131 data,
(...)
136 custom_attributes=None,
137 ):
138 """Return the inference from the specified endpoint.
139
140 Args:
(...)
174 as is.
175 """
--> 177 request_args = self._create_request_args(
178 data,
179 initial_args,
180 target_model,
181 target_variant,
182 inference_id,
183 custom_attributes,
184 )
185 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
186 return self._handle_response(response)
File ~/anaconda3/envs/odi-ds/lib/python3.9/site-packages/sagemaker/base_predictor.py:213, in Predictor._create_request_args(self, data, initial_args, target_model, target_variant, inference_id, custom_attributes)
207 args["EndpointName"] = self.endpoint_name
209 if "ContentType" not in args:
210 args["ContentType"] = (
211 self.content_type
212 if isinstance(self.content_type, str)
--> 213 else ", ".join(self.content_type)
214 )
216 if "Accept" not in args:
217 args["Accept"] = self.accept if isinstance(self.accept, str) else ", ".join(self.accept)
TypeError: can only join an iterable
</code></pre>
<p>I am guessing there is something I am doing wrong in the <code>input_fn</code>, but I am not exactly sure. What could it be?</p>
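One hedged guess, based on the traceback rather than anything in `input_fn`: `serializer=JSONSerializer` passes the serializer <em>class</em>, not an instance, so `self.content_type` resolves to a `property` object instead of a string and `", ".join(...)` fails. A minimal stand-in (`FakeSerializer` is hypothetical, only for illustration) shows the mechanism:

```python
class FakeSerializer:
    """Stand-in for a sagemaker-style serializer (hypothetical, illustrative only)."""
    CONTENT_TYPE = "application/json"

    @property
    def content_type(self):
        return self.CONTENT_TYPE

# accessing the property on the class yields the property object, not a str,
# which is why code like ", ".join(self.content_type) then raises TypeError
assert not isinstance(FakeSerializer.content_type, str)
assert FakeSerializer().content_type == "application/json"
```

If that diagnosis holds, the fix would be `serializer=JSONSerializer()` (note the parentheses).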
| <python><typeerror><amazon-sagemaker> | 2023-09-14 20:55:20 | 1 | 480 | Wolfy |
77,108,287 | 5,890,892 | Python connection error when called through docker | <p>I have this very simple python code:</p>
<pre><code>import psycopg2
postgres_credentials = {
"host": "postgres_mcdonalds",
"database_name": "postgres",
"user": "postgres",
"password": "password"
}
filename = "FastFoodNutritionMenu.csv"
conn = psycopg2.connect(
host=postgres_credentials["host"],
database=postgres_credentials["database_name"],
user=postgres_credentials["user"],
password=postgres_credentials["password"],
port="5432",
)
print("Success!")
</code></pre>
<p>This script is called main.py, and its Dockerfile is:</p>
<pre><code>FROM python:3
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
</code></pre>
<p>I would like to connect my Python script to a Postgres database that was built and brought up previously:</p>
<pre><code>services:
postgres_mcdonalds:
image: postgres:14.5
environment:
POSTGRES_PORT: "5432"
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "password"
PORT : "5432"
TZ: America/Sao_Paulo
ports:
- "5432:5432"
volumes:
- postgres_volume:/var/lib/postgres
volumes:
postgres_volume:
</code></pre>
<p>I don't know why, but when I run the container built from my Dockerfile, I receive the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "//main.py", line 14, in <module>
conn = psycopg2.connect(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
</code></pre>
<p>To build and run my image, I ran:</p>
<pre><code>docker build -t test_postgres .
docker run test_postgres
</code></pre>
<p>Why doesn't the Python code work when I call it through Docker?
If I run <code>python main.py</code> directly on my host, "Success!" is printed as expected. What am I doing wrong?</p>
<p>Thank you so much, have a nice day!</p>
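A hedged observation about the setup (not verified against this exact project): `docker run test_postgres` puts the container on Docker's default bridge network, where the Compose service name `postgres_mcdonalds` does not resolve (and the traceback's mention of `localhost` suggests the image may also have been built from an older `main.py`, so a rebuild is worth trying). One sketch is to run the script as a service on the same Compose network — the `loader` service name below is hypothetical:

```yaml
# sketch: add the script as a service in the same docker-compose.yml
# so it shares the network with postgres_mcdonalds
services:
  loader:
    build: .            # builds the Dockerfile from the question
    depends_on:
      - postgres_mcdonalds
```

Alternatively, `docker run --network <project>_default test_postgres` attaches a standalone container to the Compose network, where `<project>` is the Compose project name.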
| <python><postgresql><docker><docker-compose> | 2023-09-14 20:49:32 | 0 | 439 | victorcd |
77,108,230 | 774,133 | Plot multiple lines with plotnine | <p>Please consider this code for plotting multiple lines:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
x = [1,2,3]
y = [ [30, 4, 50], [300,400,500], [350,450,550] ]
plt.plot(x, y)
</code></pre>
<p>that produces:</p>
<p><a href="https://i.sstatic.net/0FjB1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0FjB1.png" alt="enter image description here" /></a></p>
<p>I could not figure out how to do it in plotnine. So I asked a famous LLM, received a complex answer that I simplified as follows:</p>
<pre><code>import numpy as np
import plotnine as p9
import pandas as pd
import matplotlib.pyplot as plt
xx = np.array(x * len(y))
yy = np.ravel(y)
yyy = [val for sublist in y for val in sublist]
gg = [i+1 for i in range(len(y)) for _ in range(len(x))]
data = pd.DataFrame({'x':xx, 'y':yy, 'gg':gg})
plot = (
p9.ggplot(data, p9.aes(x='x', y='y', color='factor(gg)')) +
p9.geom_line()
)
plot.draw(True)
</code></pre>
<p>that produces:
<a href="https://i.sstatic.net/9HY8k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9HY8k.png" alt="enter image description here" /></a></p>
<p>The two images are different and the correct one is the first, built by matplotlib.</p>
<p>So the question: how am I supposed to do this simple plot with plotnine?</p>
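One hedged explanation for the mismatch: `plt.plot(x, y)` draws one line per <em>column</em> of `y`, while the LLM's long-form frame grouped by <em>row</em>. A sketch that builds the long-form data column-major, ready for `plotnine`:

```python
import numpy as np
import pandas as pd

x = [1, 2, 3]
y = [[30, 4, 50], [300, 400, 500], [350, 450, 550]]

# matplotlib's plt.plot(x, y) draws one line per COLUMN of y,
# so flatten column-major before handing the frame to plotnine
arr = np.array(y)
data = pd.DataFrame({
    "x": np.tile(x, arr.shape[1]),
    "y": arr.T.ravel(),
    "g": np.repeat(np.arange(arr.shape[1]), len(x)),
})
print(data)
# then: p9.ggplot(data, p9.aes('x', 'y', color='factor(g)')) + p9.geom_line()
```

The first line is then `[30, 300, 350]` (the first column of `y`), matching the matplotlib plot.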
| <python><plotnine> | 2023-09-14 20:36:57 | 2 | 3,234 | Antonio Sesto |
77,107,910 | 1,669,860 | Do parentheses in a class definition do anything? | <p>I accidentally included parentheses in my class definition:</p>
<pre><code>class Crawler():
# ...
</code></pre>
<p>It seems like I'm supposed to do this:</p>
<pre><code>class Crawler:
# ...
</code></pre>
<p>Do those erroneous parentheses do anything? Or does Python ignore them?</p>
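For what it's worth, a quick check confirms the two spellings are interchangeable in Python 3 — empty parentheses are redundant but harmless, and both classes inherit from `object`:

```python
class WithParens():
    pass

class WithoutParens:
    pass

# both forms behave identically in Python 3
assert WithParens.__bases__ == WithoutParens.__bases__ == (object,)
```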
| <python> | 2023-09-14 19:34:57 | 1 | 26,307 | Kayce Basques |
77,107,814 | 5,924,264 | try; finally without except? | <p>Suppose I'm doing some sql query:</p>
<pre><code>try:
some_cursor.execute(sql_query)
finally:
some_cursor.close()
</code></pre>
<p>Is there any difference from the above with:</p>
<pre><code>some_cursor.execute(sql_query)
some_cursor.close()
</code></pre>
<p>They seem equivalent since there's no exception handling.
I do recall reading that if <code>some_cursor.execute(sql_query)</code> raises an error, then the cursor may not be closed properly in the latter case, but the former's <code>finally</code> block would ensure it is closed?</p>
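That recollection is correct, and it is easy to demonstrate with a small stand-in for the cursor (no database needed): `finally` runs on both the happy path and the failure path, while the plain two-line version would skip the close when `execute` raises.

```python
events = []

def run_query(should_fail):
    """Stand-in for execute/close: record what ran, in order."""
    try:
        events.append("execute")
        if should_fail:
            raise RuntimeError("query failed")
    finally:
        events.append("close")   # runs whether or not execute raised

run_query(False)
assert events == ["execute", "close"]

try:
    run_query(True)
except RuntimeError:
    pass
# close still happened before the exception propagated
assert events == ["execute", "close", "execute", "close"]
```

So the two forms coincide only when `execute` never raises; with `finally`, the cursor is closed either way.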
| <python><exception><try-catch> | 2023-09-14 19:16:37 | 1 | 2,502 | roulette01 |
77,107,787 | 9,415,280 | How to iterate inside a tf.data.Dataset to stack the features | <p>Starting with this code, which works properly:</p>
<pre><code># loading csv
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=filename,
num_parallel_reads=2,
batch_size=128,
num_epochs=1,
label_name='streamflow',
select_columns=keep_columns,
shuffle_buffer_size=10000,
header=True,
field_delim=','
)
def preprocess_fn(features, label):
# Normalize the features (example: scaling to [0, 1])
features['total_precipitation_sum'] /= 100.0
features['temperature_2m_min'] /= 100.0
features['temperature_2m_max'] /= 100.0
features['snow_depth_water_equivalent_max'] /= 100.0
-----> # Create a 'main_inputs' feature by stacking the selected columns
features['main_inputs'] = tf.stack([
features['total_precipitation_sum'],
features['temperature_2m_min'],
features['temperature_2m_max'],
features['snow_depth_water_equivalent_max']
], axis=-1)
return {'main_inputs': features['main_inputs']}, label
dataset = dataset.map(preprocess_fn)
</code></pre>
<p>In the "Create a 'main_inputs' feature by stacking the selected columns" section:</p>
<p>the feature names are hard-coded but will change in some cases. How can I automate detecting all the features and stacking them?</p>
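As a hedged sketch, the key-discovery part is framework-independent: derive a stable (sorted) key list from the `features` dict and stack whatever is there, excluding any keys that should not be inputs. The demo below uses plain lists so it runs without TensorFlow; inside `preprocess_fn` the `stack_fn` would be `lambda ts: tf.stack(ts, axis=-1)`.

```python
def stack_features(features, stack_fn, exclude=()):
    """Stack all feature values in a stable (sorted) key order.

    `stack_fn` is the stacking callable, e.g. lambda ts: tf.stack(ts, axis=-1).
    """
    keys = sorted(k for k in features if k not in exclude)
    return stack_fn([features[k] for k in keys])

# stand-in demo with plain lists instead of tensors
demo = {"b": [2], "a": [1], "label_ish": [0]}
print(stack_features(demo, list, exclude=("label_ish",)))
```

Sorting the keys matters: `tf.data` feature dicts carry no meaningful order, and the model needs the same column order on every run.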
| <python><tensorflow><tf.data.dataset> | 2023-09-14 19:11:40 | 1 | 451 | Jonathan Roy |
77,107,778 | 3,825,948 | Using a Load Balancer with Twilio for IVR App | <p>I'm architecting an IVR app that will be built using a Python framework (e.g. Flask, etc.). The app receives an audio stream from Twilio via Websockets when a user calls a designated number that is defined within the Twilio user dashboard. Concurrently, Twilio also makes POST requests to a webhook provided by me. How do I add a load balancer to work with Twilio so in the event that a started call overwhelms resources on 1 containerized instance of the app, the load balancer transfers that call with all its state to another containerized app instance without the user noticing anything? (seamless to the user) Has anyone done this before? If so, could you please share your experience. Much appreciated. Thanks.</p>
| <python><websocket><twilio><load-balancing><ivr> | 2023-09-14 19:09:55 | 1 | 937 | Foobar |
77,107,764 | 4,575,197 | Calculate the predicted value based on coefficients and constant in Python | <p>I have the coefficients and the constant (alpha). I want to multiply and add the values together as in this example (it has to be done for 300,000 rows).</p>
<blockquote>
<p>Prediction = constant + (valOfRow1 * col1) + (-valOfRow1 *
col2) + (-valOfRow1 * col3) + (valOfRow1 *
col4) + (valOfRow1 * col5)</p>
<p>Prediction = 222 + (555-07 * col1) + (-555-07 * col2) + (-66* col3) +
(55* col4) + (777* col5)</p>
</blockquote>
<p>I have a one-row dataframe which contains the coefficients and constant like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>col1</th>
<th>col2</th>
<th>col3</th>
<th>col4</th>
<th>col5</th>
<th>constant</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>2.447697e-07</td>
<td>-5.214072e-07</td>
<td>-0.000003</td>
<td>0.000006</td>
<td>555</td>
<td>222</td>
</tr>
</tbody>
</table>
</div>
<p>and another dataframe with the exact same column names but with monthly values:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
<th>col4</th>
<th>col5</th>
</tr>
</thead>
<tbody>
<tr>
<td>16711</td>
<td>17961</td>
<td>0</td>
<td>20</td>
<td>55</td>
</tr>
</tbody>
</table>
</div>
<p>I already tried to sort the columns and then take their product with <code>df.dot</code>:</p>
<pre><code>selected_columns = selected_columns.sort_index(axis=1)
# the 21st column (index 20, counting from 0) of mean_coefficients is the constant, so I use the other columns
selected_columns['predicted_Mcap']=selected_columns.dot(mean_coefficients.iloc[:,0:20])+mean_coefficients['const']
</code></pre>
<p>The reason I use <code>mean_coefficients.iloc[:,0:20]</code> is that I don't want to include <code>const</code> in the multiplication; it just needs to be added at the end.</p>
<p>So I calculated the predicted value, but when I checked it in Excel the value wasn't the same.</p>
<p>Am I calculating it right?</p>
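One hedged suspicion: positional selection (`iloc[:,0:20]`) silently relies on the two frames having identical column order, which is a common source of mismatches against a hand check. A sketch (toy numbers, hypothetical column names) that aligns by column <em>name</em> instead, so order cannot matter:

```python
import pandas as pd

# toy coefficients and values (hypothetical numbers)
coeffs = pd.Series({"col1": 2.0, "col2": -1.0, "const": 10.0})
values = pd.DataFrame({"col2": [4.0], "col1": [3.0]})  # deliberately unordered

# selecting coefficients BY NAME (not by iloc position) guarantees each value
# is multiplied by the matching coefficient even if column order differs
pred = values.mul(coeffs.drop("const")).sum(axis=1) + coeffs["const"]
print(pred.iloc[0])
```

Here 3·2 + 4·(−1) + 10 = 12 regardless of how the columns happen to be ordered, which is easy to cross-check in Excel.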
| <python><pandas><regression><prediction><coefficients> | 2023-09-14 19:08:01 | 2 | 10,490 | Mostafa Bouzari |
77,107,744 | 11,370,582 | Use NLP (NLTK) to identify groups of phrases in a python dataframe | <p>I have a table containing diagnosis information for a large group of patients. I would like to determine what the most common groupings of those diagnoses are, for example is it "Bloaty Head Syndrome" and "Slack Tongue", or "Broken Wind", "Chronic Nosehair" and "Corrugated Ankles"... or some other combination.</p>
<p>Data is structured like so:</p>
<pre><code>import pandas as pd
import numpy as np
# List of ids
ids = ['id1', 'id2', 'id3','id4','id5']
# List of sample sentences
diagnosis = ["Broken Wind","Chronic Nosehair","Corrugated Ankles","Discrete Itching"]
# Create dataframe
df = pd.DataFrame({'id': ids})
# Generate list of sentences for each id
df['diagnosis'] = df['id'].apply(lambda x: np.random.choice(diagnosis, 5).tolist())
# Explode into separate rows
df = df.explode('diagnosis')
print(df)
</code></pre>
<p>For example if both <code>id2</code> and <code>id5</code> contain <code>"Broken Wind" and Chronic Nosehair"</code> that would be 2 of that combination. If <code>id1, id3 and id4</code> contain <code>"Chronic Nosehair","Corrugated Ankles", and "Discrete Itching"</code> that would be 3 of that combination.</p>
<p>With the goal of determining which combination is most common.</p>
<p>I'm wondering is there an nlp library such as <code>NLTK</code>, or a method, that can be used to process data stored like this in a pandas dataframe? Most of what I have been able to find so far is geared toward sentiment analysis or analyzing single words as opposed to phrases...</p>
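A hedged aside: since the diagnoses are fixed category labels rather than free text, this may not need NLP at all — `itertools.combinations` plus a `Counter` over each patient's diagnosis set answers "which combination is most common" directly (toy data below mirrors the question; the ids are hypothetical):

```python
from collections import Counter
from itertools import combinations

def count_combinations(patient_diagnoses, size=2):
    """Count how often each diagnosis combination of `size` co-occurs in a patient."""
    counts = Counter()
    for diags in patient_diagnoses.values():
        for combo in combinations(sorted(set(diags)), size):
            counts[combo] += 1
    return counts

# toy data mirroring the question (hypothetical ids/diagnoses)
groups = {
    "id1": ["Chronic Nosehair", "Corrugated Ankles", "Discrete Itching"],
    "id2": ["Broken Wind", "Chronic Nosehair"],
    "id5": ["Broken Wind", "Chronic Nosehair"],
}
print(count_combinations(groups).most_common(1))
```

From the dataframe in the question, `patient_diagnoses` would come from `df.groupby('id')['diagnosis'].apply(set)`; varying `size` counts pairs, triples, and so on.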
| <python><pandas><machine-learning><nlp><nltk> | 2023-09-14 19:04:49 | 1 | 904 | John Conor |
77,107,680 | 4,451,893 | Find all the rows in a pandas dataframe where list values repeat in a column | <p>I have a dataframe which looks something like this, where I have already sorted it by <code>Page Count</code>, and the column <code>Split Parties</code> contains lists of elements:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>DocSetID</th>
<th>Duplicate Status</th>
<th>Page Count</th>
<th>split_counterparties</th>
</tr>
</thead>
<tbody>
<tr>
<td>8032866</td>
<td>Primary</td>
<td>39</td>
<td>['Bass Pro LLC']</td>
</tr>
<tr>
<td>8032866</td>
<td>Duplicate</td>
<td>39</td>
<td>['Bass Pro LLC']</td>
</tr>
<tr>
<td>8040900</td>
<td>Primary</td>
<td>39</td>
<td>['Bass Pro LLC']</td>
</tr>
<tr>
<td>8009326</td>
<td>Primary</td>
<td>39</td>
<td>['Blackrock Inc']</td>
</tr>
<tr>
<td>8014340</td>
<td>Primary</td>
<td>39</td>
<td>['Booking Holdings Inc']</td>
</tr>
<tr>
<td>8010961</td>
<td>Primary</td>
<td>39</td>
<td>['Cadence Design Systems Inc']</td>
</tr>
<tr>
<td>8010932</td>
<td>Primary</td>
<td>39</td>
<td>['Cadence Design Systems Inc']</td>
</tr>
<tr>
<td>8019492</td>
<td>Primary</td>
<td>39</td>
<td>['Cartavi LLC']</td>
</tr>
</tbody>
</table>
</div>
<p>I want to select only the rows where the <code>Split Parties</code> values are repeated. So, for example, based on these criteria I would get back:</p>
<ul>
<li>rows containing Bass Pro LLC</li>
<li>rows containing Cadence Design Systems.</li>
</ul>
<p>So far I have gotten to sorting the dataframe by Page Count, and I am really stuck on how to do this further filtering based on identical list elements in successive rows.</p>
<p>I would really appreciate some help or guidance in this regard.</p>
<p><strong>Edit:</strong> Added more relevant data examples</p>
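As a hedged sketch: lists are unhashable, so pandas' `duplicated` cannot work on the column directly, but converting each list to a tuple gives a hashable key, and `duplicated(keep=False)` then flags every row whose key occurs more than once (toy data below reuses values from the question's table):

```python
import pandas as pd

df = pd.DataFrame({
    "DocSetID": [8032866, 8032866, 8040900, 8009326, 8010961, 8010932],
    "split_counterparties": [
        ["Bass Pro LLC"], ["Bass Pro LLC"], ["Bass Pro LLC"],
        ["Blackrock Inc"],
        ["Cadence Design Systems Inc"], ["Cadence Design Systems Inc"],
    ],
})

# lists are unhashable, so build a hashable key first, then keep every row
# whose key occurs more than once anywhere in the frame
key = df["split_counterparties"].apply(tuple)
repeated = df[key.duplicated(keep=False)]
print(repeated)
```

With the toy data this keeps the three Bass Pro rows and the two Cadence rows, and drops the singleton Blackrock row — matching the expected output in the question.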
| <python><python-3.x><pandas><dataframe><data-analysis> | 2023-09-14 18:52:15 | 4 | 1,355 | sanster9292 |
77,107,614 | 11,725,056 | How to convert a string to Json (or dict) properly when it has "\uxxx", Emojis, LaTeX etc in Python? | <pre><code>X = """{"botResponse": "Great question! π So you want to find out if \( \sqrt{44} \) is closer to 6 or 7. Let's solve this together, step by step! π΅οΈββοΈ
Step 1: Find the squares of 6 and 7. So, \( 6^2 = 36 \) and \( 7^2 = 49 \).
Step 2: Now, observe that 44 is closer to 49 than it is to 36. π€
Step 3: So, \( \sqrt{44} \) will be closer to \( \sqrt{49} \) which equals 7. π
Therefore, \( \sqrt{44} \) is closer to 7! π Hope this helps!",
"quickReplies": ["Could you solve another problem for me? π", "Give me a similar question to practice? π―", "Explain this to me using a fun example that's easy to understand π€"]}"""
</code></pre>
<p>I want to convert this to a proper <code>json</code> or <code>dict</code> format. How can I do as I've tried everything?</p>
<pre><code>json.loads(X) # Error with and without strict = True / False
json.loads(repr(X)) # Error
json.loads(re.sub("\\", "SOMETHINGELSE", X)) # Still error
</code></pre>
<p>I've done the same above with <code>ast.literal_eval()</code> and still getting error.</p>
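One hedged diagnosis: sequences like `\(` and `\s` are not legal JSON escapes, so `json.loads` fails before it ever gets to the content, and the raw newlines inside the string values are a second, separate violation. A sketch on a reduced stand-in string: double every backslash that does not start a legal escape, then parse with `strict=False` to tolerate the control characters.

```python
import json
import re

# reduced stand-in for the string in the question: "\(" and "\s" are
# not valid JSON escapes, which is why json.loads fails outright
X = r'{"botResponse": "closer to \( \sqrt{44} \)"}'

# double every backslash that does not begin a legal JSON escape sequence
fixed = re.sub(r'\\(?![\\"/bfnrtu])', r"\\\\", X)

# strict=False additionally tolerates raw control chars (e.g. newlines)
# inside string values, which the original multi-line string also contains
parsed = json.loads(fixed, strict=False)
print(parsed["botResponse"])
```

The lookahead deliberately leaves `\"`, `\\`, `\n`, `\uXXXX`, etc. untouched, so valid escapes (and the emoji, which are plain UTF-8) pass through unchanged.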
| <python><json><python-3.x><python-re> | 2023-09-14 18:41:02 | 1 | 4,292 | Deshwal |
77,107,483 | 4,418,481 | Change QHBoxLayout border color when widget clicked | <p>I'm trying to create a simple PyQt app that has a single button and a QVBoxLayout. When a user clicks the button, I want it to add a row (QHBoxLayout) containing multiple widgets such as a QLineEdit, a QLabel, and a QPushButton, so users can add multiple rows. Once the user clicks on any of those rows, I want the whole row to change its border color to show which row is selected.</p>
<p>I tried the following code:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QPushButton, QHBoxLayout, QLineEdit
from PyQt5 import QtCore
class MyHBoxLayout(QWidget):
selected = QtCore.pyqtSignal()
def __init__(self, parent=None):
super().__init__(parent)
self.layout = QHBoxLayout()
self.setLayout(self.layout)
self.setStyleSheet("border: 1px solid black;")
self.line_edit = QLineEdit()
self.button = QPushButton('Remove')
self.layout.addWidget(self.line_edit)
self.layout.addWidget(self.button)
self.button.clicked.connect(self.removeMe)
self.line_edit.installEventFilter(self)
def removeMe(self):
self.setParent(None)
self.deleteLater()
def mousePressEvent(self, event):
self.selected.emit()
def eventFilter(self, source, event):
if source == self.line_edit and event.type() == QtCore.QEvent.MouseButtonPress:
self.selected.emit()
return super().eventFilter(source, event)
class MyApp(QMainWindow):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.setWindowTitle('Highlight Selected QHBoxLayout')
self.setGeometry(100, 100, 400, 300)
central_widget = QWidget(self)
self.setCentralWidget(central_widget)
central_layout = QVBoxLayout()
central_widget.setLayout(central_layout)
add_button = QPushButton('Add QHBoxLayout', self)
central_layout.addWidget(add_button)
add_button.clicked.connect(self.addHBoxLayout)
self.container_widget = QWidget()
container_layout = QVBoxLayout()
self.container_widget.setLayout(container_layout)
central_layout.addWidget(self.container_widget)
self.selected_hbox = None
def addHBoxLayout(self):
hbox = MyHBoxLayout()
container_layout = self.container_widget.layout()
container_layout.addWidget(hbox)
hbox.selected.connect(self.onHBoxLayoutSelected)
def onHBoxLayoutSelected(self):
sender = self.sender()
if self.selected_hbox:
self.selected_hbox.setStyleSheet("border: 2px solid black;")
sender.setStyleSheet("border: 2px solid blue;")
self.selected_hbox = sender
if __name__ == '__main__':
app = QApplication(sys.argv)
window = MyApp()
window.show()
sys.exit(app.exec_())
</code></pre>
<p>But it changes the border of all of the widgets separately and not the whole row.</p>
<p>How can I fix it?</p>
<p>(I have another bug that crashes the app once I click on a QHBoxLayout after one row is removed, but I am focusing on the border color at the moment.)</p>
<p>Thank you</p>
| <python><pyqt><qtstylesheets><pyqt6> | 2023-09-14 18:14:50 | 1 | 1,859 | Ben |
77,107,430 | 11,117,255 | ImportError: The 'enchant' C library was not found and maybe needs to be installed. New | <p>I want to import enchant, but I get this error</p>
<pre><code>Python 3.9.7 (default, Sep 16 2021, 08:50:36)
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import enchant
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.9/site-packages/enchant/__init__.py", line 81, in <module>
from enchant import _enchant as _e
File "/opt/anaconda3/lib/python3.9/site-packages/enchant/_enchant.py", line 157, in <module>
raise ImportError(msg)
ImportError: The 'enchant' C library was not found and maybe needs to be installed.
See https://pyenchant.github.io/pyenchant/install.html
for details
</code></pre>
<p>I used these commands</p>
<pre><code>brew install enchant
pip install pyenchant
pip install enchant
</code></pre>
| <python><pyenchant><enchant> | 2023-09-14 18:05:20 | 1 | 2,759 | Cauder |
77,107,292 | 3,165,644 | Stream or chunk the huge text value from postgres using python psycopg2 (single row, single column) | <p>I am dynamically creating a huge SQL text, more than 1 GB, using a select statement. While executing SQL to read that single column value (the generated SQL), Postgres fails to return the output.</p>
<pre><code>SQL Error [54000]: ERROR: out of memory
Detail: Cannot enlarge string buffer containing 1073741741 bytes by 95 more bytes.
</code></pre>
<p>The output is just one row and one column, i.e. not n rows — just one huge string value. So I cannot use fetchmany or other cursor options, as the data (one column) does not fit. The below is already in place, but won't help in this case.</p>
<pre><code>with self.getConnection() as conn:
with conn.cursor(name=cur_name) as cursor:
cursor.itersize = chunksize
cursor.execute(sql)
while True:
results = cursor.fetchmany(chunksize)
.
.
.
</code></pre>
<p>I tried to store the column value as large object. i.e.</p>
<pre><code>CREATE TABLE lo_sql_data (
id bigserial not null ,
valuation_date date NOT NULL,
dynamic_sql oid NULL,
CONSTRAINT lo_sql_data_pkey PRIMARY KEY (id)
)
insert into lo_sql_data (valuation_date, dynamic_sql )
SELECT '2023-08-31'::date, lo_from_bytea(0, format(
$f$
SELECT <my dynamic query which generates a SQL very big>
$f$
)::bytea)
.
.
</code></pre>
<p>The above works well when the dynamic SQL output value is smaller, but anything bigger than the buffer size throws the above error.</p>
<p>The limitation is at the server side (Postgres), and I understand we cannot increase the string buffer value.</p>
<p>Is there a way to stream the string value as it gets generated, pass it to the client (Python), and then merge it into a single string in Python?</p>
<p>Or is there any other way for me to get the value of the dynamic SQL (which is to be executed later) out of the server?</p>
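Since the value is already stored as a large object (the `oid` column), one hedged sketch is to read it client-side in chunks through psycopg2's `connection.lobject` interface, which never materializes the whole string on the server side of the wire protocol:

```python
def stream_large_object(conn, oid, chunk_size=1024 * 1024):
    """Yield the contents of a PostgreSQL large object in chunks.

    Uses psycopg2's lobject interface; the caller can join the chunks
    (or write them to a file) on the client side.
    """
    lob = conn.lobject(oid, mode="rb")
    try:
        while True:
            data = lob.read(chunk_size)
            if not data:
                break
            yield data
    finally:
        lob.close()

# usage sketch:  sql_text = b"".join(stream_large_object(conn, row_oid)).decode()
```

Large-object reads must happen inside a transaction, so this assumes `conn` has one open (psycopg2's default behavior).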
| <python><postgresql><streaming><psycopg2> | 2023-09-14 17:42:50 | 1 | 603 | Mihir |
77,107,178 | 3,121,975 | Specify generic type based on constructor signature | <p>I have a function that takes a generic argument used for type-casting purposes (I know Python doesn't do those but humor me here). Essentially, this function can accept any type that has a constructor which can accept <code>self</code> alone. I want to do something like this:</p>
<pre><code>T = TypeVar("T") # Annotation here
def some_func(x: Type[T]) -> T:
return x(do_something_else())
</code></pre>
<p>Obviously, this is an over-simplified example, but I'm not sure how to specify the type here. <code>x</code> could be <code>str</code>, <code>int</code>, a class type or anything else.</p>
<p>If I were to annotate <code>x</code> with <code>Callable[[Any], T]</code> then I couldn't submit <code>int</code> or <code>str</code> as arguments because these aren't technically callables, they're types. However, if I submit <code>Type[T]</code> as an argument then mypy will complain that there are too many arguments for object, as described in <a href="https://github.com/python/mypy/issues/10343" rel="nofollow noreferrer">this GitHub issue</a>.</p>
<p>So, how do I annotate based on the signature of a type's <code>__init__</code> method?</p>
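One common workaround (a sketch, not necessarily the canonical answer): annotate with `Callable` taking the argument type that `do_something_else()` actually produces. To the best of my knowledge mypy treats class objects as callables with their `__init__`/constructor signature, so `int` and `str` are accepted where the signatures match, and this sidesteps the `Type[T]`-calls-`object()` complaint from the linked issue.

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def some_func(x: Callable[[str], T]) -> T:
    # stand-in for do_something_else(): here it just yields a string
    return x("42")

assert some_func(int) == 42
assert some_func(str) == "42"
```

For constructors with richer signatures, a `Protocol` with a typed `__call__` generalizes the same idea.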
| <python><typing> | 2023-09-14 17:25:12 | 1 | 8,192 | Woody1193 |
77,107,072 | 4,396,198 | Accessing Spotify User Data | <p>I have been reading through the docs trying to understand how I can see recently played tracks for other users. After reading the Authentication flow docs, I am still unclear on the general way to achieve this.</p>
<p>I have my client key and client secret. I understand I need the users to grant me access under the <code>user-read-recently-played</code> scope. My question is, how exactly does a user give me this permission and how do I access the user data from within the app?</p>
<p>Since there is no way to look up user ID's, does the OAuth module just keep track of which users have granted me authorization?</p>
| <python><oauth><spotipy> | 2023-09-14 17:04:47 | 0 | 2,107 | Sam CD |
77,107,057 | 6,379,197 | make model_averaging exclusive in federated learning using Python Threads | <p>I am creating <code>no_of_client</code> threads using the following code:</p>
<pre><code>sockets_thread = []
no_of_client = 1
all_data = b""
while True:
try:
for i in range(no_of_client):
connection, client_info = soc.accept()
print("\nNew Connection from {client_info}.".format(client_info=client_info))
socket_thread = SocketThread(connection=connection,
client_info=client_info,
buffer_size=1024,
recv_timeout=100)
sockets_thread.append(socket_thread)
for i in range(no_of_client):
sockets_thread[i].start()
sockets_thread[i].join()
except:
soc.close()
print("(Timeout) Socket Closed Because no Connections Received.\n")
break
</code></pre>
<p>In the run function, there are several pieces of code, as follows:</p>
<pre><code>class SocketThread(object):
def run(self):
while True:
received_data, status = self.recv()
if status == 0:
self.connection.close()
break
self.reply(received_data)
def reply(self, received_data):
model = SimpleASR()
        # all threads must average the model before going to the next line
model_instance = self.model_averaging(model, model_instance)
print("All threads completed model averging.")
#now do rest of the things
</code></pre>
<p>In the reply function, I call one function. I want to write this code in such a way that every thread proceeds to the next line only after all threads have called this function.</p>
<p>Every thread must average the model and then proceed to the next line. I understand that I have to use a Python condition variable. How can I do that?</p>
<p>The following call must be mutually exclusive:</p>
<pre><code>model_instance = self.model_averaging(model, model_instance)
</code></pre>
<p>Every thread will proceed to the next line after executing this piece of code.</p>
<p>I am writing this code as part of implementing a federated learning algorithm.
<a href="https://i.sstatic.net/Ta1FY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ta1FY.jpg" alt="enter image description here" /></a></p>
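A hedged sketch of the synchronization pattern, decoupled from the socket code: `threading.Barrier` gives the "no thread proceeds until all have arrived" behavior (what a condition variable would otherwise be hand-rolled into), and a `Lock` makes the averaging step itself mutually exclusive. `reply` below is a stand-in, not the class method from the question.

```python
import threading

NUM_CLIENTS = 3
barrier = threading.Barrier(NUM_CLIENTS)
order = []
lock = threading.Lock()

def reply(client_id):
    # ... per-client work (e.g. contributing to model averaging) would go here,
    # guarded by the lock so the shared update is mutually exclusive ...
    with lock:
        order.append(("before", client_id))
    barrier.wait()          # no thread passes until all NUM_CLIENTS arrive
    with lock:
        order.append(("after", client_id))

threads = [threading.Thread(target=reply, args=(i,)) for i in range(NUM_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every "before" event precedes every "after" event
befores = [i for i, (phase, _) in enumerate(order) if phase == "before"]
afters = [i for i, (phase, _) in enumerate(order) if phase == "after"]
assert max(befores) < min(afters)
```

One caveat against the accept-loop in the question: all the `SocketThread`s must be started before any of them is joined, otherwise the first thread blocks at the barrier while the main loop waits in `join()` and never starts the others.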
| <python><multithreading><mutex><condition-variable><federated-learning> | 2023-09-14 17:01:56 | 1 | 2,230 | Sultan Ahmed |
77,107,049 | 3,121,975 | Handling multiple instances of Click CliRunner | <p>I had a project that used the <a href="https://click.palletsprojects.com/en/8.1.x/" rel="nofollow noreferrer">click</a> library for CLI commands. For testing purposes, I had created a fixture that looked like this:</p>
<pre><code>@pytest.fixture(scope="module", name="runner")
def fixture_cli_runner():
"""Fixture for CLIRunner."""
return CliRunner()
</code></pre>
<p>This worked well as I only had one CLI within my repository. However, as they often do, scope has expanded so I now have to add a second CLI. My directory structure now looks like this:</p>
<pre><code>command_one:
- command_one.py
command_two:
- command_two.py
</code></pre>
<p>where each of these has a structure similar to this (this one is for <code>command_two.py</code>):</p>
<pre><code>@click.group()
def main():
"""Command two CLI."""
configure_log()
@main.command
def do_thing():
"""Do a thing."""
pass
</code></pre>
<p>Unfortunately, the shortcomings of <code>CliRunner</code> mean that these tests consistently fail because state is overwritten in between tests, even though I have it scoped at the module level. For example, I have the following test in <code>test_command_two.py</code>:</p>
<pre><code>def test_main(cli_runner):
"""Test the main function."""
result = cli_runner.invoke(command_two.main)
assert result.exit_code == 0
print(result.output)
assert (
result.output
== """Usage: main [OPTIONS] COMMAND [ARGS]...
Command two CLI.
Options:
--help Show this message and exit.
Commands:
do-thing Do a thing.
"""
)
</code></pre>
<p>However, this fails and prints the message shown for <code>command_one.py</code>.</p>
<p>Clearly, the issue is that <code>CliRunner</code> is failing to maintain separate state for each of the CLIs I'm testing because they're being tested in parallel. Does anyone know of a workaround for this or a way to ensure packages are tested serially?</p>
| <python><testing><concurrency> | 2023-09-14 17:00:45 | 0 | 8,192 | Woody1193 |
77,107,012 | 3,292,006 | How do I find coverage for subprocesses under test? | <p>Consider the following simple program:</p>
<pre class="lang-py prettyprint-override"><code># thingy.py
import sys
print("ready")
for line in sys.stdin:
print(line, end='')
</code></pre>
<p>If I want to unit-test the program, I can stub out the side-effecting functions <code>ready</code> and <code>print</code> easily enough.</p>
<p>However, if I want to perform an end-to-end test, I want to run the real program and test real responses to real inputs.</p>
<p>I can test it easily enough:</p>
<pre class="lang-py prettyprint-override"><code># test_thingy.py
import pytest
import subprocess
import sys
@pytest.fixture
def thingy_instance():
proc = subprocess.Popen(
[sys.executable, "-um", "thingy"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
encoding="utf-8")
for line in proc.stdout:
if "ready" in line:
break
yield proc
proc.terminate()
proc.wait()
def test_thingy(thingy_instance):
stdout, stderr = thingy_instance.communicate("Hello!")
assert stdout == "Hello!"
# more tests
</code></pre>
<p>However, if I try to generate coverage for this, it doesn't work:</p>
<pre><code>$ coverage run --source . -m pytest
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.1.3, pluggy-1.0.0
rootdir: /home/user/playground-local/python/coverage-test-stuff
plugins: Faker-16.6.1, smtpdfix-0.4.0, quarantine-2.0.0, cov-3.0.0, mock-3.8.2, docker-tools-3.1.3, postgresql-4.1.1
collected 1 item
test_thingy.py . [100%]
============================== 1 passed in 0.09s ===============================
$ coverage report
Name Stmts Miss Cover
------------------------------------
test_thingy.py 15 0 100%
thingy.py 4 4 0%
------------------------------------
TOTAL 19 4 79%
</code></pre>
<p>Running <code>coverage run -m thingy</code> generates coverage.</p>
<p>Running <code>coverage run --source . -m pytest</code> doesn't generate coverage.</p>
<p>Replacing the <code>Popen</code> command with a <code>coverage run -m</code> command doesn't generate coverage.</p>
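<p>For reference, my understanding from coverage.py's documentation on measuring subprocesses is that a child process only records data when <code>COVERAGE_PROCESS_START</code> is present in its environment and <code>coverage.process_startup()</code> runs at interpreter startup (e.g. via a <code>sitecustomize.py</code>). Here is a minimal, runnable sketch of just the environment-propagation half of that; the <code>.coveragerc</code> filename is a placeholder:</p>

```python
import os
import subprocess
import sys

# Pass COVERAGE_PROCESS_START down to the child, as coverage.py's
# subprocess support expects; ".coveragerc" is a placeholder name here.
env = dict(os.environ, COVERAGE_PROCESS_START=".coveragerc")

child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['COVERAGE_PROCESS_START'])"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())
```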
| <python><pytest><coverage.py> | 2023-09-14 16:55:22 | 1 | 869 | Marcus Harrison |
77,106,975 | 9,571,463 | Python json.dumps() with writing JSON string to CSV | <p>I am writing to a CSV file, where one of the column values is a large JSON string. I am trying to understand how to escape the comma values in the JSON so it does not write them as individual columns in the CSV file.</p>
<p>I need this solution without using the traditional csv library as I want to understand the writing of data to files better.</p>
<pre><code>import json
# Emulating the process
d: dict = {"foo":"bar", "py": "pi"}
json_list: list[str] = [json.dumps(d)]
with open("file_test.csv", mode='a+') as f:
f.write('col_a,')
f.write('col_b,'+ '\n')
f.write(','.join(json_list)+'\n')
</code></pre>
<p>Output is a CSV like:</p>
<pre><code>col_a,col_b
"{""foo"": ""bar"""," ""py"": ""pi""}"
</code></pre>
<p>I only want a single column filled (col_b is empty). Do I have to replace every comma with an escaped comma in the json string (the strings can be several MB large) or is there a more optimal way using a built-in json method?</p>
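<p>For context, the quoting I'm trying to reproduce by hand is (I believe) the RFC 4180 rule: wrap the whole field in double quotes and double any embedded double quotes, after which embedded commas are harmless. A sketch of that, without the <code>csv</code> module:</p>

```python
import json

def csv_quote(field: str) -> str:
    # RFC 4180-style quoting: wrap the field in double quotes and
    # double any embedded double quotes; commas inside are then safe.
    return '"' + field.replace('"', '""') + '"'

d = {"foo": "bar", "py": "pi"}
line = "," + csv_quote(json.dumps(d))  # col_a empty, whole JSON in col_b
print(line)
```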
| <python><json><csv> | 2023-09-14 16:48:57 | 1 | 1,767 | Coldchain9 |
77,106,926 | 4,451,315 | Non-hacky way to share a boolean state between classes? | <p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>class Cat:
def __init__(self, var, n_calls=None):
self.var = var
if n_calls is None:
self.n_calls = []
else:
self.n_calls = n_calls
def greet(self):
if self.n_calls:
raise ValueError('Already greeted, sorry')
self.n_calls.append(1)
print('hello')
def change_var(self, new_var):
return Cat(new_var, n_calls=self.n_calls)
</code></pre>
<p>A given cat can only greet once. But also, if any <code>Cat</code> derived from a given <code>Cat</code> greets, then no other <code>Cat</code> derived from that same initial <code>Cat</code> can greet.</p>
<p>Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>cat = Cat(3)
new_cat = cat.change_var(4)
other_cat = cat.change_var(5)
cat2 = Cat(6)
new_cat.greet() # passes
cat2.greet() # passes
other_cat.greet() # raises
</code></pre>
<p><code>new_cat.greet()</code> passes, because it's the first time that <code>new_cat</code> greets. <code>cat2.greet()</code> also passes, because it's the first time <code>cat2</code> greets.</p>
<p>But <code>other_cat.greet()</code> fails, because <code>other_cat</code> and <code>new_cat</code> were both derived from the same <code>Cat</code>, and <code>new_cat</code> has already greeted.</p>
<p>The code I've written works for what I'm trying to do, but using a list to share state feels very hacky.</p>
<p>Is there a non-hacky way to do this?</p>
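<p>For what it's worth, the least-hacky variant I've come up with myself is to give the shared state a name instead of abusing a bare list: a tiny mutable holder object shared by all derived cats (<code>_GreetState</code> is just a name I made up):</p>

```python
class _GreetState:
    """Mutable holder shared by a Cat and every Cat derived from it."""
    def __init__(self):
        self.greeted = False

class Cat:
    def __init__(self, var, _state=None):
        self.var = var
        # Derived cats pass the same holder in, so they all see one flag.
        self._state = _state if _state is not None else _GreetState()

    def greet(self):
        if self._state.greeted:
            raise ValueError('Already greeted, sorry')
        self._state.greeted = True
        print('hello')

    def change_var(self, new_var):
        return Cat(new_var, _state=self._state)
```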
| <python><class><inheritance><state> | 2023-09-14 16:44:02 | 2 | 11,062 | ignoring_gravity |
77,106,782 | 8,868,699 | removing back dots dir from path's prefix | <p>I have two paths.</p>
<p><code>path1 = /users/fida/data_lake/data/archived/09142023</code></p>
<p><code>path2 = /users/fida/data_lake/data/localpublished/SPTTES/End_to_End_Edit_Schedule/2022-08-03_11_22_03.kp</code></p>
<p>The output I'm trying to get after combining both paths is:
<code>/users/fida/data_lake/data/archived/09142023/localpublished/SPTTES/End_to_End_Edit_Schedule/2022-08-03_11_22_03.kp</code></p>
<p>I have tried <code>os.path.relpath</code>, but I get dots in the prefix, which mess up the path:</p>
<p><code>..\..\localpublished\SPTTES\End_to_End_Edit_Schedule\2022-08-03_11_22_03.kp</code></p>
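<p>What I'm effectively after, I think, is: strip the shared prefix from <code>path2</code> and join the remainder onto <code>path1</code>. A sketch of that idea using <code>os.path.commonpath</code> (shown with POSIX-style paths):</p>

```python
import os.path

path1 = "/users/fida/data_lake/data/archived/09142023"
path2 = ("/users/fida/data_lake/data/localpublished/SPTTES/"
         "End_to_End_Edit_Schedule/2022-08-03_11_22_03.kp")

common = os.path.commonpath([path1, path2])   # shared prefix of both paths
tail = os.path.relpath(path2, common)         # path2 without that prefix
combined = os.path.join(path1, tail)          # no '..' components left
print(combined)
```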
| <python><pathlib><os.path> | 2023-09-14 16:22:17 | 2 | 1,649 | Hayat |
77,106,702 | 6,460 | How do I correctly typehint `cv2` and `numpy` interaction? | <p>I'm trying to typehint the following code which is supposed to convert PNG images (as bytes) to NumPy arrays:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import cv2
def png_to_numpy(blob: bytes) -> np.typing.NDArray[np.uint8]:
nparr = np.frombuffer(blob, np.uint8)
return cv2.imdecode(nparr, cv2.IMREAD_UNCHANGED)
</code></pre>
<p>However, <code>mypy</code> does not accept this code with the message <code>Returning Any from function declared to return "ndarray[Any, dtype[unsignedinteger[_8Bit]]]"</code>. I fail to see how I'm supposedly returning <code>Any</code>, especially since <code>imdecode</code> is supposed to be returning <code>cv2.typing.MatLike</code>.</p>
<p>Where is my mistake and how do I typehint this properly?</p>
<p>EDIT: <code>cv2.imdecode</code> has two overloads defined in the typings:</p>
<pre><code>@typing.overload
def imdecode(buf: cv2.typing.MatLike, flags: int) -> cv2.typing.MatLike: ...
@typing.overload
def imdecode(buf: UMat, flags: int) -> cv2.typing.MatLike: ...
</code></pre>
<p>However, if I reveal the type of <code>cv2.imdecode</code>, <code>cv2.typing.MatLike</code> is missing and replaced by <code>Any</code>:</p>
<pre><code>Revealed type is "Overload(def (buf: Any, flags: builtins.int) -> Any, def (buf: cv2.UMat, flags: builtins.int) -> Any)"
</code></pre>
<p>I also can't use <code>cv2.typing.MatLike</code> directly, only via <code>from cv2.typing import MatLike</code>. Seems like this might be the culprit.</p>
<p>EDIT2: opened an <a href="https://github.com/opencv/opencv-python/issues/901" rel="nofollow noreferrer">issue</a> upstream in hope of clarification.</p>
| <python><numpy><opencv><python-typing><mypy> | 2023-09-14 16:10:15 | 0 | 8,833 | Nikolai Prokoschenko |
77,106,669 | 12,040,751 | Limit memory in lru_cache | <p>I am using the <code>lru_cache</code> from the <code>functools</code> package. I would like to limit the memory that it is allowed to store, so that any data above say 100 MB is not stored.</p>
<p>I was going to define <code>maxsize</code> like so</p>
<pre class="lang-py prettyprint-override"><code>from functools import lru_cache
cache_limited = lru_cache(maxsize=10_000_000)
</code></pre>
<p>However, according to <a href="https://stackoverflow.com/a/62183912/12040751">this answer</a></p>
<blockquote>
<p>[maxsize] ... is the number of elements that are stored in the cache.</p>
</blockquote>
<p>How can I have a memory-limited cache instead?</p>
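<p>The closest I can sketch myself is a hand-rolled LRU keyed on an approximate byte size, with the big caveat that <code>sys.getsizeof</code> only measures the shallow size of an object, so containers are undercounted. The decorator below is my own, not part of <code>functools</code>:</p>

```python
import sys
from collections import OrderedDict
from functools import wraps

def lru_cache_bytes(max_bytes):
    """LRU cache that evicts by (shallow) byte size instead of entry count."""
    def decorator(fn):
        cache = OrderedDict()
        total = 0

        @wraps(fn)
        def wrapper(*args):
            nonlocal total
            if args in cache:
                cache.move_to_end(args)      # mark as most recently used
                return cache[args]
            value = fn(*args)
            size = sys.getsizeof(value)      # shallow size only!
            if size <= max_bytes:            # oversized results are never stored
                cache[args] = value
                total += size
                while total > max_bytes:     # evict least recently used
                    _, old = cache.popitem(last=False)
                    total -= sys.getsizeof(old)
            return value
        return wrapper
    return decorator

@lru_cache_bytes(max_bytes=100 * 1024 * 1024)   # ~100 MB budget
def expensive(n):
    return b"x" * n
```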
| <python><caching><functools> | 2023-09-14 16:05:33 | 0 | 1,569 | edd313 |
77,106,576 | 1,914,781 | convert utc string to datetime | <p>I tried the code below to convert a UTC string to a datetime, but it does not work:</p>
<pre><code>import pandas as pd
data = [
["2023-09-13T19:17:21.000Z"],
["2023-09-13T19:18:22.000Z"],
["2023-09-13T19:19:23.000Z"],
]
df = pd.DataFrame(data,columns=['ts'])
df['ts'] = pd.to_datetime(df['ts'],format='%Y-%m-%dT%H:%M:%S.%Z')
print(df)
</code></pre>
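<p>For comparison, the plain standard library also rejects these strings with my format string; it only parses once I use <code>%f</code> for the fractional seconds and <code>%z</code> for the trailing <code>Z</code> (which <code>strptime</code> accepts since Python 3.7, as far as I know):</p>

```python
from datetime import datetime, timezone

# '%f' matches the '.000' fraction, '%z' matches the 'Z' UTC suffix
dt = datetime.strptime("2023-09-13T19:17:21.000Z", "%Y-%m-%dT%H:%M:%S.%f%z")
print(dt)
```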
| <python><pandas> | 2023-09-14 15:53:30 | 1 | 9,011 | lucky1928 |
77,106,442 | 4,744,055 | How to Store Java Epoch Long Value with Gremlin Python | <p>In my Apache TinkerPop Gremlin-based code, which I am using with AWS Neptune, I need to share a timestamp field between the Python-based and Java-based code bases in my project.
When I run the equivalent of the following code:</p>
<pre><code>from datetime import datetime
from gremlin_python.process.traversal import Cardinality

(g.addV("xyzzy")
    .property(Cardinality.single, "when", datetime.now().timestamp() * 1000)
    .next())
</code></pre>
<p>But this causes a number overflow exception, and there seems to be no direct way to cast a Python number to the <code>Long</code> value Java uses. How can I fix this so that I can store the same <code>Long</code> integer value that Java can use?</p>
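<p>As far as I can tell, part of the problem is that <code>datetime.now().timestamp() * 1000</code> is a <code>float</code>. Truncating it to an <code>int</code> first at least yields a whole number that fits in Java's signed 64-bit long (gremlin-python also ships a <code>long</code> wrapper in <code>gremlin_python.statics</code> that, I believe, tags the value as a Java long on the wire — I haven't verified that against Neptune):</p>

```python
from datetime import datetime, timezone

# Milliseconds since the epoch, truncated to an int so it serializes
# as a whole number rather than a float.
epoch_ms = int(datetime.now(timezone.utc).timestamp() * 1000)
print(epoch_ms)
assert epoch_ms < 2 ** 63   # fits in Java's signed 64-bit long
```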
| <python><java><gremlin> | 2023-09-14 15:34:29 | 1 | 1,124 | Manabu Tokunaga |
77,106,304 | 536,262 | playwright force useragent for headless mode | <p>How can I force the useragent in playwright python?</p>
<p><a href="https://playwright.dev/python/docs/emulation#user-agent" rel="nofollow noreferrer">https://playwright.dev/python/docs/emulation#user-agent</a></p>
<pre class="lang-py prettyprint-override"><code>def run(playwright):
""" config for browser """
browser = playwright.chromium.launch(channel='msedge', headless=False)
context = browser.new_context(
user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.81"
)
return browser
with sync_playwright() as p:
browser = run(p)
# some info
log.info(f"name:{browser.browser_type.name}, exe_path:{browser.browser_type.executable_path}, version:{browser.version}")
page = browser.new_page()
log.info(f"useragent:{page.evaluate('navigator.userAgent')}")
page.goto(dat['url'])
:
</code></pre>
<p>but the user agent in headless mode is always:
<code>useragent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/116.0.1938.81 Safari/537.36 HeadlessEdg/116.0.1938.81</code></p>
| <python><playwright><playwright-python> | 2023-09-14 15:16:26 | 1 | 3,731 | MortenB |
77,106,224 | 3,834,415 | How do I manually invoke click CLIs from within a Python calling context with parameters? | <p>In order to set up automatic function aliases in <code>setuptools</code>, it's necessary to wrap all args in some kind of function. If I am using a click interface, I'll need some way to parameterize specific calls to the click interface in order to alias them.</p>
<hr />
<p>In a hello world format, how do I call a Python click interface CLI:</p>
<pre class="lang-py prettyprint-override"><code># cli.py
import click
@click.group()
def cli():
"""Help Message"""
pass
</code></pre>
<p>within Python, rather than on the command line, with an argument so that:</p>
<pre class="lang-bash prettyprint-override"><code># command line
$ python3 cli.py --help
Help Message
</code></pre>
<pre><code># python interpreter
>>> cli(???) # replace ??? w/ the correct argument
Help Message
</code></pre>
<p>the "Help Message" is displayed when I call the CLI from within a python3 runtime context?</p>
| <python><python-click> | 2023-09-14 15:06:27 | 1 | 31,690 | Chris |
77,105,925 | 6,068,731 | Python multiprocessing - Should I create a new pool at each iteration or share it across iterations? | <p>I have a function with a while loop. At each iteration, I need to apply a function in parallel, to each row of a large array, and then I need to use the result of this for the rest of the iteration, to obtain a new array for the next iteration.</p>
<p>The details are not important, but I want to figure out if I should create a new pool at each iteration, or if I should start the pool before I initiate the while loop, and just keep using it. I am new to <code>multiprocessing</code>.</p>
<p>Here's a pseudo-code that can't be run, but should give an idea.</p>
<pre><code>import numpy as np
from multiprocessing import Pool, cpu_count
def worker(params):
    pass
def option1():
"""This version creates a new pool at each iteration."""
while True:
# Run in parallel
pool = Pool(processes=cpu_count())
results = []
for i in range(N):
results.append(pool.apply_async(worker, (params,)))
pool.close()
pool.join()
# Grab the results for this iteration
iteration_results = [result.get() for result in results]
# Do something with `results` so that the results can be used in the next `while` loop iter
return foo # return something at the end
def option2():
"""This version creates a single pool, and uses it at each iteration."""
pool = Pool(processes=cpu_count())
while True:
results = []
for i in range(N):
results.append(pool.apply_async(worker, (params,)))
iteration_results = [result.get() for result in results]
pool.close()
pool.join()
return foo # return something
</code></pre>
| <python><parallel-processing><multiprocessing><python-multiprocessing> | 2023-09-14 14:32:16 | 1 | 728 | Physics_Student |
77,105,871 | 11,814,273 | Permissions error in Docker even after giving user proper permissions | <p>If I run uWSGI as root I get an error:</p>
<pre><code>| *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
</code></pre>
<p>and rightfully so since it is unsafe.</p>
<p>So I create a user and give it permissions to the relevant folders:</p>
<pre><code>RUN adduser --disabled-password --gecos '' django-user && \
mkdir -p /vol/web/media && \
mkdir -p /vol/web/static && \
chown -R django-user:django-user /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
</code></pre>
<p>However, after running, I run into <code>PermissionError: [Errno 13] Permission denied</code> for files within the /vol directory. Since both <code>chown</code> and <code>chmod</code> were run with the recursive <code>-R</code> flag, this doesn't make sense.</p>
| <python><django><docker> | 2023-09-14 14:27:50 | 1 | 502 | Omri |
77,105,771 | 3,139,771 | Python relative imports behavior | <p>I have simplified my problem into this:</p>
<pre><code>├── demo
│   ├── mypkg
│   │   ├── a.py
│   │   ├── b.py
│   │   ├── api.py
│   │   └── startserver.py
│   └── readme.md
</code></pre>
<p>a.py has =></p>
<pre><code>from b import name
def greeting():
return name()
</code></pre>
<p>b.py has =></p>
<pre><code>def name():
return 'Joe'
</code></pre>
<p>api.py has =></p>
<pre><code>import hug
from a import greeting
@hug.get('/ping')
def ping():
return {"response": "pong"}
@hug.get('/hello')
def hello():
#name = hello()
return {"response": greeting()}
</code></pre>
<p>startserver.py has =></p>
<pre><code>import subprocess
with open('testapi.log', 'w') as fd:
subprocess.run(['hug', '-f', 'api.py'], stdout=fd , stderr=subprocess.STDOUT, bufsize=0)
</code></pre>
<p>Using .venv in the VS Code terminal from \demo\mypkg> on Windows 10, either of the following two commands works great, so that the browser can see both /ping and /hello results just fine:</p>
<pre><code>\demo\mypkg> hug -f api.py
\demo\mypkg> python startserver.py
</code></pre>
<p>I want to be able to run this from outside mypkg, so I added <code>__init__.py</code> and <code>__main__.py</code> to the folder "mypkg" and changed to relative imports; now I have:</p>
<pre><code>├── demo
│   ├── mypkg
│   │   ├── __init__.py
│   │   ├── __main__.py
│   │   ├── a.py
│   │   ├── b.py
│   │   ├── api.py
│   │   └── startserver.py
│   └── readme.md
</code></pre>
<p>__init__.py has =></p>
<pre><code>print('inside __init__.py')
__all__ = [
"a",
"b",
"api",
"startserver"
]
from . import *
</code></pre>
<p>__main__.py has =></p>
<pre><code>print('inside __main__.py')
import traceback
from .startserver import start
def main():
try:
start()
except Exception:
print(traceback.format_exc())
if __name__ == "__main__":
print('... inside name == main ...')
main()
</code></pre>
<p>a.py has =></p>
<pre><code>from .b import name
def greeting():
return name()
</code></pre>
<p>b.py has =></p>
<pre><code>def name():
return 'Joe'
</code></pre>
<p>api.py has =></p>
<pre><code>import hug
from .a import greeting
@hug.get('/ping')
def ping():
return {"response": "pong"}
@hug.get('/hello')
def hello():
#name = hello()
return {"response": greeting()}
</code></pre>
<p>startserver.py has =></p>
<pre><code>import subprocess
import traceback
import os
from pathlib import Path
def start():
try:
currentpath = Path(__file__)
print(f'Currently executing from {currentpath}')
apipath = os.path.join(currentpath.parent, 'api.py')
print(f'parse api path is {apipath}')
print('inside startserver start()')
with open('testapi.log', 'w') as fd:
subprocess.run(['hug', '-f', apipath], stdout=fd , stderr=subprocess.STDOUT, bufsize=0)
except Exception:
print(traceback.format_exc())
</code></pre>
<p>Then, when called with <code>python -m mypkg</code>, I get this error:</p>
<pre><code>File "...\demo\mypkg\api.py", line 2, in <module>
from .a import greeting
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Reading lots of similar questions here, I figure it's probably because api.py is being called directly as a script. The issue is: how can I call the hug API inside a subprocess from outside the package? I need to be able to run this through PyInstaller to produce a .exe. Any help is greatly appreciated, thanks!</p>
| <python><hug> | 2023-09-14 14:14:57 | 1 | 357 | Impostor Syndrome |