QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,353,175 | 9,940,188 | Flask / Pytest error: AssertionError: The setup method 'before_request' can no longer be called | <p>first up, this is NOT a duplicate of <a href="https://stackoverflow.com/questions/76434519/assertionerror-the-setup-method-route-can-no-longer-be-called-on-the-blueprin">this</a> or <a href="https://stackoverflow.com/questions/74195757/flask-pytest-userwarning-the-setup-method-route-can-no-longer-be-called-on-th">this</a> although it seems like it. The problem arises when I have more than one test function in my unit test suite. Full code examples below.</p>
<p>It is pretty trivial to see what is going on: The blueprint gets created once, but it is used twice by the two test functions, so <code>test_1()</code> first calls <code>before_request()</code>, then <code>register_blueprint()</code>. Then <code>test_2()</code> does the same thing again on the same object, which of course fails. How can this be avoided?</p>
<p>Probably by creating the blueprint(s) with a factory function. In this very dumbed-down example it'd be trivial, but in reality the app consists of many blueprints from several modules. I don't want to change them all only for the purposes of testing. There must be a simpler method.</p>
<p>The <code>record_once()</code> blueprint method <a href="https://flask.palletsprojects.com/en/3.0.x/api/#flask.Blueprint" rel="nofollow noreferrer">(Flask documentation)</a> seems to do exactly what I need, but it makes things worse by apparently registering the blueprint first and then calling the function (this fails even when only one <code>test_x()</code> function is run).</p>
<p>For the time being I'll cook up something that checks if the blueprint is already registered and skips the @before_request decorator, but that seems like too much of a hack for such a common setup.</p>
<p>Thanks!</p>
<p><strong>app.py</strong></p>
<pre><code>import flask

blp = flask.Blueprint('admin', __name__)

@blp.route('/', methods=['GET'])
def login():
    return('OK')

def add_blueprint(app):
    @blp.before_request
    def before_request():
        return

    with app.app_context():
        app.register_blueprint(blp)

def make_app():
    app = flask.Flask('dham_wsgi')
    add_blueprint(app)
    return app
</code></pre>
<p><strong>conftest.py</strong></p>
<pre><code>import pytest
from .app import make_app

@pytest.fixture
def app():
    app = make_app()
    yield app

@pytest.fixture
def client(app):
    yield app.test_client()
</code></pre>
<p><strong>test_app</strong></p>
<pre><code>def test_1(client):
    rv = client.open('a')

def test_2(client):
    rv = client.open('b')
</code></pre>
<p><strong>app.py with record_once()</strong></p>
<pre><code>import flask

blp = flask.Blueprint('admin', __name__)

@blp.route('/', methods=['GET'])
def login():
    return('OK')

def add_before_request(setup):
    @setup.blueprint.before_request
    def before_request():
        return

blp.record_once(add_before_request)

def add_blueprint(app):
    app.register_blueprint(blp)

def make_app():
    app = flask.Flask('dham_wsgi')
    add_blueprint(app)
    return app
</code></pre>
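<p>A framework-free sketch of the "check if already registered" guard the question mentions. The class below is a hypothetical stand-in for Flask's blueprint (so no Flask is needed to run it); the point is that setup attached to a module-level singleton must be made idempotent when tests recreate the app:</p>

```python
# Hypothetical stand-in for a Flask Blueprint: setup hooks may only be
# attached before the first registration, which is what breaks when a
# test suite rebuilds the app around a module-level blueprint singleton.
class FakeBlueprint:
    def __init__(self):
        self.registered = False
        self.hooks = []

    def before_request(self, fn):
        if self.registered:
            raise AssertionError(
                "The setup method 'before_request' can no longer be called")
        self.hooks.append(fn)

    def register(self):
        self.registered = True


blp = FakeBlueprint()  # module-level singleton, like the real blueprint

def add_blueprint():
    # The guard: attach the hook only on the first app creation.
    if not blp.registered:
        blp.before_request(lambda: None)
    blp.register()

add_blueprint()   # first test's app factory
add_blueprint()   # second test's app factory: no longer raises
```

<p>With the real object, the analogous check could track registration in a module-level flag, or inspect the blueprint's internal, version-dependent <code>_got_registered_once</code> attribute (an implementation detail, so treat that as a hack).</p>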
| <python><flask><pytest> | 2023-10-24 14:55:15 | 1 | 679 | musbur |
77,352,946 | 1,020,139 | How to send data over SSH connection created by AWS SSM Session Manager without using plugin? | <p>I want to send data over the SSH connection created by AWS SSM Session Manager in Python without using the <strong>Session Manager Plugin</strong>.</p>
<p>The following answer describes how to do it using the plugin, but I want a pure Python approach, with a local port that's forwarded to a target host.</p>
<p>Any ideas?</p>
<p><a href="https://stackoverflow.com/questions/66222667/how-to-use-session-manager-plugin-command/70311671#70311671">How to use session-manager-plugin command</a></p>
| <python><amazon-web-services> | 2023-10-24 14:22:12 | 0 | 14,560 | Shuzheng |
77,352,821 | 7,064,415 | place a PNG right above an axes object | <p>I have a Matplotlib figure with three axes, placed above one another. Above the top <code>axes</code> object I want to place an image that I enclose in an <code>inset_axes()</code> 'box'. The image should as be as wide as the <code>axes</code> object (and its height should be scaled accordingly compared to its original proportions), and it should connect seamlessly to the top of the <code>axes</code> object (so with no whitespace between the bottom of the image and the top of the <code>axes</code> object). Like this (the black box is the <code>axes</code> object; the red box the image):</p>
<p><a href="https://i.sstatic.net/dkSZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dkSZF.png" alt="enter image description here" /></a></p>
<p>This is the code I have so far:</p>
<pre><code>import os

import matplotlib.pyplot as plt
from PIL import Image

fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True)
for i, ax in enumerate(axes):
    if i == 0:
        img = Image.open(os.path.join(MY_DIR, img_name), formats=["png"])
        axin = ax.inset_axes([0.0, ???, 1.0, 1.0])
        axin.imshow(img, alpha=1)
        axin.axis("off")
</code></pre>
<p>According to the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.inset_axes.html" rel="nofollow noreferrer">documentation</a>, the <code>bounds</code> argument for <code>inset_axes()</code> takes four values: "Lower-left corner of inset Axes, and its width and height". I found out that I can use 1.0 for the width and height, which makes the image as wide as the <code>axes</code>, keeping its proportions. The x-coordinate is also obvious: it should be 0, so that the left of the image aligns with the start of the x-axis. But -- I cannot figure out what the y-coordinate should be.</p>
<p>If I set <code>y</code> to 0, the image is centered vertically along the y-axis of the <code>axes</code> object:</p>
<p><a href="https://i.sstatic.net/FrGi5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FrGi5.png" alt="enter image description here" /></a></p>
<p>If I set it to 0.5, then its middle aligns with the top of the y-axis. Still not what I want:</p>
<p><a href="https://i.sstatic.net/MzubX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MzubX.png" alt="enter image description here" /></a></p>
<p>How can I get the result that I want? I cannot make it make sense. Is there a way to tell matplotlib that the bottom of the image should align with the top of the <code>axes</code>?</p>
<p>(I should mention that I asked a very similar question <a href="https://stackoverflow.com/questions/77345556/get-dimensions-of-pil-image-to-help-placing-it-relative-to-axes-object?">here</a>, but that seems to have been a dead end.)</p>
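<p>For reference, a small sketch of how the bounds work (assuming the default <code>transform=ax.transAxes</code>): the first two values are the lower-left corner in axes coordinates, so a box sitting flush on top of the axes needs <code>y0 = 1.0</code>, not 0. The height <code>h</code> below is an arbitrary placeholder; to preserve the image's proportions you would compute it from the image and axes aspect ratios:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True)
ax = axes[0]

# Bounds are (x0, y0, width, height), with (x0, y0) the LOWER-LEFT corner
# in axes coordinates; y0=1.0 puts the inset's bottom at the axes' top.
h = 0.3  # placeholder height fraction, not derived from any image
axin = ax.inset_axes([0.0, 1.0, 1.0, h])
axin.axis("off")
```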
| <python><matplotlib><axis> | 2023-10-24 14:06:25 | 1 | 732 | rdv |
77,352,615 | 19,500,571 | Extracting elements from a long list of dictionaries efficiently | <p>I have a (long) list of dictionaries, but for the sake of this example I represent them as</p>
<pre><code>d = [{'a':1}, {'a':2}, {'a':3}]
</code></pre>
<p>I need to extract the same element from these dictionaries, i.e.,</p>
<pre><code>[i['a'] for i in d]
</code></pre>
<p>What is the most efficient way to do this in Python? List comprehensions and for-loops work well, but are not known to be very efficient. Can the process be vectorized somehow?</p>
<hr />
<p>Additional details: The dictionaries have multiple keys, but it is the same one that I need to extract. All the dictionaries have the same keys.</p>
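<p>For comparison, a common alternative to the comprehension is <code>operator.itemgetter</code> with <code>map</code>; in CPython it can be marginally faster because the key lookup happens in C, though a plain list comprehension is already close to optimal for dicts:</p>

```python
from operator import itemgetter

d = [{'a': 1, 'b': 0}, {'a': 2, 'b': 0}, {'a': 3, 'b': 0}]

# itemgetter('a') is a callable that performs item['a'] for each element
values = list(map(itemgetter('a'), d))
```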
| <python><performance><dictionary><vectorization> | 2023-10-24 13:41:42 | 4 | 469 | TylerD |
77,352,588 | 3,364,859 | How to catch a package-specific exception? | <p>When I try to read a table from an SQL database with pandas using <code>pandas.read_sql</code>, the package pyodbc throws a <code>DataError</code> (pyodbc.DataError) that I would like to catch. The <code>DataError</code> is caused by an unexpected nvarchar value, causing a conversion failure when pyodbc attempts to convert it to data type int.</p>
<p>This is my code:</p>
<pre><code>import pandas as pd
import pyodbc
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

driver = "SQL Server"
server = "my_server"
database = "my_db"

connection_string = (f"Driver={driver};"
                     f"Server={server};"
                     f"Database={database};"
                     f"Trusted_Connection=yes;"
                     f"TrustServerCertificate=yes")
connection_url = URL.create("mssql+pyodbc",
                            query={"odbc_connect": connection_string})
engine = create_engine(connection_url)

# select only the first 10 rows in a table
query = (f"SELECT top 10 * "
         f"FROM abc.table_name")

df = pd.read_sql(sql=query,
                 con=engine)
</code></pre>
<p>The output is</p>
<pre><code>DataError: (pyodbc.DataError) ('22018', "[22018] [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting the nvarchar value '00:35' to data type int. (245) (SQLExecDirectW)")
[SQL: SELECT top 10 * FROM abc.table_name]
(Background on this error at: https://sqlalche.me/e/14/9h9h)
</code></pre>
<p>I know how to <a href="https://wiki.python.org/moin/HandlingExceptions" rel="nofollow noreferrer">catch and handle</a> <a href="https://docs.python.org/3/library/exceptions.html" rel="nofollow noreferrer">built-in exceptions</a> using <code>try</code> and <code>except</code> statements in Python. The <code>DataError</code> (pyodbc.DataError) however is not a built-in exception. I would like to catch this package-specific error while not just catching <em>any</em> error (I understand this to be <a href="https://stackoverflow.com/questions/21553327/">bad practice</a>).</p>
<p>I have attempted the following:</p>
<pre><code>try:
    df = pd.read_sql(sql=query,
                     con=engine)
except DataError as err:
    print(err)
    pass
</code></pre>
<p>This results in an <code>NameError: name 'DataError' is not defined</code> (as far as I understand, because <code>DataError</code> is not a built-in exception).</p>
<p>I have also attempted the following:</p>
<pre><code>try:
    df = pd.read_sql(sql=query,
                     con=engine)
except pyodbc.DataError as err:
    print(err)
    pass
</code></pre>
<p>This results in <code>DataError: (pyodbc.DataError)</code>, because the error is not caught.</p>
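<p>One likely explanation (an assumption worth verifying against your stack): SQLAlchemy wraps DBAPI exceptions in its own hierarchy, so the raised class is <code>sqlalchemy.exc.DataError</code>, not <code>pyodbc.DataError</code>, which would be why the second attempt doesn't catch it; the original driver exception is then typically reachable via the wrapper's <code>orig</code> attribute. The stand-in classes below model that wrapping pattern without needing a database:</p>

```python
class PyodbcDataError(Exception):
    """Stand-in for pyodbc.DataError (the raw DBAPI error)."""

class WrappedDataError(Exception):
    """Stand-in for sqlalchemy.exc.DataError, which wraps the DBAPI error."""
    def __init__(self, orig):
        super().__init__(str(orig))
        self.orig = orig  # the underlying driver exception

def read_sql():
    # Simulate SQLAlchemy re-raising the driver error wrapped in its own class
    try:
        raise PyodbcDataError("Conversion failed when converting nvarchar to int")
    except PyodbcDataError as e:
        raise WrappedDataError(e) from e

try:
    read_sql()
except WrappedDataError as err:
    original = err.orig  # recover the pyodbc-level error for logging
```

<p>If that explanation holds, catching <code>sqlalchemy.exc.DataError</code> (or its base <code>DBAPIError</code>) should work in the real code.</p>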
| <python><pandas><exception><pyodbc> | 2023-10-24 13:37:54 | 2 | 356 | marianoju |
77,352,551 | 4,415,502 | Why does ABC have to be placed in the leftmost position in a Python abstract class? | <p>Is there any difference between class A and class B in the following code?</p>
<pre><code>from abc import ABC

class Base:
    ...

class A(ABC, Base):
    ...

class B(Base, ABC):
    ...
</code></pre>
<p>I see that most of the code online uses the first version (class A), why is that so?</p>
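<p>Both orders produce a working abstract base class; the observable difference is the method resolution order, which decides whose attributes win when two bases define the same name. A quick check:</p>

```python
from abc import ABC

class Base:
    pass

class A(ABC, Base):
    pass

class B(Base, ABC):
    pass

# The base listed first comes earlier in the MRO, so its attributes take
# precedence during lookup when both bases define the same name.
mro_a = [c.__name__ for c in A.__mro__]
mro_b = [c.__name__ for c in B.__mro__]
```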
| <python> | 2023-10-24 13:33:45 | 0 | 2,473 | Mox |
77,352,344 | 10,462,461 | Filter dataframe for value that exists in all dates | <p>Say I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({
    'PortDt': ['2022-01-31', '2022-02-28', '2022-02-28', '2022-03-31', '2022-03-31'],
    'loannum': ['111', '111', '222', '111', '333']
})
</code></pre>
<p>I want to filter the dataset so that I am left with only records whose <code>loannum</code> appears in every distinct value of <code>PortDt</code>.</p>
<p>For this example, the result would be:</p>
<pre><code>PortDt     | loannum
-----------+---------
2022-01-31 | 111
2022-02-28 | 111
2022-03-31 | 111
</code></pre>
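<p>One possible approach (a sketch, assuming pandas is available): count each loannum's distinct dates with a groupby transform and compare against the total number of distinct dates in the frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'PortDt': ['2022-01-31', '2022-02-28', '2022-02-28', '2022-03-31', '2022-03-31'],
    'loannum': ['111', '111', '222', '111', '333'],
})

# A loannum qualifies iff it appears on every distinct PortDt.
n_dates = df['PortDt'].nunique()
mask = df.groupby('loannum')['PortDt'].transform('nunique') == n_dates
result = df[mask].reset_index(drop=True)
```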
| <python><pandas><dataframe> | 2023-10-24 13:10:19 | 1 | 340 | gernworm |
77,352,103 | 5,576,536 | How to use langchain RetrievalQA with asyncio? | <p>I want to parallelize <code>RetrievalQA</code> with <code>asyncio</code> but I am unable to figure out how.</p>
<p>This is how my code works serially:</p>
<pre><code>import langchain
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.schema.vectorstore import VectorStoreRetriever
import asyncio
import nest_asyncio

retriever = VectorStoreRetriever(vectorstore=FAISS(...))
chat = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.7)
qa_chain = RetrievalQA.from_llm(chat,
                                retriever=retriever,
                                # memory=memory,
                                return_source_documents=True)

queries = ['query1', 'query2', 'query3']
data_to_append = []

for query in queries:
    vectordbkwargs = {"search_distance": 0.9}
    result = qa_chain({"query": query, "vectordbkwargs": vectordbkwargs})
    data_to_append.append({"Query": query, "Source_Documents": result["source_documents"], "Generated_Text": result["result"]})
</code></pre>
<p>Here was my attempt to parallelize it with <code>asyncio</code> but <code>RetrievalQA</code> doesn't seem to work async:</p>
<pre><code>import langchain
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.schema.vectorstore import VectorStoreRetriever
import asyncio
import nest_asyncio

retriever = VectorStoreRetriever(vectorstore=FAISS(...))
chat = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0.7)
qa_chain = RetrievalQA.from_llm(chat,
                                retriever=retriever,
                                return_source_documents=True)

queries = ['query1', 'query2', 'query3']
data_to_append = []

async def process_query(query):
    vectordbkwargs = {"search_distance": 0.9}
    result = await qa_chain({"query": query, "vectordbkwargs": vectordbkwargs})
    data_to_append.append({"Query": query, "Source_Documents": result["source_documents"], "Generated_Text": result["result"]})

async def main():
    tasks = []
    for query in queries:  # Iterate all rows
        task = process_query(query)
        tasks.append(task)
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    nest_asyncio.apply()
    asyncio.run(main())
</code></pre>
<p>Any help would be greatly appreciated.</p>
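<p>One possible workaround, sketched with a stand-in for the chain call since the synchronous <code>__call__</code> cannot simply be awaited: run each blocking call in a worker thread via <code>asyncio.to_thread</code> (Python 3.9+) and gather the results. Newer LangChain versions also expose native async entry points on chains (e.g. <code>acall</code>/<code>ainvoke</code>), which would be preferable where available:</p>

```python
import asyncio

def qa_chain(inputs):
    # Stand-in for the synchronous RetrievalQA call.
    return {"result": inputs["query"].upper(), "source_documents": []}

async def process_query(query):
    # Off-load the blocking call to the default thread pool executor.
    result = await asyncio.to_thread(
        qa_chain, {"query": query, "vectordbkwargs": {"search_distance": 0.9}})
    return {"Query": query,
            "Source_Documents": result["source_documents"],
            "Generated_Text": result["result"]}

async def main(queries):
    # Returning gathered results avoids mutating a shared list from tasks.
    return await asyncio.gather(*(process_query(q) for q in queries))

data_to_append = asyncio.run(main(['query1', 'query2', 'query3']))
```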
| <python><python-asyncio><langchain><large-language-model> | 2023-10-24 12:36:37 | 1 | 4,793 | Abhishek R |
77,351,940 | 12,769,783 | Python performs a deepcopy of a weakref.proxy's content. Why? Which entity owns the reference? | <p>I wanted to double-check my assumption that the content of a <code>weakref.proxy</code> is not deepcopied when performing a deepcopy of it. To that end, I wrote the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>import copy
import weakref

class A:
    def __init__(self, val=3):
        self.val = val

    def identify(self):
        return id(self)

a = A()
proxy = weakref.proxy(a)
proxy.val = 4

proxy_deepcopy = copy.deepcopy(proxy)
proxy_deepcopy.val = 5

print(a.val, a.identify())                            # 4 139732708115984
print(proxy.val, proxy.identify())                    # 4 139732708115984
print(proxy_deepcopy.val, proxy_deepcopy.identify())  # 5 139732690146640
</code></pre>
<p>However, my assumption seems to be incorrect; a deepcopy of <code>a</code> is performed when deepcopying the <code>proxy</code>. Am I misinterpreting the results of the tests, or do I have a misconception of how the <code>weakref.proxy</code> is supposed to work? Shouldn't the copied instance of <code>A</code> immediately be GCed?</p>
<p>I checked on the internet and found that the expected behavior occurs when using <code>weakref.ref</code> instead of <code>weakref.proxy</code> (which I thought was mainly syntactic sugar).</p>
<p>Because of this <a href="https://github.com/python/cpython/blob/9bb202a1a90ef0edce20c495c9426d9766df11bb/Lib/copy.py#L188" rel="nofollow noreferrer">line in the CPython copy module</a>, which I found while reading up on <a href="https://stackoverflow.com/a/46205191/12769783">this Stack Overflow answer on weakref.ref</a>, utilizing <code>weakref.ref</code> yields the results I would have expected:</p>
<pre class="lang-py prettyprint-override"><code>proxy = weakref.ref(a)
proxy().val = 4

proxy_deepcopy = copy.deepcopy(proxy)
proxy_deepcopy().val = 5

print(a.val, a.identify())                                # 5 140323837493200
print(proxy().val, proxy().identify())                    # 5 140323837493200
print(proxy_deepcopy().val, proxy_deepcopy().identify())  # 5 140323837493200
</code></pre>
<p>Is this discrepancy between weakref.proxy and weakref.ref intended?</p>
<p>Who is the owner of the deepcopied instance of <code>A</code>, that is still accessible through the <code>proxy_deepcopy</code>?</p>
<p>Executed with <code>CPython</code> 3.8.18 and 3.10.13.</p>
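<p>A small check consistent with the observed behavior: the deep copy is not a proxy at all but a plain instance, held by an ordinary strong reference, which is why it is not collected. Whoever binds the result of <code>copy.deepcopy(proxy)</code> owns it:</p>

```python
import copy
import weakref

class A:
    pass

a = A()
p = weakref.proxy(a)

d = copy.deepcopy(p)

# The proxy forwards attribute access (including __class__), so
# isinstance() is misleading here; type() reveals the real types.
proxy_type = type(p)   # weakref.ProxyType
copy_type = type(d)    # A: a plain, strongly referenced instance
```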
| <python><python-3.x><deep-copy><weak-references> | 2023-10-24 12:11:59 | 0 | 1,596 | mutableVoid |
77,351,890 | 3,831,934 | Scipy sparse matrix multiplication using too much RAM | <p>I have two 136095x136095 CSR sparse matrices, X and A, with dtype bool.</p>
<p>I want to calculate <code>X @ A @ X</code></p>
<p>But I hit this error:</p>
<pre><code>Unable to allocate 93.3 GiB for an array with shape (12525785139,) and data type int64
</code></pre>
<p>It happens at this line (from the scipy code, not my code):</p>
<pre><code>indices = np.empty(nnz, dtype=idx_dtype)
</code></pre>
<p>How could I get my result? I don't have 100 GB of RAM, and needing that much seems like overkill.</p>
<p>Also</p>
<p><code>X@A</code> works and it runs in 2.9 sec on my laptop</p>
<p><code>A@X</code> works and it runs in 2.9 sec</p>
<p>X has 500M non-null elements, A has 2k non-null elements, and X@A has 40M non-null elements. They are symmetric.</p>
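<p>If the full product genuinely has ~12.5 billion stored entries, no multiplication order will fit it in memory; but when each row block of the result can be consumed immediately (thresholded, aggregated, or written to disk), a blocked computation keeps peak memory bounded. A sketch with small stand-in matrices:</p>

```python
import numpy as np
from scipy import sparse

# Small stand-ins for the real matrices; the point is the block pattern.
n = 1000
rng = np.random.default_rng(0)
X = sparse.random(n, n, density=0.01, format="csr", random_state=rng)
A = sparse.random(n, n, density=0.001, format="csr", random_state=rng)

XA = X @ A  # cheap: A has very few nonzeros

# Multiply in row blocks so only one block of the (much denser) final
# product exists at a time; in the real setting each block would be
# filtered or written out before computing the next.
step = 250
blocks = [XA[i:i + step] @ X for i in range(0, n, step)]
result = sparse.vstack(blocks, format="csr")
```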
| <python><scipy> | 2023-10-24 12:02:55 | 0 | 1,329 | Hugo |
77,351,885 | 558,738 | Getting correct path to current script when running from exec() | <p>When I run one Python file from another by using <code>exec()</code>, the <code>__file__</code> attribute is the caller's path, not the callee's:</p>
<p>file caller.py:</p>
<pre><code>with open("callee.py", "r") as callee:
    exec(callee.read(), globals(), locals())
</code></pre>
<p>file callee.py:</p>
<pre><code>print("this file is:", __file__)
</code></pre>
<p>result printed is:</p>
<pre><code>this file is: caller.py
</code></pre>
<p>instead of the expected:</p>
<pre><code>this file is: callee.py
</code></pre>
<p>How can I get the path to the currently executed file even when the file is being <code>exec()</code>'ed?</p>
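<p>One workaround, assuming you control the caller: inject the callee's path as <code>__file__</code> into the globals you pass to <code>exec</code>, and give <code>compile</code> the real filename so tracebacks point at the right file too. A self-contained sketch that writes a throwaway callee to a temp directory:</p>

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    callee_path = os.path.join(tmp, "callee.py")
    with open(callee_path, "w") as f:
        f.write("seen_file = __file__\n")  # throwaway callee script

    g = dict(globals())
    g["__file__"] = callee_path  # make the callee see its own path
    with open(callee_path) as f:
        # compile() with the real filename also fixes traceback locations
        exec(compile(f.read(), callee_path, "exec"), g)

    result = g["seen_file"]
```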
| <python><exec> | 2023-10-24 12:01:02 | 0 | 1,797 | Periodic Maintenance |
77,351,878 | 13,000,229 | Is it possible to make the box bigger on Swagger UI? | <h2>Environment</h2>
<ul>
<li>Python 3.11.6</li>
<li>flask 2.2.5</li>
<li>connexion 2.14.2</li>
<li>swagger-ui-bundle 0.0.9 (installed by <code>pip install connexion[swagger]</code>)</li>
</ul>
<h2>Problem</h2>
<p>I would like to know how to make this box bigger. I checked Swagger UI website (e.g. <a href="https://swagger.io/docs/specification/data-models/data-types/" rel="nofollow noreferrer">https://swagger.io/docs/specification/data-models/data-types/</a>), but didn't find a good solution.</p>
<p><a href="https://i.sstatic.net/sxQzj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sxQzj.png" alt="Swagger UI" /></a></p>
<h2>Part of code</h2>
<pre><code>parameters:
  id:
    name: "id"
    description: "Very loooooooooooooooog ID"
    in: path
    required: True
    schema:
      type: "string"
</code></pre>
| <python><flask><swagger><connexion> | 2023-10-24 11:59:31 | 0 | 1,883 | dmjy |
77,351,719 | 10,133,797 | Override numpy array string representation | <p>The <a href="https://numpy.org/doc/stable/reference/generated/numpy.set_printoptions.html" rel="nofollow noreferrer"><code>formatter</code></a> argument of <code>np.printoptions</code> allows controlling the formatting of individual array elements. Is there something for the entire array?</p>
<p>I'm looking for something to the effect of</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

arr_trunc = lambda x: np.array2string(x).splitlines()[0]
with np.printoptions(formatter={'array': arr_trunc}):
    print(np.random.randn(50, 50))
</code></pre>
<p>This way, <code>dc = {n: np.random.randn(50, 50) for n in 'abcd'}</code> could be printed via <code>print(dc)</code> rather than having to write code that traverses iterables or nested structures. Also, the above code fails with infinite recursion.</p>
<p>I can't set <code>np.ndarray.__str__</code>. Do I need to get more hackish and override <code>array2string</code> itself? The methods are wrapped, so it may not be safe to do so directly.</p>
<hr>
<p>If it helps, here's a <a href="https://codereview.stackexchange.com/q/242369/210581"><code>deepmap</code></a> implementation, so one could make a "<code>deepprint</code>" that still has to be applied like <code>deepprint(dc)</code>, which is better than nothing. One might intercept Python's stringifier with <code>if isinstance(x, np.ndarray)</code> while leaving everything else unchanged.</p>
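<p>Absent a whole-array hook in <code>np.printoptions</code>, a sketch of that intercept-before-print idea (a hypothetical <code>deepprint</code> helper, not a NumPy API): walk the structure and replace each array with its truncated string before handing the result to <code>print</code>:</p>

```python
import numpy as np

def truncate_arrays(obj):
    # Replace ndarrays with their first printed line; recurse into
    # dicts/lists/tuples; leave everything else unchanged.
    if isinstance(obj, np.ndarray):
        return np.array2string(obj).splitlines()[0]
    if isinstance(obj, dict):
        return {k: truncate_arrays(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(truncate_arrays(v) for v in obj)
    return obj

def deepprint(obj):
    print(truncate_arrays(obj))

dc = {n: np.random.randn(5, 5) for n in 'ab'}
flattened = truncate_arrays(dc)
```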
| <python><numpy> | 2023-10-24 11:36:12 | 0 | 19,954 | OverLordGoldDragon |
77,351,504 | 3,416,774 | No module named 'pydantic.v1' even though pip list see it | <p>I install <code>obsei[all]</code> in a venv, then run <code>python .\example\facebook_example.py</code>. I get:</p>
<p><a href="https://github.com/obsei/obsei/issues/310" rel="nofollow noreferrer" title="[BUG] pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package · Issue #310 · obsei/obsei"><code>pydantic.errors.PydanticImportError: 'BaseSettings' has been moved to the 'pydantic-settings' package</code></a></p>
<p>According to <a href="https://docs.pydantic.dev/2.4/migration/#continue-using-pydantic-v1-features" rel="nofollow noreferrer" title="Migration Guide - Pydantic">Migration Guide - Pydantic</a>, I can use</p>
<pre><code>pip install "pydantic==1.*"
</code></pre>
<p>and continue to use the old version by importing <code>pydantic.v1</code>. However I get:</p>
<pre><code>Traceback (most recent call last):
  File "D:\Programming\Topic modelling\obsei\example\facebook_example.py", line 4, in <module>
    from obsei.source.facebook_source import FacebookSource, FacebookSourceConfig
  File "D:\Programming\Topic modelling\obsei\.venv\Lib\site-packages\obsei\source\facebook_source.py", line 5, in <module>
    from pydantic.v1 import BaseSettings, Field, PrivateAttr
ModuleNotFoundError: No module named 'pydantic.v1'
</code></pre>
<p>Running <code>pip list</code> I get:</p>
<pre><code>pydantic 1.10.13
pydantic_core 2.10.1
pydantic-settings 2.0.3
</code></pre>
<p>Why doesn't it recognize that I'm using v1?</p>
<p>My <code>pip --version</code> is 23.3.1 from D:\Programming\Topic modelling\obsei\.venv\Lib\site-packages\pip (python 3.11):</p>
<pre><code>pip -V
pip 23.3.1 from D:\Programming\Topic modelling\obsei\.venv\Lib\site-packages\pip (python 3.11)
python -c "import sys; print(sys.executable)"
D:\Programming\Topic modelling\obsei\.venv\Scripts\python.exe
</code></pre>
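<p>A plausible explanation: the <code>pydantic.v1</code> namespace only exists in pydantic 2.x (as a bundled compatibility copy of the v1 API); with pydantic 1.10 installed there is no <code>v1</code> submodule, so code written for the shim fails. The usual compatibility pattern is to try the shim first and fall back to the top-level module, demonstrated below with stdlib module names so it runs anywhere:</p>

```python
import importlib

def import_first(primary, fallback):
    # Try the preferred module path; fall back if it doesn't exist.
    try:
        return importlib.import_module(primary)
    except ImportError:
        return importlib.import_module(fallback)

# Stand-in names; for the real case this would be
# import_first("pydantic.v1", "pydantic").
mod = import_first("json.does_not_exist", "json")
```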
| <python><pydantic> | 2023-10-24 10:58:06 | 1 | 3,394 | Ooker |
77,351,471 | 4,666,912 | Finding the largest time gap in a list of years | <p>I have a huge dataframe (around 100 million rows) in the following manner:</p>
<pre><code>ID Years
1  [1990,1991,1995,2000,2001,2006]
2  [1990,1990]
3  [1980,1981,1990,1995]
</code></pre>
<p>I want it to return the first occurrence of <em>the largest gap between two consecutive years</em> as the following dataframe (you can assume that the lists of years are sorted in order):</p>
<pre><code>ID largest_gap from_year to_year
1  5           1995      2000
2  0           1990      1990
3  9           1981      1990
</code></pre>
<p>Any idea of the most efficient way to calculate this?</p>
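<p>The per-row computation itself is O(n) in the length of each list; a pure-Python helper is below (whether that is fast enough at 100 million rows is another matter, and a per-row NumPy <code>diff</code>/<code>argmax</code> would follow the same shape). Sketch, assuming each list is sorted:</p>

```python
def largest_gap(years):
    # Return (gap, from_year, to_year) for the FIRST occurrence of the
    # largest difference between consecutive entries.
    if len(years) < 2:
        return 0, years[0], years[0]
    diffs = [b - a for a, b in zip(years, years[1:])]
    i = diffs.index(max(diffs))  # list.index() gives the first occurrence
    return diffs[i], years[i], years[i + 1]
```

<p>Applied to the three example rows it reproduces the expected table.</p>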
| <python><pandas><dataframe> | 2023-10-24 10:53:44 | 5 | 2,343 | BKS |
77,351,273 | 661,589 | how to merge two multi-depth objects in python | <p>I am building some app that needs configuration with lots of similarities and some differences. I'd like to be able to use some base config and override it with the specific values where needed.</p>
<p>Each config element is a list of configurations, and each configuration is a dict that can have a dict or a list as a value. I'd like to be able to merge them in a "smart" way, i.e.:</p>
<pre><code>base_conf = [
    {'a': 'foo', 'b': 12, 'c': {'x': 1, 'y': 2}, 'd': [1, 2, 3]},
    {'a': 'bar', 'd': [10, 20]}
]

conf = {
    'conf1': smart_merge(base_conf, [
        {'b': 24, 'c': {'y': 20}},
        {'c': {'x': 10, 'z': 30}}
    ]),
}
</code></pre>
<p>The expected result of conf1 is:</p>
<pre><code>conf1: [
    {'a': 'foo', 'b': 24, 'c': {'x': 1, 'y': 20}, 'd': [1, 2, 3]},
    {'a': 'bar', 'c': {'x': 10, 'z': 30}, 'd': [10, 20]}
]
</code></pre>
<p>So again, the goal is merging nested objects of these types: dictionaries, lists, strings, and numbers. It's always adding or overriding, never removing.</p>
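<p>A recursive sketch of such a <code>smart_merge</code>, under these assumptions: dicts merge key by key, two lists of dicts merge positionally and have equal length, and any other override value simply wins (nothing is ever removed):</p>

```python
def smart_merge(base, override):
    # Dicts merge key by key, recursing into shared keys; nothing is removed.
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)  # shallow copy so base stays reusable
        for key, value in override.items():
            merged[key] = smart_merge(merged[key], value) if key in merged else value
        return merged
    # Lists of dicts merge positionally (assumes equal length; zip truncates).
    if (isinstance(base, list) and isinstance(override, list)
            and all(isinstance(x, dict) for x in base + override)):
        return [smart_merge(b, o) for b, o in zip(base, override)]
    # Scalars and plain lists in the override win outright.
    return override

base_conf = [
    {'a': 'foo', 'b': 12, 'c': {'x': 1, 'y': 2}, 'd': [1, 2, 3]},
    {'a': 'bar', 'd': [10, 20]}
]
conf1 = smart_merge(base_conf, [
    {'b': 24, 'c': {'y': 20}},
    {'c': {'x': 10, 'z': 30}}
])
```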
| <python><json><list><dictionary><merge> | 2023-10-24 10:21:29 | 1 | 19,251 | Gavriel |
77,351,190 | 22,326,950 | How to check for mouse movement outside of tkinter GUI mainloop? | <p>To understand threading better, I am trying to write a clicker with a small tkinter GUI, which only clicks when you don't move the mouse. I have created a working script without noise (labels that show duration, clicks, settings etc.) that permanently runs a daemon in the background to detect mouse movements:</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Button
from win32api import SetCursorPos, mouse_event, GetCursorPos
from win32con import MOUSEEVENTF_LEFTDOWN, MOUSEEVENTF_LEFTUP
from threading import Thread, Lock


class ClickerGUI(Tk):
    def __init__(self):
        super().__init__()
        self.running = False
        self.mouse_x_coord, self.mouse_y_coord = 1, 1
        self.cursor_pos = GetCursorPos()
        self.mouse_moving = False
        self.click_intervall = 3000

        self.start_stop_btn = Button(self, text='Start', font='Calibri 12', width=15, command=self.run_clicker)
        self.start_stop_btn.pack(pady=10)

        self.lock = Lock()
        self.thread = Thread(target=self.check_mouse_movement)
        self.thread.setDaemon(True)
        self.thread.start()

    def run_clicker(self):
        self.running = not self.running
        if self.running:
            self.start_stop_btn.config(text='Stop')
            self.click()
        else:
            self.start_stop_btn.config(text='Start')

    def click(self):
        if self.running:
            if not self.mouse_moving:
                print('Click')  # Debug print
                # click to coordinates and set mouse back to last position
                SetCursorPos((self.mouse_x_coord, self.mouse_y_coord))
                mouse_event(MOUSEEVENTF_LEFTDOWN, self.mouse_x_coord, self.mouse_y_coord, 0, 0)
                mouse_event(MOUSEEVENTF_LEFTUP, self.mouse_x_coord, self.mouse_y_coord, 0, 0)
                SetCursorPos(self.cursor_pos)
                # put GUI in focus
                self.focus_force()
            self.after(self.click_intervall, self.click)

    def check_mouse_movement(self):
        with self.lock:
            print(self.mouse_moving)  # Debug print
            if self.cursor_pos != GetCursorPos():
                self.cursor_pos = GetCursorPos()
                self.mouse_moving = True
            else:
                self.mouse_moving = False
        self.after(self.click_intervall // 20, self.check_mouse_movement)


if __name__ == '__main__':
    cli = ClickerGUI()
    cli.mainloop()
</code></pre>
<p>But now I would like to run the daemon only when the clicker is active, thus starting and stopping the thread. I found this <a href="https://stackoverflow.com/questions/41131117/how-to-stop-daemon-thread">question and its answer</a> very interesting, but I am not sure if this event logic matches my approach, since I am not using a loop but rather a recursive function call. How can I implement this, or is this approach even recommendable at all?</p>
<p>Is there an even better/safer solution or a totally different approach to implement this functionality?</p>
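<p>On the start/stop question: Python threads cannot be killed from the outside, so the usual pattern is a <code>threading.Event</code> that the worker loop checks; <code>Event.wait(timeout)</code> doubles as an interruptible sleep, so <code>stop()</code> takes effect within one polling interval. A GUI-free sketch that increments a counter where the real code would compare <code>GetCursorPos()</code> results:</p>

```python
import threading
import time

class MouseWatcher:
    """Start/stoppable polling loop; stand-in for the cursor-movement check."""

    def __init__(self, interval=0.01):
        self.interval = interval
        self.ticks = 0
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        # wait() returns False on timeout, True once stop() sets the event,
        # so the loop exits cleanly without killing the thread.
        while not self._stop.wait(self.interval):
            self.ticks += 1  # real code: poll and compare GetCursorPos() here

watcher = MouseWatcher(interval=0.001)
watcher.start()
time.sleep(0.05)
watcher.stop()
```

<p>This replaces the recursive <code>self.after(...)</code> rescheduling with a loop the thread owns, which is what makes a clean stop possible.</p>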
| <python><multithreading><tkinter> | 2023-10-24 10:08:25 | 0 | 884 | Jan_B |
77,351,073 | 4,979,809 | How to automate my sources? (Python and SqlServer) | <p>In my .pbix, my data consists of 4 queries:
<a href="https://i.sstatic.net/I7Jpv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I7Jpv.png" alt="enter image description here" /></a></p>
<p>One query is a <strong>Python.Execute</strong>, the other 3 are <strong>Sql.Database</strong>.</p>
<ul>
<li>The SqlServer is on-prem and I already have a gateway installed there…</li>
<li>The Python one is a call to an API to bring my fact table.</li>
</ul>
<p>I want to publish it and have it updated automatically every Sunday night. What’s the standard way of achieving this?</p>
<p>I understand I need a PERSONAL gateway for the Python query, but where do I install it?</p>
<p>My laptop doesn’t seem like the right place, because when the update runs on Sunday night, my laptop is going to be off.</p>
<p>(we have a P1 Premium capacity)</p>
| <python><powerbi><powerbi-desktop><gateway> | 2023-10-24 09:51:47 | 1 | 706 | Chicago1988 |
77,350,880 | 9,318,372 | match-case: Bind keyword attribute to variable | <p>When matching against a <a href="https://peps.python.org/pep-0622/#class-patterns" rel="nofollow noreferrer">class-pattern</a> with keyword attributes, is it possible to bind the attribute to a variable directly?</p>
<p>For positional arguments, it is possible via walrus operator: <a href="https://peps.python.org/pep-0622/#walrus-patterns" rel="nofollow noreferrer">https://peps.python.org/pep-0622/#walrus-patterns</a></p>
<pre class="lang-py prettyprint-override"><code>import ast

tree = ast.parse('foo.bar = 2')

for node in ast.walk(tree):
    match node:
        case ast.Attribute(value=ast.Name()):
            # how to bind value directly?
            print(node)
            print(value)  # name value is not defined
</code></pre>
<p>One could of course do something like <code>case ast.Attribute(value=ast.Name()) as attr</code> and then use <code>attr.value</code>, but <code>mypy</code> doesn't like this and claims that <code>attr.value</code> is an <code>expr</code> and not an <code>ast.Name</code>.</p>
<pre class="lang-py prettyprint-override"><code>match node:
    case ast.Attribute(value=ast.Name()) as attr:
        name: str = attr.value.id  # mypy: "expr" has no attribute "id"
</code></pre>
| <python><mypy><python-typing><structural-pattern-matching> | 2023-10-24 09:23:01 | 1 | 1,721 | Hyperplane |
77,350,706 | 13,874,745 | How to execute only part of a super class method in Python? | <p>I know how to call the method of the super class by using <code>super()</code>, like this:</p>
<pre><code>>>> class A:
...     def func1(self):
...         print("1")
...         print("2")
...         print("3")
...
>>> class B(A):
...     def func1(self):
...         super().func1()
...         print("4")
...
>>> b = B()
>>> b.func1()
1
2
3
4
</code></pre>
<p>But now I would like to execute only part of the code of <code>A.func1()</code> in <code>b.func1()</code> and make the result look like:</p>
<pre><code>1
3
4
</code></pre>
<p>Is there any built-in function or third-party package that can help me do that?</p>
| <python><inheritance><super> | 2023-10-24 08:54:21 | 1 | 451 | theabc50111 |
77,350,602 | 10,564,566 | Cannot open PyCharm (Process 5,540 is still running) | <p>When I try to open PyCharm on Ubuntu (22.04), I get the following error from time to time (in around 50% of cases):
<code>Cannot connect to already running IDE instance. Exception: Process 5,540 is still running</code>
Killing the process with <code>sudo kill -9 5540</code> helps, but it is quite annoying, and it would be nice to have clarity on why this problem occurs.
The problem does not seem to occur on other machines and operating systems.</p>
| <python><linux><ubuntu><pycharm><ide> | 2023-10-24 08:36:04 | 1 | 539 | Matmozaur |
77,350,263 | 9,718,879 | How to detect requested URLs in JavaScript or Python | <p>I would like to know whether there's a JavaScript or Python package that could help monitor the requests performed by an application. Let's say, for example, that in my browser I click on a document. The package should be able to grab the requested link for that document. I know packet analysis can be done with Wireshark, but in this case I would like a JavaScript or Python package to monitor, and hopefully save in a txt file, all the links accessed by a user while browsing the internet.</p>
| <javascript><python><request><network-programming> | 2023-10-24 07:38:12 | 1 | 1,121 | Laspeed |
77,350,048 | 4,380,515 | Retrieve a redis-py Lock name from a LockError? | <p>I have an application that uses <a href="https://redis-py.readthedocs.io/en/v4.1.2/lock.html" rel="nofollow noreferrer"><code>redis.lock.Lock</code></a> (from the official <a href="https://github.com/redis/redis-py" rel="nofollow noreferrer">redis-py</a> package) a lot. It's either used directly by what I'm developing, or by other packages.
In rare cases, I end up with <code>LockError</code> (in case the application was restarted incorrectly, for example).</p>
<p>I manage to catch these <code>Exceptions</code>, but in the available attributes, there's only the error message, and not the name of the <code>Lock</code> (which would allow me to do a cleanup task).</p>
<p>Is there any way to add information to the <code>LockError</code> object, or to find the <code>Lock</code> name from within the except block (bearing in mind that I can't modify the <code>Lock</code> implementation, or make a class that inherits from it, as it might be instantiated from code I don't own)?</p>
<p>Example of code:</p>
<pre class="lang-py prettyprint-override"><code>from redis.exceptions import LockError

try:
    do_many_things_with_many_locks()
except LockError as le:
    # How to retrieve which lock failed?
    pass
</code></pre>
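One workaround sketch (my own, not a redis-py API): walk the traceback of the caught exception looking for the bound <code>self</code> of the failing <code>Lock</code> method, which carries the lock's <code>name</code> attribute. Hedged: this relies on redis-py's lock methods being bound methods of an object with a <code>name</code> attribute, which could change between versions:

```python
import sys

def lock_name_from_exc():
    """Best-effort: walk the active exception's traceback and return the
    `name` attribute of the first frame-local `self` that has one."""
    tb = sys.exc_info()[2]
    while tb is not None:
        obj = tb.tb_frame.f_locals.get("self")
        if obj is not None and hasattr(obj, "name"):
            return obj.name
        tb = tb.tb_next
    return None

# Hypothetical usage:
# try:
#     do_many_things_with_many_locks()
# except LockError:
#     failed_lock_name = lock_name_from_exc()
```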
| <python><redis><redis-py> | 2023-10-24 06:54:53 | 1 | 3,854 | Blusky |
77,349,979 | 1,895,611 | Is there any alternative to brute force matching binary images? | <p>This opencv/numpy code works, but is 1000x too slow. basically I have a white square, and another image that is slightly rotated and translated. The code recovers the pose by brute force trying all angles and translations within 10 degrees and 10 pixels.</p>
<p>I tried <code>findHomography</code> but got terrible results. This code gets perfect results, but is far too slow. I need it to run 100 to 1000 times faster, while staying in Python.</p>
<p>I would like to know how to massively speed up this code, or an alternative way that recovers the fact that the image is rotated 5 degrees and translated by about (5, 7) pixels.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import random
import math
import time
# Create a 240x224 black image (image01) with a 64-pixel black margin
image01 = np.zeros((224, 240), dtype=np.uint8)
image01[64:160, 64:176] = 255 # Add a white square within the margin
# Copy image01 to image02
image02 = image01.copy()
# Rotate image02 by 5 degrees and translate it 5 pixels up and 7 pixels left
rows, cols = image02.shape
M = cv2.getRotationMatrix2D((cols / 2, rows / 2), 5, 1)
M[0, 2] -= 7
M[1, 2] -= 5
image02 = cv2.warpAffine(image02, M, (cols, rows))
# Create scaled-down copies of image01 and image02
scaling_factor = 1/1
img1 = cv2.resize(image01, None, fx=scaling_factor, fy=scaling_factor)
img2s = cv2.resize(image02, None, fx=scaling_factor, fy=scaling_factor)
# Define parameters for the search
rotation_range = range(-10, 11, 1)
translation_range = range(-10, 11)
# Create a list to store the results
results = []
start = time.time()
for rotation_angle in rotation_range:
    # Rotate img2
    img2 = img2s.copy()
    M_rotation = cv2.getRotationMatrix2D((img2.shape[1] / 2, img2.shape[0] / 2), rotation_angle, 1)
    img2_rotated = cv2.warpAffine(img2, M_rotation, (img2.shape[1], img2.shape[0]))
    for dx in translation_range:
        for dy in translation_range:
            # Translate img2_rotated
            M_translation = np.float32([[1, 0, dx], [0, 1, dy]])
            img2_transformed = cv2.warpAffine(img2_rotated, M_translation, (img2.shape[1], img2.shape[0]))
            diff = cv2.absdiff(img1, img2_transformed)
            non_zero_pixels = np.sum(diff[diff == 255])
            results.append((rotation_angle, (dx, dy), non_zero_pixels))
print(time.time() - start)
# Sort the results based on the number of non-zero pixels
sorted_results = sorted(results, key=lambda x: x[2])
# Show the top 10 results
for i, (rotation_angle, (dx, dy), non_zero_pixels) in enumerate(sorted_results[:10], 1):
    dx = dx/scaling_factor
    dy = dy/scaling_factor
    print(f"Top {i} - Rotation: {rotation_angle}°, Translation: ({dx}, {dy}), Non-zero pixels: {non_zero_pixels}")
</code></pre>
<p>the actual images look similar to this (random blobs)</p>
<p><a href="https://i.sstatic.net/w0LzU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w0LzU.png" alt="enter image description here" /></a></p>
| <python><optimization><computer-vision><gpu><homography> | 2023-10-24 06:41:03 | 1 | 8,256 | AwokeKnowing |
77,349,929 | 1,186,991 | Python claims my venv doesn't have the requests module but I have pip3-installed it | <p>I'm sure for some of you this will likely be a stupid question, but I've spent the last 3 hours trying to get my first virtual environment working and I still can't seem to get it to work. I've successfully completed the following.</p>
<ol>
<li>Installed an older version of Python 3.10.10</li>
<li>Created a virtual environment from the command line in VSCode.</li>
<li>Activated it.</li>
<li>Pip3 installed config, and a couple other packages.</li>
<li>Tried to run part of a Jupyter notebook from a book I bought, but it's throwing the following error.</li>
</ol>
<p><a href="https://i.sstatic.net/FHRzH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FHRzH.png" alt="enter image description here" /></a></p>
<p>I really don't get what I'm doing wrong. Requests and Config have both been pip-installed. Could someone please shed some light on what I'm missing.</p>
<p>Thanks.</p>
<p>Edit: I've created a new venv, and although the interpreter in the top right is now showing, I'm still getting an error.</p>
<p><a href="https://i.sstatic.net/fSBYB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fSBYB.png" alt="enter image description here" /></a></p>
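A generic sanity check for this kind of mismatch (not specific to this notebook): print which interpreter is actually executing the code, and install via that same interpreter rather than whatever <code>pip3</code> is first on PATH:

```python
import sys

# The interpreter actually executing this cell/script; for a working venv it
# should point inside the venv directory (e.g. .../venv/bin/python).
print(sys.executable)

# Installing via `python -m pip` guarantees the package lands in *this*
# interpreter's environment, bypassing whichever pip3 is on PATH:
# import subprocess
# subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])
```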
| <python><visual-studio-code><pytorch> | 2023-10-24 06:30:28 | 1 | 3,631 | Hans Rudel |
77,349,903 | 14,661,648 | How to configure Nginx server for Flask-Sock? | <p>I am not talking about Socket-IO but just regular Websocket, I am using this library: <a href="https://flask-sock.readthedocs.io/en/latest/web_servers.html" rel="nofollow noreferrer">https://flask-sock.readthedocs.io/en/latest/web_servers.html</a></p>
<p>How do I configure my Nginx configuration for <code>wss://</code>? I currently have set up cerbot and the unix service to always start my Gunicorn Flask app. My websocket endpoints worked fine before I got into Nginx where I just used port 5000, but now I get <code>400 BAD REQUEST</code> because I believe Nginx is not configured for Websocket.</p>
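For context, a hedged sketch of the usual WebSocket proxy directives; the location path and the upstream address <code>127.0.0.1:5000</code> are assumptions and should match the actual app routes and Gunicorn bind:

```nginx
# Inside the existing server { } block; adjust location and upstream as needed.
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;                   # WebSocket requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;   # forward the Upgrade handshake
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Without <code>proxy_http_version 1.1</code> and the <code>Upgrade</code>/<code>Connection</code> headers, Nginx strips the handshake and the backend typically responds with 400.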
| <python><nginx><flask><flask-sockets> | 2023-10-24 06:23:34 | 1 | 1,067 | Jiehfeng |
77,349,823 | 8,523,868 | How to select the date from the calendar box the same as in the spreadsheet | <p>Thanks in advance.
I want to select a date from the calendar box.
I tried to send the date using Selenium with Python,
but it is not able to select the correct date.
It gets displayed as 21 -- ju--l- 22 instead of 21-JUL-2023.
Below I have given the code and a screenshot.
Please help me to select the correct date.
Code:</p>
<pre><code>driver.find_element(By.XPATH,"//input[@id='indentFromDateTxt']").send_keys("01-Jul-2023")
</code></pre>
<p>I inspected the element to get the XPath, and it is shown below:</p>
<pre><code><input type="text" id="indentFromDateTxt" name="indentFromDateTxt" style="width: 230px;" onfocus="" class="hasDatepick calendar" maxlength="11">
</code></pre>
<p>Please help me fix the code. Thanks.
<a href="https://i.sstatic.net/vhFzB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vhFzB.png" alt="enter image description here" /></a></p>
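A hedged sketch of one common workaround: build the string in the exact format the field displays, and set the input's value with JavaScript so the datepicker's keystroke handlers can't mangle it. The <code>execute_script</code> lines are illustrative only and need a live driver:

```python
from datetime import datetime

def format_for_datepicker(d):
    # dd-Mon-yyyy, e.g. 01-Jul-2023, matching the field's maxlength="11"
    return d.strftime("%d-%b-%Y")

print(format_for_datepicker(datetime(2023, 7, 1)))  # 01-Jul-2023

# Hypothetical Selenium usage (requires a live driver):
# el = driver.find_element(By.ID, "indentFromDateTxt")
# driver.execute_script("arguments[0].value = arguments[1];",
#                       el, format_for_datepicker(datetime(2023, 7, 1)))
```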
| <python><selenium-webdriver> | 2023-10-24 06:02:54 | 2 | 911 | vivek rajagopalan |
77,349,733 | 754,136 | Best way to pass configuration with ranges | <p>I have a plotting function that I use for multiple things. The way I plot curves is the same, but I need to customize the limits and the label depending on the data I am plotting.
Right now I define the settings in dictionaries and then read them like</p>
<pre class="lang-py prettyprint-override"><code>for d in data_keys:
    data = np.load(...)
    ax.plot(data, label=data_to_label[d])
    ax.xaxis.set_ticks(data_to_xticks[d])
    ax.set_xlim([0, data_to_xlim[d]])
</code></pre>
<p>and so on, for other things I need.</p>
<p>The dictionaries are like these</p>
<pre class="lang-py prettyprint-override"><code>data_to_label = {
    'easy' : 'Alg. 1 (Debug)',
    'hard' : 'Alg. 2',
}

data_to_xlim = {
    'easy' : 500,
    'hard' : 2000,
}

data_to_xticks = {
    'easy' : [0, 250, 500],
    'hard' : np.arange(0, 2001, 500),
}

data_to_ylim = {
    'easy' : [-0.1, 1.05],
    'hard' : [-0.1, 1.05],
}

data_to_yticks = {
    'easy' : [0, 0.5, 1.],
    'hard' : [0, 0.5, 1.],
}
</code></pre>
<p>I have many of these, and I am looking for the best way to save them in config files and load them in my plotting function. I thought about Hydra, YAML, and JSON, but none of them allows specifying <code>np.arange()</code> as a parameter.
Ideally, when I call <code>python myplot.py</code> I can pass the config file as an argument.</p>
<p>I could also import them, but then the import must be read from the string passed to <code>myplot.py</code>.</p>
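One hedged workaround sketch (my own convention, not a Hydra/YAML/JSON feature): store the range as plain numbers under a sentinel key and expand it after loading; the same idea works for YAML:

```python
import json
import numpy as np

def expand(obj):
    """Recursively replace {"arange": [start, stop, step]} nodes with np.arange arrays."""
    if isinstance(obj, dict):
        if set(obj) == {"arange"}:
            return np.arange(*obj["arange"])
        return {k: expand(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [expand(v) for v in obj]
    return obj

raw = '{"hard": {"xlim": 2000, "xticks": {"arange": [0, 2001, 500]}}}'
cfg = expand(json.loads(raw))
print(cfg["hard"]["xticks"].tolist())  # [0, 500, 1000, 1500, 2000]
```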
| <python><json><arguments><fb-hydra> | 2023-10-24 05:37:19 | 2 | 5,474 | Simon |
77,349,670 | 3,685,918 | how to get bloomberg index data using blpapi and xbbg on python | <p>I'd like to pull data from <code>blpapi</code> and <code>xbbg</code>.</p>
<p>It works well in Excel, as below:</p>
<pre><code>=BDS("YCGT0025 Index","CURVE_TENOR_RATES", "CURVE_DATE=20230930","headers=t")
</code></pre>
<p>But it is not working in Python, as below:</p>
<pre><code>blp.bds("YCGT0025 Index",
        "CURVE_TENOR_RATES",
        "CURVE_DATE=20230930")
Out[6]:
Empty DataFrame
Columns: []
Index: []
</code></pre>
| <python><blpapi> | 2023-10-24 05:18:41 | 1 | 427 | user3685918 |
77,349,582 | 9,983,652 | Why is feature importance from a decision tree model different each run? | <p>I am working on stratified k-fold CV with a decision tree model. The feature importance from the decision tree model is different each time I run the model. The accuracy is different as well each time. Can anyone help me understand why the result is different each time?</p>
<p>Also, the code below uses 10-fold CV, so which fold do I use for feature importance? Do I need to find the overlap of the feature importances from each fold?</p>
<p>Thanks</p>
<p>Here I just use the code below and change the model to a decision tree model:</p>
<p><a href="https://www.geeksforgeeks.org/stratified-k-fold-cross-validation/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/stratified-k-fold-cross-validation/</a></p>
<pre><code>from statistics import mean, stdev
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold
from sklearn import linear_model
from sklearn import datasets
from sklearn import tree
import pandas as pd
# FETCHING FEATURES AND TARGET VARIABLES IN ARRAY FORMAT.
cancer = datasets.load_breast_cancer()
# Input_x_Features.
x = cancer.data
# Input_ y_Target_Variable.
y = cancer.target
# Feature Scaling for input features.
scaler = preprocessing.MinMaxScaler()
x_scaled = scaler.fit_transform(x)
# Create classifier object.
# lr = linear_model.LogisticRegression()
lr=tree.DecisionTreeClassifier(criterion="gini")
# Create StratifiedKFold object.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
lst_accu_stratified = []
for train_index, test_index in skf.split(x, y):
    x_train_fold, x_test_fold = x_scaled[train_index], x_scaled[test_index]
    y_train_fold, y_test_fold = y[train_index], y[test_index]
    lr.fit(x_train_fold, y_train_fold)
    lst_accu_stratified.append(lr.score(x_test_fold, y_test_fold))
# Print the output.
print('List of possible accuracy:', lst_accu_stratified)
print('\nMaximum Accuracy That can be obtained from this model is:',
      max(lst_accu_stratified)*100, '%')
print('\nMinimum Accuracy:',
      min(lst_accu_stratified)*100, '%')
print('\nOverall Accuracy:',
      mean(lst_accu_stratified)*100, '%')
print('\nStandard Deviation is:', stdev(lst_accu_stratified))
print(lr.feature_importances_)
</code></pre>
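For illustration (a sketch, not the question's full CV loop): <code>DecisionTreeClassifier</code> breaks ties between equally good splits at random, so fixing <code>random_state</code> makes the fit, and hence the importances, reproducible. Across folds, a common summary is to average the per-fold <code>feature_importances_</code> rather than pick a single fold:

```python
import numpy as np
from sklearn import datasets, tree

X, y = datasets.load_breast_cancer(return_X_y=True)

# Same data + same random_state -> identical trees and identical importances
a = tree.DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
b = tree.DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
assert np.allclose(a.feature_importances_, b.feature_importances_)

# Across CV folds, one common summary is the mean importance per feature:
# mean_importance = np.mean(np.vstack(per_fold_importances), axis=0)
```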
| <python><scikit-learn> | 2023-10-24 04:49:47 | 1 | 4,338 | roudan |
77,349,560 | 4,582,026 | Do you always need to call the library when running a python function in command line? | <p>The following line works in command prompt to call my function</p>
<pre><code>python -c "from foo import xyz; xyz.aFunction(n)"
</code></pre>
<p>However, I want to continue calling the function by changing the variable n and also call other functions from xyz. Do I always need to import xyz before calling the function each time, even in the same instance of CMD?</p>
<p>I want to minimise the effort (characters in line of code) taken to call the function. Presumably if xyz has been imported, the function can be called easily numerous times?</p>
| <python> | 2023-10-24 04:42:23 | 1 | 549 | Vik |
77,349,540 | 10,990,958 | How would I set a background image in customtkinter? | <p>I want to set a PNG as the background image in a customtkinter UI. I have this code:</p>
<pre><code>import customtkinter
import random
from PIL import Image
import PIL
customtkinter.set_appearance_mode("light")
# Create a list to track the names that have already been chosen
chosen_names = []
def button_callback():
    # Create a list of names
    names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
    # Randomly select a name from the list
    name = random.choice(names)
    # Check if the name has already been chosen
    while name in chosen_names:
        name = random.choice(names)
    # Add the name to the list of chosen names
    chosen_names.append(name)
    # Get the label
    #label = app.winfo_children()[0]
    # Update the label text
    label.configure(text=name)
    label.grid_remove()
    # Check if all the values in the list have been selected
    if len(chosen_names) == len(names):
        chosen_names.clear()
        label.configure(text="")
app = customtkinter.CTk()
image = PIL.Image.open("Imagen.png")
background_image = customtkinter.CTkImage(image)
app.title("app")
app.iconbitmap('isologo.ico')
app.geometry("500x500")
# Create a label
label = customtkinter.CTkLabel(app)
label.pack(padx=0, pady=0)
label.configure(text="")
button = customtkinter.CTkButton(app, text="Selector Nombre", command=button_callback)
button.pack(ipadx=20, ipady=20,padx=20, pady=50)
app.mainloop()
</code></pre>
<p>How would I set
<code>image = PIL.Image.open("Imagen.png")</code>
as the background? The background can be static and doesn't have to change size, but if it is a bit responsive, that would be much better.</p>
| <python><user-interface><background><customtkinter> | 2023-10-24 04:36:14 | 1 | 349 | Pandas INC |
77,349,206 | 3,350,050 | Shebang Line for a Python Script with Conda | <p>I'm trying to write my Python script to run in a particular <em>conda</em> environment without relying on the user to do it correctly. If the environment is named <em>MyEnv</em>, for example, the following <a href="https://unix.stackexchange.com/questions/399690/multiple-arguments-in-shebang">shebang line</a> will do that.</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/env -S /my-conda-stuff/bin/conda run -n MyEnv --live-stream python
</code></pre>
<p>Is it necessary, though? Does it add anything over simply specifying the location of that environment's Python?</p>
<pre class="lang-bash prettyprint-override"><code>#!/my-conda-stuff/envs/MyEnv/bin/python
</code></pre>
<p>I ask because using <code>pip install</code> to install my software rewrites my shebang line from the former to the latter. Are they equivalent? I know the first <a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment" rel="nofollow noreferrer">sets environment variables</a> like <code>$PATH</code> that the second doesn't, but my program doesn't use those. I don't want to undo the "favor" <code>pip install</code> did for me unless I have to.</p>
<p>Thanks.</p>
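A quick stdlib probe can show concretely what each shebang provides; if the conda-specific variables come back <code>None</code> under the plain-interpreter shebang and the program still works, the simpler shebang is sufficient for it:

```python
import os
import sys

# Which interpreter is running, and which conda-related variables were set?
print("interpreter:", sys.executable)
print("CONDA_PREFIX:", os.environ.get("CONDA_PREFIX"))
print("CONDA_DEFAULT_ENV:", os.environ.get("CONDA_DEFAULT_ENV"))
print("first PATH entry:", os.environ.get("PATH", "").split(os.pathsep)[0])
```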
| <python><pip><conda> | 2023-10-24 02:33:31 | 0 | 573 | Jim |
77,349,149 | 15,781,591 | Unable to change barplot width | <p>I have the following code trying make a bar plot in python using seaborn, based simply on a two-column dataframe, with a "Category" column, and "Value" column with float values:</p>
<pre><code>dataplot = sns.barplot(x = 'Category',
y = 'Value',
data = df, hue='Category')
</code></pre>
<p>This produces a bar plot with very skinny bars, going into both positive and negative values. I want to make the bars thicker/wider, so I try to use the <code>width</code> parameter from seaborn:</p>
<pre><code>dataplot = sns.barplot(x = 'Category',
y = 'Value',
data = df, hue='Category', width=2)
</code></pre>
<p>I am just testing out 2 as a value here for width, but I get this error that I do not understand:</p>
<pre><code>TypeError: bar() got multiple values for argument 'width'
</code></pre>
<p>I do not understand what these multiple values are referring to. How can I correctly pass the <code>width</code> parameter to my bar plot code so that I can simply make my bars thicker?</p>
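As a hedged workaround sketch (pure matplotlib, so it sidesteps seaborn's keyword forwarding entirely): after plotting, every bar is a <code>Rectangle</code> patch on the axes, and its width can be changed in place while keeping the bar centred:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

def widen_bars(ax, new_width):
    # Each bar is a Rectangle patch; shift x so the bar stays centred
    for patch in ax.patches:
        old = patch.get_width()
        patch.set_x(patch.get_x() + (old - new_width) / 2.0)
        patch.set_width(new_width)

fig, ax = plt.subplots()
ax.bar(["a", "b", "c"], [1.0, -2.0, 3.0])  # stand-in for the seaborn barplot
widen_bars(ax, 0.95)
print([p.get_width() for p in ax.patches])  # [0.95, 0.95, 0.95]
```

The same helper works on the axes returned by <code>sns.barplot</code>, since seaborn draws its bars with matplotlib patches.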
| <python><pandas><seaborn><bar-chart> | 2023-10-24 02:10:33 | 1 | 641 | LostinSpatialAnalysis |
77,349,120 | 6,394,617 | Example of SimPy event that is triggered but not processed | <p>Just like <a href="https://stackoverflow.com/q/76421544/6394617">this question</a>, I don't understand what triggered means for a SimPy event. The only answer posted does not clarify it for me.</p>
<p>The docs say an event:</p>
<ul>
<li>might happen (not triggered),</li>
<li>is going to happen (triggered) or</li>
<li>has happened (processed).</li>
</ul>
<p>I originally thought those corresponded to:</p>
<ul>
<li>event defined</li>
<li>event yielded</li>
<li>code in event completes execution</li>
</ul>
<p>But then I found through experimentation that was wrong. I have not been able to create an example for which I can print <code>p.triggered</code> and <code>p.processed</code> for some process <code>p</code> and have the first be <code>True</code> and the second <code>False</code>. They're always either both <code>True</code> or both <code>False</code>.</p>
<p>So my question is, can anyone give code of an example (ideally as simple as possible) for which there exists some moment of time that a process is triggered but not processed?</p>
<p>Ideally, it would include an explanation of "triggered" that matches the example.</p>
| <python><simpy> | 2023-10-24 01:58:34 | 2 | 913 | Joe |
77,349,083 | 10,990,958 | How would I remove the CTKLabel from a label using customtkinter? | <p>I have this code:</p>
<pre><code>import customtkinter
import random
customtkinter.set_appearance_mode("light")
# Create a list to track the names that have already been chosen
chosen_names = []
def button_callback():
    # Create a list of names
    names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
    # Randomly select a name from the list
    name = random.choice(names)
    # Check if the name has already been chosen
    while name in chosen_names:
        name = random.choice(names)
    # Add the name to the list of chosen names
    chosen_names.append(name)
    # Get the label
    label = app.winfo_children()[0]
    # Update the label text
    label.configure(text=name)
    label.grid_remove()
    # Check if all the values in the list have been selected
    if len(chosen_names) == len(names):
        app.destroy()
app = customtkinter.CTk()
app.title("Randomizer")
#replace with image
app.iconbitmap('isologo.ico')
app.geometry("500x500")
# Create a label
label = customtkinter.CTkLabel(app)
label.pack(padx=0, pady=0)
button = customtkinter.CTkButton(app, text="Selector Nombre", command=button_callback)
button.pack(ipadx=20, ipady=20,padx=20, pady=50)
app.mainloop()
</code></pre>
<p>When I run the app above, I get a "CTkLabel" where the random names would go. How would I make that label never appear?</p>
<p>Also, is there a way so that when my list finishes, it restarts? I added the destroy function because I don't know if this is possible. Any help would be appreciated.</p>
<p><a href="https://i.sstatic.net/7B8HU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7B8HU.png" alt="image reference" /></a></p>
| <python><tkinter><while-loop><label><customtkinter> | 2023-10-24 01:49:19 | 1 | 349 | Pandas INC |
77,349,031 | 2,955,541 | Using subs for PolyElement | <p>After creating a <code>sympy</code> <code>PolyElement</code>:</p>
<pre><code># This is just a simple example
import sympy
from sympy import ZZ

x, P = sympy.symbols('x:3'), sympy.symbols('P')
K = ZZ[x+(P,)]
model = K.from_sympy(P*(x[1]+x[2]))
</code></pre>
<p>I expected to be able to, say, substitute <code>P=1</code> into the <code>model</code>:</p>
<pre><code>model.subs({P: 1})
</code></pre>
<p>or, according to the <a href="https://docs.sympy.org/latest/tutorials/intro-tutorial/basic_operations.html#substitution" rel="nofollow noreferrer">basic substitution docs</a>:</p>
<pre><code>model.subs(P, 1)
</code></pre>
<p>However, I encounter the following ValueError:</p>
<pre><code>ValueError: expected a polynomial generator, an integer, a string or None, got {P: 1}
</code></pre>
<p>After digging around, I learned that I could do this:</p>
<pre><code>model.subs(3, 1)
</code></pre>
<p>But this is far less convenient. Is there a way to substitute directly via <code>P</code> rather than needing to know the index of <code>P</code> within <code>K.ring.gens</code>?</p>
| <python><sympy> | 2023-10-24 01:25:47 | 2 | 6,989 | slaw |
77,348,920 | 8,869,570 | numpy.nan_to_num equivalent in sql? | <p>I am trying to rewrite a set of python operations in SQL. Specifically, I have a python script that queries from a sqlite3 connection from a table <code>table_name</code> for the column <code>vals</code> and note that <code>vals</code> is a nullable column.</p>
<p>It then filters the return values using <code>numpy.nan_to_num</code>, which converts <code>None-&gt;0</code> (NULL in SQL) and infinities to very large finite numbers.</p>
<p>Is there an equivalent way to do this as a SQL statement?</p>
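The closest SQL building blocks I know of are a NULL check (or <code>COALESCE</code>) for the zero part plus a <code>CASE</code> clamp for the out-of-range part. A self-contained sqlite3 sketch, with the table/column names from the question; the cap of <code>1e100</code> is an arbitrary assumption (numpy clamps to the largest finite double):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (vals REAL)")
conn.executemany("INSERT INTO table_name VALUES (?)",
                 [(None,), (1.5,), (float("inf"),)])

# NULL -> 0, and magnitudes beyond the cap are clamped to the cap
query = """
    SELECT CASE
             WHEN vals IS NULL THEN 0.0
             WHEN vals >  1e100 THEN  1e100
             WHEN vals < -1e100 THEN -1e100
             ELSE vals
           END
    FROM table_name
"""
rows = [r[0] for r in conn.execute(query)]
print(rows)  # [0.0, 1.5, 1e+100]
```

If only the NULL handling is needed, <code>SELECT COALESCE(vals, 0) FROM table_name</code> is the shorter form.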
| <python><sqlite><null><nan> | 2023-10-24 00:27:12 | 1 | 2,328 | 24n8 |
77,348,874 | 301,584 | Why is serv.bind in this tutorial expected to always raise an exception? | <p>I got the code below from multiple sources after googling "port scanner using python".</p>
<pre><code>import socket
s = socket.socket()
host = socket.gethostname()
for port in range(65535):
    try:
        serv = socket.socket(socket.AF_INET,socket.SOCK_STREAM) # create a new socket
        serv.bind((host,port)) # bind socket with address
    except:
        print('[OPEN] Port open :',port) #print open port number
    serv.close()
</code></pre>
<p>The code above will display the open port when the code under <code>except</code> is executed. But why does the line <code>serv.bind</code> raise an exception for an open port number? I thought that since the port is open, <code>serv.bind</code> would connect successfully instead?</p>
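For contrast with the bind-based tutorial code (where an exception generally means the port is already in use on your own machine), the usual way to test whether something is listening is to try to <em>connect</em>; a hedged sketch:

```python
import socket

def is_port_open(host, port, timeout=0.5):
    # connect_ex returns 0 when a TCP connection succeeds, i.e. something is
    # listening on (host, port); non-zero error codes mean closed or filtered
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```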
| <python> | 2023-10-24 00:06:18 | 0 | 4,598 | imin |
77,348,872 | 12,170,032 | why does applying probability distributions and transformations result in the same value? | <p>I'm applying multiple Beta, Gamma and HalfNorm Transforms to each column of my pandas dataframe. The dataframe consists of marketing spend; each row indicates spend per week and each column indicates type of spend:
<a href="https://i.sstatic.net/6luLJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6luLJ.png" alt="enter image description here" /></a></p>
<p>The python functions and code to apply the transform is as follows:</p>
<pre><code>def geometric_adstock_tt(
    x, alpha=0, L=12, normalize=True
):  # 12 (days) is the delay or lag we expect to see?
    """
    The term "geometric" refers to the way weights are assigned to past values,
    which follows a geometric progression.
    In a geometric progression,
    each term is found by multiplying the previous term by a fixed, constant ratio (commonly denoted as "r").
    In the case of the geometric adstock function, the "alpha" parameter serves as this constant ratio.
    """
    # vector of weights assigned by decay rate alpha set to be 12 weeks
    w = np.array([alpha**i for i in range(L)])
    xx = np.stack(
        [np.concatenate([np.zeros(i), x[: x.shape[0] - i]]) for i in range(L)]
    )
    if not normalize:
        y = np.dot(w, xx)
    else:
        y = np.dot(
            w / np.sum(w), xx
        )  # dot product to get marketing channel over time frame of decay
    return y


### non-linear saturation function
def logistic_function(x_t, mu=0.1):
    # apply the logistic function to spend variable
    return (1 - np.exp(-mu * x_t)) / (1 * np.exp(-mu * x_t))


#################
response_mean = []

# Create Distributions
halfnorm_dist = st.halfnorm(loc=0, scale=5)
# Create a beta distribution
beta_dist = st.beta(a=3, b=3)
# Create a gamma distribution
gamma_dist = st.gamma(a=3)

delay_channels = [
    'TV', 'Referral', 'DirectMail', 'TradeShows', 'SocialMedia','DisplayAds_Standard', 'ContentMarketing',
    'GoogleAds', 'SEO', 'Email', 'AffiliateMarketing',
]

non_lin_channels = ["DisplayAds_Programmatic"]

################ ADSTOCK CHANNELS
for channel_name in delay_channels:
    xx = df_in[channel_name].values
    print(f"Adding Delayed Channels: {channel_name}")
    # apply beta transform
    y = beta_dist.pdf(xx)
    # apply geometric adstock transform
    geo_transform = geometric_adstock_tt(y)
    # apply gamma transform
    z = gamma_dist.pdf(geo_transform)
    # apply logistic function transform
    log_transform = logistic_function(z)
    # apply halfnorm transform
    output = halfnorm_dist.pdf(geo_transform)
    # append output
    response_mean.append(list(output))

################# SATURATION ONLY
for channel_name in non_lin_channels:
    xx = df_in[channel_name].values
    # apply gamma transform
    z = gamma_dist.pdf(xx)
    # apply logistic function transform
    log_transform = logistic_function(z)
    # apply halfnorm transform
    output = halfnorm_dist.pdf(log_transform)
    # append output
    response_mean.append(list(output))
</code></pre>
<p><a href="https://i.sstatic.net/kPFdQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kPFdQ.png" alt="enter image description here" /></a></p>
<p>I'm not quite understanding why all values are being transformed to the same value. I would be so appreciative of any insight! Thanks so much:)</p>
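One thing worth checking in a pipeline like this (an observation, not a certain diagnosis): <code>scipy.stats.beta(a=3, b=3)</code> has support [0, 1], so its pdf evaluates to exactly 0 for any raw spend value above 1. Once a whole column becomes zeros, every downstream transform maps it to the same constant:

```python
import numpy as np
from scipy import stats

beta_dist = stats.beta(a=3, b=3)

# Raw spend values are typically far outside the Beta's [0, 1] support
spend = np.array([0.5, 120.0, 4500.0])
print(beta_dist.pdf(spend))  # first value ~1.875, the rest exactly 0
```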
| <python><pandas><probability-distribution> | 2023-10-24 00:06:06 | 1 | 495 | Erin |
77,348,858 | 11,922,765 | Python Pandas Dataframe convert julian day column to date column | <p>I have a column full of julian days. I want to convert them to date format.</p>
<pre><code>df1.shape
(638765, 94)
df1['DAY'] =
0 2022216 # Format = year (2022),day of the year (216)
1 2022216
2 2022216
3 2022216
4 2022216
from datetime import datetime
</code></pre>
<p>Solution-1:</p>
<pre><code>%timeit df1['Date'] = df1['DAY'].apply(lambda x: datetime.strptime('%d'%x,'%Y%j').strftime('%Y-%m-%d'))
11.9 s ± 50.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
0 2022-08-04
1 2022-08-04
2 2022-08-04
3 2022-08-04
4 2022-08-04
</code></pre>
<p>Solution-2:</p>
<pre><code>%timeit df1['Date'] = pd.to_datetime(df1['DAY'], format='%Y%j')
20.3 ms ± 243 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
0 2022-08-04
1 2022-08-04
2 2022-08-04
3 2022-08-04
4 2022-08-04
</code></pre>
<p>I liked solution 2 above very much, since it takes less than a second while the other one takes around 11 s. My question: is this the default behavior of <code>to_datetime</code>, converting a given Julian day to the date format ('%Y-%m-%d') even though I did not specify it?</p>
| <python><pandas><dataframe><datetime><julian-date> | 2023-10-24 00:00:34 | 1 | 4,702 | Mainland |
77,348,806 | 2,547,556 | Override a class attribute of the type of a generic type via a mixin so mypy won’t say it’s incompatible | <p>I have a generic base class <code>A</code> that has an attribute containing the generic type <code>TK</code>.</p>
<pre class="lang-py prettyprint-override"><code>import typing
class K1:
    pass

TK = typing.TypeVar("TK", bound=K1)

class A(typing.Generic[TK]):
    k: type[TK]
</code></pre>
<p>Then I have a mixin <code>B</code> that should be applied on it in combination of a specific <code>TK</code> which is <code>K1</code>.</p>
<pre class="lang-py prettyprint-override"><code>class B:
    k = K1
</code></pre>
<p>Then I try to apply it to <code>A</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>class AB(B, A[K1]):
    pass
</code></pre>
<p>The code works, but mypy complains:</p>
<blockquote>
<p>error: Definition of "k" in base class "B" is incompatible with definition in base class "A"</p>
</blockquote>
<p>The issue won’t happen when directly inheriting it and doing the same assignment:</p>
<pre class="lang-py prettyprint-override"><code>class C(A[K1]):
    k = K1
</code></pre>
<p>Is there any way to make the mixin compatible with <code>A[K1]</code>? Notably the issue won’t happen when <code>A.k</code> isn’t annotated with the typevar, therefore I find it a bit odd.</p>
| <python><generics><mypy><python-typing><mixins> | 2023-10-23 23:40:38 | 0 | 749 | Yushin Washio |
77,348,770 | 1,802,826 | Why does my shell script register a successful exit when it fails? | <p>I have a shell script that executes <a href="https://github.com/openai/whisper/" rel="nofollow noreferrer">whisper</a> on all files in a directory on a network device, and then stores the resulting transcript on the same volume. Sometimes something happens with the network (the volume is disconnected) that makes the saving of the resulting transcript fail. This is a typical such error message:</p>
<pre><code>Traceback (most recent call last):
File "/Users/db/Library/Python/3.11/bin/whisper", line 8, in <module>
sys.exit(cli())
^^^^^
File "/Users/db/Library/Python/3.11/lib/python/site-packages/whisper/transcribe.py", line 413, in cli
os.makedirs(output_dir, exist_ok=True)
File "<frozen os>", line 225, in makedirs
FileNotFoundError: [Errno 2] No such file or directory: '.'
</code></pre>
<p>The relevant parts of my script looks like this:</p>
<pre><code>whisper "$file" --model large >> log
if [[ $? -eq 0 ]]; then
    echo "exit success: $?"
    echo "*****success****** $?, `uname -n`, $me" >> "${directory}/../${transdir}/${name}.txt"
else
    echo "exit error: $?"
    echo "*****error****** $?, `uname -n`, $me" >> "${directory}/../${transdir}/${name}.txt"
fi
</code></pre>
<p>Normally, when the script finishes successfully, it enters the first branch as expected, but even when I get an error message like the one above it still enters the first <code>if</code> branch as if it were successful. Why is that?</p>
<p>Note: AFAIK the writing to the <code>log</code> is not the problem.</p>
<hr />
<p>Update:</p>
<pre><code>% whisper --bad_args; echo $?
whisper: error: the following arguments are required: audio
2
</code></pre>
<hr />
<p>Update 2:</p>
<pre><code>#!/opt/local/Library/Frameworks/Python.framework/Versions/3.11/bin/python3.11
# -*- coding: utf-8 -*-
import re
import sys
from whisper.transcribe import cli
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(cli())
</code></pre>
| <python><bash><shell><exit-code> | 2023-10-23 23:24:05 | 1 | 983 | d-b |
77,348,764 | 3,507,584 | Python or R Context aware fuzzy matching | <p>I am trying to match two string columns containing food descriptions [<code>foods1</code> and <code>foods2</code>]. I applied an algorithm weighting the word frequency so less frequent words have more weight but it fails as it does not recognise objects.</p>
<p>For instance, <code>foods1</code> item "Bagel with raisins" gets matched to <code>foods2</code> "salad with raisins" rather than to "bagel" as "raisins" is a less frequent word. However, a "bagel with raisins" is closer to being a "bagel" as an actual object than to a "salad with raisins".</p>
<p>Example in R:</p>
<pre><code>foods1 <- c('bagel plain','bagel with raisins and olives', 'hamburger','bagel with olives','bagel with raisins')
foods1_id <- seq.int(1,length(foods1))
foods2 <- c('bagel','pizza','salad with raisins','tuna and olives')
foods2_id <- c(letters[1:length(foods2)])
require(fedmatch)
fuzzy_result <- merge_plus(data1 = data.frame(foods1_id,foods1, stringsAsFactors = F),
                           data2 = data.frame(foods2_id,foods2, stringsAsFactors = F),
                           by.x = "foods1",
                           by.y = "foods2", match_type = "fuzzy",
                           fuzzy_settings = build_fuzzy_settings(method = "wgt_jaccard", nthread = 2, maxDist = .75),
                           unique_key_1 = "foods1_id",
                           unique_key_2 = "foods2_id")
</code></pre>
<p>Results, see line 3 matching <code>foods1</code> "bagel with raisins" to <code>foods2</code> "salad with raisins". Same for last line of <code>foods1</code> "bagel with raisins and olives" being matched to <code>foods2</code> "tuna and olives":</p>
<pre><code>fuzzy_result
$matches
foods2_id foods1_id foods1 foods2
1: a 1 bagel plain bagel
2: a 4 bagel with olives bagel
3: c 5 bagel with raisins salad with raisins
4: d 2 bagel with raisins and olives tuna and olives
</code></pre>
<p>Is there any fuzzy matching algorithm in R or Python able to understand what objects are being matched? [so "bagel" is recognised as closer to a "bagel with raisins" than a "salad with raisins"].</p>
| <python><r><fuzzy-comparison> | 2023-10-23 23:21:35 | 1 | 3,689 | User981636 |
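One low-tech way to encode "the head noun matters most" without a full NLP model is to up-weight the first token in a weighted Jaccard score. A sketch on the sample data; the `head_weight=3.0` value and the "head noun comes first" assumption are arbitrary and would need tuning on real food descriptions:

```python
def weighted_jaccard(a, b, head_weight=3.0):
    # Token-level Jaccard, but the first token of each phrase (assumed to be
    # the head food noun in this dataset) carries extra weight.
    ta, tb = a.lower().split(), b.lower().split()
    weights = {}
    for toks in (ta, tb):
        for i, tok in enumerate(toks):
            w = head_weight if i == 0 else 1.0
            weights[tok] = max(weights.get(tok, 0.0), w)
    sa, sb = set(ta), set(tb)
    inter = sum(weights[t] for t in sa & sb)
    union = sum(weights[t] for t in sa | sb)
    return inter / union if union else 0.0

foods2 = ["bagel", "pizza", "salad with raisins", "tuna and olives"]
best = max(foods2, key=lambda f: weighted_jaccard("bagel with raisins", f))
print(best)  # bagel
```

With this weighting, "bagel with raisins" scores 0.6 against "bagel" but only 0.25 against "salad with raisins", reversing the match from the question.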
77,348,634 | 7,102,718 | Is it possible to query the last element of an array using MongoEngine? | <p>I'm trying to query the last element of a list in a Mongo document using MongoEngine.
I can write a raw query to do this but I'm wondering if it's possible to do it without resorting to that.</p>
<p>So for example, while
<code>X.objects(some_list__val='Test Value')</code> would return any objects which have some list element such that val=='Test Value', I'd like for it to only return those objects where val=='Test Value' in the final element of the list.</p>
| <python><mongodb><pymongo> | 2023-10-23 22:30:45 | 1 | 332 | ScriptSurfer |
77,348,585 | 2,854,555 | How to await for a job multiple times in trio? | <p>This is similar to <a href="https://stackoverflow.com/questions/71970942/can-i-await-the-same-task-multiple-times-in-python">Can I await the same Task multiple times in Python?</a>, but for trio (instead of asyncio).
Basically, in trio, how can I <code>await</code> for (the result value of) an <code>async</code> function multiple times, while actually only executing it once?</p>
<p>E.g., what should the argument be for <code>coro_b</code> and <code>coro_c</code> which are executed in parallel?</p>
<pre><code>async def coro_a():
print("executing coro a")
return 'a'
async def coro_b(task_a):
task_a_result = await task_a
print("from coro_b: ", task_a_result)
return 'b'
async def coro_c(task_a):
task_a_result = await task_a
print("from coro_a: ", task_a_result)
return 'c'
def main():
t_a = coro_a()
# At some point
t_b = coro_b(t_a)
...
# At some other point, maybe concurrently to t_b
t_c = coro_c(t_a)
b = await t_b
c = await t_c
main()
</code></pre>
<p>(In my case, the trio root loop is managed by some framework <code>pyfuse3</code>, and I only need to define my own subclass, which contains several async functions (which can be executed in parallel) to be implemented. So I'm not sure how the underlying call was made, but can safely assume they are made correct. Please feel free to supplement the remaining part if anyone feels useful to make this code snippet a "full" version containing a main function.)</p>
<p>(I'm familiar with JS promise, and more familiar with concurrent/parallel concepts and practices, but just do not have enough experience with asyncio and no experience with trio in Python. )</p>
| <python><async-await><python-trio> | 2023-10-23 22:15:09 | 2 | 691 | renyuneyun |
77,348,582 | 2,954,547 | Psycopg 3 & SQLAlchemy 2: what would cause DuplicatePreparedStatement in normal operation? | <p>We are using SQLAlchemy in our application to interact with a TimescaleDB database with Psycopg as the database driver.</p>
<p>Recently we started seeing this error every time the application tries to select from the database:</p>
<pre><code>psycopg.errors.DuplicatePreparedStatement: prepared statement "_pg3_3" already exists
</code></pre>
<p>No change to the application was deployed, and overall database load is very low.</p>
<p>Apparently there is <em>something</em> wrong in our application, but I am not an expert in any of these technologies. I can identify when the errors first appeared in our logs, but I cannot identify any change in activity or behavior that might point to a root cause.</p>
<p>My questions are:</p>
<ol>
<li>What might cause this to happen during otherwise-normal operation, not under heavy load? Is this possibly a bug in one of the underlying libraries?</li>
<li>What is the recommended way to remediate this without significant downtime? Do I need to manually log in as an admin user and <code>DEALLOCATE</code> this <code>_pg_3</code> prepared statement? Do I need to restart the Python app workers?</li>
<li>What should be done in order to prevent this from happening again?</li>
</ol>
<p>The relevant code looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime as Datetime
from typing import Any
from sqlalchemy import TIMESTAMP, select
from sqlalchemy.ext.asyncio import AsyncSession as BaseAsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import DeclarativeBase
from ..config import settings
class Base(DeclarativeBase):
type_annotation_map = {
Datetime: TIMESTAMP(timezone=True),
}
async_engine = create_async_engine(
settings.SQLALCHEMY_DATABASE_URI,
isolation_level="READ COMMITTED",
pool_pre_ping=True,
pool_size=20,
pool_recycle=3600,
max_overflow=5,
)
AsyncSession = async_sessionmaker(async_engine, autobegin=False)
class SensorRecord(Base):
__tablename__ = "sensor_records"
... # attributes here
async def fetch_prev_record(session: BaseAsyncSession, ...):
query = select(SensorRecord)
return ... # more SQL and logic here
</code></pre>
<p>Update: I ended up simply re-deploying the Python application with a blue/green deployment. This seems to have fixed the problem, presumably by causing all connection sessions to be refreshed and thereby dropping these "stuck" prepared statements. However I want to leave this question open because I want to understand the actual cause, to prevent this from happening again.</p>
| <python><postgresql><sqlalchemy><timescaledb><psycopg3> | 2023-10-23 22:15:02 | 0 | 14,083 | shadowtalker |
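A hedged sketch of one common mitigation, assuming the duplicates stem from psycopg 3's automatic server-side prepared statements colliding on reused or proxied connections (for example a transaction-mode pooler such as PgBouncer, where server-side state no longer matches the driver's bookkeeping). The DSN is a placeholder:

```python
from sqlalchemy.ext.asyncio import create_async_engine

# psycopg 3 prepares a statement server-side after it has run
# prepare_threshold times (its default is 5); None disables the feature
# entirely, which avoids "_pg3_N already exists" at the cost of losing
# prepared-statement reuse.
async_engine = create_async_engine(
    "postgresql+psycopg://user:password@host/dbname",  # placeholder DSN
    connect_args={"prepare_threshold": None},
    pool_pre_ping=True,
)
```

This is engine configuration, not a root-cause fix; if a pooler sits between the app and TimescaleDB, that is the first place to look.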
77,348,561 | 3,245,998 | How does the AWS CLI open a browser and wait for a response before proceeding? | <p>I'm trying to build a golang cli tool for my company and as part of that build login and some other features into the tool. For the life of me I can't figure out how AWS is able to open a browser window and wait for a few button clicks before proceeding from the CLI.</p>
<p><a href="https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_StartDeviceAuthorization.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_StartDeviceAuthorization.html</a></p>
<p>Here's the CLI command I input</p>
<pre><code>aws sso login --profile login
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.us-east-1.amazonaws.com/
Then enter the code:
abcd-efgh
Successfully logged into Start URL: https://d-1421421423.awsapps.com/start
</code></pre>
<p>Here's the Python docs as well for start device auth and create token</p>
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-oidc/client/start_device_authorization.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-oidc/client/start_device_authorization.html</a>
<a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-oidc/client/create_token.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-oidc/client/create_token.html</a></p>
| <python><go><aws-sdk><aws-cli><aws-sdk-go> | 2023-10-23 22:08:31 | 1 | 876 | JacobW |
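The CLI does not actually wait for button clicks; it opens the verification URL and then polls the token endpoint until the user has approved the device. A cloud-agnostic sketch of that device-authorization loop with injected stand-ins for the two sso-oidc calls (the field names mirror the StartDeviceAuthorization response, but treat them as assumptions; with boto3 the callables would wrap `start_device_authorization` and `create_token`):

```python
import time
import webbrowser

def device_login(start_auth, create_token, open_browser=webbrowser.open, sleep=time.sleep):
    # 1) start device authorization, 2) open the verification URL in the
    # default browser, 3) poll create_token until the user approves.
    auth = start_auth()
    open_browser(auth["verificationUriComplete"])
    while True:
        token = create_token(auth["deviceCode"])
        if token is not None:
            return token
        sleep(auth.get("interval", 5))  # server-suggested polling interval

# Fakes standing in for the real API, to show the control flow:
calls = {"n": 0}

def fake_start():
    return {"verificationUriComplete": "https://example.test/verify", "deviceCode": "abc", "interval": 0}

def fake_create(device_code):
    calls["n"] += 1
    return "token-123" if calls["n"] >= 3 else None  # approved on the 3rd poll

opened = []
token = device_login(fake_start, fake_create, open_browser=opened.append, sleep=lambda s: None)
print(token)  # token-123
```

The same structure translates directly to Go: fire the browser open, then loop on the CreateToken call, sleeping for the advertised interval between attempts.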
77,348,500 | 19,694,624 | Can't log into instagram using instaloader: incorrect password | <p>I'm trying to gather a list of my instagram followers using instaloader python module, but I get the error:</p>
<pre><code>instaloader.exceptions.ConnectionException: Login error: "fail" status, message "Sorry, your password was incorrect. Please double-check your password.".
</code></pre>
<p>But I know my password is right. And I'm using VPN while running the script, can that be an issue? Because I'm running from the IP I've never logged into my account from.</p>
<p>Here's the code:</p>
<pre><code># Get instance
import instaloader
L = instaloader.Instaloader()
# Login or load session
username = "login"
password = "password"
L.login(username, password) # (login)
# Obtain profile metadata
profile = instaloader.Profile.from_username(L.context, username)
print(profile.get_followers())
</code></pre>
<p>Also, I tried r'password', and it didn't work either. Help me please.</p>
| <python><instagram-api><instagram-graph-api><instaloader> | 2023-10-23 21:49:00 | 1 | 303 | syrok |
77,348,189 | 13,231,896 | How to send html email to multiple recipients using python3 smtplib | <p>I am trying to send an email with HTML content to multiple recipients using Python 3.10. I have read the documentation and came up with the following code, but although the email is sent to all recipients, the message does not show the HTML correctly. The message I receive has no HTML formatting, nor line breaks. Email text I receive:
<a href="https://i.sstatic.net/jYt1o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jYt1o.png" alt="Email text" /></a></p>
<p>What am I doing wrong? I need to be able to send an HTML email message to multiple recipients. Here is my code. (I am intentionally hiding the server credentials for security reasons.)</p>
<pre><code>import smtplib, ssl
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.message import EmailMessage
from email.utils import make_msgid
cobertura = "Movistar"
msisdn="53763615"
client_name="Ernesto Ruiz Rodriguez"
client_document="786876876786786"
mensaje="Este mensaje ha sido enviado a los administradores solamente"
content_text=f"""\
Saludos
El sistema PANIX ha recibido una nueva portabilida móvil inversa con los siguientes datos:
Cobertura: {cobertura}
Msisdn: {msisdn}
Nombre cliente: {client_name}
Documento cliente: {client_document}
Otra información de interes:
{mensaje}
Le recomendamos hacer un seguimiento del estado de esta portabilidad desde el apartado "Portabilidad móvil/Solicitudes portabilidad móvil inversa"
Atentamente:
Equipo Panix
"""
content_html=f"""\
<html>
<head></head>
<body>
<p><strong>Saludos</strong></p>
<p>El sistema PANIX ha recibido una nueva portabilida móvil inversa con los siguientes datos:</p>
<p><strong>Cobertura:</strong> {cobertura}</p>
<p><strong>Msisdn:</strong> {msisdn}</p>
<p><strong>Nombre cliente:</strong> {client_name}</p>
<p><strong>Documento cliente:</strong> {client_document}</p>
<p><strong>Otra información de interes:</strong></p>
<p>{mensaje}</p>
<p>Le recomendamos hacer un seguimiento del estado de esta portabilidad desde el apartado "Portabilidad móvil/Solicitudes portabilidad móvil inversa".</p>
<p>Atentamente:</p>
<p>Equipo Panix</p>
</body>
</html>
"""
mail=smtplib.SMTP('smtp.server.com', 587)
mail.ehlo()
mail.starttls()
mail.login('user@server.com','password')
sender_email='from@server.com'
receiver_email='receiver1@gmail.com, receiver2@gmail.com'
message = EmailMessage()
message["Subject"] = "Message with html to multiple receipents"
message["From"] = sender_email
message["To"] = receiver_email
message.set_content(content_text)
message.add_alternative(content_text, subtype='html')
mail.send_message(message)
mail.close()
</code></pre>
| <python><smtplib> | 2023-10-23 20:36:32 | 0 | 830 | Ernesto Ruiz |
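For reference, a minimal sketch of the multipart/alternative construction with placeholder addresses. Note that the HTML body, not the plain-text body, goes to `add_alternative` (the code above passes `content_text` twice, which is why recipients only see plain text):

```python
from email.message import EmailMessage

def build_message(sender, recipients, subject, text_body, html_body):
    # multipart/alternative: plain-text fallback first, HTML alternative second.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)  # the header takes a comma-separated list
    msg.set_content(text_body)
    msg.add_alternative(html_body, subtype="html")  # the HTML body goes here
    return msg

msg = build_message(
    "from@example.com",
    ["receiver1@example.com", "receiver2@example.com"],
    "Message with HTML to multiple recipients",
    "plain-text fallback",
    "<html><body><p><strong>rich</strong> body</p></body></html>",
)
print(msg.get_content_type())  # multipart/alternative
```

The message can then be handed to `smtplib.SMTP.send_message` exactly as in the question.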
77,348,033 | 12,734,203 | SQLAlchemy: QueryableAttribute missing arguments | <p>I have a SQLAlchemy model mapped as a dataclass (single class example below)</p>
<pre><code>class Base(DeclarativeBase):
pass
class Players(Base, MappedAsDataclass):
__tablename__ = "players"
player_uid: Mapped[str] = mapped_column(String(128), primary_key=True)
first_name: Mapped[str] = mapped_column(String(32))
last_name: Mapped[str] = mapped_column(String(32))
mobile_number: Mapped[str] = mapped_column(String(20))
email_address: Mapped[str] = mapped_column(String(64))
company: Mapped[str] = mapped_column(String(32))
date_joined: Mapped[datetime] = mapped_column(DateTime, nullable=False)
banned: Mapped[bool] = mapped_column(Boolean)
date_banned: Mapped[Optional[datetime]] = mapped_column(DateTime, nullable=True)
created: Mapped[datetime] = mapped_column(
DateTime, default=func.now(), nullable=False
)
modified: Mapped[datetime] = mapped_column(DateTime, nullable=True)
</code></pre>
<p>The delete function below is to delete one record, expected function call would look like this <code>delete_record(Players.player_uid, primary_key='123456')</code></p>
<pre><code>def delete_record(target_col: Base, primary_key: str = None) -> None:
"""
Deletes record from db where primary key matches
:param target_col of table
:param primary_key record
:returns None
"""
try:
with Session(engine) as session:
if primary_key is None:
insert_kwarg_err_msg = "None was passed to delete_record()"
logging.error(insert_kwarg_err_msg)
raise ValueError(insert_kwarg_err_msg)
elif primary_key:
statement = delete(target_col.__class__).where(
target_col == primary_key
)
session.execute(statement)
else:
base_err_msg = (
f'Incorrect primary key for {target_col}: "{primary_key}"'
)
logging.error(base_err_msg)
raise ValueError(base_err_msg)
session.commit()
except (ValueError | sqlalchemy.exc.DatabaseError) as err:
logging.error(err)
raise err
</code></pre>
<p>However when called I am getting this error:</p>
<pre><code>TypeError: QueryableAttribute.__clause_element__() missing 1 required positional argument: 'self'
</code></pre>
<p>I have a feeling the exception is caused by the snippet below. It works when I replace <code>target_col.__class__</code> with <code>Players</code>, but I need this to scale to other classes within the model, which is why it is being passed as an argument:</p>
<pre><code>statement = delete(target_col.__class__).where(
target_col == primary_key
)
</code></pre>
<p>Thanks for your help!</p>
| <python><class><sqlalchemy> | 2023-10-23 20:05:33 | 1 | 305 | DECROMAX |
77,347,779 | 2,494,795 | Error when Trying to Load Data Into Azure Cognitive Search Index (AttributeError: 'str' object has no attribute 'get') | <p>I am trying to load data (with embeddings) into my Azure Cognitive Search index. This is my process after adding the embedding fields to my Pandas dataframe:</p>
<pre><code>input_data = df.to_json() # Where df is the Pandas dataframe with embedding fields
# Use SearchIndexingBufferedSender to upload the documents in batches optimized for indexing
with SearchIndexingBufferedSender(
endpoint=service_endpoint,
index_name=index_name,
credential=credential,
) as batch_client:
# Add upload actions for all documents
batch_client.upload_documents(documents=input_data)
print(f"Uploaded {len(input_data)} documents in total")
</code></pre>
<p>I am getting the following error:</p>
<pre><code>File /packages/azure/search/documents/_search_indexing_buffered_sender.py:322, in SearchIndexingBufferedSender._retry_action(self, action)
320 self._callback_fail(action)
321 return
--> 322 key = action.additional_properties.get(self._index_key)
323 counter = self._retry_counter.get(key)
324 if not counter:
325 # first time that fails
AttributeError: 'str' object has no attribute 'get'
</code></pre>
<p>Since my input data is relatively small, I have also tried loading the data without batches:</p>
<pre><code>search_client = SearchClient(endpoint=service_endpoint, index_name=index_name, credential=credential)
result = search_client.upload_documents(input_data, timeout = 50)
</code></pre>
<p>And this gives me a different error:</p>
<pre><code>File /packages/azure/search/documents/_generated/operations/_documents_operations.py:1251, in DocumentsOperations.index(self, batch, request_options, **kwargs)
1249 map_error(status_code=response.status_code, response=response, error_map=error_map)
1250 error = self._deserialize.failsafe_deserialize(_models.SearchError, pipeline_response)
-> 1251 raise HttpResponseError(response=response, model=error)
1253 if response.status_code == 200:
1254 deserialized = self._deserialize("IndexDocumentsResult", pipeline_response)
HttpResponseError: () The request is invalid. Details: A null value was found with the expected type 'search.documentFields[Nullable=False]'. The expected type 'search.documentFields[Nullable=False]' does not allow null values.
Code:
Message: The request is invalid. Details: A null value was found with the expected type 'search.documentFields[Nullable=False]'. The expected type 'search.documentFields[Nullable=False]' does not allow null values.
</code></pre>
<p>But my dataframe does not have any empty values, so that makes me think there is something wrong with the format of the file I am sending. I have tried both of these with no success:</p>
<pre><code>input_data = df.to_json()
input_data = df.to_json(orient="records")
</code></pre>
<p>Here is my index definition:</p>
<pre><code>
index_client = SearchIndexClient(
endpoint=service_endpoint, credential=credential)
fields = [
SimpleField(name="Id", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True),
SearchableField(name="Field1", type=SearchFieldDataType.String),
SearchableField(name="Field2", type=SearchFieldDataType.String, filterable=True),
SearchableField(name="Field3", type=SearchFieldDataType.String, filterable=True),
SearchableField(name="Field4", type=SearchFieldDataType.String, filterable=True),
SearchableField(name="Field5", type=SearchFieldDataType.String, filterable=True),
SearchField(name="Field4_vec", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, vector_search_dimensions=384, vector_search_profile="myHnswProfile"),
SearchField(name="Field5_vec", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, vector_search_dimensions=384, vector_search_profile="myHnswProfile")
]
# Configure the vector search configuration
vector_search = VectorSearch(
algorithms=[
HnswVectorSearchAlgorithmConfiguration(
name="myHnsw",
kind=VectorSearchAlgorithmKind.HNSW,
parameters=HnswParameters(
m=4,
ef_construction=400,
ef_search=500,
metric="cosine"
)
),
ExhaustiveKnnVectorSearchAlgorithmConfiguration(
name="myExhaustiveKnn",
kind=VectorSearchAlgorithmKind.EXHAUSTIVE_KNN,
parameters=ExhaustiveKnnParameters(
metric="cosine"
)
)
],
profiles=[
VectorSearchProfile(
name="myHnswProfile",
algorithm="myHnsw",
),
VectorSearchProfile(
name="myExhaustiveKnnProfile",
algorithm="myExhaustiveKnn",
)
]
)
# Create the search index
index = SearchIndex(name=index_name, fields=fields,
vector_search=vector_search)
result = index_client.create_or_update_index(index)
print(f' {result.name} created')
</code></pre>
<p>I am unable to post a sample of the data, but it is a Pandas dataframe with the same fields as the index:</p>
<pre><code>Id (string)
Field1 (string)
Field2 (string)
Field3 (string)
Field4 (string)
Field5 (string)
Field4_vec (contents in the shape of [-0.01168345008045435, -0.0396871380507946, -0...]) with dimension 384
Field5_vec (contents in the shape of [-0.01168345008045435, -0.0396871380507946, -0...]) with dimension 384
</code></pre>
<p>Any advice is appreciated. Thanks!</p>
| <python><azure><azure-cognitive-search> | 2023-10-23 19:14:46 | 1 | 1,636 | Irina |
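`upload_documents` expects an iterable of dicts, one per document, while `df.to_json()` returns a single JSON string; iterating a string yields characters, which matches the `'str' object has no attribute 'get'` traceback. A sketch with toy placeholder data:

```python
import pandas as pd

# Toy stand-in for the real dataframe.
df = pd.DataFrame({"Id": ["1", "2"], "Field1": ["foo", "bar"]})

# to_dict(orient="records") yields a list of per-row dicts, which is the
# shape the Search SDK iterates over; to_json() would give one big string.
input_data = df.to_dict(orient="records")
print(input_data)  # [{'Id': '1', 'Field1': 'foo'}, {'Id': '2', 'Field1': 'bar'}]
```

With the list-of-dicts shape, both the buffered sender and `search_client.upload_documents` receive proper documents; the second (null-value) error then typically points at rows whose dict values are `None` or `NaN` for a non-nullable field.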
77,347,777 | 9,795,817 | Checkpoint multiple pyspark dataframes | <p>I have some code that wraps a while loop inside another while loop (the algorithm it implements requires this structure unfortunately).</p>
<p>The dataframes used in the loops have super long lineages, so I'd like to use <a href="https://stackoverflow.com/questions/67916597/dataframe-checkpoint-example-pyspark"><code>checkpoint</code></a>, to truncate their logical plans.</p>
<p>I have two questions:</p>
<ol>
<li>If I checkpoint multiple dataframes (i.e., <code>df1.checkpoint()</code> and <code>df2.checkpoint()</code>), will they overwrite one another or are they checkpointed separately?</li>
<li>As I mentioned, the nature of my algorithm is iterative, so <code>df1.checkpoint()</code> will be executed multiple times. Does each execution write its own checkpoint or is the previous checkpoint overwritten at each step?</li>
</ol>
<p>For context, my code looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>sc.setCheckpointDir(PATH_CHECKPOINT)
df1 = spark.read.parquet(PATH_DF1)
df2 = spark.read.parquet(PATH_DF2)
_canIter, _iter = True, 0
while _canIter:
df1 = transform(df1)
df1.checkpoint()
df2 = transform(df1)
df2.checkpoint()
_iter += 1
_canIter = _iter < 3
</code></pre>
| <python><apache-spark><pyspark> | 2023-10-23 19:14:31 | 0 | 6,421 | Arturo Sbr |
77,347,740 | 1,930,402 | Joining 2 dataframes in pyspark where one column can have duplicates | <p>I have a pyspark dataframe with 2 columns, <code>ID</code> and <code>condition</code>. ID corresponds to a user, the user can have multiple conditions. I want to find out those users who have condition A and condition B, how to do that?</p>
<p>Sample dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>CONDITION</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>A</td>
</tr>
<tr>
<td>2</td>
<td>B</td>
</tr>
<tr>
<td>1</td>
<td>B</td>
</tr>
<tr>
<td>1</td>
<td>C</td>
</tr>
<tr>
<td>2</td>
<td>C</td>
</tr>
<tr>
<td>2</td>
<td>D</td>
</tr>
<tr>
<td>1</td>
<td>E</td>
</tr>
</tbody>
</table>
</div>
<p>If I want to get users who have A,B as conditions, I need only 1 as the output.
If I want to get users who have C,D as conditions, I need only 2 as the output.
If I want to get users who have B,C as conditions, I need both 1 and 2 as outputs.</p>
<p>These requirements are represented in a dataframe as below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>sl_no</th>
<th>conditions</th>
</tr>
</thead>
<tbody>
<tr>
<td>s1</td>
<td>[A,B]</td>
</tr>
<tr>
<td>s2</td>
<td>[C,D]</td>
</tr>
<tr>
<td>s3</td>
<td>[B,C]</td>
</tr>
</tbody>
</table>
</div>
<p>My attempt is as following:</p>
<pre><code> df1=df.groupBy('USER_ID').agg(F.collect_set('CONDITION').alias('conditions'))
df2=conditions_data
result=df1.join(df2,F.array_intersection(df1['conditions'],df2['conditions'])==df2['conditions'])
</code></pre>
<p>However, I see some inconsistencies in the results. I also wanted to know if there's a better way to do this.</p>
| <python><pyspark> | 2023-10-23 19:07:36 | 1 | 1,509 | pnv |
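For what it's worth, the pyspark function is `F.array_intersect` (there is no `array_intersection`), and array equality is order-sensitive, so sorting both sides (e.g. with `F.array_sort`) before comparing helps avoid the inconsistencies. The underlying requirement is just a subset test, sketched here in plain Python against the sample data to pin down the expected outputs:

```python
# Plain-Python check of the expected outputs for the sample data.
rows = [(1, "A"), (2, "B"), (1, "B"), (1, "C"), (2, "C"), (2, "D"), (1, "E")]

user_conditions = {}
for user_id, cond in rows:
    user_conditions.setdefault(user_id, set()).add(cond)

def users_with(required):
    req = set(required)
    # a user qualifies when the required conditions are a subset of theirs
    return sorted(u for u, conds in user_conditions.items() if req <= conds)

print(users_with(["A", "B"]))  # [1]
print(users_with(["C", "D"]))  # [2]
print(users_with(["B", "C"]))  # [1, 2]
```

In pyspark the same subset test can be expressed as `F.array_sort(F.array_intersect(df1["conditions"], df2["conditions"])) == F.array_sort(df2["conditions"])` in the join condition.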
77,347,618 | 758,836 | Sentence Transformers Segmentation fault | <p>I get a <code>Segmentation fault</code> error when calling <code>model.encode</code> on a <code>SentenceTransformer</code> model:</p>
<pre class="lang-bash prettyprint-override"><code>Segmentation fault
root@0ac58308616e:/app# /usr/local/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
</code></pre>
<p>The environment is Docker:</p>
<pre><code>FROM python:3.8-slim-buster
RUN apt-get update && apt-get install -y \
software-properties-common \
build-essential \
pkg-config \
ninja-build \
libopenblas-dev \
python3-pip \
curl
COPY . .
CMD ["bash"]
</code></pre>
<pre><code>root@0ac58308616e:/app# python -c "import torch; print(torch.__version__);"
2.1.0
root@0ac58308616e:/app# python -c "import transformers; print(transformers.__version__);"
4.34.1
root@0ac58308616e:/app# python -c "import sentence_transformers; print(sentence_transformers.__version__);"
2.2.2
</code></pre>
<p>Code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
</code></pre>
<p>This happens also using <code>transformers</code> library directly:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2',cache_dir='models')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2',cache_dir='models')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
</code></pre>
| <python><pytorch><huggingface-transformers><sentence-transformers> | 2023-10-23 18:44:49 | 0 | 16,321 | loretoparisi |
77,347,403 | 1,547,004 | How to handle QComboBox wheel-events with an event-filter? | <p>In Qt6, many widgets (QComboBox, QSpinBox) will steal mouse wheel events that should be handled by their parent widget (like a QScrollArea), even when they don't have focus, and even when the <code>focusPolicy</code> has been set to <code>StrongFocus</code>.</p>
<p>I can subclass these widget classes and re-implement the <code>wheelEvent</code> handler to ignore these events to accomplish what I want, but it feels inelegant to do it this way because I need to subclass every widget class that exhibits this behavior.</p>
<pre class="lang-py prettyprint-override"><code>class MyComboBox(QtWidgets.QComboBox):
def wheelEvent(self, event: QtGui.QWheelEvent) -> None:
if not self.hasFocus():
event.ignore()
else:
super().wheelEvent(event)
</code></pre>
<p>Qt offers another way to ignore events using <code>installEventFilter</code>, which feels much more scalable and elegant, because I can create one Event Filter and apply it to any number of different widgets.</p>
<pre class="lang-py prettyprint-override"><code>class WheelEventFilter(QtCore.QObject):
"""Ignores wheel events when a widget does not already have focus."""
def eventFilter(self, watched: QtCore.QObject, event: QtCore.QEvent) -> bool:
if (
isinstance(watched, QtWidgets.QWidget)
and not watched.hasFocus()
and event.type() == QtCore.QEvent.Type.Wheel
):
# This filters the event, but it also stops the event
# from propagating up to parent widget.
return True
# This doesn't actually ignore the event for the given widget.
event.ignore()
return False
else:
return super().eventFilter(watched, event)
</code></pre>
<p>My problem, though, is that this event filter doesn't seem to be filtering the events as I would expect. I expect it to filter out the event <strong>for the <code>watched</code> object only</strong>, while also allowing the event to be propagated up to the parent widget to handle, but that isn't happening.</p>
<p>Is it possible to achieve the same effect as the <code>wheelEvent</code> handler defined above using an <code>eventFilter</code>?</p>
<hr />
<p>Here is a self-contained reproducible example that displays this behavior. If you try and scroll the scroll area with the mouse over one of the comboboxes, the combobox will steal focus and the wheel event.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PySide6 import QtWidgets, QtCore
class MyWidget(QtWidgets.QWidget):
def __init__(self) -> None:
super().__init__()
# # layout
self._layout = QtWidgets.QVBoxLayout()
self.setLayout(self._layout)
# layout for widget
self._mainwidget = QtWidgets.QWidget()
self._mainlayout = QtWidgets.QVBoxLayout()
self._mainwidget.setLayout(self._mainlayout)
# widgets for widget
self._widgets = {}
num_widgets = 20
for i in range(num_widgets):
combo = QtWidgets.QComboBox()
combo.addItems([str(x) for x in range(1, 11)])
combo.setFocusPolicy(QtCore.Qt.FocusPolicy.StrongFocus)
self._mainlayout.addWidget(combo)
self._widgets[i] = combo
# scroll area
self._scrollarea = QtWidgets.QScrollArea(self)
self._scrollarea.setWidgetResizable(True)
self._scrollarea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOn)
self._scrollarea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarPolicy.ScrollBarAlwaysOn)
self._layout.addWidget(self._scrollarea)
# widget for scroll area
self._scrollarea.setWidget(self._mainwidget)
def main() -> None:
app = QtWidgets.QApplication(sys.argv)
widget = MyWidget()
widget.show()
app.exec()
if __name__ == "__main__":
main()
</code></pre>
| <python><events><mousewheel><pyside6><qcombobox> | 2023-10-23 18:05:28 | 1 | 37,968 | Brendan Abel |
77,347,287 | 6,231,968 | What components of spaCy Pipeline can be disabled so that the sentence tokenization can still work and the pipeline be faster? | <p>I want to use the spaCy pipeline only for sentence tokenization as it's the best for my language but I want it to be as minimal as possible.</p>
<p>So far I figured I could get rid of tagger and ner components:</p>
<p><code>nlp = spacy.load("pl_core_news_sm", disable=['tagger', 'ner'])</code></p>
<p>I noticed that without <code>tok2vec</code> it doesn't work (which seems very odd).
I don't want to try all combinations because I'd surely miss something.</p>
<p>So does anyone know what components can be disabled so that the tokenization can still work and the pipeline be faster?</p>
| <python><nlp><spacy> | 2023-10-23 17:40:34 | 1 | 525 | Karol |
77,347,083 | 9,640,238 | Print formula in plot | <p>SymPy renders an expression in Jupyter in mathematical notation. E.g.:</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols
from sympy.plotting import plot
x = symbols("x")
2 + 3 * x**2
</code></pre>
<p>will print: <a href="https://i.sstatic.net/UjYag.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UjYag.png" alt="enter image description here" /></a></p>
<p>However, I can't include this in a plot title. If I run</p>
<pre class="lang-py prettyprint-override"><code>exp = 2 + 3 * x**2
plot(exp, title=exp)
</code></pre>
<p>it will print the expression in the Python notation: <code>2 + 3 * x**2</code></p>
<p>Is there any way to get the mathematical notation shown in the plot?</p>
| <python><jupyter><sympy> | 2023-10-23 17:01:54 | 1 | 2,690 | mrgou |
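One approach is to run the expression through `sympy.latex` and wrap it in `$...$` so that matplotlib's mathtext engine (which backs `sympy.plotting`) renders it. A sketch; the actual `plot` call is left commented out so it stays headless:

```python
from sympy import latex, symbols

x = symbols("x")
expr = 2 + 3 * x**2

# latex() gives the TeX source of the expression; $...$ marks it as
# math for matplotlib's mathtext renderer.
title = f"${latex(expr)}$"
print(title)

# from sympy.plotting import plot
# plot(expr, title=title)  # uncomment to draw the titled plot
```

The same `$...$`-wrapped string works anywhere matplotlib accepts text, e.g. axis labels or legends.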
77,347,024 | 492,191 | self-reference model in pydantic | <p>I would like to serialize a structure that has self-references:</p>
<pre class="lang-py prettyprint-override"><code>class FileSystemEntry(BaseModel):
id: str
path: str | None = None
mime_type: str | None = None
is_directory: bool
parent: FileSystemEntry | None = None
children: List[FileSystemEntry] = []
share_link: AnyHttpUrl | None = None
</code></pre>
<p>how to do it? Obviously when try to dump it to JSON, I have</p>
<p><code>pydantic_core._pydantic_core.PydanticSerializationError: Error serializing to JSON: ValueError: Circular reference detected (id repeated)</code></p>
<p>Any help will be appreciated</p>
| <python><recursion><filesystems><pydantic> | 2023-10-23 16:52:42 | 1 | 350 | stuudent |
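One way to break the cycle is to exclude the back-reference from serialization with `Field(exclude=True)`. A pydantic v2 sketch on a trimmed model; the parent stays navigable in Python, it just is not dumped:

```python
from typing import List, Optional
from pydantic import BaseModel, Field

class FileSystemEntry(BaseModel):
    id: str
    is_directory: bool
    # exclude=True drops the back-reference at dump time, breaking the
    # parent <-> children cycle that triggers "Circular reference detected".
    parent: Optional["FileSystemEntry"] = Field(default=None, exclude=True)
    children: List["FileSystemEntry"] = []

root = FileSystemEntry(id="root", is_directory=True)
child = FileSystemEntry(id="a", is_directory=False, parent=root)
root.children.append(child)

dumped = root.model_dump()
print(dumped)
```

If the parent still needs to appear in the JSON, an alternative is a `field_serializer` on `parent` that emits just the parent's `id` instead of the whole object.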
77,346,958 | 308,827 | Converting days in pandas dataframe to other categories | <p>I have the following dataframe:</p>
<pre><code> country Duration
0 Afghanistan 0 days
1 Afghanistan 4 days
2 Afghanistan 0 days
3 Afghanistan 22 days
4 Afghanistan 6 days
... ...
316813 Zimbabwe (Rhodesia) 36 days
316814 Zimbabwe (Rhodesia) 6 days
316815 Zimbabwe (Rhodesia) 223 days
316816 Zimbabwe (Rhodesia) 6 days
316817 Zimbabwe (Rhodesia) 0 days
</code></pre>
<p>I would like to convert the Duration column into the following categories:
<code>< 1 week, 1-4 weeks, 1-3 months, 3-6 months, 6-9 months, 9-12 months</code></p>
<p>How do I achieve that?</p>
| <python><pandas> | 2023-10-23 16:41:42 | 1 | 22,341 | user308827 |
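One way is `pd.cut` over the duration expressed in days. A sketch on a toy slice of the data; the bin edges approximate months as 30 days, which is an assumption to adjust if calendar-exact months are needed:

```python
import pandas as pd

df = pd.DataFrame({
    "country": ["Afghanistan"] * 3 + ["Zimbabwe (Rhodesia)"] * 2,
    "Duration": pd.to_timedelta([0, 4, 22, 36, 223], unit="D"),
})

# Bin edges in days: (-1, 7], (7, 28], (28, 90], (90, 180], (180, 270], (270, 366].
# The -1 lower edge keeps "0 days" inside the first bucket.
bins = [-1, 7, 28, 90, 180, 270, 366]
labels = ["< 1 week", "1-4 weeks", "1-3 months", "3-6 months", "6-9 months", "9-12 months"]
df["Duration_cat"] = pd.cut(df["Duration"].dt.days, bins=bins, labels=labels)
print(df)
```

Rows with durations above 366 days fall outside the edges and come back as `NaN`, so an extra "> 1 year" edge may be worth adding for real data.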
77,346,904 | 14,111,512 | Django-Python: You must set settings.ALLOWED_HOSTS if DEBUG is False | <p>I am trying to deploy a Django-Python based web application in Kubernetes. I'm also trying to setup OpenTelemetry so that I could capture the application metrics and visualise it using prometheus and Grafana.</p>
<p>However, when I try to instrument the application data to the OpenTelemetry collector, I am getting the below error:</p>
<blockquote>
<p>Defaulted container "trojanwall-django" out of: trojanwall-django, opentelemetry-auto-instrumentation-python (init).<br />
CommandError: You must set settings.ALLOWED_HOSTS if DEBUG is False.</p>
</blockquote>
<p>The first line specifies the name of the Kubernetes pod, <code>trojanwall-django</code>, and says that OpenTelemetry auto-instrumentation has started. Besides that, the second line says something about <code>ALLOWED_HOSTS</code> in the settings.py file.</p>
<p>I tried to add <code>ALLOWED_HOSTS = ["*"]</code> and also <code>ALLOWED_HOSTS = ['127.0.0.1', 'localhost', 'otel-collector.otel.svc.cluster.local', '0.0.0.0']</code> where <code>otel-collector.otel.svc.cluster.local</code> is the host name of the OpenTelemetry collector.</p>
<p>I tried to set <code>DEBUG = False</code>, yet still the resulting log is the same. Is this a host blocking issue in the settings.py file?</p>
<p>Below is my settings.py file</p>
<p><strong>settings.py</strong></p>
<pre><code>"""
Django settings for TestProject project.
Generated by 'django-admin startproject' using Django 3.0.8.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'uzpjcqsdf&@d7#3hrpsdfs__&6%ja)6qsdfvsdoc-=l1*65dxqsxfsd!'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
#DEBUG = int(os.environ.get("DEBUG", default=0))
#OTEL Test Config
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', 'otel-collector.otel.svc.cluster.local', '0.0.0.0']
#ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = [
'testapp.apps.TestappConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'TestProject.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'TestProject.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': os.environ.get('POSTGRES_NAME', 'postgres'),
'USER': os.environ.get('POSTGRES_USER', 'postgres'),
'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'postgres'),
'HOST': os.getenv('POSTGRES_SERVICE_HOST','127.0.0.1'),
'PORT': os.getenv('POSTGRES_SERVICE_PORT',5432)
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static')
]
STATIC_ROOT = os.path.join(BASE_DIR, 'assets')
</code></pre>
<p>(Also, please consider that this used to work fine. The application works if I disable OpenTelemetry auto-instrumentation at the Kubernetes pod level. When the container starts, the first command executed is <code>python manage.py runserver 0.0.0.0:8000</code>. However, when OpenTelemetry instrumentation is enabled at the Kubernetes level, before the container's first execution OpenTelemetry injects something that auto-instruments the application metrics, I assume. Could that have messed up anything in Django?)</p>
<p>Does anyone know how to fix this?</p>
| <python><python-3.x><django><kubernetes> | 2023-10-23 16:33:12 | 0 | 385 | arjunbnair |
77,346,557 | 19,130,803 | read row for given id from pickle file without loading all data | <p>I have a dataframe with 2 columns (ID, Contents) which I am saving in <code>pickle</code> format. Say, for example, I have 10 rows (i.e. IDs 0 to 9). I am trying to load a single row by passing an <code>ID</code> (valid ID between 0-9) from the <strong>pickle file without loading the entire file</strong>.</p>
<pre><code>import pandas as pd
import pickle
id = 2
path = "contents.pickle"
# dummy data
df = pd.DataFrame({
"ID": [0,1,2,3,4,5,6,7,8,9],
"Contents": [0,1,2,3,4,5,6,7,8,9]
})
print(f"{df=}")
df.to_pickle(path=path)
df = pd.read_pickle(filepath_or_buffer=path)
with open(path, "rb") as handle:
df = pickle.load(handle)
</code></pre>
<p>Both of the above read all the data in one go. I cannot figure out where I should pass the <code>ID</code> to load only that row. Since it may not be possible with a <code>dataframe</code>, I also tried a simple <code>list</code> saved in a pickle file, following the reference <a href="https://stackoverflow.com/questions/37954324/how-to-load-one-line-at-a-time-from-a-pickle-file">here</a>, but failed.</p>
<pre><code>data = [0,1,2,3,4,5,6,7,8,9]
with open(path, "wb") as handle:
pickle.dump(data, handle)
file = open(file=path)
file.seek(id)
row = pickle.load(file)
print(f"{row=}")
</code></pre>
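<p>The workaround I am leaning towards is dumping one pickle per row and keeping a byte-offset index on the side (my own sketch, not a pandas feature):</p>

```python
import os
import pickle
import tempfile

# Throwaway location standing in for "contents.pickle"
path = os.path.join(tempfile.mkdtemp(), "contents.pickle")
rows = [{"ID": i, "Contents": i * 10} for i in range(10)]

# Dump each row separately and remember the byte offset where it starts
offsets = {}
with open(path, "wb") as handle:
    for row in rows:
        offsets[row["ID"]] = handle.tell()
        pickle.dump(row, handle)

def load_row(path, offsets, row_id):
    with open(path, "rb") as handle:
        handle.seek(offsets[row_id])
        return pickle.load(handle)  # unpickles this row only

print(load_row(path, offsets, 2))
```

<p>The index dict would itself have to be persisted (e.g. as a second small pickle) to survive between runs.</p>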
| <python><pandas> | 2023-10-23 15:41:48 | 4 | 962 | winter |
77,346,314 | 6,087,667 | Assign axes to the chart in xlwings | <p>By using the script below I can generate the chart, but it assigns the columns to the abscissa. How can I assign the rows instead, i.e. x, y, z as abscissa and A, B, C as ordinates?</p>
<p><a href="https://i.sstatic.net/G4E1B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G4E1B.png" alt="enter image description here" /></a></p>
<pre><code>import xlwings as xw
wb = xw.Book(r"tests.xlsx")
ws = wb.sheets['Sheet1']
# adding a plot
cp = (10, 10)
ch = ws.charts.add(top=ws.range(cp).top, left=ws.range(cp).left, width=800, height=300)
ch.chart_type = 'line_markers'
ch.set_source_data(ws.range('A1:D3'))
# putting the chart title
ch.api[1].SetElement(2)
ch.api[1].ChartTitle.Text = 'Test'
</code></pre>
| <python><excel><charts><xlwings> | 2023-10-23 15:04:12 | 1 | 571 | guyguyguy12345 |
77,346,244 | 4,451,315 | How to resolve "Incompatible return value type (got "FancyCat", expected "Self")" | <p>Here's a minimal example I've written:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from typing import Protocol
from typing_extensions import Self
class Cat(Protocol):
def add(self, other: Self) -> Self:
...
def sub(self, other: Self) -> Self:
...
class FancyCat(Cat):
def __init__(self, value: int):
self._value = value
def add(self, other: Self) -> Self:
return FancyCat(self._value + other._value)
def sub(self, other: Self) -> Self:
return FancyCat(self._value - other._value)
fc = FancyCat(3)
fc2 = FancyCat(4)
fc.add(fc2)
</code></pre>
<p>If I try to type check it, I get</p>
<pre class="lang-py prettyprint-override"><code>$ mypy t.py
t.py:19: error: Incompatible return value type (got "FancyCat", expected "Self") [return-value]
t.py:22: error: Incompatible return value type (got "FancyCat", expected "Self") [return-value]
Found 2 errors in 1 file (checked 1 source file)
</code></pre>
<p>I'm so confused - isn't <code>Self</code> <code>FancyCat</code> in this case?</p>
<p>How can I satisfy <code>mypy</code> here?</p>
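<p>For reference, the workaround I am currently testing constructs the result through <code>type(self)</code>, which <code>mypy</code> accepts as <code>Self</code> (a sketch; the <code>Self</code> import fallback is my addition):</p>

```python
from __future__ import annotations

try:
    from typing import Self          # Python 3.11+
except ImportError:                  # pragma: no cover
    from typing_extensions import Self

class FancyCat:
    def __init__(self, value: int) -> None:
        self._value = value

    def add(self, other: Self) -> Self:
        # type(self)(...) is typed as Self, unlike the hard-coded FancyCat(...)
        return type(self)(self._value + other._value)

    def sub(self, other: Self) -> Self:
        return type(self)(self._value - other._value)

fc = FancyCat(3)
fc2 = FancyCat(4)
print(fc.add(fc2)._value)
```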
| <python><mypy><python-typing><self-type> | 2023-10-23 14:54:13 | 1 | 11,062 | ignoring_gravity |
77,346,241 | 5,005,808 | python cv2.CAP_PROP_FRAME_COUNT get wrong result, and also the cv2.CAP_PROP_FPS get wrong fps | <p>I want to use <code>cv2</code> to extract frames from a <code>.mkv</code> video file. The following is my code. I get wrong results for fps and frame count. I tried opencv versions 4.7.xx and 4.8.xx but still get the same problem. How shall I deal with this? The actual fps of my video file is 30 and the total frame count is around 15000.</p>
<pre><code>import time
import cv2
video_file_name = 'my_video_0.mkv'
cap = cv2.VideoCapture(video_file_name)
fps = cap.get(cv2.CAP_PROP_FPS) # 1000.0
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) # -9223372036854775808
</code></pre>
<p>Thank you all for helping me !!!</p>
| <python><opencv> | 2023-10-23 14:54:07 | 0 | 1,930 | pfc |
77,346,210 | 2,584,772 | Getting module names from unittest tests in python | <p>Is there a way to get test module names programmatically, from a test suite, when using python unittest?</p>
<p>Python <code>unittest.loader.TestLoader</code> has a discover method that returns all test modules, but I can't figure out a way to get only the test module names.</p>
<p>What I did:</p>
<pre><code>from unittest import TestLoader

loader = TestLoader()
suite = loader.discover('path/to/tests')
for test_suite in suite:
print(test_suite.__module__)
</code></pre>
<p>This results in:</p>
<pre><code>unittest.suite
unittest.suite
unittest.suite
unittest.suite
</code></pre>
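<p>For reference, this sketch of mine recurses into the nested suites and reads the module from each test case instead (the throwaway test directory is only there to make the example self-contained):</p>

```python
import os
import tempfile
import unittest

def iter_tests(suite):
    """Yield the individual TestCase instances inside arbitrarily nested suites."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

# Stand-in for 'path/to/tests': a throwaway directory with one test module
tests_dir = tempfile.mkdtemp()
with open(os.path.join(tests_dir, "test_sample.py"), "w") as f:
    f.write("import unittest\n"
            "class T(unittest.TestCase):\n"
            "    def test_a(self):\n"
            "        pass\n")

suite = unittest.TestLoader().discover(tests_dir)
module_names = sorted({type(t).__module__ for t in iter_tests(suite)})
print(module_names)  # ['test_sample'] instead of 'unittest.suite'
```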
| <python><testing><python-unittest> | 2023-10-23 14:49:28 | 2 | 1,684 | Caco |
77,345,810 | 3,238,679 | Speed up matching between a sliding window and a list | <p>I have an odd-length window and an array, and I need to find the index of the array where the centered window matches best. So far I am doing it with the following code. Is it possible to speed up these computations?</p>
<pre><code>import numpy as np
import time
def find_best_match_position(lst, window):
min_corr = np.inf
best_position = -1
for i in range(len(lst) - len(window) + 1):
window_subset = lst[i:i + len(window)]
corr = np.linalg.norm(window - window_subset)
if corr < min_corr:
min_corr = corr
best_position = i
return best_position
input_list_len = int(8E+6)
np.random.seed(2)
input_list = np.random.rand(input_list_len)
win_length = 31
np.random.seed(4)
window = np.random.rand(win_length)
gnd_index = 15
half_width = win_length // 2
start = gnd_index - half_width # Shift start by 1 to the right
end = gnd_index + half_width + 1
input_list[start:end] = window + 0.01 * np.random.rand(win_length)
t = time.time()
print(f'Computed index {find_best_match_position(input_list, window) + half_width}')
t1 = time.time()
print(f'Computation time {(t1 - t) / 60} min')
# Computed index 15
# Computation time 0.6488747239112854 min
</code></pre>
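<p>For reference, here is a vectorized sketch I am comparing against, which expands the squared norm so the sliding dot products can be computed with <code>np.convolve</code> (my own attempt, not verified for numerical edge cases):</p>

```python
import numpy as np

def find_best_match_position_fast(arr, window):
    # ||w - s||^2 = ||s||^2 - 2 w.s + ||w||^2; the last term is constant
    # across offsets, so the argmin only needs the first two terms.
    w = len(window)
    dots = np.convolve(arr, window[::-1], mode="valid")    # w.s at each offset
    sq = np.convolve(arr * arr, np.ones(w), mode="valid")  # ||s||^2 at each offset
    return int(np.argmin(sq - 2.0 * dots))
```

<p>This replaces the Python loop with two length-preserving convolutions, so the cost is roughly two passes over the 8M-element array.</p>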
| <python><algorithm><numpy><sliding-window> | 2023-10-23 13:52:14 | 1 | 1,041 | Thoth |
77,345,420 | 10,133,797 | Print numpy arrays truncated | <p>Something like</p>
<pre class="lang-py prettyprint-override"><code>print({n: np.random.randn(50, 50, 50) for n in 'abcd'})
</code></pre>
<p>prints a big mess. In MATLAB, it shows</p>
<pre class="lang-matlab prettyprint-override"><code> "a" ⟼ {50×50×50 double}
"b" ⟼ {50×50×50 double}
"c" ⟼ {50×50×50 double}
"d" ⟼ {50×50×50 double}
</code></pre>
<p>Is there an option to display arrays in "summary mode"? If not, limit their max size, in number of characters?</p>
<p>The simplest I can think of is</p>
<pre class="lang-py prettyprint-override"><code>s = np.array2string(np.random.randn(50, 50, 50))
print(s[:200] + ' ...\n' + s[-200:])
</code></pre>
<p>which gives</p>
<pre class="lang-py prettyprint-override"><code>[[[-2.20155640e+00 9.73402046e-01 -1.57076829e+00 ... 1.71643279e-01
-6.92943943e-01 -1.8014576
...
141624e+00 -3.51233625e-01 8.39485094e-01 ... 5.81172472e-01
-1.59288538e+00 6.68331170e-01]]]
</code></pre>
<p>but that's like tearing a page in half. Desired would be something like</p>
<pre class="lang-py prettyprint-override"><code>[[[-2.20155640e+00 9.73402046e-01 -1.57076829e+00 ... 1.71643279e-01
-6.92943943e-01 ...
...
[[ ... -3.51233625e-01 8.39485094e-01 ... 5.81172472e-01
-1.59288538e+00 6.68331170e-01]]]
</code></pre>
<p>but that'll take a bunch of coding manually upon output of <code>array2string</code>.</p>
<p>Is there built-in support to do such truncation? I've not found it in <a href="https://numpy.org/doc/stable/reference/generated/numpy.printoptions.html" rel="nofollow noreferrer">np.printoptions</a>. I'll accept "no" as an answer (if it's the answer).</p>
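<p>The closest I have managed is a small helper that recursively replaces arrays with a MATLAB-like shape/dtype summary (my own sketch, not a NumPy feature):</p>

```python
import numpy as np

def summarize(obj):
    """Replace ndarrays with a '50×50×50 float64'-style summary, recursively."""
    if isinstance(obj, np.ndarray):
        return "×".join(map(str, obj.shape)) + f" {obj.dtype}"
    if isinstance(obj, dict):
        return {k: summarize(v) for k, v in obj.items()}
    return obj

print(summarize({n: np.random.randn(50, 50, 50) for n in "abcd"}))
```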
| <python><numpy><formatting> | 2023-10-23 12:58:47 | 2 | 19,954 | OverLordGoldDragon |
77,345,288 | 4,405,942 | Comparing JPEG images in PIL | <p>I am trying to assert equality between JPEG images using PIL.
Currently, I'm relying on exact comparison using the following:</p>
<pre><code>from PIL import Image, ImageChops
img1 = Image.open("tmp_img.jpg")
img2 = Image.open("tmp_img.jpg")
ImageChops.difference(img1, img2).getbbox() is None
</code></pre>
<p>This works fine and outputs <code>True</code> for the above example.</p>
<p>The problem is, after saving and loading the image:</p>
<pre><code>img1.save("tmp_img_2.jpg")
img3 = Image.open("tmp_img_2.jpg")
ImageChops.difference(img1, img3).getbbox() is None
</code></pre>
<p>It outputs <code>False</code> and I suppose the reason is JPEG compression.</p>
<p>My question is, what metric should I use, in order to assert equality as reliably as possible?
I could use MSE as described <a href="https://pyimagesearch.com/2014/09/15/python-compare-two-images/" rel="nofollow noreferrer">here</a>. As a threshold I would save and load the image 10 times and see how big the MSE gets after 10 compressions.</p>
<p>But this depends on the specific image I'm using and also feels kind of dumb.</p>
<p>Is there a more generic way to check whether two JPEGs are equal regardless of (repeated) compression?</p>
<p>Thank you</p>
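<p>For reference, the MSE-based variant I am considering looks like this (a sketch; the tolerance value is an arbitrary choice of mine):</p>

```python
import numpy as np
from PIL import Image

def mse(img1, img2):
    a = np.asarray(img1, dtype=np.float64)
    b = np.asarray(img2, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

def roughly_equal(img1, img2, tol=10.0):
    # Identical pixels give 0.0; a JPEG re-encode gives a small non-zero MSE
    return mse(img1, img2) <= tol
```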
| <python><python-imaging-library><jpeg><image-comparison> | 2023-10-23 12:39:52 | 0 | 2,180 | Gerry |
77,345,242 | 9,973,879 | How to test for a raised Exception in the exit of a context manager? | <p>I am using pytest to test my program. I want to test for an Exception of type <code>Foo</code> being raised when a context manager exits. Using</p>
<pre><code>with pytest.raises(Foo):
with my_context_mngr():
pass
</code></pre>
<p>works but with the caveat that it will silently succeed if <code>Foo</code> is raised during the initialization of the context manager, which, as it turns out, could be the case.</p>
<p>Is there a way to handle this situation with <code>pytest</code> other than having to call the context managers methods directly?</p>
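<p>For completeness, one idea I am experimenting with: enter the context manager through <code>contextlib.ExitStack</code> so that only the exit path sits inside <code>pytest.raises</code> (a sketch with stand-in <code>Foo</code> and <code>my_context_mngr</code> definitions):</p>

```python
from contextlib import ExitStack, contextmanager

import pytest

class Foo(Exception):
    pass

@contextmanager
def my_context_mngr():
    yield
    raise Foo("raised on exit")  # stand-in: fails only when leaving the context

def test_raises_on_exit_only():
    with ExitStack() as stack:
        stack.enter_context(my_context_mngr())  # a Foo raised here fails the test
        with pytest.raises(Foo):
            stack.close()                       # only the exit path is guarded
```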
| <python><pytest> | 2023-10-23 12:32:45 | 2 | 1,967 | user209974 |
77,345,136 | 2,123,706 | Convert list of dictionaries to dictionary of dictionaries | <p>I have a list of dictionaries</p>
<pre><code>ls1 = [{'AccountBalance': '78', 'Status': '0'},
{'AccountBalance': '56', 'Status': '1'},
{'AccountBalance': '34', 'Status': '0'},
{'AccountBalance': '12', 'Status': '0'}]
</code></pre>
<p>I would like to convert this to a dictionary of 2 dictionaries</p>
<pre><code>dict1 = {'AccountBalance': {0: '78',
1: '56',
2: '34',
3: '12'},
'Status': {0: '0',
1: '1',
2: '0',
3: '0'}}
</code></pre>
<p>what would be the fastest way to do this?</p>
<p><code>pd.DataFrame(ls1).to_dict()</code> works, but I want to know if something is a little faster.</p>
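<p>For comparison, here is a plain-Python version without pandas (a sketch; it assumes every dict has the same keys):</p>

```python
ls1 = [{'AccountBalance': '78', 'Status': '0'},
       {'AccountBalance': '56', 'Status': '1'},
       {'AccountBalance': '34', 'Status': '0'},
       {'AccountBalance': '12', 'Status': '0'}]

# Outer loop over the keys of the first dict, inner loop over the rows
dict1 = {key: {i: d[key] for i, d in enumerate(ls1)} for key in ls1[0]}
print(dict1)
```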
| <python><dictionary> | 2023-10-23 12:14:58 | 3 | 3,810 | frank |
77,345,121 | 20,830,264 | Azure OpenAI LangChain - (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set | <p>I'm trying to create an embedding vector database with some .txt documents in my local folder. In particular I'm following this tutorial from the official page of LangChain: <a href="https://python.langchain.com/docs/integrations/vectorstores/azuresearch" rel="nofollow noreferrer">LangChain - Azure Cognitive Search and Azure OpenAI</a>.
I have followed all the steps of the tutorial and this is my Python script:</p>
<pre class="lang-py prettyprint-override"><code># From https://python.langchain.com/docs/integrations/vectorstores/azuresearch
import openai
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxxxxx.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "xxxxxxxxx"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
model: str = "text-embedding-ada-002"
vector_store_address: str = "https://xxxxxxx.search.windows.net"
vector_store_password: str = "xxxxxxx"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "cognitive-search-openai-exercise-index"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader("C:/Users/xxxxxxxx/azure_openai_cognitive_search_exercise/data/qna/a.txt", encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
vector_store.add_documents(documents=docs)
# Perform a similarity search
docs = vector_store.similarity_search(
query="Who is Pippo Franco?",
k=3,
search_type="similarity",
)
print(docs[0].page_content)
</code></pre>
<p>Now, when I run the script I get the following error:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>vector_search_configuration is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchField'> and will be ignored
algorithm_configurations is not a known attribute of class <class 'azure.search.documents.indexes._generated.models._models_py3.VectorSearch'> and will be ignored
Traceback (most recent call last):
File "C:\Users\xxxxxxxxx\venv\Lib\site-packages\langchain\vectorstores\azuresearch.py", line 105, in _get_search_client
index_client.get_index(name=index_name)
File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxxx\KYF\venv\Lib\site-packages\azure\search\documents\indexes\_search_index_client.py", line 145, in get_index
result = self._client.indexes.get(name, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxx\KYF\venv\Lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py",
line 864, in get
map_error(status_code=response.status_code, response=response, error_map=error_map)
File "C:\Users\xxxxxxxx\venv\Lib\site-packages\azure\core\exceptions.py", line 165, in map_error
raise error
azure.core.exceptions.ResourceNotFoundError: () No index with the name 'cognitive-search-openai-exercise-index' was found in the service 'cognitive-search-openai-exercise'.
Code:
Message: No index with the name 'cognitive-search-openai-exercise-index' was found in the service 'cognitive-search-openai-exercise'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\xxxxxxx\venv\azure_openai_cognitive_search_exercise\test.py", line 25, in <module>
vector_store: AzureSearch = AzureSearch(
^^^^^^^^^^^^
File "C:\Users\xxxxxxx\venv\Lib\site-packages\langchain\vectorstores\azuresearch.py", line 237, in __init__
self.client = _get_search_client(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxxxx\venv\Lib\site-packages\langchain\vectorstores\azuresearch.py", line 172, in _get_search_client
index_client.create_index(index)
File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\search\documents\indexes\_search_index_client.py", line 220, in create_index
result = self._client.indexes.create(patched_index, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxxx\venv\Lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py",
line 402, in create
raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
Code: InvalidRequestParameter
Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition
Code: InvalidField
Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition</code></pre>
</div>
</div>
</p>
<p>I have created an index manually from the Azure Cognitive Search Console, but I don't think this is the correct approach, as the script should automatically create a new index.</p>
| <python><azure-cognitive-search><langchain><azure-openai><openaiembeddings> | 2023-10-23 12:12:47 | 1 | 315 | Gregory |
77,344,919 | 5,955,479 | Airflow - importing DAG with dependencies | <p>I deployed airflow on kubernetes using the official helm chart. I'm using KubernetesExecutor and git-sync.<br />
I am using a separate docker image for my webserver and my workers - each DAG gets its own docker image. I am running into DAG import errors on the airflow home page. E.g. if one of my DAGs uses <code>pandas</code> then I'll get</p>
<pre><code>Broken DAG: [/opt/airflow/dags/repo/dags/airflow_demo/ieso.py] Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/opt/airflow/dags/repo/dags/project1/dag1.py", line 7, in <module>
from pandas import read_parquet
ModuleNotFoundError: No module named 'pandas'
</code></pre>
<p>I don't have <code>pandas</code> installed on the webserver or scheduler docker images, because if I understand it correctly, you shouldn't install the individual dependencies on these. I am getting the same error when running <code>airflow dags list-import-errors</code> on the scheduler pod. I do have <code>pandas</code> installed on the worker pod, but it never runs, because the DAG cannot be discovered through these errors.<br />
How do I make airflow discover this DAG without installing <code>pandas</code> on either the scheduler or the webserver? I know installing it on both will fix this, but I am not interested in doing it this way.</p>
| <python><kubernetes><airflow> | 2023-10-23 11:46:38 | 1 | 355 | user430953 |
77,344,749 | 2,726,900 | PythonVirtualenvOperator gives error: No module named unusual_prefix_***_dag | <p>I'm using airflow 2.5.3 with Kubernetes executor and Python 3.7.</p>
<p>I've tried to make a simple DAG with only one <code>PythonVirtualenvOperator</code> and two context variables (<code>{{ ts }}</code> and <code>{{ dag }}</code>) passed into it.</p>
<pre><code>from datetime import timedelta
from pathlib import Path
import airflow
from airflow import DAG
from airflow.operators.python import PythonOperator, PythonVirtualenvOperator
import pendulum
dag = DAG(
default_args={
'retries': 2,
'retry_delay': timedelta(minutes=10),
},
dag_id='fs_rb_cashflow_test5',
schedule_interval='0 5 * * 1',
start_date=pendulum.datetime(2020, 1, 1, tz='UTC'),
catchup=False,
tags=['Feature Store', 'RB', 'u_m1ahn'],
render_template_as_native_obj=True,
)
context = {"ts": "{{ ts }}", "dag": "{{ dag }}"}
op_args = [context, Path(__file__).parent.absolute()]
def make_foo(*args, **kwargs):
print("---> making foo!")
print("make foo(...): args")
print(args)
print("make foo(...): kwargs")
print(kwargs)
make_foo_task = PythonVirtualenvOperator(
task_id='make_foo',
python_callable=make_foo,
provide_context=True,
use_dill=True,
system_site_packages=False,
op_args=op_args,
op_kwargs={
"execution_date_str": '{{ execution_date }}',
},
requirements=["dill", "pytz", f"apache-airflow=={airflow.__version__}", "psycopg2-binary >= 2.9, < 3"],
dag=dag)
</code></pre>
<p>Alas, when I'm trying to trigger this DAG, airflow gives me the following error:</p>
<pre><code>[2023-10-23, 13:30:40] {process_utils.py:187} INFO - Traceback (most recent call last):
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/script.py", line 17, in <module>
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - arg_dict = dill.load(file)
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/lib/python3.7/site-packages/dill/_dill.py", line 287, in load
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - return Unpickler(file, ignore=ignore, **kwds).load()
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/lib/python3.7/site-packages/dill/_dill.py", line 442, in load
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - obj = StockUnpickler.load(self)
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/lib/python3.7/site-packages/dill/_dill.py", line 432, in find_class
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - return StockUnpickler.find_class(self, module, name)
[2023-10-23, 13:30:40] {process_utils.py:187} INFO - ModuleNotFoundError: No module named 'unusual_prefix_4c3a45107010a4223aa054ffc5f7bffc78cce4e7_dag'
</code></pre>
<p>Why does it give me this strange error -- and how can it be fixed?</p>
| <python><kubernetes><airflow><pickle><dill> | 2023-10-23 11:22:08 | 1 | 3,669 | Felix |
77,344,656 | 1,661,465 | Testing concurrent.futures.TimeoutError and logging in a threaded function using Pytest | <p>I've come across a testing challenge in my Python project and I'm hoping to get some insights from the community. I have a utility module containing a function <code>threaded_execute</code> which utilizes the <code>concurrent.futures</code> module to execute a function in a separate thread. If a <code>concurrent.futures.TimeoutError</code> occurs, it logs a warning and retries the function. I'm using <a href="https://docs.pytest.org/" rel="nofollow noreferrer">pytest</a> for testing and I want to specifically test the logging of the <code>TimeoutError</code> without installing any additional packages.</p>
<p>Here's a simplified version of my code:</p>
<pre><code>import logging
from concurrent import futures
logger = logging.getLogger(__name__)
THREAD_TIMEOUT: float = 60 # For simplicity, just hardcoding the value here
def threaded_execute(func, *args, timeout=None, **kwargs):
timeout = timeout or THREAD_TIMEOUT
while True:
with futures.ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(func, *args, **kwargs)
try:
return future.result(timeout=timeout)
except futures.TimeoutError as exc:
logger.warning(exc)
continue
</code></pre>
| <python><testing><pytest><concurrent.futures> | 2023-10-23 11:06:26 | 1 | 3,421 | serghei |
77,344,570 | 12,415,855 | Writing only part of the text in a cell as bold? | <p>I am writing/changing a cell value in an existing Excel sheet using openpyxl and formatting the cell as bold using the following code:</p>
<pre><code>import openpyxl
from openpyxl.styles import Font
workbook = openpyxl.load_workbook("try.xlsx")
worksheet = workbook.active
worksheet['C1'] = "this is some text is use"
bold_font = Font(bold=True)
worksheet['C1'].font = bold_font
workbook.save("try.xlsx")
workbook.close()
</code></pre>
<p>Now I want to format only the words "is" and "text" in the cell as bold; the rest should stay in the normal font. How can I do this using openpyxl?</p>
| <python><openpyxl> | 2023-10-23 10:51:20 | 1 | 1,515 | Rapid1898 |
77,344,373 | 2,074,697 | Importing data from Excel; set values in Merged Cells to be the same | <p>I'm importing some hundred Excel files into a SQL Database and I need to have an import procedure which is very general, in the sense that I don't need to adjust any variables depending on the structure of the Excel file.</p>
<p>Thus I cannot use column names explicitly in the code and I cannot refer to cells with the (row, column) coordinates or row/column indices.</p>
<p>I've come across an issue now where <em>some</em> Excel files has their header row in merged cells.</p>
<p><a href="https://i.sstatic.net/RFU1S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RFU1S.png" alt="enter image description here" /></a></p>
<p>which, represented as a data frame is represented as this:</p>
<pre><code>0 Merged column1 NaN NaN Merged column1
1 NaN Value Color NaN
2 DataValue11 DataValue12 NaN DataValue13
3 DataValue21 DataValue22 NaN NaN
</code></pre>
<p>Where the first column with integers represents the row numbers.</p>
<p>I've tried to import the excel file using pandas read_excel and subsequently use <code>df.fillna()</code> which works to a degree, like this:</p>
<pre><code>xl = read_excel(path, sheet_name = name)
xl = xl.fillna(method='ffill', axis=0, limit=1)
</code></pre>
<p>However, several rows have NaN values in the data rows, so df.fillna is not suitable (data rows that legitimately contain NaN get the value of the neighbouring row). In terms of the data example above, this method yields this output:</p>
<pre><code>0 Merged column1 NaN NaN Merged column1
1 Merged column1 Value Color Merged column1
2 DataValue11 DataValue12 Color DataValue13
3 DataValue21 DataValue22 NaN DataValue13
</code></pre>
<p>I tried to check the documentation for read_excel and pandas etc. and I found this:</p>
<pre><code>xl.merged_cells
</code></pre>
<p>however I get the error that <code>'DataFrame' object has no attribute 'merged_cells'</code></p>
<p>I've also tried to limit the range of <code>fillna</code> using <code>head()</code>. But it "cuts off" the data frame. <code>xl = xl.head(2).fillna(method = 'ffill', axis=0, limit=1)</code> gave this output for example:</p>
<pre><code>0 Merged column1 NaN NaN Merged column1
1 Merged column1 Value Color Merged column1
</code></pre>
<p>Refering to the data structure above, I want the result to be:</p>
<pre><code>0 Merged column1 NaN NaN Merged column1
1 Merged column1 Value Color Merged column1
2 DataValue11 DataValue12 NaN DataValue13
3 DataValue21 DataValue22 NaN NaN
</code></pre>
<p>Is there a way to achieve this? Seeing the vast difference in structure of Excel files it needs to be general as well (i.e. I do not know the rows representing the headers beforehand etc, the table in the Excel file could start at different rows than row 1 etc).</p>
<p>EDIT:</p>
<p>Is there a way to deal with things like this, when additional "useless" data has been added to the file?</p>
<p><a href="https://i.sstatic.net/91zAf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/91zAf.png" alt="enter image description here" /></a></p>
<p>The above gives the output</p>
<pre><code> None Comment Value Color Comment2 Merged column3
0 None DataValue11 DataValue12 NaN DataValue13 NaN
1 None DataValue21 DataValue22 NaN NaN NaN
</code></pre>
<p>rather than the expected</p>
<pre><code> None Merged column1 Value Color Merged column2 Merged column3
0 None DataValue11 DataValue12 NaN DataValue13 NaN
1 None DataValue21 DataValue22 NaN NaN NaN
</code></pre>
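<p>One direction that stays general is to resolve the merges before pandas ever sees the data. This is a sketch, assuming the file is <code>.xlsx</code> and the <code>openpyxl</code> package is available: the sheet exposes its merged ranges, so the top-left value of each range can be copied into every cell of that range, leaving genuine NaN cells in data rows untouched.</p>

```python
import pandas as pd
from openpyxl import load_workbook

def read_excel_fill_merged(path, sheet_name):
    """Load a sheet, propagating each merged range's value to all its cells."""
    ws = load_workbook(path)[sheet_name]
    # Snapshot the ranges first: unmerging mutates ws.merged_cells.ranges.
    for rng in list(ws.merged_cells.ranges):
        value = ws.cell(rng.min_row, rng.min_col).value  # top-left holds it
        ws.unmerge_cells(str(rng))
        for row in ws.iter_rows(min_row=rng.min_row, max_row=rng.max_row,
                                min_col=rng.min_col, max_col=rng.max_col):
            for cell in row:
                cell.value = value
    return pd.DataFrame(ws.values)
```

<p>Because only cells that are actually part of a merged range are filled, the valid NaN values elsewhere in the data rows are preserved, unlike with <code>ffill</code>.</p>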
| <python><pandas><excel> | 2023-10-23 10:18:11 | 1 | 1,242 | Cenderze |
77,344,296 | 2,725,810 | PCA affects similarity comparisons even when keeping all the dimensions | <p>Consider:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.decomposition import PCA
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
def do_pca(X, n_components):
print(f"Doing PCA from {X.shape[0]} vectors")
pca = PCA(n_components=n_components)
X_pca = pca.fit_transform(X)
print('Explained variance:',
sum(pca.explained_variance_ratio_[:n_components]))
return pca
def by_relevance(vectors, key_vector):
rankings = [
(i, cosine_similarity(v.reshape(1,-1), key_vector.reshape(1,-1)))
for i, v in enumerate(vectors)]
rankings.sort(key=lambda el:-el[1])
for i, r in rankings:
print(i, r)
np.random.seed(1)
X = np.random.random((50, 20))
pca = do_pca(X, n_components=20)
X = X[:5]
key_vector = X[[0]]
by_relevance(X, key_vector)
print()
by_relevance(pca.transform(X), pca.transform(key_vector))
</code></pre>
<p>This code performs PCA on 50 vectors, while keeping as many components as there are dimensions. The function <code>by_relevance</code> sorts vectors by similarity to the given vector. We call this function twice - once before and once after the transformation. Since all the components are kept, I would expect similar results for both invocations. However, this is the output:</p>
<pre><code>Doing PCA from 50 vectors
Explained variance: 1.0
0 [[1.]]
4 [[0.78738484]]
1 [[0.73532448]]
3 [[0.71538191]]
2 [[0.6614021]]
0 [[1.]]
4 [[0.01659682]]
3 [[-0.02417426]]
1 [[-0.02855172]]
2 [[-0.03232985]]
</code></pre>
<p>Why is the ranking affected and how come the last four similarities became so small?</p>
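<p>A likely culprit worth checking is the centering step: PCA subtracts the column means before rotating, and cosine similarity is not translation-invariant, while the rotation itself (with all components kept) preserves angles. A NumPy-only sketch reproducing the effect on the same data:</p>

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

np.random.seed(1)
X = np.random.random((50, 20))  # entries in [0, 1): every vector points into
                                # the positive orthant, so raw cosines are large
Xc = X - X.mean(axis=0)         # the centering that PCA applies internally

print(cos(X[0], X[4]))          # high, matching the first ranking
print(cos(Xc[0], Xc[4]))        # near zero, matching the post-PCA ranking
```

<p>With all 20 components kept, the transform is this centering followed by an orthonormal rotation, and the rotation leaves dot products and norms (hence cosines) unchanged; the ranking change therefore comes from subtracting the mean, not from losing information.</p>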
| <python><scikit-learn><pca> | 2023-10-23 10:03:45 | 1 | 8,211 | AlwaysLearning |
77,344,291 | 1,841,839 | Wrong type of credentials for creating tuning model in quickstart | <p>I am trying to set up a tuning model for Palm api.</p>
<p>It requires authorization so I am following the <a href="https://developers.generativeai.google/tutorials/oauth_quickstart" rel="nofollow noreferrer">Oauth quick start</a></p>
<p>I have configured the consent screen and created the desktop app credentials as stated in the guide.</p>
<blockquote>
<p>Click Application type > Desktop app.</p>
</blockquote>
<pre><code>{
"installed": {
"client_id": "[REDACTED].googleusercontent.com",
"project_id": "[REDACTED]",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_secret": "[REDACTED]",
"redirect_uris": [
"http://localhost"
]
}
}
</code></pre>
<p>When I run the code found in <a href="https://developers.generativeai.google/tutorials/tuning_quickstart_python" rel="nofollow noreferrer">tuning QuickStart python </a></p>
<pre><code>import google.generativeai as palm
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'C:\Development\FreeLance\machineLearning\GaTraining\creds.json'
def check_for_existing_tuned_models():
print('Available base models:', [m.name for m in palm.list_models()])
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
check_for_existing_tuned_models()
</code></pre>
<p>I get the following error.</p>
<blockquote>
<p>The file C:\Development\FreeLance\machineLearning\GaTraining\creds.json does not have a valid type. Type is None, expected one of ('authorized_user', 'service_account', 'external_account', 'external_account_authorized_user', 'impersonated_service_account', 'gdch_service_account').</p>
</blockquote>
<p>Which implies to me that it requires service account authorization, which is the complete opposite of what the tutorial says I should use.</p>
<p>How do I authorize to palm api to create my own tuning model?</p>
<p>Note:</p>
<p>Creating a service account does work. I just don't understand why the tutorial says to use OAuth2 and configure the consent screen. Which one should I use, a service account or OAuth2?</p>
| <python><machine-learning-model><palm-api> | 2023-10-23 10:03:20 | 1 | 118,263 | Linda Lawton - DaImTo |
77,344,277 | 4,194,079 | Cross dimensional segmented operation | <p>Say you have the following <code>a</code> array</p>
<pre class="lang-py prettyprint-override"><code>>>> a = np.arange(27).reshape((3,3,3))
>>> a
array([[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8]],
[[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]],
[[18, 19, 20],
[21, 22, 23],
[24, 25, 26]]], dtype=int64)
</code></pre>
<p>And <code>m</code>, an array that specifies segment ids</p>
<pre class="lang-py prettyprint-override"><code>>>> m = np.linspace(start=0, stop=6, num=27).astype(int).reshape(a.shape)
>>> m
array([[[0, 0, 0],
[0, 0, 1],
[1, 1, 1]],
[[2, 2, 2],
[2, 3, 3],
[3, 3, 3]],
[[4, 4, 4],
[4, 5, 5],
[5, 5, 6]]])
</code></pre>
<p>When using <a href="https://jax.readthedocs.io/en/latest/" rel="nofollow noreferrer">JAX</a> and wishing to perform, say, a sum over the scalars in <code>a</code> that share the same id in <code>m</code>, we can rely on <a href="https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.segment_sum.html" rel="nofollow noreferrer"><code>jax.ops.segment_sum</code></a>.</p>
<pre><code>>>> jax.ops.segment_sum(data=a.ravel(), segment_ids=m.ravel())
Array([10, 26, 42, 75, 78, 94, 26], dtype=int64)
</code></pre>
<p>Note that I had to resort to <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.ravel.html" rel="nofollow noreferrer"><code>numpy.ndarray.ravel</code></a> since <code>~.segment_sum</code> assumes <code>m</code> to indicate the segments of data <a href="https://jax.readthedocs.io/en/latest/_autosummary/jax.ops.segment_sum.html#:%7E:text=that%20indicates%20the%20segments%20of%20data%20(along%20its%20leading%20axis)" rel="nofollow noreferrer"><em>along its leading axis</em></a>.</p>
<hr>
<p><strong>Q1</strong>: Can you confirm there is no better approach, either with or without JAX?</p>
<p><strong>Q2</strong> : How would one then build <code>n</code>, an array that results from the replacement of the ids with the just-performed sums ? Note that I am not interested in non-vectorized approaches such as <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>.</p>
<pre class="lang-py prettyprint-override"><code>>>> n
array([[[10, 10, 10],
[10, 10, 26],
[26, 26, 26]],
[[42, 42, 42],
[42, 75, 75],
[75, 75, 75]],
[[78, 78, 78],
[78, 94, 94],
[94, 94, 26]]], dtype=int64)
</code></pre>
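<p>For Q2 (and as a NumPy-only point of comparison for Q1), a sketch: <code>np.bincount</code> with <code>weights</code> performs the segmented sum, and fancy indexing with the id array then scatters the sums back into the original shape in one vectorized step.</p>

```python
import numpy as np

a = np.arange(27).reshape(3, 3, 3)
m = np.linspace(start=0, stop=6, num=27).astype(int).reshape(a.shape)

# Segmented sum: bincount adds a[i] into bucket m[i] for every element.
sums = np.bincount(m.ravel(), weights=a.ravel()).astype(a.dtype)
print(sums)   # [10 26 42 75 78 94 26]

# Scatter: indexing with the id array replaces each id by its segment's sum.
n = sums[m]
```

<p>The same scatter should work in JAX as <code>segment_sums[m]</code> on the output of <code>jax.ops.segment_sum</code>, so the <code>ravel</code>-based call combined with one fancy-indexing step seems about as direct as it gets.</p>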
| <python><numpy><jax> | 2023-10-23 10:01:13 | 2 | 6,705 | keepAlive |
77,343,917 | 13,793,478 | Navigating in flask using JS | <p>I am trying to use JavaScript function to navigate to another flask route.</p>
<p>This is the function:</p>
<pre><code>dateClick: function(info) {
alert('Date: ' + info.dateStr);
alert('Resource ID: ' + info.resource.id);
}
</code></pre>
<p>Instead of the alert, I want it to navigate to this route:</p>
<pre><code>@app.route('/addPage')
def addPage():
return render_template('manual_entry.html')
</code></pre>
| <javascript><python><flask> | 2023-10-23 09:02:34 | 1 | 514 | Mt Khalifa |
77,343,863 | 5,730,859 | Pandas table convert to word with repeated columns | <p>I want the columns and the data arranged nicely in the format below in Word:</p>
<pre><code>Name: Sam
Age: 14
Weight: 45
Name: Andrea
Age: 25
Weight: 88
Name: Alex
Age: 55
Weight: 56
Name: Robin
Age: 8
Weight: 15
Name: Kia
Age: 21
Weight: 71
</code></pre>
<p>However, my code below is unable to get the desired output, as the data comes out as a vertical table.</p>
<pre><code>import pandas as pd
import docx
data = pd.DataFrame({'Weight':[45, 88, 56, 15, 71],
'Name':['Sam', 'Andrea', 'Alex', 'Robin', 'Kia'],
'Age':[14, 25, 55, 8, 21]})
df = pd.DataFrame(data)
print(df.columns[0])
print(df.columns)
print(df)
# Initialise the Word document
doc = docx.Document()
# Initialise the table
t = doc.add_table(rows=df.shape[0]+1, cols=df.shape[1])
for j in range(df.shape[-1]):
t.cell(0,j).text = df.columns[j]
# Add the body of the data frame to the table
for i in range(df.shape[0]):
for j in range(df.shape[-1]):
t.cell(i+1,j).text = str(df.values[i,j])
# Save the Word doc
doc.save(r'C:\Users\Downloads\table 1.docx')
</code></pre>
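<p>A possible direction, shown here as a plain-text sketch: instead of filling a grid-shaped table, iterate record by record and emit one <code>Column: value</code> line per field. Each resulting line could then be written with <code>doc.add_paragraph(line)</code> rather than a table cell.</p>

```python
import pandas as pd

df = pd.DataFrame({'Weight': [45, 88, 56],
                   'Name': ['Sam', 'Andrea', 'Alex'],
                   'Age': [14, 25, 55]})

lines = []
for _, row in df.iterrows():
    for col in ('Name', 'Age', 'Weight'):   # desired field order
        lines.append(f"{col}: {row[col]}")
    lines.append("")                        # blank line between records

print("\n".join(lines))
```

<p>The explicit column tuple also fixes the field order independently of the DataFrame's column order.</p>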
| <python><python-docx> | 2023-10-23 08:54:16 | 1 | 934 | bkcollection |
77,343,814 | 2,725,810 | pca.transform(data) vs. data @ pca.components_.T | <p>Given the matrix X, suppose we did:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.decomposition import PCA
pca = PCA(n_components=5).fit_transform(X)
</code></pre>
<p>Given a data matrix <code>data</code>, I would expect that the reduction to 5 dimensions is given by:</p>
<pre class="lang-py prettyprint-override"><code>data_reduced = data @ pca.components_.T
</code></pre>
<p>However, looking at <a href="https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/decomposition/_kernel_pca.py" rel="nofollow noreferrer">the code</a> of <code>pca.transform()</code>, I see that it does something very different.</p>
<p>How do the two methods differ in high-level terms?</p>
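<p>One concrete difference that can be checked against <code>pca.mean_</code>: the transform centers the data before projecting, which the bare matmul skips. A NumPy sketch of PCA via SVD (an analogue of the sklearn fit, without its sign conventions) showing the gap between the two expressions:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 8))

# PCA by hand: components are right singular vectors of the *centered* data.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:5]                      # analogue of pca.components_

transformed = (X - mean) @ components.T  # what transform() computes (whiten=False)
naive = X @ components.T                 # the matmul from the question

# The two differ by a constant row: the projection of the mean.
print(np.abs(naive - transformed - mean @ components.T).max())
```

<p>So every row of <code>naive</code> is offset from the transform's output by the same vector, which distorts anything downstream that is not translation-invariant.</p>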
| <python><scikit-learn><pca> | 2023-10-23 08:46:30 | 1 | 8,211 | AlwaysLearning |
77,343,501 | 859,227 | Dividing two columns of pandas daraframe and keep the header name | <p>With the following data frame</p>
<pre><code>ID,WEIGHT,I1,I2,I4
1,0.2,839,1664,3266
2,0.1,851,863,858
3,0.4,1018,1999,3982
4,0.3,878,1724,3447
</code></pre>
<p>I want to iterate over I1..I4 and create new data frames by joining the <code>WEIGHT</code> column and <code>I_i/I1</code>. The following code works fine</p>
<pre><code>for i in range(3,5):
df_new = pd.concat([df['WEIGHT'], df.iloc[:,i]/df.iloc[:,2]], axis=1)
print(df_new)
</code></pre>
<p>But as you can see in the output, the column header is 0 which I guess is the result of <code>I2/I1</code> and <code>I4/I1</code>.</p>
<pre><code> WEIGHT 0
0 0.2 1.983313
1 0.1 1.014101
2 0.4 1.963654
3 0.3 1.963554
WEIGHT 0
0 0.2 3.892729
1 0.1 1.008226
2 0.4 3.911591
3 0.3 3.925968
</code></pre>
<p>How can I keep the columns as I2 and I4? I mean keeping the column head of <code>df_new</code> the same as <code>df.iloc[:,i]</code> ?</p>
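<p>The division presumably returns an unnamed <code>Series</code> (the operands have different names, so the result's name is dropped and <code>concat</code> falls back to the positional label <code>0</code>). One fix along these lines is to restore the name before concatenating:</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2], "WEIGHT": [0.2, 0.1],
                   "I1": [839, 851], "I2": [1664, 863], "I4": [3266, 858]})

for i in range(3, 5):
    # Rename the ratio Series back to the numerator column's header.
    ratio = (df.iloc[:, i] / df.iloc[:, 2]).rename(df.columns[i])
    df_new = pd.concat([df["WEIGHT"], ratio], axis=1)
    print(df_new.columns.tolist())
```

<p>Using <code>df.columns[i]</code> keeps the loop generic, so the same code labels the columns I2 and I4 without hard-coding them.</p>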
| <python><pandas> | 2023-10-23 07:54:18 | 5 | 25,175 | mahmood |
77,343,471 | 4,402,282 | Pytorch: CUDA error: invalid configuration argument | <p>I am attempting to run some third-party Pytorch code, and am seeing the following error:</p>
<pre><code> File "D:\Testing\OFFLINE\PSFM\particle-sfm\motion_seg\core\network\traj_oa_depth.py", line 48, in extract_feature
output_feat = self.transformer_model(input_traj, input_traj, \
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\transformer.py", line 146, in forward
output = self.decoder(tgt, memory, tgt_mask=tgt_mask, memory_mask=memory_mask,
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\transformer.py", line 369, in forward
output = mod(output, memory, tgt_mask=tgt_mask,
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\transformer.py", line 717, in forward
x = self.norm2(x + self._mha_block(x, memory, memory_mask, memory_key_padding_mask, memory_is_causal))
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\transformer.py", line 735, in _mha_block
x = self.multihead_attn(x, mem, mem,
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\activation.py", line 1205, in forward
attn_output, attn_output_weights = F.multi_head_attention_forward(
File "C:\Users\B\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py", line 5373, in multi_head_attention_forward
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
</code></pre>
<p>With the notable line being <code> RuntimeError: CUDA error: invalid configuration argument</code></p>
<p>Google throws up a few <a href="https://stackoverflow.com/questions/16125389/invalid-configuration-argument-error-for-the-call-of-cuda-kernel">similar issues</a>, and it seems related to the blocks sent to the GPU, or to using an older Pytorch version. My version is <code>2.0.1+cu117</code>.</p>
<p>My question is, how can I debug this in code that I did not write? What should I be searching for to find the offending line of Python that sets the GPU blocks?</p>
<p>The function that results in the error is here:</p>
<p><a href="https://github.com/bytedance/particle-sfm/blob/main/motion_seg/core/network/traj_oa_depth.py" rel="nofollow noreferrer">https://github.com/bytedance/particle-sfm/blob/main/motion_seg/core/network/traj_oa_depth.py</a></p>
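<p>A first debugging step, suggested by the traceback itself: with <code>CUDA_LAUNCH_BLOCKING=1</code> kernel launches become synchronous, so the Python traceback should point at the actual offending call inside the third-party code rather than at a later, unrelated API call. The variable has to be in the environment before CUDA is initialized, e.g. at the very top of the entry script:</p>

```python
import os

# Must be set before torch initializes CUDA, so do this at the very top of
# the entry script (or export it in the shell before running).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# import torch  # import torch only *after* the variable is set
```

<p>From the resulting traceback, the failing line in the third-party code should identify which tensor shapes reach the kernel launch, without needing to read the whole codebase.</p>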
| <python><pytorch> | 2023-10-23 07:49:24 | 1 | 3,165 | anti |
77,343,422 | 17,128,041 | Connect CloudSQL Postgres Db using Python | <p>So I am a DevOps engineer, please excuse my lack of knowledge in Python development. I am trying to connect to a CloudSQL Postgres 14 DB using Python, so that I can insert and read data from my application. Can someone help me understand why I am getting an error when I try to run the Python script?</p>
<p>Reference used: <a href="https://stackoverflow.com/questions/67406341/python-script-query-to-gcp-postgresql-db-from-local-machine">Stackoverflow reference used for connecting CloudSQL with python</a></p>
<pre><code># Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import os
from google.cloud.sql.connector import connector
# Connect to the database
conn = connector.connect(
os.getenv("angelic-hexagon-307814:us-central1:sqlpg1234"),
"pg8000",
user=os.getenv("test1"),
password=os.getenv("test1"),
db=os.getenv("postgres")
)
# Execute a query
cursor = conn.cursor()
cursor.execute("SELECT * from accounts")
# Fetch the results
result = cursor.fetchall()
# Do something with the results
for row in result:
print(row)
</code></pre>
<p>I have this file named sql.py and I run the command <code>python3 sql.py</code> to run the Python script. But I am getting the error below and am not able to debug the reason behind it. Can someone help?</p>
<pre><code>sidharth@Sidharths-Air python % python3 sql.py
Traceback (most recent call last):
File "/Users/sidharth/python/sql.py", line 8, in <module>
conn = connector.connect(
AttributeError: module 'google.cloud.sql.connector.connector' has no attribute 'connect'. Did you mean: 'Connector'?
</code></pre>
<p>After trying out the solution mentioned by @pensive</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/core.py", line 199, in _make_socket
sock = socket.create_connection((host, port), timeout, source_address)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py", line 845, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/socket.py", line 833, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3292, in raw_connection
return self.pool.connect()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1269, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 169, in _do_get
with util.safe_reraise():
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 167, in _do_get
return self._create_connection()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 616, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/__init__.py", line 111, in connect
return Connection(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/legacy.py", line 443, in __init__
super().__init__(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/core.py", line 312, in __init__
self.channel_binding, self._usock = _make_socket(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/core.py", line 201, in _make_socket
raise InterfaceError(
pg8000.exceptions.InterfaceError: Can't create a connection to host localhost and port 5432 (timeout is None and source_address is None).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/sidharth/python/sql.py", line 15, in <module>
with db.connect() as conn:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3268, in connect
return self._connection_cls(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 147, in __init__
Connection._handle_dbapi_exception_noconnection(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2430, in _handle_dbapi_exception_noconnection
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3292, in raw_connection
return self.pool.connect()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1269, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 169, in _do_get
with util.safe_reraise():
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 167, in _do_get
return self._create_connection()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 616, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/__init__.py", line 111, in connect
return Connection(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/legacy.py", line 443, in __init__
super().__init__(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/core.py", line 312, in __init__
self.channel_binding, self._usock = _make_socket(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pg8000/core.py", line 201, in _make_socket
raise InterfaceError(
sqlalchemy.exc.InterfaceError: (pg8000.exceptions.InterfaceError) Can't create a connection to host localhost and port 5432 (timeout is None and source_address is None).
(Background on this error at: https://sqlalche.me/e/20/rvf5)
</code></pre>
<p>After executing what Jack has mentioned, I am getting this error:</p>
<pre><code>(venv) bash-3.2$ python main.py
['project:region:instance-name']: An error occurred while performing refresh. Scheduling another refresh attempt immediately
Traceback (most recent call last):
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/instance.py", line 389, in _refresh_task
refresh_data = await refresh_task
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/instance.py", line 313, in _perform_refresh
metadata = await metadata_task
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/refresh_utils.py", line 103, in _get_metadata
resp = await client_session.get(url, headers=headers, raise_for_status=True)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/aiohttp/client.py", line 669, in _request
resp.raise_for_status()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1011, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/project/instances/instance-name/connectSettings')
Traceback (most recent call last):
File "/Users/sidharth/PycharmProjects/pythonapp/main.py", line 27, in <module>
with pool.connect() as db_conn:
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3268, in connect
return self._connection_cls(self)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3292, in raw_connection
return self.pool.connect()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 1269, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 169, in _do_get
with util.safe_reraise():
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/impl.py", line 167, in _do_get
return self._create_connection()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 678, in __init__
self.__connect()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 365, in <lambda>
return lambda rec: creator_fn()
File "/Users/sidharth/PycharmProjects/pythonapp/main.py", line 10, in getconn
conn = connector.connect(
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/connector.py", line 163, in connect
return connect_task.result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/connector.py", line 244, in connect_async
instance_data, ip_address = await instance.connect_info(ip_type)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/instance.py", line 446, in connect_info
instance_data = await self._current
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/instance.py", line 389, in _refresh_task
refresh_data = await refresh_task
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/instance.py", line 313, in _perform_refresh
metadata = await metadata_task
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/google/cloud/sql/connector/refresh_utils.py", line 103, in _get_metadata
resp = await client_session.get(url, headers=headers, raise_for_status=True)
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/aiohttp/client.py", line 669, in _request
resp.raise_for_status()
File "/Users/sidharth/PycharmProjects/pythonapp/venv/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1011, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/project/instances/instance-name/connectSettings')
</code></pre>
<p>The link said something like this:</p>
<pre><code>{
"error": {
"code": 401,
"message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
"errors": [
{
"message": "Login Required.",
"domain": "global",
"reason": "required",
"location": "Authorization",
"locationType": "header"
}
],
"status": "UNAUTHENTICATED",
"details": [
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "CREDENTIALS_MISSING",
"domain": "googleapis.com",
"metadata": {
"service": "sqladmin.googleapis.com",
"method": "google.cloud.sql.v1beta4.SqlConnectService.GetConnectSettings"
}
}
]
}
}
</code></pre>
| <python><postgresql><google-cloud-platform><google-cloud-sql> | 2023-10-23 07:40:27 | 2 | 1,599 | sidharth vijayakumar |
77,343,330 | 1,780,761 | Customize a class of a library | <p>I am using a Python library that has one function that I would need to customize.</p>
<p>That is not a problem, since I can make my own edited version like this:</p>
<pre><code>from coolLibrary import originalclass
class myclass(originalclass):
    ...  # customized methods go here
</code></pre>
<p>The problem I am facing now is that the library uses this <code>originalclass</code> in multiple locations. Is there a way to tell the library to use <code>myclass</code> everywhere instead of <code>originalclass</code>?</p>
<p>My goal is to have custom code without editing the original, so the library can still be updated.</p>
| <python> | 2023-10-23 07:24:16 | 2 | 4,211 | sharkyenergy |
77,343,078 | 9,501,624 | How to close a Navbar Dropdownmenu when opening another one? | <p>I added multiple <a href="https://dash-bootstrap-components.opensource.faculty.ai/docs/components/dropdown_menu/" rel="nofollow noreferrer">dropdownmenus</a> to a <a href="https://dash-bootstrap-components.opensource.faculty.ai/docs/components/navbar/" rel="nofollow noreferrer">navbar</a>.
When activating both one after another, both stay open:</p>
<p><a href="https://i.sstatic.net/Cax4Y.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cax4Y.gif" alt="enter image description here" /></a></p>
<p>How can I configure the navbar or the dropdownmenu to automatically close the first dropdownmenu upon opening the second one?</p>
<pre class="lang-py prettyprint-override"><code>import dash
import dash_bootstrap_components as dbc
from dash import html
app = dash.Dash(external_stylesheets=[dbc.themes.DARKLY])
navbar = dbc.NavbarSimple(
children=[
dbc.DropdownMenu(
children=[dbc.DropdownMenuItem("How to")],
nav=True,
in_navbar=True,
label="Here is",
),
dbc.DropdownMenu(
children=[dbc.DropdownMenuItem("auto-close?")],
nav=True,
in_navbar=True,
label="my question",
),
],
)
# embedding the navigation bar
app.layout = html.Div([navbar])
app.run(host='127.0.0.1', port=80)
</code></pre>
| <python><bootstrap-5><plotly-dash> | 2023-10-23 06:40:07 | 1 | 3,844 | Christian Karcher |
77,343,073 | 11,182,916 | Find points that are equidistant from 4 points | <p>This code takes a very long time to find a point. Is there a better way to find a point that is equidistant from 4 given points? A tolerance of 0.5 is fine.</p>
<pre><code>import numpy as np
# Define the four 3D points as (x, y, z) coordinates
point1 = np.array([1, 2, 3])
point2 = np.array([3, 4, 5])
point3 = np.array([5, 6, 7])
point4 = np.array([7, 8, 9])
# Set a tolerance for distance equality
tolerance = 0.5
# Calculate the initial average point
average_point = np.mean([point1, point2, point3, point4], axis=0)
while True:
distances = np.linalg.norm(average_point - np.array([point1, point2, point3, point4]), axis=1)
max_distance = np.max(distances)
min_distance = np.min(distances)
if max_distance - min_distance <= tolerance:
break
# Update the average point by moving it closer to the farthest point
farthest_point = np.argmax(distances)
average_point = average_point + 0.5 * (point1 - average_point)
print("Average point with relatively equal distances:", average_point)
</code></pre>
| <python><numpy> | 2023-10-23 06:38:05 | 4 | 405 | Binh Thien |
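For four points in general position (not all on one plane), an exact equidistant point — the circumcenter of the tetrahedron they form — can be found directly by solving a small linear system instead of iterating: equating squared distances to the first point gives `2*(p_i - p_0) . x = |p_i|^2 - |p_0|^2`. A sketch (note the sample points in the question happen to be collinear, so for them only a least-squares answer exists):

```python
import numpy as np

def equidistant_point(points):
    """Point equidistant from all given points (circumcenter).

    Setting |x - p_i|^2 = |x - p_0|^2 and expanding gives the linear
    system 2*(p_i - p_0) . x = |p_i|^2 - |p_0|^2 for i = 1..n-1.
    """
    pts = np.asarray(points, dtype=float)
    A = 2.0 * (pts[1:] - pts[0])
    b = np.sum(pts[1:] ** 2, axis=1) - np.sum(pts[0] ** 2)
    # lstsq also copes with degenerate (e.g. collinear) configurations,
    # returning the best least-squares answer instead of raising.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# The question's sample points are collinear, so no exact equidistant
# point exists for them; this prints the least-squares one.
points = [(1, 2, 3), (3, 4, 5), (5, 6, 7), (7, 8, 9)]
print(equidistant_point(points))
```

For a non-degenerate input this is exact and runs in constant time, with no tolerance loop needed.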
77,343,000 | 1,159,290 | can python multiprocessing do semaphores between "unrelated" processes? | <p>I am looking at the multiprocessing Python module to protect some hardware (memory-mapped) resources that require atomic access.
(The hardware is an FPGA implementing a finite state machine, and that machine should be returned to idle before any other access can start...)</p>
<p>The software that wants to access this resource runs as different Linux processes, written in Python, some of them using threads (and hence the threading module).
These processes are not spawned from a common Python parent: they are typically run as daemons and also run from the command-line interface.</p>
<p>In the C world I would typically use a Linux named semaphore to guarantee atomicity towards my HW. From the multiprocessing module documentation I don't fully understand whether the equivalent functionality can be achieved (and how), or whether multiprocessing only handles locks for processes spawned (forked) from a single common program that creates the lock before forking.</p>
<p>So the question is really whether the multiprocessing module can be used to handle synchronization between unrelated processes (i.e. processes without a common Python ancestor), as a Linux named semaphore would. If so, how; if not, what is the best alternative?</p>
| <python><multiprocessing> | 2023-10-23 06:21:56 | 2 | 1,003 | user1159290 |
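multiprocessing's synchronization primitives are generally designed to be passed down to child processes, so fully unrelated processes need a different mechanism. One standard-library option on Linux is an advisory file lock via `fcntl.flock` on a well-known path (the path below is an assumption; any path all processes can open works). This is a sketch of a mutex, not a counting semaphore:

```python
import fcntl
import os
from contextlib import contextmanager

# Well-known path shared by all participating processes (an assumption;
# pick any path that every process can open).
LOCK_PATH = "/tmp/fpga_resource.lock"

@contextmanager
def named_lock(path=LOCK_PATH):
    """Mutual exclusion between unrelated processes via an advisory
    file lock. Any process that opens the same path takes part, and
    the kernel releases the lock automatically if the holder dies."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o666)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until the lock is free
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

if __name__ == "__main__":
    with named_lock():
        print("exclusive access to the hardware resource")
```

If actual counting-semaphore semantics are required, the third-party `posix_ipc` package wraps real POSIX named semaphores (`sem_open` and friends), which is the direct analogue of the C approach.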
77,342,992 | 11,855,904 | How can I order a Django query by a field of a related model? | <p>I have these two models, <code>ChatRoom</code> and <code>ChatMessage</code>, which are related. My goal is to order the <code>ChatRoom</code> queries in a way that the room with the most recent message, as determined by the <code>sent_timestamp</code>, appears at the top of the results.</p>
<p>While I've found some similar questions on this platform, none of them seem to precisely address my specific issue.</p>
<p>I'm aware of the annotate method, which I can use to achieve this outside of the model's <code>Meta</code> class, but I'm interested in setting the ordering directly in the <code>Meta</code> class of the <code>ChatRoom</code> model.</p>
<p>Is there a way to achieve this ordering within the Meta class, or is there an alternative approach I should consider?</p>
<pre class="lang-py prettyprint-override"><code>class ChatRoom(models.Model):
members = models.ManyToManyField(User, related_name="chat_rooms")
class Meta:
# Intended for this to order the chat rooms so that the room that
# has the most recent chat message come first, but it's not working
ordering = ("-messages__sent_timestamp",)
class ChatMessage(models.Model):
room = models.ForeignKey(ChatRoom, models.CASCADE, related_name="messages")
sender = models.ForeignKey(User, models.CASCADE)
text = models.TextField(validators=[MaxLengthValidator(280)])
sent_timestamp = models.DateTimeField(auto_now_add=True)
delivery_timestamp = models.DateTimeField(null=True, blank=True)
read_timestamp = models.DateTimeField(null=True, blank=True)
class Meta:
ordering = ("-sent_timestamp",)
</code></pre>
| <python><django><django-models><django-queryset> | 2023-10-23 06:20:46 | 3 | 392 | cy23 |
77,342,988 | 988,279 | Sort a list of dictionaries...(Itemgetter) | <p>I want to group a list of dictionaries by a key. The following code is working.</p>
<pre><code>from itertools import groupby
from operator import itemgetter
user_dict = {123: [
{'the_id': 1, 'title': 'Title1'},
{'the_id': 1, 'title': 'Title1'},
{'the_id': 3, 'title': 'Title3'},
{'the_id': 2, 'title': 'Title2'}
]}
for user in user_dict:
for id, value in groupby(user_dict[user], key=itemgetter('the_id')):
titles = []
for k in value:
titles.append(k['title'])
print(titles)
</code></pre>
<p>It results:</p>
<pre><code>['Title1', 'Title1']
['Title3']
['Title2']
</code></pre>
<p>When I change the order in the dict, the "grouping" doesn't work anymore.</p>
<pre><code>from itertools import groupby
from operator import itemgetter
user_dict = {123: [
{'the_id': 1, 'title': 'Title1'},
{'the_id': 3, 'title': 'Title3'},
{'the_id': 2, 'title': 'Title2'},
{'the_id': 1, 'title': 'Title1'}
]}
for user in user_dict:
for id, value in groupby(user_dict[user], key=itemgetter('the_id')):
titles = []
for k in value:
titles.append(k['title'])
print(titles)
</code></pre>
<p>This results to:</p>
<pre><code>['Title1']
['Title3']
['Title2']
['Title1']
</code></pre>
<p>Can someone explain this behavior to me? What is the correct way to write this code?
Thanks.</p>
| <python> | 2023-10-23 06:19:45 | 3 | 522 | saromba |
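The behavior in the question follows from how `itertools.groupby` works: it only merges *consecutive* items with equal keys, so the input must be sorted by the same key first. A runnable sketch of the corrected loop:

```python
from itertools import groupby
from operator import itemgetter

# itertools.groupby only merges *consecutive* items whose keys are
# equal, so the list has to be sorted by the same key first.
user_dict = {123: [
    {'the_id': 1, 'title': 'Title1'},
    {'the_id': 3, 'title': 'Title3'},
    {'the_id': 2, 'title': 'Title2'},
    {'the_id': 1, 'title': 'Title1'},
]}

key = itemgetter('the_id')
for user, items in user_dict.items():
    for the_id, group in groupby(sorted(items, key=key), key=key):
        print([d['title'] for d in group])
# prints:
# ['Title1', 'Title1']
# ['Title2']
# ['Title3']
```

The first version in the question "worked" only because the equal-keyed items already happened to sit next to each other.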
77,342,624 | 5,104,777 | How to log arbitrary request headers in Flask? | <p>How do you log headers such as <code>X-Request-ID</code> (or <code>X-Amzn-Trace-Id</code>) in Flask v3?</p>
<p>(Apparently in version 1 Flask needed an extension to provide this functionality but <a href="https://github.com/Workable/flask-log-request-id" rel="nofollow noreferrer">that project</a> was orphaned and is no longer compatible.)</p>
| <python><http><flask><logging> | 2023-10-23 04:21:38 | 1 | 5,062 | benjimin |
77,342,593 | 395,857 | How can I call with Postman a RESTful API deployed with Gradio? | <p>I'm following the basic Gradio interface from the <a href="https://www.gradio.app/docs/interface" rel="nofollow noreferrer">Gradio documentation</a>:</p>
<pre><code>import gradio as gr
def greet(name):
return "Hello " + name + "!"
demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()
</code></pre>
<p>The interface is:</p>
<p><a href="https://i.sstatic.net/D4DBD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D4DBD.png" alt="enter image description here" /></a></p>
<p>I believe this deploys a RESTful API, since it says "Use via API" in the footer of the interface:</p>
<p><a href="https://i.sstatic.net/Qeuz2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qeuz2.png" alt="enter image description here" /></a></p>
<p>How can I call that RESTful API with Postman?</p>
<p>I tried:</p>
<p><a href="https://i.sstatic.net/RkNq7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RkNq7.png" alt="enter image description here" /></a></p>
<p>but as the screenshot shows, I'm getting the error message:</p>
<pre><code>{"detail":"Not Found"}
</code></pre>
<p>The page that the "Use via API" points to indicate:</p>
<p><a href="https://i.sstatic.net/RVKEv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RVKEv.png" alt="enter image description here" /></a></p>
<p>but I don't want to use the Gradio client.</p>
<hr />
<p>Update: using the endpoint <code>http://localhost:7861/api/predict</code> seems to work better, but I am still trying to figure out what the name of the key is:</p>
<p><a href="https://i.sstatic.net/XkUpu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XkUpu.png" alt="enter image description here" /></a></p>
<p>I tried to add a body but that doesn't work either:</p>
<p><a href="https://i.sstatic.net/Fcwf2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fcwf2.png" alt="enter image description here" /></a></p>
| <python><rest><postman><gradio> | 2023-10-23 04:08:54 | 2 | 84,585 | Franck Dernoncourt |
77,342,177 | 5,924,264 | How to locate object that cannot be pickled inside complex object | <p>Related to my earlier question: <a href="https://stackoverflow.com/questions/77341161/how-to-i-change-import-statements-in-a-pickle-file">How to I change import statements in a pickle file?</a></p>
<p>I am trying to change an import statement inside a serialized file. It ended up causing some unexpected issues when trying to re-serialize, in particular:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 9, in <module>
CP.dump(data, file)
File "/path/to/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle Cursor objects
</code></pre>
<p>(Yes, unfortunately, I'm constrained to py2 here)</p>
<p>The serialized object is a class instance. I removed the cursor that I thought was causing this error, but after removing it, I still observe this error, so that tells me there's another cursor elsewhere. The problem is this class instance is quite large and is composed of numerous other class instances recursively. I don't want to dig through all of them to find the cursor, so I was wondering if there's a quick way to determine where the cursor is coming from?</p>
<p>Also, it doesn't say what kind of Cursor it is, but I think it's probably a sqlite3 cursor.</p>
| <python><serialization><cursor><pickle> | 2023-10-23 00:48:05 | 0 | 2,502 | roulette01 |
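One way to locate the offender without digging by hand is to recursively probe each attribute with `pickle.dumps` and report the paths that fail. The sketch below is written for Python 3 for brevity, but the approach carries over to Python 2; `Inner`/`Outer` are illustrative stand-ins, with a real `sqlite3` cursor as the culprit:

```python
import pickle
import sqlite3

def find_unpicklable(obj, path="root", seen=None, found=None):
    """Recursively collect attribute paths whose values fail to pickle.

    Walks __dict__ plus dict/list/tuple containers so the offending
    object can be located inside a large nested instance.
    """
    if found is None:
        found = []
    if seen is None:
        seen = set()
    if id(obj) in seen:          # guard against reference cycles
        return found
    seen.add(id(obj))
    try:
        pickle.dumps(obj)
        return found             # this whole subtree pickles fine
    except Exception:
        pass
    if hasattr(obj, "__dict__") and obj.__dict__:
        children = list(obj.__dict__.items())
    elif isinstance(obj, dict):
        children = list(obj.items())
    elif isinstance(obj, (list, tuple)):
        children = list(enumerate(obj))
    else:
        children = []
    before = len(found)
    for name, child in children:
        find_unpicklable(child, "{}.{}".format(path, name), seen, found)
    if len(found) == before:
        found.append(path)       # no child is to blame: obj itself is
    return found

# Demo: a nested instance hiding a real sqlite3 cursor.
class Inner(object):
    def __init__(self):
        self.cursor = sqlite3.connect(":memory:").cursor()

class Outer(object):
    def __init__(self):
        self.name = "ok"
        self.inner = Inner()

for p in find_unpicklable(Outer()):
    print(p)
```

This won't follow every reference kind (e.g. objects reachable only through `__slots__` or closures), but it narrows down the common case of an attribute buried several instances deep.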
77,342,049 | 7,306,999 | Make tkinter widgets expand to fill up horizontal space | <p>I have the following code that generates a GUI window with <code>tkinter</code>:</p>
<pre><code>from tkinter import ttk
import tkinter as tk
class App(tk.Tk):
def __init__(self):
super().__init__()
self._create_widgets()
def _create_widgets(self):
self.columnconfigure(0, weight=1)
self.columnconfigure(1, weight=4)
text_1_label = ttk.Button(self, text="Browse text file 1 ...")
text_1_label.grid(column=0, row=0, sticky=tk.W, padx=5, pady=5)
text_1_entry = ttk.Entry(self)
text_1_entry.grid(column=1, row=0, sticky=tk.EW, padx=5, pady=5)
text_2_label = ttk.Button(self, text="Browse text file 2 ...")
text_2_label.grid(column=0, row=1, sticky=tk.W, padx=5, pady=5)
text_2_entry = ttk.Entry(self)
text_2_entry.grid(column=1, row=1, sticky=tk.EW, padx=5, pady=5)
lf = ttk.LabelFrame(self, text="Thresholds")
lf.grid(column=0, row=2, columnspan=2, sticky=tk.EW, padx=5, pady=5)
treshold_1_label = ttk.Label(lf, text="Accelerations")
treshold_1_label.grid(column=0, row=0, sticky=tk.W, padx=5, pady=5)
treshold_1_entry = ttk.Entry(lf)
treshold_1_entry.grid(column=1, row=0, sticky=tk.EW, padx=5, pady=5)
if __name__ == "__main__":
app = App()
app.geometry("400x400")
app.mainloop()
</code></pre>
<p>which when run generates the following:</p>
<p><a href="https://i.sstatic.net/Q0ZsL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q0ZsL.png" alt="tkinter window" /></a></p>
<p>However I would like all Entry widgets to automatically expand to fill up all available horizontal space. At first I thought that the <code>sticky=tk.EW</code> parameter would accomplish this, but apparently this is not the case.</p>
<p>Would anyone here be able to point me in the right direction?</p>
| <python><tkinter> | 2023-10-22 23:41:21 | 1 | 8,674 | Xukrao |
77,341,790 | 2,386,605 | Create multi-column Postgresql trgm index and filter based on that in sqlalchemy | <p>Assume we install the <code>pg_trgm</code> extension</p>
<pre><code>create extension pg_trgm;
</code></pre>
<p>And then create an index on the <code>user</code> table</p>
<pre><code>CREATE INDEX user_search_idx
ON user USING gin (first_name gin_trgm_ops, last_name gin_trgm_ops);
</code></pre>
<p>How would I create such an index in <code>sqlalchemy</code> (This is similar to <a href="https://stackoverflow.com/a/36389792/2386605">this answer in StackOverflow</a>) based on two columns?</p>
<p>Furthermore, assume I have a string like <code>word1 word2 word3</code>. How do I query the <code>user</code> table in sqlalchemy such that <code>word1</code>, <code>word2</code>, and <code>word3</code> are all contained in either <code>first_name</code> or <code>last_name</code>, either case-insensitively (perhaps something like <code>ilike</code>) or by imperfect similarity?</p>
| <python><postgresql><indexing><sqlalchemy><full-text-search> | 2023-10-22 21:38:43 | 0 | 879 | tobias |
77,341,666 | 4,557,607 | How to convert gguf to bin? | <p>Trying to follow the LangChain documentation about Llama.cpp but I do not understand how to obtain the <code>.bin</code> file from a <code>.gguf</code>, i.e.: I downloaded <code>llama-2-7b-chat.Q8_0.gguf</code>.</p>
<p>Note that the docs only show how to convert the old format.</p>
<p>Here is the Llama.cpp doc link: <a href="https://python.langchain.com/docs/integrations/llms/llamacpp" rel="nofollow noreferrer">https://python.langchain.com/docs/integrations/llms/llamacpp</a></p>
<p>To provide some more context. I try one of the examples in the llama-cpp-python repo and I get this error:</p>
<blockquote>
<p>argument 2: TypeError: expected llama_model_params instance instead of llama_context_params</p>
</blockquote>
<p>Here is the <code>Chat.py</code> from the examples:</p>
<pre><code>#!/bin/python
import sys, os, datetime
from common import GptParams
from low_level_api_chat_cpp import LLaMAInteract
def env_or_def(env, default):
if (env in os.environ):
return os.environ[env]
return default
AI_NAME = env_or_def("AI_NAME", "ChatLLaMa")
MODEL = env_or_def("MODEL", "./models/llama-2-7b-chat.Q8_0.gguf.bin")
USER_NAME = env_or_def("USER_NAME", "USER")
N_PREDICTS = int(env_or_def("N_PREDICTS", "2048"))
N_THREAD = int(env_or_def("N_THREAD", "8"))
today = datetime.datetime.today()
DATE_YEAR=today.strftime("%Y")
DATE_TIME=today.strftime("%H:%M")
prompt=f"""Text transcript of a never ending dialog, where {USER_NAME} interacts with an AI assistant named {AI_NAME}.
{AI_NAME} is helpful, kind, honest, friendly, good at writing and never fails to answer {USER_NAME}'s requests immediately and with details and precision.
There are no annotations like (30 seconds passed...) or (to himself), just what {USER_NAME} and {AI_NAME} say aloud to each other.
The dialog lasts for years, the entirety of it is shared below. It's 10000 pages long.
The transcript only includes text, it does not include markup like HTML and Markdown.
{USER_NAME}: Hello, {AI_NAME}!
{AI_NAME}: Hello {USER_NAME}! How may I help you today?
{USER_NAME}: What year is it?
{AI_NAME}: We are in {DATE_YEAR}.
{USER_NAME}: Please tell me the largest city in Europe.
{AI_NAME}: The largest city in Europe is Moscow, the capital of Russia.
{USER_NAME}: What can you tell me about Moscow?
{AI_NAME}: Moscow, on the Moskva River in western Russia, is the nation's cosmopolitan capital. In its historic core is the Kremlin, a complex that's home to the president and tsarist treasures in the Armoury. Outside its walls is Red Square, Russia’s symbolic center.
{USER_NAME}: What is a cat?
{AI_NAME}: A cat is a domestic species of small carnivorous mammal. It is the only domesticated species in the family Felidae.
{USER_NAME}: How do I pass command line arguments to a Node.js program?
{AI_NAME}: The arguments are stored in process.argv.
argv[0] is the path to the Node. js executable.
argv[1] is the path to the script file.
argv[2] is the first argument passed to the script.
argv[3] is the second argument passed to the script and so on.
{USER_NAME}: Name a color.
{AI_NAME}: Blue.
{USER_NAME}: What time is it?
{AI_NAME}: It is {DATE_TIME}.
{USER_NAME}:""" + " ".join(sys.argv[1:])
print("Loading model...")
params = GptParams(
n_ctx=2048,
temp=0.7,
top_k=40,
top_p=0.5,
repeat_last_n=256,
n_batch=1024,
repeat_penalty=1.17647,
model=MODEL,
n_threads=N_THREAD,
n_predict=N_PREDICTS,
use_color=True,
interactive=True,
antiprompt=[f"{USER_NAME}:"],
input_prefix=" ",
input_suffix=f"{AI_NAME}:",
prompt=prompt,
)
with LLaMAInteract(params) as m:
m.interact()
</code></pre>
<p>After reading a little more in the Git issue, it's now even more confusing what needs to be done to get one of these quantized models running with <code>llama.cpp</code>.</p>
| <python><langchain><large-language-model><py-langchain> | 2023-10-22 20:52:51 | 0 | 1,020 | Edv Beq |
77,341,580 | 3,433,875 | Convert an animation into an in-memory file or a function | <p>I am creating a matplotlib animation to be displayed in a Flask app based on user input.
The matplotlib script is similar to this:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
# Horizontal bar plot with gaps
fig, ax = plt.subplots()
ax.get_yaxis().set_visible(False)
ax.spines[['top', 'bottom','left','right']].set_visible(False)
y2=[20,20,20,20,20,20,20]
y3=np.array(y2) #convert to array wont work with list
x2 =[20,15,14,13, 12,11,10]
x3=np.array(x2)
year =["2014","2015","2016","2017","2018","2019","2020"]
yr2 =np.array(year)
def animate(i):
ax.clear()
ax.set_ylim(16, 24)
ax.barh(20, 60, 4 )
ax.plot(60, 18, marker=6, markersize=18, clip_on=False,)
ax.annotate(r"$\bf" + str(2013) +"$" + f" ({60})", (60 , 18),xytext=(0, -25), size= 8, textcoords='offset points', ha='center', va='bottom')
ax.barh(y3[i], x3[i], 4,color='c')
ax.plot(x3[i], y3[i]+2, color = 'c', marker=7, markersize=18, clip_on=False,)
ax.annotate(r"$\bf" + str(yr2[i]) +"$" + f" ({x3[i]})", (x3[i] , y3[i]+2),xytext=(0, 15), size= 8, color = 'c', textcoords='offset points', ha='center', va='bottom')
ani = animation.FuncAnimation(fig, animate, repeat=False,
frames=len(x3), interval=100000)
# To save the animation using Pillow as a gif
writer = animation.PillowWriter(fps=1,
metadata=dict(artist='Me'),
bitrate=1800)
ani.save('scatter.gif', writer=writer)
</code></pre>
<p><strong>Is it possible to save the GIF into an in-memory file rather than saving it to disk?</strong></p>
| <python><matplotlib><flask><matplotlib-animation><bytesio> | 2023-10-22 20:25:41 | 1 | 363 | ruthpozuelo |
77,341,479 | 13,526,701 | Open and execute command on visual terminal on startup raspberry pi | <p>I have a Python script that includes a web server, and I want to run it in a terminal when the Raspberry Pi boots. Before that I also have to use iptables to do some routing, depending on the configuration. All of that already works from Python code, and it can also be run on startup. In case an error occurs, I want to be able to actually see the terminal and interact with it, as well as close the program easily. This is pretty much necessary because people with next to zero computer experience will have to use this.
I'd accept any kind of solution, no matter how ugly, as long as it works.
I've tried things like crontab, and even xautomation to simulate keystrokes as if from a keyboard, but I couldn't get it to do anything at all. This can't be that hard, can it?</p>
| <python><linux><raspbian> | 2023-10-22 19:51:51 | 1 | 419 | NoBlockhit |