| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,838,719
| 12,724,648
|
Get driver to update after page click in selenium
|
<p>I'm trying to build a simple web scraper. I want it to work by inputting a value into a webpage, pressing Enter, and then scraping the new page after it loads.</p>
<p>So far, initially loading the webpage, inputting the value, and pressing Enter all work, but the driver does not seem to update when the new page loads, and as such I can't scrape the new page for information.</p>
<p>Does anyone know how to get this functionality to work?</p>
<p>Code is below:</p>
<pre><code>import selenium.webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
test_wage = float(53109.22)
options = selenium.webdriver.FirefoxOptions()
options.add_argument("--headless")
driver = selenium.webdriver.Firefox(options=options)
driver.get('https://www.thesalarycalculator.co.uk/salary.php')
takehome_form = driver.find_element(By.CLASS_NAME, "hero__input")
takehome_form.send_keys(test_wage)
takehome_form.send_keys(Keys.RETURN)
</code></pre>
<p>The code above works fine, it is the following where I have issue:</p>
<pre><code>result = driver.find_element(By.XPATH, "/html/body/section[1]/div/table/tbody/tr[2]/td[6]")
</code></pre>
<p>Which produces the following error:</p>
<pre><code>NoSuchElementException: Unable to locate element: /html/body/section[1]/div/table/tbody/tr[2]/td[6]; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
</code></pre>
<p>Again, I think this is because the original webpage does not have this information; the new page after pressing Enter on the form does have it, but the driver does not update and still thinks the original webpage is open.</p>
<p>Does anyone know how to fix this?</p>
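<p>A common fix (not from the post; a standard Selenium pattern) is an explicit wait: after <code>send_keys(Keys.RETURN)</code>, call <code>WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, ...)))</code> from <code>selenium.webdriver.support</code>, so the driver polls until the results page actually exposes the element. The polling idea behind <code>WebDriverWait</code> can be sketched without a browser; <code>find</code> below is a hypothetical stand-in for <code>driver.find_element</code>:</p>

```python
import time

def wait_for(find, timeout: float = 10.0, poll: float = 0.5):
    """Poll `find` until it returns a value, like Selenium's WebDriverWait."""
    deadline = time.monotonic() + timeout
    while True:
        result = find()
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("element did not appear in time")
        time.sleep(poll)

# Simulated page: the "element" only exists from the third poll onwards.
attempts = {"n": 0}
def find():
    attempts["n"] += 1
    return "td-cell" if attempts["n"] >= 3 else None

print(wait_for(find, timeout=5, poll=0.01))  # td-cell
```

<p>In the original script, the equivalent explicit wait would replace the bare <code>driver.find_element(...)</code> call that currently runs before the results page has rendered.</p>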
|
<python><selenium-webdriver><selenium-firefoxdriver>
|
2024-08-06 11:16:41
| 2
| 325
|
KGreen
|
78,838,529
| 1,942,868
|
Nested dict becomes an array when sending it to an API in Django
|
<p>I sent the dict data with this in Python:</p>
<pre><code>data = {
    "test": {
        "A": "Adata",
        "B": "Bdata",
        "C": "Cdata"
    }
}
response = self.client.post("/api_patch/", data, follow=True)
</code></pre>
<p>then receive this as:</p>
<pre><code>@api_view(["POST","GET"])
def api_patch(request):
    print("request.data", request.data)
    print("request.data type", type(request.data))
<p>However this shows,</p>
<pre><code>request.data <QueryDict: {'test': ['A', 'B', 'C']}>
request.data type <class 'django.http.request.QueryDict'>
</code></pre>
<p>Somehow the nested keys A, B, C are treated as an array.</p>
<p>Where am I wrong?</p>
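<p>A likely cause (an inference from the symptom, worth verifying against the test client in use): the Django test client form-encodes <code>data</code> by default, and form encoding has no notion of nesting, so the inner dict is iterated, which yields only its keys. The standard library reproduces the same flattening, and sending JSON preserves the nesting:</p>

```python
import json
from urllib.parse import urlencode

data = {"test": {"A": "Adata", "B": "Bdata", "C": "Cdata"}}

# Form encoding iterates the nested dict, and iterating a dict yields its keys:
flattened = urlencode(data, doseq=True)
print(flattened)  # test=A&test=B&test=C  -> arrives as {'test': ['A', 'B', 'C']}

# JSON keeps the structure; with the Django test client this would be e.g.
# self.client.post("/api_patch/", json.dumps(data), content_type="application/json")
payload = json.dumps(data)
print(payload)
```

<p>With DRF's <code>APIClient</code>, passing <code>format="json"</code> achieves the same thing.</p>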
|
<python><django>
|
2024-08-06 10:29:34
| 1
| 12,599
|
whitebear
|
78,838,421
| 4,892,210
|
Ollama with RAG for local utilization to chat with pdf
|
<p>I am trying to build an Ollama-based RAG pipeline for chatting with a PDF on my local machine.
I followed this GitHub repo: <a href="https://github.com/tonykipkemboi/ollama_pdf_rag/tree/main" rel="nofollow noreferrer">https://github.com/tonykipkemboi/ollama_pdf_rag/tree/main</a>
The issue is that when I run the code, there is no error, but it stalls after the embedding step. I have attached all possible logs along with my ollama list.</p>
<pre><code>import logging
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.chat_models import ChatOllama
from langchain_core.runnables import RunnablePassthrough
from langchain.retrievers.multi_query import MultiQueryRetriever

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

local_path = "D:/KnowledgeSplice/ollama_pdf_rag-main/WEF_The_Global_Cooperation_Barometer_2024.pdf"

try:
    # Local PDF file uploads
    if local_path:
        loader = UnstructuredPDFLoader(file_path=local_path)
        data = loader.load()
        logging.info("Loading of PDF is done")
    else:
        logging.error("Upload a PDF file")
        raise ValueError("No PDF file uploaded")

    # Preview first page
    logging.info(f"First page content preview: {data[0].page_content[:500]}...")

    # Split and chunk
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100)
    logging.info("Text splitter created")
    chunks = text_splitter.split_documents(data)
    logging.info(f"Created {len(chunks)} chunks")

    # Add to vector database
    logging.info("Creating Vector db")
    try:
        embedding_model = OllamaEmbeddings(model="nomic-embed-text", show_progress=True)
        print("Embedding", embedding_model)
        vector_db = Chroma.from_documents(
            documents=chunks,
            embedding=embedding_model,
            collection_name="local-rag"
        )
        logging.info("Local db created successfully")
    except Exception as e:
        logging.error(f"Error creating vector db: {e}")
        raise  # Re-raise the exception to stop further execution

    # Verify vector database creation
    if vector_db:
        logging.info("Vector db verification successful")
    else:
        logging.error("Vector db creation failed")
        raise ValueError("Vector db creation failed")

    # LLM from Ollama
    local_model = "llama3"
    llm = ChatOllama(model=local_model)
    logging.info("LLM model loaded")

    QUERY_PROMPT = PromptTemplate(
        input_variables=["question"],
        template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from
a vector database. By generating multiple perspectives on the user question, your
goal is to help the user overcome some of the limitations of the distance-based
similarity search. Provide these alternative questions separated by newlines.
Original question: {question}""",
    )
    logging.info("Query prompt created")

    retriever = MultiQueryRetriever.from_llm(
        vector_db.as_retriever(),
        llm,
        prompt=QUERY_PROMPT
    )
    logging.info("Retriever created")

    # RAG prompt
    template = """Answer the question based ONLY on the following context:
{context}
Question: {question}
"""
    prompt = ChatPromptTemplate.from_template(template)
    logging.info("RAG prompt created")

    chain = (
        {"context": retriever, "question": RunnablePassthrough()}
        | prompt
        | llm
        | StrOutputParser()
    )
    logging.info("Chain created")

    response = chain.invoke("What are the 5 pillars of global cooperation?")
    logging.info("Chain invoked")
    logging.info(f"Response: {response}")
except Exception as e:
    logging.error(f"An error occurred: {e}")
</code></pre>
<p>The code shows no error but does nothing after the embedding step.</p>
<p>Output:</p>
<pre><code>2024-08-06 14:59:59,858 - INFO - Text splitter created
2024-08-06 14:59:59,861 - INFO - Created 11 chunks
2024-08-06 14:59:59,861 - INFO - Creating Vector db
Embedding base_url='http://localhost:11434' model='nomic-embed-text' embed_instruction='passage: ' query_instruction='query: ' mirostat=None mirostat_eta=None mirostat_tau=None num_ctx=None num_gpu=None num_thread=None repeat_last_n=None repeat_penalty=None temperature=None stop=None tfs_z=None top_k=None top_p=None show_progress=True headers=None model_kwargs=None
2024-08-06 15:00:00,662 - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
OllamaEmbeddings: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:27<00:00, 2.46s/it]
</code></pre>
<p>Below is my <code>ollama list</code>:</p>
<pre><code>NAME ID SIZE MODIFIED
nomic-embed-text:latest 0a109f422b47 274 MB 3 hours ago
mistral:latest f974a74358d6 4.1 GB 17 hours ago
phi3:latest d184c916657e 2.2 GB 2 weeks ago
llama3:latest 365c0bd3c000 4.7 GB 2 weeks ago
</code></pre>
<p>How to resolve this issue?</p>
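<p>The logs end right after the embedding progress bar, so it is unclear which later stage blocks. One cheap diagnostic (an assumption about the cause, not a confirmed fix) is to verify the Ollama server is reachable before building the retriever, since <code>ChatOllama</code> and <code>MultiQueryRetriever</code> will otherwise sit waiting on a connection:</p>

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ollama listens on 11434 by default; fail loudly instead of hanging silently.
if not port_open("127.0.0.1", 11434):
    print("Ollama is not reachable on 127.0.0.1:11434 - start `ollama serve` first")
```

<p>If the probe passes, the next step would be timing each stage (retriever construction, <code>chain.invoke</code>) separately to pinpoint which one stalls.</p>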
|
<python><large-language-model><python-embedding><ollama><rag>
|
2024-08-06 10:02:03
| 1
| 321
|
NIrbhay Mathur
|
78,838,219
| 7,699,037
|
Regex pattern match until the next occurrence of the pattern
|
<p>I'm trying to split a string using a regex pattern, including everything in between (which could also be nothing) up to the next occurrence of the pattern. However, the "everything in between" is matched into the next occurrence. For example, I have the following string:</p>
<pre><code>content = """
/path/to/source.cpp:8:18: error: 'FOO' was not declared in this scope
    8 |   std::cout << FOO << std::endl;
      |                  ^~~
/path/to/source.cpp:9:18: error: 'BAR' was not declared in this scope; did you mean 'EBADR'?
    9 |   std::cout << BAR << std::endl;
      |                  ^~~
      |                  EBADR
"""
</code></pre>
<p>So far I have come up with the following regex pattern, containing a lookahead assertion to match the next time a <code>path:line:col</code> pattern is found in the string:</p>
<pre><code>re_comp = re.compile((
r"^((?P<path>.*?):(?P<line>[0-9]*):(?P<column>[0-9]*): )?"
r"(?P<type>error|warning): "
r".+?[\r\n]+(?=^.*:[0-9]*:[0-9]*|\Z)"
),
re.MULTILINE | re.DOTALL)
</code></pre>
<p>The problem is that with the current regex pattern, the message is assigned to the path group of the next finding, when I try to iterate over the findings:</p>
<pre><code>for m in re_comp.finditer(content):
    print(m.group(0))
</code></pre>
<p>will print</p>
<pre><code>/path/to/source.cpp:8:18: error: 'FOO' was not declared in this scope
</code></pre>
<p>for the first iteration and</p>
<pre><code> 8 | std::cout << FOO << std::endl;
| ^~~
/path/to/source.cpp:9:18: error: 'BAR' was not declared in this scope; did you mean 'EBADR'?
</code></pre>
<p>for the second iteration.</p>
<p>Instead I would want to have one match finding each time a new <code>path:line:col</code> pattern starts. Such as</p>
<pre><code>--------------------1st finding -----------------------------------
/path/to/source.cpp:8:18: error: 'FOO' was not declared in this scope
8 | std::cout << FOO << std::endl;
| ^~~
--------------------2nd finding -----------------------------------
/path/to/source.cpp:9:18: error: 'BAR' was not declared in this scope; did you mean 'EBADR'?
9 | std::cout << BAR << std::endl;
| ^~~
| EBADR
</code></pre>
<p>Can you help me fix my regex pattern?</p>
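<p>The early cut-off comes from the lookahead: since <code>[0-9]*</code> may match empty, <code>^.*:[0-9]*:[0-9]*</code> succeeds on any line containing <code>::</code> (such as <code>std::cout</code>), so the message is truncated there and the leftover lines are swallowed by the next match's lazy <code>path</code> group. Requiring at least one digit and a colon-free path fixes both; a sketch, assuming GCC-style <code>path:line:col:</code> prefixes without embedded colons:</p>

```python
import re

content = """
/path/to/source.cpp:8:18: error: 'FOO' was not declared in this scope
    8 |   std::cout << FOO << std::endl;
      |                  ^~~
/path/to/source.cpp:9:18: error: 'BAR' was not declared in this scope; did you mean 'EBADR'?
    9 |   std::cout << BAR << std::endl;
      |                  ^~~
      |                  EBADR
"""

re_comp = re.compile(
    r"^(?P<path>[^\s:]+):(?P<line>[0-9]+):(?P<column>[0-9]+): "
    r"(?P<type>error|warning): "
    r"(?P<message>.*?)"
    r"(?=^[^\s:]+:[0-9]+:[0-9]+: |\Z)",  # next diagnostic header, or end of input
    re.MULTILINE | re.DOTALL,
)

findings = [m.group(0) for m in re_comp.finditer(content)]
print(len(findings))  # 2
print(findings[0].splitlines()[0])
```

<p>Snippet lines like <code>    8 | ...</code> start with whitespace or <code>|</code>, so <code>^[^\s:]+</code> can never mistake them for a new diagnostic, and each match runs until the next real <code>path:line:col:</code> header.</p>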
|
<python><regex>
|
2024-08-06 09:15:41
| 1
| 2,908
|
Mike van Dyke
|
78,838,209
| 22,963,183
|
Error When Using unstructured to Partition PDF Files
|
<p>I'm trying to use the <strong>unstructured</strong> library (<strong>unstructured[pdf]</strong>) in <strong>Python</strong> to partition a .pdf file, but I'm encountering an error that I can't seem to resolve. The PDF file contains text, tables, images, etc.</p>
<p>Here's a brief overview of what I'm doing:</p>
<pre class="lang-py prettyprint-override"><code>from unstructured.partition.pdf import partition_pdf
path = '/content/'
file_name = 'ABCABC.pdf'
raw_pdf_elements = partition_pdf(
filename=path + file_name,
extract_images_in_pdf=True,
infer_table_structure=True,
chunking_strategy="by_title",
max_characters=4000,
new_after_n_chars=3800,
combine_text_under_n_chars=2000,
image_output_dir_path=path
)
</code></pre>
<p><strong>Error Message:</strong>
When I run the code, I get the following error:</p>
<pre><code>NameError: name 'sort_page_elements' is not defined
</code></pre>
<p><strong>Environment:</strong></p>
<ul>
<li>python: 3.9.19</li>
<li>unstructured: 0.15.1 (I also tried 0.7.12 and 0.12.2 )</li>
<li>OS: Ubuntu 20.04</li>
</ul>
<p><strong>Question:</strong></p>
<p>What could be causing this NameError, and how can I resolve it? Any help would be greatly appreciated!</p>
<p><strong>Update</strong></p>
<p>I have tried the solution from @Oluwafemi Sule, but it doesn't work; my environment has both numpy and opencv-python:</p>
<ul>
<li>numpy: 1.26.4</li>
<li>opencv-python (cv2):4.10.0.84</li>
</ul>
<p>I had to deactivate the environment and activate it again, but then I encountered another error:</p>
<pre><code>get_model() got an unexpected keyword argument 'ocr_languages'
</code></pre>
<p>Solution:</p>
<ol>
<li>Add an additional parameter to <code>partition_pdf</code>, for example: <code>languages=["vie"]</code></li>
<li>Download the trained data for the specified languages from <a href="https://github.com/tesseract-ocr/tessdata" rel="nofollow noreferrer">here</a></li>
<li><code>export TESSDATA_PREFIX=path_of_your_tessdata_folder</code> (the folder that contains the file downloaded in step 2)</li>
</ol>
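<p>The three steps can be summarized as shell setup (the tessdata path, the script name, and the <code>vie</code> language code are illustrative placeholders):</p>

```shell
# Steps 1-2: pass languages=["vie"] to partition_pdf and download the matching
# traineddata file from https://github.com/tesseract-ocr/tessdata
# Step 3: point tesseract at the folder containing that file before running
export TESSDATA_PREFIX=/path/to/tessdata
python partition_script.py
```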
|
<python><python-3.x><large-language-model><rag>
|
2024-08-06 09:14:06
| 1
| 515
|
happy
|
78,837,922
| 11,067,209
|
Multiple inheritance preferences on Python classes do not take the proper super()
|
<p>Good morning everyone. I am creating mixin classes in a project to simplify the overall logic of a class diagram, for which I am using multiple inheritance. However, how multiple-inheritance precedence works is not very clear to me, or at least it is not clear how to avoid what is happening to me. Here is a simplified example:</p>
<pre><code>class A:
    def foo(self):
        return "AAA"

class B(A):
    def foo(self):
        return super().foo() + "BBB"

class C(A):
    def foo(self):
        return "CCC"

    def bar(self):
        return "Other logic"

class D(B, C):
    pass

print(D().foo())
</code></pre>
<p>In this code we have four classes: <code>A</code>, <code>B</code>, <code>C</code>, <code>D</code>. Both <code>B</code> and <code>C</code> inherit from <code>A</code>, but <code>B</code> changes the logic of the method <code>foo</code>, while <code>C</code> changes it as well and adds the method <code>bar</code>. What I intend with class <code>D</code> and the precedence set when inheriting is that it has the <code>bar</code> method (that's why it inherits from <code>C</code>), but that the <code>foo</code> method returns <code>"AAABBB"</code>, which is the logic of the <code>B</code> method. I am aware that what is happening is that the <code>super()</code> of <code>B</code>, seen from <code>D</code>, is <code>C</code>, which is why it currently returns <code>"CCCBBB"</code>, and that this could be solved by writing:</p>
<pre><code>class B(A):
    def foo(self):
        return A.foo(self) + "BBB"
</code></pre>
<p>However, this seems ugly to me, since the whole point of relegating logic to <code>super()</code> and not naming the class explicitly is to simplify changes; in reality, <code>B</code> should not have to take into account that there may be a class <code>D</code> with that expected output.
What would be the correct way for that code to output <code>"AAABBB"</code> when calling <code>foo</code> on the <code>D</code> class?</p>
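<p>For context on why this happens: <code>super()</code> in <code>B.foo</code> follows the MRO of the <em>instance's</em> class, which for <code>D(B, C)</code> is <code>D → B → C → A</code>, so the class after <code>B</code> is <code>C</code>, not <code>A</code>. One cooperative restructuring (a sketch, not the only option) that keeps the bare <code>super()</code> call and still yields <code>"AAABBB"</code> is to move <code>bar</code> into its own mixin, so <code>D</code> can inherit it without also inheriting <code>C.foo</code>:</p>

```python
class A:
    def foo(self):
        return "AAA"

class B(A):
    def foo(self):
        return super().foo() + "BBB"

class BarMixin:
    def bar(self):
        return "Other logic"

class C(BarMixin, A):
    def foo(self):
        return "CCC"

class D(B, BarMixin):  # takes foo from B, bar from the mixin
    pass

print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'A', 'BarMixin', 'object']
print(D().foo())  # AAABBB
print(D().bar())  # Other logic
```

<p><code>C</code> keeps its original behavior, while <code>D</code> composes exactly the pieces it wants: in <code>D</code>'s MRO the class after <code>B</code> is now <code>A</code>, so <code>super()</code> resolves the way <code>B</code> expects.</p>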
|
<python><oop>
|
2024-08-06 08:06:57
| 1
| 665
|
Angelo
|
78,837,870
| 1,216,635
|
Prometheus metrics in Sanic
|
<p>I'm trying to add metrics to my Sanic app.
Prometheus provides a Python client that runs as an ASGI app - <a href="https://prometheus.github.io/client_python/exporting/http/asgi/" rel="nofollow noreferrer">https://prometheus.github.io/client_python/exporting/http/asgi/</a>.</p>
<p>The question is how to run two ASGI apps at once.</p>
<p>I found Starlette could help:</p>
<pre><code>from prometheus_client import make_asgi_app
from starlette.applications import Starlette
from starlette.routing import Mount
from sanic import Sanic
from sanic.response import text

sanic_app = Sanic("MyHelloWorldApp")

@sanic_app.get("/")
async def hello_world(request):
    return text("Hello, world.")

metrics_app = make_asgi_app()

routes = [
    Mount("/", app=sanic_app),
    Mount("/metrics", app=metrics_app)
]

app = Starlette(routes=routes)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000, log_level="info")
</code></pre>
<p><code>metrics_app</code> is mounted and works,
but Sanic raises an exception:</p>
<pre><code>sanic.exceptions.SanicException: Loop can only be retrieved after the app has started running. Not supported with `create_server` function
</code></pre>
<p>Does anybody know a more straightforward way to use Prometheus Client with ASGI Sanic?</p>
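<p>One way to sidestep the dual-ASGI problem entirely (an alternative approach, untested against the exact setup above): keep Sanic as the only app and serve the scrape endpoint from an ordinary route that returns <code>prometheus_client.generate_latest()</code> with <code>CONTENT_TYPE_LATEST</code> as the content type, so Sanic stays in control of its own loop. The text format Prometheus scrapes is plain enough to sketch with a stand-in renderer:</p>

```python
def render_counter(name: str, help_text: str, value: float) -> str:
    """Render one counter in the Prometheus text exposition format."""
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name} {value}\n"
    )

body = render_counter("hello_requests_total", "Total hello requests", 3.0)
print(body)

# In Sanic (hypothetical route), the real client library would be used instead:
#   @sanic_app.get("/metrics")
#   async def metrics(request):
#       return raw(generate_latest(), content_type=CONTENT_TYPE_LATEST)
```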
|
<python><prometheus><metrics><asgi><sanic>
|
2024-08-06 07:58:09
| 1
| 2,077
|
tulsluper
|
78,837,861
| 14,132
|
Automatic use of aliases for relationships
|
<p>I have a schema that includes, among other things, self-joins and many-to-many relationships, like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Optional

from sqlalchemy import create_engine
from sqlalchemy.orm import (
    aliased,
    DeclarativeBase,
    Session,
    Mapped,
    mapped_column,
    relationship,
)
from sqlalchemy.schema import ForeignKey
from sqlalchemy.types import String

class Base(DeclarativeBase):
    pass

class Post(Base):
    __tablename__ = 'post'
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String(200), nullable=False)
    parent_id: Mapped[int] = mapped_column(ForeignKey('post.id'), nullable=True)
    parent: Mapped["Post"] = relationship('Post', foreign_keys=parent_id, remote_side=id)
    tags: Mapped[List['Tag']] = relationship('Tag', secondary='tag2post')

class Tag(Base):
    __tablename__ = 'tag'
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100), nullable=False)

class Tag2Post(Base):
    __tablename__ = 'tag2post'
    id: Mapped[int] = mapped_column(primary_key=True)
    tag_id: Mapped[int] = mapped_column('tag_id', ForeignKey('tag.id'))
    tag: Mapped[Tag] = relationship(Tag, overlaps='tags')
    post_id: Mapped[int] = mapped_column('post_id', ForeignKey('post.id'))
    post: Mapped[Post] = relationship(Post, overlaps='tags')

engine = create_engine("sqlite+pysqlite:///:memory:", echo=True)
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(tag_a := Tag(name='a'))
    session.add(tag_b := Tag(name='b'))
    session.add(parent := Post(title='parent', tags=[tag_a]))
    session.add(child := Post(title='child', parent=parent, tags=[tag_b]))
</code></pre>
<p>Now I want to write a query of the form "give me all posts with tag <code>b</code> whose parent has a tag <code>a</code>".</p>
<p>I understand the "manual" way of adding aliases for each step:</p>
<pre><code>parent_alias = aliased(Post)
parent_tag = aliased(Tag)
parent_tag2post = aliased(Tag2Post)

q = session.query(
    Post
).join(
    parent_alias, Post.parent_id == parent_alias.id
).join(
    parent_tag2post,
    parent_alias.id == parent_tag2post.post_id
).join(
    parent_tag,
    parent_tag2post.tag_id == parent_tag.id,
).join(
    Post.tags,
).filter(
    parent_tag.name == 'a',
    Tag.name == 'b',
)

print(q.one().title)  # prints 'child'
</code></pre>
<p>But I'm dealing with more generic code, basically a query language that gives me the conditions:</p>
<ul>
<li><code>Post.tags.name == 'b'</code></li>
<li><code>Post.parent.tags.name == 'a'</code></li>
</ul>
<p>The code that translates the query language into JOINs now would need to introspect the relationships (like <code>Post.tags</code>, <code>Post.parents</code>, <code>Post.parents.tags</code>), and in each relationship replace the tables by aliases and rebuild the join conditions with the aliases... which sounds rather complicated and error-prone.</p>
<p>Is there an easier way? Like, maybe, telling sqlalchemy to use a relationship, but with an alias? Or is there maybe a third-party package that already does this?</p>
|
<python><sqlalchemy><alias>
|
2024-08-06 07:55:55
| 1
| 12,742
|
moritz
|
78,837,838
| 4,772,565
|
How to test calls of a mocked dataclass in Python?
|
<p>I want to write a unit test for a function which instantiates a dataclass twice, each time followed by calling its method. However, the actual calls recorded for the mock include the method calls as well. This is confusing to me.</p>
<p><code>person.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

@dataclass
class Person:
    name: str

    def work(self):
        print(f"{self.name} is working")

def run_person():
    # It instantiates Person twice.
    for n in ["John", "Jack"]:
        person = Person(name=n)  # Instantiate a person
        person.work()  # Calling its method .work()
</code></pre>
<p><code>test_person.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from unittest.mock import MagicMock, call, patch

from person import run_person

class TestMain(unittest.TestCase):
    @patch("person.Person")  # Patch the Person dataclass
    def test_run_person(self, mock_person):
        mock_person.return_value = MagicMock()
        # This is my expectation. It should call Person() twice.
        # But the actual calls are
        # [call(name='John'), call().work(), call(name='Jack'), call().work()]
        expected_calls = [call(name="John"), call(name="Jack")]
        run_person()
        mock_person.assert_has_calls(expected_calls)

if __name__ == "__main__":
    unittest.main()
</code></pre>
<p>The test gives an error:</p>
<pre><code>Expected: [call(name='John'), call(name='Jack')]
Actual: [call(name='John'), call().work(), call(name='Jack'), call().work()]
</code></pre>
<p>Can you please explain why I receive the error message above? Shouldn't the call of <code>Person()</code> and the call of <code>Person.work()</code> be separated? How do I write this test properly?</p>
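<p>The behavior is how <code>unittest.mock</code> records calls: <code>mock_calls</code> tracks the mock, its methods, <em>and</em> calls made through its return value, and the default <code>assert_has_calls</code> looks for the expected list as a contiguous subsequence of <code>mock_calls</code>; the interleaved <code>call().work()</code> entries break that contiguity. Two ways around it, shown standalone:</p>

```python
from unittest.mock import MagicMock, call

mock_person = MagicMock()
for n in ["John", "Jack"]:
    person = mock_person(name=n)  # recorded on the mock itself
    person.work()                 # also recorded on the parent as call().work()

# mock_calls tracks the mock, its methods, and its return value's calls:
print(mock_person.mock_calls)
# [call(name='John'), call().work(), call(name='Jack'), call().work()]

# Option 1: call_args_list tracks only direct calls, i.e. the instantiations:
assert mock_person.call_args_list == [call(name="John"), call(name="Jack")]

# Option 2: any_order=True only checks membership, not a contiguous sequence:
mock_person.assert_has_calls([call(name="John"), call(name="Jack")], any_order=True)
```

<p>Comparing <code>call_args_list</code> is the stricter of the two, since it also pins down the order and count of the instantiations.</p>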
|
<python><python-unittest><python-dataclasses><python-mock>
|
2024-08-06 07:50:44
| 2
| 539
|
aura
|
78,837,775
| 678,861
|
Cast variable to right type automatically in Python
|
<p>Is there a way in Python to implement type-casting that automatically chooses the right type?</p>
<p>Say you have a class:</p>
<pre><code>class Foo:
    foo: list[int]

    def __init__(self):
        self.foo = cast(list[int], api.get())
</code></pre>
<p>You know that <code>api.get()</code> always returns <code>list[int]</code> but it's a weird library with haphazard type hints, so it claims that it's (say) <code>FunkyContainerType</code> when it's not. Also you can't be bothered to create a correct stub file for the library today because deadlines.</p>
<p><code>self.foo = cast(list[int], api.get())</code> creates a lot of visual noise in the code, which is also redundant if the type of <code>self.foo</code> is already defined. Moreover, if the variable has a custom type, you have to import the type at runtime for the program not to crash, so you can't put it in a stub file, and you'll run into issues with circular imports more often.</p>
<p>I'd much rather use <code>self.foo = cast(api.get())</code> that finds out what type <code>self.foo</code> is and casts the result of the API call to that type.</p>
<p>So the key desiderata are (1) don't state the type redundantly, (2) change nothing about the runtime behavior, i.e. no type imports at runtime, and (3) minimal visual clutter.</p>
<p>Notes: (1) This question is about exactly what it says it is. It is not about copying or converting values at runtime. It is about type hints. (2) <a href="https://adamj.eu/tech/2021/07/06/python-type-hints-how-to-use-typing-cast/#use-cases" rel="nofollow noreferrer">The use of <code>cast</code> is to supply extra information to the type-checker</a> that you know but that the type-checker cannot infer.</p>
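<p>One idiom that meets all three desiderata (a judgment call, not an official recommendation): rely on the attribute's declared annotation and suppress only the assignment diagnostic, which imports nothing and executes nothing extra at runtime. <code>FunkyContainerType</code> and <code>api_get</code> below are hypothetical stand-ins for the badly-typed library:</p>

```python
class FunkyContainerType(list):  # stand-in for the library's wrong annotation
    pass

def api_get() -> "FunkyContainerType":
    return FunkyContainerType([1, 2, 3])  # actually behaves like list[int]

class Foo:
    foo: list[int]

    def __init__(self) -> None:
        # The declared annotation on `foo` already pins the type; the targeted
        # ignore suppresses only the incompatible-assignment diagnostic, with
        # zero change to runtime behavior and no extra imports.
        self.foo = api_get()  # type: ignore[assignment]

print(Foo().foo)  # [1, 2, 3]
```

<p>The trade-off is that the comment silences the whole assignment check on that line, so a genuine type error there would also go unreported.</p>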
|
<python><casting><python-typing>
|
2024-08-06 07:37:07
| 2
| 957
|
Dawn Drescher
|
78,837,463
| 14,250,641
|
HuggingFace: Efficient Large-Scale Embedding Extraction for DNA Sequences Using Transformers
|
<p>I have a very large dataframe (60+ million rows) for which I would like to use a transformer model to grab the embeddings of the rows (DNA sequences). Basically, this involves tokenizing first; then I can get the embeddings.
Because of RAM limits, I have found that tokenizing and then embedding all in one .py file won't work. Here's the workaround that worked for a dataframe with ~30 million rows (but it isn't working for the larger df):</p>
<ol>
<li>tokenizing-- saving the output as 200 chunks/shards</li>
<li>feeding those 200 chunks separately to get embedded</li>
<li>these embeddings then get concatenated into one larger file of embeddings</li>
</ol>
<p>final embedding file should have these columns:
[['Chromosome', 'label', 'embeddings']]</p>
<p>Overall, I'm a little lost in terms of how I can get this to work for my larger dataset.</p>
<p>I've looked into streaming the dataset, but I don't think that will actually help because I need all of the embeddings, not just a few. Perhaps it could work if I stream the tokenization and feed it into the embedding process a bit at a time (delete the tokens along the way). This way, I don't have to save the tokens. Please correct me if this is not feasible.</p>
<p>Ideally, I would like to avoid having to shard the data, but I would just like the code to run at this point without reaching the RAM limit.</p>
<p>step 1</p>
<pre><code>dataset = Dataset.from_pandas(element_final[['Chromosome', 'sequence', 'label']])
dataset = dataset.shuffle(seed=42)

tokenizer = AutoTokenizer.from_pretrained(f"InstaDeepAI/nucleotide-transformer-500m-human-ref")

def tokenize_function(examples):
    outputs = tokenizer.batch_encode_plus(examples["sequence"], return_tensors="pt", truncation=False, padding=False, max_length=80)
    return outputs

# Creating tokenized dataset
tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True, batch_size=2000)

tokenized_dataset.save_to_disk(f"tokenized_elements/tokenized_{ELEMENT}", num_shards=200)
</code></pre>
<p>step 2 (this code runs over each of the 200 shards)</p>
<pre><code>input_file = f"tokenized_elements/tokenized_{ELEMENT_LABEL}/(unknown).arrow"

# Load input data
d1 = Dataset.from_file(input_file)

def embed_function(examples):
    torch.cuda.empty_cache()
    gc.collect()

    inputs = torch.tensor(examples['input_ids'])  # Convert to tensor
    inputs = inputs.to(device)

    with torch.no_grad():
        outputs = model(input_ids=inputs, output_hidden_states=True)

    # Step 3: Extract the embeddings
    hidden_states = outputs.hidden_states  # List of hidden states from all layers
    embeddings = hidden_states[-1]  # Assuming you want embeddings from the last layer
    averaged_embeddings = torch.mean(embeddings, dim=1)  # Calculate mean along dimension 1 (the dimension with size 86)
    averaged_embeddings = averaged_embeddings.to(torch.float32)  # Ensure float32 data type
    return {'embeddings': averaged_embeddings}

# Map embeddings function to input data
embeddings = d1.map(embed_function, batched=True, batch_size=1550)
embeddings = embeddings.remove_columns(["input_ids", "attention_mask"])

# Save embeddings to disk
output_dir = f"embedded_elements/embeddings_{ELEMENT_LABEL}/(unknown)"  # Assuming ELEMENT_LABEL is defined elsewhere
embeddings.save_to_disk(output_dir)
</code></pre>
<p>step 3: concatenate all 200 shards of embeddings into 1.</p>
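<p>The streaming idea described above (tokenize a batch, embed it, discard the tokens, move on) is feasible, and can be sketched independently of the model; <code>fake_embed</code> is a stand-in for the tokenizer-plus-forward-pass, and only one batch of "tokens" ever exists at a time:</p>

```python
from typing import Iterable, Iterator, List

def batched(rows: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield fixed-size batches lazily, never materializing the whole input."""
    batch: List[str] = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def fake_embed(batch: List[str]) -> List[float]:
    # Stand-in for: tokenize the batch, run the model, mean-pool the output.
    return [float(len(seq)) for seq in batch]

embeddings: List[float] = []
for batch in batched(["ACGT", "AC", "ACGTACGT"], size=2):
    embeddings.extend(fake_embed(batch))  # tokens for this batch are now discarded

print(embeddings)  # [4.0, 2.0, 8.0]
```

<p>In the real pipeline, the loop body would append each batch's embeddings to an on-disk store (e.g. one shard file per N batches) instead of an in-memory list, so neither tokens nor embeddings accumulate in RAM.</p>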
|
<python><huggingface-transformers><huggingface><huggingface-tokenizers><huggingface-datasets>
|
2024-08-06 06:09:24
| 0
| 514
|
youtube
|
78,837,003
| 6,030,951
|
VSCode Pylance Errors on FastAPI Response Type that Converts
|
<p>I'm using VSCode and the default Python language support extension from MS, which lints with Pylance. I have a Python 3.12 FastAPI project which uses type annotations to indicate which models should be returned as responses from the handlers, following the pattern suggested by the FastAPI docs (<a href="https://fastapi.tiangolo.com/tutorial/response-model/#return-type-and-data-filtering" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/response-model/#return-type-and-data-filtering</a>). There are no errors when the code is run, and the app works correctly, returning the correct results in the correct models. However, I get Pylance linting errors saying that my function return types are not compatible with my returns.</p>
<p>Here are my FastAPI SQLModel/Pydantic models:</p>
<pre><code>class PetBase(SQLModel):
    name: str

class Pet(PetBase, table=True):
    id: str

class PetOutgoing(PetBase):
    id: str
</code></pre>
<p>Here's my handler:</p>
<pre><code>@router.get("/{pet_id}")
def retrieve_one(pet_id: str) -> PetOutgoing:
    pet: Pet = get_pet_by_id(pet_id)
    return pet
</code></pre>
<p>This endpoint returns the <code>PetOutgoing</code> model; FastAPI converts it on the fly.
But here's the Pylance linting error I get:</p>
<pre><code>Expression of type "Pet" is incompatible with return type "PetOutgoing"
"Pet" is incompatible with "PetOutgoing" PylancereportReturnType
(variable) pet: Pet
</code></pre>
<p>What am I doing wrong?</p>
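<p>Strictly speaking, Pylance is right: the function is annotated <code>-&gt; PetOutgoing</code> but returns a <code>Pet</code>; FastAPI only makes that work by converting at runtime. Two common remedies (conventions vary by project): move the annotation into the decorator's <code>response_model=PetOutgoing</code> and loosen the return annotation, or convert explicitly so the annotation is truthful. The explicit-conversion idea, with plain classes standing in for the SQLModel ones:</p>

```python
# Hypothetical stand-ins for the SQLModel classes; with pydantic v2 / SQLModel
# the conversion line would typically be PetOutgoing.model_validate(pet).
class Pet:
    def __init__(self, id: str, name: str) -> None:
        self.id, self.name = id, name

class PetOutgoing:
    def __init__(self, id: str, name: str) -> None:
        self.id, self.name = id, name

def retrieve_one(pet: Pet) -> PetOutgoing:
    # Returning a freshly built PetOutgoing makes the annotation accurate,
    # so the type checker has nothing to complain about.
    return PetOutgoing(id=pet.id, name=pet.name)

out = retrieve_one(Pet(id="1", name="Rex"))
print(type(out).__name__, out.id, out.name)  # PetOutgoing 1 Rex
```

<p>The <code>response_model=</code> route keeps FastAPI's on-the-fly filtering behavior; the explicit conversion keeps the return-type annotation as documentation the checker can verify.</p>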
|
<python><fastapi><pydantic><pyright><sqlmodel>
|
2024-08-06 02:27:04
| 0
| 2,021
|
Dave
|
78,836,989
| 5,029,509
|
TensorFlow for Differential Privacy
|
<p>I am testing an example code from the <strong>TensorFlow</strong> website using <strong>Jupyter Notebook</strong>. You can find the code at the following link:</p>
<p><a href="https://www.tensorflow.org/responsible_ai/privacy/tutorials/classification_privacy" rel="nofollow noreferrer">https://www.tensorflow.org/responsible_ai/privacy/tutorials/classification_privacy</a></p>
<p>Below is the code along with the versions of <strong>TensorFlow</strong> and <strong>TensorFlow Privacy</strong> installed on my computer.</p>
<pre><code>!pip show tensorflow
Name: tensorflow
Version: 2.14.1
!pip show tensorflow_privacy
Name: tensorflow_privacy
Version: 0.9.0
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()
import numpy as np
tf.get_logger().setLevel('ERROR')
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/compat/v2_compat.py:108: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
import tensorflow_privacy
from tensorflow_privacy.privacy.analysis import compute_dp_sgd_privacy
train, test = tf.keras.datasets.mnist.load_data()
train_data, train_labels = train
test_data, test_labels = test
train_data = np.array(train_data, dtype=np.float32) / 255
test_data = np.array(test_data, dtype=np.float32) / 255
train_data = train_data.reshape(train_data.shape[0], 28, 28, 1)
test_data = test_data.reshape(test_data.shape[0], 28, 28, 1)
train_labels = np.array(train_labels, dtype=np.int32)
test_labels = np.array(test_labels, dtype=np.int32)
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes=10)
test_labels = tf.keras.utils.to_categorical(test_labels, num_classes=10)
assert train_data.min() == 0.
assert train_data.max() == 1.
assert test_data.min() == 0.
assert test_data.max() == 1.
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 2s 0us/step
epochs = 3
batch_size = 250
l2_norm_clip = 1.5
noise_multiplier = 1.3
num_microbatches = 250
learning_rate = 0.25
if batch_size % num_microbatches != 0:
    raise ValueError('Batch size should be an integer multiple of the number of microbatches')
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16, 8,
strides=2,
padding='same',
activation='relu',
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPool2D(2, 1),
tf.keras.layers.Conv2D(32, 4,
strides=2,
padding='valid',
activation='relu'),
tf.keras.layers.MaxPool2D(2, 1),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(10)
])
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
l2_norm_clip=l2_norm_clip,
noise_multiplier=noise_multiplier,
num_microbatches=num_microbatches,
learning_rate=learning_rate)
loss = tf.keras.losses.CategoricalCrossentropy(
from_logits=True, reduction=tf.losses.Reduction.NONE)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
model.fit(train_data, train_labels,
epochs=epochs,
validation_data=(test_data, test_labels),
batch_size=batch_size)
</code></pre>
<p>Output and error:</p>
<pre><code>Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 204s 3ms/sample - loss: 0.8745 - acc: 0.7297 - val_loss: 0.3688 - val_acc: 0.8954
Epoch 2/3
/usr/local/lib/python3.10/dist-packages/keras/src/engine/training_v1.py:2335: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
updates = self.state_updates
60000/60000 [==============================] - 204s 3ms/sample - loss: 0.3743 - acc: 0.9021 - val_loss: 0.3241 - val_acc: 0.9227
Epoch 3/3
60000/60000 [==============================] - 203s 3ms/sample - loss: 0.3471 - acc: 0.9199 - val_loss: 0.3105 - val_acc: 0.9339
<keras.src.callbacks.History at 0x7ccded1f8850>
compute_dp_sgd_privacy.compute_dp_sgd_privacy(n=train_data.shape[0],
                                              batch_size=batch_size,
                                              noise_multiplier=noise_multiplier,
                                              epochs=epochs,
                                              delta=1e-5)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-abb2ef715897> in <cell line: 1>()
----> 1 compute_dp_sgd_privacy.compute_dp_sgd_privacy(n=train_data.shape[0],
2 batch_size=batch_size,
3 noise_multiplier=noise_multiplier,
4 epochs=epochs,
5 delta=1e-5)
AttributeError: module 'tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy' has no attribute 'compute_dp_sgd_privacy'
</code></pre>
<p>I received the above error; this is not the first time I've encountered such issues when using <strong>TensorFlow Privacy</strong>.
However, the <strong>TensorFlow</strong> website shows the following output for the code above:</p>
<pre><code>DP-SGD with sampling rate = 0.417% and noise_multiplier = 1.3 iterated over 720 steps satisfies differential privacy with eps = 0.563 and delta = 1e-05.
The optimal RDP order is 18.0.
(0.5631726490328062, 18.0)
</code></pre>
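<p>As a sanity check, the sampling rate and step count quoted in that expected output follow directly from the hyperparameters in the script above; a quick pure-Python check of the arithmetic:</p>

```python
# Hyperparameters taken from the training script above.
n = 60_000        # MNIST training set size
batch_size = 250
epochs = 3

sampling_rate = batch_size / n          # fraction of the data seen per step
steps = epochs * (n // batch_size)      # total optimizer steps

print(f"sampling rate = {sampling_rate:.3%}, steps = {steps}")
# → sampling rate = 0.417%, steps = 720
```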
<p>Given this situation, it seems challenging to implement code using <strong>TensorFlow Privacy</strong>.
I would like to either fix the error in the reference code provided by the TensorFlow website and understand its cause or, if fixing it is not possible, find an alternative approach to TensorFlow Privacy for implementing <strong>Federated Learning (FL)</strong> with <strong>Differential Privacy</strong>.</p>
<p><strong>Note:</strong> I have also tested similar code with other versions of <strong>TensorFlow</strong> and <strong>TensorFlow Privacy</strong>, but similar issues persist.</p>
|
<python><tensorflow><machine-learning><privacy>
|
2024-08-06 02:18:31
| 0
| 726
|
Questioner
|
78,836,766
| 4,171,481
|
Is there a way to enforce the number of members an enum is allowed to have?
|
<p>Making an enum with exactly <em>n</em> members is trivial if I've defined it myself:</p>
<pre><code>class Compass(enum.Enum):
    NORTH = enum.auto()
    EAST = enum.auto()
    SOUTH = enum.auto()
    WEST = enum.auto()

## or ##

Coin = enum.Enum('Coin', 'HEADS TAILS')
</code></pre>
<p>But what if this enum will be released into the wild to be subclassed by other users? Let's assume that some of its extra behaviour depends on having the right number of members so we need to enforce that users define them correctly.</p>
<p>Here's my desired behaviour:</p>
<pre><code>class Threenum(enum.Enum):
    """An enum with exactly 3 members, a 'Holy Enum of Antioch' if you will.

    First shalt thou inherit from it. Then shalt thou define members three,
    no more, no less. Three shall be the number thou shalt define, and the
    number of the members shall be three. Four shalt thou not define, neither
    define thou two, excepting that thou then proceed to three. Five is right
    out. Once member three, being the third member, be defined, then employest
    thou thy Threenum of Antioch towards thy problem, which, being intractable
    in My sight, shall be solved.
    """
    ...

class Triumvirate(Threenum):  # success
    CAESAR = enum.auto()
    POMPEY = enum.auto()
    CRASSUS = enum.auto()

class TeenageMutantNinjaTurtles(Threenum):  # TypeError
    LEONARDO = 'blue'
    DONATELLO = 'purple'
    RAPHAEL = 'red'
    MICHELANGELO = 'orange'

Trinity = Threenum('Trinity', 'FATHER SON SPIRIT')  # success
Schwartz = Threenum('Schwartz', 'UPSIDE DOWNSIDE')  # TypeError
<p>Overriding <code>_generate_next_value_()</code> allows the enforcement of a maximum number of members, but not a minimum.</p>
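<p>For what it's worth, one possible approach (a sketch of my own, not a stdlib feature: the <code>SizedEnumMeta</code> name and the <code>size</code> class keyword are invented here) is an <code>EnumMeta</code> subclass that checks the member count once a concrete subclass has been created, which enforces both the minimum and the maximum:</p>

```python
import enum

class SizedEnumMeta(enum.EnumMeta):
    """EnumMeta that enforces an exact member count on concrete subclasses."""

    @classmethod
    def __prepare__(mcls, name, bases, size=None, **kwargs):
        # Swallow the 'size' class keyword so EnumMeta.__prepare__ never sees it.
        return super().__prepare__(name, bases, **kwargs)

    def __new__(mcls, name, bases, namespace, size=None, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        if size is not None:
            cls._required_size = size       # inherited by every subclass
        required = getattr(cls, "_required_size", None)
        if required is not None and cls.__members__ and len(cls.__members__) != required:
            raise TypeError(
                f"{name} must define exactly {required} members, "
                f"not {len(cls.__members__)}"
            )
        return cls

class Threenum(enum.Enum, metaclass=SizedEnumMeta, size=3):
    """Subclasses must define exactly three members."""

class Triumvirate(Threenum):                # success
    CAESAR = enum.auto()
    POMPEY = enum.auto()
    CRASSUS = enum.auto()

try:
    class Coin(Threenum):                   # too few members
        HEADS = enum.auto()
        TAILS = enum.auto()
except TypeError as exc:
    print(exc)                              # → Coin must define exactly 3 members, not 2
```

<p>The functional API (<code>Threenum('Trinity', 'FATHER SON SPIRIT')</code>) also goes through the metaclass, so the same check should apply there as well.</p>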
|
<python><enums>
|
2024-08-05 23:47:52
| 2
| 542
|
ibonyun
|
78,836,542
| 18,050,861
|
How to set the start date of the workweek for a month using Python?
|
<p>I have a database with a date column in the following format:</p>
<p><a href="https://i.sstatic.net/WiUzvBXw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiUzvBXw.png" alt="data" /></a></p>
<p>However, I need to work with this database so that it can be recognized as a time series. In this case, the names "week 1", "week 2", "week 3", and "week 4" should be replaced with dates. If we know that each week has 7 days, each of the weeks in the month would start on the first business day of that month, and the next week would be 7 days later. Is it possible to do this with Python?</p>
<p>For example, in the month of June 2024, the first business day was the 3rd. So,</p>
<pre class="lang-none prettyprint-override"><code>week 1 = 06/03/2024
week 2 = 06/10/2024
week 3 = 06/17/2024
week 4 = 06/24/2024
</code></pre>
<p>And so on, each month would have its own specificity and start date. Is it possible to do this with Python and then obtain a weekly frequency of business days?</p>
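<p>Yes — with just the stdlib <code>datetime</code> module this is straightforward; here is a sketch (assuming "business day" simply means Monday through Friday, ignoring public holidays; the <code>week_start_dates</code> name is mine):</p>

```python
from datetime import date, timedelta

def week_start_dates(year, month, n_weeks=4):
    """First business day (Mon-Fri) of the month, then every 7 days after."""
    d = date(year, month, 1)
    while d.weekday() >= 5:        # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return [d + timedelta(weeks=i) for i in range(n_weeks)]

# June 2024: the 1st falls on a Saturday, so week 1 starts Monday the 3rd.
for i, start in enumerate(week_start_dates(2024, 6), start=1):
    print(f"week {i} = {start:%m/%d/%Y}")
# → week 1 = 06/03/2024 ... week 4 = 06/24/2024
```

<p>These dates can then replace the "week 1"…"week 4" labels month by month to build a proper datetime index; if weekday-only or holiday-aware frequencies are needed later, pandas' <code>bdate_range</code> provides a business-day frequency.</p>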
|
<python><dataframe><datetime><time-series>
|
2024-08-05 21:44:43
| 2
| 375
|
User8563
|
78,836,505
| 678,572
|
How to evaluate nested boolean/logical expressions in Python?
|
<p>I'm working on a complex rule parser that has the following properties:</p>
<ul>
<li>A space character separates rules</li>
<li>A "+" character indicates an "AND" operator</li>
<li>A "," character indicates an "OR" operator</li>
<li>A "-" indicates an optional element</li>
<li>Tokens in parenthesis should be evaluated together</li>
</ul>
<p>I'm able to do simple rules but having trouble evaluating complex rules in nested parenthesis.</p>
<p>Here's nested rule definition I'm trying to evaluate:</p>
<pre><code>definition = '((K00925 K00625),K01895) (K00193+K00197+K00194) (K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584) (K00399+K00401+K00402) (K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))'
</code></pre>
<ul>
<li>Rule 1: <code>((K00925 K00625),K01895)</code>
<ul>
<li>This one is kind of tricky. This rule requires either K00925 and K00625 together OR just K01895 alone. Since the pair sits inside its own parenthesis set, it translates to (K00925 &amp; K00625) OR K01895, with the "," character indicating the OR.</li>
</ul>
</li>
<li>Rule 2: <code>(K00193+K00197+K00194)</code>
<ul>
<li>All 3 items must be present as indicated by "+" sign</li>
</ul>
</li>
<li>Rule 3: <code>(K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584)</code>
<ul>
<li>Everything except K00582 and K00583 because they are prefixed by "-" characters and when "+" is present then all items must be present</li>
</ul>
</li>
<li>Rule 4: <code>(K00399+K00401+K00402)</code>
<ul>
<li>All 3 items must be present as indicated by "+" sign</li>
</ul>
</li>
<li>Rule 5: <code>(K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))</code>
<ul>
<li>This is simpler than it looks. Either (K22480+K22481+K22482) OR (K03388+K03389+K03390). For the last subrule, it is K08264+K08265,K03388+K03389+K03390+K14127+(Either K14126+K14128 OR K22516+K00125))</li>
</ul>
</li>
</ul>
<p>Here's the code I have that is almost correct:</p>
<pre class="lang-py prettyprint-override"><code>import re

def rule_splitter(rule: str, split_characters: set = {"+", "-", ",", "(", ")", " "}) -> set:
    """
    Split rule by characters.

    Args:
        rule (str): Boolean logical string.
        split_characters (list): List of characters to split in rule.

    Returns:
        set: Unique tokens in a rule.
    """
    rule_decomposed = str(rule)
    if split_characters:
        for character in split_characters:
            character = character.strip()
            if character:
                rule_decomposed = rule_decomposed.replace(character, " ")
    unique_tokens = set(filter(bool, rule_decomposed.split()))
    return unique_tokens

def find_rules(definition: str) -> list:
    """
    Find and extract rules from the definition string.

    Args:
        definition (str): Complex boolean logical string with multiple rules.

    Returns:
        list: List of extracted rules as strings.
    """
    rules = []
    stack = []
    current_rule = ""
    outside_rule = ""
    for char in definition:
        if char == '(':
            if stack:
                current_rule += char
            if outside_rule.strip():
                rules.append(outside_rule.strip())
                outside_rule = ""
            stack.append(char)
        elif char == ')':
            stack.pop()
            if stack:
                current_rule += char
            else:
                current_rule = f"({current_rule.strip()})"
                rules.append(current_rule)
                current_rule = ""
        else:
            if stack:
                current_rule += char
            else:
                outside_rule += char
    # Add any remaining outside_rule at the end of the loop
    if outside_rule.strip():
        rules.append(outside_rule.strip())
    return rules

def evaluate_rule(rule: str, tokens: set, replace={"+": " and ", ",": " or "}) -> bool:
    """
    Evaluate a string of boolean logicals.

    Args:
        rule (str): Boolean logical string.
        tokens (set): List of tokens in rule.
        replace (dict, optional): Replace boolean characters. Defaults to {"+": " and ", ",": " or "}.

    Returns:
        bool: Evaluated rule.
    """
    # Handle optional tokens prefixed by '-'
    rule = re.sub(r'-\w+', '', rule)

    # Replace characters for standard logical formatting
    if replace:
        for character_before, character_after in replace.items():
            rule = rule.replace(character_before, character_after)

    # Split the rule into individual symbols
    unique_symbols = rule_splitter(rule, replace.values())

    # Create a dictionary with the presence of each symbol in the tokens
    token_to_bool = {sym: (sym in tokens) for sym in unique_symbols}

    # Parse and evaluate the rule using a recursive descent parser
    def parse_expression(expression: str) -> bool:
        expression = expression.strip()

        # Handle nested expressions
        if expression.startswith('(') and expression.endswith(')'):
            return parse_expression(expression[1:-1])

        # Evaluate 'OR' conditions
        if ' or ' in expression:
            parts = expression.split(' or ')
            return any(parse_expression(part) for part in parts)
        # Evaluate 'AND' conditions
        elif ' and ' in expression:
            parts = expression.split(' and ')
            return all(parse_expression(part) for part in parts)
        # Evaluate individual token presence
        else:
            return token_to_bool.get(expression.strip(), False)

    return parse_expression(rule)

def evaluate_definition(definition: str, tokens: set) -> dict:
    """
    Evaluate a complex definition string involving multiple rules.

    Args:
        definition (str): Complex boolean logical string with multiple rules.
        tokens (set): Set of tokens to check against the rules.

    Returns:
        dict: Dictionary with each rule and its evaluated result.
    """
    # Extract individual rules from the definition
    rules = find_rules(definition)

    # Evaluate each rule
    rule_results = {}
    for rule in rules:
        try:
            # Remove outer parentheses if they exist
            cleaned_rule = rule[1:-1] if rule.startswith('(') and rule.endswith(')') else rule
            result = evaluate_rule(cleaned_rule, tokens)
        except SyntaxError:
            # Handle syntax errors from eval() due to incorrect formatting
            result = False
        rule_results[rule] = result
    return rule_results

# Example usage
definition = '((K00925 K00625),K01895) (K00193+K00197+K00194) (K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584) (K00399+K00401+K00402) (K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))'

tokens = {
    'K00925',
    'K00625',
    # 'K01895',
    'K00193',
    'K00197',
    'K00194',
    'K00577',
    'K00578',
    'K00579',
    'K00580',
    'K00581',
    # 'K00582',
    'K00584',
    'K00399',
    'K00401',
    'K00402',
    'K22480',
    # 'K22481',
    # 'K22482',
    'K03388',
    'K03389',
    'K03390',
    # 'K08264',
    # 'K08265',
    'K14127',
    'K14126',
    'K14128',
    'K22516',
    # 'K00125'
}

result = evaluate_definition(definition, tokens)

# result
# {'((K00925 K00625),K01895)': False,
#  '(K00193+K00197+K00194)': True,
#  '(K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584)': True,
#  '(K00399+K00401+K00402)': True,
#  '(K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))': True}
</code></pre>
<p>Note that the implementation is incorrectly splitting the first rule.</p>
<p>Here is the following output I'm expecting:</p>
<pre><code>{
'((K00925 K00625),K01895)':True, # Note this is True because of `K00925` and `K00625` together (b/c the parenthesis) OR K01895 being present
'(K00193+K00197+K00194)':True,
'(K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584)':True, # This is missing optional tokens
'(K00399+K00401+K00402)':True,
'(K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))':True, # This combination allows for {K03388, K03389, K03390, K14127} + {K14126+K14128}
}
</code></pre>
<p>Here's a graphical representation of the information flow for this complex rule definition:</p>
<p><a href="https://i.sstatic.net/tCBGtpmy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCBGtpmy.png" alt="enter image description here" /></a></p>
<p>It should also be able to handle this rule:</p>
<p><code>rule_edge_case='((K00134,K00150) K00927,K11389)'</code> with the following query tokens: <code>tokens = { 'K00134', 'K00927'}</code>. This should be <code>True</code> because (K00134 OR K00150) is satisfied and <code>K00927</code> is present, which is sufficient.</p>
<p>Edit 1: I had an error before and the last rule was actually True and not False.</p>
<p>Edit 2: I've changed "items" to "tokens" and modified which tokens are used for evaluation to capture better edge cases.</p>
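<p>One way to make the nested evaluation robust is to stop splitting on <code>' and '</code>/<code>' or '</code> (which ignores parentheses) and instead translate each rule into a Python boolean expression, letting the interpreter handle the nesting. A sketch, assuming every token matches <code>K</code> followed by digits (the existing top-level <code>find_rules</code> splitter can stay as-is, with this replacing <code>evaluate_rule</code>/<code>parse_expression</code>):</p>

```python
import re

def evaluate_rule(rule, tokens):
    """Evaluate one rule against a set of present tokens.

    Grammar assumed from the description above: '+' and whitespace mean
    AND, ',' means OR (lower precedence), a '-' prefix marks a token as
    optional (so it is simply dropped), and parentheses group."""
    expr = re.sub(r'-K\d+', '', rule)                        # drop optional tokens
    expr = re.sub(r'K\d+', lambda m: 'T' if m.group(0) in tokens else 'F', expr)
    expr = re.sub(r'(?<=[TF)])\s+(?=[TF(])', '&', expr)      # bare space = implicit AND
    expr = expr.replace('+', '&').replace(',', '|').replace(' ', '')
    # On Python bools, & and | act as logical AND/OR, and since & binds
    # tighter than |, '+' correctly outranks ','. eval() is safe here for
    # well-formed rules because only T, F, &, |, ( and ) remain.
    return bool(eval(expr, {"T": True, "F": False}))

print(evaluate_rule('((K00925 K00625),K01895)', {'K00925', 'K00625'}))         # True
print(evaluate_rule('((K00134,K00150) K00927,K11389)', {'K00134', 'K00927'}))  # True
```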
|
<python><parsing><boolean><boolean-logic><evaluate>
|
2024-08-05 21:30:15
| 4
| 30,977
|
O.rka
|
78,836,482
| 3,103,957
|
Path parameter type in Python's FastAPI
|
<p>In Python's FastAPI, when associating a path parameter with a type, can we only use the <code>path</code> type (to denote a file path)?</p>
<p>For example, the below is accepted and works fine:</p>
<pre><code>@api.get("/{a:path}")
def get(a):
    return "Hello"
</code></pre>
<p>Does it not accept any other type ?</p>
<p>The below is not accepted, for example:</p>
<pre><code>from uuid import UUID

@api.get("/{a:UUID}")
def get(a):
    return "Hello"
</code></pre>
<p>when I curl for above: <code>curl -X GET localhost:8000/?a=12345678-1234-5678-1234-567812345678</code></p>
<p>The log says that: <code>AssertionError: Unknown path convertor 'UUID'</code></p>
<p>I am unable to find any mentions of this restriction anywhere.</p>
<p>But at the same time, when I give int, it works well.</p>
<pre><code>@api.get("/{a:int}")
def get(a:int):
    return "Hello"
</code></pre>
<p>I am a bit baffled: what kinds of types does the path parameter accept?
Is it only the basic types (int, float, string, bool) plus the special case 'path', with anything else rejected?</p>
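<p>The error comes from Starlette, which resolves the converter name after the colon from a small registry keyed by lowercase names — Starlette does ship a lowercase <code>uuid</code> converter, which is why <code>{a:UUID}</code> is "unknown" while <code>{a:uuid}</code> and <code>{a:int}</code> work. A conceptual sketch of that mechanism (an illustration of the idea, not Starlette's actual API):</p>

```python
import re
import uuid

# Each converter is a regex plus a conversion function, keyed by the
# lowercase name used in the route pattern (illustrative regexes).
CONVERTORS = {
    "str":  (r"[^/]+", str),
    "path": (r".*", str),
    "int":  (r"[0-9]+", int),
    "uuid": (r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}"
             r"-[0-9a-f]{4}-[0-9a-f]{12}", uuid.UUID),
}

def match_segment(convertor_name, raw):
    """Return the converted value if raw matches the converter, else None."""
    pattern, convert = CONVERTORS[convertor_name]
    m = re.fullmatch(pattern, raw)
    return convert(m.group(0)) if m else None

print(match_segment("uuid", "12345678-1234-5678-1234-567812345678"))
print(match_segment("int", "42"))
```

<p>Beyond the built-in set (<code>str</code>, <code>path</code>, <code>int</code>, <code>float</code>, <code>uuid</code>), Starlette allows registering custom converters via <code>starlette.convertors.register_url_convertor</code>.</p>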
|
<python><fastapi><starlette><path-parameter>
|
2024-08-05 21:21:35
| 3
| 878
|
user3103957
|
78,836,438
| 10,997,667
|
QFiledDialog setFileMode(Directory) with nameFilters
|
<p>In a Python program using Qt, I would like to be able to select a directory using <code>QtWidgets.QFileDialog()</code> with the <code>Directory</code> mode of the <code>setFileMode()</code> method, but also filter the contents of the selected directory using the <code>setNameFilter()</code> method. I'm currently using:</p>
<pre><code>def select_dir(self):
    """
    Opens a file dialog for source directory selection.
    """
    dlg = QtWidgets.QFileDialog()
    dlg.setFileMode(QtWidgets.QFileDialog.Directory)
    dlg.setNameFilter("Any files (*);;text files (*.txt);;csv files (*.csv)")
    if dlg.exec():
        selected_dir = dlg.selectedFiles()[0]   # avoid shadowing the built-in dir()
        fileTypeSelection = dlg.selectedNameFilter()
        return fileTypeSelection, selected_dir
</code></pre>
<p>But this does not allow selection of the file-type filter in the dialog window (the drop-down is not visible) on Windows. Can <code>Directory</code> mode be combined with name filters or MIME filters?</p>
|
<python><pyqt>
|
2024-08-05 21:04:59
| 0
| 787
|
osprey
|
78,836,273
| 14,617,547
|
HTTPConnectionPool(host='0.0.0.0', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3
|
<p>I wrote a FastAPI program that I used to run with the <code>ngrok</code> library to create external ports for it, since I run it on Colab. Then I dockerized it without <code>ngrok</code> like this:</p>
<pre><code>FROM python:3.10-slim
RUN apt-get update && apt-get install -y ffmpeg
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Define environment variable
ENV NAME FastAPIApp
# Run app.py when the container launches
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>and the requirements.txt is:</p>
<pre><code>fastapi
uvicorn
python-multipart
pydub
nest-asyncio
audonnx
audinterface
</code></pre>
<p>So I built the image with <code>docker build -t my-fastapi-app .</code> and then ran it with <code>docker run -p 8000:8000 my-fastapi-app</code>. It runs, but as soon as I send it a request from Python IDLE with:</p>
<pre><code>import requests
url = 'http://0.0.0.0:8000'
files = {'file': open('C:\\Users\\Z\\Desktop\\v\\f.m4a', 'rb')}
response = requests.post(url, files=files)
print(response.status_code)
print(response.text)
</code></pre>
<p>The container doesn't show any log, but IDLE shows these errors and nothing works at all:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 203, in _new_conn
sock = connection.create_connection(
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
raise err
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
sock.connect(sa)
OSError: [WinError 10049] The requested address is not valid in its context
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 791, in urlopen
response = self._make_request(
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 497, in _make_request
conn.request(
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 395, in request
self.endheaders()
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1281, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1041, in _send_output
self.send(msg)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 979, in send
self.connect()
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 243, in connect
self.sock = self._new_conn()
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 218, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000002C8B47DE610>: Failed to establish a new connection: [WinError 10049] The requested address is not valid in its context
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 845, in urlopen
retries = retries.increment(
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002C8B47DE610>: Failed to establish a new connection: [WinError 10049] The requested address is not valid in its context'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\requestapi.py", line 6, in <module>
response = requests.post(url, files=files)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Z\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000002C8B47DE610>: Failed to establish a new connection: [WinError 10049] The requested address is not valid in its context')
</code></pre>
<p>I also took a look at this:</p>
<p><a href="https://stackoverflow.com/questions/56010271/requests-exceptions-connectionerror-httpconnectionpoolhost-127-0-0-1-port-8">requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/1/</a></p>
<p>Would you please help me? I'm on Windows.</p>
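<p>For what it's worth, <code>0.0.0.0</code> in the <code>uvicorn</code> command is a <em>bind</em> address meaning "listen on every interface"; it is not a routable destination, which is exactly what <code>WinError 10049</code> ("the requested address is not valid in its context") is complaining about. The client should target <code>http://127.0.0.1:8000</code> (or <code>http://localhost:8000</code>) instead. A minimal stdlib sketch of that distinction:</p>

```python
import socket
import threading

def can_connect_via_loopback():
    """Bind a listener on 0.0.0.0 (all interfaces) and connect to it
    through 127.0.0.1 -- 0.0.0.0 itself is not a valid destination."""
    server = socket.socket()
    server.bind(("0.0.0.0", 0))             # listen on every interface, any free port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=lambda: server.accept()[0].close())
    t.start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))     # loopback reaches the 0.0.0.0 listener
    client.close()
    t.join()
    server.close()
    return True

print(can_connect_via_loopback())
```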
|
<python><docker><connection><google-colaboratory><fastapi>
|
2024-08-05 20:07:48
| 0
| 410
|
Beryl Amend
|
78,836,171
| 14,345,989
|
Periodic writes while reading large amount of data
|
<p>I have a large number of image files (~220,000) stored on a fast, local SSD. Using python and the <a href="https://pypi.org/project/tifffile/" rel="nofollow noreferrer">tifffile library</a> I read the images in as numpy arrays, which are then combined into a single array and saved to disk. Reading this combined array is much faster than separately reading the files.</p>
<p>I'm trying to understand why there are writes (upwards of 30 MB/s) happening during the reading of the data (Expected: all reads happen, then a combined array is created, then one write happens). There's clearly more than enough RAM available while this is happening (the entire dataset fits in RAM):</p>
<p><a href="https://i.sstatic.net/ZLR47OHm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLR47OHm.png" alt="Task Manager" /></a></p>
<p>I assume there's data still available in memory (but not counted as in-use memory), which would explain why the first ~15 GB is loaded without incurring any disk reads (reading starts near the left side of the plot).</p>
<p>A basic code example is something like this:</p>
<pre><code>import os
import numpy as np
from tifffile import imread
from functools import partial
from tqdm.contrib.concurrent import process_map

def get_image(dir, ID):
    # Load as a numpy array
    return imread(os.path.join(dir, ID + ".tif"))

def generate_numpy_file(IDs, folder, fname="train"):
    _read = partial(get_image, folder)

    print("Reading Data")
    images = process_map(_read, IDs, max_workers=20, chunksize=1024)
    images = np.array(images)

    print("Writing Data")
    np.save(os.path.join(SCRIPT_DIR, "Datasets", fname), images)
</code></pre>
|
<python><numpy><multiprocessing>
|
2024-08-05 19:35:46
| 1
| 886
|
Mandias
|
78,836,105
| 3,156,085
|
Why isn't the `pytest_addoption` hook run with the configured `testpaths`? (usage error)
|
<h1>Summary:</h1>
<p>I'm trying to set up a custom pytest option with the <a href="https://docs.pytest.org/en/8.1.x/example/simple.html#pass-different-values-to-a-test-function-depending-on-command-line-options" rel="nofollow noreferrer"><code>pytest_addoption</code></a> feature.</p>
<p>But when trying to configure my project with a <code>project.toml</code> file while using the said custom option, I'm getting the following error:</p>
<pre><code>$ pytest --foo foo
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
inifile: /home/vmonteco/Code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject.toml/pyproject.toml
rootdir: /home/vmonteco/Code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject.toml
</code></pre>
<p>Why is this problem occurring despite the configured test path and how could I solve it?</p>
<h2>Used versions are:</h2>
<ul>
<li>Python 3.10.13</li>
<li>Pytest 8.1.1</li>
</ul>
<hr />
<h1>How to reproduce:</h1>
<h2>Step 1 - before organizing the project, it works:</h2>
<p>I start with a very simple test in a single directory.</p>
<pre><code>$ tree
.
├── conftest.py
└── test_foo.py
1 directory, 2 files
$
</code></pre>
<ul>
<li><code>conftest.py</code>:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pytest

def pytest_addoption(parser):
    parser.addoption("--foo", action="store")

@pytest.fixture
def my_val(request):
    return request.config.getoption("--foo")
</code></pre>
<ul>
<li><code>test_foo.py</code>:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def test_foo(my_val):
    assert my_val == "foo"
</code></pre>
<pre><code>$ pytest --foo bar
=============================== test session starts ===============================
platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.5.0
rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/00_simplest_case
collected 1 item
test_foo.py F [100%]
==================================== FAILURES =====================================
____________________________________ test_foo _____________________________________
my_val = 'bar'
def test_foo(my_val):
> assert my_val == "foo"
E AssertionError: assert 'bar' == 'foo'
E
E - foo
E + bar
test_foo.py:2: AssertionError
============================= short test summary info =============================
FAILED test_foo.py::test_foo - AssertionError: assert 'bar' == 'foo'
================================ 1 failed in 0.01s ================================
$
</code></pre>
<h2>Step 2 - When adding an <em>empty</em> <code>pyproject.toml</code> and reorganizing the project, it fails:</h2>
<pre><code>$ tree
.
├── my_project
│ └── my_tests
│ ├── conftest.py
│ ├── __init__.py
│ └── test_foo.py
└── pyproject.toml
3 directories, 4 files
$ pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject_toml/pyproject.toml
rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject_toml
$
</code></pre>
<h3>Notes:</h3>
<ul>
<li>In doubt, I also added an <code>__init__.py</code> file.</li>
<li>The <code>pyproject.toml</code> seems to be recognized despite not having the required <code>[tool.pytest.ini_options]</code> table, thus apparently contradicting the <a href="https://docs.pytest.org/en/8.1.x/reference/customize.html#pyproject-toml" rel="nofollow noreferrer">documentation</a>.</li>
</ul>
<h3>However, there's a simple workaround that seems to work in this specific case:</h3>
<p>Just manually passing the test path to my tests as command line argument seems enough to make things work correctly again:</p>
<pre><code>$ pytest --foo bar my_project/my_tests
=============================== test session starts ===============================
platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.5.0
rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject_toml
configfile: pyproject.toml
collected 1 item
my_project/my_tests/test_foo.py F [100%]
==================================== FAILURES =====================================
____________________________________ test_foo _____________________________________
my_val = 'bar'
def test_foo(my_val):
> assert my_val == "foo"
E AssertionError: assert 'bar' == 'foo'
E
E - foo
E + bar
my_project/my_tests/test_foo.py:2: AssertionError
============================= short test summary info =============================
FAILED my_project/my_tests/test_foo.py::test_foo - AssertionError: assert 'bar' == 'foo'
================================ 1 failed in 0.02s ================================
$
</code></pre>
<p>But I'd like to avoid that and I'd rather have my project correctly configured.</p>
<h2>Step 3 - Unsuccessfully trying to configure <code>testpaths</code> in the <code>pyproject.toml</code>.</h2>
<p>The best explanation I found so far relies on the following points from the documentation:</p>
<ul>
<li><p>The necessity to put pytest_addoption in an "initial" <code>conftest.py</code>:</p>
<blockquote>
<p>This hook is only called for initial conftests.</p>
<hr />
<p><a href="https://docs.pytest.org/en/8.1.x/reference/reference.html#pytest.hookspec.pytest_addoption" rel="nofollow noreferrer">pytest_addoption documentation</a></p>
</blockquote>
</li>
<li><p>What is actually an "<em>initial <code>conftest.py</code></em>":
Initial conftests are, for each test path, the files whose paths match <code><test_path>/conftest.py</code> or <code><test_path>/test*/conftest.py</code>.</p>
<blockquote>
<ol start="6">
<li>by loading all “initial” conftest.py files:</li>
</ol>
<ul>
<li>determine the test paths: specified on the command line, otherwise in testpaths if defined and running from the rootdir, otherwise the current dir</li>
<li>for each test path, load conftest.py and test*/conftest.py relative to the directory part of the test path, if exist. Before a conftest.py file is loaded, load conftest.py files in all of its parent directories. After a conftest.py file is loaded, recursively load all plugins specified in its pytest_plugins variable if present.</li>
</ul>
<hr />
<p><a href="https://docs.pytest.org/en/8.1.x/how-to/writing_plugins.html#pluginorder" rel="nofollow noreferrer">Plugin discovery order at tool startup</a></p>
</blockquote>
</li>
</ul>
<p>So, if I understand well:</p>
<ol>
<li><p><strong>my error seems to occur because, if I don't provide an explicit test path, the current directory is used. In that case, my <code>conftest.py</code> is too deep in the directory tree to be used as an initial conftest.</strong></p>
</li>
<li><p><strong>With my workaround, explicitly passing a deeper test path solves this by making the conftest "initial" again.</strong></p>
</li>
</ol>
<p>From this, it would seem appropriate to try to translate my command line argument path into a bit of configuration (<code>testpaths</code>) as shown in the <a href="https://docs.pytest.org/en/8.1.x/reference/customize.html#pyproject-toml" rel="nofollow noreferrer">relevant documentation</a>.</p>
<p>But when trying to run my command again, I still get the same error:</p>
<pre><code>$ cat pyproject.toml
[tool.pytest.ini_options]
testpaths = [
"my_project/my_tests",
]
$ tree
.
├── my_project
│ └── my_tests
│ ├── conftest.py
│ ├── __init__.py
│ └── test_foo.py
└── pyproject.toml
3 directories, 4 files
$ pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve/pyproject.toml
rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve
$
</code></pre>
<p>I also tried to use a different kind of configuration file:</p>
<pre><code>$ cat pytest.ini
[pytest]
testpaths = my_project/my_tests
$ pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/03_with_pytest_ini/pytest.ini
rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/03_with_pytest_ini
$
</code></pre>
<p>But it still doesn't solve the problem, even though the equivalent path passed as a command-line argument does work. Why?</p>
<hr />
<h1>Addendum - raw output of new step 2 reproduction session:</h1>
<pre><code>Script started on 2024-08-06 03:23:13+02:00

➜ new_pytest_MRE tree
.
├── my_project
│   └── my_tests
│       ├── conftest.py
│       ├── __init__.py
│       └── test_foo.py
└── pyproject.toml

3 directories, 4 files

➜ new_pytest_MRE cat pyproject.toml
[tool.pytest.ini_options]
testpaths = [
    "my_project/my_tests",
]

➜ new_pytest_MRE cat my_project/my_tests/conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption("--foo", action="store")


@pytest.fixture
def my_val(request):
    return request.config.getoption("--foo")

➜ new_pytest_MRE cat my_project/my_tests/__init__.py

➜ new_pytest_MRE cat my_project/my_tests/test_foo.py
def test_foo(my_val):
    assert my_val == "foo"

➜ new_pytest_MRE python --version
Python 3.10.13

➜ new_pytest_MRE pytest --version
pytest 8.1.1

➜ new_pytest_MRE pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
  inifile: /home/vmonteco/code/MREs/new_pytest_MRE/pyproject.toml
  rootdir: /home/vmonteco/code/MREs/new_pytest_MRE

Script done on 2024-08-06 03:24:49+02:00 [COMMAND_EXIT_CODE="4"]
</code></pre>
<p><a href="https://pastebin.com/bSv2VxkX" rel="nofollow noreferrer">link to pastebin</a></p>
|
<python><pytest>
|
2024-08-05 19:15:32
| 1
| 15,848
|
vmonteco
|
78,836,094
| 11,824,828
|
LangChain Chat History
|
<p>I am struggling to pass context to a conversational RAG chain when using <code>RunnableWithMessageHistory</code>.</p>
<p>I have the following query function:</p>
<pre><code>def query(query_text, prompt, session_id, metadata_context):
    # History retrieval test
    contextualize_q_prompt = ChatPromptTemplate.from_messages(
        [
            ("system", contextualize_q_system_prompt),
            ("system", "{context}"),
            ("system", prompt),
            MessagesPlaceholder("chat_history"),
            ("human", "{input}"),
        ]
    )
    history_aware_retriever = create_history_aware_retriever(
        llm, retriever, contextualize_q_prompt
    )

    qa_prompt = ChatPromptTemplate.from_messages(
        [
            ("system", PROMPT_TEMPLATE),
            ("system", "{context}"),
            ("system", prompt),
            MessagesPlaceholder("chat_history"),
            ("human", "{input}"),
        ]
    )
    question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
    rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)

    conversational_rag_chain = RunnableWithMessageHistory(
        rag_chain,
        get_session_history,
        input_messages_key="input",
        history_messages_key="chat_history",
        output_messages_key="answer",
    )

    try:
        logger.info(f"Model: {LLM_MODEL} assigned. Generation of response has started.")
        response = conversational_rag_chain.invoke(
            {"input": query_text, "context": metadata_context},
            config={"configurable": {"session_id": f"{session_id}"}},
        )
        logger.info("Response generated.")
    except Exception as e:
        return {'Generation of response failed: ': str(e)}
    return response["answer"]
</code></pre>
<p>I want to pass my own 'context' that I have already prepared and parsed from the retriever's results. I do not want the retriever to be called again, but from what I've read, retrieval happens by itself if the chat history does not contain the answer.</p>
<p>The <code>prompt</code> variable is created as follows:</p>
<pre><code>prompt_template = ChatPromptTemplate.from_template(PROMPT_TEMPLATE)
prompt = prompt_template.format(context=metadata_context, input=query_text)
</code></pre>
<p>As you can see, I am trying to put the context everywhere, but with no success.</p>
<p>The 'context' I can see when calling</p>
<pre><code>conversational_rag_chain.invoke({"input": query_text, "context": metadata_context}, config={"configurable": {"session_id": f"{session_id}"}},)
logger.info(f"Response generated.")
</code></pre>
<p>is the result of retriever:</p>
<pre><code>Document(metadata={'number_of_reviews': '16', 'price': 18999, 'product_name': 'product', 'rating': '4'})
</code></pre>
<p>The code I'm using is as follows:</p>
<pre><code>chroma_client = chromadb.HttpClient(host=DB_HOST, port=DB_PORT)
chroma_collection = chroma_client.get_collection(os.getenv("DB_COLLECTION"))
vectorstore = VStoreChroma(DB_COLLECTION, embedding_function, client=client)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    search_kwargs={"k": 10},
)

def self_query(query_text):
    model = llm
    logger.info("Data retrieval has started.")
    try:
        result = retriever.invoke(query_text)
        logger.info("Data retrieved from database.")
        if len(result) == 0:
            logger.info("Unable to find matching results.")
    except Exception as e:
        return {'Retrieval failed: ': str(e)}
    return result
</code></pre>
<p>Retrieval works fine and I get correct results. The problem is the context I prepare from the metadata by parsing it with a helper function. It is a string, and I can't figure out where to pass it so that it is actually used as the context. The rest is as I mentioned before.</p>
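<p>For reference, the parsing helper I mentioned looks roughly like this (simplified; the <code>Doc</code> class is just a stand-in for LangChain's <code>Document</code>, and the field names mirror my collection):</p>

```python
from dataclasses import dataclass


@dataclass
class Doc:
    # Minimal stand-in for langchain's Document class, for illustration only
    metadata: dict


def build_metadata_context(documents):
    """Flatten retrieved documents' metadata into one context string."""
    lines = []
    for doc in documents:
        meta = doc.metadata
        lines.append(
            f"{meta.get('product_name')}: price={meta.get('price')}, "
            f"rating={meta.get('rating')}, reviews={meta.get('number_of_reviews')}"
        )
    return "\n".join(lines)


docs = [Doc({"number_of_reviews": "16", "price": 18999,
             "product_name": "product", "rating": "4"})]
print(build_metadata_context(docs))
# product: price=18999, rating=4, reviews=16
```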
|
<python><langchain><large-language-model><rag>
|
2024-08-05 19:12:34
| 1
| 325
|
vloubes
|
78,836,067
| 3,183,808
|
SQLAlchemy - psycopg2.errors.InvalidSchemaName: no schema has been selected to create in
|
<p>I am connecting a FastAPI and SQLAlchemy app to a postgres deployment using kubernetes.</p>
<p>I am specifying my connection string and schema as follows:</p>
<pre><code>DB_URL = f"postgresql+psycopg2://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{DB}:5432/{DB_NAME}"
engine = create_engine(DB_URL, connect_args={"options":"csearch_path={}".format("schema_name")},)
</code></pre>
<p>However, upon application start up, I receive the following error:</p>
<pre><code>INFO: Waiting for application startup. │
│ ERROR: Traceback (most recent call last): │
│ File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context │
│ self.dialect.do_execute( │
│ File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute │
│ cursor.execute(statement, parameters) │
│ psycopg2.errors.InvalidSchemaName: no schema has been selected to create in │
│ LINE 2: CREATE TABLE field (
</code></pre>
<p>I don't understand why I am seeing this error when I am specifying a schema.</p>
|
<python><postgresql><sqlalchemy><psycopg2>
|
2024-08-05 19:05:43
| 1
| 435
|
jm22b
|
78,835,929
| 13,971,251
|
Unable to find Chromium that works with Chrome
|
<p>My Chrome is Version 127.0.6533.89 64-bit. I downloaded the corresponding chromedriver from <a href="https://googlechromelabs.github.io/chrome-for-testing/#stable" rel="nofollow noreferrer">https://googlechromelabs.github.io/chrome-for-testing/#stable</a>, which is version 127.0.6533.88 (mine is .89).</p>
<p>When I run my program I get the following errors:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\service.py", line 95, in start
path = SeleniumManager().driver_location(browser)
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\selenium_manager.py", line 74, in driver_location
result = self.run((binary, flag, browser))
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\selenium_manager.py", line 93, in run
raise SeleniumManagerException(f"Selenium manager failed for: {command}.\n{stdout}{stderr}")
selenium.common.exceptions.SeleniumManagerException: Message: Selenium manager failed for: C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\windows\selenium-manager.exe --browser chrome.
WARN Error getting version of chromedriver 127. Retrying with chromedriver 126 (attempt 1/5)
WARN Error getting version of chromedriver 126. Retrying with chromedriver 125 (attempt 2/5)
WARN Error getting version of chromedriver 125. Retrying with chromedriver 124 (attempt 3/5)
WARN Error getting version of chromedriver 124. Retrying with chromedriver 123 (attempt 4/5)
WARN Error getting version of chromedriver 123. Retrying with chromedriver 122 (attempt 5/5)
ERROR The chromedriver version cannot be discovered
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\Kovy\Desktop\Programs\teumim.py", line 62, in <module>
first_name, last_name = get_webpage()
File "c:\Users\Kovy\Desktop\Programs\teumim.py", line 19, in get_webpage
driver = webdriver.Chrome(service=service, options=chrome_options)
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 80, in __init__
super().__init__(
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 101, in __init__
self.service.start()
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\service.py", line 98, in start
raise err
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\service.py", line 88, in start
self._start_process(self.path)
File "C:\Users\Kovy\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\common\service.py", line 209, in _start_process
raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: 'chromedriver.exe' executable needs to be in PATH. Please see https://chromedriver.chromium.org/home
</code></pre>
<p>Regarding the second error, I added the folder that chromedriver.exe is in to the PATH, and it's still giving me the error. And regarding the first problem, I have the most up-to-date version; how is it too old?</p>
<p>Here is the code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import time

def get_webpage():
    chrome_options = Options()
    chrome_options.add_argument("--no-sandbox")
    chrome_options.add_argument("--disable-gpu")
    chrome_options.add_argument('--disable-dev-shm-usage')

    # Specify the path to chromedriver.exe
    chromedriver_path = 'C:/Users/Kovy/Download/chromedriver-win64/chromedriver-win64/chromedriver.exe'
    service = Service(executable_path=chromedriver_path)
    driver = webdriver.Chrome(service=service, options=chrome_options)

    try:
        email = "email"
        password = "code"
        url = 'https://accounts.google.com/'
        driver.get(url)
        time.sleep(15)  # Wait for the page to load

        input_username = driver.find_element(By.NAME, 'identifier')
        input_username.send_keys(email)
        input_username.send_keys(Keys.ENTER)
        time.sleep(2)

        input_password = driver.find_element(By.NAME, 'Passwd')
        input_password.send_keys(password)
        input_password.send_keys(Keys.ENTER)
        time.sleep(25)  # Wait for login to complete

        webpage_content = driver.page_source
    finally:
        driver.quit()

    # Parse the HTML with BeautifulSoup
    soup = BeautifulSoup(webpage_content, 'html.parser')

    # Find all <p> tags and extract their text
    paragraphs = soup.find_all('p')
    text_content = '\n'.join([p.get_text() for p in paragraphs])

    # Dummy values for first_name and last_name
    first_name = "a"
    last_name = "b"
    return first_name, last_name

def download(first_name, last_name):
    print(f"First Name: {first_name}, Last Name: {last_name}")

while True:
    first_name, last_name = get_webpage()
    download(first_name, last_name)
    time.sleep(60)  # Delay between iterations
</code></pre>
|
<python><selenium-webdriver><path>
|
2024-08-05 18:24:48
| 1
| 1,181
|
Kovy Jacob
|
78,835,812
| 10,145,953
|
ROC curve *without* a model estimator?
|
<p>I have created an AI Tool which extracts content from an image and then reviews that content for completeness and accuracy. I am trying to evaluate the performance of this tool and am gathering metrics to do so.</p>
<p>I have a resulting table that looks similar to the below; the true values come from manual ground truth reviews of documents and the predicted values are the actual output of the tool.</p>
<pre><code>ID | true | predicted |
-----------------------
1 | 0 | 1 |
2 | 0 | 0 |
3 | 1 | 1 |
4 | 1 | 0 |
</code></pre>
<p>I have been able to use the <code>true</code> and <code>predicted</code> columns to obtain various metrics using the code below:</p>
<pre><code>def calculate_metrics(df, true, predicted):
    accuracy = accuracy_score(df[true], df[predicted])
    precision = precision_score(df[true], df[predicted])
    recall = recall_score(df[true], df[predicted])
    f1 = f1_score(df[true], df[predicted])
    roc_auc = roc_auc_score(df[true], df[predicted])
    return print(f"accuracy: {accuracy}\nprecision: {precision}\nrecall: {recall}\nf1: {f1}\nroc_auc: {roc_auc}")
</code></pre>
<p>Additionally, I would like to plot a ROC curve. I am able to obtain the roc_auc score and assumed I could plot from there, but I'm having a hard time wrapping my head around how exactly to do so. It looks like I need a model estimator to determine probabilities and then from there I can create the plot but I'm unclear how to do so with the data I've obtained.</p>
<p>Is it even possible to create a ROC curve using the results I have and, if so, how do I do so?</p>
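<p>For what it's worth, here is a minimal sketch of what happens when I feed hard 0/1 predictions straight into <code>roc_curve</code> (the values are made up to mirror my table): I only get a three-point "curve", which is why I suspect I need probabilities.</p>

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Made-up values mirroring my true/predicted table (binary labels only)
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0])

# With hard 0/1 predictions there are only two distinct "scores",
# so the ROC curve collapses to just three points
fpr, tpr, thresholds = roc_curve(y_true, y_pred)
print(fpr, tpr)       # only three points, no real curve
print(auc(fpr, tpr))  # matches what roc_auc_score gives me
```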
|
<python><scikit-learn><classification><roc>
|
2024-08-05 17:53:18
| 2
| 883
|
carousallie
|
78,835,771
| 181,783
|
StopIteration seemingly raised prematurely by generator in nested for loop
|
<p>I expected the generator below to produce at least 16 values, but for reasons beyond me a <code>StopIteration</code> is raised at the end of the innermost for loop.</p>
<p>What am I doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>>>> def generator():
... counter = 0
... upper_index = [0, 1, 2, 3, 7, 6, 5 , 4]
... lower_index = reversed(upper_index )
... for i in range(4):
... for _ in range(2):
... for up, down in zip(upper_index , lower_index):
... yield (up, down, counter)
... counter += 1
...
>>>
>>>
>>> gen = generator()
>>> for _ in range(16):
... print(next(gen))
...
(0, 4, 0)
(1, 5, 1)
(2, 6, 2)
(3, 7, 3)
(7, 3, 4)
(6, 2, 5)
(5, 1, 6)
(4, 0, 7)
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
StopIteration
</code></pre>
|
<python><generator>
|
2024-08-05 17:43:43
| 1
| 5,905
|
Olumide
|
78,835,718
| 13,016,237
|
SQL case when in where condition
|
<p>I'm trying to convert SQL to Python. I have added the SQL Fiddle <a href="https://sqlfiddle.com/oracle/online-compiler?id=c38f55d2-6992-43fe-8ef5-484446c86794" rel="nofollow noreferrer">here</a>.</p>
<p>That link has the input and expected output, along with the SQL below:</p>
<pre><code>select a.id, b.country, a.person, b.num
from tab2 a
left join tab1 b
on a.id=b.id
WHERE
case when b.country = 'Denmark' then 'Swiz'
else b.country
end <> 'Russia';
</code></pre>
<p>And the requirement is to convert the above to Python, thanks in advance!</p>
|
<python><pandas><join>
|
2024-08-05 17:31:11
| 2
| 614
|
pc_pyr
|
78,835,509
| 9,363,181
|
Dynamically infer schema of JSON data using Pyspark
|
<p>I have a <code>MongoDB</code> server from which I load the data into a <code>PySpark</code> DataFrame. However, due to some differences between the systems (I also tried legacy data types), I can't load the data directly from <code>MongoDB</code>, so I have to provide an external schema. Now, the problem is that one attribute (the <code>steps</code> column) has nested attributes, so I can't provide an exact schema for it.</p>
<p>So while loading I am just marking that column as <code>StringType()</code> and later on trying to infer the schema for that column and its nested structure properly.</p>
<p>My steps <strong>data</strong> looks like the below:</p>
<pre><code>+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|steps |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[{"id": "selfie", "status": 200, "startedAt": "2024-08-01T11:24:43.698Z", "completedAt": "2024-08-01T11:24:43.702Z", "startCount": 0, "cacheHit": false, "data": {"selfiePhotoUrl": ""}, "inner": {"isSelfieFraudError": false}}] |
|[{"id": "ip-validation", "status": 200, "startedAt": "2024-08-01T11:03:01.233Z", "completedAt": "2024-08-01T11:03:01.296Z", "startCount": 0, "cacheHit": false, "data": {"country": "Botswana", "countryCode": "BW", "region": "Gaborone", "regionCode": "GA", "city": "Gaborone", "zip": "", "latitude": -24.6437, "longitude": 25.9112, "safe": true, "ipRestrictionEnabled": false, "vpnDetectionEnabled": false, "platform": "web_mobile"}}, {"id": "liveness", "status": 200, "startedAt": "2024-08-01T11:22:29.787Z", "completedAt": "2024-08-01T11:22:30.609Z", "startCount": 1, "cacheHit": false, "data": {"videoUrl": "", "spriteUrl": "", "selfieUrl": {"media": "", "isUrl": true}}, "inner": {}}]|
|[{"id": "ip-validation", "status": 200, "startedAt": "2024-08-01T11:24:40.251Z", "completedAt": "2024-08-01T11:24:40.285Z", "startCount": 0, "cacheHit": false, "data": {"country": "Mexico", "countryCode": "MX", "region": "Mexico City", "regionCode": "CMX", "city": "Mexico City", "zip": "03020", "latitude": 19.4203, "longitude": -99.1193, "safe": true, "ipRestrictionEnabled": false, "vpnDetectionEnabled": false, "platform": ""}}] |
|[{"id": "liveness", "status": 200, "startedAt": "2024-07-31T20:57:54.206Z", "completedAt": "2024-07-31T20:57:55.762Z", "startCount": 1, "cacheHit": false, "data": {"videoUrl": "", "spriteUrl": "", "selfieUrl": {"media": "", "isUrl": true}}, "inner": {}}] |
|[] |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
</code></pre>
<p>As you can see the <code>data</code> attribute is of nested type and the attributes are not even fixed.</p>
<p>I tried the code below, but it only works for the first record since I am using <code>head()</code>:</p>
<pre><code>schema = F.schema_of_json(df.select('steps').head()[0])
df1 = df.select("_id","steps").withColumn("steps",F.from_json("steps", schema))
</code></pre>
<p>Now, this works for the first record, but it applies that same schema to the other records, and if they have additional attributes, it truncates them so they don't appear at all. Have a look at the <code>data</code> attribute in the output.</p>
<p>Like for example I am getting the output as below:</p>
<pre><code>{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-01T11:24:43.702Z","data":{"selfiePhotoUrl":""},"id":"selfie","inner":{"isSelfieFraudError":false},"startCount":0,"startedAt":"2024-08-01T11:24:43.698Z","status":200}]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-01T11:03:01.296Z","data":{},"id":"ip-validation","startCount":0,"startedAt":"2024-08-01T11:03:01.233Z","status":200},{"cacheHit":false,"completedAt":"2024-08-01T11:22:30.609Z","data":{},"id":"liveness","inner":{},"startCount":1,"startedAt":"2024-08-01T11:22:29.787Z","status":200}]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-01T11:24:40.285Z","data":{},"id":"ip-validation","startCount":0,"startedAt":"2024-08-01T11:24:40.251Z","status":200}]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-07-31T20:57:55.762Z","data":{},"id":"liveness","inner":{},"startCount":1,"startedAt":"2024-07-31T20:57:54.206Z","status":200}]}
{"_id":"","steps":[]}
</code></pre>
<p><code>_id</code> is an additional column that I have written in the output. It should be ignored.</p>
<p>I was trying to sample the records, but the method I used only works for a single record. So how can I infer the schema dynamically? Is there an optimal way?</p>
|
<python><json><mongodb><pyspark>
|
2024-08-05 16:34:49
| 3
| 645
|
RushHour
|
78,835,397
| 7,985,055
|
how to extract dataframe values to form an array?
|
<p>I have an excel sheet, which I am trying to process with Python as part of an ETL process.</p>
<p>I have an Excel sheet with a bunch of data in it, but the columns are not the first row of the sheet, which is a dump of JIRA issues.</p>
<p>file.xlsb</p>
<pre><code>time stamp row.
Export x number of issue
Project | Summary | Creator | Status | Description | Key (and a bunch of other fields)
data row 1
data row 2
etc....
</code></pre>
<p>I am able to read the data with a simple pandas call; however, I only need certain data, like the summary, status, and key. How do I pull that data?</p>
<pre><code>import pandas as pd
df = pd.read_excel(imported_file, skiprows=2)
</code></pre>
<p>When I try to make a Python list of the data, I get a key error:</p>
<pre><code>test.py", line 3812, in get_loc
raise KeyError(key) from err
KeyError: 'Key'
</code></pre>
<p>What I have so far is:</p>
<pre><code>import pandas as pd

df = pd.read_excel(imported_file, skiprows=2)
print(df)

issuesArray = []
for index, row in df.iterrows():
    issuesArray.append({
        ' Ticket #: ': row['Key'],
        ' Issue Type: ': row['Issue Type'],
        ' Issue Created: ': row['Created'],
        ' Status: ': row['Status'],
        ' Summary: ': row['Summary'],
        ' Reporter: ': row['Reporter'],
        ' Project: ': row['Project'],
        ' Body: ': row['Description']
    })

for entryItem in issuesArray:
    print("---")
    print(entryItem)
    print("---")
<p>I tried referencing the df row using the column string and integer, both of which result in the key error. Essentially, I am looking for a list of dict items that I can easily manage in Python. Anyone have an idea on how to do this?</p>
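<p>To narrow down the <code>KeyError</code>, I built a toy frame mimicking my sheet's layout (the junk rows, header row, and column names here are made up) so I could check what pandas actually thinks the columns are:</p>

```python
import pandas as pd

# Toy stand-in for the sheet: two junk rows, then the real header row
rows = [
    ["exported 2024-08-05", None, None],   # time stamp row
    ["Export 2 issues", None, None],       # export summary row
    ["Key", "Status", "Summary"],          # the row I want as the header
    ["JIRA-1", "Open", "First issue"],
    ["JIRA-2", "Done", "Second issue"],
]

# What I expect read_excel(..., skiprows=2) to give me: header taken from row 3
df = pd.DataFrame(rows[3:], columns=rows[2])
print(df.columns.tolist())  # if 'Key' isn't in here, row['Key'] raises KeyError
```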
|
<python><pandas><dataframe>
|
2024-08-05 16:11:13
| 2
| 525
|
Mr. E
|
78,835,286
| 14,358,734
|
seaborn countplot that only counts total number of data points below and above a threshold
|
<p>Here's my data structure:</p>
<pre><code>import random

betterTrue = [random.randint(0, 1) for x in range(500)]
betterFalse = [(x + 1) % 2 for x in betterTrue]

data = {
    "model": ["A" for x in range(500)] + ["B" for x in range(500)],
    "safety": [random.randint(0, 4) for x in range(1000)],
    "honesty": [random.randint(0, 4) for x in range(1000)],
    "quality": [random.randint(0, 4) for x in range(1000)],
    "better": betterTrue + betterFalse
}
</code></pre>
<p>I'd like to generate count plots comparing each model's performance in each of the <code>safety</code>, <code>honesty</code>, <code>quality</code>, and <code>better</code> columns. For the first three, the data comes in integer values from 0 to 4, and for <code>better</code>, the data is either <code>0</code> or <code>1</code>.</p>
<p>But for the first three columns, I only care if the data point is greater than or equal to 3 or less than 3. Is there a way to generate a count plot which throws the data into two bins <code>>= 3</code> and <code>< 3</code>?</p>
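<p>In pandas terms, what I'm after is essentially this two-way binning before plotting (the <code>safety_bin</code> column and bin labels are just how I'd name them):</p>

```python
import random
import numpy as np
import pandas as pd

betterTrue = [random.randint(0, 1) for x in range(500)]
betterFalse = [(x + 1) % 2 for x in betterTrue]
data = {
    "model": ["A" for x in range(500)] + ["B" for x in range(500)],
    "safety": [random.randint(0, 4) for x in range(1000)],
    "better": betterTrue + betterFalse,
}
df = pd.DataFrame(data)

# Collapse the 0-4 scores into the two bins I care about
df["safety_bin"] = np.where(df["safety"] >= 3, ">= 3", "< 3")

# The count plot would then be something like:
# sns.countplot(x="safety_bin", hue="model", data=df)
print(df["safety_bin"].value_counts())
```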
<p>For reference, this is what it looks like when we don't do that, and instead just discretely bin by each possible value</p>
<p><code>fig = sns.countplot(x = 'safety', hue='model', data=data, stat='count')</code></p>
|
<python><seaborn>
|
2024-08-05 15:46:28
| 1
| 781
|
m. lekk
|
78,835,220
| 7,920,004
|
Airflow Tasks in TaskGroup run out of order
|
<p>I'm using the <code>Dynamic Task Mapping</code> feature with <code>Task Groups</code> to generate n tasks for my input list. The DAG gets generated, but at execution time I'm seeing odd behaviour.
<a href="https://i.sstatic.net/iDbTPLj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iDbTPLj8.png" alt="enter image description here" /></a></p>
<p>Mapped Tasks from second Task are getting triggered simultaneously when Mapped Tasks from first Task are still running. This is reflected in MWAA's logs:
<a href="https://i.sstatic.net/GPwiB8kQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPwiB8kQ.png" alt="enter image description here" /></a></p>
<p>This non-deterministic run causes DAG to fail.</p>
<p>I tried both the <code>depends_on_past=True</code> and <code>wait_for_downstream=True</code> parameters with no luck. Many thanks for any help.</p>
<p><strong>DAG</strong>:</p>
<pre><code>@dag(dag_id='chore_task_group_stage3', catchup=False)
def pipeline():
    # t0 = DummyOperator(task_id='start')
    # t3 = DummyOperator(task_id='end')

    @task_group(group_id="channel_demo_tg")
    def tg1(my_num):
        @task()
        def print_num(num):
            return num

        @task()
        def add_42(num):
            return num + 42

        print_num(my_num) >> add_42(my_num)

    # creating 6 mapped task group instances of the task group group1
    tg1_object = tg1.expand(my_num=[19, 23, 42, 8, 7, 108])

    # setting dependencies
    tg1_object

pipeline()
</code></pre>
<p>Scheduler log:</p>
<pre><code>[[34m2024-08-05T14:11:19.118+0000[0m] {{[34mtask_context_logger.py:[0m91}} ERROR[0m - Executor reports task instance <TaskInstance: chore_task_group_stage3.group1.add_42 manual__2024-08-05T14:10:41+00:00 map_index=4 [queued]> finished (failed) although the task says it's queued. (Info: None) Was the task killed externally?[0m
</code></pre>
|
<python><airflow><mwaa>
|
2024-08-05 15:33:14
| 1
| 1,509
|
marcin2x4
|
78,835,219
| 243,755
|
Failed to install pandas due to error "pytz >= 2011k"
|
<p>I failed to install pandas due to the following error. What might be wrong? Thanks.</p>
<pre><code>Collecting pandas==0.23.4
Using cached pandas-0.23.4.tar.gz (10.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [4 lines of output]
<string>:12: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
error in pandas setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Expected end or semicolon (after version specifier)
pytz >= 2011k
~~~~~~~^
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
|
<python><pandas>
|
2024-08-05 15:33:10
| 1
| 29,674
|
zjffdu
|
78,835,135
| 12,182,475
|
pyplot's contourf does not apply colors according to levels
|
<p>I am trying to plot some data with pyplot's <code>contourf</code> function. I want to define 9 levels, not linearly spaced. In my understanding each level should then receive its own color given that I use a colormap with 9 colors. I want values below and above the defined levels to be colored in the same color as the lowest and highest level, respectively. Strangely, <code>contourf</code> uses only 5 of the 9 colors, merging neighbouring levels into one. Further, it leaves values outside the level range white. Here comes a minimal working example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
# Define levels
levels = np.array([-4, -2, -1, -0.5, -0.1, 0.1, 0.5, 1, 2, 4])
# Create a custom colormap with 9 colors
cmap = plt.get_cmap('BrBG', len(levels) - 1)
# Generate test data
test = np.arange(-5, 5, 0.1)
test = np.array([test, test])
# Plot with contourf using 'extend' parameter set to 'neither'
p = plt.contourf(test, cmap=cmap, levels=levels, vmin=-5, vmax=5, extend='neither')
plt.colorbar(p)
</code></pre>
<p>the resulting plot looks like this:</p>
<p><a href="https://i.sstatic.net/JG5RZs2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JG5RZs2C.png" alt="enter image description here" /></a></p>
<p>What I would like is this:</p>
<p><a href="https://i.sstatic.net/V0zhQzvt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0zhQzvt.png" alt="enter image description here" /></a></p>
<p>Thank you for you help!</p>
|
<python><matplotlib><colormap><contourf>
|
2024-08-05 15:13:00
| 1
| 415
|
Olmo
|
78,835,070
| 3,906,713
|
Jupyterlab widgets do not work, resulting in "Loading widgets..." text
|
<p>I have <code>Python=3.10.8</code>, <code>jupyterlab=4.2.4</code>, <code>jupyterlab_widgets=3.0.11</code>, <code>ipywidgets=8.1.3</code></p>
<p>Importing basic ipywidgets</p>
<pre><code>import ipywidgets as widgets
slider = widgets.IntSlider(value=5, min=0, max=10, step=1)
slider
</code></pre>
<p>results in <code>Loading widget...</code> text and no widget. I have found <a href="https://github.com/jupyter-widgets/ipywidgets/issues/2623" rel="nofollow noreferrer">this thread</a> by googling and it seems to conclude that</p>
<ol>
<li>The message is not informative</li>
<li>Widgets "do not seem to be supported" on jupyter-lab</li>
</ol>
<p>So, if widgets are not supported, what is the point of <code>jupyterlab-widgets</code> library? It clearly states that it enables support for widgets in jupyter-lab.</p>
<p>There is <a href="https://stackoverflow.com/questions/49542417/how-to-get-ipywidgets-working-in-jupyter-lab">another thread</a> dealing with exactly this issue. However, it has not seen many updates in the last 4 years, and many replies there state that the default solution is outdated. Among others, <a href="https://stackoverflow.com/a/68440326/3906713">this reply</a> states that manually enabling extensions should no longer be necessary, and simply installing the ipywidgets library should be sufficient, which does not seem to work, at least in my case.</p>
<p>Does anybody know the current state of widgets in jupyter-lab?</p>
<p><strong>Edit1:</strong> <a href="https://ipywidgets.readthedocs.io/en/latest/" rel="nofollow noreferrer">Official page</a> provides a <a href="https://ipywidgets.readthedocs.io/en/latest/try/lab/index.html?path=Widget%20List.ipynb" rel="nofollow noreferrer">jupyter-lab embedding</a> where widgets work out of the box when installing ipywidgets. So I must be doing something wrong, but what?</p>
<p><strong>Edit2:</strong> Problem does not seem to be there on a fresh install with just these 4 libraries. Is some other library interfering with ipywidgets?</p>
<p><strong>Edit3:</strong> The problem seems to be gone by itself. I switched to a fresh environment, where it worked. Then I switched back to my original environment, and it started working there too. I guess it was a caching error after all.</p>
|
<python><widget><jupyter-lab>
|
2024-08-05 14:57:03
| 0
| 908
|
Aleksejs Fomins
|
78,835,050
| 2,329,968
|
Multiple elements in set with the same hash
|
<p>I was reading through <a href="https://github.com/sympy/sympy/wiki/release-notes-for-1.13.0" rel="nofollow noreferrer">SymPy 1.13.0 release notes</a> when an entry caught my attention (emphasis mine):</p>
<blockquote>
<p>The hash function for <code>Floats</code> and expressions involving Float now respects the hash invariant that if <code>a == b</code> then <code>hash(a) == hash(b)</code>. This ensures that it is possible to use such expressions in a guaranteed deterministic way in Python's core <code>set</code> and <code>dict</code> data structures. <strong>It is however not recommended to mix up sympy <code>Basic</code> objects with non-sympy number types such as core Python's <code>float</code> or <code>int</code> in the same <code>set/dict</code></strong>.</p>
</blockquote>
<p>I mixed up Python's numbers with SymPy's numbers in a set, and I can't explain what's going on (sympy 1.13.0).</p>
<p>I thought elements of a set all have different hashes. For example, in the following code block there are 4 strings, <code>a, b, c1, c2</code>. They are distinct objects in memory. However, <code>c1</code> is equal to <code>c2</code>, so they have the same hash. Hence, <code>set([a, b, c1, c2])</code> only contains three elements, as expected:</p>
<pre><code>a, b = "a", "b"
c1 = "This is a test"
c2 = "This is a test"
t = [a, b, c1, c2]
print("ids:", ", ".join([str(id(e)) for e in t]))
print("hashes:", ", ".join([str(hash(e)) for e in t]))
s = set(t)
print(s)
# ids: 11791568, 11792624, 135187170594160, 135187170598448
# hashes: -8635426522778860779, 3774980000107278733, 3487163462126586929, 3487163462126586929
# {'This is a test', 'a', 'b'}
</code></pre>
<p>In the following example there are three different objects in memory, <code>n1, n2, n3</code>. They all generate the same hash. I expected the resulting set to contain only one element. Instead, it contains two elements, apparently sharing the same hash.</p>
<pre><code>from sympy import *
n1 = 2.0
n2 = Float(2.0)
n3 = Float(2.0, precision=80)
t = [n1, n2, n3]
print("ids:", ", ".join([str(id(e)) for e in t]))
print("hashes:", ", ".join([str(hash(e)) for e in t]))
s = set(t)
print(s)
# ids: 135913432385776, 135912654307472, 135912654307632
# hashes: 2, 2, 2
# {2.0, 2.0000000000000000000000}
</code></pre>
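<p>To double-check my mental model of the <code>set</code> contract, I wrote this toy class (purely illustrative): unequal objects may share a hash and still coexist in a set, because a shared hash only causes a collision that is then resolved by <code>==</code>:</p>

```python
class SameHash:
    """Toy objects that all hash to 2, like the numbers above."""
    def __init__(self, label):
        self.label = label

    def __hash__(self):
        return 2  # deliberate collision

    def __eq__(self, other):
        return isinstance(other, SameHash) and self.label == other.label

s = {SameHash("x"), SameHash("y"), SameHash("x")}
print(len(s))  # 2 -- identical hashes, but only *equal* objects are merged
```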
<p>What is going on?</p>
|
<python><hash><set><sympy><hash-collision>
|
2024-08-05 14:51:21
| 3
| 13,725
|
Davide_sd
|
78,835,020
| 22,538,132
|
Camera calibration using OpenCV 4.10
|
<p>I have a ChArUco board and I would like to do intrinsic camera calibration using OpenCV 4.10. What I have tried is to mimic this <a href="https://stackoverflow.com/a/74975523/22538132">answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import cv2
import glob
img_folder_path = f"{os.getenv('HOME')}/rgb/"
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
board = cv2.aruco.CharucoBoard(
size=(14, 9),
squareLength=0.04,
markerLength=0.03,
dictionary=aruco_dict)
ch_params = cv2.aruco.CharucoParameters()
detector_params = cv2.aruco.DetectorParameters()
refine_params = cv2.aruco.RefineParameters(minRepDistance=0.05, errorCorrectionRate=0.1, checkAllOrders=True)
ch_detector = cv2.aruco.CharucoDetector(
board=board,charucoParams=ch_params,
detectorParams=detector_params,
# refineParams=refine_params
)
all_ch_corners = []
all_ch_ids = []
image_size = None
images = glob.glob(img_folder_path + '/*.png')
for image_file in images:
image = cv2.imread(image_file)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
charucoCorners, charucoIds, markerCorners, markerIds = ch_detector.detectBoard(gray)
if charucoIds is not None and len(charucoCorners) > 3:
all_ch_corners.append(charucoCorners)
all_ch_ids.append(charucoIds)
result, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
all_ch_corners,
all_ch_ids,
board,
image.shape[:2],
None,
None
)
print("camera_matrix: \n", camera_matrix, "dist_coeffs: \n", dist_coeffs)
</code></pre>
<p>but I get an error:</p>
<pre class="lang-bash prettyprint-override"><code>AttributeError: module 'cv2.aruco' has no attribute 'calibrateCameraCharuco
</code></pre>
<p>Also in the <a href="https://docs.opencv.org/4.10.0/d9/d6a/group__aruco.html#gaa7357017aa9da857b487e447c7b13f11" rel="nofollow noreferrer">documentation</a> I see a note: <code>Aruco markers, module functionality was moved to objdetect module</code></p>
<p>Can you please tell me how can I fix that Error?</p>
|
<python><opencv><camera-calibration><aruco>
|
2024-08-05 14:45:12
| 1
| 304
|
bhomaidan90
|
78,834,627
| 23,260,297
|
Replace an empty value with nan in dataframe
|
<p>I have a dataframe with empty values in some rows like this:</p>
<pre><code>ID Date Price Curr
A Jan 21 (10,0) USD
B Aug 8 (10,0) USD
C Sep 29 (10,0) USD
settle Aug 24 ( ,)
</code></pre>
<p>where the last row has 2 empty values in <code>Price</code> and <code>Curr</code> columns.</p>
<p>How can I either replace the empty values with nan so I can <code>dropna()</code> or drop the rows that contain empty values to get a dataframe like:</p>
<pre><code>ID Date Price Curr
A Jan 21 (10,0) USD
B Aug 8 (10,0) USD
C Sep 29 (10,0) USD
</code></pre>
<p>sample:</p>
<pre><code>data = {
"ID": ["A", "B", "C", "settle"],
"Date": ["Jan 21", "Aug 8", "Sep 29", "Aug 24"],
"Price": [(10,0), (10,0), (10,0), ()],
"Curr": ["USD", "USD", "USD", ""]
}
df = pd.DataFrame(data)
</code></pre>
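<p>For completeness, this is the direction I was attempting on the sample above (a sketch; the empty-tuple handling may need adjusting for the real data):</p>

```python
import numpy as np
import pandas as pd

data = {
    "ID": ["A", "B", "C", "settle"],
    "Date": ["Jan 21", "Aug 8", "Sep 29", "Aug 24"],
    "Price": [(10, 0), (10, 0), (10, 0), ()],
    "Curr": ["USD", "USD", "USD", ""]
}
df = pd.DataFrame(data)

# treat empty strings and empty tuples as missing, then drop those rows
cleaned = df.replace("", np.nan)
cleaned["Price"] = cleaned["Price"].apply(lambda t: t if t else np.nan)
cleaned = cleaned.dropna()
print(cleaned["ID"].tolist())  # ['A', 'B', 'C']
```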
|
<python><pandas>
|
2024-08-05 13:16:15
| 4
| 2,185
|
iBeMeltin
|
78,834,604
| 5,816,253
|
create a query with string variable using python gql and psycopg2
|
<p>Let's say I have the following string to be passed as a query for an API call using Python:</p>
<pre><code>import psycopg2
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport
from datetime import datetime
start_year = str(datetime.now().year -4)
end_year = str(datetime.now().year)
query_str = """
query events{
api_version
public_DB(
filters: {
iso: ["Nation1","Nation2","Nation3"]
from: 2020,
to: 2024,
classif: ["car","bus","train",]
include_hist: true
}
) {
total_available
info {
timestamp
filters
cursor
}
data {
disno
iso
start_year
start_month
start_day
end_year
end_month
end_day
type
subtype
entry_date
last_update
}
}
}
"""
query = gql(query_str)
transport = RequestsHTTPTransport(
url='https://api.open_DB.it/v1',
headers={'Authorization': api_key},
use_json=True,
)
# Create a GraphQL client using the defined transport
client = Client(transport=transport, fetch_schema_from_transport=True)
# Execute the query
response = client.execute(query)
</code></pre>
<p>What I would like to do is pass two string variables (<code>start_year</code>, <code>end_year</code>) for the start year (2020) and the end year (2024), so that I can change them at the beginning of the script.</p>
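<p>One approach I am considering is building the query with an f-string, doubling the braces so the GraphQL braces stay literal (trimmed to the relevant filter part):</p>

```python
from datetime import datetime

start_year = datetime.now().year - 4
end_year = datetime.now().year

# {{ and }} produce literal { and } inside an f-string
query_str = f"""
query events {{
  public_DB(
    filters: {{
      from: {start_year},
      to: {end_year},
      include_hist: true
    }}
  ) {{
    total_available
  }}
}}
"""
print(query_str)
```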
|
<python><gql>
|
2024-08-05 13:10:45
| 0
| 375
|
sylar_80
|
78,834,557
| 5,790,653
|
How to join the last two occurences of each ID in a list of dictionaries into one list
|
<p>I have a list like this:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
    {'id': 'ABC', 'created_at': datetime.datetime(2024, 8, 1, 11, 22, 3)},
    {'id': 'ABC', 'created_at': datetime.datetime(2024, 8, 2, 11, 22, 3)},
    {'id': 'ABC', 'created_at': datetime.datetime(2024, 8, 3, 11, 22, 3)},
    {'id': 'ABC', 'created_at': datetime.datetime(2024, 8, 5, 11, 22, 3)},
    {'id': 'BAC', 'created_at': datetime.datetime(2024, 7, 25, 18, 22, 3)},
    {'id': 'BAC', 'created_at': datetime.datetime(2024, 7, 26, 18, 22, 3)},
    {'id': 'BAC', 'created_at': datetime.datetime(2024, 8, 1, 18, 22, 3)},
    {'id': 'BAC', 'created_at': datetime.datetime(2024, 8, 5, 11, 22, 3)},
    {'id': 'CAB', 'created_at': datetime.datetime(2024, 8, 1, 6, 53, 3)},
    {'id': 'CAB', 'created_at': datetime.datetime(2024, 8, 1, 17, 53, 3)},
    {'id': 'CAB', 'created_at': datetime.datetime(2024, 8, 2, 11, 53, 3)},
    {'id': 'CAB', 'created_at': datetime.datetime(2024, 8, 5, 11, 22, 3)},
]
</code></pre>
<p>The list is sorted by <code>id</code> and <code>created_at</code>, so it's not needed to sort again.</p>
<p>If each <code>id</code> appeared only once, I know I could do this:</p>
<pre class="lang-py prettyprint-override"><code>output = '\n'.join([f"ID: {l['id']}, Created At: {l['created_at']}" for l in list1])
</code></pre>
<p>But I don't know how to produce the following expected output:</p>
<pre class="lang-py prettyprint-override"><code>[
    {'id': 'ABC', 'start': datetime.datetime(2024, 8, 3, 11, 22, 3), 'end': datetime.datetime(2024, 8, 5, 11, 22, 3)},
    {'id': 'BAC', 'start': datetime.datetime(2024, 8, 1, 18, 22, 3), 'end': datetime.datetime(2024, 8, 5, 11, 22, 3)},
    {'id': 'CAB', 'start': datetime.datetime(2024, 8, 2, 11, 53, 3), 'end': datetime.datetime(2024, 8, 5, 11, 22, 3)},
]
</code></pre>
<p>I want to use the second-to-last <code>created_at</code> of each <code>id</code> as <code>start</code>, and the last one as <code>end</code>.</p>
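<p>The closest I have gotten is this sketch with <code>itertools.groupby</code>, shown on a shortened copy of the list (I am not sure it is the idiomatic way):</p>

```python
import datetime
from itertools import groupby

# shortened copy of list1 (same shape as above)
list1 = [
    {'id': 'ABC', 'created_at': datetime.datetime(2024, 8, 3, 11, 22, 3)},
    {'id': 'ABC', 'created_at': datetime.datetime(2024, 8, 5, 11, 22, 3)},
    {'id': 'BAC', 'created_at': datetime.datetime(2024, 8, 1, 18, 22, 3)},
    {'id': 'BAC', 'created_at': datetime.datetime(2024, 8, 5, 11, 22, 3)},
]

# the list is already sorted by id, so consecutive grouping is safe
result = []
for key, group in groupby(list1, key=lambda d: d['id']):
    group = list(group)
    result.append({'id': key,
                   'start': group[-2]['created_at'],  # second-to-last
                   'end': group[-1]['created_at']})   # last
print(result)
```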
|
<python>
|
2024-08-05 13:00:00
| 2
| 4,175
|
Saeed
|
78,834,504
| 10,513,151
|
Spark Code Completion in Visual Studio Code
|
<p>I am trying to use Visual Studio Code for Spark development.</p>
<p>My PySpark code all runs fine, but there is no code completion/hints.</p>
<p>How can I add these features for VS Code?</p>
<p>I have the Python and Pylance extensions installed.</p>
<p>In PyCharm this all works perfectly as expected.</p>
|
<python><apache-spark><visual-studio-code><pyspark><vscode-extensions>
|
2024-08-05 12:47:05
| 2
| 671
|
Bob
|
78,834,503
| 10,517,777
|
How to manage changes in the configuration files of a python project
|
<p>I have the following folder structure in my python project:</p>
<pre><code>main_folder/
|-- python_code_subfolder/
| |-- .git
| |-- .venv
| |-- tests/
| |-- test_code.py
| |-- __init__.py
| |-- code.py
|-- config.json
|-- headers.json
|-- body.json
</code></pre>
<p>The folder "<strong>python_code_subfolder</strong>" contains all my Python code and is already connected to my git repository. The "config.json", "headers.json" and "body.json" are configuration files that manage some values in my code.py.</p>
<p>I did not include these files in "python_code_subfolder" because a change to them is not, strictly speaking, a change to my code, and it would not make sense to push the entire codebase and build it again because of a change in the configuration files. My first thought was to create a separate git repository containing only those configuration files, so that I can push changes to it without building or releasing a new version of my code.</p>
<p>My main question is: based on best practices, where should I place those configuration files, and how should I manage changes that affect only the configuration files? Is it a good idea to create a new repository for only my configuration files?</p>
|
<python><git>
|
2024-08-05 12:46:50
| 0
| 364
|
sergioMoreno
|
78,834,447
| 9,542,989
|
Optimizing Record Parsing from DynamoDB PartiQL Query Execution via Python
|
<p>I am running PartiQL queries against my DynamoDB tables using <code>boto3</code> and I want to be able to parse these records into a tabular format (<code>pandas</code> DataFrame).</p>
<p>This is what my code looks like:</p>
<pre><code>from typing import Dict, List

import boto3
from boto3.dynamodb.types import TypeDeserializer
import pandas as pd


def parse_records(records: List[Dict]) -> List[Dict]:
    """
    Parses the records returned by the PartiQL query execution.

    Args:
        records (List[Dict]): A list of records returned by the PartiQL query execution.

    Returns:
        List[Dict]: A list of dictionaries containing the parsed records.
    """
    deserializer = TypeDeserializer()
    parsed_records = []
    for record in records:
        parsed_records.append({k: deserializer.deserialize(v) for k, v in record.items()})
    return parsed_records


connection = boto3.client(
    'dynamodb',
    **config  # This contains my credentials
)

result = connection.execute_statement(Statement=query)

records = []
if result['Items']:
    # TODO: Can parsing be optimized?
    records.extend(parse_records(result['Items']))

while 'NextToken' in result:
    result = connection.execute_statement(
        Statement=query,
        NextToken=result['NextToken']
    )
    records.extend(parse_records(result['Items']))

df = pd.json_normalize(records)
</code></pre>
<p>Is it possible for me to avoid looping through all of the records in order to parse them? I am looking to optimize this part of the program. I am open to other approaches as well, as long as I am able to execute PartiQL queries.</p>
|
<python><amazon-dynamodb><boto3>
|
2024-08-05 12:28:42
| 1
| 2,115
|
Minura Punchihewa
|
78,834,232
| 1,191,068
|
Floating point precision loss in python numpy calculations
|
<p>I have a very strange situation: I calculate the angle between two vectors in a spherical coordinate system. First I calculate the scalar product of the two vectors. The <code>numpy</code> calculation agrees with a direct calculation to a relative precision of 10^(-16), as it should for 64-bit floating-point numbers. But then I call <code>acos</code>, and the angle computed from <code>numpy</code>'s scalar product agrees with the direct calculation (which is the correct result) only to a relative precision of 10^(-11), i.e. 5 orders of magnitude worse. Any ideas why?</p>
<pre><code>#!/usr/bin/python
import numpy as np
import math
def testPrecision(theta1, phi1, theta2, phi2):
    # We define two vectors as numpy arrays
    t1 = np.array([math.sin(theta1) * math.cos(phi1), math.sin(theta1) * math.sin(phi1), math.cos(theta1)])
    t2 = np.array([math.sin(theta2) * math.cos(phi2), math.sin(theta2) * math.sin(phi2), math.cos(theta2)])
    # scalar product of vectors using numpy
    scalarProd1 = np.einsum('i,i', t1, t2)
    # scalar product of vectors without numpy
    scalarProd2 = math.sin(theta1) * math.cos(phi1) * math.sin(theta2) * math.cos(phi2) + math.sin(theta1) * math.sin(phi1) * math.sin(theta2) * math.sin(phi2) + math.cos(theta1) * math.cos(theta2)
    print("Scalar product:\nnumpy = {:.15e} vs direct = {:.15e}".format(scalarProd1, scalarProd2))
    alpha1 = math.acos(scalarProd1)
    alpha2 = math.acos(scalarProd2)
    print("Angle:\nnumpy = {:.15e} vs direct = {:.15e}".format(alpha1, alpha2))
testPrecision(theta1 = 1.745329251994330e-02, theta2 = 2.154759397163566e-02, phi1 = 1.745329251994329e-01, phi2 = 2.078144368653071e-01)
</code></pre>
<p>The result is</p>
<pre><code>Scalar product:
numpy = 9.999914101231913e-01 vs direct = 9.999914101231911e-01
Angle:
numpy = 4.144849600753233e-03 vs direct = 4.144849600780019e-03
</code></pre>
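<p>A back-of-the-envelope check I did afterwards, which seems consistent with the 5-order gap: the condition of <code>acos</code> near 1 is roughly <code>1/sqrt(1 - x^2)</code>, so tiny input differences get amplified (this is my own guess at an explanation, not a confirmed diagnosis):</p>

```python
import math

x = 9.999914101231913e-01  # the scalar product from above

# |d acos / dx| = 1 / sqrt(1 - x^2): factor by which an input error grows
amplification = 1.0 / math.sqrt(1.0 - x * x)
print(amplification)  # roughly 240

# a ~1e-16 relative input error becomes ~2e-14 absolute in the angle,
# which is ~1e-11 relative to an angle of only ~4e-3
angle_rel_err = amplification * 1e-16 / math.acos(x)
print(angle_rel_err)
```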
|
<python><numpy><precision>
|
2024-08-05 11:35:06
| 0
| 1,109
|
John Smith
|
78,834,127
| 948,655
|
Define a class (e.g. dataclass or Pydantic BaseModel or TypedDict) whose fields are defined by a collection of strings
|
<p>Suppose I have a collection of strings defined as a <code>StrEnum</code>:</p>
<pre class="lang-py prettyprint-override"><code>class ValidNames(StrEnum):
    OLLY = auto()
    ADAM = auto()
    ANNA = auto()
</code></pre>
<p>I want to define a class, ideally with Pydantic (for type verification and conversion) that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class Assignment(pydantic.BaseModel):
    olly: int = 0
    adam: int = 0
    anna: int = 0
</code></pre>
<p>And I want to be able to take external JSON data like this:</p>
<pre class="lang-py prettyprint-override"><code>external_data = {"olly": 14, "anna": 3}
this_assignment = Assignment(**external_data)
</code></pre>
<p><em>However</em>, I don't want to have to <em>explicitly</em> define the fields of <code>Assignment</code>; I want it to be defined by the values of <code>ValidNames</code> because I don't want to change two bits of code every time the members of <code>ValidNames</code> change (or are added or deleted), and for other reasons. Is there a way to do this, with or without Pydantic?</p>
<p>Basically I just want a <code>dict[ValidNames, pydantic.NonNegativeInt]</code> but as a class (with default value 0 for each field) and not a <code>dict</code>. Ideally a Pydantic class.</p>
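<p>Without Pydantic, the closest I have gotten is the standard-library <code>dataclasses.make_dataclass</code> (a sketch: explicit string values stand in for <code>StrEnum</code>/<code>auto()</code> so it also runs on older Pythons, and there is no type validation):</p>

```python
from dataclasses import make_dataclass, field
from enum import Enum

class ValidNames(str, Enum):  # stand-in for the StrEnum above
    OLLY = "olly"
    ADAM = "adam"
    ANNA = "anna"

# one int field per enum member, each defaulting to 0
Assignment = make_dataclass(
    "Assignment",
    [(name.value, int, field(default=0)) for name in ValidNames],
)

external_data = {"olly": 14, "anna": 3}
this_assignment = Assignment(**external_data)
print(this_assignment)  # Assignment(olly=14, adam=0, anna=3)
```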
|
<python><pydantic>
|
2024-08-05 11:07:44
| 2
| 8,813
|
Ray
|
78,834,040
| 12,285,101
|
"nbdev_export Fails When Notebook is Nested Inside a Subfolder within the nbs Folder"
|
<p>I have a repository where I’ve installed nbdev. Previously, on Linux, I was able to create subfolders inside the nbs directory and successfully run nbdev_export. However, now that I'm using Windows, I encounter an error when trying to use nbdev_export with notebooks inside subfolders. The error message is:</p>
<blockquote>
<p>File
"C:\Users\Leni\AppData\Local\Programs\Python\Python312\Lib\json\decoder.py",
line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char
0)</p>
</blockquote>
<p>I’ve found that moving the notebook out of the subfolder, running nbdev_export, and then moving the notebook back resolves the issue temporarily. However, I don’t fully understand why this happens. How can I properly export notebooks that are located in subfolders within the nbs directory?</p>
|
<python><jupyter-notebook><nbdev>
|
2024-08-05 10:42:12
| 0
| 1,592
|
Reut
|
78,834,030
| 1,485,926
|
VannaAI (with Ollama and ChromaDB) sample program fails at training model step
|
<p>I'm starting to test VannaAI, and I'm running a sample program based on <a href="https://github.com/vanna-ai/notebooks/blob/main/postgres-ollama-chromadb.ipynb" rel="nofollow noreferrer">Generating SQL for Postgres using Ollama, ChromaDB</a>:</p>
<pre><code>from vanna.ollama import Ollama
from vanna.chromadb import ChromaDB_VectorStore
class MyVanna(ChromaDB_VectorStore, Ollama):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        Ollama.__init__(self, config=config)
vn = MyVanna(config={'model': 'mistral'})
vn.connect_to_postgres(host='<ofuscated>', dbname='<ofuscated>', user='<ofuscated>', password='<ofuscated>', port='<ofuscated>')
# The information schema query may need some tweaking depending on your database. This is a good starting point.
df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")
# This will break up the information schema into bite-sized chunks that can be referenced by the LLM
plan = vn.get_training_plan_generic(df_information_schema)
print(plan)
# If you like the plan, then uncomment this and run it to train
print("Training starts")
vn.train(plan=plan)
print("Training ends")
</code></pre>
<p>When I run it I get:</p>
<pre class="lang-none prettyprint-override"><code>...
Training starts
C:\Users\bodoque\.cache\chroma\onnx_models\all-MiniLM-L6-v2\onnx.tar.gz: 100%|██████████| 79.3M/79.3M [00:05<00:00, 16.1MiB/s]
Add of existing embedding ID: 9064de8e-3c0c-4f3b-a02b-215dff373009-doc
Process finished with exit code -1073741819 (0xC0000005)
</code></pre>
<p>The <code>print("Training ends")</code> is never reached, so I understand <code>vn.train(plan=plan)</code> breaks.</p>
|
<python><chromadb><ollama>
|
2024-08-05 10:40:24
| 1
| 12,442
|
fgalan
|
78,833,796
| 2,109,064
|
Reload module that is imported into __init__.py
|
<p><strong>What works:</strong> I have a package <code>version_info</code> in which I define a string <code>version_string</code>. When I increment <code>version_info.version_string</code>, the main code prints out the incremented value after the <code>reload</code>.</p>
<p><strong>What doesn't work:</strong> When I increment the value in <code>sub.py</code>, it is not updated upon <code>reload</code>.</p>
<p>I suspect that the <code>importlib.reload</code> doesn't go across the <code>from .sub import version_info_sub</code> statement.</p>
<p>How can I achieve that also the value from <code>sub.py</code> gets reloaded?</p>
<p>Main code:</p>
<pre><code>from importlib import reload
import version_info
...
reload(version_info)
print(version_info.version_string) # successfully updated
print(version_info.version_info_sub) # stays on the old value
</code></pre>
<p>version_info/__init__.py:</p>
<pre><code>from .sub import version_info_sub
version_string = "6"
</code></pre>
<p>version_info/sub.py:</p>
<pre><code>version_info_sub = "6"
</code></pre>
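<p>To make the question reproducible end-to-end, here is a self-contained sketch that builds the package in a temp directory and then tries the workaround I am wondering about: reloading the submodule first, then the package:</p>

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale .pyc complications
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "version_info")
os.makedirs(pkg)

def write(version):
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write(f'from .sub import version_info_sub\nversion_string = "{version}"\n')
    with open(os.path.join(pkg, "sub.py"), "w") as f:
        f.write(f'version_info_sub = "{version}"\n')

write("6")
sys.path.insert(0, tmp)
import version_info

write("7")
importlib.reload(version_info.sub)  # re-executes sub.py
importlib.reload(version_info)      # `from .sub import ...` now picks up the new value
print(version_info.version_string, version_info.version_info_sub)  # 7 7
```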
|
<python>
|
2024-08-05 09:43:00
| 2
| 7,879
|
Michael
|
78,833,765
| 7,173,479
|
No matching distribution found for types-pkg-resources
|
<p>During a Python pre-commit run with the following steps:</p>
<pre><code>...
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.4.1
hooks:
- id: mypy
additional_dependencies: [types-all]
...
</code></pre>
<p>The following error is presented:</p>
<pre><code>Run poetry run pre-commit run --all-files
...
An unexpected error has occurred: CalledProcessError: command: ('/home/runner/.cache/pre-commit/repot9s2e7uv/py_env-python3.9/bin/python', '-mpip', 'install', '.', 'types-all')
return code: 1
expected return code: 0
stdout:
...
stderr:
ERROR: Could not find a version that satisfies the requirement types-pkg-resources (from types-all) (from versions: 0.1.0, 0.1.1, 0.1.2, 0.1.3)
ERROR: No matching distribution found for types-pkg-resources
</code></pre>
<p>This is during a GitHub action that has been working regularly until now.</p>
|
<python><github-actions><pre-commit-hook>
|
2024-08-05 09:34:01
| 1
| 869
|
George
|
78,833,741
| 7,577,930
|
In-place operation causes a memory leak. Why?
|
<p>I have the following piece of code:</p>
<pre class="lang-python prettyprint-override"><code>eigvecs = torch.randn(n, b, dtype=eigval_approximations.dtype, device=eigval_approximations.device)
eigvecs /= torch.linalg.norm(eigvecs, dim=0)
for _ in range(iterations):
    eigvecs = torch.linalg.solve(mats - eigval_approximations * identity, eigvecs.T).T
    eigvecs /= torch.linalg.norm(eigvecs, dim=0)
</code></pre>
<p>It is part of loss calculation in PyTorch, and it causes GPU OOM <strong>after a few epochs</strong>.</p>
<p><code>eigvecs</code> is re-initialized at every training/validation step, and it is of the same size at each step. It exists in a very short-lived scope.</p>
<p>Changing this piece of code such that it avoids in-place division (<code>/=</code>) eliminates the memory leak.</p>
<p>Why was there a memory leak due to an in-place operation in the first place? I don’t see any reason for this.</p>
<p>Is it expected in Python, or might it be an implementation detail of or a bug in PyTorch?</p>
|
<python><pytorch><memory-leaks><out-of-memory><in-place>
|
2024-08-05 09:28:41
| 0
| 346
|
Moon
|
78,833,640
| 241,515
|
Pandas: group and sort entire dataframe basing on the value of two columns, preserving within-group order
|
<p>I have a pandas DataFrame like this:</p>
<pre><code>sample closest_signature distance patient cluster correlation biopsy similarity n_biopsy
0 21506-A HU.1 0.416795 21506 HU.1 0.994611 A 2.399261 2.0
1 21506-B HU.1 0.340269 21506 HU.1 0.993852 B 2.938855 2.0
2 21507-A HU.3 0.181289 21507 HU.3 0.868993 A 5.516052 3.0
3 21507-B HU.3 0.128398 21507 HU.3 0.972968 B 7.788282 3.0
4 21507-C HU.3 0.117186 21507 HU.3 0.949540 C 8.533432 3.0
5 21521-A HU.2 0.111720 21521 HU.2 0.956889 A 8.950942 2.0
6 21521-B HU.2 0.116082 21521 HU.2 0.974804 B 8.614610 2.0
7 21531-A HU.4 0.251558 21531 ND 0.560867 A 3.975227 2.0
8 21531-B HU.2 0.197108 21531 HU.2 0.890214 B 5.073356 2.0
9 21543-A HU.2 0.184331 21543 HU.2 0.973885 A 5.425033 6.0
10 21543-B HU.2 0.151444 21543 HU.2 0.990204 B 6.603119 6.0
11 21543-C HU.2 0.156038 21543 HU.2 0.989900 C 6.408698 6.0
12 21543-D HU.2 0.196920 21543 HU.2 0.939929 D 5.078191 6.0
13 21543-E HU.1 0.234311 21543 HU.1 0.980673 E 4.267841 6.0
14 21543-F HU.2 0.152050 21543 HU.2 0.989276 F 6.576796 6.0
</code></pre>
<p>My goal is sorting <em>the entire dataframe</em>, like this:</p>
<ul>
<li>Records should be ordered by <code>closest_signature</code> as defined for the <code>sample</code> ending with <strong>A</strong> (in other words, the <code>closest_signature</code> value to use as key is the one of the <code>-A</code> sample for each group, for example if 21543-A is <code>HU.2</code>, then it's <code>HU.2</code> even if other samples in the same group may be different)</li>
<li>Patient groups should be sorted by the <em>highest</em> <code>n_biopsy</code> but <strong>preserving order</strong> in a group (i.e. samples should be ordered from A to Z)</li>
</ul>
<p>Expected result:</p>
<pre><code>sample closest_signature distance patient cluster correlation biopsy similarity n_biopsy
0 21506-A HU.1 0.416795 21506 HU.1 0.994611 A 2.399261 2.0
1 21506-B HU.1 0.340269 21506 HU.1 0.993852 B 2.938855 2.0
9 21543-A HU.2 0.184331 21543 HU.2 0.973885 A 5.425033 6.0
10 21543-B HU.2 0.151444 21543 HU.2 0.990204 B 6.603119 6.0
11 21543-C HU.2 0.156038 21543 HU.2 0.989900 C 6.408698 6.0
12 21543-D HU.2 0.196920 21543 HU.2 0.939929 D 5.078191 6.0
13 21543-E HU.1 0.234311 21543 HU.1 0.980673 E 4.267841 6.0
14 21543-F HU.2 0.152050 21543 HU.2 0.989276 F 6.576796 6.0
5 21521-A HU.2 0.111720 21521 HU.2 0.956889 A 8.950942 2.0
6 21521-B HU.2 0.116082 21521 HU.2 0.974804 B 8.614610 2.0
2 21507-A HU.3 0.181289 21507 HU.3 0.868993 A 5.516052 3.0
3 21507-B HU.3 0.128398 21507 HU.3 0.972968 B 7.788282 3.0
4 21507-C HU.3 0.117186 21507 HU.3 0.949540 C 8.533432 3.0
7 21531-A HU.4 0.251558 21531 ND 0.560867 A 3.975227 2.0
8 21531-B HU.2 0.197108 21531 HU.2 0.890214 B 5.073356 2.0
</code></pre>
<p>If I don't include <code>closest_signature</code> the approach is very straightforward:</p>
<pre><code>
df = df.sort_values(["n_biopsy", "patient" ], ascending=[False, True])
df.head(10)
sample closest_signature distance patient cluster correlation biopsy similarity n_biopsy
9 21543-A HU.2 0.184331 21543 HU.2 0.973885 A 5.425033 6
10 21543-B HU.2 0.151444 21543 HU.2 0.990204 B 6.603119 6
11 21543-C HU.2 0.156038 21543 HU.2 0.989900 C 6.408698 6
12 21543-D HU.2 0.196920 21543 HU.2 0.939929 D 5.078191 6
13 21543-E HU.1 0.234311 21543 HU.1 0.980673 E 4.267841 6
14 21543-F HU.2 0.152050 21543 HU.2 0.989276 F 6.576796 6
32 21564-A HU.3 0.121428 21564 HU.3 0.975599 A 8.235334 6
33 21564-B HU.3 0.114477 21564 HU.3 0.978366 B 8.735386 6
34 21564-C HU.3 0.149845 21564 HU.3 0.983692 C 6.673560 6
35 21564-D HU.3 0.139047 21564 HU.3 0.949370 D 7.191837 6
</code></pre>
<p>However this of course starts with <code>HU.2</code> instead of <code>HU.1</code>.</p>
<p>I've tried making a composite key, mixing <code>closest_signature</code> from the first A record of each group with the number of biopsies (e.g. <code>6_HU.1</code>), but again I run into a chicken-and-egg problem because <code>closest_signature</code> needs to be sorted in ascending order, and the number of biopsies in descending order:</p>
<pre><code>for gid, group in df.groupby("patient"):
    num_biopsy = group["n_biopsy"].unique().item()
    cluster = group.loc[group.index[0], "closest_signature"]
    value = f"{num_biopsy}_{cluster}"
    df.loc[group.index, "biopsy_cluster"] = value
df.sort_values(["biopsy_cluster", "patient"], ascending=[False, True])
sample closest_signature distance patient cluster correlation biopsy similarity n_biopsy biopsy_cluster
32 21564-A HU.3 0.121428 21564 HU.3 0.975599 A 8.235334 6 6_HU.3
33 21564-B HU.3 0.114477 21564 HU.3 0.978366 B 8.735386 6 6_HU.3
34 21564-C HU.3 0.149845 21564 HU.3 0.983692 C 6.673560 6 6_HU.3
35 21564-D HU.3 0.139047 21564 HU.3 0.949370 D 7.191837 6 6_HU.3
36 21564-E HU.3 0.125473 21564 HU.3 0.969198 E 7.969847 6 6_HU.3
</code></pre>
<p>Notice how the first is <code>HU.3</code>. All other approaches I've tried lose the ordering within the groups, which is essential to keep.</p>
<p>Is what I'm trying to do doable?</p>
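<p>A sketch of the transform-based key I was about to try, shown on a hypothetical miniature of the dataframe (it assumes the '-A' sample is the first row of each patient group, and negates <code>n_biopsy</code> so a single ascending sort handles both directions):</p>

```python
import pandas as pd

# hypothetical miniature of the real dataframe
df = pd.DataFrame({
    "sample":            ["21506-A", "21506-B", "21507-A", "21521-A", "21543-A", "21543-B"],
    "closest_signature": ["HU.1",    "HU.1",    "HU.3",    "HU.2",    "HU.2",    "HU.2"],
    "patient":           ["21506",   "21506",   "21507",   "21521",   "21543",   "21543"],
    "n_biopsy":          [2,         2,         3,         2,         6,         6],
})

# broadcast the '-A' row's signature and the group's n_biopsy to every row
sig_a = df.groupby("patient")["closest_signature"].transform("first")
neg_n = -df.groupby("patient")["n_biopsy"].transform("max")

out = (df.assign(_sig=sig_a, _neg_n=neg_n)
         .sort_values(["_sig", "_neg_n", "patient", "sample"])
         .drop(columns=["_sig", "_neg_n"]))
print(out["sample"].tolist())
```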
|
<python><pandas><dataframe>
|
2024-08-05 09:04:57
| 1
| 4,973
|
Einar
|
78,833,469
| 4,041,117
|
Pyplot animation not working properly when plt.pause() is used
|
<p>I'm trying to create a pyplot simulation using plt.pause() but I can't make even the simplest example work. This is the <a href="https://matplotlib.org/stable/gallery/animation/animation_demo.html" rel="nofollow noreferrer">code</a>:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
np.random.seed(19680801)
data = np.random.random((50, 50, 50))
fig, ax = plt.subplots()
for i, img in enumerate(data):
    ax.clear()
    ax.imshow(img)
    ax.set_title(f"frame {i}")
    plt.pause(0.1)
</code></pre>
<p>The issue seems to have something to do with the last line of code (plt.pause(0.1)). Without this line the final output shows the final frame of the simulation—frame 49 (indicating the whole loop has finished). If I include the final line and run the simulation, the output stops at the first frame—frame 0 (and the simulation doesn't progress to the next step in the loop). I've also tried plt.pause(0.0), but that had exactly the same effect as setting it to any other number.</p>
<p>I'm on mac, using python 3.12 under jupyter notebook 7.2.1. I would appreciate any advice. Thank you.</p>
|
<python><matplotlib><pause>
|
2024-08-05 08:24:26
| 1
| 481
|
carpediem
|
78,833,013
| 7,787,648
|
Could not interpret value ` ` for parameter `hue`
|
<p>I am trying to color a scatterplot by a categorical column. Here is some sample data; the column I want to color the scatterplot by is 'cat'.</p>
<pre><code>data = {
'x': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'y': [2, 3, 5, 7, 11, 13, 17, 19, 23, 29],
'z': [1, 2, 2, 3, 3, 4, 4, 5, 6, 6],
'cat': ['A', 'A', 'B', 'B', 'A', 'A', 'B', 'B', 'A', 'A']
}
pandas_df = pd.DataFrame(data)
pyspark_df = spark.createDataFrame(pandas_df)
</code></pre>
<p>I created the following function to test the output. If I remove "hue" from the parameters, everything works fine, but I cannot seem to get it working correctly with 'hue'.</p>
<pre><code>def facet_plot(df, x, y, color, facet_col, bins=None):
    pd_df = df.toPandas()
    if bins is not None:
        # check col type
        if pd_df[facet_col].dtype.name in ['float64', 'int64']:
            # bin the facet column
            pd_df['facet_col_binned'] = pd.cut(pd_df[facet_col], bins=bins)
            # convert intervals to midpoints
            pd_df['facet_col_binned'] = pd_df['facet_col_binned'].apply(lambda interval: round(interval.mid, 1) if pd.notna(interval) else None)
            pd_df['facet_col_binned'] = pd.Categorical(pd_df['facet_col_binned'])
            # assigning x as 'x_binned' for remaining code
            facet_col = 'facet_col_binned'
    pd_df[color] = pd_df[color].astype(str)
    g = sns.FacetGrid(pd_df, col=facet_col, col_wrap=4, height=5, aspect=2)
    g.map(sns.scatterplot, x, y, hue=color)
    # if row => then change to row_template = '{row_name}'
    g.set_titles(col_template='{col_name}')
    g.set_axis_labels(x, y)
    plt.show()

facet_plot(pyspark_df, 'x', 'y', color='cat', facet_col='cat', bins=2)
</code></pre>
|
<python><pandas><seaborn>
|
2024-08-05 06:22:46
| 1
| 313
|
Jst2Wond3r
|
78,832,646
| 2,707,864
|
Abaqus scripting: Create a Set from a Face object
|
<p>Using Abaqus v6.14 scripting, I created an extruded Part <code>p</code> with <code>BaseSolidExtrude</code>, which is then a cylinder (not necessarily circular). I want to create three sets: one for the top surface, one for the bottom surface, and one for the lateral surface.
The first two are easy, with</p>
<pre><code>faces = p.faces
face_bot = faces.findAt((p_center_bot,))
face_top = faces.findAt((p_center_top,))
p.Set(faces=face_bot, name='Set-face-bot')
p.Set(faces=face_top, name='Set-face-top')
</code></pre>
<p>But I couldn't find a way to create a <code>Set</code> for the lateral face, even though, for the time being, I have the advantage that these are the only 3 faces.
I tried extracting the indexes of <code>face_bot</code>, <code>face_top</code> with</p>
<pre><code>face_bot_idx = face_bot[0].index
face_top_idx = face_top[0].index
</code></pre>
<p>(note that <code>face_bot</code> and <code>face_top</code> are of type <code>Sequence</code>, having only one element each, so I extract element 0 in each case) and excluding them from the list of indexes of the 3 faces, so I get the index of my <code>face_lat_0</code> with</p>
<pre><code># Of the 3 faces, get the one not being bottom or top
face_lat_idx = [idx for idx in range(len(faces)) if ((idx!=face_bot_idx) and (idx!=face_top_idx))][0]
</code></pre>
<p>But given that <code>Face</code> object, I was unable to create the needed <code>Sequence</code> <code>face_lat</code>.
Note that the named argument <code>faces</code> requires a <code>Sequence</code>, and it seems not to accept a <code>tuple</code> like <code>(face_lat_0,)</code> or <code>list</code> like <code>[face_lat_0]</code>.</p>
<p>If I am correct that no <code>list</code> or <code>tuple</code> can be used, I need to create a <code>Sequence</code>.
I found method <code>getSequenceFromMask(...)</code>, but I don't know how to generate the mask for <code>face_lat_0</code>, given its index.</p>
<p>Alternatively, I could try using named argument <code>xFaces</code>, but I guess I would need a <code>Sequence</code> containing <code>face_bot[0]</code> and <code>face_top[0]</code> to be excluded, which again I don't know how to build.</p>
<p>Any ideas?</p>
<p><strong>Possibly related</strong> (although I didn't find the answer there):</p>
<ol>
<li><a href="https://stackoverflow.com/questions/47293330/abaqus-surface-getsequencefrommask">Abaqus Surface getSequenceFromMask</a></li>
<li><a href="https://stackoverflow.com/questions/49661543/abaqus-get-face-object">Abaqus get face object</a></li>
<li><a href="https://stackoverflow.com/questions/37048689/abaqus-script-to-select-elements-on-a-surface">Abaqus: script to select elements on a surface</a></li>
<li><a href="https://stackoverflow.com/questions/39470645/how-to-select-part-surface-in-abaqus-using-python-script">how to select part.surface in abaqus using python script</a></li>
<li><a href="https://imechanica.org/node/15852" rel="nofollow noreferrer">https://imechanica.org/node/15852</a></li>
</ol>
|
<python><scripting><mask><abaqus>
|
2024-08-05 03:36:12
| 2
| 15,820
|
sancho.s ReinstateMonicaCellio
|
78,832,496
| 4,710,409
|
Django- how to convert <class 'django.core.files.uploadedfile.InMemoryUploadedFile'> to an image?
|
<p>I have a form that uploads an image to my view; the uploaded image type is:</p>
<pre><code><class 'django.core.files.uploadedfile.InMemoryUploadedFile'>
</code></pre>
<p>I want to process the image, though, so I need it in PIL or CV2 format.</p>
<p>How can I do that?</p>
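<p>For reference, this is the shape of conversion I have in mind. It is only a sketch: <code>uploaded_file</code> stands in for the <code>InMemoryUploadedFile</code>, which I believe is file-like, and the demo fakes one with <code>BytesIO</code>:</p>

```python
import io
from PIL import Image

def uploaded_file_to_pil(uploaded_file):
    # An InMemoryUploadedFile is file-like, so PIL should be able to
    # read it; going through BytesIO makes the bytes handling explicit.
    return Image.open(io.BytesIO(uploaded_file.read()))

# Demo with a fake "upload": any file-like object holding image bytes.
buf = io.BytesIO()
Image.new("RGB", (2, 2), "red").save(buf, format="PNG")
buf.seek(0)
img = uploaded_file_to_pil(buf)
print(img.size)  # (2, 2)
```

<p>For CV2 I assume the equivalent would be decoding the same bytes with <code>cv2.imdecode</code> on a numpy buffer, but I have not verified that.</p>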
|
<python><django><forms><opencv><python-imaging-library>
|
2024-08-05 01:57:53
| 1
| 575
|
Mohammed Baashar
|
78,832,340
| 7,846,884
|
get an item of the output after applying str.split to a polars dataframe column
|
<p>How can I select the last item of the list in the <code>paths</code> column after applying the <code>str.split("/")</code> function?</p>
<pre><code>dataNpaths = pl.scan_csv("test_data/file*.csv", has_header=True, include_file_paths = "paths").collect()
dataNpaths.with_columns(pl.col("paths").str.split("/").alias("paths"))
</code></pre>
<pre><code>>>> dataNpaths.with_columns(pl.col("paths").str.split("/").alias("paths"))
shape: (30, 5)
┌──────────┬──────────┬──────────┬──────────┬────────────────────────────┐
│ Column1 ┆ Column2 ┆ Column3 ┆ Column4 ┆ paths │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ f64 ┆ list[str] │
╞══════════╪══════════╪══════════╪══════════╪════════════════════════════╡
│ 0.603847 ┆ 0.509877 ┆ 0.091579 ┆ 0.43821 ┆ ["test_data", "file1.csv"] │
│ 0.572299 ┆ 0.817647 ┆ 0.087951 ┆ 0.397217 ┆ ["test_data", "file1.csv"] │
│ 0.886123 ┆ 0.159805 ┆ 0.766246 ┆ 0.083915 ┆ ["test_data", "file1.csv"] │
│ 0.142208 ┆ 0.413847 ┆ 0.043408 ┆ 0.147779 ┆ ["test_data", "file1.csv"] │
│ 0.105215 ┆ 0.924754 ┆ 0.309823 ┆ 0.724407 ┆ ["test_data", "file1.csv"] │
│ … ┆ … ┆ … ┆ … ┆ … │
│ 0.381675 ┆ 0.849887 ┆ 0.498281 ┆ 0.733085 ┆ ["test_data", "file3.csv"] │
│ 0.697427 ┆ 0.950464 ┆ 0.999596 ┆ 0.645253 ┆ ["test_data", "file3.csv"] │
│ 0.49979 ┆ 0.172414 ┆ 0.679287 ┆ 0.091804 ┆ ["test_data", "file3.csv"] │
│ 0.668585 ┆ 0.640259 ┆ 0.932463 ┆ 0.579558 ┆ ["test_data", "file3.csv"] │
│ 0.077462 ┆ 0.802565 ┆ 0.966791 ┆ 0.29297 ┆ ["test_data", "file3.csv"] │
</code></pre>
<p>but neither of these approaches worked</p>
<pre><code>dataNpaths.with_columns(pl.col("paths").str.split("/")[-1].alias("paths"))
dataNpaths.with_columns(pl.col("paths").str.split("/",-1).alias("paths"))
</code></pre>
|
<python><string><python-polars>
|
2024-08-04 23:29:27
| 1
| 473
|
sahuno
|
78,832,077
| 875,317
|
Why is my string considered an Invalid Argument in Python?
|
<p>I am brand new to Python.</p>
<p>I have this as the beginnings of my code which is to read and parse a google doc:</p>
<pre><code>from pathlib import Path
def retrieve_parse_and_print_doc(docURL):
path = Path(docURL)
contents = path.read_text()
lines = contents.splitlines()
for line in lines:
print(line)
retrieve_parse_and_print_doc('https://docs.google.com/document/...[elided]/pub')
</code></pre>
<p>...but it gives me "OSError: [Errno 22] Invalid argument"; I'm running it in VS Code on Windows.</p>
<h2>UPDATE</h2>
<p>Based on an answer by Jayson Reis <a href="https://stackoverflow.com/questions/12842341/download-google-docs-public-spreadsheet-to-csv-with-python">here</a>, I changed the code to this:</p>
<pre><code>import requests
def retrieve_parse_and_print_doc(docURL):
response = requests.get(docURL)
assert response.status_code == 200, 'Wrong status code'
lines = response.content.splitlines()
for line in lines:
print(line)
retrieve_parse_and_print_doc('https://docs.google.com/document/... [elided]/pub')
</code></pre>
<p>...and it sort of works; at least, it returns me a bunch of the page source, as you can see below, but what I want is the values in the table on the page, which seem to be totally obscured amidst all that "gobbledygook" as seen in this image:</p>
<p><a href="https://i.sstatic.net/TpgKxHmJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpgKxHmJ.png" alt="enter image description here" /></a></p>
<p>How can I parse out the data contained in the table's cells?</p>
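<p>To show what I mean by the table's cells, here is a minimal sketch (standard library only, and assuming the cells are ordinary <code>td</code> elements in the published page) of the kind of extraction I am after:</p>

```python
from html.parser import HTMLParser

class TableCellExtractor(HTMLParser):
    """Collect the text content of every td cell in the page."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True
            self.cells.append("")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.cells[-1] += data

parser = TableCellExtractor()
parser.feed("<table><tr><td>x</td><td>0</td></tr></table>")
print(parser.cells)  # ['x', '0']
```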
|
<python><windows><pathlib>
|
2024-08-04 20:39:58
| 1
| 10,717
|
B. Clay Shannon-B. Crow Raven
|
78,832,062
| 8,275,139
|
Least privilege principle to create files in a bucket
|
<p>I'm trying to grant a service account a limited set of permissions and allow it to create files on a specific bucket. Instead of creating a custom role, I thought of using the <code>roles/storage.objectCreator</code> predefined one on the bucket.</p>
<p>After that, I run the following lines of code by impersonating the SA:</p>
<pre class="lang-py prettyprint-override"><code>storage_client = storage.Client("project")
bucket_name = "bucket"
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob("test.txt")
blob.upload_from_string("Hello World")
</code></pre>
<p>However, when I execute the workflow I receive a permission denied error saying:</p>
<blockquote>
<p>sa-name@project.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist)</p>
</blockquote>
<p>I was expecting this to be enough, but it seems it isn't. I know I can create a custom role and define the set of permissions I want to give it. However, is it possible to upload the file while keeping the scope limited to the <code>objectCreator</code> role? Maybe by editing the python code and not instantiating the bucket object?</p>
|
<python><google-cloud-storage>
|
2024-08-04 20:29:31
| 1
| 1,077
|
Luiscri
|
78,831,561
| 14,684,366
|
Printing subprocess stdout lines output
|
<p>I created a simple Python function:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
from io import TextIOWrapper
def run_shell_command(command: list, debug: bool = False):
'''
Run shell command
:param command: Shell command
:param debug: Debug mode
:return: Result code and message
'''
try:
process = subprocess.run(
command, check=True, text=True, timeout=5,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if debug:
for line in TextIOWrapper(process.stdout, encoding='utf-8'):
print(line)
message = 'Shell command executed successfully'
return ({'code': 200, 'msg': message, 'stdout': process.stdout})
except subprocess.CalledProcessError as e:
return ({'code': 500, 'msg': e.output})
if __name__ == "__main__":
command = run_shell_command(['ls', '-lah'], True)
print(command)
</code></pre>
<p>When I run it in debug mode, I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/tmp/command.py", line 28, in <module>
command = run_shell_command(['ls', '-lah'], True)
File "/tmp/command.py", line 19, in run_shell_command
for line in TextIOWrapper(process.stdout, encoding="utf-8"):
AttributeError: 'str' object has no attribute 'readable'
</code></pre>
<p>Running Python 3.9 on a Linux server, I was wondering if you can provide some insight into where the issue might be. With debug disabled, I get a proper text output. Thank you for your help.</p>
<p>Edit: Based on the comments below, the fix is quite simple:</p>
<pre><code> if debug:
print(process.stdout.rstrip())
</code></pre>
<p>However, IMO the accepted solution is better than the original code.</p>
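<p>For completeness, here is a self-contained version of the fixed debug path (using <code>sys.executable</code> so the child process is portable):</p>

```python
import subprocess
import sys

# With text=True, subprocess.run() captures stdout as a plain str, so
# wrapping it in TextIOWrapper is unnecessary (and fails, since a str
# has no .readable()); splitlines() is enough to iterate the lines.
process = subprocess.run(
    [sys.executable, "-c", "print('line one'); print('line two')"],
    check=True, text=True,
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
)
for line in process.stdout.splitlines():
    print(line)
```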
|
<python><python-3.x><subprocess>
|
2024-08-04 16:16:04
| 1
| 591
|
Floren
|
78,831,547
| 22,987,224
|
Python - ModuleNotFoundError: No module named 'swig'
|
<p>When trying to run the gymnasium package under Windows 11, Visual Studio Code, I get the following errors</p>
<pre><code>PS C:\Users\[...]\Code\Gymnasium> & c:/Users/Thomas.koeppen/Code/Gymnasium/.venv/Scripts/python.exe c:/Users/Thomas.koeppen/Code/Gymnasium/main.py
Traceback (most recent call last):
File "C:\Users\[...]\Code\Gymnasium\.venv\Lib\site-packages\gymnasium\envs\box2d\bipedal_walker.py", line 15, in <module>
import Box2D
ModuleNotFoundError: No module named 'Box2D'
[...]
File "C:\Users\[...]\Code\Gymnasium\.venv\Lib\site-packages\gymnasium\envs\box2d\bipedal_walker.py", line 25, in <module>
raise DependencyNotInstalled(
gymnasium.error.DependencyNotInstalled: Box2D is not installed, run `pip install gymnasium[box2d]`
</code></pre>
<p>When trying to install any of the box2d modules, I again get an error message</p>
<pre><code>PS C:\Users\[...]\Code\Gymnasium> pip install gymnasium[box2d]
[...]
Building wheel for box2d-py (pyproject.toml) ... error
ModuleNotFoundError: No module named 'swig'
[...]
error: command 'C:\\Users\\[...]\\Code\\Gymnasium\\.venv\\Scripts\\swig.exe' failed with exit code 1
</code></pre>
<p>The swig package is missing, even though I have installed it with pip install swig:</p>
<pre><code>PS C:\Users\[...]\Code\Gymnasium> pip3 install swig
Requirement already satisfied: swig in c:\users\[...]\code\gymnasium\.venv\lib\site-packages (4.2.1)
</code></pre>
<p>This error has kept me busy for quite some time, so I would like to document the solution for any motivated peers encountering it in the future.</p>
|
<python><swig><gymnasium>
|
2024-08-04 16:10:32
| 1
| 355
|
Mercutio1243
|
78,831,535
| 15,547,292
|
python metaclasses: editing the namespace after `__set_name__` methods have been called?
|
<p>Suppose we are defining a class with a metaclass.
In the class body, objects are assigned that implement <code>__set_name__</code> to register themselves in a data structure of the class.</p>
<p>Is it possible to edit the namespace <em>after</em> <code>__set_name__</code> methods have been run? Like, detach the filled data structure, split it in two, and add the parts under new attributes?</p>
<p>The trouble is that, before calling <code>super().__new__(...)</code> in the metaclass, the data structure is still empty. Afterwards it is filled, but then the class is already created, and its namespace is now a read-only copy of the namespace built up in the metaclass.</p>
<p>Is there any chance of editing the namespace, with <code>__set_name__</code> methods applied, like via a hook just before it is frozen into the new class?</p>
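<p>To make the timing concrete, here is a minimal sketch of the situation (all names are made up). The only workaround I see is mutating the class with <code>setattr</code> after <code>super().__new__</code> returns, which works even though <code>cls.__dict__</code> is a read-only mappingproxy; my question is whether there is a cleaner hook than this:</p>

```python
class Field:
    """Registers itself on the owning class via __set_name__."""
    def __set_name__(self, owner, name):
        owner._fields.append(name)

class SplitMeta(type):
    def __new__(mcls, name, bases, namespace):
        namespace.setdefault("_fields", [])
        cls = super().__new__(mcls, name, bases, namespace)
        # Here every __set_name__ has already run and _fields is filled.
        # cls.__dict__ is a read-only mappingproxy, but setattr on the
        # class itself still works, so the structure can be split now.
        fields = cls._fields
        cls.even_fields = fields[::2]
        cls.odd_fields = fields[1::2]
        return cls

class Demo(metaclass=SplitMeta):
    a = Field()
    b = Field()
    c = Field()

print(Demo.even_fields, Demo.odd_fields)  # ['a', 'c'] ['b']
```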
|
<python><metaclass>
|
2024-08-04 16:05:09
| 2
| 2,520
|
mara004
|
78,831,454
| 8,543,025
|
Update spacing between Plotly subplots
|
<p>Is there a way to change the vertical/horizontal spacing between subplots in an existing figure?
I couldn't find anything in the docs or user forum, and the following throws a <code>Bad property</code> error:</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(rows=2, cols=2, vertical_spacing=0.05)
# ... add subplots to figure
fig.update_layout(vertical_spacing=0.1)
</code></pre>
|
<python><plotly><subplot>
|
2024-08-04 15:31:41
| 0
| 593
|
Jon Nir
|
78,831,453
| 6,357,916
|
No module named 'django' when debugging inside vscode even though django is installed
|
<p>I am trying to debug my django app inside a docker container. I have specified the following in my requirements file:</p>
<p><strong>requirements.txt</strong></p>
<pre><code>Django==3.2.5
psycopg2-binary==2.9.1
djangorestframework==3.12.4
django-rest-swagger==2.2.0
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:3.9.6-bullseye
ENV PYTHONUNBUFFERED 1
WORKDIR /my_project
COPY ./my_project/requirements.txt /my_project/requirements.txt
RUN pip install -r requirements.txt
EXPOSE 8000
COPY ./entrypoint.sh /entrypoint.sh
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT /entrypoint.sh
</code></pre>
<p>Also, when I checked by attaching VS Code to the docker container and running pip, it shows that Django is indeed installed:</p>
<pre><code># pip list | grep Django
Django 3.2.5
</code></pre>
<p>However, still I get the error:</p>
<pre><code>ModuleNotFoundError: No module named 'django'
</code></pre>
<p>Here is a screenshot of the error showing the exception raised in debug mode, launch.json, and the output of <code>pip list | grep Django</code>:
<a href="https://i.sstatic.net/Obn3WX18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Obn3WX18.png" alt="enter image description here" /></a></p>
<p>PS: I am using docker compose to start the containers.</p>
|
<python><django><docker><visual-studio-code><docker-compose>
|
2024-08-04 15:31:24
| 1
| 3,029
|
MsA
|
78,831,434
| 16,869,946
|
New column in Pandas dataframe using least squares from scipy.optimize
|
<p>I have a Pandas dataframe that looks like the following:</p>
<pre><code>Race_ID Date Student_ID feature1
1 1/1/2023 3 0.02167131
1 1/1/2023 4 0.17349148
1 1/1/2023 6 0.08438952
1 1/1/2023 8 0.04143787
1 1/1/2023 9 0.02589056
1 1/1/2023 1 0.03866752
1 1/1/2023 10 0.0461553
1 1/1/2023 45 0.09212758
1 1/1/2023 23 0.10879326
1 1/1/2023 102 0.186921
1 1/1/2023 75 0.02990676
1 1/1/2023 27 0.02731904
1 1/1/2023 15 0.06020158
1 1/1/2023 29 0.06302721
3 17/4/2022 5 0.2
3 17/4/2022 2 0.1
3 17/4/2022 3 0.55
3 17/4/2022 4 0.15
</code></pre>
<p>I would like to create a new column using the following method:</p>
<ol>
<li>Define the following function using integrals</li>
</ol>
<pre><code>import numpy as np
from scipy import integrate
from scipy.stats import norm
import scipy.integrate as integrate
from scipy.optimize import fsolve
from scipy.optimize import least_squares
def integrandforpi_i(xi, ti, *theta):
prod = 1
for t in theta:
prod = prod * (1 - norm.cdf(xi - t))
return prod * norm.pdf(xi - ti)
def pi_i(ti, *theta):
return integrate.quad(integrandforpi_i, -np.inf, np.inf, args=(ti, *theta))[0]
</code></pre>
<ol start="2">
<li>for each <code>Race_ID</code>, the value for each <code>Student_ID</code> in the new column is given by solving a system of nonlinear equations using <code>least_squares</code> in <code>scipy.optimize</code> as follows:</li>
</ol>
<p>For example, for race 1, there are 14 students in the race and we will have to solve the following system of 14 nonlinear equations and we restrict the bound in between -2 and 2:</p>
<pre><code>def equations(params):
t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14 = params
return (pi_i(t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.02167131,
pi_i(t2,t1,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.17349148,
pi_i(t3,t2,t1,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.08438952,
pi_i(t4,t2,t3,t1,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.04143787,
pi_i(t5,t2,t3,t4,t1,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.02589056,
pi_i(t6,t2,t3,t4,t5,t1,t7,t8,t9,t10,t11,t12,t13,t14) - 0.03866752,
pi_i(t7,t2,t3,t4,t5,t6,t1,t8,t9,t10,t11,t12,t13,t14) - 0.0461553,
pi_i(t8,t2,t3,t4,t5,t6,t7,t1,t9,t10,t11,t12,t13,t14) - 0.09212758,
pi_i(t9,t2,t3,t4,t5,t6,t7,t8,t1,t10,t11,t12,t13,t14) - 0.10879326,
pi_i(t10,t2,t3,t4,t5,t6,t7,t8,t9,t1,t11,t12,t13,t14) - 0.186921,
pi_i(t11,t2,t3,t4,t5,t6,t7,t8,t9,t10,t1,t12,t13,t14) - 0.02990676,
pi_i(t12,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t1,t13,t14) - 0.02731904,
pi_i(t13,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t1,t14) - 0.06020158,
pi_i(t14,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t1) - 0.06302721)
res = least_squares(equations, (1,1,1,1,1,1,1,1,1,1,1,1,1,1), bounds = ((-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2), (2,2,2,2,2,2,2,2,2,2,2,2,2,2)))
t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14 = res.x
</code></pre>
<p>Solving gives <code>t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14 = [1.38473533 0.25616609 0.6935956 1.07314877 1.30201502 1.10781642 1.01839475 0.64349646 0.54630158 0.20719836 1.23347391 1.27642131 0.879412 0.83338882]</code></p>
<p>Similarly, for race 3, there are 4 students competing and we will have to solve the following system of 4 nonlinear equations:</p>
<pre><code>def equations(params):
t1,t2,t3,t4 = params
return (pi_i(t1,t2,t3,t4) - 0.2,
pi_i(t2,t1,t3,t4) - 0.1,
pi_i(t3,t2,t1,t4) - 0.55,
pi_i(t4,t2,t3,t1) - 0.15)
res = least_squares(equations, (1,1,1,1), bounds = ((-2,-2,-2,-2), (2,2,2,2)))
t1,t2,t3,t4 = res.x
</code></pre>
<p>which gives <code>t1,t2,t3,t4 = [0.9209873 1.37615468 0.12293934 1.11735818]</code>.</p>
<p>Hence the desired outcome looks like</p>
<pre><code>Race_ID Date Student_ID feature1 new_column
1 1/1/2023 3 0.02167131 1.38473533
1 1/1/2023 4 0.17349148 0.25616609
1 1/1/2023 6 0.08438952 0.6935956
1 1/1/2023 8 0.04143787 1.07314877
1 1/1/2023 9 0.02589056 1.30201502
1 1/1/2023 1 0.03866752 1.10781642
1 1/1/2023 10 0.0461553 1.01839475
1 1/1/2023 45 0.09212758 0.64349646
1 1/1/2023 23 0.10879326 0.54630158
1 1/1/2023 102 0.186921 0.20719836
1 1/1/2023 75 0.02990676 1.23347391
1 1/1/2023 27 0.02731904 1.27642131
1 1/1/2023 15 0.06020158 0.879412
1 1/1/2023 29 0.06302721 0.83338882
3 17/4/2022 5 0.2 0.9209873
3 17/4/2022 2 0.1 1.37615468
3 17/4/2022 3 0.55 0.12293934
3 17/4/2022 4 0.15 1.11735818
</code></pre>
<p>I have no idea how to generate the new column. Also, my actual dataframe is much larger, with many races, so I would also like to ask whether there is any way to speed up the computation. Thanks a lot.</p>
|
<python><pandas><dataframe><scipy><scipy-optimize>
|
2024-08-04 15:23:38
| 1
| 592
|
Ishigami
|
78,831,181
| 9,542,989
|
Migrating Stored Procedures Using Alembic
|
<p>I am using an Alembic project to migrate my database schema between my development and production servers.</p>
<p>I am autogenerating my migration scripts by maintaining a <code>models.py</code> containing the SQLAlchemy models for my tables and then running the following command anytime an update is made to the schema:</p>
<pre><code>alembic revision --autogenerate -m "<description of the change>"
</code></pre>
<p>My models.py file looks something like this:</p>
<pre><code>from sqlalchemy.sql import func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String
Base = declarative_base()
metadata = Base.metadata
class Customer(Base):
__tablename__ = 'customer'
__table_args__ = {'schema': 'sda'}
customer_id = Column(String, primary_key=True)
...
</code></pre>
<p>And this is my env.py:</p>
<pre><code>import os
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
from models import Base
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# set the database URL from environment variables
db_conn_uri = f"mssql+pyodbc://{os.environ['MSSQL_ID']}:{os.environ['MSSQL_PASSWORD']}@{os.environ['MSSQL_SERVER']}/{os.environ['MSSQL_DATABASE']}?driver=ODBC+Driver+17+for+SQL+Server&Authentication=ActiveDirectoryServicePrincipal&Encrypt=yes&TrustServerCertificate=no"
config.set_main_option('sqlalchemy.url', db_conn_uri)
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection, target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
</code></pre>
<p>This works fine, but I would also like to migrate my stored procedures from development to production. How exactly can I do that with the above setup?</p>
<p>I came across the concept of Replaceable Objects in the Alembic cookbook <a href="https://alembic.sqlalchemy.org/en/latest/cookbook.html#the-replaceable-object-structure" rel="nofollow noreferrer">here</a>, but it does not make much sense to me, to be honest.</p>
|
<python><stored-procedures><sqlalchemy><database-migration><alembic>
|
2024-08-04 13:19:57
| 1
| 2,115
|
Minura Punchihewa
|
78,831,103
| 2,913,106
|
How to handle self-closing tags without end-slash in html.parser.HTMLParser
|
<p>By default it seems that <code>html.parser.HTMLParser</code> cannot handle self-closing tags correctly if they are not terminated using <code>/</code>. E.g. it handles <code><img src="asfd"/></code> fine, but it incorrectly handles <code><img src="asdf"></code> by considering it a non-self-closing tag.</p>
<pre><code>from html.parser import HTMLParser
class MyHTMLParser(HTMLParser):
def __init__(self):
super().__init__()
self.depth = 0
def handle_starttag(self, tag, attrs):
print('|'*self.depth + tag)
self.depth += 1
def handle_endtag(self, tag):
self.depth -= 1
html_content = """
<html>
<head>
<title>test</title>
</head>
<body>
<div>
<img src="http://closed.example.com" />
<div>1</div>
<div>2</div>
<img src="http://unclosed.example.com">
<div>3</div> <!-- will be indented too far -->
<div>4</div>
</div>
</body>
</html>
"""
parser = MyHTMLParser()
parser.feed(html_content)
</code></pre>
<p>Is there a way to change this behaviour so it correctly handles self-closing tags without slash, or maybe a workaround?</p>
<p>For context: I'm writing a script for an environment where I only have access to a pure python interpreter and can only use built-in libraries, I cannot use any other ones.</p>
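<p>The workaround I am experimenting with is hard-coding the HTML5 void elements, since those are exactly the tags that never get a closing tag. Is there something less manual than this sketch?</p>

```python
from html.parser import HTMLParser

# HTML5 "void" elements never take a closing tag, so the depth
# bookkeeping can special-case them by name.
VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
             "input", "link", "meta", "param", "source", "track", "wbr"}

class MyHTMLParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.lines = []

    def handle_starttag(self, tag, attrs):
        self.lines.append("|" * self.depth + tag)
        if tag not in VOID_TAGS:
            self.depth += 1

    def handle_startendtag(self, tag, attrs):
        # explicitly self-closed tags: emit once, leave depth unchanged
        self.handle_starttag(tag, attrs)
        if tag not in VOID_TAGS:
            self.depth -= 1

    def handle_endtag(self, tag):
        self.depth -= 1

parser = MyHTMLParser()
parser.feed('<div><img src="x"><p>hi</p></div>')
print(parser.lines)  # ['div', '|img', '|p']
```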
|
<python><html><html-parsing>
|
2024-08-04 12:41:15
| 1
| 11,728
|
flawr
|
78,830,955
| 865,220
|
In a pandas dataframe, for a given column, How to multiple a value in same row of df with another value in previous row of its own column?
|
<p>I want to transform a datafram like this</p>
<pre><code>date | colA | ColB
------------------------------------
1971-10-01 0.002451 NaN
1971-11-01 0.002445 NaN
1971-12-01 0.002439 NaN
1972-01-01 0.002433 NaN
1972-02-01 0.004854 NaN
1972-03-01 0.000000 250341816.0
1972-04-01 0.002415 NaN
1972-05-01 0.002410 NaN
1972-06-01 0.002404 NaN
</code></pre>
<p>to</p>
<pre><code>date | colA | ColB
------------------------------------
1971-10-01 0.002451 0
1971-11-01 0.002445 0
1971-12-01 0.002439 0
1972-01-01 0.002433 0
1972-02-01 0.004854 0
1972-03-01 0.000000 250341816.0
1972-04-01 0.002415 250946391.486
1972-05-01 0.002410 251551172.289
1972-06-01 0.002404 252155901.307
</code></pre>
<p>There was only one valid value in colB, i.e. <code>250341816.0</code>, and all other rows were <code>NaN</code>. The rows above it have to be filled with <code>0</code>, but the <code>i</code>-th row below it has to be filled with</p>
<p><code>df[ColB].iloc[i] = df[ColB].iloc[i-1] * (1 + df[ColA].iloc[i])</code></p>
<p>For example, as you can see,
<code>250341816.0*(1+0.002415)=250946391.486</code></p>
<p>I am looking for a concise way to do it, preferably not via looping all indices explicitly.</p>
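<p>For context, this is the vectorised shape I was hoping for, sketched with toy numbers (I am not sure it is the idiomatic way):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "colA": [0.002451, 0.004854, 0.0, 0.002415, 0.002410],
    "ColB": [np.nan, np.nan, 100.0, np.nan, np.nan],
})

# position of the single valid seed value in ColB
seed_pos = df["ColB"].first_valid_index()

# rows at or above the seed: fill the NaNs with 0
df.loc[:seed_pos, "ColB"] = df.loc[:seed_pos, "ColB"].fillna(0.0)

# rows below the seed: seed * cumulative product of (1 + colA),
# which unrolls the recurrence ColB[i] = ColB[i-1] * (1 + colA[i])
below = df.index > seed_pos
df.loc[below, "ColB"] = df.loc[seed_pos, "ColB"] * (1 + df.loc[below, "colA"]).cumprod()
print(df["ColB"].tolist())
```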
|
<python><pandas><dataframe>
|
2024-08-04 11:33:50
| 1
| 18,382
|
ishandutta2007
|
78,830,929
| 21,294,350
|
How to write `compose` similar to that in SDF with python?
|
<p>Recently, when self-learning the MIT 6.5151 course, I first read CS 61AS Unit 0 as preparation. Then I read SICP 1 to 2.1 (with the related lecture notes) as <a href="https://groups.csail.mit.edu/mac/users/gjs/6.945/psets/ps00/dh.pdf" rel="nofollow noreferrer">ps0</a> requires (and also 2.2.1, as the <a href="https://people.eecs.berkeley.edu/%7Ebh/61a-pages/Volume2/notes.pdf" rel="nofollow noreferrer">CS 61A notes</a> require), and then the <a href="https://mitpress.ublish.com/ebook/software-design-for-flexibility-preview/12618/27" rel="nofollow noreferrer">Software Design for Flexibility (SDF)</a> Prologue, chapter 1, and partly the Appendix on Scheme. I use MIT-Scheme as the course recommends.</p>
<p>The course <a href="https://groups.csail.mit.edu/mac/users/gjs/6.945/psets/ps01/ps.pdf" rel="nofollow noreferrer">problem set 1</a> has the following problem:</p>
<blockquote>
<p>Exercise 2.a: (Not in SDF)
Most modern languages, such as <em>Python</em> and Javascript provide
facilities to write and use combinators like COMPOSE. Pick your
favorite language and write implementations, in your favorite
language, of three of the combinators that that we show in section 2.1
of SDF. Can you deal with <em>multiple arguments</em>? To what extent can you
make them <em>work with multiple returned values</em>? To what extent can you
put in checking for correct arity? Do these requirements conflict.
Demonstrate that your combinators work correctly with a few examples.</p>
</blockquote>
<p>My implementation:</p>
<pre class="lang-py prettyprint-override"><code>import inspect
def compose(f, g):
g_arity=len(inspect.signature(g).parameters)
def compose_composition(*arguments):
# https://stackoverflow.com/a/18994347/21294350
try:
if len(arguments) != g_arity:
print("compose Arg number error")
# print("g return:",g(*arguments))
# https://stackoverflow.com/a/691274/21294350
# very inconvenient.
g_result=g(*arguments)
if type(g_result) is list:
print("g_result",g_result)
return f(*g_result)
else:
return f(*[g_result])
except:
pass
return compose_composition
assert compose(lambda x:x+2, lambda x:x*x)(2)==2*2+2
print("compose test_2",compose(lambda x:x+2, lambda x:x*x)(2,2)) # this test is expected to throw error.
assert compose(lambda x:["foo",x], lambda x:["bar",x])(2)==["foo",["bar",2]]
</code></pre>
<p>The key problem is due to my assumption of passing "multiple returned values" using a <em><code>list</code></em>. But this will have problems if we just pass <em>one single list, i.e. one kind of single-argument case</em>. Then we can't differentiate it from "multiple returned values".</p>
<hr />
<p><a href="https://github.com/sci-42ver/SDF_exercise_solution/blob/ea5c53e090c23245d89b9963e1a238e5cd1bfb03/software/sdf/combinators/function-combinators.scm#L41-L47" rel="nofollow noreferrer">The book's Scheme implementation</a> solves the above problem by using <code>values</code>, which is not normally returned as one single data object; it is always used as a container for multiple values.</p>
<pre class="lang-lisp prettyprint-override"><code>(define (compose f g)
(define (the-composition . args)
(call-with-values (lambda () (apply g args))
f))
(restrict-arity the-composition (get-arity g)))
</code></pre>
<p>Is there one way to solve with the above tricky problem using python?</p>
<hr />
<p>As hinted by the comment, I tried the following, where the <code>,</code> in <code>(g(*arguments),)</code> handles the case where there is only one single object in the tuple.</p>
<pre class="lang-py prettyprint-override"><code>def compose(f, g):
g_arity=len(inspect.signature(g).parameters)
def compose_composition(*arguments):
try:
if len(arguments) != g_arity:
print("compose Arg number error")
print("g result:",(g(*arguments),),*(g(*arguments),))
if type(g(*arguments)) is tuple:
return f(*g(*arguments))
else:
return f(*(g(*arguments),))
except:
pass
return compose_composition
compose(lambda x:x, lambda x,y:(x,y))(2,3) # This should throw error
assert compose(lambda x,y:x+y, lambda x,y:(x,y))(2,3)==5
</code></pre>
<p>Since Python normally uses a tuple to return multiple values, maybe we just can't differentiate that from returning one single value that happens to be a tuple. So here we assume we <em>should not</em> return one single tuple as one single value.</p>
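<p>For reference, here is the tidied version I ended up with under that assumption, raising on an arity mismatch instead of the bare <code>try/except</code> above, which silently swallowed errors:</p>

```python
import inspect

def compose(f, g):
    g_arity = len(inspect.signature(g).parameters)

    def the_composition(*args):
        if len(args) != g_arity:
            raise TypeError(f"expected {g_arity} arguments, got {len(args)}")
        result = g(*args)
        # convention: a tuple return value means "multiple returned values"
        return f(*result) if isinstance(result, tuple) else f(result)

    return the_composition

assert compose(lambda x: x + 2, lambda x: x * x)(2) == 6
assert compose(lambda x, y: x + y, lambda x, y: (y, x))(2, 3) == 5
```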
|
<python><arguments><scheme><parameter-passing>
|
2024-08-04 11:23:12
| 0
| 782
|
An5Drama
|
78,830,890
| 6,778,374
|
How to print only the opening tag of an element in etree?
|
<p>Given an Element from <code>etree</code>, I would like to print only the opening tag. This would be immensely useful for debugging.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>>>> from lxml import etree
>>> elem = etree.fromstring(b'''<bbq taste="awful" smell="amazing">
... <food type="hot dogs" />
... <food type="hamburgers" condition="burnt" />
... </bbq>''')
</code></pre>
<p>If I want to print this element I can use <code>etree.tostring</code>, but it can only print <em>the full tree</em>, even if it'd take up 10 screens:</p>
<pre class="lang-py prettyprint-override"><code>>>> print(etree.tostring(elem).decode())
<bbq taste="awful" smell="amazing">
<food type="hot dogs"/>
<food type="hamburgers" condition="burnt"/>
</bbq>
>>>
</code></pre>
<p>What I would like is something like this:</p>
<pre class="lang-py prettyprint-override"><code>>>> print(etree.tostring(elem, open_tag_only=True).decode())
<bbq taste="awful" smell="amazing">
>>>
</code></pre>
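<p>For reference, a minimal workaround I can think of (my own sketch, not an lxml API): build the opening tag by hand from <code>elem.tag</code> and <code>elem.attrib</code>, which both <code>lxml</code> and the standard-library <code>ElementTree</code> expose. Note this does not escape attribute values or handle namespaces:</p>

```python
import xml.etree.ElementTree as ET

def opening_tag(elem):
    # Render only the opening tag: <tag key="value" ...>.
    attrs = "".join(f' {k}="{v}"' for k, v in elem.attrib.items())
    return f"<{elem.tag}{attrs}>"

elem = ET.fromstring('<bbq taste="awful" smell="amazing"><food type="hot dogs"/></bbq>')
print(opening_tag(elem))  # <bbq taste="awful" smell="amazing">
```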
|
<python><lxml><elementtree>
|
2024-08-04 11:05:09
| 1
| 675
|
NeatNit
|
78,830,862
| 11,357,695
|
Testing static methods with pytest
|
<p>I am trying to test the <code>write_nodes</code> static method from <code>PrunedGraphWriter</code> (please see below). The original function copies files (nodes) from one dir to another, whereas the mocked function just checks that the nodes to be written are what I expect in the test case.</p>
<p>I am getting:</p>
<pre><code>FAILED .spyder-py3\app\tests\test_pipelines.py::TestSieve::test_sieve - TypeError:
TestSieve.test_sieve.<locals>.mock_pruned_graph_write_nodes() takes 3 positional arguments but 4 were given
</code></pre>
<p><em>This error does not occur and all tests pass</em> if I pass <code>self</code> to <code>mock_pruned_graph_write_nodes()</code>. However, I don't see why: as I understand it, there is no implicit <code>self</code> argument when using static methods.</p>
<p>Can someone explain what is going on? Thank you!</p>
<p><strong>networks.py</strong></p>
<pre><code>class PrunedGraphWriter:
    ...
    def __init__(self, args) -> None:
        ...
        self.written_nodes = self.write_nodes(self.nodes,
                                              input_genbank_dir,
                                              output_genbank_dir
                                              )
        ...

    @staticmethod
    def write_nodes(nodes : list, input_genbank_dir : str,
                    output_genbank_dir : str) -> list:
        ...
</code></pre>
<p><strong>pipelines.py</strong></p>
<pre><code>from app.networks import PrunedGraphWriter

def sieve(args):
    ...
    pruned_graph = PrunedGraphWriter(args)
    ...
</code></pre>
<p><strong>test_pipelines.py</strong></p>
<pre><code>class TestSieve:
    ...
    def test_sieve(self, monkeypatch, caplog, ...):
        def mock_pruned_graph_write_nodes(nodes,
                                          input_genbank_dir,
                                          output_genbank_dir) -> list:
            assert nodes == ['file3', 'file4']
            return nodes
        ...
        monkeypatch.setattr('app.pipelines.PrunedGraphWriter.write_nodes',
                            mock_pruned_graph_write_nodes)
        ...
        with caplog.at_level(logging.INFO):
            sieve(args)
        ...
</code></pre>
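<p>For what it's worth, the behaviour can be reproduced without pytest (a sketch; <code>C</code> and <code>plain</code> are made-up names). <code>monkeypatch.setattr</code> is plain attribute assignment, so the <code>staticmethod</code> wrapper is gone and instance access binds the instance as the first argument; wrapping the replacement in <code>staticmethod(...)</code> restores the original behaviour:</p>

```python
# Minimal reproduction outside pytest.
class C:
    @staticmethod
    def f(a, b):
        return (a, b)

def plain(a, b):
    return (a, b)

# Equivalent to monkeypatch.setattr(C, "f", plain): the staticmethod
# wrapper is gone, so obj.f now binds obj as the first argument.
C.f = plain
obj = C()
try:
    obj.f(1, 2)  # actually calls plain(obj, 1, 2)
    raised = False
except TypeError:
    raised = True
assert raised

# Wrapping the replacement keeps it static:
C.f = staticmethod(plain)
assert obj.f(1, 2) == (1, 2)
```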
|
<python><unit-testing><class><testing><pytest>
|
2024-08-04 10:53:22
| 1
| 756
|
Tim Kirkwood
|
78,830,794
| 14,358,734
|
Why is this list index out of range for a return statement but not for a print statement?
|
<p>The goal: Given an integer array nums of length n and an integer target, find three integers in nums such that the sum is closest to target.</p>
<p>Return the sum of the three integers.</p>
<p>I've written up a potential solution. At the end, I have a list of lists <code>solution</code>. When I try to print the first entry of the first list in <code>solution</code>, I get the expected (and correct) result. I can manipulate this data with no problems.</p>
<pre><code>class Solution:
    def threeSumClosest(self, nums: List[int], target: int) -> int:
        from operator import itemgetter
        solution = [sum([nums[x], nums[y], nums[z]]) for x in nums for y in nums for z in nums if x != y != z != x]
        solution = [[x, abs(x-target)] for x in solution]
        sorted(solution, key=itemgetter(1))
        print(solution[0][0])
</code></pre>
<p>However, the moment I actually set a return value for the function, like so</p>
<pre><code>class Solution:
    def threeSumClosest(self, nums: List[int], target: int) -> int:
        from operator import itemgetter
        solution = [sum([nums[x], nums[y], nums[z]]) for x in nums for y in nums for z in nums if x != y != z != x]
        solution = [[x, abs(x-target)] for x in solution]
        sorted(solution, key=itemgetter(1))
        return solution[0][0]
</code></pre>
<p>I get this error</p>
<pre><code>IndexError: list index out of range
~~~~~~~~^^^
return(solution[0][0])
Line 10 in threeSumClosest (Solution.py)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ret = Solution().threeSumClosest(param_1, param_2)
Line 42 in _driver (Solution.py)
_driver()
Line 53 in <module> (Solution.py)
</code></pre>
<p>I get this error even if I try to return a dummy value, like <code>return 0</code>. So clearly what I've written doesn't work, but I cannot figure out where the problem lies.</p>
|
<python><list>
|
2024-08-04 10:26:03
| 2
| 781
|
m. lekk
|
78,830,766
| 13,520,223
|
How to click the next link with Zyte browser automation?
|
<p>The <a href="https://docs.zyte.com/web-scraping/tutorial/setup.html#create-your-first-spider" rel="nofollow noreferrer">Zyte tutorial "Create your first spider"</a> crawls <a href="http://books.toscrape.com/catalogue/category/books/mystery_3/index.html" rel="nofollow noreferrer">this page</a> which has a pager with a "normal" next link. But what if the next link contains only a <code>href="#"</code> and executes JavaScript instead, like many websites nowadays do? In that case, you have no URL for your <code>next_page_links</code> and cannot execute <code>response.follow_all</code>, right?</p>
<p>The <a href="https://docs.zyte.com/web-scraping/tutorial/js.html#" rel="nofollow noreferrer">chapter "Handle JavaScript" of the Zyte Tutorial</a> suggests to use <a href="https://docs.zyte.com/web-scraping/tutorial/js.html#use-browser-automation" rel="nofollow noreferrer">browser automation</a>, and the example given there demonstrates how this works with the <code>scrollBottom</code> action for <a href="http://quotes.toscrape.com/scroll" rel="nofollow noreferrer">http://quotes.toscrape.com/scroll</a>.</p>
<p>Unfortunately, there is no example how to handle a <code>click</code> action on a next link to make the next results load with JavaScript. Basically, as a proof of concept, clicking the link would even work with a normal link like on <a href="http://books.toscrape.com" rel="nofollow noreferrer">http://books.toscrape.com</a>.</p>
<p>I tried this like that:</p>
<pre><code>import scrapy
from scrapy import Request


class BooksToScrapeSpider(scrapy.Spider):
    name = "books_toscrape"
    start_urls = [
        "http://books.toscrape.com/catalogue/category/books/mystery_3/index.html"
    ]

    def parse(self, response):
        # Extract book data
        for book in response.css("article.product_pod"):
            yield {
                "name": book.css("h3 a::attr(title)").get(),
                "price": book.css(".price_color::text").get(),
            }

        # Find the "next" link
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            self.logger.info(f"Found next page: {next_page}")
            yield Request(
                # response.urljoin(next_page),
                "http://books.toscrape.com/catalogue/category/books/mystery_3/index.html#",
                meta={
                    "zyte_api_automap": {
                        "browserHtml": True,
                        "actions": [
                            {
                                "action": "click",
                                "selector": {"type": "css", "value": "li.next a"},
                            },
                            {
                                "action": "waitForSelector",
                                "selector": {
                                    "type": "css",
                                    "value": "li.previous a",
                                },
                            },
                        ],
                    }
                },
                callback=self.parse,
            )
        else:
            self.logger.info("No next page found")
</code></pre>
<p>To perform Zyte browser automation, I first need a request, right? So it doesn't work without a URL. In my fictitious case, the URL would actually be <a href="http://books.toscrape.com/catalogue/category/books/mystery_3/index.html#" rel="nofollow noreferrer">http://books.toscrape.com/catalogue/category/books/mystery_3/index.html#</a>. But I do not want to fire a request and then perform an action. What I want is to perform an action <em>without</em> a request (like an <a href="https://www.w3schools.com/jsref/event_onclick.asp" rel="nofollow noreferrer">'onclick' event</a> does), and have that action trigger something like a request.</p>
<p>I've been racking my brains for days on how to do this - to no avail. Does anyone have any ideas for me?</p>
|
<javascript><python><web-scraping><zyte>
|
2024-08-04 10:11:25
| 2
| 693
|
Ralf Zosel
|
78,830,678
| 865,220
|
Combining two dataframes indexed on date based (matching month and year only)
|
<p>Ignore the day component of the date; we just need to merge based on YYYY-MM.
The first table has one row for each month, but the second table may have multiple rows per month. The first table's value for the matching month should be merged into all of them.</p>
<p>data-frame 1:</p>
<pre><code>2020-01-01 1.480
2020-02-01 1.620
2020-03-01 1.000
2020-04-01 1.000
2020-05-01 1.950
2020-06-01 1.080
2020-07-01 1.480
2020-08-01 1.620
2020-09-01 1.000
2020-10-01 1.000
2020-11-01 1.950
2020-12-01 1.080
2021-01-01 11.480
2021-02-01 12.620
2021-03-01 13.000
2021-04-01 14.000
2021-05-01 15.950
2021-06-01 16.080
2021-07-01 17.480
2021-08-01 18.620
2021-09-01 19.000
2021-10-01 10.000
2021-11-01 11.950
2021-12-01 12.080
2022-01-01 21.480
2022-02-01 22.620
2022-03-01 23.000
2022-04-01 24.000
2022-05-01 25.950
2022-06-01 26.080
2022-07-01 27.480
2022-08-01 28.620
2022-09-01 29.000
2022-10-01 30.000
2022-11-01 31.950
2022-12-01 32.080
2023-01-01 31.480
2023-02-01 31.620
2023-03-01 32.000
2023-04-01 32.200
2023-05-01 31.950
2023-06-01 32.080
2023-07-01 33.080
</code></pre>
<p>data-frame 2:</p>
<pre><code>2020-01-15 NaN NaN 111 111
2020-02-25 NaN 333 NaN 204
2021-02-22 123 NaN NaN 111
2023-07-18 NaN 324 111 NaN
2023-03-15 NaN NaN NaN 205
</code></pre>
<p>desired data-frame:</p>
<pre><code>2020-01-15 1.480 NaN NaN 111 111
2020-02-25 1.620 NaN 333 NaN 204
2021-02-22 1.620 123 NaN NaN 111
2023-07-18 33.080 NaN 324 111 NaN
2023-03-15 32.000 NaN NaN NaN 205
</code></pre>
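<p>A sketch of one possible approach (the column names and the abbreviated values below are my own, not the real data): derive a YYYY-MM <code>Period</code> from each <code>DatetimeIndex</code> and merge on it:</p>

```python
import pandas as pd

# Hypothetical reconstruction of the two frames (values abbreviated,
# column names invented).
df1 = pd.DataFrame(
    {"rate": [1.48, 1.62, 12.62, 33.08, 32.0]},
    index=pd.to_datetime(
        ["2020-01-01", "2020-02-01", "2021-02-01", "2023-07-01", "2023-03-01"]
    ),
)
df2 = pd.DataFrame(
    {"a": [None, None, 123, None, None]},
    index=pd.to_datetime(
        ["2020-01-15", "2020-02-25", "2021-02-22", "2023-07-18", "2023-03-15"]
    ),
)

# Merge on a month period derived from each index; keep df2's dates.
out = (
    df2.assign(_m=df2.index.to_period("M"))
    .merge(df1.assign(_m=df1.index.to_period("M")), on="_m", how="left")
    .drop(columns="_m")
    .set_index(df2.index)
)
```

<p><code>how="left"</code> keeps every row of the second frame, and each one picks up the single matching month's value from the first frame.</p>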
|
<python><pandas><dataframe>
|
2024-08-04 09:22:24
| 2
| 18,382
|
ishandutta2007
|
78,830,523
| 140,100
|
Align the time range across two or more separate time series
|
<p>I have several time series that fall within the same time range, but with different sampling rates. The start and end times are the same for all series.</p>
<pre><code>series_a_times = ['2023-01-01', '2023-01-03', '2023-01-04', '2023-01-08']
series_b_times = ['2023-01-01', '2023-01-04', '2023-01-04', '2023-01-08']
series_c_times = ['2023-01-01', '2023-01-02', '2023-01-04', '2023-01-08']
</code></pre>
<p>Is there a package that can standardize the sampling rate across these time series and apply interpolation where needed? The desired sampling times are:</p>
<pre><code>series_times = ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05', '2023-01-06', '2023-01-07', '2023-01-08']
</code></pre>
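<p>With pandas this can be done without a dedicated package (a sketch with made-up values; it assumes duplicate timestamps, such as the repeated <code>2023-01-04</code> in series B, are dropped first): reindex each series onto the shared daily grid and interpolate by elapsed time:</p>

```python
import pandas as pd

# Hypothetical values attached to series A's sample times.
s = pd.Series(
    [1.0, 2.0, 3.0, 4.0],
    index=pd.to_datetime(["2023-01-01", "2023-01-03", "2023-01-04", "2023-01-08"]),
)

# Shared daily grid covering the common start/end times.
grid = pd.date_range("2023-01-01", "2023-01-08", freq="D")

# Reindex onto the union of both indexes, interpolate by elapsed time,
# then keep only the grid points.
aligned = s.reindex(s.index.union(grid)).interpolate(method="time").reindex(grid)
```

<p>Applying the same resampling to each series yields identically indexed series that can be compared or combined directly.</p>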
|
<python><algorithm><time-series>
|
2024-08-04 07:46:38
| 1
| 21,420
|
Night Walker
|
78,830,224
| 3,104,963
|
How can I get around an unresolved hostname or unrecognized name error using HTTP(S) in java or python?
|
<p>I am trying to access a website's information programmatically, but in both Java and Python the hostname cannot be resolved. If I specify the IP address instead, the error changes to TLSV1_UNRECOGNIZED_NAME. The website resolves without any extra work in any browser, though.</p>
<p>I have looked through a lot of potential solutions on here, but for Python it says this issue should have been resolved in 2.7 or 2.8, but I am using 3.10 and still getting that error. In Java, it claims that this is a known error, but the solutions presented such as removing the SNI header through a compilation option, or passing an empty hostname array to HTTPSURLConnection to cancel out the creation of the SNI header doesn't solve it. I have also tried setting the user agent to Mozilla as suggested by an answer on here, but that didn't change anything either.</p>
<p>I am sure that it is something unusual about the website, but it is not one I own so I am unable to check much about its configuration.</p>
<p>Specifically the website I am trying to see is:</p>
<pre><code>URL -> https://epic7db.com/heroes
IP -> 157.230.84.20
DNS Lookup -> https://www.nslookup.io/domains/epic7db.com/webservers/
</code></pre>
<p>When using nslookup locally, I get back:</p>
<pre><code>nslookup epic7db.com
Server: UnKnown
Address: 10.0.0.1
Non-authoritative answer:
Name: epic7db.com
Address: 157.230.84.20
</code></pre>
<p>Any help would be appreciated as I am essentially throwing things at the wall to see what sticks at this point.</p>
<p>EDIT: Adding code samples
Python:</p>
<pre><code>import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36'} # This is chrome, you can set whatever browser you like
url = 'https://epic7db.com'
a = requests.get(url, headers=headers)  # headers must be a keyword argument; positionally it would be treated as params
print(a.content)
</code></pre>
<p>Kotlin using Java's HttpsUrlConnection:</p>
<pre><code>import http.SSLSocketFactoryWrapper
import java.net.URL
import javax.net.ssl.*

fun main() {
    HttpsURLConnection.setDefaultHostnameVerifier { hostName, session -> true }
    val url = URL("https://epic7db.com")
    val sslParameters = SSLParameters()
    val sniHostNames: MutableList<SNIHostName> = ArrayList<SNIHostName>(1)
    // sniHostNames.add(SNIHostName(url.getHost()))
    sslParameters.setServerNames(sniHostNames as List<SNIServerName>?)
    val wrappedSSLSocketFactory: SSLSocketFactory =
        SSLSocketFactoryWrapper(SSLContext.getDefault().socketFactory, sslParameters)
    HttpsURLConnection.setDefaultSSLSocketFactory(wrappedSSLSocketFactory)
    val conn = url.openConnection() as HttpsURLConnection
    conn.hostnameVerifier = HostnameVerifier { s: String?, sslSession: SSLSession? -> true }
    println(String(conn.inputStream.readAllBytes()))
}
</code></pre>
<p>Suggested helper class in Kotlin/Java:</p>
<pre><code>package http;

import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.net.UnknownHostException;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SSLSocketFactoryWrapper extends SSLSocketFactory {

    private final SSLSocketFactory wrappedFactory;
    private final SSLParameters sslParameters;

    public SSLSocketFactoryWrapper(SSLSocketFactory factory, SSLParameters sslParameters) {
        this.wrappedFactory = factory;
        this.sslParameters = sslParameters;
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException, UnknownHostException {
        SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(host, port);
        setParameters(socket);
        return socket;
    }

    @Override
    public Socket createSocket(String host, int port, InetAddress localHost, int localPort)
            throws IOException, UnknownHostException {
        SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(host, port, localHost, localPort);
        setParameters(socket);
        return socket;
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(host, port);
        setParameters(socket);
        return socket;
    }

    @Override
    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(address, port, localAddress, localPort);
        setParameters(socket);
        return socket;
    }

    @Override
    public Socket createSocket() throws IOException {
        SSLSocket socket = (SSLSocket) wrappedFactory.createSocket();
        setParameters(socket);
        return socket;
    }

    @Override
    public String[] getDefaultCipherSuites() {
        return wrappedFactory.getDefaultCipherSuites();
    }

    @Override
    public String[] getSupportedCipherSuites() {
        return wrappedFactory.getSupportedCipherSuites();
    }

    @Override
    public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(s, host, port, autoClose);
        setParameters(socket);
        return socket;
    }

    private void setParameters(SSLSocket socket) {
        socket.setSSLParameters(sslParameters);
    }
}
</code></pre>
|
<python><java><ssl><https><python-requests>
|
2024-08-04 03:57:00
| 1
| 402
|
KM529
|
78,830,098
| 4,616,233
|
Timeout Issue with aiohttp but not with Requests
|
<p>I'm trying to fetch a webpage in Python using two different methods: <code>requests</code> and <code>aiohttp</code>. The requests method works fine, but the aiohttp method results in a timeout. Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

import aiohttp
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36',
    "X-Requested-With": "XMLHttpRequest",
    "Cookie": ""
}

url = "an_url"


async def request_requests():
    print("Requesting...")
    try:
        response = requests.get(url, headers=headers, timeout=3)
        print(response.text)
    except requests.exceptions.ReadTimeout:
        print("Timeout REQUESTS")


async def request_aiohttp():
    print("Requesting...")
    try:
        async with aiohttp.ClientSession(headers=headers, timeout=aiohttp.ClientTimeout(total=3)) as session:
            async with session.get(url) as response:
                print(await response.text())
    except asyncio.TimeoutError:
        print("Timeout AIOHTTP")


if __name__ == '__main__':
    asyncio.run(request_requests())
    asyncio.run(request_aiohttp())
</code></pre>
<p>When I run the script, <code>requests</code> retrieves the data successfully, but <code>aiohttp</code> consistently times out. Both methods are set with a 3-second timeout.</p>
<p>Interestingly, this issue occurs with a specific URL I'm working with. For most other URLs, both <code>requests</code> and <code>aiohttp</code> work as expected.</p>
<p>Any ideas on why aiohttp is timing out while requests isn't? What am I missing here?</p>
|
<python><python-3.x><python-requests><python-asyncio><aiohttp>
|
2024-08-04 01:31:37
| 2
| 1,251
|
graille
|
78,829,943
| 8,190,068
|
How do I programmatically add an AccordionItem to an Accordion?
|
<p><strong>UPDATED WITH MORE EXAMPLE CODE</strong></p>
<p>I have a test application which contains an Accordion. <strong>I want to be able to add items to the Accordion from within my python code.</strong></p>
<p>Here is the kv code:</p>
<pre><code>#:kivy 1.0.9

<LayoutTest>:
    BoxLayout:
        id: mainBox
        orientation: 'horizontal'
        padding: 4
        spacing: 4
        canvas:
            Color:
                rgba: 0, 0.2, 0.4, 1 # Blue-ish color
            Rectangle:
                pos: self.pos
                size: self.size

        # workBox
        BoxLayout:
            orientation: 'vertical'
            canvas:
                Color:
                    rgba: 0.16, 0.11, 0, 1 # brownish color
                Rectangle:
                    pos: self.pos
                    size: self.size
            ListView:
                id: r_list
                orientation: 'vertical'

[AccordionItemTitle@Label]:
    text: ' ' + ctx.title
    halign: 'left'
    valign: 'center'
    text_size: self.size
    normal_background: ctx.item.background_normal if ctx.item.collapse else ctx.item.background_selected
    disabled_background: ctx.item.background_disabled_normal if ctx.item.collapse else ctx.item.background_disabled_selected
    canvas.before:
        Color:
            rgba: self.disabled_color if self.disabled else self.color
        BorderImage:
            source: self.disabled_background if self.disabled else self.normal_background
            pos: self.pos
            size: self.size
        PushMatrix
        Translate:
            xy: self.center_x, self.center_y
        Rotate:
            angle: 90 if ctx.item.orientation == 'horizontal' else 0
            axis: 0, 0, 1
        Translate:
            xy: -self.center_x, -self.center_y
    canvas.after:
        PopMatrix

<ListItem@AccordionItem>:
    title: '2024/12/20 22:00 <reference>'
    BoxLayout:
        orientation: 'vertical'
        Label:
            text: 'Type 1'
            text_size: self.width, None
        Label:
            text: 'Content 3'
            text_size: self.width, None

<ListView@BoxLayout>:
    Accordion:
        id: accordyN
        orientation: 'vertical'
        AccordionItem:
            title: '2024/12/20 00:00 <reference>'
            BoxLayout:
                orientation: 'vertical'
                size_hint_y: None
                # height: self.minimum_height
                # height: type_label.height + content_label.height
                valign: 'top'
                Label:
                    id: type_label
                    text: 'Type 1'
                    text_size: self.size
                    halign: 'left'
                    valign: 'top'
                Label:
                    id: content_label
                    text: 'Content 1'
                    text_size: self.size
                    halign: 'left'
                    valign: 'top'
        AccordionItem:
            title: '2024/12/20 09:00 <reference>'
            BoxLayout:
                orientation: 'vertical'
                Label:
                    text: 'Type 1'
                    text_size: self.width, self.height
                Label:
                    text: 'Content 2'
                    text_size: self.width, self.height
</code></pre>
<p>Using this layout, my application can display the Accordion, as well as the two sample AccordionItems shown above.</p>
<p>In my Python code, I somehow need to access the Accordion in order to add widgets to it. Here's my code which doesn't work:</p>
<pre><code>import kivy
from kivy.app import App
# below this kivy version you cannot use the app
kivy.require('1.9.0')
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.accordion import AccordionItem


class LayoutTest(BoxLayout):
    temp = 0


class ListItem(AccordionItem):
    temp = 0


class LayoutTestApp(App):
    mainLayout = None

    def __init__(self, **kwargs):
        super().__init__()

    def build(self):
        self.mainLayout = LayoutTest()
        entry = ListItem(title='2024/12/20 22:00 <reference>')
        self.mainLayout.ids['accordyN'].add_widget(entry)
        return self.mainLayout


if __name__ == '__main__':
    LayoutTestApp().run()
</code></pre>
<p>When I run the Python code above, I get an error:</p>
<blockquote>
<p>self.mainLayout.ids['accordyN'].add_widget(entry)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'accordyN'</p>
</blockquote>
<p><strong>How do I add more kv widgets to the layout from my Python code?</strong></p>
|
<python><kivy>
|
2024-08-03 22:58:42
| 2
| 424
|
Todd Hoatson
|
78,829,913
| 8,190,068
|
How do I implement a Splash screen for my app?
|
<p>I have a simple test application, and I want to see how to add a splash screen to my app. I tried using a popup for this (<em>is there a better way?</em>), and the popup will preferably have no buttons, just an image that lingers for a short amount of time, then the popup closes automatically.</p>
<p>What I have now seems to do nothing. Here's my Python code:</p>
<pre><code>import kivy
from kivy.app import App
# below this kivy version you cannot use the app
kivy.require('1.9.0')
from kivy.uix.popup import Popup
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.floatlayout import FloatLayout


class LoadSplashScreenDialog(FloatLayout):
    load = ObjectProperty(None)
    cancel = ObjectProperty(None)


class LayoutTestApp(App):
    SplashImage = 'images/GrapeVineSplashText.jpg'

    def __init__(self, **kwargs):
        super().__init__()
        content = LoadSplashScreenDialog(cancel=self.dismiss_popup)
        self._popup = Popup(title="Test App", content=content, size_hint=(1.0, 1.0))
        self._popup.open()
        # self._popup.dismiss()

    def dismiss_popup(self):
        self._popup.dismiss()

    ...
</code></pre>
<p>I have a call to _popup.dismiss() after _popup.open(), but it is currently commented out, because I don't (yet) know how to wait for a specified amount of time before automatically closing the popup. So, at this point, I just want to see that the splash screen appears.</p>
<p>Here is my kv code:</p>
<pre><code><LoadSplashScreenDialog>:
    BoxLayout:
        orientation: "vertical"
        size: root.size
        pos: root.pos
        Image:
            id: image
            source: app.SplashImage
            fit_mode: "contain" # resized to fit, maintaining aspect ratio
            padding: 2
            size_hint_x: 1.0
            canvas:
                Color:
                    rgba: 0.1, 0.1, 0.1, 1 # ? gray color
                Line:
                    width: 2
                    rectangle: self.x, self.y, self.width, self.height
</code></pre>
<p>Am I close? Am I way off? <strong>How would I get this to work???</strong></p>
|
<python><kivy>
|
2024-08-03 22:35:37
| 2
| 424
|
Todd Hoatson
|
78,829,884
| 7,317,985
|
Cannot install Flask with pip on MacOS
|
<p>I am trying to install Flask on macOS Ventura with the command:</p>
<pre><code>pip3 install Flask
</code></pre>
<p>which gives me the following error:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>abdullah@MacBook-Pro ~ % pip3 install flask
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try brew install
xyz, where xyz is the package you are trying to
install.
If you wish to install a Python library that isn't in Homebrew,
use a virtual environment:
python3 -m venv path/to/venv
source path/to/venv/bin/activate
python3 -m pip install xyz
If you wish to install a Python application that isn't in Homebrew,
it may be easiest to use 'pipx install xyz', which will manage a
virtual environment for you. You can install pipx with
brew install pipx
You may restore the old behavior of pip by passing
the '--break-system-packages' flag to pip, or by adding
'break-system-packages = true' to your pip.conf file. The latter
will permanently disable this error.
If you disable this error, we STRONGLY recommend that you additionally
pass the '--user' flag to pip, or set 'user = true' in your pip.conf
file. Failure to do this can result in a broken Homebrew installation.
Read more about this behavior here: <https://peps.python.org/pep-0668/>
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.</code></pre>
</div>
</div>
</p>
<p>When I follow the instructions and try to install it with the Homebrew command:</p>
<pre><code>brew install flask
</code></pre>
<p>It produces the following error:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>==> Downloading https://formulae.brew.sh/api/formula.jws.json
######################################################################### 100.0%
==> Downloading https://formulae.brew.sh/api/cask.jws.json
######################################################################### 100.0%
Warning: No available formula with the name "flask". Did you mean flash, flank, flake or flac?
==> Searching for similarly named formulae and casks...
==> Formulae
flash flank flake flac
To install flash, run:
brew install flash
Haroon@MacBook-Pro ~ % poetry install flask
Poetry could not find a pyproject.toml file in /Users/Haroon or its parents</code></pre>
</div>
</div>
</p>
|
<python><macos><flask><homebrew>
|
2024-08-03 22:12:48
| 1
| 301
|
haroon khan
|
78,829,678
| 6,597,296
|
Is there a way to do this with a list comprehension?
|
<p>I have a list that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>data = ['1', '12', '123']
</code></pre>
<p>I want to produce a new list, that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>result = ['$1', '1', '$2', '12', '$3', '123']
</code></pre>
<p>where the number after the <code>$</code> sign is the length of the next element.</p>
<p>The straightforward way to do this is with a <code>for</code> loop:</p>
<pre class="lang-py prettyprint-override"><code>result = []
for element in data:
    result += [f'${len(element)}'] + [element]
</code></pre>
<p>but I was wondering if it's possible to do it in a more elegant way - with a list comprehension, perhaps?</p>
<p>I could do</p>
<pre class="lang-py prettyprint-override"><code>result = [[f'${len(e)}', e] for e in data]
</code></pre>
<p>but this results in a list of lists:</p>
<pre class="lang-py prettyprint-override"><code>[['$1', '1'], ['$2', '12'], ['$3', '123']]
</code></pre>
<p>I could flatten that with something like</p>
<pre class="lang-py prettyprint-override"><code>result = sum([[f'${len(e)}', e] for e in data], [])
</code></pre>
<p>or even</p>
<pre class="lang-py prettyprint-override"><code>result = [x for xs in [[f'${len(e)}', e] for e in data] for x in xs]
</code></pre>
<p>but this is getting rather difficult to read. Is there a better way to do this?</p>
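<p>One more option I'm aware of: <code>itertools.chain.from_iterable</code>, the standard-library idiom for flattening one level of nesting:</p>

```python
from itertools import chain

data = ['1', '12', '123']
# Flatten the generator of ('$N', element) pairs one level.
result = list(chain.from_iterable((f'${len(e)}', e) for e in data))
print(result)  # ['$1', '1', '$2', '12', '$3', '123']
```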
|
<python><list-comprehension>
|
2024-08-03 20:08:41
| 8
| 578
|
bontchev
|
78,829,629
| 1,403,470
|
date conversion and any() in Pony ORM with SQLite
|
<p>I am building a web application tutorial using <a href="https://docs.ponyorm.org/" rel="nofollow noreferrer">Pony ORM</a>. My Python is working, but feels very clunky: in particular, I'm managing date conversion by hand after fetching records, and am doing concatenate-then-split string operations to handle a one-to-many relationship where I would naturally use a JSON field. DB schema, Pony entities, my query, and my conversion code are below; I would be very grateful for advice on how to simplify this.</p>
<h2>Database Schema</h2>
<p>This database keeps track of which staff members have performed which experiments (many-to-many), which experimental plates are involved in which experiments (many-to-one), and which of those plates have been invalidated (optional one-to-one).</p>
<pre class="lang-sql prettyprint-override"><code>-- staff members: staff_id is primary key
CREATE TABLE staff (
    staff_id BIGINT,
    personal TEXT,
    family TEXT
);

-- experiments: sample_id is primary key
CREATE TABLE experiment (
    sample_id BIGINT,
    kind TEXT,
    start TEXT,
    "end" TEXT -- may be NULL if the experiment is ongoing
);

-- join table showing which staff were involved in which experiment
-- there is at least one staff member associated with every experiment
CREATE TABLE performed (
    staff_id BIGINT,
    sample_id BIGINT
);

-- plate_id is primary key; plate-to-experiment is many to one
-- there is at least one plate associated with each experiment
CREATE TABLE plate (
    plate_id BIGINT,
    sample_id BIGINT,
    date TEXT,
    filename TEXT
);

-- invalidated plates (along with who invalidated the plate and when)
-- this table only contains records for plates that have been invalidated,
-- and contains at most one such record for each plate
CREATE TABLE invalidated (
    plate_id BIGINT,
    staff_id BIGINT,
    date TEXT
);
</code></pre>
<h2>Entity Classes</h2>
<p>These are straightforward given the table definitions above.</p>
<pre class="lang-py prettyprint-override"><code>class Staff(DB.Entity):
    staff_id = orm.PrimaryKey(int)
    personal = orm.Required(str)
    family = orm.Required(str)
    performed = orm.Set("Performed")
    invalidated = orm.Set("Invalidated")


class Experiment(DB.Entity):
    sample_id = orm.PrimaryKey(int)
    kind = orm.Required(str)
    start = orm.Required(str)
    end = orm.Optional(str)
    performed = orm.Set("Performed")
    plate = orm.Set("Plate")


class Performed(DB.Entity):
    staff_id = orm.Required(Staff)
    sample_id = orm.Required(Experiment)
    orm.PrimaryKey(staff_id, sample_id)


class Plate(DB.Entity):
    plate_id = orm.PrimaryKey(int)
    sample_id = orm.Required(Experiment)
    date = orm.Required(date)
    filename = orm.Required(str)
    invalidated = orm.Set("Invalidated")


class Invalidated(DB.Entity):
    plate_id = orm.Required(Plate)
    staff_id = orm.Required(Staff)
    date = orm.Required(date)
    orm.PrimaryKey(plate_id, staff_id)
</code></pre>
<h2>My Problematic Query</h2>
<p>I want the following for each experiment:</p>
<ul>
<li>experiment ID (so that I can construct a hyperlink to the experiment's page)</li>
<li>start and end dates (again, the end date may be <code>None</code> if the experiment is ongoing)</li>
<li>whether or not the experiment has any invalidated plates</li>
<li>the IDs and names of all staff involved in the experiment</li>
</ul>
<p>What I have found is:</p>
<ol>
<li>The first three fields (ID, start date, and end date) are easy to get, but I have to convert the two date fields from text to Python <code>date</code> objects after retrieving the values: Pony has a <code>str2date</code> function, but there doesn't appear to be a way to use that inside a query. (Trying to do so produces error messages.)</li>
<li>I was expecting <code>any()</code> and <code>all()</code> functions to go along with Pony's <code>count()</code>, <code>sum()</code>, and other aggregation functions, but they don't appear to exist and Pony rejects use of the built-in functions. I'm therefore using <code>count() > 0</code> to answer the question, "Have any of the plates in this experiment been invalidated?"</li>
<li>Since several staff may be involved in a single experiment, I am constructing a string with each staff member's ID, personal names, and family name, then using Pony's <code>group_concat()</code> to join those to get a single string in the query result. I then post-process this (strip, split, and convert the ID back to an integer). I realize may actually be necessary because Pony doesn't support nested fields (i.e., doesn't seem to be able to create JSON results), but I'm hoping there's a simpler way.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def experiment_details():
    query = orm.select(
        (
            e.sample_id,
            e.start,
            e.end,
            orm.count(e.plate.invalidated) != 0,
            orm.group_concat(
                f"<{s.staff_id}|{s.personal}|{s.family}>"
                for s in Staff
                if e.sample_id in s.performed.sample_id.sample_id
            ),
        )
        for e in Experiment
    )
    rows = list(query.order_by(lambda eid, start, end, inv, staff: eid))
    reformatters = (_reformat_as_is, _reformat_date, _reformat_date,
                    _reformat_as_is, _reformat_staff)
    rows = [[f(r) for (f, r) in zip(reformatters, row)] for row in rows]
    return rows

def _reformat_as_is(text):
    """Do not reformat (used for uniform zipping)."""
    return text

def _reformat_date(text):
    """Reformat possibly-empty date."""
    return None if text is None else str2date(text)

def _reformat_staff(text):
    """Convert concatenated staff information back to list of tuples."""
    fields = text.lstrip("<").rstrip(">").split(">,<")
    values = [f.split("|") for f in fields]
    result = [(int(v[0]), v[1], v[2]) for v in values]
    return result
</code></pre>
<p>My thanks in advance to anyone who can help make this simpler—my future students will be grateful as well. If you want the database and code in question, I am <code>gvwilson@third-bit.com</code>.</p>
|
<python><sqlite><ponyorm>
|
2024-08-03 19:39:37
| 0
| 1,403
|
Greg Wilson
|
78,829,500
| 3,018,860
|
Querying data from Simbad using astroquery
|
<p>I'm making a script in Python to get information for all objects from the NGC and IC catalogs. Actually, I already have this information from OpenNGC; however, the coordinates don't have the same precision, so I need to combine both dataframes.</p>
<p>What I want is: the name, RA in J2000, Dec in J2000 and the type. What I also would like, but it seems still more difficult: the constellation and the magnitude (flux B).</p>
<p>What I'm getting: a lot of repeated results. For example, for the cluster NGC 188 I got a lot of lines, one for every object in the cluster, with the individual magnitudes. So I removed the magnitudes (flux B) to get only the objects. If necessary, I will insert the magnitudes manually from OpenNGC.</p>
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
from astroquery.simbad import Simbad
from astropy.table import Table
# Get all the NGC/IC objects
def query_deep_sky_objects(catalog):
    adql_query = f"""
        SELECT TOP 10000 main_id, otype, ra, dec
        FROM basic
        WHERE main_id LIKE 'NGC%' OR main_id LIKE 'IC%'
        ORDER BY main_id ASC
    """
    result = Simbad.query_tap(adql_query)
    return result
objects = query_deep_sky_objects('NGC')
objects.write('simbad_objects.csv', format='csv', overwrite=True)
</code></pre>
<p>I'm very tired, since I've spent all day searching for catalogs with enough precision. I also used another catalog taken from VizieR (<code>'VII/118/ngc2000'</code>), which, like OpenNGC, lacks the precision I need. I'm making a sky atlas for my publications; without the required precision, DSOs and stars appear out of place.</p>
|
<python><astropy><astroquery><simbad>
|
2024-08-03 18:46:06
| 2
| 2,834
|
Unix
|
78,829,305
| 10,181,236
|
Use python to automatically post on Instagram
|
<p>I am using the instabot package in Python to build an app that automatically posts photos to my profile.</p>
<p>This is the code I wrote</p>
<pre><code>from instabot import Bot
import configparser
# Reading Configs
config = configparser.ConfigParser()
config.read("config.ini")
# Setting configuration values
username = str(config['Instagram']['user'])
password = str(config['Instagram']['password'])
bot = Bot()
bot.login(username=username, password=password)
bot.upload_photo("./pic.png", caption="caption")
bot.logout()
</code></pre>
<p>the problem is that I get the error</p>
<pre><code>2024-08-03 19:01:54,188 - INFO - Not yet logged in starting: PRE-LOGIN FLOW!
2024-08-03 19:01:54,754 - ERROR - Request returns 429 error!
2024-08-03 19:01:54,754 - WARNING - That means 'too many requests'. I'll go to sleep for 5 minutes.
</code></pre>
<p>How can I solve it? Can I use another package?</p>
|
<python><instagram>
|
2024-08-03 17:12:39
| 2
| 512
|
JayJona
|
78,829,255
| 726,373
|
concurrent kafka client offset commit management data structure
|
<h3>Question: Comparing Ring Buffer with Hash Map and Interval Designs for Kafka Client - Which is Better?</h3>
<p>I'm working on optimizing a Kafka client for high-throughput message processing. We're considering two different designs for managing message offsets and statuses: a ring buffer with a hash map and an interval-based approach. I'm seeking insights on their performance and whether granular locking is possible for each design in Go and Python.</p>
<p><strong>Design 1: Ring Buffer with Hash Map</strong></p>
<ul>
<li><strong>Structure:</strong> Uses a ring buffer to store message offsets and a hash map to quickly update their processing status.</li>
<li><strong>Operations:</strong>
<ul>
<li><strong>Add Offset:</strong> O(1) - Append the offset to the buffer.</li>
<li><strong>Update Status:</strong> O(1) - Use the hash map to quickly locate the offset in the buffer and update its status.</li>
<li><strong>Forward Scan:</strong> O(n) - Sequentially scan the buffer to find the next commit point.</li>
</ul>
</li>
</ul>
<p><strong>Design 2: Interval-Based Approach</strong></p>
<ul>
<li><strong>Structure:</strong> Stores intervals of processed message offsets (e.g., [offset1, offset5]) instead of individual offsets.</li>
<li><strong>Description:</strong> In the interval approach, instead of updating the status of individual offsets, we manage and merge intervals. When an offset is processed, it either extends an existing interval or starts a new one. The key operations are adding offsets, updating statuses within intervals, and merging intervals when necessary.</li>
<li><strong>Operations:</strong>
<ul>
<li><strong>Add Offset:</strong> O(1) - Insert the offset into the appropriate interval.</li>
<li><strong>Update Status:</strong> O(1) - Update the status within the interval.</li>
<li><strong>Interval Merge:</strong> O(1) - Merge intervals when a message is processed.</li>
<li><strong>Forward Scan:</strong> O(1) - Directly access the highest initial offset without scanning.</li>
</ul>
</li>
</ul>
<p><strong>Performance Comparison:</strong></p>
<ul>
<li><p><strong>Ring Buffer with Hash Map:</strong></p>
<ul>
<li><strong>Add Offset:</strong> O(1)</li>
<li><strong>Update Status:</strong> O(1)</li>
<li><strong>Forward Scan:</strong> O(n)</li>
</ul>
</li>
<li><p><strong>Interval-Based Approach:</strong></p>
<ul>
<li><strong>Add Offset:</strong> O(1)</li>
<li><strong>Update Status:</strong> O(1)</li>
<li><strong>Interval Merge:</strong> O(1)</li>
<li><strong>Forward Scan:</strong> O(1)</li>
</ul>
</li>
</ul>
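<p>To make the comparison concrete, here is a minimal Python sketch of the interval idea (hypothetical, not taken from any real Kafka client): offsets are merged into a contiguous committed prefix as they complete, so no forward scan is needed.</p>

```python
class OffsetTracker:
    """Track processed offsets; the commit point is the end of the
    contiguous prefix of processed offsets."""

    def __init__(self, start=0):
        self.committed = start - 1  # highest contiguous processed offset
        self.pending = set()        # processed offsets with a gap before them

    def mark_processed(self, offset):
        self.pending.add(offset)
        # merge step: extend the contiguous prefix (amortized O(1) per offset)
        while self.committed + 1 in self.pending:
            self.committed += 1
            self.pending.discard(self.committed)

    def commit_point(self):
        return self.committed + 1   # next offset that is safe to commit


t = OffsetTracker()
for off in (1, 2, 0, 4):
    t.mark_processed(off)
print(t.commit_point())  # 3 -- offset 4 is still blocked by the gap at 3
```

<p>With one tracker per partition, granular locking reduces to one mutex per tracker; in Go that maps naturally to a per-partition goroutine or <code>sync.Mutex</code>, while under CPython's GIL the lock mostly guards against interleaving rather than buying parallelism.</p>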
<p><strong>Questions:</strong></p>
<ol>
<li>Which design is better for high-throughput message processing?</li>
<li>Is granular locking feasible for both designs, particularly for the interval-based approach?
<ul>
<li>How can granular locking be implemented in Go, considering its concurrency model?</li>
<li>How effective is granular locking in Python, given the Global Interpreter Lock (GIL)?</li>
</ul>
</li>
</ol>
<p>Any insights, examples, or experiences with these designs would be greatly appreciated!</p>
|
<python><go><apache-kafka><concurrency>
|
2024-08-03 16:48:08
| 0
| 642
|
Jack Peng
|
78,829,063
| 3,343,378
|
Exclude decorators in pdoc documentation
|
<p>I would like to exclude decorators from my API documentation using <code>pdoc</code>. Is there any possible way I can achieve that?</p>
<p>For an example, if I write:</p>
<pre><code>@xl_func("blah", "blah")
def examples_are_blah():
    print("blah")
</code></pre>
<p>I get the following output:
<a href="https://i.sstatic.net/zObjr9K5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zObjr9K5.png" alt="enter image description here" /></a></p>
<p>I would like to remove the decorator function <code>xl_func</code> from the documentation output, since it sometimes has many parameters that don't add any value to the documentation.</p>
|
<python><documentation-generation><pdoc>
|
2024-08-03 15:25:44
| 1
| 444
|
blah_crusader
|
78,828,880
| 5,561,472
|
strange `re.sub` behaviour if the string include `\n` symbol
|
<p>I am trying to extract a part of string prior to substring:</p>
<pre class="lang-py prettyprint-override"><code>import re
s = "ab\nc"
print(re.sub(r"(.*)b.*", r"\1", s)) # a\nc
</code></pre>
<p>I would expect to get <code>a</code>, but I am getting <code>a\nc</code>.</p>
<p>Why is that?</p>
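<p>The behavior can be checked directly (a quick demonstration, assuming the goal is to match across lines): by default <code>.</code> does not match <code>\n</code>, so the pattern only consumes the first line and the rest of the string is left untouched; the <code>re.DOTALL</code> flag changes that.</p>

```python
import re

s = "ab\nc"
# '.' does not match '\n' by default: the match is only "ab" -> replaced by "a"
print(re.sub(r"(.*)b.*", r"\1", s))                   # a\nc
# with DOTALL, '.' matches newlines too, so ".*" runs to the end of the string
print(re.sub(r"(.*)b.*", r"\1", s, flags=re.DOTALL))  # a
```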
|
<python><regex>
|
2024-08-03 14:04:41
| 1
| 6,639
|
Andrey
|
78,828,715
| 934,904
|
Finding config.json for Llama 3.1 8B
|
<p>I installed the Llama 3.1 8B model through Meta's <a href="https://github.com/meta-llama/llama-models" rel="nofollow noreferrer">Github page</a>, but I can't get their example code to work. I'm running the following code in the same directory as the Meta-Llama-3.1-8B folder:</p>
<pre><code>import transformers
import torch
pipeline = transformers.pipeline(
    "text-generation",
    model="Meta-Llama-3.1-8B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda"
)
</code></pre>
<p>The error is</p>
<pre><code>OSError: Meta-Llama-3.1-8B does not appear to have a file named config.json
</code></pre>
<p>Where can I get <code>config.json</code>?</p>
<p>I've installed the latest <code>transformers</code> module, and I understand that I can access the remote model on HuggingFace. But I'd rather use my local model. Is this possible?</p>
|
<python><pytorch><huggingface><llama><llama3>
|
2024-08-03 12:54:18
| 1
| 5,966
|
MatthewScarpino
|
78,828,636
| 4,489,082
|
ValueError while saving a dataframe
|
<p>I am facing a hurdle while saving a pandas DataFrame to a parquet file.</p>
<p>Code I am using -</p>
<pre><code>import pandas as pd
import yfinance as yf
start_date = "2022-08-06"
end_date = "2024-08-05"
ticker = 'RELIANCE.NS'
data = yf.download(tickers=ticker, start=start_date, end=end_date, interval="1h")
data.reset_index(inplace=True)
data['Date'] = data['Datetime'].dt.date
data['Time'] = data['Datetime'].dt.time
data.to_parquet('./RELIANCE.parquet')
</code></pre>
<p>The error it produces is - <code>ValueError: Can't infer object conversion type: 0</code></p>
<p>Can someone tell me how to fix this?</p>
<p>PS: Detailed error below-</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 15
12 data['Date'] = data['Datetime'].dt.date
13 data['Time'] = data['Datetime'].dt.time
---> 15 data.to_parquet('./RELIANCE.parquet')
File ~/python_venv/lib/python3.10/site-packages/pandas/util/_decorators.py:333, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
327 if len(args) > num_allow_args:
328 warnings.warn(
329 msg.format(arguments=_format_argument_list(allow_args)),
330 FutureWarning,
331 stacklevel=find_stack_level(),
332 )
--> 333 return func(*args, **kwargs)
File ~/python_venv/lib/python3.10/site-packages/pandas/core/frame.py:3113, in DataFrame.to_parquet(self, path, engine, compression, index, partition_cols, storage_options, **kwargs)
3032 """
3033 Write a DataFrame to the binary parquet format.
3034
(...)
3109 >>> content = f.read()
3110 """
3111 from pandas.io.parquet import to_parquet
-> 3113 return to_parquet(
3114 self,
3115 path,
3116 engine,
3117 compression=compression,
3118 index=index,
3119 partition_cols=partition_cols,
3120 storage_options=storage_options,
3121 **kwargs,
3122 )
File ~/python_venv/lib/python3.10/site-packages/pandas/io/parquet.py:480, in to_parquet(df, path, engine, compression, index, storage_options, partition_cols, filesystem, **kwargs)
476 impl = get_engine(engine)
478 path_or_buf: FilePath | WriteBuffer[bytes] = io.BytesIO() if path is None else path
--> 480 impl.write(
481 df,
482 path_or_buf,
483 compression=compression,
484 index=index,
485 partition_cols=partition_cols,
486 storage_options=storage_options,
487 filesystem=filesystem,
488 **kwargs,
489 )
491 if path is None:
492 assert isinstance(path_or_buf, io.BytesIO)
File ~/python_venv/lib/python3.10/site-packages/pandas/io/parquet.py:349, in FastParquetImpl.write(self, df, path, compression, index, partition_cols, storage_options, filesystem, **kwargs)
344 raise ValueError(
345 "storage_options passed with file object or non-fsspec file path"
346 )
348 with catch_warnings(record=True):
--> 349 self.api.write(
350 path,
351 df,
352 compression=compression,
353 write_index=index,
354 partition_on=partition_cols,
355 **kwargs,
356 )
File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:1304, in write(filename, data, row_group_offsets, compression, file_scheme, open_with, mkdirs, has_nulls, write_index, partition_on, fixed_text, append, object_encoding, times, custom_metadata, stats)
1301 check_column_names(data.columns, partition_on, fixed_text,
1302 object_encoding, has_nulls)
1303 ignore = partition_on if file_scheme != 'simple' else []
-> 1304 fmd = make_metadata(data, has_nulls=has_nulls, ignore_columns=ignore,
1305 fixed_text=fixed_text,
1306 object_encoding=object_encoding,
1307 times=times, index_cols=index_cols,
1308 partition_cols=partition_on, cols_dtype=cols_dtype)
1309 if custom_metadata:
1310 kvm = fmd.key_value_metadata or []
File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:904, in make_metadata(data, has_nulls, ignore_columns, fixed_text, object_encoding, times, index_cols, partition_cols, cols_dtype)
902 se.name = column
903 else:
--> 904 se, type = find_type(data[column], fixed_text=fixed,
905 object_encoding=oencoding, times=times,
906 is_index=is_index)
907 col_has_nulls = has_nulls
908 if has_nulls is None:
File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:122, in find_type(data, fixed_text, object_encoding, times, is_index)
120 elif dtype == "O":
121 if object_encoding == 'infer':
--> 122 object_encoding = infer_object_encoding(data)
124 if object_encoding == 'utf8':
125 type, converted_type, width = (parquet_thrift.Type.BYTE_ARRAY,
126 parquet_thrift.ConvertedType.UTF8,
127 None)
File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:357, in infer_object_encoding(data)
355 s += 1
356 else:
--> 357 raise ValueError("Can't infer object conversion type: %s" % data)
358 if s > 10:
359 break
ValueError: Can't infer object conversion type: 0 2022-08-08
1 2022-08-08
2 2022-08-08
3 2022-08-08
4 2022-08-08
...
3398 2024-08-02
3399 2024-08-02
3400 2024-08-02
3401 2024-08-02
3402 2024-08-02
Name: Date, Length: 3403, dtype: object
</code></pre>
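<p>Judging from the traceback, <code>.dt.date</code> and <code>.dt.time</code> produce object-dtype columns holding <code>datetime.date</code>/<code>datetime.time</code> values, which fastparquet cannot map to a parquet type. A possible workaround sketch (an assumption, not a verified fix for this exact setup) is to store those columns as strings instead:</p>

```python
import pandas as pd

df = pd.DataFrame({"Datetime": pd.to_datetime(["2024-08-02 10:00:00",
                                               "2024-08-02 11:00:00"])})
# plain strings instead of datetime.date / datetime.time objects,
# which parquet writers can serialize without type inference
df["Date"] = df["Datetime"].dt.strftime("%Y-%m-%d")
df["Time"] = df["Datetime"].dt.strftime("%H:%M:%S")
print(df[["Date", "Time"]])
```

<p>Alternatively, passing <code>engine="pyarrow"</code> to <code>to_parquet</code> may handle these object columns differently.</p>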
|
<python><pandas><yfinance>
|
2024-08-03 12:15:03
| 3
| 793
|
pkj
|
78,828,344
| 315,427
|
How to modify Excel sheet without losing extensions?
|
<p>I am trying to modify an Excel file which has a number of VBA actions (not created by me). I made a gentle attempt to modify a single combo box item.</p>
<pre><code>from openpyxl import load_workbook
# Load the workbook
workbook = load_workbook('input.xlsx')
# Select the worksheet
worksheet = workbook['Monthly']
# Change the value of the cell C5
worksheet['C5'] = 9
# Save the workbook with a new name
workbook.save('output.xlsx')
</code></pre>
<p>These are the warnings I got:</p>
<blockquote>
<p>UserWarning: Data Validation extension is not supported and will be
removed warn(msg)</p>
</blockquote>
<blockquote>
<p>UserWarning: Conditional Formatting extension is
not supported and will be removed warn(msg)</p>
</blockquote>
<p>The output file size is much smaller and some functionality is gone, although the combo box value has been modified. My question is whether there is a library that will preserve the functionality of the Data Validation / Conditional Formatting extensions while allowing me to modify cell values?</p>
|
<python><excel>
|
2024-08-03 09:30:38
| 1
| 29,709
|
Pablo
|
78,828,192
| 6,718,081
|
What caused Python 3.13.0b3 (compiled with GIL disabled) to be slower than 3.12.0?
|
<p>I did a simple performance test of Python <code>3.12.0</code> against Python <code>3.13.0b3</code> compiled with the <code>--disable-gil</code> flag. The program calculates Fibonacci numbers using a <code>ThreadPoolExecutor</code> or <code>ProcessPoolExecutor</code>. The docs on the PEP introducing the disabled GIL say there is a bit of overhead, mostly due to biased reference counting followed by per-object locking (<a href="https://peps.python.org/pep-0703/#performance" rel="nofollow noreferrer">https://peps.python.org/pep-0703/#performance</a>), but put it at around 5-8% on the pyperformance benchmark suite. My simple benchmark shows a significantly larger difference. Indeed, Python 3.13 without the GIL utilizes all CPUs
with a <code>ThreadPoolExecutor</code>, but it is much slower than Python 3.12 with the GIL. Based on the CPU utilization and the elapsed time, we can conclude that Python 3.13 uses several times more clock cycles than 3.12.</p>
<p>Program code:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import datetime
from functools import partial
import sys
import logging
import multiprocessing
logging.basicConfig(
    format='%(levelname)s: %(message)s',
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

cpus = multiprocessing.cpu_count()
pool_executor = ProcessPoolExecutor if len(sys.argv) > 1 and sys.argv[1] == '1' else ThreadPoolExecutor
python_version_str = f'{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}'
logger.info(f'Executor={pool_executor.__name__}, python={python_version_str}, cpus={cpus}')

def fibonacci(n: int) -> int:
    if n < 0:
        raise ValueError("Incorrect input")
    elif n == 0:
        return 0
    elif n == 1 or n == 2:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

start = datetime.datetime.now()
with pool_executor(8) as executor:
    for task_id in range(30):
        executor.submit(partial(fibonacci, 30))
    executor.shutdown(wait=True)
end = datetime.datetime.now()
elapsed = end - start
logger.info(f'Elapsed: {elapsed.total_seconds():.2f} seconds')
</code></pre>
<p>Test results:</p>
<pre><code># TEST Linux 5.15.0-58-generic, Ubuntu 20.04.6 LTS
INFO: Executor=ThreadPoolExecutor, python=3.12.0, cpus=2
INFO: Elapsed: 10.54 seconds
INFO: Executor=ProcessPoolExecutor, python=3.12.0, cpus=2
INFO: Elapsed: 4.33 seconds
INFO: Executor=ThreadPoolExecutor, python=3.13.0b3, cpus=2
INFO: Elapsed: 22.48 seconds
INFO: Executor=ProcessPoolExecutor, python=3.13.0b3, cpus=2
INFO: Elapsed: 22.03 seconds
</code></pre>
<p>Can anyone explain why I experience such a difference compared to the overhead reported by the pyperformance benchmark suite?</p>
<h2>EDIT 1</h2>
<ol>
<li>I have tried with <code>pool_executor(cpus)</code> instead of <code>pool_executor(8)</code> -> still got the similar results.</li>
<li>I watched this video <a href="https://www.youtube.com/watch?v=zWPe_CUR4yU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=zWPe_CUR4yU</a> and executed the following test: <a href="https://github.com/ArjanCodes/examples/blob/main/2024/gil/main.py" rel="nofollow noreferrer">https://github.com/ArjanCodes/examples/blob/main/2024/gil/main.py</a></li>
</ol>
<p>Results:</p>
<pre><code>Version of python: 3.12.0a7 (main, Oct 8 2023, 12:41:37) [GCC 9.4.0]
GIL cannot be disabled
Single-threaded: 78498 primes in 6.67 seconds
Threaded: 78498 primes in 7.89 seconds
Multiprocessed: 78498 primes in 5.85 seconds
Version of python: 3.13.0b3 experimental free-threading build (heads/3.13.0b3:7b413952e8, Jul 27 2024, 11:19:31) [GCC 9.4.0]
GIL is disabled
Single-threaded: 78498 primes in 61.42 seconds
Threaded: 78498 primes in 32.29 seconds
Multiprocessed: 78498 primes in 39.85 seconds
</code></pre>
<p>so yet another test on my machine when we end up with multiple times slower performance. Btw. On the video we can see the similar overhead results as it is described in the PEP.</p>
<h2>EDIT 2</h2>
<p>As @ekhumoro suggested I did configure the build with the following flags:<br />
<code> ./configure --disable-gil --enable-optimizations</code><br />
and it seems the <code>--enable-optimizations</code> flag makes a significant difference in the considered benchmarks. The previous build was done with the following configuration:<br />
<code>./configure --with-pydebug --disable-gil</code>.</p>
<p>Tests results:</p>
<h3>Fibonacci benchmark:</h3>
<pre><code>INFO: Executor=ThreadPoolExecutor, python=3.12.0, cpus=2
INFO: Elapsed: 10.25 seconds
INFO: Executor=ProcessPoolExecutor, python=3.12.0, cpus=2
INFO: Elapsed: 4.27 seconds
INFO: Executor=ThreadPoolExecutor, python=3.13.0, cpus=2
INFO: Elapsed: 6.94 seconds
INFO: Executor=ProcessPoolExecutor, python=3.13.0, cpus=2
INFO: Elapsed: 6.94 seconds
</code></pre>
<h3>Prime numbers benchmark:</h3>
<pre><code>Version of python: 3.12.0a7 (main, Oct 8 2023, 12:41:37) [GCC 9.4.0]
GIL cannot be disabled
Single-threaded: 78498 primes in 5.77 seconds
Threaded: 78498 primes in 7.21 seconds
Multiprocessed: 78498 primes in 3.23 seconds
Version of python: 3.13.0b3 experimental free-threading build (heads/3.13.0b3:7b413952e8, Aug 3 2024, 14:47:48) [GCC 9.4.0]
GIL is disabled
Single-threaded: 78498 primes in 7.99 seconds
Threaded: 78498 primes in 4.17 seconds
Multiprocessed: 78498 primes in 4.40 seconds
</code></pre>
<p>So the general gain from moving from Python 3.12 multiprocessing to Python 3.13 no-GIL multi-threading is significant memory savings (we have only a single process).</p>
<p>When we compare CPU overhead for the machine with only 2 cores:</p>
<p>[Fibonacci] Python 3.13 multi-threading against Python 3.12 multiprocessing: (6.94 - 4.27) / 4.27 * 100% ~= 63% overhead</p>
<p>[Prime numbers] Python 3.13 multi-threading against Python 3.12 multiprocessing: (4.17 - 3.23) / 3.23 * 100% ~= 29% overhead</p>
|
<python><performance><cpython><gil><pep>
|
2024-08-03 08:11:39
| 2
| 923
|
K4liber
|
78,828,009
| 10,200,497
|
How can I get the group that has the largest streak of negative numbers in a column and add another condition to filter the groups?
|
<p>This is an extension to this accepted <a href="https://stackoverflow.com/a/78824669/10200497">answer</a>.</p>
<p>My DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, 2, -100],
'b': [1, 2, 3, 4, 5, 10, 80, 90, 100, 99, 1, 12]
}
)
</code></pre>
<p>Expected output:</p>
<pre><code> a b
5 -3 10
6 -13 80
7 -3 90
8 -2 100
</code></pre>
<p>Logic:</p>
<p>a) Selecting the longest streak of negatives in <code>a</code>.</p>
<p>b) If, for example, there are two streaks with the same size, I want the one with the greater sum of <code>b</code>. In <code>df</code> there are two streaks with a size of 4, but I want the second one because its sum of <code>b</code> is greater.</p>
<p>My Attempt:</p>
<pre><code>import numpy as np
s = np.sign(df['a'])
df['g'] = s.ne(s.shift()).cumsum()
df['size'] = df.groupby('g')['g'].transform('size')
df['b_sum'] = df.groupby('g')['b'].transform('sum')
</code></pre>
<p>Edit 1:</p>
<p>I have provided an extra <code>df</code> to clarify the point. I want the negative streaks under any circumstance. In this <code>df</code> the positive streak is longer and its <code>b</code> is greater but I still want the last two rows which is the longest negative streak:</p>
<pre><code>df = pd.DataFrame(
{
'a': [-0.65, 11, 18, 1, -2, -3],
'b': [1, 20, 30000, 4322, 300, 3]
}
)
#output
4 -2.00 300
5 -3.00 3
</code></pre>
<p>This is my attempt to get this output but if there are no negative rows in a dataframe then it throws an error:</p>
<pre><code>df['sign'] = np.sign(df.a)
df['g'] = df.sign.ne(df.sign.shift()).cumsum()
df = df.loc[df.a.lt(0)]
out = df[df.g.eq(df.groupby('g')['b'].agg(['size', 'sum'])
.query('size == size.max()')['sum'].idxmax())]
</code></pre>
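<p>One way to make the attempt total (a sketch; returning an empty frame when there are no negative rows is my assumption about the desired behavior) is to guard the empty case before computing the group statistics:</p>

```python
import numpy as np
import pandas as pd

def longest_negative_streak(df):
    s = np.sign(df['a'])
    g = s.ne(s.shift()).cumsum()           # ids of consecutive-sign runs
    neg = df[df['a'].lt(0)]
    if neg.empty:
        return neg                         # no negative rows at all
    stats = neg.groupby(g)['b'].agg(['size', 'sum'])
    # longest negative streak; ties broken by the larger sum of 'b'
    best = stats.query('size == size.max()')['sum'].idxmax()
    return df[g.eq(best)]

df = pd.DataFrame({'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, 2, -100],
                   'b': [1, 2, 3, 4, 5, 10, 80, 90, 100, 99, 1, 12]})
print(longest_negative_streak(df))  # rows 5..8
```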
|
<python><pandas><dataframe>
|
2024-08-03 06:34:34
| 3
| 2,679
|
AmirX
|
78,828,007
| 4,399,016
|
Integrating Holoviz Panel and Django to build Dashboard Webapps
|
<p>I am looking for any working example of a Django project that is integrated with <a href="https://panel.holoviz.org/" rel="nofollow noreferrer">Panel Apps</a>. There is a section in the <a href="https://panel.holoviz.org/how_to/integrations/Django.html" rel="nofollow noreferrer">user guide</a> that gives tips on how to integrate.
But I am unable to get it to work.</p>
<p>Could you please share any available resources online that demonstrates the integration.</p>
<p>The code I tried is this (and its variants)
The actual business logic is not important here. I want to integrate the Panel Dashboards and Django successfully.</p>
<pre><code># project structure:
# myproject/
# manage.py
# myproject/
# __init__.py
# settings.py
# urls.py
# wsgi.py
# app1/
# __init__.py
# apps.py
# views.py
# app2/
# __init__.py
# apps.py
# views.py
# myproject/settings.py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'app1',
'app2',
'panel',
]
# myproject/urls.py
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('app1/', include('app1.urls')),
path('app2/', include('app2.urls')),
]
# app1/apps.py
from django.apps import AppConfig
class App1Config(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'app1'
# app1/views.py
import panel as pn
from django.http import HttpResponse
def app1_panel(request):
# Create a simple Panel app
slider = pn.widgets.FloatSlider(start=0, end=10, name='Slider')
text = pn.widgets.StaticText(name='Slider Value')
@pn.depends(slider.param.value)
def update_text(value):
text.value = f"Slider value: {value:.2f}"
app = pn.Column(slider, text)
# Serve the Panel app
return HttpResponse(app.servable())
# app1/urls.py
from django.urls import path
from . import views
urlpatterns = [
path('', views.app1_panel, name='app1_panel'),
]
# app2/apps.py
from django.apps import AppConfig
class App2Config(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'app2'
# app2/views.py
import panel as pn
import altair as alt
import pandas as pd
from django.http import HttpResponse
def app2_panel(request):
# Create a simple Panel app with an Altair plot
df = pd.DataFrame({'x': range(10), 'y': [i**2 for i in range(10)]})
chart = alt.Chart(df).mark_line().encode(
x='x',
y='y'
).properties(
title='Square Function',
width=400,
height=300
)
altair_pane = pn.pane.Vega(chart)
app = pn.Column(
pn.pane.Markdown("# App 2: Square Function Plot"),
altair_pane
)
# Serve the Panel app
return HttpResponse(app.servable())
# app2/urls.py
from django.urls import path
from . import views
urlpatterns = [
path('', views.app2_panel, name='app2_panel'),
]
# manage.py (no changes needed)
# Requirements (update requirements.txt):
# Django
# panel
# altair
# pandas
</code></pre>
|
<python><django><web-applications><dashboard><holoviz-panel>
|
2024-08-03 06:32:24
| 0
| 680
|
prashanth manohar
|
78,827,959
| 3,442,922
|
How to get the index of an ordinal value from a numpy 3d array?
|
<p>I have a 3d array of shape (2, 3, 3) from which I would like to get the index of the 10th value. I have built a program using the conventional method of loops.
Below is my solution:</p>
<pre><code>import numpy as np
arr = np.arange(1, 19).reshape(2, 3, 3)
el = 10
count = 0
loc = ()
for each_arr in np.arange(arr.shape[0]):
for row in np.arange(arr.shape[1]):
for col in np.arange(arr.shape[2]):
count = col + arr.shape[2] * row + each_arr * arr.shape[1] * arr.shape[2]
# print(count, end=" ")
print(arr[each_arr, row, col], end=" ")
if (count + 1) == el:
loc = (each_arr, row, col)
# print(f"{arr[each_arr, row, col]} at {(each_arr, row, col)}")
print()
print()
print(f"{el} element at {loc}")
</code></pre>
<p>I'd like to ask: is there a numpy way to accomplish this? (Any kind of comment is welcome.)</p>
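<p>For comparison, numpy's <code>np.unravel_index</code> maps a flat (0-based) position to multi-dimensional indices directly, which should reproduce the loop's result:</p>

```python
import numpy as np

arr = np.arange(1, 19).reshape(2, 3, 3)
el = 10
# flat position 9 (0-based) in a (2, 3, 3) array -> (1, 0, 0)
loc = tuple(int(i) for i in np.unravel_index(el - 1, arr.shape))
print(loc, int(arr[loc]))  # (1, 0, 0) 10
```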
|
<python><numpy>
|
2024-08-03 06:03:01
| 0
| 336
|
Vivek
|
78,827,743
| 11,748,924
|
resample_poly of one-hot encoded masking
|
<p>I have these tensors:</p>
<pre><code>X_test = X_unseen_flutter[0,0,:][None, :] # (Batch Size, Amplitude Length) -> (1, 3208)
y_true = y_unseen_flutter[0,0,:][None, :] # (Batch Size, Mask Length, Num of Classes) -> (1, 3208, 4) (One-Hot Encoded)
</code></pre>
<p>I can resample the <code>X_test</code>, but I have no idea for the <code>y_true</code>:</p>
<pre><code>from scipy.signal import resample_poly
X_test_resampled = resample_poly(X_test, up=512, down=3208, axis=1) # (1, 512)
y_true_resampled = # ??? I expect shape (1, 512, 4)
</code></pre>
<p>What is the equivalent of <code>resample_poly</code> but for the one-hot encoded label?</p>
<p>I expect there is a function to do that where it accepts <code>tensor, up, down, mask_axis, class_axis</code></p>
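<p>I am not aware of a built-in with exactly that signature, but a common sketch (my assumption about what is acceptable for a mask) is to resample each class channel as a soft signal and then re-binarize with <code>argmax</code>:</p>

```python
import numpy as np
from scipy.signal import resample_poly

def resample_onehot(y, up, down, mask_axis=1, class_axis=-1):
    # treat each one-hot channel as a soft per-class signal
    soft = resample_poly(y.astype(float), up=up, down=down, axis=mask_axis)
    hard = soft.argmax(axis=class_axis)           # back to class labels
    return np.eye(y.shape[class_axis])[hard]      # re-one-hot encode

y = np.eye(4)[np.repeat([0, 1, 2, 3], 8)][None]   # (1, 32, 4) one-hot mask
out = resample_onehot(y, up=8, down=32)
print(out.shape)  # (1, 8, 4)
```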
|
<python><numpy><machine-learning><scipy><signal-processing>
|
2024-08-03 03:21:27
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
78,827,639
| 5,255,911
|
Playwright - issue with the footer template parsing
|
<p>Playwright's (Python) save-a-page-as-PDF function works fine when there's no customisation in the header or footer. However, when I try to introduce a custom footer, the values don't seem to get injected properly.</p>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright
def generate_pdf_with_page_numbers():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()
        page = context.new_page()

        # Navigate to the desired page
        page.goto('https://example.com')

        # Generate PDF with page numbers in the footer
        pdf = page.pdf(
            path="output.pdf",
            format="A4",
            display_header_footer=True,
            footer_template="""
            <div style="width: 100%; text-align: center; font-size: 10px;">
                Page {{pageNumber}} of {{totalPages}}
            </div>
            """,
            margin={"top": "40px", "bottom": "40px"}
        )

        browser.close()

# Run the function to generate the PDF
generate_pdf_with_page_numbers()
</code></pre>
<p>I was expecting:</p>
<pre><code>Page 1 of 1
</code></pre>
<p>But actually I get:</p>
<pre><code>Page {{pageNumber}} of {{totalPages}}
</code></pre>
<p>Do you see any issue with this code?</p>
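<p>One thing worth checking (this is how Chromium's print-to-PDF templates work, and Playwright delegates PDF generation to Chromium): the footer template substitutes values into elements with special <em>class names</em>, not into <code>{{...}}</code> placeholders, so a template along these lines may behave differently:</p>

```python
# Chromium injects values into spans with classes such as
# 'pageNumber', 'totalPages', 'date', 'title', and 'url'
footer_template = """
<div style="width: 100%; text-align: center; font-size: 10px;">
    Page <span class="pageNumber"></span> of <span class="totalPages"></span>
</div>
"""
print('class="pageNumber"' in footer_template)  # True
```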
|
<python><playwright><playwright-python>
|
2024-08-03 01:10:04
| 1
| 896
|
MaduKan
|
78,827,396
| 4,023,639
|
Python 3d scatter plot linking annotation between subplots
|
<p>I have two (or more) 3d scatter plots in subplots, each showing a different set of 3 variables from a data set. When I hover over a data point in one subplot, I'd like the other subplots to automatically highlight the same data sample. Currently, I am able to show an annotation (on hover) for one plot using the mplcursors module (in a Jupyter notebook), but I'd like the hover annotation to be linked across all subplots.</p>
<p>Below is the sample image generated with the current implementation:
<a href="https://i.sstatic.net/oTOblf2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTOblf2A.png" alt="enter image description here" /></a></p>
<p>Minimal working code:</p>
<pre class="lang-py prettyprint-override"><code>%matplotlib ipympl
import plotly.express as px
import matplotlib.pyplot as plt
import mplcursors
df = px.data.iris()
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(121, projection='3d')
ax.scatter(df['sepal_length'], df['sepal_width'], df['petal_length'], marker='.')
ax.set_xlabel('Sepal Length')
ax.set_ylabel('Sepal Width')
ax.set_zlabel('Petal Length')
ax.set_title("Scatter plot of sepal length, sepal width, and petal length")
ax2 = fig.add_subplot(122, projection='3d')
ax2.scatter(df['sepal_length'], df['sepal_width'], df['petal_width'], marker='.')
ax2.set_xlabel('Sepal Length')
ax2.set_ylabel('Sepal Width')
ax2.set_zlabel('Petal Width')
ax2.set_title("Scatter plot of sepal length, sepal width, and petal width")
mplcursors.cursor(hover=True)
plt.show()
</code></pre>
<p>Thank you in advance.</p>
|
<python><matplotlib><hover><scatter-plot>
|
2024-08-02 22:17:27
| 1
| 1,083
|
user32147
|
78,827,386
| 54,873
|
How to use xxhash in hashlib.file_digest?
|
<p>I want to quickly compute hash values of large files on disk to compare them to one another.</p>
<p>I'm using the following:</p>
<pre><code>import hashlib

def sha256sum(filename):
    with open(filename, 'rb', buffering=0) as f:
        return hashlib.file_digest(f, 'sha256').hexdigest()
</code></pre>
<p>But I'd like to use <code>xxhash</code> since I hear it's faster. This doesn't work:</p>
<pre><code>import hashlib

def xxhashsum(filename):
    with open(filename, 'rb', buffering=0) as f:
        return hashlib.file_digest(f, 'xxhash').hexdigest()
</code></pre>
<p>Is there a version that would?</p>
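<p>For reference, the chunked-read fallback I have in mind, sketched with <code>hashlib.sha256</code> since that is guaranteed to exist; my assumption is that <code>xxhash.xxh64</code> could be passed as <code>hasher_factory</code> the same way once the <code>xxhash</code> package is installed:</p>

```python
import hashlib

def file_hash(filename, hasher_factory=hashlib.sha256):
    # hasher_factory: any callable returning an object with .update()/.hexdigest(),
    # e.g. hashlib.sha256 from the stdlib, or (assumption) xxhash.xxh64
    h = hasher_factory()
    with open(filename, "rb") as f:
        # read in 1 MiB chunks to keep memory flat for large files
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

<p>If I read the docs right, <code>hashlib.file_digest</code> (Python 3.11+) also accepts any callable returning a hash object, so <code>hashlib.file_digest(f, xxhash.xxh64)</code> might work directly; the string <code>'xxhash'</code> fails because a name string is only resolved against the algorithms hashlib itself knows.</p>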
|
<python><hash><sha><hashlib>
|
2024-08-02 22:11:53
| 1
| 10,076
|
YGA
|
78,827,379
| 610,569
|
What is the max_tokens number I can put for OpenAI GPT generate function?
|
<p>I've tried <code>100_000</code> and <code>20_000</code>, but it seems like only <code>10_000</code> works:</p>
<pre><code>from openai import OpenAI

client = OpenAI()

messages = [{"role": "user", "content": "Hello"}]

completion = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages,
    max_tokens=10_000
)

print(completion.choices[0].message.content)
</code></pre>
<p>There's the doc on <a href="https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens" rel="nofollow noreferrer">https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens</a> but there's no clear indication of what's the largest <code>max_tokens</code> value possible.</p>
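<p>Absent a documented ceiling, one approach I considered is probing the boundary empirically. Here is a sketch of a generic helper that binary-searches the largest accepted value; <code>try_value</code> is a hypothetical callable that would wrap the API call in a try/except and return whether the request was accepted:</p>

```python
def find_max_accepted(try_value, lo=1, hi=200_000):
    """Binary-search the largest v in [lo, hi] for which try_value(v) is True.

    Assumes try_value is monotonic (accepted below some threshold, rejected
    above it) and that try_value(lo) is True.
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the range always shrinks
        if try_value(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Hypothetical wiring (not run here): try_value would call
# client.chat.completions.create(..., max_tokens=v) and return False
# when the API rejects the max_tokens value with a 400 error.
```

<p>For what it's worth, my understanding is that the practical ceiling is the model's maximum output tokens rather than its full context window, but I couldn't find that number stated on the page linked above.</p>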
|
<python><openai-api><large-language-model><gpt-4o-mini>
|
2024-08-02 22:09:30
| 1
| 123,325
|
alvas
|
78,827,074
| 6,197,439
|
Visual replacement of area of subwidgets in pair of custom widgets with a single widget in PyQt5?
|
<p>In the example code below, I have a CustomWidget consisting of a main label (red) on top, and sub label (yellow) on bottom - and I instantiate several of those in a scroll area:</p>
<p><a href="https://i.sstatic.net/0k9O4HMC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0k9O4HMC.png" alt="starting window" /></a></p>
<p>When I click on the "Merge" button, I'd like consecutive pairs of sub-labels to be replaced with a single sublabel, having the text of the left sub-label, and covering the area of both sub-labels - something like this (edited in image processing software):</p>
<p><a href="https://i.sstatic.net/DanDTPR4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DanDTPR4.png" alt="window with desired result" /></a></p>
<p>So I thought I'd just calculate the width of the "merged" sub-label, apply it to the left sub-label, and hide the right sub-label to achieve this effect. Unfortunately, this doesn't work, as it disturbs the layout relationships set up earlier, so the actual result looks something like this:</p>
<p><a href="https://i.sstatic.net/jty2qWkF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jty2qWkF.png" alt="actual but undesired result" /></a></p>
<p>How can I achieve the kind of pairwise visual merging of sub-labels that I want?</p>
<p>The code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
class CustomWidget(QWidget):
    def __init__(self, parent=None, maintext=""):
        super().__init__(parent)
        self.maintext = maintext
        self.origwidth = 50

        self.lbl_main = QLabel(self.maintext)
        self.lbl_main.setAlignment(Qt.AlignCenter)
        self.lbl_main.setMinimumWidth(self.origwidth)
        self.lbl_main.setFixedWidth(self.origwidth)
        self.lbl_main.setFixedHeight(100)
        self.lbl_main.setStyleSheet("background: #3FFF0000;")

        self.lbl_sub = QLabel("sub_{}".format(self.maintext))
        self.lbl_sub.setAlignment(Qt.AlignCenter)
        self.lbl_sub.setMinimumWidth(self.origwidth)
        self.lbl_sub.setFixedWidth(self.origwidth)
        self.lbl_sub.setFixedHeight(20)
        self.lbl_sub.setStyleSheet("background: #3FFFFF00; border: 1px solid black;")

        self.vspacer = QLabel()

        self.layout = QVBoxLayout(self)
        self.layout.setContentsMargins(0, 0, 0, 0)
        self.layout.setSpacing(0)
        self.layout.addWidget(self.lbl_main)
        self.layout.addWidget(self.lbl_sub)
        self.layout.addWidget(self.vspacer)
        self.layout.setAlignment(self.lbl_main, Qt.AlignHCenter)
        self.layout.setAlignment(self.lbl_sub, Qt.AlignHCenter)
        self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed)

class MyMainWindow(QMainWindow):
    def __init__(self):
        super(MyMainWindow, self).__init__()
        self.num_cws = 10

        # Define the geometry of the main window
        self.setGeometry(300, 300, 300, 200)
        self.setWindowTitle("my first window")

        self.centralwidget = QWidget(self)
        self.layout = QHBoxLayout(self.centralwidget)
        #self.centralwidget.setLayout(self.layout)
        self.setCentralWidget(self.centralwidget)

        self.scroll_area = QScrollArea()
        self.scroll_area.setVerticalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
        self.scroll_area.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOn)
        self.scroll_central = QWidget()
        self.scroll_layout = QHBoxLayout(self.scroll_central)
        self.scroll_area.setWidget(self.scroll_central)
        self.scroll_area.setWidgetResizable(True)
        self.layout.addWidget(self.scroll_area)

        self.btn = QPushButton(text='Merge')
        self.layout.addWidget(self.btn)
        self.btn.clicked.connect(self.onBtnClick)

        self.customwidgets = []
        for ix in range(self.num_cws):
            cw = CustomWidget(None, "{0}{0}".format(chr(65+ix)))
            self.scroll_layout.addWidget(cw)
            self.customwidgets.append(cw)

        self.show()

    def onBtnClick(self):
        if self.btn.text() == "Merge":
            self.btn.setText("Unmerge")
            for ix in range(0, self.num_cws, 2):
                cw_left = self.customwidgets[ix]
                cw_right = self.customwidgets[ix+1]
                llblgeom = cw_left.lbl_sub.geometry().translated(cw_left.lbl_sub.mapTo(self.scroll_central, QPoint(0, 0)))
                rlblgeom = cw_right.lbl_sub.geometry().translated(cw_right.lbl_sub.mapTo(self.scroll_central, QPoint(0, 0)))
                lbl_dual_width = rlblgeom.right() - llblgeom.left()
                cw_right.lbl_sub.hide()
                cw_left.lbl_sub.setFixedWidth(lbl_dual_width)
        else:
            self.btn.setText("Merge")
            for ix in range(0, self.num_cws, 2):
                cw_left = self.customwidgets[ix]
                cw_right = self.customwidgets[ix+1]
                cw_left.lbl_sub.setFixedWidth(cw_left.origwidth)
                cw_right.lbl_sub.show()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    myGUI = MyMainWindow()
    sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5>
|
2024-08-02 20:00:38
| 1
| 5,938
|
sdbbs
|
78,827,002
| 2,233,608
|
How to query a MultiIndex by MultiIndex and choose the "best" row?
|
<p>Say I have a <code>MultiIndex</code> by <code>MultiIndex</code> <code>DataFrame</code> similar to the one generated here (in the real use case the list of races is dynamic and not known ahead of time):</p>
<pre class="lang-py prettyprint-override"><code>import random
import pandas as pd
random.seed(1)
data_frame_rows = pd.MultiIndex.from_arrays([[], [], []], names=("car", "engine", "wheels"))
data_frame_columns = pd.MultiIndex.from_arrays([[], [], []], names=("group", "subgroup", "details"))
data_frame = pd.DataFrame(index=data_frame_rows, columns=data_frame_columns)
for car in ("mustang", "corvette", "civic"):
    for engine in ("normal", "supercharged"):
        for wheels in ("normal", "wide"):
            data_frame.loc[(car, engine, wheels), ("cost", "", "money ($)")] = int(random.random() * 100)
            data_frame.loc[(car, engine, wheels), ("cost", "", "maintenance (minutes)")] = int(random.random() * 60)
            for race in ("f1", "indy", "lemans"):
                percent_win = random.random()
                recommended = percent_win >= 0.8
                data_frame.loc[(car, engine, wheels), ("race", race, "win %")] = percent_win
                data_frame.loc[(car, engine, wheels), ("race", race, "recommended")] = recommended
</code></pre>
<p>Which then will look something like:</p>
<pre><code>group cost race
subgroup f1 indy lemans
details money ($) maintenance (minutes) win % recommended win % recommended win % recommended
car engine wheels
mustang normal normal 13.0 50.0 0.763775 False 0.255069 False 0.495435 False
wide 44.0 39.0 0.788723 False 0.093860 False 0.028347 False
supercharged normal 83.0 25.0 0.762280 False 0.002106 False 0.445387 False
wide 72.0 13.0 0.945271 True 0.901427 True 0.030590 False
corvette normal normal 2.0 32.0 0.939149 True 0.381204 False 0.216599 False
wide 42.0 1.0 0.221692 False 0.437888 False 0.495812 False
supercharged normal 23.0 13.0 0.218781 False 0.459603 False 0.289782 False
wide 2.0 50.0 0.556454 False 0.642294 False 0.185906 False
civic normal normal 99.0 51.0 0.120890 False 0.332695 False 0.721484 False
wide 71.0 56.0 0.422107 False 0.830036 True 0.670306 False
supercharged normal 30.0 35.0 0.882479 True 0.846197 True 0.505284 False
wide 58.0 2.0 0.242740 False 0.797404 False 0.414314 False
</code></pre>
<p>I now want to find, for each car, the row whose configuration (engine and wheel combination) is the "best" one for that car. For example, the <code>civic</code> has two recommended configurations, but the <code>civic</code> with a <code>supercharged</code> <code>engine</code> and <code>normal</code> <code>wheels</code> has the highest chance of winning a race (88% in the <code>f1</code> race), so that row should be chosen. Any configuration that isn't recommended for any race, or that has a lower best win chance than another recommended configuration of the same car, should be filtered out. The <code>mustang</code> and the <code>corvette</code> each have exactly one configuration that is recommended for at least one race, so those are the configurations I would choose for those two cars.</p>
<p>So the final output would be each car listed at most once, with the best configuration. If a car has no recommended configurations then I want it out completely.</p>
<p>I've read <a href="https://pandas.pydata.org/docs/user_guide/advanced.html" rel="nofollow noreferrer">this</a> multiple times and for the life of me I can't figure it out.</p>
<p>As a starting point to just get the recommended rows I tried something like:</p>
<pre><code>data_frame[(data_frame.loc[:,idx["race",:,"recommended"]]==True)]
</code></pre>
<p>But that doesn't seem to filter the rows; it just sets everything to either NaN or True:</p>
<pre><code>group cost race
subgroup f1 indy lemans
details money ($) maintenance (minutes) win % recommended win % recommended win % recommended
car engine wheels
mustang normal normal NaN NaN NaN NaN NaN NaN NaN NaN
wide NaN NaN NaN NaN NaN NaN NaN NaN
supercharged normal NaN NaN NaN NaN NaN NaN NaN NaN
wide NaN NaN NaN True NaN True NaN NaN
corvette normal normal NaN NaN NaN True NaN NaN NaN NaN
wide NaN NaN NaN NaN NaN NaN NaN NaN
supercharged normal NaN NaN NaN NaN NaN NaN NaN NaN
wide NaN NaN NaN NaN NaN NaN NaN NaN
civic normal normal NaN NaN NaN NaN NaN NaN NaN NaN
wide NaN NaN NaN NaN NaN True NaN NaN
supercharged normal NaN NaN NaN True NaN True NaN NaN
wide NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
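<p>To make the goal concrete, here is a sketch of the selection logic I'm effectively trying to express, assuming (as above) columns like <code>("race", race, "recommended")</code> and <code>("race", race, "win %")</code>: keep configurations recommended for at least one race, then take the configuration with the highest win % per car. This sidesteps boolean-masking the whole frame (which produces the NaN/True frame shown above) by first reducing each row to scalars:</p>

```python
import pandas as pd

def best_recommended_rows(df):
    # Assumes column MultiIndex levels ("group", "subgroup", "details") holding
    # ("race", <race>, "recommended") booleans and ("race", <race>, "win %")
    # floats, and a row MultiIndex whose first level is named "car".
    sl = pd.IndexSlice
    rec = df.loc[:, sl["race", :, "recommended"]].astype(bool)
    wins = df.loc[:, sl["race", :, "win %"]].astype(float)

    keep = rec.any(axis=1)             # recommended for at least one race
    best_win = wins.max(axis=1)[keep]  # best win % per kept configuration
    if best_win.empty:
        return df.iloc[0:0]
    # per car, the row label (a full index tuple) with the top win %
    winners = best_win.groupby(level="car").idxmax().tolist()
    return df.loc[winners]
```

<p>Whether this is the idiomatic way is exactly my question; in particular I'm unsure if the <code>groupby(level="car").idxmax()</code> step is the right way to pick one row per car.</p>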
|
<python><pandas><dataframe><multi-index>
|
2024-08-02 19:37:49
| 2
| 1,178
|
niltz
|
78,826,732
| 13,469,674
|
How do create_history_aware_retriever and RunnableWithMessageHistory interact when used together?
|
<p>I am building a chatbot, following the Conversational RAG example in langchain's documentation: <a href="https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/" rel="nofollow noreferrer">https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/</a></p>
<p>So far I was able to create the bot with chat history, exactly like the example, only with my own model, and retriever:</p>
<pre><code>retriever = load_embeddings()
llm = load_llm()
history_aware_retriever = contextualize_llm_with_chat_history(model=llm, embeddings_retriever=retriever)
rag_chain = create_qa_llm_chain(chat_history_chain=history_aware_retriever, model=llm)
</code></pre>
<p>It is only when I introduce <code>RunnableWithMessageHistory</code> and use the stream invocation of the model that I encounter an error:</p>
<pre><code>def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = ChatMessageHistory()
return store[session_id]
store = {}
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
get_session_history,
input_messages_key="input",
history_messages_key="chat_history",
output_messages_key="answer",
)
</code></pre>
<p>If I use conversational_rag_chain.invoke() I receive the information I expect. However, when I choose to stream it:</p>
<pre><code>for chunk in conversational_rag_chain.stream({"input": question}, config={"configurable": {"session_id": "abc123"}}):
print(chunk)
</code></pre>
<p>Before the streaming starts I can see the following in the console:</p>
<pre><code>Error in RootListenersTracer.on_chain_end callback: KeyError('answer')
Error in callback coroutine: KeyError('answer')
</code></pre>
<p>I do not understand what this error means, since the stream starts as expected right after those two messages.</p>
<p>I can see I am not the only one with this issue: <a href="https://github.com/langchain-ai/langchain/issues/24713" rel="nofollow noreferrer">https://github.com/langchain-ai/langchain/issues/24713</a></p>
|
<python><langchain><py-langchain>
|
2024-08-02 18:06:59
| 1
| 955
|
DPM
|