QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (string, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
77,889,046
| 13,086,128
|
polars.read_csv vs polars.read_csv_batched vs polars.scan_csv?
|
<p>What is the difference between <code>polars.read_csv</code> vs <code>polars.read_csv_batched</code> vs <code>polars.scan_csv</code> ?</p>
<p><code>polars.read_csv</code> appears to be the equivalent of <code>pandas.read_csv</code>, since the two share a name.</p>
<p>Which one should be used in which scenario, and how are they similar to or different from <code>pandas.read_csv</code>?</p>
|
<python><python-polars>
|
2024-01-26 21:20:47
| 2
| 30,560
|
Talha Tayyab
|
77,888,975
| 2,465,039
|
Pydantic serialization of containers of mixed types
|
<p>Can Pydantic v2 handle containers of mixed types? For example, how would serialization take place in a class defined as follows:</p>
<pre><code>class Foo(BaseModel):
foo_list: list
</code></pre>
<p>If the code above is not the recommended way of doing things, what is the recommended way of defining lists that may contain elements of different types?</p>
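For what it's worth, a minimal sketch of what Pydantic v2 does with an unparameterized `list` field (the class name and values here are made up):

```python
from pydantic import BaseModel

class Foo(BaseModel):
    # An unparameterized list accepts elements of any type as-is
    foo_list: list

f = Foo(foo_list=[1, "two", 3.0, {"four": 4}])
print(f.model_dump())       # {'foo_list': [1, 'two', 3.0, {'four': 4}]}
print(f.model_dump_json())  # a JSON string with the same content
```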
|
<python><pydantic><pydantic-v2>
|
2024-01-26 21:03:06
| 1
| 977
|
user2465039
|
77,888,944
| 1,471,980
|
How do you pull Aruba reports using API calls via Python?
|
<p>I need to log in to an Aruba appliance and get report data using the following API at the aruba-airwave endpoint.</p>
<p>I have this code so far:</p>
<pre><code>import xml.etree.ElementTree as ET # libxml2 and libxslt
import requests # HTTP requests
import csv
import sys
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
# Login/password for Airwave (read-only account)
LOGIN = 'operator'
PASSWD = 'verylongpasswordforyourreadonlyaccount'
# URL for REST API
LOGIN_URL = 'https://aruba-airwave.example.com/LOGIN'
AP_LIST_URL = 'https://aruba-airwave.example.com/report_detail?id=100'
# Delimiter for CSV output
DELIMITER = ';'
# HTTP headers for each HTTP request
HEADERS = {
'Content-Type' : 'application/x-www-form-urlencoded',
'Cache-Control' : 'no-cache'
}
# ---------------------------------------------------------------------------
# Functions
# ---------------------------------------------------------------------------
def open_session():
"""Open HTTPS session with login"""
ampsession = requests.Session()
data = 'credential_0={0}&credential_1={1}&destination=/&login=Log In'.format(LOGIN, PASSWD)
loginamp = ampsession.post(LOGIN_URL, headers=HEADERS, data=data)
return {'session' : ampsession, 'login' : loginamp}
def get_ap_list(session):
output=session.get(AP_LIST_URL, headers=HEADERS, verify=False)
ap_list_output=output.content
return ap_list_output
def main():
session=open_session()
get_ap_list(session)
main()
</code></pre>
<p>I get this error:</p>
<pre><code>get() takes no keyword arguments
</code></pre>
<p>Any ideas what I am doing wrong here, or suggestions for another approach?</p>
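One detail worth checking: `open_session()` returns a dict, and `main()` passes that whole dict to `get_ap_list`, so `session.get(...)` ends up calling `dict.get`, which takes no keyword arguments. A stdlib-only sketch of the same failure (no Aruba access needed):

```python
# A dict standing in for the return value of open_session()
session_info = {"session": "the-real-requests-session", "login": "response"}

# This mirrors the bug: dict.get() accepts no keyword arguments
try:
    session_info.get("https://example.com", headers={})
except TypeError as exc:
    print(exc)  # get() takes no keyword arguments

# The fix would be to call .get() on the actual Session object, e.g.:
# output = session_info["session"].get(AP_LIST_URL, headers=HEADERS, verify=False)
```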
|
<python><aruba>
|
2024-01-26 20:56:45
| 1
| 10,714
|
user1471980
|
77,888,886
| 525,865
|
Running bs4 scraper needs to be redefined to enrich the dataset - some issues
|
<p>I have a bs4 scraper that works with Selenium - see far below.</p>
<p>It works fine so far.</p>
<p>See far below my approach to fetch some data from <strong>the given page</strong>: clutch.co/il/it-services</p>
<p>To enrich the scraped data with additional information, I tried to modify the scraping logic to extract more details from each company's page. Here is an updated version of the code that extracts the company's website and additional information:</p>
<p>here we have <strong>script1</strong></p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
from tabulate import tabulate
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
url = "https://clutch.co/il/it-services"
driver.get(url)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
# Your scraping logic goes here
company_info = soup.select(".directory-list div.provider-info")
data_list = []
for info in company_info:
company_name = info.select_one(".company_info a").get_text(strip=True)
location = info.select_one(".locality").get_text(strip=True)
website = info.select_one(".company_info a")["href"]
# Additional information you want to extract goes here
# For example, you can extract the description
description = info.select_one(".description").get_text(strip=True)
data_list.append({
"Company Name": company_name,
"Location": location,
"Website": website,
"Description": description
})
df = pd.DataFrame(data_list)
df.index += 1
print(tabulate(df, headers="keys", tablefmt="psql"))
df.to_csv("it_services_data_enriched.csv", index=False)
driver.quit()
</code></pre>
<p><strong>Ideas behind this</strong> extended version: in this code I added a loop to go through <strong>each company's information</strong>, extracted the website, and added a placeholder for additional information (in this case, the description). I thought I could adapt this loop to extract more data as needed. At least, that is the idea.</p>
<p><strong>The working model:</strong> the structure of the HTML has of course changed here, so I need to adapt the scraping logic: I might need to adjust the CSS selectors based on the current structure of the page. We also need to customize the scraping logic for the specific details we want to extract from each company's page. Conclusion: I think I am very close, but see what I got back:</p>
<pre><code>/home/ubuntu/PycharmProjects/clutch_scraper_2/.venv/bin/python /home/ubuntu/PycharmProjects/clutch_scraper_2/clutch_scraper_II.py
/home/ubuntu/PycharmProjects/clutch_scraper_2/clutch_scraper_II.py:2: DeprecationWarning:
Pyarrow will become a required dependency of pandas in the next major release of pandas (pandas 3.0),
(to allow more performant data types, such as the Arrow string type, and better interoperability with other libraries)
but was not found to be installed on your system.
If this would cause problems for you,
please provide us feedback at https://github.com/pandas-dev/pandas/issues/54466
import pandas as pd
Traceback (most recent call last):
File "/home/ubuntu/PycharmProjects/clutch_scraper_2/clutch_scraper_II.py", line 29, in <module>
description = info.select_one(".description").get_text(strip=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'get_text'
Process finished with exit code
</code></pre>
<p>And now, see below <strong>my already working model</strong>: my approach to fetch some data from the given page: clutch.co/il/it-services</p>
<p>here we have <strong>script2</strong></p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
from tabulate import tabulate
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
url = "https://clutch.co/il/it-services"
driver.get(url)
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
# Your scraping logic goes here
company_names = soup.select(".directory-list div.provider-info--header .company_info a")
locations = soup.select(".locality")
company_names_list = [name.get_text(strip=True) for name in company_names]
locations_list = [location.get_text(strip=True) for location in locations]
data = {"Company Name": company_names_list, "Location": locations_list}
df = pd.DataFrame(data)
df.index += 1
print(tabulate(df, headers="keys", tablefmt="psql"))
df.to_csv("it_services_data.csv", index=False)
driver.quit()
</code></pre>
<pre><code>+----+-----------------------------------------------------+--------------------------------+
| | Company Name | Location |
|----+-----------------------------------------------------+--------------------------------|
| 1 | Artelogic | L'viv, Ukraine |
| 2 | Iron Forge Development | Palm Beach Gardens, FL |
| 3 | Lionwood.software | L'viv, Ukraine |
| 4 | Greelow | Tel Aviv-Yafo, Israel |
| 5 | Ester Digital | Tel Aviv-Yafo, Israel |
| 6 | Nextly | Vitória, Brazil |
| 7 | Rootstack | Austin, TX |
| 8 | Novo | Dallas, TX |
| 9 | Scalo | Tel Aviv-Yafo, Israel |
| 10 | TLVTech | Herzliya, Israel |
| 11 | Dofinity | Bnei Brak, Israel |
| 12 | PURPLE | Petah Tikva, Israel |
| 13 | Insitu S2 Tikshuv LTD | Haifa, Israel |
| 14 | Opinov8 Technology Services | London, United Kingdom |
| 15 | Sogo Services | Tel Aviv-Yafo, Israel |
| 16 | Naviteq LTD | Tel Aviv-Yafo, Israel |
| 17 | BMT - Business Marketing Tools | Ra'anana, Israel |
| 18 | Profisea | Hod Hasharon, Israel |
| 19 | MeteorOps | Tel Aviv-Yafo, Israel |
| 20 | Trivium Solutions | Herzliya, Israel |
| 21 | Dynomind.tech | Jerusalem, Israel |
| 22 | Madeira Data Solutions | Kefar Sava, Israel |
| 23 | Titanium Blockchain | Tel Aviv-Yafo, Israel |
| 24 | Octopus Computer Solutions | Tel Aviv-Yafo, Israel |
| 25 | Reblaze | Tel Aviv-Yafo, Israel |
| 26 | ELPC Networks Ltd | Rosh Haayin, Israel |
| 27 | Taldor | Holon, Israel |
| 28 | Clarity | Petah Tikva, Israel |
| 29 | Opsfleet | Kfar Bin Nun, Israel |
| 30 | Hozek Technologies Ltd. | Petah Tikva, Israel |
| 31 | ERG Solutions | Ramat Gan, Israel |
| 32 | Komodo Consulting | Ra'anana, Israel |
| 33 | SCADAfence | Ramat Gan, Israel |
| 34 | Ness Technologies | נס טכנולוגיות | Tel Aviv-Yafo, Israel |
| 35 | Bynet Data Communications Bynet Data Communications | Tel Aviv-Yafo, Israel |
| 36 | Radware | Tel Aviv-Yafo, Israel |
| 37 | BigData Boutique | Rishon LeTsiyon, Israel |
| 38 | NetNUt | Tel Aviv-Yafo, Israel |
| 39 | Asperii | Petah Tikva, Israel |
| 40 | PractiProject | Ramat Gan, Israel |
| 41 | K8Support | Bnei Brak, Israel |
| 42 | Odix | Rosh Haayin, Israel |
| 43 | Panaya | Hod Hasharon, Israel |
| 44 | MazeBolt Technologies | Giv'atayim, Israel |
| 45 | Porat | Tel Aviv-Jaffa, Israel |
| 46 | MindU | Tel Aviv-Yafo, Israel |
| 47 | Valinor Ltd. | Petah Tikva, Israel |
| 48 | entrypoint | Modi'in-Maccabim-Re'ut, Israel |
| 49 | Adelante | Tel Aviv-Yafo, Israel |
| 50 | Code n' Roll | Haifa, Israel |
| 51 | Linnovate | Bnei Brak, Israel |
| 52 | Viceman Agency | Tel Aviv-Jaffa, Israel |
| 53 | develeap | Tel Aviv-Yafo, Israel |
| 54 | Chalir.com | Binyamina-Giv'at Ada, Israel |
| 55 | WolfCode | Rishon LeTsiyon, Israel |
| 56 | Penguin Strategies | Ra'anana, Israel |
| 57 | ANG Solutions | Tel Aviv-Yafo, Israel |
+----+-----------------------------------------------------+--------------------------------+
</code></pre>
<p>What is the aim: I want to fetch some more data from the given page, clutch.co/il/it-services - e.g. the website and so on.</p>
<p><strong>update_:</strong> The error AttributeError: 'NoneType' object has no attribute 'get_text' indicates that the .select_one(".description") method did not find any HTML element with the class ".description" for the current company information, resulting in None. Therefore, calling .get_text(strip=True) on None raises an AttributeError.</p>
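A guarded version of that line, sketched on a tiny inline HTML snippet (the markup here is illustrative, not clutch.co's real structure; assumes bs4 is installed):

```python
from bs4 import BeautifulSoup

html = """
<div class="provider-info"><p class="description">We build things.</p></div>
<div class="provider-info"></div>
"""
soup = BeautifulSoup(html, "html.parser")

descriptions = []
for info in soup.select("div.provider-info"):
    # select_one returns None when nothing matches, so guard before get_text
    el = info.select_one(".description")
    descriptions.append(el.get_text(strip=True) if el else "N/A")

print(descriptions)  # ['We build things.', 'N/A']
```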
<p>more to follow... later the day.</p>
<p><strong>update2:</strong>
Note: @jakob had an interesting idea, posted here: <a href="https://stackoverflow.com/questions/77861365/selenium-in-google-colab-without-having-to-worry-about-managing-the-chromedriver">Selenium in Google Colab without having to worry about managing the ChromeDriver executable - i tried an example using kora.selenium</a>.
"I made <a href="https://github.com/jpjacobpadilla/Google-Colab-Selenium" rel="nofollow noreferrer">Google-Colab-Selenium</a> to solve this problem. It manages the executable and the required Selenium Options for you." - well, that sounds very interesting. At the moment I cannot imagine that we can get Selenium working on Colab in such a way that the above-mentioned scraper runs fully and well. Ideas would be awesome.</p>
<p><strong>Jakob:</strong> the real issue is that the website you are trying to scrape is using CloudFlare, which can detect selenium.
I wrote a little code to scrape the data that you were looking for.
You actually don't need to use Selenium as the data is already baked right into the HTML when you go to the webpage.</p>
<p><a href="https://colab.research.google.com/drive/1qkZ1OV_Nqeg13UY3S9pY0IXuB4-q3Mvx?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1qkZ1OV_Nqeg13UY3S9pY0IXuB4-q3Mvx?usp=sharing</a></p>
<p>here we have <strong>script3</strong></p>
<pre><code>%pip install -q curl_cffi
%pip install -q fake-useragent
%pip install -q lxml
from curl_cffi import requests
from fake_useragent import UserAgent
# we need to take care for this: https://pypi.org/project/fake-useragent/
ua = UserAgent()
headers = {'User-Agent': ua.safari}
resp = requests.get('https://clutch.co/il/it-services', headers=headers, impersonate="safari15_3")
resp.status_code
# I like to use this to verify the contents of the request
from IPython.display import HTML
HTML(resp.text)
from lxml.html import fromstring
tree = fromstring(resp.text)
data = []
for company in tree.xpath('//ul/li[starts-with(@id, "provider")]'):
data.append({
"name": company.xpath('./@data-title')[0].strip(),
"location": company.xpath('.//span[@class = "locality"]')[0].text,
"wage": company.xpath('.//div[@data-content = "<i>Avg. hourly rate</i>"]/span/text()')[0].strip(),
"min_project_size": company.xpath('.//div[@data-content = "<i>Min. project size</i>"]/span/text()')[0].strip(),
"employees": company.xpath('.//div[@data-content = "<i>Employees</i>"]/span/text()')[0].strip(),
"description": company.xpath('.//blockquote//p')[0].text,
"website_link": (company.xpath('.//a[contains(@class, "website-link__item")]/@href') or ['Not Available'])[0],
})
import pandas as pd
from pandas import json_normalize
df = json_normalize(data, max_level=0)
df
</code></pre>
<p>That said, I think I understand the approach - fetching the HTML and then working with XPath. The thing I have difficulties with is the user-agent part.</p>
<p>It works awesomely - it is just overwhelming!</p>
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2024-01-26 20:40:18
| 2
| 1,223
|
zero
|
77,888,783
| 7,225,482
|
Adding values in two array with overlapping dimensions (outer join)
|
<p>I have two xarrays with values along the same dimension. Some of the points overlap and some points along that dimension are only in the first or the second array. I would like to get a third array that adds the values and assumes 0 for missing values. I think this would be a sum of the two arrays along the dimension x doing an outer join. But I don't know how to do it.</p>
<pre><code>import xarray as xr
x = xr.DataArray([1,2,3], dims=("x"), coords={"x": ['a', 'b', 'c']})
y = xr.DataArray([10,20,30], dims=("x"), coords={"x": ['b', 'c', 'd']})
x+y
</code></pre>
<p>In the example above I get a new array that only has two values 12 and 23 for points 'b' and 'c'. But what I would like is an array with four values 1, 12, 23, 30 for points 'a', 'b', 'c', 'd'.</p>
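If the goal is an outer-join sum with missing points treated as 0, one way to sketch it is `xr.align` with `join="outer"` and a `fill_value` (other routes exist, e.g. reindexing both arrays to the union of the coordinates):

```python
import xarray as xr

x = xr.DataArray([1, 2, 3], dims="x", coords={"x": ["a", "b", "c"]})
y = xr.DataArray([10, 20, 30], dims="x", coords={"x": ["b", "c", "d"]})

# Align both arrays on the union of the 'x' coordinates, filling gaps with 0
x2, y2 = xr.align(x, y, join="outer", fill_value=0)
z = x2 + y2
print(z.values)  # [ 1 12 23 30]
```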
|
<python><python-xarray>
|
2024-01-26 20:16:14
| 0
| 746
|
Alvaro Aguilar
|
77,888,735
| 5,420,229
|
Insert many-to-many relationship objects using SQLModel when one side of the relationship already exists in the database
|
<p>I am trying to insert records in a database using SQLModel where the data looks like the following.
A House object, which has a color and many locations.
Locations will also be associated with many houses. The input is:</p>
<pre><code>[
{
"color": "red",
"locations": [
{"type": "country", "name": "Netherlands"},
{"type": "municipality", "name": "Amsterdam"},
],
},
{
"color": "green",
"locations": [
{"type": "country", "name": "Netherlands"},
{"type": "municipality", "name": "Amsterdam"},
],
},
]
</code></pre>
<p>Here's a reproducible example of what I'm trying to do:</p>
<pre><code>import asyncio
from typing import List
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.orm import sessionmaker
from sqlmodel import Field, Relationship, SQLModel, UniqueConstraint
from sqlmodel.ext.asyncio.session import AsyncSession
DATABASE_URL = "sqlite+aiosqlite:///./database.db"
engine = create_async_engine(DATABASE_URL, echo=True, future=True)
async def init_db() -> None:
async with engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
SessionLocal = sessionmaker(
autocommit=False,
autoflush=False,
bind=engine,
class_=AsyncSession,
expire_on_commit=False,
)
class HouseLocationLink(SQLModel, table=True):
house_id: int = Field(foreign_key="house.id", nullable=False, primary_key=True)
location_id: int = Field(
foreign_key="location.id", nullable=False, primary_key=True
)
class Location(SQLModel, table=True):
id: int = Field(primary_key=True)
type: str # country, county, municipality, district, city, area, street, etc
name: str # Amsterdam, Germany, My Street, etc
houses: List["House"] = Relationship(
back_populates="locations",
link_model=HouseLocationLink,
)
__table_args__ = (UniqueConstraint("type", "name"),)
class House(SQLModel, table=True):
id: int = Field(primary_key=True)
color: str = Field()
locations: List["Location"] = Relationship(
back_populates="houses",
link_model=HouseLocationLink,
)
# other fields...
data = [
{
"color": "red",
"locations": [
{"type": "country", "name": "Netherlands"},
{"type": "municipality", "name": "Amsterdam"},
],
},
{
"color": "green",
"locations": [
{"type": "country", "name": "Netherlands"},
{"type": "municipality", "name": "Amsterdam"},
],
},
]
async def add_houses(payload) -> List[House]:
result = []
async with SessionLocal() as session:
for item in payload:
locations = []
for location in item["locations"]:
locations.append(Location(**location))
house = House(color=item["color"], locations=locations)
result.append(house)
session.add_all(result)
await session.commit()
asyncio.run(init_db())
asyncio.run(add_houses(data))
</code></pre>
<p>The problem is that when I run this code, it tries to insert duplicated location objects together with the house object.
I'd love to be able to use <code>relationship</code> here because it makes accessing <code>house.locations</code> very easy.</p>
<p>However, I have not been able to figure out how to keep it from trying to insert duplicated locations. Ideally, I'd have a mapper function to perform a <code>get_or_create</code> location.</p>
<p>The closest I've seen to making this possible is SQLAlchemy's <a href="https://docs.sqlalchemy.org/en/20/orm/extensions/associationproxy.html" rel="nofollow noreferrer">association proxy</a>. But looks like SQLModel doesn't support that.</p>
<p>Does anybody have an idea on how to achieve this? If you know how to do it using SQLAlchemy instead of SQLModel, I'd be interested in seeing your solution. I haven't started on this project yet, so I might as well use SQLAlchemy if it will make my life easier.</p>
<p>I've also tried tweaking with <code>sa_relationship_kwargs</code> such as</p>
<pre><code>sa_relationship_kwargs={
"lazy": "selectin",
"cascade": "none",
"viewonly": "true",
}
</code></pre>
<p>But that prevents the association entries from being added to the <code>HouseLocationLink</code> table.</p>
<p>Any pointers will much appreciated. Even if it means changing my approach altogether.</p>
<p>Thanks!</p>
|
<python><sql><sqlalchemy><fastapi><sqlmodel>
|
2024-01-26 20:02:50
| 1
| 485
|
Hannon Queiroz
|
77,888,477
| 1,769,327
|
PyQt6: QThread: Destroyed while thread is still running
|
<p>Consider the following class that implements QThread:</p>
<pre><code>from PyQt6.QtCore import QThread
from stream_consumer.streaming_worker import StreamingWorker
class GuiStreamerController:
def __init__(self, callback_function) -> None:
"""Initialize the class
Args:
callback_function (function): The GUI function that will process results in the GUI thread.
"""
self._thread = QThread()
self._worker = StreamingWorker(callback_function)
self._worker.moveToThread(self._thread)
self._thread.started.connect(self._worker.run)
self._is_running = False
def run(self, local_stream_message_criteria=None) -> None:
"""Start the streaming process.
Args:
local_stream_message_criteria (StreamMessageCriteria): The stream message criteria. Optional.
- Starts the Qthread (which starts the streaming process.)
- Implicitly calls the *StreamingWorker* *run* method (assigned in this class' __init__)
"""
if self._is_running:
return
if local_stream_message_criteria:
self._worker.stream_from_local(local_stream_message_criteria)
self._thread.start()
self._is_running = True
def quit(self) -> None:
"""Quit the streaming process."""
self._worker.quit()
self._thread.quit()
</code></pre>
<p>Now consider this very simple testing script:</p>
<pre><code>...
class TestQCoreApplication:
def test_functionality(self):
self.app = QCoreApplication([])
self.streamer = GuiStreamerController(_handle_stream_result)
self.streamer.run(local_stream_message_criteria)
print('Streamer Started')
QTest.qWait(5000)
self.streamer.quit()
print('Streamer Quit')
self.app.quit()
# Block 1
# if __name__ == '__main__':
# test = TestQCoreApplication()
# test.test_functionality()
# Block 2
# test = TestQCoreApplication()
# test.test_functionality()
# Block 3
def test_this():
test = TestQCoreApplication()
test.test_functionality()
test_this()
</code></pre>
<p>This mostly runs as expected, except that the call to self.streamer.quit() raises the following error:</p>
<pre><code>QThread: Destroyed while thread is still running
</code></pre>
<p>However, if I comment out "Block 3" and enable either "Block 1" or "Block 2", it works fine.</p>
<p>Sorry this is not a completely reproducible example, as there is just way too much code. However, I think the issue stems from QCoreApplication being created from within a function, so I am hoping somebody can provide a simple explanation of and workaround for that.</p>
<p>Please note, I have tried calling wait() on the QThread, and it just hangs indefinitely. If I give it a timeout, the error still comes at the end of it.</p>
<p>Please also note that my end goal is to run tests from PyTest / pytest-qt, but I am getting the same error. My thinking is, if I can solve this simple example, I can solve it in PyTest.</p>
<p>Any thoughts would be greatly appreciated!</p>
<p><strong>EDIT</strong></p>
<p>I forgot to mention that, aside from GuiStreamerController functioning as expected when run under QCoreApplication, called from the root of a script, it also runs as expected when called inside a QApplication loop.</p>
<p>As another experiment, I created the following class:</p>
<pre><code>from PyQt6.QtCore import QCoreApplication
from stream_consumer.gui_streamer_controller import GuiStreamerController
class StreamRunner:
def __init__(
self,
local_stream_message_criteria,
max_results: int = None) -> None:
self._max_results = max_results
self._local_stream_message_criteria = local_stream_message_criteria
self._app = QCoreApplication([])
self._streamer = GuiStreamerController(self._handle_stream_result)
self._result_count = 0
self._results = []
@property
def result_count(self):
"""Return the number of stream results"""
return self._result_count
@property
def results(self):
"""Return the stream results"""
return self._results
def run(self):
"""Start a stream run."""
self._streamer.run(self._local_stream_message_criteria)
# THIS TIME exec is called!!!!
self._app.exec()
def _quit(self) -> None:
"""Quit a stream run."""
self._streamer.quit()
self._app.quit()
def _handle_stream_result(self, result):
"""Handle the stream results"""
self._results.append(result)
self._result_count += 1
if self._max_results and self._result_count >= self._max_results:
self._quit()
</code></pre>
<p>On line 33 it calls <em>exec()</em>, more closely mimicking what happens when run under QApplication. However, as with my earlier script, if I instantiate this class and call its <em>run()</em> method from the root of a script, everything is fine, but if I call it from a function, I get the same issue.</p>
|
<python><pyqt><pyqt6>
|
2024-01-26 18:59:17
| 1
| 631
|
HapiDaze
|
77,888,418
| 6,471,140
|
CUDA path in Sagemaker instances to solve NameError: name '_C' is not defined with GroundingDINO
|
<p>I'm trying to install and use <a href="https://github.com/IDEA-Research/GroundingDINO" rel="nofollow noreferrer">Grounding DINO</a> in a SageMaker instance (with GPU), but I got the error:</p>
<pre><code>NameError: name '_C' is not defined
</code></pre>
<p>I found out the reason is that the CUDA_HOME variable is not configured, so to solve it I need to set that variable. But after searching for answers (I already checked the common path /usr/local/cuda), I cannot find the path where CUDA is installed on SageMaker instances.</p>
<p>Where is CUDA installed on SageMaker instances, so that I can set CUDA_HOME?</p>
|
<python><machine-learning><cuda><amazon-sagemaker>
|
2024-01-26 18:45:06
| 2
| 3,554
|
Luis Leal
|
77,888,183
| 11,170,350
|
How to find the indexes of stimulated regions in 1D signal python
|
<p>I have a 1D signal where some parts are at baseline and some are at a higher amplitude because of a stimulus. I want to separate the stimulated segments.
Here is an image of the signal:
<a href="https://i.sstatic.net/6Qi9j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6Qi9j.png" alt="simulated signal with stimulated regions" /></a>
From the signal shown in the image, I want to get the starting and end points of each stimulated segment.
The signal can be generated using the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import random
def generate_random_decimal_array(start, end, size):
random_array = [random.uniform(start, end) for _ in range(size)]
return random_array
# Simulation parameters
fs = 200 # Sampling frequency (Hz)
t_total = 5 # Total simulation time (seconds)
# Time vector
t = np.linspace(0, t_total, int(fs * t_total), endpoint=False)
# Signal parameters
baseline_amplitude = 0.05
stimulus_amplitude = 0.75
stimulus_duration = 1 # Duration of the stimulus (seconds)
recovery_duration = 1 # Duration of the recovery period after stimulus (seconds)
num_stimuli = 3 # Number of stimulated regions
# Initialize signal with baseline noise
signal = baseline_amplitude * np.random.randn(len(t))
# Randomly choose start times for each stimulated region
stimulus_periods = []
for _ in range(num_stimuli):
start = random.randint(0, len(t) - int(fs * (stimulus_duration + recovery_duration)))
end = start + int(fs * stimulus_duration)
stimulus_periods.append((start, end))
# Apply stimulus and recovery for each period
for start, end in stimulus_periods:
signal[start:end] = generate_random_decimal_array(0.3, 1.5, (end - start))
recovery_start = end
recovery_end = recovery_start + int(fs * recovery_duration)
signal[recovery_start:recovery_end] = baseline_amplitude * np.random.randn(recovery_end - recovery_start)
# Plot the simulated signal
plt.plot(t, signal)
plt.title('Simulated Signal with Multiple Stimuli')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.grid(True)
plt.show()
</code></pre>
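One way to get the start and end indexes is a simple amplitude threshold plus edge detection on the boolean mask (a sketch; the threshold of 0.2 is an assumption based on the amplitudes above, and noisy data may need smoothing or a minimum-length filter first):

```python
import numpy as np

def stimulated_regions(signal, threshold=0.2):
    """Return (start, end) index pairs where |signal| exceeds threshold."""
    mask = np.abs(signal) > threshold
    # Edges of the boolean mask: +1 marks a rising edge, -1 a falling edge
    edges = np.diff(mask.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if mask[0]:   # signal starts inside a stimulated region
        starts = np.r_[0, starts]
    if mask[-1]:  # signal ends inside a stimulated region
        ends = np.r_[ends, len(signal)]
    return list(zip(starts, ends))

demo = np.array([0.0, 0.0, 0.5, 0.6, 0.0, 0.7, 0.0])
print(stimulated_regions(demo))  # [(2, 4), (5, 6)]
```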
|
<python><signal-processing>
|
2024-01-26 17:58:47
| 1
| 2,979
|
Talha Anwar
|
77,887,846
| 880,874
|
How can I get my python script to ignore specific characters in my csv file?
|
<p>I have data in a csv file that contains car loan data.</p>
<p>The columns are the car make and model, the total cost, and the APR.</p>
<p>So I am using pyplot and Python to make pie charts for each make. Each pie chart will contain the models for one car manufacturer, where the size of a pie section is determined by how high the APR is and how much the total cost is.</p>
<p>I am trying to use this library: <code>matplotlib.pyplot</code></p>
<p>But it doesn't like the dollar signs or percentages. Is there a way to fix this?</p>
<pre><code>SyntaxError: invalid syntax
Toyota, Rav4,"$25,814.73",$315.00,3.24%
^
</code></pre>
<p>Here is the script:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
# Read data
file_path = 'input_data.csv'
data = pd.read_csv(file_path, encoding='unicode_escape')
columns = data.columns
for column in columns:
if pd.api.types.is_numeric_dtype(data[column]):
# Ignore symbols
data[column] = pd.to_numeric(data[column].replace('[\%,]', '', regex=True), errors='coerce')
# Plot
colors = plt.cm.Set3.colors
plt.pie(data[column], labels=data.index, autopct='%1.1f%%', colors=colors, startangle=140)
plt.title(f'Pie Chart for {column}')
plt.show()
else:
print(f"Skipping non-numeric column: {column}")
</code></pre>
<p>full error:</p>
<pre><code>SyntaxWarning: invalid escape sequence '\%'
data[column] = pd.to_numeric(data[column].replace('[\%,]', '',
regex=True), errors='coerce')
</code></pre>
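A sketch of stripping the symbols before conversion, using a raw string so `\%` is not parsed as an escape sequence (the column values below are made up to mirror the format in the question):

```python
import pandas as pd

data = pd.DataFrame({
    "Total Cost": ["$25,814.73", "$19,000.00"],
    "APR": ["3.24%", "2.99%"],
})

for column in data.columns:
    # Remove $ , and % with a raw-string regex, then convert to numbers
    cleaned = data[column].replace(r"[\$,%]", "", regex=True)
    data[column] = pd.to_numeric(cleaned, errors="coerce")

print(data.dtypes)  # both columns become float64
```

Note that the columns start out as object (string) dtype, so a `pd.api.types.is_numeric_dtype` check would skip them; the cleaning has to happen before, not behind, that check.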
|
<python><pandas><regex>
|
2024-01-26 16:53:20
| 1
| 7,206
|
SkyeBoniwell
|
77,887,703
| 274,579
|
How to convert a big bytes object to a list of 32-bit integer values?
|
<p>I have a <code>bytes</code> object with a length <code>n</code> in bytes that is a multiple of 4. I want to convert it to a list of <code>n/4</code> 32-bit integer values.</p>
<p>What is an efficient way to do that?</p>
<p>For example, convert <code>x</code> to <code>y</code>:</p>
<pre><code>x = b'\x01\x02\x03\x04\xef\xbe\xad\xde'
y = [0x04030201, 0xdeadbeef]
</code></pre>
<p>UPDATE 1: I figured out the code below. Is there a simpler way (a one-liner kind of solution)?</p>
<pre><code>x = b'\x01\x02\x03\x04\xef\xbe\xad\xde'
ls = [x[n:(n+4)] for n in range(0, len(x), 4)]
y = []
for xx in [int.from_bytes(xx, 'little') for xx in ls]:
y.append(xx)
</code></pre>
<p>UPDATE 2: I realized I can combine the two loops:</p>
<pre><code>x = b'\x01\x02\x03\x04\xef\xbe\xad\xde'
y = [int.from_bytes(x[n:(n+4)], 'little') for n in range(0, len(x), 4)]
</code></pre>
<p>Is this as elegant as it gets?</p>
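Two stdlib alternatives that avoid the manual slicing (both shown for the little-endian layout in the example):

```python
import struct
from array import array

x = b'\x01\x02\x03\x04\xef\xbe\xad\xde'

# struct.unpack: '<' forces little-endian, 'I' is an unsigned 32-bit int
y = list(struct.unpack(f'<{len(x) // 4}I', x))
print([hex(v) for v in y])  # ['0x4030201', '0xdeadbeef']

# array.array uses native byte order and a platform-dependent item size,
# so check itemsize (and byteswap on big-endian machines) before relying on it
a = array('I', x)
if a.itemsize == 4:
    print(a.tolist())
```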
|
<python><python-3.x><type-conversion>
|
2024-01-26 16:31:00
| 5
| 8,231
|
ysap
|
77,887,623
| 5,838,180
|
How to make a figure with scatter plots on the sky as subplots?
|
<p>I am trying to make a figure with 4 subplots (in 2 columns and 2 rows). Each of them should contain a healpy <code>projscatter</code> plot or another kind of scatter plot on the sky. But I see no easy way provided by healpy to do this. Can someone provide a useful code example? Thanks.</p>
|
<python><matplotlib><plot><healpy>
|
2024-01-26 16:18:08
| 1
| 2,072
|
NeStack
|
77,887,573
| 11,402,025
|
How to auto populate the address in json
|
<p>This is the JSON object I have:</p>
<pre><code>[
{
"person": "abc",
"city": "united states",
"facebooklink": "link",
"united states": [
{
"person": "cdf",
"city": "ohio",
"facebooklink": "link",
"ohio": [
{
"person": "efg",
"city": "clevland",
"facebooklink": "link",
"clevland": [
{
"person": "jkl",
"city": "Street A",
"facebooklink": "link",
"Street A": [
{
"person": "jkl",
"city": "House 1",
"facebooklink": "link"
}
]
}
]
},
{
"person": "ghi",
"city": "columbus",
"facebooklink": "link"
}
]
},
{
"person": "abc",
"city": "washington",
"facebooklink": "link"
}
]
}
]
</code></pre>
<p>And I want to create the JSON below, adding <code>address</code> fields dynamically to the JSON.</p>
<pre><code>[
{
"person": "abc",
"city": "united states",
"facebooklink": "link",
"address": "united states",
"united states": [
{
"person": "cdf",
"city": "ohio",
"facebooklink": "link",
"address": "united states/ohio",
"ohio": [
{
"person": "efg",
"city": "clevland",
"facebooklink": "link",
"address": "united states/ohio/clevland",
"clevland": [
{
"person": "jkl",
"city": "Street A",
"facebooklink": "link",
"address": "united states/ohio/clevland/Street A",
"Street A": [
{
"person": "jkl",
"city": "House 1",
"facebooklink": "link",
"address": "united states/ohio/clevland/Street A/House 1"
}
]
}
]
},
{
"person": "ghi",
"city": "columbus",
"facebooklink": "link",
"address": "united states/ohio/columbus"
}
]
},
{
"person": "abc",
"city": "washington",
"facebooklink": "link",
"address": "united states/washington"
}
]
}
]
</code></pre>
<p>How can I achieve this in Python?</p>
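For reference, a minimal recursive sketch (assuming, as in the sample data, that each node's children live under a key equal to its `city` value):

```python
def add_addresses(nodes, prefix=""):
    """Recursively add an 'address' path built from each node's 'city'."""
    for node in nodes:
        address = f"{prefix}/{node['city']}" if prefix else node["city"]
        node["address"] = address
        # children, if any, sit under a key named after the city
        children = node.get(node["city"])
        if isinstance(children, list):
            add_addresses(children, address)
    return nodes

data = [{"person": "abc", "city": "united states", "facebooklink": "link",
         "united states": [{"person": "cdf", "city": "ohio", "facebooklink": "link"}]}]
add_addresses(data)
print(data[0]["united states"][0]["address"])  # united states/ohio
```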
|
<python><json><python-3.x><recursion><iteration>
|
2024-01-26 16:11:53
| 1
| 1,712
|
Tanu
|
77,887,500
| 5,790,653
|
python unable to exit if the first for loop iteration matches
|
<p>This is <code>file.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"id": 5,
"name": "Jennifer",
"counts": 10,
"available_days": ["Sunday", "Monday", "Wednesday"],
"last_update": "2024-01-21T11:12:35.126945"
},
{
"id": 3,
"name": "Joseph",
"counts": 10,
"available_days": ["Sunday", "Monday", "Wednesday"],
"last_update": "2024-01-23T11:12:36.851668"
},
{
"id": 4,
"name": "Mary",
"counts": 10,
"available_days": ["Sunday", "Tuesday", "Wednesday"],
"last_update": "2024-01-22T11:12:36.224657"
},
{
"id": 1,
"name": "Saeed",
"counts": 10,
"available_days": ["Sunday", "Tuesday", "Wednesday"],
"last_update": "2024-01-25T11:12:37.875114"
},
{
"id": 2,
"name": "David",
"counts": 10,
"available_days": ["Sunday", "Tuesday", "Wednesday"],
"last_update": "2024-01-24T11:12:35.746215"
}
]
</code></pre>
<p>This is python code:</p>
<pre class="lang-py prettyprint-override"><code>today = 'Tuesday'
with open('file.json', 'r') as file:
data = json.load(file)
times = []
for i in data:
times.append(i['last_update'])
times.sort()
for days in data:
for being_available in days['available_days']:
for time in times:
if time == days['last_update'] and being_available == today:
print(f'Today is {days["name"]} turn.')
break
</code></pre>
<p>This is the current output:</p>
<pre><code>Today is Mary turn.
Today is Saeed turn.
Today is David turn.
</code></pre>
<p>While it should be (expected output):</p>
<pre><code>Today is Mary turn.
</code></pre>
<p>I'm not sure which part of my code is wrong. Since <code>Mary</code>'s <code>last_update</code> is the oldest among those available today, only she should be printed.</p>
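For reference, a sketch of one fix: since the `last_update` strings are ISO-formatted, they sort chronologically as plain strings, so you can sort the records once and take the first person available today (the data below mirrors the question's file):

```python
data = [
    {"name": "Jennifer", "available_days": ["Sunday", "Monday", "Wednesday"],
     "last_update": "2024-01-21T11:12:35"},
    {"name": "Mary", "available_days": ["Sunday", "Tuesday", "Wednesday"],
     "last_update": "2024-01-22T11:12:36"},
    {"name": "Saeed", "available_days": ["Sunday", "Tuesday", "Wednesday"],
     "last_update": "2024-01-25T11:12:37"},
]
today = "Tuesday"

# oldest-first; ISO timestamps compare correctly as strings
turn = next(p["name"]
            for p in sorted(data, key=lambda d: d["last_update"])
            if today in p["available_days"])
print(f"Today is {turn} turn.")  # Today is Mary turn.
```

The original code prints three names because the inner `break` only exits the innermost `for time in times` loop, not the outer ones.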
|
<python>
|
2024-01-26 16:00:24
| 2
| 4,175
|
Saeed
|
77,887,308
| 6,486,583
|
Exception on update() after calling init() for the second time on the same tracker instance
|
<p>Consider the following code:</p>
<pre><code>import cv2
window_name = "Object Tracking"
source = cv2.VideoCapture(".tmp/2024-01-24 22-53-46.mp4")
tracker = cv2.TrackerKCF_create()
selectedROI = None
while True:
ret, frame = source.read()
if not ret:
print("Error: Could not read frame.")
break
success = False
if selectedROI is not None:
success, object_bbox = tracker.update(frame)
if selectedROI is not None and success:
x, y, w, h = [int(coord) for coord in object_bbox]
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.putText(frame, f"Tracking ({x}, {y}) {w}x{h}",
(x, y - 10),
cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0),
2, cv2.LINE_AA)
cv2.imshow(window_name, frame)
keyCode = cv2.waitKey(15) & 0xFF
# Break the loop if 'ESC' or 'q' key is pressed
if keyCode in [27, ord('q')]:
break
elif keyCode == ord('s'):
selectedROI = cv2.selectROI("Object Tracking", frame, fromCenter=False, showCrosshair=True)
tracker.init(frame, selectedROI)
elif cv2.getWindowProperty(window_name, cv2.WND_PROP_VISIBLE) < 1:
break
source.release()
cv2.destroyAllWindows()
</code></pre>
<p>It's working fine until I press <code>s</code> key again and select another ROI. After selecting another region on the very next <code>tracker.update(frame)</code> call I receive either this exception:</p>
<pre><code> success, object_bbox = tracker.update(frame)
cv2.error: OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arra
ys have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op'
</code></pre>
<p>or another:</p>
<pre><code> success, object_bbox = tracker.update(frame)
cv2.error: Unknown C++ exception from OpenCV code
</code></pre>
<p>Worth noting that it's not the case when I use <code>TrackerCSRT</code>. With it, I can re-select region as many times as I want w/o issues. The thing is the <code>TrackerKCF</code> works way better and faster for my particular use case.</p>
<p>Any ideas on how to re-init TrackerKCF w/o re-instantiating the object? It takes 1.5 sec to create a new instance which is obviously not acceptable for the real-time scenario :)</p>
|
<python><opencv><computer-vision><object-tracking>
|
2024-01-26 15:29:49
| 0
| 3,415
|
spirit
|
77,887,292
| 13,154,227
|
Python Django: make strftime return german language, not english
|
<p>In my Django project, I'm using strftime, specifically <code>%b</code> to get the abbreviated month name. Those names are returned in english. My settings contain the following:</p>
<pre><code>LANGUAGE_CODE = 'de' # also tried 'de_DE'
USE_I18N = True
USE_TZ = True
</code></pre>
<p>Why does <code>strftime</code> still return English names?</p>
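Django's `LANGUAGE_CODE` drives its translation machinery, not the C locale that `strftime` consults. A sketch using the stdlib `locale` module follows; the locale names tried below are assumptions that vary by OS, and `setlocale` is process-wide and not thread-safe, so inside views and templates Django's translation-aware `django.utils.formats.date_format` is usually the safer route:

```python
import locale
from datetime import date

def german_month_abbr(d):
    """Best-effort German %b; falls back to the default locale if none is installed."""
    for name in ("de_DE.UTF-8", "de_DE", "German"):  # candidate names, OS-dependent
        try:
            locale.setlocale(locale.LC_TIME, name)
            break
        except locale.Error:
            continue
    return d.strftime("%b")

abbr = german_month_abbr(date(2024, 3, 1))
print(abbr)  # 'Mär' when a German locale is available
```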
|
<python><django><strftime>
|
2024-01-26 15:27:38
| 2
| 609
|
R-obert
|
77,887,248
| 4,527,628
|
Pydantic union discriminator based on field on parent model
|
<p>I have model like this:</p>
<pre><code>class Foo(BaseModel):
protocol: str
protocol_params: Union[ProtocolOneParam, ProtocolTwoParam]
</code></pre>
<p><code>ProtocolOneParam</code> and <code>ProtocolTwoParam</code> have no common field with a distinguishable value that I could use as a <code>Discriminator</code>, so the only way I can tell which model should be used for <code>protocol_params</code> is through the value of <code>protocol</code>, which can be either <code>"protocol1"</code> or <code>"protocol2"</code>.</p>
<p>If <code>protocol == "protocol1"</code>, then <code>protocol_params</code> should be deserialized using the <code>ProtocolOneParam</code> model, and so on.</p>
<p>In actual use cases, there are more than 5 protocols without any distinguishable field among them.</p>
<p>Is there any way in Pydantic to achieve what I need?</p>
|
<python><python-3.x><pydantic><pydantic-v2>
|
2024-01-26 15:20:23
| 2
| 1,225
|
M.Armoun
|
77,887,232
| 3,747,486
|
Cannot explode a nested JSON within spark dataframe
|
<p>I am new to Spark. I am trying to flatten a dataframe but failed to do so with "explode".</p>
<p>The original dataframe schema is like below:</p>
<pre><code>ID|ApprovalJSON
1|[{"ApproverType":"1st Line Manager","Status":"Approved"},{"ApproverType":"2nd Line Manager","Status":"Approved"}]
2|[{"ApproverType":"1st Line Manager","Status":"Approved"},{"ApproverType":"2nd Line Manager","Status":"Rejected"}]
</code></pre>
<p>I need to convert it to the schema below:</p>
<pre><code>ID|ApprovalType|Status
1|1st Line Manager|Approved
1|2nd Line Manager|Approved
2|1st Line Manager|Approved
2|2nd Line Manager|Rejected
</code></pre>
<p>I have tried</p>
<pre><code>df_exploded = df.withColumn("ApprovalJSON", explode("ApprovalJSON"))
</code></pre>
<p>But I got error:</p>
<blockquote>
<p>Cannot resolve "explode(ApprovalJSON)" due to data type mismatch:
parameter 1 requires ("ARRAY" or "MAP") type, however, "ApprovalJSON"
is of "STRING" type.;</p>
</blockquote>
|
<python><json><pyspark>
|
2024-01-26 15:18:31
| 1
| 326
|
Mark
|
77,886,972
| 5,709,240
|
How to vlookup within a single dataframe?
|
<p>From the following dataframe (<code>df</code>):</p>
<pre><code>|------------+--------------------+-------------|
| child_code | child_name | parent_code |
|------------+--------------------+-------------|
| 900 | World | 0 |
| 920 | South-Eastern Asia | 900 |
| 702 | Singapore | 920 |
|------------+--------------------+-------------|
</code></pre>
<p>I would like to produce this dataframe:</p>
<pre><code>|------------+--------------------+-------------+--------------------|
| child_code | child_name | parent_code | parent_name |
|------------+--------------------+-------------+--------------------|
| 900 | World | 0 | |
| 920 | South-Eastern Asia | 900 | World |
| 702 | Singapore | 920 | South-Eastern Asia |
|------------+--------------------+-------------+--------------------|
</code></pre>
<p>How could I make the equivalent of an MS Excel <code>vlookup</code> to produce the <code>parent_name</code> column?</p>
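For reference, a sketch of the vlookup-style self-join using `Series.map` over a `child_code` to `child_name` lookup:

```python
import pandas as pd

df = pd.DataFrame({
    "child_code": [900, 920, 702],
    "child_name": ["World", "South-Eastern Asia", "Singapore"],
    "parent_code": [0, 900, 920],
})

# build a lookup table once, then map parent_code through it (like VLOOKUP)
lookup = df.set_index("child_code")["child_name"]
df["parent_name"] = df["parent_code"].map(lookup).fillna("")
print(df)
```

`df.merge(df, left_on="parent_code", right_on="child_code", how="left")` would achieve the same, at the cost of the usual column-suffix bookkeeping.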
|
<python><pandas><dataframe>
|
2024-01-26 14:34:18
| 1
| 933
|
crocefisso
|
77,886,724
| 5,594,008
|
Call async function from sync inside async FastAPI application
|
<p>Not sure, that this is possible, but still</p>
<p>I have default async FastAPI application, where I have sync function</p>
<pre><code>@classmethod
def make_values(
cls,
records: list,
symbol: str = None,
) -> list:
values = []
for record in records:
new_row = SomePydanticModel(
timestamp=record._mapping.get("time"),
close=record._mapping.get("close"),
value=record._mapping.get("value"),
)
if symbol in EXTRA_SYMBOLS:
new_row = split_row(new_row, symbol)
values.append(new_row)
return values
</code></pre>
<p>The problem is that <code>split_row</code> is async function, that calls other async functions</p>
<pre><code>async def split_row(
row,
symbol,
):
adjusted_datetime = datetime.timestamp(datetime.strptime("01/01/19", "%m/%d/%y"))
splits = await get_splits(symbol)
# some big part with business logic with other async calls
return row
</code></pre>
<p>Currently I get a coroutine in the <code>new_row</code> variable. Is there any way to get the result of <code>split_row</code>?</p>
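If making `make_values` itself a coroutine is off the table, one stdlib pattern is to schedule the coroutine onto a running loop with `asyncio.run_coroutine_threadsafe`. The standalone sketch below is illustrative only; inside FastAPI, calling `.result()` from the same thread as the running event loop would deadlock, so this fits sync code running in a worker thread (e.g. a plain `def` endpoint):

```python
import asyncio
import threading

async def split_row(row, symbol):
    await asyncio.sleep(0)              # stands in for the real async calls
    return f"{row}-{symbol}"

def make_values_sync(records, symbol, loop):
    """Sync code scheduling coroutines onto a loop running in another thread."""
    futures = [asyncio.run_coroutine_threadsafe(split_row(r, symbol), loop)
               for r in records]
    return [f.result(timeout=5) for f in futures]

# demo: run a loop in a background thread, call the sync function from here
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()
values = make_values_sync(["r1", "r2"], "BTC", loop)
loop.call_soon_threadsafe(loop.stop)
print(values)  # ['r1-BTC', 'r2-BTC']
```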
|
<python><asynchronous><async-await><python-asyncio><fastapi>
|
2024-01-26 13:50:40
| 2
| 2,352
|
Headmaster
|
77,886,606
| 929,122
|
Executing same operator in cloud composer as multiple tasks
|
<p>I have a PythonOperator in Airflow that gets executed using Cloud Composer:</p>
<pre class="lang-py prettyprint-override"><code>with DAG(
dag_id = config['dag_id'],
schedule_interval = config['schedule_interval'],
default_args = default_args
) as dag:
generate_data_task = PythonOperator(
task_id = 'generate_dummy_data',
python_callable = generate_data,
dag = dag
)
</code></pre>
<p>The generate_data() function writes a randomly generated uniquely named CSV file in a bucket with some data in it. Executed as is works great, but I want to execute the same task multiple times in parallel. If I were to specify to execute it 10 times in parallel I would expect to have 10 files written in the bucket. I have tried with concurrency and task_concurrency, but I get the same result.</p>
<p>Is this achievable using Airflow on top of Cloud Composer?</p>
|
<python><google-cloud-platform><airflow><google-cloud-composer-2>
|
2024-01-26 13:30:47
| 1
| 437
|
drake10k
|
77,886,416
| 3,526,758
|
Altair plot in streamlit: How to add a legend?
|
<p>I'm using streamlit and need altair to plot (because of available interpolation options).</p>
<p>Given this simple code:</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
import altair as alt
import pandas as pd
data = pd.DataFrame({"x": [0, 1, 2, 3], "y": [0, 10, 15, 20], "z": [10, 8, 10, 1]})
base = alt.Chart(data.reset_index()).encode(x="x")
chart = alt.layer(
base.mark_line(color="red").encode(y="y", color=alt.value("green")),
base.mark_line(color="red").encode(y="z", color=alt.value("red")),
).properties(title="My plot",)
st.altair_chart(chart, theme="streamlit", use_container_width=True)
</code></pre>
<p>Which results in this plot:
<a href="https://i.sstatic.net/qSFDG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qSFDG.png" alt="visualiization of plot" /></a></p>
<p>What's the correct way to add a legend next to the plot?</p>
<p>In documentation, I see the legend option as part of "Color", but this seems always be related to visualize another dimension. In my case, I just want to plot different lines and have a legend with their respective colors.</p>
|
<python><streamlit><altair>
|
2024-01-26 12:56:34
| 1
| 545
|
amw
|
77,886,346
| 2,148,416
|
Raising an exception in a custom ufunc in a NumPy ndarray subclass
|
<p>I'm implementing a NumPy ndarray subclass with a custom <code>equal()</code> universal function.</p>
<p>This is a distilled example where the custom <code>equal()</code> ufunc just calls the original:</p>
<pre><code>class MyArray(np.ndarray):
def __new__(cls, data):
return np.array(data).view(MyArray)
def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
inputs = [i.view(np.ndarray) for i in inputs]
if ufunc == np.equal and method == "__call__":
return self._custom_equal(*inputs, **kwargs)
return super().__array_ufunc__(ufunc, method, *inputs, **kwargs)
@staticmethod
def _custom_equal(a, b, **kwargs):
return np.equal(a, b, **kwargs)
</code></pre>
<p>This works:</p>
<pre><code>>>> a = MyArray([(1,2,3), (4,5,6)])
>>> b = MyArray([(1,2,3), (4,5,6)])
>>> a == b
array([[ True, True, True],
[ True, True, True]])
</code></pre>
<p>But when the arrays have different shapes a <code>DeprecationWarning</code> is issued and the fuction returns <code>False</code>:</p>
<pre><code>>>> c = MyArray([(1,2,3,4,5,6)])
>>> a == c
DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
a == c
False
</code></pre>
<p>When <code>_custom_equal()</code> is updated:</p>
<pre><code> @staticmethod
def _custom_equal(a, b, **kwargs):
try:
eq = np.equal(a, b, **kwargs)
except Exception as e:
print(e)
raise
return eq
</code></pre>
<p>It can be seen that the exception is raised, then the comparison is retried:</p>
<pre><code>>>> a == c
operands could not be broadcast together with shapes (2,3) (1,6)
operands could not be broadcast together with shapes (1,6) (2,3)
False
DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
a == c
</code></pre>
<p>Comparing array shapes and explicitly raising an exception produces the same result.</p>
<p>Is there a way to have the exception raised instead of getting a <code>False</code> result value?</p>
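One workaround sketch, reduced to just the comparison path and not necessarily the only route: intercept `__eq__` itself, since it is `ndarray.__eq__` that turns the failed ufunc into the deprecated `False` result before the subclass can propagate anything:

```python
import numpy as np

class MyArray(np.ndarray):
    def __new__(cls, data):
        return np.asarray(data).view(cls)

    def __eq__(self, other):
        # call the ufunc on plain ndarray views so broadcast errors
        # propagate instead of being swallowed by ndarray.__eq__
        return np.equal(self.view(np.ndarray),
                        np.asarray(other).view(np.ndarray))

a = MyArray([(1, 2, 3), (4, 5, 6)])
c = MyArray([(1, 2, 3, 4, 5, 6)])
try:
    a == c
except ValueError as e:
    print("raised:", e)  # operands could not be broadcast together ...
```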
|
<python><numpy><numpy-ufunc>
|
2024-01-26 12:46:29
| 0
| 3,437
|
aerobiomat
|
77,886,336
| 8,000,016
|
How to set Nginx reverse proxy for local Docker Django
|
<p>I'm developing a docker project with <code>nginx</code> and <code>django</code> services. I've <code>django.conf.template</code> parameterised to pass environment variables dynamically depends on environment.</p>
<p>django.conf:</p>
<pre><code>upstream django_app {
server ${DJANGO_PRIVATE_IP}:${DJANGO_PORT};
}
server {
listen 80;
listen 443 ssl;
listen [::]:443 ssl;
server_name ${NGINX_SERVER_NAME};
ssl_certificate /etc/nginx/certs/elitecars_cert.pem;
ssl_certificate_key /etc/nginx/certs/elitecars_privkey.pem;
access_log /var/log/nginx/nginx.django.access.log;
error_log /var/log/nginx/nginx.django.error.log;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_redirect off;
proxy_pass http://django_app;
}
}
</code></pre>
<p>The template works well because i can see the env vars values with <code>more /etc/nginx/conf.d/sites-available/django.conf</code> command</p>
<pre><code>upstream django_app {
server django:8000;
}
server {
listen 80;
listen 443 ssl;
listen [::]:443 ssl;
server_name 0.0.0.0 127.0.0.1 localhost;
...
</code></pre>
<p>But when I try to access it through the browser, it doesn't work.</p>
<p>Any ideas? Could anybody help me, please?</p>
|
<python><django><docker><nginx>
|
2024-01-26 12:44:54
| 2
| 1,264
|
Alberto Sanmartin Martinez
|
77,886,215
| 10,232,932
|
Create random datetime column with condition to another datetime column pandas
|
<p>I have a pandas dataframe df_sample:</p>
<pre><code>columnA columnB
A AA
A AB
B BA
B BB
B BC
</code></pre>
<p>And I am already creating a random column with some date objects in it:</p>
<pre><code>df_sample['contract_starts'] = np.random.choice(pd.date_range('2024-01-01', '2024-05-01'), len(df_sample))
</code></pre>
<p>which leads to the following output:</p>
<pre><code>columnA columnB contract_starts
A AA 2024-01-21
A AB 2024-03-03
B BA 2024-01-18
B BB 2024-02-18
B BC 2024-04-03
</code></pre>
<p>How can I create another datetime column contract_noted, that the values also have a given range (e.g. until 2024-05-01 ) but does not exceed the <code>contract_starts</code>column, so for example:</p>
<pre><code>columnA columnB contract_starts contract_noted
A AA 2024-01-21 2024-01-20
A AB 2024-03-03 2024-01-01
B BA 2024-01-18 2024-01-13
B BB 2024-02-18 2024-02-01
B BC 2024-04-03 2024-03-28
</code></pre>
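One sketch: draw a random backward offset per row and clip at the range start (the 60-day cap below is an arbitrary assumption):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({"columnA": list("AABBB"),
                   "columnB": ["AA", "AB", "BA", "BB", "BC"]})
df["contract_starts"] = pd.to_datetime(
    rng.choice(pd.date_range("2024-01-01", "2024-05-01"), len(df)))

# subtract up to 60 random days, then clip so we never precede the range start
offsets = pd.to_timedelta(rng.integers(0, 61, size=len(df)), unit="D")
df["contract_noted"] = (df["contract_starts"] - offsets).clip(
    lower=pd.Timestamp("2024-01-01"))
print(df[["contract_starts", "contract_noted"]])
```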
|
<python><pandas><dataframe><datetime>
|
2024-01-26 12:21:40
| 2
| 6,338
|
PV8
|
77,886,202
| 5,108,333
|
Parallel processing in python and need for if __name__ == '__main__'
|
<p>I've been battling the <strong>need for parallel processing</strong> in my application. I have a relatively good understanding that the GIL prevents concurrent processing in a single interpreter, and that there can only be one interpreter per process, at least until <a href="https://peps.python.org/pep-0554/" rel="nofollow noreferrer">3.10</a> (I need to use 3.8 to integrate the manufacturer's drivers). But my application is proving challenging.</p>
<h3>My application requirements:</h3>
<p>In my application I'm creating an <strong>image acquisition module</strong> that will handle an arbitrary number of Flir cameras connected to a computer, for robotics and machine vision. This means the image acquisition module will run in parallel with other modules that control important things, such as motors and DAQs.</p>
<p>When initialized, the image acquisition module will check all cameras in the system to see which one has the name requested:</p>
<pre class="lang-py prettyprint-override"><code>import ImageAcquisition as ia
ia.Initialize("cam1", "cam2") # Creates two parallel processes, one for each camera.
</code></pre>
<p>For each camera a parallel process should be created to handle the image analysis associated with it. This means grabbing frames, displaying them, processing them, and also making some image analysis results available to the main process.</p>
<p>One important requirement is the module is self-contained. However, I'm finding that the <code>if __name__ == "__main__":</code> test is always needed in the program entry code (client code). This is because the new processes will run the main program on their interpreters as well.</p>
<p><strong>Is there an alternative to having <code>if __name__ == "__main__":</code>?</strong> So I can make my module fully self-contained.</p>
<p>One alternative could be to use the subprocess library and run the process without the "spawn" mechanism that is used by the multiprocessing library on Windows. But there is some overhead when it comes to sharing information between processes that is better handled by the multiprocessing library.</p>
<p>Another alternative is to inject the <code>if __name__ == "__main__":</code> in the user's strings, in our main application.</p>
|
<python><multiprocessing><subprocess><gil>
|
2024-01-26 12:19:04
| 2
| 1,291
|
A. Vieira
|
77,886,168
| 815,859
|
Python OpenCV threading with tkinter
|
<p>I have a small python opencv code to predict age and gender. I also have a GUI tkinter library to print the age on a separate window.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import math
import time
import argparse
import requests
import tkinter as tk
import threading
window = tk.Tk()
def getFaceBox(net, frame,conf_threshold = 0.75):
frameOpencvDnn = frame.copy()
frameHeight = frameOpencvDnn.shape[0]
frameWidth = frameOpencvDnn.shape[1]
blob = cv2.dnn.blobFromImage(frameOpencvDnn,1.0,(300,300),[104, 117, 123], True, False)
net.setInput(blob)
detections = net.forward()
bboxes = []
for i in range(detections.shape[2]):
confidence = detections[0,0,i,2]
if confidence > conf_threshold:
x1 = int(detections[0,0,i,3]* frameWidth)
y1 = int(detections[0,0,i,4]* frameHeight)
x2 = int(detections[0,0,i,5]* frameWidth)
y2 = int(detections[0,0,i,6]* frameHeight)
bboxes.append([x1,y1,x2,y2])
cv2.rectangle(frameOpencvDnn,(x1,y1),(x2,y2),(0,255,0),int(round(frameHeight/150)),8)
return frameOpencvDnn , bboxes
faceProto = "opencv_face_detector.pbtxt"
faceModel = "opencv_face_detector_uint8.pb"
ageProto = "age_deploy.prototxt"
ageModel = "age_net.caffemodel"
genderProto = "gender_deploy.prototxt"
genderModel = "gender_net.caffemodel"
MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']
#load the network
ageNet = cv2.dnn.readNet(ageModel,ageProto)
genderNet = cv2.dnn.readNet(genderModel, genderProto)
faceNet = cv2.dnn.readNet(faceModel, faceProto)
cap = cv2.VideoCapture(0)
padding = 20
while cv2.waitKey(1) < 0:
#read frame
t = time.time()
hasFrame , frame = cap.read()
if not hasFrame:
cv2.waitKey()
break
#creating a smaller frame for better optimization
small_frame = cv2.resize(frame,(0,0),fx = 0.5,fy = 0.5)
frameFace ,bboxes = getFaceBox(faceNet,small_frame)
if not bboxes:
print("No face Detected, Checking next frame")
continue
for bbox in bboxes:
face = small_frame[max(0,bbox[1]-padding):min(bbox[3]+padding,frame.shape[0]-1),
max(0,bbox[0]-padding):min(bbox[2]+padding, frame.shape[1]-1)]
blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
genderNet.setInput(blob)
genderPreds = genderNet.forward()
gender = genderList[genderPreds[0].argmax()]
print("Gender : {}, conf = {:.3f}".format(gender, genderPreds[0].max()))
ageNet.setInput(blob)
agePreds = ageNet.forward()
age = ageList[agePreds[0].argmax()]
print("Age Output : {}".format(agePreds))
print("Age : {}, conf = {:.3f}".format(age, agePreds[0].max()))
label = "{},{}".format(gender, age)
cv2.putText(frameFace, label, (bbox[0], bbox[1]-10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
cv2.imshow("Age Gender Demo", frameFace)
print("time : {:.3f}".format(time.time() - t))
window.mainloop()
</code></pre>
<p>The issue with this is that the opencv code will wait for the tkinter window to be closed before continuing processing. Thus the video from the webcam will hang until the window is closed.</p>
<p>How do I make this into a thread (or any other method) so that both windows executes normally without blocking?</p>
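The usual shape of the fix, sketched here without the OpenCV/tkinter specifics: run the capture/predict loop in a worker thread, push results through a `queue.Queue`, and have tkinter poll the queue from the main thread with `window.after(...)`, since tkinter itself must stay on the main thread:

```python
import queue
import threading
import time

results = queue.Queue()
stop = threading.Event()

def capture_loop(q, stop_event):
    """Stand-in for the OpenCV read/predict loop running off the main thread."""
    while not stop_event.is_set():
        q.put(("gender", "Male", "age", "(25-32)"))  # pretend prediction
        time.sleep(0.01)

worker = threading.Thread(target=capture_loop, args=(results, stop), daemon=True)
worker.start()
time.sleep(0.05)        # the main thread would run window.mainloop() here,
stop.set()              # polling `results` via window.after(100, poll_fn)
worker.join(timeout=1)
print(results.qsize() > 0)  # True
```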
|
<python><multithreading><opencv><tkinter><python-multithreading>
|
2024-01-26 12:14:43
| 0
| 795
|
Monty Swanson
|
77,886,120
| 7,695,845
|
How to ignore ruff errors in jupyter notebook in VSCode?
|
<p>I am using ruff to lint and format my Python code in Visual Studio Code. I also work often with jupyter notebooks inside VSCode. When I do that I get some ruff linting errors that don't really make sense in a notebook such as <code>Module level import not at top of cell</code>:</p>
<p><a href="https://i.sstatic.net/cVoCr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cVoCr.png" alt="" /></a></p>
<p>In a Python file, this warning certainly makes sense, but not really in a notebook. I couldn't find how I could silence these warnings for notebooks specifically. There are no ruff settings related to notebooks and I couldn't find how to apply settings specifically to jupyter notebooks. Does somebody know how to silence these warnings inside a notebook in VSCode?</p>
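One route worth trying (an assumption worth checking against the current ruff documentation, since rule codes and config layout change between versions) is a per-file ignore targeting notebooks in `pyproject.toml`; the warning shown corresponds to rule `E402`:

```toml
[tool.ruff.lint.per-file-ignores]
"*.ipynb" = ["E402"]
```

Alternatively, a `# ruff: noqa: E402` comment may suppress the rule for a single file.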
|
<python><visual-studio-code><jupyter-notebook><ruff>
|
2024-01-26 12:06:19
| 1
| 1,420
|
Shai Avr
|
77,886,066
| 13,839,945
|
Plotly colormaps in Matplotlib
|
<p>I really like the <a href="https://plotly.com/python/builtin-colorscales/" rel="nofollow noreferrer">additional colormaps</a> like 'dense' or 'ice' from plotly. Nevertheless, I am currently using matplotlib for most of my plots.</p>
<p>Is there a way to use a pyplot colormap in matplotlib plots?</p>
<p>When I take the colormap 'ice' for example, the only thing I get is rgb color as strings with</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
px.colors.sequential.ice
</code></pre>
<p>This just returns</p>
<pre><code>['rgb(3, 5, 18)',
'rgb(25, 25, 51)',
'rgb(44, 42, 87)',
'rgb(58, 60, 125)',
'rgb(62, 83, 160)',
'rgb(62, 109, 178)',
'rgb(72, 134, 187)',
'rgb(89, 159, 196)',
'rgb(114, 184, 205)',
'rgb(149, 207, 216)',
'rgb(192, 229, 232)',
'rgb(234, 252, 253)']
</code></pre>
<p>The problem is, I have no idea how to use this in a matplotlib plot. The thing I tried was creating a <a href="https://stackoverflow.com/questions/21094288/convert-list-of-rgb-codes-to-matplotlib-colormap">custom colormap</a> with</p>
<pre><code>my_cmap = matplotlib.colors.ListedColormap(px.colors.sequential.ice, name='my_colormap_name')
</code></pre>
<p>but this gave me the following error when used inside a plot:</p>
<pre><code>ValueError: Invalid RGBA argument: 'rgb(3, 5, 18)'
</code></pre>
<p>Anyone knows how to convert this properly?</p>
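For reference, a sketch that parses the `'rgb(r, g, b)'` strings into 0-1 tuples and builds a matplotlib colormap, using `LinearSegmentedColormap.from_list` so the anchor colors are interpolated rather than quantized:

```python
import re

from matplotlib.colors import LinearSegmentedColormap

def plotly_scale_to_mpl(colors, name="from_plotly"):
    """Convert ['rgb(3, 5, 18)', ...] strings into a matplotlib colormap."""
    rgb = [tuple(int(v) / 255 for v in re.findall(r"\d+", c)) for c in colors]
    # from_list interpolates between anchors; ListedColormap would quantize
    return LinearSegmentedColormap.from_list(name, rgb)

ice_like = ["rgb(3, 5, 18)", "rgb(62, 83, 160)", "rgb(234, 252, 253)"]  # subset for the sketch
cmap = plotly_scale_to_mpl(ice_like)
print(cmap(0.0))  # RGBA tuple near (0.012, 0.020, 0.071, 1.0)
```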
|
<python><matplotlib>
|
2024-01-26 11:57:34
| 1
| 341
|
JD.
|
77,886,057
| 7,662,164
|
Unexpected behavior of JAX `vmap` for multiple arguments
|
<p>I have found that <code>vmap</code> in JAX does not behave as expected when applied to multiple arguments. For example, consider the function below:</p>
<pre><code>def f1(x, y, z):
f = x[:, None, None] * z[None, None, :] + y[None, :, None]
return f
</code></pre>
<p>For <code>x = jnp.arange(7), y = jnp.arange(5), z = jnp.arange(3)</code>, the output of this function has shape <code>(7, 5, 3)</code>. However, for the vmap version below:</p>
<pre><code>@partial(vmap, in_axes=(None, 0, 0), out_axes=(1, 2))
def f2(x, y, z):
f = x*z + y
return f
</code></pre>
<p>It outputs this error:</p>
<pre><code>ValueError: vmap got inconsistent sizes for array axes to be mapped:
* one axis had size 5: axis 0 of argument y of type int32[5];
* one axis had size 3: axis 0 of argument z of type int32[3]
</code></pre>
<p>Could someone kindly explain what's behind this error?</p>
|
<python><arrays><vectorization><jax>
|
2024-01-26 11:56:13
| 1
| 335
|
Jingyang Wang
|
77,885,918
| 6,494,707
|
Why finetuning MLP model on a small dataset, still keeps the test accuracy same as pre-trained weights?
|
<p>I have designed a simple MLP model trained on 6k data samples.</p>
<pre><code>class MLP(nn.Module):
def __init__(self,input_dim=92, hidden_dim = 150, num_classes=2):
super().__init__()
self.input_dim = input_dim
self.num_classes = num_classes
self.hidden_dim = hidden_dim
#self.softmax = nn.Softmax(dim=1)
self.layers = nn.Sequential(
nn.Linear(self.input_dim, self.hidden_dim),
nn.ReLU(),
nn.Linear(self.hidden_dim, self.hidden_dim),
nn.ReLU(),
nn.Linear(self.hidden_dim, self.hidden_dim),
nn.ReLU(),
nn.Linear(self.hidden_dim, self.num_classes),
)
def forward(self, x):
x = self.layers(x)
return x
</code></pre>
<p>and the model has been instantiated</p>
<pre><code>model = MLP(input_dim=input_dim, hidden_dim=hidden_dim, num_classes=num_classes).to(device)
optimizer = Optimizer.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()
</code></pre>
<p>and the hyperparameters:</p>
<pre><code>num_epoch = 300 # 200e3//len(train_loader)
learning_rate = 1e-3
batch_size = 64
device = torch.device("cuda")
SEED = 42
torch.manual_seed(42)
</code></pre>
<p>My implementation mostly follows <a href="https://stackoverflow.com/questions/71199036/pytorch-nn-not-as-good-as-sklearn-mlp">this question</a>. I save the model as pre-trained weights <code>model_weights.pth</code>.</p>
<p>The accuracy of <code>model</code> on the test dataset is <code>96.80%</code>.</p>
<p>Then I have another 50 samples (in <code>finetune_loader</code>) on which I am trying to fine-tune the model:</p>
<pre><code>model_finetune = MLP()
model_finetune.load_state_dict(torch.load('model_weights.pth'))
model_finetune.to(device)
model_finetune.train()
# train the network
for t in tqdm(range(num_epoch)):
for i, data in enumerate(finetune_loader, 0):
#def closure():
# Get and prepare inputs
inputs, targets = data
inputs, targets = inputs.float(), targets.long()
inputs, targets = inputs.to(device), targets.to(device)
# Zero the gradients
optimizer.zero_grad()
# Perform forward pass
outputs = model_finetune(inputs)
# Compute loss
loss = criterion(outputs, targets)
# Perform backward pass
loss.backward()
#return loss
optimizer.step() # a
model_finetune.eval()
with torch.no_grad():
outputs2 = model_finetune(test_data)
#predicted_labels = outputs.squeeze().tolist()
_, preds = torch.max(outputs2, 1)
prediction_test = np.array(preds.cpu())
accuracy_test_finetune = accuracy_score(y_test, prediction_test)
accuracy_test_finetune
Output: 0.9680851063829787
</code></pre>
<p>The accuracy remains the same as before fine-tuning the model on the 50 samples; I checked, and the output probabilities are also the same.</p>
<p>What could be the reason? Am I making some mistakes in the code for fine-tuning?</p>
|
<python><machine-learning><deep-learning><pytorch><neural-network>
|
2024-01-26 11:32:05
| 1
| 2,236
|
S.EB
|
77,885,815
| 5,547,553
|
How to get string match starting position in str.contains() on polars?
|
<p>I know I can use str.contains() to check if a string is contained in a column, like:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"a": ["my name is Bob","my little pony, my little pony"]})
(df.with_columns(bbb = pl.col('a').str.slice(1,10000).str.contains(pl.col('a').str.slice(0,10), literal=True)
)
)
</code></pre>
<p>What i'd like is the exact starting position of the match and not just a boolean, like in:</p>
<pre><code>import re
x = re.search(r"pony","my little pony")
print(x.start(),x.end())
</code></pre>
<p>How can I do that?</p>
|
<python><regex><dataframe><python-polars>
|
2024-01-26 11:15:33
| 1
| 1,174
|
lmocsi
|
77,885,792
| 7,133,942
|
How to change the order of bars in a matplotlib barchart
|
<p>I have the following code</p>
<pre><code>import matplotlib.pyplot as plt
from datetime import datetime
co2_prices = [100, 60]
electricity_prices = [-2.6, 157]
co2_timestamps = ["March 2023", "February 2024"]
electricity_timestamps = ["24.09.2023 14:00", "24.09.2023 19:00"]
co2_timestamps = [datetime.strptime(date, "%B %Y") for date in co2_timestamps]
electricity_timestamps = [datetime.strptime(date, "%d.%m.%Y %H:%M") for date in electricity_timestamps]
fig, ax = plt.subplots()
bar_width = 0.35
co2_positions = range(len(co2_prices))
electricity_positions = [pos + bar_width for pos in co2_positions]
# Plot bars for CO2 prices
ax.bar(co2_positions, co2_prices, bar_width, label='CO2 Prices (€/T)')
ax.bar(electricity_positions, electricity_prices, bar_width, label='Electricity Prices (€/MWh)')
all_positions = list(co2_positions) + list(electricity_positions)
all_labels = ["March 2023", "February 2024", "24.09.2023 14:00", "24.09.2023 19:00"]
ax.set_xticks(all_positions)
ax.set_xticklabels(all_labels, rotation=45, ha="right")
ax.set_xlabel('Timeline')
ax.set_ylabel('Prices')
ax.set_title('CO2 and Electricity Prices')
ax.legend()
plt.show()
</code></pre>
<p>to display a barchart with 4 entries. The entries for the</p>
<p><code>co2_prices = [100, 60]</code></p>
<p>should be next to each other with the corresponding labels</p>
<pre><code>co2_timestamps = ["March 2023", "February 2024"]
</code></pre>
<p>and the entries for the</p>
<p><code>electricity_prices = [-2.6, 157]</code></p>
<p>should be next to each other with the corresponding labels</p>
<p><code>electricity_timestamps = ["24.09.2023 14:00", "24.09.2023 19:00"]</code>.</p>
<p>In the current code the bars for the <code>co2_prices</code> are not next to each other, and I tried several solutions to change it, but none of them worked.</p>
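A sketch of one fix: give each series its own contiguous x positions (the gap size below is arbitrary), instead of offsetting one series by a bar width from the other:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

co2_prices = [100, 60]
electricity_prices = [-2.6, 157]

co2_positions = [0, 1]        # CO2 bars side by side
elec_positions = [2.5, 3.5]   # electricity bars side by side, after a gap

fig, ax = plt.subplots()
ax.bar(co2_positions, co2_prices, 0.8, label="CO2 Prices (€/T)")
ax.bar(elec_positions, electricity_prices, 0.8, label="Electricity Prices (€/MWh)")

ax.set_xticks(co2_positions + elec_positions)
ax.set_xticklabels(["March 2023", "February 2024",
                    "24.09.2023 14:00", "24.09.2023 19:00"],
                   rotation=45, ha="right")
ax.legend()
```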
|
<python><matplotlib>
|
2024-01-26 11:11:30
| 1
| 902
|
PeterBe
|
77,885,787
| 9,132,526
|
Resume from checkpoint with Accelerator causes loss to increase
|
<p>I've been working on a project that attempts to finetune Stable Diffusion and introduce layout conditioning. I'm using all the components of Stable Diffusion from the Hugging Face Stable Diffusion pipeline frozen; only the UNet and my custom conditioning model, called LayoutEmbeddeder, are trainable.</p>
<p>I've managed to adapt some code to my needs and train the model. However, my code crashed during execution, and although I think I have implemented the checkpointing correctly, when I resume training the loss is much higher and the generated images are pure noise compared to what I was logging during training.</p>
<p>So it looks like maybe I'm not saving the checkpoints properly. Is anyone able to take a look and give me some advice? I'm not very experienced with Accelerator and there is a lot of magic going on.</p>
<p>Code can be found here: <a href="https://github.com/mia01/stable-diffusion-layout-finetune" rel="nofollow noreferrer">https://github.com/mia01/stable-diffusion-layout-finetune</a> (the checkpoint code is in main.py; I'm using Accelerate hooks).
Wandb log here (you can clearly see the jump in the loss): <a href="https://api.wandb.ai/links/dissertation-project/wqq4croy" rel="nofollow noreferrer">https://api.wandb.ai/links/dissertation-project/wqq4croy</a></p>
<p>This is what the checkpoint directory looks like:
<a href="https://i.sstatic.net/BIbFe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BIbFe.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/Kwn98.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kwn98.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/FFV90.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FFV90.png" alt="enter image description here" /></a></p>
<p>Also when I resume training the validation images I inference every x amount of steps look like this:
<a href="https://i.sstatic.net/QT1G9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QT1G9.jpg" alt="enter image description here" /></a>
Which is worse than the samples generated before I started the fine-tuning!</p>
<p>Any tips would be very much appreciated of where I went wrong!
Thank you</p>
|
<python><huggingface><stable-diffusion><accelerate>
|
2024-01-26 11:10:59
| 1
| 355
|
Suemayah Eldursi
|
77,885,674
| 1,581,090
|
How can I find the best possible shift to correlate two data sets in NumPy?
|
<p>In Python, I have a two-dimensional data set consisting of the two lists, <code>x1</code> and <code>y1</code>. I also have another set, <code>x2</code> and <code>y2</code>, defined as follows:</p>
<pre><code>import random
import numpy as np
import matplotlib.pyplot as plt
# Define the real shift
shift = 50.5
# Fluctuation of single data points
f = 1
# Fraction of missing points or extra points
a = 0.02
data1_x = []
data1_y = []
data2_x = []
data2_y = []
for index in range(int(shift)):
data2_x.append(index + f*random.random())
data2_y.append(0 + f*random.random())
for index in range(500):
x = index
if index<100:
y = 0
elif index<200:
y = (index-100)
elif index<300:
y = 100
elif index<400:
y = 400 - index
else:
y = 0
if random.random()>a:
data1_x.append(x + f*random.random())
data1_y.append(y + f*random.random())
if random.random()>a:
data2_x.append(x + shift + f*random.random())
data2_y.append(y + f*random.random())
if random.random()<a:
data1_x.append(x + 0.5 + f*random.random())
data1_y.append(100 * random.random())
if random.random()<a:
data2_x.append(x + shift + 0.5 + f*random.random())
data2_y.append(100 * random.random())
# Calculation
view = np.lib.stride_tricks.sliding_window_view
n = len(data1_x)
idx = ((view(data2_y, n) - np.array(data1_y))**2).sum(1).argmin()
calculated_shift = np.polyfit(data1_x, data2_x[idx:n+idx], 1)[1]
print(f"calculated shift: {calculated_shift}")
# Plot the original and shifted signals along with cross-correlation
plt.subplot(2, 1, 1)
plt.scatter(data1_x, data1_y, s=20, marker="o", c="b", label="Data1")
plt.scatter(data2_x, data2_y, s=5, marker="o", c="g", label="Data2")
plt.legend()
plt.subplot(2, 1, 2)
plt.scatter(data1_x, data1_y, s=20, marker="o", c="b", label="Data1")
plt.scatter([x-calculated_shift for x in data2_x], data2_y, s=5, marker="o", c="g", label="Data2")
plt.legend()
plt.tight_layout()
plt.show()
</code></pre>
<p>The real-world data</p>
<ul>
<li>might have different lengths (as the example)</li>
<li>the x-values (time) might not be equidistant (as the example)</li>
<li>a point in a dataset might not have a corresponding point in the other data set (as the example)</li>
<li>the y values might not be exactly the same (as the example)</li>
</ul>
<p>Given the solution from Onyambu I get e.g. the following result:</p>
<p><a href="https://i.sstatic.net/0UpSr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0UpSr.png" alt="enter image description here" /></a></p>
<p>The above image show the original data and the plot below shows the data shifted back. It is obvious there is a significant discrepancy still left!</p>
<p>How can I improve the calculation so that the points really end up on top of each other?</p>
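<p>One hedged alternative (the helper name and grid step below are made up): interpolate both irregular series onto uniform grids of the same step and read the shift off the argmax of the full cross-correlation, which uses every overlapping point instead of a single window position:</p>

```python
import numpy as np

def estimate_shift(x1, y1, x2, y2, step=0.1):
    # Resample both irregular series onto uniform grids with the same step.
    grid1 = np.arange(min(x1), max(x1), step)
    grid2 = np.arange(min(x2), max(x2), step)
    g1 = np.interp(grid1, x1, y1)
    g2 = np.interp(grid2, x2, y2)
    g1 -= g1.mean()  # remove offsets so the correlation peak is sharp
    g2 -= g2.mean()
    # The best lag is the argmax of the full cross-correlation; combine it
    # with the difference of the grid origins to get the absolute shift.
    corr = np.correlate(g2, g1, mode="full")
    lag = corr.argmax() - (len(g1) - 1)
    return (grid2[0] - grid1[0]) + lag * step

# Toy check: the second series is the first one shifted right by 5.25.
x = np.linspace(0, 100, 400)
y = np.exp(-((x - 30.0) ** 2) / 50.0)
shift_est = estimate_shift(x, y, x + 5.25, y)  # close to 5.25
```

<p>A smaller <code>step</code> gives finer shift resolution at the cost of a longer correlation; for noisy data it may also help to clip the y values before correlating so the random spikes contribute less.</p>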
|
<python><numpy><correlation>
|
2024-01-26 10:51:24
| 1
| 45,023
|
Alex
|
77,885,605
| 8,003
|
Toggling character format with PyQt6
|
<p>I'm writing a custom word processor as a hobby project. I'm using Python and PyQt6.</p>
<p>I wrote the following. The intent is that if I select some text and apply Bold formatting (for example, by hitting "ctrl-b"), it will toggle the formatting. Specifically, it should remove the bold formatting if all of the selected text is bold. Otherwise, it will apply the bold formatting.</p>
<pre><code>class OvidFont:
def __init__(self, ovid) -> None:
self.textEditor = ovid.textEditor
def setBoldText(self) -> None:
fmt = QTextCharFormat()
if self.textEditor.currentCharFormat().fontWeight() != QFont.Weight.Bold:
print(" setting bold") # for debugging
fmt.setFontWeight(QFont.Weight.Bold)
else:
print(" setting normal") # for debugging
fmt.setFontWeight(QFont.Weight.Normal)
self.textEditor.textCursor().mergeCharFormat(fmt)
</code></pre>
<p>However, it won't remove the bold formatting.</p>
<p>For example, in the sentence "this is a test", if I select "is a" and apply the bold formatting, I get "this <strong>is a</strong> test", with the "is a" properly bold. However, with the selection in place, it still remains bold if I hit "ctrl-b". If I deselect either the first or last character, the toggling of bold works as expected. (I've tried reversing the if/else logic, but that fails, too).</p>
<p>What am I missing?</p>
<p>Update: I've added a working, minimal test case at <a href="https://gist.github.com/Ovid/65936985c6838c0220620cf40ba935fa" rel="nofollow noreferrer">https://gist.github.com/Ovid/65936985c6838c0220620cf40ba935fa</a></p>
|
<python><pyqt6>
|
2024-01-26 10:36:48
| 1
| 11,687
|
Ovid
|
77,885,145
| 5,170,442
|
How can I suppress ruff linting on a block of code
|
<p>I would like to disable/suppress the ruff linter (or certain linting rules) on a block of code. I know that I can do this for single lines (by using <code># noqa: <rule_code></code> at the end of the line) or for entire files/folders (<code># ruff: noqa: <rule_code></code> at the top of the file).</p>
<p>However, I would like to disable the linter ruff on one function or on a multi-line block of code, but not an entire file. Is there a way to do this, without adding <code>noqa</code> to the end of each line?</p>
|
<python><lint><ruff>
|
2024-01-26 09:08:08
| 2
| 653
|
db_
|
77,885,120
| 16,383,578
|
How to open $MFT in Python?
|
<p>I need to open <code>"//?/D:/$MFT"</code> in binary read mode to parse its contents; I need the raw binary data of the Master File Table to resolve <em><strong>File Record Segments</strong></em> from <code>"D:/System Volume Information/Chkdsk/Chkdsk%y%m%d%H%M%S.log"</code>.</p>
<p>Long story short: there was a power outage that caused filesystem corruption, and I ran <code>chkdsk /f D:</code> right away; I think some files may have been corrupted by that command. It said 107855 data files were processed. I want to know which files were affected, check whether they are corrupt, and, if they are, delete them.</p>
<p>I am extremely well versed in computers and programming if my reputation points don't tell you that.</p>
<p>Trying to open it using the usual syntax will result in... You guessed it:</p>
<pre><code>In [142]: mft = open("//?/D:/$MFT", 'rb')
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
Cell In[142], line 1
----> 1 mft = open("//?/D:/$MFT", 'rb')
File C:\Python310\lib\site-packages\IPython\core\interactiveshell.py:284, in _modified_open(file, *args, **kwargs)
277 if file in {0, 1, 2}:
278 raise ValueError(
279 f"IPython won't let you open fd={file} by default "
280 "as it is likely to crash IPython. If you know what you are doing, "
281 "you can use builtins' open."
282 )
--> 284 return io_open(file, *args, **kwargs)
PermissionError: [Errno 13] Permission denied: '//?/D:/$MFT'
</code></pre>
<p>Before you ask, of course I ran with Administrator privileges; in fact I have disabled UAC via a registry hack. I still get <code>PermissionDenied</code>. I know exactly what I am doing.</p>
<p>Googling <code>python open mft</code> gives me only a handful of relevant results, like <a href="https://stackoverflow.com/questions/20717829/trying-to-get-mft-table-from-python-3">Trying to get MFT table from Python 3</a> and <a href="https://stackoverflow.com/questions/28559518/get-hex-values-raw-data-from-mft-on-ntfs-filesystem">Get hex-values / raw data from $MFT on NTFS Filesystem</a>, none are useful.</p>
<p>Libraries like <code>analyzeMFT</code> are ancient and written for Python 2. I have looked at the source code and found it very poorly written, and I already know the raw binary structure of the 1024B records; I have done enough research to write a good parser, but I just can't get access to the file.</p>
<p><code>analyzeMFT</code> when installed via PyPI (<code>pip install analyzeMFT</code>) will install the Python 2 version which cannot even be imported in Python 3:</p>
<pre><code>In [144]: import analyzemft
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[144], line 1
----> 1 import analyzemft
File C:\Python310\lib\site-packages\analyzemft\__init__.py:2
1 __all__ = ["mftutils", "mft", "mftsession", "bitparse"]
----> 2 import bitparse
3 import mft
4 import mftsession
ModuleNotFoundError: No module named 'bitparse'
</code></pre>
<p>I know it should be <code>from . import bitparse</code>, but the script files in GitHub are already patched to Python 3 and so I have copy-pasted all script files to <code>"%pythondir%/Lib/site-packages/analyzeMFT"</code>.</p>
<p>And nope, it doesn't work, utilities like it only work on a dumped copy of the Master File Table and not the "hot" one:</p>
<pre><code>PS C:\Users\Xeni> analyzeMFT -f '//?/D:/$MFT'
Unable to open file: //?/D:/$MFT
</code></pre>
<p>And they only generate human-readable text serializations; I need the raw data in memory, which they don't expose.</p>
<p>How can I open <code>"//?/D:/$MFT"</code> "hot"?</p>
<hr />
<p>I have an idea, maybe I can try to open the drive root directory, like:</p>
<pre><code>partition = os.open('//?/D:', os.O_BINARY)
sector = os.read(partition, 512)
</code></pre>
<p>I guess this will open the partition boot sector. If so, I can read 8 bytes at offset 0x30 to get the $MFT cluster number, convert it to a sector number, read the starting sectors to get its file size, then read all the sectors and divide them into chunks of 1024 bytes to get the records.</p>
<p>I haven't tested this idea, will look into it.</p>
<hr />
<p>I just did this thing I described, and it did give me the partition boot sector, and it is indeed the same structure as the Partition Boot Sector table found in <a href="https://en.wikipedia.org/wiki/NTFS" rel="nofollow noreferrer">Wikipedia</a>, if so then Master File Table starts at cluster 2 and the integers seem to be little endian.</p>
<p>I will need further testing.</p>
<hr />
<p>So far I have got these:</p>
<pre><code>UNITS = ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB", "RiB", "QiB")
BOOT_SECTOR = [
(0, 3, "assembly", 0),
(3, 11, "OEM_ID", 0),
(11, 13, "bytes/sector", 1),
(13, 14, "sectors/cluster", 1),
(21, 22, "media_descriptor", 0),
(24, 26, "sectors/track", 1),
(26, 28, "heads", 1),
(28, 32, "hidden_sectors", 1),
(40, 48, "total_sectors", 1),
(48, 56, "$MFT_cluster", 1),
(56, 64, "$MFTMirr_cluster", 1),
(64, 65, "FRS_length", 1),
(72, 80, "volume_serial_number", 1),
(84, 510, "bootstrap_code", 0),
(510, 512, "end-of-sector", 0),
]
def get_file_record_length(data: dict) -> None:
frs_length = data["FRS_length"]
if frs_length < 128:
data["FRS_length"] = (
frs_length * data["bytes/sector"] * data["sectors/cluster"]
)
else:
data["FRS_length"] = 1 << (256 - frs_length)
def format_size(length: int) -> str:
string = ""
i = 0
while length and i < 10:
chunk = length & 1023
length >>= 10
if chunk:
string = f"{chunk}{UNITS[i]} {string}"
i += 1
if length:
string = f"{length}QiB {string}"
return string.rstrip()
def process_boot_sector(data: dict) -> None:
get_file_record_length(data)
data["raw_size"] = size = data["bytes/sector"] * data["total_sectors"]
data["readable_size"] = format_size(size)
data["bytes/cluster"] = cluster = data["bytes/sector"] * data["sectors/cluster"]
data["$MFT_index"] = (
data["$MFT_cluster"] * cluster
)
def open_partition(drive: str) -> dict:
partition = open(f"//?/{drive}:", "rb")
sector = partition.read(512)
decoded = {}
for start, end, name, little in BOOT_SECTOR:
data = sector[start:end]
if little:
data = int.from_bytes(data, "little")
decoded[name] = data
process_boot_sector(decoded)
partition.seek(0)
return {
"handle": partition,
"info": decoded,
}
partition = open_partition("D")
handle = partition["handle"]
info = partition["info"]
frs_length = info["FRS_length"]
handle.seek(info["$MFT_index"])
MFT = handle.read(frs_length)
</code></pre>
<pre><code>In [2]: info
Out[2]:
{'assembly': b'\xebR\x90',
'OEM_ID': b'NTFS ',
'bytes/sector': 512,
'sectors/cluster': 8,
'media_descriptor': b'\xf8',
'sectors/track': 63,
'heads': 255,
'hidden_sectors': 40,
'total_sectors': 7810824157,
'$MFT_cluster': 2,
'$MFTMirr_cluster': 171046397,
'FRS_length': 1024,
'volume_serial_number': 18170874752001262933,
'bootstrap_code': b'\xfa3\xc0\x8e\xd0\xbc\x00|\xfbh\xc0\x07\x1f\x1ehf\x00\xcb\x88\x16\x0e\x00f\x81>\x03\x00NTFSu\x15\xb4A\xbb\xaaU\xcd\x13r\x0c\x81\xfbU\xaau\x06\xf7\xc1\x01\x00u\x03\xe9\xdd\x00\x1e\x83\xec\x18h\x1a\x00\xb4H\x8a\x16\x0e\x00\x8b\xf4\x16\x1f\xcd\x13\x9f\x83\xc4\x18\x9eX\x1fr\xe1;\x06\x0b\x00u\xdb\xa3\x0f\x00\xc1.\x0f\x00\x04\x1eZ3\xdb\xb9\x00 +\xc8f\xff\x06\x11\x00\x03\x16\x0f\x00\x8e\xc2\xff\x06\x16\x00\xe8K\x00+\xc8w\xef\xb8\x00\xbb\xcd\x1af#\xc0u-f\x81\xfbTCPAu$\x81\xf9\x02\x01r\x1e\x16h\x07\xbb\x16hp\x0e\x16h\t\x00fSfSfU\x16\x16\x16h\xb8\x01fa\x0e\x07\xcd\x1a3\xc0\xbf(\x10\xb9\xd8\x0f\xfc\xf3\xaa\xe9_\x01\x90\x90f`\x1e\x06f\xa1\x11\x00f\x03\x06\x1c\x00\x1efh\x00\x00\x00\x00fP\x06Sh\x01\x00h\x10\x00\xb4B\x8a\x16\x0e\x00\x16\x1f\x8b\xf4\xcd\x13fY[ZfYfY\x1f\x0f\x82\x16\x00f\xff\x06\x11\x00\x03\x16\x0f\x00\x8e\xc2\xff\x0e\x16\x00u\xbc\x07\x1ffa\xc3\xa0\xf8\x01\xe8\t\x00\xa0\xfb\x01\xe8\x03\x00\xf4\xeb\xfd\xb4\x01\x8b\xf0\xac<\x00t\t\xb4\x0e\xbb\x07\x00\xcd\x10\xeb\xf2\xc3\r\nA disk read error occurred\x00\r\nBOOTMGR is missing\x00\r\nBOOTMGR is compressed\x00\r\nPress Ctrl+Alt+Del to restart\r\n\x00\x8c\xa9\xbe\xd6\x00\x00',
'end-of-sector': b'U\xaa',
'raw_size': 3999141968384,
'readable_size': '3TiB 652GiB 502MiB 1006KiB 512B',
'bytes/cluster': 4096,
'$MFT_index': 8192}
In [3]: MFT
Out[3]: b'FILE0\x00\x03\x00+M\x00\xd5\x04\x00\x00\x00\x01\x00\x01\x008\x00\x01\x00\x98\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\x00\x00\x00\x00\x00\x00\x00\x81\x04\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00`\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00H\x00\x00\x00\x18\x00\x00\x00\xb07\x86i6\x1e\xd7\x01\xb07\x86i6\x1e\xd7\x01\xb07\x86i6\x1e\xd7\x01\xb07\x86i6\x1e\xd7\x01\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x000\x00\x00\x00h\x00\x00\x00\x00\x00\x18\x00\x00\x00\x01\x00J\x00\x00\x00\x18\x00\x01\x00\x05\x00\x00\x00\x00\x00\x05\x00\xb07\x86i6\x1e\xd7\x01\xb07\x86i6\x1e\xd7\x01\xb07\x86i6\x1e\xd7\x01\xb07\x86i6\x1e\xd7\x01\x00\x00,\x89\x00\x00\x00\x00\x00\x00,\x89\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x04\x03$\x00M\x00F\x00T\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00H\x00\x00\x00\x01\x00@\x00\x00\x001\x00\x00\x00\x00\x00\x00\x00\x00\x00\xbf\x92\x08\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x00\x00,\x89\x00\x00\x00\x00\x00\x00,\x89\x00\x00\x00\x00\x00\x00,\x89\x00\x00\x00\x00\x13\xc0\x92\x08\x02\x00\x00\x00\xb0\x00\x00\x00H\x00\x00\x00\x01\x00@\x00\x00\x000\x00\x00\x00\x00\x00\x00\x00\x00\x00D\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\x00P\x04\x00\x00\x00\x00\x00`I\x04\x00\x00\x00\x00\x00`I\x04\x00\x00\x00\x00\x001E\xc2\x92\x08\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x81\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x81\x04'
</code></pre>
<p>I need more research to parse all of these things.</p>
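<p>For the parsing step, a struct-based sketch of the fixed FILE record header (offsets per publicly documented NTFS structures — verify them against your own volume; the field selection below is illustrative):</p>

```python
import struct

# Fixed header of a 1024-byte NTFS FILE record (little-endian):
# 0x00 signature "FILE", 0x04/0x06 update-sequence offset/count,
# 0x08 $LogFile LSN, 0x10 sequence number, 0x12 hard-link count,
# 0x14 offset of first attribute, 0x16 flags (0x01 in use, 0x02 directory),
# 0x18 bytes used, 0x1C bytes allocated.
FILE_HEADER = struct.Struct("<4sHHQHHHHII")

def parse_file_record_header(record: bytes) -> dict:
    (sig, usa_off, usa_count, lsn, seq, links,
     attr_off, flags, used, alloc) = FILE_HEADER.unpack_from(record, 0)
    if sig != b"FILE":
        raise ValueError("not a FILE record")
    return {
        "lsn": lsn,
        "sequence": seq,
        "hard_links": links,
        "first_attr_offset": attr_off,
        "in_use": bool(flags & 0x01),
        "is_directory": bool(flags & 0x02),
        "bytes_used": used,
        "bytes_allocated": alloc,
    }

# Synthetic record just to exercise the parser (values are made up).
fake = bytearray(1024)
FILE_HEADER.pack_into(fake, 0, b"FILE", 48, 3, 1234, 1, 1, 56, 0x01, 416, 1024)
header = parse_file_record_header(bytes(fake))
```

<p>A real record additionally needs the update sequence array applied (the last two bytes of each 512-byte sector are placeholders that must be swapped back using the array at the update-sequence offset) before the attribute list is walked.</p>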
|
<python><ntfs><ntfs-mft>
|
2024-01-26 09:03:15
| 0
| 3,930
|
Ξένη Γήινος
|
77,885,055
| 7,325,581
|
use AdaptiveAvgPool2d for CNN in pytorch
|
<p>I'm trying to train a CNN on differently sized images, but it throws the error
<code>ValueError: expected sequence of length 1200 at dim 2 (got 1069)</code>
when I convert them to a tensor.</p>
<p>I think this is because of the differing image sizes, but I don't want to resize the images. How can I use AdaptiveAvgPool2d?</p>
<p>Here is my code:</p>
<pre><code>def makeData():
data = []
# cat
for i in range(1, 11):
data.append({"path": "images/cat_" + str(i) + ".jpeg", "label": 1})
# not cat
for i in range(1, 11):
data.append({"path": "images/not_cat_" + str(i) + ".jpeg", "label": 0})
return data
def loadImage(path):
return Image.open(path).convert('RGB')
class MyDataset(Data.Dataset):
def __init__(self, data):
self.data = data
self.transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])
def __getitem__(self, idx):
img = loadImage(self.data[idx]["path"])
img = self.transform(img)
label = torch.FloatTensor([self.data[idx]["label"]])
return img, torch.FloatTensor(label)
def __len__(self):
return len(self.data)
def collate_fn(batch_list):
data, labels = [], []
for item in batch_list:
data.append(item[0].tolist())
labels.append(item[1].tolist())
return torch.FloatTensor(data), torch.FloatTensor(labels)
loader = Data.DataLoader(MyDataset(makeData()), 2, True, collate_fn=collate_fn)
</code></pre>
<p>And here is my code of the model:</p>
<pre><code>class CatModel(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=(3,3), stride=1, padding=1)
self.act1 = nn.ReLU()
self.drop1 = nn.Dropout(0.3)
self.conv2 = nn.Conv2d(32, 32, kernel_size=(3,3), stride=1, padding=1)
self.act2 = nn.ReLU()
self.pool2 = nn.AdaptiveAvgPool2d((16, 16))
self.flat = nn.Flatten()
self.fc3 = nn.Linear(8192, 512)
self.act3 = nn.ReLU()
self.drop3 = nn.Dropout(0.5)
self.fc4 = nn.Linear(512, 1)
def forward(self, x):
# input 3x32x32, output 32x32x32
x = self.act1(self.conv1(x))
x = self.drop1(x)
# input 32x32x32, output 32x32x32
x = self.act2(self.conv2(x))
# input 32x32x32, output 32x16x16
x = self.pool2(x)
# input 32x16x16, output 8192
x = self.flat(x)
# input 8192, output 512
x = self.act3(self.fc3(x))
x = self.drop3(x)
# input 512, output 1
x = self.fc4(x)
return nn.Sigmoid()(x)
</code></pre>
<p>I added an AdaptiveAvgPool2d layer and a few dropout and linear layers.</p>
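<p>For intuition, here is a hedged NumPy re-implementation of what <code>AdaptiveAvgPool2d((16, 16))</code> computes for one channel (it mirrors the documented floor/ceil index formula, but it is an illustration, not the library code). It shows that any input height/width is reduced to a fixed 16×16 grid — so the pooling layer handles mixed sizes fine, and the error most likely comes earlier, when the <code>collate_fn</code> tries to stack ragged image shapes into one tensor:</p>

```python
import numpy as np

def adaptive_avg_pool2d(x, out_h=16, out_w=16):
    # Output cell (i, j) averages input rows [floor(i*H/out_h), ceil((i+1)*H/out_h))
    # and the analogous column range, so any H x W input yields out_h x out_w.
    h, w = x.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        r0 = (i * h) // out_h
        r1 = -((-(i + 1) * h) // out_h)  # ceil division
        for j in range(out_w):
            c0 = (j * w) // out_w
            c1 = -((-(j + 1) * w) // out_w)
            out[i, j] = x[r0:r1, c0:c1].mean()
    return out

# Any spatial size collapses to 16 x 16, e.g. the 1200 x 1069 image
# from the error message.
pooled = adaptive_avg_pool2d(np.ones((1200, 1069)))
```

<p>If that diagnosis holds, one sketch of a fix is to keep each batch as a list of tensors in <code>collate_fn</code> and run the model per image (or pad to a common size) instead of calling <code>torch.FloatTensor(data)</code> on ragged shapes.</p>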
|
<python><pytorch><conv-neural-network>
|
2024-01-26 08:47:14
| 1
| 1,063
|
just_code_dog
|
77,885,015
| 5,056,347
|
Convert string to boolean using database expression in Django (for generated field)
|
<p>I have the following models code:</p>
<pre class="lang-py prettyprint-override"><code>class Product(models.Model):
...
barcode_data = models.CharField(
max_length=255,
blank=True,
default="",
)
@property
def is_scannable(self):
if self.barcode_data:
return True
else:
return False
</code></pre>
<p>Basically, if <code>barcode_data</code> is not empty, <code>is_scannable</code> returns <code>True</code>. I want to turn <code>is_scannable</code> into a generated field, something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Product(models.Model):
...
barcode_data = models.CharField(
max_length=255,
blank=True,
default="",
)
is_scannable = models.GeneratedField(
expression=...,
output_field=models.BooleanField(),
db_persist=True,
)
</code></pre>
<p>I've read the docs here: <a href="https://docs.djangoproject.com/en/5.0/ref/models/database-functions/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/5.0/ref/models/database-functions/</a>, but it seems like there isn't really a function to convert a string to boolean based on whether it is empty or not. Is there a way to do this?</p>
|
<python><django><django-models>
|
2024-01-26 08:40:32
| 1
| 8,874
|
darkhorse
|
77,884,899
| 10,452,700
|
What is difference between aggregation and resampling over historical data within pandas data frame?
|
<p>I'm experimenting with the characterization of data over time. Let's say I have the following time data:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import scipy.signal as signal
import matplotlib.pyplot as plt
# Function to generate triangular wave
def generate_triangular_pulse(rise_duration, fall_duration, period, samples):
control_points_x = np.array([0, rise_duration, rise_duration + fall_duration, period])
control_points_y = np.array([0, 1, 0, 0])
x = np.linspace(0, period, samples)
return np.interp(x, control_points_x, control_points_y)
# Set common parameters
samples = 288
rise_duration = 1
fall_duration = 3
period = 5
num_triangles = 5
# Create a time array with 5-minute intervals
t_num = pd.date_range(start='2024-01-01', freq='5T', periods=samples)
# Plot 1: Positive Triangle Pulse
triangular_pulse = np.zeros(samples)
for i in range(num_triangles):
start_index = i * (samples // num_triangles)
end_index = (i + 1) * (samples // num_triangles)
triangular_pulse[start_index:end_index] = generate_triangular_pulse(rise_duration, fall_duration, period, end_index - start_index)
# Convert data to a Pandas DataFrame
data = {'date': t_num, 'Positive Triangle Pulse': triangular_pulse }
df = pd.DataFrame(data)
print(df.head())
# date Positive Triangle Pulse
#0 2024-01-01 00:00:00 0.000000
#1 2024-01-01 00:05:00 0.089286
#2 2024-01-01 00:10:00 0.178571
#3 2024-01-01 00:15:00 0.267857
#4 2024-01-01 00:20:00 0.357143
#288 rows × 2 columns
</code></pre>
<p>I want to downsample the data from <strong>minute</strong> to <strong>hour</strong> resolution while losing as little information as possible.</p>
<pre class="lang-py prettyprint-override"><code>resampled_df = (df.set_index('date') # Conform data by setting the datetime column as dataframe index needed for resample
.resample('1H') # resample with frequency of 1 hour
.mean() # used mean() to aggregate
.interpolate() # filling NaNs and missing values [just in case]
)
resampled_df.shape # (24, 1)
</code></pre>
<p>if we plot the output and compare them:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import pandas as pd
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 4))
# (nearly-)constant
axes[0].plot( df['date'], df['Positive Triangle Pulse'], "b.-" , label="constant" )
axes[0].set_title(f'Positive Triangle Pulse incl. {len(df)} observations')
# Resample of (nearly-)constant
axes[1].plot( resampled_df.index, resampled_df['Positive Triangle Pulse'], "b.-" , label="constant" )
axes[1].set_title(f'Positive Triangle Pulse (resampled frequency=1H) incl. {len(resampled_df)} observations')
#plt.tight_layout()
step_size = 12
selected_ticks = df['date'][::step_size]
for ax in axes:
ax.set_xticks(selected_ticks)
ax.set_xticklabels(selected_ticks, rotation=90)
plt.legend(loc="best")
plt.show()
</code></pre>
<p><img src="https://i.imgur.com/24ZnhgC.png" alt="img" /></p>
<p>I want to find out the best practice for achieving this with minimum impact on the data's pattern over time. See this <a href="https://stackoverflow.com/q/46992158/10452700">post</a>, which is very close to my objective.</p>
<p>Questions:</p>
<ul>
<li>What is the difference between aggregation and resampling when one is expressed with <code>resample()</code> and the other with <code>agg()</code> or <code>groupby()</code> methods?
<ul>
<li>Which methods work by (de-)selecting individual records, and which produce values that represent/summarize the observations they digest?</li>
</ul>
</li>
<li>Which of these methods has the least impact on the behavior of the data over time?</li>
</ul>
<hr />
<ul>
<li><a href="https://stackoverflow.com/q/57336794/10452700">Historical Data Resample Issue</a></li>
<li><a href="https://stackoverflow.com/q/66982446/10452700">Resample/aggregate intervals in pandas</a></li>
<li><a href="https://stackoverflow.com/q/25246842/10452700">Pandas data frame: resample with linear interpolation</a></li>
<li><a href="https://stackoverflow.com/q/75280083/10452700">Resampling timeseries Data Frame for different customized seasons and finding aggregates</a></li>
<li><a href="https://stackoverflow.com/q/57830819/10452700">aggregate groups results in pandas data frame</a></li>
<li><a href="https://stackoverflow.com/q/53972633/10452700">using resample to aggregate data with different rules for different columns in a pandas dataframe</a></li>
<li><a href="https://stackoverflow.com/q/73411972/10452700">Need to aggregate data in pandas data frame</a></li>
<li><a href="https://stackoverflow.com/q/46214861/10452700">pandas resample when cumulative function returns data frame</a></li>
<li><a href="https://stackoverflow.com/q/44305794/10452700">Pandas resample data frame with fixed number of rows</a></li>
</ul>
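<p>As a minimal illustration of the first bullet (a sketch, not an exhaustive answer): for a plain mean over fixed time bins, <code>resample()</code> and <code>groupby(pd.Grouper(...))</code> perform the same computation — resample is a time-series-flavoured entry point to the grouping machinery, and neither one drops records; both summarize every observation that falls into a bin:</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", freq="5min", periods=288)
s = pd.Series(np.sin(np.linspace(0, 10, 288)), index=idx)

hourly_resample = s.resample("1h").mean()                   # time-series API
hourly_groupby = s.groupby(pd.Grouper(freq="1h")).mean()    # generic groupby

same = hourly_resample.equals(hourly_groupby)  # identical bins and values
```

<p>By contrast, picking one representative row per hour (e.g. <code>resample("1h").first()</code>) genuinely discards observations, which is where the two styles start to diverge in how much of the pattern survives.</p>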
|
<python><pandas><time-series><aggregate><downsampling>
|
2024-01-26 08:17:20
| 0
| 2,056
|
Mario
|
77,884,695
| 2,465,039
|
How can I replicate Pydantic v1 behaviour of json_encoders in pydantic v2?
|
<p>The latest versions of Pydantic v2 have resurrected the configuration option of <code>json_encoders</code> (see <a href="https://docs.pydantic.dev/latest/api/config/#pydantic.config.ConfigDict.json_encoders" rel="nofollow noreferrer">https://docs.pydantic.dev/latest/api/config/#pydantic.config.ConfigDict.json_encoders</a>)</p>
<p>However, the following example does not work as it did in v1.</p>
<p>v2:</p>
<pre><code>class Foo(BaseModel):
foo_bytes: bytes
foo_params: list = Field(default_factory=list)
model_config = ConfigDict(json_encoders={
bytes: lambda x: f"0x{x.hex()}"
})
foo_data = bytes(b"hello_world")
foo = Foo(foo_bytes=foo_data, foo_params=[foo_data],)
print(foo.model_dump_json())
# prints: '{"foo_bytes":"0x68656c6c6f5f776f726c64","foo_params":["hello_world"]}'
</code></pre>
<p>v1:</p>
<pre><code>class Foo(BaseModel):
foo_bytes: bytes
foo_params: list = Field(default_factory=list)
class Config:
json_encoders = {
bytes: lambda x: f"0x{x.hex()}",
}
foo_data = bytes(b"hello_world")
foo = Foo(foo_bytes=foo_data, foo_params=[foo_data],)
print(foo.json())
# prints: '{"foo_bytes": "0x68656c6c6f5f776f726c64", "foo_params": ["0x68656c6c6f5f776f726c64"]}'
</code></pre>
<p>Any suggestions on how I can replicate the behaviour of v1 in v2?</p>
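<p>One hedged workaround (the alias name <code>HexBytes</code> is made up; this assumes Pydantic v2's <code>PlainSerializer</code>): attach the serializer to the type itself via <code>Annotated</code>, so it applies wherever that type occurs — including inside containers, provided the container is typed as <code>List[HexBytes]</code> rather than a bare <code>list</code>:</p>

```python
from typing import Annotated, List
from pydantic import BaseModel, Field, PlainSerializer

# Serializer attached to the type rather than the model config; it runs
# during JSON serialization wherever HexBytes appears.
HexBytes = Annotated[
    bytes,
    PlainSerializer(lambda x: f"0x{x.hex()}", return_type=str, when_used="json"),
]

class Foo(BaseModel):
    foo_bytes: HexBytes
    foo_params: List[HexBytes] = Field(default_factory=list)

foo = Foo(foo_bytes=b"hello_world", foo_params=[b"hello_world"])
print(foo.model_dump_json())
# prints: {"foo_bytes":"0x68656c6c6f5f776f726c64","foo_params":["0x68656c6c6f5f776f726c64"]}
```

<p>The trade-off versus v1's <code>json_encoders</code> is that fields must carry element types; an untyped <code>list</code> gives Pydantic nothing to attach the serializer to, which is exactly the v2 behaviour shown above.</p>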
|
<python><pydantic>
|
2024-01-26 07:32:03
| 1
| 977
|
user2465039
|
77,884,413
| 8,047,378
|
When I convert a PyTorch model to onnx, I get many files, why?
|
<p>When I convert a torch model to ONNX, I get many files, as you can see in the images below:</p>
<p><a href="https://i.sstatic.net/jO4m3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jO4m3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/WYFks.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WYFks.png" alt="enter image description here" /></a></p>
<p>No errors were reported during conversion. And my code is:</p>
<pre><code>model_path = "./saved_models/emotion_model_temp/"
config = BertConfig.from_json_file(model_path) # load the BERT model configuration
config.num_labels=3
device = 'cuda'
input_ids_np = torch.from_numpy(np.zeros([1, 128], dtype=np.int64))
token_type_ids_np = torch.from_numpy(np.zeros([1, 128], dtype=np.int64))
attention_mask_np = torch.from_numpy(np.zeros([1, 128], dtype=np.int64))
input_ids_tf = input_ids_np.type(torch.int64).to(device)
token_type_ids_tf = token_type_ids_np.type(torch.int64).to(device)
attention_mask_tf = attention_mask_np.type(torch.int64).to(device)
feature_wav = torch.from_numpy(np.array(np.zeros((1,313, 128, 1))))
feature_wav_tf = feature_wav.type(torch.float32).to(device)
model = EmotionModel.from_pretrained(model_path, config=config).to("cuda")
model.load_state_dict(torch.load(os.path.join(model_path, "pytorch_model.bin"),map_location="cuda"))
print("#### model loading finished")
model.eval()
onnx_name = "./temp/temp.onnx"
torch.onnx.export(model, # model being run
(input_ids_tf,token_type_ids_tf,attention_mask_tf,feature_wav_tf), # model input (or a tuple for multiple inputs)
onnx_name, # where to save the model
opset_version=11, # the ONNX version to export the model to
input_names=['input_ids', 'token_type_ids', 'attention_mask', 'feature_wav'],
output_names=['logits'],
                  dynamic_axes={"input_ids": {0: "batch_size"},  # batch dimension
"token_type_ids": {0: "batch_size"},
"attention_mask": {0: "batch_size"},
"feature_wav": {0: "batch_size"},}
)
</code></pre>
<p>Does anyone know what happened to my onnx conversion?</p>
|
<python><pytorch><onnx>
|
2024-01-26 06:01:53
| 1
| 939
|
Frank.Fan
|
77,884,368
| 6,766,408
|
Unable to get XPath as values and location is changing
|
<p>I have an application under test. I want to read the displayed numbers and verify them against expected values using Robot Framework. Because the values and their locations change after importing a file, it is hard to pin down stable XPaths. I tried XPaths generated with SelectorsHub as well as the normal Chrome DevTools XPath. How can I get these values when both the values and their locations keep changing?</p>
<p>After importing one file, it displays as:
<a href="https://i.sstatic.net/ak38Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ak38Q.png" alt="enter image description here" /></a></p>
<p>When importing further files, the values and their locations keep changing, as below.
<a href="https://i.sstatic.net/vi0jY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vi0jY.png" alt="enter image description here" /></a></p>
<p>and so on</p>
<p><a href="https://i.sstatic.net/QXgWZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QXgWZ.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><xpath><robotframework><automation-testing>
|
2024-01-26 05:42:17
| 2
| 312
|
ADS KUL
|
77,884,311
| 3,220,135
|
Python winrt how to get notification listener permission?
|
<p>How can I get the needed permission for a notification listener from winrt in Python? The following code immediately returns "denied" without any sort of Windows prompt showing up:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import winrt.windows.ui.notifications.management as management
async def main():
listener = management.UserNotificationListener.current
access_result = await listener.request_access_async()
print(access_result) #prints 2: denied
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>Naturally, it is then impossible to do anything useful with the notification listener, which is a problem.</p>
<p>I ran this code from cmd with python 3.11.4 and <a href="https://pypi.org/project/winrt-Windows.UI.Notifications.Management/" rel="nofollow noreferrer">winrt 2.0.0b2</a></p>
<p>The docs mention adding the permission to the manifest for c# apps, but how would this work for winrt-python?</p>
|
<python><notifications><windows-runtime>
|
2024-01-26 05:18:26
| 1
| 11,165
|
Aaron
|
77,884,246
| 11,480,383
|
How to read openapi json schema and extract API information using python
|
<p>I have an OpenAPI JSON schema file. Using Python, I want to read that JSON file, extract all the APIs defined in it (including the API definition, request headers, request body, etc.), and execute all those APIs in a loop.
I can handle the API execution part; what I want to know is whether there is a standard Python library for reading the OpenAPI JSON spec and extracting the API information.
If there is such a library, could you please give me sample code showing how to extract the API details from this OpenAPI JSON?</p>
<p>example openapi json schema:</p>
<pre><code>{
"openapi": "3.0.0",
"info": {
"title": "Cat Facts API",
"version": "1.0"
},
"paths": {
"/breeds": {
"get": {
"tags": [
"Breeds"
],
"summary": "Get a list of breeds",
"description": "Returns a a list of breeds",
"operationId": "getBreeds",
"parameters": [
{
"name": "limit",
"in": "query",
"description": "limit the amount of results returned",
"required": false,
"schema": {
"type": "integer",
"format": "int64"
}
}
],
"responses": {
"200": {
"description": "successful operation",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Breed"
}
}
}
}
}
}
}
},
"/fact": {
"get": {
"tags": [
"Facts"
],
"summary": "Get Random Fact",
"description": "Returns a random fact",
"operationId": "getRandomFact",
"parameters": [
{
"name": "max_length",
"in": "query",
"description": "maximum length of returned fact",
"required": false,
"schema": {
"type": "integer",
"format": "int64"
}
}
],
"responses": {
"200": {
"description": "successful operation",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CatFact"
}
}
}
},
"404": {
"description": "Fact not found"
}
}
}
},
"/facts": {
"get": {
"tags": [
"Facts"
],
"summary": "Get a list of facts",
"description": "Returns a a list of facts",
"operationId": "getFacts",
"parameters": [
{
"name": "max_length",
"in": "query",
"description": "maximum length of returned fact",
"required": false,
"schema": {
"type": "integer",
"format": "int64"
}
},
{
"name": "limit",
"in": "query",
"description": "limit the amount of results returned",
"required": false,
"schema": {
"type": "integer",
"format": "int64"
}
}
],
"responses": {
"200": {
"description": "successful operation",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/CatFact"
}
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"Breed": {
"title": "Breed model",
"description": "Breed",
"properties": {
"breed": {
"title": "Breed",
"description": "Breed",
"type": "string",
"format": "string"
},
"country": {
"title": "Country",
"description": "Country",
"type": "string",
"format": "string"
},
"origin": {
"title": "Origin",
"description": "Origin",
"type": "string",
"format": "string"
},
"coat": {
"title": "Coat",
"description": "Coat",
"type": "string",
"format": "string"
},
"pattern": {
"title": "Pattern",
"description": "Pattern",
"type": "string",
"format": "string"
}
},
"type": "object"
},
"CatFact": {
"title": "CatFact model",
"description": "CatFact",
"properties": {
"fact": {
"title": "Fact",
"description": "Fact",
"type": "string",
"format": "string"
},
"length": {
"title": "Length",
"description": "Length",
"type": "integer",
"format": "int32"
}
},
"type": "object"
}
}
}
}
</code></pre>
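<p>For reference, a minimal sketch (an illustration only, using nothing but the stdlib <code>json</code> module) of walking a spec like the one above and collecting per-endpoint details; the spec is trimmed to one path so the snippet is self-contained:</p>

```python
import json

# A trimmed, inlined copy of the spec above so the sketch is self-contained.
spec = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/breeds": {
      "get": {
        "operationId": "getBreeds",
        "parameters": [
          {"name": "limit", "in": "query", "required": false,
           "schema": {"type": "integer", "format": "int64"}}
        ]
      }
    }
  }
}
""")

# Walk every path -> HTTP method -> operation and collect the details
# needed to call the endpoint later.
endpoints = []
for path, methods in spec.get("paths", {}).items():
    for method, op in methods.items():
        endpoints.append({
            "method": method.upper(),
            "path": path,
            "operation_id": op.get("operationId"),
            "query_params": [p["name"] for p in op.get("parameters", [])
                             if p.get("in") == "query"],
        })

print(endpoints)
```

Dedicated parsers such as <code>openapi-spec-validator</code> or <code>prance</code> exist as well (for validation and <code>$ref</code> resolution), but for plain extraction the dictionary walk above is often enough.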
|
<python><openapi>
|
2024-01-26 04:50:00
| 1
| 700
|
Prabodha
|
77,884,044
| 2,624,770
|
python csvwriter gives "BrokenPipeError" when used in pipe
|
<p>Here's an example of what I'm seeing:</p>
<pre><code>❯ cat /tmp/test.py
#!/usr/bin/env python3
import csv
with open("/dev/stdout", "w") as f:
csvwriter = csv.writer(f)
csvwriter.writerows((_,) for _ in range(10000))
</code></pre>
<p>This runs just fine when redirected to a file or sent straight to stdout.</p>
<pre><code>❯ /tmp/test.py > /dev/null
❯ echo $?
0
</code></pre>
<p>However, when piped to <code>head</code> (and likely some other tools), it raises an exception.</p>
<pre><code>❯ /tmp/test.py | head > /dev/null
Traceback (most recent call last):
File "/tmp/test.py", line 7, in <module>
csvwriter.writerows((_,) for _ in range(10000))
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/test.py", line 5, in <module>
with open("/dev/stdout", "w") as f:
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>With something like <code>range(15)</code> there is no problem. The question is: is there some setting I can use in Python to avoid this behaviour?</p>
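<p>A hedged sketch of one common workaround: catch <code>BrokenPipeError</code> and stop quietly instead of letting it propagate. Here the downstream reader closing early (what <code>head</code> does) is simulated in-process with <code>os.pipe</code>, assuming a POSIX platform where Python ignores SIGPIPE and writes return EPIPE:</p>

```python
import csv
import os

# Simulate `| head`: close the read end so further writes fail with EPIPE.
read_fd, write_fd = os.pipe()
os.close(read_fd)

rows_written = 0
try:
    with os.fdopen(write_fd, "w") as f:
        writer = csv.writer(f)
        for i in range(10000):
            writer.writerow((i,))
            rows_written += 1
except BrokenPipeError:
    # Downstream stopped reading; treat it as a normal end of output.
    pass

print("stopped after", rows_written, "rows")
```

The writes are buffered, so the error surfaces on the first real flush; the same <code>try/except</code> around the real script's <code>writerows</code> call gives a clean exit when piped to <code>head</code>.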
|
<python><python-3.x>
|
2024-01-26 03:21:39
| 1
| 2,293
|
keithpjolley
|
77,884,005
| 522,815
|
Python ModuleNotFoundError only in command line execution not in IDE
|
<p>I see a <code>ModuleNotFoundError</code> error when running a simple program.</p>
<p>The project is created using Poetry, and the setup is as shown in the screenshot.</p>
<p><a href="https://i.sstatic.net/V5KS2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V5KS2.png" alt="enter image description here" /></a></p>
<p><code>h.py</code> has this:</p>
<pre><code>def add(num1, num2):
return num1 + num2
</code></pre>
<p><code>s.py</code> has this:</p>
<pre><code>from src.helper.h import add
def main():
print(add(1, 2))
if __name__ == "__main__":
main()
</code></pre>
<p>If I execute the code from within PyCharm IDE, I see the output correctly. If I execute from command line using <code>python3 src/scripts/s.py</code>, I see an error as below:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/me/dev/python-learn/src/scripts/s.py", line 1, in <module>
from src.helper.h import add
ModuleNotFoundError: No module named 'src'
</code></pre>
<p>I tried adding <code>__init.py__</code> to each of the directories <code>src</code>, <code>helpers</code>, and <code>scripts</code>. That did not resolve the issue.</p>
<p>Can someone tell me what I am missing?</p>
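<p>A hedged sketch of what is going on: <code>python3 src/scripts/s.py</code> puts <code>src/scripts</code> on <code>sys.path</code>, not the project root, so the <code>src</code> package is invisible. Recreating the layout in a temp directory shows the import working once the project root itself is on <code>sys.path</code> (running <code>python -m src.scripts.s</code> from the root has the same effect):</p>

```python
import os
import sys
import tempfile
import textwrap

root = tempfile.mkdtemp()

# Recreate the question's layout: src/helper/h.py with an add() function.
os.makedirs(os.path.join(root, "src", "helper"))
open(os.path.join(root, "src", "__init__.py"), "w").close()
open(os.path.join(root, "src", "helper", "__init__.py"), "w").close()
with open(os.path.join(root, "src", "helper", "h.py"), "w") as f:
    f.write(textwrap.dedent("""
        def add(num1, num2):
            return num1 + num2
    """))

# With the project root on sys.path, `from src.helper.h import add` resolves.
sys.path.insert(0, root)
sys.modules.pop("src", None)  # make sure our temp package wins
from src.helper.h import add

print(add(1, 2))
```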
|
<python><python-import><python-module><modulenotfounderror>
|
2024-01-26 03:05:20
| 2
| 1,823
|
techjourneyman
|
77,883,926
| 50,385
|
How to make `import` and `from import` diverge in Python?
|
<p>Say I have the following consecutive lines in a python code base:</p>
<pre><code>from foo import BAR # succeeds
log.info(f"{dir(BAR)=}") # succeeds
import foo.BAR # succeeds
log.info(f"{dir(foo.BAR)=}") # fails, AttributeError no field BAR in module foo
</code></pre>
<p>There's no other code in between. If I deliberately wanted to create this effect, how would I do it? I know it's possible because I'm observing it in a large code base running under Python 3.11, but I have no idea how. <strong>What feature of the python import system lets these two forms of import diverge?</strong> It seems like the ability to <code>import foo.BAR</code> must require <code>foo.BAR</code> to exist, and we even confirm it exists first with <code>from foo import BAR</code> and logging the fields of <code>BAR</code>.</p>
<p><code>foo</code> is a directory. <code>foo/__init__.py</code> exists and is empty. <code>foo/BAR.py</code> exists and contains top level items like functions and classes.</p>
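<p>A hedged sketch of one mechanism that can produce exactly this divergence: since Python 3.7, <code>from foo import BAR</code> falls back to <code>sys.modules['foo.BAR']</code> when <code>foo</code> has no <code>BAR</code> attribute, while <code>foo.BAR</code> after <code>import foo.BAR</code> is a plain attribute lookup. Simulating a package whose submodule is registered in <code>sys.modules</code> but never bound as an attribute (which lazy importers or code that does <code>del foo.BAR</code> can cause):</p>

```python
import sys
import types

# Register a package and its submodule, but deliberately do NOT
# set foo.BAR = bar (normal imports would do this binding).
foo = types.ModuleType("foo")
foo.__path__ = []  # mark it as a package
bar = types.ModuleType("foo.BAR")
sys.modules["foo"] = foo
sys.modules["foo.BAR"] = bar

from foo import BAR          # succeeds via the sys.modules fallback
import foo.BAR               # succeeds: the module is already in sys.modules

try:
    foo.BAR                  # plain attribute access on the module object
    outcome = "ok"
except AttributeError:
    outcome = "AttributeError: no field BAR in module foo"

print(outcome)
```

So the thing to hunt for in the large code base is whatever removes or never sets the <code>BAR</code> attribute on the <code>foo</code> module object.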
|
<python><python-import><attributeerror>
|
2024-01-26 02:30:52
| 2
| 22,294
|
Joseph Garvin
|
77,883,758
| 1,401,560
|
Why this exception using Paramiko?
|
<p>I'm trying to teach myself Python to connect to an SFTP server. I tried out this sample code from <a href="https://pythoneo.com/how-do-i-change-directories-using-paramiko/" rel="nofollow noreferrer">How do I change directories using Paramiko?</a>.</p>
<pre class="lang-py prettyprint-override"><code>import paramiko
# Create an SSH client
ssh = paramiko.SSHClient()
# Connect to the remote server
ssh.connect('hostname', username='user', password='pass')
# Open a new channel
channel = ssh.get_transport().open_session()
# Start a shell on the channel
channel.get_pty()
channel.invoke_shell()
# Execute the `cd` command to change to the `/tmp` directory
channel.exec_command('cd /tmp')
# Close the channel
channel.close()
# Close the SSH client
ssh.close()
</code></pre>
<p>But I get the following (I put the correct host and creds) when it gets to the <code>channel.*</code> lines.</p>
<pre class="lang-none prettyprint-override"><code>$ python3 -V
Python 3.8.2
$ python3 sftp.py
Traceback (most recent call last):
File "sftp.py", line 27, in <module>
channel.get_pty()
File "/Users/kmhait/Library/Python/3.8/lib/python/site-packages/paramiko/channel.py", line 70, in _check
return func(self, *args, **kwds)
File "/Users/kmhait/Library/Python/3.8/lib/python/site-packages/paramiko/channel.py", line 201, in get_pty
self._wait_for_event()
File "/Users/kmhait/Library/Python/3.8/lib/python/site-packages/paramiko/channel.py", line 1224, in _wait_for_event
raise e
paramiko.ssh_exception.SSHException: Channel closed.
</code></pre>
<p>What does this mean?</p>
|
<python><python-3.x><paramiko>
|
2024-01-26 01:22:35
| 1
| 17,176
|
Chris F
|
77,883,755
| 1,297,248
|
Logging causing OSError and .nfs files
|
<p>I have this logger.py file:</p>
<pre><code>import logging
from logging.handlers import TimedRotatingFileHandler
FORMAT = '%(asctime)s %(message)s'
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
handler = TimedRotatingFileHandler('/mnt/NFS-drive/debug.log', when='midnight', backupCount=7)
handler.setFormatter(logging.Formatter(FORMAT))
log.addHandler(handler)
</code></pre>
<p>I use it in this countdown module:</p>
<pre><code>import time
from logger import log
def countdown(minutes=10, callback=None):
seconds = minutes * 60
# Run the countdown loop
while seconds >= 0:
# Calculate remaining minutes and seconds
remaining_minutes = seconds // 60
remaining_seconds = seconds % 60
# Print the remaining minutes and seconds
log.info("Time remaining: {:02d}:{:02d}".format(remaining_minutes, remaining_seconds))
# Sleep for 1 second
time.sleep(1)
# Decrement the seconds
seconds -= 1
if callback:
log.info("Timer finished...")
callback()
</code></pre>
<p>Then I run a multithreaded script which writes logs to <code>debug.log</code>. This script runs on multiple machines as well. They all write logs to the same NFS mounted file.</p>
<p>I'm seeing two issues:</p>
<p>Error:</p>
<pre><code>--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.10/logging/__init__.py", line 1104, in emit
self.flush()
File "/usr/lib/python3.10/logging/__init__.py", line 1084, in flush
self.stream.flush()
OSError: [Errno 116] Stale file handle
Call stack:
File "/mnt/nf/myscripts/run.py", line 128, in run
countdown(minutes=5, callback=lambda: log.info("Checking queue for new messages"))
File "/mnt/nf/myscripts/countdown.py", line 14, in countdown
log.info("Time remaining: {:02d}:{:02d}".format(remaining_minutes, remaining_seconds))
Message: 'Time remaining: 03:25'
Arguments: ()
</code></pre>
<p>And I'm seeing all these .nfs files in the directory:
<a href="https://i.sstatic.net/rkAYC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rkAYC.png" alt="enter image description here" /></a></p>
<p>Any idea why I'm suddenly seeing this error and getting these files?</p>
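<p>Not an explanation of the root cause, but a hedged sketch of the usual mitigation: give every machine (and ideally every process) its own log file, so no two writers share one NFS file handle that can go stale when another host rotates or renames it. The directory here is a temp-dir placeholder for <code>/mnt/NFS-drive</code>:</p>

```python
import logging
import os
import socket
import tempfile
from logging.handlers import TimedRotatingFileHandler

FORMAT = '%(asctime)s %(message)s'

# Placeholder directory; the question uses /mnt/NFS-drive instead.
log_dir = tempfile.mkdtemp()

# One file per host+pid: rotation on one machine cannot invalidate
# the handle another machine is still writing through.
log_path = os.path.join(
    log_dir, f"debug-{socket.gethostname()}-{os.getpid()}.log")

log = logging.getLogger("per_host_demo")
log.setLevel(logging.INFO)
handler = TimedRotatingFileHandler(log_path, when='midnight', backupCount=7)
handler.setFormatter(logging.Formatter(FORMAT))
log.addHandler(handler)

log.info("Time remaining: 03:25")
handler.flush()
print(os.path.exists(log_path))
```

If the logs must end up in one place, a common pattern is per-host files plus a periodic aggregation job, or a network logging handler (e.g. <code>SocketHandler</code>) to a single collector process.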
|
<python><nfs>
|
2024-01-26 01:21:21
| 1
| 6,409
|
Batman
|
77,883,709
| 4,770,412
|
How to resolve python path error while trying to write to a file from different sub directory
|
<p>In my Python project, my logger script is supposed to create a log file in the <code>logs/</code> directory. My logger class lives in <code>src/logger.py</code>, and <code>logs/</code> and <code>src/</code> are at the same level. When the logger tries to create a log file, it looks for <code>logs/</code> inside the <code>src/</code> directory and errors out.</p>
<p>project structure</p>
<pre><code>.
├── README.md
├── __init__.py
├── data
│ └── test_database.db
├── flask_app
│ ├── main.py
│ └── templates
│ └── index.html
├── logs
└── src
├── __init__.py
├── constants.py
├── logger.py
├── utils.py
</code></pre>
<pre><code>log_file = f"logs/{today_date}.log"
</code></pre>
<pre class="lang-none prettyprint-override"><code>FileNotFoundError: [Errno 2] No such file or directory: '/Users/ribahshaikh/projects/hht_webapp/src/logs/2024-01-25.log'
</code></pre>
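<p>One common fix (a sketch, not necessarily how the rest of the project resolves paths) is to anchor the log directory to the source file's own location instead of the process's working directory, so it works no matter where the script is launched from. A temp directory stands in for the project root here:</p>

```python
import datetime
import pathlib
import tempfile

# Stand-in for src/logger.py: in the real project this would be
# pathlib.Path(__file__).resolve().parent.parent / "logs".
project_root = pathlib.Path(tempfile.mkdtemp())
log_dir = project_root / "logs"
log_dir.mkdir(parents=True, exist_ok=True)  # also guards against a missing logs/

today_date = datetime.date.today().isoformat()
log_file = log_dir / f"{today_date}.log"
log_file.touch()

print(log_file.exists())
```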
|
<python>
|
2024-01-26 00:57:26
| 1
| 413
|
Aamir Shaikh
|
77,883,624
| 687,739
|
Filter a pandas DataFrame by a dict
|
<p>Assume I have a DataFrame like this:</p>
<pre><code> Num1 Num2
1 1 0
2 3 2
3 5 4
4 7 6
5 9 8
</code></pre>
<p>And I have a dictionary like this:</p>
<pre><code>d = {
"Num1": 2,
"Num2": 5
}
</code></pre>
<p>I want to return the values from the DataFrame from each column that matches the key and the values greater than or equal to the value in the key.</p>
<p>My result should look like this:</p>
<pre><code> Num1 Num2
1 nan nan
2 3 nan
3 5 nan
4 7 6
5 9 8
</code></pre>
<p>How might I do this?</p>
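<p>One way to do it (a sketch): <code>df.ge(pd.Series(d))</code> aligns the thresholds on column labels, and <code>where()</code> masks every failing cell to NaN:</p>

```python
import pandas as pd

df = pd.DataFrame({"Num1": [1, 3, 5, 7, 9], "Num2": [0, 2, 4, 6, 8]},
                  index=[1, 2, 3, 4, 5])
d = {"Num1": 2, "Num2": 5}

# Each column is compared against the threshold with the matching key;
# cells that fail the comparison are replaced with NaN.
out = df.where(df.ge(pd.Series(d)))
print(out)
```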
|
<python><pandas>
|
2024-01-26 00:26:23
| 3
| 15,646
|
Jason Strimpel
|
77,883,535
| 1,519,058
|
Order dataframe by conditions combining many columns
|
<p>I have two advanced sorting scenarios.</p>
<p>They are independent cases so I have listed below a sample with expected results.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
a={
"ip":['10.10.11.30','10.10.11.30','10.10.11.30', '10.2.2.10', '10.10.2.1', '10.2.2.2'],
"path":['/data/foo/err','/data/foo/zone','/data/foo/err','/data/foo/zone','/data/foo/zone','/data/foo/tmp'],
"date":['25/01/2024','25/01/2024','01/08/2020','23/01/2024','24/01/2024','25/01/2024'],
"count":[3,10,20,5,20,50]
}
df=pd.DataFrame(a)
print(df)
print()
## Output
ip path date count
0 10.10.11.30 /data/foo/err 25/01/2024 3
1 10.10.11.30 /data/foo/zone 25/01/2024 10
2 10.10.11.30 /data/foo/err 01/08/2020 20
3 10.2.2.10 /data/foo/zone 23/01/2024 5
4 10.10.2.1 /data/foo/zone 24/01/2024 20
5 10.2.2.2 /data/foo/tmp 25/01/2024 50
</code></pre>
<p>Runnable sample: <a href="https://onecompiler.com/python/422hhyqa5" rel="nofollow noreferrer">https://onecompiler.com/python/422hhyqa5</a></p>
<h2>Sorting case 1</h2>
<h3>Rule(s)</h3>
<ul>
<li>Order by ip ASC, date ASC, count ASC</li>
</ul>
<h3>Expected output</h3>
<pre class="lang-py prettyprint-override"><code> ip path date count
5 10.2.2.2 /data/foo/tmp 25/01/2024 50
3 10.2.2.10 /data/foo/zone 23/01/2024 5
4 10.10.2.1 /data/foo/zone 24/01/2024 20
2 10.10.11.30 /data/foo/err 01/08/2020 20
0 10.10.11.30 /data/foo/err 25/01/2024 3
1 10.10.11.30 /data/foo/zone 25/01/2024 10
</code></pre>
<h3>My Attempt</h3>
<p>Performing "natural" sorting on multiple columns is straight forward (date is of type DateTime).</p>
<p>I also managed to achieve sorting by ip.</p>
<p>But I did not manage to combine the two as it always gives different errors</p>
<pre class="lang-py prettyprint-override"><code>df_sorted_by_date = (df.sort_values(by=['date', 'count'],
ascending=[True, True],
ignore_index=True)
df_sorted_by_ip = (df.sort_values(by=["ip"],
key=lambda x: x.str.split(".").apply(lambda y: [int(z) for z in y]),
ignore_index=True))
</code></pre>
<h2>Sorting case 2</h2>
<h3>Rule(s)</h3>
<ul>
<li><code>rank1</code>: if <code>(path contains 'zone') and (count >=10)</code> then place 1st and order by <code>IP ASC</code></li>
<li><code>rank2</code>: if <code>(path NOT contains 'zone') and (count >=10) and (date = today())</code> then place 2nd and order by <code>IP ASC</code></li>
<li><code>rank3</code>: The remaining rows are placed last and ordered by <code>IP ASC</code> then <code>date ASC</code> if equal values</li>
</ul>
<h3>Expected output</h3>
<p>let's assume today is 25th</p>
<pre class="lang-py prettyprint-override"><code> ip path date count
4 10.10.2.1 /data/foo/zone 24/01/2024 20 # rank1
1 10.10.11.30 /data/foo/zone 25/01/2024 10 # rank1
5 10.2.2.2 /data/foo/tmp 25/01/2024 50 # rank2
3 10.2.2.10 /data/foo/zone 23/01/2024 5 # rank3
2 10.10.11.30 /data/foo/err 01/08/2020 20 # rank3
0 10.10.11.30 /data/foo/err 25/01/2024 3 # rank3
</code></pre>
<h3>My Attempt</h3>
<p>None. I am not sure how to proceed, this is very advanced for me >_<</p>
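<p>For sorting case 1, one hedged sketch of combining the two attempts: pandas applies the <code>key</code> callable to each column in <code>by</code> independently, so a single callable can branch on <code>col.name</code> (the <code>path</code> column is dropped here for brevity):</p>

```python
import pandas as pd

a = {
    "ip": ['10.10.11.30', '10.10.11.30', '10.10.11.30',
           '10.2.2.10', '10.10.2.1', '10.2.2.2'],
    "date": ['25/01/2024', '25/01/2024', '01/08/2020',
             '23/01/2024', '24/01/2024', '25/01/2024'],
    "count": [3, 10, 20, 5, 20, 50],
}
df = pd.DataFrame(a)

def sort_key(col):
    # The key callable is invoked once per column listed in `by`,
    # so branch on the column name.
    if col.name == "ip":
        return col.map(lambda ip: tuple(int(p) for p in ip.split(".")))
    if col.name == "date":
        return pd.to_datetime(col, format="%d/%m/%Y")
    return col

out = df.sort_values(by=["ip", "date", "count"], key=sort_key,
                     ignore_index=True)
print(out)
```

Case 2 could then build on the same idea with an explicit rank column computed via <code>np.select</code> before sorting, though that part is not shown here.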
|
<python><python-3.x><pandas><dataframe><sorting>
|
2024-01-25 23:47:17
| 2
| 4,963
|
Enissay
|
77,883,462
| 687,739
|
Get minimum unique tuple pair from list of tuples
|
<p>Consider the following list of tuples:</p>
<pre><code>transactions = [
('GBP.USD', '2022-04-29'),
('SNOW', '2022-04-26'),
('SHOP', '2022-04-21'),
('GBP.USD', '2022-04-27'),
('MSFT', '2022-04-11'),
('MSFT', '2022-04-21'),
('SHOP', '2022-04-25')
]
</code></pre>
<p>I can get the tuple with the minimum date like this:</p>
<pre><code>min(transactions, key=lambda x: x[1])
</code></pre>
<p>This returns a single tuple:</p>
<pre><code>('MSFT', '2022-04-11')
</code></pre>
<p>I need to return the minimum date of any duplicates along with all unique values. So my output should be this:</p>
<pre><code>[
('SNOW', '2022-04-26'),
('SHOP', '2022-04-21'),
('GBP.USD', '2022-04-27'),
('MSFT', '2022-04-11'),
]
</code></pre>
<p>How can I do this?</p>
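<p>One stdlib-only way to do it (a sketch; the output order below is first-seen order, which differs from the ordering shown above but contains the same pairs). ISO-formatted dates compare correctly as plain strings:</p>

```python
transactions = [
    ('GBP.USD', '2022-04-29'),
    ('SNOW', '2022-04-26'),
    ('SHOP', '2022-04-21'),
    ('GBP.USD', '2022-04-27'),
    ('MSFT', '2022-04-11'),
    ('MSFT', '2022-04-21'),
    ('SHOP', '2022-04-25'),
]

# Keep the smallest date seen so far for each symbol.
earliest = {}
for symbol, date in transactions:
    if symbol not in earliest or date < earliest[symbol]:
        earliest[symbol] = date

result = list(earliest.items())
print(result)
```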
|
<python><tuples>
|
2024-01-25 23:21:45
| 6
| 15,646
|
Jason Strimpel
|
77,883,300
| 1,519,417
|
Full page screenshot with Selenium Chrome driver in Python
|
<p>AI insists that with Selenium 4 or later you can take a full-page screenshot with <code>webdriver.Chrome()</code> by using the 'full' argument in <code>driver.get_screenshot_as_file("screenshot.png", full=True)</code>.
However, I get the error message</p>
<blockquote>
<p>TypeError: WebDriver.get_screenshot_as_file() got an unexpected
keyword argument 'full'</p>
</blockquote>
<p>Please, help.</p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2024-01-25 22:33:02
| 1
| 668
|
Vladimir
|
77,883,267
| 2,299,939
|
Updating MP3 metadata with Python and music-tag
|
<p>I'm trying to use a Python 3 script to edit the metadata of MP3 (and other) files using music-tag (0.4.3) but with mixed success. I say "mixed" because it works for some fields and not for others.</p>
<p>I'm wondering if it's related to how I'm using the names of the fields (capitalisation etc.) or the format of the values I'm passing to them, but <strong>I couldn't find any good and clear documentation either for music-tag or the standard itself</strong> to shed light on this.</p>
<p>To demonstrate the problem with an example, I start by using Mediainfo to look at the metadata of a file before any edits:</p>
<pre><code>'''
General
Complete name : Rossini - William Tell Overture (Fritz Reiner).mp3
Format : MPEG Audio
File size : 1.19 MiB
Duration : 1 min 2 s
Overall bit rate mode : Constant
Overall bit rate : 160 kb/s
Album : Rossini - Ouvertüren
Part/Position : 1
Part/Total : 1
Track name : REINER Wm Tell Over
Track name/Position : 6
Track name/Total : 6
Performer : Fritz Reiner: Chicago Symphony Orchestra
Composer : Gioacchino Rossini/Gioachino Antonio Rossini
Encoded by : iTunes 9.0.1
Genre : Classical
Recorded date : 1958
iTunPGAP : 0
Audio
Format : MPEG Audio
Format version : Version 1
Format profile : Layer 3
Format settings : Joint stereo / MS Stereo
Duration : 1 min 2 s
Bit rate mode : Constant
Bit rate : 160 kb/s
Channel(s) : 2 channels
Sampling rate : 44.1 kHz
Frame rate : 38.281 FPS (1152 SPF)
Compression mode : Lossy
Stream size : 1.18 MiB (100%)'''
</code></pre>
<p>Then, I try to update several fields by creating a dict and using that to update the file with music-tag</p>
<pre><code>import music_tag as mtag
from music_tag.id3 import mutagen
from mutagen.id3 import TLAN
metadata_updates = {
"artist": "Rossini",
"album": "Rossini - Overtures",
"year": "1829",
"genre": "Classical music",
"Publisher": "RCA Victor Red Seal",
"Copyright": "Creative Commons Attribution-ShareAlike 4.0 License (Public Domain - Non-PD US)",
"comment": "Test comment",
"language": "French",
"tracknumber": 1,
"totaltracks": 6,
"tracktitle": "Rossini - William Tell Overture (Fritz Reiner)",
"short_title": "William Tell Overture",
"date": "22 November 1958"
}
f = mtag.load_file("Rossini - William Tell Overture (Fritz Reiner).mp3")
for key in metadata_updates:
    try:
        f[key] = metadata_updates[key]
    except Exception as e:
        print(f"Error when trying to update {key}:")
        print(e)

f.save()
# Output
Error when trying to update Publisher:
'publisher'
Error when trying to update Copyright:
'copyright'
Error when trying to update language:
'language'
Error when trying to update date:
'date'
</code></pre>
<p>After this, some fields are updated as intended, but others are not (see errors). In the case of <em>Publisher</em>, <em>Copyright</em> and <em>Language</em>, if</p>
<ul>
<li>I open the file with VLC</li>
<li>open the metadata window</li>
<li>manually update those fields in the VLC window</li>
<li>click on <em>Save Metadata</em></li>
</ul>
<p>they are saved and appear both in VLC when the file is reopened and in Mediainfo.</p>
<p>Additionally, there seem to be aliases in operation; for example, <em>artist</em> (which is the term used in VLC) seems to map to <em>Performer</em>.</p>
<p>The screenshot below shows a comparison of the Mediainfo output in the three states (original version, after Python update, after additional manual update with VLC).</p>
<p><a href="https://i.sstatic.net/AIi06.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AIi06.png" alt="diff of metadata between original, Python update and manual update with VLC" /></a></p>
<p>At the end of the day, <strong>I'm trying to figure out how to do this properly</strong>. A good starting point would be a list/reference showing all valid fields and aliases and any relevant restrictions/specifications on the input values.</p>
<p>Any other useful suggestions would also be appreciated!</p>
|
<python><metadata><mp3><vlc>
|
2024-01-25 22:24:35
| 1
| 479
|
Ratler
|
77,883,241
| 1,701,812
|
Get specific form of range in Python 3
|
<p>When getting <code>range(1,100)</code> I get:</p>
<pre><code>[ 1, 2, 3, 4, 5 ... 99 ]
</code></pre>
<p>I need something like a middle-out interleaving of this range:</p>
<pre><code>[ 50, 49, 51, 48, 52, 47, 53 ... 99 ]
</code></pre>
<p>How to get it?</p>
<hr />
<p><strong>Background</strong>: It is all about Bitcoin puzzle 66.</p>
<p>First I made a prediction with linear regression over the past known private keys up to and including puzzle 65. I set puzzle 66 and the two following ones as the predictions I want to get, and got three integer numbers.</p>
<p>Now I don't want to search linearly from <code>number-1M</code> to <code>number+1M</code>, because the original number may already be close to the real private key, so I need to search outward from the middle. That's what the answer is needed for.</p>
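<p>A stdlib sketch of the middle-out order described above. For the puzzle use case, the predicted number would replace the computed midpoint, but the alternation pattern is the same:</p>

```python
def middle_out(lo, hi):
    """Yield the integers of range(lo, hi) starting from the middle,
    alternating one step below, one step above."""
    mid = (lo + hi) // 2
    yield mid
    for off in range(1, max(mid - lo, hi - mid)):
        if mid - off >= lo:
            yield mid - off
        if mid + off <= hi - 1:
            yield mid + off

print(list(middle_out(1, 100))[:7])   # [50, 49, 51, 48, 52, 47, 53]
```

As a generator it never materialises the whole range, so it also works for very large search windows.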
|
<python><python-3.x><range>
|
2024-01-25 22:13:14
| 5
| 752
|
pbies
|
77,883,215
| 11,278,478
|
Get column number for Python Melt function
|
<p>I need to unpivot a pandas dataframe. I am using the pd.melt() function for this.
It is working as expected; now I need to add an additional column "column_number" to my output. Example below:</p>
<pre><code>name age gender id
a 18 m 1
b 20 f 2
</code></pre>
<p>Current Output:</p>
<pre><code> id variable value
1 name a
1 age 18
1 gender m
2 name b
2 age 20
2 gender f
</code></pre>
<p>Expected Output:</p>
<pre><code>id column_number variable value
1 1 name a
1 2 age 18
1 3 gender m
2 1 name b
2 2 age 20
2 3 gender f
</code></pre>
<p>Since my dataframe structure can change, I will not know if I have 3 columns or more in future. How can I generate this column_number column in melt results?</p>
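<p>One way (a sketch): derive the number from the original column order rather than from row position, so it stays correct however many columns the frame grows to:</p>

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "age": [18, 20],
                   "gender": ["m", "f"], "id": [1, 2]})

value_cols = [c for c in df.columns if c != "id"]
melted = df.melt(id_vars="id", value_vars=value_cols)

# Map each variable back to its original column position (1-based);
# this survives any future change in the number of columns.
position = {col: i + 1 for i, col in enumerate(value_cols)}
melted["column_number"] = melted["variable"].map(position)

melted = melted.sort_values(["id", "column_number"], ignore_index=True)
print(melted[["id", "column_number", "variable", "value"]])
```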
|
<python><python-3.x><pandas><dataframe><pandas-melt>
|
2024-01-25 22:11:54
| 5
| 434
|
PythonDeveloper
|
77,883,150
| 3,666,302
|
Add default values to decorated function and update type signature
|
<p>I'd like to create decorator which adds default values to function arguments, but preserves the argument names and types of the function signature.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable
def optional[**P](f: Callable[P, None]) -> Callable[P, None]:
def wrapper(*args: P.args, **kwargs: P.kwargs):
args_with_default = ... # fill in missing argument values
f(**args_with_default)
return wrapper
@optional
def my_func(a: str, b: int):
...
my_func("a") # ERROR: b also expected
</code></pre>
<p>Is this possible?</p>
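<p>The runtime half is doable with <code>inspect</code>; a hedged sketch follows, with the decorator taking the defaults as keyword arguments (an assumption, since the question does not say where the defaults come from). The typing half is the hard part: a static checker still sees the original <code>Callable[P, None]</code> signature, so <code>my_func("a")</code> is flagged even though it works at runtime:</p>

```python
import functools
import inspect
from typing import Callable

def optional(**defaults):
    """Return a decorator that fills in the given keyword defaults
    whenever the caller omits those arguments (runtime behaviour only)."""
    def decorate(f: Callable) -> Callable:
        sig = inspect.signature(f)

        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            bound = sig.bind_partial(*args, **kwargs)
            for name, value in defaults.items():
                bound.arguments.setdefault(name, value)
            return f(*bound.args, **bound.kwargs)

        return wrapper
    return decorate

@optional(b=0)
def my_func(a: str, b: int) -> str:
    return f"{a}-{b}"

print(my_func("a"))       # b filled in as 0 -> "a-0"
print(my_func("a", 7))    # explicit b wins -> "a-7"
```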
|
<python><decorator><typing>
|
2024-01-25 21:55:22
| 1
| 2,950
|
TomTom
|
77,883,056
| 2,512,455
|
Restrict Flake8 for older Python version
|
<p>Is there a way to use flake8 under Python 3.11 but restrict it to follow the rules of Python 3.9?</p>
<p>e.g. for mypy, I can add <code>--python-version=X.Y</code></p>
<blockquote>
<p>This flag will make mypy type check your code as if it were run under Python version X.Y. Without this option, mypy will default to using whatever version of Python is running mypy.</p>
</blockquote>
|
<python><flake8>
|
2024-01-25 21:31:26
| 1
| 365
|
Michal Vašut
|
77,883,025
| 9,698,518
|
How to read in an environment variable for Jinja2 so that it can be interpolated in an if condition?
|
<p>I am trying to wrap my head around how to skip certain parts of code based on an environment variable when using a <code>Jinja2</code> template to generate a file with a <code>YAML</code> document. I did some research online and asked ChatGPT, but I did not find any answer that helped me.</p>
<p>So I made this minimal example:</p>
<pre class="lang-py prettyprint-override"><code>import os
from jinja2 import Template
os.environ['SKIP']="False"
print(
Template(
"{% if env['SKIP'] %}\n"
'{{ "do not skip" }}\n'
"{% else %}\n"
'{{ "skip" }}\n'
"{% endif %}"
).render(env=os.environ)
)
</code></pre>
<p>outputs <code>do not skip</code> (as expected).</p>
<p>Executing the following:</p>
<pre class="lang-py prettyprint-override"><code>import os
from jinja2 import Template

os.environ['SKIP']="True"
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(
print(
Template(
"{% if env['SKIP'] %}\n"
'{{ "do not skip" }}\n'
"{% else %}\n"
'{{ "skip" }}\n'
"{% endif %}"
).render(env=os.environ)
)
</code></pre>
<p>also returns <code>do not skip</code>, but I expected <code>skip</code>. So I assume it is not reading the environment variable correctly.</p>
<p>How can I use environment variables in Jinja2 correctly? And how to fix this minimal example?</p>
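<p>A hedged sketch of what the snippets above suggest is happening: the condition is testing string truthiness, and any non-empty string (including <code>"False"</code>) is truthy in Jinja2 just as in Python. Comparing against the literal string instead makes both values behave as intended:</p>

```python
import os
from jinja2 import Template

# Compare the environment value as a string instead of relying on
# truthiness, since os.environ only ever holds strings.
template = Template(
    "{% if env['SKIP'] == 'True' %}skip{% else %}do not skip{% endif %}"
)

os.environ['SKIP'] = "True"
print(template.render(env=os.environ))   # skip

os.environ['SKIP'] = "False"
print(template.render(env=os.environ))   # do not skip
```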
|
<python><jinja2>
|
2024-01-25 21:24:22
| 1
| 672
|
mgross
|
77,882,964
| 23,260,297
|
Grouping data and calculating totals for new column
|
<p>I have a dataframe which contains data to be used in calculations. I have to group rows based on 3 columns and then calculate the totals and plug the totals into a formula.</p>
<p>formula = -(totalvalue - totalFP) / totalquantity
where totalquantity is the sum of the quantity column,
totalvalue is the sum of the value column, and
totalFP is the sum of each price*quantity</p>
<p>My data looks like this(I simplified the numbers a bit):</p>
<pre><code>DealType StartDate Commodity FixedPrice Quantity Value
Sell 01Dec2023 [stock1, stock2] 1.0 10 10
Sell 01Dec2023 [stock1, stock2] 2.0 10 10
Sell 01Dec2023 [stock1, stock2] 3.0 10 10
Buy 01Dec2023 [stock1, stock2, stock3] 1.0 10 10
Buy 01Dec2023 [stock1, stock2, stock3] 2.0 10 10
Buy 01Dec2023 [stock1, stock2, stock3] 3.0 10 10
</code></pre>
<p>After grouping it should be like this:</p>
<pre><code>DealType StartDate Commodity FixedPrice Quantity Value TotalFP FloatPrice
Sell 01Dec2023 [stock1, stock2] 1.0 10 10 10
Sell 01Dec2023 [stock1, stock2] 2.0 10 10 20
Sell 01Dec2023 [stock1, stock2] 3.0 10 10 30
30 30 60 1.0
DealType StartDate Commodity FixedPrice Quantity Value TotalFP FloatPrice
Buy 01Dec2023 [stock1, stock2, stock3] 1.0 10 10 10
Buy 01Dec2023 [stock1, stock2, stock3] 2.0 10 10 20
Buy 01Dec2023 [stock1, stock2, stock3] 3.0 10 10 30
30 30 60 1.0
</code></pre>
<p>the final result should be:</p>
<pre><code>DealType StartDate Commodity FixedPrice Quantity Value FloatPrice
Sell 01Dec2023 [stock1, stock2] 1.0 10 10 1.0
Sell 01Dec2023 [stock1, stock2] 2.0 10 10 1.0
Sell 01Dec2023 [stock1, stock2] 3.0 10 10 1.0
Buy 01Dec2023 [stock1, stock2, stock3] 1.0 10 10 1.0
Buy 01Dec2023 [stock1, stock2, stock3] 2.0 10 10 1.0
Buy 01Dec2023 [stock1, stock2, stock3] 3.0 10 10 1.0
</code></pre>
<p>This is the code I have:</p>
<pre><code> # Grouping by 'Commodity', 'StartDate', and 'Deal Type'
grouped_df = df.groupby(['Commodity', 'StartDate', 'DealType'])
# Calculating total MTMValue, fixedstrikeprice and quantity for each group
total_values = grouped_df.agg({'MTMValue': 'sum', 'FixedPriceStrike': 'sum', 'Quantity': 'sum'})
# Calculating FloatPrice using the provided formula
total_values['FloatPrice'] = -(total_values['MTMValue'] - (total_values['FixedPriceStrike'] * total_values['Quantity'])) / total_values['Quantity']
# Merging the FloatPrice back to the original DataFrame based on the group columns
df = pd.merge(df, total_values['FloatPrice'], how='left', on=['Commodity', 'StartDate', 'DealType'])
</code></pre>
<p>but this throws an error on the groupby line ("unhashable type: 'list'") and I am unsure how to fix it. Is there a better solution than the approach I have?</p>
<p>I also tried this, but got the same error:</p>
<pre><code> df['FloatPrice'] = grouped_df.apply(lambda group: -(group['MTMValue'].sum() - (group['FixedPriceStrike'] * group['Quantity']).sum()) / group['Quantity'].sum()).reset_index(level=[0, 1, 2], drop=True)
</code></pre>
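<p>The "unhashable type: 'list'" comes from grouping on the <code>Commodity</code> column, whose cells are Python lists; lists cannot serve as group keys. A hedged sketch (with the simplified column names from the question's sample data): convert the lists to tuples first, then use <code>groupby(...).transform</code> so the per-group formula lands back on every row:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "DealType": ["Sell"] * 3 + ["Buy"] * 3,
    "StartDate": ["01Dec2023"] * 6,
    "Commodity": [["stock1", "stock2"]] * 3
                 + [["stock1", "stock2", "stock3"]] * 3,
    "FixedPriceStrike": [1.0, 2.0, 3.0] * 2,
    "Quantity": [10] * 6,
    "MTMValue": [10] * 6,
})

# Lists are unhashable; tuples make valid group keys.
df["Commodity"] = df["Commodity"].apply(tuple)

keys = ["Commodity", "StartDate", "DealType"]
g = df.groupby(keys)
total_value = g["MTMValue"].transform("sum")
total_qty = g["Quantity"].transform("sum")
total_fp = (df["FixedPriceStrike"] * df["Quantity"]).groupby(
    [df[k] for k in keys]).transform("sum")

df["FloatPrice"] = -(total_value - total_fp) / total_qty
print(df["FloatPrice"].tolist())
```

With the sample numbers each group gives -(30 - 60) / 30 = 1.0, matching the expected output, and no merge back is needed since <code>transform</code> is already row-aligned.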
|
<python><pandas>
|
2024-01-25 21:11:54
| 1
| 2,185
|
iBeMeltin
|
77,882,942
| 15,296,744
|
What is wrong with the function below for calculating simple moving average?
|
<p>I have written the function below to compute the <a href="https://en.wikipedia.org/wiki/Moving_average#Simple_moving_average" rel="nofollow noreferrer">SMA</a> of data from a CSV file according to the standard SMA formula. However, something is wrong with my implementation that I can't figure out.</p>
<pre><code>def SMA_calculation(t, w):
s = np.size(t)
g = np.zeros(s)
for i in range(0, s):
if i < w-1:
g[i] = np.NaN
else:
g[i] = np.mean(t[i-w:i])
return g
</code></pre>
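<p>For comparison, and hedged as my reading of the intent: the window <code>t[i-w:i]</code> ends one element early, covering <code>t[i-w]</code> through <code>t[i-1]</code> and leaving out <code>t[i]</code> itself. Shifting the slice to <code>t[i-w+1:i+1]</code> gives the usual trailing SMA:</p>

```python
import numpy as np

def SMA_calculation(t, w):
    s = np.size(t)
    g = np.zeros(s)
    for i in range(s):
        if i < w - 1:
            g[i] = np.nan
        else:
            # inclusive trailing window: t[i-w+1] .. t[i]
            g[i] = np.mean(t[i - w + 1:i + 1])
    return g

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# with w=3: first two entries NaN, then 2.0, 3.0, 4.0
print(SMA_calculation(data, 3))
```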
|
<python><numpy><moving-average>
|
2024-01-25 21:06:05
| 2
| 391
|
Dorrin Samadian
|
77,882,712
| 8,942,319
|
Why does TypedDict alt syntax work for illegal names?
|
<pre><code>from typing import TypedDict, Optional
# This will work
Overview = TypedDict('Overview', {'detail.id': Optional[str]})
# This will throw an error
# SyntaxError: illegal target for annotation
class Overview(TypedDict):
'detail.id': Optional[str]
</code></pre>
<p>This is on Python 3.8.9</p>
<p>Is there any way to make the latter syntax work?</p>
|
<python><python-3.x>
|
2024-01-25 20:17:10
| 0
| 913
|
sam
|
77,882,606
| 159,072
|
Why am I getting different autocorrelation results from different libraries?
|
<ol>
<li>Why am I getting different autocorrelation results from different libraries?</li>
<li>Which one is correct?</li>
</ol>
<pre><code>import numpy as np
from scipy import signal
# Given data
data = np.array([1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 3.33])
# Compute the autocorrelation using scipy's correlate function
autocorrelations = signal.correlate(data, data, mode='full')
# The middle of the autocorrelations array is at index len(data)-1
mid_index = len(data) - 1
# Show autocorrelation values for lag=1,2,3,4,...
print(autocorrelations[mid_index + 1:])
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[21.2425 17.285 13.4525 9.8075 6.4125 3.33 ]
</code></pre>
<hr />
<pre><code>import pandas as pd
# Given data
data = [1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 3.33]
# Convert data to pandas Series
series = pd.Series(data)
# Compute and print autocorrelation for lags 1 to length of series - 1
for lag in range(0, len(data)):
print(series.autocorr(lag=lag))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>1.0
0.9374115462038415
0.9287843240596312
0.9260849979667674
0.9407970411588671
0.9999999999999999
</code></pre>
<hr />
<pre><code>from statsmodels.tsa.stattools import acf
# Your data
data = [1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 3.33]
# Calculate the autocorrelation using the acf function
autocorrelation = acf(data, nlags=len(data)-1, fft=True)
# Display the autocorrelation coefficients for lags 1,2,3,4,...
print(autocorrelation)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[ 1. 0.39072553 0.13718689 -0.08148897 -0.24787067 -0.3445268 -0.35402598]
</code></pre>
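The three results differ because each library computes a different quantity: scipy's correlate returns raw sliding dot products (no demeaning, no normalization); pandas' autocorr computes the Pearson correlation of the series with its shifted self, re-estimating mean and variance on each overlapping segment; statsmodels' acf uses the standard time-series definition, demeaning once and dividing each lagged covariance by the lag-0 variance. A numpy sketch reproducing the statsmodels convention, checked against the output above:

```python
import numpy as np

data = np.array([1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 3.33])
d = data - data.mean()            # demean once, over the full series
c0 = np.dot(d, d) / len(d)        # lag-0 autocovariance (the variance)

def acf(lag):
    # Lagged autocovariance normalized by the lag-0 variance.
    return np.dot(d[:-lag or None], d[lag:]) / len(d) / c0

r1 = acf(1)
assert abs(r1 - 0.39072553) < 1e-5   # matches the statsmodels value for lag 1
```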
|
<python><pandas><statsmodels><autocorrelation>
|
2024-01-25 19:55:18
| 3
| 17,446
|
user366312
|
77,882,500
| 11,885,185
|
Regex for phone numbers
|
<p>I want to catch the phone number from a text using regex.</p>
<p>Examples:</p>
<p>I have this regex which finds the phone number very well:
<code>^((\(?\+45\)?)?)(\s?\d{2}\s?\d{2}\s?\d{2}\s?\d{2})$</code></p>
<p>and it catches all the numbers below well.</p>
<p>But I cannot catch the "tel.", "tlf", "mobil:", etc. that could be before the number. Also, if another letter comes after the last digit, it no longer matches the number, but it should.</p>
<p>These examples are not covered:</p>
<pre><code>tel.: +45 09827374, +45 89895867, some kind of text...
mobil: +45 20802020, +45 20802001,
tlf.: +45 5555 1212
tlf: +4567890202Girrafe
</code></pre>
<p>If helpful, I found this regex:
<code>'\btlf\b\D*([\d\s]+\d)'</code> which can extract the number together with the "tlf" prefix and also stops as soon as a letter character appears.</p>
<p>So I tried to combine them and I obtained this but it doesn't work:
<code>\b(tlf|mobil|telephone|mobile|tel)\b\D*(^((\(?\+45\)?)?)(\s?\d{2}\s?\d{2}\s?\d{2}\s?\d{2})$)</code></p>
<p>Expected output:</p>
<ul>
<li>for input: <code>"tel.: +45 09827374, +45 89895867, some kind of text..."</code> --> output: <code>"tel.: +45 09827374" and "+45 89895867"</code></li>
<li>for input: <code>"mobil: +45 20802020, +45 20802001,"</code> --> output: <code>"mobil: +45 20802020"</code> and <code>"+45 20802001"</code> or <code>"mobil: +45 20802020, +45 20802001"</code> is ok too</li>
<li>for input: <code>"tlf +45 5555 1212"</code> --> output: <code>"tlf +45 5555 1212"</code></li>
<li>for input: <code>"tlf: +4567890202Girrafe"</code> --> output: <code>"tlf: +4567890202"</code></li>
<li>for input: <code>"+4567890202"</code> --> output: <code>"+4567890202"</code></li>
</ul>
<p>Can you help me please?</p>
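The combined attempt fails because `^` and `$` anchor the pattern to the whole string, so it can never match mid-text once a label precedes the number. A sketch that drops the anchors and makes the label prefix optional (a hypothetical pattern that covers the samples shown; extend the label alternation as needed):

```python
import re

# Optional label prefix, then +45 (possibly parenthesized) and 8 digits in
# optionally-spaced pairs; no ^/$ anchors so it matches inside longer text.
pattern = re.compile(
    r'(?:\b(?:tlf|tel|mobil|mobile|telephone)\.?:?\s*)?'
    r'(\(?\+45\)?\s?\d{2}\s?\d{2}\s?\d{2}\s?\d{2})',
    re.IGNORECASE,
)

assert pattern.findall("tlf: +4567890202Girrafe") == ["+4567890202"]
assert pattern.findall("tel.: +45 09827374, +45 89895867, some text") == [
    "+45 09827374", "+45 89895867"]
assert pattern.findall("tlf.: +45 5555 1212") == ["+45 5555 1212"]
```

The capture group returns only the number; if you also need the label in the output, wrap the prefix alternation in its own group.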
|
<python><regex><regex-group>
|
2024-01-25 19:34:12
| 2
| 612
|
Oliver
|
77,882,402
| 2,657,986
|
Are hashmaps typically implemented with an array of keys?
|
<p>Many languages that include hashmaps in the standard library or as builtins, allow the programmer to iterate over the hashmap. For example, in python:</p>
<pre><code>d = {"US": "English", "Spain": "Spanish", "France": "French", "Canada": "English"}
for key in d:
print(key)
</code></pre>
<p>Or C++:</p>
<pre><code>#include <iostream>
#include <unordered_map>
using namespace std;
int main(){
unordered_map <string, string> m;
m["US"] = "English";
m["Spain"] = "Spanish";
m["France"] = "French";
m["Canada"] = "English";
for (auto &it: m){
cout << it.first << endl;
}
return 0;
}
</code></pre>
<p>Rust also has a similar thing and the iteration looks more like the python.</p>
<p>So my question is: Do the hashmap implementations contain a vector of keys that they can iterate over, and either just give you the key or lookup the value on the fly in O(1)? Or do they go through the actual hash table including empty buckets?</p>
<p>I ask because in the latter case, presumably it's more efficient to keep a list of keys myself and iterate that way, rather than actually use the map's functionality.</p>
<p>Edit: Perhaps I was unclear in this question. The purpose of a hashmap is to associate keys with values in O(1) time. All of these implementations do that well. But when you want to iterate over all the key/value pairs in your map, there are two different ways I can think of to achieve that. One is to have an array of your keys, go through that array, lookup each key in O(1) time, and print the key/value. This takes O(n) because you have <em>n</em> keys.</p>
<p>The other way is to run a for loop over the hashmap and extract the keys, values, or both. My question is whether this form typically comes with a speed penalty because the hashtable has empty buckets that need to be checked as we iterate. Normally it seems obvious that we should incur this penalty, but I feel like if I were implementing a library like this I would keep an array of keys and when the user requests to iterate I would do it the first way.</p>
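For what it's worth, CPython's dict (since the 3.6 compact-dict implementation) stores entries in a dense, insertion-ordered array separate from the sparse hash index, so direct iteration never scans empty buckets; a separately maintained key list therefore visits exactly the same keys with no asymptotic advantage:

```python
# Iterating the dict directly and iterating a separately kept key list
# visit the same keys in the same (insertion) order:
d = {"US": "English", "Spain": "Spanish", "France": "French", "Canada": "English"}
key_list = list(d)                       # a separately maintained key array
assert [k for k in d] == key_list
assert [d[k] for k in d] == list(d.values())
```

C++'s `std::unordered_map`, by contrast, does walk its bucket array, which is why its standard specifies iteration cost in terms of bucket count as well as size.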
|
<python><c++><rust><hashmap><iterator>
|
2024-01-25 19:13:04
| 1
| 939
|
Isaac D. Cohen
|
77,882,386
| 11,092,636
|
Unexpected Sheet Renaming in OpenPyXL When Only Case Changes
|
<p>I'm using OpenPyXL in Python to manipulate Excel workbooks. I encountered an unexpected behavior when renaming a sheet. <strong>If I change the name of a sheet to a name that only differs in letter case from its previous name, OpenPyXL appends a '1' to the new name</strong>.</p>
<p>Here is a Minimal Reproducible Example:</p>
<pre><code>import openpyxl
# Create a new workbook
wb = openpyxl.Workbook()
# Get the default sheet
sheet = wb.active
# Rename the sheet to 'Input (Raw)'
sheet.title = 'Input (Raw)'
# Rename the sheet to 'Input (raw)'
sheet.title = 'Input (raw)'
# Print the current title of the sheet
print(sheet.title)
</code></pre>
<p>Expected Output:</p>
<pre><code>Input (raw)
</code></pre>
<p>Actual Output:</p>
<pre><code>Input (raw)1
</code></pre>
<p>I'm using <code>Python 3.11.1</code> and <code>openpyxl 3.1.2</code>.</p>
|
<python><openpyxl>
|
2024-01-25 19:08:06
| 1
| 720
|
FluidMechanics Potential Flows
|
77,882,308
| 1,316,702
|
azure databricks custom function not working (python)
|
<p>In databricks/python I am trying to create a custom function and I'm receiving an error.</p>
<p>here is the function I am trying to create:</p>
<pre><code>from pyspark.sql.functions import col, substr
def CreateBloombergSymbol(pctym):
digitMonth = pctym[6:2:1]
match digitMonth:
case "01":
charMonth = "F"
case "02":
charMonth = "G"
case "03":
charMonth = "H"
case "04":
charMonth = "J"
case "05":
charMonth = "K"
case "06":
charMonth = "M"
case "07":
charMonth = "N"
case "08":
charMonth = "Q"
case "09":
charMonth = "U"
case "10":
charMonth = "V"
case "11":
charMonth = "X"
case "12":
charMonth = "Z"
return charMonth
</code></pre>
<p>Then I save it to a variable, and lastly this is how I am trying to implement it:</p>
<pre><code>varMyBloombergFunction = spark.udf.register("CreateBloombergSymbol", CreateBloombergSymbol)
from pyspark.sql.functions import col
dfPos.select(
col('*'),
varMyBloombergFunction(col('pctym')).alias('mynewcolFuncBB')
).display()
</code></pre>
<p>this is the error I am receiving, any idea what I am doing wrong?</p>
<pre><code>nboundLocalError: local variable 'charMonth' referenced before assignmen
</code></pre>
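Two problems combine here: `pctym[6:2:1]` is an empty slice (start after stop with a positive step), so no `case` arm matches, and the `match` lacks a default arm, leaving `charMonth` unassigned when the UDF runs; hence the UnboundLocalError. A defensive sketch using a dict lookup (the `[-2:]` month position is an assumption; point the slice at wherever the month digits actually sit in `pctym`):

```python
# Futures month codes keyed by two-digit month; .get avoids the
# referenced-before-assignment failure when the slice is wrong.
MONTH_CODES = {
    "01": "F", "02": "G", "03": "H", "04": "J", "05": "K", "06": "M",
    "07": "N", "08": "Q", "09": "U", "10": "V", "11": "X", "12": "Z",
}

def create_bloomberg_symbol(pctym: str) -> str:
    # Assumed: the month is the last two characters of pctym.
    return MONTH_CODES.get(pctym[-2:], "?")

assert create_bloomberg_symbol("202403") == "H"
assert create_bloomberg_symbol("bad") == "?"
```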
|
<python><match><databricks>
|
2024-01-25 18:52:16
| 1
| 1,277
|
solarissf
|
77,882,297
| 1,028,270
|
How does class polymorphism work with pydantic models?
|
<p>I have two classes that inherit from the same parent:</p>
<pre><code>class ChildOne(BaseSettings):
model_config = SettingsConfigDict(
extra="allow",
)
...
class ChildTwo(BaseSettings):
model_config = SettingsConfigDict(
extra="forbid",
)
...
</code></pre>
<p>I should be able to pass ChildOne and ChildTwo to anything that accepts BaseSettings right?</p>
<p>I cannot figure out how to get this to work and satisfy strict type checking.</p>
<p>I tried this, but the cast wipes out model_config set for the child and I'm still getting the two issues commented above the return statement</p>
<pre><code>def my_func(model: BaseSettings) -> BaseSettings:
# Object of type "BaseSettings" is not callable
# Return type is unknown
return model()
# This cast seems to completely wipe out model_config
my_func(cast(ChildOne, BaseSettings))
my_func(cast(ChildTwo, BaseSettings))
</code></pre>
<h1>Edit</h1>
<p>After some googling this works, but I'm still not 100% why or if there is a cleaner way:</p>
<pre><code>C = TypeVar("C", bound=BaseSettings)
def my_func(model: Type[C]) -> BaseSettings:
</code></pre>
<p>Do I have to explicitly tell the type checker that I want to accept any subclass of BaseSettings because it doesn't allow that automatically? Isn't that how basic class polymorphism is supposed to work?</p>
<p>This ^ also doesn't really work because when I try to access fields of <code>C</code> inside <code>my_func</code> I get:</p>
<pre><code>Cannot access member "<Some field of ChildTwo|ChildOne>" for type "BaseSettings"
Member "<Some field of ChildTwo|ChildOne>" is unknown
</code></pre>
<h1>Edit</h1>
<p>I wasn't using generics correctly, I need to do:</p>
<pre><code>C = TypeVar("C", bound=BaseSettings)
# Returns same type passed in, not BaseSettings type
def my_func(model: Type[C]) -> C:
</code></pre>
<p>I'm still confused as to why I need to use generics in the first place. Would I need to do something like this in C#/Java as well?</p>
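Subtype polymorphism does work; the subtlety is that `my_func` receives a class object, not an instance, so the parameter must be `Type[...]`, and the bound `TypeVar` is what tells the checker the return type is the same subclass that was passed in. C# and Java generics solve the identical problem with `where T : Base` / `<T extends Base>`. A pydantic-free sketch of the pattern:

```python
from typing import Type, TypeVar

class Base:
    shared: int = 0

class ChildOne(Base):
    extra: str = "one"

B = TypeVar("B", bound=Base)

def build(model_cls: Type[B]) -> B:
    # Accepts any subclass of Base; the TypeVar preserves the concrete
    # subclass, so fields declared only on the child still type-check.
    return model_cls()

child = build(ChildOne)
assert child.extra == "one"
```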
|
<python><python-typing><pydantic>
|
2024-01-25 18:49:34
| 1
| 32,280
|
red888
|
77,882,214
| 1,949,081
|
Airflow DAG - Not able to parse {{ ds }}
|
<p>I have an Airflow DAG and I use {{ ds }} to get the logical date. As per the Airflow documentation, the template {{ ds }} returns the logical date as a string in YYYY-MM-DD format. So I am using the following code to manipulate the dates:</p>
<pre><code>(datetime.strptime('{{ dag_run.logical_date|ds }}', '%Y-%m-%d') - timedelta(3)).strftime('%Y-%m-%d')
</code></pre>
<p>but I am getting the following error:</p>
<pre><code>
Broken DAG: [/usr/local/airflow/dags/custom_dags/feature_store_daily.py] Traceback (most recent call last):
File "/usr/lib/python3.10/_strptime.py", line 568, in _strptime_datetime
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
File "/usr/lib/python3.10/_strptime.py", line 349, in _strptime
raise ValueError("time data %r does not match format %r" %
ValueError: time data '{{dag_run.logical_date|ds}}' does not match format '%Y-%m-%d'
</code></pre>
<p>I am not able to figure out why I am getting this error.</p>
<p>Please advise.</p>
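The error happens because the string manipulation runs at DAG-parse time, before Jinja has rendered `{{ ds }}`, so strptime literally receives the template text. The usual fix is to keep the whole expression inside the template so Airflow evaluates it at run time, for example with the built-in `macros.ds_add`; a minimal sketch of the templated field:

```python
# Rendered by Airflow at task run time, not at parse time; ds_add shifts
# a YYYY-MM-DD date string by a number of days.
date_minus_3 = "{{ macros.ds_add(ds, -3) }}"
```

Pass `date_minus_3` into a templated operator field; Python-level `datetime` arithmetic on the raw template string can never work, because rendering has not happened yet when the DAG file is parsed.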
|
<python><airflow-2.x><mwaa>
|
2024-01-25 18:31:49
| 1
| 5,528
|
slysid
|
77,882,019
| 19,198,552
|
How can I switch to the tkinter text widget shortcuts defined in the TCL documentation when working under Win11?
|
<p>As I found in the <a href="https://tcl.tk/man/tcl8.5/TkCmd/text.htm" rel="nofollow noreferrer">TCL documentation</a>, Control-a is a shortcut for moving the cursor to the beginning of the line (which works on Linux). But when I use this shortcut on Windows 11 it selects all the text in the text widget. There are many other differences in shortcuts between Linux and Windows. I now have the problem that my tkinter application needs different shortcuts on Linux and on Windows.</p>
<p>I expected the shortcuts to be the same.</p>
|
<python><tkinter><keyboard-shortcuts><text-widget>
|
2024-01-25 17:55:29
| 1
| 729
|
Matthias Schweikart
|
77,881,945
| 23,002,898
|
In Django, when I compare two HTML texts, how can I remove the blank rows so that the comparison is positive despite the blank lines?
|
<p>I'm comparing two texts with Django. Precisely, I am comparing two HTML documents using JSON and the <a href="https://docs.python.org/3/library/difflib.html" rel="nofollow noreferrer">difflib</a> module. The two files compared are <code>user_input.html</code> and <code>source_compare.html</code>; they are compared with the <code>function_comparison</code> function in views.py.</p>
<p><a href="https://i.sstatic.net/LMA1V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LMA1V.png" alt="enter image description here" /></a></p>
<p><strong>PROBLEM:</strong> The comparison works correctly if there are no blank rows, but the problem is that if there is a space between the rows (1 or even more rows), then the two html files are different, so i get the preconfigured message <code>"No, it is not the same!"</code>.</p>
<p><strong>WHAT WOULD I WANT:</strong> I would like the comparison to be correct (so I get the message <code>"Yes, it is the same!"</code>) if there are empty rows (1 or even more rows), so it's as if I want to "ignore" the blank rows.</p>
<p><strong>EXAMPLE:</strong> I'll show you an example. I would like the comparison to be the same, if for example I have this html in the two files (note the spaces between the rows in <code>user_input.html</code>):</p>
<p>In <code>source_compare</code>.html file:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1 class="heading">This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
</code></pre>
<p>In textbox on home page (<code>user_input.html</code> file):
</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1 class="heading">This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
</code></pre>
<p>I would like the comparison to be positive and results the same (I get the message <code>"Yes, it is the same!"</code>) despite one html file having the space between the lines and the other file not having the space. Currently the two files are different in the comparison, so I get the message <code>"No, it is not the same!"</code>
</p>
<p>Here is the code:</p>
<p><strong>index.html</strong></p>
<pre><code>{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>University</title>
<link rel="stylesheet" href ="{% static 'css/style.css' %}" type="text/css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.7.0/jquery.min.js"></script>
</head>
<body>
<div class="test">
<div>Input User</div>
<div class="editor">
<pre class="editor-lines"></pre>
<div class="editor-area">
<pre class="editor-highlight"><code class="language-html"></code></pre>
<textarea
class="editor-textarea"
id="userinput"
data-lang="html"
spellcheck="false"
autocorrect="off"
autocapitalize="off">
&lt;!DOCTYPE html>
&lt;html>
&lt;head>
&lt;title>Page Title&lt;/title>
&lt;/head>
&lt;body>
&lt;h1 class="heading">This is a Heading&lt;/h1>
&lt;p>This is a paragraph.&lt;/p>
&lt;/body>
&lt;/html>
</textarea>
</div>
</div>
</div>
<button type="submit" onclick="getFormData();">Button</button>
<br><br>
<div>Comparison Result</div>
<div class="result row2 rowstyle2" id="result">
{% comment %} Ajax innerHTML result {% endcomment %}
</div>
</div>
{% comment %} script to disable "want to send form again" popup {% endcomment %}
<script>
if ( window.history.replaceState ) {
window.history.replaceState( null, null, window.location.href );
}
</script>
<script>
function getFormData() {
$.ajax({
type:"GET",
url: "{% url 'function_comparison' %}",
data:{
// con Div: "formData": document.getElementById("userinput").innerText
"formData": document.getElementById("userinput").value
},
success: function (response) {
document.getElementById("result").innerHTML = response.message;
},
error: function (response) {
console.log(response)
}
});
}
</script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/vs2015.css">
</body>
</html>
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from django.http import JsonResponse
from django.shortcuts import render
import difflib
from django.conf import settings
import re
def index(request):
return render(request, "index.html")
def function_comparison(request):
context = {}
if request.method == "GET":
user_form_data = request.GET.get("formData", None)
with open('App1/templates/user_input.html', 'w') as outfile:
outfile.write(user_form_data)
file1 = open('App1/templates/source_compare.html', 'r').readlines()
file2 = open('App1/templates/user_input.html', 'r').readlines()
file1_stripped = []
file2_stripped = []
# FIXED TEXT WHITE SPACE PROBLEM
# re sub here checks for each item in the list, and replace a space, or multiple space depending, with an empty string
for file1_text in file1:
file1_text = re.sub("\s\s+", "", file1_text)
file1_stripped.append(file1_text)
for file2_text in file2:
file2_text = re.sub("\s\s+", "", file2_text)
file2_stripped.append(file2_text)
# check if the last item in the user input's list is an empty line with no additional text and remove it if thats the case.
if file2_stripped[-1] == "":
file2_stripped.pop()
### End - Fixed text white space problem ###
htmlDiffer = difflib.HtmlDiff(linejunk=difflib.IS_LINE_JUNK, charjunk=difflib.IS_CHARACTER_JUNK)
htmldiffs = htmlDiffer.make_file(file1_stripped, file2_stripped, context=True)
if "No Differences Found" in htmldiffs:
context["message"] = "Yes, it is the same!"
if settings.DEBUG:
if "No Differences Found" not in htmldiffs:
context["message"] = htmldiffs
else:
if "No Differences Found" not in htmldiffs:
context["message"] = "No, it is not the same!"
return JsonResponse(context, status=200)
</code></pre>
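A simpler route than the `\s\s+` substitution above is to drop blank (whitespace-only) lines entirely and strip each remaining line before diffing; then any number of empty rows, anywhere in either file, is ignored. A sketch of the helper (names are illustrative):

```python
import difflib

def diff_ignoring_blank_lines(lines_a, lines_b):
    # Strip indentation and discard whitespace-only lines before comparing.
    clean = lambda lines: [ln.strip() for ln in lines if ln.strip()]
    return list(difflib.unified_diff(clean(lines_a), clean(lines_b)))

a = ["<body>\n", "<p>Hi</p>\n", "</body>\n"]
b = ["<body>\n", "\n", "   \n", "  <p>Hi</p>\n", "</body>\n"]
assert diff_ignoring_blank_lines(a, b) == []   # treated as identical
```

In the view, `htmlDiffer.make_file` could be fed the cleaned lists the same way, replacing both the regex loop and the trailing-line pop.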
<p><strong>user_input.html</strong>: will be empty</p>
<p><strong>source_compare.html</strong>: This is the default and will have no gaps between lines</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1 class="heading">This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
</code></pre>
<p><strong>myapp/urls.py</strong></p>
<pre><code>from django.urls import path
from . import views
urlpatterns=[
path('', views.index, name='index'),
path('function_comparison/', views.function_comparison,name="function_comparison"),
]
</code></pre>
<p><strong>project/urls.py</strong></p>
<pre><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('App1.urls')),
]
</code></pre>
<p>For completeness, I also insert the minimal CSS:</p>
<p><strong>style.css</strong></p>
<pre><code>*,
*::after,
*::before {
margin: 0;
box-sizing: border-box;
}
.rowstyle1 {
background-color: black;
color: white;
}
.row2 {
margin-top: 20px;
width: 100%;
}
.rowstyle2 {
background-color: #ededed;;
color: black;
}
/* Code Editor per Highlightjs */
/* Scrollbars */
::-webkit-scrollbar {
width: 5px;
height: 5px;
}
::-webkit-scrollbar-track {
background: rgba(0, 0, 0, 0.1);
border-radius: 0px;
}
::-webkit-scrollbar-thumb {
background-color: rgba(255, 255, 255, 0.3);
border-radius: 1rem;
}
.editor {
--pad: 0.5rem;
display: flex;
overflow: auto;
background: #1e1e1e;
height: 100%;
width: 100%;
padding-left: 4px;
}
.editor-area {
position: relative;
padding: var(--pad);
height: max-content;
min-height: 100%;
width: 100%;
border-left: 1px solid hsl(0 100% 100% / 0.08);
}
.editor-highlight code,
.editor-textarea {
padding: 0rem !important;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: transparent;
outline: 0;
}
.editor-highlight code,
.editor-textarea,
.editor-lines {
white-space: pre-wrap;
font: normal normal 14px/1.4 monospace;
}
.editor-textarea {
display: block;
position: relative;
overflow: hidden;
resize: none;
width: 100%;
color: white;
height: 250px;
caret-color: hsl(50, 75%, 70%); /* But keep caret visible */
border: 0;
&:focus {
outline: transparent;
}
&::selection {
background: hsla(0, 100%, 75%, 0.2);
}
}
.editor-highlight {
position: absolute;
left: var(--pad);
right: var(--pad);
user-select: none;
margin-bottom: 0;
min-width: 0;
}
.editor-lines {
display: flex;
flex-direction: column;
text-align: right;
height: max-content;
min-height: 100%;
color: hsl(0 100% 100% / 0.6);
padding: var(--pad); /* use the same padding as .hilite */
overflow: visible !important;
background: hsl(0 100% 100% / 0.05);
margin-bottom: 0;
& span {
counter-increment: linenumber;
&::before {
content: counter(linenumber);
}
}
}
/* highlight.js customizations: */
.hljs {
background: none;
}
</code></pre>
|
<python><python-3.x><django><django-rest-framework><django-views>
|
2024-01-25 17:42:38
| 1
| 307
|
Nodigap
|
77,881,821
| 639,676
|
How is it possible that jax vmap returns a non-iterable?
|
<pre><code>import jax
import pgx
from jax import vmap, jit
import jax.numpy as jnp
env = pgx.make("tic_tac_toe")
key = jax.random.PRNGKey(42)
states = jax.jit(vmap(env.init))(jax.random.split(key, 4))
type(states)
</code></pre>
<p><code>states</code> has the type <code>pgx.tic_tac_toe.State</code>. I was expecting an iterable object of size 4. Somehow the iterable results are inside <code>pgx.tic_tac_toe.State</code>.</p>
<p>Can you please explain how it is possible that jax vmap returns a non-iterable?</p>
<p>How to force vmap to return the next result:</p>
<pre><code>states = [env.init(key) for key in jax.random.split(key, 4)]
</code></pre>
<p>Note, this code works as expected:</p>
<pre><code>def square(x):
return x ** 2
inputs = jnp.array([1, 2, 3, 4])
result = jax.vmap(square)(inputs)
print(result) # list object
</code></pre>
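`vmap` maps over the leaves of a pytree, so it returns a single `State` whose array fields carry a leading batch axis (a struct of arrays) rather than a list of `State`s; the scalar `square` example only looks different because its output pytree is a bare array. Per-item states can be recovered by indexing every leaf, e.g. `jax.tree_util.tree_map(lambda x: x[i], states)`. A dependency-free sketch of the idea:

```python
from dataclasses import dataclass

@dataclass
class State:                 # stand-in for pgx.tic_tac_toe.State
    board: list              # stand-in for an array with a leading batch axis

# What vmap hands back: ONE State holding batched fields.
batched = State(board=[[0] * 9, [1] * 9, [2] * 9, [3] * 9])

# "Unbatching" = indexing every leaf, i.e. tree_map(lambda x: x[i], batched):
per_item = [State(board=batched.board[i]) for i in range(4)]
assert len(per_item) == 4
assert per_item[2].board == [2] * 9
```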
|
<python><jax>
|
2024-01-25 17:18:31
| 1
| 4,143
|
Oleg Dats
|
77,881,795
| 23,260,297
|
Make a list comprehension return a single value instead of a list
|
<p>I have a column in a dataframe that consists of tuples shaped like this:</p>
<pre><code>Prices
(0, 30.00)
(0, 20.00)
(0, 33.00)
(-12.00, 0)
(-13.00, 0)
(0,0)
</code></pre>
<p>I need to grab the non-zero value from each tuple, if one exists, and replace the tuple with that value as a float in the column.</p>
<p>I used a list comprehension, but it returns the value as a list instead of a float. Is there a better way to accomplish this, or an easy way to convert the list (of length 1) to a float?</p>
<p>I am using python pandas</p>
<pre><code>df['Prices'] = [[next((val for val in ls if val != 0), 0)] for ls in df['Prices'].values]
</code></pre>
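The result is a list only because of the extra square brackets wrapped around `next(...)`, which make each comprehension element a one-item list. Removing them (and casting to float) yields scalars directly; a stdlib sketch of the same logic, which in pandas could be applied as `df['Prices'].map(...)`:

```python
prices = [(0, 30.0), (0, 20.0), (0, 33.0), (-12.0, 0), (-13.0, 0), (0, 0)]

# No surrounding brackets: each element is the first non-zero value (or 0.0).
flat = [float(next((v for v in tup if v != 0), 0)) for tup in prices]
assert flat == [30.0, 20.0, 33.0, -12.0, -13.0, 0.0]
```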
|
<python><pandas>
|
2024-01-25 17:15:13
| 3
| 2,185
|
iBeMeltin
|
77,881,766
| 3,419,895
|
is numba a good choice to create a program that reads from a peripheral and calls Python-wrapped C code?
|
<p>First of all, I have never used numba, so my questions may have basic answers.</p>
<p>I would like to write a program that reads from a peripheral device and processes data in real time. The problem is not the amount of data; rather, will numba address these questions, or should I use good old C++/Rust:</p>
<ol>
<li>I will call Python wrappers to code in C (drivers, interacting in real time 10 times per second), so this code will not be compiled by numba; even so, will numba create overhead because of it?</li>
<li>In processing I use a lot of math like FFT. The problem is that I will also add a GUI to my app and I would like to have at least 10 FPS; here I would also call wrappers to C/C++ code (Dear ImGui; other options are welcome). Is it possible to compile these with numba?</li>
</ol>
<p>With C++/Rust I can achieve this easily, but is it achievable with numba in Python?</p>
|
<python><performance><numba>
|
2024-01-25 17:10:30
| 0
| 564
|
quester
|
77,881,712
| 8,942,319
|
TypedDict, illegal target for annotation when using attribute of another TypedDict in definition
|
<p>I have 2 TypedDicts and in the latter I want to use an attribute of the former in the definition.</p>
<pre><code>class Detail(TypedDict):
id: Optional[str]
to_date: Optional[str]
from_date: Optional[str]
class Overview(TypedDict):
id: Optional[str]
'Detail.to_date': Optional[str]
</code></pre>
<p>The code only runs via an import statement, so there is nothing further about it worth detailing for the question.</p>
<p>Error:</p>
<pre><code>'Detail.to_date': Optional[str]
^
SyntaxError: illegal target for annotation
</code></pre>
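Dotted names can never be targets of a class-body annotation, so only the functional form works for such keys. A sketch (at runtime a TypedDict value is a plain dict, so the dotted key is read by subscripting):

```python
from typing import Optional, TypedDict

# Functional syntax accepts keys that are not valid identifiers:
Overview = TypedDict(
    "Overview",
    {"id": Optional[str], "Detail.to_date": Optional[str]},
)

o: Overview = {"id": "42", "Detail.to_date": "2024-01-25"}
assert o["Detail.to_date"] == "2024-01-25"
```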
|
<python>
|
2024-01-25 17:01:24
| 1
| 913
|
sam
|
77,881,635
| 7,554,826
|
logging training and validation loss per epoch using huggingface trainer in pytorch to assess bias-variance tradeoff
|
<p>I'm fine-tuning a transformer model for text classification in Pytorch using huggingface Trainer. I would like to log both the training and the validation loss for each epoch of training. This is so that I can assess when the model starts to overfit to the training data (i.e. the point at which training loss keeps decreasing, but validation loss is stable or increasing, the bias-variance tradeoff).</p>
<p>Here are my training arguments of the huggingface trainer:</p>
<pre><code>training_arguments = TrainingArguments(
output_dir = os.path.join(MODEL_DIR, f'{TODAYS_DATE}_multicls_cls'),
run_name = f'{TODAYS_DATE}_multicls_cls',
overwrite_output_dir=True,
evaluation_strategy='epoch',
save_strategy='epoch',
num_train_epochs=7.0,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
optim='adamw_torch',
learning_rate=LEARNING_RATE
</code></pre>
<p>My training arguments of the huggingface trainer are set to evaluate every epoch, as desired, but my training loss is computed every 500 steps by default. You can see this in the log history of <code>trainer.state</code> after training:</p>
<pre><code>{'eval_loss': 6.346338748931885, 'eval_f1': 0.2146690518783542, 'eval_runtime': 1.2777, 'eval_samples_per_second': 31.306, 'eval_steps_per_second': 31.306, 'epoch': 1.0, 'step': 160}
{'eval_loss': 5.505970001220703, 'eval_f1': 0.23817863397548159, 'eval_runtime': 1.5768, 'eval_samples_per_second': 25.367, 'eval_steps_per_second': 25.367, 'epoch': 2.0, 'step': 320}
{'eval_loss': 5.21959114074707, 'eval_f1': 0.2233676975945017, 'eval_runtime': 1.3016, 'eval_samples_per_second': 30.732, 'eval_steps_per_second': 30.732, 'epoch': 3.0, 'step': 480}
{'loss': 6.1108, 'learning_rate': 2.767857142857143e-05, 'epoch': 3.12, 'step': 500}
{'eval_loss': 5.014569282531738, 'eval_f1': 0.24625623960066553, 'eval_runtime': 1.3961, 'eval_samples_per_second': 28.652, 'eval_steps_per_second': 28.652, 'epoch': 4.0, 'step': 640}
{'eval_loss': 5.090881824493408, 'eval_f1': 0.2212643678160919, 'eval_runtime': 1.2708, 'eval_samples_per_second': 31.477, 'eval_steps_per_second': 31.477, 'epoch': 5.0, 'step': 800}
{'eval_loss': 4.950728416442871, 'eval_f1': 0.23750000000000002, 'eval_runtime': 1.298, 'eval_samples_per_second': 30.816, 'eval_steps_per_second': 30.816, 'epoch': 6.0, 'step': 960}
{'loss': 3.8989, 'learning_rate': 5.357142857142857e-06, 'epoch': 6.25, 'step': 1000}
{'eval_loss': 4.940125465393066, 'eval_f1': 0.24444444444444444, 'eval_runtime': 1.4609, 'eval_samples_per_second': 27.38, 'eval_steps_per_second': 27.38, 'epoch': 7.0, 'step': 1120}
{'train_runtime': 80.7323, 'train_samples_per_second': 13.873, 'train_steps_per_second': 13.873, 'total_flos': 73700199874560.0, 'train_loss': 4.81386468069894, 'epoch': 7.0, 'step': 1120}
</code></pre>
<p>How can I set the training arguments to log the training loss every epoch, just like my validation loss? There is no equivalent parameter to <code>evaluation_strategy=epoch</code> for training in training arguments.</p>
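In current transformers versions, `TrainingArguments` does expose the equivalent knob: `logging_strategy` controls when the training loss is logged, mirroring `evaluation_strategy`. A fragment sketch (only the relevant lines shown; not runnable outside a transformers environment):

```python
training_arguments = TrainingArguments(
    output_dir="...",
    evaluation_strategy="epoch",  # validation loss once per epoch
    logging_strategy="epoch",     # training loss once per epoch instead of every 500 steps
    save_strategy="epoch",
)
```

With both strategies set to "epoch", each epoch's entry in `trainer.state.log_history` carries a matched pair of `loss` and `eval_loss`, which is what the bias-variance comparison needs.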
|
<python><pytorch><loss><huggingface-trainer>
|
2024-01-25 16:48:10
| 1
| 570
|
Des Grieux
|
77,881,453
| 217,332
|
Interactions between asyncio, pytest, and SQLAlchemy
|
<p>I have a Python 3.8 program which runs a coroutine that loops infinitely, scheduling other coroutines in the background via <code>create_task</code> but not awaiting them. These coroutines use SQLAlchemy's <code>AsyncEngine</code> to write some data to a database. If an exception is raised, the handler cancels the looping coroutine and then awaits the remaining background tasks via <code>gather</code>. This all seems to work fine.</p>
<p>Stripped-down example:</p>
<pre><code>async def write_loop():
async_engine: AsyncEngine = get_async_engine()
while True:
data = await get_some_data()
bg_task: Task = asyncio.create_task(write_to_db(async_engine, data))
outstanding_tasks.add(bg_task)
bg_task.add_done_callback(outstanding_tasks.discard)
async def main():
try:
await write_loop()
except:
await asyncio.gather(*outstanding_tasks)
</code></pre>
<p>I'm using <code>pytest</code> and <code>pytest-asyncio</code> to implement some test cases exercising the above. The exception is forced via some patching, so the line with <code>gather</code> is the last thing that should happen. Each test case is decorated with <code>@pytest.mark.asyncio</code> to ensure that none share an event loop. They all pass when run individually. When running multiple at the same time, the SQLAlchemy writes would fail with <code>cannot perform operation: another operation is in progress</code>. Googling told me that I needed to turn off connection pooling in test, which seems to imply that the different tests are somehow sharing an <code>AsyncEngine</code> (or at least the underlying connection pool?) despite each creating its own.</p>
<p>Having done that, I no longer get the above error. However, now if I run too many tests at once, at least one times out (no matter how high I set the timeout). I can see that the point where it hangs is <code>await asyncio.gather(*outstanding_tasks)</code>. If I print the remaining tasks before calling <code>gather</code>, I see that they are in the <code>pending</code> state and have <code>wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object]></code> (in addition to the callback that I attached).</p>
<p>I have the following questions:</p>
<ul>
<li>Why are the tests sharing anything in the way of the <code>AsyncEngine</code>? Does this have to do with the fact (I think) that they run in the same process?</li>
<li>What does it mean that, when enough tests are run, the last few tasks never seem to get scheduled? My only guess was that they needed extra time, but I think I've disproven that with high timeouts.</li>
</ul>
<p>In case it matters, the pytest functions are also parameterized such that each will be run multiple times with different arguments.</p>
<p>Edit: I should also mention that this behavior is not deterministic. Occasionally all tests pass.</p>
|
<python><async-await><sqlalchemy><pytest><python-asyncio>
|
2024-01-25 16:19:36
| 1
| 83,780
|
danben
|
77,881,432
| 5,998,186
|
End a nested loop in Python
|
<p>We want to make a package of goal kilos of chocolate.
We have small bars (1 kilo each) and big bars (5 kilos each).
Return the number of small bars to use, assuming we always use big bars before small bars.
Return -1 if it can't be done.</p>
<pre><code>def make_chocolate(small, big, goal):
for i in range(1,big+1):
for j in range(1,small+1):
if sum(list((i*5,j*1))) == goal:
return i
else:
return -1
make_chocolate(2,2,12)
</code></pre>
<p>The text above says it all. However, the function does not give the correct output: I always get -1. How can I fix this?</p>
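For reference, once the bugs are fixed the brute-force loops are unnecessary: greedily use as many big bars as fit (capped at `big`), then check whether the remainder fits in `small`. Note the original returns `i` (a count of big bars) where the problem asks for small bars, and its inner `else: return -1` bails out on the very first combination tried. A sketch:

```python
def make_chocolate(small, big, goal):
    use_big = min(big, goal // 5)        # big bars first
    remainder = goal - use_big * 5
    return remainder if remainder <= small else -1

assert make_chocolate(2, 2, 12) == 2     # two big bars + two small bars
assert make_chocolate(4, 1, 9) == 4
assert make_chocolate(4, 1, 10) == -1
```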
|
<python><function><loops>
|
2024-01-25 16:16:20
| 2
| 343
|
Hadsga
|
77,881,025
| 10,145,953
|
Detect presence of a signature opencv
|
<p>I have an image of a form which contains signatures and I need an automated method of determining if a signature is present in the image. I do not need to match the signature, I just need a basic Boolean response of "yes there is a signature in this image" or "there is no signature in this image." A lot of the resources I've found are used for signature matching (not what I need, as signatures will be different every time) or are just not working.</p>
<p>My basic workflow is I load in the image of the form, crop and grayscale then binarize the image and then use pixel coordinates to crop to specific regions on the form. Below are two examples of cropped images from regions that should contain a signature. The first crop (<code>signature_1a</code>) contains a signature, the second crop (<code>signature_2b</code>) does not.</p>
<p><a href="https://i.sstatic.net/0bgAM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bgAM.png" alt="cropped image containing signature" /></a></p>
<p><a href="https://i.sstatic.net/btIyt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/btIyt.png" alt="cropped image without a signature" /></a></p>
<p>First I attempted to use <code>pytesseract.image_to_string()</code>, but when I checked whether its output was <code>!= None</code> or <code>== ''</code>, there was no differentiation between the two images.</p>
<p>Next I attempted to count non-white pixels, assuming that the image with the signature would have > 0 non-white pixels while the image without a signature would have 0 but that was not the case.</p>
<pre><code>signature_2b = cropped_images['Section 2']['2b']
img = cv2.cvtColor(signature_2b, cv2.COLOR_BGR2GRAY)
n_white_pix = np.sum(img > 0)
print('Number of non-white pixels:', n_white_pix) # expect it to be 0
Output: Number of non-white pixels: 225663
</code></pre>
<p>I also tried blob detection but that was also a bust, as it did not detect any blobs in the output image.</p>
<pre><code># Read image
im = cv2.imread("sig_sample.png", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector_create()
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
</code></pre>
<p>What can I do to just detect if an image contains a signature or not?</p>
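One direction I have been considering (sketched here with NumPy only; the threshold values are assumptions that would need tuning on real form crops) is to count "ink" pixels in the grayscale crop and compare the ink fraction against a cutoff, rather than counting non-zero pixels:

```python
import numpy as np

def has_signature(gray, ink_threshold=128, min_ink_fraction=0.01):
    """Guess whether a grayscale crop contains handwriting.

    gray: 2-D uint8 array (0 = black, 255 = white).
    Counts 'ink' pixels (darker than ink_threshold) and compares
    the fraction of the crop they cover against a tuned cutoff.
    """
    ink = np.count_nonzero(gray < ink_threshold)
    return ink / gray.size >= min_ink_fraction

# synthetic stand-ins for the two crops above
blank = np.full((50, 200), 255, dtype=np.uint8)   # all white, no signature
signed = blank.copy()
signed[20:30, 40:160] = 0                          # fake pen stroke
```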
|
<python><opencv><image-processing><object-detection>
|
2024-01-25 15:14:33
| 0
| 883
|
carousallie
|
77,880,853
| 3,952,274
|
SQLAlchemy: IntegrityError on cascade delete for children with siblings
|
<p>I have the following ORM (shortend excerpt):</p>
<pre><code>class Base(DeclarativeBase):
pass
class Resource:
resource_id: Mapped[UUID] = mapped_column(
Uuid,
default=uuid.uuid4,
primary_key=True,
index=True,
)
created: Mapped[datetime] = mapped_column(DateTime, server_default=func.now())
updated: Mapped[datetime] = mapped_column(DateTime, server_default=func.now())
class Chat(Base, Resource):
__tablename__ = "chats"
messages = relationship(
"Message",
back_populates="chat",
cascade="all, delete",
order_by="Message.created",
)
class Message(Base, Resource):
__tablename__ = "messages"
previous_message_id: Mapped[UUID] = mapped_column(
Uuid, ForeignKey("messages.resource_id"), nullable=True
)
chat_id: Mapped[UUID] = mapped_column(Uuid, ForeignKey("chats.resource_id"))
chat = relationship("Chat", back_populates="messages")
</code></pre>
<p>When I try to delete a Chat from the database (Postgres), I get the following IntegrityError</p>
<blockquote>
<p>sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) update or delete on table "messages" violates foreign key constraint "messages_previous_message_id_fkey" on table "messages"</p>
<p>DETAIL: Key (resource_id)=(4d57d1eb-eb07-4983-a320-863f5529b1e5) is still referenced from table "messages".</p>
</blockquote>
<p>I get that there is a circular dependency and that it tries to delete a message that is referenced by another one, but I was wondering if I can force it to delete the messages in a specific order, since there is a tree of messages and deleting them in order of creation from newest to oldest wouldn't violate any constraints.</p>
<p>Or is it possible to somehow instruct it to ignore the constraint? I'm not sure if <code>delete-orphan</code> is helpful here or even how to apply it. I don't want to remove the foreign key constraint on <code>previous_message_id</code> because I like the validation that comes with it.</p>
<p>Thanks in advance.</p>
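To illustrate why ordering would help (using stdlib sqlite3 as a stand-in for Postgres, with hypothetical column names): deleting the message chain newest-first never leaves a dangling reference, whereas oldest-first trips the self-referencing foreign key. In SQLAlchemy terms, the equivalent would be deleting `chat.messages` in reverse `Message.created` order before deleting the chat:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite needs FK enforcement enabled
con.execute(
    "CREATE TABLE messages ("
    "id INTEGER PRIMARY KEY, "
    "prev_id INTEGER REFERENCES messages(id), "
    "created INTEGER)"
)
con.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [(1, None, 100), (2, 1, 200), (3, 2, 300)],
)

# deleting the oldest message first violates the self-referencing FK
fk_violation = False
try:
    con.execute("DELETE FROM messages WHERE id = 1")
except sqlite3.IntegrityError:
    fk_violation = True  # message 1 is still referenced by message 2

# deleting newest-first succeeds
for (msg_id,) in con.execute(
    "SELECT id FROM messages ORDER BY created DESC"
).fetchall():
    con.execute("DELETE FROM messages WHERE id = ?", (msg_id,))
remaining = con.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
```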
|
<python><sqlalchemy><orm>
|
2024-01-25 14:46:00
| 1
| 644
|
thomas
|
77,880,761
| 20,591,261
|
SQLAlchemy CASE Function Issue
|
<p>I have been trying a lot to understand the issue with my code, but I keep struggling with the CASE function. I can't see any typo, so any advice would be really appreciated.</p>
<p>I want to do the following:</p>
<pre><code>jan_case = case([
(and_(logweb_cold.c.FECHA >= '2023-01-01 00:00:00', logweb_cold.c.FECHA < '2023-02-01 00:00:00'), 1)
], else_=0)
feb_case = case([
(and_(logweb_cold.c.FECHA >= '2023-02-01 00:00:00', logweb_cold.c.FECHA < '2023-03-01 00:00:00'), 1)
], else_=0)
# Define the query
query = select(
logweb_cold.c.id,
func.sum(jan_case).label('JanOccurrences'),
func.sum(feb_case).label('FebOccurrences')
).where(
and_(
logweb_cold.c.CODERR == 'OK',
logweb_cold.c.CODTRAN == 'xd',
logweb_cold.c.id.in_([1, 2, 3])
)
).group_by(
logweb_cold.c.id
).order_by(
logweb_cold.c.id
)
</code></pre>
<p>The issue seems to start at the <code>jan_case</code> definition.</p>
|
<python><sqlalchemy>
|
2024-01-25 14:34:55
| 0
| 1,195
|
Simon
|
77,880,757
| 13,147,413
|
Polars filter dataframe with multiple conditions
|
<p>I've got this pandas code:</p>
<pre class="lang-py prettyprint-override"><code>df['date_col'] = pd.to_datetime(df['date_col'], format='%Y-%m-%d')
row['date_col'] = pd.to_datetime(row['date_col'], format='%Y-%m-%d')
df = df[(df['groupby_col'] == row['groupby_col']) &
(row['date_col'] - df['date_col'] <= timedelta(days = 10)) &
(row['date_col'] - df['date_col'] > timedelta(days = 0))]
row['mean_col]' = df['price_col'].mean()
</code></pre>
<p>The name <code>row</code> comes from the fact that this function was applied by a lambda construct.</p>
<p>I'm subsetting df with 2 types of conditions:</p>
<ol>
<li>A condition on value equality on a column named "groupby_col",</li>
<li>Multiple conditions on time ranges based on the "date_col" column, which contains timestamps.</li>
</ol>
<p>I'm pretty sure that <code>filter</code> is the correct module to use:</p>
<pre class="lang-py prettyprint-override"><code>df.filter(condition_1 & condition_2)
</code></pre>
<p>But I'm struggling to write the conditions.</p>
<p>To embed condition 1, do I have to nest a filter condition, or is <code>when</code> the correct choice?
How do I translate the <code>timedelta</code> condition?
How do I replicate the lambda approach?</p>
|
<python><dataframe><python-polars>
|
2024-01-25 14:34:19
| 2
| 881
|
Alessandro Togni
|
77,880,613
| 8,919,749
|
How to construct an axis resulting from a subtraction of two values with matplotlib?
|
<p>From a json file, I retrieve two dates from two distinct fields: <code>endDate</code> and <code>startDate</code>.</p>
<p>I would like to plot on the <code>y</code> axis the difference between these two dates based on another value contained in the json file.</p>
<p>With <code>matplotlib</code> is it possible to calculate the difference between these two dates with code that would look like:</p>
<pre><code>y_axis_date_difference = [i["endDate"]-i["startDate"] for i in data]
</code></pre>
<p>Any ideas?</p>
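One hedged option, assuming the JSON dates are strings: parse both fields with stdlib `datetime` and convert the difference to a number of days, which matplotlib can plot directly (the record layout below is hypothetical):

```python
from datetime import datetime

# hypothetical records mimicking the JSON structure described above
data = [
    {"startDate": "2024-01-01", "endDate": "2024-01-05", "x": 1},
    {"startDate": "2024-01-02", "endDate": "2024-01-10", "x": 2},
]

def parse(d):
    return datetime.strptime(d, "%Y-%m-%d")

# difference in days, ready for e.g. plt.plot(x_values, y_axis_date_difference)
y_axis_date_difference = [
    (parse(i["endDate"]) - parse(i["startDate"])).days for i in data
]
```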
|
<python><date><matplotlib>
|
2024-01-25 14:12:19
| 0
| 380
|
Mamaf
|
77,880,604
| 859,227
|
Changing legend format for float numbers
|
<p>The following sample code from <a href="https://windrose.readthedocs.io/en/latest/usage.html" rel="nofollow noreferrer">Windrose</a> creates a legend for which I want to control the number of decimal digits, since the data are shown as floats.</p>
<pre><code>from windrose import WindroseAxes
from matplotlib import pyplot as plt
import matplotlib.cm as cm
import numpy as np
ws = np.random.random(500) * 6
wd = np.random.random(500) * 360
ax = WindroseAxes.from_ax()
ax.bar(wd, ws, normed=True, opening=0.8, edgecolor='white')
ax.set_legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/8k4XU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8k4XU.png" alt="enter image description here" /></a></p>
<p>How can I do that?</p>
|
<python><matplotlib><windrose>
|
2024-01-25 14:10:57
| 1
| 25,175
|
mahmood
|
77,880,562
| 5,056,347
|
How to convert calculated properties (that involve calling other methods) into Django generated fields?
|
<p>Let's say I have this following model:</p>
<pre class="lang-py prettyprint-override"><code>class Product(models.Model):
price = models.DecimalField(
max_digits=19,
decimal_places=4,
validators=[MinValueValidator(0)],
blank=False,
null=False,
)
expiry_date = models.DateTimeField(
blank=True,
null=True,
)
@property
def price_in_cents(self):
return round(self.price * 100)
@property
def has_expired(self):
if self.expiry_date:
return date.today() > self.expiry_date.date()
else:
return False
</code></pre>
<p>So two fields, and two other properties that rely on these two fields to generate an output. I'm trying to convert these fields into generated fields (for Django 5). My first understanding was simply just putting in the property function in the expression, so something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Product(models.Model):
price = models.DecimalField(
max_digits=19,
decimal_places=4,
validators=[MinValueValidator(0)],
blank=False,
null=False,
)
price_in_cents = models.GeneratedField(
expression=round(F("price") * 100),
output_field=models.BigIntegerField(),
db_persist=True,
)
expiry_date = models.DateTimeField(
blank=True,
null=True,
)
@property
def has_expired(self):
if self.expiry_date:
return date.today() > self.expiry_date.date()
else:
return False
</code></pre>
<p>Unfortunately, this does not seem to work, and I get the following error:</p>
<blockquote>
<p>TypeError: type CombinedExpression doesn't define <strong>round</strong> method</p>
</blockquote>
<p>Going by this, I assume I won't be able to simply convert the second property either. So how would this actually work? Is there a resource to help me convert these Python functions (properties) into valid expressions?</p>
|
<python><django><django-models>
|
2024-01-25 14:02:55
| 2
| 8,874
|
darkhorse
|
77,880,524
| 7,437,143
|
How to safely verify a file contains valid Python syntax in java?
|
<h2>Question</h2>
<p>After automatically modifying some Python comments in Java, I would like to verify the file still contains valid Python syntax, how can I do that from Java, without actually running some Python code using an interpreter? (To be explicit: I am looking for a java-only solution, not a solution that calls some other code from inside Java to compute whether the syntax is valid or not).</p>
<p>I tried building the AST of the file using ANTLR; however, that seems like a non-trivial task for arbitrary Python files, as explained in <a href="https://stackoverflow.com/a/24789100/7437143">this</a> answer. Another suggestion would be to simply try to run the file and see whether it runs; however, that is unsafe for arbitrary files. Alternatively, one could call some Python code from Java that verifies the file is runnable (as shown in <a href="https://stackoverflow.com/a/24771868/7437143">this</a> answer), but that also relies on executing external (controlled) code, which I would prefer not to do.</p>
<h2>MWE</h2>
<p>Below is an MWE that still requires/assumes you have Python installed somewhere in your system:</p>
<pre class="lang-java prettyprint-override"><code>package com.something.is_valid_python_syntax;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.TokenSource;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;
import com.doctestbot.is_valid_python_syntax.generated.PythonParser;
import com.doctestbot.is_valid_python_syntax.generated.PythonLexer;
public class IsValidPythonSyntax {
public static PythonParser getPythonParser(String pythonCode) {
// Create a CharStream from the Python code
CharStream charStream = CharStreams.fromString(pythonCode);
// Create the lexer
PythonLexer lexer = new PythonLexer(charStream);
// Create a token stream from the lexer
CommonTokenStream tokenStream = new CommonTokenStream((TokenSource) lexer);
// Create the parser
return new PythonParser(tokenStream);
}
public static boolean isValidPythonSyntax(String pythonCode) {
PythonParser parser = getPythonParser(pythonCode);
// Parse the input and get the tree
// Redirect standard error stream
PrintStream originalErr = System.err;
ByteArrayOutputStream errStream = new ByteArrayOutputStream();
System.setErr(new PrintStream(errStream));
try {
ParseTree tree = parser.file_input();
} finally {
// Restore the original standard error stream
System.setErr(originalErr);
}
// Check if there were any errors in the error stream
String errorOutput = errStream.toString();
if (!errorOutput.isEmpty()) {
System.out.println("Invalid Python syntax:");
System.out.println(errorOutput);
return false;
} else {
System.out.println("Valid Python syntax");
return true;
}
}
}
</code></pre>
<p>However, that claims that the following Python code is invalid syntax:</p>
<pre class="lang-py prettyprint-override"><code>def foo():
print("hello world.")
foo()
</code></pre>
<p>Based on the following Antlr error message:</p>
<pre><code>Invalid Python syntax:
line 1:3 extraneous input ' ' expecting {'match', '_', NAME}
</code></pre>
<p>Searching this error leads to suggestions on how to adapt the grammar, however, this was autogenerated from <a href="https://github.com/antlr/grammars-v4/tree/master/python/python3_12_1" rel="nofollow noreferrer">the Python 3.12 Antlr grammar</a>.</p>
<h2>Issue</h2>
<p>It seems that the Antlr error message system does not distinguish between warnings and errors, for example, on (missing closing bracket in print statement):</p>
<pre class="lang-py prettyprint-override"><code>def foo():
print("hello world."
foo()
</code></pre>
<p>It outputs:</p>
<pre><code>Invalid Python syntax:
line 1:3 extraneous input ' ' expecting {'match', '_', NAME}
line 2:9 no viable alternative at input '('
line 3:0 no viable alternative at input 'foo'
</code></pre>
<p>I do not know how many different error messages ANTLR can produce when parsing Python code, nor which ones I should take seriously, nor whether a valid/invalid decision based on ANTLR parsing errors is context-dependent or not.</p>
|
<python><java><syntax-checking>
|
2024-01-25 13:57:59
| 2
| 2,887
|
a.t.
|
77,880,286
| 3,647,782
|
Is the order of importing tensorflow and torch relevant?
|
<p>I am using a PyTorch model to run inference on an image, using a standard CNN model.</p>
<p>I have noticed that the order in which I import the <code>tensorflow</code> and the <code>torch</code> packages has an effect on the time the inference is taking.</p>
<p>When importing <code>tensorflow</code> first, the inference takes ~200ms. When importing it after <code>torch</code>, the inference time increases up to 800ms.</p>
<pre><code># import tensorflow # <-- model inference takes 200ms
import torch
# import tensorflow # <-- model inference takes 800ms
</code></pre>
<p>The inference also takes 800ms when <code>tensorflow</code> is not imported at all.</p>
<p>Coming from other programming languages, I assume that the import of the <code>tensorflow</code> package has side effects which affect what the <code>torch</code> import does. Is this assumption correct?</p>
<p>My notebook is running in the context of a custom Docker container based on <code>nvcr.io/nvidia/tensorflow:22.09-tf2-py3</code>. In addition, these packages were installed <code>pip install torch==1.12.1+cu113 torchaudio==0.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113</code></p>
<p>Update: What is really confusing about this is that the <code>tensorflow</code> package is not even used in my code. I simply had it left in my notebook. When I noticed that, I removed the import, which changed the inference duration.</p>
<p>Is this a known behaviour?</p>
|
<python><tensorflow><pytorch>
|
2024-01-25 13:20:56
| 1
| 659
|
FabianTe
|
77,879,969
| 610,569
|
Is there a way to save a pre-compiled AutoTokenizer?
|
<p>Sometimes, we'll have to do something like this to extend a pre-trained tokenizer:</p>
<pre><code>from transformers import AutoTokenizer
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained(
    'moussaKam/frugalscore_tiny_bert-base_bert-score'
)

ds_de = load_dataset("mc4", 'de')
ds_fr = load_dataset("mc4", 'fr')

de_tokenizer = tokenizer.train_new_from_iterator(
    ds_de['text'], vocab_size=50_000
)
fr_tokenizer = tokenizer.train_new_from_iterator(
    ds_fr['text'], vocab_size=50_000
)

new_tokens_de = set(de_tokenizer.vocab).difference(tokenizer.vocab)
new_tokens_fr = set(fr_tokenizer.vocab).difference(tokenizer.vocab)
new_tokens = set(new_tokens_de).union(new_tokens_fr)

tokenizer.add_tokens(list(new_tokens))
tokenizer.save_pretrained('frugalscore_tiny_bert-de-fr')
</code></pre>
<p>And then when loading the tokenizer,</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained(
'frugalscore_tiny_bert-de-fr', local_files_only=True
)
</code></pre>
<p>It takes pretty long to load, as measured with <code>%%time</code> in a Jupyter cell:</p>
<pre><code>CPU times: user 34min 20s
Wall time: 34min 22s
</code></pre>
<p>I guess this is due to regex compilation for the added tokens which was also raised in <a href="https://github.com/huggingface/tokenizers/issues/914" rel="noreferrer">https://github.com/huggingface/tokenizers/issues/914</a></p>
<p>I think it's okay since it'll load once and the work can be done without redoing the regex compiles.</p>
<h3>But, is there a way to just save the tokenizer in binary form and avoid the whole regex compilation the next time?</h3>
|
<python><nlp><tokenize><huggingface><huggingface-tokenizers>
|
2024-01-25 12:26:05
| 1
| 123,325
|
alvas
|
77,879,888
| 991,077
|
Explain Dask-cuDF behavior
|
<p>I am trying to read and process an 8 GB CSV file using <code>cudf</code>. Reading the whole file at once fits neither into GPU memory nor into my RAM. That's why I use the <code>dask_cudf</code> library. Here is the code:</p>
<pre><code>import dask_cudf as dcf
import dask.dataframe as dd
exceptions = ["a", "b", "t", "c"]
x = dcf.read_csv("./data/all.csv", blocksize="256 MiB", sep=',', header=0, decimal='.', skip_blank_lines=True)
x["timestamp"] = dd.to_datetime(x["t"])
x = x.set_index("timestamp", sorted=False)
x = x.loc[pd.to_datetime("2020-01-01 00:00:00"):]
x = x.drop(columns=exceptions)
x.to_csv("./data/")
</code></pre>
<p>And it works and does produce a bunch of csv files like <code>00.part</code>, <code>01.part</code> and so on. While processing the data I can see in the task manager that the GPU memory is being used. But in the project root directory I can also see a <code>cufile.log</code>, which says this:</p>
<blockquote>
<p>25-01-2024 12:18:54:703 [pid=11847 tid=11862] ERROR 0:140 unable to
load, liburcu-bp.so.6</p>
<p>25-01-2024 12:18:54:703 [pid=11847 tid=11862] ERROR 0:140 unable to
load, liburcu-bp.so.1</p>
<p>25-01-2024 12:18:54:704 [pid=11847 tid=11862] WARN 0:168 failed to
open /proc/driver/nvidia-fs/devcount error: No such file or directory</p>
<p>25-01-2024 12:18:54:704 [pid=11847 tid=11862] NOTICE cufio-drv:727
running in compatible mode</p>
<p>25-01-2024 12:18:54:704 [pid=11847 tid=11862] ERROR cufio-plat:98
cannot open path /sys/bus/pci/devices/0000:01:00.0/resource No such
file or directory</p>
<p>25-01-2024 12:18:54:705 [pid=11847 tid=11862] WARN cufio-plat:431
GPU index 0 NVIDIA GeForce GTX 1060: Model Not Supported</p>
<p>25-01-2024 12:18:54:705 [pid=11847 tid=11862] WARN cufio-udev:168
failed in udev device create: /sys/bus/pci/devices/0000:01:00.0</p>
<p>25-01-2024 12:18:54:899 [pid=11847 tid=11862] ERROR
cufio-topo-udev:431 no device entries present in platform topology</p>
</blockquote>
<p>I have</p>
<pre><code>Windows 10 22H2 (19045.3930)
WSL 2.0.9.0 with Ubuntu 22.04.3 LTS
Cuda compilation tools, release 12.0, V12.0.76 Build cuda_12.0.r12.0/compiler.31968024_0 (from windows console)
NVIDIA-SMI 546.65 (from wsl distro terminal)
Driver Version: 546.65 (from wsl distro terminal)
NVIDIA GeForce GTX 1060 6144MiB
cudf 23.12.01
dask-cudf 23.12.01
</code></pre>
<p>While doing all this work, the CPU is also heavily loaded. So does it actually use my GPU, or is the CPU used as a fallback strategy? Processing this code took 16 minutes. The log says that it was running in some kind of compatibility mode and that my GPU is not supported. The <a href="https://docs.rapids.ai/install#system-req" rel="nofollow noreferrer">docs</a> state that:</p>
<blockquote>
<p>NVIDIA Pascal™ or better with compute capability 6.0+</p>
</blockquote>
<p>which is the case for 1060.</p>
|
<python><dask><chunking><rapids><cudf>
|
2024-01-25 12:13:44
| 1
| 734
|
shda
|
77,879,635
| 610,569
|
How to reset parameters from AutoModelForSequenceClassification?
|
<p>Currently to reinitialize a model for <code>AutoModelForSequenceClassification</code>, we can do this:</p>
<pre><code>from transformers import AutoModel, AutoConfig, AutoModelForSequenceClassification
m = "moussaKam/frugalscore_tiny_bert-base_bert-score"
config = AutoConfig.from_pretrained(m)
model_from_scratch = AutoModel.from_config(config)
model_from_scratch.save_pretrained("frugalscore_tiny_bert-from_scratch")
model = AutoModelForSequenceClassification.from_pretrained(
"frugalscore_tiny_bert-from_scratch", local_files_only=True
)
</code></pre>
<h3>Is there some way to reinitialize the model weights without saving a new pretrained model initialized with <code>AutoConfig</code>?</h3>
<pre><code>model = AutoModelForSequenceClassification.from_pretrained(
    "moussaKam/frugalscore_tiny_bert-base_bert-score",
    local_files_only=True,
    reinitialize_weights=True
)
</code></pre>
<p>or something like:</p>
<pre><code>model = AutoModelForSequenceClassification.from_pretrained(
"moussaKam/frugalscore_tiny_bert-base_bert-score",
local_files_only=True
)
model.reinitialize_parameters()
</code></pre>
|
<python><machine-learning><huggingface-transformers><text-classification><safe-tensors>
|
2024-01-25 11:34:07
| 1
| 123,325
|
alvas
|
77,878,932
| 2,469,032
|
Specifying a suffixing function in feature_names_out argument of the functiontransformer function in scikit-learn
|
<p>I want to transform a series of variables 'x1', 'x2' with a squared function using FunctionTransformer and rename the feature names with a suffix 's2_'. For example, 'x1' would become 's2_x1' in the transformed data set. I have the following code:</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import FunctionTransformer
from sklearn import set_config
set_config(transform_output='pandas') # This ensures the transformed output is a dataframe
df = pd.DataFrame(
{
'x1' : [1, 2, 3],
'x2' : [2, 3, 4]
}
)
my_transformer = FunctionTransformer(lambda x: x**2,
feature_names_out= lambda x : [f's2_{c}' for c in x])
transformed_df = my_transformer.fit_transform(df)
transformed_df
</code></pre>
<p>However, the renaming did not work as expected: the column names remained 'x1' and 'x2'.
How should I fix the code?</p>
|
<python><scikit-learn>
|
2024-01-25 09:47:55
| 1
| 1,037
|
PingPong
|
77,878,923
| 15,915,737
|
Use the number of active worker as a variable in a function
|
<p>I'm using <code>ThreadPoolExecutor</code> to create multiple workers in order to download multiple tables simultaneously from a database via an API. The API limit is 200 requests per minute (<code>rate_limit=60/200</code>), and since I'm trying to download 5 tables I use 5 workers (<code>num_workers=5</code>). My sleep time is then <code>time_sleep=rate_limit*num_workers</code>.</p>
<p>I would like to use a variable like <code>active_num_worker</code> to adapt the sleep time, in order to increase the download speed of the remaining tables when other workers have finished their task.</p>
<p><a href="https://i.sstatic.net/GdItJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdItJ.png" alt="enter image description here" /></a></p>
<p>Main code:</p>
<pre class="lang-py prettyprint-override"><code># create list of tables
list_of_travelperks_tables=['users','trips','invoices','bookings','suppliers']
# initialize empty list for dataframes
dfs = []
num_workers=5
rate_limit = 60/200 #rate limit is 200 requests per minute
def load_and_convert_table(i):
# Load the JSON data
data = load_travelperk_table(i, os.environ['TP_API_KEY'], rate_limit*num_workers)
# Convert the list of dictionaries to a DataFrame
df = pd.json_normalize(data)
return i, df
# Create a dictionary to store the dataframes
dfs_dict = {}
with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = executor.map(load_and_convert_table, list_of_travelperks_tables)
for i, df in futures:
df_name = f'df_{i}'
dfs_dict[df_name] = df # Store the dataframe in the dictionary
dfs.append((df_name, df))
</code></pre>
<p>load_travelperk_table function:</p>
<pre><code>def load_travelperk_table(table_name: str, ApiKey: str, time_sleep: int, parameters=None):
#code to download
time.sleep(time_sleep)
return data
</code></pre>
<p>How should I proceed so that, once a worker finishes its task, the sleep parameter used by the other workers is updated?</p>
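One hedged way to do this is to share a thread-safe counter of still-active workers and re-read it before every sleep, instead of baking <code>num_workers</code> into <code>time_sleep</code> once at the start (all names below are illustrative; the download itself is replaced by the sleep loop):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

RATE_LIMIT = 60 / 200  # seconds per request at 200 requests/minute

class WorkerCounter:
    """Tracks how many workers are still active, thread-safely."""
    def __init__(self, n):
        self._n = n
        self._lock = threading.Lock()

    @property
    def active(self):
        with self._lock:
            return self._n

    def done(self):
        with self._lock:
            self._n -= 1

counter = WorkerCounter(5)

def load_table(name):
    # hypothetical download loop: re-read the active count on each request,
    # so the per-request sleep shrinks as other workers finish
    for _ in range(3):
        time.sleep(RATE_LIMIT * counter.active)
    counter.done()
    return name

tables = ["users", "trips", "invoices", "bookings", "suppliers"]
with ThreadPoolExecutor(max_workers=5) as ex:
    results = list(ex.map(load_table, tables))
```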
|
<python><function><threadpoolexecutor>
|
2024-01-25 09:45:54
| 0
| 418
|
user15915737
|
77,878,907
| 19,694,624
|
Discord stops responding on reaction after second iteration
|
<p>I have a problem with my discord bot. It stops working when I click during the second iteration specifically, and it does nothing when I click during any other iteration. It's supposed to call the <code>check()</code> function when I click a reaction.</p>
<pre><code>import asyncio
from discord.ui import Button, View
import discord
from discord.commands import slash_command
from discord.ext import commands
class Test(commands.Cog):
def __init__(self, bot):
self.bot = bot
self.stop_loop = False
@slash_command(name='test', description='')
async def test(self, ctx):
def check(reaction, user): # Our check for the reaction
self.stop_loop = True
print("clicked")
return user == ctx.author and str(reaction.emoji) in ["✅"]
for i in range(1, 100):
if self.stop_loop:
return
if i == 1:
embed = discord.Embed(title="1x")
interaction: discord.Interaction = await ctx.respond(embed=embed)
sent_embed = await interaction.original_response()
await sent_embed.add_reaction('✅')
await asyncio.sleep(2)
embed = discord.Embed(title=f"{i}x")
await interaction.edit_original_response(embed=embed)
await sent_embed.add_reaction('✅')
try:
reaction, user = await self.bot.wait_for("reaction_add", check=check, timeout=2.0)
print(reaction)
if reaction:
embed = discord.Embed(title=f"You clicked on {i}")
await interaction.edit_original_response(embed=embed)
return
except asyncio.exceptions.TimeoutError:
await asyncio.sleep(1)
def setup(bot):
bot.add_cog(Test(bot))
</code></pre>
<p>My result:</p>
<p><a href="https://i.sstatic.net/zyKVi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zyKVi.png" alt="enter image description here" /></a></p>
<p>No errors are being thrown, but it still doesn't work.</p>
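One thing I noticed while debugging (sketched stdlib-only, since it concerns the predicate pattern rather than discord itself): <code>wait_for</code> evaluates <code>check</code> for every <code>reaction_add</code> event, including ones that fail the condition, such as the bot adding its own ✅, yet my code sets <code>self.stop_loop</code> before the condition is tested:

```python
# hypothetical reaction stream: the bot's own check-mark arrives before the author's
events = [("OK", "bot"), ("OK", "author")]

state = {"stop_loop": False}

def check(emoji, user):
    state["stop_loop"] = True  # side effect fires for EVERY event...
    return user == "author" and emoji == "OK"

first_result = check(*events[0])       # the bot's own reaction fails the check
flag_after_first = state["stop_loop"]  # ...but the flag is already set

state2 = {"stop_loop": False}

def check_fixed(emoji, user):
    ok = user == "author" and emoji == "OK"
    if ok:
        state2["stop_loop"] = True  # only flip the flag when the check passes
    return ok

second_result = check_fixed(*events[0])
```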
|
<python><python-3.x><discord><discord.py><pycord>
|
2024-01-25 09:44:36
| 1
| 303
|
syrok
|
77,878,901
| 583,464
|
image reconstruction from predicted array - padding same shows grid tiles in reconstructed image
|
<p>I have two images, <code>E1</code> and <code>E3</code>, and I am training a CNN model.</p>
<p>In order to train the model, I use <code>E1</code> as train and <code>E3</code> as y_train.</p>
<p>I extract tiles from these images in order to train the model on tiles.</p>
<p>The model does not have a final activation layer, so the output can take any value.</p>
<p>So the predictions, <code>preds</code>, have values around <code>preds.max() = 2.35</code> and <code>preds.min() = -1.77</code>.</p>
<p>My problem is that I can't reconstruct the image at the end using <code>preds</code> and I think the problem is the scaling-unscaling of the <code>preds</code> values.</p>
<p>If I just do <code>np.uint8(preds)</code>, it is almost full of zeros since <code>preds</code> has small values.</p>
<p>The image should look like as close as possible to <code>E2</code> image.</p>
<pre><code>import cv2
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, \
Input, Add
from tensorflow.keras.models import Model
from PIL import Image
CHANNELS = 1
HEIGHT = 32
WIDTH = 32
INIT_SIZE = (1429, 1416)
def NormalizeData(data):
return (data - np.min(data)) / (np.max(data) - np.min(data) + 1e-6)
def extract_image_tiles(size, im):
im = im[:, :, :CHANNELS]
w = h = size
idxs = [(i, (i + h), j, (j + w)) for i in range(0, im.shape[0], h) for j in range(0, im.shape[1], w)]
tiles_asarrays = []
count = 0
for k, (i_start, i_end, j_start, j_end) in enumerate(idxs):
tile = im[i_start:i_end, j_start:j_end, ...]
if tile.shape[:2] != (h, w):
tile_ = tile
tile_size = (h, w) if tile.ndim == 2 else (h, w, tile.shape[2])
tile = np.zeros(tile_size, dtype=tile.dtype)
tile[:tile_.shape[0], :tile_.shape[1], ...] = tile_
count += 1
tiles_asarrays.append(tile)
return np.array(idxs), np.array(tiles_asarrays)
def build_model(height, width, channels):
inputs = Input((height, width, channels))
f1 = Conv2D(32, 3, padding='same')(inputs)
f1 = BatchNormalization()(f1)
f1 = Activation('relu')(f1)
f2 = Conv2D(16, 3, padding='same')(f1)
f2 = BatchNormalization()(f2)
f2 = Activation('relu')(f2)
f3 = Conv2D(16, 3, padding='same')(f2)
f3 = BatchNormalization()(f3)
f3 = Activation('relu')(f3)
addition = Add()([f2, f3])
f4 = Conv2D(32, 3, padding='same')(addition)
f5 = Conv2D(16, 3, padding='same')(f4)
f5 = BatchNormalization()(f5)
f5 = Activation('relu')(f5)
f6 = Conv2D(16, 3, padding='same')(f5)
f6 = BatchNormalization()(f6)
f6 = Activation('relu')(f6)
output = Conv2D(1, 1, padding='same')(f6)
model = Model(inputs, output)
return model
# Load data
img = cv2.imread('E1.tif', cv2.IMREAD_UNCHANGED)
img = cv2.resize(img, (1408, 1408), interpolation=cv2.INTER_AREA)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = np.array(img, np.uint8)
#plt.imshow(img)
img3 = cv2.imread('E3.tif', cv2.IMREAD_UNCHANGED)
img3 = cv2.resize(img3, (1408, 1408), interpolation=cv2.INTER_AREA)
img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB)
img3 = np.array(img3, np.uint8)
# extract tiles from images
idxs, tiles = extract_image_tiles(WIDTH, img)
idxs2, tiles3 = extract_image_tiles(WIDTH, img3)
# split to train and test data
split_idx = int(tiles.shape[0] * 0.9)
train = tiles[:split_idx]
val = tiles[split_idx:]
y_train = tiles3[:split_idx]
y_val = tiles3[split_idx:]
# build model
model = build_model(HEIGHT, WIDTH, CHANNELS)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss = tf.keras.losses.Huber(),
metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse')])
# scale data before training
train = train / 255.
val = val / 255.
y_train = y_train / 255.
y_val = y_val / 255.
# train
history = model.fit(train,
y_train,
validation_data=(val, y_val),
epochs=50)
# predict on E2
img2 = cv2.imread('E2.tif', cv2.IMREAD_UNCHANGED)
img2 = cv2.resize(img2, (1408, 1408), interpolation=cv2.INTER_AREA)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
img2 = np.array(img2, np.uint8)
# extract tiles from images
idxs, tiles2 = extract_image_tiles(WIDTH, img2)
#scale data
tiles2 = tiles2 / 255.
preds = model.predict(tiles2)
#preds = NormalizeData(preds)
#preds = np.uint8(preds)
# reconstruct predictions
reconstructed = np.zeros((img.shape[0],
img.shape[1]),
dtype=np.uint8)
# reconstruction process
for tile, (y_start, y_end, x_start, x_end) in zip(preds[:, :, -1], idxs):
y_end = min(y_end, img.shape[0])
x_end = min(x_end, img.shape[1])
reconstructed[y_start:y_end, x_start:x_end] = tile[:(y_end - y_start), :(x_end - x_start)]
im = Image.fromarray(reconstructed)
im = im.resize(INIT_SIZE)
im.show()
</code></pre>
<p>You can find the data <a href="https://easyupload.io/m/3y2rl8" rel="nofollow noreferrer">here</a></p>
<p>If I use :</p>
<pre><code>def normalize_arr_to_uint8(arr):
the_min = arr.min()
the_max = arr.max()
the_max -= the_min
arr = ((arr - the_min) / the_max) * 255.
return arr.astype(np.uint8)
preds = model.predict(tiles2)
preds = normalize_arr_to_uint8(preds)
</code></pre>
<p>then I receive an image that seems right, but with lines all over it.</p>
<p>Here is the image I get:</p>
<p><a href="https://i.sstatic.net/EiBsN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EiBsN.png" alt="enter image description here" /></a></p>
<p>This is the image I should get (as close as possible to <code>E2</code>). Note that I just use a small CNN for this example, so I can't recover much detail in the image. But even when I try a better model, I still get horizontal and/or vertical lines:</p>
<p><a href="https://i.sstatic.net/NQDHN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NQDHN.png" alt="enter image description here" /></a></p>
<p><strong>UPDATE</strong></p>
<p>I found the following issue.</p>
<p>In the code above, at the reconstruction step I use:</p>
<pre><code># reconstruction process
for tile, (y_start, y_end, x_start, x_end) in zip(preds[:, :, -1], idxs):
</code></pre>
<p>Using <code>preds[:, :, -1]</code> here is wrong.</p>
<p>I must use <code>preds[:, :, :, -1]</code>, because the shape of <code>preds</code> is <code>(1936, 32, 32, 1)</code>.</p>
<p>So, if I use <code>preds[:, :, -1]</code>, I get the image I posted above.</p>
<p>If I use <code>preds[:, :, :, -1]</code>, which is correct, I get a new image where, in addition to the horizontal lines, there are vertical lines as well!</p>
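<p>A quick NumPy check makes the difference between the two indexing variants explicit (array sizes taken from the shapes above):</p>

```python
import numpy as np

# preds has shape (num_tiles, tile_h, tile_w, channels), as in the post
preds = np.zeros((1936, 32, 32, 1))

# Indexing only three axes slices the wrong dimension: 32x1 slivers, not tiles
assert preds[:, :, -1].shape == (1936, 32, 1)

# Indexing the channel axis keeps proper 32x32 tiles
assert preds[:, :, :, -1].shape == (1936, 32, 32)
```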
<p><a href="https://i.sstatic.net/eALs7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eALs7.png" alt="the new image" /></a></p>
<p><strong>UPDATE 2</strong></p>
<p>I am adding new code that uses different patch-extraction and reconstruction functions, but it produces the same result (a slightly better picture).</p>
<pre><code>import cv2
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, \
Input, Add
from tensorflow.keras.models import Model
from PIL import Image
# gpu setup
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
CHANNELS = 1
HEIGHT = 1408
WIDTH = 1408
PATCH_SIZE = 32
STRIDE = PATCH_SIZE//2
INIT_SIZE = ((1429, 1416))
def normalize_arr_to_uint8(arr):
the_min = arr.min()
the_max = arr.max()
the_max -= the_min
arr = ((arr - the_min) / the_max) * 255.
return arr.astype(np.uint8)
def NormalizeData(data):
return (data - np.min(data)) / (np.max(data) - np.min(data) + 1e-6)
def recon_im(patches: np.ndarray, im_h: int, im_w: int, n_channels: int, stride: int):
"""Reconstruct the image from all patches.
Patches are assumed to be square and overlapping depending on the stride. The image is constructed
by filling in the patches from left to right, top to bottom, averaging the overlapping parts.
Parameters
-----------
patches: 4D ndarray with shape (patch_number,patch_height,patch_width,channels)
Array containing extracted patches. If the patches contain colour information,
channels are indexed along the last dimension: RGB patches would
have `n_channels=3`.
im_h: int
original height of image to be reconstructed
im_w: int
original width of image to be reconstructed
n_channels: int
number of channels the image has. For RGB image, n_channels = 3
stride: int
desired patch stride
Returns
-----------
reconstructedim: ndarray with shape (height, width, channels)
or ndarray with shape (height, width) if output image only has one channel
Reconstructed image from the given patches
"""
patch_size = patches.shape[1] # patches assumed to be square
# Assign output image shape based on patch sizes
rows = ((im_h - patch_size) // stride) * stride + patch_size
cols = ((im_w - patch_size) // stride) * stride + patch_size
if n_channels == 1:
reconim = np.zeros((rows, cols))
divim = np.zeros((rows, cols))
else:
reconim = np.zeros((rows, cols, n_channels))
divim = np.zeros((rows, cols, n_channels))
p_c = (cols - patch_size + stride) / stride # number of patches needed to fill out a row
totpatches = patches.shape[0]
initr, initc = 0, 0
# extract each patch and place in the zero matrix and sum it with existing pixel values
    reconim[initr:patch_size, initc:patch_size] = patches[0]  # fill out top left corner using first patch
divim[initr:patch_size, initc:patch_size] = np.ones(patches[0].shape)
patch_num = 1
while patch_num <= totpatches - 1:
initc = initc + stride
reconim[initr:initr + patch_size, initc:patch_size + initc] += patches[patch_num]
divim[initr:initr + patch_size, initc:patch_size + initc] += np.ones(patches[patch_num].shape)
if np.remainder(patch_num + 1, p_c) == 0 and patch_num < totpatches - 1:
initr = initr + stride
initc = 0
reconim[initr:initr + patch_size, initc:patch_size] += patches[patch_num + 1]
divim[initr:initr + patch_size, initc:patch_size] += np.ones(patches[patch_num].shape)
patch_num += 1
patch_num += 1
# Average out pixel values
reconstructedim = reconim / divim
return reconstructedim
def get_patches(GT, stride, patch_size):
"""Extracts square patches from an image of any size.
Parameters
-----------
GT : ndarray
n-dimensional array containing the image from which patches are to be extracted
stride : int
desired patch stride
patch_size : int
patch size
Returns
-----------
patches: ndarray
array containing all patches
im_h: int
height of image to be reconstructed
im_w: int
width of image to be reconstructed
n_channels: int
number of channels the image has. For RGB image, n_channels = 3
"""
hr_patches = []
for i in range(0, GT.shape[0] - patch_size + 1, stride):
for j in range(0, GT.shape[1] - patch_size + 1, stride):
hr_patches.append(GT[i:i + patch_size, j:j + patch_size])
im_h, im_w = GT.shape[0], GT.shape[1]
if len(GT.shape) == 2:
n_channels = 1
else:
n_channels = GT.shape[2]
patches = np.asarray(hr_patches)
return patches, im_h, im_w, n_channels
def build_model(height, width, channels):
inputs = Input((height, width, channels))
f1 = Conv2D(32, 3, padding='same')(inputs)
f1 = BatchNormalization()(f1)
f1 = Activation('relu')(f1)
f2 = Conv2D(16, 3, padding='same')(f1)
f2 = BatchNormalization()(f2)
f2 = Activation('relu')(f2)
f3 = Conv2D(16, 3, padding='same')(f2)
f3 = BatchNormalization()(f3)
f3 = Activation('relu')(f3)
addition = Add()([f2, f3])
f4 = Conv2D(32, 3, padding='same')(addition)
f5 = Conv2D(16, 3, padding='same')(f4)
f5 = BatchNormalization()(f5)
f5 = Activation('relu')(f5)
f6 = Conv2D(16, 3, padding='same')(f5)
f6 = BatchNormalization()(f6)
f6 = Activation('relu')(f6)
output = Conv2D(1, 1, padding='same')(f6)
model = Model(inputs, output)
return model
# Load data
img = cv2.imread('E1.tif', cv2.IMREAD_UNCHANGED)
img = cv2.resize(img, (HEIGHT, WIDTH), interpolation=cv2.INTER_AREA)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = np.array(img, np.uint8)
img3 = cv2.imread('E3.tif', cv2.IMREAD_UNCHANGED)
img3 = cv2.resize(img3, (HEIGHT, WIDTH), interpolation=cv2.INTER_AREA)
img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2RGB)
img3 = np.array(img3, np.uint8)
# extract tiles from images
tiles, H, W, C = get_patches(img[:, :, :CHANNELS], stride=STRIDE, patch_size=PATCH_SIZE)
tiles3, H, W, C = get_patches(img3[:, :, :CHANNELS], stride=STRIDE, patch_size=PATCH_SIZE)
# split to train and test data
split_idx = int(tiles.shape[0] * 0.9)
train = tiles[:split_idx]
val = tiles[split_idx:]
y_train = tiles3[:split_idx]
y_val = tiles3[split_idx:]
# build model
model = build_model(PATCH_SIZE, PATCH_SIZE, CHANNELS)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
loss = tf.keras.losses.Huber(),
metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse')])
# scale data before training
train = train / 255.
val = val / 255.
y_train = y_train / 255.
y_val = y_val / 255.
# train
history = model.fit(train,
y_train,
validation_data=(val, y_val),
batch_size=16,
epochs=20)
# predict on E2
img2 = cv2.imread('E2.tif', cv2.IMREAD_UNCHANGED)
img2 = cv2.resize(img2, (HEIGHT, WIDTH), interpolation=cv2.INTER_AREA)
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
img2 = np.array(img2, np.uint8)
# extract tiles from images
tiles2, H, W, CHANNELS = get_patches(img2[:, :, :CHANNELS], stride=STRIDE, patch_size=PATCH_SIZE)
#scale data
tiles2 = tiles2 / 255.
preds = model.predict(tiles2)
preds = normalize_arr_to_uint8(preds)
reconstructed = recon_im(preds[:, :, :, -1], HEIGHT, WIDTH, CHANNELS, stride=STRIDE)
im = Image.fromarray(reconstructed)
im = im.resize(INIT_SIZE)
im.show()
</code></pre>
<p>and the image produced:</p>
<p><a href="https://i.sstatic.net/34ery.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/34ery.png" alt="stride16" /></a></p>
<p><strong>UPDATE 3</strong></p>
<p>After @Lescurel's comment, I tried this architecture:</p>
<pre><code>def build_model(height, width, channels):
inputs = Input((height, width, channels))
f1 = Conv2D(32, 3, padding='valid')(inputs)
f1 = BatchNormalization()(f1)
f1 = Activation('relu')(f1)
f2 = Conv2D(16, 3, strides=2,padding='valid')(f1)
f2 = BatchNormalization()(f2)
f2 = Activation('relu')(f2)
f3 = Conv2D(16, 3, padding='same')(f2)
f3 = BatchNormalization()(f3)
f3 = Activation('relu')(f3)
addition = Add()([f2, f3])
f4 = Conv2D(32, 3, padding='valid')(addition)
f5 = Conv2D(16, 3, padding='valid')(f4)
f5 = BatchNormalization()(f5)
f5 = Activation('relu')(f5)
f6 = Conv2D(16, 3, padding='valid')(f5)
f6 = BatchNormalization()(f6)
f6 = Activation('relu')(f6)
f7 = Conv2DTranspose(16, 3, strides=2,padding='same')(f6)
f7 = BatchNormalization()(f7)
f7 = Activation('relu')(f7)
f8 = Conv2DTranspose(16, 3, strides=2,padding='same')(f7)
f8 = BatchNormalization()(f8)
f8 = Activation('relu')(f8)
output = Conv2D(1,1, padding='same', activation='sigmoid')(f8)
model = Model(inputs, output)
return model
</code></pre>
<p>which uses both valid and same padding, and the image I receive is shown below.</p>
<p>The square tiles changed dimensions and shape.</p>
<p>So, the problem is: how can I use my original architecture and get rid of these tiles?</p>
<p><a href="https://i.sstatic.net/km8SV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/km8SV.png" alt="padding" /></a></p>
|
<python><tensorflow><machine-learning><image-processing><deep-learning>
|
2024-01-25 09:43:55
| 1
| 5,751
|
George
|
77,878,893
| 9,187,682
|
python selenium capture requests headers and responses [2024]
|
<p>I have been using selenium-wire to intercept and work with request and response headers, but now in 2024 selenium-wire is no longer maintained, so I wonder what the alternative would be from now on.</p>
<p>At some point the Selenium maintainers mentioned that this isn't an option for the core library, so obviously some other way is needed (unless something has changed dramatically in this area since then).</p>
<p>There seems to be no straightforward way to do it. Initially I thought I could utilize the <code>execute_script</code> function somehow, but, as I have learnt, there is no API in the browser for getting access to already-issued requests.</p>
<p>The scenario I am trying to put it in is as follows:</p>
<ol>
<li>Perform oauth via browser with user actions against system under test</li>
<li>Capture access_token issued during step 1</li>
<li>Use this token against other endpoints available in identity server (eg. session, revoke, introspect) and for direct api calls to backends.</li>
</ol>
<p>Apart from that, having access to the underlying properties of requests seems like a very valid use case. I have checked competing frameworks, and it looks like some of them at least offer such a possibility (like Cypress with its <code>intercept</code> function).</p>
<p>Is there some smart way to make a straightforward workaround for this in Selenium, or will it be less hassle to move to another framework?</p>
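<p>For context on what a replacement could look like: Selenium 4 can expose Chrome's performance log (CDP events), and the network events can be parsed out of it with plain JSON handling. Below is a minimal sketch of the parsing side only — the log-entry shape is assumed from CDP's <code>Network</code> domain, the sample URL is hypothetical, and driver setup (enabling the <code>performance</code> log type) is omitted:</p>

```python
import json

# Hypothetical performance-log entry, shaped the way Chrome's CDP emits
# Network events inside Selenium's performance log (shape assumed):
sample_entry = {"message": json.dumps({
    "message": {
        "method": "Network.responseReceived",
        "params": {"response": {
            "url": "https://idp.example/token",
            "headers": {"content-type": "application/json"},
        }},
    }
})}

def extract_responses(log_entries):
    """Pick Network.responseReceived events out of Selenium performance logs."""
    out = []
    for entry in log_entries:
        msg = json.loads(entry["message"])["message"]
        if msg.get("method") == "Network.responseReceived":
            out.append(msg["params"]["response"])
    return out

# In real use, log_entries would come from driver.get_log("performance")
responses = extract_responses([sample_entry])
```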
|
<python><selenium-webdriver><seleniumwire>
|
2024-01-25 09:42:28
| 0
| 345
|
James Windshield
|
77,878,768
| 2,560,685
|
Executing a tool from the command line after a pip install on Windows
|
<p>Hi,
I'm on Windows, and a Python beginner.
I run</p>
<pre><code> pip install someInternalTool
</code></pre>
<p>in order to install some internal tool .</p>
<p>issue is, when I do</p>
<pre><code>cmd> someInternalTool
</code></pre>
<p>I have the message "someInternalTool is not recognized as an internal or external command"</p>
<p>I know that if I did this on Linux, it would launch the tool, but it doesn't seem to work on Windows.</p>
<p>Any idea how I can make it run?
I see the package is in
C:\Users\MyName\AppData\Roaming\Python\Python311\site-packages</p>
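<p>For reference, the per-user <code>Scripts</code> directory (where I assume the console entry point would land, and which is often not on <code>PATH</code> on Windows) can be located like this — the exact path varies per machine and Python version:</p>

```python
import sysconfig

# "nt_user" is Python's scheme for per-user installs on Windows;
# console-script shims (someInternalTool.exe) would land in this directory.
scripts_dir = sysconfig.get_path("scripts", scheme="nt_user")
print(scripts_dir)
```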
|
<python><windows><pip>
|
2024-01-25 09:23:39
| 0
| 5,182
|
sab
|
77,878,728
| 6,212,718
|
Polars dataframe: overlapping groups
|
<p>I am currently converting code from pandas to polars as I really like the API. This question is a more general version of a previous question of mine (see <a href="https://stackoverflow.com/questions/77868799/polars-dataframe-add-columns-conditional-on-other-column-yielding-different-len">here</a>)</p>
<p>I have the following dataframe</p>
<pre class="lang-py prettyprint-override"><code># Dummy data
df = pl.DataFrame({
"Buy_Signal": [1, 0, 1, 0, 1, 0, 0],
"Returns": [0.01, 0.02, 0.03, 0.02, 0.01, 0.00, -0.01],
})
</code></pre>
<p>I ultimately want to run aggregations on column <code>Returns</code> over different intervals, which are given by column <code>Buy_Signal</code>. In the above case each interval runs from a 1 to the end of the dataframe. The resulting dataframe should therefore look like this:</p>
<pre><code>shape: (15, 2)
┌───────┬─────────┐
│ group ┆ Returns │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞═══════╪═════════╡
│ 1 ┆ 0.01 │
│ 1 ┆ 0.02 │
│ 1 ┆ 0.03 │
│ 1 ┆ 0.02 │
│ 1 ┆ 0.01 │
│ 1 ┆ 0.0 │
│ 1 ┆ -0.01 │
│ 2 ┆ 0.03 │
│ 2 ┆ 0.02 │
│ 2 ┆ 0.01 │
│ 2 ┆ 0.0 │
│ 2 ┆ -0.01 │
│ 3 ┆ 0.01 │
│ 3 ┆ 0.0 │
│ 3 ┆ -0.01 │
└───────┴─────────┘
</code></pre>
<p>One approach posted as an answer to my previous question is the following:</p>
<pre class="lang-py prettyprint-override"><code># Build overlapping group index
idx = df.select(index=
pl.when(pl.col("Buy_Signal") == 1)
.then(pl.int_ranges(pl.int_range(pl.len()), pl.len() ))
).explode(pl.col("index")).drop_nulls().cast(pl.UInt32)
# Join index with original data
df = (df.with_row_index()
.join(idx, on="index")
.with_columns(group = (pl.col("index") == pl.col("index").max())
.shift().cum_sum().backward_fill() + 1)
.select(["group", "Returns"])
)
</code></pre>
<p><strong>Question:</strong> Are there other solutions to this problem that are both readable and fast?</p>
<p>My actual problem contains much larger datasets. Thanks</p>
|
<python><dataframe><python-polars>
|
2024-01-25 09:15:53
| 3
| 1,489
|
FredMaster
|