| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,109,524
| 4,913,660
|
Dataframe manipulation: "explode rows" on new dataframe with repeated indices
|
<p>I have two dataframes say <code>df1</code> and <code>df2</code>, for example</p>
<pre><code>import numpy as np
import pandas as pd

col_1 = ["A", ["B","C"], ["A","C","D"], "D"]
col_id = [1,2,3,4]
col_2 = [1,2,2,3,3,4,4]
d1 = {'ID': [1,2,3,4], 'Labels': col_1}
d2 = {'ID': col_2, }
d_2_get = {'ID': col_2, "Labels": ["A", "B", "C", "A", "C", "D", np.nan] }
df1 = pd.DataFrame(data=d1)
df2 = pd.DataFrame(data=d2)
df_2_get = pd.DataFrame(data=d_2_get)
</code></pre>
<p><code>df1</code> looking like</p>
<pre><code>   ID     Labels
0   1          A
1   2     [B, C]
2   3  [A, C, D]
3   4          D
</code></pre>
<p>and <code>df2</code> looking like</p>
<pre><code> ID
0 1
1 2
2 2
3 3
4 3
5 4
6 4
</code></pre>
<p>I want to add a column <code>Labels</code> to <code>df2</code>, taken from <code>df1</code>, in such a way that:</p>
<ul>
<li>for index <code>i</code>, start with the first value in <code>df1</code></li>
<li>if the new row in <code>df2["ID"]</code> has a repeated entry, get the next value in <code>df1</code>, if it exists. If not, set <code>NaN</code>.</li>
</ul>
<p>Given <code>df1</code> and <code>df2</code>, the output should look like <code>df_2_get</code> below</p>
<pre><code> ID Labels
0 1 A
1 2 B
2 2 C
3 3 A
4 3 C
5 4 D
6 4 NaN
</code></pre>
<p>My current clumsy attempt is below,</p>
<pre><code>import numpy as np
from collections import Counter

def list_flattener(list_of_lists):
    return [item for row in list_of_lists for item in row]

def my_dataframe_filler(df1, df2):
    list_2_fill = []
    repeats = dict(Counter(df2["ID"]))
    for k in repeats.keys():
        available_labels_list = df1[df1["ID"]==k]["Labels"].tolist()
        available_labels_list += [[np.nan]*10]
        available_labels_list = list_flattener(available_labels_list)
        list_2_fill += available_labels_list[:repeats[k]]
    return list_2_fill
</code></pre>
<p>and then use as</p>
<pre><code>df2["Labels"] = my_dataframe_filler(df1, df2)
</code></pre>
<p>but I would like to learn how a pandas black belt would handle the problem, thanks</p>
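<p>For comparison, a concise sketch (not from the original post) that uses <code>explode</code> plus a per-ID occurrence counter, so repeated IDs line up with successive labels:</p>

```python
import numpy as np
import pandas as pd

# Rebuild the example frames from the question
df1 = pd.DataFrame({'ID': [1, 2, 3, 4],
                    'Labels': ["A", ["B", "C"], ["A", "C", "D"], "D"]})
df2 = pd.DataFrame({'ID': [1, 2, 2, 3, 3, 4, 4]})

# Explode df1 so each label gets its own row, number the repeats per ID
# on both sides, then merge on (ID, occurrence). Missing pairs become NaN.
left = df2.assign(n=df2.groupby('ID').cumcount())
right = df1.explode('Labels').assign(n=lambda d: d.groupby('ID').cumcount())
out = left.merge(right, on=['ID', 'n'], how='left').drop(columns='n')
```

The `how='left'` merge is what produces the trailing `NaN` for the second occurrence of ID 4, since `df1` only supplies one label for it.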
|
<python><pandas><dataframe>
|
2024-10-21 09:59:28
| 2
| 414
|
user37292
|
79,109,487
| 3,104,974
|
How to check whether an sklearn estimator is a scaler?
|
<p>I'm writing a function that needs to determine whether an object passed to it is an imputer (can check with <code>isinstance(obj, </code><a href="https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/impute/_base.py#L78" rel="nofollow noreferrer"><code>_BaseImputer</code></a><code>)</code>), a scaler, or something else.</p>
<p>While all imputers have a common base class that identifies them as imputers, scalers do not. I found that all scalers in <a href="https://github.com/scikit-learn/scikit-learn/tree/main/sklearn/preprocessing" rel="nofollow noreferrer"><code>sklearn.preprocessing._data</code></a> inherit <code>(OneToOneFeatureMixin, TransformerMixin, BaseEstimator)</code>, so I <em>could</em> check whether an object is an instance of all three. However, that could generate false positives (I'm not sure which other objects inherit the same base classes), and it doesn't feel very clean or pythonic either.</p>
<p>I was also thinking of checking whether the object has an <code>.inverse_transform()</code> method. However, scalers are not the only classes with that method: <code>SimpleImputer</code> (and possibly other objects) has it too.</p>
<p>How can I easily check if my object is a scaler?</p>
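<p>One pragmatic workaround (a sketch based on a naming heuristic, not an official sklearn mechanism) is to combine the <code>...Scaler</code> naming convention with a duck-typing check:</p>

```python
def is_scaler(obj) -> bool:
    # Heuristic, not an sklearn API guarantee: the built-in scalers
    # (StandardScaler, MinMaxScaler, RobustScaler, MaxAbsScaler, ...)
    # all follow the "...Scaler" naming convention and expose
    # fit/transform. Custom transformers may evade this check.
    return (type(obj).__name__.endswith("Scaler")
            and hasattr(obj, "fit")
            and hasattr(obj, "transform"))
```

An explicit `isinstance(obj, (StandardScaler, MinMaxScaler, RobustScaler, MaxAbsScaler))` against a hand-maintained tuple is the stricter alternative, at the cost of having to update the tuple when sklearn adds a scaler.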
|
<python><scikit-learn><isinstance>
|
2024-10-21 09:47:10
| 1
| 6,315
|
ascripter
|
79,109,458
| 536,262
|
playwright python how can I catch exception and just gracefully quit
|
<p>I'm unable to exit a playwright loop cleanly on <code>KeyboardInterrupt/Exception</code>. It is called by <code>subprocess()</code> in our CI/CD and always fails with this garbled output when the system sends &lt;ctrl+c&gt; after running for its allotted time:</p>
<p>(simplified example with google)</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
"""
pip install playwright
playwright install
playwright install-deps
"""
import argparse, json, platform, sys, time
from playwright.sync_api import sync_playwright

url = "https://www.google.no"

parser = argparse.ArgumentParser(description='playwright example')
parser.add_argument('--browser', help='browser type', choices=['chrome','firefox','msedge','webkit'], default='chrome')
parser.add_argument('--headed', help='set to set headed with browser', action='store_true')
args, unknown_args = parser.parse_known_args()
if unknown_args:
    print(f"unknown parameters: {unknown_args}")
    sys.exit(-1)
args.headless = True
if args.headed:
    args.headless = False
if platform.system()!='Darwin' and args.browser=='webkit':
    print("webkit only supported on apple")
    sys.exit(-1)

p = sync_playwright().start()
t0 = time.time()
if args.browser=='firefox':
    browser = p.firefox.launch(headless=args.headless)
elif args.browser=='chrome':
    browser = p.chromium.launch(channel='chrome', headless=args.headless)
elif args.browser=='msedge':
    browser = p.chromium.launch(channel='msedge', headless=args.headless)
elif args.browser=='webkit':
    browser = p.webkit.launch(headless=args.headless)
page = browser.new_page()

searches = ["python", "playwright", "selenium", "requests", "beautifulsoup"]
for search in searches:
    try:
        t1 = time.time()
        page.goto(url)
        try:
            page.get_by_role("button", name="Godta alle").click(timeout=1000)
        except Exception as e:
            pass
        page.get_by_label("Søk", exact=True).type(search)
        page.get_by_label("Søk", exact=True).press("Enter")
        page.locator("#tsf").get_by_role("button", name="Søk", exact=True)
        time.sleep(1)
        page.screenshot(path=f"pics/google-{search}.png")
    except Exception:
        browser.close()
        sys.exit(0)
</code></pre>
<p>When I exit with &lt;ctrl+c&gt;, it fails on closing the browser:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "C:\dist\testing\exit-test.py", line 59, in <module>
browser.close()
File "C:\dist\venvs\testing\Lib\site-packages\playwright\sync_api\_generated.py", line 13927, in close
return mapping.from_maybe_impl(self._sync(self._impl_obj.close(reason=reason)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_sync_base.py", line 115, in _sync
return task.result()
^^^^^^^^^^^^^
File "C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_browser.py", line 192, in close
raise e
File "C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_browser.py", line 189, in close
await self._channel.send("close", {"reason": reason})
File "C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_connection.py", line 59, in send
return await self._connection.wrap_api_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_connection.py", line 514, in wrap_api_call
raise rewrite_error(error, f"{parsed_st['apiName']}: {error}") from None
Exception: Browser.close: Connection closed while reading from the driver
</code></pre>
<p>if I just do:</p>
<pre><code> :
except KeyboardInterrupt:
sys.exit(0)
</code></pre>
<p>I manage to get it clean sometimes, but mostly:</p>
<pre><code>Exception ignored in: <function BaseSubprocessTransport.__del__ at 0x00000242299C2A20>
Traceback (most recent call last):
File "C:\dist\python312\Lib\asyncio\base_subprocess.py", line 126, in __del__
File "C:\dist\python312\Lib\asyncio\base_subprocess.py", line 104, in close
File "C:\dist\python312\Lib\asyncio\proactor_events.py", line 109, in close
File "C:\dist\python312\Lib\asyncio\base_events.py", line 795, in call_soon
File "C:\dist\python312\Lib\asyncio\base_events.py", line 541, in _check_closed
RuntimeError: Event loop is closed
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000242299E8220>
Traceback (most recent call last):
File "C:\dist\python312\Lib\asyncio\proactor_events.py", line 116, in __del__
File "C:\dist\python312\Lib\asyncio\proactor_events.py", line 80, in __repr__
File "C:\dist\python312\Lib\asyncio\windows_utils.py", line 102, in fileno
ValueError: I/O operation on closed pipe
Task was destroyed but it is pending!
task: <Task pending name='Task-12' coro=<Locator.click() running at C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_locator.py:156> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[SyncBase._sync.<locals>.<lambda>() at C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_sync_base.py:111, ProtocolCallback.__init__.<locals>.cb() at C:\dist\venvs\testing\Lib\site-packages\playwright\_impl\_connection.py:191]>
</code></pre>
<p>Any hints greatly appreciated :-)</p>
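<p>A generic shape of the fix (a sketch, not Playwright-specific API): run the whole session inside one context manager that swallows <code>KeyboardInterrupt</code> and performs cleanup in a <code>finally</code> block, so teardown runs exactly once and in order:</p>

```python
import contextlib

@contextlib.contextmanager
def graceful(start, stop):
    """Run a session; always call stop(), turning Ctrl+C into a clean exit."""
    resource = start()
    try:
        yield resource
    except KeyboardInterrupt:
        pass  # swallow Ctrl+C so the cleanup in finally still runs
    finally:
        stop(resource)
```

In Playwright terms this would roughly be `start=lambda: sync_playwright().start()` and a `stop` that closes the browser before calling `p.stop()` — those names are illustrative; the point is that browser teardown must happen before the sync driver shuts down, which the bare `sys.exit(0)` inside the loop does not guarantee.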
|
<python><playwright><playwright-python>
|
2024-10-21 09:36:34
| 1
| 3,731
|
MortenB
|
79,109,423
| 6,445,248
|
ROS2 launch error when installing my package
|
<p>I tested both foxy and humble on Ubuntu 20.04, 22.04 and got the same error.</p>
<p>When I build my package and type the source command, <code>source /home/ws/ros2_ws/install/setup.bash</code>, I get the following error.</p>
<h2>ERROR</h2>
<pre class="lang-none prettyprint-override"><code>$> ros2 launch
Failed to load entry point 'launch': No module named 'launch.launch_description_sources'
Traceback (most recent call last):
File "/opt/ros/foxy/bin/ros2", line 11, in <module>
load_entry_point('ros2cli==0.9.13', 'console_scripts', 'ros2')()
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2cli/cli.py", line 39, in main
add_subparsers_on_demand(
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2cli/command/__init__.py", line 237, in add_subparsers_on_demand
extension = command_extensions[name]
KeyError: 'launch'
</code></pre>
<p>The only solution I've found so far is to <strong>delete the launch folder</strong>. What could be the problem?</p>
<h2>package tree</h2>
<pre><code>├── config
│   └── params_app.yaml
├── face_pose_estimation
│   ├── app_for_1.py
│   ├── components
│   │   ├── main_component.py
│   │   ├── __init__.py
│   │   └── utils
│   │       ├── common.py
│   │       ├── __init__.py
│   │       └── play_voice.py
│   ├── __init__.py
│   └── save_service_server.py
├── launch
│   ├── __init__.py
│   └── app_1.launch.py
├── log
│   └── 20241011_1114.txt
├── package.xml
├── README.md
├── resource
│   ├── ros_app.service
│   └── weights
├── script
│   ├── all.sh
│   ├── micro_ros.sh
│   └── usbcam_open.sh
├── setup.cfg
└── setup.py
</code></pre>
<h2>app.launch.py</h2>
<pre class="lang-py prettyprint-override"><code>from launch import LaunchDescription
from launch_ros.actions import Node
from ament_index_python.packages import get_package_share_directory
import os

def generate_launch_description():
    package_share_dir = get_package_share_directory("my_package")
    cam_params_file = os.path.join(package_share_dir, "config", "params_usbcam.yaml")
    app_params_file = os.path.join(package_share_dir, "config", "params_app.yaml")
    return LaunchDescription(
        [
            # Node(
            #     package="micro_ros_agent",
            #     executable="micro_ros_agent",
            #     name="micro_ros",
            #     arguments=[
            #         "serial",
            #         "--dev",
            #         "/dev/ttyUSB0",
            #         "-b",
            #         "115200",
            #     ],
            # ),
            Node(
                package="usb_cam",
                executable="usb_cam_node_exe",
                name="app_cam",
                # parameters=[cam_params_file],
            ),
            Node(
                package="my_package",
                executable="app_for_1",
                name="app",
                parameters=[app_params_file],
            ),
        ]
    )
</code></pre>
<h2>setup.py</h2>
<pre class="lang-py prettyprint-override"><code>import os
from setuptools import setup, find_packages

package_name = "my_package"

data_files = [
    ("share/ament_index/resource_index/packages", ["resource/" + package_name]),
    ("share/" + package_name, ["package.xml"]),
]

def package_files(data_files, directory_list):
    paths_dict = {}
    for directory in directory_list:
        for path, directories, filenames in os.walk(directory):
            for filename in filenames:
                if filename == "__init__.py":
                    continue
                file_path = os.path.join(path, filename)
                install_path = os.path.join("share", package_name, path)
                if install_path in paths_dict.keys():
                    paths_dict[install_path].append(file_path)
                else:
                    paths_dict[install_path] = [file_path]
    for key in paths_dict.keys():
        data_files.append((key, paths_dict[key]))
    return data_files

def copy_weights_to_home():
    home_dir = os.path.expanduser("~")
    dest_dir = os.path.join(home_dir, ".temp", "weights")
    os.makedirs(dest_dir, exist_ok=True)
    src_dir = os.path.abspath(
        os.path.join(os.path.dirname(__file__), "resource", "weights")
    )
    for path, directories, filenames in os.walk(src_dir):
        for filename in filenames:
            src_file = os.path.join(path, filename)
            dest_file = os.path.join(dest_dir, filename)
            if not os.path.exists(dest_file):
                os.symlink(src_file, dest_file)
            elif os.path.islink(dest_file) and os.readlink(dest_file) != src_file:
                os.remove(dest_file)
                os.symlink(src_file, dest_file)

setup(
    name=package_name,
    version="0.0.0",
    packages=find_packages(exclude=["test"]),
    data_files=package_files(data_files, ["launch", "config", "resource"]),
    install_requires=["setuptools"],
    zip_safe=True,
    maintainer="foo",
    maintainer_email="foo@foo.com",
    description="TODO: Package description",
    license="TODO: License declaration",
    tests_require=["pytest"],
    entry_points={
        "console_scripts": [
            "app_for_1 = my_package.app_for_1:main",
        ],
    },
)

copy_weights_to_home()
</code></pre>
<p>package.xml</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>my_package</name>
<version>0.0.0</version>
<description>ROS package</description>
<maintainer email="foo@foo.com">foo</maintainer>
<license>TODO: License declaration</license>
<buildtool_depend>ament_python</buildtool_depend>
<exec_depend>rclpy</exec_depend>
<exec_depend>launch</exec_depend>
<exec_depend>launch_ros</exec_depend>
<depend>std_msgs</depend>
<depend>sensor_msgs</depend>
<depend>cv_bridge</depend>
<depend>usb_cam</depend>
<depend>v4l-utils</depend>
<depend>ffmpeg</depend>
<depend>ament_index_python</depend>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>
</code></pre>
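<p>A quick diagnostic sketch (not ROS-specific code, added for illustration): ask Python which file it would load for a given module name. If <code>launch</code> resolves to a path inside your workspace's <code>install/</code> tree instead of <code>/opt/ros/...</code>, then the local <code>launch</code> directory (made a Python package by its <code>__init__.py</code> and picked up by <code>find_packages()</code>) is shadowing the ROS 2 <code>launch</code> module — which would match both the <code>KeyError: 'launch'</code> symptom and the fact that deleting the folder helps:</p>

```python
import importlib.util

def module_origin(name):
    # Return the file Python would import for `name`, or None if not found.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# After sourcing the workspace, inspect where "launch" comes from:
# print(module_origin("launch"))
```

If it does resolve into your own package, excluding `launch` from `find_packages()` (or removing `launch/__init__.py`) keeps the launch files installable as data without installing them as an importable Python package.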
|
<python><ros><ros2>
|
2024-10-21 09:27:43
| 1
| 317
|
bgyooPtr
|
79,109,370
| 12,308,825
|
How to get the response.headers along with AsyncIterable content from async httpx.stream to a FastAPI StreamingResponse?
|
<p>I am trying to use httpx in a FastAPI endpoint to download files from a server and return them as a <code>StreamingResponse</code>.
For some processing, I need to get the header information along with the data. I want to stream the file data so I came up with this attempt, boiled down to a MRE:</p>
<pre class="lang-py prettyprint-override"><code>from typing import AsyncIterable

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

class FileStream:
    def __init__(self, headers: dict[str, str], stream: AsyncIterable[bytes]):
        self.headers = headers
        self.stream = stream

async def get_file_stream(url) -> FileStream:
    async with httpx.AsyncClient() as client:
        async with client.stream("GET", url) as response:
            async def chunk_generator() -> AsyncIterable[bytes]:
                async for chunk in response.aiter_bytes():
                    yield chunk
            return FileStream(response.headers, chunk_generator())

@app.get("/download")
async def download_file():
    file_stream = await get_file_stream(url=some_url)
    headers = {}
    media_type = "application/octet-stream"
    # some code setting headers and media_type based on file_stream.headers
    return StreamingResponse(file_stream.stream, media_type=media_type, headers=headers)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000, log_level="debug")
</code></pre>
<p>This leads to the Error <code>httpx.StreamClosed: Attempted to read or stream content, but the stream has been closed.</code>. To my understanding, this is because of the scoping of the context managers in <code>get_file_stream</code>, so I tried to solve it with a wrapping context manager:</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import asynccontextmanager
from typing import AsyncIterable

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

class FileStream:
    def __init__(self, headers: dict[str, str], stream: AsyncIterable[bytes]):
        self.headers = headers
        self.stream = stream

@asynccontextmanager
async def get_file_stream(url) -> FileStream:
    async with httpx.AsyncClient() as client:
        async with client.stream("GET", url) as response:
            async def chunk_generator() -> AsyncIterable[bytes]:
                async for chunk in response.aiter_bytes():
                    yield chunk
            yield FileStream(response.headers, chunk_generator())

@app.get("/download")
async def download_file():
    async with get_file_stream(url=some_url) as file_stream:
        headers = {}
        media_type = "application/octet-stream"
        # some code setting headers and media_type based on file_stream.headers
        return StreamingResponse(file_stream.stream, media_type=media_type, headers=headers)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000, log_level="debug")
</code></pre>
<p>However, this leads to the same issue. It seems that I am missing some points here. Any hints on how to solve this?</p>
<p><strong>Update</strong>
The problem seems to be with how FastAPI handles <code>StreamingResponses</code>. It indeed closes the resources before starting the response streaming.
I am trying to find a solid workaround.</p>
<p>Still the same with fastapi 0.116.1.</p>
|
<python><python-3.x><fastapi><httpx>
|
2024-10-21 09:14:17
| 1
| 362
|
schneebuzz
|
79,109,089
| 2,307,441
|
Python selenium send_keys not working when call called in function and loop
|
<p>I am new to Python Selenium. I am trying to upload CSV files to a web portal. I have created the following function(s)/code to upload the file to the portal.</p>
<p>webpage has the following code:</p>
<pre class="lang-html prettyprint-override"><code>
<span class="nobr">
<input type="file" name="csvFile" size="60" value="">
</span>
</code></pre>
<pre class="lang-py prettyprint-override"><code>def logging_in():
    # my login code bla bla bla

def upload_file(urlname, csvfile):
    driver.get(urlname)
    # Below line not able to format correctly.
    WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, 'iframe')))
    if check_exists_by_tag(driver, 'iframe'):
        iframe = driver.find_element(By.TAG_NAME, 'iframe')
        driver.switch_to.frame(iframe)
        WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.NAME, "csvFile")))
        if check_exists_by_Name(driver, 'csvFile'):
            file_input = driver.find_element(By.NAME, "csvFile")
            file_input.send_keys(csvfile)
            upload_button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div1/div4/div5/span/a")))
            upload_button.click()
        driver.switch_to.default_content()

if __name__ == '__main__':
    logging_in()
    time.sleep(2)
    csv_files_list = [file for file in os.listdir(inputpath) if file.endswith('.csv')]
    for csv_file in csv_files_list:
        upload_file(url, inputpath + csv_file)
</code></pre>
<p>File upload works fine the first time. During the second loop iteration the upload fails at the <code>send_keys</code> step, and I am not sure what I am missing.</p>
<p>The error message points at <code>upload_button = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH,"/html/body/div1/div4/div5/span/a")))</code></p>
<p>because <code>file_input.send_keys(csvfile)</code> did not put the file into the path field, which I can see in the browser.</p>
<p>During the first execution of the loop I can see the file name in the textbox.
<a href="https://i.sstatic.net/T3TqEcJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T3TqEcJj.png" alt="enter image description here" /></a></p>
<p>During the second execution of the loop I can't see the file name in the textbox.</p>
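<p>A generic retry helper (a sketch, not Selenium API) illustrates the usual shape of the fix: re-run an action that re-locates the element on every attempt, since a cached <code>WebElement</code> can go stale after the page reloads between loop iterations:</p>

```python
def retry(action, attempts=3, exceptions=(Exception,)):
    # Call `action` up to `attempts` times, re-raising the last failure.
    last = None
    for _ in range(attempts):
        try:
            return action()
        except exceptions as exc:
            last = exc
    raise last
```

With Selenium this might be used as `retry(lambda: driver.find_element(By.NAME, "csvFile").send_keys(csvfile))` — the lambda re-finds the input each attempt instead of reusing `file_input` from a previous page load (illustrative usage, names from the question's snippet).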
|
<python><selenium-webdriver>
|
2024-10-21 07:59:34
| 1
| 1,075
|
Roshan
|
79,108,577
| 2,987,744
|
Chat Template Error when using HuggingFace and LangGraph
|
<p>I'm following the <a href="https://langchain-ai.github.io/langgraph/tutorials/introduction/#requirements" rel="nofollow noreferrer"><code>LangGraph</code> and <code>LangChain</code> tutorial</a> on making a chatbot, and got a successful output on the first step. After binding tools and finishing the second step, this error occurs:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "[CENSORED]\beegmodel.py", line 111, in <module>
stream_graph_updates('Who is Elon Musk?')
File "[CENSORED]\beegmodel.py", line 106, in stream_graph_updates
for event in graph.stream({"messages": [("user", user_input)]}):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langgraph\pregel\__init__.py", line 1315, in stream
for _ in runner.tick(
^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langgraph\pregel\runner.py", line 56, in tick
run_with_retry(t, retry_policy)
File "[CENSORED]\Lib\site-packages\langgraph\pregel\retry.py", line 29, in run_with_retry
task.proc.invoke(task.input, config)
File "[CENSORED]\Lib\site-packages\langgraph\utils\runnable.py", line 410, in invoke
input = context.run(step.invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langgraph\utils\runnable.py", line 184, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\beegmodel.py", line 87, in chatbot
return {"messages": [llm_with_tools.invoke(state["messages"])]}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langchain_core\runnables\base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langchain_core\language_models\chat_models.py", line 286, in invoke
self.generate_prompt(
File "[CENSORED]\Lib\site-packages\langchain_core\language_models\chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langchain_core\language_models\chat_models.py", line 643, in generate
raise e
File "[CENSORED]\Lib\site-packages\langchain_core\language_models\chat_models.py", line 633, in generate
self._generate_with_cache(
File "[CENSORED]\Lib\site-packages\langchain_core\language_models\chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py", line 373, in _generate
llm_input = self._to_chat_prompt(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py", line 410, in _to_chat_prompt
return self.tokenizer.apply_chat_template(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\transformers\tokenization_utils_base.py", line 1801, in apply_chat_template
chat_template = self.get_chat_template(chat_template, tools)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[CENSORED]\Lib\site-packages\transformers\tokenization_utils_base.py", line 1962, in get_chat_template
raise ValueError(
ValueError: Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! For information about writing templates and setting the tokenizer.chat_template attribute, please see the documentation at https://huggingface.co/docs/transformers/main/en/chat_templating
</code></pre>
<p>However the model's <code>tokenizer_config.json</code> does have a <code>chat_template</code> field. Here is my script (albeit not formatted well, but again this is just to get the basics down):</p>
<pre><code>import torch
from transformers import (
    AutoConfig, MistralForCausalLM, AutoTokenizer, pipeline
)
from huggingface_hub import snapshot_download
from accelerate import (
    init_empty_weights,
    load_checkpoint_and_dispatch,
    infer_auto_device_map
)
from pprint import pprint
from collections import Counter
import time

# https://github.com/huggingface/transformers/issues/31544#issuecomment-2188510876
torch.cuda.empty_cache()
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

checkpoint = 'mistralai/Mistral-7B-Instruct-v0.2'
weights_location = snapshot_download(
    repo_id=checkpoint,
    allow_patterns=[
        '*.safetensors',
        '*.json'
    ],
    ignore_patterns=[
        '*pytorch*',
        'consolidated.safetensors'
    ]
)
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
    model = MistralForCausalLM._from_config(config, torch_dtype=torch.bfloat16)

# Much better than 'device_map="auto"'
device_map = infer_auto_device_map(model, dtype=torch.bfloat16)
model = load_checkpoint_and_dispatch(
    model, checkpoint=weights_location, device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pipe = pipeline(
    'text-generation',
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

from langchain_huggingface import HuggingFacePipeline, ChatHuggingFace
llm = HuggingFacePipeline(pipeline=pipe)

from langchain_community.tools import WikipediaQueryRun, BaseTool, Tool
from langchain_community.utilities import WikipediaAPIWrapper
from typing import Annotated
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=2500))
tools = [wikipedia_tool]
llm = ChatHuggingFace(llm=llm, verbose=True)
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value)

stream_graph_updates('Who is Elon Musk?')
</code></pre>
<p>Why would this work on the first tutorial step but break on the second?</p>
|
<python><huggingface-transformers><huggingface-tokenizers><py-langchain><langgraph>
|
2024-10-21 04:41:06
| 0
| 1,543
|
T145
|
79,108,443
| 6,703,592
|
flatten dictionary with dataframe value to a dataframe
|
<p>This encoding process will generate a mapping between each categorical value and its corresponding numeric value:</p>
<pre><code>import category_encoders as ce
import pandas as pd

cols_a = ['group1', 'group2']
dfa = pd.DataFrame([['A1', 'A2', 1], ['B1', 'B2', 4], ['A1', 'C2', 3], ['B1', 'B2', 5]], columns=['group1', 'group2', 'label'])
enc = ce.TargetEncoder(cols=cols_a)
enc.fit(dfa[cols_a], dfa['label'])
enc.mapping
</code></pre>
<p><a href="https://i.sstatic.net/foxL7I6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/foxL7I6t.png" alt="enter image description here" /></a></p>
<p>Maybe you can ignore the encoding process and just remember the output mapping.</p>
<p>How to flatten this mapping into the expected dataframe below?</p>
<p><a href="https://i.sstatic.net/v8lZsVzo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8lZsVzo.png" alt="enter image description here" /></a></p>
<p>Follow-up: I eventually want to replace the <strong>'cat_val'</strong> with its original categorical values from the mapping <code>enc.ordinal_encoder.mapping</code>. Is there any easy way to achieve this?</p>
<p>My solution is to group by <strong>'group'</strong> -> find the corresponding dictionary -> replace it with the value from the dictionary.</p>
<p><a href="https://i.sstatic.net/TZs8D3Jj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TZs8D3Jj.png" alt="enter image description here" /></a></p>
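<p>For reference, a flattening sketch on a hypothetical mapping (TargetEncoder's <code>enc.mapping</code> is a dict of <code>{column name: Series indexed by ordinal code}</code>; the values below are made up for illustration):</p>

```python
import pandas as pd

# Hypothetical stand-in for enc.mapping
mapping = {
    'group1': pd.Series({1: 2.0, 2: 4.5, -1: 3.25, -2: 3.25}),
    'group2': pd.Series({1: 1.0, 2: 4.5, 3: 3.0, -1: 3.25, -2: 3.25}),
}

# pd.concat over the dict builds a (group, cat_val) MultiIndex in one step,
# which reset_index() then flattens into ordinary columns.
flat = (pd.concat(mapping, names=['group', 'cat_val'])
          .rename('encoded')
          .reset_index())
```

For the follow-up, the same `concat`-and-`reset_index` trick applied to `enc.ordinal_encoder.mapping` would give a (group, original value, code) frame to merge back on `('group', 'cat_val')`.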
|
<python><pandas><dataframe><dictionary><mapping>
|
2024-10-21 02:59:08
| 1
| 1,136
|
user6703592
|
79,108,381
| 594,900
|
DBSCAN clustering geolocations beyond the epsilon value
|
<p>I am trying to analyse some job latitude and longitude data. The nature of the jobs means they tend to happen at similar (although not identical) latitude/longitude locations</p>
<p>In order to reduce the amount of data points to display and analyse I want to cluster the jobs in similar geographical region together. To do this I was using DBSCAN to cluster the jobs and use a job close to the centre of the cluster to act as the representative point.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd, numpy as np
from sklearn.cluster import DBSCAN
from geopy.distance import great_circle
from shapely.geometry import MultiPoint

def cluster_with_dbscan(jobs, radius_km, min_samples):
    # ignore jobs that are missing Lat Long coords for now
    jobs_cluster = jobs[['Job ID', 'Lat', 'Long']].dropna()

    # run dbscan
    kms_per_radian = 6371.0088
    epsilon = radius_km / kms_per_radian
    coords = jobs_cluster[['Lat', 'Long']].values
    db = DBSCAN(eps=epsilon, min_samples=min_samples, algorithm='ball_tree', metric='haversine').fit(np.radians(coords))

    # appending cluster data onto original jobs, preserving jobs that never had location data
    jobs_cluster['Cluster ID'] = db.labels_
    jobs_with_cluster = pd.merge(jobs, jobs_cluster[['Job ID', 'Cluster ID']], how='left', on=['Job ID'])

    # capture cluster data, including centroids
    num_clusters = len(set(db.labels_))
    clusters = pd.Series([coords[db.labels_ == n] for n in range(num_clusters)])

    def get_centermost_point(cluster):
        if len(cluster) == 0:
            return tuple([None, None])
        centroid = (MultiPoint(cluster).centroid.x, MultiPoint(cluster).centroid.y)
        centermost_point = min(cluster, key=lambda point: great_circle(point, centroid).m)
        return tuple(centermost_point)

    centermost_points = clusters.map(get_centermost_point)
    lats, lons = zip(*centermost_points)
    clusters = pd.DataFrame({'Lat': lats, 'Long': lons}).reset_index().rename(columns={'index': 'Cluster ID'})
    return jobs_with_cluster, clusters
</code></pre>
<p>run with</p>
<pre class="lang-py prettyprint-override"><code>radius_km = 2
min_samples = 1  # want to keep outliers
jobs, clusters = cluster_with_dbscan(jobs, radius_km, min_samples)
</code></pre>
<p>When running I do get clustered data, but the clusters contain jobs that are far more than 2km apart (some clusters have jobs spanning 100s of kilometres). From my understanding of DBSCAN they should only be at most 2km from the core point</p>
<p>Is my understanding of DBSCAN wrong? can clusters cover areas greater than equivalent epsilon value? If so is there a more appropriate cluster algorithm?</p>
<p>Or is my implementation of DBSCAN flawed in some way?</p>
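<p>A minimal demonstration of the chaining behaviour in question (a sketch on synthetic 1-D data, not the job data): DBSCAN links any points that are density-reachable through neighbours, so a chain of points each within <code>eps</code> of the next forms a single cluster far wider than <code>eps</code> itself.</p>

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Points spaced 1.5 apart along a line, eps=2: every point is within eps of
# its neighbour, so the whole ~150-unit chain collapses into ONE cluster.
# This is documented DBSCAN behaviour, not an implementation bug.
points = np.arange(0, 150, 1.5).reshape(-1, 1)
labels = DBSCAN(eps=2, min_samples=1).fit_predict(points)
```

So `eps` bounds the distance between *neighbouring* members, not the diameter of a cluster; if a hard radius per cluster is required, a fixed-radius approach (e.g. hierarchical clustering cut at a distance threshold) is the usual alternative.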
|
<python><dbscan>
|
2024-10-21 02:21:13
| 1
| 3,345
|
Alex
|
79,108,264
| 7,589,535
|
Linked List merge sort implementation attempt with Python runs into an infinite loop
|
<h2>Context</h2>
<p>I was implementing a linked-list sort solution with Python (over on leetcode.com).
The solution is a bottom-up merge sort approach (divide and conquer).
The code below is incomplete; it's missing the implementation of the <code>merge()</code> method, which will take a list of a given size and merge it, given that the two halves of the list were already sorted, i.e. take one half and insert it somewhere in the other half.</p>
<p>I ran into my issue while testing the other parts of the implementation.</p>
<p>The two relevant methods are <code>sortList()</code> and <code>merge_list_of_size()</code>.
The idea is that <code>sortList()</code> will start by considering all the sublists of size 2, and sorting them by calling <code>merge_list_of_size()</code> with <code>list_size</code> = <code>2</code>.
The <code>sortList()</code> will continue by doubling the <code>list_size</code>, <code>list_size</code> = <code>4</code>, and again use <code>merge_list_of_size()</code> to sort all lists of that size.
The list size will increase again and again until the <code>list_size</code> is too large to continue.</p>
<pre class="lang-py prettyprint-override"><code># Definition for singly-linked list.
# class ListNode:
# def __init__(self, val=0, next=None):
# self.val = val
# self.next = next
class Solution:
def sortList(self, head: Optional[ListNode]) -> Optional[ListNode]:
N = 1
while N < 100:
N *= 2
print(N)
has_next_merge = self.merge_list_of_size(head, N)
if not has_next_merge:
print('has_next_merge', has_next_merge)
break
return head
def merge_list_of_size(self, head, list_size):
while head:
print('--', head)
next_merge = self.merge(head, list_size)
if not next_merge:
return False
i = list_size
while head and i > 0:
head = head.next
i -= 1
return True
def merge(self, head, list_size):
mid_i = list_size // 2
print(mid_i)
mid = head
while mid.next and mid_i > 1:
mid = mid.next
mid_i -= 1
print('----', mid)
if mid.next is None:
return False
else:
# merge
print('merging')
return True
</code></pre>
<h2>The infinite loop issue</h2>
<p>I ran the code above with this linked list as input: <code>[4,2,1,3]</code>.
<strong>Notice the while loop in sortList()</strong>.
The loop breaks if either:</p>
<ol>
<li><code>merge_list_of_size()</code> returns a <code>False</code></li>
<li>N is >= 100</li>
</ol>
<p>This is the log output:</p>
<pre><code>2
-- ListNode{val: 4, next: ListNode{val: 2, next: ListNode{val: 1, next: ListNode{val: 3, next: None}}}}
1
---- ListNode{val: 4, next: ListNode{val: 2, next: ListNode{val: 1, next: ListNode{val: 3, next: None}}}}
merging
-- ListNode{val: 1, next: ListNode{val: 3, next: None}}
1
---- ListNode{val: 1, next: ListNode{val: 3, next: None}}
merging
4
-- ListNode{val: 4, next: ListNode{val: 2, next: ListNode{val: 1, next: ListNode{val: 3, next: None}}}}
2
---- ListNode{val: 2, next: ListNode{val: 1, next: ListNode{val: 3, next: None}}}
merging
8
-- ListNode{val: 4, next: ListNode{val: 2, next: ListNode{val: 1, next: ListNode{val: 3, next: None}}}}
4
---- ListNode{val: 3, next: None}
has_next_merge False
</code></pre>
<ol>
<li>There is no infinite loop</li>
<li>The program exits as expected when the <code>merge_list_of_size()</code> returns a <code>False</code> on <code>N</code> = <code>8</code>.</li>
</ol>
<p>If I now change the while condition from <code>while N < 100:</code> to <code>while True:</code>, then, for some reason unknown to me, the loop never breaks. This is the log output I get:</p>
<pre><code>2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
1048576
2097152
4194304
8388608
16777216
33554432
67108864
134217728
268435456
536870912
1073741824
2147483648
4294967296
8589934592
17179869184
34359738368
68719476736
137438953472
274877906944
549755813888
1099511627776
2199023255552
.
.
.
</code></pre>
<p>Not sure what I'm missing here. I would appreciate any hints. Thanks.
Here's the <a href="https://leetcode.com/problems/sort-list" rel="nofollow noreferrer">link to the leetcode problem</a>.</p>
|
<python><infinite-loop>
|
2024-10-20 23:51:21
| 0
| 389
|
oussema
|
79,108,227
| 5,228,348
|
import that works from main module or directly
|
<p><strong>TL;DR</strong> How do I make an import resolvable regardless of whether a file is run directly or is itself imported by a parent module?</p>
<p>This is not the same as the million questions about importing modules from parent packages. I've had a hard time finding an exactly equivalent question, so if I've missed something, please point me in the right direction. On the other hand, maybe it means that what I'm trying to do doesn't make sense.</p>
<p>I have a VSCode project based in <code>pkg</code>:</p>
<pre><code>pkg\
main.py
__init__.py
subpkg\
foo.py
bar.py
__init__.py
</code></pre>
<p>Suppose <code>foo.py</code> contains something like this:</p>
<pre><code>from bar import baz
def something_intended_to_be_called_from_main():
baz()
def quick_and_dirty_tests_while_programming():
pass
if __name__ == '__main__':
quick_and_dirty_tests_while_programming()
</code></pre>
<p>And suppose that <code>main.py</code> just contains <code>import subpkg.foo</code>.</p>
<p>Here's the problem. If I run <code>foo.py</code> directly, the import is resolved. But if I run <code>main.py</code>, the import is not resolved because <code>bar</code> is not in <code>pkg</code>. I can address this by changing the import line in <code>foo.py</code> to <code>from subpkg.bar import baz</code>, but then it won't be resolved when running it directly.</p>
<p>Is there a way to do what I'm trying to do? I mean, I guess I could put in a <code>try ... except</code> block with both resolutions, but that seems hackish (even more than including quick and dirty tests in submodules).</p>
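<p>For concreteness, here is a runnable sketch of the try/except fallback I mentioned, built in a temp directory (the file names just mirror my layout). It also shows why each import style resolves in only one mode: running the file directly puts <code>subpkg</code> itself on <code>sys.path</code>, while importing from <code>main</code> puts the package root there.</p>

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Recreate the pkg/subpkg layout from the question in a temp dir.
root = tempfile.mkdtemp()
sub = os.path.join(root, "subpkg")
os.makedirs(sub)
for init in (os.path.join(root, "__init__.py"), os.path.join(sub, "__init__.py")):
    open(init, "w").close()
with open(os.path.join(sub, "bar.py"), "w") as f:
    f.write("def baz():\n    return 'baz'\n")
with open(os.path.join(sub, "foo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        try:
            from bar import baz          # resolves when foo.py is run directly
        except ImportError:
            from subpkg.bar import baz   # resolves when imported via main
        print(baz())
    """))

# Run directly: sys.path[0] is subpkg/, so `from bar import baz` succeeds.
direct = subprocess.run([sys.executable, os.path.join(sub, "foo.py")],
                        capture_output=True, text=True)
# Import from the package root, as main.py would: only subpkg.bar resolves.
via_main = subprocess.run([sys.executable, "-c", "import subpkg.foo"],
                          capture_output=True, text=True, cwd=root)
print(direct.stdout.strip(), via_main.stdout.strip())
```

<p>It confirms the try/except workaround does function in both modes, even if it still feels hackish.</p>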
|
<python><package>
|
2024-10-20 23:19:58
| 1
| 333
|
Luke Sawczak
|
79,107,898
| 1,761,907
|
Why can't subprocess.Open find this executable (phantomjs)
|
<p>[update]</p>
<p>I think the error message here is actually misleading. It turns out if I try to run the command phantomjs, I get an error:</p>
<p><code>-bash: /usr/bin/phantomjs: cannot execute: required file not found</code></p>
<p>I think that is a problem with the executable, and not that Popen can't find it.</p>
<p>I am trying to use <code>selenium.webdriver.PhantomJS</code> on a Raspberry Pi 5. This simple script</p>
<pre><code>from selenium import webdriver
wd = webdriver.PhantomJS()
</code></pre>
<p>fails with this message:</p>
<pre><code>Traceback (most recent call last):
File "/home/jkitchin/.venv/lib/python3.11/site-packages/selenium/webdriver/common/service.py", line 72, in start
self.process = subprocess.Popen(cmd, env=self.env,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1901, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'phantomjs'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jkitchin/bots/bibtex/pgs.py", line 3, in <module>
wd = webdriver.PhantomJS()
^^^^^^^^^^^^^^^^^^^^^
File "/home/jkitchin/.venv/lib/python3.11/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 52, in __init__
self.service.start()
File "/home/jkitchin/.venv/lib/python3.11/site-packages/selenium/webdriver/common/service.py", line 79, in start
raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
</code></pre>
<p>However, phantomjs is definitely in my path:</p>
<pre><code>$ which phantomjs
/usr/bin/phantomjs
</code></pre>
<p>With these permissions:</p>
<pre><code>-rwxr-xr-x 1 root root 50179436 Oct 20 13:11 /usr/bin/phantomjs
</code></pre>
<p>I can run the command in a shell just fine.</p>
<p>This simpler script also fails:</p>
<pre><code>import subprocess as sp
sp.run('phantomjs', capture_output=True)
</code></pre>
<p>Even using the full path leads to</p>
<pre><code>Traceback (most recent call last):
File "/home/jkitchin/bots/bibtex/pgs.py", line 5, in <module>
sp.run('/usr/bin/phantomjs', capture_output=True)
File "/usr/lib/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1901, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/phantomjs'
</code></pre>
<p>Any ideas on what is wrong here?</p>
<p>[update]
This python program:</p>
<pre><code>import subprocess as sp
p = sp.run(['ls', '/usr/bin'], capture_output=True)
print('phantomjs' in p.stdout.decode('ascii'))
sp.run(['which', 'phantomjs'])
print()
sp.run(['file', '/usr/bin/phantomjs'])
print()
sp.run(['ls', '-al', '/usr/bin/phantomjs'])
print()
sp.run(['/usr/bin/phantomjs'])
</code></pre>
<p>Gives this output:</p>
<pre><code>True
/usr/bin/phantomjs
/usr/bin/phantomjs: ELF 32-bit LSB executable, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=93e423f5359137d6cbb97c5e11aa34945e86e004, not stripped
-rwxr-xr-x 1 root root 50179436 Oct 20 13:11 /usr/bin/phantomjs
Traceback (most recent call last):
File "/home/jkitchin/bots/bibtex/pgs.py", line 14, in <module>
sp.run(['/usr/bin/phantomjs'])
File "/usr/lib/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.11/subprocess.py", line 1901, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/phantomjs'
</code></pre>
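<p>Prompted by the <code>file</code> output above, my current suspicion is that the <code>FileNotFoundError</code> refers to the ELF interpreter the binary requests (the build is 32-bit armhf) rather than to the binary itself, which would explain the misleading message. A quick check, using the interpreter path reported by <code>file</code>:</p>

```python
import os

binary = "/usr/bin/phantomjs"
interp = "/lib/ld-linux-armhf.so.3"  # loader reported by `file` above

# If the binary exists but its requested interpreter does not, exec() fails
# with ENOENT -- the same "No such file or directory" seen in the traceback.
print(binary, "exists:", os.path.exists(binary))
print(interp, "exists:", os.path.exists(interp))
```

<p>On my Pi 5 (64-bit OS) I would expect the second line to print <code>False</code>, which would mean the fix is installing the 32-bit loader/libraries or an arm64 build, not anything Python-side.</p>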
|
<python><selenium-webdriver><raspberry-pi><phantomjs>
|
2024-10-20 19:30:34
| 0
| 2,453
|
John Kitchin
|
79,107,780
| 3,291,077
|
Pip adding a path entry that works for python but not jupyter
|
<p>I have a library, I am installing it for development purposes using the command</p>
<pre><code>$ pip install -e .
</code></pre>
<p>The library structure is this:</p>
<pre><code>.
└── package-name/
    ├── package_name/
    │   ├── __init__.py
    │   ├── module1.py
    │   └── module2.py
    ├── setup.py
    ├── requirements.txt
    └── README.md
</code></pre>
<p>The pip command installs the package and appends this entry to the path:</p>
<pre><code>$ python -c "import sys; print(sys.path)"
[..., '/home/user_name/package-name/package_name']
</code></pre>
<p>When I import the package in my environment's Python, it imports fine. When I try to import it in Jupyter, I get "ModuleNotFoundError: No module named 'package_name'".</p>
<p>When I reinstall the library from the Jupyter kernel, I get the same result.</p>
<p>However when I run:</p>
<pre><code>import sys
sys.path.append('/home/user_name/package-name')
</code></pre>
<p>I am able to import the package just fine.</p>
<p>So, I guess my question is, why is this happening?</p>
<ul>
<li>Why can Python find the package at that path, but Jupyter cannot?</li>
<li>Can I tell Pip to add a different entry to the path?</li>
<li>How can I ensure that my users don't encounter this problem when installing this early-stage library?</li>
<li>Would changing the name of the top-level directory to match the src dir name have an impact?</li>
</ul>
<p>Any ideas would be greatly appreciated.</p>
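<p>For reference, this is the snippet I've been running both in the environment's Python and in a notebook cell to compare them; if the two <code>sys.executable</code> values differ, the kernel is simply not the interpreter that <code>pip install -e .</code> ran under, which would explain everything above:</p>

```python
import sys

# Run in both the shell interpreter and a Jupyter cell, then compare.
print("executable:", sys.executable)  # which interpreter is running
print("prefix:    ", sys.prefix)      # root of its (virtual) environment
print("package entries on sys.path:",
      [p for p in sys.path if "package" in p.lower()])
```
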
|
<python><jupyter-notebook><pip>
|
2024-10-20 18:30:23
| 3
| 4,465
|
rgalbo
|
79,107,659
| 11,062,613
|
How to pass aggregation functions as function argument in Polars?
|
<p>How can we pass aggregation functions as arguments to a custom aggregation function in Polars?
You should be able to pass a single function for all columns, or a dictionary if you have different aggregations per column.</p>
<pre><code>import polars as pl
# Sample DataFrame
df = pl.DataFrame({
"category": ["A", "A", "B", "B", "B"],
"value": [1, 2, 3, 4, 5]
})
def agg_with_sum(df: pl.DataFrame | pl.LazyFrame) -> pl.DataFrame | pl.LazyFrame:
return df.group_by("category").agg(pl.col("*").sum())
# Custom function to perform aggregation
def agg_with_expr(df: pl.DataFrame | pl.LazyFrame,
agg_expr: pl.Expr | dict[str, pl.Expr]) -> pl.DataFrame | pl.LazyFrame:
if isinstance(agg_expr, dict):
return df.group_by("category").agg([pl.col(col).aggexpr() for col, aggexpr in agg_expr.items()])
return df.group_by("category").agg(pl.col("*").agg_expr())
# Trying to pass a Polars expression for sum aggregation
print(agg_with_sum(df))
# ┌──────────┬───────┐
# │ category ┆ value │
# │ ---      ┆ ---   │
# │ str      ┆ i64   │
# ╞══════════╪═══════╡
# │ A        ┆ 3     │
# │ B        ┆ 12    │
# └──────────┴───────┘
# Trying to pass a custom Polars expression
print(agg_with_expr(df, pl.sum))
# AttributeError: 'Expr' object has no attribute 'agg_expr'
print(agg_with_expr(df, {'value': pl.sum}))
# AttributeError: 'Expr' object has no attribute 'aggexpr'
</code></pre>
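<p>The underlying Python pattern I'm after works fine outside Polars: pass the aggregation as a callable and call it at the use site, rather than looking it up as an attribute (which is exactly what the <code>AttributeError</code>s above complain about):</p>

```python
def aggregate(values, agg_fn):
    # agg_fn is any callable applied to the sequence at the call site
    return agg_fn(values)

print(aggregate([1, 2, 3], sum))  # 6
print(aggregate([1, 2, 3], max))  # 3
```

<p>So presumably the fix is along those lines — calling whatever was passed in instead of writing <code>.agg_expr()</code> — but I haven't found the idiomatic way to express that with Polars expressions.</p>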
|
<python><python-polars>
|
2024-10-20 17:31:07
| 1
| 423
|
Olibarer
|
79,107,524
| 398,348
|
What is this? 'j1 = %d' % j1. Why isn't 'j1 = %d' , j1 enough?
|
<p>Perhaps because I am coming to it from Java, it seems strange. In Java, <code>print("%d", aNumber)</code> is sufficient to fill the <code>%d</code> with <code>aNumber</code>. Why isn't <code>'j1 = %d' , j1</code> enough?</p>
<p>I am doing the Coursera algorithms course and the test case for the assignment is failing in the Jupyter notebook. Unfortunately, there is no help in the course forums from the staff.
I don't understand what in the world this is - it looks like a string modulo an integer??:</p>
<pre><code>'j1 = %d' % j1
</code></pre>
<pre class="lang-none prettyprint-override"><code>TypeError Traceback (most recent call last)
<ipython-input-10-87cacde37648> in <module>
1 # BEGIN TEST CASES
2 j1 = findCrossoverIndex([0, 1, 2, 3, 4, 5, 6, 7], [-2, 0, 4, 5, 6, 7, 8, 9])
----> 3 print('j1 = %d' % j1)
4 assert j1 == 1, "Test Case # 1 Failed"
5
TypeError: %d format: a number is required, not NoneType
</code></pre>
<p><a href="https://i.sstatic.net/8kcJ1eTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8kcJ1eTK.png" alt="screenshot" /></a></p>
|
<python>
|
2024-10-20 16:29:50
| 0
| 3,795
|
likejudo
|
79,107,387
| 12,394,386
|
Page format changes during RTF to PDF conversion using pypandoc
|
<p>I'm using pypandoc to convert an RTF file to a PDF, but I'm running into an issue where the page structure and formatting are altered during the conversion. It looks like the output PDF is being generated using LaTeX, and this changes the layout compared to the original RTF file.</p>
<p>Here's the code I'm using:</p>
<pre><code>import pypandoc
def rtf_to_pdf(input_file, output_file):
"""
Convert an RTF file to PDF using pypandoc.
Args:
input_file (str): Path to the input RTF file.
output_file (str): Path where the output PDF will be saved.
"""
try:
output = pypandoc.convert_file(input_file, 'pdf', outputfile=output_file)
print(f"Conversion successful! PDF saved as {output_file}")
return output
except Exception as e:
print(f"An error occurred: {e}")
# Example usage
rtf_to_pdf('input_file.rtf', 'output_file.pdf')
</code></pre>
<p>The issue is that the formatting (e.g., margins, alignment, spacing) does not match the original RTF document after conversion. I just want to retain the same format and layout as the RTF file without any changes.</p>
<p>Question:</p>
<p>Is there a way to use pypandoc or another library to ensure the formatting and layout of the original RTF file is preserved in the PDF output?
Are there any alternative approaches or libraries I can use for this kind of conversion where the layout stays exactly the same?</p>
<p>Any suggestions or insights would be much appreciated!</p>
<p>Here is a simple example of an RTF file I'm working with (sample.rtf):</p>
<p>It is an example test rtf-file to RTF2XML bean for testing</p>
<p><a href="https://jeroen.github.io/files/sample.rtf" rel="nofollow noreferrer">https://jeroen.github.io/files/sample.rtf</a></p>
<p>and here is a screenshot of the output:
<a href="https://i.sstatic.net/3KgRxuAl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KgRxuAl.png" alt="enter image description here" /></a></p>
<p>I'm using Microsoft Word to visualise the RTF document, and I'm working on macOS, Python version 3.11.7, pandoc version 3.4</p>
|
<python><pdf><rtf><pdf-conversion><pypandoc>
|
2024-10-20 15:10:59
| 1
| 323
|
youta
|
79,107,247
| 9,751,001
|
How can I stop pymupdf converting 'ff' to a different character such as @ or I?
|
<p>I'm reading in text from a bunch of PDFs using the following code:</p>
<pre><code>import fitz
import numpy as np
import pandas as pd
# open the document
doc = fitz.open(filename_path)
# get the text from each page in the document
for idx, page in enumerate(doc):
page = doc.load_page(idx)
page_text = page.get_text("text")
doc_text = doc_text + page_text
# store the document text in a "text" column in my dataframe
doc_df["text"] = doc_text
</code></pre>
<p>It mostly works fine, but I've noticed that words containing 'ff', such as 'stuff', are not read in correctly, e.g. 'stu@' or 'stuI'. From a brief search it seems this is something to do with 'ligatures', but I don't know what they are or how to resolve it.</p>
<p>Example text similar to what I read in from my PDF:</p>
<p>"I found some stuff in the bag"</p>
<p>Text after pymupdf has read it in:</p>
<p>"I found some stuI in the bag"</p>
<p>It doesn't seem to be a consistent conversion either: once it converted 'ff' to '@' instead (in a different word, but the same phrase as above is used below for illustration):</p>
<p>"I found some stu@ in the bag"</p>
<p>What should the corrected code look like so I can stop this happening?</p>
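<p>One thing I noticed while digging: the 'ff' ligature is a single Unicode codepoint (U+FB00), and NFKC normalization expands it back to two letters. As post-processing this only helps when the extracted text still carries the real ligature codepoint rather than a font-remapped character like '@', so I'm unsure it's the right fix:</p>

```python
import unicodedata

s = "I found some stu\ufb00 in the bag"  # U+FB00 is the 'ff' ligature
print(unicodedata.normalize("NFKC", s))  # expands the ligature to 'ff'
```
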
|
<python><pdf><text><pymupdf>
|
2024-10-20 14:03:49
| 0
| 631
|
code_to_joy
|
79,107,070
| 5,881,804
|
How to use the StringAdapter in pycasbin
|
<p>pycasbin includes a class <code>StringAdapter</code> in the file <code>adapters/string_adapter.py</code>; however, it doesn't seem to be usable.</p>
<p>Note that it is not an attribute of <code>casbin.persist</code>:</p>
<p><code>dir(casbin.persist)</code></p>
<p><code>['Adapter', 'BatchAdapter', 'FileAdapter', 'FilteredAdapter', 'FilteredFileAdapter', 'UpdateAdapter', '__builtins__', ...]</code></p>
<p>How can it be used?</p>
|
<python><casbin>
|
2024-10-20 12:46:28
| 1
| 732
|
Blindfreddy
|
79,106,960
| 4,265,498
|
Serializing a Complex python Class to JSON
|
<p>In my project I analyze the questions of a given exam. Let's say each exam has 10 questions.</p>
<p>For each question I compute some stuff and save it, using the constructor method of class <code>QuestionData</code> (defined in file <code>question_data.py</code>). Each <code>QuestionData</code> object has a <code>pandas</code> dataframe, some dicts, some float attributes and a <code>numpy</code> array.</p>
<p>Next, the exam analysis is done using class <code>ExamData</code> - which also has some simple attributes, some dicts and a list of all the <code>QuestionData</code> objects.</p>
<p>Eventually, what I need to do is to return the <code>ExamData</code> object as JSON so it can be sent back as a response.</p>
<hr />
<p>I'm working with conda and python 3.12.4. I thought it's a sensible move to start with serializing a single <code>QuestionData</code> object. Tried using the <code>__dict__</code> trick explained <a href="https://www.geeksforgeeks.org/serialize-and-deserialize-complex-json-in-python/" rel="nofollow noreferrer">here</a>, but it failed with</p>
<pre><code>AttributeError: 'weakref.ReferenceType' object has no attribute '__dict__'. Did you mean: '__dir__'?
</code></pre>
<p>Then I tried installing <a href="https://github.com/ijl/orjson" rel="nofollow noreferrer">orjson</a> using <code>conda install orjson</code>, but it refuses to work due to SSL:</p>
<pre><code>>conda install orjson
Collecting package metadata (current_repodata.json): failed
CondaSSLError: OpenSSL appears to be unavailable on this machine. OpenSSL is required to
download and install packages.
Exception: HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/main/win-64/current_repodata.json (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
</code></pre>
<p>The above is after I let it update <code>openssl</code> from <code>3.0.14-h827c3e9_0 --> 3.0.15-h827c3e9_0</code>, which was a requirement for the installation.</p>
<hr />
<ol>
<li>Is there any way of serializing such complex objects without writing my own serializer?</li>
<li>If so, which package is recommended? Am I missing something with <code>orjson</code>?</li>
<li>If a manual serializer is the only solution, how do I write it?</li>
</ol>
<p>I have plenty of experience with various programming languages, with OOP and with JSON but I'm new to python so please tread lightly.</p>
<hr />
<p>code:</p>
<p><code>question_data.py</code>:</p>
<pre><code>import pandas as pd
import numpy as np
import scipy.stats as sps
import string
class QuestionData:
def __init__(self, data, item: str):
options_list = ...
#df for answer analysis
self._options_data = pd.DataFrame(index = options_list)
#percent chosen column
self._options_data["pct"] = ...
#mean ability for chosen answer
self._options_data["theta_mean"] = ...
#ability sd for chosen answer
self._options_data["theta_sd"] = ...
#corr of chosen answer with ability
self._options_data["theta_corr"] = ...
#item delta
self._delta = ...
#biserial of key with theta
self._key_biserial = ...
#initial IRT params. To be done later
self._IRT_params = {"a": 1, "b": 0, "c": 0}
self._IRT_info = {"theta_MI": 0, "info_theta_MI": 0}
#response times vector
self._response_time = data._response_times[str(item)].to_numpy()
</code></pre>
<p><code>exam_data.py</code>:</p>
<pre><code>from question_data import QuestionData
from datetime import datetime
from dateutil import relativedelta
class ExamData:
_quantile_list = [5, 25, 50, 75, 95]
_date_format = '%d/%m/%Y'
def __init__(self, data):
fromDate = datetime.strptime(data._details["fromDate"], self._date_format)
toDate = datetime.strptime(data._details["toDate"], self._date_format)
delta = relativedelta.relativedelta(toDate, fromDate)
self._report_duration ={"years": delta.years, "months": delta.months, "days": delta.days}
self._exposure_num = ...
self._total_times = data._response_times.sum(axis = 1)
self._time_quantiles = dict(zip(self._quantile_list,
[self._total_times.quantile(q/100) for q in self._quantile_list]))
self._q_list = ...
self._q_data = dict(zip(self._q_list,
[QuestionData(data, q) for q in self._q_list]))
</code></pre>
<hr />
<p>Examples of what I want to get-</p>
<p>QuestionData:</p>
<pre><code>{
"_options_data": {"pct": {...}, "theta_mean": {...}, ...}, //<pandas df serialization>
"_delta": 10,
"_IRT_info": {"theta_MI": 0, "info_theta_MI": 0},
"_response_time": [25.5, 41.6, 30.9, ...],
...
}
</code></pre>
<p>ExamData:</p>
<pre><code>{
"_report_duration": {"years": 0, "months": 0, "days": 17},
"_exposure_num": 150,
"_time_quantiles": {"5": 117.89, "25": 167.15, "50": 224.1, ...},
"_total_times": {"id1": 120.3, "id2": 149.9, ...}, //<pandas series serialization>
"_q_data": {"Q1": <QuestionData Object>, "Q2": <QuestionData Object>, ...},
...
}
</code></pre>
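<p>To make question 3 concrete, this is the kind of manual fallback I'm imagining — a sketch only, with <code>Stub</code> standing in for my real classes: <code>json.dumps</code> accepts a <code>default=</code> callable that is invoked for each object it cannot encode natively, so a single function could cover the dataframes, arrays and nested objects.</p>

```python
import json

import numpy as np
import pandas as pd

def to_jsonable(obj):
    # Fallback used by json.dumps for anything it cannot encode natively.
    if isinstance(obj, (pd.DataFrame, pd.Series)):
        return obj.to_dict()
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    if isinstance(obj, (np.integer, np.floating)):
        return obj.item()
    if hasattr(obj, "__dict__"):
        return vars(obj)  # plain objects: recurse into their attributes
    raise TypeError(f"not JSON serializable: {type(obj)!r}")

class Stub:  # stand-in for QuestionData
    def __init__(self):
        self._delta = 10
        self._response_time = np.array([25.5, 41.6])

print(json.dumps(Stub(), default=to_jsonable))
```

<p>Is something along these lines the accepted approach, or would it fall over on weakref attributes the same way the <code>__dict__</code> trick did?</p>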
|
<python><json><serialization><orjson>
|
2024-10-20 11:45:26
| 1
| 767
|
Spätzle
|
79,106,878
| 5,615,873
|
How can I position a taichi GUI window on screen?
|
<p>I have started to work with Python's Taichi package, and I couldn't find out how to position the GUI window.
A simple example:</p>
<pre><code>import taichi as ti
gui = ti.GUI("Test", (640, 480))
while gui.running:
if gui.get_event(ti.GUI.PRESS):
gui.running = False
# ...
gui.show()
</code></pre>
<p>Each time I run this, the window is placed at a different location. I would like a fixed location, mainly for automatic screenshots and animation recordings in which the window needs to be grabbed.
Is there a way to fix the location of the window?</p>
|
<python><taichi>
|
2024-10-20 10:55:10
| 1
| 3,537
|
Apostolos
|
79,106,724
| 2,536,614
|
read mifare classic with pyscard using Dell ControlVault 3 contactless
|
<p>I have Dell Latitude 5430, which has Dell ControlVault 3 contactless smart-card reader with NFC.</p>
<p>Here are its specs:</p>
<p><a href="https://www.dell.com/support/manuals/es-es/latitude-5430-laptop/latitude_5430_ss/contactless-smart-card-reader?guid=guid-38176113-cf9b-4fee-80ec-8766ca14dd9c&lang=en-us" rel="nofollow noreferrer">Dell Broadcom Smartcard Reader</a></p>
<p>In windows device manager, they appear as:</p>
<p><a href="https://i.sstatic.net/6nS8m3BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6nS8m3BM.png" alt="enter image description here" /></a></p>
<p>Using python's pyscard, I can already process a Desfire card:</p>
<pre><code>from smartcard.scard import *
def select_reader():
for reader in readers():
print(reader)
if reader.name == 'Broadcom Corp Contactless SmartCard 0':
return reader
reader = select_reader()
cxnToCard = reader.createConnection()
cxnToCard.connect()
protocol = cxnToCard.component.getProtocol()
pcscprotocolheader = translateprotocolheader(protocol)
SCardTransmit(cxnToCard.component.hcard, pcscprotocolheader, [0x5A, 17, 17, 17])
SCardTransmit(cxnToCard.component.hcard, pcscprotocolheader, [0x71, 0, 0])
...
</code></pre>
<p>Until here, all is fine!</p>
<p>But when I try to process an NXP Mifare Classic 1k card, whatever I attempt, I always see the same result:</p>
<pre><code>SW1: 0x6A SW2: 0x81
</code></pre>
<p>I have seen examples of commands like <code>0xFF 0x82</code> (load key to the reader), <code>0xFF 0x86</code> (authenticate), and others, but I could not get any command to the Mifare to work.</p>
<p>I have also tried to transmit the same commands with:</p>
<pre><code>cxnToCard.transmit
</code></pre>
<p>instead of <code>SCardTransmit</code>; I do not know if there is a difference, but the result is the same.</p>
<p>From specs it is seen that Mifare Classic 1k is supported!</p>
<p>Since I always get <code>0x6A 0x81</code>, I am sure I am missing something fundamental: I am using the wrong protocol, the wrong command set, the wrong command format, or whatever.</p>
<p>I also could not work out whether this reader behaves as a standard PC/SC reader. Despite all my searches, it was impossible to find any documentation for it.</p>
<p>What am I missing?</p>
<p>How can I process Mifare Classic 1k with this reader?</p>
<p>Where can I find any doc to see how to use this reader?</p>
|
<python><smartcard><mifare><pcsc><pyscard>
|
2024-10-20 09:32:02
| 1
| 1,263
|
Mert Mertce
|
79,106,707
| 601,311
|
LibCST matcher for detecting nested f-string expressions in Python AST
|
<p>I want to create a transformer that converts the quotes of f-strings from single quotes to triple quotes, but leaves nested f-strings intact.</p>
<p>For example, the following expression should be left intact:</p>
<pre class="lang-py prettyprint-override"><code>f"""\
Hello {developer_name}
My name is {_get_machine(f"{self.prop_a}, {self.prop_b}")}
"""
</code></pre>
<p>But the transformer's result is:</p>
<pre class="lang-py prettyprint-override"><code>f"""\
Hello {developer_name}
My name is {_get_machine(f"""{self.prop_a}, {self.prop_b}""")}
"""
</code></pre>
<p>I tried the following matchers but without success:</p>
<pre class="lang-py prettyprint-override"><code>class _FormattedStringEscapingTransformer(m.MatcherDecoratableTransformer):
@m.call_if_not_inside(
m.FormattedString(
parts=m.OneOf(
m.FormattedStringExpression(expression=m.TypeOf(m.FormattedString))
)
)
)
@m.leave(m.FormattedString())
def escape_f_string(
self, original_node: cst.FormattedString, updated_node: cst.FormattedString
) -> cst.FormattedString:
return updated_node.with_changes(start='f"""', end='"""')
</code></pre>
<pre class="lang-py prettyprint-override"><code>class _FormattedStringEscapingTransformer(m.MatcherDecoratableTransformer):
@m.call_if_not_inside(
m.FormattedString(
parts=m.OneOf(
m.FormattedStringExpression(
expression=m.Not(m.FormattedString(parts=m.DoNotCare()))
)
)
)
)
@m.leave(m.FormattedString())
def escape_f_string(
self, original_node: cst.FormattedString, updated_node: cst.FormattedString
) -> cst.FormattedString:
return updated_node.with_changes(start='f"""', end='"""')
</code></pre>
<p>None of them worked.</p>
<p>What is the correct matcher to exclude inner f-string expressions from the transformation?</p>
|
<python><abstract-syntax-tree><libcst>
|
2024-10-20 09:17:31
| 1
| 2,759
|
Maxim Kirilov
|
79,106,691
| 2,545,680
|
How to update system installation of python (setuptools) on Ubuntu
|
<p>I get the following warning from AWS Inspector on the Ubuntu machine:</p>
<blockquote>
<p>CVE-2024-6345 - setuptools, setuptools Finding ID:
arn:aws:inspector2:eu-west-1:355370908234:finding/348da83463e3a933ffccae7dcffeb1cc
A vulnerability in the package_index module of pypa/setuptools
versions up to 69.1.1 allows for remote code execution via its
download functions. These functions, which are used to download
packages from URLs provided by users or retrieved from package index
servers, are susceptible to code injection. If these functions are
exposed to user-controlled inputs, such as package URLs, they can
execute arbitrary commands on the system. The issue is fixed in
version 70.0.</p>
</blockquote>
<p>As I understand it, I now need to install version 70. My assumption is that this is related to the system-wide pre-installed version of Python/pip on my machine. What's the correct way to update <code>setuptools</code> for the system-wide Python installation on Ubuntu?</p>
<p>Running this:</p>
<pre><code>sudo -H pip3 install --upgrade setuptools
</code></pre>
<p>gives well known error <code>externally-managed-environment</code>:</p>
<blockquote>
<p>error: externally-managed-environment</p>
<p>× This environment is externally managed ╰─> <strong>To install Python
packages system-wide, try apt install
python3-xyz</strong>, where xyz is the package you are trying to
install...</p>
<pre><code>See /usr/share/doc/python3.12/README.venv for more information.
</code></pre>
<p>note: If you believe this is a mistake, please contact your Python
installation or OS distribution provider. You can override this, at
the risk of breaking your Python installation or OS, by passing
--break-system-packages. hint: See PEP 668 for the detailed specification.</p>
</blockquote>
<p>This command:</p>
<pre><code>$ sudo apt install python3-setuptools
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
python3-setuptools is already the newest version (68.1.2-2ubuntu1.1).
python3-setuptools set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
</code></pre>
<p>doesn't install v70; it says 68 is the most recent one.</p>
|
<python><ubuntu>
|
2024-10-20 09:06:19
| 0
| 106,269
|
Max Koretskyi
|
79,106,642
| 12,466,687
|
How to webscrape elements using beautifulsoup properly?
|
<p>I am not from a web scraping or website/HTML background, and I am new to this field.</p>
<p>Trying out scraping elements from <a href="https://ihgfdelhifair.in/mis/Exhibitors" rel="nofollow noreferrer">this link</a> that contains containers/cards.</p>
<p>I have tried the code below with a little success, but I am not sure how to do it properly to get just the informative content, without HTML/CSS elements in the results.</p>
<pre><code>from bs4 import BeautifulSoup as bs
import requests
url = 'https://ihgfdelhifair.in/mis/Exhibitors'
page = requests.get(url)
soup = bs(page.text, 'html')
</code></pre>
<p>What I am looking to extract (as practice) from the content below:
<a href="https://i.sstatic.net/QsmXDfgn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsmXDfgn.png" alt="sample image" /></a></p>
<pre><code>cards = soup.find_all('div', class_="row Exhibitor-Listing-box")
cards
</code></pre>
<p>It displays this sort of content:</p>
<pre><code>[<div class="row Exhibitor-Listing-box">
<div class="col-md-3">
<div class="card">
<div class="container">
<h4><b> 1 ARTIFACT DECOR (INDIA)</b></h4>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Email : </span> artifactdecor01@gmail.com</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Contact Person : </span> SHEENU</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>State : </span> UTTAR PRADESH</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>City : </span> AGRA</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Hall No. : </span> 12</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Stand No. : </span> G-15/43</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Mobile No. : </span> +91-5624010111, +91-7055166000</p>
<p style="margin-bottom: 5px!important; font-size: 11px;"><span>Website : </span> www.artifactdecor.com</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Source Retail : </span> Y</p>
<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Vriksh Certified : </span> N</p>
</div>
</code></pre>
<p>Now when I use the code below to extract elements:</p>
<pre><code>for element in cards:
title = element.find_all('h4')
email = element.find_all('p')
print(title)
print(email)
</code></pre>
<p><strong>Output:</strong> It gives me the info I need, but with HTML/CSS content in it, which I do not want.</p>
<pre><code>[<h4><b> 1 ARTIFACT DECOR (INDIA)</b></h4>, <h4><b> 10G HOUSE OF CRAFT</b></h4>, <h4><b> 2 S COLLECTION</b></h4>, <h4><b> ........]
[<p style="margin-bottom: 5px!important; font-size: 13px;"><span>Email : </span> artifactdecor01@gmail.com</p>, <p style="margin-bottom: 5px!important; font-size: 13px;"><span>Contact Person : </span> ..................]
</code></pre>
<p>So how can I extract just the <strong>title, email, Contact Person, State, City</strong> values without HTML/CSS in the results?</p>
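Not a full answer, but one common approach: call `.get_text(strip=True)` on each tag to drop all markup, then split the `Label : value` strings yourself. The HTML below is a trimmed copy of one card from the question, used only to keep the sketch self-contained:

```python
from bs4 import BeautifulSoup

# A single card trimmed from the page, just to show the text-extraction idea
html = """
<div class="container">
  <h4><b> 1 ARTIFACT DECOR (INDIA)</b></h4>
  <p><span>Email : </span> artifactdecor01@gmail.com</p>
  <p><span>Contact Person : </span> SHEENU</p>
  <p><span>City : </span> AGRA</p>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
for card in soup.find_all("div", class_="container"):
    record = {"title": card.h4.get_text(strip=True)}
    for p in card.find_all("p"):
        # get_text() drops tags and style attributes; split "Label : value"
        label, _, value = p.get_text(strip=True).partition(":")
        record[label.strip()] = value.strip()
    print(record)
```

On the real page the same loop would run over `soup.find_all('div', class_='container')` inside each card of the listing.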
|
<python><html><css><beautifulsoup>
|
2024-10-20 08:25:35
| 2
| 2,357
|
ViSa
|
79,106,607
| 8,910,441
|
Passing list of objects and dictionary to snowflake from streamlit
|
<p>I'm developing a Streamlit app to create an AI chatbot using the "complete" function in snowflake. When a user starts the app, I need to pass a sequence of initial messages into the function. Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>messages = [
{'system': 'act like a database professional'},
{'user': 'context: I have a database containing sales data. I need you to answer user questions based on the input data between <data> </data> tags'},
{'user': '<data>
actual data ...
</data>'},
{'user': 'provide the answer in markdown format'}
]
</code></pre>
<p>These messages are hard-coded in the app.</p>
<p>User input and assistant messages will be appended to the messages list like this:</p>
<pre class="lang-py prettyprint-override"><code>messages.append({'role': 'user', 'content': question})
</code></pre>
<p>The option parameter is set as follows:</p>
<pre class="lang-py prettyprint-override"><code>option = {
'temperature': 0,
'max_tokens': 128,
'guardrails': True
}
</code></pre>
<p>Here's how I call the complete function:</p>
<pre class="lang-py prettyprint-override"><code>cmd = """
select snowflake.cortex.complete(?, ?, ?) as response
"""
df_response = session.sql(cmd, params=[st.session_state.model_name, messages, option]).collect()
</code></pre>
<p>Currently, I'm consistently receiving this error message:</p>
<p><code>SnowparkSQLException: (1304): 252011: Python data type [dict] cannot be automatically mapped to Snowflake data type. Specify the snowflake data type explicitly.</code></p>
<p>I've completed this quickstart, but its solution summarizes all past conversations into one prompt. I'd prefer to pass it as an array of objects.</p>
<p><a href="https://quickstarts.snowflake.com/guide/ask_questions_to_your_own_documents_with_snowflake_cortex_search/index.html?index=..%2F..index#0" rel="nofollow noreferrer">https://quickstarts.snowflake.com/guide/ask_questions_to_your_own_documents_with_snowflake_cortex_search/index.html?index=..%2F..index#0</a></p>
<p>I've attempted several solutions, including using Snowpark types such as StructType, StructField, StringType, ArrayType, MapType, and the UDF library, but without success.</p>
<p>Can anyone offer assistance?</p>
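One workaround sketch (an assumption on my side, not tested against Snowflake): serialize the Python structures to JSON strings, bind those, and let <code>PARSE_JSON</code> rebuild the VARIANT array/object server-side, so only plain string bindings cross the driver. The model name below is a placeholder:

```python
import json

messages = [
    {"role": "system", "content": "act like a database professional"},
    {"role": "user", "content": "What were total sales last month?"},
]
options = {"temperature": 0, "max_tokens": 128, "guardrails": True}

# Bind plain strings; PARSE_JSON turns them back into ARRAY / OBJECT values
cmd = """
select snowflake.cortex.complete(?, parse_json(?), parse_json(?)) as response
"""
params = ["mistral-large", json.dumps(messages), json.dumps(options)]
# df_response = session.sql(cmd, params=params).collect()
```

This sidesteps the `Python data type [dict] cannot be automatically mapped` error because the driver only ever sees strings.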
|
<python><snowflake-cloud-data-platform><streamlit>
|
2024-10-20 07:55:17
| 2
| 584
|
Julian Eccleshall
|
79,106,221
| 3,294,994
|
Enum of dataclass works but frozen attrs doesn't
|
<p>The built-in <code>enum</code> provides a way to create enums of primitive types (IntEnum, StrEnum).</p>
<p>I'd like to create an enum of structured objects.</p>
<p>One way to do that is with <code>dataclass</code>, and it works:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from enum import Enum
from typing_extensions import assert_never
@dataclass(frozen=True)
class _BaseInstTypeDataclass:
instance_type: str
display_name: str
class InstTypeDataClass(_BaseInstTypeDataclass, Enum):
t2_micro = ("t2.micro", "t2.micro: Cheap!")
r7i_2xlarge = ("r7i.2xlarge", "r7i.2xlarge: Expensive!")
assert list(InstTypeDataClass) == [
InstTypeDataClass.t2_micro,
InstTypeDataClass.r7i_2xlarge,
]
assert isinstance(InstTypeDataClass.t2_micro, InstTypeDataClass)
# This function type checks
def f_dataclass(e: InstTypeDataClass):
if e == InstTypeDataClass.t2_micro:
...
elif e == InstTypeDataClass.r7i_2xlarge:
...
else:
assert_never(e)
</code></pre>
<p>Both the static (via pyright) and runtime behavior are as expected.</p>
<p>However, with attrs...:</p>
<pre class="lang-py prettyprint-override"><code>import attrs
@attrs.define(frozen=True)
class _BaseInstTypeAttrs:
instance_type: str
display_name: str
class InstTypeAttrs(_BaseInstTypeAttrs, Enum):
t2_micro = ("t2.micro", "t2.micro: Cheap!")
r7i_2xlarge = ("r7i.2xlarge", "r7i.2xlarge: Expensive!")
# This function type checks
def f_attrs(e: InstTypeAttrs):
if e == InstTypeAttrs.t2_micro:
...
elif e == InstTypeAttrs.r7i_2xlarge:
...
else:
assert_never(e)
</code></pre>
<p>... the type checker is happy, but at run-time...:</p>
<pre><code>$ python foo.py
_BaseInstTypeDataclass(instance_type='foo', display_name='bar')
Traceback (most recent call last):
File "foo.py", line 58, in <module>
class InstTypeAttrs(_BaseInstTypeAttrs, Enum):
File "/Users/.../python3.10/enum.py", line 287, in __new__
enum_member._value_ = value
File "/Users/.../python3.10/site-packages/attr/_make.py", line 551, in _frozen_setattrs
raise FrozenInstanceError()
attr.exceptions.FrozenInstanceError
</code></pre>
<p>Is this because... <code>Enum</code> and <code>attrs.define(frozen=True)</code> don't play nice...?</p>
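For what it's worth, my hedged reading of the traceback: attrs' frozen `__setattr__` raises unconditionally, so Enum's `enum_member._value_ = value` assignment trips it, whereas a frozen dataclass only blocks its declared fields or instances whose exact type is the frozen class, letting the Enum subclass's extra attributes through. A stdlib-only sketch of that escape hatch:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Base:
    x: int

class Sub(Base):
    pass

blocked = False
try:
    Base(1).y = 2          # type(self) is the frozen class -> blocked
except FrozenInstanceError:
    blocked = True

s = Sub(1)
s._value_ = 42             # subclass + non-field attribute -> allowed
print(blocked, s._value_)  # True 42
```

If that reading is right, it would explain why the dataclass/Enum combination survives while the attrs one raises `FrozenInstanceError` during class creation.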
|
<python><python-attrs>
|
2024-10-20 01:21:13
| 1
| 846
|
obk
|
79,106,128
| 1,492,229
|
How to filter out a dataframe based on another dataframe
|
<p>My dataframe loads from a csv file that looks like this</p>
<pre><code>RepID Account Rank
123 Abcd 1
345 Zyxw 2
567 Hijk 3
...
...
837 Kjsj 8
</code></pre>
<p>and I have another csv that has only one column</p>
<pre><code>RepID
345
488
</code></pre>
<p>I load the first csv into dataframe <code>df</code> and the other csv into dataframe <code>dE</code>.</p>
<p>I want a new dataframe <code>dX</code> with all records from <code>df</code> whose <code>RepID</code> does not exist in <code>dE</code>,
and <code>dY</code> with all the records whose <code>RepID</code> does exist in <code>dE</code>.</p>
<p>How do I do that?</p>
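A sketch with `Series.isin` (column names taken from the question; one boolean mask splits the frame both ways):

```python
import pandas as pd

df = pd.DataFrame({"RepID": [123, 345, 567, 837],
                   "Account": ["Abcd", "Zyxw", "Hijk", "Kjsj"],
                   "Rank": [1, 2, 3, 8]})
dE = pd.DataFrame({"RepID": [345, 488]})

mask = df["RepID"].isin(dE["RepID"])
dY = df[mask]    # rows whose RepID exists in dE
dX = df[~mask]   # rows whose RepID does not exist in dE
print(dY["RepID"].tolist())  # [345]
```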
|
<python><pandas>
|
2024-10-19 23:27:12
| 2
| 8,150
|
asmgx
|
79,106,088
| 5,565,100
|
Correct Python DBus Connection Syntax?
|
<p>I'm having trouble getting dbus to connect:</p>
<pre><code> try:
logging.debug("Attempting to connect to D-Bus.")
self.bus = SessionBus()
self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow")
# self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/KeePassXC/")
# self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow")
</code></pre>
<p>D-Bus <code>ListNames</code> shows:</p>
<pre><code> $ dbus-send --print-reply --dest=org.freedesktop.DBus --type=method_call /org/freedesktop/DBus org.freedesktop.DBus.ListNames
method return time=1729375987.604568 sender=org.freedesktop.DBus -> destination=:1.826 serial=3 reply_serial=2
array [
string "org.freedesktop.DBus"
string ":1.469"
string "org.freedesktop.Notifications"
string "org.freedesktop.PowerManagement"
string ":1.7"
string "org.keepassxc.KeePassXC.MainWindow"
</code></pre>
<p>This version produces this error:</p>
<pre><code> self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow")
ERROR:root:Error message: g-dbus-error-quark: GDBus.Error:org.freedesktop.DBus.Error.UnknownObject: No such object path '/org/keepassxc/KeePassXC/MainWindow' (41)
</code></pre>
<p>This version produces this error:</p>
<pre><code> self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/KeePassXC/")
(process:607644): GLib-GIO-CRITICAL **: 16:18:39.599: g_dbus_connection_call_sync_internal: assertion 'object_path != NULL && g_variant_is_object_path (object_path)' failed
ERROR:root:Failed to connect to KeePassXC D-Bus interface.
ERROR:root:Error message: 'no such object; you might need to pass object path as the 2nd argument for get()'
</code></pre>
<p>I've tried adding a time delay in case it was a race condition, and I've tried with a KeePassXC instance already running. I don't know where to go next.</p>
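One generic debugging step (a sketch, not specific to KeePassXC's D-Bus API) is to introspect the bus name from the shell and see which object paths it actually exports, instead of guessing them:

```shell
# List every object path the name exports (use whichever tool is installed)
busctl --user tree org.keepassxc.KeePassXC.MainWindow
gdbus introspect --session --dest org.keepassxc.KeePassXC.MainWindow \
    --object-path / --recurse
```

Whatever path and interface appear there are the ones to pass as the second argument to `bus.get()`.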
<p>Here's the code in full context:</p>
<pre><code>from pydbus import SessionBus
import logging
import os
import subprocess
from gi.repository import GLib
import time
# Set up logging configuration
logging.basicConfig(level=logging.DEBUG) # Set logging level to debug
class KeePassXCManager:
def __init__(self, db_path, password=None, keyfile=None, appimage_path=None):
logging.debug("Initializing KeePassXCManager")
self.db_path = db_path
self.password = password
self.keyfile = keyfile
self.kp = None
self.keepass_command = []
# Set default path to the KeePassXC AppImage in ~/Applications
self.appimage_path = appimage_path or os.path.expanduser("~/Applications/KeePassXC.appimage")
logging.debug(f"AppImage path set to: {self.appimage_path}")
# Determine the KeePassXC launch command
self._set_keepassxc_command()
self._ensure_keepassxc_running()
# Set up the D-Bus connection to KeePassXC
self.bus = SessionBus()
self.keepass_service = None
self._connect_to_dbus()
# Open the database once the manager is initialized
if not self.open_database():
logging.error("Failed to open the database during initialization.")
def _set_keepassxc_command(self):
"""Sets the command to launch KeePassXC."""
try:
if self._is_keepassxc_installed():
logging.info("Using installed KeePassXC version.")
self.keepass_command = ["keepassxc"]
elif os.path.isfile(self.appimage_path) and os.access(self.appimage_path, os.X_OK):
logging.info(f"KeePassXC AppImage is executable at {self.appimage_path}")
self.keepass_command = [self.appimage_path]
else:
logging.error("KeePassXC is not installed or AppImage is not executable.")
raise RuntimeError("KeePassXC is not installed. Please install it or provide a valid AppImage.")
logging.debug(f"Final KeePassXC command set: {self.keepass_command}")
except Exception as e:
logging.error(f"Error setting KeePassXC command: {e}")
raise
def _is_keepassxc_installed(self):
"""Checks if KeePassXC is installed on the system."""
logging.debug("Checking if KeePassXC is installed via package manager")
try:
result = subprocess.run(["which", "keepassxc"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if result.returncode == 0:
logging.info(f"KeePassXC found at {result.stdout.decode().strip()}")
return True
else:
logging.warning("KeePassXC is not installed via package manager.")
return False
except Exception as e:
logging.error(f"Error checking KeePassXC installation: {e}")
return False
def _ensure_keepassxc_running(self):
"""Checks if KeePassXC is running and starts it if not."""
logging.debug("Checking if KeePassXC is running")
try:
# Check if KeePassXC is running using pgrep
result = subprocess.run(["pgrep", "-x", "keepassxc"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if result.returncode != 0:
logging.info("KeePassXC is not running. Starting KeePassXC.")
# Start KeePassXC
subprocess.Popen(self.keepass_command)
# Optionally, wait for a short time to allow KeePassXC to start
GLib.idle_add(lambda: None) # Allows the GUI to initialize
else:
logging.info("KeePassXC is already running.")
except Exception as e:
logging.error(f"Error checking or starting KeePassXC: {e}")
def _construct_open_command(self):
"""Constructs the command to open the KeePassXC database."""
command = [self.keepass_command[0], self.db_path]
if self.password:
command.append("--pw-stdin")
logging.debug(f"Command includes password for opening database: {self.db_path}")
if self.keyfile:
command.append(f"--keyfile={self.keyfile}")
logging.debug(f"Command includes keyfile for opening database: {self.keyfile}")
logging.debug(f"Final command to open KeePassXC database: {command}")
return command if self.password or self.keyfile else None
def _clear_sensitive_data(self):
"""Clears sensitive data from memory."""
logging.debug("Clearing sensitive data from memory")
self.password = None
self.keyfile = None
self.db_path = None
def _connect_to_dbus(self):
"""Connects to the KeePassXC D-Bus interface."""
try:
logging.debug("Attempting to connect to D-Bus.")
self.bus = SessionBus()
# self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow")
self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/KeePassXC/")
# self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow")
# self.keepass_service = self.bus.get("org.KeePassXC.MainWindow", "/org/KeePassXC/MainWindow")
if self.keepass_service:
logging.info("Successfully connected to KeePassXC D-Bus interface.")
else:
logging.error("KeePassXC D-Bus interface is not available.")
except Exception as e:
logging.error("Failed to connect to KeePassXC D-Bus interface.")
logging.error(f"Error message: {e}")
services = self.bus.get_services()
logging.error(f"Available D-Bus services: {services}")
def open_database(self):
"""Opens the KeePassXC database using D-Bus."""
try:
if not self.keepass_service:
logging.error("KeePassXC D-Bus service is not available.")
return False
logging.info(f"Opening database: {self.db_path}")
# Prepare parameters for the D-Bus call
password = self.password or ""
keyfile = self.keyfile or ""
# Call the D-Bus method with parameters directly
response = self.keepass_service.openDatabase(self.db_path, password, keyfile)
if response:
logging.info("Database opened successfully via D-Bus.")
return True
else:
logging.error("Failed to open database via D-Bus.")
return False
except Exception as e:
logging.error(f"An error occurred while opening the database: {e}")
return False
def unlock_database(self):
"""Unlocks the KeePassXC database with the password via D-Bus."""
try:
if not self.keepass_service:
logging.error("KeePassXC D-Bus service is not available.")
return False
logging.info("Unlocking database with the provided password.")
response = self.keepass_service.unlockDatabase(self.password)
if response:
logging.info("Database unlocked successfully via D-Bus.")
return True
else:
logging.error("Failed to unlock database via D-Bus.")
return False
except Exception as e:
logging.error(f"An error occurred while unlocking the database: {e}")
return False
</code></pre>
|
<python><python-3.x><dbus>
|
2024-10-19 22:40:27
| 1
| 2,371
|
Emily
|
79,105,984
| 13,634,560
|
multi index with .loc on columns
|
<p>I have a dataframe with multi index as follows</p>
<pre><code>arrays = [
["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
["one", "two", "one", "two", "one", "two", "one", "two"],
]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
s = pd.DataFrame(np.random.randn(8), index=index).T
</code></pre>
<p>which looks like this</p>
<pre><code> bar baz foo qux
one two one two one two one two
0 -0.144135 0.625481 -2.139184 -1.066893 -0.123791 -1.058165 0.495627 -0.654353
</code></pre>
<p>to which the documentation says to index in the following way</p>
<pre><code>df.loc[:, (slice("bar", "two"), ...)]
</code></pre>
<p>and so I do</p>
<pre><code>s.loc[:, (slice("bar", "two"):(slice("baz", "two")))]
</code></pre>
<p>which gives me a <code>SyntaxError</code>.</p>
<pre><code> Cell In[98], line 3
s.loc[:, (slice("bar", "two"):(slice("baz", "two")))]
^
SyntaxError: invalid syntax
</code></pre>
<p>In my specific use-case [albeit beyond the scope of this question], the level 1 indices are of type timestamp [Year], but I figure the answer should be the same. What is the proper way to access a range of multi-indexed items via a multi-index column?</p>
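For reference, a sketch of two working spellings on the example frame (both need lexsorted columns, which this frame has): a single slice whose endpoints are full `(level0, level1)` tuples, or `pd.IndexSlice` for per-level ranges; no nested `slice()` calls are involved:

```python
import numpy as np
import pandas as pd

arrays = [
    ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
    ["one", "two", "one", "two", "one", "two", "one", "two"],
]
index = pd.MultiIndex.from_tuples(list(zip(*arrays)), names=["first", "second"])
s = pd.DataFrame(np.random.randn(8), index=index).T

# One slice whose endpoints are tuples spanning both levels
sub = s.loc[:, ("bar", "one"):("baz", "two")]

# Or slice each level independently
idx = pd.IndexSlice
sub2 = s.loc[:, idx["bar":"baz", "one":"two"]]
print(sub.columns.tolist())
```

Level-1 timestamps should work the same way, as long as the columns are sorted.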
|
<python><pandas><multi-index>
|
2024-10-19 21:19:04
| 2
| 341
|
plotmaster473
|
79,105,760
| 2,329,968
|
Sphinx - How to document a subclass of param.Parameterized
|
<p>One of <a href="https://param.holoviz.org/user_guide/Simplifying_Codebases.html" rel="nofollow noreferrer"><code>param</code></a>'s main advantage is that of simplifying the codebase.</p>
<p>Let's consider this toy class:</p>
<pre class="lang-py prettyprint-override"><code>import param
class A(param.Parameterized):
"""This is my docstring.
"""
a = param.Boolean(True, doc="some docstring for parameter a")
b = param.Number(1, bounds=(-10, 10), doc="some docstring for parameter b")
c = param.Boolean(False, readonly=True, doc="some docstring for parameter c")
def __init__(self, a, b, **kwargs):
super().__init__(a=a, b=b, **kwargs)
def my_method(self):
"my method docstring"
pass
</code></pre>
<p>While working on Jupyter Notebook, I can read the class docstring with:</p>
<pre class="lang-py prettyprint-override"><code>A?
</code></pre>
<p>which outputs:</p>
<pre class="lang-none prettyprint-override"><code>Init signature: A(a, b, **kwargs)
Docstring:
This is my docstring.
Parameters of 'A'
=================
Parameters changed from their default values are marked in red.
Soft bound values are marked in cyan.
C/V= Constant/Variable, RO/RW = ReadOnly/ReadWrite, AN=Allow None
Name Value Type Bounds Mode
a True Boolean V RW
b 1 Number (-10, 10) V RW
c False Boolean C RO
Parameter docstrings:
=====================
a: some docstring for parameter a
b: some docstring for parameter b
c: some docstring for parameter c
Type: ParameterizedMetaclass
Subclasses:
</code></pre>
<p>This is great, because we can immediately understand what parameters are available and what they are supposed to do.</p>
<p>However, we may need to create HTML documentation (with Sphinx) of a module containing subclasses of <code>param.Parameterized</code>. So far, I tried Sphinx <code>autodoc</code>. In the file <code>my_module.rst</code> I wrote:</p>
<pre class="lang-none prettyprint-override"><code>.. module:: my_module
.. autoclass:: A
.. autoattribute:: my_module.A.a
.. autoattribute:: my_module.A.b
.. autoattribute:: my_module.A.c
.. autofunction:: my_module.A.my_method
</code></pre>
<p>However, <code>autoattribute</code> is unable to extract the docstring from the parameters.</p>
<p>For people regularly using <code>param</code>, how do you create HTML documentation of your applications/modules? Is there any way to get autodoc to extract those docstrings? Or are there any Sphinx extensions that help process subclasses of <code>param.Parameterized</code>?</p>
|
<python><parameters><python-sphinx><holoviz>
|
2024-10-19 19:28:08
| 0
| 13,725
|
Davide_sd
|
79,105,380
| 13,097,194
|
Is it possible to have Plotly HTML charts appear within Colab without running the code beforehand?
|
<p>I recently created a <a href="https://github.com/kburchfiel/pfn/blob/main/Automated_Notebooks/recent_weather_data.ipynb" rel="nofollow noreferrer">Python script</a> that retrieves recent weather reports from the National Weather Service, then creates some simple Plotly visualizations of temperature and precipitation data. I configured a spare laptop to run this script on an hourly basis (using Papermill) and then copy the output to Google Drive, thus allowing relatively recent data to appear within <a href="https://drive.google.com/file/d/1AmcSXI5ykszbQryyQmGkDhTg_PTG3m4A/view?usp=sharing" rel="nofollow noreferrer">this Colab notebook</a>.</p>
<p>If I run and save this notebook on my local computer, then close and reopen it within JupyterLab Desktop or VS Code, the Plotly HTML-based charts will still appear (as long as Plotly is present within the active environment). In addition, if I run the notebook within Colab, the HTML charts will get produced there also.</p>
<p><strong>However,</strong> I'm wondering whether it's possible for the interactive Plotly charts to appear within Colab <em>without running the Colab notebook beforehand.</em> This notebook does display static PNG copies of each chart by default, but until I actually run the notebook online, I only see blank spaces where the HTML charts should be:</p>
<p><a href="https://i.sstatic.net/bZn9OfPU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZn9OfPU.png" alt="missing_chart" /></a></p>
<p>(This screenshot from JupyterLab Desktop shows what I'd expect to see instead. Note that this HTML chart appeared by default when I opened the notebook; there was no need to run the script after opening JupyterLab in order to view this chart.)</p>
<p><a href="https://i.sstatic.net/VC8mqdVt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC8mqdVt.png" alt="present_chart" /></a></p>
<p>In this case, it's not much trouble to just go ahead and run the notebook; however, other notebooks may make use of local files and folders and thus couldn't easily be run by others within Colab. Therefore, I'd love to find a method for showing interactive charts within Colab that doesn't require actually running the notebook.</p>
<p>Thank you in advance for your help!</p>
<p>PS: I know that an alternative setup would be to just display these interactive charts within a Dash app, but I may sometimes prefer to use a Jupyter Notebook as my medium in order to more easily display both my code and its output.</p>
|
<python><html><plotly><google-colaboratory>
|
2024-10-19 16:10:08
| 0
| 974
|
KBurchfiel
|
79,105,264
| 5,800,005
|
PyBind11 produces pyd file without any class i defined
|
<p>I am working on wrapping a C++ class using Pybind11 to make it accessible in Python. My project involves a dynamic library built with Qt6, which contains a class named Package. I am writing a wrapper class called PackageExt, and I am using Pybind11 to bind this wrapper to a Python module. Below is the code I am working with:</p>
<p><strong>C++ Wrapper Header (packageext.h)</strong></p>
<pre><code>#ifndef PACKAGEEXT_H
#define PACKAGEEXT_H
#include "package.h" // from the dynamic library
class PackageExt {
public:
PackageExt(const std::string &id);
PackageExt(const ContainerCore::Package &pkg);
PackageExt(const PackageExt &other);
PackageExt& operator=(const PackageExt &other);
~PackageExt();
void setPackageID(const std::string &id);
std::string packageID() const;
ContainerCore::Package* getBasePackage();
private:
ContainerCore::Package *mPackage; // Defined in the dynamic library
};
#endif // PACKAGEEXT_H
</code></pre>
<p><strong>Pybind11 Binding (bindcontainer.cpp)</strong></p>
<pre><code>#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include "packageext.h"
namespace py = pybind11;
PYBIND11_MODULE(ContainerPy, m) {
m.doc() = "Pybind11 plugin for Container library";
py::class_<PackageExt>(m, "Package")
.def(py::init<const std::string &>(), py::arg("id"),
"Constructor that initializes a Package with the specified ID.")
.def("get_package_id", &PackageExt::packageID,
"Get the package ID as std::string.")
.def("set_package_id", &PackageExt::setPackageID, py::arg("id"),
"Set the package ID using std::string.");
}
</code></pre>
<p><strong>CMake Configuration (CMakeLists.txt)</strong></p>
<pre><code>find_package(Python REQUIRED COMPONENTS Interpreter Development)
find_package(QT NAMES Qt6 REQUIRED COMPONENTS Core Concurrent Xml Network Sql)
find_package(Qt${QT_VERSION_MAJOR} REQUIRED COMPONENTS Core Concurrent Xml Network Sql)
find_package(pybind11 REQUIRED CONFIG HINTS ${PYBIND11_HINTS_PATH})
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
set(BINDING_FILES
bindcontainer.cpp
containerext.cpp
packageext.cpp
containermapext.cpp
)
pybind11_add_module(${PYTHON_LIB_NAME} MODULE ${BINDING_FILES})
target_link_libraries(${PYTHON_LIB_NAME} PRIVATE Container) # Dynamic library
target_link_libraries(${PYTHON_LIB_NAME} PRIVATE Qt6::Core Qt6::Concurrent Qt6::Network Qt6::Xml Qt6::Sql)
target_link_libraries(${PYTHON_LIB_NAME} PRIVATE Python::Python)
</code></pre>
<p><strong>Issue:</strong>
After building the module, I successfully get a .pyd file. However, when I import the module in Python and inspect it, I see the following:</p>
<p><strong>python</strong></p>
<pre><code>import ContainerPy
print(dir(ContainerPy))
</code></pre>
<p>The output is:</p>
<pre><code>['__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']
</code></pre>
<p>It appears that the Package class and its methods are not being exposed as expected. What could be causing this issue, and how can I troubleshoot or fix it?</p>
<p><strong>Additional Details:</strong></p>
<ul>
<li>I am using Qt6 with pybind11 to wrap the C++ class.</li>
<li>The dynamic library is being linked correctly, and there are no build errors.</li>
<li>My environment is configured to use CMake for building the project.</li>
<li>I have verified that the .pyd file is being generated, but the class bindings do not seem to be visible in Python.</li>
<li>the pyd file <em>ContainerPy/ContainerPy.cpython-313-x86_64-linux-gnu.so</em> is installed in the site-packages folder in my python environment.</li>
<li>I am building it on linux but the code should be buildable on windows/macos as well.</li>
</ul>
<p><strong>What I Have Tried:</strong></p>
<ul>
<li>I ensured the CMake configuration links to the required Qt6 and Python components.</li>
<li>I verified that all source files are included in the BINDING_FILES list.</li>
<li>I checked for any missing dependencies that could cause the class bindings not to appear.</li>
</ul>
<p><strong>Questions:</strong></p>
<ul>
<li>Could there be an issue with how I set up the Pybind11 bindings?</li>
<li>Is there anything specific I should check in the CMake configuration or the way the library is being linked?</li>
</ul>
|
<python><c++><pybind11>
|
2024-10-19 15:13:48
| 1
| 314
|
Ahmed Aredah
|
79,105,119
| 607,846
|
Slice array using boolean values
|
<p>Given a:</p>
<pre><code>a = numpy.zeros(100, dtype=bool)
a[10:20] = True
a[40:60] = True
</code></pre>
<p>I wish to slice an array b, also of length 100, into two arrays:</p>
<pre><code>b[10:20], b[40:60]
</code></pre>
<p>In other words, I wish to establish the ranges of indices in <code>a</code> containing True values, so that I can slice <code>b</code> into individual arrays.</p>
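One sketch of turning the mask into slices: `np.diff` on the integer view of the mask flips exactly at the edge of each True run, which gives the start/stop pairs directly:

```python
import numpy as np

a = np.zeros(100, dtype=bool)
a[10:20] = True
a[40:60] = True
b = np.arange(100)

# Edges of each True run: the diff is +1 at a run start, -1 just past its end
edges = np.flatnonzero(np.diff(a.astype(np.int8))) + 1
if a[0]:                       # mask begins inside a run
    edges = np.r_[0, edges]
if a[-1]:                      # mask ends inside a run
    edges = np.r_[edges, a.size]
starts, stops = edges[0::2], edges[1::2]
pieces = [b[i:j] for i, j in zip(starts, stops)]
print([(int(i), int(j)) for i, j in zip(starts, stops)])  # [(10, 20), (40, 60)]
```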
|
<python><numpy>
|
2024-10-19 14:06:26
| 2
| 13,283
|
Baz
|
79,104,875
| 8,535,456
|
Returning a generator as return value, and the generator becomes empty
|
<p>I am puzzled by the following snippet:</p>
<pre><code>def iter_test(x):
l = [1,2,3,4,5]
default = (i for i in l)
if x:
return default
else:
for i in default:
yield i
def test_iter():
a = iter_test(True)
b = iter_test(False)
print(a, b)
print('a', list(a))
print('b', list(b))
test_iter()
</code></pre>
<p>The output is:</p>
<pre><code><generator object iter_test at 0x00000171F1A82B50> <generator object iter_test at 0x00000171F1A82DC0>
a []
b [1, 2, 3, 4, 5]
</code></pre>
<p>This snippet tests two different methods of returning an iterator
in a function.</p>
<ol>
<li>In the first method, the iterator itself, <code>default</code>
is returned.</li>
<li>In the second method, the iterator was firstly unpacked with <code>for</code>,
then its contents are <code>yield</code>'d one by one.</li>
</ol>
<p>With <code>print(a, b)</code> we can see that both methods
return a generator object. However, the generator
returned by the first method is empty.</p>
<p>Since the type of <code>a</code> is <code>generator</code>, the iterator should be
returned successfully. How does it turn into an empty
generator instead?</p>
<p>My Python version is Python 3.12.2.</p>
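For what it's worth, a minimal sketch of the usual fix: because the function body contains `yield`, the whole function is a generator function, so `return default` only stashes the inner generator on `StopIteration.value` rather than yielding anything. Delegating with `yield from` emits the items instead:

```python
def iter_test(x):
    l = [1, 2, 3, 4, 5]
    default = (i for i in l)
    if x:
        # `return default` would only set StopIteration.value;
        # delegation actually yields the inner generator's items
        yield from default
    else:
        for i in default:
            yield i

print(list(iter_test(True)))   # [1, 2, 3, 4, 5]
print(list(iter_test(False)))  # [1, 2, 3, 4, 5]
```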
|
<python><python-3.x><iterator>
|
2024-10-19 12:03:20
| 1
| 1,097
|
Limina102
|
79,104,540
| 5,109,125
|
getting GoogleAuthError while following a langchain tutorial
|
<p>I am trying to follow an <code>"official"</code> langchain tutorial <a href="https://python.langchain.com/docs/how_to/sql_csv/" rel="nofollow noreferrer">How to do question answering over CSVs</a> using <code>llm model = gemini-1.5-flash</code>, but my code is failing anyway. I am writing and running the code in a Jupyter notebook.</p>
<p>Before I executed the code, I went to my Google Cloud account and created an API key and named it as "GOOGLE_API_KEY" as shown below, as the tutorial code requires it.</p>
<p><a href="https://i.sstatic.net/9nGOPS3K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nGOPS3K.png" alt="enter image description here" /></a></p>
<p>I had no problem running the tutorial code in the Jupyter notebook until it got stuck executing the getpass() logic. I worked around it by replacing <code>getpass</code> with <code>load_dotenv()</code>, as shown below.</p>
<pre><code> # SET UP GCP Credentials
# Create GOOGLE_API_KEY to access the llm model USING Google Cloud console
from dotenv import load_dotenv
# import getpass
import os
# run load_dotenv() to get the GOOGLE_API_KEY value from the .env file
load_dotenv()
os.getenv("GOOGLE_API_KEY")
</code></pre>
<p>I created an <code>.env</code> file containing the <code>GOOGLE_API_KEY=<value></code> line, which <code>load_dotenv()</code> was able to read successfully.</p>
<p>But unfortunately, I encountered an error when executing the llm assignment statement.</p>
<pre><code>from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model="gemini-1.5-flash")
</code></pre>
<p>This is the error that I received:</p>
<p><a href="https://i.sstatic.net/JpydQbc2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpydQbc2.png" alt="enter image description here" /></a></p>
<p>The error shows <code>GoogleAuthError</code> with <code> exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)</code>, but the langchain tutorial does not mention needing Cloud SDK credentials.</p>
<p>Helpful input from experienced Stack Overflow users is much appreciated.</p>
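A hedged guess at the mismatch: `ChatVertexAI` goes through Vertex AI and authenticates with Google Cloud Application Default Credentials (e.g. after `gcloud auth application-default login`), so it never looks at `GOOGLE_API_KEY`. The API-key path is the separate Gemini Developer API client in the `langchain-google-genai` package:

```python
# Sketch -- assumes `pip install langchain-google-genai` and that
# GOOGLE_API_KEY is set in the environment; unlike ChatVertexAI,
# this client authenticates with the API key rather than gcloud ADC.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
```

If sticking with `ChatVertexAI` is a requirement, the alternative would be configuring ADC (gcloud login or a service-account JSON in `GOOGLE_APPLICATION_CREDENTIALS`) instead of an API key.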
|
<python><large-language-model><google-cloud-vertex-ai>
|
2024-10-19 08:53:47
| 1
| 597
|
punsoca
|
79,104,501
| 7,695,845
|
How to render numpy docstring nicely in VSCode?
|
<p>I am working on a Python utilities library for parsing output generated by an external simulation program we use (called MESA, but it's not important for this question).
I plan to use the library mainly for myself, but it's very likely that I will share it with my friends who also use this program to run simulations. In the future I might even publish this library for anyone who needs to parse MESA output.</p>
<p>Both for my sake, and also for my friends who are less experienced in Python, I want to document the tools I write. I like the <a href="https://numpydoc.readthedocs.io/en/latest/format.html" rel="nofollow noreferrer">numpy docstring style</a> so I decided to go with that. I base my docstrings on the style guide I linked above and based on <a href="https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html" rel="nofollow noreferrer">this example</a> from the sphinx napoleon extension which looks complete.</p>
<p>For example, I have this function and its docstring:</p>
<pre class="lang-py prettyprint-override"><code>"""Utilities for reading and processing MESA output files."""

__all__ = [
    "PathLike",
    "read_history",
]

import os

import pandas as pd
import astropy.units as u
from astropy.table import QTable

PathLike = os.PathLike | str | bytes


def read_history(
    path: PathLike = "history.data",
    units: dict[str, u.Unit] | None = None,
    descriptions: dict[str, str] | None = None,
    *,
    meta_start: int = 1,
    data_start: int = 4,
) -> QTable:
    """Read a MESA history file and return it as an astropy `QTable`.

    Parameters
    ----------
    path
        Path to the history file (defaults to "history.data").
    units
        Optional dictionary with units to apply to columns. For example,
        if the history file has columns named "star_age" and "star_mass",
        and you want to assign them units of years and solar masses,
        respectively, you can pass `units={"star_age": u.yr, "star_mass": u.Msun}`.
        The returned table will have the corresponding units applied to
        the columns.
    descriptions
        Optional dictionary with descriptions to apply to columns.
    meta_start
        Indicates the line number where the metadata of the history file starts.
        Only non-blank and non-comment lines are considered. Starts from zero.
        Defaults to 1 as that is usually where the metadata is located in MESA
        history files (usually the first line, line 0, is column numbers and the
        second line, line 1, is column names).
    data_start
        Indicates the line number where the data of the history file starts.
        Only non-blank and non-comment lines are considered. Starts from zero.
        Defaults to 4 as that is usually where the data is located in MESA
        history files (usually the first 3 lines, lines 0 to 2, are metadata.
        The 4th line is blank so it doesn't count, and the 5th line, line 3,
        contains column numbers, so the data column names are on the 6th line,
        which is line 4).

    Returns
    -------
    QTable
        An astropy `QTable` with the data from the history file. The history's
        metadata is stored in the `meta` attribute of the table under the key
        "history_meta" (the `meta` attribute can contain more information,
        including user custom information).
    """
    ...
</code></pre>
<p>This example looks good to me and it seems to follow the numpy docstring style guide. I work with VSCode, and when I hover over the function to look at the generated docs, it doesn't look very good:</p>
<p><a href="https://i.sstatic.net/H9aIUNOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H9aIUNOy.png" alt="" /></a></p>
<p>It looks cluttered and messy. I think I followed the style guide correctly, so I guess that VSCode simply doesn't render numpy's docstring style very well (if I am not wrong, VSCode simply renders markdown).</p>
<p>Is there a way to make the documentation look good and readable in VSCode? Right now, I mainly use the library for myself, and I'll probably share it with my friends, so I don't want to use a full-blown documentation generator like <code>sphinx</code> (although I probably will in the future if I publish the library). On the other hand, I don't want to write the docs in markdown, which might render well in VSCode but doesn't comply with any standard docstring style. I just want to document my code such that the docstring is readable and the IDE (VSCode) is able to show me the docs nicely.</p>
|
<python><visual-studio-code><docstring><numpydoc>
|
2024-10-19 08:32:48
| 0
| 1,420
|
Shai Avr
|
79,104,338
| 1,161,518
|
"-bash: syntax error near unexpected token `('" error while trying to activate project using Conda
|
<p>I am trying to create a project using Conda.</p>
<pre><code>conda create --name cooking-assistant python=3.11
</code></pre>
<p>The project is created, but when I try to activate it, it gives me the above-mentioned error. Also, I cannot find any project-related files in the directory where I am creating this project.</p>
|
<python><conda>
|
2024-10-19 06:34:23
| 0
| 1,152
|
WasimSafdar
|
79,104,217
| 986,387
|
Dunder method other param type
|
<p>How can I add type annotations to dunder method params?</p>
<pre><code>class Car:
    def __init__(self, name: str, horse_power: int, fav: bool) -> None:
        self.name = name
        self.horse_power = horse_power
        self.fav = fav

    def __str__(self) -> str:
        return f"Car Name {self.name} HorsePower {self.horse_power} Fav {self.fav}"

    def __add__(self, other):
        print(f"helo {self.name} {other.name}")
</code></pre>
<p>In the <strong>__add__</strong> dunder method, when I try to specify a type for the <code>other</code> param</p>
<pre><code>def __add__(self, other: Car):
    print(f"helo {self.name} {other.name}")
</code></pre>
<p>In VScode i am getting error</p>
<blockquote>
<p>"Car" is not definedPylancereportUndefinedVariable (function) Car:
Unknown</p>
</blockquote>
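<p>A sketch of the two standard fixes (both are assumptions about what Pylance needs, not part of the original code): either quote the annotation as a string (a "forward reference"), or add <code>from __future__ import annotations</code> at the top of the file so all annotations are evaluated lazily.</p>

```python
class Car:
    def __init__(self, name: str, horse_power: int, fav: bool) -> None:
        self.name = name
        self.horse_power = horse_power
        self.fav = fav

    # Inside the class body the name `Car` is not bound yet, so the
    # annotation is written as a string literal (a forward reference).
    def __add__(self, other: "Car") -> None:
        print(f"helo {self.name} {other.name}")
```

<p>On Python 3.11+, <code>typing.Self</code> is another option when the parameter is meant to be the same class.</p>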
|
<python><python-typing>
|
2024-10-19 04:45:48
| 1
| 8,920
|
invariant
|
79,104,136
| 54,873
|
In pandas, how can I get a version of nth() to act as an aggregator?
|
<p>In Pandas v1.x.x,</p>
<pre><code>df.groupby("col").nth(0)
</code></pre>
<p>returned a dataframe that had "col" as the index col.</p>
<p>Now in pandas v2.x.x it doesn't, and my understanding of why is that nth is now seen as a "filter" and not an "aggregator".</p>
<p>I saw some threads that suggested I instead do</p>
<pre><code> df.groupby("col").nth(0).reset_index().set_index("col")
</code></pre>
<p>If I wanted a result with "col" as the index. This strikes me as crazy wordy, and duplicates code since I have to say "col" twice.</p>
<p>Is there a better, cleaner way to do this? Bonus points if it is backwards compatible.</p>
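<p>One backwards-compatible option (a sketch on toy data, not from the original post): since <code>nth(0)</code> selects the first row of each group, <code>head(1)</code> selects the same rows in both pandas 1.x and 2.x, and a single <code>set_index</code> restores <code>"col"</code> as the index without the <code>reset_index()</code> round-trip.</p>

```python
import pandas as pd

df = pd.DataFrame({"col": ["a", "a", "b"], "val": [1, 2, 3]})

# head(1) is a filter in every pandas version: it keeps the first row of
# each group with the original index, and "col" stays a regular column,
# so set_index("col") works directly.
out = df.groupby("col").head(1).set_index("col")
```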
|
<python><pandas>
|
2024-10-19 03:08:32
| 1
| 10,076
|
YGA
|
79,104,066
| 8,436,767
|
Yfinance Python Library returns different result from NASDAQ
|
<p>I am using yfinance to get information about USOI ticker.</p>
<pre><code>import yfinance as yf

tickerSymbol = 'USOI'

# get data on this ticker
tickerData = yf.Ticker(tickerSymbol)

# get the historical prices for this ticker
db = tickerData.history(period='1d', start='2020-10-14', end='2024-10-17')
db['Close'][0:10], db.index[0:10]
</code></pre>
<p>Returns</p>
<pre><code>Date
2020-10-14 00:00:00-04:00 32.161522
2020-10-15 00:00:00-04:00 32.234608
2020-10-16 00:00:00-04:00 32.088417
2020-10-19 00:00:00-04:00 32.088417
2020-10-20 00:00:00-04:00 32.324036
2020-10-21 00:00:00-04:00 31.587734
2020-10-22 00:00:00-04:00 32.324036
2020-10-23 00:00:00-04:00 31.808615
2020-10-26 00:00:00-04:00 31.219574
2020-10-27 00:00:00-04:00 31.514090
</code></pre>
<p>But if I check NASDAQ website <a href="https://www.nasdaq.com/market-activity/etf/usoi/historical?page=101&rows_per_page=10&timeline=y10" rel="nofollow noreferrer">https://www.nasdaq.com/market-activity/etf/usoi/historical?page=101&rows_per_page=10&timeline=y10</a></p>
<p>I get different values.</p>
<p><a href="https://i.sstatic.net/BmXZM1zu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BmXZM1zu.png" alt="enter image description here" /></a></p>
<p>What is going on?</p>
|
<python><yfinance>
|
2024-10-19 01:45:10
| 0
| 739
|
Valentyn
|
79,104,019
| 10,958,326
|
How to preserve an informative `__init__` signature when using parameterized Mixins in Python?
|
<p>In my Python project, I heavily use Mixins as a design pattern, and Iβd like to continue doing so. However, I am facing an issue with the <code>__init__</code> method signatures in the final class. Since I am passing arguments through <code>**kwargs</code>, the resulting signature is not helpful for introspection or documentation or type checking. Hereβs an example to illustrate the issue:</p>
<pre class="lang-py prettyprint-override"><code>class Base:
    def __init__(self, arg1):
        self.arg1 = arg1

class ParamMixin1:
    def __init__(self, arg2, **kwargs):
        super().__init__(**kwargs)
        self.arg2 = arg2

class ParamMixin2:
    def __init__(self, arg3, **kwargs):
        super().__init__(**kwargs)
        self.arg3 = arg3

class NonParamMixin:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

class Derived(NonParamMixin, ParamMixin1, ParamMixin2, Base):
    pass

d = Derived(arg1=1, arg2=2, arg3=3)

from inspect import signature

print(signature(Derived.__init__))
</code></pre>
<p>This prints:</p>
<pre><code>(self, **kwargs)
</code></pre>
<p>The signature is not very helpful since all arguments are hidden under <code>**kwargs</code>. I could technically rewrite the <code>__init__</code> methods like this to expose the full signature:</p>
<pre class="lang-py prettyprint-override"><code>class Base:
    def __init__(self, arg1):
        self.arg1 = arg1

class ParamMixin1:
    def __init__(self, arg2, arg1):
        super().__init__(arg1)
        self.arg2 = arg2

class ParamMixin2:
    def __init__(self, arg3, arg2, arg1):
        super().__init__(arg2, arg1)
        self.arg3 = arg3

class NonParamMixin:
    def __init__(self, arg3, arg2, arg1):
        super().__init__(arg3, arg2, arg1)

class Derived(NonParamMixin, ParamMixin1, ParamMixin2, Base):
    pass

from inspect import signature

print(signature(Derived.__init__))
</code></pre>
<p>This works, and the signature is more informative, but it introduces a lot of boilerplate and requires the correct ordering of arguments, which sometimes doesn't matter depending on the mixins used. Furthermore sometimes only a subset of the mixins is used.</p>
<p>I've tried creating a metaclass to override the <code>__init__</code> signature dynamically, but it became very messy, and I couldnβt get it to work reliably.</p>
<p>However, it feels counterintuitive not to have access to the proper <code>__init__</code> signature. Without it, Iβd need to manually track all the mixins and their required parameters, which seems impractical. Surely, thereβs a better way to manage this?</p>
<p>Any suggestions or alternative approaches to achieve a cleaner, more maintainable solution would be greatly appreciated as well!</p>
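<p>One possible direction (a sketch, not a drop-in solution: <code>AutoSignature</code> and <code>_collect_init_params</code> are invented names, and it assumes cooperative <code>**kwargs</code>-passing mixins): walk the MRO once per subclass, merge every <code>__init__</code>'s named parameters, and publish them as a keyword-only <code>__signature__</code>. Positional order across mixins is deliberately not reconstructed, since it is ambiguous.</p>

```python
import inspect

def _collect_init_params(cls):
    # Merge named parameters from every __init__ in the MRO, base-first,
    # skipping self and the *args/**kwargs catch-alls.
    params = {}
    for klass in reversed(cls.__mro__):
        init = klass.__dict__.get("__init__")
        if init is None or klass is object:
            continue
        try:
            sig = inspect.signature(init)
        except (TypeError, ValueError):
            continue
        for name, p in sig.parameters.items():
            if name == "self" or p.kind in (
                inspect.Parameter.VAR_POSITIONAL,
                inspect.Parameter.VAR_KEYWORD,
            ):
                continue
            # Expose everything keyword-only: ordering across mixins is ambiguous.
            params.setdefault(name, p.replace(kind=inspect.Parameter.KEYWORD_ONLY))
    return list(params.values())

class AutoSignature:
    """Inherit from this (e.g. via Base) to get an informative class signature."""
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.__signature__ = inspect.Signature(_collect_init_params(cls))
```

<p>With this, <code>inspect.signature(Derived)</code> lists every mixin's parameters, while <code>Derived(...)</code> still dispatches through the normal cooperative <code>**kwargs</code> chain.</p>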
|
<python><oop><mixins><signature>
|
2024-10-19 00:59:23
| 2
| 390
|
algebruh
|
79,104,006
| 2,417,578
|
How to read stdout from subprocess safely in Python
|
<p>Two choice quotes from the documentation:</p>
<p>On <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdout" rel="nofollow noreferrer">Popen.stdout</a>:</p>
<blockquote>
<p>Warning</p>
<p>Use <code>communicate()</code> rather than <code>.stdin.write</code>, <code>.stdout.read</code> or <code>.stderr.read</code> to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.</p>
</blockquote>
<p>And on <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow noreferrer">Popen.communicate</a>:</p>
<blockquote>
<p>Note</p>
<p>The data read is buffered in memory, so do not use this method if the data size is large or unlimited.</p>
</blockquote>
<p>Great! Now what?</p>
<p>What I need is something to the effect of:</p>
<pre class="lang-none prettyprint-override"><code>with popen(["command", "--arg=foo"]) as file:
    while True:
        (out, err) = file.communicate_one_line()
        if out is None and err is None: break
        [...]
</code></pre>
<p>And I have an implementation using <a href="https://docs.python.org/3/library/selectors.html" rel="nofollow noreferrer">selectors</a>, but that seems far too complicated for a Python program. Have I overlooked a simple library which lets me work with a long-running program without trying to capture everything in a single output blob before handling it?</p>
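<p>For reference, the selector-free pattern I was hoping to avoid writing by hand looks roughly like this (a sketch, not a library recommendation): one reader thread per pipe feeding a queue, so neither OS pipe buffer can fill up, while the consumer iterates line by line without buffering the whole output.</p>

```python
import queue
import subprocess
import threading

def stream_lines(cmd):
    """Yield ("out" or "err", line) pairs as a child process produces them."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    q = queue.Queue()

    def pump(stream, name):
        # A dedicated thread per pipe drains it continuously, avoiding the
        # deadlock the docs warn about, without communicate()'s buffering.
        for line in stream:
            q.put((name, line))
        stream.close()
        q.put((name, None))  # sentinel: this stream is exhausted

    for stream, name in ((proc.stdout, "out"), (proc.stderr, "err")):
        threading.Thread(target=pump, args=(stream, name), daemon=True).start()

    finished = 0
    while finished < 2:
        name, line = q.get()
        if line is None:
            finished += 1
        else:
            yield name, line
    proc.wait()
```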
|
<python><subprocess><stdout>
|
2024-10-19 00:50:28
| 0
| 4,990
|
sh1
|
79,104,005
| 3,486,684
|
Using `hist` to bin data while grouping with `over`?
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame(
    [
        pl.Series(
            "name", ["A", "B", "C", "D"], dtype=pl.Enum(["A", "B", "C", "D"])
        ),
        pl.Series("month", [1, 2, 12, 1], dtype=pl.Int8()),
        pl.Series(
            "category", ["x", "x", "y", "z"], dtype=pl.Enum(["x", "y", "z"])
        ),
    ]
)
print(df)
</code></pre>
<pre><code>shape: (4, 3)
ββββββββ¬ββββββββ¬βββββββββββ
β name β month β category β
β --- β --- β --- β
β enum β i8 β enum β
ββββββββͺββββββββͺβββββββββββ‘
β A β 1 β x β
β B β 2 β x β
β C β 12 β y β
β D β 1 β z β
ββββββββ΄ββββββββ΄βββββββββββ
</code></pre>
<p>We can count the number of months in the dataframe that match each month of the year:</p>
<pre class="lang-py prettyprint-override"><code>from math import inf

binned_df = (
    df.select(
        pl.col.month.hist(
            bins=[x + 1 for x in range(11)],
            include_breakpoint=True,
        ).alias("binned"),
    )
    .unnest("binned")
    .with_columns(
        pl.col.breakpoint.map_elements(
            lambda x: 12 if x == inf else x, return_dtype=pl.Float64()
        )
        .cast(pl.Int8())
        .alias("month")
    )
    .drop("breakpoint")
    .select("month", "count")
)
print(binned_df)
</code></pre>
<pre><code>shape: (12, 2)
βββββββββ¬ββββββββ
β month β count β
β --- β --- β
β i8 β u32 β
βββββββββͺββββββββ‘
β 1 β 2 β
β 2 β 1 β
β 3 β 0 β
β 4 β 0 β
β 5 β 0 β
β β¦ β β¦ β
β 8 β 0 β
β 9 β 0 β
β 10 β 0 β
β 11 β 0 β
β 12 β 1 β
βββββββββ΄ββββββββ
</code></pre>
<p>(Note: there are 3 categories <code>"x"</code>, <code>"y"</code>, and <code>"z"</code>, so we expect a dataframe of shape 12 x 3 = 36.)</p>
<p>Suppose I want to bin the data per the column <code>"category"</code>. I can do the following:</p>
<pre class="lang-py prettyprint-override"><code># initialize an empty dataframe
category_binned_df = pl.DataFrame()
for cat in df["category"].unique():
    # repeat the binning logic from earlier, except on a dataframe filtered for
    # the particular category we are iterating over
    binned_df = (
        df.filter(pl.col.category.eq(cat))  # <--- the filter
        .select(
            pl.col.month.hist(
                bins=[x + 1 for x in range(11)],
                include_breakpoint=True,
            ).alias("binned"),
        )
        .unnest("binned")
        .with_columns(
            pl.col.breakpoint.map_elements(
                lambda x: 12 if x == inf else x, return_dtype=pl.Float64()
            )
            .cast(pl.Int8())
            .alias("month")
        )
        .drop("breakpoint")
        .select("month", "count")
        .with_columns(category=pl.lit(cat).cast(df["category"].dtype))
    )
    # finally, vstack ("append") the resulting dataframe
    category_binned_df = category_binned_df.vstack(binned_df)
print(category_binned_df)
</code></pre>
<pre><code>shape: (36, 3)
βββββββββ¬ββββββββ¬βββββββββββ
β month β count β category β
β --- β --- β --- β
β i8 β u32 β enum β
βββββββββͺββββββββͺβββββββββββ‘
β 1 β 1 β x β
β 2 β 1 β x β
β 3 β 0 β x β
β 4 β 0 β x β
β 5 β 0 β x β
β β¦ β β¦ β β¦ β
β 8 β 0 β z β
β 9 β 0 β z β
β 10 β 0 β z β
β 11 β 0 β z β
β 12 β 1 β z β
βββββββββ΄ββββββββ΄βββββββββββ
</code></pre>
<p>It seems to me that there should be a way to do this using <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.over.html" rel="nofollow noreferrer"><code>over</code></a>, something like <code>pl.col.month.hist(bins=...).over("category")</code>, but the very first step of trying to do so raises an error:</p>
<pre class="lang-py prettyprint-override"><code>df.select(
    pl.col.month.hist(
        bins=[x + 1 for x in range(11)],
        include_breakpoint=True,
    )
    .over("category")
    .alias("binned"),
)
</code></pre>
<pre><code>ShapeError: the length of the window expression did not match that of the group
Error originated in expression: 'col("month").hist([Series]).over([col("category")])'
</code></pre>
<p>So there's some sort of conceptual error I am making when thinking of <code>over</code>? Is there a way to use <code>over</code> here at all?</p>
|
<python><dataframe><python-polars>
|
2024-10-19 00:49:01
| 4
| 4,654
|
bzm3r
|
79,103,936
| 1,492,229
|
merging numpy arrays converts int to decimal
|
<p>I need to merge 2 arrays together.</p>
<p>so if</p>
<pre><code>a = []
</code></pre>
<p>and</p>
<pre><code>b is array([76522, 82096], dtype=int64)
</code></pre>
<p>the merge will be <code>[76522, 82096]</code></p>
<p>but I am getting the result as decimals</p>
<pre><code>array([76522., 82096.])
</code></pre>
<p>here is my code</p>
<pre><code>a = np.concatenate((a, b))
</code></pre>
<p>How can I merge both arrays while keeping the same datatype?</p>
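<p>A minimal sketch of what seems to be happening (assuming the empty list is the culprit): <code>np.concatenate</code> first converts <code>a = []</code> to a float64 array and then promotes the result to the common type, so giving the empty array an explicit integer dtype avoids the promotion.</p>

```python
import numpy as np

a = np.array([], dtype=np.int64)  # empty, but with an integer dtype
b = np.array([76522, 82096], dtype=np.int64)

merged = np.concatenate((a, b))  # stays int64

# Alternative, casting after the fact: np.concatenate(([], b)).astype(b.dtype)
```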
|
<python><arrays><numpy>
|
2024-10-18 23:33:44
| 1
| 8,150
|
asmgx
|
79,103,932
| 3,510,201
|
Type hint a custom dictionary object as TypedDict
|
<p>Is it possible to get the <code>TypedDict</code> values passed as a generic within a class to enable type hinting?</p>
<p>I have a dictionary where I would like to be able to get the type annotations from the passed generic <code>TypeDict</code> within the <code>__getattr__</code>.</p>
<p>Given this example. I want to get code completion and highlighting when using dot notation. Is this possible?</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict

class MyDict[T](dict):
    def __init__(self, base_dict: T) -> None:
        super().__init__()
        self._data = base_dict

    def __getattr__(self, item):
        dots = item.split(".")
        data = self._data
        for dot in dots:
            data = data.get(dot)
        if isinstance(data, dict):
            return MyDict(data)
        if data is None:
            raise ValueError("Key not found")
        return data

class TestDef(TypedDict):
    foo: int
    bar: str
    baz: dict[str, bool]

data_dict: TestDef = {
    "foo": 10,
    "bar": "bar",
    "baz": {"qux": True}
}

d = MyDict(data_dict)

v1 = d.foo
print(v1)

v2 = d.baz.qux
print(v2)
</code></pre>
|
<python><python-typing>
|
2024-10-18 23:27:47
| 1
| 539
|
Jerakin
|
79,103,866
| 807,797
|
Stopping asyncio program using file input
|
<p>What specific code needs to change in the Python 3.12 example below in order for the program <code>myReader.py</code> to be successfully halted every time the line "Stop, damnit!" gets printed into <code>sourceFile.txt</code> by the program <code>myWriter.py</code>?</p>
<p><strong>THE PROBLEM:</strong></p>
<p>The problem is that <code>myReader.py</code> only sometimes stops when the line "Stop, damnit!" is printed into <code>sourceFile.txt</code>.</p>
<p>One workaround is to have <code>myWriter.py</code> continue to write "Stop, damnit!" again and again to <code>sourceFile.txt</code>. This can cause <code>myReader.py</code> to eventually halt. But the problem is that <code>myWriter.py</code> has to continue writing the same line for arbitrarily long periods of time. We have tested continuing for 15 minutes. But there might be situations in which <code>myWriter.py</code> might need to continue writing "Stop, damnit!" every second for 30 minutes. And there might be other times when <code>myWriter.py</code> might need to continue writing "Stop, damnit!" every second for only one or two minutes.</p>
<p>The problem seems to be that the API calls being made by <code>myReader.py</code> take variable amounts of time to return, so that the backlog can become arbitrarily long sometimes, but not always. And it seems that the <code>myReader.py</code> loop is not able to see the "Stop, damnit!" line unless and until the many asynchronous API call tasks have completed.</p>
<p>The solution would ideally involve having <code>myReader.py</code> actually hear and respond to a single writing of "Stop, damnit!" instead of needing to have "Stop, damnit!" written so many times.</p>
<p><strong>WRITER PROGRAM:</strong></p>
<p>The <code>myWriter.py</code> program writes a lot of things. But the relevant part of <code>myWriter.py</code> which writes the stop command is:</p>
<pre><code>import time

# Repeat 900 times to test output. Sleep for 1 second between each.
for i in range(900):
    writeToFile("Stop, damnit!")
    time.sleep(1)
</code></pre>
<p><strong>READER PROGRAM:</strong></p>
<p>The relevant portion of <code>myReader.py</code> is as follows:</p>
<pre><code>import os
import platform
import asyncio
import aiofiles

BATCH_SIZE = 10

def get_source_file_path():
    if platform.system() == 'Windows':
        return 'C:\\path\\to\\sourceFile.txt'
    else:
        return '/path/to/sourceFile.txt'

async def send_to_api(linesBuffer):
    success = runAPI(linesBuffer)
    return success

async def read_source_file():
    source_file_path = get_source_file_path()
    counter = 0
    print("Reading source file...")
    print("source_file_path: ", source_file_path)
    # Detect the size of the file located at source_file_path and store it in the variable file_size.
    file_size = os.path.getsize(source_file_path)
    print("file_size: ", file_size)
    taskCountList = []
    background_tasks = set()
    async with aiofiles.open(source_file_path, 'r') as source_file:
        await source_file.seek(0, os.SEEK_END)
        linesBuffer = []
        while True:
            # Always make sure that file_size is the current size:
            line = await source_file.readline()
            new_file_size = os.path.getsize(source_file_path)
            if new_file_size < file_size:
                print("The file has been truncated.")
                print("old file_size: ", file_size)
                print("new_file_size: ", new_file_size)
                await source_file.seek(0, os.SEEK_SET)
                file_size = new_file_size
                # Allocate a new list instead of clearing the current one
                linesBuffer = []
                counter = 0
                continue
            line = await source_file.readline()
            if line:
                new_line = str(counter) + " line: " + line
                print(new_line)
                linesBuffer.append(new_line)
                print("len(linesBuffer): ", len(linesBuffer))
                if len(linesBuffer) == BATCH_SIZE:
                    print("sending to api...")
                    task = asyncio.create_task(send_to_api(linesBuffer))
                    background_tasks.add(task)
                    task.add_done_callback(background_tasks.discard)
                    pendingTasks = len(background_tasks)
                    taskCountList.append(pendingTasks)
                    print("")
                    print("pendingTasks: ", pendingTasks)
                    print("")
                    # Do not clear the buffer; allocate a new one:
                    linesBuffer = []
                counter += 1
                print("counter: ", counter)
                # Detect whether or not the present line is the last line in the file.
                # If it is the last line in the file, then write whatever batch
                # we have even if it is not complete.
                if "Stop, damnit!" in line:
                    # Print the next line 30 times to simulate a large file.
                    for i in range(30):
                        print("LAST LINE IN FILE FOUND.")
                        # Sleep for 1 second to simulate a large file.
                        await asyncio.sleep(1)
                    # Omitting other stuff for brevity.
                    break
            else:
                await asyncio.sleep(0.1)

async def main():
    await read_source_file()

if __name__ == '__main__':
    asyncio.run(main())
</code></pre>
|
<python><python-3.x><python-asyncio><python-aiofiles>
|
2024-10-18 22:39:20
| 1
| 9,239
|
CodeMed
|
79,103,833
| 1,492,229
|
How to filter dataframe based on array of index
|
<p>I am using Python</p>
<p>X is a dataframe that has these values</p>
<pre><code>In [29]: X
Out[29]:
RepID
76758 207355
5787 15900
101140 273993
96040 260308
82096 221946
65858 178020
40664 109821
56044 151664
76522 206735
12478 33774
</code></pre>
<p>test_indices is an array that has index numbers</p>
<pre><code>In [30]: test_indices
Out[30]: array([7, 8])
</code></pre>
<p>I am trying to filter X into a new variable called X_test</p>
<p>where X_test contains the records at the positions given in test_indices.</p>
<p>I tried this</p>
<pre><code>X_test = X[test_indices]
</code></pre>
<p>but I got an error</p>
<pre><code>KeyError: "None of [Index([7, 8], dtype='int32')] are in the [columns]"
</code></pre>
<p>How can I fix this problem?</p>
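<p>A small sketch of the distinction (toy data, not my real frame): <code>X[test_indices]</code> treats the values as column labels, which is why the <code>KeyError</code> mentions columns; positional row selection goes through <code>iloc</code>, while <code>loc</code> would select by index label instead.</p>

```python
import numpy as np
import pandas as pd

X = pd.DataFrame({"RepID": [207355, 15900, 273993]}, index=[76758, 5787, 101140])
test_indices = np.array([1, 2])

# iloc selects rows by integer position (what train/test splitters return).
X_test = X.iloc[test_indices]
```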
|
<python><dataframe>
|
2024-10-18 22:21:37
| 1
| 8,150
|
asmgx
|
79,103,654
| 15,835,974
|
Edit the column that is nested into a array that is nested into a struct
|
<p>How can I edit the <code>I</code> column of my DataFrame by applying my <code>example_loop</code> function to it?</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import DataFrame, SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import ArrayType, IntegerType, StringType, StructField, StructType

@udf(returnType=IntegerType())
def example(i, a):
    if a == "querty":
        i += 5
    else:
        i += 10
    return i

@udf(returnType=ArrayType(IntegerType()))
def example_loop(i_elements, a):
    return [example(i, a) for i in i_elements]

def main():
    spark = SparkSession.builder.getOrCreate()

    dataDF = [
        ('qwerty', 'ytrewq', ('Jon', 'Smith', [('huhu', 'haha', 14)], 20))
    ]

    schema = StructType([
        StructField("A", StringType()),
        StructField("B", StringType()),
        StructField("C",
            StructType([
                StructField("D", StringType()),
                StructField("E", StringType()),
                StructField("F",
                    ArrayType(
                        StructType([
                            StructField("G", StringType()),
                            StructField("H", StringType()),
                            StructField("I", IntegerType())
                        ])
                    )
                ),
                StructField("K", IntegerType()),
            ])
        )
    ])

    df: DataFrame = spark.createDataFrame(data=dataDF, schema=schema)
    df.show(truncate=False)

    df = df.withColumn("C", col("C").withField("K", example(col("C.K"), col("A"))))
    df.show(truncate=False)

    df = df.withColumn("C", col("C").withField("F", example_loop(col("C.F.I"), col("A"))))
    df.show(truncate=False)

if __name__ == "__main__":
    main()
</code></pre>
|
<python><dataframe><pyspark>
|
2024-10-18 20:57:54
| 1
| 597
|
jeremie bergeron
|
79,103,633
| 6,357,916
|
nvcc is not installed despite successfully running conda install command
|
<p>I followed following steps to setup conda environment with python 3.8, CUDA 11.8 and pytorch 2.4.1:</p>
<pre><code>$ conda create -n py38_torch241_CUDA118 python=3.8
$ conda activate py38_torch241_CUDA118
$ conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
</code></pre>
<p>Python and pytorch seem to have installed correctly:</p>
<pre><code>$ python --version
Python 3.8.20
$ pip list | grep torch
torch 2.4.1
torchaudio 2.4.1
torchvision 0.20.0
</code></pre>
<p>But when I try to check CUDA version, I realise that <code>nvcc</code> is not installed:</p>
<pre><code>$ nvcc
Command 'nvcc' not found, but can be installed with:
sudo apt install nvidia-cuda-toolkit
</code></pre>
<p>This also caused issues in the further setup of some git repositories which require <code>nvcc</code>. Do I need to run <code>sudo apt install nvidia-cuda-toolkit</code> as suggested above? Shouldn't the above <code>conda install</code> command install <code>nvcc</code>? I tried these steps again after completely deleting all conda packages and environments, but it did not help.</p>
<p>Below is some relevant information that might help debug this issue:</p>
<pre><code>$ conda --version
conda 24.5.0
$ nvidia-smi
Sat Oct 19 02:12:06 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name User-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA RTX 2000 Ada Gene... Off | 00000000:01:00.0 Off | N/A |
| N/A 48C P0 588W / 35W | 8MiB / 8188MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1859 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+
$ which nvidia-smi
/usr/bin/nvidia-smi
</code></pre>
<p>Note that my machine runs NVIDIA RTX 2000 Ada Generation. Also above <code>nvidia-smi</code> command says I am running CUDA 12.4. This driver I have installed manually long back when I did not have conda installed on the machine.</p>
<p>I tried setting the <code>CUDA_HOME</code> path to my conda environment, but that did not help either:</p>
<pre><code>$ export CUDA_HOME=$CONDA_PREFIX
$ echo $CUDA_HOME
/home/User-M/miniconda3/envs/FairMOT_py38_torch241_CUDA118
$ which nvidia-smi
/usr/bin/nvidia-smi
$ nvcc
Command 'nvcc' not found, but can be installed with:
sudo apt install nvidia-cuda-toolkit
</code></pre>
|
<python><pytorch><anaconda><cuda><conda>
|
2024-10-18 20:49:05
| 0
| 3,029
|
MsA
|
79,103,538
| 11,069,614
|
function returning list with one extra length than it should
|
<p>I have data that looks like this:</p>
<pre><code>data = [[['INS', 'Y', '18', '021', '28', 'A', '', '', 'AC'],
['REF', '0F', '816383217'],
['HD', '021', '', 'EPO', 'Copayment Level -1', 'IND']],
[['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
['REF', '0F', '734419065']],
[['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
['REF', '0F', '778954065']]]
</code></pre>
<p>I want to loop through it and create a pandas series of the copayment_level which is the number after the dash <code>-</code>. If that element does not exist in the list of lists then it should append <code>None</code> to the new list. I created a function to do this but the problem is it returns one extra than it should.</p>
<pre><code>def copay_level():
    l = []
    for i in data:
        for lst in i:
            for x in lst:
                if x.startswith('Copayment Level'):
                    x = x.split('-')
                    l.append(x[1])
        else:
            l.append(None)
    return l

copayment_level = pd.Series(copay_level())
print(copayment_level)

0        1
1    None
2    None
3    None
dtype: object
</code></pre>
<p>Can't figure out why it's doing this. Thanks.</p>
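<p>For comparison, a sketch of the behaviour I actually want (one value per record, deciding only after scanning the whole record; <code>copay_level_fixed</code> is my guess at a fix, not verified against the real feed):</p>

```python
def copay_level_fixed(records):
    levels = []
    for record in records:
        found = None  # default when nothing in the record matches
        for segment in record:
            for field in segment:
                if field.startswith('Copayment Level'):
                    found = field.split('-')[1]
        levels.append(found)  # exactly one append per record
    return levels

data = [[['INS', 'Y', '18', '021', '28', 'A', '', '', 'AC'],
         ['REF', '0F', '816383217'],
         ['HD', '021', '', 'EPO', 'Copayment Level -1', 'IND']],
        [['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
         ['REF', '0F', '734419065']],
        [['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
         ['REF', '0F', '778954065']]]
```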
|
<python>
|
2024-10-18 20:14:38
| 2
| 392
|
Ben Smith
|
79,103,405
| 289,037
|
what causes a gcloud functions deploy failure with no error message (OperationError: code=13, message=None)
|
<p>deploying from a GitHub workflow with this command</p>
<pre><code> gcloud functions deploy "$FUNCTION_NAME" \
--entry-point "entry" \
--region northamerica-northeast1 \
--runtime python39 \
--set-env-vars=SENDGRID_API_KEY="$SENDGRID_API_KEY" \
--source "my-source-folder" \
--timeout "540s" \
--trigger-topic "my-trigger-topic"
</code></pre>
<p>I get this error</p>
<pre><code>Deploying function (may take a while - up to 2 minutes)...
..
For Cloud Build Logs, visit: https://console.cloud.google.com/cloud-build/builds;region=northamerica-northeast1/<<snip>>
...............................failed.
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=None
Error: Process completed with exit code 1.
</code></pre>
<p>I have another Function that deploys successfully with the same command-line args.</p>
<p>What will fix this, and/or how do I troubleshoot it?</p>
|
<python><google-cloud-functions>
|
2024-10-18 19:13:35
| 1
| 770
|
PMorganCA
|
79,103,393
| 9,315,690
|
How can I detect whether a given Python module was compiled with mypyc?
|
<p>I have a Python program, and I want to detect whether it was compiled with mypyc or not so I can include this in the version information of the program for debugging purposes. But how can I do this? I tried looking through <a href="https://github.com/python/mypy/blob/c9d4c61d9c80b02279d1fcc9ca8a1974717b5e1c/mypyc/doc/dev-intro.md" rel="nofollow noreferrer">dev-intro.md</a> in the mypyc docs, but I couldn't find anything useful. I also looked through <a href="https://mypyc.readthedocs.io/en/latest/differences_from_python.html" rel="nofollow noreferrer">Differences from Python</a> in the mypy docs, but I couldn't find anything there either.</p>
<p>How can I detect whether a Python module was compiled with mypyc?</p>
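<p>The closest thing I have found is a heuristic (a sketch with an invented helper name, and it cannot distinguish mypyc from other extension builders such as Cython): a mypyc-compiled module is imported as a C extension, so its <code>__file__</code> ends in <code>.so</code>/<code>.pyd</code> rather than <code>.py</code>.</p>

```python
def looks_compiled(module) -> bool:
    """Best-effort guess: True if the module was imported as a C extension.

    mypyc, Cython, and hand-written extensions all look the same here,
    so treat the result as a hint, not proof of mypyc specifically.
    """
    filename = getattr(module, "__file__", "") or ""
    return filename.endswith((".so", ".pyd"))
```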
|
<python><compilation><detection><mypyc>
|
2024-10-18 19:07:45
| 2
| 3,887
|
Newbyte
|
79,103,185
| 4,351,030
|
Calculate a mean value for column C for all dates less than the date in row R with Pandas
|
<p>I have a pandas dataframe with >101K rows from which I am trying to calculate a mean value for the <code>won</code> column based upon date. The logic is that for each row, find the mean value of <code>won</code> for all rows where <code>row['created_on']</code> < current <code>row['created_on']</code>. Please note that I am not trying to achieve a cumulative mean value for the <code>won</code> column <a href="https://stackoverflow.com/questions/59759856/get-cumulative-mean-among-groups-in-python">as this question was identified as a duplicate</a>. The suggested duplicate provides a means of calculating a cumulative value by row but I'm looking to try and calculate a cumulative value by <em>date</em>, i.e., all values for <code>rolling_won_prop</code> should be the same for a given date but should not be cumulative by row.</p>
<p>I can calculate a simple value with</p>
<pre><code>def get_win_prop(df, d) -> float:
    mask = (df['created_on'] < d)
    prop = df[mask].won.mean()
    return prop

get_win_prop(d, '2022-10-25')
</code></pre>
<p>I get no errors when I try to use this function with <code>pd.assign()</code> but all values end up being <code>NaN</code>:</p>
<pre><code>d.assign(rolling_won_prop = lambda x: get_win_prop(x, x.created_on))
</code></pre>
<p><a href="https://i.sstatic.net/Gk347uQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gk347uQE.png" alt="enter image description here" /></a></p>
<p>What am I missing? I would have assumed the <code>get_win_prop()</code> fx to be what I needed. Is there a more efficient way to do this in pandas? Note that I have supplied a sample of data below but that I will need to group by a customer ID column before calculating the <code>get_win_prop()</code> value.</p>
<h1>UPDATE 1</h1>
<p>I have come up with a sample solution that works for the toy dataset below but may not scale well:</p>
<pre><code>result = []
for i in d.created_on.unique():
prev_vals = d[d['created_on'] < i]
result.append(prev_vals.won.mean())
d.merge(pd.DataFrame({'created_on': d.created_on.unique(),
'rolling_won_prop ': result}), how = 'left')
</code></pre>
<p><a href="https://i.sstatic.net/xFkM21iI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFkM21iI.png" alt="enter image description here" /></a></p>
<h1>Update 2</h1>
<p>I hacked together an inelegant solution with a <code>for</code> loop that produces appropriate results:</p>
<pre><code>results = []
for i in d.created_on.unique():
prev_vals = d[d['created_on'] < i]
results.append(prev_vals.won.mean())
d.merge(pd.DataFrame({'created_on': d.created_on.unique(),
'rolling_won_prop ': results}), how = 'left')
</code></pre>
<p><a href="https://i.sstatic.net/2fJxxaTM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fJxxaTM.png" alt="enter image description here" /></a></p>
<p>Given I have an additional column for customer ID (not present in the MWE presented here), I can adapt the above solution to group by the customer ID, but this is still not ideal. I'd much rather have this solution in a pandas framework, if possible.</p>
<p>Sample data</p>
<pre><code>import pandas as pd
from pandas import Timestamp
d = pd.DataFrame({'created_on': [Timestamp('2022-09-22 00:00:00'), Timestamp('2022-10-14 00:00:00'),Timestamp('2022-10-19 00:00:00'),Timestamp('2022-10-25 00:00:00'),Timestamp('2022-11-02 00:00:00'),
Timestamp('2022-11-04 00:00:00'),Timestamp('2022-11-16 00:00:00'),Timestamp('2022-11-28 00:00:00'),Timestamp('2022-11-28 00:00:00'),Timestamp('2022-12-07 00:00:00'),
Timestamp('2022-12-21 00:00:00'),Timestamp('2022-12-21 00:00:00'),Timestamp('2022-12-21 00:00:00'),Timestamp('2022-12-21 00:00:00')],
'n_lines': [7, 3, 7, 6, 6, 4, 5, 3, 10, 3, 6, 6, 9, 6],
'n_pieces': [606, 202, 706, 765, 255, 803, 1004, 2702, 1909, 546, 555, 555, 558,555],
'quote_total': [1780.4299999999998, 3575.4600000000005, 11762.079999999994, 6725.160000000002, 995.9300000000001, 1644.2100000000003, 2620.2299999999996,
8082.090000000001, 5302.320000000001, 1959.7599999999998, 8734.67, 9792.3, 0.0, 9720.71],
'won': [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0]})
</code></pre>
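<p>For reference, one vectorised sketch of the logic described in the updates (an illustration over a toy stand-in, not a tested drop-in; the <code>rolling_won_prop</code> name follows the question): aggregate <code>won</code> per date, take cumulative totals shifted by one date so each date only sees strictly earlier dates, then merge back.</p>

```python
import pandas as pd

# toy stand-in for the question's dataframe
d = pd.DataFrame({
    "created_on": pd.to_datetime(
        ["2022-09-22", "2022-10-14", "2022-10-14", "2022-10-19"]
    ),
    "won": [1, 0, 1, 0],
})

# per-date totals, then cumulative totals over strictly earlier dates
daily = d.groupby("created_on")["won"].agg(["sum", "count"])
prior = daily.cumsum().shift()  # shift(1): exclude the current date
daily["rolling_won_prop"] = prior["sum"] / prior["count"]

out = d.merge(
    daily[["rolling_won_prop"]],
    left_on="created_on", right_index=True, how="left",
)
```

Every row sharing a date gets the same value, and the earliest date gets <code>NaN</code>; a per-customer version would wrap the same steps in a <code>groupby</code> over the customer ID.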
|
<python><pandas>
|
2024-10-18 17:55:21
| 1
| 3,334
|
Steven
|
79,102,985
| 2,233,500
|
Asyncio + Claude raises a "RuntimeError: Event loop is closed"
|
<p>I have a very simple toy example that uses Claude and asyncio.
It is a loop that calls <code>asyncio.run()</code> several times.
Sometimes, a</p>
<blockquote>
<p>RuntimeError: Event loop is closed</p>
</blockquote>
<p>exception is raised during the iterations and I don't understand why.
I guess I'm not using asyncio right, but I cannot understand where.
The code seems to run fine though.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from time import perf_counter
import numpy as np
from anthropic import Anthropic, AsyncAnthropic
async def coroutine1():
prompt = "Who discovered gravity. Answer in 10 words."
client = AsyncAnthropic()
message = await client.messages.create(
model="claude-3-5-sonnet-20240620",
max_tokens=1024,
messages=[{"role": "user", "content": prompt}],
)
return message.content[0].text
async def coroutine2():
prompt = "Who discovered radioactivity. Answer in 10 words."
client = AsyncAnthropic()
message = await client.messages.create(
model="claude-3-5-sonnet-20240620",
max_tokens=1024,
messages=[{"role": "user", "content": prompt}],
)
return message.content[0].text
async def main():
async with asyncio.TaskGroup() as tg:
t0 = tg.create_task(coroutine1())
t1 = tg.create_task(coroutine2())
print(t0.result())
print(t1.result())
if __name__ == "__main__":
for i in range(10):
tic = perf_counter()
asyncio.run(main())
toc = perf_counter()
print(f"Elapsed time = {toc - tic:.3f}")
</code></pre>
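<p>For what it's worth, the traceback below points at <code>httpx</code> client cleanup running after the loop has closed. A hedged sketch of the general pattern, using a dummy client so it stands alone: close the client inside the coroutine, while the loop is still alive, rather than leaving it to garbage collection after <code>asyncio.run()</code> returns. The real fix would apply the same <code>finally</code> (or async context manager) idea to <code>AsyncAnthropic</code>, assuming it exposes async cleanup.</p>

```python
import asyncio

class DummyAsyncClient:
    """Stand-in for AsyncAnthropic so the sketch runs without the SDK."""
    async def ask(self, prompt: str) -> str:
        await asyncio.sleep(0)    # pretend to await a network response
        return f"answer to: {prompt}"

    async def aclose(self) -> None:
        await asyncio.sleep(0)    # async cleanup needs a running loop

async def main() -> list[str]:
    client = DummyAsyncClient()
    try:
        return list(await asyncio.gather(
            client.ask("gravity"), client.ask("radioactivity")
        ))
    finally:
        # close while the loop is still running, not via garbage
        # collection after asyncio.run() has already closed it
        await client.aclose()

results = [asyncio.run(main()) for _ in range(3)]
```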
<p>Edit below. Here is the full traceback:</p>
<pre><code>Task exception was never retrieved
future: <Task finished name='Task-40' coro=<AsyncClient.aclose() done, defined at /Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpx/_client.py:2024> exception=RuntimeError('Event loop is closed')>
Traceback (most recent call last):
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpx/_client.py", line 2031, in aclose
await self._transport.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 389, in aclose
await self._pool.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 313, in aclose
await self._close_connections(closing_connections)
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 305, in _close_connections
await connection.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 171, in aclose
await self._connection.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/http11.py", line 265, in aclose
await self._network_stream.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 55, in aclose
await self._stream.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/anyio/streams/tls.py", line 201, in aclose
await self.transport_stream.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1287, in aclose
self._transport.close()
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/selector_events.py", line 1206, in close
super().close()
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/selector_events.py", line 871, in close
self._loop.call_soon(self._call_connection_lost, None)
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/base_events.py", line 792, in call_soon
self._check_closed()
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/base_events.py", line 539, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
Task exception was never retrieved
future: <Task finished name='Task-41' coro=<AsyncClient.aclose() done, defined at /Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpx/_client.py:2024> exception=RuntimeError('Event loop is closed')>
Traceback (most recent call last):
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpx/_client.py", line 2031, in aclose
await self._transport.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 389, in aclose
await self._pool.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 313, in aclose
await self._close_connections(closing_connections)
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 305, in _close_connections
await connection.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 171, in aclose
await self._connection.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_async/http11.py", line 265, in aclose
await self._network_stream.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 55, in aclose
await self._stream.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/anyio/streams/tls.py", line 201, in aclose
await self.transport_stream.aclose()
File "/Users/foo/Desktop/async/venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1287, in aclose
self._transport.close()
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/selector_events.py", line 1206, in close
super().close()
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/selector_events.py", line 871, in close
self._loop.call_soon(self._call_connection_lost, None)
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/base_events.py", line 792, in call_soon
self._check_closed()
File "/Users/foo/.pyenv/versions/3.12.1/lib/python3.12/asyncio/base_events.py", line 539, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
</code></pre>
|
<python><python-asyncio><claude>
|
2024-10-18 16:53:51
| 2
| 867
|
Vincent Garcia
|
79,102,797
| 9,945,539
|
Varying embedding dim due to changing padding in batch size
|
<p>I want to train a simple neural network, which has <strong>embedding_dim</strong> as a parameter:</p>
<pre><code>class BoolQNN(nn.Module):
def __init__(self, embedding_dim):
super(BoolQNN, self).__init__()
self.fc1 = nn.Linear(embedding_dim, 64)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(64, 1)
def forward(self, question_emb, passage_emb):
combined = torch.cat((question_emb, passage_emb), dim=1)
x = self.fc1(combined)
x = self.relu(x)
x = self.fc2(x)
return torch.sigmoid(x)
</code></pre>
<p>To load the data I used torch's <code>DataLoader</code> with a custom <code>collate_fn</code>.</p>
<pre><code>train_dataset = BoolQDataset(train_data, pretrained_embeddings)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True,collate_fn=collate_fn_padd)
model = BoolQNN(301)
</code></pre>
<p>The <code>collate_fn_padd</code> function looks like the following:</p>
<pre><code>def collate_fn_padd(batch):
questions, passages, labels = zip(*batch)
questions = [torch.tensor(q) for q in questions]
passages = [torch.tensor(p) for p in passages]
padded_questions = pad_sequence(questions, batch_first=True, padding_value=0)
padded_passages = pad_sequence(passages, batch_first=True, padding_value=0)
labels = torch.tensor(labels, dtype=torch.float32)
return padded_questions, padded_passages, labels
</code></pre>
<p><strong>The problem:</strong> For every batch I want to train my model with, the embedded text gets padded to a different length (padding extends to the longest sequence of the current batch).</p>
<p>That means that my embedding dim/input size for the linear layer in my neural network changes from batch to batch, although I want the size to be the same for every batch.</p>
<p>Due to that, I receive errors like this: <strong>mat1 and mat2 shapes cannot be multiplied (16x182 and 301x64)</strong></p>
<p>Is it possible to adjust the <code>collate_fn_padd</code> function so that it pads the sequences to the same length, independent of the batch?</p>
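<p>One way to sketch the idea, in plain Python so it stands alone: pad or truncate every sequence to one fixed length, so the padded size no longer depends on the longest item in each batch. <code>MAX_LEN</code> is an assumed global cap, not something from the question.</p>

```python
MAX_LEN = 300  # assumed global cap on sequence length

def pad_to_fixed(seq, max_len=MAX_LEN, pad_value=0):
    """Truncate, then right-pad, a sequence to exactly max_len items."""
    seq = list(seq)[:max_len]
    return seq + [pad_value] * (max_len - len(seq))
```

Inside <code>collate_fn_padd</code> the same idea would be applied per tensor, e.g. with <code>torch.nn.functional.pad</code> after <code>pad_sequence</code>, so every batch has an identical sequence length.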
|
<python><text><nlp><padding><data-preprocessing>
|
2024-10-18 15:54:51
| 1
| 392
|
samuel gast
|
79,102,700
| 11,561,121
|
Is my integration test well structured and why is it returning import error
|
<p>I am learning mock and trying it on a personal project.</p>
<p>This is my project structure:</p>
<pre><code>project/
β
βββ src/
β βββ My_app/
β βββ __init__.py
β βββ application/
β β βββ main_code.py
β β βββ __init__.py
β βββ infrastructure/
β β βββ __init__.py
β β βββ data_quality.py
β β βββ s3_utils.py
β βββ settings/
β βββ __init__.py
β βββ s3_utils.py
β
βββ tests/
βββ integration_tests/
βββ application/
βββ test_main_code.py
</code></pre>
<p>The aim is to write an integration test for <code>main_code.py</code></p>
<h1><strong><strong>simplified version of main_code.py</strong></strong></h1>
<pre><code>import My_app.settings.config as stg
from awsglue.utils import getResolvedOptions
from My_app.infrastructure.data_quality import evaluateDataQuality, generateSchema
from My_app.infrastructure.s3_utils import csvReader, dataframeWriter
from pyspark.sql import SparkSession
def main(argv: List[str]) -> None:
args = getResolvedOptions(
argv,
[
'JOB_NAME',
'S3_BRONZE_BUCKET_NAME',
'S3_PRE_SILVER_BUCKET_NAME',
'S3_BRONZE_PATH',
'S3_PRE_SILVER_PATH',
'S3_DATA_QUALITY_LOGS_BUCKET_NAME',
],
)
s3_bronze_bucket_name = args['S3_BRONZE_BUCKET_NAME']
s3_pre_silver_bucket_name = args['S3_PRE_SILVER_BUCKET_NAME']
s3_bronze_path = args['S3_BRONZE_PATH']
s3_pre_silver_path = args['S3_PRE_SILVER_PATH']
s3_data_quality_logs_bucket_name = args['S3_DATA_QUALITY_LOGS_BUCKET_NAME']
spark = SparkSession.builder.getOrCreate() # TODO replace this init with common method (waiting for S3 part)
spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
for table in list(stg.data_schema.keys()):
raw_data = stg.data_schema[table].columns.to_dict()
schema = generateSchema(raw_data)
df = csvReader(spark, s3_bronze_bucket_name, s3_bronze_path, table, schema, '\t')
(quality_df, table_ingestion_status) = evaluateDataQuality(spark, df, table)
dataframeWriter(
quality_df,
s3_data_quality_logs_bucket_name,
'data-quality/',
'logs',
'date',
'append',
)
if __name__ == '__main__':
main(sys.argv)
</code></pre>
<h1><strong><strong>simplified version of data_quality.py</strong></strong></h1>
<pre><code>import My_app.settings.config as stg
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsgluedq.transforms import EvaluateDataQuality
from pyspark.sql import SparkSession, Row
from pyspark.sql import functions as F
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.types import StructType, IntegerType, StringType, FloatType, DateType
def typeOf(t: str) -> IntegerType | StringType | FloatType | DateType:
...
return StringType()
def generateSchema(columns_dict: dict) -> StructType:
...
return schema
def evaluateDataQuality(spark: SparkSession, df: DataFrame, table: str) -> (DataFrame, bool):
...
return (
EvaluateDataQuality.apply(...)
.toDF(),
True,
)
</code></pre>
<h1><strong><strong>simplified version of s3_utils.py</strong></strong></h1>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.types import StructType
def csvReader(spark: SparkSession, bucket: str, path: str, table: str, schema: StructType, sep: str) -> DataFrame:
"""Reads a CSV file as a Dataframe from S3 using user parameters for format."""
return (
spark.read.format('csv')
.option('header', 'true')
.option('sep', sep)
.schema(schema)
.load(f's3a://{bucket}/{path}/{table}.csv')
)
def dataframeWriter(
df: DataFrame, bucket: str, path: str, table: str, partition_key: str, mode: str = 'overwrite'
) -> None:
"""Writes a dataframe in S3 in parquet format using user parameters to define path and partition key."""
df.write.partitionBy(partition_key).mode(mode).parquet(f's3a://{bucket}/{path}/{table}/')
</code></pre>
<h1><strong>What I want to do</strong></h1>
<p>Write an integration test for <code>main_code.py</code> while:</p>
<ul>
<li>Mocking <code>csvReader</code> function and replace it with <code>local_csvReader</code>.</li>
<li>Mocking <code>dataframeWriter</code> function and replace it with <code>local_dataframeWriter</code>.</li>
<li>Mocking the import from <code>awsgluedq</code> in order to avoid installing it locally.</li>
</ul>
<h1><strong>What I did:</strong></h1>
<h1><strong><strong>test_main_code.py</strong></strong></h1>
<pre><code>"""Module that contains unit tests for My_app pre silver job."""
import os
from unittest import TestCase
from unittest.mock import patch, Mock
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType
def local_csvReader(spark: SparkSession, bu: str, pa: str, table: str, schema: StructType, sep: str):
"""Mocked function that replaces real csvReader. this one reads from local rather than S3."""
return (
spark.read.format('csv')
.option('header', 'true')
.option('sep', ';')
.schema(schema)
.load(f'./tests/integration_tests/input_mock/{table}.csv')
)
def local_dataframeWriter(df, bu: str, pa: str, table: str, partition_key: str):
"""Mocked function that replaces real dataframeWriter. this one writes in local rather than S3."""
output_dir = f'./tests/integration_tests/output_mock/{table}/'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
df.write.partitionBy(partition_key).mode('overwrite').parquet(output_dir)
class IntegrationTest(TestCase):
@classmethod
def setUpClass(cls):
cls.spark = SparkSession.builder.master('local').appName('TestPerfmarketSilver').getOrCreate()
cls.spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
@patch('My_app.application.main_code.getResolvedOptions')
@patch('My_app.application.main_code.csvReader', side_effect=local_csvReader)
@patch('My_app.application.main_code.dataframeWriter', side_effect=local_dataframeWriter)
def test_main(self, mock_csvreader, mock_datawriter, mocked_get_resolved_options: Mock):
"""Test the main function with local CSV and Parquet output."""
import My_app.application.main_code as main_code
import My_app.settings.config as stg
import tests.integration_tests.settings.config as stg_new
stg.data_schema = stg_new.data_schema_test
expected_results = {'chemins': {'nbRows': 8}}
# Mock the resolved options
mocked_get_resolved_options.return_value = {
'JOB_NAME': 'test_job',
'S3_BRONZE_BUCKET_NAME': 'test_bronze',
'S3_PRE_SILVER_BUCKET_NAME': 'test_pre_silver',
'S3_BRONZE_PATH': './tests/integration_tests/input_mock',
'S3_PRE_SILVER_PATH': './tests/integration_tests/output_mock',
'S3_DATA_QUALITY_LOGS_BUCKET_NAME': 'test_dq',
}
main_code.main([])
for table in stg.data_schema.keys():
# Verify that the output Parquet file is created
output_path = f'./tests/integration_tests/output_mock/{table}/'
self.assertTrue(os.path.exists(output_path))
# Read the written Parquet file and check the data
written_df = self.spark.read.parquet(output_path)
self.assertEqual(written_df.count(), expected_results[table]['nbRows']) # Check row count
self.assertTrue(
set(
[column_data['bronze_name'] for column_data in stg.data_schema[table]['columns'].to_dict().values()]
)
== set(written_df.columns)
)
# Clean up
os.system(f'rm -rf ./tests/integration_tests/output_mock/{table}/')
</code></pre>
<h1><strong>Questions:</strong></h1>
<p>Running test class is returning:</p>
<pre><code>======================================================================
ERROR: test_main (tests.integration_tests.application.test_main_code.IntegrationTest)
Test the main function with local CSV and Parquet output.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1248, in _dot_lookup
return getattr(thing, comp)
AttributeError: module 'My_app.application' has no attribute 'main_code'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1376, in patched
with self.decoration_helper(patched,
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1358, in decoration_helper
arg = exit_stack.enter_context(patching)
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/contextlib.py", line 492, in enter_context
result = _cm_type.__enter__(cm)
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1431, in __enter__
self.target = self.getter()
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1618, in <lambda>
getter = lambda: _importer(target)
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1261, in _importer
thing = _dot_lookup(thing, comp, import_path)
File "/Users/me/.asdf/installs/python/3.10.14/lib/python3.10/unittest/mock.py", line 1250, in _dot_lookup
__import__(import_path)
File "/Users/me/IdeaProjects/project_root/apps/project/src/My_app/application/main_code.py", line 10, in <module>
from My_app.infrastructure.data_quality import evaluateDataQuality, generateSchema
File "/Users/me/IdeaProjects/project_root/apps/project/src/My_app/infrastructure/data_quality.py", line 4, in <module>
from awsgluedq.transforms import EvaluateDataQuality
ModuleNotFoundError: No module named 'awsgluedq'
----------------------------------------------------------------------
Ran 1 test in 2.114s
FAILED (errors=1)
</code></pre>
<ul>
<li><p>Is my test class well structured? Am I importing <code>main_code</code> correctly?
I don't think so, because of: <code>AttributeError: module 'My_app.application' has no attribute 'main_code'</code></p>
</li>
<li><p>How can I use mocking to replace the <code>awsgluedq</code> module with stand-in code?</p>
</li>
</ul>
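<p>On the second question, a common sketch (hedged: the module and attribute names are copied from the traceback, everything else is illustrative) is to register a fake module in <code>sys.modules</code> before anything imports it, for example at the top of <code>conftest.py</code>:</p>

```python
import sys
from unittest.mock import MagicMock

# register the fake package before any `import awsgluedq...` runs
fake_transforms = MagicMock()
sys.modules["awsgluedq"] = MagicMock(transforms=fake_transforms)
sys.modules["awsgluedq.transforms"] = fake_transforms

# the import in data_quality.py now resolves against the mock
from awsgluedq.transforms import EvaluateDataQuality
```

Because the import system checks <code>sys.modules</code> first, <code>data_quality.py</code> imports the mock instead of the real library, and no local install of <code>awsgluedq</code> is needed.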
|
<python><unit-testing><mocking>
|
2024-10-18 15:24:20
| 2
| 1,019
|
Haha
|
79,102,605
| 11,069,614
|
How to group a list of lists into a new list based on the beginning character of one of the elements in one of the lists
|
<p>I have a lists of lists called "lines" that looks like this:</p>
<pre><code>[['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
['REF', '0F', '708066255'],
['REF', '1L', '708066255'],
['DTP', '303', 'D8', '20240901'],
['DTP', '356', 'D8', '20240801'],
['NM1', 'IL', '1', 'FIGUEROA', 'LILIET', '', '', '', '34', '536899858'],
['N3', '2670 SO A W GRIMES BO', '#6102'],
['N4', 'ROUND ROCK', 'TX', '786642849', '', 'CY', '246'],
['DMG', 'D8', '19931219', 'F', '', 'H'],
['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
['REF', '0F', '811229070'],
['REF', '1L', '811229070'],
['DTP', '303', 'D8', '20240901'],
['DTP', '356', 'D8', '20240201'],
['NM1', 'IL', '1', 'MORILLO RUZA', 'OMARLY', 'V', '', '', '34', '000000000'],
['PER', 'IP', '', 'HP', '5129233526'],
['N3', '154 TERRI TL'],
['N4', 'ELGIN', 'TX', '786218937', '', 'CY', '011'],
['DMG', 'D8', '20040628', 'F', '', 'H']]
</code></pre>
<p>What I need is to group the lists into new lists wherever the 'INS' element appears in one of the lists. The output should look like:</p>
<pre><code>[[['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
['REF', '0F', '708066255'],
['REF', '1L', '708066255'],
['DTP', '303', 'D8', '20240901'],
['DTP', '356', 'D8', '20240801'],
['NM1', 'IL', '1', 'FIGUEROA', 'LILIET', '', '', '', '34', '536899858'],
['N3', '2670 SO A W GRIMES BO', '#6102'],
['N4', 'ROUND ROCK', 'TX', '786642849', '', 'CY', '246'],
['DMG', 'D8', '19931219', 'F', '', 'H']],
[['INS', 'Y', '18', '024', '07', 'A', '', '', 'TE'],
['REF', '0F', '811229070'],
['REF', '1L', '811229070'],
['DTP', '303', 'D8', '20240901'],
['DTP', '356', 'D8', '20240201'],
['NM1', 'IL', '1', 'MORILLO RUZA', 'OMARLY', 'V', '', '', '34', '000000000'],
['PER', 'IP', '', 'HP', '5129233526'],
['N3', '154 TERRI TL'],
['N4', 'ELGIN', 'TX', '786218937', '', 'CY', '011'],
['DMG', 'D8', '20040628', 'F', '', 'H']]]
</code></pre>
<p>I'm unsure how to do this.</p>
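<p>A minimal sketch of one approach (assuming, as in the sample, that the very first sub-list starts with <code>'INS'</code>):</p>

```python
def group_by_ins(lines):
    """Start a new group at every sub-list whose first element is 'INS'."""
    groups = []
    for row in lines:
        if row and row[0] == "INS":
            groups.append([])
        groups[-1].append(row)  # assumes the first row begins a group
    return groups
```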
|
<python><list>
|
2024-10-18 14:55:52
| 2
| 392
|
Ben Smith
|
79,102,550
| 2,886,575
|
How to elegantly map over deep iterables?
|
<p>I have an iterable of tuples <code>Iterable[tuple[int, str]]</code>. I would like to <code>map</code> over this, and only edit the second item of each <code>tuple</code>. This, unfortunately, leaves me with a lot of boilerplate for un-packing and re-packing the first element of each tuple:</p>
<pre><code>map(lambda t: (t[0], do_something(t[1])), my_iterable)
</code></pre>
<p>In the spirit of functional programming, I actually have lots of these small functions that I would like to apply sequentially to the second part of these tuples:</p>
<pre><code>map(
    lambda t: (t[0], bar(t[1])),
    map(
        lambda t: (t[0], foo(t[1])),
        map(
            lambda t: (t[0], do_something(t[1])),
            my_iterable,
        ),
    ),
)
</code></pre>
<p>The repeated un-packing and re-packing of <code>t[0]</code> feels clumsy. Is there a better way to do this? Is there a solution that generalizes more complex data structures?</p>
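<p>One common pattern (a sketch; the helper name is illustrative) is to factor the packing into a small higher-order function, so each transformation is written once over the value alone:</p>

```python
def on_second(f):
    """Lift a function on values to a function on (first, second) tuples."""
    return lambda t: (t[0], f(t[1]))

pairs = [(1, "a"), (2, "b")]
result = list(map(on_second(str.upper), pairs))
print(result)  # [(1, 'A'), (2, 'B')]
```

The same helper then chains naturally: <code>map(on_second(bar), map(on_second(foo), my_iterable))</code>.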
|
<python>
|
2024-10-18 14:39:13
| 1
| 5,605
|
Him
|
79,102,407
| 11,092,636
|
Variable is not accessed Pylance
|
<p>Python 3.12.5:</p>
<pre class="lang-py prettyprint-override"><code>my_list: list[str] = ["MFI BL {num}" for num in range(1, 15)]
</code></pre>
<p>The <code>num</code> variable is clearly accessed but I have a warning that says <code>"num" is not accessed Pylance</code>.</p>
<p>Why do I have this warning:
<a href="https://i.sstatic.net/0PwTtxCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0PwTtxCY.png" alt="enter image description here" /></a></p>
<p>I'm using VS Code. I do not have the warning on PyCharm.</p>
<p>Printing <code>my_list</code> does not change anything (making sure <code>my_list</code> ends up being used):</p>
<pre class="lang-py prettyprint-override"><code>my_list: list[str] = ["MFI BL {num}" for num in range(1, 15)]
print(my_list)
</code></pre>
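<p>For context, the warning is arguably right here: the string literal has no <code>f</code> prefix, so <code>{num}</code> is never interpolated and <code>num</code> really is unused. A sketch of the presumably intended version:</p>

```python
# with the f prefix, {num} is interpolated and num is genuinely used
my_list: list[str] = [f"MFI BL {num}" for num in range(1, 15)]
print(my_list[0])  # MFI BL 1
```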
|
<python><pyright>
|
2024-10-18 14:00:31
| 1
| 720
|
FluidMechanics Potential Flows
|
79,102,294
| 17,556,733
|
How to simulate AWS DynamoDB locally in python with moto 5
|
<p>I want to make a class representing a dynamodb connection which I can use locally to mimic the behavior of DynamoDB without having to actually contact the AWS service.</p>
<p>I want to use it during development (not just for running tests) and use that class for creating a table which exists during program execution, and make queries to it just as I would a real dynamo table with a real connection</p>
<p>I am using <code>boto3 v1.34.162</code> and <code>moto v5.0.15</code></p>
<p>So far, here is what I have come up with:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
from moto import mock_aws
@mock_aws
class MyDbClient:
def __init__(self):
self.dynamodb = boto3.client('dynamodb')
self.table_name = 'ExampleTable'
self.create_table()
def create_table(self):
table_params = {
'TableName': self.table_name,
'KeySchema': [
{'AttributeName': 'id', 'KeyType': 'HASH'},
],
'AttributeDefinitions': [
{'AttributeName': 'id', 'AttributeType': 'S'},
],
'ProvisionedThroughput': {
'ReadCapacityUnits': 5,
'WriteCapacityUnits': 5,
}
}
print(f"Creating table {self.table_name}...")
self.dynamodb.create_table(**table_params)
print(f"Waiting for table {self.table_name} to exist...")
waiter = self.dynamodb.get_waiter('table_exists')
waiter.wait(TableName=self.table_name)
print(f"Table {self.table_name} is now active.")
tables = self.dynamodb.list_tables()['TableNames']
print("Existing tables:", tables)
def put_item_in_table(self, item):
self.dynamodb.put_item(
TableName=self.table_name,
Item={
'id': {'S': item['id']},
'name': {'S': item['name']},
'description': {'S': item['description']}
}
)
def get_item_from_table(self, key):
response = self.dynamodb.get_item(
TableName=self.table_name,
Key={
'id': {'S': key['id']}
}
)
print(response)
if __name__ == "__main__":
db_client = MyDbClient()
item_to_put = {
'id': '123',
'name': 'ExampleName',
'description': 'This is a sample item'
}
db_client.put_item_in_table(item_to_put)
key_to_get = {'id': '123'}
db_client.get_item_from_table(key_to_get)
</code></pre>
<p>The <code>@mock_aws</code> decorator should be what I need, but when executing this I am getting the following error: <code>botocore.exceptions.ClientError: An error occurred (404) when calling the GetRoleCredentials operation: Not yet implemented</code> when the program execution comes to the <code>create_table</code> call.</p>
<p>Here is the full stack trace as well:</p>
<pre class="lang-bash prettyprint-override"><code>(venv) βΊ 15:19:16 ~/Dev/sandbox $ python main.py
Creating table ExampleTable...
Traceback (most recent call last):
File "/Users/jovan/Dev/sandbox/main.py", line 60, in <module>
db_client = MyDbClient()
^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/main.py", line 9, in __init__
self.create_table()
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/moto/core/models.py", line 122, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/main.py", line 27, in create_table
self.dynamodb.create_table(**table_params)
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/client.py", line 1005, in _make_api_call
http, parsed_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/client.py", line 1029, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/endpoint.py", line 119, in make_request
return self._send_request(request_dict, operation_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/endpoint.py", line 196, in _send_request
request = self.create_request(request_dict, operation_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/endpoint.py", line 132, in create_request
self._event_emitter.emit(
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/hooks.py", line 239, in _emit
response = handler(**kwargs)
^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/signers.py", line 105, in handler
return self.sign(operation_name, request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/signers.py", line 188, in sign
auth = self.get_auth_instance(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/signers.py", line 306, in get_auth_instance
frozen_credentials = credentials.get_frozen_credentials()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/credentials.py", line 634, in get_frozen_credentials
self._refresh()
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/credentials.py", line 522, in _refresh
self._protected_refresh(is_mandatory=is_mandatory_refresh)
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/credentials.py", line 538, in _protected_refresh
metadata = self._refresh_using()
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/credentials.py", line 685, in fetch_credentials
return self._get_cached_credentials()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/credentials.py", line 695, in _get_cached_credentials
response = self._get_credentials()
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/credentials.py", line 2160, in _get_credentials
response = client.get_role_credentials(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jovan/Dev/sandbox/venv/lib/python3.12/site-packages/botocore/client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the GetRoleCredentials operation: Not yet implemented
</code></pre>
<p>Another thing that I have tried is to individually wrap the <code>__init__</code>, <code>put_item_in_table</code> and <code>get_item_from_table</code> with the <code>@mock_aws</code> decorator, however that does NOT work, since in that case the AWS mock environment is created anew every time any of those functions is called, and therefore the created dynamo table does not persist between invocations (which is the complete opposite of what I need)</p>
<p>If possible I would like to rely only on the <code>boto3</code> and <code>moto</code> libraries. I know that <a href="https://www.localstack.cloud/" rel="nofollow noreferrer">AWS LocalStack</a> exists as a tool, but it is overkill for what I am trying to achieve and I would prefer not to use it.</p>
|
<python><amazon-dynamodb><boto3><moto>
|
2024-10-18 13:30:06
| 1
| 495
|
TheMemeMachine
|
79,102,275
| 865,169
|
Python logging: where are messages coming from
|
<p>I have an application using lots of different packages, and many of them log output. I would like to clean up my application's logs and perhaps disable some of the other packages' output. My problem is that I do not know where many of these log messages come from.
Can I configure or turn something on in Python's <code>logging</code> at a global level that lets me identify where the messages are coming from?</p>
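<p>A minimal sketch of the kind of configuration I've been experimenting with (the package name here is hypothetical): including <code>%(name)s</code> and <code>%(pathname)s:%(lineno)d</code> in the format string makes every record show its origin, after which a noisy package can be silenced by name:</p>

```python
import logging

# Put the logger name and source location into every record so each
# message reveals where it comes from.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(name)s | %(pathname)s:%(lineno)d | %(levelname)s | %(message)s",
)

# "some.package" is a hypothetical third-party logger name.
logging.getLogger("some.package.module").warning("who emitted this?")

# Once the origin is identified, raise that package's threshold:
logging.getLogger("some.package").setLevel(logging.ERROR)
```

<p>Because loggers form a dot-separated hierarchy, setting the level on <code>some.package</code> also quiets all of its child loggers.</p>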
|
<python><python-logging>
|
2024-10-18 13:24:12
| 1
| 1,372
|
Thomas Arildsen
|
79,102,186
| 4,194,554
|
Constant error while referencing function in package (TypeError: 'module' object is not callable)
|
<p>I get an error while trying to execute pytest unit tests for the project.</p>
<pre><code>E TypeError: 'module' object is not callable
</code></pre>
<p>I have the following repository structure:</p>
<pre><code>ββββsrc
β ββββcompany
β ββββacc
β ββββdp
β ββββlogic
β ββββbusiness
β β ββββ__init__.py
β β ββββalter_customer.py
β β ββββfilter_customer.py
β β ββββreview_customers.py
β ββββgeneral
β ββββsome_function.py
β ββββ__init__.py
ββββtests
β ββββcompany
β ββββdp
β ββββlogic
β ββββbusiness
β ββββtest_alter_customer.py
β ββββtest_review_customers.py
ββββconftest.py
ββββpyproject.toml
</code></pre>
<p>Each file under the <code>business</code> package contains one function with the same name as the file.</p>
<p>Suppose <code>filter_customer.py</code> looks like this:</p>
<pre><code>def filter_customer(i: int) -> int:
return i
</code></pre>
<p>Suppose <code>review_customers.py</code> looks like this:</p>
<pre><code>from company.dp.logic.general import some_function
def review_customers() -> str:
x = some_function("custom")
return x
</code></pre>
<p>Suppose <code>alter_customer.py</code> looks like this:</p>
<pre><code>from company.dp.logic.business import filter_customer
def alter_customer() -> str:
x = filter_customer(10)  # <- this line raises the error when running the unit test
return x
</code></pre>
<p>UT looks something like that:</p>
<pre><code>from company.dp.logic.business import alter_customer
def test_alter_customer():
x = alter_customer()
assert x == 10
</code></pre>
<p>The <code>./business/__init__.py</code> file for the <code>business</code> package looks like this:</p>
<pre><code>from company.dp.logic.business.alter_customer import alter_customer
from company.dp.logic.business.filter_customer import filter_customer
from company.dp.logic.business.review_customers import review_customers
</code></pre>
<p>The <code>./general/__init__.py</code> file for the <code>general</code> package looks like this:</p>
<pre><code>from src.company.dp.logic.general.some_function import some_function
</code></pre>
<p><code>pyproject.toml</code></p>
<pre><code>[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
where = ["src"]
[tool.pytest.ini_options]
addopts = [
]
testpaths = ["tests/company/dp/logic/business"]
python_files = "test_*.py"
python_functions = "test_*"
pythonpath = "src"
norecursedirs = "*"
</code></pre>
<p>How to correctly organize the packages/modules in such scenario?</p>
<p>The goal was to have one function per file and packages like business and general to cover imports from modules.</p>
<p>I have tried multiple versions.</p>
<p>When I use a direct import from the module/file, it works. It also works when I call a function from a different package such as <code>general</code>.</p>
<pre><code># This works:
from src.company.dp.logic.business.filter_customer import filter_customer
# With this I have error, even though IDE resolve it correctly:
from src.company.dp.logic.business import filter_customer
# This works:
from src.company.dp.logic.general import some_function
def review_customers():
c = filter_customer(10)
return something
</code></pre>
<p>I prepared repository which allows to recreate this problem: <a href="https://github.com/paweltajs/module-package/tree/main" rel="nofollow noreferrer">https://github.com/paweltajs/module-package/tree/main</a></p>
<p>Edit: a few other observations.</p>
<p>When I change the function name so that it differs from the file name, I get a different error:</p>
<pre><code>E ImportError: cannot import name 'f_filter_customer' from partially initialized module 'company.dp.logic.business' (most likely due to a circular import) (C:\Repositories\module-package\src\company\dp\logic\business\__init__.py)
</code></pre>
<p>When I put the import inside the function, it works, like this:</p>
<pre><code>def alter_customer() -> str:
from company.dp.logic.business import filter_customer
x = filter_customer(10)
return x
</code></pre>
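<p>To illustrate what I think is happening (a sketch, not my real code): when a package exposes both a submodule and a function named <code>filter_customer</code>, the name can end up bound to the module object during a circular import, and calling a module raises exactly this error:</p>

```python
import types

# A module object standing in for company.dp.logic.business.filter_customer.
# While the package's __init__ is still executing (circular import),
# `from package import filter_customer` can bind this module instead of
# the function defined inside it.
module_obj = types.ModuleType("filter_customer")

try:
    module_obj(10)  # calling a module, not the function
except TypeError as exc:
    print(exc)  # 'module' object is not callable
```

<p>This is consistent with the in-function import working: by the time the function body runs, the package is fully initialized and the name resolves to the function.</p>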
|
<python><pytest><python-import><python-module><python-packaging>
|
2024-10-18 13:00:01
| 1
| 482
|
PaweΕ Tajs
|
79,102,009
| 9,251,158
|
How to load tests from some files and not others?
|
<p>I want to run a suite of unit tests in the <code>tests</code> folder. The basic code is:</p>
<pre class="lang-py prettyprint-override"><code>suite = unittest.defaultTestLoader.discover('tests')
</code></pre>
<p>I want only some of these tests to run, for example <code>test_e1</code> if file <code>e1.py</code> is present, <code>test_e5</code> if <code>e5.py</code> is present, but not <code>test_e2</code> and <code>test_e11</code> (because files <code>e2.py</code> and <code>e11.py</code> are missing).</p>
<p>I tried the <code>pattern</code> argument of the <code>discover()</code> function, which defaults to <code>test_*.py</code>, but it does not allow enough control for what I need (see <a href="https://stackoverflow.com/questions/79099969/how-to-match-specific-files-with-a-shell-pattern-in-unit-test-discoverer/79100094?noredirect=1#comment139477178_79100094">How to match specific files with a shell pattern in unit test discoverer?</a> ).</p>
<p>One answer in that thread suggests finding these tests with <code>unittest.TestLoader().loadTestsFromNames</code>, so I tried this code:</p>
<pre><code> file_list = []
for some_file in some_file_list:
full_filepath = os.path.join(some_dir, some_file)
if not os.path.exists(full_filepath):
continue
file_list.append("tests/test_%s.TestDocs" % some_file)
suite = unittest.TestLoader().loadTestsFromNames(file_list)
print(suite)
</code></pre>
<p>The name <code>TestDocs</code> is the class name that inherits from the unit test:</p>
<pre><code>class TestDocs(unittest.TestCase):
</code></pre>
<p>But this shows a list of failed tests such as:</p>
<pre><code><unittest.suite.TestSuite tests=[<unittest.loader._FailedTest testMethod=tests/test_>]>
</code></pre>
<p>How can I run tests only for a certain set of files?</p>
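<p>For reference, here is a throwaway experiment (with a hypothetical package name <code>tests_pkg</code>) showing that <code>loadTestsFromNames</code> wants dotted import paths such as <code>tests_pkg.test_e1.TestDocs</code>, not filesystem-style names with slashes:</p>

```python
import os
import sys
import tempfile
import textwrap
import unittest

# Build a tiny importable package on the fly.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "tests_pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "test_e1.py"), "w") as f:
    f.write(textwrap.dedent("""
        import unittest

        class TestDocs(unittest.TestCase):
            def test_ok(self):
                self.assertTrue(True)
    """))
sys.path.insert(0, root)

# Dotted names, not "tests_pkg/test_e1.TestDocs".
suite = unittest.TestLoader().loadTestsFromNames(["tests_pkg.test_e1.TestDocs"])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

<p>So the failing <code>_FailedTest</code> entries presumably come from the slash in <code>"tests/test_%s.TestDocs"</code>, which the loader cannot interpret as an import path.</p>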
|
<python><unit-testing><python-unittest>
|
2024-10-18 12:05:17
| 2
| 4,642
|
ginjaemocoes
|
79,101,944
| 839,733
|
mypy warning on numpy.apply_along_axis
|
<p><strong>Edit Oct-18-2024:</strong></p>
<p>An even more trivial reproduction of the problem is shown below.</p>
<blockquote>
<p><code>mypy_arg_type.py</code>:</p>
</blockquote>
<pre><code>import numpy as np
from numpy.typing import NDArray
import random
def winner(_: NDArray[np.bytes_]) -> bytes | None:
return b"." if bool(random.randint(0, 1)) else None
board = np.full((2, 2), ".", "|S1")
for w in np.apply_along_axis(winner, 0, board):
print(w)
</code></pre>
<blockquote>
<p><code>>> python mypy_arg_type.py</code></p>
</blockquote>
<pre><code>b'.'
None
</code></pre>
<blockquote>
<p><code>>> mypy mypy_arg_type.py</code></p>
</blockquote>
<pre><code>mypy_arg_type.py:9: error: Argument 1 to "apply_along_axis" has incompatible type "Callable[[ndarray[Any, dtype[bytes_]]], bytes | None]"; expected "Callable[[ndarray[Any, dtype[Any]]], _SupportsArray[dtype[Never]] | _NestedSequence[_SupportsArray[dtype[Never]]]]" [arg-type]
mypy_arg_type.py:9: note: This is likely because "winner" has named arguments: "_". Consider marking them positional-only
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<hr />
<p><strong>Original question:</strong></p>
<p>I'm working on a problem to determine the winner of a <a href="https://en.wikipedia.org/wiki/Connect_Four" rel="nofollow noreferrer">Connect Four</a> game, given the position of the pieces on the board.
The board is of size 6x7, and each column is marked with a letter from <code>A</code> to <code>G</code>. The winner, if any, will have 4 pieces of identical color in a row, column, diagonal or anti-diagonal.</p>
<p>Example:</p>
<p>Input: <code>["A_Red", "B_Yellow", "A_Red", "B_Yellow", "A_Red", "B_Yellow", "G_Red", "B_Yellow"]</code></p>
<p>Board:</p>
<pre><code>R Y . . . . R
R Y . . . . .
R Y . . . . .
. Y . . . . .
. . . . . . .
. . . . . . .
</code></pre>
<p>Winner: <code>Yellow</code></p>
<p>The following code determines a winner.</p>
<pre><code>import itertools
import numpy as np
from numpy.typing import NDArray
def who_is_winner(pieces: list[str]) -> str:
def parse_board() -> NDArray[np.bytes_]:
m, n = 6, 7
indices = [0] * n
# https://numpy.org/doc/stable/user/basics.strings.html#fixed-width-data-types
# One-byte encoding, the byteorder is β|β (not applicable)
board = np.full((m, n), ".", "|S1")
for p in pieces:
col = ord(p[0]) - ord("A")
board[indices[col], col] = p[2]
indices[col] += 1
return board
def winner(arr: NDArray[np.bytes_]) -> np.bytes_ | None:
i = len(arr)
xs = next(
(xs for j in range(i - 3) if (xs := set(arr[j : j + 4])) < {b"R", b"Y"}),
{None},
)
return xs.pop()
def axis(x: int) -> np.bytes_ | None:
# https://numpy.org/doc/2.0/reference/generated/numpy.apply_along_axis.html#numpy-apply-along-axis
# Axis 0 is column-wise, 1 is row-wise.
return next(
(w for w in np.apply_along_axis(winner, x, board) if w is not None), None
)
def diag(d: int) -> np.bytes_ | None:
# https://numpy.org/doc/stable/reference/generated/numpy.diagonal.html#numpy-diagonal
# Diagonal number is w.r.t. the main diagonal.
b = board if bool(d) else np.fliplr(board)
return next(
(w for d in range(-3, 4) if (w := winner(b.diagonal(d))) is not None), None
)
board = parse_board()
match next(
(
w
for f, i in itertools.product((axis, diag), (0, 1))
if (w := f(i)) is not None
),
None,
):
case b"Y":
return "Yellow"
case b"R":
return "Red"
case _:
return "Draw"
</code></pre>
<p>However, this generates a mypy violation as follows:</p>
<pre><code>error: Argument 1 to "apply_along_axis" has incompatible type "Callable[[ndarray[Any, dtype[bytes_]]], bytes_ | None]"; expected "Callable[[ndarray[Any, dtype[Any]]], _SupportsArray[dtype[bytes_]] | _NestedSequence[_SupportsArray[dtype[bytes_]]]]" [arg-type]
note: This is likely because "winner" has named arguments: "arr". Consider marking them positional-only
</code></pre>
<p>According to the documentation of <a href="https://numpy.org/doc/2.0/reference/generated/numpy.apply_along_axis.html#numpy-apply-along-axis" rel="nofollow noreferrer">apply_along_axis</a>, the applied function is supposed to return a single value, which is consistent with the code above.</p>
<p>How can I fix this violation? Marking the parameter of <code>winner</code> positional-only makes no difference, except that the suggestion disappears.</p>
<p>I'm using Python 3.12.5 with mypy 1.11.2.</p>
|
<python><numpy><python-typing><mypy>
|
2024-10-18 11:45:32
| 1
| 25,239
|
Abhijit Sarkar
|
79,101,670
| 5,065,546
|
Environment variable not recognised in python or terminal but I can see it in system properties
|
<p>I am trying to set an openai key to use the ChatGPT api. I did so using the following command in the terminal:</p>
<pre><code>setx OPENAI_API_KEY "api_key_text"`
</code></pre>
<p>After this, if I go to the list of environment variables from System Properties (i.e. "Edit the system environment variables" from Control Panel), then I can see the new env variable and it looks like it has worked fine. However, if I go to Python (using a Jupyter notebook, if that is relevant) and type:</p>
<pre><code>import os
api_key = os.getenv("OPENAI_API_KEY")
print(api_key)
</code></pre>
<p>then I get the answer "None". Also, it doesn't show up typing SET in terminal. Given that I can see it in systems properties, why can Python not find it?</p>
<p>Any help appreciated.</p>
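<p>For what it's worth, setting the variable inside the running process works fine, which suggests <code>os.environ</code> is a per-process snapshot taken at startup; as I understand it, <code>setx</code> only affects processes started afterwards, from a new terminal:</p>

```python
import os

# os.environ is copied from the parent process at startup. A variable
# written with `setx` after this process (or the terminal/Jupyter server
# that launched it) started will therefore not be visible until a NEW
# terminal/session is opened.
os.environ["OPENAI_API_KEY"] = "api_key_text"  # hypothetical value, per-process only

print(os.getenv("OPENAI_API_KEY"))  # prints: api_key_text
```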
|
<python><windows><environment-variables>
|
2024-10-18 10:29:23
| 1
| 362
|
Euan Ritchie
|
79,101,599
| 386,861
|
How to solve strange plotting error in Altair
|
<p>I'm trying to plot some data which broadly should form a map like so:</p>
<pre><code>import altair as alt
import numpy as np
import pandas as pd
lats = np.random.uniform(51.5, 51.6, 100)
lons = np.random.uniform(-0.1, 0.1, 100)
months = np.arange(1, 13)
vouchers = np.random.randint(1, 100, 100)
test_df = pd.DataFrame({
'lat': lats,
'lon': lons,
'month': np.random.choice(months, 100),
'vouchers': vouchers
})
test_df
alt.Chart(test_df).mark_circle().encode(
longitude='lon:Q',
latitude='lat:Q',
size='vouchers:Q',
color='month:N'
)
</code></pre>
<p>That looks like this:</p>
<p><a href="https://i.sstatic.net/b94P3eUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b94P3eUr.png" alt="enter image description here" /></a></p>
<p>Anyway, my real data is very similar.</p>
<p>Describe shows this:</p>
<p><a href="https://i.sstatic.net/kxqJxxb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kxqJxxb8.png" alt="enter image description here" /></a></p>
<p>and the data is the right dtypes:</p>
<p><a href="https://i.sstatic.net/FyraKBlV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyraKBlV.png" alt="enter image description here" /></a></p>
<p>Here's my code:</p>
<pre><code>alt.Chart(lunch_vouchers).mark_circle().encode(
longitude='long_jittered:Q',
latitude='lat_jittered:Q',
size=alt.Size('Total_People_in_voucher:Q', scale=alt.Scale(range=[0, 1000]), legend=None),
color=alt.value('red'),
tooltip=['Postcode', 'Total_People_in_voucher', 'Month']
)
</code></pre>
<p>But it produces a strange display error in VS Code.</p>
<p><a href="https://i.sstatic.net/A2XUvi78.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2XUvi78.png" alt="enter image description here" /></a></p>
<p>How can I fix this?</p>
|
<python><altair>
|
2024-10-18 10:08:52
| 0
| 7,882
|
elksie5000
|
79,101,594
| 6,751,456
|
django ORM query filter methods running multiple filter duplicates joins
|
<p>I'm trying to run filters using methods in two separate attributes.</p>
<p>In ICD10Filter:</p>
<pre><code>class Icd10Filter(filters.FilterSet):
# New Filters for DOS Range
dosFrom = filters.DateFilter(method='filter_by_dos_from', lookup_expr='gte')
dosTo = filters.DateFilter(method='filter_by_dos_to', lookup_expr='lte')
def filter_by_dos_from(self, queryset, name, value):
return queryset.filter(
base_icd__chart_review_dos__dos_from__gte=value
)
def filter_by_dos_to(self, queryset, name, value):
return queryset.filter(
base_icd__chart_review_dos__dos_to__lte=value
)
</code></pre>
<p>The <code>Icd10Filter</code> filters the <code>Icd10</code> model, which is referenced by the <code>ChartReviewDx</code> model:</p>
<pre><code>class ChartReviewDx(models.Model):
chart_review_dos = models.ForeignKey(
ChartReviewDos, on_delete=models.SET_NULL, null=True, related_name="diagnosis_details"
)
diagnosis_code = models.CharField(max_length=1024, null=True, blank=True)
diagnosis_description = models.CharField(max_length=1024, null=True, blank=True)
icd10 = models.ForeignKey("risk_adjustment.Icd10", on_delete=models.SET_NULL, null=True)
base_icd = models.ForeignKey(
"risk_adjustment.Icd10", on_delete=models.SET_NULL, null=True, blank=True, related_name="base_icd"
)
</code></pre>
<p>and <code>ChartReviewDx</code> references the <code>ChartReviewDos</code> model:</p>
<pre><code>class ChartReviewDos(models.Model):
chart = models.ForeignKey(Chart, on_delete=models.SET_NULL, null=True, blank=True, related_name="diagnosis")
dos_from = models.DateField()
dos_to = models.DateField()
</code></pre>
<p>I want to fetch the ICD10 codes for particular DOS range only.</p>
<p>The desired query is:</p>
<pre><code>SELECT
distinct id,
code,
description
FROM
risk_adjustment_icd10
INNER JOIN healthcare_data_chart_review_dx ON (
id = healthcare_data_chart_review_dx.base_icd_id
)
INNER JOIN healthcare_data_chart_review_dos ON (
healthcare_data_chart_review_dx.chart_review_dos_id = healthcare_data_chart_review_dos.id
)
WHERE
(
valid = 1
AND healthcare_data_chart_review_dos.dos_from >= '2023-08-19'
AND healthcare_data_chart_review_dos.dos_to <= '2023-08-19'
)
ORDER BY
code ASC
</code></pre>
<p>When I only run the filter for one of the fields, the query is working fine.</p>
<p>But running filters on both fields produces redundant JOINs and thus inaccurate results.</p>
<p>The query that is generated after applying both filters:</p>
<pre><code>SELECT
DISTINCT id,
code,
description
FROM
risk_adjustment_icd10
INNER JOIN healthcare_data_chart_review_dx ON (
id = healthcare_data_chart_review_dx.base_icd_id
)
INNER JOIN healthcare_data_chart_review_dos ON (
healthcare_data_chart_review_dx.chart_review_dos_id = healthcare_data_chart_review_dos.id
)
INNER JOIN healthcare_data_chart_review_dx T4 ON (
id = T4.base_icd_id
)
INNER JOIN healthcare_data_chart_review_dos T5 ON (
T4.chart_review_dos_id = T5.id
)
WHERE
(
valid = 1
AND healthcare_data_chart_review_dos.dos_from >= '2023-08-19'
AND T5.dos_to <= '2023-08-19'
)
ORDER BY
code asc
</code></pre>
<p>How can I remove these redundant joins?</p>
|
<python><django><join><django-orm>
|
2024-10-18 10:07:18
| 1
| 4,161
|
Azima
|
79,101,444
| 14,264,760
|
Unable to login into outlook email server with smtplib even after setting app passkey and 2 factor auth
|
<p>I am unable to login into outlook email server with smtplib even after setting app passkey and 2 factor authentication in outlook account.</p>
<pre><code>import smtplib
smtp_server = 'smtp.office365.com'
smtp_port = 587
sender_email = 'myemail@outlook.com'
app_password = "my_outlook_app_passwords"
def login_stmplib():
try:
server = smtplib.SMTP(smtp_server, smtp_port)
server.starttls()
server.login(sender_email, app_password)
print("Login successful!")
except Exception as e:
print(f"Login error: {e}")
finally:
server.quit()
login_stmplib()
</code></pre>
<p>It fails with the error given below (<code>main5.py</code> contains the above code with my Outlook credentials).</p>
<pre><code>(EmailEnv) ayushraj@pclocal emailer % python main5.py
Login error: (535, b'5.7.139 Authentication unsuccessful, basic authentication is disabled. [PN2PR01CA0158.INDPRD01.PROD.OUTLOOK.COM 2024-10-18T09:14:28.211Z 08DCEEEF057DFEC7]')
</code></pre>
<p>I have been able to log in successfully with Gmail credentials (<code>main6.py</code> contains the same code with Gmail credentials).</p>
<pre><code>(EmailEnv) ayushraj@pclocal emailer % python main6.py
Login successful!
</code></pre>
<p>My end goal is to use this Python program to send emails.
PS: I've set up two-factor authentication and then used an app password from Outlook. Screenshot attached:
<a href="https://i.sstatic.net/WiaNDBkw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiaNDBkw.png" alt="enter image description here" /></a></p>
|
<python><authentication><outlook><gmail>
|
2024-10-18 09:27:49
| 0
| 339
|
Ayush Raj
|
79,101,335
| 3,906,713
|
Is it possible to vectorize scipy multivariate_normal over means?
|
<p>I have a multivariate normal distribution that I use in a Markov chain sampler which supports vectorization. That means it is most efficient when it can request the logarithm of the PDF for multiple different mean vectors at the same time. Below is a minimal example. The first piece of code makes a request for a single data point and a single mean vector; it works fine. The second piece of code makes a request for multiple mean vectors. For the second piece of code, I get the error</p>
<pre><code>ValueError: Array 'mean' must be a vector of length 600.
</code></pre>
<p>Is there a built-in way around this, or do I have to use a for-loop?</p>
<pre><code>import numpy as np
from scipy.stats import multivariate_normal
np.random.seed(42)
cov = np.diag(np.ones(6))
mu = np.random.normal(0, 1, 6)
x = np.random.normal(0, 1, 6)
print(multivariate_normal.logpdf(x, mean=mu, cov=cov))
mu = np.random.normal(0, 1, (100, 6))
x = np.random.normal(0, 1, (100, 6))
print(multivariate_normal.logpdf(x, mean=mu, cov=cov))
</code></pre>
<p><strong>EDIT</strong>: I just found a solution for a single data point. Since the mean and the data are interchangeable for the normal distribution, the following yields the correct result:</p>
<pre><code>mu = np.random.normal(0, 1, (100, 6))
x = np.random.normal(0, 1, 6)
print(multivariate_normal.logpdf(mu, mean=x, cov=cov))
</code></pre>
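<p>Building on that observation: since the density depends on the data and the mean only through their difference, a batch of different means with a <em>shared</em> covariance reduces to a single zero-mean call, e.g. <code>multivariate_normal.logpdf(x - mu, mean=np.zeros(6), cov=cov)</code>. The same identity written out with plain numpy (a sketch, not library code):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
cov = np.diag(np.ones(6))
mu = rng.normal(0, 1, (100, 6))   # one mean vector per row
x = rng.normal(0, 1, (100, 6))    # one data point per row

# log N(x_i; mu_i, cov) for all i at once, via the shift identity
# N(x; mu, cov) = N(x - mu; 0, cov):
d = x.shape[1]
diff = x - mu
_, logdet = np.linalg.slogdet(cov)
maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
logpdf = -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

print(logpdf.shape)  # (100,)
```

<p>This only works because the covariance is the same for every row; with per-row covariances a loop (or a batched Cholesky) would still be needed.</p>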
|
<python><scipy>
|
2024-10-18 08:59:08
| 1
| 908
|
Aleksejs Fomins
|
79,101,286
| 8,909,944
|
How to have immutable shared object in python multiprocessing map
|
<p>How can I let all of my worker processes share the same object that none of them mutate? For example, what is the cleanest way of writing a function that computes the dot product of an argument vector with a second vector that is the same for all processes. Naively, I would write something like this:</p>
<pre><code>import multiprocessing
import numpy as np
def main():
static_vector = np.array([1,2,3,4,5])
def f(v):
return np.dot(v, static_vector)
with multiprocessing.Pool() as p:
results = p.map(f, [np.random.random((5,1)) for _ in range(10)])
print(results)
if __name__ == "__main__":
main()
</code></pre>
<p>But this fails with the error <code>AttributeError: Can't pickle local object 'main.<locals>.f'</code>. For the sake of argument, assume that computing the static vector takes some time and should not be repeated in each subprocess.</p>
|
<python><parallel-processing><multiprocessing>
|
2024-10-18 08:41:51
| 1
| 321
|
David
|
79,101,228
| 1,503,683
|
boto3 upload_file: how to specify Checksum?
|
<p>I'm using <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/upload_file.html" rel="nofollow noreferrer"><code>boto3's upload_file</code></a> to upload files to some S3 buckets.</p>
<p>This works well:</p>
<pre class="lang-py prettyprint-override"><code>s3 = boto3.client('s3')
s3_client.upload_file(
Bucket="my_bucket",
Filename="local_filename",
Key="remote_filename"
)
</code></pre>
<p>Now I want S3 to validate my uploaded file's checksum (let's say <code>sha256</code>) at upload time. The <code>boto3</code> documentation mentions the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS" rel="nofollow noreferrer"><code>ChecksumAlgorithm</code></a> parameter for this, but how is it supposed to be used? I can't find more details in <code>boto3</code>'s documentation or elsewhere.</p>
<p>If I try to specify only <code>ChecksumAlgorithm</code>, it fails:</p>
<pre class="lang-py prettyprint-override"><code>s3_client.upload_file(
Bucket="my_bucket",
Filename="local_filename",
Key="remote_filename",
ExtraArgs={
"ChecksumAlgorithm": "SHA256",
},
)
</code></pre>
<p>Fails with:</p>
<blockquote>
<p>botocore.exceptions.SSLError: SSL validation failed for <a href="https://s3.my-cloud-provider.net/my_bucket/remote_filename" rel="nofollow noreferrer">https://s3.my-cloud-provider.net/my_bucket/remote_filename</a> EOF occurred in violation of protocol (_ssl.c:2427)</p>
</blockquote>
<p>I'm a bit confused that I don't have to provide any locally computed SHA-256 here.</p>
<p>Am I missing something? Or is this checksum validation available only when using <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/multipartuploadpart/upload.html" rel="nofollow noreferrer"><code>S3.MultipartUploadPart.upload</code></a>?</p>
|
<python><amazon-s3><boto3>
|
2024-10-18 08:27:29
| 0
| 2,802
|
Pierre
|
79,101,170
| 428,542
|
Disable entityref decoding in html.parser
|
<p>Python's <code>html.parser.HTMLParser</code> seems to always decode entity references in attributes (e.g. translate <code>&amp;para;</code> to <code>ΒΆ</code>). Is there a way to disable this?</p>
<p>The HTML I like to parse is:</p>
<pre><code> <ul>
<li>&para; &amp; &#xb6;</a>
<li><a href="index.php?option=view&id=108">First link</a>
<li><a href="index.php?option=view&params=28">Second link</a>
</ul>
</code></pre>
<p>This is slightly malformed HTML: the <code><li></code> tags are never closed, and the ampersand in the href URLs is not HTML-escaped (it should be <code>&amp;amp;</code>).</p>
<p>Sadly, <code>html.parser.HTMLParser</code> decodes the <code>&params</code> in the URL as <code>&para;ms</code> and returns <code>ΒΆms</code>, as seen in this code:</p>
<pre><code>from html.parser import HTMLParser
from html.entities import name2codepoint
class MyHTMLParser(HTMLParser):
def handle_starttag(self, tag, attrs):
print("Start tag:", tag)
for attr in attrs:
print(" attr:", attr)
def handle_endtag(self, tag):
print("End tag :", tag)
def handle_data(self, data):
if data.strip():
print("Data :", data)
def handle_comment(self, data):
print("Comment :", data)
def handle_entityref(self, name):
c = chr(name2codepoint[name])
print("Named ent:", c)
def handle_charref(self, name):
if name.startswith('x'):
c = chr(int(name[1:], 16))
else:
c = chr(int(name))
print("Num ent :", c)
def handle_decl(self, data):
print("Decl :", data)
html = """
<ul>
<li>&para; &amp; &#xb6;</a>
<li><a href="index.php?option=view&id=108">First link</a>
<li><a href="index.php?option=view&params=28">Second link</a>
</ul>
"""
parser = MyHTMLParser(convert_charrefs=False)
parser.feed(html)
</code></pre>
<p>The output is:</p>
<pre><code>Start tag: ul
Start tag: li
Named ent: ΒΆ
Named ent: &
Num ent : ΒΆ
End tag : a
Start tag: li
Start tag: a
attr: ('href', 'index.php?option=view&id=108')
Data : First link
End tag : a
Start tag: li
Start tag: a
attr: ('href', 'index.php?option=viewΒΆms=28')
Data : Second link
End tag : a
End tag : ul
</code></pre>
<p>The output shows that <code>handle_entityref</code> is called for the data, but never called for the href parameter. Hence, I have no ability to influence this behaviour, and I'm stuck with an incorrectly interpreted URL.</p>
<p>Is there a way to disable decoding of entityrefs by <code>html.parser.HTMLParser</code>?</p>
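<p>The only workaround I can think of (a sketch; the regex is my own approximation of a "valid reference") is to escape the bare ampersands myself before feeding the parser, so <code>&amp;params=</code> can no longer be misread as <code>&amp;para;</code> plus <code>ms=</code>:</p>

```python
import re
from html.parser import HTMLParser

def escape_bare_amps(s: str) -> str:
    # Escape every "&" that is not already the start of a character
    # reference, so the parser cannot "helpfully" decode it.
    return re.sub(r"&(?!(?:#\d+|#x[0-9a-fA-F]+|[A-Za-z][A-Za-z0-9]*);)", "&amp;", s)

class AttrParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        self.hrefs += [v for k, v in attrs if k == "href"]

p = AttrParser()
p.feed(escape_bare_amps('<a href="index.php?option=view&params=28">Second link</a>'))
print(p.hrefs)  # ['index.php?option=view&params=28']
```

<p>The parser still unescapes <code>&amp;amp;</code> back to a plain <code>&amp;</code>, so the recovered URL matches the original input, while real references like <code>&amp;para;</code> and <code>&amp;#xb6;</code> are left for the parser to handle as before.</p>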
|
<python><html-parser>
|
2024-10-18 08:12:01
| 1
| 3,568
|
MacFreek
|
79,101,058
| 4,382,305
|
Are probabilities correct in elements of toss in binomial distribution in numpy python?
|
<p>In many tutorials I have seen the code below for the binomial distribution in numpy:</p>
<pre><code>x = random.binomial(n=10, p=0.5, size=10)
</code></pre>
<p>n - number of trials.</p>
<p>p - probability of success on each trial (e.g. 0.5 for each toss of a coin).</p>
<p>size - The shape of the returned array.</p>
<p>In the above example there are 11 possible outcomes (the counts 0 through 10), and p=0.5.</p>
<p>If we wanted each of 10 states to be equally likely, the probability of each would be 0.1, but in this example the probability is 0.5. I think 0.5 is suitable for a coin that has two outcomes. Can someone clear this up for me?</p>
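<p>A quick experiment that might make the distinction concrete: each draw is the number of successes in <code>n=10</code> tosses, where <code>p=0.5</code> is the per-toss probability of heads; the 11 possible outcomes 0 to 10 are not equally likely:</p>

```python
from math import comb

import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(n=10, p=0.5, size=10)
print(x)  # ten draws, each a count of heads between 0 and 10

# Theoretical outcome probabilities: P(k) = C(10, k) * 0.5**10, far from
# uniform (P(5) is about 0.246 while P(0) is about 0.001).
probs = [comb(10, k) * 0.5**10 for k in range(11)]
print(round(probs[5], 3), round(probs[0], 3))
```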
|
<python><numpy><statistics>
|
2024-10-18 07:39:30
| 0
| 2,091
|
Darwin
|
79,101,053
| 8,445,557
|
Access data on my google drive using service account
|
<p>I can't read the files that I saved on Google Drive using the code you can see below.</p>
<ul>
<li>I've already enabled the "Google Drive API" in the GCP project.</li>
<li>I've already created the service account, and I can see it in the "<strong>API/Service Details</strong>" page (<a href="https://console.cloud.google.com/apis/api/drive.googleapis.com" rel="nofollow noreferrer">https://console.cloud.google.com/apis/api/drive.googleapis.com</a>).</li>
<li>I've already generated the JSON file with the <strong>credentials</strong> of the service account.</li>
</ul>
<p>Everything seems fine and the code doesn't raise errors, but it doesn't find the files that are on Google Drive. Why?<br />
Where is my mistake?</p>
<pre class="lang-py prettyprint-override"><code>from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
service_account_json_key = '****sa-key-jsonfile****.json'
SCOPES = ['https://www.googleapis.com/auth/drive']
creds = service_account.Credentials.from_service_account_file(
filename=service_account_json_key, scopes=SCOPES)
def search_file():
"""Search file in drive location"""
try:
# create drive api client
service = build("drive", "v3", credentials=creds)
files = []
page_token = None
while True:
# pylint: disable=maybe-no-member
response = (
service.files()
.list(
# q="mimeType='image/jpeg'",
spaces="drive",
fields="nextPageToken, files(id, name)",
pageToken=page_token,
)
.execute()
)
for file in response.get("files", []):
# Process change
print(f'Found file: {file.get("name")}, {file.get("id")}')
files.extend(response.get("files", []))
page_token = response.get("nextPageToken", None)
if page_token is None:
break
except HttpError as error:
print(f"An error occurred: {error}")
files = None
return files
if __name__ == "__main__":
ret = search_file()
print(ret)
</code></pre>
|
<python><google-drive-api><service-accounts><google-api-python-client>
|
2024-10-18 07:37:55
| 1
| 361
|
Stefano G.
|
79,100,279
| 6,703,592
|
dataframe easily implement merge_asof
|
<p>I have two dataframes:</p>
<pre class="lang-py prettyprint-override"><code>time_start = datetime.datetime.strptime('2024-02-01 10:00:00', "%Y-%m-%d %H:%M:%S")
interval_l = [1, 7, 14, 17, 21, 22, 31]
df_l = pd.DataFrame(index = [time_start + datetime.timedelta(seconds=i) for i in interval_l])
df_r = pd.DataFrame(range(4), index = [time_start + datetime.timedelta(seconds=10*i) for i in range(4)], columns=['val'])
df_l.index.name = 'datetime'
df_r.index.name = 'datetime'
</code></pre>
<p><a href="https://i.sstatic.net/z1F0jkM5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1F0jkM5.png" alt="enter image description here" /></a></p>
<p><em>df_r</em> has a row every 10 seconds with a <em>val</em> column. I want to merge each value of <em>df_r</em> onto its closest forward time in <em>df_l</em> and keep the other times in <em>df_l</em> as <em>NaN</em>.</p>
<p>Here is the expected result of <em>df_l</em> after merge:</p>
<p><a href="https://i.sstatic.net/2fiJQHyM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fiJQHyM.png" alt="enter image description here" /></a></p>
<p>I implemented a complicated method by calling <em>merge_asof</em> twice and merging <em>by</em> a shared <em>time</em> key:</p>
<pre><code>df_l['time'] = df_l.index
df_r = pd.merge_asof(
df_r,
df_l[['time']],
left_index=True,
right_index=True,
direction='forward',
)
df_l = pd.merge_asof(
df_l,
df_r[['time', 'val']],
left_index=True,
right_index=True,
direction='backward',
by='time'
)
df_l.drop(columns=['time'], inplace=True)
</code></pre>
<p>Is there any simple way to do this, such as directly using <code>merge_asof</code>?</p>
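<p>One alternative I have considered (a sketch using <code>searchsorted</code> rather than <code>merge_asof</code>): find, for each row of <em>df_r</em>, the position of the first <em>df_l</em> timestamp at or after it, and assign directly:</p>

```python
import datetime

import numpy as np
import pandas as pd

time_start = datetime.datetime(2024, 2, 1, 10, 0, 0)
df_l = pd.DataFrame(
    index=pd.DatetimeIndex(
        [time_start + datetime.timedelta(seconds=i) for i in [1, 7, 14, 17, 21, 22, 31]],
        name="datetime",
    )
)
df_r = pd.DataFrame(
    {"val": range(4)},
    index=pd.DatetimeIndex(
        [time_start + datetime.timedelta(seconds=10 * i) for i in range(4)],
        name="datetime",
    ),
)

# Position of the first df_l timestamp >= each df_r timestamp.
pos = df_l.index.searchsorted(df_r.index)
valid = pos < len(df_l)  # df_r rows with no forward match are dropped

df_l["val"] = np.nan
df_l.iloc[pos[valid], df_l.columns.get_loc("val")] = df_r["val"].to_numpy()[valid]
print(df_l)
```

<p>If two <em>df_r</em> rows mapped to the same <em>df_l</em> timestamp, the later assignment would win, which is something a real solution would need to decide on explicitly.</p>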
|
<python><pandas><dataframe><merge>
|
2024-10-18 01:37:25
| 2
| 1,136
|
user6703592
|
79,100,223
| 3,388,962
|
How to export a Matplotlib figure to PDF-1.3?
|
<p>When saving a plot as a PDF, Matplotlib generates files based on the PDF 1.4 standard. (You can check the PDF version by opening the file in a text editor).</p>
<p>However, I run into problems with such PDFs when I use them in Microsoft PowerPoint and other MS Office tools and then export the MS document again into a PDF. Most notably, areas with transparent colors turn completely white. The Microsoft PDF engine seems to be the bottleneck (so far I have not found a way to change it).</p>
<p>I can reproduce the problem on a recent <em>MacOS</em> with the below code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.bar([1, 2, 3], [224, 620, 425],
facecolor=(0.3,0.7,0.9,0.3),
ec='k')
plt.savefig("test.pdf")
</code></pre>
<p><a href="https://i.sstatic.net/AGClOQ8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AGClOQ8J.png" alt="enter image description here" /></a></p>
<p>I found out that I can fix the problem by converting the PDF to version PDF-1.3. I can do this with Adobe Illustrator by selecting the PDF/X-1a or PDF/X-3 standard in the export settings.</p>
<p><strong>My questions</strong>:</p>
<ul>
<li>How to enforce the PDF-1.3 format in Matplotlib?</li>
<li>How to recover from this transparency problem?</li>
</ul>
<p>I have tried to save the image as SVG. However, I am encountering similar problems with transparency being lost in some situations. EPS does not support transparent face colors. Also, I don't want to export the image to a rasterized format.</p>
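<p>For reference, the PDF version can also be checked programmatically from the file header rather than in a text editor (a small sketch that writes to a temporary folder):</p>
<pre class="lang-py prettyprint-override"><code>import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar([1, 2, 3], [224, 620, 425], facecolor=(0.3, 0.7, 0.9, 0.3), ec="k")
path = os.path.join(tempfile.mkdtemp(), "test.pdf")
fig.savefig(path)

# the PDF version is stated in the first bytes of the file
with open(path, "rb") as f:
    header = f.read(8)  # e.g. b'%PDF-1.4'
print(header)
</code></pre>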
|
<python><matplotlib><transparency>
|
2024-10-18 00:57:19
| 2
| 9,959
|
normanius
|
79,100,109
| 3,842,845
|
How to delete a substring within the filenames of several files (matching a certain pattern) in Python?
|
<p>I am trying to remove part of some file names so that I can use a consistent pattern to ingest data daily.</p>
<p>I have following files inside this folder (<strong>F:\Source</strong>).</p>
<pre><code>WH_BEE_FULL_20241017_170853_1.bak
WH_BEE_FULL_20241017_170853_2.bak
WH_BEE_FULL_20241017_170853_3.bak
WH_BEE_FULL_20241017_170853_4.bak
WH_BEE_FULL_20241017_170853_5.bak
WH_BEE_FULL_20241017_170853_6.bak
</code></pre>
<p>I need to delete the "170853" part of the file names, so that the end result would be:</p>
<pre><code>WH_BEE_FULL_20241017_1.bak
WH_BEE_FULL_20241017_2.bak
WH_BEE_FULL_20241017_3.bak
WH_BEE_FULL_20241017_4.bak
WH_BEE_FULL_20241017_5.bak
WH_BEE_FULL_20241017_6.bak
</code></pre>
<p>So, "20241017" changes daily, as it is in "YYYYMMDD" format, but the other fields (like "WH", "BEE", "FULL" and "_1" through "_6") are static.</p>
<p>After renaming, those files would be relocated to (<strong>F:\Destination</strong>) folder.</p>
<p>How do I go about doing it in Python?</p>
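<p>A minimal sketch of what I have in mind is below — the regex and folder paths are my own assumptions, and I am unsure whether this is idiomatic:</p>
<pre class="lang-py prettyprint-override"><code>import re
import shutil
from pathlib import Path

# assumes the HHMMSS block is always 6 digits between the date and the counter
PATTERN = re.compile(r"^(WH_BEE_FULL_\d{8})_\d{6}(_\d+\.bak)$")

def new_name(filename):
    """Return the filename with the HHMMSS part removed, or None if no match."""
    m = PATTERN.match(filename)
    return m.group(1) + m.group(2) if m else None

def rename_and_move(src_dir, dst_dir):
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.glob("WH_BEE_FULL_*.bak"):
        target = new_name(f.name)
        if target:
            shutil.move(str(f), str(dst / target))

# e.g. rename_and_move("F:/Source", "F:/Destination")
</code></pre>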
|
<python>
|
2024-10-17 23:29:14
| 0
| 1,324
|
Java
|
79,100,013
| 7,273,648
|
What does "DataFrame.at[source]: TypeError: only integer scalar arrays can be converted to a scalar index" mean?
|
<p>Searching for answers to "dataframe. at TypeError: only integer scalar arrays can be converted to a scalar index" resulted in "We couldn't find anything for dataframe. at typeerror: only integer scalar arrays can be converted to a scalar index".</p>
<p>Searching with less stringent rules produced results, but searching those for ".at" produced nothing either. Of course, that could be due to my search terms.</p>
<p>Can anyone explain precisely what this means in terms that a non-programmer can understand?</p>
<blockquote>
<p>"property DataFrame.at[source]: TypeError: only integer scalar arrays
can be converted to a scalar index"</p>
</blockquote>
<p>The pandas website states the following:</p>
<blockquote>
<p>pandas.DataFrame.at property DataFrame.at[source] Access a single
value for a row/column label pair.</p>
</blockquote>
<p>That seems obvious and thus easy to use. However, the following code snippet:</p>
<pre><code>col = str(currentyr)
print('col = ' + str(col))
row = str(currentmonth) + 'UTtotal'
print('row = ' + str(row))
DTduration = df_total.at[row, col]
print('row / col (' + str(row) + ' / ' + str(col) + ') =\n' + str(DTduration))
</code></pre>
<p>generates the following error:</p>
<pre><code>"TypeError Traceback (most recent call last)
Cell In[1], line 269
266 monthlyUT()
268 #deal with monthly downtimes
--> 269 monthlyDT(faultindexfirst, faultindexlast)
271 framenum += 1
273 elif framestartyear == currentyr & frameendyear == currentyr + 1:
274 #the currently selected frame straddles two years
Cell In[1], line 46
44 print('row = ' + str(row))
45 # DTduration = df_total.loc[row, col]
---> 46 DTduration = df_total.at[row, col]
47 print('row / col (' + str(row) + ' / ' + str(col) + ') =\n' + str(DTduration))
48 row = str(currentmonth) + 'Eventstotal'"
</code></pre>
<p>faultindexfirst, faultindexlast, currentyr and currentmonth are all integers. I am trying to access a specific datum in a dataframe:</p>
<pre><code> 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 \
UTtotal 2311.933333 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
DTtotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
Eventstotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1UTtotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1DTtotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1Eventstotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
.
.
.
11Eventstotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
12UTtotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
12DTtotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
12Eventstotal NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>In this case, I need to populate this table with valid data, but the pandas site provides no information that I understand well enough to resolve this error. The print statements successfully output the following:</p>
<pre><code>col = 2013
row = 9UTtotal
</code></pre>
<p><strong>Minimally Reproducible Example:</strong></p>
<pre><code>import pandas
import os
import numpy as np
from pathlib import Path
#import matplotlib.pyplot as plt
#import matplotlib.dates as mdates
#from matplotlib.patches import Rectangle
from datetime import date
from datetime import time
from datetime import datetime
import array
def monthlyDT(firstidx, lastidx):
#this routine handles the monthly downtime data totaling and copying to the table
print('entered downtimes()')
print('\tthe first downtime timestamp = ' + str(DTs[firstidx]))
print('\tthe last downtime timestamp = ' + str(DTs[lastidx]))
#retrieve the current monthly DT duration and number of events totals from the table
print('row indices:\n' + str(df_total.index))
print('column headers: \n' + str(df_total.columns))
col = str(currentyr)
print('col = ' + str(col))
row = str(currentmonth) + 'UTtotal'
print('row = ' + str(row))
DTduration = df_total.at[row, col]
print('row / col (' + str(row) + ' / ' + str(col) + ') =\n' + str(DTduration))
return()
#setup working arrays to hold the datasets
DTs = np.array([1.378854180000000000e+09, 1.378904520000000000e+09, 1.378957920000000000e+09, 1.378968180000000000e+09])
DTe = np.array([1.378858140000000000e+09, 1.378908000000000000e+09, 1.378958040000000000e+09, 1.378968240000000000e+09])
#build the 'year' column names array
colname = []
y = 2013
currentyr = 2013
yrend = 2024
currentmonth = 9
while y <= yrend:
colname.append(str(y))
y = y + 1
print('colname = ' + str(colname))
#create the index name array
indexnames = ['UTtotal', 'DTtotal', 'Eventstotal',
'1UTtotal', '1DTtotal', '1Eventstotal',
'2UTtotal', '2DTtotal', '2Eventstotal',
'3UTtotal', '3DTtotal', '3Eventstotal',
'4UTtotal', '4DTtotal', '4Eventstotal',
'5UTtotal', '5DTtotal', '5Eventstotal',
'6UTtotal', '6DTtotal', '6Eventstotal',
'7UTtotal', '7DTtotal', '7Eventstotal',
'8UTtotal', '8DTtotal', '8Eventstotal',
'9UTtotal', '9DTtotal', '9Eventstotal',
'10UTtotal', '10DTtotal', '10Eventstotal',
'11UTtotal', '11DTtotal', '11Eventstotal',
'12UTtotal', '12DTtotal', '12Eventstotal'
]
#create dataframe
df_total = pandas.DataFrame(columns=[colname], index=[indexnames], dtype=np.float64)
print('df_total is:\n' + str(df_total))
df_total = df_total.fillna(-1)
print('\n\ndf_total is:\n' + str(df_total) + '\n\n')
monthlyDT(0, 3)
print('end')
</code></pre>
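<p>For comparison, the following minimal check (plain lists for <code>columns</code> and <code>index</code>, without the extra list nesting in my code above) works fine for me — though I don't know if that nesting is actually related to my error:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

# columns/index passed as plain lists, not wrapped in another list
df = pd.DataFrame(columns=["2013", "2014"],
                  index=["9UTtotal", "9DTtotal"],
                  dtype=np.float64)
df = df.fillna(-1)
print(df.at["9UTtotal", "2013"])  # -1.0
</code></pre>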
|
<python><pandas><dataframe>
|
2024-10-17 22:28:04
| 1
| 471
|
Jeff
|
79,099,969
| 9,251,158
|
How to match specific files with a shell pattern in unit test discoverer?
|
<p>I want to run a suite of unit tests. The basic code is:</p>
<pre class="lang-py prettyprint-override"><code>suite = unittest.defaultTestLoader.discover('tests')
</code></pre>
<p>The documentation for the function mentions the <code>pattern</code> argument, which defaults to <code>test_*.py</code>:</p>
<blockquote>
<p>Only test files that match the pattern will be loaded. (Using shell style pattern matching.)</p>
</blockquote>
<p>I want only some of these tests to run, for example <code>test_abcd</code>, <code>test_ab1</code>, and <code>test_ab3</code>, but not <code>test_ab2</code> (because it imports a file that may not be present). In a shell, I could match these files with:</p>
<pre><code>test_{abcd,ab1,ab3}
</code></pre>
<p>But it fails inside the test discoverer. I also tried these patterns, but none of them work inside the discoverer:</p>
<pre><code>test_\(abcd|ab1|ab3\)
test_abcd,test_ab1,test_ab3
</code></pre>
<p>One pattern that works inside the discoverer is <code>test_ab[2-4]</code>, but it will also catch <code>test_ab2</code>, which I don't want.</p>
<p>How can I match these specific files with a shell pattern or the unit test discoverer?</p>
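<p>One workaround I have considered (a sketch — I don't know whether it is idiomatic) is to call <code>discover()</code> once per filename and combine the suites, since shell-style patterns don't support alternation:</p>
<pre class="lang-py prettyprint-override"><code>import unittest

def discover_selected(start_dir, patterns):
    """Combine one discover() call per pattern into a single suite."""
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    for pat in patterns:
        suite.addTests(loader.discover(start_dir, pattern=pat))
    return suite

# e.g. discover_selected("tests", ["test_abcd.py", "test_ab1.py", "test_ab3.py"])
</code></pre>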
|
<python><shell><unit-testing><wildcard>
|
2024-10-17 22:05:17
| 2
| 4,642
|
ginjaemocoes
|
79,099,953
| 1,082,367
|
alternative to unstable scan_pyarrow_dataset()?
|
<p>We are new to polars and pyarrow, and we are trying to work with a pyarrow <a href="https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Dataset.html" rel="nofollow noreferrer">Dataset</a> of .csv (and possibly other) file formats. We are using pyarrow to create the dataset because it supports partitioning of .csv files, whereas I believe <a href="https://docs.pola.rs/api/python/stable/reference/api/polars.scan_csv.html" rel="nofollow noreferrer">polars.scan_csv()</a> does not. <a href="https://docs.pola.rs/api/python/dev/reference/api/polars.scan_pyarrow_dataset.html" rel="nofollow noreferrer">scan_pyarrow_dataset()</a> works fine (we're using polars 1.9.0) but we are concerned about depending on this function due to the warning <a href="https://docs.pola.rs/api/python/dev/reference/api/polars.scan_pyarrow_dataset.html" rel="nofollow noreferrer">here</a> :</p>
<blockquote>
<p>This functionality is considered unstable. It may be changed at any
point without it being considered a breaking change.</p>
</blockquote>
<p>Our question: Why is this warning there, and is there another way to recursively scan directories for .csv files that are partitioned? Someone suggested we use <a href="https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Dataset.html#pyarrow.dataset.Dataset.to_table" rel="nofollow noreferrer">pyarrow.dataset.Dataset.to_table()</a>, but it reads the entire dataset into memory. Thank you!</p>
|
<python><python-polars><pyarrow>
|
2024-10-17 21:57:51
| 1
| 4,224
|
Matthew Cornell
|
79,099,894
| 13,968,392
|
Avoid many line breaks when formatting long lines containing binary operators with Ruff
|
<p>The ruff formatter generally wraps lines with more than 88 characters (this is the default, i.e. <code>line-length = 88</code>).</p>
<p>Unfortunately, this leads to lines of pathlib paths wrapped in an unpractical fashion:</p>
<pre><code>from pathlib import Path
path_save = Path().cwd().parents[1] / "some" / "folder" / "a_very_long_and_lenghtly_file_name.csv"
</code></pre>
<p>After applying ruff format, the second line becomes:</p>
<pre><code>path_save = (
Path().cwd().parents[1]
/ "some"
/ "folder"
/ "a_very_long_and_lenghtly_file_name.csv"
)
</code></pre>
<p>What I would like is only one line break:</p>
<pre><code>path_save2 = (Path().cwd().parents[1] / "some" /
"folder" / "a_very_long_and_lenghtly_file_name.csv")
</code></pre>
<p>Is this possible with ruff? The <a href="https://docs.astral.sh/ruff/settings/#line-length" rel="nofollow noreferrer">line-length docs</a> don't explain at which positions line breaks are placed. I have many python files with such pathlib paths and would appreciate some sort of solution for this with few or only one line break.</p>
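<p>The closest thing I have found so far (assuming it fits the use case) is to exempt individual statements from formatting with <code># fmt: skip</code>, which the Ruff formatter honors like Black does, so a manual layout survives:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path

# "# fmt: skip" tells the Ruff formatter to leave this statement untouched,
# preserving the manual two-line layout ("base" is a placeholder path):
path_save = (Path("base") / "some" /
             "folder" / "a_very_long_and_lenghtly_file_name.csv")  # fmt: skip
print(path_save.name)
</code></pre>
<p>This keeps the statement as written, but it has to be applied per line, which is not ideal for many files.</p>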
|
<python><line-breaks><formatter><pathlib><ruff>
|
2024-10-17 21:34:58
| 1
| 2,117
|
mouwsy
|
79,099,776
| 4,796,942
|
Importing data from Google Business Review to Python `Error 400: redirect_uri_mismatch`
|
<p>I am trying to import data from Google Business Reviews into Python, but I keep running into the error <code>Error 400: redirect_uri_mismatch</code>, even though I followed this <a href="https://support.google.com/business/thread/215701947/how-do-i-get-user-reviews-for-my-business-via-api?hl=en" rel="nofollow noreferrer">Google post</a>, which told me to follow these steps:</p>
<ol>
<li>Enable the Google Reviews API.</li>
<li>Create a project in the Google Developers Console.</li>
<li>Enable the Google Reviews API in your project.</li>
<li>Create a service account.</li>
<li>Download the JSON key file for your service account.</li>
<li>Install the Google API client library for your programming language.</li>
<li>Create a new client object.</li>
<li>Call the <code>accounts.locations.reviews.list()</code> method to get a list of reviews for your business.</li>
<li>To get just the reviews, you can call the <code>accounts.locations.reviews.list()</code> method with the following parameters:</li>
</ol>
<ul>
<li><code>account_id</code>: The ID of your Google My Business account.</li>
<li><code>location_id</code>: The ID of your Google My Business location.</li>
<li><code>filter</code>: A filter that can be used to narrow down the results. For example, you can use the rating filter to only get reviews with a certain rating.</li>
</ul>
<pre><code>Error 400: redirect_uri_mismatch
You can't sign in to this app because it doesn't comply with Google's OAuth 2.0 policy.
If you're the app developer, register the redirect URI in the Google Cloud Console.
Request details: redirect_uri=http://localhost:52271/ flowName=GeneralOAuthFlow
</code></pre>
<p>The port number <code>52271</code> keeps changing, even though I tried to fix it in my code as <code>8080</code>:</p>
<pre><code>
import requests
from google_auth_oauthlib.flow import InstalledAppFlow
from loguru import logger as log
# Your Google My Business account ID and location ID
my_business_account_id = "..." # Replace with actual
location_id = "..." # Replace with actual
# OAuth 2.0 access token obtained
access_token = "..." # Replace with your actual access token
# Path to your OAuth 2.0 Client Secret JSON file
GCP_CREDENTIALS_PATH = "google_review_client.json" # Replace with actual
# Ensure the redirect URI matches the one in Google Cloud Console
redirect_uri = "http://localhost:8080/"
# Setup the OAuth 2.0 flow with required scopes
flow = InstalledAppFlow.from_client_secrets_file(
GCP_CREDENTIALS_PATH,
scopes=["https://www.googleapis.com/auth/business.manage"],
redirect_uri=redirect_uri,
)
# Run the OAuth flow to obtain credentials
credentials = flow.run_local_server(port=0)
# Log the credentials to confirm successful OAuth
log.debug(f"Credentials: {credentials}")
# Setup session to use the credentials for accessing Google My Business API
session = requests.Session()
session.headers.update(
{"Authorization": f"Bearer {credentials.token}", "Content-Type": "application/json"}
)
# Construct the API endpoint URL
url = f"https://mybusiness.googleapis.com/v4/accounts/{my_business_account_id}/locations/{location_id}/reviews"
# Perform the API request and handle potential errors
try:
log.info(f"Making API request to URL: {url}")
response = session.get(url)
response.raise_for_status() # This will raise an error for bad HTTP status codes
reviews = response.json()
log.success("Reviews fetched successfully.")
print(reviews)
except requests.exceptions.HTTPError as http_err:
log.error(
f"HTTP error occurred: {http_err}"
) # Specific details about the HTTP error
except Exception as err:
log.error(f"An unexpected error occurred: {err}") # Other errors
</code></pre>
<p>I am expecting a table with the Google reviews for the business, but when I run this I get the error <code>Error 400: redirect_uri_mismatch</code>.</p>
|
<python><google-cloud-platform><python-requests><google-business-profile-api>
|
2024-10-17 20:41:17
| 1
| 1,587
|
user4933
|
79,099,747
| 4,382,391
|
code walkthrough of chain syntax in langchain
|
<p>I am following a RAG tutorial from: <a href="https://medium.com/@vndee.huynh/build-your-own-rag-and-run-it-locally-langchain-ollama-streamlit-181d42805895" rel="nofollow noreferrer">https://medium.com/@vndee.huynh/build-your-own-rag-and-run-it-locally-langchain-ollama-streamlit-181d42805895</a></p>
<p>In the tutorial there is a section that creates a chain:</p>
<pre class="lang-py prettyprint-override"><code> self.chain = ({"context": self.retriever, "question": RunnablePassthrough()}
| self.prompt
| self.model
| StrOutputParser())
</code></pre>
<p>Can somebody explain what this block of code does? The syntax is unfamiliar to me. My best understanding is that the pipe operator feeds the output of one step into the next, so this can be rewritten as:</p>
<pre class="lang-py prettyprint-override"><code>query = {"context": self.retriever, "question": RunnablePassthrough()}
prompt = self.prompt(query)
response = self.model(prompt)
string_out = StrOutputParser(response)
chain(string_out)
</code></pre>
<p>In the example, <code>self.chain</code> is invoked as follows:</p>
<pre class="lang-py prettyprint-override"><code>self.chain.invoke(query) # query is a str
</code></pre>
<p>and the following are the properties defined on the main object:</p>
<pre class="lang-py prettyprint-override"><code>from langchain_community.chat_models import ChatOllama
from langchain.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser
# ...
class ChatPDF:
def __init__(self):
self.prompt = PromptTemplate.from_template(
"""
<s> [INST] Vous Γͺtes un assistant pour les tΓ’ches de rΓ©ponse aux questions. Utilisez les Γ©lΓ©ments de contexte suivants pour rΓ©pondre Γ la question.
Si vous ne connaissez pas la rΓ©ponse, dites simplement que vous ne savez pas.. Utilisez trois phrases
maximum et soyez concis dans votre rΓ©ponse. [/INST] </s>
[INST] Question: {question}
Context: {context}
Answer: [/INST]
""")
self.model = ChatOllama(model="mistral")
# ...
</code></pre>
<p>The whole code can be viewed on <a href="https://medium.com/@vndee.huynh/build-your-own-rag-and-run-it-locally-langchain-ollama-streamlit-181d42805895" rel="nofollow noreferrer">the medium blogpost</a></p>
<p>In summary, I would like an explanation of what this block of code does, and I am especially confused about the following:</p>
<ol>
<li>How can we pass input to <code>self.model</code>? <code>self.model</code> is not a function, it is a <code>ChatOllama</code> that has already been constructed. So what's happening here?</li>
<li>What does <code>RunnablePassthrough()</code> do? I read the documentation and it seems to just be an identity function. Why do we need this? (<a href="https://python.langchain.com/v0.1/docs/expression_language/primitives/passthrough/" rel="nofollow noreferrer">https://python.langchain.com/v0.1/docs/expression_language/primitives/passthrough/</a>) I assume when you write <code>self.chain.invoke(query)</code> then <code>RunnablePassthrough() == query</code></li>
<li>Why is the whole expression wrapped in parentheses? I assume this has something to do with langchain and also the pipe operator. When I mess around and make variables like <code>test = (print); test("hi")</code> vs <code>test = print; test("hi")</code> the results are the same</li>
</ol>
<p>Edit: someone said this is similar to <a href="https://stackoverflow.com/questions/38987/how-do-i-merge-two-dictionaries-in-a-single-expression-in-python">How do I merge two dictionaries in a single expression in Python?</a> , but it is not. The pipe operator is being used in this example to merge dicts where here it is being used to pass outputs of functions to the next, afaik. No dicts here. The pipe operator is overloaded in python.</p>
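<p>To make my mental model concrete, here is a toy illustration I wrote (not LangChain's actual classes) of how <code>|</code> can be overloaded via <code>__or__</code> to compose steps into a chain:</p>
<pre class="lang-py prettyprint-override"><code>class Step:
    """Toy runnable: wraps a function and chains with | via __or__."""
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # feed this step's output into the next step
        return Step(lambda x: other.func(self.func(x)))

    def invoke(self, x):
        return self.func(x)

double = Step(lambda x: x * 2)
inc = Step(lambda x: x + 1)
chain = double | inc
print(chain.invoke(3))  # 7
</code></pre>
<p>Is this roughly what is happening under the hood, with each LangChain object implementing something like <code>__or__</code>?</p>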
|
<python><langchain><large-language-model><retrieval-augmented-generation><rag>
|
2024-10-17 20:28:12
| 0
| 1,070
|
Null Salad
|
79,099,612
| 4,048,657
|
When I run without CUDA: Function βPowBackward0β returned nan values in its 0th output
|
<p>My code was running fine <em>with</em> CUDA, but now that I run it with <code>device="cpu"</code>, with the flag <code>torch.autograd.set_detect_anomaly(True)</code>, the runtime error is raised:</p>
<pre class="lang-none prettyprint-override"><code>RuntimeError: Function 'PowBackward0' returned nan values in its 0th output.
</code></pre>
<p>Looking closely at the call stack:</p>
<pre><code> File "<ipython-input-468-be9e157834e4>", line 83, in forward
self.grad_mag = torch.sqrt(self.grad_x**2 + self.grad_y**2)
File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 41, in wrapped
return f(*args, **kwargs)
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:111.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
</code></pre>
<p>Indicating that the error is when backwarding through:</p>
<pre class="lang-py prettyprint-override"><code>self.grad_mag = torch.sqrt(self.grad_x**2 + self.grad_y**2)
</code></pre>
<p>Firstly, I don't see why this issue appears on CPU but not on CUDA. Secondly, I don't understand why it would get NaNs from the backward pass:</p>
<pre class="lang-py prettyprint-override"><code>print("grad_x: ",self.grad_x.isinf().any(), self.grad_x.isnan().any())
print("grad_y: ",self.grad_y.isinf().any(), self.grad_y.isnan().any())
self.grad_mag = torch.sqrt(self.grad_x**2 + self.grad_y**2)
print("grad mag ", self.grad_mag.isinf().any(), self.grad_mag.isnan().any())
</code></pre>
<p>which outputs:</p>
<pre><code>grad_x: tensor(False) tensor(False)
grad_y: tensor(False) tensor(False)
grad mag tensor(False) tensor(False)
</code></pre>
<p>If it makes any difference, I'm optimizing with LBFGS.</p>
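<p>One thing I noticed while investigating (a self-contained check, not necessarily my exact situation): <code>sqrt</code> has an infinite derivative at 0, so backpropagating through <code>sqrt(x**2 + y**2)</code> yields NaN gradients exactly where both inputs are 0, even though the forward values are finite:</p>
<pre class="lang-py prettyprint-override"><code>import torch

# d/dx sqrt(x**2 + y**2) = x / sqrt(x**2 + y**2), which is 0/0 at the origin
x = torch.zeros(1, requires_grad=True)
y = torch.zeros(1, requires_grad=True)
torch.sqrt(x**2 + y**2).backward()
print(x.grad)  # tensor([nan])

# adding a small epsilon before the sqrt avoids the 0/0
x2 = torch.zeros(1, requires_grad=True)
y2 = torch.zeros(1, requires_grad=True)
torch.sqrt(x2**2 + y2**2 + 1e-12).backward()
print(x2.grad)  # tensor([0.])
</code></pre>
<p>Could something like this explain why the NaN checks on the forward values all pass while the backward still fails?</p>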
|
<python><pytorch><autograd>
|
2024-10-17 19:34:34
| 2
| 1,239
|
Cedric Martens
|
79,099,393
| 2,837,253
|
Raising errors from a generator function immediately
|
<p>I have a python class that is a wrapper around a datafile containing a number of variables and attributes. Some of these variables may have also have attributes associated with them, and I want to be able to iterate over these attributes using a generator. I want to be able to raise an Exception if a specified variable does not exist in the file, and yield the attributes if it <em>does</em> exist in the file. The problem, however, is that I would prefer to be able to raise the exception <em>immediately</em>, but because generator functions in Python are evaluated lazily, the exception only gets thrown when the generator is first evaluated. This essentially boils down to the following:</p>
<pre class="lang-py prettyprint-override"><code>class file:
def __init__(self, varlist: List[str], attrlist: Dict[str, Any]):
self.varlist = varlist
self.attrlist = attrlist
def loadvar(self, varname: str):
...
def attrs(self, varname: str = None):
if varname:
if varname not in self.varlist:
raise RuntimeError(f"{varname} is not a valid variable name")
yield from self.loadvar(varname).attrs()
yield from self.attrlist.keys()
datafile = file(['a', 'b','c'], {'d':1, 'e':2, 'f':3})
list(datafile.attrs('non-existent')) # <-- raises exception; good
attr = datafile.attrs('non-existent')  # <-- does not raise exception; bad
list(attr) # <-- does raise exception
</code></pre>
<p>This is just a silly snippet example, so the logic doesn't necessarily make sense. Is it at all possible to raise an exception when the generator function is <strong>called</strong>, rather than <strong>evaluated</strong>?</p>
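<p>The workaround I am currently leaning towards (a sketch adapted from the snippet above) is to validate eagerly in a plain method that returns an inner generator — but I would prefer something less boilerplate-heavy:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Dict, Iterator, List

class File:
    def __init__(self, varlist: List[str], attrlist: Dict[str, Any]):
        self.varlist = varlist
        self.attrlist = attrlist

    def attrs(self, varname: str = None) -> Iterator[str]:
        # plain function: runs (and can raise) immediately when called
        if varname is not None and varname not in self.varlist:
            raise RuntimeError(f"{varname} is not a valid variable name")
        return self._attrs()

    def _attrs(self):
        # inner generator: still evaluated lazily, as before
        yield from self.attrlist.keys()

datafile = File(["a", "b", "c"], {"d": 1, "e": 2, "f": 3})
print(list(datafile.attrs()))  # ['d', 'e', 'f']
</code></pre>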
|
<python>
|
2024-10-17 18:23:58
| 1
| 4,778
|
MrAzzaman
|
79,099,366
| 11,644,167
|
Failed to satisfy constraint: Member must satisfy regular expression pattern
|
<p>I'm trying to follow a simple example from <a href="https://spacy.io/universe/project/Klayers" rel="nofollow noreferrer">spacy universe layers page</a>, but this is failing for me:</p>
<p>Code Implementation:</p>
<pre class="lang-yaml prettyprint-override"><code># template.yaml file
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
GetWordCounts:
Type: AWS::Serverless::Function
Properties:
Handler: word-counts/app.lambda_handler
Runtime: python3.9
CodeUri: .
Timeout: 30
Layers:
- arn:aws:lambda:${self:provider.region}:113088814899:layer:Klayers-python37-spacy:18
Events:
ApiGateway:
Type: Api
Properties:
Path: /word-counts
Method: get
</code></pre>
<pre class="lang-py prettyprint-override"><code># word-counts/app.py file
import json
import spacy
def lambda_handler(event, context):
# Logic for Lambda function
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
</code></pre>
<p>I'm using the following command to launch the API:</p>
<pre class="lang-bash prettyprint-override"><code>sam local start-api --profile my-profile
</code></pre>
<p>So when I run the endpoint, it fails with:</p>
<pre class="lang-none prettyprint-override"><code># http://localhost:3000/word-counts
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the GetLayerVersion operation: 1 validation error detected: Value 'arn:aws:lambda:${self:provider.region}:113088814899:layer:Klayers-python37-spacy' at
'layerName' failed to satisfy constraint: Member must satisfy regular expression pattern: (arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2}((-gov)|(-iso([a-z]?)))?-[a-z]+-\d{1}:\d{12}:layer:[a-zA-Z0-9-_]+)|[a-zA-Z0-9-_]+
</code></pre>
<p>Additional error details:</p>
<pre class="lang-none prettyprint-override"><code>Mounting GetWordCounts at http://127.0.0.1:3000/word-counts [GET]
You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. If you used sam build before running local
commands, you will need to re-run sam build for the changes to be picked up. You only need to restart SAM CLI if you update your AWS SAM template
2024-10-17 15:09:35 WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:3000
2024-10-17 15:09:35 Press CTRL+C to quit
Invoking word-counts/app.lambda_handler (python3.9)
Exception on /word-counts [GET]
Traceback (most recent call last):
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/apigw/local_apigw_service.py", line 726, in _request_handler
lambda_response = self._invoke_lambda_function(route.function_name, route_lambda_event)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/apigw/local_apigw_service.py", line 619, in _invoke_lambda_function
self.lambda_runner.invoke(lambda_function_name, event_str, stdout=stdout_writer, stderr=self.stderr)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/commands/local/lib/local_lambda.py", line 166, in invoke
self.local_runtime.invoke(
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/lib/telemetry/metric.py", line 325, in wrapped_func
return_value = func(*args, **kwargs)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/lambdafn/runtime.py", line 224, in invoke
container = self.create(
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/lambdafn/runtime.py", line 96, in create
container = LambdaContainer(
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/docker/lambda_container.py", line 103, in __init__
image = LambdaContainer._get_image(
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/docker/lambda_container.py", line 257, in _get_image
return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/docker/lambda_image.py", line 201, in build
downloaded_layers = self.layer_downloader.download_all(layers, self.force_image_build)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/layers/layer_downloader.py", line 77, in download_all
layer_dirs.append(self.download(layer, force))
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/layers/layer_downloader.py", line 111, in download
layer_zip_uri = self._fetch_layer_uri(layer)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/layers/layer_downloader.py", line 160, in _fetch_layer_uri
raise e
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/samcli/local/layers/layer_downloader.py", line 141, in _fetch_layer_uri
layer_version_response = self.lambda_client.get_layer_version(
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/willian/.pyenv/versions/3.9.20/lib/python3.9/site-packages/botocore/client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the GetLayerVersion operation: 1 validation error detected: Value 'arn:aws:lambda:${self:provider.region}:113088814899:layer:Klayers-python37-spacy' at
'layerName' failed to satisfy constraint: Member must satisfy regular expression pattern: (arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2}((-gov)|(-iso([a-z]?)))?-[a-z]+-\d{1}:\d{12}:layer:[a-zA-Z0-9-_]+)|[a-zA-Z0-9-_]+
2024-10-17 15:09:39 127.0.0.1 - - [17/Oct/2024 15:09:39] "GET /word-counts HTTP/1.1" 502 -
2024-10-17 15:09:39 127.0.0.1 - - [17/Oct/2024 15:09:39] "GET /favicon.ico HTTP/1.1" 403 -
</code></pre>
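<p>While debugging, I checked the literal layer string against the regex from the error message; the unresolved <code>${self:provider.region}</code> placeholder (which looks like Serverless Framework interpolation syntax rather than CloudFormation) indeed fails it, while a concrete region passes (the region value below is just an example):</p>
<pre class="lang-py prettyprint-override"><code>import re

# the pattern quoted in the ValidationException message
ARN_RE = re.compile(
    r"(arn:(aws[a-zA-Z-]*)?:lambda:[a-z]{2}((-gov)|(-iso([a-z]?)))?-[a-z]+-\d{1}"
    r":\d{12}:layer:[a-zA-Z0-9-_]+)|[a-zA-Z0-9-_]+"
)
bad = "arn:aws:lambda:${self:provider.region}:113088814899:layer:Klayers-python37-spacy"
good = "arn:aws:lambda:us-east-1:113088814899:layer:Klayers-python37-spacy"
print(ARN_RE.fullmatch(bad) is None)       # True: placeholder breaks the pattern
print(ARN_RE.fullmatch(good) is not None)  # True: concrete region is accepted
</code></pre>
<p>So it seems the placeholder is being passed through literally instead of being substituted — is that the actual problem, and if so, what is the correct way to reference the region in a SAM template?</p>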
|
<python><amazon-web-services><aws-lambda><spacy><aws-lambda-layers>
|
2024-10-17 18:14:53
| 1
| 3,475
|
Willian
|
79,099,321
| 16,348,170
|
VSCODE not showing documentation on hover for external libraries in python for ipynb files
|
<p>OK: every time I import an external library (any external library) in an .ipynb file, VS Code doesn't show any documentation for the library's functions (almost all the time). I have faced this issue multiple times, with almost any library.</p>
<p>For example, suppose I import and use a method from the albumentations library, which I installed from pip:</p>
<p><a href="https://i.sstatic.net/MVhDOXpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MVhDOXpB.png" alt="enter image description here" /></a></p>
<p>When I hover over it with mouse, it shows me this:</p>
<p><a href="https://i.sstatic.net/A2pgmlL8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2pgmlL8.png" alt="enter image description here" /></a></p>
<p>The library does indeed get imported correctly, and I can use its functions perfectly well. I just don't know why it doesn't show the documentation on hover.</p>
<p>Any idea how to get it to show the full documentation that I would normally expect from any standard IDE? I don't have this problem with IntelliJ, and I expect this to work with VS Code as well. Have I configured something wrong? Is there a setting that should be enabled to get this to work? Why doesn't it work with any external library that I install with pip?</p>
<p>Edit:
I am aware of doing <code>A.Rotate?</code> and getting the documentation, but I want to know how to do it with hover in particular, because it's a minimum level of convenience that I expect from an editor.</p>
|
<python><visual-studio-code><editor>
|
2024-10-17 18:01:55
| 1
| 341
|
hidden_machine
|
79,099,224
| 11,069,614
|
How to replace character and split lines in python
|
<p>I have a text file with a bunch of data like this:</p>
<pre class="lang-none prettyprint-override"><code>DMG*D8*19931219*F**H~AMT*P3*0~NM1*31*1~N3*2670 SO A W GRIMES BO*#6102~N4*ROUND ROCK*TX*786642849~NM1*QD*1*FIGUEROA*LILIET~N3*2670 SO A W GRIMES BO*#6102~N4*ROUND ROCK*TX*786642849~INS*Y*18*024*07*A***TE~REF*0F*811229070~REF*1L*811229070~REF*3H*0~REF*ZZ*H~REF*6O*1055763324~DTP*303*D8*20240901~DTP*356*D8*20240201~DTP*357*D8*20250131~DTP*286*D8*20240831
</code></pre>
<p>I want to split the data into new lines where the <code>~</code> character appears.</p>
<p>I have tried:</p>
<pre><code>with open(txtP835s + '/' + fileName, "r") as txtfile:
    for line in txtfile:
        line.replace('~', '\n')
        lines = line.splitlines()
        print(lines)
txtfile.close()
</code></pre>
<p>But this doesn't work. The data just looks the same as the raw file.</p>
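<p>For reference, a sketch of why the loop above leaves the data unchanged: Python strings are immutable, so <code>str.replace</code> returns a <em>new</em> string rather than modifying <code>line</code> in place, and the result must be reassigned or used directly (the sample string below is a shortened copy of the data in the question):</p>

```python
# str.replace() returns a NEW string; the original is left untouched,
# so the return value has to be captured.
def split_on_tilde(text: str) -> list[str]:
    # Replace each '~' with a newline, then split into individual lines.
    return text.replace("~", "\n").splitlines()

sample = "DMG*D8*19931219*F**H~AMT*P3*0~NM1*31*1"
for segment in split_on_tilde(sample):
    print(segment)
```

<p>Equivalently, <code>text.split("~")</code> gives the same segments without the intermediate newline step.</p>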
|
<python>
|
2024-10-17 17:28:06
| 1
| 392
|
Ben Smith
|
79,099,138
| 4,985,049
|
usage of retain graph in pytorch
|
<p>I get an error if I don't supply <code>retain_graph=True</code> in <code>y1.backward()</code>:</p>
<pre><code>import torch
x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([3.0], requires_grad=True)
f = x+y
z = 2*f
y1 = z**2
y2 = z**3
y1.backward()
y2.backward()
</code></pre>
<pre><code>Traceback (most recent call last):
File "/Users/a0m08er/pytorch/pytorch_tutorial/tensor.py", line 58, in <module>
y2.backward()
File "/Users/a0m08er/pytorch/lib/python3.11/site-packages/torch/_tensor.py", line 521, in backward
torch.autograd.backward(
File "/Users/a0m08er/pytorch/lib/python3.11/site-packages/torch/autograd/__init__.py", line 289, in backward
_engine_run_backward(
File "/Users/a0m08er/pytorch/lib/python3.11/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
</code></pre>
<p>But I don't get an error when I do this:</p>
<pre><code>import torch
x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([3.0], requires_grad=True)
z = x+y
y1 = z**2
y2 = z**3
y1.backward()
y2.backward()
</code></pre>
<p>Since <code>z</code> is a common node for <code>y1</code> and <code>y2</code>, why does the second snippet not raise an error when I call <code>y2.backward()</code>?</p>
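<p>For the first snippet, the documented workaround is to keep the shared part of the graph alive for the second pass; a minimal sketch with the same tensors as above (gradients accumulate across the two backward calls):</p>

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([3.0], requires_grad=True)
f = x + y
z = 2 * f          # this multiplication node is shared by y1 and y2
y1 = z ** 2
y2 = z ** 3

# Retain the saved intermediate values so the shared subgraph can be
# traversed again by the second backward pass.
y1.backward(retain_graph=True)
y2.backward()

# d(y1)/dx = 2*z * 2 = 40 and d(y2)/dx = 3*z**2 * 2 = 600, summed into x.grad
print(x.grad)  # tensor([640.])
```

<p>In the second snippet, <code>z = x + y</code> produces an addition node that saves no intermediate tensors, so freeing <code>y1</code>'s graph leaves nothing that <code>y2</code>'s graph still needs; with <code>z = 2 * f</code> the multiplication's saved tensors are shared and get freed by the first <code>backward()</code>.</p>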
|
<python><pytorch><tensor><autograd>
|
2024-10-17 16:55:58
| 1
| 403
|
pasternak
|
79,099,118
| 725,932
|
Override value in pydantic model with environment variable
|
<p>I am building some configuration logic for a Python 3 app, and trying to use <code>pydantic</code> and <code>pydantic-settings</code> to manage validation etc. I'm able to load raw settings from a YAML file and create my settings object from them. I'm also able to read a value from an environment variable. But I can't figure out how to make the environment variable value take precedence over the raw settings:</p>
<pre class="lang-py prettyprint-override"><code>import os

import yaml as pyyaml
from pydantic_settings import BaseSettings, SettingsConfigDict


class FooSettings(BaseSettings):
    foo: int
    bar: str

    model_config = SettingsConfigDict(env_prefix='FOOCFG__')


raw_yaml = """
foo: 13
bar: baz
"""

os.environ.setdefault("FOOCFG__FOO", "42")

raw_settings = pyyaml.safe_load(raw_yaml)
settings = FooSettings(**raw_settings)

assert settings.foo == 42
</code></pre>
<p>If I comment out <code>foo: 13</code> in the input yaml, the assertion passes. How can I make the env value take precedence?</p>
|
<python><pydantic><pydantic-settings>
|
2024-10-17 16:49:59
| 2
| 3,258
|
superstator
|
79,099,059
| 5,133,008
|
Moving (PyQt6) window not working when python file is compiled with PyInstaller on Linux Fedora
|
<p>I'm trying to compile a python program. The UI is made with PyQt6, and the main window is frameless. I wanted to implement moving the window manually by using the <code>move()</code> function of a <code>QWidget</code>. When running the Python code itself, it works, but when I compile the code with PyInstaller, the window no longer moves. I have boiled down the problem to this minimal example:</p>
<pre><code>import sys

from PyQt6.QtWidgets import QApplication, QWidget
from PyQt6.QtCore import Qt

if __name__ == "__main__":
    app = QApplication(sys.argv)

    window = QWidget()
    window.setGeometry(0, 0, 800, 300)
    window.setWindowFlags(Qt.WindowType.FramelessWindowHint)
    window.show()

    screen_geometry = app.primaryScreen().geometry()
    window.move(screen_geometry.width() - window.width(), 0)

    sys.exit(app.exec())
</code></pre>
<p>I am compiling the code with PyInstaller: <code>pyinstaller --onefile test.py</code>.</p>
<p>The <code>window.move()</code> function is supposed to move the window to the right side of the screen. When running this as Python code, it works, but after compiling on Linux (Fedora), it no longer does. I tried compiling it on macOS, and it did work there. So that leads me to believe it's a backend issue.</p>
<p><strong>Edit:</strong> When I run the python code from the terminal, it doesn't work, but when I run it in VS Code, it does somehow work.</p>
<p>I only have a basic understanding of PyQt and I don't know anything about graphics backends, so I don't know how to troubleshoot this. I'm on Linux Fedora 40, if that's relevant. Any suggestions, explanations or insights? Thanks!</p>
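<p>One thing worth ruling out (an assumption, since it depends on the session type): under Wayland, compositors generally ignore programmatic moves of top-level windows, while the X11/xcb backend honours them. That would also fit the terminal-vs-VS-Code difference if the two launch environments select different Qt platform plugins. Forcing the xcb backend before the <code>QApplication</code> is created is a common workaround:</p>

```python
import os

# Assumption: the desktop session is Wayland. Wayland compositors typically
# ignore programmatic moves of top-level windows; Qt's X11 (xcb) backend
# honours QWidget.move(). This must be set before QApplication is created.
os.environ.setdefault("QT_QPA_PLATFORM", "xcb")

# app = QApplication(sys.argv)  # ...then continue as in the snippet above
```

<p>Checking <code>echo $XDG_SESSION_TYPE</code> in the terminal would confirm whether Wayland is in play; the same environment variable can also be set when launching the PyInstaller binary instead of in code.</p>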
|
<python><linux><pyqt><pyinstaller><fedora>
|
2024-10-17 16:31:56
| 0
| 441
|
svs
|
79,099,018
| 2,678,716
|
Scraping the hulkapps table using Selenium or Beautiful soup
|
<p>I have this URL that I am trying to scrape: <a href="https://papemelroti.com/products/live-free-badge" rel="nofollow noreferrer">https://papemelroti.com/products/live-free-badge</a></p>
<p>But it seems that I can't find this table class</p>
<pre><code><table class="hulkapps-table table"><thead><tr><th style="border-top-left-radius: 0px;">Quantity</th><th style="border-top-right-radius: 0px;">Bulk Discount</th><th style="display: none">Add to Cart</th></tr></thead><tbody><tr><td style="border-bottom-left-radius: 0px;">Buy 50 + <span class="hulk-offer-text"></span></td><td style="border-bottom-right-radius: 0px;"><span class="hulkapps-price"><span class="money"><span class="money"> β±1.00 </span></span> Off</span></td><td style="display: none;"><button type="button" class="AddToCart_0" style="cursor: pointer; font-weight: 600; letter-spacing: .08em; font-size: 11px; padding: 5px 15px; border-color: #171515; border-width: 2px; color: #ffffff; background: #161212;" onclick="add_to_cart(50)">Add to Cart</button></td></tr></tbody></table>
</code></pre>
<p>I already have my Selenium code but it's still not scraping it. Here's my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import time

# Set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")

service = Service('/usr/local/bin/chromedriver')  # Adjust path if necessary
driver = webdriver.Chrome(service=service, options=chrome_options)

def get_page_html(url):
    driver.get(url)
    time.sleep(3)  # Wait for JS to load
    return driver.page_source

def scrape_discount_quantity(url):
    page_html = get_page_html(url)
    soup = BeautifulSoup(page_html, "html.parser")

    # Locate the table containing the quantity and discount
    table = soup.find('table', class_='hulkapps-table')
    print(page_html)
    if table:
        table_rows = table.find_all('tr')
        for row in table_rows:
            quantity_cells = row.find_all('td')
            if len(quantity_cells) >= 2:  # Check if there are at least two cells
                quantity_cell = quantity_cells[0].get_text(strip=True)  # Get quantity text
                discount_cell = quantity_cells[1].get_text(strip=True)  # Get discount text
                return quantity_cell, discount_cell
    return None, None

# Example usage
url = 'https://papemelroti.com/products/live-free-badge'
quantity, discount = scrape_discount_quantity(url)
print(f"Quantity: {quantity}, Discount: {discount}")

driver.quit()  # Close the browser when done
</code></pre>
<p>It keeps on returning <code>None</code>.</p>
<p>For reference:
<a href="https://i.sstatic.net/EDOyPuVZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDOyPuVZ.png" alt="enter image description here" /></a></p>
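<p>For what it's worth, the table markup quoted above parses fine once it is actually in the DOM; the sketch below runs BeautifulSoup over a trimmed static copy of that snippet. That points to the likely culprit being timing: the hulkapps widget injects the table after the initial 3-second sleep, so an explicit wait (e.g. selenium's <code>WebDriverWait</code> with <code>presence_of_element_located</code> on <code>table.hulkapps-table</code>) or a longer timeout would be the usual fix. This is an assumption about the page's behaviour, not something verified against the live site.</p>

```python
from bs4 import BeautifulSoup

# Trimmed static copy of the table markup from the question; on the live
# page this is injected by the hulkapps script some time after page load.
html = """
<table class="hulkapps-table table">
  <tbody><tr>
    <td>Buy 50 + <span class="hulk-offer-text"></span></td>
    <td><span class="hulkapps-price"><span class="money"> ₱1.00 </span> Off</span></td>
  </tr></tbody>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", class_="hulkapps-table")
cells = table.find("tbody").find_all("td")
quantity, discount = (c.get_text(strip=True) for c in cells[:2])
print(quantity, discount)
```

<p>So the parsing logic in the question should start returning values as soon as the scrape waits long enough for the table to exist in <code>driver.page_source</code>.</p>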
|
<python><selenium-webdriver><beautifulsoup>
|
2024-10-17 16:19:32
| 1
| 1,383
|
Rav
|
79,098,960
| 2,826,018
|
PyTorch LSTM regression: Take only last output value or take all output values of LSTM?
|
<p>I try to train my first LSTM regression model based on global average temperature data. The temperature is available for every month since January 1st, 1850.</p>
<p>From what I've learned online, I feed 12 months in a row into the LSTM and let it predict the next month, and I do this for all the sequences generated from the data (all data except the last 30 years).</p>
<p>I first took only the last output value from the LSTM and fed it into the final linear layer, but I noticed that it does not converge very well. Then I fed in all of the LSTM's output data (so for every month I get the hidden state: <code>12 x hidden_size</code>) and it works much better.</p>
<p>With the second solution I can't feed in variable-length sequences, but I won't do that anyway - right?</p>
<p>What would be the best approach here?</p>
<pre><code>import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader, Dataset


class LSTMDataset( Dataset ):
    def __init__( self, x, y ):
        self.x = x
        self.y = y

    def __len__(self):
        return len( self.y )

    def __getitem__(self, idx):
        sample, label = self.x[ idx ], self.y[ idx ]
        return sample.reshape( ( -1, 1 ) ), label.reshape( ( 1 ) )


class LSTMNet( nn.Module ):
    def __init__( self ):
        super().__init__()
        self.hidden_size = 24
        self.lstm = nn.LSTM( input_size=1, hidden_size=self.hidden_size, num_layers=1, batch_first=True )
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear( self.hidden_size * 12, self.hidden_size * 12 ),
            nn.ReLU(),
            nn.Linear( self.hidden_size * 12, 1 )  # 12 is the fixed sequence length (12 months of temperature data)
        )

    def forward(self, x):
        x, _ = self.lstm( x )  # or x[ :, -1, : ] - which one is preferred?
        x = self.net( x )
        return x


df = pd.read_csv( "globalTemperatures.csv" )
df = df[ [ "dt", "LandAverageTemperature" ] ]
df[ "dt" ] = pd.to_datetime( df[ "dt" ], format="%Y-%m-%d" )

forecastMonths = 12 * 30  # forecast 30 years
sequenceLength = 12  # 12 months are fed into LSTM one after another

trainX = []
trainY = []
testX = []
testY = []

for i in range( len( df ) - sequenceLength ):
    x = np.array( df[ "LandAverageTemperature" ].iloc[ i : i + sequenceLength ] ).astype( np.float32 )
    y = np.array( df[ "LandAverageTemperature" ].iloc[ i + sequenceLength ] ).astype( np.float32 )
    if i + sequenceLength >= ( len( df ) - forecastMonths ):
        testX.append( x )
        testY.append( y )
    else:
        trainX.append( x )
        trainY.append( y )

trainingSet = LSTMDataset( trainX, trainY )
testSet = LSTMDataset( testX, testY )
training_loader = DataLoader( trainingSet, batch_size=1, shuffle=True )
test_loader = DataLoader( testSet, batch_size=1, shuffle=False )

model = LSTMNet()
optimizer = torch.optim.Adam( model.parameters(), lr=0.01 )
loss_fn = torch.nn.MSELoss()

accuracies = []
epochs = 2
for epoch in range( epochs ):
    losses = []
    for i, data in enumerate( training_loader ):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
        losses.append( loss.item() )
    print( f"Epoch [{epoch + 1}/{epochs}] Loss: {np.mean( losses ):.2f}" )

predictedTemperatures = []
losses = []
model.eval()
for i, data in enumerate( test_loader ):
    inputs, labels = data
    output = model( inputs )
    loss = loss_fn( output, labels )
    losses.append( loss.item() )
    predictedTemperatures.append( output.item() )
print( f"Test Loss: {np.mean( losses ):.2f}" )

plt.figure( figsize=(18, 2) )
plt.plot( df[ "dt" ], df[ "LandAverageTemperature" ], label="True Temperatures" )
plt.plot( df[ "dt" ].iloc[ -forecastMonths : ], predictedTemperatures, label="Predicted Temperatures" )
plt.savefig( "temperatures.png" )
</code></pre>
|
<python><pytorch><regression><lstm>
|
2024-10-17 16:02:16
| 0
| 1,724
|
binaryBigInt
|
79,098,721
| 6,362,595
|
Fixing badly formatted floats with numpy
|
<p>I am reading a text file only containing floating point numbers using <code>numpy.loadtxt</code>. However, some of the data is corrupted and reads something like <code>X.XXXXXXX+YYY</code> instead of <code>X.XXXXXXXE+YY</code> (Missing <code>E</code> char). I'd like to interpret them as the intended floating point number (or NaN if impossible) and wondered if there was any easy way to do this upon reading the file instead of manually correcting each entries in the file since it contains hundreds of thousands of lines of data.</p>
<p>MWE:</p>
<pre><code>import numpy as np
data = np.loadtxt("path/to/datafile")
</code></pre>
<p>Example of error raised:</p>
<p><code>ValueError: could not convert string '0.710084093014+195' to float64 at row 862190, column 6</code></p>
|
<python><numpy>
|
2024-10-17 15:06:05
| 2
| 921
|
fgoudra
|
79,098,671
| 12,304,000
|
Loading gsheet data into Python with gspread
|
<p>This is an example from their docs:</p>
<pre><code>import gspread

credentials = {
    "type": "service_account",
    "project_id": "api-project-XXX",
    "private_key_id": "2cd … ba4",
    "private_key": "-----BEGIN PRIVATE KEY-----\nNrDyLw … jINQh/9\n-----END PRIVATE KEY-----\n",
    "client_email": "473000000000-yoursisdifferent@developer.gserviceaccount.com",
    "client_id": "473 … hd.apps.googleusercontent.com",
    ...
}

gc = gspread.service_account_from_dict(credentials)

sh = gc.open("Example spreadsheet")

print(sh.sheet1.get('A1'))
</code></pre>
<p>but what exactly goes in their "Example spreadsheet"?</p>
<p>When I copy paste the entire link from my browser, it says Spreadsheet not found. If I copy just the gid, I get the same output. If I copy the middle part, I still get the same error :(</p>
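<p>For reference: <code>gc.open()</code> expects the spreadsheet's <em>title</em> as it appears in Google Drive, not a URL, and the sheet must be shared with the service account's <code>client_email</code> or "Spreadsheet not found" is raised regardless. To work from a copied link instead, gspread also provides <code>open_by_url</code> and <code>open_by_key</code>; a sketch with a small helper that pulls the key out of a link (the URL and key below are made-up examples):</p>

```python
import re

def spreadsheet_key(url: str) -> str:
    """Extract the document key from a Google Sheets URL."""
    m = re.search(r"/spreadsheets/d/([a-zA-Z0-9_-]+)", url)
    if not m:
        raise ValueError("not a Google Sheets URL")
    return m.group(1)

url = "https://docs.google.com/spreadsheets/d/1AbCdEfGhIjKlMnOpQrStUvWxYz1234567890/edit#gid=0"
key = spreadsheet_key(url)
print(key)

# With an authorized client (gc, as in the docs example above), any of these work:
# sh = gc.open("Example spreadsheet")   # the Drive title, not the URL
# sh = gc.open_by_url(url)
# sh = gc.open_by_key(key)
```

<p>The key is the long token between <code>/d/</code> and <code>/edit</code>; the <code>gid</code> fragment identifies a worksheet tab, not the spreadsheet, which is why pasting it into <code>open()</code> fails.</p>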
|
<python><google-sheets><google-api><google-sheets-api><gspread>
|
2024-10-17 14:55:29
| 1
| 3,522
|
x89
|
79,098,592
| 2,566,283
|
How to identify cases where both elements of a pair are greater than others' respective elements in the set?
|
<p>I have a case where I have a list of pairs, each with two numerical values. I want to find the subset of these elements containing only those pairs that are <em>not</em> exceeded by both elements of another (let's say "eclipsed" by another).</p>
<p>For example, the pair (1,2) is eclipsed by (4,5) because both elements are less than the respective elements in the other pair.</p>
<p>Also, (1,2) is considered eclipsed by (1,3), because its first element is equal to the other's and its second element is less than the other's.</p>
<p>However, the pair (2, 10) is not eclipsed by (9, 9), because only one of its elements is exceeded by the other's.</p>
<p>Cases where the pairs are identical should be reduced to just one (duplicates removed).</p>
<p>Ultimately, I am looking to reduce the list of pairs to a subset where only pairs that were not eclipsed by any others remain.</p>
<p>For example, take the following list:</p>
<pre><code>(1,2)
(1,5)
(2,2)
(1,2)
(2,2)
(9,1)
(1,1)
</code></pre>
<p>This should be reduced to the following:</p>
<pre><code>(1,5)
(2,2)
(9,1)
</code></pre>
<p>My initial implementation of this in python was the following, using polars:</p>
<pre><code>import polars as pl

pairs_list = [
    (1, 2),
    (1, 5),
    (2, 2),
    (1, 2),
    (2, 2),
    (9, 1),
    (1, 1),
]

# tabulate pair elements as 'a' and 'b'
pairs = pl.DataFrame(
    data=pairs_list,
    schema={'a': pl.UInt32, 'b': pl.UInt32},
    orient='row',
)

# eliminate any duplicate pairs
unique_pairs = pairs.unique()

# self join so every pair can be compared (except against itself)
comparison_inputs = (
    unique_pairs
    .join(
        unique_pairs,
        how='cross',
        suffix='_comp',
    )
    .filter(
        pl.any_horizontal(
            pl.col('a') != pl.col('a_comp'),
            pl.col('b') != pl.col('b_comp'),
        )
    )
)

# flag pairs that were eclipsed by others
comparison_results = (
    comparison_inputs
    .with_columns(
        pl.all_horizontal(
            pl.col('a') <= pl.col('a_comp'),
            pl.col('b') <= pl.col('b_comp'),
        )
        .alias('is_eclipsed')
    )
)

# remove pairs that were eclipsed by at least one other
principal_pairs = (
    comparison_results
    .group_by('a', 'b')
    .agg(pl.col('is_eclipsed').any())
    .filter(is_eclipsed=False)
    .select('a', 'b')
)
</code></pre>
<p>While this does appear to work, it is computationally infeasible for large datasets due to the sheer size of the self-joined table.</p>
<p>I have considered filtering the comparison_inputs table down by removing redundant reversed comparisons, e.g., pair X vs pair Y and pair Y vs pair X don't both need to be in the table as they currently are, but changing that requires an additional condition in each comparison to report which element was eclipsed in the comparison and only reduces the dataset in half, which isn't that significant.</p>
<p>I have found I can reduce the needed comparisons substantially by doing a window function filter that filters to only the max b for each a and vice versa before doing the self joining step. In other words:</p>
<pre><code>unique_pairs = (
    pairs
    .unique()
    .filter(a = pl.col('a').last().over('b', order_by='a'))
    .filter(b = pl.col('b').last().over('a', order_by='b'))
)
</code></pre>
<p>But of course this only does so much and depends on the cardinality of a and b. I still need to self-join and compare after this to get a result.</p>
<p>I am curious if there is already some algorithm established for calculating this and whether anyone has ideas for a more efficient method. Interested to learn more anyway. Thanks in advance.</p>
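<p>There is indeed an established algorithm for this: it is the maxima-of-a-point-set problem (also known as the Pareto front or skyline query), and in two dimensions it has a well-known O(n log n) solution that avoids the self-join entirely. Sort the unique pairs by <code>a</code> descending (with <code>b</code> descending as a tie-break), then sweep once, keeping a pair only when its <code>b</code> strictly exceeds the best <code>b</code> seen so far: every earlier pair in the sort order has <code>a</code> at least as large, so the strict comparison on <code>b</code> exactly encodes the "eclipsed" rule above. A sketch in plain Python:</p>

```python
def pareto_front(pairs):
    """Return the pairs not eclipsed by any other pair (2-D skyline)."""
    kept = []
    best_b = float("-inf")
    # After sorting by a descending (b descending on ties), every earlier
    # pair has a >= the current a, so the current pair survives iff its b
    # strictly beats all of theirs.
    for a, b in sorted(set(pairs), key=lambda p: (-p[0], -p[1])):
        if b > best_b:
            kept.append((a, b))
            best_b = b
    return kept

pairs_list = [(1, 2), (1, 5), (2, 2), (1, 2), (2, 2), (9, 1), (1, 1)]
print(pareto_front(pairs_list))  # [(9, 1), (2, 2), (1, 5)]
```

<p>The sort dominates the cost, so this scales to large inputs; the surviving pairs could then be handed back to polars, or the same sort-and-sweep expressed with window functions along the lines of the <code>last().over()</code> idea already shown.</p>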
|
<python><join><optimization><mathematical-optimization><self-join>
|
2024-10-17 14:38:08
| 2
| 2,724
|
teepee
|
79,098,507
| 2,133,561
|
Python - Priority-Based Conditional Data Transformation
|
<p>I have a use case of a form with 3 dropdowns ( A, B and C for this example). They each have 4 options:</p>
<p><a href="https://i.sstatic.net/BOONcxVz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOONcxVz.png" alt="enter image description here" /></a></p>
<p>Which gives me data per record/ID like this:</p>
<pre><code>data = {
'ID': [1, 2, 3, 4],
'A_OLD': ['A1', 'A4', 'A3', 'A4'],
'A_NEW': [np.nan, np.nan, np.nan, np.nan],
'Priority_A_NEW': [np.nan, np.nan, np.nan, np.nan],
'B_OLD': ['B2', 'B1', 'B1', 'B2'],
'B_NEW': [np.nan, np.nan, np.nan, np.nan],
'Priority_B_NEW': [np.nan, np.nan, np.nan, np.nan],
'C_OLD': ['C4', 'C3', 'C1', 'C2'],
'C_NEW': [np.nan, np.nan, np.nan, np.nan],
'Priority_C_NEW': [np.nan, np.nan, np.nan, np.nan],
}
df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.sstatic.net/eAtR7FOv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAtR7FOv.png" alt="enter image description here" /></a></p>
<p>I want to change the naming of the options. I need to change all the old data to the new values. The complexity comes in due to the following:</p>
<ul>
<li>Some of the values will not only change the value of 1 dropdown, but also decide the change in another dropdown</li>
<li>This might lead to conflicts between 2 (or more) new values.</li>
<li>To resolve most of the conflicts I added priorities</li>
</ul>
<p><a href="https://i.sstatic.net/lGuIUg79.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGuIUg79.png" alt="enter image description here" /></a></p>
<p>If I apply the conversion, taking into account the priorities, I want my result to be like this:</p>
<pre><code>ID A_OLD B_OLD C_OLD A_NEW B_NEW C_NEW
1 A1 B2 C4 A-1 B-2 C-4
2 A4 B1 C3 A-1< B-1 C-3
3 A3 B1 C1 A-3< B-1 C-1
4 A4 B2 C2 A-4 B-1, B-2< C-2
</code></pre>
<p>The '<' gives an indication of which values were affected by the cross-over and priorities.</p>
<p>Or in the table with priorities it looks like this:</p>
<p><a href="https://i.sstatic.net/Y8thhpx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y8thhpx7.png" alt="enter image description here" /></a></p>
<p>To make it more clear I also created an extra image:
<a href="https://i.sstatic.net/rOdUnLkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rOdUnLkZ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xF6Vvl7i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xF6Vvl7i.png" alt="enter image description here" /></a></p>
<p>I know that this should be possible; with the loop, if/else and the comparison of the priorities it should work, but I can't get to the right answer.</p>
<p>I got pretty far, though the solution is not pretty (a rather extensive for loop), and the comparison between priorities is something I'm stuck on at the moment, as I get this result from the code I have:</p>
<p><a href="https://i.sstatic.net/flEJNk6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/flEJNk6t.png" alt="enter image description here" /></a></p>
<p>My code:</p>
<pre><code>import pandas as pd
import numpy as np
data = {
'ID': [1, 2, 3, 4],
'A_OLD': ['A1', 'A4', 'A3', 'A4'],
'A_NEW': [np.nan, np.nan, np.nan, np.nan],
'Priority_A_NEW': [np.nan, np.nan, np.nan, np.nan],
'B_OLD': ['B2', 'B1', 'B1', 'B2'],
'B_NEW': [np.nan, np.nan, np.nan, np.nan],
'Priority_B_NEW': [np.nan, np.nan, np.nan, np.nan],
'C_OLD': ['C4', 'C3', 'C1', 'C2'],
'C_NEW': [np.nan, np.nan, np.nan, np.nan],
'Priority_C_NEW': [np.nan, np.nan, np.nan, np.nan],
}
conversionTable_data = {
'Dropdown': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C'],
'Value_OLD': ['A1', 'A2', 'A3', 'A4', 'B1', 'B2', 'B3', 'B4', 'C1', 'C2', 'C3', 'C4'],
'Value_NEW': ['A-1', 'A-2', 'A-3', 'A-4', 'B-1', 'B-2', 'B-3', 'B-4', 'C-1', 'C-2', 'C-3', 'C-4'],
'Priority_NEW': [2, 1, 2, 3, 1, 1, 2, 3, 1, 3, 1, 2],
'Other_Dropdown_Change': [np.nan, np.nan, np.nan, np.nan, 'A', np.nan, np.nan, 'A', 'A', 'B', np.nan, np.nan],
'Change_Value_Other_Dropdown': [np.nan, np.nan, np.nan, np.nan, 'A-1a', np.nan, np.nan, 'A-2a', 'A-3a', 'B-1a',
np.nan, np.nan],
'Priority_Other_Dropdown': [np.nan, np.nan, np.nan, np.nan, 2, np.nan, np.nan, 2, 1, 1, np.nan, np.nan]
}
# Convert string holding columns to 'object' type
df = pd.DataFrame(data)
df['A_NEW'] = df['A_NEW'].astype('object')
df['B_NEW'] = df['B_NEW'].astype('object')
df['C_NEW'] = df['C_NEW'].astype('object')
df_conversionTable = pd.DataFrame(conversionTable_data)
for index, row in df.iterrows():
    # For A
    OldRowA = row['A_OLD']
    filtered_conversionTable_A = df_conversionTable[(df_conversionTable['Dropdown'] == 'A') & (df_conversionTable['Value_OLD'] == OldRowA)]

    if not filtered_conversionTable_A.empty:
        df.at[index, 'A_NEW'] = filtered_conversionTable_A['Value_NEW'].iloc[0]
        df.at[index, 'Priority_A_NEW'] = filtered_conversionTable_A['Priority_NEW'].iloc[0]

        # Check for Other_Dropdown_Change
        if pd.notna(filtered_conversionTable_A['Other_Dropdown_Change'].iloc[0]):
            DropdownType = filtered_conversionTable_A['Other_Dropdown_Change'].iloc[0]
            if DropdownType == 'B':
                NEW = 'B_NEW'
                PRIORITY = 'Priority_B_NEW'
                OLD = 'B_OLD'
            elif DropdownType == 'C':
                NEW = 'C_NEW'
                PRIORITY = 'Priority_C_NEW'
                OLD = 'C_OLD'
            else:
                print("Mistake in Type")
                continue  # Skip to next iteration

            # Check if NEW column is not NaN
            if pd.notna(row[NEW]):
                # Compare priority
                if pd.notna(row[PRIORITY]) and row[PRIORITY] < filtered_conversionTable_A['Priority_Other_Dropdown'].iloc[0]:
                    df.at[index, NEW] = filtered_conversionTable_A['Change_Value_Other_Dropdown'].iloc[0]
                    df.at[index, PRIORITY] = filtered_conversionTable_A['Priority_Other_Dropdown'].iloc[0]
                elif row[PRIORITY] == filtered_conversionTable_A['Priority_Other_Dropdown'].iloc[0]:
                    # Concatenate values
                    combi_new_value = row[NEW] + '_' + row[OLD]
                    df.at[index, NEW] = combi_new_value
            else:
                # If NEW is NaN, update it
                df.at[index, NEW] = filtered_conversionTable_A['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, PRIORITY] = filtered_conversionTable_A['Priority_Other_Dropdown'].iloc[0]

    # For B
    OldRowB = row['B_OLD']
    filtered_conversionTable_B = df_conversionTable[(df_conversionTable['Dropdown'] == 'B') & (df_conversionTable['Value_OLD'] == OldRowB)]

    if not filtered_conversionTable_B.empty:
        if pd.notna(row['B_NEW']):
            # Compare priority
            if pd.notna(row['Priority_B_NEW']) and row['Priority_B_NEW'] < \
                    filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]:
                df.at[index, 'B_NEW'] = filtered_conversionTable_B['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, 'Priority_B_NEW'] = filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]
            elif row['Priority_B_NEW'] == filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]:
                # Concatenate values
                combi_new_value = row['B_NEW'] + '_' + filtered_conversionTable_B['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, 'B_NEW'] = combi_new_value
        else:
            df.at[index, 'B_NEW'] = filtered_conversionTable_B['Value_NEW'].iloc[0]
            df.at[index, 'Priority_B_NEW'] = filtered_conversionTable_B['Priority_NEW'].iloc[0]

        # Check for Other_Dropdown_Change
        if pd.notna(filtered_conversionTable_B['Other_Dropdown_Change'].iloc[0]):
            DropdownType = filtered_conversionTable_B['Other_Dropdown_Change'].iloc[0]
            if DropdownType == 'A':
                NEW = 'A_NEW'
                PRIORITY = 'Priority_A_NEW'
                OLD = 'A_OLD'
            elif DropdownType == 'C':
                NEW = 'C_NEW'
                PRIORITY = 'Priority_C_NEW'
                OLD = 'C_OLD'
            else:
                continue  # Skip to next iteration

            # Check if NEW column is not NaN
            if pd.notna(row[NEW]):
                # Compare priority
                if pd.notna(row[PRIORITY]) and row[PRIORITY] < filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]:
                    df.at[index, NEW] = filtered_conversionTable_B['Change_Value_Other_Dropdown'].iloc[0]
                    df.at[index, PRIORITY] = filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]
                elif row[PRIORITY] == filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]:
                    # Concatenate values
                    combi_new_value = row[NEW] + '_' + filtered_conversionTable_B['Change_Value_Other_Dropdown'].iloc[0]
                    print(combi_new_value)
                    df.at[index, NEW] = combi_new_value
            else:
                # If NEW is NaN, update it
                df.at[index, NEW] = filtered_conversionTable_B['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, PRIORITY] = filtered_conversionTable_B['Priority_Other_Dropdown'].iloc[0]

    # For C
    OldRowC = row['C_OLD']
    filtered_conversionTable_C = df_conversionTable[(df_conversionTable['Dropdown'] == 'C') & (df_conversionTable['Value_OLD'] == OldRowC)]

    if not filtered_conversionTable_C.empty:
        if pd.notna(row['C_NEW']):
            # Compare priority
            if pd.notna(row['Priority_C_NEW']) and row['Priority_C_NEW'] < \
                    filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]:
                df.at[index, 'C_NEW'] = filtered_conversionTable_C['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, 'Priority_C_NEW'] = filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]
            elif row['Priority_C_NEW'] == filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]:
                # Concatenate values
                combi_new_value = row['C_NEW'] + '_' + filtered_conversionTable_C['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, 'C_NEW'] = combi_new_value
                print("pasted version C:")
                print(combi_new_value)
        else:
            df.at[index, 'C_NEW'] = filtered_conversionTable_C['Value_NEW'].iloc[0]
            df.at[index, 'Priority_C_NEW'] = filtered_conversionTable_C['Priority_NEW'].iloc[0]

        # Check for Other_Dropdown_Change
        if pd.notna(filtered_conversionTable_C['Other_Dropdown_Change'].iloc[0]):
            print("test")
            DropdownType = filtered_conversionTable_C['Other_Dropdown_Change'].iloc[0]
            if DropdownType == 'A':
                NEW = 'A_NEW'
                PRIORITY = 'Priority_A_NEW'
                OLD = 'A_OLD'
            elif DropdownType == 'B':
                NEW = 'B_NEW'
                PRIORITY = 'Priority_B_NEW'
                OLD = 'B_OLD'
            else:
                print("Error in Type")
                continue  # Skip to next iteration

            # Check if NEW column is not NaN
            if pd.notna(row[NEW]):
                # Compare priority
                if pd.notna(row[PRIORITY]) and row[PRIORITY] < filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]:
                    df.at[index, NEW] = filtered_conversionTable_C['Change_Value_Other_Dropdown'].iloc[0]
                    df.at[index, PRIORITY] = filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]
                elif row[PRIORITY] == filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]:
                    # Concatenate values
                    combi_new_value = row[NEW] + '_' + filtered_conversionTable_C['Change_Value_Other_Dropdown'].iloc[0]
                    df.at[index, NEW] = combi_new_value
            else:
                # If NEW is NaN, update it
                df.at[index, NEW] = filtered_conversionTable_C['Change_Value_Other_Dropdown'].iloc[0]
                df.at[index, PRIORITY] = filtered_conversionTable_C['Priority_Other_Dropdown'].iloc[0]
</code></pre>
<p><strong>Additional Comment after edit</strong></p>
<p>My code at the moment (apart from not being very pretty) seems to be able to find the correct new value for the dropdowns, but it does not do the extra replacement of 'Other dropdown change' yet. Though I have not checked everything, it seems like this is the only problem left.</p>
|
<python><for-loop><if-statement><conditional-statements>
|
2024-10-17 14:15:04
| 0
| 331
|
user2133561
|
79,098,226
| 9,670,009
|
The list of urlpatterns should not have a prefix string
|
<p>In Django 5.0 I get the error:</p>
<p>The list of urlpatterns should not have a prefix string.</p>
<p>My code is:</p>
<pre><code>from django.conf.urls import url
from django.urls import path, include
from django.contrib import admin

app_name = 'AppServer_test'  # specified as string literal rather than a variable

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/v1.0/user/', include('user_accounts.urls')),
    path('api/v1.0/notifications/', include('notifications.urls')),
    path('api/v1.0/feedback/', include('feedback.urls')),
    # path('', include('AppServer_test.urls')),
    # check that AppServer_test.urls is a valid and accessible module
]
</code></pre>
<p>I've got the same code working in Django 3.2, the error only occurs in a system using 5.0</p>
<p>I've tried using round brackets instead of square as in this post:</p>
<p><a href="https://stackoverflow.com/questions/52136299/django-1-11-url-pattern-error-how-to-solve">Django 1.11 url pattern error, how to solve?</a></p>
<p>Traceback:</p>
<pre><code>Invalid line: amqp://qtofknpw:j67JIAaQk-HzvM1-gjkYUD7GV1uSvy3O@elegant-cobalt-crow.rmq3.cloudamqp.com:5671/qtofknpw
SystemCheckError: System check identified some issues:
ERRORS:
?: (urls.E004) Your URL pattern 'AppServer_test.urls' is invalid. Ensure that urlpatterns is a list of path() and/or re_path() instances.
HINT: Try removing the string 'AppServer_test.urls'. The list of urlpatterns should not have a prefix string as the first element.
</code></pre>
<p>App URL's:</p>
<p>Feedback App:</p>
<pre><code>from django.urls import path

from . import views

urlpatterns = [
    path('submit/', views.submit_feedback, name='submit_feedback'),
]
</code></pre>
<p>Notifications App:</p>
<pre><code>from django.conf import settings
from django.urls import path
from django.conf.urls.static import static

from .views import FetchNotificationReceipts, LogNotification, LogNotificationOpened, ResponderResponse

urlpatterns = [
    path('log-notification/', LogNotification, name='log_notification'),
    path('log-notification-opened/', LogNotificationOpened, name='log_notification_opened'),
    path('responder-response/', ResponderResponse, name='responder_response'),
    path('fetch-notification-receipts/', FetchNotificationReceipts, name='fetch_notification_receipts'),
]

urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>User accounts App:</p>
<pre><code>from django.urls import path

from .views import test

urlpatterns = [
    path('test/', test, name='test'),
]
</code></pre>
|
<python><django><prefix><url-pattern>
|
2024-10-17 13:06:22
| 0
| 537
|
Tirna
|
79,098,010
| 10,737,147
|
unwrapping a contour
|
<p>I have three arrays, shown below:</p>
<pre><code>(Pdb) Istag
[[67, 68], [227, 228]]
(Pdb) spl
array([151, 302])
(Pdb) wspl
array([26, 52])
</code></pre>
<p>What these arrays represent is shown in the diagram below:</p>
<p><a href="https://i.sstatic.net/AJMVI48J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJMVI48J.png" alt="enter image description here" /></a></p>
<p>Numbers in black correspond to the values in the arrays above, while numbers in red show the transformed indices if everything were flattened into a single list. The curvy sections are held in one array, "spl", and the straight sections in another array, "wspl". Each curvy section is to be split into two ranges based on "Istag", with the straight sections inserted as index ranges in between.</p>
<p>Basically, I want to convert these array indices into six range objects S1, S2, S3, etc., as follows:</p>
<pre><code>S1: range( 67, 0, -1)
S2: range( 68, 151, 1)
S3: range(151, 177, 1)
S4: range(253, 178, -1)
S5: range(254, 328, 1)
S6: range(328, 380, 1)
</code></pre>
<p>Could someone please shed some light on how this can be done?</p>
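<p>To make the target concrete, here is a minimal sketch of the arithmetic that I believe connects the three arrays to the six ranges, under the assumption (read off the diagram) that everything after the first straight section is shifted by its length, <code>wspl[0]</code>:</p>

```python
# Sketch: derive the six ranges from Istag, spl and wspl.
# Assumption: the offset for the second curvy section is the length
# of the first straight section (wspl[0] = 26).
Istag = [[67, 68], [227, 228]]
spl = [151, 302]
wspl = [26, 52]

off = wspl[0]  # first straight section shifts later indices by 26

S1 = range(Istag[0][0], 0, -1)                        # 67 down to 1
S2 = range(Istag[0][1], spl[0], 1)                    # 68 up to 150
S3 = range(spl[0], spl[0] + off, 1)                   # straight: 151..176
S4 = range(Istag[1][0] + off, spl[0] + off + 1, -1)   # 253 down to 179
S5 = range(Istag[1][1] + off, spl[1] + off, 1)        # 254 up to 327
S6 = range(spl[1] + off, spl[1] + off + wspl[1], 1)   # straight: 328..379
```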
|
<python><numpy><range>
|
2024-10-17 12:11:28
| 1
| 437
|
XYZ
|
79,097,893
| 8,467,078
|
Re-decorate a Python (class) decorator
|
<p>I'd like to create a decorator that basically wraps an already existing decorator that has parameters, such that the new decorator acts like the old one with some of the arguments supplied.</p>
<p>Specifically, this is about the builtin <code>@dataclass</code> decorator. I have a number of classes to decorate with it, always using <code>kw_only=True</code> and <code>eq=False</code>, and I'd like a new decorator that does just that, saving me from spelling out the parameters every time. So, e.g. ...</p>
<pre class="lang-py prettyprint-override"><code>@mydataclass
class Foo:
a: int = 5
</code></pre>
<p>...should be equivalent to...</p>
<pre class="lang-py prettyprint-override"><code>@dataclass(kw_only=True, eq=False)
class Foo:
a: int = 5
</code></pre>
<p><em>I know this doesn't seem like it saves a lot of typing, but this is more about providing a convenience decorator for the rest of our team, so that no one forgets to add these two parameters.</em></p>
|
<python><python-typing><python-decorators><python-dataclasses>
|
2024-10-17 11:38:06
| 3
| 345
|
VY_CMa
|