| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,722,450
| 2,153,235
|
Specify None as default value for Boolean function argument?
|
<p>The prototype for <a href="https://numpy.org/doc/stable/reference/generated/numpy.histogram.html" rel="nofollow noreferrer"><code>numpy.histogram</code></a> contains an input argument <code>density</code> of type <code>bool</code> and default value <code>None</code>. If the caller does not supply <code>density</code>, what value does it take on?</p>
<p>The closest Q&A that I can find is <a href="https://stackoverflow.com/questions/58515437">this</a>. The <a href="https://stackoverflow.com/a/58515455">first answer</a> says "Don't use <code>False</code> as a value for a non-<code>bool</code> field", which doesn't apply here. It also says that <code>bool(x)</code> returns <code>False</code>, but that doesn't assure the caller that the function will set <code>density</code> to <code>False</code> if it isn't provided. Is this a mistake in the documentation of the prototype for <code>numpy.histogram</code>, or am I missing something about the documentation convention?</p>
<p>The other answers to the <a href="https://stackoverflow.com/questions/58515437">above Q&A</a> do not seem relevant to my question.</p>
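<p>For what it's worth, the behavior can be checked empirically (this sketch is not part of the original question; it assumes only NumPy itself):</p>

```python
import numpy as np

data = [1, 1, 2, 3, 3, 3]

# Compare omitting density (leaving it None) with passing density=False
h_default, edges_default = np.histogram(data)
h_false, edges_false = np.histogram(data, density=False)

print((h_default == h_false).all() and (edges_default == edges_false).all())  # → True
```

<p>In practice <code>None</code> here acts as "not specified", and the counts fall back to the <code>density=False</code> behavior.</p>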
|
<python><default-arguments>
|
2024-07-08 18:44:36
| 2
| 1,265
|
user2153235
|
78,722,382
| 1,749,551
|
How to use prompt-toolkit key binding with inner prompt
|
<p>I have a <a href="https://python-prompt-toolkit.readthedocs.io" rel="nofollow noreferrer">prompt_toolkit</a> application where I want a bunch of keys to be acted on immediately, without pressing Enter. It's not a fullscreen app, it's just using the <code>bottom_toolbar</code>. So I have set up key bindings:</p>
<pre class="lang-py prettyprint-override"><code>class Annotator:
    def process(self):
        kb = KeyBindings()

        @kb.add("5")
        def _(event):
            self.buildJohnnyFive()

        @kb.add("s")
        def _(event):
            self.needInput()

        text = prompt("> ", bottom_toolbar=self.bottomToolbar, key_bindings=kb)
        print(f"You said: {text}")

    def buildJohnnyFive(self):
        self.buildHim()  # Do work

    def needInput(self):
        raw = prompt("input> ", bottom_toolbar=self.bottomToolbar)
        self.useInput(raw)

    def bottomToolbar(self):
        return f"Current: {self.status1} - {self.status2}"
</code></pre>
<p>This works well, and the functions get called. However, in some of these functions I need to accept more specific input from the user. But when I call <code>prompt()</code> from inside the function bound to a key, I get an exception:</p>
<pre><code>  File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 714, in read_from_input_in_context
    context.copy().run(read_from_input)
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 694, in read_from_input
    self.key_processor.process_keys()
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/key_binding/key_processor.py", line 273, in process_keys
    self._process_coroutine.send(key_press)
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/key_binding/key_processor.py", line 188, in _process
    self._call_handler(matches[-1], key_sequence=buffer[:])
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/key_binding/key_processor.py", line 323, in _call_handler
    handler.call(event)
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/key_binding/key_bindings.py", line 127, in call
    result = self.handler(event)
  File "/Users/nick/my_app/src/annotator.py", line 110, in _
    self.addCustomED()
  File "/Users/nick/my_app/src/annotator.py", line 145, in needInput
    raw = prompt("input> ", bottom_toolbar=self.bottom_toolbar)
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1425, in prompt
    return session.prompt(
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1035, in prompt
    return self.app.run(
  File "/Users/nick/my_app/.venv/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 1002, in run
    return asyncio.run(coro)
  File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 33, in run
    raise RuntimeError(
RuntimeError: asyncio.run() cannot be called from a running event loop
</code></pre>
<p>I understand the problem: I'm already running a <code>prompt()</code>, which hasn't ended yet, and now I'm asking for another, inner prompt which conflicts with the existing event loop.</p>
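<p>The same error can be reproduced without prompt_toolkit at all; a minimal sketch (not from the original post) of the nested-event-loop conflict:</p>

```python
import asyncio

async def handler():
    # Stands in for a key-binding handler that calls prompt() again:
    # prompt() ultimately calls asyncio.run(), which refuses to start
    # while another event loop is already running.
    try:
        asyncio.run(asyncio.sleep(0))
        return None
    except RuntimeError as exc:
        return str(exc)

message = asyncio.run(handler())
print(message)
```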
<p>What I don't understand is: how am I <em>supposed</em> to do this? I need to call the initial prompt so that the user gets the info in the bottom toolbar. There needs to be a run loop so that the app doesn't quit before the user has time to press keys. But then how am I supposed to get input at this inner function? Can I somehow end or suspend the outer prompt while the inner prompt is waiting for the user's input? I don't want to switch to doing everything on the one prompt, because then the user will have to press Return every time. And I actually don't WANT the terminal to fill up with lines of input commands.</p>
<p>Some guidance would be much appreciated.</p>
|
<python><prompt-toolkit>
|
2024-07-08 18:21:25
| 1
| 4,798
|
Nick K9
|
78,722,378
| 1,412,564
|
parser add_mutually_exclusive_group - how can I set a default value?
|
<p>We use Python and Django for our websites. We set a test command that adds to the default Django test command:</p>
<pre class="lang-py prettyprint-override"><code>from django.core.management.commands import test


class Command(test.Command):
    def add_arguments(self, parser):
        super().add_arguments(parser=parser)
        group = parser.add_argument_group('language options', 'These arguments are mutually exclusive. Default: --test-default-languages')
        language_group = group.add_mutually_exclusive_group()
        language_group.add_argument(
            "--test-all-languages",
            action="store_true",
            help="Run tests for all languages, and don't skip languages.",
        )
        language_group.add_argument(
            "--test-default-languages",
            action="store_true",
            help="Run tests for default languages (English, French, Hebrew + randomly select one more language or none).",
        )
        language_group.add_argument(
            "--test-only-english",
            action="store_true",
            help="Run tests for only English.",
        )
        # ...
</code></pre>
<p>Now, in the models we have this code:</p>
<pre class="lang-py prettyprint-override"><code>class SiteDiscoverRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        # ...
        super().__init__(*args, **kwargs)
        self.test_all_languages = kwargs.get('test_all_languages', False)
        self.test_default_languages = kwargs.get('test_default_languages', False)
        self.test_only_english = kwargs.get('test_only_english', False)
        if ((self.test_all_languages is False) and (self.test_only_english is False)):
            self.test_default_languages = True
</code></pre>
<p>Tests must be run with one of the above command line arguments or none. If none then the <code>test_default_languages</code> value should be true.</p>
<p>I would like to know whether the parser can make <code>self.test_default_languages</code> True if none of the 3 arguments is given, so that I can remove the line <code>self.test_default_languages = True</code> from the model. I searched but didn't find out how to do it.</p>
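<p>For reference, a minimal sketch (flag names taken from the question) showing why the runner currently has to patch the default itself: with no flag supplied, every <code>store_true</code> argument parses to <code>False</code>:</p>

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--test-all-languages", action="store_true")
group.add_argument("--test-default-languages", action="store_true")
group.add_argument("--test-only-english", action="store_true")

# Parsing an empty command line: all three flags default to False
ns = parser.parse_args([])
print(ns.test_all_languages, ns.test_default_languages, ns.test_only_english)  # → False False False
```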
|
<python><django><argparse>
|
2024-07-08 18:19:34
| 1
| 3,361
|
Uri
|
78,722,284
| 5,790,653
|
How to add a colon after each two characters in a string
|
<p>I have a list like this:</p>
<pre><code>list1 = [ '000c.29e6.8fa5', 'fa16.3e9f.0c8c', 'fa16.3e70.323b' ]
</code></pre>
<p>I want to convert them to MAC addresses in the format <code>00:0C:29:E6:8F:A5</code>, in uppercase.</p>
<p>How can I do that?</p>
<p>I googled but found nothing, and I haven't been able to work out an approach myself.</p>
<p>I just know this:</p>
<pre><code>for x in list1:
    x = x.replace('.', '').upper()[::1]
</code></pre>
<p>I know <code>[::1]</code> splits, but not sure if it's correct and if I can continue with this or not.</p>
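<p>As a note on the attempt above (this check is not part of the original question): <code>[::1]</code> is a full copy of the string, not a split; grouping into two-character chunks needs explicit slicing:</p>

```python
s = '000c.29e6.8fa5'.replace('.', '').upper()

# [::1] copies the whole string unchanged; it does not split anything
assert s[::1] == s

# Two-character chunks via explicit slicing, then joined with colons
pairs = [s[i:i + 2] for i in range(0, len(s), 2)]
print(':'.join(pairs))  # → 00:0C:29:E6:8F:A5
```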
|
<python>
|
2024-07-08 17:49:45
| 3
| 4,175
|
Saeed
|
78,722,207
| 71,612
|
Is it possible to get the exact script invocation in Python?
|
<p>I would like my python script to produce a text file containing the exact command line the user entered to invoke the script. In the past, I have done the following in bash:</p>
<p><code>echo "$0" "$@" > "${output_dir}/configuration/invocation.txt"</code></p>
<p>Is there something equivalent in python? As noted <a href="https://stackoverflow.com/a/50284916/71612">here</a>, "the command line arguments are already handled by the shell before they are sent into <code>sys.argv</code>. Therefore, shell quoting and whitespace are gone and cannot be exactly reconstructed."</p>
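<p>A hedged sketch of the closest Python equivalent: <code>shlex.join</code> re-quotes each argument so the recorded line would survive a shell round-trip, even though, as the quote above says, the user's original quoting cannot be recovered exactly:</p>

```python
import shlex
import sys

# Approximate reconstruction of the invocation; quoting is normalized,
# not identical to what the user originally typed.
invocation = shlex.join([sys.executable, *sys.argv])
print(invocation)

# Quoting behavior on a fixed example:
print(shlex.join(['script.py', 'a b', '--flag']))  # → script.py 'a b' --flag
```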
|
<python><command-line-arguments>
|
2024-07-08 17:28:48
| 1
| 3,641
|
Stephen
|
78,722,190
| 2,207,559
|
Azure Functions Python PIP issue
|
<p>I have an Azure Functions app running on Python 3.11.</p>
<p>I need to use the <code>requests</code> module, so at the top of the .py file I've added <code>import requests</code>. I've also added <code>requests</code> to requirements.txt.</p>
<p>I have CI/CD configured with GitHub Actions. The yml is attached.</p>
<p>My understanding is that Azure should install the packages in requirements.txt and make them available. However, when I add <code>import requests</code> to the .py file, no functions show in the Azure portal. If I remove the import, the functions are visible again.</p>
<p>Is it possible that the packages are being loaded, but when the Python function app starts it's starting from a different bin and as a result no packages are present? Or am I missing a step/approach with my yaml?</p>
<p>How should I correctly load the requirements.txt? Is my flow correct?</p>
<pre><code>name: Build and deploy Python project to Azure Function App - scubashackchat

on:
  push:
    branches:
      - master
  workflow_dispatch:

env:
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
  PYTHON_VERSION: '3.11'              # set this to the python version to use (supports 3.6, 3.7, 3.8)

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Python version
        uses: actions/setup-python@v1
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate

      - name: Install dependencies
        run: pip install -r requirements.txt

      # Optional: Add step to run tests here

      - name: Zip artifact for deployment
        run: zip release.zip ./* -r

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: python-app
          path: |
            release.zip
            !venv/

  deploy:
    runs-on: ubuntu-latest
    needs: build
    # environment:
    #   name: 'Production'
    #   url: ${{ steps.deploy-to-function.outputs.webapp-url }}
    permissions:
      id-token: write # This is required for requesting the JWT

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: python-app

      - name: Unzip artifact for deployment
        run: unzip release.zip

      - name: Login to Azure
        uses: azure/login@v1
        with:
          client-id: red
          tenant-id: red
          subscription-id: red

      - name: 'Deploy to Azure Functions'
        uses: Azure/functions-action@v1
        id: deploy-to-function
        with:
          app-name: 'scubashackchat'
          slot-name: 'Production'
          package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
          scm-do-build-during-deployment: true
          enable-oryx-build: true
</code></pre>
|
<python><azure-functions><github-actions>
|
2024-07-08 17:22:50
| 1
| 3,103
|
atoms
|
78,722,158
| 375,432
|
Use Ibis to filter table to row with largest value in each group
|
<p>I have a table like this:</p>
<pre><code>┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ country       ┃ city        ┃ population ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ string        │ string      │ int64      │
├───────────────┼─────────────┼────────────┤
│ India         │ Bangalore   │    8443675 │
│ India         │ Delhi       │   11034555 │
│ India         │ Mumbai      │   12442373 │
│ United States │ Los Angeles │    3820914 │
│ United States │ New York    │    8258035 │
│ United States │ Chicago     │    2664452 │
│ China         │ Shanghai    │   24281400 │
│ China         │ Guangzhou   │   13858700 │
│ China         │ Beijing     │   19164000 │
└───────────────┴─────────────┴────────────┘
</code></pre>
<p>I want to filter this table, returning only the most populous city in each country. So the result should look like this (order of rows does not matter):</p>
<pre><code>┏━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ country       ┃ city     ┃ population ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━┩
│ string        │ string   │ int64      │
├───────────────┼──────────┼────────────┤
│ India         │ Mumbai   │   12442373 │
│ United States │ New York │    8258035 │
│ China         │ Shanghai │   24281400 │
└───────────────┴──────────┴────────────┘
</code></pre>
<p>With <a href="https://pandas.pydata.org" rel="nofollow noreferrer">pandas</a>, I can do it like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame(data={'country': ['India', 'India', 'India', 'United States', 'United States', 'United States', 'China', 'China', 'China'],
                        'city': ['Bangalore', 'Delhi', 'Mumbai', 'Los Angeles', 'New York', 'Chicago', 'Shanghai', 'Guangzhou', 'Beijing'],
                        'population': [8443675, 11034555, 12442373, 3820914, 8258035, 2664452, 24281400, 13858700, 19164000]})
idx = df.groupby('country').population.idxmax()
df.loc[idx]
</code></pre>
<p>How do I do this with <a href="https://ibis-project.org" rel="nofollow noreferrer">Ibis</a>?</p>
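<p>For comparison (this is not the requested Ibis answer): an equivalent pandas formulation of the same "top row per group" idea, using a sort plus <code>drop_duplicates</code>:</p>

```python
import pandas as pd

df = pd.DataFrame(data={'country': ['India', 'India', 'India', 'United States', 'United States', 'United States', 'China', 'China', 'China'],
                        'city': ['Bangalore', 'Delhi', 'Mumbai', 'Los Angeles', 'New York', 'Chicago', 'Shanghai', 'Guangzhou', 'Beijing'],
                        'population': [8443675, 11034555, 12442373, 3820914, 8258035, 2664452, 24281400, 13858700, 19164000]})

# Keep the most populous row per country
top = df.sort_values('population', ascending=False).drop_duplicates('country')
print(sorted(top['city']))  # → ['Mumbai', 'New York', 'Shanghai']
```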
|
<python><ibis>
|
2024-07-08 17:15:03
| 1
| 763
|
ianmcook
|
78,722,140
| 9,098,350
|
Integration testing FMX app, page_source doesn't contain app controls
|
<p>I'm exploring the idea of creating integrations tests for my FMX app. Ideally I'd be able to execute these tests for multiple platforms (Windows, Android, iOS, macOS), but for now I'm just trying to get it to work on Windows (64-bit) first using WinAppDriver.</p>
<p>I've created this Python script</p>
<pre><code>from appium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import os

os.startfile('C:\\Program Files\\Windows Application Driver\\WinAppDriver.exe')

desired_caps = {
    'app': "E:\\appiumtest\\integrationtestdemoproj.exe",
    'platformName': "Windows",
    'deviceName': "WindowsPC"
}

driver = webdriver.Remote("http://127.0.0.1:4723", desired_caps)
wait = WebDriverWait(driver, 30)
print(driver.page_source)
</code></pre>
<p>This is the printed <code>driver.page_source</code></p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-16"?>
<Window AcceleratorKey="" AccessKey="" AutomationId=""
    ClassName="TFMAppClass" FrameworkId="Win32" HasKeyboardFocus="False"
    HelpText="" IsContentElement="True" IsControlElement="True"
    IsEnabled="True" IsKeyboardFocusable="True" IsOffscreen="True"
    IsPassword="False" IsRequiredForForm="False" ItemStatus="" ItemType=""
    LocalizedControlType="window" Name="Form1" Orientation="None"
    ProcessId="5040" RuntimeId="42.330834" x="0" y="0" width="0"
    height="0" CanMaximize="False" CanMinimize="True" IsModal="False"
    WindowVisualState="Normal"
    WindowInteractionState="ReadyForUserInteraction" IsTopmost="False" IsAvailable="True" />
</code></pre>
<p>The problem with this output is that my test application executable contains a <code>TButton</code> and a <code>TEdit</code>, but they don't seem to show up in the output.</p>
<p>What I also find weird about the output is that <code>FrameworkId</code> is <code>Win32</code> even though the application is 64-bit.</p>
<p>I looked up if I maybe need to add some accessibility info somehow. Based on this:
<a href="https://docwiki.embarcadero.com/RADStudio/Athens/en/FireMonkey_Accessibility_Package" rel="nofollow noreferrer">https://docwiki.embarcadero.com/RADStudio/Athens/en/FireMonkey_Accessibility_Package</a> it seems like that's not the case.</p>
<p>I also find it interesting that it is able to find the form and its name, but not the controls.</p>
<p><strong>Relevant version info</strong></p>
<ul>
<li>Windows 11</li>
<li>Delphi 12</li>
<li>Appium-Python-Client 1.3.0</li>
<li>selenium 3.141.0</li>
</ul>
<p>I also tried a VCL app, with the same result. But if I try the Windows Calculator app, for example, it does work: I can see all the controls in the <code>page_source</code>.</p>
<p>Do FMX and VCL apps just not support what I'm trying to do? I couldn't find this documented anywhere.</p>
<p>I'm interested in whether somebody else has tried something like this before and whether there is some way to make this work.</p>
|
<python><delphi><appium><firemonkey><winappdriver>
|
2024-07-08 17:07:32
| 0
| 15,192
|
bas
|
78,722,044
| 3,581,875
|
Is it possible to actually change a function signature at runtime? (and have it enforced)
|
<p>I want to dynamically change a function's signature. Here's my code:</p>
<pre><code>from inspect import getfullargspec, signature
from functools import wraps

def mock(sign):
    def deco(func):
        @wraps(func)
        def wrap(*args, **kwargs):
            print('wrap', args, kwargs)
        wrap.__signature__ = signature(sign)
        return wrap
    return deco

def test(a, b):
    print(a, b)

@mock(test)
def func(c):
    print(c)

print(getfullargspec(func))  # -> FullArgSpec(args=['a', 'b'], varargs=None, varkw=None, defaults=None, kwonlyargs=[], kwonlydefaults=None, annotations={})
func(1)  # -> wrap (1,) {}
</code></pre>
<p>According to the program output, the signature is changed successfully. However, supplying an input that doesn't fit the signature goes through with no exceptions, contrary to what I would expect. Is there a way to make sure the signature is automatically enforced?</p>
|
<python><decorator><signature>
|
2024-07-08 16:44:24
| 0
| 1,152
|
giladrv
|
78,721,957
| 9,003,672
|
How to mock response for request.get object in AWS Lambda deployed through localstack for local testing purpose
|
<p>As per requirements, I am testing a Lambda locally using LocalStack. I am able to deploy and invoke the Lambda through <strong>LocalStack</strong>, but I get an error in the Lambda handler code because it accesses an external link through <code>requests.get()</code>.</p>
<p>I have mocked those external links locally using <strong>WireMock</strong>, so if I access them from the terminal I get a response, but I get the error below when they are called from the Lambda invoked by LocalStack.</p>
<p>Error:</p>
<pre><code>INFO:root:error MyHTTPConnectionPool(host='localhost', port=8082): Max retries exceeded with url: /api/v1/health (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fff8314cee0>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>Below is my localstack and wiremock configuration. I am creating both docker container with docker-compose.yml file</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.3"

services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack:3.0.0
    platform: linux/amd64
    ports:
      - "4566:4566"
    expose:
      - "4566"
    environment:
      - SERVICES=lambda,s3,iam,logs
      - AWS_DEFAULT_REGION=eu-west-1
      - SKIP_SSL_CERT_DOWNLOAD=true
    networks:
      - shared-network

  wiremock:
    container_name: wiremock-container
    platform: linux/amd64
    image: wiremock/wiremock:2.31.0
    volumes:
      - $PWD/resources/wiremock:/home/wiremock
    ports:
      - "8082:8080"
    networks:
      - shared-network

networks:
  shared-network:
    name: local_network
</code></pre>
<p>Below is the Lambda handler code</p>
<pre class="lang-python prettyprint-override"><code>def lambda_handler(event, context):
    print("Lambda invoked")
    # Some code
    my_external_url = "http://localhost:8082/api/v1/health"  # I am setting this link to get the mocked response through localstack
    requests.get(my_external_url).json()  # I get the error here
    # Some code
    return {
        'status': 'OK',
    }
</code></pre>
<p>I am mocking the links through WireMock, but I can use any other way of mocking them if suggested. However, I need to use LocalStack to invoke the Lambda. There must be use cases where a Lambda accesses some link and local testing is done through LocalStack by mocking those links.</p>
|
<python><aws-lambda><wiremock><localstack>
|
2024-07-08 16:19:22
| 1
| 501
|
Binit Amin
|
78,721,860
| 10,200,497
|
How can I change values of a column if the group nunique is more than N?
|
<p>My DataFrame:</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'a': ['a', 'a', 'a', 'b', 'c', 'x', 'j', 'w'],
        'b': [1, 1, 1, 2, 2, 3, 3, 3],
    }
)
</code></pre>
<p>Expected output is changing column <code>a</code>:</p>
<pre><code> a b
0 a 1
1 a 1
2 a 1
3 NaN 2
4 NaN 2
5 NaN 3
6 NaN 3
7 NaN 3
</code></pre>
<p>Logic:</p>
<p>The groups are based on <code>b</code>. If for a group <code>df.a.nunique() > 1</code>, then set <code>df.a</code> to <code>np.nan</code>.</p>
<p>This is my attempt. It works but I wonder if there is a one-liner/more efficient way to do it:</p>
<pre><code>df['x'] = df.groupby('b')['a'].transform('nunique')
df.loc[df.x > 1, 'a'] = np.nan
</code></pre>
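<p>One way the two steps might be fused into a single expression (a sketch; not necessarily faster than the version above):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        'a': ['a', 'a', 'a', 'b', 'c', 'x', 'j', 'w'],
        'b': [1, 1, 1, 2, 2, 3, 3, 3],
    }
)

# mask() blanks 'a' wherever the group's nunique exceeds 1
out = df.assign(a=df['a'].mask(df.groupby('b')['a'].transform('nunique') > 1))
print(out)
```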
|
<python><pandas><dataframe>
|
2024-07-08 15:59:22
| 5
| 2,679
|
AmirX
|
78,721,846
| 1,814,420
|
Unable to get queues: ''
|
<p>I have a RabbitMQ running inside a Docker container using image tag <code>rabbitmq:management</code> for the management plugin and proper port mapping (5672 and 15672) using the command</p>
<pre class="lang-bash prettyprint-override"><code>$ docker run -d -p 5672:5672 -p 15672:15672 rabbitmq:management
</code></pre>
<p>The container starts with the default credentials (guest/guest). The Management UI is accessible on the host machine at <code>http://localhost:15672</code> in the browser and with the default credentials.</p>
<p>Then, I set up my Celery as the following:</p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery

celery_app = Celery(
    main='background-task',
    broker='amqp://guest@localhost:5672//',
    backend='mongodb://localhost:27017/tasks',
    result_extended=True,
    include=['app.tasks']
)

celery_app.conf.task_routes = {
    'add': 'add'
}
</code></pre>
<p>I can start the worker with the command below, and it works perfectly fine.</p>
<pre class="lang-bash prettyprint-override"><code>$ celery --app app.main:celery_app worker --loglevel=INFO -Q add
</code></pre>
<p>For the Flower, it was started with the command:</p>
<pre class="lang-bash prettyprint-override"><code>$ celery --app app.main:celery_app flower --broker_api="http://guest@localhost:15672/api"
</code></pre>
<p>Flower starts up fine, and I can access all tabs, except the Broker tab. Whenever I click on it, the UI shows <code>Error 500</code>, and this error pops up in the log:</p>
<pre class="lang-bash prettyprint-override"><code>[E 240708 17:37:30 broker:31] Unable to get queues: ''
[E 240708 17:37:30 web:1875] Uncaught exception GET /broker (::1)
    HTTPServerRequest(protocol='http', host='localhost:5555', method='GET', uri='/broker', version='HTTP/1.1', remote_ip='::1')
    Traceback (most recent call last):
      File "/path/to/project/.venv/lib/python3.11/site-packages/tornado/web.py", line 1790, in _execute
        result = await result
                 ^^^^^^^^^^^^
      File "/path/to/project/.venv/lib/python3.11/site-packages/flower/views/broker.py", line 35, in get
        queues=queues)
        ^^^^^^
    UnboundLocalError: cannot access local variable 'queues' where it is not associated with a value
</code></pre>
<p>What did I miss? How to make Flower show the Broker tab properly?</p>
|
<python><rabbitmq><celery><flower>
|
2024-07-08 15:55:28
| 0
| 12,163
|
Triet Doan
|
78,721,735
| 1,718,989
|
Recursive Directory Search - Not getting the correct current working directory (Python)
|
<p>I'm currently in the early stages of a Python course, have hit a wall, and was hoping for some guidance. The problem states:</p>
<blockquote>
<ol>
<li>Write a function or method called find that takes two arguments called path and dir. The path argument should accept a relative or absolute path to a directory where the search should start, while the dir argument should be the name of a directory that you want to find in the given path. Your program should display the absolute paths if it finds a directory with the given name.</li>
<li>The directory search should be done recursively. This means that the search should also include all subdirectories in the given path.</li>
</ol>
</blockquote>
<p>Example input: <code>path="./tree", dir="python"</code></p>
<p>I wasn't able to solve it and finally took a look at the given answer, but it doesn't seem to work correctly either.</p>
<pre><code>import os


class DirectorySearcher:
    def find(self, path, dir):
        try:
            os.chdir(path)
        except OSError:
            # Doesn't process a file that isn't a directory.
            return
        current_dir = os.getcwd()
        for entry in os.listdir("."):
            if entry == dir:
                print(os.getcwd() + "/" + dir)
            self.find(current_dir + "/" + entry, dir)


directory_searcher = DirectorySearcher()
directory_searcher.find("./tree", "python")
</code></pre>
<p>I created a few sample folders on my desktop and the expected output was</p>
<pre><code>.../tree/python
.../tree/c/python
</code></pre>
<p>My output though is actually yielding</p>
<pre><code>C:\Users\abc\Desktop\tree\c\python
C:\Users\abc\Desktop\tree\c\python\python
</code></pre>
<p>I had created the following folders</p>
<pre><code>tree ------> c ------> python
     ------> python
</code></pre>
<p>If I understand correctly, it looks like the code would first be able to pull up the folders [c, python]. It then goes into the c directory, finds the Python folder, tries to loop through that, and gets returned back to the next value in the list [python]. But at this point the current working directory is still the last one it traversed which is C:\Users\abc\Desktop\tree\c\python. Since the next value is [python] it just takes the last directory and tags on the [python] value at the end.</p>
<p>I've been trying to make it work, but can't quite seem to wrap my head around how to fix this. Any help is much appreciated.</p>
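<p>For comparison, a sketch using <code>os.walk</code>, which never touches the process-wide working directory (the thing that trips up the <code>chdir</code>-based version):</p>

```python
import os

def find(path, dir):
    # Walk the tree without changing the current working directory
    matches = []
    for root, dirs, _files in os.walk(path):
        if dir in dirs:
            full = os.path.abspath(os.path.join(root, dir))
            print(full)
            matches.append(full)
    return matches
```

<p>Called as <code>find("./tree", "python")</code>, this prints one absolute path per match, including nested ones like <code>tree/c/python</code>.</p>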
|
<python><python-3.x><object><recursion>
|
2024-07-08 15:32:43
| 1
| 311
|
chilly8063
|
78,721,694
| 10,732,434
|
Unicode look-alikes
|
<p>I am looking for an easy way to match all unicode characters that look like a given letter. Consider an example for a selection of characters that look like small letter <code>n</code>.</p>
<pre class="lang-py prettyprint-override"><code>import re
import sys

sys.stdout.reconfigure(encoding='utf-8')

look_alike_chars = ['n', 'ń', 'ņ', 'ň', 'ŉ', 'ŋ', 'ո', 'п', 'ո']
pattern = re.compile(r'[nńņňŉŋոп]')
for char in look_alike_chars:
    if pattern.match(char):
        print(f"Character '{char}' matches the pattern.")
    else:
        print(f"Character '{char}' does NOT match the pattern.")
</code></pre>
<p>Instead of <code>r'[nńņňŉŋոп]'</code> I expect something like <code>r'[\LookLike{n}]'</code> where <code>LookLike</code> is a token.</p>
<p>If this is not possible, do you know of a program or website that lists all the look-alike symbols for a given ASCII letter?</p>
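<p>There is no built-in <code>\LookLike</code> token in Python's <code>re</code>, but Unicode decomposition can find diacritic variants of a base letter (a standard-library sketch; note it will NOT catch cross-script look-alikes such as Cyrillic characters, for which Unicode TR39 confusables data is the usual reference):</p>

```python
import unicodedata

def looks_like(base, ch):
    # NFKD-decompose, drop combining marks, compare to the base letter.
    # Matches diacritic variants (e.g. 'ń'), not cross-script confusables.
    decomposed = unicodedata.normalize('NFKD', ch)
    stripped = ''.join(c for c in decomposed if not unicodedata.combining(c))
    return stripped == base

variants = [c for c in (chr(cp) for cp in range(0x00A0, 0x0250)) if looks_like('n', c)]
print(variants)
```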
|
<python><string><unicode>
|
2024-07-08 15:22:25
| 2
| 2,197
|
sanitizedUser
|
78,721,646
| 12,234,535
|
Adjusting colormaps for geoplotting
|
<p>I draw some geoplots using cartopy, with <code>TwoSlopeNorm</code> normalization and the "RdBu_r" colormap to plot an air temperature field.</p>
<pre><code>color_map = 'RdBu_r'
fig = plt.figure(figsize=(16, 12))
ax = plt.axes(projection=prj)
norm = TwoSlopeNorm(vmin=np.min(data), vcenter=273.15, vmax=np.max(data))
filled_contour = ax.contourf(data['longitude'], data['latitude'], data, norm=norm, levels=15, transform=ccrs.PlateCarree(), cmap=color_map, extend='both')
isotherm_0 = ax.contour(data['longitude'], data['latitude'], data, levels=[273.15], colors='green', transform=ccrs.PlateCarree(), linewidths=0.5, linestyles='dashed')
</code></pre>
<p>I perform normalization in order to cover negative temperatures with blue shades, and positive temperatures with red shades. If the dataset contains a wide range of negative and positive temperatures, the geoplot seems to be fine:</p>
<p><a href="https://i.sstatic.net/oYD3qdA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oYD3qdA4.png" alt="nice color spread" /></a>
But <strong>if there are a lot of positive temperatures and a small amount of slightly negative ones, the map will not be built as expected</strong>:
<a href="https://i.sstatic.net/bm7k3IiU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bm7k3IiU.png" alt="wrong color spread" /></a>
In the image you can see how slightly negative temperatures (the minimum is 272.6236 K) are painted in dark blue shade (extreme blue value for the colormap), instead of light blue as for temperatures slightly below zero.</p>
<p>When I draw a geoplot zoomed in to this negative temperatures area, I also get a nice image with proper spread of color shades:
<a href="https://i.sstatic.net/DlAMbg4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DlAMbg4E.png" alt="nice color spread" /></a>
Why are the colors not drawn correctly in the case of Pic 2? How can I avoid this?</p>
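<p>A numeric illustration of the behavior in Pic 2 (a sketch; the ~272.62 K minimum is taken from the description above): <code>TwoSlopeNorm</code> maps <code>vmin..vcenter</code> onto the 0..0.5 half of the colormap no matter how narrow that side is, so a slightly negative minimum always lands on the darkest blue:</p>

```python
from matplotlib.colors import TwoSlopeNorm

# vmin..vcenter is stretched over 0..0.5 of the colormap, vcenter..vmax over 0.5..1
norm = TwoSlopeNorm(vmin=272.6236, vcenter=273.15, vmax=310.0)

print(float(norm(272.6236)))  # → 0.0 (the darkest blue)
print(float(norm(273.0)))     # already well into the blue half
```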
<p><strong>UPDATE:</strong>
Since there are no such natural extreme temperatures in my project, I've taken a fixed array and cast the colormap onto it:</p>
<pre><code># Compute min and max of data
min_val, max_val = np.min(data), np.max(data)
# Define the center (K or C)
center = 273.15 if min_val>150 else 0
# Define the temperature range for colormapping
levels = np.arange(-80, 81, 1) if center==0 else np.arange(193.15, 354.15, 1)
color_map = 'RdBu_r'
filled_contour = ax.contourf(data['lon'], data['lat'], data, levels=levels, transform=ccrs.PlateCarree(), cmap=color_map, extend='both')
</code></pre>
<p>In the result the colors are well spread, but around 0 degrees the colors are too weak (almost white):
<a href="https://i.sstatic.net/zjfG265n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zjfG265n.png" alt="enter image description here" /></a></p>
<p>So I've added different options for the colormap: "Reds" for positive temperature only, "Blues" for negative ones, and "RdBu_r" for both. <strong>And here I defined the appropriate normalization</strong>:</p>
<pre><code># Determine normalization and colormap
if min_val >= center:
    norm = Normalize(vmin=center, vmax=max_val)
    color_map = 'Reds'
elif max_val < center:
    norm = Normalize(vmin=min_val, vmax=center)
    color_map = 'Blues'
else:
    norm = TwoSlopeNorm(vmin=min_val, vcenter=center, vmax=max_val)
    color_map = 'RdBu_r'
</code></pre>
<p>Then drawing it with <code>norm=norm</code> gave me this:
<a href="https://i.sstatic.net/GP1zmvhQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GP1zmvhQ.png" alt="enter image description here" /></a>
It's better in terms of colors, though I need to trim the colorbar. In this image, there is blue for the negative temperatures, but it's not very representative: it's too strong for temperatures slightly below zero.</p>
<p>Finally, in order to trim the colorbar, I've added the following:</p>
<pre><code>filled_contour.set_clim(min_val-5, max_val+6)
</code></pre>
<p>Which gave me this:
<a href="https://i.sstatic.net/nSxi9plP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSxi9plP.png" alt="enter image description here" /></a>
And while the colorbar hasn't been trimmed here, the colors here are spread even more naturally with light blue for negative (but not too white) and nice gradient for positive. Still, I have to trim the colorbar somehow...</p>
<p>The other problem arose with different colormap options, like the one for positive temperatures only. The colormap (for the value range -80, ..., 80) is not spread correctly, since it starts at vmin=0:</p>
<pre><code>norm = Normalize(vmin=center, vmax=max_val)
color_map = 'Reds'
</code></pre>
<p><a href="https://i.sstatic.net/OcG9vf18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OcG9vf18.png" alt="enter image description here" /></a></p>
<p>I guess I'll have to define another levels range for positive temps (like np.arange(0, 81, 1)). Yet all this looks like a quick-and-dirty workaround to me. I wish there were some smart automatic way to handle this. This color fuss takes much more time than working with the data itself; I never could've thought it would be so involved...</p>
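A minimal self-contained sketch of one way out (plain matplotlib with synthetic data standing in for the cartopy field, so the grid and values here are assumptions): restricting the contour levels to the actual data range makes <code>contourf</code>'s colorbar cover only that range, while <code>TwoSlopeNorm</code> keeps the colors centered, so no separate <code>set_clim</code> trimming should be needed.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripting
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize, TwoSlopeNorm

# Synthetic temperature field (no cartopy needed for the sketch)
lon, lat = np.meshgrid(np.linspace(-10, 10, 50), np.linspace(40, 60, 50))
data = 30 * np.sin(np.radians(lat * 4)) - 10  # roughly [-36, 0.3], crosses zero

center = 0.0
min_val, max_val = float(data.min()), float(data.max())

# Levels that only cover the actual data range: contourf then draws a
# colorbar spanning just these levels, i.e. the bar is "trimmed" for free.
levels = np.arange(np.floor(min_val), np.ceil(max_val) + 1, 1)

if min_val >= center:
    norm, cmap = Normalize(vmin=center, vmax=max_val), "Reds"
elif max_val <= center:
    norm, cmap = Normalize(vmin=min_val, vmax=center), "Blues"
else:
    norm, cmap = TwoSlopeNorm(vmin=min_val, vcenter=center, vmax=max_val), "RdBu_r"

fig, ax = plt.subplots()
cf = ax.contourf(lon, lat, data, levels=levels, cmap=cmap, norm=norm, extend="both")
fig.colorbar(cf, ax=ax)  # spans only [min_val, max_val]
```

The same level-building branch handles the all-positive and all-negative cases: since the levels start at the data minimum rather than at -80, the "Reds" colormap is stretched over the real range automatically.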
|
<python><matplotlib><plot><colors><cartopy>
|
2024-07-08 15:12:17
| 2
| 379
|
Outlaw
|
78,721,617
| 8,372,455
|
Plotting subplots in Matplotlib: second plot not showing data
|
<p>I am trying to create subplots in Matplotlib where the first plot is a stacked bar plot and the second plot is a line plot. Both plots should share the same datetime index for the x-axis. However, I am facing an issue where the second plot does not show any data.</p>
<pre><code> Economizer Economizer_plus_Mech ... Not_Mechanical OaTemp
2023-10-01 1.680672 1.680672 ... 1.680672 84.317815
2023-10-02 17.127072 5.524862 ... 1.657459 76.263536
2023-10-03 31.481481 5.092593 ... 0.000000 73.407407
</code></pre>
<p>Here is my plot function:</p>
<pre><code>import matplotlib.pyplot as plt
import os
import pandas as pd
PLOTS_DIR = 'path/to/plots'
def save_mode_plot_with_oat(combined_data, title, ylabel, filename):
# Ensure the index is datetime
combined_data.index = pd.to_datetime(combined_data.index)
# Create a 2-row subplot layout, sharing the x-axis
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 12), sharex=True)
# Bar plot for mode percentages on the first subplot
combined_data.iloc[:, :-1].plot(kind='bar', stacked=True, ax=ax1, colormap='viridis')
ax1.set_title(title)
ax1.set_ylabel(ylabel)
# Hide x-label on the first plot
ax1.set_xlabel('')
ax1.tick_params(axis='x', labelbottom=False) # Hide x-tick labels on the first plot
# Line plot for outside air temperature on the second subplot
ax2.plot(combined_data.index, combined_data['OaTemp'], color='red', linestyle='-', marker='o')
ax2.set_title('Average Outside Air Temperature Over Time')
ax2.set_ylabel('Average Outside Air Temperature (Β°F)')
ax2.set_xlabel('Date')
ax2.tick_params(axis='x', rotation=90) # Rotate x-tick labels on the second plot
ax2.grid(True)
plt.tight_layout()
plt.savefig(os.path.join(PLOTS_DIR, filename))
plt.close()
# Example call
save_mode_plot_with_oat(month_df, 'Mode Percentages and OAT', 'Percentage (%)', 'mode_oat_plot.png')
</code></pre>
<p>My plots come through like this where the second plot is always blank. What can I try next?</p>
<p><a href="https://i.sstatic.net/o0ICXoA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o0ICXoA4.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib>
|
2024-07-08 15:06:45
| 1
| 3,564
|
bbartling
|
78,721,505
| 19,067,218
|
Choosing Between yield and addfinalizer in pytest Fixtures for Teardown
|
<p>I've recently started using pytest for testing in Python and created a fixture to manage a collection of items using gRPC. Below is the code snippet for my fixture:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture(scope="session")
def collection():
grpc_page = GrpcPages().collections
def create_collection(collection_id=None, **kwargs):
default_params = {
"id": collection_id,
"is_active": True,
# some other params
}
try:
return grpc_page.create_collection(**{**default_params, **kwargs})
except Exception as err:
print(err)
raise err
yield create_collection
def delete_created_collection():
# Some code to hard and soft delete created data
</code></pre>
<p>This is my first attempt at creating a fixture, and I realized that I need a mechanism to delete data created during the fixture's lifecycle.</p>
<p>While exploring options for implementing teardown procedures, I came across yield and addfinalizer. From what I understand, both can be used to define teardown actions in pytest fixtures. However, I'm having trouble finding clear documentation and examples that explain the key differences between these two approaches and when to choose one over the other.</p>
<p>Here are the questions (for fast-forwarding :) ):</p>
<ol>
<li>What are the primary differences between using yield and addfinalizer in pytest fixtures for handling teardown?</li>
<li>Are there specific scenarios where one is preferred over the other?</li>
</ol>
|
<python><pytest><fixtures>
|
2024-07-08 14:47:17
| 2
| 344
|
llRub3Nll
|
78,721,443
| 4,126,652
|
Type hints for decorators
|
<p>The <a href="https://outlines-dev.github.io/outlines/reference/prompting/" rel="nofollow noreferrer">outlines</a> library has a prompt class like this</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Prompt:
"""Represents a prompt function.
We return a `Prompt` class instead of a simple function so the
template defined in prompt functions can be accessed.
"""
template: str
signature: inspect.Signature
def __post_init__(self):
self.parameters: List[str] = list(self.signature.parameters.keys())
def __call__(self, *args, **kwargs) -> str:
"""Render and return the template.
Returns
-------
The rendered template as a Python ``str``.
"""
bound_arguments = self.signature.bind(*args, **kwargs)
bound_arguments.apply_defaults()
return render(self.template, **bound_arguments.arguments)
def __str__(self):
return self.template
</code></pre>
<p>It has a prompt decorator like this</p>
<pre class="lang-py prettyprint-override"><code>def prompt(fn: Callable) -> Prompt:
signature = inspect.signature(fn)
# The docstring contains the template that will be rendered to be used
# as a prompt to the language model.
docstring = fn.__doc__
if docstring is None:
raise TypeError("Could not find a template in the function's docstring.")
template = cast(str, docstring)
return Prompt(template, signature)
</code></pre>
<p>This seems very different from the decorators I have seen, which use nested decorator and wrapper functions.</p>
<p>If I try to use this prompt like</p>
<pre class="lang-py prettyprint-override"><code>@prompt
def my_system_prompt():
"""This is system prompt."""
system_prompt = query_gen_system_prompt()
</code></pre>
<p>Pyright seems to be inferring the type of system_prompt as str instead of Prompt (probably because of <code>__call__</code>?), which is causing issues.</p>
<p>How do I resolve this?</p>
|
<python><python-typing><python-decorators>
|
2024-07-08 14:33:32
| 1
| 3,263
|
Vikash Balasubramanian
|
78,721,344
| 6,751,456
|
django using aggregate() and distinct() together
|
<p>I have a filterset that has following attributes:</p>
<pre><code>dosFromGte = filters.DateFilter(method="search_by_dos_from_gte", lookup_expr="gte")
dosToLte = filters.DateFilter(method="search_by_dos_from_lte", lookup_expr="lte")
# One of these methods:
def search_by_dos_from_lte(self, queryset: Chart, name: str, value: str) -> Chart:
return queryset.annotate(max_dos=Min("diagnosis__dos_from")).filter(max_dos__lte=value)
# using annotate and aggregate Min().
</code></pre>
<p>I need to run a few annotations and then fetch only distinct ids.</p>
<pre><code>queryset = self.filter_queryset(self.get_queryset().exclude(state__in=repo_exclude))
queryset = queryset.annotate(
ChartId=F("chart_id")
).values(
"ChartId"
).distinct("id")
</code></pre>
<p>When I apply these filters <code>?clientId=11&project=56&dosFromGte=2023-08-17&dosToLte=2023-08-19</code> this tries to run <code>Min()</code> and then distinct on <code>ids</code>.</p>
<p>This gives <code>NotImplementedError("aggregate() + distinct(fields) not implemented.")</code></p>
<p>So I tried to work around a bit and</p>
<pre><code>queryset.values(
"id", "ChartId"
)
# remove duplicate charts based on pk, i.e. id
df = df.drop_duplicates('id')
df = df.drop('id', axis=1)
</code></pre>
<p>which felt like a hack and a dirty workaround.</p>
<p>Is there a cleaner ORM way to do it?</p>
|
<python><django><orm><django-annotate><django-aggregation>
|
2024-07-08 14:14:53
| 0
| 4,161
|
Azima
|
78,721,341
| 23,196,983
|
Efficiently remove rows from pandas df based on second latest time in column
|
<p>I have a pandas Dataframe that looks similar to this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>ID</th>
<th>time_1</th>
<th>time_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>101</td>
<td>2024-06-20 14:32:22</td>
<td>2024-06-20 14:10:31</td>
</tr>
<tr>
<td>1</td>
<td>101</td>
<td>2024-06-20 15:21:31</td>
<td>2024-06-20 14:32:22</td>
</tr>
<tr>
<td>2</td>
<td>101</td>
<td>2024-06-20 15:21:31</td>
<td>2024-06-20 15:21:31</td>
</tr>
<tr>
<td>3</td>
<td>102</td>
<td>2024-06-20 16:26:51</td>
<td>2024-06-20 15:21:31</td>
</tr>
<tr>
<td>4</td>
<td>102</td>
<td>2024-06-20 16:26:51</td>
<td>2024-06-20 16:56:24</td>
</tr>
<tr>
<td>5</td>
<td>103</td>
<td>2024-06-20 20:05:44</td>
<td>2024-06-20 21:17:35</td>
</tr>
<tr>
<td>6</td>
<td>103</td>
<td>2024-06-20 22:41:22</td>
<td>2024-06-20 22:21:31</td>
</tr>
<tr>
<td>7</td>
<td>103</td>
<td>2024-06-20 23:11:56</td>
<td>2024-06-20 23:01:31</td>
</tr>
</tbody>
</table></div>
<p>For each ID in my df I want to take the second latest time_1 (if it exists). I then want to compare this time with the timestamps in time_2 and remove all rows from my df where time_2 is earlier than this time.
My expected output would be:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>ID</th>
<th>time_1</th>
<th>time_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>101</td>
<td>2024-06-20 15:21:31</td>
<td>2024-06-20 14:32:22</td>
</tr>
<tr>
<td>2</td>
<td>101</td>
<td>2024-06-20 15:21:31</td>
<td>2024-06-20 15:21:31</td>
</tr>
<tr>
<td>3</td>
<td>102</td>
<td>2024-06-20 16:26:51</td>
<td>2024-06-20 15:21:31</td>
</tr>
<tr>
<td>4</td>
<td>102</td>
<td>2024-06-20 16:26:51</td>
<td>2024-06-20 16:56:24</td>
</tr>
<tr>
<td>7</td>
<td>103</td>
<td>2024-06-20 23:11:56</td>
<td>2024-06-20 23:01:31</td>
</tr>
</tbody>
</table></div>
<p>This problem is above my pandas level. I asked ChatGPT and this is the solution I got which in principle does what I want:</p>
<pre><code>import pandas as pd
ids = [101, 101, 101, 102, 102, 103, 103, 103]
time_1 = ['2024-06-20 14:32:22', '2024-06-20 15:21:31', '2024-06-20 15:21:31', '2024-06-20 16:26:51', '2024-06-20 16:26:51', '2024-06-20 20:05:44', '2024-06-20 22:41:22', '2024-06-20 23:11:56']
time_2 = ['2024-06-20 14:10:31', '2024-06-20 14:32:22', '2024-06-20 15:21:31', '2024-06-20 15:21:31', '2024-06-20 16:56:24', '2024-06-20 21:17:35', '2024-06-20 22:21:31', '2024-06-20 23:01:31']
df = pd.DataFrame({
'id': ids,
'time_1': pd.to_datetime(time_1),
'time_2': pd.to_datetime(time_2)
})
grouped = df.groupby('id')['time_1']
mask = pd.Series(False, index=df.index)
for id_value, group in df.groupby('id'):
# Remove duplicates and sort timestamps
unique_sorted_times = group['time_1'].drop_duplicates().sort_values()
# Check if there's more than one unique time
if len(unique_sorted_times) > 1:
# Select the second last time
second_last_time = unique_sorted_times.iloc[-2]
# Update the mask for rows with time_2 greater than or equal to the second last time_1
mask |= (df['id'] == id_value) & (df['time_2'] >= second_last_time)
else:
# If there's only one unique time, keep the row(s)
mask |= (df['id'] == id_value)
filtered_data = df[mask]
</code></pre>
<p>My issue with this solution is the for-loop. This seems rather inefficient and my real data is quite large. And also I am curious if there is a better, more efficient solution for this.</p>
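One way to drop the explicit row-mask loop: compute each id's second-latest unique <code>time_1</code> with a single groupby, map it back onto the rows, and filter with one vectorized comparison. The per-group lambda below still runs Python once per group, but only over the deduplicated times, which is far less work than masking row by row. It reproduces the keep-everything behaviour for ids with a single unique time by using <code>pd.Timestamp.min</code> as the cutoff.

```python
import pandas as pd

# Same sample data as in the question
ids = [101, 101, 101, 102, 102, 103, 103, 103]
time_1 = ['2024-06-20 14:32:22', '2024-06-20 15:21:31', '2024-06-20 15:21:31',
          '2024-06-20 16:26:51', '2024-06-20 16:26:51', '2024-06-20 20:05:44',
          '2024-06-20 22:41:22', '2024-06-20 23:11:56']
time_2 = ['2024-06-20 14:10:31', '2024-06-20 14:32:22', '2024-06-20 15:21:31',
          '2024-06-20 15:21:31', '2024-06-20 16:56:24', '2024-06-20 21:17:35',
          '2024-06-20 22:21:31', '2024-06-20 23:01:31']
df = pd.DataFrame({'id': ids,
                   'time_1': pd.to_datetime(time_1),
                   'time_2': pd.to_datetime(time_2)})

# Second-latest unique time_1 per id; ids with only one unique time get
# Timestamp.min so all of their rows survive the comparison below.
unique_times = df[['id', 'time_1']].drop_duplicates().sort_values(['id', 'time_1'])
cutoff_per_id = unique_times.groupby('id')['time_1'].agg(
    lambda s: s.iloc[-2] if len(s) > 1 else pd.Timestamp.min)

filtered_data = df[df['time_2'] >= df['id'].map(cutoff_per_id)]
```

On the sample data this keeps exactly the expected rows (indices 1, 2, 3, 4, 7).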
|
<python><pandas><group-by>
|
2024-07-08 14:14:27
| 3
| 310
|
Frede
|
78,721,195
| 4,489,082
|
AttributeError: `np.string_` was removed in the NumPy 2.0 release. Use `np.bytes_` instead.. Did you mean: 'strings'?
|
<p>I'm interested in viewing a neural network as a graph using TensorBoard. I have constructed a network in PyTorch with the following code:</p>
<pre><code>import torch
BATCH_SIZE = 16
DIM_IN = 1000
HIDDEN_SIZE = 100
DIM_OUT = 10
class TinyModel(torch.nn.Module):
def __init__(self):
super(TinyModel, self).__init__()
self.layer1 = torch.nn.Linear(DIM_IN, HIDDEN_SIZE)
self.relu = torch.nn.ReLU()
self.layer2 = torch.nn.Linear(HIDDEN_SIZE, DIM_OUT)
def forward(self, x):
x = self.layer1(x)
x = self.relu(x)
x = self.layer2(x)
return x
some_input = torch.randn(BATCH_SIZE, DIM_IN, requires_grad=False)
ideal_output = torch.randn(BATCH_SIZE, DIM_OUT, requires_grad=False)
model = TinyModel()
</code></pre>
<p>Setting up TensorBoard:</p>
<pre><code>from torch.utils.tensorboard import SummaryWriter
# Create a SummaryWriter
writer = SummaryWriter("checkpoint")
# Add the graph to TensorBoard
writer.add_graph(model, some_input)
writer.close()
</code></pre>
<p>When I run <code>tensorboard --logdir=checkpoint</code> in the terminal, I receive the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/k/python_venv/bin/tensorboard", line 5, in <module>
from tensorboard.main import run_main
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/main.py", line 27, in <module>
from tensorboard import default
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/default.py", line 39, in <module>
from tensorboard.plugins.hparams import hparams_plugin
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/plugins/hparams/hparams_plugin.py", line 30, in <module>
from tensorboard.plugins.hparams import backend_context
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/plugins/hparams/backend_context.py", line 26, in <module>
from tensorboard.plugins.hparams import metadata
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/plugins/hparams/metadata.py", line 32, in <module>
NULL_TENSOR = tensor_util.make_tensor_proto(
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/util/tensor_util.py", line 405, in make_tensor_proto
numpy_dtype = dtypes.as_dtype(nparray.dtype)
File "/home/k/python_venv/lib/python3.10/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py", line 677, in as_dtype
if type_value.type == np.string_ or type_value.type == np.unicode_:
File "/home/k/python_venv/lib/python3.10/site-packages/numpy/__init__.py", line 397, in __getattr__
raise AttributeError(
AttributeError: `np.string_` was removed in the NumPy 2.0 release. Use `np.bytes_` instead.. Did you mean: 'strings'?
</code></pre>
<p>Probably the issue will be fixed in future releases, but is there a fix for now?</p>
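The simplest workarounds until TensorBoard's compatibility stub catches up are to downgrade NumPy (<code>pip install "numpy&lt;2"</code>) or to upgrade to a TensorBoard release that supports NumPy 2. If you launch TensorBoard programmatically from the same Python process, a small shim restoring the removed aliases before TensorBoard is imported may also work; treat it as a stopgap, since it patches NumPy's namespace:

```python
import numpy as np

# np.string_ and np.unicode_ were removed in NumPy 2.0; np.bytes_ and
# np.str_ are the supported spellings. Restore the old aliases if missing.
if not hasattr(np, "string_"):
    np.string_ = np.bytes_
if not hasattr(np, "unicode_"):
    np.unicode_ = np.str_

# ...then import/launch TensorBoard from this same process (e.g. via
# `from tensorboard import program`). The shim only helps for in-process
# launches, not for the standalone `tensorboard` CLI.
```

On NumPy 1.x the shim is a no-op, because <code>np.string_</code> and <code>np.unicode_</code> are already aliases of <code>np.bytes_</code> and <code>np.str_</code> there.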
|
<python><numpy><pytorch><tensorboard>
|
2024-07-08 13:44:22
| 2
| 793
|
pkj
|
78,721,181
| 2,123,706
|
Replace subtrings in a list of strings using dictionary in python
|
<p>I have a list of strings:</p>
<pre><code>ls = ['BLAH a b c A B C D 12 34 56',
'BLAH d A B 12 45 78',
'BLAH a/b A B C 12 45 78',
'BLAH a/ b A 12 45 78',
'BLAH a b c A B C D 12 34 99']
</code></pre>
<p>I want to replace the lower case substrings with an identifier:</p>
<pre><code>dict2 = {'a b c':'1','d':'2','a/b':'3','a/ b':'4'}
</code></pre>
<p>I am looping over all the items in the list and trying to perform a replace using:</p>
<pre><code>[[ls[i].replace(k,v) for k,v in dict2.items()][0] for i in range(len(ls))]
</code></pre>
<p>but this does not perform the correct replacements from the dictionary (<code>d</code>, <code>a/b</code>, and <code>a/ b</code> were not replaced), and results in:</p>
<pre><code>['BLAH 1 A B C D 12 34 56',
'BLAH d A B 12 45 78',
'BLAH a/b A B C 12 45 78',
'BLAH a/ b A 12 45 78',
'BLAH 1 A B C D 12 34 99']
</code></pre>
<p>I would like to end up with:</p>
<pre><code>['BLAH 1 A B C D 12 34 56',
'BLAH 2 A B 12 45 78',
'BLAH 3 A B C 12 45 78',
'BLAH 4 A 12 45 78',
'BLAH 1 A B C D 12 34 99']
</code></pre>
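For reference, the comprehension above only ever keeps the result of the first replacement (the trailing <code>[0]</code> discards the others). One fix is a single compiled regex whose alternation tries the longest keys first, so <code>a/ b</code> cannot be shadowed by <code>a/b</code>. For messier real data you may also want <code>\b</code> word boundaries around short keys like <code>d</code>; that part is an assumption about your data.

```python
import re

ls = ['BLAH a b c A B C D 12 34 56',
      'BLAH d A B 12 45 78',
      'BLAH a/b A B C 12 45 78',
      'BLAH a/ b A 12 45 78',
      'BLAH a b c A B C D 12 34 99']
dict2 = {'a b c': '1', 'd': '2', 'a/b': '3', 'a/ b': '4'}

# One pattern; longest keys first so 'a/ b' wins over 'a/b', and the keys
# are escaped because '/' and spaces are literal text here.
pattern = re.compile("|".join(
    re.escape(k) for k in sorted(dict2, key=len, reverse=True)))
result = [pattern.sub(lambda m: dict2[m.group(0)], s) for s in ls]
```

Because the substitution happens in one left-to-right pass, earlier replacements can never be re-matched by later keys.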
|
<python><string><dictionary><replace>
|
2024-07-08 13:40:58
| 3
| 3,810
|
frank
|
78,721,153
| 8,110,961
|
pandas dataframe filter rows by datetime
|
<p>I have a dataframe with a (non-index) column called 'est'. I want to filter out rows whose datetime is prior to 2023-01-01. I tried the following:</p>
<pre><code>spot = df.query(df.est >= date(year=2023, month=1, day=1))
spot = df[df['est'].dt >= date(year=2023, month=1, day=1)]
</code></pre>
<p>but it's throwing errors such as <code>ValueError: expr must be a string to be evaluated, <class 'pandas.core.series.Series'> given</code> or <code>AttributeError: Can only use .dt accessor with datetimelike values. Did you mean: 'at'?</code>.</p>
<p>The est column is created from column 't', where t holds UTC Unix timestamps in milliseconds, like 1656662400000 or 1656668700000. The est column is created with the code below:</p>
<pre><code>df["est"] = pd.to_datetime(df["t"], unit="ms")
df["est"] = df["est"].dt.tz_localize("UTC").dt.tz_convert("US/Eastern").dt.date
</code></pre>
<p>I tried a few other things but failed, and searched online, but most results show using .loc after setting the column as the index. How can I filter the data without indexing the column? Thanks!</p>
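Since <code>est</code> was built with <code>.dt.date</code>, the column holds plain <code>datetime.date</code> objects (object dtype), which is why the <code>.dt</code> accessor fails, and <code>df.query</code> fails because it expects a string expression, not a boolean Series. A direct elementwise comparison against a <code>date</code> works, as this sketch (with assumed sample timestamps) shows:

```python
from datetime import date

import pandas as pd

# Assumed sample timestamps: two from July 2022, one from March 2023
df = pd.DataFrame({"t": [1656662400000, 1656668700000, 1680000000000]})
df["est"] = (pd.to_datetime(df["t"], unit="ms")
               .dt.tz_localize("UTC")
               .dt.tz_convert("US/Eastern")
               .dt.date)

# After .dt.date the column holds plain datetime.date objects (object
# dtype), so the .dt accessor no longer applies; compare values directly.
spot = df[df["est"] >= date(2023, 1, 1)]
```

If you instead skip the <code>.dt.date</code> step, the column stays <code>datetime64</code> and supports comparisons against a string, e.g. <code>df[df["est"] >= "2023-01-01"]</code>.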
|
<python><pandas><dataframe>
|
2024-07-08 13:36:15
| 1
| 385
|
Jack
|
78,721,151
| 2,302,262
|
Type checker cannot find attribute "year" on class "pandas.DatetimeIndex"
|
<p>I'm writing functions that act on <code>pandas.Series</code> with a <code>DatetimeIndex</code>. I can tell the type checker that the <code>Series</code> does not have just any <code>Index</code>, but a <code>DatetimeIndex</code>, using <code>typing.cast</code> (see <a href="https://stackoverflow.com/questions/78720900/type-hints-for-subset-of-class">this question</a>):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import typing
def filter_year(s: pd.Series, year: int) -> pd.Series:
s.index = typing.cast(pd.DatetimeIndex, s.index)
keep = s.index.year == year
return s[keep]
</code></pre>
<p>Before adding the <code>cast</code> line, I had an error <code>Cannot access attribute "year" for class "Index"</code>, which made sense, because <code>pd.Index.year</code> does not exist (<code>AttributeError: type object 'Index' has no attribute 'year'</code>)</p>
<p>However, after adding the <code>cast</code> line, I STILL had the error, now <code>Cannot access attribute "year" for class "DatetimeIndex"</code>. This I do not understand, as <code>pd.DatetimeIndex.year</code> DOES exist (<code><property at 0x15ceafc90></code>).</p>
<p><strong>How can I make my typechecker see the errors of its ways?</strong></p>
|
<python><pandas><python-typing>
|
2024-07-08 13:35:50
| 0
| 2,294
|
ElRudi
|
78,721,064
| 4,442,337
|
How to mark a DRF ViewSet action as being exempt from the application of a custom middleware?
|
<p>I've created a <a href="https://docs.djangoproject.com/en/5.0/topics/http/middleware/#writing-your-own-middleware" rel="nofollow noreferrer">Custom Django Middleware</a> and added to the <code>MIDDLEWARE</code> settings variable correctly.</p>
<pre class="lang-py prettyprint-override"><code>from django.http import HttpResponseForbidden
class MyCustomMiddleware:
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
return self.get_response(request)
def process_view(self, request, view_func, view_args, view_kwargs):
# Perform some internal actions on the `request` object.
return None
</code></pre>
<p>Since this is applied to all DRF ViewSets by default, I would like to exempt some actions that don't need this check. The idea would be to check a flag inside the <code>process_view</code> function taking inspiration from the Django <code>CsrfViewMiddleware</code> which checks if the <code>csrf_exempt</code> variable has been set by the <code>csrf_exempt</code> decorator. So I modified the custom middleware and created a custom decorator to exempt views explicitly.</p>
<pre class="lang-py prettyprint-override"><code>from functools import wraps
from django.http import HttpResponseForbidden
class MyCustomMiddleware:
...
def process_view(self, request, view_func, view_args, view_kwargs):
if getattr(view_func, "some_condition", False):
return HttpResponseForbidden("Forbidden on custom middleware")
# Perform some internal actions on the `request` object.
return None
def custom_middleware_exempt(view_func):
@wraps(view_func)
def _view_wrapper(request, *args, **kwargs):
return view_func(request, *args, **kwargs)
_view_wrapper.some_condition = True
return _view_wrapper
</code></pre>
<p>Having this I do something like this and it correctly enters the custom decorator before going inside the Django middleware.</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework import viewsets
from rest_framework.decorators import action
from rest_framework.response import Response
class MyViewSet(viewsets.ViewSet):
@action(detail=False, methods=['get'])
@custom_middleware_exempt
def my_action(self, request):
return Response()
</code></pre>
<p>So far so good until I noticed that the <code>view_func</code> of the custom middleware <code>process_view</code> doesn't correspond to the action function (which is being decorated) but to the ViewSet function.</p>
<p>Inside the decorator: <code>view_func = <function MyViewSet.my_action at 0x79ee9c471760></code></p>
<p>Inside the middleware: <code>view_func = <function MyViewSet at 0x79ee9c49c220></code></p>
<p>Apparently, Django middlewares are applied at the viewset level instead of the action level. As a consequence, <code>view_func</code> doesn't have the <code>some_condition</code> attribute set.</p>
<p>Is there a way to decorate a viewset action and change the viewset level function or an alternative way to achieve what I'm trying to do?</p>
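One possible direction, hedged: it relies on DRF's <code>ViewSetMixin.as_view()</code> attaching <code>cls</code> (the viewset class) and <code>actions</code> (a <code>{http_method: action_name}</code> map) to the generated view function, so verify those attributes exist in your DRF version before relying on them. The idea is to resolve the action method from the viewset class inside <code>process_view</code> and read the flag from it. <code>is_exempt</code> below is a hypothetical helper, not DRF API:

```python
def is_exempt(view_func, request_method):
    """Check the exemption flag on the viewset action bound to this request.

    Assumes DRF's ViewSetMixin.as_view() has set `view_func.cls` and
    `view_func.actions`; falls back to not-exempt when they are absent.
    """
    actions = getattr(view_func, "actions", None)
    viewset_cls = getattr(view_func, "cls", None)
    if not actions or viewset_cls is None:
        return False
    action_name = actions.get(request_method.lower())
    handler = getattr(viewset_cls, action_name, None) if action_name else None
    return bool(getattr(handler, "some_condition", False))

# In MyCustomMiddleware.process_view this would be used as:
#     if is_exempt(view_func, request.method):
#         return None  # skip the custom checks for exempted actions
```

Because the decorator from the question sets <code>some_condition</code> on the wrapped action function itself, looking the handler up on the class should find the flag even though the viewset-level <code>view_func</code> does not carry it.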
|
<python><django><django-rest-framework>
|
2024-07-08 13:14:28
| 0
| 2,191
|
browser-bug
|
78,720,908
| 10,892,021
|
AsyncIO CPython hangs with 100% CPU usage
|
<p>Our Python application is hanging on two particular machines after 10-20 minutes of use. Htop shows 100% CPU usage. I used Pystack to get the stack trace of the running process. The Python side of the stack trace shows nothing interesting; it was just some dictionary lookup (and each time it hangs, it is at a different place in the code). But at the last call, Pystack shows that it is stuck at this particular line in the CPython source code (the while loop):</p>
<p><a href="https://github.com/python/cpython/blob/v3.12.3/Modules/_asynciomodule.c#L3594" rel="nofollow noreferrer">https://github.com/python/cpython/blob/v3.12.3/Modules/_asynciomodule.c#L3594</a></p>
<pre class="lang-c prettyprint-override"><code> module_traverse(PyObject *mod, visitproc visit, void *arg)
{
asyncio_state *state = get_asyncio_state(mod);
Py_VISIT(state->FutureIterType);
Py_VISIT(state->TaskStepMethWrapper_Type);
Py_VISIT(state->FutureType);
Py_VISIT(state->TaskType);
Py_VISIT(state->asyncio_mod);
Py_VISIT(state->traceback_extract_stack);
Py_VISIT(state->asyncio_future_repr_func);
Py_VISIT(state->asyncio_get_event_loop_policy);
Py_VISIT(state->asyncio_iscoroutine_func);
Py_VISIT(state->asyncio_task_get_stack_func);
Py_VISIT(state->asyncio_task_print_stack_func);
Py_VISIT(state->asyncio_task_repr_func);
Py_VISIT(state->asyncio_InvalidStateError);
Py_VISIT(state->asyncio_CancelledError);
Py_VISIT(state->scheduled_tasks);
Py_VISIT(state->eager_tasks);
Py_VISIT(state->current_tasks);
Py_VISIT(state->iscoroutine_typecache);
Py_VISIT(state->context_kwname);
// Visit freelist.
PyObject *next = (PyObject*) state->fi_freelist;
while (next != NULL) {
// stuck inside this loop
PyObject *current = next;
Py_VISIT(current);
next = (PyObject*) ((futureiterobject*) current)->future;
}
return 0;
}
</code></pre>
<p>I believe this part of the code has something to do with garbage collection. What can I learn from this to troubleshoot the issue? Where should I look next?</p>
|
<python><python-asyncio><cpython><python-internals><python-3.12>
|
2024-07-08 12:39:04
| 0
| 797
|
Sophon Aniketos
|
78,720,900
| 2,302,262
|
Type hints for subset of class
|
<p>I'm writing functions that act on <code>pandas.Series</code> with a <code>DatetimeIndex</code>. I can do type hints, like so:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def filter_year(s: pd.Series, year: int) -> pd.Series:
keep = s.index.year == year
return s[keep]
</code></pre>
<p>This works fine, but the editor complains <code>Cannot access attribute "year" for class "Index"</code>. The reason is, that the editor expects <em>any</em> <code>Index</code>.</p>
<p><strong>Is there a way to specify that <code>s</code> has a <code>DatetimeIndex</code> (which is a subclass of <code>Index</code>)?</strong></p>
|
<python><python-typing>
|
2024-07-08 12:37:43
| 4
| 2,294
|
ElRudi
|
78,720,804
| 6,803,114
|
Split a pandas dataframe column into multiple based on text in those columns
|
<p>I have a pandas dataframe with a column.</p>
<pre><code>id text_col
1 Was it Accurate?: Yes\n\nReasoning: This is a sample text
2 Was it Accurate?: Yes\n\nReasoning: This is a sample text
3 Was it Accurate?: No\n\nReasoning: This is a sample text
</code></pre>
<p>I have to break the text_col into two columns <code>"Was it accurate?"</code> and <code>"Reasoning"</code></p>
<p>The final dataframe should look like:</p>
<pre><code>id Was it Accurate? Reasoning
1 Yes This is a sample text
2 Yes This is a sample text
3 No This is a sample text
</code></pre>
<p>I tried splitting the text_col on "\n\nReasoning:" but didn't get the desired result.</p>
<p><code>df[['Was it Accurate?','Reasoning']] = df['text_col'].str.split("\n\nReasoning:")</code></p>
|
<python><python-3.x><pandas><dataframe>
|
2024-07-08 12:17:02
| 1
| 7,676
|
Shubham R
|
78,720,752
| 3,628,240
|
Getting intersection of subset of multiindex dataframe from Pandas
|
<p>I have a multi-index df, with a month and then facilityIDs and a TotalSpend value for each facility. I'm trying to aggregate the TotalSpend across all facilities for a quarter, but only for facilities that have data in all 3 months of the quarter AND all 3 months of the same quarter in the previous year.</p>
<p><a href="https://i.sstatic.net/Di3fCQ4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Di3fCQ4E.png" alt="enter image description here" /></a></p>
<p>In my example data, I tried getting a subset of April, May, and June from the df and then doing an inner join, but when I try that I get an error saying it's not a df, even though df.loc[[date]] does give me a df. I would basically like to check which facilityIDs show up in all 3 months of the quarter and only keep those values.</p>
<p>Desired output:</p>
<p>The desired output would be the sum of the spend in Q2 2024 across all facilities that have data in all 3 months of Q2 2024, and then the sum of the spend in Q2 2023 across all of those same facilities.
In this case it would be Facility 1 only, so a Q2 2024 sum of 450 and a Q2 2023 sum of 300.</p>
<p><a href="https://i.sstatic.net/6GUWEfBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6GUWEfBM.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code>import pandas as pd
import datetime
def open_file(path, quarter_number, months):
df_raw = pd.DataFrame({'Date':["2024-04-01","2024-05-01","2024-06-01", "2024-06-01","2024-05-01","2023-04-01","2023-05-01","2023-06-01","2024-05-01","2024-06-01","2023-05-01","2023-06-01", "2023-04-01","2024-05-01","2024-06-01"],
'FacilityID': [1,1,1,1,1,1,1,1,2,2,2,2,3,4,4],
'TotalSpend': [100,110,120,50,70,90,100,110,150,140,120,60,90,190,150]
}).set_index('Date')
df = df_raw.groupby(['Date', 'FacilityID'])['TotalSpend'].sum()
# print(df)
cur_dates = []
prev_dates = []
for month in months:
cur_date = datetime.date(2024, month, 1)
prev_date = datetime.date(cur_date.year - 1, month, 1)
cur_dates.append(cur_date.strftime('%Y-%m-%d'))
prev_dates.append(prev_date.strftime('%Y-%m-%d'))
#this is where i'm having issues
cur_data =df.loc[[cur_dates[1]]].join(df.loc[[cur_dates[1]]], on='FacilityID' ,join = "inner")
prev_data = df.loc[prev_dates[0]:prev_dates[-1]]
# print(cur_data)
# print(prev_data)
if __name__ == "__main__":
change = open_file("path",2 ,[4,5,6])
print(change)
</code></pre>
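A sketch of the membership check without joins: slice each year's quarter, group by facility, keep only the facilities whose slice covers all three months in both years, then sum. The quarter and years are hard-coded here to match the example data.

```python
import pandas as pd

# Same sample data as in the question
df_raw = pd.DataFrame({
    'Date': ["2024-04-01", "2024-05-01", "2024-06-01", "2024-06-01", "2024-05-01",
             "2023-04-01", "2023-05-01", "2023-06-01", "2024-05-01", "2024-06-01",
             "2023-05-01", "2023-06-01", "2023-04-01", "2024-05-01", "2024-06-01"],
    'FacilityID': [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 4, 4],
    'TotalSpend': [100, 110, 120, 50, 70, 90, 100, 110, 150, 140, 120, 60, 90, 190, 150],
})
df_raw['Date'] = pd.to_datetime(df_raw['Date'])

months = [4, 5, 6]  # Q2
cur = df_raw[(df_raw['Date'].dt.year == 2024) & df_raw['Date'].dt.month.isin(months)]
prev = df_raw[(df_raw['Date'].dt.year == 2023) & df_raw['Date'].dt.month.isin(months)]

def full_quarter_ids(frame, n_months=3):
    # facilities with data in every month of this quarter slice
    per_fac = frame.groupby('FacilityID')['Date'].agg(lambda s: s.dt.month.nunique())
    return set(per_fac[per_fac == n_months].index)

keep = full_quarter_ids(cur) & full_quarter_ids(prev)
cur_sum = cur.loc[cur['FacilityID'].isin(keep), 'TotalSpend'].sum()
prev_sum = prev.loc[prev['FacilityID'].isin(keep), 'TotalSpend'].sum()
```

On the sample data only Facility 1 qualifies, giving 450 for Q2 2024 and 300 for Q2 2023.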
|
<python><pandas><dataframe>
|
2024-07-08 12:05:29
| 1
| 927
|
user3628240
|
78,720,274
| 9,924,230
|
How to efficiently perform parallel file search using pathlib `glob` in Python for large directory structures?
|
<p>I'm working on a Python project where I need to search for specific files across a very large directory structure. Currently, I'm using the glob (or rglob) method from the pathlib module, but it is quite slow due to the extensive number of files and directories.</p>
<p>Here's a simplified version of my current code:</p>
<pre><code>from pathlib import Path
base_dir = Path("/path/to/large/directory")
files = list(base_dir.rglob("ind_stat.zpkl"))
</code></pre>
<p>This works, but it's too slow because it has to traverse through a massive number of directories and files. Ideally, I'd like to divide the directory traversal work across multiple threads or processes to improve performance. Are there optimizations or alternative libraries/methods that could help improve the performance?</p>
|
<python><parallel-processing><glob><pathlib>
|
2024-07-08 10:16:56
| 2
| 613
|
Roy
|
78,720,213
| 6,930,340
|
Putting polars API extensions in dedicated module - How to import from target module?
|
<p>I want to extend <code>polars</code> API as described in the <a href="https://docs.pola.rs/api/python/stable/reference/api.html" rel="nofollow noreferrer">docs</a>, like this:</p>
<pre><code>@pl.api.register_expr_namespace("greetings")
class Greetings:
def __init__(self, expr: pl.Expr):
self._expr = expr
def hello(self) -> pl.Expr:
return (pl.lit("Hello ") + self._expr).alias("hi there")
def goodbye(self) -> pl.Expr:
return (pl.lit("SayΕnara ") + self._expr).alias("bye")
</code></pre>
<p>If I were to put the actual registration in a dedicated module (<code>extensions.py</code>), how am I supposed to import the methods from the respective class from within another module?</p>
<p>Going with the dataframe example in the <a href="https://docs.pola.rs/api/python/stable/reference/api.html" rel="nofollow noreferrer">docs</a>, let's say I put the following code in a module called <code>target.py</code>.<br />
I need to make the <code>greetings</code> namespace available. How can I do it, i.e. how exactly should the import look?</p>
<pre><code>pl.DataFrame(data=["world", "world!", "world!!"]).select(
[
pl.all().greetings.hello(),
pl.all().greetings.goodbye(),
]
)
</code></pre>
|
<python><python-polars>
|
2024-07-08 10:04:56
| 1
| 5,167
|
Andi
|
78,719,729
| 16,869,946
|
Multivariate normal distribution using python scipy stats and integrate nquad
|
<p>Let<br>
<img src="https://i.sstatic.net/itTcpdLj.png" height="50" /><br>
be independent normal random variables with means<br>
<img src="https://i.sstatic.net/Wi4QpGxw.png" height="50" /><br>
and unit variances, i.e. <br>
<img src="https://i.sstatic.net/EEG24NZP.png" height="40" /><br>
I would like to compute the probability</p>
<p><a href="https://i.sstatic.net/I1ZRUWkS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I1ZRUWkS.png" alt="enter image description here" /></a></p>
<p>Is there any easy way to compute the above probability using <code>scipy.stats.multivariate_normal</code>? If not, how do we do it using <code>scipy.integrate</code>?</p>
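The exact event is only given in the images, so purely as a hedged illustration, assume an event of the form P(X1 ≤ b1, X2 ≤ b2, X3 ≤ b3) with example means and thresholds. With independent unit-variance components the answer factorizes into one-dimensional normal CDFs, and both <code>multivariate_normal.cdf</code> and <code>scipy.integrate.nquad</code> reproduce it:

```python
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal, norm

mu = np.array([0.5, -1.0, 2.0])    # example means (assumed)
upper = np.array([1.0, 0.0, 2.0])  # example thresholds (assumed)
rv = multivariate_normal(mean=mu, cov=np.eye(3))  # independence => identity cov

# Independence lets the probability factorize into 1-D CDFs
p_exact = np.prod(norm.cdf(upper - mu))

# Same probability via the multivariate CDF
p_cdf = rv.cdf(upper)

# And via direct integration of the joint pdf; -10 stands in for -inf,
# which is many standard deviations below each mean
p_quad, _ = integrate.nquad(
    lambda x1, x2, x3: rv.pdf([x1, x2, x3]),
    [(-10.0, b) for b in upper],
    opts={'epsabs': 1e-6, 'epsrel': 1e-6})
```

For events that do not factorize (e.g. involving orderings of the Xi), the <code>nquad</code> route still applies with the integration limits rewritten accordingly.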
|
<python><scipy><normal-distribution><scipy.stats><scipy-integrate>
|
2024-07-08 08:06:21
| 1
| 592
|
Ishigami
|
78,719,496
| 1,858,864
|
Dynamically generate Django .filter() query with various attrs and matching types
|
<p>I use Django 1.6 and Python 2.7, and I need to generate queryset filters dynamically.</p>
<p>The basic thing I need is to use different fields (field1, field2, field3) in the filter and different types of matching (exact, startswith, endswith, contains).</p>
<p>Here is an example of possible combinations:</p>
<pre><code>Mymodel.objects.filter(field1__startswith=somevalue)
# Or like this:
Mymodel.objects.filter(field2__endswith=somevalue)
# Or like this:
Mymodel.objects.filter(field3=somevalue)
# Or like this:
Mymodel.objects.filter(field3__contains=somevalue)
</code></pre>
<p>I've found <a href="https://stackoverflow.com/a/39198220/1858864">this</a> answer and it looks good but I believe there are some more "Django-like" ways to do it. I've also found <a href="https://stackoverflow.com/a/34739887/1858864">this one</a> with Q object.
But could I somehow import objects representing these types of matching and pass them to the queryset?</p>
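<p>For reference, one common pattern is to build the lookup key as a plain string and unpack it with <code>**</code>; the field and lookup names below are placeholders, and the sketch uses <code>str.format</code> so it stays Python 2.7 compatible:</p>

```python
# Build a Django-style filter kwarg dynamically: the lookup key is just a
# string "field__lookuptype", so it can be assembled at runtime and passed
# with ** unpacking, e.g. Mymodel.objects.filter(**kwargs).
def build_filter_kwargs(field, lookup, value):
    # lookup may be "", "startswith", "endswith", "contains", ...
    key = field if not lookup else "{0}__{1}".format(field, lookup)
    return {key: value}

kwargs = build_filter_kwargs("field1", "startswith", "abc")
# kwargs == {"field1__startswith": "abc"}
```

<p>In a view you would then call <code>Mymodel.objects.filter(**kwargs)</code>; <code>Q</code> objects can be combined the same way when OR logic is needed.</p>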
|
<python><django><django-orm>
|
2024-07-08 07:07:48
| 0
| 6,817
|
Paul
|
78,719,295
| 4,382,391
|
show scale legend of 2D histplot
|
<p>I want to add a color scale legend to a 2D seaborn dist plot that shows the frequency range of the color scale. An example is mocked up here in ms paint:</p>
<p><a href="https://i.sstatic.net/n9h7aoPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n9h7aoPN.png" alt="a 2D seaborn distplot with a color scale legend" /></a></p>
<p>I have looked around and seen many methods to make legends for nominal or ordinal data (e.g. map hue to different categories and then show legends in hue) but nothing for continuous color scales, and nothing applied to 2D distplot. How can this be done? It doesn't need to look exactly like my mockup of course, this is just to illustrate the problem. Ideally it would be a continuous gradient.</p>
|
<python><seaborn><colorbar><histplot>
|
2024-07-08 06:01:03
| 1
| 1,070
|
Null Salad
|
78,719,212
| 3,486,684
|
Reducing code/expression duplication when using Polars `with_columns`?
|
<p>Consider some Polars code like so:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.date_ranges(
pl.col("current_start"), pl.col("current_end"), "1mo", closed="left"
).alias("current_tpoints")
).drop("current_start", "current_end").with_columns(
pl.date_ranges(
pl.col("history_start"), pl.col("history_end"), "1mo", closed="left"
).alias("history_tpoints")
).drop(
"history_start", "history_end"
)
</code></pre>
<p>The key issue to note here is the repetitiveness of <code>history_*</code> and <code>current_*</code>. I could reduce duplication by doing this:</p>
<pre class="lang-py prettyprint-override"><code>for x in ["history", "current"]:
fstring = f"{x}" + "_{other}"
start = fstring.format(other="start")
end = fstring.format(other="end")
df = df.with_columns(
pl.date_ranges(
pl.col(start),
pl.col(end),
"1mo",
closed="left",
).alias(fstring.format(other="tpoints"))
).drop(start, end)
</code></pre>
<p>But are there any other ways to reduce duplication I ought to consider?</p>
|
<python><python-polars>
|
2024-07-08 05:26:34
| 1
| 4,654
|
bzm3r
|
78,719,155
| 11,608,962
|
PyTorch RuntimeError: No operator found for memory_efficient_attention_forward with torch.float16 inputs on CPU
|
<p>I am working with a PyTorch model (AutoModelForCausalLM) using the transformers library and encountering a RuntimeError related to tensor types and operator support. Hereβs a simplified version of my code:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import requests
from PIL import Image
from IPython.display import display
from transformers import AutoModelForCausalLM, LlamaTokenizer
# Load tokenizer and model
tokenizer = LlamaTokenizer.from_pretrained('lmsys/vicuna-7b-v1.5')
model = AutoModelForCausalLM.from_pretrained(
'THUDM/cogvlm-chat-hf',
torch_dtype=torch.float16, # Using torch.float16
low_cpu_mem_usage=True,
trust_remote_code=True
).eval()
def generate(query: str, img_url: str, max_length: int = 2048) -> str:
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
display(image)
# Generate token inputs
inputs = model.build_conversation_input_ids(tokenizer, query=query, history=[], images=[image], template_version='vqa')
# Convert tensors to appropriate types
input_ids = inputs['input_ids'].unsqueeze(0).to(torch.long)
token_type_ids = inputs['token_type_ids'].unsqueeze(0).to(torch.long)
attention_mask = inputs['attention_mask'].unsqueeze(0).to(torch.float16)
images = [[inputs['images'][0].to(torch.float16)]]
inputs = {
'input_ids': input_ids,
'token_type_ids': token_type_ids,
'attention_mask': attention_mask,
'images': images,
}
gen_kwargs = {"max_length": max_length, "do_sample": False}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, input_ids.shape[1]:]
return tokenizer.decode(outputs[0])
query = 'Describe this image in detail'
img_url = 'https://i.ibb.co/x1nH9vr/Slide1.jpg'
generate(query, img_url)
</code></pre>
<p>The above code throws the following error:</p>
<pre><code>NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 1226, 16, 112) (torch.float16)
key : shape=(1, 1226, 16, 112) (torch.float16)
value : shape=(1, 1226, 16, 112) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
`ck_decoderF` is not supported because:
device=cpu (supported: {'cuda'})
operator wasn't built - see `python -m xformers.info` for more info
`ckF` is not supported because:
device=cpu (supported: {'cuda'})
operator wasn't built - see `python -m xformers.info` for more info
</code></pre>
<p>I'm trying to use <code>torch.float16</code> tensors with my <strong>PyTorch</strong> model on CPU <code>(device=cpu)</code>. The model is loaded with <code>torch.float16</code> using <code>AutoModelForCausalLM</code> from the <code>transformers</code> library. However, I encounter the <code>NotImplementedError</code> stating that the <code>memory_efficient_attention_forward</code> operator isn't supported on CPU with <code>torch.float16</code>.</p>
<p>Is there a way to make <code>memory_efficient_attention_forward</code> work with <code>torch.float16</code> on CPU? Are there alternative approaches or configurations I should consider to resolve this issue?</p>
<blockquote>
<p>I am trying to run this on a MacBook PRO with Intel Core i7 processor.</p>
</blockquote>
|
<python><tensorflow><pytorch>
|
2024-07-08 04:58:13
| 1
| 1,427
|
Amit Pathak
|
78,719,135
| 3,486,684
|
How can I create a Polars struct when using list eval?
|
<p>I am trying to create a Polars DataFrame that includes a column of structs based on another DataFrame column. Here's the setup:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
[
pl.Series("start", ["2023-01-01"]).str.to_date(),
pl.Series("end", ["2024-01-01"]).str.to_date(),
]
)
</code></pre>
<pre><code>shape: (1, 2)
ββββββββββββββ¬βββββββββββββ
β start β end β
β --- β --- β
β date β date β
ββββββββββββββͺβββββββββββββ‘
β 2023-01-01 β 2024-01-01 β
ββββββββββββββ΄βββββββββββββ
</code></pre>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.date_ranges(pl.col("start"), pl.col("end"), "1mo", closed="left")
.alias("date_range")
)
</code></pre>
<pre><code>shape: (1, 3)
ββββββββββββββ¬βββββββββββββ¬ββββββββββββββββββββββββββββββββββ
β start β end β date_range β
β --- β --- β --- β
β date β date β list[date] β
ββββββββββββββͺβββββββββββββͺββββββββββββββββββββββββββββββββββ‘
β 2023-01-01 β 2024-01-01 β [2023-01-01, 2023-02-01, β¦ 202β¦ β
ββββββββββββββ΄βββββββββββββ΄ββββββββββββββββββββββββββββββββββ
</code></pre>
<p>Now, I want to make a struct out of the year/month parts:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.col("date_range")
.list.eval(
pl.struct(
{
"year": pl.element().dt.year(),
"month": pl.element().dt.month(),
}
)
)
.alias("years_months")
)
</code></pre>
<p>But this does not work.</p>
<pre class="lang-py prettyprint-override"><code># TypeError: Cannot pass a dictionary as a single positional argument.
</code></pre>
<p>My best idea is one I don't like because I have to repeatedly call <code>list.eval</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = (
df.with_columns(
pl.col("date_range").list.eval(pl.element().dt.year()).alias("year"),
pl.col("date_range").list.eval(pl.element().dt.month()).alias("month"),
)
.drop("start", "end", "date_range")
.explode("year", "month")
.select(pl.struct("year", "month"))
)
df
</code></pre>
<p>The other idea is to use <code>map_elements</code>, but I think that ought to be something of a last resort. What's the idiomatic way to eval into a struct?</p>
|
<python><dataframe><list><python-polars>
|
2024-07-08 04:44:58
| 1
| 4,654
|
bzm3r
|
78,719,078
| 3,486,684
|
How to type hint a `pl.date`?
|
<p>Suppose we create some dates:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
[
pl.Series("start", ["2023-01-01"], dtype=pl.Date).str.to_date(),
pl.Series("end", ["2024-01-01"], dtype=pl.Date).str.to_date(),
]
)
</code></pre>
<p>Now I can create a date range from these:</p>
<pre class="lang-py prettyprint-override"><code>dates = pl.date_range(df[0, "start"], df[0, "end"], "1mo", eager=True)
</code></pre>
<p>But I want to define a function which takes a couple of dates and spits out a range, as a wrapper around <code>pl.date_range</code>:</p>
<pre><code>def my_date_range(start: pl.Date, end: pl.Date) -> pl.Series:
return pl.date_range(start, end, "1mo", eager=True)
</code></pre>
<p>The above doesn't typecheck with <code>pyright</code>/Pylance, because:</p>
<pre><code>Argument of type "Date" cannot be assigned to parameter "start" of type "IntoExprColumn | date | datetime" in function "date_range"
Type "Date" is incompatible with type "IntoExprColumn | date | datetime"
"Date" is incompatible with "date"
"Date" is incompatible with "datetime"
"Date" is incompatible with "Expr"
"Date" is incompatible with "Series"
"Date" is incompatible with "str"PylancereportArgumentType
</code></pre>
<p>If I check out <code>type(df[0, "start"])</code>, I see:</p>
<pre><code>datetime.date
</code></pre>
<p>and <code>pl.Date</code> is no good because <code>isinstance(df[0, "start"], pl.Date) == False</code>.</p>
<p>I cannot figure out how to import <code>datetime.date</code> in order to use it as a type annotation (trying <code>import polars.datetime as dt</code> raises <code>No module named 'polars.datetime'</code>).</p>
<p>How can this be done? Or put differently: how should <code>my_date_range</code>'s date arguments be annotated?</p>
|
<python><python-typing><python-polars><pyright>
|
2024-07-08 04:11:35
| 1
| 4,654
|
bzm3r
|
78,718,996
| 8,535,456
|
Browser doesn't get response header from my inherited http.server.SimpleHTTPRequestHandler
|
<p>I am trying to build a HTTP server by inheriting <code>http.server.SimpleHTTPRequestHandler</code>.</p>
<p>It works fine, except that the browser doesn't seem to know my server is trying to
send HTML data, and keeps rendering the response HTML as plain text.</p>
<p>However, I have already set <code>self.send_header('Content-Type', 'text/html; charset=UTF-8')</code>
in my server.</p>
<p>On inspection, I found that my browser doesn't seem to receive any response headers from my server at all, though the response content
is received.</p>
<p>It is very weird. Why is that?</p>
<p>I want the browser to receive the response headers, so that the HTML response is
rendered as HTML instead of as plain text. What should I do?</p>
<p>Here is an example for test, by removing unrelated part from my server code:</p>
<pre><code>import http.server
import socketserver
class CustomHandler(http.server.SimpleHTTPRequestHandler):
def do_GET(self):
html_content = "<html><body><h1>Hello, World!</h1></body></html>"
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=UTF-8')
self.send_header('Content-Length', len(html_content))
self.wfile.write(html_content.encode('utf-8'))
def main(port_no):
with socketserver.TCPServer(("", port_no), CustomHandler) as httpd:
print("Serving at port ", port_no)
httpd.serve_forever()
if __name__ == '__main__':
main(8098)
</code></pre>
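<p>For contrast, a sketch of a handler variant that does deliver headers: <code>http.server</code> buffers the status line and headers until <code>end_headers()</code> is called, so without that call the body bytes are written before any header block is ever flushed.</p>

```python
import http.server
import socketserver

class HeaderedHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        body = "<html><body><h1>Hello, World!</h1></body></html>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=UTF-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()  # flushes the buffered status line and headers
        self.wfile.write(body)
```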
|
<python><html><http><server><simplehttprequesthandler>
|
2024-07-08 03:20:38
| 1
| 1,097
|
Limina102
|
78,718,891
| 2,604,247
|
Do Python coders have a bias towards list over tuple?
|
<h4>Basic Facts</h4>
<ul>
<li>Lists are mutable (supporting inserts, appending etc.), Tuples are not</li>
<li>Tuples are more memory efficient, and faster to iterate over</li>
</ul>
<p>So it would seem their use-cases are clear. Functionally speaking, lists offer a superset of operations, tuples are more performant at what they do.</p>
<h4>Observation</h4>
<p>Most arrays that my team creates in the course of a program are, in fact, perfectly fine as immutable. We iterate over them, apply map, reduce, and filter on them, maybe insert into a database from them, etc., all without inserting into, popping from, or appending to the array-like structure.</p>
<h4>Question</h4>
<p>Yet, a list seems to be not only the default (and only) choice among my developers, but seems even favoured by many library APIs to pass data around (like polars, tensorflow etc. which I use heavily).</p>
<p>And it's not as if using tuples requires some special skill, knowledge, or understanding of another data structure; the syntax needed to subscript or iterate is really the same.</p>
<p>What am I missing in the reasoning here?</p>
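<p>A quick empirical check of the memory claim (on CPython; exact byte counts vary by version, but the tuple header is smaller and tuples never over-allocate):</p>

```python
import sys

lst = list(range(1000))
tup = tuple(range(1000))

# Both hold the same 1000 pointers, but the tuple object itself carries
# less per-container overhead than the list object.
print(sys.getsizeof(lst), sys.getsizeof(tup))
```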
|
<python><list><performance><data-structures><iterable-unpacking>
|
2024-07-08 02:16:53
| 2
| 1,720
|
Della
|
78,718,762
| 13,118,291
|
Getting error when trying to install Pandas using pip
|
<p>I am trying to install Pandas.</p>
<p>I did:</p>
<pre><code>pip install pandas
</code></pre>
<p>And I tried other versions:</p>
<pre><code>pip install pandas==2.2.1
</code></pre>
<p>Always the same error:</p>
<pre><code>root@thinkpad:/# pip install pandas==2.2.1
Collecting pandas==2.2.1
Downloading pandas-2.2.1.tar.gz (4.4 MB)
ββββββββββββββββββββββββββββββββββββββββ 4.4/4.4 MB 12.6 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
Γ Preparing metadata (pyproject.toml) did not run successfully.
β exit code: 1
β°β> [204 lines of output]
</code></pre>
<p>I am on a lightweight Debian 12, using Python 3.13 (the latest beta release). I upgraded pip, and checked these as well:</p>
<pre><code>python3 -m pip install --upgrade pip setuptools wheel
sudo apt install build-essential libatlas-base-dev
</code></pre>
<p>The problem always persists; any help is welcome.</p>
|
<python><python-3.x><pandas><linux><pip>
|
2024-07-08 00:54:45
| 1
| 465
|
Elyes Lounissi
|
78,718,748
| 3,486,684
|
Writing a DataFrame which has a column with type `pl.List(SomePolarsEnum)` inside a `pl.Struct` causes panic "ListArray's child's DataType must match"
|
<p>A minimal not-working example:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import polars as pl
Alphas = pl.Enum(["hello"])
MiniStruct = pl.Struct({
"alpha": Alphas,
})
MiniStructs = pl.List(MiniStruct)
df = pl.DataFrame(
pl.Series("xs", [[] for _ in range(1)], dtype=MiniStructs)
)
path = Path("xs.arrow")
if path.exists():
path.unlink()
df.write_ipc(path)
</code></pre>
<p>Produces:</p>
<pre><code>thread '<unnamed>' panicked at /home/runner/work/polars/polars/crates/polars-arrow/src/array/list/mod.rs:82:61:
called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("ListArray's child's DataType must match. However, the expected DataType is Struct([Field { name: \"alpha\", data_type: Dictionary(UInt32, LargeUtf8, false), is_nullable: true, metadata: {\"POLARS.CATEGORICAL_TYPE\": \"ENUM\"} }]) while it got Struct([Field { name: \"alpha\", data_type: Dictionary(UInt32, LargeUtf8, false), is_nullable: true, metadata: {} }])."))
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</code></pre>
<pre><code>PanicException Traceback (most recent call last)
/tmp/ipykernel_264688/3257874384.py in ?()
13 path = Path("xs.arrow")
14 if path.exists():
15 path.unlink()
16
---> 17 df.write_ipc(path)
~/bumblebee/.venv/lib/python3.12/site-packages/polars/dataframe/frame.py in ?(self, file, compression, future)
3251 issue_unstable_warning(
3252 "The `future` parameter of `DataFrame.write_ipc` is considered unstable."
3253 )
3254
-> 3255 self._df.write_ipc(file, compression, future)
3256 return file if return_bytes else None # type: ignore[return-value]
PanicException: called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("ListArray's child's DataType must match. However, the expected DataType is Struct([Field { name: \"alpha\", data_type: Dictionary(UInt32, LargeUtf8, false), is_nullable: true, metadata: {\"POLARS.CATEGORICAL_TYPE\": \"ENUM\"} }]) while it got Struct([Field { name: \"alpha\", data_type: Dictionary(UInt32, LargeUtf8, false), is_nullable: true, metadata: {} }])."))
</code></pre>
|
<python><python-polars>
|
2024-07-08 00:39:26
| 0
| 4,654
|
bzm3r
|
78,718,725
| 3,628,240
|
Quarter over Quarter for monthly spend with Pandas
|
<p>I have some transaction data that, after grouping by date and FacilityID, looks like the below. I'm trying to calculate the quarter-over-quarter change for all of the transactions: the sum of the total spend of all facilities combined (i.e. the aggregate spend in all 3 months of the current quarter versus the 3 months of the prior-year quarter). So in this example, I would just want the sum of spend for Facility #1 for April-June 2024 over Facility #1's April-June 2023 sum of total spend to get the change. Facility 2 should be excluded because it doesn't have any spend in April of 2023 or 2024.</p>
<p>If there were other facilities that had data for April-June 2024 and April-June 2023 (e.g. Facility 7 and 9) then I would want the total spend for the quarter for Facility 1,7, and 9 combined, over the previous quarter. I just need the most recent quarter, which depending on the month, may only have 1-2 months of data, which is fine.</p>
<p><a href="https://i.sstatic.net/Wx8tueTw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wx8tueTw.png" alt="pytho" /></a></p>
<p>This is the code I've tried so far, but it includes Facility 2 in the calculation as well, when Facility 2 should be excluded since it doesn't have any data in April 2024 or April 2023.</p>
<pre><code>import pandas as pd
import datetime
def open_file(path, quarter_number, months):
df_raw = pd.DataFrame({'Date':["2024-04-01","2024-05-01","2024-06-01", "2024-06-01","2024-05-01","2023-04-01","2023-05-01","2023-06-01","2024-05-01","2024-06-01","2023-05-01","2023-06-01", "2023-04-01","2024-05-01","2024-06-01"],
'FacilityID': [1,1,1,1,1,1,1,1,2,2,2,2,3,4,4],
'TotalSpend': [100,110,120,50,70,90,100,110,150,140,120,60,90,190,150]
}).set_index('Date')
df = df_raw.groupby(['Date', 'FacilityID'])['TotalSpend'].sum()
print(df)
cur_dates = []
prev_dates = []
for month in months:
cur_date = datetime.date(2024, month, 1)
prev_date = datetime.date(cur_date.year - 1, month, 1)
cur_dates.append(cur_date.strftime('%Y-%m-%d'))
prev_dates.append(prev_date.strftime('%Y-%m-%d'))
cur_quarter_data = pd.concat(
[df.loc[date] if date in df.index.levels[0] else pd.Series(dtype='float64') for date in cur_dates])
prev_quarter_data = pd.concat(
[df.loc[date] if date in df.index.levels[0] else pd.Series(dtype='float64') for date in prev_dates])
common_facilities = cur_quarter_data.index.intersection(prev_quarter_data.index)
cur_quarter_vals = cur_quarter_data.loc[common_facilities]
prev_quarter_vals = prev_quarter_data.loc[common_facilities]
yoy_change = (cur_quarter_vals.sum() - prev_quarter_vals.sum()) / prev_quarter_vals.sum() * 100
return yoy_change
if __name__ == "__main__":
change = open_file("path",2 ,[4,5,6])
print(change)
</code></pre>
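<p>For comparison, here is a hedged sketch (the sample numbers and the completeness rule are assumptions, not the asker's exact data) of one way to keep only facilities that have spend in every month of the quarter in both years before summing:</p>

```python
import pandas as pd

# Assumed rule: a facility counts only if it has spend in every month of
# the quarter in BOTH years; sample data is illustrative.
df = pd.DataFrame({
    "Date": pd.to_datetime([
        "2024-04-01", "2024-05-01", "2024-06-01",
        "2023-04-01", "2023-05-01", "2023-06-01",
        "2024-05-01", "2024-06-01", "2023-05-01", "2023-06-01",
    ]),
    "FacilityID": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2],
    "TotalSpend": [100, 110, 120, 90, 100, 110, 150, 140, 120, 60],
})

months = [4, 5, 6]
q = df[df["Date"].dt.month.isin(months)].copy()
q["Year"] = q["Date"].dt.year

# Distinct months of data per facility per year.
coverage = q.groupby(["FacilityID", "Year"])["Date"].nunique().unstack(fill_value=0)

# Facilities with a full quarter of data in both years.
full = coverage.index[(coverage[2023] == len(months)) & (coverage[2024] == len(months))]

totals = q[q["FacilityID"].isin(full)].groupby("Year")["TotalSpend"].sum()
change = (totals[2024] - totals[2023]) / totals[2023] * 100
```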
|
<python><pandas>
|
2024-07-08 00:19:40
| 1
| 927
|
user3628240
|
78,718,715
| 1,575,548
|
Response 403 - Is it me or them?
|
<p>I am using <a href="https://github.com/lvxhnat/pyetfdb-scraper" rel="nofollow noreferrer"><code>pyetfdb-scraper</code></a> to scrape info about ETFs. It worked last week, but suddenly I am consistently getting <code>403</code> errors.</p>
<pre><code>from pyetfdb_scraper.etf import ETF
test = ETF('VTI')
</code></pre>
<p>How can I tell whether the server side is blocking me, versus something being wrong with the code?</p>
<p>Note that I am not doing massive scraping - I'm looking up half a dozen ETFs with a 5-second sleep in between.</p>
|
<python><python-requests>
|
2024-07-08 00:13:38
| 1
| 982
|
BΓ©atrice M.
|
78,718,676
| 395,857
|
What is the difference, if any, between model.half() and model.to(dtype=torch.float16) in huggingface-transformers?
|
<p>Example:</p>
<pre><code># pip install transformers
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
# Load model
model_path = 'huawei-noah/TinyBERT_General_4L_312D'
model = AutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Convert the model to FP16
model.half()
</code></pre>
<p>vs.</p>
<pre><code>model.to(dtype=torch.float16)
</code></pre>
<p>What is the difference, if any, between model.half() and model.to(dtype=torch.float16) in huggingface-transformers?</p>
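<p>One way to probe this empirically (a sketch; the observation below is for ordinary float32 parameters, where both calls perform the same cast of floating-point parameters and buffers):</p>

```python
import torch
import torch.nn as nn

m1 = nn.Linear(4, 4)
m2 = nn.Linear(4, 4)

m1.half()                   # casts floating-point params/buffers to float16
m2.to(dtype=torch.float16)  # same cast, via the generic dtype-conversion path

assert all(p.dtype == torch.float16 for p in m1.parameters())
assert all(p.dtype == torch.float16 for p in m2.parameters())
```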
|
<python><huggingface-transformers><huggingface><quantization><half-precision-float>
|
2024-07-07 23:33:43
| 1
| 84,585
|
Franck Dernoncourt
|
78,718,668
| 2,146,894
|
How to interpret and adjust the colorbar when plotting an RGB image with imshow?
|
<p>I have a 2x2x3 numpy array like this</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
red = np.array([[0, 50], [100, 200]])
green = np.array([[0, 185], [100, 255]])
blue = np.array([[0, 129], [0, 255]])
combined = np.stack((red, green, blue), -1)
# [[[ 0, 0, 0],
# [ 50, 185, 129]],
# [[100, 100, 0],
# [200, 255, 255]]]
</code></pre>
<p>I can plot the array with color just fine.</p>
<pre class="lang-py prettyprint-override"><code>plt.figure()
im = plt.imshow(
X=combined,
vmin=0,
vmax=255,
interpolation='none',
aspect='equal',
)
plt.colorbar()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/UDlumNHE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDlumNHE.png" alt="enter image description here" /></a></p>
<p>I can also plot just the red channel, like this</p>
<pre class="lang-py prettyprint-override"><code>reds = combined.copy()
reds[:, :, 1:] = 0
plt.figure()
im = plt.imshow(
X=reds,
vmin=0,
vmax=255,
interpolation='none',
aspect='equal',
)
plt.colorbar()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/ZLRVY5nm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLRVY5nm.png" alt="enter image description here" /></a></p>
<p>But I'm struggling with two questions..</p>
<ol>
<li>Is the colorbar in the first plot even valid? I'm not sure how to interpret it.</li>
<li>How do I change the colorbar in the second plot to go from 0 (black) to 255 (red), as it should? Obviously I want the colorbar values and scale to match the actual plot behavior.</li>
</ol>
|
<python><matplotlib>
|
2024-07-07 23:25:29
| 1
| 21,881
|
Ben
|
78,718,645
| 4,884,235
|
Python in Excel: How to access the records of a JSON file imported using Excel Query
|
<p>I have a deeply nested JSON file that I have imported into Excel (Microsoft 365) using Power Query, via
Data &gt; Get Data &gt; From File &gt; From JSON.</p>
<p>It loads into a [Record].</p>
<p>I would like to access and process this loaded object using Python in Excel.</p>
<p>I have a manual workaround of loading the JSON as a single string into a single cell and then processing. This works, but using queries to first load the file would give more control and circumvent 32K character limits for single cells.</p>
|
<python><json><excel><powerquery>
|
2024-07-07 23:07:58
| 0
| 656
|
Chris Seeling
|
78,718,554
| 9,212,050
|
Training a Custom Feature Extractor in Stable Baselines3 Starting from Pre-trained Weights?
|
<p>I am using the following custom feature extractor for my StableBaselines3 model:</p>
<pre><code>import torch.nn as nn
from stable_baselines3 import PPO
class Encoder(nn.Module):
def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim=2):
super(Encoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(input_dim, embedding_dim),
nn.ReLU()
)
self.regressor = nn.Sequential(
nn.Linear(embedding_dim, hidden_dim),
nn.ReLU(),
)
def forward(self, x):
x = self.encoder(x)
x = self.regressor(x)
return x
model = Encoder(input_dim, embedding_dim, hidden_dim)
model.load_state_dict(torch.load('trained_model.pth'))
# Freeze all layers
for param in model.parameters():
param.requires_grad = False
class CustomFeatureExtractor(BaseFeaturesExtractor):
def __init__(self, observation_space, features_dim):
super(CustomFeatureExtractor, self).__init__(observation_space, features_dim)
self.model = model # Use the pre-trained model as the feature extractor
self._features_dim = features_dim
def forward(self, observations):
features = self.model(observations)
return features
policy_kwargs = {
"features_extractor_class": CustomFeatureExtractor,
"features_extractor_kwargs": {"features_dim": 64}
}
model = PPO("MlpPolicy", env=envs, policy_kwargs=policy_kwargs)
</code></pre>
<p>The model has trained well so far, with no issues and good results. Now I want to unfreeze the weights and train the feature extractor as well, starting from the initial pre-trained weights. How can I do that with such a custom feature extractor, where the pre-trained model is wrapped inside another class? My feature extractor is not the same as the one defined in the <a href="https://stable-baselines3.readthedocs.io/en/master/guide/custom_policy.html" rel="nofollow noreferrer">documentation</a>, so I am not sure if it will be trained. Or will it start training if I unfreeze the layers?</p>
|
<python><pytorch><reinforcement-learning><stable-baselines><stablebaseline3>
|
2024-07-07 22:11:00
| 1
| 1,404
|
Sayyor Y
|
78,718,450
| 123,891
|
authlib + mailchimp oauth2: invalid_client: client_id parameter missing
|
<p>I'm building out an OAuth2 Factory using authlib and FastAPI because my upstream application needs to authenticate with multiple providers.</p>
<p>The authentication factory works well with all providers except for Mailchimp.</p>
<p>I don't want to use the mailchimp_marketing client library (for a few different reasons).</p>
<p>When I initiate the OAuth flow in the browser, I ultimately get the following error in my callback function on <code>token = await oauth_client.authorize_access_token(request)</code> (exchanging the authorization code for an access token). And I've confirmed that <code>settings.MAILCHIMP_CLIENT_ID</code> is actually set.</p>
<pre><code>"GET /oauth/mailchimp/callback?state=ABC123XYZ&code=ABC123XYZ HTTP/1.1" 500 Internal
Server Error
...
2024-07-07 16:48:30 File "/app/app/api/oauth.py", line 25, in callback
2024-07-07 16:48:30 token = await oauth_client.authorize_access_token(request)
2024-07-07 16:48:30 File "/usr/local/lib/python3.9/site-packages/authlib/integrations/starlette_client/apps.py", line 81, in authorize_access_token
2024-07-07 16:48:30 token = await self.fetch_access_token(**params, **kwargs)
2024-07-07 16:48:30 File "/usr/local/lib/python3.9/site-packages/authlib/integrations/base_client/async_app.py", line 125, in fetch_access_token
2024-07-07 16:48:30 token = await client.fetch_token(token_endpoint, **params)
2024-07-07 16:48:30 File "/usr/local/lib/python3.9/site-packages/authlib/integrations/httpx_client/oauth2_client.py", line 138, in _fetch_token
2024-07-07 16:48:30 return self.parse_response_token(resp)
2024-07-07 16:48:30 File "/usr/local/lib/python3.9/site-packages/authlib/oauth2/client.py", line 344, in parse_response_token
2024-07-07 16:48:30 raise self.oauth_error_class(
2024-07-07 16:48:30 authlib.integrations.base_client.errors.OAuthError: invalid_client: client_id parameter missing
</code></pre>
<p>Here's my relevant code:</p>
<p><strong>provider_factory.py</strong></p>
<pre><code># app/services/provider_factory.py
from authlib.integrations.starlette_client import OAuth
from app.core.config import settings
from sqlalchemy.orm import Session
class ProviderFactory:
oauth = OAuth()
oauth.register(
name='mailchimp',
client_id=settings.MAILCHIMP_CLIENT_ID,
client_secret=settings.MAILCHIMP_CLIENT_SECRET,
authorize_url='https://login.mailchimp.com/oauth2/authorize',
access_token_url='https://login.mailchimp.com/oauth2/token',
api_base_url='https://login.mailchimp.com/oauth2/',
)
oauth.register(
name='another_provider',
client_id=settings.ANOTHER_CLIENT_ID,
client_secret=settings.ANOTHER_CLIENT_SECRET,
authorize_url='https://auth.another.com/oauth2/authorize',
access_token_url='https://auth.another.com/oauth2/token',
client_kwargs={'scope': '...'},
)
@staticmethod
def get_provider_service(provider_name: str, db: Session):
from app.models.provider import Provider
provider = db.query(Provider).filter_by(name=provider_name).first()
if not provider:
raise ValueError(f"No provider found with name: {provider_name}")
if provider.name == 'mailchimp':
from app.services.mailchimp import MailchimpProvider
return MailchimpProvider(db, provider)
elif provider.name == 'another':
from app.services.another import AnotherProvider
return AnotherProvider(db, provider)
else:
raise ValueError(f"Unsupported provider: {provider.name}")
@staticmethod
def get_oauth_client(provider_name: str):
return getattr(ProviderFactory.oauth, provider_name)
@staticmethod
def get_token_url(provider_name: str) -> str:
if provider_name == 'mailchimp':
return 'https://login.mailchimp.com/oauth2/token'
elif provider_name == 'another':
return 'https://auth.another.com/oauth2/token'
else:
raise ValueError("Unsupported provider")
@staticmethod
def create_provider(db: Session, provider_name: str):
return ProviderFactory.get_provider_service(provider_name, db)
</code></pre>
<p><strong>oauth.py</strong></p>
<pre><code># app/api/oauth.py
from fastapi import APIRouter, Depends, HTTPException, Request
from sqlalchemy.orm import Session
from app.db.session import get_db
from app.models.provider import Provider
from app.models.user import User
from app.models.connection import Connection
from app.core.logger import logger
from app.services.provider_factory import ProviderFactory
router = APIRouter()
@router.get("/{provider}/login")
async def login(provider: str, request: Request):
redirect_uri = request.url_for('callback', provider=provider)
return await ProviderFactory.get_oauth_client(provider).authorize_redirect(request, redirect_uri)
@router.get("/{provider}/callback")
async def callback(provider: str, request: Request, db: Session = Depends(get_db)):
logger.info(f"Provider: {provider}")
logger.info(f"Request query params: {request.query_params}")
oauth_client = ProviderFactory.get_oauth_client(provider)
token = await oauth_client.authorize_access_token(request)
if not token:
raise HTTPException(status_code=400, detail="Failed to get access token")
provider_record = db.query(Provider).filter_by(name=provider).first()
if not provider_record:
raise HTTPException(status_code=400, detail="Provider not found")
provider_service = ProviderFactory.get_provider_service(provider, db)
# Get user information from the provider
user_info = await provider_service.get_account_info(token)
logger.info(f"User info: {user_info}")
...
return {"status": "success", "provider": provider}
</code></pre>
<p><strong>mailchimp.py</strong></p>
<pre><code># app/services/mailchimp.py
import aiohttp
import requests
from app.services.base_provider import BaseProvider
from app.models.list import List
from app.models.list_properties import ListProperties
class MailchimpProvider(BaseProvider):
tokens_expire = False
async def get_account_info(self, token: dict):
async with aiohttp.ClientSession() as session:
headers = {'Authorization': f'Bearer {token["access_token"]}'}
async with session.get('https://login.mailchimp.com/oauth2/metadata', headers=headers) as response:
return await response.json()
async def get_lists(self):
headers = {
'Authorization': f'Bearer {self.provider.access_token}'
}
response = requests.get('https://api.mailchimp.com/3.0/lists', headers=headers)
lists_data = response.json().get('lists', [])
for list_data in lists_data:
...
...
</code></pre>
<p>Any ideas? Hoping that I'm just missing something small.</p>
|
<python><oauth-2.0><fastapi><mailchimp><mailchimp-api-v3>
|
2024-07-07 21:12:30
| 1
| 20,303
|
littleK
|
78,718,381
| 1,818,935
|
Create a list accumulating the results of calling a function repeatedly on an input value
|
<p>Is there a library function that creates recursive lists in the following sense,</p>
<pre><code>recursive_list(f, x0, n) = [x0, f(x0), f(f(x0)), f(f(f(x0))), ...]
</code></pre>
<p>with <code>n</code> elements in the returned list?</p>
<p>If not, how can this be written?</p>
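<p>For reference, a minimal sketch of one way to write this with the standard library (the name <code>recursive_list</code> follows the question; <code>itertools.accumulate</code> performs the repeated application):</p>

```python
from itertools import accumulate, repeat

def recursive_list(f, x0, n):
    """Return [x0, f(x0), f(f(x0)), ...] with n elements."""
    # accumulate emits the first item unchanged, then folds f over the rest,
    # so x0 appears first and f is applied n - 1 times
    return list(accumulate(repeat(x0, n), lambda acc, _: f(acc)))

print(recursive_list(lambda x: 2 * x, 1, 5))  # [1, 2, 4, 8, 16]
```

<p>A plain loop appending <code>f</code> of the last element works just as well; the <code>accumulate</code> form simply avoids mutating a list by hand.</p>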
|
<python><list><recursion><higher-order-functions>
|
2024-07-07 20:39:32
| 4
| 6,053
|
Evan Aad
|
78,718,359
| 2,218,321
|
Python: Multiprocessing took longer than sequential, why?
|
<p>I have this code, it generates 2,000,000 points uniformly distributed in a bounding box and does some calculations to partition the points based on some criteria.</p>
<pre><code>import numpy as np
from draw import draw
import time
X = 0
Y = 1
N = 2000000
max_x = -100000
max_y = -100000
min_x = 100000
min_y = 100000
points = np.random.uniform(-10, 10, (N, 2))
start = time.time()
max_x_index = np.argmax(points[:, X])
max_y_index = np.argmax(points[:, Y])
min_x_index = np.argmin(points[:, X])
min_y_index = np.argmin(points[:, Y])
p_right = points[max_x_index]
p_top = points[max_y_index]
p_left = points[min_x_index]
p_bottom = points[min_y_index]
top_right = points[
points[:, X] > ((points[:, Y] - p_top[Y]) / (p_right[Y] - p_top[Y])) * (p_right[X] - p_top[X]) + p_top[X]]
top_left = points[
points[:, X] < ((points[:, Y] - p_top[Y]) / (p_left[Y] - p_top[Y])) * (p_left[X] - p_top[X]) + p_top[X]]
bottom_right = points[
points[:, X] > ((points[:, Y] - p_bottom[Y]) / (p_right[Y] - p_bottom[Y])) * (p_right[X] - p_bottom[X]) + p_bottom[
X]]
bottom_left = points[
points[:, X] < ((points[:, Y] - p_bottom[Y]) / (p_left[Y] - p_bottom[Y])) * (p_left[X] - p_bottom[X]) + p_bottom[X]]
end = time.time()
print(end - start)
</code></pre>
<p>The output is usually 0.09, which is in seconds. The most time-consuming section is the last four computations to get top_right, top_left, bottom_right, and bottom_left. I rewrite the code as follows</p>
<pre><code>import numpy as np
from draw import draw
import time
import multiprocessing
N = 2000000
X = 0
Y = 1
points = np.random.uniform(-10, 10, (N, 2))
max_x = -100000
max_y = -100000
min_x = 100000
min_y = 100000
manager = multiprocessing.Manager()
top_right = manager.list()
top_left = manager.list()
bottom_right = manager.list()
bottom_left = manager.list()
def set_top_right():
global X, Y, points, p_top, p_right, top_right
top_right.extend(points[
points[:, X] > ((points[:, Y] - p_top[Y]) / (p_right[Y] - p_top[Y])) * (p_right[X] - p_top[X]) + p_top[X]])
def set_top_left():
global X, Y, points, p_top, p_left, top_left
top_left.extend(points[
points[:, X] < ((points[:, Y] - p_top[Y]) / (p_left[Y] - p_top[Y])) * (p_left[X] - p_top[X]) + p_top[X]])
def set_bottom_right():
global X, Y, points, p_bottom, p_right, bottom_right
bottom_right.extend(points[
points[:, X] > ((points[:, Y] - p_bottom[Y]) / (p_right[Y] - p_bottom[Y])) * (p_right[X] - p_bottom[X]) +
p_bottom[X]])
def set_bottom_left():
global X, Y, points, p_bottom, p_left, bottom_left
bottom_left.extend(points[
points[:, X] < ((points[:, Y] - p_bottom[Y]) / (p_left[Y] - p_bottom[Y])) * (p_left[X] - p_bottom[X]) +
p_bottom[X]])
start = time.time()
max_x_index = np.argmax(points[:, X])
max_y_index = np.argmax(points[:, Y])
min_x_index = np.argmin(points[:, X])
min_y_index = np.argmin(points[:, Y])
p_right = points[max_x_index]
p_top = points[max_y_index]
p_left = points[min_x_index]
p_bottom = points[min_y_index]
p1 = multiprocessing.Process(target=set_top_right)
p2 = multiprocessing.Process(target=set_top_left)
p3 = multiprocessing.Process(target=set_bottom_right)
p4 = multiprocessing.Process(target=set_bottom_left)
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
end = time.time()
print(end - start)
</code></pre>
<p>and surprisingly it became worse, about 0.15 seconds. I'm almost new to Python; however, I believe both approaches are single-threaded per process and there are no I/O operations. My laptop CPU is an 11th-generation <code>Core i5</code>, and I expected each core to take one of the processes and make it faster. Then why is this slower?</p>
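<p>A likely culprit is fixed multiprocessing overhead: each <code>Process</code> must be started and joined, and every row appended to a <code>Manager().list()</code> is pickled and pushed through a proxy, which can easily exceed the 0.09 s the NumPy version needs. A hedged sketch to measure just the start/join cost of do-nothing processes on a given machine:</p>

```python
import multiprocessing
import time

def noop():
    pass

def time_processes(n=4):
    """Measure the fixed cost of starting and joining n do-nothing processes."""
    start = time.perf_counter()
    procs = [multiprocessing.Process(target=noop) for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"start/join overhead for 4 processes: {time_processes():.3f}s")
```

<p>If this alone is a sizeable fraction of the observed 0.06 s difference, parallelism cannot pay off at this task size; the <code>Manager</code> proxy serialization on top of it often dominates.</p>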
|
<python><numpy><multiprocessing>
|
2024-07-07 20:30:09
| 1
| 2,189
|
M a m a D
|
78,718,332
| 2,203,144
|
SSL: CERTIFICATE_VERIFY_FAILED certificate verify failed (_ssl.c:727) or _ssl.c:1000
|
<h2>The problem</h2>
<p>When I run <code>pip install certifi</code> (or python -m pip install certifi) (pip2), I get the error</p>
<pre><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)
</code></pre>
<p>or if I run <code>pip3 install certifi</code>, I get the error</p>
<pre><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:1000)
</code></pre>
<p>As a result I can't pip install anything, not even the <a href="https://pypi.org/project/certifi/" rel="nofollow noreferrer">certifi</a> package which some people have said can resolve this very type of error.</p>
<h2>Information about my setup</h2>
<ul>
<li>I am running MacOS Ventura 13.4.</li>
<li>It has both a Python 2.7 and a Python 3.12 installation on it, as well as a brew install and a conda install, as I'm a Python developer and experiment with many things.</li>
<li>There are various paths for ssl certs on my machine. I am aware of:
<ul>
<li><code>/usr/local/Cellar/ca-certificates/2024-07-02/share/ca-certificates/cacert.pem</code></li>
<li><code>~/ca_certs</code> -- symbolic link to --> <code>/usr/local/share/ca-certificates</code></li>
<li><code>/usr/local/share/ca-certificates</code> --> symbolic link to --> <code>../Cellar/ca-certificates/2024-07-02/share/ca-certificates</code></li>
<li><code>/etc/ssl/cert.pem</code></li>
<li><code>/private/etc/ssl/cert.pem</code></li>
</ul>
</li>
<li>I believe my Python installation looks for certs here:</li>
</ul>
<pre><code>$ python -c 'import ssl; print(ssl.get_default_verify_paths().openssl_cafile)'
/Library/Frameworks/Python.framework/Versions/2.7/etc/openssl/cert.pem
</code></pre>
<p>(which I've symlinked to my cert file also) and similar for Python 3.</p>
<h2>What I've tried</h2>
<p>I've tried setting up symlinks so that all of these locations are pointing to the same cert file in the end.</p>
<p>I've tried following the solution detailed <a href="https://stackoverflow.com/a/52961564/2203144">here</a> which seemed very promising. I downloaded the latest Firefox certs and discovered they were the same as the ones I already had. Then I created the self-signed certificate as instructed, and added its contents to my <code>cert.pem</code> file.</p>
<p>I've tried running the <code>Install Certificates.command</code> file to install the certificates, but it fails with the same error because the script that it runs uses pip.</p>
<p>All to no avail. I just keep getting the same errors. And I really want to know why.</p>
<p>Thank you in advance for your help. In the meantime I'll be trying to read more about the specific numbers <code>727</code> and <code>1000</code> to see if there is a more specific error message.</p>
<h2>Updates</h2>
<p>UPDATE 1: I tried using the pip <code>--cert</code> option as described <a href="https://stackoverflow.com/a/26062583/2203144">here</a> with each of the cert files I've described above. None of them work.</p>
<p>UPDATE 2: I tried installing <code>certifi</code> from a wheel. With pip3 I hit the same error. With pip2 I found an older version of certifi (2020.12.5) and it installed successfully. Subsequent pip installs using pip2 still failed with the same error.</p>
<p>UPDATE 3: I tried using the <code>--trusted-host</code> option with the three URLs suggested <a href="https://stackoverflow.com/a/29751768/2203144">here</a>. This seems to be working, and I can install anything with pip2 or pip3. I have not tried the permanent solution yet. Is this safe and should I continue with this solution?</p>
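<p>For what it's worth, if the <code>--trusted-host</code> workaround is acceptable (note that it bypasses certificate verification for those hosts, so it trades away the protection TLS normally gives you), a sketch of making it permanent via pip's config file (the path may be <code>~/.config/pip/pip.conf</code> or <code>~/.pip/pip.conf</code> depending on the setup):</p>

```ini
# pip.conf -- assumes the same three hosts as the linked answer
[global]
trusted-host = pypi.org
               files.pythonhosted.org
               pypi.python.org
```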
|
<python><ssl><ssl-certificate>
|
2024-07-07 20:17:23
| 0
| 3,988
|
mareoraft
|
78,718,204
| 14,336,726
|
How to set weights with pyDecision
|
<p>I'm trying to figure out how to set weights in a MAUT exercise, in which I follow this <a href="https://colab.research.google.com/drive/1qm3ARgQm68GUK2irGiCB-B49vnVHazB7?usp=sharing#scrollTo=z9tbtRnUKbKM" rel="nofollow noreferrer">pyDecision example</a>. In my evaluation matrix I have 68 measurement criteria, and they belong to <em>7 different main criteria</em>. It is these 7 main criteria that I want to set the weights for, and absolutely not every one of the 68 measurement criteria. You can assume that in the dataset</p>
<p>the first 5 values belong to the first main criteria;</p>
<p>the next 4 to the second main criteria;</p>
<p>the next 14 to the third main criteria;</p>
<p>the next 11 to the fourth main criteria;</p>
<p>the next 20 to the fifth main criteria;</p>
<p>the next 11 to the sixth main criteria;</p>
<p>and the last 3 the seventh main criteria</p>
<p>How could this be done?</p>
<p>Here's my code</p>
<pre><code># Required Libraries
import pyDecision
import numpy as np
from pyDecision.algorithm import maut_method
weights = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] #This is the tricky part. It should sum up to 1
#Here I have 68 criterion types, one for every measurement criteria
criterion_type = ['min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'min', 'max', 'max', 'max', 'max', 'max', 'max', 'max', 'max', 'max', 'max', 'min', 'max', 'min']
# Load Utility Functions: 'exp'; 'ln'; 'log'; 'quad' or 'step'
# Possibly the amount of utility functions defined here should be 68 as well?
utility_functions = ['exp', 'exp', 'exp', 'exp', 'exp', 'exp', 'exp', 'exp', 'exp', 'exp']
# In this dataset, every value is a measurement criteria, and every row belongs to one of the five decision alternatives
dataset = np.array([
[1, 1, 1, 1, 1, 2, 2, 2, 3, 59.4, 4.13, 4, 4, 2, 3, 4, 1, 1, 1, 1, 1, 1, 1, 1550, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.43, 187, 1.87e-05, 0.0698, 0.149, 1, 0.0398, 1, 1, 1, 1, 1, 1, 315, 6030, 1, 2910, 0.00134, 1, 183, 27.2, 30.6, 3, 48, 3, 23, 14, 3.3, 1, 3.65, 0.025, 2, 0.0], #A1
[1, 1, 1, 1, 1, 2, 2, 2, 3, 57.5, 5.04, 3, 3, 1, 3, 3, 1, 1, 1, 1, 1, 1, 1, 198, 1, 1, 1, 1, 1, 1, 1, 1, 1, 11, 357, 4.13e-05, 0.551, 1.47, 1, 0.12, 1, 1, 1, 1, 1, 1, 768, 8760, 1, 5770, 0.0214, 1, 218, 28, 32, 3.4, 22.1, 3, 27, 17, 13, 5, 8.936, 0.304, 2, 0.0], #A2
[1, 1, 1, 1, 1, 2, 2, 2, 3, 57.5, 5.04, 3, 3, 1, 3, 3, 1, 1, 1, 1, 1, 1, 1, 198, 1, 1, 1, 1, 1, 1, 1, 1, 1, 11, 357, 4.13e-05, 0.551, 1.47, 1, 0.12, 1, 1, 1, 1, 1, 1, 768, 8760, 1, 5770, 0.0214, 1, 218, 20, 30, 6.2, 26.1, 3, 27, 17, 13, 5, 8.936, 0.304, 2, 0.0], #A3
[1, 1, 1, 1, 1, 2, 2, 2, 3, 54.9, 2.58, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 8.7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.53, 216, 1.78e-05, 0.0706, 0.199, 1, 0.0364, 1, 1, 1, 1, 1, 1, 312, 5830, 1, 3400, 0.00133, 1, 227, 15, 19, 5.4, 20.8, 2, 21.5, 18.5, 8.6, 2.1, 1.125, 0.41, 2, 0.0], #A4
[1, 1, 1, 1, 1, 2, 2, 2, 3, 54.9, 2.58, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 8.7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.53, 216, 1.78e-05, 0.0706, 0.199, 1, 0.0364, 1, 1, 1, 1, 1, 1, 312, 5830, 1, 3400, 0.00133, 1, 227, 16, 22, 9.1, 41.2, 2, 21.5, 18.5, 8.6, 2.1, 1.125, 0.41, 2, 0.0] #A5
])
# Call MAUT Function
step_size = 1  # must be defined before the call; used by the 'step' utility function
rank = maut_method(dataset, weights, criterion_type, utility_functions, step_size, graph = True)
</code></pre>
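<p>One hedged way to bridge the gap between 7 main-criterion weights and the 68 per-criterion weights that <code>maut_method</code> expects is to assign a weight to each main criterion and split it evenly across that group's measurement criteria, so the expanded list still sums to 1 (the group sizes come from the question; the even intra-group split and the example weights are assumptions — any split that preserves the group totals would work too):</p>

```python
# Hypothetical main-criterion weights (must sum to 1) -- replace with your own
main_weights = [0.20, 0.10, 0.15, 0.15, 0.20, 0.15, 0.05]
group_sizes = [5, 4, 14, 11, 20, 11, 3]  # from the question; totals 68

# Expand: each measurement criterion inherits an equal share of its group's weight
weights = [w / size
           for w, size in zip(main_weights, group_sizes)
           for _ in range(size)]

print(len(weights))  # 68
```

<p>The resulting <code>weights</code> list can then be passed to <code>maut_method</code> unchanged, since each group's members jointly carry exactly the main criterion's weight.</p>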
|
<python><mcdm>
|
2024-07-07 19:12:51
| 1
| 480
|
Espejito
|
78,717,770
| 10,445,333
|
Pandas read XML file with designated data type
|
<p>My code:</p>
<pre><code>df = pd.read_xml(
path_or_buffer=PATH,
xpath="//Data",
compression="gzip"
)
</code></pre>
<p>I'm using the Pandas <code>read_xml()</code> function to read <code>xml.gz</code> data with Pandas version <code>1.3.2</code>. When I try to read the data, Pandas reads it incorrectly.</p>
<p>The data looks like as below. Both <code>colA</code> and <code>colB</code> should be a string.</p>
<p>1st data file:</p>
<pre><code><Data>
<colA>abc</colA>
<colB>168E3</colB>
</Data>
<Data>
<colA>def</colA>
</Data>
</code></pre>
<p>2nd data file:</p>
<pre><code><Data>
<colA>ghi</colA>
<colB>23456</colB>
</Data>
<Data>
<colA>jkl</colA>
</Data>
</code></pre>
<p>When I use <code>read_xml()</code> function, it looks like below:</p>
<p>1st dataframe:</p>
<pre><code>colA: abc, def
colB: 168000.0, None
</code></pre>
<p>2nd dataframe:</p>
<pre><code>colA: ghi, jkl
colB: 23456.0, None
</code></pre>
<p>I want to read the data in <code>string</code> format but there is no <code>dtype</code> argument in pandas <code>1.3.2</code>. I want to know:</p>
<ol>
<li>How can I read the data with designated data type?</li>
<li>When there is missing data in a column, Pandas will assign the float type to that column. How to avoid it, or is there any setting to configure the data type of column with missing value when data is read?</li>
</ol>
<p>Please note that I can only use this Pandas version and can't update it.</p>
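<p>One workaround, given that <code>read_xml</code> in 1.3.2 has no <code>dtype</code> argument, is to parse the XML yourself and build the frame from strings, so nothing is ever inferred as float — a sketch using only the standard library (the element names follow the question; missing elements come back as <code>None</code>):</p>

```python
import xml.etree.ElementTree as ET

def xml_to_records(xml_text, columns):
    """Parse <Data> elements into dicts of strings; missing tags become None."""
    # Wrap in a dummy root: the files have multiple top-level <Data> elements
    root = ET.fromstring(f"<root>{xml_text}</root>")
    return [{col: node.findtext(col) for col in columns}
            for node in root.iter("Data")]

sample = """
<Data><colA>abc</colA><colB>168E3</colB></Data>
<Data><colA>def</colA></Data>
"""
records = xml_to_records(sample, ["colA", "colB"])
print(records)  # [{'colA': 'abc', 'colB': '168E3'}, {'colA': 'def', 'colB': None}]
```

<p><code>pd.DataFrame(records)</code> then yields object (string) columns; for the <code>.gz</code> files, read the text with <code>gzip.open(path, "rt")</code> first.</p>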
|
<python><pandas><xml>
|
2024-07-07 16:01:42
| 3
| 2,333
|
Jonathan
|
78,717,759
| 3,628,240
|
Calculating Year over Year change for a Month in Pandas, from Excel
|
<p>I have an excel file with raw data, which lists transaction data by month for facilities. There's the month/year, the facilityID, and then spend in that transaction. There can be multiple transactions for a facility in a month. I've managed to group the transactions by date and facilityID, with the total spend as the values. It looks something like this.</p>
<p><a href="https://i.sstatic.net/YjjMsFOx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjjMsFOx.png" alt="enter image description here" /></a></p>
<p>I'm trying to calculate the aggregate year-over-year change in TotalSpend, for say 2024-05-01 (May 2024). Some of the facilities may be new so they wouldn't have May 2023 data, or they might have dropped out or not yet reported the data so they wouldn't have May 2024 data. In both cases I want to exclude those facilities then from the calculation. I would also like to have a Quarterly year-over-year change, but I'm assuming I can just do the same thing I do for the May, but for April, May, and June when the data is available.</p>
<p>This is what I've tried so far, but I'm getting an error. I don't necessarily need to add it as a column in the df; just <code>May 2024 5%</code> would suffice for my purposes.</p>
<p>Code - with some pseudocode if it makes sense</p>
<pre><code>import pandas as pd
import datetime
def open_file(path, date_str, prev_date_str):
df_raw = pd.DataFrame({'Date':["2024-05-01","2024-05-01","2024-05-01","2023-05-01","2024-05-01","2023-05-01","2023-05-01","2024-04-01","2022-05-01"],
'FacilityID': [6,6,5,5,1,6,6,4,6],
'TotalSpend': [100,200,5,5,90,190,150,500,200]
})
df = df_raw.groupby(['Date','FacilityID'])['TotalSpend'].sum()
#facilities = get complete list of facilities
cur_month_vals = []
prev_month_vals = []
for facility in facilities:
        if df.loc[date_str][facility] and df.loc[prev_date_str][facility]:
cur_month_vals.append(df.loc[date_str][facility].value)
prev_month_vals.append(df.loc[prev_date_str][facility].value)
if __name__ == "__main__":
df = open_file("Path", '2024-05-01', prev_date_str= '2023-05-01')
</code></pre>
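<p>A hedged sketch of the aggregate calculation with the sample data above: pivot the monthly totals to one column per date, keep only the facilities that have values in both months, and compare the sums (column and date names follow the question):</p>

```python
import pandas as pd

df_raw = pd.DataFrame({
    'Date': ["2024-05-01", "2024-05-01", "2024-05-01", "2023-05-01", "2024-05-01",
             "2023-05-01", "2023-05-01", "2024-04-01", "2022-05-01"],
    'FacilityID': [6, 6, 5, 5, 1, 6, 6, 4, 6],
    'TotalSpend': [100, 200, 5, 5, 90, 190, 150, 500, 200],
})

def yoy_change(df, cur, prev):
    # one row per facility, one column per month
    totals = df.groupby(['FacilityID', 'Date'])['TotalSpend'].sum().unstack('Date')
    # keep only facilities reporting in both months
    both = totals[[prev, cur]].dropna()
    return both[cur].sum() / both[prev].sum() - 1

change = yoy_change(df_raw, '2024-05-01', '2023-05-01')
print(f"May 2024 {change:.1%}")
```

<p>With this sample, facilities 1 and 4 are excluded (no prior-year May), so the change compares 305 against 345. The quarterly figure can reuse the same function after summing April–June per facility first.</p>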
|
<python><pandas>
|
2024-07-07 15:56:45
| 1
| 927
|
user3628240
|
78,717,523
| 7,563,454
|
Main loop waits on thread pool despite using map_async
|
<p>I have a multiprocessing thread pool with a loop running on the main thread. The main loop must run without being blocked: It issues a task to the thread pool on startup and whenever all results have been processed and a new set must be calculated, while retrieving those results that become available even when the pool is still busy.</p>
<pre><code>import multiprocessing as mp
def double(x):
return x * 2
pool = mp.Pool()
items = [1, 2, 3, 4]
result = None
while True:
if result:
for value in result.get():
print(value)
if not result or result.ready():
result = pool.map_async(double, items)
print("This should still execute even when results aren't ready!")
</code></pre>
<p>Despite all the documentation agreeing that <code>map_async</code> should be non-blocking, the entire <code>while True</code> loop waits until the results are ready. This seems to be triggered by <code>result.get()</code>, but even that shouldn't block the main loop when using <code>map_async</code>, which is why there's a <code>result.ready()</code> method to check whether the entire task has finished. Is there a non-blocking version of <code>result.get()</code>, or is there another approach I must use?</p>
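<p>For reference, <code>map_async</code> itself returns immediately; it is <code>result.get()</code> that blocks until the results exist. The non-blocking pattern is to call <code>get()</code> only after <code>ready()</code> reports completion — a sketch (using <code>ThreadPool</code> here so it runs without a <code>__main__</code> guard; <code>multiprocessing.Pool</code> exposes the same API):</p>

```python
from multiprocessing.pool import ThreadPool
import time

def double(x):
    return x * 2

pool = ThreadPool()
result = pool.map_async(double, [1, 2, 3, 4])

values = None
while values is None:
    if result.ready():          # non-blocking check
        values = result.get()   # safe now: returns immediately
    else:
        time.sleep(0.01)        # the main loop stays free for other work here

pool.close()
pool.join()
print(values)  # [2, 4, 6, 8]
```

<p>In the original loop, guarding the <code>result.get()</code> call with <code>result.ready()</code> (rather than calling <code>get()</code> unconditionally) keeps the iteration from ever blocking.</p>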
|
<python><multithreading><threadpool>
|
2024-07-07 14:10:52
| 1
| 1,161
|
MirceaKitsune
|
78,717,509
| 14,224,948
|
Can't add cache for my dependencies in GitHub Actions using Python
|
<p>I have a project that uses ruff and pytest. I want to cache those dependencies, but in the docs there is an example that uses some kind of Node.js env, and ChatGPT suggested something like this:</p>
<pre><code>name: Code analysis
on:
pull_request:
push:
branches: main
jobs:
analyze:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.8", "3.9", "3.10"]
steps:
- name: Check out repository
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Cache pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-ruff-pytest
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install ruff
pip install pytest
- name: Run Ruff
run: |
ruff check script.py tests.py
- name: Run pytest
run: |
pytest tests.py
</code></pre>
<p>Error logs for Cache pip:</p>
<pre><code>Run actions/cache@v3
with:
path: ~/.cache/pip
key: Linux-pip-3.8-ruff-pytest
restore-keys: Linux-pip-3.8-
enableCrossOsArchive: false
fail-on-cache-miss: false
lookup-only: false
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.18/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.18/x64/lib
Cache not found for input keys: Linux-pip-3.8-ruff-pytest, Linux-pip-3.8-
</code></pre>
<p>Which of course doesn't work. Do I need to add <code>requirements.txt</code> or some other dependency management to my project or can I somehow make GitHub Actions cache the result of the pip commands?</p>
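<p>A common alternative (a sketch — it assumes a <code>requirements.txt</code> listing <code>ruff</code> and <code>pytest</code> is added to the repo) is to let <code>actions/setup-python</code> manage the pip cache itself, keyed on the dependency file's hash. Note that <code>Cache not found for input keys</code> on a first run is expected with either approach: the cache is saved when the job finishes and only hit on later runs.</p>

```yaml
- name: Set up Python ${{ matrix.python-version }}
  uses: actions/setup-python@v5
  with:
    python-version: ${{ matrix.python-version }}
    cache: 'pip'                           # built-in pip caching
    cache-dependency-path: requirements.txt
- name: Install dependencies
  run: pip install -r requirements.txt
```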
|
<python><continuous-integration><github-actions>
|
2024-07-07 14:06:15
| 1
| 1,086
|
Swantewit
|
78,717,481
| 1,982,032
|
How can I add the positional argument? UnitedStates.__init__() missing 1 required positional argument: 'm'
|
<p>In the article "Automating Option Pricing Calculations" (https://sanketkarve.net/automating-option-pricing-calculations/), in the chapter "Calculation of Implied Volatility via Stock Prices":</p>
<pre><code>from QuantLib import *
valuation_date = Date(20,11,2020)
Settings.instance().evaluationDate = valuation_date
calendar = UnitedStates()
</code></pre>
<p>I get this type error:</p>
<pre><code>TypeError: UnitedStates.__init__() missing 1 required positional argument: 'm'
</code></pre>
<p>How can I add the required positional argument?</p>
|
<python><quantlib>
|
2024-07-07 13:52:29
| 0
| 355
|
showkey
|
78,717,463
| 4,451,315
|
pyarrow: find diff for chunkedarray
|
<p>If I have a chunkedarray, how do I find its diff (similar to pandas.Series.diff or polars.Series.diff)?</p>
<p>e.g. if I start with</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa
ca = pa.chunked_array([[1,3, 2], [5, 2, 1]])
</code></pre>
<p>I'd like to end up with an array (or chunked array) with values: <code>[null, 2, -1, 3, -3, -1]</code></p>
|
<python><pyarrow>
|
2024-07-07 13:43:15
| 1
| 11,062
|
ignoring_gravity
|
78,717,452
| 181,783
|
CPython: what happens when the take_gil function calls the drop_gil function
|
<p>I'm using <a href="https://github.com/maartenbreddels/per4m" rel="nofollow noreferrer">perf probes to profile the GIL contention</a> in a multithreaded Python application and I find sequences where the <a href="https://github.com/python/cpython/blob/main/Python/ceval_gil.c#L292C1-L292C9" rel="nofollow noreferrer">take_gil function</a> <a href="https://github.com/python/cpython/blob/main/Python/ceval_gil.c#L402" rel="nofollow noreferrer">calls drop_gil</a> as shown in the following perf script dump of the perf data captured:</p>
<pre><code> viztracer 5533 [003] 3220.317244274: python:take_gil: (55fe99b0d01e)
viztracer 5533 [003] 3220.317407813: python:drop_gil: (55fe99b0cf20)
viztracer 5533 [003] 3220.317412443: python:drop_gil__return: (55fe99b0cf20 <- 55fe99baaa9e)
viztracer 5533 [003] 3220.317419189: python:take_gil: (55fe99b0d01e)
viztracer 5533 [003] 3220.317422951: python:drop_gil: (55fe99b0cf20)
viztracer 5533 [003] 3220.317425869: python:drop_gil__return: (55fe99b0cf20 <- 55fe99baaa9e)
</code></pre>
<p>The corresponding part of take_gil in the CPython code where this happens looks like:</p>
<pre class="lang-c prettyprint-override"><code>if (_PyThreadState_MustExit(tstate)) {
/* bpo-36475: If Py_Finalize() has been called and tstate is not
the thread which called Py_Finalize(), exit immediately the
thread.
This code path can be reached by a daemon thread which was waiting
in take_gil() while the main thread called
wait_for_thread_shutdown() from Py_Finalize(). */
MUTEX_UNLOCK(gil->mutex);
/* tstate could be a dangling pointer, so don't pass it to
drop_gil(). */
drop_gil(interp, NULL, 1);
PyThread_exit_thread();
}
</code></pre>
<p>My question is, under what condition is this block of code executed? Is the calling thread about to terminate, as <code>PyThread_exit_thread()</code> would seem to indicate? The perf script dump, however, suggests that the thread PID 5533 immediately reattempts to take the GIL at timestamp <code>3220.317419189</code> after previously dropping it at timestamp <code>3220.317412443</code>; therefore thread PID 5533 was not terminated.</p>
|
<python><cpython><perf><gil>
|
2024-07-07 13:39:09
| 1
| 5,905
|
Olumide
|
78,717,232
| 10,807,094
|
Why is an imported variable reevaluated on each request in Flask?
|
<p>I'm working on a flask app. There is the app.py file which uses an imported variable:</p>
<pre><code>from other_file import rnd
@app.route('/some/path/', methods=['GET'])
def some_func:
return make_response(jsonify(value=str(rnd)))
</code></pre>
<p>and other_file.py</p>
<pre><code>import random
rnd = random.randint(100000, 1000000)
</code></pre>
<p>On each request, the returned value of the function is different. Why?</p>
<p>My desired behavior is that it returns the same value on each request.</p>
<p>What if I want to store some app-wide data in a variable, for example:</p>
<pre><code>logged_in_users_list = []
</code></pre>
<p>Would that be emptied after each request too?</p>
<p>(Actually I use a similar approach in a FastAPI app, and the variable keeps the list fine. What makes the difference?)</p>
|
<python><flask>
|
2024-07-07 12:03:57
| 0
| 591
|
Yusif
|
78,716,778
| 10,200,497
|
How can I use groupby in a way that each group is grouped with the previous overlapping group?
|
<p>My DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': list('xxxxxxxxxxyyyyyyyyy'),
'b': list('1111222333112233444')
}
)
</code></pre>
<p>Expected output is a list of groups:</p>
<pre><code> a b
0 x 1
1 x 1
2 x 1
3 x 1
4 x 2
5 x 2
6 x 2
a b
4 x 2
5 x 2
6 x 2
7 x 3
8 x 3
9 x 3
a b
10 y 1
11 y 1
12 y 2
13 y 2
a b
12 y 2
13 y 2
14 y 3
15 y 3
a b
14 y 3
15 y 3
16 y 4
17 y 4
18 y 4
</code></pre>
<p>Logic:</p>
<p>Grouping starts with <code>df.groupby(['a', 'b'])</code> and then after that I want to join each group with its previous one which gives me the expected output.</p>
<p>Maybe the initial grouping that I mentioned is not necessary.</p>
<p>Note that in the expected output <code>a</code> column cannot contain both <code>x</code> and <code>y</code>.</p>
<p>Honestly, overlapping groups are not something I'm used to producing with <code>groupby</code>, so I don't know how to approach this. I tried <code>df.b.diff()</code>, but it is not even close.</p>
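<p>One hedged way to get this is to materialize the ordinary <code>groupby(['a', 'b'])</code> groups in order and concatenate each consecutive pair that shares the same <code>a</code> value — a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'a': list('xxxxxxxxxxyyyyyyyyy'),
    'b': list('1111222333112233444'),
})

groups = [g for _, g in df.groupby(['a', 'b'], sort=False)]
pairs = [
    pd.concat([g1, g2])
    for g1, g2 in zip(groups, groups[1:])
    if g1['a'].iat[0] == g2['a'].iat[0]   # never mix x with y
]
for p in pairs:
    print(p, end='\n\n')
```

<p>With the sample data this produces the five overlapping frames from the expected output, since each group is paired with its successor inside the same <code>a</code> value.</p>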
|
<python><pandas><dataframe>
|
2024-07-07 08:24:13
| 1
| 2,679
|
AmirX
|
78,716,770
| 1,371,666
|
Access function and variable of parent(not exactly) class in tkinter of python
|
<p>Please see the code below I got from OysterShucker<br></p>
<pre><code>import tkinter as tk
class Table(tk.Frame):
def __init__(self, master, header_labels:tuple, *args, **kwargs):
tk.Frame.__init__(self, master, *args, **kwargs)
# configuration for all Labels
# easier to maintain than directly inputting args
self.lbl_cfg = {
'master' : self,
'foreground' : 'blue',
'relief' : 'raised',
'font' : 'Arial 16 bold',
'padx' : 0,
'pady' : 0,
'borderwidth': 1,
'width' : 11,
}
self.headers = []
self.rows = []
for col, lbl in enumerate(header_labels):
self.grid_columnconfigure(col, weight=1)
# make and store header
(header := tk.Label(text=lbl, **self.lbl_cfg)).grid(row=0, column=col, sticky='nswe')
self.headers.append(header)
def add_row(self, desc:str, quantity:int, rate:float, amt:float, pending:bool) -> None:
print("how can I update variable total and call show_total")
self.rows.append([])
for col, lbl in enumerate((desc, quantity, rate, amt, pending), 1):
(entry := tk.Label(text=lbl, **self.lbl_cfg)).grid(row=len(self.rows), column=col, sticky='nswe')
self.rows[-1].append(entry)
def del_row(self, i:int) -> None:
for ent in self.rows[i]:
ent.destroy()
del self.rows[i]
for r, row in enumerate(self.rows, 1):
for c, ent in enumerate(row, 1):
ent.grid_forget()
ent.grid(row=r, column=c, sticky='nswe')
class Application(tk.Tk):
def __init__(self, title:str="Sample Application", x:int=0, y:int=0, **kwargs):
tk.Tk.__init__(self)
self.title(title)
self.config(**kwargs)
header_labels = ('', 'Description', 'Quantity', 'Rate', 'Amt', 'pending')
self.total=0
self.table = Table(self, header_labels)
self.table.grid(row=0, column=0, sticky='nswe')
# update so we can get the current dimensions
self.update_idletasks()
self.geometry(f'{self.winfo_width()}x{self["height"] or self.winfo_screenheight()}+{x}+{y}')
# test
self.table.add_row("A", 2, 12.5, 25, True)
self.table.add_row("B", 4, 12.5, 50, False)
self.table.del_row(0)
self.table.add_row("A", 2, 12.5, 25, True)
def show_total(self):
print(self.total)
print("Total has been updated")
return
if __name__ == "__main__":
# height of 0 defaults to screenheight, else use height
Application(title="My Application", height=0).mainloop()
</code></pre>
<p>Please correct me if my understanding is wrong.<br>
I think the Table object is used by the Application object.<br>
I have put 'not exactly' in the title because Table and Application are not exactly child and parent respectively. Isn't that so?<br>
There is a variable 'total' in the Application object.<br>
You can assume that 'total' is also affected by other widgets in Application which I have not shown, because they are not relevant to this question.<br>
My aim is to update 'total' according to the Quantity column in the table. So, when an entry is added to the table, total should be updated and the show_total function has to be called from inside add_row. The same applies to del_row.<br>
Also note that del_row can be called by clicking a button inside the table which I have not shown (actually, there is a delete button in every row, which I have left out to keep the code short).
How can this be done?<br>
Thanks.<br></p>
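<p>One way to look at it: <code>Table</code> already receives the <code>Application</code> instance as <code>master</code>, so <code>add_row</code> can reach back through that reference. A minimal sketch of the pattern, with plain classes standing in for the tk widgets so it runs headlessly (in the real code you could keep your own <code>self.app = master</code> inside <code>Table.__init__</code>):</p>

```python
class Table:
    def __init__(self, master):
        self.master = master                # the owning Application

    def add_row(self, desc, quantity, rate, amt, pending):
        self.master.total += quantity       # update the parent's state...
        self.master.show_total()            # ...and call the parent's method

class Application:
    def __init__(self):
        self.total = 0
        self.table = Table(self)

    def show_total(self):
        print(self.total)
        print("Total has been updated")

app = Application()
app.table.add_row("A", 2, 12.5, 25, True)
app.table.add_row("B", 4, 12.5, 50, False)
```

<p><code>del_row</code> can use the same reference to subtract the quantity and call <code>show_total</code> again; a per-row delete button's callback lives in <code>Table</code>, so it reaches the application the same way.</p>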
|
<python><tkinter>
|
2024-07-07 08:18:12
| 1
| 481
|
user1371666
|
78,716,751
| 10,855,529
|
How to remove or drop a field from a struct in Polars?
|
<p>I want to remove one field from a struct. Currently, I have it set up like this, but is there a simpler way to achieve this?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import polars.selectors as cs
def remove_one_field(df: pl.DataFrame) -> pl.DataFrame:
meta_data_columns = (df.select('meta_data')
.unnest('meta_data')
.select(cs.all() - cs.by_name('system_data')).columns)
print(meta_data_columns)
return (df.unnest('meta_data')
.select(cs.all() - cs.by_name('system_data'))
.with_columns(meta_data=pl.struct(meta_data_columns))
.drop(meta_data_columns))
# Example usage
input_df = pl.DataFrame({
"id": [1, 2],
"meta_data": [{"system_data": "to_remove", "user_data": "keep"}, {"user_data": "keep_"}]
})
output_df = remove_one_field(input_df)
print(output_df)
</code></pre>
<pre><code>['user_data']
shape: (2, 2)
βββββββ¬ββββββββββββ
β id β meta_data β
β --- β --- β
β i64 β struct[1] β
βββββββͺββββββββββββ‘
β 1 β {"keep"} β
β 2 β {"keep_"} β
βββββββ΄ββββββββββββ
</code></pre>
<p>Something like <code>select</code> on fields within a struct?</p>
|
<python><dataframe><python-polars>
|
2024-07-07 08:07:50
| 2
| 3,833
|
apostofes
|
78,716,568
| 1,056,563
|
After installing python3.11 none of the general python/python3 executables are symlinked to them
|
<p>I have installed <code>python 3.11</code> via <code>Homebrew</code>. There are many posts about similar issues with python@3.X not updating python/python3, etc. But I have not seen a solution that fixes the fact that the executables including <code>pip/pip3</code>, <code>pydoc</code>, <code>idle</code>, <code>wheel</code> are all not available.</p>
<pre><code>cd /usr/local/Cellar/python@3.11/3.11.9/bin
/usr/local/Cellar/python@3.11/3.11.9/bin$ ll
total 8
lrwxr-xr-x 1 steve admin 66 Apr 2 01:25 python3.11-config -> ../Frameworks/Python.framework/Versions/3.11/bin/python3.11-config
lrwxr-xr-x 1 steve admin 59 Apr 2 01:25 python3.11 -> ../Frameworks/Python.framework/Versions/3.11/bin/python3.11
lrwxr-xr-x 1 steve admin 58 Apr 2 01:25 pydoc3.11 -> ../Frameworks/Python.framework/Versions/3.11/bin/pydoc3.11
lrwxr-xr-x 1 steve admin 57 Apr 2 01:25 idle3.11 -> ../Frameworks/Python.framework/Versions/3.11/bin/idle3.11
lrwxr-xr-x 1 steve admin 58 Apr 2 01:25 2to3-3.11 -> ../Frameworks/Python.framework/Versions/3.11/bin/2to3-3.11
-rwxr-xr-x 1 steve wheel 233 Jul 6 22:27 wheel3.11
-rwxr-xr-x 1 steve wheel 246 Jul 6 22:27 pip3.11
drwxr-xr-x 9 steve admin 288 Jul 6 22:27 .
</code></pre>
<p>It's not simply adding the new directory to the path: it would require symlinking each and every one. What is the way to do this?</p>
<p><strong>Update</strong>. I <em>needed</em> to make some forward progress so have manually symlinked <code>python[/3]</code> and <code>pip[/3]</code>. I am still looking for a more general process to do this: it is doubtful that everyone is creating 14 symlinks manually each time a new version of python is installed. [I will wait through downvotes to see if someone else has noticed a better way.]</p>
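<p>For what it's worth, Homebrew ships unversioned <code>python</code>, <code>pip</code>, <code>idle</code>, <code>wheel</code>, etc. symlinks for its keg-only Pythons in the formula's <code>libexec/bin</code> directory, so instead of creating 14 links by hand you can prepend that single directory in your shell profile (a config sketch; swap the formula name when the version changes):</p>

```shell
# ~/.zshrc (or ~/.bash_profile)
export PATH="$(brew --prefix python@3.11)/libexec/bin:$PATH"
```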
|
<python><homebrew>
|
2024-07-07 06:13:02
| 1
| 63,891
|
WestCoastProjects
|
78,716,562
| 3,949,631
|
How to get only plain text from a webpage
|
<p>I am trying to access the contents of a webpage using <code>urllib</code> and <code>bs4</code>:</p>
<pre><code>import bs4
from urllib.request import Request, urlopen
url = "https://ar5iv.labs.arxiv.org/html/2309.10034"
req = Request(url=url, headers={'User-Agent': 'Mozilla/7.0'})
webpage = str(urlopen(req).read())
soup = bs4.BeautifulSoup(webpage)
text = soup.get_text()
</code></pre>
<p>However, this contains all kinds of escape sequences and non-ASCII characters like <code>\n</code>, <code>\xc2</code>, <code>\x89</code> or <code>\subscript</code>, and so on. I want to remove all those characters and extract the plain text only. Is this possible and how can I do it?</p>
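<p>Most of that noise comes from <code>str(urlopen(req).read())</code>, which turns the raw bytes into a Python literal (hence the visible <code>\n</code> and <code>\xc2</code> in the text). Passing the bytes straight to BeautifulSoup lets it decode them properly; a sketch on a local snippet (the same applies to the downloaded page):</p>

```python
import bs4

# A local byte string standing in for urlopen(req).read()
raw = "<p>caf\u00e9 &amp; ok</p><p>line2</p>".encode("utf-8")

# Wrong: str(raw) produces "b'...'" with escape sequences leaking into the text.
# Right: hand the bytes to BeautifulSoup and let it decode them itself.
soup = bs4.BeautifulSoup(raw, "html.parser")
text = soup.get_text(separator=" ", strip=True)
print(text)  # café & ok line2
```

<p><code>strip=True</code> trims surrounding whitespace per text node and <code>separator</code> joins the nodes, which removes most of the stray layout characters.</p>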
|
<python><beautifulsoup><urllib>
|
2024-07-07 06:05:52
| 3
| 497
|
John
|
78,716,221
| 25,818,422
|
Is there any way to construct a code object from a code string and assign it to an existing function using Python?
|
<p>My problem is like this: I need to change how a function behave, but I can't access or change the file with which the function is located. I <em>could</em> <code>import</code> it though, and I would like to change the implementation a bit. However, since I can't access the source code itself, I can't change it directly. I want to actually <em>change</em> the function implementation so that all other files that <code>import</code> it will also have the new function. I'm currently trying to manipulate the code bytes of the function's <code>__code__.co_code</code> attribute (I know this is very bad software practice, but I <em>really</em> have no choice.). The <code>__code__.co_code</code> returns a series of bytes characters, which is very incomprehensible and extremely hard to change (I mean, like, how am I going to write a code using bytes??). I would like to inject a new code object into the function. Is there any way to first convert a Python string containing the new implementation to a series of byte characters and then inject it safely into the old function?</p>
<p>For a minimal reproducible example, suppose I have the following function:</p>
<pre><code>def func1():
return 1
</code></pre>
<p>and I want to change it to:</p>
<pre><code>def func1():
return 0
</code></pre>
<p>I've managed to access the byte code sequences of the function's <code>__code__.co_code</code> attribute, like this:</p>
<pre><code>def func1():
return 1
code_obj = func1.__code__.co_code
print(code_obj) # prints b'd\x01S\x00'
new_code = b'd\x01S\x00' # copied the original code bytes sequence. How am I going to write `return 0` in bytes?
func1.__code__.co_code = new_code
print(func1)
code_obj2 = func1.__code__.co_code
</code></pre>
<p>And it gives me this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2024.1\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 7, in <module>
AttributeError: readonly attribute
</code></pre>
<p>It tells me that I cannot change the <code>__code__.co_code</code> attribute of the function. Additionally, I <em>really</em> don't know how to write the new function implementation (<code>return 0</code>) in bytes sequences. I know that I can probably make use of ASTs but I don't know how to use them either. Can anyone help me? I'm really stuck here. (Side note: I can't just shadow the old function with a new implementation because I want other files that import the library to also have the change. Also, I don't want to set the function in the module using <code>import lib; lib.old_function = new_function</code>.)</p>
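<p>Although <code>co_code</code> is read-only, the function's whole <code>__code__</code> attribute is writable, so a replacement can be written as a normal <code>def</code> and its code object swapped in; every module that imported the function sees the change, since they all hold references to the same function object. A minimal sketch (Python raises <code>ValueError</code> if the two functions' closures differ, so this works best for closure-free functions):</p>

```python
def func1():
    return 1

def _replacement():
    return 0

# Swap the entire code object rather than mutating co_code
func1.__code__ = _replacement.__code__
result = func1()  # now runs the replacement body
```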
|
<python><function><byte><python-internals>
|
2024-07-07 00:39:03
| 1
| 330
|
Luke L
|
78,716,159
| 21,152,416
|
How to execute typer program without specifying filename
|
<p>Assume having <code>tool.py</code> file with the following content:</p>
<pre class="lang-py prettyprint-override"><code>import typer
app = typer.Typer()
@app.command()
def create():
...
@app.command()
def delete():
...
if __name__ == "__main__":
app()
</code></pre>
<p>It can be executed using <code>typer tool.py run create</code>.</p>
<p>I'm wondering if there is a way to execute it without specifying <code>.py</code> extension? Something like <code>typer tool run create</code>. Or using "an alias" instead of filename e.g. <code>tool.py > talias</code> and use <code>typer talias run create</code>.</p>
|
<python><typer>
|
2024-07-06 23:49:51
| 0
| 1,197
|
Victor Egiazarian
|
78,716,097
| 3,628,240
|
Using Pandas to get year over year change from Excel file
|
<p>I have an excel file that looks something like this: <a href="https://i.sstatic.net/JpcFaJe2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpcFaJe2.png" alt="excel file example" /></a></p>
<p>Each row represents a month of the year, each column is a user_ID, and the cell is the dollars that user spent in a month.</p>
<p>The file gets updated a few times a month, and I don't always get the data for the previous month (e.g. June 2024) until sometimes a month later, like the end of July.</p>
<p>What would be the best way using Pandas to calculate the year-over-year change in spending in aggregate for all of the users? Users without any spend in the prior year period (e.g. May 2023) should be excluded as well as if they don't have data in the current period (e.g. May 2024). In this example, I would like to calculate the year-over-year change for the 2nd quarter, so that would mean only users 2, 3, 5 would be included, but also for May 2024, which would mean users 2, 3, 4, 5.</p>
<p>Would it be easier to just load this into a PostgreSQL DB using psycopg2?</p>
<p>This is what I've tried so far:</p>
<pre><code>import datetime
import pandas as pd
# import psycopg2
def open_file(path):
df = pd.read_excel(path, skiprows=53,
nrows=140, usecols="M:CCL")
users = df.columns[1:-1]
cur_month_vals = []
prev_month_vals = []
cur_month = '2024-05-01'
prev_month = '2023-05-01'
for user in df[1:-1]:
try:
if cur_month in df.columns and prev_month in df.columns:
cur_spend = df.loc[cur_month, user]
print(user, cur_month, cur_spend)
prev_spend = df.loc[pre_month, user]
print(cur_spend/prev_spend)
except:
print("user not in", user)
</code></pre>
<p>Edit: using the raw data, I now have something that looks like this
<a href="https://i.sstatic.net/tCBuOFLy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCBuOFLy.png" alt="enter image description here" /></a></p>
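<p>One possible shape for the aggregate year-over-year calculation, assuming the sheet is read into a wide frame with months as the index and user IDs as columns (the frame and user names below are invented for illustration):</p>

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the Excel data: rows = months, columns = users
df = pd.DataFrame(
    {"user_1": [100.0, np.nan], "user_2": [50.0, 60.0], "user_3": [np.nan, 80.0]},
    index=pd.to_datetime(["2023-05-01", "2024-05-01"]),
)

cur, prev = pd.Timestamp("2024-05-01"), pd.Timestamp("2023-05-01")
# Keep only users with spend in BOTH the current and prior-year period
both = df.columns[df.loc[cur].notna() & df.loc[prev].notna()]
yoy_change = df.loc[cur, both].sum() / df.loc[prev, both].sum() - 1
```

<p>For a quarter rather than a single month, the same mask can be built from each period's slice, e.g. <code>df.loc[start:end].notna().all()</code>. A database via psycopg2 isn't needed for this size of computation; pandas handles it in memory.</p>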
|
<python><pandas>
|
2024-07-06 22:55:13
| 1
| 927
|
user3628240
|
78,715,993
| 2,856,552
|
How do I add legend handles in Matplotlib?
|
<p>I would like to add a legend to my Python plot, with a title and legend handles. My sincere apologies as a complete novice in Python, I got my code from a post. The code below works, but I want to add a legend. All the plots I have googled deal with line plots with several lines.</p>
<pre><code>import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
from datetime import date
from mpl_toolkits.basemap import Basemap
map_df = gpd.read_file("../Shapefiles/lso_adm_fao_mlgca_2019/lso_admbnda_adm1_FAO_MLGCA_2019.shx")
risks_df=pd.read_csv("../Output/Wndrisks.csv")
merged_df = map_df.merge(risks_df, left_on=["ADM1_EN"], right_on=["District"])
d = {1: "green", 2: "yellow", 3: "orange", 4: "red"}
colors = map_df["ADM1_EN"].map(risks_df.set_index("District")["risk"].map(d))
ax = map_df.plot(color=colors, edgecolor="k", alpha=0.7, legend=True, legend_kwds={"label": "Risk Level", "orientation": "vertical"})
map = Basemap(projection='merc', llcrnrlon=26.5,llcrnrlat=-31.0,urcrnrlon=30.0,urcrnrlat=-28.5, epsg=4269)
map.drawlsmask(land_color='grey',ocean_color='aqua',lakes=True)
legend = plt.legend(handles=[one, two, three, four], title="Risk Levels",
loc=4, fontsize='small', fancybox=True)
plt.title(f"Strong Wind Risks 01-10Jun24", y=1.04)
plt.tick_params(
axis="both", # affect both the X and Y
which="both", # get rid of both major and minor ticks
top=False, # get rid of ticks top/bottom/left/right
bottom=False,
left=False,
right=False,
labeltop=False, # get rid of labels top/bottom/left/right
labelbottom=False,
labelleft=False,
labelright=False)
plt.axis("off") # Get rid of the border around the map
plt.subplots_adjust(right=0.85) # Nudge the country to the left a bit
plt.savefig('wndriskmap.png', dpi=300)
plt.show()
</code></pre>
<p>The data is for the form:</p>
<pre><code>"District","risk"
"Berea",3
"Butha-Buthe",4
"Leribe",4
"Mafeteng",4
"Maseru",4
"Mohale's Hoek",4
"Mokhotlong",4
"Qacha's Nek",4
"Quthing",4
"Thaba-Tseka",4
</code></pre>
<p>The plot I get is as attached<a href="https://i.sstatic.net/fm4D2P6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fm4D2P6t.png" alt="windrisk plot" /></a></p>
<p>I can attach the shapefile if required. I want the legend to have a title <code>"Risk Level"</code> and the levels <code>1=no risk</code>, <code>2=low risk</code>, <code>3=medium risk</code> and <code>4=high risk</code>. What I have included in <code>legend = plt.legend(...)</code> does not work.</p>
<p>Assistance will be appreciated.</p>
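<p>The names <code>one, two, three, four</code> passed to <code>plt.legend</code> are never defined; for a categorical choropleth they can be created explicitly as <code>Patch</code> proxy artists. A minimal self-contained sketch (the colors mirror the <code>d</code> dict above; the labels are the requested level descriptions):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

levels = {1: ("green", "no risk"), 2: ("yellow", "low risk"),
          3: ("orange", "medium risk"), 4: ("red", "high risk")}
handles = [Patch(facecolor=color, edgecolor="k", label=f"{lvl}={text}")
           for lvl, (color, text) in levels.items()]

fig, ax = plt.subplots()
legend = ax.legend(handles=handles, title="Risk Level", loc=4,
                   fontsize="small", fancybox=True)
```

<p>In the original script, this <code>handles</code> list would replace <code>handles=[one, two, three, four]</code> in the existing <code>plt.legend(...)</code> call.</p>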
|
<python><matplotlib><geopandas>
|
2024-07-06 21:43:09
| 1
| 1,594
|
Zilore Mumba
|
78,715,983
| 963,319
|
Pandas outer join not working as expected
|
<p>I want to merge two CSV files from the open Bixi dataset. The problem is that after the outer merge, there are rows missing:</p>
<pre><code>In [148]: outer_merged_df['Code']==7150
Out[148]:
0 False
1 False
2 False
3 False
4 False
...
1045584 False
1045585 False
1045586 False
1045587 False
1045588 False
Name: Code, Length: 1045589, dtype: bool
</code></pre>
<p>But this row is present in the left dataset:</p>
<pre><code>In [151]: df['Code']==7150
...
615 True
</code></pre>
<p>Here is the code for the outer merge:</p>
<pre><code>outer_merged_df = pd.merge(df, df_ride, left_on='Code', right_on='start_station_code', how='outer', indicator=True)
</code></pre>
<p>Here is the code to read the Bixi rides and the station:</p>
<pre><code>df_ride = pd.read_csv('OD_2019-08.csv')
df = pd.read_csv('Stations_2019.csv')
</code></pre>
<p>And here is the <a href="https://bixi.com/en/open-data/" rel="nofollow noreferrer">link</a> to the CSV files. If you're going to download them, please use the August 2019 file.</p>
<p>When I do a left merge, it finds it:</p>
<pre><code>In [154]: merged_df_left=pd.merge(df, df_ride, left_on='Code', right_on='start_station_code', how='left')
In [155]: merged_df_left['Code']==7150
Out[155]:
913466 True
Name: Code, Length: 913470, dtype: bool
</code></pre>
<p>This is extremely confusing. Can someone please give a hint?</p>
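<p>An outer merge never drops left rows, so a key that seems to vanish usually points at the comparison rather than the merge, most often a dtype mismatch between the key columns (worth checking <code>df['Code'].dtype</code> against <code>df_ride['start_station_code'].dtype</code>: an integer <code>7150</code> never equals the string <code>"7150"</code>). A minimal synthetic sketch showing how the <code>indicator</code> column confirms an unmatched left row survives:</p>

```python
import pandas as pd

left = pd.DataFrame({"Code": [1, 7150]})
right = pd.DataFrame({"start_station_code": [1, 2]})

m = pd.merge(left, right, left_on="Code", right_on="start_station_code",
             how="outer", indicator=True)
# The unmatched left key is still there, flagged "left_only". Note that
# "Code" gets upcast to float64 because right-only rows introduce NaN.
status = m.loc[m["Code"] == 7150, "_merge"]
```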
|
<python><pandas>
|
2024-07-06 21:36:40
| 1
| 2,751
|
Jenia Be Nice Please
|
78,715,925
| 15,803,668
|
Prevent root items from being dropped onto another root item in PyQt5 QTreeView
|
<p>I'm working on a PyQt5 application where I have a QTreeView populated with a QStandardItemModel. The root items represent documents and the child items represent sections within a document, so they should not be mixed. I want to achieve the following behaviors:</p>
<ul>
<li>top level items can only be moved in a way that only their order changes, while still remaining in the top level</li>
<li>children can only be moved within the hierarchy of their top level parent</li>
</ul>
<p>I've tried implementing custom <code>dropEvent</code> methods in my QTreeView subclass. I tried using <code>dropEvent</code> as the user can move the top level item order.</p>
<p>Here's a simplified version of my current implementation. The problem is that the code still allows children to be moved outside of their top-level item, and the order of the top-level items can't be changed.</p>
<pre><code>from PyQt5.QtWidgets import QApplication, QTreeView, QMainWindow, QVBoxLayout, QWidget
from PyQt5.QtGui import QStandardItem, QStandardItemModel
class MyTreeView(QTreeView):
def dropEvent(self, e):
currentIndex = e.source().currentIndex()
destinationIndex = self.indexAt(e.pos())
# Check if the dragged item is a root item
if not currentIndex.parent().isValid():
# Check if the drop destination is a root item
if not destinationIndex.parent().isValid():
# Ignore the drop if both are root items
e.ignore()
return
super().dropEvent(e)
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.tree_view = MyTreeView(self)
self.tree_view.setDragEnabled(True)
self.tree_view.setAcceptDrops(True)
self.tree_view.setDropIndicatorShown(True)
self.tree_view.setDragDropMode(QTreeView.InternalMove)
self.model = QStandardItemModel()
# Create sample items
root_item1 = QStandardItem("Root Item 1")
child_item11 = QStandardItem("Child Item 1.1")
child_item12 = QStandardItem("Child Item 1.2")
root_item1.appendRow(child_item11)
root_item1.appendRow(child_item12)
root_item2 = QStandardItem("Root Item 2")
child_item21 = QStandardItem("Child Item 2.1")
child_item22 = QStandardItem("Child Item 2.2")
root_item2.appendRow(child_item21)
root_item2.appendRow(child_item22)
self.model.appendRow(root_item1)
self.model.appendRow(root_item2)
self.tree_view.setModel(self.model)
layout = QVBoxLayout()
layout.addWidget(self.tree_view)
container = QWidget()
container.setLayout(layout)
self.setCentralWidget(container)
if __name__ == '__main__':
app = QApplication([])
window = MainWindow()
window.show()
app.exec_()
</code></pre>
|
<python><pyqt5><qtreeview>
|
2024-07-06 21:07:58
| 3
| 453
|
Mazze
|
78,715,742
| 159,072
|
Why is my Binary PSO feature selection showing no progress?
|
<p>I have written a Python script as follows:</p>
<ol>
<li>A data-file <code>file</code> at path <code>'/content/drive/MyDrive/dataset.csv'</code>:<br />
(a) 6.62 GB in size<br />
(b) 1079134 rows excluding the header row<br />
(c) 1029 columns</li>
<li>Divide the rows into 25 chunks each of size <code>chunk_size</code></li>
<li>Read a chunk <code>chunk</code> of size <code>chunk_size</code> from <code>file</code></li>
<li>Remove first three columns from the left hand side and create a dataframe <code>df</code></li>
<li>If the left-most column of any row in <code>df</code> contains anything other than characters {<code>A</code>,<code>B</code>,<code>C</code>}, remove that row from <code>df</code></li>
<li>Take left most column of <code>df</code> as the target column <code>y</code></li>
<li>Map the target values into intergers as {<code>A</code>:0,<code>B</code>:1,<code>C</code>:2}</li>
<li>Take rest of the columns of <code>df</code> as features <code>X</code></li>
<li>Apply a particle swarm optimizer and a very fast classifier (of your choosing) to find the best feature columns</li>
<li>Dispose the chunk and free memory</li>
<li>Go to step 3 until all <code>n</code> chunks have been processed</li>
<li>Use voting to select best features</li>
<li>Save the selected features to file '/content/drive/MyDrive/feature_selection_output.txt'</li>
</ol>
<pre><code>import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
import pyswarms as ps
from datetime import datetime as dt
import gc
# File path
file = '/content/drive/MyDrive/dataset.csv'
output_file = '/content/drive/MyDrive/feature_selection_output.txt'
# Constants
chunk_size = 1079134 // 25
# Initialize variables
selected_features = []
# Function to map target values
def map_target_values(val):
mapping = {'A': 0, 'B': 1, 'C': 2}
return mapping.get(val, -1)
# Function to apply particle swarm optimization
def f_per_particle(m, alpha, X, y, total_features, classifier):
if np.count_nonzero(m) == 0:
X_subset = X
else:
X_subset = X[:, m == 1]
scores = cross_val_score(classifier, X_subset, y, cv=3)
P = scores.mean()
j = (alpha * (1.0 - P) + (1.0 - alpha) * (1 - (X_subset.shape[1] / total_features)))
return j
def f(x, alpha, X, y, classifier):
n_particles = x.shape[0]
total_features = X.shape[1]
j = [f_per_particle(x[i], alpha, X, y, total_features, classifier) for i in range(n_particles)]
return np.array(j)
# Read and process file in chunks
for chunk in pd.read_csv(file, chunksize=chunk_size):
# Remove the first three columns
df = chunk.iloc[:, 3:]
# Filter rows based on the left-most column values
df = df[df.iloc[:, 0].isin(['A', 'B', 'C'])]
# Map target column and extract features
y = df.iloc[:, 0].map(map_target_values).values
X = df.iloc[:, 1:].values
# Handle missing values by imputing with zero
imputer = SimpleImputer(strategy='constant', fill_value=0)
X = imputer.fit_transform(X)
# Scale the data
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Define classifier with increased max_iter
classifier = LogisticRegression(max_iter=5000)
# Initialize swarm for PSO
options = {'c1': 1, 'c2': 1, 'w': 0.5, 'k': 100, 'p': 20}
dimensions = X.shape[1]
optimizer = ps.discrete.BinaryPSO(n_particles=100, dimensions=dimensions, options=options)
# Perform optimization
cost, pos = optimizer.optimize(f, iters=100, alpha=0.9, X=X, y=y, classifier=classifier)
# Record selected features
selected_features.append(pos)
# Free memory
del df, X, y, optimizer
gc.collect()
# Use voting to select best features
final_selected_features = np.sum(selected_features, axis=0)
selected_feature_indices = np.where(final_selected_features > (len(selected_features) / 2))[0]
# Save selected features to file
with open(output_file, 'w') as f:
for idx in selected_feature_indices:
f.write(f"{idx}\n")
print(f"Selected features have been saved to {output_file}")
</code></pre>
<p>I ran this script</p>
<ol>
<li>on Google Colab (free plan, 28 GB HDD and 12 GB RAM)</li>
<li>on GeForce GTX 780</li>
</ol>
<p>In both cases, the script shows no progress.</p>
<p>Output:</p>
<pre><code>2024-07-06 19:24:47,269 - pyswarms.discrete.binary - INFO - Optimize for 100 iters with {'c1': 1, 'c2': 1, 'w': 0.5, 'k': 100, 'p': 20}
pyswarms.discrete.binary: 0%| |0/100
</code></pre>
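<p>One reason the bar can sit at 0% for a very long time: each PSO iteration calls <code>cross_val_score</code> once per particle, i.e. 100 particles × 3 folds = 300 <code>LogisticRegression</code> fits per iteration on roughly 43k rows × ~1025 columns, and there are 100 iterations per chunk. Timing a single particle evaluation on a shrunken synthetic stand-in makes that cost visible before committing to a full run (the sizes below are made up for the demo):</p>

```python
import time
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 200))          # shrunken stand-in for one chunk
y = rng.integers(0, 3, size=2000)

t0 = time.perf_counter()
scores = cross_val_score(SGDClassifier(max_iter=5, tol=None), X, y, cv=3)
per_particle = time.perf_counter() - t0
est_per_iteration = per_particle * 100    # 100 particles per PSO iteration
```

<p>Scaling <code>per_particle</code> up to the real row and column counts gives a rough per-iteration estimate; if that is minutes, the optimizer is working, just slowly, and a cheaper classifier or fewer particles/folds is the practical fix.</p>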
|
<python><feature-selection><particle-swarm>
|
2024-07-06 19:38:22
| 0
| 17,446
|
user366312
|
78,715,584
| 681,911
|
Fast way to remove multiple rows by indices from a Pytorch or Numpy 2D array
|
<p>I have a numpy array (and equivalently a Pytorch tensor) of shape <code>Nx3</code>. I also have a list of indices corresponding to rows, that I want to remove from this tensor. This list of indices is called <code>remove_ixs</code>. <code>N</code> is very big, about 5 million rows, and <code>remove_ixs</code> is 50k long. The way I'm doing it now is as follows:</p>
<pre><code>mask = [i not in remove_ixs for i in range(my_array.shape[0])]
new_array = my_array[mask,:]
</code></pre>
<p>But the first line takes forever and effectively never terminates. The above is numpy code; an equivalent PyTorch version would also work for me.</p>
<p>Is there a faster way to do this with either numpy or pytorch?</p>
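<p>The comprehension is slow because <code>i not in remove_ixs</code> rescans the 50k-element list for every one of the 5 million rows (O(N·K)). Building the mask with one fancy-index assignment is vectorized; a minimal NumPy sketch on a small stand-in array (the identical pattern works in PyTorch with <code>torch.ones(n, dtype=torch.bool)</code>):</p>

```python
import numpy as np

my_array = np.arange(15).reshape(5, 3)   # small stand-in for the Nx3 array
remove_ixs = [1, 3]

mask = np.ones(my_array.shape[0], dtype=bool)
mask[remove_ixs] = False                 # switch off all doomed rows at once
new_array = my_array[mask]               # rows 0, 2, 4 remain
```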
|
<python><numpy><pytorch>
|
2024-07-06 18:14:49
| 2
| 4,366
|
sanjeev mk
|
78,715,532
| 1,033,217
|
How to Expose Python Enum as "Constants" without the Class Name All At Once
|
<p>The following code creates three "constants" that can be used without the enum's class name before each one. Is there a way to do this to all members of the enum without having an explicit line of code for each member as shown here?</p>
<pre class="lang-py prettyprint-override"><code>import enum
class Test(enum.Enum):
ONE = 1
TWO = 2
THREE = 3
ONE = Test.ONE
TWO = Test.TWO
THREE = Test.THREE
</code></pre>
<p>Basically, is there an efficient way of doing this for an enum with 20-30 members all at once?</p>
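<p>Yes: <code>Enum.__members__</code> is already a mapping from member name to member, so the whole set can be injected into the module namespace in one statement. A minimal sketch:</p>

```python
import enum

class Test(enum.Enum):
    ONE = 1
    TWO = 2
    THREE = 3

# __members__ is {"ONE": Test.ONE, "TWO": Test.TWO, "THREE": Test.THREE};
# update() binds each name at module level, like the manual assignments
globals().update(Test.__members__)
```

<p>If the names should also be re-exported from the module, the same mapping can feed <code>__all__ = list(Test.__members__)</code>.</p>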
|
<python><python-3.x><enums><constants>
|
2024-07-06 17:49:40
| 2
| 795
|
Utkonos
|
78,715,358
| 1,125,062
|
How to assign value to a zero dimensional torch tensor?
|
<pre><code>z = torch.tensor(1, dtype= torch.int64)
z[:] = 5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: slice() cannot be applied to a 0-dim tensor.
</code></pre>
<p>I'm trying to assign a value to a torch tensor but because it has zero dimensions the slice operator doesn't work. How do I assign a new value then?</p>
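<p>A 0-dim tensor can't be sliced, but it can still be written in place, either through <code>Ellipsis</code> indexing (which addresses the whole tensor regardless of rank) or the in-place <code>fill_</code> method. A minimal sketch:</p>

```python
import torch

z = torch.tensor(1, dtype=torch.int64)

z[...] = 5        # Ellipsis indexing works where a slice does not
v1 = int(z)       # 5

z.fill_(7)        # in-place fill_ is the other common option
v2 = int(z)       # 7
```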
|
<python><pytorch>
|
2024-07-06 16:24:12
| 3
| 4,641
|
Anonymous
|
78,715,315
| 3,581,217
|
Filter OpenStreetMap edges on `Surface` type
|
<p>I'm accessing OpenStreetMap data using <code>osmnx</code>, using:</p>
<pre><code>import osmnx as ox
graph = ox.graph_from_place('Bennekom')
nodes, edges = ox.graph_to_gdfs(graph)
</code></pre>
<p>I know from openstreetmap.org/edit that all (?) street features have an attribute <code>Surface</code>, which can be <code>Unpaved</code>, <code>Asphalt</code>, <code>Gravel</code>, et cetera. However, that info is not included in the GeoDataFrames, so I cannot filter or select certain surface types. Is that somehow possible?</p>
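<p>By default osmnx only requests a standard set of way tags, which is why <code>surface</code> never reaches the GeoDataFrame. The tag list is configurable before the download; the sketch below uses the <code>ox.settings.useful_tags_way</code> attribute from recent osmnx versions (older releases configured the same list via <code>ox.config(useful_tags_way=...)</code>) and needs a live network call, so treat it as an untested configuration sketch:</p>

```python
import osmnx as ox

# Ask osmnx to retain the "surface" tag on downloaded edges
ox.settings.useful_tags_way = ox.settings.useful_tags_way + ["surface"]

graph = ox.graph_from_place("Bennekom")
nodes, edges = ox.graph_to_gdfs(graph)

# Edges without the tag hold NaN; filter e.g. on asphalt
asphalt_edges = edges[edges["surface"] == "asphalt"]
```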
|
<python><openstreetmap><osmnx>
|
2024-07-06 16:05:20
| 1
| 10,354
|
Bart
|
78,715,165
| 7,563,454
|
Function to rotate a quaternion by the given amount
|
<pre><code>class quaternion:
__slots__ = ("x", "y", "z", "w")
def __init__(self, x: float, y: float, z: float, w: float):
self.x = x
self.y = y
self.z = z
self.w = w
def rotate(self, x: float, y: float, z: float):
# Correctly bump self.x, self.y, self.z, self.w with the provided radians
</code></pre>
<p>I'm implementing quaternion rotations in my code for cases where Euler vectors are the lesser option. I have functions to set them and retrieve the forward / right / up vectors, but can't find good examples of how to apply a rotation. Unlike Eulers I can't just bump x, y, z, w individually: each modification to one axis must reflect on the others, and I need quaternions to rotate around their own axes, so that, for example, a bump of -0.1 on the X axis rotates the object to its own left, taking its current pitch and roll into account.</p>
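<p>Applying an incremental rotation is a Hamilton product of the current quaternion with a small delta quaternion built from an axis and angle; right-multiplying (<code>q * delta</code>) applies the delta about the object's own, already pitched/rolled axes, while left-multiplying would rotate about world axes. A minimal self-contained sketch with plain <code>(x, y, z, w)</code> tuples (adapting it to the class means assigning the result back to <code>self.x</code> through <code>self.w</code>):</p>

```python
import math

def from_axis_angle(ax, ay, az, angle):
    """Quaternion for `angle` radians about the unit axis (ax, ay, az)."""
    s = math.sin(angle / 2.0)
    return (ax * s, ay * s, az * s, math.cos(angle / 2.0))

def quat_mul(a, b):
    """Hamilton product a*b, components ordered (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    )

def rotate_local(q, rx, ry, rz):
    """Bump q by rx/ry/rz radians about its OWN x, y, z axes, in that order."""
    for axis, angle in (((1, 0, 0), rx), ((0, 1, 0), ry), ((0, 0, 1), rz)):
        if angle:
            q = quat_mul(q, from_axis_angle(*axis, angle))
    return q

q = rotate_local((0.0, 0.0, 0.0, 1.0), math.pi / 2, 0.0, 0.0)
```

<p>Floating-point drift slowly denormalizes the quaternion, so renormalizing (dividing all four components by the norm) every so often is worthwhile.</p>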
|
<python><math><3d><rotation><coordinates>
|
2024-07-06 15:00:42
| 1
| 1,161
|
MirceaKitsune
|
78,715,100
| 9,744,061
|
Subscript title of figure which contain variable in matplotlib
|
<p>I want to write a title in a Python figure which contains the variable <code>n=100</code>. I want it written as $t_n=0.1$ (with n as a subscript). I tried the following:</p>
<pre><code>import matplotlib.pyplot as plt
n=100
plt.figure()
plt.title(f'$t_{n:3d}=0.1$')
plt.show()
</code></pre>
<p>But it shows as follows:</p>
<p><a href="https://i.sstatic.net/v8DDUkno.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8DDUkno.png" alt="enter image description here" /></a></p>
<p>It shows 1 as a subscript and 00 as normal text. I want all of 100 to be the subscript.</p>
<p>If I change the code into <code>plt.title(f'$t_{{n:3d}}=0.1$')</code> the result is</p>
<p><a href="https://i.sstatic.net/3GqnJrGl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GqnJrGl.png" alt="enter image description here" /></a></p>
<p>The result is not as desired. So, how do I do it?</p>
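<p>Inside an f-string, single braces delimit substitutions, so the literal braces that mathtext needs for grouping must be doubled, with the value in a third, inner pair. A minimal sketch of just the string (it can be passed straight to <code>plt.title</code>):</p>

```python
n = 100
title = f'$t_{{{n}}}=0.1$'   # {{ }} -> literal braces, {n} -> 100
# mathtext receives $t_{100}=0.1$, so all three digits are subscripted
```

<p>With the width spec it would be <code>f'$t_{{{n:3d}}}=0.1$'</code>, though for smaller values of <code>n</code> the padding spaces then also land inside the subscript.</p>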
|
<python><matplotlib><plot><title>
|
2024-07-06 14:33:01
| 1
| 305
|
Ongky Denny Wijaya
|
78,715,040
| 3,581,875
|
Import function from package file
|
<p>My project structure is as follows:</p>
<pre><code>package/
module/
__init__.py
component.py
main.py
script.py
</code></pre>
<p>In main.py I have: <code>from module.component import something</code>.</p>
<p>And in script.py I want to import a function (lets call it 'function') from main.py.</p>
<p>What is the correct way to do this?</p>
<p>(I keep getting module not found errors - different modules depending on how I execute python and from which directory)</p>
<p>I tried changing the working directory which didn't help.</p>
<p>I believe this is not quite the same situation as in <a href="https://stackoverflow.com/questions/72852/how-can-i-do-relative-imports-in-python">How can I do relative imports in Python?</a>, because script.py is not actually part of the package I'm working on.</p>
<p>In the end I could make everything work by simply adding 'package' to the path:</p>
<pre><code>sys.path.insert(0, "package")
</code></pre>
<p>inside script.py.</p>
|
<python><import>
|
2024-07-06 14:05:11
| 1
| 1,152
|
giladrv
|
78,714,497
| 4,040,643
|
Unclear behaviour of pinecone's load_dataset
|
<p>In the following program, I have the three functions <code>get_dataset1</code>, <code>get_dataset2</code>, and <code>get_dataset3</code>, which are all very similar. They differ only in when they call <code>len(dataset)</code> and when they restore <code>os.path.join = tmp</code>.</p>
<p>The functions <code>get_dataset1</code> and <code>get_dataset3</code> behave as intended; they load a dataset, and it has a length greater 0. However, in the case of <code>get_dataset2</code>, the dataset has length 0. Why is that?</p>
<pre class="lang-py prettyprint-override"><code>import copy
import os
import time
from pinecone_datasets import load_dataset
datasetName = "langchain-python-docs-text-embedding-ada-002"
def get_dataset1():
os.path.join = lambda *s: "/".join(s) # pinecone bug workaround
dataset = load_dataset(datasetName)
print("Dataset loaded:", len(dataset) != 0) # dataset has length greater than 0
def get_dataset2():
os.path.join = lambda *s: "/".join(s) # pinecone bug workaround
dataset = load_dataset(datasetName)
os.path.join = tmp
print("Dataset loaded:", len(dataset) != 0) # dataset has length 0
def get_dataset3():
os.path.join = lambda *s: "/".join(s) # pinecone bug workaround
dataset = load_dataset(datasetName)
print("Dataset loaded:", len(dataset) != 0) # dataset has length greater than 0
os.path.join = tmp
print("Dataset loaded:", len(dataset) != 0) # dataset has length greater than 0
def main():
get_dataset1()
get_dataset2()
get_dataset3()
if __name__ == "__main__":
tmp = copy.deepcopy(os.path.join)
main()
</code></pre>
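<p>A plausible explanation (stated as conjecture, but it matches the observed behavior): the dataset is loaded lazily, so the patched <code>os.path.join</code> has to still be in place when <code>len(dataset)</code> first forces the read. In <code>get_dataset2</code> the original join is restored before the force, so the buggy path handling runs during loading and yields zero rows; in <code>get_dataset3</code> the data is already materialized (and cached) before the restore. A toy sketch of the same late-binding effect, using an entirely made-up <code>LazyDataset</code> class:</p>

```python
import os

class LazyDataset:
    """Toy stand-in: the path is only joined when the data is first forced."""
    def __init__(self, base, name):
        self._parts = (base, name)
        self._rows = None

    def _force(self):
        if self._rows is None:
            path = os.path.join(*self._parts)           # resolved at force time
            self._rows = [path] if "/" in path else []  # toy rule: only "/" paths find data

    def __len__(self):
        self._force()
        return len(self._rows)

orig_join = os.path.join
os.path.join = lambda *s: "/".join(s)   # the workaround from the question
ds = LazyDataset("bucket", "data.parquet")
n_patched = len(ds)                     # forced while the patch is active -> 1 row
os.path.join = orig_join                # restoring AFTER the force is safe
```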
|
<python><os.path><pinecone>
|
2024-07-06 09:54:46
| 1
| 489
|
Imago
|
78,714,456
| 4,451,315
|
Type a Callable so it can take any number of string arguments, and then keyword arguments
|
<p>I have a function which accepts any function such that:</p>
<ul>
<li>it accepts any number of arguments of type <code>str</code></li>
<li>it may accept any number of keyword arguments of any type</li>
<li>it returns <code>str</code></li>
</ul>
<p>Examples of acceptable function:</p>
<ul>
<li><code>def func(a: str, b: str, c: str, *, foo: int) -> str</code></li>
<li><code>def func(a: str, b: str, *, foo: int) -> str</code></li>
<li><code>def func(a: str) -> str</code></li>
</ul>
<p>Examples of non-acceptable functions:</p>
<ul>
<li><code>def func(a: str, b: str, c: str, foo: int) -> str</code></li>
<li><code>def func(foo: int) -> str</code></li>
</ul>
<p>How can I type this so mypy will accept it?</p>
<p>I've tried <code>Callable[[str, ...], str]</code> but mypy already rejects that, and I don't know how to add in the "kwargs of any type" part</p>
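<p>A callback <code>Protocol</code> can express this, since its <code>__call__</code> may use <code>*args</code>/<code>**kwargs</code> where <code>Callable[...]</code> cannot. One caveat: mypy only accepts concrete functions that could actually be called every way the protocol allows, so a required keyword-only parameter such as <code>foo: int</code> with no default will be rejected unless it is given a default or the protocol narrows its keywords. A minimal sketch:</p>

```python
from typing import Any, Protocol

class StrFunc(Protocol):
    def __call__(self, *args: str, **kwargs: Any) -> str: ...

def consume(func: StrFunc) -> str:
    # the consumer only ever passes positional strings here
    return func("a", "b", "c")

def ok(*parts: str, **extra: Any) -> str:
    return "-".join(parts)

result = consume(ok)   # type-checks and runs
```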
|
<python><mypy><python-typing>
|
2024-07-06 09:39:53
| 0
| 11,062
|
ignoring_gravity
|
78,714,445
| 2,989,330
|
Understanding and introspecting torch.autograd.backward
|
<p>In order to locate a bug, I am trying to introspect the backward calculation in PyTorch. Following the <a href="https://pytorch.org/docs/stable/notes/autograd.html#backward-hooks-execution" rel="nofollow noreferrer">description of torch's Autograd mechanics</a>, I added backward hooks to each parameter of my model as well as hooks on the <code>grad_fn</code> of each activation. The following code snippet illustrates how I add the hooks to the <code>grad_fn</code>:</p>
<pre><code>import torch.distributed as dist
def make_hook(grad_fn, note=None):
if grad_fn is not None and grad_fn.name is not None:
def hook(*args, **kwargs):
print(f"[{dist.get_rank()}] {grad_fn.name()} with {len(args)} args "
f"and {len(kwargs)} kwargs [{note or '/'}]")
return hook
else:
return None
def register_hooks_on_grads(grad_fn, make_hook_fn):
if not grad_fn:
return
hook = make_hook_fn(grad_fn)
if hook:
grad_fn.register_hook(hook)
for fn, _ in grad_fn.next_functions:
if not fn:
continue
var = getattr(fn, "variable", None)
if var is None:
register_hooks_on_grads(fn, make_hook_fn)
x = torch.zeros(15, requires_grad=True)
y = x.exp()
z = y.sum()
register_hooks_on_grads(z.grad_fn, make_hook)
</code></pre>
<p>When running my model, I noticed that each invocation of <code>hook</code> gets two arguments and no keyword arguments. In the case of an <code>AddBackward</code> function, the first argument is a list of two tensors and the second argument is a list of one tensor. The same holds true for the <a href="https://github.com/NVIDIA/Megatron-LM/blob/0bc3547702464501feefeb5523b7a17e591b21fa/megatron/core/tensor_parallel/layers.py#L373" rel="nofollow noreferrer"><code>LinearWithGradAccumulationAndAsyncCommunicationBackward</code></a> function. In the case of a <code>MeanBackward</code> function, both arguments are lists with one tensor each.</p>
<p>My conjecture is that the first argument probably contains the inputs to the operator (or whatever was saved with <code>ctx.save_for_backward</code>) and that the second argument contains the gradients. Am I right with this? Can I just replicate the backward computation with <code>grad_fn(*args)</code> or is there more to it (e.g., state)?</p>
<p>Unfortunately, I didn't find any documentation on this. I am grateful for any pointer towards the relevant documentation.</p>
|
<python><pytorch><torch><autograd>
|
2024-07-06 09:35:25
| 1
| 3,203
|
Green η»Ώθ²
|
78,714,232
| 6,227,500
|
How to convert binary to string (UUID) without UDF in Apache Spark (PySpark)?
|
<p>I can't find a way to convert a binary to a string representation without using a UDF. Is there a way with native PySpark functions and not a UDF?</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import DataFrame, SparkSession
import pyspark.sql.functions as F
import uuid
from pyspark.sql.types import Row, StringType
spark_test_instance = (SparkSession
.builder
.master('local')
.getOrCreate())
df: DataFrame = spark_test_instance.createDataFrame([Row()])
df = df.withColumn("id", F.lit(uuid.uuid4().bytes))
df = df.withColumn("length", F.length(df["id"]))
uuidbytes_to_str = F.udf(lambda x: str(uuid.UUID(bytes=bytes(x), version=4)), StringType())
df = df.withColumn("id_str", uuidbytes_to_str(df["id"]))
df = df.withColumn("length_str", F.length(df["id_str"]))
df.printSchema()
df.show(1, truncate=False)
</code></pre>
<p>gives:</p>
<pre><code>root
|-- id: binary (nullable = false)
|-- length: integer (nullable = false)
|-- id_str: string (nullable = true)
|-- length_str: integer (nullable = false)
+-------------------------------------------------+------+------------------------------------+----------+
|id |length|id_str |length_str|
+-------------------------------------------------+------+------------------------------------+----------+
|[0A 35 DC 67 13 C8 47 7E B0 80 9F AB 98 CA FA 89]|16 |0a35dc67-13c8-477e-b080-9fab98cafa89|36 |
+-------------------------------------------------+------+------------------------------------+----------+
</code></pre>
|
<python><apache-spark><pyspark><binary>
|
2024-07-06 07:50:57
| 1
| 1,040
|
raphaelauv
|
78,713,669
| 3,842,788
|
How to run a Python file in VS Code inside a virtual environment?
|
<p>In my folder <code>folder1</code> I have a python venv which I have activated in my terminal with <code>source .venv/bin/activate</code>.</p>
<p>Then, I run my python file in the terminal using <code>python3.12 file1.py</code></p>
<p>How do I run this through VS Code's <code>launch.json</code>?</p>
|
<python><visual-studio-code><virtual-environment>
|
2024-07-06 01:31:57
| 1
| 6,957
|
Aseem
|
78,713,666
| 7,867,195
|
Find the index of a child in lxml
|
<p>I am using Python 3.12 and lxml.</p>
<p>I want to find a particular tag, and I can do it with elem.find("tag"). elem is of type Element.</p>
<p>But I want to move child elements of this child into the parent where the child was. For that, I need the index of the child. And I can't find a way to find that index.</p>
<p>lxml's API description has the _Element.index() method, but I have no idea how to get an _Element instance from an Element instance.</p>
<p>Please advise how to determine that index. (Using a loop instead of find() can do that but I'd like a neater way).</p>
<p>EDIT: here is a sample XML element</p>
<pre><code><parent>
<child-a/>
<container>
<child-b/>
<child-c/>
</container>
<child-d/>
<child-e/>
</parent>
</code></pre>
<p>I am writing code that finds <code><container></code>, which is a child of <code><parent></code> but whose position I don't know in advance (there can be several of them too), moves its children into the parent where it was, and then deletes <code><container></code>, to get this:</p>
<pre><code><parent>
<child-a/>
<child-b/>
<child-c/>
<child-d/>
<child-e/>
</parent>
</code></pre>
<p>So, I can find <code><container></code> using <code>parent.find()</code>. But to move its children into the same place under <code><parent></code> I need to have the index of <code><container></code>, as the <code>insert()</code> method requires an index. For now I use this kludge:</p>
<pre><code> while True:
index = None
found = None
for i in range(len(parent)):
if parent[i].tag =="container":
found = parent[i]
index = i
break
if found is None:
break
offset = 0
while len(found) > 0:
parent.insert(index+offset,found[0])
offset+=1
parent.remove(found)
</code></pre>
<p>I do know that <code>offset</code> is redundant as one could just increase <code>index</code>, I did that for aesthetic reasons. But the loop itself is quite the kludge. Here is what I would do if <code>Element</code> had an <code>index()</code> method, but it doesn't:</p>
<pre><code> found = parent.find("container")
while found is not None:
index = parent.index(found)
offset = 0
while len(found) > 0:
parent.insert(index+offset,found[0])
offset+=1
parent.remove(found)
found = parent.find("container")
</code></pre>
<p>But <code>Element.index()</code> does not exist; <code>_Element.index()</code> exists but I don't know how to access <code>_Element</code>.</p>
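A hedged note: in lxml, the objects returned by parsing are in fact <code>_Element</code> instances, so <code>parent.index(found)</code> should work directly. The sketch below uses the stdlib <code>xml.etree.ElementTree</code> instead (where <code>list(parent).index(found)</code> plays the same role) so it runs without lxml installed:

```python
import xml.etree.ElementTree as ET

xml = ("<parent><child-a/>"
       "<container><child-b/><child-c/></container>"
       "<child-d/><child-e/></parent>")
parent = ET.fromstring(xml)

found = parent.find("container")
while found is not None:
    index = list(parent).index(found)   # with lxml: parent.index(found)
    for child in reversed(list(found)):
        found.remove(child)             # lxml re-parents on insert; ET needs this
        parent.insert(index, child)
    parent.remove(found)
    found = parent.find("container")

print([c.tag for c in parent])
# ['child-a', 'child-b', 'child-c', 'child-d', 'child-e']
```

Iterating the children in reverse and always inserting at the same index preserves their original order without bookkeeping an offset.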
|
<python><python-3.x><xml><lxml>
|
2024-07-06 01:29:28
| 2
| 1,115
|
Mikhail Ramendik
|
78,713,551
| 395,857
|
I load a float32 Hugging Face model, cast it to float16, and save it. How can I load it as float16?
|
<p>I load a huggingface-transformers float32 model, cast it to float16, and save it. How can I load it as float16?</p>
<p>Example:</p>
<pre><code># pip install transformers
from transformers import AutoModelForTokenClassification, AutoTokenizer
# Load model
model_path = 'huawei-noah/TinyBERT_General_4L_312D'
model = AutoModelForTokenClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Convert the model to FP16
model.half()
# Check model dtype
def print_model_layer_dtype(model):
print('\nModel dtypes:')
for name, param in model.named_parameters():
print(f"Parameter: {name}, Data type: {param.dtype}")
print_model_layer_dtype(model)
save_directory = 'temp_model_SE'
model.save_pretrained(save_directory)
model2 = AutoModelForTokenClassification.from_pretrained(save_directory, local_files_only=True)
print('\n\n##################')
print(model2)
print_model_layer_dtype(model2)
</code></pre>
<p>In this example, <code>model2</code> loads as a <code>float32</code> model (as shown by <code>print_model_layer_dtype(model2)</code>), even though <code>model2</code> was saved as float16 (as <a href="https://i.sstatic.net/Um45egKE.png" rel="nofollow noreferrer">shown in <code>config.json</code></a>). What is the proper way to load it as float16?</p>
<p>Tested with <code>transformers==4.36.2</code> and Python 3.11.7 on Windows 10.</p>
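One approach worth trying: <code>from_pretrained</code> accepts a <code>torch_dtype</code> argument, and passing <code>torch.float16</code> (or <code>"auto"</code>, which should honor the <code>torch_dtype</code> recorded in <code>config.json</code>) loads the weights in half precision. A self-contained sketch with a tiny throwaway model (the sizes are arbitrary; the same kwargs apply to <code>AutoModelForTokenClassification</code>):

```python
import tempfile
import torch
from transformers import BertConfig, BertForTokenClassification

# Tiny model just to demonstrate the dtype round-trip
config = BertConfig(hidden_size=32, num_hidden_layers=1, num_attention_heads=2,
                    intermediate_size=64, vocab_size=100)
model = BertForTokenClassification(config).half()

with tempfile.TemporaryDirectory() as save_directory:
    model.save_pretrained(save_directory)
    model2 = BertForTokenClassification.from_pretrained(
        save_directory,
        torch_dtype=torch.float16,  # or torch_dtype="auto" to use config.json's value
    )

print(next(model2.parameters()).dtype)  # torch.float16
```

Without <code>torch_dtype</code>, transformers defaults to the framework default (float32), regardless of the dtype the weights were saved in.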
|
<python><machine-learning><huggingface-transformers><huggingface><half-precision-float>
|
2024-07-05 23:58:06
| 1
| 84,585
|
Franck Dernoncourt
|
78,713,461
| 1,386,054
|
temporary variable in list comprehension
|
<p>Is there a concise way to rewrite the following code fragment as a list comprehension?</p>
<pre class="lang-py prettyprint-override"><code>nsms = []
for line in lines:
fields = line.split(';')
if len(fields) > 4 and fields[4] == 'NSM':
nsms += [fields]
</code></pre>
<p>The challenge is the temporary variable <code>fields</code>. Without it, the comprehension must call <code>split</code> (up to) three times per line:</p>
<pre class="lang-py prettyprint-override"><code>nsms = [line.split(';')
for line in lines
if len(line.split(';')) > 4 and line.split(';')[4] == 'NSM']
</code></pre>
<p>I'm not worried about the inefficiency of the redundant calls but about their impact on the expressiveness of the code and the increased risk of a typo.</p>
<p>I thought that perhaps <code>with</code> could help:</p>
<pre class="lang-py prettyprint-override"><code># My failed attempt
nsms = [fields
for line in lines
with fields = line.split(';')
if len(fields) > 4 and fields[4] == 'NSM']
</code></pre>
<p>(I found it rather surprising that this fails with an indentation error. My other attempts resulting in a syntax error pointed at the <code>with</code>.)</p>
<p>As far as I can tell, a for loop is the only way to introduce a local temporary into a list comprehension, so a possible hack would be to "iterate" through a list of one item like this:</p>
<pre class="lang-py prettyprint-override"><code>nsms = [fields
for line in lines
for fields in [line.split(';')]
if len(fields) > 4 and fields[4] == 'NSM']
</code></pre>
<p>This works, but is there a way to do it without disguising the temporary as a loop induction variable?</p>
<hr />
<p>In the above examples, <code>lines</code> is a list of the lines from a text file that contains semicolon-separated values. Specifically, from a copy of UnicodeData.txt:</p>
<pre class="lang-py prettyprint-override"><code>with open('UnicodeData.txt', 'r') as source:
lines = source.readlines()
</code></pre>
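Since Python 3.8, the walrus operator can bind the temporary inside the comprehension's filter, where it then stays visible in the output expression — a sketch with inline sample rows in UnicodeData.txt's format (field 4 is the bidi category):

```python
# In the real script these rows come from readlines()
lines = [
    "0041;LATIN CAPITAL LETTER A;Lu;0;L;;;;;N;;;;0061;",
    "0300;COMBINING GRAVE ACCENT;Mn;230;NSM;;;;;N;NON-SPACING GRAVE;;;;",
]

# fields := line.split(';') binds the temporary once per line, in the filter
nsms = [fields
        for line in lines
        if len(fields := line.split(';')) > 4 and fields[4] == 'NSM']

print(len(nsms), nsms[0][0])  # 1 0300
```

The filter runs before the output expression for each item, so the assignment is guaranteed to have happened by the time <code>fields</code> is emitted.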
|
<python><list-comprehension>
|
2024-07-05 23:02:21
| 2
| 49,710
|
Adrian McCarthy
|
78,713,426
| 3,240,688
|
pandas groupby duplicate index in Pandas 1 vs Pandas 2
|
<p>I have the following 3 lines of code of Pandas groupby and apply that behaves differently in Pandas 1.3 vs Pandas 2.2.</p>
<pre><code>df = pd.DataFrame({'group': ['A', 'A', 'B', 'B'], 'value': [1, 2, 3, 4]}).set_index(['group'])
print(df.groupby(level='group', group_keys=True).apply(lambda x: x))
print(df.groupby(level='group', group_keys=False).apply(lambda x: x))
</code></pre>
<p>So input looks like this</p>
<pre><code> value
group
A 1
A 2
B 3
B 4
</code></pre>
<p>In Pandas 1, both resulting dataframes have a single index called <code>group</code>.</p>
<pre><code> value
group
A 1
A 2
B 3
B 4
</code></pre>
<p>In Pandas 2, the first version returns duplicate index called <code>group</code>, while the second version gives single index called <code>group</code>.</p>
<pre><code> value
group group
A A 1
A 2
B B 3
B 4
</code></pre>
<p>and</p>
<pre><code> value
group
A 1
A 2
B 3
B 4
</code></pre>
<p>I'm not 100% clear on what happened. From reading the doc on the <code>group_keys</code>, it seems like this parameter controls whether the group key ('group' in my case) gets added to the index. In that sense, the Pandas 2 behavior seems to make more sense. So we get the result of the apply (which already has 'group' as index), and this parameter decides to whether add the group key to the index again. That's why if you set it to True, you get duplicate index called group.</p>
<p>It does seem a bit confusing that the behavior for <code>group_keys=True</code> differs between the two Pandas versions. It seems like in Pandas 1, it dropped the duplicate index. And I don't see a breaking change documented for this parameter.</p>
<p>Does anyone have a better explanation on what happened?</p>
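A version-independent way to normalize the result (a sketch, not an official pandas migration path) is to drop the extra index level whenever it appears:

```python
import pandas as pd

df = pd.DataFrame({'group': ['A', 'A', 'B', 'B'],
                   'value': [1, 2, 3, 4]}).set_index('group')
out = df.groupby(level='group', group_keys=True).apply(lambda x: x)

# Pandas 2 prepends the group key again, giving a ('group', 'group')
# MultiIndex; Pandas 1 did not. Normalize to a single 'group' level.
if out.index.nlevels > 1:
    out = out.droplevel(0)

print(out)
```

Under both major versions this leaves a single-level <code>group</code> index with rows A, A, B, B.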
|
<python><pandas><dataframe>
|
2024-07-05 22:37:59
| 0
| 1,349
|
user3240688
|
78,713,372
| 2,153,235
|
ascii_graph works on primitive literals but not same(?) array from numpy.arange & zip
|
<p>I am trialing the <a href="https://anaconda.org/conda-forge/ascii_graph" rel="nofollow noreferrer"><code>ascii_graph</code></a> package for Python. If I assemble the histogram data using <code>numpy.arange</code> and <code>zip</code>, the plotting fails. If I assemble the data from primitive literals, it succeeds. Can anyone please explain what the difference is?</p>
<pre><code>import numpy as np
BinMid = np.arange(20) + 1 # Bin mid-points
BinEdge = np.arange(21) + 0.5
# Bin edges, used only in generating histogram
# counts (not shown in this sample code)
nDist = np.array( # Bin counts
[ 7083, 73485, 659204, 3511238, 10859771, 22162510,
34511661, 45891902, 55651178, 59153091, 56242073,
48598282, 37947325, 27541907, 19356046, 13630601,
8810979, 4262462, 1227506, 216751], dtype=np.int64 )
# Histogram data
histData = list( zip( BinMid.astype(str) , nDist ) )
# [('1', 7083),
# ('2', 73485),
# ('3', 659204),
# ('4', 3511238),
# ('5', 10859771),
# ('6', 22162510),
# ('7', 34511661),
# ('8', 45891902),
# ('9', 55651178),
# ('10', 59153091),
# ('11', 56242073),
# ('12', 48598282),
# ('13', 37947325),
# ('14', 27541907),
# ('15', 19356046),
# ('16', 13630601),
# ('17', 8810979),
# ('18', 4262462),
# ('19', 1227506),
# ('20', 216751)]
# Create ASCII histogram plotter
from ascii_graph import Pyasciigraph
graph = Pyasciigraph()
# FAILS: Plot using zip expression assigned to histData
#------------------------------------------------------
for line in graph.graph( "Test" ,
list( zip( BinMid.astype(str) , nDist ) ) ):
print(line)
for line in graph.graph( "Test" , histData ): print(line)
# Traceback (most recent call last):
# Cell In[139], line 1
# for line in graph.graph( "Test" , histData ): print(line)
# File ~\AppData\Local\anaconda3\envs\py39\lib\site-packages\ascii_graph\__init__.py:399 in graph
# san_data = self._sanitize_data(data)
# File ~\AppData\Local\anaconda3\envs\py39\lib\site-packages\ascii_graph\__init__.py:378 in _sanitize_data
# (self._sanitize_string(item[0]),
# File ~\AppData\Local\anaconda3\envs\py39\lib\site-packages\ascii_graph\__init__.py:351 in _sanitize_string
# return info
# UnboundLocalError: local variable 'info' referenced before assignment
# SUCCEEDS: Assign pimitive literals to histData
#-----------------------------------------------
histData = [ ('1', 7083),
('2', 73485),
('3', 659204),
('4', 3511238),
('5', 10859771),
('6', 22162510),
('7', 34511661),
('8', 45891902),
('9', 55651178),
('10', 59153091),
('11', 56242073),
('12', 48598282),
('13', 37947325),
('14', 27541907),
('15', 19356046),
('16', 13630601),
('17', 8810979),
('18', 4262462),
('19', 1227506),
('20', 216751) ]
for line in graph.graph( "Test" , histData ): print(line)
# Test
# ###############################################################################
# 7083 1
# 73485 2
# 659204 3
# ███ 3511238 4
# ███████████ 10859771 5
# ████████████████████████ 22162510 6
# █████████████████████████████████████ 34511661 7
# ██████████████████████████████████████████████████ 45891902 8
# █████████████████████████████████████████████████████████████ 55651178 9
# █████████████████████████████████████████████████████████████████ 59153091 10
# █████████████████████████████████████████████████████████████ 56242073 11
# █████████████████████████████████████████████████████ 48598282 12
# █████████████████████████████████████████ 37947325 13
# ██████████████████████████████ 27541907 14
# █████████████████████ 19356046 15
# ██████████████ 13630601 16
# █████████ 8810979 17
# ████ 4262462 18
# █ 1227506 19
# 216751 20
</code></pre>
<p><strong>Afternote</strong></p>
<p>Based on <em>Nick ODell's</em> response, the following works:</p>
<pre><code>import numpy as np
BinMidStr = [ str(i+1) for i in range(20) ] # Bin mid-point labels
nDist = np.array( # Bin counts
[ 7083, 73485, 659204, 3511238, 10859771, 22162510,
34511661, 45891902, 55651178, 59153091, 56242073,
48598282, 37947325, 27541907, 19356046, 13630601,
8810979, 4262462, 1227506, 216751], dtype=np.int64 )
# Histogram data
histData = list( zip( BinMidStr , nDist ) )
# Create ASCII histogram plotter
from ascii_graph import Pyasciigraph
graph = Pyasciigraph()
# Plot code pattern #1
for line in graph.graph( "Test" ,
list( zip( BinMidStr , nDist ) ) ):
print(line)
# Plot code pattern #2
for line in graph.graph( "Test" , histData ): print(line)
# Plot code pattern #3 for when labels are in integer form
BinMid = [ i+1 for i in range(20) ] # Bin mid-points
BinMidStr = [ str(i) for i in BinMid ]
for line in graph.graph( "Test" ,
list( zip( BinMidStr , nDist ) ) ):
print(line)
</code></pre>
<p>If you work a lot in NumPy and have your bin labels in the form of NumPy integers, be aware that the following almost looks like it creates native (non-NumPy) Python string labels, but it actually creates one string representing how the entire array should be displayed:</p>
<pre><code># Plot code pattern #4 (nonfunctional) for when labels are in
# NumPy integer form
BinMid = np.arange(20) + 1 # Bin edges
BinMidStr = np.array_str( BinMid )
# '[ 1 2 3 4 5 6 7 8 9 10 11
# 12 13 14 15 16 17 18 19 20]'
BinMidStr = np.array_str( BinMid.astype('str') )
# "['1' '2' '3' '4' '5' '6' '7' '8' '9' '10' '11' '12'
# '13' '14' '15' '16'\n '17' '18' '19' '20']"
</code></pre>
<p>I find it odd that <code>Pyasciigraph.graph()</code> accepts an array of NumPy datatype for the numerical bar sizes, but not NumPy strings for the bar labels. Another thing I am puzzled by is the lack of a function prototype for the <code>Pyasciigraph.graph()</code> method. While I still consider myself new to Python, most packages I've used provide Python-style documentation with function prototypes and explanations of the input and output arguments.</p>
<p>I wish there were standard streamlined ways to convert between arrays of native Python and NumPy data types. Going from NumPy to native Python seems trickier, as there are probably fewer cases in which people want that. <em><strong>Afternote:</strong></em> Based on <a href="https://stackoverflow.com/questions/9452775">this Q&A</a>, it seems that <code>MyNParray.tolist()</code> is the standard streamlined idiom to convert a NumPy array of NumPy data types to a native Python list of Python data types. It is even better than <code>[ Element.item() for Element in MyNParray ]</code>. The latter doesn't work on a NumPy array of NumPy strings.</p>
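To illustrate the difference described above: <code>np.array_str</code> renders the whole array as one display string, while <code>.tolist()</code> yields native Python objects element by element:

```python
import numpy as np

BinMid = np.arange(20) + 1
labels = BinMid.astype(str).tolist()   # list of native Python str
print(labels[:3], type(labels[0]))     # ['1', '2', '3'] <class 'str'>

as_one_string = np.array_str(BinMid)   # a single str rendering the array
print(type(as_one_string))             # <class 'str'>
```

The <code>tolist()</code> result is exactly what a label argument like Pyasciigraph's expects: a sequence of plain strings rather than NumPy scalars.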
|
<python><numpy><python-zip>
|
2024-07-05 22:05:22
| 1
| 1,265
|
user2153235
|
78,713,151
| 595,305
|
How can I implement a responsive QPlainTextEdit?
|
<p>Here's an MRE:</p>
<pre><code>import sys, random
from PyQt5 import QtWidgets, QtCore, QtGui
class MainWindow(QtWidgets.QMainWindow):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
main_splitter = MainSplitter(self)
self.setCentralWidget(main_splitter)
main_splitter.setOrientation(QtCore.Qt.Vertical)
main_splitter.setStyleSheet('background-color: red; border: 1px solid pink;');
# top component of vertical splitter: a QFrame to hold horizontal splitter and breadcrumbs QPlainTextEdit
top_frame = QtWidgets.QFrame()
top_frame_layout = QtWidgets.QVBoxLayout()
top_frame.setLayout(top_frame_layout)
top_frame.setStyleSheet('background-color: green; border: 1px solid green;');
main_splitter.addWidget(top_frame)
# top component of top_frame: horizontal splitter
h_splitter = HorizontalSplitter()
h_splitter.setStyleSheet('background-color: cyan; border: 1px solid orange;')
top_frame_layout.addWidget(h_splitter)
# bottom component of top_frame: QPlainTextEdit (for "breadcrumbs")
self.breadcrumbs_pte = BreadcrumbsPTE(top_frame)
top_frame_layout.addWidget(self.breadcrumbs_pte)
self.text = 'some plain text '
self.breadcrumbs_pte.setPlainText(self.text * 50)
self.breadcrumbs_pte.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustToContents)
self.breadcrumbs_pte.setStyleSheet('background-color: orange; border: 1px solid blue;');
self.breadcrumbs_pte.setMinimumWidth(300)
self.breadcrumbs_pte.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
# bottom component of vertical splitter: a QFrame
bottom_panel = BottomPanel(main_splitter)
bottom_panel.setStyleSheet('background-color: magenta; border: 1px solid cyan;')
main_splitter.addWidget(bottom_panel)
def resizeEvent(self, *args):
print('resize event...')
n_repeat = random.randint(10, 50)
self.breadcrumbs_pte.setPlainText(self.text * n_repeat)
super().resizeEvent(*args)
class BreadcrumbsPTE(QtWidgets.QPlainTextEdit):
def sizeHint(self):
return QtCore.QSize(500, 100)
class MainSplitter(QtWidgets.QSplitter):
def sizeHint(self):
return QtCore.QSize(40, 150)
class HorizontalSplitter(QtWidgets.QSplitter):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.setOrientation(QtCore.Qt.Horizontal)
self.left_panel = LeftPanel()
self.left_panel.setStyleSheet('background-color: yellow; border: 1px solid black;');
self.addWidget(self.left_panel)
self.left_panel.setMinimumHeight(150)
right_panel = QtWidgets.QFrame()
right_panel.setStyleSheet('background-color: black; border: 1px solid blue;');
self.addWidget(right_panel)
# to achieve 66%-33% widths ratio
self.setStretchFactor(0, 20)
self.setStretchFactor(1, 10)
def sizeHint(self):
return self.left_panel.sizeHint()
class LeftPanel(QtWidgets.QFrame):
def sizeHint(self):
return QtCore.QSize(150, 250)
class BottomPanel(QtWidgets.QFrame):
def sizeHint(self):
return QtCore.QSize(30, 180)
app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
<p>If you run it, you'll see that if you adjust the size of the window the text in the <code>QPlainTextEdit</code> changes each time.</p>
<p><strong>What I'm trying to achieve</strong>: I want the text in the <code>QPlainTextEdit</code> to adjust the size of that component and the component above it (<code>HorizontalSplitter</code>) so that the <code>QPlainTextEdit</code> fits the text exactly, leaving no space at the bottom.</p>
<p>I want that to happen without any adjustment to the size of the main window (obviously, if the window size did change, this would, as the code is written, currently lead to infinite triggering of <code>MainWindow.resizeEvent()</code>).</p>
<p>The tutorials on Qt/PyQt just don't seem to give a comprehensive technical explanation of how all the various mechanisms work and how they interact. For example, I know that <code>sizeHint</code> plays a crucial role relating to sizing and layouts, but, short of trying to examine the source code in depth, I don't know how I can improve my understanding.</p>
<p>For example, I tried infinite permutations of commenting out <code>sizeHint</code> on the various classes here, including commenting them all out: but <code>self.breadcrumbs_pte.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustToContents)</code> never seems to work (i.e. as I want it to!).</p>
<p>NB if necessary, <code>BottomPanel</code> (i.e. 2nd child of the vertical <code>QSplitter</code>) can be set with minimum and maximum height (i.e. fixed height) to make things simpler. The main aim is to get the horizontal <code>QSplitter</code> and the <code>QPlainTextEdit</code> to adjust so that the latter's height is just perfectly adjusted for the text it contains...</p>
|
<python><pyqt><resize>
|
2024-07-05 20:14:41
| 1
| 16,076
|
mike rodent
|
78,713,012
| 62,857
|
Read parquet file using pandas and pyarrow fails for time values larger than 24 hours
|
<p>I have exported a parquet file using parquet.net which includes a <code>duration</code> column that contains values that are greater than 24 hours. I've opened the file using the Floor viewer tool that's included with parquet.net, and the column has type INT32, converted type TIME_MILIS and logical type TIME (unit: MILLIS, isAdjustedToUTC: True). In .NET code the column was added as <code>new DataField<DateTime>("duration")</code></p>
<p>I'm trying to parse the file using pandas and pyarrow using the following method:</p>
<pre><code>pd.read_parquet('myfile.parquet', engine="pyarrow")
</code></pre>
<p>This results in the following error:</p>
<pre><code>ValueError: hour must be in 0..23
</code></pre>
<p>Is there a way to give pyarrow directions to load columns as the primitive type instead of the logical type? Pandas has a <code>pandas.Period</code> type and Python has the <code>datetime.timedelta</code> type. Is parquet.net creating an invalid column type?</p>
|
<python><pandas><parquet><pyarrow><parquet.net>
|
2024-07-05 19:18:07
| 1
| 2,270
|
Wouter
|
78,712,904
| 4,391,360
|
Create a similar matrix object in matlab and python
|
<p>For comparison purposes, I want to create an object which would have the same shape and indexing properties in matlab and python (numpy).
Let's say that on the MATLAB side the object would be:</p>
<pre><code>arr_matlab = cat(4, ...
cat(3, ...
[ 1, 2;
3, 4;
5, 6], ...
[ 7, 8;
9, 10;
11, 12], ...
[ 13, 14;
15, 16;
17, 18], ...
[ 20, 21;
22, 23;
24, 25]), ...
cat(3, ...
[ 26, 27;
28, 29;
30, 31], ...
[ 32, 33;
34, 35;
36, 37], ...
[ 38, 39;
40, 41;
42, 43], ...
[ 44, 45;
46, 47;
48, 49]), ...
cat(3, ...
[ 50, 51;
52, 53;
54, 55], ...
[ 56, 57;
58, 59;
60, 61], ...
[ 62, 63;
64, 65;
66, 67], ...
[ 68, 69;
70, 71;
72, 73]), ...
cat(3, ...
[ 74, 75;
76, 77;
78, 79], ...
[ 80, 81;
82, 83;
84, 85], ...
[ 86, 87;
88, 89;
90, 91], ...
[ 92, 93;
94, 95;
96, 97]), ...
cat(3, ...
[ 98, 99;
100, 101;
102, 103], ...
[104, 105;
106, 107;
108, 109], ...
[110, 111;
112, 113;
114, 115], ...
[116, 117;
118, 119;
120, 121]));
K>> size(arr_matlab)
ans =
3 2 4 5
K>> arr_matlab(1, 2, 1 ,1)
ans =
2
</code></pre>
<p>size(arr_matlab) should be identical to arr_python.shape and indexing should give the same result (same result for arr_python[0,1,0,0] and arr_matlab(1,2,1,1) for example).
For the moment I can't do it.</p>
<pre><code>data = np.array([
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25,
    26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
    50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
    74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97,
    98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121
])

# Reshape to 3x2x4x5
arr_python = data.reshape((3, 2, 4, 5), order='F')

arr_python.shape        # (3, 2, 4, 5)
arr_python[0, 1, 0, 0]  # 4, but arr_matlab(1, 2, 1, 1) is 2
</code></pre>
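One sketch of a fix, based on how the MATLAB literal above is laid out: each 3x2 page is written row by row, with pages ordered by dimension 3 and then dimension 4 — i.e. the flat list is in C order for shape <code>(dim4, dim3, rows, cols)</code>. Reshaping that way and transposing back should reproduce MATLAB's <code>(3, 2, 4, 5)</code> shape with matching indices (0-based vs 1-based):

```python
import numpy as np

# Same values as the MATLAB literal (note that 19 is skipped there)
data = np.concatenate([np.arange(1, 19), np.arange(20, 122)])

# C-order fill as (dim4, dim3, rows, cols), then reorder the axes to
# (rows, cols, dim3, dim4) to match the MATLAB array
arr_python = data.reshape((5, 4, 3, 2)).transpose(2, 3, 1, 0)

print(arr_python.shape)        # (3, 2, 4, 5)
print(arr_python[0, 1, 0, 0])  # 2, same as arr_matlab(1, 2, 1, 1)
```

The <code>order='F'</code> reshape alone does not work here because the flat data is not in column-major order: within each 3x2 page the values run across rows, not down columns.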
|
<python><numpy><matlab>
|
2024-07-05 18:41:55
| 1
| 727
|
servoz
|
78,712,857
| 9,951,273
|
BigQuery `load_table_from_dataframe` in a transaction?
|
<p>I have multiple data pipelines which perform the following actions in BigQuery:</p>
<ol>
<li>Load data into a table using the BQ Python client's <code>load_table_from_dataframe</code> method.</li>
<li>Execute a BigQuery <code>merge</code> SQL statement to update/insert that data to another table.</li>
<li>Truncate the original table to keep it empty for the next pipeline.</li>
</ol>
<p>How can I perform these actions in a transaction to prevent pipelines from interfering with one another?</p>
<p>I know I can use <code>BEGIN TRANSACTION</code> and <code>COMMIT TRANSACTION</code> as shown in <a href="https://cloud.google.com/bigquery/docs/transactions#transaction_scope" rel="nofollow noreferrer">the docs</a> but my insertion using <code>load_table_from_dataframe</code> does not allow me to include my own raw SQL, so I'm unsure how to implement this part in a transaction.</p>
<p>Additionally BigQuery <a href="https://cloud.google.com/bigquery/docs/transactions#transaction_concurrency" rel="nofollow noreferrer">cancels transactions that conflict with one another</a>. Ideally I want each transaction to queue rather than fail on conflict. I question whether there is a better approach to this.</p>
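One sketch (all table names are placeholders): keep <code>load_table_from_dataframe</code> targeting a per-pipeline staging table outside the transaction, then run the merge and the cleanup together as a single multi-statement transaction. Note that <code>TRUNCATE TABLE</code> is, to my understanding, not supported inside BigQuery transactions, so a <code>DELETE</code> serves as the DML equivalent:

```sql
BEGIN TRANSACTION;

MERGE `project.dataset.target` T
USING `project.dataset.staging` S
ON T.id = S.id
WHEN MATCHED THEN
  UPDATE SET T.value = S.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (S.id, S.value);

-- TRUNCATE is not allowed inside a transaction; DELETE is
DELETE FROM `project.dataset.staging` WHERE TRUE;

COMMIT TRANSACTION;
```

Giving each pipeline run its own staging table (e.g. suffixed with a job id) would also reduce contention, since concurrent transactions would then only conflict on the shared target table.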
|
<python><google-bigquery>
|
2024-07-05 18:20:55
| 0
| 1,777
|
Matt
|
78,712,792
| 916,945
|
Use scipy.integrate.nquad to integrate integrals whose result is a complex number
|
<p>I want to try to numerically compute the integral of <code>1/(1+x**2+y**2)</code>, where <code>y</code> is from <code>0</code> to <code>sqrt(16-x**2)</code> and <code>x</code> is from <code>0</code> to <code>5</code>.</p>
<p>The problem is that the result is a complex number.</p>
<p>Can <code>scipy.nquad</code> handle it or are there any other functions?</p>
<p>With <code>scipy.nquad</code> the line <code>flip, a, b = b < a, min(a, b), max(a, b)</code> gives the error, because it just can't compare complex numbers.</p>
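A hedged observation: here the complex numbers come only from the bound <code>sqrt(16 - x**2)</code> turning imaginary for <code>x > 4</code>, where the integration region is actually empty — so clamping the bound to zero keeps everything real. (A genuinely complex-valued *integrand* can instead be split into real and imaginary parts and each integrated separately, since <code>nquad</code> itself only handles real values.) A sketch of the clamping approach:

```python
import numpy as np
from scipy import integrate

def f(y, x):
    return 1.0 / (1.0 + x**2 + y**2)

def y_range(x):
    # For x > 4, sqrt(16 - x**2) would be imaginary; the region is
    # empty there, so clamp the upper bound to 0 instead
    return [0.0, np.sqrt(max(16.0 - x**2, 0.0))]

result, err = integrate.nquad(f, [y_range, [0.0, 5.0]])
print(result)  # ~2.2252, i.e. (pi/4) * ln(17) over the quarter disc of radius 4
```

In polar coordinates the exact value over the quarter disc is (π/2)·(1/2)·ln(17), which makes a handy sanity check for the numerical result.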
|
<python><numpy><scipy><complex-numbers>
|
2024-07-05 17:57:16
| 1
| 2,837
|
Paul R
|
78,712,452
| 832,490
|
The date provided in the filter query must be within the last 30 days and not in the future
|
<p>I have the following request which works fine when days is less than 30.</p>
<pre><code>days = 29
</code></pre>
<pre><code>end = datetime.now(timezone.utc)
start = end - timedelta(days=days)
query_parameters = CallRecordsRequestBuilder.CallRecordsRequestBuilderGetQueryParameters(
filter=" ".join(
f"""
participants_v2/any(p:p/id eq '{uuid}')
and
startDateTime ge {start.strftime('%Y-%m-%dT%H:%M:%SZ')}
and
startDateTime lt {end.strftime('%Y-%m-%dT%H:%M:%SZ')}
""".split()
)
)
</code></pre>
<p>When I try to fetch 30 days, I receive the following error</p>
<blockquote>
<p>ODataError: APIError Code: 400 message: None error: MainError(additional_data={}, code='BadRequest', details=None, inner_error=InnerError(additional_data={}, client_request_id='647fb989-75ef-4f39-8804-1a7c6da25cf1', date=DateTime(2024, 7, 5, 16, 14, 21, tzinfo=Timezone('UTC')), odata_type=None, request_id='13371966-0244-40c0-a503-dca6082d53dc'), message='The date provided in the filter query must be within the last 30 days and not in the future.', target=None)</p>
</blockquote>
<p>So my question is, how do I adjust the query, or how do I make the query cover an entire month given that some months have 31 days?</p>
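One hedged approach: since the callRecords filter window is hard-capped at the last 30 days, clamp the computed start to that window (a small safety margin keeps the boundary from landing exactly 30 days back while the request is in flight). A full 31-day calendar month then simply cannot be fetched in one filter and has to be split into up-to-30-day slices. The helper below is hypothetical:

```python
from datetime import datetime, timedelta, timezone

def clamped_start(end: datetime, days: int, max_days: int = 30,
                  margin: timedelta = timedelta(minutes=5)) -> datetime:
    """Start of the window, never older than the API's max_days limit."""
    start = end - timedelta(days=days)
    earliest = end - timedelta(days=max_days) + margin
    return max(start, earliest)

end = datetime.now(timezone.utc)
start = clamped_start(end, days=31)        # would be rejected unclamped
print(start >= end - timedelta(days=30))   # True
```

The clamped <code>start</code> then drops into the existing <code>startDateTime ge …</code> filter unchanged.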
|
<python><azure><microsoft-graph-api><microsoft-teams>
|
2024-07-05 16:19:01
| 0
| 1,009
|
Rodrigo
|
78,712,374
| 2,218,321
|
Pyhull python library: Output of convex hull are not subset of the input set
|
<p>I installed <a href="https://pypi.org/project/pyhull/" rel="nofollow noreferrer">Pyhull</a> library to generate the convex hull for a set of points. The documentation and usage are straightforward. However, the output of the library is strange. Its documentation at <a href="https://pythonhosted.org/pyhull/" rel="nofollow noreferrer">pythonhosted</a> has this snippet</p>
<pre><code>from pyhull.convex_hull import ConvexHull
pts = [[-0.5, -0.5], [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5], [0,0]]
hull = ConvexHull(pts)
hull.vertices
hull.points
</code></pre>
<p>and the output, respectively, is</p>
<pre><code>[[0, 2], [1, 0], [2, 3], [3, 1]]
[[-0.5, -0.5], [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5], [0, 0]]
</code></pre>
<p>The convex hull of a set of points <code>P</code> is the smallest convex set containing <code>P</code>. Therefore, the points defining the convex hull should be a subset of the input set, but the library's output does not appear to be.</p>
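My reading of the output above (an interpretation, not confirmed by pyhull's docs): <code>hull.vertices</code> lists facets rather than corner coordinates — each pair <code>[i, j]</code> is an edge given as indices into <code>hull.points</code>, so the hull is still defined by a subset of the input, just indirectly. For comparison, <code>scipy.spatial.ConvexHull</code> exposes vertex indices directly:

```python
import numpy as np
from scipy.spatial import ConvexHull

pts = np.array([[-0.5, -0.5], [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5], [0, 0]])
hull = ConvexHull(pts)

corners = pts[hull.vertices]            # hull points, a true subset of pts
print(sorted(hull.vertices.tolist()))   # [0, 1, 2, 3] -- interior point 4 excluded
```

With the facet interpretation, the pyhull corner set can be recovered the same way: take the union of the indices in <code>hull.vertices</code> and index into <code>hull.points</code>.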
|
<python><convex-hull>
|
2024-07-05 15:57:53
| 1
| 2,189
|
M a m a D
|
78,711,953
| 16,525,263
|
How to extract date part from folder name and move it to another folder on hdfs using pyspark
|
<p>I currently have folders and sub-folders in day-wise structure in this path <code>'/dev/data/'</code></p>
<pre><code>2024.03.30
part-00001.avro
part-00002.avro
2024.03.31
part-00001.avro
part-00002.avro
2024.04.01
part-00001.avro
part-00002.avro
2024.04.02
part-00001.avro
part-00002.avro
</code></pre>
<p>and so on.. I have avro files inside these folders.</p>
<p>I need to move this to another path by combining all day-wise folders into month folder. I need the folder structure as below</p>
<pre><code>2024.03
2024.04
2024.05
</code></pre>
<p>and so on. All the avro files should be under month folder.</p>
<p>I have written the below code where I'm creating extra columns and using existing date column to extract the yyyy-MM part.</p>
<pre><code>day_df = spark.read.format("avro").load("path/to/dev/data")
month_df = day_df.withColumn("month", F.date_format(F.col("exit_date"),'yyyy-MM'))
month_df.write.partitionBy("month").format("com.databricks.spark.avro").save("path/to/dest")
</code></pre>
<p>Can anyone suggest how to achieve this without creating a <code>month</code> column?</p>
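A sketch of a pure-path approach that avoids reading the data at all: derive the month from each day folder's name, then move the files via the Hadoop FileSystem API. The name-to-month mapping itself is plain Python:

```python
import re

def month_dir(day_dir: str) -> str:
    """'2024.03.30' -> '2024.03'; raises on unexpected folder names."""
    m = re.fullmatch(r"(\d{4}\.\d{2})\.\d{2}", day_dir)
    if m is None:
        raise ValueError(f"not a day folder: {day_dir!r}")
    return m.group(1)

print(month_dir("2024.03.30"))  # 2024.03
```

The actual moves can then be done per file through the JVM gateway (a sketch, assuming a live <code>spark</code> session): obtain <code>fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(spark._jsc.hadoopConfiguration())</code>, list the day folders, and call <code>fs.rename(src, dst)</code> with <code>org.apache.hadoop.fs.Path</code> objects, creating each month folder first with <code>fs.mkdirs(...)</code>.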
|
<python><pyspark><hdfs>
|
2024-07-05 14:19:25
| 0
| 434
|
user175025
|
78,711,626
| 8,842,262
|
Pydantic forces SQL query to already populated field
|
<p>I have an interesting case happening with Pydantic: even though the fields are already populated on the object, when FastAPI returns it (or when I manually use <code>model_validate</code>), extra SQL is issued for some fields (to be more precise, the undeferred and joinedload-ed fields):</p>
<pre><code>class PortfolioFullSchema(BaseModel):
id: int
title: str
category: str
cover_image: Optional[str] = None
description: Optional[str] = None
user_username: str
is_saved: bool = False
is_appreciated: bool = False
read: int
tools: list[str]
tags: list[str]
portfolio_files: list[PortfolioFileSchema]
created_at: datetime
simple_appreciates_count: int = 0
senior_appreciates_count: int = 0
@router.get(
'/{portfolio_id}',
summary='Get portfolio detail',
response_model=PortfolioFullSchema,
status_code=status.HTTP_200_OK,
)
async def get_portfolio(
portfolio_id: int,
auth_user: OptionalAuthUser,
db: DbDependency,
):
"""get user portfolio route"""
builder = PortfolioQueryBuilder(
auth_user_username=auth_user and auth_user.username, # type: ignore[arg-type]
portfolio_id=portfolio_id,
)
portfolio: Portfolio = db.execute(await builder.full()).unique()
print(portfolio.portfolio_files) # [<models.portfolio.PortfolioFile object at 0x7f5f0a5d8a50>]
print(portfolio.created_at) # 2024-05-22 09:20:11.749022+00:00
portfolio.simple_appreciates_count, portfolio.senior_appreciates_count = 4, 5 # type: ignore[attr-defined]
portfolio.read += 1
db.add(portfolio)
db.commit()
return PortfolioFullSchema.model_validate(portfolio, from_attributes=True)
</code></pre>
<p><code>builder.full()</code> comes to:</p>
<pre><code>select(Portfolio).where(Portfolio.id == self.portfolio_id)
.options(joinedload(Portfolio.portfolio_files).joinedload(PortfolioFile.file))
.options(undefer(Portfolio.created_at))
</code></pre>
<p>The SQL generated by it is as follows (for id=7):</p>
<pre><code>select
user_profile.id,
user_profile.user_username,
user_profile.is_senior,
user_profile.job_title,
user_profile.company_name,
user_profile.location,
user_profile.biography,
user_profile.social_profiles,
user_profile.cover_photo,
user_profile.open_to_work,
user_user.username,
user_user.id as id_1,
user_user.first_name,
user_user.last_name,
user_user.email,
user_user.avatar,
user_user.password,
user_user.user_type,
user_user.is_active,
user_user.last_login,
portfolio_owner.id as id_2,
portfolio_owner.portfolio_id,
portfolio_owner.user_username as user_username_1,
portfolio_portfolio.id as id_3,
portfolio_portfolio.user_username as user_username_2,
portfolio_portfolio.title,
portfolio_portfolio.description,
portfolio_portfolio.cover_image,
portfolio_portfolio.category,
portfolio_portfolio.read,
portfolio_portfolio.tools,
portfolio_portfolio.tags,
portfolio_portfolio.created_at,
core_file_1.id as id_4,
core_file_1.name,
core_file_1.original_name,
core_file_1.url,
core_file_1.mime_type,
portfolio_file_1.id as id_5,
portfolio_file_1.portfolio_id as portfolio_id_1,
portfolio_file_1.file_id,
portfolio_file_1."order",
portfolio_file_1.slideshow
from
portfolio_portfolio
join portfolio_owner on
portfolio_portfolio.id = portfolio_owner.portfolio_id
join user_user on
user_user.username = portfolio_owner.user_username
join user_profile on
user_user.username = user_profile.user_username
left outer join portfolio_file as portfolio_file_1 on
portfolio_portfolio.id = portfolio_file_1.portfolio_id
left outer join core_file as core_file_1 on
core_file_1.id = portfolio_file_1.file_id
where
portfolio_portfolio.id = 7
order by
portfolio_owner.id
</code></pre>
<p>The two prints in main code works okay: they DO NOT generate extra SQL, and the fields are loaded as they are, showing correct results. However, when I do <code>model_validate</code>, two extra SQLs are run:</p>
<pre><code>select
portfolio_portfolio.id as portfolio_portfolio_id,
portfolio_portfolio.user_username as portfolio_portfolio_user_username,
portfolio_portfolio.title as portfolio_portfolio_title,
portfolio_portfolio.description as portfolio_portfolio_description,
portfolio_portfolio.cover_image as portfolio_portfolio_cover_image,
portfolio_portfolio.category as portfolio_portfolio_category,
portfolio_portfolio.read as portfolio_portfolio_read,
portfolio_portfolio.tools as portfolio_portfolio_tools,
portfolio_portfolio.tags as portfolio_portfolio_tags,
core_file_1.id as core_file_1_id,
core_file_1.name as core_file_1_name,
core_file_1.original_name as core_file_1_original_name,
core_file_1.url as core_file_1_url,
core_file_1.mime_type as core_file_1_mime_type,
portfolio_file_1.id as portfolio_file_1_id,
portfolio_file_1.portfolio_id as portfolio_file_1_portfolio_id,
portfolio_file_1.file_id as portfolio_file_1_file_id,
portfolio_file_1."order" as portfolio_file_1_order,
portfolio_file_1.slideshow as portfolio_file_1_slideshow
from
portfolio_portfolio
left outer join portfolio_file as portfolio_file_1 on
portfolio_portfolio.id = portfolio_file_1.portfolio_id
left outer join core_file as core_file_1 on
core_file_1.id = portfolio_file_1.file_id
where
portfolio_portfolio.id = 7
</code></pre>
<p>AND</p>
<pre><code>select
portfolio_portfolio.created_at as portfolio_portfolio_created_at
from
portfolio_portfolio
where
portfolio_portfolio.id = 7
</code></pre>
<p>However, those queries should not be triggered, as the <code>portfolio</code> object already contains that data.</p>
|
<python><sqlalchemy><fastapi><pydantic>
|
2024-07-05 13:03:35
| 1
| 513
|
Miradil Zeynalli
|
78,711,489
| 1,818,059
|
Calculate position (transform) on SVG object in code?
|
<p>I have started a small project to make sheets of (complex) labels in code by first merging a pre-generated QR code (SVG format) onto a single label, then merging this label onto a master sheet.</p>
<p>The merging is automated in Python and works.</p>
<p>But the position of the QR in the single label is done by first editing a label to my needs in Inkscape, then duplicating the result. This requires very specific parameters on the QR code source to succeed.</p>
<p>The QR generated starts like this. Note the width and height which I can of course extract.</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="294" height="294" version="1.1" xmlns="http://www.w3.org/2000/svg">
<desc>Zint Generated Symbol</desc>
<g id="barcode" fill="#000000">
<rect x="0" y="0" width="294" height="294" fill="#FFFFFF" />
<rect x="0.00" y="0.00" width="98.00" height="14.00" />
..many more follows..
</code></pre>
<p>By designing in Inkscape, I have found that I need to apply a transformation parameter to the barcode to insert it into the label. One example is <code>transform="matrix(0.086, 0,0,0.086, 52, 100)"</code></p>
<p>This is all good, but how can I go from the <code>294 x 294</code> (or any other size) to the same position on the label (i.e. the correct transformation), making me independent of any specific setting in the Zint generator? My desired end goal is to extract the size and control the final size and x,y position as indicated here:</p>
<p><a href="https://i.sstatic.net/YF4349nx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YF4349nx.png" alt="extract of label showing QR size and position" /></a></p>
<p>This way I can also make small adjustments in the script to fine-tune the position, without having to go back to Inkscape.</p>
<p>I modify / merge the SVG files using Python and <code>xml.etree.ElementTree</code>. Could it be simpler / better to use a dedicated SVG module instead?</p>
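<p>For what it's worth, here is a minimal sketch of the calculation I have in mind, using only the stdlib <code>xml.etree.ElementTree</code>. The target width of <code>25.3</code> and the helper name are just placeholders for illustration, and it assumes the Zint SVG is square (width equals height, as in my example above):</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET

def qr_transform(qr_svg_text, target_width, x, y):
    """Build a transform string that scales the QR's native pixel size
    down to target_width and positions its top-left corner at (x, y)."""
    root = ET.fromstring(qr_svg_text)
    native = float(root.get("width"))   # e.g. 294 from the Zint output
    scale = target_width / native       # e.g. 25.3 / 294 ~= 0.086
    return f"matrix({scale:.4f}, 0, 0, {scale:.4f}, {x}, {y})"

qr = '&lt;svg xmlns="http://www.w3.org/2000/svg" width="294" height="294"&gt;&lt;/svg&gt;'
print(qr_transform(qr, 25.3, 52, 100))
</code></pre>
<p>The idea being that the scale factor is simply the desired on-label width divided by whatever <code>width</code> the generator happened to emit, so the Zint settings no longer matter.</p>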
<p>The final merge of generated labels on the mastersheet is fairly trivial. Same problem but should be done only once and not change. I will be happy to answer questions in that relation if anyone is curious.</p>
|
<python><xml><svg>
|
2024-07-05 12:32:44
| 0
| 1,176
|
MyICQ
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.