QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,341,375 | 579,228 | Count number of regexp matches in a Queryset regexp filter in Django | <p>I have a queryset that I'm running in a Django view.</p>
<pre><code>sp = Session.objects.filter(year__gte=start_year, year__lte=end_year, data__iregex=word_to_search_for_re)
</code></pre>
<p>Although the variable is named word_to_search_for, it can hold a couple of words separated by whitespace.</p>
<p>What I want to do is count how many times the match occurs in data. At the moment, I'm pulling the results from this queryset back and counting them manually, which seems grossly inefficient. I've seen something called annotations (<a href="https://medium.com/@singhgautam7/django-annotations-steroids-to-your-querysets-766231f0823a" rel="nofollow noreferrer">https://medium.com/@singhgautam7/django-annotations-steroids-to-your-querysets-766231f0823a</a>), but I'm not sure how to get this to work in this case, or whether I'm asking too much.</p>
| <python><django> | 2023-10-22 19:18:26 | 1 | 1,850 | James |
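A hedged, plain-Python stand-in for the manual counting the question describes (no Django; the pattern and text below are illustrative). A database-side equivalent would need a regex-count SQL function exposed through a Django `Func` annotation, and whether one exists depends on the backend:

```python
import re

def count_matches(pattern, text):
    # Case-insensitive, to mirror the queryset's __iregex lookup.
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

print(count_matches(r"jace", "Jace saw jace and JACE"))  # 3
```

If the counting stays in Python, at least restrict the queryset with `.only('data')` (or `.values_list('data', flat=True)`) so only the searched column is pulled back.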
77,341,186 | 3,143,269 | Getting a pipe reference into a task | <p>I have two classes, Task1 and Task2. Task1 should read from a TCP/IP port and send the data up a pipe to Task2. Task2 should return the data to Task1, which echoes it to the client. This is just an experiment; Task2 will eventually process the data and send it to another piece of hardware across a serial line. Task1 and Task2 should run on separate cores. I cannot see how to get the pipe reference into the tasks. It is not really necessary that Task1 be separate from the application; it just has to monitor a TCP/IP port and a pipe asynchronously.</p>
<p>Task1</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import os
from multiprocessing import Pipe
import sys

class Task1:
    def __init__(self, pipe):
        self.comm_pipe = pipe

    async def run(self, reader, writer):
        while True:
            data = await reader.read(100)
            if data:
                self.comm_pipe.send(data)
            else:
                break
            if self.comm_pipe.poll():
                data_from_pipe = self.comm_pipe.recv()
                writer.write(data_from_pipe)
                await writer.drain()
        writer.close()

async def task_main():
    server = await asyncio.start_server(
        lambda r, w: Task1(?????).run(r, w),
        '0.0.0.0', 4365)
    addr = server.sockets[0].getsockname()
    print(f'Serving on {addr}')
    async with server:
        await server.serve_forever()

def main(pipe):
    asyncio.run(task_main())
</code></pre>
<p>Task2</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from multiprocessing import Pipe

class Task2:
    def __init__(self, pipe):
        self.pipe = pipe

    async def run(self):
        while True:
            if self.pipe.poll():
                data = self.pipe.recv()
                reversed_data = data[::-1]
                self.pipe.send(reversed_data)

if __name__ == "__main__":
    task2 = Task2(????????)
    asyncio.run(task2.run())
</code></pre>
<p>App</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing

import Task1
import Task2

if __name__ == "__main__":
    parent_pipe1, child_pipe1 = multiprocessing.Pipe()
    p1 = multiprocessing.Process(target=Task1.main, args=(parent_pipe1,))
    p2 = multiprocessing.Process(target=Task2.main, args=(child_pipe1,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
</code></pre>
| <python><python-3.x> | 2023-10-22 18:21:42 | 1 | 1,124 | AeroClassics |
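A hedged sketch (names illustrative, not from the post) of the usual fix for the `?????` placeholder: pass the pipe into `task_main` and let a closure carry it into each handler. The closure mechanics can be tested without any server:

```python
from multiprocessing import Pipe

def make_handler(pipe):
    # The factory closes over the pipe; every handler it returns is bound
    # to that same pipe end. In the real code the handler body would be
    # Task1(pipe).run(reader, writer).
    def handler(reader, writer):
        return pipe
    return handler

parent_end, child_end = Pipe()
handler = make_handler(child_end)

# The handler sees the pipe end it was built with:
assert handler(None, None) is child_end

# Pipe ends also work within one process, so the wiring is easy to test:
parent_end.send(b"ping")
print(child_end.recv())  # b'ping'
```

Applied to the question's structure, this would mean `def main(pipe): asyncio.run(task_main(pipe))` and `async def task_main(pipe): ... lambda r, w: Task1(pipe).run(r, w) ...`; Task2 would likewise need a `main(pipe)` entry point, since the App module calls `Task2.main`.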
77,341,161 | 5,924,264 | How do I change import statements in a pickle file? | <p>I recently removed a module <code>xyz</code> and replaced it with another one of a different name, <code>abc</code>. Some pickle files still import the <code>xyz</code> module. I want to change them to import the <code>abc</code> module. How can I do this?</p>
<p>I don't have much experience with serialization/deserialization. My understanding is I'd have to read in the pickle file (i.e., deserialize) and somehow make the changes and then serialize it again after the changes.</p>
| <python><serialization><pickle> | 2023-10-22 18:14:09 | 1 | 2,502 | roulette01 |
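One commonly used approach (a hedged sketch; module names here are illustrative, and note that a plain `abc` would clash with the stdlib module) is to override `pickle.Unpickler.find_class` so old module names are redirected during load — the pickle bytes themselves never need editing:

```python
import io
import pickle
import sys
import types

# Self-contained simulation: "xyz" stands for the removed module and
# "abc_mod" for its replacement.
class Thing:
    def __init__(self, v):
        self.v = v

Thing.__module__ = "xyz"
xyz = types.ModuleType("xyz")
xyz.Thing = Thing
sys.modules["xyz"] = xyz
payload = pickle.dumps(xyz.Thing(7))   # an old pickle referencing "xyz"
del sys.modules["xyz"]                 # the module no longer exists

abc_mod = types.ModuleType("abc_mod")
abc_mod.Thing = Thing
sys.modules["abc_mod"] = abc_mod

class RenameUnpickler(pickle.Unpickler):
    # Redirect lookups of the old module name during deserialization.
    def find_class(self, module, name):
        if module == "xyz":
            module = "abc_mod"
        return super().find_class(module, name)

obj = RenameUnpickler(io.BytesIO(payload)).load()
print(obj.v)  # 7
```

After loading with the renaming unpickler, re-dumping the object with plain `pickle.dump` writes a file that references the new module.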
77,341,038 | 3,623,537 | Is it safe to remove items from reversed list? | <p>I guess <a href="https://stackoverflow.com/questions/1207406/how-to-remove-items-from-a-list-while-iterating">it is a common problem</a> that it's unsafe to remove elements from a list while iterating over it. The typical solution is to make a copy of the list. What I've found is that using <code>reversed(list)</code> seems to work too. And my question is: is it really safe?</p>
<p><code>reversed</code> seems to be just an iterator too (since I can't retrieve individual elements from it), so it doesn't copy the list and is probably just accessing the original list's elements under the hood. So is it still unsafe?</p>
<p>I'm talking about the case similar to the one below - it seems to work fine.</p>
<p>PS Not sure if it's the best example, since <code>for item in my_list</code> doesn't raise any errors either.</p>
<pre><code>my_list = [1, 2, 3, 4, 5]

for item in reversed(my_list):
    if item % 2 == 0:
        my_list.remove(item)

# [1, 3, 5], no errors
print(my_list)
</code></pre>
| <python> | 2023-10-22 17:38:42 | 1 | 469 | FamousSnake |
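A hedged demonstration of why the reversed form behaves and the forward form does not, runnable as-is:

```python
# reversed() walks indices from high to low, so removing the current
# element only shifts items the iterator has already visited. (With
# duplicate values, remove() takes the first match, so an element can be
# visited twice -- harmless for a filter like this, but worth knowing.)
data = list(range(10))
for item in reversed(data):
    if item % 2 == 0:
        data.remove(item)
print(data)  # [1, 3, 5, 7, 9]

# Forward iteration, by contrast, skips an element after each removal:
data2 = [1, 2, 2, 3]
for item in data2:
    if item == 2:
        data2.remove(item)
print(data2)  # [1, 2, 3] -- the duplicate 2 was skipped
```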
77,341,025 | 3,118,330 | docker environment variable not loaded in pytest and FastApi | <p>This is a FastAPI test Docker environment with a Postgres DB. There are several paths and I need to test them.
As shown in the FastAPI guide, I tried to use <code>pytest</code>; however, the environment variables in compose.yaml are not passed to the tests the way they are in the main app.</p>
<p>The error returned suggests that the environment variables <code>ENVIRONMENT=dev</code> and <code>TESTING=0</code> are not loaded. Can the tests be run with the Docker environment loaded?</p>
<p>Test file is:</p>
<pre><code>from fastapi.testclient import TestClient

from app.main import app

client = TestClient(app)

def test_read_main():
    response = client.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"ping": "pong!", "environment": "dev", "testing": 0}
</code></pre>
<p>Error got it:</p>
<pre><code>test_main.py F [100%]
=========================== FAILURES ===========================
________________________ test_read_main ________________________
def test_read_main():
response = client.get("/ping")
assert response.status_code == 200
> assert response.json() == {"ping": "pong!", "environment": "dev", "testing": 0}
E AssertionError: assert {'environment...esting': True} == {'environment... 'testing': 0}
E Omitting 1 identical items, use -vv to show
E Differing items:
E {'testing': True} != {'testing': 0}
E {'environment': 'dev2'} != {'environment': 'dev'}
E Use -v to get more diff
test_main.py:11: AssertionError
=================== short test summary info ====================
FAILED test_main.py::test_read_main - AssertionError: assert {'environment...esting': True} == {'...
====================== 1 failed in 0.44s =======================
</code></pre>
<p>project files:</p>
<pre><code># services/backend/app/main.py
from fastapi import FastAPI, Depends, HTTPException

from config import get_settings, Settings

app = FastAPI()
# app.include_router(admin.router)

@app.get("/ping")
def pong(settings: Settings = Depends(get_settings)):
    return {
        "ping": "pong!",
        "environment": settings.environment,
        "testing": settings.testing,
    }
</code></pre>
<pre><code># backend/config.py
import logging
from functools import lru_cache

from pydantic_settings import BaseSettings

log = logging.getLogger("uvicorn")

class Settings(BaseSettings):
    environment: str = "dev2"
    testing: bool = 1
    # DATABASE_URL: str
    # SECRET_KEY: str
    # ALGORITHM: str

@lru_cache
def get_settings() -> BaseSettings:
    log.info("Loading config settings from environment...")
    return Settings()
</code></pre>
<p>file estructure and docker components are</p>
<pre><code>.
├── services
│   ├── backend
│   │   ├── alembic.ini
│   │   ├── app
│   │   │   ├── __init__.py
│   │   │   ├── main.py
│   │   │   ├── models.py
│   │   │   ├── routers
│   │   │   │   ├── admin.py
│   │   │   │   └── admin_schemas.py
│   │   │   └── schemas.py
│   │   ├── config.py
│   │   ├── db.py
│   │   ├── Dockerfile
│   │   ├── migrations
│   │   ├── requirements.txt
│   │   ├── setup.py
│   │   └── test_main.py
│   ├── compose.yaml
│   └── frontend
└── tree.txt
</code></pre>
<p>docker file</p>
<pre><code># syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.11
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
# Copy the source code into the container.
COPY . .
# Expose the port that the application listens on.
EXPOSE 8004
# Run the application.
CMD uvicorn app.main:app --reload --workers 2 --host 0.0.0.0 --port 8000
</code></pre>
<p>compose file</p>
<pre><code>services:
  backend:
    build:
      context: ./backend
    volumes:
      - ./backend:/usr/src/app
    ports:
      - 8004:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
    networks:
      mynet:
        ipv4_address: 10.5.0.10
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: mypass123
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      mynet:
        ipv4_address: 10.5.0.23
    healthcheck:
      test: ["CMD", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5

  admin:
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: testemail@gmail.com
      PGADMIN_DEFAULT_PASSWORD: mypass123
    ports:
      - 8080:80
    volumes:
      - admin-data:/var/lib/pgadmin
    networks:
      mynet:

volumes:
  db-data:
  admin-data:

networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/24
          gateway: 10.5.0.1
</code></pre>
| <python><docker><pytest><fastapi> | 2023-10-22 17:35:40 | 0 | 472 | dannisis |
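A hedged note on this question, with a stand-in sketch (no FastAPI/pydantic; the `Settings` class below is illustrative): the values seen in the failure, `dev2` and `testing=True`, are exactly the class-level defaults, which `Settings` falls back to when `ENVIRONMENT`/`TESTING` are absent — i.e. when pytest runs on the host rather than inside the container. Running the tests where compose's environment exists (for example `docker compose exec backend pytest`) is the usual remedy; also note that `get_settings` is cached by `lru_cache`:

```python
import os
from functools import lru_cache

class Settings:
    # Illustrative stand-in for the question's pydantic Settings: the
    # defaults ("dev2", testing=1) apply only when the env vars are absent.
    def __init__(self):
        self.environment = os.environ.get("ENVIRONMENT", "dev2")
        self.testing = bool(int(os.environ.get("TESTING", "1")))

@lru_cache
def get_settings():
    return Settings()

# Simulate what running inside the compose environment provides for free:
os.environ["ENVIRONMENT"] = "dev"
os.environ["TESTING"] = "0"
get_settings.cache_clear()  # lru_cache would otherwise keep stale values
s = get_settings()
print(s.environment, s.testing)  # dev False
```

In a pytest run, `monkeypatch.setenv(...)` plus `get_settings.cache_clear()` achieves the same effect without Docker.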
77,340,854 | 3,912,693 | Recommended way of installing two or more Python applications sharing dependencies | <p>Here is the situation: I have installed applications distributed only via <code>pip install</code>. <code>pip install</code> installs application scripts (entry points) to one of the directories designated by <code>$PATH</code>, and I can execute those apps from anywhere.</p>
<p>If two apps share a dependency, upgrading one will affect the other. Is there a recommended way to avoid such a situation?</p>
| <python> | 2023-10-22 16:47:11 | 0 | 333 | Jinux |
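A hedged sketch (throwaway paths, POSIX layout assumed): the standard answer is one virtual environment per application, so the two dependency trees can never collide; the `pipx` tool automates exactly this pattern for pip-installed applications with entry points (`pipx install app-name`, `pipx upgrade app-name`). The isolation itself can be demonstrated with only the stdlib:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Create two throwaway virtual environments and show they resolve to
# different interpreter prefixes, i.e. separate site-packages directories.
root = Path(tempfile.mkdtemp())
for name in ("app-one", "app-two"):
    subprocess.run([sys.executable, "-m", "venv", "--without-pip",
                    str(root / name)], check=True)

prefixes = []
for name in ("app-one", "app-two"):
    py = root / name / "bin" / "python"   # Scripts\python.exe on Windows
    out = subprocess.run([str(py), "-c", "import sys; print(sys.prefix)"],
                         capture_output=True, text=True, check=True)
    prefixes.append(out.stdout.strip())

print(prefixes[0] != prefixes[1])  # True: upgrading one cannot touch the other
```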
77,340,804 | 1,714,490 | CRC computation port from C to Python | <p>I need to convert the following CRC computation algorithm to Python:</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>

unsigned int Crc32Table[256];

unsigned int crc32jam(const unsigned char *Block, unsigned int uSize)
{
    unsigned int x = -1; //initial value
    unsigned int c = 0;

    while (c < uSize)
    {
        x = ((x >> 8) ^ Crc32Table[((x ^ Block[c]) & 255)]);
        c++;
    }
    return x;
}

void crc32tab()
{
    unsigned int x, c, b;

    c = 0;
    while (c <= 255)
    {
        x = c;
        b = 0;
        while (b <= 7)
        {
            if ((x & 1) != 0)
                x = ((x >> 1) ^ 0xEDB88320); //polynomial
            else
                x = (x >> 1);
            b++;
        }
        Crc32Table[c] = x;
        c++;
    }
}

int main() {
    unsigned char buff[] = "whatever buffer content";
    unsigned int l = sizeof(buff) - 1;
    unsigned int hash;

    crc32tab();
    hash = crc32jam(buff, l);
    printf("%d\n", hash);
}
</code></pre>
<p>Two (failed) attempts to rewrite this in Python follow:</p>
<pre class="lang-py prettyprint-override"><code>def crc32_1(buf):
    crc = 0xffffffff
    for b in buf:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0xedb88320 if crc & 1 else crc >> 1
    return crc ^ 0xffffffff

def crc32_2(block):
    table = [0] * 256
    for c in range(256):
        x = c
        b = 0
        for _ in range(8):
            if x & 1:
                x = ((x >> 1) ^ 0xEDB88320)
            else:
                x >>= 1
        table[c] = x

    x = -1
    for c in block:
        x = ((x >> 8) ^ table[((x ^ c) & 255)])
    return x & 0xffffffff

data = b'whatever buffer content'
print(crc32_1(data), crc32_2(data))
</code></pre>
<p>Using the three routines on the exact same data yields three different results:</p>
<pre><code>mcon@cinderella:~/Desktop/3xDAsav/DDDAedit$ ./test5
2022541416
mcon@cinderella:~/Desktop/3xDAsav/DDDAedit$ python3 test5.py
2272425879 2096952735
</code></pre>
<p>As said, the <code>C</code> code is the "gold standard"; how do I fix this in Python?</p>
<p>Note: I know I can call <code>C</code> routines from Python, but I consider that as "last resort".</p>
| <python><c><porting><crc32> | 2023-10-22 16:30:59 | 2 | 3,106 | ZioByte |
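A hedged sketch of the fix: the C code keeps `x` in a 32-bit unsigned int and never applies a final XOR (the JAMCRC variant), while Python ints are unbounded — so `x = -1` must become `0xFFFFFFFF` (or be masked every round), and `crc32_1`'s final `^ 0xffffffff` must be dropped:

```python
def crc32jam(block):
    # Direct port of the C routine's table-driven loop.
    table = []
    for c in range(256):
        x = c
        for _ in range(8):
            x = (x >> 1) ^ 0xEDB88320 if x & 1 else x >> 1
        table.append(x)
    x = 0xFFFFFFFF        # C's `unsigned int x = -1`; must be masked in Python
    for b in block:
        x = (x >> 8) ^ table[(x ^ b) & 0xFF]
    return x              # no final XOR: this is the JAMCRC variant

print(crc32jam(b'whatever buffer content'))  # 2022541416, matching the C output
```

Equivalently, JAMCRC is the bitwise complement of the standard CRC-32, so `zlib.crc32(data) ^ 0xFFFFFFFF` gives the same value — which is also why `crc32_1` was off by exactly that complement.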
77,340,552 | 9,494,140 | How to auth user to Django admin site using external API end point | <p>I'm using an external endpoint to check the given username and password; if they are valid I return <code>success: true</code>, otherwise <code>false</code>, and of course I generate a token and return it.</p>
<p>Here is the function I'm using for this:</p>
<pre class="lang-py prettyprint-override"><code>@api_view(["POST"])  # decorator to make this a rest API endpoint
def login_user(request):
    """
    Login User Endpoint takes in email & password as input from the client side
    """
    username, password = request.data["username"], request.data["password"]
    # authenticate function from Django auth library is used here
    # to check if the provided credentials are correct or not
    user = authenticate(username=username, password=password)
    # If authentication is successful then generate a token else return an error message as a JSON object
    if user:
        token, _ = Token.objects.get_or_create(user=user)
        the_company_object = hiring_app_company.objects.get(user=user)
        user_object = {
            "name": f"{user.first_name} {user.last_name}",
            "email": user.username,
            "image": f"https://admin.apexnile.com{the_company_object.company_logo.url}",
            "link": f"https://admin.apexnile.com/admin/"
        }
        return Response(
            {"success": True, "user": user_object, "token": token.key},
            status=status.HTTP_200_OK,
        )
    else:
        return Response(
            {"success": False, "error": "Invalid Credentials"},
            status=status.HTTP_401_UNAUTHORIZED,
        )
</code></pre>
<p>The API is being called from a totally different server which uses <code>Next auth</code>, and it authenticates the user based on the response returned from this API call.
Now the problem: sometimes this authenticated user needs to go to the Django admin site and use a few features there, but with the logic used here he is not authenticated on the Django site and has to re-enter his username and password to log in. Any idea how to achieve this?</p>
| <python><django> | 2023-10-22 15:21:50 | 1 | 4,483 | Ahmed Wagdi |
77,340,508 | 4,348,534 | How do I solve this atproto ModelError? | <p>I'm trying to connect to bluesky using <code>atproto</code>:</p>
<pre><code>bsky_handle = "_____.bsky.social"
bsky_password = "____-____-____-____"
from atproto import Client
bsky = Client()
bsky.login(bsky_handle, bsky_password)
</code></pre>
<p>(Taken from <a href="https://github.com/Linus2punkt0/bluesky-crossposter/blob/main/crosspost.py" rel="nofollow noreferrer">https://github.com/Linus2punkt0/bluesky-crossposter/blob/main/crosspost.py</a>)</p>
<p>But that last line gives me the following error:</p>
<pre><code>Traceback (most recent call last):
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\models\utils.py:87 in _get_or_create
return model(**model_data)
File ~\Anaconda3\anaconda3\lib\site-packages\pydantic\main.py:165 in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
ValidationError: 1 validation error for Response
emailConfirmed
Extra inputs are not permitted [type=extra_forbidden, input_value=True, input_type=bool]
For further information visit https://errors.pydantic.dev/2.3/v/extra_forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
Cell In[12], line 2
bsky.login(bsky_handle, bsky_password)
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\client\client.py:74 in login
session = self._get_and_set_session(login, password)
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\client\client.py:37 in _get_and_set_session
session = self.com.atproto.server.create_session(
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\namespaces\sync_ns.py:1757 in create_session
return get_response_model(response, models.ComAtprotoServerCreateSession.Response)
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\models\utils.py:98 in get_response_model
return get_or_create(response.content, model)
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\models\utils.py:64 in get_or_create
raise e
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\models\utils.py:57 in get_or_create
model_instance = _get_or_create(model_data, model, strict=strict)
File ~\Anaconda3\anaconda3\lib\site-packages\atproto\xrpc_client\models\utils.py:89 in _get_or_create
raise ModelError(str(e)) from e
ModelError: 1 validation error for Response
emailConfirmed
Extra inputs are not permitted [type=extra_forbidden, input_value=True, input_type=bool]
For further information visit https://errors.pydantic.dev/2.3/v/extra_forbidden
</code></pre>
<p>What am I doing wrong?</p>
| <python><at-protocol> | 2023-10-22 15:08:42 | 1 | 4,297 | François M. |
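For context (hedged, based on how pydantic's `extra_forbidden` error works): the server response gained a field, `emailConfirmed`, that this atproto version's strict response model doesn't know, so validation rejects it; upgrading the `atproto` package, whose newer models include the field, is the usual fix. A stand-in illustration without atproto or pydantic:

```python
# Minimal illustration (not the atproto code) of a strict response model
# breaking when the server starts returning an extra field. The field
# names mirror com.atproto.server.createSession for flavour only.
ALLOWED = {"accessJwt", "refreshJwt", "handle", "did"}

def parse_session(data, allowed=ALLOWED):
    extra = set(data) - allowed
    if extra:
        raise ValueError(f"Extra inputs are not permitted: {sorted(extra)}")
    return data

old_response = {"accessJwt": "a", "refreshJwt": "r", "handle": "h", "did": "d"}
new_response = dict(old_response, emailConfirmed=True)

parse_session(old_response)      # accepted
try:
    parse_session(new_response)  # rejected, like the traceback above
except ValueError as e:
    print(e)  # Extra inputs are not permitted: ['emailConfirmed']
```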
77,340,461 | 5,565,481 | ChromeDriver is outdated, how do I update it? | <p>I'm currently learning Selenium automation with Python and I'm using ChromeDriver.
I'm having an issue running this simple script:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)
driver = webdriver.Chrome(options=options,service=Service(executable_path = 'C:/Users/{UserName}/Downloads/chromedriver-win64/chromedriver-win64'))
driver.get("https://google.com")
</code></pre>
<p>The error i receive is</p>
<blockquote>
<p>selenium.common.exceptions.SessionNotCreatedException: Message:
session not created: This version of ChromeDriver only supports Chrome
version 114 Current browser version is 118.0.5993.89 with binary path
C:\Program Files\Google\Chrome\Application\chrome.exe</p>
</blockquote>
<p>How do I resolve this? Many thanks!</p>
| <python><google-chrome><testing><automation><selenium-chromedriver> | 2023-10-22 14:55:21 | 5 | 861 | Mr_Shoryuken |
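Hedged context: the error says exactly what is wrong — the driver's major version (114) doesn't match the browser's (118). With Selenium 4.6+, the bundled Selenium Manager downloads a matching driver automatically if `executable_path` is simply omitted (`driver = webdriver.Chrome(options=options)`); otherwise a driver matching the browser must be downloaded manually. The compatibility rule itself is just a major-version comparison:

```python
# Illustrative check (no Selenium required): ChromeDriver's major version
# must equal the installed Chrome's major version.
def major(version):
    return int(version.split(".")[0])

def driver_matches_browser(driver_version, browser_version):
    return major(driver_version) == major(browser_version)

print(driver_matches_browser("114.0.5735.90", "118.0.5993.89"))  # False
```

Also note the script passes a directory to `executable_path`; once a driver binary is used at all, the path must point at `chromedriver.exe` itself.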
77,340,272 | 5,429,320 | JSON from api request in func app encoded differently to that of a Django app | <p>I have a Django application which contains a function that calls an API and adds the data from the response into my database. I am trying to move this function into an Azure function app, however the JSON that is returned from the request seems to be encoded differently which is causing the database insert to fail.</p>
<p>Request function from Django:</p>
<pre class="lang-py prettyprint-override"><code>def fetch_magic_set_data(code, max_retries=3, sleep_base=2):
    for retry in range(max_retries):
        try:
            response = requests.get(f'https://mtgjson.com/api/v5/{code}.json')
            response.raise_for_status()
            magic_sets_json = response.json()
            if 'data' in magic_sets_json and 'cards' in magic_sets_json['data']:
                return True, magic_sets_json['data']
            else:
                print(f'Response is empty for {code}, retrying...')
        except HTTPError as http_err:
            print(f'HTTP error occurred: {http_err}')
        except RequestException as req_err:
            print(f'Request error occurred: {req_err}')
        sleep_time = sleep_base * (2 ** retry)
        time.sleep(sleep_time)
    return False, None
</code></pre>
<p>Request function from function app:</p>
<pre class="lang-py prettyprint-override"><code>def get_response(url, max_attempts=10):
    for attempt in range(max_attempts):
        try:
            response = requests.get(url)
            if response.status_code == 200:
                return response
            elif 400 <= response.status_code < 600:
                logger.warning(f"Status: {response.status_code}. Error from {url}.")
                if response.status_code == 404:
                    return None
        except requests.RequestException as e:
            logger.warning(f"Attempt {attempt + 1} of {max_attempts}. Error: {e}")
        if attempt < max_attempts - 1:
            time.sleep((2 ** attempt) + (attempt % 2))
        else:
            logger.error(f"Failed to get response from {url} after {max_attempts} attempts.")
            return None
    return None

def fetch_magic_set_data(code):
    url = MTGJSON_API_BASE_URL + code.upper() + '.json'
    response = get_response(url)
    if response:
        try:
            magic_set_data = response.json()
            return magic_set_data['data']
        except ValueError:
            logger.error(f"Failed to parse JSON data from {url}.")
    else:
        logger.error(f"Failed to get a valid response from {url}.")
    return {}
</code></pre>
<p>If I <code>print(response.headers['Content-Type'])</code>, it returns <code>application/json</code> for both.</p>
<p>If I <code>print(response.content)</code>, part of the response from the Django app is <code>'type': 'Legendary Planeswalker — Jace',</code>, however it is <code>"type": "Legendary Planeswalker \xe2\x80\x94 Jace",</code> in the function app.</p>
<p>If I add the following code to the function app, the decoded data looks like <code>"type": "Legendary Planeswalker ΓΉ Jace",</code></p>
<pre class="lang-py prettyprint-override"><code>decoded_content = response.content.decode('utf-8')
print(decoded_content)
magic_sets_json = json.loads(decoded_content)
</code></pre>
<p>Both applications are using <code>python version 3.10.11</code> and <code>requests version 2.31.0</code>.</p>
<p>How do I get the function app to return the JSON data encoded the same way as the JSON data returned from the Django app?</p>
| <python><json><django><python-requests><azure-functions> | 2023-10-22 14:07:51 | 1 | 2,467 | Ross |
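A hedged check (the byte string below is reconstructed from the question): `\xe2\x80\x94` is simply the UTF-8 encoding of the em dash (U+2014), so both apps received identical bytes. `repr()` of a `bytes` object shows escapes, while the Django app printed already-decoded text; the stray `ù` is most likely console-codepage mojibake on display, not different data:

```python
import json

raw = b'{"type": "Legendary Planeswalker \xe2\x80\x94 Jace"}'
obj = json.loads(raw.decode("utf-8"))
print(obj["type"])  # the em dash (U+2014) survives decoding intact
```

`response.json()` performs this decode internally in both apps, so the parsed dictionaries should already be identical; if the database insert fails, the collation/encoding of the target column is the more likely suspect than the HTTP layer.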
77,340,162 | 12,689,373 | How to add an x before each element in a list of strings so I can use it then in a lambda expression | <p>I have the code shown below, where I build a dictionary after grouping by some columns. My goal is to transfer the mean I've calculated in the groupby dataframe to another dataframe, and for that I've created a dictionary. My problem is that I don't know how to make this part of the lambda expression variable: <code>x['year'],x['month'],x['quantity']</code>, so that if I change the columns I can easily change the expression. How can I add an <code>x</code> (as a variable) before each element of a list? Or maybe I have to restructure the code to make it easier?</p>
<pre><code>cols = ['year', 'month']  # renamed from `list`, which shadowed the builtin
m = data.groupby(cols).agg({"quantity": "mean"}).reset_index()
d = m.groupby(cols)["quantity"].apply(list).to_dict()

df["m"] = df.apply(
    lambda x: d[(x['year'], x['month'])][0]
    if (x['year'], x['month']) in d
    else 0,
    axis=1
)
</code></pre>
| <python><dictionary><lambda> | 2023-10-22 13:36:54 | 0 | 349 | Cristina Dominguez Fernandez |
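A hedged sketch of making the key columns configurable: build the lookup key with a tuple comprehension instead of hard-coding `x['year'], x['month']`. Plain dicts stand in for DataFrame rows here (names illustrative):

```python
key_cols = ["year", "month"]               # change this list to change the key
d = {(2020, 1): [5.0], (2020, 2): [7.5]}   # stand-in for the groupby dict

def mean_for(row, mapping=d, cols=key_cols):
    key = tuple(row[c] for c in cols)      # builds (row['year'], row['month'], ...)
    return mapping[key][0] if key in mapping else 0

row = {"year": 2020, "month": 2, "quantity": 3}
print(mean_for(row))  # 7.5
```

In pandas this plugs straight in as `df.apply(mean_for, axis=1)`, since a row supports `row[col]`; a `df.merge(m, on=key_cols, how='left')` followed by `fillna(0)` would avoid the per-row apply altogether.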
77,339,948 | 1,457,380 | Pandas: Merge two dataframes with different dates, using interpolate for numeric values and ffill for dates and boolean data | <p>I have two <strong>pandas</strong> dataframes to <code>merge</code> with <code>interpolate()</code> on the missing numeric data and with <code>ffill()</code> on boolean data and dates. The source data looks like this:</p>
<pre><code>df1
Date Unemployment Rate Federal Debt
0 1969-01-01 3.4 36.19577
1 1969-04-01 3.4 34.97403
2 1969-07-01 3.5 35.01946
df2
President Start End Republican Democrat
1969-01-20 Nixon 1969-01-20 1974-08-09 True False
1974-08-09 Ford 1974-08-09 1977-01-20 True False
1977-01-20 Carter 1977-01-20 1981-01-20 False True
</code></pre>
<p>The desired output is something like:</p>
<pre><code>df3
Date Unemployment Rate Federal Debt President Start End Republican Democrat
1969-01-01 3.4 36.19577 NaN NaN NaN NaN NaN
1969-01-20 3.4 35.93785 Nixon 1969-01-20 1974-08-09 True False
1969-04-01 3.4 34.97403 Nixon 1969-01-20 1974-08-09 True False
1969-07-01 3.5 35.01946 Nixon 1969-01-20 1974-08-09 True False
...
1977-01-01 7.5 33.65136 Ford 1974-08-09 1977-01-20 True False
1977-01-20 7.4 33.47252 Carter 1977-01-20 1981-01-20 False True
1977-04-01 7.2 32.80422 Carter 1977-01-20 1981-01-20 False True
</code></pre>
<p>In words, I have economic data that comes either at daily, monthly, quarterly, or annual rate and political data on the presidential term dates. The purpose of merging the dataframes is to compute statistics for each president's term and to get the (interpolated) data for the start/end of their terms, and eventually to graph this.</p>
<p>My difficulties seem to be mostly about merging two datasets. I show below how far I got.</p>
<pre><code>import pandas as pd
from pandas import Timestamp
import numpy as np # np.nan
df1 = pd.DataFrame.from_dict({'Date': {0: '1969-01-01', 1: '1969-04-01', 2: '1969-07-01', 3: '1969-10-01', 4: '1970-01-01', 5: '1970-04-01', 6: '1970-07-01', 7: '1970-10-01', 8: '1971-01-01', 9: '1971-04-01', 10: '1971-07-01', 11: '1971-10-01', 12: '1972-01-01', 13: '1972-04-01', 14: '1972-07-01', 15: '1972-10-01', 16: '1973-01-01', 17: '1973-04-01', 18: '1973-07-01', 19: '1973-10-01', 20: '1974-01-01', 21: '1974-04-01', 22: '1974-07-01', 23: '1974-10-01', 24: '1975-01-01', 25: '1975-04-01', 26: '1975-07-01', 27: '1975-10-01', 28: '1976-01-01', 29: '1976-04-01', 30: '1976-07-01', 31: '1976-10-01', 32: '1977-01-01', 33: '1977-04-01', 34: '1977-07-01', 35: '1977-10-01', 36: '1978-01-01', 37: '1978-04-01', 38: '1978-07-01', 39: '1978-10-01', 40: '1979-01-01', 41: '1979-04-01', 42: '1979-07-01', 43: '1979-10-01'}, 'Unemployment Rate': {0: 3.4, 1: 3.4, 2: 3.5, 3: 3.7, 4: 3.9, 5: 4.6, 6: 5.0, 7: 5.5, 8: 5.9, 9: 5.9, 10: 6.0, 11: 5.8, 12: 5.8, 13: 5.7, 14: 5.6, 15: 5.6, 16: 4.9, 17: 5.0, 18: 4.8, 19: 4.6, 20: 5.1, 21: 5.1, 22: 5.5, 23: 6.0, 24: 8.1, 25: 8.8, 26: 8.6, 27: 8.4, 28: 7.9, 29: 7.7, 30: 7.8, 31: 7.7, 32: 7.5, 33: 7.2, 34: 6.9, 35: 6.8, 36: 6.4, 37: 6.1, 38: 6.2, 39: 5.8, 40: 5.9, 41: 5.8, 42: 5.7, 43: 6.0}, 'Federal Debt': {0: 36.19577, 1: 34.97403, 2: 35.01946, 3: 35.46954, 4: 35.38879, 5: 34.67329, 6: 34.86717, 7: 35.74822, 8: 34.50345, 9: 34.36089, 10: 35.00694, 11: 35.63237, 12: 34.72622, 13: 33.67383, 14: 33.62447, 15: 33.74758, 16: 33.29287, 17: 32.34466, 18: 32.12455, 19: 31.77379, 20: 31.76449, 21: 30.99462, 22: 30.86269, 23: 30.79768, 24: 31.53604, 25: 32.27817, 26: 32.38043, 27: 32.7301, 28: 32.98513, 29: 33.49464, 30: 33.64333, 31: 33.78753, 32: 33.65136, 33: 32.80422, 34: 32.98791, 35: 33.21873, 36: 33.5012, 37: 32.12444, 38: 32.21407, 39: 31.86206, 40: 31.53601, 41: 31.06277, 42: 30.98402, 43: 31.02615}})
df2 = pd.DataFrame.from_dict({'President': {'1969-01-20': 'Nixon', '1974-08-09': 'Ford', '1977-01-20': 'Carter'}, 'Start': {'1969-01-20': '1969-01-20', '1974-08-09': '1974-08-09', '1977-01-20': '1977-01-20'}, 'End': {'1969-01-20': '1974-08-09', '1974-08-09': '1977-01-20', '1977-01-20': '1981-01-20'}, 'Republican': {'1969-01-20': True, '1974-08-09': True, '1977-01-20': False}, 'Democrat': {'1969-01-20': False, '1974-08-09': False, '1977-01-20': True}})
# make dates | utc=True appeared to be needed for resampling below
df1["Date"] = pd.to_datetime(df1["Date"], utc=True)
df2["Start"] = pd.to_datetime(df2["Start"], utc=True)
df2["End"] = pd.to_datetime(df2["End"], utc=True)
# create a common date column for merging
df2["Date"] = df2["Start"]
df3 = pd.merge(df1, df2, how="outer", on="Date")
df3.sort_values("Date", inplace=True)
df3.set_index("Date", drop=False, inplace=True)
# select the numeric data columns
cols = df3.select_dtypes(include="number").columns.to_list()
# cols = ['Unemployment Rate', 'Federal Debt']
# The dataframe has missing values because the dates for the presidential terms do not coincide with the dates for which the data is published.
# I interpolated the data, but couldn't merge it back in.
"""
df3[cols]
Unemployment Rate Federal Debt
Date
1969-01-01 00:00:00+00:00 3.4 36.19577
1969-01-20 00:00:00+00:00 NaN NaN
1969-04-01 00:00:00+00:00 3.4 34.97403
"""
# resample and interpolate to replace na values
df4 = df3[cols].resample("D").interpolate(method="linear").reset_index()
# merge the numeric columns back into the dataframe | merge back
df5 = pd.merge(df3, df4, how="inner", on=cols)# tried "left" too
"""
df4.head(30)
Date Unemployment Rate Federal Debt
0 1969-01-01 00:00:00+00:00 3.4 36.195770
1 1969-01-02 00:00:00+00:00 3.4 36.182195
2 1969-01-03 00:00:00+00:00 3.4 36.168620
"""
# desired outcome for df5:
"""
df5[cols]
Unemployment Rate Federal Debt
Date
1969-01-01 00:00:00+00:00 3.4 36.19577
1969-01-20 00:00:00+00:00 3.4 35.937847
1969-04-01 00:00:00+00:00 3.4 34.97403
"""
# fill missing rows forward for selected columns | merge back
cols = ["Start", "End", "President"]
df3[cols] = df3[cols].ffill(axis=0)
# This matches the desired output
"""
df3[cols]
Start End President
Date
1969-01-01 00:00:00+00:00 NaT NaT NaN
1969-01-20 00:00:00+00:00 1969-01-20 00:00:00+00:00 1974-08-09 00:00:00+00:00 Nixon
1969-04-01 00:00:00+00:00 1969-01-20 00:00:00+00:00 1974-08-09 00:00:00+00:00 Nixon
"""
# fill boolean Republican/Democrat with appropriate True/False | merge back
cols = ["Republican", "Democrat"]
df3[cols] = df3[cols].ffill(axis=0)
# also matches the desired output
"""
df3[cols]
Republican Democrat
Date
1969-01-01 00:00:00+00:00 NaN NaN
1969-01-20 00:00:00+00:00 True False
1969-04-01 00:00:00+00:00 True False
</code></pre>
<p>To sum up, I have managed to compute just about what I needed, but couldn't merge it back nicely into a clean dataframe for further processing. Either I dropped too many rows, too few, or I still had missing data where it should have been interpolated/filled.</p>
| <python><pandas><merge><python-datetime> | 2023-10-22 12:32:03 | 1 | 10,646 | PatrickT |
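A hedged sketch (tiny illustrative frames, not the full data) of one way to get the desired df3 in two steps: reindex the numeric frame onto the union of dates and interpolate by time, then let `pd.merge_asof` attach the most recent presidential term — effectively the ffill by date:

```python
import pandas as pd

econ = pd.DataFrame({
    "Date": pd.to_datetime(["1969-01-01", "1969-04-01", "1969-07-01"]),
    "Unemployment Rate": [3.4, 3.4, 3.5],
    "Federal Debt": [36.19577, 34.97403, 35.01946],
})
terms = pd.DataFrame({
    "President": ["Nixon"],
    "Start": pd.to_datetime(["1969-01-20"]),
    "Republican": [True],
})

# 1) Union of both date sets; time-based interpolation fills the numeric
#    columns at term-start dates (e.g. Federal Debt ~35.93785 on 1969-01-20).
all_dates = pd.DatetimeIndex(sorted(set(econ["Date"]) | set(terms["Start"])))
filled = (econ.set_index("Date")
              .reindex(all_dates)
              .interpolate(method="time")
              .rename_axis("Date")
              .reset_index())

# 2) merge_asof attaches the latest term with Start <= Date; dates before
#    the first term keep NaN, as in the desired output.
out = pd.merge_asof(filled, terms.sort_values("Start"),
                    left_on="Date", right_on="Start")
print(out)
```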
77,339,598 | 7,227,146 | Get dictionary from list of dictionaries | <p>I have these two lists of dictionaries:</p>
<pre><code>objs = [{"code": 34, "reps": 0}, {"code": 59, "reps": 0}]
comments = [{"text": "a", "code": 34}, {"text": "b", "code": 59}]
</code></pre>
<p>I would like to update the <code>reps</code> count every time the comment is <code>a</code>. <code>objs</code> should become:</p>
<pre><code>objs = [{"code": 34, "reps": 1}, {"code": 59, "reps": 0}]
</code></pre>
<p>I've tried:</p>
<pre><code>for comment in comments:
    if comment["text"] == "a":
        obj = [d for d in objs if d['code'] == comment["code"]]
        obj["reps"] += 1
</code></pre>
<p>But I get <code>TypeError: list indices must be integers or slices, not str</code>. How can <code>obj</code> be a dictionary and not a list?</p>
| <python> | 2023-10-22 10:39:33 | 4 | 679 | zest16 |
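On the error itself: the list comprehension returns a *list* of matching dicts, so `obj["reps"]` indexes a list with a string. Taking `obj[0]` would work, but indexing the dicts by code once avoids the repeated scan entirely (sketch using the question's data):

```python
objs = [{"code": 34, "reps": 0}, {"code": 59, "reps": 0}]
comments = [{"text": "a", "code": 34}, {"text": "b", "code": 59}]

# Each value is a reference to a dict inside objs, so mutating it
# updates objs in place.
by_code = {d["code"]: d for d in objs}

for comment in comments:
    if comment["text"] == "a":
        by_code[comment["code"]]["reps"] += 1

print(objs)  # [{'code': 34, 'reps': 1}, {'code': 59, 'reps': 0}]
```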
77,339,486 | 16,462,878 | How to use super with arguments | <p>I have a linear hierarchy of classes, <code>A</code>, <code>B</code>, <code>C</code>, sharing a method with different implementations, each depending on the result of the parent's one. These methods have the <em>same identifier</em> but different signatures and kinds (static, class, instance).</p>
<p>Since I am working with <em>dynamic</em> object creation I need to call <code>super</code> with its <a href="https://docs.python.org/3/library/functions.html#super" rel="nofollow noreferrer">"full" signature</a>... and here the problems arise.</p>
<p>The <em>zero argument form</em> works perfectly:</p>
<pre><code>class A:
    @staticmethod
    def m(): return 'A.m()'

class B(A):
    @classmethod
    def m(cls):
        return super().m() + ' ... & from B.m()'

class C(B):
    def m(self, text):
        return super().m() + f' ... & from C().m("{text}")'

print(C().m("XXX"))
</code></pre>
<p>but if I try to add arguments</p>
<pre><code>class A:
    @staticmethod
    def m(): return 'A.m()'

class B(A):
    @classmethod
    def m(cls):
        return super(cls, cls).m() + ' ... & from B.m()'  # <- raise RecursionError

class C(B):
    def m(self, text):
        return super(type(self), self).m() + f' ... & from C().m("{text}")'

print(C().m("XXX"))
</code></pre>
<p>the interpreter complains</p>
<pre class="lang-none prettyprint-override"><code>File "path_to_file.py", line XY, in m
return super(cls, cls).m() + ' ... & from B.m()'
^^^^^^^^^^^^^^^^^^^
[Previous line repeated 995 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>I tried different combinations without success.</p>
<p>What is the right way to call <code>super(?, ?).m()</code>?</p>
<hr />
<p>Here is a further <strong>abstraction</strong> of the above problem, with the same issue:</p>
<pre><code>class A:
def m(self):
return f"A().m from {type(self).__name__}()"
class B(A):
def m(self):
return super(?, ?).m() + f" & B().m from {type(self).__name__}()"
class C(B):
def m(self):
return super(type(self), self).m() + f" & C().m from {type(self).__name__}()"
print(C().m())
#A().m from C() & B().m from C() & C().m from C()
</code></pre>
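<p>For context, hard-coding the defining class (rather than deriving it from <code>cls</code> or <code>type(self)</code>) does not recurse, although it defeats the dynamic creation I need. A minimal sketch:</p>

```python
class A:
    @staticmethod
    def m():
        return 'A.m()'

class B(A):
    @classmethod
    def m(cls):
        # Naming B explicitly works. super(cls, cls) fails because cls is
        # always the *runtime* class (C here), so the lookup restarts at B.m.
        return super(B, cls).m() + ' ... & from B.m()'

class C(B):
    def m(self, text):
        return super(C, self).m() + f' ... & from C().m("{text}")'

print(C().m("XXX"))  # A.m() ... & from B.m() ... & from C().m("XXX")
```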
| <python><inheritance><dynamic><super> | 2023-10-22 10:04:15 | 1 | 5,264 | cards |
77,339,100 | 10,737,396 | float() argument must be a string or a number, not 'BarContainer' | <p>I want to fix the number of decimal places shown in the bar labels of a <code>BarContainer</code> (a Python object).</p>
<pre><code>ax = sns.countplot(...)
for i in ax.containers:
# typecast
ax.bar_label(i,)
</code></pre>
<p><code>i</code> has type <code>BarContainer</code>. In the visualization, the labels are floats with 8 decimal places, and I want to round them to 4. How can I do that?</p>
<p>Update: I tried casting to <code>float</code>, but it raises "float() argument must be a string or a number, not 'BarContainer'"; converting to <code>str</code> first and then to <code>float</code> failed as well.</p>
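<p>A minimal, standalone sketch of the rounding I am after, using plain Matplotlib bars instead of my actual seaborn plot: <code>bar_label</code> takes a <code>fmt</code> parameter that controls the label formatting.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the example
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["a", "b"], [0.12345678, 2.98765432])
for c in ax.containers:
    # fmt formats each label; '%.4f' rounds to 4 decimal places
    labels = ax.bar_label(c, fmt="%.4f")
print([t.get_text() for t in labels])  # ['0.1235', '2.9877']
```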
| <python><casting><visualization> | 2023-10-22 07:53:10 | 1 | 784 | Towsif Ahamed Labib |
77,338,821 | 16,556,045 | Attempting to migrate a Django ImageField, but failing to migrate | <p>I am currently facing a problem migrating a Django ImageField from a folder named media_root, located inside a folder named static_cdn in my current project. Here is my current file structure, as that may help:</p>
<pre><code>veganetworkmain
-----profiles
-------static_cdn
---------media_root
----------test.png
</code></pre>
<p>Test.png is a PNG file generated by a picture-generation algorithm using Pillow; here is what it actually looks like:
<a href="https://i.sstatic.net/4GDqA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4GDqA.jpg" alt="enter image description here" /></a>
However, when I load the picture through a Django ImageField, it just becomes white, as seen here:
<a href="https://i.sstatic.net/baeF5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/baeF5.png" alt="enter image description here" /></a>
I have tried <code>python manage.py makemigrations</code>, <code>python manage.py migrate</code>, changing the static configuration via <code>url_patterns</code> and settings, and even <code>python ./manage.py makemigrations profiles</code>. Here are links to the methods I tried:</p>
<p><a href="https://stackoverflow.com/questions/36153748/django-makemigrations-no-changes-detected">Django - makemigrations - No changes detected</a></p>
<p><a href="https://stackoverflow.com/questions/36280056/page-not-found-404-django-media-files">Page not found 404 Django media files</a></p>
<p>And finally here is my code that I am currently working with during this time:</p>
<p>models.py</p>
<pre><code>from django.db import models
from django.shortcuts import reverse
from django.contrib.auth.models import User
from .utils import get_random_word
from django.template.defaultfilters import slugify
from django.db.models import Q
# --------------------------------------------------
class Profile(models.Model):
first_name = models.CharField(max_length=200, blank=True)
last_name = models.CharField(max_length=200, blank=True)
user = models.OneToOneField(User, on_delete=models.CASCADE)
bio = models.TextField(default="no bio here...", max_length=300)
email = models.EmailField(max_length=200, blank=True)
country = models.CharField(max_length=200, blank=True)
test = models.ImageField(default='test.png', upload_to='test/')
avatar = models.ImageField(default='avatar.png', upload_to='avatars/')
connections = models.ManyToManyField(User, blank=True, related_name='connections')
slug = models.SlugField(unique=True, blank=True)
updated = models.DateTimeField(auto_now=True)
created = models.DateTimeField(auto_now_add=True)
objects = ProfileManager()
def get_connections(self):
return self.connections.all()
def get_connections_num(self):
return self.connections.all().count()
def get_posts_num(self):
return self.posts.all().count()
def fetch_all_author_posts(self):
return self.posts.all()
def get_absolute_url(self):
return reverse("profiles:profile-detail-view", kwargs={"slug":self.slug})
def get_likes_given_num(self):
likes = self.like_set.all()
total_liked = 0
for item in likes:
if item.value=='Like':
total_liked += 1
return total_liked
def get_likes_received_num(self):
posts = self.posts.all()
total_liked = 0
for item in posts:
total_liked += item.liked.all().count()
return total_liked
def __str__(self):
return f"{self.user.username}-{self.created.strftime('%d-%m-%Y')}"
__initial_first_name = None
__initial_last_name = None
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__initial_first_name = self.first_name
self.__initial_last_name = self.last_name
def save(self, *args, **kwargs):
x = False
to_slug = self.slug
if self.first_name != self.__initial_first_name or self.last_name != self.__initial_last_name or self.slug == '':
if self.first_name and self.last_name:
newSlug = slugify(str(self.first_name) + ' ' + str(self.last_name))
x = Profile.objects.filter(slug=newSlug).exists()
while x:
newSlug = slugify(newSlug + " " + str(get_random_word()))
x = Profile.objects.filter(slug=newSlug).exists()
else:
newSlug = str(self.user)
self.slug = newSlug
super().save(*args, **kwargs)
STATUS_CHOICES = (('send', 'send'),
('accepted', 'accepted')
)
</code></pre>
<p>forms.py</p>
<pre><code>from django import forms
from .models import Profile
class ProfileModelForm(forms.ModelForm):
class Meta:
model = Profile
fields = ('first_name', 'last_name', 'bio', 'avatar', 'test')
</code></pre>
<p>myprofile.html</p>
<pre><code>{% extends 'base.html' %}
{% block title %}
My Profile
{% endblock title %}
{% block content %}
<!-- MODAL -->
<div class="ui modal profile mymodal">
<i class="close icon"></i>
<div class="header button">
Update your Profile
</div>
<div class="image content">
<div class="ui medium image">
<img src="{{profile.avatar.url}}">
</div>
<div class="description">
<div class="ui header">Provide some additional/newest info about you </div>
<form action="" method="POST" class="ui form" enctype='multipart/form-data'>
{% csrf_token %}
{{form.as_p}}
</div>
</div>
<div class="actions">
<button type='submit' class="ui positive right labeled icon button">
Update
<i class="checkmark icon"></i>
</button>
</form>
</div>
</div>
<div class="ui modal theory mymodal">
<i class="close icon"></i>
<div class="header">
Create a Theory through a chat prompt.
</div>
<div class="image content">
<div class="ui medium image">
<img src="{{profile.avatar.url}}">
</div>
<div class="description">
<div class="ui header">Please input the required information that you would like to simulate your theory from, you will also be able to see your theory being simulated inside of the window below, thank you and have a good day today:</div>
<img src="{{profile.test.url}}" width="700" height="400">
<br>
<button class="ui purple">Create Theory</button>
</div>
</div>
<div class="actions">
<button type="submit" class="ui positive button">
Save
</button>
</div>
</div>
<div class="ui segment">
{% if confirm %}
<div class="ui green message">Your profile has been updated</div>
{% endif %}
<h3>my profile: {{request.user}}</h3>
<div class="ui grid">
<div class='row'>
<div class='six wide column'>
<img class="ui small rounded image" src={{profile.avatar.url}}>
<div class="row mt-5">
<br>
<button class='ui secondary' id='modal-btn-1'>Update Profile</button>
</div>
<div class="row mt-5">
<br>
<button class="ui secondary" id="modal-btn-2">Create Theory</button>
</div>
</div>
<div class="ten wide column">
<table class="ui table">
<tbody>
<tr>
<td>username</td>
<td>{{profile.user}}</td>
</tr>
<tr>
<td>first name</td>
<td>{{profile.first_name}}</td>
</tr>
<tr>
<td>last name</td>
<td>{{profile.last_name}}</td>
</tr>
<tr>
<td>bio</td>
<td>{{profile.bio}}</td>
</tr>
<tr>
<td>number of connections</td>
<td>{{profile.get_connections_no}}</td>
</tr>
<tr>
<td>number of connections</td>
<td>
<ul>
{% for connection in profile.get_connections %}
<li>{{connection}}</li>
{% endfor %}
</ul>
</td>
</tr>
<tr>
<td>number of posts</td>
<td>{{profile.get_posts_no}}</td>
</tr>
<tr>
<td>number of likes given</td>
<td>{{profile.get_likes_given_no}}</td>
</tr>
<tr>
<td>number of likes received</td>
<td>{{profile.get_likes_recieved_no}}</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
{% endblock content %}
</code></pre>
<p>settings.py</p>
<pre><code>BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DEBUG = True
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'posts',
'profiles',
'allauth',
'allauth.account',
'allauth.socialaccount',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'veganetworkmain.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'profiles.context_processors.profile_pic',
'profiles.context_processors.invatations_received_num',
],
},
},
]
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static_project')
]
STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), "static_cdn", "static_root")
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(os.path.dirname(BASE_DIR), "static_cdn", "media_root")
</code></pre>
<p>Is there something specific that I'm doing wrong, and if I'm doing nothing wrong what functions or methods should I add to this code, so I can fix this troublesome bug? Thank you all very much for your time, any help is appreciated.</p>
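<p>For completeness, this is the media-serving stanza I understand is needed in development. It is a sketch of the project's root <code>urls.py</code> (the routes listed are placeholders), in case this is the piece I am missing:</p>

```python
# urls.py (project root) -- sketch; the admin route is a placeholder
from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path

urlpatterns = [
    path('admin/', admin.site.urls),
    # ... app routes ...
]

if settings.DEBUG:
    # Serve files under MEDIA_ROOT at MEDIA_URL during development only
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```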
| <python><django><django-media> | 2023-10-22 05:59:26 | 1 | 934 | KronosHedronos2077 |
77,338,685 | 3,462,509 | Deep Learning with Python (Keras): Reuters Multiclass Classification | <p>I'm working through this fantastic book, rewriting the examples in PyTorch so that I better retain the material. My results have been comparable to the book's Keras code for most examples, but I'm having some trouble with this exercise. For those that have the book, this is on page 106.</p>
<p>The network used in the book to classify the text is as follows:</p>
<p><strong>Book Code (Keras)</strong></p>
<pre><code>keras_model = keras.Sequential([
layers.Dense(64,activation='relu'),
layers.Dense(64,activation='relu'),
layers.Dense(46,activation='softmax'),
])
keras_model.compile(
optimizer = 'rmsprop',
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy']
)
hist = keras_model.fit(
partial_train_xs,
partial_train_ys,
epochs=20,
batch_size=512,
validation_data=[val_xs,val_ys]
)
</code></pre>
<p><strong>My attempt at recreating the same in PyTorch:</strong></p>
<pre><code>model = nn.Sequential(
nn.Linear(10_000,64),
nn.ReLU(),
nn.Linear(64,64),
nn.ReLU(),
nn.Linear(64,46),
nn.Softmax()
)
def compute_val_loss(model,xs,ys):
preds = model(xs)
return(F.cross_entropy(preds,ys)).item()
def compute_accuracy(model,xs,ys):
preds = model(xs)
acc = (preds.argmax(dim=1) == ys).sum() / len(preds)
return acc.item()
def train_loop(model,xs,ys,epochs=20,lr=1e-3,opt=torch.optim.RMSprop,
batch_size=512,loss_func=F.cross_entropy):
opt = opt(model.parameters(),lr=lr)
losses = []
for i in range(epochs):
epoch_loss = []
for b in range(0,len(xs),batch_size):
xbatch = xs[b:b+batch_size]
ybatch = ys[b:b+batch_size]
logits = model(xbatch)
loss = loss_func(logits,ybatch)
model.zero_grad()
loss.backward()
opt.step()
epoch_loss.append(loss.item())
losses.append([i,sum(epoch_loss)/len(epoch_loss)])
print(loss.item())
return losses
</code></pre>
<p>I've excluded the data loading portion for brevity, but it's just "multihot" encoding the word sequences. For example if the vocab is 10k words, each input is a 10k vector where there is a "1" at every index in the vector corresponding to words index in the vocab.</p>
<p><strong>My Question:</strong></p>
<p>The problem I'm running into here is that there is a substantial divergence in results between the book's Keras version (which behaves as expected) and the PyTorch version. After 20 epochs, the Keras version has negligible training loss and is about 80% accurate on validation. The Torch version, however, barely moves on the training loss. For context, it starts at about 3.4 and ends after 20 epochs at 3.1. The Keras version had a lower train loss than that after a single epoch (2.6).</p>
<p>The torch version did make progress on accuracy, though still lagging the Keras version. There is also a strange stair-step pattern to the accuracy:</p>
<p><a href="https://i.sstatic.net/sdCPH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sdCPH.png" alt="Torch Train/Val Accuracy" /></a></p>
<p>What am I doing wrong in the Torch version? Or is there a legitimate reason for the divergence that is expected? There are minor hyperparameter differences in the RMSProp args of both libraries, but I fiddled around with those and didn't see much of a difference. The learning rate is equivalent for both. Even if I run the torch version for 150 epochs, the train/test loss continues to go down (very slowly), but validation accuracy peaks at around 75%.</p>
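<p>For reference, here is a stripped-down sketch of the output-layer/loss pairing. I suspect the interaction between <code>nn.Softmax</code> and <code>F.cross_entropy</code> (which applies log-softmax internally) may matter, so this version outputs raw logits; the batch below is random data just to exercise the shapes:</p>

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# F.cross_entropy expects raw logits: it applies log_softmax itself.
# Keras' softmax output + sparse_categorical_crossentropy corresponds
# to *no* Softmax layer here.
model = nn.Sequential(
    nn.Linear(10_000, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 46),  # logits, no Softmax
)

x = torch.randn(8, 10_000)
y = torch.randint(0, 46, (8,))
loss = F.cross_entropy(model(x), y)
print(loss.item())  # roughly log(46), about 3.8, at random initialization
```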
| <python><keras><deep-learning><pytorch> | 2023-10-22 04:47:30 | 1 | 2,792 | Solaxun |
77,338,663 | 2,975,510 | In VS code jupyter manim magic command stops autocomplete and hint | <p>I want to use jupyter in VS Code to write manim code. However, if I put the cell magic command <code>%%manim -v WARNING --disable_caching Scene_Name</code> at the top of the cell, the autocomplete and parameter hint will be lost in the cell.</p>
<p>How can I include the magic command while retaining autocomplete and hints?</p>
<p>Snapshot: without %%manim, autocomplete and hints work properly
<a href="https://i.sstatic.net/Lfykg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lfykg.png" alt="without %%manim" /></a></p>
<p>Snapshot: with %%manim, no hint pops out, autocomplete stops
<a href="https://i.sstatic.net/hp4Rz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hp4Rz.png" alt="with %%manim" /></a></p>
| <python><visual-studio-code><jupyter-notebook><manim> | 2023-10-22 04:38:45 | 0 | 2,203 | Lelouch |
77,338,597 | 3,595,026 | How to add middleware to FastAPI to create metrics to track time spent and requests made? | <p>I am adding middleware to my FastAPI app to create Prometheus metrics for the processing time and the number of requests per route. Can someone tell me what I am missing?</p>
<p>Using <a href="https://github.com/prometheus/client_python" rel="nofollow noreferrer">Prometheus/client_python</a></p>
<p>This is my middleware.</p>
<pre><code>import time
from fastapi import Request
from prometheus_client import Summary, make_asgi_app, CollectorRegistry
from starlette.middleware.base import BaseHTTPMiddleware
registry = CollectorRegistry()
metrics_app = make_asgi_app(registry=registry)
summary_metrics = {}
class MyMiddleware:
def __init__(
self, app
):
self.app = app
route_names = [r.name for r in app.routes]
if len(set(route_names)) != len(route_names):
raise Exception("Route names must be unique")
for route_name in route_names:
if route_name is not None:
summary_metrics[route_name] = Summary(route_name, f'{route_name} processing time', registry=registry)
async def __call__(self, request: Request, call_next):
start_time = time.time()
response = await call_next(request)
process_time = time.time() - start_time
if request.scope.get('route'):
summary_metrics.get(request.scope.get('route')).observe(process_time)
return response
</code></pre>
<p>This is my app.py</p>
<pre><code>app.include_router(some_valid_router)
app.mount("/metrics", metrics_app)
my_middleware = MyMiddleware(app)
app.add_middleware(BaseHTTPMiddleware, dispatch=my_middleware)
if __name__ == "__main__":
LOGGER.info("Starting up the application")
uvicorn_logger = (
UVICORN_LOG_CONFIG if config["CUSTOM_UVICORN_LOGGING"].lower() == "y" else None
)
if uvicorn_logger:
uvicorn.run(
"app:app",
host="0.0.0.0",
port=int(config["PORT"]),
reload=config["DEBUG"],
log_config=uvicorn_logger,
)
else:
uvicorn.run(
"app:app", host="0.0.0.0", port=int(config["PORT"]), reload=config["DEBUG"]
)
</code></pre>
<p>I am getting the <code>ValueError: Duplicated timeseries in CollectorRegistry: </code> error if I have the <code>my_middleware = MyMiddleware(app) app.add_middleware(BaseHTTPMiddleware, dispatch=my_middleware)</code> outside of <code>__main__</code>, and my metrics endpoint is blank if I move those two lines to the start of <code>__main__</code>.</p>
<p>I can see the overall requests count and total time spent if I change my Middleware to below and move <code>app.add_middleware</code> to the outside of <code>__main__</code>.</p>
<pre><code>import time
from fastapi import Request
from prometheus_client import Summary, make_asgi_app, CollectorRegistry
from starlette.middleware.base import BaseHTTPMiddleware
registry = CollectorRegistry()
metrics_app = make_asgi_app(registry=registry)
summary_metrics = {}
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request', registry=registry)
class MyMiddleware:
def __init__(
self, app
):
self.app = app
async def __call__(self, request: Request, call_next):
start_time = time.time()
response = await call_next(request)
process_time = time.time() - start_time
REQUEST_TIME.observe(process_time)
return response
</code></pre>
<p>Can someone tell me what I am doing wrong? This is my first interaction with the Fast API. Please let me know if my approach needs to be corrected or if there are better ways to get the Processing Time and Number of requests per route metrics.</p>
<p>Thanks!!</p>
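<p>To pin down the duplicate-registration part, here is a standalone sketch with no FastAPI involved (the route label value is made up): one labelled <code>Summary</code> created exactly once at import time, plus the <code>ValueError</code> that creating the same metric name twice in one registry produces:</p>

```python
from prometheus_client import CollectorRegistry, Summary

registry = CollectorRegistry()

# One Summary with a 'route' label, created exactly once at import time,
# instead of one metric object per route name.
REQUEST_TIME = Summary(
    "request_processing_seconds",
    "Time spent processing request",
    ["route"],
    registry=registry,
)

REQUEST_TIME.labels(route="/items").observe(0.123)

# Creating the same metric name in the same registry a second time (e.g.
# because middleware construction runs twice under a reloader) reproduces
# the "Duplicated timeseries" error:
dup_error = None
try:
    Summary("request_processing_seconds", "dup", ["route"], registry=registry)
except ValueError as exc:
    dup_error = exc
print(dup_error)
```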
| <python><prometheus><fastapi><fastapi-middleware> | 2023-10-22 04:02:38 | 2 | 629 | user3595026 |
77,338,572 | 6,719,819 | ImportError: No module named langchain.llms | <p>I used the following import statement:</p>
<pre><code>from langchain.llms import OpenAI
</code></pre>
<p>And I am getting the following error:</p>
<blockquote>
<p>pycode python main.py</p>
<pre><code>Traceback (most recent call last):
  File "main.py", line 1, in &lt;module&gt;
    from langchain.llms import openai
ImportError: No module named langchain.llms
</code></pre>
</blockquote>
<p>I am using Python 3.11.6 and I installed the packages using</p>
<pre><code>pip3 install openai langchain
</code></pre>
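<p>One thing worth checking first (a guess on my part): whether <code>pip3</code> installs into the same interpreter that runs the script. Installing with <code>python -m pip</code> removes that ambiguity:</p>

```shell
# Show which interpreter actually runs the script...
python -c "import sys; print(sys.executable)"
# ...and install into exactly that interpreter:
python -m pip install openai langchain
```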
| <python><openai-api><langchain> | 2023-10-22 03:51:20 | 4 | 15,655 | Daniel |
77,338,348 | 7,822,387 | How to create objective function in pywraplp when looping through multiple variables | <p>I have a linear program with the following variables.</p>
<pre><code>X[i, j, k] for i,j,k in range(0,100)
Y[j, k] for j, k in range(0,100)
</code></pre>
<p>I am wondering how I can create the following constraint for X and objective function from Y.</p>
<pre><code>Constraint: (creates 100 constraints, 1 for each i)
for i in range(0,100):
solver.Add( sum( X[i,j,k] for j, k in range(0,100) ) == 1)
Objective:
solver.Minimize(solver.Sum(Y[j, k] for j,k in range(0,100) ))
</code></pre>
<p>These work if I am only iterating through j, not j and k. I am wondering if there is a way to iterate through multiple lists on a single line sum. If not, is there a way I can store the LP variables into a variable and then create the constraints/objective by summing those variables together? Or what would be the best approach for this? Thanks</p>
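<p>To clarify what I mean by iterating two index sets in one sum: with plain numbers standing in for solver variables (an assumption for illustration), <code>itertools.product</code> expresses it on a single line. The same generator expression should work inside <code>solver.Sum</code> and <code>solver.Add</code>:</p>

```python
from itertools import product

N = 3  # small stand-in for 100

# Plain numbers standing in for LP variables, just to show the pattern
X = {(i, j, k): 1 for i, j, k in product(range(N), repeat=3)}
Y = {(j, k): 1 for j, k in product(range(N), repeat=2)}

# "for j, k in product(...)" iterates every (j, k) pair in one clause
i = 0
row_sum = sum(X[i, j, k] for j, k in product(range(N), repeat=2))
total = sum(Y[j, k] for j, k in product(range(N), repeat=2))
print(row_sum, total)  # 9 9
```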
| <python><linear-programming> | 2023-10-22 01:20:40 | 1 | 311 | J. Doe |
77,338,343 | 12,310,828 | Assigning to nested list with list of indices | <p>Given a nested list, you can access the elements with something like</p>
<pre><code>data = [[12, 15], [21, 22], [118, 546], [200, 1200]]
assert data[0][0] == 12
</code></pre>
<p>The issue is that in this case, I want to index into a nested list with a "list of indices" whose length is only known at runtime. For example, the list of indices for the above would be <code>[0, 0]</code>.</p>
<p>I want this general type of function</p>
<pre><code>def nested_list_assignment(nested_list, list_of_indices, value):
</code></pre>
<p>and would work something like</p>
<pre><code># passes this basic test
data = [[12, 15], [21, 22], [118, 546], [200, 1200]]
assert data[0][0] == 12
nested_list_assignment(data, [0, 0], 0)
assert data[0][0] == 0
</code></pre>
<p>I have a working impl like</p>
<pre><code>def nested_list_assignment(nested_list, list_of_indices, value):
# ill handle this case later
assert len(list_of_indices) > 0
if len(list_of_indices) == 1:
nested_list[list_of_indices[0]] = value
else:
nested_list_assignment(nested_list[list_of_indices[0]], list_of_indices[1:], value)
</code></pre>
<p>but I'm curious if Python gives any constructs for this, or just a standard library function for this in general.</p>
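<p>One iteration-based alternative I have seen, using <code>functools.reduce</code> to walk to the parent container and then assign once (a sketch, in case it helps frame the question):</p>

```python
from functools import reduce

def nested_list_assignment(nested_list, list_of_indices, value):
    # Walk down to the parent of the target cell, then assign once.
    *parents, last = list_of_indices
    parent = reduce(lambda acc, i: acc[i], parents, nested_list)
    parent[last] = value

data = [[12, 15], [21, 22], [118, 546], [200, 1200]]
nested_list_assignment(data, [0, 0], 0)
print(data[0][0])  # 0
```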
| <python><list><variable-assignment><nested-lists> | 2023-10-22 01:18:13 | 3 | 443 | k huang |
77,338,311 | 159,072 | Save *.csv without spaces and double quote (2) | <p>I want to save CSV data without double quotes.</p>
<pre><code># Write the data sets to CSV files
training_set.to_csv(os.path.join(output_dir, target_dir, 'training_set.csv'), index=False, header=False)
validation_set.to_csv(os.path.join(output_dir, target_dir, 'validation_set.csv'), index=False, header=False)
test_set.to_csv(os.path.join(output_dir, target_dir, 'test_set.csv'), index=False, header=False)
</code></pre>
<p>The above source code is saving data with double quotes:</p>
<pre><code>... ... ...
"76,GLU,H,5.406,5.079,6.304,0,0,0,1,1,0"
"172,THR,H,6.651,8.414,9.157,0,0,0,0,0,0"
"238,GLU,C,5.764,9.526,11.865,0,0,0,0,2,0"
"133,LYS,C,7.412,9.808,11.162,0,0,0,0,1,0"
"247,ASP,C,5.351,6.6,9.927,0,0,0,2,4,0"
"133,GLU,H,5.498,5.134,6.529,0,0,0,0,1,0"
"111,GLN,C,6.674,9.082,9.925,0,0,0,0,1,0"
"374,SER,C,6.642,8.332,11.536,0,0,0,0,1,0"
"192,SER,C,6.346,8.69,12.18,0,1,2,4,9,0"
"182,LEU,H,5.453,7.862,8.894,0,0,0,0,4,0"
... ... ...
</code></pre>
<p>If I write the following, pandas asks for an escape character:</p>
<pre><code># Write the data sets to CSV files
training_set.to_csv(os.path.join(output_dir, target_dir, 'training_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE)
validation_set.to_csv(os.path.join(output_dir, target_dir, 'validation_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE)
test_set.to_csv(os.path.join(output_dir, target_dir, 'test_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE)
</code></pre>
<p>Then I wrote the following, and an error occurred:</p>
<pre><code># Write the data sets to CSV files
training_set.to_csv(os.path.join(output_dir, target_dir, 'training_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE, escapechar='')
validation_set.to_csv(os.path.join(output_dir, target_dir, 'validation_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE, escapechar='')
test_set.to_csv(os.path.join(output_dir, target_dir, 'test_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE, escapechar='')
</code></pre>
<pre><code>Traceback (most recent call last):
File "data_split_segregate.py", line 87, in <module>
training_set.to_csv(os.path.join(output_dir, target_dir, 'training_set.csv'), index=False, header=False, quoting=csv.QUOTE_NONE, escapechar='')
File "/home/my_username/heca_v2/env/lib/python3.7/site-packages/pandas/core/generic.py", line 3482, in to_csv
storage_options=storage_options,
File "/home/my_username/heca_v2/env/lib/python3.7/site-packages/pandas/io/formats/format.py", line 1105, in to_csv
csv_formatter.save()
File "/home/my_username/heca_v2/env/lib/python3.7/site-packages/pandas/io/formats/csvs.py", line 257, in save
self._save()
File "/home/my_username/heca_v2/env/lib/python3.7/site-packages/pandas/io/formats/csvs.py", line 262, in _save
self._save_body()
File "/home/my_username/heca_v2/env/lib/python3.7/site-packages/pandas/io/formats/csvs.py", line 300, in _save_body
self._save_chunk(start_i, end_i)
File "/home/my_username/heca_v2/env/lib/python3.7/site-packages/pandas/io/formats/csvs.py", line 316, in _save_chunk
self.writer,
File "pandas/_libs/writers.pyx", line 72, in pandas._libs.writers.write_csv_rows
_csv.Error: need to escape, but no escapechar set
</code></pre>
<p>How can I resolve this issue?</p>
<p><strong>NOTE:</strong> The source data does not have double quotes.</p>
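<p>For what it's worth, here is a minimal reproduction of what I think is happening (an assumption on my part): the frame holds a single string column whose values already contain commas, so <code>to_csv</code> quotes each row. With <code>QUOTE_NONE</code>, choosing a separator that never occurs in the data sidesteps the escape requirement entirely:</p>

```python
import csv
import io

import pandas as pd

# Assumption: one string column whose values already contain commas,
# which is why to_csv wraps each row in quotes.
df = pd.DataFrame({"row": ["76,GLU,H,5.406", "172,THR,H,6.651"]})

buf = io.StringIO()
# A separator that never occurs in the data means QUOTE_NONE has nothing
# to escape, so no escapechar is required.
df.to_csv(buf, index=False, header=False, quoting=csv.QUOTE_NONE, sep="\t")
print(buf.getvalue().splitlines())
# ['76,GLU,H,5.406', '172,THR,H,6.651']
```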
| <python><pandas><csv> | 2023-10-22 00:55:35 | 1 | 17,446 | user366312 |
77,338,137 | 8,030,794 | The fastest way to reconnect a websocket with an updated URL | <p>I need to receive some data from the Binance API, and when the set of coins changes I need to update the websocket URL, spending as little time as possible on the switch. To avoid overloading the main thread, I create the websocket in a separate process.
This is how I implemented it:</p>
<pre><code>from functools import partial
from threading import Thread
from multiprocessing import Process, Manager
import time
import json
import websocket
import requests
def on_message(_wsa, result, prices):
result = json.loads(result)
prices[result['s']]['best_bid'] = result['b']
prices[result['s']]['best_ask'] = result['a']
def start_stream_currency(curr_list,prices):
url = 'wss://fstream.binance.com/ws'
manager = Manager()
for curr in curr_list:
url += '/' + curr.lower() + '@bookTicker'
wsa = websocket.WebSocketApp(url, on_message=partial(on_message, prices=prices))
for curr in curr_list:
prices[curr] = manager.dict()
wsa.run_forever()
def check_price_symbols(prices):  # checks prices received from the websocket
while True:
for key, value in prices.copy().items():
print(f"{key} {value}")
#some calculations
time.sleep(1)
def main():
prices = Manager().dict()
th = Thread(target=check_price_symbols, args=(prices,))
th.start()
curr_list = ['BTCUSDT','ADAUSDT']
pr = Process(target=start_stream_currency, args=(curr_list,prices))
pr.start()
time.sleep(15)
print('Restart starting')
curr_list2 = ['ADAUSDT','XRPUSDT']
pr2 = Process(target=start_stream_currency, args=(curr_list2, prices))
pr2.start()
pr.terminate()
del prices['BTCUSDT']
pr2.join()
if __name__ =='__main__':
main()
</code></pre>
<p>I need to close the old process only once the new process is already running. Can anyone tell me the best way to implement this?</p>
| <python><websocket><multiprocessing> | 2023-10-21 23:22:09 | 1 | 465 | Fresto |
77,338,098 | 1,893,148 | Traceback for Phantom KeyError | <p>I have never seen a KeyError traceback that doesn't make immediate sense: it is always pointing at some get item, typically on a dictionary, for a key that does not exist. My mistake.</p>
<p>Here, however, I am getting a KeyError even though (a) every attempted get of a dictionary value is guarded with <code>.get()</code> and (b) the KeyError is being thrown for a key whose literal value is not present in my code as a string/key but only as a variable name:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[12], line 1
----> 1 resp = openai.upload_file(request_body, files=file_handle)
File ~/dev/project/python_modules/libraries/http_client/http_client/resource.py:67, in Resource.__call__(self, request_body, headers, files)
63 request = self.prepare_request(request_body, headers, files)
65 request = self.validate_request(request)
---> 67 request = self.before_dispatch_hook(request)
69 response_future = self.client.dispatch_request(request)
71 if self.client._within_context: # noqa: SLF001
File ~/dev/project/python_modules/libraries/openai_client/openai_client/resources/files.py:34, in before_dispatch_hook(self, request)
32 def before_dispatch_hook(self, request):
33 file_handle = request.files
---> 34 request.files = [
35 ("purpose", (None, request.request_body.get('purpose'))),
36 ("file", (request.request_body.get('file'), file_handle, "application/octet-stream"))
37 ]
38 return request
KeyError: 'file_handle'
</code></pre>
<p>Has anyone ever seen anything like this? I am mystified.</p>
| <python> | 2023-10-21 23:04:15 | 0 | 2,220 | user1893148 |
77,337,997 | 6,232,518 | Why is my qbittorrent search plugin not supported? | <p>I started to write a search plugin for qBittorrent, to search through a torrent website.</p>
<p>I followed the instructions given in qbittorrent's wiki : <a href="https://github.com/qbittorrent/search-plugins/wiki/How-to-write-a-search-plugin" rel="nofollow noreferrer">https://github.com/qbittorrent/search-plugins/wiki/How-to-write-a-search-plugin</a></p>
<p>When I try to install it however, I get the error:</p>
<pre><code>Couldn't install "zetorrents" search engine plugin. Plugin is not supported.
</code></pre>
<p>I already have a couple of search plugins that are working just fine.</p>
<p>I went through these dated threads</p>
<ul>
<li><a href="https://github.com/qbittorrent/qBittorrent/issues/11149" rel="nofollow noreferrer">https://github.com/qbittorrent/qBittorrent/issues/11149</a></li>
<li><a href="https://github.com/qbittorrent/qBittorrent/issues/13122" rel="nofollow noreferrer">https://github.com/qbittorrent/qBittorrent/issues/13122</a></li>
</ul>
<p>and tried their solutions anyway (mostly clearing qBittorrent's nova3 cache), but had no luck.</p>
<p>I believe my code is the issue but I cannot fathom why.</p>
<p>I come here to ask for help finishing this small project available on Github at
<a href="https://github.com/alexandre-eliot/zetorrents_qbittorrent_search_plugin" rel="nofollow noreferrer">https://github.com/alexandre-eliot/zetorrents_qbittorrent_search_plugin</a></p>
<p>I also tried testing my search plugin <a href="https://github.com/qbittorrent/search-plugins/wiki/How-to-write-a-search-plugin#testing-your-plugin" rel="nofollow noreferrer">as in the wiki</a> but did not get an output</p>
<p>If anyone has any clue on the reason of the matter, I would be very pleased to hear your comments</p>
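<p>For reference, here is the bare skeleton I believe the engine expects, based on my reading of the wiki (treat the exact fields as assumptions, and the URL is a placeholder): the class name must match the lowercase file name, and the file must parse under qBittorrent's bundled Python:</p>

```python
# zetorrents.py -- minimal search-plugin skeleton (sketch; fields assumed
# from the wiki, URL is a placeholder, not the real tracker)
# VERSION: 1.00

class zetorrents(object):
    url = 'https://example.com'
    name = 'zetorrents'
    supported_categories = {'all': '0'}

    def __init__(self):
        pass

    def search(self, what, cat='all'):
        # qBittorrent reads results from stdout, one line per torrent:
        # link|name|size|seeds|leech|engine_url
        pass

if __name__ == '__main__':
    zetorrents().search('test')
```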
| <python><plugins><bittorrent><torrent> | 2023-10-21 22:18:05 | 1 | 316 | Alexandre ELIOT |
77,337,970 | 18,215,706 | How can you use a Python library with CircuitPython? | <p>I recently got started using CircuitPython on microcontrollers (specifically the Raspberry Pi Pico W and Zero W [I know, technically it's a single-board computer]). There are some Python packages that I would like to use on the microcontroller, but I don't know how to do it. The <a href="https://learn.adafruit.com/welcome-to-circuitpython/circuitpython-libraries" rel="nofollow noreferrer">documentation</a> says you can use the <code>lib</code> folder to install packages, so I tried downloading the package from <a href="https://pypi.org/" rel="nofollow noreferrer">https://pypi.org/</a> but I wasn't sure what to add to the folder. Then, I found out about <a href="https://github.com/adafruit/circup" rel="nofollow noreferrer">circup</a> which I thought could solve my problem, but it turns out it only works with official Adafruit libraries. So, how can I install other libraries to the microcontroller to use with CircuitPython?</p>
<p>Thanks in advance.</p>
| <python><package><adafruit-circuitpython> | 2023-10-21 22:06:44 | 0 | 1,117 | Anders |
77,337,901 | 5,568,409 | How to draw a point on an ellipse from general form | <p>The following program draws a rotated ellipse from an equation given in general form:</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
x_range = np.linspace(-6, +6)
y_range = np.linspace(-10, +1)
x, y = np.meshgrid(x_range, y_range)
A, B, C, D, E, F = 0.25, 0.40, 0.25, 1.50, 2.00, 1.50
f = lambda x, y, A, B, C, D, E, F: A*x**2 + B*x*y +C*y**2 + D*x + E*y + F
equation = f(x, y, A, B, C, D, E, F)
fig, ax = plt.subplots(1, 1, figsize = (4, 4), tight_layout = True)
ax.contour(x, y, equation, levels = [0], colors = 'red')
plt.show()
</code></pre>
<p>Is there a simple way to draw a point <strong>on</strong> this ellipse,no matter where it is ?</p>
<p>My question comes from the fact that I use <code>Geogebra</code> sometimes and, in <code>Geogebra</code>, if you draw an ellipse by the input <code>0.25*x^2 + 0.40*x*y + 0.25*y^2 + 1.5*x + 2*y + 1.5 = 0</code>, you can easily put a point where you want on the ellipse (and even move it), using the simple instruction <code>Point on Object</code>...</p>
<p>I wonder if something similar could be implemented in Python, starting from the small program that I wrote a little above?</p>
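<p>For what it's worth, my current idea (just a sketch, the function and variable names are mine) is to fix an x value and solve the general form as a quadratic in y, which gives the point(s) of the ellipse at that x:</p>

```python
import math

# coefficients of A*x**2 + B*x*y + C*y**2 + D*x + E*y + F = 0, as above
A, B, C, D, E, F = 0.25, 0.40, 0.25, 1.50, 2.00, 1.50

def points_on_ellipse(x):
    # for a fixed x, the general form is a quadratic in y:
    #   C*y**2 + (B*x + E)*y + (A*x**2 + D*x + F) = 0
    a, b, c = C, B * x + E, A * x**2 + D * x + F
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # this x does not intersect the ellipse
    return [(x, (-b + s * math.sqrt(disc)) / (2 * a)) for s in (1, -1)]
```

<p>Plotting would then be something like <code>ax.scatter(*zip(*points_on_ellipse(-3.0)))</code>, but I still don't know how to pick an arbitrary point "on the object" the way Geogebra does.</p>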
| <python><draw><point> | 2023-10-21 21:40:24 | 1 | 1,216 | Andrew |
77,337,571 | 8,028,263 | Jinja equivalent to __file__? | <p>I would like to record the full path to my template in its output, much like the <code>__FILE__</code> macro expands to the current file in C.</p>
<p>I've tried the <code>{{ self._TemplateReference__context.name }}</code> trick, but this doesn't work when extending templates. If A extends from B and A is rendered, then <code>self._TemplateReference__context.name</code> will evaluate to A's path always (even inside B).</p>
<p>What is the equivalent of the <code>__FILE__</code> macro for Jinja?</p>
| <python><jinja2> | 2023-10-21 19:54:54 | 0 | 303 | Dragon |
77,337,559 | 1,142,881 | How to extract an int value from a string that matches a given regex? | <p>I have a database url that looks like <code>sqlite+pool:///C:/temp/test.db?max_connections=20&stale_timeout=300</code> I would like to extract the <code>20</code> value.</p>
<p>I would do something like:</p>
<pre><code>url = 'sqlite+pool:///C:/temp/test.db?max_connections=20&stale_timeout=300'
# default value
max_connections = 5
if url.count('max_connections') > 0:
# work the magic the regex would be something like
# ".*(max_connections=)[0-9]+&.*" then $2 to get it .. or?
max_connections = ...
</code></pre>
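<p>For reference, here is what I have sketched so far: a regex capture group, plus a regex-free alternative using the standard library's URL parsing (I have not tested edge cases such as the parameter name appearing elsewhere in the URL):</p>

```python
import re
from urllib.parse import parse_qs, urlparse

url = 'sqlite+pool:///C:/temp/test.db?max_connections=20&stale_timeout=300'

# regex approach: capture the digits following "max_connections="
m = re.search(r'max_connections=(\d+)', url)
max_connections = int(m.group(1)) if m else 5

# regex-free approach: parse the query string properly
params = parse_qs(urlparse(url).query)
max_connections_alt = int(params.get('max_connections', ['5'])[0])
```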
| <python> | 2023-10-21 19:51:21 | 2 | 14,469 | SkyWalker |
77,337,481 | 13,916,049 | Merge two dataframes with partial column name match | <p>I want to merge/concatenate two dataframes <code>tcia</code> and <code>clin</code>. In contrast to the <code>tcia</code> dataframe, the <code>clin</code> dataframe has a substring at the end of the column names (i.e., the 3rd "-" followed by subsequent letters). The dataframes should be combined irrespective of the substring but the final dataframe should have this substring.</p>
<p>My code does the job but I'm hoping for a more robust/concise way to do it.</p>
<p>Code:</p>
<pre><code>clin_df = clin.copy()
clin_df.columns = clin_df.columns.str.rsplit('-', n=1).str.get(0)
df = pd.concat([clin_df, tcia], axis=0)
df.columns = clin.columns
</code></pre>
<p>Input:
<code>clin</code></p>
<pre><code>import pandas as pd
from numpy import nan

pd.DataFrame({'TCGA-2K-A9WE-01': {'admin.batch_number': '398.45.0',
'age': '53',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': '207.0',
'ethnicity': 'not hispanic or latino'},
'TCGA-2Z-A9J1-01': {'admin.batch_number': '398.45.0',
'age': '71',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': '2298.0',
'ethnicity': 'not hispanic or latino'},
'TCGA-2Z-A9J3-01': {'admin.batch_number': '398.45.0',
'age': '67',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': nan,
'ethnicity': 'not hispanic or latino'},
'TCGA-2Z-A9J6-01': {'admin.batch_number': '398.45.0',
'age': '60',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': '1731.0',
'ethnicity': 'not hispanic or latino'},
'TCGA-2Z-A9J7-01': {'admin.batch_number': '398.45.0',
'age': '63',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': nan,
'ethnicity': 'not hispanic or latino'}})
</code></pre>
<p><code>tcia</code></p>
<pre><code>pd.DataFrame({'TCGA-2K-A9WE': {'ips_ctla4_neg_pd1_neg': 8.0,
'ips_ctla4_neg_pd1_pos': 7.0,
'ips_ctla4_pos_pd1_neg': 7.0,
'ips_ctla4_pos_pd1_pos': 6.0,
'patient_uuid': '73292c19-d6a8-4bc4-97bc-ccce54f264f8'},
'TCGA-2Z-A9J1': {'ips_ctla4_neg_pd1_neg': 9.0,
'ips_ctla4_neg_pd1_pos': 8.0,
'ips_ctla4_pos_pd1_neg': 9.0,
'ips_ctla4_pos_pd1_pos': 7.0,
'patient_uuid': '851a1157-e460-4794-8534-2eb6f0ae7468'},
'TCGA-2Z-A9J3': {'ips_ctla4_neg_pd1_neg': 9.0,
'ips_ctla4_neg_pd1_pos': 7.0,
'ips_ctla4_pos_pd1_neg': 8.0,
'ips_ctla4_pos_pd1_pos': 6.0,
'patient_uuid': '5195c9ac-b649-49f8-8750-f9a4787e8e52'},
'TCGA-2Z-A9J6': {'ips_ctla4_neg_pd1_neg': 9.0,
'ips_ctla4_neg_pd1_pos': 7.0,
'ips_ctla4_pos_pd1_neg': 8.0,
'ips_ctla4_pos_pd1_pos': 7.0,
'patient_uuid': '4a540448-f106-4b0e-9038-9f7ccefc785b'},
'TCGA-2Z-A9J7': {'ips_ctla4_neg_pd1_neg': 7.0,
'ips_ctla4_neg_pd1_pos': 5.0,
'ips_ctla4_pos_pd1_neg': 6.0,
'ips_ctla4_pos_pd1_pos': 5.0,
'patient_uuid': 'd66c9261-6c0c-44b0-92fa-a43757f34cb2'}})
</code></pre>
<p>Desired output:</p>
<pre><code>pd.DataFrame({'TCGA-2K-A9WE': {'admin.batch_number': '398.45.0',
'age': '53',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': '207.0',
'ethnicity': 'not hispanic or latino',
'ips_ctla4_neg_pd1_neg': 8.0,
'ips_ctla4_neg_pd1_pos': 7.0,
'ips_ctla4_pos_pd1_neg': 7.0,
'ips_ctla4_pos_pd1_pos': 6.0,
'patient_uuid': '73292c19-d6a8-4bc4-97bc-ccce54f264f8'},
'TCGA-2Z-A9J1': {'admin.batch_number': '398.45.0',
'age': '71',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': '2298.0',
'ethnicity': 'not hispanic or latino',
'ips_ctla4_neg_pd1_neg': 9.0,
'ips_ctla4_neg_pd1_pos': 8.0,
'ips_ctla4_pos_pd1_neg': 9.0,
'ips_ctla4_pos_pd1_pos': 7.0,
'patient_uuid': '851a1157-e460-4794-8534-2eb6f0ae7468'},
'TCGA-2Z-A9J3': {'admin.batch_number': '398.45.0',
'age': '67',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': nan,
'ethnicity': 'not hispanic or latino',
'ips_ctla4_neg_pd1_neg': 9.0,
'ips_ctla4_neg_pd1_pos': 7.0,
'ips_ctla4_pos_pd1_neg': 8.0,
'ips_ctla4_pos_pd1_pos': 6.0,
'patient_uuid': '5195c9ac-b649-49f8-8750-f9a4787e8e52'},
'TCGA-2Z-A9J6': {'admin.batch_number': '398.45.0',
'age': '60',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': '1731.0',
'ethnicity': 'not hispanic or latino',
'ips_ctla4_neg_pd1_neg': 9.0,
'ips_ctla4_neg_pd1_pos': 7.0,
'ips_ctla4_pos_pd1_neg': 8.0,
'ips_ctla4_pos_pd1_pos': 7.0,
'patient_uuid': '4a540448-f106-4b0e-9038-9f7ccefc785b'},
'TCGA-2Z-A9J7': {'admin.batch_number': '398.45.0',
'age': '63',
'days_to_initial_pathologic_diagnosis': '0',
'days_to_last_follow_up': nan,
'ethnicity': 'not hispanic or latino',
'ips_ctla4_neg_pd1_neg': 7.0,
'ips_ctla4_neg_pd1_pos': 5.0,
'ips_ctla4_pos_pd1_neg': 6.0,
'ips_ctla4_pos_pd1_pos': 5.0,
'patient_uuid': 'd66c9261-6c0c-44b0-92fa-a43757f34cb2'}})
</code></pre>
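<p>For context, the kind of one-step alternative I was imagining looks like this sketch (it assumes every <code>tcia</code> column has exactly one <code>clin</code> counterpart, and renames <code>tcia</code> to <code>clin</code>'s full names rather than trimming <code>clin</code>):</p>

```python
import pandas as pd

# tiny stand-ins for the real frames
clin = pd.DataFrame({'TCGA-2K-A9WE-01': {'age': '53'}})
tcia = pd.DataFrame({'TCGA-2K-A9WE': {'ips_ctla4_neg_pd1_neg': 8.0}})

# map each trimmed name to the full clin column name, then rename tcia to match
mapping = dict(zip(clin.columns.str.rsplit('-', n=1).str.get(0), clin.columns))
df = pd.concat([clin, tcia.rename(columns=mapping)], axis=0)
```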
| <python><pandas> | 2023-10-21 19:30:19 | 2 | 1,545 | Anon |
77,337,407 | 1,759,443 | Deploy gen2 Google Cloud Functions with GitHub Actions? | <p>I was able to easily deploy to GitHub Actions for 1st generation Google Cloud Functions, but now with 2nd generation, I get authentication errors.</p>
<p>How can I set up a GitHub workflow to deploy my function when I merge or push to my <code>main</code> branch?</p>
<p>Here is the workflow I was using before</p>
<pre class="lang-yaml prettyprint-override"><code> - id: "deploy"
uses: "google-github-actions/deploy-cloud-functions@v0"
with:
name: "<cloud-function-name>"
runtime: "python310"
region: "us-east1"
entry_point: "<function-in-main-file>"
timeout: 540
service_account_email: PROJECT@appspot.gserviceaccount.com
ingress_settings: ALLOW_ALL
max_instances: 1
</code></pre>
| <python><typescript><google-cloud-functions><github-actions> | 2023-10-21 19:04:16 | 1 | 8,398 | Kyle Venn |
77,337,388 | 17,835,656 | how to make the application accept scrolling by touching in PyQt5? | <p>I have an application made by PyQt5 and this application is used by customers use touch screen , without keyboard or mouse only screen and they touch on it to move and to do everything.</p>
<p>my application has QScrollArea and this QScrollArea contains choices and i need to scroll up and down to be able to see all of the choices, but my customers are not able to scroll when they touch on the screen and pull it down or raise it up (for example like google chrome) :</p>
<p><a href="https://i.sstatic.net/kcrBD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kcrBD.png" alt="enter image description here" /></a></p>
<p>but my application does not accept that when i do it on my touch screen :</p>
<p><a href="https://i.sstatic.net/hWuYD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hWuYD.png" alt="enter image description here" /></a></p>
<p>I know there is a scroll bar on the window, but it is not comfortable to have to reach for a specific spot every time you need to scroll down.</p>
<p>Do you have a solution?</p>
<p>this is a code to test it out only:</p>
<pre class="lang-py prettyprint-override"><code>
from PyQt5 import QtWidgets
import sys
app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QWidget()
window.resize(500,500)
window_layout = QtWidgets.QGridLayout()
scrollarea = QtWidgets.QScrollArea()
frame = QtWidgets.QFrame()
frame_layout = QtWidgets.QGridLayout()
button1 = QtWidgets.QPushButton()
button1.setStyleSheet("background-color:rgb(150,0,0);")
button1.setFixedSize(200,200)
button2 = QtWidgets.QPushButton()
button2.setStyleSheet("background-color:rgb(150,0,0);")
button2.setFixedSize(200,200)
button3 = QtWidgets.QPushButton()
button3.setStyleSheet("background-color:rgb(150,0,0);")
button3.setFixedSize(200,200)
button4 = QtWidgets.QPushButton()
button4.setStyleSheet("background-color:rgb(150,0,0);")
button4.setFixedSize(200,200)
frame_layout.addWidget(button1)
frame_layout.addWidget(button2)
frame_layout.addWidget(button3)
frame_layout.addWidget(button4)
frame.setLayout(frame_layout)
scrollarea.setWidget(frame)
window_layout.addWidget(scrollarea)
window.setLayout(window_layout)
window.show()
app.exec()
</code></pre>
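<p>One thing I came across while searching, but have not yet verified on the actual touch screen, is Qt's <code>QScroller</code>, which is supposed to enable flick/drag scrolling on a scroll area's viewport. In the test script above it would go before <code>window.show()</code>, roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>QtWidgets.QScroller.grabGesture(scrollarea.viewport(), QtWidgets.QScroller.LeftMouseButtonGesture)
# on a real touch screen, QScroller.TouchGesture may be the more appropriate gesture type
</code></pre>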
<p>thanks</p>
| <python><python-3.x><qt><pyqt><pyqt5> | 2023-10-21 18:58:28 | 1 | 721 | Mohammed almalki |
77,337,197 | 3,155,945 | Problem opening image file using PIL.Image on Windows 11 | <p>I'm trying to open images using <code>PIL.Image</code>, but encountering this error:</p>
<pre><code>Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.listdir(".")
['Characters', 'Creatures', 'processing.ipynb']
>>> picpath = os.path.join("Characters", "4sprite", "Female_Archer", "Archer_Base", "Sprite_1.png")
>>> os.path.exists(picpath)
True
>>> from PIL import Image
>>> img = Image.open(picpath)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Alex\AppData\Local\Programs\Python\Python311\Lib\site-packages\PIL\Image.py", line 3305, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file 'Characters\\4sprite\\Female_Archer\\Archer_Base\\Sprite_1.png'
</code></pre>
<p>My best guess right now is that it has something to do with the way <code>PIL</code> is handling paths on Windows, because <code>os.path.exists</code> finds the file (and I've manually verified the file exists and can be opened and viewed). I notice the path has double backslashes, but the <a href="https://stackoverflow.com/questions/11924706/how-to-get-rid-of-double-backslash-in-python-windows-file-path-string">resources</a> I've found say that's the canonical representation. I saw someone encountering a <a href="https://www.reddit.com/r/learnpython/comments/rv2hl4/cant_get_rid_of_double_backslashes_in_filepaths/" rel="nofollow noreferrer">similar problem</a>, but unfortunately their solution was to use <code>PIL.Image.open</code> (which is what's throwing the error for me).</p>
<p>Has anyone else encountered and worked around this?</p>
<hr />
<p>Q: What type of image (encoding, channel count and size) is Sprite_1.png? Is it actually a PNG with a valid format? Can you show us the first hundred bytes? β Brian61354270</p>
<p>A:</p>
<pre><code>with open(os.path.join("Characters", "4sprite", "Female_Archer", "Archer_Base", "Sprite_1.png"), "rb") as f:
first_100 = f.read(100)
print(first_100)
</code></pre>
<p>gives <code>b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x0f\x00\x00\x00\x0f\xa0\x08\x06\x00\x00\x00G'\xe9\xd3\x00\x00a.zTXtRaw profile type exif\x00\x00x\xda\xac\xbdY\x92\xc48\xb6m\xf7\xcfQ\xbc!\x10=8\x1c\x00$\xcc4\x03\r_k\xc1#\xab\xea\xdew%3\xc9"</code></p>
| <python><python-3.x><windows><image><runtime-error> | 2023-10-21 18:00:04 | 1 | 334 | alxmke |
77,337,183 | 4,958,156 | Finding regions of interest in image/radio data heatmap | <p>I have this weird 'image' data. It's not really an image, it's more like radio waves/signals. But it can be considered an 'image' for python/opencv purposes. It has dimensions 100x100x3 (100x100 size and 3 channels similar to RGB in an image). As you can see, the 'pixel' values are very small (order of magnitude of 10^-12). The rightmost image is normalised with all 3 channels.</p>
<p>I have a set of labels, which you can see marked as 'X'. These are what I'm trying to predict/find in these images. I was curious to know what approaches might work well for this task. I'm wondering if I should explore approaches like clustering/DBSCAN, or deep learning methods using the 'X' locations as target variables? The reason I am worried about deep learning approaches is that I can't afford any false positives, and there are only 1000 of these images to train on. Any insight (and even code/answers!) would be much appreciated.</p>
<p><a href="https://i.sstatic.net/JoVEp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JoVEp.png" alt="enter image description here" /></a></p>
<p>Here is an example of what an original image looks like, which can be read in with opencv:</p>
<pre><code>import cv2

im = cv2.imread('image_data.png')
cv2.imshow('frame', im)
cv2.waitKey(0)
</code></pre>
<p><a href="https://i.sstatic.net/uxgXo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uxgXo.png" alt="enter image description here" /></a></p>
| <python><opencv><image-processing><computer-vision> | 2023-10-21 17:57:08 | 0 | 1,294 | Programmer |
77,337,174 | 6,263,000 | How can I define PostgreSQL table partitions using SQLAlchemy | <p>I'm using</p>
<ul>
<li>psycopg2(2.9.9)</li>
<li>SQLAlchemy(2.0.22)</li>
<li>PostgreSQL(16.0)</li>
</ul>
<p>and I'm trying to create a SQLAlchemy model that generates a list partitioned table. I can use the following to define a table, BUT (see below)</p>
<pre><code>from sqlalchemy import create_engine, Column, Integer, String, dialects
from sqlalchemy.orm import declarative_base
engine = create_engine('postgresql://test_user:test_password@localhost:5432/test_db')
Base = declarative_base()
class PartitionedTable(Base):
__tablename__ = 'partitioned_table'
__table_args__ = {
'postgresql_partition_by': 'LIST (category)',
}
id = Column(Integer, primary_key=True)
name = Column(String)
category = Column(String)
value = Column(Integer)
</code></pre>
<p>How can I also define the actual partitions in the same piece of code? For example, I would like two partitions for <code>category</code> values of <code>'A'</code> and <code>'B'</code>. How can I define the partitions inside the SQLAlchemy model?</p>
| <python><postgresql><sqlalchemy> | 2023-10-21 17:55:45 | 0 | 609 | Babak Tourani |
77,337,150 | 1,142,881 | How to create an instance of subclass from a factory (i.e., constructor) which invokes the super __init__ directly? | <p>I have an abstract class A, and a concrete subclass B. B has a constructor with a large code base depending on it but now I need a new way to construct instances of B without impacting the dependent code, so I need to create an instance of B from a new constructor factory method and by invoking B's parent constructor directly.</p>
<p>Code before the refactoring:</p>
<pre><code>from abc import ABC
class A(ABC):
    def __init__(self, k, l, m, n):
# complex initialization ...
self._x = x
self._y = y
class B(A):
def __init__(self, k, l, m, n):
super().__init__(k, l, m, n)
# additional code here for initializing B's side
</code></pre>
<p>Code after the refactoring:</p>
<pre><code>from abc import ABC
class A(ABC):
def __init__(self, x, y):
self._x = x
self._y = y
@classmethod
def _complex_initialization(cls, k, l, m, n):
# <<< complex initialization code here that uses cls >>>>
return x, y
class B(A):
def __init__(self, k, l, m, n):
x, y = self._complex_initialization(k, l, m, n)
super().__init__(x, y)
@classmethod
def _create_from_x_y(cls, x, y):
# is this correct? I need an instance of B but
# initialized using the constructor of A
return super(B, A).__init__(x, y)
</code></pre>
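<p>My current best guess, untested, is to allocate the instance with <code>__new__</code> and then run A's initializer on it explicitly, instead of the <code>super(B, A).__init__(...)</code> expression above:</p>

```python
from abc import ABC

class A(ABC):
    def __init__(self, x, y):
        self._x = x
        self._y = y

class B(A):
    def __init__(self, k, l, m, n):
        # stand-in for the real complex initialization
        super().__init__(k + l, m + n)

    @classmethod
    def _create_from_x_y(cls, x, y):
        # allocate a B without running B.__init__, then initialize the A side
        obj = cls.__new__(cls)
        A.__init__(obj, x, y)
        return obj
```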
<p>I need in <code>B._create_from_x_y(x, y)</code> to be able to create an instance of B initialized using A's constructor.</p>
| <python> | 2023-10-21 17:45:16 | 1 | 14,469 | SkyWalker |
77,337,046 | 5,272,283 | Getting number of followers of a Twitter user using Twitter API for free | <p>I wrote a Twitter bot written in Python 3-4 years back, using tweepy which used to tweet about new followers gained in a day for a specific user ID. I tried reviving it today, but it started giving these errors.</p>
<blockquote>
<p>tweepy.errors.Forbidden: 403 Forbidden 453 - You currently have access
to a subset of Twitter API v2 endpoints and limited v1.1 endpoints
(e.g. media post, oauth) only. If you need access to this endpoint,
you may need a different access level. You can learn more here:
<a href="https://developer.twitter.com/en/portal/product" rel="nofollow noreferrer">https://developer.twitter.com/en/portal/product</a></p>
</blockquote>
<p>The code is straightforward :</p>
<pre><code>import tweepy
import sys
def main():
ACCESS_KEY = 'XXXXXXXXX'
ACCESS_SECRET = 'XXXXXXXXXXXX'
CONSUMER_KEY = 'XXXXXXXXXXXXXXXXXX'
CONSUMER_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXXX'
Past_Followers = int(read_prev_followers_from_file())
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
usr= api.get_user(screen_name='@id')
Current_Followers = usr.followers_count
print("Current followers : " + str(Current_Followers))
diff=Current_Followers-Past_Followers
stat='Total Followers = '+str(Current_Followers)+'. New Followers Today = '+str(diff)
api.update_status(status=stat)
write_curr_followers_to_file(Current_Followers)
</code></pre>
<p>The response seems to specify that now I cannot get the number of followers of a particular twitter user using the twitter API <strong>for free</strong>. I tried using the twitter V2 API free version as well but there only the following endpoint is exposed through which I can get only my user ID's information.</p>
<blockquote>
<p>GET /2/users/me</p>
<p>Client.get_me()</p>
</blockquote>
<p>So is it the end of the road for this task or there exists a way to resolve this?</p>
| <python><twitter><tweepy> | 2023-10-21 17:16:29 | 0 | 707 | arjun gulyani |
77,336,940 | 7,076,130 | Telegram bot inline search doesn't work for custom gif URLs | <p>The <code>InlineQueryResultGif</code> API does not work correctly when we pass a custom URL (the URL of a gif hosted on my own server). <br />
I mean, when the search bar opens, it says "No Results".</p>
<p>However, when I pass a gif from a famous website like giphy, it works correctly!</p>
<p>Does anyone know about this issue?</p>
<pre class="lang-py prettyprint-override"><code>InlineQueryResultGif(
id = str(uuid4()),
title = "giff",
gif_url = "https://media0.giphy.com/media/kA5Gnnr2bcEIzu26DT/giphy.gif",
thumbnail_url = "https://media0.giphy.com/media/kA5Gnnr2bcEIzu26DT/giphy.gif"
)
</code></pre>
<p>I tried everything I could, searched a lot, used other similar APIs but they didn't help at all.</p>
| <python><bots><telegram><gif> | 2023-10-21 16:45:37 | 1 | 383 | Alireza Kavian |
77,336,926 | 390,897 | Plot Shapely LineStrings overlayed by Points in Jupyter | <p>I'm trying to display points over line strings to debug whether there are multiple points on the same line:</p>
<pre><code>def to_shapely(points: list[tuple[float, float]]):
return shapely.ops.unary_union([
shapely.geometry.LineString(points),
shapely.geometry.MultiPoint(points)
])
to_shapely([(0,0), (2,2), (1,0.5)])
</code></pre>
<p>This only displays the linestrings however. Is there a way to overlay the points on the linestrings?</p>
<p><a href="https://i.sstatic.net/PnHPD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PnHPD.png" alt="enter image description here" /></a></p>
| <python><jupyter-notebook><shapely> | 2023-10-21 16:41:36 | 1 | 33,893 | fny |
77,336,662 | 12,415,855 | Send date-value to input-field using Selenium? | <p>I try to add a date value to an input field on a website using Selenium, with the following code
(first I switch to the iframe, input some other values and press a button; on the new page that appears, this no longer works):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import time
if __name__ == '__main__':
checkLink = "https://www.skatteetaten.no/person/avgifter/bil/eksportere/regn-ut/"
options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
driver.get (checkLink)
time.sleep(1)
waitWD.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='iFrameResizer0']")))
time.sleep(1)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="Regnummer"]'))).send_keys("VH77934")
time.sleep(1)
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[text()="Neste"]'))).click()
time.sleep(1)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="DatoEksporteringFraNorge"]'))).send_keys("30.11.2023")
driver.quit()
</code></pre>
<p>But i allways get this error:</p>
<pre><code>$ python test.py
Traceback (most recent call last):
File "G:\DEV\Fiverr\TRY\csgoenterprise\test.py", line 31, in <module>
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="DatoEksporteringFraNorge"]'))).send_keys("30.11.2023")
File "G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webelement.py", line 231, in send_keys
self._execute(
File "G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webelement.py", line 395, in _execute
return self._parent.execute(command, params)
File "G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 345, in execute
self.error_handler.check_response(response)
File "G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
(Session info: chrome=118.0.5993.89)
</code></pre>
<p>How can I add the date to this field?</p>
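<p>One workaround I am currently experimenting with, though I have not confirmed it triggers the site's own validation, is setting the value through JavaScript instead of <code>send_keys</code>, in case the input is covered by a date-picker widget:</p>
<pre><code>elem = waitWD.until(EC.presence_of_element_located((By.XPATH, '//input[@id="DatoEksporteringFraNorge"]')))
driver.execute_script("arguments[0].value = arguments[1];", elem, "30.11.2023")
# fire a change event so any attached JavaScript sees the new value
driver.execute_script("arguments[0].dispatchEvent(new Event('change', {bubbles: true}));", elem)
</code></pre>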
| <python><selenium-webdriver> | 2023-10-21 15:29:34 | 1 | 1,515 | Rapid1898 |
77,336,649 | 11,586,205 | Why do the following two approaches with the same apply() functionality differ in their results? | <p>The objective is to convert each string in a Pandas dataframe (<code>df</code>) to uppercase.
I used the following 3 approaches, but am not sure why the 1st one is NOT working.
Please refer to the code snippet and the 3 dataframes, viz. <code>df1</code>, <code>df2</code>, <code>df3</code>.
Using <code>pandas.__version__ == 2.1.1</code></p>
<pre><code>import pandas as pd
# create a sample dataframe
df = pd.DataFrame({'C0': [1, 2, 3], 'C1': ['a', 'b', 'c']})
# apply lambda function to each element of the dataframe
df1 = df.apply(lambda x: x.upper() if isinstance(x, str) else x)
print(df1, '\n')
# convert all elements of the dataframe to strings and then convert them to uppercase
df2 = df.apply(lambda x: x.astype(str).str.upper())
print(df2, '\n')
# applymap lambda function to each element of the dataframe
df3 = df.applymap(lambda x: x.upper() if isinstance(x, str) else x)
print(df3, '\n')

# printed output:
C0 C1
0 1 a
1 2 b
2 3 c
C0 C1
0 1 A
1 2 B
2 3 C
C0 C1
0 1 A
1 2 B
2 3 C
</code></pre>
<ol>
<li>Why outputs differ in <code>df1</code> and <code>df2</code>?</li>
<li>Why we need to cast the type of each column into string as in <code>df2</code>?</li>
<li>Why <code>apply()</code> doesn't work in <code>df1</code> but <code>applymap()</code> worked with <code>df3</code> where it checks each column datatype as string with <code>isinstance()</code></li>
</ol>
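<p>For reference, this is the debugging snippet that made me suspect the difference lies in what gets passed to the lambda, i.e. whole columns versus scalar elements:</p>

```python
import pandas as pd

df = pd.DataFrame({'C0': [1, 2, 3], 'C1': ['a', 'b', 'c']})

# apply passes each whole column (a Series) to the function...
apply_types = set()
df.apply(lambda x: apply_types.add(type(x).__name__))

# ...while applymap passes each scalar element
# (applymap was renamed to DataFrame.map in newer pandas, hence the guard)
elementwise = df.applymap if hasattr(df, 'applymap') else df.map
applymap_types = set()
elementwise(lambda x: applymap_types.add(type(x).__name__))

print(apply_types, applymap_types)
```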
| <python><pandas><dataframe><apply><python-applymap> | 2023-10-21 15:25:18 | 2 | 696 | afghani |
77,336,569 | 9,133,819 | Is there any way to publish pyarmor obfuscated python package which is python version independent? | <p>So I am trying to publish a proprietary python package and I am using <code>pyarmor</code> package to obfuscate it and then publish the obfuscated build on <code>PyPi</code>. It is working on the same python version environment, however as mentioned in the <a href="https://pyarmor.readthedocs.io/en/latest/tutorial/getting-started.html#packaging-obfuscated-scripts" rel="nofollow noreferrer">packaging-obfuscated-scripts</a> it is version dependent and can not not be ran using any older or newer python version.</p>
<p>I have quite a heavy package; is there any way around this (to avoid building and maintaining the package for each Python version)?</p>
<p>EDIT: If it is not possible as described in the documentation, is there any other way to achieve this?</p>
| <python><pypi><python-packaging><pyarmor> | 2023-10-21 15:05:47 | 1 | 445 | aakarsh |
77,336,357 | 7,334,203 | Create a list that its elements are grouped by type | <p>I have a list:</p>
<pre><code>lst = [4,67,8, 'not found', 32, 'missing', 21, 23, 'warning', 'alert', 2.01, [], {}]
</code></pre>
<p>I want to create a new list that is grouped by its type. For example i want my list to be</p>
<pre><code>new = [4,67,8, 32, 21, 23, 2.01,'not found','missing','warning', 'alert', [], {}]
</code></pre>
<p>As a check, the code below collects the type of each element and puts them into a set so they are unique:</p>
<pre><code>type_list = []
for i in lst:
type_list.append(type(i))
type_set = set(type_list)
</code></pre>
<p>The type_set returned {int, float, str, list, dict}.
Now I want to group the list above by each type in the type_set, as shown above:</p>
<pre><code>new = [4,67,8, 32, 21, 23, 2.01,'not found','missing','warning', 'alert', [], {}]
for index, data_type in enumerate(list(type_set)):
tel = list(filter(lambda x : (index, data_type), lst))
print(tel)
</code></pre>
<p>Of course it didn't print what I wanted. How can I achieve this without using sorted?</p>
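<p>For completeness, the closest I have got myself is bucketing by type with a dict, since insertion order preserves the first-seen type order. It does put <code>2.01</code> in its own float group rather than together with the ints as in my desired output, though:</p>

```python
lst = [4, 67, 8, 'not found', 32, 'missing', 21, 23, 'warning', 'alert', 2.01, [], {}]

# one bucket per type, in order of first appearance
groups = {}
for item in lst:
    groups.setdefault(type(item), []).append(item)

new = [item for bucket in groups.values() for item in bucket]
print(new)
```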
| <python><python-3.x><list> | 2023-10-21 14:08:33 | 4 | 7,486 | RamAlx |
77,336,121 | 1,293,193 | Running DRF endpoints in a venv with gunicorn | <p>I have some APIs implemented with DRF and I want those to run in a venv. I am using gunicorn. I run gunicorn like this:</p>
<pre><code>/path_to_venv/bin/gunicorn
</code></pre>
<p>But in the API code I log <code>os.getenv('VIRTUAL_ENV')</code> and it's None. Is that valid evidence that the code is not running in the venv? If so, how can I force it to run in the venv? If not, how can I validate that it is running in the venv?</p>
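<p>For what it's worth, I have read that <code>VIRTUAL_ENV</code> is only exported by the <code>activate</code> script, not by invoking the venv's interpreter (or its gunicorn binary) directly, so I also tried this kind of check based on <code>sys.prefix</code> (if I understand correctly, it differs from <code>sys.base_prefix</code> inside a venv):</p>

```python
import sys

# inside a venv, sys.prefix points into the venv while
# sys.base_prefix still points at the base installation
in_venv = sys.prefix != sys.base_prefix
print(sys.prefix, in_venv)
```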
| <python><django><django-rest-framework><gunicorn><python-venv> | 2023-10-21 13:08:32 | 0 | 3,786 | Larry Martell |
77,335,990 | 3,225,420 | Matplotlib cannot find the installed PIL module | <p>I want to print <code>rcParams</code> to see the settings during stages of my program.</p>
<p>Here are my imports:</p>
<pre><code>import matplotlib as mpl
from matplotlib import rcParams
</code></pre>
<p>I've tried the following:</p>
<p><code>print(mpl.rcParamsDefault)</code>,<code>print(mpl.rcParams)</code> and <code>print(rcParams)</code></p>
<p>I get the same error message for all three:</p>
<pre><code>Traceback (most recent call last):
File "e:\Minitab_Killer\rcParams_print_out.py", line 1, in <module>
import matplotlib as mpl
File "C:\Users\D\miniconda3\lib\site-packages\matplotlib\__init__.py", line 107, in <module>
from . import _api, cbook, docstring, rcsetup
File "C:\Users\D\miniconda3\lib\site-packages\matplotlib\rcsetup.py", line 24, in <module>
from matplotlib import _api, animation, cbook
File "C:\Users\D\miniconda3\lib\site-packages\matplotlib\animation.py", line 34, in <module>
from PIL import Image
File "C:\Users\D\miniconda3\lib\site-packages\PIL\Image.py", line 103, in <module>
from . import _imaging as core
ImportError: DLL load failed while importing _imaging: The specified module could not be found.
</code></pre>
<p>I found this <a href="https://stackoverflow.com/q/43264773/3225420">answer</a>, some resolve this by fixing their <code>pillow</code> installation. I am using <code>anaconda</code> so I ran the <code>conda install -c anaconda pillow</code> command and it updated <code>pillow</code> but the error message did not change.</p>
<p>I tried this <a href="https://stackoverflow.com/a/75601334/3225420">answer</a> and ran <code>pip install --force pillow</code>, same error message.</p>
<p>While I tried the <code>pip</code> answer, I would prefer a pure <code>conda</code> solution.</p>
<p>From <code>conda list</code> here are the relevant packages:
<code>python 3.10.9</code>,<code>pillow 9.4.0</code>,<code>matplotlib 3.5.3</code>.</p>
<p>Update:</p>
<p>Tried <a href="https://stackoverflow.com/a/75601334/3225420">this</a> :<code>pip install --force pillow</code>, and <a href="https://www.reddit.com/r/learnpython/comments/z1ur40/pillow_importerror_dll_load_failed_while/" rel="nofollow noreferrer">this</a>: <code>Pillow 9.0.0</code> getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "e:\Minitab_Killer\rcParams_print_out.py", line 2, in <module>
from matplotlib import rcParams
File "C:\Users\D\miniconda3\lib\site-packages\matplotlib\__init__.py", line 107, in <module>
from . import _api, cbook, docstring, rcsetup
File "C:\Users\D\miniconda3\lib\site-packages\matplotlib\rcsetup.py", line 24, in <module>
from matplotlib import _api, animation, cbook
File "C:\Users\D\miniconda3\lib\site-packages\matplotlib\animation.py", line 34, in <module>
from PIL import Image
File "C:\Users\D\miniconda3\lib\site-packages\PIL\Image.py", line 103, in <module>
from . import _imaging as core
ImportError: DLL load failed while importing _imaging: The specified module could not be found.
</code></pre>
<p>What should I try next?</p>
| <python><matplotlib><anaconda><python-imaging-library> | 2023-10-21 12:34:59 | 1 | 1,689 | Python_Learner |
77,335,915 | 8,761,554 | Getting the row's occurrence of max value by column in pandas | <p>I have the following dataframe:</p>
<pre><code>import pandas as pd
data = {
"a": [0.02, 0.04, -0.1,-0.02],
"b": [0.04, -0.1, -0.02,0.01],
"c": [0.01, 0.3, 0.02,0.02],
"d": [-0.07,0.02,-0.01,0.0]
}
df = pd.DataFrame(data)
</code></pre>
<p>I'd like to get the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>dataset</th>
<th>no of being max a column</th>
</tr>
</thead>
<tbody>
<tr>
<td>c</td>
<td>3</td>
</tr>
<tr>
<td>b</td>
<td>1</td>
</tr>
<tr>
<td>a</td>
<td>0</td>
</tr>
<tr>
<td>d</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>c would be 3 because, across the 4 given rows (months), there were 3 months in which c had the highest value.</p>
<p>Edit:
Would structuring the dataframe differently change how I can get the correct output?
For instance, what if I created my dataframe like this:</p>
<pre><code>a = [0.02, 0.04, -0.1,-0.02]
b = [0.04, -0.1, -0.02,0.01]
c = [0.01, 0.3, 0.02,0.02]
d = [-0.07,0.02,-0.01,0.0]
total = [a,b,c,d]
total2 = ['a','b','c','d']
df_final = pd.DataFrame(columns=list(range(4)), index=['a','b','c','d'])
i=0
for l in total2:
df_final.loc[l] = total[i]
i += 1
print(df_final)
</code></pre>
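<p>The closest I've gotten on my own is combining <code>idxmax</code> per row with <code>value_counts</code>, though I'm not 100% sure the <code>reindex</code> trick is the idiomatic way to bring back the zero-count columns:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [0.02, 0.04, -0.1, -0.02],
                   "b": [0.04, -0.1, -0.02, 0.01],
                   "c": [0.01, 0.3, 0.02, 0.02],
                   "d": [-0.07, 0.02, -0.01, 0.0]})
# winner of each row, counted per column; reindex restores columns
# that never win (a and d) with a count of 0
counts = df.idxmax(axis=1).value_counts().reindex(df.columns, fill_value=0)
print(counts.sort_values(ascending=False))
```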
| <python><pandas><dataframe> | 2023-10-21 12:12:37 | 4 | 341 | Sam333 |
77,335,812 | 7,055,769 | Django not saving data with post | <p>I have this view:</p>
<pre><code>class TaskApiView(APIView):
def post(self, request):
serializer = TaskSerializer(data=request.data)
print(request.data)
if serializer.is_valid():
print("valid", serializer.data)
serializer.save()
return Response(status=status.HTTP_201_CREATED)
else:
print(serializer.errors)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>request body:</p>
<pre><code>{
"content": "asd"
}
</code></pre>
<p>log:</p>
<pre><code>{'content': 'asd'}
valid {'id': 30}
[21/Oct/2023 11:33:19] "POST /api/task/create HTTP/1.1" 201 0
</code></pre>
<p>But when I try to get all tasks with this view</p>
<pre><code>class TaskListAPIView(generics.ListAPIView):
queryset = Task.objects.all()
serializer_class = TaskSerializer
</code></pre>
<p>I get just the id:</p>
<pre><code>[
{
"id": 25
}
]
</code></pre>
<p>Serializer:</p>
<pre><code>class TaskSerializer(serializers.ModelSerializer):
class Meta:
model = Task
fields = "__all__"
</code></pre>
<p>Model</p>
<pre><code> id: models.UUIDField(unique=True, auto_created=True)
content: models.CharField(default="")
creationDate: models.DateField(auto_now_add=True)
author: models.ForeignKey(User, on_delete=models.CASCADE)
status: models.CharField(
choices=StatusEnum,
max_length=5,
default=StatusEnum.TODO,
)
def __str__(self):
return self.id + self.content
</code></pre>
<p>settings.py:</p>
<pre><code>REST_FRAMEWORK = {
"DEFAULT_PARSER_CLASSES": [
"rest_framework.parsers.JSONParser",
]
}
</code></pre>
<p>I want to create a task with content</p>
| <python><django><django-rest-framework> | 2023-10-21 11:41:03 | 1 | 5,089 | Alex Ironside |
77,335,795 | 3,130,747 | How to run a single pre-commit hook with particular arguments that are different to the yaml definition | <p>I know that I <a href="https://pre-commit.com/#pre-commit-run" rel="nofollow noreferrer">can run a hook using</a></p>
<pre><code>pre-commit run <hook id>
</code></pre>
<p>But if <code><hook id></code> is typically run with a set of arguments, and I want to run it with a different set, how should I go about that ?</p>
<p>For example, I might have a <code>black</code> hook that just runs with no arguments, but want to run it with <code>black --preview</code>.</p>
<p>Currently if I wanted to do that I would have black managed by my project venv <em>and</em> pre-commit's venv, which seems like overkill.</p>
<p>I guess I'm wondering if something like the following exists:</p>
<pre><code>pre-commit run --just-call-the-package black --preview
</code></pre>
| <python><pre-commit.com> | 2023-10-21 11:34:03 | 1 | 4,944 | baxx |
77,335,784 | 5,224,341 | Is it possible to "brute combine" sections of bitmaps in R (perhaps by incorporating e.g. Python code, if needed)? | <p>This question is an expansion of <a href="https://stackoverflow.com/questions/77076487/how-do-i-add-a-legend-indicating-significance-levels-below-a-ggplot-object/77077095#77077095">an earlier discussion</a>, where a good solution is described, but a more brutal combining method might be necessary in some scenarios.</p>
<p>Consider the following MWEs that produce the two bitmaps.</p>
<p>First, graph 1:</p>
<pre class="lang-r prettyprint-override"><code># Load packages
library(ggforestplot)
library(tidyverse)
library(ggplot2)
# Use the example data of the ggforestplot package (only a few rows of it)
df <- ggforestplot::df_linear_associations %>%
filter(trait == "BMI") %>% slice(26:27)
# Create plot
ggforestplot::forestplot(
df = df,
name = name,
estimate = beta,
se = se,
pvalue = pvalue,
psignif = 0.05
)
# Export as png
ggsave("graph1.png", width = 8, bg = 'white', dpi = 150)
</code></pre>
<p><img src="https://i.imgur.com/uq1jmlT.png" alt="" /></p>
<p>Graph 2:</p>
<pre class="lang-r prettyprint-override"><code># Load packages
library(ggstats)
# Load example data
data(tips, package = "reshape")
# Run linear model
linear_model <- lm(tip ~ size + total_bill, data = tips)
# Plot model
ggcoef_model(linear_model)
# Export as png
ggsave("graph2.png", width = 8, bg = 'white', dpi = 150)
</code></pre>
<p><img src="https://i.imgur.com/ctFwaEw.png" alt="" /></p>
<p>Let's say I want to cut a slice (here: a legend) with the height of 79 pixels (ca 10.8%) from the bottom of the 2nd graph (second bitmap file) and append that to the 1st graph (1st bitmap file).</p>
<p>Is there a way to do this straight in R, or alternatively, by incorporating Python code into an R script?</p>
<p>The result I desire is shown below:</p>
<p><a href="https://i.sstatic.net/wymYz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wymYz.png" alt="enter image description here" /></a></p>
| <python><r><ggplot2><ggsave> | 2023-10-21 11:31:22 | 0 | 522 | vlangen |
77,335,516 | 1,195,883 | Why can this model predict the rotation of MNIST digits that accurately? | <p>I've trained a model on the MNIST set in order to detect the rotation (0, 90, 180, 270 degrees) of the digits like so:</p>
<pre><code># model / data parameters
num_classes = 4
input_shape = (28, 28, 1)
# load the data and split it between train and test sets
(x_train_orig, _), (x_test_orig, _) = keras.datasets.mnist.load_data()
# generate new data set with rotated numbers
x_train = []
y_train = []
for img in x_train_orig:
for i in range(0, 4):
x_train.append(np.rot90(img, i))
y_train.append(i)
x_train = np.array(x_train)
y_train = np.array(y_train)
x_test = []
y_test = []
for img in x_test_orig:
for i in range(0, 4):
x_test.append(np.rot90(img, i))
y_test.append(i)
x_test = np.array(x_test)
y_test = np.array(y_test)
# scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = keras.Sequential([
keras.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
])
batch_size = 128
epochs = 10
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
save_model(model, os.path.join('model', 'saved_numori_model'))
</code></pre>
<p>I am then running a check on the test set with "1"'s rotated 270 degrees using the following code:</p>
<pre><code>model_ori = load_model(os.path.join('model', 'saved_numori_model'), compile=True)
(_, _), (x_test_orig, y_test_orig) = keras.datasets.mnist.load_data()
x_test_orig = x_test_orig.astype("float32") / 255
subimages = []
rot = 3
while len(subimages) < 8:
rnd = np.random.randint(0, x_test_orig.shape[0])
if y_test_orig[rnd] != 1:
continue
subimage = x_test_orig[rnd]
subimages.append(np.rot90(subimage, rot))
subimages = np.array(subimages)
_, ax = plt.subplots(1, subimages.shape[0])
for i in range(subimages.shape[0]):
ax[i].imshow(subimages[i])
ax[i].axis('off')
plt.show()
predictions_ori = model_ori.predict(np.expand_dims(subimages, -1), verbose=verbose)
print("Probabilities for orientation:")
for prediction in predictions_ori:
for i, probability in enumerate(prediction):
print(f"{i}: {probability:.02f} ", end="")
print(f"Prediction: {np.argmax(prediction)}")
</code></pre>
<p>This results in the following output for "1"'s:</p>
<p><a href="https://i.sstatic.net/Giauz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Giauz.png" alt="enter image description here" /></a></p>
<p>And for "0"'s:</p>
<p><a href="https://i.sstatic.net/7kY47.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7kY47.png" alt="enter image description here" /></a></p>
<p>I simply cannot understand why it can predict those rotations that accurately. What I was expecting was a high uncertainty between rotation of 90 or 270 degrees for the "1"'s and the "0"'s.</p>
<p>Why can the model predict those rotations that accurately? Am I doing something wrong?</p>
| <python><machine-learning><mnist> | 2023-10-21 10:04:26 | 0 | 836 | user1195883 |
77,335,372 | 12,415,855 | Is providing content to an input field inside an iframe not possible using Selenium? | <p>I try to provide a value to an input field on a website using the following code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
if __name__ == '__main__':
checkLink = "https://www.skatteetaten.no/person/avgifter/bil/eksportere/regn-ut/"
os.environ['WDM_LOG'] = '0'
options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
driver.get (checkLink)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="Regnummer"]'))).send_keys("xyz")
driver.quit()
</code></pre>
<p>But I always get a TimeoutException:</p>
<pre><code>$ python test.py
Traceback (most recent call last):
File "G:\DEV\Fiverr\TRY\csgoenterprise\test.py", line 29, in <module>
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="Regnummer"]'))).send_keys("xyz")
File "G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\support\wait.py", line 95, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
</code></pre>
<p>How can I put a value in this input field?</p>
| <python><selenium-webdriver> | 2023-10-21 09:08:03 | 1 | 1,515 | Rapid1898 |
77,335,224 | 7,055,769 | no such column: api_task.content during makemigrations | <p>When trying <code>./manage.py runserver</code> or <code>./manage.py makemigrations</code> or <code>./manage.py migrate</code> I am getting:</p>
<blockquote>
<p>django.db.utils.OperationalError: no such column: api_task.content</p>
</blockquote>
<p>All solutions say to</p>
<ol>
<li>remove db.sqlite3</li>
<li>remove migrations folder</li>
<li>run <code>./manage.py makemigrations; ./manage.py migrate</code></li>
</ol>
<p>I did 1 and 2, but when trying to do 3 I get the error above (both commands, along with <code>./manage.py runserver</code>.</p>
<p>I am really confused as to why since, yes, it's obviously missing, but it should get recreated.</p>
<p>My aim was to clear the db, and since django recreates the db file, I should be able to just delete and make migrations, but I must be missing something.</p>
<p>Task model:</p>
<pre><code>import uuid
from django.db import models
from django.contrib.auth.models import User
class Task(models.Model):
id: str = models.UUIDField(
unique=True,
auto_created=True,
primary_key=True,
default=uuid.uuid4,
editable=False,
)
content: str = models.CharField(
default="",
max_length=255,
)
deadline: int = models.IntegerField(
default=None,
null=True,
blank=True,
)
creationDate: models.DateField(auto_now_add=True)
author: models.ForeignKey(User, on_delete=models.CASCADE)
def __str__(self) -> str:
return str(self.id) + str(self.content)
</code></pre>
| <python><django> | 2023-10-21 08:17:41 | 1 | 5,089 | Alex Ironside |
77,335,159 | 859,141 | Django Objects Filter received a naive datetime while time zone support is active when using min max range | <p>So this question is specifically about querying a timezone aware date range with max min in Django 4.2.</p>
<p>Timezone is set as <em>TIME_ZONE = 'UTC'</em> in settings.py and the model in question has two fields:</p>
<pre><code>open_to_readers = models.DateTimeField(auto_now=False, verbose_name="Campaign Opens")
close_to_readers = models.DateTimeField(auto_now=False, verbose_name="Campaign Closes")
</code></pre>
<p>The query looks like</p>
<pre><code>allcampaigns = Campaigns.objects.filter(open_to_readers__lte=today_min, close_to_readers__gt=today_max)
</code></pre>
<p><strong>Failed Solution 1</strong></p>
<pre><code> today_min = datetime.combine(timezone.now().date(), datetime.today().time().min)
today_max = datetime.combine(timezone.now().date(), datetime.today().time().max)
print("Today Min", today_min, " & Today Max", today_max)
</code></pre>
<p>returns the following which would be a suitable date range except it also gives the error below for both min and max.</p>
<pre><code>Today Min 2023-10-21 00:00:00 & Today Max 2023-10-21 23:59:59.999999
DateTimeField ... received a naive datetime (9999-12-31 23:59:59.999999) while time zone support is active.
</code></pre>
<p><strong>Partial Working Solution</strong></p>
<pre><code>today_min = timezone.now()
allcampaigns = Campaigns.objects.filter(open_to_readers__lte=today_min, close_to_readers__gt=today_min)
</code></pre>
<p>Returns results without error but the time given is the current time and not the minimum or maximum for the day.</p>
<p><strong>Failed Solution 2 from <a href="https://stackoverflow.com/questions/56287435/convert-datetime-min-into-offset-aware-datetime">here</a>:</strong></p>
<pre><code> now = datetime.now(timezone.get_default_timezone())
today_min = now.min
today_max = now.max
print("Today Min", today_min, " & Today Max", today_max)
</code></pre>
<p>Returns Today Min 0001-01-01 00:00:00 & Today Max 9999-12-31 23:59:59.999999 and the aforementioned timezone error.</p>
<p>How can I create two timezone-aware datetimes for the minimum and maximum parts of the day?</p>
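<p>For reference, outside Django I can build aware day bounds with just the standard library, so I suspect what I'm missing is the Django way of attaching <code>tzinfo</code> (hard-coding UTC here mirrors my settings, but that part is my assumption):</p>

```python
from datetime import datetime, time, timezone

today = datetime.now(timezone.utc).date()
# time.min/time.max give 00:00:00 and 23:59:59.999999;
# passing tzinfo makes both datetimes timezone-aware
today_min = datetime.combine(today, time.min, tzinfo=timezone.utc)
today_max = datetime.combine(today, time.max, tzinfo=timezone.utc)
print(today_min, today_max)
```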
| <python><django><datetime> | 2023-10-21 07:52:45 | 1 | 1,184 | Byte Insight |
77,334,975 | 20,240,835 | How to stop the annoying warnings output by reticulate in R | <p>I am trying to read <code>pickle</code> data in R with the following R code:</p>
<pre><code>library(reticulate)
pd <- suppressMessages(import("pandas"))
hubs.data <- suppressWarnings(pd$read_pickle("input.pickle"))
</code></pre>
<p>The code works well, but it outputs a warning:</p>
<pre><code>sys:1: FutureWarning: pandas.Float64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
sys:1: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
sys:1: FutureWarning: pandas.UInt64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
</code></pre>
<p>This warning shows up whenever I execute any other R code line after this. I want to stop it.</p>
<p>I have tried:</p>
<pre><code>pd <- suppressWarnings(import("pandas"))
# or
pd <- suppressMessages(import("pandas"))
</code></pre>
<p>But <code>suppressWarnings/suppressMessages</code> seem not to work.</p>
| <python><r><pandas><reticulate> | 2023-10-21 06:39:12 | 2 | 689 | zhang |
77,334,714 | 1,144,868 | Find and delete/replace a part of a bigger string | <p>In my Python/Django project, I am trying to write a custom exception class where I want to avoid showing a few things to the end user.</p>
<p>In the below error message, if you see the project name <code>prj-eng-d-nc-prcd-1077</code> is appearing. I want to remove that before the message goes to UI.</p>
<p>I was trying to do find and replace. For ex:</p>
<pre><code>prj_name = 'prj-eng'
if prj_name in msg:
msg = msg.replace(prj_name, '')
print (msg)
</code></pre>
<p>But the problem here is project name will be dynamic. It could be <code>prj-eng-h-nc-raw-1637</code> or <code>prj-eng-p-nc-oauth-2218</code>.</p>
<p>I was also trying to use rfind() where we can find the start of the substring and replace, but that did not help.</p>
<p>Can anyone please help me with the solution.</p>
<pre><code> {
"jobs": [],
"error": "Unable to process the request",
"msg": "400 Bad int64 value: prj-eng-d-nc-prcd-1077\n\nLocation: us-central1\nJob ID: f8fc4fde-280b-4f35-ae9e-5fb92d2d0873\n"
}
</code></pre>
<p>Input would be :</p>
<pre><code>msg = "400 Bad int64 value: prj-eng-d-nc-prcd-1077\n\nLocation: us-central1\nJob ID: f8fc4fde-280b-4f35-ae9e-5fb92d2d0873\n"
</code></pre>
<p>Output should be :</p>
<pre><code>msg = "400 Bad int64 value: \n\nLocation: us-central1\nJob ID: f8fc4fde-280b-4f35-ae9e-5fb92d2d0873\n"
</code></pre>
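<p>One idea I've been toying with is matching the dynamic project name with a regex instead of an exact string — the assumption that every project name starts with <code>prj-eng-</code> followed by dash-separated segments is mine, and may not hold:</p>

```python
import re

msg = ("400 Bad int64 value: prj-eng-d-nc-prcd-1077\n\n"
       "Location: us-central1\nJob ID: f8fc4fde-280b-4f35-ae9e-5fb92d2d0873\n")
# strip anything that looks like prj-eng-<segments>; \w covers
# letters, digits and underscore, and "-" is allowed in the class
cleaned = re.sub(r"prj-eng[\w-]*", "", msg)
print(cleaned)
```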
| <python><python-3.x><string><replace> | 2023-10-21 04:11:57 | 3 | 3,355 | sandeep |
77,334,419 | 4,047,444 | Comparing 2 DataFrames and Getting Similarity Based on Columns | <p>I have 2 data frames with a different number of rows. I would like to compare and get the similarity between the two data frames based on two columns, category_id and size. The results can go into a new data frame. The two data frames have the same columns.</p>
<pre><code>d1 = {'item_id': [1, 2, 3, 4],
'brand': ['E', 'E', 'E', 'E'],
'category_id': [100, 100, 101, 100],
'size': ['S', 'M', 'S', 'L'],
'cost': [8.15, 12.91, 18.44, 14.95],
'sell': [9.95, 14.49, 19.99, 16.79]
}
d2 = {'item_id': [5, 6, 7, 8, 9, 10],
'brand': ['V', 'V', 'V', 'V', 'V', 'V'],
'category_id': [100, 100, 102, 100, 101, 103],
'size': ['S', 'M', 'XL', 'L', 'XS', 'XXL'],
'cost': [9.29, 13.99, 8.44, 10.95, 11.79, 14.95],
'sell': [10.95, 15.49, 12.99, 12.79, 13.69, 17.29]
}
df1 = pd.DataFrame(d1)
df2 = pd.DataFrame(d2)
df1
item_id brand category_id size cost sell
0 1 E 100 S 8.15 9.95
1 2 E 100 M 12.91 14.49
2 3 E 101 S 18.44 19.99
3 4 E 100 L 14.95 16.79
df2
item_id brand category_id size cost sell
0 5 V 100 S 9.29 10.95
1 6 V 100 M 13.99 15.49
2 7 V 102 XL 8.44 12.99
3 8 V 100 L 10.95 12.79
4 9 V 101 XS 11.79 13.69
5 10 V 103 XXL 14.95 17.29
</code></pre>
<p>Desire Output:</p>
<pre><code> item_id brand category_id size cost sell
0 1 E 100 S 8.15 9.95
1 5 V 100 S 9.29 10.95
2 2 E 100 M 12.91 14.49
3 6 V 100 M 13.99 15.49
4 4 E 100 L 14.95 16.79
5 8 V 100 L 10.95 12.79
</code></pre>
<p>I've read many solutions that show the differences but not much when it comes to similarities. I tried the code below but it just showed the similarity data from df1.</p>
<pre><code>common_columns = df1.loc[:, df1.columns.isin(df2.columns)]
</code></pre>
<p>This <a href="https://stackoverflow.com/questions/56586398/comparing-two-dataframes-and-getting-the-similarities-as-a-new-dataframe">solution</a> comes closest but it has equal row lengths so I don't think it will work in my case.</p>
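<p>For what it's worth, here is the direction I've been experimenting in — an inner merge to find the shared <code>category_id</code>/<code>size</code> pairs, then filtering both frames by those keys (whether concatenating afterwards is the right way to interleave the rows, I'm not sure):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"item_id": [1, 2, 3, 4], "brand": "E",
                    "category_id": [100, 100, 101, 100],
                    "size": ["S", "M", "S", "L"]})
df2 = pd.DataFrame({"item_id": [5, 6, 7, 8, 9, 10], "brand": "V",
                    "category_id": [100, 100, 102, 100, 101, 103],
                    "size": ["S", "M", "XL", "L", "XS", "XXL"]})
# key pairs present in both frames
keys = (df1.merge(df2, on=["category_id", "size"])
           [["category_id", "size"]].drop_duplicates())
# keep only matching rows from each frame, then stack them
common = pd.concat([df1.merge(keys), df2.merge(keys)], ignore_index=True)
print(common.sort_values(["category_id", "size", "item_id"]))
```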
| <python><pandas><dataframe> | 2023-10-21 01:11:19 | 2 | 861 | dreamzboy |
77,334,369 | 12,043,259 | Extra redundant new line needed to see the expected result when dealing with multiline input | <p>I am trying to read multiline text and print it, which seems like a very easy job, but interestingly I see odd behavior:</p>
<pre><code>while True:
client_input = ''
print("Please enter your queries:") # User input we'll be adding to
for line in sys.stdin: # Iterator loops over readline() for file-like object
if line == '\n' or line=='': # Got a line that was just a newline
break # Break out of the input loop
client_input += line # Keep adding input to variable
print('cccccccccccccccc')
print(client_input)
</code></pre>
<p>So now enter this multiline text</p>
<pre><code> {
"xxxx": [
{
"zzzz": {
"zz": "a",
"eee": "d"
}
},
{
"qqq": {
"q": "a",
"a": "d"
}
}
]}
</code></pre>
<p>So I expect that when I press Enter after pasting this text, <code>cccccccccccccccc</code> is printed. However, I need to press Enter twice to see the expected behavior.
Can anyone explain why that extra newline is needed and how to fix it?</p>
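<p>One workaround I've been considering is reading with <code>input()</code> in a loop instead of iterating <code>sys.stdin</code> directly, since the stdin iterator does its own internal read-ahead buffering — though I'm not certain this addresses the root cause:</p>

```python
def read_block():
    # collect lines until EOF or a blank line terminates the block
    lines = []
    while True:
        try:
            line = input()
        except EOFError:
            break
        if line == "":
            break
        lines.append(line)
    return "\n".join(lines)
```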
| <python><python-3.x> | 2023-10-21 00:44:02 | 2 | 2,039 | Learner |
77,334,346 | 3,951,468 | Execute an update using single-quoted columns in sqlalchemy | <p>I am writing some tests for our SQL code and have some defined tables:</p>
<pre><code>class CompanyAddressORM(Base):
__tablename__ = "company_address"
pk = Column("pk", Text, primary_key=True)
project_id = Column("PROJECT_ID", Integer)
company_id = Column("COMPANY_ID", Integer)
class CompanyDocumentsORM(Base):
__tablename__ = "company_documents"
pk = Column("pk", Text, primary_key=True)
project_id = Column("PROJECT_ID", Integer)
company_id = Column("COMPANY_ID", Integer)
page_id = Column("PAGE_ID", Integer)
</code></pre>
<p>Our underlying code uses snowflake so I must not change it, but instead run the queries and mock the connection to snowflake using the testing.postgresql package.</p>
<p>I'm running into issues when trying to query items that are case sensitive. E.g.</p>
<pre><code>session.execute("select PROJECT_ID from company_address")
>>> *** sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column "project_id" does not exist
session.execute('select "PROJECT_ID" from company_address').all()
>>> [(1,), (2,), (3,), (4,), (5,)]
</code></pre>
<p>The problem comes in later when somewhere in the code we do an update with this query:</p>
<pre><code>column_names = ", ".join(columns)
source_column_names = ", ".join(f"source.{col}" for col in columns)
target_column_updates = ", ".join(f"target.{col} = source.{col}" for col in columns)
# Generate the join condition for the merge query
join_condition = " AND ".join(
f"target.{col} = source.{col}" for col in join_columns
)
# Perform the merge operation in Snowflake
merge_query = f"""
MERGE INTO {table_name} as target
USING {temp_table_name} as source
ON {join_condition}
WHEN MATCHED THEN UPDATE SET
{target_column_updates}
WHEN NOT MATCHED THEN INSERT
({column_names})
VALUES ({source_column_names})
"""
cursor.execute(merge_query)
</code></pre>
<p>The original query runs fine in Snowflake but breaks in SQLAlchemy/Postgres because of identifier case folding, so I must double-quote all the columns:</p>
<pre><code>source_column_names = ", ".join(f'''"source.{col}"''' for col in columns)
target_column_updates = ", ".join(f'''"target.{col}" = "source.{col}"''' for col in columns)
# Generate the join condition for the merge query
join_condition = " AND ".join(
f'''"target.{col}" = "source.{col}"''' for col in join_columns
</code></pre>
<p>This will give me a query like this:</p>
<pre><code>query = 'MERGE INTO company_address as target
USING company_address_tmp as source
ON "target.project_id" = "source.project_id"
WHEN MATCHED THEN UPDATE SET
.....
</code></pre>
<p>but when I try to execute this query, I get a compilation error.</p>
<p>If I remove the double quotes:</p>
<pre><code>LINE 4: ON target.COMPANY_ID = source.COMPANY_ID AND target....
</code></pre>
<p>What's the best way of dealing with this?</p>
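<p>One thing I'm starting to suspect is that my quoting wraps the whole dotted name, so Postgres treats <code>target.PROJECT_ID</code> as a single identifier instead of alias + column. A plain string experiment shows the difference (whether this is the whole story, I don't know):</p>

```python
columns = ["PROJECT_ID", "COMPANY_ID"]
# quoting the entire dotted name makes it one identifier:
whole = ", ".join(f'"target.{col}" = "source.{col}"' for col in columns)
# quoting only the column keeps target/source usable as aliases:
split = ", ".join(f'target."{col}" = source."{col}"' for col in columns)
print(whole)
print(split)
```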
| <python><sql><sqlalchemy><snowflake-cloud-data-platform> | 2023-10-21 00:32:12 | 0 | 364 | Drivebyluna |
77,334,325 | 11,025,049 | How do relationships work in MongoEngine? | <p>I'm new to MongoEngine and I'm trying to relate two documents.</p>
<pre class="lang-py prettyprint-override"><code>class Actor(Document):
name = StringField(required=True, unique=True)
movies = ListField()
class Movie(Document):
title = StringField(required=True, unique=True)
release_date = DateField()
director = StringField()
cast = ListField(ReferenceField(Actor))
</code></pre>
<p>I'd like to list the actors and have the movies appear. Or list the movies and the actors would appear related.</p>
<p>In my mind, I think of something like this:</p>
<pre class="lang-py prettyprint-override"><code>class Actor(Resource):
def get_actors(self) -> Response:
for actor in Actor.objects.all():
movies = Movie.objects(actors=actor)
[...]
return Response(planets, mimetype='application/json', status=200)
</code></pre>
<p>Is this a valid path, or is my head in Postgres and Mongo different? Any ideas?</p>
| <python><mongodb><flask><mongoengine> | 2023-10-21 00:18:36 | 1 | 625 | Joey Fran |
77,334,317 | 5,091,964 | PyInstaller-Generated .exe closes Immediately When Running Google Palm-2 program | <p>I wrote a short example of a Python program that sends a prompt to the Google Palm-2 model and receives a reply from the model. My program runs correctly when I run it in the command console with the ".py" extension. However, after I created an ".exe" file using Pyinstaller, and ran the ".exe" file, the command window closed immediately without running the program. Has anyone else experienced the same problem? I spent hours looking into the Pyinstaller analyzer log and at the run time error message and could not find a solution.</p>
<pre><code>import google.generativeai as palm
# Configure the PaLM API
palm.configure(api_key='my-api-key')
input_text = "What is the capital of France?"
# Get the model's response with the given input text
response = palm.generate_text(
model='models/text-bison-001',
prompt=input_text,
)
# Get the corrected text as a string
response = response.result
# Print the reply
print("Result:")
print(response)
print()
</code></pre>
| <python><artificial-intelligence><pyinstaller> | 2023-10-21 00:15:21 | 2 | 307 | Menachem |
77,334,292 | 3,476,463 | Performing PEFT with LoRA on flan-T5 model causes "no executable batch size" error | <p>I'm trying to perform PEFT with LoRA. I'm using the Google flan-T5 base model. I'm using the Python code below. I'm running the code on an NVIDIA GPU with 8 GB of RAM on Ubuntu Server 18.04 LTS. In the Python code I'm loading the public dataset from Hugging Face. I've loaded the pre-trained flan-T5 model. I've set up the PEFT and LoRA model.</p>
<p>I then add the LoRA adapter and layers to the original LLM. I define a trainer instance, but when I try to train the PEFT adapter and save the model, I get the error below that "no executable batch size found."</p>
<p>Can anyone see what the issue might be and can you suggest how to solve it?</p>
<p>Code:</p>
<pre><code># import modules
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer
import torch
import time
import evaluate
import pandas as pd
import numpy as np
# load dataset and LLM
huggingface_dataset_name = "knkarthick/dialogsum"
dataset = load_dataset(huggingface_dataset_name)
# load pre-trained FLAN-T5 model
model_name='google/flan-t5-base'
original_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# set up peft LORA model
from peft import LoraConfig, get_peft_model, TaskType
lora_config = LoraConfig(
r=32, # Rank
lora_alpha=32,
target_modules=["q", "v"],
lora_dropout=0.05,
bias="none",
task_type=TaskType.SEQ_2_SEQ_LM # FLAN-T5
)
# add LoRA adapter layers/parameters to the original LLM to be trained
peft_model = get_peft_model(original_model,
lora_config)
print(print_number_of_trainable_model_parameters(peft_model))
# define training arguments and create Trainer instance
output_dir = f'./peft-dialogue-summary-training-{str(int(time.time()))}'
peft_training_args = TrainingArguments(
output_dir=output_dir,
auto_find_batch_size=True,
learning_rate=1e-3, # Higher learning rate than full fine-tuning.
num_train_epochs=1,
logging_steps=1,
max_steps=1
)
peft_trainer = Trainer(
model=peft_model,
args=peft_training_args,
train_dataset=tokenized_datasets["train"],
)
# train PEFT adapter and save the model
peft_trainer.train()
peft_model_path="./peft-dialogue-summary-checkpoint-local"
peft_trainer.model.save_pretrained(peft_model_path)
tokenizer.save_pretrained(peft_model_path)
</code></pre>
<h1>Error:</h1>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 peft_trainer.train()
3 peft_model_path="./peft-dialogue-summary-checkpoint-local"
5 peft_trainer.model.save_pretrained(peft_model_path)
File ~/anaconda3/envs/new_llm/lib/python3.10/site-packages/transformers/trainer.py:1664, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1659 self.model_wrapped = self.model
1661 inner_training_loop = find_executable_batch_size(
1662 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1663 )
-> 1664 return inner_training_loop(
1665 args=args,
1666 resume_from_checkpoint=resume_from_checkpoint,
1667 trial=trial,
1668 ignore_keys_for_eval=ignore_keys_for_eval,
1669 )
File ~/anaconda3/envs/new_llm/lib/python3.10/site-packages/accelerate/utils/memory.py:134, in find_executable_batch_size.<locals>.decorator(*args, **kwargs)
132 while True:
133 if batch_size == 0:
--> 134 raise RuntimeError("No executable batch size found, reached zero.")
135 try:
136 return function(batch_size, *args, **kwargs)
RuntimeError: No executable batch size found, reached zero.
</code></pre>
<p>Update:</p>
<p>I restarted my kernel and the error went away, not sure why. Perhaps the previous model I had run was taking up too much space.</p>
| <python><python-3.x><large-language-model><huggingface><peft> | 2023-10-21 00:01:12 | 1 | 4,615 | user3476463 |
77,334,235 | 2,013,747 | Is there a standard function in Python that takes an argument, ignores it, and always returns None? | <p>I need to supply a Callable returning None to a library function of the following form:</p>
<pre class="lang-py prettyprint-override"><code>def some_third_party_function(callback: Callable[[], None]) -> None:
...
</code></pre>
<p>but I have a function that does not return <code>None</code></p>
<pre class="lang-py prettyprint-override"><code>def func_to_use_as_callback() -> bool:
...
</code></pre>
<p>and I would like to supply it to the library function in such a way that its argument is discarded.</p>
<p>The following attempt fails type checking, since the lambda does not return None:</p>
<pre class="lang-py prettyprint-override"><code>some_third_party_function(lambda: func_to_use_as_callback())
</code></pre>
<p>What I have at the moment is:</p>
<pre class="lang-py prettyprint-override"><code>def _discard_arg(arg) -> None:
return None
some_third_party_function(lambda: _discard_arg(func_to_use_as_callback()))
</code></pre>
<p>Does the Python 3.10 standard library contain a function that is equivalent to <code>_discard_arg</code>? or is there some other equally concise way to achieve the desired behavior?</p>
| <python><python-3.x> | 2023-10-20 23:40:38 | 1 | 4,240 | Ross Bencina |
77,334,172 | 5,179,643 | How to start the x-axis ticks at 1 (instead of 0) when plotting a Numpy array using Seaborn lineplot | <p>Let's assume the following array of length 15:</p>
<pre><code>import numpy as np
arr = np.array([0.17, 0.15, 0.13, 0.12, 0.07, 0.06, 0.05, 0.05, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.02])
arr
array([0.17, 0.15, 0.13, 0.12, 0.07, 0.06, 0.05, 0.05, 0.03, 0.03, 0.03,
0.03, 0.03, 0.03, 0.02])
</code></pre>
<p>I take the cumulative sum of this array and assign it to <code>cum_arr</code>:</p>
<pre><code>cum_arr = np.cumsum(arr)
cum_arr
array([0.17, 0.32, 0.45, 0.57, 0.64, 0.7 , 0.75, 0.8 , 0.83, 0.86, 0.89,
0.92, 0.95, 0.98, 1. ])
</code></pre>
<p>Now, I'd like to simply plot this array using a Seaborn's <code>lineplot</code>.</p>
<p>My attempt:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
x_range = [x for x in range(1, len(cum_arr) +1)]
ax = sns.lineplot(data=cum_arr)
ax.set_xlim(1, 15)
ax.set_ylim(0, 1.1)
ax.set_xticks(x_range)
plt.grid()
plt.show()
</code></pre>
<p>Which gives:</p>
<p><a href="https://i.sstatic.net/oMqrh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oMqrh.png" alt="enter image description here" /></a></p>
<p>Notice how the y coordinate associated with 1 on the x-axis is 0.32, which corresponds to the <em><strong>second</strong></em> element in <code>cum_arr</code>.</p>
<p>The desired plot would look something like this:</p>
<p><a href="https://i.sstatic.net/eqrEt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eqrEt.png" alt="enter image description here" /></a></p>
<p>How would I start plotting at the first element in <code>cum_arr</code>? (or alternatively adjust the xticks to take this into account)</p>
<p>Thanks!</p>
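One likely fix — untested here, since it depends on seaborn — is to pass explicit 1-based x values to lineplot (`sns.lineplot(x=x_range, y=cum_arr)`) instead of relying on the default 0-based index. A sketch of the index arithmetic, with plain lists so it is easy to see:

```python
from itertools import accumulate

arr = [0.17, 0.15, 0.13, 0.12, 0.07, 0.06, 0.05, 0.05,
       0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.02]
cum_arr = [round(v, 2) for v in accumulate(arr)]
x_range = list(range(1, len(cum_arr) + 1))  # 1..15, one tick per element

# ax = sns.lineplot(x=x_range, y=cum_arr)  # first value (0.17) now sits at x=1
print(x_range[0], cum_arr[0], x_range[-1], cum_arr[-1])  # 1 0.17 15 1.0
```

With explicit x values the first element of the cumulative sum is plotted at x=1, so no tick relabeling is needed.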
| <python><numpy><matplotlib><seaborn> | 2023-10-20 23:13:19 | 2 | 2,533 | equanimity |
77,334,146 | 3,052,438 | How to escape an environment variable in a SCons action? | <p>I have a build command that runs a Python script.
Because the python command is sometimes <code>python</code> and sometimes <code>python3</code> (depending on operating system), I instead use the full path which I'm keeping in an environment variable.</p>
<p>However, this path sometimes has weird characters like spaces, which need to be escaped.
For example <code>.../Program Files/...</code> in Windows.</p>
<p>SCons doesn't seem to automatically escape a variable when it substitutes it in the command.
When I run this script:</p>
<pre class="lang-py prettyprint-override"><code>import sys
env = Environment()
env.Replace(PYTHON_EXECUTABLE=sys.executable)
foo = env.Alias('foo', [], '$PYTHON_EXECUTABLE my-script.py')
env.AlwaysBuild(foo)
</code></pre>
<p>I get:</p>
<pre class="lang-none prettyprint-override"><code>scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
C:\Program Files\Python311\python.exe my-script.py
'C:\Program' is not recognized as an internal or external command,
operable program or batch file.
scons: *** [foo] Error 1
scons: building terminated because of errors.
</code></pre>
<p>I don't want to put the path in double quotes because they still allow some characters to be expanded on Linux.
On the other hand, Windows doesn't support single quotes.</p>
<p>Is there any simple portable method to escape a part of a command, regardless of what characters are there?</p>
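For what it's worth, the general way to sidestep shell escaping is to pass the command as an argument list rather than a single string, so no shell ever parses it. (I believe SCons also accepts list-form actions, e.g. `[['$PYTHON_EXECUTABLE', 'my-script.py']]`, but treat that as an assumption to verify.) The principle, demonstrated with `subprocess`:

```python
import subprocess
import sys

# Passing argv as a list bypasses the shell, so a path containing
# spaces (e.g. "C:\Program Files\...") needs no quoting at all.
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # ok
```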
| <python><shell><escaping><scons> | 2023-10-20 23:05:39 | 1 | 5,159 | Piotr Siupa |
77,334,833 | 13,142,245 | Sampling from conditional distribution to impute NaN values in joint | <p>Suppose that X and Z are sets of non-overlapping dimensions where <code>$P(X,Z)$</code> is the joint distribution over all features. I have two sets of data:</p>
<ol>
<li>X and Z are observed, and</li>
<li>only X is observed.</li>
</ol>
<p>Given the first set, I would like to MLE fit a multivariate Gaussian. Then using X, observed in the second set, sample values for Z using the conditional distribution <code>P(Z|X)</code></p>
<p>Note: I am not trying to frame this as a multivariable regression problem. Rather, I want to impute missing Z values in the second set given the first, and I believe that sampling from the conditional Gaussian is sufficient for my purposes.</p>
<p>Is such functionality possible in numpy, scipy, etc. (extended Python ecosystem)?</p>
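For reference, the conditional of a jointly Gaussian vector is itself Gaussian with closed-form parameters (a standard result — partition the fitted mean and covariance into the X and Z blocks):

```latex
\mu_{Z\mid X=x} = \mu_Z + \Sigma_{ZX}\,\Sigma_{XX}^{-1}\,(x - \mu_X),
\qquad
\Sigma_{Z\mid X} = \Sigma_{ZZ} - \Sigma_{ZX}\,\Sigma_{XX}^{-1}\,\Sigma_{XZ}
```

So after estimating the joint mean and covariance from the fully observed set (e.g. with `numpy.mean` and `numpy.cov`), imputing Z for each observed x reduces to a draw from `numpy.random.multivariate_normal` with these parameters.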
| <python><sampling> | 2023-10-20 23:05:08 | 0 | 1,238 | jbuddy_13 |
77,333,861 | 12,846,804 | Have read_html read cell content and tooltip (text bubble) separately, instead of concatenating them | <p>This site <a href="https://stats.gladiabots.com/pantheon?" rel="nofollow noreferrer">page</a> has tooltips appearing when hovering over values in columns <code>"Score"</code> and <code>"XP LVL"</code>.</p>
<p>It appears that <code>read_html</code> will concatenate <strong>cell content</strong> and <strong>tooltip</strong>. Splitting those in post-processing is not always obvious and I seek a way to have <code>read_html</code> handle them separately, possibly return them as two columns.</p>
<p>This is how the first row appears online:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>(Rank)#</th>
<th>Name</th>
<th>Score</th>
<th>XP LVL</th>
<th>Victories / Total</th>
<th>Victory Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Raininββββ</td>
<td>6129</td>
<td>447</td>
<td>408 / 531</td>
<td>76%</td>
</tr>
</tbody>
</table>
</div>
<ul>
<li>where <code>"Score"</code>'s "6129" carries tooltip "Max6129"</li>
<li>where, more annoyingly, <code>"XP LVL"</code>'s "447" carries tooltip "21173534 pts"</li>
</ul>
<p>This is how it appears after reading:</p>
<pre><code>pd.read_html('https://stats.gladiabots.com/pantheon?', header=0, flavor="html5lib")[0]
# Name Score XP LVL Victories / Total \
0 1 Raininββββ 6129Max 6129 44721173534 pts 408 / 531
</code></pre>
<p>See how "44721173534 pts" is the concatenation of "447" and "21173534 pts". <code>"XP LVL"</code> values have a variable number of digits, so splitting the string in the post-processing phase would require being pretty smart about it, and I would like to explore the "<em>let read_html do the split</em>" approach first.</p>
<p>(The special flavor="html5lib" was added because the page is dynamically-generated)</p>
<p>I have not found any mention of tooltips in the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html" rel="nofollow noreferrer">docs</a></p>
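If read_html can't be told to split them, one fallback is to parse the cell markup yourself so the tooltip element is read separately from the visible text. A self-contained sketch with Python's built-in html.parser — note that the `class="tooltip"` attribute and the sample markup here are assumptions; the site's actual tooltip class may differ:

```python
from html.parser import HTMLParser

# Hypothetical cell markup; the real site's tooltip class may differ.
SNIPPET = '<td>447<span class="tooltip">21173534 pts</span></td>'

class CellParser(HTMLParser):
    """Collect a cell's visible text and its tooltip text separately."""

    def __init__(self):
        super().__init__()
        self.in_tooltip = False
        self.cell = []
        self.tooltip = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "tooltip") in attrs:
            self.in_tooltip = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_tooltip = False

    def handle_data(self, data):
        (self.tooltip if self.in_tooltip else self.cell).append(data)

parser = CellParser()
parser.feed(SNIPPET)
print("".join(parser.cell), "|", "".join(parser.tooltip))  # 447 | 21173534 pts
```

The two collected strings can then populate separate DataFrame columns.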
| <python><html><pandas><web-scraping><tooltip> | 2023-10-20 21:31:10 | 3 | 1,303 | OCa |
77,333,860 | 225,020 | What does having o_ (o underscore) do in dir? | <p>I was reviewing this <a href="https://github.com/EvanZhouDev/donut-py" rel="nofollow noreferrer">donut.py</a> code, which produces an ASCII spinning torus like the donut.c code.</p>
<p>I was wondering: what do these two snippets in the code do?</p>
<pre><code>'o_' in dir()
# and
(o_ := True)
</code></pre>
<p>Just looking at the code they seem useless, but when I remove them the code doesn't run. I know what the walrus operator does, and my guess is that the pair just ensures the loop runs infinitely. I also tried replacing the first expression with <code>False</code> and the second with <code>True</code>.</p>
<p>Here's the code for reference:</p>
<pre><code>while ('o_' in dir()) or (A := (0)
) or (B := 0) or print((
"\x1b[2J")) or not ((sin := ((
__import__('math'))).sin)) or not (
cos := __import__('math').cos) or (o_ := (
True)): [pr() for b in [[(func(), b) for ((z
)) in [[0 for _ in range(1760)]] for b in [[ (
"\n") if ii % 80 == 79 else " " for ii in range(
1760)]] for o, D, N in [(o, D, N) for j in range((
0), 628, 7) for i in range(0, 628, 2) for (c, d, e,(
f), g, l, m, n) in [(sin( i / 100), cos(j / 100),(
sin(A)), sin(j / 100), cos(A), cos(i / 100) ,
cos(B), sin(B))] for (h) in [d + 2] for (
D, t) in [ (1 / ( c * h * e + f * g + (5
)), c * h * g - f * e)] for (x, y) in [
(int( 40 + 30 * D * ( l * h * m - t * n)),
int( 12 + 15 * D * ( l * h * n + t * (m))))]
for (o, N) in [(x + 80 * y,int(8 * (( f * e - c
* d * g) * m - c * d * e - f * g - l * d * n)))] if
0 < x < 80 and 22 > y > 0] if D > z[o] for func in
[lambda: (z.pop((o))), lambda: z.insert((o), (D)
),(lambda: (b.pop( o))), lambda: b.insert((o),
".,-~:;=!*#$@"[ N if N > 0 else 0])]][ 0][1]
] for pr in [lambda: print("\x1b[H"),
lambda: print("".join(b))] if (A :=
A + 0.02) and ( B := B + 0.02)]
#..--~~EvanZhouDev:~~--.#
#...,2023,...#
</code></pre>
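To isolate the trick (my own minimal sketch, not taken from donut.py): on the first evaluation of the `while` condition the name `o_` is not yet bound, so `'o_' in dir()` is False and Python evaluates the rest of the `or` chain — the one-time setup (initialising `A` and `B`, clearing the screen, binding `sin`/`cos`) — until `(o_ := True)` finally makes the condition truthy. On every later evaluation `'o_' in dir()` is True, so the `or` short-circuits, the setup never reruns, and the loop condition stays True forever:

```python
print('o_' in dir())                         # False: o_ is not bound yet
condition = ('o_' in dir()) or (o_ := True)  # right side runs exactly once
print(condition, 'o_' in dir())              # True True: o_ now exists
```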
| <python> | 2023-10-20 21:30:24 | 1 | 27,595 | Jab |
77,333,779 | 7,776,801 | How do I add Select TensorFlow op(s) to a python interpreter in Android using Chaquopy | <p>I'm using <a href="https://chaquo.com/chaquopy/" rel="nofollow noreferrer">chaquopy</a> within the Android code of a Flutter project to leverage a python script that uses some tensorflow lite models.</p>
<p>Here's the python script:</p>
<pre><code>from io import BytesIO
import base64
import tensorflow as tf
from skimage import io
from imutils.object_detection import non_max_suppression
import numpy as np
import math
import time
import cv2
import string
from os.path import dirname, join
def preprocess_east(image: np.ndarray):
input_image = image
orig = input_image.copy()
(H, W) = input_image.shape[:2]
(newW, newH) = (416, 640)
rW = W / float(newW)
rH = H / float(newH)
image = cv2.resize(input_image, (newW, newH))
(H, W) = image.shape[:2]
image = image.astype("float32")
mean = np.array([123.68, 116.779, 103.939][::-1], dtype="float32")
image -= mean
image = np.expand_dims(image, 0)
return input_image, image, rW, rH
def run_east_tflite(input_data):
model_path = join(dirname(__file__), "east_float_640.tflite")
interpreter = tf.lite.Interpreter(model_path=model_path)
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
scores = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
geometry = interpreter.tensor(interpreter.get_output_details()[1]["index"])()
return scores, geometry
def postprocess_east(scores, geometry, rW, rH, orig):
scores = np.transpose(scores, (0, 3, 1, 2))
geometry = np.transpose(geometry, (0, 3, 1, 2))
(numRows, numCols) = scores.shape[2:4]
rects = []
confidences = []
for y in range(0, numRows):
scoresData = scores[0, 0, y]
xData0 = geometry[0, 0, y]
xData1 = geometry[0, 1, y]
xData2 = geometry[0, 2, y]
xData3 = geometry[0, 3, y]
anglesData = geometry[0, 4, y]
for x in range(0, numCols):
if scoresData[x] < 0.5:
continue
(offsetX, offsetY) = (x * 4.0, y * 4.0)
angle = anglesData[x]
cos = np.cos(angle)
sin = np.sin(angle)
h = xData0[x] + xData2[x]
w = xData1[x] + xData3[x]
endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x]))
endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x]))
startX = int(endX - w)
startY = int(endY - h)
rects.append((startX, startY, endX, endY))
confidences.append(scoresData[x])
boxes = non_max_suppression(np.array(rects), probs=confidences)
crops = []
for startX, startY, endX, endY in boxes:
startX = int(startX * rW)
startY = int(startY * rH)
endX = int(endX * rW)
endY = int(endY * rH)
cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 0, 255), 3)
crops.append([[startX, startY], [endX, endY]])
return orig, crops
def preprocess_ocr(image):
input_data = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
input_data = cv2.resize(input_data, (200, 31))
input_data = input_data[np.newaxis]
input_data = np.expand_dims(input_data, 3)
input_data = input_data.astype("float32") / 255
return input_data
def run_tflite_ocr(input_data):
model_path = join(dirname(__file__), "keras_ocr_float16_ctc.tflite")
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]["shape"]
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
return output
alphabets = string.digits + string.ascii_lowercase
blank_index = len(alphabets)
def postprocess_ocr(output, greedy=True):
# Running decoder on TFLite Output
final_output = "".join(
alphabets[index] for index in output[0] if index not in [blank_index, -1]
)
return final_output
def run_ocr(img_bytes: bytes, detector="east", greedy=True):
nd_array = read_image(img_bytes)
start_time = time.time()
input_image, preprocessed_image, rW, rH = preprocess_east(nd_array)
scores, geometry = run_east_tflite(preprocessed_image)
output, crops = postprocess_east(scores, geometry, rW, rH, input_image)
font_scale = 1
thickness = 2
# i=0
(h, w) = input_image.shape[:2]
for box in crops:
# i += 1
yMin = box[0][1]
yMax = box[1][1]
xMin = box[0][0]
xMax = box[1][0]
xMin = max(0, xMin)
yMin = max(0, yMin)
xMax = min(w, xMax)
yMax = min(h, yMax)
cropped_image = input_image[yMin:yMax, xMin:xMax, :]
# Uncomment it if you want to see the croppd images in output folder
# cv2.imwrite(f'output/{i}.jpg', cropped_image)
# print("i: ", i)
# print("Box: ", box)
# plt_imshow("cropped_image", input_image)
processed_image = preprocess_ocr(cropped_image)
ocr_output = run_tflite_ocr(processed_image)
final_output = postprocess_ocr(ocr_output, greedy)
# print("Text output: ", final_output)
# final_output = ''
cv2.putText(
output,
final_output,
(box[0][0], box[0][1] - 10),
cv2.FONT_HERSHEY_SIMPLEX,
font_scale,
(0, 0, 255),
thickness,
)
print(
f"Time taken to run OCR Model with {detector} detector and KERAS OCR is",
time.time() - start_time,
)
return output.tobytes()
def image_to_byte_array(image_path: string) -> bytes:
with open(image_path, "rb") as image:
f = image.read()
return bytes(f)
def read_image(content: bytes) -> np.ndarray:
"""
Image bytes to OpenCV image
:param content: Image bytes
:returns OpenCV image
:raises TypeError: If content is not bytes
:raises ValueError: If content does not represent an image
"""
if not isinstance(content, bytes):
raise TypeError(f"Expected 'content' to be bytes, received: {type(content)}")
image = cv2.imdecode(np.frombuffer(content, dtype=np.uint8), cv2.IMREAD_COLOR)
if image is None:
raise ValueError(f"Expected 'content' to be image bytes")
return image
# image_path = r"/Users/josegeorges/Desktop/puro-labels/train/yes/label_1.jpg"
# img_bytes = image_to_byte_array(image_path)
# final_image = run_ocr(img_bytes, detector="east", greedy=True)
def call_ocr_from_android(img_bytes: bytearray):
return run_ocr(img_bytes=bytes(img_bytes), detector="east", greedy=True)
# dst_folder = "./"
# out_file_name = "out_image.png"
# # Save the image in JPG format
# cv2.imwrite(os.path.join(dst_folder, out_file_name), final_image)
</code></pre>
<p>Here are the installed packages through gradle:</p>
<pre><code>install "numpy"
install "opencv-python"
install "imutils"
install "scikit-image"
install "tensorflow"
</code></pre>
<p>I'm currently running into the following exception when trying to load the <code>keras_ocr_float16_ctc.tflite</code> interpreter:</p>
<pre><code>Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.Node number 192 (FlexCTCGreedyDecoder) failed to prepare.
</code></pre>
<p>From what I've read in <a href="https://www.tensorflow.org/lite/guide/ops_select" rel="nofollow noreferrer">TF docs</a>, I should have the select-ops available since I'm installing the pip Tensorflow package, but that doesn't seem to be the case. I also thought I needed to follow the android instructions to add the <code>org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT</code> dependency but that also doesn't seem to work.</p>
<p>What can I do to run this Select Op(s) model on Android using chaquopy?</p>
| <python><android><tensorflow><chaquopy> | 2023-10-20 21:08:18 | 1 | 820 | Jose Georges |
77,333,720 | 3,821,009 | Add series to a polars dataframe by cycling to match the dataframe row count | <p>This:</p>
<pre><code>df = polars.DataFrame(dict(
j=numpy.random.randint(10, 99, 10)
))
print('--- df')
print(df)
s = polars.Series('k', numpy.random.randint(10, 99, 3))
print('--- s')
print(s)
dfj = (df
.with_row_index()
.with_columns(
polars.col('index') % len(s)
)
.join(s.to_frame().with_row_index(), on='index')
.drop('index')
)
print('--- dfj')
print(dfj)
</code></pre>
<p>produces:</p>
<pre><code>--- df
j (i64)
47
22
82
19
85
15
89
74
26
11
shape: (10, 1)
--- s
shape: (3,)
Series: 'k' [i64]
[
86
81
16
]
--- dfj
j (i64) k (i64)
47 86
22 81
82 16
19 86
85 81
15 16
89 86
74 81
26 16
11 86
shape: (10, 2)
</code></pre>
<p>That is, it cycles series 'k' as needed to match the dataframe row count.</p>
<p>It looks a bit verbose. Is there a shorter (or more idiomatic) way to do this in polars?</p>
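One shorter route — a sketch; whether it counts as idiomatic polars is debatable, but it avoids the row-index join entirely — is to build the cycled values with itertools and attach them as a column. Plain Python lists are used below; the final with_columns line is the hypothetical polars usage:

```python
from itertools import cycle, islice

j = [47, 22, 82, 19, 85, 15, 89, 74, 26, 11]  # the dataframe column
k_values = [86, 81, 16]                        # the series to cycle

# Repeat k_values until it matches the row count.
k = list(islice(cycle(k_values), len(j)))
print(k)  # [86, 81, 16, 86, 81, 16, 86, 81, 16, 86]

# dfj = df.with_columns(polars.Series("k", k))  # attach as a new column
```

This also avoids relying on join output ordering, since the values are built in row order up front.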
| <python><python-polars> | 2023-10-20 20:52:54 | 1 | 4,641 | levant pied |
77,333,710 | 2,348,290 | python won't update globals from imported function | <p>I wrote a script to parse arguments from the command line. It has a function called <code>getFlags</code> which parses <code>sys.argv</code> and a command string from the user to create a dictionary called <code>myVars</code> with the variable name and associated values, and returns the dictionary. If I include the function in the main script I can move those variables to the globals() list by simply doing <code>globals().update(myVars)</code> inside the function.</p>
<p>However, when I save this function and import it, the update function does not work and I am forced to use <code>globals().update(getFlags(...))</code>. This works fine for me but confuses some people who use my script.</p>
<p>How can I make it update the globals using the imported function?</p>
<p><strong>Example</strong></p>
<pre><code>def getFlags(commandString,usage="No usage was provided"):
....
globals().update(myVars)
return myVars
getFlags("...")
#My variables are now in the global scope
</code></pre>
<pre><code>from myScripts import getFlags
getFlags("...")
#Oh no, variables not in global scope
globals().update(getFlags("..."))
#Now the variables are in the global scope
</code></pre>
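The underlying issue is that globals() inside an imported function refers to the imported module's globals, not the caller's. A possible workaround — a sketch using a CPython-specific, private API, with hypothetical names — is to reach one frame up the call stack:

```python
import sys

def get_flags_demo():
    my_vars = {"alpha": 1, "beta": 2}  # stand-in for the parsed flags
    # globals() here would be *this* module's namespace once imported;
    # the caller's globals live one frame up (CPython-only, private API).
    sys._getframe(1).f_globals.update(my_vars)
    return my_vars

get_flags_demo()
print(alpha, beta)  # 1 2
```

Mutating a caller's globals from a library function is surprising behavior, which is arguably why the explicit `globals().update(getFlags(...))` pattern is clearer for readers even if more verbose.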
| <python><global> | 2023-10-20 20:49:52 | 1 | 2,924 | jeffpkamp |
77,333,437 | 616,460 | Faster way to pass a numpy array through a protobuf message | <p>I have a 921000 x 3 numpy array (921k 3D points, one point per row) that I am trying to pack into a protobuf message and I am running into performance issues. I have control over the protocol and can change it as needed. I am using Python 3.10 and numpy 1.26.1. I am using protocol buffers because I'm using gRPC.</p>
<p>For the very first unoptimized attempt I was using the following message structure:</p>
<pre><code>message Point {
float x = 1;
float y = 2;
float z = 3;
}
message BodyData {
int32 id = 1;
repeated Point points = 2;
}
</code></pre>
<p>And packing the points in one at a time (let <code>data</code> be the large numpy array):</p>
<pre><code>body = BodyData()
for row in data:
body.points.append(Point(x=row[0], y=row[1], z=row[2]))
</code></pre>
<p>This takes approximately 1.6 seconds, which is way too slow.</p>
<p>For the next attempt I ditched the <code>Point</code> structure and decided to transmit the points as a flat array of X/Y/Z triplets:</p>
<pre><code>message Points {
repeated float xyz = 1;
}
message BodyData {
int32 id = 1;
Points points = 2;
}
</code></pre>
<p>I did some performance tests to determine the fastest way to append a 2D numpy array to a list, and got the following results:</p>
<pre><code># Time: 80.1ms
points = []
points.extend(data.flatten())
# Time: 96.8ms
points = []
points.extend(data.reshape((data.shape[0] * data.shape[1],)))
# Time: 76.5ms - FASTEST
points = []
points.extend(data.flatten().tolist())
</code></pre>
<p>From this I determined that <code>.extend(data.flatten().tolist())</code> was the fastest.</p>
<p>However, when I applied this to the protobuf message, it slowed way down:</p>
<pre><code># Time: 436.0ms
body = BodyData()
body.points.xyz.extend(data.flatten().tolist())
</code></pre>
<hr />
<p>So the fastest I've been able to figure out how to pack the numpy array into any protobuf message is 436ms for 921000 points.</p>
<p>This is very far short of my performance target, which is ~12ms per copy. I'm not sure if I can get close to that, but is there any way I can do this more quickly?</p>
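One approach worth benchmarking — a sketch, not a verified 12 ms solution — is to change the protocol to a single bytes field (e.g. `bytes xyz = 1;`, a hypothetical field name) and ship the raw float32 buffer, so serialization is essentially a memcpy instead of a per-element Python loop; the receiver rebuilds the array with `numpy.frombuffer(payload, dtype=numpy.float32).reshape(-1, 3)`. The round trip, shown here with the stdlib array module standing in for numpy:

```python
from array import array

# Stand-in for data.astype(np.float32).flatten(); with numpy this would be
# payload = data.astype(np.float32).tobytes()
points = array("f", [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
payload = points.tobytes()          # goes into the proto `bytes` field

restored = array("f")               # receiver side
restored.frombytes(payload)
print(list(restored))  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

This trades protobuf-level typing for speed: endianness and dtype become conventions of your protocol rather than guarantees of the schema.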
| <python><numpy><performance><protocol-buffers> | 2023-10-20 19:46:10 | 2 | 40,602 | Jason C |
77,333,340 | 8,940,973 | Gunicorn behavior with worker and thread configuration | <p>I'm currently using Gunicorn version 20.1.0, and I have a question regarding the behavior of Gunicorn when using the following command:</p>
<p>Dockerfile</p>
<pre><code> CMD ["gunicorn", "--workers", "20", "--threads", "5", "application:application", "--max-requests", "250", "--max-requests-jitter", "50"]
</code></pre>
<p>My initial understanding was that this command should instruct Gunicorn to start 20 workers, each supported by 5 threads. After each worker completes 250 requests, it should be terminated and a new worker should be spawned. However, the observed behavior is different.</p>
<p>The number of workers (processes with different PIDs) keeps increasing beyond 60, which eventually leads to memory exhaustion and timeouts. It seems like the workers are not being terminated as expected.</p>
<p>My question is two-fold:</p>
<ol>
<li>Does Gunicorn create more than the 20 workers I specified in the command?</li>
<li>Is Gunicorn failing to terminate workers that have served 250 requests?</li>
</ol>
<p>I would appreciate any insights or suggestions to help me understand and rectify this behavior. Thank you!</p>
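For what it's worth, as documented the jitter is additive: each worker is given a limit of max_requests plus a random amount up to max_requests_jitter, and the master is supposed to replace — not accumulate — workers that hit it. So with this configuration each worker should recycle somewhere between 250 and 300 requests while the total stays at 20. A sketch of the expected per-worker limits (my own illustration, not gunicorn source):

```python
import random

max_requests, jitter, workers = 250, 50, 20

# One hypothetical restart threshold per worker, jitter applied additively.
limits = [max_requests + random.randint(0, jitter) for _ in range(workers)]
print(len(limits), min(limits) >= 250, max(limits) <= 300)  # 20 True True
```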
| <python><flask><gunicorn> | 2023-10-20 19:20:49 | 3 | 345 | No_One |
77,333,320 | 3,216,318 | python-oracledb module fails to use instantclient in a docker container | <p>I'm trying to build a Docker image that accesses an Oracle database. At runtime, I get the following error message: <code>DPI-1047: Cannot locate a 64-bit Oracle Client library: "/oracle/instantclient/libclntsh.so: cannot open shared object file: No such file or directory".</code></p>
<p>Inside the container, there actually is a symbolic link at <code>/oracle/instantclient/libclntsh.so</code></p>
<p>Dockerfile:</p>
<pre><code>FROM python:3.12-slim
RUN apt-get update && \
apt-get install -y wget unzip libaio1
ARG ORACLE_HOME=/oracle
ARG ORACLE_CLIENT_HOME=${ORACLE_HOME}/instantclient
# Download and install Oracle instantclient
RUN mkdir /tmp/oracle && \
wget https://download.oracle.com/otn_software/linux/instantclient/1920000/instantclient-basic-linux.x64-19.20.0.0.0dbru.zip -P /tmp/oracle && \
unzip /tmp/oracle/instantclient-basic-* -d /tmp/oracle && \
mkdir ${ORACLE_HOME} && \
mv /tmp/oracle/instantclient_* ${ORACLE_CLIENT_HOME}
ENV LD_LIBRARY_PATH="${ORACLE_CLIENT_HOME}"
RUN pip install --upgrade pip && \
pip install pipenv
ENV PIPENV_VENV_IN_PROJECT=1
WORKDIR /app
ADD main.py Pipfile Pipfile.lock ./
RUN pipenv sync
ENTRYPOINT ["./.venv/bin/python", "main.py"]
CMD [""]
</code></pre>
<p>main.py:</p>
<pre class="lang-py prettyprint-override"><code>import os
import oracledb
def print_db_version(db_config):
params = oracledb.ConnectParams(host=db_config['host'], port=db_config['port'], service_name=db_config['name'])
with oracledb.connect(user=db_config['username'], password=db_config['password'], params=params) as conn:
print(f'Database version: {conn.version}')
conn.close()
if __name__ == '__main__':
# Both calls below fail...
# oracledb.init_oracle_client()
oracledb.init_oracle_client(os.environ['LD_LIBRARY_PATH'])
db_config = {
'host': os.environ['DB_HOST'],
'port': os.environ['DB_PORT'],
'name': os.environ['DB_NAME'],
'username': os.environ['DB_USERNAME'],
'password': os.environ['DB_PASSWORD'],
}
print_db_version(db_config)
</code></pre>
<p>Pipfile:</p>
<pre><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
oracledb = "1.4.2"
</code></pre>
<p>command lines (last one allows to explore the container):</p>
<pre><code>docker build -t my-version .
docker run my-version
docker run -it --entrypoint "" my-version bash
</code></pre>
<p>I can not figure out why this error pops up while the library is actually installed in my container... any ideas?</p>
<hr />
<p><strong>EDIT</strong></p>
<p>I tried Anthony Tuininga suggestions and had the following output:</p>
<pre><code>ODPI [00001] 2023-10-21 19:13:32.206: ODPI-C 5.0.1
ODPI [00001] 2023-10-21 19:13:32.206: debugging messages initialized at level 64
ODPI [00001] 2023-10-21 19:13:32.206: Context Parameters:
ODPI [00001] 2023-10-21 19:13:32.206: Oracle Client Lib Dir: /oracle/instantclient
ODPI [00001] 2023-10-21 19:13:32.206: Environment Variables:
ODPI [00001] 2023-10-21 19:13:32.206: LD_LIBRARY_PATH => "/oracle/instantclient"
ODPI [00001] 2023-10-21 19:13:32.206: load in parameter directory
ODPI [00001] 2023-10-21 19:13:32.206: load in dir /oracle/instantclient
ODPI [00001] 2023-10-21 19:13:32.206: load with name /oracle/instantclient/libclntsh.so
ODPI [00001] 2023-10-21 19:13:32.206: load by OS failure: /oracle/instantclient/libclntsh.so: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-21 19:13:32.206: load with name /oracle/instantclient/libclntsh.so.19.1
ODPI [00001] 2023-10-21 19:13:32.207: load by OS failure: /oracle/instantclient/libclntsh.so.19.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-21 19:13:32.207: load with name /oracle/instantclient/libclntsh.so.18.1
ODPI [00001] 2023-10-21 19:13:32.207: load by OS failure: /oracle/instantclient/libclntsh.so.18.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-21 19:13:32.207: load with name /oracle/instantclient/libclntsh.so.12.1
ODPI [00001] 2023-10-21 19:13:32.207: load by OS failure: /oracle/instantclient/libclntsh.so.12.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-21 19:13:32.207: load with name /oracle/instantclient/libclntsh.so.11.1
ODPI [00001] 2023-10-21 19:13:32.207: load by OS failure: /oracle/instantclient/libclntsh.so.11.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-21 19:13:32.207: load with name /oracle/instantclient/libclntsh.so.20.1
ODPI [00001] 2023-10-21 19:13:32.207: load by OS failure: /oracle/instantclient/libclntsh.so.20.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-21 19:13:32.207: load with name /oracle/instantclient/libclntsh.so.21.1
ODPI [00001] 2023-10-21 19:13:32.207: load by OS failure: /oracle/instantclient/libclntsh.so.21.1: cannot open shared object file: No such file or directory
Traceback (most recent call last):
File "/app/main.py", line 18, in <module>
['libocci.so.19.1', 'libnnz19.so', 'adrci', 'libipc1.so', 'xstreams.jar', 'libclntsh.so.11.1', 'libclntsh.so.18.1', 'genezi', 'libocci.so.12.1', 'network', 'libocci.so.10.1', 'libocci.so', 'libociei.so', 'libclntsh.so', 'libclntsh.so.12.1', 'libocci.so.18.1', 'libclntsh.so.19.1', 'ucp.jar', 'BASIC_LICENSE', 'libocijdbc19.so', 'ojdbc8.jar', 'BASIC_README', 'libmql1.so', 'liboramysql19.so', 'libocci.so.11.1', 'libclntshcore.so.19.1', 'libclntsh.so.10.1', 'uidrvci']
oracledb.init_oracle_client(os.environ['LD_LIBRARY_PATH'])
File "src/oracledb/impl/thick/utils.pyx", line 476, in oracledb.thick_impl.init_oracle_client
File "src/oracledb/impl/thick/utils.pyx", line 500, in oracledb.thick_impl.init_oracle_client
File "src/oracledb/impl/thick/utils.pyx", line 421, in oracledb.thick_impl._raise_from_info
oracledb.exceptions.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "/oracle/instantclient/libclntsh.so: cannot open shared object file: No such file or directory". See https://python-oracledb.readthedocs.io/en/latest/user_guide/initialization.html for help
</code></pre>
<p>It shows that:</p>
<ul>
<li><code>LD_LIBRARY_PATH</code> is properly set to <code>/oracle/instantclient</code></li>
<li>the instant client is actually searched in this directory</li>
<li><code>/oracle/instantclient</code> actually contains libclntsh.so file (it's actually a symbolic link to libclntsh.so.19.1</li>
</ul>
<p>It sounds pretty weird to me...</p>
<p>I made the source code available here: <a href="https://github.com/galak75/python-oracle-img" rel="nofollow noreferrer">https://github.com/galak75/python-oracle-img</a></p>
<hr />
<p><strong>EDIT 2</strong></p>
<p>I tried the bare call to <code>init_oracle_client()</code>, and it looks like the <code>LD_LIBRARY_PATH</code> variable is not used: the instant client is searched for in my Python virtualenv:</p>
<pre><code>ODPI [00001] 2023-10-22 14:48:34.681: ODPI-C 5.0.1
ODPI [00001] 2023-10-22 14:48:34.681: debugging messages initialized at level 64
ODPI [00001] 2023-10-22 14:48:34.681: Context Parameters:
ODPI [00001] 2023-10-22 14:48:34.681: Environment Variables:
ODPI [00001] 2023-10-22 14:48:34.681: LD_LIBRARY_PATH => "/oracle/instantclient"
ODPI [00001] 2023-10-22 14:48:34.681: check module directory
ODPI [00001] 2023-10-22 14:48:34.681: module name is /app/.venv/lib/python3.12/site-packages/oracledb/thick_impl.cpython-312-aarch64-linux-gnu.so
ODPI [00001] 2023-10-22 14:48:34.681: load in dir /app/.venv/lib/python3.12/site-packages/oracledb
ODPI [00001] 2023-10-22 14:48:34.681: load with name /app/.venv/lib/python3.12/site-packages/oracledb/libclntsh.so
ODPI [00001] 2023-10-22 14:48:34.681: load by OS failure: /app/.venv/lib/python3.12/site-packages/oracledb/libclntsh.so: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.681: load with OS search heuristics
ODPI [00001] 2023-10-22 14:48:34.681: load with name libclntsh.so
ODPI [00001] 2023-10-22 14:48:34.682: load by OS failure: libaio.so.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.682: load with name libclntsh.so.19.1
ODPI [00001] 2023-10-22 14:48:34.682: load by OS failure: libaio.so.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.682: load with name libclntsh.so.18.1
ODPI [00001] 2023-10-22 14:48:34.682: load by OS failure: libaio.so.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.682: load with name libclntsh.so.12.1
ODPI [00001] 2023-10-22 14:48:34.683: load by OS failure: libaio.so.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.683: load with name libclntsh.so.11.1
ODPI [00001] 2023-10-22 14:48:34.683: load by OS failure: libaio.so.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.683: load with name libclntsh.so.20.1
ODPI [00001] 2023-10-22 14:48:34.683: load by OS failure: libclntsh.so.20.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.683: load with name libclntsh.so.21.1
ODPI [00001] 2023-10-22 14:48:34.683: load by OS failure: libclntsh.so.21.1: cannot open shared object file: No such file or directory
ODPI [00001] 2023-10-22 14:48:34.683: check ORACLE_HOME
Traceback (most recent call last):
File "/app/main.py", line 19, in <module>
oracledb.init_oracle_client()
File "src/oracledb/impl/thick/utils.pyx", line 476, in oracledb.thick_impl.init_oracle_client
LD_LIBRARY_PATH = /oracle/instantclient
['libocci.so.19.1', 'libnnz19.so', 'adrci', 'xstreams.jar', 'libclntsh.so.11.1', 'libclntsh.so.18.1', 'genezi', 'libocci.so.12.1', 'network', 'libocci.so.10.1', 'libocci.so', 'libociei.so', 'libclntsh.so', 'libclntsh.so.12.1', 'libocci.so.18.1', 'libclntsh.so.19.1', 'ucp.jar', 'BASIC_LICENSE', 'libocijdbc19.so', 'ojdbc8.jar', 'BASIC_README', 'liboramysql19.so', 'libocci.so.11.1', 'libclntshcore.so.19.1', 'libclntsh.so.10.1', 'uidrvci']
File "src/oracledb/impl/thick/utils.pyx", line 500, in oracledb.thick_impl.init_oracle_client
File "src/oracledb/impl/thick/utils.pyx", line 421, in oracledb.thick_impl._raise_from_info
oracledb.exceptions.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libaio.so.1: cannot open shared object file: No such file or directory". See https://python-oracledb.readthedocs.io/en/latest/user_guide/initialization.html for help
</code></pre>
<hr />
<p><strong>EDIT 3</strong></p>
<p>I added the <code>libaio1</code> package installation to my code, since it was clearly the next issue (after the target OS platform issue), as pointed out by <a href="https://stackoverflow.com/a/77342237/3216318">Christopher's answer</a>.</p>
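For completeness, the EDIT 2 log also reveals the architecture mismatch: the module name <code>thick_impl.cpython-312-aarch64-linux-gnu.so</code> shows an arm64 Python, while the downloaded instant client is x64, so that <code>.so</code> can never load. A sketch of one fix (assumption: you are building on an arm64 host such as Apple silicon and want to keep the x64 client):

```dockerfile
# Pin the image to amd64 so the x64 instant client matches the interpreter
FROM --platform=linux/amd64 python:3.12-slim
```

Alternatively, build and run with `docker build --platform linux/amd64 ...` (emulation permitting), or download an arm64 instant client build if one is available for your Oracle client version.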
| <python><oracle-database><docker><instantclient><python-oracledb> | 2023-10-20 19:16:21 | 3 | 2,083 | GΓ©raud |
77,333,277 | 11,959,501 | DeprecationWarning: sipPyTypeDict() is deprecated PyQt5 | <p>I was writing the simplest piece of code to run some small app.</p>
<p>I got the next warning message:</p>
<pre><code>~\PycharmProjects\LoggerTest\main.py:10: DeprecationWarning: sipPyTypeDict() is deprecated, the extension module should use sipPyTypeDictRef() instead
class MainWindow(QMainWindow):
</code></pre>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code># Import libraries
import sys
# from PyQt5 import QtGui
# from PyQt5.QtCore import QEvent
from PyQt5.QtWidgets import QApplication, QMainWindow
# from PyQt5.QtCore import pyqtSignal #, pyqtSlot
from gui_ui import Ui_MainWindow


class MainWindow(QMainWindow):
    def __init__(self, parent=None, **kwargs):
        super(MainWindow, self).__init__(parent=parent)
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.show()


if __name__ == '__main__':
    app = QApplication(sys.argv)
    g = MainWindow()
    app.exec_()
</code></pre>
<p><strong>What does that warning mean?</strong></p>
| <python><pyqt5><python-3.11> | 2023-10-20 19:06:51 | 4 | 577 | Jhon Margalit |
77,333,151 | 7,916,257 | How to programmatically identify the first and second minima and the peak in a bell-shaped curve? | <p>I'm working with a dataset that forms a bell-shaped (normal-like) distribution. I'm trying to find three specific points in this distribution:</p>
<ol>
<li>The first minimum point before the curve ascends.</li>
<li>The peak of the curve (the maximum point).</li>
<li>The second minimum as the curve descends after the peak.</li>
</ol>
<p>The challenge is that the "minimum" points in the tails of my curve are not well-defined (the data flattens out), so it's tricky to identify these points precisely. I understand that I might need to find where the curve starts to rise (where the first derivative changes from negative to positive) for the first minimum, and where it starts to fall (where the first derivative changes from positive to negative) for the second minimum, after the peak.</p>
<p>The following is the graph, and the data is:
<a href="https://i.sstatic.net/oKpo0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oKpo0.png" alt="enter image description here" /></a></p>
<p>Here's a simplified structure of my data after loading it with pandas:</p>
<pre class="lang-py prettyprint-override"><code># ... (data loading code) ...
print(df.head())
</code></pre>
<p>The y-axis values:</p>
<pre class="lang-py prettyprint-override"><code>0 0.000006775275118
1 0.000002841071152
2 0.000002331050869
3 0.000002098089639
4 0.000001958793763
5 0.000001882957831
6 0.000001817511261
7 0.000001778793930
8 0.000001747600657
9 0.000001726581760
10 0.000001736836910
11 0.000001725393807
12 0.000001735801905
13 0.000001722637070
14 0.000001749210289
15 0.000001743336865
16 0.000001773540895
17 0.000001758737558
18 0.000001792945553
19 0.000001789850672
20 0.000001779160328
21 0.000001807576901
22 0.000001808267621
23 0.000001818196607
24 0.000001811775275
25 0.000001818907290
26 0.000001807848091
27 0.000001836718285
28 0.000001808366208
29 0.000001808187769
30 0.000001782767490
31 0.000001769246699
32 0.000001775707035
33 0.000001759920903
34 0.000001737253676
35 0.000001722037872
36 0.000001727249139
37 0.000001693093662
38 0.000001701267438
39 0.000001692311112
40 0.000001678170239
41 0.000001661488536
42 0.000001668086770
43 0.000001667761220
44 0.000001662043200
45 0.000001667680139
46 0.000001659051206
47 0.000001708371198
48 0.000001732222077
49 0.000001774399919
50 0.000001876523600
51 0.000002025685347
52 0.000002259535699
53 0.000002560415994
54 0.000003055340098
55 0.000003727916538
56 0.000004705124476
57 0.000005971950809
58 0.000007664882924
59 0.000009665827809
60 0.000012083860418
61 0.000014769510653
62 0.000017550004674
63 0.000020119588986
64 0.000022386885842
65 0.000024171012583
66 0.000025206126640
67 0.000025491871789
68 0.000024878712706
69 0.000023424992853
70 0.000021276252458
71 0.000018607410922
72 0.000015824313725
73 0.000012923828210
74 0.000010311275904
75 0.000008025889954
76 0.000006292151302
77 0.000004904108668
78 0.000003974381668
79 0.000003333372577
80 0.000002833383398
81 0.000002537387898
82 0.000002308652989
83 0.000002216008051
84 0.000002145439742
85 0.000002146526344
86 0.000002167240574
87 0.000002248661389
88 0.000002323548464
89 0.000002430060014
90 0.000002537689493
91 0.000002347846822
Name: diff_current, dtype: float64
</code></pre>
<p>I'm currently using the <code>find_peaks</code> function from SciPy to locate the peaks and the minima (by inverting the y values) following <a href="https://stackoverflow.com/a/56812929/10543310">https://stackoverflow.com/a/56812929/10543310</a>. But I am still not able to get only the values of the first min, peak, and second min.</p>
<p>Could anyone guide me on how to identify these three points? Any help with the logic or code would be greatly appreciated.</p>
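<p><strong>EDIT:</strong> To make the derivative idea above concrete, here is a rough sketch on synthetic bell-shaped data (the <code>0.05</code> flatness threshold is an arbitrary assumption that would need tuning, and noisy real data should be smoothed first):</p>

```python
import numpy as np

# Synthetic stand-in for df['diff_current']: a bell curve with flat tails
x = np.linspace(-6, 6, 201)
y = np.exp(-x**2) + 0.01

peak_idx = int(np.argmax(y))          # the peak of the curve

dy = np.gradient(y)                   # first derivative
thresh = 0.05 * dy.max()              # "flatness" threshold -- tune for real data

# First minimum: just before the derivative first rises above the threshold
rising = np.where(dy[:peak_idx] > thresh)[0]
first_min = int(rising[0]) - 1 if len(rising) else 0

# Second minimum: just after the descent levels off again
falling = np.where(dy[peak_idx:] < -thresh)[0]
second_min = peak_idx + int(falling[-1]) + 1 if len(falling) else len(y) - 1

print(first_min, peak_idx, second_min)
```

<p>The idea is exactly the one described above: the tails are declared "flat" wherever the derivative stays inside a small band around zero, so the two minima are taken where the curve starts to rise and where it stops falling.</p>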
| <python><python-3.x><pandas><scipy><signal-processing> | 2023-10-20 18:42:48 | 1 | 919 | Joe |
77,333,128 | 2,728,148 | Does Python guarantee that a coroutine will run to its first waiting-point immediately? | <p>If I have three asynchronous methods:</p>
<pre><code>async def f():
    doSomething1()
    await g()

async def g():
    doSomething2()
    await h()

async def h():
    # some unspecified content
</code></pre>
<p>Does Python provide any guarantees that no other coroutines can be executed on the current thread between the calls to <code>doSomething1()</code> and <code>doSomething2()</code>?</p>
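<p><strong>EDIT:</strong> A small experiment suggests the answer is yes for plain coroutine awaits (this is an observation, not a quote from the spec): awaiting a coroutine runs it synchronously inside the current task, and control only returns to the event loop at a real suspension point such as <code>asyncio.sleep(0)</code> or actual I/O:</p>

```python
import asyncio

order = []

async def h():
    order.append("h")            # no real suspension point anywhere in f/g/h

async def g():
    order.append("g")
    await h()                    # awaiting a coroutine does not yield to the loop

async def f():
    order.append("f")
    await g()

async def spy():
    order.append("spy")          # a competing coroutine, ready to run

async def main():
    asyncio.ensure_future(spy())
    await f()                    # f, g, h all run with no interleaving
    await asyncio.sleep(0)       # first real yield: only now does spy run

asyncio.run(main())
print(order)                     # ['f', 'g', 'h', 'spy']
```

<p>Even though <code>spy()</code> was scheduled before <code>f()</code> started, it does not get to run between <code>doSomething1()</code>-style calls, because <code>await g()</code> on a coroutine never hands control back to the loop.</p>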
| <python><asynchronous><async-await><python-asyncio> | 2023-10-20 18:38:06 | 3 | 11,113 | Cort Ammon |
77,333,100 | 18,030,914 | Geoalchemy2 Geometry schema for pydantic (FastAPI) | <p>I want to use PostGIS with FastAPI and therefore use geoalchemy2 with alembic to create the table in the DB.
But I'm not able to declare the schema in pydantic v2 correctly.</p>
<p>My Code looks as follows:</p>
<pre class="lang-py prettyprint-override"><code># auto-generated from env.py
from alembic import op
import sqlalchemy as sa
from geoalchemy2 import Geometry
from sqlalchemy.dialects import postgresql

...

def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_geospatial_table('solarpark',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name_of_model', sa.String(), nullable=True),
        sa.Column('comment', sa.String(), nullable=True),
        sa.Column('lat', sa.ARRAY(sa.Float()), nullable=True),
        sa.Column('lon', sa.ARRAY(sa.Float()), nullable=True),
        sa.Column('geom', Geometry(geometry_type='POLYGON', srid=4326, spatial_index=False, from_text='ST_GeomFromEWKT', name='geometry'), nullable=True),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_geospatial_index('idx_solarpark_geom', 'solarpark', ['geom'], unique=False, postgresql_using='gist', postgresql_ops={})
    op.create_index(op.f('ix_solarpark_id'), 'solarpark', ['id'], unique=False)
    # ### end Alembic commands ###
</code></pre>
<pre class="lang-py prettyprint-override"><code># models.py
from geoalchemy2 import Geometry
from sqlalchemy import ARRAY, Column, Date, Float, Integer, String

from app.db.base_class import Base


class SolarPark(Base):
    id = Column(Integer, primary_key=True, index=True)
    name_of_model = Column(String)
    comment = Column(String, default="None")
    lat = Column(ARRAY(item_type=Float))
    lon = Column(ARRAY(item_type=Float))
    geom = Column(Geometry("POLYGON", srid=4326))
</code></pre>
<pre class="lang-py prettyprint-override"><code># schemas.py
from typing import List

from pydantic import ConfigDict, BaseModel, Field


class SolarParkBase(BaseModel):
    model_config = ConfigDict(from_attributes=True, arbitrary_types_allowed=True)

    name_of_model: str = Field("test-model")
    comment: str = "None"
    lat: List[float] = Field([599968.55, 599970.90, 599973.65, 599971.31, 599968.55])
    lon: List[float] = Field([5570202.63, 5570205.59, 5570203.42, 5570200.46, 5570202.63])
    geom: [WHAT TO INSERT HERE?] = Field('POLYGON ((599968.55 5570202.63, 599970.90 5570205.59, 599973.65 5570203.42, 599971.31 5570200.46, 599968.55 5570202.63))')
</code></pre>
<p>I want the column <strong>geom</strong> to be a type of geometry to perform spatial operations on it. But how can I declare that in pydantic v2?</p>
<p>Thanks a lot in advance!</p>
| <python><postgis><pydantic><geoalchemy2> | 2023-10-20 18:32:23 | 1 | 315 | Taraman |
77,332,955 | 6,654,904 | How to test Data Build Tool (dbt) macros in dbt | <p>I have a Data Build Tool (dbt) macro for Snowflake tables in AWS, and the Snowflake table name is source.customer. The macro is here:</p>
<pre><code>{% macro get_customer(customer_status='active') -%}
    {{ source("source", "customer") }}
    WHERE
    {% if customer_status.lower() == 'active' -%}
        customerdesc = 'Existing' and customer_status = 0
    {%- endif -%}
{%- endmacro %}
</code></pre>
<p>I added a tests folder in the dbt_project.yml file and created a test case stored under the tests folder. My test case is here:</p>
<pre><code> {% test get_customer_active %}
expect query
with expected as (
select 'Existing' as customerdesc, 0 as customer_status
)
{{ get_customer('active') }}
union all
select * from expected;
</code></pre>
<p>I run the test case with this command:</p>
<pre><code>dbt test --select get_customer_active
</code></pre>
<p>The test run succeeds, and I get this message: "The selection criterion 'get_customer_active' does not match any nodes".</p>
<p>However, the test case does not actually test the macro.</p>
<p>Question: How to test DBT macro?</p>
| <python><sql><snowflake-cloud-data-platform><jinja2><dbt> | 2023-10-20 18:02:51 | 1 | 353 | Anson |
77,332,822 | 633,182 | Train a pyg model with edge features only, no node level feature vector | <p>I have a dataset containing only edge features; the graph has nodes with no X features. This dataset is a set of edges about a network and its network flow. Ex: ip-address1, ip-address2 are the source & target nodes ... <code>the edge_attr</code></p>
<p>To build a model in pytorch geometric (pyg) how do I pass the x (features) to the Data object?</p>
<p>Here is what I do now, but is not working:</p>
<pre><code>X = np.zeros(shape=(len(node_indices),0))
x = torch.tensor(X, dtype=torch.double)
...
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, y=y)
</code></pre>
<p>print data:
<code>Data(x=[1784, 0], edge_index=[2, 1087823], edge_attr=[1087823, 6], y=[1087823, 1])</code></p>
<p>P.S. <a href="https://stackoverflow.com/questions/77258901/can-we-use-gnn-on-graphs-with-only-edge-features">A similar question is in this link</a></p>
| <python><vector><dataset><pytorch-geometric> | 2023-10-20 17:38:55 | 1 | 651 | sAguinaga |
77,332,706 | 3,171,007 | Querying Snowflake in Python where the table may not exist | <p>I got this code from a coworker that gets and manipulates data from Snowflake using snowflake.connector:</p>
<pre><code>with read_snowflake(user = snf_user, warehouse = snf_warehouse, database = snf_db, schema = snf_schema) as snf_conn:
    max_date_df = pd.read_sql(max_date_qry, snf_conn) #Read
</code></pre>
<p>However, when the table doesn't exist (and therefore I need to feed the algorithm a default date value) I couldn't figure out how to trap the error. I did figure out how to trap that specific error with this code:</p>
<pre><code>snf_conn = read_snowflake(user = snf_user, warehouse = snf_warehouse, database = snf_db, schema = snf_schema)
cur = snf_conn.cursor()

try:
    cur.execute(max_date_qry)
except Exception as e:
    print(f"Some error you don't know how to handle {e}")
    max_date = '2023/01/01'
</code></pre>
<p>But then when I try to read the results from the simple <code>SELECT MAX(SN_DATE) MAX_DT FROM db.tbl.N_SNAPSHOT</code> query (which returns a single date from Snowflake) the <code>print(cur)</code> output spits out numerals ranging from 1 to 52 (with 9 twice).</p>
<p>How can I query Snowflake for a max date and also trap the potential "table doesn't exist" error?</p>
| <python><snowflake-cloud-data-platform> | 2023-10-20 17:12:13 | 0 | 1,744 | n8. |
77,332,131 | 580,937 | Get name of the current Snowpark Python Stored Procedure | <p>is it possible to retrieve the name of the currently running python sproc?</p>
<p>For example if I have a python proc</p>
<pre><code>CREATE OR REPLACE PROCEDURE copydata(fromtable VARCHAR, totable VARCHAR, count INT)
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION = '3.8'
PACKAGES = ('snowflake-snowpark-python')
HANDLER = 'run'
AS
$$
def run(session, from_table, to_table, count):
    session.table(from_table).limit(count).write.save_as_table(to_table)
    return "SUCCESS"
$$;
</code></pre>
<p>I would like to return "SUCCESS " + PROC_NAME</p>
| <python><stored-procedures><snowflake-cloud-data-platform> | 2023-10-20 15:26:40 | 1 | 2,758 | orellabac |
77,332,054 | 2,232,418 | Get Process ID of the currently running build on Azure DevOps | <p>I'm trying to piece together some scripts to add CodeQL scanning to an existing build pipeline on Azure DevOps.
For compiled languages such as .NET, a pre-compile command is required to create a CodeQL database to watch the compile. I have set this up as follows:</p>
<p>YAML:</p>
<pre><code>parameters:
  - name: githubToken
    default: ''
  - name: buildType
    default: ''
  - name: codeql_db
    default: "codeql-db"

steps:
  - script: |
      echo "##vso[task.prependpath]/apps/ado/tools/codeql"
    displayName: 'Setup codeql'

  - task: PythonScript@0
    displayName: 'CodeQL setup environment'
    inputs:
      scriptSource: 'filepath'
      scriptPath: '$(Pipeline.Workspace)/utils/codeql_setup.py'
      arguments: '--github-token ${{ parameters.githubToken }} --build-type ${{ parameters.buildType }} --repository-name $(Build.Repository.Name) --repository-path $(Build.Repository.LocalPath) --agent-os $(agent.os) --codeql-db ${{ parameters.codeql_db }}'
      workingDirectory: $(Pipeline.Workspace)
</code></pre>
<p>codeql_setup.py:</p>
<pre><code>if build_type in compiled_buildtypes:
    print('Compiled build type identified. Setting up indirect build tracing.', flush=True)
    codeql_setup_command = ['codeql', 'database', 'init', '--source-root', repository_local_path, '--language', repo_languages_argument, '--begin-tracing', codeql_db_name, '--overwrite']

    # Set additional options
    if len(repo_languages) > 1:
        print('Multiple languages detected.', flush=True)
        codeql_setup_command.append('--db-cluster')
    if 'windows' in agent_os.lower():
        print('Windows Agent detected.', flush=True)
        codeql_setup_command.append(f'--trace-process-level {PROCESS_NUMBER}')

    database_init_proc = subprocess.run(codeql_setup_command, env=os.environ.copy())
    print('CodeQL database setup for indirect build tracing.', flush=True)
</code></pre>
<p>My issue is the second additional argument. <a href="https://docs.github.com/en/code-security/codeql-cli/getting-started-with-the-codeql-cli/preparing-your-code-for-codeql-analysis#using-indirect-build-tracing" rel="nofollow noreferrer">For Windows agents, the process number or parent process name is required for codeQL to watch the compile</a>.</p>
<p>Is there a simple way to get the process ID of the build? Similar to how I have retrieved the OS.</p>
| <python><windows><azure-devops><codeql> | 2023-10-20 15:16:18 | 1 | 2,787 | Ben |
77,331,996 | 10,270,246 | Can't manage with lxml to have both pretty printing and not turning xml elements to self-closing elements | <p>I'm currently facing a dilemma.</p>
<p>This code below doesn't print my XML properly :</p>
<pre class="lang-py prettyprint-override"><code>import lxml.etree

xml_tree = lxml.etree.parse("myFile.xml")
root = xml_tree.getroot()

for fruit in root:
    if fruit.tag == "apple":
        for apple in fruit:
            if apple.tag == "McIntosh":
                fruit.remove(apple)

tree = lxml.etree.ElementTree(root)
tree.write("output.xml", pretty_print=True, xml_declaration=True, encoding="utf-8")
</code></pre>
<p>Here is my XML input file :</p>
<pre class="lang-xml prettyprint-override"><code><fruits>
<apple>
<McIntosh/>
</apple>
</fruits>
</code></pre>
<p>And here is my (ugly) output XML file. the indentation is not correct :</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<fruits>
<apple>
</apple>
</fruits>
</code></pre>
<p>I read somewhere that to get the pretty printing actually work, I had to use a <code>lxml.etree.XMLParser</code> with the <code>remove_blank_text=True</code> option like this :</p>
<pre class="lang-py prettyprint-override"><code>xml_parser = lxml.etree.XMLParser(remove_blank_text=True)
xml_tree = lxml.etree.parse("myFile.xml", xml_parser)
</code></pre>
<p>It works to actually activate the pretty printing but on the other hand my empty XML elements are now turned into self-closing elements :</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<fruits>
<apple/>
</fruits>
</code></pre>
<p>Does anyone know how to fix this side effect of lxml pretty printing ?</p>
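<p><strong>EDIT:</strong> One workaround that seems to work (an assumption on my side, based on how lxml serialises text): setting the now-empty element's <code>.text</code> to an empty string (rather than leaving it <code>None</code>) makes lxml emit an explicit closing tag instead of a self-closing one:</p>

```python
import lxml.etree

parser = lxml.etree.XMLParser(remove_blank_text=True)
root = lxml.etree.fromstring("<fruits><apple><McIntosh/></apple></fruits>", parser)

for fruit in root:
    if fruit.tag == "apple":
        for apple in list(fruit):      # copy: removing while iterating is unsafe
            if apple.tag == "McIntosh":
                fruit.remove(apple)
        if len(fruit) == 0:
            fruit.text = ""            # "" (not None) prevents <apple/>

print(lxml.etree.tostring(root, pretty_print=True).decode())
```

<p>With that, pretty printing stays on (thanks to <code>remove_blank_text=True</code>) and the emptied element serialises as <code>&lt;apple&gt;&lt;/apple&gt;</code> instead of <code>&lt;apple/&gt;</code>.</p>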
| <python><lxml><pretty-print> | 2023-10-20 15:07:26 | 1 | 633 | Autechre |
77,331,946 | 7,385,440 | Find Amazon ASIN in an API? | <p>I'm trying to find the Amazon ASIN from an API call, which I'll give some parameters.
Example:
Google search for: <code>site:amazon.com Top Gun Tom Cruise</code> --> ASIN: B001K3K5MO.</p>
<p>Explanation:
I'm telling the Google search to only focus on sites containing <code>amazon.com</code> and giving it the keywords: Top Gun Tom Cruise. In this case the first link is the correct one. Now this ASIN can be scraped from the Amazon url and thus I can make a Python script to find the ASIN. However, if I mistype some keywords or it can't find the correct version, Google is still going to find another product and scraping that ASIN will result in a wrong ASIN.</p>
<p>Now there's an Amazon API that uses the ASIN to find details about the product (<a href="https://amazon-asin.com/asincheck/?product_id=B001K3K5MO#hl-product-detail" rel="nofollow noreferrer">This site uses it</a>), but I'd like to do the exact opposite. So finding the ASIN by giving it keywords. I could even specify keywords into <code>title: Top Gun</code> and <code>cast: Tom Cruise</code> or something along those lines.
Looking forward to any reply!</p>
| <python><amazon-product-api> | 2023-10-20 14:58:17 | 1 | 303 | G Buis |
77,331,752 | 8,792,159 | Check if a file is a .tsv file | <p>This question is closely related to <a href="https://softwarerecs.stackexchange.com/questions/61996/github-continuous-integration-to-validate-a-csv-tsv-file">this thread</a>. I would like to set up a function that checks if an input file is a <code>.tsv</code> file. I realized that it's not sufficient to only use <code>file.endswith('.tsv')</code> as you can easily just save a comma-separated or semicolon-separated file with the <code>.tsv</code> ending.</p>
<p>Consider the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td></td>
<td>foo</td>
</tr>
<tr>
<td>3.42</td>
<td>bar</td>
<td>foo, bar</td>
</tr>
</tbody>
</table>
</div>
<p>In a text editor this would look something like this:</p>
<p><a href="https://i.sstatic.net/V03MQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V03MQ.png" alt="enter image description here" /></a></p>
<p>And from <a href="https://en.wikipedia.org/wiki/Tab-separated_values" rel="nofollow noreferrer">wikipedia</a> we know that:</p>
<blockquote>
<p>Records are separated by newlines, and values within a record are
separated by tab characters.</p>
</blockquote>
<p>So I think the MVP would be to check that each line ends with a newline character and that each line contains the same number of tab characters? I am not sure though whether this is correct and/or sufficient, or whether I have maybe overlooked other things that one needs to check for.</p>
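<p><strong>EDIT:</strong> A sketch of that MVP using the stdlib <code>csv</code> module (which also copes with quoted fields); this is a heuristic, not a watertight validator:</p>

```python
import csv
import tempfile

def looks_like_tsv(path):
    # Parse as tab-separated: every non-empty row must have the same width,
    # and that width must be > 1, otherwise tab is clearly not the delimiter.
    with open(path, newline="") as fh:
        widths = {len(row) for row in csv.reader(fh, delimiter="\t") if row}
    return len(widths) == 1 and widths.pop() > 1

# Demo: the question's table, tab-separated, with an embedded comma
with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False) as fh:
    fh.write("a\tb\tc\n1\t\tfoo\n3.42\tbar\tfoo, bar\n")
    tsv_path = fh.name

print(looks_like_tsv(tsv_path))   # True
```

<p>A comma- or semicolon-separated file saved with a <code>.tsv</code> extension parses as one column per line, so it fails the <code>&gt; 1</code> check; a ragged file fails the same-width check.</p>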
| <python><testing><spreadsheet> | 2023-10-20 14:28:54 | 1 | 1,317 | Johannes Wiesner |
77,331,690 | 1,128,336 | databricks throws OSError: [Errno 95] Operation not supported when appending to azure blob file | <p>I'm doing some file manipulation on Databricks, and it's failing with the following error when using <code>open('/mnt/file', 'a')</code>:</p>
<pre><code>OSError: [Errno 95] Operation not supported
</code></pre>
<p>However, it will succeed with <code>open('/mnt/file', 'w')</code>.
How can "a" not be supported on Azure Blob? Any workaround here?</p>
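<p><strong>EDIT:</strong> One workaround that seems viable (sketched below against a local temp path, since the mount can't be reproduced here): if the blob-backed mount only supports whole-file writes, "append" can be emulated by reading the old contents and rewriting the file with <code>'w'</code>:</p>

```python
import os
import tempfile

def append_via_rewrite(path, new_text):
    # Emulate "a" on storage that only supports whole-file writes:
    # read the existing contents, then rewrite everything with mode "w".
    try:
        with open(path) as f:
            old = f.read()
    except FileNotFoundError:
        old = ""
    with open(path, "w") as f:    # "w" works on the mount where "a" does not
        f.write(old + new_text)

demo = os.path.join(tempfile.mkdtemp(), "append_demo.txt")   # stand-in for the /mnt/... path
append_via_rewrite(demo, "first\n")
append_via_rewrite(demo, "second\n")
print(open(demo).read())
```

<p>This is O(file size) per append, so it is only reasonable for small files; for large ones, writing separate part files and concatenating later would scale better.</p>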
| <python><azure-blob-storage><databricks><azure-databricks> | 2023-10-20 14:18:57 | 2 | 4,945 | sam yi |
77,331,645 | 10,270,246 | Pretty printing is failing with lxml when I add an element to an empty element | <p>The pretty printing of lxml module fails when I add an element to an empty XML element :</p>
<p>Here is my python code :</p>
<pre class="lang-py prettyprint-override"><code>import lxml.etree

xml_parser = lxml.etree.XMLParser(remove_blank_text=True)
xml_tree = lxml.etree.parse("myFile.xml", xml_parser)
root = xml_tree.getroot()

for elem in root:
    if elem.tag == "apple":
        lxml.etree.SubElement(elem, "McIntosh")

tree = lxml.etree.ElementTree(root)
tree.write("output.xml", pretty_print=True, xml_declaration=True, encoding="utf-8")
</code></pre>
<p>Here is my XML input file :</p>
<pre class="lang-xml prettyprint-override"><code><fruits>
<apple>
</apple>
</fruits>
</code></pre>
<p>And here is my output XML file :</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<fruits>
<apple>
<McIntosh/></apple>
</fruits>
</code></pre>
<p>My code works if my XML element is not empty tho :</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='UTF-8'?>
<fruits>
<apple>
<golden/>
<McIntosh/>
</apple>
</fruits>
</code></pre>
<p>I will have to implement a custom xml formatter if I don't find a solution.</p>
<p>Does anyone know how to fix this problem ?</p>
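<p><strong>EDIT:</strong> One fix that seems to work (assuming lxml &gt;= 4.5, which ships <code>etree.indent()</code>): normalising the whitespace after the edit, so the stale whitespace-only text left inside <code>&lt;apple&gt;</code> no longer confuses the pretty printer:</p>

```python
import lxml.etree

# Inline stand-in for myFile.xml, including the whitespace-only text in <apple>
root = lxml.etree.fromstring("<fruits>\n  <apple>\n  </apple>\n</fruits>")

for elem in root:
    if elem.tag == "apple":
        lxml.etree.SubElement(elem, "McIntosh")

lxml.etree.indent(root, space="  ")   # rewrites whitespace-only text/tails
print(lxml.etree.tostring(root).decode())
```

<p>Because <code>indent()</code> only touches text and tails that are empty or whitespace-only, it should be safe on documents with real text content too.</p>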
| <python><lxml><pretty-print> | 2023-10-20 14:12:36 | 1 | 633 | Autechre |
77,331,633 | 3,608,591 | Pandas idxmax() returns 0 if no value matches condition? | <p>I'm trying to understand the behaviour of <code>idxmax()</code>.</p>
<p>I'm using <code>idxmax()</code> to get all rows below the first row meeting a condition in a dataframe, like this:</p>
<p><code>df = df[df['A'].gt(0).idxmax():]</code></p>
<p>I'm then checking if the resulting dataframe is empty. There's one unit test where I expect an empty dataframe (no row meets the condition), but it was never empty, so I looked into it.</p>
<p>I found that if the condition is NEVER met, idxmax() returns 0 (instead of, say, <code>None</code> or instead of throwing an exception I could catch) - which clashes with the case where the condition <strong>IS MET at row 0</strong>.</p>
<p>Here's an example of what I'm seeing:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data={'A':[0, 0, 0, 0]}) # no element where gt(0) is True
print("Truth values\n", df['A'].gt(0)) # checking the truth values of the Series
print("Index of first row where 'A' is at 0: ", df['A'].gt(0).idxmax())
</code></pre>
<p>The dataframe:</p>
<p><a href="https://i.sstatic.net/0prr0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0prr0.png" alt="First dataframe" /></a></p>
<p>The execution result:</p>
<pre><code>>>> Truth values
0 False
1 False
2 False
3 False
Index of first row where 'A' is at 0: 0 <--- ???
</code></pre>
<p>And with a different dataframe:</p>
<pre><code>df2 = pd.DataFrame(data={'A':[1, 0, 0, 0]})
print("Truth values\n", df2['A'].gt(0))
print("Index of first row where 'A' is at 0", df2['A'].gt(0).idxmax())
</code></pre>
<p>The dataframe:</p>
<p><a href="https://i.sstatic.net/Z7WQP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z7WQP.png" alt="enter image description here" /></a></p>
<p>The execution result:</p>
<pre><code>Truth values
0 True
1 False
2 False
3 False
Index of first row where 'A' is at 0: 0
</code></pre>
<p>So we end up with the same behaviour for two different inputs.</p>
<p>My current solution: summing over 'A' and checking if the sum is 0, and doing something different if that's the case - which seems a bit overkill.</p>
<p>Am I using <code>idxmax()</code> wrong ? Could someone shed some light on this, as the behaviour seems very counter-intuitive ?</p>
<p>Thanks :)</p>
| <python><pandas><dataframe> | 2023-10-20 14:11:22 | 1 | 317 | Xhattam |
77,331,482 | 5,212,614 | How can we plot a network graph, using pyvis, in a browser? | <p>When I run the code below, I get this message but no network graph is displayed.</p>
<p><strong>Warning: When cdn_resources is 'local' jupyter notebook has issues displaying graphics on chrome/safari. Use cdn_resources='in_line' or cdn_resources='remote' if you have issues viewing graphics in a notebook.
example.html</strong></p>
<pre><code>import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
data = [{'Circuit': 'html','Description':1, 'Duration':10.2, 'Source':'Westchester', 'Destination':'Davie', 'Picklist':1000, 'Postlist':50000.2},
{'Circuit': 'html', 'Description':2, 'Duration':12.1, 'Source':'Westchester', 'Destination':'Davie', 'Picklist':3000, 'Postlist':40000.1},
{'Circuit': 'html', 'Description':3, 'Duration':11.3, 'Source':'Westchester', 'Destination':'Davie', 'Picklist':7000, 'Postlist':50000.2},
{'Circuit': 'html', 'Description':3, 'Duration':8.1, 'Source':'West', 'Destination':'San Bernardino', 'Picklist':3000, 'Postlist':40000.0},
{'Circuit': '.net', 'Description':4, 'Duration':6.2, 'Source':'Queens', 'Destination':'San Bernardino', 'Picklist':5000, 'Postlist':6000.1},
{'Circuit': '.net', 'Description':3, 'Duration':20.1, 'Source':'Queens', 'Destination':'Los Angeles', 'Picklist':5000, 'Postlist':4000.1},
{'Circuit': '.net', 'Description':2, 'Duration':15.5, 'Source':'Brooklyn', 'Destination':'San Francisco', 'Picklist':5000, 'Postlist':9000.3},
{'Circuit': '.net', 'Description':4, 'Duration':7.7, 'Source':'Brooklyn', 'Destination':'Davie', 'Picklist':6000, 'Postlist':10000},
{'Circuit': '.net', 'Description':4, 'Duration':7.7, 'Source':'Los Angeles', 'Destination':'Westchester', 'Picklist':6000, 'Postlist':10000},
{'Circuit': '.net', 'Description':4, 'Duration':7.7, 'Source':'San Berdarnino', 'Destination':'Westchester', 'Picklist':6000, 'Postlist':10000}]
df = pd.DataFrame(data)
df
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
G = nx.from_pandas_edgelist(df, source='Source', target='Destination', edge_attr='Picklist')
from pyvis.network import Network
net = Network(notebook=True)
net.from_nx(G)
net.show('example.html')
</code></pre>
<p>If I add this code below, I get a static graph, but I really want the dynamic graph using the code above.</p>
<pre><code>G = nx.from_pandas_edgelist(pandas_df, source = "StorageLocation", target = "StorageType", edge_attr = "Value", create_using = nx.Graph())
plt.figure(figsize=(15,12))
nx.draw(G, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues)
plt.show()
</code></pre>
| <python><python-3.x><networkx><pyvis> | 2023-10-20 13:52:38 | 2 | 20,492 | ASH |
77,331,446 | 4,508,605 | How to provide date with specific format in folder path in Python | <p>I received a CSV file with the sysdate attached to the filename in the format <code>yyyy-mm-dd</code>. For example, today I received the file <code>User_test_2023-10-20.csv</code>.</p>
<p>I have the Python code below, where I want to attach this sysdate in the format <code>yyyy-mm-dd</code> to the filename, but I'm not sure how to do this.</p>
<pre><code>import codecs
import shutil

with codecs.open(r"C:\User_test_.csv", encoding="utf-16") as input_file:
    with codecs.open(r"C:\User_test_.csv", "w", encoding="utf-8") as output_file:
        shutil.copyfileobj(input_file, output_file)
</code></pre>
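<p><strong>EDIT</strong> (possible approach): build the <code>yyyy-mm-dd</code> part with <code>date.today().strftime()</code> and insert it into the path with an f-string. The output filename below is a hypothetical example, not from the original code:</p>

```python
import codecs
import shutil
from datetime import date

sysdate = date.today().strftime("%Y-%m-%d")        # e.g. '2023-10-20'
in_path = rf"C:\User_test_{sysdate}.csv"
out_path = rf"C:\User_test_utf8_{sysdate}.csv"     # hypothetical output name
print(in_path)

# The original conversion, now pointed at the dated paths:
# with codecs.open(in_path, encoding="utf-16") as input_file:
#     with codecs.open(out_path, "w", encoding="utf-8") as output_file:
#         shutil.copyfileobj(input_file, output_file)
```

<p>Note the original snippet reads and writes the same path; using two distinct paths as above avoids truncating the input before it is copied.</p>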
| <python><filenames><python-3.9><sysdate> | 2023-10-20 13:47:02 | 2 | 4,021 | Marcus |
77,331,415 | 11,049,287 | Remove non-alphanumeric characters within double quotes in a string - Python | <p>My input text looks like this:</p>
<pre><code>answer_result = I don’t think we’ll still be doing "prompt engineering" in five years "i.e." figuring out how to hack the prompt by " ," adding one magic word to the "" end that changes everything else. " What will always matter is the "1" quality of ideas.
</code></pre>
<p>I need to remove all the non-alphanumeric chars in double quotes. I need to retain "prompt engineering", "i.e.", and "1". All others need to be removed.</p>
<p>My expected output :</p>
<pre><code>I don’t think we’ll still be doing "prompt engineering" in five years "i.e." figuring out how to hack the prompt by adding one magic word to the end that changes everything else. What will always matter is the "1" quality of ideas.
</code></pre>
<p>I attempted the following code to get the location of double quotes:</p>
<pre><code>import re

double_quotes_locs = [m.start() for m in re.finditer('"', answer_result)]
to_be_deleted = []
single_quotes = []

for s in range(0, len(double_quotes_locs), 2):
    try:
        if (double_quotes_locs[s+1] - double_quotes_locs[s]) <= 1:
            to_be_deleted.append((double_quotes_locs[s], double_quotes_locs[s+1]))
            continue
        else:
            if re.match("^[A-Za-z0-9]+", answer_result[double_quotes_locs[s]+1:double_quotes_locs[s+1]]):
                continue
            else:
                to_be_deleted.append((double_quotes_locs[s], double_quotes_locs[s+1]))
    except IndexError:
        single_quotes.append(double_quotes_locs[s])
        break
</code></pre>
<p>Can anyone assist me with how to proceed further?</p>
<p>Or even fresh solution for this problem is welcome.</p>
<p>Thanks</p>
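<p><strong>EDIT:</strong> A sketch of a possible fresh solution, building on the idea above: scan quote pairs left to right, keep a pair whose content starts with an alphanumeric character, drop a pair whose content has no alphanumerics at all, and treat an opening quote followed by non-alphanumeric text as a stray quote to delete. The final double-space collapse is a simplification:</p>

```python
import re

def clean_quotes(text):
    out, i = [], 0
    while i < len(text):
        if text[i] != '"':
            out.append(text[i])
            i += 1
            continue
        j = text.find('"', i + 1)
        if j == -1:                              # unpaired trailing quote: drop it
            i += 1
            continue
        content = text[i + 1:j]
        if re.match(r"[A-Za-z0-9]", content):    # "prompt engineering", "i.e.", "1"
            out.append(text[i:j + 1])
            i = j + 1
        elif re.search(r"[A-Za-z0-9]", content): # stray quote before real text
            i += 1
        else:                                    # " ,", "": drop pair and content
            i = j + 1
    return re.sub(r" {2,}", " ", "".join(out))

answer_result = ('I don\'t think we\'ll still be doing "prompt engineering" '
                 'in five years "i.e." figuring out how to hack the prompt '
                 'by " ," adding one magic word to the "" end that changes '
                 'everything else. " What will always matter is the "1" '
                 'quality of ideas.')
print(clean_quotes(answer_result))
```

<p>On the sample above this yields the expected output; heavily nested or unbalanced quoting would need extra rules.</p>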
| <python><python-3.x><regex> | 2023-10-20 13:42:38 | 1 | 838 | usr_lal123 |
77,331,171 | 4,715,957 | How to access NetBox data about Virtualization / Virtual Machines via Python | <p>This is a Python script to create a CSV from NetBox data about Devices:</p>
<pre><code>devices = nb.dcim.devices.all()

csv_file_path = 'netbox_devices.csv'
with open(csv_file_path, mode='w', newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow(['Name', 'Manufacturer', 'Model', 'Serial Number', 'Asset Tag'])

    for device in devices:
        csv_writer.writerow([device.name, device.device_type.manufacturer.name, device.device_type.model, device.serial, device.asset_tag])
</code></pre>
<p>How could I write a similar script to do the same, but to list Virtual Machines?</p>
<p>Replacing the first line with</p>
<pre><code>vms = nb.virtualization.virtual-machines.all()
</code></pre>
<p>returns an error</p>
<blockquote>
<p>NameError: name 'machines' is not defined</p>
</blockquote>
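<p><strong>EDIT</strong> (root cause, easy to verify): a hyphen is not legal in a Python attribute name, so <code>virtual-machines</code> parses as the subtraction <code>virtual - machines</code>, and Python then complains that the bare name <code>machines</code> is undefined. The pynetbox endpoint is therefore spelled with an underscore, i.e. <code>nb.virtualization.virtual_machines.all()</code>. The snippet below reproduces the parse behaviour without a NetBox server (using a stand-in class for the lazy attribute lookup):</p>

```python
class LazyEndpoint:
    """Stand-in for pynetbox's lazy endpoint attribute lookup."""
    def __getattr__(self, name):
        return LazyEndpoint()

nb = LazyEndpoint()

try:
    eval("nb.virtualization.virtual-machines.all()")
except NameError as exc:
    print(exc)        # name 'machines' is not defined

nb.virtualization.virtual_machines    # underscore spelling resolves fine
```

<p>So the CSV script should work once the attribute is <code>virtual_machines</code>, with the column list changed to VM fields (e.g. name, cluster, status).</p>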
| <python><netbox> | 2023-10-20 13:09:37 | 1 | 2,315 | dr_ |
77,331,168 | 52,074 | In the python functions `all` and `any`, what does the second undocumented argument do? | <h3>Background</h3>
<p>The output of <code>help(all)</code> is:</p>
<pre><code>Help on built-in function all in module builtins:

all(iterable, /)
    Return True if bool(x) is True for all values x in the iterable.

    If the iterable is empty, return True.
</code></pre>
<p>I went to look up the official python docs and the same function only has listed the first argument.</p>
<h3>Question: What does the second undocumented argument do?</h3>
<p>I think it is strange that the official docs do not list the second argument but the <code>help()</code> for the function does list the second argument. I also see the same second argument on other functions like <code>any()</code>. Also, the second argument does not look like valid syntax, because you usually cannot have a variable named <code>/</code>: variable names must consist of alphanumeric/underscore characters.</p>
| <python><built-in> | 2023-10-20 13:09:05 | 2 | 19,456 | Trevor Boyd Smith |
77,331,073 | 865,169 | How to make pandas.read_sql consistently return `NULL` values as `NaN`? | <p>Let us say I have a table in a database containing data as follows:</p>
<pre><code>date value
2023-10-01 08:00:00 88.3
2023-10-02 08:00:00 75.4
2023-10-03 08:00:00 NULL
</code></pre>
<p>When I read data from this table using <code>pandas.read_sql()</code>, if I read all of the above table, the resulting data frame will have 'value' as a float column with the <code>NULL</code> value as a <code>NaN</code> value.<br />
However, if I read from the table with for example a <code>WHERE date BETWEEN '2023-10-03 00:00:00' AND '2023-10-04 00:00:00'</code> condition so that the result only contains the third row, 'value' instead becomes an object column with a <code>None</code> value. This causes me trouble in a test where I have to expect either <code>NaN</code> or <code>None</code> values.<br />
Is there a way to force <code>pandas.read_sql()</code> to consistently return a data frame with 'value' as a float column (with <code>NaN</code> values)?</p>
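<p><strong>EDIT:</strong> A sketch of the coercion I'm considering (SQLite in-memory stand-in for the real database): forcing the column to numeric after the read turns <code>None</code> into <code>NaN</code> and keeps the dtype stable regardless of the <code>WHERE</code> clause:</p>

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (date TEXT, value REAL)")
conn.execute("INSERT INTO t VALUES ('2023-10-03 08:00:00', NULL)")
conn.commit()

# Only the NULL row comes back, so pandas has no numbers to infer a dtype from
df = pd.read_sql("SELECT * FROM t", conn)

# Coerce explicitly: None -> NaN, dtype becomes float64
df["value"] = pd.to_numeric(df["value"])
print(df["value"].dtype)   # float64
```

<p>Applying the same <code>pd.to_numeric()</code> (or <code>.astype(float)</code>) after every read makes the test expectation a single case: always <code>NaN</code>.</p>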
| <python><pandas> | 2023-10-20 12:52:10 | 1 | 1,372 | Thomas Arildsen |
77,330,978 | 3,749,896 | Alias for a typeable Union | <p>I am looking for a way to create a type alias for a <code>Union</code> that is typeable.</p>
<p>In my opinion, there is no suitable default type in the <code>typing</code> module.</p>
<ul>
<li><code>typing.Iterable</code> is an <code>iterable</code>, which could be a <code>dict</code>, but it shouldn't be.</li>
<li><code>typing.Sequence</code> is a <code>sequence</code>, which supports access by integer indices, but this doesn't include <code>set</code>s.</li>
</ul>
<hr />
<p>Imagine I have these variables (which could of course also be parameters of a method):</p>
<pre class="lang-py prettyprint-override"><code>x : list[str] | tuple[str] | set[str] | frozenset[str]
y : list[str] | tuple[str] | set[str] | frozenset[str]
z : list[int] | tuple[int] | set[int] | frozenset[int]
</code></pre>
<p>This is basically always the same.</p>
<p>Therefore I would like to create a type alias for it. Something like this:</p>
<pre class="lang-py prettyprint-override"><code>my_type = list | tuple | set | frozenset
</code></pre>
<p>so that I can annotate them, like:</p>
<pre class="lang-py prettyprint-override"><code>x: my_type[str] = ["foo", "bar"]
y: my_type[str] = ["baz", "bar"]
z: my_type[int] = [1, 7, 42]
</code></pre>
<p>But this gives me this error:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: There are no type variables left in list | tuple | set | frozenset
</code></pre>
<p>When I define my type with <code>Union</code> (<code>my_type = typing.Union[list, tuple, set, frozenset]</code>) instead of the <code>|</code> shorthand, I get:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/opt/pycharm-professional/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 4, in <module>
File "/usr/lib/python3.10/typing.py", line 312, in inner
return func(*args, **kwds)
File "/usr/lib/python3.10/typing.py", line 1058, in __getitem__
_check_generic(self, params, len(self.__parameters__))
File "/usr/lib/python3.10/typing.py", line 228, in _check_generic
raise TypeError(f"{cls} is not a generic class")
TypeError: typing.Union[list, tuple, set, frozenset] is not a generic class
</code></pre>
<p>How can I create my type alias to make this work? It should work with python>=3.10.</p>
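<p>For reference, a sketch that at least avoids the runtime error on Python &gt;= 3.10 by parameterizing each union member with a <code>TypeVar</code> (the alias then stays generic; static type-checker support may vary):</p>

```python
from typing import TypeVar, Union

T = TypeVar("T")

# Each member carries the type variable, so the alias itself remains
# generic and my_type[str] substitutes T in every member.
my_type = Union[list[T], tuple[T], set[T], frozenset[T]]

x: my_type[str] = ["foo", "bar"]
z: my_type[int] = [1, 7, 42]

print(my_type[str])
```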
| <python><python-typing> | 2023-10-20 12:37:25 | 1 | 3,090 | Sven Eberth |
77,330,931 | 1,214,547 | Python frozen dataclass with an optional parameter but non-optional field type | <p>I have a dataclass such as this one:</p>
<pre class="lang-py prettyprint-override"><code>@dataclasses.dataclass(frozen=True)
class MyClass:
my_field: str
other_field: str
</code></pre>
<p>and I have a complicated function for computing a default value for <code>my_field</code> that depends on <code>other_field</code>:</p>
<pre class="lang-py prettyprint-override"><code>def get_default_value_for_my_field(other_field: str) -> str:
... # lots of code
</code></pre>
<p>Is there a way to:</p>
<ol>
<li>Call <code>get_default_value_for_my_field(other_field)</code> and initialize <code>my_field</code> from its result if no value for <code>my_field</code> is passed at initialization, otherwise initialize <code>my_field</code> from the passed value;</li>
<li>Keep <code>MyClass</code> frozen;</li>
<li>Convince <code>pytype</code> that <code>MyClass.my_field</code> has type <code>str</code> rather than <code>str | None</code>;</li>
<li>Convince <code>pytype</code> that <code>MyClass.__init__()</code> has a parameter <code>my_field: str | None = None</code></li>
</ol>
<p>using dataclasses, or am I better off switching to a plain class?</p>
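<p>For reference, a runtime sketch of points 1 and 2 using <code>__post_init__</code> with <code>object.__setattr__</code> (fields reordered so the defaulted one comes last; as written the declared type stays <code>Optional[str]</code>, so it does not by itself satisfy points 3 and 4):</p>

```python
import dataclasses
from typing import Optional

def get_default_value_for_my_field(other_field: str) -> str:
    # Stand-in for the complicated default computation
    return other_field.upper()

@dataclasses.dataclass(frozen=True)
class MyClass:
    other_field: str
    my_field: Optional[str] = None

    def __post_init__(self) -> None:
        if self.my_field is None:
            # A frozen dataclass blocks normal assignment, so bypass
            # __setattr__ the same way dataclasses itself does internally
            object.__setattr__(
                self, "my_field", get_default_value_for_my_field(self.other_field)
            )

print(MyClass("abc").my_field)                  # ABC
print(MyClass("abc", my_field="xyz").my_field)  # xyz
```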
| <python><python-dataclasses> | 2023-10-20 12:29:53 | 2 | 893 | Pastafarianist |
77,330,445 | 4,143,790 | Python 2D array is not initialising properly | <p>I am new to Python and trying to initialise a 2D array like this (pasting only the relevant code extracts):</p>
<pre><code> ary = [[0]*cols]*rows
</code></pre>
<p>for insertion, I am doing</p>
<pre><code> def Insert(self, val):
ary[0][2] = val
</code></pre>
<p>After that, I insert a value like this: <code>myS.Insert(15)</code></p>
<p>When I try to print the array using a loop, it should print the value 15 only at index (0, 2). But that value is printed at every row index, e.g. at (1, 2), then (2, 2), then (3, 2), and so on.</p>
<pre><code>for r in range(self.rows):
for c in range(self.cols):
#print (self.sheet[r][c])
self.col = c
print ('value at row index ', r, ' col index ', c , ' is ', self.sheet[r][c])
</code></pre>
<p>Output is</p>
<pre><code>value at row index 0 col index 0 is 0
value at row index 0 col index 1 is 0
value at row index 0 col index 2 is 15
value at row index 0 col index 3 is 20
value at row index 0 col index 4 is 0
value at row index 1 col index 0 is 0
value at row index 1 col index 1 is 0
value at row index 1 col index 2 is 15
value at row index 1 col index 3 is 20
value at row index 1 col index 4 is 0
</code></pre>
<p>Any ideas what I am doing wrong? is it the assignment or the loop?</p>
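<p>The symptom can be reproduced in a few lines — <code>[[0]*cols]*rows</code> repeats a reference to one inner list, whereas a comprehension builds an independent list per row (standalone sketch):</p>

```python
rows, cols = 2, 5

# The outer * copies the reference, not the list: every row is the
# same object, so a write through one row shows up in all of them.
aliased = [[0] * cols] * rows
aliased[0][2] = 15
print(aliased)        # [[0, 0, 15, 0, 0], [0, 0, 15, 0, 0]]

# A comprehension evaluates [0] * cols once per row.
independent = [[0] * cols for _ in range(rows)]
independent[0][2] = 15
print(independent)    # [[0, 0, 15, 0, 0], [0, 0, 0, 0, 0]]
```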
| <python><python-3.x> | 2023-10-20 11:09:06 | 2 | 427 | sam |
77,330,184 | 19,325,656 | FastAPI run heavy compute task asynchronously | <p>Hi all, I have an issue with running a heavy compute task on a FastAPI endpoint.</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/run/")
async def run_cases(data: List[schemas.TestID], db: orm.Session = Depends(services.get_db)):
try:
CONTAINER.create()
tcm = TestManager(CONTAINER, data) # this is a Class that's does all my data parsing
case_manager_data = tcm.run()
for x in case_manager_data.result():
services.create_result(db=db, result=schemas.Result(**x))
except Exception as ex:
raise ex
return True
</code></pre>
<p>This is my "stock" endpoint; the data I pass to it is computed by the <code>TestManager</code> class.</p>
<p>However, when I'm sending this data, the whole app stops until I get the result from <em>tcm</em>. I can't reach any other endpoint, etc. (all my endpoints are created with <strong>async def</strong>).</p>
<p>So I've tried</p>
<pre class="lang-py prettyprint-override"><code># attempt 1
from starlette.concurrency import run_in_threadpool
data = await run_in_threadpool(tcm.run())

# attempt 2
import asyncio
loop = asyncio.get_running_loop()
data = await loop.run_in_executor(None, tcm.run())

# attempt 3
from fastapi.concurrency import run_in_threadpool
await run_in_threadpool(tcm.run())
</code></pre>
<p>but every time the behavior stays the same: the application is blocked while <code>tcm.run()</code> is executing.</p>
<p>What am I doing wrong?</p>
<p>Posts that I've been researching:
<a href="https://stackoverflow.com/questions/67599119/fastapi-asynchronous-background-tasks-blocks-other-requests">first</a>, <a href="https://stackoverflow.com/questions/63169865/how-to-do-multiprocessing-in-fastapi/63171013#63171013">second</a>, <a href="https://stackoverflow.com/questions/71516140/fastapi-runs-api-calls-in-serial-instead-of-parallel-fashion">third</a></p>
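<p>One detail worth checking in the attempts above: <code>run_in_threadpool(tcm.run())</code> and <code>run_in_executor(None, tcm.run())</code> call <code>tcm.run()</code> right on the event loop and only then hand over its result — the callable should be passed uncalled. A standalone asyncio sketch of the difference (a <code>sleep</code> stands in for the heavy computation):</p>

```python
import asyncio
import time

def heavy_task() -> str:
    time.sleep(0.3)  # stands in for the blocking computation
    return "done"

async def ticker(events: list) -> None:
    # Work the event loop should keep servicing while heavy_task runs
    for _ in range(3):
        events.append("tick")
        await asyncio.sleep(0.05)

async def main() -> list:
    events: list = []
    loop = asyncio.get_running_loop()
    # Pass the callable itself (heavy_task), not its result (heavy_task()):
    # calling it here would block the loop before the executor ever sees it.
    result, _ = await asyncio.gather(
        loop.run_in_executor(None, heavy_task),
        ticker(events),
    )
    events.append(result)
    return events

print(asyncio.run(main()))  # ['tick', 'tick', 'tick', 'done']
```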
| <python><multithreading><asynchronous><concurrency><fastapi> | 2023-10-20 10:26:50 | 1 | 471 | rafaelHTML |