| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,660,261
| 12,946,401
|
Appropriate applications for the as_strided function
|
<p>I am practicing on figuring out how to use the as_strided function in numpy. I began with the following example of my own, where I generate 5 images of 3 x 3: the first image is full of 1s, the next full of 2s, and so on up to 5. If I want to convert the (5, 3, 3) volume so that each image occupies a single row, I can do the following, which works:</p>
<pre><code>import numpy as np
from numpy.lib.stride_tricks import as_strided
array_3d = np.empty((5, 3, 3))
for i in range(5):
    array_3d[i] = np.full((3, 3), i+1)
print(array_3d.itemsize)
array_2d = as_strided(array_3d, shape=(5, 9), strides=(8*9, 8))
print(array_2d)
</code></pre>
<p>This outputs the following:</p>
<pre><code>8
[[1. 1. 1. 1. 1. 1. 1. 1. 1.]
[2. 2. 2. 2. 2. 2. 2. 2. 2.]
[3. 3. 3. 3. 3. 3. 3. 3. 3.]
[4. 4. 4. 4. 4. 4. 4. 4. 4.]
[5. 5. 5. 5. 5. 5. 5. 5. 5.]]
</code></pre>
<p>Now I wanted to try a harder example: placing the (3, 3) images side by side in the following manner, using the as_strided function:</p>
<pre><code>[[1. 1. 1. 2. 2. 2. 3. 3. 3. 4. 4. 4. 5. 5. 5.]
[1. 1. 1. 2. 2. 2. 3. 3. 3. 4. 4. 4. 5. 5. 5.]
[1. 1. 1. 2. 2. 2. 3. 3. 3. 4. 4. 4. 5. 5. 5.]]
</code></pre>
<p>However, I cannot think of a way to do this using only the as_strided function. So I am wondering whether this is simply not possible with as_strided alone, and more generally, how I can tell when it is appropriate to use the as_strided function, apart from sliding-window operations.</p>
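A sketch of the stride arithmetic involved (assuming float64, so itemsize 8): the side-by-side arrangement is expressible as a 3-D strided view, equivalent to a transpose, but the flat (3, 15) layout is not, because its column step would not be constant:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Rebuild the (5, 3, 3) volume: image i is full of i+1.
array_3d = np.arange(1.0, 6.0).reshape(5, 1, 1) * np.ones((5, 3, 3))
s = array_3d.itemsize  # 8 bytes for float64

# Original strides are (9*s, 3*s, s). A view [r, i, j] -> array_3d[i, r, j]
# has byte offsets linear in (r, i, j), so as_strided can express it:
side_by_side_3d = as_strided(array_3d, shape=(3, 5, 3), strides=(3 * s, 9 * s, s))

# This is just a transpose view. Flattening it to (3, 15) is not possible
# with a single 2-D as_strided call: the column step would have to alternate
# between s (inside an image) and 7*s (jumping to the next image).
# reshape therefore makes a copy:
side_by_side = side_by_side_3d.reshape(3, 15)
print(side_by_side)
```

Since the required 2-D strides are not constant, a copy (via `reshape` or `transpose(...).reshape(...)`) is unavoidable here.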
|
<python><arrays><numpy><memory>
|
2024-06-24 01:33:52
| 3
| 939
|
Jeff Boker
|
78,660,242
| 18,346,591
|
How do I access FastAPI URL from my docker application?
|
<p>I have a front end react application hosted on a subdomain (<a href="http://www.admin.example.com" rel="nofollow noreferrer">www.admin.example.com</a>). I also have a backend application that uses FastAPI to communicate with the front-end react application. This backend application is deployed using docker. The front end is built using <code>npm run build</code> command and deployed manually i.e build files are copied to the subdomain's root directory.</p>
<p>I am however having issues accessing this FastAPI backend application which is hosted using docker. I have tried using the IP address of my server + the port (<code>https://server_ip_address:8000</code>) but I do not get the response set on the <code>/</code> path from my FastAPI backend. That url (https://server_ip_address:8000), in fact, doesn't load a page. I get the error: <code>This site can’t be reached</code>.</p>
<p>But when I access <code>https://server_ip_address</code>, I get a page loaded talking about nginx. I checked the docker logs and there's no error or response from the FastAPI application. I was told docker has its own IP address too, but I tried that and it doesn't work. I also can't access a URL such as <code>https://server_ip_address:8000</code> from my front-end application, or I get a CORS policy error. The overall question is: how do I access my FastAPI application listening in docker?</p>
<p>Docker Compose File:</p>
<pre><code>services:
  db:
    build: .
    ports:
      - "8000:8000"
    expose:
      - "8000"
    secrets:
      - MONGODB_URI
      - LOGIN_SECRET_KEY

secrets:
  LOGIN_SECRET_KEY:
    file: admin_login_secret_key.txt
  MONGODB_URI:
    file: mongo_db_uri.txt
</code></pre>
<p>Docker File:</p>
<pre><code># Use an official Python runtime as a base image
FROM python:3.9
# Set the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the content of the local src directory to the working directory
COPY . .
# Specify the command to run on container start
CMD ["uvicorn", "db:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>Server nginx.conf File (Only the server setting)</p>
<pre><code>server {
    listen 8080;
    server_name admin.example.com;

    location /backend/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
</code></pre>
<p>I tried accessing like this <code>https://admin.example.com:8080/backend</code> but it doesn't route to <code>http://127.0.0.1:8000</code>.</p>
|
<python><linux><docker><docker-compose><fastapi>
|
2024-06-24 01:23:28
| 1
| 662
|
Alexander Obidiegwu
|
78,660,203
| 9,582,542
|
Local Azure Function Deployment
|
<p>I am very green to Azure Functions, but I have managed to deploy a function and have it run successfully when the output is created in the current directory. No issues up to here; everything is fine.</p>
<p>The next thing I would like to do is save the output to a specific folder. This causes an error, which I will list shortly. What I would also like to know is why, once I get this error, even going back to the code that I have verified works now produces the same error. Yes, I made sure I saved the file before I ran the <code>func start -verbose</code> command.</p>
<p>Ok here is the working code:</p>
<pre><code>#Works NO issues
import azure.functions as func
import datetime
import logging
import subprocess
import os

app = func.FunctionApp()

@app.timer_trigger(schedule="0 * * * * *", arg_name="myTimer", run_on_startup=True, use_monitor=False)
def ScrapyTimerTrigger(myTimer: func.TimerRequest) -> None:
    if myTimer.past_due:
        logging.info('The timer is past due!')

    logging.info('Python timer trigger function executed.')

    week = 'week'  # Set your desired week
    year = '2023'  # Set your desired year
    game = 'hall-of-fame-weekend'  # Set your desired game
    local_file_name = f'{year}_{week}_{game}.json'
    # local_file_path = os.path.join(os.getcwd(), local_file_name)

    scrapy_command = [
        'scrapy', 'crawl', 'NFLWeatherData',
        '-a', f'Week={week}', '-a', f'Year={year}', '-a', f'Game={game}',
        '-o', local_file_name  # local_file_path
    ]

    # Update this path to the correct directory
    process = subprocess.Popen(scrapy_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=r"N:\github\SportsData\NFLWeather\FunctionLocal\NFLWeather")
    stdout, stderr = process.communicate()
    if process.returncode != 0:
        logging.error(stderr)
    else:
        logging.info(stdout)
</code></pre>
<p>That produces the file with no issues.</p>
<p>Here is the code modified to write to another location:</p>
<pre><code>import azure.functions as func
import datetime
import logging
import subprocess
import os

app = func.FunctionApp()

@app.timer_trigger(schedule="0 * * * * *", arg_name="myTimer", run_on_startup=True,
                   use_monitor=False)
def ScrapyTimerTrigger(myTimer: func.TimerRequest) -> None:
    if myTimer.past_due:
        logging.info('The timer is past due!')

    logging.info('Python timer trigger function executed.')

    week = 'week'  # Set your desired week
    year = '2023'  # Set your desired year
    game = 'hall-of-fame-weekend'  # Set your desired game
    local_file_name = f'{year}_{week}_{game}.json'
    local_file_path = os.path.join(r"N:\\github\\SportsData\\NFLWeather\\Data_Gathering", year, local_file_name)

    # Create the directory if it does not exist
    directory_path = os.path.join(r"N:\\github\\SportsData\\NFLWeather\\Data_Gathering", year)
    os.makedirs(directory_path, exist_ok=True)

    scrapy_command = [
        'scrapy', 'crawl', 'NFLWeatherData',
        '-a', f'Week={week}', '-a', f'Year={year}', '-a', f'Game={game}',
        '-o', local_file_path
    ]

    # Update this path to the correct directory
    process = subprocess.Popen(scrapy_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=r"N:\github\SportsData\NFLWeather\FunctionLocal\NFLWeather")
    stdout, stderr = process.communicate()
    if process.returncode != 0:
        logging.error(stderr)
    else:
        logging.info(stdout)
</code></pre>
<p>Here is the error that this script produces:</p>
<pre><code> [2024-06-24T00:41:45.459Z] Received WorkerInitRequest, python version 3.11.9 | packaged by Anaconda, Inc. | (main, Apr 19 2024, 16:40:41) [MSC v.1916 64 bit (AMD64)], worker version 4.28.1, request ID 583ffc8a-a82c-49d4-b586-a0ded194964c. App Settings state: PYTHON_THREADPOOL_THREAD_COUNT: 1000 | PYTHON_ENABLE_WORKER_EXTENSIONS: False. To enable debug level logging, please refer to https://aka.ms/python-enable-debug-logging
[2024-06-24T00:41:45.767Z] Received WorkerMetadataRequest, request ID 583ffc8a-a82c-49d4-b586-a0ded194964c, function_path: N:\github\SportsData\NFLWeather\FunctionLocal\NFLWeather\NFLWeatherFunctionApp\function_app.py
[2024-06-24T00:41:45.797Z] Worker failed to index functions
[2024-06-24T00:41:45.799Z] Result: Failure
Exception: SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 1269-1270: malformed \N character escape (function_app.py, line 43)
Stack:   File "M:\Microsoft\Azure Functions Core Tools\workers\python\3.11\WINDOWS\X64\azure_functions_worker\dispatcher.py", line 413, in _handle__functions_metadata_request
           self.load_function_metadata(
         File "M:\Microsoft\Azure Functions Core Tools\workers\python\3.11\WINDOWS\X64\azure_functions_worker\dispatcher.py", line 393, in load_function_metadata
           self.index_functions(function_path, function_app_directory)) \
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         File "M:\Microsoft\Azure Functions Core Tools\workers\python\3.11\WINDOWS\X64\azure_functions_worker\dispatcher.py", line 765, in index_functions
           indexed_functions = loader.index_function_app(function_path)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         File "M:\Microsoft\Azure Functions Core Tools\workers\python\3.11\WINDOWS\X64\azure_functions_worker\utils\wrappers.py", line 44, in call
           return func(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^
         File "M:\Microsoft\Azure Functions Core Tools\workers\python\3.11\WINDOWS\X64\azure_functions_worker\loader.py", line 238, in index_function_app
           imported_module = importlib.import_module(module_name)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         File "M:\Miniconda3\envs\Azure\Lib\importlib\__init__.py", line 126, in import_module
           return _bootstrap._gcd_import(name[level:], package, level)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
         File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
         File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
         File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
         File "<frozen importlib._bootstrap_external>", line 936, in exec_module
         File "<frozen importlib._bootstrap_external>", line 1074, in get_code
         File "<frozen importlib._bootstrap_external>", line 1004, in source_to_code
         File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
</code></pre>
<p>Here is what I don't understand, and what I would like to know: why does this happen, and what do I have to do so that the working file works again? Is there some cache or something that has to be cleared in order for the working file to work again? That would allow me to change the path location to something that works.</p>
<p>Also, these are some of the variations I have tried:</p>
<pre><code> # local_file_path = os.path.join("N:\\github\\SportsData\\NFLWeather\\Data_Gathering\\2024", local_file_name)
# local_file_path = os.path.join(r"N:\\github\\SportsData\\NFLWeather\\Data_Gathering\\2024", local_file_name)
# local_file_path = Path("N:/github/SportsData/NFLWeather/Data_Gathering/2024") / local_file_name
# local_file_path = Path(r"N:\github\SportsData\NFLWeather\Data_Gathering\2024") / local_file_name
</code></pre>
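For context, the <code>\N</code> escape behaviour behind that SyntaxError can be reproduced in isolation (a minimal sketch; the path here is illustrative, not taken from the project):

```python
# In a regular (non-raw) string literal, "\N" must begin a named escape
# such as "\N{DEGREE SIGN}", so a Windows path containing "\N..." fails
# at compile time with "malformed \N character escape".
bad_source = 'p = "C:\\NFLWeather"'    # compiled source text: p = "C:\NFLWeather"
try:
    compile(bad_source, "<example>", "exec")
    raised = False
except SyntaxError:
    raised = True

# A raw string leaves the backslash alone and compiles fine.
good_source = 'p = r"C:\\NFLWeather"'  # compiled source text: p = r"C:\NFLWeather"
compile(good_source, "<example>", "exec")
print(raised)
```

Note that the error is raised at import/compile time for the whole file, which is why a module containing one such literal fails before any function runs.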
|
<python><azure-functions>
|
2024-06-24 00:56:07
| 1
| 690
|
Leo Torres
|
78,660,117
| 395,857
|
How can I export a tokenizer from Huggingface transformers to CoreML?
|
<p>I load a tokenizer and a Bert model from Huggingface transformers, and export the Bert model to CoreML:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
# Load the model
model = AutoModelForTokenClassification.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
# Example usage
text = "Hugging Face is creating a tool that democratizes AI."
inputs = tokenizer(text, return_tensors="pt")
</code></pre>
<p>Requirements:</p>
<pre><code>pip install transformers torch
</code></pre>
<p>How can I export a tokenizer from Huggingface transformers to CoreML?</p>
|
<python><huggingface-transformers><coreml>
|
2024-06-23 23:59:46
| 1
| 84,585
|
Franck Dernoncourt
|
78,660,102
| 4,139,698
|
How to iterate through a python list to get each additional list element?
|
<p>I have a python list that I need to iterate through, collecting certain consecutive elements.</p>
<p>I'm trying to make a string from every other consecutive element in my list named data</p>
<p><code>data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ...]</code></p>
<p>I need to turn this into a string that reads like...</p>
<p><code>data_string = "1, 3, 6, 10"</code></p>
<p>The list <code>data</code> can have any number of elements and may change, but the string should always consist of elements 1, 3, 6, 10, 15, 21, and so on.</p>
<p>I've tried running the following function...</p>
<pre><code>def create_lists(values_list):
    result = [[]]
    for value in values_list[1:]:
        new_lists = [curr_list + [value] for curr_list in result]
        result.extend(new_lists)
    return result

result = create_lists(data)
print(result)
</code></pre>
<p>This only prints out separate lists with individual elements.</p>
<p>I'm new to python and having trouble. Any ideas?</p>
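Reading the desired output as taking an element and then skipping one more element each time (an assumption about the intended pattern), a minimal sketch:

```python
from itertools import accumulate, count, takewhile

data = list(range(1, 14))  # [1, 2, ..., 13]

# Indices 0, 2, 5, 9, ... : the gap between picks grows by one each step.
# These are the triangular numbers 1, 3, 6, 10, ... minus one.
triangular = accumulate(count(1))  # 1, 3, 6, 10, 15, ...
indices = takewhile(lambda i: i < len(data), (t - 1 for t in triangular))
data_string = ", ".join(str(data[i]) for i in indices)
print(data_string)
```

Because `data` here happens to be 1, 2, 3, …, the picked values equal the triangular numbers themselves; on a different list, the same indices would select different values.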
|
<python><loops>
|
2024-06-23 23:45:18
| 2
| 9,199
|
Charles Jr
|
78,659,905
| 893,254
|
Efficient (vectorized) method of converting a Pandas DataFrame categorical column back into original values
|
<p>After a quite lengthy series of operations on a Pandas DataFrame I find myself with a categorical type column.</p>
<p>This has been created as a result of a <code>pandas.cut()</code> operation.</p>
<p>I would like to convert the column back into "values" rather than the current categorical encoding.</p>
<p>This is how I am currently performing that conversion:</p>
<pre><code>df['datetime_de_cat'] = (
    df['datetime_bin'].cat.codes.map(
        lambda code: df['datetime_bin'].cat.categories[code]
    )
)
</code></pre>
<p>Here's how this code works:</p>
<p><code>df['datetime_bin']</code> is a column of "binned" data values. This is created by <code>pandas.cut()</code>. The column is a categorical column, meaning that the data is encoded using integer values, and these integers can be used to map to the original data values.</p>
<p><code>df['datetime_bin'].cat.codes</code> is a Pandas Series containing the above mentioned integer codes.</p>
<p><code>df['datetime_bin'].cat.categories</code> is a <code>datetime.DatetimeIndex</code>. It can be used to map between the integer codes and original values.</p>
<p>There is some additional complexity introduced by the fact that some other intermediate operations are performed. For example, a Pandas <code>cut()</code> operation returns intervals, and I have already converted these by taking only the "right" part of the interval. (This changes an <code>interval</code> of two <code>datetime</code>s into a single <code>datetime</code> type.)</p>
<p>The code I am using, which is based on <code>.map()</code> is very slow. This is perhaps unsurprising. I suspect this operation is not particularly well vectorized. The lambda function will be interpreted by the Python interpreter at the interpreter level, rather than running as some more optimized C code.</p>
<p>Is there a way to accelerate this operation? It seems like converting a categorical column back into a regular non-categorical encoded column should be a fairly common operation with some faster implementation.</p>
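For comparison, the integer-code lookup above can be done with a single vectorized positional lookup on the categories index instead of a per-row lambda (a sketch on toy data, assuming the column comes from <code>pandas.cut</code>):

```python
import pandas as pd

# Toy stand-in for the question's data: datetimes binned with pandas.cut.
ts = pd.Series(pd.date_range("2024-01-01", periods=6, freq="h"))
binned = pd.cut(ts, bins=3)  # categorical column of Intervals

# The slow per-row version from above:
slow = binned.cat.codes.map(lambda code: binned.cat.categories[code])

# Vectorized equivalent: one .take() over the whole codes array.
fast = pd.Series(binned.cat.categories.take(binned.cat.codes.to_numpy()),
                 index=binned.index)
print(list(fast) == list(slow))
```

The `take` call stays in optimized C code, so it avoids the Python-level function call per element.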
<hr />
<p>After some testing I have found that this works:</p>
<pre><code>df['datetime_bin'].astype(str)
</code></pre>
<p>On the other hand:</p>
<pre><code>df['datetime_bin'].astype(pandas.Timestamp)
df['datetime_bin'].astype(datetime)
</code></pre>
<p>do not work. These fail with the error <code>class datetime.datetime is not understood</code> (etc)</p>
<p>The conversion to string is very fast, which is not surprising, since this will be fully vectorized.</p>
<p>Is there a way to convert to a <code>datetime</code> type instead of <code>str</code>?</p>
<p>doc: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#regaining-original-data" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#regaining-original-data</a></p>
<hr />
<h4>Some information about the data I am working with</h4>
<pre><code>df.dtypes
datetime                    datetime64[ns, UTC]
datetime_bin                           category
datetime_bin_right                     category
datetime_bin_right_decat    datetime64[ns, UTC]
dtype: object
</code></pre>
<pre><code>df['datetime_bin_right'] = df['datetime_bin'].map(lambda interval: interval.right)
# thanks to suggestion in the comments
# but: why doesn't this work using a `type` as the argument
# (eg: `datetime.datetime`, `pandas.Timestamp`)
# rather than `str`='datetime64[ns, UTC]'
df['datetime_bin_right_decat'] = df['datetime_bin_right'].astype('datetime64[ns, UTC]')
</code></pre>
|
<python><pandas>
|
2024-06-23 21:36:17
| 1
| 18,579
|
user2138149
|
78,659,844
| 50,385
|
async version of Context.run for context vars in python asyncio?
|
<p>Python has the <a href="https://docs.python.org/3/library/contextvars.html#contextvars.copy_context" rel="nofollow noreferrer"><code>contextvars.copy_context()</code></a> function which lets you store a copy of the current values of all the <code>ContextVars</code> that are currently active. Then later, you can run with all those values active by calling <a href="https://docs.python.org/3/library/contextvars.html#contextvars.Context.run" rel="nofollow noreferrer"><code>context_returned_by_copy_context.run(func)</code></a>. However, the <code>run</code> method is sync and expects a regular callable, not an async one. How can you run an async function with a context returned by <code>copy_context</code>?</p>
|
<python><python-asyncio><python-contextvars>
|
2024-06-23 21:08:59
| 2
| 22,294
|
Joseph Garvin
|
78,659,781
| 3,486,684
|
If I already have an instance of a super class, how can I create an instance of a child class from it?
|
<p>How do I properly inherit <em>data</em> from an instantiation of a super class? For example, consider something I can get to work, but is confusing in its behaviour:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Self

@dataclass
class SuperClass:
    hello: int
    world: int

@dataclass
class ChildClass(SuperClass):
    kitty: int

    @classmethod
    def from_super(cls, s: SuperClass, kitty: int) -> Self:
        x = cls(s, kitty=kitty)
        return x
</code></pre>
<p>Now, let's see whether this works:</p>
<pre class="lang-py prettyprint-override"><code>super_instance = SuperClass(0, 1)
child = ChildClass.from_super(super_instance, 2)
child
</code></pre>
<p>This produces output:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[48], line 1
----> 1 ChildClass.from_super(super_instance, 2)

Cell In[46], line 15
     13 @classmethod
     14 def from_super(cls, s: SuperClass, kitty: int) -> Self:
---> 15     x = cls(s, kitty=kitty)
     16     return x

TypeError: ChildClass.__init__() missing 1 required positional argument: 'world'
</code></pre>
<p>So how do I do this correctly, without manually writing out each variable instantiation from the super class?</p>
|
<python><inheritance>
|
2024-06-23 20:39:54
| 3
| 4,654
|
bzm3r
|
78,659,682
| 7,713,770
|
How to add permissions in django admin panel for user?
|
<p>I have a django app. And I added in the admin panel of django the following permission:</p>
<pre><code>Accounts | account | Can view account
</code></pre>
<p>And in the code of admin.py of the accounts app, I have added this:</p>
<pre><code>from django.contrib.auth import get_user_model
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from .models import Account
# from .models import *

# Register your models here.
User = get_user_model()
user = User.objects.get(email='n@n.nl')
print(user.has_perm('view_account'))

content_type = ContentType.objects.get_for_model(Account)
permission = Permission.objects.get(
    codename='view_account', content_type=content_type)
user.user_permissions.add(permission)

permissions = Permission.objects.filter(content_type=content_type)
for perm in permissions:
    print(f"Codename: {perm.codename}, Name: {perm.name}")
print(user)

class AccountAdmin(UserAdmin):
    list_display = (
        "email",
        "first_name",
        "last_name",
        "username",
        "last_login",
        "date_joined",
        "is_active",
    )
    list_display_links = ("email", "first_name", "last_name")
    filter_horizontal = (
        'groups',
        'user_permissions',
    )
    readonly_fields = ("last_login", "date_joined")
    ordering = ("-date_joined",)
    list_filter = ()

User = get_user_model()
user = User.objects.get(email='n@n.nl')
content_type = ContentType.objects.get_for_model(Account)
permission = Permission.objects.get(
    codename='view_account', content_type=content_type)
user.user_permissions.add(permission)

admin.site.register(Account, AccountAdmin)
</code></pre>
<p>accounts - models.py:</p>
<pre><code>from django.contrib.auth.models import (
    AbstractBaseUser, BaseUserManager, PermissionsMixin)
from django.db import models
from django.contrib.auth.models import Permission
from .decorators import allowed_users

# Create your models here.
class MyAccountManager(BaseUserManager):
    @allowed_users(allowed_roles=['account_permission'])
    def create_user(self, email, password=None, **extra_field):
        if not email:
            raise ValueError("Gebruiker moet een email adres hebben.")
        if not password:
            raise ValueError("Gebruiker moet een password hebben.")

        user = self.model(email=email, **extra_field)
        user.set_password(password)
        permission = Permission.objects.get(
            codename='DierenWelzijnAdmin | Dier | Can change animal')
        user.user_permissions.add(permission)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, username, password):
        user = self.create_user(
            email=self.normalize_email(email),
            username=username,
            password=password,
        )
        user.is_admin = True
        user.is_active = True
        user.is_staff = True
        user.is_superadmin = True
        user.save(using=self._db)
        return user

class Account(AbstractBaseUser, PermissionsMixin):
    first_name = models.CharField(max_length=50, blank=True)
    last_name = models.CharField(max_length=50, blank=True)
    username = models.CharField(max_length=50, unique=True)
    email = models.EmailField(max_length=100, unique=True)
    phone_number = models.CharField(max_length=50, blank=True)

    # required
    date_joined = models.DateTimeField(auto_now_add=True)
    last_login = models.DateTimeField(auto_now_add=True)
    is_admin = models.BooleanField(default=False)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)
    is_superadmin = models.BooleanField(default=False)

    USERNAME_FIELD = "email"
    REQUIRED_FIELDS = ["username"]

    objects = MyAccountManager()

    def full_name(self):
        return f"{self.first_name} {self.last_name}"

    def __str__(self):
        return self.email

    def has_perm(self, perm, obj=None):
        return self.is_admin

    def has_module_perms(self, add_label):
        return True
</code></pre>
<p>When I start the django app, I don't get any errors.</p>
<p>But when I log in with the account that has the user permission, I still see the message:</p>
<blockquote>
<p>You don’t have permission to view or edit anything.</p>
</blockquote>
<p>And the permissions:</p>
<ul>
<li>Is active</li>
<li>Is staff</li>
</ul>
<p>are selected.</p>
<p>Question: how to add permissions for users?</p>
|
<python><django><django-rest-framework>
|
2024-06-23 19:51:20
| 1
| 3,991
|
mightycode Newton
|
78,659,174
| 7,713,770
|
How to show permissions in django admin project?
|
<p>I have a django app, and I am trying to restrict user permissions in the admin panel of django via the AUTHENTICATION AND AUTHORIZATION tab.</p>
<p>But I noticed that something is missing on the AUTHENTICATION AND AUTHORIZATION tab in my existing django app.</p>
<p>For comparison, I did a clean install of django, where the AUTHENTICATION AND AUTHORIZATION tab looked like this:</p>
<p><a href="https://i.sstatic.net/yqTGIQ0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yqTGIQ0w.png" alt="enter image description here" /></a></p>
<p>and the AUTHENTICATION AND AUTHORIZATION tab in my existing django app looks like:</p>
<p><a href="https://i.sstatic.net/jbsfMFdJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jbsfMFdJ.png" alt="enter image description here" /></a></p>
<p>So as you can see, in my existing django app the right-hand part of the user permissions widget is missing.</p>
<p>What have I done so far? I checked the versions, and they are the same (clean version and existing version). I am using django version 5.0.6.</p>
<p>I checked the libraries installed in my existing version with <code>pip freeze</code>:</p>
<pre><code>anyio==4.2.0
asgiref==3.7.2
attrs==23.1.0
azure-common==1.1.28
azure-core==1.29.6
azure-storage-blob==12.19.0
azure-storage-common==2.1.0
certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
click==8.1.7
colorama==0.4.6
coreapi==2.3.3
coreschema==0.0.4
cryptography==39.0.0
defusedxml==0.7.1
Django==5.0.6
django-allauth==0.52.0
django-cors-headers==3.10.1
django-dotenv==1.4.2
django-filter==23.2
django-storages==1.14.2
djangorestframework==3.14.0
drf-spectacular==0.26.4
drf-yasg==1.20.0
exceptiongroup==1.2.0
gunicorn==21.2.0
h11==0.14.0
idna==3.4
inflection==0.5.1
isodate==0.6.1
itypes==1.2.0
Jinja2==3.1.2
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
Markdown==3.4.4
MarkupSafe==2.1.3
oauthlib==3.2.2
packaging==23.1
Pillow==10.0.0
psycopg2==2.9.7
pycparser==2.21
PyJWT==2.6.0
pyrsistent==0.19.3
python-dateutil==2.8.2
python3-openid==3.2.0
pytz==2023.3.post1
PyYAML==6.0.1
referencing==0.30.2
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.10.2
ruamel.yaml==0.17.32
ruamel.yaml.clib==0.2.7
six==1.16.0
sniffio==1.3.0
sqlparse==0.4.4
typing_extensions==4.7.1
tzdata==2023.3
uritemplate==4.1.1
urllib3==2.0.4
uvicorn==0.23.2
waitress==2.1.2
</code></pre>
<p>And part of settings.py file of the existing app:</p>
<pre><code>
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'drf_yasg',
    'corsheaders',
    'rest_framework.authtoken',
    'django_filters',
    'welAdmin',
    "accounts",
    "core",
    'drf_spectacular',
]

LOGOUT_REDIRECT_URL = "/"
ACCOUNT_LOGOUT_REDIRECT_URL = "/"

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.TokenAuthentication',
    ],
    'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema',
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
    'SWAGGER_SETTINGS': {
        'LOGIN_URL': 'rest_framework:login',
        'LOGOUT_URL': 'rest_framework:logout'
    }
}

# Password validation
# https://docs.djangoproject.com/en/4.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/4.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'CET'
USE_I18N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.1/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_DIRS = [
    (BASE_DIR, 'static')
]
STATICFILES_DIRS = [BASE_DIR / 'static']
STATIC_ROOT = BASE_DIR / 'staticfiles/'
MEDIA_ROOT = BASE_DIR / 'media'
</code></pre>
<p>admin.py looks like this:</p>
<pre><code>from django.contrib import admin, messages
from django.db import models
from django.forms import RadioSelect
from django.http import HttpResponseRedirect
from django.urls import reverse
from django.utils.safestring import mark_safe
from .models import Animal, Category
from .forms import SetCategoryForm

class AnimalAdmin(admin.ModelAdmin):
    fields = ['name', 'sort', 'category', 'klasse_name', 'cites', 'uis', 'pet_list',
              'description', "feeding", "housing", "care", "literature", 'images', 'img_preview']
    readonly_fields = ['img_preview', 'klasse_name']
    autocomplete_fields = ['category']
    list_display = ('name', 'category_link', 'description')
    search_fields = ['name', 'category__name']
    search_help_text = "Zoek in de namen of families"
    actions = ['set_category']
    action_form = SetCategoryForm
    ordering = ['id']
    formfield_overrides = {
        models.BooleanField: {"widget": RadioSelect(
            choices=[(True, "Ja"), (False, "Nee")])}
    }

    def response_add(self, request, obj, post_url_continue=None):
        if '_addanother' in request.POST:
            url = reverse("admin:WelzijnAdmin_animal_add")
            category_id = request.POST['category']
            qs = '?category=%s' % category_id
            return HttpResponseRedirect(''.join((url, qs)))
        if '_continue' in request.POST:
            return super(AnimalAdmin, self).response_add(request, obj, post_url_continue)
        else:
            return HttpResponseRedirect(reverse("admin:WelzijnAdmin_animal_changelist"))

    def response_change(self, request, obj, post_url_continue=None):
        if '_addanother' in request.POST:
            url = reverse("admin:WelzijnAdmin_animal_add")
            category_id = request.POST['category']
            qs = '?category=%s' % category_id
            return HttpResponseRedirect(''.join((url, qs)))
        if '_continue' in request.POST:
            return super(AnimalAdmin, self).response_change(request, obj)
        else:
            return HttpResponseRedirect(reverse("admin:WelzijnAdmin_animal_changelist"))

    @admin.action(description="Zet de familie van de geselecteerde n")
    def set_category(self, request, queryset):
        category = request.POST['familie']
        queryset.update(category=category)
        messages.success(
            request, '{0} animal(s) were updated'.format(queryset.count()))

    def category_link(self, obj):
        return mark_safe('<a href="{}">{}</a>'.format(
            reverse("admin:WelzijnAdmin_category_change",
                    args=[obj.category.pk]),
            obj.category.name
        ))
    category_link.short_description = 'Familie'

class CategoryAdmin(admin.ModelAdmin):
    fields = ['name', 'category', 'description', 'images', 'img_preview']
    readonly_fields = ['img_preview']
    autocomplete_fields = ['category']
    list_display = ('name', 'category_link',
                    'subcategories_link', 'animals_link')
    search_fields = ['name', 'category__name']
    search_help_text = "Zoek in de namen of families"
    actions = ['set_category']
    action_form = SetCategoryForm
    ordering = ['id']

    def response_add(self, request, obj, post_url_continue=None):
        if '_addanother' in request.POST:
            url = reverse("admin:WelzijnAdmin_category_add")
            category_id = request.POST['category']
            qs = '?category=%s' % category_id
            return HttpResponseRedirect(''.join((url, qs)))
        if '_continue' in request.POST:
            return super(CategoryAdmin, self).response_add(request, obj, post_url_continue)
        else:
            return HttpResponseRedirect(reverse("admin:WelzijnAdmin_category_changelist"))

    def response_change(self, request, obj, post_url_continue=None):
        if '_addanother' in request.POST:
            url = reverse("admin:WelzijnAdmin_category_add")
            category_id = request.POST['category']
            qs = '?category=%s' % category_id
            return HttpResponseRedirect(''.join((url, qs)))
        if '_continue' in request.POST:
            return super(CategoryAdmin, self).response_change(request, obj)
        else:
            return HttpResponseRedirect(reverse("admin:WelzijnAdmin_category_changelist"))

    @admin.action(description="Zet de familie van de geselecteerde families")
    def set_category(self, request, queryset):
        category = request.POST['familie']
        queryset.update(category=category)
        messages.success(
            request, '{0} categories were updated'.format(queryset.count()))

    def category_link(self, obj):
        if obj.category:
            return mark_safe('<a href="{}">{}</a>'.format(
                reverse("admin:WelzijnAdmin_category_change",
                        args=[obj.pk]),
                obj.category
            ))
        return None
    category_link.short_description = 'Familie'

    def subcategories_link(self, obj):
        if obj.subcategories.exists():
            return mark_safe('<a href="{}">{}</a>'.format(
                reverse("admin:WelzijnAdmin_category_changelist") +
                f"?category={obj.id}",
                "Subgroepen"
            ))
        return None
    subcategories_link.short_description = "Subgroepen"

    def animals_link(self, obj):
        if obj.animals.exists():
            return mark_safe('<a href="{}">{}</a>'.format(
                reverse("admin:WelzijnAdmin_animal_changelist") +
                f"?category={obj.id}",
                ""
            ))
        return None
    animals_link.short_description = "in groep"

admin.site.register(Animal, AnimalAdmin)
admin.site.register(Category, CategoryAdmin)
</code></pre>
<p>I also have an accounts app, and its admin.py looks like:</p>
<pre><code>from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import Account
# Register your models here.
class AccountAdmin(UserAdmin):
list_display = (
"email",
"first_name",
"last_name",
"username",
"last_login",
"date_joined",
"is_active",
)
list_display_links = ("email", "first_name", "last_name")
filter_horizontal = ('groups', 'user_permissions',)
readonly_fields = ("last_login", "date_joined")
ordering = ("-date_joined",)
filter_horizontal = ()
list_filter = ()
fieldsets = ()
# Register your models here.
admin.site.register(Account, AccountAdmin)
# admin.site.register(Account, AccountAdmin)
</code></pre>
<p>Question: how do I show the chosen user permissions in the existing Django app?</p>
|
<python><django><django-rest-framework>
|
2024-06-23 15:56:06
| 1
| 3,991
|
mightycode Newton
|
78,659,070
| 3,616,293
|
How can I vectorize this linalg.lstsq() operation?
|
<p>I am trying to implement a multi-frequency phase unwrapping algorithm using Python3 and NumPy. I have 7 single channel (gray scale) images of shape <code>(1080, 1920)</code>. After stacking them along the third axis I get <code>(1080, 1920, 7)</code>.</p>
<p>I have a shift matrix <code>A</code> (which is fixed for all pixels) of shape <code>(7, 7)</code>. Each pixel has its own intensity vector <code>r</code> of shape <code>(7, 1)</code>.</p>
<p>To solve for <code>u</code> at each pixel by minimizing the L2-norm ||r - Au||, I can do:</p>
<pre><code># Just for illustration
A = np.random.randn(7, 7)
r = np.random.randn(7, 1)
# Solved for each pixel location
u = np.linalg.lstsq(a = A, b = r, rcond = None)
</code></pre>
<p>This can be implemented using 2 for loops:</p>
<pre><code>for y in range(0, image_height - 1):
for x in range(0, image_width - 1):
# code here
</code></pre>
<p>This is inefficient. How can I write it as efficient NumPy code?</p>
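Since <code>A</code> is identical for every pixel, one way (a minimal sketch with toy sizes) is to hand <code>np.linalg.lstsq</code> all H*W right-hand sides at once — the <code>b</code> argument may be a matrix with one column per pixel:

```python
import numpy as np

# Sketch with toy sizes; the real shapes would be H, W = 1080, 1920.
H, W = 4, 5
A = np.random.randn(7, 7)          # shared shift matrix
stack = np.random.randn(H, W, 7)   # per-pixel intensity vectors

R = stack.reshape(-1, 7).T                  # (7, H*W): one column per pixel
U = np.linalg.lstsq(A, R, rcond=None)[0]    # solves every column at once
u_img = U.T.reshape(H, W, 7)                # back to image layout
```

This replaces the double loop with a single call; each column of <code>U</code> equals the corresponding per-pixel <code>lstsq</code> solution.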
|
<python><numpy><least-squares><array-broadcasting>
|
2024-06-23 15:17:04
| 1
| 2,518
|
Arun
|
78,659,055
| 268,581
|
Package name with username
|
<p>Let's suppose there's a company <code>abc</code> that makes a library <code>abc</code>.</p>
<p>I'd like to make a Python package that interfaces with <code>abc</code>.</p>
<p>I could just call it <code>abc</code>. However, that seems selfish to take that name in case the company itself wants to publish a package using that name.</p>
<p>Alternatively, I could incorporate my username:</p>
<pre><code>dharmatech_abc
</code></pre>
<p>Or:</p>
<pre><code>dharmatech.abc
</code></pre>
<h2>Question</h2>
<p>If I go the username approach, which of the above is more idiomatic in Python packaging? Underscore or <code>.</code>?</p>
<p>If I decide to publish my package to pypi, and I went with the <code>.</code> approach, would the package be called <code>dharmatech.abc</code>?</p>
<p>If I go with the <code>.</code> approach, and I have other packages that also start with my username, will it be a problem that multiple separate packages all contribute to that <code>dharmatech</code> top level namespace?</p>
<h2>Separate username package</h2>
<p>One approach is to simply have a package <code>dharmatech</code> and have <code>abc</code> be in there.</p>
<p>The drawback to that approach is, I may want to similarly prefix <code>dharmatech</code> for other packages. And I may not want those packages to be in the same github repository. I.e. they would be in their own github repositories and be considered separate Python packages, although with the same module prefix.</p>
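For what it's worth, the <code>dharmatech.abc</code> spelling corresponds to a PEP 420 namespace package; a hypothetical layout sketch (names illustrative only):

```
dharmatech-abc/
├── pyproject.toml          # distribution name: "dharmatech-abc"
└── dharmatech/             # namespace package: no __init__.py here
    └── abc/
        └── __init__.py
```

Note that PyPI normalizes <code>.</code>, <code>-</code>, and <code>_</code> in distribution names to the same thing, and separate repositories can each ship a subpackage under <code>dharmatech</code> as long as none of them includes a <code>dharmatech/__init__.py</code>.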
|
<python><pypi>
|
2024-06-23 15:08:14
| 1
| 9,709
|
dharmatech
|
78,659,009
| 8,964,393
|
Get minimum through record iterations in pandas dataframe
|
<p>I have created the following pandas dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
ds = { 'trend' : [1,1,1,1,2,2,3,3,3,3,3,3,4,4,4,4,4], 'price' : [23,43,56,21,43,55,54,32,9,12,11,12,23,3,2,1,1]}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks as follows:</p>
<pre><code>display(df)
trend price
0 1 23
1 1 43
2 1 56
3 1 21
4 2 43
5 2 55
6 3 54
7 3 32
8 3 9
9 3 12
10 3 11
11 3 12
12 4 23
13 4 3
14 4 2
15 4 1
16 4 1
</code></pre>
<p>I have saved the dataframe to a .csv file called <code>df.csv</code>:</p>
<pre><code>df.to_csv("df.csv", index = False)
</code></pre>
<p>I need to create a new field called <code>minimum</code> which:</p>
<ol>
<li>iterates through each and every record of the dataframe</li>
<li>takes the minimum between the <code>price</code> observed at each iteration and the last <code>price</code> observed in the previous <code>trend</code>.</li>
</ol>
<p>For example:</p>
<ul>
<li>I iterate at record 0 and the minimum price is 23 (there is only that one).</li>
<li>I iterate at record 1 and take the minimum between 43 and 23: the result is 23.</li>
</ul>
<p>Fast forward to record 4.</p>
<ul>
<li>I need to calculate the minimum between the <code>price</code> observed at record 4 (<code>price: 43</code>) and the last <code>price</code> observed for the previous <code>trend</code> (<code>price: 21</code>). The result is 21.</li>
</ul>
<p>Fast forward to record 14.</p>
<ul>
<li>I need to calculate the minimum between the <code>price</code> observed at record 14 (<code>price: 2</code>) and the last <code>price</code> observed for the previous <code>trend</code> (<code>price: 12</code>). The result is 2.</li>
</ul>
<p>And so on.</p>
<p>I have then written this code:</p>
<pre><code>minimum = []
for i in range(len(df)):
ds = pd.read_csv("df.csv", nrows=i+1)
d = ds.groupby('trend', as_index=False).agg(
{'price':'last'})
d['minimum'] = d['price'].min()
minimum.append(d['minimum'].iloc[-1])
ds['minimum'] = minimum
</code></pre>
<p>The resulting dataframe looks as follows:</p>
<p>display(ds)</p>
<pre><code> trend price minimum
0 1 23 23
1 1 43 43
2 1 56 56
3 1 21 21
4 2 43 21
5 2 55 21
6 3 54 21
7 3 32 21
8 3 9 9
9 3 12 12
10 3 11 11
11 3 12 12
12 4 23 12
13 4 3 3
14 4 2 2
15 4 1 1
16 4 1 1
</code></pre>
<p>The resulting dataframe is correct.</p>
<p>The problem is that I have to apply this process to a dataframe which contains about 1 million records and it will take about 48 years to complete.</p>
<p>Does anybody know a quicker way to obtain the same results above?</p>
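Judging from the expected output (e.g. record 6 gives 21, the last price of trend 1), the target value is the running minimum over the last prices of all <em>earlier</em> trends, capped by the current price. That vectorizes without any per-row loop — a sketch:

```python
import numpy as np
import pandas as pd

ds = {'trend': [1,1,1,1,2,2,3,3,3,3,3,3,4,4,4,4,4],
      'price': [23,43,56,21,43,55,54,32,9,12,11,12,23,3,2,1,1]}
df = pd.DataFrame(ds)

# Last price of each trend, then the running minimum over *previous* trends.
last_per_trend = df.groupby('trend')['price'].last()
prev_min = last_per_trend.shift().cummin()

# Per row: min(current price, min of all earlier trends' last prices).
df['minimum'] = np.minimum(df['price'], df['trend'].map(prev_min).fillna(np.inf))
```

No per-row loop and no repeated CSV reads, so this should scale to a million rows.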
|
<python><pandas><dataframe><function><group-by>
|
2024-06-23 14:50:24
| 1
| 1,762
|
Giampaolo Levorato
|
78,658,908
| 4,269,851
|
IDA Pro change color of variables in pseudocode
|
<p>This is a very basic IDA Pro plugin that changes the color of <code>MyVar123</code> inside the pseudocode window.</p>
<p>The problem is that this approach is limited to the <a href="https://hex-rays.com/products/ida/support/sdkdoc/group___s_c_o_l_o_r__.html" rel="nofollow noreferrer">ida_lines.SCOLOR_...</a> constants for color. How can I define my own text color, e.g. <code>#00FF00</code>, in some other way?</p>
<p>I read through the SDK and found no function responsible for custom color codes.</p>
<pre><code>import idaapi, ida_kernwin, ida_lines
class ColorizeVariable(ida_kernwin.action_handler_t):
def __init__(self):
ida_kernwin.action_handler_t.__init__(self)
def activate(self, ctx):
if ida_kernwin.get_widget_type(ctx.widget) == ida_kernwin.BWN_PSEUDOCODE:
vu = idaapi.get_widget_vdui(ctx.widget)
pc = vu.cfunc.get_pseudocode()
find = "MyVar"
vu.refresh_view(False)
for sl in pc:
sl.line = sl.line.replace(find, ida_lines.COLSTR(find, ida_lines.SCOLOR_DNAME))
#sl.line = sl.line.replace(find, f'{ida_lines.COLOR_ON}{ida_lines.COLOR_CHAR}{find}{ida_lines.COLOR_OFF}{ida_lines.COLOR_CHAR}') # broken
#sl.line = sl.line.replace(find, f'{ida_lines.COLOR_ON}\x0A{find}{ida_lines.COLOR_OFF}\x0A') #working
#sl.line = sl.line.replace(find, f'\1\x0A{find}\2\x0A') #short version
return 0
def update(self, ctx):
return ida_kernwin.AST_ENABLE_ALWAYS
class ida_plugin_container(idaapi.plugin_t):
flags = idaapi.PLUGIN_UNL
comment = 'plugin comment'
help = 'help message'
wanted_name = "myPlugin"
wanted_hotkey = 'Shift-Q'
def init(self):
action_desc = idaapi.action_desc_t(
'myAction',
'Colorize Variable',
ColorizeVariable(),
'Ctrl+R',
'Colorize Variable in Pseudocode',
10)
ida_kernwin.unregister_action('myAction')
ida_kernwin.register_action(action_desc)
ida_kernwin.attach_action_to_toolbar("SearchToolBar", 'myAction')
print('loaded')
return idaapi.PLUGIN_OK
def run(self, arg):
pass
def term(self):
pass
def PLUGIN_ENTRY():
return ida_plugin_container()
</code></pre>
|
<python><plugins><ida>
|
2024-06-23 14:11:05
| 2
| 829
|
Roman Toasov
|
78,658,742
| 694,716
|
langchain chat chain invoke does not return an object?
|
<p>I have a simple example about langchain runnables. From <a href="https://python.langchain.com/v0.1/docs/expression_language/interface/" rel="nofollow noreferrer">https://python.langchain.com/v0.1/docs/expression_language/interface/</a></p>
<pre><code>from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model
print(chain.invoke({"topic": "chickens"}))
</code></pre>
<p>According to the website, it should return the following:</p>
<pre><code>AIMessage(content="Why don't bears wear shoes? \n\nBecause they have bear feet!")
</code></pre>
<p>But it returns an unstructured response:</p>
<pre><code>content="Why don't bears wear shoes? \n\nBecause they have bear feet!" response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 13, 'total_tokens': 32}, 'model_name': 'gpt-4-0613', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None} id='run-bd7cda7e-dee2-4107-af3f-97282faa9fa4-0' usage_metadata={'input_tokens': 13, 'output_tokens': 19, 'total_tokens': 32}
</code></pre>
<p>How can I fix this issue?</p>
|
<python><py-langchain>
|
2024-06-23 12:55:39
| 2
| 6,935
|
barteloma
|
78,658,424
| 1,457,380
|
Matplotlib: set ticks and labels at regular intervals but starting at specific date
|
<p>I am trying to set the following dates (year only):</p>
<p><strong>lab = [1969, 1973, 1977, 1981, 1985, 1989, 1993, 1997, 2001, 2005, 2009, 2013, 2017, 2021, 2025]</strong></p>
<p>I tried with <code>dates.YearLocator(base=4)</code>, but didn't find a way to set the starting year. I get the labels starting in 1968 instead of 1969. I also tried with <code>ticker.FixedFormatter(lab)</code>, but the ticks and dates were shown in the wrong place.</p>
<pre><code># reproducible example
import pandas as pd
from pandas import Timestamp
import numpy as np # np.nan
import matplotlib.pyplot as plt
from matplotlib import dates
from matplotlib import ticker
data = {'Date': {1: Timestamp('1969-01-20 00:00:00'), 2: Timestamp('1969-04-01 00:00:00'), 3: Timestamp('1969-07-01 00:00:00'), 4: Timestamp('1969-10-01 00:00:00'), 5: Timestamp('1970-01-01 00:00:00'), 6: Timestamp('1970-04-01 00:00:00'), 7: Timestamp('1970-07-01 00:00:00'), 8: Timestamp('1970-10-01 00:00:00'), 9: Timestamp('1971-01-01 00:00:00'), 10: Timestamp('1971-04-01 00:00:00'), 11: Timestamp('1971-07-01 00:00:00'), 12: Timestamp('1971-10-01 00:00:00'), 13: Timestamp('1972-01-01 00:00:00'), 14: Timestamp('1972-04-01 00:00:00'), 15: Timestamp('1972-07-01 00:00:00'), 16: Timestamp('1972-10-01 00:00:00'), 17: Timestamp('1973-01-01 00:00:00'), 18: Timestamp('1973-04-01 00:00:00'), 19: Timestamp('1973-07-01 00:00:00'), 20: Timestamp('1973-10-01 00:00:00'), 21: Timestamp('1974-01-01 00:00:00'), 22: Timestamp('1974-04-01 00:00:00'), 23: Timestamp('1974-07-01 00:00:00'), 24: Timestamp('1974-08-09 00:00:00'), 25: Timestamp('1974-10-01 00:00:00'), 26: Timestamp('1975-01-01 00:00:00'), 27: Timestamp('1975-04-01 00:00:00'), 28: Timestamp('1975-07-01 00:00:00'), 29: Timestamp('1975-10-01 00:00:00'), 30: Timestamp('1976-01-01 00:00:00'), 31: Timestamp('1976-04-01 00:00:00'), 32: Timestamp('1976-07-01 00:00:00'), 33: Timestamp('1976-10-01 00:00:00'), 34: Timestamp('1977-01-01 00:00:00'), 35: Timestamp('1977-01-20 00:00:00'), 36: Timestamp('1977-04-01 00:00:00'), 37: Timestamp('1977-07-01 00:00:00'), 38: Timestamp('1977-10-01 00:00:00'), 39: Timestamp('1978-01-01 00:00:00'), 40: Timestamp('1978-04-01 00:00:00'), 41: Timestamp('1978-07-01 00:00:00'), 42: Timestamp('1978-10-01 00:00:00'), 43: Timestamp('1979-01-01 00:00:00'), 44: Timestamp('1979-04-01 00:00:00'), 45: Timestamp('1979-07-01 00:00:00'), 46: Timestamp('1979-10-01 00:00:00'), 47: Timestamp('1980-01-01 00:00:00'), 48: Timestamp('1980-04-01 00:00:00'), 49: Timestamp('1980-07-01 00:00:00'), 50: Timestamp('1980-10-01 00:00:00'), 51: Timestamp('1981-01-01 00:00:00'), 52: Timestamp('1981-01-20 00:00:00'), 53: 
Timestamp('1981-04-01 00:00:00'), 54: Timestamp('1981-07-01 00:00:00'), 55: Timestamp('1981-10-01 00:00:00'), 56: Timestamp('1982-01-01 00:00:00'), 57: Timestamp('1982-04-01 00:00:00'), 58: Timestamp('1982-07-01 00:00:00'), 59: Timestamp('1982-10-01 00:00:00'), 60: Timestamp('1983-01-01 00:00:00'), 61: Timestamp('1983-04-01 00:00:00'), 62: Timestamp('1983-07-01 00:00:00'), 63: Timestamp('1983-10-01 00:00:00'), 64: Timestamp('1984-01-01 00:00:00'), 65: Timestamp('1984-04-01 00:00:00'), 66: Timestamp('1984-07-01 00:00:00'), 67: Timestamp('1984-10-01 00:00:00'), 68: Timestamp('1985-01-01 00:00:00'), 69: Timestamp('1985-04-01 00:00:00'), 70: Timestamp('1985-07-01 00:00:00'), 71: Timestamp('1985-10-01 00:00:00'), 72: Timestamp('1986-01-01 00:00:00'), 73: Timestamp('1986-04-01 00:00:00'), 74: Timestamp('1986-07-01 00:00:00'), 75: Timestamp('1986-10-01 00:00:00'), 76: Timestamp('1987-01-01 00:00:00'), 77: Timestamp('1987-04-01 00:00:00'), 78: Timestamp('1987-07-01 00:00:00'), 79: Timestamp('1987-10-01 00:00:00'), 80: Timestamp('1988-01-01 00:00:00'), 81: Timestamp('1988-04-01 00:00:00'), 82: Timestamp('1988-07-01 00:00:00'), 83: Timestamp('1988-10-01 00:00:00'), 84: Timestamp('1989-01-01 00:00:00'), 85: Timestamp('1989-01-20 00:00:00'), 86: Timestamp('1989-04-01 00:00:00'), 87: Timestamp('1989-07-01 00:00:00'), 88: Timestamp('1989-10-01 00:00:00'), 89: Timestamp('1990-01-01 00:00:00'), 90: Timestamp('1990-04-01 00:00:00'), 91: Timestamp('1990-07-01 00:00:00'), 92: Timestamp('1990-10-01 00:00:00'), 93: Timestamp('1991-01-01 00:00:00'), 94: Timestamp('1991-04-01 00:00:00'), 95: Timestamp('1991-07-01 00:00:00'), 96: Timestamp('1991-10-01 00:00:00'), 97: Timestamp('1992-01-01 00:00:00'), 98: Timestamp('1992-04-01 00:00:00'), 99: Timestamp('1992-07-01 00:00:00'), 100: Timestamp('1992-10-01 00:00:00'), 101: Timestamp('1993-01-01 00:00:00'), 102: Timestamp('1993-01-20 00:00:00'), 103: Timestamp('1993-04-01 00:00:00'), 104: Timestamp('1993-07-01 00:00:00'), 105: 
Timestamp('1993-10-01 00:00:00'), 106: Timestamp('1994-01-01 00:00:00'), 107: Timestamp('1994-04-01 00:00:00'), 108: Timestamp('1994-07-01 00:00:00'), 109: Timestamp('1994-10-01 00:00:00'), 110: Timestamp('1995-01-01 00:00:00'), 111: Timestamp('1995-04-01 00:00:00'), 112: Timestamp('1995-07-01 00:00:00'), 113: Timestamp('1995-10-01 00:00:00'), 114: Timestamp('1996-01-01 00:00:00'), 115: Timestamp('1996-04-01 00:00:00'), 116: Timestamp('1996-07-01 00:00:00'), 117: Timestamp('1996-10-01 00:00:00'), 118: Timestamp('1997-01-01 00:00:00'), 119: Timestamp('1997-04-01 00:00:00'), 120: Timestamp('1997-07-01 00:00:00'), 121: Timestamp('1997-10-01 00:00:00'), 122: Timestamp('1998-01-01 00:00:00'), 123: Timestamp('1998-04-01 00:00:00'), 124: Timestamp('1998-07-01 00:00:00'), 125: Timestamp('1998-10-01 00:00:00'), 126: Timestamp('1999-01-01 00:00:00'), 127: Timestamp('1999-04-01 00:00:00'), 128: Timestamp('1999-07-01 00:00:00'), 129: Timestamp('1999-10-01 00:00:00'), 130: Timestamp('2000-01-01 00:00:00'), 131: Timestamp('2000-04-01 00:00:00'), 132: Timestamp('2000-07-01 00:00:00'), 133: Timestamp('2000-10-01 00:00:00'), 134: Timestamp('2001-01-01 00:00:00'), 135: Timestamp('2001-01-20 00:00:00'), 136: Timestamp('2001-04-01 00:00:00'), 137: Timestamp('2001-07-01 00:00:00'), 138: Timestamp('2001-10-01 00:00:00'), 139: Timestamp('2002-01-01 00:00:00'), 140: Timestamp('2002-04-01 00:00:00'), 141: Timestamp('2002-07-01 00:00:00'), 142: Timestamp('2002-10-01 00:00:00'), 143: Timestamp('2003-01-01 00:00:00'), 144: Timestamp('2003-04-01 00:00:00'), 145: Timestamp('2003-07-01 00:00:00'), 146: Timestamp('2003-10-01 00:00:00'), 147: Timestamp('2004-01-01 00:00:00'), 148: Timestamp('2004-04-01 00:00:00'), 149: Timestamp('2004-07-01 00:00:00'), 150: Timestamp('2004-10-01 00:00:00'), 151: Timestamp('2005-01-01 00:00:00'), 152: Timestamp('2005-04-01 00:00:00'), 153: Timestamp('2005-07-01 00:00:00'), 154: Timestamp('2005-10-01 00:00:00'), 155: Timestamp('2006-01-01 00:00:00'), 156: 
Timestamp('2006-04-01 00:00:00'), 157: Timestamp('2006-07-01 00:00:00'), 158: Timestamp('2006-10-01 00:00:00'), 159: Timestamp('2007-01-01 00:00:00'), 160: Timestamp('2007-04-01 00:00:00'), 161: Timestamp('2007-07-01 00:00:00'), 162: Timestamp('2007-10-01 00:00:00'), 163: Timestamp('2008-01-01 00:00:00'), 164: Timestamp('2008-04-01 00:00:00'), 165: Timestamp('2008-07-01 00:00:00'), 166: Timestamp('2008-10-01 00:00:00'), 167: Timestamp('2009-01-01 00:00:00'), 168: Timestamp('2009-01-20 00:00:00'), 169: Timestamp('2009-04-01 00:00:00'), 170: Timestamp('2009-07-01 00:00:00'), 171: Timestamp('2009-10-01 00:00:00'), 172: Timestamp('2010-01-01 00:00:00'), 173: Timestamp('2010-04-01 00:00:00'), 174: Timestamp('2010-07-01 00:00:00'), 175: Timestamp('2010-10-01 00:00:00'), 176: Timestamp('2011-01-01 00:00:00'), 177: Timestamp('2011-04-01 00:00:00'), 178: Timestamp('2011-07-01 00:00:00'), 179: Timestamp('2011-10-01 00:00:00'), 180: Timestamp('2012-01-01 00:00:00'), 181: Timestamp('2012-04-01 00:00:00'), 182: Timestamp('2012-07-01 00:00:00'), 183: Timestamp('2012-10-01 00:00:00'), 184: Timestamp('2013-01-01 00:00:00'), 185: Timestamp('2013-04-01 00:00:00'), 186: Timestamp('2013-07-01 00:00:00'), 187: Timestamp('2013-10-01 00:00:00'), 188: Timestamp('2014-01-01 00:00:00'), 189: Timestamp('2014-04-01 00:00:00'), 190: Timestamp('2014-07-01 00:00:00'), 191: Timestamp('2014-10-01 00:00:00'), 192: Timestamp('2015-01-01 00:00:00'), 193: Timestamp('2015-04-01 00:00:00'), 194: Timestamp('2015-07-01 00:00:00'), 195: Timestamp('2015-10-01 00:00:00'), 196: Timestamp('2016-01-01 00:00:00'), 197: Timestamp('2016-04-01 00:00:00'), 198: Timestamp('2016-07-01 00:00:00'), 199: Timestamp('2016-10-01 00:00:00'), 200: Timestamp('2017-01-01 00:00:00'), 201: Timestamp('2017-01-20 00:00:00'), 202: Timestamp('2017-04-01 00:00:00'), 203: Timestamp('2017-07-01 00:00:00'), 204: Timestamp('2017-10-01 00:00:00'), 205: Timestamp('2018-01-01 00:00:00'), 206: Timestamp('2018-04-01 00:00:00'), 207: 
Timestamp('2018-07-01 00:00:00'), 208: Timestamp('2018-10-01 00:00:00'), 209: Timestamp('2019-01-01 00:00:00'), 210: Timestamp('2019-04-01 00:00:00'), 211: Timestamp('2019-07-01 00:00:00'), 212: Timestamp('2019-10-01 00:00:00'), 213: Timestamp('2020-01-01 00:00:00'), 214: Timestamp('2020-04-01 00:00:00'), 215: Timestamp('2020-07-01 00:00:00'), 216: Timestamp('2020-10-01 00:00:00'), 217: Timestamp('2021-01-01 00:00:00'), 218: Timestamp('2021-01-20 00:00:00'), 219: Timestamp('2021-04-01 00:00:00'), 220: Timestamp('2021-07-01 00:00:00'), 221: Timestamp('2021-10-01 00:00:00'), 222: Timestamp('2022-01-01 00:00:00'), 223: Timestamp('2022-04-01 00:00:00'), 224: Timestamp('2022-07-01 00:00:00'), 225: Timestamp('2022-10-01 00:00:00'), 226: Timestamp('2023-01-01 00:00:00'), 227: Timestamp('2023-04-01 00:00:00'), 228: Timestamp('2023-07-01 00:00:00'), 229: Timestamp('2023-10-01 00:00:00'), 230: Timestamp('2024-01-01 00:00:00')}, 'Unemployment Rate': {1: 3.4, 2: 3.4, 3: 3.5, 4: 3.7, 5: 3.9, 6: 4.6, 7: 5.0, 8: 5.5, 9: 5.9, 10: 5.9, 11: 6.0, 12: 5.8, 13: 5.8, 14: 5.7, 15: 5.6, 16: 5.6, 17: 4.9, 18: 5.0, 19: 4.8, 20: 4.6, 21: 5.1, 22: 5.1, 23: 5.5, 24: 5.711956521739131, 25: 6.0, 26: 8.1, 27: 8.8, 28: 8.6, 29: 8.4, 30: 7.9, 31: 7.7, 32: 7.8, 33: 7.7, 34: 7.5, 35: 7.4366666666666665, 36: 7.2, 37: 6.9, 38: 6.8, 39: 6.4, 40: 6.1, 41: 6.2, 42: 5.8, 43: 5.9, 44: 5.8, 45: 5.7, 46: 6.0, 47: 6.3, 48: 6.9, 49: 7.8, 50: 7.5, 51: 7.5, 52: 7.4366666666666665, 53: 7.2, 54: 7.2, 55: 7.9, 56: 8.6, 57: 9.3, 58: 9.8, 59: 10.4, 60: 10.4, 61: 10.2, 62: 9.4, 63: 8.8, 64: 8.0, 65: 7.7, 66: 7.5, 67: 7.4, 68: 7.3, 69: 7.3, 70: 7.4, 71: 7.1, 72: 6.7, 73: 7.1, 74: 7.0, 75: 7.0, 76: 6.6, 77: 6.3, 78: 6.1, 79: 6.0, 80: 5.7, 81: 5.4, 82: 5.4, 83: 5.4, 84: 5.4, 85: 5.357777777777778, 86: 5.2, 87: 5.2, 88: 5.3, 89: 5.4, 90: 5.4, 91: 5.5, 92: 5.9, 93: 6.4, 94: 6.7, 95: 6.8, 96: 7.0, 97: 7.3, 98: 7.4, 99: 7.7, 100: 7.3, 101: 7.3, 102: 7.257777777777777, 103: 7.1, 104: 6.9, 105: 6.8, 106: 6.6, 107: 6.4, 108: 
6.1, 109: 5.8, 110: 5.6, 111: 5.8, 112: 5.7, 113: 5.5, 114: 5.6, 115: 5.6, 116: 5.5, 117: 5.2, 118: 5.3, 119: 5.1, 120: 4.9, 121: 4.7, 122: 4.6, 123: 4.3, 124: 4.5, 125: 4.5, 126: 4.3, 127: 4.3, 128: 4.3, 129: 4.1, 130: 4.0, 131: 3.8, 132: 4.0, 133: 3.9, 134: 4.2, 135: 4.242222222222223, 136: 4.4, 137: 4.6, 138: 5.3, 139: 5.7, 140: 5.9, 141: 5.8, 142: 5.7, 143: 5.8, 144: 6.0, 145: 6.2, 146: 6.0, 147: 5.7, 148: 5.6, 149: 5.5, 150: 5.5, 151: 5.3, 152: 5.2, 153: 5.0, 154: 5.0, 155: 4.7, 156: 4.7, 157: 4.7, 158: 4.4, 159: 4.6, 160: 4.5, 161: 4.7, 162: 4.7, 163: 5.0, 164: 5.0, 165: 5.8, 166: 6.5, 167: 7.8, 168: 8.053333333333333, 169: 9.0, 170: 9.5, 171: 10.0, 172: 9.8, 173: 9.9, 174: 9.4, 175: 9.4, 176: 9.1, 177: 9.1, 178: 9.0, 179: 8.8, 180: 8.3, 181: 8.2, 182: 8.2, 183: 7.8, 184: 8.0, 185: 7.6, 186: 7.3, 187: 7.2, 188: 6.6, 189: 6.2, 190: 6.2, 191: 5.7, 192: 5.7, 193: 5.4, 194: 5.2, 195: 5.0, 196: 4.8, 197: 5.1, 198: 4.8, 199: 4.9, 200: 4.7, 201: 4.636666666666667, 202: 4.4, 203: 4.3, 204: 4.2, 205: 4.0, 206: 4.0, 207: 3.8, 208: 3.8, 209: 4.0, 210: 3.7, 211: 3.7, 212: 3.6, 213: 3.6, 214: 14.8, 215: 10.2, 216: 6.8, 217: 6.4, 218: 6.336666666666667, 219: 6.1, 220: 5.4, 221: 4.5, 222: 4.0, 223: 3.7, 224: 3.5, 225: 3.6, 226: 3.4, 227: 3.4, 228: 3.5, 229: 3.8, 230: 3.7}, 'Republican': {1: True, 2: True, 3: True, 4: True, 5: True, 6: True, 7: True, 8: True, 9: True, 10: True, 11: True, 12: True, 13: True, 14: True, 15: True, 16: True, 17: True, 18: True, 19: True, 20: True, 21: True, 22: True, 23: True, 24: True, 25: True, 26: True, 27: True, 28: True, 29: True, 30: True, 31: True, 32: True, 33: True, 34: True, 35: False, 36: False, 37: False, 38: False, 39: False, 40: False, 41: False, 42: False, 43: False, 44: False, 45: False, 46: False, 47: False, 48: False, 49: False, 50: False, 51: False, 52: True, 53: True, 54: True, 55: True, 56: True, 57: True, 58: True, 59: True, 60: True, 61: True, 62: True, 63: True, 64: True, 65: True, 66: True, 67: True, 68: True, 69: True, 
70: True, 71: True, 72: True, 73: True, 74: True, 75: True, 76: True, 77: True, 78: True, 79: True, 80: True, 81: True, 82: True, 83: True, 84: True, 85: True, 86: True, 87: True, 88: True, 89: True, 90: True, 91: True, 92: True, 93: True, 94: True, 95: True, 96: True, 97: True, 98: True, 99: True, 100: True, 101: True, 102: False, 103: False, 104: False, 105: False, 106: False, 107: False, 108: False, 109: False, 110: False, 111: False, 112: False, 113: False, 114: False, 115: False, 116: False, 117: False, 118: False, 119: False, 120: False, 121: False, 122: False, 123: False, 124: False, 125: False, 126: False, 127: False, 128: False, 129: False, 130: False, 131: False, 132: False, 133: False, 134: False, 135: True, 136: True, 137: True, 138: True, 139: True, 140: True, 141: True, 142: True, 143: True, 144: True, 145: True, 146: True, 147: True, 148: True, 149: True, 150: True, 151: True, 152: True, 153: True, 154: True, 155: True, 156: True, 157: True, 158: True, 159: True, 160: True, 161: True, 162: True, 163: True, 164: True, 165: True, 166: True, 167: True, 168: False, 169: False, 170: False, 171: False, 172: False, 173: False, 174: False, 175: False, 176: False, 177: False, 178: False, 179: False, 180: False, 181: False, 182: False, 183: False, 184: False, 185: False, 186: False, 187: False, 188: False, 189: False, 190: False, 191: False, 192: False, 193: False, 194: False, 195: False, 196: False, 197: False, 198: False, 199: False, 200: False, 201: True, 202: True, 203: True, 204: True, 205: True, 206: True, 207: True, 208: True, 209: True, 210: True, 211: True, 212: True, 213: True, 214: True, 215: True, 216: True, 217: True, 218: False, 219: False, 220: False, 221: False, 222: False, 223: False, 224: False, 225: False, 226: False, 227: False, 228: False, 229: False, 230: False}, 'Democrat': {1: False, 2: False, 3: False, 4: False, 5: False, 6: False, 7: False, 8: False, 9: False, 10: False, 11: False, 12: False, 13: False, 14: False, 15: False, 16: 
False, 17: False, 18: False, 19: False, 20: False, 21: False, 22: False, 23: False, 24: False, 25: False, 26: False, 27: False, 28: False, 29: False, 30: False, 31: False, 32: False, 33: False, 34: False, 35: True, 36: True, 37: True, 38: True, 39: True, 40: True, 41: True, 42: True, 43: True, 44: True, 45: True, 46: True, 47: True, 48: True, 49: True, 50: True, 51: True, 52: False, 53: False, 54: False, 55: False, 56: False, 57: False, 58: False, 59: False, 60: False, 61: False, 62: False, 63: False, 64: False, 65: False, 66: False, 67: False, 68: False, 69: False, 70: False, 71: False, 72: False, 73: False, 74: False, 75: False, 76: False, 77: False, 78: False, 79: False, 80: False, 81: False, 82: False, 83: False, 84: False, 85: False, 86: False, 87: False, 88: False, 89: False, 90: False, 91: False, 92: False, 93: False, 94: False, 95: False, 96: False, 97: False, 98: False, 99: False, 100: False, 101: False, 102: True, 103: True, 104: True, 105: True, 106: True, 107: True, 108: True, 109: True, 110: True, 111: True, 112: True, 113: True, 114: True, 115: True, 116: True, 117: True, 118: True, 119: True, 120: True, 121: True, 122: True, 123: True, 124: True, 125: True, 126: True, 127: True, 128: True, 129: True, 130: True, 131: True, 132: True, 133: True, 134: True, 135: False, 136: False, 137: False, 138: False, 139: False, 140: False, 141: False, 142: False, 143: False, 144: False, 145: False, 146: False, 147: False, 148: False, 149: False, 150: False, 151: False, 152: False, 153: False, 154: False, 155: False, 156: False, 157: False, 158: False, 159: False, 160: False, 161: False, 162: False, 163: False, 164: False, 165: False, 166: False, 167: False, 168: True, 169: True, 170: True, 171: True, 172: True, 173: True, 174: True, 175: True, 176: True, 177: True, 178: True, 179: True, 180: True, 181: True, 182: True, 183: True, 184: True, 185: True, 186: True, 187: True, 188: True, 189: True, 190: True, 191: True, 192: True, 193: True, 194: True, 195: True, 196: 
True, 197: True, 198: True, 199: True, 200: True, 201: False, 202: False, 203: False, 204: False, 205: False, 206: False, 207: False, 208: False, 209: False, 210: False, 211: False, 212: False, 213: False, 214: False, 215: False, 216: False, 217: False, 218: True, 219: True, 220: True, 221: True, 222: True, 223: True, 224: True, 225: True, 226: True, 227: True, 228: True, 229: True, 230: True}, 'President': {1: 'Nixon', 2: 'Nixon', 3: 'Nixon', 4: 'Nixon', 5: 'Nixon', 6: 'Nixon', 7: 'Nixon', 8: 'Nixon', 9: 'Nixon', 10: 'Nixon', 11: 'Nixon', 12: 'Nixon', 13: 'Nixon', 14: 'Nixon', 15: 'Nixon', 16: 'Nixon', 17: 'Nixon', 18: 'Nixon', 19: 'Nixon', 20: 'Nixon', 21: 'Nixon', 22: 'Nixon', 23: 'Nixon', 24: 'Ford', 25: 'Ford', 26: 'Ford', 27: 'Ford', 28: 'Ford', 29: 'Ford', 30: 'Ford', 31: 'Ford', 32: 'Ford', 33: 'Ford', 34: 'Ford', 35: 'Carter', 36: 'Carter', 37: 'Carter', 38: 'Carter', 39: 'Carter', 40: 'Carter', 41: 'Carter', 42: 'Carter', 43: 'Carter', 44: 'Carter', 45: 'Carter', 46: 'Carter', 47: 'Carter', 48: 'Carter', 49: 'Carter', 50: 'Carter', 51: 'Carter', 52: 'Reagan', 53: 'Reagan', 54: 'Reagan', 55: 'Reagan', 56: 'Reagan', 57: 'Reagan', 58: 'Reagan', 59: 'Reagan', 60: 'Reagan', 61: 'Reagan', 62: 'Reagan', 63: 'Reagan', 64: 'Reagan', 65: 'Reagan', 66: 'Reagan', 67: 'Reagan', 68: 'Reagan', 69: 'Reagan', 70: 'Reagan', 71: 'Reagan', 72: 'Reagan', 73: 'Reagan', 74: 'Reagan', 75: 'Reagan', 76: 'Reagan', 77: 'Reagan', 78: 'Reagan', 79: 'Reagan', 80: 'Reagan', 81: 'Reagan', 82: 'Reagan', 83: 'Reagan', 84: 'Reagan', 85: 'Bush', 86: 'Bush', 87: 'Bush', 88: 'Bush', 89: 'Bush', 90: 'Bush', 91: 'Bush', 92: 'Bush', 93: 'Bush', 94: 'Bush', 95: 'Bush', 96: 'Bush', 97: 'Bush', 98: 'Bush', 99: 'Bush', 100: 'Bush', 101: 'Bush', 102: 'Clinton', 103: 'Clinton', 104: 'Clinton', 105: 'Clinton', 106: 'Clinton', 107: 'Clinton', 108: 'Clinton', 109: 'Clinton', 110: 'Clinton', 111: 'Clinton', 112: 'Clinton', 113: 'Clinton', 114: 'Clinton', 115: 'Clinton', 116: 'Clinton', 117: 'Clinton', 
118: 'Clinton', 119: 'Clinton', 120: 'Clinton', 121: 'Clinton', 122: 'Clinton', 123: 'Clinton', 124: 'Clinton', 125: 'Clinton', 126: 'Clinton', 127: 'Clinton', 128: 'Clinton', 129: 'Clinton', 130: 'Clinton', 131: 'Clinton', 132: 'Clinton', 133: 'Clinton', 134: 'Clinton', 135: 'Bush', 136: 'Bush', 137: 'Bush', 138: 'Bush', 139: 'Bush', 140: 'Bush', 141: 'Bush', 142: 'Bush', 143: 'Bush', 144: 'Bush', 145: 'Bush', 146: 'Bush', 147: 'Bush', 148: 'Bush', 149: 'Bush', 150: 'Bush', 151: 'Bush', 152: 'Bush', 153: 'Bush', 154: 'Bush', 155: 'Bush', 156: 'Bush', 157: 'Bush', 158: 'Bush', 159: 'Bush', 160: 'Bush', 161: 'Bush', 162: 'Bush', 163: 'Bush', 164: 'Bush', 165: 'Bush', 166: 'Bush', 167: 'Bush', 168: 'Obama', 169: 'Obama', 170: 'Obama', 171: 'Obama', 172: 'Obama', 173: 'Obama', 174: 'Obama', 175: 'Obama', 176: 'Obama', 177: 'Obama', 178: 'Obama', 179: 'Obama', 180: 'Obama', 181: 'Obama', 182: 'Obama', 183: 'Obama', 184: 'Obama', 185: 'Obama', 186: 'Obama', 187: 'Obama', 188: 'Obama', 189: 'Obama', 190: 'Obama', 191: 'Obama', 192: 'Obama', 193: 'Obama', 194: 'Obama', 195: 'Obama', 196: 'Obama', 197: 'Obama', 198: 'Obama', 199: 'Obama', 200: 'Obama', 201: 'Trump', 202: 'Trump', 203: 'Trump', 204: 'Trump', 205: 'Trump', 206: 'Trump', 207: 'Trump', 208: 'Trump', 209: 'Trump', 210: 'Trump', 211: 'Trump', 212: 'Trump', 213: 'Trump', 214: 'Trump', 215: 'Trump', 216: 'Trump', 217: 'Trump', 218: 'Biden', 219: 'Biden', 220: 'Biden', 221: 'Biden', 222: 'Biden', 223: 'Biden', 224: 'Biden', 225: 'Biden', 226: 'Biden', 227: 'Biden', 228: 'Biden', 229: 'Biden', 230: 'Biden'}}
df = pd.DataFrame.from_dict(data)
# set up plot
f, ax = plt.subplots(figsize=(12, 8))
df.plot(ax=ax, x="Date", y="Unemployment Rate",color="darkgreen", zorder=2, legend=False)# legend=False is ignored
y1, y2 = ax.get_ylim()
ax.fill_between(df["Date"], y1=y1, y2=y2, where=df["Republican"], color="#E81B23", alpha=0.5, zorder=1, label="Republican")
ax.fill_between(df["Date"], y1=y1, y2=y2, where=df["Democrat"], color="#00AEF3", alpha=0.5, zorder=1, label="Democrat")
# set labels on every 4th years at selected dates | attempt 1
ax.xaxis.set_major_locator(dates.YearLocator(base=4))
# set labels on every 4th years at selected dates | attempt 2
#lab = list(range(1969,2026,4))
#ax.xaxis.set_major_formatter(ticker.FixedFormatter(lab))
ax.figure.autofmt_xdate(rotation=0, ha="center")
ax.set_xlabel('')
ax.set_ylabel('Unemployment Rate (%)')
ax.set_title("U.S. Unemployment Rate")
# set legend
ax.legend()# I do want to keep the filled area legends
# position the legend
ax.legend(bbox_to_anchor=(0.8, 1.08), loc="center")
# set grid
ax.grid(True)
f.tight_layout()# not as tight as I would like
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/W6TNrTwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W6TNrTwX.png" alt="enter image description here" /></a></p>
<p>P.S. Unrelated things I'd like to do: remove the white space outside of the blue/red filled areas; remove the "Unemployment Rate" from the legend (while keeping Republican/Democrat).</p>
|
<python><matplotlib>
|
2024-06-23 10:41:47
| 1
| 10,646
|
PatrickT
|
78,658,112
| 14,655,211
|
How do I repeat question in python if user enters wrong input?
|
<p>I am asking the user a series of questions in this loop and assigning each answer to a variable to be used later in my script.</p>
<p>I'm also doing some error handling for when the user doesn't enter a number as a response, but I'm not sure how to get it to repeat the relevant question if the user enters invalid input.</p>
<pre><code>questions = ['Please enter your gross annual salary: ', 'Please enter the annual amount of any bonus or commission that appears on your payslip. If not applicable enter 0: ', 'Please enter the annual amount of any benefits in kind that appear on your payslip. If not applicable enter 0: ', 'Please enter your annual total tax credit: ', 'Please enter your annual total pension contribution. If not applicable enter 0: ', 'Do you have any other non tax deductible income? (ie. Holiday purchase schemes etc.) If not applicable enter 0: ']
variables = ['salary', 'bonus', 'bik', 'tax_credits', 'pension contributions', 'other_non_tax_deductibles']
for question, variable in zip(questions, variables):
try:
        variable = float(input(question))
except ValueError:
print("Please enter a number")
else:
print(variable)
</code></pre>
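<p>Not part of the question's code, but a common retry pattern is to wrap each prompt in a <code>while</code> loop that only returns once <code>float()</code> succeeds. In this sketch the reader function is injectable (an addition for testability), so the loop can be exercised without a live user:</p>

```python
def ask_float(prompt, read=input):
    """Keep re-asking the same question until the reply parses as a float."""
    while True:
        try:
            return float(read(prompt))
        except ValueError:
            print("Please enter a number")

# Simulate a user who answers badly once, then correctly.
answers = iter(["abc", "42.5"])
salary = ask_float("Please enter your gross annual salary: ",
                   read=lambda p: next(answers))
print(salary)  # 42.5
```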
|
<python>
|
2024-06-23 08:10:15
| 0
| 471
|
DeirdreRodgers
|
78,658,075
| 7,339,624
|
Unexpected horizontal line in discrete inverse Fourier transform of Gaussian function with odd number of samples
|
<p>I'm implementing a discrete inverse Fourier transform in Python to approximate the inverse Fourier transform of a Gaussian function.</p>
<p>While the output looks promising, I'm encountering an unexpected horizontal line when the number of samples <code>n</code> is an odd number.</p>
<p>For <code>n = 1000</code> (even), the output looks correct:
<a href="https://i.sstatic.net/vMdCk5o7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vMdCk5o7.png" alt="enter image description here" /></a></p>
<p>For <code>n = 1001</code> (odd), an unexpected horizontal line appears:
<a href="https://i.sstatic.net/Da6yrIq4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Da6yrIq4.png" alt="enter image description here" /></a></p>
<p>Why does this horizontal line appear when n is odd?
Any insights or suggestions would be greatly appreciated. Thank you!</p>
<h2>My Implementation</h2>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from sympy import symbols, exp, pi, lambdify, sqrt
# Defining the Fourier transform of a Gaussian function, sqrt(pi) * exp(-omega ** 2 / 4)
x, omega = symbols('x omega')
f_gaussian_symbolic = exp(-omega ** 2 / 4) * sqrt(pi)
f_gaussian_function = lambdify(omega, f_gaussian_symbolic, 'numpy')
def fourier_inverse(f, n):
"""
This function computes the inverse Fourier transform of a function f.
:param f: The function to be transformed
:param n: Number of samples
"""
omega_max = 20 # The max frequency we want to be sampled
omega_range = np.linspace(-omega_max, omega_max, n)
f_values = f(omega_range)
inverse_f = np.fft.ifftshift(np.fft.ifft(np.fft.fftshift(f_values)))
delta_omega = omega_range[1] - omega_range[0]
x_range = 2 * np.pi * np.fft.ifftshift(np.fft.fftfreq(n, d=delta_omega))
inverse_f *= delta_omega * n / (2 * np.pi)
return x_range, inverse_f
plt.figure(figsize=(10, 5))
x_range, inverse_f = fourier_inverse(f_gaussian_function, n=1001)
plt.plot(x_range, inverse_f.real)
plt.ylim(-2, 2)
plt.xlim(-4, 4)
plt.show()
</code></pre>
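<p>One observable even/odd asymmetry worth checking (an observation that may relate to the artifact, not a guaranteed full fix): for odd <code>n</code>, <code>fftshift</code> and <code>ifftshift</code> are no longer the same permutation, so only <code>ifftshift</code> moves the centered ω = 0 sample back to index 0 before the FFT:</p>

```python
import numpy as np

w = np.linspace(-2, 2, 5)         # symmetric grid, 0 at the center (odd n)
print(np.fft.ifftshift(w))        # [ 0.  1.  2. -2. -1.] -> 0 lands at index 0
print(np.fft.fftshift(w))         # [ 1.  2. -2. -1.  0.] -> 0 does not

w_even = np.linspace(-2, 1, 4)    # for even n the two shifts coincide
print(np.array_equal(np.fft.fftshift(w_even), np.fft.ifftshift(w_even)))  # True
```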
|
<python><numpy><matplotlib><signal-processing><fft>
|
2024-06-23 07:52:03
| 0
| 4,337
|
Peyman
|
78,657,967
| 5,540,159
|
Signal processing Conv1D Keras
|
<p>I am learning Keras using a signal-classification task where all values are binary (0 or 1). The input data is a vector of [,32] and the output is [,16]. I have a large dataset of more than 400K samples.</p>
<p>This is the dataset: <a href="https://www.dropbox.com/scl/fi/bx51lgwsd1pa7l2c4issb/NewNew.csv?rlkey=mpqdttoq74f9s1xlwzrp5scct&st=km4ixm7e&dl=0" rel="nofollow noreferrer">LINK</a></p>
<p>I want to build a CNN model to predict Y from X. I built the following code:</p>
<pre><code>import tensorflow as tf
import numpy as np
import pandas as pd
from numpy import std
from tensorflow import keras
from keras import layers
from keras.layers import Input, Dense
from keras.models import Model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
import time
import matplotlib.pyplot as plt
from keras.models import load_model
from keras.layers import Lambda, Input, Dense, Dropout, BatchNormalization
LinkOfDataset= 'NewNew.csv'
Data=pd.read_csv(LinkOfDataset,encoding= 'utf-8')
ChallengeLen= 32
ResponseLen= 16
NumberOfTraining= 0.6
NumberOfValidation= 0.2
NumberOfTesting=0.2
AllData=Data;
TrainData , TestData=train_test_split(AllData, train_size=0.8);
TestData , ValidationData=train_test_split(TestData, train_size=0.8);
XTrainData = TrainData.iloc[:, :ChallengeLen]
YTrainData = TrainData.iloc[:, ChallengeLen:]
XValidationData= ValidationData.iloc[:, :ChallengeLen]
YValidationData= ValidationData.iloc[:, ChallengeLen:]
XTestData = TestData.iloc[:, :ChallengeLen]
YTestData = TestData.iloc[:, ChallengeLen:]
# Split the data into X and y
n_inputs = 32 #input size (num of columns)
n_outputs = 16 #output size (num of columns)
# cnn model
from numpy import mean
from numpy import std
from numpy import dstack
from pandas import read_csv
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Dropout
from keras.utils import to_categorical
import keras
from keras.layers import Dense, Conv1D, Dropout, Reshape, MaxPooling1D
#%% AE
from keras.layers import Input, Dense, BatchNormalization, Dropout
from keras.models import Model
dropout_rate = 0.5 # You can adjust this value
# Create the encoder layers
# Define model parameters
n_timesteps = 32 # length of the input sequence
n_features = 1 # number of features per timestep
n_outputs = 16 # number of output classes
# Define the model
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Conv1D(filters=128, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
optimizer = tf.keras.optimizers.Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
history = model.fit(XTrainData, YTrainData, epochs=10, batch_size=100,
validation_data=(XValidationData, YValidationData))
model.save("PAutoencoder.h5")
# Plotting the training and validation loss
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss') # Now this should work
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
</code></pre>
<p>I don't understand why my model did not learn. The output is:</p>
<pre><code>Epoch 1/10
: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 7ms/step - accuracy: 0.0527 - loss: 18883880.0000 - val_accuracy: 0.0116 - val_loss: 433219648.0000
Epoch 2/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 24s 10ms/step - accuracy: 0.0362 - loss: 1131962368.0000 - val_accuracy: 0.0116 - val_loss: 5247283200.0000
Epoch 3/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 22s 9ms/step - accuracy: 0.0307 - loss: 8491829760.0000 - val_accuracy: 0.0116 - val_loss: 21394169856.0000
Epoch 4/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 7ms/step - accuracy: 0.0265 - loss: 30088540160.0000 - val_accuracy: 3.1636e-04 - val_loss: 60751552512.0000
Epoch 5/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 18s 7ms/step - accuracy: 0.0255 - loss: 77107576832.0000 - val_accuracy: 3.1636e-04 - val_loss: 132881571840.0000
Epoch 6/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 17s 7ms/step - accuracy: 0.0243 - loss: 168126316544.0000 - val_accuracy: 0.0116 - val_loss: 278413082624.0000
Epoch 7/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 7ms/step - accuracy: 0.0248 - loss: 339567247360.0000 - val_accuracy: 0.0226 - val_loss: 518753681408.0000
Epoch 8/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 8ms/step - accuracy: 0.0261 - loss: 616867364864.0000 - val_accuracy: 3.1636e-04 - val_loss: 891050786816.0000
Epoch 9/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 20s 8ms/step - accuracy: 0.0248 - loss: 1045693661184.0000 - val_accuracy: 0.0116 - val_loss: 1515715559424.0000
Epoch 10/10
2529/2529 ━━━━━━━━━━━━━━━━━━━━ 19s 8ms/step - accuracy: 0.0264 - loss: 1706346020864.0000 - val_accuracy: 3.1636e-04 - val_loss: 2313546366976.0000
WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
</code></pre>
<p><a href="https://i.sstatic.net/JO3tzs2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JO3tzs2C.png" alt="enter image description here" /></a></p>
<p>I want my model to predict Y correctly with a lower loss!</p>
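<p>Not from the question itself, but one plausible culprit to check: the targets are 16 independent bits, while <code>softmax</code> + <code>categorical_crossentropy</code> model a single 16-way choice. A small NumPy sketch of the difference (the logits are illustrative only):</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([4.0, 4.0, -4.0, -4.0])  # two bits "on", two "off"
print(softmax(z))   # sums to 1 -> cannot say two bits are both likely
print(sigmoid(z))   # per-bit probabilities -> both first bits near 1
```

<p>Under that reading, a <code>sigmoid</code> output layer with <code>binary_crossentropy</code> would be the first thing to try; whether it also fixes the exploding loss depends on the data and input scaling.</p>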
|
<python><keras><deep-learning><neural-network><conv-neural-network>
|
2024-06-23 06:55:37
| 0
| 962
|
stevGates
|
78,657,909
| 482,519
|
How to build external URL with parameters in flask/jinja2?
|
<p>Simple question: how can I build URLs with parameters for external websites?</p>
<p>Flask's helper <code>url_for</code> is really handy for building internal links, and I was wondering if it could also be used for external links, but it seems this is not as simple as it sounds.</p>
<pre class="lang-html prettyprint-override"><code><a href="{{ url_for('endpoint', foo='bar') }}">internal</a>
<a href="{{ url_for('https://example.net/', foo='bar', _external=true) }}">external (doesn't work)</a>
</code></pre>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, url_for
app = Flask(__name__)
app.config["SERVER_NAME"] = "example.com"
app.config["PREFERRED_URL_SCHEME"] = "https"
with app.app_context():
# Expected: "https://example.net/?foo=bar"
print(url_for("https://example.net/", foo="bar", _external=True))
</code></pre>
<p><code>url_for</code> has <code>_external</code> option, but it seems different from what I imagined.</p>
<hr />
<h2>Edit</h2>
<p>I currently use <a href="https://jinja.palletsprojects.com/en/3.0.x/templates/#jinja-filters.urlencode" rel="nofollow noreferrer">urlencode</a> filter to build parameters:</p>
<pre class="lang-html prettyprint-override"><code><a href="https://example.net/?{{ dict(foo='bar') | urlencode }}">external link</a>
</code></pre>
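<p>For completeness, plain <code>urllib.parse</code> can do the same outside of Jinja — a small helper sketch (the name <code>external_url</code> is made up here, and it replaces any query string already on <code>base</code>):</p>

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def external_url(base, **params):
    """Append query parameters to an arbitrary absolute URL."""
    parts = urlsplit(base)
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(params), parts.fragment))

print(external_url("https://example.net/", foo="bar"))
# https://example.net/?foo=bar
```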
|
<python><flask><jinja2>
|
2024-06-23 06:18:46
| 2
| 3,675
|
ernix
|
78,657,558
| 7,583,953
|
Does a recursive segment tree require more space than an iterative one?
|
<p>I'm learning the segment tree data structure.</p>
<p>I have seen several iterative segment trees that use only 2n space. So I tried to use the same build method in a segment tree with recursive update and sumRange. Is this not allowed? Why can an iterative seg tree be stored in 2n but a recursive one needs 4n? Or do I just have an implementation flaw in my non-working tree?</p>
<p>For my 2n tree, I'm using a 1-indexed tree, so nothing is stored at <code>tree[0]</code>. This means the root is at <code>tree[1]</code>. I make recursive calls using initial range 1 to n - 1, which I'm not sure about. I get different wrong answers when I make it go to self.n or start at 0. I also get different wrong answers if I pass in index+1, left+1 or right+1</p>
<p>Here is my implementation:</p>
<pre class="lang-py prettyprint-override"><code>class NumArray:
# Classic Segment Tree
def __init__(self, nums: List[int]):
self.n = len(nums)
self.tree = [0] * self.n * 2
self.build(nums)
def build(self, nums):
# leaves
for i in range(self.n):
self.tree[i + self.n] = nums[i]
# internal
for i in range(self.n - 1, 0, -1):
self.tree[i] = self.tree[i * 2] + self.tree[i * 2 + 1]
def merge(self, left, right):
return left + right
def _update(self, tree_idx, seg_left, seg_right, i, val):
# leaf
if seg_left == seg_right:
self.tree[tree_idx] = val
return
mid = (seg_left + seg_right) // 2
if i > mid:
self._update(tree_idx * 2 + 1, mid + 1, seg_right, i, val)
else:
self._update(tree_idx * 2, seg_left, mid, i, val)
self.tree[tree_idx] = self.merge(self.tree[tree_idx * 2], self.tree[tree_idx * 2 + 1])
def update(self, index: int, val: int) -> None:
self._update(1, 1, self.n - 1, index, val)
def _sumRange(self, tree_idx, seg_left, seg_right, query_left, query_right):
# segment out of query bounds
if seg_left > query_right or seg_right < query_left:
return 0
# segment fully in bounds
if seg_left >= query_left and seg_right <= query_right:
return self.tree[tree_idx]
# segment partially in bounds
mid = (seg_left + seg_right) // 2
# this is not necessary for correctness, but helps with efficiency (we only go down 1 path if 2 is unnecessary)
if query_left > mid:
return self._sumRange(tree_idx * 2 + 1, mid + 1, seg_right, query_left, query_right)
elif query_right <= mid:
return self._sumRange(tree_idx * 2, seg_left, mid, query_left, query_right)
left_sum = self._sumRange(tree_idx * 2, seg_left, mid, query_left, query_right)
right_sum = self._sumRange(tree_idx * 2 + 1, mid + 1, seg_right, query_left, query_right)
return self.merge(left_sum, right_sum)
def sumRange(self, left: int, right: int) -> int:
return self._sumRange(1, 1, self.n - 1, left, right)
</code></pre>
<p>I do have a working fully recursive 0-indexed version, but it uses double the space</p>
<pre><code>class NumArray:
# Classic Segment Tree
# 0-indexed recursive
def __init__(self, nums: List[int]):
self.n = len(nums)
self.tree = [0] * self.n * 4
self.build(nums, 0, 0, self.n - 1)
def build(self, nums, tree_idx, left, right):
# leaf
if left == right:
self.tree[tree_idx] = nums[left]
return
mid = (left + right) // 2
self.build(nums, tree_idx * 2 + 1, left, mid)
self.build(nums, tree_idx * 2 + 2, mid + 1, right)
self.tree[tree_idx] = self.tree[tree_idx * 2 + 1] + self.tree[tree_idx * 2 + 2]
def merge(self, left, right):
return left + right
def _update(self, tree_idx, seg_left, seg_right, i, val):
# leaf
if seg_left == seg_right:
self.tree[tree_idx] = val
return
mid = (seg_left + seg_right) // 2
if i > mid:
self._update(tree_idx * 2 + 2, mid + 1, seg_right, i, val)
else:
self._update(tree_idx * 2 + 1, seg_left, mid, i, val)
self.tree[tree_idx] = self.merge(self.tree[tree_idx * 2 + 1], self.tree[tree_idx * 2 + 2])
def update(self, index: int, val: int) -> None:
self._update(0, 0, self.n - 1, index, val)
def _sumRange(self, tree_idx, seg_left, seg_right, query_left, query_right):
# segment out of query bounds
if seg_left > query_right or seg_right < query_left:
return 0
# segment fully in bounds
if seg_left >= query_left and seg_right <= query_right:
return self.tree[tree_idx]
# segment partially in bounds
mid = (seg_left + seg_right) // 2
# this is not necessary for correctness, but helps with efficiency (we only go down 1 path if 2 is unnecessary)
if query_left > mid:
return self._sumRange(tree_idx * 2 + 2, mid + 1, seg_right, query_left, query_right)
elif query_right <= mid:
return self._sumRange(tree_idx * 2 + 1, seg_left, mid, query_left, query_right)
left_sum = self._sumRange(tree_idx * 2 + 1, seg_left, mid, query_left, query_right)
right_sum = self._sumRange(tree_idx * 2 + 2, mid + 1, seg_right, query_left, query_right)
return self.merge(left_sum, right_sum)
def sumRange(self, left: int, right: int) -> int:
return self._sumRange(0, 0, self.n - 1, left, right)
</code></pre>
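<p>For contrast, the standard iterative bottom-up layout really does live in <code>2n</code>: point update and half-open range query both walk from the leaves upward, so no segment range is ever recursed over. A generic sketch, not tied to the LeetCode class interface:</p>

```python
class IterSegTree:
    """Bottom-up segment tree in 2n space; query sums over [left, right)."""
    def __init__(self, nums):
        self.n = len(nums)
        self.tree = [0] * (2 * self.n)
        self.tree[self.n:] = nums                  # leaves
        for i in range(self.n - 1, 0, -1):         # internal nodes
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, val):
        i += self.n
        self.tree[i] = val
        while i > 1:                               # fix ancestors on the way up
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, left, right):                  # sum over [left, right)
        res = 0
        left += self.n
        right += self.n
        while left < right:
            if left & 1:                           # left is a right child
                res += self.tree[left]
                left += 1
            if right & 1:                          # right bound is a right child
                right -= 1
                res += self.tree[right]
            left //= 2
            right //= 2
        return res

t = IterSegTree([1, 2, 3, 4, 5])
print(t.query(0, 5))   # 15
t.update(2, 10)
print(t.query(1, 4))   # 2 + 10 + 4 = 16
```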
<p><a href="https://leetcode.com/problems/range-sum-query-mutable/" rel="nofollow noreferrer">This website</a> verifies if a segment tree implementation is correct</p>
<p>Also, I know the recursive version uses more call-stack space; that's not what my question is about.</p>
|
<python><algorithm><data-structures><segment-tree>
|
2024-06-23 01:08:15
| 1
| 9,733
|
Alec
|
78,657,517
| 6,147,374
|
Pushing dataframe to Azure SQL DB table with sqlalchemy
|
<p>I had a Python script that pushed data frames to an Azure SQL DB and it was working fine. I now use a new computer and had to update Python and the packages the script uses, such as SQLAlchemy.</p>
<p>Now the script executes, but the part that pushes dataframe data to the SQL tables is extremely slow. I even tried PySpark and got the same results. What has changed that is causing this? Here is the code:</p>
<pre><code>con_str = (
'Driver={ODBC Driver 17 for SQL Server};'
'SERVER=myserver.database.windows.net;'
'Database=mydb;'
'UID=myuserid;'
'PWD=mypw;'
'Trusted_Connection=no;'
)
# create engine
connection_uri = f'mssql+pyodbc:///?odbc_connect={urllib.parse.quote_plus(con_str)}'
engine = sa.create_engine(connection_uri, fast_executemany=True)
conn_sa=engine.connect()
df_projects_2.to_sql("WfProjects", engine, schema="wf", if_exists="append", index=False)
</code></pre>
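<p>A runnable sketch of the same call shape, with an in-memory SQLite engine standing in for the Azure one (so <code>schema="wf"</code> is dropped); <code>chunksize</code> and wrapping the push in one transaction are two knobs commonly tried when <code>to_sql</code> gets slow:</p>

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # in-memory stand-in for the Azure engine
df = pd.DataFrame({"a": range(10), "b": list("abcdefghij")})

with engine.begin() as conn:            # one transaction for the whole push
    df.to_sql("WfProjects", conn, if_exists="append", index=False, chunksize=5)

with engine.connect() as conn:
    n = conn.execute(sa.text("SELECT COUNT(*) FROM WfProjects")).scalar_one()
print(n)  # 10
```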
|
<python><sqlalchemy><azure-sql-database>
|
2024-06-23 00:35:19
| 1
| 4,319
|
Ibo
|
78,657,390
| 2,748,409
|
Accessing elements of an awkward array that are not a passed-in index
|
<p>I'm trying to access the elements of an awkward array that do <em>not</em> correspond to some particular set of indices. I have 3 events in total with one jet per event and some number of leptons. Each lepton has a particular flag associated with it. For each jet I keep track of the indices of the leptons in that jet:</p>
<pre><code>jet_lepton_indices = ak.Array([[0, 2], [1], [2,3]])
print(f'jet_lepton_indices\n{jet_lepton_indices}\n')
lepton_flags = ak.Array([[0, 10, 20, 30], [0, 10, 20, 30], [0, 10, 20, 30, 40]])
print(f'lepton_flags\n{lepton_flags}\n')
</code></pre>
<p>The output:</p>
<pre><code>jet_lepton_indices
[[0, 2], [1], [2, 3]]
lepton_flags
[[0, 10, 20, 30], [0, 10, 20, 30], [0, 10, 20, 30, 40]]
</code></pre>
<p>If I want the flags of the leptons that are in each jet I do <code>lepton_flags[jet_lepton_indices]</code> and get:</p>
<pre><code>[[0, 20],
[10],
[20, 30]]
</code></pre>
<p>But I also need to access all the lepton flags associated with leptons that are <em>not</em> in the jets. I'd like to be able to produce:</p>
<pre><code>[[10, 30],
[0, 20, 30],
[0, 10, 40]]
</code></pre>
<p>I thought I could do <code>lepton_flags[~jet_lepton_indices]</code>, but that has behavior I don't understand. I also can't figure out a way to do it by flattening and unflattening.</p>
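<p>For the small example above, the intended "everything not indexed" output can at least be checked against a plain-Python/NumPy reference built with per-event boolean masks. This side-steps awkward entirely, so it is a cross-check rather than an awkward-idiomatic solution:</p>

```python
import numpy as np

jet_lepton_indices = [[0, 2], [1], [2, 3]]
lepton_flags = [[0, 10, 20, 30], [0, 10, 20, 30], [0, 10, 20, 30, 40]]

out = []
for idx, flags in zip(jet_lepton_indices, lepton_flags):
    mask = np.ones(len(flags), dtype=bool)   # start with "keep everything"
    mask[idx] = False                        # drop the in-jet leptons
    out.append(np.asarray(flags)[mask].tolist())

print(out)  # [[10, 30], [0, 20, 30], [0, 10, 40]]
```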
|
<python><awkward-array>
|
2024-06-22 23:02:58
| 2
| 440
|
Matt Bellis
|
78,657,336
| 12,091,935
|
Convex Function in CVXPy does not appear to be producing the optimal solution for a battery in an electircal system
|
<p>I am working on optimizing a battery system(1 MWh) that, in conjunction with solar power(200 kW nameplate capacity), aims to reduce electricity costs for a commercial building. For those unfamiliar with commercial electricity pricing, here's a brief overview: charges typically include energy charges (based on total energy consumption over a month) and demand charges (based on the peak energy usage within a 15-minute interval, referred to as peak demand). These rates vary throughout the day based on Time of Use (TOU).</p>
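<p>To make the two charge types concrete, here is a minimal bill calculation over 15-minute interval data (the rates are made-up illustrations, and a real TOU tariff splits both terms by period, as the code below does with the alpha and beta matrices):</p>

```python
import numpy as np

def monthly_bill(load_kw, energy_rate, demand_rate, interval_h=0.25):
    """Energy charge on total kWh plus demand charge on the peak interval."""
    energy_kwh = load_kw.sum() * interval_h
    peak_kw = load_kw.max()                  # worst 15-minute average demand
    return energy_kwh * energy_rate + peak_kw * demand_rate

load = np.array([100.0, 100.0, 200.0, 100.0])   # four 15-minute intervals
print(monthly_bill(load, energy_rate=0.10, demand_rate=15.0))  # 3012.5
```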
<p>The objective is to minimize monthly energy consumption to lower energy charges, while also ensuring enough energy is stored in the battery to mitigate sudden spikes in consumption to reduce demand charges. The battery should achieve this by charging when solar generation exceeds building consumption and discharging when consumption exceeds solar production. This should be straightforward with a basic load-following algorithm when sufficient solar and battery storage are available.</p>
<p>I tested this approach with data that successfully optimized battery operations (shown in Figure 1). However, using convex optimization resulted in significantly poorer performance (Figure 2). The optimized solution from the convex solver increased energy consumption and worsened demand charges compared to not using a battery at all. Despite optimizing for TOU rates, the solver's output falls short of an ideal solution. I have thoroughly reviewed my code, objectives, and constraints, and they appear correct to me. My hypothesis is that the solver's algorithm might prioritize sending excess power to the grid (resulting in positive peaks), potentially in an attempt to offset negative peaks. Maybe that is why there is a random peak on the last data point.</p>
<p>Figure 1: Near Ideal Battery Operation from the Load Following Algorithm
<a href="https://i.sstatic.net/53OBhFMH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53OBhFMH.png" alt="" /></a>
Figure 2: Battery Operation from the Convex Algorithm
<a href="https://i.sstatic.net/UJU3cnED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UJU3cnED.png" alt="" /></a>
Ideally, I aim to minimize both energy and demand charges, except when it's economical to store excess power for anticipated high-demand periods. Any insights or suggestions on refining this approach would be greatly appreciated.</p>
<p>Thank you for your assistance.</p>
<p>Convex Optimization CVXPy Code:</p>
<pre><code># Import libraries needed
import numpy as np
import cvxpy as cp
import matplotlib.pyplot as plt
# One day of 15-minute load data
load = [ 36, 42, 40, 42, 40, 44, 42, 42, 40, 32, 32, 32, 32,
30, 34, 30, 32, 30, 32, 32, 32, 32, 32, 32, 30, 32,
32, 34, 54, 62, 66, 66, 76, 76, 80, 78, 80, 80, 82,
78, 46, 104, 78, 76, 74, 78, 82, 88, 96, 84, 94, 92,
92, 92, 92, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100,
100, 100, 86, 86, 82, 66, 72, 56, 56, 54, 48, 48, 42,
50, 42, 46, 46, 46, 42, 42, 42, 44, 44, 36, 34, 32,
34, 32, 34, 32, 32]
# One day of 15-minute solar data
solar = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
2, 6, 14, 26, 46, 66, 86, 104, 120, 138, 154, 168, 180,
190, 166, 152, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200,
200, 200, 200, 200, 200, 200, 190, 178, 164, 148, 132, 114, 96,
76, 58, 40, 22, 4, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
# Define alpha matrix which are the TOU energy charges for one day
lg = [31, 16, 25, 20, 4] # Length of each TOU period in 15 minute intervals
pk = ['off', 'mid', 'on', 'mid', 'off'] # Classification of each TOU period
alpha = np.array([])
for i in range(len(lg)):
if pk[i] == 'on':
mult = 0.1079
elif pk[i] == 'mid':
mult = 0.0874
elif pk[i] == 'off':
mult = 0.0755
alpha = np.append(alpha, (mult * np.ones(lg[i])))
# Define beta matrices which are the TOU demand charges for one day
val = [[0.1709, 0, 0], [0, 0.0874, 0], [0, 0, 0.0755]]
beta = {}
for i in range(len(val)):
beta_i = np.array([])
for j in range(len(lg)):
if pk[j] == 'on':
mult = val[0][i]
elif pk[j] == 'mid':
mult = val[1][i]
elif pk[j] == 'off':
mult = val[2][i]
beta_i = np.append(beta_i, (mult * np.ones(lg[j])))
beta[i] = beta_i
beta_ON = np.zeros((96, 96))
np.fill_diagonal(beta_ON, beta[0])
beta_MID = np.zeros((96, 96))
np.fill_diagonal(beta_MID, beta[1])
beta_OFF = np.zeros((96, 96))
np.fill_diagonal(beta_OFF, beta[2])
# Declare Parameters
eta_plus=0.96 # charging efficiency
eta_minus=0.96 # discharging efficiency
Emax=900 # SOC upper limit
Emin=200 # SOC lower limit
E_init=500 # initial state of charge
P_B_plus_max=200 # charging power limit
P_B_minus_max=200 # discharging power limit
opt_load=load #declaring optimal load
n=96 # declaring number of timesteps for each optimization
del_t=1/4 #time delta
d = 1 # int(len(load) / n ) # number of days
# Declare the arrays for the data outputs
pg = np.array([])
psl = np.array([])
eb = np.array([])
pbp = np.array([])
pbn = np.array([])
for i in range(d):
# Declare constraints List
cons = []
# Pull solar and load data for nth day
P_S = solar[int(n*i) : int(n*i + n)]
P_L = load[int(n*i) : int(n*i + n)]
# Declare variables
P_G = cp.Variable(n) # Power drawn from the grid at t
E_B = cp.Variable(n) # Energy in the Battery
P_B_plus = cp.Variable(n) # Battery charging power at t
P_B_minus = cp.Variable(n) # Battery discharging power at t
P_SL = cp.Variable(n) # Solar power fed to load at t
obj = cp.Minimize(cp.sum(cp.matmul(alpha, P_G) * del_t) + cp.max(cp.matmul(beta_OFF, P_G)) + cp.max(cp.matmul(beta_MID, P_G)) + cp.max(cp.matmul(beta_ON, P_G)))
for t in range(n):
        # First iteration of constraints has an initial amount of energy for the battery.
if t == 0:
cons_temp = [
E_B[t] == E_init,
E_B[t] >= Emin,
E_B[t] <= Emax,
P_B_plus[t] >= 0,
P_B_plus[t] <= P_B_plus_max,
P_B_minus[t] >= 0,
P_B_minus[t] <= P_B_minus_max,
P_SL[t] + P_B_plus[t]/eta_plus == P_S[t],
P_SL[t] + P_G[t] + P_B_minus[t]*eta_minus == P_L[t],
P_SL[t] >= 0
]
        # Subsequent iterations use the amount of energy in the battery calculated from the previous constraint
else:
cons_temp = [
E_B[t] == E_B[t - 1] + del_t*(P_B_plus[t - 1] - P_B_minus[t - 1]),
E_B[t] >= Emin,
E_B[t] <= Emax,
P_B_plus[t] >= 0,
P_B_plus[t] <= P_B_plus_max,
P_B_minus[t] >= 0,
P_B_minus[t] <= P_B_minus_max,
P_SL[t] + P_B_plus[t]/eta_plus == P_S[t],
P_SL[t] + P_G[t] + P_B_minus[t]*eta_minus == P_L[t],
P_SL[t] >= 0
]
cons += cons_temp
# Solve CVX Problem
prob = cp.Problem(obj, cons)
prob.solve(solver=cp.CBC, verbose = True, qcp = True)
# Store solution
pg = np.append(pg, P_G.value)
psl = np.append(psl, P_SL.value)
eb = np.append(eb, E_B.value)
pbp = np.append(pbp, P_B_plus.value)
pbn = np.append(pbn, P_B_minus.value)
# Update energy stored in battery for next iteration
    E_init = E_B.value[n - 1]
# Plot Output
time = np.arange(0, 24, 0.25) # 24 hours, 15-minute intervals
plt.figure(figsize=(10, 6))
plt.plot(time, solar, label='Solar')
plt.plot(time, [i * -1 for i in load], label='Load before Optimization')
plt.plot(time, [i * -1 for i in pg], label='Load after Optimization')
plt.plot(time, pbn - pbp, label='Battery Operation')
# Adding labels and title
plt.xlabel('Time')
plt.ylabel('Demand (kW)')
plt.title('Battery Optimization Output')
# Adding legend
plt.legend()
# Display the plot
plt.grid(True)
plt.show()
</code></pre>
<p>Convex Optimization CVXPy Output:</p>
<pre><code>===============================================================================
CVXPY
v1.3.2
===============================================================================
(CVXPY) Jun 22 03:24:36 PM: Your problem has 480 variables, 960 constraints, and 0 parameters.
(CVXPY) Jun 22 03:24:36 PM: It is compliant with the following grammars: DCP, DQCP
(CVXPY) Jun 22 03:24:36 PM: (If you need to solve this problem multiple times, but with different data, consider using parameters.)
(CVXPY) Jun 22 03:24:36 PM: CVXPY will first compile your problem; then, it will invoke a numerical solver to obtain a solution.
-------------------------------------------------------------------------------
Compilation
-------------------------------------------------------------------------------
(CVXPY) Jun 22 03:24:36 PM: Compiling problem (target solver=CBC).
(CVXPY) Jun 22 03:24:36 PM: Reduction chain: Dcp2Cone -> CvxAttr2Constr -> ConeMatrixStuffing -> CBC
(CVXPY) Jun 22 03:24:36 PM: Applying reduction Dcp2Cone
(CVXPY) Jun 22 03:24:36 PM: Applying reduction CvxAttr2Constr
(CVXPY) Jun 22 03:24:36 PM: Applying reduction ConeMatrixStuffing
(CVXPY) Jun 22 03:24:36 PM: Applying reduction CBC
(CVXPY) Jun 22 03:24:37 PM: Finished problem compilation (took 8.116e-01 seconds).
-------------------------------------------------------------------------------
Numerical solver
-------------------------------------------------------------------------------
(CVXPY) Jun 22 03:24:37 PM: Invoking solver CBC to obtain a solution.
-------------------------------------------------------------------------------
Summary
-------------------------------------------------------------------------------
(CVXPY) Jun 22 03:24:37 PM: Problem status: optimal
(CVXPY) Jun 22 03:24:37 PM: Optimal value: -4.894e+01
(CVXPY) Jun 22 03:24:37 PM: Compilation took 8.116e-01 seconds
(CVXPY) Jun 22 03:24:37 PM: Solver (including time spent in interface) took 5.628e-03 seconds
</code></pre>
<p>Load Following Algorithm Code:</p>
<pre><code># Import libraries needed
import numpy as np
import matplotlib.pyplot as plt
# One day of 15-minute load data
load = [ 36, 42, 40, 42, 40, 44, 42, 42, 40, 32, 32, 32, 32,
30, 34, 30, 32, 30, 32, 32, 32, 32, 32, 32, 30, 32,
32, 34, 54, 62, 66, 66, 76, 76, 80, 78, 80, 80, 82,
78, 46, 104, 78, 76, 74, 78, 82, 88, 96, 84, 94, 92,
92, 92, 92, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100,
100, 100, 86, 86, 82, 66, 72, 56, 56, 54, 48, 48, 42,
50, 42, 46, 46, 46, 42, 42, 42, 44, 44, 36, 34, 32,
34, 32, 34, 32, 32]
# One day of 15-minute solar data
solar = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
2, 6, 14, 26, 46, 66, 86, 104, 120, 138, 154, 168, 180,
190, 166, 152, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200,
200, 200, 200, 200, 200, 200, 190, 178, 164, 148, 132, 114, 96,
76, 58, 40, 22, 4, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0]
battery = 500
output = [] #
soc = [] # State of Charge of Battery
net_load = [] # "Optimized Load"
for i in range(96):
    # With a battery that is not fully charged and excess solar: pull power from the solar panels to charge the battery
if (battery < 900) and ((solar[i] - load[i]) >= 0):
# Battery can only charge up to 100 kW
if (solar[i] - load[i]) > 200:
output.append(-200)
else:
output.append(load[i] - solar[i])
    # With a battery that is not depleted and excess load: discharge the battery to help serve the load
elif (battery > (200 + (load[i]/4))) and ((solar[i] - load[i]) < 0):
        # Battery can only discharge at up to 200 kW
if (solar[i] - load[i]) < -200:
output.append(200)
else:
output.append(load[i] - solar[i])
else:
output.append(0)
battery += (-0.25 * output[i])
soc.append(battery / 1000)
net_load.append(solar[i] - load[i] + output[i])
# Plot Output
time = np.arange(0, 24, 0.25) # 24 hours, 15-minute intervals
plt.figure(figsize=(10, 6))
plt.plot(time, solar, label='Solar')
plt.plot(time, [i * -1 for i in load], label='Load before Optimization')
plt.plot(time, net_load, label='Load after Optimization')
plt.plot(time, output, label='Battery Operation')
# Adding labels and title
plt.xlabel('Time')
plt.ylabel('Demand (kW)')
plt.title('Battery Optimization Output')
# Adding legend
plt.legend()
# Display the plot
plt.grid(True)
plt.show()
</code></pre>
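<p>For clarity, the per-step rule inside my loop can be pulled out as a single function (same thresholds and the 200 kW power cap as above; negative means charging, positive means discharging):</p>

```python
def battery_power(load_kw, solar_kw, soc_kwh, p_max=200.0, cap_kwh=900.0, floor_kwh=200.0):
    """Greedy dispatch rule from the loop above."""
    surplus = solar_kw - load_kw
    if soc_kwh < cap_kwh and surplus >= 0:
        # charge from excess solar, capped at p_max
        return -min(surplus, p_max)
    if soc_kwh > floor_kwh + load_kw / 4 and surplus < 0:
        # discharge to cover the deficit, capped at p_max
        return min(-surplus, p_max)
    return 0.0
```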
|
<python><mathematical-optimization><cvxpy><convex-optimization>
|
2024-06-22 22:30:22
| 1
| 435
|
Luis Enriquez-Contreras
|
78,657,208
| 811,299
|
Official Python documentation of os.symlink() function is lacking crucial information
|
<p>This is the header for the section on the os.symlink() function in <a href="https://docs.python.org/3.12/library/os.html#os.symlink" rel="nofollow noreferrer">official python 3.12 documentation</a>, which is the version of python I am running:</p>
<pre><code>os.symlink(src, dst, target_is_directory=False, *, dir_fd=None)
</code></pre>
<blockquote>
<p>Create a symbolic link pointing to src named dst.</p>
<p>On Windows, a symlink represents either a file or a directory, and
does not morph to the target dynamically. If the target is present,
the type of the symlink will be created to match. Otherwise, the
symlink will be created as a directory if target_is_directory is True
or a file symlink (the default) otherwise. On non-Windows platforms,
target_is_directory is ignored.</p>
</blockquote>
<p>No description is given for the fourth parameter (*) or for the fifth parameter (dir_fd).
I assume the fifth parameter means the directory in which to place the symlink, but I'm not sure I'm right. I totally missed the fourth parameter when I was writing my code, and now that I see it, I have no idea what to put there, and the docs don't tell me.</p>
<p>This was my code, I know it's wrong, but I don't know what to put for the fourth parameter:</p>
<pre><code>os.symlink(absold, newfn, False, dir)
</code></pre>
<p>However, the error message I got was not particularly helpful given that the doc mentions 4 if not 5 parameters.</p>
<pre><code>TypeError: symlink() takes at most 3 positional arguments (4 given)
</code></pre>
<p>How do I code this so that the symlink is created in the directory I want it to be?
I am not running from the directory where these files are.</p>
<p>Are the official docs incorrect?</p>
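<p>To make the goal concrete, here is the workaround I can think of: build the destination path myself instead of using dir_fd (paths here are hypothetical stand-ins for my absold / newfn / dir variables):</p>

```python
import os
import tempfile

target_dir = tempfile.mkdtemp()          # stand-in for "dir"
absold = os.path.join(target_dir, "original.txt")
open(absold, "w").close()

# Join the directory into dst so the link lands where I want it.
# target_is_directory and dir_fd are keyword-only; the bare * in the
# signature just marks where positional arguments stop.
link_path = os.path.join(target_dir, "link.txt")   # stand-in for "newfn"
os.symlink(absold, link_path, target_is_directory=False)
```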
|
<python>
|
2024-06-22 21:09:12
| 1
| 4,909
|
Steve Cohen
|
78,657,042
| 1,534,504
|
How to show tickmarks when using whitegrid seaborn style
|
<p>I'm struggling to find how to <em>show</em> the tick marks on the x-axis when I enable the <code>whitegrid</code> seaborn style:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import itertools
import random
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
import matplotlib.dates as mdates
chronos = np.arange('2024-03-01', '2024-07-01', dtype='datetime64[D]')
with sns.axes_style("whitegrid"):
    sns.set({'xtick.bottom': True, 'xtick.top': True, 'patch.edgecolor': 'b'})
    fig, ax = plt.subplots(figsize=(20, 4), layout='constrained')
    sns.barplot(
        x=chronos,
        y=[random.randint(0, 10) for _ in chronos],
        ax=ax,
        width=1,
    )
    ax.tick_params(
        axis='x',
        rotation=30
    )
    ax.tick_params(axis="both", colors="black")
    ax.xaxis.set_major_locator(
        mdates.MonthLocator(
            interval=1,
            bymonthday=1
        )
    )
</code></pre>
<p><a href="https://i.sstatic.net/F0ycfeaV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0ycfeaV.png" alt="code output" /></a></p>
<p>Without the style applied, the marks appear. I've looked at the <a href="https://seaborn.pydata.org/generated/seaborn.axes_style.html" rel="nofollow noreferrer">docs</a> but can't figure out which option is removing them.</p>
<p>How do I show the tick marks with the seaborn style enabled?</p>
|
<python><matplotlib><seaborn><bar-chart>
|
2024-06-22 19:47:05
| 0
| 1,003
|
Pablo
|
78,657,029
| 18,091,372
|
How can I add custom xtics for a python gnuplotlib plot?
|
<p>I want to add custom tick labels on the x-axis using the python gnuplotlib.</p>
<p>I have tried (using jupyterlab):</p>
<pre><code>import numpy as np
import gnuplotlib as gp
xtics_positions = [0, 2, 4, 6, 8]
xtics_labels = ["zero", "two", "four", "six", "eight"]
xtics = [(label, pos) for label, pos in zip(xtics_labels, xtics_positions)]
datapoints = np.array([ 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11, 5, 5, 2, 21, 1, 21, 0, 8, 62, 56, 0, 0, 0, 6, 3, 32, 7, 17, 1, 6, 10, 12, 0, 21, 11, 6, 14, 1, 15, 25, 30, 17, 11, 6, 4, 0])
gp.plot(datapoints, xtics = xtics)
</code></pre>
<p>but that does not work.</p>
<p>If I call <code>gp.plot(datapoints)</code>, the plot produced is:</p>
<p><a href="https://i.sstatic.net/XWjD9V1c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWjD9V1c.png" alt="plot" /></a></p>
|
<python><gnuplot><jupyter-lab>
|
2024-06-22 19:41:10
| 1
| 796
|
Eric G
|
78,656,997
| 15,763,991
|
How to Localize Button Labels in discord.py Based on User Locale or Guild Settings?
|
<p>I am using <code>discord.py</code> to build a Discord bot that includes interactive views with buttons. Each button performs specific actions related to message embed manipulation. I want to localize the button labels so they are displayed in the user's locale or default to the guild's locale. Below is a simplified version of my setup:</p>
<pre class="lang-py prettyprint-override"><code>class EmbedCreatorView(discord.ui.View):
    @discord.ui.button(label="commands.admin.embed.buttons.setTitle", style=discord.ButtonStyle.primary)
    async def set_title(self, interaction: discord.Interaction, button: discord.ui.Button):
        ...
        # Modal interaction here

    # More buttons with similar structure...
</code></pre>
<p>The button labels are initialized with keys for localization. Here's how I localize strings within interaction responses:</p>
<pre class="lang-py prettyprint-override"><code>await interaction.response.send_message(tanjunLocalizer.localize(
    self.commandInfo.locale, "commands.admin.embed.previewSent"
), ephemeral=True)
</code></pre>
<p><code>tanjunLocalizer</code> is just my implementation of a localizer.</p>
<p>So my questions are:</p>
<ul>
<li>How can I dynamically set the button labels based on each user's locale at the time of interaction?</li>
<li>If individual user localization isn't feasible, how can I set the labels based on the guild's default locale?</li>
</ul>
|
<python><discord><discord.py>
|
2024-06-22 19:23:48
| 1
| 418
|
EntchenEric
|
78,656,978
| 296,239
|
Rank within group in Polars?
|
<p>I have a Polars dataframe like so:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌─────┬─────┬─────┐
│ c1 ┆ c2 ┆ c3 │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═════╪═════╪═════╡
│ a ┆ a ┆ 1 │
│ a ┆ a ┆ 1 │
│ a ┆ b ┆ 1 │
│ a ┆ c ┆ 1 │
│ d ┆ a ┆ 1 │
│ d ┆ b ┆ 1 │
└─────┴─────┴─────┘
""")
</code></pre>
<p>I am trying to assign a number to each group of (c2, c3) within c1, so that would look like this:</p>
<pre><code>┌─────┬─────┬─────┬──────┐
│ c1 ┆ c2 ┆ c3 ┆ rank │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ u32 │
╞═════╪═════╪═════╪══════╡
│ a ┆ a ┆ 1 ┆ 0 │
│ a ┆ a ┆ 1 ┆ 0 │
│ a ┆ b ┆ 1 ┆ 1 │
│ a ┆ c ┆ 1 ┆ 2 │
│ d ┆ a ┆ 1 ┆ 0 │
│ d ┆ b ┆ 1 ┆ 1 │
└─────┴─────┴─────┴──────┘
</code></pre>
<p>How do I accomplish this?</p>
<p>I see how to do a global ranking:</p>
<pre class="lang-py prettyprint-override"><code>df.join(
df.select("c1", "c2", "c3")
.unique()
.with_columns(rank=pl.int_range(1, pl.len() + 1)),
on=["c1", "c2", "c3"]
)
</code></pre>
<pre><code>shape: (6, 4)
┌─────┬─────┬─────┬──────┐
│ c1 ┆ c2 ┆ c3 ┆ rank │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪══════╡
│ a ┆ a ┆ 1 ┆ 1 │
│ a ┆ a ┆ 1 ┆ 1 │
│ a ┆ b ┆ 1 ┆ 2 │
│ a ┆ c ┆ 1 ┆ 4 │
│ d ┆ a ┆ 1 ┆ 3 │
│ d ┆ b ┆ 1 ┆ 5 │
└─────┴─────┴─────┴──────┘
</code></pre>
<p>but that is a global ranking, not one within the c1 group. I also wonder if it is possible to do this with over() instead of the groupby/join pattern.</p>
|
<python><dataframe><window-functions><python-polars>
|
2024-06-22 19:12:34
| 1
| 4,540
|
ldrg
|
78,656,475
| 5,793,672
|
Using Python heapq.heapify(Array)
|
<p>I want to use <code>heapq.heapify(Arr)</code>, but every time I want a slice of the array to be used without creating a new memory allocation for the new array. I want the rest of the array to stay as is. Example: <code>heapq.heapify(arr[0:3])</code>, then use the rest of <code>arr[3::]</code> in some calculation, and keep changing this slice width in a loop. Can I do it? I don't see <code>arr</code> being heapified in this case.</p>
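<p>To illustrate what I'm seeing: a slice is a new list object, so heapify only reorders the copy:</p>

```python
import heapq

arr = [5, 3, 4, 1, 2]
window = arr[0:3]        # new list object, separate memory
heapq.heapify(window)    # reorders the copy only
print(window[0])         # smallest of [5, 3, 4]
print(arr)               # original untouched, as I observed
```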
|
<python><heap>
|
2024-06-22 15:32:02
| 1
| 303
|
swifty
|
78,656,452
| 41,572
|
Python use wx.CallAfter into HttpServer
|
<p>I am using python 2.7 and <code>wxPython</code> 2.8. I created a simple desktop application from which the user can start an <code>HTTPServer</code> created from the one builtin in python itself. Server is created in this way:</p>
<pre><code>class Services(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        http_server = BaseHTTPServer.HTTPServer(('', 9020), RESTRequestHandler)
        http_server.serve_forever()
</code></pre>
<p>in another piece of code I am using <code>wxPrinting</code> facilities to send HTML to the printer, so I am trying to reuse the same code in a REST API by doing the following (I am simplifying):</p>
<pre><code>def save(handler):
    manager = Manager('Samuel')
    wx.CallAfter(manager.do_print)  # do_print is a method already used in another part of the UI code and it is working
</code></pre>
<p>if I don't use <code>wx.CallAfter</code> I got the error "only the main thread can process Windows messages".</p>
<p>But in this way the code seems working only in older windows system like Windows XP meanwhile in Windows 10 nothing happens.</p>
<p><code>HTTPServer</code> has been created starting from
<a href="https://gist.github.com/tliron/8e9757180506f25e46d9" rel="nofollow noreferrer">https://gist.github.com/tliron/8e9757180506f25e46d9</a></p>
|
<python><wxpython>
|
2024-06-22 15:21:13
| 1
| 1,123
|
semantic-dev
|
78,656,297
| 17,889,328
|
Why does importing FastAPI TestClient into my PyCharm debugger session take 24 seconds?
|
<p>Using PyCharm Pro, importing FastAPI TestClient takes 2 seconds in run mode and 23 in debug. I don't know if this is PyCharm, FastAPI, Starlette, or basically where I start to reduce the time.</p>
<p>Here is everything installed in the venv:</p>
<pre><code>> uv pip list
Package Version
----------------- --------
annotated-types 0.7.0
anyio 4.4.0
certifi 2024.6.2
click 8.1.7
colorama 0.4.6
dnspython 2.6.1
email-validator 2.2.0
fastapi 0.111.0
fastapi-cli 0.0.4
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.0
idna 3.7
iniconfig 2.0.0
jinja2 3.1.4
markdown-it-py 3.0.0
markupsafe 2.1.5
mdurl 0.1.2
orjson 3.10.5
packaging 24.1
pluggy 1.5.0
pydantic 2.7.4
pydantic-core 2.18.4
pygments 2.18.0
pytest 8.2.2
python-dotenv 1.0.1
python-multipart 0.0.9
pyyaml 6.0.1
rich 13.7.1
shellingham 1.5.4
sniffio 1.3.1
starlette 0.37.2
typer 0.12.3
typing-extensions 4.12.2
ujson 5.10.0
uvicorn 0.30.1
watchfiles 0.22.0
websockets 12.0
</code></pre>
<pre class="lang-py prettyprint-override"><code>import time
START_TIME = time.time()
def test_debugger_speed():
    end_time = time.time()
    print(f"Time taken for test_debugger_speed: {end_time - START_TIME} seconds")


def test_debugger_speed_with_client():
    start_time = time.time()
    from fastapi.testclient import TestClient
    end_time = time.time()
    print(f"Time taken for importing TestClient: {end_time - start_time} seconds")
</code></pre>
<p>Run output:</p>
<pre><code>C:\Users\RYZEN\prdev\pythonProject1\.venv\Scripts\python.exe "C:/Program Files/JetBrains/PyCharm 2024.1.3/plugins/python/helpers/pycharm/_jb_pytest_runner.py" --path C:\Users\RYZEN\prdev\pythonProject1\test_slow_debugger.py
Testing started at 3:12 PM ...
Launching pytest with arguments C:\Users\RYZEN\prdev\pythonProject1\test_slow_debugger.py --no-header --no-summary -q in C:\Users\RYZEN\prdev\pythonProject1
============================= test session starts =============================
collecting ... collected 2 items
test_slow_debugger.py::test_debugger_speed PASSED [ 50%]Time taken for test_debugger_speed: 0.00400090217590332 seconds
test_slow_debugger.py::test_debugger_speed_with_client PASSED [100%]Time taken for importing TestClient: 0.8693747520446777 seconds
============================== 2 passed in 0.90s ==============================
Process finished with exit code 0
</code></pre>
<p>Debug output:</p>
<pre><code>C:\Users\RYZEN\prdev\pythonProject1\.venv\Scripts\python.exe -X pycache_prefix=C:\Users\RYZEN\AppData\Local\JetBrains\PyCharm2024.1\cpython-cache "C:/Program Files/JetBrains/PyCharm 2024.1.3/plugins/python/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --client 127.0.0.1 --port 1842 --file "C:/Program Files/JetBrains/PyCharm 2024.1.3/plugins/python/helpers/pycharm/_jb_pytest_runner.py" --path C:\Users\RYZEN\prdev\pythonProject1\test_slow_debugger.py
Testing started at 3:12 PM ...
Connected to pydev debugger (build 241.17890.14)
Launching pytest with arguments C:\Users\RYZEN\prdev\pythonProject1\test_slow_debugger.py --no-header --no-summary -q in C:\Users\RYZEN\prdev\pythonProject1
============================= test session starts =============================
collecting ... collected 2 items
test_slow_debugger.py::test_debugger_speed PASSED [ 50%]Time taken for test_debugger_speed: 0.019998788833618164 seconds
test_slow_debugger.py::test_debugger_speed_with_client PASSED [100%]Time taken for importing TestClient: 24.218650102615356 seconds
============================= 2 passed in 24.30s ==============================
Process finished with exit code 0
</code></pre>
|
<python><debugging><pycharm><fastapi><starlette>
|
2024-06-22 14:11:17
| 0
| 704
|
prosody
|
78,656,258
| 7,134,737
|
Why leave the variable at the end of the with statement?
|
<p>I am trying to make sense of why you would just write a particular variable at the end of the <code>with</code> statement.</p>
<pre><code>from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
default_args = {
    'owner': 'coder2j',
    'retries': 5,
    'retry_delay': timedelta(minutes=5)
}

with DAG(
    default_args=default_args,
    dag_id="dag_with_cron_expression_v04",
    start_date=datetime(2021, 11, 1),
    schedule_interval='0 3 * * Tue-Fri'
) as dag:
    task1 = BashOperator(
        task_id='task1',
        bash_command="echo dag with cron expression!"
    )

    task1  # What does this mean ?
</code></pre>
<p>This example is taken from <a href="https://github.com/coder2j/airflow-docker/blob/main/dags/dag_with_cron_expression.py" rel="nofollow noreferrer">here</a>.</p>
<p>Why just leave the <code>task1</code> variable at the end?</p>
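<p>Here is my own toy reconstruction of what I assume the context manager does (all names are made up, not real Airflow); it shows that the trailing line is just a bare expression statement with no effect, since registration already happened at construction:</p>

```python
class Dag:
    _current = None

    def __init__(self):
        self.tasks = []

    def __enter__(self):
        Dag._current = self
        return self

    def __exit__(self, *exc):
        Dag._current = None


class Task:
    def __init__(self, name):
        self.name = name
        if Dag._current is not None:   # registration happens here
            Dag._current.tasks.append(self)


with Dag() as dag:
    task1 = Task("task1")
    task1  # evaluated and discarded; registers nothing new
```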
|
<python><airflow><contextmanager>
|
2024-06-22 13:51:59
| 1
| 3,312
|
ng.newbie
|
78,656,232
| 3,177,186
|
Visual Studio Code is adding 5 spaces of indentation for new lines despite all settings being for tabs
|
<p>I'm using Visual Studio Code with Python and every time I hit enter on a line of code, the next line is indented by 5 spaces. However:</p>
<ul>
<li>Detect Indentation is OFF</li>
<li>Insert spaces when pressing tab is OFF</li>
<li>Tab size is 4</li>
</ul>
<p>My settings per the UI are:
<a href="https://i.sstatic.net/TMbsmXBJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMbsmXBJ.png" alt="enter image description here" /></a></p>
<p>But here's what I see. When I pressed enter on my last line of code and then highlighted up, you can see the five spaces. Where are they coming from?</p>
<p><a href="https://i.sstatic.net/lGMvzkx9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGMvzkx9.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code>
|
2024-06-22 13:42:45
| 1
| 2,198
|
not_a_generic_user
|
78,656,159
| 7,123,797
|
Are these built-in types mutable or not?
|
<p>In Python, a type is called mutable if it is possible to change the value of an object of this type (without changing the identity of the object).</p>
<p>Which of the following built-in types are mutable, and which of them are immutable? And why?</p>
<ul>
<li>function (in particular, method)</li>
<li>module (here I mean the type of the imported module, e.g. the command <code>import sys; type(sys)</code> returns <code><class 'module'></code>)</li>
<li>file type (for example, <code>_io.TextIOWrapper</code>)</li>
</ul>
<p>The official reference says the following <a href="https://docs.python.org/3/reference/datamodel.html#code-objects" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Unlike function objects, code objects are immutable and contain no references (directly or indirectly) to mutable objects.</p>
</blockquote>
<p>I think this implies that functions are mutable because function objects can contain references (directly or indirectly) to mutable objects. Also it is always possible to change the code of a function object without affecting its id (see example <a href="https://stackoverflow.com/a/41155727/7123797">here</a>).</p>
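<p>For example, this is the kind of in-place change I mean: attributes and even <code>__code__</code> can be reassigned while <code>id(f)</code> stays the same:</p>

```python
def f():
    return 1

def g():
    return 2

before = id(f)
f.note = "functions have a writable __dict__"  # add an attribute in place
f.__code__ = g.__code__                        # swap the code object in place
```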
|
<python>
|
2024-06-22 13:09:21
| 1
| 355
|
Rodvi
|
78,656,115
| 17,721,722
|
PostgreSQL Recursive Delete with Foreign Key Constraint
|
<p>In my PostgreSQL table, when I try to delete a record, I encounter the following error:</p>
<pre><code>SQL Error [23503]: ERROR: update or delete on table "parent_table" violates foreign key constraint "parent_table_header_ref_id_id_4fd15d08_fk" on table "child_table"
Detail: Key (id)=(1) is still referenced from table "child_table".
</code></pre>
<p>This happens even with ON DELETE RESTRICT enabled. Is there any script or a PostgreSQL function/loop that can be used to recursively delete data, ensuring that all related records across multiple levels are deleted safely when foreign key constraints are in place?</p>
<p>Python Pseudo code:</p>
<pre><code>def delete_record(query):
    while True:
        try:
            cursor.execute(query)
            break
        except Exception as exception:
            # exception = 'SQL Error [23503]: ERROR: update or delete on table "parent_table" violates foreign key constraint "parent_table_header_ref_id_id_4fd15d08_fk" on table "child_table"'
            child_table_delete_query = get_delete_query(exception)
            # child_table_delete_query = 'delete from child_table where header_ref_id_id = 1'
            delete_record(child_table_delete_query)

delete_record('delete from parent_table where id = 1')
</code></pre>
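<p>For the undefined get_delete_query helper above, here is a runnable sketch that parses the child table and key value out of the error text with a regex. The message format is what my server emits; other PostgreSQL versions may word it differently, and I hard-code the child column name because recovering it from the Django-style constraint name looks fragile:</p>

```python
import re

ERROR = ('update or delete on table "parent_table" violates foreign key '
         'constraint "parent_table_header_ref_id_id_4fd15d08_fk" on table "child_table"\n'
         'Detail: Key (id)=(1) is still referenced from table "child_table".')

def get_delete_query(message, fk_column="header_ref_id_id"):
    # child table follows 'constraint "..." on table'; key value follows 'Key (...)=('
    table = re.search(r'constraint "[^"]+" on table "([^"]+)"', message).group(1)
    value = re.search(r'Key \([^)]+\)=\(([^)]+)\)', message).group(1)
    return f'delete from {table} where {fk_column} = {value}'
```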
|
<python><postgresql>
|
2024-06-22 12:59:38
| 2
| 501
|
Purushottam Nawale
|
78,655,764
| 914,641
|
What exactly does the parameter "real_field=True" in the sympy constructor of a quaternion mean?
|
<p>In the sympy documentation about quatenions <a href="https://docs.sympy.org/latest/modules/algebras.html#sympy.algebras.Quaternion" rel="nofollow noreferrer">https://docs.sympy.org/latest/modules/algebras.html#sympy.algebras.Quaternion</a> the quatenion constructor is defined as:</p>
<p><em>class sympy.algebras.Quaternion(a=0, b=0, c=0, d=0, <strong>real_field=True</strong>, norm=None)</em></p>
<p>What is the meaning of keyword argument <strong>real_field=True</strong>?</p>
|
<python><sympy>
|
2024-06-22 10:29:16
| 2
| 832
|
Klaus Rohe
|
78,655,628
| 7,339,624
|
Inverse Fourier Transform X-Axis Scaling
|
<p>I'm implementing a discrete inverse Fourier transform in Python to approximate the inverse Fourier transform of a Gaussian function.</p>
<p><a href="https://i.sstatic.net/fcVR2S6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fcVR2S6t.png" alt="enter image description here" /></a></p>
<p>The input function is <code>sqrt(pi) * e^(-w^2/4)</code> so the output must be <code>e^(-x^2)</code>.</p>
<p>While the shape of the resulting function looks correct, the x-axis scaling seems to be off (There might be just some normalization issue). I expect to see a Gaussian function of the form <code>e^(-x^2)</code>, but my result is much narrower.</p>
<p>This is my implementation:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from sympy import symbols, exp, pi, lambdify, sqrt
# Defining the Fourier transform of a Gaussian function, sqrt(pi) * exp(-omega ** 2 / 4)
x, omega = symbols('x omega')
f_gaussian_symbolic = exp(-omega ** 2 / 4) * sqrt(pi)
f_gaussian_function = lambdify(omega, f_gaussian_symbolic, 'numpy')
def fourier_inverse(f, n, omega_max):
    """
    This function computes the inverse Fourier transform of a function f.
    :param f: The function to be transformed
    :param n: Number of samples
    :param omega_max: The max frequency we want to be sampled
    """
    omega_range = np.linspace(-omega_max, omega_max, n)
    f_values = f(omega_range)
    inverse_f = np.fft.ifftshift(np.fft.ifft(np.fft.fftshift(f_values)))
    delta_omega = omega_range[1] - omega_range[0]
    x_range = np.fft.ifftshift(np.fft.fftfreq(n, d=delta_omega))
    inverse_f *= delta_omega * n / (2 * np.pi)
    return x_range, inverse_f
plt.figure(figsize=(10, 5))
x_range, inverse_f = fourier_inverse(f_gaussian_function, 10000, 100)
plt.plot(x_range, inverse_f.real)
plt.ylim(-2, 2)
plt.xlim(-4, 4)
plt.show()
</code></pre>
<p>I expect the plot to be:
<a href="https://i.sstatic.net/e8cmaFnv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8cmaFnv.png" alt="enter image description here" /></a></p>
<p>But my output is this:
<a href="https://i.sstatic.net/Cu3E7Prk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cu3E7Prk.png" alt="enter image description here" /></a></p>
<p>The shape of the function looks correct, but it's much narrower than expected. I suspect there's an issue with how I'm calculating or scaling the <code>x_range</code> in my <code>fourier_inverse</code> function.</p>
<p>What am I doing wrong in my implementation, and how can I correct the x-axis scaling to get the expected Gaussian function e^(-x^2)?</p>
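<p>As an independent sanity check (no FFT involved) I numerically integrated the inverse transform at a few points, and the analytic pair itself is fine, so the issue has to be in my axis handling:</p>

```python
import numpy as np

# f(x) = (1 / 2*pi) * integral of F(w) * e^{iwx} dw; the integrand is even,
# so only the cosine part contributes
omega = np.linspace(-50, 50, 200001)
d_omega = omega[1] - omega[0]
F = np.sqrt(np.pi) * np.exp(-omega**2 / 4)

for x in (0.0, 0.5, 1.0):
    f_x = np.sum(F * np.cos(omega * x)) * d_omega / (2 * np.pi)
    assert abs(f_x - np.exp(-x**2)) < 1e-6
```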
|
<python><numpy><fft><continuous-fourier>
|
2024-06-22 09:31:49
| 1
| 4,337
|
Peyman
|
78,655,562
| 12,276,279
|
How to write a function to read csv files with different separators in pandas in Python?
|
<p>I have a bunch of CSV files for different years named my_file_2019, my_file_2020, my_file_2023 and so on. Some files have tab separator while others have semi-colon.</p>
<p>I want to write a common function to extract data from all files.</p>
<p>This was my initial function:</p>
<pre><code>def get_data(year):
    file = f"my_file_{year}.csv"
    df = pd.read_csv(file, sep="\t")
    # filter for Germany
    df = df[df["CountryCode"] == "DE"]
    return df
</code></pre>
<p>I called the functions like below to get data from file for each year.</p>
<pre><code>df_2019 = get_data(2019)
df_2020 = get_data(2020)
df_2021 = get_data(2021)
df_2022 = get_data(2022)
df_2023 = get_data(2023)
</code></pre>
<p>I got KeyError: 'CountryCode' when the separator was different.</p>
<p>I used the try except method as shown</p>
<pre><code>def get_data(year):
    file = f"my_file_{year}.csv"
    try:
        df = pd.read_csv(file, sep="\t")
    except KeyError:
        df = pd.read_csv(file, sep=";")
    # filter for Germany
    df = df[df["CountryCode"] == "DE"]
    return df
</code></pre>
<p>Then I can still read the file when the separator is tab, but not semi-colon.</p>
<p>How can I fix this?</p>
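<p>One thing I noticed while debugging: read_csv itself never raises KeyError; the KeyError comes from my filter line after a one-column parse. An alternative I'm considering is letting pandas sniff the delimiter with sep=None and the python engine:</p>

```python
import io
import pandas as pd

tab_data = "CountryCode\tValue\nDE\t1\nFR\t2\n"
semi_data = "CountryCode;Value\nDE;1\nFR;2\n"

def get_data(buf):
    # sep=None with engine="python" makes pandas detect the delimiter
    df = pd.read_csv(buf, sep=None, engine="python")
    return df[df["CountryCode"] == "DE"]

df_tab = get_data(io.StringIO(tab_data))
df_semi = get_data(io.StringIO(semi_data))
```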
|
<python><python-3.x><pandas><dataframe><filter>
|
2024-06-22 08:58:43
| 1
| 1,810
|
hbstha123
|
78,655,289
| 21,376,217
|
How does Python's Binascii.a2b_base64 (base64.b64decode) work?
|
<p>I checked the source code in <a href="https://www.python.org/downloads/release/python-3124/" rel="nofollow noreferrer">Python</a> and implemented a function that is the same as the Python's <a href="https://docs.python.org/3/library/binascii.html#binascii.a2b_base64" rel="nofollow noreferrer">binascii.a2b_base64</a> function.</p>
<p>This function is located under path <code>Python-3.12.4/Modules/binascii.c</code>, line 387.</p>
<pre class="lang-c prettyprint-override"><code>static PyObject *
binascii_a2b_base64_impl(PyObject *module, Py_buffer *data, int strict_mode)
</code></pre>
<p>I used C++ and re-implemented this function in my own code according to the original function in Python, in order to better understand and learn the working principle of Base64 decoding.</p>
<p>However, I don't know why the function I implemented cannot handle non Base64 encoded characters correctly.</p>
<p>I have checked these parts and confirmed that they do not affect the function's handling of non-Base64 characters, such as the functions [<code>_PyBytesWriter_Init</code>, <code>_PyBytesWriter_Alloc</code>, <code>_PyBytesWriter_Finish</code>, ...], and omitted them from my code.</p>
<p>When processing Base64 strings that comply with the <a href="https://datatracker.ietf.org/doc/html/rfc4648" rel="nofollow noreferrer">RFC4648 standard</a>, as well as in the case where only <code>\n</code> is used as a non-Base64 character, the function I implemented achieves the same result as the corresponding function in Python.<br />
For example:</p>
<pre class="lang-c prettyprint-override"><code>const char *encoded = {
"QUJDREVGR0hJSktMTU5PUFFSU1RVVldYWVpBQkNERUZHSElKS0xNTk9QUVJTVFVW\n"
"V1hZWkFCQ0RFRkdISUpLTE1OT1BRUlNUVVZXWFlaQUJDREVGR0hJSktMTU5PUFFS\n"
"U1RVVldYWVpBQkNERUZHSElKS0xNTk9QUVJTVFVWV1hZWg==\n"
};
</code></pre>
<p>Using either my function or Python's binascii.a2b_base64 function will yield the same result as the following:</p>
<pre class="lang-c prettyprint-override"><code>ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ
</code></pre>
<p>Here is the specific implementation of my code:</p>
<pre class="lang-c prettyprint-override"><code>#include <iostream>
#include <cstdlib>
#include <cstring>
#include <cstdint>
#include <stdexcept>
#define BASE64PAD '='
constexpr uint8_t b64de_table[256] = {
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255, 62, 255,255,255, 63,
52 , 53, 54, 55, 56, 57, 58, 59, 60, 61,255,255, 255, 0,255,255,
255, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15 , 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,255, 255,255,255,255,
255, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41 , 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255,
255,255,255,255, 255,255,255,255, 255,255,255,255, 255,255,255,255};
uint8_t *
pyBase64Decode(const char *buffer, size_t &length,
               bool strict_mode = false)
{
    std::string error_message;
    const uint8_t *ascii_data = (const uint8_t *)buffer;
    size_t ascii_len = length;
    bool padding_started = 0;
    size_t bin_len = ascii_len / 4 * 3;

    uint8_t *bin_data = new (std::nothrow) uint8_t[bin_len + 1];
    if(!bin_data) {
        throw std::runtime_error("Failed to allocate memory for bin_data.");
    }
    uint8_t *bin_data_start = bin_data;
    bin_data[bin_len] = 0x0;

    uint8_t leftchar = 0;
    uint32_t quad_pos = 0;
    uint32_t pads = 0;

    if(strict_mode && (ascii_len > 0) && (*ascii_data == BASE64PAD)) {
        error_message = "Leading padding not allowed.";
        goto error_end;
    }

    size_t i;
    uint8_t this_ch;
    for(i = 0; i < ascii_len; ++i) {
        this_ch = ascii_data[i];

        if(this_ch == BASE64PAD) {
            padding_started = true;
            // If the current character is a padding character, the length
            // will be reduced by one to obtain the decoded true length.
            bin_len--;
            if(strict_mode && (!quad_pos)) {
                error_message = "Excess padding not allowed.";
                goto error_end;
            }
            if((quad_pos >= 2) && (quad_pos + (++pads) >= 4)) {
                if(strict_mode && ((i + 1) < ascii_len)) {
                    error_message = "Excess data after padding.";
                    goto error_end;
                }
                goto done;
            }
            continue;
        }

        this_ch = b64de_table[this_ch];
        if(this_ch == 255) {
            if(strict_mode) {
                error_message = "Only base64 data is allowed.";
                goto error_end;
            }
            continue;
        }

        if(strict_mode && padding_started) {
            error_message = "Discontinuous padding not allowed.";
            goto error_end;
        }

        pads = 0;
        switch(quad_pos) {
        case 0:
            quad_pos = 1;
            leftchar = this_ch;
            break;
        case 1:
            quad_pos = 2;
            *bin_data++ = (leftchar << 2) | (this_ch >> 4);
            leftchar = this_ch & 0xf;
            break;
        case 2:
            quad_pos = 3;
            *bin_data++ = (leftchar << 4) | (this_ch >> 2);
            leftchar = this_ch & 0x3;
            break;
        case 3:
            quad_pos = 0;
            *bin_data++ = (leftchar << 6) | (this_ch);
            leftchar = 0;
            break;
        }
    }

    if(quad_pos) {
        if(quad_pos == 1) {
            char tmpMsg[128]{};
            snprintf(tmpMsg, sizeof(tmpMsg),
                     "Invalid base64-encoded string: "
                     "number of data characters (%zd) cannot be 1 more "
                     "than a multiple of 4",
                     (bin_data - bin_data_start) / 3 * 4 + 1);
            error_message = tmpMsg;
            goto error_end;
        } else {
            error_message = "Incorrect padding.";
            goto error_end;
        }

error_end:
        delete[] bin_data;
        throw std::runtime_error(error_message);
    }

done:
    length = bin_len;
    return bin_data_start;
}
</code></pre>
<p>How to use this function:</p>
<pre class="lang-c prettyprint-override"><code>int main()
{
    const char *encoded = "aGVsbG8sIHdvcmxkLg==";
    size_t length = strlen(encoded);
    uint8_t *decoded = pyBase64Decode(encoded, length);
    printf("decoded: %s\n", decoded);
    return 0;
}
</code></pre>
<hr />
<p>Here are a few samples with different results after executing Python and my code.</p>
<p>original decoded:</p>
<pre><code>stackoverflow
</code></pre>
<p>original encoded:</p>
<pre><code>c3RhY2tvdmVyZmxvdw==
</code></pre>
<p><strong>sample 1</strong>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>original</th>
<th>"c3##RhY2t...vdmV!?y~Zmxvdw=="</th>
</tr>
</thead>
<tbody>
<tr>
<td>result of python</td>
<td>"stackoverflow"</td>
</tr>
<tr>
<td>result of pyBase64Decode</td>
<td>"stackoverflowP"[^print_method_1]</td>
</tr>
<tr>
<td>result of pyBase64Decode</td>
<td>"stackoverflow"[^print_method_2] but, length: 19</td>
</tr>
</tbody>
</table></div>
<p><strong>sample 2</strong>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>original</th>
<th>"c3\n\nRh~Y2tvd#$mVyZmx$vdw=="</th>
</tr>
</thead>
<tbody>
<tr>
<td>result of python</td>
<td>"stackoverflow"</td>
</tr>
<tr>
<td>result of pyBase64Decode</td>
<td>"stackoverflow"[^print_method_1] but, length: 16</td>
</tr>
</tbody>
</table></div>
<p><strong>sample 3</strong>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>original</th>
<th>"c3Rh$$$$$$$$$$$$$$$$$$$$$Y2tvdmVy###############Zmxvdw=="</th>
</tr>
</thead>
<tbody>
<tr>
<td>result of python</td>
<td>"stackoverflow"</td>
</tr>
<tr>
<td>result of pyBase64Decode</td>
<td>"stackoverflowP\2;SP2;SPROFILE_"[^print_method_1] length: 40 Bytes</td>
</tr>
<tr>
<td>result of pyBase64Decode</td>
<td>"stackoverflow"[^print_method_2] but, length: 40</td>
</tr>
</tbody>
</table></div>
<hr />
<p>[^print_method_1]: cout << std::string((char *)decoded, length) << endl;<br />
[^print_method_2]: printf("%s", decoded);</p>
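<p>For reference, this is the Python-side behavior I'm comparing against (non-alphabet bytes are silently skipped in non-strict mode):</p>

```python
import binascii

# inputs in the spirit of samples 1 and 3 above
assert binascii.a2b_base64(b"c3##RhY2t...vdmV!?y~Zmxvdw==") == b"stackoverflow"
assert binascii.a2b_base64(b"c3Rh$$$$$Y2tvdmVy#####Zmxvdw==") == b"stackoverflow"
```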
|
<python><c++><c><base64>
|
2024-06-22 06:47:25
| 2
| 402
|
S-N
|
78,655,265
| 5,091,964
|
Unable to alter the text color within a QTableWidget
|
<p>I wrote a complex program that utilizes a PyQt6 table. I encountered an issue where I was unable to alter the font color within the table. To troubleshoot, I created a smaller program (see below) that generates a single-cell PyQt6 table containing the letter "A" in red font. However, upon executing the code, the character "A" remained black. Below is the code with comments. Your help is highly appreciated.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt6.QtWidgets import QApplication, QTableWidget, QTableWidgetItem, QMainWindow
from PyQt6.QtGui import QColor
class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()

        # Set up the table widget
        self.table = QTableWidget(1, 1)
        self.table.setHorizontalHeaderLabels(["Character"])

        # Create the table item with the "A" character
        item = QTableWidgetItem("A")

        # Set the font color of the item to red
        item.setForeground(QColor('red'))

        # Add the item to the table
        self.table.setItem(0, 0, item)

        # Set the table as the central widget of the window
        self.setCentralWidget(self.table)

        # Set the window title
        self.setWindowTitle("Table")

        # Set the window size
        self.resize(300, 200)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec())
</code></pre>
|
<python><pyqt5><pyqt6>
|
2024-06-22 06:31:40
| 1
| 307
|
Menachem
|
78,654,772
| 3,177,186
|
Python winreg is updating SOMETHING, but not the Windows registry
|
<p>I am using winreg to read, write, and change the Windows registry. My program flips a lot of different settings to fix things that annoy me about the windows defaults. When I run it, it seems to work. The changes even persist between Flask runs, even after restarting Windows Explorer, and the computer.</p>
<p>But every time I check the registry, the changes aren’t there. I already verified that everything is 64bit. I am running VScode as admin (and running flask through the terminal there). Just in case, I ran it in an admin terminal outside of vscode too.</p>
<p>No matter what I try, it tells me it’s changing the registry when it actually isn’t. It’s completely punking me. I've tried reopening Windows Explorer, closing and reopening the Registry, and restarting the system. None made a difference.</p>
<p>For the settings where I created a value and set its "data" attribute, that value persists between sessions, and I can test for it in my code as if it's being stored SOMEWHERE.</p>
<p>This is the code:</p>
<h2>NOTE: if you're at all worried about the registry changes below, back up your registry before running this</h2>
<pre class="lang-py prettyprint-override"><code>import winreg
def hive_name(hive):
if hive == winreg.HKEY_CURRENT_USER:
return "HKEY_CURRENT_USER"
elif hive == winreg.HKEY_LOCAL_MACHINE:
return "HKEY_LOCAL_MACHINE"
elif hive == winreg.HKEY_CLASSES_ROOT:
return "HKEY_CLASSES_ROOT"
elif hive == winreg.HKEY_USERS:
return "HKEY_USERS"
elif hive == winreg.HKEY_PERFORMANCE_DATA:
return "HKEY_PERFORMANCE_DATA"
elif hive == winreg.HKEY_CURRENT_CONFIG:
return "HKEY_CURRENT_CONFIG"
else:
return "UNKNOWN_HIVE"
def open_or_create_key(hive, path):
try:
# Open the registry key for reading and writing in 64-bit view
key = winreg.OpenKey(hive, path, 0, winreg.KEY_READ | winreg.KEY_WRITE | winreg.KEY_WOW64_64KEY)
print(f"Key opened: {hive_name(hive)}\\{path}")
except FileNotFoundError:
# Handle if the key doesn't exist
print(f"Creating key: {hive_name(hive)}\\{path}")
key = winreg.CreateKeyEx(hive, path, 0, winreg.KEY_READ | winreg.KEY_WRITE | winreg.KEY_WOW64_64KEY)
except PermissionError:
# Handle if there are permission issues
print(f"Permission denied while accessing the key: {hive_name(hive)}\\{path}")
key = None
except Exception as e:
# Handle any other exceptions
print(f"An error occurred: {e}")
key = None
return key
def get_value(key,which):
try:
value, _ = winreg.QueryValueEx(key, which)
print(f"Current value: {value}")
except FileNotFoundError:
print("Current value: <not set>")
except Exception as e:
print(f"An error occurred while querying the value: {e}")
def set_value(key,which,what):
try:
winreg.SetValueEx(key, which, 0, winreg.REG_DWORD, what)
print (which, "was set to", what)
except FileNotFoundError:
print (which, "could not be set to", what)
def close_key(key):
if key:
winreg.CloseKey(key)
print("Key closed.")
# Test the open_or_create_key function
if __name__ == "__main__":
print("# This key does exist on my system and has tons of values")
print("# Expected Output: Key opened: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced")
key = open_or_create_key(winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced")
print("# Value name DisallowShaking is NOT in my registry.")
print("# Expected Output: Current value: <not set> ")
# Prevents the windows "feature" of minimizing everything if you "shake" a window while dragging. Does nothing if the value isn't actually set.
get_value(key,"DisallowShaking")
print("# Value name HideFileExt IS in my registry.")
print("# Expected Output: HideFileExt set to X (where X is the value set in the code) - needs to be checked in the registry to see if it changed between runs")
# enables or disables hiding of file extensions. 0 to not hide it.
set_value(key,"HideFileExt",1)
close_key(key)
# This restores the Windows 10 right-click context menu to your system in Windows 11 if the key is present. It has no effect when left
print("# Neither {86ca1aa0-34aa-4e8b-a509-50c905bae2a2} nor InprocServer32 exist in my registry.")
print("# Expected Output: Creating Key: Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32")
key = open_or_create_key(winreg.HKEY_CURRENT_USER, r"Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32")
close_key(key)
# Setting this key and vlue restores the Windows 10 Windows Explorer ribbon.
print("# The Blocked key does not exist in my registry.")
print("# Expected Output: Creating Key: SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Blocked")
key = open_or_create_key(winreg.HKEY_CURRENT_USER, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Blocked")
print("# If the key were created, then I can test for value {e2bf9676-5f8f-435c-97eb-11607a5bedf7} which should not exist yet.")
print("# Expected Output: An error occurred while querying the value: ")
get_value(key,"{e2bf9676-5f8f-435c-97eb-11607a5bedf7}")
set_value(key,"{e2bf9676-5f8f-435c-97eb-11607a5bedf7}", 1)
close_key(key)
</code></pre>
<p>When I run the above (in VsCode terminal with VsCode run as admin), I get the following output:</p>
<pre><code># This key does exist on my system and has tons of values
# Expected Output: Key opened: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
Key opened: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
# Value name DisallowShaking is NOT in my registry.
# Expected Output: Current value: <not set>
Current value: 1
# Value name HideFileExt IS in my registry.
# Expected Output: HideFileExt set to X (where X is the value set in the code) - needs to be checked in the registry to see if it changed between runs
HideFileExt was set to 1
Key closed.
# Neither {86ca1aa0-34aa-4e8b-a509-50c905bae2a2} nor InprocServer32 exist in my registry.
# Expected Output: Creating Key: Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32
Key opened: HKEY_CURRENT_USER\Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32
Key closed.
# The Blocked key does not exist in my registry.
# Expected Output: Creating Key: SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Blocked
Key opened: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Blocked
# If the key were created, then I can test for value {e2bf9676-5f8f-435c-97eb-11607a5bedf7} which should not exist yet.
# Expected Output: An error occurred while querying the value:
Current value: <not set>
Key closed.
</code></pre>
<p>Despite that, the state of my registry after running the above is as follows:</p>
<ul>
<li><p>DisallowShaking was NOT created</p>
</li>
<li><p>HideFileExt was NOT set to 1
<a href="https://i.sstatic.net/Qhhk8DnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qhhk8DnZ.png" alt="enter image description here" /></a></p>
</li>
<li><p>key <code>Software\Classes\CLSID\{86ca1aa0-34aa-4e8b-a509-50c905bae2a2}\InprocServer32</code> is NOT created
<a href="https://i.sstatic.net/2t8WZDM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2t8WZDM6.png" alt="enter image description here" /></a></p>
</li>
<li><p>key <code>SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Blocked</code> is NOT created</p>
</li>
<li><p>value <code>{e2bf9676-5f8f-435c-97eb-11607a5bedf7}</code> clearly does not exist, but the output acts as if it did.
<a href="https://i.sstatic.net/WRuyJrwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WRuyJrwX.png" alt="enter image description here" /></a></p>
</li>
</ul>
<p>And somehow, the state is preserved. I added a "set_value" to the last value ({e2bf9676-5f8f-435c-97eb-11607a5bedf7}) on a subsequent run and the value I set persisted between runs of the program. The key and value both still don't exist in my registry.</p>
<p>The common responses are the 64 vs 32 bit issue which isn't in play (the system, python, and the Regedit are all 64 bit). I've checked the "VirtualStore" - I checked the ENTIRE registry for any text containing one of the values I set and it's not there (except when I run the code... then it thinks it is).</p>
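<p>One plausible cause worth ruling out (an assumption on my part, not something the post confirms): a Microsoft Store install of Python runs in an app container that virtualizes registry writes into a private, per-app hive. That would produce exactly these symptoms: writes appear to succeed and persist for the script, but never show up in regedit. A quick diagnostic sketch:</p>

```python
import sys

# Microsoft Store Python lives under a path containing "WindowsApps" and
# virtualizes registry (and some filesystem) writes into a private per-app
# location, so changes never reach the real hives shown in regedit.
print(sys.executable)
is_store_python = "WindowsApps" in sys.executable
print("Possibly Store Python:", is_store_python)
```

<p>If this prints a WindowsApps path, reinstalling Python from python.org (or pointing VS Code at a non-Store interpreter) would be the thing to try first.</p>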
|
<python><windows><registry>
|
2024-06-21 23:44:14
| 1
| 2,198
|
not_a_generic_user
|
78,654,728
| 7,339,624
|
How to calculate continuous Fourier inverse numerically with numpy?
|
<p>I'm trying to implement a function that calculates the inverse Fourier transform of a continuous function <code>F(ω)</code> to obtain <code>f(x)</code>.</p>
<p>My input function is continuous (you can assume it is a lambda function), so I have to approximate the transform numerically via discrete sampling. Here's my current implementation:</p>
<pre><code>import numpy as np
def fourier_inverse(f, n, omega_max):
"""
:param f: The function to be transformed
:param n: Number of samples
:param omega_max: The max frequency we want to be samples
"""
omega_range = np.linspace(-omega_max, omega_max, n)
    f_values = f(omega_range)
inverse_f = np.fft.ifftshift(np.fft.ifft(np.fft.fftshift(f_values)))
return inverse_f
</code></pre>
<p>However, I have two main concerns:</p>
<ol>
<li>The x-axis of the result ranges from 0 to n, which doesn't seem
correct. I don't know how to approach it.</li>
<li>I'm unsure if the overall approach is correct, particularly
regarding the sampling and normalization.</li>
</ol>
<p>Any help or guidance would be greatly appreciated. Thank you!</p>
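<p>For reference, a minimal sketch of one common way to recover both the spatial axis and the continuous-transform scaling, assuming the convention f(x) = (1/2π) ∫ F(ω) e^{iωx} dω and a uniform grid; the shift order and scale factor below are my assumptions and worth double-checking against your convention:</p>

```python
import numpy as np

def fourier_inverse(f, n, omega_max):
    # Uniform grid over [-omega_max, omega_max); after ifftshift, index 0
    # corresponds to omega = 0, which is the ordering np.fft.ifft expects.
    omega = np.linspace(-omega_max, omega_max, n, endpoint=False)
    d_omega = omega[1] - omega[0]
    f_values = f(omega)

    # ifft computes (1/n) * sum(...); rescale so it approximates the
    # continuous integral (1/2pi) * int F(w) e^{iwx} dw.
    inverse_f = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(f_values)))
    inverse_f *= n * d_omega / (2 * np.pi)

    # Spatial axis implied by the frequency sampling: x_j = 2*pi*j/(n*d_omega)
    x = np.fft.fftshift(np.fft.fftfreq(n, d=d_omega / (2 * np.pi)))
    return x, inverse_f
```

<p>An easy sanity check: with F(ω) = √(2π)·exp(−ω²/2), this should recover the Gaussian f(x) = exp(−x²/2) to high accuracy.</p>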
|
<python><numpy><fft><continuous-fourier>
|
2024-06-21 23:18:18
| 1
| 4,337
|
Peyman
|
78,654,453
| 6,622,697
|
How to handle circular dependecy between two tables in SQLAlchemy
|
<p>I'm having a problem resolving a circular dependency between two tables. I have followed all the information I found, but it still doesn't quite work.</p>
<p>My tables look like this</p>
<pre><code> run calibration_metric
------------------------------------------- ---------------------------------
run_pk calibration_metric_pk calibration_metric run_pk
1 5 5 1
</code></pre>
<p>The relationship from <code>calibration_metric</code> to <code>run</code> is many to 1, but from <code>run</code> to <code>calibration_metric</code> is 1 to 1</p>
<p>I have the following definitions (extraneous fields removed)</p>
<pre><code>lass Run(ModelBase):
__tablename__ = 'run'
pk = mapped_column(Integer, primary_key=True)
calibration_metric_pk = mapped_column(ForeignKey("calibration_metric.pk", name="fk_run_calibration_metric"))
metric = relationship("CalibrationMetric",
primaryjoin="calibration_metric_pk == CalibrationMetric.pk",
post_update=True)
class CalibrationMetric(ModelBase):
__tablename__ = 'calibration_metric'
pk = mapped_column(Integer, primary_key=True)
run_pk = mapped_column(ForeignKey("run.pk", name="fk_calibration_metric_run"))
run = relationship("Run")
</code></pre>
<p>I have this section, <code>"calibration_metric_pk == CalibrationMetric.pk"</code>, in quotes because without them, Python complained about a circular import.</p>
<p>I'm getting this error
<code>sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper[Run(run)], expression 'calibration_metric_pk == CalibrationMetric.pk' failed to locate a name ("name 'calibration_metric_pk' is not defined"). If this is a class name, consider adding this relationship() to the <class 'root.db.models.Run.Run'> class after both dependent classes have been defined.</code></p>
<p>I can't figure out why it's not finding the <code>calibration_metric_pk</code></p>
<p>Note that I don't care about getting the list of CalibrationMetric objects in Run that are associated via the Run primary key. I only care about the 1-1 relationship and getting the CalibrationMetric object pointed to by the calibration_metric_pk field.
But I would also like to have access to the Run object from CalibrationMetric.run_pk. I don't think I need to specify <code>back_populates</code>, since I only care about one end of each relationship.</p>
|
<python><sqlalchemy>
|
2024-06-21 21:11:45
| 2
| 1,348
|
Peter Kronenberg
|
78,654,348
| 922,533
|
How to enable python includes when EXECing from PHP
|
<p>I have a small python script for calculating the peaks of a PPG signal, as below.</p>
<pre><code>#!/usr/bin/python
import numpy as np
from scipy.signal import find_peaks
# Sample PPG signal
ppg_signal = np.sin(2*np.pi*0.5*np.linspace(0, 10, 1000)) + np.random.rand(1000) * 0.2
# Find peaks with prominence of 0.1 and minimum distance of 20 samples
peaks, _ = find_peaks(ppg_signal, prominence=0.1, distance=20)
# Print the peak locations
print("Done")
</code></pre>
<p>This works fine in CLI mode on my server. However, when I execute it from my php script with:</p>
<pre><code>exec('/usr/bin/python3 /mypython/peaks.py', $output, $ret_code);
</code></pre>
<p>I get a ret_code of 1 and a null output array. If I comment it all out except the import line and the print line, it works. I have deduced that it is the 'from scipy...' line that breaks.</p>
<p>My guess is that it has something to do with permissions? What else can I try?</p>
<p>EDIT: I know that find_peaks requires a scipy update which I did install. That's why it works from command line. Just not from php exec.</p>
<p>EDIT2: Turns out it is the paths. My web user doesn't have the path to the scipy library enabled. I installed scipy as root. Just need to figure out how to solve that.</p>
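<p>Since the remaining question is which search path the web user's interpreter actually sees, a tiny diagnostic script run through the same exec() call makes the difference visible. This is just a debugging sketch, not a fix:</p>

```python
# Diagnostic sketch: run this file once via the php exec() call and once
# from the CLI, then diff the two outputs to spot the missing scipy path.
import sys

print("interpreter:", sys.executable)
for p in sys.path:
    print("path:", p)
```

<p>If the site-packages directory holding scipy appears only in the CLI run, one option is prepending it in the exec string, e.g. <code>exec('PYTHONPATH=/path/to/site-packages /usr/bin/python3 /mypython/peaks.py', ...)</code>; the exact path is an assumption you'd need to verify on your server.</p>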
|
<python><php><python-3.x><scipy>
|
2024-06-21 20:31:02
| 0
| 2,156
|
Doug Wolfgram
|
78,654,305
| 1,202,417
|
Serving generated image from firebase
|
<p>I'm working on a simple web app, using firebase.</p>
<p>Essentially I would like to make it so that whenever a user visits a page it will serve them an image with the text of the url:</p>
<p>For example, <code>mywebpage.com/happy_birthday/hannah.jpg</code> would take an image I have stored in storage, write the name 'hannah' across it, and serve the result to the user.</p>
<p>I can easily modify the image with PIL, but I'm not sure how I can serve this to the user without having to pre-upload everything.</p>
<pre><code>@https_fn.on_request(cors=options.CorsOptions(cors_origins="*", cors_methods=["get", "post"]))
def rss_image(req: https_fn.Request) -> https_fn.Response:
return https_fn.Response(make_birthday_image(), mimetype='image/jpg')
from PIL import Image, ImageDraw
def make_birthday_image(name):
image = Image.new('RGB', (100, 100), color='white')
draw = ImageDraw.Draw(image)
    draw.text((10, 40), name, font_size=10, fill='black')  # xy position is required
... # unsure what to return here
</code></pre>
<p>Any advice would be super helpful!</p>
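<p>A minimal sketch of the piece the question leaves open: render the image into an in-memory buffer and return the raw bytes. The (10, 40) text position and JPEG output format are my assumptions here:</p>

```python
import io

from PIL import Image, ImageDraw

def make_birthday_image(name):
    image = Image.new("RGB", (100, 100), color="white")
    draw = ImageDraw.Draw(image)
    draw.text((10, 40), name, fill="black")  # xy position is required
    buf = io.BytesIO()
    image.save(buf, format="JPEG")  # write the encoded image into memory
    return buf.getvalue()  # raw bytes, suitable for an image/jpeg response
```

<p>The returned bytes can then be passed straight to <code>https_fn.Response(..., mimetype='image/jpeg')</code>, with the name parsed from the request path.</p>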
|
<python><firebase><image><google-cloud-functions><python-imaging-library>
|
2024-06-21 20:15:11
| 1
| 411
|
Ben Fishbein
|
78,654,233
| 1,382,667
|
Selenium get element text after click event
|
<p>Most modern websites have dynamic elements where a click event makes an AJAX call and replaces all, or usually part, of a webpage. The response can be tracked using the Network tab in Developer Tools.</p>
<p>How do you do this in Selenium?</p>
<p>In Python I thought about grabbing the element text again after the click, but it's not updating:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
ChromeDriverPath64 = "D:/Software/chromedriver_126-win64/chromedriver.exe"
options = webdriver.ChromeOptions()
s = Service(executable_path="D:/Software/chromedriver_126-win64/chromedriver.exe")
options.page_load_strategy = 'normal'
options.binary_location = "C:/Program Files/Google/Chrome/Application/chrome.exe"
try:
driver = webdriver.Chrome(service=s, options=options)
driver.get("https://example.com/")
MyClasss = driver.find_elements(By.CLASS_NAME, "MyClass")
for el in MyClasss:
elTxt = el.text
if "wanted" in elTxt.lower():
el.click()
MyClass = driver.find_elements(By.CLASS_NAME, "MyClass")
for el1 in MyClass:
el1Txt = el1.text
if "wanted" in el1Txt.lower():
print(el1Txt)
driver.quit()
except Exception as e:
print(e)
</code></pre>
<p>There is a lot of conflicting information on how to capture AJAX requests with Selenium; from what I've tried, most of it seems outdated.</p>
|
<python><selenium-webdriver>
|
2024-06-21 19:52:53
| 0
| 333
|
Holly
|
78,654,158
| 4,522,501
|
Is it safe sharing Self-signed certificate to other persons?
|
<p>I have a Python script that I will share with other people to consume messages from a Kafka service. I don't have experience with security, so I have a doubt I want to resolve before assuming things. My Kafka broker is hosted in the DigitalOcean cloud; they provide a few certificates, and one of them is used to connect via SSL: a ca.crt file (a self-signed CA root certificate). I have to provide my Python script to other people so they can connect to that Kafka service and consume messages, but I'm concerned about whether it is possible to share this ca.crt certificate without any risk. I'm not familiar with security, hence my worry.</p>
<p>I think others could have this same doubt, so this could help as a reference.</p>
|
<python><ssl><apache-kafka><cloud><digital-ocean>
|
2024-06-21 19:27:42
| 1
| 1,188
|
Javier Salas
|
78,654,124
| 2,992,192
|
Convert a python non-tail recursive function adding sub-folders info to a loop
|
<p>I wrote a function to get the total size and total files/subdirs of the entire disk or specific folder. The recursive function was very simple:</p>
<pre><code># Make the python dirSize function a non recursive one:
from pathlib import Path
obj = {}
def dirSize(dir: Path, size = 0, files = 0, dirs = 0):
for sub_dir in dir.glob('*'):
if sub_dir.is_dir():
dirs += 1
res_size, res_files, res_dirs = dirSize(sub_dir)
size += res_size
files += res_files
dirs += res_dirs
elif sub_dir.is_file():
files += 1
size += sub_dir.stat().st_size
obj[dir.as_posix()] = {
'size': size,
'files': files,
'dirs': dirs,
}
return size, files, dirs
dirSize(Path('~/Documents').expanduser())
#Line above is for tests. Final code should run in elevated cmd:
#dirSize(Path('C:/'))
with open(Path('~/Desktop/results.tsv').expanduser().as_posix(), 'w') as file:
file.write('Directory\tBytes\tFiles\tDirs\n')
for key in sorted(obj):
dir = obj[key]
file.write(key)
file.write('\t')
file.write(str(dir['size']))
file.write('\t')
file.write(str(dir['files']))
file.write('\t')
file.write(str(dir['dirs']))
file.write('\n')
</code></pre>
<p>I tried to use AI to convert it to a non-recursive function, but the AI seems to ignore that these lines perform a bottom-up sum, not a simple sum:</p>
<pre><code>size += res_size
files += res_files
dirs += res_dirs
</code></pre>
<p>So I tried a stack loop, like in Dijkstra's algorithm, but it turned out I couldn't find a simple way to do the bottom-up sum.
Alternatively, I could add level/parent/child properties to the obj variable to do a bottom-up sum after the queue processing. But this seems very cumbersome. Is there a simpler way to achieve this?</p>
<hr />
<hr />
<hr />
<p>Replying questions asked below:</p>
<ol>
<li>Why don't you just use os.walk?</li>
</ol>
<ul>
<li>It eliminates the recursion immediately, but doesn't help to do the bottom up sum - the generated report shows how many subdirs, disk space and files each directory has.</li>
</ul>
<ol start="2">
<li>What's the problem with the recursive function?</li>
</ol>
<ul>
<li>There is no problem, it works to perfection! But my boss thinks not all developers can work with recursion, so he asked me for a non recursive version and I'm struggling to not make it too complicated.</li>
</ul>
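<p>For what it's worth, the os.walk objection above goes away with topdown=False: leaf directories are yielded before their parents, so the bottom-up roll-up falls out of the iteration order with no recursion and no explicit stack. A sketch (the tuple layout is my own choice):</p>

```python
import os

def dir_sizes(root):
    totals = {}  # dirpath -> (size, files, dirs), filled leaves-first
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        size = files = 0
        for name in filenames:
            try:
                size += os.path.getsize(os.path.join(dirpath, name))
                files += 1
            except OSError:
                pass  # broken symlink, permission error, ...
        dirs = len(dirnames)
        for d in dirnames:
            # child dirs were already summarized thanks to topdown=False
            c_size, c_files, c_dirs = totals.get(os.path.join(dirpath, d), (0, 0, 0))
            size += c_size
            files += c_files
            dirs += c_dirs
        totals[dirpath] = (size, files, dirs)
    return totals
```

<p>The resulting dict has the same per-directory totals as the recursive version's obj, so the report-writing loop stays unchanged.</p>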
|
<python><recursion>
|
2024-06-21 19:15:03
| 3
| 5,209
|
Marquinho Peli
|
78,654,028
| 3,995,256
|
WxPython Align text in TextCtrl Center Vertical
|
<p>I am trying to center the text inside a TextCtrl. Many solutions to similar questions rely on <code>ALIGN_CENTER_VERTICAL</code>; however, I cannot use this, as I want my text box to be taller than the font height.</p>
<p>This is what I am trying now:</p>
<pre><code>self.name_tb = wx.TextCtrl(self.showing_panel, style=wx.BORDER_NONE | wx.TE_CENTRE)
self.showing_sizer.Add(self.name_tb, 1, flag=wx.ALIGN_CENTER_VERTICAL | wx.RIGHT, border=3)
self.name_tb.SetMinSize((-1, height)) # height > font height
</code></pre>
<p><a href="https://i.sstatic.net/iVPQwUDj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVPQwUDj.png" alt="enter image description here" /></a></p>
<p>Desired outcome is that the text and the cursor is aligned vertically centered.</p>
<p>Edit:</p>
<p>Code and Screenshot of runnable code</p>
<pre><code>import wx

class MyFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent)
self.SetSize((400, 200))
self.SetTitle("Centre Text Vertically")
self.panel = wx.Panel(self, wx.ID_ANY)
sizer = wx.BoxSizer(wx.VERTICAL)
self.text_ctrl = wx.TextCtrl(self.panel, wx.ID_ANY, "", style=wx.TE_CENTRE)
self.text_ctrl.SetMinSize((150, 50))
sizer.Add(self.text_ctrl, 0, wx.ALIGN_CENTER_HORIZONTAL | wx.TOP, 40)
self.panel.SetSizer(sizer)
self.Layout()
wx.CallAfter(self.displayText)
def displayText(self):
self.text_ctrl.SetValue("Text Text Text")
self.text_ctrl.SetInsertionPointEnd()
if __name__ == "__main__":
app = wx.App()
frame = MyFrame(None)
frame.Show()
app.MainLoop()
</code></pre>
<p>Ran on Mac:
<a href="https://i.sstatic.net/AZvGiS8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AZvGiS8J.png" alt="enter image description here" /></a>
Ran on Linux:
<a href="https://i.sstatic.net/MBHHZ4dp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBHHZ4dp.png" alt="enter image description here" /></a></p>
<p>It seems like this is only an issue on Mac. Any ideas?</p>
|
<python><wxpython><wxwidgets>
|
2024-06-21 18:50:44
| 0
| 490
|
nyxaria
|
78,653,995
| 6,622,697
|
What are the pros and cons of the two different declaritive methods in SQLAlchemy 2.0
|
<p>This page, <a href="https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#declarative-table-with-mapped-column" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#declarative-table-with-mapped-column</a>, desribes two different ways of using Declarative Mapping</p>
<p>Declarative Table with mapped_column()</p>
<pre><code>class User(Base):
__tablename__ = "user"
id = mapped_column(Integer, primary_key=True)
name = mapped_column(String(50), nullable=False)
fullname = mapped_column(String)
nickname = mapped_column(String(30))
</code></pre>
<p>and then the Annotated Declarative Table, which uses the <code>Mapped</code> object:</p>
<pre><code>class User(Base):
__tablename__ = "user"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(50))
fullname: Mapped[Optional[str]]
nickname: Mapped[Optional[str]] = mapped_column(String(30))
</code></pre>
<p>The first one seems much more straightforward. What are the advantages of the second one? The type is specified twice, both as a Python type and as an SQLAlchemy type.</p>
|
<python><sqlalchemy>
|
2024-06-21 18:41:35
| 1
| 1,348
|
Peter Kronenberg
|
78,653,824
| 20,591,261
|
Efficiently Marking Holidays in a Data Column
|
<p>I'm trying to add a column that indicates whether a date is a holiday or not. I found some code online, but I believe there's a more efficient way to do it, possibly using a native Polars expression instead of <code>map_elements</code> with a lambda.</p>
<p>Example code:</p>
<pre><code>import polars as pl
import holidays
# Initialize the holidays for Chile
cl_holidays = holidays.CL()
# Sample data
data = {
"Date": ["2024-06-20 00:00:00", "2024-06-21 00:00:00", "2024-06-22 00:00:00", "2024-06-23 00:00:00", "2024-06-24 00:00:00"],
"Amount": [100, 200, 300, 400, 500],
"User_Count" : [1, 2, 3, 4, 5]
}
# Create DataFrame
df = pl.DataFrame(data)
# Add a new column 'Is_Holiday' based on the Date column
df = df.with_columns(
(pl.col("Date").map_elements(lambda x: x.split(" ")[0] in cl_holidays, return_dtype=pl.Boolean)).alias("Is_Holiday")
).with_columns(pl.col("Date").str.strptime(pl.Datetime))
df
</code></pre>
<p>Expected output:</p>
<pre><code>shape: (5, 4)
┌─────────────────────┬────────┬────────────┬────────────┐
│ Date ┆ Amount ┆ User_Count ┆ Is_Holiday │
│ --- ┆ --- ┆ --- ┆ --- │
│ datetime[μs] ┆ i64 ┆ i64 ┆ bool │
╞═════════════════════╪════════╪════════════╪════════════╡
│ 2024-06-20 00:00:00 ┆ 100 ┆ 1 ┆ true │
│ 2024-06-21 00:00:00 ┆ 200 ┆ 2 ┆ false │
│ 2024-06-22 00:00:00 ┆ 300 ┆ 3 ┆ false │
│ 2024-06-23 00:00:00 ┆ 400 ┆ 4 ┆ false │
│ 2024-06-24 00:00:00 ┆ 500 ┆ 5 ┆ false │
└─────────────────────┴────────┴────────────┴────────────┘
</code></pre>
<p>UPDATE: I tried @ignoring_gravity's approach, and also tried changing the date format, but I keep getting false instead of true.</p>
<p>UPDATE2: If I try @Hericks' approach I keep getting false. (I'm using polars 0.20.31.)</p>
<pre><code>import polars as pl
import holidays
# Initialize the holidays for Chile
cl_holidays = holidays.CL()
# Sample data
data = {
"Date": ["2024-06-20 00:00:00", "2024-06-21 00:00:00", "2024-06-22 00:00:00", "2024-06-23 00:00:00", "2024-06-24 00:00:00"],
"Amount": [100, 200, 300, 400, 500],
"User_Count" : [1, 2, 3, 4, 5]
}
# Create DataFrame
df = pl.DataFrame(data)
# Add a new column 'Is_Holiday' based on the Date column
df.with_columns(
Is_Holiday=pl.col('Date').str.to_datetime().dt.date().is_in(cl_holidays.keys())
)
</code></pre>
<p>Output:</p>
<pre><code>shape: (5, 4)
┌─────────────────────┬────────┬────────────┬────────────┐
│ Date ┆ Amount ┆ User_Count ┆ Is_Holiday │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ bool │
╞═════════════════════╪════════╪════════════╪════════════╡
│ 2024-06-20 00:00:00 ┆ 100 ┆ 1 ┆ false │
│ 2024-06-21 00:00:00 ┆ 200 ┆ 2 ┆ false │
│ 2024-06-22 00:00:00 ┆ 300 ┆ 3 ┆ false │
│ 2024-06-23 00:00:00 ┆ 400 ┆ 4 ┆ false │
│ 2024-06-24 00:00:00 ┆ 500 ┆ 5 ┆ false │
└─────────────────────┴────────┴────────────┴────────────┘
</code></pre>
|
<python><dataframe><python-polars><python-holidays>
|
2024-06-21 17:53:39
| 2
| 1,195
|
Simon
|
78,653,796
| 955,558
|
TTL setting for Redis inside Connection String, is it possible?
|
<p>I am doing maintenance on several Python applications which use connections to Redis services. Due to an infrastructure need, I am reducing the number of environment-configuration variables as much as possible for all applications.
Today, the application uses several parameters, for example:</p>
<pre><code>REDIS_PRIMARY_HOST= "localhost"
REDIS_PRIMARY_PORT= "6379"
REDIS_PRIMARY_PASSWORD= "password"
REDIS_PRIMARY_TTL_SECONDS = "1"
REDIS_DATABASE = "0"
</code></pre>
<p>We want to make a change and only use a connection string and another variable, such as:</p>
<pre><code>REDIS_PRIMARY_CONNECTION_STRING = "redis://:password@localhost:6379?db=<database_number>&decode_responses=True&health_check_interval=2"
REDIS_DATABASE_NUMBER = "0"
</code></pre>
<p>However, I couldn't find a way in the documentation to set a default TTL (Time To Live) via the connection string. Either I do this directly in the code, or I will have to extract the parameter and define it at run time.
Is there any way to set the Redis Time To Live by default in the connection string?</p>
|
<python><redis><connection-string>
|
2024-06-21 17:45:39
| 1
| 539
|
Gustavo Gonçalves
|
78,653,649
| 2,839,803
|
ModuleNotFoundError message using Python
|
<p>I'm trying to create an API using Python. In my app folder there are the folders: models, services, tests. In each folder (including app) there is an empty <code>__init__.py</code>.</p>
<p>In my model folder there is Ad.py file.</p>
<p>In my services folder there is an AdService.py file.</p>
<p>This is part of the code in my AdService.py file:</p>
<pre><code>from models.Ad import Ad
from models.SubCategory import SubCategory
from database import session
class AdService:
.
.
.
</code></pre>
<p>In my tests folder there is a file name AdServiceTest.py.</p>
<p>This is part of the code in my AdServiceTest.py file:</p>
<pre><code>import unittest
from datetime import datetime
from app.models.Ad import Ad # Ensure the import path is correct
from app.services.AdService import AdService # Ensure the import path is correct
from app.database import (Base,engine,session) # Assuming Base and engine are defined in your database module
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy import create_engine
# Setup in-memory SQLite database for testing
DATABASE_URL = "XXXX"
engine = create_engine(DATABASE_URL)
Session = sessionmaker(bind=engine)
session = scoped_session(Session)
class AdServiceTest(unittest.TestCase):
.
.
.
</code></pre>
<p>When I run my AdServiceTest.py I get an error on the line: "from app.models.Ad import Ad" with the text:</p>
<blockquote>
<p>"ModuleNotFoundError: No module named 'app'".</p>
</blockquote>
<p>I tried changing the prefix to just models.Ad or ..models.Ad and nothing worked.</p>
<p>What am I doing wrong?</p>
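<p>Two common sketches for this layout (the paths are assumed from the description): either run the tests as a module from the directory containing app, i.e. <code>python -m app.tests.AdServiceTest</code>, or have the test file prepend the project root to sys.path before the app.* imports:</p>

```python
# Put this at the very top of AdServiceTest.py, before the "from app...." imports.
import os
import sys

# tests/ -> app/ -> project root (layout assumed from the question)
PROJECT_ROOT = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
sys.path.insert(0, PROJECT_ROOT)
```

<p>The root cause is that running the file directly puts the tests folder, not the project root, on sys.path, so the top-level package app is never importable.</p>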
|
<python><module><modulenotfounderror>
|
2024-06-21 17:02:25
| 0
| 582
|
itzick binder
|
78,653,638
| 18,476,381
|
SQLAlchemy Join statement introducing duplicates
|
<p>I have two tables <code>component</code> and <code>component_transform</code></p>
<pre><code>class Component(BaseModel):
__tablename__ = "component"
component_id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
component_serial_number: Mapped[str] = mapped_column(String(250), unique=True)
component_transform: Mapped[List["ComponentTransform"]] = relationship(
"ComponentTransform", back_populates="component"
)
class ComponentTransform(BaseModel):
__tablename__ = "component_transform"
transform_id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
component_id: Mapped[Optional[int]] = mapped_column(
ForeignKey("component.component_id")
)
component_serial_number: Mapped[Optional[str]] = mapped_column(String(250))
component: Mapped["Component"] = relationship(
"Component", back_populates="component_transform"
)
</code></pre>
<p><code>component</code> has a one-to-many relation to <code>component_transform</code>. I have a function that needs to search for the serial number, and I want to be able to search it in either <code>component</code> or <code>component_transform</code>. Regardless of which table matches, I just want to return the one record held in the <code>component</code> table.</p>
<p>I have the function below, but it keeps returning multiple records. For example, I have the serial number "A" in my <code>component</code> table and "B", "C", "D" in the <code>component_transform</code> table. If I search for any of these, I should only get the record back for "A". Of course, they all share the <code>component_id</code>.</p>
<pre><code>async def search_components(
session: AsyncSession,
component_serial_number: Optional[str] = None,
component_name: Optional[str] = None,
component_status: Optional[list[str]] = None,
limit: int = 20,
offset: int = 0,
) -> Sequence[ComponentSearchModel]:
async with session:
subquery_service_hrs = (
select(
DBMotorComponent.component_id,
func.sum(DBMotorComponent.drilling_hrs).label("service_hrs"),
func.sum(DBMotorComponent.drilling_hrs).label("life_hrs"),
).group_by(DBMotorComponent.component_id)
).subquery()
statement = (
select(DBComponent)
.options(joinedload(DBComponent.part))
.outerjoin(
subquery_service_hrs,
DBComponent.component_id == subquery_service_hrs.c.component_id,
)
.outerjoin(
DBComponentTransform,
DBComponent.component_id == DBComponentTransform.component_id,
)
.add_columns(
subquery_service_hrs.c.service_hrs, subquery_service_hrs.c.life_hrs
)
)
if component_serial_number is not None:
statement = statement.where(
or_(
DBComponent.component_serial_number.ilike(
f"%{component_serial_number}%"
),
DBComponentTransform.component_serial_number.ilike(
f"%{component_serial_number}%"
),
)
)
if component_name is not None:
statement = statement.where(
DBComponent.component_name.ilike(f"%{component_name}%")
)
if component_status is not None:
statement = statement.where(
DBComponent.component_status.in_(component_status)
)
statement = statement.limit(limit).offset(offset)
result = await session.execute(statement)
components = result.fetchall()
component_model = [
ComponentSearchModel.model_validate(
{
**component.__dict__,
"service_hrs": service_hrs or 0,
"life_hrs": life_hrs or 0,
}
)
for component, service_hrs, life_hrs in components
]
return component_model
</code></pre>
<p>I have tried doing a distinct but I am getting an error of <code>ORA-00932: inconsistent datatypes: expected - got CLOB</code></p>
<p>I have also tried to do a group_by on the DBComponent but getting a "Not a group by expression" error.</p>
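<p>One way to guarantee a single parent row is to filter with <code>EXISTS</code> over the child table instead of outer-joining it, so the join can never multiply the parent rows. Below is a minimal stdlib sketch of the query shape (table and column names mirror the question, but the schema is reduced for illustration); in SQLAlchemy terms this corresponds to wrapping the child filter in an <code>exists()</code> subquery in the <code>or_</code> rather than the <code>outerjoin</code>:</p>

```python
import sqlite3

# in-memory stand-in for the two tables from the question
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE component (component_id INTEGER PRIMARY KEY,
                        component_serial_number TEXT);
CREATE TABLE component_transform (component_id INTEGER,
                                  component_serial_number TEXT);
INSERT INTO component VALUES (1, 'A');
INSERT INTO component_transform VALUES (1, 'B'), (1, 'C'), (1, 'D');
""")

# EXISTS filters the parent on the child's columns without joining the child,
# so component can never be duplicated by its transform rows
rows = con.execute("""
SELECT c.component_id, c.component_serial_number
FROM component c
WHERE c.component_serial_number LIKE '%' || ? || '%'
   OR EXISTS (SELECT 1 FROM component_transform t
              WHERE t.component_id = c.component_id
                AND t.component_serial_number LIKE '%' || ? || '%')
""", ("C", "C")).fetchall()
print(rows)
```

<p>Searching "C" matches only a child row, yet the query returns the one parent record for "A" exactly once.</p>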
|
<python><sql><sqlalchemy>
|
2024-06-21 17:00:51
| 1
| 609
|
Masterstack8080
|
78,653,631
| 3,821,009
|
Polars literal DuplicateError using when/then with selectors
|
<p>Say I have this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import polars.selectors as cs
df = pl.from_repr("""
┌─────┬─────┬─────┐
│ j ┆ k ┆ l │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 71 ┆ 79 ┆ 67 │
│ 26 ┆ 42 ┆ 55 │
│ 12 ┆ 43 ┆ 85 │
│ 92 ┆ 96 ┆ 14 │
│ 95 ┆ 26 ┆ 62 │
│ 75 ┆ 14 ┆ 56 │
│ 61 ┆ 41 ┆ 75 │
│ 74 ┆ 97 ┆ 70 │
│ 73 ┆ 32 ┆ 10 │
│ 66 ┆ 98 ┆ 40 │
└─────┴─────┴─────┘
""")
</code></pre>
<p>and I want to apply the same <code>when</code>/<code>then</code>/<code>otherwise</code> condition on multiple columns:</p>
<pre class="lang-py prettyprint-override"><code>df.select(
pl.when(cs.numeric() < 50)
.then(1)
.otherwise(2)
)
</code></pre>
<p>This fails with:</p>
<pre><code>DuplicateError: the name 'literal' is duplicate
</code></pre>
<p>How do I make this use the currently selected column as the alias? I.e. I want the equivalent of this:</p>
<pre class="lang-py prettyprint-override"><code>df.select(
pl.when(pl.col(c) < 50)
.then(1)
.otherwise(2)
.alias(c)
for c in df.columns
)
</code></pre>
<pre><code>shape: (10, 3)
┌─────┬─────┬─────┐
│ j ┆ k ┆ l │
│ --- ┆ --- ┆ --- │
│ i32 ┆ i32 ┆ i32 │
╞═════╪═════╪═════╡
│ 2 ┆ 2 ┆ 2 │
│ 1 ┆ 1 ┆ 2 │
│ 1 ┆ 1 ┆ 2 │
│ 2 ┆ 2 ┆ 1 │
│ 2 ┆ 1 ┆ 2 │
│ 2 ┆ 1 ┆ 2 │
│ 2 ┆ 1 ┆ 2 │
│ 2 ┆ 2 ┆ 2 │
│ 2 ┆ 1 ┆ 1 │
│ 2 ┆ 2 ┆ 1 │
└─────┴─────┴─────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-06-21 16:58:02
| 1
| 4,641
|
levant pied
|
78,653,572
| 15,912,168
|
How to remove the line breaks that Visual Studio Code includes when saving the code
|
<p>How do I remove the line breaks that are added by the formatter when saving code in VS Code?</p>
<p>For example:</p>
<p>If I have an expression with multiple validations:</p>
<pre class="lang-py prettyprint-override"><code>if True == True or False == True or False == False:
</code></pre>
<p>It is being formatted as:</p>
<pre class="lang-py prettyprint-override"><code>if (True == True
or False == True
    or False == False):
</code></pre>
<p>This complicates readability. How can I reformat it to look like the first example?</p>
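<p>For context, the wrapping is done by whichever formatter the Python extension runs on save (commonly Black or autopep8), which breaks lines that exceed its line-length limit. A hedged sketch of a <code>settings.json</code> tweak; the setting names below assume the <code>ms-python.black-formatter</code> extension and are illustrative:</p>

```json
{
    // either stop formatting Python files on save...
    "[python]": {
        "editor.formatOnSave": false
    },
    // ...or raise the line-length limit so short conditions stay on one line
    "black-formatter.args": ["--line-length", "120"]
}
```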
|
<python><visual-studio-code><pep8>
|
2024-06-21 16:42:14
| 2
| 305
|
WesleyAlmont
|
78,653,525
| 13,562,186
|
Python Module not Found after pip install in VSCode
|
<p>I installed a fresh version of Python and am running into the following error in VS Code:</p>
<p>Here are the steps I have taken:</p>
<ol>
<li>Create Virtual Environment and activate:</li>
</ol>
<p><a href="https://i.sstatic.net/TfowPYJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TfowPYJj.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>Create a Test.py script and test importing the module. It is not found... though strangely it is predicted as I type?</li>
</ol>
<p><a href="https://i.sstatic.net/oTWpoJ2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTWpoJ2A.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/KnsolfLG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnsolfLG.png" alt="enter image description here" /></a></p>
<p>I notice its underlined in yellow so need to install the module.</p>
<p>3. Install the module using pip:
<a href="https://i.sstatic.net/19h5sqV3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19h5sqV3.png" alt="enter image description here" /></a></p>
<p>4. All seems to be installed successfully and the yellow line goes away
<a href="https://i.sstatic.net/f1sswS6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f1sswS6t.png" alt="enter image description here" /></a></p>
<ol start="5">
<li>Running Script I Still get the error:</li>
</ol>
<p><a href="https://i.sstatic.net/itJtF3dj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itJtF3dj.png" alt="enter image description here" /></a></p>
<p>No idea why.</p>
<p>I have removed the virtual environment and reinstalled. same with the modules but no change.</p>
<p>The only thing I find odd is that after removal it still predicts Matplotlib, as in the first image... so maybe it's getting confused in some way?</p>
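<p>A common cause of this pattern (install succeeds, import still fails) is that the interpreter VS Code runs the script with is not the interpreter the terminal's <code>pip</code> installed into. A quick way to check from inside the script which interpreter is actually executing, so the install can be targeted at exactly that one:</p>

```python
import sys

# the interpreter actually running this script; install into it with
# "<this path> -m pip install matplotlib" (or "python -m pip install ..."
# in the same activated environment)
print(sys.executable)
```
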
|
<python><visual-studio-code><pip><modulenotfounderror>
|
2024-06-21 16:28:17
| 1
| 927
|
Nick
|
78,653,471
| 10,200,497
|
How can I form groups by a mask and N rows after that mask?
|
<p>My <strong>DataFrame</strong> is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [False, True, False, True, False, True, False, True, True, False, False],
}
)
</code></pre>
<p><strong>Expected output</strong> is forming groups like this:</p>
<pre><code> a
1 True
2 False
3 True
a
5 True
6 False
7 True
a
8 True
9 False
10 False
</code></pre>
<p><strong>The logic is:</strong></p>
<p>Basically I want to form groups consisting of a row where <code>df.a == True</code> plus the two rows after it. For example, to create the first group, the first <code>True</code> must be found, which is row <code>1</code>; the first group is then rows 1, 2 and 3. For the second group, the next <code>True</code> outside the first group must be found. That is row <code>5</code>, so the second group consists of rows 5, 6 and 7. This image clarifies the point:</p>
<p><a href="https://i.sstatic.net/rE2vDRtk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rE2vDRtk.png" alt="enter image description here" /></a></p>
<p>And this is my attempt that didn't work:</p>
<pre><code>N = 2
mask = ((df.a.eq(True))
.cummax().cumsum()
.between(1, N+1)
)
out = df[mask]
</code></pre>
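<p>Because each group consumes the <code>N</code> rows after its <code>True</code>, where one group starts depends on where the previous one ended, which is hard to express with cumulative masks alone. A short scan over the positions (a sketch; <code>N</code> and the group length follow the question) finds the group starts and slices each group out:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"a": [False, True, False, True, False, True, False, True, True, False, False]}
)
N = 2

a = df["a"].to_numpy()
starts = []
i = 0
while i < len(a):
    if a[i]:
        starts.append(i)   # this True opens a group...
        i += N + 1         # ...which swallows the next N rows
    else:
        i += 1

groups = [df.iloc[s : s + N + 1] for s in starts]
for g in groups:
    print(g, end="\n\n")
```

<p>Note how the <code>True</code> at row 8 starts a fresh group because row 8 lies outside the group opened at row 5.</p>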
|
<python><pandas><dataframe>
|
2024-06-21 16:16:23
| 6
| 2,679
|
AmirX
|
78,653,369
| 19,276,472
|
Scrapy + Playwright: calling a synchronous parse_single function from an async parse function
|
<p>I'm working with scrapy + Playwright.</p>
<p>A simplified version of the spider I have currently:</p>
<pre><code>class MySpider(CodeSpider):
def start_requests(self):
url = 'https://www.google.com/search?q=product+designer+nyc&ibp=htl;jobs'
yield Request(url, headers=headers, meta={
'playwright': True,
'playwright_include_page': True,
'errback': self.errback,
})
async def parse(self, response):
page = response.meta["playwright_page"]
jobs = page.locator("//li")
num_jobs = await jobs.count()
for idx in range(num_jobs):
await jobs.nth(idx).click()
job_details = page.locator("#tl_ditsc")
job_details_html = await job_details.inner_html()
soup = BeautifulSoup(job_details_html, 'html.parser')
data = self.parse_single_jd(soup)
yield {
'idx': idx,
'data': data,
}
def parse_single_jd(self, soup):
print("parse_single_jd running!")
title_of_role = soup.h2.text
data = {
"title": title_of_role,
}
return data
</code></pre>
<p>The spider runs - it opens a Playwright browser, navigates to the URL, and will loop thru the jobs on the page and click on each one.</p>
<p>However, it is NOT running the <code>self.parse_single_jd</code> function as expected - the <code>data</code> that's yielded ends up being a <code><generator object ... ></code>. Indeed, the <code>print("parse_single_jd running!")</code> line never fires.</p>
<p>I suspect this has to do with running the synchronous <code>parse_single_jd</code> function from the async <code>parse</code> function. How do I force <code>parse_single_jd</code> to run / evaluate in this situation?</p>
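<p>Calling a plain synchronous method from an <code>async def</code> is actually fine, so the symptom points elsewhere: in Python, any function whose body contains a <code>yield</code> is a generator function, and calling it returns a generator without executing the body at all (which would be why the <code>print</code> never fires). If the real <code>parse_single_jd</code> yields anywhere, even on an unreachable line, that would produce exactly this behaviour. A minimal illustration (the function below is a stand-in, not the question's code):</p>

```python
import inspect

def parse_single_jd():
    print("parse_single_jd running!")  # never printed by the call below
    return {"title": "x"}
    yield  # unreachable, but its mere presence makes this a generator function

data = parse_single_jd()  # the body does NOT run; no print happens
print(inspect.isgeneratorfunction(parse_single_jd))  # True
print(type(data))  # <class 'generator'>
```

<p>The fix, under that assumption, is to remove the stray <code>yield</code> (or to iterate the generator if yielding was intended).</p>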
|
<python><scrapy><python-asyncio>
|
2024-06-21 15:57:35
| 1
| 720
|
Allen Y
|
78,653,112
| 6,360,521
|
pybind11 cannot import simplest module
|
<p>I just started with pybind11 and I wanted to try the <a href="https://pybind11.readthedocs.io/en/stable/basics.html#creating-bindings-for-a-simple-function" rel="nofollow noreferrer">minimal example</a> they provide on their page.</p>
<pre><code>#include <pybind11/pybind11.h>
int add(int i, int j) {
return i + j;
}
PYBIND11_MODULE(example, m) {
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function that adds two numbers");
}
</code></pre>
<p>I am using visual studio 2022 and I can compile the solution without problem as a dll. This gives me inside the directory <code>x64/Debug</code> the following files</p>
<pre><code>example.dll
example.exp
example.lib
example.pdb
</code></pre>
<p>I then go inside the <code>x64/Debug</code> folder and run Python 3.12 (the same version as I used with pybind11), but when I do</p>
<pre><code>>>> import example
</code></pre>
<p>I get the following</p>
<pre><code>>>> import example
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'example'
</code></pre>
<p>What am I doing wrong?</p>
<p>In the output of the compilation, setting the verbosity to detail, I could get the exact cl.exe arguments which are:</p>
<pre><code>CL.exe /c /IC:\Users\usr\AppData\Local\Programs\Python\Python312\include /I"C:\Users\usr\source\repos\python_cxx_interaction\python_cxx_interaction\.venv\Lib\site-packages\pybind11\include" /Zi /nologo /W3 /WX- /diagnostics:column /sdl /O2 /Oi /GL /D NDEBUG /D _CONSOLE /D _WINDLL /D _UNICODE /D UNICODE /Gm- /EHsc /MD /GS /Gy /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /permissive- /Fo"python_c.f6c111de\x64\Release\\" /Fd"python_c.f6c111de\x64\Release\vc143.pdb" /external:W3 /Gd /TP /FC /errorReport:prompt example.cpp
</code></pre>
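<p>For what it's worth, the usual cause on Windows is the file extension: CPython only imports extension modules whose filename ends in <code>.pyd</code>, never <code>.dll</code>. You can ask the interpreter which suffixes it will accept:</p>

```python
import importlib.machinery

# the filename endings this interpreter imports as extension modules;
# on Windows the list contains '.pyd' variants, never '.dll'
print(importlib.machinery.EXTENSION_SUFFIXES)
```

<p>So the likely fix is to have Visual Studio emit <code>example.pyd</code> (change the target extension in the project settings, or copy <code>example.dll</code> to <code>example.pyd</code>). Note also that a Debug configuration may link against the debug Python library, which standard CPython installers don't ship; building Release sidesteps that.</p>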
|
<python><c++><visual-studio-2022><pybind11><python-3.12>
|
2024-06-21 15:06:18
| 0
| 636
|
roi_saumon
|
78,653,069
| 11,769,133
|
LevelDB error: plyvel._plyvel.Error: NotFound: c:/tmp/testdb/LOCK: The system cannot find the path specified
|
<p>I followed <a href="https://stackoverflow.com/a/68862748/11769133">this</a> answer on how to install LevelDB on Windows. Everything went smoothly. The only thing I didn't know how to do (and whether I had to do it anyway) is step 1.2. Also I didn't do this part: <code>set PYTHONLIB=<full path to plyvel></code>.</p>
<p>Now when I try to run this simple script:</p>
<pre><code>import plyvel
db = plyvel.DB('c:/tmp/testdb/', create_if_missing=True)
</code></pre>
<p>...I get this error:</p>
<pre><code>Traceback (most recent call last):
File "D:\Programi\proba\proba.py", line 3, in <module>
db = plyvel.DB('c:/tmp/testdb/', create_if_missing=True)
File "plyvel\_plyvel.pyx", line 247, in plyvel._plyvel.DB.__init__
File "plyvel\_plyvel.pyx", line 94, in plyvel._plyvel.raise_for_status
plyvel._plyvel.Error: b'NotFound: c:/tmp/testdb//LOCK: The system cannot find the path specified.\r\n'
</code></pre>
<p>I tried removing trailing <code>/</code>, changing path, creating folders before calling script, removing argument <code>create_if_missing</code> but to no avail.
I'm using python 3.9.</p>
|
<python><leveldb>
|
2024-06-21 14:58:28
| 1
| 1,142
|
Milos Stojanovic
|
78,653,060
| 12,806,076
|
How to make a circle inside other circles white and fat?
|
<p>I have a problem changing the color of a single circle in my Plotly graph. I want to color the third ring white and make it fat, and do the same for the 7th ring, to show separate layers.</p>
<p>I have tried a lot, like <code>line_color</code> and so on, but it affected the pie slice rather than the ring itself.</p>
<p>I will give you an example code, where I need this to be implemented:</p>
<pre><code>import plotly.graph_objects as go
import matplotlib.cm as cm
import matplotlib.colors as mcolors
# Daten für das Beispiel (Mittelwerte aus der eingangs erwähnten Tabelle)
dimensionen = [
'Daten/Technologie', 'Organisation und Prozesse', 'Strategie und Governance',
'Mindset und Kultur', 'Umsetzbarkeit und Zusammenarbeit', 'Externe Marktteilnehmer'
]
# Mittelwerte der Skalen pro Dimension (aus der Tabelle)
mittelwerte = {
'Daten/Technologie': 6, 'Organisation und Prozesse': 5, 'Strategie und Governance': 7,
'Mindset und Kultur': 8, 'Umsetzbarkeit und Zusammenarbeit': 3, 'Externe Marktteilnehmer': 2
}
# Viridis colormap für die Farbauswahl
viridis = cm.get_cmap('viridis', 12)
colors = [mcolors.to_hex(viridis(i)) for i in range(viridis.N)]
# Funktion zur Erstellung des erweiterten Donut-Radars
def create_extended_donut_radar(dimensionen, mittelwerte):
num_dimensionen = len(dimensionen)
num_scales = 11 # 11 Skalen von 0 bis 10
fig = go.Figure()
for idx, dim_name in enumerate(dimensionen):
mittelwert = mittelwerte[dim_name]
# Bar für jede Skala hinzufügen
for scale in range(num_scales):
inner_radius = scale * 0.09 + 0.05 # Adjusted for equal spacing
outer_radius = inner_radius + 0.09 # Adjusted for equal spacing
if scale == 0:
color = 'white'
elif scale <= mittelwert:
color = colors[scale]
else:
color = 'lightgrey'
fig.add_trace(go.Barpolar(
r=[inner_radius, outer_radius],
theta=[(360 / num_dimensionen) * idx] ,
width=[(360 / num_dimensionen) - 1],
marker_color=color,
marker_line_color='black',
marker_line_width=2,
opacity=0.7
))
# Layout anpassen
fig.update_layout(
title='',
title_font_size=24,
title_x=0.5,
polar=dict(
radialaxis=dict(visible=False),
angularaxis=dict(visible=False)
),
showlegend=False
)
# Plotly Diagramm im Standard-Browser anzeigen
fig.show(renderer='browser')
# Donut-Radar erstellen
create_extended_donut_radar(dimensionen, mittelwerte)
</code></pre>
<p>Here is a PNG where I added a red marker showing what I want to be white and fat:<a href="https://i.sstatic.net/8M26s1oT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M26s1oT.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2024-06-21 14:56:39
| 0
| 516
|
LGe
|
78,652,999
| 10,785,003
|
Tortoise ORM: bulk_create update_fields vs on_conflict
|
<p>To perform upsert operations in Tortoise, we can use the <code>bulk_create</code> method. For this we should provide both <code>update_fields</code> and <code>on_conflict</code> params. Like this:</p>
<pre><code>await Users.bulk_create(users_list, on_conflict=fields_list, update_fields=fields_list)
</code></pre>
<p>The official <a href="https://tortoise.github.io/models.html#tortoise.models.Model.bulk_create" rel="nofollow noreferrer">documentation</a> lacks examples or in-depth explanations of what the difference is between these two parameters. But it forces me to set them both. Why?</p>
|
<python><sql><orm><tortoise-orm>
|
2024-06-21 14:45:04
| 0
| 1,169
|
Igor Alex
|
78,652,843
| 8,741,781
|
Why does csv.reader with TextIOWrapper include new line characters?
|
<p>I have two functions, one downloads individual csv files and the other downloads a zip with multiple csv files.</p>
<p>The <code>download_and_process_csv</code> function works correctly with <code>response.iter_lines()</code> which seems to delete new line characters.</p>
<blockquote>
<p>'Chicken, water, cornmeal, salt, dextrose, sugar, sodium phosphate,
sodium erythorbate, sodium nitrite. Produced in a facility where
allergens are present such as eggs, milk, soy, wheat, mustard, gluten,
oats, dairy.'</p>
</blockquote>
<p>The <code>download_and_process_zip</code> function seems to include new line characters for some reason (<code>\n\n</code>). I've tried <code>newline=''</code> in <code>io.TextIOWrapper</code> however it just replaces it with <code>\r\n</code>.</p>
<blockquote>
<p>'Chicken, water, cornmeal, salt, dextrose, sugar, sodium phosphate,
sodium erythorbate, sodium nitrite. \n\nProduced in a facility where
allergens are present such as eggs, milk, soy, wheat, mustard, gluten,
oats, dairy.'</p>
</blockquote>
<p>Is there a way to modify <code>download_and_process_zip</code> so that new line characters are excluded/replaced or do I have to iterate over all the rows and manually replace the characters?</p>
<pre><code>@request_exceptions
def download_and_process_csv(client, url, model_class):
with closing(client.get(url, stream=True)) as response:
response.raise_for_status()
response.encoding = 'utf-8'
reader = csv.reader(response.iter_lines(decode_unicode=True))
process_copy_from_csv(model_class, reader)
@request_exceptions
def download_and_process_zip(client, url):
with closing(client.get(url, stream=True)) as response:
response.raise_for_status()
with io.BytesIO(response.content) as buffer:
with zipfile.ZipFile(buffer, 'r') as z:
for filename in z.namelist():
base_filename, file_extension = os.path.splitext(filename)
model_class = apps.get_model(base_filename)
if file_extension == '.csv':
with z.open(filename) as csv_file:
reader = csv.reader(io.TextIOWrapper(
csv_file,
encoding='utf-8',
# newline='',
))
process_copy_from_csv(model_class, reader)
</code></pre>
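<p>Most likely the <code>\n\n</code> is not added by <code>TextIOWrapper</code> at all: it is part of the field's data, preserved correctly because the field is quoted in the CSV. <code>iter_lines()</code>, by contrast, splits on those embedded newlines, silently mangling multi-line fields, so the zip path is arguably the correct one. A small demonstration (synthetic data), plus the per-field cleanup if single-line text is wanted:</p>

```python
import csv
import io

# a quoted CSV field containing embedded blank-line newlines
raw = 'ingredients\r\n"Chicken, water, salt. \n\nProduced in a facility."\r\n'

reader = csv.reader(io.StringIO(raw, newline=""))
header = next(reader)
row = next(reader)
print(repr(row[0]))  # the embedded '\n\n' survives, as the CSV spec intends

# if single-line values are required, normalise per field after parsing
cleaned = row[0].replace("\n", " ").strip()
print(cleaned)
```

<p>So rather than changing the reader, the rows would need a pass that replaces newlines inside fields before <code>process_copy_from_csv</code>.</p>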
|
<python><django><csv>
|
2024-06-21 14:12:50
| 1
| 6,137
|
bdoubleu
|
78,652,758
| 597,742
|
Cryptic "OSError: [WinError 10106] The requested service provider could not be loaded or initialized" error from Python subprocess call
|
<p>When porting some subprocess execution code from Linux to Windows, the Windows version failed with a Python traceback in the executed subprocess that reported <code>OSError: [WinError 10106] The requested service provider could not be loaded or initialized</code></p>
<p>None of the answers that came up in a Google search were helpful, and the documentation for <a href="https://docs.python.org/3/library/subprocess.html#subprocess.run" rel="nofollow noreferrer"><code>subprocess.run</code></a> wasn't helpful either.</p>
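<p>For the record, the usual trigger for WinError 10106 in a child process is passing a custom <code>env=</code> mapping that omits <code>SystemRoot</code>, which Winsock needs in order to initialise. Building the child environment from a copy of <code>os.environ</code> (a sketch; the extra variable is illustrative) avoids it:</p>

```python
import os
import subprocess
import sys

# start from the parent environment instead of a bare dict, so variables
# like SystemRoot (required by Winsock on Windows) are inherited
env = os.environ.copy()
env["MY_EXTRA_VAR"] = "1"  # hypothetical addition

result = subprocess.run(
    [sys.executable, "-c", "import socket; print('ok')"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())
```
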
|
<python><subprocess>
|
2024-06-21 13:54:38
| 1
| 41,806
|
ncoghlan
|
78,652,649
| 3,848,833
|
Append an entry per view to the DEFAULT_FILTER_BACKENDS
|
<p>In a Django (and DRF) project with a large number of views we have set a list of filter backends in <code>settings.py</code>:</p>
<pre><code>REST_FRAMEWORK = {
"DEFAULT_FILTER_BACKENDS": [
# filter backend classes
],
# other settings
}
</code></pre>
<p>Some of the view classes need additional filter backends. We can specify the attribute <code>filter_backends</code> per view class:</p>
<pre><code>class FooViewSet(viewsets.ModelViewSet):
filter_backends = [DefaultFilterOne, DefaultFilterTwo, DefaultFilterThree, AdditionalFilter]
</code></pre>
<p>However, this is not DRY. Here in this example three (3) filter backends are default and have to be repeated in the viewset class. If there's a change in <code>settings.py</code> it has to be reflected in the view class.</p>
<p>What is best practice to append an additional filter backend per class, while keeping the default filter backends from <code>settings.py</code>?</p>
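<p>One DRY pattern is to read the resolved default at class-definition time and extend it. DRF exposes the parsed settings as <code>rest_framework.settings.api_settings</code>, so the real viewset would be roughly <code>filter_backends = [*api_settings.DEFAULT_FILTER_BACKENDS, AdditionalFilter]</code>. A runnable sketch of the idea, with stand-ins so it works outside a Django project:</p>

```python
# Stand-in for rest_framework.settings.api_settings so the sketch runs
# without Django; in a real project, import api_settings instead.
class _ApiSettings:
    DEFAULT_FILTER_BACKENDS = ["DefaultFilterOne", "DefaultFilterTwo",
                               "DefaultFilterThree"]

api_settings = _ApiSettings()
AdditionalFilter = "AdditionalFilter"  # stand-in for the backend class

class FooViewSet:  # would subclass viewsets.ModelViewSet
    # defaults come from settings at definition time, so a change in
    # settings.py propagates here automatically
    filter_backends = [*api_settings.DEFAULT_FILTER_BACKENDS, AdditionalFilter]

print(FooViewSet.filter_backends)
```
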
|
<python><django><django-rest-framework><django-rest-framework-filters>
|
2024-06-21 13:34:44
| 1
| 12,082
|
cezar
|
78,652,436
| 16,527,170
|
Pandas Extract Sequence where prev value > current value
|
<p>I need to extract runs of consecutive negative values in which each previous value is greater than the current one, i.e. strictly decreasing runs of negative values.</p>
<pre><code>import pandas as pd
# Create the DataFrame with the given values
data = {
'Value': [0.3, 0.2, 0.1, -0.1, -0.2, -0.3, -0.4, -0.35, -0.25, 0.1, -0.15, -0.25, -0.13, -0.1, 1]
}
df = pd.DataFrame(data)
print("Original DataFrame:")
print(df)
</code></pre>
<p>My Code:</p>
<pre><code># Initialize a list to hold the sequences
sequences = []
current_sequence = []
# Iterate through the DataFrame to apply the condition
for i in range(1, len(df) - 1):
prev_value = df.loc[i - 1, 'Value']
curr_value = df.loc[i, 'Value']
next_value = df.loc[i + 1, 'Value']
# Check the condition
if curr_value < prev_value and curr_value < next_value:
current_sequence.append(curr_value)
else:
# If the current sequence is not empty and it's a valid sequence, add it to sequences list and reset
if current_sequence:
sequences.append(current_sequence)
current_sequence = []
# Add the last sequence if it's not empty
if current_sequence:
sequences.append(current_sequence)
</code></pre>
<p>My Output:</p>
<pre><code>Extracted Sequences:
[-0.4]
[-0.25]
</code></pre>
<p>Expected Output:</p>
<pre><code>[-0.1,-0.2,-0.3,-0.4]
[-0.15,-0.25]
</code></pre>
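<p>Reading the expected output, each group appears to be a maximal run of consecutive negative values in which every value is smaller than the one before it, keeping only runs longer than one element (that assumption explains why the lone <code>-0.35</code> is dropped). A single pass implements that directly:</p>

```python
values = [0.3, 0.2, 0.1, -0.1, -0.2, -0.3, -0.4, -0.35,
          -0.25, 0.1, -0.15, -0.25, -0.13, -0.1, 1]

sequences = []
current = []
for v in values:
    if v < 0 and (not current or v < current[-1]):
        current.append(v)      # extend the strictly decreasing negative run
    else:
        if len(current) > 1:   # assumed: single-element runs are discarded
            sequences.append(current)
        current = [v] if v < 0 else []
if len(current) > 1:
    sequences.append(current)

print(sequences)
```

<p>On the question's data this yields the two expected sequences; relax the <code>len(current) > 1</code> check if single-element runs should be kept after all.</p>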
|
<python><pandas><dataframe>
|
2024-06-21 12:46:21
| 3
| 1,077
|
Divyank
|
78,652,281
| 13,663,100
|
Python Polars SQL Interface results in InvalidOperationError on multiple joins
|
<p>I am struggling to run a query that joins 3 tables together with the middle table joining the other two.</p>
<ul>
<li>Table A has a column "a".</li>
<li>Table B has columns "a" and "b".</li>
<li>Table C has columns "b" and "c". (Only column "b" is required).</li>
</ul>
<p>MWE:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
ctx = pl.SQLContext({"dual": pl.DataFrame({"dummy": ["X"]})})
sql = """
select *
from (select 123 a from dual) tbl_a
inner join (select 123 a, 'value' b from dual) tbl_b
on tbl_a.a = tbl_b.a
inner join (select 'value' b, 'value2' c from dual) tbl_c
on tbl_c.b = tbl_b.b
--using (b)
"""
print(ctx.execute(sql, eager=True))
</code></pre>
<p>The above produces the following error:</p>
<p><code>InvalidOperationError: collect_compound_identifiers: left_name="tbl_a", right_name="tbl_c", tbl_a="tbl_c", tbl_b="tbl_b"</code>.</p>
<p>One workaround that I found was to replace the <code>on tbl_c.b = tbl_b.b</code> clause with <code>using (b)</code>, as the commented out SQL above shows. Another workaround is to reorder the joins so that <code>tbl_b</code> is selected from first followed by <code>tbl_c</code> and then <code>tbl_a</code>. However, these are not ideal solutions.</p>
<p>How do I get around this? Thanks.</p>
|
<python><python-polars>
|
2024-06-21 12:11:47
| 0
| 1,365
|
Chris du Plessis
|
78,652,141
| 3,906,713
|
How to vectorize Scipy.integrate.quad()
|
<p>I have a 1D function <code>f(t)</code> defined on an interval <code>[t0, t1]</code>. I would like to obtain an integral of this function, uniformly sampled over the above interval with a timestep <code>delta_t</code>. The naive solution for this is to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html" rel="nofollow noreferrer">scipy.integrate.quad</a> inside of a python for-loop</p>
<pre><code>rez = np.zeros(nStep-1)
for i in range(1, nStep):
    rez[i-1] = scipy.integrate.quad(my_func, t0, t0 + i * delta_t)[0]  # quad returns (value, error)
</code></pre>
<p>I presume that this is not the fastest way to perform such an integral. Is there an intended way to do this operation in a vectorized form?</p>
<p>One improvement I can think of is to do piecewise integration and then sum up, because at every iteration the integrator only has to deal with a simpler function (fewer wiggles).</p>
<pre><code>rez = np.zeros(nStep-1)
for i in range(1, nStep):
    rez[i-1] = scipy.integrate.quad(my_func, t0 + (i - 1) * delta_t, t0 + i * delta_t)[0]  # quad returns (value, error)
rez = np.cumsum(rez)
</code></pre>
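<p>If sampling the integrand on a fine uniform grid is acceptable, the whole family of integrals can be computed in one vectorised call with <code>scipy.integrate.cumulative_trapezoid</code> (or <code>cumulative_simpson</code> in recent SciPy); it returns the running integral at every sample point, which is exactly the cumulative-sum idea without the Python loop. A sketch using an integrand with a known antiderivative for checking:</p>

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t0, t1, n_step = 0.0, np.pi, 1001
t = np.linspace(t0, t1, n_step)

# rez[i] approximates the integral of sin from t0 to t[i];
# the exact answer is 1 - cos(t)
rez = cumulative_trapezoid(np.sin(t), t, initial=0.0)

print(np.max(np.abs(rez - (1 - np.cos(t)))))  # small discretisation error
```

<p>If the adaptive accuracy of <code>quad</code> is genuinely needed, the piecewise-then-cumsum version in the question is the right structure; the call above trades adaptivity for one vectorised pass over a fixed grid.</p>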
|
<python><performance><scipy><vectorization>
|
2024-06-21 11:37:43
| 1
| 908
|
Aleksejs Fomins
|
78,651,589
| 6,412,997
|
Converting np.random.logistic to scipy.stats.fisk
|
<p>I cannot figure out how to convert <code>np.random.logistic</code> to <code>scipy.stats.fisk</code>, here's my code:</p>
<pre><code>import numpy as np
import numpy.random as npr
import scipy.stats as ss
import matplotlib.pyplot as plt
SEED = 1337
SIZE = 1_000_000
Generator = npr.default_rng(seed=SEED)
PARAMS = {
"loc": 0,
"scale": 1
}
n = Generator.logistic(
loc=PARAMS['loc'],
scale=PARAMS['scale'],
size=SIZE,
)
ns = np.exp(n)
s = ss.fisk(
c=1/PARAMS['scale'],
scale=np.exp(PARAMS['loc']),
).rvs(
random_state=SEED,
size=SIZE,
)
</code></pre>
<p>Reading from the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisk.html" rel="nofollow noreferrer">scipy doc</a>, Suppose <code>n</code> is a logistic random variable with location <code>loc</code> and scale <code>scale</code>. Then <code>ns=np.exp(n)</code> is a Fisk (log-logistic) random variable with <code>scale=np.exp(l)</code> and shape <code>c=1/scale</code>.</p>
<p>However, plotting <code>n</code> and <code>ns</code> gives completely different distributions.
What am I doing wrong?</p>
<pre><code>plt.figure(figsize=(14, 6))
plt.subplot(1, 2, 1)
plt.hist(n, bins=30, density=True, alpha=0.6, color='g', label="n")
plt.title('Logistic Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.legend()
plt.subplot(1, 2, 2)
plt.hist(ns, bins=30, density=True, alpha=0.6, color='b', label="ns")
plt.hist(s, bins=30, density=True, alpha=0.6, color='r', label="s")
plt.title('Log-Logistic Distribution')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/5TPCVyHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5TPCVyHO.png" alt="result of plotting snippet above" /></a></p>
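<p>The parameter mapping itself can be checked deterministically: if <code>N</code> is logistic with <code>loc</code>/<code>scale</code>, the CDF of <code>exp(N)</code> must equal the fisk CDF with <code>c=1/scale</code>, <code>scale=np.exp(loc)</code>. That identity holds, so <code>ns</code> and <code>s</code> do follow the same distribution; what differs is the plotting. <code>n</code> is of course not <code>ns</code> (one is the log of the other), and for the heavy-tailed fisk samples, 30 equal-width bins over two different sample ranges give misleading histograms, so shared bins or a log-scale axis are needed when overlaying <code>ns</code> and <code>s</code>. A quick check of the CDF identity:</p>

```python
import numpy as np
import scipy.stats as ss

loc, scale = 0.0, 1.0
x = np.linspace(0.1, 20.0, 200)

# P(exp(N) <= x) = P(N <= log(x)) for a logistic N, which must match fisk
cdf_fisk = ss.fisk(c=1 / scale, scale=np.exp(loc)).cdf(x)
cdf_exp_logistic = ss.logistic(loc=loc, scale=scale).cdf(np.log(x))

print(np.max(np.abs(cdf_fisk - cdf_exp_logistic)))  # ~0: same distribution
```
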
|
<python><numpy><scipy><distribution>
|
2024-06-21 09:49:27
| 1
| 1,996
|
E.Z
|
78,651,547
| 10,889,650
|
Take value of nearest (L1) non-zero in three dimensional array
|
<p>I wish to fill all zero valued elements of a three-dimensional numpy array with the value of the "nearest" non-zero valued element (at the point of running the program.) I do not mind which is used when there are multiple with the same distance.</p>
<p>I can demonstrate the desired result in 2D:</p>
<p>input:</p>
<pre><code>0 0 0 0
0 1 2 0
0 3 4 0
0 0 0 0
</code></pre>
<p>output:</p>
<pre><code>1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4
</code></pre>
<p>input:</p>
<pre><code>1 0 2 0
0 3 0 1
2 0 4 0
</code></pre>
<p>a valid output:</p>
<pre><code>1 2 2 2
3 3 2 1
2 3 4 1
</code></pre>
<p>The best approach I've thought of myself is to take each non-zero index in turn, fill neighboring zero-valued elements with that value, and repeat until no zero-valued elements remain, which would take ages. This will be running on arrays of size ~ 256 x 256 x 256, so being fast would be good.</p>
<p>[EDIT] This works, and is not too slow for use actually. Still would be nice to find a solution without the for loop:</p>
<pre><code>is_useful_value_data = np.isin(actual_data, values_we_want_to_preserve)
indices = np.where(is_useful_value_data)
interp = scipy.interpolate.NearestNDInterpolator(indices, actual_data[is_useful_value_data])
indices_to_fill = np.where(~is_useful_value_data)
for index in np.transpose(indices_to_fill):
x = index[0]
y = index[1]
z = index[2]
nearest_val = interp([x,y,z])
actual_data[x,y,z] = int(nearest_val)
</code></pre>
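<p>The remaining loop can be eliminated entirely with <code>scipy.ndimage.distance_transform_edt</code>: called with <code>return_indices=True</code>, it returns for every element the index of the nearest non-zero element (Euclidean rather than L1 distance, but since any equally-near value is acceptable per the question, that usually does not matter). One call handles the full 256³ array; here it is on the 2D example from the question:</p>

```python
import numpy as np
from scipy import ndimage

a = np.array([[0, 0, 0, 0],
              [0, 1, 2, 0],
              [0, 3, 4, 0],
              [0, 0, 0, 0]])

# for each zero cell, the indices of the nearest non-zero cell
# (works unchanged for 3D arrays)
ind = ndimage.distance_transform_edt(a == 0, return_distances=False,
                                     return_indices=True)
filled = a[tuple(ind)]
print(filled)
```
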
|
<python><numpy>
|
2024-06-21 09:40:25
| 0
| 1,176
|
Omroth
|
78,651,453
| 5,472,354
|
Matplotlib LaTeX preamble not correctly interpreted
|
<p>I was trying to use a custom LaTeX preamble with matplotlib, but it raises</p>
<pre><code>ValueError: Key text.latex.preamble: Could not convert ['\\usepackage{siunitx}' ...] to str
</code></pre>
<p>For some reason, the <code>\</code> are converted into <code>\\</code>, but I don't understand why. I'm using Python v. 3.11.5 and Matplotlib v. 3.7.2.</p>
<p>Here is the MWE (inspired from <a href="https://stackoverflow.com/a/20709149/5472354">this post</a>):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['text.usetex'] = True
rcParams['text.latex.preamble'] = [
r'\usepackage{siunitx}', # i need upright \micro symbols, but you need...
r'\sisetup{detect-all}', # ...this to force siunitx to actually use your fonts
r'\usepackage{helvet}', # set the normal font here
r'\usepackage{sansmath}', # load up the sansmath so that math -> helvet
r'\sansmath' # <- tricky! -- gotta actually tell tex to use!
]
fig, ax = plt.subplots()
fig.show()
</code></pre>
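<p>For context, the error comes from Matplotlib's rcParams type validation: since Matplotlib 3.3, <code>text.latex.preamble</code> must be a single string (the list form was deprecated and then removed), and the <code>\\</code> in the message is just Python's repr of a backslash. Joining the lines fixes it:</p>

```python
from matplotlib import rcParams

rcParams['text.usetex'] = True
# a single newline-joined string instead of a list of strings
rcParams['text.latex.preamble'] = "\n".join([
    r'\usepackage{siunitx}',
    r'\sisetup{detect-all}',
    r'\usepackage{helvet}',
    r'\usepackage{sansmath}',
    r'\sansmath',
])
print(rcParams['text.latex.preamble'][:20])
```
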
|
<python><string><matplotlib><latex>
|
2024-06-21 09:19:08
| 1
| 2,054
|
mapf
|
78,651,412
| 508,236
|
Cannot mock the method in dependency module correctly in Python
|
<p>Here is my project structure</p>
<pre><code>|-- project
|-- util.py
|-- main.py
|-- tests
|-- test_main.py
</code></pre>
<p>In the <code>main.py</code> file I reference the function in <code>util.py</code></p>
<pre class="lang-py prettyprint-override"><code>from util import rename
def use_rename():
after_rename = rename("name")
return after_rename
</code></pre>
<p>And this is how I implement the <code>rename</code> function in <code>util.py</code></p>
<pre class="lang-py prettyprint-override"><code>def rename(name: str) -> str:
return name
</code></pre>
<p>Finally I test it in the <code>test_main.py</code> file</p>
<pre class="lang-py prettyprint-override"><code>import sys
import os
from unittest.mock import patch
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from main import use_rename
@patch("util.rename")
def test_use_rename(mock_rename):
mock_rename.return_value = "name2"
assert use_rename() == "name"
</code></pre>
<p>As you can see I try to mock the return result of the <code>rename</code> function and assume the test should fail, but it always succeeds. Is there anything wrong with my code?</p>
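<p>Nothing fails here because <code>@patch("util.rename")</code> replaces the name in the <code>util</code> module, while <code>main</code> did <code>from util import rename</code> and therefore holds its own reference that the patch never touches. The rule of thumb is: patch where the name is looked up, i.e. <code>@patch("main.rename")</code>. A self-contained illustration, with tiny stand-in modules built inline so it runs anywhere:</p>

```python
import sys
import types
from unittest.mock import patch

# stand-ins for util.py and main.py
util = types.ModuleType("util")
util.rename = lambda name: name
sys.modules["util"] = util

main = types.ModuleType("main")
exec(
    "from util import rename\n"
    "def use_rename():\n"
    "    return rename('name')\n",
    main.__dict__,
)
sys.modules["main"] = main

with patch("util.rename", return_value="name2"):
    print(main.use_rename())  # still 'name': main's reference is untouched

with patch("main.rename", return_value="name2"):
    print(main.use_rename())  # 'name2': patched where it is looked up
```

<p>So changing the decorator to <code>@patch("main.rename")</code> should make the test fail (or pass) the way you expect.</p>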
|
<python><testing><mocking>
|
2024-06-21 09:10:05
| 1
| 15,698
|
hh54188
|
78,651,332
| 3,575,623
|
Define input files for snakemake using glob
|
<p>I am building a snakemake pipeline for some bioinformatics analyses, and I'm a beginner with the tool. The end users will be mainly biologists with little to no IT training, so I'm trying to make it quite user-friendly, in particular not needing much information in the config file (a previous bioinformatician in the institute had built a more robust pipeline but that required a lot of information in the config file, and it fell into disuse).</p>
<p>One rule that I would like to implement is to autodetect what <code>.fastq</code> (raw data) files are given in their specific directory, align them all and run some QC steps. In particular, deepTools has a <a href="https://deeptools.readthedocs.io/en/develop/content/tools/plotFingerprint.html" rel="nofollow noreferrer">plotFingerprint</a> tool that compares the distribution of data in a control data file to the distribution in the treatment data files. For this, I would like to be able to autodetect which batches of data files go together as well.</p>
<p>My file architecture is set up like so: <code>DATA/<FILE TYPE>/<EXP NAME>/<data files></code>, so for example <code>DATA/FASTQ/CTCF_H3K9ac/</code> contains:</p>
<pre><code>CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
CTCF_T7_neg_2.fq.gz
CTCF_T7_neg_3.fq.gz
CTCF_T7_pos_2.fq.gz
CTCF_T7_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
H3K9ac_T7_neg_2.fq.gz
H3K9ac_T7_neg_3.fq.gz
H3K9ac_T7_pos_2.fq.gz
H3K9ac_T7_pos_3.fq.gz
Input_T1_pos.fq.gz
Input_T7_neg.fq.gz
Input_T7_pos.fq.gz
</code></pre>
<p>For those not familiar with ChIP-seq, each <code>Input</code> file is a control data file for normalisation, and <code>CTCF</code> and <code>H3K9ac</code> are experimental data to be normalised. So one batch of files I would like to process and then send to <code>plotFingerprint</code> would be</p>
<pre><code>Input_T1_pos.fq.gz
CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
</code></pre>
<p>With that in mind, I would need to give to my <code>fingerprint_bam</code> snakemake rule the path to the aligned versions of those files, i.e.</p>
<pre><code>DATA/BAM/CTCF_H3K9ac/Input_T1_pos.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_3.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_3.bam
</code></pre>
<p>(I would also need each of those files indexed, so all of those again with the <code>.bai</code> suffix for the snakemake input, but that's trivial once I've managed to get all the <code>.bam</code> paths. The snakemake rules I have to get up to that point all work, I've tested them independantly.)</p>
<p>There is also a special case where an experiment could be run using paired-end sequencing, so the <code>FASTQ</code> dir would contain <code>exp_fw.fq.gz</code> and <code>exp_rv.fq.gz</code> and would need to be mapped to <code>exp_pe.bam</code>, but that doesn't seem like a massive exception to handle.</p>
<p>Originally I had tried using list comprehensions to create the list of input files, using this:</p>
<pre><code>def exps_from_inp(ifile): # not needed?
path, fname = ifile.split("Input")
conds, ftype = fname.split(".", 1)
return [f for f in glob.glob(path+"*"+conds+"*."+ftype)]
def bam_name_from_fq_name(fqpath, suffix=""):
if re.search("filtered", fqpath) :
return # need to remove files that were already filtered that could be in the same dir
else:
return fqpath.replace("FASTQ", "BAM").replace(".fq.gz", ".bam") + suffix
rule fingerprint_bam:
input:
        bam=[bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")],
        bai=[bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")]
...
</code></pre>
<p>Those list comprehensions generated the correct list of files when I tried them in python, using the values that <code>expdir</code> and <code>expconds</code> take when I dry run the pipeline. However, during that dry run, the <code>{input.bam}</code> wildcard in the shell command never gets assigned a value.</p>
<p>I went digging in the docs and found <a href="https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html" rel="nofollow noreferrer">this page</a> which implies that snakemake does not handle list comprehensions, and the <code>expand</code> function is its replacement. In my case, the experiment numbers (the <code>_2</code> and <code>_3</code> in the file names) are pretty variable, they're sometimes just random numbers, some experiments have 2 reps and some have 3, ... All these factors mean that using <code>expand</code> without a lot of additional considerations would be tricky (for the rep number, finding the experiment names would be fairly easy).</p>
<p>I then tried wrapping the list comprehensions in a function and running those in the input of my rule, but those failed, as did wrapping those functions in one big one and using <code>unpack</code> (although I could be using that wrong; I'm not entirely sure I understood how <code>unpack</code> works).</p>
<pre><code>def get_fingerprint_bam_inputfiles(wildcards):
return {"bams": get_fingerprint_bam_bams(wildcards),
"bais": get_fingerprint_bam_bais(wildcards)}
def get_fingerprint_bam_bams(wildcards):
return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]
def get_fingerprint_bam_bais(wildcards):
return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]
rule fingerprint_bam:
input:
bams=get_fingerprint_bam_bams,
bais=get_fingerprint_bam_bais
...
rule fingerprint_bam_unpack:
input:
unpack(get_fingerprint_bam_inputfiles)
...
</code></pre>
<p>So now I'm feeling pretty stuck in this approach. How can I autodetect these experiment batches and give the correct bam file paths to my <code>fingerprint_bam</code> rule? I'm not even sure which approach I should go for.</p>
<p><strong>EDIT</strong> - fixed return statement in input functions. However, the shell command still fails to complete the list of input files.</p>
<pre><code>rule fingerprint_bam:
input:
bams=get_fingerprint_bam_bams,
bais=get_fingerprint_bam_bais
output:
"FIGURES/QC/{expdir}/{expconds}_fingerprint.svg"
log:
"snakemake_logs/fingerprint_bam/{expdir}_{expconds}.log"
shell:
"plotFingerprint --bamfiles {input.bams} -o {output} --ignoreDuplicates --plotTitle 'Fingerprint of ChIP-seq data' 2>{log}"
</code></pre>
<p>Gives this output with <code>snakemake -np</code>:</p>
<pre><code>plotFingerprint --bamfiles -o FIGURES/QC/CTCF_H3K9ac/T7_pos_fingerprint.svg --ignoreDuplicates --plotTitle 'Fingerprint of ChIP-seq data' 2>snakemake_logs/fingerprint_bam/CTCF_H3K9ac_T7_pos.log
</code></pre>
<p>So my wildcards get the correct values, but still no bam files are identified.</p>
<p><strong>EDIT 2</strong> - I got it to work! But I still don't know why my original solution didn't work.</p>
<p>I discarded using Snakemake's "wildcards in strings" utility, so my input functions now look like this:</p>
<pre><code>
def get_fingerprint_bam_bams(wildcards):
# return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]
return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/"+wildcards.expdir+"/Input_"+wildcards.expconds+".fq.gz")]
def get_fingerprint_bam_bais(wildcards):
# return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]
return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/"+wildcards.expdir+"/Input_"+wildcards.expconds+".fq.gz")]
</code></pre>
<p>which properly detects files and propagates them to the relevant rules.</p>
<p>So the question is now... why did the in-string wildcard integration not work? Visual Studio Code even highlighted them for me :(</p>
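For reference, this behaviour is consistent with Snakemake only substituting wildcard placeholders in strings declared directly inside a rule; inside an ordinary Python input function the braces are literal characters, so <code>glob.glob</code> receives a pattern still containing <code>{wildcards.expdir}</code> and matches nothing. A minimal plain-Python illustration (the <code>Wildcards</code> class below is a stand-in, not Snakemake's real object):

```python
# Stand-in for the wildcards object Snakemake passes to an input function.
class Wildcards:
    expdir = "CTCF_H3K9ac"
    expconds = "T1_pos"

wildcards = Wildcards()

# Inside a plain function, braces in an ordinary string are never expanded:
literal = "DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz"
# An f-string (or concatenation, as in EDIT 2) substitutes them in Python:
fstring = f"DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz"

print(literal)  # braces survive, so glob.glob() would match nothing
print(fstring)  # DATA/FASTQ/CTCF_H3K9ac/Input_T1_pos.fq.gz
```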
|
<python><glob><snakemake>
|
2024-06-21 08:52:35
| 1
| 507
|
Whitehot
|
78,651,311
| 2,928,970
|
Hydra sweep lower level parameters in config file
|
<p>In a Hydra experiment, when I want to sweep over two parameters, why does</p>
<pre><code>hydra:
mode: MULTIRUN
sweeper:
params:
seed: 'range(1000,1005)'
data.index: 'range(0,10)'
</code></pre>
<p>work, but</p>
<pre><code>hydra:
mode: MULTIRUN
sweeper:
params:
seed: 'range(1000,1005)'
data:
index: 'range(0,10)'
</code></pre>
<p>doesn't work? What is the underlying logic for dot index? Is this something specific to OmegaConf?</p>
|
<python><fb-hydra>
|
2024-06-21 08:46:33
| 1
| 1,395
|
hovnatan
|
78,651,308
| 4,502,950
|
{'code': 500, 'message': 'Internal error encountered.', 'status': 'INTERNAL'}
|
<p>I have several projects in Google Cloud that I use to extract data from Google Sheets with Python. They run daily and refresh the data at the destination. This morning, one of the jobs failed and started throwing the error <code>{'code': 500, 'message': 'Internal error encountered.', 'status': 'INTERNAL'}</code> in both the staging and prod environments.</p>
<p>The files in staging and prod are different files, as the ones in staging are just copies of the prod instance. I cannot figure out what the issue is: the rest of my jobs are running fine, and it is only this project that keeps throwing the error. I have also tried running it manually.</p>
<p>It actually starts off fine and reads a couple of sheets before throwing this error. I have also implemented an exponential backoff algorithm to avoid quota issues.</p>
<p>The issue is that I cannot provide any further details, because I do not know what the issue is. Nothing has changed from the previous refresh; I have checked that.</p>
|
<python><google-sheets><google-cloud-platform>
|
2024-06-21 08:45:40
| 0
| 693
|
hyeri
|
78,651,138
| 20,554,684
|
How can I parse a .ini file with this type of format?
|
<p>I would like to parse a generated .ini file containing some config parameters, but I am a little uncertain of the format used. I am not able to change the format of the generated .ini file. This is a snippet of what the format looks like:</p>
<pre><code>[Sensors]
Sensors = {
(
[0]={
SensorType=Camera
Refresh=30
}
)}
</code></pre>
<p>I am able to access the <code>Sensors</code> variable with the following code:</p>
<pre><code>import configparser
config = configparser.ConfigParser()
path = "my_config.ini"
config.read(path)
single_sensor = config["Sensors"]["Sensors"]
print(single_sensor)
</code></pre>
<p>Output:</p>
<pre><code>{
(
[0]={
SensorType=Camera
Refresh=30
}
)}
</code></pre>
<p>The problem is that this is a string that I can not parse further the same way. Let's say I want to access <code>SensorType</code>. Then the following syntax would not work:</p>
<pre><code>single_sensor = config["Sensors"]["Sensors"]["0"]["SensorType"]
</code></pre>
<p>So how could I go about parsing a .ini file in this format? Would I have to parse the string separately after retrieving what I can using the .ini section syntax?</p>
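If it does come down to parsing the string separately, one workable approach (a sketch that assumes the nested format is exactly as shown above) is a small regex pass over the blob that <code>configparser</code> returns:

```python
import re

# The nested blob exactly as configparser would return it (an assumption
# about the full format, based on the snippet in the question).
blob = """{
(
    [0]={
        SensorType=Camera
        Refresh=30
    }
)}"""

def parse_sensor_block(text):
    # Collect the simple key=value lines inside the braces; lines like
    # "[0]={" do not match because "{" is not a word character.
    pairs = re.findall(r"(\w+)=(\w+)", text)
    return dict(pairs)

sensor = parse_sensor_block(blob)
print(sensor["SensorType"])  # Camera
```

This only handles one level of key=value pairs; a format with repeated or deeper nesting would need a real recursive parser.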
|
<python><parsing><ini>
|
2024-06-21 08:05:53
| 1
| 399
|
a_floating_point
|
78,651,007
| 5,251,061
|
mypy can't infer type of a dictionary value: What's a clean way to still use it as a function argument?
|
<p>I have a dictionary that I use as my "config". I don't change values, so it could be immutable. I use its values as arguments for functions via key indexing, something like this:</p>
<pre><code>from typing import Union
my_config: dict[str, Union[int, float, str]] = {
"timeout": 60,
"precision": 3.14,
"greeting": "Hello",
}
def my_function(arg: int):
print(f"The value is: {arg}")
my_function(arg=my_config["timeout"])
</code></pre>
<p>Since mypy checks types statically, it only sees the annotated value type <code>Union[int, float, str]</code> and cannot narrow which member a particular key holds. So it will complain:</p>
<blockquote>
<p>Argument "arg" to "my_function" has incompatible type "Union[int, float, str]"; expected "int"</p>
</blockquote>
<p>What is the proper way to deal with this situation? Some ideas:</p>
<pre><code>from typing import cast

# convert the value before the call (creates a new int at runtime)
my_function(arg=int(my_config["timeout"]))
# or tell the type checker to trust us (no runtime check at all)
my_function(arg=cast(int, my_config["timeout"]))
</code></pre>
<p>Is there more? What is the preferred way to deal with this? Do I have to go back a step and change my approach of using a dict for <code>my_config</code>? Maybe use a truly immutable data type in the first place?</p>
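One further option worth considering (a design suggestion, not the only answer): declare the config's shape with <code>typing.TypedDict</code>, so mypy knows each key's type statically and no cast is needed:

```python
from typing import TypedDict

# Each key gets its own static type, instead of one union for all values.
class MyConfig(TypedDict):
    timeout: int
    precision: float
    greeting: str

my_config: MyConfig = {
    "timeout": 60,
    "precision": 3.14,
    "greeting": "Hello",
}

def my_function(arg: int) -> None:
    print(f"The value is: {arg}")

# mypy now infers int for my_config["timeout"]; no cast or conversion needed.
my_function(arg=my_config["timeout"])
```

At runtime a <code>TypedDict</code> is still a plain dict, so nothing else in the code has to change.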
|
<python><mypy><python-typing>
|
2024-06-21 07:35:48
| 1
| 2,358
|
mc51
|
78,650,942
| 2,721,265
|
Make all if statements true in Jinja2 templates
|
<p>I am writing a Jinja2 template validator where I want to check all possibly needed variables. I already implemented my own Undefined class and can successfully detect missing variables, but obviously variables within if-statements are not found.</p>
<p>I do not want to hardcode multiple if-else-cases, so I am wondering if there is a way to make all if-statements true so that I do not check the "else" cases only.</p>
<p>Maybe there is a way to temporarily override the Jinja2 handling of {% if %}, or something similar?</p>
<p>Here's my code that detects the used variables:</p>
<pre><code>from jinja2 import Environment, FileSystemLoader, TemplateError, select_autoescape

def find_all_variables(template_path):
variables, undefined_cls = create_collector()
try:
env = Environment(loader = FileSystemLoader(searchpath="./"), autoescape = select_autoescape(),undefined=undefined_cls)
template = env.get_template(template_path)
print(template.render({})) # empty so all variables are undefined
except TemplateError as error:
print(f"Validation of script {template_path} failed:")
print(error)
exit(1)
return variables
</code></pre>
|
<python><jinja2>
|
2024-06-21 07:19:12
| 1
| 1,107
|
Charma
|
78,650,882
| 1,818,935
|
GridSearchCV runs smoothly when scoring='accuracy', but not when scoring=accuracy_score
|
<p>When I run the following piece of code in a Jupyter notebook inside Visual Studio Code, it runs smoothly.</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV
X, y = load_iris(return_X_y=True, as_frame=True)
gs = GridSearchCV(estimator=KNeighborsClassifier(),
param_grid=[{'n_neighbors': [3]}],
scoring='accuracy')
# scoring=accuracy_score)
gs.fit(X, y)
</code></pre>
<p>However, if I un-comment the commented line and comment the line above it, and re-run the notebook, I get the following error. Why?</p>
<pre><code>c:\Users\isc\Documents\Python\MLClassification\.venv\Lib\site-packages\sklearn\model_selection\_validation.py:982: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "c:\Users\isc\Documents\Python\MLClassification\.venv\Lib\site-packages\sklearn\model_selection\_validation.py", line 971, in _score
scores = scorer(estimator, X_test, y_test, **score_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\isc\Documents\Python\MLClassification\.venv\Lib\site-packages\sklearn\utils\_param_validation.py", line 191, in wrapper
params = func_sig.bind(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\isc\AppData\Local\Programs\Python\Python312\Lib\inspect.py", line 3267, in bind
return self._bind(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\isc\AppData\Local\Programs\Python\Python312\Lib\inspect.py", line 3191, in _bind
raise TypeError(
TypeError: too many positional arguments
warnings.warn(
[... the same UserWarning and traceback repeat four more times, once per cross-validation fold ...]
c:\Users\isc\Documents\Python\MLClassification\.venv\Lib\site-packages\sklearn\model_selection\_search.py:1052: UserWarning: One or more of the test scores are non-finite: [nan]
warnings.warn(
</code></pre>
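For reference, a likely explanation: a scoring string such as <code>'accuracy'</code> is resolved to a scorer object that scikit-learn calls as <code>scorer(estimator, X, y)</code>, while a bare metric function like <code>accuracy_score</code> expects <code>(y_true, y_pred)</code>, hence the "too many positional arguments". Wrapping the metric with scikit-learn's documented <code>make_scorer</code> helper adapts it (sketch):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# make_scorer turns a (y_true, y_pred) metric into a scorer that
# GridSearchCV can call as scorer(estimator, X_test, y_test).
gs = GridSearchCV(
    estimator=KNeighborsClassifier(),
    param_grid=[{"n_neighbors": [3]}],
    scoring=make_scorer(accuracy_score),
)
gs.fit(X, y)
print(gs.best_score_)
```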
|
<python><scikit-learn><gridsearchcv>
|
2024-06-21 07:04:53
| 0
| 6,053
|
Evan Aad
|
78,650,641
| 8,973,620
|
How do I redirect file writing location for a python package?
|
<p>I have a package that uses one temporary file, and this file is written and loaded using the package functions. The problem is that I am using this package with Azure Functions, and you are only allowed to write to a special location there. How can I force this package to write to this location? The path is defined inside the package as a global variable:</p>
<pre><code>### foo_package.py ###
import json

path = 'file.json'

def read_file():
    with open(path, "r") as file:
        json_data = json.load(file)
    return json_data

def save_file(json_data):
    with open(path, "w") as file:
        json.dump(json_data, file)

def foo_function():
    json_data = read_file()
    ...
    save_file(json_data)
### my_code.py ###
from foo_package import foo_function
foo_function() # writes to ./, but i need to write to /some/path
</code></pre>
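One workaround sketch: because <code>path</code> is a module-level global that the package functions look up at call time, the caller can reassign it before use. The <code>foo_package</code> below is a hand-built stand-in, since the real package from the question is hypothetical here:

```python
import json
import os
import tempfile
import types

# Stand-in for the real foo_package so this sketch is self-contained.
foo_package = types.ModuleType("foo_package")
foo_package.path = "file.json"

def _save_file(json_data):
    # Looks up foo_package.path at call time, like a module global would be.
    with open(foo_package.path, "w") as file:
        json.dump(json_data, file)

foo_package.save_file = _save_file

# Redirect writes to the allowed location (e.g. the temp dir on Azure):
foo_package.path = os.path.join(tempfile.gettempdir(), "file.json")
foo_package.save_file({"ok": True})
print(foo_package.path)
```

With the real package this would just be <code>import foo_package; foo_package.path = "/some/path/file.json"</code>. Note that <code>from foo_package import path</code> followed by reassignment would not work, since that rebinds only the importer's local name, not the global the package functions read.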
|
<python><file-writing>
|
2024-06-21 06:02:12
| 3
| 18,110
|
Mykola Zotko
|
78,650,620
| 11,098,908
|
Trying to understand pygame.key.get_pressed()
|
<p>I made the following toy code to understand what <code>pygame.key.get_pressed()</code> means</p>
<pre><code>import pygame

window = pygame.display.set_mode((500, 500))
run = True
while run:
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
keys = pygame.key.get_pressed()
if keys:
print(keys, 'key pressed')
</code></pre>
<p>When I ran the code, it produced this output, regardless of whether any key was pressed or not:</p>
<pre><code>pygame.key.ScancodeWrapper(False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False) key pressed
</code></pre>
<p>What happened? What is the purpose of using <code>pygame.key.get_pressed()</code> and when is it needed?</p>
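For reference, the wrapper behaves like a fixed-length sequence of booleans, one entry per scancode, and a non-empty sequence is always truthy, so <code>if keys:</code> fires on every loop iteration. The usual pattern is to index it by a key constant. A dependency-free illustration (the tuple and scancode below are stand-ins for pygame's real objects, an assumption for clarity):

```python
# ScancodeWrapper acts like a fixed-length sequence of booleans, one per
# scancode. A non-empty sequence is truthy regardless of its contents,
# which is why `if keys:` printed on every frame.
keys = (False,) * 512   # stand-in for pygame.key.get_pressed()
K_LEFT = 80             # illustrative index, not pygame's real constant

print(bool(keys))       # always True, whatever is held down
print(keys[K_LEFT])     # the actual "is this key currently held?" check
```

In real pygame code the check would be e.g. <code>if keys[pygame.K_LEFT]:</code>. This is useful for continuous, held-down input (movement), whereas the event queue reports discrete key-down/key-up events.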
|
<python><pygame>
|
2024-06-21 05:55:45
| 2
| 1,306
|
Nemo
|
78,650,607
| 561,243
|
compare numpy array against list of values
|
<p>I would like to have your help about a problem I have with a numpy array.</p>
<p>In my code I have a huge array made of integers, this is the result of a skimage labelling process.</p>
<p>The idea is that I want to generate a boolean array with the same shape as the input array, containing a True value if the cell value belongs to a given list of good labels, and False otherwise.</p>
<p>It is much easier with a small example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
input_array = np.arange(15, dtype=np.int64).reshape(3, 5)
good_labels = [1, 5, 7]
mask_output = np.zeros(input_array.shape).astype(int)
for l in good_labels:
masked_input = (input_array == l).astype(int)
mask_output += masked_input
mask_output = mask_output > 0
</code></pre>
<p>This code works, but it is extremely slow because input_array is huge and good_labels is also very long, so I need to loop too much.</p>
<p>Is there a way to make it faster?</p>
<p>I was thinking to replace the whole loop with something like</p>
<pre><code>mask_output = input_array in good_labels
</code></pre>
<p>But this does not work as expected.</p>
<p>Can you help me?</p>
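For reference, a vectorized form of the loop above is NumPy's membership test <code>np.isin</code>, which builds the whole boolean mask in one call, with no Python-level loop over the labels:

```python
import numpy as np

# Same setup as the question's example.
input_array = np.arange(15, dtype=np.int64).reshape(3, 5)
good_labels = [1, 5, 7]

# One call replaces the whole accumulation loop.
mask_output = np.isin(input_array, good_labels)
print(mask_output)
```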
|
<python><numpy><scikit-image>
|
2024-06-21 05:51:06
| 1
| 367
|
toto
|
78,650,421
| 3,777,717
|
Get variable from identifier in a context
|
<p>Is it possible in Python to get some abstract representation of a variable, given <code>FrameInfo</code> and location (line number and offset) of its identifier occurrence?</p>
<p>What I'd like to be able to do with it is change its value, find its definition site and all occurrences (which may form a proper subset of the occurrences of this identifier in the code associated with the frame), ideally also dynamically create a closure closing over it.</p>
<p>It's like what the <code>ast</code> module does, but enhanced with some rudimentary semantics. The interpreter already has to figure it out anyway, so maybe there's a way to get at it from within the programme.</p>
<p>Here's sample code that correctly identifies the call to <code>f</code> on the line <code>f()</code>, but on the next one it gives 2 as the position (<code>f</code> refers to a different function there) instead of 20:</p>
<pre><code>from inspect import *
import io
from tokenize import *
def f():
print('Doing something in function f.')
detect()
def detect():
name = stack()[1].function
tokens = tokenize(io.BytesIO(stack()[2].code_context[0].encode()).read)
for t in tokens:
if t.type == token.NAME and t.string == name and all(c == next(tokens, None).string for c in '()'):
print(f'Current invocation of function {name} was in line {t.line[:-1]} at position {t.start[1]}.')
break
f()
([f() for f in ()], f())
</code></pre>
|
<python><abstract-syntax-tree><code-inspection>
|
2024-06-21 04:39:23
| 0
| 1,201
|
ByteEater
|
78,650,388
| 6,733,980
|
Not able to import functions from python file
|
<p>Folder structure:</p>
<p><a href="https://i.sstatic.net/8iIs5KTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8iIs5KTK.png" alt="enter image description here" /></a></p>
<p>I have a master script which calls <code>requires.py</code> only. The <code>requires.py</code> file is responsible for importing all the necessary supporting files, which are in the <code>essentials</code> folder.</p>
<p>Snippet of <code>requires.py</code>:</p>
<pre><code>import os
import sys
import time
from pathlib import *
global formatter
formatter = "\n #################### => "
working_dir = os.getcwd()
essential_path = "\essentials"
#forimport = os.path.join(working_dir, essential_path)
forimport = (working_dir)+essential_path
os.chdir(forimport)
print(formatter,os.getcwd())
from selectthis import printthis
printthis()
</code></pre>
<p>Here I am not able to import anything from <code>selectthis</code>. Why is that?</p>
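For reference, one likely factor: when running a script, Python resolves imports through <code>sys.path</code> (which starts with the script's own directory), not through the current working directory, so <code>os.chdir</code> alone typically does not make a folder importable. A sketch of the usual fix (directory names follow the question):

```python
import os
import sys

# Add the essentials folder to the module search path instead of chdir-ing
# into it; imports are resolved via sys.path, not the working directory.
essentials = os.path.join(os.getcwd(), "essentials")
sys.path.append(essentials)

# from selectthis import printthis   # would now resolve, if the folder exists
print(essentials in sys.path)  # True
```

Making <code>essentials</code> a package (adding an <code>__init__.py</code>) and using a normal relative or absolute import is generally cleaner than mutating <code>sys.path</code>.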
|
<python><python-import>
|
2024-06-21 04:16:44
| 1
| 639
|
Coding4Life
|
78,650,371
| 8,474,894
|
Convert date column with DDMMMYYYY strings into YYYYMMDD string in Pandas
|
<p>Convert a date column loaded as strings into a pandas dataframe, in the format <code>DDMMMYYYY</code> (i.e. <code>%d%b%Y</code>) to the desired format of <code>YYYYMMDD</code> (i.e. <code>%Y%m%d</code>).</p>
<p>Example: Convert <code>31Oct2023</code> to <code>20231031</code>.</p>
<pre class="lang-none prettyprint-override"><code> GivenInput ExpectedDates
0 31Jan2023 20230131
1 28Feb2023 20230228
2 31Mar2023 20230331
3 30Apr2023 20230430
4 31May2023 20230531
5 30Jun2023 20230630
6 31Jul2023 20230731
7 31Aug2023 20230831
8 30Sep2023 20230930
9 31Oct2023 20231031
10 30Nov2023 20231130
11 31Dec2023 20231231
12 31Jan2024 20240131
13 29Feb2024 20240229
14 31Mar2024 20240331
15 30Apr2024 20240430
16 31May2024 20240531
</code></pre>
<p>Snippet to load the sample data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
s = """
GivenInput
31Jan2023
28Feb2023
31Mar2023
30Apr2023
31May2023
30Jun2023
31Jul2023
31Aug2023
30Sep2023
31Oct2023
30Nov2023
31Dec2023
31Jan2024
29Feb2024
31Mar2024
30Apr2024
31May2024
"""
s = s.strip()
df = pd.read_csv(StringIO(s))
print(df)
</code></pre>
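A straightforward approach (sketch, using pandas' documented <code>to_datetime</code> and <code>dt.strftime</code>): parse with the given format, then re-format:

```python
import pandas as pd

# Parse DDMMMYYYY strings, then render them back out as YYYYMMDD strings.
df = pd.DataFrame({"GivenInput": ["31Jan2023", "29Feb2024", "31Oct2023"]})
df["ExpectedDates"] = (
    pd.to_datetime(df["GivenInput"], format="%d%b%Y").dt.strftime("%Y%m%d")
)
print(df)
```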
|
<python><pandas><datetime>
|
2024-06-21 04:09:38
| 1
| 7,403
|
CypherX
|
78,650,258
| 8,521,346
|
Strange Default Behavior For Python Dict Get Method
|
<p>We receive API requests from two versions of embedded systems, as follows.</p>
<pre><code>data = {"sender": "555-555-5555"} #new version
data = {"from": "555-555-5555"} #old version
</code></pre>
<p>To support backwards compatibility with older systems I wrote the code.</p>
<p><code>data.get("sender", data["from"])</code></p>
<p><strong>Question</strong></p>
<p>When I pass the dict containing the key "sender" to it, I would expect it not to even evaluate the default value from the key <code>"from"</code>, but instead it throws a KeyError for the key 'from'.</p>
<p>I know how to fix the issue, but why would python be evaluating the default when the key is present?</p>
<p>Note the following code.</p>
<pre><code>{"sender": "111-111-1111"}.get("sender", {"from": "222-222-2222"}['from'])
</code></pre>
<p>This returns 111-111-1111, but if I change the key in the default, it fails with a KeyError.</p>
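For reference, this happens because Python evaluates every argument expression before the call itself runs, so the default <code>data["from"]</code> is computed (and raises) even when <code>"sender"</code> exists. A sketch of a lazy fallback:

```python
new = {"sender": "555-555-5555"}   # new firmware
old = {"from": "555-555-5555"}     # old firmware

def sender_of(data):
    # Explicit membership test: the fallback lookup only runs when needed,
    # unlike an eagerly evaluated .get() default argument.
    return data["sender"] if "sender" in data else data["from"]

print(sender_of(new))  # 555-555-5555
print(sender_of(old))  # 555-555-5555
```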
|
<python>
|
2024-06-21 02:59:36
| 1
| 2,198
|
Bigbob556677
|
78,650,222
| 13,086,128
|
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
|
<h2>MRE</h2>
<pre><code>pip install pandas==2.1.1 numpy==2.0.0
</code></pre>
<p><code>Python 3.10</code> on Google Colab</p>
<p>Output</p>
<pre><code>Collecting pandas==2.1.1
Downloading pandas-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.3/12.3 MB 44.7 MB/s eta 0:00:00
Collecting numpy==2.0.0
Using cached numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.3 MB)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas==2.1.1) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas==2.1.1) (2023.4)
Requirement already satisfied: tzdata>=2022.1 in /usr/local/lib/python3.10/dist-packages (from pandas==2.1.1) (2024.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.2->pandas==2.1.1) (1.16.0)
Installing collected packages: numpy, pandas
Attempting uninstall: numpy
Found existing installation: numpy 1.26.4
Uninstalling numpy-1.26.4:
Successfully uninstalled numpy-1.26.4
Attempting uninstall: pandas
Found existing installation: pandas 2.0.3
Uninstalling pandas-2.0.3:
Successfully uninstalled pandas-2.0.3
Successfully installed numpy-2.0.0 pandas-2.1.1
</code></pre>
<p>When I do:</p>
<pre><code>import pandas
</code></pre>
<p>I get this error:</p>
<pre><code>Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module>
ColabKernelApp.launch_instance()
File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
app.start()
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
self.io_loop.start()
File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
handle._run()
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
ret = callback()
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
self.ctx_run(self.run)
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
yielded = self.gen.send(value)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one
yield gen.maybe_future(dispatch(*args))
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request
self.do_execute(
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
yielded = ctx_run(next, result)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell
result = self._run_cell(
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell
return runner(coro)
File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
coro.send(None)
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-1-38d4b0363d82>", line 1, in <cell line: 1>
import pandas
File "/usr/local/lib/python3.10/dist-packages/pandas/__init__.py", line 23, in <module>
from pandas.compat import (
File "/usr/local/lib/python3.10/dist-packages/pandas/compat/__init__.py", line 27, in <module>
from pandas.compat.pyarrow import (
File "/usr/local/lib/python3.10/dist-packages/pandas/compat/pyarrow.py", line 8, in <module>
import pyarrow as pa
File "/usr/local/lib/python3.10/dist-packages/pyarrow/__init__.py", line 65, in <module>
import pyarrow.lib as _lib
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
AttributeError: _ARRAY_API not found
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-1-38d4b0363d82> in <cell line: 1>()
----> 1 import pandas
2 frames
/usr/local/lib/python3.10/dist-packages/pandas/_libs/__init__.py in <module>
16 import pandas._libs.pandas_parser # noqa: E501 # isort: skip # type: ignore[reportUnusedImport]
17 import pandas._libs.pandas_datetime # noqa: F401,E501 # isort: skip # type: ignore[reportUnusedImport]
---> 18 from pandas._libs.interval import Interval
19 from pandas._libs.tslibs import (
20 NaT,
interval.pyx in init pandas._libs.interval()
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
</code></pre>
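For readers hitting the same pair of errors: <code>AttributeError: _ARRAY_API not found</code> together with <code>numpy.dtype size changed, may indicate binary incompatibility</code> is the classic symptom of importing a package compiled against NumPy 1.x under NumPy 2.x; pinning <code>numpy&lt;2</code> or upgrading pandas/pyarrow usually resolves it. A small diagnostic sketch (the function name is illustrative) to confirm the version mismatch:

```python
from importlib import metadata

def installed_versions(pkgs=("numpy", "pandas", "pyarrow")):
    # "numpy.dtype size changed" / "_ARRAY_API not found" usually means a
    # C extension built against NumPy 1.x is imported under NumPy 2.x;
    # listing the installed versions shows whether that mismatch exists.
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            out[pkg] = None
    return out

print(installed_versions())
```

If numpy reports a 2.x version while the installed pandas/pyarrow builds predate NumPy 2 support, downgrading with <code>pip install "numpy&lt;2"</code> is usually the quickest fix.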
|
<python><python-3.x><pandas><numpy><pip>
|
2024-06-21 02:43:35
| 1
| 30,560
|
Talha Tayyab
|
78,649,938
| 3,486,684
|
How do I print a python slice as a string separated by ":"?
|
<p>I have a <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer"><code>slice</code></a> object, and I'd like to print it out in the form of a string <code>start:stop:step</code>. How do I do so?</p>
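One straightforward approach (a sketch, not the only option) is to read the slice's <code>start</code>, <code>stop</code> and <code>step</code> attributes and join them with <code>":"</code>, rendering <code>None</code> as an empty string, which mirrors the literal syntax for the common cases:

```python
def slice_to_str(s: slice) -> str:
    # start/stop/step are None when omitted; render those as empty strings
    # so slice(None, 5) becomes ":5:" (note an omitted step still leaves
    # a trailing colon with this simple scheme).
    parts = [s.start, s.stop, s.step]
    return ":".join("" if p is None else str(p) for p in parts)

print(slice_to_str(slice(1, 10, 2)))   # 1:10:2
print(slice_to_str(slice(None, 5)))    # :5:
```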
|
<python><string><slice>
|
2024-06-20 23:38:44
| 2
| 4,654
|
bzm3r
|
78,649,880
| 1,711,271
|
Finding probabilities of each value in all categorical columns across a dataframe
|
<p>My question is nearly identical to <a href="https://stackoverflow.com/q/70811240">Finding frequency of each value in all categorical columns across a dataframe</a>, but I need the probabilities, instead of the frequencies. We can use the same example dataframe:</p>
<pre><code>df = pd.DataFrame(
    {'sub_code': ['CSE01', 'CSE01', 'CSE01', 'CSE02', 'CSE03', 'CSE04',
                  'CSE05', 'CSE06'],
     'stud_level': [101, 101, 101, 101, 101, 101, 101, 101],
     'grade': ['STA', 'STA', 'PSA', 'STA', 'STA', 'SSA', 'PSA', 'QSA']})
</code></pre>
<p>I tried adapting <a href="https://stackoverflow.com/a/70811258">this answer</a> in the following way:</p>
<pre><code>out = (df.select_dtypes(object)
         .melt(var_name="Variable", value_name="Class")
         .value_counts(dropna=False, normalize=True)
         .reset_index(name="Probability")
         .sort_values(by=['Variable', 'Class'], ascending=[True, True])
         .reset_index(drop=True))
</code></pre>
<p>However, the code doesn't work, because the sum of the class probabilities for each variable is not 1. What am I doing wrong?</p>
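One likely culprit: after the <code>melt</code>, <code>value_counts(normalize=True)</code> normalizes over all rows of all variables together, so the probabilities sum to 1 across the whole frame rather than per variable. A sketch of a per-variable normalization using <code>groupby</code> on the same example data (worth verifying against your pandas version):

```python
import pandas as pd

df = pd.DataFrame(
    {'sub_code': ['CSE01', 'CSE01', 'CSE01', 'CSE02', 'CSE03', 'CSE04',
                  'CSE05', 'CSE06'],
     'stud_level': [101, 101, 101, 101, 101, 101, 101, 101],
     'grade': ['STA', 'STA', 'PSA', 'STA', 'STA', 'SSA', 'PSA', 'QSA']})

out = (df.select_dtypes(object)
         .melt(var_name="Variable", value_name="Class")
         .groupby("Variable")["Class"]
         .value_counts(normalize=True)   # normalizes within each group
         .rename("Probability")
         .reset_index()
         .sort_values(["Variable", "Class"])
         .reset_index(drop=True))

# Probabilities now sum to 1 within each Variable.
print(out.groupby("Variable")["Probability"].sum())
```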
|
<python><pandas><categorical-data><pandas-melt>
|
2024-06-20 23:07:01
| 1
| 5,726
|
DeltaIV
|
78,649,615
| 6,622,697
|
Defining a many-to-many with an association table in SQLAlchemy
|
<p>Updating with a more complete example</p>
<p>I've seen many solutions, but I still can't get this to work.
I have</p>
<pre><code>class ModuleGroupMember(ModelBase):
    __tablename__ = 'module_group_member'

    pk = mapped_column(Integer, primary_key=True)
    module_pk = mapped_column(Integer, ForeignKey('module.pk'))
    module_group_pk = mapped_column(Integer, ForeignKey('module_group.pk'))


class Module(ModelBase):
    __tablename__ = 'module'

    pk = mapped_column(Integer, primary_key=True)
    name = mapped_column(String)

    groups = relationship("Module_Group", secondary="module_group_member", back_populates="modules")


class ModuleGroup(ModelBase):
    __tablename__ = 'module_group'

    pk = mapped_column(Integer, primary_key=True)
    description = mapped_column(String)
    is_active = mapped_column(Boolean)
    name = mapped_column(String)

    modules = relationship("Module", secondary="module_group_member", back_populates="groups")
</code></pre>
<p>There are all in separate files, Module.py, ModuleGroup.py, and ModuleGroupMember.py, that are in a package called <code>root.db.models</code>. I have an <code>__init__.py</code> like this</p>
<pre><code>from .Module import Module
from .Module_Group import ModuleGroup
from .Module_Group_Member import ModuleGroupMember

metadata = ModelBase.metadata

def create_tables():
    metadata.create_all(engine)

def drop_tables():
    metadata.drop_all(engine)
</code></pre>
<p>It's not clear to me if the <code>secondary</code> parameter is the table name or class name, but I've tried both.</p>
<p>When I run create_all(), the tables are not getting created. Here's what I'm running. Note that I import everything from <code>root.db.models</code></p>
<pre><code>from sqlalchemy import create_engine, URL
from sqlalchemy.orm import sessionmaker, DeclarativeBase

from root.db.models import create_tables, Module, drop_tables

url = URL.create(
    drivername="postgresql",
    username="postgres",
    password="postgres",
    host="localhost",
    database="flask_test"
)

engine = create_engine(url, echo=True)

class ModelBase(DeclarativeBase):
    pass

metadata = ModelBase.metadata

drop_tables()
create_tables()

with sessionmaker(bind=engine)() as session:
    with session.begin():
        session.add(Module(name="Module1"))
    session.commit()
</code></pre>
<p>I get the following error when trying to add a row in the Module table</p>
<p><code> raise NameError( NameError: Module 'Module_Group' has no mapped classes registered under the name '_sa_instance_state'</code></p>
|
<python><sqlalchemy>
|
2024-06-20 21:16:38
| 0
| 1,348
|
Peter Kronenberg
|
78,649,420
| 6,329,284
|
Vscode command to run selection in REPL and maintain focus in editor
|
<p>It seemed like a simple feature, but I am unable to figure it out.</p>
<p>What I want:</p>
<ul>
<li>Work in the editor</li>
<li>Execute a selection of lines in the native REPL</li>
<li>Maintain focus on the editor from which I selected the lines of code</li>
</ul>
<p>What I have:</p>
<ul>
<li><code>python.execSelectionInTerminal</code>: sends commands to the terminal and maintains focus on the editor.</li>
<li><code>python.execInREPL</code>: sends commands to the REPL but loses focus on the editor.</li>
</ul>
<p>What I tried:</p>
<pre><code>{
    "command": "runCommands",
    "key": "shift+enter",
    "when": "editorTextFocus",
    "args": {
        "commands": [
            "python.execInREPL",
            "workbench.action.focusActiveEditorGroup"
        ]
    }
},
</code></pre>
<p>but to no avail: the keybinding I created did not set the focus properly. Can someone point me in the right direction? And can someone tell me where I can find documentation on all these workbench.action... and python... commands that I can use? I find it difficult to track down.</p>
|
<python><visual-studio-code>
|
2024-06-20 20:11:46
| 1
| 1,340
|
zwep
|
78,649,416
| 2,302,911
|
UNIX socket data exchange between client and server. Due to the message length, it stops
|
<p>I'm trying to set up UNIX socket communication for exchanging floats between a client and a server.
The following <a href="https://medium.com/python-pandemonium/python-socket-communication-e10b39225a4c" rel="nofollow noreferrer">tutorial</a> helped a lot, and I wrote a very simple Python client that sends 4 randomly generated floats to a server. The server reads the 4 floats, sums them up, and sends the result back to the client. Here is the code:</p>
<p><strong>client.py</strong></p>
<pre><code>import numpy as np
import socket
import errno
import time
import sys
import os
def main():
    # Create a socket object
    client_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server_address = "/tmp/uds_socket"

    print(f"Connecting to: {server_address}")
    try:
        client_socket.connect(server_address)
    except:
        print("Error")
        sys.exit(1)

    while True:
        data = np.random.uniform(low = -10.0, high = 10.0, size = 4).tolist()
        msg = ','.join(map(str, data))
        try:
            # Send the floats to the server
            client_socket.sendall(msg.encode())

            # Receive the response from the server
            received_bytes = 0
            expected_bytes = 16
            while received_bytes < expected_bytes:
                result = client_socket.recv(16)
                received_bytes += len(result)
                print(received_bytes)

            print(f"Received sum from server: {result.decode()}")
        except IOError as e:
            if e.errno == errno.EPIPE:
                print(f"Error here: {e.errno}")
        except KeyboardInterrupt:
            break
        time.sleep(1)

    client_socket.close()

if __name__ == "__main__":
    main()
</code></pre>
<p>and here <strong>server.py</strong></p>
<pre><code>import numpy as np
import socket
import os
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE, SIG_DFL)
def main():
    server_address = "/tmp/uds_socket"

    # Remove old sockets
    try:
        os.unlink(server_address)
    except OSError:
        if os.path.exists(server_address):
            raise

    # Create a socket object
    server_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    print(f"Loading the model: {server_address}")

    # Bind the socket to the address
    server_socket.bind(server_address)

    # Listen for incoming connections
    server_socket.listen(1)

    while True:
        print("Waiting for a connection...")
        client_socket, client_address = server_socket.accept()
        print("Connection established...")
        try:
            # Receiving data in small chunks
            while True:
                msg = client_socket.recv(128)
                if msg:
                    data = list(map(float, msg.decode().split(',')))
                    print(f"Received: {data}, with length: {len(msg)}")
                    result = sum(data)
                    print(type(result))
                    client_socket.sendall(str(result).encode())
                else:
                    print("No data")
                    break
        # finally, because it runs even if an error occurs while transmitting a message
        finally:
            print("Closing the connection")
            client_socket.close()

if __name__ == "__main__":
    main()
</code></pre>
<p>Now... both programs run as expected. But after many runs I realized that one weakness of the code is the <em><strong>expected length</strong></em> of the message sent back from the server. I chose 16 because a float is rarely shorter than 16 characters, but I then hit cases where the float was shorter than 16, so the client gets stuck in the while loop...</p>
<p>How can I prevent such cases?
Since the length of the messages is not known a priori, how can I check whether the full message has been received so that the while loop can be exited?</p>
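A common way out is to frame each message with a fixed-size length prefix: the sender first transmits the payload length as 4 bytes, and the receiver loops until exactly that many bytes have arrived. A minimal sketch using only the standard library (<code>send_msg</code>/<code>recv_msg</code> are illustrative names, not part of the question's code; <code>socketpair()</code> just keeps the demo self-contained, and the same functions work over <code>AF_UNIX</code> stream sockets):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix the payload with its 4-byte big-endian length so the peer
    # knows exactly how many bytes the message occupies.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return fewer bytes than requested; loop until n arrive.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_msg(sock: socket.socket) -> bytes:
    # Read the 4-byte length header first, then exactly that many bytes.
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()
send_msg(a, str(sum([1.5, -2.25, 3.0])).encode())
print(recv_msg(b).decode())  # 2.25
```

With this framing the receiver never guesses: the header tells it exactly when the message ends, regardless of how short or long the stringified float is.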
|
<python><sockets><unix><server><client>
|
2024-06-20 20:10:46
| 0
| 348
|
Dave
|
78,649,311
| 14,498,998
|
How to add a TabularInline to Django's default User admin page when the other model has a many-to-many relation with the User model
|
<p>I searched the web for solutions to this problem but couldn't quite find the answer.</p>
<p>Here are my models:</p>
<pre><code>class Website(models.Model):
    url = models.URLField()
    users = models.ManyToManyField(User)

    def __str__(self) -> str:
        return str(self.url)


class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    bio = models.TextField(null=True, blank=True)
    nickname = models.CharField(blank=True, null=True, max_length=50)
    location = models.CharField(blank=True, null=True, max_length=50)
    weight = models.DecimalField(null=True, max_digits=5, decimal_places=2)

    def __str__(self) -> str:
        return str(self.user)
</code></pre>
<p>As you can see, the <code>users</code> field in the Website model has a "ManyToMany" relationship with Django's default User model. I was wondering how I could view or change the list of websites a user has on the User's admin page.</p>
<ul>
<li>I tried doing this and it didn't work:</li>
</ul>
<pre><code>class WebsiteInline(admin.TabularInline):
    model = Website
    extra = 1


class UserAdmin(admin.ModelAdmin):
    inlines = [WebsiteInline]


admin.site.unregister(User)
admin.site.register(User, UserAdmin)
</code></pre>
<p>and here is the error: <class 'accounts.admin.WebsiteInline'>: (admin.E202) 'accounts.Website' has no ForeignKey to 'auth.User'.</p>
|
<python><django><many-to-many><admin><user-administration>
|
2024-06-20 19:36:41
| 0
| 313
|
Alin
|
78,649,170
| 297,780
|
Can’t Install the search-ads-360-python Library
|
<p>I'm trying to connect with the SearchAds 360 API via a Python script. I'm following their <a href="https://developers.google.com/search-ads/reporting/quickstart/quickstart-guide" rel="nofollow noreferrer">quickstart guide</a> and I’m having trouble installing the <a href="https://developers.google.com/search-ads/reporting/client-libraries/client-libraries" rel="nofollow noreferrer">search-ads-360-python client library</a>.</p>
<p>I've downloaded the file, and when I run the command <code>pip install searchads360-py.tar.gz</code>, I get the following error:</p>
<pre><code>(sa360api) lenwood@Lenwood-MBP 2024.06.19_sa360-api % pip install searchads360-py.tar.gz
Processing ./searchads360-py.tar.gz
Preparing metadata (setup.py) ... done
Requirement already satisfied: google-api-core<3.0.0dev,>=2.10.0 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (2.19.0)
Requirement already satisfied: google-auth<3.0.0dev,>=2.14.1 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-ads-searchads360==0.0.0) (2.30.0)
Requirement already satisfied: googleapis-common-protos>=1.53.0 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-ads-searchads360==0.0.0) (1.63.1)
Requirement already satisfied: grpcio>=1.10.0 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-ads-searchads360==0.0.0) (1.64.1)
Requirement already satisfied: proto-plus<2.0.0dev,>=1.22.3 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-ads-searchads360==0.0.0) (1.24.0)
Requirement already satisfied: protobuf!=3.20.0,!=3.20.1,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0.dev0,>=3.19.5 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-api-core<3.0.0dev,>=2.10.0->google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (4.25.3)
Requirement already satisfied: requests<3.0.0.dev0,>=2.18.0 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-api-core<3.0.0dev,>=2.10.0->google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (2.32.3)
Requirement already satisfied: grpcio-status<2.0.dev0,>=1.33.2 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (1.62.2)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-auth<3.0.0dev,>=2.14.1->google-ads-searchads360==0.0.0) (5.3.3)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-auth<3.0.0dev,>=2.14.1->google-ads-searchads360==0.0.0) (0.4.0)
Requirement already satisfied: rsa<5,>=3.1.4 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from google-auth<3.0.0dev,>=2.14.1->google-ads-searchads360==0.0.0) (4.9)
Requirement already satisfied: pyasn1<0.7.0,>=0.4.6 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from pyasn1-modules>=0.2.1->google-auth<3.0.0dev,>=2.14.1->google-ads-searchads360==0.0.0) (0.6.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from requests<3.0.0.dev0,>=2.18.0->google-api-core<3.0.0dev,>=2.10.0->google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from requests<3.0.0.dev0,>=2.18.0->google-api-core<3.0.0dev,>=2.10.0->google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from requests<3.0.0.dev0,>=2.18.0->google-api-core<3.0.0dev,>=2.10.0->google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (2.2.2)
Requirement already satisfied: certifi>=2017.4.17 in /Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages (from requests<3.0.0.dev0,>=2.18.0->google-api-core<3.0.0dev,>=2.10.0->google-api-core[grpc]<3.0.0dev,>=2.10.0->google-ads-searchads360==0.0.0) (2024.6.2)
Building wheels for collected packages: google-ads-searchads360
Building wheel for google-ads-searchads360 (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [42 lines of output]
/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py:80: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov/setup.py", line 56, in <module>
setuptools.setup(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py", line 102, in setup
_install_setup_requires(attrs)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py", line 75, in _install_setup_requires
_fetch_build_eggs(dist)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py", line 80, in _fetch_build_eggs
dist.fetch_build_eggs(dist.setup_requires)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/dist.py", line 643, in fetch_build_eggs
return _fetch_build_eggs(self, requires)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/installer.py", line 38, in _fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 832, in resolve
dist = self._resolve_dist(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 868, in _resolve_dist
dist = best[req.key] = env.best_match(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1155, in best_match
return self.obtain(req, installer)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1170, in obtain
return installer(requirement) if installer else None
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/installer.py", line 106, in _fetch_build_egg_no_warn
wheel.install_as_egg(dist_location)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/wheel.py", line 122, in install_as_egg
self._install_as_egg(destination_eggdir, zf)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/wheel.py", line 130, in _install_as_egg
self._convert_metadata(zf, destination_eggdir, dist_info, egg_info)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/wheel.py", line 175, in _convert_metadata
os.rename(dist_info, egg_info)
OSError: [Errno 66] Directory not empty: '/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov/.eggs/libcst-1.4.0-py3.9-macosx-11.1-arm64.egg/libcst-1.4.0.dist-info' -> '/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov/.eggs/libcst-1.4.0-py3.9-macosx-11.1-arm64.egg/EGG-INFO'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for google-ads-searchads360
Running setup.py clean for google-ads-searchads360
error: subprocess-exited-with-error
× python setup.py clean did not run successfully.
│ exit code: 1
╰─> [42 lines of output]
/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py:80: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov/setup.py", line 56, in <module>
setuptools.setup(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py", line 102, in setup
_install_setup_requires(attrs)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py", line 75, in _install_setup_requires
_fetch_build_eggs(dist)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/__init__.py", line 80, in _fetch_build_eggs
dist.fetch_build_eggs(dist.setup_requires)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/dist.py", line 643, in fetch_build_eggs
return _fetch_build_eggs(self, requires)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/installer.py", line 38, in _fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 832, in resolve
dist = self._resolve_dist(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 868, in _resolve_dist
dist = best[req.key] = env.best_match(
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1155, in best_match
return self.obtain(req, installer)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1170, in obtain
return installer(requirement) if installer else None
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/installer.py", line 106, in _fetch_build_egg_no_warn
wheel.install_as_egg(dist_location)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/wheel.py", line 122, in install_as_egg
self._install_as_egg(destination_eggdir, zf)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/wheel.py", line 130, in _install_as_egg
self._convert_metadata(zf, destination_eggdir, dist_info, egg_info)
File "/Users/lenwood/opt/anaconda3/envs/sa360api/lib/python3.9/site-packages/setuptools/wheel.py", line 175, in _convert_metadata
os.rename(dist_info, egg_info)
OSError: [Errno 66] Directory not empty: '/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov/.eggs/libcst-1.4.0-py3.9-macosx-11.1-arm64.egg/libcst-1.4.0.dist-info' -> '/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov/.eggs/libcst-1.4.0-py3.9-macosx-11.1-arm64.egg/EGG-INFO'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed cleaning build dir for google-ads-searchads360
Failed to build google-ads-searchads360
ERROR: Could not build wheels for google-ads-searchads360, which is required to install pyproject.toml-based projects
</code></pre>
<p>I've tried:</p>
<ul>
<li>running <code>pip install searchads360-py.tar.gz --use-pep517</code>, it completes without error but the package doesn't show in either pip list or conda list and I get a ModuleNotFoundError when I try to import it.</li>
<li>unpacking the tarball, navigating to the folder and running <code>python -m pip install .</code>.</li>
<li>updating pip, setuptools and wheel to their latest versions</li>
<li>running this within Python 3.8.19, 3.9.19 and 3.10.14 environments</li>
<li>searching my system for the non-empty folder, the <code>/private/var/folders/3w/h_s63nl14p75qzww71xm_3fw0000gn/T/pip-req-build-gwhvmyov</code> folder doesn't exist.</li>
</ul>
<p>My system is an M1 Mac with macOS Sonoma 14.5. Can anyone share guidance on how to resolve?</p>
|
<python><pip>
|
2024-06-20 18:52:54
| 1
| 1,414
|
Lenwood
|
78,649,010
| 3,486,684
|
How to pass optimization options such as `read_only=True` to `pandas.read_excel` when using `openpyxl` as the engine?
|
<p>I want to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html" rel="nofollow noreferrer"><code>pandas.read_excel</code></a> to read an Excel file with the option <code>engine="openpyxl"</code>. However, I also want to pass additional <a href="https://openpyxl.readthedocs.io/en/latest/optimized.html" rel="nofollow noreferrer">optimization options to <code>openpyxl</code> such as</a>:</p>
<ul>
<li><code>read_only=True</code></li>
<li><code>data_only=True</code></li>
<li><code>keep_links=False</code></li>
</ul>
<p>How do I do this?</p>
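Newer pandas releases expose an <code>engine_kwargs</code> parameter on <code>read_excel</code> that is forwarded to the engine (worth checking in your installed version's documentation); where that isn't available, one workaround is to open the workbook with openpyxl directly, passing the flags to <code>load_workbook</code>, and build the DataFrame from its rows. A sketch (<code>read_excel_fast</code> is an illustrative name):

```python
import pandas as pd
from openpyxl import load_workbook

def read_excel_fast(path):
    # Hand the optimization flags straight to openpyxl.
    wb = load_workbook(path, read_only=True, data_only=True, keep_links=False)
    ws = wb.active
    rows = ws.values          # generator of row tuples
    header = next(rows)       # first row becomes the column names
    df = pd.DataFrame(rows, columns=header)
    wb.close()                # read-only workbooks keep the file handle open
    return df
```

Note this only reads the active sheet and assumes the first row holds headers; <code>pandas.read_excel</code> does a fair amount of extra type inference that this sketch skips.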
|
<python><excel><pandas><openpyxl>
|
2024-06-20 18:05:23
| 1
| 4,654
|
bzm3r
|
78,648,906
| 10,470,575
|
Vectors in Redis Search index are corrupted even though index searches work correctly
|
<p>I have a redis cache using Redis Search and an HNSW index on a 512 element vector of float32 values.</p>
<p>It is defined like this:</p>
<pre class="lang-py prettyprint-override"><code>schema = (
    VectorField(
        "vector",
        "HNSW",
        {
            "TYPE": "FLOAT32",
            "DIM": 512,
            "DISTANCE_METRIC": "IP",
            "EF_RUNTIME": 400,
            "EPSILON": 0.4
        },
        as_name="vector"
    ),
)

definition = IndexDefinition(prefix=[REDIS_PREFIX], index_type=IndexType.HASH)
res = client.ft(REDIS_INDEX_NAME).create_index(
    fields=schema, definition=definition
)
</code></pre>
<p>I can insert numpy float32 vectors into this index by writing the result of <code>vector.tobytes()</code> into them directly. I can then accurately query those same vectors using a vector similarity search.</p>
<p>Despite this working correctly, when I read these vectors out of the cache using <code>client.hget(key, "vector")</code> I get results that are a variable number of bytes. All of these vectors are definitely 512 elements when I insert them, but sometimes they come back as a number of bytes that isn't even a multiple of 4! I can't decode them back into a numpy vector at that point.</p>
<p>I can't tell if this is a bug, or if I'm doing something wrong. Either way, something clearly isn't right.</p>
<p><strong>Edit</strong>: I've discovered that the records that are corrupted aren't actually in the index (if I'm interpreting this right).</p>
<p>I check whether or not a record is in the index by running</p>
<pre class="lang-py prettyprint-override"><code>client.ft(REDIS_INDEX_NAME).execute_command("FT.SEARCH", REDIS_INDEX_NAME, "*", f"INKEYS", "1", key)
</code></pre>
<p>This returns nothing when the record is not in the index. I'm now questioning whether or not I somehow wrote a number of corrupted records to this database with an old piece of code that has since been fixed. This might be the explanation.</p>
<p><strong>Edit 2</strong>: The corrupted records are distributed evenly throughout the database by insertion time, so this isn't an issue of some old code that was buggy and has since been fixed.</p>
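As a quick sanity check on the blobs read back with <code>hget</code> (numpy only, no Redis needed): a well-formed 512-dimension float32 vector always serializes to exactly 512 × 4 = 2048 bytes, and <code>tobytes</code>/<code>frombuffer</code> round-trip losslessly, so any stored value with a different byte length was never a valid vector to begin with:

```python
import numpy as np

vec = np.random.default_rng(0).random(512).astype(np.float32)
raw = vec.tobytes()

# Every valid 512-dim float32 blob is exactly 2048 bytes long.
assert len(raw) == 512 * 4

# And decoding recovers the identical vector, bit for bit.
decoded = np.frombuffer(raw, dtype=np.float32)
assert np.array_equal(vec, decoded)
print(len(raw))  # 2048
```

Running this length check against every key would let you enumerate exactly which records are corrupted, consistent with the observation that the corrupted ones never made it into the index.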
|
<python><redis>
|
2024-06-20 17:33:23
| 1
| 466
|
magnanimousllamacopter
|
78,648,876
| 13,088,678
|
Nested condition on simple data
|
<p>I have a dataframe with 3 columns: two boolean and one string.</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, BooleanType, StringType
from pyspark.sql.functions import col, trim, nvl, lit

# Create a Spark session
spark = SparkSession.builder \
    .appName("Condition Test") \
    .getOrCreate()

# Sample data
data = [
    (True, 'CA', None),
    (True, 'US', None),
    (False, 'CA', None)
]

# Define schema for the dataframe
schema = StructType([
    StructField("is_flag", BooleanType(), nullable=False),
    StructField("country", StringType(), nullable=False),
    StructField("rule", BooleanType(), nullable=True)
])

# Create DataFrame
df = spark.createDataFrame(data, schema=schema)

# Show initial dataframe
df.show(truncate=False)

condition = (
    (~col("is_flag")) |
    ((col("is_flag")) & (trim(col("country")) != 'CA') & nvl(col("rule"), lit(False)) != True)
)

df = df.filter(condition)

# show filtered dataframe
df.show(truncate=False)
</code></pre>
<p>The above code returns the data below.</p>
<pre><code>+-------+-------+----+
|is_flag|country|rule|
+-------+-------+----+
|true |CA |NULL|
|true |US |NULL|
|false |CA |NULL|
+-------+-------+----+
</code></pre>
<p>However, since I'm explicitly writing <code>((col("is_flag")) & (trim(col("country")) != 'CA') & nvl(col("rule"),lit(False)) != True)</code>, i.e. <code>trim(col("country")) != 'CA'</code> when <code>is_flag</code> is true, I'm not expecting the first record; I need results like below.</p>
<pre><code>+-------+-------+----+
|is_flag|country|rule|
+-------+-------+----+
|true |US |NULL|
|false |CA |NULL|
+-------+-------+----+
</code></pre>
<p>Question: why does the above code also return the 1st record <code>|true |CA |NULL|</code>, whereas we have explicitly required <code>country != 'CA'</code> when <code>is_flag</code> is true (boolean)?</p>
<p>However, the same condition applied via SQL returns the expected result.</p>
<pre><code>select *
from df
where (
    not is_flag or
    (is_flag and trim(country) != 'CA' and nvl(rule, False) != True)
)
</code></pre>
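One explanation worth checking is Python operator precedence: <code>&amp;</code> binds more tightly than <code>!=</code>, so the filter's last clause parses as <code>((...) &amp; (...) &amp; nvl(...)) != True</code> rather than applying <code>!= True</code> to the <code>nvl(...)</code> term alone; wrapping that comparison in its own parentheses, which the SQL version's grouping effectively does, should restore the intended behaviour. The same pitfall reproduced with plain Python booleans:

```python
a, b, c = False, True, True

# How Python actually parses `a & b != c`: & runs before !=.
actual = (a & b) != c        # (False & True) != True  ->  True
# What the condition was meant to express: compare first, then AND.
intended = a & (b != c)      # False & (True != True)  ->  False

print(actual, intended)      # True False
```

Applied to the question's filter, the fix would be writing <code>&amp; (nvl(col("rule"), lit(False)) != True)</code> with explicit parentheses around the comparison.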
|
<python><apache-spark><pyspark>
|
2024-06-20 17:22:19
| 1
| 407
|
Matthew
|
78,648,820
| 1,182,299
|
BeautifulSoup output not properly formatted
|
<p>I'm trying to scrape some text from a website; the problem is its HTML formatting.</p>
<pre><code> <div class="coptic-text html">
<div class="htmlvis"><t class="translation" title="The book of the genealogy of Jesus Christ, the son of David, the son of Abraham."><div class="verse" verse="1"><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲡ' target='_new'>ⲡ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ϫⲱⲱⲙⲉ' target='_new'>ϫⲱⲱⲙⲉ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲛ' target='_new'>ⲙ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲡ' target='_new'>ⲡⲉ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ϫⲡⲟ' target='_new'>ϫⲡⲟ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲛ' target='_new'>ⲛ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲓⲏⲥⲟⲩⲥ' target='_new'>ⲓⲏⲥⲟⲩⲥ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲡ' target='_new'>ⲡⲉ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲭⲣⲓⲥⲧⲟⲥ' target='_new'>ⲭⲣⲓⲥⲧⲟⲥ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲡ' target='_new'>ⲡ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ϣⲏⲣⲉ' target='_new'>ϣⲏⲣⲉ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲛ' target='_new'>ⲛ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲇⲁⲩⲉⲓⲇ' target='_new'>ⲇⲁⲩⲉⲓⲇ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲡ' target='_new'>ⲡ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ϣⲏⲣⲉ' target='_new'>ϣⲏⲣⲉ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲛ' target='_new'>ⲛ</a></span><!--
--><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=ⲁⲃⲣⲁϩⲁⲙ' target='_new'>ⲁⲃⲣⲁϩⲁⲙ</a></span></span><!--
--><span class="word"><span class="norm"><a href='https://coptic-dictionary.org/results.cgi?quick_search=.' target='_new'>.</a></span></span></div></t><!--
--></span></div></t></div>
</code></pre>
<p>My desired output:</p>
<pre><code>1: ⲡϫⲱⲱⲙⲉ ⲙⲡⲉϫⲡⲟ ⲛⲓⲏⲥⲟⲩⲥ ⲡⲉⲭⲣⲓⲥⲧⲟⲥ ⲡϣⲏⲣⲉ ⲛⲇⲁⲩⲉⲓⲇ ⲡϣⲏⲣⲉ ⲛⲁⲃⲣⲁϩⲁⲙ.
</code></pre>
<p>My output:</p>
<pre><code>ⲡϫⲱⲱⲙⲉⲙⲡⲉϫⲡⲟⲛⲓ ⲏⲥⲟⲩⲥⲡⲉⲭⲣⲓ ⲥⲧⲟⲥⲡϣⲏⲣⲉⲛⲇⲁⲩⲉⲓ ⲇⲡϣⲏⲣⲉⲛⲁⲃⲣⲁϩⲁⲙ.
</code></pre>
<p>My code so far:</p>
<pre><code>#coding: utf-8
import requests
from bs4 import BeautifulSoup
import signal
import sys
import os.path
signal.signal(signal.SIGINT, lambda x, y: sys.exit(0))
if len(sys.argv) != 4:
print("Usage: %s <book name> <first chapter> <last chapter>" % os.path.basename(__file__))
quit()
book_name = sys.argv[1]
start = int(sys.argv[2])
stop = int(sys.argv[3])
while start <= stop:
out_file = open(f"./{book_name}_{str(start)}.txt", "a")
try:
response = requests.get(f'https://data.copticscriptorium.org/texts/new-testament/{book_name}_{str(start)}/sahidica')
soup = BeautifulSoup(response.text, "lxml")
content_list = soup.find_all("span", class_="norm")
text = []
print(f"[{str(start)}/{str(stop)}] https://data.copticscriptorium.org/texts/new-testament/{book_name}_{str(start)}/sahidica")
for element in content_list:
text.append(element.get_text())
text = ''.join(text).strip()
out_file.write("%s\n" % text)
except:
print("Error")
start += 1
</code></pre>
<p>P.S. The language is Old Coptic.</p>
<p>EDIT:</p>
<p>I think the problem is that the formatting is done with CSS. Can I somehow apply the CSS styling with BeautifulSoup?</p>
<pre><code>.word{ white-space: inherit; }
.word:after{content: " ";}
div.verse{display: block; padding-top: 6px; padding-bottom: 6px; text-indent: -15px; padding-left: 15px; }
div.verse:before{content: attr(verse)": "; font-weight:bold}
.norm a{text-decoration: none !important; color:inherit}
.norm a:hover{text-decoration: underline !important; color: blue}
</code></pre>
<p>EDIT:</p>
<p>It seems <code>content_list = soup.find_all("span", class_="word")</code> outputs the desired result, but I still can't output the verse numbers.</p>
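The CSS above is what inserts the spacing and numbering the page displays (`.word:after` appends a space, `div.verse:before` prepends the `verse` attribute), so `get_text()` alone never sees them. One way to reproduce that is to iterate `div.verse` elements, read their `verse` attribute, and join the `span.word` texts with spaces. A sketch, using a simplified stand-in for the page's markup (the `html` string below is illustrative sample data, not the real page):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for the page's markup (illustrative sample, not the real page)
html = (
    '<div class="verse" verse="1">'
    '<span class="word"><span class="norm">ⲡ</span><span class="norm">ϫⲱⲱⲙⲉ</span></span>'
    '<span class="word"><span class="norm">ⲙ</span><span class="norm">ⲡⲉ</span><span class="norm">ϫⲡⲟ</span></span>'
    '</div>'
)

soup = BeautifulSoup(html, "html.parser")
lines = []
for verse in soup.find_all("div", class_="verse"):
    # The verse number lives in the "verse" attribute; words are joined with
    # spaces, while the "norm" fragments inside each word stay glued together.
    words = [w.get_text() for w in verse.find_all("span", class_="word")]
    lines.append(f'{verse.get("verse")}: {" ".join(words)}')
print("\n".join(lines))
```

On the real page this loop would replace the `find_all("span", class_="norm")` collection step: `get_text()` on each `word` span already concatenates its `norm` children (and their `<a>` links) without separators, matching the rendered text.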
|
<python><beautifulsoup><python-requests>
|
2024-06-20 17:05:39
| 1
| 1,791
|
bsteo
|