| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
78,062,559
| 1,773,592
|
In polars how do I count unique rows cumulatively?
|
<p>My df:</p>
<pre><code>data_string= """
A,B,C,D
0,0,0,0
0,0,0,1
0,0,1,1
1,0,0,0
1,0,0,0
1,0,0,0
1,1,0,0
1,1,0,0
1,1,0,1
1,1,1,0
"""
df = pl.read_csv(StringIO(data_string))
</code></pre>
<p>I want a cumulative count of unique rows (the 'actual' column at end of this). I try:</p>
<pre><code>df = (df
    .with_columns(pl.lit(1).alias("ones"))
    .select([
        pl.all().exclude("ones"),
        pl.col("ones").cum_count().over(['A', 'B', 'C', 'D']).flatten().alias("cum_count")
    ]))
</code></pre>
<p>This is close but not quite (the 'expected' column at the end of this). What am I missing?</p>
<pre><code># expected actual
# 1 1
# 1 1
# 1 1
# 1 1
# 2 1
# 3 2
# 1 1
# 2 2
# 1 1
# 1 1
</code></pre>
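<p>For reference, a minimal sketch in plain Python (independent of polars) of what the 'expected' column computes: a running count of how many times each exact row has occurred so far.</p>

```python
from collections import defaultdict

rows = [
    (0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1),
    (1, 0, 0, 0), (1, 0, 0, 0), (1, 0, 0, 0),
    (1, 1, 0, 0), (1, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 0),
]

seen = defaultdict(int)  # row tuple -> occurrences seen so far
cum_count = []
for row in rows:
    seen[row] += 1
    cum_count.append(seen[row])

print(cum_count)  # [1, 1, 1, 1, 2, 3, 1, 2, 1, 1]
```

<p>In polars terms this is a per-group running index; an expression along the lines of <code>pl.int_range(1, pl.len() + 1).over(['A', 'B', 'C', 'D'])</code> should produce the same column, though that formulation is untested here.</p>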
|
<python><dataframe><python-polars>
|
2024-02-26 16:43:51
| 1
| 3,391
|
schoon
|
78,062,514
| 4,402,306
|
How to skip pbr version check in Python packages
|
<p>I use pbr in my package. I also use readthedocs to host and generate the docs of the package.</p>
<p>pbr doesn't seem to have a way to skip version checking when a change is made purely for configuration, e.g. adding a YAML file for readthedocs. If the commit is untagged and the version in my <code>setup.cfg</code> matches the previously released version, pbr complains that the target version should be one patch higher: with <code>230.1.2</code> in <code>setup.cfg</code>, it expects <code>230.1.3</code>.</p>
<p>I don't want to bump the version each time I'm making a change to a configuration file, what can I do to avoid this?</p>
|
<python><read-the-docs><python-pbr>
|
2024-02-26 16:37:39
| 0
| 633
|
Corel
|
78,062,460
| 1,050,187
|
Grouping in minute intervals starting at a fixed time, regardless of the first row's time and date
|
<p>I have this code intended to count occurrences in 30-minute intervals; the requirement is for the intervals to start at fixed points, minute 00 and minute 30 of each hour. Despite every attempt of mine, the second grouping is aligned to minutes 03 and 33.</p>
<p>I suspect that both groupings are aligned to the first time row, and that the first one is correct only by chance. How can I tell the grouper to force the alignment to minutes 00 and 30?</p>
<pre><code># load and prepare data
df_long_forecast = pd.read_csv('df_long_forecast - reduced.csv')
df_long_forecast['after_12_max_datetime'] = pd.to_datetime(
    df_long_forecast['after_12_max_datetime']
)
df_long_forecast['after_12_max_time'] = (
    df_long_forecast['after_12_max_datetime']
    - df_long_forecast['after_12_max_datetime'].dt.normalize()
)  # timedelta64[ns]

# count the number of maxes happening after 12am (absolute and percentage)
hist_max = df_long_forecast.groupby(
    pd.Grouper(key='after_12_max_time', freq='30T', offset='0T', origin='epoch')
)['Date'].count()
display(hist_max)

# count the number of maxes after 12 that result in a profit trade
# (cannot be MORE than the previous ones)
df_long_forecast_profit = df_long_forecast[
    df_long_forecast['after_12_max_>_9_to_12_high'] > 0
]
profit_long = df_long_forecast_profit.groupby(
    pd.Grouper(key='after_12_max_time', freq='30T', offset='0T', origin='epoch')
)['Date'].count()
display(profit_long)
</code></pre>
<p>Here is <code>hist_max</code></p>
<pre class="lang-none prettyprint-override"><code>after_12_max_time
0 days 12:00:00 24
0 days 12:30:00 5
0 days 13:00:00 7
0 days 13:30:00 5
0 days 14:00:00 5
0 days 14:30:00 4
0 days 15:00:00 4
0 days 15:30:00 1
0 days 16:00:00 5
0 days 16:30:00 7
0 days 17:00:00 1
0 days 17:30:00 6
0 days 18:00:00 1
0 days 18:30:00 1
0 days 19:00:00 1
0 days 19:30:00 6
0 days 20:00:00 3
0 days 20:30:00 0
0 days 21:00:00 6
0 days 21:30:00 19
0 days 22:00:00 8
Freq: 30T, Name: Date, dtype: int64
</code></pre>
<p>and this is <code>profit_long</code></p>
<pre class="lang-none prettyprint-override"><code>after_12_max_time
0 days 12:03:00 8
0 days 12:33:00 4
0 days 13:03:00 5
0 days 13:33:00 4
0 days 14:03:00 5
0 days 14:33:00 4
0 days 15:03:00 3
0 days 15:33:00 2
0 days 16:03:00 5
0 days 16:33:00 6
0 days 17:03:00 2
0 days 17:33:00 5
0 days 18:03:00 1
0 days 18:33:00 2
0 days 19:03:00 0
0 days 19:33:00 5
0 days 20:03:00 3
0 days 20:33:00 0
0 days 21:03:00 6
0 days 21:33:00 21
0 days 22:03:00 3
Freq: 30T, Name: Date, dtype: int64
</code></pre>
<p>The CSV file with just the pertinent columns can be downloaded from this <a href="https://www.dropbox.com/scl/fi/mytcuez9yyb841vpj50vo/df_long_forecast-reduced.csv?rlkey=llnlvviird802ikxmkd3sjw15&dl=0" rel="nofollow noreferrer">link</a>.</p>
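<p>One way to pin the bin edges that sidesteps <code>pd.Grouper</code> entirely (a sketch, assuming the column is a timedelta as above): floor each value to 30 minutes and group on the result, so the edges sit at :00/:30 no matter where the first row falls.</p>

```python
import pandas as pd

# small synthetic sample standing in for the real CSV
s = pd.to_timedelta(["12:03:00", "12:17:00", "12:33:00", "13:05:00"])
df = pd.DataFrame({"after_12_max_time": s, "Date": 1})

# Floor onto a fixed 30-minute grid, then count per bin; the edges no
# longer depend on the first observed value.
binned = df.groupby(df["after_12_max_time"].dt.floor("30min"))["Date"].count()
print(binned)
```

<p>Unlike <code>pd.Grouper</code>, a plain <code>groupby</code> will not emit empty intervals; if those are needed, reindexing onto the full 30-minute range restores them.</p>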
|
<python><pandas><group-by><time-series><pd.grouper>
|
2024-02-26 16:28:03
| 2
| 623
|
fede72bari
|
78,062,451
| 6,435,921
|
Most Pythonic and object-oriented way to create a class that collects together other objects?
|
<p>My real example is different, but I am going to explain it in terms of this made-up example.</p>
<p>I have some base abstract class such as <code>AbstractPerson</code>. I then have several specific persons which inherit from this, such as <code>Bob</code> and <code>Alice</code>.</p>
<pre><code>class AbstractPerson:
    def __init__(self, age=1):
        """Abstract person base class."""
        self.age = age

    def interact(self):
        raise NotImplementedError("Interaction not implemented.")


class Bob(AbstractPerson):
    def __init__(self, age=25):
        """Bob is a specific person."""
        super().__init__(age=age)

    def interact(self):
        return "Bob makes a joke."


class Alice(AbstractPerson):
    def __init__(self, age=23):
        """Alice is another specific person."""
        super().__init__(age=age)

    def interact(self):
        return "Alice laughs."
</code></pre>
<p>Now, I would like to create objects that are collections of <strong>any number of persons</strong>. What is the most Pythonic way of doing this? Examples are: lists, dictionaries, tuples, and using classes. I suppose, in an OOP framework, one ideally would create a class. What's the typical way of doing this? Here is one example, following <a href="https://stackoverflow.com/a/8187408/6435921">this</a>.</p>
<pre><code>class FriendGroup:
    def __init__(self, **persons_dict):
        self.__dict__.update(persons_dict)

    def interact(self):
        """The friends interact."""
        interaction = ""
        for key, value in self.__dict__.items():
            interaction += value.interact()
        return interaction
</code></pre>
<p>Or is it better to create <code>FriendGroup</code> as inheriting from <code>AbstractPerson</code>?</p>
<p>The idea is that at some point I want to create a <code>FriendGroup</code> that contains any (finite) number of <code>AbstractPerson</code> and such that it is easy enough to "use" these instances. Something like:</p>
<pre><code>bob = Bob(age=40)
second_bob = Bob(age=25)
alice = Alice(age=35)
friends = Friends(bob, alice, second_bob)
</code></pre>
<p>Then I want to be able to do things with these people.
I don't care about the names, just that they are all
inheriting from AbstractPerson and ideally have the
same methods and attributes.</p>
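<p>For illustration, a minimal sketch of the composition option (reusing the question's class names; one reasonable choice, not the only one): hold the persons in a list instead of inheriting from <code>AbstractPerson</code>, and delegate explicitly.</p>

```python
class AbstractPerson:
    def __init__(self, age=1):
        self.age = age

    def interact(self):
        raise NotImplementedError("Interaction not implemented.")


class Bob(AbstractPerson):
    def interact(self):
        return "Bob makes a joke."


class Alice(AbstractPerson):
    def interact(self):
        return "Alice laughs."


class FriendGroup:
    """Composition: a FriendGroup has persons; it is not a person itself."""

    def __init__(self, *persons):
        self.persons = list(persons)  # any finite number, order preserved

    def interact(self):
        # Delegate to every member and join the results.
        return " ".join(p.interact() for p in self.persons)


friends = FriendGroup(Bob(age=40), Alice(age=35), Bob(age=25))
print(friends.interact())  # Bob makes a joke. Alice laughs. Bob makes a joke.
```

<p>Inheriting <code>FriendGroup</code> from <code>AbstractPerson</code> would claim that a group <em>is</em> a person, which only makes sense if groups and individuals must be treated uniformly (the composite pattern); for a plain collection, composition is the simpler fit.</p>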
|
<python><class><oop><inheritance>
|
2024-02-26 16:26:46
| 1
| 3,601
|
Euler_Salter
|
78,062,394
| 3,442,125
|
Debug python program in VSCode remote window with sourced environment
|
<p>I'm developing a ROS2-application in python and the source code is on the robot and needs a specific environment to run/debug. This environment is setup by sourcing a file, so if I run it manually, it boils down to</p>
<pre><code>ssh user@robot
source foo/setup.bash
python3 fancy.py
</code></pre>
<p>I tried to integrate the source-command as preLaunchTask, but these tasks are executed in their own shell and then a new one (with new environment variables) is started to run the program and I can't find the modules that are added with the source-command.</p>
<p>I assume there is a way to debug in the same terminal in which I ran my preLaunchTasks, but I didn't find the right config yet.</p>
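<p>One approach worth sketching (hedged: the paths and names here are assumptions, not a verified recipe): point the debug configuration at an interpreter wrapper that sources the environment first, so the debugged process inherits the ROS 2 variables. The wrapper is a two-line shell script on the robot (<code>source ~/foo/setup.bash; exec python3 "$@"</code>), and <code>launch.json</code> then overrides the interpreter:</p>

```json
{
    "name": "Python: fancy.py (sourced env)",
    "type": "debugpy",
    "request": "launch",
    "program": "${workspaceFolder}/fancy.py",
    "console": "integratedTerminal",
    "python": "/home/user/ros-python-wrapper.sh"
}
```

<p>Whether the <code>python</code> override accepts a wrapper script depends on the debugpy extension version; an alternative with the same effect is starting <code>python3 -m debugpy --listen ...</code> manually from an already-sourced shell and using an attach configuration.</p>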
|
<python><visual-studio-code><ros2>
|
2024-02-26 16:15:56
| 0
| 867
|
FooTheBar
|
78,062,352
| 899,578
|
Unable to create clientid-secret in Azure Active Directory
|
<p>I am trying to generate a new client ID and secret for an application in Azure Active Directory via the <code>client_credentials</code> flow, using the code below.</p>
<p>The application used to run this script has <code>Application.ReadWrite.All</code>, <code>Application.ReadWrite.OwnedBy</code> and <code>Directory.ReadWrite.All</code> permissions at <code>Application</code> and <code>Delegated</code> level.</p>
<pre><code>import requests
from datetime import datetime, timedelta

# Azure AD B2C Constants
application_id = '888-xxxx-xxxx-xxxxx'
TENANT_ID = 'xxxx-xxxx-xxxxxxx'
CLIENT_ID = 'xxxx-xxxx-xxxxxxx'
CLIENT_SECRET = 'xxxxx-xxxxx-xxxxx'

# Token endpoint to get the access token
token_endpoint = f'https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token'

# Resource URL
resource_url = 'https://graph.microsoft.com'

# Scopes for the Microsoft Graph API
scopes = ['https://graph.microsoft.com/.default']

# Parameters to get the access token
token_data = {
    'grant_type': 'client_credentials',
    'client_id': CLIENT_ID,
    'client_secret': CLIENT_SECRET,
    'scope': ' '.join(scopes)
}

# Get access token
token_response = requests.post(token_endpoint, data=token_data)
access_token = token_response.json().get('access_token')

# Create app registration in Azure AD B2C
create_app_endpoint = f'{resource_url}/v1.0/{TENANT_ID}/applications'
headers = {
    'Authorization': f'Bearer {access_token}',
    'Content-Type': 'application/json'
}

url_password = f'{resource_url}/v1.0/applications/{application_id}/addPassword'
print(url_password)

# Construct the request body with password credentials details
password_credentials_data = {
    "passwordCredential": {
        "displayName": "System_Access_1"
    }
}

# Send the request to create password credentials
response = requests.post(url_password, headers=headers, json=password_credentials_data)

# Check the response
if response.status_code == 200:
    print("Password credentials created successfully.")
else:
    print("Failed to create password credentials. Status code:", response.status_code)
    print("Response:", response.text)
</code></pre>
<p>However, I am getting an error</p>
<pre><code>https://graph.microsoft.com/v1.0/applications/888-xxxx-xxxx-xxxxx/addPassword
Failed to create password credentials. Status code: 404
Response: {"error":{"code":"Request_ResourceNotFound","message":"Resource '888-xxxx-xxxx-xxxxx' does not exist or one of its queried reference-property objects are not present.","innerError":{"date":"2024-02-26T16:01:36","request-id":" ","client-request-id":""}}}
</code></pre>
<p>I have followed the approach here,</p>
<p><a href="https://learn.microsoft.com/en-us/graph/api/application-addpassword?view=graph-rest-1.0&tabs=http" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/graph/api/application-addpassword?view=graph-rest-1.0&tabs=http</a></p>
<p>I checked in Azure Active Directory and the application exists. What could be the potential reason?</p>
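<p>One hypothesis worth checking (an assumption based on the 404, not verified against this tenant): Graph's <code>/applications/{id}</code> segment is keyed by the app registration's directory <em>object id</em>, not its <em>appId</em> (client id); passing the appId there typically yields <code>Request_ResourceNotFound</code>. The ids below are placeholders illustrating the two addressing forms:</p>

```python
resource_url = "https://graph.microsoft.com"

object_id = "11111111-2222-3333-4444-555555555555"  # hypothetical directory object id
app_id = "888-xxxx-xxxx-xxxxx"                      # the application's appId (client id)

# /applications/{id} is keyed by the object id shown on the app registration's
# Overview blade, not by the appId used for authentication:
url_by_object_id = f"{resource_url}/v1.0/applications/{object_id}/addPassword"

# Graph also allows addressing an application by its appId explicitly:
url_by_app_id = f"{resource_url}/v1.0/applications(appId='{app_id}')/addPassword"

print(url_by_object_id)
print(url_by_app_id)
```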
|
<python><azure><rest><azure-active-directory>
|
2024-02-26 16:09:36
| 1
| 973
|
GeekzSG
|
78,062,295
| 20,122,390
|
How can I configure an enum list in FastAPI?
|
<p>I have a FastAPI endpoint that fetches all database records for an entity, applying a payload. One of the payload fields refers to the order when bringing the data (which can even be by more than one column). So I have this:</p>
<pre><code>class PayloadExecution(BaseExecution):
    init_date__lte: Optional[datetime] = Field(None, alias="init_date_below_equal")
    final_date__gte: Optional[datetime] = Field(None, alias="final_date_above_equal")
    custom_order: Optional[List[OrderOptions]]

    class Config:
        use_enum_values = True


class QueryPayloadExecution(PayloadExecution):
    @classmethod
    async def as_query(
        cls,
        init_date: Optional[datetime] = Query(None),
        final_date: Optional[datetime] = Query(None),
        tag: Optional[str] = Query(None),
        guid_execution: Optional[str] = Query(None),
        is_officialized: Optional[bool] = Query(None),
        worker_status: Optional[int] = Query(None),
        worker_info: Optional[str] = Query(None),
        official_name: Optional[str] = Query(None),
        status: Optional[int] = Query(None),
        last_real: Optional[datetime] = Query(None),
        subfamily_x_forecast_id: Optional[int] = Query(None),
        execution_type: Optional[str] = Query(None),
        init_date__lte: Optional[datetime] = Query(None),
        final_date__gte: Optional[datetime] = Query(None),
        custom_order: Optional[List[OrderOptions]] = Query(None),
    ):
        return cls(
            init_date=init_date,
            final_date=final_date,
            tag=tag,
            guid_execution=guid_execution,
            is_officialized=is_officialized,
            worker_status=worker_status,
            worker_info=worker_info,
            official_name=official_name,
            status=status,
            last_real=last_real,
            subfamily_x_forecast_id=subfamily_x_forecast_id,
            execution_type=execution_type,
            init_date__lte=init_date__lte,
            final_date__gte=final_date__gte,
            custom_order=custom_order,
        )


@router.get(
    "",
    response_class=JSONResponse,
    response_model=Optional[List[ExecutionInDB]],
    status_code=200,
    responses={
        200: {"description": "Execution found"},
        401: {"description": "unauthorized"},
    },
)
async def get_all(
    *,
    payload: QueryPayloadExecution = Depends(QueryPayloadExecution.as_query),
    skip: int = Query(0),
    limit: int = Query(99999)
):
    executions_responses = await execution_service.get_all_ordered(
        payload=payload.dict(exclude_none=True),
        skip=skip,
        limit=limit,
    )
    return executions_responses
</code></pre>
<p>As you can see, custom_order is a list of OrderOptions. So, this looks like this in my swagger:</p>
<p><a href="https://i.sstatic.net/JHNSu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JHNSu.png" alt="enter image description here" /></a></p>
<p>But I don't want it to be generated like this, since the order in which they are sent is important. I would like it to appear as when it is simply a string:</p>
<p><a href="https://i.sstatic.net/JVhcj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JVhcj.png" alt="enter image description here" /></a></p>
<p>(But I can select values from the enum). This way I can add them in the order I want. Is this possible? If it is, how is it done?</p>
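<p>One common workaround (a sketch under assumptions: <code>OrderOptions</code> here is a stand-in for the real enum) is to declare <code>custom_order</code> as a single <code>Optional[str]</code> query parameter, so Swagger renders a plain text box, and split the comma-separated value yourself, preserving exactly the order the caller typed:</p>

```python
from enum import Enum
from typing import List, Optional


class OrderOptions(str, Enum):  # hypothetical stand-in for the real enum
    init_date = "init_date"
    final_date = "final_date"
    status = "status"


def parse_custom_order(raw: Optional[str]) -> Optional[List[OrderOptions]]:
    """Turn 'status,init_date' into [OrderOptions.status, OrderOptions.init_date],
    keeping the caller's ordering; raises ValueError on unknown values."""
    if raw is None:
        return None
    return [OrderOptions(part.strip()) for part in raw.split(",") if part.strip()]


print(parse_custom_order("status, init_date"))
```

<p>In <code>as_query</code>, <code>custom_order</code> would then be <code>Optional[str] = Query(None)</code> and the parsed list passed on to <code>cls(...)</code>.</p>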
|
<python><fastapi>
|
2024-02-26 16:00:05
| 0
| 988
|
Diego L
|
78,062,249
| 5,975,991
|
Disable NonBrowserUserAgent in AWS CDK Waf
|
<p>We set up a WAF in AWS CDK with default rules, and it includes a rule that blocks any request with <code>SignalNonBrowserUserAgent</code>. It's tough to get around this when your clients are apps or postman or python requests.</p>
<p>I couldn't find a solution to this and spent a few days figuring it out, so I'm documenting the setup and solution for anyone else who has struggled with this. The WAF was instantiated with the following code:</p>
<pre><code> from aws_solutions_constructs.aws_wafwebacl_apigateway import WafwebaclToApiGateway
my_waf = WafwebaclToApiGateway(scope, waf_id, existing_api_gateway_interface=gateway)
</code></pre>
|
<python><amazon-web-services><aws-cdk><web-application-firewall>
|
2024-02-26 15:54:03
| 1
| 412
|
RooterTooter
|
78,062,233
| 4,858,640
|
Is pip.conf broken? extra-index-url ignored
|
<p>I'm running <code>pip 24.0</code> on Ubuntu and have the following in <code>~/.pip/pip.conf</code>:</p>
<pre><code>[user]
extra-index-url https://<MY_TOKEN_NAME>:<MY_TOKEN>@gitlab.com/api/v4/projects/<MY_PROJECT>/packages/pypi/simple
trusted-host = gitlab.com
</code></pre>
<p>No other config files exist since <code>pip config debug</code> gives me:</p>
<pre class="lang-none prettyprint-override"><code>global:
/etc/xdg/xdg-ubuntu/pip/pip.conf, exists: False
/etc/xdg/pip/pip.conf, exists: False
/etc/pip.conf, exists: False
site:
/usr/pip.conf, exists: False
user:
/home/timo/.pip/pip.conf, exists: True
user.extra-index-url https: //...
user.trusted-host: gitlab.com
/home/timo/.config/pip/pip.conf, exists: False
</code></pre>
<p>And <code>pip config list</code> correctly displays <code>user.extra-index-url</code> and <code>user.trusted-host</code>. But now when I run <code>pip install --user <MY PACKAGE> -vv</code> I get:</p>
<pre class="lang-none prettyprint-override"><code>1 location(s) to search for versions of <MY PACKAGE>:
* https://pypi.org/simple/<MY PACKAGE>/
</code></pre>
<p>And my package is not found. When I pass <code>--extra-index-url</code> to <code>pip</code> directly it works as expected. I'm almost certain that this used to work in the past. Am I crazy or is this fundamental feature broken?</p>
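<p>A hedged observation (an assumption, not verified against this exact setup): inside <code>pip.conf</code>, section names refer to pip commands, such as <code>[global]</code> or <code>[install]</code>; the "user" label in the <code>pip config debug</code> output names the <em>file scope</em>, not a section that commands read. Keys placed under a <code>[user]</code> section are parsed and listed, but <code>pip install</code> never consults them, which would explain why only <code>pypi.org</code> is searched. The conventional layout would be:</p>

```ini
[global]
extra-index-url = https://<MY_TOKEN_NAME>:<MY_TOKEN>@gitlab.com/api/v4/projects/<MY_PROJECT>/packages/pypi/simple
trusted-host = gitlab.com
```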
|
<python><pip>
|
2024-02-26 15:51:34
| 1
| 3,242
|
Peter
|
78,062,168
| 6,525,082
|
What would be the preferred approach to plotting data?
|
<p>Let us assume I have several datasets whose plots I want to compare. One approach would be:</p>
<pre><code>paths = ['list', 'of', 'folders']
colors = ['r', 'g', 'b']  # list of colors based on number of folders
for i, path in enumerate(paths):
    sn = create_object_holding_data(path)
    sn.load_data()
    sn.plot_currents()
    sn.plot_density()
    sn.plot_temperature()
</code></pre>
<p>The other approach would be:</p>
<pre><code>sn_list = []
for i, path in enumerate(paths):
    sn = create_object_holding_data(path)
    sn.load_data()
    sn_list.append(sn)

compare_currents(sn_list)
compare_temperatures(sn_list)
compare_densities(sn_list)
</code></pre>
<p>In the first case, each dataset is loaded and plotted sequentially. In the other, all data is loaded first and then compared at once. In the particular problem at hand, the amount of data is not particularly large. What is the preferred approach in cases like this? What kind of trouble should one look out for in each case? My goal is to avoid rewriting code while staying clear to myself about what I am doing.</p>
|
<python><python-3.x>
|
2024-02-26 15:42:02
| 0
| 1,436
|
wander95
|
78,062,132
| 7,657,180
|
Python function to download zip file
|
<p>I am trying to write a Python function to download the zip file <code>https://storage.googleapis.com/chrome-for-testing-public/122.0.6261.69/win32/chromedriver_win32.zip</code>.
I tried this, and the file is downloaded but damaged and incomplete:</p>
<pre><code>import os
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'}
url = 'https://storage.googleapis.com/chrome-for-testing-public/122.0.6261.69/win32/chromedriver_win32.zip'
request = requests.get(url, stream=True, headers=headers)
zip_filename = os.path.basename(url)
with open(zip_filename, 'wb') as zfile:
    zfile.write(request.content)
</code></pre>
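<p>A more defensive sketch, with an assumption about the cause: if the server answers with an error page rather than the archive (e.g. a 404, since the Chrome-for-Testing bucket names its archives with hyphens, <code>chromedriver-win32.zip</code>), the code above silently writes that page to disk and the result looks like a damaged zip. Checking the status and validating the archive surfaces the problem:</p>

```python
import zipfile

import requests


def download_zip(url: str, dest: str) -> str:
    """Download url to dest in chunks, failing loudly on HTTP errors and
    verifying that the result really is a zip archive."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # surfaces a 404 instead of saving the error page
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=8192):
                fh.write(chunk)
    if not zipfile.is_zipfile(dest):
        raise ValueError(f"{dest} is not a valid zip archive")
    return dest
```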
|
<python><python-requests><zip>
|
2024-02-26 15:38:29
| 1
| 9,608
|
YasserKhalil
|
78,062,117
| 11,061,371
|
Configuration Error when formatting Python with Black in VSCode
|
<p>I can't setup black to format when saving in VSCode.</p>
<ul>
<li>I went through the steps in the black <a href="https://black.readthedocs.io/en/stable/integrations/editors.html#visual-studio-code" rel="nofollow noreferrer">docs</a></li>
<li>Installed the VSCode plugins: <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.black-formatter" rel="nofollow noreferrer">black formatter</a> and <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python" rel="nofollow noreferrer">python</a></li>
<li>Setup personal configuration (I also tried this same configuration in the project scope <code>.vscode/settings.json</code>):</li>
</ul>
<pre class="lang-json prettyprint-override"><code> "[python]": {
"editor.defaultFormatter": "ms-python.black-formatter",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "file",
"editor.tabSize": 4,
"editor.codeActionsOnSave": {
"source.organizeImports": "explicit"
}
},
</code></pre>
<p>I am using <a href="https://pipenv.pypa.io/en/latest/" rel="nofollow noreferrer">pipenv</a> to setup my environment <code>pipenv shell</code>, <code>pipenv install --dev black</code>.
I also setup my <code>Python: Select Interpreter</code> and pointed to my environment.</p>
<p>This is my Pipfile:</p>
<pre class="lang-ini prettyprint-override"><code>[dev-packages]
black = ">=24.2.0"
faker = ">=23.0.0"
pylint = "*"
[requires]
python_version = "3.11"
</code></pre>
<p>This is my <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.black]
color = true
diff = true
verbose = true
line-length = 88
target-version = ['py311']
include = '.lib/etl/.*/*.py'
required-version = "24.2.0"
</code></pre>
<p>I tried both with an environment (Pipenv) <code>black = ">=24.2.0"</code> and installing in my system <code>brew install black-sat/black/black-sat</code> and get the same results.</p>
<p>Versions:</p>
<ul>
<li>VSCode: 1.86.2</li>
<li>System: MacOS Apple M1 Pro</li>
<li>black: <code>24.2.0</code></li>
</ul>
<p><strong>Problem</strong></p>
<p>When I select my formatter and format the code I get the following result:</p>
<p>Before:</p>
<pre class="lang-py prettyprint-override"><code>def handler(a: int, b: int) -> int:
    return add(a, b)


def add(a, b):
    """Add two numbers"""
    return a + b
</code></pre>
<p>After:</p>
<pre class="lang-py prettyprint-override"><code>[1m--- STDIN 2024-02-27 09:36:46.352177+00:00[0m
[1m+++ STDOUT 2024-02-27 09:36:46.355218+00:00[0m
[36m@@ -1,9 +1,7 @@[0m
def handler(a: int, b: int) -> int:
return add(a, b)
[31m-[0m
[31m-[0m
def add(a, b):
"""Add two numbers"""
return a + b
</code></pre>
<p><strong>Running from CLI</strong></p>
<p>If running <code>pipenv run black .</code> it correctly detects my <code>pyproject.toml</code> with the <code>black</code> args</p>
<pre class="lang-bash prettyprint-override"><code>❯ pipenv run black .
Identified `/<project-path>/<project>` as project root containing a .git directory.
Using configuration from project root.
color: True
diff: True
verbose: True
line_length: 88
target_version: ['py311']
include: .lib/etl/.*/*.py
required_version: 24.2.0
...
@@ -11,14 +11,7 @@
return add(a, b)
def add(a, b):
-
-
-
-
-
-
-
"""Add two numbers"""
return a + b
would reformat /<path-to-file>/index.py
</code></pre>
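<p>A note on what the output suggests: the <code>[1m</code>/<code>[0m</code> fragments are ANSI escape codes. With <code>diff = true</code> and <code>color = true</code> in <code>[tool.black]</code>, black emits a colored diff on stdout instead of the formatted source, and the editor integration, which formats via stdin/stdout, writes that diff into the buffer. A likely fix (an assumption based on the symptoms, not a verified repro) is to keep only formatting options in <code>pyproject.toml</code> and pass <code>--diff</code>/<code>--color</code> on the CLI when wanted:</p>

```toml
[tool.black]
line-length = 88
target-version = ['py311']
include = '.lib/etl/.*/*.py'
required-version = "24.2.0"
```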
|
<python><visual-studio-code><pipenv><python-black>
|
2024-02-26 15:36:04
| 1
| 337
|
0xBradock
|
78,062,070
| 8,543,025
|
plotly: add rectangle with varying fill color
|
<p>I have an eye tracking dataset consisting of four columns:</p>
<ul>
<li><code>t</code> is a numpy array of timestamps</li>
<li><code>x</code> and <code>y</code> are the pixel-coordinates of the gaze</li>
<li><code>e</code> is an array of values <code>{0, 1, 2, 3, 4, 5}</code> marking each sample as a different gaze-event (fixation, saccade, etc.)</li>
</ul>
<p>I want to plot the <code>x</code> and <code>y</code> coordinates over time, and add a rectangle on/under the figure with changing colors depending on the value of <code>e</code>.</p>
<p>Some example data:</p>
<pre><code>t = np.arange(30)
x = np.array([125.9529, 124.6142, 125.0569, 125.3117, 126.7498, 127.035,125.4822, 125.6249, 126.9371, 127.6047, 129.031 , 128.2419, 121.521 , 114.7071, 109.4141, 100.5057, 94.9606, 95.2231, 95.9032, 96.4991, 101.2602, 103.9582, 108.2527, 108.8801, 110.3254, 112.8205, 113.0079, 113.3547, 113.0962, 113.2508])
y = np.array([31.218 , 31.236 , 31.147 , 31.2614, 30.806 , 30.8423, 31.727, 32.2256, 32.0504, 32.7774, 34.7089, 37.0671, 46.309 , 55.9716, 62.4481, 68.0248, 75.4912, 79.0622, 81.2176, 83.191 , 83.7656, 84.6713, 83.9343, 82.4546, 81.1652, 80.7981, 80.2136, 80.7405, 80.4398, 80.0738])
e = np.array([1., 1., 1., 1., 1., 1., 1., 1., 1., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 3., 3., 3., 3., 3., 3., 4., 4., 4., 4.])
</code></pre>
<p>And my attempt at coding this:</p>
<pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots

fig = make_subplots()
fig.add_trace(go.Scatter(x=t, y=x, name="X", mode="lines"),
              secondary_y=False, row=1, col=1)
fig.add_trace(go.Scatter(x=t, y=y, name="Y", mode="lines"),
              secondary_y=False, row=1, col=1)
fig.add_shape(type="rect",
              x0=t[0], y0=0, x1=t[-1], y1=0.05 * np.max([x, y]),
              line=dict(color="black", width=2),
              fillcolor=e)
</code></pre>
<p>This raises a <code>Value Error: Invalid value of type 'numpy.ndarray' received for the 'fillcolor' property of layout.shape</code></p>
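<p>Since <code>fillcolor</code> accepts only a single color per shape, one workaround is to add one rectangle per contiguous run of <code>e</code>-values. A sketch of the segmentation (the color palette is a made-up assumption; the commented <code>add_shape</code> call shows how each segment would then be drawn):</p>

```python
import numpy as np

t = np.arange(30)
e = np.array([1.0] * 9 + [2.0] * 11 + [3.0] * 6 + [4.0] * 4)

# One rectangle per run of equal e-values, each with its own fill color.
color_map = {1.0: "green", 2.0: "red", 3.0: "blue", 4.0: "orange"}  # hypothetical palette

breaks = np.flatnonzero(np.diff(e)) + 1  # positions where e changes value
starts = np.r_[0, breaks]
ends = np.r_[breaks, len(e)]             # exclusive end of each run

segments = [(int(t[s]), int(t[end - 1]), float(e[s])) for s, end in zip(starts, ends)]
print(segments)
# each (x0, x1, value) would become one shape, e.g.:
# fig.add_shape(type="rect", x0=x0, x1=x1, y0=0, y1=0.05 * np.max([x, y]),
#               fillcolor=color_map[value], line=dict(width=0))
```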
|
<python><python-3.x><plotly><eye-tracking>
|
2024-02-26 15:29:31
| 2
| 593
|
Jon Nir
|
78,062,016
| 11,170,350
|
Unable to split text received as POST request in django
|
<p>I am facing a weird issue. I am trying to split text sent via a form in Django.
Here is my code in Django:</p>
<pre><code>from transformers import AutoTokenizer
from langchain.text_splitter import CharacterTextSplitter

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=256, chunk_overlap=0
)

def home(request):
    if request.method == 'POST':
        input_text = request.POST['input']  # Get input text from the POST request
        print("input_text", type(input_text))
        splitted_text = text_splitter.split_text(str(input_text))
        print("splitted_text_length_outside", len(splitted_text))
</code></pre>
<p>The length is always 1, which means the text is not split. I have checked that I am receiving the text from the HTML form and that its type is str.</p>
<p>But when I use the same code outside of Django, such as in a Jupyter notebook, it works well and splits the text.</p>
<p>In Django I tried <code>input_text.split()</code> and that worked. But I am clueless as to why the langchain text splitter is not working.</p>
<pre><code>def home(input_text):
    splitted_text = text_splitter.split_text(input_text)
    print("splitted_text_length_outside", len(splitted_text))
</code></pre>
<p>Here is my html form</p>
<pre><code><form action="{% url 'home' %}" method="post" id="myForm">
{% csrf_token %}
<textarea name="input" id="input" rows="4" cols="50"></textarea>
<br>
<button type="submit">Submit</button>
</form>
</code></pre>
<p>Here is my ajax code</p>
<pre><code>$("#myForm").submit(function(event) {
event.preventDefault();
let formData = $(this).serialize();
$.ajax({
"url": "/",
"type": "POST",
"data": formData,
"success": function(response) {
console.log("success");
},
"error": function(error) {
console.log("error",error);
}
});
});
</code></pre>
<p><strong>EDIT</strong>
If I save and load the text this way, then the splitting works:</p>
<pre><code>with open('input_text.txt', 'w') as f:
    f.write(input_text)

with open('input_text.txt', 'r') as f:
    input_text = f.read()
</code></pre>
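<p>A likely explanation, consistent with the file round-trip fixing it (hedged, not verified against this exact setup): browsers submit textarea content with CRLF (<code>\r\n</code>) line endings, so the splitter's default <code>"\n\n"</code> separator never matches, while writing and re-reading the file in text mode normalizes the newlines. Normalizing explicitly avoids the temporary file:</p>

```python
def normalize_newlines(text: str) -> str:
    """Convert CRLF/CR line endings (as sent in HTML form posts) to plain LF."""
    return text.replace("\r\n", "\n").replace("\r", "\n")


raw = "para one\r\n\r\npara two"  # what request.POST['input'] may actually contain
clean = normalize_newlines(raw)
print(clean.split("\n\n"))  # the paragraph separator can match again
```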
|
<python><django>
|
2024-02-26 15:22:08
| 0
| 2,979
|
Talha Anwar
|
78,061,902
| 984,621
|
How to access particular stats ("finish_reason", "elapsed_time_seconds") in Scrapy?
|
<p>I am playing with Scrapy and I am able to scrape with it data from different URLs. One thing I am trying to obtain, and so far unsuccessfully, is stats data, particularly <code>finish_reason</code> and <code>elapsed_time_seconds</code>.</p>
<p>My <code>testspider.py</code>'s structure looks pretty standard:</p>
<pre><code>class TestSpider(scrapy.Spider):
    name = 'testspider'

    def parse(self, response):
        ...
        reason = self.crawler.stats.get_value('finish_reason')
        print(reason)
        print(self.crawler.stats.get_value('elapsed_time_seconds'))
        print(self.crawler.stats.get_value('log_count/DEBUG'))
</code></pre>
<p>and on the output is the following</p>
<pre><code>...
None # finish_reason
None # elapsed_time_seconds
75 # log_count/DEBUG
2024-02-26 16:03:03 [scrapy.core.engine] INFO: Closing spider (finished)
2024-02-26 16:03:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 3039,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 126947,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'elapsed_time_seconds': 0.964805,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 2, 26, 15, 3, 3, 840387, tzinfo=datetime.timezone.utc),
'httpcompression/response_bytes': 639395,
'httpcompression/response_count': 4,
'item_scraped_count': 68,
'log_count/DEBUG': 75,
'log_count/INFO': 10,
'memusage/max': 63651840,
'memusage/startup': 63635456,
'request_depth_max': 2,
'response_received_count': 4,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2024, 2, 26, 15, 3, 2, 875582, tzinfo=datetime.timezone.utc)}
2024-02-26 16:03:03 [scrapy.core.engine] INFO: Spider closed (finished)
</code></pre>
<p>I cannot access some fields from the <code>def parse</code> method, perhaps because the script is still running? I would like to have direct access to the data in <code>Scrapy stats</code> and save it "manually" to the database (I am doing that in <code>pipeline.py</code>). How do I achieve that?</p>
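<p>A hedged sketch of the timing issue and one way around it: <code>finish_reason</code> and <code>elapsed_time_seconds</code> are only written when the spider closes, after <code>parse</code> has returned, which is why they read as <code>None</code> there. A pipeline's <code>close_spider</code> (or the spider's own <code>closed(reason)</code> method) runs late enough; <code>save_to_db</code> below is a hypothetical placeholder for the database write.</p>

```python
class StatsPipeline:
    """Read final crawl stats once the spider closes, when finish_reason
    and elapsed_time_seconds have been set by the stats collector."""

    def close_spider(self, spider):
        stats = spider.crawler.stats.get_stats()
        self.save_to_db(
            finish_reason=stats.get("finish_reason"),
            elapsed=stats.get("elapsed_time_seconds"),
        )

    def save_to_db(self, **kwargs):
        # hypothetical persistence hook -- replace with a real database write
        self.saved = kwargs


# Minimal stand-ins to demonstrate the flow without running Scrapy:
class _Stats:
    def get_stats(self):
        return {"finish_reason": "finished", "elapsed_time_seconds": 0.96}


class _Crawler:
    stats = _Stats()


class _Spider:
    crawler = _Crawler()


pipeline = StatsPipeline()
pipeline.close_spider(_Spider())
print(pipeline.saved)
```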
|
<python><scrapy>
|
2024-02-26 15:06:24
| 1
| 48,763
|
user984621
|
78,061,810
| 429,596
|
Pygame music play start position seems to have a 1 second resolution
|
<p>I'm using <code>pygame.mixer.music.play</code> to load and play a wav file as follows:</p>
<pre><code>from pygame import mixer
mixer.init()
wavfile = "/my/wav/file.wav"
mixer.music.load(wavfile)
start = 0.5
mixer.music.play(start=start)
</code></pre>
<p>However, <code>start</code> seems to have a 1 second resolution. In other words, any float acts as if it was truncated to its whole number part. This is unexpected because the <a href="https://www.pygame.org/docs/ref/music.html#pygame.mixer.music.play" rel="nofollow noreferrer">documentation of pygame</a> shows that the start position is a float given in seconds. In fact, investigating the source, you can see that <code>mixer.music.play</code> is just a thin wrapper around the SDL library's <code>Mix_FadeInMusicPos</code> (<a href="https://wiki.libsdl.org/SDL2_mixer/Mix_FadeInMusicPos" rel="nofollow noreferrer">documented here</a>). Those docs also show a float second start time.</p>
<p>Is this a limitation of the wav format, the library, or am I specifying something incorrectly here?</p>
|
<python><pygame><pygame-mixer>
|
2024-02-26 14:53:02
| 0
| 1,268
|
Fadecomic
|
78,061,746
| 15,980,284
|
Recursion error when combining abstract factory pattern with delegator pattern
|
<p>I am learning about design patterns in Python and wanted to combine the abstract factory with the delegation pattern (to gain deeper insights into how the pattern works). However, I am getting a weird recursion error when combining the two patterns, which I do not understand.</p>
<p>The error is:</p>
<pre><code> [Previous line repeated 987 more times]
File "c:\Users\jenny\Documents\design_pattern\creational\abstract_factory.py", line 60, in __getattribute__
def __getattribute__(self, name: str):
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>It is raised when <code>client_with_laptop.display()</code> is called. However, the recursion error is already stored in <code>client_with_laptop._hardware</code> during <code>__init__</code>, although <code>factory.get_hardware()</code> returns a laptop instance.</p>
<p>The code is:</p>
<pre><code>from abc import abstractmethod


class ITechnique:
    # abstract product
    @abstractmethod
    def display(self):
        pass

    def turn_on(self):
        print("I am on!")

    def turn_off(self):
        print("I am off!")


class Laptop(ITechnique):
    # concrete product
    def display(self):
        print("I'am a Laptop")


class Smartphone(ITechnique):
    # concrete product
    def display(self):
        print("I'am a Smartphone")


class Tablet(ITechnique):
    # concrete product
    def display(self):
        print("I'm a tablet!")


class IFactory:
    @abstractmethod
    def get_hardware():
        pass


class SmartphoneFactory(IFactory):
    def get_hardware(self):
        return Smartphone()


class LaptopFactory(IFactory):
    def get_hardware(self):
        return Laptop()


class TabletFactory(IFactory):
    def get_hardware(self):
        return Tablet()


class Client():
    def __init__(self, factory: IFactory) -> None:
        self._hardware = factory.get_hardware()

    def __getattribute__(self, name: str):
        return getattr(self._hardware, name)


if __name__ == "__main__":
    client_with_laptop = Client(LaptopFactory())
    client_with_laptop.display()

    client_with_tablet = Client(TabletFactory())
    client_with_tablet.display()

    client_with_smartphone = Client(SmartphoneFactory())
    client_with_smartphone.display()
</code></pre>
<p>When I access the attribute <code>_hardware</code> directly and remove the <code>__getattribute__</code> section (so, basically, when I remove the delegation pattern), everything works as expected. See below the modified code section, which works:</p>
<pre><code>class Client():
    def __init__(self, factory: IFactory) -> None:
        self._hardware = factory.get_hardware()

if __name__ == "__main__":
    client_with_laptop = Client(LaptopFactory())
    client_with_laptop._hardware.display()

    client_with_tablet = Client(TabletFactory())
    client_with_tablet._hardware.display()

    client_with_smartphone = Client(SmartphoneFactory())
    client_with_smartphone._hardware.display()
</code></pre>
<p>Can anybody explain why the recursion error occurs and how to fix it? My objective was (1) to have varying devices depending on the factory used in the client and (2) to be able to call the methods of <code>_hardware</code> directly from the client object, e.g. <code>client.display()</code>, without typing <code>client._hardware</code> all the time. Whether this is a useful approach in practice is beside the point; I simply want to understand the pattern - and the occurring error - better. :-)</p>
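For reference, a minimal runnable sketch of the mechanism (the class and string below are hypothetical stand-ins, not the asker's exact code): `__getattribute__` intercepts every attribute access, including the `self._hardware` lookup inside its own body, so each call re-enters itself. `__getattr__`, by contrast, is only invoked when normal lookup fails, so delegation can be written without recursion:

```python
class Hardware:
    """Hypothetical stand-in for Laptop/Tablet/Smartphone."""
    def display(self):
        return "I'm a Laptop"

class Client:
    def __init__(self, hardware):
        # Plain attribute assignment; found by normal lookup later.
        self._hardware = hardware

    def __getattr__(self, name):
        # __getattr__ runs only when normal lookup FAILS, so the
        # self._hardware access here resolves normally and cannot recurse.
        return getattr(self._hardware, name)

client = Client(Hardware())
print(client.display())  # delegates to Hardware.display
```

With this variant, `client.display()` delegates to the wrapped object while `client._hardware` still resolves through normal attribute lookup.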
|
<python><design-patterns>
|
2024-02-26 14:41:53
| 2
| 1,303
|
JKupzig
|
78,061,465
| 5,330,527
|
Save the current user when saving a model in Django admin backend
|
<p>I'd like to store the user that saved a model for the first time in one of the fields of that model. This is what I have.</p>
<p><code>models.py</code>:</p>
<pre><code>from django.conf import settings

class Project(models.Model):
    [...]
    added_by = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True, on_delete=models.PROTECT)

    def save_model(self, request, obj, form, change):
        if not obj.pk:
            obj.added_by = request.user
        super().save_model(request, obj, form, change)
</code></pre>
<p><code>settings.py</code>:</p>
<pre><code>AUTH_USER_MODEL = 'auth.User'
</code></pre>
<p>The <code>request.user</code> appears to be always empty (I'm logged in as root to the <code>/admin</code>). What am I missing?</p>
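For what it's worth, `save_model()` is a hook on `admin.ModelAdmin`, not on `models.Model`, so the admin never calls a version defined on the model itself. A sketch of the intended placement, using stand-in classes (`FakeModelAdmin`, `Obj`, `Req` are hypothetical stubs so the idea runs without Django installed):

```python
class FakeModelAdmin:
    """Stand-in for django.contrib.admin.ModelAdmin."""
    def save_model(self, request, obj, form, change):
        pass  # Django's real implementation calls obj.save() here

class ProjectAdmin(FakeModelAdmin):
    # In Django, this override belongs on the ModelAdmin registered
    # for the model, e.g. admin.site.register(Project, ProjectAdmin).
    def save_model(self, request, obj, form, change):
        if not obj.pk:  # first save: no primary key assigned yet
            obj.added_by = request.user
        super().save_model(request, obj, form, change)

class Obj:   # stand-in model instance
    pk = None
    added_by = None

class Req:   # stand-in request carrying the logged-in user
    user = "root"

obj = Obj()
ProjectAdmin().save_model(Req(), obj, form=None, change=False)
print(obj.added_by)  # -> root
```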
|
<python><django><django-admin><django-authentication>
|
2024-02-26 13:57:30
| 1
| 786
|
HBMCS
|
78,061,369
| 4,389,785
|
Can not see resource-id attribute in appium with jetpack compose
|
<p>My Android application is built using <code>jetpack compose</code>.
Some elements have <code>resource-id</code> attributes applied.
But when I analyse the DOM model on a local physical device via Appium Inspector, I cannot see those attributes.
Yet when I use BrowserStack, I can see the <code>resource-id</code> attributes using Appium Inspector connected remotely to BrowserStack.
What desired capabilities or other settings should I apply to make the attributes visible on a local device?</p>
|
<python><selenium-webdriver><appium><appium-android>
|
2024-02-26 13:40:06
| 1
| 321
|
Den Silver
|
78,061,319
| 3,647,970
|
How to make sense of the output of the reward model, how do we know what string it is preferring?
|
<p>In the process of doing RLHF I made a reward model using a dataset of <code>chosen</code> and <code>rejected</code> string pairs. It is very similar to the example that's there in the official TRL library - <a href="https://huggingface.co/docs/trl/main/en/reward_trainer" rel="nofollow noreferrer">Reward Modeling</a></p>
<p>I used LLaMA 2 7b model (tried both the chat and non-chat versions - the behavior is the same).
Now what I would like to do is to actually pass an input and see the output of the Reward model. However I can’t seem to make any sense of what the reward model outputs.</p>
<p>For example: I tried to make the input as follows -</p>
<pre><code>chosen = "This is the chosen text."
rejected = "This is the rejected text."
test = {"chosen": chosen, "rejected": rejected}
</code></pre>
<p>Then I try -</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoModelForCausalLM

base_model_id = "./llama2models/Llama-2-7b-chat-hf"
model_id = "./reward_models/Llama-2-7b-chat-hf_rm_inference/checkpoint-500"

model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    # num_labels=1, #gives an error since the model always outputs a tensor of [2, 4096]
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

rewards_chosen = model(
    **tokenizer(chosen, return_tensors='pt')
).logits
print('reward chosen is ', rewards_chosen)

rewards_rejected = model(
    **tokenizer(rejected, return_tensors='pt')
).logits
print('reward rejected is ', rewards_rejected)

loss = -nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()
print(loss)
</code></pre>
<p>And the output looks something like this -</p>
<pre><code>reward chosen is tensor([[ 2.1758, -8.8359]], dtype=torch.float16)
reward rejected is tensor([[ 1.0908, -2.2168]], dtype=torch.float16)
tensor(0.0044)
</code></pre>
<p>Printing loss wasn’t helpful. I mean I do not see any trend (for example positive loss turning negative) even if I switch <code>rewards_chosen</code> and <code>rewards_rejected</code> in the formula.</p>
<p>Also the outputs did not yield any insights. I do not understand how to make sense of <code>rewards_chosen</code> and <code>rewards_rejected</code>. Why are they a tensor with two elements instead of one?</p>
<p>I tried <code>rewards_chosen > rewards_rejected</code> but that is also not helpful since it outputs <code>tensor([[ True, False]])</code></p>
<p>When I try some public reward model (its just a few megabytes since its just the adapter - <a href="https://huggingface.co/vincentmin/llama-2-13b-reward-oasst1" rel="nofollow noreferrer">https://huggingface.co/vincentmin/llama-2-13b-reward-oasst1</a>) then I get outputs that make more sense since its outputs a single element tensor -</p>
<p><strong>Code</strong> -</p>
<pre><code>import torch
import torch.nn as nn
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoModelForCausalLM

peft_model_id = "./llama-2-13b-reward-oasst1"
base_model_id = "/cluster/work/lawecon/Work/raj/llama2models/13b-chat-hf"

config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model_id,
    num_labels=1,
    # torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

chosen = "prompter: What is your purpose? assistant: My purpose is to assist you."
rejected = "prompter: What is your purpose? assistant: I do not understand you."
test = {"chosen": chosen, "rejected": rejected}

model.eval()
with torch.no_grad():
    rewards_chosen = model(
        **tokenizer(chosen, return_tensors='pt')
    ).logits
    print('reward chosen is ', rewards_chosen)

    rewards_rejected = model(
        **tokenizer(rejected, return_tensors='pt')
    ).logits
    print('reward rejected is ', rewards_rejected)

    loss = -nn.functional.logsigmoid(rewards_chosen - rewards_rejected).mean()
    print(loss)
</code></pre>
<p><strong>Output</strong> -</p>
<pre><code>reward chosen is tensor([[0.6876]])
reward rejected is tensor([[-0.9243]])
tensor(0.1819)
</code></pre>
<p>This output makes more sense to me. But why do I get outputs with two values from my own reward model?</p>
|
<python><huggingface-transformers><llama><reward>
|
2024-02-26 13:32:00
| 1
| 2,938
|
jar
|
78,061,285
| 3,906,786
|
Sorting nested dictionaries at two levels from TOML configuration
|
<p>I have the idea to create a TOML configuration file which defines some sort of job chain to be executed. Those jobs are not executed immediately; there may be a break in between, so there should also be some grouping, giving a chance to do something between groups of the chain.</p>
<p>An example TOML file looks like this:</p>
<pre class="lang-ini prettyprint-override"><code>[jobid_3.2]
name='script_3.2'
type='sql'
[jobid_10.1]
name='whatever_works'
type='shell'
[jobid_1.1]
name='script_1.1'
type='shell'
[jobid_3.1]
name='foobar'
type='shell'
[jobid_2.1]
name='barbaz'
type='shell'
[jobid_2.2]
name='script_2.2'
type='sql'
</code></pre>
<p>So, the order is a bit messy; therefore, after reading, the configuration should be brought into some ordered state.</p>
<p>Problem 1: when reading the TOML string into a variable, you end up with nested dictionaries, in this case 3 (jobid_xxx, y, and then a dict with name and type)<br />
Problem 2: TOML treats the numeric key part as a string, not as an integer, when it is used in a table header. That means we have to do some conversion in the first nested dictionary.<br />
Problem 3: sorting on the first level should happen on the digits, not on the string itself. Of course, I could use <code>[3.2]</code> instead of <code>[jobid_3.2]</code>, but the latter looks a bit nicer and more descriptive.</p>
<p>So, how to solve this?</p>
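One possible approach for the sorting part (a sketch, assuming the TOML has already been loaded into nested dicts as described, with `[jobid_3.2]` parsed as `{'jobid_3': {'2': {...}}}`): sort the outer keys on the integer after `jobid_` and the inner keys on their integer value:

```python
# Parsed form of the TOML above (dict literal used here so the sketch
# is self-contained; tomllib.loads would produce the same shape).
jobs = {
    "jobid_3": {"2": {"name": "script_3.2"}, "1": {"name": "foobar"}},
    "jobid_10": {"1": {"name": "whatever_works"}},
    "jobid_1": {"1": {"name": "script_1.1"}},
    "jobid_2": {"1": {"name": "barbaz"}, "2": {"name": "script_2.2"}},
}

ordered = {
    outer: dict(sorted(jobs[outer].items(), key=lambda kv: int(kv[0])))
    # numeric sort on the suffix, so jobid_10 comes after jobid_3
    for outer in sorted(jobs, key=lambda k: int(k.split("_")[1]))
}

print(list(ordered))  # -> ['jobid_1', 'jobid_2', 'jobid_3', 'jobid_10']
```

Since Python 3.7, plain dicts preserve insertion order, so rebuilding the dict in sorted order is enough; no `OrderedDict` needed.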
|
<python><python-3.x><dictionary>
|
2024-02-26 13:27:48
| 1
| 983
|
brillenheini
|
78,061,250
| 6,293,886
|
find the memory usage of lru_cache decorator
|
<p>I want to track the memory usage of an <em>lru_cache</em>-decorated class method.<br />
I know how to extract the number of method calls:</p>
<pre><code>from functools import lru_cache

class Exponent:
    def __init__(self, base):
        self.base = base

    @lru_cache(maxsize=None)
    def __call__(self, exponent):
        return self.base ** exponent

exponent_base_two = Exponent(2)

exponent_base_two(2);
exponent_base_two(3);

exponent_base_two.__call__.cache_info()
</code></pre>
<p>output:</p>
<pre><code>CacheInfo(hits=2, misses=2, maxsize=None, currsize=2)
</code></pre>
<p>However, I'm looking for a method to extract <em>cache</em> size in bytes (similar to <code>sys.getsizeof</code> output)</p>
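As far as I know, CPython's `lru_cache` keeps its store in a C-level structure with no public handle to the cached entries, so exact byte counts are not available from the wrapper itself. One workaround sketch is a home-grown unbounded cache whose entries stay inspectable (this gives only a shallow `sys.getsizeof` estimate; objects referenced by keys and values are not counted):

```python
import sys
from functools import wraps

def sized_cache(func):
    """Sketch of an unbounded memoizing decorator whose store is a
    plain dict, so entry sizes can be summed with sys.getsizeof."""
    cache = {}

    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    wrapper.cache = cache
    # Shallow estimate: dict overhead plus each key/value object.
    wrapper.cache_bytes = lambda: sys.getsizeof(cache) + sum(
        sys.getsizeof(k) + sys.getsizeof(v) for k, v in cache.items()
    )
    return wrapper

@sized_cache
def square(n):
    return n * n

square(2); square(3)
print(square.cache_bytes())  # shallow size estimate in bytes
```

For a deep measurement one would have to walk the cached objects recursively (e.g. with something like `pympler.asizeof`), since `sys.getsizeof` does not follow references.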
|
<python><caching><python-lru-cache>
|
2024-02-26 13:24:13
| 2
| 1,386
|
itamar kanter
|
78,061,092
| 8,365,731
|
Python curses Textbox.gather() removes empty lines, is there a way to preserve them?
|
<p>In below code box.gather() removes empty lines from text. Is there a way to collect the text as is including empty lines?</p>
<pre><code>from curses import wrapper
from curses.textpad import Textbox

def main(stdscr):
    stdscr.addstr(0, 0, "Enter text separated by empty lines: (hit Ctrl-G to send)")
    box = Textbox(stdscr)
    box.edit()
    return box.gather()

if __name__ == '__main__':
    s = wrapper(main)
    print(s)
</code></pre>
|
<python><python-curses>
|
2024-02-26 12:57:40
| 1
| 563
|
Jacek Błocki
|
78,061,077
| 3,602,296
|
code completion based on a hierarchy of properties
|
<p>I want to create a class that has the following structure:</p>
<pre><code>ClassName
    UpperLevel
        Element_A:
            val_1 = 'example'
        Element_B: Tuple(str, str)
        Element_C: Tuple(str, str)
    LowerLevel
        Element_D: Tuple(str, str)
        Element_E: Tuple(str, str)
</code></pre>
<p>so that I can type: <code>ClassName.UpperLevel.</code> and the IDE will recommend me <code>Element_A</code>, <code>Element_B</code>, <code>Element_C</code>.</p>
<p>I have the following:</p>
<pre class="lang-py prettyprint-override"><code>class Element:
def __init__(self, val_1, val_2):
self.val_1: str = val_1
self.val_2: str = val_2
class Example:
def __init__(self):
self.upper_level.Element_A = Element(r'example1', r'example2')
self.upper_level.Element_B = Element(r'example1', r'example2')
self.upper_level.Element_C = Element(r'example1', r'example2')
self.lower_level.Element_D = Element(r'example1', r'example2')
self.lower_level.Element_F = Element(r'example1', r'example2')
</code></pre>
<p>Note that the whole hierarchy is already defined, meaning that I know all of the "levels" and "elements" beforehand; I do not need to create them dynamically.</p>
<p>Why does this not work?<br />
Do I need to have another class for <code>Level</code>?</p>
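One layout that static analysers can resolve (a sketch, not the only option): since the hierarchy is fully known beforehand, nested classes with class-level attributes give the IDE a concrete type to complete on after `Example.UpperLevel.`:

```python
class Element:
    def __init__(self, val_1: str, val_2: str):
        self.val_1 = val_1
        self.val_2 = val_2

class Example:
    # Nested classes act as namespaces; their class-level attributes
    # are visible to static analysis, so the IDE can offer
    # Element_A/Element_B/Element_C after typing Example.UpperLevel.
    class UpperLevel:
        Element_A = Element("example1", "example2")
        Element_B = Element("example1", "example2")
        Element_C = Element("example1", "example2")

    class LowerLevel:
        Element_D = Element("example1", "example2")
        Element_F = Element("example1", "example2")

print(Example.UpperLevel.Element_A.val_1)  # -> example1
```

The original version fails at runtime because `self.upper_level` is never created before attributes are assigned onto it, and even if it were a plain object, the IDE would have no declared type to complete against.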
|
<python><class><code-completion>
|
2024-02-26 12:55:53
| 1
| 314
|
penfold1992
|
78,061,044
| 8,110,961
|
safely forking a multithreaded process is problematic
|
<p>source:<a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">https://docs.python.org/3/library/multiprocessing.html</a></p>
<p>I am reading python documentation for multiprocessing and noticed sentence</p>
<blockquote>
<p>Note that safely forking a multithreaded process is problematic.</p>
</blockquote>
<p>I'm wondering whether I'm misreading this, or, if not, in what way it would be problematic?</p>
|
<python><multiprocessing><documentation>
|
2024-02-26 12:51:18
| 0
| 385
|
Jack
|
78,060,857
| 9,135,359
|
Kaggle's packages missing many essential methods, for instance Kaggle's `Dataset` class has no `from_generator()` method
|
<p>I have been working on a particular NLP project for a month and have been running into error after error. I built a small model on my potato PC and it works perfectly. I upscaled it to Kaggle and ran into multiple errors, which have frustrated the hell out of me! I've been exploring all individual packages, and lo and behold, many methods are missing from Kaggle packages!</p>
<p>A perfect example is the <code>Dataset</code> class from the <code>datasets</code> package: the one on my PC and the one on Kaggle are both version 2.17.1.</p>
<p>But the class in Kaggle is missing so many essential methods, such as <code>from_generator()</code>! You can see for yourself, just install the <code>datasets</code> package, then do the following on your local machine and on Kaggle and note the differences:</p>
<pre><code>from datasets import Dataset
dir(Dataset)
</code></pre>
<p>This is what led to most of my errors. How and why is this happening? Is there a way to <em>enable</em> all the essential methods on Kaggle, like <code>from_generator()</code>?</p>
|
<python><nlp><kaggle>
|
2024-02-26 12:18:16
| 1
| 844
|
Code Monkey
|
78,060,747
| 4,934,061
|
How to create a modern Windows "Open Folder" dialog with Python
|
<p>For several hours now, I have been trying to open the modern folder selection dialog using python.</p>
<p>My application is currently using pywebview's <a href="https://pywebview.flowrl.com/examples/open_file_dialog.html" rel="nofollow noreferrer">create_file_dialog</a>, which works as intended but is really not very convenient for the user. See the picture below.</p>
<p><a href="https://i.sstatic.net/c8Rxx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c8Rxx.png" alt="enter image description here" /></a></p>
<p>What I would like, is this one:</p>
<p><a href="https://i.sstatic.net/SVAeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SVAeK.png" alt="enter image description here" /></a></p>
<p>Besides looking nicer and having a much more intuitive design, it also uses the last selected path as the initial selection.</p>
<p>I have tried to use pywin32, but I have not found a way to create this exact dialog (only different forms of the default file selection dialog).</p>
<p>From my research I'd say that I probably need the <a href="https://learn.microsoft.com/de-de/windows/win32/api/shobjidl_core/nn-shobjidl_core-ifileopendialog" rel="nofollow noreferrer">IFileOpenDialog</a>. But there seems to be no way to use IFileOpenDialog using Python, or is there?</p>
<p>Does anyone know how to create said dialog?</p>
|
<python><winapi>
|
2024-02-26 11:59:08
| 2
| 3,632
|
Josef
|
78,060,727
| 7,695,845
|
Parameter validation in Python
|
<p>I have a function that accepts some parameters that need to be validated. For example:</p>
<pre class="lang-py prettyprint-override"><code>def foo(a: int, b: float, c: str) -> None:
if a <= 0:
raise ValueError(f"'a' must pe positive: {a}")
if not (0 < b <= 1):
raise ValueError(f"'b' must be in the range (0, 1]: {b}")
if not c:
raise ValueError(f"'c' can't be empty: {c!r}")
# Do things with the validated parameters a, b and c...
</code></pre>
<p>This is a lot of boilerplate code to validate the arguments before I get to the actual code of the function. I was wondering if there is a scheme or a library to ease this parameter validation. I use <code>attrs</code> and its validators for fields of classes, but it doesn't work with simple functions. I tried the <code>validate_call</code> decorator from <code>pydantic</code>, but the problem is that <code>pydantic</code> seems like an "all in" library. I have to validate everything, and if I have some parameters I don't need validation for, or I think type hints and static checks are good enough, then <code>pydantic</code> still tries to validate them, which is bad especially when I have complex type hints for arbitrary types such as a <code>Callable</code> or a <code>numpy</code> array (I am aware of the <code>arbitrary_types_allowed</code> parameter, but it still does <code>isinstance</code> checks which are wasteful in my opinion).</p>
<p>In other words, I don't really need a complex data validation solution like <code>pydantic</code>. I just need to add simple validations <strong>when I want to</strong>. I'd like something like <code>attrs</code> where validations are optional (<code>attrs</code> won't complain if I declare an <code>int</code> field and pass it a string instead. Only the static type checker will warn me), and if I want to make sure an <code>int</code> field is positive, for example, I can easily add a validator without boilerplate. I'd like something similar for functions and their arguments. Which library should I use to achieve such simple validation on functions?</p>
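In case it helps frame the requirement, a minimal opt-in validator can be hand-rolled with `inspect.signature` — this is a sketch, not a library recommendation; only parameters named in the decorator are checked, and everything else is left to static typing:

```python
import inspect
from functools import wraps

def check(**validators):
    """Opt-in argument validation: pass a predicate per parameter name;
    unnamed parameters are never inspected."""
    def decorate(func):
        sig = inspect.signature(func)

        @wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for name, predicate in validators.items():
                value = bound.arguments[name]
                if not predicate(value):
                    raise ValueError(f"invalid value for {name!r}: {value!r}")
            return func(*args, **kwargs)
        return wrapper
    return decorate

@check(a=lambda a: a > 0, b=lambda b: 0 < b <= 1)
def foo(a: int, b: float, c: str) -> None:
    ...  # c is deliberately left unvalidated

foo(1, 0.5, "ok")      # passes
# foo(-1, 0.5, "ok")   # raises ValueError
```

The same shape could carry reusable predicates (positive, in-range, non-empty), which keeps the validation declarative at the `def` site without forcing every argument through a type-coercion layer.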
|
<python><validation>
|
2024-02-26 11:56:54
| 3
| 1,420
|
Shai Avr
|
78,060,632
| 386,861
|
Jupyter notebook: No module named 'notebook.base'
|
<p>I'm trying to create a simple HTML version of a notebook without the code.</p>
<pre><code>jupyter nbconvert "HFB data dashboard.ipynb" --to html --no-input
</code></pre>
<p>But it returns:</p>
<pre><code>line 18, in <module>
from notebook.base.handlers import APIHandler, IPythonHandler
ModuleNotFoundError: No module named 'notebook.base'
</code></pre>
<p>I'm using notebook version 7.1</p>
|
<python><jupyter-notebook><nbconvert>
|
2024-02-26 11:38:30
| 0
| 7,882
|
elksie5000
|
78,060,622
| 614,443
|
Building of project won't install correct version of package
|
<p>I'm trying to build a package and for some reason when it's trying to install setuptools, it's running into issues:</p>
<pre class="lang-bash prettyprint-override"><code>$ python -m build
* Creating virtualenv isolated environment...
* Installing packages in isolated environment... (setuptools>=61.0)
Collecting setuptools>=61.0
Using cached setuptools-69.1.1-py3-none-any.whl (819 kB)
Installing collected packages: setuptools
Attempting uninstall: setuptools
Found existing installation: setuptools 59.6.0
Not uninstalling setuptools at /usr/lib/python3/dist-packages, outside environment /usr
Can't uninstall 'setuptools'. No files were found to uninstall.
ERROR: Can't roll back setuptools; was not uninstalled
ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/usr/local/lib/python3.10/dist-packages/distutils-precedence.pth'
Consider using the `--user` option or check the permissions.
Traceback (most recent call last):
File "/home/x/.local/lib/python3.10/site-packages/build/__main__.py", line 388, in main
built = build_call(
File "/home/x/.local/lib/python3.10/site-packages/build/__main__.py", line 239, in build_package_via_sdist
sdist = _build(isolation, srcdir, outdir, 'sdist', config_settings, skip_dependency_check)
File "/home/x/.local/lib/python3.10/site-packages/build/__main__.py", line 147, in _build
return _build_in_isolated_env(srcdir, outdir, distribution, config_settings)
File "/home/x/.local/lib/python3.10/site-packages/build/__main__.py", line 113, in _build_in_isolated_env
env.install(builder.build_system_requires)
File "/home/x/.local/lib/python3.10/site-packages/build/env.py", line 143, in install
_subprocess(cmd)
File "/home/x/.local/lib/python3.10/site-packages/build/env.py", line 64, in _subprocess
subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
File "/usr/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/tmp/build-env-l1ium7ru/local/bin/python', '-Im', 'pip', 'install', '--use-pep517', '--no-warn-script-location', '-r', '/tmp/build-reqs-f8wiyjft.txt']' returned non-zero exit status 1.
ERROR Command '['/tmp/build-env-l1ium7ru/local/bin/python', '-Im', 'pip', 'install', '--use-pep517', '--no-warn-script-location', '-r', '/tmp/build-reqs-f8wiyjft.txt']' returned non-zero exit status 1.
</code></pre>
<p>But when I try and see what version I have:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip list | grep setuptools
setuptools 69.1.1
</code></pre>
<p>When I go into the directory <code>/usr/lib/python3/dist-packages</code>, there is a version 59.6.0:</p>
<pre class="lang-bash prettyprint-override"><code>$ ls /usr/lib/python3/dist-packages | grep setuptools
setuptools
setuptools-59.6.0.egg-info
</code></pre>
<p>I have both <code>build</code> and <code>pip</code> upgraded. As mentioned above, I also have <code>setuptools</code> upgraded. It's saying it's a permission issue (and yes, the files in <code>/usr/lib/...</code> are owned by root), but the main question is: why is it trying to use the root-owned version instead of the local one? Or am I doing something wrong?</p>
<p>(If I need to include extra files, just let me know)</p>
|
<python><build><pyproject.toml>
|
2024-02-26 11:36:17
| 1
| 2,551
|
Aram Papazian
|
78,060,574
| 2,989,330
|
How to annotate the type of a list of functions with different output type in Python
|
<p>The following <code>try_parse</code> method takes as input a value of type <code>T</code> and a series of functions with input type <code>T</code>. The functions do not necessarily have to have the same output type, e.g., it could be <code>int</code>, <code>float</code>, <code>MyCustomClass</code> etc. How would I appropriately annotate the function type in this code?</p>
<pre><code>T = TypeVar("T")
U = TypeVar("U")
def try_parse(value: T, *parse_functions: Callable[[T], U]) -> U:
for parse_function in parse_functions:
try:
return parse_function(value)
except ValueError:
pass
raise ValueError(f"Cannot parse {value} with any of {parse_functions}")
</code></pre>
|
<python><python-typing>
|
2024-02-26 11:27:28
| 1
| 3,203
|
Green 绿色
|
78,060,479
| 4,582,949
|
Annotate a Python function so that it narrows down the return type of an union involving a typevar
|
<p>I'm currently in the process of adding type annotations for a medium-sized library. The library has some quite complex cases that I would like to handle. I managed to simplify these complex cases to the following, much easier, one.</p>
<p>Consider the following <em>pseudo-identity</em> function:</p>
<pre class="lang-py prettyprint-override"><code>def g(x, y): return x, y
</code></pre>
<p>Let's assume that this function accepts either two parameters of the same type (TL;DR: any pair of objects of the same type implementing the <code>Comparable</code> protocol I defined but let's consider subclasses of <code>int</code> for this example) <strong>OR</strong> a special string literal (for this example, let's simply consider a <code>str</code>). To summarize, these calls are valid: <code>g(1, 2)</code>, <code>g('hi', 'world')</code>, <code>g(1, 'world')</code>, etc. but these calls aren't: <code>g({}, 3)</code>, <code>g([], True)</code>, etc.</p>
<p>I'm looking for a way to annotate this function so that the return type can be deduced by mypy precisely. For instance, I'm expecting <code>g(1,2)</code> to lead to a tuple of ints, <code>g('hi', 'world')</code> to a tuple of <code>str</code>, <code>g(1, 'world')</code> to <code>tuple[int, str]</code>, and the invalid cases would lead to an error.</p>
<p>Naively, I wrote:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
T = TypeVar('T', bound=int)
def g(x: T | str, y: T | str) -> tuple[T | str, T | str]:
return x, y
</code></pre>
<p>AFAIK, this should be enough to ensure that both parameters have the same type (and are instances of int) OR that one (or both) of them is/are str.</p>
<p>However, when I execute mypy on this snippet as follows:</p>
<pre class="lang-py prettyprint-override"><code>reveal_type(g(3, 4))
reveal_type(g('hello', 'world'))
reveal_type(g('hello', 2))
</code></pre>
<p>I got this output:</p>
<pre><code>main.py:8: note: Revealed type is "tuple[Union[builtins.int, builtins.str], Union[builtins.int, builtins.str]]"
main.py:9: note: Revealed type is "tuple[builtins.str, builtins.str]"
main.py:10: note: Revealed type is "tuple[Union[builtins.int, builtins.str], Union[builtins.int, builtins.str]]"
Success: no issues found in 1 source file
</code></pre>
<p>You can see this live in mypy Playground: <a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=a5063b88271ddd94a58c82e3376deb5c" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&python=3.12&gist=a5063b88271ddd94a58c82e3376deb5c</a></p>
<p>The second line is what I expected (a tuple of strings), but the two other calls <em>should</em> (or <em>could</em>) be refined to <code>tuple[int, int]</code> and <code>tuple[str, int]</code>. The result is similar with PyRight, and merely similar with Pyre (the latter indicates <code>tuple[str, Literal[2]]</code> instead of <code>tuple[str, int]</code> for the last case, for some unknown reason).</p>
<p>Is this something out of scope of the <code>typing</code> module or out of scope of mypy? Am I missing something?</p>
<p>Notice that I <strong>cannot</strong> write <code>T = TypeVar('T', int, str)</code>. While this works (for the above example), remember that I used <code>int</code> to simplify the case, where in practice I want <code>T</code> to be any class implementing my <code>Comparable</code> protocol (so I need to define <code>T</code> with an upper bound, not with a list of exact types, this is, <code>T = TypeVar('T', bound=Comparable)</code>).</p>
<hr />
<p>To put some more context for this question (do not read if you don't care ;-), the library I want to annotate defines an <code>Interval</code> class made of two bounds (the lower and upper ones). These bounds can be any object that supports comparison (e.g., ints, floats, dates). To handle infinite and semi-infinite intervals, the library defines two special objects, namely <code>inf</code> and <code>-inf</code>, that are respectively singleton instances of <code>_PInf</code> and <code>_NInf</code>. My goal is to annotate this <code>Interval</code> class so that mypy can check whether <code>Interval(x, y)</code> is valid (i.e., <code>x</code> and <code>y</code> are of the same <code>Comparable</code> type, or <code>x</code> or <code>y</code> or both are instances of <code>_PInf</code> or <code>_NInf</code>).</p>
<p>Generalizing the above example, I currently have something like this:</p>
<pre class="lang-py prettyprint-override"><code>Comparable = ... # Protocol with __eq__, __lt__, __le__, ...
T = TypeVar('T', bound=Comparable)
Bound: TypeAlias = T | _PInf | _NInf
class Interval(Generic[T]):
def __init__(self, lower: Bound[T], upper: Bound[T]) -> None:
...
</code></pre>
<p>The goal is (1) to make sure that both bounds are "compatible" (as explained in previous paragraph), and (2) that, for example, <code>Interval(1, 2).lower + 1</code> does not trigger any complaint from mypy, while <code>Interval(-inf, 2).lower + 1</code> does.</p>
|
<python><mypy><python-typing>
|
2024-02-26 11:10:44
| 0
| 2,842
|
Guybrush
|
78,060,363
| 2,794,152
|
Is it possible to save the save figure in different ranges, without plotting it twice?
|
<p>Suppose I have one data set which is very big; <strong>it takes a long time to plot</strong>. I want to save two figures of it with different ranges. In the GUI, I just plot once, zoom in to the desired range, save, and repeat for the other range. How can I do this automatically in code?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Generate data
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
# Create the first figure
plt.figure("fig1")
plt.scatter(x, y) # suppose plot this will take a long time
plt.ylim(-1, 0)
plt.title("Figure 1: Sin(x) with y-range (-1, 0)")
# Save the first figure
plt.savefig("figure1.png")
# Create the second figure
plt.figure("fig2")
plt.scatter(x, y) # is it possible to avoid this second plot?
plt.ylim(0, 1)
plt.title("Figure 2: Sin(x) with y-range (0, 1)")
# Save the second figure
plt.savefig("figure2.png")
# Show the plots
plt.show()
</code></pre>
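A sketch of the usual way around the double draw (assuming the non-interactive Agg backend is acceptable for scripted saving): render the expensive scatter once on a single Axes, then only change the view limits between the two `savefig` calls:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so savefig works without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)

fig, ax = plt.subplots()
ax.scatter(x, y)  # the expensive draw happens exactly once

ax.set_ylim(-1, 0)
ax.set_title("Figure 1: Sin(x) with y-range (-1, 0)")
fig.savefig("figure1.png")

ax.set_ylim(0, 1)  # only the view limits change; nothing is re-plotted
ax.set_title("Figure 2: Sin(x) with y-range (0, 1)")
fig.savefig("figure2.png")
```

Changing `ylim` (or `xlim`) does not discard the existing artists, so each `savefig` re-renders the same scatter under the new view, which is much cheaper than calling `scatter` again.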
|
<python><matplotlib><plot>
|
2024-02-26 10:52:29
| 1
| 4,904
|
an offer can't refuse
|
78,060,302
| 3,337,089
|
Can I call prepare() separately on multiple models or should it be a single call when using accelerator with pytorch?
|
<p>I'm training two deep learning models in tandem, say <code>model1</code> and <code>model2</code> using pytorch. I'm using <code>accelerator</code> to handle distributed training. When calling the <code>prepare()</code> function, can I call it separately as in the below code?</p>
<pre><code>model1, dataloader, optimizer = accelerator.prepare(model1, dataloader, optimizer)
if some_condition:
    model2 = accelerator.prepare(model2)
</code></pre>
<p>Or do I have to call <code>prepare()</code> only once, as in the below code?</p>
<pre><code>if not some_condition:
    model1, dataloader, optimizer = accelerator.prepare(model1, dataloader, optimizer)
else:
    model1, model2, dataloader, optimizer = accelerator.prepare(model1, model2, dataloader, optimizer)
</code></pre>
|
<python><pytorch>
|
2024-02-26 10:43:43
| 1
| 7,307
|
Nagabhushan S N
|
78,060,150
| 14,660,815
|
AWS S3 script to get versions of object & folder with same name (Version Enabled Bucket)
|
<p>I am using AWS CloudShell to execute a simple Python script that fetches all VersionIds for the following hierarchy.</p>
<pre><code>AWS S3 > buckets >
    my-bucket >
        test_001
            KpJEgbcnjMr5QLzOkA2CfG5NMzPBvyqK
        test_001/
            my-obj
                4RektV43Cf.HK17BTyVpDVtFSQiLr.yf
                HGKVjaVDoPbdPyl2xKN5f0eDYjt69Jt1
                IWuZ62icRmaV7Qz7_TQlpxXaZeY1COyk
                rNTACJqaJVbudBB70XkjDssGDbFTAOe6
</code></pre>
<p>When I am doing this way</p>
<pre><code>import boto3

AWS_REGION = 'us-east-1'
AWS_PROFILE = 'default'
session = boto3.Session(profile_name=AWS_PROFILE, region_name=AWS_REGION)
s3_client = session.client('s3')

class StopExecution(Exception):
    pass

def getVersions(bucket_name, object_key):
    versions_response = s3_client.list_object_versions(Bucket=bucket_name, Prefix=object_key, KeyMarker=object_key)
    versions = versions_response.get('Versions', [])
    version_ids = [version['VersionId'] for version in versions]
    print(f"VersionIds: {version_ids}")

bucket_name = 'my-bucket'

# Iterate over each bucket and copy its objects and versions to the destination bucket
try:
    print(f"Processing bucket: {bucket_name}")

    # List all objects in the bucket
    objects_response = s3_client.list_objects(Bucket=bucket_name)
    object_keys = [obj['Key'] for obj in objects_response.get('Contents', [])]
    object_keys_size = len(object_keys)
    print(f"Total {object_keys_size} no. of keys found in bucket: '{bucket_name}'")

    # Iterate over each object key and copy its versions to the destination bucket
    for object_key in object_keys:
        try:
            print("-----------------------------------------------------")
            print(f"Processing object: {object_key}")
            print("-----------------------------------------------------")
            getVersions(bucket_name, object_key)
            print("-----------------------------------------------------END_OF_COPY_ITERATION")
        except StopExecution as e:
            print("Execution stopped due to an error", e)
            break  # Exit the inner loop if an error occurs
except StopExecution as e:
    print("Execution stopped due to an error external loop:", e)
</code></pre>
<p>The print statement above gives me all versions lumped together; I expect the individual key and the folder to be differentiated.</p>
<p>output is</p>
<pre><code>VersionIds: [KpJEgbcnjMr5QLzOkA2CfG5NMzPBvyqK, rNTACJqaJVbudBB70XkjDssGDbFTAOe6, 4RektV43Cf.HK17BTyVpDVtFSQiLr.yf]
</code></pre>
<p><em><strong>Expecting for test_001 object</strong></em></p>
<pre><code>VersionIds: [KpJEgbcnjMr5QLzOkA2CfG5NMzPBvyqK]
</code></pre>
<p><em><strong>Expecting for test_001 folder</strong></em></p>
<pre><code>VersionIds: [rNTACJqaJVbudBB70XkjDssGDbFTAOe6, 4RektV43Cf.HK17BTyVpDVtFSQiLr.yf]
</code></pre>
<p>actual output:</p>
<pre><code>Processing bucket: my-bucket
Total 2 no. of keys found in bucket: 'my-bucket'
-----------------------------------------------------
Processing object: test_001
-----------------------------------------------------
VersionIds: ['4RektV43Cf.HK17BTyVpDVtFSQiLr.yf', 'HGKVjaVDoPbdPyl2xKN5f0eDYjt69Jt1', 'IWuZ62icRmaV7Qz7_TQlpxXaZeY1COyk', 'rNTACJqaJVbudBB70XkjDssGDbFTAOe6']
-----------------------------------------------------END_OF_COPY_ITERATION
-----------------------------------------------------
Processing object: test_001/my-obj
-----------------------------------------------------
VersionIds: []
-----------------------------------------------------END_OF_COPY_ITERATION
</code></pre>
<p>Not sure how I can restrict the <code>list_object_versions</code> method to return a response specific to one key.</p>
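<p>A minimal sketch (not from the original post): since <code>Prefix</code> matching in <code>list_object_versions</code> returns every key that merely starts with the prefix, <code>test_001</code> also matches <code>test_001/</code> and <code>test_001/my-obj</code>. Filtering the response on an exact <code>Key</code> match separates the object from the folder. The response below is simulated rather than a real API call.</p>

```python
def filter_versions_for_key(versions_response, object_key):
    """Keep only the versions whose Key exactly equals object_key."""
    versions = versions_response.get('Versions', [])
    return [v['VersionId'] for v in versions if v['Key'] == object_key]

# Simulated response shaped like boto3's list_object_versions output
response = {
    'Versions': [
        {'Key': 'test_001', 'VersionId': 'KpJEgbcnjMr5QLzOkA2CfG5NMzPBvyqK'},
        {'Key': 'test_001/', 'VersionId': 'rNTACJqaJVbudBB70XkjDssGDbFTAOe6'},
        {'Key': 'test_001/my-obj', 'VersionId': '4RektV43Cf.HK17BTyVpDVtFSQiLr.yf'},
    ]
}
ids = filter_versions_for_key(response, 'test_001')
print(ids)  # only the exact-key version, not the folder's versions
```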
|
<python><amazon-web-services><amazon-s3>
|
2024-02-26 10:20:50
| 1
| 373
|
Dnyaneshwar Jadhav
|
78,059,960
| 12,415,855
|
Getting 403-status response when get requesting from website?
|
<p>I try to request a site using the following code:</p>
<pre><code>import requests
if __name__ == '__main__':
headers = {
'x-requested-with':'XMLHttpRequest',
'Cookie': 'sessionid=8tjkos13k0wpd9rzje3xcc8dp1v64kug; __cf_bm=kbWudR66vyTEyth2CjVopfXLfOA167E8Yk9iDbeWj9E-1708936562-1.0-AUMLR7tUp64sdzNeaVPMSqvEnR6OAF51OY2DP6PhyH2N2DoH3WgQ5VWAlM/nK0YXJUTj+ZRXojZ/mFoXSJN1fG4=; _dist=pc%7Cat%7CUSD%7CFalse%7CFalse; csrftoken=QKKDI9DJijqDPqZNxSb1nZX4S8OL42me; _vis_opt_s=1%7C; _vis_opt_test_cookie=1; _vwo_uuid=JD54AA9370AC4D52B053FB35FD1C85A99; _vwo_ds=3%241708936564%3A65.47499426%3A%3A; ssUserId=de191604-36fd-4a4f-9104-fbbdcf6fe231; _isuid=de191604-36fd-4a4f-9104-fbbdcf6fe231; _vwo_uuid_v2=D09233018FAFE283F98683391374FBCE0|2674009931fdff7a4795c686321c68ed; atatus-aid=id|e58b15cdc83046e49cc2a955b99fefc5&timestamp|2024-02-26T08:36:05.118Z; _gcl_au=1.1.311801418.1708936565; tkbl_cvuuid=e9e07924-d9ab-4e8b-80ae-62c4be6284d7; NaN; cyo_shape=; cartCount=0; be_rep=false; be_symbol=USD; be_f=p; wlCount=0; rskxRunCookie=0; rCookie=dzeon27uts3xvdkizp1e6lt2opigo; OptanonAlertBoxClosed=2024-02-26T08:36:07.703Z; _mpacs=y; _bamAttribution=%7B%22gclid%22%3A%22%22%2C%22source%22%3A%22%28direct%29%22%2C%22campaign%22%3A%22%28direct%29%22%2C%22medium%22%3A%22%28none%29%22%2C%22keyword%22%3A%22%22%2C%22content%22%3A%22%22%2C%22date%22%3A%2220240226%22%7D; __utmz=11288170.1708936567.1.1.utmcsr%3D%28direct%29%7Cutmccn%3D%28direct%29%7Cutmcmd%3D%28none%29; _cs_mk_ga=0.5318477980920424_1708936567733; _li_dcdm_c=.brilliantearth.com; _lc2_fpi=6fb155432e4e--01hqj8x76a1sef92faajeg2mdv; _lc2_fpi_meta={%22w%22:1708936568010}; _gid=GA1.2.601193034.1708936568; _cs_c=0; wurfl=%7B%22complete_device_name%22%3A%22Google%20Chrome%22%2C%22form_factor%22%3A%22Desktop%22%2C%22is_mobile%22%3Afalse%7D; _nb_sp_ses.8341=*; LPVID=k2ZDQzMmE4YWY4MjE0YjQx; LPSID-28108963=uv-5K3DIRk6Je-mmxaSN-A; __rtbh.uid=%7B%22eventType%22%3A%22uid%22%2C%22id%22%3A%22undefined%22%7D; __rtbh.lid=%7B%22eventType%22%3A%22lid%22%2C%22id%22%3A%22ReYfRBtI3t6F6GKAqQhC%22%7D; IR_gbd=brilliantearth.com; __pdst=695c2b26e84546a4afb14ea258c64ebb; 
sa-user-id=s%253A0-9d89ace7-a469-5187-7b79-5d6182a5e58f.zB6NsxnOmZu%252BbTIDQ58D5D7rjxwdnncBHIk0ao4a0mU; sa-user-id-v2=s%253AnYms56RpUYd7eV1hgqXljylCYAE.LhCzFOflL17%252FKUFI2Gox7wFZpLDGlqq7wgYmuKQH470; sa-user-id-v3=s%253AAQAKIKFWtzj1ZsrmOW1G74Ktn3CYIFz579xmDWT3NZsAKqkIEHwYBCC3yp2qBjABOgRrkJNFQgRc-Bfg.7QGWzkDvUHGoHhKUmnfDezeE6LB522xcqxSosrfKcAU; _tt_enable_cookie=1; _ttp=zBx-OAnc20baRY40NqYQv_8i1i8; _fbp=fb.1.1708936632968.1778099912; FPID=FPID2.2.s8fmXfEFRjAaSNZCvTz0eowChl5gt2gBIPNtiicOrGY%3D.1708936568; FPAU=1.1.311801418.1708936565; _scid=9c4fc9b7-d370-44c8-bffd-3a3f323d8080; FPLC=LOvTg0bWJWBoAk7Us4m6pSbl7EFJshMqrllCCe0M0qMipk6z%2F6Ixi8pAYKyyGpR6wqoS6aPYV3xn%2BFeavui9VFSomy8kjhwteGD%2B51wQnelsCsJAI6mCvoPoiWux1w%3D%3D; _pin_unauth=dWlkPU1ETTNZekpqWlRndE5UWmhOQzAwWVdKakxUaGtObUl0TW1WbVpqRTVZVE01T0dVeA; __attentive_id=d493ac2e98f845d78d4b6bf302c78b02; _attn_=eyJ1Ijoie1wiY29cIjoxNzA4OTM2NjM0OTAyLFwidW9cIjoxNzA4OTM2NjM0OTAyLFwibWFcIjoyMTkwMCxcImluXCI6ZmFsc2UsXCJ2YWxcIjpcImQ0OTNhYzJlOThmODQ1ZDc4ZDRiNmJmMzAyYzc4YjAyXCJ9In0=; __attentive_cco=1708936634905; __attentive_ss_referrer=ORGANIC; __attentive_dv=1; ABTastySession=mrasn=&lp=https%253A%252F%252Fwww.brilliantearth.com%252Fdiamond%252Fshop-all%252F; bluecoreNV=false; ABTasty=uid=8302ae3brdbbs1ka&fst=1708936564165&pst=-1&cst=1708936564165&ns=1&pvt=5&pvis=5&th=; _vwo_sn=0%3A5; ssSessionIdNamespace=bf9d8632-4dca-4c0f-8f94-e23423d38cf6; sailthru_pageviews=5; OptanonConsent=isGpcEnabled=0&datestamp=Mon+Feb+26+2024+09%3A37%3A44+GMT%2B0100+(Mitteleurop%C3%A4ische+Normalzeit)&version=202209.2.0&isIABGlobal=false&hosts=&consentId=1c7e67ff-944e-491f-88c9-0aec95fcae55&interactionCount=1&landingPath=NotLandingPage&groups=C0001%3A1%2CC0002%3A1%2CC0003%3A1%2CC0004%3A1&geolocation=AT%3B9&AwaitingReconsent=false; _rdt_uuid=1708936632338.e572b8ed-d53d-48bf-a7ba-f399bd63f6f8; _scid_r=9c4fc9b7-d370-44c8-bffd-3a3f323d8080; 
mp_brilliant_earth_mixpanel=%7B%22distinct_id%22%3A%20%2218de48e9421551-05c22383cb151f-26001951-1fa400-18de48e942210cf%22%2C%22bc_persist_updated%22%3A%201708936565806%2C%22country_code%22%3A%20%22us%22%7D; IR_13541=1708936664426%7C0%7C1708936664426%7C%7C; _uetsid=3d02a960d48211ee9fa73743dd272beb; _uetvid=3d02d2a0d48211ee96f861403df819e6; sailthru_content=070b89718411939d45e45d4a4e8ec3f8db7de6e593badaa3a5e20d61bff846e5; sailthru_visitor=e459df5d-15e0-4f3b-99f8-5ade47e0474c; cto_bundle=DQpw_l9BWklLMTdrbEM0SmJ2TiUyQjVHJTJGNHBDU0NtcTRNaEJWeHRLNXoybllrNEM5dTRFcWNvJTJCJTJGTVBib1ZRd1ZKMyUyQkwlMkJ2VnV3JTJGc0R0Mmt1WjZrQTBKRzBZNFV3bkc4JTJGbTdjUzQ3THNxb3pSSHRtTXJ1UldWc2JWbmpDdHBXckZUWWp2S0lBeFdrYkQzV2FPZDBQYXF4Mk12Mk9mbE5ydDMzV2Z3JTJGYllGOHNtY0htbTQlM0Q; _derived_epik=dj0yJnU9cHc1d3BpYTAwUnFkMXBnT3hOOHJuX1ZwYzBIRFJPc0Mmbj1rcGdKN3J2eDJrbDhzVXRYSkl3czR3Jm09MSZ0PUFBQUFBR1hjVGRnJnJtPTEmcnQ9QUFBQUFHWGNUZGcmc3A9Mg; __attentive_pv=4; atatus-sid=id|b5f9daeb9d544730ac6de3a2986630b0&timestamp|2024-02-26T08:37:45.413Z; _ga=GA1.2.2045111530.1708936568; lastRskxRun=1708936665496; RUM_EPISODES=s=1708936666476&r=https%3A//www.brilliantearth.com/diamond/shop-all/; _ga_123456=GS1.1.1708936632.1.1.1708936666.0.0.0; cf_chl_3=9c8b8b78037c98e; cf_clearance=SLxHGRS8AYg9tAcmN4M3UqKMqvPDqxIFirjMk.9RELM-1708936786-1.0-AUeQqx5zUdAPFqeeA37P41IM3hlqfz3mxBCla1KEodLjmR9QCoAX4AbIxmA61H2Z+XmRrwmdMJc5LaVv2Yf2XDs=; _cs_id=27513c55-41b2-a60b-d123-7083081613df.1708936568.1.1708937244.1708936568.1.1743100568221.1; _cs_s=6.0.0.1708939044676; _nb_sp_id.8341=0684837f-b168-4ac5-8292-ed3c261b76b2.1708936568.1.1708937385.1708936568.376c17c2-df7e-47f8-9af7-5c2cf8b9602b; _dd_s=rum=0&expire=1708938289392; _gat=1; _ga_M6K9G20MZ3=GS1.1.1708936575.1.1.1708937389.0.0.0; ld_lcd_params=shapes%3DAll%7Cclarities%3DSI2%2CSI1%2CVS2%2CVS1%2CVVS2%2CVVS1%2CIF%2CFL%7Ccuts%3DFair%2CGood%2CVery%20Good%2CIdeal%2CSuper%20Ideal',
'Referer': 'https://www.brilliantearth.com/diamond/shop-all/'
}
url = "https://www.brilliantearth.com/loose-diamonds/list/?shapes=All&cuts=Fair%2CGood%2CVery%20Good%2CIdeal%2CSuper%20Ideal&colors=J%2CI%2CH%2CG%2CF%2CE%2CD&clarities=SI2%2CSI1%2CVS2%2CVS1%2CVVS2%2CVVS1%2CIF%2CFL&polishes=Good%2CVery%20Good%2CExcellent&symmetries=Good%2CVery%20Good%2CExcellent&fluorescences=Very%20Strong%2CStrong%2CMedium%2CFaint%2CNone&min_carat=0.25&max_carat=10.21&min_table=48.00&max_table=88.00&min_depth=3.50&max_depth=90.70&min_price=340&max_price=1062110&stock_number=&row=0&page=1&requestedDataSize=50&currency=%24&has_v360_video=&dedicated=&min_ratio=1.00&max_ratio=2.75&shipping_day=&suppler_shipping_day=&exclude_quick_ship_suppliers=&MIN_PRICE=380&MAX_PRICE=332150&MIN_CARAT=0.25&MAX_CARAT=10.21&MIN_TABLE=48&MAX_TABLE=88&MIN_DEPTH=3.5&MAX_DEPTH=90.7&order_by=&order_method=&category=Loose%20Diamonds&fill_most_popular=true&most_popular_order_by=&most_popular_order_method="
resp = requests.get(url, headers=headers)
print(resp.status_code)
</code></pre>
<p>But I always get a 403 status code as response.
When I put the URL directly in the Chrome browser (also in incognito mode) I see the data in the browser - see attached<a href="https://i.sstatic.net/mq32o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mq32o.png" alt="enter image description here" /></a></p>
<p>Why is the request not working?</p>
|
<python><python-requests>
|
2024-02-26 09:49:17
| 0
| 1,515
|
Rapid1898
|
78,059,868
| 1,100,711
|
Type hint function accepting union types
|
<p>The following function with argument of type <code>type</code> accepts both
regular types and union types and works as expected.
However Pylance treats passing union type as error.</p>
<pre class="lang-py prettyprint-override"><code>def foo(t: type):
print(t)
foo(int)
# OK
foo(int | str)
# Pylance:
# Argument of type "type[int] | type[str]"
# cannot be assigned to parameter "t" of type "type"
# in function "foo"
</code></pre>
<p>What is the correct type hint for any type including union types?</p>
<p><strong>UPDATE</strong>:</p>
<p>Seems to be <a href="https://github.com/microsoft/pyright/issues/7110" rel="nofollow noreferrer">Pyright issue #7110</a> fixed in recent version.</p>
|
<python><python-typing><union-types><pyright>
|
2024-02-26 09:35:26
| 1
| 671
|
Maksim Zholudev
|
78,059,845
| 5,672,961
|
SQLAlchemy query result doesn't have attribute `_asdict()`
|
<p>I have these models.</p>
<pre class="lang-py prettyprint-override"><code>class Base(DeclarativeBase):
pass
class PortfolioPerformanceTable(Base):
__tablename__ = tn.portfolio_performance
clientuid: Mapped[str] = mapped_column(ForeignKey(f"{tn.clients}.clientno"))
clientno: Mapped[str] = mapped_column(ForeignKey(f"{tn.client_strategies}.clientno"))
clientid: Mapped[str] = mapped_column(ForeignKey(f"{tn.custodian_banks}.id"), primary_key=True)
class ConsolidatedHoldingsTable(Base):
__tablename__ = tn.consolidated_holdings
clientuid: Mapped[str] = mapped_column(ForeignKey(f"{tn.clients}.clientno"))
clientno: Mapped[str] = mapped_column(ForeignKey(f"{tn.client_strategies}.clientno"))
clientid: Mapped[str] = mapped_column(ForeignKey(f"{tn.custodian_banks}.id"), primary_key=True)
hldsdt: Mapped[str] = mapped_column(DateTime, primary_key=True) # holdings date
isin: Mapped[str] = mapped_column(String(12), primary_key=True)
bbticker: Mapped[str] = mapped_column(String(100))
bbmktsectordes: Mapped[str] = mapped_column(String(100))
</code></pre>
<p>And I use this two query</p>
<pre class="lang-py prettyprint-override"><code>def get_performances(date: str, clientids: list[str], db: Session):
query = (
db.query(PortfolioPerformanceTable.perfdt, PortfolioPerformanceTable.dailyrt)
.where(PortfolioPerformanceTable.perfdt <= date)
.where(PortfolioPerformanceTable.clientid.in_(clientids))
.order_by(PortfolioPerformanceTable.perfdt.desc())
)
performances = [row._asdict() for row in query.all()]
return performances
def get_holdings(clientids: list[str], db: Session):
query = (
db.query(ConsolidatedHoldingsTable)
.filter(
ConsolidatedHoldingsTable.hldsdt == select(func.max(ConsolidatedHoldingsTable.hldsdt))
)
.filter(ConsolidatedHoldingsTable.clientid.in_(clientids))
.order_by(ConsolidatedHoldingsTable.isin.desc())
)
holdings = [row._asdict() for row in query.all()]
return holdings
</code></pre>
<p>It's the same kind of query, and I use the model to query in both. But the strange thing is that there is no <code>_asdict</code> method on the results of the <code>get_holdings</code> function. It errored out:</p>
<pre class="lang-bash prettyprint-override"><code>formance_report
holdings = get_holdings(custodian_banks, db)
File "/Users/mandaputra/clients_dashboard/service.py", line 39, in get_holdings
holdings = [row._asdict() for row in query.all()] # type: ignore
File "/Users/mandaputra/clients_dashboard/service.py", line 39, in <listcomp>
holdings = [row._asdict() for row in query.all()] # type: ignore
AttributeError: 'ConsolidatedHoldingsTable' object has no attribute '_asdict'
</code></pre>
<p>I tried to check the type of <code>row</code>, and the strange thing is that the row type from <code>get_holdings</code> is <code>ConsolidatedHoldingsTable</code>, while the row type from <code>get_performances</code> is <code>Row[Tuple[str, float]]</code>.</p>
<p>How could this happen? And why? Is there any difference between selecting partial columns of the model vs all of them?</p>
<pre><code>db.query(ConsolidatedHoldingsTable)
db.query(ConsolidatedHoldingsTable.nav, ConsolidatedHoldingsTable.isin)
</code></pre>
<p>How to solve this problem? I want consistent way to convert SQLAlchemy query result to list of dictionaries.</p>
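<p>One hedged approach (a sketch, not the only way): entity queries like <code>db.query(Model)</code> return ORM instances, while column selections like <code>db.query(Model.a, Model.b)</code> return <code>Row</code> objects that carry <code>_asdict()</code>; a small helper can normalize both. The model and data below are simplified stand-ins, not the original tables.</p>

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect as sa_inspect
from sqlalchemy.orm import Session, declarative_base

def row_to_dict(row):
    """Convert either a Row (column selection) or an ORM instance to a dict."""
    if hasattr(row, '_asdict'):            # Row: db.query(Model.a, Model.b)
        return row._asdict()
    mapper = sa_inspect(row).mapper        # ORM instance: db.query(Model)
    return {c.key: getattr(row, c.key) for c in mapper.column_attrs}

# Self-contained demo against an in-memory SQLite table
Base = declarative_base()

class Holding(Base):
    __tablename__ = 'holdings'
    isin = Column(String(12), primary_key=True)
    nav = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as db:
    db.add(Holding(isin='US0000000001', nav=100))
    db.commit()
    full = row_to_dict(db.query(Holding).first())                        # entity query
    partial = row_to_dict(db.query(Holding.isin, Holding.nav).first())   # column query
print(full, partial)
```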
|
<python><sqlalchemy>
|
2024-02-26 09:30:40
| 1
| 1,090
|
mandaputtra
|
78,059,822
| 9,923,776
|
Keycloak - Log user without password
|
<p>I'm using Keycloak to manage user login.
I have a piece of software that redirects to the Keycloak login if the user is not logged in.
Normal login by username and password works fine.</p>
<p>I need to support some "Italian identity systems" that are not covered by plugins yet.
So I was thinking to handle that kind of login within my software and then, after I have trusted the user, log them in to Keycloak using the API.</p>
<p>I'm using Python Keycloak to manage communication between my software and Keycloak.</p>
<p>So I need a way, maybe using an "admin account", to force authentication of a user having only their username, because the external identity manager has already trusted them.</p>
<p>Is there a way?</p>
<p><a href="https://i.sstatic.net/nFPjy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nFPjy.png" alt="enter image description here" /></a></p>
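<p>For reference, a hedged sketch: Keycloak's admin REST API exposes an impersonation endpoint, and the token-exchange feature is the supported mechanism for minting tokens for an externally trusted user. The snippet below only builds the endpoint URL; <code>base_url</code>, <code>realm</code> and <code>user_id</code> are placeholder assumptions, and the actual POST (with an admin bearer token) is left out.</p>

```python
# Hedged sketch: build the Keycloak admin impersonation endpoint URL.
# base_url, realm and user_id are illustrative placeholders.
def impersonation_url(base_url: str, realm: str, user_id: str) -> str:
    return f"{base_url}/admin/realms/{realm}/users/{user_id}/impersonation"

url = impersonation_url("https://kc.example.com", "myrealm", "1234")
print(url)  # https://kc.example.com/admin/realms/myrealm/users/1234/impersonation
```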
|
<python><keycloak>
|
2024-02-26 09:26:26
| 0
| 656
|
EviSvil
|
78,059,445
| 10,200,497
|
How to add plus sign for positive number when using to_excel?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{
'a': [2, 2, 2, -4, np.nan, np.nan, 4, -3, 2, -2, -6],
'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6]
}
)
</code></pre>
<p>I want to add a plus sign for positive numbers only for column <code>a</code> when exporting to Excel. For example 1 becomes +1. Note that I have <code>NaN</code> values as well. I want them to be empty cells in Excel similar to the default behavior of Pandas when dealing with <code>NaN</code> values in <code>to_excel.</code></p>
<p>I have tried many solutions. This is one of them. But it didn't work in Excel.</p>
<pre><code>df.style.format({'a': '{:+g}'}).to_excel(r'df.xlsx', sheet_name='xx', index=False)
</code></pre>
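<p>One hedged workaround (a sketch, assuming it is acceptable for Excel to store the column as text rather than numbers): pre-format column <code>a</code> as strings before exporting, mapping <code>NaN</code> to an empty string.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [2, 2, 2, -4, np.nan, np.nan, 4, -3, 2, -2, -6],
    'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6],
})

# '+2', '-4', ... with NaN mapped to '' (an empty cell once written to Excel)
df['a'] = df['a'].map(lambda v: '' if pd.isna(v) else f'{v:+g}')
print(df['a'].tolist())
# df.to_excel('df.xlsx', sheet_name='xx', index=False)  # export step, commented out here
```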
|
<python><pandas><excel><dataframe>
|
2024-02-26 08:16:08
| 2
| 2,679
|
AmirX
|
78,059,437
| 4,568,212
|
How to schedule Background Task with FastAPI + Uvicorn + Celery in Production for lower response time?
|
<p>I am running a FastAPI + Uvicorn server setup on an EC2 instance, with 4 Uvicorn workers and 4 Celery workers on an 8 vCPU instance. I am performing a very simple task: I receive data on a POST API, schedule a background task using Celery, and return a response immediately. For this simple task, if I send 40 concurrent requests per second, I get a p99 of 600 ms, which is very high; it should be within two digits. What could be going wrong? BackgroundTasks, which is built into FastAPI, won't be of any help here because I have a CPU-bound task that takes time to execute; I have already tried it.</p>
<pre class="lang-python prettyprint-override"><code>from fastapi import FastAPI
from celery import Celery
celery_app = Celery("tasks", broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")
app = FastAPI()
@celery_app.task  # must be registered as a Celery task for .delay() to work
def generate_model_score(request_data):
# Calculate ML model score
pass
@app.post("/predict/")
async def sample_fn(request_data: dict):
if total_transactions == 0:
logger.info("user doesnt have any data")
return default_response(request_data['userId'])
generate_model_score.delay(request_data)
logger.debug("end of predict")
# Respond with dummy data immediately
return default_response(request_data['userId'])
</code></pre>
|
<python><machine-learning><celery><fastapi><uvicorn>
|
2024-02-26 08:15:09
| 0
| 4,344
|
Punit Vara
|
78,059,436
| 1,997,852
|
Why does PyLance show reportArgumentType?
|
<p>I have the following Python code which works fine (version 3.11). Why does PyLance complain about this?</p>
<pre class="lang-py prettyprint-override"><code>import jinja2
templateEnv = jinja2.Environment(loader=jinja2.BaseLoader)
template = templateEnv.from_string(my_string)
</code></pre>
<p>PyLance v2024.2.2 in VS Code shows this error:</p>
<pre><code>Argument of type "type[BaseLoader]" cannot be assigned to parameter "loader" of type "BaseLoader | None" in function "__init__"
Type "type[BaseLoader]" cannot be assigned to type "BaseLoader | None"
"type[type]" is incompatible with "type[BaseLoader]"
"type[type]" is incompatible with "type[None]"
https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportArgumentType
</code></pre>
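<p>Pylance is likely flagging this because <code>jinja2.BaseLoader</code> (the class object) is passed where the annotation, per the error message itself, expects an instance (<code>BaseLoader | None</code>); the call happens to run because <code>from_string</code> never touches the loader. A stdlib-only analogue (the class here is a stand-in, not jinja2's actual code):</p>

```python
class BaseLoader:
    """Stand-in for jinja2.BaseLoader, to illustrate the diagnostic."""

def make_env(loader: "BaseLoader | None" = None) -> str:
    # The parameter expects an *instance* (or None), not the class object itself.
    assert loader is None or isinstance(loader, BaseLoader)
    return "ok"

result = make_env(BaseLoader())   # fine: an instance
# make_env(BaseLoader)            # Pylance: type[BaseLoader] is not BaseLoader
print(result)
```

Passing <code>jinja2.BaseLoader()</code> instead of <code>jinja2.BaseLoader</code> should silence the diagnostic.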
|
<python><pyright>
|
2024-02-26 08:15:07
| 1
| 1,217
|
Elliott B
|
78,059,324
| 10,200,497
|
AttributeError: 'Styler' object has no attribute 'style'
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{
'a': [2, 2, 2, -4, 4, 4, 4, -3, 2, -2, -6],
'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6]
}
)
</code></pre>
<p>I use a function to highlight cells in <code>a</code> when I use <code>to_excel</code>:</p>
<pre><code>def highlight_cells(s):
if s.name=='a':
conds = [s > 0, s < 0, s == 0]
labels = ['background-color: lime', 'background-color: pink', 'background-color: gold']
array = np.select(conds, labels, default='')
return array
else:
return ['']*s.shape[0]
</code></pre>
<p>Now I want to add one more feature: a plus sign if a value in <code>a</code> is positive. For example 1 becomes +1. I want this feature only for column <code>a</code>.</p>
<p>This is my attempt but it does not work. It gives me the error that is the title of the post.</p>
<pre><code>df.style.apply(highlight_cells).style.format({'a': '{:+g}'}).to_excel('df.xlsx', sheet_name='xx', index=False)
</code></pre>
|
<python><pandas><dataframe>
|
2024-02-26 07:50:34
| 1
| 2,679
|
AmirX
|
78,059,279
| 7,233,155
|
Restructuring `struct` with generic types for binding with PyO3
|
<p>If I have used generics in a rust implementation such as the following MWE:</p>
<pre class="lang-rust prettyprint-override"><code>#[derive(Debug)]
struct A<T> {
a: T,
}
impl<T> A<T> {
fn new(a: T) -> A::<T> {
A {a}
}
}
</code></pre>
<p>This works fine, but if I now want to add Python bindings with PyO3 this won't work because it needs concrete types. If <code>A</code> is designed to work with multiple types, do I need to re-write all of this (and there will be many more implemented methods) for each specific type when wanting to use the <code>pyclass</code> and <code>pymethods</code> attributes?</p>
<pre class="lang-rust prettyprint-override"><code>#[pyclass]
struct AF64{
a: f64,
};
#[pymethods]
impl AF64 {
#[new]
fn new(a: f64) -> AF64{
AF64{ a }
}
}
</code></pre>
<p>etc. for <code>f32</code>, <code>i32</code> etc.. ?</p>
<p><a href="https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=2d25264e572715e35647d36b31fd2dae" rel="nofollow noreferrer">https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=2d25264e572715e35647d36b31fd2dae</a></p>
|
<python><rust><types>
|
2024-02-26 07:43:45
| 0
| 4,801
|
Attack68
|
78,059,252
| 6,086,295
|
instagram graph api rate limit execeed
|
<p>I am using the Instagram Graph API to extract data about my own account and other business accounts' media.
During development, I have made a couple of API calls to test the API. At first, everything works fine and the API response is returned successfully.
All of a sudden, it is reporting that the request limit has been reached:</p>
<pre><code>{'error': {'message': '(#4) Application request limit reached', 'type': 'OAuthException', 'is_transient': True, 'code': 4, 'fbtrace_id': 'AMtO2GIogwSNpTD0-WqGny-'}}
</code></pre>
<p>Here is my python code to trigger the api call:</p>
<pre><code>def get_url():
print('access code url', access_url)
code = input("enter the url")
code = code.rsplit('access_token=')[1]
code = code.rsplit('&data_access_expiration')[0]
return code
def get_long_lived_access_token(
access_token=''
):
url = graph_url + 'oauth/access_token'
params = dict()
params['grant_type'] = 'fb_exchange_token'
params['client_id'] = client_id
params['client_secret'] = client_secret
params['fb_exchange_token'] = access_token
response = requests.get(url, headers=headers, params=params)
response =response.json()
long_lived_access_token = response['access_token']
return long_lived_access_token
def get_page_id(
access_token=''
):
url = graph_url + 'me/accounts'
params = dict()
params['access_token'] = access_token
response = requests.get(url, headers=headers, params=params)
response = response.json()
print(response)
page_id = response['data'][0]['id']
return page_id
access_token = get_url()
long_lived_access_token = get_long_lived_access_token(access_token=access_token)
page_id = get_page_id(access_token=long_lived_access_token)
</code></pre>
<p>Checking Application-Level Rate Limiting in the Meta developer dashboard, I can clearly see that there is still quota remaining:
<a href="https://i.sstatic.net/NLAXV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NLAXV.png" alt="enter image description here" /></a></p>
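<p>As an aside, a hedged, generic sketch (not Meta-specific; the error-code check mirrors the payload above, and the "API" here is simulated): code 4 responses are commonly handled by retrying with exponential backoff rather than failing immediately.</p>

```python
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry fn() with exponential backoff while it reports a rate-limit error."""
    result = None
    for attempt in range(max_retries):
        result = fn()
        error = result.get('error', {}) if isinstance(result, dict) else {}
        if error.get('code') != 4:          # 4 = application request limit reached
            return result
        sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
    return result

# Simulated API: rate-limited twice, then succeeds
responses = iter([{'error': {'code': 4}}, {'error': {'code': 4}}, {'data': [1]}])
result = call_with_backoff(lambda: next(responses), sleep=lambda s: None)
print(result)  # {'data': [1]}
```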
|
<python><facebook-graph-api><instagram><instagram-api><instagram-graph-api>
|
2024-02-26 07:38:16
| 1
| 479
|
Kevin Lee
|
78,059,194
| 857,932
|
Is it possible to add type annotations to a method outside of my code?
|
<p>Let's suppose I'm writing a Python plugin for a large, legacy codebase not under my control. That codebase exposes an object with several functions that I can call from my plugin code:</p>
<pre class="lang-py prettyprint-override"><code>def plugin_entrypoint(ctx: LegacyCodebaseObject):
...
</code></pre>
<p>One of these functions causes the plugin controller to terminate the plugin with an error condition:</p>
<pre class="lang-py prettyprint-override"><code>def plugin_entrypoint(ctx: LegacyCodebaseObject):
...
ctx.fail(reason="foo", ...) # never returns
</code></pre>
<p>However, being a legacy codebase, none of these functions are type-annotated. My plugin <em>is</em> type-annotated and I use mypy, which causes an annoyance because mypy thinks that <code>ctx.fail()</code> may return:</p>
<pre class="lang-py prettyprint-override"><code>def plugin_entrypoint(ctx: LegacyCodebaseObject):
...
try:
some_data = some_function()
except Exception as e:
ctx.fail(reason=f"Failed to prepare data: {e}") # never returns
other_function(some_data) # mypy complains that some_data might be referenced before assignment
</code></pre>
<hr />
<p>Is it possible to somehow annotate <code>LegacyCodebaseObject.fail()</code> from within my code in a way that mypy will understand?</p>
<p>I could simply wrap <code>ctx.fail()</code> into my own function annotated with Never type, but this would be unidiomatic and my plugin would not pass code review.</p>
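<p>One hedged alternative to a wrapper function (a sketch; <code>FailingCtx</code> and <code>DummyCtx</code> are hypothetical names, not from the legacy codebase): re-declare just the methods you rely on in a <code>Protocol</code> whose <code>fail</code> returns <code>NoReturn</code>, and <code>cast</code> the untyped object to it. The <code>cast</code> is a no-op at runtime but lets mypy see that the except branch never falls through.</p>

```python
from typing import NoReturn, Protocol, cast

class FailingCtx(Protocol):
    """Hypothetical structural type declaring only the methods we rely on."""
    def fail(self, reason: str) -> NoReturn: ...

def plugin_entrypoint(ctx: object) -> str:
    fctx = cast(FailingCtx, ctx)  # runtime no-op; tells mypy fail() never returns
    try:
        some_data = "data"
    except Exception as e:
        fctx.fail(reason=f"Failed to prepare data: {e}")
    return some_data              # no "possibly unbound" complaint now

class DummyCtx:
    def fail(self, reason: str) -> NoReturn:
        raise SystemExit(reason)

out = plugin_entrypoint(DummyCtx())
print(out)  # data
```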
|
<python><python-typing>
|
2024-02-26 07:27:56
| 1
| 2,955
|
intelfx
|
78,059,139
| 10,341,337
|
Why python3 dict's get method O(1) time while under the hood it is aclually O(n)?
|
<p>We all know that Python's <code>dict.get()</code> has <code>O(1)</code> time complexity because it needs to compute a hash of the key and get the value corresponding to that key. The question is, what magic can actually access the value corresponding to the hash? Looking for the answer I went to the <a href="https://github.com/python/cpython/blob/main/Objects/dictobject.c#L1010" rel="nofollow noreferrer">do_lookup</a> function in CPython and found that the interpreter iterates over buckets to find the needed hash.</p>
<pre><code>do_lookup(PyDictObject *mp, PyDictKeysObject *dk, PyObject *key, Py_hash_t hash,
Py_ssize_t (*check_lookup)(PyDictObject *, PyDictKeysObject *, void *, Py_ssize_t ix, PyObject *key, Py_hash_t))
{
// ...
for (;;) {
ix = dictkeys_get_index(dk, i);
if (ix >= 0) {
Py_ssize_t cmp = check_lookup(mp, dk, ep0, ix, key, hash);
if (cmp < 0) {
return cmp;
} else if (cmp) {
return ix;
}
}
else if (ix == DKIX_EMPTY) {
return DKIX_EMPTY;
}
perturb >>= PERTURB_SHIFT;
i = mask & (i*5 + perturb + 1);
// ...
}
}
</code></pre>
<p>So, due to this, and also keeping the hash collision resolver in mind, it looks like this:</p>
<pre><code>O(1) only if first hash + no collision - best case
O(N) for worst case + no collision
O(N) + O(len(bucket)) for worst case + collision resolver
</code></pre>
<p>Am I missing something, or is something wrong with this logic?</p>
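<p>For what it's worth, a hedged toy model of the probe loop (a simplification of CPython's actual scheme): because the table is kept sparse (at most about 2/3 full) and resized as it grows, the expected number of probes per lookup is a small constant, so the average case stays O(1) even though the worst case is O(n).</p>

```python
def probe_sequence(key, mask, perturb_shift=5):
    """Yield the slot indices the probe loop would visit for `key`
    in a toy table of size mask + 1 (mirroring i = mask & (i*5 + perturb + 1))."""
    perturb = hash(key) & 0xFFFFFFFFFFFFFFFF
    i = perturb & mask
    while True:
        yield i
        perturb >>= perturb_shift
        i = (i * 5 + perturb + 1) & mask

mask = 7  # toy table with 8 slots
first = {k: next(probe_sequence(k, mask)) for k in ('a', 'b', 'c')}
print(first)  # each key lands directly at hash(key) & mask on the first probe
```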
|
<python><dictionary><hashtable>
|
2024-02-26 07:15:45
| 0
| 456
|
Ivan
|
78,058,655
| 188,331
|
How to specify additional parameters when using HuggingFace Evaluate's evaluate.combine() method?
|
<p>I am using the <a href="https://huggingface.co/docs/evaluate/en/index" rel="nofollow noreferrer">HuggingFace Evaluate library</a> to evaluate my results using 2 metrics. Here is the code:</p>
<pre><code>import evaluate
metric = evaluate.combine(
["sacrebleu", "chrf"], force_prefix=True
)
</code></pre>
<p>And in the <code>compute_metrics()</code> function, here is how I call the <code>metric.compute()</code>:</p>
<pre><code>def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
results = {"bleu": result["sacrebleu_score"], "chrf": result["chr_f_score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
results["gen_len"] = np.mean(prediction_lens)
results = {k: round(v, 4) for k, v in results.items()}
return results
</code></pre>
<p>However, I would like to specify the chrF to use <code>word_order=2</code>. How can I do so? Thanks.</p>
|
<python><bleu><huggingface-evaluate><chrf>
|
2024-02-26 04:37:21
| 2
| 54,395
|
Raptor
|
78,058,636
| 9,983,652
|
CondaVerificationError when installing PyTorch
|
<p>I am trying to install PyTorch using either of the commands below and I get a lot of errors. I am using Windows, CPU only.</p>
<pre><code>conda install pytorch::pytorch
or
conda install pytorch torchvision torchaudio cpuonly -c pytorch
</code></pre>
<p>Some of the errors are:</p>
<pre><code>CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Lib/site-packages/torchgen/static_runtime/gen_static_runtime_ops.py'
specified in the package manifest cannot be found.
CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Lib/site-packages/torchgen/static_runtime/generator.py'
specified in the package manifest cannot be found.
CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Lib/site-packages/torchgen/utils.py'
specified in the package manifest cannot be found.
CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Lib/site-packages/torchgen/yaml_utils.py'
specified in the package manifest cannot be found.
CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Scripts/convert-caffe2-to-onnx-script.py'
specified in the package manifest cannot be found.
CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Scripts/convert-onnx-to-caffe2-script.py'
specified in the package manifest cannot be found.
CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0
appears to be corrupted. The path 'Scripts/torchrun-script.py'
specified in the package manifest cannot be found.
</code></pre>
|
<python><pytorch><conda><miniconda>
|
2024-02-26 04:31:06
| 1
| 4,338
|
roudan
|
78,058,624
| 4,367,371
|
Jupyter Widgets Formatting
|
<p>I am using the following code:</p>
<pre><code>from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
def f(x):
return x
interact_manual(f, x='Hi there!');
</code></pre>
<p>It yields the following widget in the notebook:</p>
<p><a href="https://i.sstatic.net/PxPcp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PxPcp.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/2RZYM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2RZYM.png" alt="enter image description here" /></a></p>
<p>The issue is that the text box has a fixed default size that is very small. I would like to increase the size of the text box so that the user's text does not get cut off. I would also like to change the text shown on the button from the default "Run Interact" to something custom.</p>
<p>I have extensively gone over the documentation and looked online and I cannot find a single example of code that shows how you can set any kind of formatting for the interact/interact_manual function.</p>
<p>Is there any way to change the size of the text box and format other attributes of the widget?</p>
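<p>One hedged sketch (using documented ipywidgets APIs; the width value and button label are arbitrary choices): pass a pre-built <code>Text</code> widget with an explicit <code>Layout</code> to control the size, and use <code>interact_manual.options(manual_name=...)</code> for the button text.</p>

```python
import ipywidgets as widgets
from ipywidgets import interact_manual

def f(x):
    return x

# Wider input: hand interact a pre-built Text widget with an explicit Layout
text = widgets.Text(value='Hi there!', layout=widgets.Layout(width='400px'))

# Custom button label via options(manual_name=...)
interact_manual.options(manual_name='Submit')(f, x=text)
print(text.layout.width)  # 400px
```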
|
<python><jupyter-notebook><widget><jupyter><ipywidgets>
|
2024-02-26 04:24:26
| 1
| 3,671
|
Mustard Tiger
|
78,058,590
| 1,277,865
|
How to fix CppException forward() Expected a value of type 'Tensor' in pytorch android, but the same model works fine in python
|
<p>How can I fix the CppException "forward() Expected a value of type 'Tensor'" in PyTorch Android, where the model is loaded by Module.load('mel_ptmobile_v2.pt')?</p>
<p>android log:</p>
<pre><code>mel, melInputTensor = org.pytorch.Tensor$Tensor_float32, [1, 840]
Caused by: com.facebook.jni.CppException: forward() Expected a value of type 'Tensor' for argument 'x'
but instead found type 'Dynamic<128>[Dynamic<1>,]'.
Position: 1
Declaration: forward(__torch__.models.preprocess.___torch_mangle_11.AugmentMelSTFT self, Tensor x) -> Tensor
Exception raised from checkArg at /Users/huydo/Storage/mine/pytorch/aten/src/ATen/core/function_schema_inl.h:340 (most recent call first):
(no backtrace available)
at org.pytorch.NativePeer.forward(Native Method)
at org.pytorch.Module.forward(Module.java:52)
</code></pre>
<p>android invoke:</p>
<pre><code>val wavBatch = 1
val wavLength = 840
val dummyInput = dummyInput(wavBatch, wavLength, 0.0f)
val inputShape = longArrayOf(wavBatch.toLong(), wavLength.toLong())
melInputTensor = Tensor.fromBlob(dummyInput, inputShape)
if (DEBUG) Log.i(TAG,
"mel, melInputTensor = " + melInputTensor?.javaClass?.name + ", " + melInputTensor?.shape().contentToString()
)
// Caused by: com.facebook.jni.CppException: forward() Expected a value of type 'Tensor' for argument 'x'
// but instead found type 'Dynamic<128>[Dynamic<1>,]'.
val forward = melModel!!.forward(IValue.listFrom(melInputTensor))
// Caused by: java.lang.IllegalStateException: Expected IValue type Tuple, actual type Tensor
//val forward = melModel!!.forward(IValue.from(melInputTensor))
</code></pre>
<p>But the same model works fine in Python, loaded by torch.load('mel_ptmobile_v2.pt'):</p>
<p>python log:</p>
<pre><code>inputs 0 134400 [-1.4953613e-03 -1.6479492e-03 -1.4648438e-03 ...
inputs 1 torch.Size([1, 134400]) tensor([[-1.4954e-03, -1.6479e-03, -1.4648e-03, . <class 'torch.Tensor'>
inputs 2 torch.Size([1, 128, 420]) tensor([[[-0.7647, -0.5746, -0.6255, ..., -1.3985
outputs 0 torch.Size([1, 527]) tensor([[ -3.1250, -6.5625, -7.0312, -7.7500,
</code></pre>
<p>python invoke:</p>
<pre><code># model to preprocess waveform into mel spectrograms
mel = load_model_from_uri(mel_ptmobile_name)
(waveform, _) = librosa.core.load(audio_path, sr=sample_rate, mono=True)
if DEBUG: print('inputs 0', len(waveform), str(waveform)[:50])
waveform = torch.from_numpy(waveform[None, :]).to(device)
if DEBUG: print('inputs 1', waveform.shape, str(waveform)[:50], type(waveform))
# our models are trained in half precision mode (torch.float16)
# run on cuda with torch.float16 to get the best performance
# running on cpu with torch.float32 gives similar performance, using torch.bfloat16 is worse
with torch.no_grad(), autocast(device_type=device.type) if cuda else nullcontext():
spec = mel(waveform)
if DEBUG: print('inputs 2', spec.shape, str(spec)[:50])
</code></pre>
<p>python model:</p>
<pre><code>class AugmentMelSTFT(nn.Module):
def __init__(self, n_mels=128, sr=32000, win_length=800, hopsize=320, n_fft=1024, freqm=48, timem=192,
fmin=0.0, fmax=None, fmin_aug_range=10, fmax_aug_range=2000):
torch.nn.Module.__init__(self)
# adapted from: https://github.com/CPJKU/kagglebirds2020/commit/70f8308b39011b09d41eb0f4ace5aa7d2b0e806e
self.win_length = win_length
self.n_mels = n_mels
self.n_fft = n_fft
self.sr = sr
self.fmin = fmin
if fmax is None:
fmax = sr // 2 - fmax_aug_range // 2
if DEBUG: print(f"Warning: FMAX is None setting to {fmax} ")
self.fmax = fmax
self.hopsize = hopsize
self.register_buffer('window',
torch.hann_window(win_length, periodic=False),
persistent=False)
assert fmin_aug_range >= 1, f"fmin_aug_range={fmin_aug_range} should be >=1; 1 means no augmentation"
assert fmax_aug_range >= 1, f"fmax_aug_range={fmax_aug_range} should be >=1; 1 means no augmentation"
self.fmin_aug_range = fmin_aug_range
self.fmax_aug_range = fmax_aug_range
self.register_buffer("preemphasis_coefficient", torch.as_tensor([[[-.97, 1]]]), persistent=False)
if freqm == 0:
self.freqm = torch.nn.Identity()
else:
self.freqm = torchaudio.transforms.FrequencyMasking(freqm, iid_masks=True)
if timem == 0:
self.timem = torch.nn.Identity()
else:
self.timem = torchaudio.transforms.TimeMasking(timem, iid_masks=True)
def forward(self, x):
if onnx_conf.DEBUG: print('mel.forward,', x.shape, x[0][0].dtype, type(x))
x = nn.functional.conv1d(x.unsqueeze(1), self.preemphasis_coefficient).squeeze(1)
x = torch.stft(x, self.n_fft, hop_length=self.hopsize, win_length=self.win_length,
center=True, normalized=False, window=self.window, return_complex=False)
# x = stft(x, self.n_fft, hop_length=self.hopsize, win_length=self.win_length,
# center=True, normalized=False, window=self.window, return_complex=False)
x = (x ** 2).sum(dim=-1) # power mag
fmin = self.fmin + torch.randint(self.fmin_aug_range, (1,)).item()
fmax = self.fmax + self.fmax_aug_range // 2 - torch.randint(self.fmax_aug_range, (1,)).item()
# don't augment eval data
if not self.training:
fmin = self.fmin
fmax = self.fmax
mel_basis, _ = torchaudio.compliance.kaldi.get_mel_banks(self.n_mels, self.n_fft, self.sr,
fmin, fmax, vtln_low=100.0, vtln_high=-500.,
vtln_warp_factor=1.0)
mel_basis = torch.as_tensor(torch.nn.functional.pad(mel_basis, (0, 1), mode='constant', value=0),
device=x.device)
with torch.cuda.amp.autocast(enabled=False):
melspec = torch.matmul(mel_basis, x)
melspec = (melspec + 0.00001).log()
if self.training:
melspec = self.freqm(melspec)
melspec = self.timem(melspec)
melspec = (melspec + 4.5) / 5. # fast normalization
return melspec
</code></pre>
|
<python><android><deep-learning><pytorch><tensor>
|
2024-02-26 04:05:51
| 1
| 2,207
|
thecr0w
|
78,058,404
| 5,022,913
|
Python: Execute long-running task in a background process forking off FastAPI app
|
<h1>TL;DR</h1>
<p>In my <code>gunicorn/uvicorn</code>-run FastAPI app, I need to execute some long-running tasks in a completely non-blocking way, so the main <code>asyncio</code> event loop is not affected. The only approach I can think of is spinning off separate processes to fire the tasks and then somehow collect the results and signal the main loop. So basically, my workflow should look something like:</p>
<pre><code>1. Fire task in separate process (do ffmpeg video encoding and save files/data).
2. Forget about the running process and do other stuff in a normal way.
3. Get "I'm done" signal from the process(es) and check for errors.
4. Handle results.
</code></pre>
<h1>The long-running stuff</h1>
<p>My long-running task calls the <code>ffmpeg</code> video encoder on some files, using <code>asyncio.subprocess</code> to fork an external <code>ffmpeg</code> process. Then it does some file operations on the resulting files, and stores some data in the app's database. The code looks as follows (simplified version):</p>
<pre class="lang-py prettyprint-override"><code>import ffmpeg # ffmpeg-python (https://kkroening.github.io/ffmpeg-python/)
import asyncio
from pydantic import BaseModel
class ProcessResultModel(BaseModel):
returncode: int = None
stdout: str = ''
stderr: str = ''
class Config:
arbitrary_types_allowed = True
@ffmpeg.nodes.output_operator()
async def run_async_async(stream_spec, cmd='ffmpeg', pipe_stdin=False, pipe_stdout=False,
pipe_stderr=False, quiet=False, overwrite_output=False,
run: bool = True) -> Union[asyncio.subprocess.Process, ProcessResultModel]:
# compile ffmpeg args
args = ffmpeg._run.compile(stream_spec, cmd, overwrite_output=overwrite_output)
# pipe streams as required
stdin_stream = asyncio.subprocess.PIPE if pipe_stdin else None
stdout_stream = asyncio.subprocess.PIPE if pipe_stdout or quiet else None
stderr_stream = asyncio.subprocess.PIPE if pipe_stderr or quiet else None
# create subprocess (ffmpeg)
process = await asyncio.create_subprocess_exec(*args, stdin=stdin_stream,
stdout=stdout_stream, stderr=stderr_stream)
# if not told to run, simply return the process object
if not run: return process
# run process and collect results
stdout, stderr = await process.communicate()
# return results in a nice object
return ProcessResultModel(returncode=process.returncode,
stdout=stdout.decode('utf-8') if stdout else '',
stderr=stderr.decode('utf-8') if stderr else '')
</code></pre>
<h1>The problem</h1>
<p>If I simply call this in my FastAPI CRUD function as is:</p>
<pre class="lang-py prettyprint-override"><code>async def fire_task(stream):
res = await stream.run_async_async(run=True)
</code></pre>
<p>it will call <code>process.communicate()</code> and effectively block my main event loop until the entire task is done. If I call it with <code>run=False</code>, it will just return the initialized process, which I will need to start somewhere myself.</p>
<p>Is there a way to fire-and-forget the process without blocking the event loop and then at some point, get the process to signalize that it's done and collect the results - in a safe and robust manner?</p>
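For context, here is a minimal sketch of the fire-and-forget pattern I have in mind (plain `asyncio`, not tied to FastAPI; the names `fire_and_forget` and `on_done` are my own, not an existing API). The subprocess is scheduled as a task, the caller moves on immediately, and a done callback delivers the "I'm done" signal:

```python
import asyncio
from typing import Callable

def fire_and_forget(cmd: list[str], on_done: Callable[[asyncio.Task], None]) -> asyncio.Task:
    """Start `cmd` as a subprocess without awaiting it; `on_done` fires when it exits."""
    async def runner() -> tuple[int, bytes, bytes]:
        proc = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        # communicate() only suspends *this* task; the event loop keeps running
        stdout, stderr = await proc.communicate()
        return proc.returncode, stdout, stderr

    task = asyncio.create_task(runner())  # scheduled concurrently; caller moves on
    task.add_done_callback(on_done)       # the "I'm done" signal back to the app
    return task
```

The returned task can also be kept in an app-level registry so errors can be inspected later instead of (or in addition to) the callback.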
|
<python><multiprocessing><subprocess><python-asyncio><fastapi>
|
2024-02-26 02:32:10
| 1
| 584
|
s0mbre
|
78,057,906
| 3,241,846
|
Django Model self reference OneToOne, Can't set link while bulk_create
|
<p><strong>Context</strong></p>
<p>Bear with me as I'm new to Django. I have a model <code>Account</code> which I want to refer to itself in a one-to-one relationship. <code>Account</code> also has a one-to-many relationship with <code>Activities</code>.</p>
<p><strong>Goal</strong></p>
<ol>
<li>When saving Account, save linked_account_id as a self fk</li>
<li>Being able to fetch all activities linked to BOTH accounts or just one</li>
</ol>
<p><strong>Problem</strong>
error stacks:</p>
<ol>
<li>cannot save linked_account_id (string_value), it expects type <code>Account</code></li>
</ol>
<p>It is also possible that the linked account has not yet been persisted, so it won't return an entity when trying to save the object by <code>linked_account_id</code>.</p>
<p>This is the data I'm trying to save using <code>bulk_create</code>:</p>
<pre><code>for data in filteredList:
account_data = {
'accountNumber': data['id'],
'status': data['status'],
'last_synced': parse_datetime(data['last_synced_at']),
'created_at': parse_datetime(data['created_at']),
'updated_at': parse_datetime(data['updated_at']),
'current_balance': data['current_balance']['amount'],
'net_deposits': data['net_deposits']['amount'],
'linked_account_id': data['linked_account_id'],  # FK (self reference)
'currency': data['base_currency'],
'type': data['account_type'],
}
</code></pre>
<p>this is my Model code</p>
<pre><code>class Account (models.Model):
accountNumber = models.CharField(default=0, max_length=20,primary_key=True)
type = models.CharField(default='', max_length=20)
current_balance = models.DecimalField(default=0, max_digits=20, decimal_places=0)
net_deposits = models.DecimalField(default=0, max_digits=20, decimal_places=0)
currency = models.CharField(default='', max_length=10)
last_synced = models.DateTimeField(default=timezone.now)
is_primary = models.BooleanField(default=False)
linked_account_id = models.OneToOneField('self', on_delete=models.SET_NULL, null=True, blank=True)
created_at = models.DateTimeField( default=timezone.now)
updated_at = models.DateTimeField(default=timezone.now)
status = models.CharField(default='', max_length=10)
def __str__(self):
return self.type # Or whatever you want to represent your Account instance with
def set_linked_account(self, other_account_number):
try:
related_account = Account.objects.get(accountNumber=other_account_number)
self.linked_account_id = related_account
self.save()
except Account.DoesNotExist:
print("The related account does not exist.")
def get_all_linked_activities(self):
if self.linked_account_id:
return Activity.objects.filter(models.Q(account=self) | models.Q(account=self.linked_account_id))
else:
return Activity.objects.filter(account=self)
</code></pre>
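One common pattern for this situation (sketched here with plain dicts standing in for <code>Account</code> rows, since the linked target may not exist yet at save time) is two passes: create every row first with the self-reference left empty, then resolve the links once all rows exist. In Django terms that would be <code>bulk_create</code> with <code>linked_account_id=None</code> followed by a second loop plus <code>bulk_update</code>; the field names below mirror the question but the code itself is only an illustrative stand-in, not ORM code:

```python
# Hypothetical rows mirroring the question's account_data dicts
raw_rows = [
    {"accountNumber": "ACC-1", "linked_account_id": "ACC-2"},
    {"accountNumber": "ACC-2", "linked_account_id": "ACC-1"},
    {"accountNumber": "ACC-3", "linked_account_id": None},
]

# Pass 1: create every account first, leaving the self-reference empty
accounts = {
    row["accountNumber"]: {"accountNumber": row["accountNumber"], "linked": None}
    for row in raw_rows
}

# Pass 2: every target now exists, so links can be resolved safely
for row in raw_rows:
    target_id = row["linked_account_id"]
    if target_id is not None:
        accounts[row["accountNumber"]]["linked"] = accounts.get(target_id)
```

This sidesteps both problems at once: the ORM never receives a raw string where it expects an <code>Account</code>, and no lookup happens before the target row has been persisted.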
|
<python><django>
|
2024-02-25 22:26:47
| 1
| 677
|
user3241846
|
78,057,893
| 1,609,514
|
What is the 'u' parameter estimated by Statsmodel's ARIMA fit method?
|
<p>I'm struggling to find mention of the <code>'u'</code> parameter that is returned by the statsmodels <a href="https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima.model.ARIMA.fit.html#statsmodels.tsa.arima.model.ARIMA.fit" rel="nofollow noreferrer">ARIMA.fit</a> method in the following ARMAX model parameter estimation example:</p>
<pre><code>import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
# Sample of the full input-output dataset
id_data = pd.DataFrame({
'u': [0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'y': [-1.4369, -0.999, -0.0325, 0.8435, 0.4339, -0.2925,
-0.8885, -2.3191, -4.004, -5.4779, -7.053, -7.5489,
-8.779, -8.9262, -8.5207, -8.3915, -8.5699, -8.2192,
-8.284, -7.6011]
})
arma22 = ARIMA(id_data.y, exog=id_data.u, order=(2, 0, 2), trend='n').fit()
print(arma22.params)
print(arma22.summary())
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>u -0.564553
ar.L1 1.798081
ar.L2 -0.829465
ma.L1 -0.859744
ma.L2 0.998002
sigma2 0.220407
dtype: float64
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 20
Model: ARIMA(2, 0, 2) Log Likelihood -18.559
Date: Sun, 25 Feb 2024 AIC 49.119
Time: 14:26:11 BIC 55.093
Sample: 0 HQIC 50.285
- 20
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
u -0.5646 0.303 -1.861 0.063 -1.159 0.030
ar.L1 1.7981 0.158 11.407 0.000 1.489 2.107
ar.L2 -0.8295 0.164 -5.059 0.000 -1.151 -0.508
ma.L1 -0.8597 21.365 -0.040 0.968 -42.735 41.016
ma.L2 0.9980 49.501 0.020 0.984 -96.022 98.018
sigma2 0.2204 10.881 0.020 0.984 -21.106 21.547
===================================================================================
Ljung-Box (L1) (Q): 1.05 Jarque-Bera (JB): 0.81
Prob(Q): 0.31 Prob(JB): 0.67
Heteroskedasticity (H): 0.78 Skew: 0.27
Prob(H) (two-sided): 0.76 Kurtosis: 2.18
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
</code></pre>
<p>I was expecting an ARMAX(2, 2) model to have four parameters (four degrees of freedom). I've turned off the trend (<code>const</code>) parameter so it can't be that.</p>
<p>There is a worked-example of an ARMAX(1, 1) model <a href="https://www.statsmodels.org/stable/examples/notebooks/generated/statespace_sarimax_stata.html#ARIMA-Example-4:-ARMAX-(Friedman)" rel="nofollow noreferrer">here</a> but no <code>u</code> parameter appears in the results.</p>
<p>If someone can point me to part of the documentation which mentions what 'u' is I'd be grateful.</p>
|
<python><statsmodels><arima>
|
2024-02-25 22:19:39
| 1
| 11,755
|
Bill
|
78,057,740
| 23,182,657
|
Chrome 122 - How to allow insecure content? (Insecure download blocked)
|
<p>I'm unable to test file download with Selenium (python), after Chrome update to the version '122.0.6261.70'.</p>
<p>Previously, running Chrome with the '--allow-running-insecure-content' arg did the trick. The same is suggested all over the net. On some sites one additional arg is suggested: '--disable-web-security'.</p>
<p>But both change nothing for me (the warning keeps appearing).</p>
<p>Does anybody know if something has been changed between the 121 and 122 versions?</p>
<p>Is there some arg or pref that I'm missing?</p>
<p>Warning image for the reference:</p>
<p><a href="https://i.sstatic.net/cEOtD.png" rel="noreferrer"><img src="https://i.sstatic.net/cEOtD.png" alt="enter image description here" /></a></p>
<br>
Driver creation (simplified):
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
for arg in ["--allow-running-insecure-content", "--disable-web-security"]:
options.add_argument(arg)
driver = webdriver.Chrome(options=options)
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver><ui-automation>
|
2024-02-25 21:22:00
| 7
| 971
|
sashkins
|
78,057,716
| 340,142
|
Trying to run Gemma on Kaggle and got issue 'keras_nlp.models' has no attribute 'GemmaCausalLM'
|
<p>I'm trying to run Gemma on Keras with this model: <a href="https://www.kaggle.com/models/keras/gemma/frameworks/keras/variations/gemma_instruct_7b_en" rel="nofollow noreferrer">https://www.kaggle.com/models/keras/gemma/frameworks/keras/variations/gemma_instruct_7b_en</a>.
I'm reproducing the example available under "Model Card" on the above page.
When I run this code:</p>
<pre><code>gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)
# Generate with batched prompts.
gemma_lm.generate(["Keras is a", "I want to say"], max_length=30)
</code></pre>
<p>I get this error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_7b_en")
2 gemma_lm.generate("Keras is a", max_length=30)
4 # Generate with batched prompts.
AttributeError: module 'keras_nlp.models' has no attribute 'GemmaCausalLM'
</code></pre>
<p>How can I fix this?</p>
|
<python><kaggle><gemma>
|
2024-02-25 21:13:02
| 2
| 2,126
|
Marecky
|
78,057,705
| 19,077,881
|
How to access a value in a Polars struct generated from value_counts?
|
<p>If I have a df such as:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"color": ["red", "blue", "red", "green", "blue", "blue"]
})
</code></pre>
<p>then if I want the count for color = 'red' in Pandas I could simply use:</p>
<pre class="lang-py prettyprint-override"><code>df.to_pandas()["color"].value_counts()["red"]
# 2
</code></pre>
<p>which is clear and obvious.</p>
<p>In Polars, <code>value_counts()</code> produces a DF with a column of struct values:</p>
<pre class="lang-py prettyprint-override"><code>df.select(pl.col("color").value_counts())
</code></pre>
<pre><code>shape: (3, 1)
┌─────────────┐
│ color │
│ --- │
│ struct[2] │
╞═════════════╡
│ {"red",2} │
│ {"blue",3} │
│ {"green",1} │
└─────────────┘
</code></pre>
<p>which could be split into a DF with separate columns using</p>
<pre><code>counts = df.select(pl.col("color").value_counts()).unnest('color')
</code></pre>
<p>and then the required value can be obtained using</p>
<pre><code>counts.select(pl.col('count').filter(pl.col('color') == 'red')).item()
</code></pre>
<p>Similarly, <code>group_by('color').len()</code> could be used instead of <code>value_counts</code>.</p>
<p>This all seems rather complicated for such a frequent requirement.
Is there a simpler way of extracting a single count value using Polars, and more generally of accessing struct values?</p>
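Purely for comparison (this is the standard library, not a Polars API): the one-liner ergonomics being asked for already exist on a plain list via <code>collections.Counter</code>, which behaves like the pandas <code>value_counts()["red"]</code> idiom:

```python
from collections import Counter

colors = ["red", "blue", "red", "green", "blue", "blue"]

# Counter supports direct key lookup, like pandas value_counts()["red"]
red_count = Counter(colors)["red"]
```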
|
<python><dataframe><python-polars>
|
2024-02-25 21:06:16
| 2
| 5,579
|
user19077881
|
78,057,611
| 4,751,165
|
Py installed from scoop can't find installed Python
|
<p>After installing python using scoop:</p>
<pre><code>scoop install python
</code></pre>
<p>When running <code>py -3 script.py</code>, py.exe, which is the Python Launcher for Windows, can't find an installed Python.</p>
<pre><code>py -3 script.py
No installed Python found!
</code></pre>
|
<python><windows><powershell><scoop-installer>
|
2024-02-25 20:36:20
| 1
| 16,236
|
Pau
|
78,057,602
| 13,101,893
|
How to draw a curve through an arbitrary number of true waypoints that may double back
|
<p>I have been exploring many options to do this. The idea is that I will plot points and a curve will be drawn through them in order; however, the curve will not connect the starting and ending points.<br />
This is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline
plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True
x = []
y = []
pts: list[tuple[float, float]] = []
xnew = np.linspace(start=0, stop=10, num=1000)
fig = plt.figure()
graph = plt.subplot(xlim=(0, 10), ylim=(0, 10))
pt_sorter = lambda li: li[0]
def mouse_event(event):
global xnew
pts.append((event.xdata, event.ydata))
pts.sort(key=pt_sorter)
x = [pt[0] for pt in pts]
y = [pt[1] for pt in pts]
graph.cla()
graph.set_xlim([0, 10])
graph.set_ylim([0, 10])
xnew = np.linspace(start=min(x), stop=max(x), num=1000)
graph.plot(x, y, ls='', marker='o', color='r')
if len(pts) > 2:
spl = CubicSpline(np.array(x), np.array(y))
graph.plot(xnew, spl(xnew), color='b')
plt.show()
cid = fig.canvas.mpl_connect('button_press_event', mouse_event)
plt.show()
</code></pre>
<p>This approach using a CubicSpline and scipy interpolate allows me to interactively plot points and draw a smooth curve through them. However, it does not support "doubling back" on the x-axis. What I mean is I can't have the curve go right and then cut back left. I can't figure out how to do this. I'm trying to make a route planning software. Maybe drawing the curve in <code>matplotlib</code> wasn't the greatest idea. I'm not sure how else to do it though. Any pointers would be appreciated.</p>
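One way around the single-valued-function limitation, sketched below under the assumption that a parametric curve is acceptable (the function name <code>smooth_path</code> is my own): instead of fitting <code>y = f(x)</code>, fit two splines <code>x(t)</code> and <code>y(t)</code> against the cumulative chord length <code>t</code> between consecutive waypoints. The parameter is always increasing even when the path cuts back left, so doubling back on the x-axis works naturally:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(pts, num=1000):
    """Fit a parametric spline (x(t), y(t)) through waypoints in click order.

    The parameter t is the cumulative chord length, so the curve is free
    to double back along the x-axis.
    """
    pts = np.asarray(pts, dtype=float)
    seg = np.sqrt((np.diff(pts, axis=0) ** 2).sum(axis=1))  # segment lengths
    t = np.concatenate(([0.0], np.cumsum(seg)))             # increasing parameter
    sx = CubicSpline(t, pts[:, 0])
    sy = CubicSpline(t, pts[:, 1])
    tt = np.linspace(t[0], t[-1], num)
    return sx(tt), sy(tt)
```

In the mouse handler, dropping the sort by x and plotting `graph.plot(*smooth_path(pts))` would then draw the route in the order the points were clicked.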
|
<python><numpy><matplotlib><scipy><curve-fitting>
|
2024-02-25 20:35:23
| 0
| 303
|
Xbox One
|
78,057,573
| 11,760,835
|
QFormLayout does not expand width of left column widgets
|
<p>I'm trying to create a form layout in a Right-to-left application. I think that the expanding fields grow flag is not working because the most common usage is having the input fields at the right side. Some input widgets do not have the expanding width policy working and they look bad. What could I do?</p>
<p><a href="https://i.sstatic.net/ifNuL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ifNuL.png" alt="application demo screenshot" /></a></p>
<p>Demo code:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QApplication, QMainWindow, QWidget, QFormLayout, QLabel, QLineEdit, QSpinBox, QSizePolicy
class FormLayoutDemo(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
self.central_widget = QWidget()
self.setCentralWidget(self.central_widget)
self.central_layout = QFormLayout()
# This does not seem to work :(
self.central_layout.setFieldGrowthPolicy(QFormLayout.FieldGrowthPolicy.ExpandingFieldsGrow)
self.central_widget.setLayout(self.central_layout)
self.name_label = QLabel("Name")
self.name_input = QLineEdit()
self.central_layout.addRow(self.name_input, self.name_label)
self.age_label = QLabel("Age")
self.age_input = QSpinBox()
# Also not working
self.age_input.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Fixed)
self.central_layout.addRow(self.age_input, self.age_label)
if __name__ == "__main__":
app = QApplication()
main_window = FormLayoutDemo()
main_window.show()
app.exec()
</code></pre>
|
<python><pyside><qtwidgets>
|
2024-02-25 20:25:29
| 1
| 394
|
Jaime02
|
78,057,555
| 386,861
|
nbextensions not showing in jupyter notebook: Command `jupyter-contrib` not found
|
<p>I'm trying to install nbextensions in jupyter notebook so I can hide code from slideshow.</p>
<pre><code>pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
</code></pre>
<p>Error.</p>
<pre><code>Jupyter command `jupyter-contrib` not found.
</code></pre>
<p>Any ideas?</p>
|
<python><jupyter-notebook><jupyter-contrib-nbextensions>
|
2024-02-25 20:17:04
| 1
| 7,882
|
elksie5000
|
78,057,467
| 17,800,932
|
Fluent interfaces with pipelining or method chaining in Python
|
<p>I am coming to Python from F# and Elixir, and I am struggling heavily in terms of cleanly coding data transformations. Every language I have ever used has had a concept of a pipeline operator and/or method chaining, and so with Python, I am confused on finding an <em>easy</em> way to accomplish this that doesn't stray away from so-called "Pythonic" code.</p>
<p>Here's a simple collection of some processing functions I might have:</p>
<pre class="lang-py prettyprint-override"><code>def convert_int_to_bool(integer: int) -> bool:
match integer:
case 0:
return False
case 1:
return True
case _:
raise ValueError(
f"Integer value must be either 0 or 1 to be converted to `bool`. Given: {integer}"
)
def convert_string_to_characters(string: str) -> list[str]:
return [character for character in string]
</code></pre>
<p>In Python, I can do something like:</p>
<pre class="lang-py prettyprint-override"><code>def test(response: str) -> <some type>:
[a, b, c, d] = map(convert_int_to_bool, map(int, convert_string_to_characters(response)))
...
</code></pre>
<p>But this is non-ideal, even in a simple case of mapping over the list twice. Well, then I know I could do something like:</p>
<pre class="lang-py prettyprint-override"><code>[a, b, c, d] = [convert_int_to_bool(int(character)) for character in response]
</code></pre>
<p>That's <em>okay</em>, but it again doesn't scale all that well to a chain of processing functions, especially if there's a <code>filter</code> inside there. So what I'd like to do is <em>something like</em>:</p>
<pre class="lang-py prettyprint-override"><code>[a, b, c, d] = response.convert_string_to_characters().map(int).map(convert_int_to_bool)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>[a, b, c, d] = response |> convert_string_to_characters() |> map(int) |> map(convert_int_to_bool)
</code></pre>
<p>For the first proposed way with method chaining, it seems I could <em>potentially</em> do this by extending the built-in types <code>str</code> and <code>list</code>, but from what I have read, that approach does not integrate well with the built-in literal constructions of those types.</p>
<hr />
<p>Are there any libraries or ways of overloading/overriding built-in types or defining custom operators that would allow me to do this in a clean way? Thank you.</p>
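In the meantime, a small stdlib-only helper gets close to the <code>|&gt;</code> style without touching built-in types — a sketch, where the names <code>pipe</code>, <code>mapping</code>, and <code>filtering</code> are my own conventions rather than any standard API:

```python
from functools import reduce
from typing import Any, Callable

def pipe(value: Any, *funcs: Callable[[Any], Any]) -> Any:
    """Thread `value` through `funcs` left to right, like F#'s |> operator."""
    return reduce(lambda acc, f: f(acc), funcs, value)

def mapping(f: Callable) -> Callable:
    """Curried map, so it reads naturally inside a pipeline."""
    return lambda xs: [f(x) for x in xs]

def filtering(pred: Callable) -> Callable:
    """Curried filter, same idea."""
    return lambda xs: [x for x in xs if pred(x)]

# "0110" -> ['0','1','1','0'] -> [0, 1, 1, 0] -> [False, True, True, False]
flags = pipe("0110", list, mapping(int), mapping(bool))
```

Each stage is an ordinary one-argument function, so the pipeline reads left to right like `response |> chars |> map int |> map bool`, and a `filtering(...)` step slots in anywhere without nesting.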
|
<python><functional-programming><pipeline><fluent><method-chaining>
|
2024-02-25 19:54:31
| 2
| 908
|
bmitc
|
78,057,400
| 11,724,014
|
GnuGPG python without binary and cross-platform
|
<p>I am looking for a python project to encrypt and decrypt files using the <a href="https://en.wikipedia.org/wiki/GNU_Privacy_Guard" rel="nofollow noreferrer">GNU Privacy Guard</a> protocol.</p>
<p>There exists packages like:</p>
<ul>
<li><a href="https://pypi.org/project/python-gnupg/" rel="nofollow noreferrer">python-gnupg</a></li>
<li><a href="https://pypi.org/project/pretty-bad-protocol/" rel="nofollow noreferrer">pretty-bad-protocol</a></li>
</ul>
<p>But they both need to refer to a binary, whose installation is platform-dependent, and it is not possible to inspect its code line by line as you can with a Python script.</p>
<hr />
<p>I am looking for a package that implements encryption and decryption for this standard using only Python scripts. I am aware this will be much slower.</p>
<p><strong>Reasons are:</strong></p>
<ul>
<li>On Windows, the first use of gpg2.exe takes a while and sometimes crashes (under very extensive use)</li>
<li>A Python script can be opened by the user to check for any malicious lines of code</li>
<li>It doesn't require any extra installation step and can be used on any computer that has Python installed (I want to create a plugin for a piece of software (QGIS) that ships its own Python install, but it is only possible to include pure Python scripts).</li>
</ul>
<hr />
<p><strong>My question</strong>:</p>
<p>Where is it possible to find the GnuPG pseudo-code so I can recreate it from scratch in Python, OR a project by someone who has already done it?</p>
|
<python><gnupg><pgp><python-gnupgp>
|
2024-02-25 19:36:02
| 0
| 1,314
|
Vincent Bénet
|
78,057,395
| 16,728,369
|
django-tailwind is not rebuilding styles
|
<p>I'm trying to use Tailwind with my Django project and I've followed this documentation
(<a href="https://django-tailwind.readthedocs.io/en/latest/installation.html" rel="nofollow noreferrer">https://django-tailwind.readthedocs.io/en/latest/installation.html</a>). When I make changes to Tailwind classes in my HTML files, the terminal shows a rebuild, but somehow the changes are not reflected in my design.</p>
<p>settings.py</p>
<pre><code>
INSTALLED_APPS = [
'django.contrib.admin',
'authen',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'tailwind',
'theme',
'django_browser_reload',
'rental',
'roommate'
]
TAILWIND_APP_NAME = 'theme'
INTERNAL_IPS = [
"127.0.0.1",
]
</code></pre>
<p>index.html in my templates folder where theme, manage.py etc folders exist.</p>
<pre><code>{% load static tailwind_tags %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Rentopia</title>
<!-- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css"> -->
<link rel="icon" href="{% static 'images/home.png' %}" type="image/png">
{% tailwind_css %}
</code></pre>
<p>I've tried this in package.json, but it still does nothing:</p>
<pre><code>"dev": "cross-env NODE_ENV=development CHOKIDAR_USEPOLLING=1 tailwindcss --postcss -i ./src/styles.css -o ../static/css/dist/styles.css -w",
</code></pre>
|
<python><node.js><django><django-tailwind>
|
2024-02-25 19:33:47
| 0
| 469
|
Abu RayhaN
|
78,057,362
| 10,452,700
|
Why is it not possible to use matplotlib.animation when the data type within the pandas DataFrame is datetime (timestamp x-axis)?
|
<p>I'm experimenting with 1D time-series data and trying to reproduce the referenced approach via animation over my own data in a Google Colab notebook.</p>
<p>I faced a problem reproducing the animation from that post when the column values passed for the x-axis are of <code>'datetime'</code> type (a <strong>timestamp</strong>)! I assume there is a bug somewhere, because it also fails when I set the <code>timestamp</code> column as the index and animate the plots with x-axis values passed via <code>df.index</code>.</p>
<p>What I have tried unsuccessfully (the script below is based on the post listed in the references at the end):</p>
<pre class="lang-py prettyprint-override"><code>#-----------------------------------------------------------
# Libs
#-----------------------------------------------------------
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.patches import Rectangle
from IPython.display import HTML
#-----------------------------------------------------------
# LOAD THE DATASET
#-----------------------------------------------------------
df = pd.read_csv('https://raw.githubusercontent.com/amcs1729/Predicting-cloud-CPU-usage-on-Azure-data/master/azure.csv')
df['timestamp'] = pd.to_datetime(df['timestamp'])
df = df.rename(columns={'min cpu': 'min_cpu',
'max cpu': 'max_cpu',
'avg cpu': 'avg_cpu',})
df.head()
# Data preparation
# ==============================================================================
sliced_df = df[['timestamp', 'avg_cpu']]
# convert column to datetime object
#sliced_df['timestamp'] = pd.to_datetime(sliced_df['timestamp'], format='%Y-%m-%d %H:%M:%S')
#df = df.set_index('timestamp')
step_size = 4*287
data_train = sliced_df[:-step_size]
data_test = sliced_df[-step_size:] #unseen
#-----------------------------------------------------------
# Animation
#-----------------------------------------------------------
# create plot
plt.style.use("ggplot") # <-- set overall look
fig, ax = plt.subplots( figsize=(10,4))
# plot data
plt.plot(list(sliced_df['timestamp']), sliced_df['avg_cpu'], 'r-', linewidth=0.5, label='data or y')
# make graph beautiful
plt.plot([], [], 'g-', label="Train", linewidth=8, alpha=0.3)
plt.plot([], [], 'b-', label="Test", linewidth=8, alpha=0.3)
step_size = 287
selected_ticks = sliced_df['timestamp'][::step_size]
plt.xticks(selected_ticks, rotation=90)
#plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d %H:%M:%S'))
Y_LIM = 2*10**8 #df[f'{name_columns}'].max()
TRAIN_WIDTH = 288*27
TEST_WIDTH = 357*1
print(TRAIN_WIDTH)
print(TEST_WIDTH)
#plt.title(f'Data split:\n training-set {100*(len(data_train)/len(df)):.2f}% = {TRAIN_WIDTH/288:.2f} days and test-set {100*(len(data_test)/len(df)):.2f}% = {TEST_WIDTH/288:.2f} days')
plt.title(f'Data split:\n training-set % = days and test-set % = days')
plt.ylabel(f' usage', fontsize=15)
plt.xlabel('Timestamp', fontsize=15)
plt.grid(True)
#plt.legend(loc="upper left")
plt.legend(bbox_to_anchor=(1.3,.9), loc="upper right")
fig.tight_layout(pad=1.2)
def init():
rects = [Rectangle((0, 0) , TRAIN_WIDTH, Y_LIM, alpha=0.3, facecolor='green'),
Rectangle((0 + TRAIN_WIDTH, 0), TEST_WIDTH, Y_LIM, alpha=0.3, facecolor='blue')]
patches = []
for rect in rects:
patches.append(ax.add_patch(rect))
return patches
def update(x_start):
patches[0].xy = (x_start, 0)
patches[1].xy = (x_start + TRAIN_WIDTH, 0)
return patches
# create "Train" and "Test" areas
patches = init()
ani = FuncAnimation(
fig,
update,
frames= np.linspace(0, 288, 80), # all starting points
interval=50,
blit=True)
HTML(ani.to_html5_video())
</code></pre>
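A likely culprit (an assumption, since the exact traceback isn't shown): <code>Rectangle</code> takes plain numbers, while a datetime x-axis works internally in matplotlib's float day units, so rectangle positions like <code>(0, 0)</code> and widths like <code>288*27</code> no longer line up with the axis. Converting the timestamps with <code>matplotlib.dates.date2num</code> and expressing widths in days keeps the two in the same units — a minimal sketch:

```python
import datetime as dt
from matplotlib.dates import date2num, num2date

# matplotlib stores datetimes as floats (days since its epoch), so Rectangle
# coordinates and widths on a datetime axis must use those same float-day units.
start = dt.datetime(2017, 1, 1)
x0 = date2num(start)      # float x-position on a datetime axis
train_days = 27.0         # rectangle width, expressed in days
# Rectangle((x0, 0), train_days, Y_LIM) would now align with the datetime axis
end = num2date(x0 + train_days)
```

In the animation, the frame values passed to <code>update</code> would then also need to be float-day offsets (e.g. <code>np.linspace(0, 1, 80)</code> days) added to <code>x0</code> rather than raw sample indices.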
<hr />
<p>My current output:
<img src="https://i.imgur.com/pEaJU5D.png" alt="img" /></p>
<hr />
<p>Expected animation output (but with full timestamp):</p>
<p><img src="https://d33wubrfki0l68.cloudfront.net/f9e6d3495ba5437512a3ff12ac0bdef7fa1745ae/7ef53/images/backtesting_refit_fixed_train_size.gif" alt="ani" /></p>
<hr />
<p>Reference:</p>
<ul>
<li><a href="https://stackoverflow.com/q/76836503/10452700">How can reproduce animation for rolling window over time?</a></li>
</ul>
|
<python><pandas><sliding-window><matplotlib-animation>
|
2024-02-25 19:23:37
| 1
| 2,056
|
Mario
|
78,057,262
| 823,859
|
Cannot update llamaindex
|
<p>After <code>llamaindex</code> introduced v0.10 in February 2024, it introduced a lot of breaking changes to imports. I am trying to update <code>llama-index</code> within a <code>conda</code> environment, but I receive the following error:</p>
<pre><code>> pip install llama-index --upgrade
ERROR: Cannot install llama-index-cli because these package versions have conflicting dependencies.
The conflict is caused by:
llama-index-vector-stores-chroma 0.1.4 depends on onnxruntime<2.0.0 and >=1.17.0
llama-index-vector-stores-chroma 0.1.3 depends on onnxruntime<2.0.0 and >=1.17.0
llama-index-vector-stores-chroma 0.1.2 depends on onnxruntime<2.0.0 and >=1.17.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>I have tried <code>pip install llama-index-vector-stores-chroma</code> but get the same error.</p>
<p>I have also tried installing <code>onnxruntime</code> but get this error:</p>
<pre><code>pip install onnxruntime
ERROR: Could not find a version that satisfies the requirement onnxruntime (from versions: none)
ERROR: No matching distribution found for onnxruntime
</code></pre>
<p>How can I update <code>llama-index</code>?</p>
|
<python><pip><llama-index>
|
2024-02-25 18:56:36
| 4
| 7,979
|
Adam_G
|
78,057,145
| 6,741,482
|
Object for storing types for use during static analysis
|
<p>Is there a way/convention in Python (3.10+) to store types defined in a module, within some container/registry class so that static analysis tools like <code>mypy</code> can correctly infer the type of an annotation?</p>
<p>Context for this problem:</p>
<ul>
<li>Format standard that in-house packages should use the form <code>from <package> import X</code></li>
<li>Module in question can have 100+ types</li>
<li>Consuming module generates 1:1 type for each type in the other module</li>
</ul>
<p>Starting from the last item, a 1:1 example looks like:</p>
<pre><code>from <package> import TypeBase, TypeOne, TypeTwo
class OuterBase:
attr: TypeBase
class OuterOne:
attr: TypeOne
class OuterTwo:
attr: TypeTwo
</code></pre>
<p>This would become frustrating if I have to import 100+ types, so it would be convenient to have some container that I can reference the types for static analysis similar to an <code>Enum</code> but something like <code>TypeEnum</code>.</p>
<p>There are two ways that this outcome can be achieved:</p>
<ul>
<li>Import the package and using its namespace as the "container"</li>
</ul>
<pre><code>from <package-parent> import <package>
class OuterBase:
attr: <package>.TypeBase
</code></pre>
<ul>
<li>Create a non-instantiable class holding class attributes with <code>TypeAlias</code></li>
</ul>
<pre><code># inside <package>
class PackageTypes:
TypeBase: TypeAlias = TypeBase
TypeOne: TypeAlias = TypeOne
...
# inside consumer package
from <package> import PackageTypes
class OuterBase:
attr: PackageTypes.TypeBase
</code></pre>
<p>The former breaks the "format standard" imposed on the project that imports should get to their final absolute form (i.e. no in-house packages import in their entirety, only names within modules); this is something I've inherited and cannot change. It would however, be the most idiomatic solution, but it's out of my hands.</p>
<p>The latter is concise, but I have to have hardcoded class with 100+ class attributes that are really just references to types; while not terrible, it seems repetitive and bulky.</p>
<p>Does Python have a native solution given these constraints or perhaps Pydantic 2?</p>
|
<python><python-3.x><mypy><pydantic>
|
2024-02-25 18:23:27
| 0
| 3,878
|
pstatix
|
78,056,946
| 8,223,979
|
How to read a huge csv faster?
|
<p>I tried using pyarrow without success. My code:</p>
<pre><code>df = pd.read_csv("file.csv", engine='pyarrow')
</code></pre>
<p>I get this error:</p>
<pre><code>"pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)"
</code></pre>
<p>I cannot find any argument to change the block size. Any suggestions?</p>
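<p>For illustration only (not part of the question): one common fallback is to read in chunks with the default C engine and concatenate, which sidesteps the pyarrow block-boundary error at some cost in speed. The in-memory CSV below stands in for the real file:</p>

```python
import io

import pandas as pd

# Small in-memory CSV standing in for the huge file on disk.
csv_data = io.StringIO("a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10)))

# Read in chunks with the default C engine, then concatenate the pieces.
chunks = pd.read_csv(csv_data, chunksize=4)
df = pd.concat(chunks, ignore_index=True)
# df now holds all 10 rows
```

<p><code>chunksize</code> is a standard <code>read_csv</code> parameter; with the real file you would pass the path instead of the <code>StringIO</code> object.</p>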
|
<python><pandas><csv><io><pyarrow>
|
2024-02-25 17:24:49
| 2
| 1,097
|
Caterina
|
78,056,934
| 4,451,315
|
pandas or Polars: find index of previous element larger than current one
|
<p>Suppose my data looks like this:</p>
<pre class="lang-py prettyprint-override"><code>data = {
'value': [1,9,6,7,3, 2,4,5,1,9]
}
</code></pre>
<p>For each row, I would like to find the row number of the latest previous element larger than the current one.</p>
<p>So, my expected output is:</p>
<pre><code>[None, 0, 1, 2, 1, 1, 3, 4, 1, 0]
</code></pre>
<ul>
<li>the first element <code>1</code> has no previous element, so I want <code>None</code> in the result</li>
<li>the next element <code>9</code> is at least as large as all its previous elements, so I want <code>0</code> in the result</li>
<li>the next element <code>6</code> has its previous element <code>9</code>, which is larger than it. The distance between them is <code>1</code>. So, I want <code>1</code> in the result here.</li>
</ul>
<p>I'm aware that I can do this in a loop in Python (or in C / Rust if I write an extension).</p>
<p>My question: is it possible to solve this <strong>using entirely dataframe operations</strong>? pandas or Polars, either is fine. But only dataframe operations.</p>
<p>So, none of the following please:</p>
<ul>
<li><code>apply</code></li>
<li><code>map_elements</code></li>
<li><code>map_rows</code></li>
<li><code>iter_rows</code></li>
<li>Python for loops which loop over the rows and extract elements one-by-one from the dataframes</li>
</ul>
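<p>For reference, here is a plain-Python loop that reproduces the expected output above; this is exactly the approach the question wants to replace with dataframe operations, but it pins down the specification:</p>

```python
def dist_to_prev_larger(values):
    """Distance to the latest previous element strictly larger than the
    current one; 0 if no such element exists, None for the first row."""
    result = []
    for i, current in enumerate(values):
        if i == 0:
            result.append(None)
            continue
        distance = 0
        for j in range(i - 1, -1, -1):  # scan backwards for a larger value
            if values[j] > current:
                distance = i - j
                break
        result.append(distance)
    return result

out = dist_to_prev_larger([1, 9, 6, 7, 3, 2, 4, 5, 1, 9])
# out == [None, 0, 1, 2, 1, 1, 3, 4, 1, 0]
```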
|
<python><pandas><python-polars>
|
2024-02-25 17:22:09
| 6
| 11,062
|
ignoring_gravity
|
78,056,851
| 934,757
|
How to leave out all NaNs in a pandas dataframe
|
<p>I'm upgrading some code from Python 2 to Python 3 and a modern pandas version (I now have pandas 2.0.3 and numpy 1.26.4).</p>
<p>My dataframe is :</p>
<pre><code> N NE E SE S SW W NW
H12 NaN NaN NaN NaN NaN NaN NaN NaN
H13 0.7 NaN NaN NaN NaN NaN 1.0 1.4
H14 0.3 NaN NaN NaN NaN NaN 0.8 1.1
H15 NaN NaN NaN NaN NaN NaN NaN NaN
H16 NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>And I want to leave out all the NaNs such that I get a new df:</p>
<pre><code> N W NW
H13 0.7 1.0 1.4
H14 0.3 0.8 1.1
</code></pre>
<p>My old code used <code>df.any(1)</code> or something very similar, which worked, but now I get an error message:</p>
<p><code>NDFrame._add_numeric_operations.<locals>.any() takes 1 positional argument but 2 were given</code></p>
<p>Maybe there is a better way to do it, I'm not fussed about using <code>any()</code>.</p>
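<p>For illustration, a sketch of one way to get the desired result with current pandas (note that the positional-argument error also goes away if you write <code>df.any(axis=1)</code> instead of <code>df.any(1)</code>); the tiny frame below stands in for the one in the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "N": [np.nan, 0.7, 0.3, np.nan],
        "NE": [np.nan, np.nan, np.nan, np.nan],
        "W": [np.nan, 1.0, 0.8, np.nan],
    },
    index=["H12", "H13", "H14", "H15"],
)

# Drop rows that are entirely NaN, then columns that are entirely NaN.
out = df.dropna(how="all").dropna(axis=1, how="all")
# out keeps rows H13/H14 and columns N/W only
```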
|
<python><pandas>
|
2024-02-25 16:53:41
| 2
| 437
|
djnz0feh
|
78,056,806
| 9,133,582
|
Using tf.keras.metrics.R2Score results in an error in Tensorflow
|
<p>I'm making a regression model with Tensorflow, but when I use <code>tf.keras.metrics.R2Score()</code> as a metric, it fails with <code>ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: <tf.Tensor: shape=(), dtype=float32, numpy=0.0></code> after the first epoch. (But works fine up until then) However, if I use a different metric (<code>tf.keras.metrics.RootMeanSquaredError()</code>, it works fine.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
weather_states = pd.read_sql("SELECT stations.id, stations.capacity_kw, start, wind_speed_10m, wind_direction_10m, wind_speed_80m, wind_direction_80m, wind_speed_180m, wind_direction_180m FROM stations INNER JOIN weather_states ON stations.id = weather_states.station WHERE weather_states.source = 'openmeteo_forecast/history/best' AND stations.source = 'wind'", db_client)
grid_states = pd.read_sql("SELECT start, wind FROM grid_states", db_client)
def create_x_y(df: tuple[Any, pd.DataFrame]):
start = df[1]["start"].iloc[0]
res = df[1].sort_values("id").drop(["id", "start"], axis=1)
temp_wind = grid_states.loc[grid_states["start"] == start]["wind"].to_list()
wind_kw = temp_wind if len(temp_wind) >= 1 else None
res_flat_df = pd.DataFrame(res.to_numpy().reshape((1, -1)))
res_flat_df["wind_kw"] = wind_kw
return res_flat_df
data = pd.concat(map(create_x_y, weather_states.groupby("start"))).dropna()
from sklearn.model_selection import train_test_split
data = data.astype("float32")
train, test = train_test_split(data.dropna(), test_size=0.2)
train_y = train.pop("wind_kw")
train_x = train
test_y = test.pop("wind_kw")
test_x = test
norm = tf.keras.layers.Normalization()
norm.adapt(train_x)
model = tf.keras.Sequential([
norm,
tf.keras.layers.Dense(16, activation="linear"),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(1, activation="linear"),
])
model.compile(
optimizer=tf.keras.optimizers.legacy.Adam(0.001),
metrics=[tf.keras.metrics.R2Score(dtype=tf.float32)],
loss=tf.keras.losses.MeanSquaredError(),
)
model.fit(train_x, train_y, epochs=7, batch_size=2)
tf.keras.models.save_model(model, 'wind.keras')
</code></pre>
<p><code>print(data.describe())</code></p>
<pre><code> 0 1 2 3 4 ... 241 242 243 244 wind_kw
count 1896.0 1896.000000 1896.000000 1896.000000 1896.000000 ... 1896.000000 1896.000000 1896.000000 1896.000000 1896.000000
mean 144000.0 4.315717 189.610759 5.791377 193.830169 ... 3.881292 145.420359 4.572205 143.642405 1292.576958
std 0.0 2.482439 113.178764 2.926497 113.685887 ... 2.612259 93.293471 2.775681 94.721086 611.333721
min 144000.0 0.100000 1.000000 0.100000 1.000000 ... 0.100000 2.000000 0.000000 1.000000 34.263000
25% 144000.0 2.110000 88.000000 3.487500 90.000000 ... 1.900000 67.000000 2.500000 63.000000 793.109500
50% 144000.0 4.110000 199.000000 5.500000 231.000000 ... 3.075000 137.000000 3.940000 135.000000 1251.590000
75% 144000.0 6.220000 291.000000 7.882500 294.000000 ... 5.502500 205.000000 6.082500 205.000000 1761.926750
max 144000.0 11.670000 360.000000 15.210000 360.000000 ... 14.460000 360.000000 16.980000 360.000000 3008.125000
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(type(data))
#<class 'pandas.core.frame.DataFrame'>
print(data.dtypes)
#0 float32
#1 float32
#2 float32
#3 float32
#4 float32
# ...
#241 float32
#242 float32
#243 float32
#244 float32
#wind_kw float32
#Length: 246, dtype: object
print(data.shape)
#(1896, 246)
</code></pre>
<p>I can't seem to find any information online about this error when using R2Score- any ideas as to what could be the issue?</p>
|
<python><tensorflow><machine-learning><keras>
|
2024-02-25 16:40:38
| 1
| 1,153
|
Jacques Amsel
|
78,056,797
| 1,946,418
|
"Expected expression" in a single liner "else continue"
|
<p><code>Python -V => 3.12.1</code></p>
<pre class="lang-py prettyprint-override"><code>value = "abc" # or could be None in some other cases
def doSomething(v):
print(f"value is {v}")
for v in range(100):
doSomething(v) if value else continue
</code></pre>
<p><a href="https://i.sstatic.net/1CRSb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1CRSb.png" alt="enter image description here" /></a></p>
<p>Any ideas how to use a one-liner in this kind of situation? TIA</p>
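<p>For context: <code>continue</code> is a statement, not an expression, so it can never appear in a conditional expression. The usual one-line alternatives look like this (illustrative sketch, using a trivial stand-in for <code>doSomething</code>):</p>

```python
def do_something(v):
    return v

value = "abc"  # could be None in other cases

# Guard clause: the loop body simply does nothing when value is falsy.
results = []
for v in range(5):
    if not value:
        continue
    results.append(do_something(v))

# Or express the same filter directly as a comprehension.
results_2 = [do_something(v) for v in range(5) if value]
# both give [0, 1, 2, 3, 4] when value is truthy
```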
|
<python>
|
2024-02-25 16:36:04
| 3
| 1,120
|
scorpion35
|
78,056,709
| 11,825,717
|
PyTorch tensor.sum() performance drops with large tensors vs. NumPy
|
<p><code>tensor.sum()</code> performance drops once my tensor exceeds a certain size. Why is that?</p>
<pre><code>import torch
tensor = torch.FloatTensor(200_000, 2_000).uniform_() > 0.8 # random 1's and 0's
tensor[:, :1000].sum(dim=0)
tensor[:, :2000].sum(dim=0) # 2x wider but 20x slower
</code></pre>
<p>Profiling:</p>
<pre><code>import time
import torch
start = time.time()
tensor[:, :1000].sum(dim=0)
end = time.time()
print(end - start) # 0.69s
start = time.time()
tensor[:, :2000].sum(dim=0)
end = time.time()
print(end - start) # 20s
</code></pre>
<p>NumPy doesn't appear to share this limitation:</p>
<pre><code>import numpy as np
start = time.time()
np.array(tensor[:, :2000]).sum(axis=0)
end = time.time()
print(end - start) # 0.40s
</code></pre>
|
<python><performance><pytorch>
|
2024-02-25 16:16:47
| 1
| 2,343
|
Jeff Bezos
|
78,056,565
| 3,305,998
|
How do I get doctest to run with examples in markdown codeblocks for mkdocs?
|
<p>I'm using mkdocs & mkdocstring to build my documentation and including code examples in the docstrings. I'm also using doctest (via <code>pytest --doctest-modules</code>) to test all those examples.</p>
<h2>Option 1 - format for documentation</h2>
<p>If I format my docstring like this:</p>
<pre><code> """
Recursively flattens a nested iterable (including strings!) and returns all elements in order left to right.
Examples:
--------
```
>>> [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])]
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
"""
</code></pre>
<p>Then it renders nicely in the documentation but doctest fails with the error:</p>
<pre><code>Expected:
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
Got:
[1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>That makes sense, as doctest treats <em>everything</em> until a blank line as expected output and aims to match it <em>exactly</em>.</p>
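<p>That behaviour can be demonstrated directly with the <code>doctest</code> module: the closing fence line is swallowed into the expected output and the comparison fails. A small sketch (the helper names here are mine, not part of any API):</p>

```python
import doctest


def run(docstring):
    """Run a docstring through doctest and return the failure count."""
    test = doctest.DocTestParser().get_doctest(docstring, {}, "demo", "demo", 0)
    runner = doctest.DocTestRunner(verbose=False)
    runner.run(test, out=lambda s: None)  # swallow the failure report
    return runner.failures


assert run(">>> 1 + 1\n2\n") == 0          # plain example passes
assert run(">>> 1 + 1\n2\n```\n") == 1     # fence becomes expected output, fails
```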
<h2>Option 2 - format for doctest</h2>
<p>If I format the docstring for doctest without code blocks:</p>
<pre><code> """
Recursively flattens a nested iterable (including strings!) and returns all elements in order left to right.
Examples:
--------
>>> [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])]
[1, 2, 3, 4, 5, 6, 7, 8, 9]
"""
</code></pre>
<p>then doctest passes but the documentation renders</p>
<blockquote>
<blockquote>
<blockquote>
<p>[x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])][1, 2, 3, 4, 5, 6, 7, 8, 9]</p>
</blockquote>
</blockquote>
</blockquote>
<h2>Workaround? - add a blank line for doctest</h2>
<p>If I format it with an extra blank line before the end of the codeblock:</p>
<pre><code> """
Recursively flattens a nested iterable (including strings!) and returns all elements in order left to right.
Examples:
--------
```
>>> [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])]
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
"""
</code></pre>
<p>Then doctest passes but</p>
<ol>
<li>there is a blank line at the bottom of the example in the documentation (ugly)</li>
<li>I need to remember to add a blank line at the end of each example (error prone and annoying)</li>
</ol>
<p>Does anyone know of a better solution?</p>
|
<python><pytest><doctest><mkdocs><mkdocstrings>
|
2024-02-25 15:36:03
| 2
| 318
|
MusicalNinja
|
78,056,541
| 190,887
|
How do I resolve this LoRA loading error?
|
<p>I'm trying to run through <a href="https://huggingface.co/docs/diffusers/training/lora#text-to-image" rel="noreferrer">the 🤗 LoRA tutorial</a>. I've gotten the dataset pulled down, trained it and have checkpoints on disk (in the form of several subdirectories and <code>.safetensors</code> files).</p>
<p>The last part is trying to run inference. In particular,</p>
<pre><code>from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors")
</code></pre>
<p>However, on my local when I try to run that <code>load_lora_weights</code> line, I get</p>
<pre><code>>>> pipeline.load_lora_weights("path/to/my/lora", weight_name="pytorch_lora_weights.safetensors")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/my/site-packages/diffusers/loaders/lora.py", line 107, in load_lora_weights
raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
>>>
</code></pre>
<p>I have PEFT installed, but there don't seem to be instructions calling for me to do anything else about it in order to load a LoRA.</p>
<p>What am I doing wrong here? If the answer is "nothing, this is the 'it's an experimental API' note coming back to bite you", are there any workarounds?</p>
|
<python><huggingface-transformers><huggingface>
|
2024-02-25 15:26:56
| 2
| 14,105
|
Inaimathi
|
78,056,450
| 2,502,331
|
DataBricks PySpark withColumn() Fails After First Success
|
<p>I'm working in DataBricks with Python/PySpark. I have several output columns that copy a single input column. One of the output columns is the lower-case version of the upper-case or mixed-case input column, which I need to rename this way before I drop it. Because I am also casting its type, I use the <code>withColumn()</code> command (rather than the <code>withColumnRenamed()</code> command).</p>
<p>This works fine the first time but fails afterward. DataBricks/PySpark is a case-sensitive language, so this should work. What's wrong?</p>
|
<python><pyspark><databricks>
|
2024-02-25 15:06:30
| 1
| 978
|
StephenDonaldHuffPhD
|
78,056,249
| 11,584,327
|
How to hide or show a canvas line controlled by sliders using Checkbuttom from Tkinter?
|
<p>With the code below it is possible to control the amplitude and frequency of a triangular signal.<br />
Just starting out with Python, I am failing to hide or show a given signal using a <code>Checkbutton</code> from <code>tkinter</code>. The few examples I found on the Internet apply to label texts or images, but none to canvas objects controlled via functions...<br />
Thanks for the help</p>
<pre><code>import tkinter as tk
from tkinter import ttk
import numpy as np
from scipy import signal as sg
root = tk.Tk()
root.title('Oscilloscope')
root.geometry("1000x600+0+0")
N1 = 2000
X1 = 1000
O1 = 200
# canva
cnv = tk.Canvas(root, width = 800, height = 400, bg = 'white')
cnv.place(relx=0.5, rely=0.3, anchor=tk.CENTER)
cnv.create_line(400, 0, 400, 400, fill='black', width=1)
cnv.create_line(0, 200, 1000, 200, fill='black', width=1)
# sliders labels
amplitude = ttk.Label(root, text="Amplitude", font=("Arial", 12, "bold"), foreground="black")
amplitude.place(relx=0.3, rely=0.7, anchor=tk.CENTER)
frequence = ttk.Label(root, text="Frequency", font=("Arial", 12, "bold"), foreground="black")
frequence.place(relx=0.7, rely=0.7, anchor=tk.CENTER)
# signal
def draw1(cnv, A1, F1, O1, N1):
cnv.delete("line1")
xpts1 = X1 / (N1-1)
line1 = []
for i in range(N1):
x = (i * xpts1)
y = A1*sg.sawtooth(2*np.pi*F1*i/N1, width=0.5) + O1
line1.extend((x, y))
cnv.create_line(line1, fill="red2", width=3, tag="line1")
# control sliders
def ctrl_curseurs_1(*args):
A1 = A1_valeur.get()
F1 = F1_valeur.get()
draw1(cnv, A1, F1, O1, N1)
# horizontal slider for amplitude A1 ############################
def curseur_A1():
sel = "Value = " + str(value.get(A1))
A1_valeur = tk.IntVar()
A1_frm = ttk.Frame(root)
A1_frm.place(relx=0.3, rely=0.8, anchor=tk.CENTER)
A1_scale = tk.Scale(A1_frm, variable=A1_valeur, command=ctrl_curseurs_1,
from_ = -200, to = 200, length=200, showvalue=1, tickinterval=100, orient = tk.HORIZONTAL, resolution=1)
A1_scale.pack(anchor=tk.CENTER)
# horizontal slider for frequency F1 ############################
def curseur_F1():
sel = "Value = " + str(value.get(F1))
F1_valeur = tk.IntVar()
F1_frm = ttk.Frame(root)
F1_frm.place(relx=0.7, rely=0.8, anchor=tk.CENTER)
F1_scale = tk.Scale(F1_frm, variable=F1_valeur, command=ctrl_curseurs_1,
from_ = 0, to = 50, length=200, showvalue=1, tickinterval=10, orient = tk.HORIZONTAL, resolution=1)
F1_scale.pack(anchor=tk.CENTER)
root.mainloop()
</code></pre>
|
<python><tkinter><canvas><checkbox><tkinter.checkbutton>
|
2024-02-25 14:08:06
| 1
| 902
|
denis
|
78,056,243
| 5,431,734
|
call functions from a docker image based on a conda python project
|
<p>I don't quite understand how to build a Docker image and then run a basic function.
My toy project (running in a conda env) is shown below.</p>
<p>The build appears to execute fine: <code>docker build --no-cache -t my_docker_img .</code> What I don't understand is how I can run the image, call the functions exposed by the package (<code>print_hi</code>, <code>app</code>), and pass some user-defined arguments to them.</p>
<p>I am a real beginner with Docker; this is literally my first image.</p>
<pre><code>dockerize_me\
|-- dockerize_me\
|   |-- __init__.py
|   |-- main.py
|   |-- utils.py
|-- dockerfile
|-- environment.yml
|-- setup.py
</code></pre>
<p>where the file contents are:</p>
<p><strong>__init__.py</strong></p>
<pre class="lang-py prettyprint-override"><code># file: __init__.py
from dockerize_me.main import print_hi, app
</code></pre>
<p><strong>main.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from dockerize_me.utils import add_scalar
import numpy as np
def print_hi(name):
print("hello %s" % name )
def app(lst_in, const):
arr = add_scalar(np.array(lst_in), const)
return arr
</code></pre>
<p><strong>utils.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def add_scalar(arr, scalar):
return arr + scalar
</code></pre>
<p><strong>environment.yml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>name: dockerize_me
channels:
- conda-forge
dependencies:
- pip
- python=3.8
- numpy
</code></pre>
<p><strong>dockerfile</strong></p>
<pre><code>FROM mambaorg/micromamba:0.19.1
ARG MAMBA_DOCKERFILE_ACTIVATE=1
COPY environment.yml .
RUN micromamba env create -f environment.yml
</code></pre>
|
<python><docker><conda>
|
2024-02-25 14:06:45
| 1
| 3,725
|
Aenaon
|
78,056,148
| 2,030,532
|
How to make pytest exceptions cause breakpoint PyCharm?
|
<p>PyCharm has a handy feature that, if enabled, gives you a breakpoint where an exception happens so you can debug it. However, when running pytest through PyCharm, an exception only shows the error in the console and does not trigger an exception breakpoint. I know pytest handles the exception itself, but is there an option to make pytest behave like a regular Python script: run the tests sequentially and trigger an exception breakpoint when one test fails?</p>
|
<python><pycharm><pytest>
|
2024-02-25 13:40:03
| 0
| 3,874
|
motam79
|
78,056,001
| 893,254
|
Why can't operator.eq be applied to a class map object using functools.reduce?
|
<p>Why is the following Python code invalid?</p>
<pre><code>import functools
import operator

iterable = [1, 2, 3]
mapped = map(operator.eq, iterable)
print(type(mapped)) # `class map`
functools.reduce(operator.eq, mapped)
</code></pre>
<p>A class <code>map</code> object is iterable. The following will work,</p>
<p><sub>but produces (perhaps) "unexpected" results:</sub></p>
<pre><code>functools.reduce(operator.eq, iterable)
</code></pre>
<sub>
For example, applying this to the iterable `iterable = [True, False, False]` produces `True` not `False`. It is obvious why if you think about each iteration, but on a first glance it might look like it behaves in a different way.
</sub>
<hr />
<p>If <code>functools.reduce</code> can apply a binary function <code>operator.eq</code> to the iterable <code>iterable</code>, why can it not apply the same function to the iterable <code>mapped</code> which is of type <code>class map</code>?</p>
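<p>For illustration: <code>map(operator.eq, iterable)</code> constructs fine but blows up on iteration, because <code>map</code> with a single iterable passes one argument per call while <code>eq</code> needs two; and expanding the reduce over booleans step by step shows where the "unexpected" <code>True</code> comes from:</p>

```python
import functools
import operator

# map() with one iterable calls eq with a single argument, so iterating fails.
m = map(operator.eq, [1, 2, 3])
try:
    next(m)
except TypeError as exc:
    error = str(exc)  # complains that eq expected 2 arguments

# reduce chains the comparisons pairwise:
step1 = operator.eq(True, False)   # False
step2 = operator.eq(step1, False)  # eq(False, False) -> True
result = functools.reduce(operator.eq, [True, False, False])
# result == step2 == True
```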
|
<python>
|
2024-02-25 12:51:26
| 1
| 18,579
|
user2138149
|
78,055,800
| 3,904,031
|
Generate a mesh from my polygon geometry to iterate FEM for geometry optimization?
|
<p>The 2D script below generates polygons in a box, which represent a cross-section of three cylindrical rings parallel to the z axis which will have voltages applied. I'll solve the <a href="https://scicomp.stackexchange.com/a/43810/17869">Laplace equation</a> in cylindrical coordinates using the Finite Element Method as described in <a href="https://scicomp.stackexchange.com/q/43805/17869">the answer to my SciComp SE question <em>Simple, easy to install and use Python FEM solver (and example) for 2D cylindrical Laplace equation</em></a>.</p>
<p>I'll convert the potential to an electric field and ray trace an electron beam through the lens, evaluate the results, and then iteratively adjust the lens geometry and voltages to optimize.</p>
<p>In this loop I'll need to convert my geometry to a proper mesh that <a href="https://scikit-fem.readthedocs.io/en/latest/" rel="nofollow noreferrer">scikit-fem</a> can use. (The inside of the polygons is solid metal so I only need to make the mesh for the region in the box but outside the polygons).</p>
<p>I see that scikit-fem has <a href="https://scikit-fem.readthedocs.io/en/latest/api.html#abstract-class-mesh" rel="nofollow noreferrer">built-in mesh constructors</a> but from the page (and from the examples) I don't see how to take my polygons (with points and sides defined) and produce even a preliminary mesh. I'm completely new to this, a bit of hand-holding will be greatly appreciated!</p>
<p>For now I don't care which mesh generator I use, except that it needs to be really easy and foolproof to install (hopefully a simple pip install; I'm not a developer proper) and easily callable from within a Python script.</p>
<p><strong>Question:</strong> What is a simple automated way to generate a mesh from my polygon geometry so I can apply FEM (scikit-learn) in an iterative way to optimize my design?</p>
<p>The main script</p>
<pre><code>V = 10
box = Rect_boundary('box', 0, 16, 0, 50, potential='GND', is_box=True)
lens_1 = Rect_boundary('lens 1', 13, 14.5, 8, 15, bevel=True, potential=+V)
lens_2 = Rect_boundary('lens 2', 13, 14.5, 18, 32, bevel=True, potential=-V)
lens_3 = Rect_boundary('lens 3', 13, 14.5, 35, 42, bevel=True, potential=+V)
things = box, lens_1, lens_2, lens_3
plot_things(things)
summarize_things(things)
</code></pre>
<p>A plot:</p>
<p><a href="https://i.sstatic.net/F8Jx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F8Jx7.png" alt="enter image description here" /></a></p>
<p>A printout:</p>
<pre><code>thing: box
is_box: True
bevel: False
points:
[[ 0 0]
[50 0]
[50 16]
[ 0 16]]
facets:
[[0 1]
[1 2]
[2 3]
[3 0]]
thing: lens 1
is_box: False
bevel: True
points:
[[ 8.5 13. ]
[14.5 13. ]
[15. 13.5]
[15. 14. ]
[14.5 14.5]
[ 8.5 14.5]
[ 8. 14. ]
[ 8. 13.5]]
facets:
[[0 1]
[1 2]
[2 3]
[3 4]
[4 5]
[5 6]
[6 7]
[7 0]]
thing: lens 2
is_box: False
bevel: True
points:
[[18.5 13. ]
[31.5 13. ]
[32. 13.5]
[32. 14. ]
[31.5 14.5]
[18.5 14.5]
[18. 14. ]
[18. 13.5]]
facets:
[[0 1]
[1 2]
[2 3]
[3 4]
[4 5]
[5 6]
[6 7]
[7 0]]
thing: lens 3
is_box: False
bevel: True
points:
[[35.5 13. ]
[41.5 13. ]
[42. 13.5]
[42. 14. ]
[41.5 14.5]
[35.5 14.5]
[35. 14. ]
[35. 13.5]]
facets:
[[0 1]
[1 2]
[2 3]
[3 4]
[4 5]
[5 6]
[6 7]
[7 0]]
</code></pre>
<p>The classes and functions:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
class Rect_boundary():
def __init__(self, name, r1, r2, z1, z2, bevel=False, potential='ground',
is_box=False):
"""r2 > r1 and z2 > z1 please!"""
self.name = name
self.bevel = bevel
self.is_box = is_box
self.update_shape(r1, r2, z1, z2)
self.update_potential(potential)
def update_shape(self, r1, r2, z1, z2):
self.r1 = r1
self.r2 = r2
self.z1 = z1
self.z2 = z2
self.thickness = self.r2 - self.r1
b = self.thickness / 3.
self.bevel_width = b
if self.bevel:
self.points = np.array([(z1+b, r1), (z2-b, r1), (z2, r1+b),
(z2, r2-b), (z2-b, r2), (z1+b, r2),
(z1, r2-b), (z1, r1+b)])
self.facets = np.array([(0, 1), (1, 2), (2, 3), (3, 4),
(4, 5), (5, 6), (6, 7), (7, 0)])
else:
self.points = np.array([(z1, r1), (z2, r1), (z2, r2), (z1, r2)])
self.facets = np.array([(0, 1), (1, 2), (2, 3), (3, 0)])
self.zmin, self.rmin = self.points.min(axis=0)
self.zmax, self.rmax = self.points.max(axis=0)
self.r_center = 0.5 * (self.rmin + self.rmax)
self.z_center = 0.5 * (self.zmin + self.zmax)
self.thickness = self.rmax - self.rmin
self.length = self.zmax - self.zmin
def update_potential(self, potential):
if isinstance(potential, str) and potential.lower().startswith(('gr', 'gn')):
self.potential = 0.
self.color = 'black'
else:
self.ground = False
self.potential = float(potential)
if self.potential > 0:
self.color = 'red'
else:
self.color = 'blue'
def plot_things(things):
fs = 14
fig, ax = plt.subplots(1, 1)
for thing in things:
x, y = zip(*thing.points)
ax.plot(x, y, '.k')
for i, j in thing.facets:
(x0, y0), (x1, y1) = thing.points[i], thing.points[j]
ax.plot([x0, x1], [y0, y1], '-', color=thing.color)
if not thing.is_box:
ax.text(thing.z_center, thing.rmin-3, thing.name,
color=thing.color, fontsize=fs, ha='center')
ax.set_aspect('equal')
ax.set_xlabel('z', fontsize=14)
ax.set_ylabel('r', fontsize=14)
plt.show()
def summarize_things(things):
for thing in things:
print('thing: ', thing.name)
print('is_box: ', thing.is_box)
print('bevel: ', thing.bevel)
print('points:')
print(thing.points)
print('facets:')
print(thing.facets)
print('')
</code></pre>
|
<python><mesh><finite-element-analysis>
|
2024-02-25 11:45:58
| 0
| 3,835
|
uhoh
|
78,055,498
| 7,045,119
|
Python date.today() returns a different date on the same environment & PC?
|
<p>I have an Odoo server whose instance contains the following condition, and <code>date.today()</code> returns yesterday's date, so the condition evaluates to true:</p>
<pre class="lang-py prettyprint-override"><code>if invoice.invoice_date > date.today():
errors.append("- Please, make sure the invoice date is set to either the same as or before Today.")
</code></pre>
<p>However, the Python shell on the same PC returns the correct date. Please note that the time was Feb 24, 1:20 AM, and both use the same Python environment.</p>
<p>So, could you let me know whether Python's <code>date.today()</code> can be affected by any <code>context</code>?</p>
<p><a href="https://peertube.otakufarms.com/w/ou39YDnDZtMrUYkbVLSS9u" rel="nofollow noreferrer">Watch the video</a></p>
|
<python><odoo>
|
2024-02-25 09:46:54
| 1
| 1,205
|
kerbrose
|
78,055,408
| 5,378,271
|
TimeoutError when clicking a button in Playwright
|
<p>I got playwright._impl._errors.TimeoutError with the following code.</p>
<pre><code>import asyncio
from playwright.async_api import async_playwright
async def fb():
async with async_playwright() as p:
browser = await p.chromium.launch(args=["--disable-gpu", "--single-process"])
page = await browser.new_page()
await page.goto('https://www.facebook.com/zuck')
s = "#mount_0_0_Y4 > div > div:nth-child(1) > div > div:nth-child(5) > div > div > div.x9f619.x1n2onr6.x1ja2u2z > div > div.x1uvtmcs.x4k7w5x.x1h91t0o.x1beo9mf.xaigb6o.x12ejxvf.x3igimt.xarpa2k.xedcshv.x1lytzrv.x1t2pt76.x7ja8zs.x1n2onr6.x1qrby5j.x1jfb8zj > div > div > div > div.x92rtbv.x10l6tqk.x1tk7jg1.x1vjfegm > div";
await page.locator(s).get_by_role("button").click(); # TimeoutError
await browser.close()
if __name__ == "__main__":
asyncio.new_event_loop().run_until_complete(fb())
</code></pre>
<p>How can I solve this issue? My environment is</p>
<ul>
<li>Docker base image : mcr.microsoft.com/playwright/python:v1.41.0-jammy</li>
<li>Python 3.10.12</li>
<li>playwright-1.41.2</li>
</ul>
<p>The entire traceback is as follows:</p>
<pre><code>Traceback (most recent call last):
File "/function/dbgq.py", line 20, in <module>
asyncio.new_event_loop().run_until_complete(fb())
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/function/dbgq.py", line 15, in fb
await page.locator(s).get_by_role("button").click(); # TimeoutError
File "/function/playwright/async_api/_generated.py", line 15786, in click
await self._impl_obj.click(
File "/function/playwright/_impl/_locator.py", line 158, in click
return await self._frame.click(self._selector, strict=True, **params)
File "/function/playwright/_impl/_frame.py", line 494, in click
await self._channel.send("click", locals_to_params(locals()))
File "/function/playwright/_impl/_connection.py", line 63, in send
return await self._connection.wrap_api_call(
File "/function/playwright/_impl/_connection.py", line 495, in wrap_api_call
return await cb()
File "/function/playwright/_impl/_connection.py", line 101, in inner_send
result = next(iter(done)).result()
playwright._impl._errors.TimeoutError: Timeout 30000ms exceeded.
</code></pre>
|
<python><python-3.x><playwright>
|
2024-02-25 09:12:07
| 1
| 619
|
nemy
|
78,055,011
| 9,999,114
|
How can i query and efficiently store 50 million REST API records to build a data lake
|
<p>The problem use case is simple.</p>
<p>I want to query more than 50M records from Salesforce through the REST API. This is part of an effort to gather data from multiple sources and fuse them together later so they can be fed into an ML model.
I want to store this data in a database or a Parquet file, from where I can perform the next steps.</p>
<p>Now, for a REST API query over an object of this size, Salesforce returns something like</p>
<pre><code> {"totalsize":50000000, "done":false,
"nextrecordUrl":"/services/data/v49.0/a1000xxxxxx-2000",
"records":[here the data of query remains of the 2000 batch that was returned]}
</code></pre>
<p>The nextRecordsUrl simply gives the URL to query the next batch of 2000 (the maximum batch size Salesforce can return, though there is no guarantee on this number).
So I am simply running a loop until there is no more nextRecordsUrl, to get the entire set of data.</p>
<p>Current approach looks like this:</p>
<pre><code> read the data = query(api)
while "done" is not True:
read the "records" part of the json
load it into Database ( using psycopg2 )
again read data = query( nextRecordUrl )
read value of "done"
</code></pre>
<p>My question is, how can I speed this operation up?
Even if every loop iteration takes only 2 seconds to complete, this will still take 16-17 hours at best, and we might need to run it every fortnight to keep up with the latest data as reflected in the application (Salesforce), not to mention the database timeout and memory errors I have to deal with.</p>
<p>Any suggestions?</p>
<p>Edit:
my current approach looks like this:</p>
<pre><code>sf = salesforce () # simple_salesforce library connection
df = pandas dataframe
sqlalc_engine = sqlalchemy engine
# query the first batch here then this loop start
while True:
if num_batches == 1:
current_df = df
else:
current_df = pd.DataFrame(lstRecords)
#print(current_df.columns)
current_df = current_df.drop(['attributes'], axis=1)
print(f"next record URL is :{nextRecordsUrl}, || batch no is : {num_batches}")
print(f"Loading batch : {num_batches} | records count : {current_df.shape[0]}")
#this line i am using to insert records into DB
current_df.to_sql(table_name, con=sqlalc_engine, schema='', if_exists='append', index=False)
print(f"{num_batches} is loaded to DB")
num_batches+=1
if completed:
#no more records
break
# get next batch of records
records = sf.query_more(nextRecordsUrl, identifier_is_url=True)
lstRecords = records.get('records')
nextRecordsUrl = records.get('nextRecordsUrl')
# set running check for next loop
completed = records.get('done')
</code></pre>
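<p>One generic way to cut wall-clock time is to overlap the HTTP pagination with the database writes, so the next batch downloads while the previous one is being inserted. Below is a minimal producer/consumer sketch of that idea; the function and variable names are illustrative stand-ins, not real Salesforce or DB calls.</p>

```python
import queue
import threading

batches = queue.Queue(maxsize=4)   # bounded, so memory stays flat
SENTINEL = object()

def fetch(pages):
    # stands in for the sf.query_more() pagination loop
    for page in pages:
        batches.put(page)
    batches.put(SENTINEL)

written = []

def write():
    # stands in for current_df.to_sql(...); runs concurrently with fetch()
    while True:
        batch = batches.get()
        if batch is SENTINEL:
            break
        written.extend(batch)

pages = [[1, 2], [3, 4], [5]]      # pretend these are 2000-record batches
producer = threading.Thread(target=fetch, args=(pages,))
consumer = threading.Thread(target=write)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(written)  # [1, 2, 3, 4, 5]
```

<p>The larger win, though, is usually switching from the paginated REST query to the Salesforce Bulk API (e.g. via simple_salesforce's bulk support), which is designed for exports of this size.</p>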
|
<python><database><rest><salesforce><psycopg2>
|
2024-02-25 06:15:58
| 1
| 325
|
Healer77Om
|
78,054,752
| 8,935,725
|
FastAPI, SQLModel - Pydantic not serializing datetime
|
<p>My understanding is that SQLModel uses pydantic's BaseModel (I checked the <code>__mro__</code>). Why does the code below fail the type comparison? (Error provided.)</p>
<pre><code>from datetime import datetime
from uuid import UUID, uuid4
from sqlmodel import SQLModel, Field

class SomeModel(SQLModel, table=True):
    timestamp: datetime
    id: UUID = Field(default_factory=lambda: str(uuid4()), primary_key=True)

def test_some_model():
    m = SomeModel(**{'timestamp': datetime.utcnow().isoformat()})
    assert type(m.timestamp) == datetime
</code></pre>
<p><strong>E AssertionError: assert <class 'str'> == datetime</strong></p>
<p>FastAPI/SQLModel experts explain yourselves :D .</p>
<p>Note - I tried using Field with default factories etc... as well.</p>
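<p>For contrast, a plain pydantic model does coerce the ISO string into a <code>datetime</code> (a minimal check, assuming pydantic is installed). SQLModel models declared with <code>table=True</code> are known to skip pydantic validation on <code>__init__</code>, which would explain the <code>str</code> leaking through:</p>

```python
from datetime import datetime
from pydantic import BaseModel

class PlainModel(BaseModel):
    timestamp: datetime

# a plain pydantic model parses the ISO string into a datetime on init
m = PlainModel(timestamp="2024-02-25T03:00:00")
assert isinstance(m.timestamp, datetime)
```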
|
<python><fastapi><pydantic><sqlmodel>
|
2024-02-25 03:27:47
| 2
| 755
|
Octavio del Ser
|
78,054,656
| 3,388,962
|
What is the difference between the various spline interpolators from Scipy?
|
<p>My goal is to calculate a smooth trajectory passing through a set of points as shown below. I have had a look at the available methods of <a href="https://docs.scipy.org/doc/scipy/reference/interpolate.html" rel="nofollow noreferrer"><code>scipy.interpolate</code></a>, and also the <a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate.html" rel="nofollow noreferrer">scipy user guide</a>. However, the choice of the right method is not quite clear to me.</p>
<p>What is the difference between
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.BSpline.html" rel="nofollow noreferrer"><code>BSpline</code></a>,
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splprep.html" rel="nofollow noreferrer"><code>splprep</code></a>,
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splrep.html" rel="nofollow noreferrer"><code>splrep</code></a>,
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html" rel="nofollow noreferrer"><code>UnivariateSpline</code></a>,
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d" rel="nofollow noreferrer"><code>interp1d</code></a>,
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_interp_spline.html" rel="nofollow noreferrer"><code>make_interp_spline</code></a> and
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.CubicSpline.html" rel="nofollow noreferrer"><code>CubicSpline</code></a>?</p>
<p>According to the documentation, all functions compute some polynomial spline function through a sequence of input points. Which function should I select? A) What is the difference between a cubic spline and a B-spline of order 3? B) What is the difference between <code>splprep()</code> and <code>splrep()</code>? C) Why is <code>interp1d()</code> going to be deprecated?</p>
<p>I know I'm asking different questions here, but I don't see the point in splitting the questions up as I assume the answers will be related.</p>
<p>All in all, I find that the scipy.interpolate module is organized a little confusingly. I thought maybe I'm not the only one who has this impression, which is why I'm reaching out to SO.</p>
<hr />
<p>Here's how far I've come. Below is some code that runs the different spline functions for some test data. It creates the figure below.</p>
<ol>
<li>I've read somewhere: "All cubic splines can be represented as B-splines of order 3", and that "it's a matter of perspective which representation is more convenient". But why do I end up with different results if I use <code>CubicSpline</code> and any of the B-spline methods?</li>
<li>I found that it is possible to construct a <code>BSpline</code> object from the output of <code>splprep</code> and <code>splrep</code>, such that the results of <code>splev()</code> and <code>BSpline()</code> are equivalent. That way, we can convert the output of <code>splrep</code> and <code>splprep</code> into the object-oriented interface of <code>scipy.interpolate()</code>.</li>
<li><code>splrep</code>, <code>UnivariateSpline</code> and <code>make_interp_spline</code> lead to the same result. With my 2D data, I need to apply the interpolation independently per data dimension for it to work. The convenience function <code>interp1d</code> yields the same result, too. Related SO-question: <a href="https://stackoverflow.com/questions/61675281/">Link</a></li>
<li><code>splprep</code> and <code>splrep</code> seem unrelated. Even if I compute <code>splprep</code> twice for every data axis independently (see p0_new), the results look different. I see in the docs that splprep computes the B-spline representation of an n-D curve. But should <code>splrep</code> and <code>splprep</code> not be related?</li>
<li><code>splprep</code>, <code>splrep</code> and <code>UnivariateSpline</code> have a smoothing parameter, while other interpolators have no such parameter.</li>
<li><code>splrep</code> pairs with <code>UnivariateSpline</code>. However, I couldn't find a matching object-oriented counterpart for <code>splprep</code>. Is there one?</li>
</ol>
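<p>A plausible explanation for item 1 above is boundary conditions rather than the representation itself: the test code in this question passes <code>bc_type="clamped"</code> to <code>CubicSpline</code>, while the B-spline routines default to not-a-knot. A small check (a sketch, assuming scipy and numpy are available) showing that with matching default conditions the two agree:</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline, make_interp_spline

ts = np.linspace(0, 1, 7)
ys = np.sin(2 * np.pi * ts)
xs = np.linspace(0, 1, 50)

# Both use not-a-knot boundary conditions by default, so the (unique)
# interpolating cubic is the same; "clamped" yields a different spline.
a = CubicSpline(ts, ys)(xs)
b = make_interp_spline(ts, ys, k=3)(xs)
assert np.allclose(a, b)
```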
<p><a href="https://i.sstatic.net/wJzMU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wJzMU.png" alt="Comparison of different spline methods" /></a></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import *
import matplotlib.pyplot as plt
points = [[0, 0], [4, 4], [-1, 9], [-4, -1], [-1, -9], [4, -4], [0, 0]]
points = np.asarray(points)
n = 50
ts = np.linspace(0, 1, len(points))
ts_new = np.linspace(0, 1, n)
(t0_0,c0_0,k0_0), u = splprep(points[:,[0]].T, s=0, k=3)
(t0_1,c0_1,k0_1), u = splprep(points[:,[1]].T, s=0, k=3)
p0_new = np.r_[np.asarray(splev(ts_new, (t0_0,c0_0,k0_0))),
np.asarray(splev(ts_new, (t0_1,c0_1,k0_1))),
].T
# splprep/splev
(t1,c1,k1), u = splprep(points.T, s=0, k=3)
p1_new = splev(ts_new, (t1,c1,k1))
# BSpline from splprep
p2_new = BSpline(t1, np.asarray(c1).T, k=k1)(ts_new)
# splrep/splev (per dimension)
(t3_0,c3_0,k3_0) = splrep(ts, points[:,0].T, s=0, k=3)
(t3_1,c3_1,k3_1) = splrep(ts, points[:,1].T, s=0, k=3)
p3_new = np.c_[splev(ts_new, (t3_0,c3_0,k3_0)),
splev(ts_new, (t3_1,c3_1,k3_1)),
]
# Bspline from splrep
p4_new = np.c_[BSpline(t3_0, np.asarray(c3_0), k=k3_0)(ts_new),
BSpline(t3_1, np.asarray(c3_1), k=k3_1)(ts_new),
]
# UnivariateSpline
p5_new = np.c_[UnivariateSpline(ts, points[:,0], s=0, k=3)(ts_new),
UnivariateSpline(ts, points[:,1], s=0, k=3)(ts_new),]
# make_interp_spline
p6_new = make_interp_spline(ts, points, k=3)(ts_new)
# CubicSpline
p7_new = CubicSpline(ts, points, bc_type="clamped")(ts_new)
# interp1d
p8_new = interp1d(ts, points.T, kind="cubic")(ts_new).T
fig, ax = plt.subplots()
ax.plot(*points.T, "o-", label="Original points")
ax.plot(*p1_new, "o-", label="1: splprep/splev")
ax.plot(*p2_new.T, "x-", label="1: BSpline from splprep")
ax.plot(*p3_new.T, "o-", label="2: splrep/splev")
ax.plot(*p4_new.T, "x-", label="2: BSpline from splrep")
ax.plot(*p5_new.T, "*-", label="2: UnivariateSpline")
ax.plot(*p6_new.T, "+-", label="2: make_interp_spline")
ax.plot(*p7_new.T, "x-", label="3: CubicSpline")
#ax.plot(*p8_new.T, "k+-", label="3: interp1d")
#ax.plot(*p0_new.T, "k+-", label="3: CubicSpline")
ax.set_aspect("equal")
ax.grid("on")
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
</code></pre>
|
<python><scipy><interpolation><spline>
|
2024-02-25 02:24:22
| 1
| 9,959
|
normanius
|
78,054,416
| 9,135,359
|
Huggingface tokenizer object has no attribute 'pad'
|
<p>I am trying to train a model to classify some diseases, following the HuggingFace tutorial to the letter. I used Kaggle to run the code, as I do not have a powerful GPU. There is no way to include all my code, hence I will insert the pertinent lines only:</p>
<ol>
<li>I created a pandas <code>dataframe</code> and loaded it into a <code>dataset</code>.</li>
</ol>
<pre><code> from datasets import Dataset
ds = Dataset.from_pandas(df)
</code></pre>
<ol start="2">
<li>The tokenizer:</li>
</ol>
<pre><code> from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_nm = 'microsoft/deberta-v3-small'
tokenz = AutoTokenizer.from_pretrained(model_nm)
def tokenize_func(x): return tokenz(x["input"])
</code></pre>
<ol start="3">
<li>Data collator with padding function:</li>
</ol>
<pre><code> from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(tokenizer=tokenize_func)
</code></pre>
<ol start="4">
<li>Training and test sets. I split the training data as such:</li>
</ol>
<pre><code> dds = tok_ds.train_test_split(0.25, seed=42)
</code></pre>
<ol start="5">
<li>Metrics & correlation:</li>
</ol>
<pre><code> import evaluate
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return accuracy.compute(predictions=predictions, references=labels)
</code></pre>
<ol start="6">
<li>Training
Before I started training the model, I created a map of the expected ids to their labels with <code>id2label</code> and <code>label2id</code>:</li>
</ol>
<pre><code> id2label = {i:dx for i, dx in enumerate(list(df['Diagnosis'].unique()))}
label2id = {dx:i for i, dx in enumerate(list(df['Diagnosis'].unique()))}
from transformers import TrainingArguments,Trainer
args = TrainingArguments(output_dir="outputs", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=32, num_train_epochs=epochs, weight_decay=0.01, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, push_to_hub=False, report_to='none',)
model = AutoModelForSequenceClassification.from_pretrained(model_nm, num_labels=len(id2label), id2label=id2label, label2id=label2id)
trainer = Trainer(model=model, args=args, train_dataset=dds['train'], eval_dataset=dds['test'], tokenizer=tokenize_func, data_collator=data_collator, compute_metrics=compute_metrics,)
trainer.train()
</code></pre>
<p>This is where I run into a <code>AttributeError</code> problem:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[286], line 1
----> 1 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1537 hf_hub_utils.enable_progress_bars()
1538 else:
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
1542 trial=trial,
1543 ignore_keys_for_eval=ignore_keys_for_eval,
1544 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1833 rng_to_sync = True
1835 step = -1
-> 1836 for step, inputs in enumerate(epoch_iterator):
1837 total_batched_samples += 1
1839 if self.args.include_num_input_tokens_seen:
File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self)
449 # We iterate one batch ahead to check when we are at the end
450 try:
--> 451 current_batch = next(dataloader_iter)
452 except StopIteration:
453 yield
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO(https://github.com/pytorch/pytorch/issues/76750)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:54, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
52 else:
53 data = self.dataset[possibly_batched_index]
---> 54 return self.collate_fn(data)
File /opt/conda/lib/python3.10/site-packages/transformers/data/data_collator.py:271, in DataCollatorWithPadding.__call__(self, features)
270 def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, Any]:
--> 271 batch = pad_without_fast_tokenizer_warning(
272 self.tokenizer,
273 features,
274 padding=self.padding,
275 max_length=self.max_length,
276 pad_to_multiple_of=self.pad_to_multiple_of,
277 return_tensors=self.return_tensors,
278 )
279 if "label" in batch:
280 batch["labels"] = batch["label"]
File /opt/conda/lib/python3.10/site-packages/transformers/data/data_collator.py:59, in pad_without_fast_tokenizer_warning(tokenizer, *pad_args, **pad_kwargs)
57 # To avoid errors when using Feature extractors
58 if not hasattr(tokenizer, "deprecation_warnings"):
---> 59 return tokenizer.pad(*pad_args, **pad_kwargs)
61 # Save the state of the warning, then disable it
62 warning_state = tokenizer.deprecation_warnings.get("Asking-to-pad-a-fast-tokenizer", False)
AttributeError: 'function' object has no attribute 'pad'
</code></pre>
<p>What do I do?</p>
|
<python><tensorflow><huggingface-transformers>
|
2024-02-25 00:06:50
| 0
| 844
|
Code Monkey
|
78,054,251
| 1,895,939
|
Why does my heap size and thread count increase in this recursive Python code?
|
<p>I'm working on finding a memory leak in my application, and narrowed down suspicious activity to this (simplified) code:</p>
<pre class="lang-py prettyprint-override"><code>import time
import threading
class Work:
def by_duration(self, seconds: float, block: bool = True) -> None:
if seconds < 0:
raise ValueError("seconds >= 0")
if seconds == 0:
return
if block:
time.sleep(seconds)
else:
t = threading.Thread(target=self.by_duration, args=(seconds, True))
t.start()
return
def clean_up(self):
pass
def __enter__(self):
return self
def __exit__(self, *args) -> None:
self.clean_up()
while True:
with Work() as p:
p.by_duration(3, block=False)
time.sleep(3)
</code></pre>
<p>Using the tool <code>memray</code>, I can watch this code's memory. It both:</p>
<ol>
<li>has an increasing number of threads over time. All threads look "empty" (according to memray). Example:</li>
</ol>
<p><a href="https://i.sstatic.net/C1nYU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C1nYU.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>My heap size increases over time, by about 0.5 kb each iteration.</li>
</ol>
<p>What's going on? AFAIU, Python should be cleaning up these threads when they finish.</p>
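<p>A quick sanity check (a diagnostic sketch, separate from whatever memray reports) on whether finished threads are reclaimed at the interpreter level:</p>

```python
import gc
import threading
import time

def work():
    time.sleep(0.01)

baseline = threading.active_count()
for _ in range(20):
    t = threading.Thread(target=work)
    t.start()
    t.join()   # once the target returns, the thread leaves the active set
gc.collect()
assert threading.active_count() == baseline
```

<p>If this assertion holds, finished threads are being reclaimed, which narrows the search to object references kept alive elsewhere.</p>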
|
<python><memory><python-multithreading>
|
2024-02-24 22:43:13
| 0
| 1,736
|
Cam.Davidson.Pilon
|
78,054,191
| 6,068,294
|
how to use a string variable to define dataarray name when adding to xarray dataset?
|
<p>I want to add a <code>dataarray</code> to an <code>xarray</code> <code>dataset</code>, and do so by using <code>xarray.assign</code>, but I don't know how to define the name of the dataarray using a string variable (i.e. to call the new entry "myvar"):</p>
<pre><code>import xarray as xr
varname="myvar"
vals=[1,2,3]
coords=[4,5,6]
ds=xr.Dataset(data_vars={},coords={'xcoord':coords})
ds=ds.assign(varname=(['xcoord'],vals))
ds.to_netcdf("test.nc")
ds.close()
</code></pre>
<p>This gives me a variable literally called "varname" - how do I use the value of the string variable here instead?</p>
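<p>The general Python mechanism for a dynamic keyword name is <code>**</code> unpacking of a dict; assuming <code>Dataset.assign</code> forwards keyword arguments like most xarray methods, <code>ds.assign(**{varname: (['xcoord'], vals)})</code> should do it. The mechanism itself, shown with a plain function:</p>

```python
def assign_demo(**kwargs):
    # mimics an API that takes the new variable name as a keyword argument
    return dict(kwargs)

varname = "myvar"
result = assign_demo(**{varname: [1, 2, 3]})
print(result)  # {'myvar': [1, 2, 3]}
```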
|
<python><python-xarray><keyword-argument>
|
2024-02-24 22:17:43
| 1
| 8,176
|
ClimateUnboxed
|
78,054,172
| 4,949,649
|
Python - Implement Authentik SSO in web server using requests_oauthlib
|
<p>I'm trying to implement SSO in a Web Application using OpenID Connect.</p>
<h2>What I am using</h2>
<ul>
<li><strong>python 3.12</strong></li>
<li><strong>Authentik</strong> (the Identity Provider <em>aka IdP</em>)</li>
<li><strong>flask</strong> (to expose the webserver)</li>
<li><strong>requests_oauthlib</strong> (to handle OAuth2 Session)</li>
</ul>
<h2>What I've done</h2>
<p>I'm trying to replicate the <a href="https://requests-oauthlib.readthedocs.io/en/latest/examples/real_world_example.html#" rel="nofollow noreferrer">example</a> in <a href="https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html" rel="nofollow noreferrer">requests-oauthlib</a> for a Web Application, but without success.</p>
<ol>
<li>Create an application on the IdP <a href="https://goauthentik.io/docs/#what-is-authentik" rel="nofollow noreferrer">Authentik</a> as <code>OAuth2/OpenID Provider</code></li>
<li>Create the web server using <code>flask</code> and 2 test endpoints: <code>/login</code>, which redirects to the IdP and retrieves <code>authorization_url</code> and <code>state</code>, and <code>/callback</code>, which uses the <code>client_id</code>, <code>client_secret</code>, <code>state</code> and <code>authorization_response</code> to try to retrieve the access token</li>
</ol>
<p>Unfortunately when I try to retrieve the access token I receive back the following error: <code>oauthlib.oauth2.rfc6749.errors.InvalidClientError: (invalid_client) Client authentication failed (e.g., unknown client, no client authentication included, or unsupported authentication method)</code></p>
<h2>The code</h2>
<pre class="lang-py prettyprint-override"><code>import json
import os.path
from uuid import uuid4
from requests_oauthlib import OAuth2Session
from waitress import serve
from flask import Flask, jsonify, request, url_for, redirect, session
from pprint import pprint
with open(os.path.join("Config", "client_secrets.json"), "r") as f:
idp = json.load(f)
os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
def main():
app = Flask(__name__)
# This allows us to use a plain HTTP callback
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = "1"
app.config['SECRET_KEY'] = str(uuid4())
@app.route('/')
def index():
return """
<a href="/login">Login</a>
"""
@app.route("/login")
def login():
oauth = OAuth2Session(client_id=idp["client_id"],
scope=idp["scope"],
redirect_uri=idp["callback"]
)
authorization_url, state = oauth.authorization_url(idp["authorize"])
session['oauth_state'] = state
return redirect(authorization_url)
@app.route("/callback")
def callback():
pprint(request.__dict__)
oauth = OAuth2Session(client_id=idp["client_id"],
state=session['oauth_state']
)
# When I try to get the token, nothing works
token = oauth.fetch_token(
idp["token"],
client_secret=idp["client_secret"],
authorization_response=request.url
)
# I never reach this line
session['oauth_token'] = token
return "I cannot see this :("
print("Starting webserver")
serve(app, host='0.0.0.0', port=5000)
print("Webserver running")
if __name__ == "__main__":
main()
</code></pre>
<h2>The callback request:</h2>
<pre><code>{'cookies': ImmutableMultiDict([('session', 'REDACTED_SESSION')]),
'environ': {'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'HTTP_ACCEPT_ENCODING': 'gzip, deflate, br',
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.5',
'HTTP_CONNECTION': 'keep-alive',
'HTTP_COOKIE': 'session=REDACTED_SESSION',
'HTTP_DNT': '1',
'HTTP_HOST': 'localhost:5000',
'HTTP_SEC_FETCH_DEST': 'document',
'HTTP_SEC_FETCH_MODE': 'navigate',
'HTTP_SEC_FETCH_SITE': 'cross-site',
'HTTP_UPGRADE_INSECURE_REQUESTS': '1',
'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; '
'rv:123.0) Gecko/20100101 Firefox/123.0',
'PATH_INFO': '/callback',
'QUERY_STRING': 'code=REDACTED_CODE&state=REDACTED_STATE',
'REMOTE_ADDR': '127.0.0.1',
'REMOTE_HOST': '127.0.0.1',
'REMOTE_PORT': '64951',
'REQUEST_METHOD': 'GET',
'REQUEST_URI': '/callback?code=REDACTED_CODE&state=REDACTED_STATE',
'SCRIPT_NAME': '',
'SERVER_NAME': 'waitress.invalid',
'SERVER_PORT': '5000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'SERVER_SOFTWARE': 'waitress',
'waitress.client_disconnected': <bound method HTTPChannel.check_client_disconnected of <waitress.channel.HTTPChannel connected 127.0.0.1:64951 at 0x285f5356ba0>>,
'werkzeug.request': <Request 'http://localhost:5000/callback?code=REDACTED_CODE&state=REDACTED_STATE' [GET]>,
'wsgi.errors': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'wsgi.file_wrapper': <class 'waitress.buffers.ReadOnlyFileBasedBuffer'>,
'wsgi.input': <_io.BytesIO object at 0x00000285F5391E90>,
'wsgi.input_terminated': True,
'wsgi.multiprocess': False,
'wsgi.multithread': True,
'wsgi.run_once': False,
'wsgi.url_scheme': 'http',
'wsgi.version': (1, 0)},
'headers': EnvironHeaders([('Host', 'localhost:5000'), ('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8'), ('Accept-Language', 'en-US,en;q=0.5'), ('Accept-Encoding', 'gzip, deflate, br'), ('Dnt', '1'), ('Connection', 'keep-alive'), ('Cookie', 'session=REDACTED_SESSION'), ('Upgrade-Insecure-Requests', '1'), ('Sec-Fetch-Dest', 'document'), ('Sec-Fetch-Mode', 'navigate'), ('Sec-Fetch-Site', 'cross-site')]),
'host': 'localhost:5000',
'json_module': <flask.json.provider.DefaultJSONProvider object at 0x00000285F5354530>,
'method': 'GET',
'path': '/callback',
'query_string': b'code=REDACTED_CODE&state=REDACTED_'
b'_STATE',
'remote_addr': '127.0.0.1',
'root_path': '',
'scheme': 'http',
'server': ('waitress.invalid', 5000),
'shallow': False,
'url': 'http://localhost:5000/callback?code=REDACTED_CODE&state=REDACTED_STATE',
'url_rule': <Rule '/callback' (GET, OPTIONS, HEAD) -> callback>,
'view_args': {}}
</code></pre>
|
<python><flask><oauth-2.0><single-sign-on><openid-connect>
|
2024-02-24 22:08:33
| 1
| 755
|
Timmy
|
78,054,007
| 2,896,638
|
How to run many threads/procs with global timeout
|
<p>I'm trying to run many Python threads (or separate processes) from the 'run' method of class GetDayMin. I want the threads or processes to run simultaneously, and after 40 seconds the class instance writes its data (from each thread/proc) and exits. While I can start each thread without waiting on anything to complete, if I use the join method to wait on any thread, it could take a long time to time out since successive threads may all be blocked. It seems the join method of both threading and multiprocessing will hang until the timeout.</p>
<p>For example, if in my class I start 5 threads and then wait 40 seconds in order of thread creation, the first thread could take 40 seconds to time out, and then we go to the second thread, which takes another 40 seconds to time out, etc. We could end up waiting 200 seconds for 5 threads.</p>
<p>What I want is that no thread takes longer than 40 seconds, so the whole class instance lasts a maximum of 40 seconds too. I'm willing to do multiprocessing instead of multithreading if that makes things easier. What I really anticipate is that most threads will complete within 10 seconds but three or four may hang, and I don't want to wait for them. How can I accomplish this?</p>
<pre><code>import multiprocessing
import pandas as pd
import random
import time
class GetDayMin:
def __init__(self):
self.results = pd.DataFrame() # Shared DataFrame to store results
def add_result(self, result):
self.results = self.results.append(result, ignore_index=True)
def process_function(self):
sleep_time = random.randint(30, 50) # Random sleep time between 30 to 50 seconds
time.sleep(sleep_time) # Pretend I'm calculating something
# Return the time slept to store in the results to simulate thread communication
return {'process_id': multiprocessing.current_process().pid, 'time_slept': sleep_time}
def run(self):
processes = []
for _ in range(30):
process = multiprocessing.Process(target=self.process_function)
processes.append(process)
# Start all processes
for process in processes:
process.start()
# Wait for all processes to finish or timeout after 40 seconds (each--unfortunately)
for process in processes:
process.join(timeout=40)
if process.is_alive():
process.terminate()
process.join() # wait on process--want this to be a collective 40 seconds
</code></pre>
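<p>A common way to get a collective 40-second budget, rather than 40 seconds per join, is to compute one deadline up front and give each successive join only the time remaining. A sketch with short sleeps (daemon threads so stragglers cannot keep the process alive; note that threads, unlike processes, cannot be terminated):</p>

```python
import threading
import time

def worker(seconds):
    time.sleep(seconds)

# two fast workers and one that would blow the budget
threads = [threading.Thread(target=worker, args=(s,), daemon=True)
           for s in (0.05, 0.1, 5.0)]
for t in threads:
    t.start()

deadline = time.monotonic() + 0.5    # collective budget (40 s in the real case)
for t in threads:
    remaining = deadline - time.monotonic()
    t.join(timeout=max(0.0, remaining))

finished = [not t.is_alive() for t in threads]
print(finished)  # [True, True, False]
```

<p>The same remaining-time pattern works with <code>multiprocessing.Process.join</code>, with the advantage that a process still alive at the deadline can actually be terminated.</p>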
|
<python><multithreading><multiprocessing>
|
2024-02-24 21:15:15
| 1
| 347
|
HonestMath
|
78,053,770
| 2,612,259
|
How can I "dry" up this pytest-mock?
|
<p>I have some test methods that require a mock of a class. The below code works but I am repeating the same mock in several methods. Here is the code that is getting repeated. You can see it used below in the last 3 methods.</p>
<pre class="lang-py prettyprint-override"><code>class PyWaves(object):
def names(self):
return test_class_self.mock_wave_names()
</code></pre>
<p>I tried moving it to the test class's <code>__init__</code> but that does not work. What's the right way to avoid repeating this code in each method where I need the mock?</p>
<pre class="lang-py prettyprint-override"><code>class TestExtractedXPSNameAdapter:
# def __init__(self, mocker):
# class PyWaves(object):
# def names(self):
# return self.mock_wave_names()
# self.waves = PyWaves()
def mock_wave_names(self):
return ['net_a', 'top.net_b', 'top.foo@bar@net1#foo@bar@inst@0_g', 'top.foo1@bar1@net2#foo1@bar1@inst1_d']
def expected_index(self):
return {'net_a': 'net_a', 'top.net_b': 'top.net_b', 'foo@bar@inst@0_g': 'top.foo@bar@net1#foo@bar@inst@0_g', 'foo1@bar1@inst1_d': 'top.foo1@bar1@net2#foo1@bar1@inst1_d'}
def test_gen_populated_name_guesses(self):
adapter = ExtractedXPSNameAdapter()
identifier = WaveformIdentifier(instance_name="my_inst", path="/i_macro/sub1/sub2", term_name="my_terminal")
guesses = adapter.gen_name_guesses(identifier)
expected_guesses = [
"i_macro@sub1@sub2@my_inst@0_my_terminal",
"i_macro@sub1@sub2@my_inst_my_terminal",
]
assert guesses == expected_guesses
def test_gen_empty_name_guesses(self):
adapter = ExtractedXPSNameAdapter()
identifier = WaveformIdentifier(instance_name="my_inst", path="/i_macro/sub1/sub2")
guesses = adapter.gen_name_guesses(identifier)
expected_guesses = []
assert guesses == expected_guesses
def test_build_index(self, mocker):
test_class_self = self
class PyWaves(object):
def names(self):
return test_class_self.mock_wave_names()
waves = PyWaves()
adapter = ExtractedXPSNameAdapter()
adapter.set_wave_list(waves)
adapter.build_index()
assert adapter.index == self.expected_index()
def test_waveform_name(self, mocker):
test_class_self = self
class PyWaves(object):
def names(self):
return test_class_self.mock_wave_names()
waves = PyWaves()
adapter = ExtractedXPSNameAdapter()
adapter.set_wave_list(waves)
identifier = WaveformIdentifier(instance_name='inst1', path='/foo1/bar1', term_name="d")
name = adapter.waveform_name(identifier)
assert name == 'top.foo1@bar1@net2#foo1@bar1@inst1_d'
def test_info_from_identifier(self, mocker):
test_class_self = self
class PyWaves(object):
def names(self):
return test_class_self.mock_wave_names()
waves = PyWaves()
adapter = ExtractedXPSNameAdapter()
adapter.set_wave_list(waves)
identifier = WaveformIdentifier(instance_name='inst1', path='/foo1/bar1', term_name="d")
expected = WaveformNameInfo(identifier=identifier, type=identifier.type, full_name='top.foo1@bar1@net2#foo1@bar1@inst1_d', path=identifier.path, terminal_name=identifier.term_name)
assert adapter.info_from_identifier(identifier) == expected
</code></pre>
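<p>One way to DRY this up without pytest-mock at all is an ordinary stub class plus a pytest fixture that builds it once (a sketch; the class and fixture names are illustrative):</p>

```python
import pytest

class FakePyWaves:
    """Stub standing in for the real wave-list object."""
    def __init__(self, names):
        self._names = names
    def names(self):
        return self._names

@pytest.fixture
def waves():
    return FakePyWaves(['net_a', 'top.net_b',
                        'top.foo@bar@net1#foo@bar@inst@0_g',
                        'top.foo1@bar1@net2#foo1@bar1@inst1_d'])

def test_build_index(waves):
    # each test just declares the fixture as a parameter
    assert waves.names()[0] == 'net_a'
```

<p>Placed in <code>conftest.py</code>, the fixture is available to every test module; the adapter setup (constructing ExtractedXPSNameAdapter and calling set_wave_list) could live in a second fixture that depends on <code>waves</code>.</p>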
|
<python><pytest-mock>
|
2024-02-24 19:50:41
| 1
| 16,822
|
nPn
|
78,053,767
| 4,119,262
|
Why is every steps printed and not the final string without vowels?
|
<p>I have seen various questions close to what I am trying to achieve, i.e. removing vowels from a string.</p>
<p>My code is as follows (I want to avoid using a function):</p>
<pre><code>word_with_whole_alphabet = input("give me a string bro: ")
vowel = ["a", "e", "i", "o", "u", "y", "A", "E", "I", "O", "U", "Y"]
for i in word_with_whole_alphabet:
if i in vowel:
word_with_whole_alphabet = word_with_whole_alphabet.replace(i, "")
print(word_with_whole_alphabet)
</code></pre>
<p>However, I do not understand why the output includes all the intermediate steps.</p>
<p>The output is as follows for the word "LALALILALOU"</p>
<pre><code>LALALILALOU
LLLILLOU
LLLILLOU
LLLILLOU
LLLILLOU
LLLLLOU
LLLLLOU
LLLLLOU
LLLLLOU
LLLLLU
LLLLL
</code></pre>
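<p>For what it's worth, the intermediate lines most likely come from <code>print</code> sitting inside the <code>for</code> loop, so it runs once per character; dedented, it prints only the final string. A minimal rework of the same logic:</p>

```python
word = "LALALILALOU"
vowels = {"a", "e", "i", "o", "u", "y", "A", "E", "I", "O", "U", "Y"}
for ch in word:
    if ch in vowels:
        word = word.replace(ch, "")
print(word)  # LLLLL -- printed once, after the loop ends
```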
|
<python>
|
2024-02-24 19:50:11
| 1
| 447
|
Elvino Michel
|
78,053,580
| 16,459,035
|
How to get a <a> tag in a nested HTML using BeautifulSoup4
|
<p>I want to access the href links, although my HTML has a nested structure like the image below.</p>
<p><a href="https://i.sstatic.net/EhRrj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EhRrj.png" alt="enter image description here" /></a></p>
<p>I'm trying to do that using BeautifulSoup4; however, I'm new to web scraping. The code I'm using is:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import time
url = "https://openfinancebrasil.atlassian.net/wiki/spaces/OF/pages/17368301/DA+API+-+Canais+de+Atendimento"
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.text, 'html.parser')
page_body = soup.find_all('div', class_= '_1bsb1osq _19pkidpf _2hwx1wug _otyridpf _18u01wug')
for p in page_body:
print(p.find_all('a'))
else:
print(f"Failed to retrieve content. Status Code: {response.status_code}")
</code></pre>
<p>But my print shows an empty list <code>[]</code>.</p>
<p>My question is: is there a way to access this element directly?</p>
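<p>For reference, <code>find_all('a')</code> already searches every nested descendant, as this static snippet shows (assuming bs4 is installed). If the live page returns an empty list, the likeliest explanation is that those divs are rendered client-side by JavaScript and are simply absent from the HTML that requests receives.</p>

```python
from bs4 import BeautifulSoup

# a static stand-in for the nested structure in the screenshot
html = """
<div class="_1bsb1osq _19pkidpf _2hwx1wug _otyridpf _18u01wug">
  <p><span><a href="https://example.com/doc">link</a></span></p>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
div = soup.find("div", class_="_1bsb1osq")   # class_ matches any one class
links = [a["href"] for a in div.find_all("a")]
print(links)  # ['https://example.com/doc']
```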
|
<python><web-scraping><beautifulsoup>
|
2024-02-24 18:49:05
| 2
| 671
|
OdiumPura
|
78,053,503
| 22,437,609
|
How to add gif image inside a Kivy Popup?
|
<p>I have a popup in my app and it works great. It shows text as well, but I want to add a GIF inside it, and I could not succeed.</p>
<p>I will share my structure.</p>
<p><strong>KV File:</strong></p>
<pre><code><PopupBox>:
pop_up_text: _pop_up_text
pop_up_image: _pop_up_image
background_color: '#38B6FF'
background: 'white'
size_hint: 1, 1
auto_dismiss: True
title: 'FOOTBALL PREDICTOR'
title_size: '15dp'
title_font: 'fonts/Bungee-Regular.ttf'
BoxLayout:
orientation: "vertical"
Label:
id: _pop_up_text
text: ''
font_name: 'fonts/Bungee-Regular.ttf'
font_size: '30dp'
color: 1, 0.4, 0.769, 1
AsyncImage:
id: _pop_up_image
source: ''
</code></pre>
<p><strong>Python File:</strong></p>
<pre><code>from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
from kivy.uix.image import Image
from kivy.metrics import dp
from kivy.clock import mainthread
from kivy.uix.popup import Popup
from kivy.factory import Factory
from kivy.properties import ObjectProperty
from kivy.utils import platform
if platform == 'android':
from kivmob_mod import KivMob
from time import sleep
from unidecode import unidecode
from bs4 import BeautifulSoup
import aiohttp
import asyncio
import requests
import threading
import re
import ssl
class PopupBox(Popup):
pop_up_text = ObjectProperty()
pop_up_image = ObjectProperty()
def update_pop_up_text(self, p_message):
self.pop_up_text.text = p_message
def update_pop_up_image(self, image_url):
self.pop_up_image.source = image_url
class Predictor(BoxLayout):
def __init__(self, **kwargs):
super(Predictor, self).__init__(**kwargs)
self.status = True
@mainthread
def on_size_calculate(self, *args):
self.ids.sv_calculate.scroll_y = 1
print('Scroll 1')
self.pop_up.dismiss()
print('Popup Dismiss')
@mainthread
def on_size_guide_eng(self, *args):
self.ids.sv_guide_eng.scroll_y = 1
print('Scroll 3')
self.pop_up.dismiss()
print('Popup Dismiss')
@mainthread
def on_size_guide_tr(self, *args):
self.ids.sv_guide_tr.scroll_y = 1
print('Scroll 4')
self.pop_up.dismiss()
print('Popup Dismiss')
def homepage(self, s_image, screenmanager):
screenmanager.current = 'homepage_screen'
def league(self, s_image, screenmanager):
screenmanager.current = 'league_screen'
def show_popup(self):
self.pop_up = Factory.PopupBox()
self.pop_up.update_pop_up_text('''Please Wait...\n...Be Patient''')
self.pop_up.update_pop_up_image('wait.gif') # Replace 'path_to_your_gif.gif' with the actual path to your GIF file
self.pop_up.open()
@mainthread
def clear_widgets(self, *args):
if self.ids.gridsonuc.children:
for child in [child for child in self.ids.gridsonuc.children]:
self.ids.gridsonuc.remove_widget(child)
print('Screen Cleared')
if self.ids.guidesonuc_eng.children:
for child in [child for child in self.ids.guidesonuc_eng.children]:
self.ids.guidesonuc_eng.remove_widget(child)
print('Screen Cleared')
if self.ids.guidesonuc_tr.children:
for child in [child for child in self.ids.guidesonuc_tr.children]:
self.ids.guidesonuc_tr.remove_widget(child)
print('Screen Cleared')
def calculate_screen(self, s_image, screenmanager, lig):
screenmanager.current = 'calculate_screen'
self.show_popup()
mythread = threading.Thread(target=self.clear_widgets)
mythread.start()
mythread3 = threading.Thread(target=self.calculate(lig))
mythread3.start()
mythread4 = threading.Thread(target=self.on_size_calculate)
mythread4.start()
@mainthread
def calculate(self, lig, *args):
async def fetch_stats_home(session, url, match_codes_home):
"some codes"
async def fetch_stats_away(session, url, match_codes_away):
"some codes"
async def main():
url =
async with aiohttp.ClientSession() as session:
tasks_home = [fetch_stats_home(session, url, match_code) for match_code in match_codes_home]
tasks_away = [fetch_stats_away(session, url, match_code) for match_code in match_codes_away]
await asyncio.gather(*tasks_home, *tasks_away)
asyncio.run(main())
veri = []
veri.append(calculate)
class FootballpredictorApp(App):
def build(self):
return Predictor()
if __name__ == '__main__':
FootballpredictorApp().run()
</code></pre>
<p><a href="https://i.sstatic.net/rtt6x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rtt6x.png" alt="enter image description here" /></a></p>
<p>As you can see, there are white circles. <code>'wait.gif'</code> is in the root directory, next to main.py.
I can see the <code>Please Wait</code> text, but the GIF is not shown. I did not use <code>from kivy.uix.image import AsyncImage</code> in pure Python. I am not good at Python or Kivy.
Could you please help me fix this problem?</p>
<p>Many thanks for each comment.</p>
<p><strong>UPDATE: A CLUE</strong></p>
<p>I run this app in VS Code, and in the terminal when the popup closes I get this error:</p>
<pre><code>Scroll 1
Popup Dismiss
Unknown <gif> type, no loader found.
</code></pre>
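<p>Not a definitive diagnosis, but <code>Unknown &lt;gif&gt; type, no loader found</code> usually means Kivy has no image provider that can decode GIFs — Kivy delegates that to Pillow, so <code>pip install pillow</code> in the same environment typically fixes it. A quick, self-contained check that Pillow can round-trip an animated GIF:</p>

```python
# Kivy delegates GIF decoding to Pillow; "Unknown <gif> type, no loader
# found" usually means Pillow is missing (pip install pillow).
import io
from PIL import Image

# Create a two-frame GIF in memory and read it back.
frames = [Image.new("L", (8, 8), color=c) for c in (0, 255)]
buf = io.BytesIO()
frames[0].save(buf, format="GIF", save_all=True, append_images=frames[1:])
buf.seek(0)

gif = Image.open(buf)
print(gif.format, gif.n_frames)
```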
|
<python><kivy><kivymd>
|
2024-02-24 18:21:35
| 1
| 313
|
MECRA YAVCIN
|
78,053,296
| 7,019,073
|
Align multiple overlayed facetted Seaborn cat plots
|
<p>I'm trying to create a plot similar to <a href="https://stackoverflow.com/questions/76518217/how-to-map-stripplots-onto-boxplots-in-a-facetgrid">this one</a>. A facet grid with strip and boxplots overlapping. The data is stored in a pandas dataframe. My difference to the referenced question is, that on top of distributing the bars over the X axis, I'm also drawing multiple bars (and point strips) per X value via the <code>hue</code> parameter. So far so good, this works.</p>
<p>The problem is that the boxes and point strips do not align their vertical positions, as can be seen in the figure in the upper row in the first column as well as in the lower row in the second and last column. The corresponding boxes and point strips are mostly next to each other, and even with varying offsets.</p>
<p><a href="https://i.sstatic.net/Ic4eS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ic4eS.png" alt="Boxes and Points do not always align" /></a></p>
<p>Here is my code so far with a dummy dataset:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
################### generate dummy data set ###################
np.random.seed(20240224)
numPoints = 300 # should be divisible by 3 and 2
df = pd.DataFrame({"CategoryX": np.random.randint(1, 4, numPoints),
"CategoryY": np.random.rand(numPoints),
# the imbalance here seems to be the problem trigger
"CategoryColor": np.random.choice([0,1,2,3], size=numPoints, p=[0.33, 0.33, 0.33, 0.01]),
"CategoryColumn": np.array(["ColA", "ColB", "ColC"] * (numPoints // 3)),
"CategoryRow": np.array(["RowA"] * (numPoints // 2) + ["RowB"] * (numPoints // 2)),
})
################### actual plot ###################
commonParams = dict(
x="CategoryX",
y="CategoryY",
hue="CategoryColor",
)
g = sns.catplot(
data=df,
**commonParams,
col="CategoryColumn",
row="CategoryRow",
kind="strip",
dodge=True,
)
# map by hand bc I couldn't figure out how to properly use map() or map_dataframe()
for i, s in enumerate(df['CategoryColumn'].unique()):
for j, f in enumerate(df['CategoryRow'].unique()):
sns.boxplot(
data=df[(df['CategoryColumn'] == s) & (df['CategoryRow'] == f)],
**commonParams,
ax=g.axes[j, i], # draw on the existing axes
legend=False,
)
</code></pre>
<p><a href="https://i.sstatic.net/nxmSG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nxmSG.png" alt="bad alignment sns.catplot and boxplots" /></a></p>
<p>Any help aligning this neatly on top of each other is highly appreciated!</p>
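<p>Not an official fix, but a common cause of this misalignment is that the hue levels present differ per facet (the imbalanced color category), so <code>dodge</code> computes different offsets in each call. Passing a fixed <code>order</code> and <code>hue_order</code> to both the <code>catplot</code> and each <code>boxplot</code> usually pins every hue level to the same slot; a sketch with dummy data:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "x": rng.integers(1, 4, 120),
    "y": rng.random(120),
    "hue": rng.choice([0, 1, 2, 3], size=120, p=[0.33, 0.33, 0.33, 0.01]),
    "col": np.tile(["A", "B", "C"], 40),
})

# Pinning order/hue_order gives every hue level the same dodge slot in
# every facet, even when a level is missing from one facet's data.
orders = dict(order=[1, 2, 3], hue_order=[0, 1, 2, 3])
g = sns.catplot(data=df, x="x", y="y", hue="hue", col="col",
                kind="strip", dodge=True, **orders)
for colname, ax in g.axes_dict.items():
    sns.boxplot(data=df[df["col"] == colname], x="x", y="y", hue="hue",
                ax=ax, **orders)
```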
|
<python><seaborn>
|
2024-02-24 17:19:50
| 1
| 1,040
|
Seriously
|
78,052,964
| 18,572,509
|
Get the credentials an instance of `Channel` used to authenticate in Paramiko
|
<p>I've been working with this multi-threaded SSH server that I made and I've encountered an issue. I need a way to get the credentials each channel used for authentication. For example, in <code>handle</code> after I accept a <code>channel</code> from the <code>transport</code>, I'd like to know what credentials that channel used to authenticate itself to the server, something like <code>channel.get_credentials() -> (username, password)</code>. Reading the docs, I couldn't find a way, but hopefully someone out there knows something I don't.</p>
<pre class="lang-py prettyprint-override"><code>import paramiko
import socket
import threading
class Server(paramiko.server.ServerInterface):
def get_allowed_auths(self, username):
return "password"
def check_channel_request(self, kind, channelID):
return paramiko.OPEN_SUCCEEDED
def check_channel_shell_request(self, channel):
return True
def check_channel_pty_request(self, c, t, w, h, p, ph, m):
return True
def get_banner(self):
return ("Paramiko SSH Server v1.0\n\r", "EN")
def check_auth_password(self, username, password):
print(f"[*] Auth request with credentials {username}:{password}")
return paramiko.AUTH_SUCCESSFUL
def handle(conn, addr):
print("[*] Handler waiting for SSH connection...")
transport = paramiko.Transport(conn)
transport.add_server_key(host_key)
transport.start_server(server=server)
channel = transport.accept(30)
if channel:
        print("[*] SSH connection received")
channel.send("Hi :)\r\n")
print(f"[>] {channel.recv(1024)}")
channel.close()
host_key = paramiko.RSAKey.generate(2048)
server = Server()
sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 5555))
sock.listen(100)
print("[*] Socket listening")
while True:
conn, addr = sock.accept()
print(f"[*] Connection from {addr[0]}:{addr[1]}, starting handler...")
handler = threading.Thread(target=handle, args=(conn, addr))
handler.start()
print("[*] Started handler...")
</code></pre>
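<p>For context: after authentication, Paramiko exposes the username via <code>transport.get_username()</code>, but (as far as I know) not the password. A common workaround is to record the credentials yourself inside <code>check_auth_password</code>, on a per-connection server object, and read them back in the handler. A dependency-free sketch of that pattern (names hypothetical; the string return value stands in for <code>paramiko.AUTH_SUCCESSFUL</code>):</p>

```python
import threading

class CredentialRecorder:
    """Pattern sketch: check_auth_password already sees the credentials,
    so store them on the per-connection server object and read them back
    in the connection handler."""
    def __init__(self):
        self._lock = threading.Lock()
        self.credentials = None  # (username, password) after auth

    def check_auth_password(self, username, password):
        with self._lock:
            self.credentials = (username, password)
        return "AUTH_SUCCESSFUL"  # stand-in for paramiko.AUTH_SUCCESSFUL

server = CredentialRecorder()
server.check_auth_password("alice", "s3cret")
print(server.credentials)
```

<p>Note that the question's code shares one global <code>server</code> across all handler threads; with this pattern you would create a fresh server object per connection, otherwise concurrent logins would overwrite each other's credentials.</p>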
|
<python><python-3.x><ssh><paramiko>
|
2024-02-24 15:22:06
| 1
| 765
|
TheTridentGuy supports Ukraine
|
78,052,958
| 11,462,274
|
How to monitor in real time all the Python files that are running on the computer including files started from subprocess?
|
<p>I have 4 Python files that are executed separately and uninterruptedly on my computer throughout the day via cmd. Within the codes, there are <code>subprocess</code> that execute other Python files in the background.</p>
<p>To keep track of which files are currently being executed, I added a function to the code of all of them that, when the file starts executing, creates an empty <code>.txt</code> file in a specific folder and deletes this file when it finishes executing.</p>
<p>For example, the <code>subprocess</code> executes the file <code>collect_data.py</code>, so this file at the beginning of the code creates the file <code>collect_data.txt</code>, does its work, and when finished, deletes the file <code>collect_data.txt</code>.</p>
<p>Note: The 4 main files that run all day also create their own <code>.txt</code> and delete it when they stop running.</p>
<p><a href="https://i.sstatic.net/UvH2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UvH2A.png" alt="enter image description here" /></a></p>
<p>As the files complete their service, they are deleted and only the main ones that are still running remain:</p>
<p><a href="https://i.sstatic.net/X0yjk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X0yjk.png" alt="enter image description here" /></a></p>
<p>I monitor the existing files in the aforementioned folder using <code>watchdog</code>:</p>
<pre class="lang-python prettyprint-override"><code>from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer
import tkinter as tk
import os
class FileChangeHandler(FileSystemEventHandler):
def __init__(self, file_list):
super().__init__()
self.file_list = file_list
def on_any_event(self, event):
self.update_file_list()
def update_file_list(self):
files = os.listdir(folder_path)
self.file_list.delete(0, tk.END)
for file in files:
self.file_list.insert(tk.END, file)
def monitor_folder():
event_handler.update_file_list()
root.after(1000, monitor_folder)
folder_path = r"C:\Users\Computador\Desktop\utilities_code\run_process"
root = tk.Tk()
root.title("Lista de Arquivos")
root.minsize(400, 300)
file_list = tk.Listbox(root, bg="black", fg="white")
file_list.pack(fill=tk.BOTH, expand=True)
event_handler = FileChangeHandler(file_list)
observer = Observer()
observer.schedule(event_handler, folder_path)
observer.start()
monitor_folder()
root.mainloop()
</code></pre>
<p>I know my approach is only a workaround. Is there a proper, reliable method to know with certainty which files are currently being executed, and also to detect a file that failed to complete its execution and left its process open "forever"?</p>
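<p>One more robust option than marker files is to ask the operating system directly which Python processes are running — for example with <code>psutil</code> (a third-party package, assumed installed). A marker file can outlive a crashed script, but a live process listing cannot go stale. A minimal sketch:</p>

```python
import psutil

# Enumerate running Python processes and the script each was started
# with; unlike marker files, this view cannot go stale after a crash.
def running_python_scripts():
    scripts = []
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        cmdline = proc.info["cmdline"] or []
        name = (proc.info["name"] or "").lower()
        if "python" in name and len(cmdline) > 1:
            scripts.append((proc.info["pid"], cmdline[1]))
    return scripts

print(running_python_scripts())
```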
|
<python>
|
2024-02-24 15:16:31
| 0
| 2,222
|
Digital Farmer
|
78,052,918
| 474,491
|
GAE is very slow loading a sentence transformer
|
<p>I'm using Google App Engine to host a website using Python and Flask.</p>
<p>I need to add text similarity functionality, using sentence_transformers. In requirements.txt, I add a dependency to the cpu version of torch:</p>
<pre><code>torch @ https://download.pytorch.org/whl/cpu/torch-2.2.1%2Bcpu-cp311-cp311-linux_x86_64.whl
sentence-transformers==2.4.0
</code></pre>
<p>When I add these statements to the main.py file:</p>
<pre><code>from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
</code></pre>
<p>the GAE instance creation time degrades from < 1 sec to > 20 sec.</p>
<p>Performance improves if I save the model to a directory in the project and use:</p>
<pre><code>model = SentenceTransformer('./idp_web_server/model')
</code></pre>
<p>but it is still over 15 sec. (Removing the statement for model creation reduces instance creation time to 4 sec). Going from an F4 instance (2.4 GHZ, with automatic scaling) to a B8 instance (4.8 MHZ, basic scaling) instance does not improve performance, so, it seems to be IO bound. Running the app locally on my machine (2.4 GHz), the model creation takes only 1.7 sec, i.e., is 5 to 10 times faster.</p>
<p>Can this be improved? Should I move to Google Cloud instead of GAE?</p>
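<p>One commonly suggested mitigation is to stop loading the model at import time and defer it to first use — optionally combined with App Engine warmup requests (<code>/_ah/warmup</code>) — so instance creation itself stays fast. A dependency-free sketch of the lazy-init pattern, where <code>load_model()</code> is a hypothetical stand-in for the expensive <code>SentenceTransformer(...)</code> call:</p>

```python
from functools import lru_cache

def load_model():
    # Placeholder for the real, expensive SentenceTransformer(...) call.
    return object()

# Loading at import time makes every cold start pay the cost; deferring
# it to the first request keeps instance creation fast. lru_cache
# guarantees the load happens at most once per process.
@lru_cache(maxsize=1)
def get_model():
    return load_model()

m1 = get_model()
m2 = get_model()
print(m1 is m2)  # the expensive load happened only once
```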
|
<python><google-app-engine><pytorch><sentence-transformers>
|
2024-02-24 15:03:22
| 2
| 2,695
|
Pierre Carbonnelle
|
78,052,906
| 5,640,517
|
Digit OCR using Tesseract
|
<p>I'm trying to ocr some numbers:</p>
<p><a href="https://i.sstatic.net/SeGXC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SeGXC.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/wLZqG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wLZqG.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/FG9OQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FG9OQ.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/LQVfF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LQVfF.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/HP5AP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HP5AP.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/AibSA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AibSA.png" alt="enter image description here" /></a></p>
<p>I made this code to test different psm arguments (6, 7, 8, 13); I don't see much difference.</p>
<pre><code>import os
import pytesseract
import matplotlib.pyplot as plt
import cv2
import numpy as np
pytesseract.pytesseract.tesseract_cmd = (
r"path/to/tesseract"
)
def apply_tesseract(image_path, psm):
image = cv2.imread(image_path)
text = pytesseract.image_to_string(image, config=f"--psm {psm} digits")
return image, text
def display_images_with_text(images, texts):
num_images = len(images)
num_rows = min(3, num_images)
num_cols = (num_images + num_rows - 1) // num_rows
fig, axes = plt.subplots(num_rows, num_cols, figsize=(12, 8), subplot_kw={'xticks': [], 'yticks': []})
for i, (image, text) in enumerate(zip(images, texts)):
ax = axes[i // num_cols, i % num_cols] if num_rows > 1 else axes[i % num_cols]
ax.imshow(image)
ax.axis("off")
ax.set_title(text)
plt.show()
def main(folder_path):
for psm in [6]:
images = []
texts = []
for filename in os.listdir(folder_path):
if filename.lower().endswith((".png")):
image_path = os.path.join(folder_path, filename)
image, text = apply_tesseract(image_path, psm)
images.append(image)
texts.append(text)
display_images_with_text(images, texts)
if __name__ == "__main__":
folder_path = r"./digitImages"
main(folder_path)
</code></pre>
<p>This is the output of <code>--psm 6</code></p>
<p><a href="https://i.sstatic.net/4tbAa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4tbAa.png" alt="enter image description here" /></a></p>
<p>As you can see, it's not that good.</p>
<p>How can I improve this? The digit images are already black and white and quite small. I've tried some preprocessing, but I end up with the same black-and-white image.</p>
<pre><code># Read the original image
original_image = cv2.imread(image_path)
new_width = original_image.shape[1] * 2 # Double the width
new_height = original_image.shape[0] * 2 # Double the height
resized_image = cv2.resize(original_image, (new_width, new_height))
# Convert the original image to grayscale
gray = cv2.cvtColor(resized_image, cv2.COLOR_BGR2GRAY)
# Sharpen the blurred image
sharpen_kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
sharpen = cv2.filter2D(gray, -1, sharpen_kernel)
# Apply Otsu's thresholding to the blurred image
thresh = cv2.threshold(sharpen, 0, 255, cv2.THRESH_OTSU)[1]
</code></pre>
<p>Update:</p>
<p>It turns out that simply adding some borders helped a ton; not perfect, but better.</p>
<p><a href="https://i.sstatic.net/8fhML.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8fhML.png" alt="enter image description here" /></a></p>
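<p>The border trick from the update can be reproduced without OpenCV: <code>np.pad</code> adds a constant-colour margin around the glyphs, equivalent to <code>cv2.copyMakeBorder(img, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=255)</code>. A minimal sketch on a stand-in array:</p>

```python
import numpy as np

# Tesseract often struggles when glyphs touch the image edge; padding
# with a white border gives the page-segmentation step some room.
img = np.zeros((20, 40), dtype=np.uint8)  # stand-in for a digit image
padded = np.pad(img, pad_width=10, constant_values=255)
print(padded.shape)  # (40, 60)
```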
|
<python><ocr><tesseract><image-preprocessing>
|
2024-02-24 14:59:37
| 2
| 1,601
|
Daviid
|