| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,600,751
| 4,847,250
|
Does an LSTM need a shift of every time step for parameter identification of a model?
|
<p>I use an LSTM deep-learning model to identify the parameters of a mathematical model that generates a univariate time series. Depending on the parameters I choose for the model, the time series changes its oscillation frequency or shape (type of oscillation).</p>
<p>For now, I built a deep-learning LSTM that takes 1 second of the time series (1024 samples):</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

DLmodel = Sequential()
DLmodel.add(LSTM(units=Nb_input, return_sequences=True, input_shape=(None, Nb_input), activation='sigmoid'))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(Nb_input // 4, activation="relu", kernel_initializer="uniform"))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(Nb_input // 8, activation="relu", kernel_initializer="uniform"))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(Nb_output, activation="linear", kernel_initializer="uniform"))
</code></pre>
<p>where</p>
<pre><code>Nb_input = number of sample
Nb_output = number of parameter to identify
</code></pre>
<p>My idea is to do</p>
<pre><code>while 1:
label = randomly choose 3 model parameters N times (dimension = 1 x N x 3)
signal = generate N signal of 5 seconds (dimension = 1 x N x 1024)
increase the training set by using a shift window on the time series
training set = windowed signal (dimension = 1 x N*5 x 1024)
label = copy model parameters(dimension = 1 x N*5 x 3)
Fit the LSTM
Test LSTM on new data
</code></pre>
<p>What I'm wondering is: do the LSTM inputs need a time shift of 1 sample (to create the training set) to be trained correctly, or can I leave the shift at a complete window (1024 samples)? For instance, if an event occurs at the beginning of the one-second window, is it the same (from the LSTM's point of view) as if the event occurred at the end?</p>
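<p>The windowing step in the pseudocode above can be sketched as follows. This is a minimal sketch, not the asker's actual code: <code>windowed_dataset</code> and its parameters are hypothetical names, and <code>step</code> controls exactly the choice the question asks about (a 1-sample shift vs. a whole-window shift).</p>

```python
import numpy as np

def windowed_dataset(signal, params, window=1024, step=1024):
    """Cut one long signal into fixed-size windows that all share one label.

    step=window gives non-overlapping windows (whole-window shift);
    step=1 gives the densest 1-sample shift the question mentions.
    """
    starts = range(0, len(signal) - window + 1, step)
    X = np.stack([signal[s:s + window] for s in starts])
    y = np.tile(params, (len(X), 1))  # same model parameters for every window
    return X, y
```

<p>With a 5-second signal (5120 samples) and <code>step=1024</code>, this yields the 5 windows per signal that the pseudocode's "N*5" dimension refers to.</p>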
|
<python><keras><lstm>
|
2023-03-01 07:40:10
| 1
| 5,207
|
ymmx
|
75,600,734
| 3,751,931
|
launch.json breaks debugging in VSCode
|
<p>I wanted to set <code>justMyCode</code> to false for a python project in VSCode.
So I created a <code>launch.json</code> file in a <code>.vscode</code> folder under the project root folder with the following content:</p>
<pre class="lang-json prettyprint-override"><code>{
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal", // <== adding/deleting this makes no difference
"env": {"PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT": "5"}, // <== I'd like to have this too
"justMyCode": false,
}
],
"debug.allowBreakpointsEverywhere": true,
"jupyter.debugJustMyCode": false,
}
</code></pre>
<p>Afterwards, the debugger won't start. The code runs fine outside the debugger, though.</p>
<p>What should I modify?</p>
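<p>For comparison, here is the same file stripped down to only debug-configuration keys. This is a guess at the culprit, not a confirmed fix: <code>debug.allowBreakpointsEverywhere</code> and <code>jupyter.debugJustMyCode</code> are editor-level settings that normally live in <code>settings.json</code> rather than <code>launch.json</code>.</p>

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "env": {"PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT": "5"},
            "justMyCode": false
        }
    ]
}
```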
|
<python><visual-studio-code><visual-studio-debugging>
|
2023-03-01 07:37:39
| 1
| 2,391
|
shamalaia
|
75,600,684
| 10,715,700
|
AssertionError: SparkContext._active_spark_context is not None
|
<p>I create an object whose <code>__init__</code> function builds a map from a dictionary. The object is created outside of any function or class, so it runs when the module is loaded during imports.</p>
<p>It works fine when I run it normally, but when I run it using Spark Streaming, I get the assertion error shown below. It is thrown inside the <code>__init__</code> function of the class.</p>
<p>Why am I facing this issue only when I use spark streaming and how do I fix it?</p>
<pre><code>File "some_file.py", line 58, in __init__
some_map = F.create_map(*[F.lit(x) for x in chain(*some_dict.items())])
File "some_file.py", line 58, in <listcomp>
some_map = F.create_map(*[F.lit(x) for x in chain(*some_dict.items())])
File "/databricks/spark/python/pyspark/sql/functions.py", line 139, in lit
return col if isinstance(col, Column) else _invoke_function("lit", col)
File "/databricks/spark/python/pyspark/sql/functions.py", line 85, in _invoke_function
assert SparkContext._active_spark_context is not None
AssertionError
</code></pre>
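<p>One pattern that avoids this class of error is to defer the Spark-dependent construction until first use, so that importing the module does not require an active <code>SparkContext</code>. The sketch below is hypothetical (class and attribute names are invented); the real Spark call is shown as a comment, with a plain-Python stand-in so the example runs without Spark:</p>

```python
from itertools import chain

class MappingHolder:
    """Build the Spark map column lazily instead of at import time."""

    def __init__(self, some_dict):
        self.some_dict = some_dict
        self._some_map = None  # not built yet

    @property
    def some_map(self):
        if self._some_map is None:
            # Real code (runs only once a SparkContext exists):
            #   from pyspark.sql import functions as F
            #   self._some_map = F.create_map(
            #       *[F.lit(x) for x in chain(*self.some_dict.items())])
            # Stand-in so this sketch is runnable without Spark:
            self._some_map = list(chain(*self.some_dict.items()))
        return self._some_map
```

<p>The first access to <code>some_map</code> happens inside a running job, where <code>SparkContext._active_spark_context</code> is set, rather than at module import.</p>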
|
<python><apache-spark><pyspark>
|
2023-03-01 07:32:11
| 2
| 430
|
BBloggsbott
|
75,600,462
| 2,186,785
|
Rotating secret key but keeping other database credentials?
|
<p>I am currently using AWS Lambda for rotating the secret key. I followed the guide on the AWS website which uses the SecretsManagerRotationTemplate. This works well to rotate the current secret key with a new secret key.</p>
<p>The problem is that I have also stored the username and database name as credentials. The SecretsManagerRotationTemplate removes the other credentials / data for the database connection and simply shows the new secret key. Is there a way to keep the username, database name etc. while using the Lambda function for SecretsManagerRotationTemplate?</p>
<p>The code of the template is attached below:</p>
<pre><code># Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

import boto3
import logging
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    """Secrets Manager Rotation Template

    This is a template for creating an AWS Secrets Manager rotation lambda

    Args:
        event (dict): Lambda dictionary of event parameters. These keys must include the following:
            - SecretId: The secret ARN or identifier
            - ClientRequestToken: The ClientRequestToken of the secret version
            - Step: The rotation step (one of createSecret, setSecret, testSecret, or finishSecret)
        context (LambdaContext): The Lambda runtime information

    Raises:
        ResourceNotFoundException: If the secret with the specified arn and stage does not exist
        ValueError: If the secret is not properly configured for rotation
        KeyError: If the event parameters do not contain the expected keys
    """
    print(event)
    arn = event['SecretId']
    token = event['ClientRequestToken']
    step = event['Step']

    # Setup the client
    service_client = boto3.client('secretsmanager')  # , endpoint_url=os.environ['SECRETS_MANAGER_ENDPOINT']

    # Make sure the version is staged correctly
    metadata = service_client.describe_secret(SecretId=arn)
    if not metadata['RotationEnabled']:
        logger.error("Secret %s is not enabled for rotation" % arn)
        raise ValueError("Secret %s is not enabled for rotation" % arn)
    versions = metadata['VersionIdsToStages']
    if token not in versions:
        logger.error("Secret version %s has no stage for rotation of secret %s." % (token, arn))
        raise ValueError("Secret version %s has no stage for rotation of secret %s." % (token, arn))
    if "AWSCURRENT" in versions[token]:
        logger.info("Secret version %s already set as AWSCURRENT for secret %s." % (token, arn))
        return
    elif "AWSPENDING" not in versions[token]:
        logger.error("Secret version %s not set as AWSPENDING for rotation of secret %s." % (token, arn))
        raise ValueError("Secret version %s not set as AWSPENDING for rotation of secret %s." % (token, arn))

    if step == "createSecret":
        create_secret(service_client, arn, token)
    elif step == "setSecret":
        set_secret(service_client, arn, token)
    elif step == "testSecret":
        test_secret(service_client, arn, token)
    elif step == "finishSecret":
        finish_secret(service_client, arn, token)
    else:
        raise ValueError("Invalid step parameter")


def create_secret(service_client, arn, token):
    """Create the secret

    This method first checks for the existence of a secret for the passed in token. If one does not exist, it will generate a
    new secret and put it with the passed in token.

    Args:
        service_client (client): The secrets manager service client
        arn (string): The secret ARN or other identifier
        token (string): The ClientRequestToken associated with the secret version

    Raises:
        ResourceNotFoundException: If the secret with the specified arn and stage does not exist
    """
    # Make sure the current secret exists
    service_client.get_secret_value(SecretId=arn, VersionStage="AWSCURRENT")

    # Now try to get the secret version, if that fails, put a new secret
    try:
        service_client.get_secret_value(SecretId=arn, VersionId=token, VersionStage="AWSPENDING")
        logger.info("createSecret: Successfully retrieved secret for %s." % arn)
    except service_client.exceptions.ResourceNotFoundException:
        # Get exclude characters from environment variable
        exclude_characters = os.environ['EXCLUDE_CHARACTERS'] if 'EXCLUDE_CHARACTERS' in os.environ else '/@"\'\\'
        # Generate a random password
        passwd = service_client.get_random_password(ExcludeCharacters=exclude_characters)
        # Put the secret
        service_client.put_secret_value(SecretId=arn, ClientRequestToken=token, SecretString=passwd['RandomPassword'], VersionStages=['AWSPENDING'])
        logger.info("createSecret: Successfully put secret for ARN %s and version %s." % (arn, token))


def set_secret(service_client, arn, token):
    """Set the secret

    This method should set the AWSPENDING secret in the service that the secret belongs to. For example, if the secret is a database
    credential, this method should take the value of the AWSPENDING secret and set the user's password to this value in the database.

    Args:
        service_client (client): The secrets manager service client
        arn (string): The secret ARN or other identifier
        token (string): The ClientRequestToken associated with the secret version
    """
    secret = service_client.get_secret_value(SecretId=arn, VersionId=token, VersionStage="AWSPENDING")
    logger.info(secret)
    # This is where the secret should be set in the service
    # raise NotImplementedError


def test_secret(service_client, arn, token):
    """Test the secret

    This method should validate that the AWSPENDING secret works in the service that the secret belongs to. For example, if the secret
    is a database credential, this method should validate that the user can login with the password in AWSPENDING and that the user has
    all of the expected permissions against the database.

    Args:
        service_client (client): The secrets manager service client
        arn (string): The secret ARN or other identifier
        token (string): The ClientRequestToken associated with the secret version
    """
    # This is where the secret should be tested against the service
    # raise NotImplementedError


def finish_secret(service_client, arn, token):
    """Finish the secret

    This method finalizes the rotation process by marking the secret version passed in as the AWSCURRENT secret.

    Args:
        service_client (client): The secrets manager service client
        arn (string): The secret ARN or other identifier
        token (string): The ClientRequestToken associated with the secret version

    Raises:
        ResourceNotFoundException: If the secret with the specified arn does not exist
    """
    # First describe the secret to get the current version
    metadata = service_client.describe_secret(SecretId=arn)
    current_version = None
    for version in metadata["VersionIdsToStages"]:
        if "AWSCURRENT" in metadata["VersionIdsToStages"][version]:
            if version == token:
                # The correct version is already marked as current, return
                logger.info("finishSecret: Version %s already marked as AWSCURRENT for %s" % (version, arn))
                return
            current_version = version
            break

    # Finalize by staging the secret version current
    service_client.update_secret_version_stage(SecretId=arn, VersionStage="AWSCURRENT", MoveToVersionId=token, RemoveFromVersionId=current_version)
    logger.info("finishSecret: Successfully set AWSCURRENT stage to version %s for secret %s." % (token, arn))
</code></pre>
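<p>A common pattern for keeping the other credentials is to store the whole JSON document (username, dbname, host, ...) as the secret, and have <code>create_secret</code> put back the full document with only the password field replaced, instead of putting the bare random password. The helper below is a hypothetical sketch of that merge step (the field name <code>password</code> is an assumption about the secret's layout, not part of the AWS template):</p>

```python
import json

def build_pending_secret_string(current_secret_string: str, new_password: str,
                                password_key: str = "password") -> str:
    """Return the AWSCURRENT secret JSON with only the password replaced.

    current_secret_string: the SecretString fetched with VersionStage="AWSCURRENT".
    The result is what would be passed to put_secret_value as SecretString.
    """
    secret = json.loads(current_secret_string)
    secret[password_key] = new_password  # keep username, dbname, host, ... intact
    return json.dumps(secret)
```

<p>In <code>create_secret</code>, this would replace <code>SecretString=passwd['RandomPassword']</code> with the merged document built from the AWSCURRENT value.</p>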
|
<python><database><amazon-web-services><aws-lambda><rotation>
|
2023-03-01 07:02:27
| 1
| 1,179
|
JavaForAndroid
|
75,600,460
| 3,745,149
|
Calculate vertex distances of a mesh
|
<p>I am using Numpy arrays to express a triangular mesh.</p>
<p>I have two matrices: <code>coordinates</code> is an n x 3 matrix, and <code>connectivity</code> is an n x n matrix that uses 0s and 1s to store vertex connectivity.</p>
<p>Now I want to calculate an n x n matrix named <code>distances</code> that stores vertex distances. Only the positions where <code>connectivity[i,j] == 1</code> are calculated; everywhere else stays zero.</p>
<p>What is the most elegant way to calculate this in Python?</p>
<p>For example, I have a mesh of 4 vertices like this:</p>
<p><a href="https://i.sstatic.net/C0rCF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C0rCF.png" alt="enter image description here" /></a></p>
<p>Then</p>
<pre><code>import numpy as np
coordinates = np.array(
[
[-1, -1, 0], # A
[1, -1, 0], # B
[1, 1, 0], # C
[-1, 1, 0] # D
],
dtype=np.float32
)
connectivity = np.array(
[
[0, 1, 1, 1], # A-B, A-C, A-D
[1, 0, 1, 0], # B-A, B-C
[1, 1, 0, 1], # C-A, C-B, C-D
[1, 0, 1, 0], # D-A, D-C
],
dtype=np.int32
)
# For this example, expected `distances` is like this
distances = np.array(
[
[0, 2, 2.828, 2], # A-B, A-C, A-D
[2, 0, 2, 0], # B-A, B-C
[2.828, 2, 0, 2], # C-A, C-B, C-D
[2, 0, 2, 0], # D-A, D-C
],
dtype=np.float32
)
</code></pre>
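<p>One compact way to get this result, sketched with the example data above, is to compute all pairwise distances via broadcasting and mask them with the connectivity matrix (for large meshes this builds an n x n x 3 intermediate, so a sparse approach over <code>np.nonzero(connectivity)</code> may be preferable):</p>

```python
import numpy as np

coordinates = np.array(
    [[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]], dtype=np.float32
)
connectivity = np.array(
    [[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]], dtype=np.int32
)

# Pairwise differences via broadcasting: (n, 1, 3) - (1, n, 3) -> (n, n, 3)
diff = coordinates[:, None, :] - coordinates[None, :, :]
# Euclidean norms, masked so only connected pairs keep their distance
distances = np.linalg.norm(diff, axis=-1) * connectivity
```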
|
<python><numpy><mesh>
|
2023-03-01 07:02:11
| 2
| 770
|
landings
|
75,600,328
| 14,445,883
|
How do I broadcast a PyTorch tensor of shape (a, b, c) against a tensor of shape (a, b) to get an output of shape (a, c)?
|
<p>I have two pytorch tensors,</p>
<pre><code>A.shape = [416, 20, 3]
B.shape = [416,20]
</code></pre>
<p>I want to produce</p>
<pre><code>C = matmul(A,B)
C.shape = [416,3]
</code></pre>
<p>I.e., for each of the 416 20x3 matrices in A, take the corresponding 20-element row of B and compute <code>torch.matmul(A_i, B_i)</code>. Store that 3-element result at index i of the output. How do I make the broadcasting work out like this?</p>
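<p>This per-batch contraction over the shared length-20 axis can be written as a single <code>einsum</code>. The sketch below uses NumPy, but <code>torch.einsum</code> accepts the same subscript string:</p>

```python
import numpy as np

A = np.arange(416 * 20 * 3, dtype=float).reshape(416, 20, 3)
B = np.ones((416, 20))

# For each batch index a: C[a] = A[a].T @ B[a], contracting the length-20 axis b
C = np.einsum('abc,ab->ac', A, B)
```

<p>In PyTorch the equivalent is <code>torch.einsum('abc,ab->ac', A, B)</code>.</p>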
|
<python><arrays><numpy><pytorch><array-broadcasting>
|
2023-03-01 06:44:12
| 1
| 783
|
alois
|
75,600,322
| 11,725,056
|
How to use stopwords and filters properly in Elasticsearch (python client)
|
<p>I'm learning Elasticsearch using the Python client and have managed to build an index and a query function.</p>
<p><strong>Problem:</strong> Even though I have defined <code>stemmer</code> and <code>stop</code> filters in the settings below, when I query a text made up entirely of stop words (to test the setup), it returns results when it shouldn't, since those stop words should all have been removed. What am I doing wrong here?</p>
<pre><code>
# Create own stemmer
STEMMER_FILTER = {
"type":"stemmer",
"language": "english",
}
# create own Stopwords
STOPWORD_FILTER = {
"type":"stop",
"stopwords":"_english_",
"ignore_case": True,
}
# Create your own Synonyms and mappings
SYNONYM_FILTER = {
"type":"synonym",
"synonyms":[
"i-pod, ipod",
"universe, cosmos"
]
}
custom_synonyms = {
"type": "synonym_graph",
"synonyms": [
"mind, brain",
"brain storm, brainstorm, envisage"
]
}
custom_index_analyzer = {
"tokenizer": "whitespace", #"standard",
"filter": [
"lowercase",
"asciifolding",
"stemmer", # how to use custom stemmer?
"stop"]} # how to use custom stop words?
custom_search_analyzer = {
"tokenizer": "whitespace", #"standard"
"filter": [
"lowercase",
"asciifolding",
"stemmer",
"stop"]}
INDEX_BODY = {
"settings": {
"index": {
"analysis": {
"analyzer": {
"custom_index_time_analyzer": custom_index_analyzer,
"custom_search_time_analyzer": custom_search_analyzer
},
"filter": {"my_graph_synonyms": custom_synonyms,
"english_stemmer": STEMMER_FILTER,
"english_stop": STOPWORD_FILTER,
"synonym": SYNONYM_FILTER},
}
}
},
"mappings": {
"properties": {
"que_op": {"type": "text", "analyzer": "custom_index_time_analyzer", "search_analyzer": "custom_search_time_analyzer"}
}}}
es.indices.create(index="questions", mappings=INDEX_BODY["mappings"], settings=INDEX_BODY["settings"])
bulk_data = []
for i,row in tqdm(df.iterrows()):
bulk_data.append(
{
"_index": index_name,
"_id": i,
"_source": {
"que_op": row["que_op"]
}
}
)
bulk(es, bulk_data)
</code></pre>
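<p>One thing to check (this is a sketch of a likely fix, not a confirmed one): the analyzers above reference the built-in <code>"stemmer"</code> and <code>"stop"</code> filters, not the custom filters registered under <code>analysis.filter</code>. To use the custom definitions, the analyzer's filter list should name them by their registered keys:</p>

```python
# Reference the custom filter names registered under analysis.filter
# ("english_stemmer", "english_stop"), not the built-in "stemmer"/"stop".
custom_index_analyzer = {
    "tokenizer": "whitespace",
    "filter": [
        "lowercase",
        "asciifolding",
        "english_stemmer",  # the custom STEMMER_FILTER
        "english_stop",     # the custom STOPWORD_FILTER
    ],
}
```

<p>The search-time analyzer would use the same list, so index-time and query-time token streams stay consistent.</p>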
<p>Now the problem is that when I search for a <a href="https://github.com/apache/lucene/blob/main/lucene/analysis/common/src/java/org/apache/lucene/analysis/en/EnglishAnalyzer.java#L48" rel="nofollow noreferrer">list of all the stop words in Elasticsearch</a>, it still gives me results.</p>
<pre><code>def full_text_search(index_name:str, query_string:str, search_on_field:str = 'que_op', size:int = 10):
query = {"match": {search_on_field: query_string}}
return es.search(index = index_name, query = query, size = size, pretty = True)
full_text_search("questions", "a an and are as at be but by for if in into is it no not of on or such that the their then there these they this to was will with", size = 3)
</code></pre>
|
<python><elasticsearch><full-text-search><querying>
|
2023-03-01 06:43:39
| 0
| 4,292
|
Deshwal
|
75,600,244
| 8,229,534
|
How to perform dynamic filtering across multiple columns using st.session_state() or on_change()?
|
<p>I am trying to create a Streamlit app where, based on one filter selection, the other filter options are populated. Then, once the submit button is hit, I want to proceed with processing the data.</p>
<pre><code>import streamlit as st
import pandas as pd
my_df = pd.DataFrame({
'Name': ['A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D', 'D', 'D'],
'Color':['red', 'blue', 'blue', 'black', 'black', 'green', 'blue',
'yellow', 'white', 'green', 'purple']
})
col1, col2 = st.columns(2)
name_selection = col1.multiselect('select names ', my_df['Name'].unique().tolist(), key='names')
color_selection = col2.multiselect('select color ', my_df['Color'].unique().tolist(), key='color')
</code></pre>
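<p>The cross-filtering logic itself can be separated from the widgets: given the current selections, compute the rows that match every non-empty selection, then derive each column's remaining options from those rows. A minimal sketch (the helper name <code>remaining_options</code> is hypothetical):</p>

```python
import pandas as pd

my_df = pd.DataFrame({
    "Name": ["A", "A", "B", "B", "C", "C", "C", "D", "D", "D", "D"],
    "Color": ["red", "blue", "blue", "black", "black", "green", "blue",
              "yellow", "white", "green", "purple"],
})

def remaining_options(df: pd.DataFrame, selections: dict) -> dict:
    """For each column, return the values still valid under the current picks."""
    mask = pd.Series(True, index=df.index)
    for col, chosen in selections.items():
        if chosen:  # empty selection means "no restriction"
            mask &= df[col].isin(chosen)
    return {col: sorted(df.loc[mask, col].unique()) for col in df.columns}
```

<p>In Streamlit, this would be called with the widgets' <code>st.session_state</code> values (e.g. inside <code>on_change</code> callbacks) to rebuild each multiselect's option list; this assumes the app re-runs on every widget change, which is Streamlit's default behaviour.</p>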
<p>Scenario 1: If I select name A, then the color selection should list only red and blue, not the others.</p>
<p>Scenario 2: Similarly, when I choose color black first, the name list should contain only B and C. The filter order depends on the user.</p>
<p>In general, I have around 5 to 6 filters and once a user selects a filter condition on any one of the multi select columns, then the other filter conditions should automatically update and populate the list.</p>
<p>How can I achieve this using session_state or on_change() functions?</p>
<p>Do I need a st.form() for this?</p>
<p>Here is scenario 1 -
<a href="https://i.sstatic.net/uhpa5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uhpa5.png" alt="enter image description here" /></a></p>
<p>and here is scenario 2 -</p>
<p><a href="https://i.sstatic.net/qxIpR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qxIpR.png" alt="enter image description here" /></a></p>
|
<python><pandas><streamlit>
|
2023-03-01 06:31:14
| 2
| 1,973
|
Regressor
|
75,600,223
| 562,930
|
How should I wait for a queue or an event?
|
<p>In Python, I would like to know how to wait for the first of either <code>queue.get()</code> or <code>event.wait()</code>.</p>
<p>At the moment, I am using <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.wait" rel="nofollow noreferrer"><code>asyncio.wait()</code></a> to achieve this, but this is producing a deprecation warning. I do not understand how I should alter my code so that it will be compatible with future versions of Python.</p>
<p>The following code is functional, however it gives the warning <code>DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.</code></p>
<pre><code>import random
import asyncio
event = asyncio.Event()
queue = asyncio.Queue()
async def producer():
for i in range(5):
print(f"Putting {i}")
await queue.put(i)
await asyncio.sleep(random.random())
# Check if we should terminate
if event.is_set():
break
print("Producer done")
async def terminator():
await asyncio.sleep(random.random() * 5)
print("Terminating")
event.set()
print("Terminator done")
async def consumer():
while True:
print(f"Waiting on the result of either the queue or the event")
done, _ = await asyncio.wait(
[queue.get(), event.wait()],
return_when=asyncio.FIRST_COMPLETED
)
# Check if we should terminate
if event.is_set():
break
# Otherwise, we got a queue item
item = done.pop().result()
print(f"got {item}")
print("Consumer done")
async def main():
await asyncio.gather(producer(), terminator(), consumer())
asyncio.run(main())
</code></pre>
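<p>The deprecation warning is about passing bare coroutines to <code>asyncio.wait()</code>; wrapping each one in a Task with <code>asyncio.create_task()</code> removes it and also gives named handles to inspect afterwards. A condensed sketch of the consumer's wait step (one caveat: cancelling a pending <code>queue.get()</code> task can drop an item that arrives at exactly the wrong moment, so production code may want to re-queue on cancellation):</p>

```python
import asyncio

async def consume_one(queue: asyncio.Queue, event: asyncio.Event):
    # Wrap the coroutines in Tasks explicitly; newer Python versions require
    # Tasks/Futures here, which is what the DeprecationWarning is about.
    get_task = asyncio.create_task(queue.get())
    event_task = asyncio.create_task(event.wait())
    done, pending = await asyncio.wait(
        {get_task, event_task}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()
    if get_task in done:
        return get_task.result()  # a queue item arrived first
    return None  # the termination event fired first

async def demo():
    queue, event = asyncio.Queue(), asyncio.Event()
    await queue.put(42)
    return await consume_one(queue, event)
```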
<p>Example output:</p>
<pre><code>Putting 0
Waiting on the result of either the queue or the event
got 0
Waiting on the result of either the queue or the event
Putting 1
got 1
Waiting on the result of either the queue or the event
Terminating
Terminator done
Consumer done
Producer done
</code></pre>
|
<python><concurrency><python-asyncio>
|
2023-03-01 06:28:04
| 1
| 2,795
|
Matthew Walker
|
75,600,219
| 8,176,763
|
Getting many warnings for particular DAGs in Airflow
|
<p>I just recently installed Airflow, and whenever I execute a task I get warnings about different DAGs:</p>
<pre><code>[2023-03-01 06:25:35,691] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_group>, delete_entry_group already registered for DAG: example_complex
[2023-03-01 06:25:35,691] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry_group>, create_entry_group already registered for DAG: example_complex
[2023-03-01 06:25:35,691] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_gcs>, delete_entry already registered for DAG: example_complex
[2023-03-01 06:25:35,692] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry>, create_entry_gcs already registered for DAG: example_complex
[2023-03-01 06:25:35,692] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_tag>, delete_tag already registered for DAG: example_complex
[2023-03-01 06:25:35,692] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_tag>, create_tag already registered for DAG: example_complex
[2023-03-01 06:25:35,759] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): prepare_email>, send_email already registered for DAG: example_dag_decorator
[2023-03-01 06:25:35,759] {taskmixin.py:205} WARNING - Dependency <Task(EmailOperator): send_email>, prepare_email already registered for DAG: example_dag_decorator
[2023-03-01 06:25:35,769] {example_kubernetes_executor.py:41} WARNING - The example_kubernetes_executor example DAG requires the kubernetes provider. Please install it with: pip install apache-***[cncf.kubernetes]
[2023-03-01 06:25:35,772] {example_local_kubernetes_executor.py:39} WARNING - Could not import DAGs in example_local_kubernetes_executor.py
Traceback (most recent call last):
File "/home/d5291029/venv/lib/python3.10/site-packages/airflow/example_dags/example_local_kubernetes_executor.py", line 37, in <module>
from kubernetes.client import models as k8s
ModuleNotFoundError: No module named 'kubernetes'
[2023-03-01 06:25:35,773] {example_local_kubernetes_executor.py:40} WARNING - Install Kubernetes dependencies with: pip install apache-***[cncf.kubernetes]
[2023-03-01 06:25:35,781] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-03-01 06:25:35,781] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-03-01 06:25:35,782] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-03-01 06:25:35,782] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-03-01 06:25:35,782] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-03-01 06:25:35,782] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-03-01 06:25:35,783] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-03-01 06:25:35,783] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
/home/d5291029/venv/lib/python3.10/site-packages/airflow/cli/commands/task_command.py:159 RemovedInAirflow3Warning: Calling `DAG.create_dagrun()` without an explicit data interval is deprecated
</code></pre>
<p>How do I get rid of these warnings?</p>
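<p>All of the DAG names in the log (<code>example_complex</code>, <code>example_dag_decorator</code>, <code>example_python_operator</code>, ...) are Airflow's bundled example DAGs, which are parsed on every run by default. Assuming Airflow 2.x, disabling example loading should silence them:</p>

```ini
; airflow.cfg — stop parsing the bundled example DAGs
[core]
load_examples = False
```

<p>The same setting is available as the environment variable <code>AIRFLOW__CORE__LOAD_EXAMPLES=False</code>; the scheduler and webserver need a restart for it to take effect.</p>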
|
<python><airflow>
|
2023-03-01 06:27:22
| 1
| 2,459
|
moth
|
75,599,864
| 14,477,706
|
Input Excel file from UI in FastAPI
|
<p>Hi, I'm trying the following for the API input:</p>
<pre><code>def upload_excel_parser(file: UploadFile = File(...)):
s_filename = file.filename
unique_id = str(uuid4())
project_id = s_filename + unique_id
df = pd.read_excel(file)
return "success"
</code></pre>
<p>also tried</p>
<pre><code>df = pd.read_excel(file.file)
</code></pre>
<p>I'm getting the error: <code>ValueError: Excel file format cannot be determined, you must specify an engine manually.</code></p>
<p>Is there some error in reading the file?</p>
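<p>A pattern that sidesteps this error is to read the upload into bytes first, wrap them in a seekable buffer, and name the engine explicitly so pandas does not have to infer the format. The helper below is a hypothetical sketch and assumes an <code>.xlsx</code> upload (hence <code>engine="openpyxl"</code>):</p>

```python
import io
import pandas as pd

def parse_excel_upload(raw_bytes: bytes) -> pd.DataFrame:
    # BytesIO gives pandas a seekable file-like object, and naming the
    # engine avoids "Excel file format cannot be determined".
    return pd.read_excel(io.BytesIO(raw_bytes), engine="openpyxl")
```

<p>Inside the FastAPI endpoint this would be called as <code>df = parse_excel_upload(await file.read())</code> (with the endpoint declared <code>async</code>).</p>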
|
<python><fastapi>
|
2023-03-01 05:23:37
| 1
| 327
|
devb
|
75,599,861
| 17,778,275
|
Pandas split corresponding rows based on separator in two columns duplicating everything else
|
<p>I have an excel sheet</p>
<pre><code>Col1 Col2 Col3 Col4
John English\nMaths 34\n33 Pass
Sam Science 40 Pass
Jack English\nHistory\nGeography 89\n07\n98 Pass
</code></pre>
<p>Need to convert it to</p>
<pre><code>Col1 Col2 Col3 Col4
John English 34 Pass
John Maths 33 Pass
Sam Science 40 Pass
Jack English 89 Pass
Jack History 07 Pass
Jack Geography 98 Pass
</code></pre>
<p>The Excel sheet uses <code>\n</code> as the separator in the corresponding Col2 and Col3 columns. I just need to pull each subject into a new row with its corresponding marks and copy all the other column contents as they are.</p>
<p>Tried</p>
<pre><code>split_cols = ['Col2', 'Col3']
# loop over the columns and split them
separator = '\n'
for col in split_cols:
df[[f'{col}_Split1', f'{col}_Split2']] = df[col].str.split(separator, n=1, expand=True).fillna('')
# create two new dataframes with the desired columns
df1 = df[['Col1', 'Col2_Split1', 'Col3_Split1', 'Col4']].rename(columns={'Col2_Split1': 'D', 'Col3_Split1': 'C'})
df2 = df[['Col1', 'Col2_Split2', 'Col3_Split2', 'Col4']].rename(columns={'Col2_Split2': 'D', 'Col3_Split2': 'C'})
# concatenate the two dataframes
final_df = pd.concat([df1, df2], ignore_index=True)
# print the final dataframe
print(final_df)
</code></pre>
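<p>The two-way split above only handles two values per cell. A more general approach (assuming pandas >= 1.3, which supports exploding multiple columns at once) is to split both columns into equal-length lists and explode them together, which copies the other columns automatically:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Col1": ["John", "Sam", "Jack"],
    "Col2": ["English\nMaths", "Science", "English\nHistory\nGeography"],
    "Col3": ["34\n33", "40", "89\n07\n98"],
    "Col4": ["Pass", "Pass", "Pass"],
})

# Split both columns into lists of equal length, then explode them in lockstep
out = (
    df.assign(Col2=df["Col2"].str.split("\n"), Col3=df["Col3"].str.split("\n"))
      .explode(["Col2", "Col3"])
      .reset_index(drop=True)
)
```

<p>Each row's Col2/Col3 lists must have the same length, otherwise <code>explode</code> raises an error.</p>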
|
<python><pandas><dataframe><split>
|
2023-03-01 05:23:28
| 2
| 354
|
spd
|
75,599,831
| 13,684,789
|
How to properly replicate a web site's GET request to an API?
|
<p>I am trying to scrape data from this <a href="https://www.jewelosco.com/shop/search-results.html?q=rice" rel="nofollow noreferrer">page</a>, specifically all the information about the products.</p>
<p>Using my browser's Inspect tool, I found that all of the products' data come from a JSON file; it is a response to a GET request sent to an API at this <a href="https://www.jewelosco.com/abs/pub/xapi/pgmsearch/v1/search/products?request-id=1771677643767994529&url=https%3A%2F%2Fwww.jewelosco.com&pageurl=https%3A%2F%2Fwww.jewelosco.com&pagename=search&rows=30&start=0&search-type=keyword&storeid=1118&featured=true&search-uid=&q=rice&sort=&featuredsessionid=&screenwidth=1533&dvid=web-4.1search&channel=instore&banner=jewelosco" rel="nofollow noreferrer">URL</a>. Looking at the request headers I found the Subscription Key (i.e. <code>Ocp-Apim-Subscription-Key</code>) and its value (i.e. <code>5e790236c84e46338f4290aa1050cdd4</code>).</p>
<p>I tried to get this JSON file by sending the GET request myself using the Python <code>requests</code> module, but it responded with a JSON file containing an error message: <code>"appMsg": "Search encountered a problem. Please try again OSSR0033-R"</code>.</p>
<p>So it seems like I am able to connect to the API but the program on the other side is failing to find the product-data JSON file. I'm assuming the failure is due to a mistake in my GET request. <strong>If this assumption is even valid, how can I properly replicate the request so that I can receive the expected output?</strong></p>
<h4>Here is My Code:</h4>
<pre><code>import requests
import json
# query url
def request_from_api(url, url_params, req_headers, cookies=None):
    response = requests.get(url, params=url_params, headers=req_headers, cookies=cookies)
    return response
def format_cookies(cookie_pairs):
'''
Takes a "list" of name-value pairs e.g. "cook1=value1; cook2=val2"
'''
pairs = [pair.split('=') for pair in cookie_pairs.split('; ')]
formatted_pairs = {cookie_val[0]:cookie_val[1] for cookie_val in pairs}
return formatted_pairs
if __name__ == '__main__':
# url that API is located at
api_url = "https://www.jewelosco.com/abs/pub/xapi/pgmsearch/v1/search/products?"
# url parameters for api_url
url_params = {
"request-id": "1771677643767994529",
"url": "https://www.jewelosco.com",
"pageurl": "https://www.jewelosco.com",
"pagename": "search",
"rows": "30",
"start": "0",
"search-type": "keyword",
"storeid": "1118",
"featured": "true",
"search-uid": "",
"q": "rice",
"sort": "",
"featuredsessionid": "",
"screenwidth": "1533",
"dvid": "web-4.1search",
"channel": "instore",
"banner": "jewelosco"
}
# API sub key
headers = {
"Accept": "application/json, text/plain, */*",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.5",
"Connection": "keep-alive",
"DNT": "1",
"Host": "www.jewelosco.com",
"Ocp-Apim-Subscription-Key": "5e790236c84e46338f4290aa1050cdd4",
"Referer": "https://www.jewelosco.com/shop/search-results.html?q=rice",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
"TE": "trailers",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0"
}
# List of cookie-value pairs copied from my browser's Inspect tool
raw_form_cookies = "visid_incap_1990338=s+w9h0GrTSqb/iWdgj5yGT7p/2MAAAAAQUIPAAAAAAD+pkwygiCfx/ikABjRUg/L; nlbi_1990338=mHC1ApVnlTLFJURPzoaznQAAAACG3swCSzQedoLPtuqqPhlT; incap_ses_8080_1990338=FeLoM/tDE2aUu2sos+0hcD7p/2MAAAAAyClJy0AvAh6rRWqmCVVCcw==; ECommBanner=jewelosco; abs_gsession=%7B%22info%22%3A%7B%22COMMON%22%3A%7B%22Selection%22%3A%22user%22%2C%22preference%22%3A%22J4U%22%2C%22userType%22%3A%22G%22%2C%22zipcode%22%3A%2252732%22%2C%22banner%22%3A%22jewelosco%22%7D%2C%22J4U%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%2C%22SHOP%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%7D%7D; SWY_SHARED_SESSION_INFO=%7B%22info%22%3A%7B%22COMMON%22%3A%7B%22userType%22%3A%22G%22%2C%22zipcode%22%3A%2252732%22%2C%22banner%22%3A%22jewelosco%22%2C%22preference%22%3A%22J4U%22%2C%22Selection%22%3A%22user%22%2C%22userData%22%3A%7B%7D%7D%2C%22J4U%22%3A%7B%22storeId%22%3A%221118%22%2C%22zipcode%22%3A%2252732%22%2C%22userData%22%3A%7B%7D%7D%2C%22SHOP%22%3A%7B%22storeId%22%3A%221118%22%2C%22zipcode%22%3A%2252732%22%2C%22userData%22%3A%7B%7D%7D%7D%7D; abs_previouslogin=%7B%22info%22%3A%7B%22COMMON%22%3A%7B%22Selection%22%3A%22user%22%2C%22preference%22%3A%22J4U%22%2C%22userType%22%3A%22G%22%2C%22zipcode%22%3A%2252732%22%2C%22banner%22%3A%22jewelosco%22%7D%2C%22J4U%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%2C%22SHOP%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%7D%7D; SWY_SYND_USER_INFO=%7B%22storeAddress%22%3A%22%22%2C%22storeZip%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%2C%22preference%22%3A%22J4U%22%7D; ECommSignInCount=0; SAFEWAY_MODAL_LINK=; OptanonConsent=isGpcEnabled=0&datestamp=Wed+Mar+01+2023+18%3A10%3A25+GMT-0600+(Central+Standard+Time)&version=202212.1.0&isIABGlobal=false&hosts=&consentId=2481ceef-8878-4f3b-924b-3b28079d9b13&interactionCount=1&landingPath=NotLandingPage&groups=C0001%3A1%2CC0002%3A0%2CC0004%3A0%2CC0003%3A1&AwaitingReconsent=false; nlbi_1990338_2147483392=jYckLK1heGBAHrRyzoaznQAAAACZvsW6rrz3C1oXWBs6UFc8; reese84=3:Gl8qjGMtFKfV15EgMleAnA==:OIn+iQ/52nnNf5lyREodaDDwUAjg8dDGS98wIlrt5otpbU+Cf8LVvyWEszAKcXR472IFIvx0GqApqQXL+AwRenGrptfNzKJtsu+zlyayIVp5q9BJEyz9T9tIFT2YmnQ+D1rZkBlw2lcnRZqxvVX5dSG6pFJH9nebThXLpHGzKF+j2O1jRKRTanLc72sHU5aqkDgp6aKgzvMI3IQTg9JPnSYW1I0779+gNrb/WfVOID4YT3FLG3OBiMxXsnGGrGQD+3QUsGWzJGXqKkLgErxusDcDI+J82YxLg8Lg7u+qbLFLdUPB4dUsPJJLlHJx8kMBuoRh/47QtMYdykoXYmcZ4PYYLnop7lpDFahVOwcqGmwGCCBjkAnxGuVejNESYc4Yiu5iHFluuEHSDyLxXUmlQWRfDl6axKS+0m6Zm7IqPmvetfC4BsZKbDRk5p/jbFDCIYD/iHbRi8OE/mkzTD03r+un1iC5GFK4BhIQrtBDybXmZYJU1VBwXl+raL8wR0Db3d3I/Mbh4/CK1uT/7CJDRIDznlCZC0/C3gFwXQpfLiA=:XtGGSfw6IB+W6dYIh0iO+xPVdddBfiRA1zwKMhu0OmE=; mbox=session#2686aefa9dea422db9f92c9b39a01830#1677717696; at_check=true; ADRUM_BT=R:57|i:5124367|g:a106a4d3-bbb8-4619-8262-9d3f98852991652436|e:104|n:safeway-loyalty_d99a98d0-07cc-4871-98b7-0beac77d0580"
formatted_cookies = format_cookies(raw_form_cookies)
# combine api_url and url_params and make GET request with headers
product_data = request_from_api(api_url, url_params, headers, formatted_cookies).json()
# pretty print json file
print(json.dumps(product_data, indent=3))
</code></pre>
<h4>Actual Output:</h4>
<pre><code>{
"appMsg": "[PS: Success.]",
"primaryProducts": {
"appCode": "400",
"appMsg": "Search encountered a problem. Please try again OSSR0033-R",
"pgmName": "search-products",
"order": "1"
},
"appCode": "[PS: 200]"
}
</code></pre>
<h4>Expected Output:</h4>
<p>It's a large JSON file that contains all of the product information (e.g. name, price, quantity...). Here is a snippet of it:</p>
<pre><code>{
"appMsg":"[PS: Success.]",
"primaryProducts":{
"response":{
"numFound":725,
"start":0,
"isExactMatch":true,
"docs":[
{
"name":"Signature SELECT Rice Enriched Long Grain - 5 Lb",
"pid":"126150030",
"upc":"0002113050205",
"id":"126150030",
"featured":false,
"inventoryAvailable":"1",
"pastPurchased":false,
"restrictedValue":"0",
"salesRank":99999,
"price":4.99,
"basePrice":4.99,
"pricePer":1.0,
"displayType":"-1",
"aisleId":"1_6_9_9",
"aisleName":"Rice|1_6_9",
"departmentName":"Grains, Pasta & Sides",
"shelfName":"White Rice",
"unitOfMeasure":"LB",
"sellByWeight":"I",
"averageWeight":[
"0.00"
],
"unitQuantity":"LB",
"displayUnitQuantityText":"ea",
"previousPurchaseQty":0,
"maxPurchaseQty":0,
"prop65WarningIconRequired":false,
"isArProduct":true,
"isMtoProduct":false,
"customizable":false,
"inStoreShoppingElig":false,
"preparationTime":"0",
"isMarketplaceItem":"N",
"triggerQuantity":0,
"channelEligibility":{
"pickUp":true,
"delivery":true,
"inStore":true,
"shipping":false
},
"channelInventory":{
"delivery":"1",
"pickup":"1",
"instore":"1",
"shipping":"0"
},
"productReview":{
"avgRating":"4.8",
"reviewCount":"64",
"isReviewWriteEligible":"true",
"isReviewDisplayEligible":"true",
"isForOnetimeReview":"true",
"reviewTemplateType":"default"
}
}
},
"appCode":"[PS: 200]"
}
</code></pre>
<h1>Update:</h1>
<p>Despite adding all of the request headers, the response is the same.</p>
<p>Here are all of the headers I added:</p>
<pre><code>headers = {
"Accept": "application/json, text/plain, */*",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.5",
"Connection": "keep-alive",
"DNT": "1",
"Host": "www.jewelosco.com",
"Ocp-Apim-Subscription-Key": "5e790236c84e46338f4290aa1050cdd4",
"Referer": "https://www.jewelosco.com/shop/search-results.html?q=rice",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
"TE": "trailers",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0"
}
all_cookies = {
"visid_incap_1990338":"s+w9h0GrTSqb/iWdgj5yGT7p/2MAAAAAQUIPAAAAAAD+pkwygiCfx/ikABjRUg/L",
"nlbi_1990338": "mHC1ApVnlTLFJURPzoaznQAAAACG3swCSzQedoLPtuqqPhlT",
"incap_ses_8080_1990338":"FeLoM/tDE2aUu2sos+0hcD7p/2MAAAAAyClJy0AvAh6rRWqmCVVCcw",
"ECommBanner": "jewelosco",
"abs_gsession":"%7B%22info%22%3A%7B%22COMMON%22%3A%7B%22Selection%22%3A%22user%22%2C%22preference%22%3A%22J4U%22%2C%22userType%22%3A%22G%22%2C%22zipcode%22%3A%2252732%22%2C%22banner%22%3A%22jewelosco%22%7D%2C%22J4U%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%2C%22SHOP%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%7D%7D",
"SWY_SHARED_SESSION_INFO":"%7B%22info%22%3A%7B%22COMMON%22%3A%7B%22userType%22%3A%22G%22%2C%22zipcode%22%3A%2252732%22%2C%22banner%22%3A%22jewelosco%22%2C%22preference%22%3A%22J4U%22%2C%22Selection%22%3A%22user%22%2C%22userData%22%3A%7B%7D%7D%2C%22J4U%22%3A%7B%22storeId%22%3A%221118%22%2C%22zipcode%22%3A%2252732%22%2C%22userData%22%3A%7B%7D%7D%2C%22SHOP%22%3A%7B%22storeId%22%3A%221118%22%2C%22zipcode%22%3A%2252732%22%2C%22userData%22%3A%7B%7D%7D%7D%7D",
"abs_previouslogin":"%7B%22info%22%3A%7B%22COMMON%22%3A%7B%22Selection%22%3A%22user%22%2C%22preference%22%3A%22J4U%22%2C%22userType%22%3A%22G%22%2C%22zipcode%22%3A%2252732%22%2C%22banner%22%3A%22jewelosco%22%7D%2C%22J4U%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%2C%22SHOP%22%3A%7B%22zipcode%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%7D%7D%7D",
"SWY_SYND_USER_INFO":"%7B%22storeAddress%22%3A%22%22%2C%22storeZip%22%3A%2252732%22%2C%22storeId%22%3A%221118%22%2C%22preference%22%3A%22J4U%22%7D",
"ECommSignInCount": "0",
"SAFEWAY_MODAL_LINK": "",
"OptanonConsent": "isGpcEnabled",
"nlbi_1990338_2147483392":"jYckLK1heGBAHrRyzoaznQAAAACZvsW6rrz3C1oXWBs6UFc8",
"reese84": "3:Gl8qjGMtFKfV15EgMleAnA",
"mbox": "session#2686aefa9dea422db9f92c9b39a01830#1677717696",
"at_check": "true",
"ADRUM_BT": "R:57|i:5124367|g:a106a4d3-bbb8-4619-8262-9d3f98852991652436|e:104|n:safeway-loyalty_d99a98d0-07cc-4871-98b7-0beac77d0580"
}
</code></pre>
<p>Here is a function I made that formats a cookie header string (e.g. <code>Cookie: "c1=v1; c2=v2; c3=v3"</code>) into a dictionary where the keys are cookie names and the values are the cookie values--this format is needed to work with <code>requests.get()</code>:</p>
<pre><code>def format_cookies(cookie_pairs):
    '''
    Takes a cookie header string of name=value pairs,
    e.g. "cook1=value1; cook2=val2", and returns a {name: value} dict.
    '''
    # split on the first '=' only -- cookie values (e.g. reese84) can contain '='
    pairs = [pair.split('=', 1) for pair in cookie_pairs.split('; ')]
    formatted_pairs = {cookie_val[0]: cookie_val[1] for cookie_val in pairs}
    return formatted_pairs
</code></pre>
<p>I have altered the original script to reflect these changes.</p>
|
<python><web-scraping><python-requests><xmlhttprequest>
|
2023-03-01 05:18:40
| 1
| 330
|
Übermensch
|
75,599,811
| 3,821,009
|
Set value based on previous value in previous group if it exists
|
<p>Say I have this:</p>
<pre><code>df = pandas.DataFrame(
[ dict(a=75, b=numpy.nan, d='2023-01-01 00:00')
, dict(a=82, b=numpy.nan, d='2023-01-01 10:00')
, dict(a=39, b=numpy.nan, d='2023-01-01 20:00')
, dict(a=10, b=82 , d='2023-01-05 00:00')
, dict(a=90, b=82 , d='2023-01-05 20:00')
, dict(a=61, b=numpy.nan, d='2023-02-08 00:00')
, dict(a=35, b=numpy.nan, d='2023-02-08 10:00')
, dict(a=95, b=numpy.nan, d='2023-02-08 20:00')
, dict(a=21, b=35 , d='2023-04-15 00:00')
, dict(a=60, b=35 , d='2023-04-15 10:00')
])
df['d'] = pandas.to_datetime(df['d'])
df = df.set_index('d')
print(df)
</code></pre>
<p>which outputs:</p>
<pre><code> a b
d
2023-01-01 00:00:00 75 NaN
2023-01-01 10:00:00 82 NaN
2023-01-01 20:00:00 39 NaN
2023-01-05 00:00:00 10 82.0
2023-01-05 20:00:00 90 82.0
2023-02-08 00:00:00 61 NaN
2023-02-08 10:00:00 35 NaN
2023-02-08 20:00:00 95 NaN
2023-04-15 00:00:00 21 35.0
2023-04-15 10:00:00 60 35.0
</code></pre>
<p>In real life, I only have column <code>a</code> and my desired output is in column <code>b</code>.</p>
<p>Here, <code>b</code> equals the value in <code>a</code> from the previous available date at 10:00. Dates are not necessarily consecutive. Value at 10:00 may not exist for the previous available date, in which case <code>b</code> should be NaN.</p>
<p>Logically, I'd solve this by grouping by date and extracting the value from the previous group.</p>
<p>Without resorting to iterating each <code>(previous group, group)</code> tuples or something of sorts, can that be done with pandas?</p>
<p>More generally, are there any pandas idioms to deal with these "look up value from the previous group" situations?</p>
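<p>One idiom worth sketching, under the assumption that each date has at most one 10:00 row: extract the 10:00 value per date, shift that per-date Series by one <em>date</em> (not one row), then broadcast it back onto the full index. The name <code>ten</code> is mine, not a pandas concept:</p>

```python
import datetime
import pandas

df = pandas.DataFrame(
    {'a': [75, 82, 39, 10, 90, 61, 35, 95, 21, 60]},
    index=pandas.to_datetime(
        ['2023-01-01 00:00', '2023-01-01 10:00', '2023-01-01 20:00',
         '2023-01-05 00:00', '2023-01-05 20:00', '2023-02-08 00:00',
         '2023-02-08 10:00', '2023-02-08 20:00', '2023-04-15 00:00',
         '2023-04-15 10:00']))

# one row per date holding that date's 10:00 value
ten = df.loc[df.index.time == datetime.time(10, 0), 'a']
ten.index = ten.index.normalize()
# align to every date present, then shift by one date (NaN where 10:00 is missing)
dates = pandas.Index(sorted(df.index.normalize().unique()))
ten = ten.reindex(dates).shift(1)
# broadcast each date's shifted value back to all rows of that date
df['b'] = ten.reindex(df.index.normalize()).to_numpy()
print(df)
```

<p>The <code>shift(1)</code> happens on the per-date Series, which is what makes this a "previous group" lookup rather than a "previous row" lookup.</p>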
<hr />
<p>I'll be adding edits here as answers come to show additional info that doesn't fit nicely in a comment.</p>
<p>For <a href="https://stackoverflow.com/a/75599866/3821009">https://stackoverflow.com/a/75599866/3821009</a></p>
<pre><code>df['c'] = df.groupby(df.index.date)['a'].shift()
print(df)
</code></pre>
<p>produces:</p>
<pre><code> a b c
d
2023-01-01 00:00:00 75 NaN NaN
2023-01-01 10:00:00 82 NaN 75.0
2023-01-01 20:00:00 39 NaN 82.0
2023-01-05 00:00:00 10 82.0 NaN
2023-01-05 20:00:00 90 82.0 10.0
2023-02-08 00:00:00 61 NaN NaN
2023-02-08 10:00:00 35 NaN 61.0
2023-02-08 20:00:00 95 NaN 35.0
2023-04-15 00:00:00 21 35.0 NaN
2023-04-15 10:00:00 60 35.0 21.0
</code></pre>
<p>so that's not what I'm looking for.</p>
|
<python><pandas><dataframe>
|
2023-03-01 05:12:53
| 0
| 4,641
|
levant pied
|
75,599,785
| 7,394,787
|
how to make variable visibility in block statement in Python?
|
<p>How to achieve an effect like :</p>
<pre><code>#there is no variable named `i`
for i in range(1):
pass
print(i) #why
</code></pre>
<p>I don't want <code>i</code> to be visible after the <code>for</code> statement finishes.</p>
<p>But I don't want to use <code>del i</code> manually.</p>
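<p>A sketch of the usual workaround: Python's <code>for</code> does not introduce a block scope, so the loop variable lives until its enclosing function (or module) scope ends. Wrapping the loop in a function keeps <code>i</code> out of the outer namespace without any manual <code>del</code>:</p>

```python
def run():
    # i exists only inside this function's local scope
    for i in range(1):
        pass

run()
print('i' in globals())  # False: the loop variable never reached module scope
```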
|
<python>
|
2023-03-01 05:08:35
| 1
| 305
|
Z.Lun
|
75,599,689
| 9,782,619
|
python can't import module when running a file, but can import the module in interactive shell
|
<p>I got a strange problem.</p>
<p><code>filegetter</code> is a module developed by someone else and installed with <code>python setup.py install</code>.</p>
<p>Here is a test file.</p>
<pre><code>#instance.py
import filegetter
</code></pre>
<p>when I run</p>
<pre><code>/home/ynx/miniconda3/bin/python /home/ynx/notebook/instance.py
</code></pre>
<p>it says:</p>
<pre><code>Traceback (most recent call last):
File "/home/ynx/notebook/instance.py", line 2, in <module>
import filegetter
ModuleNotFoundError: No module named 'filegetter'
</code></pre>
<p>But if I run an interactive shell: python</p>
<pre><code>>>> import filegetter
>>>
</code></pre>
<p>It works.
I am sure the same python bin is used by check which, why and how can I import it in the file mode?</p>
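<p>A first diagnostic worth sketching: print the interpreter and module search path in both modes and diff them. Script mode prepends the script's directory to <code>sys.path</code>, while the interactive shell prepends the current working directory, so the two can resolve packages differently even with the same binary:</p>

```python
import sys

# Run this once as a script and once in the interactive shell, then compare:
# any path present in one mode but not the other is a likely hiding place
# for the missing module.
print(sys.executable)
for entry in sys.path:
    print(entry or '<cwd>')  # '' means "current working directory"
```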
|
<python>
|
2023-03-01 04:52:48
| 2
| 635
|
YNX
|
75,599,667
| 1,905,276
|
How to replace a element in a nested List in Python
|
<p>I can locate any listed content. In the example I locate 'q'. I manually mapped its index as <code>[1][0][1][1]</code>. Then I replaced it with 'z' and it works. My question is: what is the magic to get the index or object address of 'q' when the if() condition evaluates to True?</p>
<pre><code> import ctypes
lis = [['a', ['b'], 'c'], [['d', ['p', ['q']]], 'e', 'f']]
idLis = id(lis)
if 'q' in str(lis):
idLisContent = ctypes.cast(idLis, ctypes.py_object).value
print("List Content: = ", idLisContent)
print("Index 0 = ", idLisContent[0])
print("Index 1 = ", idLisContent[1])
qId = id(idLisContent[1][0][1][1])
print("Index Q = ", ctypes.cast(qId, ctypes.py_object).value)
idLisContent[1][0][1][1] = 'z'
print("List Content: = ", idLisContent)
exit(1)
</code></pre>
<p>Output:</p>
<pre><code>List Content: = [['a', ['b'], 'c'], [['d', ['p', ['q']]], 'e', 'f']]
Index 0 = ['a', ['b'], 'c']
Index 1 = [['d', ['p', ['q']]], 'e', 'f']
Index Q = ['q']
List Content: = [['a', ['b'], 'c'], [['d', ['p', 'z']], 'e', 'f']]
</code></pre>
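<p>For comparison, a plain recursive sketch needs no <code>ctypes</code> or manual index mapping at all: walk the nested lists and replace in place once the target element is found. The helper name <code>replace_nested</code> is mine, and note it replaces the <code>'q'</code> element itself rather than the enclosing <code>['q']</code> list:</p>

```python
def replace_nested(lst, target, new):
    """Replace the first occurrence of target anywhere in nested lists, in place."""
    for i, item in enumerate(lst):
        if item == target:
            lst[i] = new
            return True
        if isinstance(item, list) and replace_nested(item, target, new):
            return True
    return False

lis = [['a', ['b'], 'c'], [['d', ['p', ['q']]], 'e', 'f']]
replace_nested(lis, 'q', 'z')
print(lis)  # [['a', ['b'], 'c'], [['d', ['p', ['z']]], 'e', 'f']]
```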
|
<python>
|
2023-03-01 04:47:30
| 2
| 411
|
Santhosh Kumar
|
75,599,648
| 5,091,964
|
Pyinstaller comes with errors when importing backtesting.py module
|
<p>I am using Windows 11 for Python code development. I have a large Python program that uses the backtesting.py module. The program works fine when running it using Visual Studio Code or executing it in the console. However, when I create an EXE file using PyInstaller, the EXE file does not work. I managed to reduce the code to two instructions (see below) and yet the EXE program does not work.</p>
<pre><code>import backtesting
print ("Hello")
</code></pre>
<p>I am getting the following warnings when I run PyInstaller.</p>
<pre><code>115289 WARNING: Library user32 required via ctypes not found
115301 WARNING: Library msvcrt required via ctypes not found
</code></pre>
<p>In addition, I am also getting the following errors when I run the EXE file.</p>
<pre><code>C:\Users\menb\Documents\tests\test3\test3>test3
Traceback (most recent call last):
File "test3.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "backtesting\__init__.py", line 60, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "backtesting\backtesting.py", line 32, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "backtesting\_plotting.py", line 43, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\menb\\Documents\\tests\\test3\\test3\\backtesting\\autoscale_cb.js'
[33336] Failed to execute script 'test3' due to unhandled exception!
</code></pre>
<p>Any help to solve this problem is highly appreciated.</p>
|
<python><python-3.x><pyinstaller><back-testing>
|
2023-03-01 04:43:10
| 1
| 307
|
Menachem
|
75,599,626
| 4,451,521
|
Path ordering based on particular criteria
|
<p>I have four files (or any number of files for that matter) named</p>
<pre><code>file_V2023.2.2_0.txt
file_V2023.2.2_1.txt
file_V2023.2.3_0.txt
file_V2023.2.3_1.txt
</code></pre>
<p>If I do</p>
<pre><code>from pathlib import Path
output_path = Path("./")
for video_path in sorted(output_path.glob("*.txt")):
print(video_path)
</code></pre>
<p>I get the order above.</p>
<p>Is there a way I can get the following order:</p>
<pre><code>file_V2023.2.2_0.txt
file_V2023.2.3_0.txt
file_V2023.2.2_1.txt
file_V2023.2.3_1.txt
</code></pre>
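<p>One possible approach, sketched under the assumption that every name ends in <code>_&lt;index&gt;.txt</code>: sort by the trailing index first and the version string second, via a custom <code>key</code> function (the helper name <code>by_index_then_version</code> is mine):</p>

```python
from pathlib import Path

names = ['file_V2023.2.2_0.txt', 'file_V2023.2.2_1.txt',
         'file_V2023.2.3_0.txt', 'file_V2023.2.3_1.txt']

def by_index_then_version(path):
    stem = Path(path).stem                # e.g. 'file_V2023.2.2_0'
    version, index = stem.rsplit('_', 1)  # split off the trailing counter
    return (int(index), version)

for name in sorted(names, key=by_index_then_version):
    print(name)
```

<p>With real files, the same key works directly on <code>sorted(output_path.glob("*.txt"), key=by_index_then_version)</code>.</p>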
|
<python><glob><pathlib>
|
2023-03-01 04:38:39
| 2
| 10,576
|
KansaiRobot
|
75,599,488
| 5,659,969
|
How can I prevent cached modules/variables when using runpy in pytest tests?
|
<p>(Preface: This is a toy example to illustrate an issue that involves much larger scripts that use a ton of modules/libraries that I don't have control over)</p>
<p>Given these files:</p>
<pre class="lang-py prettyprint-override"><code># bar.py
barvar = []
def barfun():
barvar.append(1)
# foo.py
import bar
foovar = []
def foofun():
foovar.append(1)
if __name__ == '__main__':
foofun()
bar.barfun()
foovar.append(2)
bar.barvar.append(2)
print(f'{foovar =}')
print(f'{bar.barvar=}')
# test_foo.py
import sys
import os
import pytest
import runpy
sys.path.insert(0,os.getcwd()) # so that "import bar" in foo.py works
@pytest.mark.parametrize('execution_number', range(5))
def test1(execution_number):
print(f'\n{execution_number=}\n')
sys.argv=[os.path.join(os.getcwd(),'foo.py')]
runpy.run_path('foo.py',run_name="__main__")
</code></pre>
<p>If I now run <code>pytest test_foo.py -s</code> I will get:</p>
<pre><code>========================================================================
platform win32 -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
rootdir: C:\Temp
plugins: anyio-3.6.2
collected 5 items
test_foo.py
execution_number=0
foovar =[1, 2]
bar.barvar=[1, 2]
.
execution_number=1
foovar =[1, 2]
bar.barvar=[1, 2, 1, 2]
.
execution_number=2
foovar =[1, 2]
bar.barvar=[1, 2, 1, 2, 1, 2]
.
execution_number=3
foovar =[1, 2]
bar.barvar=[1, 2, 1, 2, 1, 2, 1, 2]
.
execution_number=4
foovar =[1, 2]
bar.barvar=[1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
.
========================================================================
</code></pre>
<p>So <code>barvar</code> is remembering its previous content. This is obviously detrimental to testing.</p>
<p><strong>Can it be prevented while still using <code>runpy</code>?</strong></p>
<p>Understandably, python <a href="https://docs.python.org/3/library/runpy.html" rel="nofollow noreferrer">docs</a> warn about <code>runpy</code> side effects:</p>
<blockquote>
<p>Note that this is not a sandbox module - all code is executed in the current process, and any side effects (such as cached imports of other modules) will remain in place after the functions have returned.</p>
</blockquote>
<p><strong>If this is tricky or too complicated to do reliably, are there alternatives?</strong> I am looking for the convenience of testing scripts that take arguments and produce stuff (usually files). My typical <code>pytest</code> test script sets up arguments via <code>sys.argv</code> then runs via <code>runpy</code> the target script (very large programs with lots of imports), then validates the generated files (e.g., compare against a baseline for regression testing). There are many invocations within a single test run; hence the need for a clean slate.</p>
<p><code>subprocess.run(['python.exe', 'script.py', *arglist])</code> is another option I can think of.</p>
<p>Thanks.</p>
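<p>One sketch of a cleanup that stays with <code>runpy</code>: record <code>sys.modules</code> before each run and evict anything the run imported, so the next run re-imports fresh copies. This only resets module-level state reachable through re-import; other side effects (files written, monkey-patches on pre-existing modules) still leak. The demo below builds its own tiny <code>foo.py</code>/<code>bar.py</code> in a temp directory so it is self-contained:</p>

```python
import os
import runpy
import sys
import tempfile

def run_isolated(path):
    """Run a script via runpy, then drop modules it imported so state resets."""
    before = set(sys.modules)
    try:
        return runpy.run_path(path, run_name='__main__')
    finally:
        for name in set(sys.modules) - before:
            del sys.modules[name]

# bar.barvar would normally accumulate across runs, as in the question.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'bar.py'), 'w') as f:
    f.write('barvar = []\n')
with open(os.path.join(tmp, 'foo.py'), 'w') as f:
    f.write('import bar\nbar.barvar.append(1)\nresult = len(bar.barvar)\n')

sys.path.insert(0, tmp)
first = run_isolated(os.path.join(tmp, 'foo.py'))['result']
second = run_isolated(os.path.join(tmp, 'foo.py'))['result']
print(first, second)  # 1 1 -> bar was re-imported with a fresh barvar
```

<p><code>runpy.run_path</code> returns the executed module's globals, which is what lets the test-side code inspect results without a shared import.</p>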
|
<python><unit-testing><testing><pytest><runpy>
|
2023-03-01 04:07:36
| 2
| 479
|
omasoud
|
75,599,358
| 10,215,301
|
Fail to install FlexGen: ImportError: cannot import name 'define' from 'attr'
|
<p>I am trying to install <a href="https://github.com/FMInference/FlexGen#install" rel="nofollow noreferrer">FlexGen</a> on Ubuntu running under Windows Subsystem for Linux (WSL; but not WSL2). I have already installed the required PyTorch &gt;= 1.12 and run <code>python3 -m flexgen.flex_opt --model facebook/opt-1.3b</code>, but an error message <code>ImportError: cannot import name 'define' from 'attr'</code> comes out and FlexGen is unavailable in my environment. Indeed, the <code>attr.py</code> module being picked up does not define <code>define</code>. What should I do to solve this issue?</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 -c "import torch; print(torch.__version__)"
1.13.1+cu117
$ python3 -m flexgen.flex_opt --model facebook/opt-1.3b
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/MY_USERNAME/.local/lib/python3.8/site-packages/flexgen/flex_opt.py", line 18, in <module>
from flexgen.compression import CompressionConfig
File "/home/MY_USERNAME/.local/lib/python3.8/site-packages/flexgen/compression.py", line 6, in <module>
from flexgen.pytorch_backend import (TorchTensor, TorchDevice,
File "/home/MY_USERNAME/.local/lib/python3.8/site-packages/flexgen/pytorch_backend.py", line 16, in <module>
from flexgen.utils import (GB, T, cpu_mem_stats, vector_gather,
File "/home/MY_USERNAME/.local/lib/python3.8/site-packages/flexgen/utils.py", line 3, in <module>
from attr import define, field
ImportError: cannot import name 'define' from 'attr' (/home/MY_USERNAME/.local/lib/python3.8/site-packages/attr.py)
</code></pre>
|
<python><python-3.x><windows-subsystem-for-linux><chatbot>
|
2023-03-01 03:36:53
| 1
| 3,723
|
Carlos Luis Rivera
|
75,599,357
| 2,848,049
|
paypal rest api and flask/django/python integration 2023, update user credits immediately after payment
|
<p>In my Flask web application, I am trying to update a user's credits immediately after the user has made a payment through PayPal. To make it secure, I want to make sure that the user doesn't modify the payment amount. I also want to ensure that I credit the correct user who made the payment.</p>
<p>Because when PayPal returns, the user session is detached, I cannot update the database using the session. So I want to send a 'custom' variable associated with the user who is going to make the payment. Once the payment is captured/succeeds, the custom variable can be read back from the payment confirmation, so that I can identify which user made the payment. After that, I can update the database and the user's credits.</p>
<p>My biggest question is that I can't find an appropriate place to create the custom variable and send it to PayPal during the creation of the payment.</p>
<p>I searched through tons of tutorials for Flask, but they are either deprecated, such as the paypal-python-sdk on GitHub at this link: <a href="https://github.com/paypal/PayPal-Python-SDK" rel="nofollow noreferrer">github paypal-python-sdk</a>, or written in another language, such as Node.js in the official PayPal developer docs: <a href="https://developer.paypal.com/docs/checkout/standard/integrate/" rel="nofollow noreferrer">paypal official doc with html&nodejs example</a></p>
<p>I can't get my head around the official Node.js example, but I know that I have to create two routes on my Flask server side dealing with create_order and capture_order (<a href="https://developer.paypal.com/docs/api/orders/v2/#orders_capture" rel="nofollow noreferrer">capture order rest api from paypal official doc</a>). That leaves my first big question unanswered: how do I update the user's credits immediately after the user has paid?</p>
<p>If anyone could give me any suggestions, I'd really appreciate it. A complete Flask code example would be highly appreciated.</p>
<p>Thanks in advance</p>
<p>ps: in the old tutorials, I saw some solutions to verify payment using PayPal IPN, but that does not seem to appear in the new PayPal integration API (i.e. the notify_url).</p>
|
<python><flask><paypal>
|
2023-03-01 03:36:15
| 2
| 574
|
wildcolor
|
75,599,257
| 2,998,077
|
Python to add data label on linechart from Matplotlib and Pandas GroupBy
|
<p>I am hoping to add data labels to a line chart produced by Matplotlib from Pandas GroupBy.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
from io import StringIO
csvfile = StringIO(
"""
Name Year - Month Score
Mike 2022-09 192
Mike 2022-08 708
Mike 2022-07 140
Mike 2022-05 144
Mike 2022-04 60
Mike 2022-03 108
Kate 2022-07 19850
Kate 2022-06 19105
Kate 2022-05 23740
Kate 2022-04 19780
Kate 2022-03 15495
Peter 2022-08 51
Peter 2022-07 39
Peter 2022-06 49
Peter 2022-05 49
Peter 2022-04 79
Peter 2022-03 13
Lily 2022-11 2
David 2022-11 3
David 2022-10 6
David 2022-08 2""")
df = pd.read_csv(csvfile, sep = '\t', engine='python')
for group_name, sub_frame in df.groupby("Name"):
if sub_frame.shape[0] >= 2:
sub_frame_sorted = sub_frame.sort_values('Year - Month') # sort the data-frame by a column
line_chart = sub_frame_sorted.plot("Year - Month", "Score")
label = sub_frame_sorted['Score']
line_chart.annotate(label, (sub_frame_sorted['Year - Month'], sub_frame_sorted['Score']), ha='center')
plt.show()
</code></pre>
<p>The 2 lines for data labels throw an error:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>How can I correct them?</p>
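<p>For reference, a hedged sketch of the usual pattern: <code>annotate</code> takes one label string and one <code>(x, y)</code> point per call, so passing whole Series is what triggers the ambiguous-truth-value error; the labels have to be added in a loop over the rows. The sketch uses numeric x positions with tick labels for simplicity, and a headless backend so it runs anywhere:</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

months = ['2022-03', '2022-04', '2022-05', '2022-07']
scores = [108, 60, 144, 140]

fig, ax = plt.subplots()
xs = range(len(months))
ax.plot(xs, scores)
ax.set_xticks(list(xs))
ax.set_xticklabels(months)
# one annotate call per data point
for x, y in zip(xs, scores):
    ax.annotate(str(y), (x, y), ha='center')
```

<p>In the question's loop, the equivalent would be iterating <code>zip(sub_frame_sorted['Year - Month'], sub_frame_sorted['Score'])</code> and calling <code>line_chart.annotate</code> once per pair.</p>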
|
<python><pandas><matplotlib><plot>
|
2023-03-01 03:13:12
| 2
| 9,496
|
Mark K
|
75,599,247
| 7,267,480
|
pyswarms toy example - help to understand simple things
|
<p>trying to understand Particle Swarm Optimization using Python pyswarms package.</p>
<p><a href="https://pyswarms.readthedocs.io/en/latest/intro.html" rel="nofollow noreferrer">https://pyswarms.readthedocs.io/en/latest/intro.html</a></p>
<p>Need to optimize a function of multiple variables, given as:</p>
<pre><code># Define the objective function
def objective_function(x):
return ((x[0] - 1)**2 + (x[1]-2)**2 + (x[2] - 3)**2)
</code></pre>
<p>I want to find a global minimum in a bounded region if I have an initial guess near that global min.</p>
<p>Here is what I have done, but it's actually not working.
It gives the error:</p>
<blockquote>
<p>ValueError: operands could not be broadcast together with shapes (3,)
(100,)</p>
</blockquote>
<p>Is something wrong with the initial guess or bounds? Do I need to reshape them in some specific way? Please give me a key to resolving this issue.</p>
<p>Can anyone look and explain - what is the problem?
Here is the code to try.</p>
<pre><code>import numpy as np
import pyswarms as ps
# Define the objective function
def objective_function(x):
return ((x[0] - 1)**2 + (x[1]-2)**2 + (x[2] - 3)**2)
# Define the bounds for each element of x
bounds = ([-5]*3, [5]*3)
print('Bounds:')
print(bounds)
# Define the initial guesses for each element of x
initial_guess_1 = np.array([1.0, 2.0, 2.9])
# Define the number of elements to optimize
dimensions = initial_guess_1.size
print('Dimensions:', dimensions)
# defining the number of particles to use:
n_particles = 100
print('Objective function for initial guess:')
print(objective_function(initial_guess_1))
# reshaping to get all the particles initial guess positions?
# I don't know if it's necessary to do this?
initial_guess = initial_guess_1.reshape((1, dimensions))
init_pos = np.tile(initial_guess, (n_particles, 1))
print('Initial guess of one particle:')
print(initial_guess_1)
print('Initial positions for all particles: ')
print(init_pos.shape)
print(init_pos)
# Define the options for the optimizer
options = {
'c1': 0.5, # cognitive parameter
'c2': 0.3, # social parameter
'w': 0.9 # inertia weight
}
# Create a PSO optimizer
optimizer = ps.single.GlobalBestPSO(n_particles=n_particles,
dimensions=dimensions,
options=options,
bounds=bounds,
init_pos=init_pos
)
# Initialize the particles with the initial guesses
#optimizer.pos = init_pos
# Run the optimization
best_cost, best_position= optimizer.optimize(objective_function, iters=1000, verbose=True)
# Print the results
print("Best position:", best_position)
print("Best cost:", best_cost)
print('Func value at best pos', objective_function(best_cost))
</code></pre>
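<p>For what it's worth, a sketch of the likely culprit: pyswarms calls the objective once per iteration with the <em>whole swarm</em>, an array of shape <code>(n_particles, dimensions)</code>, and expects a cost array of shape <code>(n_particles,)</code> back. Indexing columns instead of scalar elements makes the function vectorized and should resolve the <code>(3,)</code> vs <code>(100,)</code> broadcast error:</p>

```python
import numpy as np

def objective_function(x):
    # x: (n_particles, dimensions) -> one cost per particle: (n_particles,)
    return (x[:, 0] - 1) ** 2 + (x[:, 1] - 2) ** 2 + (x[:, 2] - 3) ** 2

# stand-in for init_pos from the question: 100 copies of the initial guess
swarm = np.tile(np.array([1.0, 2.0, 2.9]), (100, 1))
print(objective_function(swarm).shape)  # (100,)
```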
|
<python><optimization><particle-swarm>
|
2023-03-01 03:09:00
| 1
| 496
|
twistfire
|
75,599,218
| 20,240,835
|
Target rules may not contain wildcards. Please specify concrete files or a rule without wildcards error
|
<p>I have a snakemake srcipt like</p>
<pre><code># minimal example
configfile: "../snakemake/config.yaml"
import os
rule generateInclude:
input:
archaic_inc=config['input']['archaic_include'],
modern_inc=config['input']['modern_include']
output:
all_include='include_{reference_genome}.bed'
params:
reference_genome='{reference_genome}'
shell:
"""
if [ {params.reference_genome}=='hg19' ]; then
bedtools intersect -a <(zcat {input.archaic_inc} | sed 's/^chr//') \
-b <(zcat {input.modern_inc} | sed 's/^chr//') > {output.all_include}
else
bedtools intersect -a <(zcat {input.archaic_inc}) \
-b <(zcat {input.modern_inc}) > {output.all_include}
fi
"""
rule all:
input:
'include_hg19.bed'
</code></pre>
<p>When I run it, snakemake report</p>
<pre><code>Target rules may not contain wildcards. Please specify concrete files or a rule without wildcards.
</code></pre>
<p>I am not sure what's wrong with it. Could you please give me some help?</p>
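<p>A hedged sketch of the usual fix: Snakemake uses the <em>first</em> rule in the file as the default target, and a target rule may not contain wildcards. Moving <code>rule all</code> to the top (or requesting the concrete file directly with <code>snakemake include_hg19.bed</code>) lets the <code>{reference_genome}</code> wildcard be inferred by matching the output pattern against the concrete filename:</p>

```
# Declaring the concrete target first makes it the default rule
rule all:
    input:
        'include_hg19.bed'

rule generateInclude:
    # ... unchanged from above; {reference_genome} is now resolved
    # from the concrete target requested by rule all
    ...
```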
|
<python><bioinformatics><snakemake>
|
2023-03-01 03:03:36
| 1
| 689
|
zhang
|
75,599,188
| 6,335,363
|
How can I split a for statement over multiple lines with Flake8?
|
<p>I am currently trying to write a for statement that is over 80 characters long.</p>
<pre class="lang-py prettyprint-override"><code>for i, (expected, actual) in enumerate(zip_longest(self.__lines, lines)):
...
# Note that the line above is 73 characters long. It becomes 81
# characters long once you put it in a class and a method.
# I did not include the class definition for the sake of the
# simplicity of the question, but I've needed to add this comment
# explaining this because people have complained that the line isn't
# actually > 80 characters long.
</code></pre>
<p>In order to satisfy the maximum line length rule in Flake8, I have attempted to split it into multiple lines, but no combination I've found appears to satisfy it.</p>
<pre class="lang-py prettyprint-override"><code>for i, (expected, actual)\
in enumerate(zip_longest(self.__lines, lines)):
# ^ continuation line with same indent as next logical line - flake8(E125)
...
</code></pre>
<pre class="lang-py prettyprint-override"><code>for i, (expected, actual)\
in enumerate(zip_longest(self.__lines, lines)):
# ^ continuation line missing indentation or outdented - flake8(E122)
...
</code></pre>
<p>How can I split my for loop over multiple lines without needing to disable these rules in my config or add a <code># noqa</code> comment?</p>
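<p>One split that keeps Flake8 quiet without backslashes, sketched on a small self-contained stand-in class: bind the iterable to a name before the loop, so no single line exceeds the limit. (Breaking inside the <code>enumerate(...)</code> parentheses, an implicit continuation, is the other common fix.)</p>

```python
from itertools import zip_longest

class LineChecker:
    def __init__(self, lines):
        self.__lines = lines

    def diff(self, lines):
        # Naming the iterable first keeps the for-statement short.
        pairs = enumerate(zip_longest(self.__lines, lines))
        return [(i, expected, actual) for i, (expected, actual) in pairs]

print(LineChecker(['a', 'b']).diff(['a']))  # [(0, 'a', 'a'), (1, 'b', None)]
```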
|
<python><flake8>
|
2023-03-01 02:56:04
| 1
| 2,081
|
Maddy Guthridge
|
75,599,179
| 13,258,121
|
Tkinter dynamic tooltip that moves with the cursor and updates
|
<p>I would like to implement a <code>tooltip</code> beside the cursor in my <code>tkinter</code> GUI - specifically one that would display the x value of a <code>matplot</code> plot in a <code>tk.Canvas</code> that moves with and updates dynamically as the mouse moves.</p>
<p>There are some good examples (<a href="https://stackoverflow.com/questions/3221956/how-do-i-display-tooltips-in-tkinter">here</a>) in creating a <code>tooltip</code>, however everything seems to either updating a <code>tk.Label</code> or create a static label popup - nothing that moves with the cursor. <code>Matplotlib</code> has a <code>Cursor</code> class (<a href="https://matplotlib.org/stable/gallery/widgets/cursor.html" rel="nofollow noreferrer">docs</a>) but that draws lines on the plot, and doesn't display values. <code>Tkinter</code> also has a <code>Cursor</code> class (<a href="https://www.tcl.tk/man/tcl8.6/TkCmd/cursors.html" rel="nofollow noreferrer">docs</a>), but that just changes the cursor symbol over the associated <code>widget</code>. I am happy I can get the x value from the plot via calling <code>Canvas.canvasx</code> or similar and send to something associated with the mouse (or worst case, a static <code>tk.Label</code> somewhere)</p>
<p>Below is code showing the <code>matplotlib.Cursor</code> functionality and the <code>tkinter.Cursor</code>. Ideally a tooltip pops up and shows x values between 0-8000 as the mouse moves.</p>
<pre><code>import tkinter as tk
from matplotlib.backends.backend_tkagg import (FigureCanvasTkAgg)
from matplotlib.figure import Figure
from matplotlib.widgets import Cursor
import numpy as np
root = tk.Tk()
Fs = 8000
f = 5
sample = 8000
x = np.arange(sample)
y = np.sin(2 * np.pi * f * x / Fs)
figure = Figure(figsize=(5, 4), dpi=100)
plot = figure.add_subplot(1, 1, 1)
plot.plot(x, y, color="blue")
canvas = FigureCanvasTkAgg(figure, root)
canvas.get_tk_widget().grid(column=0, row=3,columnspan=4,pady = 4, padx=4)
# add button to demonstrate tkinter cursor function
B = tk.Button(root, text ="Cursor", relief=tk.RAISED,
cursor="coffee_mug", width = 25)
B.grid(column = 0, row = 4, columnspan = 5, padx = 4, pady = 4)
# matplotlib cursor function
cursor = Cursor(plot, useblit=True, horizOn=False, vertOn=True,
color="green", linewidth=2.0)
root.mainloop()
</code></pre>
|
<python><matplotlib><tkinter>
|
2023-03-01 02:54:20
| 1
| 370
|
Lachlan
|
75,598,980
| 4,508,962
|
How to return a anonymous NamedTuple in python defined only in the return type hint
|
<p>I come from TypeScript and am new to Python. When I have 2 things to return from a function and I only use these 2 keys for that function's return type and nowhere else in the code, I don't create a complete class; I instead use TypeScript's convenient syntax:</p>
<pre><code>fn(): {
'return_key_1': number,
'return_key_2': string,
} {
// statements..
return {
{
'return_key_1': 5,
'return_key_2': 'hello',
'error_key': 9, // IDE shows an error here as I didn't declared this key
}
}
</code></pre>
<p>which enables to create rapidly return data structure without the boilerplate of declaring a entire class, and I have all the IDE autocomplete and errors/warning associated to that data-structure.</p>
<p>Is there an equivalent in Python? I saw there are NamedTuple and the @dataclass classes from Python 3.7, but how do you make Python return a NamedTuple with defined keys without declaring the class anywhere except in the return type?</p>
<p>I tried:</p>
<pre><code>def fn(self) -> NamedTuple('MyTuple', fields = [('params', str), ('cursor_end', int)]):
return {
'params': 'hello',
'cursor_end': 4,
}
</code></pre>
<p>But PyCharm says "Expected type 'MyTuple', got 'dict[str, str | int]' instead"</p>
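For reference, a hedged sketch of how this is usually written in Python: the NamedTuple does need a small declaration next to the function, and the function must return an instance of the tuple type, not a dict literal. The names `FnResult`, `fn` and `fn2` are made up for illustration.

```python
from typing import NamedTuple

class FnResult(NamedTuple):
    # tiny declaration right above the function; unknown keys are flagged by the IDE
    params: str
    cursor_end: int

def fn() -> FnResult:
    return FnResult(params="hello", cursor_end=4)

# The functional form from the question also works, but the return value
# must be the tuple type, not a dict (which is why PyCharm complained):
MyTuple = NamedTuple("MyTuple", [("params", str), ("cursor_end", int)])

def fn2() -> MyTuple:
    return MyTuple(params="hello", cursor_end=4)
```

A `TypedDict` would be the closer analogue if a dict literal return is preferred over a tuple.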
|
<python><typescript><type-hinting>
|
2023-03-01 02:12:14
| 1
| 1,207
|
Jerem Lachkar
|
75,598,774
| 4,451,521
|
Adding a column to a dataframe based on another dataframe
|
<p>I have a dataframe like this</p>
<pre><code>some_info THIS_info
abd set_1
def set_1
www set_1
qqq set_2
wws set_2
2222 set_3
</code></pre>
<p>and another dataframe like this</p>
<pre><code>THIS_info this_algo
set_1 algo_1
set_2 algo_2
set_3 algo_2
</code></pre>
<p>I want to add a column to the first dataframe, based on the info on "THIS_info" so that I can get</p>
<pre><code>some_info THIS_info this_algo
abd set_1 algo_1
def set_1 algo_1
www set_1 algo_1
qqq set_2 algo_2
wws set_2 algo_2
2222 set_3 algo_2
</code></pre>
<p>Is there a way to achieve this?</p>
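A hedged sketch of the usual approach, with the two frames named `df1` and `df2` here for illustration: a left `merge` on the shared column, or equivalently a `map` against the lookup frame.

```python
import pandas as pd

df1 = pd.DataFrame({"some_info": ["abd", "def", "www", "qqq", "wws", "2222"],
                    "THIS_info": ["set_1", "set_1", "set_1", "set_2", "set_2", "set_3"]})
df2 = pd.DataFrame({"THIS_info": ["set_1", "set_2", "set_3"],
                    "this_algo": ["algo_1", "algo_2", "algo_2"]})

# left merge keeps every row of df1 and attaches the matching this_algo
out = df1.merge(df2, on="THIS_info", how="left")

# equivalent without creating a new frame:
df1["this_algo"] = df1["THIS_info"].map(df2.set_index("THIS_info")["this_algo"])
```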
|
<python><pandas>
|
2023-03-01 01:16:39
| 2
| 10,576
|
KansaiRobot
|
75,598,670
| 5,342,009
|
Stripe Subscription using stripe.Subscription.create function does not provide client_secret with Django
|
<p>As suggested <a href="https://stripe.com/docs/billing/subscriptions/build-subscriptions?ui=elements" rel="nofollow noreferrer">here</a>, I am using the stripe.Subscription.create function to create a subscription for the users in my Django DRM and expect to get a client secret that is associated with the subscription and the related payment_intent.</p>
<p>However, the following line gives me an error:</p>
<pre><code>customer.clientsecret=stripe_subscription.latest_invoice.payment_intent.client_secret
</code></pre>
<p>AttributeError: 'NoneType' object has no attribute 'client_secret'</p>
<p>Below is the code that I am executing in the backend :</p>
<pre><code>stripe_customer = stripe.Customer.create(email=instance.email)
customer.product = Product.objects.get(plan=0) # Free plan price id
customer.stripe_customer_id = stripe_customer['id']
stripe_subscription = stripe.Subscription.create(
customer=customer.stripe_customer_id,
items=[{"price": customer.product.stripe_plan_id},],
payment_behavior='default_incomplete',
payment_settings={'save_default_payment_method': 'on_subscription'},
expand=['latest_invoice.payment_intent'],
)
customer.clientsecret=stripe_subscription.latest_invoice.payment_intent.client_secret
</code></pre>
<p>Here below is the definition for Customer model :</p>
<pre><code>class Customer(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
stripe_customer_id = models.CharField(max_length=40, default="")
product = models.ForeignKey(Product, on_delete=models.SET_NULL, null=True)
stripe_subscription_id = models.CharField(max_length=40, default="")
clientsecret = models.CharField(max_length=80, default="")
active = models.BooleanField(default=True)
@property
def get_created_date(self):
subscription = stripe.Subscription.retrieve(self.stripe_subscription_id)
return datetime.fromtimestamp(subscription.created)
@property
def get_next_billing_date(self):
subscription = stripe.Subscription.retrieve(self.stripe_subscription_id)
return datetime.fromtimestamp(subscription.current_period_end)
def __str__(self):
return self.user.username
</code></pre>
<p>The customers are created and associated with users when a new user signs up :</p>
<pre><code>def post_save_customer_create(sender, instance, created, *args, **kwargs):
customer, created = Customer.objects.get_or_create(user=instance)
...
</code></pre>
<p>And here is the question : "How can I get a valid client_secret for the subscription that I create ?"</p>
<p>The value of the stripe_subscription is as below, and as you can see that latest_invoice is there, but payment_intent is null :</p>
<pre><code>{
"application": null,
"application_fee_percent": null,
"automatic_tax": {
"enabled": false
},
"billing_cycle_anchor": 1677632660,
"billing_thresholds": null,
"cancel_at": null,
"cancel_at_period_end": false,
"canceled_at": null,
"collection_method": "charge_automatically",
"created": 1677632660,
"currency": "usd",
"current_period_end": 1680311060,
"current_period_start": 1677632660,
"customer": "cus_NRX9a1XQtjWJPd",
"days_until_due": null,
"default_payment_method": null,
"default_source": null,
"default_tax_rates": [],
"description": null,
"discount": null,
"ended_at": null,
"id": "sub_1Mge5gLS6SANVcyCOTRj2dLw",
"items": {
"data": [
{
"billing_thresholds": null,
"created": 1677632660,
"id": "si_NRXAdvOPbsuzfz",
"metadata": {},
"object": "subscription_item",
"plan": {
"active": true,
"aggregate_usage": null,
"amount": 0,
"amount_decimal": "0",
"billing_scheme": "per_unit",
"created": 1673957019,
"currency": "usd",
"id": "price_1MRDt9LS6SANVcyCvUMav5Fr",
"interval": "month",
"interval_count": 1,
"livemode": false,
"metadata": {},
"nickname": "Free",
"object": "plan",
"product": "prod_NBb5jZ3Sg3Knrl",
"tiers": null,
"tiers_mode": null,
"transform_usage": null,
"trial_period_days": null,
"usage_type": "licensed"
},
"price": {
"active": true,
"billing_scheme": "per_unit",
"created": 1673957019,
"currency": "usd",
"custom_unit_amount": null,
"id": "price_1MRDt9LS6SANVcyCvUMav5Fr",
"livemode": false,
"lookup_key": null,
"metadata": {},
"nickname": "Free",
"object": "price",
"product": "prod_NBb5jZ3Sg3Knrl",
"recurring": {
"aggregate_usage": null,
"interval": "month",
"interval_count": 1,
"trial_period_days": null,
"usage_type": "licensed"
},
"tax_behavior": "unspecified",
"tiers_mode": null,
"transform_quantity": null,
"type": "recurring",
"unit_amount": 0,
"unit_amount_decimal": "0"
},
"quantity": 1,
"subscription": "sub_1Mge5gLS6SANVcyCOTRj2dLw",
"tax_rates": []
}
],
"has_more": false,
"object": "list",
"total_count": 1,
"url": "/v1/subscription_items?subscription=sub_1Mge5gLS6SANVcyCOTRj2dLw"
},
"latest_invoice": {
"account_country": "GB",
"account_name": "Su Technology Ltd",
"account_tax_ids": null,
"amount_due": 0,
"amount_paid": 0,
"amount_remaining": 0,
"amount_shipping": 0,
"application": null,
"application_fee_amount": null,
"attempt_count": 0,
"attempted": true,
"auto_advance": false,
"automatic_tax": {
"enabled": false,
"status": null
},
"billing_reason": "subscription_create",
"charge": null,
"collection_method": "charge_automatically",
"created": 1677632660,
"currency": "usd",
"custom_fields": null,
"customer": "cus_NRX9a1XQtjWJPd",
"customer_address": null,
"customer_email": "bulkanutku@gmail.com",
"customer_name": null,
"customer_phone": null,
"customer_shipping": null,
"customer_tax_exempt": "none",
"customer_tax_ids": [],
"default_payment_method": null,
"default_source": null,
"default_tax_rates": [],
"description": null,
"discount": null,
"discounts": [],
"due_date": null,
"ending_balance": 0,
"footer": null,
"from_invoice": null,
"hosted_invoice_url": "https://invoice.stripe.com/i/acct_1GJetwLS6SANVcyC/test_YWNjdF8xR0pldHdMUzZTQU5WY3lDLF9OUlhBQjBaaVZMQk5FSFFFcTBydVB5bzNpVnlORWk1LDY4MTczNDYw0200gE9zTwdN?s=ap",
"id": "in_1Mge5gLS6SANVcyCtGvBkugu",
"invoice_pdf": "https://pay.stripe.com/invoice/acct_1GJetwLS6SANVcyC/test_YWNjdF8xR0pldHdMUzZTQU5WY3lDLF9OUlhBQjBaaVZMQk5FSFFFcTBydVB5bzNpVnlORWk1LDY4MTczNDYw0200gE9zTwdN/pdf?s=ap",
"last_finalization_error": null,
"latest_revision": null,
"lines": {
"data": [
{
"amount": 0,
"amount_excluding_tax": 0,
"currency": "usd",
"description": "1 \u00d7 (at $0.00 / month)",
"discount_amounts": [],
"discountable": true,
"discounts": [],
"id": "il_1Mge5gLS6SANVcyCTgQ4Jyt4",
"livemode": false,
"metadata": {},
"object": "line_item",
"period": {
"end": 1680311060,
"start": 1677632660
},
"plan": {
"active": true,
"aggregate_usage": null,
"amount": 0,
"amount_decimal": "0",
"billing_scheme": "per_unit",
"created": 1673957019,
"currency": "usd",
"id": "price_1MRDt9LS6SANVcyCvUMav5Fr",
"interval": "month",
"interval_count": 1,
"livemode": false,
"metadata": {},
"nickname": "Free",
"object": "plan",
"product": "prod_NBb5jZ3Sg3Knrl",
"tiers": null,
"tiers_mode": null,
"transform_usage": null,
"trial_period_days": null,
"usage_type": "licensed"
},
"price": {
"active": true,
"billing_scheme": "per_unit",
"created": 1673957019,
"currency": "usd",
"custom_unit_amount": null,
"id": "price_1MRDt9LS6SANVcyCvUMav5Fr",
"livemode": false,
"lookup_key": null,
"metadata": {},
"nickname": "Free",
"object": "price",
"product": "prod_NBb5jZ3Sg3Knrl",
"recurring": {
"aggregate_usage": null,
"interval": "month",
"interval_count": 1,
"trial_period_days": null,
"usage_type": "licensed"
},
"tax_behavior": "unspecified",
"tiers_mode": null,
"transform_quantity": null,
"type": "recurring",
"unit_amount": 0,
"unit_amount_decimal": "0"
},
"proration": false,
"proration_details": {
"credited_items": null
},
"quantity": 1,
"subscription": "sub_1Mge5gLS6SANVcyCOTRj2dLw",
"subscription_item": "si_NRXAdvOPbsuzfz",
"tax_amounts": [],
"tax_rates": [],
"type": "subscription",
"unit_amount_excluding_tax": "0"
}
],
"has_more": false,
"object": "list",
"total_count": 1,
"url": "/v1/invoices/in_1Mge5gLS6SANVcyCtGvBkugu/lines"
},
"livemode": false,
"metadata": {},
"next_payment_attempt": null,
"number": "E86AFCCB-0313",
"object": "invoice",
"on_behalf_of": null,
"paid": true,
"paid_out_of_band": false,
"payment_intent": null,
"payment_settings": {
"default_mandate": null,
"payment_method_options": null,
"payment_method_types": null
},
"period_end": 1677632660,
"period_start": 1677632660,
"post_payment_credit_notes_amount": 0,
"pre_payment_credit_notes_amount": 0,
"quote": null,
"receipt_number": null,
"rendering_options": null,
"shipping_cost": null,
"shipping_details": null,
"starting_balance": 0,
"statement_descriptor": null,
"status": "paid",
"status_transitions": {
"finalized_at": 1677632660,
"marked_uncollectible_at": null,
"paid_at": 1677632660,
"voided_at": null
},
"subscription": "sub_1Mge5gLS6SANVcyCOTRj2dLw",
"subtotal": 0,
"subtotal_excluding_tax": 0,
"tax": null,
"tax_percent": null,
"test_clock": null,
"total": 0,
"total_discount_amounts": [],
"total_excluding_tax": 0,
"total_tax_amounts": [],
"transfer_data": null,
"webhooks_delivered_at": 1677632660
},
"livemode": false,
"metadata": {},
"next_pending_invoice_item_invoice": null,
"object": "subscription",
"on_behalf_of": null,
"pause_collection": null,
"payment_settings": {
"payment_method_options": null,
"payment_method_types": null,
"save_default_payment_method": "on_subscription"
},
"pending_invoice_item_interval": null,
"pending_setup_intent": "seti_1Mge5gLS6SANVcyCiD2a3pQM",
"pending_update": null,
"plan": {
"active": true,
"aggregate_usage": null,
"amount": 0,
"amount_decimal": "0",
"billing_scheme": "per_unit",
"created": 1673957019,
"currency": "usd",
"id": "price_1MRDt9LS6SANVcyCvUMav5Fr",
"interval": "month",
"interval_count": 1,
"livemode": false,
"metadata": {},
"nickname": "Free",
"object": "plan",
"product": "prod_NBb5jZ3Sg3Knrl",
"tiers": null,
"tiers_mode": null,
"transform_usage": null,
"trial_period_days": null,
"usage_type": "licensed"
},
"quantity": 1,
"schedule": null,
"start_date": 1677632660,
"status": "active",
"tax_percent": null,
"test_clock": null,
"transfer_data": null,
"trial_end": null,
"trial_settings": {
"end_behavior": {
"missing_payment_method": "create_invoice"
}
},
"trial_start": null
}
</code></pre>
|
<python><django><django-models><stripe-payments>
|
2023-03-01 00:49:34
| 1
| 1,312
|
london_utku
|
75,598,463
| 19,299,757
|
Pytest ordering of test suites
|
<p>I have a set of test files (.py files) for different UI tests.
I want to run these test files using pytest in a specific order. I used the below command:</p>
<pre><code>python -m pytest -vv -s --capture=tee-sys --html=report.html --self-contained-html ./Tests/test_transTypes.py ./Tests/test_agentBank.py ./Tests/test_bankacct.py
</code></pre>
<p>The pytest execution is triggered from an AWS Batch job.
When the test execution happens, it does not execute the test files in the order specified in the above command.
Instead it first runs test_agentBank.py, followed by test_bankacct.py, then test_transTypes.py.
Each of these Python files contains a bunch of test functions.</p>
<p>I also tried decorating the test classes, such as with @pytest.mark.run(order=1) in the first Python file (test_transTypes.py), @pytest.mark.run(order=2) in the 2nd Python file (test_agentBank.py), etc.
This seems to run the tests in order, but at the end I get a warning:</p>
<pre><code> PytestUnknownMarkWarning: Unknown pytest.mark.run - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs
.pytest.org/en/stable/how-to/mark.html
@pytest.mark.run(order=1)
</code></pre>
<p>What is the correct way of running tests in a specific order in pytest?
Each of my "test_" Python files needs to be run using pytest.</p>
<p>Any help much appreciated.</p>
|
<python><pytest>
|
2023-03-01 00:06:54
| 3
| 433
|
Ram
|
75,598,419
| 4,451,521
|
Pandas string extract from a dataframe with strings resembling dictionaries
|
<p>I am looking to use the Pandas string extract feature.</p>
<p>I have a dataframe like this:</p>
<pre><code>lista=[ "{'FIRST_id': 'awe', 'THIS_id': 'awec_20230222_1626_i0ov0w', 'NOTTHIS_id': 'awep_20230222_1628_p8f5hd52u3oknc24'}","{'FIRST_id': 'awe', 'THIS_id': 'awec_20230222_1626_i0ov0w', 'NOTTHIS_id': 'awep_20230222_1641_jwjajtals49wc88p'}"]
dfpack=pd.DataFrame(lista,columns=["awesome_config"])
print(dfpack)
</code></pre>
<p>So in the column "awesome_config" I have some string with some information:</p>
<pre><code> awesome_config
0 {'FIRST_id': 'awe', 'THIS_id': 'awec_20230222...
1 {'FIRST_id': 'awe', 'THIS_id': 'awec_20230222...
</code></pre>
<p>I want to get only the "THIS_id" info in a column.</p>
<p>Therefore what I want to get is a dataframe with:</p>
<pre><code>THIS_id
awec_20230222_1626_i0ov0w
awec_20230222_1626_i0ov0w
</code></pre>
<p>I have been trying something like:</p>
<pre><code>#dd=dfpack['awesome_config'].str.extract(pat= "({'FIRST_id':'awe', 'THIS_id':).")
dd=dfpack['awesome_config'].str.extract(pat= "({'FIRST_id':'awe').")
print(dd)
</code></pre>
<p>But they all give me a dataframe with NaNs.</p>
<p>How can I use extract correctly here?</p>
<h2>Edit</h2>
<p>I have come up with this:</p>
<pre><code>dd=dfpack['awesome_config'].str.extract(r"^({'FIRST_id': 'awe', 'THIS_id': )(?P<THIS_id>.*), 'NOTTHIS_id':(?P<restofit>).* ")
</code></pre>
<p>but now I got:</p>
<pre><code>0 'awec_20230222_1626_i0ov0w'
1 'awec_20230222_1626_i0ov0w'
Name: THIS_id, dtype: object
</code></pre>
<p>so the quotation marks are still there; I need the values without them</p>
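Two hedged alternatives, reusing the question's variable names: keep the quotes outside the capture group in the regex, or skip the regex entirely and parse each string as the Python dict literal it already is via `ast.literal_eval`. The `NOTTHIS_id` values are shortened here for brevity.

```python
import ast
import pandas as pd

lista = ["{'FIRST_id': 'awe', 'THIS_id': 'awec_20230222_1626_i0ov0w', 'NOTTHIS_id': 'x'}",
         "{'FIRST_id': 'awe', 'THIS_id': 'awec_20230222_1641_j', 'NOTTHIS_id': 'y'}"]
dfpack = pd.DataFrame(lista, columns=["awesome_config"])

# regex: the surrounding quotes sit outside the named group, so they are dropped
dd = dfpack["awesome_config"].str.extract(r"'THIS_id': '(?P<THIS_id>[^']*)'")

# literal_eval: treat each string as the dict it is, then pull the key
dfpack["THIS_id"] = dfpack["awesome_config"].map(lambda s: ast.literal_eval(s)["THIS_id"])
```

The `literal_eval` route is the more robust of the two if the key order or spacing ever changes.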
|
<python><pandas>
|
2023-02-28 23:57:56
| 1
| 10,576
|
KansaiRobot
|
75,598,344
| 10,266,106
|
Properly Fitting a Gamma Cumulative Distribution Function
|
<p>I have two Numpy arrays (both 210 entries in total) of rainfall values, one observed and the other forecast. My goal is to create a best-fit gamma CDF (my first time diving into gamma CDFs) to both of these arrays and determine the relevant percentile that values then provided would fall into. The image below provides a simpler graphical reference of the gamma CDF I'm attempting to create with these two arrays. An important note is that the y-axis references the percentile of each value in the histogram, so ranging from 1st to 99th:</p>
<p><a href="https://i.sstatic.net/lFNJZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lFNJZ.jpg" alt="Gamma CDFs" /></a></p>
<p>These arrays are as follows:</p>
<pre><code>guess = [0.02 0.03 0.02 0.04 0.01 0.01 0.04 0.01 0. 0. 0.01 0.03 0.03 0.04
0.05 0.03 0. 0.02 0.03 0.03 0.04 0.03 0.04 0.04 0.04 0.04 0.01 0.01
0.01 0.03 0.04 0.03 0.02 0.05 0.03 0. 0. 0.04 0.05 0.03 0.05 0.03
0.03 0. 0.01 0.02 0.01 0.05 0.01 0.05 0.05 0.04 0.04 0.02 0.02 0.04
0.04 0.04 0.02 0.04 0.02 0.03 0.04 0.04 0. 0.15 0.07 0.08 0.15 0.08
0.13 0.14 0.07 0.13 0.13 0.08 0.14 0.1 0.08 0.12 0.14 0.11 0.15 0.14
0.14 0.16 0.15 0.15 0.06 0.1 0.1 0.09 0.09 0.11 0.07 0.12 0.11 0.15
0.06 0.11 0.09 0.09 0.08 0.09 0.12 0.07 0.07 0.09 0.12 0.16 0.13 0.11
0.1 0.08 0.13 0.06 0.09 0.13 0.16 0.12 0.23 0.35 0.33 0.28 0.24 0.33
0.25 0.25 0.24 0.25 0.28 0.28 0.34 0.24 0.33 0.17 0.25 0.24 0.35 0.24
0.24 0.22 0.29 0.23 0.2 0.32 0.25 0.25 0.33 0.21 0.18 0.22 0.27 0.18
0.25 0.22 0.29 0.27 0.33 0.2 0.31 0.29 0.17 0.17 0.29 0.39 0.65 0.84
0.71 0.64 0.52 0.91 0.82 0.36 0.37 0.95 0.87 0.73 0.67 0.73 0.8 0.91
0.63 0.58 0.6 0.75 0.53 0.88 0.84 0.98 1.2 1.2 1.02 1.02 1.17 1.14
1.02 1.13 1.15 1.25 1.03 1.04 1.25 1.12 1.02 1.26 1.44 1.33 1.33 1.49]
actual = [0.04 0.03 0.03 0.02 0.04 0.01 0.03 0.02 0.01 0.01 0.04 0.01 0. 0.05
0.03 0.03 0.05 0.04 0.02 0.04 0.02 0.01 0.05 0. 0.01 0.05 0.01 0.02
0.04 0. 0.01 0.01 0.04 0.04 0.03 0.01 0.03 0.04 0. 0.03 0.03 0.05
0.05 0.01 0.05 0.05 0.03 0.02 0.02 0.05 0.04 0.05 0.04 0.04 0.01 0.03
0.02 0.01 0.01 0. 0.03 0.02 0.05 0.03 0.04 0.13 0.06 0.07 0.14 0.11
0.1 0.15 0.14 0.15 0.07 0.13 0.08 0.07 0.07 0.1 0.15 0.1 0.11 0.08
0.09 0.06 0.15 0.12 0.1 0.12 0.14 0.16 0.16 0.11 0.07 0.06 0.15 0.1
0.15 0.14 0.14 0.09 0.13 0.13 0.15 0.09 0.11 0.11 0.13 0.15 0.14 0.12
0.12 0.06 0.08 0.13 0.07 0.16 0.09 0.1 0.21 0.17 0.27 0.24 0.33 0.24
0.28 0.28 0.19 0.17 0.29 0.27 0.22 0.35 0.19 0.28 0.3 0.33 0.29 0.31
0.17 0.27 0.34 0.26 0.22 0.3 0.22 0.22 0.32 0.34 0.21 0.21 0.3 0.19
0.27 0.22 0.19 0.23 0.26 0.33 0.23 0.31 0.18 0.34 0.35 0.55 0.76 0.37
0.92 0.86 0.72 0.78 0.54 0.7 0.4 0.45 0.37 1. 0.48 0.92 0.45 0.57
0.55 0.56 0.75 0.5 0.41 0.71 0.82 0.73 1.04 1.17 1.17 1.09 1.06 1.04
1.14 1.18 1.09 1.03 1.08 1.16 1.09 1.12 1.22 1.32 1.38 1.39 1.37 1.37]
</code></pre>
<p>I've created a histogram for both of these arrays, binned in increments of 0.05 for a total of 30 bins. The code snippet to achieve this from the data supplied above is as follows:</p>
<pre><code>rngst = 0.00
rngend = 1.50
gushist = np.histogram(guess, bins = [round(x, 2) for x in np.arange(rngst,(rngend + 0.05),0.05)])
acthist = np.histogram(actual, bins = [round(x, 2) for x in np.arange(rngst,(rngend + 0.05),0.05)])
</code></pre>
<p>I've also graphed both of these histograms, which looks as follows:</p>
<p><a href="https://i.sstatic.net/sCcyr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sCcyr.png" alt="Dual Histogram Bar Chart" /></a></p>
<p>I am unsure where to proceed from here in order to create the best-fit gamma CDFs for both arrays, though I've initially found a stats.gamma function in scipy. Any help on how to proceed would be appreciated.</p>
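A hedged sketch of the `scipy.stats.gamma` route: `fit` estimates the shape and scale by maximum likelihood (pinning `floc=0` since rainfall is non-negative; note that exact zeros in the real arrays may need to be nudged or handled separately, as the gamma density is undefined at 0 for some shapes), and `cdf` then converts any new value into a percentile. Synthetic data stands in for the two 210-entry arrays here.

```python
import numpy as np
from scipy import stats

# fabricated stand-in for the observed/forecast rainfall arrays
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=0.15, size=210)

# floc=0 pins the location so shape/scale describe the data directly
a, loc, scale = stats.gamma.fit(data, floc=0)

# percentile that a new value would fall into under the fitted CDF
value = 0.5
pct = stats.gamma.cdf(value, a, loc=loc, scale=scale) * 100
```

Fitting `guess` and `actual` separately this way gives the two curves in the reference figure; plotting `stats.gamma.cdf(x_grid, ...) * 100` against `x_grid` reproduces the 1st-99th percentile axis.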
|
<python><numpy><scipy><cdf><gamma-distribution>
|
2023-02-28 23:40:13
| 1
| 431
|
TornadoEric
|
75,598,306
| 12,734,492
|
Pyspark: How to write table to AWS S3 file
|
<p>I am trying to write a simple file to S3:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark import SparkConf
import os
import sys
from dotenv import load_dotenv
from pyspark.sql.functions import *
# Load environment variables from the .env file
load_dotenv()
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
# My spark configuration
conf = SparkConf()
conf.set('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:3.3.2')
conf.set('spark.hadoop.fs.s3a.access.key', AWS_ACCESS_KEY_ID)
conf.set('spark.hadoop.fs.s3a.secret.key', AWS_SECRET_ACCESS_KEY)
spark = SparkSession.builder.config(conf=conf).getOrCreate()
# Create a PySpark DataFrame
df = spark.createDataFrame([(1, "John Doe", 30), (2, "Jane Doe", 35), (3, "Jim Brown", 40)], ["id", "name", "age"])
# Write the DataFrame to a CSV file on S3
df.write.format("csv").option("header","true").mode("overwrite").save("s3a://bucket/test/store/price.csv")
# Stop the Spark context and Spark session
spark.stop()
</code></pre>
<p>but this does not save price.csv as a single file; instead it creates a new folder:</p>
<p><a href="https://i.sstatic.net/r699Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r699Q.png" alt="enter image description here" /></a></p>
<p>I get the same result if I save locally: it just creates a folder named price.csv.</p>
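For context: Spark always writes a <em>directory</em> of `part-*` files; the `price.csv` path names that directory, not a file. The usual workaround is `df.coalesce(1)` so the directory holds exactly one part file, then rename or copy it (with `boto3` on S3). A local simulation of that post-processing step is sketched below; the directory layout and file names are fabricated for illustration.

```python
import csv
import glob
import os
import shutil
import tempfile

# simulate what df.write...save("price.csv") produces: a directory of part files
outdir = tempfile.mkdtemp()
spark_dir = os.path.join(outdir, "price.csv")
os.makedirs(spark_dir)
with open(os.path.join(spark_dir, "part-00000-abc.csv"), "w", newline="") as f:
    csv.writer(f).writerows([["id", "name", "age"], [1, "John Doe", 30]])

# with df.coalesce(1) there is exactly one part file: move it up as a real file
part = glob.glob(os.path.join(spark_dir, "part-*.csv"))[0]
final = os.path.join(outdir, "price_final.csv")
shutil.move(part, final)
shutil.rmtree(spark_dir)  # drop the now-empty Spark output directory
```

On S3 the same rename would be a `boto3` copy of the `part-*` object to the final key followed by deleting the originals, since S3 has no in-place rename.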
|
<python><apache-spark><amazon-s3><pyspark>
|
2023-02-28 23:33:36
| 1
| 487
|
Galat
|
75,598,240
| 5,452,378
|
Dynamically index into Python dictionary based on function parameters
|
<p>I'm building a function that indexes into a Python dictionary nested inside a list. The user knows what the dictionary looks like in advance. This is what it looks like so far:</p>
<pre><code>def dict_idx(arr: list, subkey: str) -> list:
for i in arr:
i[subkey] = i[subkey].replace("_", " ")
return arr
</code></pre>
<p>I would like to make the <code>subkey</code> parameter a <em>list</em> of strings and to index into the dictionary accordingly. So, for instance, if I pass this function a <code>subkey</code> value of ['user','location'], it would index into the array as follows:</p>
<pre><code> for i in arr:
i['user']['location'] = i['user']['location'].replace("_", " ")
</code></pre>
<p>Is this possible (short of converting the list into a string of dictionary indexes and running eval() on that string)?</p>
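Yes; a hedged sketch using `functools.reduce` to walk the key path down to the parent dict, so `subkey` can be a list of any length and no `eval` is needed:

```python
from functools import reduce

def dict_idx(arr: list, subkeys: list) -> list:
    """Replace '_' with ' ' at the nested location named by subkeys."""
    for item in arr:
        # walk all keys except the last to reach the parent dict
        parent = reduce(lambda d, k: d[k], subkeys[:-1], item)
        last = subkeys[-1]
        parent[last] = parent[last].replace("_", " ")
    return arr

data = [{"user": {"location": "new_york"}}]
dict_idx(data, ["user", "location"])
```

With an empty prefix (`subkeys[:-1] == []`), `reduce` returns the item itself, so the single-key case from the original function still works unchanged.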
|
<python><dictionary><nested>
|
2023-02-28 23:20:47
| 2
| 409
|
snark17
|
75,598,016
| 12,860,924
|
How to Calculate ROC, Sensitivity and Specificity using DenseNet121 model
|
<p>I am working on image classification of breast cancer using DenseNet121. I used <code>confusion_matrix</code>, <code>classification_report</code> and <code>accuracy_score</code>, but they didn't calculate the metrics I need: I want to calculate ROC and sensitivity. I tried many ways but it didn't work.</p>
<p><strong>This is the code:</strong></p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

in_model = tf.keras.applications.DenseNet121(input_shape=(224,224,3),
include_top=False,
weights='imagenet',classes = 2)
in_model.trainable = False
inputs = tf.keras.Input(shape=(224,224,3))
x = in_model(inputs)
flat = Flatten()(x)
dense_1 = Dense(4096,activation = 'relu')(flat)
dense_2 = Dense(4096,activation = 'relu')(dense_1)
prediction = Dense(2,activation = 'softmax')(dense_2)
in_pred = Model(inputs = inputs,outputs = prediction)
in_pred.evaluate(test_data,test_labels)
test_ = in_pred.predict(test_text)
Y_pred= np.argmax(test_labels, axis=1)
vgg19 = np.argmax(test_, axis=1)
</code></pre>
<p>I used <code>confusion_matrix</code> and <code>classification_report</code> and <code>accuracy_score</code> using the below code but I don't know how to calculate ROC and Sensitivity.</p>
<p>Any help would be appreciated.</p>
<pre><code>print(confusion_matrix(Y_pred,vgg19))
print(classification_report(Y_pred,vgg19))
print(accuracy_score(Y_pred,vgg19))
</code></pre>
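A hedged sketch of the metrics themselves: sensitivity and specificity fall out of the confusion-matrix counts, while ROC/AUC needs the class-1 <em>probabilities</em> (the softmax column), not the argmax labels. The labels and scores below are fabricated stand-ins for `Y_pred` and the model's `predict` output.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# fabricated binary ground truth and P(class 1) from the softmax output
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])
y_pred = (y_score >= 0.5).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

sensitivity = tp / (tp + fn)   # recall of the positive class
specificity = tn / (tn + fp)

# ROC uses the continuous scores, not the thresholded labels
auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
```

In the DenseNet setup above, `y_score` would be `test_[:, 1]` (the softmax probability of class 1) rather than `np.argmax(test_, axis=1)`.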
|
<python><tensorflow><deep-learning><performance-testing><roc>
|
2023-02-28 22:43:40
| 0
| 685
|
Eda
|
75,597,960
| 7,839,887
|
can you save an object array using zarr?
|
<p>Following zarr's <a href="https://zarr.readthedocs.io/en/stable/tutorial.html#object-arrays" rel="nofollow noreferrer">tutorial</a>, I'm trying to save a list of lists of ints to a persistent zarr:</p>
<ul>
<li><p><strong>Failed method 1:</strong></p>
<pre><code>import numcodecs, zarr
zarr.save("path/to/zarr", [[1], [2]], dtype=object, object_codec=numcodecs.JSON())
</code></pre>
</li>
<li><p><strong>Failed method 2:</strong></p>
<pre><code>import numcodecs, zarr
z = zarr.array([[1], [2]], dtype=object, object_codec=numcodecs.JSON())
zarr.save("path/to/zarr", z, dtype=object, object_codec=numcodecs.JSON())
</code></pre>
</li>
</ul>
<p>Both methods output <code>ValueError: missing object_codec for object array</code></p>
|
<python><zarr>
|
2023-02-28 22:34:53
| 1
| 786
|
David Taub
|
75,597,931
| 7,631,183
|
seq2seq inference outputs wrong results despite high accuracy
|
<p>I am training a seq2seq model following the Keras tutorial <a href="https://keras.io/examples/nlp/lstm_seq2seq/" rel="nofollow noreferrer">https://keras.io/examples/nlp/lstm_seq2seq/</a>, with the same code but a different dataset.
Here is the main model code for reference:</p>
<p>Code snippet for data preparation:</p>
<pre><code>for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.0
encoder_input_data[i, t + 1 :, input_token_index[" "]] = 1.0
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.0
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.0
decoder_input_data[i, t + 1 :, target_token_index[" "]] = 1.0
decoder_target_data[i, t:, target_token_index[" "]] = 1.0
</code></pre>
<p>For training:</p>
<pre><code># Define an input sequence and process it.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
encoder = keras.layers.LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = keras.layers.Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
</code></pre>
<p>Here is the accuracy I got:</p>
<pre><code>Epoch 1/5
1920/1920 [==============================] - 818s 426ms/step - loss: 0.2335 - accuracy: 0.9319 - val_loss: 0.2244 - val_accuracy: 0.9350
Epoch 2/5
1920/1920 [==============================] - 947s 493ms/step - loss: 0.2032 - accuracy: 0.9410 - val_loss: 0.1976 - val_accuracy: 0.9430
Epoch 3/5
1920/1920 [==============================] - 879s 458ms/step - loss: 0.1799 - accuracy: 0.9482 - val_loss: 0.1807 - val_accuracy: 0.9483
Epoch 4/5
1920/1920 [==============================] - 832s 433ms/step - loss: 0.1599 - accuracy: 0.9545 - val_loss: 0.1570 - val_accuracy: 0.9562
Epoch 5/5
1920/1920 [==============================] - 774s 403ms/step - loss: 0.1442 - accuracy: 0.9594 - val_loss: 0.1580 - val_accuracy: 0.9548
</code></pre>
<p>Here is the inference model:</p>
<pre><code>encoder_inputs = model.input[0] # input_1
encoder_outputs, state_h_enc, state_c_enc = model.layers[2].output # lstm_1
encoder_states = [state_h_enc, state_c_enc]
encoder_model = keras.Model(encoder_inputs, encoder_states)
decoder_inputs = model.input[1] # input_2
decoder_state_input_h = keras.Input(shape=(latent_dim,))
decoder_state_input_c = keras.Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_lstm = model.layers[3]
decoder_outputs, state_h_dec, state_c_dec = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs
)
decoder_states = [state_h_dec, state_c_dec]
decoder_dense = model.layers[4]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = keras.Model(
[decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states
)
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index["\t"]] = 1.0
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ""
while not stop_condition:
output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :]) #greedy approach
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if sampled_char == "\n" or len(decoded_sentence) > max_decoder_seq_length:
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.0
# Update states
states_value = [h, c]
return decoded_sentence
for seq_index in range(5):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = X_test[seq_index : seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print("-")
print("Input sentence:", input_texts[seq_index])
print("Decoded sentence:", decoded_sentence)
</code></pre>
<p>However, the output I am getting is almost random. What could be the reason behind that? The accuracy increases during training, and the loss decreases as well.</p>
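One common cause worth checking, sketched below: the targets are padded with spaces (see the data-preparation snippet), so plain token accuracy is dominated by correctly predicted padding and can look high while the real characters are wrong. A toy, fabricated example of masking the padding out of the accuracy:

```python
import numpy as np

# Toy token-index sequences; index 0 plays the role of the padding " " token
pad_idx = 0
y_true = np.array([[2, 3, 4, 0, 0, 0, 0, 0]])   # 3 real tokens, 5 padding
y_pred = np.array([[2, 1, 4, 0, 0, 0, 0, 0]])   # model got 2 of the 3 real tokens

plain_acc = np.mean(y_true == y_pred)            # 7/8: padding inflates it
mask = y_true != pad_idx
masked_acc = np.mean(y_true[mask] == y_pred[mask])  # 2/3: the honest number
```

If the masked number is low despite high reported accuracy, the fix is typically a masked loss/metric (or `sample_weight`) during training rather than a change to the inference loop.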
|
<python><tensorflow><keras><nlp><seq2seq>
|
2023-02-28 22:28:30
| 1
| 1,207
|
Wanderer
|
75,597,909
| 1,857,373
|
TypeError: unsupported operand type for 'str', cast variables with nan.inf, nan.NaN, float64 evidence, but 'str' type error
|
<p><strong>Problem Defined, Data Casting</strong></p>
<p>I am trying a simple cast/conversion to change a variable containing <code>np.inf</code> and <code>np.NaN</code> into a safe numeric via <code>.astype("float64")</code>, to handle integers and real-number fractions.</p>
<p>TypeError: unsupported operand type(s) for -: 'str' and 'str'</p>
<p>I perform the read, then cast with <code>.astype("float64")</code>, then perform min-max scaling. The error occurs in the computation 'train_data = (train_data - train_data.min()) / (train_data.max() - train_data.min())'. Since 'Age' is a float (see df.info()), why does the unsupported-operand 'str' error appear when 'Age' is 'float64'?</p>
<p><strong>Error Received</strong></p>
<p>TypeError: unsupported operand type(s) for -: 'str' and 'str'</p>
<p><strong>CODE</strong></p>
<pre><code>import numpy as np
import pandas as pd

train_data = pd.read_csv("../data/titanic/train.csv")
test_data = pd.read_csv("../data/titanic/test.csv")
train_data['Age'].astype(np.str).astype("float64")
test_data['Age'].astype(np.str).astype("float64")
train_data["Age"] = train_data["Age"]
test_data["Age"] = test_data["Age"]
train_age = train_data["Age"]
test_age = test_data["Age"]
train_data = (train_data - train_data.min()) / (train_data.max() - train_data.min())
test_data = (test_data - test_data.min()) / (test_data.max() - test_data.min())
train_data["Age"] = train_age
test_data["Age"] = test_age
train_data.info()
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
...
4 Age 714 non-null float64
</code></pre>
<p><strong>DATA</strong></p>
<pre><code>0 22.0
1 38.0
2 26.0
3 35.0
4 35.0
...
886 27.0
887 19.0
888 NaN
889 26.0
890 32.0
Name: Age, Length: 891, dtype: float64
</code></pre>
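For reference, the `'str' - 'str'` comes from applying `min()`/`max()` and the subtraction to the whole frame, which in the Titanic data includes text columns such as `Name` and `Sex`; `Age` itself is fine. A hedged sketch (toy frame, fabricated values) of restricting the min-max scaling to numeric columns only:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Titanic frame: 'Name' is a string column, so
# (df - df.min()) would attempt 'str' - 'str' and raise the TypeError above
df = pd.DataFrame({"Age": [22.0, 38.0, np.nan, 35.0],
                   "Fare": [7.25, 71.3, 8.05, 53.1],
                   "Name": ["Braund", "Cumings", "Heikkinen", "Futrelle"]})

num = df.select_dtypes(include="number").columns       # only numeric columns
df[num] = (df[num] - df[num].min()) / (df[num].max() - df[num].min())
```

NaN entries survive the scaling untouched, since `min`/`max` skip them by default.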
<p><strong>FULL ERROR LOG</strong></p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/ops/array_ops.py:163, in _na_arithmetic_op(left, right, op, is_cmp)
162 try:
--> 163 result = func(left, right)
164 except TypeError:
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/computation/expressions.py:239, in evaluate(op, a, b, use_numexpr)
237 if use_numexpr:
238 # error: "None" not callable
--> 239 return _evaluate(op, op_str, a, b) # type: ignore[misc]
240 return _evaluate_standard(op, op_str, a, b)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/computation/expressions.py:128, in _evaluate_numexpr(op, op_str, a, b)
127 if result is None:
--> 128 result = _evaluate_standard(op, op_str, a, b)
130 return result
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/computation/expressions.py:69, in _evaluate_standard(op, op_str, a, b)
68 _store_test_result(False)
---> 69 return op(a, b)
TypeError: unsupported operand type(s) for -: 'str' and 'str'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
Cell In[119], line 6
4 train_age = train_data["Age"]
5 test_age = test_data["Age"]
----> 6 train_data = (train_data - train_data.min()) / (train_data.max() - train_data.min())
7 test_data = (test_data - test_data.min()) / (test_data.max() - test_data.min())
8 train_data["Age"] = train_age
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/ops/common.py:70, in _unpack_zerodim_and_defer.<locals>.new_method(self, other)
66 return NotImplemented
68 other = item_from_zerodim(other)
---> 70 return method(self, other)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/arraylike.py:108, in OpsMixin.__sub__(self, other)
106 @unpack_zerodim_and_defer("__sub__")
107 def __sub__(self, other):
--> 108 return self._arith_method(other, operator.sub)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py:6955, in DataFrame._arith_method(self, other, op)
6951 other = ops.maybe_prepare_scalar_for_op(other, (self.shape[axis],))
6953 self, other = ops.align_method_FRAME(self, other, axis, flex=True, level=None)
-> 6955 new_data = self._dispatch_frame_op(other, op, axis=axis)
6956 return self._construct_result(new_data)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py:6994, in DataFrame._dispatch_frame_op(self, right, func, axis)
6988 # TODO: The previous assertion `assert right._indexed_same(self)`
6989 # fails in cases with empty columns reached via
6990 # _frame_arith_method_with_reindex
6991
6992 # TODO operate_blockwise expects a manager of the same type
6993 with np.errstate(all="ignore"):
-> 6994 bm = self._mgr.operate_blockwise(
6995 # error: Argument 1 to "operate_blockwise" of "ArrayManager" has
6996 # incompatible type "Union[ArrayManager, BlockManager]"; expected
6997 # "ArrayManager"
6998 # error: Argument 1 to "operate_blockwise" of "BlockManager" has
6999 # incompatible type "Union[ArrayManager, BlockManager]"; expected
7000 # "BlockManager"
7001 right._mgr, # type: ignore[arg-type]
7002 array_op,
7003 )
7004 return self._constructor(bm)
7006 elif isinstance(right, Series) and axis == 1:
7007 # axis=1 means we want to operate row-by-row
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/internals/managers.py:1419, in BlockManager.operate_blockwise(self, other, array_op)
1415 def operate_blockwise(self, other: BlockManager, array_op) -> BlockManager:
1416 """
1417 Apply array_op blockwise with another (aligned) BlockManager.
1418 """
-> 1419 return operate_blockwise(self, other, array_op)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/internals/ops.py:63, in operate_blockwise(left, right, array_op)
61 res_blks: list[Block] = []
62 for lvals, rvals, locs, left_ea, right_ea, rblk in _iter_block_pairs(left, right):
---> 63 res_values = array_op(lvals, rvals)
64 if left_ea and not right_ea and hasattr(res_values, "reshape"):
65 res_values = res_values.reshape(1, -1)
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/ops/array_ops.py:222, in arithmetic_op(left, right, op)
217 else:
218 # TODO we should handle EAs consistently and move this check before the if/else
219 # (https://github.com/pandas-dev/pandas/issues/41165)
220 _bool_arith_check(op, left, right)
--> 222 res_values = _na_arithmetic_op(left, right, op)
224 return res_values
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/ops/array_ops.py:170, in _na_arithmetic_op(left, right, op, is_cmp)
164 except TypeError:
165 if not is_cmp and (is_object_dtype(left.dtype) or is_object_dtype(right)):
166 # For object dtype, fallback to a masked operation (only operating
167 # on the non-missing values)
168 # Don't do this for comparisons, as that will handle complex numbers
169 # incorrectly, see GH#32047
--> 170 result = _masked_arith_op(left, right, op)
171 else:
172 raise
File ~/opt/anaconda3/lib/python3.9/site-packages/pandas/core/ops/array_ops.py:108, in _masked_arith_op(x, y, op)
106 # See GH#5284, GH#5035, GH#19448 for historical reference
107 if mask.any():
--> 108 result[mask] = op(xrav[mask], yrav[mask])
110 else:
111 if not is_scalar(y):
TypeError: unsupported operand type(s) for -: 'str' and 'str'
</code></pre>
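<p>The traceback points at the string columns rather than at <code>'Age'</code>: frame-wide min/max scaling applies <code>-</code> to every column, including object-dtype ones such as Name, Sex, and Ticket. A sketch of scaling only the numeric columns (toy data standing in for the actual Titanic file):</p>

```python
import pandas as pd

# Minimal reproduction: the frame holds an object (string) column, so
# frame-wide min/max subtraction would hit str - str and raise.
df = pd.DataFrame({
    "Age": [22.0, 38.0, None, 26.0],
    "Fare": [7.25, 71.28, 8.05, 7.92],
    "Name": ["a", "b", "c", "d"],  # object dtype column
})

# Scale only the numeric columns; string columns are left untouched.
num = df.select_dtypes(include="number")
df[num.columns] = (num - num.min()) / (num.max() - num.min())

print(df["Age"].tolist())  # NaN passes through min/max scaling unchanged
```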
|
<python><python-3.x><pandas><dataframe><casting>
|
2023-02-28 22:25:05
| 1
| 449
|
Data Science Analytics Manager
|
75,597,851
| 5,198,162
|
Embedding a local HTML page with Streamlit components
|
<p>I am building a simple Streamlit app with several pages. On one of the pages I want to display an embedded HTML file, and I am using the iframe component.</p>
<pre><code>import streamlit as st
import streamlit.components.v1 as components
components.iframe("mypage.html")
</code></pre>
<p>I get the following error message when I run my streamlit app.</p>
<p><em>You have requested page /mypage.html, but no corresponding file was found in the app's pages/ directory. Running the app's main page.</em></p>
<p>I have tried putting the mypage.html file both in the pages directory and in the main directory of the app, but I still get the error when I run the Streamlit app. All examples I could find online are about embedding a web page that already exists somewhere on the internet, but I want to display an HTML file I created. I am using Python 3.9.15 and streamlit 1.17.0 (pyhd8ed1ab_0, conda-forge).</p>
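<p><code>components.iframe</code> expects a URL, so Streamlit interprets <code>"mypage.html"</code> as a page route. A common workaround (a sketch, not verified against this particular app) is to read the local file and hand its markup to <code>components.html</code> instead:</p>

```python
from pathlib import Path

def load_local_html(path: str) -> str:
    # Read the raw markup of a local HTML file so it can be rendered
    # with streamlit.components.v1.html rather than iframe.
    return Path(path).read_text(encoding="utf-8")

# In the Streamlit app (streamlit import omitted here):
# import streamlit.components.v1 as components
# components.html(load_local_html("mypage.html"), height=600)
```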
|
<python><streamlit>
|
2023-02-28 22:18:32
| 1
| 369
|
Atanas Atanasov
|
75,597,837
| 1,317,018
|
No python detected by vscode jupyter notebook
|
<p>My vscode shows version <code>3.9.13 64bit</code> of python:</p>
<p><a href="https://i.sstatic.net/GKEJ6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GKEJ6.png" alt="enter image description here" /></a></p>
<p>However there are many versions of python installed on my machine (dont know how!)</p>
<p><a href="https://i.sstatic.net/rB41S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rB41S.png" alt="enter image description here" /></a></p>
<p>When I run python shell in terminal, it picks up version <code>3.7.9</code></p>
<p><a href="https://i.sstatic.net/xMMR8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xMMR8.png" alt="enter image description here" /></a></p>
<p>Also, when I open a Jupyter notebook and run any cell, it says no Python is installed:</p>
<p><a href="https://i.sstatic.net/K65s4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K65s4.png" alt="enter image description here" /></a></p>
<p>Also it does not seem to detect any kernels installed:</p>
<p><a href="https://i.sstatic.net/ziwIs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ziwIs.png" alt="enter image description here" /></a></p>
<p>What is happening here?!! Is my Ubuntu installation screwed up?</p>
<p>I want the same Python (preferably 3.9, but 3.7 is also OK) in all places: the VS Code bottom bar, the terminal, and Jupyter notebooks. <code>pip</code> should also correspond to that same Python, so that installing a package with <code>pip install</code> from the terminal makes it available to both Python files and Jupyter notebooks. This is how it works on my Windows machine.</p>
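<p>A quick way to see which interpreter each entry point actually resolves to (the commands are generic, not specific to this machine; the ipykernel lines are the usual way to register an interpreter with Jupyter and are shown as an assumption):</p>

```shell
# Show which interpreter the terminal resolves.
python3 -c "import sys; print(sys.executable)"
# The pip tied to that same interpreter (avoids installing into the wrong Python):
python3 -m pip --version
# To make a specific interpreter selectable as a Jupyter kernel in VS Code:
# python3.9 -m pip install ipykernel
# python3.9 -m ipykernel install --user --name py39
```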
|
<python><visual-studio-code><jupyter-notebook><pip><jupyter>
|
2023-02-28 22:16:53
| 2
| 25,281
|
Mahesha999
|
75,597,782
| 11,512,576
|
How to Interpolate One Segment of One Column in a Pandas Dataframe
|
<p>I have a dataframe in python as below.</p>
<p><a href="https://i.sstatic.net/Hfev0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hfev0.png" alt="enter image description here" /></a></p>
<p>Say I change the value of second and forth rows as below.</p>
<p><a href="https://i.sstatic.net/CQzsy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CQzsy.png" alt="enter image description here" /></a></p>
<p>Now I need to do interpolate from the second row to the forth row as below.</p>
<p><a href="https://i.sstatic.net/ePqUU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ePqUU.png" alt="enter image description here" /></a></p>
<p>I'm new to this topic. Can anyone give me some ideas or the right function I can call for this problem. Thanks</p>
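<p>Since the screenshots are not reproducible here, a small sketch with made-up numbers: blank out the rows to be replaced (set them to NaN) and let <code>Series.interpolate</code> fill the segment linearly from its neighbours:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"val": [10.0, 99.0, 99.0, 99.0, 50.0]})  # rows 1-3 need replacing
df.loc[1:3, "val"] = np.nan          # mark the segment as missing (loc slice is inclusive)
df["val"] = df["val"].interpolate()  # linear interpolation by default

print(df["val"].tolist())  # [10.0, 20.0, 30.0, 40.0, 50.0]
```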
|
<python><pandas><interpolation>
|
2023-02-28 22:08:23
| 1
| 491
|
Harry
|
75,597,668
| 6,296,919
|
appending new rows to a Pandas groupby result object
|
<p>I am new to Python and I am trying to append records to a pandas groupby result.</p>
<p>In the dataframe below, IDs 1 & 2 have SECTION_GROUP "GROUP 1" and IDs 3 & 4 have "GROUP 2", but ID 5 doesn't have any SECTION_GROUP.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">ENTITY_NAME</th>
<th style="text-align: right;">ENTITY_NAME</th>
<th style="text-align: right;">SECTION_GROUP</th>
<th style="text-align: right;">DOC_ID</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">sCompany</td>
<td style="text-align: right;">bp</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">40</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to get the result below, as two separate groups.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">ENTITY_NAME</th>
<th style="text-align: right;">ENTITY_NAME</th>
<th style="text-align: right;">SECTION_GROUP</th>
<th style="text-align: right;">DOC_ID</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 1</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">sCompany</td>
<td style="text-align: right;">bp</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">40</td>
</tr>
</tbody>
</table>
</div><div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">ID</th>
<th style="text-align: center;">ENTITY_NAME</th>
<th style="text-align: right;">ENTITY_NAME</th>
<th style="text-align: right;">SECTION_GROUP</th>
<th style="text-align: right;">DOC_ID</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">dNumber</td>
<td style="text-align: right;">U220059090</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">tDate</td>
<td style="text-align: right;">6-Dec-22</td>
<td style="text-align: right;">GROUP 2</td>
<td style="text-align: right;">40</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">sCompany</td>
<td style="text-align: right;">bp</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">40</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried the code below, but I only get the result for GROUP 2. I need to access both groups' results outside the loop. Any help is really appreciated.</p>
<pre><code>import pandas as pd

df = pd.read_csv('sample.csv', encoding='unicode_escape',
                 usecols=['ID', 'ENTITY_NAME', 'ENTITY_VALUE', 'SECTION_GROUP', 'DOC_ID'])

distDocIds = df["DOC_ID"].unique()
for docId in distDocIds:
    result = df[df.DOC_ID == docId]  # all data for this specific Id
    grpResult = df[df.DOC_ID == docId].groupby('SECTION_GROUP')  # groupby SECTION_GROUP data
    for group in grpResult:
        # check if any record is present without SECTION_GROUP;
        # if present, append the group with that record
        foundUnion = result[pd.isnull(result.SECTION_GROUP)]
        if len(foundUnion) > 0:
            foundUnion = foundUnion.append(group[1])
        # If I print foundUnion here I get the proper result as expected,
        # but I want to access foundUnion outside of the loop.
        newdf = foundUnion.copy()

print(newdf)
</code></pre>
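<p>One way to keep every group's result available after the loop is to store each combined frame in a dict keyed by group name. This is a sketch on the sample rows above (it uses <code>pd.concat</code> rather than <code>DataFrame.append</code>, which is deprecated in recent pandas), not the asker's exact code:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3, 4, 5],
    "ENTITY_NAME": ["dNumber", "tDate", "dNumber", "tDate", "sCompany"],
    "ENTITY_VALUE": ["U220059090", "6-Dec-22", "U220059090", "6-Dec-22", "bp"],
    "SECTION_GROUP": ["GROUP 1", "GROUP 1", "GROUP 2", "GROUP 2", None],
    "DOC_ID": [40, 40, 40, 40, 40],
})

ungrouped = df[df["SECTION_GROUP"].isna()]  # rows with no group (ID 5)
results = {}                                # survives past the loop
for name, group in df.dropna(subset=["SECTION_GROUP"]).groupby("SECTION_GROUP"):
    results[name] = pd.concat([group, ungrouped], ignore_index=True)

print(sorted(results))  # both groups stay accessible here
```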
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-02-28 21:52:32
| 3
| 847
|
tt0206
|
75,597,639
| 11,725,460
|
What is the simplest way for generating all possible connected and non-connected undirected graphs containing N edges using NetworkX?
|
<p>What is the simplest way for generating all possible connected and non-connected undirected graphs containing N edges using NetworkX?</p>
<p>I need to generate all possible connected and non-connected undirected graphs containing 6 edges using NetworkX. So I was hoping to write a function that works for other numbers of edges as well.</p>
<p>I have tried to use the built in <a href="https://networkx.org/documentation/stable/reference/generators.html" rel="nofollow noreferrer">generator functions in networkX</a> functions to come up with a solution. None of the generator functions does what I need, but, maybe a combination of multiple generators could create the solution I was looking for.</p>
<p>My current code sample is using a single generator and visualizes the outputs in pyplot:</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import matplotlib.pyplot as plt

N = 6

# Generate graphs with 6 edges [Need help with this step]
all_graphs = list(nx.nonisomorphic_trees(N))

# Compute the Weisfeiler-Lehman hash for each graph
for i, graph in enumerate(all_graphs):
    # Compute the hash using the Weisfeiler-Lehman algorithm
    wl_hash = nx.weisfeiler_lehman_graph_hash(graph)
    plt.figure(i)
    nx.draw(graph)
    # Print the graph and its hash
    print("hash: ", wl_hash)
</code></pre>
<p>Any help is appreciated.</p>
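<p>A brute-force sketch of the missing step: enumerate every N-subset of the possible edges on at most 2N vertices and keep one representative per isomorphism class. This is fine for small N, but the search space explodes for N=6 (billions of edge subsets), where a dedicated generator such as nauty's <code>geng</code> is the practical tool:</p>

```python
import itertools
import networkx as nx

def all_graphs_with_n_edges(n):
    """One representative per isomorphism class of simple undirected graphs
    with exactly n edges and no isolated vertices (isolated vertices do not
    change the edge set, so they are omitted here)."""
    nodes = range(2 * n)  # n edges touch at most 2n distinct vertices
    possible_edges = itertools.combinations(nodes, 2)
    reps = []
    for edges in itertools.combinations(list(possible_edges), n):
        g = nx.Graph()
        g.add_edges_from(edges)
        if not any(nx.is_isomorphic(g, h) for h in reps):
            reps.append(g)
    return reps

print(len(all_graphs_with_n_edges(3)))  # triangle, path, star, P3+K2, 3*K2
```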
|
<python><graph><networkx>
|
2023-02-28 21:48:45
| 1
| 842
|
avgJoe
|
75,597,598
| 9,536,233
|
How to efficiently remove duplicates from list of lists (nested) containing dictionaries and integers?
|
<p>I have a list of lists, where each list contains a dictionary and integer. Sometimes duplicate lists occur, and I wish to remove these from the parent list directly. Currently, I am creating a new list and iterating over the old list to ensure only unique values are appended, but I feel this is bad practice. Can this be rewritten to a one-liner with list comprehension, or can the original list be filtered directly instead, for performance enhancement?</p>
<pre><code>TRIAL=[[{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 0],
[{'http': '66.70.178.214:9300', 'https': '66.70.178.214:9300'}, 0],
[{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 0],
[{'http': '157.245.27.9:3128', 'https': '157.245.27.9:3128'}, 0],
[{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0],
[{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0],
[{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 1]]
# We have some duplicates which we want to filter out
temporary_list = []
for i in TRIAL:
    if i[0] not in [item[0] for item in temporary_list]:
        temporary_list.append(i)

# temporary_list (desired outcome):
[[{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 0],
 [{'http': '66.70.178.214:9300', 'https': '66.70.178.214:9300'}, 0],
 [{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 0],
 [{'http': '157.245.27.9:3128', 'https': '157.245.27.9:3128'}, 0],
 [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0]]
</code></pre>
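<p>The <code>not in</code> check rescans the accumulator on every iteration, making the loop O(n²). Since plain dicts are not hashable, a <code>frozenset</code> of each dict's items can serve as an O(1) membership key in a seen-set (this assumes the dict values are themselves hashable, as the strings here are):</p>

```python
TRIAL = [
    [{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 0],
    [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0],
    [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0],
    [{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 1],
]

seen = set()
unique = []
for proxy, count in TRIAL:
    key = frozenset(proxy.items())  # hashable stand-in for the dict
    if key not in seen:             # O(1) set lookup instead of a list scan
        seen.add(key)
        unique.append([proxy, count])

print(len(unique))  # only the first occurrence of each proxy dict is kept
```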
|
<python><list><dictionary><for-loop><list-comprehension>
|
2023-02-28 21:43:46
| 3
| 799
|
Rivered
|
75,597,554
| 10,500,424
|
Python NetworkX: Confining force-directed layout within circular boundary
|
<p>Python NetworkX has a method <code>spring_layout</code> that simulates a force-directed representation of a NetworkX instance; however, this leads to an adjusted network with nodes that are not confined within a particular boundary shape (e.g., a circle). Below is an example of this (notice how the overall graph shape is arbitrary, albeit smaller clusters are visible):</p>
<p><a href="https://i.sstatic.net/acl6Ql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/acl6Ql.png" alt="enter image description here" /></a></p>
<p>Is it possible to set a boundary limit such that any node seeking to position itself beyond the set boundary is confined to the boundary edge? Below is an example illustrating this (notice how nodes along the perimeter seem to lie against an invisible circular border):</p>
<p><a href="https://i.sstatic.net/F7gaXm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F7gaXm.jpg" alt="enter image description here" /></a></p>
|
<python><python-3.x><graph><networkx><springlayout>
|
2023-02-28 21:37:51
| 1
| 1,856
|
irahorecka
|
75,597,545
| 2,908,017
|
How can I make a control invisible with code in a Python FMX GUI App?
|
<p>I made the following GUI in <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX for Python</a> that contains two buttons and a rectangle. I simply want to hide and show the rectangle with the button clicks:</p>
<p><a href="https://i.sstatic.net/tjgR4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tjgR4.png" alt="Python GUI with two buttons and a rectangle" /></a></p>
<p>What I've tried doing so far is:</p>
<pre><code>def ShowButton_OnClick(self, sender):
    self.myRectangle.Show()

def HideButton_OnClick(self, sender):
    self.myRectangle.Hide()
</code></pre>
<p>But this gives an error:
<a href="https://i.sstatic.net/ltpBF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ltpBF.png" alt="Hide Button Error" /></a></p>
<p>What is the correct way to hide and show components?</p>
<hr />
<p>For extra info, here's my full code:</p>
<pre><code>from delphifmx import *

class frmMain(Form):
    def __init__(self, owner):
        self.Caption = 'My Form with Hide/Show Buttons'
        self.Width = 600
        self.Height = 500

        self.ShowButton = Button(self)
        self.ShowButton.Parent = self
        self.ShowButton.Width = 200
        self.ShowButton.Height = 100
        self.ShowButton.Position.X = 50
        self.ShowButton.Position.Y = 50
        self.ShowButton.Text = 'Show'
        self.ShowButton.OnClick = self.ShowButton_OnClick

        self.HideButton = Button(self)
        self.HideButton.Parent = self
        self.HideButton.Width = 200
        self.HideButton.Height = 100
        self.HideButton.Position.X = self.ShowButton.Position.X + self.ShowButton.Width + 50
        self.HideButton.Position.Y = 50
        self.HideButton.Text = 'Hide'
        self.HideButton.OnClick = self.HideButton_OnClick

        self.myRectangle = Rectangle(self)
        self.myRectangle.Parent = self
        self.myRectangle.Width = self.ShowButton.Position.X + (self.ShowButton.Width * 2)
        self.myRectangle.Height = 100
        self.myRectangle.Position.X = 50
        self.myRectangle.Position.Y = self.ShowButton.Position.Y + self.ShowButton.Height + 50

    def ShowButton_OnClick(self, sender):
        self.myRectangle.Show()

    def HideButton_OnClick(self, sender):
        self.myRectangle.Hide()

def main():
    Application.Initialize()
    Application.Title = "My Application"
    Application.MainForm = frmMain(Application)
    Application.MainForm.Show()
    Application.Run()
    Application.MainForm.Destroy()

main()
</code></pre>
|
<python><user-interface><firemonkey><visibility>
|
2023-02-28 21:37:03
| 1
| 4,263
|
Shaun Roselt
|
75,597,538
| 354,979
|
Is there a way to determine if cells are out of order in a jupyter notebook? (e.g., using a variable in one cell that is only declared in a later one)
|
<p>I am hoping to find for example a nbextension that can determine whether jupyter cells would crash if run. I understand this can't be achieved in the general case (halting problem) but I suppose likely culprits, such as out-of-order code, could be found fairly easily. Does such a tool exist?</p>
|
<python><jupyter-notebook>
|
2023-02-28 21:36:23
| 0
| 7,942
|
rhombidodecahedron
|
75,597,519
| 7,648
|
Subtracting a constant value from one column when condition on another column holds
|
<p>I have a Pandas data frame that has the following columns: <em>foo</em> and <em>bar</em>. <em>foo</em> values are integers and <em>bar</em> values are strings. For each row, if the value of <em>bar</em> is some particular value, say, 'ABC', then I want to set the value of the <em>foo</em> column (for that row) to its current value minus one.</p>
<p>For example, I want to convert this data frame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>foo</th>
<th>bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>98</td>
<td>'ABC'</td>
</tr>
<tr>
<td>53</td>
<td>'DEF'</td>
</tr>
<tr>
<td>22</td>
<td>'ABC'</td>
</tr>
<tr>
<td>34</td>
<td>'FGH'</td>
</tr>
</tbody>
</table>
</div>
<p>converted to this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>foo</th>
<th>bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>97</td>
<td>'ABC'</td>
</tr>
<tr>
<td>53</td>
<td>'DEF'</td>
</tr>
<tr>
<td>21</td>
<td>'ABC'</td>
</tr>
<tr>
<td>34</td>
<td>'FGH'</td>
</tr>
</tbody>
</table>
</div>
<p>How is this done?</p>
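<p>A sketch with the sample values, using boolean-mask assignment via <code>DataFrame.loc</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"foo": [98, 53, 22, 34],
                   "bar": ["ABC", "DEF", "ABC", "FGH"]})

# Subtract 1 from foo only on the rows where bar equals 'ABC'
df.loc[df["bar"] == "ABC", "foo"] -= 1

print(df["foo"].tolist())  # [97, 53, 21, 34]
```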
|
<python><pandas>
|
2023-02-28 21:33:30
| 3
| 7,944
|
Paul Reiners
|
75,597,419
| 2,908,017
|
How can I create a dropdown combobox from a list in a Python FMX GUI App?
|
<p>I'm creating a GUI using <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a> that has a <code>ComboBox</code> on it and I also have a list (array) of strings that I want to put into the ComboBox. Here's my current code:</p>
<pre><code>from delphifmx import *

class frmMain(Form):
    def __init__(self, owner):
        self.Caption = 'My Form with ComboBox'
        self.Width = 600
        self.Height = 250

        self.myComboBox = ComboBox(self)
        self.myComboBox.Parent = self
        self.myComboBox.Align = "Top"
        self.myComboBox.Margins.Top = 20
        self.myComboBox.Margins.Right = 20
        self.myComboBox.Margins.Bottom = 20
        self.myComboBox.Margins.Left = 20

        Months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]

def main():
    Application.Initialize()
    Application.Title = "My Application"
    Application.MainForm = frmMain(Application)
    Application.MainForm.Show()
    Application.Run()
    Application.MainForm.Destroy()

main()
</code></pre>
<p>I've tried doing things like:</p>
<ol>
<li><code>self.myComboBox.Items.AddStrings(Months)</code>, but then I get a <code>TypeError: "AddStrings" called with invalid arguments. Error: Could not find a method with compatible arguments</code> error.</li>
<li><code>self.myComboBox.Items.Text(Months)</code>, but then I get a <code>TypeError: 'str' object is not callable</code> error.</li>
<li><code>self.myComboBox.Text = Months</code>, but then I get an <code>AttributeError: Error in setting property Text</code> error.</li>
</ol>
<p>How can I get the <code>Months</code> array into the ComboBox?</p>
|
<python><arrays><user-interface><combobox><firemonkey>
|
2023-02-28 21:20:32
| 1
| 4,263
|
Shaun Roselt
|
75,597,285
| 2,908,017
|
How do I create a date picker in a Python FMX GUI App?
|
<p>Is there any standard component for selecting a date in the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a>?</p>
<p>I'm looking for some kind of Date Picker component. Currently what I'm doing is making a couple of <code>SpinBox</code> components and then using them to enter a Date, this is my UI:</p>
<p><a href="https://i.sstatic.net/K77be.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K77be.png" alt="Year, Month, Day Python GUI" /></a></p>
<p>But what I actually want is just one component for selecting a date instead of three SpinBox components.</p>
|
<python><user-interface><datepicker><firemonkey>
|
2023-02-28 21:04:30
| 1
| 4,263
|
Shaun Roselt
|
75,597,228
| 5,212,614
|
Trying to merge specific columns, including dynamic last row, from several Excel files, into one dataframe
|
<p>I am trying to merge data from 14 Excel files into one dataframe and save it as a CSV file. I am looping through the Excel files, but nothing is being merged into a single dataframe. I think the problem is with the code that dynamically finds the last row in each Excel file. All the data I want to merge is in columns CB:DL, starting in row 6 and going down around 100k rows; each Excel file ends on a different row number.</p>
<p>Here is the code that I am testing.</p>
<pre><code># import modules
import pandas as pd
import glob
from openpyxl import Workbook
from openpyxl import load_workbook as xw
from openpyxl.utils import get_column_letter

# path of the folder
path = r'C:\\All Raw Data\\'

# reading all the excel files
filenames = glob.glob(path + "\\*.xlsx")

# iterate over the excel files inside the folder
for file in filenames:
    print(file)
    # print('File names:', filenames)

    # initializing empty data frame
    finalexcelsheet = pd.DataFrame()
    wb = Workbook(file)
    print(wb)
    for sheet in wb:
        ws = wb.sheet["Speech"]
        print(ws)
        for col in range(1, ws.max_column + 1):
            col_letter = get_column_letter(col)
            max_col_row = len([cell for cell in ws[col_letter] if cell.value])
            print("Column: {}, Row numbers: {}".format(col_letter, max_col_row))

    # combining multiple excel worksheets into single data frames
    df = pd.concat(pd.read_excel(file, sheet_name=None, header=6,
                                 usecols='CB'+max_col_row+':DL'+max_col_row),
                   ignore_index=True, sort=False)
    print(df.shape)

    # appending excel files one by one
    merged = finalexcelsheet.append(df, ignore_index=True)

# to print the combined data
print(merged.shape)
merged.to_csv('C:\\All Raw Data\\merged.csv')
</code></pre>
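<p>For what it's worth, <code>pd.read_excel</code> already reads down to the last populated row of each sheet, so computing the last row per file with openpyxl may be unnecessary. A hedged sketch (the folder path, sheet name "Speech", and five skipped header rows are assumptions taken from the question):</p>

```python
import glob
import pandas as pd

def merge_excel_files(folder, sheet_name="Speech", usecols="CB:DL", skiprows=5):
    # Each file's populated rows are read automatically -- no last-row lookup.
    frames = [
        pd.read_excel(path, sheet_name=sheet_name, usecols=usecols, skiprows=skiprows)
        for path in sorted(glob.glob(folder + "/*.xlsx"))
    ]
    return pd.concat(frames, ignore_index=True)

# merged = merge_excel_files(r"C:\All Raw Data")
# merged.to_csv(r"C:\All Raw Data\merged.csv", index=False)
```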
|
<python><python-3.x><excel><dataframe><openpyxl>
|
2023-02-28 20:58:28
| 2
| 20,492
|
ASH
|
75,597,182
| 5,308,851
|
Variable in generator function shadows name from outer scope
|
<p>I recently started to teach myself Python and currently work on generator functions. Here, I encountered a "scoping" issue with variable names inside of the generator shadowing names in outer scope. I did some research on this but could not come up with an explanation.</p>
<p>Given this minimal example:</p>
<pre><code>def do_stuff(var):
    shadow = var * var

def dummy_generator(size):
    for i in range(size):
        do_stuff(i)
        yield i

if __name__ == '__main__':
    for shadow in dummy_generator(5):
        print(shadow)
</code></pre>
<p>PyCharm emits the warning "Shadows name 'shadow' from outer scope" for 'shadow' in the do_stuff function (I'm using Python 3.10.9 on Linux).</p>
<p>I would like to understand why this is the case.</p>
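<p>The warning arises because the <code>for shadow in ...</code> loop under <code>if __name__ == '__main__':</code> runs at module level, so <code>shadow</code> becomes a module-global name; the local <code>shadow</code> inside <code>do_stuff</code> then hides it. A standalone demonstration of the same situation:</p>

```python
def square(x):
    shadow = x * x   # local name; it hides the module-level 'shadow' below
    return shadow

# A for-loop at module level creates its variable in the global namespace...
for shadow in range(3):
    pass

print(shadow)        # ...and that variable outlives the loop
print(square(4))
```

The warning is cosmetic here (the local assignment never touches the global), but renaming one of the two names makes it go away.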
|
<python><scope><generator>
|
2023-02-28 20:52:04
| 1
| 345
|
Markus Moll
|
75,597,098
| 12,983,543
|
Cannot read local files to docker compose
|
<p>I am trying to add SSL to my Django app in the backend.</p>
<p>On my VPS, I created the certificate files, that are in the path:</p>
<pre><code>'/etc/letsencrypt/live/domain.it/fullchain.pem'
'/etc/letsencrypt/live/domain.it/privkey.pem'
</code></pre>
<p>To make them readable inside the container, I added this to the django service in the docker compose file:</p>
<pre><code>volumes:
  - /etc/letsencrypt/live/domain.it/fullchain.pem:/etc/letsencrypt/live/domain.it/fullchain.pem
  - /etc/letsencrypt/live/domain.it/privkey.pem:/etc/letsencrypt/live/domain.it/privkey.pem
</code></pre>
<p>So, to use the files for HTTPS, I am setting up the gunicorn config so that it can use the key to receive and decrypt incoming traffic. Here is what I wrote in <code>gunicorn.conf.py</code>:</p>
<pre><code>from multiprocessing import cpu_count
from os import environ
# Server Socket
bind = '0.0.0.0:' + environ.get('PORT', '443')
max_requests = 1000
workers = cpu_count()
timeout = 30
# Logging
loglevel ='info'
accessformat = '%(t)s %(h)s %(l)s %(r)s %(s)s %(b)s %(f)s %(a)s'
accesslog = '-'
errorlog = '-'
# SSL
certfile = '/etc/letsencrypt/live/api.my-table.it/fullchain.pem'
keyfile = '/etc/letsencrypt/live/api.my-table.it/privkey.pem'
ssl_version = 2
</code></pre>
<p>But when I run it on the server with docker-compose, an error occurs saying:</p>
<pre><code> self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/devuser/.local/lib/python3.11/site-packages/gunicorn/sock.py", line 162, in create_sockets
raise ValueError('certfile "%s" does not exist' % conf.certfile)
ValueError: certfile "/etc/letsencrypt/live/api.my-table.it/fullchain.pem" does not exist
</code></pre>
<p>I was wondering if I am missing something, or how I could mount the files successfully, so that gunicorn can load the keys and serve HTTPS.</p>
<p>Here is the complete docker compose file</p>
<pre><code>version: "3.9"

services:
  db:
    container_name: my_table_postgres
    image: postgres
    ports:
      - 5432/tcp
    volumes:
      - my_table_postgres_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=my_table_postgres
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=Ieyh5&RIR48!&8fc

  redis:
    container_name: redis
    image: redis
    ports:
      - 6739:6739/tcp
    environment:
      - REDIS_HOST=redis-oauth-user-service
    volumes:
      - redis_data:/var/lib/redis/data/

  my_table:
    container_name: my_table
    build: .
    command: ["python", "-m", "gunicorn", "--bind", "0.0.0.0:5000", "-c", "gunicorn.conf.py", "mytable.wsgi"]
    volumes:
      - .:/api
    ports:
      - "5000:5000"
    depends_on:
      - db
      - redis

  celery:
    image: celery
    container_name: celery
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    command: ['python', '-m', 'celery', '-A', 'mytable', 'worker', '-l', 'INFO']
    volumes:
      - .:/api
      - /etc/letsencrypt/live/api.my-table.it/:/certs
    depends_on:
      - redis
      - my_table
    links:
      - redis

  nginx:
    build: ./nginx
    ports:
      - "8000:80"
    depends_on:
      - my_table

volumes:
  my_table_postgres_db:
  redis_data:
</code></pre>
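<p>For reference, note that in the compose file the Let's Encrypt directory is mounted into <code>celery</code> (at <code>/certs</code>) but not into <code>my_table</code>, where gunicorn actually runs, which would explain the "certfile does not exist" error. A hedged sketch of the missing mount (paths copied from the existing file; the read-only flag is an assumption):</p>

```yaml
my_table:
  container_name: my_table
  build: .
  command: ["python", "-m", "gunicorn", "--bind", "0.0.0.0:5000", "-c", "gunicorn.conf.py", "mytable.wsgi"]
  volumes:
    - .:/api
    # mount the certificates at the same path gunicorn.conf.py expects
    - /etc/letsencrypt/live/api.my-table.it/:/etc/letsencrypt/live/api.my-table.it/:ro
```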
|
<python><docker><ssl><gunicorn>
|
2023-02-28 20:42:26
| 0
| 614
|
Matteo Possamai
|
75,597,030
| 305,883
|
Correctly understanding amplitude of waveforms - in librosa or other libraries
|
<p>I lack a background in acoustics, but need to work on a data-science project in acoustics.</p>
<p>Please help me understand how to correctly interpret what the amplitude of a waveform represents, which units are involved, and how to choose a correct sampling rate for analysis.</p>
<p>Consider this example.</p>
<p>I have a waveform file of an animal recorded at 250000 sampling rate.</p>
<p>You can listen to it here:</p>
<p><a href="https://www.whyp.it/tracks/75747/bat-120614013233718915?token=Lmt6M" rel="nofollow noreferrer">https://www.whyp.it/tracks/75747/bat-120614013233718915?token=Lmt6M</a>
(original audio)</p>
<pre><code>data, rate = librosa.core.load('my_file.wav')
# data is numpy array
# rate is 250000
</code></pre>
<p>I have learned that amplitude units can be decibels or volts; in the case of WAV files, amplitude is represented by 16-bit integers: from -32768 to 32767, where 0 represents no sound (silence).</p>
<p>I load the file with librosa, and the amplitude should get normalised to [-1, 1].</p>
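<p>That normalisation can be sanity-checked with nothing but the standard library. The sketch below (file name hypothetical) writes a 16-bit WAV whose peak amplitude is 0.4, reads the raw integers back, and divides by 32768 the same way librosa's float conversion does:</p>

```python
import math
import struct
import wave

# Write a 0.4-amplitude 440 Hz sine as 16-bit PCM.
rate = 8000
samples = [0.4 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 2 bytes = 16-bit samples
    w.setframerate(rate)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# Read the raw int16 values back and normalise the way librosa does.
with wave.open("tone.wav", "rb") as w:
    raw = w.readframes(w.getnframes())
ints = struct.unpack("<%dh" % (len(raw) // 2), raw)
normalised = [i / 32768 for i in ints]  # values now lie in [-1, 1]
print(max(normalised))  # ~0.4, matching the y-axis ceiling in the plot
```

<p>Separately, note that <code>librosa.load</code> resamples to 22050 Hz by default; pass <code>sr=None</code> to keep the file's native 250 kHz rate.</p>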
<p>When I plot the data, I see y-axis in between of -0.4, and 0.4 as max values.</p>
<p><a href="https://i.sstatic.net/T9vsq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T9vsq.png" alt="enter image description here" /></a></p>
<p>If I extract a segment (see any interval between the red lines, above) whose amplitude is close to 0, and plot it, the y-axis now ranges between -0.008 and +0.006.</p>
<pre><code>fig, ax = plt.subplots(nrows=1,ncols=1, figsize=(1,4))
plt.plot(segment, color='black')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Oev6O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Oev6O.png" alt="enter image description here" /></a>
Audio(data = segment, rate = 192000)</p>
<p>But in both cases, both the file and the segments are perfectly audible. I was expecting not to be able to perceive anything for segments with amplitude near zero...</p>
<p>In both cases, in order to hear something, I resample to 192000, which appears to be the maximum value supported by my browser (I am using Jupyter in a local browser).</p>
<p>Now, a few questions, because I feel I lack basic concepts:</p>
<ul>
<li>what is the unit of the y-axis of the waveform in WAV format: decibels? voltage?</li>
<li>what is the relation between amplitude and volume: why can I hear sound when its amplitude is around 0?</li>
</ul>
|
<python><librosa><waveform><wave><acoustics>
|
2023-02-28 20:33:59
| 2
| 1,739
|
user305883
|
75,596,919
| 15,476,955
|
Check if an integer from a tuple is between the values of another tuple
|
<pre><code>def are_values_crossed(tuple_1, tuple_2):
if tuple_2[0] <= tuple_1[0] <= tuple_2[1]: return True
if tuple_2[0] <= tuple_1[1] <= tuple_2[1]: return True
if tuple_1[0] <= tuple_2[0] <= tuple_1[1]: return True
if tuple_1[0] <= tuple_2[1] <= tuple_1[1]: return True
return False
tuple_1 = (5, 17)
tuple_2 = (4, 13)
print(are_values_crossed(tuple_1, tuple_2))
</code></pre>
<p>Is there a more pythonic way to do it? (Without importing a library if possible, and ideally in one line.)</p>
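<p>For what it's worth, two closed intervals overlap exactly when the larger start is no greater than the smaller end, which collapses the four checks into one expression (a sketch, assuming each tuple is ordered as <code>(low, high)</code>):</p>

```python
def are_values_crossed(tuple_1, tuple_2):
    # Overlap test for ordered closed intervals: the later start
    # must not come after the earlier end.
    return max(tuple_1[0], tuple_2[0]) <= min(tuple_1[1], tuple_2[1])

print(are_values_crossed((5, 17), (4, 13)))  # True
```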
|
<python><logic>
|
2023-02-28 20:20:31
| 2
| 1,168
|
Utopion
|
75,596,871
| 10,976,654
|
pygount skip folders that start with _X recrusive
|
<p>I read through the documentation (<a href="https://pypi.org/project/pygount/1.2.0/" rel="nofollow noreferrer">https://pypi.org/project/pygount/1.2.0/</a>), but I am still confused. When running pygount in my root directory, how do I skip all folders that start with "_X" (recursively, so any nested folders that start with that are skipped too)?</p>
<p>This runs, but I don't know if it is doing what I think it is: <code>pygount --suffix=py --format=summary --folders-to-skip='**/_X*'</code></p>
|
<python>
|
2023-02-28 20:15:13
| 1
| 3,476
|
a11
|
75,596,823
| 19,694,624
|
Error "AttributeError: 'Service' object has no attribute 'process'" while running on Ubuntu 22.04 VPS
|
<p>I'm trying to run my Selenium script on an Ubuntu 22.04 VPS and get the error "AttributeError: 'Service' object has no attribute 'process'". However, if I run this script on my own Ubuntu machine, it works fine. What should I do?</p>
<p>Code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time
from bs4 import BeautifulSoup as bs4
import discord
from discord.ext import commands
def get_html_source(url: str):
chrome_driver_binary = '/home/romik/Projects/hasuki_bot/chromedriver'
options = Options()
options.add_argument("--start-maximized") #open Browser in maximized mode
options.add_argument("--no-sandbox") #bypass OS security model
options.add_argument("--disable-dev-shm-usage") #overcome limited resource problems
options.add_argument("--headless=new")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(executable_path=chrome_driver_binary, options = options)
driver.get(url)
try:
with open("source-page.html", "w", encoding="utf-8") as file:
time.sleep(10)
file.write(driver.page_source)
except Exception as ex:
print(ex)
finally:
driver.close()
driver.quit()
if __name__ == "__main__":
get_html_source("https://ubuntu.com/")
</code></pre>
|
<python><ubuntu><selenium-webdriver><selenium-chromedriver><vps>
|
2023-02-28 20:09:15
| 1
| 303
|
syrok
|
75,596,693
| 2,205,916
|
AWS: Run a Python script to create a file in S3. No output in S3, but works locally
|
<p>I want to run the following <code>.py</code> script as a job in AWS Glue Studio. This script was just a test to see if I could get something to work. Basically, I want this <code>.py</code> script to run and create a file called <code>myfile.txt</code> in my desired <code>s3</code> directory. I was able to get this same script to work locally, but it doesn't produce any output in <code>s3</code>.</p>
<p>How can I fix this?</p>
<pre><code>import sys
import os
# Specify the path
path = '.'
# path = 's3://test-dir-awsglue/'
# Specify the file name
file = 'myfile.txt'
# Creating a file at specified location
with open(os.path.join(path, file), 'w') as fp:
pass
# To write data to new file uncomment
# this fp.write("New file created")
</code></pre>
<p>The job details are:</p>
<p><a href="https://i.sstatic.net/8SI5p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8SI5p.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/t6Kg0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t6Kg0.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/GnJyZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GnJyZ.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/rbosk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rbosk.png" alt="My permissions:" /></a></p>
|
<python><amazon-web-services><amazon-s3><aws-glue>
|
2023-02-28 19:53:21
| 1
| 3,476
|
user2205916
|
75,596,534
| 1,219,317
|
In Pyvis I get UnicodeEncodeError: 'charmap' codec can't encode characters in position 263607-263621: character maps to <undefined>
|
<p>In pyvis, why do I get this error when trying to visualise a simple graph (only 3 lines of code)?</p>
<pre><code>net=Network(notebook=True, cdn_resources='in_line')
net.from_nx(nx.davis_southern_women_graph())
net.show('example.html')
</code></pre>
<p>This leads to the error:</p>
<pre><code>Traceback (most recent call last):
File "g:\My Drive\....\graph_v.py", line 270, in <module>
net.show('example.html')
File "C:\Users\ziton\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pyvis\network.py", line 546, in show
self.write_html(name, open_browser=False,notebook=True)
File "C:\Users\ziton\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\pyvis\network.py", line 530, in write_html
out.write(self.html)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 263607-263621: character maps to <undefined>
</code></pre>
<p>Some research yielded that I need to change the encoding to UTF-8, but in which method call, and how?</p>
|
<python><unicode><encoding><networkx><pyvis>
|
2023-02-28 19:36:13
| 1
| 2,281
|
Travelling Salesman
|
75,596,481
| 214,526
|
Python typing forward declaration
|
<p>I'm trying to write some code with generic types like this:</p>
<pre><code>from typing import Sequence, TypeVar, Hashable, Protocol, NoReturn
class _SortHashable(Protocol, Hashable):
def __lt__(self, other) -> bool:
...
def __eq__(self, other) -> bool:
...
SortHashableT = TypeVar("SortHashableT", bound=_SortHashable)
def foo(sequence: Sequence[SortHashableT]) -> NoReturn:
...
</code></pre>
<p>For type hinting of <code>other</code> object, I tried following:</p>
<pre><code>class _SortHashable(Protocol, Hashable):
def __lt__(self, other: _SortHashable) -> bool:
...
</code></pre>
<p>But I guess that is not supported, and my editor was flagging it as an error. Is there a way to forward-declare the type as can be done in languages like C++? I am using Python 3.10 for this project.</p>
|
<python><python-typing>
|
2023-02-28 19:28:11
| 1
| 911
|
soumeng78
|
75,596,475
| 3,357,935
|
How do I match a character before or after a capturing group in regex?
|
<p>I have a Python script with a regex pattern that searches for the word <code>employee_id</code> if there is an equals sign immediately before or after.</p>
<pre><code>import re
pattern = r"(=employee_id|employee_id=)"
print(re.search(pattern, "=employee_id").group(1)) # =employee_id
print(re.search(pattern, "employee_id=").group(1)) # employee_id=
print(re.search(pattern, "=employee_id=").group(1)) # =employee_id
print(re.search(pattern, "employee_id")) # None
print(re.search(pattern, "employee_identity=")) # None
</code></pre>
<p><strong>How can I modify my regex pattern to only capture the <code>employee_id</code> part of the string without the equals sign?</strong></p>
<pre><code># Desired results
print(re.search(pattern, "=employee_id").group(1)) # employee_id
print(re.search(pattern, "employee_id=").group(1)) # employee_id
print(re.search(pattern, "=employee_id=").group(1)) # employee_id
print(re.search(pattern, "employee_id")) # None
print(re.search(pattern, "employee_identity=")) # None
</code></pre>
<hr />
<p>I attempted to use capture groups, but putting parentheses around <code>employee_id</code> meant my results were split between two capture groups:</p>
<pre><code>pattern = r"=(employee_id)|(employee_id)="
print(re.search(pattern, "employee_id=").group(1)) # None
print(re.search(pattern, "employee_id=").group(2)) # employee_id
</code></pre>
<p>Using optional groups would match an <code>employee_id</code> without any equals sign.</p>
<pre><code>(?:=)?(employee_id)(?:=)?
</code></pre>
<p>I also do not want to <a href="https://stackoverflow.com/q/11140770/3357935">exclude matches where the character is both before and after the word</a>.</p>
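<p>One way to express "an equals sign on at least one side, but outside the capture" is a pair of zero-width assertions: a lookbehind for a leading <code>=</code> and a lookahead for a trailing one, with <code>\b</code> guarding against longer words like <code>employee_identity</code> (a sketch):</p>

```python
import re

# The assertions match zero characters, so group(1) is always bare
# "employee_id"; \b rejects matches inside longer identifiers.
pattern = r"((?<==)employee_id\b|\bemployee_id(?==))"

print(re.search(pattern, "=employee_id").group(1))   # employee_id
print(re.search(pattern, "employee_id=").group(1))   # employee_id
print(re.search(pattern, "=employee_id=").group(1))  # employee_id
print(re.search(pattern, "employee_id"))             # None
print(re.search(pattern, "employee_identity="))      # None
```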
|
<python><regex><python-re><capture-group>
|
2023-02-28 19:26:54
| 3
| 27,724
|
Stevoisiak
|
75,596,451
| 8,068,825
|
Combine module and list of torch.nn.Parameters in one optimizer
|
<p>I have the following code:</p>
<pre><code>optimizer = torch.optim.Adam([self.model.parameters()] + [self.latent_params_class.latent_params], lr=lr)
</code></pre>
<p>self.model is a BoTorch SingleTaskGP model (<a href="https://botorch.org/tutorials/fit_model_with_torch_optimizer" rel="nofollow noreferrer">https://botorch.org/tutorials/fit_model_with_torch_optimizer</a>) and self.latent_params_class.latent_params is just a list of torch.nn.Parameter. The above line throws the following error:</p>
<pre><code>TypeError: optimizer can only optimize Tensors, but one of the params is Module.parameters
</code></pre>
<p>How do I put <code>self.model.parameters()</code> and <code>self.latent_params_class.latent_params</code> into the optimizer?</p>
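<p>The usual fix is to materialise the generator and concatenate plain lists of parameters, since the optimizer wants a flat iterable of tensors (or a list of parameter groups). A sketch with a stand-in <code>nn.Linear</code> in place of the BoTorch model:</p>

```python
import torch

# Stand-ins for self.model and self.latent_params_class.latent_params.
model = torch.nn.Linear(4, 2)
latent_params = [torch.nn.Parameter(torch.zeros(3))]

# model.parameters() is a generator; wrap it in list() and concatenate.
optimizer = torch.optim.Adam(list(model.parameters()) + latent_params, lr=1e-3)

# Alternatively, parameter groups keep the two sets separate (useful
# for per-group learning rates):
optimizer = torch.optim.Adam(
    [{"params": model.parameters()}, {"params": latent_params}], lr=1e-3
)
```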
|
<python><pytorch>
|
2023-02-28 19:23:45
| 1
| 733
|
Gooby
|
75,596,428
| 18,445,352
|
ModuleNotFoundError while using GitHub codespace editor
|
<p>Recently I started using GitHub codespace for the first time. I created a new codespace from one of my repositories. Assuming the folder structure as below:</p>
<pre><code>my-codespace
|--- utils
|------ my_script.py
|--- config.py
</code></pre>
<p>I get the following error when I import <code>config.py</code> inside <code>my_script.py</code>:</p>
<pre><code>ModuleNotFoundError: No module named 'config'
</code></pre>
<p>Autocomplete works inside the editor, and it recognizes <code>config.py</code> when I try to import it</p>
<p><a href="https://i.sstatic.net/IoxZf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IoxZf.png" alt="enter image description here" /></a></p>
<p>After a while red line appears indicating an error</p>
<p><a href="https://i.sstatic.net/qmEc1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qmEc1.png" alt="enter image description here" /></a></p>
<p>All methods, classes, and variables belonging to <code>config.py</code> are available inside <code>my_script.py</code> by autocomplete.</p>
<p>Currently, I'm using PyCharm on my computer and have no issues with that repository, but I get the same error when I run it in local VS Code.</p>
<p>I would appreciate it if someone could help me in this regard.</p>
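<p>For background: the editor resolves imports from the workspace root, but running <code>python utils/my_script.py</code> only puts <code>utils/</code> on <code>sys.path</code>, so top-level modules like <code>config</code> are not found at runtime. A minimal sketch of a workaround (the helper name is mine):</p>

```python
import os
import sys


def add_project_root(script_path: str) -> str:
    """Prepend the parent of the script's directory to sys.path so that
    top-level modules (e.g. config.py) become importable."""
    root = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root


# Inside utils/my_script.py one would call:
#   add_project_root(__file__)
#   import config
```

<p>Running the script as a module from the root (<code>python -m utils.my_script</code>) or exporting <code>PYTHONPATH=.</code> are common alternatives.</p>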
|
<python><visual-studio-code><codespaces><github-codespaces>
|
2023-02-28 19:20:21
| 2
| 346
|
Babak
|
75,596,387
| 9,922,171
|
Render Icons using Font-Awesome-Kit into Streamlit Application
|
<p>I'm attempting to add icons using font-awesome-kit into a streamlit application and have tried several different approaches with no success.</p>
<p><strong>Attempt 1</strong></p>
<p>Tried importing a JS tag for the component</p>
<pre><code>st.write('<script src="https://kit.fontawesome.com/xyz.js" crossorigin="anonymous"></script>', unsafe_allow_html=True)
st.write('<i class="fa-duotone fa-calendar-days"></i>', unsafe_allow_html=True)
</code></pre>
<p><strong>Attempt 2</strong>
Tried importing a CSS tag for the component</p>
<pre><code>st.write('<link rel="stylesheet" href="https://kit.fontawesome.com/zax.css" crossorigin="anonymous">', unsafe_allow_html=True)
st.write('<i class="fa-duotone fa-calendar-days"></i>', unsafe_allow_html=True)
</code></pre>
<p><strong>Attempt 3</strong>
Tried using streamlit HTML component to render the icon component.</p>
<pre><code>st.components.v1.html('<i class="fa-duotone fa-calendar-days"></i>')
</code></pre>
<p>As a workaround, I'm currently using Cloudflare's CDN as described in the code below, which successfully renders the icon.</p>
<pre><code>import streamlit as st
st.write('<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.0/css/all.min.css"/>', unsafe_allow_html=True)
st.write('<i class="fa-solid fa-trash"/>', unsafe_allow_html=True)
</code></pre>
<p>However, when using the kit, I am not receiving any output where the icon is expected. Any ideas here?</p>
|
<python><font-awesome><streamlit>
|
2023-02-28 19:15:15
| 1
| 542
|
Doracahl
|
75,596,360
| 5,084,560
|
how to avoid high memory consumption of numpy where method
|
<p>I have a Python script which does some calculations on data with ~50 million rows. When execution reaches the line with the numpy <code>where</code> method, memory usage goes wild. I tried to split the dataframe, but it doesn't help.</p>
<p>code snippet:</p>
<pre><code>##Percentage Range
data_percnt = data.copy(deep=True)
b = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1]
percentage_columns = []
for i in b:
percentage_columns.append('PCT'+str(i*100))
data_percnt['PCT'+str(i*100)] = np.where((data_percnt['PREDICTED']>data_percnt['TARGET']*(1+i)),'Above', np.where((data_percnt['PREDICTED']<data_percnt['TARGET']*(1-i)),'Below', 'Around'))
</code></pre>
<p>memory profile output (before splitting dataframe):
<a href="https://i.sstatic.net/aT0zR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aT0zR.png" alt="enter image description here" /></a></p>
<p>code snippet which i split the data:</p>
<pre><code>splitted_df_list = []
for date in sorted(data_percnt.DATA_DATE.unique()):
globals()["df_" + str(date)] = data_percnt[data_percnt['DATA_DATE'] == date]
splitted_df_list.append(globals()["df_" + str(date)])
</code></pre>
<p>and then a new loop over the split dataframes (I would expect a decrease in memory consumption):</p>
<pre><code>for i in range(len(splitted_df_list)):
for j in b:
splitted_df_list[i]['PCT'+str(j*100)] = np.where((splitted_df_list[i]['PREDICTED']>splitted_df_list[i]['TARGET']*(1+j)),'Above', np.where((splitted_df_list[i]['PREDICTED']<splitted_df_list[i]['TARGET']*(1-j)),'Below', 'Around'))
</code></pre>
<p>memory profile output (after splitting dataframe):
<a href="https://i.sstatic.net/jureM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jureM.png" alt="enter image description here" /></a></p>
<p>but unfortunately the expected decrease in memory did not happen.</p>
<p>I've also tried writing a custom function:</p>
<pre><code>def flag_df(df, j):
if (df['PREDICTED']>df['TARGET']*(1+j)):
return 'Above'
elif (df['PREDICTED']<df['TARGET']*(1-j)):
return 'Below'
else:
return 'Around'
</code></pre>
<p>but it isn't feasible because of the time it takes to complete.</p>
<p>Any ideas on how to reduce RAM usage?</p>
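<p>One lever that does reliably shrink this is the dtype of the result: <code>np.where</code> here produces a column of full Python string objects per iteration. Evaluating the comparisons once on raw NumPy arrays with <code>np.select</code> and storing the label column as a pandas <code>Categorical</code> (one small integer code per row plus a tiny lookup table) is a sketch worth trying (shown on synthetic data; column names approximate the originals):</p>

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the real 50M-row frame.
rng = np.random.default_rng(0)
data = pd.DataFrame({"PREDICTED": rng.normal(100.0, 30.0, 1_000),
                     "TARGET": rng.normal(100.0, 30.0, 1_000)})

pred = data["PREDICTED"].to_numpy()
target = data["TARGET"].to_numpy()

for i in (0.1, 0.2, 0.3):
    above = pred > target * (1 + i)
    below = pred < target * (1 - i)
    labels = np.select([above, below], ["Above", "Below"], default="Around")
    # Categorical stores one byte-sized code per row instead of a
    # full string object, cutting the per-column footprint sharply.
    data[f"PCT{i * 100:g}"] = pd.Categorical(labels)
```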
|
<python><pandas><numpy><machine-learning>
|
2023-02-28 19:12:03
| 0
| 305
|
Atacan
|
75,596,273
| 2,908,017
|
Getting CheckBox checked state in a Python FMX GUI app
|
<p>I've created a small app in the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI library for Python</a> that has a <code>Checkbox</code>, <code>Button</code>, and <code>Label</code> on it:</p>
<p><a href="https://i.sstatic.net/Zq0Rc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zq0Rc.png" alt="Python GUI form with checkbox, button and label" /></a></p>
<p>When I click on the button, then I simply want to check the checked state of the checkbox and then write into the label whether it is checked or not. I have the following code currently for clicking on the button, but it doesn't work:</p>
<pre><code>def Button_OnClick(self, sender):
if self.myCheckBox.Checked:
self.myLabel.Text = "The Checkbox is Checked"
else:
self.myLabel.Text = "The Checkbox is Unchecked"
</code></pre>
<p>When I click on the button, then I get the following run-time error
<code>AttributeError: Error in getting property "Checked". Error: Unknown attribute</code>:</p>
<p><a href="https://i.sstatic.net/95E8Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/95E8Z.png" alt="AttributeError: Error in getting property "Checked". Error: Unknown attribute" /></a></p>
<p>What is the correct or best way to get the <code>Checked</code> state of a <code>CheckBox</code> component?</p>
|
<python><user-interface><checkbox><firemonkey>
|
2023-02-28 19:01:32
| 1
| 4,263
|
Shaun Roselt
|
75,596,269
| 974,555
|
Smallest dtype that will fit all values in an array
|
<p>How can I find the smallest dtype that will hold all values of an array?</p>
<p>For a scalar, I can use <code>np.min_scalar_type</code> to get the smallest dtype that will fit:</p>
<pre class="lang-py prettyprint-override"><code>In [32]: min_scalar_type(0)
Out[32]: dtype('uint8')
In [33]: min_scalar_type(0.1)
Out[33]: dtype('float16')
In [34]: min_scalar_type(-1)
Out[34]: dtype('int8')
In [35]: min_scalar_type(nan)
Out[35]: dtype('float16')
</code></pre>
<p>Is there an equivalent function for arrays? The existing <code>min_scalar_type</code> just returns the dtype of the input array, which is not the smallest.</p>
<pre class="lang-py prettyprint-override"><code>In [36]: min_scalar_type([0, 0.1, -1, nan])
Out[36]: dtype('float64') # I would like: float16
In [37]: min_scalar_type([0, 1])
Out[37]: dtype('int64') # I would like: uint8
In [39]: min_scalar_type([1, -1])
Out[39]: dtype('int64') # I would like: int8
In [40]: min_scalar_type([-100, 200])
Out[40]: dtype('int64') # I would like: int16
</code></pre>
<p>This seems not entirely trivial to implement. For example, for <code>[-1, 200]</code>, the <code>min_scalar_type</code> would be <code>int16</code>, even though <code>-1</code> fits in <code>int8</code> and 200 fits in <code>uint8</code>, so one cannot derive it from applying <code>min_scalar_type</code> to each element of the input. I also thought of <code>min_scalar_type((sign(min(ar)) or 1) * max(ar))</code>, but that fails if all values are negative or if there are floats. Doing this correctly needs more careful thought.</p>
<p>Does this already exist?</p>
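<p>I don't believe NumPy ships this, but a sketch built from <code>min_scalar_type</code> and <code>promote_types</code> covers the cases above. The one subtle trick is mapping a positive maximum through <code>min_scalar_type(-hi - 1)</code>, which yields the smallest <em>signed</em> type able to hold <code>hi</code>:</p>

```python
import numpy as np


def min_array_dtype(arr):
    """Smallest dtype that can hold every value in arr (a sketch)."""
    a = np.asarray(arr)
    if np.issubdtype(a.dtype, np.integer):
        lo, hi = int(a.min()), int(a.max())
        if lo >= 0:                 # all non-negative: unsigned suffices
            return np.min_scalar_type(hi)
        # A negative value forces a signed type; -hi - 1 has the same
        # signed-width requirement as +hi (e.g. 200 -> -201 -> int16).
        hi_t = np.min_scalar_type(-hi - 1) if hi >= 0 else np.min_scalar_type(hi)
        return np.promote_types(np.min_scalar_type(lo), hi_t)
    # Floats (incl. nan): promote the per-element minimal scalar types.
    dt = np.min_scalar_type(float(a.flat[0]))
    for x in a.flat[1:]:
        dt = np.promote_types(dt, np.min_scalar_type(float(x)))
    return dt


print(min_array_dtype([0, 1]))       # uint8
print(min_array_dtype([1, -1]))      # int8
print(min_array_dtype([-100, 200]))  # int16
```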
|
<python><numpy><types>
|
2023-02-28 19:01:11
| 0
| 26,981
|
gerrit
|
75,596,215
| 9,183,839
|
Can't test Post request with FastAPI & Pytest
|
<p>I'm trying to test my <code>/login</code> API with FastAPI's Testclient.</p>
<p>But when I pass data to the POST API, it returns a <code>422 error</code> with content saying the <code>username</code> and <code>password</code> fields are required.</p>
<h3>API:</h3>
<pre class="lang-py prettyprint-override"><code>
@router.post('/token', response_model=schemas.Token)
async def login(user_credentials: OAuth2PasswordRequestForm = Depends(), db: Session = Depends(get_db)):
"""
Login to the system with (Email | Username | Contact)
"""
user = db.query(models.User).filter(
(models.User.email == user_credentials.username) |
(models.User.username == user_credentials.username) |
(models.User.contact == user_credentials.username)
).first()
if not user:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail="Invalid Credentials"
)
if not utils.verify_pwd(user_credentials.password, user.password):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail="Invalid Credentials"
)
access_token = oauth2.create_access_token(data={'user_id': user.id})
return {"access_token": access_token, "token_type": "bearer"}
</code></pre>
<h3>Test Code:</h3>
<pre class="lang-py prettyprint-override"><code>from fastapi.testclient import TestClient
from ..main import app
client = TestClient(app)
def test_auth_token(get_client: TestClient):
client = get_client.post('/token', json={"username": "admin", "password": "1234567890"})
assert client.status_code == 200
</code></pre>
<h3>Error</h3>
<pre class="lang-bash prettyprint-override"><code>(venv) ✘ genex@Genexs-MacBook-Pro: pytest -s
================================================================================== test session starts ===================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.1, pluggy-1.0.0
rootdir: /Users/genex/Desktop/basha-bari
plugins: asyncio-0.20.3, anyio-3.6.2
asyncio: mode=strict
collected 1 item
apps/utility/test_utility_routers.py {'detail': [{'loc': ['body', 'username'], 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ['body', 'password'], 'msg': 'field required', 'type': 'value_error.missing'}]}
F
======================================================================================== FAILURES ========================================================================================
____________________________________________________________________________________ test_auth_token _____________________________________________________________________________________
def test_auth_token():
result = client.post('/token', json={"username": "admin", "password": "1234567890"})
print(result.json())
> assert result.status_code == 200
E assert 422 == 200
E + where 422 = <Response [422 Unprocessable Entity]>.status_code
apps/utility/test_utility_routers.py:12: AssertionError
================================================================================ short test summary info =================================================================================
FAILED apps/utility/test_utility_routers.py::test_auth_token - assert 422 == 200
=================================================================================== 1 failed in 1.06s ====================================================================================
</code></pre>
<p>I'm using <strong>httpx</strong> and <strong>pytest</strong>.</p>
<p>How should I pass the payload so that the API receives it?</p>
|
<python><pytest><fastapi><httpx>
|
2023-02-28 18:56:07
| 1
| 439
|
Fahad Md Kamal
|
75,596,102
| 8,845,766
|
Object of type User is not JSON serializable
|
<p>I'm trying to create a sign up method for users. This is what the user model looks like:</p>
<pre><code>class User(AbstractBaseUser):
name = models.CharField(max_length=128, blank=False)
created_at = models.DateTimeField(auto_now_add=True)
is_admin = models.BooleanField(blank=True,default=False, verbose_name="Is admin")
designation = models.CharField(max_length=90, blank=True)
email = models.EmailField(max_length=160, blank=False, unique=True)
password = models.TextField(blank=False)
USERNAME_FIELD = "email"
def __str__(self):
return self.name + " " + self.email
</code></pre>
<p>This is what the serializer looks like:</p>
<pre><code>class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = ["id", "name", "designation", "is_admin", "email", "password"]
extra_kwargs = {
'password': {'write_only': True}
}
#creates user after hashing
#and salting password
def create(self, validated_data):
user = User.objects.create(
name=validated_data.get('name'),
email=validated_data.get('email'),
password=make_password(password=validated_data.get('password'), salt=get_random_string(length=32))
)
return user
</code></pre>
<p>The view</p>
<pre><code>class AuthViewset(viewsets.ViewSet):
def signUp(self, request, **kwargs):
serializer = UserSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
is_new = True
user = User.objects.filter(email=request.data['email']).first()
if user is not None:
#user exists
is_new = False
else:
#create user
user = serializer.create(serializer.validated_data)
if is_new:
return Response({
"user": user,
"message": "Created new user"
}, status=status.HTTP_201_CREATED)
else:
return Response({
"message": "User already exists"
}, status=status.HTTP_409_CONFLICT)
</code></pre>
<p>I've overridden the create method to allow for salting of a hashed password. I want my view method to get the created user, so I return the user object from the create method. But it throws an error saying</p>
<pre><code>Object of type User is not JSON serializable
</code></pre>
<p>I'm new to Django, so I'm having a tough time understanding why this is happening. How can I fix this?</p>
|
<python><django><django-rest-framework>
|
2023-02-28 18:42:41
| 2
| 794
|
U. Watt
|
75,596,053
| 2,211,268
|
Spacy similarity score for sweet M&M fails
|
<p>Python 3 spaCy seems to have a problem with sweets such as M&amp;Ms.</p>
<pre><code>import spacy
nlp = spacy.load("en_core_web_lg" )
query = nlp( "M&M" )
query2 = nlp("M&M chocolate pouch")
print( "Score of M&M versus the full name in shop:", query2.similarity(query) )
</code></pre>
<p>The resulting score is <em><strong>0.0</strong></em>, and no matter what string query2 is, the result is always 0.0.</p>
<p>However, if you space separate M&M to make "M & M" then the scores are reasonable.</p>
<p>Does anyone know why it fails with a large language model on such a confection? And is there a solution to find the correct similarity score without synthetically adding spaces around the ampersand?</p>
|
<python><nlp><spacy><similarity>
|
2023-02-28 18:36:42
| 1
| 2,092
|
Eamonn Kenny
|
75,596,029
| 3,510,043
|
timezone management in polars group_by_dynamic
|
<p><strong>Update:</strong> This was a bug which has since been fixed. <a href="https://github.com/pola-rs/polars/issues/7274" rel="nofollow noreferrer">https://github.com/pola-rs/polars/issues/7274</a></p>
<hr />
<p>I am exploring <code>polars</code> and encountered an unexpected behavior (at least to me) as shown below.</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import polars
In [2]: polars.__version__
Out[2]: '0.16.9'
In [3]: df = (
...: polars.DataFrame(
...: data={
...: "timestamp": ["1970-01-01 00:00:00+01:00", "1970-01-01 01:00:00+01:00"],
...: "value": [1, 1],
...: }
...: )
...: .with_columns(
...: polars.col("timestamp").str.strptime(
...: polars.Datetime, fmt="%Y-%m-%d %H:%M:%S%:z"
...: )
...: )
...: .with_columns(
...: polars.col("timestamp").dt.convert_time_zone("UTC").alias("timestamp_utc")
...: )
...: )
In [4]: df
Out[4]:
shape: (2, 3)
┌────────────────────────────┬───────┬─────────────────────────┐
│ timestamp ┆ value ┆ timestamp_utc │
│ --- ┆ --- ┆ --- │
│ datetime[μs, +01:00] ┆ i64 ┆ datetime[μs, UTC] │
╞════════════════════════════╪═══════╪═════════════════════════╡
│ 1970-01-01 00:00:00 +01:00 ┆ 1 ┆ 1969-12-31 23:00:00 UTC │
│ 1970-01-01 01:00:00 +01:00 ┆ 1 ┆ 1970-01-01 00:00:00 UTC │
└────────────────────────────┴───────┴─────────────────────────┘
In [5]: df.groupby_dynamic(
...: index_column="timestamp", every="1d", closed="left"
...: ).agg(polars.col("value").count())
Out[5]:
shape: (2, 2)
┌────────────────────────────┬───────┐
│ timestamp ┆ value │
│ --- ┆ --- │
│ datetime[μs, +01:00] ┆ u32 │
╞════════════════════════════╪═══════╡
│ 1969-12-31 01:00:00 +01:00 ┆ 1 │
│ 1970-01-01 01:00:00 +01:00 ┆ 1 │
└────────────────────────────┴───────┘
In [6]: df.groupby_dynamic(
...: index_column="timestamp_utc", every="1d", closed="left"
...: ).agg(polars.col("value").count())
Out[6]:
shape: (2, 2)
┌─────────────────────────┬───────┐
│ timestamp_utc ┆ value │
│ --- ┆ --- │
│ datetime[μs, UTC] ┆ u32 │
╞═════════════════════════╪═══════╡
│ 1969-12-31 00:00:00 UTC ┆ 1 │
│ 1970-01-01 00:00:00 UTC ┆ 1 │
└─────────────────────────┴───────┘
</code></pre>
<p>The definition of the <code>timestamp</code> seems to correctly include the timezone, as confirmed by the conversion to UTC.</p>
<p>When resampling by day, the count is correct when dealing with the UTC timezone (column <code>timestamp_utc</code>), but I think the one with the <code>timestamp</code> column is not, as it should have aggregated the two rows into <code>1970-01-01 00:00:00+01:00</code>.</p>
<p>Am I misunderstanding something?</p>
<p>Thanks in advance!</p>
|
<python><datetime><timezone><python-polars>
|
2023-02-28 18:33:37
| 0
| 820
|
Flavien Lambert
|
75,596,027
| 1,779,091
|
How to write a basic if statement to compare 3 variables and return the largest?
|
<p>I need a basic if statement that looks at 3 variables and returns the largest.</p>
<pre><code>a=10
b=20
c=30
if a>=b and a>=c:
return a
elif b>=a and b>=c:
return b
elif c>=a and c>=b:
return c
</code></pre>
<ol>
<li>Is this correct way to write this simple logic using IF statements? For example can I replace the last elif with else and skip the elif check?</li>
<li>Is there a pythonic way to write this?</li>
</ol>
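<p>On both points: yes, the final <code>elif</code> can safely be a plain <code>else</code>, since the three conditions are exhaustive (ties are covered by <code>&gt;=</code>), and the built-in <code>max</code> makes the whole thing a one-liner:</p>

```python
a = 10
b = 20
c = 30

# max accepts any number of arguments (or a single iterable) and
# returns the largest.
largest = max(a, b, c)
print(largest)  # 30
```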
|
<python>
|
2023-02-28 18:33:29
| 5
| 9,866
|
variable
|
75,596,024
| 11,922,765
|
Python Raspberry Pi 4: How to install erlang?
|
<p>There was a similar <a href="https://stackoverflow.com/questions/22828509/how-to-install-erlang">question almost a decade ago</a>, and I don't think the solution applies to me.
I am trying to install new software, and it needs <code>erlang</code> as a supporting package.</p>
<p>Step1: I downloaded the package as given below: <a href="https://i.sstatic.net/pG6Oo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pG6Oo.png" alt="enter image description here" /></a></p>
<p>step2: installing it on the Raspberry Pi 4</p>
<p>I unzipped the downloaded file and this is what I see inside:</p>
<p><a href="https://i.sstatic.net/YXITv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YXITv.png" alt="enter image description here" /></a></p>
<p>After this, I have no idea how to install it. I don't see any readme or executable file that would let me install it. I appreciate your help. Thanks</p>
|
<python><raspberry-pi><erlang><raspberry-pi4><erlang-otp>
|
2023-02-28 18:33:07
| 1
| 4,702
|
Mainland
|
75,595,957
| 6,077,239
|
How to flatten/split a tuple of arrays and calculate column means in Polars dataframe?
|
<p>I have a dataframe as follows:</p>
<pre><code>df = pl.DataFrame(
{"a": [([1, 2, 3], [2, 3, 4], [6, 7, 8]), ([1, 2, 3], [3, 4, 5], [5, 7, 9])]}
)
</code></pre>
<p>Basically, each cell of <code>a</code> is a tuple of three arrays of the same length. I want to fully split them to separate columns (one scalar resides in one column) like the shape below:</p>
<pre><code>shape: (2, 9)
┌─────────┬─────────┬─────────┬─────────┬─────┬─────────┬─────────┬─────────┬─────────┐
│ field_0 ┆ field_1 ┆ field_2 ┆ field_3 ┆ ... ┆ field_5 ┆ field_6 ┆ field_7 ┆ field_8 │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════════╪═════════╪═════════╪═════════╪═════╪═════════╪═════════╪═════════╪═════════╡
│ 1 ┆ 2 ┆ 3 ┆ 2 ┆ ... ┆ 4 ┆ 6 ┆ 7 ┆ 8 │
│ 1 ┆ 2 ┆ 3 ┆ 3 ┆ ... ┆ 5 ┆ 5 ┆ 7 ┆ 9 │
└─────────┴─────────┴─────────┴─────────┴─────┴─────────┴─────────┴─────────┴─────────┘
</code></pre>
<p>One way I have tried is to use <code>list.to_struct</code> and <code>unnest</code> twice to fully flatten the two nesting levels. Two levels is fine here, but if there are several nesting levels and the depth cannot be determined ahead of time, the code gets very long.</p>
<p>Is there any simpler (or more systematic) way to achieve this?</p>
|
<python><python-polars>
|
2023-02-28 18:26:35
| 3
| 1,153
|
lebesgue
|
75,595,939
| 12,302,691
|
How to print dictionary with array value without any brackets or quotes?
|
<p>I have a dictionary that has <em>character</em> keys and <em>list</em> values.</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {
'A': [1, 1, 0, 0],
'B': [0, 0, 1, 1],
'C': [1, 1, 1, 0],
'D': [1, 0, 0, 0]
}
</code></pre>
<p>When I simply print it using <code>print(my_dict)</code>, I get an output like this:</p>
<pre><code>{'A': [1, 1, 0, 0], 'B': [0, 0, 1, 1], 'C': [1, 1, 1, 0], 'D': [1, 0, 0, 0]}
</code></pre>
<p>What I want is this:</p>
<pre><code>A: 1 1 0 0
B: 0 0 1 1
C: 1 1 1 0
D: 1 0 0 0
</code></pre>
<p>How can I do this without making it complicated?</p>
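<p>A minimal sketch of one way to get exactly that layout, using a comprehension and <code>str.join</code>:</p>

```python
my_dict = {
    'A': [1, 1, 0, 0],
    'B': [0, 0, 1, 1],
    'C': [1, 1, 1, 0],
    'D': [1, 0, 0, 0],
}

# Build one "key: v1 v2 ..." line per entry, then print them all at once
lines = [f"{key}: {' '.join(map(str, values))}" for key, values in my_dict.items()]
print("\n".join(lines))
```

<p>Dicts preserve insertion order in Python 3.7+, so the keys come out in the order they were defined.</p>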
|
<python>
|
2023-02-28 18:24:51
| 1
| 429
|
Anushka Chauhan
|
75,595,854
| 9,743,695
|
create tuple from ranges in python
|
<p>Is there a way in Python to create a list of year/month tuples (such as <code>[(2020, 1), (2020, 2), ...]</code>), for example by using the <code>tuple()</code> function?</p>
<p>My code:</p>
<pre><code>monthyears = []
for y in range(2020,2017,-1):
if y == 2018:
m_end = 6
else:
m_end = 0
for m in range(12,m_end,-1):
monthyears.append((y,m))
</code></pre>
<p>output:</p>
<pre><code>[(2020, 12),
(2020, 11),
(2020, 10),
(2020, 9),
(2020, 8),
(2020, 7),
(2020, 6),
(2020, 5),
(2020, 4),
(2020, 3),
(2020, 2),
(2020, 1),
(2019, 12),
(2019, 11),
(2019, 10),
(2019, 9),
(2019, 8),
(2019, 7),
(2019, 6),
(2019, 5),
(2019, 4),
(2019, 3),
(2019, 2),
(2019, 1),
(2018, 12),
(2018, 11),
(2018, 10),
(2018, 9),
(2018, 8),
(2018, 7)]
</code></pre>
<p>For loops are fine, but I'd like to learn a new trick if one exists.</p>
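<p>One trick that seems to fit: fold both loops into a single nested list comprehension, with a conditional expression supplying the stop value for 2018:</p>

```python
# Same logic as the explicit loops: years 2020 down to 2018,
# months 12 down to 1 (or down to 7 for 2018)
monthyears = [
    (y, m)
    for y in range(2020, 2017, -1)                 # 2020, 2019, 2018
    for m in range(12, 6 if y == 2018 else 0, -1)  # stop after July for 2018
]
```

<p>This produces the same 30 tuples in the same order as the loop version.</p>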
|
<python><tuples><range>
|
2023-02-28 18:15:55
| 3
| 332
|
bearcub
|
75,595,748
| 5,431,132
|
Logic inside Django serializer
|
<p>I have a Django serializer implementation that has a field which contains a list as part of a JWT authentication process. For example:</p>
<pre><code>class MySerializer(serializers.ModelSerializer):
    class Meta:
        fields = ['a', 'b', 'c']
</code></pre>
<p>I then have some logic in my user model</p>
<pre><code>class MyUser(PermissionMixin, AbstractBaseUser):
    permissions = ['OK', 'NOT_OK']
</code></pre>
<p>I can access my users from the following logic inside my serializer</p>
<pre><code>self.context['request'].user.permissions
</code></pre>
<p>I would now like to only share parts <code>a</code> and <code>b</code> of <code>field</code> if the user is not permitted to access <code>c</code>. In other words I want to do something roughly equivalent to</p>
<pre><code>class MySerializer(serializers.ModelSerializer):
    class Meta:
        if 'OK' in self.context['request'].user.permissions:
            fields = ['a', 'b', 'c']
        else:
            fields = ['a', 'b']
</code></pre>
<p>However, the Meta class has no access to <code>self</code>. What is the best design pattern to achieve this? Is the serializer the right place to house this logic?</p>
|
<python><django><serialization>
|
2023-02-28 18:04:26
| 1
| 582
|
AngusTheMan
|
75,595,635
| 14,471,688
|
Remove strings that contain another existing string in a list of strings
|
<p>I want to remove the strings that contain another existing string in a list of strings.
Suppose that I have a list as below:</p>
<pre><code>ex_list = ['transport truck', 'truck', 'plastic boat', 'boat', 'transport', 'ferry', 'truck parking', 'pickup truck', 'pickup']
</code></pre>
<p>I want to remove some specific string like <strong>transport truck</strong> because it is a kind of <strong>truck</strong> and so on.</p>
<p>My desire output:</p>
<pre><code>new_list = ['truck', 'boat', 'transport', 'ferry', 'truck parking', 'pickup']
</code></pre>
<p>Note that in this result I want to keep <strong>truck parking</strong>: there, "truck" is only the first word, not the last. I only want to remove strings where an existing string appears as the final word (i.e. sth + truck), not as the leading word (truck + sth).</p>
<p>What I have tried so far:</p>
<pre><code>def contains_word(s, w):
    return f' {w} ' in f' {s} '

new_list = []
for i in ex_list:
    if len(i.split(' ')) == 1:
        new_list.append(i)

remain_list = list(set(ex_list) - set(new_list))
for i in new_list:
    for j in remain_list:
        if not contains_word(i, j):
            new_list.append(i)
</code></pre>
<p>I think the first loop worked well, but the second loop did not. What happened in the second loop (it ran for so long that I stopped it)? Are there any other solutions?</p>
<p>Thanks in advance for the correction.</p>
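<p>One likely culprit: the second loop appends to <code>new_list</code> while iterating over it, so the list keeps growing and the loop never terminates. A sketch of an alternative that removes only strings whose <em>last</em> word is itself another entry (which matches the desired output above):</p>

```python
ex_list = ['transport truck', 'truck', 'plastic boat', 'boat', 'transport',
           'ferry', 'truck parking', 'pickup truck', 'pickup']

existing = set(ex_list)

def last_word_is_other_entry(s):
    # Drop s when its final word is itself a different entry in the list,
    # e.g. 'transport truck' ends with the entry 'truck';
    # 'truck parking' is kept because 'parking' is not an entry
    last = s.split()[-1]
    return last != s and last in existing

new_list = [s for s in ex_list if not last_word_is_other_entry(s)]
```

<p>This only handles the "existing string as suffix word" case described in the question; deeper containment rules would need extra logic.</p>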
|
<python><list><substring>
|
2023-02-28 17:51:16
| 5
| 381
|
Erwin
|
75,595,538
| 3,493,829
|
How to stop websocket.WebSocketApp logs being generated
|
<p>I am using <code>websocket.WebSocketApp</code> to send and receive messages over a <code>websocket</code>.
I am able to run the application and can send and receive messages.</p>
<p>But whenever I send or receive a message, many log lines are generated, as shown below.</p>
<pre><code>++Sent raw: b'\x82\xfe\x05X\x'
++Rcv raw: b'\x81p my message'
++Sent decoded: fin=1 opcode=2 data=b'fn5+fn5+fn5+fn5+fn5'
++Rcv decoded: fin=1 opcode=1 data=b'my message'
++Sent raw: b'\x82\xfe\x05XQa\xab\x0f~\x17\x9c$~'
++Sent raw: b
</code></pre>
<p>How can I stop these logs from being generated?
They come from the websocket library.</p>
|
<python><python-3.x><websocket>
|
2023-02-28 17:40:11
| 1
| 3,806
|
SSK
|
75,595,427
| 1,295,678
|
How can I properly type a subclass of class which sub-types a Generic base class?
|
<p>I have two classes inheriting from <code>dict</code>, like this:</p>
<pre><code>class A(dict):
    ...

class B(A):
    ...
</code></pre>
<p>This all works fine - the two classes use the inherited <code>dict</code> functionality and do other required stuff. However, I also want to use type-hinting. The first should reduce the range of the <code>Dict</code> generic and the second should reduce it further. For example, I'd like to write something like:</p>
<pre><code>class A(Dict[str, Any]):
    ...

class B(A[str, int]):
    ...
</code></pre>
<p>This works fine for <code>A</code> but not for <code>B</code> (that code is not syntactically correct, of course).</p>
<p>Class <code>B</code> isn't a sub-type of A (since it can't handle anything other than <code>int</code>) and that's fine but I do need it to inherit. Can I get mypy to understand the correct type of <code>B</code> and still inherit from <code>A</code>?</p>
<p>The only solution I can see at present is to re-implement all the inherited methods from <code>A</code> but that would be a lot of runtime boiler-plate just to declare the correct typing.</p>
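<p>One pattern that should satisfy both requirements (a sketch; the <code>TypeVar</code> name <code>VT</code> is my own): make <code>A</code> itself generic over the value type, then let <code>B</code> bind that parameter to <code>int</code>:</p>

```python
from typing import Dict, TypeVar

VT = TypeVar("VT")

class A(Dict[str, VT]):
    """Generic over the value type; still a plain dict at runtime."""

class B(A[int]):
    """Inherits all of A's behaviour but fixes the value type to int."""

b = B()
b["count"] = 3  # mypy should see B as a mapping from str to int
```

<p>At runtime nothing changes (both classes are real <code>dict</code> subclasses), but mypy can now type-check <code>B</code>'s items as <code>str</code> to <code>int</code> without re-implementing any inherited methods.</p>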
|
<python><type-hinting><mypy>
|
2023-02-28 17:30:02
| 1
| 3,577
|
strubbly
|
75,595,323
| 6,165,671
|
Stanza based auto-py-to-exe GUI app throws exception (Windows 10)
|
<p>I am building a Windows-based .exe for a Python script using auto-py-to-exe. It uses Stanza. I am able to build and run the console version of the app (GUI + console). But the GUI-only .exe (console hidden), built with the same auto-py-to-exe settings (apart from the GUI option), does not even load (double-clicking it produces the following error):</p>
<pre><code>Traceback (most recent call last):
File "stanza\models\common\utils.py", line 397, in get_tqdm
NameError: name 'get_ipython' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "MFTE_gui.py", line 3, in <module>
from MFTE import tag_MD, tag_MD_parallel, tag_stanford, tag_stanford_stanza, do_counts
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "MFTE.py", line 16, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "stanza\__init__.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "stanza\pipeline\core.py", line 23, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module
File "stanza\pipeline\pos_processor.py", line 13, in <module>
File "stanza\models\common\utils.py", line 405, in get_tqdm
AttributeError: 'NoneType' object has no attribute 'isatty'
</code></pre>
<p>Screenshot:
<a href="https://i.sstatic.net/aY1a8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aY1a8.png" alt="enter image description here" /></a></p>
<p>The relevant code from <code>stanza\models\common\utils.py</code> is as follows:</p>
<pre><code>def get_tqdm():
    """
    Return a tqdm appropriate for the situation

    imports tqdm depending on if we're at a console, redir to a file, notebook, etc

    from @tcrimi at https://github.com/tqdm/tqdm/issues/506

    This replaces `import tqdm`, so for example, you do this:

    tqdm = utils.get_tqdm()

    then do this when you want a scroll bar or regular iterator depending on context:

    tqdm(list)

    If there is no tty, the returned tqdm will always be disabled
    unless disable=False is specifically set.
    """
    try:
        ipy_str = str(type(get_ipython()))
        if 'zmqshell' in ipy_str:
            from tqdm import tqdm_notebook as tqdm
            return tqdm
        if 'terminal' in ipy_str:
            from tqdm import tqdm
            return tqdm
    except:
        if sys.stderr.isatty():
            from tqdm import tqdm
            return tqdm

    from tqdm import tqdm
    def hidden_tqdm(*args, **kwargs):
        if "disable" in kwargs:
            return tqdm(*args, **kwargs)
        kwargs["disable"] = True
        return tqdm(*args, **kwargs)

    return hidden_tqdm
</code></pre>
<p>Edit 2: I am using a console widget to redirect terminal output to a text box (<a href="https://www.reddit.com/r/Tkinter/comments/nmx0ir/how_to_show_terminal_output_in_gui/" rel="nofollow noreferrer">link</a>). I disabled it to see if that was the issue; it was not. The GUI does not load after creating the exe, whereas it does if I run the *.py file.</p>
<p>So what could be the solution to this problem?</p>
|
<python><pyinstaller><stanford-nlp>
|
2023-02-28 17:20:01
| 2
| 355
|
Shakir
|
75,595,167
| 18,758,062
|
Get all processes in simpy Environment
|
<p>If I have a <code>simpy.Process</code> that creates nested processes, is there a way to get a list of all the active/alive processes from its <code>simpy.Environment</code>?</p>
<p>Basically I've created a tree of simpy processes and at some point I want to interrupt all of the active processes. Having every process listen for <code>simpy.Interrupt</code> then in turn interrupting processes started by it appears to be too tedious and prone to errors from forgetting to add it to the list of child processes to be interrupted.</p>
|
<python><simpy>
|
2023-02-28 17:03:48
| 3
| 1,623
|
gameveloster
|
75,595,083
| 11,826,017
|
How to fix "ValueError: This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer."
|
<p>I'm trying to run a Hugging Face model using the following code in Google Colab:</p>
<pre><code>!pip install transformers
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
inputs = tokenizer(text, return_tensors="pt").input_ids
</code></pre>
<p>And I'm having the following error:</p>
<pre><code>ValueError: This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer.
</code></pre>
<p>How do I fix it?</p>
|
<python><google-colaboratory><huggingface-transformers><valueerror>
|
2023-02-28 16:55:24
| 2
| 779
|
arnle
|
75,595,068
| 10,490,683
|
Standardising pydantic models for similar APIs
|
<p>I'm consuming a set of similar APIs that all return a broadly standard structure.</p>
<p>The code below sets up the pydantic model for <code>Api1</code></p>
<p>I now need to do the model for <code>Api2</code>. The structure is the same but only the field names are different. Is it possible to create reusable code so that I can keep the standard model, and only create classes for each of the different API calls?</p>
<pre><code>from typing import List
from pydantic import BaseModel

api1 = {
    'status': True,
    'message': 'Successful',
    'data': {
        'records': [{
            'api1field1': 'api1value1',
            'api1field2': 'api1value2',
            'api1field3': 'api1value3'
        }],
        'count': 1
    },
    'code': 200
}

class Api1(BaseModel):
    api1field1: str
    api1field2: str
    api1field3: str

class StandardData(BaseModel):
    records: List[Api1]
    count: int

class CommonModel(BaseModel):
    status: bool
    message: str
    data: StandardData
    code: int

model = CommonModel(**api1)

# Happy so far. But now on to Api2
api2 = {
    'status': True,
    'message': 'Successful',
    'data': {
        'records': [
            {
                'api2field1': 'api2value1',
                'api2field2': 'api2value2',
                'api2field3': 'api2value3'
            },
            {
                'api2field1': 'api2value4',
                'api2field2': 'api2value5',
                'api2field3': 'api2value6'
            }
        ],
        'count': 2
    },
    'code': 200
}
</code></pre>
<p>How do I set this up to reuse the standard models used for Api1?</p>
|
<python><pydantic>
|
2023-02-28 16:54:18
| 1
| 8,894
|
Belly Buster
|
75,595,027
| 10,714,156
|
PyTorch: Dataloader creates a new dimension when creating batches
|
<p>I am seeing that when looping over my <code>DataLoader()</code> object using <code>enumerate()</code>, a new dimension is being coerced in order to create the batches of my data.</p>
<p>I have 4 Tensors that I am slicing at a macro level (I have panel data, so I slice the data in blocks of individuals instead of rows (or observations)):</p>
<ul>
<li><code>X</code> (3D)</li>
<li><code>Y</code> (2D)</li>
<li><code>Z</code> (2D)</li>
<li><code>id</code> (2D).</li>
</ul>
<p>The data has 10 observations but only 5 individuals (hence, each individual has 2 observations).</p>
<p>Since I am setting the <code>batch_size = 2</code>, I am taking 4 observations for the first and second batch, and only 2 for the third.</p>
<p>This behavior is represented in the output below:</p>
<pre><code>Selection of the data for by __getitem__ for individual 1
torch.Size([2, 3, 3]) X_batch when selecting for ind 1
torch.Size([2, 3]) Z_batch when selecting for ind 1
torch.Size([2, 1]) Y_batch when selecting for ind 1
Selection of the data for by __getitem__ for individual 2
torch.Size([2, 3, 3]) X_batch when selecting for ind 2
torch.Size([2, 3]) Z_batch when selecting for ind 2
torch.Size([2, 1]) Y_batch when selecting for ind 2
Data of the Batch # 1 inside the enumerate
shape X (outside foo) torch.Size([2, 2, 3, 3]) # <<-- here I have a new dimension
shape Z (outside foo) torch.Size([2, 2, 3])
shape Y (outside foo) torch.Size([2, 2, 1])
Selection of the data for by __getitem__ for individual 3
torch.Size([2, 3, 3]) X_batch when selecting for ind 3
torch.Size([2, 3]) Z_batch when selecting for ind 3
torch.Size([2, 1]) Y_batch when selecting for ind 3
Selection of the data for by __getitem__ for individual 4
torch.Size([2, 3, 3]) X_batch when selecting for ind 4
torch.Size([2, 3]) Z_batch when selecting for ind 4
torch.Size([2, 1]) Y_batch when selecting for ind 4
Data of the Batch # 2 inside the enumerate
shape X (outside foo) torch.Size([2, 2, 3, 3]) # <<-- here I have a new dimension
shape Z (outside foo) torch.Size([2, 2, 3])
shape Y (outside foo) torch.Size([2, 2, 1])
Selection of the data for by __getitem__ for individual 5
torch.Size([2, 3, 3]) X_batch when selecting for ind 5
torch.Size([2, 3]) Z_batch when selecting for ind 5
torch.Size([2, 1]) Y_batch when selecting for ind 5
Data of the Batch # 3 inside the enumerate
shape X (outside foo) torch.Size([1, 2, 3, 3]) # <<-- here I have a new dimension
shape Z (outside foo) torch.Size([1, 2, 3])
shape Y (outside foo) torch.Size([1, 2, 1])
</code></pre>
<p>First, I select the data that corresponds to the first and second individuals, but inside the <code>enumerate()</code> loop I am getting a new dimension (<code>[0]</code>), which Python is using to stack the blocks of individuals.</p>
<hr />
<p>So here is my question:</p>
<p><strong>Is there any way of concatenating (<code>torch.cat(, axis = 0)</code>) the blocks of data instead of creating this new dimension in order to store the entire batch of data?</strong></p>
<p>So for instance for the first individual I want the following</p>
<pre><code>Data of the Batch # 1 inside the enumerate
shape X (outside foo) torch.Size([4, 3, 3]) # <<-- here I torch.concat(,axis = 0)
shape Z (outside foo) torch.Size([4, 3])
shape Y (outside foo) torch.Size([4, 1])
</code></pre>
<p>The code that produces this output is listed below. Thank you.</p>
<hr />
<h3>Sample data</h3>
<pre><code>import torch
import pandas as pd
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import argparse
# args to be passed to the model
parser = argparse.ArgumentParser(description='Neural network for Flexible utility (VOT =f(z))')
args = parser.parse_args("")
args.J = 3 # number of alternatives
# Sample data
X = pd.DataFrame.from_dict({'x1_1': {0: -0.1766214634108258, 1: 1.645852185286492, 2: -0.13348860101031038, 3: 1.9681043689968933, 4: -1.7004428240831382, 5: 1.4580091413853749, 6: 0.06504113741068565, 7: -1.2168493676768384, 8: -0.3071304478616376, 9: 0.07121332925591593}, 'x1_2': {0: -2.4207773498298844, 1: -1.0828751040719462, 2: 2.73533787008624, 3: 1.5979611987152071, 4: 0.08835542172064115, 5: 1.2209786277076156, 6: -0.44205979195950784, 7: -0.692872860268244, 8: 0.0375521181289943, 9: 0.4656030062266639}, 'x1_3': {0: -1.548320898226322, 1: 0.8457342014424675, 2: -0.21250514722879738, 3: 0.5292389938329516, 4: -2.593946520223666, 5: -0.6188958526077123, 6: 1.6949245117526974, 7: -1.0271341091035742, 8: 0.637561891142571, 9: -0.7717170035055559}, 'x2_1': {0: 0.3797245517345564, 1: -2.2364391598508835, 2: 0.6205947900678905, 3: 0.6623865847688559, 4: 1.562036259999875, 5: -0.13081282910947759, 6: 0.03914373833251773, 7: -0.995761652421108, 8: 1.0649494418154162, 9: 1.3744782478849122}, 'x2_2': {0: -0.5052556836786106, 1: 1.1464291788297152, 2: -0.5662380273138174, 3: 0.6875729143723538, 4: 0.04653136473130827, 5: -0.012885303852347407, 6: 1.5893672346098884, 7: 0.5464286050059511, 8: -0.10430829457707284, 9: -0.5441755265313813}, 'x2_3': {0: -0.9762973303149007, 1: -0.983731467806563, 2: 1.465827578266328, 3: 0.5325950414202745, 4: -1.4452121324204903, 5: 0.8148816373643869, 6: 0.470791989780882, 7: -0.17951636294180473, 8: 0.7351814781280054, 9: -0.28776723200679066}, 'x3_1': {0: 0.12751822396637064, 1: -0.21926633684030983, 2: 0.15758799357206943, 3: 0.5885412224632464, 4: 0.11916562911189271, 5: -1.6436210334529249, 6: -0.12444368631987467, 7: 1.4618564171802453, 8: 0.6847234328916137, 9: -0.23177118858569187}, 'x3_2': {0: -0.6452955690715819, 1: 1.052094761527654, 2: 0.20190339195326157, 3: 0.6839430295237913, 4: -0.2607691613858866, 5: 0.3315513026670213, 6: 0.015901139336566113, 7: 0.15243420084881903, 8: -0.7604225072161022, 9: -0.4387652927008854}, 
'x3_3': {0: -1.067058994377549, 1: 0.8026914180717286, 2: -1.9868531745912268, 3: -0.5057770735303253, 4: -1.6589569342151713, 5: 0.358172252880764, 6: 1.9238983803281329, 7: 2.2518318810978246, 8: -1.2781475121874357, 9: -0.7103081175166167}})
Y = pd.DataFrame.from_dict({'CHOICE': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 2.0, 6: 1.0, 7: 1.0, 8: 2.0, 9: 2.0}})
Z = pd.DataFrame.from_dict({'z1': {0: 2.4196730570917233, 1: 2.4196730570917233, 2: 2.822802255159467, 3: 2.822802255159467, 4: 2.073171091633643, 5: 2.073171091633643, 6: 2.044165101485163, 7: 2.044165101485163, 8: 2.4001241292606275, 9: 2.4001241292606275}, 'z2': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0, 5: 1.0, 6: 1.0, 7: 1.0, 8: 0.0, 9: 0.0}, 'z3': {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 2.0, 5: 2.0, 6: 2.0, 7: 2.0, 8: 3.0, 9: 3.0}})
id = pd.DataFrame.from_dict({'id_choice': {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0, 5: 6.0, 6: 7.0, 7: 8.0, 8: 9.0, 9: 10.0}, 'id_ind': {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 3.0, 5: 3.0, 6: 4.0, 7: 4.0, 8: 5.0, 9: 5.0}} )
# Create a dataframe with all the data
data = pd.concat([id,X, Z, Y], axis=1)
</code></pre>
<h3>Defining the <code>torch.utils.data.Dataset()</code></h3>
<pre><code># class to create a dataset for choice data
class ChoiceDataset_all(Dataset):
    '''
    Dataset for choice data

    Args:
        data (pandas dataframe): dataframe with all the data

    Returns:
        dictionary with the data for each individual
    '''
    def __init__(self, data, args, id_variable: str = "id_ind"):
        if id_variable not in data.columns:
            raise ValueError(f"Variable {id_variable} not in dataframe")
        self.data = data
        # select cluster variable
        self.cluster_ids = self.data[id_variable].unique()
        self.Y = torch.LongTensor(self.data['CHOICE'].values - 1).reshape(len(self.data['CHOICE'].index), 1)
        self.id = torch.LongTensor(self.data[id_variable].values).reshape(len(self.data[id_variable].index), 1)
        # number of individuals (N_n)
        self.N_n = torch.unique(self.id).shape[0]
        # number of choices made per individual (t_n)
        _, self.t_n = self.id.unique(return_counts=True)
        # total number of observations (N_t = total number of choices)
        self.N_t = self.t_n.sum(axis=0).item()
        # Select regressors: variables that start with "x"
        self.X_wide = data.filter(regex='^x')
        # turn X_wide into a tensor
        self.X = torch.DoubleTensor(self.X_wide.values)
        # number of regressors (K)
        self.K = int(self.X_wide.shape[1] / args.J)
        # reshape X to have the right dimensions
        # Select variables that start with "z"
        self.Z = torch.DoubleTensor(self.data.filter(regex='^z').values)

    def __len__(self):
        return self.N_n  # number of individuals

    def __getitem__(self, idx):
        # select the index of the individual
        self.index = torch.where(self.id == idx + 1)[0]
        self.len_batch = self.index.shape[0]
        # Select observations for the individual
        Y_batch = self.Y[self.index]
        Z_batch = self.Z[self.index]
        id_batch = self.id[self.index]
        X_batch = self.X[self.index]
        # reshape X_batch to have the right dimensions
        X_batch = X_batch.reshape(self.len_batch, self.K, args.J)
        print("\n")
        print("Selection of the data for by __getitem__ for individual", idx + 1)
        print(X_batch.shape, "X_batch when selecting for ind", idx + 1)
        print(Z_batch.shape, "Z_batch when selecting for ind", idx + 1)
        print(Y_batch.shape, "Y_batch when selecting for ind", idx + 1)
        # print(id_batch.shape, "id_batch when selecting for ind", idx+1)
        return {'X': X_batch, 'Z': Z_batch, 'Y': Y_batch, 'id': id_batch}
</code></pre>
<h3>Looping over <code>torch.utils.data.DataLoader()</code></h3>
<pre><code>choice_data = ChoiceDataset_all(data, args, id_variable="id_ind")
data_loader = DataLoader(choice_data, batch_size=2, shuffle=False, num_workers=0, drop_last=False)

for idx, data_dict in enumerate(data_loader):
    print("\n")
    print("Data of the Batch # ", idx + 1, "inside the enumerate")
    print("shape X (outside foo)", data_dict['X'].shape)
    print("shape Z (outside foo)", data_dict['Z'].shape)
    print("shape Y (outside foo)", data_dict['Y'].shape)
    # print("shape id (outside foo)", data_dict['id'])
</code></pre>
|
<python><pandas><pytorch><dataloader>
|
2023-02-28 16:50:27
| 1
| 1,966
|
Álvaro A. Gutiérrez-Vargas
|
75,594,987
| 800,053
|
How can super() instantiate a class inside it's own __new__ method?
|
<p>I recently had a use case for a singleton class and ended up using this definition:</p>
<pre><code>class SubClient(BaseClient):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super(SubClient, cls).__new__(cls)
        return cls.instance
</code></pre>
<p>After testing that this worked, some questions came up:</p>
<p>How is it possible that <code>super(SubClient, cls).__new__(cls)</code> returns an instance of <code>SubClient</code> within the definition of the <code>SubClient.__new__</code> method?</p>
<p><code>__new__</code> is the method that creates a <code>SubClient</code> so how is it possible that within the definition of this method we can already create a <code>SubClient</code> ?</p>
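<p>A small sketch illustrating the point: <code>super(SubClient, cls).__new__</code> resolves, through the MRO, to <code>object.__new__</code>, which is the raw allocator. It builds a bare, uninitialised instance of whatever class is passed as its argument (here <code>cls</code>, i.e. <code>SubClient</code>), not of the superclass it was looked up on, and it never calls <code>SubClient.__new__</code> again, so there is no recursion:</p>

```python
class BaseClient:
    pass

class SubClient(BaseClient):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            # object.__new__ allocates an instance of the class it is given
            # as `cls`; the custom SubClient.__new__ is bypassed here, which
            # is why this does not recurse
            cls.instance = super(SubClient, cls).__new__(cls)
        return cls.instance

a = SubClient()
b = SubClient()  # returns the cached instance: a singleton
```

<p>So <code>__new__</code> is not "the method that creates a SubClient" by itself; it delegates the actual allocation to <code>object.__new__</code> and only adds the caching logic on top.</p>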
|
<python><python-3.x>
|
2023-02-28 16:47:11
| 1
| 9,597
|
jhnclvr
|
75,594,828
| 4,865,723
|
Read SSH stdout via Paramiko behave different between REPL and script
|
<p>When I'm in the Python shell (REPL?) I'm able to create a connection and read from stdout of the SSH server. But when I run the same code as a script (via <code>python3 -i script.py</code>) it is not working.</p>
<p>On the server side a text-based MUD is running. After logging in via SSH it asks for a MUD-based login.</p>
<h1>REPL</h1>
<p>At the end you see that 153 lines where read.</p>
<pre><code>>>> import paramiko
>>> client = paramiko.SSHClient()
>>> client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> client.connect(hostname='fakehost', username='fakeuser', password='fakepassword')
>>> shell = client.invoke_shell()
>>> shell.setblocking(0)
>>> shell.send('username\n')
8
>>> shell.send('password\n')
10
>>> f = shell.makefile('r')
>>> r = []
>>> while shell.recv_ready():
...     r.append(f.readline())
...
>>> print(f'Read {len(r)} lines.')
Read 153 lines.
</code></pre>
<h1>As script</h1>
<pre><code>#!/usr/bin/env python3
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname='fakehost', username='fakeuser', password='fakepassword')

shell = client.invoke_shell()
shell.setblocking(0)
shell.send('username\n')
shell.send('password\n')

f = shell.makefile('r')
r = []
while shell.recv_ready():
    r.append(f.readline())

print(f'Read {len(r)} lines.')
</code></pre>
<p>The output here is just <code>Read 1 lines.</code>. Where did the other 152 lines go?</p>
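<p>A plausible explanation: in the REPL, typing each line gives the server time to respond, so <code>recv_ready()</code> is already <code>True</code> by the time the loop starts; in the script the loop runs immediately after <code>send()</code> and finds no data yet. A library-agnostic sketch of a polling helper (the function and parameter names are illustrative, not part of the Paramiko API) that waits for output to start arriving before draining it:</p>

```python
import time

def read_available_lines(recv_ready, readline, wait=2.0, poll=0.1):
    # Give the server up to `wait` seconds to start sending; in a script
    # there is no human typing delay, so recv_ready() is often still
    # False right after send()
    deadline = time.monotonic() + wait
    while not recv_ready() and time.monotonic() < deadline:
        time.sleep(poll)
    # Drain whatever has arrived
    lines = []
    while recv_ready():
        lines.append(readline())
    return lines
```

<p>With the script above, this could presumably be called as <code>read_available_lines(shell.recv_ready, f.readline)</code>; a more robust approach would also keep polling between reads, since <code>recv_ready()</code> can momentarily return <code>False</code> while the server is still sending.</p>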
|
<python><paramiko>
|
2023-02-28 16:33:15
| 1
| 12,450
|
buhtz
|
75,594,692
| 6,761,328
|
How to deal with micro- or nanoseconds in datetime64?
|
<p>I imported <code>.xls</code> files which now appear as</p>
<pre><code>0 2022-09-27 11:56:22.733740
1 2022-09-27 11:56:22.733940
2 2022-09-27 11:56:22.734140
3 2022-09-27 11:56:22.734340
4 2022-09-27 11:56:22.734540
4995 2022-09-27 11:56:23.732740
4996 2022-09-27 11:56:23.732940
4997 2022-09-27 11:56:23.733140
4998 2022-09-27 11:56:23.733340
4999 2022-09-27 11:56:23.733540
Name: time, Length: 5000, dtype: datetime64[ns]
</code></pre>
<p>Apparently, the numbers after the <code>.</code> are fractional seconds (the dtype is nanosecond resolution?). Is there a way to convert these numbers into a more readable format?</p>
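<p>If "more readable" means trimming the six fractional digits (microseconds) down to milliseconds, here is a stdlib sketch for a single value; for a pandas series, the equivalent would presumably be <code>df['time'].dt.strftime(...)</code> or <code>df['time'].dt.floor('ms')</code> (not tested here):</p>

```python
from datetime import datetime

# One of the timestamps from the series above
ts = datetime(2022, 9, 27, 11, 56, 22, 733740)

# strftime's %f always yields six digits (microseconds);
# slicing off the last three leaves milliseconds
readable = ts.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
print(readable)
```
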
|
<python><datetime>
|
2023-02-28 16:20:40
| 1
| 1,562
|
Ben
|
75,594,687
| 16,813,096
|
How to apply transparent background in tkinter window of linux (not alpha)?
|
<p>I want to make some portion of a tkinter window transparent. I have successfully achieved it in windows and mac os using the following methods:</p>
<p><strong>In windows:</strong></p>
<pre class="lang-py prettyprint-override"><code>root.attributes("-transparentcolor", '#000001')
root.config(bg="#000001")
</code></pre>
<p><strong>In mac:</strong></p>
<pre class="lang-py prettyprint-override"><code>root.attributes("-transparent", True)
root.config(bg="systemTransparent")
</code></pre>
<p>But I can't find any solution for linux, none of the above methods are working in linux (Ubuntu) because it doesn't have any <code>tranparent</code> attribute. <strong>Moreover, I don't want to use <code>-alpha</code> attribute because it makes the whole window transparent (including the widgets)</strong>. What I want is <strong>full transparency in the background</strong> and other widgets to be <strong>visible/opaque</strong>.</p>
<p><strong>Here is an image example (in windows and mac):</strong></p>
<p><a href="https://i.sstatic.net/XQS0g.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XQS0g.jpg" alt="Tkinter UI in windows" /></a>
<a href="https://i.sstatic.net/V4NsQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V4NsQ.jpg" alt="Tkinter UI in macos" /></a></p>
<h3>Is this possible in linux?</h3>
|
<python><python-3.x><tkinter><tkinter-canvas><tkinter-layout>
|
2023-02-28 16:20:03
| 0
| 582
|
Akascape
|
75,594,594
| 7,575,552
|
Saving feature attributes to a CSV file using Python
|
<p>I am using the pyfeats library to extract radiomic features from images and their respective ROI masks. I am extracting the shape features and GLRLM features. The shape features are extracted using the <code>shape_parameters</code> function, which provides SHAPE_XcoordMax, SHAPE_YcoordMax, SHAPE_area, SHAPE_perimeter, and SHAPE_perimeter2perArea values. The <code>glrlm_features</code> function returns
GLRLM_ShortRunEmphasis, GLRLM_LongRunEmphasis, and 10 more attribute values. The code is given below:</p>
<pre><code>import os
import numpy as np
import cv2
import matplotlib.pyplot as plt
from pyfeats import *
import pandas as pd
from scipy import ndimage as ndi

#%%
# define image and mask folder paths
image_folder = 'images'
mask_folder = 'masks'

# get list of image names
image_names = [f for f in os.listdir(image_folder) if f.endswith('.png')]

# create an empty dictionary to store the features for each image
features_dict = {}

# iterate through each image and its corresponding mask
for img_name in image_names:
    # Load image and resize to 224 x 224
    img_path = os.path.join(image_folder, img_name)
    image = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, (224, 224))

    # Load mask and resize to 224 x 224
    mask_name = img_name
    mask_path = os.path.join(mask_folder, mask_name)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.resize(mask, (224, 224))

    # compute perimeter
    mask //= 255
    kernel = np.ones((5, 5))
    C = ndi.convolve(mask, kernel, mode='constant', cval=0)
    perimeter = np.where((C >= 11) & (C <= 15), 255, 0)

    # extract features: Texture
    features = {}
    features['A_GLRLM'] = glrlm_features(image, mask, Ng=256)
    features['A_Shape_Parameters'] = shape_parameters(image, mask, perimeter, pixels_per_mm2=1)

    # add features to dictionary
    features_dict[img_name] = features

#%%
# convert features dictionary to a pandas DataFrame and save to CSV file
</code></pre>
<p>After computing these features, I would like to save these quantitative values to a CSV file under their respective column headers (for example, SHAPE_XcoordMax, SHAPE_YcoordMax, SHAPE_area, etc.) along with the image filenames in a separate column. Both images and masks have the same filename with a .png extension. How to save these filenames and features to a CSV file?</p>
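<p>Assuming each image's features have first been flattened to a plain <code>{column_name: value}</code> dict (pyfeats functions typically return values together with label lists, so some flattening step is needed; the feature names and values below are hypothetical placeholders), a stdlib sketch with <code>csv.DictWriter</code> that puts the filename in its own column:</p>

```python
import csv

# Hypothetical flattened features per image, keyed by filename
features_dict = {
    "img1.png": {"SHAPE_area": 120.5, "SHAPE_perimeter": 45.2, "GLRLM_ShortRunEmphasis": 0.83},
    "img2.png": {"SHAPE_area": 98.0, "SHAPE_perimeter": 40.1, "GLRLM_ShortRunEmphasis": 0.91},
}

# Column order: filename first, then the feature names
fieldnames = ["filename"] + sorted(next(iter(features_dict.values())).keys())

with open("features.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=fieldnames)
    writer.writeheader()
    for name, feats in features_dict.items():
        writer.writerow({"filename": name, **feats})
```

<p>With pandas already imported in the script, <code>pd.DataFrame.from_dict(features_dict, orient='index').to_csv(...)</code> should achieve the same thing once the nested dict is flattened.</p>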
|
<python><image><opencv><image-processing>
|
2023-02-28 16:11:14
| 1
| 1,189
|
shiva
|
75,594,556
| 1,307,905
|
Trying to deepdiff throws an error on datetime
|
<p>I am trying to compare two very similar data structures using <code>deepdiff</code>. The data is loaded from two msgpack files and consists
of dicts and lists containing floats, integers, strings, datetime.datetime and datetime.date instances (I have some special routines to pack/unpack datetime.date instances in msgpack in two or four bytes). The lists only consist of scalar data (i.e. no other lists or dicts).</p>
<p>deepdiff is supposed to support datetime (I am using version 6.2.3 on Python 3.11.2), but comparison throws an huge number of errors:</p>
<blockquote>
<p>stringify_param was not able to get a proper repr for "2023-02-26 00:20:02+00:00". This object will be reported as None. Add instructions for this object to DeepDiff's helper.literal_eval_extended to make it work properly: invalid syntax. Maybe you meant '==' or ':=' instead of '='? (, line 1)</p>
</blockquote>
<p>Except for the value of the datetime (obviously convertible to a string), the messages are all the same. Since there are many datetime values in the file, the messages never seem to end (I killed the program after 15 minutes of printing error messages on my MacBook M1), and in any case comparing them as <code>None</code> will not get me the diffs I am looking for.</p>
<p>I tried to cut off the datetimes with</p>
<pre><code>ddiff = DeepDiff(data1, data2, truncate_datetime='second')
</code></pre>
<p>which would be acceptable, but that doesn't remove the thousands of error messages.</p>
<p>Is it a problem for deepdiff to use timezone-aware datetime values? There is no mention of <code>literal_eval_extended</code> in the deepdiff documentation (or at least none found by the documentation's search option).</p>
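One workaround sketch (pure stdlib, not deepdiff-specific): normalize all datetime/date values to ISO strings before diffing, so deepdiff never has to build a repr for them. The `normalize` helper is a hypothetical name of my own:

```python
from datetime import datetime, timezone, date

def normalize(obj):
    """Recursively convert datetime/date instances to ISO-format strings."""
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    if isinstance(obj, dict):
        return {k: normalize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [normalize(v) for v in obj]
    return obj

# Toy data shaped like the msgpack payload described above.
data = {"ts": datetime(2023, 2, 26, 0, 20, 2, tzinfo=timezone.utc),
        "xs": [1, date(2023, 2, 26)]}
print(normalize(data))
```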
|
<python><python-deepdiff>
|
2023-02-28 16:08:15
| 1
| 78,248
|
Anthon
|
75,594,351
| 226,473
|
Scrapy does not find module 'attrs'
|
<p>I'm trying to scrape a website. I'm using scrapy with the following commands:</p>
<p><code>pip install scrapy</code></p>
<p><code>scrapy startproject test && cd test</code></p>
<p><code>scrapy genspider test_spider www.webdomain.com</code></p>
<p><code>scrapy crawl test_spider</code></p>
<p>this results in</p>
<pre><code>ModuleNotFoundError: No module named 'attrs'
</code></pre>
<p>(full stacktrace can be found here: <a href="https://pastebin.com/177Vdpfk" rel="nofollow noreferrer">https://pastebin.com/177Vdpfk</a>)</p>
<p>To resolve this, I ran <code>pip install attrs</code> with the following result:</p>
<pre><code>$ pip install attrs
zsh: /usr/local/bin/pip: bad interpreter: /usr/local/opt/python@3.9/bin/python3.9: no such file or directory
Requirement already satisfied: attrs in /Users/rabdelazin/opt/anaconda3/lib/python3.9/site-packages (21.2.0)
</code></pre>
<p>So for some reason it seems that attrs is installed but scrapy does not find it. Any help appreciated.</p>
<p>Note: Googling for an answer turned up several similar questions with missing or unsatisfactory answers.</p>
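The "bad interpreter" line suggests `pip` and `scrapy` may resolve to different Python installations (the old Homebrew 3.9 vs. the Anaconda one). A diagnostic sketch to confirm which interpreter each command actually uses (I write `python3` here; substitute whatever name your environment uses):

```shell
# Which interpreter does 'python3' resolve to, and which pip goes with it?
python3 -c "import sys; print(sys.executable)"
python3 -m pip --version
# If those point at the Anaconda env where scrapy lives, install into exactly
# that interpreter's site-packages rather than whatever bare 'pip' points at:
#   python3 -m pip install attrs
```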
|
<python><scrapy>
|
2023-02-28 15:51:47
| 1
| 21,308
|
Ramy
|
75,594,317
| 1,942,868
|
How to open and edit the pdf file uploaded by form
|
<p>I have a <code>ModelViewSet</code> class which accepts an uploaded file, and I am opening it with PyMuPDF (a PDF-handling library):</p>
<pre><code>import fitz

class DrawingViewSet(viewsets.ModelViewSet):
    queryset = m.Drawing.objects.all()
    serializer_class = s.DrawingSerializer

    def list(self, request):
        serializer = s.DrawingSerializer(queryset, many=True)
        return Response(serializer.data)

    def create(self, request, *args, **kwargs):
        # file is uploaded as 'drawing'
        print(request.FILES)  # <MultiValueDict: {'drawing': [<InMemoryUploadedFile: mypdf.pdf>]}>
        print(request.FILES['drawing'])  # mypdf.pdf
        doc = fitz.open(request.FILES['drawing'])


class Drawing(models.Model):
    drawing = models.FileField(upload_to='uploads/')
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)


class DrawingSerializer(ModelSerializer):
    drawing = serializers.FileField()

    class Meta:
        model = m.Drawing
        fields = ('id', 'drawing')
</code></pre>
<p>When I uploaded the file, there comes the error like this.</p>
<pre><code>fitz.fitz.FileNotFoundError: no such file: 'mypdf.pdf'
</code></pre>
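A guess at the cause: `fitz.open(path)` expects a filename on disk, but an `InMemoryUploadedFile` stringifies to just the bare name (`mypdf.pdf`), which does not exist as a file. Two possible fixes, sketched below with the `fitz` calls left as comments since I have not run them against this exact view:

```python
import io
import os
import tempfile

# Stand-in for request.FILES['drawing'] (an in-memory file-like object).
uploaded = io.BytesIO(b"%PDF-1.4 fake bytes")

# Option 1: hand PyMuPDF the raw bytes instead of a path:
#   doc = fitz.open(stream=uploaded.read(), filetype="pdf")

# Option 2: spill the upload to a real temp file and open that path:
with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
    tmp.write(uploaded.getvalue())
    tmp_path = tmp.name
#   doc = fitz.open(tmp_path)
os.remove(tmp_path)  # clean up once PyMuPDF is done with it
```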
|
<python><django>
|
2023-02-28 15:48:40
| 0
| 12,599
|
whitebear
|
75,594,288
| 1,501,260
|
Absolute imports of common code with multiple entrypoints in subdirectories
|
<p><strong>Given</strong></p>
<p>Let's say I have a code repository "my_tools" with some common code and some scripts (here e.g. fooscript) residing in subdirectories.</p>
<p><em>utils.py</em></p>
<pre><code>def my_util():
print("baz")
</code></pre>
<p><em>fooscript.py</em></p>
<pre><code>import sys; print(sys.path)
from commons.utils import my_util
my_util()
</code></pre>
<p>(all <code>__init__.py</code> are empty)</p>
<p><a href="https://i.sstatic.net/tozab.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tozab.png" alt="repo structure" /></a></p>
<p><strong>What happens when running it</strong></p>
<p>When I run fooscript from the my_tools root directory, here is what happens:</p>
<pre><code>> python foo/fooscript.py
['/home/michel/my_tools/foo', ... python internals ... ]
Traceback (most recent call last):
File "foo/fooscript.py", line 3, in <module>
from commons.utils import my_util
ModuleNotFoundError: No module named 'commons'
</code></pre>
<p>This surprised me: I was expecting that running the Python interpreter from the my_tools directory would put my_tools onto sys.path. Instead, it puts the parent directory of the script actually being run there. Putting an <code>__init__.py</code> in the root directory makes no difference.</p>
<p><strong>Question</strong></p>
<p>What are best practices to resolve this import issue? Approaches I have considered:</p>
<ol>
<li><p>Adding my_tools manually onto PYTHONPATH variable, e.g. in a Makefile target.</p>
</li>
<li><p>Messing with sys.path inside the code. This is strictly worse than the above solution IMO.</p>
</li>
<li><p>Wrapping the scripts to be run under a common <code>my_tools.py</code> under the root directory. I guess this is OK, but it would be something that lives in addition to what is going to be run in AWS Lambda (each lambda only needing the commons directory plus the individual script directory).</p>
</li>
<li><p>Doing only relative imports instead. Not really an option here, because for deployment I want to package commons on a separate AWS Lambda layer, which requires absolute imports. It could be achieved using a custom Docker image for Lambda, but I'd still prefer to be able to steer the working directory.</p>
</li>
</ol>
<p>Am I missing something here? If I don't want to add additional path or import manipulations, is the best practice here really to modify PYTHONPATH?</p>
<p><strong>Background</strong></p>
<p>The subtleties of this question come from the desire to run locally an AWS Lambda-based repository of microservices, with some common code put on a lower layer.</p>
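One more approach worth noting: run the script as a module from the my_tools root, since `python -m` puts the current directory (not the script's directory) at the front of sys.path. A self-contained reproduction of both behaviours in a scratch directory (names mirror the repo above):

```shell
# Rebuild the layout: commons/utils.py plus foo/fooscript.py, empty __init__.py files.
mkdir -p my_tools_demo/commons my_tools_demo/foo
touch my_tools_demo/commons/__init__.py my_tools_demo/foo/__init__.py
printf 'def my_util():\n    print("baz")\n' > my_tools_demo/commons/utils.py
printf 'from commons.utils import my_util\nmy_util()\n' > my_tools_demo/foo/fooscript.py
cd my_tools_demo

# Direct script run: sys.path[0] is .../my_tools_demo/foo, so 'commons' is not found.
python3 foo/fooscript.py 2>/dev/null || echo "direct run fails"

# Module run from the root: sys.path[0] is the current directory, so 'commons' resolves.
python3 -m foo.fooscript   # prints: baz
```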
|
<python><aws-lambda><python-import>
|
2023-02-28 15:46:04
| 0
| 5,735
|
Michel Müller
|
75,594,234
| 9,274,940
|
pandas resample with global minimum and maximum specifying the filling method
|
<p>I want to resample a dataset so that every series shares the same date range: the minimum date of each series should be the global minimum of the <em>date</em> column, and likewise the maximum date should be the global maximum (instead of resampling at the individual series level).</p>
<p>This opens another question: when resampling, how do I specify that the null values should be filled with 0?</p>
<p>I mean, suppose this case:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>series col</th>
<th>date</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>series_1</td>
<td>2023-02-06</td>
<td>5</td>
</tr>
<tr>
<td>series_1</td>
<td>2023-02-23</td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>Notice that the week in between is missing (resampling by week):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>series col</th>
<th>date</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>series_1</td>
<td>2023-02-13</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I'm also wondering: what is the following code doing, and how is it filling that value? I have not specified anything, and yet I don't see null values when resampling. <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.resample.html" rel="nofollow noreferrer">pandas documentation</a></p>
<pre><code>df.groupby('series_col').resample('W', on='date', label='left', loffset=pd.DateOffset(days=0))['value'].sum().reset_index()
</code></pre>
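A toy sketch of both points (made-up data; `loffset` omitted since it is deprecated in recent pandas): within one series, `resample` already generates the empty in-between bins and `.sum()` on an empty bin returns 0, which is why no nulls show up; the remaining work is aligning every series onto the global date range and filling the edges with 0:

```python
import pandas as pd

df = pd.DataFrame({
    "series_col": ["series_1", "series_1", "series_2"],
    "date": pd.to_datetime(["2023-02-06", "2023-02-23", "2023-02-15"]),
    "value": [5, 7, 3],
})

# Per-series weekly sums; resample inserts the empty in-between bins itself,
# and .sum() turns those empty bins into 0 (the "invisible" fill).
weekly = (df.set_index("date")
            .groupby("series_col")["value"]
            .resample("W", label="left")
            .sum())

# Cross-series alignment: reindex every series onto the GLOBAL weekly grid,
# filling the missing edges with 0 explicitly.
dates = weekly.index.get_level_values("date")
grid = pd.date_range(dates.min(), dates.max(), freq="W")
out = (weekly.unstack("series_col")
             .reindex(grid)
             .fillna(0)
             .astype(int))
```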
|
<python><pandas><resampling>
|
2023-02-28 15:41:23
| 1
| 551
|
Tonino Fernandez
|
75,594,229
| 11,010,254
|
Only the top part of div is clickable - why?
|
<p>I am making a website in Streamlit. With HTML and CSS, I'm trying to put a clickable logo in the top left corner and the title text dead center on the same line, regardless of the logo placement. Using base64 encoding, I have managed to make the logo clickable when it is not on the same line as the title. However, when I use div tricks to put the logo and title on the same line without affecting the placement of the title, only the top of the logo is clickable. Can someone explain where my code went wrong?</p>
<pre class="lang-py prettyprint-override"><code>import base64
from typing import Final
from pathlib import Path
import streamlit as st
HOME_DIR: Final = Path(__file__).parent.resolve()
TITLE: Final = "Example page"
@st.cache_data(persist="disk")
def clickable_image(img_path: str, link: str) -> str:
"""By default, st.image doesn't support adding hyperlinks to a local image, so I
have to decode a local image to base64 and pass it to HTML markdown instead.
Parameters:
-----------
img_path : str
Path to where your image is stored.
link : str
URL which you would like to open when the image is clicked on.
Returns:
-----------
str
A cached hyperlink where your local image is stored inside via base64 encoding.
"""
img_path = HOME_DIR / img_path
img_bytes, ext = img_path.read_bytes(), img_path.suffix
encoded_img = base64.b64encode(img_bytes).decode()
return (
f'<div style="display: inline-block;">'
f'<a href="{link}" target="_blank">'
f'<img src="data:image/{ext};base64,{encoded_img}" width="100"></a>'
f"</div>"
)
def pretty_title(title: str, img_path: str, link: str) -> None:
"""Make a centered title, and give it a red line. Adapted from
'streamlit_extras.colored_headers' package.
Parameters:
-----------
title : str
The title of your page.
img_path : str
Path to where your image is stored.
link : str
URL which you would like to open when the image is clicked on.
"""
# Define the logo and text
logo_html = clickable_image(img_path, link)
text_html = "<h2 style='text-align: center; margin-top: 0;'>" f"{title}</h2>"
# Define the HTML for the logo and text side by side
html = (
"<div style='position: relative;'>"
f"<div style='position: absolute; top: 0; left: 0;'>{logo_html}</div>"
f"<div style='text-align: center;'>{text_html}</div>"
"</div>"
"<hr style='background-color: #ff4b4b; margin-top: 0;"
" margin-bottom: 0; height: 3px; border: none; border-radius: 3px;'></hr>"
)
# Render the HTML in the Streamlit app
st.markdown(html, unsafe_allow_html=True)
def main() -> None:
st.set_page_config(
page_title=TITLE,
page_icon="📖",
layout="wide"
)
pretty_title(TITLE, "logo.png", "https://www.google.com/")
main()
</code></pre>
|
<python><html><css><streamlit>
|
2023-02-28 15:40:42
| 1
| 428
|
Vladimir Vilimaitis
|
75,594,202
| 7,613,669
|
Fastest way to loop over Polars DataFrame columns to apply transformations?
|
<p>Is there a preferred way to loop and apply functions to Polars columns?</p>
<p>Here is a pandas example of what I am trying to do:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl
df1 = pl.DataFrame(
{
"A": np.random.rand(10),
"B": np.random.rand(10),
"C": np.random.rand(10)
}
)
df2 = pl.DataFrame(
{
"X1": np.random.rand(10),
"X2": np.random.rand(10),
"X3": np.random.rand(10)
}
)
</code></pre>
<pre class="lang-py prettyprint-override"><code># pandas code
# this is just a weighted sum of df2, where the weights are from df1
df1.to_pandas().apply(
lambda weights: df2.to_pandas().mul(weights, axis=0).sum() / weights.sum(), axis=0,
result_type='expand'
)
</code></pre>
<pre><code> A B C
X1 0.647355 0.705358 0.692214
X2 0.500439 0.416325 0.384294
X3 0.601890 0.606301 0.577076
</code></pre>
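The pandas `apply` above is just matrix algebra: for each weight column w of df1, it computes (df2&#x1D40;·w)/Σw. Not Polars-specific, but here is a vectorized sketch of the same computation in plain NumPy (both frames can hand over arrays via `.to_numpy()`); the seeded random arrays stand in for df1/df2:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((10, 3))   # stand-in for df1.to_numpy(): weight columns A, B, C
X = rng.random((10, 3))   # stand-in for df2.to_numpy(): columns X1, X2, X3

# result[i, j] = sum_k X[k, i] * W[k, j] / sum_k W[k, j]
result = X.T @ W / W.sum(axis=0)   # shape (3, 3): rows X1..X3, cols A..C
```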
|
<python><dataframe><python-polars>
|
2023-02-28 15:38:28
| 1
| 348
|
Sharma
|
75,594,166
| 5,195,209
|
Why would a POST request using curl work, but not when using Python's requests library
|
<p>I can send a curl request to upload a release asset file on Github and it works fine:</p>
<pre class="lang-bash prettyprint-override"><code>$ curl -v -X POST -H "Accept: application/vnd.github+json" -H "Authorization: Bearer <token>" -H "X-GitHub-Api-Version: 2022-11-28" -H "Content-Type: application/octet-stream" https://uploads.github.com/repos/Vuizur/tatoeba-to-anki/releases/93670877/assets?name=test8.apkg --data-binary "@spa_eng.apkg"
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 140.82.121.13:443...
* Connected to uploads.github.com (140.82.121.13) port 443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
* CAfile: C:/Program Files/Git/mingw64/ssl/certs/ca-bundle.crt
* CApath: none
* [CONN-0-0][CF-SSL] TLSv1.3 (OUT), TLS handshake, Client hello (1):
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Server hello (2):
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Certificate (11):
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, CERT verify (15):
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Finished (20):
* [CONN-0-0][CF-SSL] TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* [CONN-0-0][CF-SSL] TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN: server accepted h2
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=GitHub, Inc.; CN=*.github.com
* start date: Jul 21 00:00:00 2022 GMT
* expire date: Jul 21 23:59:59 2023 GMT
* subjectAltName: host "uploads.github.com" matched cert's "*.github.com"
* issuer: C=US; O=DigiCert Inc; CN=DigiCert TLS RSA SHA256 2020 CA1
* SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: POST]
* h2h3 [:path: /repos/Vuizur/tatoeba-to-anki/releases/93670877/assets?name=test8.apkg]
* h2h3 [:scheme: https]
* h2h3 [:authority: uploads.github.com]
* h2h3 [user-agent: curl/7.87.0]
* h2h3 [accept: application/vnd.github+json]
* h2h3 [authorization: Bearer <token>]
* h2h3 [x-github-api-version: 2022-11-28]
* h2h3 [content-type: application/octet-stream]
* h2h3 [content-length: 141403969]
* Using Stream ID: 1 (easy handle 0x25092fa21e0)
> POST /repos/Vuizur/tatoeba-to-anki/releases/93670877/assets?name=test8.apkg HTTP/2
> Host: uploads.github.com
> user-agent: curl/7.87.0
> accept: application/vnd.github+json
> authorization: Bearer <token>
> x-github-api-version: 2022-11-28
> content-type: application/octet-stream
> content-length: 141403969
>
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* We are completely uploaded and fine
< HTTP/2 201
< cache-control: no-cache
< content-security-policy: default-src 'none'
< content-type: application/json; charset=utf-8
< etag: W/"203d3f7a98f51759f96c496a467a4980cd4be65469e790005b7701b7938b6d84"
< last-modified: Tue, 28 Feb 2023 15:06:31 GMT
< strict-transport-security: max-age=31557600
< vary: Accept, Authorization, Cookie, X-GitHub-OTP
< x-content-type-options: nosniff
< x-frame-options: deny
< x-github-media-type: github.v3; format=json
< x-xss-protection: 1; mode=block
< date: Tue, 28 Feb 2023 15:06:31 GMT
< x-github-request-id: 0DD4:53A7:8A354:90660:63FE17A9
<
{"url":"https://api.github.com/repos/Vuizur/tatoeba-to-anki/releases/assets/97462199","id":97462199,"node_id":"RA_kwDOHa9_8c4Fzye3","name":"test8.apkg","label":"","uploader":{"login":"Vuizur","id":29223849,"node_id":"MDQ6VXNlcjI5MjIzODQ5","avatar_url":"https://avatars.githubusercontent.com/u/29223849?v=4","gravatar_id":"","url":"https://api.github.com/users/Vuizur","html_url":"https://github.com/Vuizur","followers_url":"https://api.github.com/users/Vuizur/followers","following_url":"https://api.github.com/users/Vuizur/following{/other_user}","gists_url":"https://api.github.com/users/Vuizur/gists{/gist_id}","starred_url":"https://api.github.com/users/Vuizur/starred{/owner}{/repo}","subscriptions_url":"https://api.github.com/users/Vuizur/subscriptions","organizations_url":"https://api.github.com/users/Vuizur/orgs","repos_url":"https://api.github.com/users/Vuizur/repos","events_url":"https://api.github.com/users/Vuizur/events{/privacy}","received_events_url":"https://api.github.com/users/Vuizur/received_events","type":"User","site_admin":false},"content_type":"application/octet-stream","state":"uploaded","size":141403969,"download_count":0,"created_at":"2023-02-28T15:03:05Z","updated_at":"2023-02-28T15:06:31Z","browser_download_url":"https://github.com/Vuizur/tatoeba-to-anki/releases/download/latest/test8.apkg"}* Connection #0 to host uploads.github.com left intact
</code></pre>
<p>However, when I do the same in Python requests:</p>
<pre class="lang-py prettyprint-override"><code> url = f"https://uploads.github.com/repos/Vuizur/tatoeba-to-anki/releases/{release_id}/assets?name={file}"
headers = {
"Accept": "application/vnd.github.v3+json",
# apkg files are zip files
"Content-Type": "application/octet-stream",
# the authorization token
"Authorization": f"Bearer {os.getenv('GITHUB_TOKEN')}",
}
with open(file, "rb") as f:
#try:
response = requests.post(url, headers=headers, data=f, timeout=20) # Not sure about the optimal value, I only know that 0.1 is too small
print(response)
#except requests.exceptions.Timeout:
# pass
</code></pre>
<p>Here it also uploads the file well so that I can see it online, but I get the following timeout, preventing me from seeing if the file was uploaded correctly or not (in the case of a strange network error):</p>
<pre><code>Traceback (most recent call last):
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\connectionpool.py", line 444, in _make_request
httplib_response = conn.getresponse()
File "C:\Users\hanne\AppData\Local\Programs\Python\Python310\lib\http\client.py", line 1374, in getresponse
response.begin()
File "C:\Users\hanne\AppData\Local\Programs\Python\Python310\lib\http\client.py", line 318, in begin
version, status, reason = self._read_status()
File "C:\Users\hanne\AppData\Local\Programs\Python\Python310\lib\http\client.py", line 279, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Users\hanne\AppData\Local\Programs\Python\Python310\lib\socket.py", line 705, in readinto
return self._sock.recv_into(b)
File "C:\Users\hanne\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1274, in recv_into
return self.read(nbytes, buffer)
File "C:\Users\hanne\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 1130, in read
return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\util\retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\packages\six.py", line 770, in reraise
raise value
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\connectionpool.py", line 451, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\urllib3\connectionpool.py", line 340, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='uploads.github.com', port=443): Read timed out. (read timeout=2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Programs\tatoeba-to-anki\tatoeba_to_anki\upload_to_release.py", line 62, in <module>
response = requests.post(url, headers=headers, data=f, timeout=2) # Not sure about the optimal value, I only know that 0.1 is too small
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "C:\Programs\tatoeba-to-anki\.venv\lib\site-packages\requests\adapters.py", line 578, in send
raise ReadTimeout(e, request=request)
</code></pre>
<p>If I don't set a timeout, the Python code hangs forever; hitting the timeout is (in almost all cases) something that only happens after the upload has already finished successfully.</p>
<p>However, this only occurs with files over a certain size, around 100 MB - and it happens every time, so it is not caused by an unreliable connection. (Note that the file is uploaded correctly every time; the only thing that changes is that the code hits the timeout after the successful upload instead of printing the 201 response.)</p>
<p>Edit: I also managed to replicate the same behaviour (including the size dependent stuff) in <a href="https://github.com/octokit/octokit.js/discussions/2402" rel="nofollow noreferrer">octokit</a></p>
<p>For files under (approximately) 100 MB the code simply prints this (a 201 response):</p>
<pre><code>Uploading ara_eng.apkg
<Response [201]>
</code></pre>
<p>Curl always gets the correct 201 response, independent of file size. What could be a reason for this?</p>
<p>(Windows 11, Python 3.10, Curl with Git Bash)</p>
|
<python><http><github><curl><post>
|
2023-02-28 15:34:55
| 0
| 587
|
Pux
|
75,594,111
| 14,729,820
|
How to convert a txt file to JSON Lines for Hungarian characters?
|
<p>I have a <strong><code>txt</code></strong> file that contains two columns (<code>filename</code> and <code>text</code>); the separator used when generating the txt file is a tab. Example of the input file below:</p>
<p><code>text.txt</code></p>
<pre><code>23.jpg még
24.jpg több
</code></pre>
<p>The expected <code>output_file.jsonl</code> in JSON Lines format:</p>
<pre><code>{"file_name": "23.jpg", "text": "még"}
{"file_name": "24.jpg", "text": "több"}
</code></pre>
<p>But I get an issue with the Unicode/encoding format:</p>
<pre><code>{"file_name": "23.jpg", "text": "m\u00c3\u00a9g"}
{"file_name": "24.jpg", "text": "t\u00c3\u00b6bb"}
</code></pre>
<p>It seems it doesn't recognize the Hungarian special characters <code>áéíöóőüúüű</code>, in either lowercase or uppercase.</p>
<p>For example, the resulting <code>*.jsonl</code> file contains the escape sequence <code>\u00c3\u00a9</code> instead of the letter <code>é</code>.</p>
<p>I wrote this small script to convert the <code>*.txt</code> file in Hungarian to <code>*.jsonl</code>, also in Hungarian:</p>
<pre><code>import pandas as pd
train_text = 'text.txt'
df = pd.read_csv(f'{train_text}', header=None, delimiter='\t', encoding="utf8")  # delimiter is a tab here
df.rename(columns={0: "file_name", 1: "text"}, inplace=True)
# convert txt file to jsonlines
reddit = df.to_dict(orient= "records")
import json
with open("output_file.jsonl","w") as f:
for line in reddit:
f.write(json.dumps(line) + "\n")
</code></pre>
<p>My expectation for <code>output_file.jsonl</code> (JSON Lines format):</p>
<pre><code>{"file_name": "23.jpg", "text": "még"}
{"file_name": "24.jpg", "text": "több"}
</code></pre>
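For reference, a minimal round-trip sketch of the two knobs involved: `json.dumps` escapes non-ASCII unless `ensure_ascii=False`, and the output file should be opened with an explicit UTF-8 encoding. (The observed <code>\u00c3\u00a9</code> pattern additionally looks like UTF-8 bytes decoded as Latin-1, so it is worth double-checking that the source txt file really is UTF-8.)

```python
import json

row = {"file_name": "23.jpg", "text": "még"}

# ensure_ascii=False keeps 'é' as a literal character instead of \uXXXX escapes.
with open("output_file.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(row, ensure_ascii=False) + "\n")

with open("output_file.jsonl", encoding="utf-8") as f:
    line = f.readline()
```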
|
<python><pandas><dataframe><nlp><jsonlines>
|
2023-02-28 15:31:02
| 2
| 366
|
Mohammed
|