| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,617,260
| 4,939,167
|
How to wait until a job status API reaches status = success in a Python-based pytest test automation framework
|
<p>I have a problem statement as below:
I have a job status API which accepts a <code>job_id</code> and starts checking the status of the job.</p>
<p>The job statuses are as follows:</p>
<ol>
<li>In Queue.</li>
<li>In Progress.</li>
<li>Going to next queue.</li>
<li>Success in queue 2.</li>
<li>Job is completed successfully.</li>
</ol>
<p>Now how should I wait until the API response returns <code>status = "Job is completed successfully"</code>, while also printing the intermediate statuses?</p>
<p>I do not want to hardcode a delay like <code>time.sleep(900)</code>. I want to check periodically and print the status accordingly.</p>
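<p>The pattern wanted here is a poll-with-timeout loop. A minimal sketch (function name, interval, and timeout are illustrative; <code>get_status</code> stands for a call into the job status API):</p>

```python
import time

def wait_for_status(get_status, target="Job is completed successfully",
                    poll_interval=30, timeout=900):
    """Poll get_status() until it returns `target`, printing each change."""
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        status = get_status()
        if status != last:
            # only print when the status actually changes
            print(f"current status: {status}")
            last = status
        if status == target:
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"job did not reach '{target}' within {timeout}s")
```

<p>In the test, <code>get_status</code> would wrap the <code>exec_request(...)</code> call and pull the status field out of the response.</p>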
<p>So far I have this code:</p>
<pre class="lang-py prettyprint-override"><code>logger.info('---Getting Job Latest Status')
headers = {
    "authorization": access_token
}
latest_job_status_url = "localhost" + job_id + "/latest_status"
latest_task_url = f"{latest_job_status_url}"
latest_status_job_result = exec_request("GET", latest_task_url, headers, "null")
</code></pre>
<pre class="lang-py prettyprint-override"><code>import json

import requests

def exec_request(request_type, api_url, headers, payload):
    api_response = requests.Session().request(
        method=request_type,
        url=api_url,
        headers=headers,
        params=payload,
        verify=False
    )
    api_content = api_response.content
    api_response_content = json.loads(api_content.decode('utf-8'))
    return {
        'content': api_response_content,
        'status_code': api_response.status_code
    }
</code></pre>
<p>How can I make this function periodically print the status?</p>
<p>Adding the code from a comment by @Luke:</p>
<pre><code>def wait_for_job_to_complete_and_return_status(job_id):
    job_api_response_result = get_job_latest_status(job_id)
    job_internal_status = job_api_response_result.get('internal_status')
    # two `!=` tests joined by `or` are always true; test membership instead
    while job_internal_status not in ("SUCCESS", "FAIL"):
        time.sleep(50)
        job_api_response_result = get_job_latest_status(job_id)
        job_internal_status = job_api_response_result.get('internal_status')
        logger.info("---Internal Status is : %s", job_internal_status)
    return job_internal_status
</code></pre>
|
<python><python-requests><pytest>
|
2023-03-02 15:02:24
| 1
| 352
|
Ashu123
|
75,617,192
| 8,539,389
|
List and manage Azure Resource Locks with Python SDK
|
<p>I am trying to list and loop through Azure Resource Locks of a resource group by using Python SDK.</p>
<pre><code>from azure.mgmt.resource.locks.v2016_09_01.aio import ManagementLockClient

management_lock_client = ManagementLockClient(credential, subscription.subscription_id)
locks = management_lock_client.management_locks.list_at_resource_group_level(resource__group_snapshot)
for lock in locks:
    management_lock_client.management_locks.delete(resource__group_snapshot, lock.name)
</code></pre>
<p>But here, I get the error:</p>
<blockquote>
<p><strong>for lock in locks:</strong>
<strong>TypeError: 'AsyncItemPaged' object is not iterable</strong> .</p>
</blockquote>
<p>I have tried different methods like <code>list()</code> and <code>result()</code>, but they didn't work. For the moment, I don't want to use the REST API directly, only the Python SDK.</p>
<p>Does anyone have an idea?</p>
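<p>The error comes from iterating an async pager with a plain <code>for</code> loop: the <code>.aio</code> variant of the client returns an <code>AsyncItemPaged</code>, which must be consumed with <code>async for</code> inside a coroutine (alternatively, switch to the synchronous client by importing <code>ManagementLockClient</code> from <code>azure.mgmt.resource.locks</code> without the <code>.aio</code> suffix, where the plain loop works). A stand-alone sketch of the pattern, with an async generator standing in for <code>AsyncItemPaged</code>:</p>

```python
import asyncio

async def list_locks():
    # Stand-in for management_locks.list_at_resource_group_level(...)
    for name in ["lock-a", "lock-b"]:
        yield name

async def main():
    names = []
    # `for lock in list_locks()` raises "object is not iterable";
    # async pagers must be consumed with `async for`.
    async for lock in list_locks():
        names.append(lock)
    return names

print(asyncio.run(main()))
```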
|
<python><azure><for-loop><azure-resource-lock>
|
2023-03-02 14:57:17
| 1
| 2,526
|
MoonHorse
|
75,617,107
| 10,270,590
|
How to get a volume-mounted Python virtual environment working with Airflow in Docker via an external Python task?
|
<h1>GOAL</h1>
<ul>
<li>Have a local Python environment that I can swap out and install things into</li>
<li>without needing to build a new image -> stopping the running container -> starting a new container</li>
</ul>
<h1>DONE</h1>
<ul>
<li>I use the Docker version of Airflow 2.4.1</li>
<li>I have successfully mounted the Python virtual environment into the Airflow Docker container as a volume, as you can see in the docker-compose.yml</li>
<li>After restarting Docker with the new yml file it works fine.</li>
<li>I can jump into the container, manually activate the Python environment, and import and run Python libraries perfectly fine.</li>
</ul>
<h1>CHALLENGE</h1>
<ul>
<li>The problem comes when I try to run my test DAG with the new venv2</li>
<li>The DAG works with the original external Python environment that is installed via the Dockerfile, but the goal is not to need this, as mentioned before</li>
<li>My guess is that this error happens because the Python environment is not activated.</li>
</ul>
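<p>A likely cause (an assumption, not confirmed by the log alone) is that <code>venv2</code> was created on the host, so its <code>pyvenv.cfg</code> and site-packages point at an interpreter and paths that don't exist inside the container, or pandas was simply never installed into that exact environment. A quick check is to <code>docker exec</code> into the container and run this snippet with the mounted interpreter, <code>/opt/airflow/venv2/bin/python3</code>:</p>

```python
import sys
import sysconfig

# Shows which interpreter and site-packages directory are actually in use;
# pandas must be installed into this exact location to be importable.
print(sys.executable)
print(sysconfig.get_paths()["purelib"])
```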
<h1>Files and ERRORS</h1>
<p>docker-compose.yml</p>
<pre><code>version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-myown-image-apache/airflow:2.4.1}
  build: .
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: NOTPUBLIC
    #ORIGINAL: postgresql+psycopg2://airflow:airflow@postgres/airflow
    # For backward compatibility, with Airflow <2.3
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: NOTPUBLIC
    #ORIGINAL postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: NOTPUBLIC
    # ORIGINAL db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:1111/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
    AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.NOTPUBLIC'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
    AIRFLOW__CORE__ENABLE_XCOM_PICKLING: 'NOTPUBLIC'
    AIRFLOW__SMTP__SMTP_HOST: NOTPUBLIC
    AIRFLOW__SMTP__SMTP_PORT: 222
    AIRFLOW__SMTP__SMTP_USER: "NOTPUBLIC"
    AIRFLOW__SMTP__SMTP_PASSWORD: NOTPUBLIC
    AIRFLOW__SMTP__SMTP_MAIL_FROM: NOTPUBLIC@NOTPUBLIC.com
    AIRFLOW__WEBSERVER__BASE_URL: NOTPUBLIC
    AIRFLOW__WEBSERVER__WEB_SERVER_SSL_CERT: /opt/airflow/certs/NOTPUBLIC.pem
    AIRFLOW__WEBSERVER__WEB_SERVER_SSL_KEY: /opt/airflow/certs/NOTPUBLIC.pem
    AIRFLOW__CORE__MAX_ACTIVE_RUNS_PER_DAG: 1
    AIRFLOW__CORE__DEFAULT_TASK_EXECUTION_TIMEOUT: 21600
    AWS_SNOWPLOW_ACCESS_KEY: NOTPUBLIC
    AWS_SNOWPLOW_SECRET_KEY: NOTPUBLIC
    AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL: 180
    #AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL: 600
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - routtofolder/NOTPUBLIC1:/opt/airflow/NOTPUBLIC1
    - routtofolder/NOTPUBLIC2:/opt/airflow/NOTPUBLIC2
    - /routtofolder/NOTPUBLIC3:/opt/airflow/NOTPUBLIC3
    - ./venv2:/opt/airflow/venv2 ######## THIS IS THE PROBLEMATIC PART
  user: "${AIRFLOW_UID:-50000}:0"
  depends_on:
    &airflow-common-depends-on
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy
</code></pre>
<p>My example DAG that I want to get working:</p>
<pre><code>from __future__ import annotations

import logging
import sys
import tempfile
from pprint import pprint
from datetime import timedelta

import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.operators.python_operator import PythonOperator
from airflow.models import Variable
import requests
from requests.auth import HTTPBasicAuth

my_default_args = {
    'owner': 'Anonymus',
    'email': ['private@private.com'],
    'email_on_failure': True,
    'email_on_retry': False,
}

with DAG(
    dag_id='test_connected_env',
    schedule='10 10 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    #execution_timeout=timedelta(seconds=60),
    default_args=my_default_args,
    tags=['sample_tag', 'sample_tag2'],
) as dag:

    #@task.external_python(task_id="test_external_python_venv_task", python=os.fspath(sys.executable)) # ORIGINAL
    #@task.external_python(task_id="test_connected_env_task", python='/opt/airflow/venv1/bin/python3') ### installed via pip via Dockerfile, this works perfectly fine
    @task.external_python(task_id="test_connected_env_task", python='/opt/airflow/venv2/bin/python3')
    def go():  # this could be any function name
        # import packages here
        print("My Start")
        # if you want to test the error
        # print(1 + "Airflow")
        import pandas as pd
        print(pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}))
        import numpy as np
        print(np.array([1, 2, 3]))
        return print('my end')

    external_python_task = go()
</code></pre>
<p>ERROR that I get:</p>
<pre><code>
*** Reading local file: /opt/airflow/logs/dag_id=test_connected_env/run_id=manual__2023-03-02T14:15:16.674123+00:00/task_id=test_connected_env_task/attempt=1.log
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1165} INFO - Dependencies all met for <TaskInstance: test_connected_env.test_connected_env_task manual__2023-03-02T14:15:16.674123+00:00 [queued]>
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1165} INFO - Dependencies all met for <TaskInstance: test_connected_env.test_connected_env_task manual__2023-03-02T14:15:16.674123+00:00 [queued]>
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1362} INFO -
--------------------------------------------------------------------------------
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1363} INFO - Starting attempt 1 of 1
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1364} INFO -
--------------------------------------------------------------------------------
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1383} INFO - Executing <Task(_PythonExternalDecoratedOperator): test_connected_env_task> on 2023-03-02 14:15:16.674123+00:00
[2023-03-02, 14:15:18 GMT] {standard_task_runner.py:54} INFO - Started process 15812 to run task
[2023-03-02, 14:15:18 GMT] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'test_connected_env', 'test_connected_env_task', 'manual__2023-03-02T14:15:16.674123+00:00', '--job-id', '142443', '--raw', '--subdir', 'DAGS_FOLDER/test_connected_env_task.py', '--cfg-path', '/tmp/tmp1t0wy5hy']
[2023-03-02, 14:15:18 GMT] {standard_task_runner.py:83} INFO - Job 142443: Subtask test_connected_env_task
[2023-03-02, 14:15:18 GMT] {dagbag.py:525} INFO - Filling up the DagBag from /opt/airflow/dags/test_connected_env_task.py
[2023-03-02, 14:15:18 GMT] {task_command.py:384} INFO - Running <TaskInstance: test_connected_env.test_connected_env_task manual__2023-03-02T14:15:16.674123+00:00 [running]> on host 0ad620763627
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1590} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_EMAIL=NONPUBLIC@NONPUBLIC.NONPUBLIC
AIRFLOW_CTX_DAG_OWNER=Anonymus
AIRFLOW_CTX_DAG_ID=test_connected_env
AIRFLOW_CTX_TASK_ID=test_connected_env_task
AIRFLOW_CTX_EXECUTION_DATE=2023-03-02T14:15:16.674123+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=manual__2023-03-02T14:15:16.674123+00:00
[2023-03-02, 14:15:18 GMT] {python.py:725} WARNING - When checking for Airflow installed in venv got Command '['/opt/airflow/venv2/bin/python3', '-c', 'from airflow import version; print(version.version)']' returned non-zero exit status 1.
[2023-03-02, 14:15:18 GMT] {python.py:726} WARNING - This means that Airflow is not properly installed by /opt/airflow/venv2/bin/python3. Airflow context keys will not be available. Please Install Airflow 2.4.1 in your environment to access them.
[2023-03-02, 14:15:18 GMT] {process_utils.py:179} INFO - Executing cmd: /opt/airflow/venv2/bin/python3 /tmp/tmdqmf6q9rg/script.py /tmp/tmdqmf6q9rg/script.in /tmp/tmdqmf6q9rg/script.out /tmp/tmdqmf6q9rg/string_args.txt
[2023-03-02, 14:15:18 GMT] {process_utils.py:183} INFO - Output:
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - My Start
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - Traceback (most recent call last):
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - File "/tmp/tmdqmf6q9rg/script.py", line 38, in <module>
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - res = go(*arg_dict["args"], **arg_dict["kwargs"])
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - File "/tmp/tmdqmf6q9rg/script.py", line 30, in go
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - import pandas as pd
[2023-03-02, 14:15:18 GMT] {process_utils.py:187} INFO - ModuleNotFoundError: No module named 'pandas'
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/decorators/base.py", line 188, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 370, in execute
return super().execute(context=serializable_context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 175, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 678, in execute_callable
return self._execute_python_callable_in_subprocess(python_path, tmp_path)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 426, in _execute_python_callable_in_subprocess
execute_in_subprocess(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/process_utils.py", line 168, in execute_in_subprocess
execute_in_subprocess_with_kwargs(cmd, cwd=cwd)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/process_utils.py", line 191, in execute_in_subprocess_with_kwargs
raise subprocess.CalledProcessError(exit_code, cmd)
subprocess.CalledProcessError: Command '['/opt/airflow/venv2/bin/python3', '/tmp/tmdqmf6q9rg/script.py', '/tmp/tmdqmf6q9rg/script.in', '/tmp/tmdqmf6q9rg/script.out', '/tmp/tmdqmf6q9rg/string_args.txt']' returned non-zero exit status 1.
[2023-03-02, 14:15:18 GMT] {taskinstance.py:1401} INFO - Marking task as FAILED. dag_id=test_connected_env, task_id=test_connected_env_task, execution_date=20230302T141516, start_date=20230302T141518, end_date=20230302T141518
[2023-03-02, 14:15:18 GMT] {warnings.py:109} WARNING - /home/airflow/.local/lib/python3.8/site-packages/airflow/utils/email.py:120: RemovedInAirflow3Warning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
[2023-03-02, 14:15:18 GMT] {email.py:229} INFO - Email alerting: attempt 1
[2023-03-02, 14:15:18 GMT] {email.py:241} INFO - Sent an alert email to ['NONPUBLIC@NONPUBLIC.com']
[2023-03-02, 14:15:18 GMT] {standard_task_runner.py:102} ERROR - Failed to execute job NONPUBLIC for task test_connected_env_task (Command '['/opt/airflow/venv2/bin/python3', '/tmp/tmdqmf6q9rg/script.py', '/tmp/tmdqmf6q9rg/script.in', '/tmp/tmdqmf6q9rg/script.out', '/tmp/tmdqmf6q9rg/string_args.txt']' returned non-zero exit status 1.; 15812)
[2023-03-02, 14:15:18 GMT] {local_task_job.py:164} INFO - Task exited with return code 1
[2023-03-02, 14:15:18 GMT] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
</code></pre>
|
<python><docker><airflow><airflow-2.x>
|
2023-03-02 14:48:51
| 0
| 3,146
|
sogu
|
75,616,989
| 17,696,880
|
How to use re.sub(), or similar, to do replacements and generate raw strings without the metacharacters causing problems with the regex engine?
|
<pre class="lang-py prettyprint-override"><code>import re

personal_pronoun = "se les"  #example 1
personal_pronoun = "se le"   #example 2
personal_pronoun = "se le"   #example 3
personal_pronoun = "les"     #example 4
personal_pronoun = "le"      #example 5

#re.match() only matches at the beginning of the string
if re.match(r"se", personal_pronoun):
    #concatenate this regex "negative look behind" to make a conditional negative match
    personal_pronoun_for_regex = re.sub(r"^se", r"(?<!se\s)se", personal_pronoun)
else:
    personal_pronoun_for_regex = personal_pronoun

#re.search() searches for matches anywhere in the string.
if re.search(r"\s*le$", personal_pronoun_for_regex):
    #concatenate the \b metacharacter representing a word boundary
    personal_pronoun_for_regex = re.sub(r"le$", r"le\b", personal_pronoun_for_regex)

#I check how the raw string looks like before using it in a regex
print(repr(personal_pronoun_for_regex))  # --> output raw string
</code></pre>
<p>This code gives me this error: <code>raise s.error('bad escape %s' % this, len(this)) re.error: bad escape \s at position 6</code></p>
<p>What could I do to get these raw strings into the <code>personal_pronoun_for_regex</code> variable without having these <code>re</code> errors?</p>
<p>I think this is because there is an error within the <code>re.sub()</code> calls, causing a <code>re.error</code> to be raised, indicating a problem processing the replacement expression.</p>
<p>This is how the raw strings should actually look, so that the special characters are interpreted literally as part of the regular expression:</p>
<pre class="lang-py prettyprint-override"><code>personal_pronoun_for_regex = r"se les" #for example 1
personal_pronoun_for_regex = r"se le\b" #for example 2
personal_pronoun_for_regex = r"se le\b" #for example 3
personal_pronoun_for_regex = r"(?<!se\s)les" #for example 4
personal_pronoun_for_regex = r"(?<!se\s)le\b" #for example 5
</code></pre>
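<p>The error comes from the <em>replacement</em> argument of <code>re.sub()</code>: replacement strings are parsed for their own limited escapes (group references such as <code>\1</code> and <code>\g</code>, plus <code>\\</code>), so <code>\s</code> inside <code>r"(?<!se\s)se"</code> is rejected as a bad escape. You can double the backslashes in the replacement (<code>\\s</code>, <code>\\b</code>), pass a function as the replacement, or sidestep <code>re.sub()</code> entirely and build the pattern by concatenation. A sketch of the last approach that reproduces all five expected outputs (the function name is illustrative):</p>

```python
def build_pattern(personal_pronoun):
    # Build the final pattern by string concatenation, so no backslash
    # is ever re-parsed as a replacement escape by re.sub().
    pattern = personal_pronoun
    if not personal_pronoun.startswith("se"):
        # pronoun without a leading "se": forbid a preceding "se "
        pattern = r"(?<!se\s)" + pattern
    if personal_pronoun.endswith("le"):
        # bare "le": require a word boundary so "les" doesn't match
        pattern = pattern + r"\b"
    return pattern
```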
|
<python><python-3.x><regex><string><regex-negation>
|
2023-03-02 14:40:17
| 0
| 875
|
Matt095
|
75,616,893
| 5,688,175
|
Reshaping a 3D array of shape (K, M, N) to 2D array of shape (n_rows * M, n_cols * N) with Numpy
|
<p>I was trying to reshape a 3D array/tensor <code>arr</code> of shape (K, M, N) in <code>numpy</code> (where each (M, N) subarray could be an image for instance) to a 2D of shape (n_rows * M, n_cols * N).</p>
<p>Obviously, I ensure <code>K = n_rows * n_cols</code> beforehand.</p>
<p>I tried all the possible permutations (after scrolling through similar topics on SO),</p>
<pre><code>for perm in itertools.permutations([0, 1, 2], 3):
    test = arr.transpose(perm).reshape((n_rows * M, n_cols * N))
</code></pre>
<p>but unsuccessfully so far.</p>
<p>However, using <code>einops</code> like this,</p>
<pre><code>test = ein.rearrange(arr, '(r c) h w -> (r h) (c w)', r=n_rows, c=n_cols)
</code></pre>
<p>it yields the expected result.</p>
<p>Is there a straightforward way to achieve this with numpy?</p>
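<p>There is, but a transpose of the 3D array alone can't do it: the grid axis K must first be split into <code>(n_rows, n_cols)</code>. Reshaping to 4D, swapping the two middle axes, and reshaping down reproduces the <code>einops</code> result. A sketch with a small concrete grid:</p>

```python
import numpy as np

K, M, N = 4, 2, 2
n_rows, n_cols = 2, 2
arr = np.arange(K * M * N).reshape(K, M, N)

# (K, M, N) -> (r, c, M, N) -> (r, M, c, N) -> (r*M, c*N),
# i.e. '(r c) h w -> (r h) (c w)' in einops notation
tiled = (arr.reshape(n_rows, n_cols, M, N)
            .swapaxes(1, 2)
            .reshape(n_rows * M, n_cols * N))
print(tiled)
```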
|
<python><numpy><multidimensional-array><einops>
|
2023-03-02 14:32:56
| 1
| 2,351
|
floflo29
|
75,616,850
| 11,922,765
|
VOLTTRON: `python3 bootstrap.py` Does not install all packages
|
<p>I am in the process of installing VOLTTRON on my Raspberry Pi. I came across this <a href="https://www.youtube.com/watch?v=0zHG1p76GNs&list=TLGG8TyZC8fiYxMwMTAzMjAyMw&t=4s&ab_channel=PNNLUnplugged" rel="nofollow noreferrer">VOLTTRON installation video</a> and followed the same steps. But my installation is running into some issues:</p>
<p>On a Linux machine, as shown in the installation video, it installed all packages without any errors and I observed seven bars (showing the installation progress)
<a href="https://i.sstatic.net/pv4Kj.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pv4Kj.jpg" alt="enter image description here" /></a></p>
<p>On my Raspberry Pi 4 Model B machine: it installed a few packages initially and then stopped with errors.
<a href="https://i.sstatic.net/WVDkG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WVDkG.png" alt="enter image description here" /></a></p>
<p>I need your help to understand what went wrong. I repeated the installation 2 to 3 times and I don't know if the error is related to this. But one error message I clearly see is <code>ERROR: you must give atleast one requirement to install</code>. I don't know what it means or what additional input I have to give. I appreciate your help. Thanks</p>
<p><strong>Update</strong>: More information on my Raspberry Pi 4 OS</p>
<p><a href="https://i.sstatic.net/w4mV4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w4mV4.png" alt="enter image description here" /></a></p>
<p>Python3 version:
<a href="https://i.sstatic.net/SaHzG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SaHzG.png" alt="enter image description here" /></a></p>
<p>Result of installing pre-required packages:
<a href="https://i.sstatic.net/YFPkP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFPkP.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/KRD1J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KRD1J.png" alt="enter image description here" /></a></p>
<p>I tried installing VOLTTRON on another Raspberry Pi 4 Model B (2 GB RAM). Unlike the previous one, I did not repeat the installation instructions or install any unnecessary packages. Initially, two packages seem to have been installed without any errors. How do I know? Well, I see two bars (below screenshot). In the video demo, I observed seven bars, meaning that for some reason five packages failed to install on my RPi board. Then it ended with some errors in red text. Screenshot:
<a href="https://i.sstatic.net/jIAk8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jIAk8.png" alt="enter image description here" /></a></p>
|
<python><raspberry-pi><raspberry-pi4><volttron>
|
2023-03-02 14:29:16
| 2
| 4,702
|
Mainland
|
75,616,740
| 4,950,019
|
Upload Conversion Value Rules (Google Ads API) via python sdk / script?
|
<p>The (relatively) new Google Ads API offers <a href="https://developers.google.com/google-ads/api/docs/conversions/conversion-value-rules" rel="nofollow noreferrer">Conversion Value Rules</a> - but I am looking for examples, resources, pointers ... for uploading them via the <a href="https://github.com/googleads/google-ads-python/" rel="nofollow noreferrer">Python client library</a>. I would like to create potentially hundreds of such custom rules per country / device / ... and would be grateful for some helpful examples of how to do this. Thank you in advance.</p>
|
<python><google-ads-api><google-api-python-client>
|
2023-03-02 14:21:57
| 0
| 581
|
davidski
|
75,616,635
| 1,014,217
|
How to use Label Encoder in a dataframe which is nested in another dataframe
|
<p>My dataset is:</p>
<p><a href="https://www.kaggle.com/datasets/angeredsquid/brewers-friend-beer-recipes" rel="nofollow noreferrer">https://www.kaggle.com/datasets/angeredsquid/brewers-friend-beer-recipes</a></p>
<p>I loaded it like this:</p>
<pre><code>import json

import pandas as pd

filename = 'recipes_full copy.json'
with open(filename, 'r') as f:
    try:
        json_data = json.load(f)
        print("The JSON file is valid")
    except ValueError as e:
        print("The JSON file is invalid:", e)

df = pd.DataFrame(json_data.values())
</code></pre>
<p>The result is:</p>
<p><a href="https://i.sstatic.net/9ujVc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9ujVc.png" alt="enter image description here" /></a></p>
<p>Then I convert the fermentables and hops columns into dataframes, like this:</p>
<pre><code>df['fermentables'] = df['fermentables'].apply(pd.DataFrame,columns=["kg","Malt","ppg", "°L Degree Lintner", "bill"])
df['hops'] = df['hops'].apply(pd.DataFrame,columns=["grams", "hop","hoptype", " % AA", "Type", "Time", "IBU", "Percentage"])
</code></pre>
<p>and the result is like this:</p>
<p><a href="https://i.sstatic.net/ichZv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ichZv.png" alt="enter image description here" /></a></p>
<p>Now I need to be able to convert Malt Name and Hop Name with LabelEncoder.</p>
<p>How can I do this inside the nested dataframe? For all Rows of the main dataframe?</p>
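<p>One approach (a sketch, assuming each nested frame has the <code>"Malt"</code> / <code>"hop"</code> column shown, and that scikit-learn is installed): fit a single <code>LabelEncoder</code> on the union of values across all nested frames, so the codes are consistent everywhere, then transform each nested frame in place. The helper name is illustrative:</p>

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def encode_nested(df, col, field):
    """Fit one LabelEncoder over `field` across every nested frame in df[col],
    then add a `<field>_code` column to each nested frame."""
    le = LabelEncoder()
    # fit on the union of values so codes agree across all rows
    all_values = pd.concat([sub[field] for sub in df[col]], ignore_index=True)
    le.fit(all_values.astype(str))
    for sub in df[col]:
        sub[field + "_code"] = le.transform(sub[field].astype(str))
    return le

# Toy data mirroring the nested structure
df = pd.DataFrame({"fermentables": [
    pd.DataFrame({"Malt": ["Pale", "Wheat"]}),
    pd.DataFrame({"Malt": ["Wheat"]}),
]})
encode_nested(df, "fermentables", "Malt")
```

<p>The same call with <code>col="hops", field="hop"</code> would handle the hops column.</p>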
|
<python><pandas>
|
2023-03-02 14:13:37
| 1
| 34,314
|
Luis Valencia
|
75,616,542
| 3,668,129
|
How to set whisper.DecodingOptions language?
|
<p>I'm trying to run Whisper and I want to set <code>DecodingOptions.language</code> to French (instead of using its language detection).</p>
<p>I have tried to write:</p>
<pre><code>options = whisper.DecodingOptions()
options.language = "fr"
</code></pre>
<p>but I'm getting error:</p>
<pre><code>FrozenInstanceError: cannot assign to field 'language'
</code></pre>
<p>How can I set the language in <code>DecodingOptions</code> ?</p>
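<p><code>DecodingOptions</code> is a frozen dataclass, so its fields can't be assigned after construction; pass the value to the constructor instead, i.e. <code>options = whisper.DecodingOptions(language="fr")</code>, and use <code>dataclasses.replace</code> to derive a modified copy of an existing instance. A stand-alone sketch of the pattern (the class here is a simplified stand-in mirroring Whisper's frozen dataclass):</p>

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class DecodingOptions:  # stand-in for whisper.DecodingOptions
    task: str = "transcribe"
    language: Optional[str] = None

options = DecodingOptions(language="fr")   # set fields at construction time
german = replace(options, language="de")   # or derive a modified copy
print(options.language, german.language)
```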
|
<python><deep-learning><openai-whisper>
|
2023-03-02 14:05:31
| 1
| 4,880
|
user3668129
|
75,616,325
| 12,913,047
|
Calculating the number of '1's in a df
|
<p>I have the following df, illustrated as the matrix in the image, and I would like to count the number of 'correlated' squares (equal to 1) and 'non-correlated' squares (equal to 0).</p>
<p>I have tried the <code>df.count()</code> function, but it counts non-NA cells per column rather than returning the totals of 1s and 0s that I want.</p>
<p>Any help would be great, thank you.</p>
<p>My code:</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
df = pd.read_csv('Res_Gov.csv')
df1 = df.set_index('Indicators').T
# Set up the matplotlib figure
fig, ax = plt.subplots(figsize=(12, 12))
colors = ["#f4a261","#2a9d8f"]
cmap = LinearSegmentedColormap.from_list('Custom', colors, len(colors))
# Draw the heatmap with the mask and correct aspect ratio
df1 = sns.heatmap(df1, cmap=cmap, square=True,
linewidths=.5, cbar_kws={"shrink": .5}) # HERE
# Set the colorbar labels
ax.set_xlabel("Indicators")
ax.set_ylabel("Resilience Criteria")
ax.tick_params(axis='x', rotation=90)
colorbar = ax.collections[0].colorbar
colorbar.set_ticks([0.25,0.75])
colorbar.set_ticklabels(['Not Correlated', 'Correlated'])
fig.tight_layout()
plt.show()
</code></pre>
<p>DF snippet</p>
<pre><code>,Indicators,Robustness,Flexibility,Resourcefulness,Redundancy,Diversity,Independence,Foresight Capacity,Coordination Capacitiy,Collaboration Capacity,Connectivity & Interdependence,Agility,Adaptability,Self-Organization,Creativity & Innovation,Efficiency,Equity
0,G1,1,1,1,0,0,1,1,1,1,1,1,1,0,1,1,1
1,G2,1,0,0,0,0,1,0,1,1,1,1,0,0,1,1,1
2,G3,1,0,1,0,0,1,1,1,1,1,1,1,1,1,0,1
3,G4,1,1,1,0,1,0,1,1,1,1,1,1,1,1,0,1
4,G5,1,0,1,0,0,0,0,1,0,1,1,0,0,0,1,0
5,G6,1,0,1,0,1,0,1,0,0,0,0,0,0,1,0,1
6,G7,1,1,0,1,0,1,0,0,0,0,1,0,0,0,0,0
7,G8,1,1,0,0,0,1,1,1,1,0,1,1,0,0,0,0
8,G9,1,0,1,0,0,1,1,1,1,0,1,1,0,1,0,1
9,G10,1,1,1,0,0,0,1,1,1,1,1,0,0,0,1,1
10,G11,1,0,1,0,0,0,1,0,0,0,1,0,0,0,0,1
11,G12,1,1,1,0,1,1,1,1,1,1,1,0,1,1,1,0
12,G13,1,1,1,0,1,0,1,1,0,1,1,0,0,0,0,0
13,G14,1,0,1,0,1,0,1,1,1,1,1,1,0,0,1,1
14,G15,1,1,1,0,1,0,1,1,1,1,1,1,1,1,0,1
15,G16,1,0,1,0,1,1,0,1,1,1,0,1,1,1,0,1
16,G17,1,1,1,0,0,0,0,0,0,0,1,1,0,1,1,0
17,G18,1,0,1,0,1,1,1,1,1,1,0,1,1,1,0,1
18,G19,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
19,G20,1,1,0,1,1,0,0,0,1,0,0,0,1,0,0,1
20,G21,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1
21,G22,1,1,1,0,0,0,1,1,1,1,1,1,0,1,0,1
22,G23,1,0,1,0,0,1,1,0,1,0,0,1,1,1,0,0
</code></pre>
<p>Matrix :</p>
<p><a href="https://i.sstatic.net/Z0iml.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z0iml.png" alt="enter image description here" /></a></p>
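<p>For the counting itself, a sketch assuming <code>df1</code> holds only 0/1 values as in the snippet (the toy frame below stands in for the real data):</p>

```python
import pandas as pd

df1 = pd.DataFrame({"Robustness": [1, 1, 0], "Flexibility": [1, 0, 0]})  # toy stand-in

correlated = int((df1 == 1).sum().sum())      # total number of 1s
not_correlated = int((df1 == 0).sum().sum())  # total number of 0s
# Equivalent: flatten the frame and tally every value at once
counts = df1.stack().value_counts()
print(correlated, not_correlated)
```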
|
<python><pandas>
|
2023-03-02 13:48:51
| 3
| 506
|
JamesArthur
|
75,616,311
| 12,945,785
|
Plotly: add the legend name to the hover template
|
<p>I am making a graph with the Plotly library and I would like to add the name of the legend entry inside the hovertemplate. How can I do that?</p>
<pre><code>data = [go.Bar(name=col,
               x=aum_annuel_classe.index.year,
               y=aum_annuel_classe[col],
               xhoverformat="%Y",
               xperiodalignment="middle",
               hovertemplate='<br>'.join([
                   'Année: %{x}',
                   'AuM: %{y:,.2s}€',
                   '<extra></extra>'
               ]))
        for col in aum_annuel_classe.columns]

fig = go.Figure(data)
fig.update_layout(barmode='stack',
                  legend=dict(yanchor="top",
                              y=0.99,
                              xanchor="left",
                              x=0.01),
                  )
fig.update_xaxes(ticklabelmode="period",
                 tickformat="%Y")
</code></pre>
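<p>Since each trace's <code>name</code> is known when it is built (it's <code>col</code> in the comprehension), one option is simply to interpolate it into the template string per trace; Plotly's hovertemplate also understands <code>%{fullData.name}</code>, which resolves to the trace name at hover time. A sketch of the first option as plain string building, so it runs without Plotly (the "Classe" label is illustrative):</p>

```python
def make_hovertemplate(name):
    # Bake the legend/trace name into the per-trace template;
    # pass this as hovertemplate=make_hovertemplate(col) inside go.Bar(...)
    return '<br>'.join([
        f'Classe: {name}',
        'Année: %{x}',
        'AuM: %{y:,.2s}€',
        '<extra></extra>',
    ])

print(make_hovertemplate("Actions"))
```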
<p><a href="https://i.sstatic.net/tKdZC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tKdZC.png" alt="enter image description here" /></a></p>
<p>Thx</p>
|
<python><plotly>
|
2023-03-02 13:47:55
| 1
| 315
|
Jacques Tebeka
|
75,616,210
| 2,896,292
|
Scipy FFT reduce bin count
|
<p>I have a waveform <code>sig</code> that I'd like to run an FFT on. The data is sampled at 40 kHz and I want to analyze a window of 25 ms, i.e. 1000 points. Using <code>scipy.fft.rfft(sig)</code> I get a result with 501 values (including the DC and Nyquist bins). I understand these values to be magnitudes that correspond to frequency bins between 0 and 20 kHz. Using <code>scipy.fft.rfftfreq(1000, 1/40000)</code> we can see these bins are at intervals of 40 Hz: 0.0, 40.0, 80.0, 120.0, ..., 20000.0. I want to halve the number of bins but still keep the same sample window and frequency range, so the results would be at 0, 80, 160, ..., 20000. How would I achieve this? I don't see a parameter in <code>scipy.fft.rfft()</code> or <code>scipy.fft.rfftfreq()</code> to set the number of bins. I get that I can't increase the resolution beyond what the window allows, but I should be able to reduce the number of bins, correct?</p>
<p>I am coming at this problem as an EE. On a scope or spectrum analyzer I can isolate the part of the waveform I want to inspect, and then there is an option to set the number of bins. My expectation is that when I reduce the number of bins, I will see fewer peaks, but over the same frequency range (0-20 kHz), with the magnitudes of neighboring peaks in the initial FFT combining in the lower-bin version.</p>
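<p>The bin count is fixed by the transform length (<code>n//2 + 1</code> bins for an <code>rfft</code> of <code>n</code> samples), so there is no parameter for it; 80 Hz spacing over the same 0-20 kHz span means 500-point transforms. The analyzer-like behaviour described (neighbouring peaks merging) can be approximated by splitting the 25 ms window into two 500-sample segments and averaging their magnitude spectra, which is essentially Welch's method (<code>scipy.signal.welch</code> with <code>nperseg=500</code> does this with windowing and overlap). A sketch with a hypothetical 2 kHz test tone:</p>

```python
import numpy as np
from scipy.fft import rfft, rfftfreq

fs = 40_000
t = np.arange(1000) / fs            # 25 ms window, 1000 points
sig = np.sin(2 * np.pi * 2000 * t)  # hypothetical 2 kHz tone

# Split the window into two 500-point halves and average their magnitude
# spectra: 251 bins spaced 80 Hz apart, still covering 0-20 kHz.
halves = sig.reshape(2, 500)
mag = np.abs(rfft(halves, axis=1)).mean(axis=0)
freqs = rfftfreq(500, 1 / fs)
print(freqs[1], freqs[-1], int(np.argmax(mag)) * 80)
```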
|
<python><python-3.x><numpy><scipy><fft>
|
2023-03-02 13:39:34
| 1
| 315
|
eh_whatever
|
75,616,056
| 3,802,177
|
What is the proper way to return data using HTTP-API(v2) + Lambda + DynamoDB as a JSON response?
|
<p>I used to use the REST API, but since v2 I find myself using the HTTP API more.
Is there a proper way to return data "neatly", other than manipulating the database response before returning?
I used to use the model feature with REST (v1). What's the recommended way to do the same here?</p>
<p>Here's an example of what I'm trying to do.
I'm selecting specific columns, while avoiding the error:</p>
<blockquote>
<p>An error occurred (ValidationException) when calling the UpdateItem operation: Invalid UpdateExpression: Attribute name is a reserved keyword "owner"</p>
</blockquote>
<p>(I got this for columns named "owner" and "name"),
and since integers/floats are returned as <code>Decimal</code>:</p>
<blockquote>
<p>Object of type Decimal is not JSON serializable</p>
</blockquote>
<p>I added the class to set them properly as integers/floats.</p>
<pre><code>import json
import boto3
from decimal import Decimal

class DecimalEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Decimal):
            return str(obj)
        return json.JSONEncoder.default(self, obj)

def lambda_handler(event, context):
    try:
        dynamodb = boto3.resource('dynamodb')
        table = dynamodb.Table('SomeTable')
        response_body = ''
        status_code = 0
        response = table.scan(
            ProjectionExpression="#col1, #col2, #col3, #col4, #col5",
            ExpressionAttributeNames={
                "#col1": "col1",
                "#col2": "col2",
                "#col3": "col3",
                "#col4": "col4",
                "#col5": "col5"
            }
        )
        items = response["Items"]
        mapped_items = list(map(lambda item: {
            'col1': item['col1'],
            'col2': item['col2'],
            'col3': item['col3'],
            'col4': item['col4'],
            'col5': item['col5'],
        }, items))
        response_body = json.dumps(mapped_items, cls=DecimalEncoder)
        status_code = 200
    except Exception as e:
        response_body = json.dumps(
            {'error': 'Unable to get metadata from SomeTable: ' + str(e)})
        status_code = 403
    json_response = {
        "statusCode": status_code,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": response_body
    }
    return json_response
</code></pre>
<p>This just looks too much for a simple "GET" request of some columns in a table</p>
<p>EDIT:</p>
<pre><code>import json
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    try:
        table = dynamodb.Table("mytable")
        response = table.query(
            ProjectionExpression='col1, col2, col3, col4, col5, col6'
        )
        response = json.loads(json.dumps(response, default=str))
        return {
            "statusCode": 200,
            "body": response
        }
    except Exception as e:
        print(f"Error: {e}")
        return {
            "statusCode": 500,
            "body": "Error: Something went wrong!"
        }
</code></pre>
<p>the above function returns the error:</p>
<blockquote>
<p>Error: An error occurred (ValidationException) when calling the Query
operation: Either the KeyConditions or KeyConditionExpression
parameter must be specified in the request.</p>
</blockquote>
<p>There is no key condition in my case; I am requesting all rows.</p>
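<p>As an aside, when comparing the two snippets: the <code>default=str</code> trick from the second one is enough on its own to serialize the <code>Decimal</code> values boto3 returns, and that part can be verified without AWS. A minimal sketch with hypothetical data:</p>

```python
import json
from decimal import Decimal

# Hypothetical items, shaped like what boto3 returns: numbers come back as Decimal.
items = [{"col1": Decimal("1.5"), "col2": Decimal("2")}]

try:
    json.dumps(items)  # raises TypeError: Object of type Decimal is not JSON serializable
except TypeError as e:
    print(e)

# default=str stringifies anything json can't handle natively, Decimals included.
body = json.dumps(items, default=str)
print(body)  # [{"col1": "1.5", "col2": "2"}]
```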
|
<python><aws-lambda><amazon-dynamodb><aws-http-api><aws-api-gateway-v2>
|
2023-03-02 13:26:11
| 1
| 5,946
|
Imnotapotato
|
75,615,836
| 120,457
|
find one or more element in strings array in another strings array
|
<pre><code>first_array = ['aaa', 'eee']
second_array = ['aaa', 'bbb', 'ccc', 'ddd', 'eee']
</code></pre>
<p>In Python, I want to determine whether any of the elements from the first array (one or more) are present in the second array.</p>
<p>I attempted using subset and union, but it wasn't very effective. I don't want to use an explicit loop, as it takes time.</p>
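<p>For reference, set intersection expresses this without an explicit Python loop. A minimal sketch using the arrays above:</p>

```python
first_array = ['aaa', 'eee']
second_array = ['aaa', 'bbb', 'ccc', 'ddd', 'eee']

# A non-empty intersection means at least one element is shared.
has_common = bool(set(first_array) & set(second_array))
print(has_common)  # True

# The shared elements themselves, if they are needed:
common = set(first_array).intersection(second_array)
print(common)  # {'aaa', 'eee'}
```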
|
<python><python-3.x>
|
2023-03-02 13:07:42
| 4
| 35,235
|
joe
|
75,615,690
| 7,797,210
|
What's the syntax for web REST fastAPI GET function, and the request.get function where one of the input variable is List, or Numpy Array
|
<p>This has been driving me crazy with the syntax for the past week, so hopefully an enlightened one can point me in the right direction! I've traced these posts, but somehow I couldn't get them to work:</p>
<p>I am looking to have input variables that are a list and a NumPy array, for feeding into a FastAPI GET function and then calling it via <code>requests.get</code>. Somehow, I cannot get the arguments/syntax correct.</p>
<p><a href="https://stackoverflow.com/questions/71426756/fastapi-post-request-with-list-input-raises-422-unprocessable-entity-error">FastAPI POST request with List input raises 422 Unprocessable Entity error</a></p>
<p><a href="https://stackoverflow.com/questions/64174598/can-fastapi-pydantic-individually-validate-input-items-in-a-list">Can FastAPI/Pydantic individually validate input items in a list?</a></p>
<p>I have the following web API defined :</p>
<pre><code>import typing
from typing import List

import numpy as np
from fastapi import FastAPI, Query

appOne = FastAPI()

@appOne.get("/sabr/funcThree")
def funcThree(x1: List[float], x2: np.ndarray):
    return {"x1" : x1, "x2" : x2}
</code></pre>
<p>Then, I try to call the function from a jupyter notebook:</p>
<pre><code>import requests
import numpy as np

url_ = "http://127.0.0.1:8000/sabr/"
func_ = "funcThree"
items = [1, 2, 3, 4, 5]
params = {"x1" : items, "x2" : np.array([3,100])}
print(url_ + func_)
requests.get(url_ + func_, params = params).json()
</code></pre>
<p>I get the following error below...</p>
<pre><code>{'detail': [{'loc': ['body', 'x1'],
'msg': 'field required',
'type': 'value_error.missing'}]}
</code></pre>
<p>It's driving me CRAZY .... Help!!!!</p>
<hr />
<p>Though answered in the above, as it is a related question, I'll put in a little update.</p>
<p>I added a <code>type_</code> field so that the endpoint can return different types of results. But somehow the <code>json=</code> params are not picking it up. Below is the FastAPI code:</p>
<pre><code>@app.get("/sabr/calib_test2")
def calib_test2(x1: List[float] = [1, 2, 3], x2: List[float] = [4, 5, 6],
                type_: int = 1):
    s1 = np.array(x1)
    s2 = np.array(x2)
    # corr = np.corrcoef(x1, x2)[0][1]
    if type_ == 1:
        return {"x1": x1, "x1_sum": s1.sum(), "x2": x2,
                "x2_sum": s2.sum(), "size_x1": s1.shape}
    elif type_ == 2:
        return ['test', x1, s1.sum(), x2, s2.sum(), s1.shape]
    else:
        return [0, 0, 0]
</code></pre>
<p>But it seems the <code>type_</code> input I am keying in is not being fed through... Help...</p>
<pre><code>url_ = "http://127.0.0.1:8000/sabr/"
func_ = "calib_test2"
item1 = [1, 2, 3, 4, 5]
item2 = [4, 5, 6, 7, 8]
all_ = url_ + func_
params = {"x1": item1, "x2": item2, "type_": '2', "type_": 2}
resp = requests.get(all_, json = params)
# resp.raise_for_status()
resp.json()
</code></pre>
<p>Results keep being :</p>
<pre><code>{'x1': [1.0, 2.0, 3.0, 4.0, 5.0],
'x1_sum': 15.0,
'x2': [4.0, 5.0, 6.0, 7.0, 8.0],
'x2_sum': 30.0,
'size_x1': [5]}
</code></pre>
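<p>For what it's worth, two things can be checked without running the server. First, the original 422 error says <code>'loc': ['body', 'x1']</code>, i.e. FastAPI treated <code>x1</code> as a request body field; a <code>List</code> parameter is only read from the query string when it is declared with <code>Query(...)</code>. Second, the way a list travels in a query string is as a repeated key, which is how <code>requests</code> encodes list values in <code>params=</code>. A pure-standard-library sketch with hypothetical values:</p>

```python
from urllib.parse import urlencode

params = {"x1": [1, 2, 3], "type_": 2}

# doseq=True repeats the key for each list element -- the same encoding
# requests produces for `params=` and the shape a List query parameter
# declared with Query(...) expects on the FastAPI side.
print(urlencode(params, doseq=True))  # x1=1&x1=2&x1=3&type_=2
```

<p>Note also that passing the payload with <code>json=</code> sends a request body, not query parameters, which would explain <code>type_</code> never reaching a query-declared parameter.</p>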
|
<python><numpy><fastapi>
|
2023-03-02 12:54:18
| 1
| 571
|
Kiann
|
75,615,599
| 3,191,747
|
Django filter for ManyToMany items which exist only in a specified list
|
<p>I have the following:</p>
<pre><code>class Category():
    category_group = models.ManyToManyField("CategoryGroup", blank=True, related_name="category_group")

class CategoryGroup():
    label = models.TextField(null=True, blank=True)

categories = Category.objects.exclude(category_group__label__in=["keywords_1", "keywords_2"])
</code></pre>
<p>I wish to exclude the categories whose group label exists in either <code>keywords_1</code> or <code>keywords_2</code> only. If a category group label exists in <code>keywords_1</code> and <code>keywords_3</code> I do not want to exclude it. What changes are needed for this query?</p>
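<p>It may help to pin down the intent first: "exclude only when every group label is in the list" is a subset test, sketched here in plain Python with hypothetical data. In the ORM this would correspond to excluding categories that have no group label outside the list (e.g. via a second <code>exclude</code> or an annotation), rather than a single <code>__in</code> lookup, which excludes on any match:</p>

```python
# Hypothetical mapping of category name -> set of its group labels.
groups = {
    "cat_a": {"keywords_1"},                # only excluded labels -> drop
    "cat_b": {"keywords_1", "keywords_3"},  # has a label outside the list -> keep
    "cat_c": {"keywords_2"},                # drop
}
excluded = {"keywords_1", "keywords_2"}

# Keep a category unless ALL of its labels fall inside the excluded set.
kept = [name for name, labels in groups.items() if not labels <= excluded]
print(kept)  # ['cat_b']
```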
|
<python><django><postgresql>
|
2023-03-02 12:46:24
| 1
| 506
|
KvnH
|
75,615,469
| 1,873,108
|
Python & Visual studio code 2022 CMAKE ignored python search path
|
<p>I'm trying to compile a Python CMake project but I hit a wall...
This is my example:</p>
<pre><code>set(PY_VERSION 37)
set(PY_EXE "C:/Program Files/Python37")
set(Python3_ROOT_DIR "C:/Program Files/Python37")
set(Python3_FIND_ABI "ON" "3" "7")
set(PYTHON_EXECUTABLE "${Python3_ROOT_DIR}/python.exe" CACHE FILEPATH "Path to the Python executable")

set(PY_BUILD_DEB)
if (${CMAKE_BUILD_TYPE} MATCHES Debug)
    message("WERE IN DEBUG MODE")
    set(PY_BUILD_DEB "_d")
endif ()

set(_PYTHON_EXECUTABLE "${PY_EXE}/python.exe" CACHE STRING "Path to the Python executable")
set(_PYTHON_INCLUDE_DIR "${PY_EXE}/include" CACHE STRING "Path to the Python include directory")
set(_PYTHON_LIBRARY "${PY_EXE}/libs/python${PY_VERSION}${PY_BUILD_DEB}.lib" CACHE STRING "Path to the Python library")

set(OLD_PATH ${CMAKE_PREFIX_PATH})
set(CMAKE_PREFIX_PATH ${PY_EXE})
find_package(Python REQUIRED COMPONENTS Interpreter Development) # HINTS "${PY_EXE}")
#set(CMAKE_PREFIX_PATH ${PY_BIND_ROOT}/tools)
#find_package(pybind11)
set(CMAKE_PREFIX_PATH ${OLD_PATH})

message("  PYTHON_EXECUTABLE : ${_PYTHON_EXECUTABLE}")
message(" PYTHON_INCLUDE_DIR : ${_PYTHON_INCLUDE_DIR}")
message("     PYTHON_LIBRARY : ${_PYTHON_LIBRARY}")
message("   Python_LIBRARIES : ${Python_LIBRARIES}")
message("Python_INCLUDE_DIRS : ${Python_INCLUDE_DIRS}")
</code></pre>
<p>The problem I'm having is that it simply ignores my Python paths and uses its own.</p>
<p>It gives me:</p>
<pre><code>1> [CMake] Python_LIBRARIES : optimized;C:/Program Files/Python310/libs/python310.lib;debug;C:/Program Files/Python310/libs/python310_d.lib
</code></pre>
<p>where he should point to 37 one.
No matter what variable/etc I do, hes always stuck to 310 he finds "somewhere".</p>
<p>Any idea how to tell vs to do what hes told to do ? Clion/etc ides work properly.</p>
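<p>One detail worth noting: with <code>find_package(Python ...)</code> (as opposed to <code>Python3</code>), the hint variables are spelled with the <code>Python_</code> prefix, so <code>Python3_ROOT_DIR</code> is ignored by that call. A hedged sketch of pinning the version (also try deleting the CMake cache first, since a previously found 3.10 result will stick in <code>CMakeCache.txt</code>):</p>

```cmake
# Sketch only: the Python_ prefix must match find_package(Python ...);
# Python3_ROOT_DIR is only consulted by find_package(Python3 ...).
set(Python_ROOT_DIR "C:/Program Files/Python37")
find_package(Python 3.7 EXACT REQUIRED COMPONENTS Interpreter Development)
message(STATUS "Found: ${Python_EXECUTABLE}")
```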
|
<python><c++><visual-studio><cmake>
|
2023-03-02 12:32:47
| 0
| 1,076
|
Dariusz
|
75,615,438
| 9,077,457
|
In python Is there a way to test if an object is in an enumerable, but in the sense of the "is" operator instead of the "==" operator?
|
<p>I'm displaying a data structure which is supposed to be a tree, but there might be some risk of infinite recursion if my tree is in reality a graph with cycles (that shouldn't happen, but I'm being paranoid).</p>
<p>For this reason, I have created a list of encountered nodes, and I want to check whether the currently visited node is present in the list. However, it appears that the <code>in</code> operator tests values (like <code>==</code>) and not identity (like <code>is</code>). Is there a way to check whether an object's identity, rather than its value, is already present in my list?</p>
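<p>A minimal sketch of the difference, and two ways to get an identity-based membership test:</p>

```python
a = [1]
b = [1]      # equal to a, but a different object
seen = [a]

print(b in seen)                  # True  -- `in` compares with ==
print(any(x is b for x in seen))  # False -- explicit identity test
print(any(x is a for x in seen))  # True

# Alternative: track object ids in a set for O(1) lookups.
# (Valid only while the tracked objects stay alive, since ids can be reused.)
seen_ids = {id(a)}
print(id(b) in seen_ids)          # False
```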
|
<python><operators><identity>
|
2023-03-02 12:29:55
| 2
| 1,394
|
Camion
|
75,615,302
| 20,051,041
|
Scrapy shell fetch response.css returns []
|
<p>I am learning to scrape using scrapy. I would like to get some information about this medicine: <a href="https://www.apotheken-umschau.de/medikamente/beipackzettel/azithromycin-al-250-mg-filmtabletten-1805007.html" rel="nofollow noreferrer">https://www.apotheken-umschau.de/medikamente/beipackzettel/azithromycin-al-250-mg-filmtabletten-1805007.html</a>
Before writing a spider in Python, I began with the headline using scrapy shell:</p>
<pre><code><h1 class="headline mb-3 fw-bolder">Beipackzettel von AZITHROMYCIN AL 250 mg Filmtabletten</h1>
</code></pre>
<p>and tried:</p>
<pre><code> fetch('https://www.apotheken-umschau.de/medikamente/beipackzettel/azithromycin-al-250-mg-filmtabletten-1805007.html')
</code></pre>
<p>then:</p>
<pre><code>response.css('h1.headline mb-3 fw-bolder').getall()
</code></pre>
<p>Any idea why I get <code>[]</code>?
Thanks.</p>
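<p>One thing worth checking regardless of the page itself: <code>class="headline mb-3 fw-bolder"</code> is three separate classes, and in a CSS selector each one needs its own leading dot with no spaces (a space means "descendant element", so the selector above looks for <code>mb-3</code> and <code>fw-bolder</code> elements inside the <code>h1</code>). A minimal sketch of how the chained selector is built from the class attribute:</p>

```python
# The class attribute holds space-separated class names ...
attr = "headline mb-3 fw-bolder"

# ... so a CSS selector matching all of them chains each with a dot:
selector = "h1." + ".".join(attr.split())
print(selector)  # h1.headline.mb-3.fw-bolder
```

<p>So in the shell, <code>response.css('h1.headline.mb-3.fw-bolder::text').getall()</code> would be the equivalent query, assuming the element is present in the raw HTML and not rendered by JavaScript.</p>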
|
<python><scrapy>
|
2023-03-02 12:18:06
| 2
| 580
|
Mr.Slow
|
75,615,287
| 13,916,049
|
Retain separate dataframe structure after feature selection on list of dataframes
|
<p><code>dfs</code> is a list of dataframes that are concatenated into <code>df</code>; <code>y</code> holds the one-hot category of each row. After feature selection, I want to retain the selected features of each original dataframe as separate outputs, e.g. <code>mut_fs</code>, <code>mirna_fs</code>.</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MinMaxScaler

dfs = [mut, mirna, mrna_exp, meth, protein]
df = pd.concat(dfs)
dummies = pd.get_dummies(df.iloc[:,-1:], prefix="category")
df = pd.concat([df, dummies], axis=1)
df.drop("category", axis=1, inplace=True)
X = df.iloc[:,:-5]
y = df.iloc[:,-5:]
mms = MinMaxScaler()
X_mms = pd.DataFrame(mms.fit_transform(X.values), columns=X.columns, index=X.index)
</code></pre>
<p>Feature selection:</p>
<pre><code>from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold
from sklearn.feature_selection import RFECV

min_features_to_select = 10
clf = DecisionTreeClassifier()
cv = KFold(5)
rfecv = RFECV(estimator=clf, step=5, cv=cv, scoring="accuracy", min_features_to_select=min_features_to_select, n_jobs=2)
rfecv.fit(X_mms, y)
X_transformed = X_mms.loc[:, rfecv.get_support()]
</code></pre>
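<p>For the splitting-back step, one hedged sketch with a hypothetical small frame: since the <code>category</code> column records which source dataframe each feature row came from, the reduced frame can be partitioned with <code>groupby</code> into one dataframe per source:</p>

```python
import pandas as pd

# Hypothetical reduced frame: rows are features, tagged with their source category.
reduced = pd.DataFrame(
    {"s1": [0, 2.46, 9.72], "category": ["Mutation", "miRNA", "mRNA"]},
    index=["KNL1", "hsa-miR-664a-3p", "ZYG11B"],
)

# One dataframe per original source, e.g. parts["Mutation"] would play the role of mut_fs.
parts = {cat: sub for cat, sub in reduced.groupby("category")}
print(sorted(parts))  # ['Mutation', 'mRNA', 'miRNA']
```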
<p>Data:</p>
<p><code>mut</code></p>
<pre><code>mut = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'KNL1': 0,
'MEGF8': 0,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 0},
'TCGA-Y8-A8RZ-01A': {'KNL1': 0,
'MEGF8': 0,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 1},
'TCGA-Y8-A8S0-01A': {'KNL1': 0,
'MEGF8': 0,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 0},
'TCGA-Y8-A8S1-01A': {'KNL1': 0,
'MEGF8': 0,
'JMJD1C': 0,
'FREM2': 0,
'SPEN': 0},
'category': {'KNL1': 'Mutation',
'MEGF8': 'Mutation',
'JMJD1C': 'Mutation',
'FREM2': 'Mutation',
'SPEN': 'Mutation'}})
</code></pre>
<p><code>mirna</code></p>
<pre><code>mirna = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'hsa-miR-664a-3p': 2.460083880550082,
'hsa-miR-1307-3p': 3.287550991864731,
'hsa-miR-1976': 1.962887971659645,
'hsa-miR-2355-5p': 2.352633477409978,
'hsa-miR-3607-3p': 2.10690806575631},
'TCGA-Y8-A8RZ-01A': {'hsa-miR-664a-3p': 2.54188890339199,
'hsa-miR-1307-3p': 3.3404984273244107,
'hsa-miR-1976': 1.584687245555564,
'hsa-miR-2355-5p': 1.2258390661832212,
'hsa-miR-3607-3p': 2.308760900404995},
'TCGA-Y8-A8S0-01A': {'hsa-miR-664a-3p': 2.577934740987889,
'hsa-miR-1307-3p': 3.196635506896576,
'hsa-miR-1976': 0.7959878740242344,
'hsa-miR-2355-5p': 1.971638052906995,
'hsa-miR-3607-3p': 2.0907950222445617},
'TCGA-Y8-A8S1-01A': {'hsa-miR-664a-3p': 2.4871912626414576,
'hsa-miR-1307-3p': 3.3312863379291127,
'hsa-miR-1976': 1.964206800367793,
'hsa-miR-2355-5p': 2.441762476705453,
'hsa-miR-3607-3p': 2.004685616955679},
'category': {'hsa-miR-664a-3p': 'miRNA',
'hsa-miR-1307-3p': 'miRNA',
'hsa-miR-1976': 'miRNA',
'hsa-miR-2355-5p': 'miRNA',
'hsa-miR-3607-3p': 'miRNA'}})
</code></pre>
<p><code>mrna_exp</code></p>
<pre><code>mrna_exp = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'ZYG11B': 9.721558077351668,
'ZYX': 13.149784472249904,
'ZZEF1': 10.996463884857622,
'ZZZ3': 9.5709146512422,
'psiTPTE22': 8.39705036385952},
'TCGA-Y8-A8RZ-01A': {'ZYG11B': 9.482960989216007,
'ZYX': 12.18631415969286,
'ZZEF1': 10.3849211071136,
'ZZZ3': 9.630767657918822,
'psiTPTE22': 3.155036774675642},
'TCGA-Y8-A8S0-01A': {'ZYG11B': 9.991527373089331,
'ZYX': 12.602417419256271,
'ZZEF1': 10.4181662247631,
'ZZZ3': 9.558068606793018,
'psiTPTE22': 8.032538942350206},
'TCGA-Y8-A8S1-01A': {'ZYG11B': 9.00642622908457,
'ZYX': 13.08822035558983,
'ZZEF1': 11.091529865870283,
'ZZZ3': 7.709329928774525,
'psiTPTE22': 4.55896554346589},
'category': {'ZYG11B': 'mRNA',
'ZYX': 'mRNA',
'ZZEF1': 'mRNA',
'ZZZ3': 'mRNA',
'psiTPTE22': 'mRNA'}})
</code></pre>
<p><code>meth</code></p>
<pre><code>meth = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'cg09560658': 0.939571238883928,
'cg09560763': 0.494413413161009,
'cg09560811': 0.9097565027488,
'cg09560911': 0.031638387180189,
'cg09560953': 0.851164164393655},
'TCGA-Y8-A8RZ-01A': {'cg09560658': 0.929089720009317,
'cg09560763': 0.301740989582562,
'cg09560811': 0.920238344141844,
'cg09560911': 0.0304795189432937,
'cg09560953': 0.707673764192998},
'TCGA-Y8-A8S0-01A': {'cg09560658': 0.932435869367479,
'cg09560763': 0.235758339404136,
'cg09560811': 0.924803871437567,
'cg09560911': 0.0255867247450433,
'cg09560953': 0.721923173082175},
'TCGA-Y8-A8S1-01A': {'cg09560658': 0.910527920556733,
'cg09560763': 0.731030638674928,
'cg09560811': 0.929761655129724,
'cg09560911': 0.0234602952079715,
'cg09560953': 0.835676721188431},
'category': {'cg09560658': 'Methylation',
'cg09560763': 'Methylation',
'cg09560811': 'Methylation',
'cg09560911': 'Methylation',
'cg09560953': 'Methylation'}})
</code></pre>
<p><code>protein</code></p>
<pre><code>protein = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'p62-LCK-ligand': -0.85743991575,
'p70S6K': 0.11706638225,
'p70S6K_pT389': -0.20945653625,
'p90RSK': -0.03276679775,
'p90RSK_pT359_S363': -0.35120344275},
'TCGA-Y8-A8RZ-01A': {'p62-LCK-ligand': 0.48058468225,
'p70S6K': -0.34041949075,
'p70S6K_pT389': 0.12322377375,
'p90RSK': -0.17832512275,
'p90RSK_pT359_S363': -0.0444110847500001},
'TCGA-Y8-A8S0-01A': {'p62-LCK-ligand': -0.443653053,
'p70S6K': 0.330332598,
'p70S6K_pT389': 0.0048678305,
'p90RSK': 0.373424473,
'p90RSK_pT359_S363': -0.237274864},
'TCGA-Y8-A8S1-01A': {'p62-LCK-ligand': 0.892347429,
'p70S6K': -0.398764372,
'p70S6K_pT389': 0.8054628965,
'p90RSK': -0.039002197,
'p90RSK_pT359_S363': 0.804770661},
'category': {'p62-LCK-ligand': 'Protein',
'p70S6K': 'Protein',
'p70S6K_pT389': 'Protein',
'p90RSK': 'Protein',
'p90RSK_pT359_S363': 'Protein'}})
</code></pre>
<p>Expected output (example):</p>
<p><code>mut_fs</code></p>
<pre><code>mut_fs = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'KNL1': 0,
'MEGF8': 0,
'FREM2': 0,
'SPEN': 0},
'TCGA-Y8-A8S0-01A': {'KNL1': 0,
'MEGF8': 0,
'FREM2': 0,
'SPEN': 0},
'TCGA-Y8-A8S1-01A': {'KNL1': 0,
'MEGF8': 0,
'FREM2': 0,
'SPEN': 0},
'category': {'KNL1': 'Mutation',
'MEGF8': 'Mutation',
'FREM2': 'Mutation',
'SPEN': 'Mutation'}})
</code></pre>
<p><code>mirna_fs</code></p>
<pre><code>mirna_fs = pd.DataFrame({'TCGA-Y8-A8RY-01A': {'hsa-miR-664a-3p': 2.460083880550082,
'hsa-miR-1307-3p': 3.287550991864731,
'hsa-miR-1976': 1.962887971659645,
'hsa-miR-3607-3p': 2.10690806575631},
'TCGA-Y8-A8S0-01A': {'hsa-miR-664a-3p': 2.577934740987889,
'hsa-miR-1307-3p': 3.196635506896576,
'hsa-miR-1976': 0.7959878740242344,
'hsa-miR-3607-3p': 2.0907950222445617},
'TCGA-Y8-A8S1-01A': {'hsa-miR-664a-3p': 2.4871912626414576,
'hsa-miR-1307-3p': 3.3312863379291127,
'hsa-miR-1976': 1.964206800367793,
'hsa-miR-3607-3p': 2.004685616955679},
'category': {'hsa-miR-664a-3p': 'miRNA',
'hsa-miR-1307-3p': 'miRNA',
'hsa-miR-1976': 'miRNA',
'hsa-miR-3607-3p': 'miRNA'}})
</code></pre>
|
<python><pandas>
|
2023-03-02 12:15:58
| 1
| 1,545
|
Anon
|
75,615,203
| 5,661,316
|
Create a list of values from an existing list if values are nearby
|
<p>I have a list of dictionaries</p>
<pre><code>[{"Name": 'A', "Area": 10000, "Price": 100},
{"Name": 'B', "Area": 9500, "Price": 99},
{"Name": 'C', "Area": 11000, "Price": 101},
{"Name": 'D', "Area": 12000, "Price": 150},
{"Name": 'E', "Area": 14000, "Price": 200},
{"Name": 'F', "Area": 14500, "Price": 400},
{"Name": 'G', "Area": 12999, "Price": 159}]
</code></pre>
<p>I'd like to create 2 new lists of dictionaries based on the two threshold criteria on <em>Area</em> and <em>Price</em> key. At this example the threshold I use is 1000 for <em>Area</em> and 10 for <em>Price</em>, so I'm expecting to receive 2 lists:</p>
<p>Possible duplicates which follow the threshold:</p>
<pre><code>[{"Name": 'A', "Area": 10000, "Price": 100}, #Because it is less than 1000 in Area and less than 10 in Price for B and C items
{"Name": 'B', "Area": 9500, "Price": 99}, #For A and C
{"Name": 'C', "Area": 11000, "Price": 101}, #For A and B
{"Name": 'D', "Area": 12000, "Price": 150}, #For G, but not for C because the price difference is more than 10
{"Name": 'G', "Area": 12999, "Price": 159}] #For D
</code></pre>
<p>The remaining items from original list of dicts which were not selected as duplicates</p>
<pre><code>[{"Name": 'E', "Area": 14000, "Price": 200}, #Not selected because the price difference is more than 10
{"Name": 'F', "Area": 14500, "Price": 400}] #The same
</code></pre>
<p>The only idea which comes to my mind is the naive solution: two nested loops over the list to compare the <em>Area</em> and <em>Price</em> values of each pair. I am sure there is a more <em>pythonic</em> way to solve this.</p>
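<p>A sketch with <code>itertools.combinations</code>, which at least writes the pairwise pass once (still O(n²) comparisons; thresholds are treated as inclusive so that the A/C pair, with an Area difference of exactly 1000, matches the expected output):</p>

```python
from itertools import combinations

items = [{"Name": 'A', "Area": 10000, "Price": 100},
         {"Name": 'B', "Area": 9500, "Price": 99},
         {"Name": 'C', "Area": 11000, "Price": 101},
         {"Name": 'D', "Area": 12000, "Price": 150},
         {"Name": 'E', "Area": 14000, "Price": 200},
         {"Name": 'F', "Area": 14500, "Price": 400},
         {"Name": 'G', "Area": 12999, "Price": 159}]

dup_names = set()
for a, b in combinations(items, 2):  # each unordered pair exactly once
    if abs(a["Area"] - b["Area"]) <= 1000 and abs(a["Price"] - b["Price"]) <= 10:
        dup_names.update((a["Name"], b["Name"]))

duplicates = [d for d in items if d["Name"] in dup_names]
remaining  = [d for d in items if d["Name"] not in dup_names]
print([d["Name"] for d in duplicates])  # ['A', 'B', 'C', 'D', 'G']
print([d["Name"] for d in remaining])   # ['E', 'F']
```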
|
<python><list><loops><dictionary>
|
2023-03-02 12:06:56
| 1
| 373
|
sailestim
|
75,614,858
| 14,594,208
|
Is it possible to assign a Series to a DataFrame and use the Series' name as column name?
|
<p>Given a Series <code>s</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>0 6
1 0
2 0
3 8
4 8
5 10
6 10
7 9
Name: my_series, dtype: int64
</code></pre>
<p>and given a <code>df</code>, would it be possible to assign the series to the <code>df</code> without having
to specify a column name? (The Series name would be used instead)</p>
<p>So, I'd like to <strong>avoid</strong> having to do this explicitly:</p>
<pre class="lang-py prettyprint-override"><code>df['my_series'] = s # avoid
</code></pre>
<p>My mind goes to something like this:</p>
<pre class="lang-py prettyprint-override"><code>pd.concat([df, s.to_frame()], axis=1)
</code></pre>
<p>but I guess it is counterintuitive.</p>
<p>I have also thought of using <code>df.assign</code>, but I think it requires specifying a column name as well.</p>
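<p>For what it's worth, <code>pd.concat</code> accepts the Series directly — no <code>to_frame()</code> needed — and uses its name as the column label, and <code>df.join(s)</code> does the same. A minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
s = pd.Series([3, 4], name="my_series")

out1 = pd.concat([df, s], axis=1)  # column label taken from s.name
out2 = df.join(s)                  # same result via join
print(list(out1.columns))  # ['a', 'my_series']
```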
|
<python><pandas>
|
2023-03-02 11:31:51
| 3
| 1,066
|
theodosis
|
75,614,820
| 6,930,340
|
Nested enums in Python
|
<p>I am trying to implement some kind of nested <code>Enum</code>.</p>
<pre><code>from dataclasses import dataclass
from enum import Enum

class VendorA(Enum):
    """Define price fields for VENDOR_A."""

    OPEN: str = "px_open"
    HIGH: str = "px_high"
    LOW: str = "px_low"
    CLOSE: str = "px_last"

class VendorB(Enum):
    """Define price fields for VENDOR_B."""

    OPEN: str = "open"
    HIGH: str = "high"
    LOW: str = "low"
    CLOSE: str = "close"

class DataVendor(Enum):
    """Define data vendor."""

    VENDOR_A: str = VendorA
    VENDOR_B: str = VendorB
</code></pre>
<p>I then define a new <code>Config</code> class.</p>
<pre><code>@dataclass
class Config:
    """Configuration class."""

    data_vendor: DataVendor
</code></pre>
</code></pre>
<p>After instantiating <code>Config</code>, I would like to directly access the nested price fields of the respective <code>data_vendor</code>.</p>
<p>However, using <code>Enum</code>, you can't directly access the price fields, i.e. the following code breaks with an error: <code>AttributeError: 'DataVendor' object has no attribute 'OPEN'</code></p>
<pre><code>conf = Config(data_vendor=DataVendor.VENDOR_A)
# I would like these assertions to pass.
assert conf.data_vendor.OPEN == "px_open"
assert conf.data_vendor.HIGH == "px_high"
assert conf.data_vendor.LOW == "px_low"
assert conf.data_vendor.CLOSE == "px_last"
</code></pre>
<p>If I were to use just regular classes (don't derive from <code>Enum</code>), the code works perfectly fine.</p>
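<p>For reference, the nested class is reachable — just through the member's <code>.value</code>, since assigning a class as an enum value makes that class an ordinary member value, and then the field strings need a second <code>.value</code> hop. A minimal sketch with the definitions above abbreviated:</p>

```python
from enum import Enum

class VendorA(Enum):
    OPEN = "px_open"
    HIGH = "px_high"

class DataVendor(Enum):
    VENDOR_A = VendorA  # the class itself is the member's value

# Two hops: member -> nested enum class -> field string.
print(DataVendor.VENDOR_A.value.OPEN.value)  # px_open
```

<p>So the assertions would pass spelled as <code>conf.data_vendor.value.OPEN.value == "px_open"</code>. To keep the shorter spelling, one option (a design assumption, not the only one) is a <code>__getattr__</code> on <code>DataVendor</code> that forwards attribute lookups to the member's value.</p>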
|
<python><enums>
|
2023-03-02 11:28:48
| 1
| 5,167
|
Andi
|
75,614,728
| 21,117,172
|
Cuda 12 + tf-nightly 2.12: Could not find cuda drivers on your machine, GPU will not be used, while every checking is fine and in torch it works
|
<ul>
<li><strong>tf-nightly version</strong> = 2.12.0-dev2023203</li>
<li><strong>Python version</strong> = 3.10.6</li>
<li><strong>CUDA drivers version</strong> = 525.85.12</li>
<li><strong>CUDA version</strong> = 12.0</li>
<li><strong>Cudnn version</strong> = 8.5.0</li>
<li>I am using <strong>Linux</strong> (x86_64, Ubuntu 22.04)</li>
<li>I am coding in <strong>Visual Studio Code</strong> on a <strong>venv</strong> virtual environment</li>
</ul>
<p>I am trying to run some models on the GPU (NVIDIA GeForce RTX 3050) using tensorflow nightly 2.12 (to be able to use CUDA 12.0). The problem I have is that every check I make seems to be correct, but in the end the script is not able to detect the GPU. I've dedicated a lot of time trying to see what is happening and nothing seems to work, so any advice or solution will be more than welcome. The GPU does work for torch, as you can see at the very end of the question.</p>
<p>I will show some of the most common CUDA-related checks that I did (Visual Studio Code terminal); I hope you find them useful:</p>
<ol>
<li><p><strong>Check CUDA version:</strong></p>
<p><code>$ nvcc --version</code></p>
<pre><code>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
</code></pre>
</li>
<li><p><strong>Check if the connection with the CUDA libraries is correct:</strong></p>
<p><code>$ echo $LD_LIBRARY_PATH</code></p>
<pre><code>/usr/cuda/lib
</code></pre>
</li>
<li><p><strong>Check nvidia drivers for the GPU and check if GPU is readable for the venv:</strong></p>
<p><code>$ nvidia-smi</code></p>
<pre><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 On | N/A |
| N/A 40C P5 6W / 20W | 46MiB / 4096MiB | 22% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1356 G /usr/lib/xorg/Xorg 45MiB |
+-----------------------------------------------------------------------------+
</code></pre>
</li>
<li><p><strong>Add cuda/bin PATH and Check it:</strong></p>
<p><code>$ export PATH="/usr/local/cuda/bin:$PATH"</code></p>
<p><code>$ echo $PATH</code></p>
<pre><code>/usr/local/cuda-12.0/bin:/home/victus-linux/Escritorio/MasterThesis_CODE/to_share/venv_master/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin
</code></pre>
</li>
<li><p><strong>Custom function to check if CUDA is correctly installed: [<a href="https://stackoverflow.com/questions/31326015/how-to-verify-cudnn-installation">function by Sherlock</a>]</strong></p>
<pre class="lang-bash prettyprint-override"><code>function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' <<< $LD_LIBRARY_PATH) 2>/dev/null | grep $1; }
function check() { lib_installed $1 && echo "$1 is installed" || echo "ERROR: $1 is NOT installed"; }
check libcuda
check libcudart
</code></pre>
<pre><code>libcudart.so.12 -> libcudart.so.12.0.146
libcuda.so.1 -> libcuda.so.525.85.12
libcuda.so.1 -> libcuda.so.525.85.12
libcudadebugger.so.1 -> libcudadebugger.so.525.85.12
libcuda is installed
libcudart.so.12 -> libcudart.so.12.0.146
libcudart is installed
</code></pre>
</li>
<li><p><strong>Custom function to check if Cudnn is correctly installed: [<a href="https://stackoverflow.com/questions/31326015/how-to-verify-cudnn-installation">function by Sherlock</a>]</strong></p>
<pre class="lang-bash prettyprint-override"><code>function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' <<< $LD_LIBRARY_PATH) 2>/dev/null | grep $1; }
function check() { lib_installed $1 && echo "$1 is installed" || echo "ERROR: $1 is NOT installed"; }
check libcudnn
</code></pre>
<pre><code> libcudnn_cnn_train.so.8 -> libcudnn_cnn_train.so.8.8.0
libcudnn_cnn_infer.so.8 -> libcudnn_cnn_infer.so.8.8.0
libcudnn_adv_train.so.8 -> libcudnn_adv_train.so.8.8.0
libcudnn.so.8 -> libcudnn.so.8.8.0
libcudnn_ops_train.so.8 -> libcudnn_ops_train.so.8.8.0
libcudnn_adv_infer.so.8 -> libcudnn_adv_infer.so.8.8.0
libcudnn_ops_infer.so.8 -> libcudnn_ops_infer.so.8.8.0
libcudnn is installed
</code></pre>
</li>
</ol>
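<p>One detail that stands out across checks 2 and 4: <code>LD_LIBRARY_PATH</code> prints <code>/usr/cuda/lib</code>, while the working binaries live under <code>/usr/local/cuda-12.0</code>. A small hedged helper (hypothetical, pure standard library) to confirm whether any directory actually on <code>LD_LIBRARY_PATH</code> contains the runtime library TensorFlow tries to dlopen:</p>

```python
import glob
import os

def dirs_with_lib(ld_path: str, pattern: str = "libcudart.so*"):
    """Return the directories in a colon-separated path that contain `pattern`."""
    return [d for d in ld_path.split(":") if d and glob.glob(os.path.join(d, pattern))]

print(dirs_with_lib(os.environ.get("LD_LIBRARY_PATH", "")))
```

<p>If this prints an empty list while <code>ldconfig</code> finds the libraries elsewhere, pointing <code>LD_LIBRARY_PATH</code> at the real CUDA lib directory (e.g. <code>/usr/local/cuda-12.0/lib64</code>, an assumption about the layout) is worth trying before anything else.</p>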
<p>So, once I had done these checks, I used a script to evaluate whether everything was finally OK, and then the following error appeared:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
print(f'\nTensorflow version = {tf.__version__}\n')
print(f'\n{tf.config.list_physical_devices("GPU")}\n')
</code></pre>
<pre><code>2023-03-02 12:05:09.463343: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-03-02 12:05:09.489911: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-03-02 12:05:09.490522: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-02 12:05:10.066759: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Tensorflow version = 2.12.0-dev20230203
2023-03-02 12:05:10.748675: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-03-02 12:05:10.771263: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
[]
</code></pre>
<p><strong>Extra check:</strong> I tried to run a checking script on torch and in here it worked so I guess the problem is related with tensorflow/tf-nightly</p>
<pre class="lang-py prettyprint-override"><code>import torch
print(f'\nAvailable cuda = {torch.cuda.is_available()}')
print(f'\nGPUs availables = {torch.cuda.device_count()}')
print(f'\nCurrent device = {torch.cuda.current_device()}')
print(f'\nCurrent Device location = {torch.cuda.device(0)}')
print(f'\nName of the device = {torch.cuda.get_device_name(0)}')
</code></pre>
<pre><code>Available cuda = True
GPUs availables = 1
Current device = 0
Current Device location = <torch.cuda.device object at 0x7fbe26fd2ec0>
Name of the device = NVIDIA GeForce RTX 3050 Laptop GPU
</code></pre>
<p>Please, if you know something that might help solve this issue, don't hesitate on telling me.</p>
|
<python><tensorflow><gpu>
|
2023-03-02 11:19:35
| 11
| 592
|
JaimeCorton
|
75,614,708
| 8,900,445
|
How to improve text similarity/classification performance when classes are semantically similar?
|
<p>I have an NLP classification problem whereby I want to match an input string (a question) to the most suitable string from a list of reference strings (FAQs), or abstain if confidence in a classification is low.</p>
<p>I have an existing function that uses <code>distilbert-base-uncased</code> embeddings and cosine similarity, which performs OK. However, the similarity scores are typically high for all reference strings, which is a consequence of them all being semantically similar. The strings themselves are all on a particular topic (e.g., "What is X?", "How can I prevent X?", "What are the symptoms of X?", "How can I tell if X is happening?"), so this isn't exactly surprising.</p>
<p>What techniques can I use to improve performance here? I do not have any training data, so fine-tuning is out. I can obviously try different language models and similarity measures, but it's difficult to determine whether this is going to have any noticeable impact.</p>
<p><strong>Are there any statistical or additional NLP techniques people can recommend for this problem?</strong></p>
<p>My existing function is as follows:</p>
<pre><code>from transformers import AutoTokenizer, AutoModel
import torch
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def mapping(user_input: str, abstain_threshold: float,
            language_model='distilbert-base-uncased', faqs_file='faqs.txt'):
    # Load the pre-trained transformer model and tokenizer
    tokenizer = AutoTokenizer.from_pretrained(language_model)
    model = AutoModel.from_pretrained(language_model)

    # Load the FAQ list
    with open(faqs_file, 'r') as f:
        faqs = [line.strip() for line in f]

    # Tokenize the user input and FAQs
    user_input_tokens = tokenizer.encode(user_input, add_special_tokens=True)
    faq_tokens = [tokenizer.encode(faq, add_special_tokens=True) for faq in faqs]

    # Pad the tokenized sequences to the same length
    max_len = max(len(tokens) for tokens in faq_tokens + [user_input_tokens])
    user_input_tokens = user_input_tokens + [0] * (max_len - len(user_input_tokens))
    faq_tokens = [tokens + [0] * (max_len - len(tokens)) for tokens in faq_tokens]

    # Convert the tokenized sequences to PyTorch tensors
    user_input_tensor = torch.tensor(user_input_tokens).unsqueeze(0)
    faq_tensors = [torch.tensor(tokens).unsqueeze(0) for tokens in faq_tokens]

    # Pass the user input and FAQs through the transformer model
    with torch.no_grad():
        user_input_embedding = model(user_input_tensor)[0][:, 0, :]
        faq_transformer_embeddings = [model(faq_tensor)[0][:, 0, :] for faq_tensor in faq_tensors]

    # Use cosine similarity to get the best match
    faq_similarity_scores = []
    for faq_transformer_embedding in faq_transformer_embeddings:
        similarity = cosine_similarity(user_input_embedding, faq_transformer_embedding)
        print(similarity)
        faq_similarity_scores.append(similarity)

    # Find the most similar FAQ
    max_score_index = np.argmax(faq_similarity_scores)
    max_score = faq_similarity_scores[max_score_index]
    best_match = faqs[max_score_index]

    # Check if the model abstains
    if max_score >= abstain_threshold:
        return best_match
    else:
        return None
</code></pre>
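<p>One statistical tweak that requires no training data: when all cosine scores sit uniformly high, an absolute threshold discriminates poorly, so abstention can instead be based on the margin between the best and second-best score. A sketch of that heuristic (the margin value is an assumption to tune on held-out questions):</p>

```python
import numpy as np

def should_abstain(scores, margin=0.05):
    """Abstain when the best score barely beats the runner-up."""
    top2 = np.sort(np.asarray(scores, dtype=float))[-2:]
    return bool(top2[1] - top2[0] < margin)

print(should_abstain([0.80, 0.82, 0.97]))   # False: one FAQ is clearly ahead
print(should_abstain([0.95, 0.96, 0.965]))  # True: too close to call
```

<p>Other no-training options worth a look: mean-pooling the token embeddings instead of taking only the CLS vector, and models pre-trained specifically for sentence similarity rather than a general-purpose encoder.</p>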
|
<python><nlp><huggingface-transformers><similarity><text-classification>
|
2023-03-02 11:17:07
| 0
| 895
|
cookie1986
|
75,614,647
| 3,348,261
|
How to generate all pxq matrices with n values "1" with no columns or lines having two "1" (chess n-towers problem)
|
<p>I'm looking for a way to generate all (p,q) matrices having exactly n "1" values, with no column or row having more than one "1" (a kind of chess n-rooks problem).</p>
<p>Here is a simple example in python for p=4, q=4 and n=2:</p>
<pre><code>for i1 in range(4*4):
    x1, y1 = i1//4, i1%4
    for i2 in range(i1+1, 4*4):
        x2, y2 = i2//4, i2%4
        if x1 != x2 and y1 != y2:
            print(x1, y1, x2, y2)
</code></pre>
</code></pre>
<p>It is not really efficient, and it would be cumbersome to write for n=9 (for example). Is there a vectorized way of doing it?</p>
<p>Also, what would be the formula to count them?</p>
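<p>A general sketch that avoids hand-written nested loops: choose which n rows hold a "1", then an ordered selection of n distinct columns. This also yields the counting formula C(p,n) · C(q,n) · n! directly (checked below against the p=q=4, n=2 case):</p>

```python
from itertools import combinations, permutations
from math import comb, factorial

def placements(p, q, n):
    """Yield all placements of n non-attacking "1"s on a p x q board."""
    for rows in combinations(range(p), n):      # which n rows are used
        for cols in permutations(range(q), n):  # a distinct column per row
            yield tuple(zip(rows, cols))

count = sum(1 for _ in placements(4, 4, 2))
print(count, comb(4, 2) * comb(4, 2) * factorial(2))  # 72 72
```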
|
<python><numpy>
|
2023-03-02 11:12:23
| 1
| 712
|
Nicolas Rougier
|
75,614,430
| 11,130,088
|
Override DOM style for Tabs, header level
|
<p>I have been trying to customize my Tab widget by overriding its CSS with a style.css, as I used to do with previous Bokeh versions, but since 3.0 the same approach does not work. I tried css_classes, styles, and stylesheet; the only one that worked was styles, to which I can pass a dictionary that overrides the root values.
My problem is that I am trying to reach the header level to change the border-bottom-color, or even set border-bottom to 0px.</p>
<pre><code>:host(.bk-above) .bk-header {
  border-bottom: 1px solid #0e0922;
  border-bottom-width: 1px;
  border-bottom-style: solid;
  border-bottom-color: rgb(14, 9, 34);
}
</code></pre>
<p>I inspected the element in the console, I checked the bokeh documentation <a href="https://github.com/bokeh/bokeh/blob/3.0.3/bokehjs/src/less/tabs.less" rel="nofollow noreferrer">css_widgets</a>. I have tried a lot of different approaches and I can’t get it to work.</p>
<p>Minimal sample:</p>
<pre><code>from bokeh.models import TabPanel, Tabs
from bokeh.plotting import figure, show
#dir(Tabs)
#Tabs.parameters()
p1, p2 = figure(), figure()
p1.background_fill_color = '#010727'
p1.border_fill_color = '#010727'
p1.outline_line_color = '#010727'
p2.background_fill_color = '#010727'
p2.border_fill_color = '#010727'
p2.outline_line_color = '#010727'
tab1 = TabPanel(child=p1, title="Fastest Lap")
tab2 = TabPanel(child=p2, title="Median Lap")
tabs_ = Tabs(tabs=[tab1, tab2], styles={ 'font-size':'15px', 'border-bottom': '0px', 'align':'right', 'header border-bottom-color':'green', 'border-bottom-color':'green', 'background-color':'#010727', 'color':'green'})
show(tabs_)
#dir(Tabs.styles)
#Tabs.styles.get_value(tabs_)
</code></pre>
<p><a href="https://i.sstatic.net/vW0Db.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vW0Db.png" alt="the lower border remove" /></a></p>
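<p>For context: in Bokeh 3.x, widgets render into a shadow DOM, so the <code>styles</code> dictionary only reaches the widget's root element; header-level rules such as the border have to be delivered through the widget's <code>stylesheets</code> property instead (e.g. <code>Tabs(..., stylesheets=[InlineStyleSheet(css=...)])</code> with <code>from bokeh.models import InlineStyleSheet</code>). A hedged sketch of the CSS, assuming the <code>.bk-header</code> class name from the linked tabs.less:</p>

```css
/* Passed as InlineStyleSheet(css="...") to Tabs(stylesheets=[...]) */
:host(.bk-above) .bk-header {
  border-bottom: none;
}
```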
|
<python><css><tabs><bokeh>
|
2023-03-02 10:50:19
| 1
| 572
|
ReinholdN
|
75,614,405
| 1,935,611
|
mypy indexing pd.DataFrame with an Enum raises no overload variant error
|
<h4>The issue</h4>
<p>Mypy gives a "No overload variant of <code>__getitem__</code> of "DataFrame" matches argument type "MyEnum"" error. In this case the argument type is an Enum, but the issue would occur for any other custom type. Here is the signature of <code>__getitem__</code> below.</p>
<pre><code>def __getitem__(self, Union[str, bytes, date, datetime, timedelta, bool, int, float, complex, Timestamp, Timedelta], /) -> Series[Any]
</code></pre>
<h4>To reproduce</h4>
<p>Here is a script (namely <em>mypy_enum.py</em>) creating a pandas dataframe with enums as columns.</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
import pandas as pd
class MyEnum(Enum):
TAYYAR = "tayyar"
HAYDAR = "haydar"
df = pd.DataFrame(data = [[12.2, 10], [8.8, 15], [22.1, 14]], columns=[MyEnum.TAYYAR, MyEnum.HAYDAR])
print(df[MyEnum.TAYYAR])
</code></pre>
<p>Here's the output when you call it. It works as expected, all is well.</p>
<pre class="lang-bash prettyprint-override"><code>> python mypy_enum.py
0 12.2
1 8.8
2 22.1
Name: MyEnum.TAYYAR, dtype: float64
</code></pre>
<p>When you call it with <code>mypy</code> however;</p>
<pre class="lang-bash prettyprint-override"><code>> mypy mypy_enum.py
mypy_enum.py:12: error: No overload variant of "__getitem__" of "DataFrame" matches argument type "MyEnum" [call-overload]
mypy_enum.py:12: note: Possible overload variants:
mypy_enum.py:12: note: def __getitem__(self, Union[str, bytes, date, datetime, timedelta, bool, int, float, complex, Timestamp, Timedelta], /) -> Series[Any]
mypy_enum.py:12: note: def __getitem__(self, slice, /) -> DataFrame
mypy_enum.py:12: note: def [ScalarT] __getitem__(self, Union[Tuple[Any, ...], Series[bool], DataFrame, List[str], List[ScalarT], Index, ndarray[Any, dtype[str_]], ndarray[Any, dtype[bool_]], Sequence[Tuple[Union[str, bytes, date, datetime, timedelta, bool, int, float, complex, Timestamp, Timedelta], ...]]], /) -> DataFrame
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Shouldn't <code>__getitem__</code> be supporting the column type itself? How can this be addressed?</p>
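<p>Pending better support in the stubs, one hedged workaround (my suggestion, not an official pandas-stubs recipe) is to cast the key so mypy accepts it; runtime behavior is unchanged:</p>

```python
from enum import Enum
from typing import Any, cast

import pandas as pd

class MyEnum(Enum):
    TAYYAR = "tayyar"
    HAYDAR = "haydar"

df = pd.DataFrame(data=[[12.2, 10], [8.8, 15], [22.1, 14]],
                  columns=[MyEnum.TAYYAR, MyEnum.HAYDAR])

# cast(Any, ...) is a no-op at runtime; it only silences the overload error.
column = df[cast(Any, MyEnum.TAYYAR)]
```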
|
<python><pandas><mypy>
|
2023-03-02 10:48:09
| 1
| 2,027
|
anilbey
|
75,614,368
| 3,668,129
|
How to get the chunk times from pydub split_on_silence?
|
<p>I'm using <code>split_on_silence</code> to split a mp3 file to multiple segments:</p>
<pre><code>sound = AudioSegment.from_mp3(TEST_FILE)
audio_chunks = split_on_silence(sound, min_silence_len=300, keep_silence=50, silence_thresh=-40 )
</code></pre>
<p>Is it possible (How can I do it) to get the origin start-stop times for each chunk?</p>
|
<python><pydub>
|
2023-03-02 10:45:25
| 1
| 4,880
|
user3668129
|
75,614,143
| 8,747,828
|
500: internal server error with Jupyter Notebook (nbconvert updated)
|
<p>I am getting a pretty well-documented error when trying to run a Jupyter Notebook from my Mac Monterey 12.3.1, the 500 internal server error.</p>
<p>This appears to be the problem:</p>
<p><code>ImportError: cannot import name 'contextfilter' from 'jinja2' (/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/jinja2/__init__.py)</code></p>
<p>Which, from what I read, is related to some sort of conflict with certain versions of <code>nbconvert</code>, so I upgraded it as is often suggested. However, it continues to give me the same error.</p>
<p>Does anyone have any other suggestions? Even if I make a fresh virtual environment and start with completely new package install I continue to get this error. I'm not using Conda, just pip to install packages.</p>
|
<python><jupyter-notebook><nbconvert>
|
2023-03-02 10:26:21
| 1
| 565
|
hmnoidk
|
75,614,029
| 2,036,464
|
How to run Python code in the background of my web/html pages, directly on internet?
|
<p>Is it possible to run Python code in the background of my web/HTML pages, directly on the internet?</p>
<p>Suppose I wrote some code that shuffles (mixes) words, and I want to make an HTML page that runs that Python code in the background, so the page works as an application. Every time someone accesses that page, the Python code will do the work.</p>
<p>Is it possible?</p>
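<p>Yes; this is what server-side web frameworks do. A minimal standard-library sketch (names like <code>shuffle_words</code> are my own) that runs Python for every request to a page:</p>

```python
import random
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

def shuffle_words(text):
    # The "application logic": return the words of `text` in random order.
    words = text.split()
    random.shuffle(words)
    return " ".join(words)

class ShuffleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?text=hello+world serves the shuffled words as plain text.
        query = urllib.parse.urlparse(self.path).query
        text = urllib.parse.parse_qs(query).get("text", [""])[0]
        body = shuffle_words(text).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

# To serve it for real: HTTPServer(("", 8000), ShuffleHandler).serve_forever()
```

In practice a framework such as Flask or Django (plus a host that runs Python) is the usual way to put this on the internet.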
|
<python><html><python-3.x>
|
2023-03-02 10:15:51
| 1
| 1,065
|
Just Me
|
75,614,021
| 9,021,547
|
Pandas dataframe mutability with loc method
|
<p>I am trying to understand the intricacies of using <code>loc</code> on a dataframe. Suppose we have the following:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6]})
df2 = df.loc[:,'a']
df2.loc[0] = 10
print(df)
print(df2)
a b
0 10 4
1 2 5
2 3 6
0 10
1 2
2 3
Name: a, dtype: int64
df = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6]})
df3 = df.loc[:,['a']]
df3.loc[0] = 10
print(df)
print(df3)
a b
0 1 4
1 2 5
2 3 6
a
0 10
1 2
2 3
</code></pre>
<p>Why does the first piece of code modify the original dataframe, whereas the second does not?</p>
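<p>Whether a <code>.loc</code> slice returns a view or a copy is an implementation detail (single-column selection can share the underlying block, list selection copies), and it changes again under pandas' copy-on-write mode. The hedged, version-proof advice is to take an explicit <code>.copy()</code> whenever an independent object is intended:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# An explicit copy is guaranteed to be independent of df,
# regardless of pandas version or copy-on-write settings.
df2 = df.loc[:, 'a'].copy()
df2.loc[0] = 10

# df is untouched; only the copy changed.
```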
|
<python><pandas><mutable>
|
2023-03-02 10:14:53
| 1
| 421
|
Serge Kashlik
|
75,613,902
| 6,854,595
|
How to implement a base method that performs validation based on the child generic type in Python
|
<p>I have a base Python (3.8) abstract base class, with two classes inheriting from it:</p>
<pre class="lang-py prettyprint-override"><code>BoundedModel = TypeVar("BoundedModel", bound=CustomBaseModel)
class BaseDataStore(ABC, Generic[BoundedModel]):
def __init__(self, resource_name: str) -> None:
self.client = client(resource_name)
@abstractmethod
def get_all(self) -> List[BoundedModel]:
pass
class MetadataStore(BaseDataStore[Metadata]):
def get_all(self) -> List[Metadata]:
items = self.client.get_all()
return [Metadata(**item) for item in items]
class TranscriptStore(BaseDataStore[Transcript]):
def get_all(self) -> List[Transcript]:
items = self.client.get_all()
return [Transcript(**item) for item in items]
</code></pre>
<p>The <code>CustomBaseModel</code> bound for <code>BoundedModel</code> represents a pydantic class, meaning
that <code>Metadata</code> and <code>Transcript</code> are pydantic class models used for validation.</p>
<p>The concrete implementations of <code>get_all</code> all do the exact same thing:
they validate the data with the Pydantic bounded model. This works, but forces me
to spell out the concrete implementation for each <code>BaseDataStore</code> child.</p>
<p>Is there any way that I could implement <code>get_all</code> as a generic method (rather than abstract) in the parent <code>BaseDataStore</code>, therefore removing the need for concrete implementations in the children?</p>
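<p>One possible approach (a sketch of mine, not the only option) is to recover the concrete model class from <code>__orig_bases__</code> in <code>__init_subclass__</code>, so <code>get_all</code> can live once in the base; <code>CustomBaseModel</code> and <code>Metadata</code> below are plain stand-ins for the real pydantic models, and the client is injected for illustration:</p>

```python
from typing import Any, Dict, Generic, List, Type, TypeVar, get_args

class CustomBaseModel:
    # Stand-in for the real pydantic base class.
    def __init__(self, **data: Any) -> None:
        self.__dict__.update(data)

BoundedModel = TypeVar("BoundedModel", bound=CustomBaseModel)

class BaseDataStore(Generic[BoundedModel]):
    _model: Type[CustomBaseModel]

    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        # Pull the concrete model out of e.g. BaseDataStore[Metadata].
        for base in getattr(cls, "__orig_bases__", ()):
            args = get_args(base)
            if args:
                cls._model = args[0]

    def __init__(self, client: Any) -> None:
        self.client = client

    def get_all(self) -> List[BoundedModel]:
        # One shared implementation: validate every item with the bound model.
        return [self._model(**item) for item in self.client.get_all()]

class Metadata(CustomBaseModel):
    pass

class MetadataStore(BaseDataStore[Metadata]):
    pass  # no per-child get_all needed
```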
|
<python><mypy><pydantic>
|
2023-03-02 10:04:10
| 2
| 540
|
alexcs
|
75,613,896
| 4,576,519
|
How to get the gradients of network parameters for a derivative-based loss?
|
<p>I have a network <code>y(x)</code> for which I have a given dataset <code>dy(x)</code>. That is, I know the derivative of <code>y</code> for a certain <code>x</code> but I do not know <code>y</code> itself. A minimal example of this is:</p>
<pre class="lang-py prettyprint-override"><code>import torch
# Define network to predict y(x)
network = torch.nn.Sequential(
torch.nn.Linear(1, 50),
torch.nn.Tanh(),
torch.nn.Linear(50, 1)
)
# Define dataset dy(x) = x, which corresponds to y = 0.5x^2
x = torch.linspace(0,1,100).reshape(-1,1)
dy = x
# Calculate loss based on derivative of prediction for y
x.requires_grad=True
y_pred = network(x)
dy_pred = torch.autograd.grad(y_pred, x, grad_outputs=torch.ones_like(y_pred), create_graph=True)[0]
loss = torch.mean((dy-dy_pred)**2)
# This throws an error
gradients = torch.autograd.grad(loss, network.parameters())[0]
</code></pre>
<p>At the last line, it gives the error <code>One of the differentiated Tensors appears to not have been used in the graph</code>, even though the parameters have definitely been used to calculate the loss. Interestingly, when I use <code>torch.optim.Adam</code> on the loss with <code>loss.backward()</code>, no error occurs. How can I fix my error? <strong>Note: defining a network to predict <code>dy</code> directly is not an option for my actual problem</strong>.</p>
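<p>A plausible explanation (my reading, not an authoritative one): <code>dy_pred</code> only depends on the derivative of the network with respect to <code>x</code>, so the final layer's bias, which adds a constant to <code>y_pred</code>, never enters the loss graph; that is exactly what the error reports. <code>loss.backward()</code> tolerates unused parameters (they simply keep <code>grad=None</code>), while <code>autograd.grad</code> is strict unless told otherwise. If so, <code>allow_unused=True</code> makes the call succeed:</p>

```python
import torch

network = torch.nn.Sequential(
    torch.nn.Linear(1, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)

x = torch.linspace(0, 1, 100).reshape(-1, 1)
dy = x
x.requires_grad = True

y_pred = network(x)
dy_pred = torch.autograd.grad(
    y_pred, x, grad_outputs=torch.ones_like(y_pred), create_graph=True
)[0]
loss = torch.mean((dy - dy_pred) ** 2)

# allow_unused=True tolerates parameters that never reach the loss graph
# (here: the output-layer bias, a constant offset that vanishes in dy/dx).
gradients = torch.autograd.grad(loss, list(network.parameters()),
                                allow_unused=True)
```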
|
<python><pytorch><gradient><backpropagation><automatic-differentiation>
|
2023-03-02 10:03:46
| 1
| 6,829
|
Thomas Wagenaar
|
75,613,857
| 3,146,304
|
Colab 'ascii' codec can't decode byte: ordinal not in range(128) with encoding="utf-8"
|
<p>I am struggling with an issue that I have only on Colab and not on my machine.
I am reading some JSON files and it throws a <code>UnicodeDecodeError</code>:</p>
<p><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 418: ordinal not in range(128)</code></p>
<p>which is not totally surprising since these files actually contain non-ASCII characters, and in effect, in that position on that file there is that non-ASCII character.
But I typically solve the issue by adding <code>encoding="utf-8"</code> parameter to the <code>json.load</code> function, while here it does not have any effect. Do you have any idea on how I could solve this?</p>
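<p>One detail worth checking: <code>json.load</code> itself has no working <code>encoding</code> parameter on Python 3 (older versions accepted and ignored it; it was removed in 3.9), so the encoding has to be declared on <code>open()</code>. The Colab difference is likely a default-locale issue. A self-contained sketch:</p>

```python
import json
import tempfile

payload = {"text": "café naïve"}   # non-ASCII content

# Write a UTF-8 JSON file (stands in for the files being read on Colab).
with tempfile.NamedTemporaryFile("w", suffix=".json", encoding="utf-8",
                                 delete=False) as f:
    json.dump(payload, f, ensure_ascii=False)
    path = f.name

# The fix: declare the encoding when opening, not when parsing.
with open(path, encoding="utf-8") as f:
    data = json.load(f)
```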
|
<python><character-encoding><google-colaboratory>
|
2023-03-02 09:59:49
| 1
| 389
|
Vitto
|
75,613,691
| 102,957
|
Given a Flask app object, how can I obtain the templates folder absolute path?
|
<p>After initializing the app, I would like to determine the absolute path of the 'templates' folder (to read a yaml file located inside it).</p>
<pre><code>app = Flask(__name__)
</code></pre>
<p>for example, I'm looking for something like this:</p>
<pre><code>templates_path = app.get_absolute_path_to_templates()
</code></pre>
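<p>Flask exposes the two pieces directly: <code>app.root_path</code> is the application's directory and <code>app.template_folder</code> the (relative) templates folder, so joining them gives the absolute path. A sketch:</p>

```python
import os

from flask import Flask

app = Flask(__name__)

# Absolute path to this app's templates directory.
templates_path = os.path.join(app.root_path, app.template_folder)
```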
|
<python><flask>
|
2023-03-02 09:45:58
| 1
| 8,855
|
DanC
|
75,613,534
| 380,111
|
is there a python way of listing all the properties
|
<p>I'm really new to Python so I'm not 100% sure about the terminology. I tried googling, but none of the answers I found work for my use case.</p>
<p>I've made some code to list out all my emails</p>
<pre class="lang-py prettyprint-override"><code>import win32com.client
import pandas as pd
outlook = win32com.client.Dispatch('Outlook.Application').GetNamespace('MAPI')
inbox = outlook.GetDefaultFolder(6) # "6" refers to the index of the inbox folder
emails = inbox.Items
restricted_emails = emails.Restrict("@SQL=urn:schemas:httpmail:subject LIKE '%SEDIT%'")
for email in restricted_emails:
print(email.Subject)
print(email.SentOn)
</code></pre>
<p>What I was wondering: is there a way to list out all the different properties/attributes within <code>email</code> without me needing to look up the documentation?</p>
<p>something that will produce an output like</p>
<p>email.Subject
email.SentOn
email.To
email.Body</p>
<p>Ive tried</p>
<pre class="lang-py prettyprint-override"><code>>>> dir(email)
['_ApplyTypes_', '_FlagAsMethod', '_LazyAddAttr_', '_NewEnum', '_Release_', '__AttrToID__', '__LazyMap__', '__bool__', '__call__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__int__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_builtMethods_', '_enum_', '_find_dispatch_type_', '_get_good_object_', '_get_good_single_object_', '_lazydata_', '_make_method_', '_mapCachedItems_', '_oleobj_', '_olerepr_', '_print_details_', '_proc_', '_unicode_to_string_', '_username_', '_wrap_dispatch_']
>>> import inspect
>>> inspect.getmembers(email)
</code></pre>
<p>but I can't make sense of the outputs.</p>
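<p>For ordinary Python objects, filtering <code>dir()</code> down to public names is usually enough; note, though, that win32com late-bound COM objects do not expose their COM properties through <code>dir()</code> at all, which is why the output above looks like internal plumbing. A sketch of the generic approach on a plain stand-in object:</p>

```python
def public_attributes(obj):
    # Everything dir() reports that is not a private/dunder name.
    return [name for name in dir(obj) if not name.startswith("_")]

class Email:
    # Stand-in object; a real COM MailItem would need the Outlook
    # object-model reference instead of dir().
    def __init__(self):
        self.Subject = "hello"
        self.SentOn = "2023-03-02"
```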
|
<python><python-3.x>
|
2023-03-02 09:31:30
| 1
| 1,565
|
Nathaniel Saxe
|
75,613,521
| 7,865,686
|
Get data to labelbox via cloud function
|
<p>I want to write a Google Cloud Function to retrieve data from a bucket and upload it to Labelbox. Here's the function code:</p>
<pre><code>import labelbox
from labelbox import Client, Dataset
import os
import uuid
import logging
# Add your API key below
LABELBOX_API_KEY = "my api key"
client = Client(api_key=LABELBOX_API_KEY)
def upload_asset(event, context):
"""Uploads an asset to Catalog when a new asset is uploaded to GCP bucket.
If a dataset with bucket_name exists in Catalog, then an asset is added to that dataset. Otherwise, a new dataset is created.
Args:
event (dict): Event payload.
context (google.cloud.functions.Context): Metadata for the event.
"""
file = event
bucket_name = file['bucket']
object_name = file["name"]
try:
datasets = client.get_datasets(where=Dataset.name == bucket_name)
dataset = next(datasets, None)
if not dataset:
dataset = client.create_dataset(name=bucket_name)
url = f"gs://{bucket_name}/{object_name}"
dataset.create_data_row(row_data=url, external_id=object_name )
logging.getLogger().setLevel(logging.DEBUG)
return "success"
except Exception as e:
print(f"Error: {e}")
return "failure"
</code></pre>
<p>This gives a Forbidden error on the Labelbox side. How can I fix that?</p>
<p>I wrote a Google Cloud Function in Python to read data from a Google Cloud Storage bucket and upload it to Labelbox using the Labelbox Python SDK. I expected the function to run without errors and upload the data to my Labelbox project.</p>
<p>However, when I triggered the function, I received a "Forbidden" error message in Labelbox, and the function did not complete successfully. The error message indicated that Labelbox was unable to authenticate my request, but I'm not sure what's causing the problem.</p>
<p>I've checked that my Labelbox API key is correct and that I have the necessary permissions to upload data to my Labelbox project. I've also confirmed that my Google Cloud Function is able to read the data from the bucket successfully.</p>
<p>I'm not sure what else to try to fix this issue. Can you help me understand what might be causing the "Forbidden" error and how to resolve it?</p>
|
<python><google-cloud-functions>
|
2023-03-02 09:30:13
| 1
| 479
|
ishan weerakoon
|
75,613,492
| 14,720,380
|
Difference between C++ remainder and NumPy/Python remainder
|
<p>In C++, the following code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <math.h>
#include <iostream>
int main() {
std::cout << remainder(-177.14024960054252, 360) << std::endl;
}
</code></pre>
<p>Compiled with x86-64 GCC 12.2 (<a href="https://godbolt.org/z/43MzbE1ve" rel="nofollow noreferrer">https://godbolt.org/z/43MzbE1ve</a>)</p>
<p>Outputs:</p>
<pre><code>-177.14
</code></pre>
<p>However in Python:</p>
<pre class="lang-py prettyprint-override"><code>np.remainder(-177.14024960054252, 360)
# and
-177.14024960054252 % 360
</code></pre>
<p>Both output:</p>
<pre><code>182.85975039945748
</code></pre>
<p>According to the numpy docs, <code>np.remainder</code> is doing the IEEE remainder function. According to the C++ docs, <code>remainder</code> is also doing the IEEE remainder function.</p>
<p>Why are these two numbers different?</p>
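<p>The short answer is that the two functions compute different things: C++ <code>remainder</code> is the IEEE 754 remainder (quotient rounded to nearest, so the result lies in [-y/2, y/2]), while Python's <code>%</code> and <code>np.remainder</code> are the floored modulo (result takes the sign of the divisor). Python's counterpart to the C++ function is <code>math.remainder</code>:</p>

```python
import math

x, y = -177.14024960054252, 360

ieee = math.remainder(x, y)   # IEEE remainder, matches C++ remainder()
floored = x % y               # floored modulo, matches np.remainder
```

Here the multiple of 360 nearest to x is 0, so the IEEE remainder is x itself (about -177.14), while the floored modulo shifts into [0, 360), giving about 182.86.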
|
<python><c++><numpy>
|
2023-03-02 09:27:04
| 1
| 6,623
|
Tom McLean
|
75,613,490
| 9,471,909
|
Downloading a file via Requests.get(url) raises socket.error: [Errno 10013]
|
<p>I need to download a file from a Python program using <code>requests</code> module.</p>
<p>If run the following :</p>
<pre><code>self.proxy = {"http_proxy": "...", "https_proxy": "..."}
request = requests.get(
file_url,
allow_redirects=True,
proxies=self.proxy,
timeout=30
)
</code></pre>
<p>I get the following error</p>
<pre><code>Traceback (most recent call last):
File "D:\MyPrograms\python_virtual_envs\3.10.5\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "D:\MyPrograms\python_virtual_envs\3.10.5\lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "D:\MyPrograms\python_virtual_envs\3.10.5\lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
PermissionError: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions
</code></pre>
<p>But if I open <code>Gitbash</code> on the same computer, I'm able to download the same file via <code>wget</code> without any problem.</p>
<pre><code>export http_proxy=...
export https_proxy=...
wget file_url
... 0%[> ] 7.49M 2.26MB/s eta 6m 56s
</code></pre>
<p>So if it works in command line via <code>wget</code> but not in a Python program using <code>requests</code> module, I don't think that is something related to security rules at antivirus/firewall level on the computer. Besides I checked the events and there was no trace of blocking my Python program trying to access the Web.</p>
<p>Any ideas on how I might be able to solve this problem?</p>
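<p>One thing worth checking: <code>requests</code> expects the proxy mapping to be keyed by URL scheme (<code>"http"</code>/<code>"https"</code>), not by the environment-variable names (<code>"http_proxy"</code>/<code>"https_proxy"</code>). With the keys used above the proxy dict is silently ignored and requests attempts a direct connection, which a locked-down network could refuse with exactly this WinError 10013. A hedged sketch (the proxy URL is a placeholder):</p>

```python
import requests

# Keys must be URL schemes; "http_proxy"/"https_proxy" keys are ignored.
proxies = {
    "http": "http://proxy.example.com:3128",   # placeholder proxy URL
    "https": "http://proxy.example.com:3128",
}

def download(file_url):
    # Same call as in the question, with correctly keyed proxies.
    return requests.get(file_url, allow_redirects=True,
                        proxies=proxies, timeout=30)
```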
|
<python><python-requests>
|
2023-03-02 09:27:01
| 1
| 1,471
|
user17911
|
75,613,421
| 2,998,077
|
Python Pandas GroupBy to plot a line chart and bar chart side by side (in 1 image)
|
<p>I have a dataframe with different columns that I want to plot (from GroupBy) as a line chart and a bar chart side by side (in one image).</p>
<p>The lines below produce two separate charts; I have tried but am still not able to get them side by side in one image.</p>
<pre><code>import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from io import StringIO
csvfile = StringIO(
"""
Name Year - Month Score Thumbs-up
Mike 2022-09 192 5
Mike 2022-08 708 5
Mike 2022-07 140 3
Mike 2022-05 144 8
Mike 2022-04 60 10
Mike 2022-03 108 4
Kate 2022-07 19850 5
Kate 2022-06 19105 2
Kate 2022-05 23740 3
Kate 2022-04 19780 9
Kate 2022-03 15495 4 """)
df = pd.read_csv(csvfile, sep = '\t', engine='python')
for group_name, sub_frame in df.groupby("Name"):
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,6))
sub_frame_sorted = sub_frame.sort_values('Year - Month') # sort the data-frame by a column
line_chart = sub_frame_sorted.plot("Year - Month", "Score", legend=False)
bar_chart = sub_frame_sorted.plot.bar("Year - Month", "Thumbs-up", legend=False)
# for data labeling in the charts
i=0
for ix, vl in sub_frame_sorted.iterrows():
line_chart.annotate(vl['Score'], (i, vl['Score']), ha='center')
bar_chart.annotate(vl['Thumbs-up'], (i, vl['Thumbs-up']), ha='center')
i=i+1
plt.show()
</code></pre>
<p>What's the right way to do so (if matplotlib can do so)?</p>
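<p>The missing piece appears to be passing the pre-created axes into the pandas plot calls via <code>ax=</code>; without it, each call opens its own figure. A self-contained sketch (small inline data in place of the original frame):</p>

```python
import matplotlib
matplotlib.use("Agg")          # headless backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "Name": ["Mike"] * 3,
    "Year - Month": ["2022-03", "2022-04", "2022-05"],
    "Score": [108, 60, 144],
    "Thumbs-up": [4, 10, 8],
})

for group_name, sub_frame in df.groupby("Name"):
    fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
    sub = sub_frame.sort_values("Year - Month")
    # ax=... routes both charts into the same figure, side by side.
    sub.plot(x="Year - Month", y="Score", ax=axes[0], legend=False)
    sub.plot.bar(x="Year - Month", y="Thumbs-up", ax=axes[1], legend=False)
```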
|
<python><pandas><matplotlib><plot><charts>
|
2023-03-02 09:20:43
| 1
| 9,496
|
Mark K
|
75,613,303
| 11,408,460
|
python/ Django values_list is not returning all values
|
<p>I have this bit of ugly code that produces what I want. It works, but it is only necessary because what I would like to do with <code>values_list</code> is not working.</p>
<pre class="lang-py prettyprint-override"><code>member_channels = Channel.objects.filter(Q(members=request.user) | Q(owner=request.user)).prefetch_related('members').prefetch_related('owner')
members_nested = list(map(lambda channel: channel.members.all(), member_channels))
members = list(dict.fromkeys(itertools.chain(*members_nested)))
owners = list(map(lambda channel: channel.owner, member_channels))
# this is all the user's who's comment's the request.user should be able to see.
valid_comment_users = list(dict.fromkeys(members + owners))
</code></pre>
<p>What I would like to do and should work is:</p>
<pre class="lang-py prettyprint-override"><code>member_channels = Channel.objects.filter(Q(members=request.user) | Q(owner=request.user)).prefetch_related('members').prefetch_related('owner')
member_ids = member_channels.values_list('members', 'owner', flat=True)
valid_comment_users = AppUser.objects.filter(id__in=member_ids).distinct()
</code></pre>
<p>The issue is that with <code>values_list</code> I'm not getting all the <code>members</code> for each Channel in <code>member_channels</code>; it seems like it's only returning the <code>members</code> that are the same in all channels, or maybe just the first member, I can't tell. Any insight into why <code>values_list</code> isn't working for this?</p>
<p>Here are the models and their relationships:</p>
<pre class="lang-py prettyprint-override"><code>class Channel(models.Model):
owner = models.ForeignKey(AppUser, on_delete=models.CASCADE)
members = models.ManyToManyField(AppUser, blank=True, related_name="members")
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Comment(models.Model):
owner = models.ForeignKey(AppUser, on_delete=models.CASCADE)
episode = models.ForeignKey(Episode, on_delete=models.CASCADE)
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Episode(models.Model):
owner = models.ForeignKey(AppUser, on_delete=models.CASCADE)
channels = models.ManyToManyField(Channel, blank=True)
</code></pre>
<p>The <code>AppUser</code> is just the user model extended from Django auth.</p>
|
<python><django><django-models><django-views><django-queryset>
|
2023-03-02 09:07:44
| 2
| 680
|
1ManStartup
|
75,613,299
| 9,757,174
|
Datetime issue with the streamlit dataframe display
|
<p>I am building a streamlit application and I am uploading an excel file to Streamlit. However, when the data is displayed in a datetime format using <code>st.dataframe()</code>, it changes format and I am not able to fix it in display.</p>
<p>As you can see here, the interval column has these numbers and the Date column has time added to it which I want to fix.</p>
<p><a href="https://i.sstatic.net/MQTlS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MQTlS.png" alt="enter image description here" /></a></p>
<p>The actual data looks a bit like this and this is how I would like to display data. I don't mind converting it to a string so we can use <code>strftime()</code> but I am sort of stuck in trying to implement that to my data.</p>
<p><a href="https://i.sstatic.net/wMV4G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wMV4G.png" alt="enter image description here" /></a></p>
<p>Here's what my upload code looks like:</p>
<pre class="lang-py prettyprint-override"><code># File Upload
uploaded_file = st.file_uploader(
"Upload inventory file for past week estimates in **excel** format."
)
sheet_name = st.text_input("Add the name of the sheet to select data from. **Default = first sheet**")
# File processing button
# if st.button("Process file inputs"):
if uploaded_file is not None:
try:
# Read the uploaded excel file
if sheet_name:
historical_interval_data = pd.read_excel(uploaded_file, sheet_name=sheet_name)
else:
historical_interval_data = pd.read_excel(uploaded_file)
st.dataframe(historical_interval_data)
except ValueError:
st.error("**Error**: The sheet name is incorrect.")
</code></pre>
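<p>One hedged option is to format the datetime columns into strings before handing the frame to <code>st.dataframe</code>, using <code>dt.strftime</code>; the column name below is an assumption based on the screenshots:</p>

```python
import pandas as pd

df = pd.DataFrame({"Date": pd.to_datetime(["2023-03-01", "2023-03-02"]),
                   "Interval": [0.5, 1.0]})

# Render dates without the time component; the column becomes plain strings,
# so Streamlit displays them verbatim.
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
```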
|
<python><python-3.x><datetime><streamlit>
|
2023-03-02 09:07:34
| 0
| 1,086
|
Prakhar Rathi
|
75,613,273
| 2,717,424
|
Argparse: Passing multiple arguments via optional parameters when there is also a positional argument
|
<p>When I have an <code>argparse</code> interface that only supports optional parameters, e.g.</p>
<pre><code>parser.add_argument('-p', '--ports', nargs='+' type=int)
</code></pre>
<p>I can pass values to this parameter as follows</p>
<pre><code>$ python3 myFunc.py -p 80
</code></pre>
<p>or even</p>
<pre><code>$ python3 myFunc.py -p 80 8080
</code></pre>
<p>Now, when I add a positional argument at the end of the interface, it is a different story:</p>
<pre><code>parser.add_argument('-p', '--ports', nargs='+' type=int)
parser.add_argument('host', nargs=1)
</code></pre>
<p>When I try to call the script like before, with the positional argument at the end, I am getting the following error:</p>
<pre><code>$ python3 myFunc.py -p 80 10.0.0.0.1
usage: myFunc.py [-h] [-p ports [ports ...]] host
myFunc.py: error: argument -p/--ports: invalid int value: '10.0.0.0.1'
</code></pre>
<p>So it seems that argparse cannot recognize that the last value relates to the positional argument <code>host</code> and is not part of <code>ports</code>. This can be fixed by using <code>=</code> to specify the port, which I do not find aesthetic, to be honest.</p>
<pre><code>$ python3 myFunc.py -p=80 10.0.0.0.1
</code></pre>
<p>However, while I was able to pass multiple values to <code>port</code> in the previous version without <code>host</code> by simply adding arguments:</p>
<pre><code>$ python3 myFunc.py -p=80 8080 443
</code></pre>
<p>this does not work anymore with the version that supports <code>host</code>, most likely because of how multiple values interact with the <code>=</code> syntax on the optional argument, e.g.</p>
<pre><code>$ python3 myFunc.py -p=80,443 10.0.0.0.1
usage: myFunc.py [-h] [-p ports [ports ...]] host
myFunc.py: error: argument -p/--ports: invalid int value: '80,443'
</code></pre>
<p>In this case, do I need to implement a custom parser for this flag that accepts a string of comma-separated values and split it up internally, or is there any built-in functionality for this?</p>
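<p>argparse has no built-in comma-splitting, but a small custom <code>type=</code> callable handles it cleanly and sidesteps the greedy-<code>nargs</code> ambiguity entirely; a sketch:</p>

```python
import argparse

def port_list(value):
    # "80,443" -> [80, 443]; a bad token raises ValueError,
    # which argparse reports as a usage error.
    return [int(v) for v in value.split(",")]

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--ports", type=port_list, default=[])
parser.add_argument("host")

args = parser.parse_args(["-p", "80,443", "10.0.0.1"])
```

Because the option now consumes exactly one token, the trailing positional is parsed unambiguously and the <code>=</code> form is no longer required.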
|
<python><argparse>
|
2023-03-02 09:04:19
| 1
| 1,029
|
Sebastian Dine
|
75,613,212
| 7,089,239
|
Pyflink fails converting datetime when executing and collecting SQL timestamp
|
<p>I'd like to test some streams I've created with <code>execute_and_collect</code> instead of a JDBC sink. The sink succeeds in converting a <code>Row</code> to insert data into a DB, but <code>execute_and_collect</code> fails with:</p>
<blockquote>
<p>AttributeError: 'bytearray' object has no attribute 'timestamp'</p>
</blockquote>
<p>This is in <a href="https://github.com/apache/flink/blob/master/flink-python/pyflink/datastream/utils.py#L100" rel="nofollow noreferrer"><code>pyflink.datastream.utils:pickled_bytes_to_python_converter</code></a> through <code>execute_and_collect -> CloseableIterator -> next -> convert_to_python_obj</code>, and indeed caused by the unpickled object being a byte array instead of a datetime object that has <code>.timestamp()</code>. However, as you'll see in the MWE below, I'm creating datetime objects in the source (which in the real application then is a proper stream in a larger graph).</p>
<p>Before assuming this is a bug, I'd like to know if I'm doing something wrong. I'm quite new to Flink in general, but this seems basic. Here's the MWE:</p>
<pre><code>from datetime import datetime
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
field_names = ("created_at",)
collection = [(datetime.now(),)]
field_types = [Types.SQL_TIMESTAMP()]
types = Types.ROW_NAMED(field_names=field_names, field_types=field_types)
stream = env.from_collection(collection=collection, type_info=types)
items = stream.execute_and_collect()
print(list(items)) # Failure here
items.close()
</code></pre>
|
<python><apache-flink><flink-streaming><pyflink>
|
2023-03-02 08:59:24
| 0
| 2,688
|
Felix
|
75,613,159
| 3,861,965
|
Writing regex to capture string with optional lookahead
|
<p>I am trying to write a regex which, given these:</p>
<pre><code>cache_realm_report__hourly_0.json
filters_0000.json
how_we_feel_emotions.csv
</code></pre>
<p>returns the respective matches</p>
<pre><code>cache_realm_report__hourly
filters
how_we_feel_emotions
</code></pre>
<p>I have tried a few different patterns but they always fail for one reason or another.</p>
<p>This <code>^[a-zA-Z_]*(?=\d*\.[csv|json])</code> almost works except it returns</p>
<pre><code>cache_realm_report__hourly_
filters_
how_we_feel_emotions
</code></pre>
<p>with the last <code>_</code> in, which I don't want.</p>
<p>How can I change this to remove the last <code>_</code>?</p>
<p>P.S.: I know I could just <code>replace</code> afterwards, but I wanted to do everything in regex if possible.</p>
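<p>Two fixes are needed: alternation inside the lookahead requires a group, <code>(?:csv|json)</code>, not a character class <code>[csv|json]</code>; and ending the match on a letter while letting the lookahead absorb an optional trailing underscore drops the unwanted <code>_</code>. A sketch:</p>

```python
import re

# End on a letter; the lookahead swallows an optional "_" plus digits
# before the extension, so the trailing underscore never reaches the match.
pattern = re.compile(r"^[A-Za-z_]*[A-Za-z](?=_?\d*\.(?:csv|json)$)")

names = [
    "cache_realm_report__hourly_0.json",
    "filters_0000.json",
    "how_we_feel_emotions.csv",
]
stems = [pattern.match(n).group() for n in names]
```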
|
<python><regex>
|
2023-03-02 08:52:52
| 1
| 2,174
|
mcansado
|
75,613,145
| 6,387,095
|
Traceback - can't get the exception text?
|
<p>I am trying to use a try/except block:</p>
<pre><code>try:
raise ValueError
except Exception as e:
# Get error data
stack = traceback.extract_stack()
(filename, line, procname, text) = stack[-1]
# create sendable error data
error_data = {
"error_msg": f"Line No. {line}\nText: \n\n{text}",
"error_script": filename,
}
# send email to admin
resp = req.post(f"{ERRORS_URL}", headers=header, json=error_data)
pass
</code></pre>
<p>When I get an email from here:</p>
<p>I receive the <code>filename</code>, <code>line</code> without any issue. The <code>text</code> however is always blank.</p>
<p>I want the text to be the cause of the exception.</p>
<p>Am I doing something wrong here?</p>
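<p>The likely issue: <code>traceback.extract_stack()</code> captures the current call stack (the handler itself), not the exception that was caught. Inside an <code>except</code> block, the exception's message and the file/line where it was raised come from <code>traceback.format_exc()</code> (or <code>traceback.extract_tb(sys.exc_info()[2])</code>). A sketch:</p>

```python
import traceback

def handle():
    try:
        raise ValueError("something went wrong")
    except Exception:
        # Full traceback of the *exception*, including its message and
        # the file/line where it was raised.
        return traceback.format_exc()

report = handle()
```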
|
<python><python-3.x>
|
2023-03-02 08:51:36
| 2
| 4,075
|
Sid
|
75,613,129
| 12,913,047
|
Indexing issue after Transposing dataframe
|
<p>I have the following code below, to produce the heatmap in the image. However, as there are many 'Indicators' - I would like the heat map to be long horizontal and not tall. I.e., the Indicators on the X axis, and the Criteria (robustness...etc) along the left side of the y axis.</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
df = pd.read_csv('Res_Gov.csv')
df1 = df.transpose()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(12, 12))
colors = ["#f4a261","#2a9d8f"]
cmap = LinearSegmentedColormap.from_list('Custom', colors, len(colors))
# Draw the heatmap with the mask and correct aspect ratio
df1 = sns.heatmap(df1.set_index('Indicators'), cmap=cmap, square=True, linewidths=.5, cbar_kws={"shrink": .5})
# Set the colorbar labels
colorbar = ax.collections[0].colorbar
colorbar.set_ticks([0.25,0.75])
colorbar.set_ticklabels(['Not Applicable', 'Applicable'])
</code></pre>
<p>Without the transpose function, I receive this:</p>
<p><a href="https://i.sstatic.net/bhNQ6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bhNQ6.png" alt="heatmap" /></a></p>
<p>But, when I use it, I receive just a blank square graph.</p>
<p>I know there is an indexing issue happening but am not sure what it is. Any help would be appreciated! Thank you!</p>
<p>Df snippet - before transpose</p>
<pre><code>,Indicators,Robustness,Flexibility,Resourcefulness,Redundancy,Diversity,Independence,Foresight Capacity,Coordination Capacitiy,Collaboration Capacity,Connectivity & Interdependence,Agility,Adaptability,Self-Organization,Creativity & Innovation,Efficiency,Equity
0,G1,1,1,1,0,0,1,1,1,1,1,1,1,0,1,1,1
1,G2,1,0,0,0,0,1,0,1,1,1,1,0,0,1,1,1
2,G3,1,0,1,0,0,1,1,1,1,1,1,1,1,1,0,1
3,G4,1,1,1,0,1,0,1,1,1,1,1,1,1,1,0,1
4,G5,1,0,1,0,0,0,0,1,0,1,1,0,0,0,1,0
5,G6,1,0,1,0,1,0,1,0,0,0,0,0,0,1,0,1
6,G7,1,1,0,1,0,1,0,0,0,0,1,0,0,0,0,0
7,G8,1,1,0,0,0,1,1,1,1,0,1,1,0,0,0,0
8,G9,1,0,1,0,0,1,1,1,1,0,1,1,0,1,0,1
9,G10,1,1,1,0,0,0,1,1,1,1,1,0,0,0,1,1
10,G11,1,0,1,0,0,0,1,0,0,0,1,0,0,0,0,1
11,G12,1,1,1,0,1,1,1,1,1,1,1,0,1,1,1,0
12,G13,1,1,1,0,1,0,1,1,0,1,1,0,0,0,0,0
13,G14,1,0,1,0,1,0,1,1,1,1,1,1,0,0,1,1
14,G15,1,1,1,0,1,0,1,1,1,1,1,1,1,1,0,1
15,G16,1,0,1,0,1,1,0,1,1,1,0,1,1,1,0,1
16,G17,1,1,1,0,0,0,0,0,0,0,1,1,0,1,1,0
17,G18,1,0,1,0,1,1,1,1,1,1,0,1,1,1,0,1
18,G19,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
19,G20,1,1,0,1,1,0,0,0,1,0,0,0,1,0,0,1
20,G21,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1
21,G22,1,1,1,0,0,0,1,1,1,1,1,1,0,1,0,1
22,G23,1,0,1,0,0,1,1,0,1,0,0,1,1,1,0,0
</code></pre>
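<p>A minimal sketch of a likely fix, using a hypothetical two-indicator frame: set the index while <code>Indicators</code> is still a column, and only then transpose, so the indicators become the columns (x-axis) and the criteria become the rows (y-axis). After <code>df.transpose()</code> the <code>Indicators</code> column no longer exists as a column, which is consistent with the blank plot.</p>

```python
import pandas as pd

# Hypothetical two-indicator frame mirroring the CSV's structure
df = pd.DataFrame({
    "Indicators": ["G1", "G2"],
    "Robustness": [1, 1],
    "Flexibility": [1, 0],
})

# Set the index first, *then* transpose: indicators end up as columns
wide = df.set_index("Indicators").T
print(wide.columns.tolist())
```

<p>Passing <code>wide</code> straight to <code>sns.heatmap</code> should then put the indicators on the x-axis; <code>square=True</code> may need to be dropped for a long horizontal figure.</p>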
|
<python><pandas>
|
2023-03-02 08:50:15
| 1
| 506
|
JamesArthur
|
75,613,063
| 179,014
|
How to access the output of gcloud build steps in python?
|
<p>I'm following the tutorial at <a href="https://cloud.google.com/blog/topics/developers-practitioners/orchestrating-pytorch-ml-workflows-vertex-ai-pipelines?hl=en" rel="nofollow noreferrer">https://cloud.google.com/blog/topics/developers-practitioners/orchestrating-pytorch-ml-workflows-vertex-ai-pipelines?hl=en</a> .</p>
<p>They are using the python client to create build steps. In my case that looks like</p>
<pre><code>import logging
from google.cloud.devtools import cloudbuild_v1 as cloudbuild
from google.protobuf.duration_pb2 import Duration
# Deploy the serving container to cloud run
build = cloudbuild.Build()
build.steps = [
{
"name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
"entrypoint": "gcloud",
"args": [
"run", "deploy", service_name,
"--image", image_uri,
"--region", serving_location,
"--cpu", serving_cpu,
"--min-instances", serving_min_instances,
"--max-instances", serving_max_instances,
"--memory", serving_memory,
serving_authentication,
],
},
{
"name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
"entrypoint": "/bin/bash",
"args": [
"gcloud", "run", "services", "describe", service_name,
"--platform", "managed",
"--region", serving_location,
"--format", "json",
" > $$BUILDER_OUTPUT/output"
]
}
]
# Override default timeout of 10min.
timeout = Duration()
timeout.seconds = 7200
build.timeout = timeout
operation = build_client.create_build(project_id=project, build=build)
result = operation.result(timeout=7200)
logging.info("RESULT: %s", result.results)
</code></pre>
<p>I would like to retrieve the URL of the deployed service in step one to use it in the python code. So in step two I'm calling <code>gcloud run services describe</code> and try to write output into <code>$BUILDER_OUTPUT/output</code>. As I understand it, this should pipe the output from the build step into the field <code>result.results.build_step_outputs</code>, which I could access from the python code, see</p>
<ul>
<li><a href="https://cloud.google.com/python/docs/reference/cloudbuild/latest/google.cloud.devtools.cloudbuild_v1.types.Results" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/cloudbuild/latest/google.cloud.devtools.cloudbuild_v1.types.Results</a></li>
<li><a href="https://cloud.google.com/build/docs/api/reference/rest/v1/projects.builds#results" rel="nofollow noreferrer">https://cloud.google.com/build/docs/api/reference/rest/v1/projects.builds#results</a></li>
</ul>
<p>I also found an example making use of <code>$BUILDER_OUTPUT/output</code> at <a href="https://atamel.dev/posts/2022/10-17_executing_commands_from_workflows/" rel="nofollow noreferrer">https://atamel.dev/posts/2022/10-17_executing_commands_from_workflows/</a>.</p>
<p>However then running the steps above I get the error message</p>
<pre><code>google.api_core.exceptions.InvalidArgument: 400 generic::invalid_argument: invalid value for 'build.substitutions': key in the template "BUILDER_OUTPUT" is not a valid built-in substitution
</code></pre>
<p>So what am I doing wrong here? And is there (maybe another) way to retrieve the URL of the service deployed in step one, so I can use it inside the python code?</p>
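<p>A sketch of one direction to try (hypothetical and untested against Cloud Build): run the whole describe command through <code>bash -c</code> so a shell actually interprets the <code>&gt;</code> redirection, and escape the dollar sign as <code>$$</code> so Cloud Build's substitution parser ignores <code>BUILDER_OUTPUT</code>:</p>

```python
# Hypothetical second build step; service name and region are placeholders.
# "-c" hands bash one command string, so the "> ..." redirection works,
# and "$$" keeps Cloud Build from parsing BUILDER_OUTPUT as a substitution.
describe_step = {
    "name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
    "entrypoint": "/bin/bash",
    "args": [
        "-c",
        "gcloud run services describe my-service --platform managed "
        "--region us-central1 --format 'value(status.url)' "
        "> $$BUILDER_OUTPUT/output",
    ],
}
```

<p>If this works, the URL should then show up in <code>result.results.build_step_outputs</code> for that step.</p>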
|
<python><google-cloud-platform><gcloud>
|
2023-03-02 08:44:38
| 2
| 11,858
|
asmaier
|
75,613,040
| 972,647
|
python unittest: relative paths to files - pycharm vs cli
|
<p>I have unit tests that require access to files. This is because the project generates files as output, and I want to compare them to the expected output.</p>
<p>Currently my directory structure is:</p>
<p><code>project root/tests/files</code></p>
<p>In the test setup I have the following:</p>
<pre><code>def setUp(self):
    self.test_file = 'files/my_reference.txt'
</code></pre>
<p>Running this test in pycharm works perfectly fine.</p>
<p>I now want to create a github actions that runs these tests on push (especially to also be able to test easily on different OS). For that I have the command:</p>
<pre><code>python -m unittest discover -s ./tests -p "*_tests.py"
</code></pre>
<p>However, this command fails locally, as the test files are not found: it runs in the root folder, so the relative path is wrong (e.g., it looks in <code>project root/files</code> instead of <code>project root/tests/files</code>). When changing the cwd to <code>project root/tests</code>, the tests fail because they then can't find the module being tested (obviously).</p>
<p>So how do I have to set the paths to the test files correctly so that running the test works in pycharm and by cli/github actions?</p>
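<p>A common sketch: resolve the files directory relative to the test module itself instead of the working directory, so PyCharm, the CLI, and GitHub Actions all find the same path (the file name below is the one from the question):</p>

```python
import unittest
from pathlib import Path

# Directory containing this test module; in the real suite this is
# project_root/tests no matter where the runner was launched from.
TEST_DIR = Path(__file__).resolve().parent

class ReferenceFileTests(unittest.TestCase):
    def setUp(self):
        self.test_file = TEST_DIR / "files" / "my_reference.txt"
```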
|
<python><unit-testing><relative-path>
|
2023-03-02 08:40:34
| 1
| 7,652
|
beginner_
|
75,612,634
| 91,799
|
Copy files using Server Side Copy/Clone in Python?
|
<p>Protocols like <a href="https://wiki.samba.org/index.php/Server-Side_Copy" rel="nofollow noreferrer">Samba</a> and AFP support server side copy of files. The BTRFS file system even supports instant server side clone operations that don't take up space.</p>
<ul>
<li>Windows Explorer, Robocopy and MacOS Finder already utilize this</li>
<li><a href="https://github.com/djwong/xfstests/blob/master/src/cloner.c" rel="nofollow noreferrer">Cloner</a> is a sample implementation within xfstests</li>
<li>Here is a video showing it in action in <a href="https://www.youtube.com/watch?v=vFSjXoHg_Z4" rel="nofollow noreferrer">Explorer</a></li>
</ul>
<p>Overall it's an amazing feature, and yet many copy tools, including Python's file-copy functions, don't make use of it yet.</p>
<p>Is there an implementation of it already in Python?</p>
<p>Thanks!</p>
|
<python><performance><filesystems><file-copying><btrfs>
|
2023-03-02 07:55:39
| 1
| 2,579
|
Patrick Wolf
|
75,612,505
| 17,473,587
|
Using different classes (one imported and one defined) with the same name in a module
|
<pre><code>from .models import User, AuctionListing, Comment, Bids, Category, Watchlist, Activities, Winners
</code></pre>
<p>and</p>
<pre><code>class Comment(forms.Form):
comment = forms.CharField(label="", widget=forms.Textarea(attrs={
'placeholder': 'Comment', 'class': 'listing_textarera'
}))
</code></pre>
<p>Both classes are named <code>Comment</code>.</p>
<p>One is imported from <code>.models</code> and one is defined individually, as above.</p>
<p>Both are in the <code>views.py</code> module.</p>
<p>These two <code>Comment</code> classes are different.</p>
<p>How can I use each one (the <code>Comment</code> imported from <code>models.py</code> or the class defined here) separately?</p>
<p>Can I refer to each one individually?</p>
<p>My attempt is here:</p>
<pre><code>c = Comment(message=message, user=request.user, listing=listing)
</code></pre>
<p>Which throws an error:</p>
<blockquote>
<p>got an unexpected keyword argument 'message'</p>
</blockquote>
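<p>One common pattern is to rename one of the classes at import time, e.g. <code>from .models import Comment as CommentModel</code>, so both names stay usable in <code>views.py</code>. A self-contained sketch of the idea (the class bodies are stand-ins, not the real ones):</p>

```python
class _ModelsComment:                # plays the role of models.Comment
    def __init__(self, message):
        self.message = message

class Comment:                       # plays the role of the local forms.Form subclass
    pass

# Equivalent of: from .models import Comment as CommentModel
CommentModel = _ModelsComment

c = CommentModel(message="hello")    # unambiguously the model class
form = Comment()                     # unambiguously the local form class
```

<p>Without the alias, the locally defined <code>Comment</code> shadows the imported one, which is why <code>Comment(message=...)</code> hits the form class and raises the unexpected-keyword-argument error.</p>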
|
<python><python-import>
|
2023-03-02 07:40:05
| 1
| 360
|
parmer_110
|
75,612,494
| 12,883,297
|
Select the dataframe based on multiple conditions on a group like all values in a column are 0 and value = x in another column in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["A",0,"ret"],["C",2,"rem"],["B",1,"ret"],["A",0,"rem"],["B",0,"rem"],["D",0,"rem"],["C",2,"rem"],["D",0,"rem"],["D",0,"rem"]],columns=["id","val1","val2"])
</code></pre>
<pre><code>id val1 val2
A 0 ret
C 2 rem
B 1 ret
A 0 rem
B 0 rem
D 0 rem
C 2 rem
D 0 rem
D 0 rem
</code></pre>
<p>Remove any id group where val1 is 0 in all rows of the group and val2 is rem in all rows of the group. Here, for id <strong>D</strong>, val1 is 0 in every row and val2 is rem in every row, so the D group is removed.</p>
<p><strong>Expected Output</strong></p>
<pre><code>df_out = pd.DataFrame([["A",0,"ret"],["C",2,"rem"],["B",1,"ret"],["A",0,"rem"],["B",0,"rem"],["C",2,"rem"]],columns=["id","val1","val2"])
</code></pre>
<pre><code>id val1 val2
A 0 ret
C 2 rem
B 1 ret
A 0 rem
B 0 rem
C 2 rem
</code></pre>
<p>How to do it in pandas?</p>
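<p>One sketch using <code>GroupBy.filter</code>, which keeps only the groups where the combined all-rows condition does not hold (data as in the question):</p>

```python
import pandas as pd

df = pd.DataFrame(
    [["A", 0, "ret"], ["C", 2, "rem"], ["B", 1, "ret"], ["A", 0, "rem"],
     ["B", 0, "rem"], ["D", 0, "rem"], ["C", 2, "rem"], ["D", 0, "rem"],
     ["D", 0, "rem"]],
    columns=["id", "val1", "val2"],
)

# Drop groups where val1 == 0 in every row AND val2 == "rem" in every row
df_out = df.groupby("id").filter(
    lambda g: not (g["val1"].eq(0).all() and g["val2"].eq("rem").all())
)
```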
|
<python><python-3.x><pandas><dataframe>
|
2023-03-02 07:38:53
| 4
| 611
|
Chethan
|
75,612,441
| 6,133,593
|
how can I get the count of non zero at each row in pandas?
|
<p>I'm using pandas</p>
<p>dataframe is like</p>
<pre><code>name data1 data2 data3
kim 0 1 1
yu 0 1 1
min 2 0 0
</code></pre>
<p>I want to filter rows that have at least 2 data values greater than 0 (which would keep kim and yu).</p>
<p>Is it possible to do this with pandas?</p>
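<p>A sketch: compare the data columns to zero, count the positives per row, and keep rows with at least two (data as in the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["kim", "yu", "min"],
    "data1": [0, 0, 2],
    "data2": [1, 1, 0],
    "data3": [1, 1, 0],
})

data_cols = ["data1", "data2", "data3"]
mask = df[data_cols].gt(0).sum(axis=1) >= 2  # row-wise count of values > 0
filtered = df[mask]
```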
|
<python><pandas><filter>
|
2023-03-02 07:32:23
| 2
| 427
|
Shale
|
75,612,400
| 10,200,497
|
add a column of bins by sum of a number
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame({'a': range(100, 111)})
</code></pre>
<p>I want to add a column to this dataframe. My desired output looks like this:</p>
<pre><code> a b
0 100 NaN
1 101 NaN
2 102 NaN
3 103 1
4 104 1
5 105 1
6 106 2
7 107 2
8 108 2
9 109 3
10 110 3
</code></pre>
<p>I have a value, which in this case is 3. I want 1 in column <code>b</code> if the value in <code>a</code> is between 103 and 105, and 2 in <code>b</code> if the value is between 106 and 108, with the inclusive boundaries shown in the example above.
I have tried a couple of solutions. One of them was <code>pd.cut</code>, but I couldn't figure out how to make it work. This was one of my tries:</p>
<pre><code>df['b'] = pd.cut(df.a, [100, 103, 106, 109], include_lowest=True)
</code></pre>
<p>But since I don't know how many bins I have in my other samples I can't use this solution.</p>
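<p>A sketch that builds the bin edges dynamically with <code>numpy.arange</code>, so the number of bins follows the data; the start value 103 and the step 3 are read off the example:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": range(100, 111)})

step = 3
start = 103  # first value that should land in bin 1
edges = np.arange(start - 1, df["a"].max() + step, step)  # 102, 105, 108, 111

# labels=False gives 0-based bin numbers, so +1 matches the desired output;
# values at or below the first edge become NaN.
df["b"] = pd.cut(df["a"], bins=edges, labels=False) + 1
```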
|
<python><pandas>
|
2023-03-02 07:27:02
| 2
| 2,679
|
AmirX
|
75,612,088
| 7,054,640
|
Reading ZIP file from Url generates Bad Zip File error
|
<p>I am trying to download crypto historical data from <a href="http://www.data.binance.vision" rel="nofollow noreferrer">www.data.binance.vision</a> using Python. I try to read the zip files into pandas using the <code>pd.read_csv</code> method. This used to work a few months back, but now an error pops up saying <code>zipfile.BadZipFile: File is not a zip file</code>. I have manually downloaded the data and checked the files. The file is indeed a zip file and contains a CSV file inside it.
The generated URL is also correct. Kindly guide me on how to proceed.</p>
<pre><code>import pandas as pd
import json, requests
base_url = 'https://data.binance.vision/?prefix=data/spot/monthly/klines/NULSBTC/1w/'
url = f'{base_url}NULSBTC-1w-2023-02.zip'
df = pd.read_csv(url)
print(df)
Error:
zipfile.BadZipFile: File is not a zip file
</code></pre>
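<p>One way to diagnose this: download the raw bytes and check them with <code>zipfile.is_zipfile</code> before handing them to pandas. If the server returned an HTML listing or error page (a <code>?prefix=...</code> URL is typically the listing page, not the archive itself), the check makes that visible instead of the opaque <code>BadZipFile</code>. A self-contained sketch:</p>

```python
import io
import zipfile

import pandas as pd

def read_zip_csv(content: bytes) -> pd.DataFrame:
    """Parse CSV bytes wrapped in a ZIP archive, failing loudly otherwise."""
    buf = io.BytesIO(content)
    if not zipfile.is_zipfile(buf):
        # Most likely an HTML page rather than the archive itself
        raise ValueError("response is not a ZIP archive: %r" % content[:40])
    with zipfile.ZipFile(buf) as zf:
        with zf.open(zf.namelist()[0]) as fh:
            return pd.read_csv(fh, header=None)
```

<p>With <code>requests</code>, <code>content = requests.get(url).content</code> feeds straight into this function, and the first bytes of a bad response usually reveal what the server actually sent.</p>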
|
<python><pandas><binance>
|
2023-03-02 06:48:41
| 1
| 355
|
Jodhvir Singh
|
75,612,018
| 10,829,044
|
pandas - create customer movement matrix
|
<p>I have a dataframe that looks like below</p>
<pre><code>customer_id,month,Group,category,days_ago
A1,Jan,Premium,saf,13
A1,Jan,Premium,ewf,54
A2,Jan,Lost,ds,32
A3,Jan,Lost,dfs,78
A4,Jan,Lost,sdfg,94
A5,Jan,Loyal,sa,14
A6,Jan,Need Attention,ewf,13
A1,Mar,Premium,efWCC,78
A2,Mar,Need Attention,POI
A3,Mar,Lost,QWE
A4,Mar,Need Attention,QOEP
A4,Mar,Need Attention,POTU
A5,Mar,Loyal,FANC
A6,Mar,Lost,FAS
A7,Mar,New,qewr
A8,Mar,New,wqer
t1 = pd.read_clipboard(sep=',')
</code></pre>
<p>I would like to do the below</p>
<p>a) Create a matrix against Jan and Mar month</p>
<p>b) Fill the matrix with customer count under each group</p>
<p>I expect my output to be in a table like as below</p>
<p><a href="https://i.sstatic.net/2kAyO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2kAyO.png" alt="enter image description here" /></a></p>
<p>I tried the below but not sure how to get everything in a neat table</p>
<pre><code>cust_info = t1.groupby(['customer_id','month','Group']).size().reset_index()
group_info = t1.groupby(['customer_id','Group']).size().reset_index()
group_info.merge(cust_info,on='customer_id',how='left')
</code></pre>
<p>Is there any way to capture their movement from one group to another between the months <code>Jan</code> and <code>Mar</code>? I have data for about 20K customers. Is there an elegant way to produce the output shown above?</p>
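<p>A sketch of the movement matrix using <code>pivot</code> plus <code>crosstab</code>: collapse to one Group per customer per month, pivot the months into columns, then cross-tabulate the Jan group against the Mar group (hypothetical small subset of the data):</p>

```python
import pandas as pd

t1 = pd.DataFrame({
    "customer_id": ["A1", "A1", "A2", "A3", "A1", "A2", "A3"],
    "month":       ["Jan", "Jan", "Jan", "Jan", "Mar", "Mar", "Mar"],
    "Group":       ["Premium", "Premium", "Lost", "Lost",
                    "Premium", "Need Attention", "Lost"],
})

# One row per customer per month, then months as columns
g = (t1.drop_duplicates(["customer_id", "month"])
       .pivot(index="customer_id", columns="month", values="Group"))

# Count of customers per (Jan group, Mar group) pair
movement = pd.crosstab(g["Jan"], g["Mar"])
```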
|
<python><pandas><dataframe><matrix><group-by>
|
2023-03-02 06:37:56
| 2
| 7,793
|
The Great
|
75,612,015
| 10,341,232
|
Scrapy spider crawl 0 page from Books to scrape website
|
<p>I have a basic and straightforward <code>Scrapy</code> spider to crawl <code>https://books.toscrape.com/</code>.</p>
<p>No parse function has been implemented yet, and I want to see if the spider can crawl the website.</p>
<pre class="lang-py prettyprint-override"><code>from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ["tosrape.com"]
    start_urls = ["https://books.toscrape.com/"]

    rules = (
        Rule(LinkExtractor(allow="catalogue/category")),
    )
</code></pre>
<p>I'm able to interact with the website via the <code>Scrapy</code> shell (e.g. <code>response.css("a::text").getall()</code>), but the crawler doesn't crawl the website and returns:</p>
<pre><code>2023-03-02 14:31:05 [scrapy.core.engine] INFO: Spider opened
2023-03-02 14:31:05 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-03-02 14:31:05 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-03-02 14:31:06 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://books.toscrape.com/robots.txt> (referer: None)
2023-03-02 14:31:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://books.toscrape.com/> (referer: None)
2023-03-02 14:31:07 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'books.toscrape.com': <GET https://books.toscrape.com/catalogue/category/books_1/index.html>
2023-03-02 14:31:07 [scrapy.core.engine] INFO: Closing spider (finished)
...
'downloader/response_status_count/200': 1,
'downloader/response_status_count/404': 1,
...
2023-03-02 14:31:07 [scrapy.core.engine] INFO: Spider closed (finished)
</code></pre>
<p>What am I doing wrong?</p>
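<p>The <code>Filtered offsite request to 'books.toscrape.com'</code> line points at <code>allowed_domains</code>: <code>"tosrape.com"</code> is missing a letter, so no extracted link matches it. A small stand-alone sketch of the kind of check the offsite middleware performs:</p>

```python
def is_offsite(host, allowed_domains):
    """Rough sketch of the offsite check: the host must equal an allowed
    domain or be a subdomain of one."""
    return not any(
        host == d or host.endswith("." + d) for d in allowed_domains
    )

# The typo'd domain filters everything; the corrected one does not.
filtered_with_typo = is_offsite("books.toscrape.com", ["tosrape.com"])
filtered_fixed = is_offsite("books.toscrape.com", ["toscrape.com"])
```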
|
<python><web-scraping><scrapy><web-crawler>
|
2023-03-02 06:37:51
| 2
| 419
|
Talkhak1313
|
75,611,996
| 1,806,566
|
Is there a way to specify to pip the #! line for any installed executable scripts?
|
<p>If I have a python package that contains an executable script and setup.py mentions it in its scripts field, it gets installed into the bin directory.</p>
<p>When it does that, however, it rewrites any #! line in that script to point to the path of the python being used. I would like to specify my own #! line.</p>
<p>When I was using setup.py on its own, I had a workaround. The build command has an --executable option that does exactly what I need. Unfortunately, the install command doesn't recognize it, but I could break the install into two steps:</p>
<pre><code>python3 setup.py build --executable=...
python3 setup.py install --skip-build
</code></pre>
<p>Is there a way to do the equivalent of --executable with pip?</p>
<p>The intention is to have executable scripts begin with '#!/usr/bin/env python3', which will invoke python from the path. I realize that this is not a good idea in most cases, but I'm installing an environment (which contains more than just python) that needs to function no matter where it exists in the file system (i.e., you can mount it somewhere else, and it still works), so I don't want any absolute paths except for things like /usr/bin/env which are always going to be present. The system already works, but I'm trying to move my python package installation from raw setup.py to pip and ran into this snag.</p>
|
<python><pip>
|
2023-03-02 06:35:07
| 0
| 1,241
|
user1806566
|
75,611,948
| 19,106,705
|
Why does PyTorch's max pooling layer store input tensors?
|
<p>I made a simple model like below. It may look odd, but it has one convolutional layer and applies a single max-pooling layer twice.</p>
<pre class="lang-py prettyprint-override"><code>class simple_model(nn.Module):
    def __init__(self):
        super(simple_model, self).__init__()
        self.maxpool2D = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv1 = nn.Conv2d(3, 20, (5, 5))

    def forward(self, x):
        x = self.maxpool2D(self.maxpool2D(self.conv1(x)))
        return x
</code></pre>
<p>And I check the tensors that saved in forward propagation using gradient hook.</p>
<pre class="lang-py prettyprint-override"><code>pack_saved_tensors = []

def pack_hook(x):
    pack_saved_tensors.append(x)
    return x

unpack_used_tensors = []

def unpack_hook(x):
    unpack_used_tensors.append(x)
    return x

with torch.autograd.graph.saved_tensors_hooks(pack_hook, unpack_hook):
    model_output = model(input_tensors)

label = torch.randn(model_output.size()).to(device)
loss = criterion(model_output, label)
loss.backward()
</code></pre>
<p>This is the result. I even checked (with the unpack hook function) that all stored tensors are used for backward propagation. I think:</p>
<p>0th, 1st tensors saved for convolutional layer,</p>
<p>2nd, 3rd tensors saved for first maxpool2D layer,</p>
<p>4th, 5th tensors saved for second maxpool2D layer.</p>
<pre><code>pack hook saved tensors:
0 tensor size: torch.Size([64, 3, 224, 224]), tensor type: torch.float32
1 tensor size: torch.Size([20, 3, 5, 5]), tensor type: torch.float32
2 tensor size: torch.Size([64, 20, 220, 220]), tensor type: torch.float32
3 tensor size: torch.Size([64, 20, 110, 110]), tensor type: torch.int64
4 tensor size: torch.Size([64, 20, 110, 110]), tensor type: torch.float32
5 tensor size: torch.Size([64, 20, 55, 55]), tensor type: torch.int64
unpack hook used tensors:
6 tensor size: torch.Size([64, 20, 110, 110]), tensor type: torch.float32
7 tensor size: torch.Size([64, 20, 55, 55]), tensor type: torch.int64
8 tensor size: torch.Size([64, 20, 220, 220]), tensor type: torch.float32
9 tensor size: torch.Size([64, 20, 110, 110]), tensor type: torch.int64
10 tensor size: torch.Size([64, 3, 224, 224]), tensor type: torch.float32
11 tensor size: torch.Size([20, 3, 5, 5]), tensor type: torch.float32
</code></pre>
<p>My question is:</p>
<p>Why does PyTorch store the input tensors for the max-pooling layers? I think that for backward propagation a max-pooling layer only needs the int64 tensors (the 3rd and 5th tensors, which store the indices of the max values).</p>
<p>Thank you for reading this long post.
Any help is appreciated.</p>
|
<python><deep-learning><pytorch><backpropagation>
|
2023-03-02 06:25:50
| 0
| 870
|
core_not_dumped
|
75,611,661
| 6,727,914
|
Is there any logical reason not to reuse a deleted slot immediately in Hash Tables?
|
<p>I have seen several implementations of dynamic tables with open addressing using linear probing that does not use deleted slots before resizing. Here is one example: <a href="https://gist.github.com/EntilZha/5397c02dc6be389c85d8" rel="nofollow noreferrer">https://gist.github.com/EntilZha/5397c02dc6be389c85d8</a></p>
<p>Is there any logical reason not to reuse a deleted slot immediately?</p>
<p>I know why it makes sense not to set the slot's value as <code>Empty</code>
<a href="https://stackoverflow.com/questions/9127207/hash-table-why-deletion-is-difficult-in-open-addressing-scheme">Hash Table: Why deletion is difficult in open addressing scheme</a>, because it would create a bug in the <code>read</code> operation. However, what stops us from <code>writing</code> to this slot? Wouldn't it be better to keep as many slots in use as possible for performance?</p>
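<p>For contrast, a sketch of an insert that does reuse tombstones: remember the first deleted slot seen on the probe path, keep probing so existing keys are still found, and only write into the tombstone once an empty slot proves the key is absent:</p>

```python
EMPTY = None
DELETED = object()  # tombstone sentinel

def insert(table, key, value, hash_fn=hash):
    """Linear-probing insert that reuses the first tombstone on the path."""
    n = len(table)
    i = hash_fn(key) % n
    first_tombstone = None
    while True:
        slot = table[i]
        if slot is EMPTY:
            # Key is definitely absent; prefer the tombstone if we saw one
            target = i if first_tombstone is None else first_tombstone
            table[target] = (key, value)
            return target
        if slot is DELETED:
            if first_tombstone is None:
                first_tombstone = i
        elif slot[0] == key:      # existing key: update in place
            table[i] = (key, value)
            return i
        i = (i + 1) % n
```

<p>The point is that reuse is safe as long as the probe continues past tombstones during lookup, so some implementations skip it purely for simplicity.</p>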
|
<python><algorithm><data-structures><time-complexity><hashtable>
|
2023-03-02 05:37:15
| 2
| 21,427
|
TSR
|
75,611,475
| 1,165,477
|
When using Anaconda, where does Python's Idle GUI create output files when calling 'file.open(...)'?
|
<p>I recently installed Anaconda, and am using it to run Idle.</p>
<p>I'm trying to figure out file I/O. I have a file created -</p>
<pre><code>file = open('output.txt', 'w')
</code></pre>
<p>I wrote to the file like so -</p>
<pre><code>file.write('test')
</code></pre>
<p>Idle spit out '4' (1 for each character, I'm guessing).</p>
<p>But I don't know where idle created the file. I checked the Anaconda folder, and I checked the idlelib folder in Anaconda, but there was no file 'output.txt' in either.</p>
<p>So where did Idle create the file? Or did it only create it in memory, and there's more I need to do to finalize and output the file?</p>
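<p>A quick way to see where the file went, since relative paths in <code>open()</code> resolve against the interpreter's current working directory (sketch):</p>

```python
import os

# Where a bare open('output.txt', 'w') would create the file:
cwd = os.getcwd()
target = os.path.abspath("output.txt")
print(cwd)
print(target)
```

<p>In IDLE this is typically the directory IDLE was launched from (or the script's own directory when running a saved file). Note also that the write is only guaranteed to be flushed to disk after <code>file.close()</code>.</p>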
|
<python><python-3.x><file-io><anaconda3>
|
2023-03-02 04:58:43
| 0
| 3,615
|
Will
|
75,611,338
| 13,215,988
|
How do I pass an array of strings to a FastAPI post request function?
|
<p>I have this code for a FastAPI app. Right now it's just supposed to take in an array of strings and return them.</p>
<pre><code>from fastapi import FastAPI
from pydantic import BaseModel

class Item(BaseModel):
    name: list[str]

app = FastAPI()

@app.post("/")
async def root(item: Item):
    list_names = []
    for nm in item.name:
        list_names.append(nm)
    return {list_names}
</code></pre>
<p>I run the code with <code>uvicorn main:app --reload</code> and make a post request in insomnia to <code>http://127.0.0.1:8000</code> with the following JSON:</p>
<pre><code>{
"name": [
"each",
"single",
"word"
]
}
</code></pre>
<p>But it fails... I get this in insomnia:<br />
<a href="https://i.sstatic.net/X7mLJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X7mLJ.png" alt="enter image description here" /></a><br />
I also get this in my terminal: <code>TypeError: unhashable type: 'list'</code></p>
<p>So how do I pass an array of strings into a FastAPI post request function?</p>
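<p>The traceback points at the return value rather than the request parsing: <code>{list_names}</code> is a set literal, and Python tries to hash the list to put it in the set. A minimal stand-alone sketch of the failure and a JSON-friendly alternative:</p>

```python
names = ["each", "single", "word"]

try:
    result = {names}           # set literal: hashing the list raises TypeError
except TypeError:
    result = {"names": names}  # a plain dict (or just the list) serializes fine
```

<p>So in the endpoint, <code>return {"names": list_names}</code> or simply <code>return list_names</code> should work; the <code>list[str]</code> model itself is fine on Python 3.9+.</p>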
|
<python><post><fastapi><uvicorn><insomnia>
|
2023-03-02 04:29:24
| 1
| 1,212
|
ChristianOConnor
|
75,611,184
| 13,575,728
|
Python constructor that initialize its parameters based on values from another python file
|
<p>I have a class <code>A</code> whose constructor takes many variables. In my project, these variables are defined in another Python file, <code>B</code>.</p>
<p>file <code>B</code> looks like this:</p>
<pre><code>p1 = 4
p2 = 1
...
pN = 'dd'
#and a bunch of other variables.
</code></pre>
<p>Class <code>A</code> looks like this:</p>
<pre><code>class A():
    def __init__(self, p1, p2, ..., pN):
</code></pre>
<p>The number of parameters included in the constructor of <code>A</code> is very high, and I don't think it would be easy for a user (i.e., someone else who wants to use my class) to instantiate. However, I want my class to be generic and decoupled from other files in the project (i.e., file <code>B</code>). My question is: how should class <code>A</code> be constructed?</p>
<p>A solution in my mind would be to pass the Python module that defines the parameters to the constructor of <code>A</code>:</p>
<pre><code>class A():
    def __init__(self, python_filename):
        self.p1 = python_filename.p1
        self.p2 = python_filename.p2
</code></pre>
<p>My first thought is to make a class in <code>B</code> that has static members; would that be a good design in terms of being Pythonic and object-oriented?</p>
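<p>One decoupled sketch: accept the parameters as keyword arguments and add an alternate constructor that pulls every public name out of a module (or any namespace object), so <code>A</code> never hard-codes file <code>B</code>:</p>

```python
import types

class A:
    def __init__(self, **params):
        # Store every supplied parameter as an attribute
        self.__dict__.update(params)

    @classmethod
    def from_namespace(cls, ns):
        """Build A from a module/namespace, skipping private/dunder names."""
        cfg = {k: v for k, v in vars(ns).items() if not k.startswith("_")}
        return cls(**cfg)

# Stand-in for file B:
b = types.ModuleType("B")
b.p1, b.p2, b.pN = 4, 1, "dd"

a = A.from_namespace(b)
```

<p>Callers who don't have a module can still instantiate <code>A(p1=4, p2=1)</code> directly, which keeps the class generic.</p>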
|
<python><python-3.x><design-patterns>
|
2023-03-02 03:57:03
| 1
| 377
|
rando
|
75,611,161
| 3,127,828
|
How to read_csv correctly for dataFrame with Int64 Array?
|
<p>The following is a simplied version of the issue.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data={'key': [1,1,2,2], 'val': [3,4,5,5]})
df['val'] = df['val'].astype('Int64') # read_csv can't read Int64 array properly by default
df = df.groupby('key')['val'].agg(['unique'])
display(df)
df.to_csv('test')
df = pd.read_csv('test', index_col=0)
display(df)
</code></pre>
<p>And this is what I got</p>
<p><a href="https://i.sstatic.net/TyMSi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TyMSi.png" alt="enter image description here" /></a></p>
<p>How can I read the unique column data correctly? Thanks</p>
<hr />
<p>Thanks for @hide1nbush 's pointer. I resolved it using converter.</p>
<pre class="lang-py prettyprint-override"><code>import ast
def convert_int64_array(array_string):
return pd.array(ast.literal_eval(array_string.split("\n")[1]), dtype=pd.Int64Dtype())
df = pd.read_csv('test', index_col=0, converters={'unique': convert_int64_array})
</code></pre>
<p>But I wonder if there is an easier way to do this.</p>
<p><a href="https://i.sstatic.net/t8buo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t8buo.png" alt="enter image description here" /></a></p>
<hr />
<p>I found that using <strong>pickle</strong> format is the easiest way to round trip the dataFrame as file. I don't need to worry about index, int64 etc. See <a href="https://towardsdatascience.com/the-best-format-to-save-pandas-data-414dca023e0d" rel="nofollow noreferrer">this</a> to understand the difference between some major formats.</p>
|
<python><pandas><dataframe>
|
2023-03-02 03:50:27
| 0
| 4,871
|
lzl124631x
|
75,610,981
| 16,009,435
|
Make string a valid file name that can later be viewed as original string
|
<p>Say I have a string</p>
<pre><code>"this is | test"
</code></pre>
<p>and I want to use that string as a file name but it is not valid to have the <code>|</code> character inside a file name. What is the best way to replace all characters that are not valid as a file name with characters that are valid but later on I can read the saved file name by replacing back all the prior replaced invalid characters?</p>
<p>To explain better say the initial string was</p>
<pre><code>this is | test
</code></pre>
<p>then I replaced the string to <code>"this is # test"</code> and saved the file now I can re-read that file name as the original string by just replacing the <code>#</code> with <code>|</code>. What is the best way to achieve this for all invalid strings?</p>
|
<python>
|
2023-03-02 03:07:19
| 1
| 1,387
|
seriously
|
75,610,911
| 9,475,509
|
How to use Mermaid diagram in Jupyter Notebook with mermaid.ink through proxy
|
<p>Previously, to use <a href="https://mermaid.js.org/" rel="noreferrer">Mermaid</a> in a Jupyter Notebook file, <a href="https://pypi.org/project/nb-mermaid/" rel="noreferrer"><code>nb-mermaid</code></a> had to be installed using <code>pip</code> and then invoked using the built-in magic command <code>%%javascript</code>, as instructed <a href="https://bollwyvl.github.io/nb-mermaid/" rel="noreferrer">here</a>, or using <code>%%html</code>.</p>
<p>Unfortunately, the result, in a Jupyter Notebook file, <a href="https://gist.github.com/bollwyvl/e51b4e724f0b82669c84" rel="noreferrer">can not be displayed on GitHub</a>, but <a href="https://nbviewer.org/gist/bollwyvl/e51b4e724f0b82669c84" rel="noreferrer">will be displayed on nbviewer</a>. It works only in a GitHub page.</p>
<p>Then there is another way using <code>mermaid.ink</code> with IPython as guide in <a href="https://mermaid.js.org/config/Tutorials.html#jupyter-integration-with-mermaid-js" rel="noreferrer">here</a> as follows.</p>
<pre class="lang-py prettyprint-override"><code>import base64
from IPython.display import Image, display
import matplotlib.pyplot as plt

def mm(graph):
    graphbytes = graph.encode("ascii")
    base64_bytes = base64.b64encode(graphbytes)
    base64_string = base64_bytes.decode("ascii")
    display(
        Image(
            url="https://mermaid.ink/img/" + base64_string
        )
    )

mm("""
graph LR;
A--> B & C & D;
B--> A & E;
C--> A & E;
D--> A & E;
E--> B & C & D;
""")
</code></pre>
<p>And it works fine and can be viewed on GitHub as in <a href="https://github.com/dudung/py-jupyter-nb/blob/main/src/apply/flowchart/mermaid/begprocend.ipynb" rel="noreferrer">here</a>.</p>
<p>But when it runs behind a proxy, the image, which is generated remotely on <code>https://mermaid.ink/</code> and displayed using IPython, cannot be displayed in the Jupyter Notebook file. Is there any solution to this problem?</p>
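<p>One direction to try (a sketch, untested behind a real proxy): keep building the mermaid.ink URL, but fetch the PNG bytes explicitly with a proxy-aware HTTP client and hand the bytes to IPython, instead of letting the notebook front end load the remote URL:</p>

```python
import base64

def mermaid_ink_url(graph, server="https://mermaid.ink/img/"):
    """Encode a Mermaid graph into a mermaid.ink image URL."""
    return server + base64.b64encode(graph.encode("ascii")).decode("ascii")

url = mermaid_ink_url("graph LR;\nA-->B;")

# Hypothetical proxy fetch (requests + IPython, not executed here;
# the proxy address is a placeholder):
# resp = requests.get(url, proxies={"https": "http://proxy.local:3128"})
# display(Image(data=resp.content))
```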
|
<python><jupyter-notebook><proxy><mermaid>
|
2023-03-02 02:52:37
| 2
| 789
|
dudung
|
75,610,687
| 17,274,113
|
skimage segmetation extracting labels filtered by properties
|
<p>I would like to extract skimage identified "labels" or segments which meet thresholds of parameters. My binary image was successfully split into segments, which skimage seems to call "labels", as the following:</p>
<pre><code>labels = measure.label(classified, connectivity = image.ndim)
#symoblize each label with a different colour and plot it over the original image
image_label_overlay = label2rgb(labels, image=image)
plt.imshow(image_label_overlay)
</code></pre>
<p><a href="https://i.sstatic.net/aGtAP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aGtAP.png" alt="enter image description here" /></a></p>
<p>As you can see, each labels are individually coloured.</p>
<p>My question is: How can I select these labeled segments base on their properties, and vectorize them in some way (convert to geopandas dataframe entries or something). My attempt to plot filtered labels is as follows, but was unsuccessful:</p>
<pre><code>props = measure.regionprops_table(labels, image,
properties = ['label',
'area',
'eccentricity',
'perimeter',
'equivalent_diameter',
'mean_intensity',
'solidity'])
#create a dataframe from these object properties
segs = pd.DataFrame(props)
#extract segments of low eccentricity (0 would be a perfect circle)
segs_filtered =segs[segs['eccentricity'] < 0.1]
#create list of label indexed meeting criteria
filtered_labels = segs_filtered['label']
</code></pre>
<p>I think <code>filtered_labels</code> is actually indexing the pixels that belong to the labels I want to keep, instead of the labels as a whole. That is why the following plotting method doesn't work.</p>
<pre><code>fig = px.imshow(labels, binary_string=True)
fig.update_traces(hoverinfo='skip')  # hover is only for label info

# For each label, add a filled scatter trace for its contour,
# and display the properties of the label in the hover of this trace.
for index in filtered_labels:
    label_i = props[index].label
    contour = measure.find_contours(labels == label_i, 0.5)[0]
    y, x = contour.T
    hoverinfo = ''
    for prop_name in properties:
        hoverinfo += f'<b>{prop_name}: {getattr(props[index], prop_name):.2f}</b><br>'
    fig.add_trace(go.Scatter(
        x=x, y=y, name=label_i,
        mode='lines', fill='toself', showlegend=False,
        hovertemplate=hoverinfo, hoveron='points+fills'))

plotly.io.show(fig)
</code></pre>
<p><code>label_i = props[index].label</code> produces a <code>Key error: 1</code></p>
<p>Please let me know if there is a way of filtering and producing vector objects from these labels.</p>
<p>For reproducibility, here is the input classified TIFF ("classified"): <a href="https://drive.google.com/file/d/1L-8Q4Y6ion9pNQlZtx5g0YxHzyA9pWq9/view?usp=sharing" rel="nofollow noreferrer">download link</a>.</p>
<p>Also I am sorry if this is unclear. I don't really understand how the labelling and properties work in skimage.</p>
<p>Thanks very much for reading!</p>
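<p>The <code>KeyError: 1</code> comes from indexing: <code>regionprops_table</code> returns a dict of column arrays, not a list of region objects, so <code>props[index]</code> looks up a dict key. For selecting the filtered segments as an image, one sketch is <code>numpy.isin</code> on the label image (hypothetical small label image below):</p>

```python
import numpy as np
import pandas as pd

# Hypothetical 4x4 label image with segments 1, 2, 3
labels = np.array([
    [1, 1, 0, 2],
    [1, 1, 0, 2],
    [0, 0, 0, 0],
    [3, 3, 3, 0],
])

# Pretend segments 1 and 3 passed the eccentricity filter
filtered_labels = pd.Series([1, 3], name="label")

keep = np.isin(labels, filtered_labels.to_numpy())
filtered_image = np.where(keep, labels, 0)   # zero out rejected segments
```

<p>From <code>filtered_image</code>, <code>measure.find_contours(filtered_image == label_i, 0.5)</code> per kept label gives contours that could then be turned into shapely polygons for a GeoDataFrame.</p>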
|
<python><vectorization><image-segmentation><scikit-image><feature-extraction>
|
2023-03-02 01:58:21
| 1
| 429
|
Max Duso
|
75,610,648
| 8,803,234
|
Plotly Bar Chart Not Reflecting Values in Call
|
<p>This is driving me nutz.</p>
<p>I have a Polars dataframe named <code>totals</code> that stores counts by year:</p>
<p><a href="https://i.sstatic.net/hCke1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hCke1.png" alt="dataframe screen shot" /></a></p>
<p>I'm making a very simple bar chat in plotly with code like the following:</p>
<pre><code>px.bar(
totals.to_pandas(),
x = 'commit_year',
y = 'count',
color='commit_year'
)
</code></pre>
<p>This returns...</p>
<p><a href="https://i.sstatic.net/1NllM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1NllM.png" alt="example of graph rendered" /></a></p>
<p>I don't understand why I'm not seeing bars for each year matching the totals from the dataframe.</p>
<p>I set the y axis to a list of 12 random numbers to see how it affected the chart:</p>
<pre><code>px.bar(
totals.to_pandas(),
x = 'commit_year',
y = [22, 33, 231, 33, 31, 31, 23, 12, 13, 10, 33, 44],
color='commit_year'
)
</code></pre>
<p>This looks great.</p>
<p><a href="https://i.sstatic.net/OEqkF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OEqkF.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong?</p>
<p>The value returned by the following...</p>
<pre><code>totals.select('count').to_series()
</code></pre>
<p>Is as follows:</p>
<pre><code>u32
73
93
88
6
21
439
365
91
55
36
31
1
</code></pre>
|
<python><plotly><python-polars>
|
2023-03-02 01:47:21
| 0
| 4,236
|
Adam
|
75,610,328
| 654,187
|
RXPY semaphore filter
|
<p>I'm looking to execute a batch of processes in parallel, but process each batch in series using RXPY (we're using v3 right now). Each process is kicked off, then I use RXPY to wait for a set amount of time before ending the process. Here's a basic version:</p>
<pre><code>def start_task(value):
print(f"Started {value}")
return value
def end_task(value):
print(f"End: {value}")
def main():
print("Start main")
rx.interval(1).pipe(
ops.flat_map(lambda time : rx.from_([1,2]).pipe(
ops.map(lambda value: [time, value])
)),
ops.map(lambda value: start_task(value)),
ops.delay(2),
ops.map(lambda value: end_task(value)),
).run()
</code></pre>
<p>The problem with this is the long-running processes overlap each other. In other words, I do not want new processes to start before the last batch has finished. In the above example, the output is:</p>
<pre><code>Start main
Started [0, 1]
Started [0, 2]
Started [1, 1]
Started [1, 2]
Started [2, 1]
Started [2, 2]
End: [0, 1]
End: [0, 2]
End: [1, 1]
Started [3, 1]
End: [1, 2]
Started [3, 2]
End: [2, 1]
End: [2, 2]
...
</code></pre>
<p>As you can see, the batches for times 1 and 2 started before the time-0 batch ended.</p>
<p>I can solve this by adding a boolean variable <code>working</code>, somewhat like a semaphore:</p>
<pre><code>def start_task(value):
print(f"Started {value}")
return value
def end_task(value):
print(f"End: {value}")
def main():
print("Start main")
global working
working = False
def set_working(input):
global working
working = input
rx.interval(1).pipe(
ops.filter(lambda time: not working),
ops.do_action(lambda value: set_working(True)),
ops.flat_map(lambda time : rx.from_([1,2]).pipe(
ops.map(lambda value: [time, value])
)),
ops.map(lambda value: start_task(value)),
ops.delay(2),
ops.map(lambda value: end_task(value)),
ops.do_action(lambda value: set_working(False)),
).run()
</code></pre>
<p>With the following output:</p>
<pre><code>Start main
Started [0, 1]
Started [0, 2]
End: [0, 1]
End: [0, 2]
Started [3, 1]
Started [3, 2]
End: [3, 1]
End: [3, 2]
</code></pre>
<p>But this feels wrong. Is there an existing operator in RXPY that would accomplish this same functionality?</p>
|
<python><rx-py>
|
2023-03-02 00:28:48
| 1
| 11,153
|
John Ericksen
|
75,610,232
| 8,713,442
|
Issues while running pyspark UDF with AWS glue
|
<p>I am trying to call a UDF in an AWS Glue job, but I am getting an error. The code and error are given below.</p>
<pre><code>import sys,os
import concurrent.futures
from concurrent.futures import *
import boto3
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.context import SparkConf
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from datetime import datetime
from pyspark.sql.functions import array
from pyspark.sql.functions import sha2, concat_ws
from pyspark.sql.functions import udf
from pyspark.sql.functions import StringType
import requests
import json
###############################
class JobBase(object):
fair_scheduler_config_file= "fairscheduler.xml"
rowAsDict={}
listVendorDF=[]
Oracle_Username=None
Oracle_Password=None
Oracle_jdbc_url=None
futures=[]
ataccama_url=None
#all spark configuations can be passed in object in s3 bucket
ataccama_cleanse_udf=udf(lambda x:self.__cleanse_dnb_attribute(x),StringType() )
def __cleanse_dnb_attribute(v_dnb_attr):
payload = '{"in":{"src_org_name":"' + v_dnb_attr +'","sco_in":0,"exp_in":""}}'
r = requests.post(self.ataccama_url, data=payload) # response
r.raise_for_status()
if r is not None:
r2 = json.loads(r.text)
if r2['out'] is not None:
r3 = r2['out']['cio_org_name'].replace(' ', '')
return r3
else:
''
else:
''
def __start_spark_glue_context(self):
conf = SparkConf().setAppName("python_thread").set('spark.scheduler.mode', 'FAIR').set("spark.scheduler.allocation.file", self.fair_scheduler_config_file)
self.sc = SparkContext(conf=conf)
self.glueContext = GlueContext(self.sc)
self.spark = self.glueContext.spark_session
def __spark_read_from_table(self,table_name):
#return self.spark.read.format("jdbc").option("url", self.Oracle_jdbc_url).option("dbtable", table_name).option("user", self.Oracle_Username).option("password", self.Oracle_Password).option("numPartitions",2).load()
return self.glueContext.read.format("jdbc").option("url", self.Oracle_jdbc_url).option("dbtable", table_name).option("user", self.Oracle_Username).option("password", self.Oracle_Password).option("numPartitions",2)\
.option("lowerBound", 1)\
.option("upperBound",10000)\
.option("partitionColumn", "ORG_CODE").load()
# Connecting to the source
#d f = glueContext.read.format("jdbc").option("driver", jdbc_driver_name).option("url", db_url).option("dbtable", table_name).option("user", db_username).option("password", db_password).load()
def execute(self):
self.__start_spark_glue_context()
args = getResolvedOptions(sys.argv, ['JOB_NAME','ataccma-cleanse-url'])
self.ataccama_url=args['ataccma_cleanse_url']
self.logger = self.glueContext.get_logger()
self.logger.info("Starting Glue Threading job ")
# ####connect to EDQDB edqdb-dev
client = boto3.client('glue', region_name='XXXXXXXXXX')
response = client.get_connection(Name='XXXXXXXX')
connection_properties = response['Connection']['ConnectionProperties']
URL = connection_properties['JDBC_CONNECTION_URL']
url_list = URL.split("/")
host = "{}".format(url_list[-2][:-5])
new_host=host.split('@',1)[1]
port = url_list[-2][-4:]
database = "{}".format(url_list[-1])
self.Oracle_Username = "{}".format(connection_properties['USERNAME'])
self.Oracle_Password = "{}".format(connection_properties['PASSWORD'])
#no. jobs which can run in parallel
spark_pool_configuration=3
print("Host:",host)
print("New Host:",new_host)
print("Port:",port)
print("Database:",database)
self.Oracle_jdbc_url="jdbc:oracle:thin:@//"+new_host+":"+port+"/"+database
print("Oracle_jdbc_url:",self.Oracle_jdbc_url)
############testing to check hash ############################
source_df =self.spark.read.format("jdbc").option("url", self.Oracle_jdbc_url).option("dbtable", "(select ENTERPRISE_NUM,ENTERPRISE_NAME,DNB_BUS_NM_TXT,DNB_SITE_BUS_STR_TXT from xxgmdmadm.mdm_firmographic_data_v2 where ORG_ACCT_ID in (11758718960,11758836692)) ").option("user", self.Oracle_Username).option("password", self.Oracle_Password).load()
source_df.show(truncate=False)
# columnarray = array(self.arr_list)
# print(columnarray)
# source_df.withColumn("row_sha2", sha2(concat_ws("||", columnarray), 256)).show(truncate=False)
############testing to check hash finished ############################
################test to check if we are getting variable name ###############
source_df=source_df.withColumn('DNB_BUS_NM_TXT_CLEANSED',self.ataccama_cleanse_udf( source_df['DNB_BUS_NM_TXT'])).show(truncate=False)
def main():
job = JobBase()
job.execute()
if __name__ == '__main__':
main()
</code></pre>
<p>error I am getting</p>
<blockquote>
<p>TypeError: Invalid argument, not a string or column: &lt;__main__.JobBase object at 0x7f4a77382390&gt; of type &lt;class '__main__.JobBase'&gt;. For column literals, use 'lit', 'array', 'struct' or 'create_map' function.</p>
</blockquote>
|
<python><apache-spark><pyspark>
|
2023-03-02 00:07:53
| 1
| 464
|
pbh
|
75,610,213
| 7,706,917
|
How do I debug a FastAPI Azure Function App in VSCode?
|
<p><strong>The problem</strong></p>
<p>I have created an application which utilizes FastAPI and Azure Function Apps by following the guide <a href="https://learn.microsoft.com/en-us/samples/azure-samples/fastapi-on-azure-functions/azure-functions-python-create-fastapi-app/" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/samples/azure-samples/fastapi-on-azure-functions/azure-functions-python-create-fastapi-app/</a></p>
<p>However, I am unable to get debugging to work. The function host does successfully start, but the debugger is unable to connect. I receive the pop-up error <code>connect ECONNREFUSED 127.0.0.1:9091</code>.</p>
<p>I am on a Windows 10 system using Python 3.9.13, functions core tools v4, packages <code>azure-functions</code>, <code>nest_asyncio</code>, <code>fastapi</code>, and <code>uvicorn</code>. I am using the latest versions of the VSCode extensions <code>Azure Functions v1.10.3</code> and <code>Python v2023.2.0</code>.</p>
<p><strong>What I have tried</strong></p>
<p>I have attempted to enable reload and debugging for uvicorn following the FastAPI guide at <a href="https://fastapi.tiangolo.com/tutorial/debugging/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/debugging/</a> with no luck.</p>
<p>Attempting to configure the <code>--language-worker</code> of the <code>func start</code> or <code>func host start</code> command results in the same error if using <code>ptvsd</code> or a failure to locate Python if attempting to use <code>-m uvicorn --reload --debug</code>.</p>
<p><strong>Code</strong> As found in the aforementioned guide:</p>
<p><em>tasks.json</em></p>
<pre class="lang-json prettyprint-override"><code>{
"type": "func",
"command": "host start",
"problemMatcher": "$func-python-watch",
"isBackground": true,
"dependsOn": "pip install (functions)"
},
</code></pre>
<p><em>launch.json</em></p>
<pre class="lang-json prettyprint-override"><code>{
"name": "Attach to Python Functions",
"type": "python",
"request": "attach",
"port": 9091,
"preLaunchTask": "func: host start"
}
</code></pre>
|
<python><python-3.x><azure-functions><fastapi>
|
2023-03-02 00:04:44
| 1
| 349
|
patyx
|
75,610,200
| 8,838,303
|
Python: How to generate a random array only consisting of a specific number of -1, 0, 1?
|
<p>Is there a standard way in Python to generate an array (of size 15), where precisely three 1s and four -1s are placed randomly and the remaining array entries are 0?</p>
<p>An example for such an array would be</p>
<pre><code>0 0 0 0 1 1 0 -1 1 -1 -1 0 0 0 -1
</code></pre>
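<p>One straightforward approach (a sketch using NumPy, though <code>random.shuffle</code> from the standard library works the same way): build the exact multiset of values first, then shuffle the positions:</p>

```python
import numpy as np

rng = np.random.default_rng()

# Exactly three 1s, four -1s, and eight 0s, in random positions.
arr = np.array([1] * 3 + [-1] * 4 + [0] * 8)
rng.shuffle(arr)
print(arr)
```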
|
<python><arrays><random>
|
2023-03-02 00:02:40
| 2
| 475
|
3nondatur
|
75,610,165
| 14,471,688
|
Combine multiple identical nested dictionaries of a list by merging the value
|
<p>I want to combine multiple identical nested dictionaries from a list by merging their values into lists.</p>
<p>Suppose I have a dictionary like this:</p>
<pre><code>ex = {'tran': { 'precision': 0.6666666666666666,
'recall': 0.6486486486486487,
'f1_score': 0.6575342465753425},
'act': {
'coy': {'precision': 0.7142857142857143,
'recall': 0.7142857142857143,
'f1_score': 0.7142857142857143},
'fam': {'precision': 0.8518518518518519,
'recall': 0.9583333333333334,
'f1_score': 0.9019607843137256},
'fri': {'precision': 0.7142857142857143,
'recall': 0.625,
'f1_score': 0.6666666666666666}},
'pla': {'acc': {'precision': 0.42105263157894735,
'recall': 0.4444444444444444,
'f1_score': 0.43243243243243246},
'pen': {'precision': 0.42105263157894735,
'recall': 0.8888888888888888,
'f1_score': 0.5714285714285714},
'loc': {'precision': 0.2608695652173913,
'recall': 0.8571428571428571,
'f1_score': 0.4}},
'j': {'precision': 0.44,
'recall': 0.4074074074074074,
'f1_score': 0.4230769230769231},
'rea': {'precision': 0.5,
'recall': 0.5555555555555556,
'f1_score': 0.5263157894736842}}
</code></pre>
<p>I have a list that contains that dictionary multiple times (here I have only two, but it can be three, four, ...):</p>
<pre><code>dicts = [ex, ex]
</code></pre>
<p>What I have tried:</p>
<pre><code>merge_dict = {}
for k in dicts[0]:
merge_dict[k] = [d[k] for d in dicts]
</code></pre>
<p>But I got this:</p>
<pre><code>{'tran': [{'precision': 0.6666666666666666,
'recall': 0.6486486486486487,
'f1_score': 0.6575342465753425},
{'precision': 0.6666666666666666,
'recall': 0.6486486486486487,
'f1_score': 0.6575342465753425}],
'act': [{'coy': {'precision': 0.7142857142857143,
'recall': 0.7142857142857143,
'f1_score': 0.7142857142857143},
'fam': {'precision': 0.8518518518518519,
'recall': 0.9583333333333334,
'f1_score': 0.9019607843137256},
'fri': {'precision': 0.7142857142857143,
'recall': 0.625,
'f1_score': 0.6666666666666666}},
{'coy': {'precision': 0.7142857142857143,
'recall': 0.7142857142857143,
'f1_score': 0.7142857142857143},
'fam': {'precision': 0.8518518518518519,
'recall': 0.9583333333333334,
'f1_score': 0.9019607843137256},
'fri': {'precision': 0.7142857142857143,
'recall': 0.625,
'f1_score': 0.6666666666666666}}],
'pla': [{'acc': {'precision': 0.42105263157894735,
'recall': 0.4444444444444444,
'f1_score': 0.43243243243243246},
'pen': {'precision': 0.42105263157894735,
'recall': 0.8888888888888888,
'f1_score': 0.5714285714285714},
'loc': {'precision': 0.2608695652173913,
'recall': 0.8571428571428571,
'f1_score': 0.4}},
{'acc': {'precision': 0.42105263157894735,
'recall': 0.4444444444444444,
'f1_score': 0.43243243243243246},
'pen': {'precision': 0.42105263157894735,
'recall': 0.8888888888888888,
'f1_score': 0.5714285714285714},
'loc': {'precision': 0.2608695652173913,
'recall': 0.8571428571428571,
'f1_score': 0.4}}],
'j': [{'precision': 0.44,
'recall': 0.4074074074074074,
'f1_score': 0.4230769230769231},
{'precision': 0.44,
'recall': 0.4074074074074074,
'f1_score': 0.4230769230769231}],
'rea': [{'precision': 0.5,
'recall': 0.5555555555555556,
'f1_score': 0.5263157894736842},
{'precision': 0.5,
'recall': 0.5555555555555556,
'f1_score': 0.5263157894736842}]}
</code></pre>
<p>This is not correct; it seems I need to dig deeper into the nested values in order to store each leaf value in a list.</p>
<p>My desired output should look like this:</p>
<pre><code>{'tran': { 'precision': [0.6666666666666666, 0.6666666666666666],
'recall': [0.6486486486486487, 0.6486486486486487],
'f1_score': [0.6575342465753425, 0.6575342465753425]},
'act': {
'coy': {'precision': [0.7142857142857143, 0.7142857142857143],
'recall': [0.7142857142857143, 0.7142857142857143],
'f1_score': [0.7142857142857143, 0.7142857142857143]},
'fam': {'precision': [0.8518518518518519, 0.8518518518518519],
'recall': [0.9583333333333334, 0.9583333333333334],
'f1_score': [0.9019607843137256, 0.9019607843137256]},
'fri': {'precision': [0.7142857142857143, 0.7142857142857143],
'recall': [0.625, 0.625],
'f1_score': [0.6666666666666666, 0.6666666666666666]}},
'pla': {
'acc': {'precision': [0.42105263157894735, 0.42105263157894735],
'recall': [0.4444444444444444, 0.4444444444444444],
'f1_score': [0.43243243243243246, 0.43243243243243246]},
'pen': {'precision': [0.42105263157894735, 0.42105263157894735],
'recall': [0.8888888888888888, 0.8888888888888888],
'f1_score': [0.5714285714285714, 0.5714285714285714]},
'loc': {'precision': [0.2608695652173913, 0.2608695652173913],
'recall': [0.8571428571428571, 0.8571428571428571],
'f1_score': [0.4, 0.4]}},
'j': {'precision': [0.44, 0.44],
'recall': [0.4074074074074074, 0.4074074074074074],
'f1_score': [0.4230769230769231, 0.4230769230769231]},
'rea': {'precision': [0.5, 0.5],
'recall': [0.5555555555555556, 0.5555555555555556],
'f1_score': [0.5263157894736842, 0.5263157894736842]}}
</code></pre>
<p>How can I get this desired output?</p>
<p>In addition to this, I also want a mean value for each list of each key.</p>
<p>For example, for a key and value pair:
'precision': [0.6666666666666666, 0.6666666666666666] -> 'precision': 0.6666666666666666</p>
<p>where 0.6666666666666666 is the mean of [0.6666666666666666, 0.6666666666666666]</p>
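<p>A recursive sketch (not from the question, and assuming all dicts share exactly the same nesting) that zips identically structured dicts into lists at the leaves, plus a second pass that replaces each list with its mean. The toy dict below stands in for the full <code>ex</code>:</p>

```python
def merge_values(dicts):
    """Zip identically structured dicts, collecting leaf values into lists."""
    if all(isinstance(d, dict) for d in dicts):
        return {key: merge_values([d[key] for d in dicts]) for key in dicts[0]}
    return list(dicts)  # reached the leaves: collect them

def mean_values(tree):
    """Replace every leaf list produced by merge_values with its mean."""
    if isinstance(tree, dict):
        return {key: mean_values(value) for key, value in tree.items()}
    return sum(tree) / len(tree)

ex = {"tran": {"precision": 0.5, "recall": 0.25},
      "act": {"coy": {"f1_score": 0.75}}}
merged = merge_values([ex, ex])
means = mean_values(merged)
```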
|
<python><list><dictionary><merge>
|
2023-03-01 23:53:46
| 2
| 381
|
Erwin
|
75,610,162
| 8,481,155
|
Apache Beam pass list as argument - Python SDK
|
<p>I have an Apache Beam pipeline which takes lists as arguments and uses them in the Filter and Map functions. Since these arrive as strings, I convert them using ast.literal_eval. Is there any other, better way to do the same thing?</p>
<pre><code>import argparse
import ast
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions
def run_pipeline(custom_args, beam_args):
elements = [
{'name': 'Jim', 'join_year': 2010, 'location': 'LA', 'role': 'Executive assistant'},
{'name': 'Tim', 'join_year': 2015, 'location': 'NY', 'role': 'Account manager'},
{'name': 'John', 'join_year': 2010, 'location': 'LA', 'role': 'Customer service representative'},
{'name': 'Bob', 'join_year': 2020, 'location': 'NJ', 'role': 'Customer service representative'},
{'name': 'Michael', 'join_year': 2019, 'location': 'CA', 'role': 'Scheduler'},
{'name': 'Adam', 'join_year': 2010, 'location': 'CA', 'role': 'Customer service representative'},
{'name': 'Andrew', 'join_year': 2009, 'location': 'TX', 'role': 'Account manager'},
{'name': 'James', 'join_year': 2017, 'location': 'NJ', 'role': 'Executive assistant'},
{'name': 'Paul', 'join_year': 2015, 'location': 'NY', 'role': 'Scheduler'},
{'name': 'Justin', 'join_year': 2015, 'location': 'NJ', 'role': 'Scheduler'}
]
opts = PipelineOptions(beam_args)
joinYear = [i for i in ast.literal_eval(custom_args.joinYear)]
selectCols = [i for i in ast.literal_eval(custom_args.selectCols)]
with beam.Pipeline(options=opts) as p:
(p
| "Create" >> beam.Create(elements)
| "Filter for join year in 2010 and 2015" >> beam.Filter(lambda item: item['join_year'] in joinYear)
| "Select name and location columns" >> beam.Map(lambda line : {key:value for (key,value) in line.items() if key in selectCols})
| beam.Map(print)
)
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--joinYear",required=True)
parser.add_argument("--selectCols",required=True)
my_args, beam_args = parser.parse_known_args()
run_pipeline(my_args, beam_args)
if __name__ == '__main__':
main()
</code></pre>
<p>I run the code above like this <code>python filterlist.py --joinYear='[2010,2015]' --selectCols="['name','location']"</code></p>
<p>In actual production use I would pass these parameters from a Cloud Function and launch the Dataflow job. So I was wondering if there is any other, better way to do the same thing, following best practice?</p>
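<p>A hedged alternative to <code>ast.literal_eval</code>: give argparse a <code>type=</code> converter so the lists are parsed (and validated) at the argument-parsing boundary. This is a standalone sketch, not the pipeline code:</p>

```python
import argparse

def int_list(value):
    # "2010,2015" -> [2010, 2015]; raises ValueError on bad input
    return [int(part) for part in value.split(",")]

def str_list(value):
    # "name,location" -> ["name", "location"]
    return [part.strip() for part in value.split(",")]

parser = argparse.ArgumentParser()
parser.add_argument("--joinYear", required=True, type=int_list)
parser.add_argument("--selectCols", required=True, type=str_list)

args = parser.parse_args(["--joinYear", "2010,2015",
                          "--selectCols", "name,location"])
```

<p>Invoked as <code>--joinYear=2010,2015 --selectCols=name,location</code>, which avoids quoting Python literals on the command line.</p>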
|
<python><python-3.x><google-cloud-dataflow><apache-beam>
|
2023-03-01 23:53:34
| 1
| 701
|
Ashok KS
|
75,610,023
| 16,009,435
|
Get the new URL after getting redirected to a new page
|
<p>When I load this website, <code>https://yewtu.be/latest_version?id=E51gsi_r3HY&itag=137</code>, it reads the URL and redirects me to a new URL, which is a video feed. Is there any way I can get the new URL with Python, without using something like Selenium? Thanks in advance.</p>
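<p>A sketch with <code>requests</code> (assuming the redirect is an HTTP 3xx response rather than JavaScript): either follow redirects and read <code>response.url</code>, or disable them and read the <code>Location</code> header. The function name is made up for illustration, and the network call is left as a comment:</p>

```python
import requests

def resolve_redirect(url):
    """Return the target of an HTTP redirect without downloading the body."""
    resp = requests.get(url, allow_redirects=False)
    # A 3xx response carries the destination in the Location header;
    # alternatively, requests.get(url).url gives the final URL after
    # following every redirect.
    return resp.headers.get("Location", url)

# Example (requires network access):
# resolve_redirect("https://yewtu.be/latest_version?id=E51gsi_r3HY&itag=137")
```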
|
<python>
|
2023-03-01 23:27:25
| 1
| 1,387
|
seriously
|
75,609,785
| 3,713,236
|
Equivalent of the "Unique" row in Describe() for int/float variables?
|
<p>When I have a dataframe with <strong>strings</strong> and do a <code>describe()</code>, I get a very nice dataframe that looks like the below, whereupon you can see the number of unique values in each column and sort upon it:</p>
<p><a href="https://i.sstatic.net/oUycO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oUycO.png" alt="enter image description here" /></a></p>
<p>However, when I have a dataframe with <strong>integers or floats</strong> and do a <code>describe()</code>, I get a dataframe with the traditional statistics like the one below. There is no <code>unique</code> row. Is there a way to retrieve it?</p>
<p><a href="https://i.sstatic.net/vwpae.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vwpae.png" alt="enter image description here" /></a></p>
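<p>For numeric columns, <code>describe()</code> only emits the count/mean/std/quantile rows; the <code>unique</code> row appears for object columns. A small sketch (toy data, not the poster's frame) showing one way to append it from <code>nunique()</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 3], "b": [0.5, 0.5, 0.5, 1.0]})

desc = df.describe()
# nunique() returns the per-column unique counts; add them as a new row.
desc.loc["unique"] = df.nunique()
print(desc)
```

<p><code>df.describe(include="all")</code> also emits a <code>unique</code> row, but only fills it for non-numeric columns.</p>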
|
<python><pandas><dataframe>
|
2023-03-01 22:51:39
| 2
| 9,075
|
Katsu
|
75,609,733
| 4,429,617
|
What is the meaning of "multiple" parameter in Seaborn's kdeplot?
|
<p>I am trying to understand the meaning of <code>multiple</code> parameter in Seaborn's <a href="https://seaborn.pydata.org/generated/seaborn.kdeplot.html" rel="nofollow noreferrer"><code>kdeplot</code></a>. Below is taken from its documentation,</p>
<blockquote>
<p>multiple{{“layer”, “stack”, “fill”}}</p>
<p>Method for drawing multiple elements when semantic mapping creates subsets. Only relevant with univariate data.</p>
</blockquote>
<p>However, it doesn't help much, and the resulting plots look very different. I would appreciate it if someone could elaborate on them more.</p>
<p>Here are the plots created with setting <code>multiple</code> to <code>layer</code>, <code>stack</code> and <code>fill</code> respectively,</p>
<pre><code>sns.displot(data=bg_vs_non_bg, multiple="layer", x="Value", hue="ClassName", kind="kde", col="Modality", log_scale=True, fill=True)
</code></pre>
<p><a href="https://i.sstatic.net/7kSML.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7kSML.png" alt="multiple="layer"" /></a></p>
<pre><code>sns.displot(data=bg_vs_non_bg, multiple="stack", x="Value", hue="ClassName", kind="kde", col="Modality", log_scale=True)
</code></pre>
<p><a href="https://i.sstatic.net/gZ0FM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gZ0FM.png" alt="multiple="stack"" /></a></p>
<pre><code>sns.displot(data=bg_vs_non_bg, multiple="fill", x="Value", hue="ClassName", kind="kde", col="Modality", log_scale=True)
</code></pre>
<p><a href="https://i.sstatic.net/L2xSd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L2xSd.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><seaborn><kdeplot>
|
2023-03-01 22:44:27
| 1
| 468
|
Melike
|
75,609,716
| 10,634,126
|
Python pickling SSLContext TypeError when using tqdm
|
<p>I have a generic method for handling parallelization using <code>p_tqdm</code> (similar to <code>tqdm</code>) and <code>functools.partial</code> to handle multiple input args, like:</p>
<pre><code>from functools import partial
from p_tqdm import p_umap
def thread_multi(function, non_iterable_args: tuple, iterable_args: tuple):
results = list()
if non_iterable_args:
func = partial(function, *non_iterable_args)
for result in p_umap(func, *iterable_args):
if result and isinstance(result, list):
results.extend([r for r in result if r])
elif result:
results.append(result)
</code></pre>
<p>When I try to call this method I am generally passing something like a string as a non-iterable arg, and a list of dicts as an iterable arg, like:</p>
<pre><code>def some_function(d, r):
r["k3"] = d
return r
d = "2023-03-01"
records = [
{"k1": "v1", "k2": "v2"},
{"k1": "v1", "k2": "v2"},
...
]
records = parallelize.thread_multi(
function=some_function,
non_iterable_args=(d,),
iterable_args=(records,)
)
</code></pre>
<p>Doing this will often return the following error:</p>
<pre><code> 0%| | 0/21 [00:00<?, ?it/s]
Traceback (most recent call last):
File "process.py", line 70, in <module>
processor.run()
File "process.py", line 32, in run
records = parallelize.thread_multi(
File "/dev/utils/parallelize.py", line 28, in thread_multi
for result in p_umap(func, *iterables):
File "/usr/local/lib/python3.8/site-packages/p_tqdm/p_tqdm.py", line 84, in p_umap
result = list(generator)
File "/usr/local/lib/python3.8/site-packages/p_tqdm/p_tqdm.py", line 54, in _parallel
for item in tqdm_func(map_func(function, *iterables), total=length, **kwargs):
File "/usr/local/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.8/site-packages/multiprocess/pool.py", line 868, in next
raise value
File "/usr/local/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/usr/local/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/local/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/usr/local/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1824, in save_function
_save_with_postproc(pickler, (_create_function, (
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1089, in _save_with_postproc
pickler.save_reduce(*reduction)
File "/usr/local/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/pickle.py", line 886, in save_tuple
save(element)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/local/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/pickle.py", line 886, in save_tuple
save(element)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1427, in save_instancemethod0
pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj)
File "/usr/local/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/pickle.py", line 886, in save_tuple
save(element)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/local/lib/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/local/lib/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/local/lib/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/usr/local/lib/python3.8/pickle.py", line 717, in save_reduce
save(state)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/pickle.py", line 886, in save_tuple
save(element)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/usr/local/lib/python3.8/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'SSLContext' object
</code></pre>
<p>Is this a datatype issue with the non-iterable arg(s)? If so, how does one resolve it?</p>
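<p>The bottom of the traceback shows <code>dill</code> walking a module's <code>__dict__</code> before the failure, so a plausible (unconfirmed) culprit is a module-level or closed-over object that holds an <code>SSLContext</code> — for example a cached <code>requests.Session</code> — rather than the string/dict arguments themselves. The underlying limitation is easy to reproduce with only the standard library:</p>

```python
import pickle
import ssl

# SSLContext objects wrap OS-level state and refuse to be pickled.
ctx = ssl.create_default_context()
try:
    pickle.dumps(ctx)
except TypeError as exc:
    print(exc)  # cannot pickle 'SSLContext' object
```

<p>If that is the cause, constructing such objects inside the worker function (so they are never pickled and shipped to subprocesses) sidesteps the error.</p>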
|
<python><parallel-processing><pickle><partial><tqdm>
|
2023-03-01 22:41:18
| 0
| 909
|
OJT
|
75,609,704
| 850,781
|
Matplotlib animation shows only a part of each figure
|
<p>I create a chain of figures and save them into an animation.</p>
<p>My problem is that the image of the figure that I see on the screen is <em>different</em> from what is saved by <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html" rel="nofollow noreferrer"><code>savefig</code></a>.
This problem is solved by passing <code>bbox_inches='tight', pad_inches=0.1</code> to <code>savefig</code>, as recommended in a comment to <a href="https://stackoverflow.com/q/66401302/850781">matplotlib remove unexpected extra row and large padding around plots</a>.</p>
<p>However, <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.animation.FuncAnimation.html" rel="nofollow noreferrer"><code>FuncAnimation.save</code></a> does not accept those arguments, so the saved movie has an incorrect bounding box.</p>
<p>What do I do?</p>
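<p>One commonly suggested workaround (a sketch with illustrative data, not verified against the asker's figure) is to settle the layout once before animating — e.g. with <code>fig.tight_layout()</code> — so every saved frame shares the same bounding box, instead of relying on per-frame <code>bbox_inches='tight'</code>, which animation saving does not support:</p>

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation, PillowWriter

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
(line,) = ax.plot(x, np.sin(x))
fig.tight_layout(pad=0.1)  # fix padding once; applied identically to every frame

def update(i):
    line.set_ydata(np.sin(x + 0.5 * i))
    return (line,)

anim = FuncAnimation(fig, update, frames=3)
out = os.path.join(tempfile.mkdtemp(), "anim.gif")
anim.save(out, writer=PillowWriter(fps=2))  # no per-frame bbox needed
```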
|
<python><matplotlib><animation>
|
2023-03-01 22:40:00
| 0
| 60,468
|
sds
|
75,609,557
| 2,828,287
|
Reference type variable from enclosing scope in type annotation
|
<p>I have two nested classes and the outer one is generic.</p>
<p>The inner one has a reference to the outer one.</p>
<p>How can I annotate the reference that the inner one has to the outer one, so that the <code>reveal_type</code> at the bottom of the code snippet below works properly?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar
_T = TypeVar("_T")
class Outer(Generic[_T]):
class Inner:
outer_ref: Outer # <- What goes here?
def __init__(self, outer_ref: Outer[_T]) -> None:
self.outer_ref = outer_ref
def produce_inner(self) -> Inner:
return Outer.Inner(self)
o: Outer[int] = Outer()
reveal_type(o.produce_inner().outer_ref) # Should be `Outer[int]`.
</code></pre>
<p>I tried typing <code>outer_ref</code> as <code>outer_ref: Outer</code> but that will reveal <code>Outer[unknown]</code>.</p>
<p>If I type <code>outer_ref</code> as <code>outer_ref: Outer[_T]</code> I get a warning saying that the variable has no meaning there, Mypy will suggest I add <code>Generic[_T]</code> or <code>Protocol[_T]</code> to <code>Inner</code>, and the revealed type is <code>Outer[_T@Outer]</code>.</p>
<p>Notice that MyPy does not seem to have any issues with the type annotation <code>outer_ref: Outer[_T]</code> in the signature of <code>Inner.__init__</code>.</p>
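<p>One workaround sketch, reusing the names from the snippet above, is to make <code>Inner</code> generic as well so that <code>_T</code> is bound per instance. Note this is a suggestion to try rather than a guaranteed fix — some checker versions have their own caveats around nested generic classes:</p>

```python
from __future__ import annotations

from typing import Generic, TypeVar

_T = TypeVar("_T")

class Outer(Generic[_T]):
    class Inner(Generic[_T]):  # Inner carries its own binding of _T
        outer_ref: Outer[_T]

        def __init__(self, outer_ref: Outer[_T]) -> None:
            self.outer_ref = outer_ref

    def produce_inner(self) -> Outer.Inner[_T]:
        return Outer.Inner(self)

o: Outer[int] = Outer()
inner = o.produce_inner()  # Inner[int]; inner.outer_ref is typed Outer[int]
```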
|
<python><mypy><python-typing>
|
2023-03-01 22:17:37
| 1
| 981
|
RGS
|
75,609,384
| 6,734,243
|
Is it possible to configure voila to shutdown when the tab is closed?
|
<h2>Context</h2>
<p>I try to execute Voila dashboards in a <code>nox</code> isolated environment to facilitate development iterations and sharing. The problem is that once the <code>nox</code> session launches Voila, it never finishes, as Voila does not close itself when I close the tab.</p>
<h2>Question</h2>
<p><strong>How to make the voila command close when the browser tab is closed?</strong><br />
or (as its equivalent)<br />
<strong>How to make the voila command close when the kernel is shut down</strong></p>
<h2>How to reproduce</h2>
<ol>
<li>create a test.ipynb file with a simple <code>print("hello world")</code> cell</li>
<li>start voila by running <code>voila test.ipynb</code></li>
<li>close the tab</li>
<li>The terminal will print "[Voila] Kernel shutdown: xxxxx" but remain active</li>
<li>The only way to stop it is to press <code>Ctrl+C</code></li>
</ol>
|
<python><jupyter><ipython><voila>
|
2023-03-01 21:54:32
| 0
| 2,670
|
Pierrick Rambaud
|
75,609,276
| 1,014,217
|
How to convert complex JSON object to pandas dataframe for machine learning
|
<p>I have some json like this:</p>
<pre><code>{"0": {"name": "Vanilla Cream Ale", "url": "/homebrew/recipe/view/1633/vanilla-cream-ale", "method": "All Grain", "style": "Cream Ale", "batch": 21.8, "og": 1.055, "fg": 1.013, "abv": 5.48, "ibu": 19.44, "color": 4.83, "ph mash": -1, "fermentables": [[2.381, "American - Pale 2-Row", 37.0, 1.8, 44.7], [0.907, "American - White Wheat", 40.0, 2.8, 17.0], [0.907, "American - Pale 6-Row", 35.0, 1.8, 17.0], [0.227, "Flaked Corn", 40.0, 0.5, 4.3], [0.227, "American - Caramel / Crystal 20L", 35.0, 20.0, 4.3], [0.227, "American - Carapils (Dextrine Malt)", 33.0, 1.8, 4.3], [0.113, "Flaked Barley", 32.0, 2.2, 2.1], [0.34, "Honey", 42.0, 2.0, 6.4]], "hops": [[14.0, "Cascade", "Pellet", 6.2, "Boil", "60 min", 11.42, 33.3], [14.0, "Cascade", "Pellet", 6.2, "Boil", "20 min", 6.92, 33.3], [14.0, "saaz", "Pellet", 3.0, "Boil", "5 min", 1.1, 33.3]], "hops Summary": [[28.0, "Cascade (Pellet)", 18.34, 66.6], [14.0, "saaz (Pellet)", 1.1, 33.3]], "other": [["2 oz", "pure vanilla extract", "Flavor", "Boil", "0 min."], ["1 oz", "pure vanilla extract", "Flavor", "Bottling", "0 min."], ["1 tsp", "yeast nutrient", "Other", "Boil", "15 min."], ["1 each", "whirlfloc", "Fining", "Boil", "15 min."], ["4 each", "Vanilla beans - in 2oz Vodka", "Other", "Secondary", "0 min."]], "yeast": ["Wyeast - K\u00f6lsch 2565", "76%", "Low", "56", "70", "Yes"], "rating": 0, "num rating": 16, "views": 289454},
"2": {"name": "Sierra Nevada Pale Ale Clone", "url": "/homebrew/recipe/view/28546/sierra-nevada-pale-ale-clone", "method": "All Grain", "style": "American Pale Ale", "batch": 24.6, "og": 1.055, "fg": 1.013, "abv": 5.58, "ibu": 39.79, "color": 8.0, "ph mash": 5.67, "fermentables": [[5.216, "American - Pale 2-Row", 37.0, 1.8, 92.7], [0.412, "American - Caramel / Crystal 60L", 34.0, 60.0, 7.3]], "hops": [[14.0, "Magnum", "Pellet", 15.0, "Boil", "60 min", 22.62, 8.3], [14.0, "Perle", "Pellet", 8.2, "Boil", "30 min", 9.51, 8.3], [28.0, "Cascade", "Pellet", 7.0, "Boil", "10 min", 7.66, 16.7], [56.0, "Cascade", "Pellet", 7.0, "Boil", "0 min", 0, 33.3], [56.0, "Cascade", "Pellet", 7.0, "Dry Hop", "4 days", 0, 33.3]], "hops Summary": [[14.0, "Magnum (Pellet)", 22.62, 8.3], [14.0, "Perle (Pellet)", 9.51, 8.3], [140.0, "Cascade (Pellet)", 7.66, 83.3]], "other": [["1 each", "Crush whilrfoc Tablet", "Water Agt", "Boil", "10 min."]], "yeast": ["Fermentis - Safale - American Ale Yeast US-05", "76%", "Medium", "54", "77", "Yes"], "rating": 0, "num rating": 26, "views": 271945},
"3": {"name": "Zombie Dust Clone - ALL GRAIN", "url": "/homebrew/recipe/view/5916/zombie-dust-clone-all-grain", "method": "All Grain", "style": "American IPA", "batch": 22.7, "og": 1.061, "fg": 1.016, "abv": 5.94, "ibu": 62.42, "color": 8.5, "ph mash": 5.81, "fermentables": [[5.33, "American - Pale 2-Row", 37.0, 1.8, 81.7], [0.513, "American - Munich - Light 10L", 33.0, 10.0, 7.9], [0.227, "German - CaraFoam", 37.0, 1.8, 3.5], [0.227, "American - Caramel / Crystal 60L", 34.0, 60.0, 3.5], [0.227, "German - Melanoidin", 37.0, 25.0, 3.5]], "hops": [[21.0, "Citra", "Pellet", 11.0, "First Wort", "0 min", 15.57, 8.6], [35.0, "Citra", "Pellet", 11.0, "Boil", "15 min", 21.11, 14.3], [35.0, "Citra", "Pellet", 11.0, "Boil", "10 min", 15.43, 14.3], [35.0, "Citra", "Pellet", 11.0, "Boil", "5 min", 8.48, 14.3], [35.0, "Citra", "Pellet", 11.0, "Boil", "1 min", 1.83, 14.3], [84.0, "Citra", "Pellet", 11.0, "Dry Hop", "7 days", 0, 34.3]], "hops Summary": [[245.0, "Citra (Pellet)", 62.42, 100.1]], "other": [], "yeast": ["Fermentis - Safale - English Ale Yeast S-04", "75%", "High", "54", "77", "Yes"], "rating": 0, "num rating": 10, "views": 208996},
"4": {"name": "Russian River Pliny the Elder (original)", "url": "/homebrew/recipe/view/37534/russian-river-pliny-the-elder-original-", "method": "All Grain", "style": "Imperial IPA", "batch": 22.7, "og": 1.072, "fg": 1.018, "abv": 7.09, "ibu": 232.89, "color": 6.33, "ph mash": -1, "fermentables": [[6.01, "American - Pale 2-Row", 37.0, 1.8, 87.2], [0.272, "American - Caramel / Crystal 40L", 34.0, 40.0, 3.9], [0.272, "American - Carapils (Dextrine Malt)", 33.0, 1.8, 3.9], [0.34, "Corn Sugar - Dextrose ", 46.0, 0.5, 4.9]], "hops": [[98.0, "Columbus", "Pellet", 15.0, "Boil", "90 min", 171.54, 28.0], [21.0, "Columbus", "Pellet", 15.0, "Boil", "45 min", 31.54, 6.0], [28.0, "Simcoe", "Pellet", 12.7, "Boil", "30 min", 29.81, 8.0], [28.0, "Centennial", "Pellet", 10.0, "Aroma", "0 min", 0, 8.0], [70.0, "Simcoe", "Pellet", 12.7, "Aroma", "0 min", 0, 20.0], [28.0, "Columbus", "Pellet", 15.0, "Dry Hop", "13 days", 0, 8.0], [28.0, "Centennial", "Pellet", 10.0, "Dry Hop", "13 days", 0, 8.0], [28.0, "Simcoe", "Pellet", 12.7, "Dry Hop", "13 days", 0, 8.0], [7.0, "Columbus", "Pellet", 15.0, "Dry Hop", "5 days", 0, 2.0], [7.0, "Centennial", "Pellet", 10.0, "Dry Hop", "5 days", 0, 2.0], [7.0, "Simcoe", "Pellet", 12.7, "Dry Hop", "5 days", 0, 2.0]], "hops Summary": [[154.0, "Columbus (Pellet)", 203.08, 44.0], [133.0, "Simcoe (Pellet)", 29.81, 38.0], [63.0, "Centennial (Pellet)", 0.0, 18.0]], "other": [], "yeast": ["Wyeast - American Ale 1056", "75%", "Med-Low", "60", "72", "No"], "rating": 0, "num rating": 6, "views": 193832},
"5": {"name": "Spotted Clown (New Glarus Spotted Cow clone)", "url": "/homebrew/recipe/view/672/spotted-clown-new-glarus-spotted-cow-clone-", "method": "All Grain", "style": "Cream Ale", "batch": 20.8, "og": 1.054, "fg": 1.014, "abv": 5.36, "ibu": 21.27, "color": 5.94, "ph mash": -1, "fermentables": [[2.722, "American - Pale 2-Row", 37.0, 1.8, 50.0], [0.907, "American - Munich - Light 10L", 33.0, 10.0, 16.7], [0.567, "Flaked Corn", 40.0, 0.5, 10.4], [0.794, "Flaked Barley", 32.0, 2.2, 14.6], [0.227, "American - Caramel / Crystal 10L", 35.0, 10.0, 4.2], [0.227, "American - Carapils (Dextrine Malt)", 33.0, 1.8, 4.2]], "hops": [[14.0, "Cascade", "Pellet", 6.7, "Boil", "60 min", 12.67, 33.3], [14.0, "german select", "Pellet", 5.8, "Boil", "20 min", 6.64, 33.3], [14.0, "Willamette", "Pellet", 5.2, "Boil", "5 min", 1.96, 33.3]], "hops Summary": [[14.0, "Cascade (Pellet)", 12.67, 33.3], [14.0, "german select (Pellet)", 6.64, 33.3], [14.0, "Willamette (Pellet)", 1.96, 33.3]], "other": [["1 each", "whirlfloc", "Fining", "Boil", "15 min."], ["1 tsp", "yeast nutrient", "Other", "Boil", "15 min."]], "yeast": ["Wyeast - K\u00f6lsch 2565", "75%", "Low", "56", "70", "Yes"], "rating": 0, "num rating": 5, "views": 190059},
"6": {"name": "Chocolate Vanilla Porter", "url": "/homebrew/recipe/view/29265/chocolate-vanilla-porter", "method": "All Grain", "style": "Robust Porter", "batch": 22.7, "og": 1.06, "fg": 1.016, "abv": 5.77, "ibu": 31.36, "color": 34.76, "ph mash": -1, "fermentables": [[2.268, "American - Pale 2-Row", 37.0, 1.8, 35.6], [1.361, "United Kingdom - Brown", 32.0, 65.0, 21.4], [0.907, "American - Munich - Light 10L", 33.0, 10.0, 14.2], [0.454, "American - Chocolate", 29.0, 350.0, 7.1], [0.454, "American - Caramel / Crystal 10L", 35.0, 10.0, 7.1], [0.454, "Flaked Oats", 33.0, 2.2, 7.1], [0.227, "American - Carapils (Dextrine Malt)", 33.0, 1.8, 3.6], [0.113, "Brown Sugar", 45.0, 15.0, 1.8], [0.136, "Corn Sugar - Dextrose", 46.0, 0.5, 2.1]], "hops": [[28.0, "East Kent Goldings", "Pellet", 5.4, "Boil", "60 min", 18.38, 40.0], [21.0, "East Kent Goldings", "Pellet", 5.4, "Boil", "30 min", 10.59, 30.0], [21.0, "Willamette", "Pellet", 4.7, "Boil", "5 min", 2.39, 30.0]], "hops Summary": [[49.0, "East Kent Goldings (Pellet)", 28.97, 70.0], [21.0, "Willamette (Pellet)", 2.39, 30.0]], "other": [["6 oz", "organic cocoa powder", "Flavor", "Boil", "15 min."], ["1 oz", "pure vanilla extract", "Flavor", "Boil", "0 min."], ["1 tsp", "yeast nutrient", "Other", "Boil", "15 min."], ["2 each", "Vanilla bean", "Flavor", "Secondary", "--"]], "yeast": ["Wyeast - Irish Ale 1084", "73%", "Medium", "62", "72", "Yes"], "rating": 4, "num rating": 1, "views": 188822},
"7": {"name": "Zombie Dust Clone - EXTRACT", "url": "/homebrew/recipe/view/5920/zombie-dust-clone-extract", "method": "Extract", "style": "American IPA", "batch": 18.9, "og": 1.063, "fg": 1.016, "abv": 6.16, "ibu": 70.18, "color": 8.98, "ph mash": 5.41, "fermentables": [[2.722, "Dry Malt Extract - Extra Light", 42.0, 2.5, 70.6]], "hops": [[28.0, "Citra", "Pellet", 11.0, "First Wort", "0 min", 25.02, 12.5], [28.0, "Citra", "Pellet", 11.0, "Boil", "15 min", 20.35, 12.5], [28.0, "Citra", "Pellet", 11.0, "Boil", "10 min", 14.87, 12.5], [28.0, "Citra", "Pellet", 11.0, "Boil", "5 min", 8.18, 12.5], [28.0, "Citra", "Pellet", 11.0, "Boil", "1 min", 1.77, 12.5], [84.0, "Citra", "Pellet", 11.0, "Dry Hop", "7 days", 0, 37.5]], "hops Summary": [[224.0, "Citra (Pellet)", 70.19, 100.0]], "other": [], "yeast": ["Fermentis - Safale - English Ale Yeast S-04", "75%", "High", "54", "77", "Yes"], "rating": 0, "num rating": 9, "views": 184124},
"8": {"name": "Southern Tier Pumking clone", "url": "/homebrew/recipe/view/16367/southern-tier-pumking-clone", "method": "All Grain", "style": "Holiday/Winter Special Spiced Beer", "batch": 20.8, "og": 1.083, "fg": 1.021, "abv": 8.16, "ibu": 50.22, "color": 15.64, "ph mash": -1, "fermentables": [[6.804, "American - Pale 2-Row", 37.0, 1.8, 75.9], [0.907, "American - Victory", 34.0, 28.0, 10.1], [0.34, "American - Caramel / Crystal 80L", 33.0, 80.0, 3.8], [0.907, "pumpkin", 1.75, 13.0, 10.1]], "hops": [[28.0, "Magnum", "Pellet", 15.0, "Boil", "50 min", 41.13, 50.0], [28.0, "Sterling", "Pellet", 8.7, "Boil", "10 min", 9.1, 50.0]], "hops Summary": [[28.0, "Magnum (Pellet)", 41.13, 50.0], [28.0, "Sterling (Pellet)", 9.1, 50.0]], "other": [["0.75 lb", "Demerara sugar", "Flavor", "Boil", "1 hr."], ["0.25 lb", "Light Brown sugar", "Flavor", "Boil", "1 hr."], ["0.75 tsp", "Fresh ground ginger", "Spice", "Boil", "5 min."], ["3 each", "Ceylon cinnamon sticks", "Spice", "Boil", "5 min."], ["0.50 tsp", "Whole cloves", "Spice", "Boil", "5 min."], ["0.50 tsp", "Nutmeg", "Spice", "Boil", "5 min."], ["0.50 tsp", "Allspice", "Spice", "Boil", "5 min."], ["1 each", "Vanilla Bean vodka solution (see notes for exact quantities)", "Spice", "Secondary", "--"], ["1 tsp", "Pumpkin pie spice", "Spice", "Secondary", "--"], ["1 tsp", "Capella water soluble Graham Cracker Extract (purchased online)", "Spice", "Secondary", "--"]], "yeast": ["Wyeast - American Ale 1056", "75%", "Med-Low", "60", "72", "No"], "rating": 0, "num rating": 15, "views": 181369},
"9": {"name": "Bakke Brygg Belgisk Blond 50 L", "url": "/homebrew/recipe/view/89534/bakke-brygg-belgisk-blond-50-l", "method": "All Grain", "style": "Belgian Blond Ale", "batch": 50.0, "og": 1.062, "fg": 1.012, "abv": 6.52, "ibu": 18.54, "color": 4.35, "ph mash": -1, "fermentables": [[11.0, "Castle Malting Pilsen 2RP/2RS", 37.0, 1.8, 88.0], [0.5, "Castle Malting Abbey", 33.0, 17.4, 4.0], [1.0, "Farin, hvit", 46.0, 0.0, 8.0]], "hops": [[64.0, "Hallertau Mittelfruh", "Pellet", 4.0, "Boil", "60 min", 13.36, 56.1], [50.0, "Hallertau Mittelfruh", "Pellet", 4.0, "Boil", "15 min", 5.18, 43.9]], "hops Summary": [[114.0, "Hallertau Mittelfruh (Pellet)", 18.54, 100.0]], "other": [], "yeast": ["Fermentis - Safbrew - Specialty Ale Yeast T-58", "80%", "High", "12", "25", "No"], "rating": 0, "num rating": 5, "views": 172811},
"10": {"name": "Mango Habanero IPA", "url": "/homebrew/recipe/view/61082/mango-habanero-ipa", "method": "All Grain", "style": "Imperial IPA", "batch": 20.8, "og": 1.086, "fg": 1.018, "abv": 8.88, "ibu": 98.09, "color": 8.55, "ph mash": 5.69, "fermentables": [[6.804, "American - Pale 2-Row", 37.0, 1.8, 81.1], [0.907, "American - Caramel / Crystal 20L", 35.0, 20.0, 10.8], [0.454, "Flaked Wheat", 34.0, 2.0, 5.4], [0.227, "Rolled Oats", 33.0, 2.2, 2.7]], "hops": [[35.0, "Magnum", "Pellet", 15.0, "Boil", "60 min", 57.62, 16.7], [28.0, "Centennial", "Pellet", 10.0, "Boil", "30 min", 23.62, 13.3], [35.0, "Centennial", "Pellet", 10.0, "Boil", "10 min", 13.93, 16.7], [28.0, "Citra", "Pellet", 11.1, "Boil", "1 min", 1.47, 13.3], [28.0, "Zythos", "Pellet", 11.0, "Boil", "1 min", 1.46, 13.3], [14.0, "Centennial", "Pellet", 10.0, "Dry Hop", "0 days", 0, 6.7], [28.0, "Citra", "Pellet", 10.0, "Dry Hop", "0 days", 0, 13.3], [14.0, "Zythos", "Pellet", 11.0, "Dry Hop", "0 days", 0, 6.7]], "hops Summary": [[35.0, "Magnum (Pellet)", 57.62, 16.7], [77.0, "Centennial (Pellet)", 37.55, 36.7], [28.0, "Citra (Pellet)", 1.47, 13.3], [42.0, "Zythos (Pellet)", 1.46, 20.0], [28.0, "Citra (Pellet)", 0, 13.3]], "other": [["1 each", "Whirlfloc Tab", "Fining", "Boil", "5 min."], ["44 oz", "Pureed Frozen Mango", "Flavor", "Boil", "5 min."], ["1 each", "Pureed Habanero Pepper", "Flavor", "Boil", "5 min."], ["32 oz", "Organic Mango Juice", "Flavor", "Secondary", "0 min."], ["1 each", "Sliced Habanero Pepper", "Flavor", "Secondary", "0 min."]], "yeast": ["White Labs - California Ale Yeast WLP001", "76.5%", "Medium", "68", "73", "Yes"], "rating": 5, "num rating": 6, "views": 172664}
}
</code></pre>
<p>This JSON represents beer recipes, and some of the fields are nested objects; for example, fermentables or hops can have multiple values, as shown in this prettified version
<a href="https://i.sstatic.net/aUIXF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aUIXF.png" alt="enter image description here" /></a></p>
<p>I have code to read and normalize the file:</p>
<pre><code>import json
filename = 'recipes_full copy.json'
with open(filename, 'r') as f:
try:
json_data = json.load(f)
print("The JSON file is valid")
except ValueError as e:
print("The JSON file is invalid:", e)
print(json_data)
from pandas import json_normalize
df = json_normalize(json_data)
</code></pre>
<p>However the result is like this:</p>
<p><a href="https://i.sstatic.net/JhjHs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JhjHs.png" alt="enter image description here" /></a></p>
<p>Question:
How can I flatten this into a full pandas dataframe, especially if I don't know how many fermentables each recipe is going to have?</p>
<p>Update:</p>
<p>This is my code:</p>
<pre><code>import json
filename = 'recipes_full copy.json'
with open(filename, 'r') as f:
try:
json_data = json.load(f)
print("The JSON file is valid")
except ValueError as e:
print("The JSON file is invalid:", e)
print(json_data)
import pandas as pd
df = pd.DataFrame(json_data)
df = df.T
df['fermentables'] = df['fermentables'].apply(pd.DataFrame)
df['hops'] = df['hops'].apply(pd.DataFrame)
df.at[0, 'hops']
</code></pre>
<p>Error:</p>
<pre><code>KeyError Traceback (most recent call last)
Cell In[60], line 7
5 df['fermentables'] = df['fermentables'].apply(pd.DataFrame)
6 df['hops'] = df['hops'].apply(pd.DataFrame)
----> 7 df.at[0, 'hops']
...
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)
KeyError: 0
</code></pre>
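<p>One way to approach this — sketched here over two toy recipes shaped like the real data; the inner-list column names (<code>amount_kg</code>, <code>malt</code>, <code>ppg</code>) are guesses, since the JSON stores those values positionally — is to <code>explode</code> a nested-list column and then split the inner lists into named columns:</p>

```python
import pandas as pd

# Two toy recipes shaped like the real data: nested lists of fermentables.
data = {
    "0": {"name": "A", "og": 1.05,
          "fermentables": [[2.4, "Pale 2-Row", 37.0], [0.9, "Wheat", 40.0]]},
    "1": {"name": "B", "og": 1.06,
          "fermentables": [[5.2, "Pale 2-Row", 37.0]]},
}

# orient="index" turns each top-level key into a row (no transpose needed)
df = pd.DataFrame.from_dict(data, orient="index")

# one row per fermentable, however many each recipe has
flat = df.explode("fermentables").reset_index(drop=True)

# split the positional inner lists into named columns and rejoin
ferm = pd.DataFrame(flat.pop("fermentables").tolist(),
                    columns=["amount_kg", "malt", "ppg"])
flat = pd.concat([flat, ferm], axis=1)
```

The same explode-and-split step can be repeated for <code>hops</code> and the other list columns; because <code>explode</code> emits one row per element, recipes with different numbers of fermentables are handled automatically.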
|
<python><pandas>
|
2023-03-01 21:38:31
| 1
| 34,314
|
Luis Valencia
|
75,609,266
| 2,908,017
|
How to set min/max dimensions of a Form in a Python FMX GUI App?
|
<p>I've made a <code>Form</code> using the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a> and it's working perfectly fine, but I want to set a minimum and maximum size for the Form to make sure the window can't resize below or above that amount.</p>
<p>I have code that is working. I've written code that keeps the <code>Width</code> above 400 and below 1000, and keeps the <code>Height</code> below 800 and above 200. I'm using the <code>OnResize</code> event of the <code>Form</code> to check the <code>Height</code> and <code>Width</code> and then set it accordingly. Here's my full code for creating the <code>Form</code> and assigning the <code>OnResize</code> event to it:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 400
self.Height = 200
self.OnResize = self.FormOnResizeEvent
def FormOnResizeEvent(self, sender):
if self.Width < 400:
self.Width = 400
elif self.Width > 1000:
self.Width = 1000
if self.Height < 200:
self.Height = 200
elif self.Height > 800:
self.Height = 800
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p>But is there a better way to do this?</p>
<p>I tried doing:</p>
<pre><code>self.MinWidth = 400
self.MaxWidth = 1000
self.MinHeight = 200
self.MaxHeight = 800
</code></pre>
<p>But this doesn't work at all.</p>
|
<python><user-interface><resize><firemonkey><window-resize>
|
2023-03-01 21:38:16
| 1
| 4,263
|
Shaun Roselt
|
75,609,236
| 5,072,010
|
Pulp optimization model not giving reasonable result?
|
<p>I am trying to get an optimization model to decide on an optimal commitment rate. The general problem structure, in its simplest form, is as follows:</p>
<p>A known number of instances of known types is running for each hour over 10 hours. The number of each type running hour to hour is arbitrary, but known. Each instance type has an on-demand rate and a savings plan (SP) rate, the latter being arbitrarily lower than the former. The rates vary depending on the type, but not hour to hour.</p>
<p>A dollar amount can be committed across all hours (only one commitment applies to all hours; you cannot have different commitments per hour) such that Commitment >= sum_over_all_types(# of instance type X * SP rate of instance type X) for that hour. Not all instances of the same type need to utilize the SP rate. I.e., if we have 4 A and 5 B for hour 1, it is possible that the optimum to achieve full 'utilization' is 4 A and 1 B, or any other combo.</p>
<p>Also, if there is any hour such that sum_over_all_types([# instances]*[instance SP rate]) <= commitment rate, the cost for that hour is just equal to the commitment rate. If the sum is higher, then an optimal combination of the instances is chosen such that the on-demand cost for that hour is minimized.</p>
<p>Any instances not 'part of'/'receiving' the SP rate will be charged the on-demand rate.</p>
<p>I have tried to set up the this optimization using PuLP as follows (with some example data):</p>
<pre><code>from pulp import *
# Define input data
instances = {'A': {'on_demand': 10000.0, 'savings_plan': 0.0001},
'B': {'on_demand': 2.0, 'savings_plan': 1.5},
'C': {'on_demand': 3.0, 'savings_plan': 2.0}}
hourly_instances = {
1: {'A': 5, 'B': 2, 'C': 1},
2: {'A': 6, 'B': 3, 'C': 1},
3: {'A': 7, 'B': 4, 'C': 2},
4: {'A': 8, 'B': 5, 'C': 2},
5: {'A': 9, 'B': 6, 'C': 3},
6: {'A': 10, 'B': 7, 'C': 3},
7: {'A': 11, 'B': 8, 'C': 4},
8: {'A': 12, 'B': 9, 'C': 4},
9: {'A': 13, 'B': 10, 'C': 5},
10: {'A': 14, 'B': 11, 'C': 5}
}
# Define the model
model = LpProblem("Instance Optimization", LpMinimize)
# Define decision variables
commitment_rate = LpVariable("commitment_rate", lowBound=0)
savings_plan_utilization = {}
for hour in hourly_instances.keys():
for instance_type in instances.keys():
savings_plan_utilization[(hour, instance_type)] = LpVariable(f"savings_plan_utilization_{hour}_{instance_type}", cat='Binary')
# Define objective function
hourly_costs = []
for hour, instance_counts in hourly_instances.items():
hourly_cost = lpSum([instances[type]['savings_plan'] * count * savings_plan_utilization[(hour, type)] for type, count in instance_counts.items()])
hourly_cost += lpSum([instances[type]['on_demand'] * count for type, count in instance_counts.items() if instances[type]['savings_plan'] == 0])
hourly_costs.append(hourly_cost)
model += lpSum(hourly_costs) + commitment_rate
# Add utilization constraints
for hour, instance_counts in hourly_instances.items():
for type in instances.keys():
model += savings_plan_utilization[(hour, type)] <= instance_counts[type]
# Add commitment rate constraint
model += lpSum([instances[type]['savings_plan'] * lpSum(savings_plan_utilization[(hour, type)] for hour in hourly_instances.keys()) for type in instances.keys()]) <= commitment_rate
# Solve the model
model.solve()
# Print the solver status
print("Solver Status: ", LpStatus[model.status])
# Print the values of the decision variables
for v in model.variables():
print(v.name, "=", v.varValue)
# Print the value of the objective function
print("Objective =", value(model.objective))
# Output the results
print(f"Optimal commitment rate: {commitment_rate.value()}")
# Output the savings plan utilization for each hour
for hour in hourly_instances.keys():
print(f"Hour {hour} Savings Plan Utilization: {lpSum(savings_plan_utilization[(hour, type)] for type in instances.keys()).value()}")
# Calculate the hourly costs
hourly_costs = []
for hour, instance_counts in hourly_instances.items():
hourly_cost = lpSum([instances[type]['on_demand'] * count for type, count in instance_counts.items()])
hourly_cost += lpSum([instances[type]['savings_plan'] * count * savings_plan_utilization[(hour, type)].value() for type, count in instance_counts.items()])
hourly_costs.append(hourly_cost.value())
# Calculate the optimized commitment rate
commitment_rate = lpSum([instances[type]['savings_plan'] * lpSum(savings_plan_utilization[(hour, type)] for hour in hourly_instances.keys()) for type in instances.keys()]).value()
# Output the optimized commitment rate
print(f"Optimized Commitment Rate: {commitment_rate}")
# Visualize the hourly costs and the optimized commitment rate
import matplotlib.pyplot as plt
plt.plot(range(1, len(hourly_costs)+1), hourly_costs, label='Hourly Cost')
plt.axhline(y=commitment_rate, color='r', linestyle='-', label='Optimized Commitment Rate')
plt.xlabel('Hour')
plt.ylabel('$/hr')
plt.legend()
plt.show()
</code></pre>
<p>It keeps solving to an optimal commitment rate of $0... I don't understand why, since I have made it extremely attractive to run instance A via the SP rate ($0.0001/instance/hr vs $10000/instance/hr).</p>
<p>My hunch is that somehow using the SP rate is not removing the on-demand rate for that instance/hour. But I am not sure how to implement this specifically?</p>
<p>The 'optimal' solution the model gives me provides me with the following total cost:</p>
<p><a href="https://i.sstatic.net/qdciZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qdciZ.png" alt="Solution" /></a></p>
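<p>The hunch can be made concrete without PuLP: in the model above, the SP and commitment terms only ever <em>add</em> cost, so a $0 commitment is optimal. The cost the problem statement describes is a replacement — each covered instance pays the SP rate <em>instead of</em> the on-demand rate. A pure-Python sketch of that per-hour cost (illustrative numbers only):</p>

```python
def hourly_cost(counts, rates, covered):
    """covered[t] = how many type-t instances are billed at the SP rate."""
    sp_cost = sum(covered[t] * rates[t]["savings_plan"] for t in counts)
    od_cost = sum((counts[t] - covered[t]) * rates[t]["on_demand"]
                  for t in counts)
    return sp_cost + od_cost

rates = {"A": {"on_demand": 10.0, "savings_plan": 1.0}}

no_coverage = hourly_cost({"A": 4}, rates, {"A": 0})  # 4 * 10
partial = hourly_cost({"A": 4}, rates, {"A": 3})      # 3 * 1 + 1 * 10
```

Translated back to the LP, this suggests an integer (not binary) variable per hour and type for the covered count, bounded above by the instance count, with the objective charging the SP rate for covered instances and the on-demand rate only for the remainder.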
|
<python><optimization><pulp>
|
2023-03-01 21:34:14
| 1
| 1,459
|
Runeaway3
|
75,609,188
| 9,390,633
|
add to a string variable without deleting last assignment
|
<pre><code>strings = None
def test(a):
strings = str(strings) + f"{a} \n"
</code></pre>
<p>when this function is called multiple times</p>
<pre><code>test("hello")
test("world")
</code></pre>
<p>how do I allow strings to be equal to</p>
<pre><code>"hello" \n
"world"
</code></pre>
<p>At the moment it just picks up the last value passed as <code>a</code>, so <code>strings</code> is "world".</p>
|
<python><python-3.x><string>
|
2023-03-01 21:29:21
| 1
| 363
|
lunbox
|
75,609,166
| 4,602,726
|
How do I debug a function enqueued with rq?
|
<p>I am currently attempting to debug a function that's enqueued inside a <code>rq</code> queue in VS Code.
However <code>rq</code> forks the process to produce its workers, which I think is why it is impossible to intercept the breakpoint.</p>
<p>I use the <code>debugpy</code> as a debugging library and I am able to break into the non-queued code and I know the function is called because it produces the proper output.</p>
<p>I have tried to set the worker class to <code>simple-worker</code> and set the max number of workers to 1 but that didn't work.</p>
<p>I have also tried to explicitly call <code>debugpy</code> to listen for incoming connections inside the enqueued function. More concretely:</p>
<pre><code>import rq
def worker_function():
pass # Breakpoints placed here don't break
debugpy.listen(('0.0.0.0', 5678)) # Adding these lines doesn't help
debugpy.wait_for_client()
debugpy.breakpoint() # Doesn't break
def parent_function(rq_queue: rq.Queue):
rq_queue.enqueue(worker_function) # Breakpoints placed here work
</code></pre>
<p>And I launch the script with the following command:</p>
<pre class="lang-bash prettyprint-override"><code># Assume that the rq worker and redis instance are up and running
# This is the command called by the docker-compose file
python3 -m debugpy --listen 0.0.0.0:5678 script.py
</code></pre>
<p>And on the VS Code side my launch configuration looks like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"configurations": [
{
"name": "Python: Remote Attach",
"type": "python",
"request": "attach",
"connect": {
"host": "0.0.0.0",
"port": 5678
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app"
}
],
"justMyCode": true
}
]
}
</code></pre>
|
<python><visual-studio-code><python-rq>
|
2023-03-01 21:26:23
| 1
| 791
|
TommyD
|
75,609,157
| 9,609,901
|
OpenAI Whisper API error: "AttributeError: module 'openai' has no attribute 'Audio'"
|
<p>The ChatGPT API was announced along with a speech-to-text Whisper API, and I was excited to give it a try. <a href="https://platform.openai.com/docs/guides/speech-to-text" rel="nofollow noreferrer">Here's the link</a></p>
<p>I have tried their sample code</p>
<pre><code># Note: you need to be using OpenAI Python v0.27.0 for the code below to work
import openai
audio_file = open("/path/to/file/audio.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", audio_file)
</code></pre>
<p>and got the following error</p>
<pre><code>AttributeError: module 'openai' has no attribute 'Audio'
</code></pre>
<p>I'm sure I'm using version 0.27.0:</p>
<pre><code>pip list | grep openai
openai 0.27.0
</code></pre>
<p>Do you think openai is not updated yet?</p>
|
<python><openai-api><openai-whisper>
|
2023-03-01 21:25:04
| 2
| 568
|
Don Coder
|
75,609,144
| 2,908,017
|
How to add placeholder text to Edit in a Python FMX GUI App?
|
<p>I have made a <code>Form</code> with an <code>Edit</code> component using the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a> and I'm trying to add a placeholder text to the Edit component, but I'm not sure how. I've tried doing <code>self.myEdit.Placeholder = "Enter your name..."</code>, but this just gives me an error of <code>AttributeError: Error in setting property Placeholder</code>.</p>
<p>Here's my full code:</p>
<pre><code>from delphifmx import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'My Form'
self.Width = 400
self.Height = 200
self.myEdit = Edit(self)
self.myEdit.Parent = self
self.myEdit.Align = "Center"
self.myEdit.Width = 250
self.myEdit.Placeholder = "Enter your name..."
def main():
Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
main()
</code></pre>
<p>My <code>Edit</code> is just empty. I need a placeholder text to be in there.</p>
<p>Is placeholder text available? How do I add placeholder text to the Edit?</p>
|
<python><user-interface><firemonkey><property-placeholder>
|
2023-03-01 21:23:52
| 1
| 4,263
|
Shaun Roselt
|
75,609,138
| 4,530,214
|
sympy : compute simple expected value takes a lot of time
|
<p>I am trying to compute an expected value from a simple model based on random variables.
I use the <code>Normal</code> distribution only because I don't know another way to define a random variable (without specifying the underlying distribution). Basically the '_m' symbols represent the means of the random distributions. In the end I combine some of these variables and compute the expected value of a product. Using simple algebra one can quickly compute the result, but sympy needs about 2 minutes (it eventually gives the right result, returning True).</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
from sympy.stats import P, E, variance, Die, Normal, Poisson
G_m, L, Ad, tau, Omega_m, eta_m = sp.symbols('\\bar{G}, L, A_d, tau, \\bar{\Omega}, \\bar{\eta}')
sigma_i_vh = sp.symbols('\sigma_{i_{vh}}' , positive=True)
sigma_eta_vh = sp.symbols('\sigma_{\eta_{vh}}' , positive=True)
sigma_Omega_v = sp.symbols('\sigma_{\Omega_v}' , positive=True)
sigma_Omega_h = sp.symbols('\sigma_{\Omega_h}' , positive=True)
sigma_Omega_vh = sp.symbols('\sigma_{\Omega_{vh}}', positive=True)
sigma_G_t = sp.symbols('\sigma_{G_t}' , positive=True)
sigma_G_v = sp.symbols('\sigma_{G_v}' , positive=True)
sigma_G_h = sp.symbols('\sigma_{G_h}' , positive=True)
sigma_G_tv = sp.symbols('\sigma_{G_{tv}}' , positive=True)
sigma_G_th = sp.symbols('\sigma_{G_{th}}' , positive=True)
sigma_G_vh = sp.symbols('\sigma_{G_{vh}}' , positive=True)
sigma_G_tvh = sp.symbols('\sigma_{G_{tvh}}' , positive=True)
i_vh_random = Normal('i_{vh}', 0, sigma_i_vh)
eta_vh_random = Normal('\eta_{vh}', 0, sigma_eta_vh)
Omega_v_random = Normal('\Omega_v', 0, sigma_Omega_v)
Omega_h_random = Normal('\Omega_h', 0, sigma_Omega_h)
Omega_vh_random = Normal('\Omega_{vh}', 0, sigma_Omega_vh)
G_t_random = Normal('G_{t}', 0, sigma_G_t)
G_v_random = Normal('G_{v}', 0, sigma_G_v)
G_h_random = Normal('G_{h}', 0, sigma_G_h)
G_tv_random = Normal('G_{tv}', 0, sigma_G_tv)
G_th_random = Normal('G_{th}', 0, sigma_G_th)
G_vh_random = Normal('G_{vh}', 0, sigma_G_vh)
G_tvh_random = Normal('G_{tvh}', 0, sigma_G_tvh)
eta = eta_m + eta_vh_random
Omega = Omega_m + Omega_v_random + Omega_h_random + Omega_vh_random
phi_xy = L*Ad*tau*eta*Omega
G = G_m + G_t_random + G_v_random + G_h_random + G_tv_random + G_th_random + G_vh_random + G_tvh_random
E(G*phi_xy) == G_m * Ad * L * tau * eta_m *Omega_m
# variance(G*phi_xy)
</code></pre>
<p>My end goal is to compute the variance of the product, using <code>variance(G*phi_xy)</code>, but sympy never returns a result - I guess it takes way more steps than the expected value, which seems already quite long to compute.</p>
<p>I am wondering if something is ill-defined that makes sympy take extra steps in the computation, or if it is 'normal' that sympy takes so long to compute?</p>
<p>Python: 3.9.7 (default, Sep 16 2021, 08:50:36) [Clang 10.0.0]<br />
SymPy: 1.11.1<br />
on a Mac, 3.1 GHz Intel Core i7 dual-core</p>
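<p>Since every random term is independent with zero mean, the expectation factorizes: E(G·φ) = L·Ad·τ·E(G)·E(η)·E(Ω) = G_m·L·Ad·τ·η_m·Ω_m. A quick stdlib Monte Carlo sanity check of that identity (arbitrary example numbers, not the symbolic result):</p>

```python
import random

random.seed(0)
L, Ad, tau = 2.0, 3.0, 0.5           # deterministic factors (example values)
G_m, eta_m, Omega_m = 1.0, 1.0, 1.0  # means of the noisy quantities
sigma = 0.1                          # one sigma for every zero-mean noise term

N = 100_000
acc = 0.0
for _ in range(N):
    G = G_m + sum(random.gauss(0, sigma) for _ in range(7))          # 7 G-noise terms
    eta = eta_m + random.gauss(0, sigma)                             # 1 eta-noise term
    Omega = Omega_m + sum(random.gauss(0, sigma) for _ in range(3))  # 3 Omega-noise terms
    acc += G * (L * Ad * tau * eta * Omega)

mc = acc / N
analytic = G_m * L * Ad * tau * eta_m * Omega_m
print(mc, analytic)  # the two agree to within Monte Carlo noise
```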
|
<python><sympy>
|
2023-03-01 21:23:23
| 0
| 546
|
mocquin
|
75,609,057
| 2,908,017
|
How to get Mouse Cursor Position on Form in a Python FMX GUI App?
|
<p>I've built a simple <code>Form</code> using the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a>. The Form has a <code>MouseMove</code> event attached to it.</p>
<p>What I basically want is the <code>X</code> and <code>Y</code> coordinates of the mouse on the Form when you move the mouse around and then display the coordinates in the <code>Caption</code> of the <code>Form</code>.</p>
<p>I tried the following code, but it doesn't work:</p>
<pre><code>from delphifmx import *

class frmMain(Form):
    def __init__(self, owner):
        self.Width = 800
        self.Height = 500
        self.Caption = "Mouse Position: <X, Y>"
        self.MouseMove = self.FormMouseMoveEvent

    def FormMouseMoveEvent(self, sender, e):
        self.Caption = "Mouse Position: <" + e.X + ", " + e.Y + ">"

def main():
    Application.Initialize()
    Application.Title = "My Application"
    Application.MainForm = frmMain(Application)
    Application.MainForm.Show()
    Application.Run()
    Application.MainForm.Destroy()

main()
</code></pre>
<p>The Form's caption never changes and always just says "Mouse Position: <X, Y>".</p>
<hr />
<p>UPDATE:</p>
<p>I think the <code>MouseMove</code> event isn't being triggered as it should. I've changed the code to the following, but the Caption still isn't updating:</p>
<pre><code>def FormMouseMoveEvent(self, sender, e):
    self.Caption = "Just changing the caption"
</code></pre>
<p><a href="https://i.sstatic.net/DlILU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DlILU.png" alt="Python blank GUI App" /></a></p>
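<p>Separately from whether the event fires, the caption line itself would raise once the handler runs: <code>e.X</code> and <code>e.Y</code> are numbers, and concatenating a number to a <code>str</code> raises <code>TypeError</code>. A sketch of just the string handling in plain Python (the exact event attribute names in delphifmx are an assumption here):</p>

```python
x, y = 248.0, 133.5  # stand-ins for the event's mouse coordinates

# concatenation needs an explicit str() conversion ...
caption = "Mouse Position: <" + str(x) + ", " + str(y) + ">"

# ... or, more readably, an f-string
caption_f = f"Mouse Position: <{x}, {y}>"

print(caption_f)  # Mouse Position: <248.0, 133.5>
```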
|
<python><user-interface><firemonkey><mousemove><onmousemove>
|
2023-03-01 21:13:22
| 2
| 4,263
|
Shaun Roselt
|
75,609,000
| 8,876,025
|
Access to XMLHttpRequest at 'localhost:5000' from origin 'localhost:3000' has been blocked
|
<p>My ReactJS app cannot successfully send form data to Python flask backend, even with a CORS statement in the backend.</p>
<p>This is the error message:</p>
<blockquote>
<p>Access to XMLHttpRequest at 'http://localhost:5000/' from origin
'http://localhost:3000' has been blocked by CORS policy: Response to
preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource.</p>
</blockquote>
<p>I'm not sure what I am missing. Could anybody point out what to fix? Thanks!</p>
<p><strong>EDIT</strong>
<em>I simplified the code snippets below to identify the cause easier.</em></p>
<p>This is an excerpt of my react frontend.</p>
<pre><code>import React, { useState } from 'react';
import axios from 'axios';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

function App() {
  const [destination, setDestination] = useState('Amsterdam');

  const handleSubmit = async (e) => {
    e.preventDefault();
    const data = {
      destination: destination,
    };
    console.log(data);
    const response = await axios.post('http://localhost:5000/', data);
  };

  return (
    <div >
      <BrowserRouter>
        <Routes>
          <Route path="/" element={
            <div>
              <form onSubmit={handleSubmit} style={{ textAlign: "center" }}>
                <div>
                  <label>
                    Destination:
                    <input
                      type="text"
                      value={destination}
                      onChange={(e) => setDestination(e.target.value)}
                    />
                  </label>
                </div>
                <button type="submit">Submit</button>
              </form>
            </div>
          }>
          </Route>
        </Routes>
      </BrowserRouter>
    </div >
  );
}

export default App;
</code></pre>
<p>And this is an excerpt of my app.py</p>
<pre><code>from flask import Flask, render_template, request, redirect, session, url_for, jsonify
from flask_session import Session
from flask_cors import CORS, cross_origin

app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*"}})
print("app.py running")
Session(app)

@app.route("/", methods=["GET", "POST"])
@cross_origin()
def index():
    print("function called")
    if request.method == "POST":
        print(request.form['destination'])
        return "success!"
    # Add the following lines to set the 'Access-Control-Allow-Origin' header
    response = jsonify({'message': 'Hello, World!'})
    response.headers.add('Access-Control-Allow-Origin', '*')
    return response

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p><strong>EDIT</strong>
<em>I modified my app.py where I made a separate route. However the same error still persists.</em></p>
<pre><code>@app.route("/", methods=["GET", "POST"])
@cross_origin()
def index():
    print("function called")
    if request.method == "POST":
        print(request.form['destination'])
        return "success!"

@app.route("/", methods=["OPTIONS"])
@cross_origin()
def handle_options_request():
    # Set CORS headers
    headers = {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'POST',
        'Access-Control-Allow-Headers': 'Content-Type'
    }
    return ('', 204, headers)
</code></pre>
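<p>One thing worth checking independently of CORS: <code>axios.post</code> with a plain object sends <code>application/json</code>, while <code>request.form</code> only parses form-encoded bodies, so the handler can still fail even after the preflight succeeds (Flask's <code>request.get_json()</code> reads JSON bodies). A stdlib-only sketch of the two wire formats, no Flask required:</p>

```python
import json
from urllib.parse import parse_qs

# what axios.post('http://localhost:5000/', data) sends by default
json_body = '{"destination": "Amsterdam"}'

# what request.form expects (application/x-www-form-urlencoded)
form_body = "destination=Amsterdam"

assert json.loads(json_body)["destination"] == "Amsterdam"
assert parse_qs(form_body)["destination"] == ["Amsterdam"]
```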
|
<python><reactjs><flask><cors>
|
2023-03-01 21:05:09
| 1
| 2,033
|
Makoto Miyazaki
|
75,608,846
| 8,194,364
|
How to use string variable instead of literal string with double quotes inside a python function?
|
<p>I have a function in Python:</p>
<pre><code>def clickButtonViaText():
    url = 'https://finance.yahoo.com/quote/AAPL/balance-sheet?p=AAPL'
    options = Options()
    options.add_argument('--headless')
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
    driver.get(url)
    WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//button[@aria-label="Total Assets"]'))).click()
    time.sleep(10)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
</code></pre>
<p>I want to add an argument to my function that takes a string, e.g. <code>def clickButtonViaText(str_variable):</code>, and pass it in place of the literal in <code>//button[@aria-label="Total Assets"]</code>, i.e. something like <code>//button[@aria-label=str_variable]</code>.</p>
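<p>A minimal sketch of the interpolation (only the XPath construction is shown; the surrounding Selenium code stays as it is). The outer single quotes keep the inner double quotes literal:</p>

```python
def build_xpath(label: str) -> str:
    # interpolate the argument into the XPath, preserving the double quotes
    return f'//button[@aria-label="{label}"]'

xpath = build_xpath("Total Assets")
print(xpath)  # //button[@aria-label="Total Assets"]
```

<p>Inside the function this would then read <code>EC.element_to_be_clickable((By.XPATH, build_xpath(str_variable)))</code>.</p>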
|
<python><string>
|
2023-03-01 20:47:17
| 1
| 359
|
AJ Goudel
|
75,608,782
| 1,187,968
|
Python @patch.dict vs @patch.object
|
<p>I have the following file:</p>
<pre><code># dummy.py
import os
from requests import get

def my_func():
    name = os.environ.get('name')
    response = get('http://www.google.com')
    return name, response.content
</code></pre>
<p>and the following test file</p>
<pre><code>import unittest
import os
import requests
from ddt import ddt, data, unpack
from mock import patch, Mock, MagicMock
from dummy import my_func

mock_response = Mock()
mock_response.content = "...content"

@ddt
@patch.dict(os.environ, {"name": "patched"})
@patch.object(requests, requests.get.__name__, return_value=mock_response)
class TestDummy(unittest.TestCase):
    def test_dummy(self, mock_get):
        name, content = my_func()
        print("name: {}".format(name))
        print("content: {}".format(content))
</code></pre>
<p>I was thinking these two lines have similar behaviors.</p>
<pre><code>@patch.dict(os.environ, {"name": "patched"})
@patch.object(requests, requests.get.__name__, return_value=mock_response)
</code></pre>
<p>However, when running the test, <code>os.environ</code> was patched, but <code>requests</code> was NOT patched.
Why?</p>
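<p>The asymmetry comes from <code>from requests import get</code> in dummy.py: it copies the function object into dummy's namespace, so patching the attribute on the <code>requests</code> module never reaches that copy. A self-contained sketch of the effect with two throwaway modules (stdlib only; not the actual files from the question):</p>

```python
import types
from unittest.mock import patch

# stand-in for the requests module
origin = types.ModuleType("origin")
origin.get = lambda url: "real"

# stand-in for dummy.py, which did `from origin import get`
dummy = types.ModuleType("dummy")
dummy.get = origin.get  # the binding is copied at import time
dummy.my_func = lambda: dummy.get("http://example.com")

# patching the origin module leaves dummy's copy untouched
with patch.object(origin, "get", return_value="mocked"):
    assert dummy.my_func() == "real"

# patching the name where it is *used* takes effect
with patch.object(dummy, "get", return_value="mocked"):
    assert dummy.my_func() == "mocked"
```

<p>So the decorator would need to target the <code>dummy</code> module (e.g. <code>@patch("dummy.get")</code>) rather than <code>requests</code>.</p>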
|
<python>
|
2023-03-01 20:41:42
| 2
| 8,146
|
user1187968
|
75,608,729
| 6,032,221
|
Tensorflow: External calculation of dice coef on validation set different than my Unet's validation dice coef with same data set
|
<p>So I am training a variation of a Unet style network in Tensorflow for a problem I am trying to solve. I have noticed an interesting pattern / error that I am unable to comprehend or fix.</p>
<p>As I have been training this network, on TensorBoard the training loss is greater than the validation loss, but the validation metric is very low (below).</p>
<p><a href="https://i.sstatic.net/3RO38.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3RO38.jpg" alt="enter image description here" /></a></p>
<p>But I have been looking at the output data from the network and, honestly, the output doesn't appear half bad, at least not like something with a Dice of .25-.30.</p>
<p><a href="https://i.sstatic.net/N5l6a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N5l6a.png" alt="enter image description here" /></a></p>
<p>So when I externally validate the Dice by reloading the model and predicting on the validation set, I get a high dice score of > .90.</p>
<p><a href="https://i.sstatic.net/KOpB8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KOpB8.png" alt="enter image description here" /></a></p>
<p>I have a feeling this is due to the loss and metrics I use, but am unsure how to proceed. My loss, metric, and external validation code blocks are posted below.</p>
<p>Loss Class</p>
<pre><code>class sce_dsc(losses.Loss):
    def __init__(self, scale_sce=1.0, scale_dsc=1.0, sample_weight=None, epsilon=0.01, name=None):
        super(sce_dsc, self).__init__()
        self.sce = losses.SparseCategoricalCrossentropy(from_logits=False)  # while the last layer activation is sigmoid, logits needs to be false
        self.epsilon = epsilon
        self.scale_a = scale_sce
        self.scale_b = scale_dsc
        self.cls = 1
        self.weights = sample_weight

    def dsc(self, y_true, y_pred, sample_weight=None):
        true = tf.cast(y_true[..., 0] == self.cls, tf.int64)
        pred = tf.nn.softmax(y_pred, axis=-1)[..., self.cls]
        if self.weights is not None:
            #true = true * (sample_weight[...])
            true = true & (sample_weight[...] != 0)
            #pred = pred * (sample_weight[...])
            pred = pred & (sample_weight[...] != 0)
        A = tf.math.reduce_sum(tf.cast(true, tf.float32) * tf.cast(pred, tf.float32)) * 2
        B = tf.cast(tf.math.reduce_sum(true), tf.float32) + tf.cast(tf.math.reduce_sum(pred), tf.float32) + self.epsilon
        return (1.0 - A/B)

    def call(self, y_true, y_pred):
        sce_loss = self.sce(y_true=y_true, y_pred=y_pred, sample_weight=self.weights) * self.scale_a
        dsc_loss = self.dsc(y_true=y_true, y_pred=y_pred, sample_weight=self.weights) * self.scale_b
        loss = tf.cast(sce_loss, tf.float32) + tf.cast(dsc_loss, tf.float32)
        #self.add_loss(loss)
        return loss
</code></pre>
<p>Metric Class</p>
<pre><code>class custom_dice(keras.metrics.Metric):
    def __init__(self, name="dsc", **kwargs):
        super(custom_dice, self).__init__(**kwargs)
        self.dice = self.add_weight(name='dice_coef', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        true = tf.cast(y_true[..., 0] == 1, tf.int64)
        pred = tf.math.argmax(y_pred == 1, axis=-1)
        if sample_weight is not None:
            true = true * (sample_weight[...])
            pred = pred * (sample_weight[...])
        A = tf.math.count_nonzero(true & pred) * 2
        B = tf.math.count_nonzero(true) + tf.math.count_nonzero(pred)
        value = tf.math.divide_no_nan(tf.cast(A, tf.float32), tf.cast(B, tf.float32))
        self.dice.assign(value)

    def result(self):
        return self.dice

    def reset_state(self):
        self.dice.assign(0.0)
</code></pre>
<p>External Validation Dice</p>
<pre><code>def dsc(y_true, y_pred, sample_weight=None, c=1):
    print(y_true.shape, y_pred.shape)
    true = tf.cast(y_true[..., 0] == 1, tf.int64)
    pred = tf.math.argmax(y_pred == c, axis=-1)
    print(true.shape, pred.shape)
    if sample_weight is not None:
        true = true * (sample_weight[...])
        pred = pred * (sample_weight[...])
    A = tf.math.count_nonzero(true & pred) * 2
    B = tf.math.count_nonzero(true) + tf.math.count_nonzero(pred)
    return A / B
</code></pre>
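<p>One concrete mismatch worth noting: <code>update_state</code> ends with <code>self.dice.assign(value)</code>, so the number reported for an epoch is only the dice of the <em>last</em> batch, while the external check computes a single dice over the whole validation set; those two aggregations generally disagree (and <code>tf.math.argmax(y_pred == 1, axis=-1)</code> compares probabilities to 1 elementwise before the argmax, which is rarely intended). A plain-Python sketch of why per-batch assignment and whole-set dice diverge, with hypothetical masks:</p>

```python
def dice(true, pred):
    # 2*|A∩B| / (|A| + |B|) over binary lists
    inter = sum(t and p for t, p in zip(true, pred))
    return 2 * inter / (sum(true) + sum(pred))

# two validation "batches" with different foreground content
b1_true, b1_pred = [1, 1, 1, 1], [1, 1, 1, 0]
b2_true, b2_pred = [0, 0, 0, 1], [0, 0, 1, 1]

last_batch = dice(b2_true, b2_pred)                      # what assign() leaves behind
whole_set = dice(b1_true + b2_true, b1_pred + b2_pred)   # what external evaluation computes
print(last_batch, whole_set)
```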
|
<python><tensorflow><keras><conv-neural-network><metrics>
|
2023-03-01 20:35:42
| 1
| 323
|
zhilothebest
|
75,608,494
| 4,688,639
|
Are nested ifs equivalent to and logic?
|
<p>I wonder whether these two Python codes are <strong>always</strong> the same or not.</p>
<pre><code>if condition_1:
    if condition_2:
        some_process
</code></pre>
<p>and</p>
<pre><code>if condition_1 and condition_2:
    some_process
</code></pre>
<p>I searched but did not find a specific answer to this question. For example, suppose you want to evaluate a variable but cannot be sure it exists, so you must check for its existence first. Can that existence check be combined with the evaluation using "and"? Is it <strong>always</strong> applicable?</p>
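<p>For the plain two-branch form shown, yes: <code>and</code> short-circuits, so <code>condition_2</code> is never evaluated when <code>condition_1</code> is false, exactly as in the nested version. A small check with a dict lookup guarded both ways:</p>

```python
d = {}  # the key may or may not exist

# nested form: the inner test runs only when the key exists
r1 = False
if "k" in d:
    if d["k"] > 0:
        r1 = True

# combined form: short-circuiting gives the same guarantee
r2 = False
if "k" in d and d["k"] > 0:
    r2 = True

assert not r1 and not r2  # same result, and no KeyError either way
```

<p>They stop being interchangeable once <code>else</code>/<code>elif</code> branches are attached: the nested version can distinguish "condition_1 failed" from "condition_2 failed", while the combined one cannot.</p>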
|
<python><if-statement><nested><logic>
|
2023-03-01 20:09:30
| 1
| 468
|
Soroosh Noorzad
|
75,608,409
| 4,772,565
|
How to type hint a particular dict type but allow empty dict in Python 3?
|
<p>I want to transfer data between different Python files. I created a new type so that each file knows what kind of data to expect.</p>
<p>I used the following code.</p>
<pre class="lang-py prettyprint-override"><code>from typing import NewType

MyDataType = NewType("MyDataType", dict[str, dict[str, dict[str, dict[str, float]]]])

class App:
    def __init__(self, name: str, data: MyDataType):
        self.name = name
        self.data = data

app = App("app_1", data={})  # Pycharm warning: Expected type 'MyDataType', got 'dict' instead
</code></pre>
<p>However, PyCharm gives the warning:</p>
<pre><code>Expected type 'MyDataType', got 'dict' instead
</code></pre>
<p>Basically, I want <code>App.data</code> to accept an empty dict. Could you please show me how to do this? Thanks!</p>
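<p>With <code>NewType</code>, the checker treats <code>MyDataType</code> as a distinct subtype of the dict type, so a bare <code>{}</code> is rejected; wrapping the literal in the <code>NewType</code> callable satisfies it, and at runtime the call simply returns the dict unchanged. A minimal sketch (Python 3.9+ for built-in generics; the alias variant is an alternative if no distinct type is wanted):</p>

```python
from typing import NewType

MyDataType = NewType("MyDataType", dict[str, dict[str, dict[str, dict[str, float]]]])

empty = MyDataType({})  # accepted by the type checker
assert empty == {}      # runtime: it is literally the same dict object

# alternative: a plain alias introduces no new type, so {} is fine as-is
MyDataAlias = dict[str, dict[str, dict[str, dict[str, float]]]]
also_empty: MyDataAlias = {}
assert also_empty == {}
```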
|
<python><python-typing>
|
2023-03-01 20:01:53
| 1
| 539
|
aura
|
75,608,323
| 17,532,318
|
How do I solve "error: externally-managed-environment" every time I use pip 3?
|
<p>When I run <code>pip install xyz</code> on a Linux machine (using <a href="https://en.wikipedia.org/wiki/Debian" rel="noreferrer">Debian</a> or <a href="https://en.wikipedia.org/wiki/Ubuntu_%28operating_system%29" rel="noreferrer">Ubuntu</a> or a derived Linux distribution), I get this error:</p>
<blockquote>
<pre class="lang-none prettyprint-override"><code>error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
</code></pre>
</blockquote>
<p>What does this error mean? How do I avoid it? Why doesn't <code>pip install xyz</code> work like it did before I upgraded my system using <code>sudo apt upgrade</code>?</p>
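<p>The message itself describes the supported fixes (PEP 668): install Debian-packaged modules with apt, or install into a virtual environment instead of the system interpreter. A minimal sketch; the path is only an example, and <code>python3-venv</code>/<code>python3-full</code> must be installed first on Debian/Ubuntu:</p>

```shell
# create an isolated environment once (example location)
venv="$HOME/.venvs/demo"
python3 -m venv "$venv"

# the venv's own pip is not subject to the external-management guard
"$venv/bin/pip" --version

# from here on, install into the venv instead of the system:
#   "$venv/bin/pip" install xyz
# or activate it so plain pip/python resolve to the venv:
#   . "$venv/bin/activate"
```

<p>For standalone command-line applications, <code>pipx install xyz</code> manages a per-tool venv, as the message suggests.</p>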
|
<python><pip><debian><failed-installation>
|
2023-03-01 19:52:19
| 44
| 10,005
|
Apoliticalboy
|
75,608,196
| 1,187,968
|
Understanding where to patch
|
<p>In <a href="https://docs.python.org/3.6/library/unittest.mock.html#where-to-patch" rel="nofollow noreferrer">the docs</a>, it explains why patching at at the function definition level doesn't work:</p>
<blockquote>
<p>Imagine we have a project that we want to test with the following
structure:</p>
<pre><code>a.py
-> Defines SomeClass
b.py
-> from a import SomeClass
-> some_function instantiates SomeClass
</code></pre>
<p>Now we want to test <code>some_function</code> but we want to mock out <code>SomeClass</code>
using <code>patch()</code>. The problem is that when we import module <code>b</code>, which we
will have to do then it imports <code>SomeClass</code> from module <code>a</code>. If we use
<code>patch()</code> to mock out <code>a.SomeClass</code> then it will have no effect on our
test; module <code>b</code> already has a reference to the real <code>SomeClass</code> and it
looks like our patching had no effect.</p>
</blockquote>
<p>The core explanation is <em>"module <code>b</code> already has a reference to the real <code>SomeClass</code>"</em>, but I don't fully understand the concept here. Can someone give me a deeper explanation?</p>
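<p>The "reference" here is an ordinary name binding: <code>from a import SomeClass</code> copies the binding into <code>b</code>'s own namespace, and later rebinding <code>a.SomeClass</code> does not touch that copy. The effect can be reproduced with two throwaway modules and no mocking at all:</p>

```python
import types

# module a defines a name
a = types.ModuleType("a")
a.SomeClass = "real SomeClass"

# module b did `from a import SomeClass`: the binding is copied
b = types.ModuleType("b")
b.SomeClass = a.SomeClass

# rebinding the name in a (which is what patch('a.SomeClass') does) ...
a.SomeClass = "mock SomeClass"

# ... leaves b's copy still pointing at the original object
assert b.SomeClass == "real SomeClass"
```

<p>Which is why the patch target must be <code>'b.SomeClass'</code>: the name in the namespace where it is actually looked up.</p>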
|
<python><python-unittest.mock>
|
2023-03-01 19:37:46
| 2
| 8,146
|
user1187968
|
75,608,149
| 5,632,058
|
Python CSV to dataclass
|
<p>I want to load a CSV into a dataclass in Python. The dataclass consists of strings and enums, and I want to parse it accordingly. I know that there is a Python library that does this, but it does not allow skipping malformed rows, which unfortunately exist.</p>
<p>I have created a method for this, that can read a file and looks like this:</p>
<pre><code>def dataset_reader(path: str):
    #with open(path, 'r') as csv_handler:
    reader = csv.reader(path)
    header = reader.__next__()
    expected_order = fields(MyFancyDataclass)
    order_mapping = {fieldname: index for index, fieldname in enumerate([field.name for field in expected_order])}
    header_mapping = {rowname: index for index, rowname in enumerate(header)}
    order = [header_mapping.get(i[0]) for i in sorted(order_mapping.items(), key=lambda x: x[1])]
    types = [type_ for type_ in [field.type for field in fields(MyFancyDataclass)]]
    for line in reader:
        try:
            #yield MyFancyDataclass(*[line[x] if types[x] == str else types[x](line[x]) for x in order])
            yield MyFancyDataclass(line[order[0]], line[order[1]], line[order[2]], line[order[3]], SourceType(line[order[4]]), line[order[5]], line[order[6]], line[order[7]],)
        except Exception as e:
            logging.error(line)
</code></pre>
<p>What I'm essentially trying to do is not assume the order in which the CSV is written. As long as the required columns are in the file, we parse it. For this I first read the header and create an index-to-column mapping. I then do the same for the dataclass and find the correct order for the CSV.</p>
<p>Then I read the CSV row by row. What you see there are two approaches, one commented out (which is more elegant, as it does not hardcode the number of columns) and one that is faster.</p>
<p>The problem is that it is still quite slow. As we deal with big data, this is a bit of an issue. Any good ideas on how to speed it up? Assuming the column order of the CSVs is a no-go: although it should consistently be the same order, we do not want to rely on that. As essentially everything is simply a lookup in the current yield, I do not see what else we could improve to gain speed.</p>
<p>Thanks for all the help in advance!</p>
<p>CSV for reproduction. Call it test.csv:</p>
<pre><code>key,value
123,aaa
234,bbb
12,aaa
1919191,bbb
12,
13,aaa
,bbb
,
123,bbb
</code></pre>
<p>Full minimal python script for reproduction. Store it in the same folder as test.csv is:</p>
<pre><code>from dataclasses import fields, dataclass
import logging
import csv
from enum import Enum

class SourceType(Enum):
    a = "aaa"
    b = "bbb"

@dataclass
class MyFancyDataclass:
    key: str
    value: SourceType

def dataset_reader(path: str):
    #with open(path, 'r') as csv_handler:
    reader = csv.reader(path)
    header = reader.__next__()
    expected_order = fields(MyFancyDataclass)
    order_mapping = {fieldname: index for index, fieldname in enumerate([field.name for field in expected_order])}
    header_mapping = {rowname: index for index, rowname in enumerate(header)}
    order = [header_mapping.get(i[0]) for i in sorted(order_mapping.items(), key=lambda x: x[1])]
    types = [type_ for type_ in [field.type for field in fields(MyFancyDataclass)]]
    print(order)
    for line in reader:
        try:
            #yield MyFancyDataclass(*[line[x] if types[x] == str else types[x](line[x]) for x in order])
            yield MyFancyDataclass(line[order[0]], SourceType(line[order[1]]),)
        except Exception as e:
            print(e)
            logging.error(line)

if __name__=="__main__":
    print(list(dataset_reader(open("test.csv"))))
</code></pre>
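<p>For comparison, <code>csv.DictReader</code> already does the header-to-column mapping by name, which removes the manual order bookkeeping entirely; whether it is faster would have to be measured, but it is order-independent and skips malformed rows the same way. A minimal sketch against the sample data:</p>

```python
import csv
import io
from dataclasses import dataclass
from enum import Enum


class SourceType(Enum):
    a = "aaa"
    b = "bbb"


@dataclass
class Row:
    key: str
    value: SourceType


def dataset_reader(fh):
    # DictReader maps header names to values per row, so column order never matters
    for rec in csv.DictReader(fh):
        try:
            yield Row(rec["key"], SourceType(rec["value"]))
        except (KeyError, ValueError):
            continue  # malformed row: skip, as in the original


sample = "key,value\n123,aaa\n234,bbb\n12,\n,\n123,bbb\n"
rows = list(dataset_reader(io.StringIO(sample)))
print(len(rows))  # the 3 well-formed rows survive
```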
|
<python><csv>
|
2023-03-01 19:31:36
| 2
| 941
|
Syrius
|