| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
75,653,285
| 4,817,370
|
pytest : is it possible to disable the warnings that originate from a specific directory?
|
<p>Situation is as follows :</p>
<p>I am redoing all the test suites for my work and there are a LOT of warnings, but not all of them are ones I can fix.<br/>
More specifically, the project has dependencies that I cannot interfere with, so I cannot fix the warnings they raise.</p>
<p>Here is an example:</p>
<p><code>/venv/lib/python3.9/site-packages/flask_restx/api.py:275: DeprecationWarning: 'ERROR_404_HELP' config setting is deprecated and will be removed in the future. Use 'RESTX_ERROR_404_HELP' instead.</code></p>
<p>In this case the warning originates from the venv directory and I cannot do anything about it for now.</p>
<p>How can I see just the other warnings, without having to scroll through hundreds of warnings to find them?<br/>
I would like to ignore all warnings that have <code>venv/</code> in their path.</p>
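<p>As a point of reference (not part of the question): Python's warning filters match the warning's <em>origin module name</em> with a regex, not its file path, so <code>venv/</code> cannot be matched directly, but the offending package can. A sketch, assuming the flask_restx example above:</p>

```python
import warnings

# Warning filters match the module name where the warning originates
# (as a regex prefix), not the file path, so target the package rather
# than the venv/ directory.
warnings.filterwarnings(
    "ignore",
    category=DeprecationWarning,
    module=r"flask_restx\..*",
)
```

<p>In pytest the equivalent would be a <code>filterwarnings</code> entry in <code>pytest.ini</code>, e.g. <code>ignore::DeprecationWarning:flask_restx.*</code> (adjust per package).</p>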
|
<python><python-3.x><pytest>
|
2023-03-06 16:20:31
| 1
| 2,559
|
Matthieu Raynaud de Fitte
|
75,653,197
| 8,543,025
|
Merge "True" chunks in binary array (Binary Closing)
|
<p>I have a (big) boolean array and I'm looking for a way to fill <code>True</code> where it merges two sequences of <code>True</code> with minimal length.<br />
For example:</p>
<pre><code>a = np.array([True] *3 + [False] + [True] *4 + [False] *2 + [True] *2)
# a == array([ True, True, True, False, True, True, True, True, False, False, True, True])
closed_a = close(a, min_merge_size=2)
# closed_a == array([ True, True, True, True, True, True, True, True, False, False, True, True])
</code></pre>
<p>Here the <code>False</code> value at index <code>[3]</code> is converted to <code>True</code> because on both sides it has a sequence of at least 2 <code>True</code> elements. Conversely, elements <code>[8]</code> and <code>[9]</code> remain <code>False</code> because they don't have such a sequence on both sides.</p>
<p>I tried using scipy.ndimage.binary_closing with <code>structure=[True, True, False, True, True]</code> (and with <code>False</code> in the middle) but it doesn't give me what I need.<br />
Any ideas?</p>
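<p>Not part of the original post, but for concreteness, a direct sketch of the behaviour described above: a <code>False</code> element is flipped only when the <code>min_merge_size</code> elements immediately before and after it are all <code>True</code> (which is why <code>[8]</code> and <code>[9]</code> stay <code>False</code>):</p>

```python
import numpy as np

def close(a, min_merge_size=2):
    """Flip a False element to True when it is immediately flanked on both
    sides by at least `min_merge_size` consecutive True values."""
    a = np.asarray(a, dtype=bool)
    k = min_merge_size
    out = a.copy()
    for i in range(k, a.size - k):
        if not a[i] and a[i - k:i].all() and a[i + 1:i + 1 + k].all():
            out[i] = True
    return out

a = np.array([True] * 3 + [False] + [True] * 4 + [False] * 2 + [True] * 2)
closed_a = close(a, min_merge_size=2)
```

<p>For very large arrays the Python loop could be replaced by a sliding-window or convolution formulation, but the loop keeps the rule explicit.</p>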
|
<python><python-3.x><binary-image>
|
2023-03-06 16:13:43
| 1
| 593
|
Jon Nir
|
75,653,149
| 8,318,946
|
Django - revoke(task_id) in celery does not cancel playwright task
|
<p>I am trying to write a Django command that will cancel an existing celery task in my Django application. The idea is to use this command in an APIView so the user can cancel a task that is already running.</p>
<p>When running <code>python manage.py cancel_task</code> I see in the terminal that the task was cancelled, but the status of the task remains the same and it keeps running. In the end the status of the task is always SUCCESS.</p>
<p>The task is opening playwright chromium and going through list of websites.</p>
<pre><code>Task 0d5ffdd3-3a2c-4f40-a135-e1ed353afdf9 has been cancelled.
</code></pre>
<p>Below is my command that I store in cancel_task.py</p>
<pre><code>from django.core.management.base import BaseCommand
from celery.result import AsyncResult

from myapp.celery import app as myapp


class Command(BaseCommand):
    help = 'Cancel a long-running Celery task'

    def add_arguments(self, parser):
        parser.add_argument('task_id', help='ID of the task to cancel')

    def handle(self, *args, **options):
        task_id = options['task_id']
        result = AsyncResult(task_id, app=myapp)
        if result.state not in ('PENDING', 'STARTED'):
            self.stdout.write(self.style.WARNING(f'Task {task_id} is not running.'))
            return
        result.revoke(terminate=True, wait=False)
        self.stdout.write(self.style.SUCCESS(f'Task {task_id} has been cancelled.'))
</code></pre>
<p>I checked all answers <a href="https://stackoverflow.com/questions/8920643/cancel-an-already-executing-task-with-celery">in this post</a> and in <a href="https://stackoverflow.com/questions/8920643/cancel-an-already-executing-task-with-celery/8924116#8924116">this post</a> as well but nothing works in my application.</p>
<p>What am I doing wrong, and how can I cancel the task? I tried to use <code>revoke</code> directly in celery like this:</p>
<pre><code>>>> from myapp.celery import myapp
>>> myapp.control.revoke(task_id, terminate=True)
</code></pre>
<p>But the final effect is the same. I get information that the task was cancelled but status does not change.</p>
|
<python><django><celery>
|
2023-03-06 16:09:49
| 0
| 917
|
Adrian
|
75,653,030
| 605,356
|
Python: how to avoid psutil.process_iter() 'AccessDenied' and other misc errors?
|
<p>In Python, why am I seeing <code>psutil.AccessDenied</code> errors/exceptions when iterating through my processes (which works just fine) and printing their command lines via <code>psutil.cmdline()</code>? Frustratingly, the exception occurs for the exact same process where <code>psutil.name()</code> works fine, as in the following code snippet:</p>
<pre><code>import psutil

for proc in psutil.process_iter():
    print(proc.name())     # seems to consistently work
    print(proc.cmdline())  # sometimes generates Exception
</code></pre>
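<p>Not part of the question, but a common workaround sketch: on several platforms <code>cmdline()</code> needs more privileges than <code>name()</code>, and psutil raises per field, so guard each access and skip what the OS refuses to expose:</p>

```python
import psutil

def list_cmdlines():
    """Collect (name, cmdline) pairs, skipping processes whose command line
    the OS refuses to expose to the current user."""
    results = []
    for proc in psutil.process_iter(attrs=["name"]):
        try:
            results.append((proc.info["name"], proc.cmdline()))
        except (psutil.AccessDenied, psutil.NoSuchProcess, psutil.ZombieProcess):
            continue  # privileged, system, or already-exited process
    return results
```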
|
<python><command-line><process><access-denied><psutil>
|
2023-03-06 15:57:13
| 1
| 2,498
|
Johnny Utahh
|
75,653,000
| 2,908,017
|
How do I change the cursor of a control in a Python FMX GUI App?
|
<p>I've created an <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">FMX GUI App</a> and I have several components on the form. I'd like to change the default <code>Cursor</code> for them, but I'm not sure how.</p>
<p>I've tried doing the following code to change the <code>Cursor</code> on my <code>Memo</code>:</p>
<pre><code>self.Memo1.Cursor = "crNo"
</code></pre>
<p>But <code>self.Memo1.Cursor = "crNo"</code> doesn't work; I get <code>Error: Invalid class typecast</code>.</p>
<p>What is the correct way to change the <code>Cursor</code> of a component?</p>
|
<python><user-interface><firemonkey><mouse-cursor>
|
2023-03-06 15:53:19
| 2
| 4,263
|
Shaun Roselt
|
75,652,936
| 2,071,807
|
How to unit test that a Flask app's routes are protected by authlib's ResourceProtector?
|
<p>I've got a Flask app with some routes protected by <a href="https://docs.authlib.org/en/latest/flask/2/resource-server.html" rel="nofollow noreferrer">authlib's ResourceProtector</a>.</p>
<p>I want to test that a route I've set up is indeed protected by a <code>authlib.integrations.flask_oauth2.ResourceProtector</code> but I don't want to go to the trouble of creating a valid jwt token or anything like that. I want to avoid feeling like I'm testing <code>authlib</code>, rather than testing my code.</p>
<p>All I really need to do is check that when I hit my endpoint, the <code>ResourceProtector</code> is called. This should be possible with mocking of some kind, but I wonder if there's a supported way to do this.</p>
<pre class="lang-py prettyprint-override"><code>from authlib.integrations.flask_oauth2 import ResourceProtector

require_auth = ResourceProtector()
require_auth.register_token_validator(validator)

APP = Flask(__name__)

@APP.route("/")
@require_auth(None)
def home():
    return "Authorized!"
</code></pre>
<p>The test should look something like this:</p>
<pre class="lang-py prettyprint-override"><code>from my_module import APP

class TestApi:
    @property
    def client(self):
        return APP.test_client()

    def test_response_without_oauth(self):
        response = self.client.get("/")
        assert response.status_code == 401

    def test_response_with_auth0(self):
        # Do some mocking or some magic
        response = self.client.get("/")
        assert response.text == "Authorized!"
</code></pre>
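<p>For context (not from the post): the usual pattern is to patch the protector's token-acquisition boundary with <code>unittest.mock</code>, so the decorator believes a token is present, rather than forging a JWT. A minimal stand-in below shows the mechanics without authlib or Flask; all names here are hypothetical:</p>

```python
from unittest import mock

class FakeProtector:
    """Stand-in for authlib's ResourceProtector, purely to show the mocking
    seam; it raises unless a token can be acquired."""
    def acquire_token(self, scopes=None):
        raise PermissionError("missing token")

    def __call__(self, scopes=None):
        def decorator(fn):
            def wrapper(*args, **kwargs):
                self.acquire_token(scopes)  # the seam a test can patch
                return fn(*args, **kwargs)
            return wrapper
        return decorator

require_auth = FakeProtector()

@require_auth(None)
def home():
    return "Authorized!"

# In a test, bypass validation by patching the seam:
with mock.patch.object(FakeProtector, "acquire_token", return_value={"sub": "u1"}):
    result = home()
```

<p>With real authlib the patch target would be the analogous token-acquisition method on your <code>require_auth</code> instance (the exact name may differ by version); the point is to patch the boundary rather than to test authlib itself.</p>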
|
<python><flask>
|
2023-03-06 15:46:57
| 2
| 79,775
|
LondonRob
|
75,652,855
| 9,471,909
|
Emacs elpy : How to view a list of all (global) variables and class attributes in the current buffer?
|
<p>I use <code>elpy</code> in <code>emacs</code> for Python development. If I enter the command <code>C-c C-o</code> I'll be able to view all defined functions, classes and methods in the current buffer. But I don't see any of the class attributes and defined (global) variables. Is there any way to get this extra information?</p>
|
<python><emacs><elpy>
|
2023-03-06 15:40:09
| 1
| 1,471
|
user17911
|
75,652,730
| 5,641,924
|
Best practice to download blobs from Azure container
|
<p>I have an Azure container with thousands of blobs that each of them saves in a directory <code>id/year/month/day/hour_minute_second/file.json</code>. I want to download all <code>file.json</code> for an <code>id</code> between <code>start_date</code> and <code>end_date</code> in python. For this purpose, I use the <code>BlobServiceClient</code> from the <code>azure</code> python package. Before downloading each JSON file, I will check the blob directory existence using the <code>get_blob_client(blob=blob_dir).exists()</code> method.</p>
<pre><code>from itertools import product

import pandas as pd
from azure.storage.blob import BlobServiceClient


class AzureContainerClient(object):
    def __init__(self, account_url="https://mystorage.blob.core.windows.net/",
                 container_name='json',
                 credential="azure"):
        self.account_url = account_url
        self.container_name = container_name
        self.credential = credential
        # Connect to Azure
        self.__connect()

    def __connect(self):
        """
        Connect to Azure container_name.
        :return:
        """
        self.blob_client_server = BlobServiceClient(account_url=self.account_url, credential=self.credential)
        self.container_client = self.blob_client_server.get_container_client(container=self.container_name)

    def close(self) -> None:
        """
        Close the connection.
        :return: None
        """
        self.blob_client_server.close()

    def is_exist(self, blob: str) -> bool:
        """
        Return True if blob exists in self.container_name, else False.
        :param blob: blob address
        :return:
        """
        return self.container_client.get_blob_client(blob=blob).exists()

    def read_blob(self, blob) -> dict:
        """
        Read the blob from container_client.
        :param blob: blob directory
        :return:
        """
        data = self.container_client.get_blob_client(blob=blob).download_blob().readall()
        # Load the binary data into json
        # data = json.loads(data)
        return data


def get_files(ids: list, start_date: str, end_date: str) -> pd.DataFrame:
    """
    Get the json files for ids from start_date to end_date.
    :param ids:
    :param start_date:
    :param end_date:
    :return:
    """
    date_range = pd.date_range(start=start_date, end=end_date, freq='H')
    # Get the generator of directories for each id between start_date and end_date
    stores_dates_gen = product(ids, date_range)
    azure_container_client = AzureContainerClient()
    data_list = []
    for id_date in stores_dates_gen:
        # Get the blob directory
        id_date_blob = f'{id_date[0]}/{"/".join(id_date[1].strftime("%Y-%m-%d-%H_%M_%S").split("-"))}/file.json'
        # Check the existence of the id_date blob in the container
        if not azure_container_client.is_exist(id_date_blob):
            continue
        data = azure_container_client.read_blob(blob=id_date_blob)
        data_list.append((id_date[0], id_date[1], data))
    df = pd.DataFrame(data=data_list, columns=['id', 'dateTime', 'data'])
    return df
</code></pre>
<p>However, it takes too much time, mostly because of the per-blob existence check (<code>azure_container_client.is_exist(id_date_blob)</code>). What is a faster way to download all existing blobs?</p>
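<p>One possible direction (an assumption about intent, not from the post): list the container once per id prefix, e.g. with <code>container_client.list_blobs(name_starts_with=str(id))</code>, and intersect the listed names with the generated candidates locally, so there is one round-trip per id instead of one per candidate. A sketch of just the local filtering step, with hypothetical names:</p>

```python
from itertools import product

import pandas as pd

def candidate_blob_names(ids, start_date, end_date):
    """Every blob path in the id/year/month/day/hour_minute_second layout."""
    date_range = pd.date_range(start=start_date, end=end_date, freq="h")
    for store_id, ts in product(ids, date_range):
        parts = ts.strftime("%Y-%m-%d-%H_%M_%S").split("-")
        yield f"{store_id}/{'/'.join(parts)}/file.json"

def existing_candidates(listed_names, ids, start_date, end_date):
    """Intersect one container listing with the candidates, replacing the
    per-blob exists() round-trips with an in-memory set lookup."""
    listed = set(listed_names)
    return [name for name in candidate_blob_names(ids, start_date, end_date)
            if name in listed]
```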
|
<python><azure>
|
2023-03-06 15:29:39
| 1
| 642
|
Mohammadreza Riahi
|
75,652,727
| 346,112
|
Python logging configuration like Java log4j
|
<p>I'm a Java developer who has transitioned to Python 3 development. I need to configure logging in my Python application, and I'd like to do it in a similar way to how I configured log4j in Java. Ideally I want my Python logging configuration file to be in YAML format. How do I do this?</p>
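<p>For context, the standard route (a sketch, not from the post) is <code>logging.config.dictConfig</code> plus PyYAML: keep the configuration in YAML, parse it, and hand the resulting dict to the logging machinery, roughly the moral equivalent of a log4j configuration file:</p>

```python
import logging
import logging.config

import yaml  # PyYAML; assumed installed

CONFIG_YAML = """
version: 1
disable_existing_loggers: false
formatters:
  simple:
    format: "%(asctime)s %(levelname)s %(name)s: %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: simple
    level: DEBUG
root:
  level: INFO
  handlers: [console]
"""

# In practice the YAML would live in a file and be read via open()/safe_load.
logging.config.dictConfig(yaml.safe_load(CONFIG_YAML))
logging.getLogger("myapp").info("configured from YAML")
```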
|
<python><python-3.x><logging><configuration><log4j>
|
2023-03-06 15:29:08
| 1
| 15,278
|
Jim Tough
|
75,652,720
| 7,179,546
|
How to transform a list of lists in Python into a dictionary
|
<p>I have a structure in Python like this one:</p>
<pre><code>example = {
['g': 'h'],
[
{'a':'b'}, {'c': 'd'},
{'a':'b'}, {'e', 'f'}
]
}
</code></pre>
<p>I want to create a dictionary that represents the information of this structure, treating it as a graph in the sense that the info flows from left to right.</p>
<p>The output I want, for the example above is:</p>
<pre><code>output = {
{'a':
{'b':
{'c': 'd'},
{'e', 'f'}
}
,
{'g': 'h'}
}
</code></pre>
|
<python><dictionary><graph>
|
2023-03-06 15:28:09
| 1
| 737
|
Carabes
|
75,652,633
| 11,992,601
|
The cycle of a function is delayed when an error occurs in one of multiple infinite-loop functions executed with python asyncio
|
<p>Like in the following code, I put several infinite-loop asynchronous functions in one event loop and run them:</p>
<pre><code>loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(
        asyncio.gather(
            get_exchange_rate(),
            data_input(),
            return_exceptions=True,
        )
    )
finally:
    loop.close()
</code></pre>
<p>The function that saves data should run every 5 seconds, but if an error occurs in the function that crawls the exchange rate, the save function's cycle is delayed for the duration of the error.</p>
<p>The error that occurs is an SSL error as follows, and get_exchange_rate() is a crawl function with a cycle of 30 seconds through BeautifulSoup.</p>
<p><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='URL', port=443): Max retries exceeded with url:</code></p>
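<p>One plausible cause (an assumption, not stated in the post): a blocking requests/BeautifulSoup call inside a coroutine stalls the entire event loop while it waits or retries, delaying every other coroutine. Offloading the blocking work with <code>run_in_executor</code> keeps the other coroutine on schedule. A self-contained stand-in sketch:</p>

```python
import asyncio
import time

def blocking_crawl():
    """Stand-in for the requests/BeautifulSoup work (may block for seconds)."""
    time.sleep(0.2)
    return "rate"

async def get_exchange_rate(results):
    loop = asyncio.get_running_loop()
    # Run the blocking call in a thread so the event loop stays responsive.
    rate = await loop.run_in_executor(None, blocking_crawl)
    results.append(rate)

async def data_input(results):
    for _ in range(3):
        results.append("saved")
        await asyncio.sleep(0.05)

async def main():
    results = []
    await asyncio.gather(get_exchange_rate(results), data_input(results))
    return results

results = asyncio.run(main())
```

<p>With the blocking call awaited directly instead, <code>data_input</code> would be frozen until the crawl (or its retries) returned.</p>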
|
<python><python-asyncio><coroutine>
|
2023-03-06 15:21:14
| 0
| 567
|
윤태일
|
75,652,476
| 8,845,766
|
why is the output of make_password different than expected?
|
<p>Sorry if this is a dumb question, but I'm new to django here. I'm creating a signup flow, one that needs the user's email and password to create the user. I'm trying to hash and salt the password, before saving the salted password in the db, along with the email.</p>
<p>I'm using django's default <code>make_password(password=password, salt=get_random_string(length=32))</code> to hash and salt the password. But the output I get is like <code>"!KxPs6lAiW1Im2iuBbuK1lm6dqQz5h08gPSIWlEUr"</code> instead of being something like <code>"algorithm$iterations$salt$hash"</code>. Here's the code:</p>
<pre><code>salt = get_random_string(length=32)
print(salt)
salted_pwd = make_password(password=password, salt=salt)
print("salted", salted_pwd)
</code></pre>
<p>Why is this happening and what am I doing wrong here?</p>
|
<python><django>
|
2023-03-06 15:07:30
| 1
| 794
|
U. Watt
|
75,652,474
| 2,725,103
|
Python: replace a pattern in a df column with another pattern
|
<p>I have a dataframe as per the below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
columns=['deal','details'],
data=[
['deal1', 'MH92h'],
['deal2', 'L97h'],
['deal3', '97.538'],
['deal4', 'LM98h'],
['deal5', 'TRD (97.612 cvr)'],
]
)
</code></pre>
<p>I would like to replace any row whose <code>details</code> matches <code>MH[0-9]+h</code> with <code>[0-9]+.75</code> (keeping the digits).
For example, the output would look as follows:</p>
<pre><code>df =
['deal1', 'MH92h', '92.75']
['deal2', 'L97h', 'L97h'],
['deal3', '97.538', '97.538'],
['deal4', 'MH98h', '98.75'],
['deal5', 'TRD 97.61', 'TRD 97.61']
</code></pre>
<p>I've tried the below, but it doesn't work:</p>
<pre><code>df = df.assign(test_col=df.details.str.replace("\d+",r'\d+'+'75'), regex=True)
</code></pre>
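<p>For what it's worth, a sketch of the capture-group version of that replacement (assuming the intent is MH&lt;digits&gt;h to &lt;digits&gt;.75, with every other row left untouched); note that <code>regex=True</code> belongs inside <code>str.replace</code>, not <code>assign</code>:</p>

```python
import pandas as pd

df = pd.DataFrame(
    columns=['deal', 'details'],
    data=[
        ['deal1', 'MH92h'],
        ['deal2', 'L97h'],
        ['deal3', '97.538'],
        ['deal4', 'LM98h'],
        ['deal5', 'TRD (97.612 cvr)'],
    ]
)

# Capture the digits between MH and h, rebuild the value with .75 appended;
# values that do not match the pattern pass through unchanged.
df['test_col'] = df['details'].str.replace(r'^MH(\d+)h$', r'\1.75', regex=True)
```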
|
<python><pandas><regex>
|
2023-03-06 15:07:22
| 1
| 1,069
|
Mike
|
75,652,460
| 3,914,746
|
Python Sum Scores from CSV
|
<p>I am creating a program that is supposed to read data in a file called "Diving.csv". Each line in the file has a set of scores for each contestant in a diving competition. The program needs to find and display the highest and lowest score for each contestant. It should then find the final score for each contestant by finding the sum of each diver's score (except for the largest and smallest) and then multiplying the sum by 0.6. This should then be displayed after each max and min score. Each contestant has 11 scores.</p>
<p>I believe the problem is in either the read-file procedure or the function that calculates the scores. Any help is much appreciated.</p>
<p>Here is an example of the csv:
<a href="https://i.sstatic.net/25NPW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/25NPW.png" alt="Driving csv" /></a></p>
<pre><code>#Procedure to Read File
def ReadFile():
    class Diving():
        score1 = 0.0
        score2 = 0.0
        score3 = 0.0
        score4 = 0.0
        score5 = 0.0
        score6 = 0.0
        score7 = 0.0
        score8 = 0.0
        score9 = 0.0
        score10 = 0.0
        score11 = 0.0
    diving = [Diving() for x in range(10)]
    counter = 0
    with open("Diving.csv","r") as readfile:
        line = readfile.readline().rstrip("\n")
        while line:
            items = line.split(",")
            diving[counter].score1 = float(items[0])
            diving[counter].score2 = float(items[1])
            diving[counter].score3 = float(items[2])
            diving[counter].score4 = float(items[3])
            diving[counter].score5 = float(items[4])
            diving[counter].score6 = float(items[5])
            diving[counter].score7 = float(items[6])
            diving[counter].score8 = float(items[7])
            diving[counter].score9 = float(items[8])
            diving[counter].score10 = float(items[9])
            diving[counter].score11 = float(items[10])
            line = readfile.readline().rstrip("\n")
            counter += 1
    input("File read... Press any key to continue")
    return diving

#Procedure to Calculate Scores
def CalcScores(diving):
    score = [0]*10
    for i in range(10):
        maxmin = [float(diving[i].score1),float(diving[i].score2),float(diving[i].score3),float(diving[i].score4),float(diving[i].score5),float(diving[i].score6),float(diving[i].score7),float(diving[i].score8),float(diving[i].score9),float(diving[i].score10),float(diving[i].score11)]
        min = maxmin[0]
        max = maxmin[0]
        for loop in range(11):
            if float(maxmin[loop]) < min:
                min = maxmin[loop]
            if float(maxmin[loop]) > max:
                max = maxmin[loop]
        print("min =",min,"max =",max)
        score[i] = ((diving[i].score1+diving[i].score2+diving[i].score3+diving[i].score4+diving[i].score5+diving[i].score6+diving[i].score7+diving[i].score8+diving[i].score9+diving[i].score10+diving[i].score11)-min-max)*0.6
    return score

#Procedure to Display Scores
def DisplayScore(score):
    for counter in range(len(score)):
        print("Score",counter+1,"=",round(score[counter],2))

#Main Program
diving = ReadFile()
score = CalcScores(diving)
DisplayScore(score)
</code></pre>
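<p>For reference (not part of the question), the per-diver computation itself, drop the single highest and lowest of the 11 scores, sum the remaining nine, and multiply by 0.6, can be written much more compactly. A sketch with made-up scores:</p>

```python
def final_score(scores):
    """Sum all scores except the single largest and smallest, times 0.6."""
    ordered = sorted(scores)
    return sum(ordered[1:-1]) * 0.6

row = [7.5, 8.0, 6.5, 9.0, 7.0, 8.5, 7.5, 8.0, 7.0, 6.0, 9.5]
result = round(final_score(row), 2)  # drops 6.0 and 9.5, leaving a sum of 69
```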
|
<python>
|
2023-03-06 15:06:07
| 1
| 1,155
|
Hexana
|
75,652,326
| 2,532,203
|
Celery: Spawn "sidecar" webserver process
|
<p>I'm trying to collect metrics from my Celery workers, which seemed simple enough, but turns out to be utterly, ridiculously hard. After lots of approaches, I'm now trying to spawn an additional process next to the Celery worker/supervisor that hosts a simple HTTP server to expose Prometheus metrics.<br />
To make this work, I need to spawn a process using the <code>multiprocessing</code> module, so the Celery task workers and the metrics server can use the same, in-memory Prometheus registry. In theory, this would be as simple as:</p>
<pre class="lang-py prettyprint-override"><code># app/celery_worker.py
from multiprocessing import Process

from prometheus_client import start_http_server, REGISTRY

def start_server():
    start_http_server(port=9010, registry=REGISTRY)

if __name__ == "__main__":
    metric_server = Process(target=start_server, daemon=True)
    metric_server.start()
</code></pre>
<p>Alas, the worker is started using the Celery module:</p>
<pre class="lang-bash prettyprint-override"><code>python -m celery --app "app.celery_worker" worker
</code></pre>
<p>So my worker is never the main module. How can I spawn a process in the Celery worker?</p>
|
<python><celery><python-multiprocessing>
|
2023-03-06 14:54:26
| 2
| 1,501
|
Moritz Friedrich
|
75,652,281
| 11,703,015
|
Store an edited dataframe
|
<p>I am importing data from an excel file (.xls) as a DataFrame. Once the DataFrame has been imported, I manipulate it, modifying several fields and columns. For the following steps, I would like to use the modified DataFrame and not work from the beginning with the excel file.</p>
<p>How could I store the DataFrame, so that each time I call the script, it does not import the excel file from scratch but work with the already imported and modified DataFrame?</p>
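<p>A common approach (a sketch, not from the post): persist the modified DataFrame once, e.g. with <code>to_pickle</code> or <code>to_parquet</code>, and on later runs reload the cached file instead of re-reading the Excel source. The file name below is hypothetical, and the Excel step is replaced by a stand-in frame:</p>

```python
import os

import pandas as pd

CACHE = "modified.pkl"  # hypothetical cache path

if os.path.exists(CACHE):
    # Later runs: skip the Excel import entirely.
    df = pd.read_pickle(CACHE)
else:
    df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for read_excel(...)
    df["a"] = df["a"] * 10               # stand-in for the modifications
    df.to_pickle(CACHE)
```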
|
<python><pandas><dataframe><import>
|
2023-03-06 14:50:50
| 1
| 516
|
nekovolta
|
75,652,274
| 2,908,017
|
How do I make an Edit that only accepts numbers in a Python FMX GUI App?
|
<p>I have a <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">Python FMX GUI App</a> with an <code>Edit</code> control that should accept only integer values. In the past, I've done this kind of validation by overloading the KeyPress event and just removing characters that didn't fit the specification.</p>
<p>Is there a different way that is better? Maybe using regular expressions?</p>
<p>Ideally, this would behave such that pressing a non-numeric character would either produce no result or immediately provide the user with feedback about the invalid character.</p>
|
<python><user-interface><firemonkey>
|
2023-03-06 14:50:14
| 1
| 4,263
|
Shaun Roselt
|
75,652,221
| 2,163,392
|
Find contours on a dark background
|
<p>Suppose, I have a dark image on a white background. For example, the image below:</p>
<p><a href="https://i.sstatic.net/CxHCc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CxHCc.jpg" alt="enter image description here" /></a></p>
<p>With the code below, I can easily extract its contours:</p>
<pre><code>import imutils
import cv2
import numpy as np

image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = imutils.resize(image, width = 64)
thresh = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 7)
# initialize the outline image, find the outermost contours, then draw them
outline = np.zeros(image.shape, dtype = "uint8")
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
x = cv2.drawContours(outline, [cnts], -1, 255, -1)
</code></pre>
<p>This would give me the image below</p>
<p><a href="https://i.sstatic.net/v1hZx.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v1hZx.jpg" alt="easy image, white background, good contrast between foreground and background" /></a></p>
<p>However, let us try a more difficult image, like below</p>
<p><a href="https://i.sstatic.net/OMFjt.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OMFjt.jpg" alt="difficult image" /></a></p>
<p>whenever I run such a code using the image above, I find the following error:</p>
<pre><code>Traceback (most recent call last):
  File "segment_img.py", line 16, in <module>
    cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
IndexError: list index out of range
</code></pre>
<p>It obviously does not find any contours, even though I am using adaptive thresholding, which is supposed to handle this.</p>
<p>What can I change in my code so I could make both images having their corresponding contours?</p>
|
<python><opencv><image-processing><computer-vision>
|
2023-03-06 14:46:13
| 2
| 2,799
|
mad
|
75,652,083
| 13,742,058
|
how to design a class with complicated data type which includes multiple list and dictionaries in python?
|
<p>I have a complicated data type for which I would like to create a class, but I do not know how to design it. Please see the example data below. Thank you.</p>
<pre><code>"""
complicated_data_sample1 = {
name: "test_name"
description:"test_description",
type:"test_type",
its_all: [ "overview":{"brand":"dell", "model":"11"},
"Processor": {"Speed":"1222mhz"}
]
}
complicated_data_sample2 = {
name: "test_name2"
description:"test_description2",
type:"test_type2",
its_all: [ "Memory":{"flash":"256", "ram":"8gb", "video":"1gb"},
"network": {"switching":"supported", "ports":"16","features": ["f1","f2","f3"]},
"dimension":{"width":"3","length":"3","height":"4"},
]
}
complicated_data_sample3 = {
name: "test_name3"
description:"test_description3",
type:"test_type3",
its_all: None
}
"""
</code></pre>
<p>I would like to design a class like this:</p>
<pre><code>class Complicated_data:
name = "test_name"
description = "test_description",
type = "test_type",
# its_all , don't know how to do this
</code></pre>
<p>so that in the future, I can call data1_name = Complicated_data.name</p>
<p>The its_all data is complicated because the list length changes and the dictionary keys change as well.</p>
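<p>One way to model this (a sketch, not from the post, assuming <code>its_all</code> is best treated as an optional mapping of section name to arbitrary nested data):</p>

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class ComplicatedData:
    name: str
    description: str
    type: str
    # Section name -> arbitrary nested payload; keys and length vary per item.
    its_all: Optional[Dict[str, Any]] = None

sample = ComplicatedData(
    name="test_name",
    description="test_description",
    type="test_type",
    its_all={"overview": {"brand": "dell", "model": "11"},
             "Processor": {"Speed": "1222mhz"}},
)
```

<p>If particular sections (overview, Memory, network, ...) later stabilize, each could become its own small dataclass instead of a free-form dict.</p>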
|
<python><list><dictionary><class><inner-classes>
|
2023-03-06 14:30:41
| 1
| 308
|
fardV
|
75,652,018
| 577,669
|
How to annotate a dataclass to have a return value based on the initialized value?
|
<p>I have the following (stripped down) dataclass:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Union, Type

class BaseType: ...
class PathType(BaseType): ...
class DataType(BaseType): ...

_FinalTypes = Union[PathType, DataType]

@dataclass
class InterfaceInfo:
    what: Type[_FinalTypes]
    name: str

    def __call__(self, *args, **kwargs) -> _FinalTypes:
        return self.what(*args, **kwargs)

print(InterfaceInfo(PathType, "path"))
print(InterfaceInfo(DataType, "path"))
</code></pre>
<p>But I'm uncertain how to properly annotate it.
My intent is that whatever type you pass into the <code>__init__</code> method should come out of <code>__call__</code> as a materialized object.</p>
<p>Because what I've written now, the type-checker will think it's possible to have an <code>InterfaceInfo</code> constructed with a PathType, and have a DataType-object come out of it.</p>
<p>If this were a method, I could use <code>@overload</code> to type hint it, but this is a class, so I'm at a loss...
I've looked into a TypeVar bound to BaseType. But isn't the same mismatch possible then? Or will the type checker be smart enough to know that a Type[PathType] comes in, and a PathType needs to come out?</p>
<p>How would one go to solve this?</p>
<p>Thanks!</p>
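<p>For reference, a generic dataclass over a TypeVar bound to BaseType does make the call's return type track the constructor argument at runtime and for most checkers (a sketch; whether a given checker infers the parameter from the dataclass-generated <code>__init__</code> may vary):</p>

```python
from dataclasses import dataclass
from typing import Generic, Type, TypeVar

class BaseType: ...
class PathType(BaseType): ...
class DataType(BaseType): ...

T = TypeVar("T", bound=BaseType)

@dataclass
class InterfaceInfo(Generic[T]):
    what: Type[T]
    name: str

    def __call__(self, *args, **kwargs) -> T:
        return self.what(*args, **kwargs)

info = InterfaceInfo(PathType, "path")  # inferred as InterfaceInfo[PathType]
obj = info()                            # checker sees PathType, not a Union
```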
|
<python><python-typing>
|
2023-03-06 14:25:21
| 1
| 1,069
|
Steven Van Ingelgem
|
75,651,703
| 468,334
|
Get the (C) size of a (C) type in a Python extension
|
<p>I'm writing a C extension for Python and need to pass the size of a C type to the compilation. So I'll want to do</p>
<pre class="lang-py prettyprint-override"><code>extra_compile_args = ['-DSIZEOF_MYTYPE=32']
</code></pre>
<p>in my <code>setup.py</code>. My question is: how can I get the size? (32 here)</p>
<p>If my target were pure C and I use autoconf, then I could use <code>AC_CHECK_SIZEOF</code>, if writing a Ruby extension there is <code>check_sizeof</code> in the <a href="https://ruby-doc.org/stdlib-2.6.1/libdoc/mkmf/rdoc/MakeMakefile.html" rel="nofollow noreferrer">mkmf module</a>, is there some similar facility (presumably, like autoconf and Ruby's mkmf, by making a test-compile which calls C's <code>sizeof</code> and prints the result to stdout, and capturing that) in Python? Or do I need to roll my own?</p>
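<p>As context (not from the post): setuptools has no built-in AC_CHECK_SIZEOF equivalent, so you generally roll your own test-compile. For types that ctypes can describe, though, one shortcut is to take the size from ctypes at build time; <code>c_long</code> below is a stand-in for the real MYTYPE:</p>

```python
import ctypes

# Works only for types ctypes knows; project-specific structs would still
# need a test-compile step.
sizeof_mytype = ctypes.sizeof(ctypes.c_long)
extra_compile_args = [f"-DSIZEOF_MYTYPE={sizeof_mytype}"]
```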
|
<python><setup.py><python-extensions>
|
2023-03-06 13:55:13
| 0
| 1,120
|
jjg
|
75,651,587
| 12,546,311
|
How to subtract a one-row pandas series from a multi-row pandas series?
|
<p>I have a pandas dataframe with 962 columns</p>
<pre><code>print(df)
ID doy WaitID Year ... 212
386 1895 193 14507 2001 ... 0.407672
389 1899 192 14511 2001 ... 0.000000
390 1900 204 14512 2001 ... 0.000000
391 1902 145 14514 2001 ... 2.251606
395 1877 204 14491 2001 ... 1.727977
... ... ... ... ... ... ...
20279 369 189 32767 2001 ... 1.727977
20281 371 174 32767 2001 ... 2.038362
20292 356 170 32767 2001 ... 0.407672
20295 359 174 32767 2001 ... 0.815345
20296 360 201 32767 2001 ... 2.038362
</code></pre>
<p>and another data frame that I have sliced into a average one-rowed pandas series</p>
<pre><code>print(mean_df)
ID WaitID ... 212 213 Year
0 3.1 3.0 ... 35.939027 24.231911 2000.0
</code></pre>
<p>I want to subtract column <code>212</code> of <code>mean_df</code> from the <code>df</code> column with the same name. I did the following, but it gives me <code>NaN</code>:</p>
<pre><code>x = df['212'].subtract(mean_df['212'], fill_value = 0)
print(x)
ID doy WaitID Year ... 212 212_x
386 1895 193 14507 2001 ... 0.407672 NaN
389 1899 192 14511 2001 ... 0.000000 NaN
390 1900 204 14512 2001 ... 0.000000 NaN
391 1902 145 14514 2001 ... 2.251606 NaN
395 1877 204 14491 2001 ... 1.727977 NaN
... ... ... ... ... ... ...
20279 369 189 32767 2001 ... 1.727977 NaN
20281 371 174 32767 2001 ... 2.038362 NaN
20292 356 170 32767 2001 ... 0.407672 NaN
20295 359 174 32767 2001 ... 0.815345 NaN
20296 360 201 32767 2001 ... 2.038362 NaN
</code></pre>
<p>How can I subtract the one-row pandas series from the multi-row pandas series?</p>
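<p>For reference (not part of the question): the NaN comes from index alignment. <code>mean_df['212']</code> is a Series indexed by row label 0, so <code>subtract</code> aligns labels rather than broadcasting. Extracting the scalar first avoids it; the tiny frames below are stand-ins for the real data:</p>

```python
import pandas as pd

df = pd.DataFrame({"212": [0.407672, 0.0, 2.251606]}, index=[386, 389, 391])
mean_df = pd.DataFrame({"212": [35.939027]})  # one row, index label 0

# mean_df["212"] is a Series indexed [0]; none of df's labels (386, ...)
# match, so Series.subtract would align and yield NaN. Use the scalar:
df["212_x"] = df["212"] - mean_df["212"].iloc[0]
```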
|
<python><pandas><dataframe><subtraction>
|
2023-03-06 13:42:44
| 2
| 501
|
Thomas
|
75,651,425
| 18,579,739
|
why catch a TimeExpired exception in python subprocess lose stdout?
|
<p>app.py</p>
<pre><code>import subprocess
import time

if __name__ == '__main__':
    outs = ''
    p = subprocess.Popen(['python3', 'app2.py'], stdout=subprocess.PIPE,
                         text=True)
    try:
        p.communicate(timeout=3)
    except subprocess.TimeoutExpired:
        p.kill()
        outs, _ = p.communicate()
    print(outs)
</code></pre>
<p>app2.py</p>
<pre><code>import subprocess
import time

if __name__ == '__main__':
    while True:
        print("counter")
        time.sleep(1)
</code></pre>
<p>according to doc(<a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow noreferrer">https://docs.python.org/3/library/subprocess.html</a>)</p>
<blockquote>
<p>If the process does not terminate after timeout seconds, a TimeoutExpired exception will be raised. Catching this exception and retrying communication will not lose any output.</p>
</blockquote>
<p>However, running <code>python3 app.py</code> actually produces nothing. Why?</p>
<hr />
<p>update:</p>
<p>In VS Code debug mode, the behavior is as expected and the output is normal!!
Really weird..</p>
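<p>A likely explanation (an assumption, not in the quoted docs): when stdout is a pipe, the child's <code>print</code> output is block-buffered and dies in the child's buffer when it is killed; the doc's "will not lose any output" only covers data that already reached the pipe. Forcing a flush in the child makes the captured output appear. A self-contained sketch:</p>

```python
import subprocess
import sys
import textwrap

child = textwrap.dedent("""
    import time
    while True:
        print("counter", flush=True)  # flush pushes each line into the pipe
        time.sleep(0.2)
""")

p = subprocess.Popen([sys.executable, "-c", child],
                     stdout=subprocess.PIPE, text=True)
try:
    p.communicate(timeout=1)
except subprocess.TimeoutExpired:
    p.kill()
    outs, _ = p.communicate()  # retrieves whatever reached the pipe
```

<p>Running the child with <code>python3 -u</code> would have the same effect as <code>flush=True</code>, and also explains the debugger difference: debuggers often force unbuffered output.</p>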
|
<python><subprocess><popen>
|
2023-03-06 13:25:49
| 0
| 396
|
shan
|
75,651,353
| 15,904,492
|
Pyomo: Iterative model to reach a known value for a given variable
|
<p>I am new to Pyomo and I am trying to build a simple model.
It takes only one variable with known initial and final values. The solver needs to modify the value of the variable to reach its final value while minimizing the change between each timestep.</p>
<p>However when I run my optimization the value of my variable doesn't change over time.</p>
<p>Here is my model:</p>
<pre><code>from pyomo.environ import (ConcreteModel, Constraint, NonNegativeIntegers,
                           Objective, Set, SolverFactory, Var, minimize)

# Define the known initial and final values for the variable x
x_init = 10
x_final = 50

# Define the number of time steps
T = 10

# Create the model
model = ConcreteModel()

# Define the set of time steps
model.T = Set(initialize=range(1, T+1))

# Define the variable x
model.x = Var(model.T, within=NonNegativeIntegers, initialize=x_init)

# Define the per-step change variable
model.delx = Var(model.T)

# Define the objective function
model.obj = Objective(expr=sum(model.delx[t] for t in model.T), sense=minimize)

# Define the constraints
for t in model.T:
    if t == T:
        # Define the constraint for reaching the known value
        model.add_component('con_end', Constraint(expr=model.x[T] == x_final))
    else:
        # Define the constraint that links to the next time step
        model.add_component('con_link_{}'.format(t), Constraint(expr=model.x[t+1] - model.x[t] == model.delx[t]))

# Solve the model
solver = SolverFactory('glpk')
solver.solve(model)

# Print the optimal value of x at each time step
for t in model.T:
    print(f"x[{t}] = {model.x[t].value:.2f}")
</code></pre>
|
<python><optimization><pyomo>
|
2023-03-06 13:18:50
| 0
| 729
|
zanga
|
75,651,326
| 8,182,504
|
Python function minimization not changing optimization variables
|
<p>I need to minimize a simple function that divides two values. The optimization parameter <code>x</code> is an <code>(n,m)</code> numpy array from which I calculate a float.</p>
<pre><code># An initial value
normX0 = calculate_normX(x_start)
def objective(x) -> float:
"""Objective function """
x = x.reshape((n,m))
normX = calculate_normX(x)
return -(float(normX) / float(normX0))
</code></pre>
<p><code>def calculate_normX()</code> is a wrapper function to an external (Java-)API that takes the <em>ndarray</em> as an input and outputs a <em>float</em>, in this case, the norm of a vector. For the optimization, I was using <code>jax</code> and <code>jaxopt</code>, since it supports automatic differentiation of <code>objective</code>.</p>
<pre class="lang-py prettyprint-override"><code>solver = NonlinearCG(fun=objective, maxiter=5, verbose=True)
res = solver.run(x.flatten())
</code></pre>
<p>or the regular scipy minimize</p>
<pre class="lang-py prettyprint-override"><code>objective_jac = jax.jacrev(objective)
minimize(objective, jac=objective_jac, x0=x.flatten(), method='L-BFGS-B', options={'maxiter': 2})
</code></pre>
<p>In both cases, however, <code>x</code> is not changed during the optimization step. Even initializing x with random values the optimizer does not seem to work. I also tried other solvers like <a href="https://jaxopt.github.io/stable/_autosummary/jaxopt.NonlinearCG.html#jaxopt.NonlinearCG" rel="nofollow noreferrer">Jaxopt NonlinearCG</a>. What am I doing wrong?</p>
|
<python><optimization><scipy-optimize-minimize><jax>
|
2023-03-06 13:16:33
| 1
| 1,324
|
agentsmith
|
75,651,325
| 6,357,916
|
Considering numpy matrix row independent of other rows while performing softmax computation
|
<p>This is a rather noob question. I am trying to code a neural network from scratch.
This is how I have written the softmax function:</p>
<pre><code>def softmax(x):
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
</code></pre>
<p>Then I created a dummy weights matrix:</p>
<pre><code>w0 = np.random.uniform(-0.8, 0.8, (6, 4))
print(w0.shape)
print(w0)
</code></pre>
<p>This prints:</p>
<pre><code>(6, 4)
array([[-0.47349099, -0.56027454, 0.78373698, -0.23283302],
[-0.63164942, -0.23417482, 0.00111565, 0.22848594],
[-0.41288949, 0.05927629, -0.59752415, 0.45548192],
[-0.35111661, 0.13681976, -0.73963359, 0.53842663],
[-0.58055457, -0.03494196, 0.59678369, -0.40245336],
[ 0.57615495, -0.03258459, -0.25033765, 0.20835347]])
</code></pre>
<p>These are weights of 4 classes (output labels) across 6 training examples. I want softmax probabilities to be calculated for each label for all training examples. So, I tried something like this:</p>
<pre><code>sft = softmax(w0) # calculating softmax for all 6 training examples
print(sft.shape)
print(sft)
softmax(w0[0]) # calculating softmax for only 1st training example
</code></pre>
<p>This prints:</p>
<pre><code>(6, 4)
[[0.02550981 0.02338932 0.08968387 0.03245067] # <-- 1
[0.02177809 0.03240715 0.041004 0.0514721 ]
[0.02710354 0.04345954 0.0225341 0.06458847]
[0.0288306 0.04696364 0.01954893 0.07017419]
[0.02291976 0.03955183 0.0743912 0.02738788]
[0.07287233 0.03964518 0.03188757 0.0504462 ]]
array([0.1491508 , 0.13675272, 0.52436386, 0.18973262]) # <-- 2
</code></pre>
<p>I felt that the first row in the <code>sft</code> matrix should be the same as the output of <code>softmax(w0[0])</code>. That is, the line suffixed <code><-- 1</code> should be the same as <code><-- 2</code>, as both correspond to the same training example. But it seems that <code>softmax(w0)</code> is calculating probabilities across the whole matrix, treating it as a single training example rather than interpreting each row as a separate training example.</p>
<p>How do I do the softmax computation interpreting each row independently of the others? What am I missing here?</p>
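A hedged sketch of the likely fix (assuming NumPy): perform both the max subtraction and the normalization along the last axis with <code>keepdims=True</code>, so each row is treated independently:

```python
import numpy as np

def softmax(x):
    # Subtract each row's max for numerical stability, then
    # normalize along the last axis so every row sums to 1.
    e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e_x / e_x.sum(axis=-1, keepdims=True)

w0 = np.random.uniform(-0.8, 0.8, (6, 4))
sft = softmax(w0)      # row-wise softmax for all 6 training examples
row0 = softmax(w0[0])  # softmax for only the 1st training example
# sft[0] and row0 now agree, and every row of sft sums to 1
```

With <code>axis=-1</code> the same function also works unchanged on a single 1-D example.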
|
<python><numpy><multidimensional-array><numpy-ndarray><array-broadcasting>
|
2023-03-06 13:16:29
| 1
| 3,029
|
MsA
|
75,651,233
| 5,134,285
|
Managing AppRoles using Azure SDK for python, or Microsoft Graph API
|
<p>I am lost in my searches, and cannot find any specific documentation regarding creating/deleting/... App Roles.</p>
<p>My aim is to create an App Role, preferably using Python. Here is my code:</p>
<pre><code>import requests
from msal import ConfidentialClientApplication
# Define the Azure AD credentials and API endpoints
tenant_id = "<your tenant ID>"
client_id = "<your client ID>"
client_secret = "<your client secret>"
authority_url = f'https://login.microsoftonline.com/{tenant_id}'
scope = ['https://graph.microsoft.com/.default']
# api_version = 'beta'
api_version = 'v1.0'
# Define the app role properties
app_role_name = 'APIAppRole'
app_role_description = 'app role for API user'
# Authenticate and get an access token using the MSAL library
app = ConfidentialClientApplication(
client_id=client_id,
client_credential=client_secret,
authority=authority_url
)
token = app.acquire_token_for_client(scopes=scope)
# Create the app role using the Microsoft Graph API
url = f'https://graph.microsoft.com/{api_version}/applications/{client_id}'
headers = {
'Authorization': f'Bearer {token["access_token"]}',
'Content-Type': 'application/json'
}
body = {
'allowedMemberTypes': [
'User',
'Group'
],
'displayName': app_role_name,
'description': app_role_description,
'id': 'lkjasldkjpq9u934l',
'isEnabled': True,
'value': app_role_name
}
response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
</code></pre>
<p>this raises this error:</p>
<pre><code> raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 405 Client Error: Method Not Allowed for url: https://graph.microsoft.com/v1.0/applications/656
</code></pre>
<p>What is the correct way to create App Roles programmatically?</p>
|
<python><azure><azure-active-directory><azure-sdk><azure-sdk-python>
|
2023-03-06 13:06:16
| 1
| 1,404
|
GeoCom
|
75,651,183
| 15,452,168
|
error in Huber regressor sklearn Error: The 'max_iter' parameter of HuberRegressor must be an int
|
<p>I am running a forecasting algorithm and suddenly today my algorithm is throwing an error</p>
<pre><code>sklearn.utils._param_validation.InvalidParameterError: The 'max_iter' parameter of HuberRegressor must be an int in the range [0, inf). Got 279170.46874829195 instead.
</code></pre>
<p>I have no idea what went wrong, as the script had been working continuously for 2 weeks.</p>
<p>my code block is below</p>
<pre><code># Split the data into training and test sets
X_train = X[:-17]
X_test = X[-17:-1]
y_train = y[:-17]
y_test = y[-17:-1]
# Define hyperparameter distributions for random search
param_dist = {'epsilon': truncnorm(1.10, 2.10, loc=1.5, scale=0.2)
, 'max_iter': uniform(loc=100000, scale=400000)
, 'alpha': uniform(loc=0.0001, scale=0.01)
, 'tol': [0.01, 0.00001,0.001,0.0001]
}
# Initialize a list to store the best models
best_models = []
# Run random search on the Huber Regressor to find the best model
huber = HuberRegressor()
rand = RandomizedSearchCV(huber, param_distributions=param_dist, n_iter=100, cv=3, scoring='r2', random_state=0)
rand.fit(X_train, y_train)
# Store the best model found in the list of best models
best_model = rand.best_estimator_
best_models.append(best_model)
# Make predictions on the test data
y_pred = best_model.predict(X_test)
# Calculate the evaluation metrics
mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
mape = mean_absolute_percentage_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
</code></pre>
<p>Please share some insights. Thank you.</p>
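The traceback points at the sampled <code>max_iter</code>: <code>uniform</code> draws floats, and newer scikit-learn versions validate that <code>max_iter</code> is an int. A hedged sketch of the fix, swapping in an integer distribution (assuming <code>scipy.stats.randint</code> is acceptable here; the bounds mirror the original <code>uniform(loc=100000, scale=400000)</code>):

```python
from scipy.stats import randint, truncnorm, uniform

# randint samples integers in [low, high), which satisfies
# sklearn's "max_iter must be an int" parameter validation
param_dist = {
    'epsilon': truncnorm(1.10, 2.10, loc=1.5, scale=0.2),
    'max_iter': randint(100_000, 500_000),  # was uniform(...) -> floats
    'alpha': uniform(loc=0.0001, scale=0.01),
    'tol': [0.01, 0.00001, 0.001, 0.0001],
}
```

The rest of the <code>RandomizedSearchCV</code> call can stay as it is; only the distribution for <code>max_iter</code> changes.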
|
<python><scikit-learn><regression><linear-regression><forecasting>
|
2023-03-06 13:00:51
| 1
| 570
|
sdave
|
75,651,079
| 1,014,747
|
Is it wise to have if logic inside a python catch block?
|
<p>Consider the following method that checks if a parameter contains a valid UUID string and generates a new one if it doesn't and if the option is set:</p>
<pre><code>def validate_uuid(val, generate):
try:
uuid.UUID(str(val))
return val
except ValueError:
if generate:
return str(uuid.uuid4())
raise
</code></pre>
<p>I feel that for this particular use-case, having a small simple condition inside the catch block makes sense, however, I would like to understand if there's possibly a better way that adheres to pythonic principles.</p>
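For comparison, an equivalent structure that keeps the <code>except</code> block to a single decision and moves the success path into <code>else</code> (a sketch, not necessarily better; the behaviour is the same):

```python
import uuid

def validate_uuid(val, generate):
    try:
        uuid.UUID(str(val))
    except ValueError:
        # Fall back to a fresh UUID only when asked to; otherwise
        # let the original ValueError propagate.
        if not generate:
            raise
        return str(uuid.uuid4())
    else:
        return val
```

Whether this reads better than the if-inside-except version is largely a matter of taste; both keep the exceptional path local to the handler.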
|
<python><try-catch><catch-block>
|
2023-03-06 12:51:04
| 1
| 603
|
b0neval
|
75,650,973
| 10,450,923
|
msgraph client to communicate with OneDrive
|
<p>How to properly use msgraph-sdk and does it even work as expected?</p>
<pre><code>import asyncio
from azure.identity.aio import EnvironmentCredential
from kiota_authentication_azure.azure_identity_authentication_provider import AzureIdentityAuthenticationProvider
from msgraph import GraphRequestAdapter
from msgraph import GraphServiceClient
credential = EnvironmentCredential()
scopes = ['User.Read']
auth_provider = AzureIdentityAuthenticationProvider(credential, scopes=scopes) #type:ignore
request_adapter = GraphRequestAdapter(auth_provider)
client = GraphServiceClient(request_adapter)
user = asyncio.run(client.me.get())
print(user)
</code></pre>
<p>This simple code from example makes a request but with exception <code>Content: {"error":"invalid_scope","error_description":"AADSTS1002012: The provided value for scope User.Read is not valid. Client credential flows must have a scope value with /.default suffixed to the resource identifier (application ID URI)</code></p>
<p>If I change scopes to <code>/.default</code> than it throws an error</p>
<pre><code>Content: {"error":"invalid_scope","error_description":"AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope /.default is not valid
</code></pre>
<p>App is registered, client secret is valid for a year.</p>
|
<python><microsoft-graph-api>
|
2023-03-06 12:40:46
| 0
| 371
|
Rostislav Aleev
|
75,650,953
| 13,854,431
|
group_by id and timestamp (timestamp threshold 45 minutes) in polars
|
<p>I have a polars dataframe with a 'col1' column and a 'col2' column.
Now I want to group by the two columns and create a new column. I have the following example data:</p>
<pre><code>data = {
"col1": [1, 1, 1,1,1,1,1,1,1,1,1,1, 2, 2,2,2,2,2,2,2,2,2,2],
"col2": [
"2022-05-25T08:00:00.648681",
"2022-05-25T08:15:00.648681",
"2022-05-25T08:30:00.648681",
"2022-05-25T08:45:00.648681",
"2022-05-25T09:00:00.648681",
"2022-05-25T09:15:00.648681",
"2022-05-25T09:30:00.648681",
"2022-05-25T09:45:00.648681",
"2022-05-25T10:00:00.648681",
"2022-05-25T10:15:00.648681",
"2022-05-25T10:30:00.648681",
"2022-05-25T10:45:00.648681",
"2022-05-25T08:00:00.648681",
"2022-05-25T08:15:00.648681",
"2022-05-25T08:30:00.648681",
"2022-05-25T08:45:00.648681",
"2022-05-25T09:00:00.648681",
"2022-05-25T06:00:00.648681",
"2022-05-25T06:15:00.648681",
"2022-05-25T06:30:00.648681",
"2022-05-25T06:45:00.648681",
"2022-05-25T07:00:00.648681",
"2022-05-25T07:15:00.648681",
],
}
# Create a DataFrame from the dictionary
df = pl.DataFrame(data)
df = df.with_columns(pl.col("col2").str.to_datetime())
</code></pre>
<p>Now I want to create column 'col3' where 'col1' and 'col2' are grouped with a threshold of 45 minutes. For example, if col1 = 1 and col2 falls within a 45-minute period, set col3 to 1; if col1 = 1 and col2 falls within the next 45-minute period, set col3 to 2.</p>
<p>So the desired outcome should be as follows:</p>
<pre><code>┌──────┬────────────────────────────┬──────┐
│ col1 ┆ col2 ┆ col3 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ datetime[μs] ┆ u32 │
╞══════╪════════════════════════════╪══════╡
│ 1 ┆ 2022-05-25 08:00:00.648681 ┆ 1 │
│ 1 ┆ 2022-05-25 08:15:00.648681 ┆ 1 │
│ 1 ┆ 2022-05-25 08:30:00.648681 ┆ 1 │
│ 1 ┆ 2022-05-25 08:45:00.648681 ┆ 1 │
│ 1 ┆ 2022-05-25 09:00:00.648681 ┆ 2 │
│ 1 ┆ 2022-05-25 09:15:00.648681 ┆ 2 │
│ 1 ┆ 2022-05-25 09:30:00.648681 ┆ 2 │
│ 1 ┆ 2022-05-25 09:45:00.648681 ┆ 2 │
│ 1 ┆ 2022-05-25 10:00:00.648681 ┆ 3 │
│ 1 ┆ 2022-05-25 10:15:00.648681 ┆ 3 │
│ 1 ┆ 2022-05-25 10:30:00.648681 ┆ 3 │
│ 1 ┆ 2022-05-25 10:45:00.648681 ┆ 3 │
│ 2 ┆ 2022-05-25 08:00:00.648681 ┆ 3 │
│ 2 ┆ 2022-05-25 08:15:00.648681 ┆ 3 │
│ 2 ┆ 2022-05-25 08:30:00.648681 ┆ 3 │
│ 2 ┆ 2022-05-25 08:45:00.648681 ┆ 3 │
│ 2 ┆ 2022-05-25 09:00:00.648681 ┆ 4 │
│ 2 ┆ 2022-05-25 06:00:00.648681 ┆ 1 │
│ 2 ┆ 2022-05-25 06:15:00.648681 ┆ 1 │
│ 2 ┆ 2022-05-25 06:30:00.648681 ┆ 1 │
│ 2 ┆ 2022-05-25 06:45:00.648681 ┆ 1 │
│ 2 ┆ 2022-05-25 07:00:00.648681 ┆ 2 │
│ 2 ┆ 2022-05-25 07:15:00.648681 ┆ 2 │
└──────┴────────────────────────────┴──────┘
</code></pre>
<p>How would you do this in polars?</p>
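One reading that reproduces the desired output: within each <code>col1</code> group, walk the timestamps in chronological order and start a new bucket whenever a timestamp is more than 45 minutes after the current bucket's first timestamp. A plain-Python sketch of that rule (the same logic could then be expressed in polars, e.g. as a cumulative sum over a "new bucket" flag):

```python
from datetime import datetime, timedelta

def assign_buckets(rows, threshold=timedelta(minutes=45)):
    """rows: list of (group_id, timestamp) pairs. Returns a dict mapping
    (group_id, timestamp) -> bucket number, numbered per group in
    chronological order. A new bucket starts when a timestamp is more
    than `threshold` after the current bucket's first timestamp."""
    by_group = {}
    for gid, ts in rows:
        by_group.setdefault(gid, []).append(ts)
    labels = {}
    for gid, stamps in by_group.items():
        bucket, start = 0, None
        for ts in sorted(stamps):
            if start is None or ts - start > threshold:
                bucket += 1
                start = ts
            labels[(gid, ts)] = bucket
    return labels
```

On the sample data this yields buckets 1, 2, 3 for the 08:00-10:45 run of col1 = 1, matching the table above.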
|
<python><group-by><python-polars>
|
2023-03-06 12:38:54
| 1
| 457
|
Herwini
|
75,650,950
| 15,042,008
|
Generate equation for one variable in terms of others
|
<p>I have a particular data set where there are 3 different functions with 3 different variables:</p>
<pre><code>x = 1
strLen_x = int(3**x + 0.25*(-3*(-1)**x + 3**x + 6)) # Function 1
y = 2
strLen_y = 2*y # Function 2
z = 3
strLen_z = 3*(z + 1 + (z//4)) # Function 3
# Expecting Function 4 in terms of 1, 2, and 3
</code></pre>
<p>The final output, i.e. the <code>strLen_...</code> variable, is a variable that depends on the 3 different variables <code>x, y, and z</code>.
I'm trying to come up with a common equation where <code>strLen_xyz</code> is expressed in terms of all 3 variables (<code>x, y, z</code>).</p>
<p>Here's a table of the output with different values of <code>x, y, z</code>.
Note: <code>y > 1</code> and all the variables are positive integers.<br>
<a href="https://i.sstatic.net/Oi6xy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Oi6xy.png" alt="![enter image description here" /></a></p>
<p>Here, <code>Output</code> is the final value to be obtained.</p>
|
<python><python-3.x><function><math><sequence>
|
2023-03-06 12:38:39
| 1
| 1,219
|
The Myth
|
75,650,925
| 12,976,010
|
How is that pandas is faster than pure C in the groupby operation?
|
<p>I have an nparray of <em>x,y</em> pairs with shape <code>(n,2)</code>, and knowing for certain that for each <em>x</em> there are multiple values of <em>y</em> , I wanted to calculate the average value of <em>y</em> for each unique <em>x</em>. It occurred to me that this needed a <code>groupby</code> operation followed by a <code>mean</code> which was available in <strong>pandas</strong> library. However, thinking that <em>pandas</em> was slow because of the scale of my data (over a million points) I wrote a simple program in <em>C</em> and used <em>ctypes</em> to call the C function and perform the operation. I used <code>-fPIC</code> and compiled a <code>shared object</code> file with <code>GCC MinGW</code>.</p>
<pre class="lang-c prettyprint-override"><code>int average( int* array , int size_array , int* unique , int size_unique , float* avg ){
if (size_array % 2 != 0){
return 1;
}
for (int i = 0 ; i < size_unique ; i++){
int curX = unique[i];
int sum = 0;
int count = 0;
for (int j = 0 ; j < size_array ; j += 2){
if ( array[j] == curX ){
sum += (array[j+1]);
count += 1;
}
}
float average = ((float)sum / (float)count);
avg[i] = average;
}
return 0;
}
</code></pre>
<p>Later, because the program was still slow (it took about 1.5 seconds), I gave <em>pandas</em> a shot and was stunned at how much faster it was. It was almost twice as fast as the program I wrote in C. But it didn't make any sense to me. How did they achieve this level of performance? Is pandas using a hashtable?</p>
<pre class="lang-py prettyprint-override"><code>ar = np.random.randint(0,2000,size = (40000,2))
df = pd.DataFrame({'x': ar[:,0], 'y': ar[:,1]})
df = df.groupby('y', as_index=False)['x'].mean()
x = df[['x']].to_numpy()
y = df[['y']].to_numpy()
</code></pre>
<p>I calculated and found out that for an array of size <code>(40000,2)</code> and <code>2000</code> unique elements, I had about <code>80,000,000</code> operations, which were done in less than <code>0.2s</code>. So each operation takes about <em>2.5 nanoseconds</em>, which is close to my processor's limit (I have a 3.5GHz quad-core CPU - Intel i7 4720HQ). So I'm pushing the CPU with the C code. How is it that pandas pushes it even further?</p>
<p>As I mentioned above, I compiled the C code with GCC MinGW and the following command:<br />
<code>gcc -fPIC -shared c_out.so c_in.c</code>.</p>
<p>I used pythons time library to estimate the runtime of the code. I initialized two timestamps one before (<code>t1</code>) and one after the code (<code>t2</code>). Afterward, I print the time difference between t1 and t2 as the runtime of the code.<br />
Samples of the benchmark for 40000 data ranging from 0 to 2448 are as follows:</p>
<pre><code>Run #1
Pandas: 0.0623
C: 0.2250
Run #2
Pandas: 0.0660
C: 0.1880
Run #3
Pandas: 0.609
C: 0.2261
Run #4
Pandas: 0.0629
C: 0.2488
Run #5
Pandas: 0.0619
C: 0.2159
</code></pre>
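Part of the answer is algorithmic rather than compiler-level: the C loop above is O(n_rows × n_unique) because it rescans the whole array for each unique value, while a hash-based groupby (the strategy pandas uses internally) touches each row once, O(n_rows). A single-pass sketch of that idea in plain Python:

```python
import numpy as np

def groupby_mean(pairs):
    """pairs: (n, 2) int array of (x, y) rows. Returns {x: mean_of_y}.
    One pass over the data: each key is hashed once instead of
    rescanning the full array per unique value."""
    sums, counts = {}, {}
    for x, y in pairs:
        x = int(x)
        sums[x] = sums.get(x, 0) + int(y)
        counts[x] = counts.get(x, 0) + 1
    return {x: sums[x] / counts[x] for x in sums}

ar = np.random.randint(0, 2000, size=(40000, 2))
means = groupby_mean(ar)
```

With 40,000 rows and 2,000 unique keys, the quadratic loop does ~80M comparisons while the hash version does ~40k dictionary updates, which is the asymptotic gap being observed.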
|
<python><c><pandas><performance><ctypes>
|
2023-03-06 12:35:33
| 1
| 838
|
ARK1375
|
75,650,879
| 4,373,805
|
Pivot a column to multiple columns based on regex pattern
|
<p>There are 33 columns in my dataframe -</p>
<p><code>Description_1_1, Description_2_1, Description_3_1,Description_1_2, Description_2_2, Description_3_2</code> etc.</p>
<p>I need to make 11 columns based on a pattern:
the first word should be Description and it should end with a number</p>
<pre><code>Description_1
Description_2
Description_3
Description_4
Description_5
Description_6
Description_7
Description_8
Description_91
Description_92
Description_93
</code></pre>
<p>First I am thinking to unpivot these 33 columns in a single column Description using</p>
<p><code>pandas.wide_to_long(df, stubnames=['Description'], i=['uuid'], j='dropme', suffix='_[^_]+_[^_]+$')).reset_index()</code></p>
<p>But I am not able to work out the logic to create 11 columns out of this single column <code>Description</code> based on the integer after the last underscore.</p>
<p>Expected output</p>
<pre><code>Description_1 , Description_2
Description_1_1, Description_1_2
Description_2_1, Description_2_2
Description_3_1, Description_3_2
</code></pre>
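Going by the expected output, each target column <code>Description_&lt;k&gt;</code> collects the source columns <code>Description_&lt;i&gt;_&lt;k&gt;</code> that share the integer k after the last underscore. A standard-library sketch of that grouping step (the column names here are illustrative):

```python
import re
from collections import defaultdict

def group_by_last_suffix(columns):
    """Map each target column 'Description_<k>' to the source columns
    'Description_<i>_<k>' sharing the integer after the last underscore."""
    groups = defaultdict(list)
    for name in columns:
        m = re.fullmatch(r"(Description)_(\d+)_(\d+)", name)
        if m:
            groups[f"{m.group(1)}_{m.group(3)}"].append(name)
    return dict(groups)

cols = ["Description_1_1", "Description_2_1", "Description_3_1",
        "Description_1_2", "Description_2_2", "Description_3_2"]
grouped = group_by_last_suffix(cols)
# -> {'Description_1': [...3 names...], 'Description_2': [...3 names...]}
```

The resulting mapping can then drive a melt/pivot in pandas, or serve as the <code>stubnames</code> input once the suffix is known.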
|
<python><pandas>
|
2023-03-06 12:31:04
| 1
| 468
|
Ezio
|
75,650,841
| 15,800,270
|
How to train or fine-tune GPT-2 / GPT-J model for generative question answering?
|
<p>I am new at using Huggingface models. Though I have some basic understanding of its Model, Tokenizers and Training.</p>
<p>I am looking for a way to leverage generative models like GPT-2 and GPT-J from the Huggingface community and tune them for <strong>closed generative question answering</strong> - where we first train the model with "specific domain data", <em>such as medical data</em>, and then ask questions related to that domain.</p>
<p>If possible, will you please walk me through the process?
Thank you so much 🤗</p>
|
<python><machine-learning><nlp><huggingface-transformers><gpt-2>
|
2023-03-06 12:27:08
| 1
| 610
|
Aayush Shah
|
75,650,572
| 5,342,009
|
Stripe Subscription adding same card twice
|
<p>I have implemented Stripe Subscription with the following steps :</p>
<ol>
<li><p>Add card via Stripe Elements using stripe.PaymentMethod.attach :</p>
<pre><code> stripe_customer = stripe.Customer.retrieve(str(customer.stripe_customer_id))
payment_methods_list = stripe.PaymentMethod.list(customer=stripe_customer.id, type="card")
card = request.data['card']
stripe_paymentmethod_id = card['id']
stripe.PaymentMethod.attach( stripe_paymentmethod_id, customer=stripe_customer.id)
stripe.Customer.modify(
stripe_customer.id,
invoice_settings={
"custom_fields": None,
"default_payment_method": stripe_paymentmethod_id,
"footer": None,
"rendering_options": None
},
)
</code></pre>
</li>
<li><p>Then create a subscription, and generate a client secret :</p>
<pre><code> stripe.confirmCardPayment(resultState.data.clientsecret, {
payment_method: {
card: info.card_element,
billing_details: {
name: "Name Surname",
},
}
</code></pre>
</li>
<li><p>Then finally confirm payment card in FrontEnd :</p>
<pre><code> stripe_customer = stripe.Customer.retrieve(customer.stripe_customer_id)
stripe_subscription_list = stripe.Subscription.list(customer=customer.stripe_customer_id)
stripe_subscription_list_len = len(stripe_subscription_list.data)
stripe_subscription = None
if stripe_subscription_list_len != 0:
stripe_subscription = stripe.Subscription.retrieve(stripe_subscription_list.data[0].id)
if ((stripe_subscription_list_len == 0) or (stripe_subscription_list_len == 1)) and \
(selected_product.plan == 0 and customer.product.plan != selected_product.plan) or \
(selected_product.plan > 0 and stripe_customer.invoice_settings.default_payment_method != None and customer.product.plan != selected_product.plan):
logging.info(f"SubscriptionCreate post Subscription Modify.")
logging.info(f"stripe_customer.invoice_settings.default_payment_method : {stripe_customer.invoice_settings.default_payment_method}")
logging.info(f"SubscriptionCreate customer.product.plan : {customer.product.plan}")
logging.info(f"SubscriptionCreate selected_product.plan : {selected_product.plan}")
# This will be removed in deployment
logging.info(f"SubscriptionCreate Deleting Free Plan : {selected_product.plan}")
if stripe_subscription:
stripe.Subscription.delete(stripe_subscription.id)
logging.info(f"SubscriptionCreate Free Plan Deleted : {selected_product.plan}")
new_subscription = stripe.Subscription.create(
customer=customer.stripe_customer_id,
items=[{"price": stripe_product.stripe_plan_id},],
payment_behavior='default_incomplete',
payment_settings={'save_default_payment_method': 'on_subscription'},
expand=['latest_invoice.payment_intent'],
)
if new_subscription:
customer.product = stripe_product
customer.active = False
if customer.product.plan == 0:
customer.active = True
logging.info(f"stripe_subscription_list new_subscription : {new_subscription}")
customer.clientsecret = new_subscription.latest_invoice.payment_intent.client_secret
customer.stripe_subscription_id = new_subscription.id
customer.save()
#logging.info(f"clientSecret : {new_subscription['latest_invoice.payment_intent.client_secret}")
logging.info(f"stripe_subscription_list new_subscription clientsecret : {customer.clientsecret}")
</code></pre>
</li>
</ol>
<p>However, in this solution, the card is being added twice as I can see in Stripe Dashboard.</p>
<p>The first time obviously, the card is added in here :</p>
<pre><code>stripe.PaymentMethod.attach( stripe_paymentmethod_id, customer=stripe_customer.id)
</code></pre>
<p>But if I remove "stripe.PaymentMethod.attach" then subscription create does not generate client secret for the frontend.</p>
<p>How can I avoid the card being added twice under any circumstances ?</p>
|
<python><django><stripe-payments>
|
2023-03-06 11:56:14
| 0
| 1,312
|
london_utku
|
75,650,569
| 5,270,403
|
Unittesting DRF Serializer validators one by one
|
<p>We have an example Serializer class we'd like to test:</p>
<pre><code>from rest_framework import serializers
class MySerializer(serializers.Serializer):
fieldA = serializers.CharField()
fieldB = serializers.CharField()
def validate_fieldA(self,AVal):
if len(AVal) < 3:
raise serializers.ValidationError("AField must be at least 3 characters long")
return AVal
def validate_fieldB(self,BVal):
        if len(BVal) < 3:
raise serializers.ValidationError("BField must be at least 3 characters long")
return BVal
</code></pre>
<p>This is a greatly simplified scenario, but it should do.</p>
<p>We want to write unittests for this Serializer class. My friend argues we should test using the .is_valid() method, like so</p>
<pre><code>class TestMySerializer(unittest.TestCase):
def test_validate_fieldA_long_enough(self):
        ser = MySerializer(data={"fieldA":"I'm long enough","fieldB":"Whatever"})
self.assertTrue(ser.is_valid())
self.assertEqual(ser.validated_data["fieldA"],"I'm long enough")
def test_validate_fieldA_too_short(self):
ser = MySerializer(data={"fieldA":"x","fieldB":"Whatever"})
self.assertFalse(ser.is_valid())
#similarly tests for fieldB
...
</code></pre>
<p>I argue that unittests are supposed to be atomic and test only one "thing" at a time. By using the .is_valid() method we run <em>every</em> validator in the class instead of just the one we want to test. This introduces an unwanted dependency, where tests for fieldA validators may fail if there's something wrong with fieldB validators. So instead I would write my tests like so:</p>
<pre><code>class TestMySerializer(unittest.TestCase):
def setUp(self):
self.ser = MySerializer()
def test_fieldA_validator_long_enough(self):
self.assertEqual(self.ser.validate_fieldA("I'm long enough"),"I'm long enough")
def test_fieldA_validator_too_short(self):
with self.assertRaises(serializers.ValidationError) as catcher:
self.ser.validate_fieldA('x')
self.assertEqual(str(catcher.exception),"AField must be at least 3 characters long")
#same for fieldB validators
...
</code></pre>
<p>Which approach is better and why?</p>
|
<python><django><rest>
|
2023-03-06 11:55:51
| 1
| 1,031
|
AlanKalane
|
75,650,507
| 16,389,095
|
Kivy MD: How to get the selected item from an uix.list / OneLineAvatarIconListItem
|
<p>I'm quite new to Kivy. I developed a simple UI in Python/KivyMD. It displays a list of items: each of them has a checkbox that allows the user to select that item. The object that displays this list inherits from <em>kivymd.uix.list</em>. I would like to get the selected item/items of this list. Here is the code:</p>
<pre><code>from kivy.lang import Builder
from kivymd.app import MDApp
from kivymd.uix.floatlayout import MDFloatLayout
from kivymd.uix.list import IRightBodyTouch, OneLineAvatarIconListItem
from kivymd.uix.selectioncontrol import MDCheckbox
from kivymd.toast import toast
import numpy as np
Builder.load_string(
"""
<ListItemWithCheckbox>:
RightCheckbox:
<View>:
MDScrollView:
MDList:
id: scroll
"""
)
class ListItemWithCheckbox(OneLineAvatarIconListItem):
pass
class RightCheckbox(IRightBodyTouch, MDCheckbox):
pass
class View(MDFloatLayout):
detectedDevices = list()
for i in range(20):
device = dict(name='DEVICE ' + str(i), index=i+1, battery_level=np.random.randint(low=0, high=101))
detectedDevices.append(device)
def __init__(self, **kwargs):
super().__init__(**kwargs)
for i in range(len(self.detectedDevices)):
self.ids.scroll.add_widget(ListItemWithCheckbox(text=self.detectedDevices[i]['name']))
class MainApp(MDApp):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.view = View()
def build(self):
self.title = ''
return self.view
if __name__ == '__main__':
MainApp().run()
</code></pre>
|
<python><user-interface><kivy><kivy-language><kivymd>
|
2023-03-06 11:49:11
| 1
| 421
|
eljamba
|
75,649,960
| 4,432,671
|
How do I compare arbitrarily nested numpy ndarrays?
|
<p>Leaving aside if this is a good idea or not, how can I compare two nested arrays in numpy?</p>
<pre><code>>>> a
array([[array([0, 0]), array([0, 1])],
[array([1, 0]), array([1, 1])]], dtype=object)
>>> b
array([[array([0, 0]), array([0, 1])],
[array([1, 0]), array([1, 1])]], dtype=object)
>>> np.array_equal(a, b)
False
>>> np.equal(a, b, dtype=object)
array([[array([ True, True]), array([ True, True])],
[array([ True, True]), array([ True, True])]], dtype=object)
>>> np.equal(a, b, dtype=object).all()
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
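One workaround (a sketch, assuming both object arrays have the same outer shape): compare the inner arrays pairwise instead of asking <code>np.array_equal</code> to handle the <code>dtype=object</code> outer array in one call:

```python
import numpy as np

def nested_array_equal(a, b):
    """Compare two object ndarrays whose elements are themselves
    ndarrays. np.array_equal chokes on dtype=object, so walk the
    leaves and compare each inner array individually."""
    if a.shape != b.shape:
        return False
    return all(np.array_equal(x, y) for x, y in zip(a.flat, b.flat))
```

For arbitrarily deep nesting the same idea can be applied recursively, dispatching on whether each leaf is itself a <code>dtype=object</code> array.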
|
<python><numpy>
|
2023-03-06 10:56:26
| 0
| 3,737
|
xpqz
|
75,649,944
| 7,568,316
|
Regex find until word of end of block
|
<p>I'm analyzing router logs and found a solution to extract blocks of text, but now I'm facing the problem that the end of a text block might not be present. I'll explain.
This is a sample of a stripped log.</p>
<pre><code>POBL026# show run vpn 0
interface ge0/4
no shutdown
!
interface ge0/4.1
description "TLOC-Extension Custom1"
ip address 10.31.xxx.1/30
nat
respond-to-ping
log-translations
!
tracker lbo2
tunnel-interface
encapsulation ipsec preference 100 weight 33
color custom1 restrict
no allow-service bgp
allow-service dhcp
allow-service dns
allow-service icmp
no allow-service sshd
no allow-service Netconf
no allow-service ntp
no allow-service ospf
no allow-service stun
allow-service https
!
mtu 1496
no shutdown
!
interface ge0/4.2
description "TLOC-Extension biz-internet "
ip address 10.31.xxx.5/30
mtu 1496
tloc-extension ge0/0
no shutdown
!
ip route 0.0.0.0/0 10.31.xxx.2
ip route 0.0.0.0/0 84.198.zzz.217
!
POBL026# show run vpn 1
</code></pre>
<p>So with this code I'm able to isolate each vpn block</p>
<pre><code>regex_result = re.search("(?s)(show\s*run\s*vpn\s*" + str(vpn_nbr) + "\s*\n)(.*?)(?=" + routername + ")", filecontent)
</code></pre>
<p>Then, in each block I searched for the interfaces with this code</p>
<pre><code> result = re.findall("(?s)(?<=interface )(.*?)(?=!)"", vpn_block)
</code></pre>
<p>But this is not OK, as I've noticed that my interface block can itself contain this character (<code>!</code>) before the end. So, I could search until the next "!\ninterface"; this works fine, but I'm not getting my last block, as there the end marker is the routername itself.</p>
<p>So I'm looking for a kind of regex that says, everything after the sequence "interface" until "!\ninterface" OR "end of the block".</p>
<p>Can this be done?</p>
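Yes: put the alternation inside the lookahead, stopping either at a <code>!</code> line immediately followed by another <code>interface</code> line, or at the end of the block (<code>\Z</code>). A sketch, assuming interface lines start at the beginning of a line as in the sample (the block below is a trimmed, hypothetical stand-in for one vpn block):

```python
import re

block = """interface ge0/4
 no shutdown
!
interface ge0/4.1
 nat
  respond-to-ping
 !
 mtu 1496
!
interface ge0/4.2
 mtu 1496
"""

# Lazily match each interface body until either "!\ninterface" or the
# end of the whole block. Indented "!" lines inside an interface do
# not match "\n!\n", so they don't terminate the match early.
pattern = re.compile(r"(?sm)^interface (.*?)(?=\n!\ninterface |\Z)")
interfaces = pattern.findall(block)
```

The last interface is now captured even though no further <code>interface</code> line follows it.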
|
<python><regex>
|
2023-03-06 10:55:33
| 1
| 755
|
Harry Leboeuf
|
75,649,848
| 2,592,835
|
predicting data bug with keras
|
<p>Hi, I'm trying to build a model in Keras that can predict some data based on training values.
I've seen this work successfully all over the internet, but my example code doesn't work:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(4,))) # input shape of 4
model.add(Dense(28, activation='relu'))
model.add(Dense(4, activation='sigmoid'))
xtrain=np.asarray([[3,4,3,2],[1,0,1,2],[1,1,1,1]])
ytrain=np.asarray([[1,1,1,1],[0,0,0,0],[2,2,2,2]])
model.compile(optimizer="sgd",loss="categorical_crossentropy")
model.fit(xtrain,ytrain,epochs=10)
xtest=xtrain[0]
data=model.predict(xtest)
print(data)
</code></pre>
<p>this comes up with the error:</p>
<pre><code>WARNING:tensorflow:Model was constructed with shape (None, 4) for input
KerasTensor(type_spec=TensorSpec(shape=(None, 4), dtype=tf.float32, name='dense_input'),
name='dense_input', description="created by layer 'dense_input'"), but it was called on
an input with incompatible shape (None,).
Traceback (most recent call last):
File "C:\Users\macla\Downloads\kerasmartai.py", line 30, in <module>
data=model.predict(xtest)
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\macla\AppData\Local\Temp\__autograph_generated_filefmg_w5bl.py", line 15, in tf__predict_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
ValueError: in user code:
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 2041, in predict_function *
return step_function(self, iterator)
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 2027, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 2015, in run_step **
outputs = model.predict_step(data)
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\training.py", line 1983, in predict_step
return self(x, training=False)
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\macla\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\engine\input_spec.py", line 250, in assert_input_compatibility
raise ValueError(
ValueError: Exception encountered when calling layer "sequential" " f"(type Sequential).
Input 0 of layer "dense" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (None,)
Call arguments received by layer "sequential" " f"(type Sequential):
• inputs=tf.Tensor(shape=(None,), dtype=int32)
• training=False
• mask=None
</code></pre>
<p>However, running what appears to be exactly the same piece of code works perfectly:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler
# generate 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
scalar = MinMaxScaler()
scalar.fit(X)
X = scalar.transform(X)
# define and fit the final model
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=500, verbose=0)
# new instances where we do not know the answer
Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=2, random_state=1)
Xnew = scalar.transform(Xnew)
# make a prediction
ynew = model.predict(Xnew)
# show the inputs and predicted outputs
for i in range(len(Xnew)):
print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))
</code></pre>
<p>So how can I train on my data like in the second example with the code I'm using?</p>
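The traceback points at the input shape rather than the training code: `predict` received a 1-D array (`shape (None,)`) while `Dense` layers need at least 2-D input, `(batch_size, n_features)`. A minimal numpy-only sketch of the reshape (`xtest` here is a hypothetical stand-in for the real test data):

```python
import numpy as np

# Hypothetical stand-in for the 1-D xtest that triggered "found ndim=1":
xtest = np.array([0.1, 0.4, 0.35, 0.8])

# If each value is one sample with a single feature, make it (n_samples, 1):
xtest_col = xtest.reshape(-1, 1)
print(xtest_col.shape)  # (4, 1)

# If the values together form ONE sample with several features, use (1, n):
xtest_row = xtest.reshape(1, -1)
print(xtest_row.shape)  # (1, 4)

# Either 2-D shape satisfies the layer's min_ndim=2 requirement:
# data = model.predict(xtest_col)
```

Which reshape is right depends on what one element of `xtest` means; the working second example avoids the issue because `make_blobs` already returns a 2-D `(n_samples, 2)` array.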
|
<python><keras><deep-learning>
|
2023-03-06 10:45:16
| 1
| 1,627
|
willmac
|
75,649,804
| 8,713,442
|
Create dictionary of each row in polars Dataframe
|
<p>Let's assume we have the dataframe given below. Now, for each row, I need to create a dictionary and pass it to a UDF for some logic processing. Is there a way to achieve this using either a Polars or a PySpark dataframe?</p>
<p><a href="https://i.sstatic.net/OuLTu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OuLTu.png" alt="enter image description here" /></a></p>
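Recent Polars versions expose `DataFrame.to_dicts()` and `DataFrame.iter_rows(named=True)` for exactly this (worth checking against the installed version), and PySpark rows have `Row.asDict()`. The underlying transform is just zipping column values into per-row mappings, sketched here with plain Python so it runs anywhere:

```python
# Columnar data, as a stand-in for a Polars/PySpark dataframe:
columns = {"id": [1, 2, 3], "name": ["a", "b", "c"]}

def rows_as_dicts(cols):
    """Turn a mapping of column -> values into one dict per row."""
    keys = list(cols)
    return [dict(zip(keys, row)) for row in zip(*cols.values())]

rows = rows_as_dicts(columns)
print(rows[0])  # {'id': 1, 'name': 'a'}

# With Polars the equivalent calls would be (assumption, not run here):
#   df.to_dicts()                                      # materialises all rows
#   for row in df.iter_rows(named=True): my_udf(row)   # row-by-row iteration
```

Note that row-wise UDFs give up most of the engine's vectorised performance, so they are best kept for logic that genuinely cannot be expressed as column expressions.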
|
<python><apache-spark><python-polars>
|
2023-03-06 10:41:33
| 2
| 464
|
pbh
|
75,649,695
| 584,239
|
how to make collectd python plugin to use python3 instead of python2 to execute the script
|
<p>I have enabled the Python collectd plugin. My Python script requires Python 3 syntax, but when the script executes it runs under Python 2.7, so it fails. How can I force the plugin to use Python 3 instead of Python 2?</p>
|
<python><collectd>
|
2023-03-06 10:29:27
| 1
| 9,691
|
kumar
|
75,649,542
| 11,251,938
|
colormap not returning the expected color for value
|
<p>I'm experiencing some problems with a matplotlib colormap.
I've set up this example code to reproduce the problem.</p>
<pre><code>%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib import style
from matplotlib import colors
import pandas as pd
import numpy as np
style.use('ggplot')
plot_df = pd.DataFrame({
'from': [[0, 0], [10, 10], [15, 15], [20, 20]],
'to':[[10, 10], [20, 20], [30, 30], [40, 40]],
'value':[0, 5, 10, 15]
})
plot_df['connectionstyle'] = 'arc3, rad=.55'
plot_df['arrowstyle'] = 'Simple, tail_width=1, head_width=10, head_length=15'
plot_df
fig, ax = plt.subplots()
vmin = 0
vmax = 15
scalarmappable = plt.cm.ScalarMappable(norm=colors.Normalize(vmin=vmin, vmax=vmax), cmap='YlOrRd')
cmap = scalarmappable.get_cmap()
cbar = fig.colorbar(scalarmappable, ax=ax)
ticks = np.linspace(vmin, vmax, 5)
cbar.ax.set_yticks(ticks)
cbar.ax.set_yticklabels([f'{tick:.0f}' for tick in ticks]) # vertically oriented colorbar
cbar.outline.set_linewidth(0)
for _, row in list(plot_df.iterrows()):
start_x = row['from'][0]
start_y = row['from'][1]
dest_x = row['to'][0]
dest_y = row['to'][1]
plt.scatter([start_x, dest_x], [start_y, dest_y], color='k')
p = patches.FancyArrowPatch(
(start_x, start_y),
(dest_x, dest_y),
connectionstyle=row.connectionstyle,
arrowstyle=row.arrowstyle,
color=cmap(row.value)
)
ax.add_patch(p)
fig.tight_layout()
</code></pre>
<p><a href="https://i.sstatic.net/rw8D7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rw8D7.png" alt="enter image description here" /></a></p>
<p>As you can see in the image, all arrows are yellow, even though the color is a function of the value through <code>cmap</code>, so I'd expect at least two of the arrows to be orange and red.</p>
<p>Does someone have a fix for this?</p>
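A likely cause, for what it's worth: a `Colormap` called with an integer treats it as an index into its ~256-entry lookup table, while a float is read as a fraction of [0, 1]. Since `row.value` is 0, 5, 10 or 15, every arrow lands near the start (yellow end) of `'YlOrRd'`. Passing data values through the existing `ScalarMappable` is a hedged sketch of the fix:

```python
import matplotlib.pyplot as plt
from matplotlib import colors

sm = plt.cm.ScalarMappable(norm=colors.Normalize(vmin=0, vmax=15), cmap='YlOrRd')
cmap = sm.get_cmap()

# Integer 15 -> lookup-table index 15 (still yellow); float 1.0 -> red end:
assert cmap(15) != cmap(1.0)

# to_rgba() applies the Normalize first, so raw data values map correctly:
assert sm.to_rgba(15) == cmap(1.0)

# In the plotting loop this would become:
#   color=sm.to_rgba(row.value)        # instead of cmap(row.value)
```

The same effect can be had with `cmap(norm(row.value))`, since `to_rgba` is essentially normalisation followed by the colormap lookup.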
|
<python><matplotlib><colormap>
|
2023-03-06 10:15:38
| 1
| 929
|
Zeno Dalla Valle
|
75,649,540
| 12,932,447
|
Pydantic root_validator unexpected behaviour
|
<p>I have a file <code>test.py</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import root_validator
from sqlmodel import SQLModel
from typing import List
from devtools import debug
class Base(SQLModel):
@root_validator(pre=True)
def validate(cls, values):
debug(values, type(values))
for k, v in values.items():
# here we perform the validation
# for this example we do nothing
pass
debug("EXIT")
return values
class A(Base):
id: int
name: str
class B(Base):
id: int
others: List[A]
</code></pre>
<p>Now if I do</p>
<pre><code>$ python3
Python 3.10.5 (v3.10.5:f377153967, Jun 6 2022, 12:36:10) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from test import A
>>> from test import B
>>> x = A(id=1, name="john")
test.py:11 Base.validate
values: {
'id': 1,
'name': 'john',
} (dict) len=2
type(values): <class 'dict'> (type)
test.py:16 Base.validate
'EXIT' (str) len=4
>>> y = B(id=42, others=[])
test.py:11 Base.validate
values: {
'id': 42,
'others': [],
} (dict) len=2
type(values): <class 'dict'> (type)
test.py:16 Base.validate
'EXIT' (str) len=4
>>> z = B(id=100, others=[x])
test.py:11 Base.validate
values: {
'id': 42,
'others': [
A(
id=1,
name='john',
),
],
} (dict) len=2
type(values): <class 'dict'> (type)
test.py:16 Base.validate
'EXIT' (str) len=4
test.py:11 Base.validate
values: A(
id=1,
name='john',
) (A)
type(values): <class 'test.A'> (SQLModelMetaclass)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/yuri/Desktop/exact/fractal/fractal-common/venv/lib/python3.10/site-packages/sqlmodel/main.py", line 498, in __init__
values, fields_set, validation_error = validate_model(
File "pydantic/main.py", line 1077, in pydantic.main.validate_model
File "pydantic/fields.py", line 895, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 928, in pydantic.fields.ModelField._validate_sequence_like
File "pydantic/fields.py", line 1094, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 884, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 1101, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 1148, in pydantic.fields.ModelField._apply_validators
File "pydantic/class_validators.py", line 318, in pydantic.class_validators._generic_validator_basic.lambda13
File "/Users/yuri/Desktop/exact/fractal/fractal-common/test.py", line 12, in validate
for k, v in values.items():
AttributeError: 'A' object has no attribute 'items'
</code></pre>
<p>I don't understand what is happening with <code>z</code>. Why, after calling the validator for class <code>B</code>, does the constructor call the validator again for class <code>A</code>, this time with <code>values</code> being not a dict but a <code><class 'test.A'> (SQLModelMetaclass)</code>?</p>
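What seems to happen, based on the traceback: the `pre=True` root validator is inherited by every subclass, and when pydantic validates `B.others` it runs each element through `A`'s validator chain again, this time passing the already-built `A` instance rather than a dict. A common workaround is a type guard at the top of the validator; the guard logic is plain Python and is sketched here without pydantic:

```python
def guarded_root_validator(cls, values):
    """Sketch of the pre=True root validator body with a type guard."""
    if not isinstance(values, dict):
        # An already-validated model instance was passed back through the
        # validator chain - nothing to re-check, return it unchanged.
        return values
    for k, v in values.items():
        pass  # per-field validation would go here
    return values

# The guard lets both call shapes succeed:
class Dummy:  # stands in for an already-built A instance
    pass

assert guarded_root_validator(None, {"id": 1}) == {"id": 1}
obj = Dummy()
assert guarded_root_validator(None, obj) is obj
```

As an aside, naming the validator method `validate` shadows pydantic's own `BaseModel.validate`, which is another classic source of surprises; a distinct name such as `check_fields` avoids that.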
|
<python><validation><pydantic><sqlmodel>
|
2023-03-06 10:15:32
| 1
| 875
|
ychiucco
|
75,649,406
| 14,555,505
|
Matplotlib plot one line, multiple colours, *multiple* segments
|
<p>Let's say I had some random time-series data:</p>
<pre class="lang-py prettyprint-override"><code>y = np.random.random(100) - 0.5
x = np.array(range(len(y)))
plt.plot(
x,
y
)
</code></pre>
<p><a href="https://i.sstatic.net/ETgMx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ETgMx.png" alt="100 random values plotted as a line plot, all coloured blue and linked together sequentially" /></a></p>
<p>What I want to do is colour some line segments one colour, and other line segments another colour. For example, maybe the ranges <code>0 < x < 10</code>, <code>34 < x < 37</code>, and <code>68 < x < 91</code> should all be green and everything else should be red. I can do this trivially with <code>plt.scatter</code>:</p>
<pre class="lang-py prettyprint-override"><code>plt.scatter(
x,
y,
c=np.where(
((0 < x) & (x < 10))
| ((34 < x) & (x < 37))
| ((68 < x) & (x < 91)),
'green', 'red'
)
)
</code></pre>
<p><a href="https://i.sstatic.net/u0jbD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u0jbD.png" alt="Same random numbers as above, but now as a scatter plot and with points some points in red, other points in green" /></a></p>
<p>But because this is a time series, I don't want a bunch of dots. I want one continuous line that is segmented into multiple differently coloured segments. I can't see an easy way to do this with <code>plt.plot</code>. Some sources say to do a for loop over each of the sets, and just mask out the points I want. But this does not work because it will connect all the green points into one continuous line, which is not what I want:</p>
<pre class="lang-py prettyprint-override"><code>mask = (
((0 < x) & (x < 10))
| ((34 < x) & (x < 37))
| ((68 < x) & (x < 91))
)
plt.plot(
x[mask],
y[mask],
c='green'
)
plt.plot(
x[~mask],
y[~mask],
c='red'
)
plt.title("WRONG - red lines are connected")
</code></pre>
<p><a href="https://i.sstatic.net/7mtFW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7mtFW.png" alt="a red and green line plot, but with the red and green lines mixing in with one another. It's wrong." /></a></p>
<p>What I want, is a short and easy way to achieve this:</p>
<pre class="lang-py prettyprint-override"><code>masks_dict = {
'green': [
(( 0 <= x) & (x <= 10)),
((34 <= x) & (x <= 37)),
((68 <= x) & (x <= 91))
],
'red': [
((10 <= x) & (x <= 34)),
((37 <= x) & (x <= 68)),
((91 <= x) & (x <= 100)),
]
}
for color, masks in masks_dict.items():
for mask in masks:
plt.plot(
x[mask],
y[mask],
c=color
)
plt.title("CORRECT - all lines are joined nicely")
</code></pre>
<p><a href="https://i.sstatic.net/vLhD7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vLhD7.png" alt="a correct plot with one continuous line that's split into alternating red and green" /></a></p>
<p>Also important is that the ranges <code>0 < x < 10</code>, <code>34 < x < 37</code>, and <code>68 < x < 91</code> are just dummy ranges, and I can't be manually typing the <code>masks_dict</code> by hand. In reality I have a time series dataset which has certain continuous sequences labelled into one of several classes, and I want one continuous line plot that is a different colour for each of the classes.</p>
<p>Is there an easier way to do this than the masking approach I described above?</p>
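One way to avoid hand-writing `masks_dict` is to compute run boundaries from the per-point labels with `np.diff`, extending each run by one index so neighbouring segments share their boundary point and the line stays connected. A sketch (the `labels` array is a hypothetical stand-in for the class labels):

```python
import numpy as np

def contiguous_runs(labels):
    """Yield (start, stop) slice bounds for each run of equal labels,
    with stop extended by 1 so adjacent segments share a point."""
    labels = np.asarray(labels)
    breaks = np.flatnonzero(np.diff(labels)) + 1          # first index of each new run
    edges = np.concatenate(([0], breaks, [len(labels)]))
    for a, b in zip(edges[:-1], edges[1:]):
        yield int(a), int(min(b + 1, len(labels)))

labels = np.array([0] * 3 + [1] * 4 + [0] * 3)
runs = list(contiguous_runs(labels))
print(runs)  # [(0, 4), (3, 8), (7, 10)]

# Plotting then becomes:
#   palette = {0: 'red', 1: 'green'}
#   for a, b in contiguous_runs(labels):
#       plt.plot(x[a:b], y[a:b], c=palette[int(labels[a])])
```

For very many segments, `matplotlib.collections.LineCollection` with a per-segment color array is the heavier-duty alternative, but the loop above keeps the familiar `plt.plot` style.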
|
<python><matplotlib><plot>
|
2023-03-06 10:00:27
| 2
| 1,163
|
beyarkay
|
75,649,339
| 19,920,392
|
Sending form data from react to flask server
|
<p>I am trying to submit a form and send the form data to a Flask server from React.
It seems like the POST itself works fine. However, when I try to access the form data on the Flask server, it returns <code>ImmutableMultiDict([])</code>.</p>
<p>Here are my codes:<br />
<strong>register.js</strong></p>
<pre><code>async function sendUserInfo(userInfo) {
await fetch('http://127.0.0.1:5000/get_user_info', {
method: 'POST',
body: userInfo
})
.then((response) => response.json())
.then((response) => console.log(response))
}
async function handleSubmit(event) {
event.preventDefault()
await sendUserInfo(event.target);
}
//component function
function Register() {
console.log('Register Page')
return (
<div id='main_container'>
<form onSubmit={handleSubmit}>
<input name='username' className='register_input' placeholder={inputUsername.placeholder}></input>
<input name='gender' className='register_input' placeholder={inputGender.placeholder}></input>
<input name='age' className='register_input' placeholder={inputAge.placeholder}></input>
<select className="register_input">
{cities.map((city) => {return <option className='select_option'value={city}>{city}</option>})}
</select>
<button className='input_button' >채팅 시작하기...</button>
<input className='input_button' type='submit'></input>
</form>
</div>
)
}
</code></pre>
<p><strong>server.py</strong></p>
<pre><code>from chat_app.chat import ChatApp, ChatRoom, User
from flask import Flask
from flask import request
from flask_cors import CORS
import json
server = Flask(__name__)
CORS(server)
@server.route('/get_user_info', methods=['GET', 'POST'])
def get_user_info():
user_info = request.form
print(user_info)
return json.dumps({'status': 'successful'})
server.run(debug=True)
</code></pre>
<p>I successfully get <code>{'status': 'successful'}</code> on the client side. But when I try to print out <code>request.form</code>, it returns <code>ImmutableMultiDict([])</code>.</p>
<p>I know this question might be a duplicate, but I am asking it because I've already tried to fix the problem by referring to various Stack Overflow and Google posts that deal with similar issues, and I have failed.</p>
|
<javascript><python><flask>
|
2023-03-06 09:54:09
| 2
| 327
|
gtj520
|
75,649,286
| 11,572,712
|
Ariadne-GraphQL - ModuleNotFoundError: No module named 'graphql.pyutils.undefined'
|
<p>I am trying to import several libraries:</p>
<pre><code>from ariadne import load_schema_from_path, make_executable_schema, \
graphql_sync, snake_case_fallback_resolvers, ObjectType
</code></pre>
<p>But when running this I get this Error:</p>
<pre><code>ModuleNotFoundError: No module named 'graphql.pyutils.undefined'
</code></pre>
<p>Before that, I downgraded to a lower graphql-core version using <code>pip install "graphql-core<3.1"</code> because I got this ImportError:</p>
<pre><code>ImportError: cannot import name 'PLAYGROUND_HTML' from 'ariadne.constants'
</code></pre>
<p>What can I do to solve this?</p>
|
<python><flask><graphql><ariadne-graphql>
|
2023-03-06 09:46:44
| 2
| 1,508
|
Tobitor
|
75,649,130
| 5,423,080
|
Get minimum of SortedKeyList with a condition on an object attribute
|
<p>I have a <code>SortedKeyList</code> of objects sorted by timestamp, the objects also have another attribute, the ID.</p>
<p>I was trying to get the earliest element of the list based on the ID.</p>
<p>This is a code example:</p>
<pre><code>import numpy as np
import sortedcontainers
class Point:
def __init__(self, timestamp, id):
self.timestamp = timestamp
self.id = id
point_list = sortedcontainers.SortedKeyList(key=lambda x: x.timestamp)
point_list.add(Point(np.datetime64("2021-07-05T09:00:00"), "B"))
point_list.add(Point(np.datetime64("2021-07-05T09:00:01"), "A"))
point_list.add(Point(np.datetime64("2021-07-05T09:00:03"), "A"))
point_list.add(Point(np.datetime64("2021-07-05T09:01:00"), "B"))
</code></pre>
<p>I would like to select on the ID and take the minimum.</p>
<p>For example, I want the earliest point for <code>ID=A</code>, which in this case is:</p>
<pre><code>Point(np.datetime64("2021-07-05T09:00:01"), "A")
</code></pre>
<p>I was trying to do something like this:</p>
<pre><code>min(point_list, key=lambda x: x.timestamp if(x.id=="A") else None).timestamp
</code></pre>
<p>but this gives me a <code>TypeError</code>:</p>
<pre><code>TypeError: '<' not supported between instances of 'datetime.datetime' and 'NoneType'
</code></pre>
<p>This is part of a bigger project, so I can refactor the part related to <code>SortedKeyList</code>, but I can't modify the object <code>Point</code>.</p>
<p>Any idea?</p>
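A hedged sketch of the usual pattern: filter on the ID first, then take the minimum (`default=None` avoids the `ValueError` when no point matches and keeps `None` out of the key function). Since the list is already sorted by timestamp, the first match is equivalently the earliest. `SimpleNamespace` stands in for the unmodifiable `Point`:

```python
from types import SimpleNamespace as Point

# Stand-ins for the Point objects, already sorted by timestamp:
point_list = [
    Point(timestamp=0, id="B"),
    Point(timestamp=1, id="A"),
    Point(timestamp=3, id="A"),
    Point(timestamp=60, id="B"),
]

# Option 1: filter, then min - works on any iterable, O(n):
earliest_a = min(
    (p for p in point_list if p.id == "A"),
    key=lambda p: p.timestamp,
    default=None,
)

# Option 2: a SortedKeyList is already timestamp-ordered, so the first
# matching element IS the earliest - stops at the first match:
first_a = next((p for p in point_list if p.id == "A"), None)

assert earliest_a is first_a and earliest_a.timestamp == 1
```

The original `TypeError` came from the key function returning `None` for non-matching IDs, which `min` then tried to compare against timestamps; filtering before comparing sidesteps that entirely.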
|
<python><python-3.x><numpy><sortedcontainers>
|
2023-03-06 09:31:43
| 2
| 412
|
cicciodevoto
|
75,649,108
| 7,118,663
|
How can I translate a curl command to Python request?
|
<p>Following <a href="https://shopify.dev/docs/apps/online-store/media/products#images-post-request" rel="nofollow noreferrer">this Shopify tutorial</a>, I'm trying to upload an image to Shopify. A subtask is to translate this curl command into a Python request. The file is uploaded by users, and I pass the file variable to this function via <a href="https://docs.djangoproject.com/en/4.1/ref/request-response/#django.http.HttpRequest.FILES" rel="nofollow noreferrer">request.FILES['filename']</a>:</p>
<pre><code>curl -v \
-F "Content-Type=image/png" \
-F "success_action_status=201" \
-F "acl=private" \
-F "key=tmp/45732462614/products/7156c27e-0331-4bd0-b758-f345afaa90d1/watches_comparison.png" \
-F "x-goog-date=20221024T181157Z" \
-F "x-goog-credential=merchant-assets@shopify-tiers.iam.gserviceaccount.com/20221024/auto/storage/goog4_request" \
-F "x-goog-algorithm=GOOG4-RSA-SHA256" \
-F "x-goog-signature=039cb87e2787029b56f498beb2deb3b9c34d96da642c1955f79225793f853760906abbd894933c5b434899d315da13956b1f67d8be54f470571d7ac1487621766a2697dfb8699c57d4e67a8b36ea993fde0f888b8d1c8bd3f33539d8583936bc13f9001ea3e6d401de6ad7ad2ae52d722073caf250340d5b0e92032d7ad9e0ec560848b55ec0f943595578a1d6cae53cd222d719acb363ba2c825e3506a52b545dec5be57074f8b1b0d58298a0b4311016752f4cdb955b89508376c38f8b2755fce2423acb3f592a6f240a21d8d2f51c5f740a61a40ca54769a736d73418253ecdf685e15cfaf7284e6e4d5a784a63d0569a9c0cffb660028f659e68a68fb80e" \
-F "policy=eyJjb25kaXRpb25zIjpbeyJDb250ZW50LVR5cGUiOiJpbWFnZVwvcG5nIn0seyJzdWNjZXNzX2FjdGlvbl9zdGF0dXMiOiIyMDEifSx7ImFjbCI6InByaXZhdGUifSxbImNvbnRlbnQtbGVuZ3RoLXJhbmdlIiwxLDIwOTcxNTIwXSx7ImJ1Y2tldCI6InNob3BpZnktc3RhZ2VkLXVwbG9hZHMifSx7ImtleSI6InRtcFwvZ2NzXC80NTczMjQ2MjYxNFwvcHJvZHVjdHNcLzcxNTZjMjdlLTAzMzEtNGJkMC1iNzU4LWYzNDVhZmFhOTBkMVwvd2F0Y2hlc19jb21wYXJpc29uLnBuZyJ9LHsieC1nb29nLWRhdGUiOiIyMDIyMTAyNFQxODExNTdaIn0seyJ4LWdvb2ctY3JlZGVudGlhbCI6Im1lcmNoYW50LWFzc2V0c0BzaG9waWZ5LXRpZXJzLmlhbS5nc2VydmljZWFjY291bnQuY29tXC8yMDIyMTAyNFwvYXV0b1wvc3RvcmFnZVwvZ29vZzRfcmVxdWVzdCJ9LHsieC1nb29nLWFsZ29yaXRobSI6IkdPT0c0LVJTQS1TSEEyNTYifV0sImV4cGlyYXRpb24iOiIyMDIyLTEwLTI1VDE4OjExOjU3WiJ9" \
-F "file=@/Users/shopifyemployee/Desktop/watches_comparison.png" \
"https://shopify-staged-uploads.storage.googleapis.com/"
</code></pre>
<p>I used ChatGPT to come up with this Python code:</p>
<pre><code>def uploadImage(stagedTarget, file):
parameters = stagedTarget["parameters"]
url = stagedTarget["url"]
files = {
'file': file,
}
data = {
'Content-Type': parameters[0]['value'],
'success_action_status': parameters[1]['value'],
'acl': parameters[2]['value'],
'key': parameters[3]['value'],
'x-goog-date': parameters[4]['value'],
'x-goog-credential': parameters[5]['value'],
'x-goog-algorithm': parameters[6]['value'],
'x-goog-signature': parameters[7]['value'],
'policy': parameters[8]['value'],
}
print(f"{url = }, {data = }")
response = requests.post(url, files=files, data=data)
print(f"{response.status_code = }")
print(f"{response.text = }")
response = response.content
response = json.loads(response)
</code></pre>
<p>The server gives me this response:</p>
<pre><code>web_1 | response.status_code = 400
web_1 | response.text = "<?xml version='1.0' encoding='UTF-8'?><Error><Code>EntityTooSmall</Code><Message>Your proposed upload is smaller than the minimum object size specified in your Policy Document.</Message><Details>Content-length smaller than lower bound on range</Details></Error>"
</code></pre>
<p>My file size is only 46 KB, so I don't know why it's "too small". I tried a larger file and the result is the same. Running the curl command to upload a similar local image file works fine. What did I do wrong?</p>
<p>Update:
I tried updating the Python code as below, and it gives a 400 error again:</p>
<pre><code> parameters = stagedTarget["parameters"]
url = stagedTarget["url"]
headers = {
'Content-Type': 'multipart/form-data'
}
data = {
'Content-Type': parameters[0]['value'],
'success_action_status': parameters[1]['value'],
'acl': parameters[2]['value'],
'key': parameters[3]['value'],
'x-goog-date': parameters[4]['value'],
'x-goog-credential': parameters[5]['value'],
'x-goog-algorithm': parameters[6]['value'],
'x-goog-signature': parameters[7]['value'],
'policy': parameters[8]['value'],
'file': file,
}
response = requests.post(url, headers=headers, data=data)
</code></pre>
<pre><code>web_1 | response.status_code = 400
web_1 | response.text = 'Bad content type. Please use multipart.'
</code></pre>
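Two hedged observations on the attempts above. Setting `Content-Type: multipart/form-data` by hand drops the boundary parameter, which is exactly what "Bad content type. Please use multipart." complains about; leaving the header to `requests` and keeping the file in `files=` fixes that. And `EntityTooSmall` typically means an empty body was uploaded, which can happen when a Django `UploadedFile` has already been read once, so a `file.seek(0)` before posting may be needed (assumption). The multipart construction can be checked offline with `requests.Request(...).prepare()` (`build_upload_request` and its arguments are hypothetical names):

```python
import requests

def build_upload_request(url, parameters, file_obj, filename, mime):
    """Let requests build the multipart body (and its boundary) itself."""
    data = {p["name"]: p["value"] for p in parameters}
    files = {"file": (filename, file_obj, mime)}   # file field stays in files=
    return requests.Request("POST", url, data=data, files=files).prepare()

params = [{"name": "key", "value": "tmp/example/image.png"},
          {"name": "acl", "value": "private"}]
req = build_upload_request("https://example.com/upload",
                           params, b"\x89PNG fake bytes", "image.png", "image/png")

# requests generated the header, boundary included:
print(req.headers["Content-Type"])  # multipart/form-data; boundary=...
assert req.headers["Content-Type"].startswith("multipart/form-data; boundary=")
assert b"fake bytes" in req.body
```

Note that `requests` places the `files=` entries after the `data=` fields in the body, which matches the signed-POST convention of the file field coming last.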
|
<python><django><curl><shopify><shopify-api>
|
2023-03-06 09:30:00
| 2
| 932
|
Benny Chan
|
75,649,103
| 5,675,325
|
Override test settings from a file
|
<p>For context, when one wants to change settings in a test (<a href="https://stackoverflow.com/q/48042132/5675325">yes, it's ok to change it there</a>), we can use <a href="https://docs.djangoproject.com/en/4.1/topics/testing/tools/#django.test.override_settings" rel="nofollow noreferrer">override_settings()</a> or <a href="https://docs.djangoproject.com/en/4.1/topics/testing/tools/#django.test.SimpleTestCase.modify_settings" rel="nofollow noreferrer">modify_settings()</a> (as <a href="https://stackoverflow.com/q/42905759/5675325">observed here</a>). <a href="https://stackoverflow.com/a/69752186/5675325">That works when running the tests in parallel too</a>.</p>
<p>Generally speaking, I do something like this</p>
<pre><code>from django.test import TestCase, override_settings
@override_settings(LOGIN_URL='/other/login/')
class LoginTestCase(TestCase):
def test_login(self):
response = self.client.get('/sekrit/')
self.assertRedirects(response, '/other/login/?next=/sekrit/')
</code></pre>
<p>Thing is, the list of settings can get big, so I would like to get the key-value pairs from another file.</p>
<p>How can I override settings from a file instead of having a long inline list like <code>LOGIN_URL='/other/login/', ...</code>?</p>
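Since `override_settings()` takes keyword arguments, the overrides can live in any file that loads into a dict and be unpacked with `**`. A hedged sketch using a JSON file (the filename and keys are hypothetical; a plain Python module exporting an `OVERRIDES` dict works the same way):

```python
import json
import tempfile

# Stand-in for a checked-in overrides file, e.g. tests/test_settings.json:
with tempfile.NamedTemporaryFile("w+", suffix=".json") as fh:
    json.dump({"LOGIN_URL": "/other/login/", "LANGUAGE_CODE": "en-us"}, fh)
    fh.seek(0)
    OVERRIDES = json.load(fh)

print(OVERRIDES["LOGIN_URL"])  # /other/login/

# In the test module this becomes (sketch - needs a configured Django project):
#   from django.test import TestCase, override_settings
#
#   @override_settings(**OVERRIDES)
#   class LoginTestCase(TestCase):
#       ...
```

The same unpacking works with `modify_settings(**...)` and with `override_settings` used as a context manager inside individual tests.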
|
<python><django><django-testing><django-settings>
|
2023-03-06 09:29:25
| 1
| 15,859
|
Tiago Peres
|
75,649,082
| 616,349
|
"invalid server address" when connecting to LDAP server with ldap3
|
<p>I have the following python code that connects to our LDAP servers (multiple LDAP servers behind a load balancer).</p>
<pre class="lang-py prettyprint-override"><code>ldap_server = Server(host=LDAP_server_IP, port=636, use_ssl=True, get_info=ALL)
</code></pre>
<p>It keeps on failing with the following error message:</p>
<blockquote>
<p>Error: ("('socket ssl wrapping error: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:997)',)",)
invalid server address
Error: invalid server address</p>
</blockquote>
<p>According to this <a href="https://stackoverflow.com/questions/55854546/ldap-connect-failed-invalid-server-address">post</a>, it seems that one solution was to use the IP address of the server instead of the DNS name.</p>
<p>This would make sense if the server certificate were somehow invalid or issued for another name.</p>
<p>Unfortunately, it did not solve my problem.</p>
<p>I also tried to turn off hostname checking, without success:</p>
<pre class="lang-py prettyprint-override"><code>context = ssl.create_default_context()
context.check_hostname = False
ldap_server = Server(
host=LDAP_server_IP, port=636, use_ssl=True, get_info=ALL
)
</code></pre>
<p>Therefore, my question is actually twofold: did I really turn off certificate name verification with my code? Is there any other explanation for this error?</p>
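On the first question: no. The `ssl.SSLContext` created above is never handed to `Server`, so it has no effect; ldap3 takes its TLS options through its own `Tls` object instead. On the second: an "sslv3 alert handshake failure" usually means client and server found no common protocol or cipher, so pinning a modern TLS version is worth a try. A hedged sketch (the `ldap3.Tls` usage is shown in comments; the runnable part only demonstrates the stdlib context settings):

```python
import ssl

# ldap3 equivalent (sketch, assuming ldap3 is installed):
#   from ldap3 import Server, Tls, ALL
#   tls = Tls(validate=ssl.CERT_NONE, version=ssl.PROTOCOL_TLSv1_2)
#   server = Server(host=LDAP_server_IP, port=636, use_ssl=True,
#                   tls=tls, get_info=ALL)

# For a plain SSLContext, hostname checking must be disabled BEFORE
# dropping certificate verification, or a ValueError is raised:
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
assert ctx.check_hostname is False and ctx.verify_mode == ssl.CERT_NONE
```

Disabling verification is only a diagnostic step, of course; if it makes the handshake succeed, the longer-term fix is trusting the load balancer's CA via `Tls(ca_certs_file=...)`.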
|
<python><python-3.x><ssl><ldap>
|
2023-03-06 09:28:16
| 1
| 2,195
|
E. Jaep
|
75,649,038
| 7,327,257
|
Training difference between LightGBM API and Sklearn API
|
<p>I'm trying to train an LGBMClassifier for a multiclass task. I first tried working directly with the LightGBM API, setting up the model and training as follows:</p>
<p><strong>LightGBM API</strong></p>
<pre><code>train_data = lgb.Dataset(X_train, (y_train-1))
test_data = lgb.Dataset(X_test, (y_test-1))
params = {}
params['learning_rate'] = 0.3
params['boosting_type'] = 'gbdt'
params['objective'] = 'multiclass'
params['metric'] = 'softmax'
params['max_depth'] = 10
params['num_class'] = 8
params['num_leaves'] = 500
lgb_train = lgb.train(params, train_data, 200)
# AFTER TRAINING THE MODEL
y_pred = lgb_train.predict(X_test)
y_pred_class = [np.argmax(line) for line in y_pred]
y_pred_class = np.asarray(y_pred_class) + 1
</code></pre>
<p>This is how the confussion matrix looks:</p>
<p><a href="https://i.sstatic.net/oiZ79.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oiZ79.png" alt="enter image description here" /></a></p>
<p><strong>Sklearn API</strong></p>
<p>Then I tried to move to Sklearn API to be able to use other tools. This is the code I used:</p>
<pre><code>lgb_clf = LGBMClassifier(objective='multiclass',
boosting_type='gbdt',
max_depth=10,
num_leaves=500,
learning_rate=0.3,
eval_metric=['accuracy','softmax'],
num_class=8,
n_jobs=-1,
early_stopping_rounds=100,
num_iterations=500)
clf_train = lgb_clf.fit(X_train, (y_train-1), verbose=1, eval_set=[(X_train, (y_train-1)), (X_test, (y_test-1))])
# TRAINING: I can see overfitting is happening
y_pred = clf_train.predict(X_test)
y_pred = [np.argmax(line) for line in y_pred]
y_pred = np.asarray(y_pred) + 1
</code></pre>
<p>And this is the confusion matrix in this case:</p>
<p><a href="https://i.sstatic.net/BQNWh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BQNWh.png" alt="enter image description here" /></a></p>
<p><strong>Notes</strong></p>
<ol>
<li>I need to subtract 1 from y_train as my classes start at 1 and LightGBM was complaining about this.</li>
<li>When I try a RandomSearch or a GridSearch I always obtain the same result as the last confusion matrix.</li>
<li>I have checked different questions here but none solves this issue.</li>
</ol>
<p><strong>Questions</strong></p>
<ol>
<li>Is there anything that I'm missing out when implementing the model in Sklearn API?</li>
<li>Why do I obtain good results (maybe with overfitting) with LightGBM API?</li>
<li>How can I achieve the same results with the two APIs?</li>
</ol>
<p>Thanks in advance.</p>
<p><strong>UPDATE</strong> It was my mistake. I thought the output of both APIs would be the same, but it isn't. I just removed the <code>np.argmax()</code> line when predicting with the Sklearn API; it turns out this API already predicts the class directly. I won't remove the question in case someone else is dealing with a similar issue.</p>
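To make the UPDATE concrete (numpy-only sketch; the arrays are hypothetical stand-ins): the native `Booster.predict()` returns an `(n_samples, n_classes)` probability matrix, while `LGBMClassifier.predict()` already returns class labels, with `predict_proba()` giving the matrix. Applying `np.argmax` to labels is what scrambled the second confusion matrix:

```python
import numpy as np

proba = np.array([[0.1, 0.7, 0.2],      # stand-in for lgb_train.predict(X_test)
                  [0.6, 0.3, 0.1]])

# Native API: reduce probabilities to labels yourself (+1 for 1-based classes):
labels_native = np.argmax(proba, axis=1) + 1

# Sklearn API: clf.predict(X_test) would already return [1, 0] here, so only
# the +1 shift is needed - no argmax:
labels_sklearn = np.array([1, 0]) + 1

assert (labels_native == labels_sklearn).all()
print(labels_native)  # [2 1]
```

Running `argmax` over a 1-D label vector instead returns the position of the largest label, collapsing every prediction to a single value, which matches the degenerate confusion matrix seen above.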
|
<python><machine-learning><scikit-learn><multiclass-classification><lightgbm>
|
2023-03-06 09:24:30
| 3
| 357
|
M. Merida-Floriano
|
75,649,009
| 11,017,797
|
Cannot build sphinx autodocumentation with Django via GitLab (docker) pipeline
|
<p>This problem might be quite specific.</p>
<h3>My setup</h3>
<p>This is my project structure. It's a Django project.</p>
<pre><code>├── docs
│ ├── build
│ │ ├── doctrees
│ │ └── html
│ ├── Makefile
│ └── source
│ ├── code
│ ├── conf.py
│ ├── contributing.md
│ ├── index.rst
│ ├── infrastructure.md
│ └── _static
├── my-project-name
│ ├── api.py
│ ├── asgi.py
│ ├── celeryconfig.py
│ ├── celery.py
│ ├── __init__.py
│ ├── __pycache__
│ ├── routers.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
| ... some more apps
</code></pre>
<p>It is hosted on a private GitLab instance and I have multiple runners installed. One is a docker executor that is responsible for building the documentation via Sphinx.</p>
<p>The basic <code>.gitlab-ci.yml</code> looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>image: python:3.10-slim
stages:
- deploy
pages:
tags:
- docs
stage: deploy
script:
- python3 -m pip install django sphinx furo myst-parser
- sphinx-build -b html docs/source public/
</code></pre>
<p>This had been working just fine until I tried to include Django code via the <code>sphinx-autodoc</code> extension. Everything shows up on GitLab Pages. I know that Sphinx needs to load every module it scans up front, which is why you have to initialize Django in <code>conf.py</code>.</p>
<p>The beginning of my Sphinx <code>conf.py</code> looks like this:</p>
<pre class="lang-py prettyprint-override"><code>
# Sphinx needs Django loaded to correctly use the "autodoc" extension
import django
import os
import sys
from pathlib import Path
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my-project-name.settings")
sys.path.append(Path(__file__).parent.parent.parent)
django.setup()
</code></pre>
<p>The last line before calling <code>django.setup()</code> makes sure to include the project-level path.
Autodocumenting Django works fine with these modifications when I <code>make html</code> on my local system. The modules I specify are documented nicely and no errors occur.</p>
<h4>Now to my actual problem</h4>
<p>I push the code to my GitLab repository and let the jobs run via a docker executor.</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 -m pip install django sphinx furo myst-parser
Collecting django
Downloading Django-4.1.7-py3-none-any.whl (8.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.1/8.1 MB 124.3 MB/s eta 0:00:00
Collecting sphinx
Downloading sphinx-6.1.3-py3-none-any.whl (3.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.0/3.0 MB 111.0 MB/s eta 0:00:00
...
# the loading continues here
</code></pre>
<p>After all the modules are loaded it tries to execute Sphinx. Now, this error occurs:</p>
<pre class="lang-bash prettyprint-override"><code>
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: pip install --upgrade pip
$ sphinx-build -b html docs/source public/
Running Sphinx v6.1.3
######
/builds/my-dir/my-project-name
######
Configuration error:
There is a programmable error in your configuration file:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/sphinx/config.py", line 351, in eval_config_file
exec(code, namespace) # NoQA: S102
File "/builds/my-dir/my-project-name/docs/source/conf.py", line 22, in <module>
django.setup()
File "/usr/local/lib/python3.10/site-packages/django/__init__.py", line 19, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/usr/local/lib/python3.10/site-packages/django/conf/__init__.py", line 92, in __getattr__
self._setup(name)
File "/usr/local/lib/python3.10/site-packages/django/conf/__init__.py", line 79, in _setup
self._wrapped = Settings(settings_module)
File "/usr/local/lib/python3.10/site-packages/django/conf/__init__.py", line 190, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'my-project-name'
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
</code></pre>
<p>As far as I know, this usually occurs when you don't set your path variables correctly. But I set it at the project level and print it out as validation (the <code>### lines</code> in the error message). My project path is definitely <code>/builds/my-dir/my-project-name</code>.</p>
<p>Do you have any ideas on what I could try to make it work?</p>
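One hedged thing to check: `sys.path` entries are expected to be strings, and appending a `pathlib.Path` object can be silently ignored by the import machinery, which fits "works locally (where the cwd already makes the package importable) but fails in CI". Resolving to an absolute path and stringifying is cheap insurance (and, as an aside, a package directory whose name literally contains hyphens, as in the anonymised `my-project-name`, would not be importable at all):

```python
import sys
from pathlib import Path

def add_project_root(conf_py_path):
    """docs/source/conf.py -> repository root, inserted as a *string*."""
    root = Path(conf_py_path).resolve().parents[2]
    sys.path.insert(0, str(root))
    return str(root)

# In conf.py this would be called as add_project_root(__file__):
root = add_project_root("docs/source/conf.py")
assert sys.path[0] == root and isinstance(sys.path[0], str)
```

Using `insert(0, ...)` rather than `append` also guarantees the project wins over any same-named package that happens to be installed in the CI image's site-packages.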
|
<python><django><gitlab><python-sphinx><gitlab-ci-runner>
|
2023-03-06 09:20:24
| 1
| 324
|
kerfuffle
|
75,648,889
| 15,112,321
|
tf keras autokeras with early stopping returns empty history
|
<p>I am trying different models for the same dataset, <code>autokeras.ImageClassifier</code> being one of them.
First I run</p>
<pre><code>img_size = (100,120,3)
train_dataset = get_dataset(x_train, y_train, img_size[:-1], 128)
valid_dataset = get_dataset(x_valid, y_valid, img_size[:-1], 128)
test_dataset = get_dataset(x_test, y_test, img_size[:-1], 128)
</code></pre>
<p>to get the datasets with a predefined function. Then I fit the model with an early-stopping callback:</p>
<pre><code># - Create the network
model = ak.ImageClassifier(overwrite=True, max_trials=1, metrics=['accuracy'])

# - Train the network
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=2)  # all other values left at their defaults
history = model.fit(train_dataset, epochs=10, validation_data=valid_dataset, callbacks=[early_stop])

# - Evaluate the network
model.evaluate(test_dataset)
</code></pre>
<p>The problem is that when training stops because of the callback, <code>history</code> is <code>None</code>, i.e. an empty object. I have not been able to find anything similar on the internet; for everyone else it seems to work properly. I know the problem is with the callback, because when I fit the model without any callback it works properly.</p>
<p>The output when training is ended by the callback is this:</p>
<pre><code>Trial 1 Complete [00h 13m 18s]
val_loss: 4.089305400848389
Best val_loss So Far: 4.089305400848389
Total elapsed time: 00h 13m 18s
WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _update_step_xla while saving (showing 3 of 3). These functions will not be directly callable after loading.
WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
</code></pre>
|
<python><tensorflow><keras><early-stopping><auto-keras>
|
2023-03-06 09:07:38
| 1
| 305
|
Videgain
|
75,648,660
| 16,971,617
|
Counting unique pixel value of an image
|
<p>I would like to count the occurrences of each unique pixel value, keep only those whose count exceeds a threshold, and save them in a dict.</p>
<pre class="lang-py prettyprint-override"><code># Assume an image is read as a numpy array
np.random.seed(seed=777)
s = np.random.randint(low=0, high = 255, size=(100, 100, 3))
print(s)
</code></pre>
<p>This is how I count the unique values (each a 1×3 array):</p>
<pre><code>np.unique(s.reshape(-1, 3), axis=0, return_counts=True)
</code></pre>
<p>How can I add some logic to filter by count and turn the result into a dict?</p>
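For what it's worth, a sketch of one way to do the filtering; the threshold value of 2 is an arbitrary assumption for illustration:

```python
import numpy as np

np.random.seed(seed=777)
s = np.random.randint(low=0, high=255, size=(100, 100, 3))

# counts[i] is how many pixels have the colour values[i]
values, counts = np.unique(s.reshape(-1, 3), axis=0, return_counts=True)

threshold = 2  # arbitrary cut-off, adjust as needed
keep = counts > threshold
result = {tuple(int(c) for c in v): int(n) for v, n in zip(values[keep], counts[keep])}
```

The boolean mask `keep` selects both the colours and their counts in one shot, so the dict comprehension only ever sees the rows that pass the threshold.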
|
<python><numpy>
|
2023-03-06 08:40:56
| 3
| 539
|
user16971617
|
75,648,656
| 12,579,308
|
Getting upper case keyboard entries from OpenCV HighGUI in Linux Environment and Python?
|
<p>I'm currently working on a project that involves OpenCV HighGUI for reading and displaying images in a Linux environment. I need to be able to capture keyboard entries, including upper case letters, from the user while running the HighGUI window.</p>
<p>As an example, I'd like to get the ASCII code of 'A' (i.e. 65), but I always get the ASCII code of 'a' (i.e. 97).</p>
<p>I've tried using the cv::waitKey() function, which returns the ASCII code of the pressed key, but I was not able to get upper case codes with Caps Lock or Shift. Here is my code:</p>
<pre><code>import cv2

if __name__ == "__main__":
    # Create a HighGUI window
    cv2.namedWindow("image")

    while True:
        # Display an image in the HighGUI window
        img = cv2.imread("images/histogram.png")
        cv2.imshow("image", img)

        # Wait for a key event
        key = cv2.waitKey(0)

        # Check if the key is the ESC key
        if key == 27:
            break

        # Check if the key is an upper case letter
        if key >= ord("A") and key <= ord("Z"):
            print(f"Upper case letter '{chr(key)}' pressed!")
        else:
            print(f"No upper case detected. ASCII: '{key}'")

    # Destroy the HighGUI window
    cv2.destroyAllWindows()
</code></pre>
<p>And here is a sample output:</p>
<pre><code>No upper case detected. ASCII: '97' # 'a'
No upper case detected. ASCII: '225' # Left Shift
No upper case detected. ASCII: '97' # 'a'
No upper case detected. ASCII: '229' # Caps
No upper case detected. ASCII: '97' # 'a'
</code></pre>
|
<python><opencv>
|
2023-03-06 08:40:40
| 0
| 341
|
Oguz Hanoglu
|
75,648,441
| 15,222,211
|
Python CiscoConfParse. How to get full block by command in middle?
|
<p>I need to get the full config block given some command in the middle of it.
In the following example, I get an incomplete block where the more deeply indented child ' ip 10.2.2.1' is missing.</p>
<pre><code>from ciscoconfparse import CiscoConfParse
from pprint import pprint

config = """
hostname switch
interface Vlan2
 description vlan2
 ip address 10.2.2.3/24
 hsrp 2
  ip 10.2.2.1
interface Vlan3
 description vlan3
 ip address 10.3.3.3/24
 hsrp 3
  ip 10.3.3.1
""".splitlines()

ccp = CiscoConfParse(config=config)
blocks = ccp.find_blocks("10.2.2.3/24")
print(blocks)  # ['interface Vlan2', ' description vlan2', ' ip address 10.2.2.3/24', ' hsrp 2']
</code></pre>
<p>Help me find an elegant way to get the following output (with 'ip 10.2.2.1'):</p>
<pre><code>['interface Vlan2', ' description vlan2', ' ip address 10.2.2.3/24', ' hsrp 2', 'ip 10.2.2.1']
</code></pre>
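Outside the library, the indentation walk itself can be sketched in plain Python; `full_block` is a hypothetical helper, not part of the CiscoConfParse API:

```python
config = """\
hostname switch
interface Vlan2
 description vlan2
 ip address 10.2.2.3/24
 hsrp 2
  ip 10.2.2.1
interface Vlan3
 description vlan3
 ip address 10.3.3.3/24
 hsrp 3
  ip 10.3.3.1
""".splitlines()

def full_block(lines, needle):
    # line containing the command of interest (raises StopIteration if absent)
    idx = next(i for i, ln in enumerate(lines) if needle in ln)
    # walk up to the nearest unindented line: the block's top-level parent
    start = idx
    while start > 0 and lines[start].startswith(" "):
        start -= 1
    # collect the parent plus every following indented line (all descendants)
    block = [lines[start]]
    j = start + 1
    while j < len(lines) and lines[j].startswith(" "):
        block.append(lines[j])
        j += 1
    return block

print(full_block(config, "10.2.2.3/24"))
```

This is the same parent/child relationship the library encodes, so a library-based route would be to find the matching object, walk up to its top-level parent, and combine that parent with all of its children.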
|
<python><cisco><ciscoconfparse>
|
2023-03-06 08:12:52
| 2
| 814
|
pyjedy
|
75,648,372
| 17,124,619
|
SQL concurrency overflows for larger tables
|
<p>I have a simple concurrency script that uses Oracle as the main database. I'm connecting to SQL with <code>python-oracledb</code> and have used individual threads to open database connections. However, I have noticed that this is comparatively very slow for larger tables. For example, with a table of 1.2 million rows, the programme hangs and my IDE crashes, whereas with 80,000 rows it completes in a few seconds. What is causing this overflow and how can I correct it?</p>
<pre class="lang-python prettyprint-override"><code>SQL = 'SELECT /*+ ENABLE_PARALLEL_DML PARALLEL(AUTO) */ * FROM USER_TABLE offset :rowoffset rows fetch next :maxrows rows only'
MAX_ROWS = 1200000
NUM_THREADS = 12

def start_workload(fn):
    def wrapped(self, threads, *args, **kwargs):
        assert isinstance(threads, int)
        assert threads > 0
        ts = []
        for i in range(threads):
            new_args = (self, i, *args)
            t = threading.Thread(target=fn, args=new_args, kwargs=kwargs)
            t.start()
            ts.append(t)
        for t in ts:
            t.join()
    return wrapped

import pandas as pd

class TEST:
    def __init__(self, batchsize, maxrows, *args):
        self._pool = oracledb.create_pool(user=args[0], password=args[1], port=1521, host="localhost", service_name="service_name", min=NUM_THREADS, max=NUM_THREADS)
        self._batchsize = batchsize
        self._maxrows = maxrows

    @start_workload
    def do_query(self, tn):
        with self._pool.acquire() as connection:
            with connection.cursor() as cursor:
                max_rows = self._maxrows
                row_iter = int(max_rows / self._batchsize)
                cursor.arraysize = 10000
                cursor.prefetchrows = 1000000
                cursor.execute(SQL, dict(rowoffset=(tn * row_iter), maxrows=row_iter))
                columns = [col[0] for col in cursor.description]
                cursor.rowfactory = lambda *args: dict(zip(columns, args))
                pd.DataFrame(cursor.fetchall()).to_csv(f'TH_{tn}_customer.csv')

if __name__ == '__main__':
    result = TEST(NUM_THREADS, MAX_ROWS, username, password)
    import time
    start = time.time()
    Make = result.do_query(NUM_THREADS)
    end = time.time()
    print('Total Time: %s' % (end - start))
    print(Make)
|
<python><sql><oracle-database><multithreading><python-oracledb>
|
2023-03-06 08:04:31
| 2
| 309
|
Emil11
|
75,648,347
| 13,647,125
|
Finding minimum iterative
|
<pre><code>sales_data = {
    'order_number': [1001, 1002, 1003, 1004, 1005],
    'order_date': ['2022-02-01', '2022-02-03', '2022-02-07', '2022-02-10', '2022-02-14']
}
sales_data = pd.DataFrame(sales_data)

capacity_data = {
    'date': pd.date_range(start='2022-02-01', end='2022-02-28', freq='D'),
    'capacity': [0, 0, 0, 1, 1, 100, 110, 120, 130, 140, 150, 160, 170, 180,
                 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320]  # 28 values, one per day
}
capacity_data = pd.DataFrame(capacity_data)
</code></pre>
<p>I would like to output like this</p>
<pre><code>output = {'order_number': [1001, 1002, 1003, 1004, 1005], 'confirmation_date': ['2022-02-04', '2022-02-05', '2022-02-06', '2022-02-06', '2022-02-06']}
</code></pre>
<p>I just want to find the closest date where capacity is free and, if it is, reduce that day's capacity by one for the next iteration.</p>
<p>This is my script:</p>
<pre><code>order_number = []
confirmation_date = []
grouped = sales_data['order_number'].unique()

# Iterate over the orders
for group in grouped:
    for order_row in range(len(capacity_data)):
        if capacity_data['capacity'][order_row] > 0:
            order_number.append(group)
            confirmation_date.append(capacity_data['date'][order_row])
            capacity_data['capacity'][order_row] = capacity_data['capacity'][order_row] - 1
            break  # stop at the first free date for this order

orderdict = dict(zip(order_number, confirmation_date))
</code></pre>
<p>I would also like to ask if there is a way to make this script better optimized for iterating over more than 100k rows.</p>
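A sketch of the first-free-day loop on the example data; note the capacity list above is one entry short for 28 days, so the sketch pads it to 28 values as an assumption:

```python
import pandas as pd

# 28 capacity values, one per day of February 2022 (padded assumption)
capacity = pd.Series(
    [0, 0, 0, 1, 1] + list(range(100, 330, 10)),
    index=pd.date_range(start='2022-02-01', end='2022-02-28', freq='D'),
)
orders = [1001, 1002, 1003, 1004, 1005]

confirmation = {}
for order in orders:
    free = capacity[capacity > 0]       # days that still have capacity
    day = free.index[0]                 # earliest such day
    confirmation[order] = day.strftime('%Y-%m-%d')
    capacity.loc[day] -= 1              # consume one unit of capacity
```

This reproduces the expected output above. For 100k+ orders, keeping a pointer to the first non-empty day (or a heap of free days) avoids rescanning the whole series for every order.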
|
<python><loops><optimization>
|
2023-03-06 08:01:02
| 2
| 755
|
onhalu
|
75,648,232
| 17,267,064
|
Is there a method of reading a Google Sheet via Python without having to share sheet to service account?
|
<p>I am new to using Google Cloud Platform and its Developer Console. I am working on a project where I have to automate daily data transfer from Google Sheets to SQL Server using Task Scheduler. I came across an approach which requires me to share the Google Sheet with a service account in order to read data from it in Python. This approach would require me to share each and every sheet in my company's Drive account with the service account, which adds a step of manual intervention.</p>
<p>Is there any method or approach by which I can skip the process of sharing sheets to service account and read sheets directly from my google drive or so?</p>
<p>I tried below script of reading shared Google Sheets to a service account which works fine but that's not what I require.</p>
<pre><code>import gspread, pyodbc, pandas as pd
from oauth2client.service_account import ServiceAccountCredentials
import pandas as pd
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('service_account.json', scope)
client = gspread.authorize(creds)
sheet = client.open('Sample').sheet1
data = sheet.get_all_values()
</code></pre>
<p>I need your assistance with such a methodology, even if it uses an API key instead of OAuth.</p>
|
<python><google-sheets><gspread>
|
2023-03-06 07:44:19
| 1
| 346
|
Mohit Aswani
|
75,648,221
| 19,106,705
|
Upsampling 4d tensor in pytorch
|
<p>I created a 4D tensor of the following size. (B, C, H, W)</p>
<p>And I want to reshape this tensor to the following size. (B, C, 2H, 2W)</p>
<p>Each value expands into a 2×2 block of 4 values, and within each block only the position given by the index tensor keeps the original tensor's value; the rest are zero.</p>
<p>The index follows the following rule.</p>
<p>0 1</p>
<p>2 3</p>
<p>And here is an example below:</p>
<pre class="lang-py prettyprint-override"><code>original tensor:
torch.Size([1, 1, 2, 2])
tensor([[[[1.0000, 0.4000],
[0.2000, 0.5000]]]])
index tensor:
torch.Size([1, 1, 2, 2])
tensor([[[[0, 2],
[1, 1]]]])
</code></pre>
<pre class="lang-py prettyprint-override"><code>output tensor:
torch.Size([1, 1, 4, 4])
tensor([[[[1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.4000, 0.0000],
[0.0000, 0.2000, 0.0000, 0.5000],
[0.0000, 0.0000, 0.0000, 0.0000]]]])
</code></pre>
<p>How can I efficiently transform this tensor while making good utility of GPU? (maybe we can use torch.tensor.scatter_)</p>
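One loop-free way is to turn each index 0–3 into a (row, column) offset inside its 2×2 block and use a single advanced-indexing assignment. Sketched here in NumPy to keep it self-contained; the same indexing expression should carry over to PyTorch tensors (with grids from <code>torch.arange</code>/<code>torch.meshgrid</code>, or via <code>Tensor.index_put_</code>), and advanced-indexing assignment runs on GPU:

```python
import numpy as np

x = np.array([[[[1.0, 0.4],
                [0.2, 0.5]]]])        # (B, C, H, W)
idx = np.array([[[[0, 2],
                  [1, 1]]]])          # position 0..3 of each value in its 2x2 block

B, C, H, W = x.shape
out = np.zeros((B, C, 2 * H, 2 * W), dtype=x.dtype)

dr, dc = idx // 2, idx % 2            # row/col offset inside the 2x2 block
b, c, r, w = np.indices(x.shape)      # coordinate grids over the source tensor
out[b, c, 2 * r + dr, 2 * w + dc] = x
```

Every source element is written exactly once, so there is no read-modify-write hazard to worry about when porting this to a scatter-style GPU kernel.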
|
<python><pytorch><tensor>
|
2023-03-06 07:42:16
| 1
| 870
|
core_not_dumped
|
75,648,159
| 4,473,615
|
raise KeyError(key) KeyError: python matplotlib bar chart key error
|
<p>I have below data frame, on which am creating a chart:</p>
<p><a href="https://i.sstatic.net/aaoja.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aaoja.png" alt="enter image description here" /></a></p>
<p>The expectation is to create as below which is in Excel:</p>
<p><a href="https://i.sstatic.net/FrP9i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FrP9i.png" alt="enter image description here" /></a></p>
<p>But while defining the axes in matplotlib, I am facing an issue:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import random

def la_bar():
    df1 = pd.DataFrame(lst, columns=['source', 'type', 'date', 'count'])  # lst is a data set
    ax = df.plot(x="date", y="count", kind="bar")
    df.plot(x="date", y="source", kind="bar", ax=ax)
    plt.savefig("static/images/image.png")

la_bar()
</code></pre>
<p>am getting key error as below,</p>
<pre><code> raise KeyError(key)
KeyError: 'date'
</code></pre>
<p>What can I try to resolve this?</p>
|
<python><pandas><matplotlib>
|
2023-03-06 07:34:24
| 1
| 5,241
|
Jim Macaulay
|
75,648,150
| 3,735,871
|
Pyspark- how to reformat column names from schema description
|
<p>I'm trying to get the column names of a table and only extract the last part after <code>_</code>. For example, for the column name <code>a_col_name1</code>, I'd like to keep <code>name1</code>.</p>
<p>Below is the code I tried; the substring part doesn't work. Could someone please help with this? Please let me know if I can explain this better. Many thanks.</p>
<pre><code>df = spark.sql("describe mytable") #get the schema of the dataframe
#trying to concat the last part of col names after "_" into a string
col_names = ",".join([n["col_name"] for n in df.collect()]) #this works without subtring
#.split("_",len([n["col_name"] -1) this part trying to get substring is not successful
col_names_short= ",". join([n["col_name"].split("_",len([n["col_name"] -1) for n in df.collect()]) #this doesn't work
</code></pre>
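Spark aside, the string operation itself can be sketched in plain Python with `rsplit`; the column names here are hypothetical:

```python
cols = ["a_col_name1", "b_col_name2", "id"]

# rsplit("_", 1) splits once from the right, so [-1] is everything
# after the last underscore (or the whole name if there is none)
col_names_short = ",".join(name.rsplit("_", 1)[-1] for name in cols)
```

The same expression should drop into the original comprehension, e.g. `",".join(n["col_name"].rsplit("_", 1)[-1] for n in df.collect())`.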
|
<python><string><list><apache-spark><pyspark>
|
2023-03-06 07:32:45
| 1
| 367
|
user3735871
|
75,648,132
| 139,150
|
OpenAI GPT-3 API: Why do I get only partial completion? Why is the completion cut off?
|
<p>I tried the following code but got only partial results like</p>
<pre><code>[{"light_id": 0, "color
</code></pre>
<p>I was expecting the full JSON as suggested on this page:</p>
<p><a href="https://medium.com/@richardhayes777/using-chatgpt-to-control-hue-lights-37729959d94f" rel="nofollow noreferrer">https://medium.com/@richardhayes777/using-chatgpt-to-control-hue-lights-37729959d94f</a></p>
<pre><code>import json
import os
import time
from json import JSONDecodeError
from typing import List
import openai
openai.api_key = "xxx"
HEADER = """
I have a hue scale from 0 to 65535.
red is 0.0
orange is 7281
yellow is 14563
purple is 50971
pink is 54612
green is 23665
blue is 43690
Saturation is from 0 to 254
Brightness is from 0 to 254
Two JSONs should be returned in a list. Each JSON should contain a color and a light_id.
The light ids are 0 and 1.
The color relates a key "color" to a dictionary with the keys "hue", "saturation" and "brightness".
Give me a list of JSONs to configure the lights in response to the instructions below.
Give only the JSON and no additional characters.
Do not attempt to complete the instruction that I give.
Only give one JSON for each light.
"""
completion = openai.Completion.create(model="text-davinci-003", prompt=HEADER)
print(completion.choices[0].text)
</code></pre>
|
<python><openai-api><gpt-3>
|
2023-03-06 07:29:41
| 2
| 32,554
|
shantanuo
|
75,648,108
| 1,635,523
|
QCheckBox stateChanged not triggered reliably
|
<p>Here is a Qt riddle: PythonQt, Win10, within 3D Slicer (I am hoping that this is not the issue).</p>
<ul>
<li>Defined a QCheckBox in the QtDesigner.</li>
<li>Said: <code>ui.checkBox.connect('stateChanged(int)', _on_checkBox)</code></li>
</ul>
<p>The callback:</p>
<pre class="lang-py prettyprint-override"><code>def _on_checkbox(val: int):
    print(f"Value: {val}.")
    ui.checkBox.setValue(0)
</code></pre>
<p><strong>Expected</strong>: Either an endless loop, forever setting that checkbox to False, or
(hoped-for) a behavior that simply enforces False <em>(not my ultimate use-case.
This is just research-code, trying to understand checkboxes)</em>.</p>
<p>Instead I observe:</p>
<p>a.: Starting out at state <code>0</code> (not checked). Click!</p>
<p>b.: Callback is triggered: <code>Value: 2.</code>, checkbox remains unchecked. Click!</p>
<p>c.: <strong>Callback is <em>not</em> triggered, checkbox turns checked!</strong> Click! -> Back to (a), triggering the callback, printing <code>Value: 0.</code>.</p>
<p>Are there two distinct check-state properties on such a checkbox, so that <code>checkBox.setValue(0)</code>
does not reset the 2 received in (b)? Then setting it to
2 in (c) would not be a state change, which would explain the skipped callback.</p>
<p>And the core question, <strong>how do I get to my desired behavior: A callback that
is triggered every time a user clicks on the checkbox while it is enabled?</strong></p>
|
<python><qt>
|
2023-03-06 07:26:54
| 1
| 1,061
|
Markus-Hermann
|
75,648,044
| 2,827,181
|
I would like to use list[int] parameters for input and output, and for internal variable of a function. But I cannot declare them as Python hints
|
<p>While many examples succeed in using hints to describe the items carried by a list, I'm stumbling in their declarations.</p>
<p>I'm willing to manipulate (receive, return, create internally) lists of integers.<br />
Accordingly, I'm using <code>list[int]</code> to mention them.</p>
<p>But my code fails with the message: <em>TypeError: 'type' object is not subscriptable</em>, at the first (<code>def</code>) line.</p>
<pre class="lang-py prettyprint-override"><code>def filtre_valeurs_paires(valeurs: list[int]) -> list[int]:
    valeurs_entieres: list[int] = list(filter(lambda valeur: valeur % 2 == 0, valeurs))
    return valeurs_entieres

candidats: list[int] = [5, 8, -2, 23, 11, 4]
print("Les valeurs paires dans {} sont : {}".format(candidats, filtre_valeurs_paires(candidats)))
</code></pre>
<hr />
<p>The desired output is a list of even integers: [8, -2, 4].</p>
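That TypeError suggests a Python version older than 3.9, where built-in generics such as <code>list[int]</code> are not subscriptable. A sketch that works on older versions uses <code>typing.List</code> instead:

```python
from typing import List

def filtre_valeurs_paires(valeurs: List[int]) -> List[int]:
    # keep only the even values
    return [v for v in valeurs if v % 2 == 0]

candidats: List[int] = [5, 8, -2, 23, 11, 4]
print(filtre_valeurs_paires(candidats))  # [8, -2, 4]
```

Alternatively, on Python 3.7+, adding `from __future__ import annotations` stops annotations from being evaluated at runtime, which makes the original `list[int]` spelling work as well.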
|
<python><arrays><type-hinting>
|
2023-03-06 07:18:11
| 1
| 3,561
|
Marc Le Bihan
|
75,647,830
| 9,798,210
|
How to download the folders from gcs bucket to local using python?
|
<p>I have a model and other folders in a GCS bucket that I need to download to my local machine. I have the service account for that GCS bucket as well.</p>
<p><strong>gs bucket: gs://test_bucket/11100/e324vn241c9e4c4/artifacts</strong></p>
<p>Below is the code I am using to download the folder</p>
<pre><code>from google.cloud import storage

storage_client = storage.Client.from_service_account_json(
    'service_account.json')
bucket = storage_client.get_bucket(bucket_name)

blobs_all = list(bucket.list_blobs())
blobs = bucket.list_blobs(prefix='artifacts')

for blob in blobs:
    if not blob.name.endswith('/'):
        print(f'Downloading file [{blob.name}]')
        blob.download_to_filename(f'./{blob.name}')
</code></pre>
<p>The above code is not working properly. I want to download all the folders that are saved in the artifacts folder (confusion_matrix, model and roc_auc_plot) in the bucket. The image below shows the folders:</p>
<p><a href="https://i.sstatic.net/mZVSG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mZVSG.png" alt="enter image description here" /></a></p>
<p>Can anyone tell me how to download those folders?</p>
|
<python><google-cloud-platform><google-bucket>
|
2023-03-06 06:50:30
| 1
| 1,835
|
merkle
|
75,647,682
| 15,260,108
|
How can i resolve flake8 "unused import" error for pytest fixture imported from another module
|
<p>I wrote a pytest fixture in a fixtures.py file and am using it in my main_test.py. But I am getting this flake8 error: <strong>F401 'utils_test.backup_path' imported but unused</strong> for this code:</p>
<pre><code>@pytest.fixture
def backup_path():
    ...
</code></pre>
<pre><code>from fixtures import backup_path

def test_filename(backup_path):
    ...
</code></pre>
<p>How can I resolve this?</p>
|
<python><pytest><flake8>
|
2023-03-06 06:30:11
| 1
| 400
|
Diyorbek
|
75,647,669
| 584,239
|
In amazon linux 2 how to set defalut python to python 3
|
<p>In Amazon Linux 2, the command <code>python</code> currently points to python2. How do I configure the system so that, for all profiles and commands, python3 is used when the <code>python</code> command is called?
Currently I am editing <code>~/.bashrc</code> and adding <code>alias python=python3</code>, but I want it to be the default across the system.</p>
|
<python><python-3.x><amazon-ec2>
|
2023-03-06 06:28:15
| 1
| 9,691
|
kumar
|
75,647,581
| 5,405,813
|
Reading 7 segment digits of weighing controller
|
<p>I am trying to read the 7-segment display of an industrial weighing machine. As there is no serial-out port on the machine, I have to note down the weight manually. The idea is to use Python computer vision to capture the weight and store it in a text file. The image is as shown <a href="https://i.sstatic.net/Cf8ko.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cf8ko.jpg" alt="7 segment from weighing machine" /></a>. <br/>I have used the following <a href="https://github.com/shanlau/SVM_Recognizing_Digit/blob/master/read_digit.py" rel="nofollow noreferrer">Python</a> code for this purpose.<br/>
The code runs fine until I convert the image to gray; the converted image is <a href="https://i.sstatic.net/gz1Nd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gz1Nd.png" alt="grayed image" /></a>. The problem starts when I apply the threshold to the image; after applying the threshold the image is as shown <a href="https://i.sstatic.net/yFKbF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yFKbF.png" alt="threshold image" /></a>. Any suggestions on how I should proceed?</p>
|
<python><opencv>
|
2023-03-06 06:16:08
| 1
| 455
|
bipin_s
|
75,647,442
| 1,080,517
|
Firestore collection documents accessible directly but not with stream()
|
<p>I am using simple <code>Firestore</code> database with chats collection inside which each chat document has collection of message documents. If I want to access specific message I can do following request</p>
<pre><code>msg_ref = self.db.collection('chats').document(chat_id).collection('messages').document(msg_id)
</code></pre>
<p>I can also access all of the messages for given chat</p>
<pre><code>msgs_ref = db.collection('chats').document(chat_id).collection('messages')

for doc in msgs_ref.stream():
    print(doc.id)
</code></pre>
<p>But I cannot understand why the following command returns an empty list:</p>
<pre><code>chat_ref = db.collection('chats').get()
</code></pre>
<p>And as a result this call never enters the <code>for</code> loop:</p>
<pre><code>chat_ref = db.collection('chats')

for doc in chat_ref.stream():
    print(doc.id)
</code></pre>
<p>Looking at <a href="https://cloud.google.com/firestore/docs/samples/firestore-data-get-all-documents#firestore_data_get_all_documents-python" rel="nofollow noreferrer">example</a> from official documentation</p>
<pre><code>docs = db.collection(u'cities').stream()

for doc in docs:
    print(f'{doc.id} => {doc.to_dict()}')
</code></pre>
<p>I completely don't understand what I might be doing wrong here. Can someone please explain why this is not working</p>
<pre><code>chat_ref = db.collection('chats')

for doc in chat_ref.stream():
    print(doc.id)
</code></pre>
<p>And whether I can do anything to fix it?</p>
|
<python><google-cloud-firestore>
|
2023-03-06 05:51:10
| 1
| 2,713
|
sebap123
|
75,647,409
| 3,682,549
|
Extracting info from customer requests using python regex
|
<p>I have customer request log data as below (showing one request as an example):</p>
<pre><code>req = ["Software not available on Software Center, when tried to raise AHO for required software it opens Software Center with error as 'This Software is not available for you', Need to install following software for client demo urgently - nodejs, intellij, angular, mongo db, compass, Java, Open3DK \\n (from 10.61.107.166) \\n Logged through #OneClickAHD# \\n Contact:Ashutosh Suresh Mitkari, STGPW\nCTZPSAPR \\n Email: Ashutosh_Mitkari@google.com"]
</code></pre>
<p>I need to extract:</p>
<ol>
<li>extract all text before <code>(from</code> and if not encountered return an empty list.</li>
<li>extract ip address after <code>from</code> . Also strip any blank space. If pattern not found return empty list</li>
<li>extract text between <code># #</code>. If pattern not found return empty list</li>
<li>extract the name after <code>contact:</code> <code>till ,</code> . If pattern not found return empty list</li>
<li>extract unit after say from example <code>Contact:Ashutosh Suresh Mitkari,</code>. expected answer='STGPW\nCTZPSAPR'. If pattern not found return empty list</li>
<li>extract email after <code>Email:</code> . If pattern not found return empty list</li>
</ol>
<p>save them in separate list as below:</p>
<p>initialize empty lists for each piece of information</p>
<pre><code>request_list = []
ip_address_list = []
text_between_hashes_list = []
contact_name_list = []
unit_list = []
email_list = []
</code></pre>
<p>My try:</p>
<pre><code>import re

for r in req:
    # extract till first \n
    match = re.search(r'^(.*?)\n', r)
    if match:
        print(match.group(1))

    # extract IP address after 'from'
    match = re.search(r'from\s+([\d\.]+)', r)
    if match:
        print(match.group(1))

    # extract text between # #
    match = re.search(r'#(.*?)#', r)
    if match:
        print(match.group(1))

    # extract name after 'Contact:' till ,
    match = re.search(r'Contact:([^,]*),', r)
    if match:
        print(match.group(1))

    # extract unit after 'Contact:<name>,' till \n\n
    match = re.search(r'Contact:.*?,\s*(.*?)\n\n', r)
    if match:
        print(match.group(1))
</code></pre>
<p>I'm not getting the required result. Need help.</p>
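A sketch using `re.findall`, which conveniently returns an empty list when a pattern is absent (matching the "return empty list" requirement). The request string here is a simplified, hypothetical version of the log entry, and the unit pattern assumes the unit spans the embedded newline as in the expected answer:

```python
import re

# hypothetical, simplified request string (not the full log entry)
req = ("Software not available on Software Center (from 10.61.107.166) \n"
       " Logged through #OneClickAHD# \n"
       " Contact:Ashutosh Suresh Mitkari, STGPW\nCTZPSAPR \n"
       " Email: Ashutosh_Mitkari@google.com")

text_before = re.findall(r'^(.*?)\(from', req)             # [] when '(from' is absent
ip = re.findall(r'\(from\s+([\d.]+)\)', req)               # IP inside '(from ...)'
hashes = re.findall(r'#(.*?)#', req)                       # text between # #
name = re.findall(r'Contact:([^,]*),', req)                # name up to the comma
unit = re.findall(r'Contact:[^,]*,\s*(\S+\n\S+)', req)     # unit spanning the \n
email = re.findall(r'Email:\s*(\S+)', req)                 # address after Email:
```

Each variable is a list, so the "append to a separate list per field" loop reduces to `ip_address_list.extend(ip)` and so on.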
|
<python><regex>
|
2023-03-06 05:44:40
| 2
| 1,121
|
Nishant
|
75,647,387
| 1,652,954
|
How to set max number of concurrent files to be opened for parallelized operations in .py file
|
<p>The .py file contains Flask web services and multiple parallelized code paths. So, each time I want to run the .py file using the commands in the code section below, I have to specify the number of concurrently opened files
via the following command:</p>
<pre><code>ulimit -n 100000
</code></pre>
<p>However, this .py file will be accessed by other users and it will be invoked from the server.
My question is: is there any way to set the command <code>ulimit -n 100000</code> in the .py file instead of setting it manually from the console?</p>
<p><strong>code</strong></p>
<pre><code>ulimit -n 100000
export FLASK_APP=webservice.py
flask run -h 1xx.xx.xxx.xx -p 8187
</code></pre>
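On Unix, a process can raise its own soft limit from Python with the stdlib <code>resource</code> module, up to the hard limit set by the administrator; a sketch, assuming the target of 100000:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

target = 100000
if hard != resource.RLIM_INFINITY:
    target = min(target, hard)  # a process cannot exceed its hard limit

# equivalent of `ulimit -n <target>` for this process and its children
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```

Putting this at the top of webservice.py runs it before Flask opens any sockets; raising the hard limit itself still requires root (or an entry in /etc/security/limits.conf).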
|
<python><flask>
|
2023-03-06 05:41:07
| 1
| 11,564
|
Amrmsmb
|
75,647,373
| 16,971,617
|
Replace a pixel value in an image (numpy) to the closest color in a dict
|
<p>I have an image with 4 channels (RGBA), but I only want to compute the RGB difference for each pixel and replace the pixel with the closest color in a dict.</p>
<pre><code>myMap = {
    'a': [255, 27, 255, 255],
    'b': [255, 255, 27, 255],
    'c': [255, 27, 27, 255],
}
</code></pre>
<p>Let this be my array,</p>
<pre><code>np.random.seed(seed=777)
s = np.random.randint(low=0, high = 255, size=(3, 4, 4))
print(s)
</code></pre>
<p>This is how I replace the pixels, but it is very inefficient because 3 nested loops are used.</p>
<p>The logic is to loop over every pixel, calculate the distance to every value in the dict, then replace the pixel's value with the closest one from the dict.</p>
<p>After running the function, the image should consist of at most 3 colors.</p>
<p>Is there a better way to achieve what I want?</p>
<pre><code>for i in range(s.shape[0]):
    for j in range(s.shape[1]):
        _min = math.inf
        r, g, b, _ = s[i][j].tolist()
        for color in myMap.values():
            cr, cg, cb, ca = color
            color_diff = math.sqrt((r - cr)**2 + (g - cg)**2 + (b - cb)**2)
            if color_diff < _min:
                _min = color_diff
                s[i][j] = np.array([cr, cg, cb, ca])

print(s)
</code></pre>
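The three loops can be collapsed into one broadcasted distance computation; a sketch, assuming the last axis holds the RGBA channels as in the example:

```python
import numpy as np

# palette of allowed colours (RGBA), as in myMap
palette = np.array([
    [255, 27, 255, 255],
    [255, 255, 27, 255],
    [255, 27, 27, 255],
])

np.random.seed(seed=777)
s = np.random.randint(low=0, high=255, size=(3, 4, 4))  # (H, W, RGBA)

# squared RGB distance from every pixel to every palette colour (alpha ignored)
diff = s[..., None, :3] - palette[None, None, :, :3]
dist = (diff ** 2).sum(axis=-1)        # shape (H, W, n_colours)
nearest = dist.argmin(axis=-1)         # index of the closest colour per pixel
out = palette[nearest]                 # shape (H, W, 4)
```

Squared distance gives the same argmin as `math.sqrt`, and `argmin` keeps the first palette entry on ties, matching the loop's strict `<` comparison.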
|
<python><numpy><opencv><optimization>
|
2023-03-06 05:38:34
| 1
| 539
|
user16971617
|
75,647,325
| 5,252,492
|
Neural Network: Detect multiple digits from an image
|
<p>I have a neural network that detects single digits from an image of a 6 digit display (A digital clock with hours, minutes and seconds).</p>
<p>I've trained it on a picture database of 10k images of the clock (tagged with eg- 14:45:02 corresponding to the clock) using most of the techniques mentioned in this youtube video on digit detection: <a href="https://www.youtube.com/watch?v=bte8Er0QhDg" rel="nofollow noreferrer">https://www.youtube.com/watch?v=bte8Er0QhDg</a></p>
<p>It works well on one digit at a time.
However, I have to do image segmentation and contour detection to find my digits every time, because depending on how the picture of the clock was taken, the digits can be randomly spaced.
I then send the separated images to the NN to detect the numbers.</p>
<p>The problem is, depending on how the picture of the clock was taken, I can get numbers bleeding into each other, ie 8 and 9 merge into a weird number image and my number detection algorithm fails.</p>
<p>Is there a reasonable way to create a neural network (or some other AI) that can detect multiple numbers at a time?</p>
<p>ie use the entire image as an input and the output be my answer?
If I could even do 2 numbers at a time, that would be great.</p>
|
<python><opencv><image-processing><neural-network>
|
2023-03-06 05:27:32
| 1
| 6,145
|
azazelspeaks
|
75,647,305
| 7,585,973
|
Can we convert from text to existing header and URL that available in search engine using pandas
|
<p>Here's my input</p>
<pre><code>app
fix
jd_id
zalora
leomaster
</code></pre>
<p>Here's my expected output</p>
<pre><code>app header url
fix Fix.com | Your Source for Genuine Parts & DIY Repair Help https://www.fix.com/
jd_id jdid https://www.jd.id/
zalora ZALORA Indonesia: Belanja Online Fashion & Lifestyle Terbaru https://www.zalora.co.id/
leomaster Leomaster — Manufacturers of fine fabrics https://www.leomaster.it/en/
</code></pre>
<p>It can be done manually by using Google Chrome and an exhausting copy-paste process, but since I have 22,000+ apps that need to be checked, we need a scalable solution.</p>
|
<python><pandas><dataframe><automation><google-api>
|
2023-03-06 05:23:27
| 1
| 7,445
|
Nabih Bawazir
|
75,647,177
| 15,247,669
|
django rest framework giving psycopg2 Error
|
<p>Dear developers, I am an FE dev having trouble setting up the BE. I need to connect a Django project with my FE, which uses React.js. I have installed all the required stuff, and finally, when I ran <code>make runserver</code>, it gave me this error:</p>
<pre><code>raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2'
</code></pre>
<p>When I ran <code>make install</code>, I get another error,</p>
<pre><code>~/.poetry/venv/lib/python3.10/site-packages/poetry/installation/chef.py:152 in _prepare
148│
149│ error = ChefBuildError("\n\n".join(message_parts))
150│
151│ if error is not None:
→ 152│ raise error from None
153│
154│ return path
155│
156│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with psycopg2 (2.9.5) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "psycopg2 (==2.9.5) ; python_version >= "3.6""'.
</code></pre>
<p>I am using <code>Macbook Air M2</code>, if that is related to my specific device.</p>
<p>I am not sure what <code>psycopg2</code> is and why I am getting this error. I just simply need my django project to run smoothly so that I can connect to it from FE. Can someone help me debug this issue, big thanks to all of you.</p>
|
<python><django><postgresql><psycopg2>
|
2023-03-06 04:56:36
| 2
| 744
|
Nyi Nyi Hmue Aung
|
75,646,773
| 3,575,018
|
Preserving long single line string as is when round-triping in ruamel
|
<p>In the code below, I'm trying to load a YAML string and write it back, to ensure that it retains the spacing as is.</p>
<pre><code>import sys

import ruamel.yaml
yaml_str = """\
long_single_line_text: Hello, there. This is nothing but a long single line text which is more that a 100 characters long
"""
yaml = ruamel.yaml.YAML() # defaults to round-trip
yaml.preserve_quotes = True
yaml.allow_duplicate_keys = True
yaml.explicit_start = True
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)
</code></pre>
<p>And the result is</p>
<pre><code>long_single_line_text: Hello, there. This is nothing but a long single line text which
is more that a 100 characters long
</code></pre>
<p>Here the line breaks at around character 87. I'm not sure if this is a setting that can be configured, but keeping the long line as is would help me avoid huge diffs when adding new keys.</p>
<p>If I set a longer width via <code>yaml.width</code>, then genuinely multi-line strings get folded into long single lines, so I can't do that.</p>
<p>Is there any way I can keep the string as is for long single-line scalars?</p>
|
<python><yaml><ruamel.yaml>
|
2023-03-06 03:13:05
| 1
| 1,621
|
thebenman
|
75,646,742
| 7,734,318
|
Reshape matrix before matrix multiplication
|
<p>I have a snippet of the <code>forward</code> step of "Transformer Neural Network" using Pytorch.</p>
<ul>
<li>Source: <a href="https://github.com/pbloem/former/blob/master/former/transformers.py" rel="nofollow noreferrer">https://github.com/pbloem/former/blob/master/former/transformers.py</a></li>
<li>Simplified to below code by me.</li>
</ul>
<p>With:</p>
<ul>
<li><code>b: batch_size</code>, <code>t: input sequence length</code>, <code>k: embedding length</code>, <code>self.num_vocabs: output classes</code></li>
<li><code>self.toprobs(x)</code>: a <code>nn.Linear</code> layer with in/output features <code>(k, num_vocabs)</code>.</li>
</ul>
<pre><code>def forward(self, x):
    tokens = self.token_embedding(x)
    b, t, k = tokens.size()
    x = self.transformer_block(tokens)
    x = x.view(b * t, k)
    x = self.toprobs(x)
    x = x.view(b, t, self.num_vocabs)
    output = F.log_softmax(x, dim=2)
    return output
</code></pre>
<p>Given: <code>b = 2, t = 2, k = 3, self.num_vocabs = 256</code>.</p>
<ul>
<li>The output shape of <code>x</code> after <code>x = self.transformer_block(tokens)</code> is <code>(2, 2, 3)</code>.</li>
<li>Reshaping <code>x</code> to <code>(b * t, k) -> (4, 3)</code> and passing it through <code>self.toprobs(x)</code> gives <code>(4, 256)</code>, which is then reshaped back to <code>(2, 2, 256)</code>.</li>
</ul>
<p>Question:</p>
<ul>
<li>Why does <code>x</code> need to be reshaped to <code>(b * t, k)</code>? If I keep <code>x</code> at shape <code>(2, 2, 3)</code> and pass it through <code>self.toprobs(x)</code>, I still get the same result and shape <code>(2, 2, 256)</code>.</li>
<li>Is there any benefit in speed or memory usage for the matrix-multiplication step?</li>
</ul>
<pre><code>def forward(self, x):
    tokens = self.token_embedding(x)
    b, t, k = tokens.size()
    x = self.transformer_block(tokens)
    # Same result without matrix reshape
    x = self.toprobs(x)
    output = F.log_softmax(x, dim=2)
    return output
</code></pre>
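For reference, the equivalence can be checked outside PyTorch: a linear layer acts only on the last axis, so flattening the leading batch/time axes is purely cosmetic. A NumPy sketch (the weight <code>W</code> stands in for the <code>nn.Linear</code> weight):

```python
import numpy as np

b, t, k, v = 2, 2, 3, 5                      # batch, time, embedding, vocab
rng = np.random.default_rng(0)
x = rng.standard_normal((b, t, k))
W = rng.standard_normal((k, v))              # stand-in for the nn.Linear weight

flat = (x.reshape(b * t, k) @ W).reshape(b, t, v)  # reshape -> matmul -> reshape
direct = x @ W                                     # matmul on the 3-D array directly
print(np.allclose(flat, direct))  # True: both apply W along the last axis
```

Whether one of the two forms is faster in a given framework is a backend detail; numerically they are the same operation.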
|
<python><performance><pytorch><matrix-multiplication><transformer-model>
|
2023-03-06 03:05:31
| 1
| 319
|
Tín Tr.
|
75,646,546
| 1,230,724
|
Combine by-integer and by-boolean numpy slicing
|
<p>I'm looking for a way to combine two index arrays <code>b</code> and <code>i</code> (one of type boolean, one of type integer) to slice another array <code>x</code>.</p>
<pre><code>x = np.array([5.5, 6.6, 3.3, 7.7, 8.8])
i = np.array([1, 4])
b = np.array([True, True, False, False, False])
</code></pre>
<p>The resulting array should be <code>[6.6]</code>, because index <code>1</code> is selected by <code>i</code> (<code>i[0]</code>) and <code>b[1]</code> is <code>True</code>.</p>
<p>In other words, I'm looking for a way to express <code>x[i & b]</code> with <code>i</code> being an integer array. I know how to convert boolean index arrays to integer index arrays (<code>np.where(b)</code>), but that would merely shift the problem to combining two integer index arrays, which I also don't have a solution for.</p>
<p>Obviously subsequent slicing doesn't work (i.e. <code>x[i][b]</code> or vice versa), because the dimensionality changes after each separate slicing.</p>
<p>Any help would be appreciated.</p>
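One way to express the combination (a sketch, not the only one): restrict <code>b</code> to the positions named by <code>i</code>, then keep only the entries of <code>i</code> that survive:

```python
import numpy as np

x = np.array([5.5, 6.6, 3.3, 7.7, 8.8])
i = np.array([1, 4])
b = np.array([True, True, False, False, False])

# b[i] looks up the boolean flag at each position in i ([True, False]),
# and i[b[i]] keeps only the positions where that flag is True
result = x[i[b[i]]]
print(result)  # [6.6]
```

This stays entirely in integer-index space, so there is no dimensionality mismatch between the two slicing steps.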
|
<python><arrays><numpy><numpy-slicing>
|
2023-03-06 02:13:17
| 1
| 8,252
|
orange
|
75,646,317
| 2,985,049
|
pytorch: Only divide values greater than certain value
|
<p>I'm trying to divide one tensor by another in torch, but only when the values of the denominator exceed a certain threshold.</p>
<p>This implementation works, but won't compile in torchdynamo.</p>
<pre class="lang-py prettyprint-override"><code>wsq_ola = wsq_ola.to(wav).expand_as(wav).clone()
min_mask = wsq_ola.abs() < eps
wav[~min_mask] = wav[~min_mask] / wsq_ola[~min_mask]
</code></pre>
<p>I tried to implement the same thing with <code>torch.where</code> instead as follows:</p>
<pre class="lang-py prettyprint-override"><code>wsq_ola = wsq_ola.to(wav).expand_as(wav).clone()
min_mask = wsq_ola.abs() < eps
wav = torch.where(min_mask, wav, wav / wsq_ola)
</code></pre>
<p>Unfortunately, once I make this change, the model no longer converges. Is there some issue with my use of <code>torch.where</code> here? For context, this is part of an STFT layer with no trainable weights.</p>
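A plausible explanation is that <code>torch.where</code> evaluates both branches eagerly, so <code>wav / wsq_ola</code> still produces inf/NaN at the masked positions, and in autograd those NaNs poison the gradient of the division even though the forward values are discarded. The usual workaround is to make the denominator safe before dividing; a sketch with NumPy standing in for torch (the values are illustrative):

```python
import numpy as np

wav = np.array([1.0, 2.0, 3.0])   # illustrative signal
wsq = np.array([0.0, 4.0, 0.5])   # illustrative denominator
eps = 1e-3
mask = np.abs(wsq) < eps

# where() evaluates BOTH branches, so a bare wav / wsq would still compute
# 1.0 / 0.0 at the masked position; in autograd frameworks the resulting
# inf/NaN can leak into the gradient even though the value is discarded.
safe = np.where(mask, 1.0, wsq)        # harmless denominator where masked
out = np.where(mask, wav, wav / safe)  # division is now finite everywhere
print(out)
```

In torch the same shape would be <code>safe = torch.where(min_mask, torch.ones_like(wsq_ola), wsq_ola)</code> followed by the original <code>torch.where</code>, which also keeps torchdynamo happy since no boolean-mask assignment is needed.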
|
<python><pytorch>
|
2023-03-06 00:59:44
| 1
| 7,189
|
Luke
|
75,646,258
| 11,703,015
|
Using contains operator to add new columns
|
<p>I have a DataFrame imported from an Excel file, say, expenses of different types. This DataFrame has a column called 'CONCEPT'. I would like to add two new columns depending on the type of expense. To do so, I identify "key words" in the concept. For instance:</p>
<pre><code>import pandas as pd
dictionary = {
"DATE" : ['12/02/2023', '02/01/2023', '02/01/2023', '10/02/2023'],
"CONCEPT" : ['Supermarket','Restaurant', 'petrol station', 'decathlon'],
"EUR" : [-150,-50,-45,-95]
}
df = pd.DataFrame(dictionary)
df['EXPENSE TYPE'] = pd.Series(dtype="string")
df['EXPENSE TYPE'][df['CONCEPT'].str.upper().str.contains('SUPERMARKET')] = 'FOOD'
df['EXPENSE TYPE'][df['CONCEPT'].str.upper().str.contains('RESTAURANT')] = 'FOOD'
df['EXPENSE TYPE'][df['CONCEPT'].str.upper().str.contains('PETROL')] = 'GAS'
df['EXPENSE TYPE'][df['CONCEPT'].str.upper().str.contains('DECATHLON')] = 'CLOTHES'
</code></pre>
<p>With this code I get the expected output:</p>
<pre><code> DATE CONCEPT EUR EXPENSE TYPE
0 12/02/2023 Supermarket -150 FOOD
1 02/01/2023 Restaurant -50 FOOD
2 02/01/2023 petrol station -45 GAS
3 10/02/2023 decathlon -95 CLOTHES
</code></pre>
<p>However, I would like to add two fields instead of one. So when I identify the word 'Supermarket' I add the EXPENSE TYPE and a SUBCATEGORY, for instance:</p>
<pre><code> DATE CONCEPT EUR EXPENSE TYPE SUBCATEGORY
0 12/02/2023 Supermarket -150 FOOD HOME
1 02/01/2023 Restaurant -50 FOOD OUT OF HOME
2 02/01/2023 petrol station -45 GAS DIESEL
3 10/02/2023 decathlon -95 CLOTHES SPORT
</code></pre>
<p>How could I add two new columns instead of one?</p>
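One sketch (the keyword table is illustrative): keep a keyword-to-(type, subcategory) map and assign both columns in one <code>.loc</code> call, which also avoids the chained-assignment warnings the original indexing style produces:

```python
import pandas as pd

df = pd.DataFrame({
    "DATE": ["12/02/2023", "02/01/2023", "02/01/2023", "10/02/2023"],
    "CONCEPT": ["Supermarket", "Restaurant", "petrol station", "decathlon"],
    "EUR": [-150, -50, -45, -95],
})

# keyword -> (expense type, subcategory); extend as needed
rules = {
    "SUPERMARKET": ("FOOD", "HOME"),
    "RESTAURANT": ("FOOD", "OUT OF HOME"),
    "PETROL": ("GAS", "DIESEL"),
    "DECATHLON": ("CLOTHES", "SPORT"),
}

df["EXPENSE TYPE"] = pd.NA
df["SUBCATEGORY"] = pd.NA
for keyword, (etype, sub) in rules.items():
    hit = df["CONCEPT"].str.upper().str.contains(keyword)
    # .loc fills both columns at once for all matching rows
    df.loc[hit, ["EXPENSE TYPE", "SUBCATEGORY"]] = [etype, sub]

print(df)
```

Adding a third attribute per keyword is then just a longer tuple in the dict and a longer column list in the <code>.loc</code>.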
|
<python><pandas><dataframe><contains>
|
2023-03-06 00:42:13
| 1
| 516
|
nekovolta
|
75,646,247
| 9,291,340
|
Is set.remove() causing a big slowdown in my code?
|
<p>This is a solution to a <a href="https://leetcode.com/problems/jump-game-iv/description/" rel="nofollow noreferrer">leetcode problem</a>. The problem is solved, but I am struggling to understand some weird behavior which is probably python specific. The issue is with the two lines that have a comment.</p>
<p><code>graph</code> is a hashmap that maps integers to sets. <code>graph[arr[index]]</code> is a set containing integers, which I am removing from one by one: <code>graph[arr[index]].remove(j)</code>. I could remove the integers one by one or just do <code>graph[arr[index]] = set()</code> after I am done processing all of them. Initially I removed the integers from the set one by one. This caused my solution to be too slow. This was confusing since removing from a set should be O(1), but maybe the constant factor was too large. I fixed it by doing <code>graph[arr[index]] = set()</code> after the loop. The solution was 50x faster and accepted.</p>
<p>However, even if I leave in this line: <code>graph[arr[index]].remove(j)</code>, as long as I have the <code>graph[arr[index]] = set()</code> after the loop, the solution is still fast. What could be the cause of this? My only guess is an interpreter optimization. Also, I tested the slow code with different sized inputs. The time taken seems to scale linearly with the size of the inputs.</p>
<pre><code>from collections import defaultdict, deque
from typing import List

def minJumps(arr: List[int]) -> int:
    graph = defaultdict(set)
    for i, n in enumerate(arr):
        graph[n].add(i)
    queue = deque()
    queue.append(0)
    visited = set([0])
    steps = 0
    while queue:
        l = len(queue)
        for i in range(l):
            index = queue.popleft()
            if index == len(arr)-1:
                return steps
            if index and index-1 not in visited:
                queue.append(index-1)
                visited.add(index-1)
            if index+1 not in visited:
                queue.append(index+1)
                visited.add(index+1)
            for j in list(graph[arr[index]]):
                graph[arr[index]].remove(j) # removing from a set should be O(1), but maybe a big constant factor?
                if j not in visited:
                    visited.add(j)
                    queue.append(j)
            #graph[arr[index]] = set() # If I uncomment this line then it runs much faster, even if I leave in the previous line
        steps += 1
</code></pre>
|
<python><python-3.x>
|
2023-03-06 00:37:35
| 1
| 528
|
Mustafa
|
75,646,178
| 726,730
|
How can I make a python package (setuptools) and publish in pypi.org?
|
<p><a href="https://stackoverflow.com/questions/67127591/install-python-shout-module-in-windows-10-python-version-3-9/67177448#67177448">In this link</a> I made a .py script and an .exe file while trying to build a Windows python-shout module.</p>
<p>The question now is: how can I build a .whl file and publish it to GitHub and pypi.org?</p>
<p>The package will only have two files: an .exe file, and a .py file which will run the .exe with Popen and exchange data with it over stdin and a pipe.</p>
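For the packaging half, a minimal setuptools <code>pyproject.toml</code> might look like the following (every metadata value here is a placeholder to adapt; the exe-shipping stanza is the part most likely to need adjustment for your layout):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "python-shout-win"            # placeholder: pick a free name on pypi.org
version = "0.1.0"
description = "Windows wrapper around a shout .exe"
requires-python = ">=3.8"

[tool.setuptools]
packages = ["shout_win"]             # a package dir holding the .py and the .exe

[tool.setuptools.package-data]
shout_win = ["*.exe"]                # ship the exe inside the wheel
```

With those files in place, <code>python -m pip install build twine</code>, then <code>python -m build</code> (writes the .whl and sdist into <code>dist/</code>), then <code>python -m twine upload dist/*</code> (needs a PyPI API token) publishes it. Publishing to GitHub is just committing the source; the built wheel can additionally be attached to a GitHub Release. Note a wheel bundling a Windows .exe is effectively Windows-only, which is worth stating in the project description.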
|
<python><python-packaging>
|
2023-03-06 00:16:43
| 1
| 2,427
|
Chris P
|
75,646,032
| 12,688,015
|
Issue with the dequeue function in the custom class
|
<p>I am building a RingBuffer class for my module. However, I am facing some issues with the dequeue function. When I enqueue/dequeue integers, everything is fine. When I enqueue a list, the first dequeue operation returns an empty list instead of the actual values. When I enqueue a string, the first dequeue operation returns meaningless data that says:</p>
<pre><code>''_builtins__'
</code></pre>
<p>and then if I continue to dequeue, or call any methods, it gives an error</p>
<pre><code>Segmentation fault (core dumped)
</code></pre>
<p>Here are the enqueue/dequeue functions:</p>
<pre><code>static PyObject *
RingBuffer_enqueue(RingBuffer *self, PyObject *args)
{
    // it only works with integers; solve that and make it work with any type
    PyObject *item = NULL;
    if (!PyArg_ParseTuple(args, "O", &item))
        return NULL;
    // if it's full, overwrite the oldest item
    if (self->size == self->capacity)
    {
        Py_DECREF(self->items[self->head]);
        self->items[self->head] = item;
        self->head = (self->head + 1) % self->capacity;
    }
    else
    {
        self->items[self->tail] = item;
        self->tail = (self->tail + 1) % self->capacity;
        self->size++;
    }
    Py_RETURN_NONE;
}

static PyObject *
RingBuffer_dequeue(RingBuffer *self)
{
    if (self->size == 0)
        Py_RETURN_NONE;
    PyObject *item = self->items[self->head];
    self->items[self->head] = NULL;
    self->head = (self->head + 1) % self->capacity;
    self->size--;
    return item;
}
</code></pre>
<p>and here's the RingBuffer type</p>
<pre><code>typedef struct
{
    PyObject_HEAD;
    long capacity;
    long size;
    long head;
    long tail;
    PyObject **items;
} RingBuffer;
</code></pre>
<p>What is the problem that I cannot see? Thanks in advance.</p>
|
<python><c><cpython><pyobject>
|
2023-03-05 23:37:39
| 1
| 742
|
sekomer
|
75,646,031
| 11,703,015
|
Sort dataframe by dates
|
<p>I am importing an excel file in which a column is a date in the format dd/mm/yyyy.</p>
<p>When I import it from the Excel file, I think it is read as a string. I need to sort the whole DataFrame by date, so I run this code:</p>
<pre><code>import pandas as pd
dictionary = {
"DATE" : ['12/02/2023', '02/01/2023', '02/01/2023', '10/02/2023'],
"CONCEPT" : ['Supermarket','Restaurant', 'Gas', 'Suscription'],
"EUR" : [-150,-50,-45,-95]
}
df = pd.DataFrame(dictionary)
df['DATE'] = pd.to_datetime(df['DATE']).dt.strftime('%d/%m/%Y')
df = df.sort_values(by=['DATE'],axis=0, ascending=True)
</code></pre>
<p>If you run this example, you will see it works perfectly fine, as the first-row date, 12/02/2023, is sorted into the last position. However, when I use my real Excel file, this date is interpreted as the 2nd of December 2023. Moreover, it sorts the date column as strings and not as dates; therefore, 31/01/2023 goes after 28/02/2023.</p>
<p>How could I solve this problem?</p>
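A common cure (a sketch of the idea) is to keep the column as real datetimes while sorting, and only format back to dd/mm/yyyy at the very end; passing <code>format=</code> (or <code>dayfirst=True</code>) stops pandas from guessing month-first:

```python
import pandas as pd

df = pd.DataFrame({
    "DATE": ["12/02/2023", "02/01/2023", "02/01/2023", "10/02/2023"],
    "CONCEPT": ["Supermarket", "Restaurant", "Gas", "Subscription"],
    "EUR": [-150, -50, -45, -95],
})

# parse to real datetimes, telling pandas the day comes first
df["DATE"] = pd.to_datetime(df["DATE"], format="%d/%m/%Y")
df = df.sort_values("DATE")                       # chronological, not lexicographic
df["DATE"] = df["DATE"].dt.strftime("%d/%m/%Y")   # back to display format last
print(df["DATE"].tolist())  # ['02/01/2023', '02/01/2023', '10/02/2023', '12/02/2023']
```

The original code's problem is that <code>.dt.strftime(...)</code> converts back to strings *before* the sort, so the sort is alphabetical again.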
|
<python><pandas><dataframe><xls>
|
2023-03-05 23:37:09
| 3
| 516
|
nekovolta
|
75,645,812
| 272,023
|
How do I copy bytes to/from SharedMemory into BytesIO?
|
<p>I am creating a <a href="https://docs.python.org/3/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory" rel="nofollow noreferrer"><code>SharedMemory</code> object</a>, where data is being written to it in an I/O bound process for subsequent processing by a separate compute-bound process. In subsequent processing of the data I want to be able to read and write with a file-like object, so I'd like to copy to a <code>io.BytesIO</code> object. I'm not sure how to use the <code>buf</code> memory view object of the SharedMemory.</p>
<p>How can I copy data between a SharedMemory object and a BytesIO object?</p>
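A hedged sketch of both directions: <code>bytes(shm.buf[...])</code> copies out of the shared block into a <code>BytesIO</code>, and slice-assignment into the memoryview copies back in (the payload and sizes are illustrative):

```python
from io import BytesIO
from multiprocessing import shared_memory

payload = b"hello shared world"
shm = shared_memory.SharedMemory(create=True, size=len(payload))
try:
    # SharedMemory -> BytesIO: bytes() copies the region out of the buffer
    shm.buf[: len(payload)] = payload
    bio = BytesIO(bytes(shm.buf[: len(payload)]))
    read_back = bio.read()

    # BytesIO -> SharedMemory: slice-assign straight into the memoryview
    data = BytesIO(b"HELLO SHARED WORLD").getvalue()
    shm.buf[: len(data)] = data
    roundtrip = bytes(shm.buf[: len(data)])
finally:
    shm.close()
    shm.unlink()
print(read_back, roundtrip)
```

In a real pipeline you would likely also store the payload length somewhere (e.g. a length prefix), since the shared block itself has a fixed size and carries no end-of-data marker.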
|
<python><shared-memory><bytesio>
|
2023-03-05 22:46:01
| 1
| 12,131
|
John
|
75,645,762
| 5,637,851
|
Cannot Import Models In Django
|
<p>I am trying to import my models file into my api.py file but, I get this error:</p>
<pre><code>> from dashboard.models import Customer, Lines, Devices
> ModuleNotFoundError: No module named 'dashboard'
</code></pre>
<p>My apps.py is:</p>
<pre><code>from django.apps import AppConfig

class DashboardConfig(AppConfig):
    name = 'dashboard'
</code></pre>
<p>My settings:</p>
<pre><code>INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.sessions',
    'dashboard'
]
</code></pre>
<p>My models:</p>
<pre><code>from django.db import models

class Customer(models.Model):
    account_name = models.CharField(default='', max_length=254, null=True, blank=True)
    accountNumber = models.CharField(default='', max_length=30, null=True, blank=True)
    accountType = models.CharField(default='', max_length=40, null=True, blank=True)
    allowDangerousExtensions = models.BooleanField(default=False)
    billingCycleDay = models.IntegerField(default=0)

    @property
    def customer_name(self):
        return (self.account_name)

class Lines(models.Model):
    accountId = models.CharField(default='', max_length=30, null=True, blank=True)
    deviceName = models.CharField(default='', max_length=10, null=True, blank=True)
    deviceTypeId = models.CharField(default='', max_length=100, null=True, blank=True)

    def __str__(self):
        return self.accountId()

class Devices(models.Model):
    uid = models.CharField(default='', max_length=30, null=True, blank=True)
    online = models.BooleanField(default=False)
    customer = models.ForeignKey(Customer, blank=True, null=True, on_delete=models.CASCADE)

    def __str__(self):
        return self.name()
</code></pre>
<p>and api.py:</p>
<pre><code>from dashboard.models import Customer, Lines, Devices

def create_customer():
    customer = Customer.objects.create()
</code></pre>
<p>But I cannot reference the models in the api.py file. I can reference them in my admin.py, but api.py does not work.</p>
<p>Folder Structure:</p>
<p><a href="https://i.sstatic.net/sZ2IG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sZ2IG.png" alt="enter image description here" /></a></p>
|
<python><django><django-models>
|
2023-03-05 22:31:14
| 2
| 800
|
Doing Things Occasionally
|
75,645,700
| 3,802,177
|
How do i query a DynamoDB and get all rows where "col3" exists & not 0/null (boto3)
|
<p>This is my DynamoDB table defined via the Serverless Framework; I added a global secondary index on "aaa_id":</p>
<pre class="lang-yaml prettyprint-override"><code>Devices:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: Devices
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: serial
        AttributeType: S
      - AttributeName: aaa_id
        AttributeType: N
    KeySchema:
      - AttributeName: serial
        KeyType: HASH
    GlobalSecondaryIndexes:
      - IndexName: aaa_id
        KeySchema:
          - AttributeName: aaa_id
            KeyType: HASH
        Projection:
          ProjectionType: ALL
</code></pre>
<p>I want to query my DynamoDB table and get all items where the column "aaa_id" exists and isn't 0 (or null, if that's even possible for a Number-typed column). Some rows don't include it.
I'd prefer the <code>query</code> method over <code>scan</code>, since I know it's less heavy.</p>
<p>I've been at this for hours; please help.</p>
<p>Some of my failed attempts:</p>
<pre class="lang-py prettyprint-override"><code>import json
import boto3

def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Devices')
    try:
        response = table.query(
            IndexName='aaa_id',
            FilterExpression='aaa_id <> :empty',
            ExpressionAttributeValues={':empty': {'N': '0'}}
        )
        items = response['Items']
        return {
            'statusCode': 200,
            'body': json.dumps(items)
        }
    except Exception as e:
        print(e)
        return {
            'statusCode': 500,
            'body': json.dumps('Error querying the database')
        }

#################################

import json
import boto3
from boto3.dynamodb.conditions import Key, Attr

def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Devices')
    try:
        response = table.query(
            IndexName='aaa_id',
            KeyConditionExpression=Key('aaa_id').gt(0) & Attr('aaa_id').not_exists(),
            ExpressionAttributeValues={
                ':empty': {'N': ''}
            }
        )
        data = response['Items']
        while 'LastEvaluatedKey' in response:
            response = table.query(
                IndexName='aaa_id',
                KeyConditionExpression=Key('aaa_id').gt(0) & Attr('aaa_id').not_exists(),
                ExpressionAttributeValues={
                    ':empty': {'N': ''}
                },
                ExclusiveStartKey=response['LastEvaluatedKey']
            )
            data.extend(response['Items'])
        return {
            'statusCode': 200,
            'body': json.dumps(data),
            'success': True
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps(str(e)),
            'success': False
        }

#######################

import json
import boto3
from boto3.dynamodb.conditions import Key

def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Devices')
    try:
        response = table.query(
            IndexName='aaa_id-index',
            KeyConditionExpression=Key('aaa_id').gt(0)
        )
        items = response['Items']
        while 'LastEvaluatedKey' in response:
            response = table.query(
                IndexName='aaa_id-index',
                KeyConditionExpression=Key('aaa_id').gt(0),
                ExclusiveStartKey=response['LastEvaluatedKey']
            )
            items.extend(response['Items'])
        return {
            'statusCode': 200,
            'body': json.dumps(items),
            'success': True
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)}),
            'success': False
        }

##################################

import boto3
import json

def lambda_handler(event, context):
    dynamodb = boto3.client('dynamodb')
    try:
        response = dynamodb.query(
            TableName="Devices",
            IndexName='aaa_id-index',
            KeyConditionExpression='aaa_id <> :empty',
            # ExpressionAttributeValues={':empty': {'S': ''}}
        )
        return {
            'statusCode': 200,
            'body': json.dumps(response['Items']),
            'status': 'success'
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)}),
            'status': 'error'
        }
</code></pre>
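Not a direct fix of any single attempt, but a hedged sketch of one workable approach. The GSI here is sparse: items without an <code>aaa_id</code> attribute never enter the index, so a <code>Scan</code> of the index already expresses "aaa_id exists". <code>Query</code> cannot help, because it needs one concrete hash-key value, not "any value". This is untested against a live table; the table and index names come from the question:

```python
import json


def lambda_handler(event, context):
    # boto3 imported lazily here only so the sketch parses without AWS set up
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Devices")

    # The GSI is sparse: items lacking aaa_id are simply absent from it,
    # so scanning the *index* (not the base table) gives "aaa_id exists".
    items, kwargs = [], {"IndexName": "aaa_id"}
    while True:
        resp = table.scan(**kwargs)
        # drop aaa_id == 0 client-side (DynamoDB numbers arrive as Decimal)
        items.extend(i for i in resp["Items"] if i["aaa_id"] != 0)
        if "LastEvaluatedKey" not in resp:
            break
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

    return {"statusCode": 200, "body": json.dumps(items, default=str)}
```

Scanning only the sparse index is much cheaper than scanning the full table when most rows lack the attribute, which recovers most of the benefit you were hoping to get from <code>query</code>.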
|
<python><aws-lambda><amazon-dynamodb><boto3><serverless-framework>
|
2023-03-05 22:19:31
| 1
| 5,946
|
Imnotapotato
|
75,645,492
| 308,827
|
Specify a list of columns to move to the end of a pandas dataframe
|
<p>I want to select multiple columns and move them to the end of the dataframe. E.g. current dataframe is:</p>
<pre><code>index a b c
1 2 3 4
2 3 4 5
</code></pre>
<p>How can I provide a list l = ['a', 'b'] naming the columns to move to the end of the dataframe, resulting in:</p>
<pre><code>index c a b
1 4 2 3
2 5 3 4
</code></pre>
<p>Note: I do not want to specify all the columns in the dataframe, only the columns I want to move</p>
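One sketch: build the new column order with a list comprehension, so only the columns to move are named explicitly:

```python
import pandas as pd

df = pd.DataFrame({"a": [2, 3], "b": [3, 4], "c": [4, 5]})
to_end = ["a", "b"]

# columns not in the list keep their relative order; the listed ones follow
df = df[[c for c in df.columns if c not in to_end] + to_end]
print(df.columns.tolist())  # ['c', 'a', 'b']
```

The same expression scales to any number of remaining columns without listing them.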
|
<python><pandas>
|
2023-03-05 21:33:49
| 3
| 22,341
|
user308827
|
75,645,321
| 16,797,805
|
python & sqlite - Bind parameters when multiple columns with same name are present
|
<p>I have the following update query:</p>
<pre><code>UPDATE table1 SET name = ?, surname = ?, birth_date = ?, sex = ? WHERE name = ? AND surname = ?
</code></pre>
<p>I'm running this query through the <code>sqlite3</code> python package, and specifically using this method:</p>
<pre><code>def _connect_and_execute(query, data=None):
    conn = None
    try:
        conn = sqlite3.connect(DB_FILE_PATH)
        cur = conn.cursor()
        if data is None:
            cur.execute(query)
        else:
            cur.execute(query, data)
        conn.commit()
    except sqlite3.Error as e:
        logging.error(e)
        raise e
    finally:
        if conn:
            conn.close()
</code></pre>
<p><code>DB_FILE_PATH</code> is a constant which holds the path to the sql file. The method is called by using the above query as <code>query</code> parameter and the list <code>["John", "Johnson", "01/01/2000", "M", "John", "Johnson"]</code>.</p>
<p>The query executes without errors, but without actually changing the values in the database (checked with a database-explorer tool and with other select queries). Obviously the corresponding record to update already exists.</p>
<p>I did a little search on the internet and my hypothesis for this behaviour is that the <code>name</code> and <code>surname</code> columns are referenced twice for bind parameters. To my understanding, what this <a href="https://www.sqlite.org/c3ref/bind_blob.html" rel="nofollow noreferrer">page states</a> is that it is possible to bind values by using the corresponding indexes. I tried to update the query to:</p>
<pre><code>UPDATE table1 SET name = ?1, surname = ?2, birth_date = ?3, sex = ?4 WHERE name = ?5 AND surname = ?6
</code></pre>
<p>but the behaviour is still the same.</p>
<p>Is it possible to bind parameters to the same columns twice? Is that the problem in my case? If so, which can be a possible solution?</p>
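Reusing placeholders for the same column should be legal in SQLite, so the silent no-op may have another cause (for example, the WHERE values not matching any existing row exactly). One way to rule out binding-order mistakes entirely is named-style placeholders, which bind each value once by name; a self-contained sketch against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (name, surname, birth_date, sex)")
cur.execute("INSERT INTO table1 VALUES ('John', 'Johnson', '01/01/1999', 'M')")

# named placeholders can be repeated in the SQL but are bound exactly once
cur.execute(
    "UPDATE table1 SET name = :name, surname = :surname, "
    "birth_date = :bd, sex = :sex WHERE name = :name AND surname = :surname",
    {"name": "John", "surname": "Johnson", "bd": "01/01/2000", "sex": "M"},
)
conn.commit()
print(cur.execute("SELECT birth_date FROM table1").fetchone())  # ('01/01/2000',)
```

If the named version also updates nothing on your real data, the WHERE clause is probably not matching (whitespace, case, or trailing characters in the stored values), which <code>cur.rowcount</code> after the execute would confirm.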
|
<python><sqlite>
|
2023-03-05 21:01:59
| 1
| 857
|
mattiatantardini
|
75,645,297
| 2,607,447
|
Django - gettext_lazy not working in string interpolation/concatenation (inside list)
|
<p>I have a dictionary of items with multiple properties.</p>
<pre><code>from django.utils.translation import (
    gettext_lazy as _,
)

{"item1": {
    "labels": [
        _("label1"),
        "this is" + _("translatethis") + " label2",
    ],
}}
</code></pre>
<p>These items are then serialized in <strong>DRF</strong>.</p>
<p>The problem is that</p>
<p><code>_("label1")</code> is being translated</p>
<p>but</p>
<p><code>"this is" + _("translatethis") + " label2"</code> is not translated</p>
<p>I also tried string interpolation, f-strings and <code>.format</code>, but nothing worked. When the serializer fetches <code>labels</code>, <code>_("translatethis")</code> is not a proxy object.</p>
<p>Is surrounding the whole string with <code>gettext_lazy</code> the only way to make this work?</p>
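The lazy proxy survives only as long as nothing forces it to a <code>str</code>; <code>+</code>, f-strings and <code>.format</code> all do exactly that, at import time, before any request locale is active. Django ships <code>django.utils.text.format_lazy</code> precisely for composite lazy strings, e.g. <code>format_lazy("this is {} label2", _("translatethis"))</code>. A dependency-free sketch of why <code>+</code> defeats laziness (<code>LazyStr</code> and <code>CATALOG</code> are stand-ins I wrote, not Django internals):

```python
CATALOG = {}  # stands in for the gettext catalog; empty = translations not loaded yet

class LazyStr:
    """Minimal stand-in for a gettext_lazy proxy: resolves only on str()."""
    def __init__(self, key):
        self.key = key
    def __str__(self):
        return CATALOG.get(self.key, self.key)  # looked up at render time
    def __add__(self, other):                   # "+" forces resolution *now*
        return str(self) + other
    def __radd__(self, other):
        return other + str(self)

# concatenation resolves the proxy immediately, before the catalog is ready
eager = "this is " + LazyStr("translatethis") + " label2"

CATALOG["translatethis"] = "TRADUIT"            # translations "arrive" later
lazy = LazyStr("translatethis")

print(eager)      # frozen with the untranslated key
print(str(lazy))  # still a proxy until now, so it sees the loaded catalog
```

So the fix is not wrapping the whole sentence in one <code>gettext_lazy</code> call (though that also works) but keeping the composition lazy with <code>format_lazy</code>.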
|
<python><django><gettext><django-i18n>
|
2023-03-05 20:56:30
| 1
| 18,885
|
Milano
|
75,645,292
| 2,550,810
|
Duplicate tick sets of tick marks with loglog plot
|
<p>I would like customized tick marks for a loglog plot in Matplotlib. But in some cases (although not all), the default tick marks chosen by the loglog plot are not overwritten by my custom tick marks. Instead, both sets of tick marks show up in the plot.</p>
<p>In the following, I use <code>xticks</code> to get custom tick marks and labels at <code>[224, 448, 896, 1792]</code>. The problem is that tick marks at <code>3 x 10^2</code>, <code>4 x 10^2</code> and <code>6 x 10^2</code> also show up. These appear to be left over from the initial call to loglog and are not overwritten by my custom ticks.</p>
<p>Using the same approach to set custom y-ticks works as expected in the plot below, although I have seen the same strange behavior when setting y-ticks in other plots.</p>
<pre><code>import matplotlib.pyplot as plt
P = [224, 448, 896, 1792]
T = [4200, 2300,1300, 1000]
plt.loglog(P,T,'.-',ms=10)
pstr =[f"{p:d}" for p in P]
plt.xticks(P,pstr);
ytick = [1000, 2000, 3000, 4000]
pstr =[f"{pt:d}" for pt in ytick]
plt.yticks(ytick,pstr);
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/cnivM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cnivM.png" alt="Duplicate tick marks with loglog plot" /></a></p>
<p>I don't always see this behavior, but it shows up often enough to be annoying.</p>
<p>Is this a bug? Or is there something I am missing?</p>
<p><strong>Edit</strong> <a href="https://stackoverflow.com/questions/10781077/how-to-disable-the-minor-ticks-of-log-plot-in-matplotlib">This post</a> provides an answer, but only if one realizes that the problem I had was due to minor tick marks. In any case, my question was promptly answered here.</p>
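As the linked post suggests, the leftover 3x10^2 / 4x10^2 / 6x10^2 labels are *minor* log-scale ticks, which <code>xticks</code> does not touch. A sketch of the fix (the Agg backend line is only there so the example runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs anywhere
import matplotlib.pyplot as plt

P = [224, 448, 896, 1792]
T = [4200, 2300, 1300, 1000]

fig, ax = plt.subplots()
ax.loglog(P, T, ".-", ms=10)
ax.set_xticks(P)
ax.set_xticklabels([f"{p:d}" for p in P])
ax.minorticks_off()  # removes the stray minor log ticks and their labels
print(len(ax.xaxis.get_minorticklocs()))
```

Equivalently, <code>ax.xaxis.set_minor_locator(plt.NullLocator())</code> silences just the x-axis while leaving the y-axis minors in place.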
|
<python><matplotlib>
|
2023-03-05 20:55:42
| 1
| 1,610
|
Donna
|
75,645,065
| 7,437,143
|
Error: Untyped decorator (@typeguard) makes function "add_two" untyped [misc]
|
<h2>Context</h2>
<p>While using <a href="https://github.com/agronholm/typeguard/issues/60" rel="nofollow noreferrer">typeguard</a> on a project with <code>mypy</code>, I've encountered the error:</p>
<pre><code>src/pythontemplate/adder.py:6: error: Untyped decorator makes function "add_two" untyped [misc]
</code></pre>
<p>on the following example code:</p>
<pre class="lang-py prettyprint-override"><code>"""Example python file with a function."""
from typeguard import typechecked

@typechecked
def add_two(*, x: int) -> int:
    """Adds a value to an incoming number."""
    return x + 2
</code></pre>
<p>After looking at <a href="https://stackoverflow.com/a/65641392/7437143">this</a> answer, it seems one can resolve this error message by manually adding a typing for the <code>@typechecked</code> function/decorator.</p>
<h2>Issue</h2>
<ol>
<li>I did not quite determine what the typing should be for the <code>@typeguard</code> decorator.</li>
<li>This decorator is in each <code>python</code> file in the project, so adding the typing for that decorator in each file, or even an import to a separate file with that typing, would be quite a bit of duplicate code and/or boilerplate code.</li>
<li>This decorator is specifically for typechecking, so I would expect there may be a more elegant fashion to resolve this issue.</li>
</ol>
<h2>Question</h2>
<p>How can one ensure the <code>@typeguard</code> decorator is typed?</p>
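One workaround on the mypy side is a per-module override rather than annotating the decorator everywhere. A config sketch: the section pattern must match *your* modules where <code>@typechecked</code> is applied (here guessed from the question's <code>src/pythontemplate/adder.py</code> path); also, newer typeguard releases ship fully typed decorators, so upgrading typeguard may remove the error without any config at all:

```ini
; mypy.ini (or the [tool.mypy] equivalent in pyproject.toml)
[mypy]
strict = True

[mypy-pythontemplate.*]
; relax only the decorator check, only for the modules using @typechecked
disallow_untyped_decorators = False
```

This keeps <code>strict</code> semantics everywhere else and avoids repeating any boilerplate typing stub in every file.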
|
<python><mypy><typing><typeguards>
|
2023-03-05 20:13:30
| 1
| 2,887
|
a.t.
|
75,645,059
| 10,162,229
|
error: (-215:Assertion failed) (npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0 in function 'calc'
|
<p><a href="https://github.com/prouast/heartbeat" rel="nofollow noreferrer">I'm trying out this Git repository</a> and I get the error below:</p>
<pre><code>terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.1) ../modules/video/src/lkpyramid.cpp:1257: error: (-215:Assertion failed) (npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0 in function 'calc'
</code></pre>
<p>Hardware:</p>
<p>Raspberry pi 3B v1.2</p>
<p>RAM:1GB</p>
<p>Swapfile:4GB</p>
<p>Software:</p>
<p>Debian GNU/Linux 11 (bullseye) aarch64</p>
<p>Pip packages: opencv-python-headless and opencv-contrib_python == 4.7.0.72</p>
<p>The video I'm trying to pass in is 480x600, 30 fps, about 30 seconds long.</p>
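This assertion from <code>calcOpticalFlowPyrLK</code> usually means the previous-points array is empty, not <code>float32</code>, or not shaped <code>(N, 1, 2)</code>, commonly because the feature detector found nothing in some frame. A NumPy-only sketch of the shape/dtype normalization (the point values are illustrative):

```python
import numpy as np

# calcOpticalFlowPyrLK asserts prevPts is a non-empty float32 array of shape (N, 1, 2)
pts = np.array([[10, 20], [30, 40]])               # e.g. integer corners from elsewhere
assert pts.size > 0, "no features found in this frame; skip it instead of calling calc"
prev_pts = pts.reshape(-1, 1, 2).astype(np.float32)
print(prev_pts.shape, prev_pts.dtype)  # (2, 1, 2) float32
```

In the heartbeat code path, the equivalent guard would be checking the output of the corner detector for emptiness before each optical-flow call, since a bleached or blurry frame can legitimately yield zero features.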
|
<python><python-3.x><opencv><raspberry-pi><raspberry-pi3>
|
2023-03-05 20:12:40
| 0
| 303
|
Gordon Freeman
|
75,644,789
| 2,103,394
|
Type hint the "with" value of a context manager
|
<p>I am looking to create some interfaces for extending a concise SQL library to use different SQL databases using a plugin/bridge pattern. I modeled this on sqlite3 since that is packaged with Python, and I used a context manager that connects to the sqlite3 database file on <code>__init__</code> and returns the cursor on <code>__enter__</code>. I have abstracted this such that the base query builder takes a context manager that returns a cursor (via runtime checkable protocol) as an argument:</p>
<pre class="lang-py prettyprint-override"><code>@runtime_checkable
class CursorProtocol(Protocol):
    def execute(self, sql: str) -> "CursorProtocol":
        ...

    def fetchone(self) -> Any:
        ...

@runtime_checkable
class ContextManagerProtocol(Protocol):
    def __init__(self, *args) -> None:
        ...

    def __enter__(self) -> CursorProtocol:
        ...

    def __exit__(self, __exc_type: Optional[Type[BaseException]],
                 __exc_value: Optional[BaseException],
                 __traceback: Optional[TracebackType]) -> None:
        ...

class SqliteContext:
    """Context manager for sqlite."""
    connection: sqlite3.Connection
    cursor: sqlite3.Cursor

    def __init__(self, model: type) -> None:
        assert type(model) is type, 'model must be child class of SqliteModel'
        assert issubclass(model, SqliteModel), \
            'model must be child class of SqliteModel'
        assert type(model.file_path) in (str, bytes), \
            'model.file_path must be str or bytes'
        self.connection = sqlite3.connect(model.file_path)
        self.cursor = self.connection.cursor()

    def __enter__(self) -> CursorProtocol:
        return self.cursor

    def __exit__(self, __exc_type: Optional[Type[BaseException]],
                 __exc_value: Optional[BaseException],
                 __traceback: Optional[TracebackType]) -> None:
        if __exc_type is not None:
            self.connection.rollback()
        else:
            self.connection.commit()
        self.connection.close()

@dataclass
class SqlQueryBuilder:
    model: type
    context_manager: ContextManagerProtocol = field(default=None)

    # ... a lot more code

    def count(self) -> int:
        """Returns the number of records matching the query."""
        sql = f'select count(*) from {self.model.table}'
        if len(self.clauses) > 0:
            sql += ' where ' + ' and '.join(self.clauses)
        with self.context_manager(self.model) as cursor:
            cursor.execute(sql, self.params)
            return cursor.fetchone()[0]
</code></pre>
<p>The updated code passes the extensive unit test suite I modified for the abstraction process. However, the code editor does not properly pick up on the <code>def __enter__(self) -> CursorProtocol:</code> type hint. If I use <code>with SqliteContext(self.model) as cursor</code> instead of <code>with self.context_manager(self.model) as cursor</code>, the <code>CursorProtocol</code> type hint works as expected; i.e. <code>cursor.{method}</code> calls are properly annotated.</p>
<p>Is this a bug with my code editor, or have I missed something?</p>
|
<python><python-typing>
|
2023-03-05 19:25:28
| 1
| 1,216
|
Jonathan Voss
|
75,644,713
| 1,110,044
|
How to access the subgraph added as node in networkx graph?
|
<p>Let's say I have the following graph:</p>
<pre class="lang-py prettyprint-override"><code>G = nx.Graph()
G.add_edges_from([(12, 8), (12, 11), (11, 8), (11, 7), (8, 1), (8, 9), (8, 13), (7, 0), (7, 6), (0, 1), (0, 2), (6, 2), (6, 5), (5, 3), (3, 2), (3, 4), (1, 4), (4, 9), (4, 10), (10, 14), (10, 13), (14, 15), (13, 15), (13, 14)])
nx.draw(G, with_labels=True)
</code></pre>
<p><a href="https://i.sstatic.net/SR8AE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SR8AE.png" alt="enter image description here" /></a></p>
<p>Then I'm going to take nodes <code>[0, 1, 2, 3, 4]</code> out of the main graph, move them to the subgraph, add this subgraph to the main graph and create missing edges:</p>
<pre class="lang-py prettyprint-override"><code>subgraph_nodes = set([0, 1, 2, 3, 4])
subgraph = G.subgraph(subgraph_nodes).copy()
G.add_node(subgraph)
for subgraph_node in subgraph_nodes:
neighbors = set(G.neighbors(subgraph_node))
external_neighbors = neighbors - subgraph_nodes
for ext_neighbor in external_neighbors:
G.add_edge(ext_neighbor, subgraph)
G.remove_node(subgraph_node)
nx.draw(G, with_labels=True)
</code></pre>
<p><a href="https://i.sstatic.net/cDhIB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cDhIB.png" alt="enter image description here" /></a></p>
<p>Now in <code>G</code>'s nodes I have the following:</p>
<pre class="lang-py prettyprint-override"><code>G.nodes()
</code></pre>
<p><code>NodeView((12, 8, 11, 7, 9, 13, 6, 5, 10, 14, 15, <networkx.classes.graph.Graph object at 0x7f34025a8fa0>))</code></p>
<p>So the question is - how can I access the subgraph via <code>G.nodes()</code> interface (like <code>G.nodes()[12]</code> for the node <code>12</code>)?</p>
<p>Here is colab link for convenience - <a href="https://colab.research.google.com/drive/1j_7SNv6ixV0OVya3vWYmjOQZgwJH3m1v?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1j_7SNv6ixV0OVya3vWYmjOQZgwJH3m1v?usp=sharing</a></p>
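<p>As a hedged aside (node label below is made up): <code>Graph</code> instances are hashable but awkward as node keys, so one workaround is to use a plain label as the node and keep the subgraph reachable through a node attribute, which the <code>G.nodes[...]</code> interface can then index normally:</p>
<pre class="lang-py prettyprint-override"><code>```python
import networkx as nx

G = nx.Graph()
sub = nx.Graph()
sub.add_edges_from([(0, 1), (1, 2)])

# use a plain hashable label as the node; store the subgraph
# itself as a node attribute so G.nodes["cluster"] can reach it
G.add_node("cluster", graph=sub)
G.add_edge("cluster", 12)

retrieved = G.nodes["cluster"]["graph"]
assert retrieved is sub
```</code></pre>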
|
<python><networkx>
|
2023-03-05 19:14:11
| 1
| 1,478
|
eawer
|
75,644,654
| 6,032,140
|
YAML / ruamel output formatting. Are there any other knobs?
|
<p>I have the following Python string (a flow-style, dictionary-like encoding) and am using ruamel.yaml to dump it out as a YAML file.</p>
<pre><code> import sys
import json
import ruamel.yaml
dit = "'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}"
yaml_str = dit.replace('"', '').replace("'",'').replace(':', ': ').replace('{','[').replace('}',']')
print(yaml_str)
yaml = ruamel.yaml.YAML(typ='safe') #
yaml.default_flow_style = False
yaml.allow_duplicate_keys = True
data = yaml.load(yaml_str)
print("data: {}".format(data))
fileo = open("yamloutput.yaml", "w")
yaml.dump(data, fileo)
fileo.close()
</code></pre>
<p>The output it prints as:</p>
<pre><code> - p_d:
- a: 3
- what: 3.6864e-05
- s: lion
- vec_mode:
- 2.5
- -2.9
- 3.4
- 5.6
- -8.9
- -5.67
- 2
- 2
- 2
- 2
- 5.4
- 2
- 2
- 6.545
- 2
- 2
- sst:
- c: -20
- b: 6
- p: panther
</code></pre>
<p>but the expected / desired output is as follows:</p>
<pre><code> p_d:
a: 3
what: 3.6864e-05
s: lion
vec_mode:
- 2.5
- -2.9
- 3.4
- 5.6
- -8.9
- -5.67
- 2
- 2
- 2
- 2
- 5.4
- 2
- 2
- 6.545
- 2
- 2
sst:
c: -20
b: 6
p: panther
</code></pre>
<p>Is there any formatting that can be done/configured in ruamel.yaml to get this desired output, i.e. plain indentation for the mappings, with the dashes used only for the actual arrays?</p>
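<p>One hedged post-processing idea for the output shape above (not a built-in ruamel knob; the helper name is made up): since the cleaned-up string parses each <code>{...}</code> group as a list of single-key maps, you can recursively merge such lists back into mappings before dumping, leaving real value lists (like <code>vec_mode</code>'s numbers) untouched:</p>
<pre class="lang-py prettyprint-override"><code>```python
def to_mapping(node):
    # collapse a list whose items are all single-key dicts into one dict;
    # lists of plain values (e.g. vec_mode's numbers) are left as lists
    if isinstance(node, list):
        if node and all(isinstance(i, dict) and len(i) == 1 for i in node):
            merged = {}
            for item in node:
                (key, value), = item.items()
                merged[key] = to_mapping(value)
            return merged
        return [to_mapping(i) for i in node]
    if isinstance(node, dict):
        return {k: to_mapping(v) for k, v in node.items()}
    return node

# simplified stand-in for the parsed structure
data = [{'p_d': [{'a': 3}, {'vec_mode': [2.5, -2.9]}, {'sst': [{'c': -20}]}]}]
assert to_mapping(data) == {'p_d': {'a': 3,
                                    'vec_mode': [2.5, -2.9],
                                    'sst': {'c': -20}}}
```</code></pre>
<p>Dumping <code>to_mapping(data)</code> instead of <code>data</code> should then produce plain key/value indentation with dashes only under the arrays.</p>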
<p><strong>Update to the query:</strong> I tried extending the <code>dit</code> string above with a second sub-string, as given below.</p>
<pre><code>dit="'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, 1e9, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}\n'{mgbp: '{ifftp:'{ipdp:'{ipdncf:'{rand_type:no, mfn:\"bbbb.txt\", rng_speed:-967901775, np:-1187634210}}}}}"
</code></pre>
<p>Getting the following error:</p>
<p><code>"ruamel.yaml.parser.ParserError: did not find expected <document start>"</code></p>
<p>I tried adding <code>---\n</code> to the beginning of the string but still got the same error.</p>
<p>Any suggestions or comments?</p>
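<p>On the update, a hedged guess (the input below is a simplified, made-up stand-in): after the <code>replace()</code> cleanup the string contains two top-level nodes separated only by a newline, which a single-document load rejects. Turning each chunk into its own document and parsing with <code>yaml.load_all()</code> instead of <code>yaml.load()</code> may help:</p>
<pre class="lang-py prettyprint-override"><code>```python
# simplified stand-in for the cleaned-up two-chunk string
raw = "[a: 1, b: 2]\n[c: 3]"

# a bare newline between two top-level nodes is not one YAML document;
# prefix each chunk with '---' so a multi-document loader accepts it
stream = "\n".join("--- " + chunk for chunk in raw.split("\n"))
# stream is now "--- [a: 1, b: 2]\n--- [c: 3]",
# which yaml.load_all(stream) can parse document by document
print(stream)
```</code></pre>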
|
<python><yaml><ruamel.yaml>
|
2023-03-05 19:01:33
| 1
| 1,163
|
Vimo
|
75,644,557
| 11,004,423
|
numpy get indexes of connected array values
|
<p>I have a 1d numpy array that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>a = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1])
</code></pre>
<p>Is there a way to get the start and end indexes of each cluster of values? Basically I would get this:</p>
<pre class="lang-py prettyprint-override"><code>[
# clusters with value 1 (cluster with values 0 aren't needed)
[
# start and end of each cluster
[0, 2],
[8, 11],
[13, 14],
],
]
</code></pre>
<p>I'm not very skilled with numpy. I know there are lots of useful functions, but I have no idea which ones to use, and searching didn't help since people's problems are usually more specific than mine. I know that, for example, <code>np.split</code> alone won't be enough here.</p>
<p>Please help if you can; I can provide more examples or details if needed, and I'll try to respond as quickly as possible. Thank you for your time.</p>
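<p>A hedged sketch of the usual diff-based approach (assuming the array contains only 0s and 1s): pad with zeros so edge runs are detected, take <code>np.diff</code>, and read run starts where the diff is +1 and inclusive run ends where it is -1:</p>
<pre class="lang-py prettyprint-override"><code>```python
import numpy as np

a = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1])

# pad so runs touching either edge still produce a +1/-1 transition
padded = np.concatenate(([0], a, [0]))
d = np.diff(padded)
starts = np.flatnonzero(d == 1)        # 0 -> 1 transitions
ends = np.flatnonzero(d == -1) - 1     # 1 -> 0 transitions, inclusive end
clusters = np.column_stack((starts, ends))
# clusters → [[0, 2], [8, 11], [13, 14]]
```</code></pre>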
|
<python><arrays><numpy><split>
|
2023-03-05 18:45:32
| 0
| 1,117
|
astroboy
|