| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,178,370 | 15,671,914 | How to retrieve source documents via LangChain's get_relevant_documents method only if the answer is from the custom knowledge base | <p>I am making a chatbot which accesses an external knowledge base <code>docs</code>. I want to get the relevant documents the bot accessed for its answer, but this shouldn't be the case when the user input is something like "hello", "how are you", "what's 2+2", or any answer that is not retrieved from the external knowledge base <code>docs</code>. In this case, I want
<code>retriever.get_relevant_documents(query)</code> or any other line to return an empty list or something similar.</p>
<pre><code>import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
os.environ['OPENAI_API_KEY'] = ''
custom_template = """
This is conversation with a human. Answer the questions you get based on the knowledge you have.
If you don't know the answer, just say that you don't, don't try to make up an answer.
Chat History:
{chat_history}
Follow Up Input: {question}
"""
CUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(custom_template)
llm = ChatOpenAI(
model_name="gpt-3.5-turbo", # Name of the language model
temperature=0 # Parameter that controls the randomness of the generated responses
)
embeddings = OpenAIEmbeddings()
docs = [
"Buildings are made out of brick",
"Buildings are made out of wood",
"Buildings are made out of stone",
"Buildings are made out of atoms",
"Buildings are made out of building materials",
"Cars are made out of metal",
"Cars are made out of plastic",
]
vectorstore = FAISS.from_texts(docs, embeddings)
retriever = vectorstore.as_retriever()
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever,
condense_question_prompt=CUSTOM_QUESTION_PROMPT,
memory=memory
)
query = "what are cars made of?"
result = qa({"question": query})
print(result)
print(retriever.get_relevant_documents(query))
</code></pre>
<p>I tried setting a threshold for the retriever but I still get relevant documents with high similarity scores. And in other user prompts where there is a relevant document, I do not get back any relevant documents.</p>
<pre><code>retriever = vectorstore.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .9})
</code></pre>
| <python><openai-api><langchain><py-langchain> | 2023-09-26 08:41:20 | 4 | 385 | Blue Cheese |
77,178,365 | 3,416,774 | Why is using a multiline comment not allowed in the middle of an if statement? | <p>This code yields <code>invalid syntax</code>:</p>
<pre><code>a = 'hi'
""" comment 1 """
if a == 'hi':
print(a)
""" comment 2 """
elif a != 'hi':
print('hello')
</code></pre>
<p>However this works:</p>
<pre><code>a = 'hi'
# comment 1
if a == 'hi':
print(a)
# comment 2
elif a != 'hi':
print('hello')
</code></pre>
<p>Do you know why that is?</p>
| <python><comments> | 2023-09-26 08:40:23 | 0 | 3,394 | Ooker |
77,178,335 | 4,046,443 | Configure Multitenancy with Langchain and Qdrant | <p>I'm creating a Q&A chatbot and I'm using langchain and qdrant.</p>
<p>I'm trying to configure langchain to be able to use qdrant in a multitenant environment.
The doc from qdrant says that the best approach in my case is to use a "Partition by payload" and use a group_id = OneClient inside the payload of each element of a collection, so that then it's possible to filter on that group_id (which in my case will be the client).
That's the link to the doc <a href="https://qdrant.tech/documentation/tutorials/multiple-partitions/" rel="nofollow noreferrer">https://qdrant.tech/documentation/tutorials/multiple-partitions/</a></p>
<p>I'm using langchain and I have added to the documents that I'm saving inside qdrant a "group_id" metadata field.</p>
<p>I'd like to understand how to filter on group_id when I use langchain.
This is how I'm using langchain to retrieve the answer to a question:</p>
<pre><code>qdrant = Qdrant(
client=QdrantClient(...),
collection_name="collection1",
embeddings=embeddings
)
prompt = ...
llm = ChatOpenAI(...)
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm,
chain_type="stuff",
return_source_documents=True,
retriever=qdrant.as_retriever(),
chain_type_kwargs = {"prompt": prompt}
)
result = qa_chain({"question": question})
</code></pre>
<p>The group_id will represent the client and it is known before the question.</p>
<p>Any help is much appreciated, Thanks.</p>
| <python><chatbot><langchain><qdrant><qdrantclient> | 2023-09-26 08:37:04 | 2 | 783 | disgra |
77,178,300 | 4,397,613 | PydanticUserError('`const` is removed, use `Literal` instead', code='removed-kwargs') | <p>I am using the datamodel-code-generator to generate my pydantic model class.
Here is my schema:</p>
<pre><code>{
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"status": {
"const": "active",
"type": "string"
}
},
"required": [
"name",
"status"
],
"title": "MyBaseModel",
"type": "object"
}
</code></pre>
<p>and this generates a class:</p>
<pre><code>from __future__ import annotations
from pydantic import BaseModel, Field
class MyBaseModel(BaseModel):
name: str = Field(..., title='Name')
status: str = Field('active', const=True)
</code></pre>
<p>But while importing this class, I am getting</p>
<pre><code> raise PydanticUserError('`const` is removed, use `Literal` instead', code='removed-kwargs')
pydantic.errors.PydanticUserError: `const` is removed, use `Literal` instead
</code></pre>
<p>How can I specify in my JSON schema to generate pydantic classes with <code>Literal</code>?
Using versions:
<code>Pydantic:2.0.3</code>
<code>datamodel-code-generator:0.22.0</code></p>
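For reference, the shape of class I am hoping to end up with looks like this (a sketch only; I use a plain class as a stand-in for <code>pydantic.BaseModel</code> so it runs anywhere, with <code>typing.Literal</code> in place of the removed <code>const</code> kwarg):

```python
from typing import Literal, get_args

class MyBaseModel:  # stand-in for pydantic.BaseModel, just to keep this runnable
    name: str
    status: Literal["active"] = "active"

# The Literal annotation carries the single allowed value at runtime:
print(get_args(MyBaseModel.__annotations__["status"]))  # ('active',)
```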
| <python><pydantic> | 2023-09-26 08:32:36 | 1 | 529 | shane |
77,178,175 | 1,826,376 | Django ValueError - didn't return an HttpResponse object. It returned None instead | <p>I am new to the Django framework. I just want to save employee data using a form, but I always get this error no matter what I adjust in my code: <code>ValueError at /employees/create_employee/ The view employees.views.create_employee didn't return an HttpResponse object. It returned None instead.</code> Here is my code:</p>
<p>Project folder:
<a href="https://i.sstatic.net/stds0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/stds0.png" alt="enter image description here" /></a></p>
<p><strong>models.py - employees</strong></p>
<pre><code>from django.db import models
from datetime import date
# Create your models here.
class Employee(models.Model):
first_name = models.CharField(max_length=255)
middle_name = models.CharField(max_length=255)
last_name = models.CharField(max_length=255)
sex = models.CharField(max_length=20)
position = models.CharField(max_length=100)
division = models.CharField(max_length=50)
created_at = models.DateField(default=date.today)
updated_at = models.DateField(default=date.today)
</code></pre>
<p><strong>employees/forms.py</strong></p>
<pre><code>from django import forms
from .models import Employee
class EmployeeForm(forms.ModelForm):
class Meta:
model = Employee
exclude = ['created_at', 'updated_at']
</code></pre>
<p><strong>employees/views.py</strong></p>
<pre><code>from django.shortcuts import render, redirect
from .forms import EmployeeForm
# Create your views here.
def create_employee(request):
if request.method == 'POST':
form = EmployeeForm(request.POST)
if form.is_valid():
form.save()
return redirect('employee_list') # redirect to list view or another page
else:
return render(request, 'employee_form.html', {'form':form})
</code></pre>
<p><strong>employees/templates/employees/employee_form.html</strong></p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Create Employee</title>
</head>
<body>
<h1>Create Employee</h1>
<form method='post'>
{% csrf_token %}
{{ form.as_p }}
<button type="submit">Save</button>
</form>
</body>
</html>
</code></pre>
<p><strong>employees/urls.py</strong></p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path('create_employee/', views.create_employee, name='create_employee'),
]
</code></pre>
<p><strong>project's main urls.py</strong></p>
<pre><code>from django.contrib import admin
from django.urls import include, path
urlpatterns = [
path('admin/', admin.site.urls),
path('employees/', include('employees.urls'))
</code></pre>
| <python><django> | 2023-09-26 08:16:04 | 2 | 1,276 | Eli |
77,178,172 | 694,733 | Why does Pylance show an error for one function with type hints but not the other? | <p>VSCode Pylance v2023.9.20 with strict checking is giving me a type error on the first function call, but not the second. The only difference is that in the first, the argument is wrapped in a list.</p>
<pre><code>def listTupleFunc(arg: list[tuple[str, tuple[str, ...]]]) -> None:
pass
listTuple = [('x', ('a', 'b'))]
listTupleFunc(listTuple) # Error here
def tupleOnlyFunc(arg: tuple[str, tuple[str, ...]]) -> None:
pass
tupleOnly = ('x', ('a', 'b'))
tupleOnlyFunc(tupleOnly) # OK
</code></pre>
<p>And the error message is:</p>
<pre class="lang-none prettyprint-override"><code>Argument of type "list[tuple[Literal['x'], tuple[Literal['a'], Literal['b']]]]" cannot be assigned to parameter "arg" of type "list[tuple[str, tuple[str, ...]]]" in function "listTupleFunc"
Β Β "list[tuple[Literal['x'], tuple[Literal['a'], Literal['b']]]]" is incompatible with "list[tuple[str, tuple[str, ...]]]"
Β Β Β Β Type parameter "_T@list" is invariant, but "tuple[Literal['x'], tuple[Literal['a'], Literal['b']]]" is not the same as "tuple[str, tuple[str, ...]]"
</code></pre>
<p>If I change it so that <em>some</em> of the literals are f-strings, surprisingly the error disappears:</p>
<pre><code># These work OK somehow
listTuple = [(f'x', ('a', f'b'))]
listTuple = [(f'x', (f'a', 'b'))]
# But these don't
listTuple = [('x', (f'a', f'b'))]
listTuple = [(f'x', ('a', 'b'))]
</code></pre>
<p>Is this a bug in Pylance or am I missing something obvious? How do I get rid of the error?</p>
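One workaround I found (sketch below; it runs, though of course Pylance itself isn't exercised at runtime) is annotating the variable explicitly, so inference never produces the narrow <code>Literal</code> types in the first place and the invariant <code>list</code> type parameter matches exactly:

```python
def listTupleFunc(arg: list[tuple[str, tuple[str, ...]]]) -> None:
    pass

# Annotating the variable forces str instead of the inferred Literal types,
# so the list element type is identical to the parameter's element type.
listTuple: list[tuple[str, tuple[str, ...]]] = [('x', ('a', 'b'))]
listTupleFunc(listTuple)  # no error under strict checking
```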
| <python><python-typing><pyright> | 2023-09-26 08:15:33 | 2 | 16,316 | user694733 |
77,177,914 | 9,363,181 | Unable to run the python script via CLI | <p>I have the project structure below:</p>
<pre><code>project_name/
βββ provisioner/
β βββ src/
β βββ common/
β βββ utility/
β βββ provisioner_logger.py
βββ scripts/
βββ validate/
βββ validate_provision.py
</code></pre>
<p>Now, I am trying to run this <code>validate_provision.py</code> via the <code>CLI</code>, which causes an error. It has import statements from the common package, as listed below:</p>
<pre><code>from common.utility.provisioner_logger import get_provisioner_logger
</code></pre>
<p>I am trying to run this script locally and on the gitlab stage in a docker image. So, I tried running this below command (assuming the current working directory is <code>scripts</code>):</p>
<pre><code>python ./validate/validate_provision.py "$(cat ../payload.json)"
</code></pre>
<p>It gives below error:</p>
<pre><code>Traceback (most recent call last):
File "./validate/validate_provision.py", line 4, in <module>
from common.utility.provisioner_logger import get_provisioner_logger
ModuleNotFoundError: No module named 'common'
</code></pre>
<p>However, when run via IDE directly this script works fine, with absolutely no errors.</p>
<p>I tried printing <code>sys.path</code> from the root to check if my project is listed under the Python paths. In the Docker image it is not listed; it is present on my local machine, but the script still doesn't work locally either.</p>
<p>So to include it in my paths, I tried running the below command as well:</p>
<p><code>python -m scripts.validate.validate_provision.py</code></p>
<p>from the root, but I get the same error. So, what am I missing here? Any help is appreciated.</p>
<p><strong>P.S</strong>: I have <code>__init__.py</code> file in every package.</p>
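For completeness, the <code>sys.path</code> workaround I am experimenting with looks like this (a sketch; the relative path assumes the layout above, i.e. the script in <code>scripts/validate/</code> and the <code>common</code> package under <code>provisioner/src/</code>, and whether this is the right fix is exactly my question):

```python
import os
import sys

# Hypothetical stand-in for the top of validate_provision.py.
# Assumed layout: this file sits in scripts/validate/, and the 'common'
# package lives under provisioner/src/, two levels up and across.
script_path = os.path.abspath(
    globals().get("__file__", "scripts/validate/validate_provision.py")
)
script_dir = os.path.dirname(script_path)
package_root = os.path.abspath(
    os.path.join(script_dir, "..", "..", "provisioner", "src")
)
sys.path.insert(0, package_root)

# After this, `from common.utility.provisioner_logger import ...` can resolve
# no matter which directory the CLI was started from.
print(package_root in sys.path)  # True
```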
| <python><python-3.x> | 2023-09-26 07:40:53 | 2 | 645 | RushHour |
77,177,877 | 2,624,876 | What is `except*` syntax in Python? (TryStar in ast module) | <p>I came across <a href="https://docs.python.org/3/library/ast.html#ast.TryStar" rel="noreferrer">this documentation in the <code>ast</code> module</a> for a version of the <code>try</code>/<code>except</code> block with an extra asterisk. The documentation doesn't explain what it is, and gives a completely generic example:</p>
<blockquote>
<h2><code>class ast.TryStar(body, handlers, orelse, finalbody)</code></h2>
<p><code>try</code> blocks which are followed by <code>except*</code> clauses. The attributes are the same as for Try but the ExceptHandler nodes in handlers are interpreted as except* blocks rather then except.</p>
<pre class="lang-py prettyprint-override"><code>print(ast.dump(ast.parse("""
try:
...
except* Exception:
...
"""), indent=4))
</code></pre>
</blockquote>
<p>What is <code>except*</code> and what is it for? Is it deprecated, or up-and-coming?</p>
<p>(And perhaps more importantly, what is the feature called? except-star? except-glob? except-asterisk? try-star?)</p>
| <python><exception><syntax> | 2023-09-26 07:35:19 | 1 | 4,298 | 1j01 |
77,177,810 | 11,261,546 | Differences in OpenCV image matrix after re-saving as PNG, despite matching dependencies | <p>I am working with a colleague, and we both have identical Ubuntu installations (actually, same docker images) with the same packages and dependencies. We decided to investigate a peculiar behavior when working with OpenCV-Python. We both took the same JPEG image file (we verified using <code>md5sum</code> that the files are equal) and opened it using OpenCV, but to our surprise, we observed differences in the resulting image matrix.</p>
<p>To test this further, we used <code>cv2.imwrite</code> to re-save the image as a PNG file and compared the resulting PNG files using <code>md5sum</code>. Interestingly, the checksums were different despite our matching configurations.</p>
<p>We double checked that we have the same versions of:</p>
<ul>
<li>Ubuntu</li>
<li>OpenCV</li>
<li>Python</li>
<li>libjpeg</li>
<li>libpng</li>
</ul>
<p>We would like to understand the reason behind this behavior and what could be causing these differences in the resulting PNG images.</p>
<p>Is there any specific configuration or parameter that we might be overlooking when using OpenCV to save images as PNG? Any insights or suggestions on why the saved PNG files differ would be greatly appreciated.</p>
| <python><opencv> | 2023-09-26 07:25:47 | 0 | 1,551 | Ivan |
77,177,654 | 7,965,552 | Microsoft Teams webhook returns a 404 response for an Adaptive Card | <p>I was trying to send data to Microsoft Teams with a valid webhook; my code is below:</p>
<pre><code>import requests
import json
url = "https://webhooks.valid_url.com/WebhookHandler/test"
payload = json.dumps({
"type": "message",
"attachments": [
{
"contentType": "application/vnd.microsoft.card.adaptive",
"contentUrl": None,
"content": {
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"type": "AdaptiveCard",
"version": "1.2",
"body": [
{
"type": "TextBlock",
"text": "For Samples and Templates, see [https://adaptivecards.io/samples](https://adaptivecards.io/samples)"
}
]
}
}
]
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.status_code)
</code></pre>
<p>I followed this <a href="https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using?tabs=cURL#send-adaptive-cards-using-an-incoming-webhook" rel="nofollow noreferrer">Doc</a> to write this code, but each time I try to send data it returns 404, even though the webhook URL is valid.</p>
<p>If I update the payload to this it sends the message perfectly.</p>
<pre><code>payload = {
"text": "Hello, this is a message sent to Teams from Python!",
}
</code></pre>
<p>From my investigation, a pure text message is sent fine, but if I try to add any extra design it returns 404. So my question is: what am I missing, when the <a href="https://learn.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/cards/cards-reference#adaptive-card" rel="nofollow noreferrer">Doc</a> says the Adaptive Card is valid, yet when I try to use their code it returns 404?</p>
| <python><python-3.x><python-requests><microsoft-teams><adaptive-cards> | 2023-09-26 07:01:48 | 0 | 2,353 | Antu |
77,177,548 | 9,202,041 | Hourly data out of a 15-minute interval dataset | <p>I am looking for help with Python code.</p>
<p>I have data with 15-minute values for 1 year. I just want to extract the values at hourly intervals. No need to apply a mean or any function; just take the data as a subset.</p>
<p><a href="https://docs.google.com/spreadsheets/d/1Eo3MUsKG1PHcth6Cf73MQFU4bt5BADjW/edit#gid=1477101306" rel="nofollow noreferrer">Here's my data</a></p>
<p>I am trying this code, but it results in many missing values. The final dataset should have 35040/4 = 8760 values.</p>
<pre><code>final_TYP_hourly = final_TYP
final_TYP_hourly['DateTime'] = pd.to_datetime(final_TYP_hourly.index)
final_TYP_hourly['TimeDiff'] = (final_TYP_hourly['DateTime'] - final_TYP_hourly['DateTime'].iloc[0]).dt.total_seconds() / 60
filtered_data = final_TYP_hourly[final_TYP_hourly['TimeDiff'] % 60 == 0]
filtered_data.reset_index(drop=True, inplace=True)
print(filtered_data)
</code></pre>
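To show concretely what I mean by just taking every 4th value (a pure-Python sketch with made-up readings, since my real data is in the linked spreadsheet):

```python
from datetime import datetime, timedelta

start = datetime(2023, 1, 1)
# 24 hours of 15-minute timestamps: 96 readings of (timestamp, value)
readings = [(start + timedelta(minutes=15 * i), i) for i in range(96)]

# Every 4th row is an on-the-hour sample; no mean, no resampling.
hourly = readings[::4]
print(len(hourly))          # 24
print(hourly[1][0].minute)  # 0  (01:00:00 exactly)
```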
| <python><pandas><subset> | 2023-09-26 06:43:02 | 0 | 305 | Jawairia |
77,177,457 | 9,072,753 | How to statically protect against str == enum comparisons? | <p>More often than not, I make the following mistake:</p>
<pre><code>import enum
class StatusValues(enum.Enum):
one = "one"
two = "two"
def status_is_one(status: str):
return status == StatusValues.one
</code></pre>
<p>A string will never compare equal to an enum member; the comparison should be against <code>StatusValues.one.value</code>.</p>
<p>Is there a <code>strcmp(status, StatusValues.one)</code>-ish function so that my pyright will error on the line that I am comparing string with a class? Is there a good way to protect against such mistakes?</p>
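What I have in mind is a hand-rolled helper along these lines (a sketch; the <code>strcmp</code> name and signature are my own invention), where the annotations give the type checker a place to catch a str-vs-enum mix-up:

```python
import enum

class StatusValues(enum.Enum):
    one = "one"
    two = "two"

def strcmp(value: str, member: StatusValues) -> bool:
    # Comparing against .value keeps both sides str, so the accidental
    # `value == member` comparison can never slip through untyped.
    return value == member.value

def status_is_one(status: str) -> bool:
    return strcmp(status, StatusValues.one)

print(status_is_one("one"))  # True
print(status_is_one("two"))  # False
```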
| <python><python-3.x><enums><python-3.7> | 2023-09-26 06:24:49 | 1 | 145,478 | KamilCuk |
77,177,102 | 12,427,876 | Unable to type spaces in "multiline" of Pysimplegui | <p>I'm trying to sanitize input typed in <code>sg.Multiline</code>, and I have this:</p>
<pre class="lang-py prettyprint-override"><code>def sanitize_query(window: sg.Window, values: dict):
# Sanitize query: replace/remove/escape all unallowed characters in command-line
# Spaces are allowed in query
query = values['-INPUT-QUERY-']
sanitized_query = query
# Replace newline with space
if "\n" in query:
sanitized_query = query.replace("\n", " ")
# Replace double quote with single quote
if "\"" in query:
sanitized_query = query.replace("\"", "'")
# Replace backslash with double backslash
if "\\" in query:
sanitized_query = query.replace("\\", "\\\\")
# Update query in GUI
window['-INPUT-QUERY-'](sanitized_query)
values.update({'-INPUT-QUERY-': sanitized_query})
# ... (in event loop)
# Input changed
if '-INPUT-' in event:
# Sanitize query
if event == '-INPUT-QUERY-':
sanitize_query(window=window, values=values)
</code></pre>
<p>However, this function makes me unable to type any spaces in the query. Why is that?</p>
| <python><pysimplegui> | 2023-09-26 04:53:08 | 1 | 411 | TaihouKai |
77,176,974 | 5,212,614 | How can we use Scipy Optimize to Maximize Revenue Columns in a Dataframe | <p>I have a dataframe that looks like this.</p>
<pre><code>from scipy.optimize import minimize
import numpy as np
import pandas as pd
data = [{'Month': '2020-01-01', 'Stadium':3000, 'Casino':3000, 'Airport':3000, 'Max':20000},
{'Month': '2020-02-01', 'Stadium':3000, 'Casino':5000, 'Airport':5000, 'Max':10000},
{'Month': '2020-03-01', 'Stadium':7000, 'Casino':5000, 'Airport':7000, 'Max':12000},
{'Month': '2020-04-01', 'Stadium':3000, 'Casino':6000, 'Airport':2000, 'Max':10000}]
df = pd.DataFrame(data)
cols = ['Stadium', 'Casino', 'Airport']
df['Aggregations'] = df[cols].sum(axis=1)
df
</code></pre>
<p><a href="https://i.sstatic.net/e2Whf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e2Whf.png" alt="enter image description here" /></a></p>
<p>I am trying to maximize the 'Aggregation' column subject to the constraints in the 'Max' column, so I can't exceed the limit in the Max column.</p>
<p>The columns named Stadium, Casino, and Airport show revenue data. I want to maximize the revenue, per date, without exceeding the Max value. Lower bound is 0 and upper bound is .5.</p>
<pre><code>So, in row=0, Stadium, Casino, and Airport would be .333 each.
In row=1, Casino and Airport would be .5 each, and Stadium would be 0.
In row=2, Stadium and Airport would be .5 each, and Casino would be 0.
In row=3, Stadium would be .333, Casino would be .5, and Airport would be .1.
</code></pre>
<p>I am playing with this code, but it's giving me .5 for everything!</p>
<pre><code># Define the objective function to maximize
def objective_function(variables):
return -np.sum(variables)
# Define the constraint function to enforce the 'Max' constraint
def max_constraint(variables):
return df['Max'].values - np.sum(variables)
# Initial guess for optimization
initial_guess = np.zeros(len(cols))
# Constraints
constraints = [{'type': 'ineq', 'fun': max_constraint}]
# Bounds for each column
bounds = [(0, .5)]
# Perform the optimization
result = minimize(objective_function, initial_guess, bounds=bounds, constraints=constraints)
# Retrieve the optimized values for the columns
optimized_values = result.x
# Update the DataFrame with the optimized values
df[cols] = optimized_values
# Recalculate the 'Aggregations' column
df['Aggregations'] = df[cols].sum(axis=1)
# Display the updated DataFrame
print(df)
</code></pre>
<p><a href="https://i.sstatic.net/5l8ab.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5l8ab.png" alt="enter image description here" /></a></p>
| <python><pandas><scipy-optimize> | 2023-09-26 04:13:33 | 0 | 20,492 | ASH |
77,176,780 | 9,983,652 | NotImplementedError: float16 indexes are not supported | <p>I just upgraded to pandas 2, and now I get the error below for the code below. I was using float16 to reduce memory use before without any problem. Where is the problem? Does pandas 2 not allow using float16? Before pandas 2, the code was fine.</p>
<pre><code>file='test_read_float16.csv'
df=pd.read_csv(file,sep='\t')
df
Depth 2023-05-12 2023-05-12 0:20 2023-05-12 0:36
0 0 19.593750 19.296875 20.59375
1 1 23.296875 21.906250 21.00000
2 2 112.187500 112.187500 111.68750
3 3 180.750000 180.750000 180.25000
4 4 187.375000 188.500000 188.12500
</code></pre>
<pre><code>df=df.astype('float16',errors='ignore')
df=df.set_index('Depth')
df
NotImplementedError Traceback (most recent call last)
cnrl\users\yongnual\Data\Spyder_workplace\DTS_dashboard\DTS_dashboard_v201_injectorDateRange_reducememory_seeqcrossplot_calcfluidlevel_crossplotseeq_crossplotspm_b12dtscrossplot_pandas2_parquet.ipynb Cell 205 line 2
1 df=df.astype('float16',errors='ignore')
----> 2 df=df.set_index('Depth')
3 df
File c:\Anaconda\envs\dash2\lib\site-packages\pandas\core\frame.py:5915, in DataFrame.set_index(self, keys, drop, append, inplace, verify_integrity)
5907 if len(arrays[-1]) != len(self):
5908 # check newest element against length of calling frame, since
5909 # ensure_index_from_sequences would not raise for append=False.
5910 raise ValueError(
5911 f"Length mismatch: Expected {len(self)} rows, "
5912 f"received array of length {len(arrays[-1])}"
5913 )
-> 5915 index = ensure_index_from_sequences(arrays, names)
5917 if verify_integrity and not index.is_unique:
5918 duplicates = index[index.duplicated()].unique()
File c:\Anaconda\envs\dash2\lib\site-packages\pandas\core\indexes\base.py:7067, in ensure_index_from_sequences(sequences, names)
7065 if names is not None:
7066 names = names[0]
-> 7067 return Index(sequences[0], name=names)
7068 else:
7069 return MultiIndex.from_arrays(sequences, names=names)
...
578 # asarray_tuplesafe does not always copy underlying data,
579 # so need to make sure that this happens
580 data = data.copy()
NotImplementedError: float16 indexes are not supported
</code></pre>
| <python><pandas> | 2023-09-26 02:55:32 | 1 | 4,338 | roudan |
77,176,669 | 21,575,627 | Optimizing a geometrical algorithm [ points inside a region inside a circle ] | <p>This is from a coding assessment. The problem is as such:</p>
<blockquote>
<p>A node is a point on the x-y plane (i, j) where i,j are integers. Given a circle with radius <code>R</code> originating from <code>(xl, yl)</code> and a rectangular grid with bottom left coordinate (x1, y1) and top right coordinate (x2, y2), write a program that returns the number of nodes within the grid that are contained within (or on the perimeter of) the circle.</p>
</blockquote>
<p>I wrote this program:</p>
<pre><code>def numPointsInCircle(x1, y1, x2, y2, xl, yl, R):
num = 0
for x in range(x1, x2 + 1):
for y in range(y1, y2 + 1):
if (x - xl) ** 2 + (y - yl) ** 2 <= R ** 2:
num += 1
return num
</code></pre>
<p>The logic is: the loop gives all <code>(x, y)</code> "nodes" within the rectangular grid, and for each one, we test whether their distance from the center of the circle is greater than the radius.</p>
<p>I passed 9/15 test cases and failed the rest due to time limit exceeded. How can this be optimized to be more efficient?</p>
<p>Some things I tried:</p>
<ul>
<li>restricting the grid for the top right to be at most the circle's top right and the bottom right to be at most the circle's bottom right</li>
<li>early break conditions when <code>(x - xl)</code> or <code>(y - yl)</code> alone surpasses the <code>R</code></li>
</ul>
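One direction I have been sketching myself (not part of the assessment code, and it assumes all inputs are integers) replaces the inner loop with arithmetic: for each integer y, the x values inside the circle form one contiguous interval, so each row can be counted in O(1) and the whole grid in time proportional to its height rather than its area:

```python
import math

def numPointsInCircleFast(x1, y1, x2, y2, xl, yl, R):
    num = 0
    # Only rows of y that can intersect the circle matter.
    for y in range(max(y1, yl - R), min(y2, yl + R) + 1):
        # Largest |dx| with dx*dx <= R*R - dy*dy, computed exactly.
        span = math.isqrt(R * R - (y - yl) ** 2)
        # Intersect [xl - span, xl + span] with the grid's x range.
        lo = max(x1, xl - span)
        hi = min(x2, xl + span)
        if lo <= hi:
            num += hi - lo + 1
    return num

print(numPointsInCircleFast(-5, -5, 5, 5, 0, 0, 3))  # 29
```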
| <python><algorithm><data-structures><geometry><time-complexity> | 2023-09-26 02:18:42 | 2 | 1,279 | user129393192 |
77,176,408 | 15,542,245 | A regex pattern for specifying characters within a word | <p>Apparently my question was not specific enough. I will add my solution below my original question.</p>
<p><strong>Question</strong>:
I have three-character number words, for example <code>001,002,003</code>. Where there are conversion errors like <code>Q04</code> (instead of <code>004</code>), I need a regex pattern to identify this kind of error.</p>
<p>Requirements:</p>
<ul>
<li>A three number word delimited by space characters that contains at least one capital letter</li>
<li>the pattern should <em>not</em> match <code>004, 014, 114</code> type examples</li>
<li>the maximum value is 200</li>
</ul>
<p>I am planning to identify these errors in a bunch of text files.</p>
<p><strong>Additional information</strong> which was left out for clarity (apparently I could have made this clearer)</p>
<p>Test string (only roll entry number 003 is correct; the others, <code>00Q, 0Q2</code> etc., are incorrect and need to be identified as such):</p>
<pre><code>text = '00Q ALLAN, Wilham Ross 0Q2 ALLARDYCE, Margaret Isabel 003 ALLARDYCE, Mervyn George Q04 ANKER. Delia Roswyn QQ5 ANKER, Doreen Alison'
</code></pre>
<p>I ended up using the following Python code with my own regex to identify these mistakes.</p>
<pre><code># matchTestSub.py
import re
def isValidInteger(string):
try:
int(string) # Try to convert the string to an integer
return True # If successful, it's a valid integer
except ValueError:
return False # Conversion to integer failed, not a valid integer
text = '00Q ALLAN, Wilham Ross 0Q2 ALLARDYCE, Margaret Isabel 003 ALLARDYCE, Mervyn George Q04 ANKER. Delia Roswyn QQ5 ANKER, Doreen Alison'
pattern1 = r'\b[A-Z0-9]{3}\b(?![A-Z]+)'
match = re.findall(pattern1,text)
if match:
for i in range(len(match)):
x = match[i]
print(x,isValidInteger(x))
</code></pre>
<p>My regex and test string are <a href="https://regex101.com/r/Un6OAd/1" rel="nofollow noreferrer">here</a></p>
<p>My thinking for the regex pattern was:</p>
<ul>
<li>eliminate other string matches like names using negative lookahead for multiple uppercase chars <code>(?![A-Z]+)</code></li>
<li>match a 3 char word comprising numerals and/or uppercase</li>
</ul>
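A single-pattern alternative I also sketched (my own assumption; it ignores the maximum value of 200) pushes the at-least-one-capital check into the regex itself by matching any three-character <code>[0-9A-Z]</code> word that is not purely digits, which removes the need for the <code>isValidInteger</code> helper:

```python
import re

text = ('00Q ALLAN, Wilham Ross 0Q2 ALLARDYCE, Margaret Isabel '
        '003 ALLARDYCE, Mervyn George Q04 ANKER. Delia Roswyn '
        'QQ5 ANKER, Doreen Alison')

# Three chars of digits/uppercase, but NOT three digits -> contains a letter.
pattern = r'\b(?![0-9]{3}\b)[0-9A-Z]{3}\b'
print(re.findall(pattern, text))  # ['00Q', '0Q2', 'Q04', 'QQ5']
```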
| <python><regex><integer> | 2023-09-26 00:39:07 | 4 | 903 | Dave |
77,176,377 | 1,959,753 | Optimising Dask computations (memory implications and communication overhead) | <p>I am working on a project which was initially optimised for the Python multiprocessing library, and I am trying to use Dask to distribute the work over multiple nodes. Particularly I am trying to use the SSHCluster.</p>
<p>In order to optimise as much as possible, I have changed the worker methods to be more fine-grained, i.e. working on the smallest level possible, requiring smaller inputs and returning smaller outputs. I am trying to utilise the least amount of memory, while taking the least amount of time to complete a single task.</p>
<p>The kind of data structures I have are as follows: a large dict, with inner integer values, arrays and dicts. As well as simpler dicts, arrays and sets. These are dynamic in nature, i.e. they should be changed by the worker methods and returned back to the client, and then be used by subsequent calls to the same (and other) worker methods.</p>
<p>I also have a dict of dicts, which is static, and some other objects that also feature static data. I am saving these properties as JSON files, using the client.upload_file method to upload the files to the workers, and using the client.register_worker_callbacks method to register these files on the worker side to be able to use them as "global shared" memory from the worker methods (this uses up some memory, especially because the data is duplicated in the memory space of each worker, however, it works quite well, because the data is loaded once upon creating the workers, and then is shared by any subsequent worker method (task) computation).</p>
<p>However, when it comes to the dynamic memory, this data (approx. 800 mb in size), needs to be passed to the workers in the most efficient way possible, before starting the computation of the worker method.</p>
<p>I came up with 3 potential ways to achieve this communication between the client and the workers:</p>
<blockquote>
<ol>
<li><p>Split the data structures into "partial data structures" based on what each worker method requires. For example, if the worker method will
be tackling persons A, B, C, only include the corresponding data for A,
B, C. Then subsequently only return the data for A, B, C.</p>
</li>
<li><p>Scatter these data structures to the workers using: client.scatter(dynamic_obj, broadcast=true). Pass the scattered data
structures as futures to the workers (along with other small params).
Then on the worker side build the "partial data structures" for local
usage only, and return the results much like 1.</p>
</li>
<li><p>Use Dask data structures such as Dask.Bag or Dask.Array.</p>
</li>
</ol>
</blockquote>
<p>The first one works; I am just not sure whether my memory consumption is optimal. For example, the client is using around 6.3gb of memory, the scheduler around 2.8gb, and the workers around 2.3gb each. I am using the client.submit or the client.map methods, and then evaluating the resulting futures with the as_completed method. I am also releasing each future as soon as I evaluate its results. While I think the client is warranted to use 6.3gb of memory, I am not sure why the scheduler is using that much memory, when I should be releasing the results so quickly. The workers seem to have a baseline memory which is around 1.9gb, so 2.3gb seems to be acceptable as "working memory" while the tasks are ongoing.</p>
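For reference, the "partial data structure" plumbing behind option 1 looks roughly like this (plain-Python sketch, no Dask; all names are invented):

```python
def partial_state(full_state, person_ids):
    # Ship only the entries a single task needs, not the whole ~800 MB dict.
    return {pid: full_state[pid] for pid in person_ids}

def merge_back(full_state, task_result):
    # Fold a worker's (small) result back into the client-side state.
    full_state.update(task_result)

state = {"A": {"score": 1}, "B": {"score": 2}, "C": {"score": 3}}
chunk = partial_state(state, ["A", "B"])  # what gets sent to the worker
# Pretend the worker method did its computation on the chunk:
result = {pid: {"score": d["score"] + 10} for pid, d in chunk.items()}
merge_back(state, result)                 # what comes back to the client
print(state["A"]["score"])  # 11
```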
<p>The second option doesn't work. When trying to call:</p>
<pre><code>client.scatter(dynamic_obj, broadcast=True)
</code></pre>
<p>on what I've described as "a large dict, with inner integer values, arrays and dicts", I get:</p>
<blockquote>
<p>distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool.scatter local=tcp://127.0.0.1:58858 remote=tcp://127.0.1.1:35767>: Stream is closed</p>
</blockquote>
<p>after around 32 minutes. Is this possibly because of the "nested dict" type of data structure? The collection is not even that large, and most inner dicts/arrays are empty at this point.</p>
<p>I am not sure about the 3rd option, especially because, from the articles I read and videos I watched, it seems that these are more useful for distributed computations on large data sets, of which I don't have plenty. The data structures I am using (and referring to above) are what I have found to be the most convenient representations of the results that need to be returned by the worker methods. However, I was wondering whether, when using Dask data structures, I could gain an automatic speed-up with regards to the data communication overhead. It would be helpful to be pointed in the right direction in this regard.</p>
| <python><dask><distributed-computing><dask-distributed> | 2023-09-26 00:25:50 | 1 | 736 | Jurgen Cuschieri |
77,176,014 | 643,357 | conda: The pytorch and nvidia channels aren't playing nicely together | <p>If I set up a conda pytorch environment like this:</p>
<pre><code>conda activate pytorch-cuda
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
</code></pre>
<p>That works; at least insofar as being able to import torch in python. If, however, I add cuDNN:</p>
<pre><code>conda install cudnn -c nvidia
</code></pre>
<p>Things are no longer warm and fuzzy:</p>
<pre><code>(torch-cuda1) pgoetz@finglas ~$ python --version
Python 3.11.5
(torch-cuda1) pgoetz@finglas ~$ python
Python 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lusr/opt/miniconda/envs/torch-cuda1/lib/python3.11/site-packages/torch/__init__.py", line 229, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /lusr/opt/miniconda/envs/torch-cuda1/lib/python3.11/site-packages/torch/lib/libc10_cuda.so: undefined symbol: cudaMemPoolSetAttribute, version libcudart.so.11.0
>>>
</code></pre>
<p>What's happening is the cuDNN conda package is installing and relinking an older version of <strong>libcudart.so.11.0</strong>. Here is what is in <strong>~/miniconda/envs/pytorch-cuda/lib</strong> before cuDNN is installed:</p>
<pre><code># ls -l libcudart*
-rwxr-xr-x 3 root root 695712 Sep 21 2022 libcudart.so.11.8.89
</code></pre>
<p>Here is what it looks like after the cudnn package is installed from the nvidia channel:</p>
<pre><code># ls -l libcudart*
lrwxrwxrwx 1 root root 20 Sep 25 13:12 libcudart.so -> libcudart.so.11.1.74
lrwxrwxrwx 1 root root 20 Sep 25 13:12 libcudart.so.11.0 -> libcudart.so.11.1.74
-rwxr-xr-x 2 root root 554032 Oct 14 2020 libcudart.so.11.1.74
-rwxr-xr-x 3 root root 695712 Sep 21 2022 libcudart.so.11.8.89
</code></pre>
<p>It looks like something similar is happening with <strong>libcusparse.so.11</strong>, and possibly other libraries, I didn't bother trying to track them all down.</p>
<p>Hmm, so maybe try using only the pytorch channel? No matter how far I pare it down, this somehow results in unresolvable dependency problems and nothing gets installed:</p>
<pre><code>conda install pytorch pytorch-cuda=11.8 -c pytorch
</code></pre>
<p>So it appears that using different channels can result in software incompatibilities, but in some cases the use of multiple channels is unavoidable, so then the question becomes which channel(s) to use for what task environments? The installation matrix on the pytorch.org website specifically recommends this:</p>
<pre><code>conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
</code></pre>
<p>However many of my users also want cuDNN -- how can I get cuDNN installed in a pytorch environment in such a way that it doesn't break pytorch?</p>
<p>Beyond this, with whom should I bring these issues up? The maintainers of the pytorch and nvidia channels? Should one only be using conda-forge instead?</p>
<p>Thanks.</p>
| <python><pytorch><conda><miniconda> | 2023-09-25 22:23:43 | 0 | 919 | pgoetz |
77,175,989 | 904,100 | Getting invalid_grant when trying to exchange tokens using Django-oauth-toolkit | <p>I am trying to use the django-oauth-toolkit but continually hitting an invalid_grant error.</p>
<p>I believe I have followed the instructions at <a href="https://django-oauth-toolkit.readthedocs.io/en/latest/getting_started.html#oauth2-authorization-grants" rel="nofollow noreferrer">https://django-oauth-toolkit.readthedocs.io/en/latest/getting_started.html#oauth2-authorization-grants</a> to the letter. I have created the following script to test with (mostly just the code extracted from the above page):</p>
<pre><code>import random
import string
import base64
import hashlib
code_verifier = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(random.randint(43, 128)))
code_verifier = base64.urlsafe_b64encode(code_verifier.encode('utf-8'))
code_challenge = hashlib.sha256(code_verifier).digest()
code_challenge = base64.urlsafe_b64encode(code_challenge).decode('utf-8').replace('=', '')
client_id = input("Enter client id: ")
client_secret = input("Enter client secret: ")
redirect_url = input("Enter callback url: ")
print(f'vist http://127.0.0.1:8000/o/authorize/?response_type=code&code_challenge={code_challenge}&code_challenge_method=S256&client_id={client_id}&redirect_uri={redirect_url}')
code = input("Enter code: ")
print(f'enter: curl -X POST -H "Cache-Control: no-cache" -H "Content-Type: application/x-www-form-urlencoded" "http://127.0.0.1:8000/o/token/" -d "client_id={client_id}" -d "client_secret={client_secret}" -d "code={code}" -d "code_verifier={code_verifier}" -d "redirect_uri={redirect_url}" -d "grant_type=authorization_code"')
</code></pre>
<p>The following shows the output (I haven't redacted the details, they are only temporary):</p>
<pre><code>(venv) (base) peter@Peters-MacBook-Air Intra-Site % python api_test.py
Enter client id: sJ3ijzbmdogfBZPfhkF6hOqifPuPSKKpOnN8hq1N
Enter client secret: GnXNWqw7t1OwPRbOWTmDXKiuaqeJ7LRMhY9g2CWe00f3QrHLvx6aDKjGf5eF1t6QPkD1YO8BR43HNmzCjZYBW81FIjTng7QnVypzshMljEJRGTj5N7r8giwjKIiXyVng
Enter callback url: https://192.168.1.44/authenticate/callback
vist http://127.0.0.1:8000/o/authorize/?response_type=code&code_challenge=WiaJdysQ6aFnmkaO8yztt9kPBGUj-aqZSFVgmWYRBlU&code_challenge_method=S256&client_id=sJ3ijzbmdogfBZPfhkF6hOqifPuPSKKpOnN8hq1N&redirect_uri=https://192.168.1.44/authenticate/callback
Enter code: hrLeADVEmQF8mDLKJXhAZJRQ2dElV7
enter: curl -X POST -H "Cache-Control: no-cache" -H "Content-Type: application/x-www-form-urlencoded" "http://127.0.0.1:8000/o/token/" -d "client_id=sJ3ijzbmdogfBZPfhkF6hOqifPuPSKKpOnN8hq1N" -d "client_secret=GnXNWqw7t1OwPRbOWTmDXKiuaqeJ7LRMhY9g2CWe00f3QrHLvx6aDKjGf5eF1t6QPkD1YO8BR43HNmzCjZYBW81FIjTng7QnVypzshMljEJRGTj5N7r8giwjKIiXyVng" -d "code=hrLeADVEmQF8mDLKJXhAZJRQ2dElV7" -d "code_verifier=b'UDZGREwyR0ZVMjNCTzRBQlFaVlBUQk9TWkVRUDlCQzFSUTY1MkFNMUUzRTVCRVBNNlkwR0k4UEtGMUhaVVlTVkQ0QVowMzJUR0xLTDQ1U0VNOEtaWVcxT0tP'" -d "redirect_uri=https://192.168.1.44/authenticate/callback" -d "grant_type=authorization_code"
(venv) (base) peter@Peters-MacBook-Air Intra-Site % curl -X POST -H "Cache-Control: no-cache" -H "Content-Type: application/x-www-form-urlencoded" "http://127.0.0.1:8000/o/token/" -d "client_id=sJ3ijzbmdogfBZPfhkF6hOqifPuPSKKpOnN8hq1N" -d "client_secret=GnXNWqw7t1OwPRbOWTmDXKiuaqeJ7LRMhY9g2CWe00f3QrHLvx6aDKjGf5eF1t6QPkD1YO8BR43HNmzCjZYBW81FIjTng7QnVypzshMljEJRGTj5N7r8giwjKIiXyVng" -d "code=hrLeADVEmQF8mDLKJXhAZJRQ2dElV7" -d "code_verifier=b'UDZGREwyR0ZVMjNCTzRBQlFaVlBUQk9TWkVRUDlCQzFSUTY1MkFNMUUzRTVCRVBNNlkwR0k4UEtGMUhaVVlTVkQ0QVowMzJUR0xLTDQ1U0VNOEtaWVcxT0tP'" -d "redirect_uri=https://192.168.1.44/authenticate/callback" -d "grant_type=authorization_code"
{"error": "invalid_grant"}%
</code></pre>
<p>I have the following in my settings.py:</p>
<pre><code>OAUTH2_PROVIDER = {
'ALLOWED_REDIRECT_URI_SCHEMES': ["https", "intra"],
"SCOPES": {
"read": "Read scope",
"write": "Write scope",
"groups": "Access to your groups",
},
}
REST_FRAMEWORK = {
# Use Django's standard `django.contrib.auth` permissions,
# or allow read-only access for unauthenticated users.
"DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
"DEFAULT_AUTHENTICATION_CLASSES": [
"oauth2_provider.contrib.rest_framework.OAuth2Authentication",
],
}
</code></pre>
<p>I also have the following in my installed apps:</p>
<pre><code> "oauth2_provider",
"rest_framework",
</code></pre>
<p>It is most likely something mundane I have missed but cannot identify what.</p>
<p>Can anyone shed any light on why I might be getting the invalid_grant error?</p>
<p>I have confirmed that the redirect URL, client id, client secret are correct as they are in the application registered in dJango</p>
<p>I have also ensured that the code verifier and code challenge are correct and appear to be sent at the right times.</p>
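For reference, RFC 7636 defines the verifier as a plain string of 43-128 unreserved characters, and the curl command above ends up sending the Python bytes repr (the <code>b'...'</code> wrapper) as the code_verifier. A minimal stdlib sketch that keeps the verifier a str:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # RFC 7636: the verifier is a plain string of 43-128 URL-safe characters
    verifier = secrets.token_urlsafe(64)  # ~86 characters, well within range
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # base64url-encode the SHA-256 digest and strip the '=' padding
    challenge = base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")
    return verifier, challenge
```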
| <python><django><django-oauth-toolkit> | 2023-09-25 22:18:59 | 1 | 793 | Peter |
77,175,891 | 11,429,035 | VSCode generate python function/class with inputs and outputs datatype | <p>I would like VSCode to automatically generate the function or class template with the datatypes. Something like:</p>
<p>if I type a function like</p>
<pre><code>def add(a, b):
t = a + b
return t
</code></pre>
<p>And then VSCode can automatically complete the template for me, like</p>
<pre><code>def add(a:$, b:$) -> $:
return a + b
</code></pre>
<p>And then I can type a datatype into <code>$</code> and use <code>tab</code> to jump to the next <code>$</code>. I have seen this feature in PyCharm. I think it should also be available in VSCode.</p>
| <python><visual-studio-code> | 2023-09-25 21:49:17 | 1 | 533 | Xudong |
77,175,880 | 5,096,653 | WTForms removing fields per-instance with FormField and FieldList | <p>I have a base class form for PersonForm that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>class PersonForm(Form):
'''Re-usable form for parents and spouses and children and grandchildren'''
name = StringField('Name', validators=[])
maiden_name = StringField('Maiden Name', validators=[])
partner = StringField('Spouse or Partner First Name', validators=[])
deceased = BooleanField('Deceased')
owns_monkey = BooleanField('Owns Monkey')
note = StringField('Note', validators=[])
</code></pre>
<p>In another form I'm referencing that base class and using it in a FieldList, something like this:</p>
<pre class="lang-py prettyprint-override"><code>class ParentsForm(FlaskForm):
'''Who are the parents?'''
parents = FieldList(FormField(PersonForm), min_entries=2)
submit = SubmitField('Next')
</code></pre>
<p>When the ParentsForm is rendered it will include at least 2 instances of the Person form. But let's say that asking whether these people own a monkey or not is irrelevant for the current form, so I'd like to remove it before it is rendered.</p>
<p>The <a href="https://wtforms.readthedocs.io/en/2.3.x/specific_problems/#removing-fields-per-instance" rel="nofollow noreferrer">WTForms documentation for Removing Fields Per-Instance</a> says you can simply use the <code>del form.field</code> syntax, but it does not specify if this is possible for when using FieldList.</p>
<p>If I were to do:</p>
<pre class="lang-py prettyprint-override"><code>form = ParentsForm()
print(form.parents)
</code></pre>
<p>instead of referencing a Python object, it simply gives me an HTML unordered list composed of the two PersonForm entries. This means that I couldn't do <code>del form.parents.owns_monkey</code> to remove those fields.</p>
<p>Is there a way to achieve this?</p>
| <python><flask><flask-wtforms><wtforms> | 2023-09-25 21:48:13 | 1 | 3,105 | Chase |
77,175,670 | 1,471,980 | how do you subset data frame based on several conditions in Pandas | <p>I need to subset this data frame called:</p>
<pre><code>df
Server Model Slot
server1 Cisco 1
server1 Cisco 2
server1 Cisco 3
server1 Cisco 4
server1 Cisco 8
server1 Cisco Chasis
server1 Cisco Chasis
server2 IBM Slot 5
server2 IBM Slot 8
server2 IBM Slot 9
server3 Micr Slot 22
server3 Micr Slot 18
server3 Micr Slot 1
server3 Micr Chasis 1
</code></pre>
<p>The subset of df should include Slot values less than or equal to 12, or Slot values that contain the text "Slot".</p>
<p>The final data frame needs to look like this:</p>
<pre><code>Server Model Slot
server1 Cisco 1
server1 Cisco 2
server1 Cisco 3
server1 Cisco 4
server1 Cisco 8
server2 IBM Slot 5
server2 IBM Slot 8
server2 IBM Slot 9
server3 Micr Slot 22
server3 Micr Slot 18
server3 Micr Slot 1
</code></pre>
<p>I tried this:</p>
<pre><code>df[df['Slot']=<12 || df['Slot].str.contains("Slot")]
</code></pre>
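A hedged sketch of one working boolean mask (Python uses <code>|</code> rather than <code>||</code> and <code>&lt;=</code> rather than <code>=&lt;</code>; <code>pd.to_numeric</code> with <code>errors='coerce'</code> turns the non-numeric cells into NaN so the comparison stays valid — the frame below just reproduces the sample data):

```python
import pandas as pd

df = pd.DataFrame({
    "Server": ["server1"] * 7 + ["server2"] * 3 + ["server3"] * 4,
    "Model": ["Cisco"] * 7 + ["IBM"] * 3 + ["Micr"] * 4,
    "Slot": ["1", "2", "3", "4", "8", "Chasis", "Chasis",
             "Slot 5", "Slot 8", "Slot 9",
             "Slot 22", "Slot 18", "Slot 1", "Chasis 1"],
})

numeric = pd.to_numeric(df["Slot"], errors="coerce")  # NaN for non-numeric cells
mask = (numeric <= 12) | df["Slot"].str.contains("Slot")
subset = df[mask]
```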
| <python><pandas> | 2023-09-25 20:57:33 | 3 | 10,714 | user1471980 |
77,175,515 | 3,387,716 | Text processing with Python | <p>I need to extract 1,500,000 out of 25,000,000 records and group them.</p>
<p>The groups and the UUIDs of the records to extract are defined in a separate file (200MB) with the following format:</p>
<pre class="lang-none prettyprint-override"><code>>Cluster 0
0 70nt, >90ec66e4-c038-41f0-a553-c94864cf3958... at +/80.00%
1 88nt, >2d45d336-a0f4-4eca-8577-b950e11bb4cf... *
2 70nt, >6f6ad8f1-0cfb-4e57-8962-366cd749fa3f... at +/82.86%
>Cluster 1
0 74nt, >5f584468-a231-416d-9156-ff68e11ee096... *
>Cluster 2
0 74nt, >7f584468-a231-416d-9156-ff68e11ee096... *
1 79nt, >f7884902-51d4-48e1-88a3-9adc0bd0f2cd... at +/86.08%
</code></pre>
<p>Here's my function for parsing it:</p>
<pre class="lang-py prettyprint-override"><code>def clstr_parse(filename):
clstr = None
with open(filename) as f:
for line in f:
if line.startswith('>'):
if clstr:
yield clstr
clstr = []
else:
uuid = line.split()[2][1:37]
clstr.append(uuid)
if clstr:
yield clstr
</code></pre>
<p>Then I use it to extract the "groups" (list of UUIDs) that contain more than one UUID:</p>
<pre class="lang-py prettyprint-override"><code>groups = [grp for grp in clstr_parse('file.clstr') if len(grp) >= 2]
</code></pre>
<p>And define a <code>dict</code> (with the UUIDs as keys) for storing the records during their extraction:</p>
<pre class="lang-py prettyprint-override"><code>records = {uuid: None for grp in groups for uuid in grp}
</code></pre>
<p>The file (30GB) from which I need to extract the records is in the following format (the columns are <kbd>TAB</kbd>-delimited):</p>
<pre class="lang-none prettyprint-override"><code>@something ...some_defs...
@...more_things...
92fa0cdf-9e1b-4f83-b6e0-ca35885bfdbd 16 ...more_fields...
2d45d336-a0f4-4eca-8577-b950e11bb4cf 16 ...more_fields...
2d45d336-a0f4-4eca-8577-b950e11bb4cf 2064 ...more_fields...
f7884902-51d4-48e1-88a3-9adc0bd0f2cd 0 ...more_fields...
90ec66e4-c038-41f0-a553-c94864cf3958 16 ...more_fields...
6f6ad8f1-0cfb-4e57-8962-366cd749fa3f 0 ...more_fields...
7f584468-a231-416d-9156-ff68e11ee096 16 ...more_fields...
</code></pre>
<p>I made a function for yielding each record:</p>
<pre class="lang-py prettyprint-override"><code>def sam_parse(filename):
with open(filename) as f:
for line in f:
if line.startswith('@'):
pass
else:
yield line
for line in f:
yield line
</code></pre>
<p>Which I use in the extraction process:</p>
<pre class="lang-py prettyprint-override"><code>for rec in sam_parse('file.sam'):
(uuid, flag) = rec.split(maxsplit=2)[0:2]
if uuid in records and int(flag) < 2048:
records[uuid] = rec[0:-1]
for grp in groups:
for uuid in grp:
print(records[uuid])
print()
</code></pre>
<p>The problem is that I would expect this program to take less than 10 minutes to complete (I tested similar code in <code>awk</code>), but it's been 8 hours since I launched it and it's still running. Is there something wrong with the Python code?</p>
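As an assumed first diagnostic step (not part of the original pipeline), profiling the hot loop on a small slice of the input usually shows whether the time goes into splitting, lookups, or I/O; <code>profile_call</code> here is an illustrative helper:

```python
import cProfile
import io
import pstats

def profile_call(func, *args, **kwargs):
    """Run func once under cProfile and return (result, top cumulative-time report)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args, **kwargs)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()

# e.g. run the extraction over a 100k-line sample file instead of the full 30GB input
```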
| <python><text-processing> | 2023-09-25 20:23:41 | 1 | 17,608 | Fravadona |
77,175,366 | 11,277,108 | Is it possible to set two absolute list lengths when validating a field? | <p>If I want to validate the length of a list I currently use:</p>
<pre><code>from pydantic import BaseModel, conlist
from typing import Annotated, List
class Match(BaseModel):
player_ids: Annotated[List[int], conlist(int, min_length=2, max_length=4)]
</code></pre>
<p>But what if I want the list to either be 2 or 4 items in length?</p>
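One possible approach, sketched under the assumption of Pydantic v2 (a <code>field_validator</code> can express the "exactly 2 or 4" rule that conlist's min/max bounds cannot):

```python
from typing import List
from pydantic import BaseModel, field_validator

class Match(BaseModel):
    player_ids: List[int]

    @field_validator("player_ids")
    @classmethod
    def two_or_four(cls, v: List[int]) -> List[int]:
        # reject lengths 0, 1, 3, and anything above 4
        if len(v) not in (2, 4):
            raise ValueError("player_ids must contain exactly 2 or 4 items")
        return v
```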
| <python><pydantic> | 2023-09-25 19:48:46 | 0 | 1,121 | Jossy |
77,175,201 | 1,914,781 | convert duration string greater than 24 hours to datetime object | <p>I have some duration strings, some of which are greater than 24 hours. How can I convert such strings to datetime?</p>
<pre><code>import pandas as pd
data = [
['A','01:00:00'],
['B','23:10:00'],
['C','13:20:05'],
['D','27:30:25']
]
df = pd.DataFrame(data,columns=['ID','duration'])
df['duration'] = pd.to_datetime(df['duration'],format='%H:%M:%S')
print(df)
</code></pre>
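A hedged sketch of one alternative: since '27:30:25' is a duration rather than a time of day, <code>pd.to_timedelta</code> (which has no 24-hour ceiling) may be a better target type than datetime:

```python
import pandas as pd

data = [['A', '01:00:00'], ['B', '23:10:00'], ['C', '13:20:05'], ['D', '27:30:25']]
df = pd.DataFrame(data, columns=['ID', 'duration'])

# Timedelta parses hour counts past 24 without error
df['duration'] = pd.to_timedelta(df['duration'])
```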
| <python><pandas> | 2023-09-25 19:13:07 | 1 | 9,011 | lucky1928 |
77,175,144 | 3,605,997 | ModuleNotFoundError: No module named 'tortoise.api' | <p>I am getting this error when trying to run Tortoise TTS from this repo: <a href="https://git.ecker.tech/mrq/ai-voice-cloning" rel="nofollow noreferrer">https://git.ecker.tech/mrq/ai-voice-cloning</a>. I have followed the installation instructions and ran <code>pip install -r requirements.txt</code> and <code>sudo pip install --upgrade tortoise.api</code>. What should I do to resolve this error? It happens after I run the main script with: <code>python3.9 ./src/main.py "$@"</code>.</p>
| <python> | 2023-09-25 19:00:58 | 0 | 1,421 | chackerian |
77,175,083 | 1,471,980 | how do you subset data frame based on values in data dictionary in Pandas | <p>I have a data frame like this:</p>
<pre><code>df
Server Model Slot
server1 Cisco 1
server1 Cisco 2
server1 Cisco 3
server1 Cisco 4
server1 Cisco 8
server1 Cisco Chasis
server1 Cisco Chasis
server2 IBM Slot 5
server2 IBM Slot 8
server2 IBM Slot 9
server3 Micr Slot 22
server3 Micr Slot 18
server3 Micr Slot 1
server3 Micr Chasis 1
</code></pre>
<p>I need to subset this data frame to include only the slots defined in the n_slots data dict:</p>
<pre><code>n_slots={'Cisco': 8, 'IBM':6, 'Micr':4}
</code></pre>
<p>Data frame df will have more data in the slot column but we are only interested in data configured in the n_slots dictionary.</p>
<p>For example, Cisco has 8 in the n_slots dictionary, so the final data frame only needs rows from df where the Cisco slot is 1, 2, 3, 4, 5, 6, 7, or 8.</p>
<p>Micr has 4 in n_slots dict, my final data frame will only need to include Slot 1 through Slot 4.</p>
<p>My final data frame needs to be something like this given the data frame "df" and my dict called n_slots:</p>
<pre><code>final_df
Server Model Slot
server1 Cisco 1
server1 Cisco 2
server1 Cisco 3
server1 Cisco 4
server1 Cisco 8
server2 IBM 5
server3 Micr 1
</code></pre>
<ol>
<li>I need to figure out how to subset the data where Slot is 1 through 8 or "Slot 1" through "Slot 8".</li>
<li>Remove the "Slot" text from the cell values.</li>
<li>final_df should include only the values defined in the "n_slots" dictionary.</li>
</ol>
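A hedged sketch covering all three steps (the regex only matches pure slot entries, so the "Chasis" rows drop out, and <code>df['Model'].map(n_slots)</code> supplies each model's cutoff; the frame below reproduces the sample data):

```python
import pandas as pd

df = pd.DataFrame({
    'Server': ['server1'] * 7 + ['server2'] * 3 + ['server3'] * 4,
    'Model': ['Cisco'] * 7 + ['IBM'] * 3 + ['Micr'] * 4,
    'Slot': ['1', '2', '3', '4', '8', 'Chasis', 'Chasis',
             'Slot 5', 'Slot 8', 'Slot 9',
             'Slot 22', 'Slot 18', 'Slot 1', 'Chasis 1'],
})
n_slots = {'Cisco': 8, 'IBM': 6, 'Micr': 4}

# 1. pull the numeric part out of "8" / "Slot 8"; "Chasis..." rows become NaN
slot_num = df['Slot'].str.extract(r'^(?:Slot\s*)?(\d+)$', expand=False).astype(float)
# 2-3. keep rows whose slot number is within the model's configured count
mask = slot_num <= df['Model'].map(n_slots)
final_df = df[mask].copy()
final_df['Slot'] = slot_num[mask].astype(int)
```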
| <python><pandas> | 2023-09-25 18:49:08 | 3 | 10,714 | user1471980 |
77,174,883 | 12,304,000 | check if rows are already present in pyspark dataset | <p>The schema of my output and input dataset is the same. Upon running this script, I want to first create a new dataset using the filter_data function and save the results in a dataframe variable called "alerts". Now, I want to check if there are any rows in this "alerts" dataframe that are not already present in my existing output.</p>
<p>If there are no new rows, I want to return the existing output as it is (without any changes) so that the output dataset is not updated and the health checks on the output dataset are not triggered.</p>
<p>However, if the "alerts" dataset has new data that's not already present in the output dataset, then I want to return the "alerts" dataset.</p>
<pre><code>from transforms.api import configure, transform_df, Input, Output
from numpy import mean, std
from datetime import timedelta, datetime
from pyspark.sql import functions as F
@configure(profile=[
"DRIVER_MEMORY_MEDIUM"
])
@incremental(snapshot_inputs=['violations'])
@transform_df(
Output("/Spring/Snowplow/data/derived/output_dataset"),
violations=Input("xxx"),
)
def compute(violations, output):
alerts = filter_data(violations, output)
alerts_df = alerts.subtract(output)
if alerts_df.count() > 0:
return alerts
else:
return output
def filter_data(violations, output):
todays_date = F.current_date()
violations = violations.filter(F.col("failure_date") == todays_date)
columns_to_keep = [
"id",
"value_1",
"value_2",
"date",
"timestamp",
]
return violations.select(*columns_to_keep)
</code></pre>
<p>This is what I am trying but currently I seem to be getting an error:</p>
<pre><code>TypeError: compute() missing 1 required positional argument: 'output'
</code></pre>
| <python><pyspark><palantir-foundry><foundry-code-repositories><pyspark-schema> | 2023-09-25 18:10:34 | 2 | 3,522 | x89 |
77,174,838 | 693,294 | python 3 syntax error while evaluating import line | <p>Working on getting seriously into python3.</p>
<p>While trying to eval(generated_code) to parse arguments, I get an unexpected syntax error during the eval. I've whittled down the code to a simpler failure case for posting here. The real code nicely defines ap, though this fragment does not.</p>
<p>The problem with the fragment is not the undefined ap, but the immediate syntax error in the first line passed to eval. If the code within the triple quotes is passed directly to python3, without the eval, then rather than a syntax error the expected undefined ap problem happens, which for this fragment, is a success.</p>
<p><strong>Code Fragment</strong></p>
<pre><code>evtxt = '''
import argparse
def sh_add_args(ap):
# eval ./argparse.py --name0="$0" -- "$@" <<ARGPARSE || exit $?
ap.add_argument('--arg', type=str, default='ARG', action='store')
ap.add_argument('--bool', default=False, action='store_true')
ap.add_argument('--int', type=int, default=42, action='store')
return
sh_parser = argparse.ArgumentParser('Parse arguments for test' )
sh_add_args(sh_parser)
sh_args = ap.parse_args(py_parser.argv)
'''
print("###BEGIN\n" + evtxt + "\n####END")
eval(evtxt)
</code></pre>
<p><strong>Unexplained Error</strong>
The actual error follows from python3:</p>
<pre><code>Traceback (most recent call last):
File "tmp2.py", line 15, in <module>
eval(evtxt)
File "<string>", line 2
import argparse
^
SyntaxError: invalid syntax
</code></pre>
<p><strong>What I want</strong></p>
<p>I've tried both """ and ''' quotes in the fragment and get the same result.</p>
<p>The triple quotes are just for this demo fragment. The actual <code>evtxt</code> is generated by reading input line by line using <code>fileinput.input()</code>, though the results are the same.</p>
<p><em>The goal is to find the stupid thing I'm overlooking to work past the syntax error.</em></p>
<p>FYI: any trailing white space has been removed during line reading.</p>
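For illustration, a minimal stdlib demonstration of the underlying constraint: eval compiles its argument in expression mode, so statements such as import or assignments are rejected at compile time, whereas exec accepts full statement blocks:

```python
snippet = "import math\nresult = math.sqrt(16)"

# eval-mode compilation rejects statements outright
try:
    eval(snippet)
    raised = False
except SyntaxError:
    raised = True

# exec-mode compilation accepts full statement blocks
namespace = {}
exec(snippet, namespace)
```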
| <python><python-3.x><eval> | 2023-09-25 18:02:22 | 0 | 3,796 | Gilbert |
77,174,806 | 4,439,019 | Python datetime to string | <p>I am trying to convert this:</p>
<pre><code>my_time = datetime.datetime.now()
my_datetime = my_time.strftime("%Y%M%D.%H%M%S")
</code></pre>
<p>To this:</p>
<pre><code>'20235409.105400'
</code></pre>
<p>Instead I am getting this:</p>
<pre><code>'20235409/25/23.105400'
</code></pre>
<p>Any ideas?</p>
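For reference, a sketch of the directive distinction (assuming the intended output is YYYYMMDD.HHMMSS): %M is minutes and %D expands to the whole MM/DD/YY date, while the lowercase %m and %d give the zero-padded month and day:

```python
import datetime

my_time = datetime.datetime(2023, 9, 25, 10, 54, 0)  # fixed value for illustration
my_datetime = my_time.strftime("%Y%m%d.%H%M%S")
print(my_datetime)  # 20230925.105400
```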
| <python><datetime> | 2023-09-25 17:56:52 | 1 | 3,831 | JD2775 |
77,174,785 | 12,627,448 | How to pass histogram bins for each discrete value in 2D histogram and plot it | <p>My data can be seen as a time series. For <code>t=0</code> there is some data <code>x0</code>, then for <code>t=1</code> some data <code>x1</code>, etc. I am trying to create a 2D histogram of this data using <code>sns.histplot</code> (<a href="https://seaborn.pydata.org/generated/seaborn.histplot.html" rel="nofollow noreferrer">here for reference</a>). I want the bins along the y-axis to be calculated and passed separately for each <code>t</code>, but I am not sure how to do that. To provide some code:</p>
<pre><code>time value
0 1.2
0 1.3
0 0.4
0 0.3
0 1.34
0 1.31
0 1.36
... ...
1 3.4
1 10.2
1 5.2
1 100.13
1 108.13
... ...
n 1.2
n 2.5
</code></pre>
<p>I have a dataframe, <code>df</code>, that looks like that. Note how a binwidth of 0.3 would be fine for <code>t=0</code> because it puts the values <code>0.3; 0.4</code> in one bin, whereas the values <code>1.2; 1.3; 1.34; 1.31; 1.36</code> go in a different bin. But that binwidth would not work for <code>t=1</code> because each value would have its own bin. Instead, for <code>t=1</code>, a more reasonable binwidth would be 10, which would group <code>3.4; 10.2; 5.2</code> in one bin, and <code>100.13; 108.13</code> in a different bin. Of course, the bins don't have to have the same size, e.g., 0.3 or 10; this is just for illustrating the issue.</p>
<p>Currently, the code is <code>sns.histplot(df, x='time', y='value', discrete=(True, False))</code>, but that's not what I want. I'd like to pass something like this:</p>
<pre><code>sns.histplot(df, x='time', y='value', binwidth=(1, (binwidth_t0, binwidth_t1,...,binwidth_n)))
</code></pre>
<p>where each binwidth for each <code>t</code> is passed independently, and the same binwidth is used for x-axis (it is discrete).</p>
<p>Is there a way to achieve this? It also doesn't have to be with <code>sns.histplot</code>. A different library is fine. Any help is appreciated.</p>
<p>Edit: I hope this clarifies the question. This is a plot of what I currently have: <a href="https://i.sstatic.net/0ixzP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0ixzP.png" alt="enter image description here" /></a>
Ignore the fact that there are subplots. If you look at any of them, you can see the bins along the y-axis are all identical (roughly 1-2, then 2-3, etc.) and they are the same for every column (every value along the x-axis). I'd like to have different bins for every <code>x</code>. I hope this makes the question more clear. If not, please let me know.</p>
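Since sns.histplot applies a single y-binning to the whole plot, one assumed workaround is computing each column's histogram separately with NumPy's automatic bin selection and drawing the columns yourself (the data here is a toy stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-in for the dataframe: values grouped by discrete time step
groups = {0: rng.normal(1.0, 0.5, 200), 1: rng.normal(50.0, 40.0, 200)}

per_t_bins = {}
for t, values in groups.items():
    # "fd" (Freedman-Diaconis) picks a bin width from each group's own spread
    counts, edges = np.histogram(values, bins="fd")
    per_t_bins[t] = (counts, edges)
```

Each (counts, edges) pair can then be drawn as its own vertical strip, e.g. with plt.bar or plt.pcolormesh at x position <code>t</code>.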
| <python><matplotlib><seaborn><histogram><visualization> | 2023-09-25 17:52:39 | 1 | 470 | Schach21 |
77,174,777 | 2,504,762 | Pandas Dataframe - apply filter to rows | <p>Could anyone please let me know if there is a better way to apply a filter to rows in a pandas dataframe?</p>
<p>I am running the following code to check what the difference is between the 1st row and the 2nd row; a function should also return True or False depending on whether there is a difference.</p>
<pre class="lang-py prettyprint-override"><code> data = [
(10, 20, 30, 40, 50, 60, 70),
(10, 30, 30, 40, 50, 60, 100)
]
df = pd.DataFrame(data, columns=["a", "b", "c", "d", "d", "f", "g"])
df.loc['diff'] = df.iloc[1]-df.iloc[0]
df_1 = (df.loc["diff"] != 0).to_frame("result")
df_2 = df_1.loc[df_1["result"] != 0]
print(df_2.size)
</code></pre>
| <python><pandas> | 2023-09-25 17:51:26 | 2 | 13,075 | Gaurang Shah |
77,174,742 | 9,381,966 | 'InnerDoc' object has no attribute 'pk' | <p>I'm encountering an issue with Elasticsearch-dsl when trying to use a custom serializer with Django REST framework. The error message I'm getting is:</p>
<pre><code>'InnerDoc' object has no attribute 'pk'
</code></pre>
<p>This error originates in the line <code>serializer = self.serializer_class(results, many=True)</code> from the following code snippet in my PaginatedElasticSearchAPIView:</p>
<pre class="lang-py prettyprint-override"><code>
class PaginatedElasticSearchAPIView(APIView, LimitOffsetPagination):
serializer_class = None
document_class = None
@abc.abstractmethod
def generate_q_expression(self, query):
"""This method should be overridden
and return a Q() expression."""
def get(self, request, query):
try:
q = self.generate_q_expression(query)
search = self.document_class.search().query(q)
response = search.execute()
print(f'Found {response.hits.total.value} hit(s) for query: "{query}"')
results = self.paginate_queryset(response, request, view=self)
serializer = self.serializer_class(results, many=True)
return self.get_paginated_response(serializer.data)
except Exception as e:
return HttpResponse(e, status=500)
</code></pre>
<p>I've checked my serializer, and it seems to have the primary key ("id") defined correctly:</p>
<pre class="lang-py prettyprint-override"><code>
class OfficeSerializer(serializers.ModelSerializer):
# ...
class Meta:
model = Office
fields = [
"id",
# ...
]
</code></pre>
<p>My Elasticsearch document, defined as OfficeDocument, also appears to have the primary key ("id") defined correctly:</p>
<pre class="lang-py prettyprint-override"><code>
@registry.register_document
class OfficeDocument(Document):
# ...
class Django:
model = Office
fields = [
"id",
# ...
]
</code></pre>
<p>Despite this, I was still encountering the "'InnerDoc' object has no attribute 'pk'" error.</p>
<p>Then I recreated all models without explicitly setting ids and removed the ids references in the serializer and document files. The error persists, though.</p>
<p>What could be causing this issue, and how can I resolve it? Any insights or suggestions would be greatly appreciated.</p>
| <python><django><elasticsearch><django-rest-framework> | 2023-09-25 17:44:09 | 1 | 1,590 | Lucas |
77,174,740 | 16,453,533 | Retrieve an element from a given index in the biggest list value of a JSON | <p>Here are the instructions:</p>
<blockquote>
<p>Giving a randomly-generated JSON string. Find the longest list and
return its key, followed by a dash ('-'), followed by the element at
index 1</p>
<pre><code>{
"agistment": "nonpromulgation",
"gastroparietal": 4599,
"ryotwar": [
"mealies",
"sophistress",
"money",
"astraeid"
],
"pranava": [
"rosinesses",
"basis"
],
"loomer": 581.6,
"bangalay": "orthopnea",
"inflexive": 3367,
"spinelles": [
"druith",
"pseudomonades",
"nonrhythmically",
"ischiocavernosus",
"operettist"
],
"articulator": 2805.3,
"preempted": [
"equvalent",
"mesoderms",
"idlers"
],
"unfractiously": [
"semitextural",
"stilyaga",
"coddle",
"identism"
],
"synkinesia": 4993,
"linoleum": "overcram",
"nations": "colleagueship",
"lithonephria": "presoaked",
"subpartitioned": "tiresomeness",
"triternately": [
"postaxial",
"nonimmune",
"eopaleozoic",
"maledict",
"ritzily",
"whisperation"
],
"etoile": "nonvesting",
"decussorium": [
"tocusso",
"butterfingered",
"sizeman",
"duetted",
"misthrew",
"untextually",
"splenotyphoid",
"acmispon"
],
"novelties": "hypersthenite",
"aesop": [
"sabadinine",
"tangency"
],
"potlatching": [
"lucrific",
"tupaia",
"dedenda",
"burglarise",
"tanagroid",
"grewsomely",
"lipotype",
"squarable",
"withholdal",
"antidotes"
],
"preformistic": "broadsiding",
"papovavirus": 3917,
"hurri": [
"catcall"
]
}
</code></pre>
</blockquote>
<p>Note: the index and JSON are randomly generated each time I do the lab, so I have to write generalized code. Here's what I came up with first:</p>
<pre><code>ind = 1 #raw data for better understanding
json_data = json.loads(challenge_input) #challenge_input is the json above sent in the lab
biggest_length = 0 #will store the biggest length of the list of a value in the JSON
for key, value in json_data.items():
if type(value) == list: # check if the value of the key is a list
list_length = len(value) # store the length
if list_length > biggest_length: # check if the list is bigger than the existent biggest list
biggest_length=list_length # store the biggest list length
list_value = value # store the value from the json into a variable
list_key = key # stoer the value of the key from the json into a variable
result=list_key + "-" + list_value[ind] #output
# {} =
send_data_to_server(client_socket, result) #send the result to the server to validate the lab
</code></pre>
<p>Here's the output of my code :</p>
<pre><code>Your answer was on time but incorrect: - Your answer was "potlatching-tupaia", the expected answer was "potlatching-"tupaia""
</code></pre>
<p>My first idea was that my JSON value was perhaps being parsed to a string or given the wrong format. I tried to "bypass" this by manually adding the missing <code>"</code> characters. As you would expect, that didn't work:</p>
<pre><code>result=f'"{list_key}-"{res[ind]}""'
</code></pre>
<p>Output:</p>
<pre><code>Your answer was on time but incorrect: - Your answer was "ultralegality-"reservoired"", the expected answer was "ultralegality-"reservoired""
</code></pre>
<p>As you see, bypassing doesn't work.</p>
<p>I don't know what's wrong. Is it the way I retrieve the data from the JSON, which leaves it in another format (like a string)? Is it the way I display my data? The lab seems to check both the data and its type, but I don't know any other solutions. I've tried almost every way of displaying the data (<code>str()</code>, <code>f"{}"</code>, ...).</p>
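<p>Incidentally, the expected answer keeps the JSON quotation marks around the value, which suggests the checker wants the value re-serialized as JSON rather than converted with <code>str()</code>. A minimal sketch of that idea (the trimmed-down <code>challenge_input</code> below is a stand-in for the real lab data, which arrives over the socket):</p>

```python
import json

challenge_input = '{"potlatching": ["lucrific", "tupaia"]}'  # trimmed stand-in
ind = 1

json_data = json.loads(challenge_input)

# pick the key whose list value is the longest
list_key = max(
    (k for k, v in json_data.items() if isinstance(v, list)),
    key=lambda k: len(json_data[k]),
)

# json.dumps puts back the JSON quotes that str() leaves out
result = f"{list_key}-{json.dumps(json_data[list_key][ind])}"
print(result)  # potlatching-"tupaia"
```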
| <python><json> | 2023-09-25 17:43:46 | 1 | 390 | ThomasT |
77,174,695 | 5,195,646 | Display a literalinclude block within a README.rst in GitHub | <p><strong>Goal</strong>. Display a <code>literalinclude</code> block within a README.rst in GitHub.</p>
<p><strong>Problem</strong>. I'm working on generating documentation for a Python project using Sphinx. The README.rst file has a literalinclude block (documentation <a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-literalinclude" rel="nofollow noreferrer">here</a>). The literalinclude block is displayed correctly in Sphinx documentation but not in the home page of the repo in GitHub.</p>
<p><strong>Tested workarounds</strong>. The explanation could be that the literalinclude directive is understood by Sphinx, but not by Docutils (and therefore GitHub?) as discussed <a href="https://stackoverflow.com/questions/77172274/error-using-literalinclude-with-sphinx-in-rst">here</a>, which includes a reproducible example. One workaround that I've explored is to use a <code>.. only:</code> block (<a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-only" rel="nofollow noreferrer">link</a>). However, once again, it works on Sphinx documentation but not on Github, so it's likely affected by the same curse of literalinclude.</p>
| <python><python-3.x><github><python-sphinx><docutils> | 2023-09-25 17:35:19 | 0 | 2,130 | Elrond |
77,174,583 | 12,502,424 | Unable to resolve "Celery Result backend. DisabledBackend object has no attribute _get_task_meta_for" | <p>I am building a small Flask application that calls the get completion API from OpenAI to fetch some information. I want to run these API calls in the background (partially because Heroku has a 30 sec. timeout). I tried with redis, and now with Celery, but even though the background task is created and executed correctly, I can't seem to resolve the "Celery Result backend. DisabledBackend object has no attribute _get_task_meta_for" error.</p>
<p>I've checked the various issues on SF, used Chat GPT Plus, and looked at various App Factory blogs.</p>
<p>Previously I used Postgres and even though the celery table is successfully created during database creation check, the error won't go away.</p>
<p>I have set up the celery_config.py in two different ways (of course, I uncomment one before running); neither works:</p>
<pre><code>CELERY_BROKER_URL = "redis://localhost:6379/0"
#option1 CELERY_RESULT_BACKEND = "db+postgresql://user:pwd@localhost/db"
#option2 CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
</code></pre>
<p>celery_utils.py:</p>
<pre><code>from celery import Celery
celery = Celery("website", broker="", backend="")
def make_celery(app):
celery = Celery(
app.import_name,
backend=app.config["CELERY_RESULT_BACKEND"],
broker=app.config["CELERY_BROKER_URL"],
)
celery.conf.update(app.config)
return celery
</code></pre>
<p>tasks.py:</p>
<pre><code>@celery.task(bind=True)
def fetchContextualNote_async(self, term, context):
    # logic here
</code></pre>
<p>Celery logs: when I run</p>
<pre><code>celery -A appname.tasks worker --loglevel=info
</code></pre>
<p>I can see the API call and successful execution.</p>
<p>What am I missing?</p>
| <python><celery> | 2023-09-25 17:13:24 | 1 | 1,199 | VeeDuvv |
77,174,529 | 3,437,212 | Pre-trained models(Spacy, NLTK etc. ) for Name, Entity, Product, Place recognition for short descriptions | <p>I have very short descriptions of not exceeding 40 characters. I'm using Spacy's NER model to identify Name, Entity, Products and foods.
The problem is that my text descriptions are very short and are not proper English sentences; spaCy fails to identify names, entities, products and foods and returns nulls.</p>
<p>For example, the first description in my table is "Monster Ultra Strawberry". But when I try to get the entity type tags for individual tokens the spacy model returns a null.</p>
<p>My code is as below.</p>
<pre><code>nlp = spacy.load('en_core_web_lg')
docs = nlp(data['desc'][0])
for token in docs:
print(token.ent_type_)
</code></pre>
<p>Please let me know what other models I can use in this situation. Are there classifiers that can place a word into these buckets (Name, Org, Food, Product, etc.) without additional context?</p>
| <python><nlp><nltk><spacy> | 2023-09-25 17:05:30 | 1 | 685 | user3437212 |
77,174,450 | 5,892,689 | Find intervals in a Pandas DatetimeIndex with continuous data, and with no data in a given period | <p>I have a DatetimeIndex. I would like to find intervals greater than e.g. '10s' where there is no data, as well as intervals with continuous data.</p>
<p>Intervals with data (I show the first one):</p>
<pre><code>['2023-09-12 09:48:28.720000', '2023-09-12 09:48:29.020999936']
</code></pre>
<p>Intervals with no data:</p>
<pre><code>['2023-09-12 09:48:29.020999936', '2023-09-19 10:27:00.992000']
index = pd.to_datetime(['2023-09-12 09:48:28.720000',
'2023-09-12 09:48:28.813999872',
'2023-09-12 09:48:28.921999872',
'2023-09-12 09:48:29.020999936',
'2023-09-19 10:27:00.992000',
'2023-09-19 10:27:01.192000',
'2023-09-19 10:27:01.293999872'])
</code></pre>
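<p>One way to get both kinds of intervals, sketched on the index above: take consecutive differences, mark every timestamp whose gap to the previous one exceeds the threshold, and use a cumulative sum of those marks to group the timestamps into continuous runs (the 10-second threshold is the one from the question):</p>

```python
import pandas as pd

index = pd.to_datetime(['2023-09-12 09:48:28.720000',
                        '2023-09-12 09:48:28.813999872',
                        '2023-09-12 09:48:28.921999872',
                        '2023-09-12 09:48:29.020999936',
                        '2023-09-19 10:27:00.992000',
                        '2023-09-19 10:27:01.192000',
                        '2023-09-19 10:27:01.293999872'])

s = index.to_series()
# True wherever the jump from the previous timestamp exceeds 10 seconds
new_run = s.diff() > pd.Timedelta('10s')
# the cumulative sum of the break marks labels each continuous run
runs = s.groupby(new_run.cumsum()).agg(['min', 'max'])

# intervals with continuous data: (start, end) of each run
with_data = list(zip(runs['min'], runs['max']))
# intervals with no data: from the end of one run to the start of the next
no_data = list(zip(runs['max'].iloc[:-1], runs['min'].iloc[1:]))
```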
| <python><pandas><datetime><group-by> | 2023-09-25 16:50:56 | 1 | 688 | Guido |
77,174,223 | 1,041,521 | Simplest Dataloader fails when num_workers>0 | <p>The following very simplistic Dataloader works, when <code>num_workers=0</code>, but fails with unexpected Runtime Error as soon as I increase to 1: <code>RuntimeError: DataLoader worker (pid(s) 8248) exited unexpectedly</code>
My project is bigger, but I could cut the problem down to this minimal example.</p>
<pre><code>import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
class DummyDataset(Dataset):
def __init__(self, num_samples):
self.num_samples = num_samples
def __len__(self):
return self.num_samples
def __getitem__(self, idx):
try:
# Generate random data for features and labels
# Features: shape (6, 64, 64), random float values
# Labels: shape (64, 64), random binary values
features = torch.randn(6, 64, 64)
labels = torch.randint(0, 2, (64, 64))
return features, labels
except Exception as e:
print(f'Error at index {idx}: {e}')
raise
# Usage:
num_samples = 1000
dummy_dataset = DummyDataset(num_samples)
# Create a DataLoader
dummy_dataloader = DataLoader(dummy_dataset, batch_size=32, shuffle=True, num_workers=1)
# Try to fetch a batch of data
data_batch, labels_batch = next(iter(dummy_dataloader))
print(data_batch.shape, labels_batch.shape)
</code></pre>
<p>I checked the Manual but couldn't find any solution. The general setup on my computer is working and even the following example gives no error with 4 workers:</p>
<pre><code>import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose([transforms.ToTensor()])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True, num_workers=4)
# Try to get a batch of data
data, labels = next(iter(trainloader))
print(data.shape, labels.shape)
</code></pre>
<p>Therefore I think the problem is in my implementation. On the other hand, Google Colab does not raise any problems here, so it might instead be a problem with my environment that I can't identify.</p>
<p>Versions:</p>
<ul>
<li>Python: 3.11.5</li>
<li>torch: 2.0.1</li>
</ul>
<p>I'll be glad to provide more information to solve the issue.</p>
| <python><pytorch><pytorch-dataloader> | 2023-09-25 16:11:27 | 1 | 324 | Sonic |
77,174,018 | 11,370,582 | Getting `TypeError: issubclass() arg 1 must be a class` when trying to load `nlp = spacy.load("en_ner_bc5cdr_md")` | <p>I'm using spaCy to analyze a large set of medical text for commentary about diagnoses, which was working fine when I left it last week.</p>
<p>Now when I try to load the scispaCy library <code>en_ner_bc5cdr_md</code> I am getting the type error <code>TypeError: issubclass() arg 1 must be a class</code></p>
<p>I used this exact library to analyze the same text last week and the only thing that has changed is that I've restarted my computer.</p>
<p>I'm running an anaconda distribution with python <code>3.8.8</code> on OSX <code>Version 13.5.2 (22G91)</code></p>
<p>Any ideas what's going on?</p>
<p>This is the code; it never gets past loading the model.</p>
<pre><code>import spacy
import scispacy
import pandas as pd
text = """The patient was diagnosed with pneumonia last year. He has a history of asthma and hypertension.
His COPD symptoms have worsened over the last 2 months. The patient also suffers from migraines."""
nlp = spacy.load("en_ner_bc5cdr_md")
doc = nlp(text)
labels = []
counts = []
for ent in doc.ents:
if ent.label_ == 'DISEASE':
if ent.text not in labels:
labels.append(ent.text)
counts.append(1)
else:
idx = labels.index(ent.text)
counts[idx] += 1
df = pd.DataFrame({'Diagnosis': labels, 'Count': counts})
print(df)
</code></pre>
<p>If you are able to run it on your machine, please let me know the parameters. You will also need to run: <code>!pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_ner_bc5cdr_md-0.4.0.tar.gz</code></p>
| <python><tensorflow><machine-learning><nlp><spacy> | 2023-09-25 15:37:10 | 1 | 904 | John Conor |
77,173,879 | 5,724,244 | S3 File Copy:Same bucket: using python boto3 from one folder to the bucket | <p>We have a file in s3bucket named: aftoem-resolver-group-data-1</p>
<p>in the format as follows:</p>
<pre><code>fftoem-operation-group-data-1
|-Folder Name: 2023-09-25 //Today's Date
|- File Name: {randomstring coming as an output from AWS Glue}.csv
</code></pre>
<p>We have written a lambda function to copy the file inside the 2023-09-25 folder and paste it in the bucket and delete the folder.</p>
<p>Resultant Output:</p>
<pre><code>fftoem-operation-group-data-1
|- File Name: resolver_group-2023-09-25.csv
</code></pre>
<p>We have written the following code for lambda however it is giving us unexpected results.</p>
<p>The code below produces the result, but it lands inside another folder in the bucket. We do not want any folder in the bucket; we need the file directly at the bucket root.
Is there any way to achieve this?</p>
<pre><code>import boto3
import botocore
import json
import os
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
s3 = boto3.resource('s3')
client = boto3.client('s3')
def lambda_handler(event, context):
logger.info("New files uploaded to the source bucket.")
print("FP: Printing Event", event)
key = event['Records'][0]['s3']['object']['key']
source_bucket = event['Records'][0]['s3']['bucket']['name']
new_key = f'resolver_group-{key[3:].partition("%")[0]}.csv'
print(f"FP: Printing key={key}")
print(f"FP: Prining source_bucket={source_bucket}")
print(f"FP: Printing new_key={new_key}")
destination_bucket = source_bucket
source = {'Bucket': source_bucket, 'Key': key}
print(f"FP: Printing source={source}")
response = client.list_objects_v2(Bucket= source_bucket, Prefix = key)
print(f"FP: Printing response={response}")
source_key = response["Prefix"]
print(f"FP: Printing source_key={source_key}")
copy_source = {'Bucket': source_bucket, 'Key': source_key}
print(f"FP: Printing copy_source={copy_source}")
try:
client.copy_object(Bucket = destination_bucket, CopySource = copy_source, Key = ''+new_key)
logger.info("File copied to the destination bucket successfully!")
except botocore.exceptions.ClientError as error:
logger.error("There was an error copying the file to the destination bucket")
print('Error Message: {}'.format(error))
except botocore.exceptions.ParamValidationError as error:
logger.error("Missing required parameters while calling the API.")
print('Error Message: {}'.format(error))
</code></pre>
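<p>For what it's worth, the "extra folder" appears whenever the destination <code>Key</code> still contains a <code>/</code>. A sketch of deriving a flat key from the event key (the sample key below is hypothetical; it assumes keys look like <code>2023-09-25/&lt;random&gt;.csv</code>, and uses <code>unquote_plus</code> because S3 event keys arrive URL-encoded):</p>

```python
from urllib.parse import unquote_plus

key = "2023-09-25/a1b2c3d4.csv"   # hypothetical event key

# decode the URL-encoded event key, then split off the folder name
folder, _, _filename = unquote_plus(key).partition("/")

# no '/' in the new key, so the copied object lands at the bucket root
new_key = f"resolver_group-{folder}.csv"
print(new_key)  # resolver_group-2023-09-25.csv
```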
| <python><amazon-web-services><amazon-s3><aws-lambda><boto3> | 2023-09-25 15:11:48 | 1 | 449 | frp farhan |
77,173,742 | 12,845,199 | Wordcloud centralize and reduce overlap in randomly positioned words inside a plot | <pre><code>import plotly
import random
import plotly.graph_objects as go
from plotly.subplots import make_subplots
random.seed(42)
words = ['potato','hello','juvi']
colors = [plotly.colors.DEFAULT_PLOTLY_COLORS[random.randrange(1, 10)] for i in range(20)]
weights = [110,80,20]
data = go.Scatter(x=[random.random() for i in range(20)],
y=[random.random() for i in range(20)],
mode='text',
text=words,
marker={'opacity': 0.3},
textfont={'size': weights,
'color': colors})
layout = go.Layout({'xaxis': {'showgrid': False, 'showticklabels': False, 'zeroline': False},
'yaxis': {'showgrid': False, 'showticklabels': False, 'zeroline': False}})
fig = go.Figure(data=[data], layout=layout)
fig.show()
</code></pre>
<p>I have the following sample code, which generates a word cloud in Plotly. The problem is that, as you can see, the words tend to spill out of the given square in the plot. Any ideas on how I could keep the words inside the bluish square, or at least closer to its center?</p>
<p><a href="https://i.sstatic.net/icW4u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/icW4u.png" alt="enter image description here" /></a></p>
| <python><random><plotly> | 2023-09-25 14:53:06 | 1 | 1,628 | INGl0R1AM0R1 |
77,173,549 | 7,437,143 | Replacing a SimpleString inside a libcst function definition? (dataclasses.FrozenInstanceError: cannot assign to field 'body') | <h2>Context</h2>
<p>While trying to use the <code>libcst</code> module, I am experiencing some difficulties updating the docstring of a function.</p>
<h2>MWE</h2>
<p>To reproduce the error, the following minimal working example (MWE) is included:</p>
<pre class="lang-py prettyprint-override"><code>from libcst import ( # type: ignore[import]
Expr,
FunctionDef,
IndentedBlock,
MaybeSentinel,
SimpleStatementLine,
SimpleString,
parse_module,
)
original_content: str = """
\"\"\"Example python file with a function.\"\"\"
from typeguard import typechecked
@typechecked
def add_three(*, x: int) -> int:
\"\"\"ORIGINAL This is a new docstring core.
that consists of multiple lines. It also has an empty line inbetween.
Here is the emtpy line.\"\"\"
return x + 2
"""
new_docstring_core: str = """\"\"\"This is a new docstring core.
that consists of multiple lines. It also has an empty line inbetween.
Here is the emtpy line.\"\"\""""
def replace_docstring(
original_content: str, func_name: str, new_docstring: str
) -> str:
"""Replaces the docstring in a Python function."""
module = parse_module(original_content)
for node in module.body:
if isinstance(node, FunctionDef) and node.name.value == func_name:
print("Got function node.")
# print(f'node.body={node.body}')
if isinstance(node.body, IndentedBlock):
if isinstance(node.body.body[0], SimpleStatementLine):
simplestatementline: SimpleStatementLine = node.body.body[
0
]
print("Got SimpleStatementLine")
print(f"simplestatementline={simplestatementline}")
if isinstance(simplestatementline.body[0], Expr):
print(
f"simplestatementline.body={simplestatementline.body}"
)
simplestatementline.body = (
Expr(
value=SimpleString(
value=new_docstring,
lpar=[],
rpar=[],
),
semicolon=MaybeSentinel.DEFAULT,
),
)
replace_docstring(
original_content=original_content,
func_name="add_three",
new_docstring=new_docstring_core,
)
print("done")
</code></pre>
<h2>Error:</h2>
<p>Running <code>python mwe.py</code> yields:</p>
<pre><code>Traceback (most recent call last):
File "/home/name/git/Hiveminds/jsonmodipy/mwe0.py", line 68, in <module>
replace_docstring(
File "/home/name/git/Hiveminds/jsonmodipy/mwe0.py", line 56, in replace_docstring
simplestatementline.body = (
^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 4, in __setattr__
dataclasses.FrozenInstanceError: cannot assign to field 'body'
</code></pre>
<h2>Question</h2>
<p>How can one replace the docstring of a function named: <code>add_three</code> in some Python code <code>file_content</code> using the libcst module?</p>
<h2>Partial Solution</h2>
<p>I found the following solution for a basic example; however, I have not tested it on functions inside classes, with typed arguments, typed returns, etc.</p>
<pre class="lang-py prettyprint-override"><code>from pprint import pprint
import libcst as cst
import libcst.matchers as m
src = """\
import foo
from a.b import foo_method
class C:
def do_something(self, x):
\"\"\"Some first line documentation
Some second line documentation
Args:something.
\"\"\"
return foo_method(x)
"""
new_docstring:str = """\"\"\"THIS IS A NEW DOCSTRING
Some first line documentation
Some second line documentation
Args:somethingSTILLCHANGED.
\"\"\""""
class ImportFixer(cst.CSTTransformer):
    def leave_SimpleStatementLine(self, original_node, updated_node):
"""Replace imports that match our criteria."""
if m.matches(updated_node.body[0], m.Expr()):
expr=updated_node.body[0]
if m.matches(expr.value, m.SimpleString()):
simplestring=expr.value
print(f'GOTT={simplestring}')
return updated_node.with_changes(body=[
cst.Expr(value=cst.SimpleString(value=new_docstring))
])
return updated_node
source_tree = cst.parse_module(src)
transformer = ImportFixer()
modified_tree = source_tree.visit(transformer)
print("Original:")
print(src)
print("\n\n\n\nModified:")
print(modified_tree.code)
</code></pre>
<p>For example, this partial solution fails on:</p>
<pre class="lang-py prettyprint-override"><code>src = """\
import foo
from a.b import foo_method
class C:
def do_something(self, x):
\"\"\"Some first line documentation
Some second line documentation
Args:something.
\"\"\"
return foo_method(x)
def do_another_thing(y:List[str]) -> int:
\"\"\"Bike\"\"\"
return 1
"""
</code></pre>
<p>because the solution does not verify the name of the function in which the <code>SimpleString</code> occurs.</p>
| <python><replace><docstring><libcst> | 2023-09-25 14:31:18 | 1 | 2,887 | a.t. |
77,173,313 | 14,954,262 | MYSQL query to find partial matches in a string with delimiters | <p>I have this table :</p>
<p><a href="https://i.sstatic.net/SsN0k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SsN0k.png" alt="enter image description here" /></a></p>
<p>And I have this string in Python, <strong>which can change</strong> as it's the result of a previous query.</p>
<p>So the string could be either</p>
<pre><code>string_1 = str(;123;131415;)
</code></pre>
<p>or</p>
<pre><code>string_1 = str(;456;123;)
</code></pre>
<p>or</p>
<pre><code>string_1 = str(;789;)
</code></pre>
<p>and so on ...</p>
<p>I'm searching for a query to compare each number between the <code>";"</code> delimiters in <code>string_1</code> and search for a match in <code>my_table</code> with <code>RLIKE</code></p>
<p>Something like :</p>
<pre><code>"SELECT id FROM my_table WHERE company_members_id RLIKE'" each numbers between ;; in string_1 "' "
</code></pre>
<p>It should return row 1 (match with ;123;) and row 2 (match with ;131415;)</p>
<p>Thanks</p>
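<p>On the Python side, one way to build a single <code>RLIKE</code> pattern from the delimited string (a sketch; it assumes <code>company_members_id</code> stores the same <code>;</code>-delimited format shown in the table): split on <code>;</code>, drop the empty pieces, and join the numbers into one alternation.</p>

```python
string_1 = ";123;131415;"

# keep only the non-empty tokens between the delimiters
numbers = [n for n in string_1.split(";") if n]

# ;(123|131415); matches ;123; or ;131415; anywhere in the column
pattern = ";({});".format("|".join(numbers))

query = "SELECT id FROM my_table WHERE company_members_id RLIKE %s"
# then, with a MySQL driver: cursor.execute(query, (pattern,))
```

<p>Passing the pattern as a query parameter, rather than concatenating it into the SQL string, also avoids SQL injection.</p>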
| <python><mysql> | 2023-09-25 13:59:02 | 1 | 399 | Nico44044 |
77,173,221 | 14,923,149 | Splitting Mismatched Rows into Multiple Rows in a Pandas DataFrame | <p>I would like to split the mismatched values within each column into separate rows while preserving the values in the 'Gene.ID' column for the new rows.</p>
<pre><code>import pandas as pd
data = {
'Gene.ID': ['NZ_JAHWGH010000001.1_15', 'NZ_JAHWGH010000001.1_17', 'NZ_JAHWGH010000001.1_68', 'NZ_JAHWGH010000001.1_7', 'NZ_JAHWGH010000001.1_7', 'NZ_JAHWGH010000001.1_7', 'NZ_JAHWGH010000001.1_7', 'NZ_JAHWGH010000001.1_7','NZ_JAHWGH010000001.1_7'],
'DIAMOND': ['SLH', 'GT2', 'GT2', 'CBM41', 'CBM48', 'GH11', 'GH13', 'GH13', ''],
'HMMER': ['', 'GT2', 'GT2', 'CBM41', 'CBM41', 'GH13', 'GH13', '', 'GH13'],
'dbCAN_sub': ['', 'GT2', 'GT2', 'CBM41', 'CBM41', 'CBM41', 'CBM48', '', 'GH13']
}
df = pd.DataFrame(data)
# Print the DataFrame
print(df)
</code></pre>
<p>The result should look like this:</p>
<pre><code>expected_data = {
"Gene.ID": ["NZ_JAHWGH010000001.1_15", "NZ_JAHWGH010000001.1_17", "NZ_JAHWGH010000001.1_68", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7", "NZ_JAHWGH010000001.1_7"],
"DIAMOND": ["SLH", "GT2", "GT2", "CBM41", "CBM48", "", "GH11", "", "", "GH13", "", "GH13",""],
"HMMER": ["", "", "GT2", "CBM41", "", "CBM41", "", "GH13", "", "GH13", "", "", "GH13"],
"dbCAN_sub": ["", "", "GT2", "CBM41", "", "CBM41", "", "", "CBM41", "", "CBM48", "", "GH13"]
}
expected_df = pd.DataFrame(expected_data)
print(expected_df)
</code></pre>
<p>I tried this code</p>
<pre><code>import pandas as pd
print(df)
def g(df):
for i in range(len(df)):
if i == len(df) - 1:
break
if df.iloc[i, 0] == '':
pass
if df.iloc[i, 0] == df.iloc[i, 1]:
pass
if df.iloc[i, 0] != df.iloc[i, 1]:
df.iloc[i, 1] = df.iloc[i+1, 1]
if df.iloc[i, 1] == '':
pass
if df.iloc[i, 1] == df.iloc[i, 2]:
pass
if df.iloc[i, 1] != df.iloc[i, 2]:
df.iloc[i, 2] = df.iloc[i+1, 2]
return df
df = g(df.copy())
print(df)
</code></pre>
<p>However, I'm facing challenges in splitting the mismatched values and preserving the 'Gene.ID' column for the new rows. Can someone please help me with a solution or suggest a more efficient approach to achieve this?</p>
| <python><pandas><numpy><loops> | 2023-09-25 13:49:25 | 1 | 504 | Umar |
77,173,196 | 7,483,211 | How to rename a conda environment using (micro)mamba? | <p>I now use micromamba instead of conda or mamba. I would like to rename/move an environment.</p>
<p>Using conda, I can <a href="https://stackoverflow.com/questions/42231764/how-can-i-rename-a-conda-environment">rename</a> via:</p>
<pre class="lang-bash prettyprint-override"><code>conda rename -n CURRENT_ENV_NAME NEW_ENV_NAME
</code></pre>
<p>But this doesn't work with neither mamba nor micromamba:</p>
<pre class="lang-bash prettyprint-override"><code>$ /opt/homebrew/Caskroom/miniforge/base/condabin/mamba rename -n flu_frequencies_test flu_frequencies
Currently, only install, create, list, search, run, info, clean, remove, update, repoquery, activate and deactivate are supported through mamba.
$ micromamba rename -n flu_frequencies_test flu_frequencies
The following arguments were not expected: flu_frequencies rename
Run with --help for more information.
</code></pre>
| <python><mamba><micromamba> | 2023-09-25 13:46:18 | 2 | 10,272 | Cornelius Roemer |
77,173,094 | 11,656,244 | Kivy Pop Up scale issue | <p>I have this custom popup component in Kivy. When I scale it down to 0.5, the popup is scaled down perfectly with all its internal components, but there is a transparent background and the buttons are not clickable at their visible positions; they are clickable at their original positions. Here is the code:</p>
<pre><code><CustomPopup@Popup>:
canvas.before:
PushMatrix
Scale:
x: 0.5
y: 0.5
z: 0.5
origin: self.center
canvas.after:
PopMatrix
id: configure_popup
title: "Configure Server Settings" # Title of the popup
size_hint: None, None
size: dp(600), dp(320) # Set the size of the popup
GridLayout:
cols: 1 # Two columns for labels and text inputs
padding: dp(10)
spacing: dp(10)
TextInput:
id: server_url # Reference to the first text input
hint_text: 'Server URL'
multiline: True # Allow only single-line input
size_hint_y: None
height: dp(40)
TextInput:
id: api_key # Reference to the second text input
hint_text: 'Api Key'
multiline: False # Allow only single-line input
size_hint_y: None
height: dp(40)
Label:
text: "Keep this key secret: it provides access to all your data. You can get this key on your profile on the webapp" # Text for the label
color: 0.5, 0.5, 0.5, 1 # Set text color to grey
italic: True # Make the text italic
size_hint_y: None
height: self.texture_size[1] # Adjust the height based on the text size
text_size: self.width, None
Button:
text: "Save and close window"
size_hint_y: None
height: dp(40)
on_release: configure_popup.save_and_close()
Button:
id: test_conifg_button
text: "Test Server Configuration"
size_hint_y: None
height: dp(40)
on_release: app.test_server_configuration(server_url.text, api_key.text, self)
</code></pre>
<p>How can I go about fixing this issue? Any help would be appreciated. Thanks.</p>
| <python><python-3.x><kivy><kivymd> | 2023-09-25 13:32:53 | 1 | 441 | Ibtsam Ahmad |
77,172,876 | 10,694,589 | how to select a circular area of an image? | <p>I would like to select only the gray area in the middle of my image.</p>
<p>I mean that my calculations of mean and standard deviation must use only the values in this area.</p>
<p>I can select columns and rows but I will still have the corners.</p>
<pre><code>import numpy as np
import cv2
import rawpy
import rawpy.enhance
import matplotlib.pyplot as plt
import glob
#################### 2023-09-21_16-58-51.894
# Reading a Nikon RAW (NEF) image
init="/media/alexandre/Transcend/ExpΓ©rience/Ombroscopie/eau/initialisation/2023-09-19_19-02-33.473.nef"
brut="/media/alexandre/Transcend/ExpΓ©rience/Ombroscopie/eau/DT0.4/2023-09-25_13-26-56.259.nef"
bruit="/media/alexandre/Transcend/ExpΓ©rience/Ombroscopie/eau/bruit-electronique/2023-09-18_18-59-34.994.nef"
####################
# This uses rawpy library
print("reading init file using rawpy.")
raw_init = rawpy.imread(init)
image_init = raw_init.postprocess(use_camera_wb=True, output_bps=16)
print("Size of init image read:" + str(image_init.shape))
print("reading brut file using rawpy.")
raw_brut = rawpy.imread(brut)
image_brut = raw_brut.postprocess(use_camera_wb=True, output_bps=16)
print("Size of brut image read:" + str(image_brut.shape))
print("reading bruit file using rawpy.")
raw_bruit = rawpy.imread(bruit)
image_bruit = raw_bruit.postprocess(use_camera_wb=True, output_bps=16)
print("Size of bruit image read:" + str(image_bruit.shape))
####################
# (grayscale) OpenCV
init_grayscale = cv2.cvtColor(image_init, cv2.COLOR_RGB2GRAY)
brut_grayscale = cv2.cvtColor(image_brut, cv2.COLOR_RGB2GRAY)
bruit_grayscale = cv2.cvtColor(image_bruit, cv2.COLOR_RGB2GRAY)
print("max brut_grayscle : ", np.max(brut_grayscale))
print("init grayscale type : ", init_grayscale.dtype)
print("brut grayscale type : ", brut_grayscale.dtype)
test = cv2.divide((brut_grayscale-init_grayscale),(init_grayscale))
####################
# Irms, std, mean
intensite_rms = np.sqrt(np.mean(np.square(test)))
print("IntensitΓ© RMS de l'image :", intensite_rms)
mean, std_dev = cv2.meanStdDev(test)
print("ecart type de l'image :", std_dev[0][0])
print("Moyenne de l'image :", mean[0][0])
print("variance de l'image :", std_dev[0][0]**2)
####################
# Matplotlib
import matplotlib.pyplot as plt
plt.imshow((test * 65535), cmap='gray')
plt.imshow((brut_grayscale * 65535), cmap='gray')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/rEGwl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEGwl.png" alt="enter image description here" /></a></p>
<p>Do you have any idea how to do it?</p>
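<p>A common approach is a boolean circular mask built with NumPy: index the image with the mask, then compute the statistics on just those pixels. A sketch on a small synthetic array (in practice <code>center</code> and <code>radius</code> would describe the gray disc in your image):</p>

```python
import numpy as np

def circular_mask(shape, center=None, radius=None):
    """Boolean mask that is True inside a circle of the given radius."""
    h, w = shape
    if center is None:
        center = (h / 2, w / 2)   # default: image centre
    if radius is None:
        radius = min(h, w) / 2
    yy, xx = np.ogrid[:h, :w]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

img = np.arange(16.0).reshape(4, 4)        # stand-in for the grayscale image
mask = circular_mask(img.shape, radius=1.5)
vals = img[mask]                           # only the pixels inside the circle
mean, std = vals.mean(), vals.std()
```

<p>The same <code>mask</code> works on your <code>test</code> array: replace <code>cv2.meanStdDev(test)</code> with statistics over <code>test[mask]</code>.</p>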
| <python><opencv><rawpy> | 2023-09-25 12:59:39 | 1 | 351 | Suntory |
77,172,842 | 3,190,076 | Import Python standard library with Mojo | <p>I just started to play around with <a href="https://docs.modular.com/mojo/" rel="nofollow noreferrer">Mojo</a>. I am having a hard time importing modules from the Python standard library even though, according to the examples reported in the quick start guide, importing Python's module should be a breeze. What am I missing?</p>
<p>For example, to import the "time" module, I have tried: <code>let time = Python.import_module("time")</code></p>
<p>However, I get a couple of errors I don't get.</p>
<ol>
<li><code>cannot call function that may raise in a context that cannot raise</code></li>
<li><code>use of unknown declaration 'time', 'fn' declarations require explicit variable declarations</code></li>
</ol>
<p>I have resolved the first error by adding <code>raises</code> to the function declaration and adding a <code>try/except</code> block to the import line. However, I am not satisfied with this; it feels too verbose. Is there a better way?</p>
<p>I still don't have a solution for the second error. Can anyone suggest a solution?</p>
<pre><code>fn main() raises:
#added raises and following try/except block
#to fix "Error 1"
from python import Python
try:
let time = Python.import_module("time")
except:
print('Import Error')
let s = time.time() #raises "Error 2"
print('Hello World')
let e = time.time()
print("Done in", e-s, "seconds")
</code></pre>
| <python><mojolang> | 2023-09-25 12:55:45 | 1 | 10,889 | alec_djinn |
77,172,768 | 13,285,834 | Tweaking py-shiny layout | <p>I created a <a href="https://shinylive.io/py/editor/#code=NobwRAdghgtgpmAXGKAHVA6VBPMAaMAYwHsIAXOcpMAMwCdiYACAZwAsBLCbJjmVYnTJMAgujxM6lACZw6EgK4cAOhHqMmqKBGlQWvfoOEARKGSgAxOrDirVNYsSYBeJqfNWbAClVM-TYF9-YJAg4PCmZTARKMRIsABZHlQOQjIFKSi8MIj-KIBJGCgAc1skJgByAB4+YqYoABthQga9Fmco1AZpBTSAWj4SstY6Qg6wNjIyVBZEAHo5lnMyVIBGDBIGho5ZOkHSlg3GOYB3QWkuuBYWU9Q+knJKMjmFVAbiKGkbgCYABm-vnNfgAWOZkNhwPpFFgAaz6ACs+PcoHQpNg+jQ4GYMnAMPDUMUokwIRxipNnMCAKxRAB8FWyEFyTAAvjkALqqACUdmg6AA+koXEwlFghnyaA0lNIfIz-CKtBA4A0+SsyA04D4wBZHKJoA1sCwOCwopyGcERa1sMQFGQ+YbZAAjFEypny7RKu07OBOuguplyjiixXKk5Khp+-3hEVcVA2u1KuBpDgALw1OUjESihDMcGKgmwWXiAGEc3m6NhEIXCGxiKkrs5gKsJN8JABmNkSFgJtJwaQNptMFtMdvAX4c2UZ-zcif+03p3InHbg5yts25Ocz82BhUeopcCP+kXW6Zx3TmcXWeCaqQsBRNE1riIb4Ib6c82Q0VhyABuci8MZtCRj1jMhOyuQ1SE5SsZwAAWAm0chgqQdDkDAzygC8bByD9JCuO8yC8KD5z8KR0joRlkN2DB3CgABxPZpQcYg3wgVQ0FQIUxFQLx2IFDgwLoX86GnMBmTZIA" rel="nofollow noreferrer">minimal example</a> (directly editable online). I have two questions on how to tweak it to look better.</p>
<ol>
<li><p>First of all, how do I set the height to 100%? The problem is that the "category" selectize goes "inside" the outer box.</p>
<p><a href="https://i.sstatic.net/sQWzZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sQWzZ.png" alt="enter image description here" /></a></p>
</li>
<li><p>How to get rid of these borders around the outer box. What helped on one page was <code>ui.tags.style(ui.HTML(".bslib-sidebar-layout .sidebar { border: 0px; }")),</code></p>
</li>
</ol>
<p><a href="https://i.sstatic.net/WKWA5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WKWA5.jpg" alt="enter image description here" /></a></p>
<ol start="3">
<li>Since I use the same page structure on every page, how can I set both 1 and 2 globally?</li>
</ol>
| <javascript><python><css><py-shiny> | 2023-09-25 12:44:32 | 1 | 320 | abe |
77,172,536 | 12,415,863 | Plot points from Z vector at x,y coordinates | <p>I have two vectors, x & y:</p>
<pre><code>x = [0,1,2,3,4]
y = [0,1,2,3]
</code></pre>
<p>and I have a vector z of length (x*y) e.g.</p>
<pre><code>[5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]
</code></pre>
<p>The values in the vector z correspond to the coordinates <code>[(0,0),(0,1), (0,2),(0,3),(1,0), (1,1)...(4,3)]</code></p>
<p>I want a plot where I can plot a point from z at the specified x,y coordinates.</p>
<p>So I want at coordinates (0,0) a value of 5. At (0,1) a value of 6, etc.</p>
<p>How can I do this? I've seen a lot of examples online where this is done as <code>plt.imshow</code>, or a heatmap. But nothing where I can essentially plot a scatter plot/line.</p>
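<p>A sketch of one way to do this with a plain scatter plot (the <code>Agg</code> backend line is only there so it runs headless; drop it for interactive use): <code>np.meshgrid</code> with <code>indexing="ij"</code> makes x vary slowest, which matches the stated (0,0), (0,1), ..., (4,3) ordering of z.</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; remove when viewing interactively
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [0, 1, 2, 3]
z = np.arange(5, 25)

# indexing="ij" -> x varies slowest: (0,0), (0,1), (0,2), (0,3), (1,0), ...
X, Y = np.meshgrid(x, y, indexing="ij")
xs, ys = X.ravel(), Y.ravel()

fig, ax = plt.subplots()
ax.scatter(xs, ys, c=z)
# write each z value next to its point
for xi, yi, zi in zip(xs, ys, z):
    ax.annotate(str(zi), (xi, yi), textcoords="offset points", xytext=(4, 4))
```

<p>Colouring with <code>c=z</code> is optional; the annotations alone already place each z value at its (x, y) position.</p>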
| <python><matplotlib> | 2023-09-25 12:14:04 | 3 | 301 | abra |
77,172,420 | 4,397,613 | ImportError: cannot import name 'field_validator' from 'pydantic' | <p>I am getting the error
<code>ImportError: cannot import name 'field_validator' from 'pydantic'</code> while writing a custom validator on fields of my model class generated from a schema.
Here is how I am importing:</p>
<pre><code>from pydantic import field_validator
</code></pre>
<p>Versions being used:
pydantic version: <code>pypi:pydantic:1.10.7</code>
datamodel-code-generator: <code>pypi:datamodel-code-generator:0.17.1</code></p>
<p>Please help me in identifying the issue.</p>
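<p>For context, <code>field_validator</code> only exists in Pydantic v2; in v1 (including the pinned 1.10.7) the decorator is called <code>validator</code>. A version-agnostic import sketch (the alias name is my own):</p>

```python
try:
    from pydantic import field_validator as any_validator  # Pydantic v2 API
    PYDANTIC_V2 = True
except ImportError:
    from pydantic import validator as any_validator        # Pydantic v1 API
    PYDANTIC_V2 = False

print(PYDANTIC_V2, callable(any_validator))
```

<p>Note the two decorators are not drop-in compatible (their arguments differ), so it is usually simpler to stay on one API; with models generated by datamodel-code-generator 0.17.1 (which, as far as I know, emits v1-style models), that means using <code>validator</code> rather than upgrading Pydantic.</p>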
| <python><importerror><pydantic> | 2023-09-25 11:53:48 | 2 | 529 | shane |
77,172,346 | 5,790,653 | How to convert CIDR to list of IPs | <p>This is a file I have:</p>
<pre class="lang-none prettyprint-override"><code>1.1.1.1/26
2.2.2.2/28
3.3.3.3/29
4.4.4.4/30
</code></pre>
<p>I want to convert all of these subnets to IPs.</p>
<pre><code>import ipaddress
with open('subnets.txt', 'r') as file1:
the_ip=file1.read()
for ip in ipaddress.ip_network(the_ip):
print(f'IP: {ip}')
</code></pre>
<p>This is what I'm trying to do but I get this error:</p>
<blockquote>
<p>ValueError: "['1.1.1.1/26', '2.2.2.2/28', '3.3.3.3/29', '4.4.4.4/30']"
does not appear to be an IPv4 or IPv6 network</p>
</blockquote>
<p>This is the list I tried making:</p>
<pre><code>ips_list = ['1.1.1.1/26', '2.2.2.2/28', '3.3.3.3/29', '4.4.4.4/30']
</code></pre>
<p>This is the error:</p>
<pre><code>"ips_list = ['1.1.1.1/26', '2.2.2.2/28', '3.3.3.3/29', '4.4.4.4/30']" does not appear to be an IPv4 or IPv6 network
</code></pre>
<p>How can I convert subnets to list of IPs?</p>
<p>I googled the errors and the how-tos, but none of them worked.</p>
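<p>For reference, a sketch of one way this can work: <code>ip_network</code> accepts only a single network string, so the file has to be expanded line by line, and <code>strict=False</code> is needed because entries like <code>1.1.1.1/26</code> have host bits set.</p>

```python
import ipaddress

def cidrs_to_ips(lines):
    """Expand each CIDR line into its individual addresses."""
    ips = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # strict=False accepts networks written with host bits set (e.g. 1.1.1.1/26)
        net = ipaddress.ip_network(line, strict=False)
        ips.extend(str(ip) for ip in net)
    return ips

print(cidrs_to_ips(['4.4.4.4/30']))  # ['4.4.4.4', '4.4.4.5', '4.4.4.6', '4.4.4.7']
```

<p>With the file that would be <code>cidrs_to_ips(open('subnets.txt'))</code>, since iterating a file yields its lines.</p>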
| <python> | 2023-09-25 11:42:54 | 1 | 4,175 | Saeed |
77,172,130 | 6,930,340 | How to find the distance to next non-NaN value in numpy array | <p>Consider the following array:</p>
<pre><code>arr = np.array(
[
[10, np.nan],
[20, np.nan],
[np.nan, 50],
[15, 20],
[np.nan, 30],
[np.nan, np.nan],
[10, np.nan],
]
)
</code></pre>
<p>For every cell in each column in <code>arr</code> I need to find the distance to the next non-NaN value.
That is, the expected outcome should look like this:</p>
<pre><code>expected = np.array(
[
[1, 2],
[2, 1],
[1, 1],
[3, 1],
[2, np.nan],
[1, np.nan],
[np.nan, np.nan]
]
)
</code></pre>
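<p>One vectorised sketch (my own attempt, not taken from the question): for each column, take the indices of the non-NaN entries and use <code>np.searchsorted</code> to find, for every row, the first such index strictly after it.</p>

```python
import numpy as np

def dist_to_next_valid(col):
    """Distance from each cell to the next non-NaN value below it (NaN if none)."""
    valid = np.flatnonzero(~np.isnan(col))  # row indices of non-NaN cells
    rows = np.arange(len(col))
    out = np.full(len(col), np.nan)
    # position in `valid` of the first non-NaN row strictly after each row
    nxt = np.searchsorted(valid, rows, side="right")
    has_next = nxt < len(valid)
    out[has_next] = valid[nxt[has_next]] - rows[has_next]
    return out

arr = np.array([[10, np.nan], [20, np.nan], [np.nan, 50], [15, 20],
                [np.nan, 30], [np.nan, np.nan], [10, np.nan]])
result = np.column_stack([dist_to_next_valid(c) for c in arr.T])
print(result)
```
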
| <python><pandas><numpy> | 2023-09-25 11:07:53 | 2 | 5,167 | Andi |
77,172,054 | 17,082,611 | Progress bar in my custom grid search using tqdm | <p>I am implementing the grid search from scratch. This is the code:</p>
<pre><code>def custom_grid_search(x_train, latent_dimensions):
param_grid = {
'epochs': [1, 3, 5],
'l_rate': [10 ** -4],
'batch_size': [32, 64],
'patience': [30]
}
grid_search = CustomGridSearchCV(param_grid)
grid_search.fit(x_train, latent_dimensions)
return grid_search
</code></pre>
<p>I want to print a progress bar and I often use <code>tqdm</code> for showing progress bar over loops.</p>
<p>This is my code:</p>
<pre><code>from tqdm import tqdm
class CustomGridSearchCV:
def __init__(self, param_grid):
self.param_grid = param_grid
self.best_params_ = {}
self.best_score_ = None
self.grid_ = []
def fit(self, x_train, latent_dimensions):
ssim_scorer = my_ssim
param_combinations = product(*self.param_grid.values())
for params in param_combinations:
print("CIAO") # Never printed!
# other code omitted...
return self
</code></pre>
<p>But unfortunately <code>"CIAO"</code> is never printed.</p>
<p>I debugged those variables that is:</p>
<pre><code>>>> param_combinations
<itertools.product object at 0x289618300>
>>> len(list(param_combinations))
6
</code></pre>
<p>If I remove <code>tqdm</code>, that is, use a plain <code>for params in param_combinations:</code> loop, it works properly. Why is that?</p>
<p>I want to print something like</p>
<pre><code>Combination 1/6
Combination 2/6
...
</code></pre>
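<p>A likely culprit (a guess from the debugging shown, not certain): <code>itertools.product</code> returns a one-shot iterator, so evaluating <code>len(list(param_combinations))</code> in the debugger consumes it, leaving nothing for the loop. Materialising it once also gives the total needed for the progress display:</p>

```python
from itertools import product

param_grid = {
    'epochs': [1, 3, 5],
    'l_rate': [1e-4],
    'batch_size': [32, 64],
    'patience': [30],
}

# Materialise the iterator once; calling len(list(it)) anywhere else exhausts it.
combos = list(product(*param_grid.values()))
for i, params in enumerate(combos, start=1):
    print(f"Combination {i}/{len(combos)}")
```

<p>Since <code>combos</code> is now a list, wrapping it as <code>tqdm(combos)</code> would also show a proper bar with a known total.</p>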
| <python><tqdm> | 2023-09-25 10:56:14 | 1 | 481 | tail |
77,172,051 | 11,167,163 | How to Append 'GRAND TOTAL' Row at the End of a DataFrame in Pandas? | <p>I'm working with a Pandas DataFrame and I want to add a 'GRAND TOTAL' row at the end of it. This row should contain sums of specific numeric columns and should always be at the bottom of the DataFrame, even after sorting by index.</p>
<p>Here's a snippet of my code:</p>
<pre class="lang-py prettyprint-override"><code>total = new_df.sum(numeric_only=True)
total.name = 'GRAND TOTAL'
total_df = total.to_frame().T
total_df.at['GRAND TOTAL', 'CURRENCY'] = ''
total_df.at['GRAND TOTAL', 'MANDATE'] = 'SomeValue'
total_df.at['GRAND TOTAL', 'COMPOSITE'] = 'GRAND TOTAL'
new_df = pd.concat([new_df, total_df.set_index('COMPOSITE')])
new_df.sort_index(inplace=True)
</code></pre>
<p>When I run this, the 'GRAND TOTAL' row doesn't stay at the end after sorting.</p>
<p>How can I make sure that the 'GRAND TOTAL' row is always at the end of the DataFrame?</p>
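<p>One workaround sketch (using a minimal stand-in frame, since the full one isn't shown): sort the body of the frame first, and only then append the total row, so sorting can never move it.</p>

```python
import pandas as pd

# Minimal stand-in for new_df (the real frame has more columns)
new_df = pd.DataFrame({'COMPOSITE': ['b', 'a'], 'VALUE': [2.0, 1.0]}).set_index('COMPOSITE')

total = new_df.sum(numeric_only=True)
total.name = 'GRAND TOTAL'

# Sort the body first, then concatenate the total row so it stays last
new_df = pd.concat([new_df.sort_index(), total.to_frame().T])
print(new_df.index.tolist())  # ['a', 'b', 'GRAND TOTAL']
```
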
| <python><pandas> | 2023-09-25 10:55:58 | 1 | 4,464 | TourEiffel |
77,171,960 | 13,682,080 | Fetch balance from Huobi using ccxt | <p>I'm trying to fetch my futures account balance on Huobi.</p>
<p>This is the code I use:</p>
<pre><code>b = ccxt.huobi({
"apiKey": "API KEY",
"secret": "SECRET",
"options": {
"defaultType": "future"
},
})
b.fetch_balance()
</code></pre>
<p>When I run it, I get this error:</p>
<pre><code>ccxt.base.errors.ExchangeError: huobi {
"status": "error",
"err_code": 4002,
"err_msg": "The merged cross and isolated margin account for USDT-M futures is unavailable.Please complete the query with linear-swap-api/v3/unified_account_info",
"ts": 1695638305262
}
</code></pre>
<p>I tried to read huobi API docs as well as ccxt docs, but didn't find anything useful.</p>
<p>Can you please help me with figuring out what is the problem and how I can solve it?</p>
| <python><cryptocurrency><ccxt><huobi> | 2023-09-25 10:42:55 | 1 | 542 | eightlay |
77,171,953 | 7,052,826 | Plotly texttemplate appears to ignore actual string template | <p>From the Plotly documentation on text templates, I expected the following code to produce a bar graph where each bar contains the corresponding text as per <code>df['label']</code>. Instead, each bar contains the string found in <code>df['Foo']</code>. How can I get the <code>texttemplate</code> to work properly, i.e., have each bar contain the text from <code>df['label']</code>?</p>
<pre><code>>>> import plotly
>>> import pandas as pd
>>> import plotly.graph_objects as go
>>> plotly.__version__
5.17.0
>>> df = pd.DataFrame(data={"Foo":['a','b','c'],'Bar':[2,5,10], 'label':['alpha','beta', 'charlie']})
df
Foo Bar label
0 a 2 alpha
1 b 5 beta
2 c 10 charlie
>>> bar = go.Bar(x=df['Foo'], y=df['Bar'], texttemplate ="%{label}")
>>> go.Figure(data=bar)
</code></pre>
<p><a href="https://i.sstatic.net/PxN2u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PxN2u.png" alt="enter image description here" /></a></p>
| <python><plotly> | 2023-09-25 10:42:18 | 2 | 4,155 | Mitchell van Zuylen |
77,171,944 | 4,776,977 | How to safely add GitHub packages to a Conda environment for export | <p>I have installed a package from GitHub in a Conda environment and subsequently exported it. When I came to install the environment on another machine, I found that the GitHub package had been listed under the pip dependencies, causing the installation to fail when it found no such package on the PyPI server. I removed that dependency manually but then had to painstakingly remove a series of other dependencies that had been created, all with incompatible versions.</p>
<p>So... my question is, when I need to use a package from GitHub in a Conda virtual environment, how can I save the dependency? This <a href="https://stackoverflow.com/a/32799944/4776977">SO answer</a> suggests that you can list GitHub dependencies in the pip section of the environment YAML, but that is not what gets created when I export the environment according to <a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#exporting-the-environment-file" rel="nofollow noreferrer">Conda's documentation</a>.</p>
<p>I can think of 2 workarounds, but neither is ideal:</p>
<ol>
<li>Manually edit the YAML, removing any new pip dependencies following the install and replacing them with the GitHub installation of the main package.</li>
<li>Export the environment <em>before</em> installing any packages from GitHub, and keep a separate note that these need to be installed manually after installing the environment.</li>
</ol>
<p>Sorry if this has been answered elsewhere, but I can't find it. I see other people have given advice on <a href="https://stackoverflow.com/questions/46076754/use-package-from-github-in-conda-virtual-environment">how to install GitHub packages in a Conda environment</a> but no one seems to have broached the issue of how you then replicate that environment successfully.</p>
| <python><github><anaconda><conda> | 2023-09-25 10:40:26 | 0 | 1,768 | Tom Wagstaff |
77,171,894 | 6,322,924 | Need help in finding optimal parameters using pymc | <p>I have a set of measurements in a pandas dataframe. The first two columns are two parameters, and depending on their values, I also have 4 features. Let's say I perform a new observation and obtain a new set of values for those 4 features. My goal is to find the best match of my observation in that lookup-table dataset.
How am I planning to do that? I "vary" those two parameters; for each combination of parameters I get a set of features, and then I calculate chi2 between those "new" features and the features from the dataset. So, instead of an exhaustive grid search for the minimum chi2, I plan to search intelligently using MCMC.
I managed to do it using the emcee library, and also using pymc3, although pymc3 is at least 10 times slower. Therefore, I tried to implement the same thing using PyMC, but I am failing.
I need your help: I am not sure how to perform the lookup in the table, since the parameters must be distributions. I will paste my minimal reproducible example below; I think it is self-explanatory. Basically, I need help finishing my code.</p>
<pre><code>import pandas as pd
import pytensor.tensor as pt
import pymc as pm
import numpy as np
#this function should return features(data) for given parameters
def model_getFeaturesFromParams(Param1, Param2, lookup_table_arr):
lookup_table_tensor = pt.as_tensor_variable(lookup_table_arr)
# Compute the index based on the given Param1 and Param2
print(lookup_table_tensor[:,0])
a = pt.eq(lookup_table_tensor[:, 0], Param1)
b = pt.eq(lookup_table_tensor[:, 1], Param1)
print(a,b)
index = (lookup_table_tensor[:, 0] == Param1) & (lookup_table_tensor[:, 1] == Param2)
# Check if there's a matching row in the lookup table
if pt.all(index):
# Extract the features from the matching row
features = lookup_table_tensor[index][0][2:]
return features
else:
# Handle the case when Param1 and Param2 are not found in the table
return [0, 0, 0, 0] # Modify this as needed
def chiSquaredLikelihood(observed_features, observed_errors, predicted_features):
chi2 = pt.sum(pt.sqr((observed_features - predicted_features) / observed_errors))
return -2 * chi2
def run_pymc():
observed_features = np.array([2,2,4,4])
observed_errors = np.array([0.03, 0.02, 0.02, 0.03])
lookup_table = [
[1,2, 1,1,2,2],
[2,4, 2,2,4,4],
[4,8, 4,4,8,8],
]
columns = ["Param1", "Param2", "feature1", "feature2", "feature3", "feature4"]
df = pd.DataFrame(lookup_table, columns=columns)
lookup_table_arr = df.values
with pm.Model() as model:
Param1 = pm.Uniform('Param1', lower=1, upper=4)
Param2 = pm.Uniform('Param2', lower=2, upper=8)
Param1_tensor = pt.as_tensor_variable(Param1)
Param2_tensor = pt.as_tensor_variable(Param2)
# predicted_features = model_getFeaturesFromParams(Param1_tensor, Param2_tensor, df)
predicted_features = model_getFeaturesFromParams(Param1_tensor, Param2_tensor, lookup_table_arr)
likelihood = pm.DensityDist(
"likelihood", logp=lambda x: chiSquaredLikelihood(observed_features, observed_errors, x),
observed=predicted_features.eval()
)
trace = pm.sample(100,100, chains=4, return_inferencedata=True,
progressbar=True, random_seed=42, discard_tuned_samples=True, cores=4, step=pm.Metropolis())
return trace
if __name__ == "__main__":
run_pymc()
</code></pre>
| <python><optimization><pymc3><pymc><mcmc> | 2023-09-25 10:32:29 | 0 | 607 | Falco Peregrinus |
77,171,600 | 2,123,706 | How to drop and create SQL table using SQLalchemy | <p>I have data in a SQL table. I have some processes where I append extra data, but there are extra columns present. In order to append successfully, I read in the entire SQL table, <code>pd.concat</code> the new data, drop the original table, and then write this back to SQL.</p>
<p>This works with this:</p>
<pre><code>database_con = f'mssql://@{server}/{database}?driver={driver}'
engine = create_engine(database_con)
con = engine.connect()
df = pd.DataFrame({'col1':[1,2,3], 'col2':['a','b','c']})
df.to_sql(
name='#temp01',
con=con,
if_exists="append",
index=False
)
query = 'select * from #temp01'
data = pd.read_sql_query(query, con)
data
# new data
df = pd.DataFrame({'column1':['test_20230925'], 'column2':[234], 'column3':[234.56]})
df_new = pd.concat([data,df])
drop_query = """ drop table #temp01"""
pd.read_sql_query(drop_query, con)
df_new.to_sql(
name='#temp01',
con=con,
if_exists="append",
index=False
)
query = "select * from #temp01"
new_data = pd.read_sql_query(query, con)
new_data
</code></pre>
<p>The <code>pd.read_sql_query(drop_query, con)</code> call works, but raises <code>sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.</code></p>
<p>I would like to drop the original table without this return, and have tried</p>
<pre><code>with engine.begin() as conn:
conn.execute(drop_query)
</code></pre>
<p>with the error:</p>
<blockquote>
<p>sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: ' drop table #temp01'</p>
</blockquote>
<p>as well as</p>
<pre><code>tbl = '#temp01'
con.execute(schema.DropTable(tbl, if_exists=True))
</code></pre>
<p>but I get this error:</p>
<blockquote>
<p>AttributeError: 'str' object has no attribute 'name'</p>
</blockquote>
<p>and</p>
<pre><code>schema.DropTable(tbl, if_exists=True)
con.commit()
</code></pre>
<p>returns</p>
<blockquote>
<p><sqlalchemy.sql.ddl.DropTable object at 0x0000019CE1ABF990></p>
</blockquote>
<p>but when I check the status of the table, it has not been updated.</p>
<p>Is there something I am missing?</p>
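<p>Two things appear to be at play (a sketch, verified here against SQLite rather than MSSQL): SQLAlchemy 1.4+/2.0 no longer executes bare strings, so the statement must be wrapped in <code>text()</code>; and since <code>#temp01</code> is a connection-local temp table, the drop has to run on the same <code>con</code> that created it, not on a fresh connection from <code>engine.begin()</code>.</p>

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the MSSQL engine
with engine.begin() as conn:
    conn.execute(text("create table temp01 (col1 int)"))
with engine.begin() as conn:
    conn.execute(text("drop table temp01"))  # text() makes the string executable
with engine.connect() as conn:
    remaining = conn.execute(
        text("select name from sqlite_master where name = 'temp01'")
    ).fetchall()
print(remaining)  # []
```

<p>In the original code that would be <code>con.execute(text(drop_query))</code> on the existing connection, with no result to fetch.</p>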
| <python><sqlalchemy> | 2023-09-25 09:53:20 | 0 | 3,810 | frank |
77,171,571 | 11,729,033 | When making a dataclass, how can I use a custom type for the field objects themselves? | <p>I'm writing a library that works a lot like <code>dataclass</code> but with some additional functionality. Obviously I want to reuse <code>dataclass</code> as much as possible, but for my added functionality, I'd like to extend <code>dataclass.Field</code>. That is, I'd like to define:</p>
<pre><code>class MyField(dataclasses.Field):
... # additional functionality
</code></pre>
<p>And then do <em>something</em> so that when the user does:</p>
<pre><code>@mydataclass
class UserDefinedClass:
foo: int
y: Bar
</code></pre>
<p>(Where <code>mydataclass</code> is a decorator that eventually calls <code>dataclass</code>...)</p>
<p>The actual field objects inside <code>UserDefinedClass</code> will be of type <code>MyField</code> rather than <code>dataclasses.Field</code>.</p>
<p>I can't find anything in the dataclass docs that allows me to do this, but I think I might be missing something.</p>
<p>(I am aware of the <code>metadata</code> keyword argument to <code>field</code>, and I will probably use that if the answer to this question is "that's impossible". But it'll be more work.)</p>
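<p>In case subclassing <code>Field</code> turns out to be unsupported, here is a sketch of how the <code>metadata</code> route can be made less work: wrap <code>dataclasses.field</code> once so the extra attributes ride along in the metadata mapping (the <code>extra</code> name is just an example).</p>

```python
import dataclasses

def my_field(*, extra=None, **kwargs):
    """Like dataclasses.field, but smuggles extra data into metadata."""
    metadata = dict(kwargs.pop("metadata", None) or {})
    metadata["extra"] = extra
    return dataclasses.field(metadata=metadata, **kwargs)

@dataclasses.dataclass
class UserDefinedClass:
    foo: int = my_field(extra="hello", default=0)

fld = dataclasses.fields(UserDefinedClass)[0]
print(fld.metadata["extra"])  # hello
```
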
| <python><python-3.x><python-dataclasses> | 2023-09-25 09:48:32 | 1 | 314 | J E K |
77,171,444 | 11,329,736 | Skip snakemake rules that depend on failed rule | <p>This is a section of my <code>snakemake</code> workflow:</p>
<p><a href="https://i.sstatic.net/wMNsM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wMNsM.png" alt="enter image description here" /></a></p>
<p>The rule <em>bagel2bf</em> fails occasionally with certain data sets, but I do not want this to fail the entire <code>snakemake</code> run.</p>
<p>How can I adapt the workflow so that if there is an exit code 1 from rule <em>bagel2bf</em> it will skip the rules that are dependent on its output (maybe also printing a warning message)?</p>
<p>This is the code for the rules:</p>
<pre><code>rule install_bagel2:
output:
directory("bagel2_software/"),
shell:
"git clone https://github.com/hart-lab/bagel.git {output}"
rule convert_count_table:
input:
"count/counts-aggregated.tsv"
output:
"count/counts-aggregated-bagel2.tsv"
params:
fa=fasta,
resources:
runtime=config["resources"]["stats"]["time"]
conda:
"envs/stats.yaml"
script:
"scripts/convert_count_table.py"
rule bagel2fc:
input:
"bagel2_software/",
"count/counts-aggregated-bagel2.tsv",
output:
"bagel2/{bcomparison}/{bcomparison}.foldchange"
resources:
runtime=config["resources"]["stats"]["time"]
conda:
"envs/stats.yaml"
log:
"logs/bagel2/fc/{bcomparison}.log"
script:
"scripts/bagel2fc.py"
rule bagel2bf:
input:
"bagel2_software/",
"bagel2/{bcomparison}/{bcomparison}.foldchange",
output:
"bagel2/{bcomparison}/{bcomparison}.bf"
params:
species=config["lib_info"][library]["species"],
resources:
runtime=config["resources"]["stats"]["time"]
conda:
"envs/stats.yaml"
log:
"logs/bagel2/bf_{bcomparison}.log"
script:
"scripts/bagel2bf.py"
rule bagel2pr:
input:
"bagel2_software/",
"bagel2/{bcomparison}/{bcomparison}.bf",
output:
report("bagel2/{bcomparison}/{bcomparison}.pr", caption="report/bagel2.rst", category="BAGEL2")
params:
species=config["lib_info"][library]["species"]
resources:
runtime=config["resources"]["stats"]["time"]
conda:
"envs/stats.yaml"
log:
"logs/bagel2/pr_{bcomparison}.log"
script:
"scripts/bagel2pr.py"
rule plot_bf:
input:
"bagel2/{bcomparison}/{bcomparison}.bf"
output:
report("bagel2_plots/{bcomparison}/{bcomparison}.bf.pdf", caption="report/bagel2_plots.rst", category="BAGEL2 plots", subcategory="{bcomparison}", labels={"Comparison":"{bcomparison}", "Figure":"BF plot"})
conda:
"envs/stats.yaml"
script:
"scripts/plot_bf.py"
rule plot_pr:
input:
"bagel2/{bcomparison}/{bcomparison}.pr"
output:
report("bagel2_plots/{bcomparison}/{bcomparison}.pr.pdf", caption="report/bagel2_plots.rst", category="BAGEL2 plots", subcategory="{bcomparison}", labels={"Comparison":"{bcomparison}", "Figure":"Precision-recall plot"})
conda:
"envs/stats.yaml"
script:
"scripts/plot_pr.py"
</code></pre>
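<p>One common pattern (a sketch of my own, not Snakemake-specific machinery): make the fragile step itself tolerant by catching the failure inside <code>scripts/bagel2bf.py</code> and writing an empty sentinel output, so the rule "succeeds" and the downstream scripts can check for an empty input and exit early with a warning.</p>

```python
import os
import tempfile
import warnings
from pathlib import Path

def run_tolerantly(step, output_path):
    """Run `step`; on failure, warn and create an empty sentinel output so
    downstream rules still get their input and can detect the empty file."""
    try:
        step()
    except Exception as exc:  # deliberately broad: any data-set failure
        warnings.warn(f"step failed ({exc}); writing empty sentinel output")
        Path(output_path).touch()

def failing_step():
    raise RuntimeError("bad data set")  # stand-in for the real BAGEL2 call

out = os.path.join(tempfile.gettempdir(), "example.bf")
Path(out).unlink(missing_ok=True)  # start clean for the demonstration
run_tolerantly(failing_step, out)
print(Path(out).exists(), Path(out).stat().st_size)  # True 0
```
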
| <python><snakemake> | 2023-09-25 09:29:00 | 2 | 1,095 | justinian482 |
77,171,252 | 8,849,755 | Pandas select matching multi index with different number of levels | <p>I have a data frame with 3 index levels:</p>
<pre><code> d
a b c
1 9 4 1
2 8 2 4
3 7 5 2
4 6 4 5
5 5 6 3
6 4 5 6
7 3 7 4
8 2 6 7
9 1 8 5
</code></pre>
<p>and I have a multi index object with only 2 levels:</p>
<pre><code>MultiIndex([(1, 9),
(2, 8),
(3, 7),
(4, 6),
(9, 1)],
names=['a', 'b'])
</code></pre>
<p>How can I select the entries on the data frame that match this multi index?</p>
<p>Toy code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
df1 = pandas.DataFrame(
dict(
a = [1,2,3,4,5,6,7,8,9],
b = [9,8,7,6,5,4,3,2,1],
c = [4,2,5,4,6,5,7,6,8],
d = [1,4,2,5,3,6,4,7,5],
)
).set_index(['a','b','c'])
select_this = multi_idx = pandas.MultiIndex.from_tuples([(1, 9), (2, 8), (3, 7), (4, 6), (9, 1)], names=['a', 'b'])
selected = df1.loc[select_this]
print(select_this)
print(df1)
print(selected)
</code></pre>
<p>which produces <code>ValueError: operands could not be broadcast together with shapes (5,2) (3,) (5,2)</code>.</p>
<p>What I want to do can be achieved with</p>
<pre class="lang-py prettyprint-override"><code>selected = df1.reset_index('c').loc[select_this].set_index('c', append=True)
</code></pre>
<p>However, this forces me to do this extra <code>reset_index</code> and then <code>set_index</code>. I want to avoid this.</p>
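<p>A sketch that avoids the index round-trip: drop the extra level from the frame's index and test membership against the 2-level MultiIndex, then use the boolean mask.</p>

```python
import pandas as pd

df1 = pd.DataFrame(
    dict(
        a=[1, 2, 3, 4, 5, 6, 7, 8, 9],
        b=[9, 8, 7, 6, 5, 4, 3, 2, 1],
        c=[4, 2, 5, 4, 6, 5, 7, 6, 8],
        d=[1, 4, 2, 5, 3, 6, 4, 7, 5],
    )
).set_index(['a', 'b', 'c'])
select_this = pd.MultiIndex.from_tuples(
    [(1, 9), (2, 8), (3, 7), (4, 6), (9, 1)], names=['a', 'b'])

# Drop the level not present in the selector, then test (a, b) membership
mask = df1.index.droplevel('c').isin(select_this)
selected = df1[mask]
print(selected)
```

<p>The selection keeps the full 3-level index, so no <code>set_index</code> is needed afterwards.</p>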
| <python><pandas> | 2023-09-25 09:02:30 | 5 | 3,245 | user171780 |
77,171,240 | 6,515,755 | python run pre-commit with -m | <p>Python modules can be run with <code>-m</code>;
for example, the following command runs <code>pip</code>:</p>
<pre><code>python3.10 -m pip install pre-commit
</code></pre>
<p>This allows you to specify the exact Python binary to use with the command (python3.10 or python3.11).</p>
<p>I have successfully installed pre-commit, but got the following:</p>
<pre><code>python3.10 -m pre-commit run
> python.exe: No module named pre-commit
python3.10 -m precommit run
> python.exe: No module named precommit
</code></pre>
<p>How can I run pre-commit via the <code>python -m</code> option?</p>
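<p>The likely explanation (worth confirming in pre-commit's docs): <code>-m</code> takes the <em>import</em> name, and the <code>pre-commit</code> distribution installs a module named <code>pre_commit</code> with an underscore, so the working invocation is <code>python3.10 -m pre_commit run</code>. A tiny illustration:</p>

```python
import importlib.util

# -m wants the import name; for this package the dash becomes an underscore
module_name = "pre-commit".replace("-", "_")
print(module_name)  # pre_commit
print(importlib.util.find_spec(module_name) is not None)  # True when installed
```

<p>Note the dash-to-underscore mapping is a choice of this particular package, not a universal rule; a distribution's import name can be anything.</p>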
| <python><pre-commit> | 2023-09-25 09:01:21 | 1 | 12,736 | Ryabchenko Alexander |
77,171,124 | 2,293,224 | snowflake-connector: 'MissingDependencyError' issue with write_pandas function | <p>I am trying to ingest data into Snowflake using the Python geopandas and snowflake-connector libraries. Here is my code:</p>
<pre><code>import geopandas
import pandas as pd
import json
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas
snowflake_config = {
'account': account_url,
'warehouse':'wh_name',
'database':'db_name',
'schema':'schmea_name',
'role': 'role_name'
}
shapefile = geopandas.read_file("path of shapefile in zip format")
df = pd.DataFrame(shapefile)
conn = snowflake.connector.connect(
user = 'username',
authenticator ='externalbrowser',
**snowflake_config
)
create_table_query = f'''
CREATE TABLE IF NOT EXISTS db_name.schema_name.TRIALEXAMPLE (
globalid STRING,
CreationDa date,
EditDate date,
your_name STRING,
are_you_a_ STRING,
which_indu STRING,
COORDINATES varchar
)
'''
cursor = conn.cursor()
cursor.execute(create_table_query)
success, nchunks, nrows, _ = write_pandas(conn, df, 'TRIALEXAMPLE')
</code></pre>
<p>When I ran the program, it gave the following error on the last command of the code:</p>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 success, nchunks, nrows, _ = write_pandas(conn, df, 'TRIALEXAMPLE')
File ~\AppData\Local\Anaconda3\envs\snowflake-env\lib\site-packages\snowflake\connector\pandas_tools.py:271, in write_pandas(conn, df, table_name, database, schema, chunk_size, compression, on_error, parallel, quote_identifiers, auto_create_table, create_temp_table, overwrite, table_type, **kwargs)
267 if chunk_size is None:
268 chunk_size = len(df)
270 if not (
--> 271 isinstance(df.index, pandas.RangeIndex)
272 and 1 == df.index.step
273 and 0 == df.index.start
274 ):
275 warnings.warn(
276 f"Pandas Dataframe has non-standard index of type {str(type(df.index))} which will not be written."
277 f" Consider changing the index to pd.RangeIndex(start=0,...,step=1) or "
(...)
280 stacklevel=2,
281 )
283 cursor = conn.cursor()
File ~\AppData\Local\Anaconda3\envs\snowflake-env\lib\site-packages\snowflake\connector\options.py:39, in MissingOptionalDependency.__getattr__(self, item)
38 def __getattr__(self, item):
---> 39 raise errors.MissingDependencyError(self._dep_name)
AttributeError: module 'snowflake.connector.errors' has no attribute 'MissingDependencyError'
</code></pre>
<p>I made sure that I installed the latest version of snowflake-connector-python (i.e. 3.2.0) is installed. Any help would be appreciated to fix the issue.</p>
| <python><python-3.x><snowflake-cloud-data-platform> | 2023-09-25 08:43:28 | 0 | 2,260 | user2293224 |
77,170,804 | 12,013,353 | How to determine the actual phase angles of component waves of a signal processed with scipy.fft? | <p>I used <code>scipy.fft</code> to transform a signal and find its components. The signal was randomly chosen, and the code used is shown below. When manually constructing the component waves, I am confused about the phase angle I need to add to each component.</p>
<pre><code>tt = 10*np.pi
N = 1000
fs = N/tt
space = np.linspace(0,tt,N)
signal = lambda t: 2*np.sin(t) + np.cos(2*t)
</code></pre>
<p>The signal looks like this:<br />
<a href="https://i.sstatic.net/72tD5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/72tD5.png" alt="original signal" /></a></p>
<pre><code>fft1 = fft.fft(signal(space))
fft1t = fft1/len(space) # scaling the amplitudes
fft1t[1:] = fft1t[1:]*2 # scaling the amplitudes
freq = fft.fftfreq(space.shape[-1], d=1/fs)
</code></pre>
<p>The stemplot shows the correctly identified terms with their amplitudes, and the corresponding frequencies are identified from <code>freq</code> as 0.1592 for the 5th term and 0.3183 for the 10th term.</p>
<pre><code>plt.stem(np.abs(fft1t)[:int(len(fft1t)/2)]) # plotting only up to the Nyquist freq.
plt.xlim(-.5,20)
plt.grid()
</code></pre>
<p><a href="https://i.sstatic.net/B7LkQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B7LkQ.png" alt="stemplot" /></a></p>
<p>Now when I try to manually reconstruct the signal and plot it on top of the original, I get a phase mismatch of 2 times the phase angle calculated from the FFT results.</p>
<pre><code>uk = space**0 * fft1t[0].real
for n in [5,10]: # 5th and 10th term of the Fourier series
uk += (fft1t[n].real * np.cos(n * np.pi * space / (N/fs/2) + 0*np.angle(fft1t[n]))
+ fft1t[n].imag * np.sin(n * np.pi * space / (N/fs/2) + 0*np.angle(fft1t[n])))
sns.lineplot(x=space, y=signal(space))
sns.lineplot(x=space, y=uk)
</code></pre>
<p><a href="https://i.sstatic.net/1MAdX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1MAdX.png" alt="phase angle 0" /></a></p>
<p>When I add one phase angle I get:</p>
<pre><code>uk = space**0 * fft1t[0].real
for n in [5,10]: # 5th and 10th term of the Fourier series
uk += (fft1t[n].real * np.cos(n * np.pi * space / (N/fs/2) + 1*np.angle(fft1t[n]))
+ fft1t[n].imag * np.sin(n * np.pi * space / (N/fs/2) + 1*np.angle(fft1t[n])))
sns.lineplot(x=space, y=signal(space))
sns.lineplot(x=space, y=uk)
</code></pre>
<p><a href="https://i.sstatic.net/3I0Jq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3I0Jq.png" alt="phase angle 1" /></a></p>
<p>And when I add two times the phase angle, then I get a perfect match:</p>
<pre><code>uk = space**0 * fft1t[0].real
for n in [5,10]: # 5th and 10th term of the Fourier series
uk += (fft1t[n].real * np.cos(n * np.pi * space / (N/fs/2) + 2*np.angle(fft1t[n]))
+ fft1t[n].imag * np.sin(n * np.pi * space / (N/fs/2) + 2*np.angle(fft1t[n])))
sns.lineplot(x=space, y=signal(space))
sns.lineplot(x=space, y=uk)
</code></pre>
<p><a href="https://i.sstatic.net/W2GUM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W2GUM.png" alt="phase angle 2" /></a></p>
<p>Please help me understand why I am required to double the calculated phase angle here, and what the general rule is for determining the phase angle to add. In another <a href="https://stackoverflow.com/questions/77150937/incorrect-fourier-coefficients-signs-resulting-from-scipy-fft-fft">question (example) I recently posted</a>, I only had to add the phase angle once. Is this connected to the sampling space, where instead of sampling from [-T,T] (like in the example from the link) we sample in [0,T] (like in this example)?</p>
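<p>On the "why 2x": the <code>fft1t[n].real</code> and <code>.imag</code> weights already encode the phase (<code>real = |X| cos(phi)</code>, <code>imag = |X| sin(phi)</code>), and the identity <code>cos a cos b + sin a sin b = cos(a - b)</code> then subtracts one phi from whatever angle is added, so adding 2*phi leaves the net +phi that is actually needed. The form that avoids this bookkeeping is one amplitude-times-cosine term per bin, <code>|X| cos(wt + phi)</code>. A sketch (on an <code>arange</code>-based grid so bin k sits at exactly k*fs/N):</p>

```python
import numpy as np

N = 1000
fs = N / (10 * np.pi)
n = np.arange(N)
t = n / fs                      # arange grid: bin k has frequency exactly k*fs/N
x = 2 * np.sin(t) + np.cos(2 * t)

X = np.fft.fft(x)

# one-sided reconstruction: DC + sum of |X_k|*cos(2*pi*k*n/N + phase_k) + Nyquist
recon = np.full(N, X[0].real / N)
for k in range(1, N // 2):
    recon += (2 / N) * np.abs(X[k]) * np.cos(2 * np.pi * k * n / N + np.angle(X[k]))
recon += (X[N // 2].real / N) * np.cos(np.pi * n)

print(np.allclose(recon, x))  # True
```

<p>Because every bin is included, this reproduces the signal exactly regardless of spectral leakage; the phase is applied once per term, with no doubling.</p>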
| <python><scipy><fft> | 2023-09-25 07:56:25 | 2 | 364 | Sjotroll |
77,170,631 | 17,863,242 | How to use Django test without Django models? | <p>I have a Django app for a REST API, but I am <strong>not</strong> using Django REST Framework.</p>
<p>I don't use models for any tables, only raw SQL queries. I could have used Flask since I'm not using any Django features, but the higher-ups decided to use Django.</p>
<p>I want to use Django's test framework, but digging into the source code I can see that both <code>dumpdata</code> and <code>loaddata</code>, which the test runner uses to create the test DB, rely on the app's models. Since I don't have any models, how do I test my Django app?</p>
<p>It is OK to use the <strong>actual development DB</strong> for testing.</p>
<p>I see django overrides the <code>DEFAULT_DB_ALIAS</code> connection param with the newly created test db params, should I just override the param with the actual <code>default</code> db connection param instead?</p>
<p>So</p>
<ol>
<li>can I use django test without models and migrations?</li>
<li>can I use <code>default</code> database for testing and prevent django from creating a test db?</li>
</ol>
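<p>To answer 2 first, one approach sketch (engine, names, and credentials here are placeholders; check against your Django version's docs): point the <code>TEST</code> database name at the development database and run with <code>--keepdb</code>, so Django neither creates nor destroys anything. Since there are no models, there are also no migrations to apply.</p>

```python
# settings.py sketch -- engine, names and credentials are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'devdb',
        'USER': 'devuser',
        'PASSWORD': 'devpass',
        'HOST': 'localhost',
        # Reuse the development database for tests instead of creating test_devdb
        'TEST': {'NAME': 'devdb'},
    }
}
```

<p>Then run <code>python manage.py test --keepdb</code>. Note that <code>django.test.TestCase</code> wraps each test in a transaction that is rolled back afterwards, which is what keeps the shared development data intact.</p>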
| <python><django><rest><testing> | 2023-09-25 07:32:19 | 2 | 411 | Alraj |
77,170,542 | 9,997,212 | How to make Pydantic discriminate a nested object based on a field? | <p>I have this Pydantic model:</p>
<pre class="lang-py prettyprint-override"><code>import typing
import pydantic
class TypeAData(pydantic.BaseModel):
aStr: str
class TypeBData(pydantic.BaseModel):
bNumber: int
class TypeCData(pydantic.BaseModel):
cBoolean: bool
class MyData(pydantic.BaseModel):
type: typing.Literal['A', 'B', 'C']
name: str
data: TypeAData | TypeBData | TypeCData
</code></pre>
<p>However, if <code>type</code> is equal to "A" and <code>data</code> contains <code>TypeBData</code>, it'll validate correctly when it shouldn't. This could be an alternative:</p>
<pre class="lang-py prettyprint-override"><code>class MyData(pydantic.BaseModel):
type: typing.Literal['A', 'B', 'C']
name: str
data: TypeAData | TypeBData | TypeCData
@pydantic.validator('data', pre=True, always=True)
def validate_data(cls, data):
if isinstance(data, dict):
data_type = data.get('type')
if data_type == 'A':
return TypeAData(**data)
elif data_type == 'B':
return TypeBData(**data)
elif data_type == 'C':
return TypeCData(**data)
raise ValueError('Invalid data or type')
</code></pre>
<p>It works; however, is there a better way to do it without repeating the enum keys (<code>'A'</code>, <code>'B'</code> and <code>'C'</code>) and values (<code>TypeAData</code>, <code>TypeBData</code> and <code>TypeCData</code>) twice?</p>
<p>I've tried using <a href="https://docs.pydantic.dev/latest/usage/types/unions/#discriminated-unions-aka-tagged-unions" rel="nofollow noreferrer">discriminated unions</a>, but since the <code>type</code> field and the discriminated fields are in different levels within the model (the latter is inside a nested object), I could not make further progress on this.</p>
| <python><pydantic><discriminated-union> | 2023-09-25 07:16:58 | 1 | 11,559 | enzo |
77,170,430 | 1,583,225 | Seaborn heatmap only annotating the first row | <p>I have:</p>
<p>Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]<br />
matplotlib 3.8.0<br />
seaborn 0.12.2</p>
<pre><code>import numpy as np
import seaborn as sns
sns.heatmap(np.random.rand(3, 3), annot=True)
</code></pre>
<p>The output on my system:</p>
<p><a href="https://i.sstatic.net/suley.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/suley.png" alt="enter image description here" /></a></p>
<p>Only the first row gets annotated. I've tried several variations, such as running it as a plain script, running it in a notebook, and saving the figure to a file, but nothing makes a difference: only the first row is ever annotated. What could the problem be?</p>
| <python><matplotlib><seaborn> | 2023-09-25 06:55:13 | 0 | 3,380 | Anonymous Entity |
77,170,292 | 13,566,519 | Why doesn't Poetry detect my OS (Linux) and CPU architecture? | <p>I'm working on a straightforward project where I employ Poetry for Python project management. For network isolation, I use JFrog as my PyPI repository. In the pyproject.toml file, I've included certain dependencies, such as numpy. However, I noticed an issue: when I run the <code>poetry update</code> command, Poetry requests the macOS package from JFrog and wants to download the wheel file for the <code>numpy-1.25.0-cp310-cp310-macosx_10_9_x86_64.whl</code> package, even though my OS is Ubuntu 20.04. Additionally, there was another instance where Poetry requested the armv7l-architecture wheel file <code>grpcio_tools-1.46.3-cp310-cp310-linux_armv7l.whl</code> from JFrog, even though my operating system architecture is x86_64.
My pyproject.toml is:</p>
<pre><code>[tool.poetry]
name = "demo_poetry"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.8"
assertpy = "1.1"
asttokens = "2.0.1"
bcrypt = "3.2.0"
cached-property = "1.5.2"
certifi = "2021.10.8"
cffi = "1.14.3"
charset-normalizer = "2.0.8"
click = "8.0.3"
colorama = "0.4.4"
coloredlogs = "15.0.1"
cryptography = "3.4.8"
docopt = "0.6.2"
EditorConfig = "0.12.3"
func-timeout = "4.3.5"
grpcio = "1.46.3"
grpcio-tools = "1.46.3"
humanfriendly = "9.1"
idna = "3.3"
importlib-metadata = "3.1.1"
itsdangerous = "2.0.1"
Jinja2 = "3.0.1"
jsbeautifier = "1.14.0"
markdown2 = "2.4.3"
MarkupSafe = "2.0.0"
mccabe = "0.6.0"
paramiko = "2.11.0"
pptree = "3.1"
prompt-toolkit = "3.0.20"
protobuf = "3.19.3"
pycparser = "2.21"
Pygments = "2.12.0"
PyNaCl = "1.5.0"
pyvim = "3.0.2"
pyvmomi = "7.0.2"
requests = "2.26.0"
scp = "0.13.2"
six = "1.16.0"
toml = "0.10.2"
urllib3 = "1.26.7"
varname = "0.8.3"
wcwidth = "0.2.5"
Werkzeug = "2.0.1"
wrapt = "1.12.1"
zipp = "3.4.0"
protoc-gen-validate = "0.4.2"
boto3 = "1.24.72"
pysmb = "1.2.6"
tqdm = "4.62.3"
python-slugify = "5.0.0"
ImageHash = "4.3.1"
pythonping = "1.1.4"
flask = "2.0.2"
behave = "1.2.6"
behave-html-pretty-formatter = "1.9.1"
[[tool.poetry.source]]
name = "jfrog"
url = "https://jfrog.example.co/artifactory/api/pypi/all-pypi/simple/"
priority = "default"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>Why can't Poetry detect my OS (Linux) and CPU architecture and download the specific package for them?</p>
| <python><artifactory><pypi><python-poetry> | 2023-09-25 06:28:01 | 0 | 317 | GreenMan |
77,170,289 | 305,135 | Insert field into dictionary at specific position, not append at end | <p>I am reading JSON from a file and want to add <strong>fn2</strong> just after the <strong>fn</strong> field; setting video['fn2']=7 appends it to the end of the JSON instead. I am using Python 3.7.9:</p>
<pre><code>import json
# read json file
with open('myjson.json', encoding="utf-8-sig") as json_file:
data = json.load(json_file)
for video in data:
video['fn2'] = 7;
# write data to json file
with open('output.json', 'w', encoding='utf-8-sig') as outfile:
json.dump(data, outfile, indent=4, ensure_ascii=False)
</code></pre>
<p>this is a sample of my original json:</p>
<pre><code>[
{
"fn": "f1.txt",
"post": "11",
"tp": "z"
}
]
</code></pre>
<p>and this is output after append :</p>
<pre><code>[
{
"fn": "f1.txt",
"post": "11",
"tp": "z",
"fn2": 7
}
]
</code></pre>
<p>but I need output to be like :</p>
<pre><code>[
{
"fn": "f1.txt",
"fn2": 7,
"post": "11",
"tp": "z"
}
]
</code></pre>
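<p>Since Python 3.7, plain dicts preserve insertion order, so one approach (a sketch, not the only way) is to rebuild each dict, emitting <code>fn2</code> immediately after <code>fn</code>:</p>

```python
import json

def insert_after(d, key, new_key, new_value):
    # rebuild the dict so new_key lands immediately after key;
    # dicts preserve insertion order in Python 3.7+
    out = {}
    for k, v in d.items():
        out[k] = v
        if k == key:
            out[new_key] = new_value
    return out

data = [{"fn": "f1.txt", "post": "11", "tp": "z"}]
data = [insert_after(video, "fn", "fn2", 7) for video in data]
print(json.dumps(data, indent=4))
```

<p><code>json.dump</code> then writes the keys in exactly this order, since it serializes dicts in iteration order by default.</p>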
| <python><json><dictionary> | 2023-09-25 06:27:40 | 1 | 19,540 | AVEbrahimi |
77,170,178 | 6,550,449 | SQLAlchemy: The most efficient way to select a parent with multiple nullable children? | <p>I want to select a parent with only one non-null <code>ChildType</code>. Here is the schema to be more precise:</p>
<pre><code>class Parent(Base):
...
id : Mapped[int]
type : Mapped[str]
child_type_a: Mapped['ChildTypeA | None'] = relationship(uselist=False)
child_type_b: Mapped['ChildTypeB | None'] = relationship(uselist=False)
child_type_c: Mapped['ChildTypeC | None'] = relationship(uselist=False)
class ChildTypeA/B/C(Base):
id : Mapped[int]
parent_id: Mapped[int] = mapped_column(BigIntegerType, ForeignKey('parent_table.id'))
parent : Mapped['Parent'] = relationship(back_populates="child_type_a/b/c")
</code></pre>
<p>In my case, each <code>Parent</code> can only have one child out of these 3 types (a, b, c). So I would like to join only on the table that has an entry referencing this particular <code>Parent</code>.</p>
<p>For now, the only working solution I came up with is to join all <code>ChildType</code> tables:</p>
<pre><code>select(Parent)
.join(ChildTypeA, ChildTypeA.parent_id == Parent.id)
.join(ChildTypeB, ChildTypeB.parent_id == Parent.id)
.join(ChildTypeC, ChildTypeC.parent_id == Parent.id)
.options(
contains_eager(Parent.child_type_a),
contains_eager(Parent.child_type_b),
contains_eager(Parent.child_type_c),
)
</code></pre>
<p>But as far as I understand it searches through all tables, even those from which our current <code>Parent</code> is not being referenced. Ideally it would be nice to have some sort of a conditional join based on <code>Parent.type</code> ("a", "b", "c") field.</p>
| <python><mysql><sqlalchemy> | 2023-09-25 05:56:28 | 0 | 927 | Taras Mykhalchuk |
77,170,009 | 508,907 | How to convert Python d3blocks chord diagram to react / d3.js frontend | <p>Using in Python <a href="https://d3blocks.github.io/d3blocks/pages/html/Chord.html" rel="nofollow noreferrer">the module d3blocks</a>, I can create a chord fig:</p>
<pre><code>from d3blocks import D3Blocks
d3 = D3Blocks()
df = d3.import_example('energy')
d3_c = d3.chord(df, showfig=False, filepath=None)
</code></pre>
<p>How can I export this (or, better, the underlying JavaScript model) in a way that can be imported from a React frontend, or, best of all, export a JSON model and then recreate it there using d3.js?</p>
<p>I can always save it to an html file and use an iframe to load the whole page using <code>filepath=...</code>, serve the file e.g.</p>
<pre><code>@router.get("/chord")
async def html_chord():
file_path = Path(__file__).parent / "static/chord.html"
return FileResponse(file_path)
</code></pre>
<p>and then load in an iframe e.g.</p>
<pre><code> <iframe
title="HTML Preview"
src={"http://localhost:5000/js/chord"}
width="100%"
height="500px"
></iframe>
</code></pre>
<p>But this seems unnecessarily convoluted and does not look as beautiful.</p>
| <python><d3.js> | 2023-09-25 05:04:38 | 0 | 14,360 | ntg |
77,169,901 | 17,028,242 | Losing Records while uploading file to GCS bucket using cloud storage Python client library | <p>For context, I'm running a Vertex AI pipeline. In one of my pipeline components, I am running the following code:</p>
<pre><code>from google.cloud import storage
file_1_name = '<insert csv file 1 name>'
file_2_name = '<insert csv file 2 name>'
archive_bucket = storage_client.bucket(archive_bucket_name)
upload_bucket = storage_client.bucket(target_bucket_name)
# generate list of blobs in the prediction bucket
target_bucket_blobs = list(upload_bucket.list_blobs())
# if no blobs exist, this means that we can write to the bucket
# create a blob name for files to be uploaded as
if not target_bucket_blobs:
pass
# if blobs do exist, this means that we should delete the existing blobs in bucket
else:
# DELETE BLOBS FIRST
for blob in target_bucket_blobs:
upload_bucket.copy_blob(blob, archive_bucket, blob.name)
blob.delete()
# make script sleep for 5 minutes until the files have been deleted and copied to new bucket
time.sleep(300)
# NOW WRITE PREDICTIONS
# Tag all predictions with the timestamp
current_time = datetime.today().strftime("%Y-%m-%d - %Hh-%Mm-%Ss")
# upload file as blob name in specified bucket
print(f'Uploading predictions to {target_bucket_name}.')
file_1_blob = upload_bucket.blob(file_1_name + '_' + current_time)
file_2_blob = upload_bucket.blob(file_2_name + '_' + current_time)
file_1_blob.upload_from_filename(file_1_path, content_type='text/csv')
file_2_blob.upload_from_filename(file_2_path, content_type='text/csv')
</code></pre>
<p>For those that use the google-cloud-storage python library, this should be somewhat familiar, but if not, follow the comments in the code.</p>
<p>At a high level, this is what is happening: I have 2 buckets, an archive bucket and an upload bucket. I have a machine learning pipeline (Vertex AI pipeline) where, in the last pipeline component, I am running the above code. The ML pipeline generates 2 files (predictions), file_1 and file_2. The upload bucket houses these predictions, to be read by another service.</p>
<p>What I am trying to accomplish with my code is the following:</p>
<ul>
<li>check the upload bucket to see if it is populated with any objects, if so, archive these objects in the bucket (by copying them to my archive bucket), then delete these objects in the upload bucket</li>
<li>then upload file_1 and file_2, generated by the run of the ML pipeline, to the upload bucket</li>
</ul>
<p><strong>My issue:</strong></p>
<p>The former step works perfectly fine: the GCS objects (i.e. existing files in the upload bucket) get copied successfully to the archive bucket and are deleted successfully. The latter is where I'm having issues.</p>
<p>When I upload file_1 and file_2 to the upload bucket via the snippet of code:</p>
<pre><code>...
file_1_blob = upload_bucket.blob(file_1_name + '_' + current_time)
file_2_blob = upload_bucket.blob(file_2_name + '_' + current_time)
file_1_blob.upload_from_filename(file_1_path, content_type='text/csv')
file_2_blob.upload_from_filename(file_2_path, content_type='text/csv')
</code></pre>
<p>the uploaded csv files are missing records.</p>
<p>I know this because I outputted the files: file_1, file_2 as artifacts associated to the Vertex AI pipeline component, and they returned the expected number of records, but the remote upload I do via <code>upload_from_filename()</code> loses records, seemingly randomly.</p>
<p>Has anyone ever encountered this type of issue before? With the upload to bucket losing records? I'm not entirely sure of the root cause here, if this is an issue with the google cloud storage python library, or if it's something of my doing.</p>
<p>Any help would be appreciated.</p>
| <python><google-cloud-platform><google-cloud-storage> | 2023-09-25 04:24:14 | 0 | 458 | AndrewJaeyoung |
77,169,877 | 2,780,906 | how to add row totals to simple df using pivot table | <p>I have a simple df:</p>
<pre><code>a=pd.DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4], 'c': [3, 4, 5]},index=["type1","type2","type3"])
a b c
type1 1 2 3
type2 2 3 4
type3 3 4 5
</code></pre>
<p>Even though <code>pivot_table</code> is meant for more complicated data, I can use it to quickly generate column totals:</p>
<pre><code>>>> a.pivot_table(index=a.index,margins=True,aggfunc=sum)
a b c
type1 1 2 3
type2 2 3 4
type3 3 4 5
All 6 9 12
</code></pre>
<p>Can I easily add row totals using this method?</p>
<p>Desired Output is:</p>
<pre><code> a b c All
type1 1 2 3 6
type2 2 3 4 9
type3 3 4 5 12
All 6 9 12 27
</code></pre>
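<p>Yes: the column margins from <code>pivot_table</code> can be combined with an ordinary row-wise sum appended afterwards. A minimal sketch (assuming it is acceptable to add the <code>All</code> column in a second step):</p>

```python
import pandas as pd

a = pd.DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4], 'c': [3, 4, 5]},
                 index=["type1", "type2", "type3"])

# margins=True adds the "All" row; a plain axis-1 sum adds the "All" column,
# and it naturally covers the margins row too
out = a.pivot_table(index=a.index, margins=True, aggfunc="sum")
out["All"] = out.sum(axis=1)
print(out)
```

<p>The bottom-right cell ends up as the grand total (27 here), because the row-wise sum runs over the already-computed column totals.</p>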
| <python><pandas><dataframe><pivot-table> | 2023-09-25 04:15:19 | 1 | 397 | Tim |
77,169,718 | 2,981,639 | Configure poetry to install private (commercial) package using token | <p>I'm trying to install the python package <code>prodigy</code> using poetry. The pip method is as follows</p>
<pre><code>pip install prodigy -f https://XXXX-XXXX-XXXX-XXXX@download.prodi.gy
</code></pre>
<p>Where <code>XXX-XXXX-XXXX-XXXX</code> is my private token.</p>
<p>How do I do this using poetry? I've tried added <code>https://download.prodi.gy</code> as a <code>source</code> i.e.</p>
<pre><code>poetry source add prodigy https://download.prodi.gy
</code></pre>
<p>But I'm not sure how to configure the authentication. I've also tried</p>
<pre><code>poetry source add prodigy XXX-XXXX-XXXX-XXXX@https://download.prodi.gy
</code></pre>
<p>But this fails with</p>
<pre><code>401 Client Error: Unauthorized for url: https://download.prodi.gy/download/prodigy-1.12.7-cp310-cp310-linux_aarch64.whl
</code></pre>
<p>Poetry suggests <code>poetry config http-basic.foo <username> <password></code> but I don't think this is basic auth?</p>
| <python><python-poetry> | 2023-09-25 03:02:27 | 0 | 2,963 | David Waterworth |
77,169,656 | 19,060,733 | How to change the value of a user-defined scalar field using cloudComPy? | <p>I have an array of points which I need to convert into a point cloud. I am using the Python API of CloudCompare, cloudComPy, to achieve this. My ultimate goal is to create a point cloud object from these points and add a scalar field named "lane_id" to this point cloud with a value of 2.</p>
<p>I created an empty point cloud using -</p>
<pre><code>import cloudComPy as cc
import numpy as np
# A small sample for arr_of_points
arr_of_points = np.arange(1,31).reshape(-1, 3)
cloud = cc.ccPointCloud()
cloud.coordsFromNPArray_copy(arr_of_points)
</code></pre>
<p>This returns me a point cloud object as expected. Now, I tried to create a new scalar field -</p>
<pre><code>cloud.addScalarField("lane_id")
</code></pre>
<p>Now, I cannot find a way to assign a value to this newly created scalar field.</p>
| <python><point-clouds><cloudcompare> | 2023-09-25 02:40:00 | 0 | 692 | rr_goyal |
77,169,431 | 16,808,528 | How to loop arithmetic operators in Python? | <p>I am trying to loop arithmetic operator in this following code:</p>
<pre><code>num_input = input("Please enter a number: ")
num = int(num_input)
# Define the mathematical operations as a list of tuples (operator, operand)
operations = ['+','-', '*','%']
print(f"You entered: {num_input}")
print("The mathematical operations applied to your number and 15 are:")
# Initialize an empty list to store the results
results = []
# Iterate through the operations and calculate the results
for operand in operations:
op= [operand.strip() for operand in operations]
result = (num) op 20
results.append(result)
# Iterate through the results and print them
for i, (operator, operand) in enumerate(operations):
print(f"{num} {operator} {operand} = {results[i]}")
</code></pre>
<p>So, I just want to achieve the goal like:</p>
<p>When I input any integer, the code should loop over the operators, applying each of [+, -, *, %] to <em>the input number</em> and a fixed number (say 20).</p>
<p>However, if I directly write <code>result = (num) operand 20</code>, the operand is the string <code>'*'</code> instead of the operator <code>*</code>, so it does not work.</p>
<p>I am trying to strip the quotes from each operator in turn, assign it to <code>op</code>, let <code>op</code> do the calculation, and then assign the value to <code>result</code>. But I still get an error like this:</p>
<pre><code>Cell In[103], line 16
    result = (num) op 20
                   ^
SyntaxError: invalid syntax
</code></pre>
<p>Can anyone give some help how to achieve the goal to loop arithmetic operator correctly in this case? Thanks!</p>
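<p>A string like <code>'+'</code> can't be executed as an operator directly. The standard-library <code>operator</code> module provides function equivalents, so the symbols can be mapped to callables; a minimal sketch of the loop (the hard-coded <code>num</code> stands in for the <code>input()</code> call):</p>

```python
import operator

# map each symbol to the corresponding binary function
ops = {'+': operator.add, '-': operator.sub, '*': operator.mul, '%': operator.mod}

num = 7  # stand-in for int(input("Please enter a number: "))
results = []
for symbol, fn in ops.items():
    result = fn(num, 20)
    results.append(result)
    print(f"{num} {symbol} 20 = {result}")
```

<p>This sidesteps string-to-operator conversion entirely; <code>eval(f"{num} {symbol} 20")</code> would also work but is generally discouraged for user-facing code.</p>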
| <python> | 2023-09-25 00:51:57 | 3 | 477 | Rstudyer |
77,169,403 | 2,372,612 | Round function guaranteed to generate the best floating point approximation? | <p>When I run <code>round(3.12143345454353,2)</code>, am I guaranteed to get the same floating point approximation (3.12000000000000010658141036401502788066864013671875
) that I get when I use the <code>3.12</code> literal?</p>
<p>In other words, is round(3.12143345454353,2)===3.12 going to be true? Is it true in general, so if x rounds to y does the floating point arithmetic guarantee that round(x,k)===y?</p>
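<p>For this particular value the equality does hold in CPython, whose float <code>round</code> goes through correctly-rounded decimal conversion; whether it holds for every <code>x</code> and <code>k</code> is worth spot-checking per value rather than assumed, and a check is cheap:</p>

```python
x = round(3.12143345454353, 2)
print(x == 3.12)    # True: both sides end up as the same double here
print(f"{x:.50f}")  # shows the underlying binary approximation of 3.12
```

<p>The second line prints the long decimal expansion from the question, confirming that <code>round</code> and the literal landed on the same representable double.</p>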
| <javascript><python><floating-point> | 2023-09-25 00:38:13 | 1 | 535 | David Vonka |
77,169,204 | 4,145,280 | Pythonic way of dropping columns used in assign (i.e. Pandas equivalent of `.keep = "unused"`) | <p>In dplyr package of R, there's the option <code>.keep = "unused"</code> when creating new columns with the function <code>mutate()</code> (which is their equivalent of <code>assign</code>).</p>
<p>An example, for those who haven't used it:</p>
<pre><code>> head(iris)
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
5 5.0 3.6 1.4 0.2 setosa
6 5.4 3.9 1.7 0.4 setosa
# any column used in creating `new_col` is dropped afterwards automatically
> mutate(.data = head(iris), new_col = Sepal.Length + Petal.Length * Petal.Width, .keep = "unused")
Sepal.Width Species new_col
1 3.5 setosa 5.38
2 3.0 setosa 5.18
3 3.2 setosa 4.96
4 3.1 setosa 4.90
5 3.6 setosa 5.28
6 3.9 setosa 6.08
</code></pre>
<p>I <em>say</em> they are equivalent, but there doesn't appear to be the option for doing this with <code>assign</code> in the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer">Pandas documentation</a> so I assume it doesn't exist. I was curious about creating a way of doing something similar then.</p>
<p>One way I can think of to do this is to create a list of names beforehand, and drop them afterwards, like this:</p>
<pre><code>from sklearn import datasets
import pandas as pd
used_columns = ['sepal length (cm)', 'petal length (cm)', 'petal width (cm)']
iris = pd.DataFrame(datasets.load_iris().data, columns=datasets.load_iris().feature_names)
iris.assign(new_col = lambda x: x['sepal length (cm)'] + x['petal length (cm)'] * x['petal width (cm)']).drop(used_columns, axis=1)
</code></pre>
<p>or</p>
<pre><code>iris.assign(new_col = lambda x: x[used_columns[0]] + x[used_columns[1]] * x[used_columns[2]]).drop(used_columns, axis=1)
</code></pre>
<p>Which seems ~<em>fine</em>~, but requires a separate list, and with the first one, keeping two things updated, and with the second, the cognitive load of keeping track of what the nth list item is in my head.</p>
<p>So I was curious if there's another way I'm not aware of of doing this, that would be easier to maintain? Both of the ones above seem not very Pythonic?</p>
<p>Research I've done: I did a bunch of googling around this, with no luck. It seems <a href="https://stackoverflow.com/questions/51140011/pandasdrop-multiple-columns-which-name-in-a-list-and-assigned-to-a-new-datafram">there's</a> <a href="https://stackoverflow.com/questions/51167612/what-is-the-best-way-to-remove-columns-in-pandas">plenty</a> <a href="https://stackoverflow.com/questions/40389018/dropping-multiple-columns-from-a-dataframe">of</a> <a href="https://stackoverflow.com/questions/13411544/delete-a-column-from-a-pandas-dataframe/18145399#18145399">ways</a> <a href="https://stackoverflow.com/questions/19071199/drop-columns-whose-name-contains-a-specific-string-from-pandas-dataframe">of</a> <a href="https://stackoverflow.com/questions/56891518/drop-columns-from-pandas-dataframe-if-they-are-not-in-specific-list">dropping</a> <a href="https://stackoverflow.com/questions/26347412/drop-multiple-columns-in-pandas">columns</a>, but none I've found seem particularly well-suited to this type of situation. Any help you could provide would be much appreciated! Answers which use other Python packages (e.g. <code>janitor</code>) are okay too.</p>
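<p>There is no built-in <code>.keep="unused"</code> counterpart in pandas, but one pattern that keeps the used-column list and the assignment together in a single call is a small helper (the name <code>mutate_keep_unused</code> is hypothetical, not a pandas API) used via <code>pipe</code>:</p>

```python
import pandas as pd

def mutate_keep_unused(df, used, **assignments):
    # hypothetical helper: assign the new columns, then drop the columns
    # that were consumed by the assignments
    return df.assign(**assignments).drop(columns=list(used))

df = pd.DataFrame({'x': [1.0, 2.0], 'y': [3.0, 4.0], 'z': [5.0, 6.0]})
out = df.pipe(mutate_keep_unused, ['x', 'y'],
              new_col=lambda d: d['x'] + d['y'])
print(out)  # only 'z' and 'new_col' remain
```

<p>The list of used columns still has to be written once, but it now lives next to the expression that uses it, which removes the need to keep two distant pieces of code in sync.</p>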
| <python><pandas><dataframe> | 2023-09-24 22:47:55 | 2 | 12,688 | Mark |
77,169,006 | 2,173,773 | Creating a wrapper decorator for click.options() | <p>I am trying to create a wrapper decorator for the click decorator <code>@click.options('--foo', required=True)</code>:</p>
<pre><code>import click
def foo_option(func):
orig_decorator = click.option('--foo', required=True)(func)
def decorator(*args, **kwargs):
orig_decorator(*args, **kwargs)
return decorator
@click.command()
@foo_option
def bar1(foo: str) -> None:
print(f"bar1: {foo}")
if __name__ == '__main__':
bar1()
</code></pre>
<p>This does not work. When I run it like <code>script.py --foo=1</code> I get error:</p>
<pre><code>Usage: script.py [OPTIONS]
Try 'script.py --help' for help.
Error: No such option: --foo
</code></pre>
<p>Expected output should be:</p>
<pre><code>bar1: 1
</code></pre>
<p>What am I missing?
See <a href="https://stackoverflow.com/q/77168894/2173773">Refactor @click.option() arguments</a> for background information.</p>
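<p>The underlying issue can be reproduced without click at all: click attaches option metadata to the decorated function object (as <code>__click_params__</code>), and a hand-written <code>def decorator(...)</code> wrapper is a brand-new function that does not carry it. A stdlib sketch of the difference, where the <code>params</code> attribute is a stand-in for click's bookkeeping:</p>

```python
def attach_option(func):
    # stand-in for click.option(...): stores metadata on the function object
    func.params = getattr(func, "params", []) + ["--foo"]
    return func

def lossy(func):
    decorated = attach_option(func)
    def wrapper(*args, **kwargs):      # fresh function: metadata is left behind
        return decorated(*args, **kwargs)
    return wrapper

def lossless(func):
    return attach_option(func)         # return the decorated function itself

@lossy
def bar1(): pass

@lossless
def bar2(): pass

print(hasattr(bar1, "params"), bar2.params)
```

<p>In click terms, returning <code>click.option('--foo', required=True)(func)</code> directly, instead of wrapping it, should keep the metadata where <code>@click.command()</code> can find it.</p>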
| <python><python-decorators><python-click> | 2023-09-24 21:28:38 | 1 | 40,918 | HΓ₯kon HΓ¦gland |
77,168,894 | 2,173,773 | Refactor @click.option() arguments | <p>Say I have this (minimal example from a larger script):</p>
<pre><code>import click
@click.command()
@click.option('--foo', required=True)
def bar1(foo: str) -> None:
print(f"bar1: {foo}")
@click.command()
@click.option('--foo', required=True)
def bar2(foo: str) -> None:
print(f"bar2: {foo}")
if __name__ == '__main__':
bar1() # these are actually called from a poetry scripts section in pyproject.toml
# bar2()
</code></pre>
<p>I would like to avoid repeating the same option <code>--foo</code> with the same arguments for both <code>bar1</code> and <code>bar2</code>. Maybe something like</p>
<pre><code>@click.option(*ClickOptions.foo)
</code></pre>
<p>but this does not work since option foo also takes keyword argument <code>required=True</code>. I would like to avoid splitting it up like this:</p>
<pre><code>@click.option(*ClickOptions.foo.args, **ClickOptions.foo.kwargs)
</code></pre>
<p>Is there an elegant way to do this?</p>
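<p>Since <code>click.option('--foo', required=True)</code> returns an ordinary decorator, one option (assuming the shared option needs no per-command tweaks) is to build it once as <code>foo_option = click.option('--foo', required=True)</code> and apply <code>@foo_option</code> above each command. The mechanics can be shown with a stdlib stand-in:</p>

```python
def option(name, **kwargs):
    # minimal stand-in for click.option: a factory returning a reusable decorator
    def deco(func):
        func.options = getattr(func, "options", []) + [(name, kwargs)]
        return func
    return deco

foo_option = option('--foo', required=True)   # defined once...

@foo_option
def bar1():
    pass

@foo_option                                   # ...reused verbatim
def bar2():
    pass

print(bar1.options, bar2.options)
```

<p>Both functions end up with identical option metadata, and positional and keyword arguments are captured together inside the decorator, so nothing like <code>*args, **kwargs</code> splitting is needed at the use site.</p>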
| <python><python-click> | 2023-09-24 20:48:30 | 1 | 40,918 | HΓ₯kon HΓ¦gland |
77,168,776 | 2,636,579 | Lambda function issue: "Unable to import module 'embedding_search_handler': attempted relative import beyond top-level package", | <p>I have an Elasticsearch instance up and I am using API Gateway + Lambda to set up a little backend. The ES contains embeddings of some Word Docs that I generated for a "chat with your docs" LLM thing.</p>
<p>This is embedding_search_handler.py:</p>
<pre><code>import os
import json
import requests
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
from decouple import config
import sys
sys.path.append('./')
# Elasticsearch setup
host = 'https://myhost.us-east-1.es.amazonaws.com'
region = 'us-east-1'
# Retrieve keys from the .env file using python-decouple
ACCESS_KEY = config('ACCESS_KEY')
SECRET_KEY = config('SECRET_KEY')
OPENAI_API_KEY = config('OPENAI_API_KEY')
awsauth = AWS4Auth(ACCESS_KEY, SECRET_KEY, region, 'es')
es = Elasticsearch(
hosts=[{'host': host, 'port': 443}],
http_auth=awsauth,
use_ssl=True,
verify_certs=True,
connection_class=RequestsHttpConnection
)
def query_elasticsearch(embedding):
try:
body = {
"size": 5,
"query": {
"script_score": {
"query": {"match_all": {}},
"script": {
"source": "cosineSimilarity(params.query_vector, 'embedding_vector') + 1.0",
"params": {"query_vector": embedding}
}
}
}
}
response = es.search(index="document_embeddings", body=body)
return response['hits']['hits']
except Exception as e:
print(f"Error querying Elasticsearch: {e}")
return []
def lambda_handler(event, context):
query_text = event['query']
query_embedding = convert_query_to_embedding(query_text)
if not query_embedding:
return {
"statusCode": 500,
"body": json.dumps({"error": "Failed to get embedding for query"})
}
results = query_elasticsearch(query_embedding)
return {
"statusCode": 200,
"body": json.dumps(results)
}
def convert_query_to_embedding(query):
headers = {
"Authorization": f"Bearer {OPENAI_API_KEY}",
"Content-Type": "application/json"
}
try:
response = requests.post(
"https://api.openai.com/v1/embed",
headers=headers,
json={
"model": "text-embedding-ada-002",
"data": [query]
}
)
response.raise_for_status() # Raises a HTTPError if the HTTP request returned an unsuccessful status code
data = response.json()
embedding = data['data'][0]['embedding']
return embedding
except requests.RequestException as e:
print(f"Error calling OpenAI API: {e}")
return None
</code></pre>
<p>When I try to run this test inside Lambda:</p>
<pre><code>{
"query": "what is this?"
}
</code></pre>
<p>I get this error:</p>
<pre><code>START RequestId: 5cdd60ba-562b-4cb5-a5f9-b02bbbb26a97 Version: $LATEST
[ERROR] Runtime.ImportModuleError: Unable to import module 'embedding_search_handler': attempted relative import beyond top-level package
Traceback (most recent call last):END RequestId: 5cdd60ba-562b-4cb5-a5f9-b02bbbb26a97
REPORT RequestId: 5cdd60ba-562b-4cb5-a5f9-b02bbbb26a97 Duration: 1.10 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 50 MB
</code></pre>
<p>This is what my directory looks like:
<a href="https://i.sstatic.net/07Aqw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/07Aqw.png" alt="enter image description here" /></a></p>
<p>When I am inside the dir "lambda_package" and run <code>zip -r ../lambda_package.zip</code>, it also appears to zip correctly. This is the screenshot from zipping and unzipping the contents:
<a href="https://i.sstatic.net/kxDQw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kxDQw.png" alt="enter image description here" /></a></p>
<p>When I go to "Edit runtime settings", my Handler is set to "embedding_search_handler.lambda_handler"</p>
<p>I don't understand what I am doing wrong.</p>
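<p>One layout detail worth verifying (an assumption, since the full zip contents aren't shown): the handler module must sit at the <em>root</em> of the archive rather than under a <code>lambda_package/</code> prefix, and relative-import errors frequently come from a dependency copied into the zip with a broken package layout. A sketch of building the archive so the module lands at the root:</p>

```python
import os
import shutil
import tempfile
import zipfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "lambda_package")
os.makedirs(pkg)
with open(os.path.join(pkg, "embedding_search_handler.py"), "w") as f:
    f.write("def lambda_handler(event, context):\n    return 'ok'\n")

# root_dir=pkg puts the files themselves at the top of the zip,
# equivalent to: cd lambda_package && zip -r ../lambda_package.zip .
archive = shutil.make_archive(os.path.join(root, "lambda_package"),
                              "zip", root_dir=pkg)

with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()
print(names)  # the handler module sits at the archive root
```

<p>With this layout, the handler string <code>embedding_search_handler.lambda_handler</code> resolves; if the module is nested one directory deeper, Lambda fails at import time.</p>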
| <python><amazon-web-services><elasticsearch><aws-lambda><aws-api-gateway> | 2023-09-24 20:13:41 | 2 | 1,034 | reallymemorable |
77,168,697 | 7,212,809 | Handling Level Changes for Prophet Predictions | <p>I have a dataset like so:</p>
<p><a href="https://i.sstatic.net/BOK8a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOK8a.png" alt="enter image description here" /></a></p>
<p>It is seasonal data, but there is a level shift after some point</p>
<p>I want Prophet to adapt to the data after the level shift faster. How can I do this?</p>
<p>I've read through the <a href="https://facebook.github.io/prophet/docs/trend_changepoints.html" rel="nofollow noreferrer">docs</a>, there are some options:</p>
<ul>
<li>delete older data</li>
</ul>
<p>But is there any way to force prophet <strong>to adapt to the level shifted data "faster"</strong>?</p>
<p>Here is a repro:</p>
<pre><code>import pandas as pd
from prophet import Prophet
from random import randint
from datetime import datetime
import matplotlib.pyplot as plt
def get_dataset():
d = {}
total = 100
level_shift_point = 10
values = [20+i%3 for i in range(level_shift_point)]
for i in range(level_shift_point, total):
values.append(100 + i%3)
d["y"] = values
d["ds"] = [datetime.utcfromtimestamp(3600*i).strftime('%Y-%m-%d %H:%M:%S') for i in range(total)]
return pd.DataFrame.from_dict(d)
df = get_dataset()
m = Prophet(changepoint_prior_scale=0.0001)
m.fit(df)
future = m.make_future_dataframe(periods=100, freq="h", include_history=False)
forecast = m.predict(future)
m.plot(forecast)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/nf2Ar.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nf2Ar.jpg" alt="enter image description here" /></a></p>
<p>As you can see, the predictions don't make any sense at all.</p>
<p>I want the predictions to align with the data after the level shift. How can I do this?</p>
| <python><machine-learning><statistics><forecasting><prophet> | 2023-09-24 19:50:51 | 1 | 7,771 | nz_21 |
77,168,635 | 13,347,255 | I am trying to scrape images from a website with playwright. I am unable to scrape its one div content load dynamically | <p>I totally don't get the page load normally, still, bigImage is empty.<br />
Here's the below code.<br />
I am using <strong>asyncio</strong> and <strong>playwright</strong> along with lxlm and requests.<br />
I am trying to wait for the element to load but the element has loaded<br />
but does not reflect in the code.</p>
<p>Here's the GitHub repo link: <a href="https://github.com/OAtulA/Flipkart-AmazonImageDownloader/tree/edits" rel="nofollow noreferrer">github.com/OAtulA/Flipkart-AmazonImageDownloader/tree/edits</a></p>
<p><em><strong>Inputfile</strong></em></p>
<pre><code>Products File:
Products.txt
Image Output Folder:
Product-Images
</code></pre>
<p><em><strong>Products.txt</strong></em></p>
<pre><code>black sneakers, https://www.flipkart.com/atom-sneakers-men/p/itm74ce4c7c2d4b9?pid=SHOGHYV8GPF2HNEC&lid=LSTSHOGHYV8GPF2HNECKHMMST&marketplace=FLIPKART&q=jordan+shoes&store=osp%2Fcil%2Fe1f&srno=s_1_11&otracker=AS_QueryStore_OrganicAutoSuggest_1_6_na_na_na&otracker1=AS_QueryStore_OrganicAutoSuggest_1_6_na_na_na&fm=search-autosuggest&iid=en_FhuSro4-26x7lsgKP6XuQPlbRWXb4hVzCcG4YmqgaBgP8HcROpRwPq2TKJFuvdHoVe-6n18rUHThOVLpsiCapA%3D%3D&ppt=sp&ppn=sp&ssid=fi2iod6weznugg741695385201496&qH=57e1b9f14a605a5e
Series 7 sneakers, https://www.flipkart.com/kraasa-series-7-sneakers-men/p/itmda307099c343b?pid=SHOGF3MZZGBFJAEK&lid=LSTSHOGF3MZZGBFJAEK9XAYQP&marketplace=FLIPKART&q=jordan+shoes&store=osp%2Fcil%2Fe1f&srno=s_1_27&otracker=AS_QueryStore_OrganicAutoSuggest_1_6_na_na_na&otracker1=AS_QueryStore_OrganicAutoSuggest_1_6_na_na_na&fm=search-autosuggest&iid=en_FhuSro4-26x7lsgKP6XuQPlbRWXb4hVzCcG4YmqgaBh2G1kDBg4cDrsZtr5_NM9Ybc6ftsImCKTMyJb1-owaSw%3D%3D&ppt=sp&ppn=sp&qH=57e1b9f14a605a5e
red shoes, https://www.flipkart.com/bescettro-shoes-white-stylish-jordan-light-weight-mens-casual-sport-trending-gym-outdoor-casuals-men/p/itm9acb33113c4e0?pid=SHOGE5U7GXATDPDN&lid=LSTSHOGE5U7GXATDPDNOJOTHL&marketplace=FLIPKART&q=jordan+shoes&store=osp%2Fcil%2Fe1f&srno=s_1_19&otracker=AS_QueryStore_OrganicAutoSuggest_1_6_na_na_na&otracker1=AS_QueryStore_OrganicAutoSuggest_1_6_na_na_na&fm=search-autosuggest&iid=a3973867-33b5-4b7c-8a7d-b6fe6db70759.SHOGE5U7GXATDPDN.SEARCH&ppt=sp&ppn=sp&qH=57e1b9f14a605a5e
</code></pre>
<p>It's safe to assume that the installation is correct and that everything up to the following wait calls works as expected:</p>
<pre class="lang-py prettyprint-override"><code>
# Wait for the element with class "_3li7GG" to become visible
await page.wait_for_selector("_3li7GG", state="visible")
await page.wait_for_selector("_1BweBB")
</code></pre>
<p>The program starts here</p>
<pre class="lang-py prettyprint-override"><code>
from lxml import html
import os
import requests
import asyncio
from playwright.async_api import async_playwright
# This is literally going to download the pics
def downLoading(product, images, bigImage, Pic_name, url, OutputLocation):
# This code is mainly going to grab the image
image_count = 1
# So that even if we have multiple images for
# same product we can easily choose.
bigImageLink = bigImage.xpath('@src')[0]
if product.strip() != '':
for image in images:
name = Pic_name + " " + str(image_count)
image_count += 1
# to get all the images of a product
# link = image['src']
link = image.xpath('@src')[0]
# Link looks like this
'''
# 1st small image
# src="https://rukminim2.flixcart.com/image/128/128/xif0q/shoe/l/t/l/8-rkt-19039-black-42-atom-black-original-imagzmhf7d8fgd38.jpeg?q=70"
# 1st big image
# src="https://rukminim2.flixcart.com/image/832/832/xif0q/shoe/l/t/l/8-rkt-19039-black-42-atom-black-original-imagzmhf7d8fgd38.jpeg?q=70"
'''
# Now setting the link to get the big images for flipkart
if 'flipkart' in url:
# Now I get the smallImage dimesions as well as big images dimensions.
start_index = link.find("image/") + len("image/")
end_index = start_index + 7
    smallDimension = link[start_index:end_index]
    start_index = bigImageLink.find("image/") + len("image/")
    end_index = start_index + 7
    bigDimension = bigImageLink[start_index:end_index]
    link = link.replace(smallDimension, bigDimension)
    # Not needed now.
    # link = link.replace('q=70', 'q=50')
    name = name.replace(' ', '-').replace('/', '') + '.jpg'
    name = OutputLocation + '/' + name
    # To get the file in the desired location
    with open(name, 'wb') as photo:
        im = requests.get(link)
        photo.write(im.content)
        # Writing the downloaded file.
        print('Writing: ', name)

def download_image(name, link, OutputLocation):
    name = name.replace(' ', '-').replace('/', '') + '.jpg'
    name = os.path.join(OutputLocation, name)
    with open(name, 'wb') as photo:
        im = requests.get(link)
        photo.write(im.content)
    print('Writing:', name)

async def scrape_website(product, url, Pic_name, OutputLocation):
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        page = await browser.new_page()
        await page.goto(url)
        # Wait for the thumbnail list and big-image container to become visible
        # (CSS class selectors need a leading dot).
        await page.wait_for_selector("._3li7GG", state="visible")
        await page.wait_for_selector("._1BweB8")
        # Get the HTML content after waiting for the element
        html_content = await page.content()
        # You can now parse the HTML content using lxml and extract the images
        html_tree = html.fromstring(html_content)
        images = html_tree.xpath("//ul[@class='_3GnUWp']/li/div/div/img")
        bigImage = html_tree.xpath("//div[@class='_1BweB8']//img")
        # Download images as before
        # download_image(Pic_name, bigImage[0].xpath('@src')[0], OutputLocation)
        downLoading(product=product, images=images, bigImage=bigImage, Pic_name=Pic_name, url=url, OutputLocation=OutputLocation)
        await browser.close()

async def img_graber(products, OutputLocation):
    for product in products:
        Pic_name, link = product.split(',')
        Pic_name = Pic_name.strip()
        link = link.strip()
        url = link
        if 'flipkart' in url:
            await scrape_website(product, url, Pic_name, OutputLocation)
        # You can add support for other websites as needed.

def Inputs():
    file = open('Inputfile', 'r')
    InputLines = []
    temp_inputLines = file.readlines()
    for line in temp_inputLines:
        if line != '\n':
            InputLines.append(line.strip('\n'))
    for i in range(len(InputLines)):
        if "Products File:" in InputLines[i]:
            productsInfo = open(InputLines[i + 1], 'r')
            products = productsInfo.readlines()
            productsInfo.close()
        elif "Image Output Folder:" in InputLines[i]:
            OutputLocation = InputLines[i + 1]
            try:
                os.makedirs(OutputLocation)
            except FileExistsError:
                pass
    temp_products = products
    products = []
    for p in temp_products:
        p = p.strip('\n')
        if p != '':
            products.append(p)
    asyncio.run(img_graber(products=products, OutputLocation=OutputLocation))
    file.close()

Inputs()
</code></pre>
<p><em><strong>Thanks in Advance humble developer.</strong></em></p>
| <python><web-scraping><playwright> | 2023-09-24 19:32:07 | 0 | 438 | Atul Anand Oraon |
77,168,557 | 4,449,954 | Referencing specific property of YAML anchor | <p>I am trying to create a template that consists of a single, human-readable file of data values that will be used to populate several source files. Right now, I'm using YAML and a short Python script with Jinja2 to insert the YAML values into template source files, but it's important that the names of how these things are used be preserved, as there are hundreds of parameters that have complex interrelationships.</p>
<p>I'm trying to reference the same data value in multiple places under different names, and trying to use YAML anchors and aliases to do so.</p>
<p>Basically I want this:</p>
<pre><code>compute:
  gpus: 4
  cpus: 120
  mem: 900Gi

torchrun:
  nproc_per_node: [compute.gpus]
  ...

resource_requests:
  gpus: [compute.gpus]
  ...
</code></pre>
<p>I've tried this:</p>
<pre><code>compute: &compute
  gpus: 4

torchrun:
  nproc_per_node: *compute.gpus
</code></pre>
<p>...but apparently it isn't valid syntax in PyYAML, and I've seen conflicting things online as to whether it's valid YAML at all.</p>
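<p>A YAML alias can only refer to a whole anchored node, never to a property of one, so <code>*compute.gpus</code> is not valid YAML. One workaround, sketched below under the assumption that moving the anchor is acceptable, is to anchor the scalar value itself; PyYAML resolves this fine:</p>

```python
import yaml  # PyYAML (third-party), assumed installed

# Anchor the scalar value itself instead of the mapping:
doc = """
compute:
  gpus: &gpus 4
  cpus: 120
torchrun:
  nproc_per_node: *gpus
resource_requests:
  gpus: *gpus
"""
data = yaml.safe_load(doc)
print(data["torchrun"]["nproc_per_node"])   # 4
print(data["resource_requests"]["gpus"])    # 4
```

<p>The trade-off is that the "source" name lives in the anchor (<code>&amp;gpus</code>) rather than in a key path, but every use site still reads under its own descriptive key.</p>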
<p>The end result will be multiple files, for example:</p>
<p>resources.py:</p>
<pre><code>@task(
gpus = {{ resource_requests.gpus }}
)
def my_task():
...
</code></pre>
<p>and run.sh:</p>
<pre><code>torchrun --nproc_per_node={{ torchrun.nproc_per_node }} ...
</code></pre>
<p>I am aware that with Jinja2 I could simply reuse <code>compute.gpu</code> in the templates for both files, but I want to keep the names consistent so that it's easier to reason about what the template values are actually doing (again, there are dozens of such values that get used in different places but must be consistent).</p>
| <python><yaml><jinja2> | 2023-09-24 19:09:17 | 1 | 1,080 | stuart |
77,168,439 | 893,254 | How does the Seaborn timeseries plot with error bands example get its error band data from? | <p>This question relates to this <a href="https://seaborn.pydata.org/examples/errorband_lineplots.html" rel="nofollow noreferrer">example</a> from the Seaborn documentation.</p>
<p>I do not understand how the plotting library is obtaining error information.</p>
<p>The function call, with its arguments, is as follows.</p>
<pre><code>lineplot(x="timepoint", y="signal", hue="region", style="event", data=fmri)
</code></pre>
<p>If I understand correctly:</p>
<ul>
<li>The pandas dataframe containing the data is set with the <code>data</code> argument</li>
<li>The name of the column containing x values is set using the <code>x</code> argument</li>
<li>The name of the column containing y values is set using the <code>y</code> argument</li>
<li>There are 4 lines plotted, the color and style for each line is set using the data in the columns defined by the names provided to the <code>hue</code> and <code>style</code> arguments</li>
<li>There is no argument for the width of the error band (uncertainty data)</li>
</ul>
<p>Have I misunderstood something? Where does the error width data come from? How does Seaborn know what widths to use when plotting the shaded regions?</p>
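<p>For context on where the band comes from: <code>fmri</code> contains many observations (one per subject) for each <code>(timepoint, region, event)</code> combination, and <code>lineplot</code> aggregates them, drawing the per-x mean as the line and, by default, a bootstrapped 95% confidence interval of the mean as the band (controlled by the <code>errorbar=</code> argument, <code>ci=</code> in older versions). A minimal sketch of that aggregation, using made-up numbers purely for illustration:</p>

```python
import pandas as pd

# Toy stand-in for fmri: several "subjects" measured at each timepoint.
# The repeated y values per x are what seaborn turns into the band.
df = pd.DataFrame({
    "timepoint": [0, 0, 0, 1, 1, 1],
    "signal":    [0.1, 0.2, 0.3, 0.5, 0.6, 0.7],
})

# The line is the per-x mean; the band width is derived from the same
# spread (by default via bootstrapping, not a plain standard deviation).
stats = df.groupby("timepoint")["signal"].agg(["mean", "std", "count"])
print(stats)
```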
<p><a href="https://i.sstatic.net/gtBea.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gtBea.png" alt="Seaborn Line Example" /></a></p>
| <python><seaborn> | 2023-09-24 18:31:45 | 1 | 18,579 | user2138149 |
77,168,370 | 6,684,751 | Django Rest Framework Default Settings Not Working | <p>I'm facing an issue with Django Rest Framework (DRF) where the default settings I've configured in my settings.py file are not working as expected. Specifically, I've set up default pagination and authentication classes, but they don't seem to be applied to my views.</p>
<p>I am using:</p>
<pre><code>python3.11
Django==4.2.5
djangorestframework==3.14.0
django_filter==23.3
</code></pre>
<p>Here are the relevant parts of my project configuration:</p>
<p><code>settings.py</code></p>
<pre><code>REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
    'PAGE_SIZE': 10,
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ],
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
        'rest_framework.permissions.DjangoModelPermissions',
    ]
}
</code></pre>
<p><code>project/urls.py</code></p>
<pre><code>urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('myapp.urls')),
]
</code></pre>
<p><code>myapp/urls.py</code></p>
<pre><code>urlpatterns = [
    path('myview/', views.MyListView.as_view()),  # MyListView is a DRF view
]
</code></pre>
<p><code>myapp/views.py</code></p>
<pre><code>class MyListView(generics.ListCreateAPIView):
    queryset = Author.objects.all()
    serializer_class = MySerializer
</code></pre>
| <python><django><django-rest-framework> | 2023-09-24 18:12:54 | 1 | 906 | arun n a |
77,168,266 | 4,451,315 | Normalise Polars list | <p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'a': [[1,2,3], [4, 5, 6, 7]]})
</code></pre>
<p>I'd like to divide each element in each list by the total in that list. I.e. to get</p>
<pre class="lang-py prettyprint-override"><code>shape: (2, 1)
┌──────────────────────────────────┐
│ a                                │
│ ---                              │
│ list[f64]                        │
╞══════════════════════════════════╡
│ [0.166667, 0.333333, 0.5]        │
│ [0.181818, 0.227273, … 0.318182] │
└──────────────────────────────────┘
</code></pre>
<p>How can I do this in Polars?</p>
<p>I've tried using <code>.list.eval</code> but get an error:</p>
<pre class="lang-py prettyprint-override"><code>In [8]: df.select(pl.col('a').list.eval(pl.element()/pl.col('a')
...: .list.sum()))
----------------------------------------------------------------
ComputeError: named columns are not allowed in `list.eval`; consider using `element` or `col("")`
</code></pre>
| <python><python-polars> | 2023-09-24 17:47:38 | 2 | 11,062 | ignoring_gravity |
77,167,999 | 2,455,888 | Can I call a QObject method from main thread after moveToThread() method is called? | <p>I have a worker class</p>
<pre><code>class Worker(QObject):
    finished = Signal()

    def __init__(self, n):
        super().__init__()
        self.a_flag = False
        # self.mutex = QMutex()

    # @Slot()
    def stop(self):
        self.a_flag = True

    @Slot()
    def do_work(self):
        while True:
            if self.a_flag:
                break
            # do some work
        self.finished.emit()
</code></pre>
<p>And somewhere in the controller (from main thread) I am moving it to a new thread</p>
<pre><code>self.worker = Worker(num)
self.worker_thread = QThread()
self.worker.moveToThread(self.worker_thread)
</code></pre>
<p>After <code>self.worker.moveToThread(self.worker_thread)</code> the thread affinity of <code>worker</code> is changed, and I guess I can't access the members of <code>worker</code> from the controller (I am not sure about this, but I can argue why). Is there any reference that says anything about it?</p>
<p>Is it safe to do <code>self.a_flag = True</code> somewhere in controller (main thread) or call <code>stop()</code>?</p>
<p>Is there any workaround to notify <code>do_work</code> to get out of the loop?</p>
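<p>For what it's worth, in CPython setting a single bool attribute from another thread is effectively atomic because of the GIL, so the flag pattern generally keeps working even after <code>moveToThread()</code>; the Qt-idiomatic alternatives are <code>QThread.requestInterruption()</code> or emitting a queued signal connected to <code>stop()</code>. The same stop-flag shape can be sketched with the standard library alone (a plain-Python stand-in, not Qt code):</p>

```python
import threading
import time

class Worker:
    """Stdlib stand-in for the QObject worker: a stop flag read in a loop."""
    def __init__(self):
        self._stop = threading.Event()  # plays the role of self.a_flag
        self.iterations = 0

    def stop(self):  # safe to call from any thread
        self._stop.set()

    def do_work(self):
        while not self._stop.is_set():
            self.iterations += 1  # do some work
            time.sleep(0.001)

worker = Worker()
t = threading.Thread(target=worker.do_work)
t.start()
time.sleep(0.05)
worker.stop()     # called from the "main thread", as the controller would
t.join(timeout=1)
print(worker.iterations > 0, t.is_alive())  # True False
```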
| <python><qt><pyqt><pyside><qthread> | 2023-09-24 16:35:53 | 1 | 106,472 | haccks |
77,167,960 | 2,829,863 | Converting Latin characters to quoted-printable in python | <p>How to convert Latin characters to <code>quoted-printable</code> encoding in Python?</p>
<p>I know about <a href="https://docs.python.org/3/library/quopri.html" rel="nofollow noreferrer">quopri</a> but it doesn't work with Latin characters (maybe I'm doing something wrong).</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import quopri
fly_as_quoted_printable = b'=28=46=6C=79=29'
fly_as_bytes = quopri.decodestring(fly_as_quoted_printable)
fly_as_utf8 = fly_as_bytes.decode('utf-8')
print('\nConverting `quoted_printable` to bytes and string is ok:')
print(f'fly_as_quoted_printable= {fly_as_quoted_printable}')
print(f'fly_as_bytes= {fly_as_bytes}')
print(f'fly_as_utf8= {fly_as_utf8}')
cyrillic_and_latin_mixed_as_bytes = bytes('Полёт (Fly)', 'utf-8')
quoted_printable = quopri.encodestring(cyrillic_and_latin_mixed_as_bytes)
print('\nBut converting latin characters as bytes to `quoted_printable` does not work:')
print(f'cyrillic_and_latin_mixed_as_bytes= {cyrillic_and_latin_mixed_as_bytes}')
print(f'quotep_printable= {quoted_printable}')
</code></pre>
<p>The output is:</p>
<pre><code>Converting `quoted_printable` to bytes and string is ok:
fly_as_quoted_printable= b'=28=46=6C=79=29'
fly_as_bytes= b'(Fly)'
fly_as_utf8= (Fly)
But converting latin characters as bytes to `quoted_printable` does not work:
cyrillic_and_latin_mixed_as_bytes= b'\xd0\x9f\xd0\xbe\xd0\xbb\xd1\x91\xd1\x82 (Fly)'
quotep_printable= b'=D0=9F=D0=BE=D0=BB=D1=91=D1=82 (Fly)'
</code></pre>
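<p>Note that the second output is actually correct: quoted-printable (RFC 2045) deliberately leaves printable ASCII characters literal and escapes only the rest, which is why <code>(Fly)</code> survives unescaped while the Cyrillic bytes become <code>=XX</code>. If every byte really must be escaped (as in <code>=28=46=6C=79=29</code>), the stdlib won't do that for you, but a small helper can (a sketch; <code>qp_escape_all</code> is my own name, not a stdlib function):</p>

```python
import quopri

data = bytes('Полёт (Fly)', 'utf-8')

# quopri's output is already valid quoted-printable: ASCII stays literal.
print(quopri.encodestring(data))

# If *every* byte must be =XX-escaped, do it manually:
def qp_escape_all(raw: bytes) -> bytes:
    return b''.join(b'=%02X' % b for b in raw)

print(qp_escape_all(b'(Fly)'))  # b'=28=46=6C=79=29'

# Round-trip check: a fully escaped string still decodes correctly.
assert quopri.decodestring(qp_escape_all(data)) == data
```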
| <python><quoted-printable> | 2023-09-24 16:25:09 | 1 | 787 | Comrade Che |
77,167,833 | 7,155,015 | What if __enter__ and __exit__ raise? | <p>I'm working on a simple package to control my power supply remotely. I wrapped the logic in a class; let's call it <code>PowerSupply</code>. The communication is over a serial port. Before actual control, a specific command must be sent to switch the device to remote mode; there is also a command to revert it to local mode. Here is a description of the context manager I would like to use:</p>
<ul>
<li>Setup
<ol>
<li>Open serial port</li>
<li>Switch the device to the remote control mode</li>
</ol>
</li>
<li>Teardown
<ol>
<li>Turn off the voltage</li>
<li>Switch the device to the local control mode</li>
<li>Close serial port</li>
</ol>
</li>
</ul>
<p>The problem is (almost) each of the above points can raise actually. Here is <em>very</em> simplified code :</p>
<pre class="lang-py prettyprint-override"><code>class PowerSupply:
    def __init__(self, port: str):
        self._device = serial.Serial(port)

    def __enter__(self):
        try:
            self.setMode(EMode.REMOTE)
        except Exception as outer:
            try:
                self.__exit__(*sys.exc_info())  # should be cleaned up anyway
            except Exception as inner:
                pass
            finally:
                raise outer  # we don't want to miss this error
        else:
            return self

    def __exit__(self, _, __, ___):
        try:
            self.setOut(EOut.OFF)
            self.setMode(EMode.LOCAL)
        except Exception as e:
            raise e  # we don't want to miss this error
        finally:
            self.close()

    def setMode(self, mode):
        pass  # can raise

    def setOut(self, out):
        pass  # can raise
</code></pre>
<p>The usage is straightforward:</p>
<pre class="lang-py prettyprint-override"><code>with PowerSupply("/dev/ttyUSB0") as ps:
    ps.setMaxVoltage(5000)
    ps.setMaxCurrent(1000)
    ps.setVoltage(1000)
    ps.setOut(EOut.ON)
    # work
</code></pre>
<p>I see one problem (excluding the code full of <code>try</code>s) with this approach: <em><code>__exit__</code> replaces the exception from the <code>with</code> body</em>. Actually, it is not quite a problem in my case because I made every single command to be logged and any error is also logged internally. But anyway, could be a problem in other projects.</p>
<p>What is a recommended approach to deal with such kind of cases when both setup <em>and</em> teardown raise?</p>
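<p>One commonly recommended pattern for this situation is <code>contextlib.ExitStack</code>: register each teardown step as soon as its matching setup succeeds, so a failing <code>__enter__</code> unwinds automatically, and teardown exceptions chain onto the body's exception (via <code>__context__</code>) instead of replacing it. A sketch with a hypothetical in-memory device standing in for the serial port:</p>

```python
from contextlib import ExitStack

class FakeDevice:
    """Hypothetical stand-in for the serial device, for illustration only."""
    def __init__(self):
        self.log = []
    def set_mode(self, mode):
        self.log.append(f"mode={mode}")
    def set_out(self, state):
        self.log.append(f"out={state}")
    def close(self):
        self.log.append("closed")

class PowerSupply:
    def __init__(self):
        self._device = FakeDevice()
        self._stack = ExitStack()

    def __enter__(self):
        # Register teardown steps before/as each setup step succeeds; if a
        # later setup step raises, close() unwinds only what was registered
        # and chains any teardown exception onto the original.
        self._stack.callback(self._device.close)
        self._stack.callback(self._device.set_mode, "LOCAL")
        self._stack.callback(self._device.set_out, "OFF")
        try:
            self._device.set_mode("REMOTE")
        except Exception:
            self._stack.close()  # unwind, then re-raise the setup error
            raise
        return self

    def __exit__(self, *exc_info):
        self._stack.close()  # runs OFF -> LOCAL -> close, in LIFO order
        return False         # never swallow the body's exception

with PowerSupply() as ps:
    pass
print(ps._device.log)
# ['mode=REMOTE', 'out=OFF', 'mode=LOCAL', 'closed']
```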
| <python><contextmanager> | 2023-09-24 15:51:55 | 1 | 691 | LRDPRDX |
77,167,796 | 6,365,890 | Parse string inside a column (pandas) | <p>I'm trying to parse out the "select investors" column into individual columns (ex. investor1, investor2, investor3) in this dataset: <a href="https://www.cbinsights.com/research-unicorn-companies" rel="nofollow noreferrer">https://www.cbinsights.com/research-unicorn-companies</a></p>
<p>I tried this code:</p>
<pre><code>df_worldunicorns[['tier1','tier2','tier3']] = df_worldunicorns['Select Investors'].str.split(',', expand=True)
</code></pre>
<p>but get the following error:</p>
<pre><code>ValueError: Columns must be same length as key
</code></pre>
<p>Please help</p>
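<p>The error usually means some rows split into more (or fewer) than three pieces, so the three target columns don't match the expanded width. Two hedged options, shown on made-up data since the CB Insights rows vary in investor count:</p>

```python
import pandas as pd

# Toy frame standing in for the unicorn data: rows have 2-4 investors.
df = pd.DataFrame({"Select Investors": [
    "Sequoia, SIG, Sina Weibo",
    "Accel, Tiger Global",
    "a16z, Benchmark, GV, Coatue",
]})

# Option 1: cap at 3 columns; everything after the 2nd comma lands in tier3,
# and rows with fewer pieces are padded with None.
df[["tier1", "tier2", "tier3"]] = df["Select Investors"].str.split(",", n=2, expand=True)

# Option 2: keep all pieces and name however many columns appear.
all_tiers = df["Select Investors"].str.split(",", expand=True)
all_tiers.columns = [f"investor{i + 1}" for i in range(all_tiers.shape[1])]

print(df[["tier1", "tier2", "tier3"]])
print(all_tiers)
```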
| <python><pandas><dataframe><parsing><split> | 2023-09-24 15:41:40 | 2 | 669 | pynewbee |
77,167,537 | 19,130,803 | extract tuple items from list without accessing index | <p>I am trying to unpack tuple items that contain an embedded tuple, without using indexing.</p>
<pre><code>items = [(('a', 'b'), 'a'), (('a', 'b'), 'b'), (('a', 'b'), 'c'), (('a', 'b'), 'd')]

# Currently accessing using index
for xy, z in items:
    print(xy[0], xy[1], z)

# Trying without index but getting error
for x, y, z in items:
    print(x, y, z)

for item in items:
    x, y, z = item
    print(x, y, z)
</code></pre>
<p>Error:</p>
<pre><code>ValueError: not enough values to unpack (expected 3, got 2)
</code></pre>
<p>Is it possible?</p>
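<p>Yes: nested unpacking works when the target mirrors the structure of each item with parentheses:</p>

```python
items = [(('a', 'b'), 'a'), (('a', 'b'), 'b'), (('a', 'b'), 'c'), (('a', 'b'), 'd')]

# Parentheses in the target mirror the nesting of each item:
flat = []
for (x, y), z in items:
    flat.append((x, y, z))

print(flat)
# [('a', 'b', 'a'), ('a', 'b', 'b'), ('a', 'b', 'c'), ('a', 'b', 'd')]
```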
| <python> | 2023-09-24 14:36:53 | 3 | 962 | winter |
77,167,377 | 9,571,463 | How to Split a Dictionary By Key Inplace | <p>I have a dictionary with three (variable) items:</p>
<pre><code>d = {"data": [1, 2, 3, 4], "id": 4, "name": "bob"}
</code></pre>
<p>I want to split this into two dicts without copying any data. Basically, I want to move the <code>id</code> and <code>name</code> keys into their own dictionary, but only by specifying "all keys that aren't the <code>data</code> key".</p>
<p>I would like to avoid copying the "data" key-value item as it can be 15+MB in size, so if anything, extracting the other items and leaving the "data" key-value item intact is desired.</p>
<p>My attempt:</p>
<pre><code># This will create a new dict for the non "data" key items. Great!
# But I want the "data" key-value pair to remain in its own dictionary without having to copy it into a new dict
non_data_dict = {key: d[key] for key in d.keys() - ['data']}
</code></pre>
<p>My desired result:</p>
<pre><code>data_dict = {"data": [1,2,3,4]}
non_data_dict = {"id": 4, "name": "bob"}
</code></pre>
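<p>A sketch of one way to get there with <code>dict.pop</code>: popping moves entries out of <code>d</code> in place, and since dict values are references, the large list is never copied regardless of which dict it ends up in:</p>

```python
d = {"data": [1, 2, 3, 4], "id": 4, "name": "bob"}

big_list = d["data"]  # keep a handle just to show nothing gets copied

# Pop every key except "data" into a new dict; d.keys() - {"data"} builds
# a fresh set, so popping while iterating it is safe.
non_data_dict = {k: d.pop(k) for k in d.keys() - {"data"}}

print(d)              # {'data': [1, 2, 3, 4]}  -- this *is* data_dict now
print(non_data_dict)  # {'id': 4, 'name': 'bob'} (key order may vary)
print(d["data"] is big_list)  # True: same object, no copy was made
```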
| <python><dictionary> | 2023-09-24 13:50:31 | 1 | 1,767 | Coldchain9 |
77,167,279 | 11,098,908 | Reasons to have a return statement without returning any value | <p>The following code came from the book <a href="https://www.booktopia.com.au/fundamentals-of-python-kenneth-lambert/book/9781337560092.html?source=pla&gclid=Cj0KCQjwvL-oBhCxARIsAHkOiu0hyMQneJ9G_uPeEKaq8o6qtQVSxkqqokBDKkawFDitkWNIoLc4HuUaAkbIEALw_wcB" rel="nofollow noreferrer">Fundamentals of Python</a>, which made me think there <em>must</em> be some reasons for the author to use <code>return</code> without returning anything.</p>
<pre><code>def save(self, fileName = None):
    """Saves pickled accounts to a file. The parameter
    allows the user to change filenames."""
    if fileName != None:
        self.fileName = fileName
    elif self.fileName == None:
        return # WHAT ARE THE REASONS FOR HAVING THIS LINE? CAN WE DELETE/REMOVE IT?
    fileObj = open(self.fileName, "wb")
    for account in self.accounts.values():
        pickle.dump(account, fileObj)
    fileObj.close()
</code></pre>
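<p>A bare <code>return</code> exits the function immediately with the value <code>None</code>; here it is a guard clause: if no filename was passed in <em>and</em> none is stored, there is nothing to open, so the method bails out before the file I/O. Deleting the line would let execution fall through to <code>open(None, "wb")</code> and raise a <code>TypeError</code>. A minimal standalone sketch of the same guard-clause shape:</p>

```python
def save(filename=None, stored=None):
    """Guard-clause sketch: bail out when there is no usable filename."""
    if filename is not None:
        stored = filename
    elif stored is None:
        return  # nothing to save to -- exit early, implicitly returning None
    return f"would write to {stored}"

print(save())                  # None  (guard clause hit)
print(save("accounts.dat"))    # would write to accounts.dat
print(save(stored="old.dat"))  # would write to old.dat
```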
<p>The full code is as follows:</p>
<pre><code>import pickle

class SavingsAccount(object):

    RATE = 0.02  # Single rate for all accounts

    def __init__(self, name, pin, balance = 0.0):
        self.name = name
        self.pin = pin
        self.balance = balance

    def __str__(self):
        """Returns the string rep."""
        result = 'Name: ' + self.name + '\n'
        result += 'PIN: ' + self.pin + '\n'
        result += 'Balance: ' + str(self.balance)
        return result

    def getBalance(self):
        """Returns the current balance."""
        return self.balance

    def getName(self):
        """Returns the current name."""
        return self.name

    def getPin(self):
        """Returns the current pin."""
        return self.pin

    def deposit(self, amount):
        """Deposits the given amount and returns None."""
        self.balance += amount
        return None

    def withdraw(self, amount):
        """Withdraws the given amount.
        Returns None if successful, or an
        error message if unsuccessful."""
        if amount < 0:
            return "Amount must be >= 0"
        elif self.balance < amount:
            return "Insufficient funds"
        else:
            self.balance -= amount
            return None

    def computeInterest(self):
        """Computes, deposits, and returns the interest."""
        interest = self.balance * SavingsAccount.RATE
        self.deposit(interest)
        return interest

class Bank(object):

    def __init__(self, fileName):
        self.accounts = {}
        self.fileName = fileName

    def __str__(self):
        """Return the string rep of the entire bank."""
        return '\n'.join(map(str, self.accounts.values()))

    def makeKey(self, name, pin):
        """Makes and returns a key from name and pin."""
        return name + "/" + pin

    def add(self, account):
        """Inserts an account with name and pin as a key."""
        key = self.makeKey(account.getName(), account.getPin())
        self.accounts[key] = account

    def remove(self, name, pin):
        """Removes an account with name and pin as a key."""
        key = self.makeKey(name, pin)
        return self.accounts.pop(key, None)

    def get(self, name, pin):
        """Returns an account with name and pin as a key
        or None if not found."""
        key = self.makeKey(name, pin)
        return self.accounts.get(key, None)

    def computeInterest(self):
        """Computes interest for each account and
        returns the total."""
        total = 0.0
        for account in self.accounts.values():
            total += account.computeInterest()
        return total

    def save(self, fileName = None):
        """Saves pickled accounts to a file. The parameter
        allows the user to change filenames."""
        if fileName != None:
            self.fileName = fileName
        elif self.fileName == None:
            return
        fileObj = open(self.fileName, "wb")
        for account in self.accounts.values():
            pickle.dump(account, fileObj)
        fileObj.close()
</code></pre>
<p>How can <code>def save(self, fileName=None)</code> be modified so that the program asks the user to provide a file name, that is:</p>
<pre><code>bank = Bank(None)
bank.add(SavingsAccount("Wilma", "1001", 4000.00))
bank.add(SavingsAccount("Fred", "1002", 1000.00))
bank.save() # a prompt would appear: 'Please provide a name for the file to be saved'
</code></pre>
| <python><return> | 2023-09-24 13:22:19 | 1 | 1,306 | Nemo |
77,167,156 | 10,952,047 | remove suffix after concatenate python | <p>I have these anndata objects:</p>
<pre><code>ldata1x
ldata2x
ldata3x
ldata4x
</code></pre>
<p>and I would like to concatenate them into one df.</p>
<pre><code>ldata1x.obs.index
Index(['KO_d6_r1_AAACCGGCACCTCGCT-1', 'KO_d6_r1_AAAGCCGCAAGGATTA-1',
'KO_d6_r1_AAACCGCGTTAGCTGA-1', 'KO_d6_r1_AAAGCGGGTGTTTGTC-1',
'KO_d6_r1_AACAGATAGCAGCTAT-1', 'KO_d6_r1_AAAGCGGGTTCAAGCA-1',
'KO_d6_r1_AAATGCCTCCAATTAG-1', 'KO_d6_r1_AAAGCCGCACAGAAAC-1',
'KO_d6_r1_AAAGCGGGTCTAACAG-1', 'KO_d6_r1_AAACCGCGTGTAACCA-1',
...
'KO_d6_r1_TTTCACCCACTAAGTT-1', 'KO_d6_r1_TTTGAGTCACCAAAGG-1',
'KO_d6_r1_TTTAGCTTCACTCGCT-1', 'KO_d6_r1_TTTAGGATCTATGACA-1',
'KO_d6_r1_TTTGACTTCATGAAGG-1', 'KO_d6_r1_TTTCACCCAGCTTACA-1',
'KO_d6_r1_TTTGCATTCCGGAACC-1', 'KO_d6_r1_TTTAGGATCTTGGATA-1',
'KO_d6_r1_TTTGACCGTGCTTACT-1', 'KO_d6_r1_TTTGGTAAGTTGTCCC-1'],
dtype='object', length=1628)
</code></pre>
<p>so I did:</p>
<pre><code># concatenate the dataframes
ldata = ldata1x.concatenate([ldata2x, ldata3x, ldata4x])
</code></pre>
<p>but I obtain this df:</p>
<pre><code>ldata.obs.index
Index(['KO_d6_r1_AAACCGGCACCTCGCT-1-0', 'KO_d6_r1_AAAGCCGCAAGGATTA-1-0',
'KO_d6_r1_AAACCGCGTTAGCTGA-1-0', 'KO_d6_r1_AAAGCGGGTGTTTGTC-1-0',
'KO_d6_r1_AACAGATAGCAGCTAT-1-0', 'KO_d6_r1_AAAGCGGGTTCAAGCA-1-0',
'KO_d6_r1_AAATGCCTCCAATTAG-1-0', 'KO_d6_r1_AAAGCCGCACAGAAAC-1-0',
'KO_d6_r1_AAAGCGGGTCTAACAG-1-0', 'KO_d6_r1_AAACCGCGTGTAACCA-1-0',
...
'KO_d8_r2_TTTCACCCACTTAACG-1-3', 'KO_d8_r2_TTTGGCTGTAGGATCC-1-3',
'KO_d8_r2_TTTCTCACAATTGCGC-1-3', 'KO_d8_r2_TTTGCATTCAATGTGC-1-3',
'KO_d8_r2_TTTCGTCCAAACAACA-1-3', 'KO_d8_r2_TTTGCATTCCCTGGTT-1-3',
'KO_d8_r2_TTTGGCTGTGCATTTC-1-3', 'KO_d8_r2_TTTGACTTCCGCTAGA-1-3',
'KO_d8_r2_TTTGTTGGTTCCTGTG-1-3', 'KO_d8_r2_TTTGTTGGTGGAAGGC-1-3'],
dtype='object', length=5536)
</code></pre>
<p>In other words, I don't need the suffixes <code>-0, -1, -2, -3</code> (which identify the source dataframes) appended after the <code>-1</code>.
How can I remove them? Thanks.</p>
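<p>Two possible fixes, hedged: if I read anndata's documentation correctly, <code>concatenate</code> accepts <code>index_unique=None</code> to skip adding the batch suffix entirely; otherwise the trailing <code>-&lt;batch&gt;</code> can be stripped afterwards with a regex, demonstrated here on a plain pandas Index:</p>

```python
import pandas as pd

# Stand-in for ldata.obs.index after concatenation:
idx = pd.Index([
    'KO_d6_r1_AAACCGGCACCTCGCT-1-0',
    'KO_d8_r2_TTTGTTGGTGGAAGGC-1-3',
])

# Strip only the final "-<batch>" suffix, keeping the original "-1":
cleaned = idx.str.replace(r'-\d+$', '', regex=True)
print(cleaned.tolist())
# ['KO_d6_r1_AAACCGGCACCTCGCT-1', 'KO_d8_r2_TTTGTTGGTGGAAGGC-1']

# Alternatively (untested here), skip the suffix at concatenation time:
# ldata = ldata1x.concatenate([ldata2x, ldata3x, ldata4x], index_unique=None)
```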
| <python><pandas><dataframe> | 2023-09-24 12:47:46 | 1 | 417 | jonny jeep |
77,167,055 | 12,002,600 | Running falcon ai model on mojo lang | <p>How can we run <a href="https://huggingface.co/tiiuae/falcon-180B" rel="nofollow noreferrer">https://huggingface.co/tiiuae/falcon-180B</a>, one of the current best models, on Mojo, which is claimed to be up to 35,000x faster than Python? When we just copy this:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-180b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
</code></pre>
<p>we get the error below:</p>
<pre><code>error: TODO: expressions are not yet supported at the file scope level
model = "tiiuae/falcon-180b"
^
hello.mojo:7:1: error: use of unknown declaration 'model'
model = "tiiuae/falcon-180b"
^~~~~
hello.mojo:9:1: error: TODO: expressions are not yet supported at the file scope level
tokenizer = AutoTokenizer.from_pretrained(model)
^
hello.mojo:3:6: error: unable to locate module 'transformers'
from transformers import AutoTokenizer, AutoModelForCausalLM
^
hello.mojo:10:1: error: TODO: expressions are not yet supported at the file scope level
pipeline = transformers.pipeline(
^
hello.mojo:18:1: error: TODO: expressions are not yet supported at the file scope level
sequences = pipeline(
^
hello.mojo:18:13: error: use of unknown declaration 'pipeline'
sequences = pipeline(
^~~~~~~~
hello.mojo:26:12: error: use of unknown declaration 'sequences'
for seq in sequences:
^~~~~~~~~
hello.mojo:27:5: error: TODO: expressions are not yet supported at the file scope level
print(f"Result: {seq['generated_text']}")
^
hello.mojo:27:12: error: expected ')' in call argument list
print(f"Result: {seq['generated_text']}")
^
hello.mojo:27:12: error: statements must start at the beginning of a line
print(f"Result: {seq['generated_text']}")
^
mojo: error: failed to parse the provided Mojo
</code></pre>
<p>I expected that copying the code from Hugging Face into the Mojo environment would run the model.</p>
| <python><artificial-intelligence><huggingface-transformers><huggingface><mojolang> | 2023-09-24 12:21:44 | 2 | 1,482 | Bitdom8 |