| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,355,619
| 5,197,034
|
Detect the format of a datetime string in Python
|
<p>I'm looking for a way to detect the <code>strftime</code>-style format of a datetime string in Python. All datetime libraries I've found have functionalities for parsing the string to create a datetime object, but I would like to detect the format or pattern that can be used with the <code>datetime.strptime</code> format parameter.</p>
<p>Why? I'm dealing with long lists (or series) of datetime strings and using <code>dateutil.parser</code> to parse them is too inaccurate and slow.</p>
<ul>
<li><strong>Slow</strong>: It will check for all potential formats every single time, although all entries per list are of the same format (in my case).</li>
<li><strong>Inaccurate</strong>: Ambiguous entries will be parsed in one out of multiple ways without drawing knowledge from other entries that are not ambiguous.</li>
</ul>
<p>So instead I would like to detect the format. Once I have that, I can use the <code>to_datetime</code> function in <code>polars</code> to create a datetime series in a faster manner.</p>
<p>I couldn't find such functionality in more modern datetime libs like pendulum. I also implemented my own version that iterates through a fixed list of formats and checks if the values can be read using <code>datetime.strptime</code>, like so:</p>
<pre><code>patterns = [
    "%Y.%m.%d %H:%M:%S",
    "%Y-%m-%d %H:%M",
    "%Y-%m-%d",
    ...
]

for pattern in patterns:
    try:
        for val in col:
            assert datetime.datetime.strptime(val, pattern)
        return pattern
    except:
        continue
</code></pre>
<p>This doesn't strike me as an elegant solution and I was wondering if there's a better way to do it or even a library available that does this sort of thing.</p>
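A slightly tidier sketch of the same idea: probe a small sample of the column first, catch only `ValueError` (what `strptime` raises on a mismatch), and return the first pattern that survives. The pattern list here is illustrative, not exhaustive:

```python
import datetime

# Candidate strftime patterns, ordered from most to least specific.
PATTERNS = [
    "%Y-%m-%d %H:%M:%S",
    "%Y.%m.%d %H:%M:%S",
    "%Y-%m-%d %H:%M",
    "%Y-%m-%d",
    "%d/%m/%Y",
]

def detect_format(values, patterns=PATTERNS, sample_size=10):
    """Return the first pattern that parses every sampled value, else None."""
    sample = list(values)[:sample_size]
    for pattern in patterns:
        try:
            for val in sample:
                datetime.datetime.strptime(val, pattern)
        except ValueError:
            continue
        return pattern
    return None

print(detect_format(["2023-05-29", "2023-05-30"]))  # %Y-%m-%d
```

Once the format is known it can be handed to `polars`' `to_datetime` (or `strptime`) for the fast vectorized parse; sampling keeps the probe cheap even on long series, at the cost of assuming the whole column shares one format.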
|
<python><datetime><parsing><format><python-polars>
|
2023-05-29 08:03:22
| 4
| 2,603
|
pietz
|
76,355,510
| 7,657,219
|
Where can I find pip in Linux to add it to my PATH? Python 3.11.3
|
<p>I've installed Python 3.11.3 in my Linux machine, but every time I need to install an offline package or do something with pip I have to run it using:</p>
<blockquote>
<p>python3.11 -m pip install "package"</p>
</blockquote>
<p>But I can't use it just with the command pip (without python3.11) like:</p>
<blockquote>
<p>pip install "package"</p>
</blockquote>
<p>I've searched for it to add it to my PATH, but in /usr/local/bin/ (the folder I have in my PATH) I only find a plain file, the python3.11.3 binary, and no pip. Am I missing something? Where is the pip file located so I can add it to my PATH?</p>
<p>Thank you very much.</p>
<p>Edit:</p>
<p>When I try to run pip install something it tells me:</p>
<p><em>bash: pip: command not found...</em></p>
<p>I'm using RHEL 8, and I've created an alias for python to link it to my python3.11 version.</p>
<p>If I try <strong>which pip</strong> it gives me <em>No pip in (/usr/local/bin: .... etc</em></p>
<p>From this Linux machine I <strong>don't</strong> have internet access, so I have to do everything offline.</p>
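One workable sketch (shown with `python3` so it runs anywhere; substitute `python3.11` for this specific install): the module invocation always works, `sysconfig` reveals where the wrapper scripts live, and an alias is the simplest offline-friendly fix:

```shell
# The module invocation always works, even when no `pip` script is on PATH:
python3 -m pip --version

# Ask the interpreter where its console scripts (pip, pip3.11, ...) live:
python3 -c 'import sysconfig; print(sysconfig.get_path("scripts"))'

# If no pip script exists there, `ensurepip` (bundled with CPython, so it
# works offline) can recreate it; otherwise a shell alias in ~/.bashrc is
# enough:
alias pip='python3.11 -m pip'
```

The alias route avoids touching PATH at all and keeps `pip` pinned to the intended interpreter, which matters on a machine with several Python versions.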
|
<python><pip>
|
2023-05-29 07:45:20
| 3
| 353
|
Varox
|
76,355,446
| 1,259,374
|
String to occurrences and back algorithm
|
<p>So I have this string <code>aabcd</code> and I need to encode it as character counts and then convert it back.</p>
<p>This is the first method I have:</p>
<pre class="lang-py prettyprint-override"><code>def string_to_occurances(s):
    dictionary = {}
    for i in range(int(len(s))):
        if s[i] in dictionary:
            dictionary[s[i]] += 1
        else:
            dictionary[s[i]] = 1
    return ''.join(f'{k}{v}' for k, v in dictionary.items())
</code></pre>
<p>So, in this case <code>aabcd</code> becomes <code>a2b1c1d1</code> and now I wonder how to convert it back.</p>
<p>This is what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>def is_digit(s):
    if s == '0' or s == '1' or s == '2' or s == '3' or s == '4' or s == '5' or s == '6' or s == '7' or s == '8' or s == '9':
        return True
    else:
        return False

s = 'a2b1c1d1'
last_chat = s[0]
index = len(s) - 1
digits = ''
for i in range(len(s) - 1, -1, -1):
    while index >= 0 and is_digit(s[index]):
        digits += s[index]
        index -= 1
</code></pre>
<p>Any suggestions?</p>
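One compact sketch of the decoding step using the stdlib `re` module: each token in the encoded string is one non-digit character followed by its (possibly multi-digit) count, so a single regex recovers both:

```python
import re

def occurrences_to_string(encoded):
    """Invert the encoding: 'a2b1c1d1' -> 'aabcd'."""
    # Each token is one non-digit character followed by its count.
    return "".join(char * int(count)
                   for char, count in re.findall(r"(\D)(\d+)", encoded))

print(occurrences_to_string("a2b1c1d1"))  # aabcd
```

One caveat: because the encoder counts characters in a dict rather than run-length encoding adjacent repeats, the round trip is only exact when each character's occurrences are adjacent in the original string (e.g. `'aabcda'` encodes to `a3b1c1d1` and decodes to `'aaabcd'`).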
|
<python>
|
2023-05-29 07:33:38
| 3
| 1,139
|
falukky
|
76,355,427
| 20,646,427
|
How to make django-filter depend on another django-filter
|
<p>I'm using the package django-filter and I have some fields which I want to depend on another field. For example, I have a field <code>name</code> and a field <code>car</code>; if I choose the name Michael in the <code>name</code> filter, the <code>car</code> filter should show me only cars that Michael has.</p>
<p>This looks like a big problem and I don't know how to solve it.</p>
<p>filters.py</p>
<pre><code>import django_filters
from django_filters import DateTimeFromToRangeFilter
from django.db.models import Q

from common.filters import CustomDateRangeWidget
from common.models import CounterParty, ObjectList
from .models import OpenPointList


class OpenPointFilter(django_filters.FilterSet):
    """
    Django-filter class to filter OpenPointList model by date, CounterParty name and Object name
    """
    CHOICES = (
        ('Closed', 'Closed'),
        ('Open', 'Open')
    )

    status = django_filters.ChoiceFilter(choices=CHOICES)
    category_choice = django_filters.ModelChoiceFilter(
        label='Категория',
        queryset=OpenPointList.objects.values_list('category', flat=True).distinct())
    category = django_filters.CharFilter(method='custom_category_filter')

    class Meta:
        model = OpenPointList
        fields = (
            'open_point_object',
            'status',
            'category',
        )
</code></pre>
|
<python><django><django-filter>
|
2023-05-29 07:30:47
| 1
| 524
|
Zesshi
|
76,355,339
| 13,564,858
|
Bandit vulnerability on 'Drop View <View_Name>'
|
<p>I am not sure why bandit is notifying the below as 'Detected possible formatted SQL query. Use parameterized queries instead.':</p>
<pre><code> conn.execute(f"DROP VIEW {view_name};")
</code></pre>
<p>Is there a way to parameterize the view_name? or concatenation is the only way forward to remove bandit flags here?</p>
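Identifiers (table and view names) cannot be bound as query parameters in most drivers, so the usual defence is to validate the name against a strict pattern or an allow-list of known views before interpolating it, and then quote it. A sketch, demonstrated with stdlib `sqlite3` (the same pattern applies to other DB-API connections):

```python
import re
import sqlite3

def drop_view(conn, view_name):
    """DROP VIEW with a validated identifier.

    Identifiers cannot be passed as bound parameters, so validate the
    name against a strict pattern (or an allow-list of known views)
    before interpolating it into the statement.
    """
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", view_name):
        raise ValueError(f"invalid view name: {view_name!r}")
    # Quoting the identifier also guards against keyword clashes.
    conn.execute(f'DROP VIEW "{view_name}"')

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("CREATE VIEW v AS SELECT x FROM t")
drop_view(conn, "v")
```

Bandit may still flag the f-string statically; once the name is validated like this, suppressing that specific finding with a `# nosec` comment (with a short justification) is a common resolution.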
|
<python><sql-injection><parameterization><bandit-python>
|
2023-05-29 07:15:21
| 1
| 429
|
Lucky Ratnawat
|
76,355,115
| 11,197,301
|
Read values in a file and convert them according to their type in Python
|
<p>I have the following file:</p>
<pre><code> 10 10 11 10530 18 100 0 Open ;
11 11 12 5280 14 100 0 Open ;
12 12 13 5280 10 100 0 Open ;
21 21 22 5280 10 100 0 Open ;
</code></pre>
<p>I read the file as follow:</p>
<pre><code>with open(file) as f:
    for line in f:
        values = line.rstrip().split()
</code></pre>
<p>this is the outcome for the first line:</p>
<pre><code>values
Out [2]: ['10', '10', '11', '10530', '18', '100', '0', 'Open']
</code></pre>
<p>I would like to convert some of them to integer, float or keep them as string.</p>
<p>The easiest way could be the following:</p>
<pre><code>values[0] = int(values[0])
values[1] = int(values[1])
values[2] = int(values[2])
values[3] = float(values[3])
values[4] = float(values[4])
values[5] = float(values[5])
values[6] = float(values[6])
values[7] = values[7]
</code></pre>
<p>Do you have in mind another, more compact or smarter way to do that?</p>
<p>I could for example write the values that I want to convert to float as float in the file by adding a dot. For example, 2 as 2.0.
However, I am stuck again with the conversion.</p>
<p>Thanks for any kind of help</p>
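A common compact pattern is to keep one converter function per column and zip them against the split fields; `str` keeps the last field unchanged:

```python
# One converter per column, applied positionally.
converters = [int, int, int, float, float, float, float, str]

line = "10 10 11 10530 18 100 0 Open"
values = [conv(field) for conv, field in zip(converters, line.split())]
print(values)  # [10, 10, 11, 10530.0, 18.0, 100.0, 0.0, 'Open']
```

This avoids writing the dots into the file at all: the converter list is the single place that records which column has which type.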
|
<python><input><type-conversion>
|
2023-05-29 06:30:50
| 1
| 623
|
diedro
|
76,354,999
| 13,710,421
|
How to transfer dictionary to dataframe
|
<p>There is a dictionary <code>dic_1</code>; I want to transfer it to a <code>dataframe</code> (the result should be like the result of <code>val_1.append(val_2)</code>), but the code <code>pd.DataFrame(dic_1)</code> fails. How can I fix it?</p>
<pre><code>import pandas as pd
val_1 = pd.DataFrame({'category':['a','b'],'amount':[1,3]})
val_2 = pd.DataFrame({'category':['d','e'],'amount':[4,7]})
dic_1 ={'v01':val_1,'v02':val_2}
</code></pre>
<p>The code below fails:</p>
<pre><code> pd.DataFrame(dic_1)
</code></pre>
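`pd.DataFrame` expects column data, not a mapping of whole DataFrames; stacking the frames with `pd.concat` gives the append-style result:

```python
import pandas as pd

val_1 = pd.DataFrame({'category': ['a', 'b'], 'amount': [1, 3]})
val_2 = pd.DataFrame({'category': ['d', 'e'], 'amount': [4, 7]})
dic_1 = {'v01': val_1, 'v02': val_2}

# pd.concat stacks the frames; ignore_index=True renumbers the rows the
# same way DataFrame.append would.
result = pd.concat(dic_1.values(), ignore_index=True)
print(result)
```

Dropping `ignore_index=True` (and passing `dic_1` directly) would instead keep the dict keys `'v01'`/`'v02'` as an outer index level, which is handy when the source frame of each row matters.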
|
<python><pandas><dictionary>
|
2023-05-29 06:05:05
| 2
| 2,547
|
anderwyang
|
76,354,983
| 2,598,184
|
List all folders from ClickUp space
|
<p>I'm trying to list all folders created under a particular ClickUp space, but it's not giving me the desired result. Below is the snippet:</p>
<pre><code>import requests

# Enter your ClickUp API token here
api_token = 'pk_xxxxxxxxxxx'

# Space ID for the desired space
space_id = '1234567'

# API endpoint for retrieving folders in a space
folders_url = f'https://api.clickup.com/api/v2/space/{space_id}/folder'

# Headers with authorization
headers = {
    'Authorization': api_token,
    'Content-Type': 'application/json'
}

def list_all_folders():
    try:
        # Send GET request to retrieve folders in the space
        response = requests.get(folders_url, headers=headers)
        response.raise_for_status()  # Raise exception if request fails

        # Parse the JSON response
        folders_data = response.json()

        # Extract and print folder details
        for folder in folders_data['folders']:
            folder_id = folder['id']
            folder_name = folder['name']
            print(f"Folder ID: {folder_id}, Folder Name: {folder_name}")
    except requests.exceptions.HTTPError as err:
        print(f"Error: {err}")

# Call the function to list all folders in the space
list_all_folders()
</code></pre>
</code></pre>
<p>Though I have passed the correct token, I'm getting this error while running the code:</p>
<blockquote>
<p>Error: 401 Client Error: Unauthorized for url:
<a href="https://api.clickup.com/api/v2/space/1234567/folder" rel="nofollow noreferrer">https://api.clickup.com/api/v2/space/1234567/folder</a></p>
</blockquote>
|
<python><clickup-api>
|
2023-05-29 06:01:16
| 1
| 418
|
Rahul
|
76,354,939
| 12,436,050
|
Load data to Oracle db through flat file
|
<p>Is there a way to load data from a flat file into an Oracle table? I am using the following Python code, but the file is too big and the script stops after some time (due to a lost DB connection).</p>
<pre><code>from tqdm import tqdm

insert_sty = "insert into MRSTY (CUI,TUI,STN,STY,ATUI,CVF) values (:0,:1,:2,:3,:4,:5)"
records = []

file_path = "../umls_files/umls-2023AA-metathesaurus-full/2023AA/META/MRSTY.RRF"
num_lines = sum(1 for line in open(file_path))

with open(file_path, 'r') as f:
    for line in tqdm(f, total=num_lines, desc="Processing file"):
        line = line.strip()
        records.append(line.split("|"))

for sublist in records:
    if sublist:
        sublist.pop()

for i in tqdm(records, desc="Inserting records"):
    try:
        cur.execute(insert_sty, i)
        print("record inserted")
    except Exception as e:
        print(i)
        print("Error: ", str(e))

conn.commit()
</code></pre>
|
<python><oracle-database>
|
2023-05-29 05:50:34
| 1
| 1,495
|
rshar
|
76,354,598
| 13,710,421
|
In python/pandas, how to filter the dataframe when the condition is stored in a list
|
<p>In python/pandas, how can I filter a dataframe by a condition stored in a list?
The code below, <code>ori_data[ori_data['category'] in ['a','d','f']]</code>, doesn't work:</p>
<pre><code>import pandas as pd
ori_data = pd.DataFrame({'category':['a','a','b','c','d','f']})
filtered_data = ori_data[ori_data['category'] in ['a','d','f']]
</code></pre>
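Python's `in` operator does not broadcast over a Series; the pandas equivalent is `Series.isin`, which builds a boolean mask from the list of wanted values:

```python
import pandas as pd

ori_data = pd.DataFrame({'category': ['a', 'a', 'b', 'c', 'd', 'f']})

# isin() tests each element against the list and returns a boolean mask.
filtered_data = ori_data[ori_data['category'].isin(['a', 'd', 'f'])]
print(filtered_data)
```

The mask can also be inverted with `~` (`ori_data[~ori_data['category'].isin([...])]`) to keep everything *not* in the list.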
|
<python><pandas>
|
2023-05-29 03:57:56
| 1
| 2,547
|
anderwyang
|
76,354,557
| 9,510,800
|
Hide whiskers and outliers in box plot plotly express and adjust color tone python
|
<p>Is it possible to remove the whiskers and outliers in a plotly express box plot in Python? My code looks like below. Also, is there a way I can give 2 sets of colors? For example, I have 3 labels and each label has 2 categories.
I want the same color for each label, but with the color tone reduced for each category.</p>
<pre><code>fig = px.box(all_services, x="xvalue", y="yvalue", color="label",
             title="title")
fig.update_traces(quartilemethod="inclusive", boxpoints=False)
</code></pre>
|
<python><plotly>
|
2023-05-29 03:41:02
| 0
| 874
|
python_interest
|
76,354,467
| 6,458,245
|
RuntimeError: Expected all tensors to be on the same device?
|
<p>I am getting this runtime error here:</p>
<pre><code>RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
</code></pre>
<p>However, all the tensors I've created here are on cuda. Does anyone know what the problem is?</p>
<pre><code>import torch.nn as nn
import torch

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(5, 1)
        self.trans = nn.Transformer(nhead=1, num_encoder_layers=1, d_model=5)

    def forward(self, x):
        tgt = torch.rand(4, 5).to(device)
        y = self.trans(x, tgt)
        out = self.lin(y)
        return out

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model = MyModel()
model.to(device)
model.eval()

src = torch.rand(4, 5)
out = model(src)
print(out)
</code></pre>
<p>Also, is there some pytorch command I can use to list all tensors on a device?</p>
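Two tensors in this code never reach the GPU: `src` is created on the CPU and passed straight to the model, and `tgt` relies on a global `device` variable instead of following the input. A sketch of one fix: create `tgt` on the input's own device and move `src` alongside the model:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(5, 1)
        self.trans = nn.Transformer(nhead=1, num_encoder_layers=1, d_model=5)

    def forward(self, x):
        # Create tgt on whatever device the input is on, instead of
        # relying on a global `device` variable.
        tgt = torch.rand(4, 5, device=x.device)
        y = self.trans(x, tgt)
        return self.lin(y)

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = MyModel().to(device)
model.eval()

# The input must be moved to the same device as the model's parameters.
src = torch.rand(4, 5).to(device)
with torch.no_grad():
    out = model(src)
print(out.shape)  # torch.Size([4, 1])
```

There is no single command listing "all tensors on a device", but any individual tensor reports its placement via `t.device`, and `next(model.parameters()).device` is a common way to check where a module's weights live.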
|
<python><pytorch>
|
2023-05-29 03:09:05
| 1
| 2,356
|
JobHunter69
|
76,354,242
| 2,769,240
|
f string to pass file path issue
|
<p>I have a function which accepts a file path. It's as below:</p>
<pre><code>def document_loader(doc_path: str) -> Optional[Document]:
    """ This function takes in a document in a particular format and
    converts it into a Langchain Document Object

    Args:
        doc_path (str): A string representing the path to the PDF document.

    Returns:
        Optional[DocumentLoader]: An instance of the DocumentLoader class or None if the file is not found.
    """
    # try:
    loader = PyPDFLoader(doc_path)
    docs = loader.load()
    print("Document loader done")
</code></pre>
<p>PyPDFLoader is a wrapper around PyPDF2 to read in a PDF file path.</p>
<p>Now,when I call the function with hardcoding the file path string as below:</p>
<pre><code>document_loader('/Users/Documents/hack/data/abc.pdf')
</code></pre>
<p>The function works fine and is able to read the pdf file path.</p>
<p>But now if I want a user to upload their pdf file via Streamlit file_uploader() as below:</p>
<pre><code>uploaded_file = st.sidebar.file_uploader("Upload a file", key="uploaded_file")
print(st.session_state.uploaded_file)

if uploaded_file is not None:
    filename = st.session_state.uploaded_file.name
    print(os.path.abspath(st.session_state.uploaded_file.name))
    document_loader(f'"{os.path.abspath(filename)}"')
</code></pre>
</code></pre>
<p>I get the error:</p>
<pre><code>ValueError: File path "/Users/Documents/hack/data/abc.pdf" is not a valid file or url
</code></pre>
<p>This statement <code>print(os.path.abspath(st.session_state.uploaded_file.name))</code> prints out the same path as the hardcoded one.</p>
<p>Note: Streamlit is currently on localhost on my laptop and I am the "user" who is trying to upload a pdf via locally runnin streamlit app.</p>
<p><strong>Edit1:</strong></p>
<p>So as per @MAtchCatAnd I added tempfile and it WORKS. But with an issue:</p>
<p>My function, where temp_file_path is passed, re-runs every time there is any interaction by a user. This is because the tempfile path changes each time, making the function re-run even though I decorated it with @st.cache_data.</p>
<p>The pdf file uploaded remains the same, so I don't want the same function to re run as it consumes some cost everytime it is run.</p>
<p>How to fix this as I see Streamlit has deprecated allow_mutation=True parameter in st.cache.</p>
<p>Here's the code:</p>
<pre><code>@st.cache_data
def document_loader(doc_path: str) -> Optional[Document]:
    """ This function takes in a document in a particular format and
    converts it into a Langchain Document Object

    Args:
        doc_path (str): A string representing the path to the PDF document.

    Returns:
        Optional[DocumentLoader]: An instance of the DocumentLoader class or None if the file is not found.
    """
    # try:
    loader = PyPDFLoader(doc_path)
    docs = loader.load()
    print("Document loader done")


uploaded_file = st.sidebar.file_uploader("Upload a file", key="uploaded_file")

if uploaded_file is not None:
    with tempfile.NamedTemporaryFile(delete=False) as temp_file:
        temp_file.write(uploaded_file.getvalue())
        temp_file_path = temp_file.name
        print(temp_file_path)

    custom_qa = document_loader(temp_file_path)
</code></pre>
|
<python><streamlit>
|
2023-05-29 01:44:47
| 1
| 7,580
|
Baktaawar
|
76,354,204
| 7,766,024
|
Flask's order of rendering functions?
|
<p>I'm trying to set up a basic website to practice Flask and have the following in my <code>app.py</code> file:</p>
<pre><code>from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, world!"

@app.route('/')
def home():
    return render_template('home.html')

if __name__ == "__main__":
    app.run(
        host="0.0.0.0",
        debug=True
    )
</code></pre>
<p>The contents of <code>home.html</code> are:</p>
<pre><code><div id="menubar">
<ul>
<li><a href="/">Home</a></li>
</ul>
</div>
</code></pre>
<p>I noticed that when I run the app as <code>python app.py</code>, <code>Hello, world!</code> is printed onto the screen but not <code>home.html</code>. When I comment out the <code>hello_world</code> function, then the contents of <code>home.html</code> are rendered.</p>
<p>Why is this? Is there some sort of rendering order that I'm not aware of?</p>
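There is no rendering order as such: each request is matched to exactly one view by Werkzeug's URL map, and when two identical rules are registered, the one registered first wins. So `hello_world` permanently shadows `home` for `/`. A minimal demonstration (assuming Flask is installed; `home` returns a plain string here instead of rendering a template, to keep the example self-contained):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, world!"

@app.route("/")          # same rule: never reached, first registration wins
def home():
    return "home page"

# The test client shows which view actually handles "/".
client = app.test_client()
print(client.get("/").get_data(as_text=True))  # Hello, world!
```

Giving the second view its own rule (e.g. `@app.route("/home")`), or deleting one of the two, resolves the conflict.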
|
<python><flask>
|
2023-05-29 01:25:45
| 1
| 3,460
|
Sean
|
76,354,125
| 6,159,217
|
3d scatter plot is displaying a black window
|
<p>Here is an example of matplotlib displaying a black window on my local environment.</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
fig = plt.figure()
ax = Axes3D(fig, elev=4, azim=-95)
xs, ys, zs = np.linspace(-2, 2, 60), np.linspace(-2, 2, 60), np.linspace(-2, 2, 60)
ax.scatter(xs=xs, ys=xs, zs=xs)
plt.show()
</code></pre>
<p>I know the issue is not my backend as others have reported, since I can run the snippet at this site fine:</p>
<p><a href="https://matplotlib.org/2.0.2/examples/mplot3d/subplot3d_demo.html" rel="nofollow noreferrer">https://matplotlib.org/2.0.2/examples/mplot3d/subplot3d_demo.html</a></p>
|
<python><matplotlib><mplot3d><matplotlib-3d><scatter3d>
|
2023-05-29 00:51:44
| 1
| 972
|
IntegrateThis
|
76,354,103
| 13,608,794
|
Rendering audio graph with correct volume level
|
<p>On the images below, the same audio file is represented. On the left side, there's a <strong>correct, original and NOT NORMALIZED audio preview</strong>.</p>
<p>I want to fix the following problems of the matplotlib's graph:</p>
<ul>
<li><h2>Correct the audio level (amplitude)</h2>
</li>
<li>Stop truncating the height
<ul>
<li>There's small space at Y-axis</li>
<li><strong>I've tested normalized files with different sample rates - this padding occurs regardless of audio level</strong></li>
</ul>
</li>
</ul>
<hr />
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Correct preview</th>
<th style="text-align: center;">Matplotlib</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;"><img src="https://i.sstatic.net/k7eNN.png"></td>
<td style="text-align: center;"><img src="https://i.sstatic.net/FAVgU.png"></td>
</tr>
</tbody>
</table>
</div><hr />
<p>Code I've used:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import wave, sys, os
import matplotlib.pyplot as plt

File_Name = os.path.basename(sys.argv[1])
plt.rcParams["axes.xmargin"] = 0

#-=-=-=-#

try:
    Audio = wave.open(sys.argv[1], "r")
except Exception:
    raise SystemExit("WAV format not integer, exiting.")

if Audio.getnchannels() > 1:
    raise SystemExit("Not a mono file, exiting.")

Signal = Audio.readframes(-1)
Signal = np.fromstring(Signal, "int16")
Time = np.linspace(0, len(Signal) / Audio.getframerate(), num = len(Signal))

#-=-=-=-#

plt.plot(Time, Signal, color = "#000000")
#plt.axis("off")
#plt.savefig(os.path.splitext(File_Name)[0] + ".png",
#   bbox_inches = "tight", transparent = 1, pad_inches = 0
#)
plt.show()
</code></pre>
|
<python><matplotlib><audio><waveform>
|
2023-05-29 00:42:18
| 0
| 303
|
kubinka0505
|
76,353,867
| 8,671,089
|
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
|
<p>I have a docker-compose.yml file that runs a Kafka server and creates a Kafka topic. I run docker compose and the tests from a GitHub workflow.</p>
<h2>docker-compose.yml</h2>
<pre><code>version: "3.8"

networks:
  kafka-network:
    driver: bridge

services:
  kafka:
    container_name: kafka
    image: bitnami/kafka:latest
    networks:
      - kafka-network
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ENABLE_KRAFT: "true"
      KAFKA_CFG_PROCESS_ROLES: broker,controller
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 1@127.0.0.1:9094
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,EXTERNAL://0.0.0.0:9093,CONTROLLER://:9094
      KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,EXTERNAL://localhost:9093
      KAFKA_CFG_BROKER_ID: 1
      KAFKA_CFG_NODE_ID: 1
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_CFG_MESSAGE_MAX_BYTES: 5242880
      KAFKA_CFG_MAX_REQUEST_SIZE: 5242880
      BITNAMI_DEBUG: "true"
      KAFKA_CFG_DELETE_TOPIC_ENABLE: "true"

  # create kafka raw topic
  init-kafka:
    image: bitnami/kafka:latest
    networks:
      - kafka-network
    depends_on:
      kafka:
        condition: service_started
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics.sh --bootstrap-server kafka:9092 --list

      echo -e 'Creating kafka topics'
      kafka-topics.sh --bootstrap-server kafka:9092 --create --if-not-exists --topic test-topic --replication-factor 1 --partitions 1

      echo -e 'Successfully created the following topics:'
      kafka-topics.sh --bootstrap-server kafka:9092 --list
      "
</code></pre>
<p>test.yml</p>
<pre><code>name: Testing

on: [push]

jobs:
  integration-tests:
    runs-on: [self-hosted, Linux, docker, ubuntu]
    container:
      image: app-image:latest
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: "0"
      - name: Install missing system dependencies
        run: apt-get update -qq && apt-get install -y make
      - name: Log in
        uses: docker/login-action@v1
      - name: Build the stack
        run: docker-compose up -d
      - name: Run Integration Test
        run: make integration-test
</code></pre>
<p>test_kafka.py</p>
<pre><code>df.write.format("kafka") \
    .option("kafka.bootstrap.servers", "kafka:9092") \
    .option("kafka.security.protocol", "PLAINTEXT") \
    .option("topic", "test-topic") \
    .save()
</code></pre>
<p>In the Makefile I am running this command to run the test case:</p>
<pre><code>pytest tests/integration/test_kafka.py
</code></pre>
<p>This setup works fine locally, but the GitHub workflow fails and gives the error <code>org.apache.kafka.common.KafkaException: Failed to construct kafka producer</code>.</p>
<p>I also changed to <code>.option("kafka.bootstrap.servers", "localhost:9093")</code>, and
CI throws an error:</p>
<pre><code>WARN NetworkClient:[Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9093) could not be established. Broker may not be available.
WARN NetworkClient: [Producer clientId=producer-1] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected
org.apache.kafka.common.errors.TimeoutException: Topic test-topic not present in metadata after 60000 ms.
</code></pre>
<p>I want to run the tests in the CI pipeline, inside a container. The Kafka server is running and the Kafka topic is also created in the GitHub workflow, but connecting through Python fails. Can someone please help me, or am I missing something here?</p>
<p>Thanks in advance!</p>
|
<python><pyspark><apache-kafka><github-actions>
|
2023-05-28 22:47:04
| 2
| 683
|
Panda
|
76,353,680
| 4,966,945
|
Read Parquet files without reading into memory (using Python) from URL
|
<p>I am trying to read the NYC data set which is stored and publicly available <a href="https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page" rel="nofollow noreferrer">here</a>. I extracted the underlying location of the parquet file for 2022 as "https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2019-01.parquet". Reading data from this URL with the <code>read_parquet</code> method is quite easy, but I cannot figure out how to read this data if it is too big to fit in memory. Unlike <code>read_csv</code>, it has no streaming option, and converting to <code>pyarrow.parquet.ParquetFile</code> to use its <code>iter_batches</code> functionality does not seem to be an option since it cannot read from a URL.</p>
|
<python><download><stream><parquet><pyarrow>
|
2023-05-28 21:30:26
| 2
| 667
|
av abhishiek
|
76,353,462
| 14,852,106
|
Get value from rows not null XLSXWRITER (python)
|
<p>I want to merge column 2 (with "description") the same way as column 1 (with "name"): the same number of rows merged. But when using merge_range I get a null value in column 2.</p>
<p>Here is my code.</p>
<pre><code>my_list = [
    {'field1': [
        {'name': 'value1', 'description': ""},
        {'name': 'value1', 'description': "this is descr..."},
        {'name': 'value1', 'description': ""},
    ]},
]

row = 206
for record in my_list:
    row_mem = row
    for rec in record['field1']:
        sheet.write(row, 2, rec['description'])
        row += 1
    sheet.merge_range(row_mem, 1, row - 1, 1, rec['name'])
</code></pre>
</code></pre>
<p>Result:
<a href="https://i.sstatic.net/8744K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8744K.png" alt="enter image description here" /></a></p>
<p>when replacing <code>sheet.write(row, 2, rec['description'])</code>by</p>
<pre><code>sheet.merge_range(row_mem, 2, row - 1, 2, rec['description'])
</code></pre>
<p>I got this result :
<a href="https://i.sstatic.net/UjjrV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UjjrV.png" alt="enter image description here" /></a></p>
<p>But I want to display it like this:
<a href="https://i.sstatic.net/5MlM4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5MlM4.png" alt="enter image description here" /></a></p>
<p>How can I merge column 2 and keep the non-null value when merging it?</p>
<p>Any help please ? Thanks.</p>
|
<python><excel><xlsxwriter>
|
2023-05-28 20:23:19
| 1
| 633
|
K.ju
|
76,353,461
| 5,210,803
|
ANTLR4 python target not working with some special rule names
|
<p>For example, if my grammar contains a rule named <code>state</code>:</p>
<pre><code>// Grammar.g4
grammar Grammar;
program : state* EOF ;
state: 'break;' | 'continue;' | 'return;' ;
</code></pre>
<p>The <code>GrammarParser.py</code> generated by <code>antlr4 -Dlanguage=Python3 Grammar.g4</code> will raise an exception:</p>
<pre><code>Traceback (most recent call last):
...
File ".../GrammarParser.py", line 96, in program
self.state()
TypeError: 'int' object is not callable
</code></pre>
<p>This is because each ANTLR rule turns into a Python method. The name <code>state</code> conflicts with an existing field/method used by ANTLR runtime internally:</p>
<pre><code>class GrammarParser ( Parser ):

    def program(self):
        ...
        self.state = 4
        self.state()
        self.state = 9
        ...
</code></pre>
<p>Other rule names like <code>match</code>, <code>atn</code> or <code>consume</code> also lead to various runtime exceptions:</p>
<pre><code># match:
TypeError: GrammarParser.match() takes 1 positional argument but 2 were given
# atn:
AttributeError: 'function' object has no attribute 'states'
# consume:
RecursionError: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>Although this can be easily fixed by renaming the rule, the exception looks confusing especially to beginners, and it can take quite a long time to realize the cause for a large grammar file. I believe ANTLR4 should give a warning or rename the rule (<a href="https://github.com/antlr/antlr4/blob/873a01ca2939e623f396d356f9e903b9afd4ca32/tool/src/org/antlr/v4/codegen/target/Python3Target.java#L15" rel="nofollow noreferrer">as it already does for keywords</a>) if a name conflict is detected.</p>
<p>Since ANTLR4's GitHub requires asking on StackOverflow before opening an issue, do you believe this is an ANTLR bug?</p>
|
<python><antlr4>
|
2023-05-28 20:22:50
| 0
| 3,852
|
xmcp
|
76,353,403
| 12,436,050
|
Adding progress monitor while inserting data to a Oracle db using Python
|
<p>I would like to add a progress bar to my Python script. I am running the following lines of code, reading a file 'MRSTY.RRF' which is 0.195 MB, and I would like to know how much of the file has been processed.</p>
<pre><code>insert_sty = "insert into MRSTY (CUI,TUI,STN,STY,ATUI,CVF) values (:0,:1,:2,:3,:4,:5)"
records = []

with open("../umls_files/umls-2023AA-metathesaurus-full/2023AA/META/MRSTY.RRF", 'r') as f:
    for line in f:
        line = line.strip()
        records.append(line.split("|"))

for sublist in records:  # remove '' at the end of every element
    if sublist:
        sublist.pop()

for i in records:
    try:
        cur.execute(insert_sty, i)
        print("record inserted")
    except Exception as e:
        print(i)
        print("Error: ", str(e))

conn.commit()
</code></pre>
</code></pre>
<p>How can I achieve this?</p>
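The usual answer is `tqdm` (wrap the file iterator, as the related question's code does). A standard-library alternative: compare `f.tell()` against the file size while reading, printing one in-place status line. A sketch with a small throwaway file standing in for MRSTY.RRF (the content here is hypothetical):

```python
import os
import tempfile

# A small throwaway file stands in for MRSTY.RRF (hypothetical content).
path = os.path.join(tempfile.mkdtemp(), "MRSTY.RRF")
with open(path, "w") as f:
    for i in range(1000):
        f.write(f"C{i}|T{i}|x|y|z|w|\n")

total = os.path.getsize(path)
records = []
with open(path, "rb") as f:   # binary mode so f.tell() is exact per line
    while True:
        line = f.readline()
        if not line:
            break
        records.append(line.decode().strip().split("|")[:-1])
        pct = 100 * f.tell() / total
        # \r rewrites one status line in place instead of scrolling.
        print(f"\rProcessed {pct:5.1f}% of the file", end="")

print(f"\n{len(records)} records read")
```

Binary mode matters: in text mode, calling `tell()` while iterating a file raises an `OSError`, which is why the sketch reads with `readline()` on a binary handle and decodes each line itself.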
|
<python><oracle-database>
|
2023-05-28 20:08:05
| 1
| 1,495
|
rshar
|
76,353,265
| 9,261,745
|
sqlalchemy call mysql procedure to read the multiple results
|
<p>I am trying to get multiple result sets from a procedure in MySQL by using SQLAlchemy in Python.</p>
<pre><code>async def create_user(db: AsyncSession, user: UserInfo):
    try:
        result_proxy = await db.execute(
            text("CALL create_user(:account)"),
            {
                "account": user.account
            }
        )
        logging.info("result_proxy: %s", result_proxy)

        # Fetch the first result set
        db_user = result_proxy.fetchone()
        logging.info("db_user: %s", db_user)

        # Move to the next result set
        if result_proxy.nextset():
            # Fetch the second result set
            db_user_setting = result_proxy.fetchone()
            logging.info("db_user_setting: %s", db_user_setting)
        else:
            db_user_setting = None
    except SQLAlchemyError as e:
        logging.error("Error creating user: %s", e)
        await db.rollback()
        raise HTTPException(status_code=500, detail="Database creating user failed.")

    return db_user, db_user_setting
</code></pre>
</code></pre>
<p>mysql procedure code is</p>
<pre><code>CREATE DEFINER=`test`@`%` PROCEDURE `data`.`create_user`(
    IN p_account VARCHAR(255)
)
BEGIN
    DECLARE v_user_id VARCHAR(255);

    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        -- Rollback the transaction in case of an error
        ROLLBACK;
    END;

    SET v_user_id = UUID_SHORT();

    START TRANSACTION;

    INSERT INTO data.user_info (
        account,
        user_id
    )
    VALUES(
        p_account,
        v_user_id
    );

    INSERT INTO data.user_setting (
        user_id,
        created_name
    )
    VALUES(
        v_user_id,
        'Javis'
    );

    COMMIT;

    -- Select the newly created records to return them as a result set
    SELECT * FROM data.user_info WHERE user_id = v_user_id;
    SELECT * FROM data.user_setting WHERE user_id = v_user_id;
END
</code></pre>
</code></pre>
<p>I don't know why I get <code>Error creating user: This result object does not return rows. It has been closed automatically.</code> When I run the procedure in a MySQL script, it creates the rows in the tables successfully.
Would you mind giving me some advice?</p>
<p>Best regards</p>
|
<python><mysql><stored-procedures><sqlalchemy>
|
2023-05-28 19:27:47
| 1
| 457
|
Youshikyou
|
76,353,116
| 2,131,907
|
Can I create a venv with already locally installed packages?
|
<p>I already have some locally installed Python packages in <code>~/.local</code>. Some of them are self-built packages. And now I want to create a new virtual environment, is there any way to create a virtual environment with those locally installed packages?</p>
<p><code>pip install -r requirements.txt</code> doesn't work for me since some of the packages are self-built. The build time is very long, hence I'd like to know if it's possible to create a virtual environment with those libraries.</p>
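One approach worth sketching: `venv`'s `--system-site-packages` flag makes packages already installed for the base interpreter visible inside the new environment (in most CPython setups this extends to `pip install --user` packages under `~/.local` as well, though that detail is worth verifying on your build). `/tmp/demo-venv` is just a throwaway path for the demo, and `--without-pip` only speeds it up:

```shell
# Create a venv that can see the base interpreter's installed packages.
python3 -m venv --system-site-packages --without-pip /tmp/demo-venv

# The setting is recorded in the venv's configuration file:
grep include-system-site-packages /tmp/demo-venv/pyvenv.cfg
```

Packages then installed *inside* the venv still shadow the shared ones, so the self-built libraries stay untouched while the venv remains independently modifiable. Flipping `include-system-site-packages` in `pyvenv.cfg` toggles the behaviour on an existing venv.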
|
<python><python-venv>
|
2023-05-28 18:52:38
| 1
| 364
|
user2131907
|
76,352,883
| 5,822,440
|
How to use snowball (.sbl) file in Python
|
<p>I am a first-year Master's student currently learning NLP with Python. My professor has assigned me a mini project that involves implementing Porter Stemmer in Snowball Language. However, I am a bit confused and would appreciate some assistance from anyone experienced with Snowball (.sbl).</p>
<p>Specifically, I would like to know how to compile these files. If anyone could help me, I would be grateful.</p>
<p>Thank you in advance.</p>
|
<python><nltk><snowball>
|
2023-05-28 18:01:26
| 0
| 411
|
Fatihi Youssef
|
76,352,834
| 7,895,542
|
Polars: Pad list columns to specific size
|
<p>I think I ran into the XY problem...</p>
<p>Here is what I actually want to do:</p>
<p>To be exact, I have a dataframe like:</p>
<pre><code>shape: (3, 3)
┌───────────┬───────┬──────────────────────────┐
│ nrs ┆ stuff ┆ more_stuff │
│ --- ┆ --- ┆ --- │
│ list[i64] ┆ i64 ┆ list[list[i64]] │
╞═══════════╪═══════╪══════════════════════════╡
│ [1, 2, 3] ┆ 1 ┆ [[1, 1], [2, 2], [3, 3]] │
│ [2, 4] ┆ 2 ┆ [[4, 4], [5, 5]] │
│ [1] ┆ 3 ┆ [[6, 6]] │
└───────────┴───────┴──────────────────────────┘
</code></pre>
<p>With normal int64 columns, list[int64] columns and one list[list[i64]] column.
I want to be able to specify a size and set the length of all the list columns (including the nested one) to that size: either by shortening them, or by padding them with their last value (<code>list[-1]</code> for normal Python lists), for both the nested and the flat lists. The non-list columns should be left unchanged.</p>
<p>So the result for N=2 for the above dataframe should be:</p>
<pre><code>shape: (3, 3)
┌───────────┬───────┬──────────────────┐
│ nrs ┆ stuff ┆ more_stuff │
│ --- ┆ --- ┆ --- │
│ list[i64] ┆ i64 ┆ list[list[i64]] │
╞═══════════╪═══════╪══════════════════╡
│ [1, 2] ┆ 1 ┆ [[1, 1], [2, 2]] │
│ [2, 4] ┆ 2 ┆ [[4, 4], [5, 5]] │
│ [1, 1] ┆ 3 ┆ [[6, 6], [6, 6]] │
└───────────┴───────┴──────────────────┘
</code></pre>
|
<python><python-polars>
|
2023-05-28 17:51:47
| 2
| 360
|
J.N.
|
76,352,655
| 325,809
|
Generate data from the results of .describe() in pandas
|
<p>I don't have access to the actual dataset, but I do have access to the results of the dataframe's <code>.describe()</code>. I need to construct some data that has similar statistics. Is there a way to generate random data that matches those statistics? I.e. I would ideally like a <code>data_generator(pd.DataFrame) -> pd.DataFrame</code> such that</p>
<pre><code>data_generator(df.describe()).describe() == df.describe()
</code></pre>
<p>I understand that it won't capture the dependencies between columns but I will work with what I can get.</p>
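<p>(As a starting point: the mean/std part can be matched exactly by shifting and rescaling a random sample — a sketch that ignores min/max/quartiles, using <code>ddof=1</code> since pandas' <code>describe()</code> reports the sample standard deviation:)</p>

```python
import numpy as np

def match_mean_std(n, target_mean, target_std, seed=0):
    # Draw random data, normalise it to sample mean 0 / sample std 1,
    # then rescale so the sample mean and std match the targets exactly.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x = (x - x.mean()) / x.std(ddof=1)
    return x * target_std + target_mean
```

Matching min/max/quartiles on top of this would need something more elaborate, e.g. fitting a distribution or interpolating between the reported quantiles.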
|
<python><pandas>
|
2023-05-28 17:13:18
| 1
| 6,926
|
fakedrake
|
76,352,457
| 5,623,899
|
How to convert a conda env yaml file to a list of requirements for a settings.ini file accounting for channels and conversions for pypi
|
<p><strong>NOTE</strong>: you can start at <a href="https://stackoverflow.com/questions/76352457/how-to-convert-a-conda-env-yaml-file-to-a-list-of-requirements-for-a-settings-in/76352458/#problem-formulation">Problem formulation</a>. The motivation section just explains how I found myself asking this.</p>
<h1>Motivation</h1>
<p>I use and love <a href="https://nbdev.fast.ai/tutorials/tutorial.html" rel="nofollow noreferrer">nbdev</a>. It makes developing a python package iteratively as easy as it gets. Especially, as I generally do this along side a project which uses said package. Does this question require knowledge of <a href="https://nbdev.fast.ai/tutorials/tutorial.html" rel="nofollow noreferrer">nbdev</a>? <strong>no</strong>. It only motivates why I am asking. Normally when I create a new <a href="https://nbdev.fast.ai/tutorials/tutorial.html" rel="nofollow noreferrer">nbdev</a> project (<code>nbdev_new</code>), I get a <code>settings.ini</code> file and <code>setup.py</code> file. In order to keep working on different packages / projects simple, I will immediately create a <a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file" rel="nofollow noreferrer">conda environment file</a> <code>env.yml</code> for the project (see <code>Files > Example Conda File</code> <a href="https://stackoverflow.com/questions/76352457/how-to-convert-a-conda-env-yaml-file-to-a-list-of-requirements-for-a-settings-in/76352458/#example-conda-file">maybe this link works</a>).</p>
<p>I know that the environment one develops a package in does <strong>NOT</strong> necessarily reflect the minimum dependencies it requires. Further, a package's dependencies are a subset of those one may <em>need</em> for working on a project utilizing said package. In <strong>MY</strong> use case it is clear that I am double-dipping! I am <em>developing</em> the package as I <em>use</em> it on a project.</p>
<p>So for the sake of this question let's assume that the <code>package dependencies == project dependencies</code>. In other words, the <code>env.yml</code> file contains all of the <code>requirements</code> for the <code>setting.ini</code> file.</p>
<h2>nbdev workflow</h2>
<ol>
<li><p>make new empty repo "current_project" and clone it</p>
</li>
<li><p><code>cd path/to/current_project</code></p>
</li>
<li><p><code>nbdev_new</code></p>
</li>
<li><p>make <code>env.yml</code> file</p>
</li>
<li><p>create / update environment:</p>
</li>
</ol>
<pre class="lang-bash prettyprint-override"><code># create conda environment
$ mamba env create -f env.yml
# update conda environment as needed
$ mamba env update -n current_project --file env.yml
# $ mamba env update -n current_project --file env.mac.yml
</code></pre>
<ol start="6">
<li>active environment:</li>
</ol>
<pre class="lang-bash prettyprint-override"><code># activate conda environment
$ conda activate current_project
</code></pre>
<ol start="7">
<li>install <code>current_project</code>:</li>
</ol>
<pre class="lang-bash prettyprint-override"><code># install for local development
$ pip install -e .
</code></pre>
<h1>Problem formulation</h1>
<p>I am developing a package in python using a <code>setup.py</code> file. My package may have requirements (listed under <code>settings.ini</code> with the key <code>requirements</code>) that get automatically imported and used in the <code>setup.py</code> file. While developing my package I have a conda environment which is specified in a yaml file <code>env.yml</code> (see <code>Files > Example Conda File</code> <a href="https://stackoverflow.com/questions/76352457/how-to-convert-a-conda-env-yaml-file-to-a-list-of-requirements-for-a-settings-in/76352458/#example-conda-file">maybe this link works</a>).</p>
<p>I also have some GitHub actions that test my package. I dislike having to update <code>settings.ini</code> manually (especially since it doesn't allow for multiple lines) to get the requirements into <code>setup.py</code>. <strong>Especially</strong> as I have already listed them out nice and neatly in my <code>env.yml</code> file. So my question is as follows:</p>
<h2>Question</h2>
<blockquote>
<p>Given a conda environment yaml file (e.g. <code>env.yml</code>) how can one iterate through its content and convert the dependencies (and the versions) to the correct <code>pypi</code> version (required by <code>setup.py</code>), storing them in <code>settings.ini</code> under the keyword <code>requirements</code>?</p>
</blockquote>
<p><strong>caveats</strong>:</p>
<ul>
<li>version specifier requirements in conda are <em>not</em> the same as <code>pypi</code>. Most notably <code>=</code> vs <code>==</code>, amongst others.</li>
<li>package names for conda may not be the same for <code>pypi</code>. For example <a href="https://pytorch.org" rel="nofollow noreferrer"><code>pytorch</code></a> is listed as <a href="https://pypi.org/project/torch/" rel="nofollow noreferrer"><code>torch</code></a> for <code>pypi</code> and <a href="https://anaconda.org/pytorch/pytorch" rel="nofollow noreferrer"><code>pytorch</code></a> for conda.</li>
<li>the environment yaml file may have channel specifiers e.g. <code>conda-forge::<package-name></code></li>
<li>the environment yaml file may specify the python version e.g. <code>python>=3.10</code>, which shouldn't be a requirement.</li>
<li><strong>MY</strong> ideal solution works with my workflow. That means the contents of <code>env.yml</code> need to get transferred to <code>settings.ini</code>.</li>
</ul>
<h1>Desired outcome</h1>
<p>My desired outcome is that I can store all of my package requirements in the conda environment file <code>env.yml</code> and have them automatically find themselves in the <code>setup.py</code> file under <code>install_requires</code>. Since my workflow is built around reading the requirements in from a <code>settings.ini</code> file (from <a href="https://nbdev.fast.ai/tutorials/tutorial.html" rel="nofollow noreferrer">nbdev</a>), I would like the solution to take the values of <code>env.yml</code> and put them in <code>settings.ini</code>.</p>
<p><strong>Note</strong> I am sharing my <a href="https://stackoverflow.com/a/76352458/5623899">current solution</a> as an answer below. Please help / improve it!</p>
<h1>Files</h1>
<h2>Example conda file</h2>
<pre class="lang-yaml prettyprint-override"><code># EXAMPLE YAML FILE
name: current_project
channels:
- pytorch
- conda-forge
- fastai
dependencies:
- python>=3.10
# Utilities
# -------------------------------------------------------------------------
- tqdm
- rich
- typer
# Jupyter Notebook
# -------------------------------------------------------------------------
- conda-forge::notebook
- conda-forge::ipykernel
- conda-forge::ipywidgets
- conda-forge::jupyter_contrib_nbextensions
# nbdev
# -------------------------------------------------------------------------
- fastai::nbdev>=2.3.12
# PyTorch & Deep Learning
# -------------------------------------------------------------------------
- pytorch>=2
# NOTE: add pytorch-cuda if using a CUDA enabled GPU. You will need to
# remove this if you are on Apple Silicon
# - pytorch::pytorch-cuda
- conda-forge::pytorch-lightning
# Plotting
# -------------------------------------------------------------------------
- conda-forge::matplotlib
- conda-forge::seaborn
# Data Wrangling
# -------------------------------------------------------------------------
- conda-forge::scikit-learn
- pandas>=2
- numpy
- scipy
# Pip / non-conda packages
# -------------------------------------------------------------------------
- pip
- pip:
# PyTorch & Deep Learning
# -----------------------------------------------------------------------
- dgl
</code></pre>
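<p>(To make the caveats concrete, here is a rough sketch of the conversion logic — the <code>CONDA_TO_PYPI</code> mapping is a stub that would need to be extended by hand, and the regexes only cover the common specifier shapes:)</p>

```python
import re

CONDA_TO_PYPI = {"pytorch": "torch"}  # stub: extend with other renamed packages

def conda_deps_to_pip(deps):
    """Convert a conda `dependencies:` list to pip-style requirement strings."""
    reqs = []
    for dep in deps:
        if isinstance(dep, dict):            # the nested "pip:" sub-list
            reqs.extend(dep.get("pip", []))
            continue
        dep = dep.split("::")[-1]            # drop channel prefix, e.g. conda-forge::
        name, spec = re.match(r"([A-Za-z0-9_.-]+)(.*)", dep).groups()
        if name in ("python", "pip"):        # not package requirements
            continue
        name = CONDA_TO_PYPI.get(name, name)
        # conda's single '=' pin becomes pip's '=='
        spec = re.sub(r"(?<![<>=!])=(?!=)", "==", spec)
        reqs.append(name + spec)
    return reqs
```

The result list could then be joined with spaces and written under <code>requirements</code> in <code>settings.ini</code>.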
|
<python><pip><conda><nbdev>
|
2023-05-28 16:22:43
| 1
| 5,218
|
SumNeuron
|
76,352,445
| 7,895,542
|
Polars how to turn column of type list[list[...]] into numpy ndarray
|
<p>I know I can turn a normal polars series into a numpy array via <code>.to_numpy()</code>.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
s = pl.Series("a", [1,2,3])
s.to_numpy()
# array([1, 2, 3])
</code></pre>
<p>However, that does not work with a list type. What would be the way to turn such a construct into a 2-D array?</p>
<p>And more generally, is there a way to turn a series of list[list[whatever]] into a 3-D array, and so on?</p>
<pre class="lang-py prettyprint-override"><code>s = pl.Series("a", [[1,1,1],[1,2,3],[1,0,1]])
s.to_numpy()
# exceptions.ComputeError: 'to_numpy' not supported for dtype: List(Int64)
</code></pre>
<p>Desired output would be:</p>
<pre><code>array([[1, 1, 1],
[1, 2, 3],
[1, 0, 1]])
</code></pre>
<p>Or one step further</p>
<pre class="lang-py prettyprint-override"><code>s = pl.Series("a", [[[1,1],[1,2]],[[1,1],[1,1]]])
s.to_numpy()
# exceptions.ComputeError: 'to_numpy' not supported for dtype: List(Int64)
</code></pre>
<p>Desired output would be:</p>
<pre><code>array([[[1, 1],
[1, 2]],
[[1, 1],
[1, 1]]])
</code></pre>
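<p>(One workaround, assuming the inner lists are all the same length: materialise the series as Python lists and let numpy stack them — i.e. <code>np.array(s.to_list())</code> in polars terms:)</p>

```python
import numpy as np

# Rectangular lists of lists stack into a 2-D array...
rows = [[1, 1, 1], [1, 2, 3], [1, 0, 1]]          # e.g. s.to_list()
arr2d = np.array(rows)

# ...and one level of extra nesting gives a 3-D array.
nested = [[[1, 1], [1, 2]], [[1, 1], [1, 1]]]
arr3d = np.array(nested)
```

Ragged (unequal-length) lists would not stack this way; numpy needs a rectangular shape per dimension.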
|
<python><numpy><python-polars>
|
2023-05-28 16:19:36
| 1
| 360
|
J.N.
|
76,352,280
| 13,709,317
|
Can a python function be both a generator and a "non-generator"?
|
<p>I have a function which I want to yield bytes from (generator behaviour) and also write to a file (non-generator behaviour) depending on whether the <code>save</code> boolean is set. Is that possible?</p>
<pre class="lang-py prettyprint-override"><code>def encode_file(source, save=False, destination=None):
# encode the contents of an input file 3 bytes at a time
print('hello')
with open(source, 'rb') as infile:
# save bytes to destination file
if save:
print(f'saving to file {destination}')
with open(destination, 'wb') as outfile:
while (bytes_to_encode := infile.read(3)):
l = len(bytes_to_encode)
if l < 3:
bytes_to_encode += (b'\x00' * (3 - l))
outfile.write(bytes_to_encode)
return
# yield bytes to caller
else:
while (bytes_to_encode := infile.read(3)):
l = len(bytes_to_encode)
if l < 3:
bytes_to_encode += (b'\x00' * (3 - l)) # pad bits if short
yield encode(bytes_to_encode)
return
</code></pre>
<p>In the above implementation, the function always behaves as a generator. When I call</p>
<pre class="lang-py prettyprint-override"><code>encode_file('file.bin', save=True, destination='output.base64')
</code></pre>
<p>it does not print "hello"; instead, it returns a generator object. This does not make sense to me. Shouldn't "hello" be printed and then shouldn't control be directed to the <code>if save:</code> portion of the code, thus avoiding the part of the function that yields completely?</p>
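<p>(Because the body contains a <code>yield</code> anywhere, calling the function only builds a generator and runs none of the body. One common workaround is to keep the <code>yield</code> in a private generator and make the public function a plain function that delegates — a sketch, dropping the <code>encode()</code> step for brevity:)</p>

```python
def _chunks(source):
    # Generator: yields 3-byte chunks, zero-padded at the end.
    with open(source, "rb") as infile:
        while (chunk := infile.read(3)):
            yield chunk.ljust(3, b"\x00")

def encode_file(source, save=False, destination=None):
    # Plain function (no yield in its own body), so calling it runs immediately.
    if save:
        with open(destination, "wb") as outfile:
            for chunk in _chunks(source):
                outfile.write(chunk)
        return None
    return _chunks(source)
```

Now <code>encode_file(..., save=True, ...)</code> writes the file eagerly, while <code>encode_file(...)</code> returns an iterator of chunks.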
|
<python><generator>
|
2023-05-28 15:42:02
| 2
| 801
|
First User
|
76,352,244
| 3,610,891
|
How to create diff environments for different Python packages in production
|
<p>I am new to Python as well as Azure, so I might be missing some crucial information with my design.
We have a couple of libraries which were created to be used by users working in their Azure ML workspace. This question remains the same if we are creating libraries to be used in a simple Jupyter notebook.</p>
<p>Now, both of these libraries have different packages which could differ from what a user is using. E.g. a user might be in an environment using numpy x.1, but package A might be using x.2 and package B needs x.3. This is a possibility since all these packages are developed by different teams.</p>
<p>Now, what could be the best way to handle this problem in the real world? So far, I have been able to come up with the approaches below:</p>
<ol>
<li>Install these libraries in different Docker containers where the needed packages are installed, and get the desired output from the separate environments.</li>
<li>Use the custom environment options provided by Azure itself, and run the incompatible ones in different environments.</li>
<li>Not sure if we can create different virtual environments and run the packages in them, but something similar to this if possible.</li>
</ol>
<p>So, I wanted to know if there is any right way of doing this in the real world. I see that we should create a different environment for each project, but what about the case when we have different packages which need different versions of common dependencies? How to handle such a case?</p>
|
<python><azure><virtualenv><azure-machine-learning-service><azuremlsdk>
|
2023-05-28 15:32:52
| 1
| 2,115
|
Onki
|
76,352,198
| 11,080,806
|
How to compare two Python ASTs, ignoring arguments?
|
<p>I want to elegantly compare two Python expressions ignoring any differences in the arguments. For example comparing <code>plt.show()</code> and <code>plt.show(*some-args)</code> should return <code>True</code>.</p>
<p>I've tried parsing the expressions using <code>ast</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>import ast
node1 = ast.parse("plt.show()").body[0]
node2 = ast.parse("plt.show(*some-args)").body[0]
print(node1 == node2)
</code></pre>
<p>Returns: <code>False</code></p>
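<p>(One possible approach: strip the arguments off every <code>Call</code> node before comparing <code>ast.dump()</code> output — a sketch:)</p>

```python
import ast

def same_ignoring_args(src1, src2):
    # Blank out positional and keyword arguments on every Call node,
    # then compare the structural dumps of the two trees.
    def normalized(src):
        tree = ast.parse(src, mode="eval")
        for node in ast.walk(tree):
            if isinstance(node, ast.Call):
                node.args = []
                node.keywords = []
        return ast.dump(tree)
    return normalized(src1) == normalized(src2)
```

Comparing dumps sidesteps the fact that AST nodes have no structural <code>__eq__</code>, which is why <code>node1 == node2</code> above is always <code>False</code> for distinct objects.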
|
<python><abstract-syntax-tree>
|
2023-05-28 15:23:10
| 1
| 568
|
Jonathan Biemond
|
76,352,175
| 18,018,869
|
Separate form fields to "parts"; render part with loop, render part with specific design, render part with loop again
|
<p>I want to render part of the form via a loop in the template and another part with a specific "design".</p>
<pre class="lang-py prettyprint-override"><code># forms.py
class HappyIndexForm(forms.Form):
pizza_eaten = forms.IntegerField(label="Pizzas eaten")
# 5 more fields
minutes_outside = forms.IntegerField(label="Minutes outside")
class TherapyNeededForm(HappyIndexForm):
had_therapy_before = forms.BooleanField()
# about 20 more fields
class CanMotivateOthers(HappyIndexForm):
has_hobbies = forms.BooleanField()
# about 20 more fields
</code></pre>
<p>The only purpose of <code>HappyIndexForm</code> is to pass the 7 fields to other forms that need to work with the "HappyIndex" (it is a made-up example).</p>
<p>I have designed a very nice and complex template for the <code>HappyIndexForm</code> fields. The fields of <code>TherapyNeededForm</code> exclusive of the <code>HappyIndexForm</code> fields I want to simply loop over.</p>
<pre class="lang-html prettyprint-override"><code><!-- template.html -->
{% for field in first_ten_fields_therapy_needed %}
{{ field }}
{% endfor %}
{% include "happyindexform.html" %}
{% for field in second_ten_fields_therapy_needed %}
{{ field }}
{% endfor %}
</code></pre>
<h5>Task:</h5>
<p>Loop over first ten fields of <code>TherapyNeededForm</code>, then display the specific template I wrote for the fields coming from <code>HappyIndexForm</code> called 'happyindexform.html', then loop over second ten fields of <code>TherapyNeededForm</code>.</p>
<h5>My problem:</h5>
<p>If I loop over the fields and include my 'happyindexform.html', the fields coming from inheritance get displayed twice. I could, of course, also write the specific template for all 20 fields of <code>TherapyNeededForm</code>, but that is very repetitive and not in any way aligned with DRY.</p>
<p>Note: I am using Django's class-based views. That's why I feel using 2 forms would be more stressful than the inheritance approach.</p>
<p>I feel like I am somehow missing how I can divide my form into blocks and ship them to the template separately.</p>
|
<python><django><django-forms><django-templates>
|
2023-05-28 15:16:30
| 0
| 1,976
|
Tarquinius
|
76,352,024
| 6,357,916
|
Escaping % in django sql query gives list out of range
|
<p>I tried running following SQL query in pgadmin and it worked:</p>
<pre><code> SELECT <columns>
FROM <tables>
WHERE
date_visited >= '2023-05-26 07:05:00'::timestamp
AND date_visited <= '2023-05-26 07:07:00'::timestamp
AND url LIKE '%/mymodule/api/myurl/%';
</code></pre>
<p>I wanted to call the same url in django rest endpoint. So, I wrote code as follows:</p>
<pre><code>with connection.cursor() as cursor:
cursor.execute('''
SELECT <columns>
FROM <tables>
WHERE
date_visited >= '%s'::timestamp
AND date_visited <= '%s'::timestamp
AND url LIKE '%%%s%%';
''', [from_date, to_date, url])
</code></pre>
<p>But it is giving me a list index out of range error. I guess I have made a mistake with <code>'%%%s%%'</code>. I tried to escape <code>%</code> in the original query with <code>%%</code>, but it does not seem to work. What's going wrong here?</p>
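<p>(For illustration, the usual DB-API pattern is to leave placeholders unquoted and put the <code>LIKE</code> wildcards inside the parameter value — sketched here with the stdlib <code>sqlite3</code>, which uses <code>?</code> placeholders where Django's MySQL/Postgres drivers use unquoted <code>%s</code>:)</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (url TEXT)")
conn.execute("INSERT INTO visits VALUES ('/mymodule/api/myurl/x')")

path = "/mymodule/api/myurl/"
# Wildcards live in the bound value, not in the SQL text, so nothing
# needs escaping and the placeholder count matches the parameter list.
rows = conn.execute(
    "SELECT url FROM visits WHERE url LIKE ?",
    ["%" + path + "%"],
).fetchall()
```

The same idea in Django would be <code>url LIKE %s</code> in the SQL with <code>'%' + url + '%'</code> appended to the parameter list.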
|
<python><sql><django><django-rest-framework>
|
2023-05-28 14:40:39
| 2
| 3,029
|
MsA
|
76,351,958
| 2,715,498
|
Superclass property setting using super() and multiple inheritance
|
<p>In a real-world program I have run into the following problem: I have diamond inheritance with SuperClass, MidClassA, MidClassB and SubClass. SuperClass has a property (a filename, actually) that is used by its successors, but in different ways (with or without extension). At all levels I want to set this property via a setter. Here is some example code:</p>
<pre><code>class SuperClass:
def __init__(self):
print("SuperClass __init__")
super().__init__()
@property
def value(self):
return self._value
@value.setter
def value(self, new_value):
self._value = new_value
print("Superclass setter called!")
class MidClassA(SuperClass):
def __init__(self):
print("MidClassA __init__")
super().__init__()
@SuperClass.value.setter
def value(self, new_value):
super(MidClassA, MidClassA).value.__set__(self, new_value)
print("MidClassA setter called!", MidClassA.__mro__)
class MidClassB(SuperClass):
def __init__(self):
print("MidClassB __init__")
super().__init__()
@SuperClass.value.setter
def value(self, new_value):
super(MidClassB, MidClassB).value.__set__(self, new_value)
print("MidClassB setter called!", MidClassB.__mro__)
class SubClass(MidClassB, MidClassA):
def __init__(self):
print("SubClass __init__")
super().__init__()
@SuperClass.value.setter
def value(self, new_value):
super(SubClass, SubClass).value.__set__(self, new_value)
print("Subclass setter called!", SubClass.__mro__)
obj = SubClass()
obj.value = 42
print(obj.value)
</code></pre>
<p>And an output:</p>
<pre><code>SubClass __init__
MidClassB __init__
MidClassA __init__
SuperClass __init__
Superclass setter called!
MidClassB setter called! (<class '__main__.MidClassB'>, <class '__main__.SuperClass'>, <class 'object'>)
Subclass setter called! (<class '__main__.SubClass'>, <class '__main__.MidClassB'>, <class '__main__.MidClassA'>, <class '__main__.SuperClass'>, <class 'object'>)
42
</code></pre>
<p>Questions:</p>
<ul>
<li>As the output displays, the MRO of the <code>__init__</code> calls works well (SubClass-MidClassB-MidClassA-SuperClass-object), but during the property setting MidClassA is skipped.</li>
<li>How to get rid of the <code>super(MidClassB, MidClassB)</code>?</li>
<li>Is it possible to get rid of the <code>__set__</code> method?</li>
</ul>
<p>An ideal solution would be something like <code>super().value = new_value</code>, but all these don't work.</p>
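<p>(Regarding the skipped MidClassA: <code>super(MidClassB, MidClassB)</code> walks MidClassB's own static MRO, which does not contain MidClassA. One way to make the chain follow the instance's MRO is to pass <code>type(self)</code> as the second argument — a reduced sketch with renamed classes:)</p>

```python
class Base:
    calls = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        self.calls.append("Base")

class A(Base):
    @Base.value.setter
    def value(self, v):
        # type(self) makes super() walk the *instance's* MRO
        super(A, type(self)).value.__set__(self, v)
        self.calls.append("A")

class B(Base):
    @Base.value.setter
    def value(self, v):
        super(B, type(self)).value.__set__(self, v)
        self.calls.append("B")

class C(B, A):
    @Base.value.setter
    def value(self, v):
        super(C, type(self)).value.__set__(self, v)
        self.calls.append("C")
```

With <code>type(self)</code>, setting <code>c.value</code> visits every setter along <code>C.__mro__</code> (Base, A, B, C append in that order) instead of skipping A. The <code>__set__</code> call itself seems hard to avoid, since <code>super().value = v</code> is not supported.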
|
<python><properties><method-resolution-order>
|
2023-05-28 14:22:32
| 2
| 3,372
|
Gyula Sámuel Karli
|
76,351,947
| 7,895,542
|
Polars convert string of digits to list
|
<p>So I have a polars column/series of strings of digits.</p>
<pre><code>s = pl.Series("a", ["111","123","101"])
s
shape: (3,)
Series: 'a' [str]
[
"111"
"123"
"101"
]
</code></pre>
<p>I would like to convert each string into a list of integers.
I have found a working solution, but I am not sure if it is optimal.</p>
<pre><code>s.str.split("").list.eval(pl.element().str.to_integer(base=10))
shape: (3,)
Series: 'a' [list[i32]]
[
[1, 1, 1]
[1, 2, 3]
[1, 0, 1]
]
</code></pre>
<p>This seems to be working, but I'd like to know if there are better ways to do this or any of the individual steps.</p>
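<p>(As a cross-check, the plain-Python equivalent of the transformation is just a comprehension, which could also be applied per element via <code>map_elements</code>/<code>apply</code> depending on the polars version:)</p>

```python
def digits(s):
    # "123" -> [1, 2, 3]
    return [int(ch) for ch in s]

result = [digits(x) for x in ["111", "123", "101"]]
```
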
|
<python><python-polars>
|
2023-05-28 14:20:04
| 1
| 360
|
J.N.
|
76,351,894
| 11,584,730
|
jinja2 unpack unknown number of values
|
<p>I'm wondering how to unpack a tuple with an unknown number of variables in Jinja2. Specifically, I'm looking for an equivalent syntax or approach in Jinja2 that achieves the same effect as the following Python code:</p>
<pre><code>a,*_ = d,f,g,e
</code></pre>
<p>In the Python code above, the <code>*_</code> syntax allows me to discard the remaining variables in the tuple. Is there a similar construct or technique in Jinja2 that allows for this type of unpacking and discarding?</p>
<p>Thanks</p>
|
<python><flask><jinja2>
|
2023-05-28 14:08:08
| 1
| 620
|
Itay Lev
|
76,351,798
| 3,416,774
|
Variables and their values are stored externally in a YAML file. How to read them as if I declare them internally?
|
<p>Instead of having to declare the variables and their values in the script, I would like to have them declared externally in a separate YAML file called <code>settings.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>setting1: cat
setting2: dog
</code></pre>
<p>Is there a way to use the variables' names and values directly, as if I had declared them internally? E.g. running <code>print(setting1, setting2)</code> prints <code>cat dog</code>. So far I can only read it:</p>
<pre class="lang-py prettyprint-override"><code>import yaml
with open("settings.yml", "r") as stream:
try:
data = yaml.safe_load(stream)
for key, value in data.items():
print(f"{key}: {value}")
except yaml.YAMLError as exc:
print(exc)
print(setting1, setting2)
</code></pre>
<p>The <code>print(setting1, setting2)</code> doesn't work. I took a look at the <a href="https://pyyaml.org/wiki/PyYAMLDocumentation" rel="nofollow noreferrer">PyYAML documentation</a> but was unable to find anything.</p>
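<p>(For illustration, once <code>yaml.safe_load</code> has produced a dict — mocked below so the snippet is self-contained — the names can be promoted to variables with <code>globals().update()</code>, or kept namespaced, which is usually safer:)</p>

```python
from types import SimpleNamespace

data = {"setting1": "cat", "setting2": "dog"}   # what yaml.safe_load returns

# Safer: keep the settings behind an attribute namespace.
settings = SimpleNamespace(**data)
print(settings.setting1, settings.setting2)     # cat dog

# Direct (use with care: arbitrary YAML keys become module globals).
globals().update(data)
print(setting1, setting2)                       # cat dog
```

The <code>globals().update()</code> route makes <code>print(setting1, setting2)</code> work verbatim, at the cost of letting the YAML file shadow any existing name.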
|
<python><yaml><pyyaml>
|
2023-05-28 13:45:48
| 2
| 3,394
|
Ooker
|
76,351,690
| 1,239,299
|
Pycharm does not support Blenders Bpy package
|
<p>I'm trying to use Blender's bpy package in PyCharm.</p>
<p>I'm using the correct version of Python (3.10)</p>
<p>And have used pip to install the bpy package</p>
<p><a href="https://i.sstatic.net/y5OpR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y5OpR.png" alt="enter image description here" /></a></p>
<p>But PyCharm does not seem to recognise any subpackages of bpy</p>
<p><a href="https://i.sstatic.net/TAWIl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TAWIl.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Rjjnl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rjjnl.png" alt="enter image description here" /></a></p>
<p>As far as I can tell the virtual environment has the correct paths set up</p>
<p><a href="https://i.sstatic.net/BXOjK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BXOjK.png" alt="enter image description here" /></a></p>
<p>But PyCharm does not recognise the bpy folder under site-packages as a Python module</p>
<p><a href="https://i.sstatic.net/gpqPT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gpqPT.png" alt="enter image description here" /></a></p>
<p>Is there any way to fix this?</p>
|
<python><pycharm>
|
2023-05-28 13:15:24
| 0
| 817
|
user1239299
|
76,351,660
| 15,839,694
|
Endless trace messages when converting python kivy program to exe file with pyinstaller
|
<p>So I am trying to convert an example Kivy program into a standalone exe file from this documentation page, just as a bare example to investigate why my PyInstaller and Kivy do not work together:
<a href="https://kivy.org/doc/stable/tutorials/firstwidget.html" rel="nofollow noreferrer">https://kivy.org/doc/stable/tutorials/firstwidget.html</a></p>
<p>A .kv file is not needed for this particular implementation.</p>
<p>The code is in the file main2.py:</p>
<pre><code>from random import random
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
from kivy.graphics import Color, Ellipse, Line
class MyPaintWidget(Widget):
def on_touch_down(self, touch):
color = (random(), 1, 1)
with self.canvas:
Color(*color, mode='hsv')
d = 30.
Ellipse(pos=(touch.x - d / 2, touch.y - d / 2), size=(d, d))
touch.ud['line'] = Line(points=(touch.x, touch.y))
def on_touch_move(self, touch):
touch.ud['line'].points += [touch.x, touch.y]
class MyPaintApp(App):
def build(self):
parent = Widget()
self.painter = MyPaintWidget()
clearbtn = Button(text='Clear')
clearbtn.bind(on_release=self.clear_canvas)
parent.add_widget(self.painter)
parent.add_widget(clearbtn)
return parent
def clear_canvas(self, obj):
self.painter.canvas.clear()
if __name__ == '__main__':
MyPaintApp().run()
</code></pre>
<p>I used the command (tried with and without <code>--onefile</code>):</p>
<pre><code>pyinstaller --onefile -w main2.py
</code></pre>
<p>Everything goes well until PyInstaller outputs this endless list of trace messages that don't stop and keep printing in the terminal:</p>
<pre><code>349 TRACE: _safe_import_hook 'sys' Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) None 0
[TRACE ] [_safe_import_hook 'sys' Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) None 0
8353 TRACE: _import_hook 'sys' Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) 0
[TRACE ] [_import_hook 'sys' Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) 0
8359 TRACE: determine_parent Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy'])
[TRACE ] [determine_parent Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy'])
8362 TRACE: determine_parent -> Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy'])
[TRACE ] [determine_parent -> Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy'])
8363 TRACE: find_head_package Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) 'sys' 0
[TRACE ] [find_head_package Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) 'sys' 0
8367 TRACE: safe_import_module 'sys' 'sys' None
[TRACE ] safe_import_module 'sys' 'sys' None
8369 TRACE: safe_import_module -> BuiltinModule('sys',)
[TRACE ] safe_import_module -> BuiltinModule('sys',)
8372 TRACE: find_head_package -> (BuiltinModule('sys',), '')
[TRACE ] find_head_package -> (BuiltinModule('sys',), '')
8375 TRACE: load_tail BuiltinModule('sys',) ''
[TRACE ] load_tail BuiltinModule('sys',) ''
8376 TRACE: load_tail -> BuiltinModule('sys',)
[TRACE ] load_tail -> BuiltinModule('sys',)
8377 TRACE: createReference Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) BuiltinModule('sys',) DependencyInfo(conditional=False, function=False, tryexcept=False, fromlist=False)
[TRACE ] [createReference Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) BuiltinModule('sys',) DependencyInfo(conditional=False, function=False, tryexcept=False, fromlist=False)
8379 TRACE: _safe_import_hook 'shutil' Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) None 0
[TRACE ] [_safe_import_hook 'shutil' Package('kivy', 'c]\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) None 0
8381 TRACE: _import_hook 'shutil' Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) Package('kivy', 'c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy\\__init__.py', ['c:\\users\\coderalpha\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\kivy']) 0
</code></pre>
<p>The process generates this main2.spec file though:</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['main2.py'],
pathex=[],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
[],
exclude_binaries=True,
name='main2',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=False,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='main2',
)
</code></pre>
<p>and when I run the exe file that was created, it gives me this recursion error message (without opening any application window):</p>
<pre><code>Failed to execute script 'main2' due to unhandled exception: maximum recursion depth exceeded while calling a Python object
Traceback (most recent call last):
File "logging\__init__.py", line 1084, in emit
AttributeError: 'NoneType' object has no attribute 'write'
During handling of the above exception, another exception occurred:
... <same exception repeated many times> ...
</code></pre>
<p>I also tried to follow the .spec file changes given here:
<a href="https://www.youtube.com/watch?v=NEko7jWYKiE" rel="nofollow noreferrer">https://www.youtube.com/watch?v=NEko7jWYKiE</a></p>
<p>...without any luck, it gives me the same traces and error message.</p>
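<p>For reference, the .spec edit such guides usually suggest (and which I believe matches the video) is raising Python's recursion limit at the very top of the spec file, before the <code>Analysis(...)</code> call; the multiplier is arbitrary:</p>

```python
# Commonly suggested addition at the very top of main2.spec:
import sys
sys.setrecursionlimit(sys.getrecursionlimit() * 5)  # raise the default limit for deep import graphs
```

<p>This only affects the build-time analysis; in my case it did not change the runtime error.</p>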
<p>Additional information:
PyInstaller: version 5.11.0
Python: version 3.8.3</p>
<p>Any answers on how I can convert it to an exe file and get rid of these errors will be appreciated.</p>
|
<python><python-3.x><kivy><pyinstaller><trace>
|
2023-05-28 13:10:08
| 1
| 317
|
Coder Alpha
|
76,351,556
| 7,895,542
|
Polars Series.to_numpy() does not return ndarray
|
<p>I was trying to convert a series to a NumPy array via <code>.to_numpy()</code>, but unlike what the documentation shows, I am not getting an ndarray out but a <code>SeriesView</code>.</p>
<p>Running exactly the example in the documentation: <a href="https://pola-rs.github.io/polars/py-polars/html/reference/series/api/polars.Series.to_numpy.html" rel="nofollow noreferrer">https://pola-rs.github.io/polars/py-polars/html/reference/series/api/polars.Series.to_numpy.html</a></p>
<pre><code>s = pl.Series("a", [1, 2, 3])
arr = s.to_numpy()
arr
type(arr)
</code></pre>
<p>I get</p>
<pre><code>[1 2 3]
<class 'polars.series._numpy.SeriesView'>
</code></pre>
<p>Am I doing something wrong here, and if not, how should I work around this?</p>
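<p>What I've established so far: the returned object appears to be an <code>ndarray</code> subclass (used for zero-copy views), so it should work anywhere a plain array does, and <code>np.asarray</code> strips the subclass. A sketch, with a hypothetical subclass standing in for polars' internal one:</p>

```python
import numpy as np

class SeriesView(np.ndarray):  # hypothetical stand-in for polars.series._numpy.SeriesView
    pass

arr = np.array([1, 2, 3]).view(SeriesView)
assert isinstance(arr, np.ndarray)   # a subclass still passes ndarray checks

plain = np.asarray(arr)              # asarray (subok=False) drops the subclass
assert type(plain) is np.ndarray
```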
|
<python><numpy><numpy-ndarray><python-polars>
|
2023-05-28 12:43:38
| 2
| 360
|
J.N.
|
76,351,518
| 4,451,521
|
tox skipped because could not find python interpreter
|
<p>I am having a problem using <em>tox</em>. I have to say first that I am not an expert on virtual environments, and I prefer to use <em>conda</em> environments, I find them much more easy to use and understand.</p>
<p>So as the background of my system I have a Ubuntu 18 system, where Python is 3.6.9. I also have a <em>conda</em> environment where Python is 3.9.16, and an <em>Anaconda</em> base environment with Python 3.8.3</p>
<p>Anyway I want to use tox for testing with this <code>tox.ini</code> file</p>
<pre><code>[tox]
envlist=py37
skipsdist = True
[testenv]
deps=pytest
commands= pytest
</code></pre>
<p>After installing <em>tox</em> (and I installed it in both inside the conda environment and outside), when I ran <code>tox</code> I got:</p>
<pre class="lang-none prettyprint-override"><code>py37: skipped because could not find python interpreter with spec(s): py37
</code></pre>
<p>I get it, <em>Python 3.7</em> is not installed.
But I thought that is what virtual environments do... what is the point of me using <em>tox</em> if I have to install the version manually?
And if I want to test with multiple versions, do I have to install every version manually?</p>
<p>And what is more important, how can I install several versions?</p>
<p>With <em>conda</em> I have one version per environment.
How can I use <em>tox</em> efficiently here?</p>
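<p>For context, my current understanding (which answers can correct): <em>tox</em> creates virtual environments but does not install interpreters, so each <code>pyXY</code> factor needs a matching <code>pythonX.Y</code> binary on <code>PATH</code>. The workaround I'm experimenting with is pointing <code>basepython</code> at the interpreter inside an existing conda environment; the path below is hypothetical:</p>

```ini
[tox]
envlist = py39
skipsdist = True

[testenv]
; hypothetical path: the python inside one of my conda environments
basepython = /home/me/miniconda3/envs/py39/bin/python
deps = pytest
commands = pytest
```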
|
<python><tox>
|
2023-05-28 12:35:57
| 2
| 10,576
|
KansaiRobot
|
76,351,373
| 1,319,998
|
Test that a ZIP doesn't require zip64 support
|
<p>I'm looking to test some Python code that makes zip files, dynamically choosing zip32 or zip64, and I would like to assert that in certain cases it really does create valid zip32 files by opening them in something that does not support zip64.</p>
<p>How can I check, in Python, that a file really is openable by something that doesn't support zip64? The "allowZip64" option in Python's zipfile module seems to only be for writing, rather than reading: <a href="https://docs.python.org/3/library/zipfile.html" rel="nofollow noreferrer">https://docs.python.org/3/library/zipfile.html</a></p>
<p>(I would prefer to not peek at the bytes of the file - I would like a more integration style test that uses existing library/software to process the file)</p>
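<p>For completeness, the only check I've managed so far <em>does</em> peek at bytes, which is exactly what I'd like to avoid, but I'll include it to clarify what I mean by "requires zip64": the zip64 end-of-central-directory locator has a fixed signature near the end of the archive:</p>

```python
import io
import zipfile

ZIP64_LOCATOR = b"PK\x06\x07"  # zip64 end-of-central-directory locator signature

def looks_zip64(data: bytes) -> bool:
    # The locator sits just before the small end-of-central-directory record,
    # so scanning the tail suffices (compressed data there could in principle false-positive).
    return ZIP64_LOCATOR in data[-1024:]

# a small archive written with allowZip64=False must not contain the locator
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", allowZip64=False) as zf:
    zf.writestr("a.txt", "hello")
assert not looks_zip64(buf.getvalue())
```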
|
<python><zip><deflate>
|
2023-05-28 11:58:34
| 2
| 27,302
|
Michal Charemza
|
76,351,319
| 3,247,006
|
How to set the current time to "TimeField()" as a default value in Django Models?
|
<p>The doc says below in <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#django.db.models.DateField.auto_now_add" rel="nofollow noreferrer">DateField.auto_now_add</a>:</p>
<blockquote>
<p>Automatically set the field to now when the object is first created. ... If you want to be able to modify this field, set the following instead of <code>auto_now_add=True</code>:</p>
</blockquote>
<ul>
<li>For <code>DateField</code>: <code>default=date.today</code> - <code>from datetime.date.today()</code></li>
<li>For <code>DateTimeField</code>: <code>default=timezone.now</code> - <code>from django.utils.timezone.now()</code></li>
</ul>
<p>So to <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datefield" rel="nofollow noreferrer">DateField()</a> and <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datetimefield" rel="nofollow noreferrer">DateTimeField()</a>, I can set the current date and time respectively as <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#default" rel="nofollow noreferrer">a default value</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import date
from django.utils import timezone
class MyModel(models.Model):
date = models.DateField(default=date.today) # Here
datetime = models.DateTimeField(default=timezone.now) # Here
</code></pre>
<p>Now, how can I set the current time to <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#timefield" rel="nofollow noreferrer">TimeField()</a> as a default value as shown below?</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(models.Model): # ?
time = models.TimeField(default=...)
</code></pre>
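<p>For reference, the closest I've found is passing my own callable, since there seems to be no bundled time-returning equivalent of <code>timezone.now</code>; sketch (the Django parts are left as comments):</p>

```python
from datetime import datetime, time

def current_time() -> time:
    # A callable default is evaluated at save time, not at import time.
    # With USE_TZ = True you may want django.utils.timezone.localtime().time() instead.
    return datetime.now().time()

# class MyModel(models.Model):
#     time = models.TimeField(default=current_time)
```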
|
<python><django><datetime><django-models><default>
|
2023-05-28 11:47:10
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,351,097
| 2,302,262
|
"boolean" nunique in pandas object dataframes
|
<h2>The goal</h2>
<p>I have a long narrow dataframe <code>df</code> (30k x 15), and want to see for each row, if <em>all</em> the values are unique or not.</p>
<p>The values in the dataframe are not necessarily float or int values, but may also be objects. This question is about the latter case, as it slows things down <em>a lot</em>. (I'm aware that objects will always be slower, but I'd still like to optimize that case.)</p>
<h2>The approach</h2>
<p>What I have been doing:</p>
<pre class="lang-py prettyprint-override"><code>df.nunique(axis=1) == len(df.columns)
</code></pre>
<p>This takes 47sec. It is inefficient, because I do not actually care about the <em>number</em> of unique values, but the code still needs to calculate them.</p>
<h2>The improvement</h2>
<p>I have improved this by creating a function <code>boolunique</code>:</p>
<pre class="lang-py prettyprint-override"><code>def boolunique(row):
vals = set()
for val in row:
if val in vals:
return False
vals.add(val)
return True
</code></pre>
<p>The results are a bit confusing:</p>
<ul>
<li>using it with <code>df.apply(boolunique, axis=1)</code> almost doubles the execution time, to 81sec; but</li>
<li>using it with <code>pd.Series({n: boolunique(r) for n, r in df.iterrows()})</code> halves the time to 24sec.</li>
</ul>
<p>The latter is better, but it <em>still</em> takes much longer than I would expect.</p>
<h2>The question</h2>
<p><strong>Is there a more efficient way I'm overlooking?</strong></p>
<hr />
<p>PS: I tried using a variant of the <code>boolunique</code> function as well (<code>lambda row: len(set(row)) == len(row)</code>), but the running times are virtually the same.</p>
<hr />
<p><strong>edit</strong></p>
<p>Here is some sample code to create a similar dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import pint
import pint_pandas
idx = pd.date_range('1940', '2020', freq='D')
vals = np.random.randint(0, 41, (len(idx), 15))  # randint's upper bound is exclusive; random_integers is deprecated
df = pd.DataFrame({n: pd.Series(column, idx).astype('pint[sec]') for n, column in enumerate(vals.T)})
</code></pre>
<p>The <code>.astype('pint[sec]')</code> turns the values into objects, and this is what slows the comparison down.</p>
<p>I'd like to write code that also efficiently handles objects.</p>
<p>(I'm aware that, in this particular case, I could speed things up by leaving out the conversion to <code>pint</code> objects. But I cannot control the datatype I'm handed; it may be a dataframe of floats, or of ints, or of pint quantities, or a mix of all of the above.)</p>
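<p>For reference, the fastest variant I've managed so far loops over the raw object array, skipping the per-row Series construction that <code>iterrows</code> pays for; small self-contained sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([["a", "b", "c"], ["a", "a", "c"]])

# One pass over the underlying object array; no per-row Series objects are built.
arr = df.to_numpy()
all_unique = pd.Series([len(set(row)) == arr.shape[1] for row in arr], index=df.index)
```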
|
<python><pandas><dataframe>
|
2023-05-28 10:49:36
| 1
| 2,294
|
ElRudi
|
76,350,376
| 7,149,485
|
Django : Edit main.html to reference static webpage
|
<p>I am learning Django and I am still very new to it so I don't yet understand how all the pieces fit together.</p>
<p>I have successfully built the polls application on the tutorial website (<a href="https://docs.djangoproject.com/en/4.2/intro/tutorial01/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.2/intro/tutorial01/</a>) and a 'Hello World!' app as an additional test. I have created a main page at the root with links to these two applications. Code for this is below.</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<title>B. Rabbit</title>
<meta charset="UTF-8">
</head>
<body>
<h3>Applications</h3>
<ul>
<li><p><a href="/polls">A Polls Application.</a></p></li>
<li><p><a href="/hello">Hello World App.</a></p></li>
<li><p><a href="/HWFolder/HWPage.htm">Hello World stand-alone page.</a></p></li>
</ul>
</body>
</html>
</code></pre>
<p>I now want to create a new folder in my project with a static webpage (just a simple html file) and add a link to my main page that will point to this static webpage. That is, rather than create an app via <code>python manage.py startapp hello</code>, I want to just create a raw <code>.html</code> file, stick it in a folder somewhere, and then point to this. But I don't know how to do this.</p>
<p>The third list object above is my attempt, but this produces a 404 Page not found error.</p>
<p>Below is the <code>urls.py</code> script for the website. I was able to get the Hello World app working by following the syntax for the polls app, but I do not know how to edit this to just refer to a stand-alone page.</p>
<pre><code>import os
from django.contrib import admin
from django.urls import include, path, re_path
from django.views.static import serve
from django.views.generic.base import TemplateView
# Up two folders to serve "site" content
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SITE_ROOT = os.path.join(BASE_DIR, 'site')
urlpatterns = [
path('admin/', admin.site.urls),
path('polls/', include('polls.urls')),
path('hello/', include('hello.urls')),
]
</code></pre>
<p>Could someone point me in the right direction on what scripts I need to edit or how I adjust the site's <code>urls.py</code> to refer to a stand-alone page, let's say <code>HWPage.htm</code> in the folder <code>/HWFolder</code>.</p>
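<p>For reference, the shape I think the URLconf addition needs, based on the <code>serve</code>, <code>re_path</code>, <code>os</code> and <code>SITE_ROOT</code> already imported/defined above (untested sketch, assuming <code>HWFolder</code> lives under <code>site/</code>):</p>

```python
urlpatterns += [
    # serve raw files under site/HWFolder at /HWFolder/<filename>
    re_path(r'^HWFolder/(?P<path>.*)$', serve,
            {'document_root': os.path.join(SITE_ROOT, 'HWFolder')}),
]
```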
|
<python><django>
|
2023-05-28 07:18:34
| 1
| 1,169
|
brb
|
76,350,255
| 2,514,521
|
Is there a standard class that implements all the int-like magic methods by calling int(self)?
|
<p>Please note: I have made edits since some of these responses were given, due to my bad wording. Blame me. Thanks to everyone for putting up with me.</p>
<p>I am writing a class that has int-like properties.</p>
<p>It implements the <code>__int__</code> method.</p>
<p>I'd like to implement all the magic methods that ints have: <code>__add__</code>, <code>__and__</code>, <code>__rshift__</code> etc., by applying the same function on <code>int(self)</code>.</p>
<p>For example:</p>
<pre><code> def __lshift__(self, other):
return int(self) << other
def __bool__(self):
return bool(int(self))
# and so forth, for about 40 more methods.
</code></pre>
<p>Is there an object that will do this for me? It seems like this would be a common pattern.</p>
<p>I tried with <code>class MyClass(int)</code>, but this did not allow me to override <code>__int__</code></p>
<p>My class is <em>not</em> an int; I want it to be able to be treated as an int.</p>
<p>Or should I be doing this some other way?</p>
<p>I am trying to simulate FPGA wires & registers - the value will change dynamically.</p>
<p>For example, if inst.wire2 is defined as <code>inst.wire1 + 10</code>, then <code>inst.wire2</code> will always be <code>inst.wire1 + 10</code>; if inst.wire1 changes, inst.wire2 changes with it.</p>
<p>Once complete, almost all access to a given wire or register will be for its value, so making every reference <code>inst.wire1.value</code> (or <code>self.wire1.value</code>) would clutter the code immensely, with no advantage.</p>
<p>TO CLARIFY: if someone treats the property as an int (e.g. by adding a number to it), the result should be a regular, real, actual int, with no magic stuff.</p>
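<p>For reference, the closest I've come myself is generating the forwarders in a loop instead of writing ~40 near-identical methods by hand; sketch with a deliberately incomplete method list:</p>

```python
class IntLike:
    """Mixin that forwards int magic methods to int(self); results are plain ints."""

    def __int__(self):
        raise NotImplementedError

def _forward(name):
    def method(self, *args):
        # delegate to the corresponding method of the plain int value
        return getattr(int(self), name)(*args)
    return method

# deliberately incomplete; extend with the rest of int's magic methods
for _name in ("__add__", "__radd__", "__sub__", "__and__", "__or__",
              "__lshift__", "__rshift__", "__index__", "__bool__"):
    setattr(IntLike, _name, _forward(_name))

class Wire(IntLike):
    def __init__(self, fn):
        self.fn = fn
    def __int__(self):
        return self.fn()

w = Wire(lambda: 3)
assert w + 10 == 13 and type(w + 10) is int  # result is a real int, no magic left
```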
|
<python>
|
2023-05-28 06:40:37
| 2
| 5,929
|
AMADANON Inc.
|
76,350,008
| 11,117,265
|
isinstance based custom validator in pydantic for custom classes from third party libraries
|
<p>In my custom package for work, I want to validate inputs using <code>pydantic</code>. However, most of my functions take inputs that are not of native types, but instances of classes from other libraries, e.g. <code>pandas.DataFrame</code> or <code>sqlalchemy.Engine</code>. Mentioning these as type hints and adding the <code>pydantic.validate_arguments</code> decorator fails.</p>
<p>Let's suppose the input of my function should be of type <code>CustomClass</code> from library <code>CustomPackage</code>.</p>
<pre class="lang-py prettyprint-override"><code>import CustomPackage
import pydantic
@pydantic.validate_arguments
def custom_function(custom_argument: CustomPackage.CustomClass) -> None:
pass
</code></pre>
<p>This will lead to the following error:</p>
<blockquote>
<p>RuntimeError: no validator found for <class 'CustomPackage.CustomClass'>, see arbitrary_types_allowed in Config</p>
</blockquote>
<p>To solve this, I can use <code>@pydantic.validate_arguments(config={"arbitrary_types_allowed": True})</code> instead, which will allow anything. This is not my intention, so I followed the <a href="https://docs.pydantic.dev/latest/usage/types/#classes-with-__get_validators__" rel="nofollow noreferrer">custom type section in documentation</a>, and created this:</p>
<pre class="lang-py prettyprint-override"><code>import collections.abc
import typing
import CustomPackage
class CustomClassWithValidator(CustomPackage.CustomClass):
@classmethod
def __get_validators__(cls: typing.Type["CustomClassWithValidator"]) -> collections.abc.Iterable[collections.abc.Callable]:
yield cls.validate_custom_class
@classmethod
def validate_custom_class(cls: typing.Type["CustomClassWithValidator"], passed_value: typing.Any) -> CustomPackage.CustomClass:
if isinstance(passed_value, CustomPackage.CustomClass):
return passed_value
raise ValueError
</code></pre>
<p>After this, the following works fine:</p>
<pre class="lang-py prettyprint-override"><code>@pydantic.validate_arguments
def custom_function(custom_argument: CustomClassWithValidator) -> None:
pass
</code></pre>
<p>But I have quite a few third party dependencies, and each has lots of custom classes that I am using. Creating an almost identical class for every single one of them as above would work, but that does not seem optimal. Is there any functionality in <code>pydantic</code> to make this less repetitive?</p>
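<p>The direction I'm currently exploring is a small factory that builds the wrapper class for any given type; pure-Python sketch below, with <code>Dummy</code> standing in for a third-party class:</p>

```python
import typing

def with_validator(base: type) -> type:
    """Build a pydantic-compatible annotation type accepting instances of `base`."""

    class Validated(base):
        @classmethod
        def __get_validators__(cls):
            yield cls._validate

        @classmethod
        def _validate(cls, value: typing.Any):
            if isinstance(value, base):
                return value
            raise TypeError(f"expected an instance of {base.__name__}")

    Validated.__name__ = f"Validated{base.__name__}"
    return Validated

class Dummy:  # stand-in for e.g. pandas.DataFrame
    pass

ValidatedDummy = with_validator(Dummy)
```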
|
<python><python-3.x><pydantic>
|
2023-05-28 05:05:57
| 1
| 1,676
|
yarnabrina
|
76,349,990
| 336,527
|
How to delete a python variable whose name is only known at runtime?
|
<p>I need to do the equivalent of the <code>del</code> statement when the variable name is only known dynamically.</p>
<p>For the global namespace, I believe <code>del globals()[name]</code> works correctly.</p>
<p>For the local namespace, <code>del locals()[name]</code> is incorrect since <code>locals()</code> is not guaranteed to be the original dictionary (and, in fact, in CPython it is a copy).
For the same reason, <code>exec(f'del {name}')</code> also won't work.</p>
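<p>A quick sanity check of both halves of the question, runnable as-is:</p>

```python
name = "dynamically_named"
globals()[name] = 42
del globals()[name]          # works: globals() is the real namespace dict
assert "dynamically_named" not in globals()

def demo():
    x = 1
    locals()["x"] = 99       # CPython: no effect; locals() is a copy/snapshot
    return x

assert demo() == 1
```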
|
<python>
|
2023-05-28 04:55:58
| 1
| 52,663
|
max
|
76,349,868
| 11,482,269
|
Python mariadb-connector function returns empty cursor.fetchall() on 252nd iteration with different WHERE clauses
|
<p>Caveats:
my Linux distribution prevents upgrading beyond Connector/C 3.1.20, and thus beyond Python module 1.0.11.</p>
<p>Versions from <code>/usr/bin/mariadb_config</code> (Compiler: GNU 10.2.1):
<code>--version</code> [10.5.19], <code>--cc_version</code> [3.1.20]</p>
<p>Output:</p>
<pre><code>***** ***** ***** ***** ***** ***** ***** ***** ***** *****
Executing SQL Query: "SELECT cardid,mfguid,setTCGUid ,CONCAT_WS(' ',REGEXP_REPLACE(REGEXP_REPLACE(ed.editionName,'^(.*)(| Edition)$','\\1 Edition'),'(Alpha Print Edition|1st Edition Edition)','1st Edition'),REPLACE(f.finishName,'Regular','Normal')) AS printing from g1st_fabdb.cards left join g1st_fabdb.setNames sn ON cards.setNameId=sn.setNameId LEFT JOIN g1st_fabdb.finishes f ON cards.finishId = f.finishId LEFT JOIN g1st_fabdb.editions ed ON cards.editionId = ed.editionId WHERE ( mfguid LIKE %s )AND setTCGUid = %s ;"
Values:"%RVD026%"
"RVD
Card Data: [(148982, 'RVD026', 'RVD', 'Rhinar Edition Normal')]
cardid mfguid setTCGUid printing
0 148982 RVD026 RVD Rhinar Edition Normal
DBCursor Status: <mariadb.connection connected to 'localhost' at 0xffff8929bb40> True
DBCardID: cardid mfguid setTCGUid printing
0 148982 RVD026 RVD Rhinar Edition Normal
Length: 1
CardId 0: cardid mfguid setTCGUid printing
0 148982 RVD026 RVD Rhinar Edition Normal
Card Last Updated SQL Query: SELECT card_lastupdate from g1st_fabdb.marketdata WHERE cardid=148982 ORDER BY card_lastupdate DESC LIMIT 1
Card Last Updated in DB: None
In API: 2023-05-26 21:53:45
***** In Set: RVD Card In set: 026 SETID: RVD026 *****
***** Current CardCount total: 251 *****
***** ***** ***** ***** ***** ***** ***** ***** ***** *****
Executing SQL Query: "SELECT cardid,mfguid,setTCGUid ,CONCAT_WS(' ',REGEXP_REPLACE(REGEXP_REPLACE(ed.editionName,'^(.*)(| Edition)$','\\1 Edition'),'(Alpha Print Edition|1st Edition Edition)','1st Edition'),REPLACE(f.finishName,'Regular','Normal')) AS printing from g1st_fabdb.cards left join g1st_fabdb.setNames sn ON cards.setNameId=sn.setNameId LEFT JOIN g1st_fabdb.finishes f ON cards.finishId = f.finishId LEFT JOIN g1st_fabdb.editions ed ON cards.editionId = ed.editionId WHERE ( mfguid LIKE %s )AND setTCGUid = %s ;"
Values:"%DRO001%"
"DRO
Card Data: []
False
DBCursor Status: <mariadb.connection connected to 'localhost' at 0xffff8929bb40> True
DBCardID: False
Traceback (most recent call last):
File "/home/biqu/scripts/api-dbupdate.py", line 367, in <module>
if Debug : print("Length:",len(dfCardId))
TypeError: object of type 'bool' has no len()
</code></pre>
<hr />
<p>python3 function:</p>
<pre><code>def get_dfCardId(SetID,sName):
#replace with cardID identification function .,+6
if not mydberror :
my_cardid_query="SELECT cardid,mfguid,setTCGUid ,CONCAT_WS(' ',REGEXP_REPLACE(REGEXP_REPLACE(ed.editionName,'^(.*)(| Edition)$','\\\\1 Edition'),'(Alpha Print Edition|1st Edition Edition)','1st Edition'),REPLACE(f.finishName,'Regular','Normal')) AS printing from g1st_fabdb.cards left join g1st_fabdb.setNames sn ON cards.setNameId=sn.setNameId LEFT JOIN g1st_fabdb.finishes f ON cards.finishId = f.finishId LEFT JOIN g1st_fabdb.editions ed ON cards.editionId = ed.editionId WHERE ( mfguid LIKE %s )AND setTCGUid = %s ;"
mydbSetId="%{}%"
if Debug: print("Executing SQL Query: \""+my_cardid_query+"\"\n\tValues:\""+mydbSetId.format(SetID)+"\"\n\t\t\""+sName)
myconn_ro=mariadb.connect(**mydbparams)
mydbcursor=myconn_ro.cursor()
mydbcursor.execute(my_cardid_query,(mydbSetId.format(SetID),sName))
dbCardId = mydbcursor.fetchall()
if Debug and myconn_ro.warnings > 0: print("Warnings: ",myconn_ro.warnings,"\n\t",myconn_ro.show_warnings)
mydbcursor.close()
myconn_ro.reset()
if Debug: print("\t\tCard Data: ",dbCardId)
if len(dbCardId)>0:
dfCardId = pd.DataFrame(dbCardId,index=None,columns=('cardid','mfguid','setTCGUid','printing'))
else:
dfCardId = False # pd.DataFrame({'cardid':[''],'mfguid':[''],'setTCGUid':[''],'printing':['']})
print(dfCardId)
if Debug: print("DBCursor Status: ",mydbcursor.connection," ",mydbcursor.closed)
myconn_ro.close()
return dfCardId
</code></pre>
<p>Summary:
The Python function runs a SELECT statement. On the 252nd iteration with a different WHERE mfguid (mind you, I have rearranged things, and it is always the 252nd iteration regardless of the WHERE clause) it always returns no data from <code>cursor.fetchall()</code>.
Initially the function reused a single connection initialized once at the start of the main script. I have since tried variants using <code>ConnectionPool</code> (whose <code>connection.close()</code> calls never seemed to release the connections back to the pool), executing the cursor and connection close methods in order, and finally the current version, which should create a new connection with the same parameters on each iteration and destroy it (which does not appear to be happening, as the memory address seems to remain unchanged each iteration). Still it always returns no data, no warnings, no errors and no exceptions, resulting in a failure due to the lack of data to return. So either I have found a real corner-case bug (which I think someone would have hit before now) or I am doing something wrong. Any suggestions are much appreciated.</p>
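<p>For what it's worth, the shape I'm aiming for is a strictly per-call open/use/close, with <code>contextlib.closing</code> so the handles are released even on error; sketched here with <code>sqlite3</code> as a stand-in, since the pattern is driver-agnostic (the real code uses <code>mariadb.connect(**mydbparams)</code> and <code>%s</code> placeholders):</p>

```python
import sqlite3
from contextlib import closing

def fetch_rows(query, params):
    # sqlite3 stands in for mariadb here; each call opens and fully closes its own connection
    with closing(sqlite3.connect(":memory:")) as conn:
        conn.execute("CREATE TABLE t (id INTEGER)")
        conn.execute("INSERT INTO t VALUES (1)")
        with closing(conn.cursor()) as cur:
            cur.execute(query, params)
            return cur.fetchall()

rows = fetch_rows("SELECT id FROM t WHERE id = ?", (1,))
```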
|
<python><python-3.x><mariadb-10.5><mariadb-connector>
|
2023-05-28 03:56:41
| 1
| 351
|
Joe Greene
|
76,349,851
| 19,504,610
|
Accessing variables in the local scope of a python decorator
|
<p>Consider:</p>
<pre><code>def g(value):
def f():
return value
return f
x = g(3)
x()  # returns 3
</code></pre>
<p>Given <code>x</code> in the example, the returned closure from <code>g(3)</code>, is there any way to inspect that is the value of <code>value</code> without calling <code>x()</code>?</p>
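<p>For reference, what I've found while poking at this: the closure cells are inspectable on the function object itself, without calling it:</p>

```python
def g(value):
    def f():
        return value
    return f

x = g(3)

# each free variable lives in a cell on the function object;
# co_freevars gives the names in the same order as __closure__
free_vars = dict(zip(x.__code__.co_freevars,
                     (c.cell_contents for c in x.__closure__)))
assert free_vars == {"value": 3}
```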
|
<python><scope><closures><local>
|
2023-05-28 03:45:57
| 1
| 831
|
Jim
|
76,349,775
| 14,012,470
|
Removing Noise and Detecting Circle in Spotlight Using OpenCV
|
<p>I have been trying to detect circles in spotlight images using OpenCV and have a variety of pictures I am working with, generally looking something like these 4 images:</p>
<p><a href="https://i.sstatic.net/6RQKh.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6RQKh.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/yMOXf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yMOXf.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/t0X9z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t0X9z.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/bk0RC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bk0RC.jpg" alt="enter image description here" /></a></p>
<p>Following some image processing (using a threshold, blurring) I have gotten the images to look something like this:
<a href="https://i.sstatic.net/JFpUa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JFpUa.png" alt="enter image description here" /></a></p>
<p>However, trying to use the HoughCircles function, even after playing around with it for a while, seems not to work. Is there something I am glossing over while using HoughCircles, or is there a better way to detect the circles than HoughCircles?</p>
<p>My current code:</p>
<pre><code>import cv2 as cv
import numpy as np

img = cv.imread('image.jpg', cv.IMREAD_GRAYSCALE)
assert img is not None, "file could not be read, check with os.path.exists()"  # check before using img
img = cv.medianBlur(img, 33)
image = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv.THRESH_BINARY, 73, 2)
blur = cv.blur(image, (5, 5))

circles = cv.HoughCircles(blur, cv.HOUGH_GRADIENT, 1, 20,
                          param1=50, param2=30, minRadius=30)

output = cv.cvtColor(blur, cv.COLOR_GRAY2BGR)  # colour copy to draw on, not the circle array
if circles is not None:  # HoughCircles returns None when nothing is found
    for i in np.uint16(np.around(circles))[0, :]:
        cv.circle(output, (i[0], i[1]), i[2], (0, 255, 0), 2)  # outer circle
        cv.circle(output, (i[0], i[1]), 2, (0, 0, 255), 3)     # centre
cv.imshow('Objects Detected', output)
cv.waitKey(0)
</code></pre>
|
<python><opencv><image-processing>
|
2023-05-28 03:03:11
| 1
| 1,511
|
AS11
|
76,349,651
| 16,595,100
|
Python Curses: Check if cursor is hidden
|
<p>I want to write a function that asks the user for input using Python curses. The problem is that I want the cursor hidden everywhere in the program except in the text box. I intend to use this function in many places, and if the cursor was hidden before the function call, it should be returned to that state afterwards, and vice versa. To do this I need to be able to read the state of the cursor. How do I do this? (Something like this?)</p>
<pre><code>if curses.get_cursor_state() == 0:
#Hidden
hidden=True
else:
hidden=False
#Textbox code here
if hidden:
curs_set(0)
else:
curs_set(1)
</code></pre>
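<p>One thing I noticed while experimenting: <code>curses.curs_set()</code> itself appears to return the <em>previous</em> visibility, which would make an explicit getter unnecessary; untested sketch, since I can't verify this on every curses build:</p>

```python
import curses

def prompt(stdscr):
    previous = curses.curs_set(1)   # show the cursor; curs_set returns the old state
    try:
        pass                        # textbox code here
    finally:
        curses.curs_set(previous)   # restore whatever state the caller had
```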
|
<python><ncurses><python-curses>
|
2023-05-28 01:42:23
| 1
| 673
|
Enderbyte09
|
76,349,589
| 1,105,249
|
How do I ensure the created uinput.Device instance is always the same?
|
<p>In Python3, instances for gamepad controllers can be created using <code>python-uinput</code> module. The code may look something like this:</p>
<pre><code>device = uinput.Device(list_of_events, name=name, bustype=0x06, vendor=0x2357, product=0x1, version=mode)
</code></pre>
<p><em>While I forced some values (typically 0x06 means virtual; the vendor id is something I picked and hope does not collide with any real vendor; the product is 1 and the version is typically 1), I did not find any change with respect to the previous situation:</em></p>
<pre><code>device = uinput.Device(list_of_events, name=name)
</code></pre>
<p>The list of events is long to explain, but let's say it is a gamepad with two axes and works "typically" as expected (D-Pad is discrete usage of ABS_X/ABS_Y axes, left analog is ABS_X/ABS_Y with analog usage, and right analog is ABS_RX/ABS_RY with analog usage).</p>
<p>While the pads work well until disconnected, under certain conditions or software (e.g. zsnes emulator) a new device instance (<code>uinput.Device(...)</code>) seems to not be recognized as a previous device instance <em>even if both were created with the same name</em>. For example, let's say I do this:</p>
<ol>
<li>I create an instance: <code>device = uinput.Device(events, "My-Gamepad-1")</code>.</li>
<li>I trigger the keys properly, configure the zsnes input with that device's events, and play a while.</li>
<li>I disconnect my device: <code>device.destroy()</code>.</li>
<li>I create a new instance: <code>device = uinput.Device(events, "My-Gamepad-1")</code>. <em>Remember: the same happens if I add the particular arguments shown at the beginning of this question</em>.</li>
<li>I try to continue playing in the zsnes (even after closing it and opening it again), or perhaps I just try to re-configure the same input.</li>
</ol>
<p>The problem is that, by step 5, my pad is not recognized in the ZSNES (a super nintendo emulator I took just for testing purposes of my virtual devices). As if it had a different internal identifier (perhaps not a product id, but instead a product instance id or something like that?). It is, however, still a recognized pad: I made a test program in Unity which can detect the pad, their buttons and their axes properly (in my case, the pad is detected as 0 and the axes are detected from it), but the mechanism might be different: Unity just needs to detect the pads being present, while the emulator settings need, perhaps, to match a pad id.</p>
<p>My question: I'd like to instantiate my <code>uinput.Device</code> objects in a way that, aside from not having issues in Unity, I don't have issues with software like zsnes.</p>
|
<python><uinput>
|
2023-05-28 01:06:08
| 0
| 12,383
|
Luis Masuelli
|
76,349,549
| 7,995,293
|
Python script called by Zsh function: why does printing the python output work, but returning the same output does not?
|
<p>I have a large nested file structure; navigating from one working folder to the next requires verbose commands like <code>cd ../../../../path/to/working/file</code>. Fortunately, the files are consistently named: <code>part01_Part01-04.fileName/src/main/</code>
To make navigation easier, I've written a Python script that takes the current directory name and increments or decrements the <code>part</code> numbers according to command line args. The script prints an absolute filepath and then exits.</p>
<p>The Python script is called by a small Zsh function I wrote into my .zshrc, as follows:</p>
<pre><code>function funcName () { builtin cd "$(/path/to/pythonExecutable $PWD $1)"; pwd; ls; }
</code></pre>
<p>Despite being a bit hacky, it works beautifully, as intended. My question has to do with the way it works: in order to have it work, I need to <em>print</em> the path string at the end of my Python script, as opposed to returning the path string. This was surprising and unexpected.</p>
<p>As I understand it, the <code>return</code> statement should deliver my string to whatever called the script, whereas <code>print</code> sends its cargo to <code>stdout</code>. My Zsh function is not stdout. Can anyone tell me why it is working this way, and/or point me to any resources to help me flesh out what's going on under the hood?</p>
<p>Last, I'm sure there are cleaner ways of doing this in shell script alone, but I know Python, and don't know shell (yet). One step at a time.</p>
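<p>For anyone answering, here is the minimal shape of my script (names simplified, real logic elided), showing exactly where the print happens:</p>

```python
import sys

def next_path(cwd: str, offset: str) -> str:
    # returned only to Python callers; the shell never sees function return values
    return f"{cwd}/sibling_{offset}"   # real logic increments the part numbers

if __name__ == "__main__" and len(sys.argv) > 2:
    # $(...) command substitution in zsh captures stdout, which is why print
    # works here while a bare `return` at module level would not even parse
    print(next_path(sys.argv[1], sys.argv[2]))
```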
|
<python><zsh><zshrc>
|
2023-05-28 00:38:05
| 0
| 399
|
skytwosea
|
76,349,404
| 427,083
|
Django rest_framework serializer dynamic depth
|
<p>I am using a ModelSerializer to serialize a model. How do I have <code>depth = 1</code> when retrieving a single instance, and <code>depth = 0</code> when retrieving a list.</p>
<pre><code>from rest_framework import serializers
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = ['id', 'first_name', 'last_name', 'organizaton']
depth = 1 # <--- How to change this to 0 for the list?
</code></pre>
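<p>The direction I've been considering is building the serializer class per action; plain classes stand in for DRF's below so the <code>Meta</code> inheritance can be checked in isolation (in a <code>ModelViewSet</code> this would live in <code>get_serializer_class()</code>, with <code>action</code> being <code>self.action</code>):</p>

```python
class UserSerializer:            # stand-in for the real serializers.ModelSerializer
    class Meta:
        depth = 1

def serializer_for(action: str):
    # subclass on the fly, overriding only Meta.depth
    class DynamicUserSerializer(UserSerializer):
        class Meta(UserSerializer.Meta):
            depth = 1 if action == "retrieve" else 0
    return DynamicUserSerializer
```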
|
<python><django><django-rest-framework><django-serializer>
|
2023-05-27 23:24:18
| 1
| 80,257
|
Mundi
|
76,349,378
| 17,987,266
|
How to log method calls from derivate classes in an asynchronous way?
|
<p>I'm implementing a logger of method calls from derived classes, as suggested by this <a href="https://stackoverflow.com/a/58656725/17987266">answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>class Logger:
def _decorator(self, f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
print(f.__name__, args, kwargs) # I'll replace this with my logging logic
return f(*args, **kwargs)
return wrapper
def __getattribute__(self, item):
value = object.__getattribute__(self, item)
if callable(value):
decorator = object.__getattribute__(self, '_decorator')
return decorator(value)
return value
class A(Logger):
def method(self, a, b):
print(a)
def another_method(self, c):
print(c)
@staticmethod
def static_method(d):
print(d)
</code></pre>
<p>I'm worried that my logging logic may disrupt the method calls from the derived classes. In particular, because the logging may involve a call to a database, I'm worried that this could add unnecessary delay to the logged method. And I certainly don't want any errors raised during logging spilling over into the methods.</p>
<p>Is there a way to make the logging logic asynchronous to the method call here? It doesn't bother me to use the async toolbox from Python in <code>Logger</code>, but I'd like to keep <code>A</code> and any program that will instantiate it totally unaware of it. In other words, <code>A</code> and the client code should not need to worry about things like <code>asyncio</code>, <code>async</code> and <code>await</code>.</p>
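<p>One way to get this decoupling without exposing <code>asyncio</code> to <code>A</code> or its callers (a sketch, not a full implementation; the queue, worker, and class bodies are mine) is to have the wrapper only enqueue a log record and let a daemon thread drain the queue in the background. The method returns immediately, and any exception raised while logging stays inside the worker:</p>

```python
import functools
import queue
import threading

log_queue = queue.Queue()

def _log_worker() -> None:
    # Runs forever in the background; the slow part (e.g. the DB call) lives here.
    while True:
        name, args, kwargs = log_queue.get()
        try:
            print("logged:", name, args, kwargs)  # replace with the DB write
        except Exception:
            pass  # an error while logging must not spill over into the methods
        finally:
            log_queue.task_done()

threading.Thread(target=_log_worker, daemon=True).start()

class Logger:
    def _decorator(self, f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            log_queue.put((f.__name__, args, kwargs))  # cheap, non-blocking
            return f(*args, **kwargs)                  # runs without waiting
        return wrapper

    def __getattribute__(self, item):
        value = object.__getattribute__(self, item)
        if callable(value):
            decorator = object.__getattribute__(self, '_decorator')
            return decorator(value)
        return value

class A(Logger):
    def method(self, a, b):
        return a + b
```

<p>Client code stays unchanged: it just calls <code>A().method(...)</code>. The trade-off is that records still queued when the process exits may be lost (the thread is a daemon); flushing with <code>log_queue.join()</code> at shutdown is one way to avoid that.</p>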
|
<python><multithreading><asynchronous><logging><async-await>
|
2023-05-27 23:11:54
| 1
| 369
|
sourcream
|
76,349,264
| 5,611,471
|
How to retain the data type datetime64 with the format ("%m/%d/%Y") in pandas?
|
<p>Please, wait before marking it as a duplicate.<br>
These posts <br> <a href="https://stackoverflow.com/questions/38067704/how-to-change-the-datetime-format-in-pandas">How to change the datetime format in Pandas</a> <br> <a href="https://stackoverflow.com/questions/38333954/converting-object-to-datetime-format-in-python">Converting object to datetime format in python</a><br>
answered my question partially.<br><br>
Sample data:<br>date1:<br>
04/26/2012<br>
02/16/2006<br>
11/26/2017<br>
...<br>
<code>dtype: object</code></p>
<p>I converted an object type to <code>datetime64</code> like this by following the above-linked post:<br></p>
<pre><code>df['date1'] = pd.to_datetime(df['date1'], format='%m/%d/%Y')
</code></pre>
<p>The code returned results in the format <code>%Y-%m-%d</code>. <code>format='%m/%d/%Y'</code> didn't preserve the original form which was <code>%m/%d/%Y</code>. However, the dtype was changed to <code>datetime64[ns]</code> from <code>object</code> which I wanted.</p>
<p>So, I added another line of code to get the original format based on the answers from the above-linked posts.<br></p>
<pre><code>df['date1'] = df['date1'].dt.strftime('%m/%d/%Y')
</code></pre>
<p>Again, the column's data type was converted back to <code>object</code>. But, I need to retain the <code>datetime64</code> data type.</p>
<p>How to preserve the original formatting, <code>'%m/%d/%Y'</code> with the <code>dtype: datetime64</code>?</p>
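<p>For context, a sketch of the constraint involved (column name as in the question): a <code>datetime64[ns]</code> value is stored as an integer timestamp and carries no display format at all, so the <code>%m/%d/%Y</code> form can only exist as strings. One common compromise is to keep the real datetime column and render a formatted string copy only when output is needed:</p>

```python
import pandas as pd

df = pd.DataFrame({"date1": ["04/26/2012", "02/16/2006", "11/26/2017"]})

# Parse once; the column becomes real datetimes (dtype datetime64[ns]).
df["date1"] = pd.to_datetime(df["date1"], format="%m/%d/%Y")

# For display/export only: a formatted *string* copy; df["date1"] keeps its dtype.
formatted = df["date1"].dt.strftime("%m/%d/%Y")

print(df["date1"].dtype)   # datetime64[ns]
print(formatted.iloc[0])   # 04/26/2012
```

<p>In other words, you cannot have both at once in a single column: the moment the values display as <code>04/26/2012</code> they are strings again, which is why <code>dt.strftime</code> flips the dtype back to <code>object</code>.</p>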
|
<python><pandas>
|
2023-05-27 22:18:22
| 0
| 529
|
007mrviper
|
76,349,202
| 8,869,570
|
Composed object needs access to parent class method
|
<p>I have a class <code>Round</code> that has a <code>Calc</code> composed class object. <code>Round</code> also inherits from several base classes, and inherits a method called <code>parameter</code> from one of its base classes.</p>
<p><code>Calc</code> has a method, <code>compute</code>, that is called from a method in <code>Round</code>, using:</p>
<pre><code>self.calc_object.compute(...)
</code></pre>
<p>This <code>compute</code> method, however, needs access to <code>parameter</code>, and we cannot change the API of <code>compute</code>, but we can change the api of <code>Calc's</code> constructor, so I was thinking of passing in <code>self.parameter</code> when constructing <code>self.calc_object = Calc(self.parameter, other args)</code>. Is this a valid approach? Are there other approaches I should consider?</p>
<p>The solution needs to work for both Python 2 and Python 3.</p>
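<p>Passing the bound method in through the constructor is a common and valid approach; a minimal sketch (the class and method bodies are invented for illustration, and the pattern works on both Python 2 and 3):</p>

```python
class Calc(object):
    def __init__(self, parameter_getter):
        # Hold a reference to the parent's bound method, not to the parent.
        self._parameter = parameter_getter

    def compute(self, x):
        # compute's API is unchanged; parameter comes from the stored callable.
        return x * self._parameter()


class BaseWithParameter(object):
    def parameter(self):
        return 2  # stand-in for the method inherited by Round


class Round(BaseWithParameter):
    def __init__(self):
        # self.parameter is a bound method: calling it later still sees `self`.
        self.calc_object = Calc(self.parameter)

    def run(self, x):
        return self.calc_object.compute(x)
```

<p>One thing to weigh: storing only the bound method keeps <code>Calc</code> decoupled from <code>Round</code>'s full interface, but it does keep the <code>Round</code> instance alive for as long as the <code>Calc</code> object exists; passing the whole parent, or a <code>weakref.WeakMethod</code>, are the usual alternatives.</p>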
|
<python><inheritance><composition>
|
2023-05-27 21:54:33
| 0
| 2,328
|
24n8
|
76,349,192
| 3,604,745
|
Rasa - ‘from_entity’ mapping for a non-existent entity
|
<p>This Rasa issue seems to not really be described by the “Warning” (and the “Warning” in this case is effectively an error). It has this message for every slot and entity:</p>
<p><code>/rasa/shared/utils/io.py:99: UserWarning: Slot ‘name’ uses a ‘from_entity’ mapping for a non-existent entity ‘name’</code>. Skipping slot extraction because of invalid mapping.</p>
<p>Of course, it’s not literally true:</p>
<p><em>From domain.yml:</em></p>
<pre><code>slots:
name:
type: text
initial_value: null
mappings:
- type: from_entity
entity: name
</code></pre>
<p><em>From nlu.yml:</em></p>
<pre><code>- intent: provide_name
examples: |
- My name is [John Doe](name)
- You can call me [Sara](name)
- [James](name) is my name
- they call me [Biggie Smallz](name)
- [Yun Fan](name)
</code></pre>
<p>I have also tried several variations and tried appending an entity list to nlu.yml:</p>
<pre><code>entities:
- name
</code></pre>
<p>My original slots syntax was more concise (this is one of a few variations).</p>
<pre><code>slots:
name:
type: text
mappings:
- type: from_entity
entity: name
</code></pre>
<p>I’ve tried running with <code>debug = True</code>, <code>log_level: DEBUG</code>, <code>-vv</code>, Python versions 3.7, 3.8, 3.9, and 3.10, as well as deleting caches. The model builds without error using <code>rasa train</code>. In fact, if I build the <code>nlu</code> component only with <code>rasa train nlu</code> I can run it and see that entity extraction works fine.</p>
<p>Additionally, when I put a debugger in the source code I see the <code>domain_slots.items()</code> look like this:</p>
<blockquote>
<p>('name', {'type': 'text', 'initial_value': None, 'mappings': [{'type':
'from_entity', 'entity': 'name'}]}),</p>
</blockquote>
|
<python><rasa><rasa-nlu><rasa-sdk>
|
2023-05-27 21:52:25
| 0
| 23,531
|
Hack-R
|
76,349,101
| 13,460,543
|
How to select rows until an element is encountered in a column?
|
<p>Let's suppose we have the following dataframe :</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(index=['A', 'B', 'C', 'D'], data = [1,2,3,3])
</code></pre>
<p>which gives us the following dataframe :</p>
<pre><code>df
0
A 1
B 2
C 3
D 3
</code></pre>
<p>I spent some time looking for a quick way to extract the rows up to and including the first occurrence of <code>3</code>.</p>
<p>I found a solution (written in the answer section below) but I wonder if there are other more conventional approaches.</p>
<p>Thanks in advance for your contributions.</p>
<p><strong>Solution found</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(index=['A', 'B', 'C', 'D'], data = [1,2,3,3])
mask = df[0].eq(3).cumsum().cumsum().le(1)
r = df[mask]
print(r)
</code></pre>
<pre class="lang-py prettyprint-override"><code> 0
A 1
B 2
C 3
</code></pre>
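<p>A more conventional variant of the same idea (same toy data) uses <code>idxmax</code> on the boolean mask, since it returns the label of the first <code>True</code>, together with inclusive <code>.loc</code> slicing. Note this assumes the value actually occurs in the column; when there is no match, <code>idxmax</code> silently returns the first label:</p>

```python
import pandas as pd

df = pd.DataFrame(index=['A', 'B', 'C', 'D'], data=[1, 2, 3, 3])

stop = (df[0] == 3).idxmax()  # label of the first occurrence of 3 -> 'C'
r = df.loc[:stop]             # .loc slicing is inclusive, so the first 3 is kept
print(r)
```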
|
<python><pandas>
|
2023-05-27 21:22:59
| 2
| 2,303
|
Laurent B.
|
76,348,906
| 1,278,365
|
SQLAlchemy 2.x: Eagerly load joined collection query
|
<h2>Context</h2>
<p>With SQLAlchemy 2.x, how to eagerly load a joined collection?</p>
<p>Let's say we have the following models <code>Parent</code> and <code>Child</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Parent(Base):
__tablename__ = "parent"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(30))
children: Mapped[List["Child"]] = relationship(
back_populates="parent", cascade="all, delete-orphan"
)
def __repr__(self) -> str:
return f"Parent(id={self.id!r}, name={self.name!r})"
class Child(Base):
__tablename__ = "child"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(30))
parent_id: Mapped[int] = mapped_column(ForeignKey("parent.id"))
parent: Mapped["Parent"] = relationship(back_populates="children")
def __repr__(self) -> str:
return f"Child(id={self.id!r}, name={self.name!r})"
</code></pre>
<p>And I would like to get the <code>Parent</code> that has a <code>Child</code> with <code>id</code> equal to 1. And populate the result <code>parent.children</code> with all its children.</p>
<p>IOW, if our <code>parent</code> and <code>child</code> tables are populated with:</p>
<pre><code>parent
id name
1 p1
2 p2
</code></pre>
<pre><code>child
id name parent_id
1 c1 1
2 c2 1
3    c3     1
4    c4     1
5    c5     2
</code></pre>
<p>I would like to see the query result object:</p>
<pre class="lang-py prettyprint-override"><code>result = <query parent whose children has id == 1>
print(result)
>>> Parent(id=1, name='p1')
print(result.children)
>>> [
Child(id=1, name='c1'),
Child(id=2, name='c2'),
Child(id=3, name='c3'),
Child(id=4, name='c4'),
]
</code></pre>
<h2>Test case 1</h2>
<pre class="lang-py prettyprint-override"><code>stmt = select(Parent).join(Parent.children).where(Child.id == 1)
</code></pre>
<p>Generates the following SQL:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT parent.id, parent.name
FROM parent JOIN child ON parent.id = child.parent_id
WHERE child.id = 1
</code></pre>
<p>Which looks great, but since I don't tell sqlalchemy to eagerly load <code>children</code>, when accessing the result scalar object (<code>parent.children</code>), I get the error:</p>
<pre><code>sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/20/xd2s)
</code></pre>
<h2>Test case 2</h2>
<pre class="lang-py prettyprint-override"><code>stmt = select(Parent).options(joinedload(Parent.children)).where(Child.id == 1)
</code></pre>
<p>Generates the following SQL:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT parent.id, parent.name, child_1.id AS id_1, child_1.name AS name_1, child_1.parent_id
FROM parent JOIN child AS child_1 ON parent.id = child_1.parent_id, child
WHERE child.id = 1
</code></pre>
<p>Which is not what we're looking for: notice that <code>child</code> now appears a second time in the <code>FROM</code> clause, as an implicit cross join.</p>
|
<python><sql><sqlalchemy>
|
2023-05-27 20:28:12
| 2
| 2,058
|
gmagno
|
76,348,824
| 7,453,065
|
Meaning of Python's @setter decorator
|
<p>On the page <a href="https://stackoverflow.com/questions/17330160/how-does-the-property-decorator-work-in-python">How does the @property decorator work in Python?</a> you can find many answers about the meaning of the @property decorator, but no answer about the corresponding @propertyname.setter decorator.</p>
<p>My question is the following. In the code below, is it possible to replace the @x.setter decorator with a single statement?</p>
<pre><code>class D:
def __init__(self):
self._x = 'Hi'
def x(self):
return self._x
x = property(x)
    @x.setter  # delete this decorator
def x(self, value):
self._x = value
# Your code, one line!
</code></pre>
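<p>Since <code>@x.setter</code> is nothing more than sugar for calling the existing property's <code>setter()</code> method and rebinding the name, the decorator can indeed be replaced by one assignment (sketch; <code>set_x</code> is an arbitrary helper name):</p>

```python
class D:
    def __init__(self):
        self._x = 'Hi'

    def x(self):
        return self._x

    x = property(x)

    def set_x(self, value):
        self._x = value

    x = x.setter(set_x)  # the one-line equivalent of @x.setter


d = D()
assert d.x == 'Hi'
d.x = 'Bye'
print(d.x)  # Bye
```

<p><code>property.setter</code> returns a <em>new</em> property object with the same getter and the given setter, so rebinding <code>x</code> to that return value is exactly what the decorator form does behind the scenes.</p>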
|
<python><properties>
|
2023-05-27 20:05:55
| 2
| 744
|
Dietrich Baumgarten
|
76,348,589
| 9,620,095
|
How to display page break view by default with XlsxWriter
|
<p>I added the page break configuration. I want to know whether it is possible to display the page break view by default in XlsxWriter (Python).</p>
<p>I tried <code>sheet.set_page_break_view()</code> but I got an error.</p>
|
<python><excel><xlsxwriter>
|
2023-05-27 19:08:56
| 2
| 631
|
Ing
|
76,348,551
| 982,049
|
Launching Python scripts on Kotlin server for Android app?
|
<p>TL;DR: how to launch Python scripts for multiple clients at the same time, using Kotlin server-side code, for Android clients.</p>
<p>I am developing an Android app using Kotlin for the client side as well as the server side.
However, I want to use some Python libraries for NLP. Whenever a client enters text on the phone, I need to send it to the server for NLP processing and return the result to that client.
Note that there can be thousands of clients at a given time, and my server should handle all the requests.
Should I use Ktor for handling multiple clients?
How do I launch Python code for each client?</p>
|
<python><android><kotlin>
|
2023-05-27 19:00:50
| 1
| 5,113
|
Cool_Coder
|
76,348,441
| 2,029,836
|
Can't resolve import exception in Python using Visual Studio Code
|
<p>I am getting following exceptions when trying to execute a Python script in Visual Studio Code (VSC).
I suspect this is a simple env config issue but am new to Python and can't see it.</p>
<blockquote>
<p>Import "openai" could not be resolved by Pylance
Import "gradio" could not be resolved by Pylance</p>
</blockquote>
<p>I am using Mac Catalina 10.15.7.
VSC Version: 1.75.1.
I have installed Python, openai and gradio:</p>
<blockquote>
<p>--version Python 3.9.12 (base)<br />
--version openai 0.27.7</p>
<p>pip install gradio</p>
</blockquote>
<p>This is the script:</p>
<pre><code>import openai
import gradio as gr
print("Debugging AI script")
openai.api_key = "..."
print("API Key: " + openai.api_key)
messages = [
{
"role": "system",
        "content":"You are a helpful and kind AI Assistant"
},
]
def chatbot(input):
if input:
messages.append({
"role":"user",
"content": input
})
chat = openai.ChatCompletion.create(model="gpt-3.5-turbo",
messages = messages)
reply = chat.choices[0].message.content
messages.append({
"role":"assistant",
"content": reply
})
return reply
inputs = gr.inputs.Textbox(lines=7,
label="Chat with Mega Brain")
outputs = gr.outputs.Textbox(label="Speak Sir")
gr.Interface(fn=chatbot, inputs=inputs,
outputs=outputs, title="AI Mega-Brain Mega-Chat",
description="Ask the Brain anything",
theme="compact").launch(share=True)
</code></pre>
<p>I've tried a number of solutions, including those posted <a href="https://bobbyhadz.com/blog/python-no-module-named-openai#:%7E:text=The%20error%20%22Import%20%22openai%22,Python%20interpreter%20in%20your%20IDE." rel="nofollow noreferrer">here</a>, to no avail.</p>
<p>Any help appreciated. Thanks.</p>
|
<python><visual-studio-code><import><openai-api><gradio>
|
2023-05-27 18:34:12
| 1
| 2,281
|
dancingbush
|
76,348,350
| 18,092,798
|
Writing out sparse matrix as a compressed gzip file
|
<p>I have a sparse matrix <code>m</code> (scipy.sparse.coo_matrix) and an output path <code>p</code> (for example <code>p="~/matrix.mtx.gz"</code>). I'm using <code>scipy.io.mmwrite</code> to write out <code>m</code> (to the path <code>p</code>), but it doesn't appear to have any compression options. Is there a way to write out the sparse matrix as a gzipped file in Matrix Market format? I don't want to save it as a .mtx file and then have to read the file back in just to gzip it.</p>
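<p>As far as I can tell, <code>scipy.io.mmwrite</code> accepts an open file-like object as the target, so a <code>gzip</code> stream opened in binary mode can be passed directly and no intermediate <code>.mtx</code> file is needed (a sketch; the temp path is illustrative):</p>

```python
import gzip
import os
import tempfile

import numpy as np
from scipy import io, sparse

m = sparse.coo_matrix(np.eye(3))
p = os.path.join(tempfile.gettempdir(), "matrix.mtx.gz")

# Write straight into the compressed stream: mmwrite only needs a .write().
with gzip.open(p, "wb") as fh:
    io.mmwrite(fh, m)

# Round-trip check: mmread also accepts a file-like object.
with gzip.open(p, "rb") as fh:
    m2 = io.mmread(fh)
```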
|
<python><scipy>
|
2023-05-27 18:08:32
| 1
| 581
|
yippingAppa
|
76,348,268
| 20,130,220
|
Clicking load button with Selenium doesn't work
|
<p>I am trying to load all comments from this site to scrape them, but I can't figure out how to load them all.</p>
<p>When I run my code I get this error in the console:</p>
<blockquote>
<p>WebDriverWait(driver, 20).until(EC.element_to_be_clickable( File
"C:\Users\Jakub\dev\rok_quests\rok_quests\Lib\site-packages\selenium\webdriver\support\wait.py",
line 95, in until
raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message:</p>
</blockquote>
<p>Doesn't this mean that it can't find the button, or that it can't click on it?</p>
<p>The URL I use:</p>
<blockquote>
<p><a href="https://www.rok.guide/buildings/lyceum-of-wisdom/" rel="nofollow noreferrer">https://www.rok.guide/buildings/lyceum-of-wisdom/</a></p>
</blockquote>
<p>The code here is meant to load all comments from the comments section; then I will get <code>page_source</code> and scrape it.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import time
def scrape_comments(url):
# Set up Chrome driver options
chrome_options = Options()
chrome_options.add_argument("start-maximized")
chrome_options.add_argument('disable-infobars')
chrome_options.add_argument("--block-notifications")
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(options=chrome_options)
wait = WebDriverWait(driver, 10)
comments = []
try:
# Open the website
driver.get(url)
get_url = driver.current_url
wait.until(EC.url_to_be(url))
if get_url != url:
raise Exception('Site url doesnt match')
WebDriverWait(driver, 20).until(EC.visibility_of_element_located(
(By.CSS_SELECTOR, ".wpd-comment-text")))
while True:
try:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable(
(By.XPATH, "/html/body/div[3]/div/div[1]/main/div/div[2]/div[2]/div[3]/div[3]/div[51]/div/button"))).click()
print("clicked")
except TimeoutError:
print("No more to load")
break
print(driver.page_source)
return comments
finally:
# Close the web driver
driver.quit()
</code></pre>
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-05-27 17:49:03
| 1
| 346
|
IvonaK
|
76,348,264
| 1,927,108
|
How can I understand why Sphinx fails with code -4 within GitLab CI?
|
<p>I am trying to build the docs of my project with <em>Sphinx</em>, <em>tox</em>, and <em>GitLab CI</em>. Although it works fine locally I am getting this very unintuitive error without any proper error message on <em>GitLab CI</em>. Any ideas on what might be going on and how to fix it?</p>
<p>Everything is pretty much standard <em>PyScaffold</em>-based.</p>
<pre class="lang-none prettyprint-override"><code>$ tox -v -e docs
...
docs: commands[0]> sphinx-build -T -v --color -b html -d /builds/repo/dlproject/docs/_build/doctrees /builds/repo/dlproject/docs /builds/repo/dlproject/docs/_build/html
Running Sphinx v7.0.1
loading configurations for dlproject 0.0.post1.dev43+g1bfa027 ...
Creating file /builds/repo/dlproject/docs/api/dlproject.rst.
Creating file /builds/repo/dlproject/docs/api/dlproject.data.rst.
Creating file /builds/repo/dlproject/docs/api/dlproject.design.rst.
Creating file /builds/repo/dlproject/docs/api/dlproject.model.rst.
Creating file /builds/repo/dlproject/docs/api/dlproject.sequence.rst.
Creating file /builds/repo/dlproject/docs/api/modules.rst.
making output directory... done
locale_dir /builds/repo/dlproject/docs/locales/en/LC_MESSAGES does not exists
loading intersphinx inventory from https://www.sphinx-doc.org/en/master/objects.inv...
loading intersphinx inventory from https://docs.python.org/3.10/objects.inv...
loading intersphinx inventory from https://matplotlib.org/objects.inv...
loading intersphinx inventory from https://numpy.org/doc/stable/objects.inv...
loading intersphinx inventory from https://scikit-learn.org/stable/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/scipy/reference/objects.inv...
loading intersphinx inventory from https://setuptools.pypa.io/en/stable/objects.inv...
loading intersphinx inventory from https://pyscaffold.org/en/stable/objects.inv...
intersphinx inventory has moved: https://matplotlib.org/objects.inv -> https://matplotlib.org/stable/objects.inv
intersphinx inventory has moved: https://docs.scipy.org/doc/scipy/reference/objects.inv -> https://docs.scipy.org/doc/scipy/objects.inv
[autosummary] generating autosummary for: api/dlproject.data.rst, api/dlproject.design.rst, api/dlproject.model.rst, api/dlproject.rst, api/dlproject.sequence.rst, api/modules.rst, authors.rst, changelog.rst, contributing.rst, index.rst, license.rst, readme.rst
docs: exit -4 (7.65 seconds) /builds/repo/dlproject> sphinx-build -T -v --color -b html -d /builds/repo/dlproject/docs/_build/doctrees /builds/repo/dlproject/docs /builds/repo/dlproject/docs/_build/html pid=346
.pkg: _exit> python /usr/local/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
.pkg: exit None (0.00 seconds) /builds/repo/dlproject> python /usr/local/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=96
docs: FAIL code -4 (361.24=setup[353.59]+cmd[7.65] seconds)
evaluation failed :( (361.66 seconds)
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
</code></pre>
<p><code>tox.ini</code>:</p>
<pre class="lang-ini prettyprint-override"><code># Tox configuration file
# Read more under https://tox.wiki/
# THIS SCRIPT IS SUPPOSED TO BE AN EXAMPLE. MODIFY IT ACCORDING TO YOUR NEEDS!
[tox]
minversion = 3.24
envlist = default
isolated_build = True
[testenv]
description = Invoke pytest to run automated tests
setenv =
TOXINIDIR = {toxinidir}
passenv =
HOME
SETUPTOOLS_*
extras =
testing
commands =
pytest {posargs}
# # To run `tox -e lint` you need to make sure you have a
# # `.pre-commit-config.yaml` file. See https://pre-commit.com
# [testenv:lint]
# description = Perform static analysis and style checks
# skip_install = True
# deps = pre-commit
# passenv =
# HOMEPATH
# PROGRAMDATA
# SETUPTOOLS_*
# commands =
# pre-commit run --all-files {posargs:--show-diff-on-failure}
[testenv:{build,clean}]
description =
build: Build the package in isolation according to PEP517, see https://github.com/pypa/build
clean: Remove old distribution files and temporary build artifacts (./build and ./dist)
# https://setuptools.pypa.io/en/stable/build_meta.html#how-to-use-it
skip_install = True
changedir = {toxinidir}
deps =
build: build[virtualenv]
passenv =
SETUPTOOLS_*
commands =
clean: python -c 'import shutil; [shutil.rmtree(p, True) for p in ("build", "dist", "docs/_build")]'
clean: python -c 'import pathlib, shutil; [shutil.rmtree(p, True) for p in pathlib.Path("src").glob("*.egg-info")]'
build: python -m build {posargs}
# By default, both `sdist` and `wheel` are built. If your sdist is too big or you don't want
# to make it available, consider running: `tox -e build -- --wheel`
[testenv:{docs,doctests,linkcheck}]
description =
docs: Invoke sphinx-build to build the docs
doctests: Invoke sphinx-build to run doctests
linkcheck: Check for broken links in the documentation
passenv =
SETUPTOOLS_*
setenv =
DOCSDIR = {toxinidir}/docs
BUILDDIR = {toxinidir}/docs/_build
docs: BUILD = html
doctests: BUILD = doctest
linkcheck: BUILD = linkcheck
deps =
-r {toxinidir}/docs/requirements.txt
# ^ requirements.txt shared with Read The Docs
commands =
sphinx-build -T -v --color -b {env:BUILD} -d "{env:BUILDDIR}/doctrees" "{env:DOCSDIR}" "{env:BUILDDIR}/{env:BUILD}" {posargs}
[testenv:publish]
description =
Publish the package you have been developing to a package index server.
By default, it uses testpypi. If you really want to publish your package
to be publicly accessible in PyPI, use the `-- --repository pypi` option.
skip_install = True
changedir = {toxinidir}
passenv =
# See: https://twine.readthedocs.io/en/latest/
TWINE_USERNAME
TWINE_PASSWORD
TWINE_REPOSITORY
TWINE_REPOSITORY_URL
deps = twine
commands =
python -m twine check dist/*
python -m twine upload {posargs:--repository {env:TWINE_REPOSITORY:testpypi}} dist/*
</code></pre>
<p><code>.gitlab-ci.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># This file is a template, and might need editing before it works on your project.
stages:
- prepare
- test
- deploy
- release
variables:
# Change cache dirs to be inside the project (can only cache local items)
PIP_CACHE_DIR: $CI_PROJECT_DIR/.cache/pip
PIPX_HOME: $CI_PROJECT_DIR/.cache/pipx
PRE_COMMIT_HOME: $CI_PROJECT_DIR/.cache/pre-commit
# Coveralls configuration
CI_NAME: gitlab-ci
CI_BRANCH: $CI_COMMIT_REF_NAME
CI_BUILD_NUMBER: $CI_PIPELINE_ID
CI_BUILD_URL: $CI_PIPELINE_URL
# TODO: You will also need to set `COVERALLS_REPO_TOKEN` to work with coveralls.
# We recommend that you do that via GitLab CI web interface.
# - https://coveralls-python.readthedocs.io/en/latest/usage/index.html
# - https://docs.gitlab.com/ee/ci/variables/
workflow:
rules:
# Restrict the number of times the pipeline runs to save resources/limits
- if: $CI_PIPELINE_SOURCE == 'merge_request_event'
variables:
# Specific merge request configurations for coveralls
CI_BRANCH: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME
CI_PULL_REQUEST: $CI_MERGE_REQUEST_IID
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == 'push'
when: never # Avoid running the pipeline twice (push + merge request)
- if: $CI_COMMIT_BRANCH || $CI_COMMIT_TAG
# You can also set recurring execution of the pipeline, see:
# https://docs.gitlab.com/ee/ci/pipelines/schedules.html
default:
before_script:
- python --version # useful for debugging
- apt-get install -yqq --no-install-recommends gcc
# Setup git (used for setuptools-scm)
- git config --global user.email "you@example.com"
- git config --global user.name "Your Name"
# Install dependencies for the testing environment
- pip install -U pip tox pipx
check:
stage: prepare
image: "python:3.10-bullseye"
script:
- pipx run pre-commit run --all-files --show-diff-on-failure
build:
stage: prepare
image: "python:3.10-bullseye"
script:
- tox -v -e clean,build
variables:
GIT_DEPTH: "0" # deep-clone
artifacts:
expire_in: 1 day
paths: [dist]
.test_script: &test_script
dependencies: [build]
variables:
COVERALLS_PARALLEL: "true"
COVERALLS_FLAG_NAME: $CI_JOB_NAME
script:
- tox -v --installpkg dist/*.whl -- -rFEx --durations 10 --color yes
- pipx run coverage xml -o coverage.xml
#- pytest --cov --cov-report term --cov-report xml:coverage.xml
coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage.xml
py38:
stage: test
image: "python:3.8-bullseye"
<<: *test_script
py39:
stage: test
image: "python:3.9-bullseye"
<<: *test_script
py310:
stage: test
image: "python:3.10-bullseye"
<<: *test_script
mamba:
stage: test
image: "condaforge/mambaforge"
before_script:
- mamba install -y pip pipx 'tox!=4.0.*,!=4.1.*' c-compiler cxx-compiler zlib
<<: *test_script
#upload-coverage:
# stage: deploy
# image: "python:3.10"
# script:
# - pipx run coveralls --finish
deploy:
stage: deploy
dependencies: [build]
image: "python:3.10"
rules: [if: $CI_COMMIT_TAG]
script:
- tox -v -e build
- tox -v -e publish
pages:
stage: deploy
cache: []
image: "python:3.10-bullseye"
script:
- pip install tox pytest sphinx>=7.0 furo
- tox -v -e docs
- mv docs/_build/html public
artifacts:
paths:
- public
# only:
# - main
</code></pre>
|
<python><gitlab-ci><python-sphinx><tox><pyscaffold>
|
2023-05-27 17:48:41
| 0
| 1,440
|
gkcn
|
76,348,254
| 11,720,193
|
Error in syntax of fields while exporting data from Botify
|
<p>I am trying to pull website crawl data from Botify using Python, leveraging the Botify Query Language. To retrieve the crawled values from Botify, the following JSON needs to be sent to Botify using Python's <code>requests.put()</code>.</p>
<p>Below is a sample JSON which works fine when sent to Botify. However, whenever I add a few other metadata columns to the <code>query</code> --> <code>dimensions</code> section, a FAILURE message is sent back to me.</p>
<p>Basically, I am looking for the proper syntax to include these additional columns (listed at the end) in the <code>query</code> --> <code>dimensions</code> section:</p>
<p>Input JSON (Fails):-</p>
<pre><code>{
"job_type": "export",
"payload": {
"username": "myID",
"project": "website.com",
"export_size": 50,
"formatter": "csv",
"formatter_config": {
"delimiter": ",",
"print_delimiter": "False",
"print_header": "True",
"header_format": "verbose"
},
"connector": "direct_download",
"extra_config": {},
"query": {
"collections": ["crawl.20230515"],
"query": {
"dimensions": ["url",
"crawl.20230515.date_crawled",
"crawl.20230515.content_type",
"crawl.20230515.http_code",
"segments.pagetype.value",
"compliant.is_compliant",
"metadata.robots.noindex",
"http_redirect.to.final.path"
],
"metrics": [],
"sort": [1]
}
}
}
}
</code></pre>
<p>But it fails when I add the following fields to the <code>dimensions</code> list in the query above. I can't get the syntax right.</p>
<p>Fields causing issue in JSON query:</p>
<pre><code>compliant.is_compliant
metadata.robots.noindex
http_redirect.to.final.path
</code></pre>
<p>Error message received:-</p>
<pre><code>{'job_id': 1831591, 'job_type': 'export', 'job_url': '/v1/jobs/1831591', 'job_status': 'FAILED', 'results': {'nb_lines': None}, 'date_created': '2023-05-27T17:12:54.074062Z', 'payload': {'query': {'query': {'sort': [1], 'metrics': [], 'dimensions': ['url', 'crawl.20230515.date_crawled', 'crawl.20230515.content_type', 'crawl.20230515.http_code', 'segments.pagetype.value', 'segments.country.value', 'crawl.20230515.is_compliant']}, 'collections': ['crawl.20230515']}, 'export_size': 50, 'connector': 'direct_download', 'formatter': 'csv', 'formatter_config': {'delimiter': ',', 'print_header': True, 'header_format': 'verbose', 'print_delimiter': False}, 'extra_config': {}, 'sort_url_id': False, 'export_job_name': None}, 'user': 'XXXXXXXX', 'metadata': None, 'crawl_date': None}
</code></pre>
<p>I am new to Botify world and I tried searching the net a lot but didn't find the syntax for using these fields in the JSON query segment.</p>
<p>Any help is appreciated. Thanks.</p>
|
<python><python-requests><bots>
|
2023-05-27 17:45:44
| 0
| 895
|
marie20
|
76,348,088
| 6,290,211
|
How to simulate a starting queue before opening times in a Simulation process with Simpy?
|
<p>I am studying SimPy and I came across <a href="https://medium.com/swlh/simulating-a-parallel-queueing-system-with-simpy-6b7fcb6b1ca1" rel="nofollow noreferrer">this interesting tutorial</a> that allows to simulate a queue in a bank.</p>
<p>I wanted to know if it is possible and how to create an initial queue.</p>
<p>Let's assume that the bank opens at 09:00 but we have already 20 customers waiting to be served + the other that will come with the defined probabilistic arrival rate.</p>
<p>How to do that? Thank you for your support.</p>
<pre><code>"""
Bank with multiple queues example
Covers:
- Resources: Resource
- Iterating processes
Scenario:
A multi-counter bank with a random service time and customers arrival process. Based on the
program bank10.py from TheBank tutorial of SimPy 2. (KGM)
By Aaron Janeiro Stone
"""
from simpy import *
import random
maxNumber = 30 # Max number of customers
maxTime = 400.0     # Runtime limit
timeInBank = 20.0 # Mean time in bank
arrivalMean = 10.0 # Mean of arrival process
seed = 12345 # Seed for simulation
def Customer(env, name, counters):
arrive = env.now
Qlength = [NoInSystem(counters[i]) for i in range(len(counters))]
print("%7.4f %s: Here I am. %s" % (env.now, name, Qlength))
for i in range(len(Qlength)):
if Qlength[i] == 0 or Qlength[i] == min(Qlength):
choice = i # the chosen queue number
break
with counters[choice].request() as req:
# Wait for the counter
yield req
wait = env.now - arrive
# We got to the counter
print('%7.4f %s: Waited %6.3f' % (env.now, name, wait))
tib = random.expovariate(1.0 / timeInBank)
yield env.timeout(tib)
print('%7.4f %s: Finished' % (env.now, name))
def NoInSystem(R):
"""Total number of customers in the resource R"""
return max([0, len(R.put_queue) + len(R.users)])
def Source(env, number, interval, counters):
for i in range(number):
c = Customer(env, 'Customer%02d' % i, counters)
env.process(c)
t = random.expovariate(1.0 / interval)
yield env.timeout(t)
# Setup and start the simulation
print('Bank with multiple queues')
random.seed(seed)
env = Environment()
counters = [Resource(env), Resource(env)]
env.process(Source(env, maxNumber, arrivalMean, counters))
env.run(until=maxTime)
</code></pre>
|
<python><simulation><simpy><traffic-simulation><event-simulation>
|
2023-05-27 17:10:44
| 1
| 389
|
Andrea Ciufo
|
76,348,086
| 9,877,065
|
PyQt5 QThread setTerminationEnabled(False) seems not to work
|
<p>Borrowing from <a href="https://stackoverflow.com/questions/27961098/pyside-qthread-terminate-causing-fatal-python-error">PySide QThread.terminate() causing fatal python error</a>, I tried this example:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Looper(QtCore.QThread):
"""QThread that prints natural numbers, one by one to stdout."""
def __init__(self, *args, **kwargs):
super(Looper, self).__init__(*args, **kwargs)
# self.setTerminationEnabled(True)
self.setTerminationEnabled(False)
def run(self):
i = 0
while True:
self.msleep(100)
print(i)
i += 1
# Initialize and start a looper.
looper = Looper()
looper.setTerminationEnabled(False)
looper.start()
# Sleep main thread for 3 seconds.
QtCore.QThread.sleep(3)
# Terminate looper.
looper.terminate()
app = QtWidgets.QApplication([])
app.exec_()
</code></pre>
<p>I could be wrong, but I expected <code>looper.setTerminationEnabled(False)</code> or <code>self.setTerminationEnabled(False)</code> to prevent the QThread from being terminated by <code>terminate()</code>,
according to <a href="https://doc.qt.io/qtforpython-6/PySide6/QtCore/QThread.html#PySide6.QtCore.PySide6.QtCore.QThread.setTerminationEnabled" rel="nofollow noreferrer">https://doc.qt.io/qtforpython-6/PySide6/QtCore/QThread.html#PySide6.QtCore.PySide6.QtCore.QThread.setTerminationEnabled</a>.</p>
<p>But it doesn't seem to work for me. Any hints?</p>
<p>I am using <code>Qt: v 5.15.2 PyQt: v 5.15.7</code></p>
|
<python><multithreading><pyqt><pyqt5><qthread>
|
2023-05-27 17:10:35
| 1
| 3,346
|
pippo1980
|
76,348,055
| 5,330,527
|
Check if an id is in another model's field, and get the connected values
|
<p>Given these two models:</p>
<pre><code>class Event(models.Model):
title = models.CharField(max_length=200)
class dateEvent(models.Model):
venue = models.ForeignKey(Venue, on_delete=models.CASCADE,null=True, blank=True)
event = models.ForeignKey('Event', on_delete=models.CASCADE)
class Venue(models.Model):
venue_name = models.CharField(max_length=50)
</code></pre>
<p>How can I filter only the venues that appear in a <code>dateEvent</code>? And how can I get the <code>Event</code> details linked to a specific venue?</p>
<p>Right now I am attempting this, which raises <code>not iterable</code> and <code>does not exist</code> errors:</p>
<pre><code>venues = list(Venue.objects.filter(dateevent__venue.icontains=id).values('venue_name', 'dateevent__event.id')
</code></pre>
|
<python><django><django-models><django-views>
|
2023-05-27 17:03:42
| 1
| 786
|
HBMCS
|
76,347,562
| 3,821,009
|
Initialize polars dataframe from list of structs
|
<p>These two make sense:</p>
<pre><code>df = polars.DataFrame(dict(
j=1,
))
print(df)
print(df.schema)
j
1
shape: (1, 1)
{'j': Int64}
df = polars.DataFrame(dict(
j=range(2)
))
print(df)
print(df.schema)
j
0
1
shape: (2, 1)
{'j': Int64}
</code></pre>
<p>However:</p>
<pre><code>cols = list('ab')
df = polars.DataFrame(dict(
j=polars.struct([polars.lit(j).alias(col) for j, col in enumerate(cols)], eager=True)
))
print(df)
print(df.schema)
j
{0,1}
shape: (1, 1)
{'j': Struct([Field('a', Int32), Field('b', Int32)])}
df = polars.DataFrame(dict(
j=[polars.struct([polars.lit(j).alias(col) for j, col in enumerate(cols)], eager=True)
for k in range(2)]
))
print(df)
print(df.schema)
j
[{0,1}]
[{0,1}]
shape: (2, 1)
{'j': List(Struct([Field('a', Int32), Field('b', Int32)]))}
</code></pre>
<p>Why did changing from a single <code>polars.struct</code> to a list of <code>polar.struct</code>s change the type of the element itself (from <code>Struct</code> to <code>List(Struct)</code>)? I'd expect the result of the last one above to be the same as this:</p>
<pre><code>df = (polars.DataFrame(dict(
j=range(2)
))
.with_columns(
polars.struct([polars.lit(j).alias(col) for j, col in enumerate(cols)], eager=True).alias('j')
))
print(df)
print(df.schema)
j
{0,1}
{0,1}
shape: (2, 1)
{'j': Struct([Field('a', Int32), Field('b', Int32)])}
</code></pre>
<p>Is there a shorter / better way to initialize the dataframe with a list of structs (i.e. a shorter way to get the same result as the last code example above)?</p>
|
<python><python-polars>
|
2023-05-27 15:10:27
| 1
| 4,641
|
levant pied
|
76,347,524
| 1,481,314
|
AWS Lambda works but Lambda@Edge throws error
|
<p>I have written a python lambda to redirect unauthorized users to Cognito. The lambda works when I run a test event in the lambda console, but when I try to hit the CloudFront distribution, I get the following error:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'app': No module named
'pyjwt'
Traceback (most recent call last):
</code></pre>
<p>I am not referring to pyjwt in my code. I am doing <code>import jwt</code>, and my Lambda uses these packages. I have tried deleting the Lambda@Edge association and the Lambda.</p>
<p><a href="https://i.sstatic.net/YYBBf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YYBBf.png" alt="enter image description here" /></a></p>
<p>I'm completely stuck at this point. What can I try next?</p>
|
<python><amazon-web-services><aws-lambda-edge><pyjwt>
|
2023-05-27 15:01:18
| 1
| 1,718
|
Danny Ellis Jr.
|
76,347,173
| 5,166,312
|
How to use pyOCD, or open OCD from Python
|
<p>I want to use <code>pyOCD</code> (or OpenOCD) for some operations with an <code>STLink</code> and an <code>STM32</code> MCU. I found this list of commands <a href="https://pyocd.io/docs/command_reference.html#find" rel="nofollow noreferrer">https://pyocd.io/docs/command_reference.html#find</a>, but it doesn't make me much wiser - I really do not understand the syntax.</p>
<p>E.g.,
I want to load a <em>.bin</em> file into memory at a specific address.
The documentation says: <em>load FILENAME [ADDR]</em>, so I wrote in the console: <em><strong>pyocd load myBin.bin 0x8005000</strong></em>, but it does not work - it ignores the address. It probably flashes the bin file, just not at my desired address.</p>
<p>Basically, I want to write 6 functions in Python:</p>
<ol>
<li>check if some ST link is connected to PC (windows)</li>
<li>check if some MCU is connected to the STlink</li>
<li>erase connected MCU</li>
<li>load .bin file to the MCU</li>
<li>be able to read and write specific values at specific addresses on the MCU</li>
<li>reset MCU</li>
</ol>
<p>It would be great to use it as a Python library, but I absolutely do not know how to use it that way, so I will settle for the command line - using subprocess in Python.</p>
<p>Thank you</p>
|
<python><gdb><stm32><openocd>
|
2023-05-27 13:30:06
| 1
| 337
|
WITC
|
76,347,065
| 11,594,202
|
Custom Exception for missing fields in pydantic
|
<p>I want to catch the exception that a Pydantic validator raises. I currently have a working setup like so:</p>
<pre><code>class MissingPhoneNumber(ValueError):
pass
class SMS(BaseModel):
id: str
phone_number: Optional[str] = None
@validator('phone_number', always=True)
def phone_number(cls, v):
if not v:
raise MissingPhoneNumber
return v
if __name__ == '__main__':
try:
event = SMS()
except ValidationError as e:
for e2 in e.errors():
if e2['type'] == 'value_error.missingphonenumber':
print('caught Exception')
raise e
</code></pre>
<p>However, it is quite messy and I'd rather catch it like so:</p>
<pre><code>if __name__ == '__main__':
try:
event = SMS()
except MissingPhoneNumber:
print('caught Exception')
</code></pre>
<p>Is there a way in Pydantic to accomplish this? Running on Pydantic 1.10.9</p>
|
<python><exception><pydantic>
|
2023-05-27 13:05:32
| 1
| 920
|
Jeroen Vermunt
|
76,346,946
| 5,431,734
|
size on disk of pickled objects
|
<p>I want to serialize an object (which contains other objects etc) and I would like to exclude (if possible) attributes that take up a lot space when the pickle file is saved on the disk. I plan to do this by deleting the attribute while manipulating <code>__getstate__(self)</code> of my (top level) class, for example:</p>
<pre><code>def __getstate__(self):
attr = self.__dict__.copy()
del attr['some_key'].some_other.key
return attr
</code></pre>
<p>Is there a way to determine how much space each attribute occupies? I don't necessarily want to do it on the fly; saving a full pickle once and then interrogating it to see which keys are the heaviest, so I know how to edit my <code>__getstate__()</code>, would be fine.</p>
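<p>For reference, one rough way to rank attributes by serialized size (a sketch; the class and attribute names are illustrative) is to pickle each attribute value separately:</p>

```python
import pickle

def pickled_sizes(obj):
    """Return {attribute_name: size_in_bytes} for obj's __dict__,
    pickling each attribute separately (a rough per-attribute estimate)."""
    return {k: len(pickle.dumps(v)) for k, v in vars(obj).items()}

class Example:
    def __init__(self):
        self.small = 1
        self.big = list(range(10_000))

sizes = pickled_sizes(Example())
# The list attribute dominates the pickled size
heaviest = max(sizes, key=sizes.get)
```

<p>Note that pickling attributes separately slightly over-counts when attributes share references, since shared objects are serialized once per attribute instead of once per pickle.</p>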
|
<python><pickle>
|
2023-05-27 12:33:47
| 1
| 3,725
|
Aenaon
|
76,346,900
| 8,665,962
|
Finding a clever way to set a threshold given a list of loss values
|
<p>Assume I have a list of losses plotted in the following KDE plot:</p>
<p><a href="https://i.sstatic.net/ioufT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ioufT.png" alt="enter image description here" /></a></p>
<p>If the goal is to spot the outliers, the best threshold would be clearly around <code>0.75</code>, where the value of density is the minimum possible (near zero), and it is at the beginning of the tail.</p>
<p>Given a list of loss values, how can I (as accurately as possible) set such a threshold at the beginning of the tail?</p>
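<p>For illustration, a crude stdlib-only sketch (a histogram stand-in for the KDE, with a tunable "near zero" cutoff that is an assumption): find the first near-empty bin to the right of the densest bin and use its left edge as the threshold.</p>

```python
def tail_threshold(losses, bins=50):
    """Rough sketch: histogram the losses and return the left edge of the
    first near-empty bin to the right of the densest bin - i.e. where the
    density first drops to ~zero at the start of the tail."""
    lo, hi = min(losses), max(losses)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in losses:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    peak = counts.index(max(counts))
    cutoff = len(losses) * 0.005  # "near zero" density (tunable assumption)
    for i in range(peak + 1, bins):
        if counts[i] <= cutoff:
            return lo + i * width
    return hi

# Example: a dense mode between 0.3 and 0.69 plus a sparse tail beyond 0.75
losses = [0.3 + 0.01 * (k % 40) for k in range(400)] + [0.8, 0.9, 1.1]
thr = tail_threshold(losses)
```

<p>A smoother alternative would be to fit a proper KDE (e.g. <code>scipy.stats.gaussian_kde</code>) and take the argmin of the density between the mode and the tail.</p>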
|
<python><python-3.x><anomaly-detection>
|
2023-05-27 12:24:37
| 1
| 574
|
Dave
|
76,346,896
| 12,902,027
|
How can you find where a specified function is defined in a library?
|
<p>For example, I am investigating the implementation of the <code>torch.nn.Embedding</code> class.
I guessed this would be in some file in the <code>nn</code> directory and found the class in <a href="https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/sparse.py#L13" rel="nofollow noreferrer">torch/nn/modules/sparse.py</a>.</p>
<p>In this class, the <code>functional.embedding()</code> function is invoked, and I found that function in <a href="https://github.com/pytorch/pytorch/blob/main/torch/nn/functional.py#L2127" rel="nofollow noreferrer">torch/nn/functional.py</a>.</p>
<p>In that function, <code>torch.embedding()</code> is invoked. Now I have no idea where it is defined.</p>
<p>Can you tell me how to find it, since there doesn't seem to be an <code>embedding.py</code> in the <code>torch</code> directory?</p>
<p>Can you share some tips for finding a specific function's definition?</p>
<p>P.S. I have found one in <a href="https://github.com/pytorch/pytorch/blob/main/torch/jit/_shape_functions.py#458" rel="nofollow noreferrer">torch/jit/_shape_functions.py</a>, but I am not at all sure it is the one I am looking for.</p>
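<p>For Python-level definitions, the standard-library <code>inspect</code> module can report where an object was defined; it fails for C-implemented callables (as is likely the case for <code>torch.embedding</code>, which is bound in the C++ extension, so no Python source file exists for it):</p>

```python
import inspect

def where_defined(obj):
    """Return the source file of a Python-level object, or None if it is
    implemented in C (there is no Python source to point at)."""
    try:
        return inspect.getsourcefile(obj)
    except TypeError:
        return None

# Works for pure-Python code (this function lives in inspect.py):
path = where_defined(inspect.getsourcefile)
# C builtins have no Python source:
builtin = where_defined(len)
```

<p>When <code>where_defined</code> returns <code>None</code>, the definition is in native code, and you have to search the C/C++ sources (for PyTorch, typically the generated ATen bindings) rather than the Python tree.</p>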
|
<python><pytorch><embedding>
|
2023-05-27 12:23:44
| 1
| 301
|
agongji
|
76,346,891
| 11,232,272
|
Should I deactivate current conda env before creating a new one?
|
<p>Does it make any difference in creating a new conda environment from which conda environment? I mean, should I create all of my environments from the <code>base</code> environment?</p>
|
<python><conda>
|
2023-05-27 12:22:48
| 1
| 741
|
Matin Zivdar
|
76,346,847
| 11,720,193
|
Create JSON dynamically reading file from S3
|
<p>I am working on AWS Glue and writing a <code>requests</code> program to query Botify (with BQL). I need a JSON payload (required for the POST) that is created <strong>dynamically</strong> from the queried fields. The fields that need to be queried reside in a text file on S3. I should be able to read the S3 file and create the JSON string as given below.</p>
<p>Also, the field "myID" in the expected JSON should be replaced with the actual id stored in a variable.
Please help.</p>
<p>S3 file contents:</p>
<pre><code>date_crawled
content_type
http_code
compliant.is_compliant
compliant.reason.http_code
compliant.reason.canonical
</code></pre>
<p>Expected JSON string:-</p>
<pre><code>payload = """
{
"job_type": "export",
"payload": {
"username": "myID",
"project": "abc123.com",
"export_size": 50,
"formatter": "csv",
"formatter_config": {
"delimiter": ",",
"print_delimiter": "False",
"print_header": "True",
"header_format": "verbose"
},
"connector": "direct_download",
"extra_config": {},
"query": {
"collections": ["crawl.20230515"],
"query": {
"dimensions": ["url",
"crawl.20230515.date_crawled",
"crawl.20230515.content_type",
"crawl.20230515.http_code",
"compliant.is_compliant",
"compliant.reason.http_code",
"compliant.reason.canonical"
"
],
"metrics": [],
"sort": [1]
}
}
}
}
"""
</code></pre>
<p>I am new to Python. So any help is immensely appreciated.</p>
<p>Thanks.</p>
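<p>A minimal sketch of the dynamic part (assuming the S3 lines have already been read into a list, one field per line; the collection name, project, and other payload values are taken verbatim from the expected JSON above, and the prefixing rule for non-<code>compliant.</code> fields is inferred from it):</p>

```python
import json

# Lines as they would be read from the S3 text file (assumption: one field per line)
fields = [
    "date_crawled",
    "content_type",
    "http_code",
    "compliant.is_compliant",
    "compliant.reason.http_code",
    "compliant.reason.canonical",
]

collection = "crawl.20230515"
my_id = "actual-user-id"  # would come from your variable

# In the expected JSON, fields starting with "compliant." are not prefixed
# with the collection name, while the crawl fields are.
dimensions = ["url"] + [
    f if f.startswith("compliant.") else f"{collection}.{f}" for f in fields
]

payload = {
    "job_type": "export",
    "payload": {
        "username": my_id,
        "project": "abc123.com",
        "export_size": 50,
        "formatter": "csv",
        "formatter_config": {
            "delimiter": ",",
            "print_delimiter": "False",
            "print_header": "True",
            "header_format": "verbose",
        },
        "connector": "direct_download",
        "extra_config": {},
        "query": {
            "collections": [collection],
            "query": {"dimensions": dimensions, "metrics": [], "sort": [1]},
        },
    },
}

payload_json = json.dumps(payload, indent=2)
```

<p>Building a dict and serializing with <code>json.dumps</code> avoids hand-assembling the JSON string and guarantees valid output; <code>payload</code> can also be passed directly as the <code>json=</code> argument of <code>requests.post</code>.</p>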
|
<python><python-requests><python-jsons>
|
2023-05-27 12:10:07
| 2
| 895
|
marie20
|
76,346,786
| 1,935,655
|
Python OpenCV Mediapipe Overlay Triangle on landmark on face
|
<p>I want to take a triangle from an image and overlay it at the same location on my face in the video camera feed.</p>
<p>I am using the Python mediapipe library to get the landmarks, and it seems I am able to get the correct triangle, but it is not overlaid at the correct location.</p>
<p>Mediapipe landmarks:
<a href="https://user-images.githubusercontent.com/11573490/109521608-72aeed00-7ae8-11eb-9539-e07c406cc65b.jpg" rel="nofollow noreferrer">Landmarks</a></p>
<p>Here is my image:</p>
<p><a href="https://i.sstatic.net/gXxcd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gXxcd.png" alt="myimage" /></a></p>
<p>Here is the code:</p>
<pre><code>import cv2
import numpy as np
import mediapipe as mp
# PROBLEM: There is a problem here It places it in the wrong place and it seems to be drawing a square,
# but without transparency.
def overlay_triangle_in_xyz_postions(source_image, triangle_image, array_xx_yy_zz: []):
if source_image.shape[2] == 3:
source_image = cv2.cvtColor(source_image, cv2.COLOR_BGR2BGRA)
if triangle_image.shape[2] == 3:
triangle_image = cv2.cvtColor(triangle_image, cv2.COLOR_BGR2BGRA)
# Get your mediapipe landmarks - replace this with actual landmark detection
# https://user-images.githubusercontent.com/11573490/109521608-72aeed00-7ae8-11eb-9539-e07c406cc65b.jpg
# you can vide the landmarks here.
# landmarks = np.array([[100, 100], [200, 100], [150, 200]])
landmarks = np.array(array_xx_yy_zz)
# Assuming the triangle_image is an equilateral triangle,
# we'll set the vertices to be the top middle and bottom corners
triangle_vertices = np.array([[triangle_image.shape[1] / 2, 0], [0, triangle_image.shape[0]],
[triangle_image.shape[1], triangle_image.shape[0]]])
# Get the affine transform matrix
M = cv2.getAffineTransform(triangle_vertices.astype(np.float32), landmarks.astype(np.float32))
# Separate the alpha channel from the rest of the triangle image
triangle_image_rgb = triangle_image[:, :, :3]
triangle_image_alpha = triangle_image[:, :, 3]
# Warp the triangle image (RGB channels only) to fit the landmarks
warped_triangle_rgb = cv2.warpAffine(triangle_image_rgb, M, (source_image.shape[1], source_image.shape[0]))
# Warp the triangle image (alpha channel only) to fit the landmarks
warped_triangle_alpha = cv2.warpAffine(triangle_image_alpha, M, (source_image.shape[1], source_image.shape[0]))
# Recombine the RGB and alpha channels
warped_triangle = cv2.merge([warped_triangle_rgb, warped_triangle_alpha])
# Normalize the alpha mask to keep intensity between 0 and 1
alpha = warped_triangle[:, :, 3].astype(float) / 255
# Create 3 channel alpha mask
alpha = cv2.merge([alpha, alpha, alpha, alpha])
# Alpha blending
final_image = (alpha * warped_triangle + (1 - alpha) * source_image).astype(np.uint8)
return final_image
def get_triangle_from_image():
image_element = cv2.imread("./myface.png", cv2.IMREAD_UNCHANGED)
image = cv2.cvtColor(image_element, cv2.COLOR_BGR2RGB)
results = face_mesh.process(image)
# Check if any face is detected
if results.multi_face_landmarks:
for face_landmarks in results.multi_face_landmarks:
# Get the coordinates of the landmarks for the triangle
landmarks = [[int(face_landmarks.landmark[i].x * image.shape[1]),
int(face_landmarks.landmark[i].y * image.shape[0])] for i in [10, 108, 151]]
triangle_image_rbg = extract_triangle_from_landmarks(image, landmarks)
triangle_image = cv2.cvtColor(triangle_image_rbg, cv2.COLOR_BGR2RGBA)
# This should return a triangle with alpha channel
return triangle_image
def extract_triangle_from_landmarks(image, landmarks):
# Create a mask for the image
mask = np.zeros(image.shape, dtype=np.uint8)
# Draw the triangle on the mask
triangle_cnt = np.array(landmarks).reshape((-1, 1, 2)).astype(np.int32)
cv2.drawContours(mask, [triangle_cnt], 0, (255, 255, 255), -1)
# Bitwise-and the mask and the original image to get the triangle
triangle_image = cv2.bitwise_and(image, mask)
# Create bounding rectangle around the triangle
(x, y, w, h) = cv2.boundingRect(triangle_cnt)
# Crop the image using the bounding rectangle
triangle_image = triangle_image[y:y + h, x:x + w]
return triangle_image
def overlay_triangle(face_image, triangle_image, landmarks):
# Get the size of the triangle_image
h, w = triangle_image.shape[:2]
# Create a mask for the triangle_image
mask = np.zeros((h, w), dtype=np.uint8)
cv2.fillConvexPoly(mask, np.array([[0, 0], [w // 2, h], [w, 0]], dtype=np.int32), 255)
# Compute the bounding rectangle for the triangle
(x, y, w, h) = cv2.boundingRect(np.array(landmarks))
# Adjust the landmarks to the bounding rectangle
landmarks = [[x[0] - x, x[1] - y] for x in landmarks]
# Compute the affine transform that maps the triangle_image to the face_image
warp_mat = cv2.getAffineTransform(np.float32([[0, 0], [w // 2, h], [w, 0]]), np.float32(landmarks))
# Warp the triangle_image to match the triangle on the face_image
warped_image = cv2.warpAffine(triangle_image, warp_mat, (face_image.shape[1], face_image.shape[0]))
# Create a mask for the triangle on the face_image
mask = cv2.warpAffine(mask, warp_mat, (face_image.shape[1], face_image.shape[0]))
# Use the mask to blend the warped_image into the face_image
face_image = cv2.bitwise_and(face_image, cv2.bitwise_not(cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)))
face_image = cv2.bitwise_or(face_image, cv2.bitwise_and(warped_image, cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)))
return face_image
# Initialize MediaPipe Face Mesh
mp_face_mesh = mp.solutions.face_mesh
face_mesh = mp_face_mesh.FaceMesh()
# Start the webcam feed
cap = cv2.VideoCapture(0)
while cap.isOpened():
ret, frame = cap.read()
# Strip out ALPHA for processing.
image = bgr_image = frame[:, :, :3]
results = face_mesh.process(image)
# Check if any face is detected
if results.multi_face_landmarks:
triangle = get_triangle_from_image()
for face_landmarks in results.multi_face_landmarks:
            # PROBLEM? This might be incorrect...
a = 10
b = 151
c = 108
ret_frame = overlay_triangle_in_xyz_postions(image, triangle, [[a, a], [b, a], [c, b]] )
cv2.imshow('Triangle', ret_frame)
break
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
|
<python><opencv><mediapipe>
|
2023-05-27 11:54:20
| 0
| 1,214
|
LUser
|
76,346,764
| 2,889,716
|
Celery task queue is not registered
|
<p>In my code, only task1-queue is registered. Why?
<code>pass-params.py</code></p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
from celery import Celery, chain
app = FastAPI()
celery_app = Celery('tasks',
broker='redis://localhost:6379/0', # broker URL
backend='redis://localhost:6379/1', # backend URL
)
celery_app.conf.task_routes = {
'celery_app.task1': {'queue': 'task1-queue'},
'celery_app.task2': {'queue': 'task2-queue'},
'celery_app.task3': {'queue': 'task3-queue'
},
}
@celery_app.task(name='task1')
def task1(name):
print(name)
return f'Hello {name}'
@celery_app.task(name='task2')
def task2(message):
print(message)
return f'Previous message is: {message}'
@celery_app.task(name='task3')
def task3(name, message):
print(f'Name: {name} Message: {message}')
return f'Previous message is: {message}'
t1 = celery_app.signature('task1', args=["Ehsan", ]).set(queue="task1-queue")
t2 = celery_app.signature('task2').set(queue="task2-queue")
t3 = celery_app.signature('task3').set(queue="task3-queue")
chain_of_tasks = chain(t1, t2, t3)
result = chain_of_tasks.apply_async()
</code></pre>
<p>I run celery:</p>
<pre><code>celery -A pass-params:celery_app worker --loglevel=info
</code></pre>
<p>The output is like this:</p>
<pre><code>
-------------- celery@ehsan-pc v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.19.0-42-generic-x86_64-with-glibc2.35 2023-05-27 15:18:55
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7f93d17c0580
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: redis://localhost:6379/1
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
.> task1-queue exchange=task1-queue(direct) key=task1-queue
[tasks]
. task1
. task2
. task3
[2023-05-27 15:18:55,433: INFO/MainProcess] Connected to redis://localhost:6379/0
[2023-05-27 15:18:55,435: INFO/MainProcess] mingle: searching for neighbors
[2023-05-27 15:18:56,446: INFO/MainProcess] mingle: all alone
[2023-05-27 15:18:56,473: INFO/MainProcess] celery@ehsan-pc ready.
[2023-05-27 15:18:56,480: INFO/MainProcess] Task task1[113aef7c-4a28-48c8-af7a-a3e5e3cef277] received
[2023-05-27 15:18:56,483: WARNING/ForkPoolWorker-2] Ehsan
[2023-05-27 15:18:56,496: INFO/ForkPoolWorker-2] Task task1[113aef7c-4a28-48c8-af7a-a3e5e3cef277] succeeded in 0.013839636027114466s: 'Hello Ehsan'
[2023-05-27 15:18:59,561: INFO/MainProcess] Events of group {task} enabled by remote.
</code></pre>
<p>What's wrong?</p>
|
<python><celery>
|
2023-05-27 11:49:36
| 0
| 4,899
|
ehsan shirzadi
|
76,346,637
| 4,512,218
|
Can PyCharm suggest available methods?
|
<p>PyCharm Professional does not suggest methods while typing (for any library).</p>
<p>For example, in the screenshot below, I would expect to see methods I can call on <code>service</code> in the autosuggest popover (like I would in WebStorm or PhPStorm). I only get "not", "par" and "main" every single time.</p>
<p><a href="https://i.sstatic.net/uRtjr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uRtjr.png" alt="enter image description here" /></a></p>
<p>Everything is enabled in settings in accordance to all the other posts on the topic. See code completion settings below:</p>
<p><a href="https://i.sstatic.net/AVtCn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AVtCn.png" alt="enter image description here" /></a></p>
<p>Is something off or is this just not a feature like with Javascript or PHP? Below is an example for PhpStorm which shows me all the methods I can call on <code>$response</code>. I want to achieve the same thing in PyCharm.</p>
<p>Is this possible?</p>
<p><a href="https://i.sstatic.net/GfN4w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GfN4w.png" alt="enter image description here" /></a></p>
|
<python><pycharm><jetbrains-ide>
|
2023-05-27 11:17:19
| 1
| 1,339
|
Cellydy
|
76,346,518
| 2,925,976
|
Odoo Python Function TypeError: EventBooth.check_app_installed() missing 1 required positional argument: 'self'
|
<p>What is necessary to make this piece of code work?
I always get the error "Function TypeError: EventBooth.check_app_installed() missing 1 required positional argument: 'self'". I have tried a lot of different approaches, but nothing worked.</p>
<pre><code>from odoo import models, fields, api
class EventBooth(models.Model):
_name = 'event.booth'
@api.model
def check_app_installed(self):
# Check if the 'event' app is installed
event_app_installed = self.env['ir.module.module'].search([('name', '=', 'event'), ('state', '=', 'installed')])
return event_app_installed
@api.model
def create(self, vals):
app_installed = self.check_app_installed()
if not app_installed:
# Install the 'event' app if it is not installed
event_app = self.env['ir.module.module'].search([('name', '=', 'event')])
event_app.button_immediate_install()
# Continue with the creation of the record
return super(EventBooth, self).create(vals)
if check_app_installed():
_inherit = 'event.booth'
price = fields.Float(string='Price')
</code></pre>
|
<python><function><odoo><self>
|
2023-05-27 10:47:01
| 1
| 628
|
Perino
|
76,346,318
| 12,751,927
|
Why my request works in python but not with curl
|
<p>Hi, I try to fetch this <a href="https://dooood.com/d/h9rojbkqnpan" rel="nofollow noreferrer">url</a> with curl and get a 403 error, but it works perfectly in Python with <code>requests</code>.</p>
<p>python code</p>
<pre><code>import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36 OPR/67.0.3575.97',
'Referer': 'https://dooood.com/'}
url = "https://dooood.com/d/h9rojbkqnpan"
res = requests.get('https://dooood.com/d/h9rojbkqnpan', headers=headers)
print(res.content)
print(res.status_code)
</code></pre>
<p>curl command</p>
<pre><code> curl -I "https://dooood.com/d/h9rojbkqnpan" \
-H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36 OPR/67.0.3575.97" \
-H "Referer: https://dooood.com/"
</code></pre>
<p>Does python requests add extra headers?</p>
|
<python><curl><python-requests><http-headers><http-status-code-403>
|
2023-05-27 09:50:25
| 1
| 335
|
linkkader
|
76,345,708
| 3,909,896
|
Passing kwargs via dict to a function and overwriting a single passed kwarg in an elegant way
|
<p>Thanks to <a href="https://stackoverflow.com/questions/50989404/how-do-i-pass-variable-keyword-arguments-to-a-function-in-python">this SO post</a> I now know how to pass a dictionary as kwargs to a function (and save some space if I repeatedly call a function).</p>
<p>I was wondering whether it is still possible in some way to specifically "overwrite" one of the arguments passed in the dictionary? If I have 100 function calls in which I pass the kwarg-dict and in one of them I need to change one of the kwarg params, is there a nice way to do this?</p>
<p>One solution would be to just write all the params for that call "by hand" and not using the kwarg-dict, but I'm specifically looking to overwrite the param in an elegant way.</p>
<p>Minimal example:</p>
<pre><code>def func(arg1="foo", arg_a="bar", first_arg=1):
    print(arg1, arg_a, first_arg)
kwarg_dictionary = {
'arg1': "foo",
'arg_a': "bar",
'first_arg':42
}
# normal call, works - without overwriting any args
func(**kwarg_dictionary)
# The following fails with TypeError: func() got multiple values for keyword argument 'first_arg'
func(**kwarg_dictionary, first_arg=100)
# The following works, but is very clunky and long:
func(**({k:v for k, v in kwarg_dictionary.items() if k != 'first_arg'}), first_arg=54)
</code></pre>
<p>Trying to solve it with <code>functools.partial</code> also proved less than fruitful, since it expects the arguments to be in order and you cannot provide keyword arguments.</p>
<p>I was wondering whether there is a more elegant solution than the last code line?</p>
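<p>For reference, one common idiom (a sketch reusing the question's names) is to merge the override into a copy of the dict before unpacking - later keys win, so no argument is passed twice:</p>

```python
def func(arg1="foo", arg_a="bar", first_arg=1):
    return (arg1, arg_a, first_arg)

kwarg_dictionary = {"arg1": "foo", "arg_a": "bar", "first_arg": 42}

# Dict-merge: the later 'first_arg' key overrides the one from kwarg_dictionary
result = func(**{**kwarg_dictionary, "first_arg": 100})

# Python 3.9+: the | operator performs the same merge
result_pipe = func(**(kwarg_dictionary | {"first_arg": 100}))
```

<p>Both calls leave <code>kwarg_dictionary</code> itself unchanged, so the other 99 call sites can keep unpacking it as before.</p>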
|
<python>
|
2023-05-27 06:56:44
| 1
| 3,013
|
Cribber
|
76,345,404
| 4,869,293
|
How to define Django model datefield
|
<p>I am new to Python/Django. I am creating an app model and want to define a field (date of birth) so the user can input a date via a form, but I don't understand how to define a date field in the model so that I can capture the date of birth using the form.</p>
<p>Here is model</p>
<pre><code>
# Create your models here.
class funder(models.Model):
user_name = models.CharField(max_length=150)
user_mobile = models.CharField(max_length=12)
user_dob = models.DateField('none' = true)
user_address = models.TextField()
user_status = models.PositiveIntegerField(default='1')
</code></pre>
<p>thanks</p>
|
<python><django><django-models>
|
2023-05-27 05:09:26
| 2
| 465
|
Rahul Saxena
|
76,345,366
| 2,437,656
|
How can I extend every unique key constraint of all models with a common key using Flask-SQLAlchemy and Flask-Migrate in Python?
|
<p>I want to extend every unique key constraint of all models with a common key. I have tried multiple things, but it doesn't seem to work when I do</p>
<pre><code>flask db init; flask db migrate -m "init"; flask db upgrade;
</code></pre>
<p>But it works and adds</p>
<p><code>"users_email_organization_id_key" UNIQUE CONSTRAINT, btree (email, organization_id)</code></p>
<p>when I run it as <code>python app.py</code> since I have <code>db.create_all()</code> as part of app.py</p>
<p>My code looks something like this. Hope someone can help, stuck here for quite some time now.</p>
<pre><code>from flask import Flask
from flask_migrate import Migrate
from sqlalchemy import event
from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.orm import declarative_base, relationship
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import declared_attr, relationship, declarative_base
import os
app = Flask(__name__)
basedir = os.path.abspath(os.path.dirname(__file__))
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://localhost:5432/postgres'
db = SQLAlchemy(app)
migrate = Migrate(app, db)
class Organization(db.Model):
__tablename__ = 'organizations'
id = Column(Integer, primary_key=True)
name = Column(String(50), unique=True)
class OrganizationMixin:
organization_id = Column(Integer, db.ForeignKey('organizations.id'))
@classmethod
def extend_unique_constraints(cls):
table = cls.__table__
constraints_to_modify = []
for constraint in table.constraints:
if isinstance(constraint, UniqueConstraint):
constraints_to_modify.append(constraint)
for constraint in constraints_to_modify:
table.constraints.remove(constraint)
columns = list(constraint.columns)
if 'organization_id' not in columns:
columns.append(table.c.organization_id)
uc = UniqueConstraint(*columns, name=constraint.name)
table.append_constraint(uc)
class User(OrganizationMixin, db.Model):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
username = Column(String(50), unique=True)
email = Column(String(100), unique=True)
class Resource(OrganizationMixin, db.Model):
__tablename__ = 'resources'
id = Column(Integer, primary_key=True)
name = Column(String(50), unique=True)
@event.listens_for(db.metadata, 'before_create')
def extend_unique_constraints(target, connection, **kwargs):
models = [User, Resource]
for model in models:
if issubclass(model, OrganizationMixin):
model.extend_unique_constraints()
with app.app_context():
db.create_all()
if __name__ == '__main__':
app.run(port=8000)
</code></pre>
<p>All things tried are in the question above.</p>
|
<python><flask><flask-sqlalchemy><alembic><flask-migrate>
|
2023-05-27 04:55:36
| 1
| 306
|
Aakash Aggarwal
|
76,345,344
| 219,153
|
How to set channel value with PyDMXControl?
|
<p>I have a DMX512 decoder with LEDs at address 1. It is connected to the PC running Ubuntu 22.04.2 via USB/RS485 dongle using FT232R chip. It works fine with QLC+ app. I would like to control it from a Python script. I'm using PyDMXControl module and this script:</p>
<pre><code>from PyDMXControl.controllers import OpenDMXController
from PyDMXControl.profiles.Generic import Dimmer
dmx = OpenDMXController()
leds = dmx.add_fixture(Dimmer, name="LEDs", start_channel=1)
leds.set_id(1)
leds.dim(127, 5000)
</code></pre>
<p>has no effect. How do I set the channel (address) to 1 so it matches the decoder setting? Do I have to pass the device path, e.g. <code>/dev/ttyUSB0</code>, somewhere? Is there documentation or a tutorial for PyDMXControl somewhere?</p>
|
<python><dmx512>
|
2023-05-27 04:43:15
| 1
| 8,585
|
Paul Jurczak
|
76,345,286
| 743,531
|
Unable to resolve '_WorkbookChild" has no attribute "max_row" [attr-defined]' warning with openpyxl
|
<p>With the Python file below, I am unable to resolve the mypy error.</p>
<pre><code>import openpyxl
inputWorkbook = openpyxl.load_workbook("input.xlsx")
activeSheet = inputWorkbook.active
if activeSheet:
print(activeSheet.max_row)
</code></pre>
<p>I keep getting this error <code>test.py:6: error: "_WorkbookChild" has no attribute "max_row" [attr-defined]</code>. Looking at the documentation, I don't see what the issue is. <a href="https://openpyxl.readthedocs.io/en/stable/api/openpyxl.workbook.workbook.html#openpyxl.workbook.workbook.Workbook.active" rel="nofollow noreferrer">openpyxl.workbook.workbook().active</a> returns type <a href="https://openpyxl.readthedocs.io/en/stable/api/openpyxl.worksheet.worksheet.html#openpyxl.worksheet.worksheet.Worksheet" rel="nofollow noreferrer">openpyxl.worksheet.worksheet.Worksheet</a>, which includes <a href="https://openpyxl.readthedocs.io/en/stable/api/openpyxl.worksheet.worksheet.html#openpyxl.worksheet.worksheet.Worksheet.max_row" rel="nofollow noreferrer">max_row</a>.</p>
<ul>
<li>Python 3.11.3</li>
<li>mypy 1.3.0</li>
<li>mypy-extensions 1.0.0</li>
<li>openpyxl 3.1.2</li>
</ul>
|
<python><python-3.x><openpyxl><mypy>
|
2023-05-27 04:13:24
| 2
| 301
|
jprince14
|
76,345,255
| 4,825,796
|
TensorFlowJS Mask RCNN - ERROR provided in model.execute(dict) must be int32, but was float32
|
<p>I have trained an object detection model using transfer learning from <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">Mask R-CNN Inception ResNet V2 1024x1024</a>, and after converting the model to JS I get the error: <strong>ERROR provided in model.execute(dict) must be int32, but was float32</strong>. Here are the steps I took to create the model.</p>
<p><strong>1-</strong> Created the training.json, validation.json, testing.json annotation files along with the label_map.txt files from my images. I have also pre-processed the images to fit the 1024 * 1024 size.</p>
<p><strong>2-</strong> Used the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_coco_tf_record.py" rel="nofollow noreferrer">create_coco_tf_record.py</a> script provided by TensorFlow to generate TFRecord files. The only alteration I made to <strong>create_coco_tf_record.py</strong> was changing <strong>include_masks</strong> to <strong>True</strong>:</p>
<pre><code>tf.flags.DEFINE_boolean(
    'include_masks', True, 'Whether to include instance segmentations masks '  # changed from False
</code></pre>
<p>then ran the command below in a <strong>conda</strong> environment:</p>
<pre><code>python create_coco_tf_record.py ^
--logtostderr ^
--train_image_dir=C:/model/ai_container/training ^
--val_image_dir=C:/model/ai_container/vidation ^
--test_image_dir=C:/model/ai_container/testing ^
--train_annotations_file=C:/model/ai_container/training/training.json ^
--val_annotations_file=C:/model/ai_container/validation/coco_validation.json ^
--testdev_annotations_file=C:/model/ai_container/testing/coco_testing.json ^
--output_dir=C:/model/ai_container/tfrecord
</code></pre>
<p><strong>3-</strong> I then trained the model. Below is the modified portion of my config file, based on the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/configs/tf2/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config" rel="nofollow noreferrer">base Mask R-CNN config file</a>. <em>batch_size</em> and <em>num_steps</em> are set to 1 just so I could train quickly and test the results.</p>
<pre><code> train_config: {
batch_size: 1
num_steps: 1
optimizer {
momentum_optimizer: {
learning_rate: {
cosine_decay_learning_rate {
learning_rate_base: 0.008
total_steps: 200000
warmup_learning_rate: 0.0
warmup_steps: 5000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
gradient_clipping_by_norm: 10.0
fine_tune_checkpoint_version: V2
fine_tune_checkpoint: "C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/checkpoint/ckpt-0"
fine_tune_checkpoint_type: "detection"
data_augmentation_options {
random_horizontal_flip {
}
}
}
train_input_reader: {
label_map_path: "C:/model/ai_container/label_map.txt"
tf_record_input_reader {
input_path: "C:/model/ai_container/tfrecord/coco_train.record*"
}
load_instance_masks: true
mask_type: PNG_MASKS
}
eval_config: {
metrics_set: "coco_detection_metrics"
metrics_set: "coco_mask_metrics"
eval_instance_masks: true
use_moving_averages: false
batch_size: 1
include_metrics_per_category: false
}
eval_input_reader: {
label_map_path: "C:/model/ai_container/label_map.txt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "C:/model/ai_container/tfrecord/coco_val.record*"
}
load_instance_masks: true
mask_type: PNG_MASKS
}
</code></pre>
<p>then ran the training command:</p>
<pre><code>python object_detection/model_main_tf2.py ^
--pipeline_config_path=C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config ^
--model_dir=C:/TensoFlow/training_process_2 ^
--alsologtostderr
</code></pre>
<p><strong>4-</strong> Ran the validation command (I might be doing this wrong):</p>
<pre><code>python object_detection/model_main_tf2.py ^
--pipeline_config_path=C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config ^
--model_dir=C:/TensoFlow/training_process_2 ^
--checkpoint_dir=C:/TensoFlow/training_process_2 ^
--sample_1_of_n_eval_examples=1 ^
--alsologtostderr
</code></pre>
<p><strong>5-</strong> Export Model</p>
<pre><code>python object_detection/exporter_main_v2.py ^
--input_type="image_tensor" ^
--pipeline_config_path=C:/ObjectDetectionAPI/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8.config ^
--trained_checkpoint_dir=C:/TensoFlow/training_process_2 ^
--output_directory=C:/TensoFlow/training_process_2/generatedModel
</code></pre>
<p><strong>6-</strong> Convert Model to tensorflowJs</p>
<pre><code>tensorflowjs_converter ^
--input_format=tf_saved_model ^
--output_format=tfjs_graph_model ^
--signature_name=serving_default ^
--saved_model_tags=serve ^
C:/TensoFlow/training_process_2/generatedModel/saved_model C:/TensoFlow/training_process_2/generatedModel/jsmodel
</code></pre>
<p><strong>7-</strong> Then attempted to load the model into my Angular project. I placed the converted model's bin and json files in my assets folder.</p>
<pre><code>npm install @tensorflow/tfjs
</code></pre>
<hr />
<pre><code>ngAfterViewInit() {
tf.loadGraphModel('/assets/tfmodel/model1/model.json').then((model) => {
this.model = model;
this.model.executeAsync(tf.zeros([1, 256, 256, 3])).then((result) => {
this.loadeModel = true;
});
});
}
</code></pre>
<hr />
<p>I then get the error</p>
<pre><code> tf.min.js:17 ERROR Error: Uncaught (in promise): Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
at F$ (util_base.js:153:11)
at graph_executor.js:721:9
at Array.forEach (<anonymous>)
at e.value (graph_executor.js:705:25)
at e.<anonymous> (graph_executor.js:467:12)
at h (tf.min.js:17:2100)
at Generator.<anonymous> (tf.min.js:17:3441)
at Generator.next (tf.min.js:17:2463)
at u (tf.min.js:17:8324)
at o (tf.min.js:17:8527)
at resolvePromise (zone.js:1211:31)
at resolvePromise (zone.js:1165:17)
at zone.js:1278:17
at _ZoneDelegate.invokeTask (zone.js:406:31)
at Object.onInvokeTask (core.mjs:26343:33)
at _ZoneDelegate.invokeTask (zone.js:405:60)
at Zone.runTask (zone.js:178:47)
at drainMicroTaskQueue (zone.js:585:35)
</code></pre>
<hr />
<p>I'm using Angular. I have also tried a few online solutions with no success. If anyone could give me any information on how to solve this issue, I would be grateful. Thanks.</p>
|
<python><angular><typescript><tensorflow><tensorflow.js>
|
2023-05-27 03:58:09
| 0
| 1,762
|
Hozeis
|
76,345,189
| 5,212,614
|
How to get the bedroom square footage and prices from Zillow?
|
<p>I asked ChatGPT the question below today:</p>
<blockquote>
<p>User python mozlla headers scrape bedrooms square footage and price from zillow</p>
</blockquote>
<p>I got this:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
# Set the URL of the Zillow page you want to scrape
url = "https://www.zillow.com/breckenridge-co/sold/?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22usersSearchTerm%22%3A%22Breckenridge%2C%20CO%22%2C%22mapBounds%22%3A%7B%22west%22%3A-106.12936606713866%2C%22east%22%3A-105.9443151027832%2C%22south%22%3A39.45416110834031%2C%22north%22%3A39.497489978187815%7D%2C%22regionSelection%22%3A%5B%7B%22regionId%22%3A14934%2C%22regionType%22%3A6%7D%5D%2C%22isMapVisible%22%3Atrue%2C%22filterState%22%3A%7B%22sort%22%3A%7B%22value%22%3A%22days%22%7D%2C%22fsba%22%3A%7B%22value%22%3Afalse%7D%2C%22fsbo%22%3A%7B%22value%22%3Afalse%7D%2C%22nc%22%3A%7B%22value%22%3Afalse%7D%2C%22fore%22%3A%7B%22value%22%3Afalse%7D%2C%22cmsn%22%3A%7B%22value%22%3Afalse%7D%2C%22auc%22%3A%7B%22value%22%3Afalse%7D%2C%22rs%22%3A%7B%22value%22%3Atrue%7D%2C%22ah%22%3A%7B%22value%22%3Atrue%7D%7D%2C%22isListVisible%22%3Atrue%2C%22mapZoom%22%3A13%7D"
# Send an HTTP GET request to the URL
response = requests.get(url)
# Create a BeautifulSoup object to parse the HTML content
soup = BeautifulSoup(response.content, "html.parser")
# Find the relevant elements containing the data you want to scrape
results = soup.find_all("article", class_="property-card-data")
# Loop through each result and extract the desired information
for result in results:
# Extract the bedroom information
bedrooms = result.find("ul", class_="list-card-details").find("li").text.strip()
# Extract the square footage
square_footage = result.find("ul", class_="list-card-details").find_all("li")[1].text.strip()
# Extract the price
price = result.find("div", class_="list-card-price").text.strip()
# Print the scraped data
print("Bedrooms:", bedrooms)
print("Square Footage:", square_footage)
print("Price:", price)
print()
</code></pre>
<p>The problem is that nothing gets returned. I think the issue is with <code>soup.find_all</code> or the <code>class_=</code> argument. How does this work exactly?</p>
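<p>To illustrate how <code>find_all</code> and <code>class_=</code> behave, here is a self-contained sketch using made-up markup. The class names are taken from the ChatGPT snippet and may not exist in Zillow's real, JavaScript-rendered page, which is the likely reason nothing is returned:</p>

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking what the ChatGPT snippet expects to find.
html = """
<article class="property-card-data">
  <ul class="list-card-details"><li>3 bds</li><li>1,500 sqft</li></ul>
  <div class="list-card-price">$500,000</div>
</article>
"""
soup = BeautifulSoup(html, "html.parser")

# find_all matches on tag name AND the value of the HTML class attribute.
# If no <article class="property-card-data"> exists in the fetched page,
# it returns an empty list and the loop body never runs.
for card in soup.find_all("article", class_="property-card-data"):
    details = card.find("ul", class_="list-card-details").find_all("li")
    print("Bedrooms:", details[0].text)
    print("Square Footage:", details[1].text)
    print("Price:", card.find("div", class_="list-card-price").text)
```

<p>Against a page whose HTML is built by JavaScript after load, <code>requests.get</code> only retrieves the initial markup, so the selectors find nothing even when they are spelled correctly.</p>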
|
<python><python-3.x><web-scraping><beautifulsoup><mozilla>
|
2023-05-27 03:15:08
| 2
| 20,492
|
ASH
|
76,344,950
| 1,942,868
|
Stop automatic encoding by FileField
|
<p>I am using the <code>FileField</code> of django model.</p>
<pre><code>class Drawing(models.Model):
drawing = models.FileField(upload_to='uploads/')
</code></pre>
<p>For example, when I try uploading a filename with 2-byte characters, such as <code>木.pdf</code>,</p>
<p>the filename is automatically encoded into <code>%E6%9C%A8_sBMogAs.pdf</code>.</p>
<p>However, I want to keep the 2-byte characters, so I overrode <code>FileField</code>.</p>
<p>I just copied and pasted from <a href="https://github.com/django/django/blob/main/django/db/models/fields/files.py" rel="nofollow noreferrer">here</a> and added some <code>print</code> calls to check where the filename is changed.</p>
<p>I put <code>print</code> in the <code>def pre_save(self, model_instance, add):</code>, <code>def generate_filename(self, instance, filename):</code>, <code>def save_form_data(self, instance, data):</code> and <code>def save(self, name, content, save=True):</code> functions to see the filename.</p>
<p>However, at every checkpoint the filename is still <code>木.pdf</code> or <code>木_sBMogAs.pdf</code>, not <code>%E6%9C%A8_sBMogAs.pdf</code>.</p>
<p>Where is the filename changed, and how can I suppress the change?</p>
<pre><code>import datetime
import posixpath
from django import forms
from django.core import checks
from django.core.files.base import File
from django.core.files.images import ImageFile
from django.core.files.storage import Storage, default_storage
from django.core.files.utils import validate_file_name
from django.db.models import signals
from django.db.models.fields import Field
from django.db.models.query_utils import DeferredAttribute
from django.db.models.utils import AltersData
from django.utils.translation import gettext_lazy as _
class FieldFile(File, AltersData):
def __init__(self, instance, field, name):
super().__init__(None, name)
self.instance = instance
self.field = field
self.storage = field.storage
self._committed = True
def __eq__(self, other):
# Older code may be expecting FileField values to be simple strings.
# By overriding the == operator, it can remain backwards compatibility.
if hasattr(other, "name"):
return self.name == other.name
return self.name == other
def __hash__(self):
return hash(self.name)
# The standard File contains most of the necessary properties, but
# FieldFiles can be instantiated without a name, so that needs to
# be checked for here.
def _require_file(self):
if not self:
raise ValueError(
"The '%s' attribute has no file associated with it." % self.field.name
)
def _get_file(self):
self._require_file()
if getattr(self, "_file", None) is None:
self._file = self.storage.open(self.name, "rb")
return self._file
def _set_file(self, file):
self._file = file
def _del_file(self):
del self._file
file = property(_get_file, _set_file, _del_file)
@property
def path(self):
self._require_file()
return self.storage.path(self.name)
@property
def url(self):
self._require_file()
return self.storage.url(self.name)
@property
def size(self):
self._require_file()
if not self._committed:
return self.file.size
return self.storage.size(self.name)
def open(self, mode="rb"):
self._require_file()
if getattr(self, "_file", None) is None:
self.file = self.storage.open(self.name, mode)
else:
self.file.open(mode)
return self
# open() doesn't alter the file's contents, but it does reset the pointer
open.alters_data = True
# In addition to the standard File API, FieldFiles have extra methods
# to further manipulate the underlying file, as well as update the
# associated model instance.
def save(self, name, content, save=True):
name = self.field.generate_filename(self.instance, name)
print("save1:",name)
self.name = self.storage.save(name, content, max_length=self.field.max_length)
print("save2:",self.name)
setattr(self.instance, self.field.attname, self.name)
self._committed = True
print("save3:",self.name)
# Save the object because it has changed, unless save is False
if save:
self.instance.save()
save.alters_data = True
def delete(self, save=True):
if not self:
return
# Only close the file if it's already open, which we know by the
# presence of self._file
if hasattr(self, "_file"):
self.close()
del self.file
self.storage.delete(self.name)
self.name = None
setattr(self.instance, self.field.attname, self.name)
self._committed = False
if save:
self.instance.save()
delete.alters_data = True
@property
def closed(self):
file = getattr(self, "_file", None)
return file is None or file.closed
def close(self):
file = getattr(self, "_file", None)
if file is not None:
file.close()
def __getstate__(self):
# FieldFile needs access to its associated model field, an instance and
# the file's name. Everything else will be restored later, by
# FileDescriptor below.
return {
"name": self.name,
"closed": False,
"_committed": True,
"_file": None,
"instance": self.instance,
"field": self.field,
}
def __setstate__(self, state):
self.__dict__.update(state)
self.storage = self.field.storage
class FileDescriptor(DeferredAttribute):
"""
The descriptor for the file attribute on the model instance. Return a
FieldFile when accessed so you can write code like::
>>> from myapp.models import MyModel
>>> instance = MyModel.objects.get(pk=1)
>>> instance.file.size
Assign a file object on assignment so you can do::
>>> with open('/path/to/hello.world') as f:
... instance.file = File(f)
"""
def __get__(self, instance, cls=None):
if instance is None:
return self
# This is slightly complicated, so worth an explanation.
# instance.file needs to ultimately return some instance of `File`,
# probably a subclass. Additionally, this returned object needs to have
# the FieldFile API so that users can easily do things like
# instance.file.path and have that delegated to the file storage engine.
# Easy enough if we're strict about assignment in __set__, but if you
# peek below you can see that we're not. So depending on the current
# value of the field we have to dynamically construct some sort of
# "thing" to return.
# The instance dict contains whatever was originally assigned
# in __set__.
file = super().__get__(instance, cls)
# If this value is a string (instance.file = "path/to/file") or None
# then we simply wrap it with the appropriate attribute class according
# to the file field. [This is FieldFile for FileFields and
# ImageFieldFile for ImageFields; it's also conceivable that user
# subclasses might also want to subclass the attribute class]. This
# object understands how to convert a path to a file, and also how to
# handle None.
if isinstance(file, str) or file is None:
attr = self.field.attr_class(instance, self.field, file)
instance.__dict__[self.field.attname] = attr
# Other types of files may be assigned as well, but they need to have
# the FieldFile interface added to them. Thus, we wrap any other type of
# File inside a FieldFile (well, the field's attr_class, which is
# usually FieldFile).
elif isinstance(file, File) and not isinstance(file, FieldFile):
file_copy = self.field.attr_class(instance, self.field, file.name)
file_copy.file = file
file_copy._committed = False
instance.__dict__[self.field.attname] = file_copy
# Finally, because of the (some would say boneheaded) way pickle works,
# the underlying FieldFile might not actually itself have an associated
# file. So we need to reset the details of the FieldFile in those cases.
elif isinstance(file, FieldFile) and not hasattr(file, "field"):
file.instance = instance
file.field = self.field
file.storage = self.field.storage
# Make sure that the instance is correct.
elif isinstance(file, FieldFile) and instance is not file.instance:
file.instance = instance
# That was fun, wasn't it?
return instance.__dict__[self.field.attname]
def __set__(self, instance, value):
instance.__dict__[self.field.attname] = value
class FileField(Field):
# The class to wrap instance attributes in. Accessing the file object off
# the instance will always return an instance of attr_class.
attr_class = FieldFile
# The descriptor to use for accessing the attribute off of the class.
descriptor_class = FileDescriptor
description = _("File")
def __init__(
self, verbose_name=None, name=None, upload_to="", storage=None, **kwargs
):
self._primary_key_set_explicitly = "primary_key" in kwargs
self.storage = storage or default_storage
if callable(self.storage):
# Hold a reference to the callable for deconstruct().
self._storage_callable = self.storage
self.storage = self.storage()
if not isinstance(self.storage, Storage):
raise TypeError(
"%s.storage must be a subclass/instance of %s.%s"
% (
self.__class__.__qualname__,
Storage.__module__,
Storage.__qualname__,
)
)
self.upload_to = upload_to
kwargs.setdefault("max_length", 100)
super().__init__(verbose_name, name, **kwargs)
def check(self, **kwargs):
return [
*super().check(**kwargs),
*self._check_primary_key(),
*self._check_upload_to(),
]
def _check_primary_key(self):
if self._primary_key_set_explicitly:
return [
checks.Error(
"'primary_key' is not a valid argument for a %s."
% self.__class__.__name__,
obj=self,
id="fields.E201",
)
]
else:
return []
def _check_upload_to(self):
if isinstance(self.upload_to, str) and self.upload_to.startswith("/"):
return [
checks.Error(
"%s's 'upload_to' argument must be a relative path, not an "
"absolute path." % self.__class__.__name__,
obj=self,
id="fields.E202",
hint="Remove the leading slash.",
)
]
else:
return []
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
if kwargs.get("max_length") == 100:
del kwargs["max_length"]
kwargs["upload_to"] = self.upload_to
storage = getattr(self, "_storage_callable", self.storage)
if storage is not default_storage:
kwargs["storage"] = storage
return name, path, args, kwargs
def get_internal_type(self):
return "FileField"
def get_prep_value(self, value):
value = super().get_prep_value(value)
# Need to convert File objects provided via a form to string for
# database insertion.
if value is None:
return None
return str(value)
def pre_save(self, model_instance, add):
file = super().pre_save(model_instance, add)
if file and not file._committed:
# Commit the file to storage prior to saving the model
file.save(file.name, file.file, save=False)
print("pre_save",file.name)
return file
def contribute_to_class(self, cls, name, **kwargs):
super().contribute_to_class(cls, name, **kwargs)
setattr(cls, self.attname, self.descriptor_class(self))
def generate_filename(self, instance, filename):
"""
Apply (if callable) or prepend (if a string) upload_to to the filename,
then delegate further processing of the name to the storage backend.
Until the storage layer, all file paths are expected to be Unix style
(with forward slashes).
"""
print("generate filename",filename)
if callable(self.upload_to):
filename = self.upload_to(instance, filename)
else:
dirname = datetime.datetime.now().strftime(str(self.upload_to))
filename = posixpath.join(dirname, filename)
filename = validate_file_name(filename, allow_relative_path=True)
print("finish generate filename",self.storage.generate_filename(filename))
return self.storage.generate_filename(filename)
def save_form_data(self, instance, data):
# Important: None means "no change", other false value means "clear"
# This subtle distinction (rather than a more explicit marker) is
# needed because we need to consume values that are also sane for a
# regular (non Model-) Form to find in its cleaned_data dictionary.
print("save_from_data,",self.name)
if data is not None:
# This value will be converted to str and stored in the
# database, so leaving False as-is is not acceptable.
setattr(instance, self.name, data or "")
def formfield(self, **kwargs):
return super().formfield(
**{
"form_class": forms.FileField,
"max_length": self.max_length,
**kwargs,
}
)
class ImageFileDescriptor(FileDescriptor):
"""
Just like the FileDescriptor, but for ImageFields. The only difference is
assigning the width/height to the width_field/height_field, if appropriate.
"""
def __set__(self, instance, value):
previous_file = instance.__dict__.get(self.field.attname)
super().__set__(instance, value)
# To prevent recalculating image dimensions when we are instantiating
# an object from the database (bug #11084), only update dimensions if
# the field had a value before this assignment. Since the default
# value for FileField subclasses is an instance of field.attr_class,
# previous_file will only be None when we are called from
# Model.__init__(). The ImageField.update_dimension_fields method
# hooked up to the post_init signal handles the Model.__init__() cases.
# Assignment happening outside of Model.__init__() will trigger the
# update right here.
if previous_file is not None:
self.field.update_dimension_fields(instance, force=True)
class ImageFieldFile(ImageFile, FieldFile):
def delete(self, save=True):
# Clear the image dimensions cache
if hasattr(self, "_dimensions_cache"):
del self._dimensions_cache
super().delete(save)
class ImageField(FileField):
attr_class = ImageFieldFile
descriptor_class = ImageFileDescriptor
description = _("Image")
def __init__(
self,
verbose_name=None,
name=None,
width_field=None,
height_field=None,
**kwargs,
):
self.width_field, self.height_field = width_field, height_field
super().__init__(verbose_name, name, **kwargs)
def check(self, **kwargs):
return [
*super().check(**kwargs),
*self._check_image_library_installed(),
]
def _check_image_library_installed(self):
try:
from PIL import Image # NOQA
except ImportError:
return [
checks.Error(
"Cannot use ImageField because Pillow is not installed.",
hint=(
"Get Pillow at https://pypi.org/project/Pillow/ "
'or run command "python -m pip install Pillow".'
),
obj=self,
id="fields.E210",
)
]
else:
return []
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
if self.width_field:
kwargs["width_field"] = self.width_field
if self.height_field:
kwargs["height_field"] = self.height_field
return name, path, args, kwargs
def contribute_to_class(self, cls, name, **kwargs):
super().contribute_to_class(cls, name, **kwargs)
# Attach update_dimension_fields so that dimension fields declared
# after their corresponding image field don't stay cleared by
# Model.__init__, see bug #11196.
# Only run post-initialization dimension update on non-abstract models
# with width_field/height_field.
if not cls._meta.abstract and (self.width_field or self.height_field):
signals.post_init.connect(self.update_dimension_fields, sender=cls)
def update_dimension_fields(self, instance, force=False, *args, **kwargs):
"""
Update field's width and height fields, if defined.
This method is hooked up to model's post_init signal to update
dimensions after instantiating a model instance. However, dimensions
won't be updated if the dimensions fields are already populated. This
avoids unnecessary recalculation when loading an object from the
database.
Dimensions can be forced to update with force=True, which is how
ImageFileDescriptor.__set__ calls this method.
"""
# Nothing to update if the field is deferred.
if self.attname not in instance.__dict__:
return
# getattr will call the ImageFileDescriptor's __get__ method, which
# coerces the assigned value into an instance of self.attr_class
# (ImageFieldFile in this case).
file = getattr(instance, self.attname)
# Nothing to update if we have no file and not being forced to update.
if not file and not force:
return
dimension_fields_filled = not (
(self.width_field and not getattr(instance, self.width_field))
or (self.height_field and not getattr(instance, self.height_field))
)
# When both dimension fields have values, we are most likely loading
# data from the database or updating an image field that already had
# an image stored. In the first case, we don't want to update the
# dimension fields because we are already getting their values from the
# database. In the second case, we do want to update the dimensions
# fields and will skip this return because force will be True since we
# were called from ImageFileDescriptor.__set__.
if dimension_fields_filled and not force:
return
# file should be an instance of ImageFieldFile or should be None.
if file:
width = file.width
height = file.height
else:
# No file, so clear dimensions fields.
width = None
height = None
# Update the width and height fields.
if self.width_field:
setattr(instance, self.width_field, width)
if self.height_field:
setattr(instance, self.height_field, height)
def formfield(self, **kwargs):
return super().formfield(
**{
"form_class": forms.ImageField,
**kwargs,
}
)
</code></pre>
|
<python><django><django-rest-framework>
|
2023-05-27 01:00:31
| 0
| 12,599
|
whitebear
|
76,344,687
| 3,096,622
|
Pytest: ModuleNotFoundError. Problem with my import statements
|
<p>During initial development of my project everything was in the same directory and all of my pytest tests worked fine. I changed the directory structure for packaging, moving the code into a <code>src/project_name</code> directory and all test files into <code>test/</code>. Now pytest throws a ModuleNotFoundError. I have tried running <code>pytest</code> from the test directory, from the project root, and as <code>python -m pytest</code>; I get the same ModuleNotFoundError each way.</p>
<p>Current project structure is:</p>
<pre><code>src
project_name
__init__.py
moduleA.py
        moduleB.py
test
testA.py
testB.py
</code></pre>
<p>No matter how I write my import statements, I am getting the same error. I have tried:</p>
<pre><code># testA.py
import src.project_name.moduleA
</code></pre>
<p>which throws an error that there is no module <code>src</code>.</p>
<pre><code># testA.py
import project_name.moduleA
</code></pre>
<p>This errors on <code>project_name</code>.
Finally:</p>
<pre><code># testA.py
import moduleA
</code></pre>
<p>This errors on <code>moduleA</code>.</p>
<p>There's something obvious I am missing, but I can't figure it out. How can I get the imports in my test files to work?</p>
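<p>For reference, one common fix for this layout, sketched here on the assumption that pytest ≥ 7 is in use and the project has a <code>pyproject.toml</code>, is to put <code>src</code> on pytest's import path so that <code>import project_name.moduleA</code> resolves in the test files:</p>

```toml
# pyproject.toml (hypothetical minimal fragment)
[tool.pytest.ini_options]
pythonpath = ["src"]
```

<p>An alternative with the same effect is installing the package into the environment in editable mode (<code>pip install -e .</code>), so the tests import it like any installed package.</p>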
|
<python><import><pytest>
|
2023-05-26 23:23:43
| 1
| 603
|
JayCo741
|
76,344,360
| 12,904,608
|
Is this a fits header problem or something to do with astropy?
|
<p>I tried a simple example in astropy, like the one below:</p>
<pre><code>from astropy import wcs
from astropy.io import fits
# Load the FITS hdulist using astropy.io.fits
hdulist = fits.open('abell.fits')
# Parse the WCS keywords in the primary HDU
w = wcs.WCS(hdulist[0].header)
# Print out the "name" of the WCS, as defined in the FITS header
print(w.wcs.name)
# Print out all of the settings that were parsed from the header
w.wcs.print_contents()
</code></pre>
<p>It errors with the following info:</p>
<pre><code>--------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[34], line 8
5 hdulist = fits.open('abell.fits')
7 # Parse the WCS keywords in the primary HDU
----> 8 w = wcs.WCS(hdulist[0].header)
10 # Print out the "name" of the WCS, as defined in the FITS header
11 print(w.wcs.name)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\astropy\wcs\wcs.py:612, in WCS.__init__(self, header, fobj, key, minerr, relax, naxis, keysel, colsel, fix, translate_units, _do_set)
609 self.fix(translate_units=translate_units)
611 if _do_set:
--> 612 self.wcs.set()
614 for fd in close_fds:
615 fd.close()
ValueError: ERROR 5 in wcsset() at line 2775 of file cextern\wcslib\C\wcs.c:
Invalid parameter value.
ERROR 4 in linset() at line 737 of file cextern\wcslib\C\lin.c:
Failed to initialize distortion functions.
ERROR 3 in dssset() at line 2697 of file cextern\wcslib\C\dis.c:
Coefficient scale for DSS on axis 1 is zero..
</code></pre>
<p>My fits file contains this data for wcs:</p>
<pre><code>OBJCTRA = '16 35 53.453 ' /Object Right Ascension (J2000)
OBJCTDEC= '+66 14 01.11 ' /Object Declinaison (J2000)
RA = 2.48972721654300130E+002 /Telescope RA
DEC = 6.62336441336017145E+001 /Telescope DEC
CRVAL1 = 2.48972721654300130E+002 /approx coord. in RA
CRVAL2 = 6.62336441336017145E+001 /approx coord. in DEC
CDELT1 = 2.57895481677548606E-004 /ScaleX in deg/pix
CDELT2 = 2.57895481677548606E-004 /ScaleY in deg/pix
</code></pre>
<p>I can't tell what the problem is. From what I have learned, CRVAL and CDELT should be enough, but maybe I'm missing some info in the FITS header.</p>
|
<python><astropy><fits>
|
2023-05-26 21:50:34
| 0
| 317
|
Adrian
|
76,344,334
| 11,001,493
|
os.makedirs is creating a new folder with file name
|
<p>I'm trying to copy all folders and files from one path to another if a file doesn't contain a substring called "obsolete". Here is my code:</p>
<pre><code>import os
import shutil
rootdir = "C:/Old"
new_rootdir = "C:/New/"
for root, dirs, files in os.walk(rootdir):
for filename in files:
if not "obsolete" in filename:
source = root + "\\" + filename
extra_paths = source.split("\\")[1:] # strings to be merged to the new destination
destination = new_rootdir + "/".join(extra_paths)
if not os.path.exists(destination):
os.makedirs(destination) # Create the destination directory if it doesn't exist
if os.path.isdir(source):
for root, dirs, files in os.walk(source):
for directory in dirs:
source_path = os.path.join(root, directory)
destination_path = os.path.join(destination, os.path.relpath(source_path, source))
if not os.path.exists(destination_path):
os.makedirs(destination_path)
for file in files:
source_path = os.path.join(root, file)
destination_path = os.path.join(destination, os.path.relpath(source_path, source))
shutil.copy2(source_path, destination_path)
else:
shutil.copy2(source, destination)
</code></pre>
<p>The script works, but it is creating a new folder for every file with its name on it. For example, for the old path <code>C:/Old/Documents/example.txt</code> it is creating a new path like <code>C:/New/Documents/example.txt/example.txt</code> and it should be <code>C:/New/Documents/example.txt</code></p>
<p>How can I fix this so the script does not create a folder with the file name?</p>
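<p>For reference, here is a sketch of how the loop could be restructured so that <code>os.makedirs</code> receives the <em>parent directory</em> of the destination file rather than the file path itself, which is what creates the extra <code>example.txt/</code> folder (paths kept from the question):</p>

```python
import os
import shutil

rootdir = "C:/Old"
new_rootdir = "C:/New"

for root, dirs, files in os.walk(rootdir):
    for filename in files:
        if "obsolete" in filename:
            continue
        source = os.path.join(root, filename)
        # Mirror the file's path relative to rootdir under new_rootdir.
        destination = os.path.join(new_rootdir, os.path.relpath(source, rootdir))
        # Create the parent directory of the file, not a directory
        # named after the file itself.
        os.makedirs(os.path.dirname(destination), exist_ok=True)
        shutil.copy2(source, destination)
```

<p>Because <code>os.walk</code> already descends into every subdirectory, the nested second <code>os.walk</code> over each source in the original script is unnecessary.</p>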
|
<python><operating-system><shutil>
|
2023-05-26 21:44:08
| 1
| 702
|
user026
|
76,344,312
| 1,997,735
|
pyinstaller doesn't like sounddevice
|
<p>Our Python 3.7 project is using sounddevice and it runs just fine, but we recently updated pyinstaller to 5.10.1 and the new version of pyinstaller doesn't like sounddevice.</p>
<p>BTW, updating pyinstaller to 5.11.0 doesn't help.</p>
<p>What do we need to do to create an installable Python EXE that uses sounddevice?</p>
<p>EDIT: We are currently using sounddevice==0.4.1 and pyinstaller-hooks-contrib==2022.0</p>
<p>Here's what we get:</p>
<pre><code>32015 INFO: Loading module hook 'hook-sounddevice.py' from 'C:\\Users\\employee\\Documents\\Project\\software\\venv\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
Traceback (most recent call last):
File "build.py", line 39, in <module>
PyInstaller.__main__.run(['--noconfirm', '--distpath', distdir, '--workpath', builddir, fn_msi_spec])
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\__main__.py", line 180, in run
run_build(pyi_config, spec_file, **vars(args))
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\__main__.py", line 61, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\building\build_main.py", line 978, in main
build(specfile, distpath, workpath, clean_build)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\building\build_main.py", line 900, in build
exec(code, spec_namespace)
File "C:\Users\employee\Documents\Project\software\main.spec", line 35, in <module>
noarchive=False)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\building\build_main.py", line 424, in __init__
self.__postinit__()
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\building\datastruct.py", line 173, in __postinit__
self.assemble()
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\building\build_main.py", line 576, in assemble
priority_scripts.append(self.graph.add_script(script))
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 269, in add_script
self._top_script_node = super().add_script(pathname)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1433, in add_script
self._process_imports(n)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
target_modules = self._safe_import_hook(*import_info, **kwargs)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 433, in _safe_import_hook
return super()._safe_import_hook(target_module_partname, source_module, target_attr_names, level, edge_attr)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2303, in _safe_import_hook
target_attr_names=None, level=level, edge_attr=edge_attr)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
submodule = self._safe_import_module(head, mname, submodule)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 480, in _safe_import_module
return super()._safe_import_module(module_basename, module_name, parent_package)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
self._process_imports(n)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
target_modules = self._safe_import_hook(*import_info, **kwargs)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 433, in _safe_import_hook
return super()._safe_import_hook(target_module_partname, source_module, target_attr_names, level, edge_attr)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2303, in _safe_import_hook
target_attr_names=None, level=level, edge_attr=edge_attr)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
submodule = self._safe_import_module(head, mname, submodule)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 480, in _safe_import_module
return super()._safe_import_module(module_basename, module_name, parent_package)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
self._process_imports(n)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
target_modules = self._safe_import_hook(*import_info, **kwargs)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 433, in _safe_import_hook
return super()._safe_import_hook(target_module_partname, source_module, target_attr_names, level, edge_attr)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2303, in _safe_import_hook
target_attr_names=None, level=level, edge_attr=edge_attr)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1506, in import_hook
source_package, target_module_partname, level)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1685, in _find_head_package
target_module_headname, target_package_name, source_package)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 480, in _safe_import_module
return super()._safe_import_module(module_basename, module_name, parent_package)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
self._process_imports(n)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
target_modules = self._safe_import_hook(*import_info, **kwargs)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 369, in _safe_import_hook
excluded_imports = self._find_all_excluded_imports(source_module.identifier)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\analysis.py", line 357, in _find_all_excluded_imports
excluded_imports.update(module_hook.excludedimports)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\imphook.py", line 316, in __getattr__
self._load_hook_module()
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\depend\imphook.py", line 383, in _load_hook_module
self._hook_module = importlib_load_source(self.hook_module_name, self.hook_filename)
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\compat.py", line 612, in importlib_load_source
return mod_loader.load_module()
File "<frozen importlib._bootstrap_external>", line 407, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 907, in load_module
File "<frozen importlib._bootstrap_external>", line 732, in load_module
File "<frozen importlib._bootstrap>", line 265, in _load_module_shim
File "<frozen importlib._bootstrap>", line 696, in _load
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\_pyinstaller_hooks_contrib\hooks\stdhooks\hook-sounddevice.py", line 24, in <module>
sfp = get_package_paths("sounddevice")
File "C:\Users\employee\Documents\Project\software\venv\lib\site-packages\PyInstaller\utils\hooks\__init__.py", line 541, in get_package_paths
raise ValueError(f"Package '{package}' does not exist or is not a package!")
ValueError: Package 'sounddevice' does not exist or is not a package!
</code></pre>
|
<python><pyinstaller><python-sounddevice>
|
2023-05-26 21:38:50
| 1
| 3,473
|
Betty Crokker
|
76,344,269
| 3,330,347
|
Call value from PyQt5 UI interface and carry out operation in another py file
|
<p>I am new to Python and still learning.</p>
<p>I have a py file that I generated ftom Qt designer (ForEmail.py):</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import QSize, QTimer
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(398, 235)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.frame = QtWidgets.QFrame(self.centralwidget)
self.frame.setGeometry(QtCore.QRect(40, 20, 161, 121))
self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.frame.setFrameShadow(QtWidgets.QFrame.Raised)
self.frame.setObjectName("frame")
self.spinBox = QtWidgets.QSpinBox(self.frame)
self.spinBox.setGeometry(QtCore.QRect(92, 33, 51, 19))
self.spinBox.setObjectName("spinBox")
self.label = QtWidgets.QLabel(self.frame)
self.label.setGeometry(QtCore.QRect(90, 10, 52, 16))
self.label.setObjectName("label")
self.pushButton = QtWidgets.QPushButton(self.frame)
self.pushButton.setGeometry(QtCore.QRect(11, 31, 75, 23))
self.pushButton.setObjectName("pushButton")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 398, 22))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.label.setText(_translate("MainWindow", "HS_DELAY"))
self.pushButton.setText(_translate("MainWindow", "HOT START"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>I am trying to use the number entered in the spin box as the duration of a timer that disables the button and shows a countdown text on it while it runs (Hot_Strt_Test.py):</p>
<pre><code>class Hot_Strt_Test_Btn():
def __init__(self, pushButton_object):
bt3_timer = 10
self.count = bt3_timer
self.pushButton.clicked.connect(self.Action)
self.time = QTimer(self.pushButton)
self.time.setInterval(1000)
self.time.timeout.connect(self.Refresh)
#self.show()
def Action(self):
if self.pushButton.isEnabled():
self.time.start()
self.pushButton.setEnabled(False)
def Refresh(self):
bt3_timer = 10
self.pushButton.setStyleSheet("QPushButton"
"{"
"background-color : red;"
"}")
if self.count > 0:
self.pushButton.setText('HOT START RUNNING\n'+str(self.count)+' secs')
self.count -= 1
else:
self.time.stop()
self.pushButton.setEnabled(True)
self.pushButton.setText('HOT START')
self.count = bt3_timer
self.pushButton.setStyleSheet("QPushButton"
"{"
"background-color : lightgray;"
"}")
</code></pre>
<p>Currently I'm not getting the operation as described.</p>
<p>Can you tell me what I am doing wrong and how to fix it?</p>
<p>Thanks for your time!</p>
|
<python><pyqt5>
|
2023-05-26 21:27:22
| 0
| 405
|
Joe
|
76,344,214
| 5,252,492
|
numpy: Faster np.dot/ multiply(element-wise multiplication) when one array is the same
|
<p>I have to compute dot products between pairs of 1D arrays and then average the results:</p>
<pre><code>import numpy as np
import numba
import time
import random
a = np.array([1, 2, 3, 4, 5, 6], dtype=np.int8) # Same first array
b = [np.array([random.randint(0,100) for y in range(6)],dtype=float) for x in range(1, 1000000)] # iterable of 2nd array
c = np.zeros(len(b)) # to store results
@numba.njit() # benefits of numba
def sum_of_mul(list1, list2):
return np.dot(list1, list2)
tstart = time.perf_counter()
for i in range(len(c)):
c[i] = sum_of_mul(a, b[i])
final_average = np.sum(c) / len(c)
print('time taken', round(time.perf_counter() - tstart, 1))
</code></pre>
<p><strong>I am only interested in the speed of the array multiplication and not the setup.</strong>
The dot product takes about 1.2 seconds on my pc.</p>
<p>Given that my first array is the same, and both my arrays are 6 elements in size, is there any faster way to do this?</p>
<p>Somehow make this into a multidimensional matrix multiplication? Pandas? Or use some fancy Library? I have an AMD GPU.</p>
<p>I've tried multiprocessing it and it takes 16x more time (most likely because I'm doing a relatively trivial operation on a small array).</p>
<p>numba helped speed this up.</p>
<p>EDIT: As per the currently accepted answer, this also was a huge speed up</p>
<pre><code>np.mean(np.dot(a, b.T))
</code></pre>
<p>along with making sure that <code>a</code> and <code>b</code> (and everything else) are NumPy arrays.</p>
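As a sketch of that vectorized approach: stacking the second arrays into one 2-D array lets a single matrix-vector product replace the million-iteration Python loop. The sizes and random data below are illustrative, not the original benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1, 2, 3, 4, 5, 6], dtype=np.float64)
# Stack the many 6-element arrays into a single (n, 6) matrix.
b = rng.integers(0, 100, size=(100_000, 6)).astype(np.float64)

c = b @ a                  # one matrix-vector product, shape (100_000,)
final_average = c.mean()

# Spot-check against the per-row dot products from the loop version.
assert np.allclose(c[:5], [np.dot(a, row) for row in b[:5]])
```

The speedup comes from moving the loop into BLAS: one call over a contiguous 2-D array instead of one small <code>np.dot</code> per row.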
|
<python><numpy>
|
2023-05-26 21:14:46
| 1
| 6,145
|
azazelspeaks
|