| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
78,330,264
| 14,501,369
|
How do I extract a list of conditional formatting rules from excel using the xml?
|
<p>I need to extract a list of all cells that have conditional formatting where they might be highlighted yellow as a result of that formatting. I have tried using openpyxl to do this, but it will only give me a list of all cells that have conditional formatting rules, regardless of whether the rule might result in a yellow, red, blue, etc. highlight. I am unable to use pywin32 or xlwings to do this as they run too slowly.</p>
<p>Current openpyxl code:</p>
<pre><code>import openpyxl

wb_openpyxl = openpyxl.load_workbook(filename)
ws_openpyxl = wb_openpyxl["Sheet1"]
for cf in ws_openpyxl.conditional_formatting:
print(cf)
#<ConditionalFormatting A5:A9>
</code></pre>
<p>Desired result:</p>
<p>Conditional formatting A5:A9 if a5>0, (hex color associated with yellow)</p>
|
<python><excel><xml><openpyxl>
|
2024-04-15 18:00:12
| 1
| 997
|
Hooded 0ne
|
78,330,155
| 14,501,369
|
How do I extract the color of an excel shape using the xml?
|
<p>I have the below code which works beautifully to extract the text from all shapes in an excel file. For a simple xlsx with 2 sheets and 1 shape per sheet it will return the following:</p>
<p>['Text box 2']
['Sheet2 ellipse text 3']</p>
<p>However, I also need it to return the color of each shape. I cannot use xlwings or pywin32 as those take too long to process. I do not believe that openpyxl has this feature.</p>
<pre><code>from zipfile import ZipFile
from lxml import etree
def main():
z = ZipFile('/home/luis/tmp/shapes.xlsx')
drawings = [ drw for drw in z.filelist if "drawings" in drw.filename.split('/') ]
for drw in drawings:
with z.open(drw.filename) as sstr:
ssdata = sstr.read()
tree = etree.fromstring(ssdata)
ns = {"xdr": "http://schemas.openxmlformats.org/drawingml/2006/spreadsheetDrawing",
"a": "http://schemas.openxmlformats.org/drawingml/2006/main",
"r": "http://schemas.openxmlformats.org/officeDocument/2006/relationships"
}
shapes = tree.xpath('//xdr:txBody[preceding-sibling::xdr:spPr]/a:p/a:r/a:t/text()', namespaces = ns)
print(shapes)
if __name__ == '__main__':
main()
</code></pre>
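<p>Extending the same drawing-XML approach should work for colors too: a solid fill sits at <code>xdr:sp/xdr:spPr/a:solidFill/a:srgbClr</code> (theme-based fills use <code>a:schemeClr</code> instead and would need <code>theme1.xml</code> to resolve). A minimal sketch with the stdlib parser, so it is not tied to lxml; <code>shape_texts_and_colors</code> is an illustrative name:</p>

```python
import xml.etree.ElementTree as ET

A = "http://schemas.openxmlformats.org/drawingml/2006/main"
XDR = "http://schemas.openxmlformats.org/drawingml/2006/spreadsheetDrawing"
NS = {"a": A, "xdr": XDR}

def shape_texts_and_colors(drawing_xml):
    """Return a (text, fill_color) tuple for every shape in one drawing part."""
    out = []
    for sp in ET.fromstring(drawing_xml).iter("{%s}sp" % XDR):
        # gather all text runs of the shape
        text = " ".join(t.text or "" for t in sp.iter("{%s}t" % A))
        # explicit RGB solid fill, if present
        clr = sp.find("xdr:spPr/a:solidFill/a:srgbClr", NS)
        out.append((text, None if clr is None else clr.get("val")))
    return out
```

<p>Each drawing part read from the zip (as in the code above) can be passed through this function instead of the text-only XPath.</p>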
|
<python><excel><xml>
|
2024-04-15 17:38:30
| 1
| 997
|
Hooded 0ne
|
78,330,095
| 4,398,966
|
dictionary of lists of dictionaries not filtering
|
<p>I tried the following:</p>
<pre><code>positions = {
'IBM': [
{'ticker': 'IBM', 'start_date': '20240415', 'end_date': 99999999},
{'ticker': 'IBM', 'start_date': '20240416', 'end_date': 00000000},
{'ticker': 'IBM', 'start_date': '20240417', 'end_date': 99999999}],
'MRK': [
{'ticker': 'mmm', 'start_date': '20240415', 'end_date': 99999999},
{'ticker': 'mmm', 'start_date': '20240416', 'end_date': 00000000},
{'ticker': 'mmm', 'start_date': '20240417', 'end_date': 99999999}]}
new_pos = [position for position in positions.values()
if position[0].get('end_date') == 99999999]
print(new_pos)
</code></pre>
<p>and got back:</p>
<pre><code>[[{'ticker': 'IBM', 'start_date': '20240415', 'end_date': 99999999},
{'ticker': 'IBM', 'start_date': '20240416', 'end_date': 0},
{'ticker': 'IBM', 'start_date': '20240417', 'end_date': 99999999}],
[{'ticker': 'mmm', 'start_date': '20240415', 'end_date': 99999999},
{'ticker': 'mmm', 'start_date': '20240416', 'end_date': 0},
{'ticker': 'mmm', 'start_date': '20240417', 'end_date': 99999999}]]
</code></pre>
<p>why is the if condition not working and why do we get back all records instead
of just the first record in the list with <code>position[0]</code>?</p>
<p>I think if the first dict of each list has end = 99999999 then we get back the entire list. How do we just get back the dicts where end = 99999999?</p>
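<p>The comprehension iterates <code>positions.values()</code>, so each <code>position</code> is a whole <em>list</em> of dicts; the <code>if</code> inspects only its first dict and, when that test passes, keeps the entire list. To keep individual dicts, filter one level deeper:</p>

```python
positions = {
    'IBM': [
        {'ticker': 'IBM', 'start_date': '20240415', 'end_date': 99999999},
        {'ticker': 'IBM', 'start_date': '20240416', 'end_date': 0},
        {'ticker': 'IBM', 'start_date': '20240417', 'end_date': 99999999}],
    'MRK': [
        {'ticker': 'mmm', 'start_date': '20240415', 'end_date': 99999999},
        {'ticker': 'mmm', 'start_date': '20240416', 'end_date': 0},
        {'ticker': 'mmm', 'start_date': '20240417', 'end_date': 99999999}]}

# flat list of every dict whose end_date is the open-position sentinel
open_positions = [p for plist in positions.values() for p in plist
                  if p.get('end_date') == 99999999]

# or, keeping the per-ticker grouping
open_by_ticker = {t: [p for p in plist if p.get('end_date') == 99999999]
                  for t, plist in positions.items()}
```

<p>The nested comprehension visits each inner dict <code>p</code>, so the condition is evaluated per record rather than per list.</p>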
|
<python><list><dictionary><conditional-statements>
|
2024-04-15 17:26:25
| 1
| 15,782
|
DCR
|
78,330,035
| 23,626,926
|
Python `regex` module - get unfuzzied match from fuzzy match
|
<p>I am writing a simple command interpreter for a project to allow the user to interact with a virtual world. In the commands the user can refer to objects by any number of different names, and the objects can appear and disappear. Also, to be nice to the user, I am allowing some typos.</p>
<p>To figure out which object the user is referring to, I am using the Python <a href="https://pypi.org/project/regex" rel="nofollow noreferrer">regex</a> module to do fuzzy matching. However, when there is a typo, the matched group no longer has the same name as the dictionary I am using to look up the actual object by its name, and the lookup fails.</p>
<p>Pared down test case:</p>
<pre class="lang-py prettyprint-override"><code># test.py
from dataclasses import dataclass, field
import regex
@dataclass
class Command:
phrase: str
arguments: list[str]
argtypes: list[type]
pattern: str = field(init=False)
def __post_init__(self):
assert (len(set(self.arguments)) == len(self.argtypes))
def fp(match):
if (n := match.group(0)) in self.arguments:
return rf"(?:(?P<{n}>\b\L<{n}>\b)){{e<=5}}"
return rf"(?:\b{regex.escape(n)}\b){{e<=3}}"
munged = regex.sub(r"\b(\w+)\b", fp, self.phrase)
munged = regex.sub(r"\s+", r"\\s+", munged)
self.pattern = munged
def match(self, string: str,
candidate_captures: list) -> tuple[list[str] | None, int | None]:
"""Match self on the string, return the matched objects and the
number of errors, or None, None if no match"""
# assemble the candidates dict
options = {x: [] for x in self.arguments}
unmap = {}
for c in candidate_captures:
for a, t in zip(self.arguments, self.argtypes):
if isinstance(c, t):
s = str(c)
options[a].append(s)
unmap[s] = c
match: regex.Match = regex.search(self.pattern, string,
**options, flags=regex.BESTMATCH)
if match:
return ([unmap[match.group(g)] for g in self.arguments], #####<<<<<<<<
sum(match.fuzzy_counts))
return None, None
</code></pre>
<pre><code>>>> from test import Command
>>> x = Command("bar X foo Y", ["X", "Y"], [int, float])
>>> x
Command(phrase='bar X foo Y', arguments=['X', 'Y'], argtypes=[<class 'int'>, <class 'float'>], pattern='(?:\\bbar\\b){e<=3}\\s+(?:(?P<X>\\b\\L<X>\\b)){e<=5}\\s+(?:\\bfoo\\b){e<=3}\\s+(?:(?P<Y>\\b\\L<Y>\\b)){e<=5}')
>>> x.match("baar 12 foo 345", [12, 34.5, 17, 65.9])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "test.py", line 39, in match
return ([unmap[match.group(g)] for g in self.arguments], #####<<<<<<<<
~~~~~^^^^^^^^^^^^^^^^
KeyError: '345'
>>> # this is because it fuzzymatched "34.5"
>>> # how do I get the "34.5" out again?
</code></pre>
<p>What I am looking to do is, on the marked line that errors, to get the <em>original</em>, unfuzzied text, instead of the <em>actual</em> fuzzy text. I can't figure out how to do this.</p>
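<p>As far as I can tell, the <code>regex</code> module reports what the fuzzy group actually <em>matched</em> (here <code>"345"</code>), not which candidate from the <code>\L&lt;...&gt;</code> list it was matched against, and I am not aware of an API that returns the latter directly. One pragmatic workaround is to re-rank the candidates against the matched text with stdlib <code>difflib</code> and use the winner as the <code>unmap</code> key; <code>unfuzz</code> is an illustrative name, and ties go to whichever candidate <code>get_close_matches</code> ranks first:</p>

```python
import difflib

def unfuzz(matched_text, candidates):
    """Best-effort map of the fuzzily matched substring back to the closest
    original candidate string."""
    hits = difflib.get_close_matches(matched_text, candidates, n=1, cutoff=0.0)
    return hits[0] if hits else None

# in Command.match, instead of unmap[match.group(g)] one could write:
#   key = unfuzz(match.group(g), options[g])
#   ... unmap[key] ...
```

<p>This is heuristic: with very similar candidates it can pick the wrong one, so capping the allowed error count (the <code>{e&lt;=5}</code> budgets) keeps it reliable.</p>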
|
<python><regex><fuzzy-search><python-regex>
|
2024-04-15 17:12:22
| 0
| 360
|
dragoncoder047
|
78,329,987
| 5,312,606
|
numba dispatch on type
|
<p>I would like to dispatch on the type of the second argument of a function in numba, but I am failing to do so.</p>
<p>If it is an integer then a vector should be returned,
if it is itself an array of integers, then a matrix should be returned.</p>
<p>The first code does not work</p>
<pre class="lang-py prettyprint-override"><code>@njit
def test_dispatch(X, indices):
if isinstance(indices, nb.int64):
ref_pos = np.empty(3, np.float64)
ref_pos[:] = X[:, indices]
return ref_pos
elif isinstance(indices, nb.int64[:]):
ref_pos = np.empty((3, len(indices)), np.float64)
ref_pos[:, :] = X[:, indices]
return ref_pos
</code></pre>
<p>while the second one, with an <code>else</code>, does.</p>
<pre class="lang-py prettyprint-override"><code>@njit
def test_dispatch(X, indices):
if isinstance(indices, nb.int64):
ref_pos = np.empty(3, np.float64)
ref_pos[:] = X[:, indices]
return ref_pos
else:
ref_pos = np.empty((3, len(indices)), np.float64)
ref_pos[:, :] = X[:, indices]
return ref_pos
</code></pre>
<p>I guess the problem is the type declaration via <code>nb.int64[:]</code>, but I can't get it to work in any other way.
Do you have an idea?</p>
<p>Note that this question applies to <code>numba&gt;=0.59</code>: <code>generated_jit</code> was deprecated in earlier versions and is actually removed from version 0.59 on.</p>
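<p>As I understand it (hedging, since this is numba-internals territory), <code>nb.int64[:]</code> is a type <em>instance</em> describing an array, not a class, so <code>isinstance(indices, nb.int64[:])</code> is not a check numba can compile; with a plain <code>else</code> the second branch needs no such check, which is why only that form works. For genuinely separate per-type implementations, <code>numba.extending.overload</code> choosing an implementation at compile time is the usual route. The dispatch logic itself, sketched in plain NumPy with the illustrative name <code>take_columns</code>:</p>

```python
import numpy as np

def take_columns(X, indices):
    """Scalar index -> vector of shape (3,); index array -> matrix (3, m)."""
    if np.ndim(indices) == 0:          # plain integer
        out = np.empty(3, np.float64)
        out[:] = X[:, indices]
        return out
    indices = np.asarray(indices)      # array of integers
    out = np.empty((3, len(indices)), np.float64)
    out[:, :] = X[:, indices]
    return out
```

<p>Under <code>overload</code>, each branch would become its own implementation selected by inspecting the numba type of <code>indices</code> (e.g. <code>isinstance(ty, types.Integer)</code> versus <code>isinstance(ty, types.Array)</code>).</p>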
|
<python><types><numba>
|
2024-04-15 17:03:41
| 1
| 1,897
|
mcocdawc
|
78,329,750
| 12,800,206
|
dockerized python application takes a long time to trim a video with ffmpeg
|
<p>The project trims YouTube videos.</p>
<p>When I ran the ffmpeg command in the terminal, it didn't take too long to respond. The code below returns the trimmed video to the front end, but it takes too long: a 10-minute trim takes about 5 minutes to respond. I am missing something, but I can't pinpoint the issue.</p>
<p>backend</p>
<p>main.py</p>
<pre><code>import os
import subprocess

from flask import Flask, request, send_file
from flask_cors import CORS, cross_origin
app = Flask(__name__)
cors = CORS(app)
current_directory = os.getcwd()
folder_name = "youtube_videos"
save_path = os.path.join(current_directory, folder_name)
output_file_path = os.path.join(save_path, 'video.mp4')
os.makedirs(save_path, exist_ok=True)
def convert_time_seconds(time_str):
hours, minutes, seconds = map(int, time_str.split(':'))
total_seconds = (hours * 3600) + (minutes * 60) + seconds
return total_seconds
def convert_seconds_time(total_seconds):
new_hours = total_seconds // 3600
total_seconds %= 3600
new_minutes = total_seconds // 60
new_seconds = total_seconds % 60
new_time_str = f'{new_hours:02}:{new_minutes:02}:{new_seconds:02}'
return new_time_str
def add_seconds_to_time(time_str, seconds_to_add):
total_seconds = convert_time_seconds(time_str)
total_seconds -= seconds_to_add
new_time_str = convert_seconds_time(total_seconds)
return new_time_str
def get_length(start_time, end_time):
start_time_seconds = convert_time_seconds(start_time)
end_time_seconds = convert_time_seconds(end_time)
length = end_time_seconds - start_time_seconds
length_str = convert_seconds_time(length)
return length_str
def download_url(url):
command = [
"yt-dlp",
"-g",
url
]
try:
links = subprocess.run(command, capture_output=True, text=True, check=True)
video, audio = links.stdout.strip().split("\n")
return video, audio
except subprocess.CalledProcessError as e:
print(f"Command failed with return code {e.returncode}.")
print(f"Error output: {e.stderr}")
return None
except ValueError:
print("Error: Could not parse video and audio links.")
return None
def download_trimmed_video(video_link, audio_link, start_time, end_time):
new_start_time = add_seconds_to_time(start_time, 30)
new_end_time = get_length(start_time, end_time)
if os.path.exists(output_file_path):
os.remove(output_file_path)
command = [
'ffmpeg',
'-ss', new_start_time + '.00',
'-i', video_link,
'-ss', new_start_time + '.00',
'-i', audio_link,
'-map', '0:v',
'-map', '1:a',
'-ss', '30',
'-t', new_end_time + '.00',
'-c:v', 'libx264',
'-c:a', 'aac',
output_file_path
]
try:
result = subprocess.run(command, capture_output=True, text=True, check=True)
if result.returncode == 0:
return "Trimmed video downloaded successfully!"
else:
return "Error occurred while downloading trimmed video"
except subprocess.CalledProcessError as e:
print(f"Command failed with return code {e.returncode}.")
print(f"Error output: {e.stderr}")
app = Flask(__name__)
@app.route('/trimvideo', methods =["POST"])
@cross_origin()
def trim_video():
print("here")
data = request.get_json()
video_link, audio_link = download_url(data["url"])
if video_link and audio_link:
print("Downloading trimmed video...")
download_trimmed_video(video_link, audio_link, data["start_time"], data["end_time"])
response = send_file(output_file_path, as_attachment=True, download_name='video.mp4')
response.status_code = 200
return response
else:
return "Error downloading video", 400
if __name__ == '__main__':
app.run(debug=True, port=5000, host='0.0.0.0')
</code></pre>
<p>dockerfile</p>
<pre><code>FROM ubuntu:latest
# Update the package list and install wget and ffmpeg
RUN apt-get update \
&& apt-get install -y wget ffmpeg python3 python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Download the latest version of yt-dlp and install it
RUN wget https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -O /usr/local/bin/yt-dlp \
&& chmod a+rx /usr/local/bin/yt-dlp
WORKDIR /app
COPY main.py /app/
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
# Set the default command
CMD ["python3", "main.py"]
</code></pre>
<p>requirements.txt</p>
<pre><code>blinker==1.7.0
click==8.1.7
colorama==0.4.6
Flask==3.0.3
Flask-Cors==4.0.0
itsdangerous==2.1.2
Jinja2==3.1.3
MarkupSafe==2.1.5
Werkzeug==3.0.2
</code></pre>
<p>frontend</p>
<p>App.js</p>
<pre><code>
import React, { useState } from 'react';
import './App.css';
import axios from 'axios';
async function handleSubmit(event, url, start_time, end_time, setVideoUrl, setIsSubmitted){
event.preventDefault();
if( url && start_time && end_time){
try {
setIsSubmitted(true);
const response = await axios.post('http://127.0.0.1:5000/trimvideo', {
url: url,
start_time: start_time,
end_time: end_time
},
{
responseType: 'blob',
headers: {'Content-Type': 'application/json'}
}
)
const blob = new Blob([response.data], { type: 'video/mp4' });
const newurl = URL.createObjectURL(blob);
setVideoUrl(newurl);
} catch (error) {
console.error('Error trimming video:', error);
}
} else {
alert('Please fill all the fields');
}
}
function App() {
const [url, setUrl] = useState('');
const [startTime, setStartTime] = useState('');
const [endTime, setEndTime] = useState('');
const [videoUrl, setVideoUrl] = useState('');
const [isSubmitted, setIsSubmitted] = useState(false);
return (
<div className="App">
<div className="app-header">TRIM AND DOWNLOAD YOUR YOUTUBE VIDEO HERE</div>
<input className = "input-url"placeholder='Enter url' value={url} onChange={e=>setUrl(e.target.value)}/>
<div className="input-container">
<input className="start-time-url" placeholder="start time" value={startTime} onChange={(e)=>setStartTime(e.target.value)}/>
<input className="end-time-url" placeholder="end time" value={endTime} onChange={e=>setEndTime(e.target.value)}/>
</div>
{
!isSubmitted && <button onClick={(event) => handleSubmit(event, url, startTime, endTime, setVideoUrl, setIsSubmitted)} className='trim-button'>Trim</button>
}
{
( isSubmitted && !videoUrl) && <div className="dot-pulse"></div>
}
{
videoUrl && <video controls autoPlay width="500" height="360">
<source src={videoUrl} type='video/mp4'></source>
</video>
}
</div>
);
}
export default App;
</code></pre>
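<p>One likely contributor to the slowness (an assumption, since the exact terminal command isn't shown) is that <code>-c:v libx264 -c:a aac</code> forces a full re-encode of the trimmed span. If the source streams are already H.264/AAC, stream-copying is dramatically faster, at the cost of cut points snapping to keyframes. A hypothetical helper that builds such a command:</p>

```python
def build_trim_command(video_link, audio_link, start, length, out_path):
    """ffmpeg argv that trims by stream copy instead of re-encoding."""
    return [
        "ffmpeg",
        "-ss", start, "-i", video_link,   # -ss before each input = fast seek
        "-ss", start, "-i", audio_link,
        "-map", "0:v", "-map", "1:a",
        "-t", length,
        "-c", "copy",                     # no libx264/aac re-encode
        out_path,
    ]

cmd = build_trim_command("https://v.example/video", "https://v.example/audio",
                         "00:05:00.00", "00:10:00.00", "video.mp4")
```

<p>If frame-accurate cuts are required, re-encoding is unavoidable, but a faster preset (e.g. <code>-preset veryfast</code>) would still cut the response time considerably.</p>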
|
<python><docker><flask><ffmpeg><yt-dlp>
|
2024-04-15 16:12:17
| 0
| 748
|
Ukpa Uchechi
|
78,329,714
| 2,687,317
|
Pandas - groupby string field and select by time-of-day range
|
<p>I have a dataset like this</p>
<pre><code>index Date_Time Pass_ID El
0 3/30/23 05:12:36.36 A 1
1 3/30/23 05:12:38.38 A 2
1 3/30/23 05:12:40.40 A 3
1 3/30/23 05:12:42.42 A 4
1 3/30/23 05:12:44.44 A 4
1 3/30/23 12:12:50.50 B 3
1 3/30/23 12:12:52.52 B 4
1 3/30/23 12:12:54.54 B 5
1 3/30/23 12:12:56.56 B 6
1 3/30/23 12:12:58.58 B 7
1 3/30/23 12:13:00.00 B 8
1 3/30/23 12:13:02.02 B 9
1 3/31/23 20:02:02.02 C 3
1 3/31/23 20:02:05.05 C 4
</code></pre>
<p>The Date_Time column is a pandas datetime object.</p>
<p>I'd like to group the records by <code>Pass_ID</code>, and <em><strong>then select out only those unique Pass_IDs that occur between specific hours</strong></em> in the day: for instance, between 10:00 and 13:00 would return B.</p>
<p>I don't know how to get groupby and 'between_time' to work in this case... which would seem to be the best way forward. I've also tried using a lambda function after setting the Date_Time as the index, but that didn't work. And using aggregate doesn't seem to allow me to pull out the dt.hour of the Date_Time field. Anyone know how to do this concisely?</p>
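<p>One sketch of the idea (toy data, not the asker's frame): <code>between_time</code> operates on a <code>DatetimeIndex</code>, so setting <code>Date_Time</code> as the index first and then taking the unique <code>Pass_ID</code>s of the surviving rows gives the passes with <em>any</em> record in the window. If a pass must fall entirely inside the window, a <code>groupby(...).filter</code> on the full frame would be needed instead.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date_Time": pd.to_datetime([
        "2023-03-30 05:12:36", "2023-03-30 12:12:50",
        "2023-03-30 12:13:00", "2023-03-31 20:02:02",
    ]),
    "Pass_ID": ["A", "B", "B", "C"],
})

# between_time needs a DatetimeIndex, so index on the timestamp first
ids = (df.set_index("Date_Time")
         .between_time("10:00", "13:00")["Pass_ID"]
         .unique())
```

<p>Both endpoints are inclusive by default; <code>inclusive=</code> can tighten that if needed.</p>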
|
<python><pandas><group-by>
|
2024-04-15 16:05:19
| 2
| 533
|
earnric
|
78,329,495
| 4,909,242
|
what is the equivalent of numpy accumulate ufunc in pytorch
|
<p>In numpy, I can do the following:</p>
<pre><code>>>> x = np.array([[False, True, False], [True, False, True]])
>>> z0 = np.logical_and.accumulate(x, axis=0)
>>> z1 = np.logical_and.accumulate(x, axis=1)
</code></pre>
<p>This returns the following:</p>
<pre><code>>>> z0
array([[False, True, False],
[False, False, False]])
>>> z1
array([[False, False, False],
[ True, False, False]])
</code></pre>
<p>What is the equivalent of this ufunc operation in pytorch?</p>
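<p>On booleans, <code>logical_and.accumulate</code> is just a running product (equivalently a running minimum), and those cumulative ops do exist in PyTorch. Demonstrated here in NumPy, with the assumed PyTorch translation in comments (PyTorch's cumulative ops don't accept bool tensors directly, hence the integer cast):</p>

```python
import numpy as np

x = np.array([[False, True, False], [True, False, True]])

# running AND == running product of 0/1 values
z0 = np.cumprod(x, axis=0).astype(bool)
z1 = np.cumprod(x, axis=1).astype(bool)

# PyTorch analogue (assumption: t = torch.tensor(x)):
#   z0 = torch.cumprod(t.to(torch.int8), dim=0).bool()
#   z1 = torch.cummin(t.to(torch.int8), dim=1).values.bool()  # same result
```

<p><code>logical_or.accumulate</code> would translate the same way via <code>cummax</code> (running maximum).</p>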
|
<python><numpy><pytorch><numpy-ufunc>
|
2024-04-15 15:26:35
| 2
| 709
|
Wei Li
|
78,329,445
| 15,452,168
|
How to Efficiently Process 6000 Requests in Python with Limited Time Constraints?
|
<p>I have a Python script that sends requests to an API to generate responses based on input prompts. Each request typically takes around 10 seconds to process. I have a CSV file containing 6000 rows, and I want to send the values from the 'text' column as input prompts to the API. Additionally, I aim to append the generated responses to a new column in the same CSV file.</p>
<p>Given the large number of requests and the time it takes for each request to complete, I'm concerned about the overall processing time. Is there a way to optimize this process to avoid spending 6000*10 seconds sequentially?</p>
<p>Here's the base code snippet:</p>
<pre><code>import time
from concurrent.futures import as_completed

import pandas as pd
from requests_futures.sessions import FuturesSession
# Define your existing function for sending a request
def send_request(session, input_prompt):
url = 'https://ai-chat-test.***.workers.dev/'
consolidated_request = f"""From the input "{input_prompt}", extract key descriptions, deduplicate them, and use these to create a compelling, concise description of the fashion item."""
return session.post(url, json={"messages": [{"role": "user", "content": consolidated_request}]}, headers={"Content-Type": "application/json"})
# Function to apply send_request concurrently and save intermittently
def apply_concurrent_requests(df, function):
session = FuturesSession(max_workers=5)
futures = {function(session, text): index for index, text in enumerate(df['text'])}
results = [None] * len(df)
batch_counter = 0
start_time = time.time()
for future in as_completed(futures):
index = futures[future]
try:
response = future.result()
if response.status_code == 200:
data = response.json()
for message in data:
if message['role'] == 'assistant':
results[index] = message['content']
break
else:
results[index] = "No assistant message found."
else:
results[index] = f"Failed to get a response, status code: {response.status_code}"
except Exception as e:
results[index] = f"Exception occurred: {e}"
# Save after processing each batch of 100 or the last response
batch_counter += 1
if batch_counter % 100 == 0 or index + 1 == len(df):
# Save the responses so far to the DataFrame
for j, res in enumerate(results[:index+1]):
df.at[j, 'response'] = res
# Save to CSV
df.to_csv('intermediate_metadata.csv', index=False)
print(f"Processed {index+1} requests in {time.time() - start_time:.2f} seconds and saved to CSV.")
start_time = time.time() # Reset the timer for the next batch
return df
df = pd.read_csv('metadata.csv')
updated_df = apply_concurrent_requests(df, send_request)
updated_df.to_csv('updated_metadata.csv', index=False)
</code></pre>
<p>How can I optimize this process to send multiple requests at once or utilize parallel processing to reduce the overall processing time?</p>
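<p>The code above is already pointed the right way (a worker pool of in-flight requests). Reduced to the stdlib, the core pattern looks like the sketch below; with <code>N</code> workers the wall-clock time is roughly 6000×10s/N, subject to whatever rate limit the API enforces, so that is worth checking before raising the worker count. <code>fetch</code> is a stand-in for the real API call.</p>

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(prompt):
    """Stand-in for the real API call (assumption: ~10 s each, I/O bound)."""
    return f"response for {prompt}"

def run_all(prompts, workers=20):
    """Run fetch concurrently, keeping results aligned with the input order."""
    results = [None] * len(prompts)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, p): i for i, p in enumerate(prompts)}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```

<p>Threads (not processes) are the right tool here because the work is network-bound; the index map keeps each response attached to its source row regardless of completion order.</p>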
|
<python><python-3.x><multithreading><multiprocessing><python-multiprocessing>
|
2024-04-15 15:20:45
| 0
| 570
|
sdave
|
78,329,319
| 13,174,189
|
How to get latitude, longitude and full address based on an address?
|
<p>I have a dataframe with column address_1:</p>
<pre><code>address_1
apple fifth avenue new york
burj khalifa dubai
microsoft office san francisco
</code></pre>
<p>I want to get the longitude, latitude and full address based on address_1. How to do it? I tried the geopy library:</p>
<pre><code>from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="myGeocoder")
location = geolocator.geocode("apple fifth avenue new york")
address = location.address
latitude = location.latitude
longitude = location.longitude
print(latitude, longitude)
</code></pre>
<p>but i get this error:</p>
<pre><code>GeocoderInsufficientPrivileges Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/geopy/geocoders/base.py in _adapter_error_handler(self, error)
409 ) from error
410 else:
--> 411 raise exc_cls(str(error)) from error
412 else:
413 res = self._geocoder_exception_handler(error)
GeocoderInsufficientPrivileges: Non-successful status code 403
</code></pre>
|
<python><python-3.x><google-maps><geolocation><geopandas>
|
2024-04-15 15:02:27
| 1
| 1,199
|
french_fries
|
78,329,226
| 447,426
|
PySpark where to find logs and how to log properly
|
<p>I have set up a local (docker) Spark cluster as provided by <a href="https://github.com/bitnami/containers/blob/main/bitnami/spark/docker-compose.yml" rel="nofollow noreferrer">bitnami</a>. Everything is running fine:
I can submit simple Spark jobs and run them, and I can interact with pyspark.</p>
<p>Now I am trying to set up basic logging (my sources are <a href="https://stackoverflow.com/questions/25407550/how-do-i-log-from-my-python-spark-script">stackoverflow</a> and a <a href="https://gist.github.com/smartnose/9f173b4c36dc31310e8efd27c3535a14" rel="nofollow noreferrer">github gist</a>).
Mainly I see 2 ways. First:</p>
<pre><code>log4jref = sc._jvm.org.apache.log4j
logger = log4jref.getLogger(__name__)
</code></pre>
<p>If I am using this in the pyspark console, I at least see the output in the console:</p>
<pre><code>>>> logger.info("something")
24/04/15 14:24:31 INFO __main__: something
</code></pre>
<p>If I am using the "cleaner solution":</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.info("here we go")
</code></pre>
<p>I do not see anything returned in the pyspark console.</p>
<p>In both cases I do not see my outputs in <code>docker logs spark-spark-worker-1</code> nor on the master via <code>docker logs spark-spark-1</code> (other outputs are there).
I also tried to look in <code>/opt/bitnami/spark/logs</code>.</p>
<p>So my question is: <strong>where do I find my log messages in general?</strong> And which of the 2 ways is better? As a first step I just need a quick way to get feedback from my running code - making it production ready comes later (if that's not easily possible right from the start).</p>
<p><strong>little update</strong></p>
<p>the job that i am running for test looks like this:</p>
<pre><code>import logging

from pyspark.sql import SparkSession

# Create a SparkSession
spark = SparkSession.builder \
    .appName("My App") \
    .getOrCreate()
log = logging.getLogger(__name__)
rdd = spark.sparkContext.parallelize(range(1, 100))
log.error("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
log.error(f"THE SUM IS HERE: {rdd.sum()}")
print(f"THE SUM IS HERE: {rdd.sum()}")
print("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
# Stop the SparkSession
spark.stop()
</code></pre>
<p>While I see in the logs that this job runs successfully, I can't see my log entries nor the print output.
All these places seem to show the same log:</p>
<ul>
<li>docker ui (uses API to access logs?)</li>
<li>docker logs (for master and worker)</li>
<li>file /opt/bitnami/spark/work/app-20240416084653-0006/0/stderror on worker</li>
</ul>
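<p>For the "cleaner" variant there is a plain-Python explanation: <code>logging.getLogger(__name__)</code> without any configuration drops everything below WARNING (the default root level), and without a handler only Python's last-resort stderr handler ever fires - so <code>logger.info(...)</code> produces nothing anywhere. Attaching an explicit handler and level fixes that; on a worker, pointing the stream at stderr should make the messages land in the per-app stderr file under the worker's work directory (my understanding of the bitnami layout, not verified). A minimal sketch, with a <code>StringIO</code> standing in for <code>sys.stderr</code> to show the effect:</p>

```python
import io
import logging

def make_logger(name, stream):
    """Attach an explicit level and stream handler; without these,
    logger.info(...) is silently dropped."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
    logger.addHandler(handler)
    return logger

buf = io.StringIO()          # use sys.stderr in the real job
make_logger("job", buf).info("here we go")
```

<p>The <code>log4j</code>-via-<code>sc._jvm</code> route works out of the box because it rides Spark's own log4j configuration, which is why that variant showed output immediately.</p>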
|
<python><docker><logging><pyspark>
|
2024-04-15 14:46:17
| 1
| 13,125
|
dermoritz
|
78,329,019
| 4,469,336
|
Django: Adding unique together errors to one of the involved fields
|
<p>Note: I asked this question on the <a href="https://forum.djangoproject.com/t/adding-unique-together-errors-to-one-of-the-involved-fields/30017/1" rel="nofollow noreferrer">Django forum</a>, but since I did not receive any answers there, I'm asking here, too.</p>
<p>I’m building an application with multi-tenancy, meaning that all data is associated to a tenant model. When creating models, I often want to validate uniqueness of e.g. the <code>name</code> field. However, the uniqueness is only required per tenant, not globally, which is why I’m using <code>unique_together</code> (or <code>UniqueConstraint</code>, found that too late, but will migrate).</p>
<p>However, when the uniqueness is violated, a <code>non_field_error</code> is added. I would however prefer to add this error to the <code>name</code> field, as the tenant field is implicit and not visible to the user (the tenant is always the tenant of the logged in user, and can’t be chosen).</p>
<p>Any hints on how I could achieve that, preferably in the model or alternatively in a form? I thought about first validating the model, and then moving the error from the non_field_errors to the name field, but that feels kind of wrong.</p>
<p>Starting point would be this code:</p>
<pre class="lang-py prettyprint-override"><code>class Company(models.Model):
tenant = models.ForeignKey(Tenant, on_delete=models.PROTECT)
name = models.CharField(max_length=255)
class Meta:
unique_together = ('tenant', 'name')
</code></pre>
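<p>One pragmatic option - a sketch of the "move the error" idea the question already considers, not a Django-blessed API - is to intercept the combined-uniqueness error in the form's <code>clean()</code> and re-home it on <code>name</code> via <code>self.add_error('name', ...)</code>. The dict surgery itself is plain Python; <code>move_error</code> and the message needle are illustrative:</p>

```python
NON_FIELD_ERRORS = "__all__"  # the key Django uses for non-field errors

def move_error(errors, to_field, needle="already exists"):
    """Move matching non-field messages onto a concrete field (in a ModelForm,
    this logic would run in clean(), with self.add_error doing the attaching)."""
    matching = [m for m in errors.get(NON_FIELD_ERRORS, []) if needle in m]
    if matching:
        rest = [m for m in errors[NON_FIELD_ERRORS] if needle not in m]
        if rest:
            errors[NON_FIELD_ERRORS] = rest
        else:
            del errors[NON_FIELD_ERRORS]
        errors.setdefault(to_field, []).extend(matching)
    return errors
```

<p>Matching on the message text is brittle across locales; a sturdier variant would catch the <code>ValidationError</code> raised by the model's <code>validate_unique()</code> and inspect its error codes instead.</p>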
|
<python><django><validation><unique-constraint>
|
2024-04-15 14:14:58
| 0
| 2,985
|
Timitry
|
78,328,963
| 3,042,018
|
PyCharm warns "Expected type 'int', got 'float'" for random.randint, but code still runs sometimes
|
<p>PyCharm (Python version 3.11.3 on Windows) is flagging <code>Expected type 'int', got 'float' instead</code> where I use <code>random.randint(x/y, 10)</code> (where x/y will be, say, 5.0), which makes sense. However the code still runs.</p>
<p>Someone else using a different IDE tried to run the code and <code>TypeError</code> prevented the code from running. I don't know what system/version that was on, but it doesn't make sense to me.</p>
<p>For minimal viable program:</p>
<pre><code>import random
x = 10
y = 2
print(random.randint(x/y, 10))
</code></pre>
<p>Why is there a discrepancy between the PyCharm warning and the runtime errors, and why is it not consistent across systems?</p>
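<p>As I understand it, the discrepancy is a version split: <code>random.randint</code> delegates to <code>randrange</code>, which historically accepted floats with integral values, deprecated that in Python 3.10, and raises <code>TypeError</code> from 3.12 on - so 3.11.3 still runs the code (with a <code>DeprecationWarning</code>) while a 3.12 interpreter refuses it. The fix PyCharm is nudging toward is to make both bounds real ints:</p>

```python
import random

x, y = 10, 2
# "/" always produces a float; floor division (or int()) keeps the
# bounds as ints on every Python version
value = random.randint(x // y, 10)
```

<p>Use <code>int(x / y)</code> instead of <code>x // y</code> if the division is not guaranteed to be exact and truncation toward zero is the intended behavior.</p>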
|
<python><typeerror>
|
2024-04-15 14:04:42
| 1
| 3,842
|
Robin Andrews
|
78,328,935
| 547,231
|
Computing the Jacobian of an image: How to reshape the numpy array properly?
|
<p>I have a batch <code>x</code> of images of the shape <code>[k, width, height, channel_count]</code>. This batch is transformed by a function <code>f</code>. The result has the same shape and I need to compute the divergence (i.e. trace of the Jacobian) of this transformation. (To emphasize this: The transform is working on a batch; it's the output of a neural network and I cannot change it).</p>
<p>I'm using <code>jax.jacfwd</code> to compute the Jacobian. The output has shape <code>[k, width, height, channel_count, k, width, height, channel_count]</code>. That is the first problem, I actually need the Jacobian per image. So the output should have shape <code>[k, width, height, channel_count, width, height, channel_count]</code> instead. I don't know how I can achieve this using <code>jax.jacfwd</code>, since I don't have a per image transform (only the per batch transform).</p>
<p>And even if I have the desired output, how can I compute the trace (per image)? The output should have shape <code>[k]</code>. I think I need to reshape the jacobian output to <code>[k, width * height * channel_count, width * height * channel_count]</code>, but how can I do that?</p>
<p><em>Remark</em>: Note that I also need to know the value of the transform itself. Since there is no <code>jax.val_and_jacfwd</code>, I'm returning the actual value as an auxiliary variable, which should be fine. The solution to my question should still allow this.</p>
<p><strong>EDIT</strong>:</p>
<p>Here is a minimal reproducible example:</p>
<pre><code>import jax
import torch
def f(x):
# s = model.apply(params, x) ### The actual code queries a network model
# for reproducibility, here is a simple transform:
s = jax.numpy.empty(x.shape)
for i in range(x.shape[0]):
for u in range(x.shape[1]):
for v in range(x.shape[2]):
for c in range(x.shape[3]):
s = s.at[i, u, v, c].set(x[i, u, v, c] * x[i, u, v, c])
return [s, s]
jac = jax.jacfwd(f, has_aux = True)
k = 3
width = 2
height = 2
channel_count = 1
x = torch.empty((k, width, height, channel_count))
### The actual code loads x from a batch
it = 1
for i in range(x.shape[0]):
for u in range(x.shape[1]):
for v in range(x.shape[2]):
x[i, u, v, 0] = it
it += 1
f_jac, f_val = jac(x.numpy())
print(f_jac.shape)
</code></pre>
<p>The output is <code>(3, 2, 2, 1, 3, 2, 2, 1)</code>. This is clearly not what I want. I don't want to "differentiate one image with respect to pixels of another". What I actually want is the "per image" Jacobian. So, the output should be of the shape <code>(3, 2, 2, 1, 2, 2, 1)</code> instead.</p>
<p>But let me stress it again: I still <em>need</em> that <code>f</code> takes a batch of images, since invoking <code>model.apply</code> for each image in the batch separately would be terribly slow.</p>
<p><strong>EDIT 2</strong>:</p>
<p>BTW, if there is a direct way to compute the divergence of <code>f</code> - without the need of computing the whole Jacobian before - I would definitely prefer that.</p>
<p><strong>EDIT 3</strong>:</p>
<p>Regarding only the "per batch Jacobian" thing: Here is an even simpler example:</p>
<pre><code>import jax
import torch
def f(x):
s = jax.numpy.empty(x.shape)
s = s.at[0].set(x[0] * x[0])
s = s.at[1].set(x[1] * x[1])
s = s.at[2].set(x[2] * x[2])
return [s, s]
jac = jax.jacfwd(f, has_aux = True)
x = torch.empty(3)
x[0] = 1
x[1] = 2
x[2] = 3
f_jac, f_val = jac(x.numpy())
print(f_jac.shape)
print(f_jac)
print(f_val)
</code></pre>
<p>The goal would be that <code>f_jac</code> has shape <code>[3, 1, 1]</code>. (The Jacobian of a one-dimensional function is simply a scalar).</p>
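<p>For a transform that acts per image, the full batch Jacobian is block-diagonal in the batch axis, so the <code>(k, w, h, c, k, w, h, c)</code> output can be collapsed by keeping only the <code>i == j</code> blocks, and the divergence is then the trace of each image's flattened <code>(n, n)</code> block. The array mechanics, sketched in NumPy with an analytically known Jacobian (in JAX, <code>jnp</code> supports the same stacking/reshape/trace, and <code>jax.vmap</code> over a per-image function would avoid computing the off-diagonal blocks at all):</p>

```python
import numpy as np

k, w, h, c = 3, 2, 2, 1
n = w * h * c
x = np.arange(1.0, k * n + 1).reshape(k, w, h, c)

# analytic Jacobian of the elementwise square f(x) = x*x:
# block-diagonal in the batch axis, 2*x on each image's diagonal
jac = np.zeros((k, w, h, c, k, w, h, c))
for i in range(k):
    for idx in np.ndindex(w, h, c):
        jac[(i, *idx, i, *idx)] = 2 * x[(i, *idx)]

# keep only the diagonal batch blocks -> per-image Jacobian (k, w, h, c, w, h, c)
per_image = np.stack([jac[i, ..., i, :, :, :] for i in range(k)])

# divergence per image = trace of the flattened (n, n) Jacobian
divergence = np.trace(per_image.reshape(k, n, n), axis1=1, axis2=2)
```

<p>Computing the full 8-D Jacobian only to discard its off-diagonal blocks is wasteful for large images; a JVP-based estimator of the trace (e.g. Hutchinson's, via <code>jax.jvp</code> with random probe vectors) would avoid materializing the Jacobian entirely.</p>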
|
<python><numpy><neural-network><jax><automatic-differentiation>
|
2024-04-15 13:58:44
| 1
| 18,343
|
0xbadf00d
|
78,328,798
| 3,324,415
|
Robot Framework Database Library calling Oracle stored procedure fails with character to number conversion error
|
<p>I have an <code>Oracle PL/SQL</code> procedure that I can directly call as follows without problem:</p>
<pre><code>BEGIN
example_package_name.example_procedure(p_item_no => 123456, p_send_now => true);
END;
</code></pre>
<p>(Note: <code>p_item_no</code> expects a <code>NUMBER</code> and <code>p_send_now</code> expects a <code>BOOLEAN</code>)</p>
<p>I am attempting to run this from within my <code>Robot Framework</code> test automation framework as shown below.</p>
<p>First I have a small helper wrapper method for <code>robotframework-databaselibrary</code>:</p>
<pre><code>Execute SQL stored procedure
[Arguments]
... ${target_database}
... ${target_stored_procedure}
... ${stored_procedure_arguments}
... ${timeout}=5 minutes
[Documentation] Small wrapper around DatabaseLibrary for the: Call stored procedure keyword.
[Timeout] ${timeout}
__Open connection to target database ${target_database}
DatabaseLibrary.Set Auto Commit autoCommit=${True}
DatabaseLibrary.Call Stored Procedure ${target_stored_procedure} ${stored_procedure_arguments}
Close connection from the current database
</code></pre>
<p>Next from my test I am attempting something as follows:</p>
<pre><code>${item_no_int}= Convert To Integer ${test_item_dictionary.item_no}
${example_procedure_argument_list}= Create List p_item_no => ${item_no_int} p_send_now => ${True}
Execute SQL Stored Procedure target_database=test_db_name target_stored_procedure=example_package_name.example_procedure stored_procedure_arguments=${example_procedure_argument_list}
</code></pre>
<p>My error reads:</p>
<blockquote>
<p>[info (+0.10s)] Executing : Call Stored Procedure | example_package_name.example_procedure | ['p_item_no => 123456', 'p_send_now => True']</p>
</blockquote>
<blockquote>
<p>[FAIL] DatabaseError:ORA-06502: PL/SQL: numeric or value error: character to number conversion error ORA-06512: at line 1 Help: <a href="https://docs.oracle.com/error-help/db/ora-06502/" rel="nofollow noreferrer">https://docs.oracle.com/error-help/db/ora-06502/</a></p>
</blockquote>
<p>Naturally I have been trying to ensure that my data is of the correct type when leaving Robot Framework. Reading the documentation for the Robot Framework <code>DatabaseLibrary.Call Stored Procedure</code> keyword, I see:</p>
<pre><code>def call_stored_procedure(
self, spName: str, spParams: Optional[List[str]] = None, sansTran: bool = False, alias: Optional[str] = None
):
</code></pre>
<p>With description:</p>
<blockquote>
<p>Calls a stored procedure <code>spName</code> with the <code>spParams</code> - a <em>list</em> of
parameters the procedure requires.</p>
</blockquote>
<p>Is it possible that the keyword <code>DatabaseLibrary.Call Stored Procedure</code> / <code>spParams: Optional[List[str]]</code> does not preserve one's intended data types? Or is something else perhaps missing on my part?</p>
<p>I am running:</p>
<ul>
<li><code>robotframework>=7.0.0</code></li>
<li><code>robotframework-databaselibrary>=1.4.3</code></li>
<li><code>oracledb>=2.1.0</code></li>
</ul>
|
<python><oracle-database><plsql><robotframework>
|
2024-04-15 13:36:10
| 1
| 766
|
BernardV
|
78,328,713
| 2,401,856
|
Sqlite3 getting outdated value when selecting straight after inserting
|
<p>I'm seeing very weird behavior with sqlite3 in Python.<br>
I have a small Flask service with 2 endpoints, <code>addTask</code> and <code>deleteTask</code>, and in my database I have a <code>tasks_order</code> column which holds an array representing the task ordering. This order is changeable by the user.<br>
<code>addTask</code> - pushes a new random id (task id) into the tasks_order array and writes it to the database; this is its code:<br></p>
<p>Endpoint:</p>
<pre><code>@tasks_bp.route("/addTask", methods=["POST"])
def addTask():
hash = ''.join(secrets.choice(string.ascii_letters + string.digits + string.punctuation) for _ in range(10))
data = request.get_json()
return tasksService.addTask(hash)
</code></pre>
<p>Service:</p>
<pre><code>def addTask(hash):
try:
conn = db.getConn()
new_task_id = random.randint(1, 999)
tasks_order = dbm.getTasksOrderList()
dbm.updateTasksOrderList(tasks_order + [new_task_id])
conn.commit()
tasks_order = dbm.getTasksOrderList()
print(hash, "added", new_task_id, datetime.now().strftime("%M:%S.%f"))
print(hash, tasks_order, datetime.now().strftime("%M:%S.%f"))
except Exception as e:
conn.rollback()
raise e
else:
return {"id": new_task_id}
</code></pre>
<p>dbm:</p>
<pre><code>def getTasksOrderList():
cur = db.getConn().cursor()
row = cur.execute("SELECT order_list FROM tasks_order").fetchone()
order_list_str = row["order_list"]
return json.loads(order_list_str)
def updateTasksOrderList(order_list):
order_list_str = json.dumps(order_list)
cur = db.getConn().cursor()
cur.execute("UPDATE tasks_order SET order_list = ?", (order_list_str,))
</code></pre>
<p>Note that I added <code>hash</code> and some prints for debugging purposes only.<br>
<code>deleteTask</code> - receives a task id, <strong>selects the tasks_order array from the database</strong>, removes this task id then updates the tasks_order in database. Here's its code:</p>
<p>Endpoint:</p>
<pre><code>@tasks_bp.route("/deleteTask", methods=["POST"])
def deleteTask():
try:
hash = ''.join(secrets.choice(string.ascii_letters + string.digits + string.punctuation) for _ in range(10))
data = request.get_json()
return tasksService.deleteTask(hash, data["task_id"])
except Exception as e:
print(hash, "ERROR OCCURED", data["task_id"])
</code></pre>
<p>Service:</p>
<pre><code>def deleteTask(hash, task_id):
print(hash, "todelete", task_id, datetime.now().strftime("%M:%S.%f"))
try:
conn = db.getConn()
tasks_order = dbm.getTasksOrderList()
print(hash, tasks_order, task_id, datetime.now().strftime("%M:%S.%f"))
tasks_order.remove(task_id)
dbm.updateTasksOrderList(tasks_order)
conn.commit()
except Exception as e:
conn.rollback()
raise e
else:
return {}, 204
</code></pre>
<p>I handle connections to the database as suggested in flask <a href="https://flask.palletsprojects.com/en/3.0.x/patterns/sqlite3/" rel="nofollow noreferrer">documentation</a>.</p>
<pre><code>def getConn():
db = getattr(g, '_database', None)
if db is None:
db = g._database = sqlite3.connect(DATABASE)
db.row_factory = sqlite3.Row
return db
@app.teardown_appcontext
def close_connection(exception):
db = getattr(g, '_database', None)
if db is not None:
db.close()
</code></pre>
<p>After running the service, I run a performance test in Postman, which sends requests to <code>addTask</code> and <code>deleteTask</code> from 20 users. It processes a few add and delete requests successfully, then fails.<br>
I found out that it fails in the <code>deleteTask</code> endpoint, when trying to remove a task_id that doesn't exist in the array. The weird thing is that the logs show the insertion into the array (in the database) happened before <code>deleteTask</code> was called.<br>
But when the connection to the db was closed (which happens after processing each request) and a new connection was established (which happens when receiving a new request), the new connection could not see the changes in the database. Why?!</p>
<p>logs:</p>
<pre><code>?,PXYJ*pse added 237 02:51.805480
?,PXYJ*pse [54, 950, 86, 678, 417, 404, 237] 02:51.805480 # here we can see that on 02:51.805480 when I selected the array from the database, it included 237
t"qu5NI+bb todelete 237 02:51.903749
t"qu5NI+bb [54, 950, 86, 678, 417, 151] 237 02:51.905749 # here we can see that AFTER 02:51.805480 when I selected the array from the database, it doesn't include 237
t"qu5NI+bb ERROR OCCURED 237
</code></pre>
|
<python><sqlite><flask>
|
2024-04-15 13:23:39
| 1
| 620
|
user2401856
|
78,328,663
| 5,089,311
|
Python: swap position of vars expansion
|
<p>My function receives an object <code>myenum</code> which can be either a <code>set</code> or a <code>dict</code>.<br />
<code>{'a', 'b', 'c'}</code><br />
or<br />
<code>{'a': 5, 'b' : 2, 'c' : 8}</code></p>
<p>It represents enum like in C/C++.</p>
<p>In the case of a <code>dict</code> I need to iterate over <code>.items()</code>; in the case of a <code>set</code> I need to use <code>enumerate(myenum)</code>, like this:</p>
<pre><code>for k, i in myenum.items():
for i, k in enumerate(myenum):
</code></pre>
<p>I could use a ternary <code>itr = myenum.items() if hasattr(myenum, 'keys') else enumerate(myenum)</code> and then simply have a single <code>for</code>.</p>
<p>Unfortunately <code>.items()</code> expands into <code>key, value</code> and <code>enumerate()</code> expands into <code>index, key</code>. <code>index</code> is actually the desired value for the enum.</p>
<p>Is there a way to swap places in the expansion, so that <code>enumerate()</code> also expands into <code>key, index</code>?</p>
<p>Right now I have a naive solution with two <code>for</code> loops that are identical except for the order of <code>k, i</code>, which is ugly.</p>
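<p>A single generator can normalize both cases so one loop suffices; a minimal sketch (the helper name <code>enum_items</code> is my own):</p>

```python
def enum_items(myenum):
    """Yield (key, value) pairs for a dict, or (key, index) pairs
    for any other iterable (e.g. a set), so one loop handles both."""
    if hasattr(myenum, "items"):
        yield from myenum.items()
    else:
        # swap the order enumerate() produces: (index, key) -> (key, index)
        for index, key in enumerate(myenum):
            yield key, index

for k, i in enum_items({'a': 5, 'b': 2, 'c': 8}):
    print(k, i)
```

<p>The swap itself is just <code>yield key, index</code>; the <code>hasattr</code> check mirrors the ternary above.</p>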
|
<python>
|
2024-04-15 13:15:31
| 4
| 408
|
Noob
|
78,328,539
| 13,819,714
|
Huggingface pipeline available models
|
<p>I'm working with Huggingface in Python to make inference with specific LLM text generation models. So far I used pipelines like this to initialize the model, and then insert input from a user and retrieve the response:</p>
<pre><code>import torch
from transformers import pipeline
print(torch.cuda.is_available())
generator = pipeline('text-generation', model='gpt2', device="cuda")
#Inference code
</code></pre>
<p>However, when I replace <code>gpt2</code> with <code>google/gemma-2b-it</code> or some other models, it may ask for authentication, or it directly throws an error indicating the model isn't available from <code>pipeline()</code>.</p>
<p>I know some models need specific tokenizers and dependencies, but is there any way to list all models available to <code>pipeline()</code>? And is there any way I can use other models inside <code>pipeline()</code> with all their dependencies without needing to import or use them inside the script?</p>
|
<python><python-3.x><machine-learning><pytorch><large-language-model>
|
2024-04-15 12:54:43
| 2
| 5,253
|
Cardstdani
|
78,328,474
| 12,094,039
|
FastAPI Error: HTTPBasic.__call__() missing 1 required positional argument: 'request'
|
<p>I am trying to create a simple WebSocket app with authentication using FastAPI, but after adding the authentication I face issues.</p>
<p>The <code>/{id}</code> request passes, but the WebSocket connection <code>/ws/{client_id}</code> fails with the following error:</p>
<pre><code>ERROR: Exception in ASGI application
Traceback (most recent call last):
File "D:\Installations\envs\chat-app\lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 240, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "D:\Installations\envs\chat-app\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\middleware\errors.py", line 151, in __call__
await self.app(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
raise exc
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\routing.py", line 776, in app
await route.handle(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\routing.py", line 373, in handle
await self.app(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\routing.py", line 96, in app
await wrap_app_handling_exceptions(app, session)(scope, receive, send)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
raise exc
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "D:\Installations\envs\chat-app\lib\site-packages\starlette\routing.py", line 94, in app
await func(session)
File "D:\Installations\envs\chat-app\lib\site-packages\fastapi\routing.py", line 338, in app
solved_result = await solve_dependencies(
File "D:\Installations\envs\chat-app\lib\site-packages\fastapi\dependencies\utils.py", line 572, in solve_dependencies
solved_result = await solve_dependencies(
File "D:\Installations\envs\chat-app\lib\site-packages\fastapi\dependencies\utils.py", line 600, in solve_dependencies
solved = await call(**sub_values)
TypeError: HTTPBasic.__call__() missing 1 required positional argument: 'request'
INFO: connection open
INFO: connection closed
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>app = FastAPI()
security = HTTPBasic()
templates = Jinja2Templates(directory="templates")
class ConnectionManager:
def __init__(self):
self.active_connections: List[WebSocket] = []
async def connect(self, websocket: WebSocket):
await websocket.accept()
self.active_connections.append(websocket)
def disconnect(self, websocket: WebSocket):
self.active_connections.remove(websocket)
async def send_personal_message(self, message: str, websocket: WebSocket):
await websocket.send_text(message)
async def broadcast(self, message: str, websocket: WebSocket) :
for connection in self.active_connections:
if (connection == websocket):
continue
await connection.send_text(message)
connectionmanager = ConnectionManager()
def get_current_username(
credentials: Annotated[HTTPBasicCredentials, Depends(security)],
):
is_correct_username, is_correct_password= validate_user(credentials.username.encode("utf8"),credentials.password.encode("utf8"))
if not (is_correct_username and is_correct_password):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Basic"},
)
return credentials.username
@app.get("/{id}", response_class=HTMLResponse)
def read_user(id: int, username: Annotated[str, Depends(get_current_username)], request: Request):
# Render the HTML template
return templates.TemplateResponse("index.html", {"request": request})
@app.websocket("/ws/{client_id}")
async def websocket_endpoint(
websocket: WebSocket, client_id: int, username: Annotated[str, Depends(get_current_username)], request: Request):
# accept connections
await connectionmanager.connect(websocket)
try:
while True:
data = await websocket.receive_text()
await connectionmanager.send_personal_message(f"You : {data}", websocket)
await connectionmanager.broadcast(f"Client #{client_id}: {data}", websocket)
except WebSocketDisconnect:
connectionmanager.disconnect(websocket)
await connectionmanager.broadcast(f"Client #{client_id} left the chat")
if __name__ == "__main__":
uvicorn.run(app, host='localhost', port=8000)
</code></pre>
<p>Before adding the authentication mechanism it was working fine.</p>
<p>Even though I added the Request parameter, I am still getting this error.</p>
|
<python><python-3.x><fastapi><fastapi-middleware>
|
2024-04-15 12:42:43
| 1
| 411
|
Aravindan vaithialingam
|
78,328,203
| 3,603,546
|
How to get python out of the "raw" terminal mode?
|
<p>(In a Linux context.) Entering terminal raw mode (to read mouse events, function keys, etc.) in Python is easy:</p>
<pre><code>import sys, tty, termios
fd=sys.stdin.fileno()
#tty.setraw(fd)
tty.setcbreak(fd)
</code></pre>
<p>But then I want to get back and use <code>input()</code> again. Unfortunately, there is no <code>tty.setcooked()</code> function. What to do?</p>
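<p>There is no <code>tty.setcooked()</code>, but the usual pattern is to snapshot the terminal attributes with <code>termios.tcgetattr</code> before switching modes and restore them afterwards with <code>termios.tcsetattr</code>; a minimal sketch (the context-manager wrapper is my own):</p>

```python
import contextlib
import termios
import tty

@contextlib.contextmanager
def cbreak_mode(fd):
    """Enter cbreak mode, then restore the saved ("cooked") terminal
    attributes on exit. If fd is not a tty (e.g. a pipe), do nothing."""
    try:
        saved = termios.tcgetattr(fd)  # snapshot of the cooked settings
    except termios.error:
        yield
        return
    try:
        tty.setcbreak(fd)
        yield
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # back to cooked mode
```

<p>After the <code>with</code> block exits, <code>input()</code> behaves normally again because the original attributes are reinstated.</p>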
|
<python><terminal><stty>
|
2024-04-15 11:56:27
| 1
| 316
|
user3603546
|
78,327,891
| 4,046,235
|
Use enum fields to define higher level aggregate
|
<p>In the project I am working on, it's a common pattern to have a function like <code>MachineType.get_external</code> to group the fields:</p>
<pre><code>from enum import IntEnum
class MachineType(IntEnum):
Cloud = 1
OnPrem = 2
Saas = 3
@classmethod
def get_external(cls):
return [
MachineType.Cloud,
MachineType.Saas
]
</code></pre>
<p>I don't like it and would rather prefer having something more efficient, like:</p>
<pre><code>class MachineAggregateType:
EXTERNAL = [MachineType.Cloud, MachineType.Saas]
</code></pre>
<p>i.e. fewer function calls and fewer list constructions. (It is used in an event-processing service handling 1M events/min, and there are many classes like this which are used for every event.)</p>
<p>Is there a way to have such grouping in the original class as a class field? In my opinion it's better for maintainability and structure than another class.</p>
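<p>One low-overhead option is to build the grouping once, at import time, as a module-level frozenset right after the enum definition - no function call and no list construction per event (the constant name is my own):</p>

```python
from enum import IntEnum

class MachineType(IntEnum):
    Cloud = 1
    OnPrem = 2
    Saas = 3

# built once at import time; frozenset also makes membership tests O(1)
EXTERNAL_MACHINE_TYPES = frozenset({MachineType.Cloud, MachineType.Saas})
```

<p>Putting the assignment inside the class body doesn't work directly, because <code>Enum</code> treats plain assignments as new members; Python 3.11+ offers <code>enum.nonmember</code> for that case.</p>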
|
<python><python-3.x><enums>
|
2024-04-15 10:59:31
| 1
| 3,201
|
Yuriy Vasylenko
|
78,327,887
| 17,530,552
|
How can I integrate values of a color channel with the image dimensions using cv2?
|
<p>Using <code>cv2</code>, one can load an image into Python and directly transfer the image into a grayscale version, as shown below.</p>
<pre><code>import cv2
image = cv2.imread("/path-to-image", cv2.IMREAD_GRAYSCALE)
</code></pre>
<p>Then, the image is a <em>two dimensional</em> numpy array. Hence, the intensity of the gray colors for each pixel are integrated into the height and width dimensions of the image.</p>
<pre><code>image.shape
(720, 1024)
</code></pre>
<p>As we can see above, the image or numpy array has only two dimensions, height (<code>720</code>) and width (<code>1024</code>). Each pixel in this matrix holds a single value corresponding to the brightness, i.e. the gray level, of that pixel.</p>
<p><strong>Selecting single color channels:</strong>
Now, I am interested in the three color channels of the image, that is, in the red, green, and blue channels. It is possible to pick the three color channels as follows when we don't load the image in a grayscaled format.</p>
<pre><code>import cv2

image = cv2.imread("/path-to-image", 1)  # the value 1 means to load the original color image

# one copy per channel, then zero out the other two channels
# (note: cv2 loads images in BGR order, so index 2 is red and index 0 is blue)
red = image.copy()
green = image.copy()
blue = image.copy()

red[:, :, 0] = 0   # drop blue
red[:, :, 1] = 0   # drop green

green[:, :, 0] = 0  # drop blue
green[:, :, 2] = 0  # drop red

blue[:, :, 1] = 0   # drop green
blue[:, :, 2] = 0   # drop red
</code></pre>
<p>Paradigmatically, plotting the red image then works as follows:</p>
<pre><code>cv2.imshow("red image plot", red)
cv2.waitKey(0)
</code></pre>
<p>So far so good.</p>
<p><strong>My problem and question:</strong>
I would like to integrate the color values for each pixel into a <em>two dimensional</em> numpy array, exactly as it is automatically done when I directly load the image as grayscale format via <code>image = cv2.imread("/path-to-image", cv2.IMREAD_GRAYSCALE)</code>.</p>
<p>When I individually select the color channels I get a <em>three dimensional</em> numpy array, respectively for each color channel. Here is an example for the red color channel:</p>
<pre><code>red.shape
(720, 1024, 3)
</code></pre>
<p>How can I now "integrate" the color channel values, such as in the case of the red image, from the three dimensional into a two dimensional numpy array, as automatically performed when I load the image as grayscale?
Therefore, my aim is to get a two dimensional numpy array or matrix, where each pixel in that matrix is the color value for the red color.</p>
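<p>For reference, basic NumPy slicing already gives exactly that: indexing one channel of the (H, W, 3) array yields a (H, W) array. A minimal sketch with a synthetic array standing in for a real image (cv2 loads in BGR order, so index 2 is the red channel):</p>

```python
import numpy as np

# stand-in for cv2.imread(path, 1): a (720, 1024, 3) BGR image
image = np.zeros((720, 1024, 3), dtype=np.uint8)
image[:, :, 2] = 200  # fill the red channel

red_2d = image[:, :, 2]   # two-dimensional: just the red intensities
assert red_2d.shape == (720, 1024)
```

<p>The slice is a view into the same data, so no copy is made unless you call <code>.copy()</code>.</p>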
|
<python><arrays><numpy><opencv>
|
2024-04-15 10:58:24
| 0
| 415
|
Philipp
|
78,327,822
| 7,132,482
|
Python API JAMA: Put attachment to item
|
<p>I'm facing issues with the Jama API when attaching a file to an item.</p>
<p>I need to automate my jobs and use Python to create items and attach zip files to them.
From the console I can create my attachment from my zip file and attach it to my item.</p>
<p>But with the Jama API I don't know how to create the attachment.</p>
<p>In the documentation it seems that I must use this function:</p>
<pre><code>def put_attachments_file(self, attachment_id, file_path):
"""
Upload a file to a jama attachment
:param attachment_id: the integer ID of the attachment item to which we are uploading the file
:param file_path: the file path of the file to be uploaded
:return: returns the status code of the call
"""
resource_path = 'attachments/' + str(attachment_id) + '/file'
with open(file_path, 'rb') as f:
files = {'file': f}
try:
response = self.__core.put(resource_path, files=files)
except CoreException as err:
py_jama_rest_client_logger.error(err)
raise APIException(str(err))
self.__handle_response_status(response)
return response.status_code
</code></pre>
<p>But I don't know how to create the attachment_id and then associate my zip file with it.</p>
<p>Thanks for your help !</p>
|
<python><rest><jama>
|
2024-04-15 10:46:33
| 1
| 335
|
Laurent Cesaro
|
78,327,635
| 1,791,983
|
In Python, want to find all the placemarks in a kml file, but the list is returning empty
|
<p>Trying to find all the placemarks in a kml file, so I can alter them. But findall doesn't find any.</p>
<pre><code>def scan (filename):
from lxml import etree
tree = etree.parse(open(filename,encoding='utf-8'))
root = tree.getroot()
namespaces = {'kml': 'http://www.opengis.net/kml/2.2'}
placemarks = root.findall(".//{kml}Placemark", namespaces)
for placemark in placemarks:
##LIST IS EMPTY!!!
name_text = placemark.findtext('.//name')
return
#file structure is <kml><Document><Placemark><...></Placemark></Document></kml>
</code></pre>
<p>I seem unable to post the actual kml file; the editor keeps complaining it's not proper code.</p>
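<p>For context, <code>findall</code> expects either the prefix form <code>kml:Placemark</code> together with the namespaces mapping, or the fully expanded <code>{uri}</code> form - <code>{kml}</code> is treated as a literal URI, which would explain an empty list. A minimal sketch with an inline document (the stdlib <code>ElementTree</code> API matches lxml's here):</p>

```python
import xml.etree.ElementTree as ET

kml_doc = """<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark><name>Spot A</name></Placemark>
    <Placemark><name>Spot B</name></Placemark>
  </Document>
</kml>"""

root = ET.fromstring(kml_doc)
namespaces = {'kml': 'http://www.opengis.net/kml/2.2'}

# prefix:tag syntax, resolved through the namespaces mapping
placemarks = root.findall('.//kml:Placemark', namespaces)
for placemark in placemarks:
    name_text = placemark.findtext('kml:name', namespaces=namespaces)
    print(name_text)
```

<p>The equivalent expanded form is <code>.//{http://www.opengis.net/kml/2.2}Placemark</code>, with no mapping argument needed.</p>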
|
<python><kml>
|
2024-04-15 10:13:04
| 1
| 381
|
user1791983
|
78,327,547
| 6,197,439
|
pyqt5 change QMessageBox when using multiprocessing (Cannot set parent, new parent is in a different thread)?
|
<p>Here is an example, based on the example here <a href="https://stackoverflow.com/questions/48262966/terminate-a-long-running-python-command-within-a-pyqt-application/78325636#78325636">Terminate a long-running Python command within a PyQt application</a> - but with QMessageBox:</p>
<pre class="lang-python prettyprint-override"><code>import sys
import time
from PyQt5.QtWidgets import (QApplication, QMainWindow, QLabel, QDialog,
QVBoxLayout, QPushButton, QDialogButtonBox, QMessageBox)
from multiprocessing import Pool
def mp_long_task():
for x in range(2):
print('long task:', x)
time.sleep(1)
return 'finished'
class MainWindow(QMainWindow):
def __init__(self, pool):
super().__init__()
button = QPushButton("Start", self)
button.clicked.connect(self.long_task)
self.setGeometry(300, 300, 300, 200)
self.show()
self.pool = pool
def long_task(self):
msgBox = QMessageBox( QMessageBox.Information, "Long task", "long task...", QMessageBox.Cancel, parent=None )
orig_ipmsize = msgBox.iconPixmap().size()
orig_ipm = msgBox.iconPixmap()
msgBox.setIcon( QMessageBox.NoIcon )
def callback(msg):
#print(msg)
msgBox.setText(msg)
msgBox.setIconPixmap(orig_ipm) # QObject::setParent: Cannot set parent, new parent is in a different thread
self.pool.apply_async(mp_long_task, callback=callback)
if msgBox.exec_() == QMessageBox.Cancel:
pool.terminate()
msgBox.done(0)
print('terminated')
if __name__ == '__main__':
pool = Pool()
app = QApplication(sys.argv)
main = MainWindow(pool)
app.exec_()
</code></pre>
<p>The <code>mp_long_task</code> is started when you click the Start button; terminating the <code>mp_long_task</code> with Cancel button actually works fine - but if you wait for the <code>mp_long_task</code> to complete on its own, then <code>callback</code> runs, and in it:</p>
<ul>
<li>The <code>msgBox.setText(msg)</code>, if it's on its own (that is, the <code>msgBox.setIconPixmap...</code> line is commented), applies the text and the QMessageBox stays on screen - which is the behavior I expect</li>
<li>If the <code>msgBox.setIconPixmap(orig_ipm)</code> is uncommented and it runs, then a sort of a segfault occurs, and the whole app crashes with:</li>
</ul>
<pre class="lang-none prettyprint-override"><code>QObject::setParent: Cannot set parent, new parent is in a different thread
QCoreApplication::postEvent: Unexpected null receiver
</code></pre>
<p>How should I set up this code so it works: that is, when the multiprocessing <code>.apply_async</code> called task completes on its own, how can I both change the text and change the icon pixmap, keeping the QMessageBox, without the application crashing?</p>
|
<python><pyqt5><python-multiprocessing>
|
2024-04-15 09:56:38
| 1
| 5,938
|
sdbbs
|
78,327,535
| 23,461,455
|
scikit-learn 1.1.3. import cannot import name 'METRIC_MAPPING64' in python
|
<p>I am trying to <code>import linear_model</code> from scikit-learn into my Python code in VS Code and get an unexpected error message.</p>
<pre><code>import sklearn
from sklearn import linear_model
</code></pre>
<p>the error:</p>
<pre><code>cannot import name 'METRIC_MAPPING64' from 'sklearn.metrics._dist_metrics'
</code></pre>
<p>I am not trying to import these metrics; how do I solve this?</p>
<p>The scikit-learn version used is 1.1.3.</p>
|
<python><machine-learning><scikit-learn><linear-regression><scikits>
|
2024-04-15 09:54:33
| 1
| 1,284
|
Bending Rodriguez
|
78,327,520
| 188,331
|
Parameter 'function' of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead
|
<p>I have a function to pre-process the dataset before tokenization. Here is the code:</p>
<pre><code>source_lang = "en"
target_lang = "fr"
def preprocess_dataset(examples):
inputs = [example[source_lang] for example in examples["translation"]]
targets = [example[target_lang] for example in examples["translation"]]
model_inputs = tokenizer(inputs, return_tensors="pt", text_target=targets, padding="max_length", max_length=200, truncation=True)
return model_inputs
</code></pre>
<p>and to call the function using <code>.map()</code>:</p>
<pre><code>tokenized_dataset = dataset.map(preprocess_dataset, batched=True)
</code></pre>
<p>The warning message is shown:</p>
<pre><code>Parameter 'function'=<function preprocess_dataset at 0x7f968c4d4790> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
</code></pre>
<p>What have I done wrong? How can I avoid this warning?</p>
<p>P.S. the subsequent training process completes even if the warning appears.</p>
<p>P.S. the dataset is in JSON Lines format (one JSON object per line); example data is as follows:</p>
<pre><code>{"translation": {"en": "Hello", "fr": "Bonjour"}}
{"translation": {"en": "Enjoy your meal", "fr": "bon appétit"}}
{"translation": {"en": "Please", "fr": "s'il te plaît"}}
</code></pre>
|
<python><huggingface-tokenizers>
|
2024-04-15 09:51:30
| 0
| 54,395
|
Raptor
|
78,327,471
| 6,775,670
|
How to convert pydantic model to python dataclass?
|
<p>The opposite direction is a very popular question. Why not convert a ready Pydantic model into a standard-library Python dataclass?</p>
|
<python><pydantic><python-dataclasses>
|
2024-04-15 09:43:59
| 1
| 1,312
|
Nikolay Prokopyev
|
78,327,423
| 10,357,604
|
Naming of file while saving with extension
|
<p>How can I save a file with the name as: oldname+"_new"+extension in an elegant way?</p>
<p>I currently do:</p>
<pre><code>ext = os.path.splitext(file)[1]
output_file = (root+'/'+ os.path.splitext(file)[0]+"_new"+ext)
#or output_file = (os.path.join(root, os.path.splitext(file)[0])+'_new'+ext)
with open(output_file, 'w', encoding='utf-8') as file:
file.write(text)
</code></pre>
<p>(with the help of <a href="https://stackoverflow.com/questions/38596511/python-how-to-retain-the-file-extension-when-renaming-files-with-os">Python: how to retain the file extension when renaming files with os?</a> , but not duplicate)</p>
<p>--
I use Python 3.12.2</p>
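<p>On Python 3.12, pathlib can do this in one expression via <code>Path.with_stem</code> (available since 3.9); a minimal sketch (the helper name is my own):</p>

```python
from pathlib import Path

def with_new_marker(path_str, marker="_new"):
    """Return the path with `marker` appended to the stem,
    keeping the directory and the extension intact."""
    p = Path(path_str)
    return p.with_stem(p.stem + marker)
```

<p>For example, <code>with_new_marker("/data/report.txt")</code> gives <code>/data/report_new.txt</code>, with no manual <code>splitext</code> bookkeeping.</p>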
|
<python><file><os.path><code-readability>
|
2024-04-15 09:35:21
| 1
| 1,355
|
thestruggleisreal
|
78,327,204
| 18,380,493
|
Changing the input layout of an ONNX model from NCHW to NHWC
|
<p>I have an ONNX model with an input shape of <code>(1, 1, 28, 28)</code>, which is in an <code>NCHW</code> layout.
Is there any way that I can convert the model to take input data in the <code>NHWC</code> layout, i.e., as <code>(1, 28, 28, 1)</code>?</p>
<p>The model is loaded from a file using the following code: <code>model_obj = onnx.load(model_path)</code></p>
<p>A method to modify and convert the ONNX file can also be helpful.</p>
|
<python><deep-learning><onnx><onnxruntime>
|
2024-04-15 08:56:06
| 1
| 326
|
moriaz
|
78,327,110
| 13,942,929
|
Install GMP library on Mac OS and PyCharm
|
<p>I'm trying to run my Cython project, and one of the headers is <code>gmpxx.h</code>.</p>
<p>Even though I already installed the GMP library using <code>brew install gmp</code>, I could not build my Cython file with <code>python3 setup.py build_ext --inplace</code>.</p>
<pre><code>fatal error: 'gmpxx.h' file not found
#include <gmpxx.h>
^~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
</code></pre>
<p>So I used <code>brew list gmp</code> to check the location of the <code>gmpxx.h</code> header; it is actually inside the <code>/opt/homebrew/Cellar/gmp/6.3.0/include/</code> folder.</p>
<p>With <code>Xcode</code>, I can just add the location to the <code>header search paths</code>.
But I'm trying to do the same thing with <code>PyCharm</code>.
How do I add the location of my gmpxx.h header in PyCharm?</p>
<p>I need a little help. Please kindly give me your take. Thank you.</p>
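<p>Worth noting: PyCharm itself doesn't drive the Cython compile - <code>setup.py</code> does - so the header search path belongs in the <code>Extension</code> definition. A hedged sketch (the module and source names are placeholders; the paths mirror Homebrew's layout on Apple Silicon):</p>

```python
from setuptools import Extension

# hypothetical extension; adjust the name and sources to your project
ext = Extension(
    "mymodule",                              # placeholder module name
    sources=["mymodule.pyx"],                # placeholder Cython source
    language="c++",
    include_dirs=["/opt/homebrew/include"],  # where Homebrew links gmpxx.h
    library_dirs=["/opt/homebrew/lib"],
    libraries=["gmp", "gmpxx"],
)
```

<p>This extension would then be passed through <code>cythonize</code> in <code>setup()</code> as usual, and the build works the same whether launched from PyCharm or a terminal.</p>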
|
<python><c++><pycharm><cython><gmp>
|
2024-04-15 08:39:57
| 2
| 3,779
|
Punreach Rany
|
78,327,057
| 9,381,746
|
Euler's identity in python
|
<p>I am trying to calculate Euler's identity in Python with the following code:</p>
<pre><code>import numpy as np
import cmath
x = 0
y = 1
z=complex(x,y)
out=complex(np.e**(np.pi*z.imag))
print(out)
</code></pre>
<p>But what I am getting is the following (and I should get -1 or something close to it with floating point errors)</p>
<pre><code>$ python code.py
(23.140692632779263+0j)
</code></pre>
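<p>The expression above computes e^(π·1) as a real power - <code>z.imag</code> is the plain float <code>1.0</code>, so the imaginary unit never enters the exponent. Keeping the exponent complex gives the expected result; a minimal sketch:</p>

```python
import cmath

z = complex(0, 1)              # z = i
out = cmath.exp(cmath.pi * z)  # e^{i*pi}, exponent stays complex
print(out)
```

<p>The result is -1 plus a tiny imaginary part on the order of 1e-16, the expected floating-point error.</p>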
|
<python><math><complex-numbers>
|
2024-04-15 08:31:11
| 1
| 5,557
|
ecjb
|
78,327,045
| 3,042,018
|
Appending to Path to Access Parent Directory in PyCharm vs Terminal
|
<p>Why is it that</p>
<pre><code>import sys
sys.path.append("..")
from file_in_parent_dir import a_function
</code></pre>
<p>works in PyCharm, but not in when I run the file from CMD or PowerShell?</p>
<p>(Windows 10, Python 11.3.3)</p>
<p><code>ModuleNotFoundError: No module named 'file_in_parent_dir'</code></p>
<p>Instead I have to use</p>
<pre><code>import sys
sys.path.insert(0, '..')
from trace_recursion import trace
</code></pre>
<p>(which also works on Linux, with <code>"./"</code>)</p>
<p>Could someone please explain the difference between the two approaches, and why PyCharm allows the first?</p>
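<p>The difference comes down to how <code>".."</code> is resolved: a relative entry in <code>sys.path</code> is interpreted against the current working directory, which PyCharm sets to the script's folder by default, while CMD or PowerShell may have any cwd. Anchoring on the script's own location removes the dependency; a minimal sketch (the helper name is my own):</p>

```python
import os
import sys

def add_parent_to_path(script_path):
    """Prepend the absolute path of the script's parent directory to
    sys.path, so imports work no matter where the script is run from."""
    parent = os.path.abspath(os.path.join(os.path.dirname(script_path), os.pardir))
    if parent not in sys.path:
        sys.path.insert(0, parent)
    return parent

# typically called as: add_parent_to_path(__file__)
```

<p>This behaves the same on Windows and Linux because the relative part is resolved before it ever reaches <code>sys.path</code>.</p>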
|
<python><path><pycharm>
|
2024-04-15 08:29:01
| 2
| 3,842
|
Robin Andrews
|
78,326,981
| 447,738
|
Pandas to find a streak of sales but with a tolerance for streak interruption
|
<p>I am trying to use pandas to filter a DataFrame to find a sales streak of variable length. I am looking to identify periods of consecutive sales, but also allow for some days with no sale in between. In the df below, I am looking to select rows 12 to 19.</p>
<pre><code>data = [['2023-11-16', 1], ['2023-11-17', 0], ['2023-11-20', 1],
['2023-11-21', 0], ['2023-11-22', 1], ['2023-11-24', 0],
['2023-11-27', 0], ['2023-11-28', 0], ['2023-11-29', 0],
['2023-11-30', 0], ['2023-12-01', 0], ['2023-12-04', 0],
['2023-12-05', 1], ['2023-12-06', 1] , ['2023-12-07', 1],
['2023-12-08', 1], ['2023-12-11', 0], ['2023-12-12', 0],
['2023-12-13', 1], ['2023-12-14', 1], ['2023-12-15', 0],
['2023-12-18', 0], ['2023-12-19', 0], ['2023-12-20', 0]]
df = pd.DataFrame(data, columns=['date', 'sold'])
</code></pre>
<p><a href="https://i.sstatic.net/rA7Aw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rA7Aw.png" alt="enter image description here" /></a></p>
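<p>One way to express this is to group consecutive sale rows whose row-index gap stays within the allowed number of misses, then keep the largest group; a sketch assuming a tolerance of two no-sale rows (the function name and the tolerance are my own choices, and it counts rows rather than calendar days, since weekend dates are absent from the frame):</p>

```python
import pandas as pd

def longest_streak(df, tol=2):
    """Rows spanning the largest run of sales, allowing up to `tol`
    consecutive no-sale rows inside the run."""
    sale_idx = df.index[df["sold"].eq(1)].to_series()
    if sale_idx.empty:
        return df.iloc[0:0]
    # a new streak starts when the gap to the previous sale exceeds tol + 1 rows
    group = sale_idx.diff().gt(tol + 1).cumsum()
    best = group.value_counts().idxmax()
    members = sale_idx[group == best]
    return df.loc[members.min():members.max()]
```

<p>On the sample frame above, <code>longest_streak(df)</code> selects rows 12 through 19 (2023-12-05 to 2023-12-14): the two-row gap at rows 16-17 stays inside the tolerance, while the long run of zeros before row 12 breaks the earlier streak.</p>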
|
<python><pandas>
|
2024-04-15 08:16:51
| 1
| 2,357
|
cksrc
|
78,326,962
| 5,043,301
|
Using UserCreationForm
|
<p>I am learning Django. I would like to understand the code below.</p>
<pre><code>class RegisterPage(FormView):
template_name = 'base/register.html'
form_class = UserCreationForm
redirect_authenticated_user = True
success_url = reverse_lazy('tasks')
def form_valid( self, form ):
user = form.save()
if user is not None:
login( self.request, user )
return super( RegisterPage, self ).form_valid( form )
def get( self, *args, **kwargs ):
if self.request.user.is_authenticated:
return redirect('tasks')
return super(RegisterPage, self).get(*args, **kwargs)
</code></pre>
<p>Can I use <code> def register( self, form ):</code> instead of <code> def form_valid( self, form ):</code>?</p>
<p>How <code>self</code> and <code>form</code> are passing here?</p>
<p>Can I use <code> def save( self, *args, **kwargs ):</code> instead of <code> def get( self, *args, **kwargs ):</code>?</p>
|
<python><django>
|
2024-04-15 08:12:54
| 1
| 7,102
|
abu abu
|
78,326,924
| 4,429,265
|
Qdrant takes 25 seconds to retrieve 1000 single products
|
<p>We have a single-product API that is causing us problems when called in large numbers (not even very large; 1000 requests take 25 seconds to finish).</p>
<h2>What Are We Using</h2>
<p>So, we are using Qdrant for our product database because of its semantic vector search capabilities. But we also want to use it for the single-product API, which does nothing other than take an ID and return the payload. When called with a single ID, it takes 70-90 ms to finish. But sending 1000 concurrent requests slows it down, and it returns responses to all of them in 25-30 seconds. While those 1000 requests are being processed, a call with a single ID takes 2500 ms (up from 70-90 ms).</p>
<h2>The API</h2>
<p>This is the single product api:</p>
<pre><code>def retrieve(self, request, pk):
search_lang = request.query_params.get("language", "tr")
country = request.query_params.get("country", "tr")
payload = search.get_single_product(search_lang=search_lang, point_id=int(pk))
payload["extra"] = functions.generate_extra_fields(
product_id=payload["id"],
total_rating_count=payload["stats"]["total_rating_count"],
lang=search_lang,
country=country,
)
return Response(
{"data": payload},
status=status.HTTP_200_OK,
)
</code></pre>
<h2>The search function</h2>
<p>And this is the <code>get_single_product</code> function:</p>
<pre><code>
def get_single_product(search_lang: str, point_id: int):
"""
Retrieves a single product based on the provided language and product ID.
Parameters:
search_lang (str): The language code for the product.
point_id (int): The ID of the product to retrieve.
Returns:
dict: A dictionary containing the details of the retrieved product.
Note:
- This function retrieves the language code from the environment variable LANGUAGE_KEY_MAP.
- It uses the Qdrant client to retrieve the product from the appropriate language collection.
- If the provided search language is not supported, it returns an empty dictionary.
"""
lang = int(LANGUAGE_KEY_MAP.get(search_lang))
qd_client = qdrant_client()
if not lang:
return 0
search_results = qd_client.retrieve(
collection_name=f"lang{lang}_products",
ids=[point_id],
)
qd_client.close()
payloads = []
for search_result in search_results:
payloads.append(search_result.payload)
return payloads[0] if len(payloads) > 0 else []
</code></pre>
<h2>How I tested with 1000 requests:</h2>
<p>For testing, I first need to have the ids of 1000 products. Our main relational DB is pg, so I first retrieve the ids, then create async requests; as below:</p>
<p><code>test.py</code>:</p>
<pre><code>import aiohttp
import asyncio
from utils.functions import pg_connect
import time
pg_start = time.time()
pg_connection, pg_cursor = pg_connect()
query = "SELECT id from public.products limit 1000;"
# Execute the query
pg_cursor.execute(query)
# Fetch IDs
ids = [row[0] for row in pg_cursor.fetchall()]
pg_connection.close()
pg_cursor.close()
print(f"it took {time.time() - pg_start} seconds to get ids from pg.\n")
async def fetch(session, url):
async with session.get(url) as response:
return await response.text()
async def fetch_all(urls):
async with aiohttp.ClientSession() as session:
task_start = time.time()
tasks = [fetch(session, url) for url in urls]
print(f"tasks created at {time.time() -task_start} seconds. \n")
return await asyncio.gather(*tasks)
async def main(ids):
base_url = "https://api2.markabu.com/v1/product/"
url_start = time.time()
urls = [f"{base_url}{id}/" for id in ids]
print(f"urls created in {time.time() - url_start} seconds.\n")
request_start = time.time()
responses = await fetch_all(urls)
print(f"responses came in {time.time() - request_start} seconds.\n")
if __name__ == "__main__":
asyncio.run(main(ids))
</code></pre>
<h2>The results</h2>
<p>I ran the code a few times, the results are within 5% of each other, one example:</p>
<pre><code>it took 0.00798797607421875 seconds to get ids from pg.
urls created in 0.00010538101196289062 seconds.
tasks created at 0.00011730194091796875 seconds.
responses came in 26.443119525909424 seconds.
</code></pre>
<h2>What is the problem?</h2>
<p>So, there are different areas which may cause the problem. Before blaming Qdrant or my own data structure, I thought about web server issues which may have problems distributing a large (fairly large) number of requests. That is why I created a simple API that just returns a simple message. Then called it even more than 100,000 times, and for almost 130,000 calls, it took only 8-9 seconds.</p>
<p>So, the problem is not with the web server, nor the Django backend.</p>
<h2>What about data structure?</h2>
<p>One other thing to consider is that our products have 30 columns (sorry for using 'column' to refer to JSON structured data in Qdrant, but you know what I mean), some of which are fairly large JSON fields (such as <code>variants</code>, which for some products may be a list of 10-20 variants, each containing data about different variants of the product).</p>
<h2>Can Qdrant actually handle large amounts of concurrent query requests?</h2>
<p>Another possibility to consider is Qdrant's limitations. I could not find any note about concurrent requests in Qdrant. Or, perhaps the problem lies in how I handle the <code>qdrant_client</code>. I open the client using a simple function called in the <code>get_single_product</code> function above:</p>
<pre><code>qdrant_client = QdrantClient(
host=kwargs.get("host"),
port=kwargs.get("port"),
timeout=kwargs.get("timeout"),
)
return qdrant_client
</code></pre>
<p>And after using it to get the product, I close the connection (refer to <code>get_single_product</code> function above).</p>
<h1>UPDATE</h1>
<h2>Details about Qdrant setup</h2>
<p>I am hosting Qdrant on a docker container on my server, with 30 cores of CPU and 50 GBs of RAM. This is the docker-compose for the container:</p>
<pre><code> qdrant:
image: qdrant/qdrant:v1.8.2
restart: always
ports:
- "6333:6333"
- "6334:6334"
volumes:
- qdrant_data:/qdrant/storage
</code></pre>
<p>Also my qdrant collections are created in this manner:</p>
<pre><code>client.recreate_collection(
collection_name=collection_name,
vectors_config=models.VectorParams(
size=encoder.get_sentence_embedding_dimension(),
distance=models.Distance.DOT,
),
)
</code></pre>
<p>Which I do not think is relevant to the problem, but included here if needed. If any more details on the structure of the project is needed, please ask.</p>
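<p>One detail worth checking before blaming Qdrant itself: <code>get_single_product</code> opens and closes a fresh client on every request, so 1000 concurrent requests mean 1000 connection setups. Reusing one long-lived client is the usual fix; a sketch of the caching pattern with a stand-in class (the real code would return <code>QdrantClient(...)</code> instead):</p>

```python
from functools import lru_cache

class StubClient:
    """Stand-in for QdrantClient; constructing one models opening a connection."""
    created = 0

    def __init__(self):
        StubClient.created += 1

@lru_cache(maxsize=None)
def qdrant_client():
    # the first call constructs the client; every later call reuses the same one
    return StubClient()

a = qdrant_client()
b = qdrant_client()
```

<p>With this pattern the <code>qd_client.close()</code> call in <code>get_single_product</code> would also have to go, since the client is shared across requests.</p>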
|
<python><database><qdrant>
|
2024-04-15 08:07:03
| 1
| 417
|
Vahid
|
78,326,802
| 18,730,707
|
"DevTools listening on ws://127.0.0.1" message does not go away in python selenium
|
<p>I know this question looks like a duplicate. However, no matter what I try, the following message will not disappear.</p>
<blockquote>
<p>DevTools listening on ws://127.0.0.1:62784/devtools/browser/9f06f86e-f98b-4896-9f35-8c5c73317c7a</p>
</blockquote>
<p>There is a reason why this message needs to be removed: I fetch the USD-to-KRW rate and the KOSPI and NASDAQ indices from external sources and print them to the console, then copy that console output and use it. The message above ends up stuck in the middle of my output, and it is inconvenient to erase it every time, so I don't want it printed at all.</p>
<p>Here is the full code I used:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import os
def get_usd_to_krw_exchange_rate():
service = Service(
executable_path = ChromeDriverManager().install(),
log_path=os.devnull
)
options = Options()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_argument('--headless')
options.add_argument('--log-level=3')
options.add_argument('--disable-logging')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(
service = service,
options = options
)
driver.get('https://www.google.com/search?q=usd+to+krw')
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
rate = soup.select_one('#knowledge-currency__updatable-data-column > div.b1hJbf > div.dDoNo.ikb4Bb.gsrt > span.DFlfde.SwHCTb').get_text()
print(rate)
get_usd_to_krw_exchange_rate()
</code></pre>
<p>Please let me know what more I should do or what I'm doing wrong.</p>
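<p>For what it's worth, the "DevTools listening" line is written by the browser/driver process to its own stderr, not by Python, so it can't be silenced from the Python side with print tricks; the child process's stderr stream has to be redirected (recent Selenium versions expose this on <code>Service</code>, e.g. a log output option pointed at <code>os.devnull</code>, though the exact parameter name varies by version). A stdlib-only sketch of the mechanism:</p>

```python
import subprocess
import sys

# a child process that writes noise to stderr and real data to stdout
child_code = ("import sys; "
              "sys.stderr.write('DevTools listening...\\n'); "
              "print('1350.20')")

proc = subprocess.run(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,      # keep the data we want
    stderr=subprocess.DEVNULL,   # discard the child's stderr entirely
    text=True,
)
rate = proc.stdout.strip()
```

<p>The same redirection, applied to the driver's service process, is what makes the banner disappear while your scraped value still prints.</p>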
|
<python><google-chrome><selenium-webdriver>
|
2024-04-15 07:41:25
| 0
| 878
|
na_sacc
|
78,326,717
| 2,695,082
|
ImportError: tools/python/lib/python3.9/lib-dynload/math.so: undefined symbol: PyFloat_Type
|
<p>I am using Python 3.9 on Linux. The python installation structure looks like:</p>
<pre><code>/tools/python/bin - containing python executable
/tools/python/lib/libpython3.9.so
/tools/python/lib/python3.9/lib-dynload/*
</code></pre>
<p>I am linking an x.so with libpython.so and trying to run it. When I run a simple Python script test.py (present in /share/python) that looks like:</p>
<pre><code>import unittest
import math
class Test_2(unittest.TestCase):
def test_distance(self):
self.assertAlmostEqual(box1.distance(box2), math.sqrt(2))
def suite() :
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(Test_2))
return suite
</code></pre>
<p>It simply gives the following error in the log:</p>
<pre><code>ImportError: Failed to import test module: test
Traceback (most recent call last):
File "/share/python/test.py", line 3, in <module>
import math
ImportError: /tools/python/lib/python3.9/lib-dynload/math.so: undefined symbol: PyFloat_Type
</code></pre>
<p>I have set PYTHONPATH to /share/python and PYTHONHOME to /tools/python,
and appended LD_LIBRARY_PATH with /tools/python/lib so that it finds libpython3.9.so.
But I still get the import error: "math.so: undefined symbol: PyFloat_Type".
Can someone please help me with this issue?</p>
<p>Following is the output:</p>
<pre><code>[abc@xyz01 ~]$ ldd /tools/python/bin/python
linux-vdso.so.1 (0x00007ffc281ce000)
libpython3.9.so.1.0 => /tools/python/lib/libpython3.9.so.1.0 (0x00007f28887d8000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f28885af000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f288838f000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f288818b000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f2887f87000)
libm.so.6 => /lib64/libm.so.6 (0x00007f2887c05000)
libc.so.6 => /lib64/libc.so.6 (0x00007f2887840000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2888b6a000)
[abc@xyz01 ~]$ echo $PYTHONPATH
/share/python
[abc@xyz01 ~]$ echo $PYTHONHOME
/tools/python
[abc@xyz01 bin]$ echo $LD_LIBRARY_PATH
tools/python/lib:/build/tools/libstdc+/lib64:/build/tools/lib/64bit:/build/tools/lib:/build81/tools/Qt/v5/64bit/lib:/build/tools/boost/lib/64bit/:/usr/lib64:/build/tools/lib/64bit/RHEL/RHEL8:/build/tools/lib/64bit/RHEL/RHEL7:/build81/tools/libstdc++/lib64:/build81/tools/lib/64bit:/usr/X11R6/lib:/lib:/usr/lib
</code></pre>
|
<python><linux><python-3.9>
|
2024-04-15 07:26:37
| 0
| 329
|
user2695082
|
78,326,712
| 3,405,291
|
No module named pip
|
<h1>Conda environment</h1>
<p>I'm creating the following conda environment by <code>conda env create -f environment.yml</code>.</p>
<p>The <code>environment.yml</code> file content is:</p>
<pre class="lang-yaml prettyprint-override"><code>name: deep3d_pytorch
channels:
- pytorch
- conda-forge
- defaults
dependencies:
- python=3.6
- pytorch=1.6.0
- torchvision=0.7.0
- numpy=1.18.1
- scikit-image=0.16.2
- scipy=1.4.1
- pillow=6.2.1
- pip=20.0.2
- ipython=7.13.0
- yaml=0.1.7
- pip:
- matplotlib==2.2.5
- opencv-python==3.4.9.33
- tensorboard==1.15.0
- tensorflow==1.15.0
- kornia==0.5.5
- dominate==2.6.0
- trimesh==3.9.20
</code></pre>
<h1>Error</h1>
<p>At the end of the environment creation process, these errors are thrown:</p>
<pre><code>Installing pip dependencies: \ Ran pip subprocess with arguments:
['/usr/local/envs/deep3d_pytorch/bin/python', '-m', 'pip', 'install', '-U', '-r', '/content/Deep3DFaceRecon_pytorch/condaenv.i1gomfsb.requirements.txt', '--exists-action=b']
Pip subprocess output:
Pip subprocess error:
/usr/local/envs/deep3d_pytorch/bin/python: No module named pip
failed
CondaEnvException: Pip failed
</code></pre>
<p>In addition to <code>conda env create -f environment.yml</code>, updating the conda environment by <code>conda env update -f environment.yml</code> would throw similar errors.</p>
<h1>Note</h1>
<p>The errors occur on my local machine and also on Google Colab. I'm just following the instructions. Does anybody have any clue or hint?</p>
<p>I have looked at this, but couldn't figure it out: <a href="https://stackoverflow.com/questions/41060382/using-pip-to-install-packages-to-anaconda-environment">Using Pip to install packages to Anaconda Environment</a></p>
|
<python><pip><anaconda><conda><google-colaboratory>
|
2024-04-15 07:25:35
| 1
| 8,185
|
Megidd
|
78,326,697
| 10,722,752
|
Trend and Residue plots in seasonal decompose not spanning entire duration of data
|
<p>I am performing SARIMAX forecasting and visualizing the seasonal decompose plots. I have monthly data running from Jan 1, 2022 to March 1, 2024. When I plot my data I am getting "partial" trend and residual plots.</p>
<p>Sample Data:</p>
<pre><code>from statsmodels.tsa.seasonal import seasonal_decompose
from matplotlib import pyplot
np.random.seed(0)
df = pd.DataFrame(index=pd.date_range('2022-01-01', end='2024-03-01', freq='MS'),
data = {'rev' : np.random.randint(10, high = 1000, size = len(pd.date_range('2022-01-01', end='2024-03-01', freq='MS')))})
seasonal_decompose(df).plot();
</code></pre>
<p><a href="https://i.sstatic.net/hHpCq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hHpCq.png" alt="enter image description here" /></a></p>
<p>But when I do the same for data with longer time span, the plot seems to better span the time line.</p>
<pre><code>df1 = pd.DataFrame(index=pd.date_range('2015-01-01', end='2024-03-01', freq='MS'),
data = {'rev' : np.random.randint(10, high = 1000, size = len(pd.date_range('2015-01-01', end='2024-03-01', freq='MS')))})
seasonal_decompose(df1).plot();
</code></pre>
<p><a href="https://i.sstatic.net/HLhB5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HLhB5.png" alt="enter image description here" /></a></p>
<p>Could someone please help me understand whether my interpretation is right, and if so, how to read the trend plot when we don't have data for a sufficiently long duration.</p>
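<p>As far as I understand, <code>seasonal_decompose</code> estimates the trend with a centered moving average spanning one seasonal period (12 for monthly data), so roughly half a period at each end of the series has no trend value, and the residual inherits those gaps; with only 27 observations that trims a large share of the plot, while 9+ years of data makes the trimmed edges negligible. A small sketch of the effect:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(27, dtype=float),
              index=pd.date_range('2022-01-01', periods=27, freq='MS'))

# centered 12-point rolling mean, analogous to the trend step of seasonal_decompose
trend = s.rolling(window=12, center=True).mean()

# the edges cannot be computed, so they come out as NaN
n_missing = int(trend.isna().sum())
```

<p>Here 11 of the 27 points have no trend estimate, which matches the "partial" look of the plot.</p>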
|
<python><pandas><statsmodels><sarimax>
|
2024-04-15 07:22:39
| 1
| 11,560
|
Karthik S
|
78,326,633
| 11,503,237
|
Issue with generating poisson arrivals in a system
|
<p>I am simulating Poisson arrivals of requests in the system. I am using NumPy's poisson for this purpose. The code is given below:</p>
<pre><code>no_of_req=500
timesteps =500
rate = no_of_req / timesteps
id=1
poisson=[]
idx=[]
for time_step in range(1, timesteps + 1):
number_of_new_agents = np.random.poisson(rate)
time=update_time(start_time, time_step).time()
poisson.append(number_of_new_agents)
idx.append(time_step)
plt.bar(idx, poisson)
</code></pre>
<p>When I plot it I get the following bar</p>
<p><a href="https://i.sstatic.net/H5iRQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H5iRQ.jpg" alt="enter image description here" /></a></p>
<p>This does not resemble a Poisson distribution. Where is the problem?</p>
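<p>The bar chart shows one independent draw per time step, so it looks like noise over time; that is expected. The Poisson shape only appears when the drawn values themselves are histogrammed, e.g.:</p>

```python
import numpy as np

np.random.seed(0)
rate = 1.0  # 500 requests / 500 timesteps
draws = np.random.poisson(rate, size=500)

# distribution of the values (how often 0, 1, 2, ... arrivals occurred),
# rather than the values plotted over time
counts = np.bincount(draws, minlength=6)
```

<p>Plotting <code>counts</code> against <code>range(len(counts))</code> (e.g. with <code>plt.bar</code>) gives the familiar Poisson histogram peaked near the rate.</p>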
|
<python><numpy><poisson>
|
2024-04-15 07:11:32
| 1
| 483
|
Hamsa
|
78,326,625
| 13,687,718
|
Handle continuous parallel requests without blocking in Asyncio
|
<p>I am new to python asyncio and have a slightly convoluted requirement. I have been going through the documentation for asyncio but have not found the right solution yet as I am unable to understand some aspects.</p>
<p>I have a script <strong>KafkaScript.py</strong> that reads from a kafka topic (streaming) and calls an API endpoint. The API will call a method <strong>method_A</strong> in another script <strong>ProcessingScript.py</strong> in <em>parallel</em> for each incoming request/kafka record.</p>
<p><em>method_A</em> calls an asynchronous <em>method_B</em> which uses a globally defined ThreadPoolExecutor to call another asynchronous <em>method_C</em> using <em>run_in_executor()</em>.
<em>method_C</em> returns a string back to <em>method_B</em>, which in turn must pass it back to <em>method_A</em>.</p>
<p>Here is the script:</p>
<p>KafkaScript.py:</p>
<pre><code>import requests
if __name__ == '__main__':
for i in Kafka-Topic:
# Use a new thread to call make API request for each kafka record
response = requests.get(<url generated>) # i is passed as param
</code></pre>
<p>The API:</p>
<pre><code>import ProcessingScript
@app.route('/v1/generate', methods = ['GET'])
def generate():
data = request.args.get('data', type = str)
response = ProcessingScript.method_A(data)
return response
</code></pre>
<p>ProcessingScript.py:</p>
<pre><code>executor = ThreadPoolExecutor(max_workers=5)
async def method_C(val):
# Calls playwright to get content from a page which is a blocking call per thread
return "processed:" + val
async def method_B(val):
loop = asyncio.get_event_loop()
# For each parallel thread, call method_C, get the response without blocking other threads
response = loop.run_in_executor(self.executor, asyncio.run, method_C(val))
return response
def method_A(val):
response = asyncio.run(method_B(val))
return response
</code></pre>
<p>As I understand, <em>method_B</em> can receive the string from <em>method_C</em> only when I make a call such as this.</p>
<pre><code>response = await loop.run_in_executor(self.executor, asyncio.run, method_C(val))
</code></pre>
<p>The problem with the <em>await</em> here is, although multiple threads are calling <em>method_A</em>, the <em>await</em> makes all threads wait for the first thread to have a response.</p>
<p>How do I ensure this in a thread safe manner where none of the parallel requests to <em>method_A</em> are blocked by others threads waiting for the response?</p>
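<p>As far as I can tell, <code>await loop.run_in_executor(...)</code> only suspends the coroutine that awaits it; other requests are not blocked by that await unless they are processed sequentially or the executor's worker pool is exhausted. A sketch showing several blocking jobs overlapping when gathered concurrently (names mirror the question but are illustrative):</p>

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=5)

def method_c(val):
    # stands in for the blocking Playwright call
    time.sleep(0.2)
    return "processed:" + val

async def method_b(val):
    loop = asyncio.get_running_loop()
    # awaiting suspends only this coroutine; the worker thread runs the
    # blocking call while the event loop keeps serving other coroutines
    return await loop.run_in_executor(executor, method_c, val)

async def main():
    start = time.monotonic()
    results = await asyncio.gather(*(method_b(str(i)) for i in range(5)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
```

<p>Five 0.2-second jobs finish in roughly 0.2 seconds rather than 1 second because the awaits overlap. Note also that calling <code>asyncio.run()</code> per request (as in <em>method_A</em>) creates a fresh event loop each time, which defeats this overlap across requests.</p>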
|
<python><multithreading><python-asyncio>
|
2024-04-15 07:09:50
| 1
| 832
|
mang4521
|
78,326,591
| 11,770,390
|
pd.date_range with enddate included
|
<p>Using pandas 2.2.2 and python 3.11, why doesn't this give me the daterange containing the end date:</p>
<pre><code>import pandas as pd
start_date = pd.to_datetime('2023-04-05T04:01:40Z')
end_date = pd.to_datetime('2024-04-15T00:00:00Z')
full_date_range = pd.date_range(start=start_date, end=end_date, freq='1M', inclusive='both')
for date in full_date_range.to_list():
print(date)
</code></pre>
<p>It gives me:</p>
<pre><code>2023-04-30 04:01:40+00:00
2023-05-31 04:01:40+00:00
2023-06-30 04:01:40+00:00
2023-07-31 04:01:40+00:00
2023-08-31 04:01:40+00:00
2023-09-30 04:01:40+00:00
2023-10-31 04:01:40+00:00
2023-11-30 04:01:40+00:00
2023-12-31 04:01:40+00:00
2024-01-31 04:01:40+00:00
2024-02-29 04:01:40+00:00
2024-03-31 04:01:40+00:00
</code></pre>
<p>and it stops at the end of march. I thought inclusive would give me an added entry of</p>
<pre><code>2024-04-30 04:01:40+00:00
</code></pre>
<p>as this would include my enddate as well (at least this is how I understand <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer">the <code>inclusive</code> argument</a>)</p>
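<p>If I read the docs correctly, <code>'M'</code> is an anchored (month-end) frequency: <code>date_range</code> emits only dates sitting on the anchor that fall between <code>start</code> and <code>end</code>, and <code>inclusive</code> only decides whether an exact boundary match is kept, so 2024-04-30 is excluded because it lies after the requested end of 2024-04-15. A sketch (using the <code>MonthEnd</code> offset object to stay version-neutral, since pandas 2.2 renames the alias <code>'M'</code> to <code>'ME'</code>):</p>

```python
import pandas as pd

# month-end anchored range: only month-ends at or before `end` are emitted
r = pd.date_range(start='2023-04-05', end='2024-04-15',
                  freq=pd.offsets.MonthEnd())

# to cover the end date's month, push `end` out to that month's anchor
r2 = pd.date_range(start='2023-04-05',
                   end=pd.Timestamp('2024-04-15') + pd.offsets.MonthEnd(1),
                   freq=pd.offsets.MonthEnd())
```

<p>The first range stops at 2024-03-31; the second includes 2024-04-30 because the end was pushed out to that month's anchor.</p>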
|
<python><pandas><dataframe><timestamp>
|
2024-04-15 07:03:30
| 3
| 5,344
|
glades
|
78,326,573
| 2,508,672
|
Convert any format of date string to datetime
|
<p>I am reading a CSV file whose date column contains different formats: some rows use m/d/yyyy, some use d-m-yyyy, and others vary further.</p>
<p>I am using below code to convert</p>
<pre><code>if(datetime.strptime(row['Date'],"%m/%d/%Y").astimezone() <= datetime.strptime("2024-03-03","%Y-%m-%d").astimezone()):
## do the things
</code></pre>
<p>If the format matches it works, but it throws an error when it does not match:</p>
<blockquote>
<p>time data '2022-05-16' does not match format '%m/%d/%Y'</p>
</blockquote>
<p>How can I convert a date string in any of these formats to a valid date?</p>
<p>Thanks</p>
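<p>One stdlib-only approach is to try a list of known formats in order (if third-party packages are an option, <code>dateutil.parser.parse</code> guesses formats automatically). A sketch with an assumed format list:</p>

```python
from datetime import datetime

FORMATS = ["%m/%d/%Y", "%d-%m-%Y", "%Y-%m-%d"]  # assumed set; extend as needed

def parse_date(text):
    """Try each known format until one matches."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognised date format: {text!r}")

a = parse_date("4/15/2024")
b = parse_date("2022-05-16")
```

<p>Beware ambiguous values such as <code>01-02-2024</code>: they match the first listed format that fits, so the order of <code>FORMATS</code> matters.</p>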
|
<python>
|
2024-04-15 06:59:07
| 2
| 4,608
|
Md. Parvez Alam
|
78,326,265
| 2,739,700
|
Azure alerting Creation using Python SDK
|
<p>We are creating an Azure alert for a custom KQL query using the Python SDK, and below is the code we are trying:</p>
<pre><code>from azure.mgmt.monitor import MonitorManagementClient
from azure.identity import DefaultAzureCredential
credentials = DefaultAzureCredential()
sub_id= "xxxx"
resource_group = "rgname"
client = MonitorManagementClient(credential=credentials,subscription_id=sub_id)
# Define the KQL query
kql_query = """
ServiceMonitor
| where Computer contains "xyz-host"
| where SvcName contains "linux-service"
| where SvcState != "Running"
"""
# Create a unique name for the metric alert
metric_alert_name = "KQL_Metric_alert"
# Create the alert based on the KQL query
Alert = client.metric_alerts.create_or_update(resource_group,
metric_alert_name, # Use the unique metric alert name
{
"location": "global",
"description": "Alert triggered when the service is not running on xyz_host",
"severity": "3",
"enabled": True,
"scopes": [
f"/subscriptions/{sub_id}/resourceGroups/{resource_group}"
],
"evaluation_frequency": "PT1M",
"window_size": "PT15M",
"target_resource_type": "Microsoft.Insights/components",
"target_resource_region": "global",
"criteria": {
"odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
"all_of": [
{
"criterion_type": "StaticThresholdCriterion",
"metric_name": "KQLQueryExecutionResult",
"metric_namespace": "",
"operator": "GreaterThan",
"threshold": 0,
"aggregation": "Count",
"dimensions": [],
"metric_namespace": "microsoft.insights/components",
"additional_properties": {
"query": kql_query
}
}
]
},
"auto_mitigate": False,
"actions": [
{
"action_group_id": f"/subscriptions/{sub_id}/resourceGroups/rg-name/providers/microsoft.insights/actionGroups/AlertTrigger",
"webhook_properties": {}
}
]
}
)
print("Alert created Successfully.")
</code></pre>
<p>We get the error below:</p>
<pre><code>Traceback (most recent call last):
File "/Users/testuser/Documents/test/alerting/service_check.py", line 21, in <module>
Alert = client.metric_alerts.create_or_update(resource_group,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vcimalap/Documents/test/alerting/.venv/lib/python3.11/site-packages/azure/core/tracing/decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/vcimalap/Documents/test/alerting/.venv/lib/python3.11/site-packages/azure/mgmt/monitor/v2018_03_01/operations/_metric_alerts_operations.py", line 609, in create_or_update
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
azure.core.exceptions.HttpResponseError: (BadRequest) Required property 'name' not found in JSON. Path '', line 1, position 512. Activity ID: df779cd2-198b-4098-be4f-c68d95c27f00.
Code: BadRequest
Message: Required property 'name' not found in JSON. Path '', line 1, position 512. Activity ID: df779cd2-198b-4098-be4f-c68d95c27f00.
</code></pre>
<p>We are not sure what is wrong with the code; any help fixing the issue would be greatly appreciated.</p>
<p>Python version: 3.11</p>
<p>package versions:</p>
<pre><code>azure-common==1.1.28
azure-core==1.30.1
azure-identity==1.16.0
azure-mgmt-core==1.4.0
azure-mgmt-monitor==6.0.2
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==42.0.5
idna==3.7
isodate==0.6.1
msal==1.28.0
msal-extensions==1.1.0
packaging==24.0
portalocker==2.8.2
pycparser==2.22
PyJWT==2.8.0
requests==2.31.0
six==1.16.0
typing_extensions==4.11.0
urllib3==2.2.1
</code></pre>
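<p>The error message points at a missing <code>name</code> property: in the metric-alert criteria schema, each entry of <code>all_of</code> is a criterion that requires its own <code>name</code> field. A minimal sketch of the criteria shape (field values are illustrative assumptions; note also that the SDK model calls the aggregation field <code>time_aggregation</code>, and that KQL-based alerts are normally created as scheduled query rules rather than metric alerts, which may be worth checking separately):</p>

```python
# criteria entry carrying the "name" key the service says is missing
criteria = {
    "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
    "all_of": [
        {
            "name": "criterion1",  # required by the service for each criterion
            "criterion_type": "StaticThresholdCriterion",
            "metric_name": "KQLQueryExecutionResult",
            "metric_namespace": "microsoft.insights/components",
            "operator": "GreaterThan",
            "threshold": 0,
            "time_aggregation": "Count",
            "dimensions": [],
        }
    ],
}
```

<p>The original snippet also repeats the <code>metric_namespace</code> key within one dict, so only the last value survives; the sketch keeps a single occurrence.</p>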
|
<python><azure><azure-monitoring><azure-alerts>
|
2024-04-15 05:41:23
| 1
| 404
|
GoneCase123
|
78,325,944
| 7,584,138
|
How to consolidate slowing changing dimension tables using sql, python or r?
|
<p>I have below input table:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>type</th>
<th>value</th>
<th>date_from</th>
<th>date_to</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>department</td>
<td>finance</td>
<td>2020-01-01</td>
<td>9999-12-31</td>
</tr>
<tr>
<td>1</td>
<td>headcount</td>
<td>10</td>
<td>2020-01-01</td>
<td>2020-02-03</td>
</tr>
<tr>
<td>1</td>
<td>headcount</td>
<td>15</td>
<td>2020-02-04</td>
<td>9999-12-31</td>
</tr>
<tr>
<td>1</td>
<td>location</td>
<td>DC</td>
<td>2020-01-01</td>
<td>2020-01-21</td>
</tr>
<tr>
<td>1</td>
<td>location</td>
<td>NY</td>
<td>2020-01-22</td>
<td>9999-12-31</td>
</tr>
</tbody>
</table></div>
<p>I want to convert it to a wide table containing all "type" fields as columns:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>department</th>
<th>headcount</th>
<th>location</th>
<th>date_from</th>
<th>date_to</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>finance</td>
<td>10</td>
<td>DC</td>
<td>2020-01-01</td>
<td>2020-01-21</td>
</tr>
<tr>
<td>1</td>
<td>finance</td>
<td>10</td>
<td>NY</td>
<td>2020-01-22</td>
<td>2020-02-03</td>
</tr>
<tr>
<td>1</td>
<td>finance</td>
<td>15</td>
<td>NY</td>
<td>2020-02-04</td>
<td>9999-12-31</td>
</tr>
</tbody>
</table></div>
<p>Note that my actual data has multiple "id" and unknown "type" values. How can I achieve this effectively using either <code>sql</code>, <code>python</code> or <code>r</code>?</p>
<p><em>Edit</em></p>
<p>The final output represents a historic table which will enable me to go back to any given day to view the status on that day.</p>
<p>In my example, "department" is always "finance". "headcount" and "location" have changed, so that there are two new rows.</p>
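<p>One pandas sketch (column names taken from the sample; <code>9999-12-31</code> is kept as a string because it exceeds pandas' <code>Timestamp</code> range): every distinct <code>date_from</code> starts a new interval, and an as-of merge picks the value of each <code>type</code> in force at that interval's start.</p>

```python
import pandas as pd

data = [
    (1, "department", "finance", "2020-01-01", "9999-12-31"),
    (1, "headcount",  "10",      "2020-01-01", "2020-02-03"),
    (1, "headcount",  "15",      "2020-02-04", "9999-12-31"),
    (1, "location",   "DC",      "2020-01-01", "2020-01-21"),
    (1, "location",   "NY",      "2020-01-22", "9999-12-31"),
]
df = pd.DataFrame(data, columns=["id", "type", "value", "date_from", "date_to"])
df["date_from"] = pd.to_datetime(df["date_from"])

def consolidate(df):
    out = []
    for id_, g in df.groupby("id"):
        # every date on which any attribute changes starts a new interval
        wide = pd.DataFrame({"date_from": sorted(g["date_from"].unique())})
        for typ, tg in g.groupby("type"):
            tg = tg.sort_values("date_from")
            # as-of join: the value in force at each interval's start
            wide[typ] = pd.merge_asof(wide, tg[["date_from", "value"]],
                                      on="date_from")["value"]
        # each interval ends the day before the next one starts
        wide["date_to"] = (wide["date_from"].shift(-1) - pd.Timedelta(days=1)
                           ).dt.strftime("%Y-%m-%d").fillna("9999-12-31")
        wide.insert(0, "id", id_)
        out.append(wide)
    return pd.concat(out, ignore_index=True)

result = consolidate(df)
```

<p>For SQL, the same idea is usually expressed by unioning all <code>date_from</code> breakpoints per <code>id</code> and joining each attribute back with a <code>BETWEEN date_from AND date_to</code> condition.</p>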
|
<python><sql><pandas><dplyr><tidyverse>
|
2024-04-15 03:24:01
| 1
| 1,688
|
Frank Zhang
|
78,325,675
| 1,202,417
|
Serving a webpage content by running a php/python script
|
<p>I'm trying to set up an RSS feed for my site, so I would like to make a link that takes in a keyword and produces an RSS feed.</p>
<p>I have a python script (<code>script.py</code>) to generate this xml, but I don't know how to run it and serve the text to the user when my page is called.</p>
<p>Essentially I would like to have someone visit <code>mysite.com/<keyword></code> and be served the text generated in <code>script.py</code></p>
<p>I can make the text appear to the user by simply running JavaScript, but this isn't picked up by RSS readers.</p>
<pre><code><html>
<head>
<title>Run Python Script</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
</head>
<body>
<script>
$.ajax({
method: "POST",
url: "/script.py",
})
.done(function( response ) {
var newXML = document.open("text/xml", "replace");
newXML.write(response);
newXML.close();
});
</script>
</body>
</html>
</code></pre>
<p>It seems like there should be some way of generating this text and serving it to the user, but I feel like I'm just missing something obvious.</p>
<p>I'm using godaddy and cpanel if that helps</p>
|
<javascript><python><rss>
|
2024-04-15 00:55:23
| 0
| 411
|
Ben Fishbein
|
78,325,531
| 11,930,602
|
What is the standard way to insert out-of-order elements in a list of a possibly smaller size?
|
<p>What I have:</p>
<pre class="lang-py prettyprint-override"><code>msg: list = []
</code></pre>
<p>expected behaviour:</p>
<pre class="lang-py prettyprint-override"><code>msg.insert(2,"two") # msg = [None, None, "two"]
msg.insert(10,"ten") # msg = [None, None, "two", None, None, None, None, None, None, None, "ten"]
</code></pre>
<p>Current code:</p>
<pre class="lang-py prettyprint-override"><code>def insert_into_list(orig_list: list, index: int, element: str)->None: # does not need to return the list
if index >= len(orig_list): # sanity checks removed for MRE
for _ in range(index - len(orig_list) + 1):
orig_list.append(None)
orig_list.insert(index, element)
</code></pre>
<p>Is there a better (or a more standard and possibly shorter than writing another method) way to do this?</p>
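<p>One subtlety in the current helper: for an index past the end it pads the list to length <code>index + 1</code> and then <code>insert</code>s, which shifts one of the freshly appended <code>None</code>s and leaves a trailing extra element (<code>insert_into_list([], 2, "two")</code> yields a 4-element list, not the 3-element one shown as expected). Assuming the goal is "place at index, padding as needed", assigning after <code>extend</code> avoids the extra shift:</p>

```python
def set_at(lst, index, element):
    """Place element at index, padding with None when the list is shorter."""
    if index >= len(lst):
        lst.extend([None] * (index - len(lst) + 1))
        lst[index] = element        # assign instead of insert: no extra shift
    else:
        lst.insert(index, element)  # normal insert semantics within bounds

msg = []
set_at(msg, 2, "two")
set_at(msg, 10, "ten")
```

<p>This reproduces the expected behaviour from the question: <code>[None, None, "two"]</code> after the first call, then <code>"ten"</code> at index 10 after the second.</p>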
|
<python><list><insert>
|
2024-04-14 23:21:46
| 2
| 2,322
|
kesarling
|
78,325,356
| 6,197,439
|
How to setup an instance method monkeypatch in this PyQt5 code?
|
<p>Yes, I have seen:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/28127874/monkey-patching-python-an-instance-method">monkey-patching python an instance method</a></li>
<li><a href="https://stackoverflow.com/questions/8726238/how-to-call-the-original-method-when-it-is-monkey-patched">How to call the original method when it is monkey-patched?</a></li>
</ul>
<p>... but none of these approaches seem to work in my example:</p>
<p>Basically, this is an example that tries to use <a href="https://pypi.org/project/QtAwesome/" rel="nofollow noreferrer">QtAwesome</a> to provide a spinning icon to a QMessageBox; there is a class <code>Spin</code> in QtAwesome that does animations, which has a method <code>_update</code>: I would like to monkeypatch this method, so first the original method is called, and then a message is printed - so I have this:</p>
<pre class="lang-python prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication, QWidget, QMessageBox
import qtawesome as qta
class Example(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.setGeometry(300, 300, 300, 220)
self.setWindowTitle('Hello World')
self.show()
msgBox = QMessageBox( QMessageBox.Information, "Title", "Content ...", QMessageBox.Cancel )
orig_ipmsize = msgBox.iconPixmap().size()
print(orig_ipmsize.width(), orig_ipmsize.height()) # 0 0 for QMessageBox.NoIcon; 32 32 for QMessageBox.Information
animation = qta.Spin(msgBox, autostart=True)
DO_MONKEYPATCH = 2 # 1 or 2
if DO_MONKEYPATCH == 1:
old_anim_update = animation._update
def new_anim_update(self):
old_anim_update(self) # TypeError: Spin._update() takes 1 positional argument but 2 were given
print("new_anim_update")
animation._update = new_anim_update.__get__(animation, qta.Spin) # https://stackoverflow.com/a/28127947
elif DO_MONKEYPATCH == 2:
def update_decorator(method):
def decorate_update(self=None):
method(self) # TypeError: Spin._update() takes 1 positional argument but 2 were given
print("decorate_update")
return decorate_update
animation._update = update_decorator(animation._update) # https://stackoverflow.com/a/8726680
#print(animation._update)
spin_icon = qta.icon('fa5s.spinner', color='red', animation=animation)
msgBox.setIconPixmap(spin_icon.pixmap(orig_ipmsize)) #msgBox.setIcon(spin_icon)
#animation.start()
returnValue = msgBox.exec()
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
</code></pre>
<p>However, no matter which <code>DO_MONKEYPATCH</code> method I choose, I get <code>TypeError: Spin._update() takes 1 positional argument but 2 were given</code>?</p>
<hr />
<p>OK, just noticed how both errors apply while writing the question, and found if I change the "old method" calls to NOT use a <code>(self)</code> argument -- i.e. I change <code>old_anim_update(self)</code> / <code>method(self)</code>, to <code>old_anim_update()</code> / <code>method()</code> -- then both <code>DO_MONKEYPATCH</code> methods allow for running without the positional argument error - however only <code>DO_MONKEYPATCH</code> method 1 seems to preserve <code>self</code>:</p>
<pre class="lang-python prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication, QWidget, QMessageBox
import qtawesome as qta
class Example(QWidget):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.setGeometry(300, 300, 300, 220)
self.setWindowTitle('Hello World')
self.show()
msgBox = QMessageBox( QMessageBox.Information, "Title", "Content ...", QMessageBox.Cancel )
orig_ipmsize = msgBox.iconPixmap().size()
print(orig_ipmsize.width(), orig_ipmsize.height()) # 0 0 for QMessageBox.NoIcon; 32 32 for QMessageBox.Information
animation = qta.Spin(msgBox, autostart=True)
DO_MONKEYPATCH = 1 # 1 or 2
if DO_MONKEYPATCH == 1:
old_anim_update = animation._update
def new_anim_update(self):
old_anim_update() # no error
print("new_anim_update {}".format(self)) # self is <qtawesome.animation.Spin object at 0x00000238f1d45f10>
animation._update = new_anim_update.__get__(animation, qta.Spin) # https://stackoverflow.com/a/28127947
elif DO_MONKEYPATCH == 2:
def update_decorator(method):
def decorate_update(self=None):
            method() # no error now
print("decorate_update {}".format(self)) # self is None
return decorate_update
animation._update = update_decorator(animation._update) # https://stackoverflow.com/a/8726680
#print(animation._update)
spin_icon = qta.icon('fa5s.spinner', color='red', animation=animation)
msgBox.setIconPixmap(spin_icon.pixmap(orig_ipmsize)) #msgBox.setIcon(spin_icon)
#animation.start()
returnValue = msgBox.exec()
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_())
</code></pre>
<p>So, it seems that the <code>DO_MONKEYPATCH == 1</code> method is the answer to this question as originally posed -- but I am still puzzled: how does my call to <code>old_anim_update()</code>, which passes no <code>self</code>, invoke the old method correctly? Or is there a more correct way to do this kind of monkeypatch?</p>
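<p>To partially answer my own sub-question: I believe <code>old_anim_update()</code> works without an explicit <code>self</code> because <code>animation._update</code> is a <em>bound</em> method, so <code>self</code> is captured at attribute-access time. A minimal sketch (generic class, not qtawesome's actual internals):</p>

```python
class Spin:
    def _update(self):
        return f"update on {type(self).__name__}"

anim = Spin()
old_update = anim._update          # bound method: self is captured here

def new_update(self):
    # old_update() needs no self argument because it is already bound
    return "wrapped " + old_update()

# rebind the wrapper onto the instance, as in DO_MONKEYPATCH == 1
anim._update = new_update.__get__(anim, Spin)
result = anim._update()
print(result)  # wrapped update on Spin
```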
|
<python><python-3.x><pyqt5><monkeypatching>
|
2024-04-14 21:57:33
| 1
| 5,938
|
sdbbs
|
78,325,210
| 777,304
|
maping of memory into structure using ctypes in python
|
<p>I am trying to map memory into structure but unable to get desired results. My memory is list of 32 bit values i.e.</p>
<pre><code>[0x7E008000, 0x1234AAAA, 0xBBBBFFFF]
</code></pre>
<p>and I am trying to get this output.</p>
<pre><code>ns: 0x7e008000
us: 0x1234
zs: 0xaaaabbbb
crc: 0xffff
</code></pre>
<p>With the following code I am not getting the correct <code>zs</code> value, and if I enable the <code>crc</code> field I get an error. This is the output of the code below:</p>
<pre><code>> ns: 0x7e008000
> us: 0x1234
> zs: 0xbbbbffff
</code></pre>
<p>Here is the code</p>
<pre><code> import struct
import ctypes
class MemoryParser:
@classmethod
def parse_memory(cls, memory):
memory_bytes = bytearray()
for word in memory:
# Convert the word to little-endian bytes
word_bytes = struct.pack(">I", word)
# Extend memory_bytes with the word bytes in little-endian order
memory_bytes.extend(word_bytes)
# Create a ctypes pointer from the memory_bytes
ubuffer = (ctypes.c_ubyte * len(memory_bytes)).from_buffer(memory_bytes)
# Return the class instance created from the ctypes pointer
return cls.from_buffer(ubuffer)
class Data(MemoryParser, ctypes.BigEndianStructure):
_fields_ = [("ns", ctypes.c_uint32),
("us", ctypes.c_uint16),
("zs", ctypes.c_uint32)]
#("crc", ctypes.c_uint16)]
memory = [0x7E008000, 0x1234AAAA, 0xBBBBFFFF]
data = Data.parse_memory(memory)
print("ns:", hex(data.ns))
print("us:", hex(data.us))
print("zs:", hex(data.zs))
#print("crc:", hex(data.crc))
</code></pre>
<p>Update 1:</p>
<p>So I collected another data set and created fields for it, but it fails again.</p>
<p>data set, example</p>
<pre><code>[0x7E008000, 0x1234AAAA, 0xBBBBFFFC, 0xCCCDDEEE]
</code></pre>
<p>What would the new class field structure be for expected data like the one below?</p>
<pre><code>v1 = 0x7E008000
v2 = 0x1234
v3 = 0xAAAABBBB
v4 = 0xFFF
v5 = 0xCCCC
v6 = 0xDD
v7 = 0xEEEE
</code></pre>
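<p>For reference, my current understanding (possibly wrong) is that the incorrect <code>zs</code> value comes from alignment padding: with default alignment, a <code>c_uint32</code> after a <code>c_uint16</code> starts at offset 8 rather than 6. A sketch using <code>_pack_ = 1</code> that reproduces the first expected output:</p>

```python
import ctypes
import struct

class Data(ctypes.BigEndianStructure):
    _pack_ = 1  # no alignment padding: zs starts right after us (offset 6)
    _fields_ = [("ns", ctypes.c_uint32),
                ("us", ctypes.c_uint16),
                ("zs", ctypes.c_uint32),
                ("crc", ctypes.c_uint16)]

memory = [0x7E008000, 0x1234AAAA, 0xBBBBFFFF]
buf = bytearray()
for word in memory:
    buf.extend(struct.pack(">I", word))  # big-endian byte stream

data = Data.from_buffer(buf)
print(hex(data.ns), hex(data.us), hex(data.zs), hex(data.crc))
# 0x7e008000 0x1234 0xaaaabbbb 0xffff
```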
|
<python><ctypes>
|
2024-04-14 20:45:17
| 2
| 455
|
user777304
|
78,324,897
| 1,493,192
|
Python ConnectionRefusedError using a Docker compose with jupyter notebook and spring boot
|
<p>I wrote a local application using Spring Boot in which I have three containers: mongodb, mongo-express, and jupyter notebook. Using a Python script I can access the data without errors:</p>
<pre><code>import requests
import json
url = "http://localhost:8080/api/v1/metadata/sites"
response = requests.get(url)
print(response)
# print json content
print(response.json())
>>> <Response [200]>
</code></pre>
<p>However, when I repeat the same code in the notebook (inside the container) I get this error:</p>
<p><a href="https://i.sstatic.net/BAZTq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BAZTq.png" alt="enter image description here" /></a></p>
<pre><code>FROM python:3.9
LABEL authors=""
# Install any necessary packages
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Set the working directory
WORKDIR /app
# Copy the requirements file and install the dependencies
RUN pip install --upgrade pip
#RUN pip install --no-cache-dir jupyterlab numpy==1.26.4 && \
# pandas==2.2.2 && \
# requests==2.31.0
RUN pip install --no-cache-dir numpy==1.26.4 \
jupyter \
pandas==2.2.2 \
requests==2.31.0
# Copy the notebooks to the container
COPY . /app
EXPOSE 8888
# Set the default command to run Jupyter Notebook
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root", "--NotebookApp.token=''"]
version: '3.9'
services:
# Database - Mongo DB
mongodb:
image: mongo:latest
container_name: mongodb
restart: always
ports:
- "27017:27017"
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
- MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
volumes:
# - data:/data
- mongo-volume:/data/db2
- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
# - mongo-data:/data/db
# Database Manager
mongo-express:
image: mongo-express:latest
container_name: mongo-express
restart: always
ports:
- "8081:8081"
depends_on:
- mongodb
env_file: .env
environment:
- ME_CONFIG_MONGODB_ADMINUSERNAME=${ME_CONFIG_MONGODB_ADMINUSERNAME}
- ME_CONFIG_MONGODB_ADMINPASSWORD=${ME_CONFIG_MONGODB_ADMINPASSWORD}
- ME_CONFIG_MONGODB_SERVER=mongodb # same name of "mongodb" in services
jupyter:
build:
context: .
dockerfile: ./jupyter/Dockerfile
container_name: jupyter-metadata-itineris
ports:
- '8888:8888'
volumes:
- ./notebooks:/app
# Define named volumes
volumes:
mongo-volume: {}
networks:
default:
name: mongodb_network
</code></pre>
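<p>For context, my working hypothesis (unverified) is that <code>localhost</code> inside the jupyter container refers to the container itself, not the machine running the Spring Boot app, so the connection is refused. A sketch of the URL change I am considering (<code>host.docker.internal</code> is Docker Desktop's convention for reaching the host; on Linux one would use the host's IP or an extra_hosts entry):</p>

```python
# Inside the container, "localhost" is the container itself, so a Spring Boot
# app running on the host is unreachable at http://localhost:8080.
host = "host.docker.internal"  # Docker Desktop convention for "the host"
url = f"http://{host}:8080/api/v1/metadata/sites"
print(url)
# requests.get(url) would then be issued against the host, not the container
```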
|
<python><spring-boot><docker-compose><jupyter-notebook>
|
2024-04-14 18:44:19
| 0
| 8,048
|
Gianni Spear
|
78,324,819
| 7,199,629
|
Optimizing Lat/Lon Extraction from Astropy's GeocentricTrueEcliptic SkyCoord
|
<p>I am facing a challenge that should be straightforward but has proven to be quite complex with the Astropy library. Specifically, I need to frequently compute the longitude and latitude from a <code>SkyCoord</code> object that has been transformed to the <code>GeocentricTrueEcliptic</code> frame. Here is the relevant section of my code:</p>
<pre class="lang-py prettyprint-override"><code>from astropy.coordinates import SkyCoord, GeocentricTrueEcliptic, ICRS
import astropy.units as u
from astropy.time import Time
coo_icrs = SkyCoord(150*u.deg, 19*u.deg, frame=ICRS(), obstime=Time(2460791., format="jd"))
coo = coo_icrs.geocentrictrueecliptic
some_function(coo.lon, coo.lat)
</code></pre>
<p>Accessing <code>coo.lon</code> and <code>coo.lat</code> proves to be time-consuming due to unnecessary computations that don’t align with my use case. The issue is exacerbated as this computation needs to be repeated millions of times during each testing phase of my module, significantly impacting performance.</p>
<p>Upon investigation, I found that <code>coo.lon</code> and <code>coo.lat</code> may be slow because <code>SkyCoord</code> performs a transformation to spherical coordinates. I discovered a much faster approach by directly accessing internal attributes (<code>coo_icrs._sky_coord_frame._data</code>):</p>
<pre class="lang-py prettyprint-override"><code>%timeit coo_icrs._sky_coord_frame._data._lon
%timeit coo_icrs.ra
# 46.1 ns ± 0.203 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
# 6.21 µs ± 69.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
</code></pre>
<p>However, this shortcut doesn’t apply to <code>coo</code> since its coordinates are only available in XYZ Cartesian format:</p>
<pre class="lang-py prettyprint-override"><code>coo._sky_coord_frame._data
# <CartesianRepresentation (x, y, z) [dimensionless]
# (-0.83586289, 0.53450713, 0.12504145)>
</code></pre>
<p>Extracting values directly (<code>._x.value</code>, <code>._y.value</code>, and <code>._z.value</code>) and converting them manually to longitude and latitude is still faster than using <code>coo.lon</code> and <code>coo.lat</code>, but this manual conversion is also computationally intensive.</p>
<p>I am searching for a method to more efficiently retrieve longitude and latitude values from a <code>SkyCoord</code> object in the <code>GeocentricTrueEcliptic</code> frame, avoiding the extra computational overhead. Is there a faster, more direct way to obtain these values?</p>
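<p>For completeness, the manual conversion I mention above is roughly the following (a numpy sketch; the values are copied from the Cartesian representation shown earlier):</p>

```python
import numpy as np

# x, y, z taken from coo._sky_coord_frame._data above
x, y, z = -0.83586289, 0.53450713, 0.12504145
r = np.sqrt(x * x + y * y + z * z)
lon = np.degrees(np.arctan2(y, x)) % 360.0   # ecliptic longitude, degrees
lat = np.degrees(np.arcsin(z / r))           # ecliptic latitude, degrees
print(lon, lat)
```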
|
<python><numpy><numba><jit><astropy>
|
2024-04-14 18:16:23
| 1
| 361
|
ysBach
|
78,324,721
| 1,182,299
|
BERT MLM model fine-tune on small data bad results
|
<p>I want to fine-tune a BERT MLM model from Hugging Face. I have only a small amount of data (train.csv), like this:</p>
<pre><code>text
בראשית ברא אלהים את השמים ואת הארץ
והארץ היתה תהו ובהו וחשך על פני תהום ורוח אלהים מרחפת על פני המים
ויאמר אלהים יהי אור ויהי אור
וירא אלהים את האור כי טוב ויבדל אלהים בין האור ובין החשך
ויקרא אלהים לאור יום ולחשך קרא לילה ויהי ערב ויהי בקר יום אחד
ויאמר אלהים יהי רקיע בתוך המים ויהי מבדיל בין מים למים
ויעש אלהים את הרקיע ויבדל בין המים אשר מתחת לרקיע ובין המים אשר מעל לרקיע ויהי כן
ויקרא אלהים לרקיע שמים ויהי ערב ויהי בקר יום שני
ויאמר אלהים יקוו המים מתחת השמים אל מקום אחד ותראה היבשה ויהי כן
ויקרא אלהים ליבשה ארץ ולמקוה המים קרא ימים וירא אלהים כי טוב
</code></pre>
<p>Below is my script for doing the fine-tuning:</p>
<pre><code>from huggingface_hub import login
from transformers import AutoModelForMaskedLM
from transformers import AutoTokenizer
import torch
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling
import collections
import numpy as np
from transformers import default_data_collator
from transformers import TrainingArguments
from transformers import Trainer
import math
from transformers import EarlyStoppingCallback, IntervalStrategy
access_token_write = "hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
login(token = access_token_write)
model_checkpoint = "dicta-il/BEREL_2.0"
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
sam_dataset = load_dataset("johnlockejrr/samv2")
def tokenize_function(examples):
result = tokenizer(examples["text"])
if tokenizer.is_fast:
result["word_ids"] = [result.word_ids(i) for i in range(len(result["input_ids"]))]
return result
tokenized_datasets = sam_dataset.map(
tokenize_function, batched=True, remove_columns=["text"]
)
chunk_size = 128
tokenized_samples = tokenized_datasets["train"][:3]
concatenated_examples = {
k: sum(tokenized_samples[k], []) for k in tokenized_samples.keys()
}
def group_texts(examples):
# Concatenate all texts
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len
result = {
k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["text"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(group_texts, batched=True)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
samples = [lm_datasets["train"][i] for i in range(2)]
train_size = 1_000
test_size = int(0.1 * train_size)
downsampled_dataset = lm_datasets["train"].train_test_split(
train_size=train_size, test_size=test_size, seed=42
)
batch_size = 64
logging_steps = len(downsampled_dataset["train"]) // batch_size
model_name = model_checkpoint.split("/")[-1]
training_args = TrainingArguments(
output_dir=f"{model_name}-finetuned-sam-v1",
overwrite_output_dir=True,
evaluation_strategy=IntervalStrategy.STEPS, # "steps"
eval_steps = 50, # Evaluation and Save happens every 50 steps
save_total_limit = 5, # Only last 5 models are saved. Older ones are deleted.
learning_rate=2e-5,
weight_decay=0.01,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=5,
push_to_hub=True,
fp16=True,
logging_steps=logging_steps,
metric_for_best_model = 'f1',
load_best_model_at_end=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=downsampled_dataset["train"],
eval_dataset=downsampled_dataset["test"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
eval_results = trainer.evaluate()
print(f">>> Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
trainer.push_to_hub()
</code></pre>
<p>All went well: the model trained and was pushed to Hugging Face.
Problems:</p>
<ol>
<li>Loss: 6.2156</li>
<li>The model works for <code>Fill-Mask</code>, but with my new data something seems to have happened in the process, and some words are predicted as subword tokens rather than full words:</li>
</ol>
<p>Here is a script as an example of <code>Fill-Mask</code> and the output:</p>
<pre><code>from transformers import pipeline
mask_filler = pipeline(
"fill-mask", model="johnlockejrr/BEREL_2.0-finetuned-sam-v1"
)
text = "ואלין שמהת בני [MASK] דעלו למצרים עם יעקב גבר וביתה עלו"
preds = mask_filler(text)
for pred in preds:
print(f">>> {pred['sequence']}")
</code></pre>
<p>Output:</p>
<pre><code>ואלין שמה ##ת בני יעקב דעלו למצרים עם יעקב גבר ובית ##ה עלו
ואלין שמה ##ת בני ישראל דעלו למצרים עם יעקב גבר ובית ##ה עלו
ואלין שמה ##ת בני לאה דעלו למצרים עם יעקב גבר ובית ##ה עלו
ואלין שמה ##ת בני ראובן דעלו למצרים עם יעקב גבר ובית ##ה עלו
ואלין שמה ##ת בני דן דעלו למצרים עם יעקב גבר ובית ##ה עלו
</code></pre>
<p>I was expecting this output:</p>
<pre><code>ואלין שמהת בני יעקב דעלו למצרים עם יעקב גבר וביתה עלו
ואלין שמהת בני ישראל דעלו למצרים עם יעקב גבר וביתה עלו
ואלין שמהת בני לאה דעלו למצרים עם יעקב גבר וביתה עלו
ואלין שמהת בני ראובן דעלו למצרים עם יעקב גבר וביתה עלו
ואלין שמהת בני דן דעלו למצרים עם יעקב גבר וביתה עלו
</code></pre>
<p>What am I doing wrong?</p>
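<p>As a workaround I can merge the <code>##</code> continuation pieces back into words in post-processing (a generic WordPiece-detokenization sketch; it does not explain the training loss, it only cleans the output):</p>

```python
def merge_wordpieces(tokens):
    """Join WordPiece continuation tokens ('##xxx') onto the previous token."""
    words = []
    for tok in tokens:
        if tok.startswith("##") and words:
            words[-1] += tok[2:]   # glue the continuation onto the last word
        else:
            words.append(tok)
    return " ".join(words)

print(merge_wordpieces(["ואלין", "שמה", "##ת", "בני", "יעקב"]))
# ואלין שמהת בני יעקב
```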
|
<python><huggingface-transformers><bert-language-model>
|
2024-04-14 17:44:46
| 1
| 1,791
|
bsteo
|
78,324,619
| 1,802,483
|
Problem installing `apache-flink:1.19.0` using python docker 3.9 - 3.12
|
<p>I am having trouble installing <code>PyFlink</code>/<code>apache-flink 1.19.0</code> using the Python Docker images 3.9 - 3.12, following <a href="https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/installation/" rel="nofollow noreferrer">the official tutorial</a>.</p>
<p>It seems that the error is about a path that returns <code>NoneType</code>, but I am not sure how to fix it.</p>
<p>I did not post <code>python:3.12</code> as the error is the same as for the others.</p>
<p>Can you please help me on this?</p>
<p>Thank you very much.</p>
<h2>Python 3.9</h2>
<p><code>docker run -it python:3.9 python -m pip install apache-flink</code></p>
<pre><code> × Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-rdoi1tww/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
File "/tmp/pip-build-env-rdoi1tww/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-rdoi1tww/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 267, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-rdoi1tww/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 169, in <module>
include_dirs=get_java_include() + ['src/main/c/pemja/core/include'],
File "setup.py", line 111, in get_java_include
inc = os.path.join(get_java_home(), inc_name)
File "/usr/local/lib/python3.9/posixpath.py", line 76, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<h2>Python 3.10</h2>
<p><code>docker run -it python:3.10 python -m pip install apache-flink</code></p>
<pre><code> × Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-ptcni4h1/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
File "/tmp/pip-build-env-ptcni4h1/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-ptcni4h1/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 267, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-ptcni4h1/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 169, in <module>
include_dirs=get_java_include() + ['src/main/c/pemja/core/include'],
File "setup.py", line 111, in get_java_include
inc = os.path.join(get_java_home(), inc_name)
File "/usr/local/lib/python3.10/posixpath.py", line 76, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<h2>Python 3.11</h2>
<p><code>docker run -it python:3.11 python -m pip install apache-flink</code></p>
<pre><code> × Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [27 lines of output]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-jpt4cjc2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-jpt4cjc2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-jpt4cjc2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-jpt4cjc2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 169, in <module>
include_dirs=get_java_include() + ['src/main/c/pemja/core/include'],
^^^^^^^^^^^^^^^^^^
File "setup.py", line 111, in get_java_include
inc = os.path.join(get_java_home(), inc_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<h3>UPDATE 15/4/2024</h3>
<p>It works on Windows, but it does not work on Mac when using this command. Is it a permission problem?</p>
<p><code>docker run -it python:3.11 pip install apache-flink</code></p>
<p>My Mac Config:</p>
<ul>
<li>M1 Max</li>
<li>Sonoma 14.4.1</li>
</ul>
<p>Message below:</p>
<pre><code>× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [27 lines of output]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-r8xb1epe/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-r8xb1epe/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-r8xb1epe/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-r8xb1epe/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 169, in <module>
include_dirs=get_java_include() + ['src/main/c/pemja/core/include'],
^^^^^^^^^^^^^^^^^^
File "setup.py", line 111, in get_java_include
inc = os.path.join(get_java_home(), inc_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
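<p>For what it's worth, the failing line in pemja's <code>setup.py</code> is <code>os.path.join(get_java_home(), inc_name)</code>, so my current guess is that <code>get_java_home()</code> returns <code>None</code> because no JDK / <code>JAVA_HOME</code> is available in the image. A minimal reproduction of exactly that TypeError:</p>

```python
import os

java_home = None  # what get_java_home() presumably returns with no JDK found
try:
    os.path.join(java_home, "include")
    msg = "no error"
except TypeError as exc:
    msg = str(exc)
print(msg)  # expected str, bytes or os.PathLike object, not NoneType
```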
|
<python><docker><apache-flink>
|
2024-04-14 17:07:58
| 2
| 705
|
Ellery Leung
|
78,324,332
| 1,848,244
|
How to instantiate a large number of NotRequired arguments in a TypedDict?
|
<p>Consider this contrived example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Mapping, Union, MutableMapping
from typing_extensions import TypedDict, NotRequired
class Pet(TypedDict):
    type: str
    softness: NotRequired[int]
    name: NotRequired[str]
# **IMPORTANT**: Assume these are only known at run time.
softness_exists = False
name_exists = True
optargs: MutableMapping[str, Union[int, str]] = dict()
if softness_exists:
optargs['softness'] = 999999
if name_exists:
optargs['name'] = 'David'
p = Pet(
type='Dog',
    #Unsupported type "MutableMapping[str, Union[int, str]]" for ** expansion in TypedDict
**optargs
)
print(p)
</code></pre>
<p>In my real-world use case, I have a relatively large number of optional arguments -- enough that conditionally populating <code>optargs</code> based on the run-time input is the only practical way to accomplish the construction of the TypedDict.</p>
<p>But this does not appear to be allowed. What is the recommended way to construct a TypedDict with a large number of <code>NotRequired</code> fields, whose applicability is decided at run time?</p>
<p>I suspect that which <code>NotRequired</code> fields are present in an instantiation of a TypedDict cannot be decided at run time.</p>
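<p>The closest workaround I have found so far is to build a plain <code>dict</code> at run time and <code>cast</code> it, accepting that the per-key static check is lost (a sketch; the <code>try</code>/<code>except</code> import is for pre-3.11 Pythons):</p>

```python
from typing import cast
try:
    from typing import TypedDict, NotRequired  # Python 3.11+
except ImportError:
    from typing_extensions import TypedDict, NotRequired

class Pet(TypedDict):
    type: str
    softness: NotRequired[int]
    name: NotRequired[str]

name_exists = True  # known only at run time
optargs = {}
if name_exists:
    optargs["name"] = "David"

# cast() tells the type checker "trust me"; no run-time validation happens
p = cast(Pet, {"type": "Dog", **optargs})
print(p)  # {'type': 'Dog', 'name': 'David'}
```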
|
<python><mypy><python-typing>
|
2024-04-14 15:41:39
| 1
| 437
|
user1848244
|
78,324,285
| 10,200,497
|
How can I find the first row that meets conditions of a mask for each group?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': ['x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y'],
'b': [1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 2, 2],
'c': [9, 8, 11, 13, 14, 3, 104, 106, 11, 100, 70, 7]
}
)
</code></pre>
<p>Expected output: Creating column <code>out</code>:</p>
<pre><code> a b c out
0 x 1 9 NaN
1 x 1 8 NaN
2 x 1 11 NaN
3 x 2 13 found
4 x 2 14 NaN
5 y 1 3 NaN
6 y 1 104 found
7 y 1 106 NaN
8 y 2 11 NaN
9 y 2 100 NaN
10 y 2 70 NaN
11 y 2 7 NaN
</code></pre>
<p>The mask is:</p>
<pre><code>mask = (df.c > 10)
</code></pre>
<p>The process: Grouping is by column <code>a</code>:</p>
<p><strong>a)</strong> For each group, finding the first row that meets the conditions of the <code>mask</code>.</p>
<p><strong>b)</strong> For group <code>x</code> this condition only applies when <code>b == 2</code>. That is why row <code>3</code> is selected.</p>
<p>And this is my attempt. It is getting close but it feels like this is not the way:</p>
<pre><code>def func(g):
mask = (g.c > 10)
g.loc[mask.cumsum().eq(1) & mask, 'out'] = 'found'
return g
df = df.groupby('a').apply(func)
</code></pre>
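<p>For comparison, here is an apply-free sketch that marks the first row where the plain <code>c > 10</code> mask holds per group. Note that under that mask alone it marks row 2 for group <code>x</code>, not row 3, so the extra <code>b</code>-based condition from (b) would still need to be folded into the mask:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'a': ['x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y'],
    'b': [1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 2, 2],
    'c': [9, 8, 11, 13, 14, 3, 104, 106, 11, 100, 70, 7],
})
m = (df.c > 10).astype(int)
first = m.groupby(df['a']).idxmax()        # index of first 1 per group
first = first[m.groupby(df['a']).max() > 0]  # drop groups with no match
df.loc[first, 'out'] = 'found'
print(df)
```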
|
<python><pandas><dataframe><group-by>
|
2024-04-14 15:24:24
| 3
| 2,679
|
AmirX
|
78,323,995
| 7,633,739
|
`BaseSettings` has been moved to the `pydantic-settings` package
|
<p>I have been struggling with this error for a long time now and have tried other solutions, but with no luck; here is an overview of the issue.</p>
<p>Firstly, I am not using this package anywhere in my project; it seems Python is importing it internally, but I am still facing this issue. Here are my Python configurations:</p>
<pre><code>Python 3.9.6
pip 21.2.4
</code></pre>
<p>I also tried another version of Python, <code>3.10.9</code>.</p>
<p>Below is the error i am facing.</p>
<pre><code> File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/langchain/vectorstores/chroma.py", line 81, in __init__
import chromadb
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/chromadb/__init__.py", line 1, in <module>
import chromadb.config
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/chromadb/config.py", line 1, in <module>
from pydantic import BaseSettings, Field
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/pydantic/__init__.py", line 380, in __getattr__
return _getattr_migration(attr_name)
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/pydantic/_migration.py", line 296, in wrapper
raise PydanticImportError(
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.7/migration/#basesettings-has-moved-to-pydantic-settings for more details.
For further information visit https://errors.pydantic.dev/2.7/u/import-error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 2213, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 2193, in wsgi_app
response = self.handle_exception(e)
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/pradeepyenkuwale/Work/Repositories/TripAdvisor/setup.py", line 53, in userQuery
resp = ProcessConsumerQuery(query)
File "/Users/pradeepyenkuwale/Work/Repositories/TripAdvisor/lib/process_query.py", line 30, in ProcessConsumerQuery
index = VectorstoreIndexCreator().from_loaders([loader])
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/langchain/indexes/vectorstore.py", line 83, in from_loaders
return self.from_documents(docs)
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/langchain/indexes/vectorstore.py", line 88, in from_documents
vectorstore = self.vectorstore_cls.from_documents(
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/langchain/vectorstores/chroma.py", line 771, in from_documents
return cls.from_texts(
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/langchain/vectorstores/chroma.py", line 707, in from_texts
chroma_collection = cls(
File "/Users/pradeepyenkuwale/Library/Python/3.9/lib/python/site-packages/langchain/vectorstores/chroma.py", line 84, in __init__
raise ImportError(
ImportError: Could not import chromadb python package. Please install it with pip install chromadb.
</code></pre>
<p>FYI, I am using the <code>LangChain</code> AI framework with <code>ChatGPT</code>.
The same code works fine on a Windows machine; I am trying to run the repository on a MacBook Pro. I tried the same versions as on Windows as well, but I am still facing this issue.</p>
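<p>For reference, my understanding is that this is a pydantic v1 → v2 migration issue surfacing through an old <code>chromadb</code>: <code>BaseSettings</code> only exists in pydantic < 2, so libraries now have to branch on the installed major version. A sketch of that gate (illustration only; the real fix is presumably upgrading <code>chromadb</code> or pinning <code>pydantic<2</code>):</p>

```python
def basesettings_import(pydantic_version: str) -> str:
    """Return the import line that works for the given pydantic version."""
    major = int(pydantic_version.split(".")[0])
    if major >= 2:
        # moved out of pydantic core into the pydantic-settings package
        return "from pydantic_settings import BaseSettings"
    return "from pydantic import BaseSettings"

print(basesettings_import("2.7.0"))
print(basesettings_import("1.10.15"))
```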
|
<python><langchain>
|
2024-04-14 13:48:48
| 0
| 320
|
Pradeep Yenkuwale
|
78,323,943
| 5,257,430
|
Statistic values of Fleiss Kappa using statsmodels.stats.inter_rater
|
<p>I use <code>statsmodels.stats.inter_rater.fleiss_kappa</code> to calculate my inter-rater reliability, but I only get the kappa value. What if I need the z-value, p-value, and range?</p>
|
<python><pandas><statsmodels>
|
2024-04-14 13:31:15
| 1
| 621
|
pill45
|
78,323,859
| 9,111,293
|
Broadcast pytorch array across channels based on another array
|
<p>I have two arrays, <code>x</code> and <code>y</code>, with the same shape (<code>B 1 N</code>).</p>
<p><code>x</code> represents data and <code>y</code> represents which class (from <code>1</code> to <code>C</code>) each datapoint in <code>x</code> belongs to.</p>
<p>I want to create a new tensor <code>z</code> (with shape <code>B C</code>) where</p>
<ol>
<li>the data in <code>x</code> are partitioned into channels based on their classes in <code>y</code></li>
<li>and summed over <code>N</code></li>
</ol>
<p>I can accomplish this if I use a one-hot encoding. However, for large tensors (especially with a large number of classes), PyTorch's one-hot encoding quickly uses up all memory on the GPU.</p>
<p>Is there a more memory-efficient way to do this broadcasting without explicitly allocating a <code>B C N</code> tensor?</p>
<p>Here's an MWE of what I'm after:</p>
<pre class="lang-py prettyprint-override"><code>import torch
B, C, N = 2, 10, 1000
x = torch.randn(B, 1, N)
y = torch.randint(low=0, high=C, size=(B, 1, N))
one_hot = torch.nn.functional.one_hot(y, C) # B 1 N C
one_hot = one_hot.squeeze().permute(0, -1, 1) # B C N
z = x * one_hot # B C N
z = z.sum(-1) # B C
</code></pre>
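<p>One memory-efficient alternative (my own sketch, not from the question) is <code>Tensor.scatter_add_</code>, which accumulates directly into the <code>B C</code> output and never materializes a <code>B C N</code> intermediate:</p>

```python
import torch

B, C, N = 2, 10, 1000
x = torch.randn(B, 1, N)
y = torch.randint(low=0, high=C, size=(B, 1, N))

# For each (b, n): z[b, y[b, n]] += x[b, n]
# Only the B x C accumulator is allocated, never a B x C x N tensor.
z = torch.zeros(B, C, dtype=x.dtype).scatter_add_(1, y.squeeze(1), x.squeeze(1))
```

<p>Each element of <code>x</code> is added into the channel selected by the matching element of <code>y</code>, which is exactly the one-hot multiply-and-sum above without the intermediate.</p>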
|
<python><pytorch><tensor>
|
2024-04-14 12:59:45
| 1
| 579
|
Vivek
|
78,323,749
| 6,915,206
|
AWS EC2 Server not serving some pages and static files properly
|
<p>I just deployed a <a href="http://3.17.142.65/" rel="nofollow noreferrer">website</a> on AWS EC2 from a GitHub clone. When I visit the <a href="http://3.17.142.65/influencers/" rel="nofollow noreferrer">Influencer Marketing</a> & <a href="http://3.17.142.65/career/" rel="nofollow noreferrer">Career</a> pages, the server serves the static files from the <strong>S3 bucket</strong> correctly. But when I visit my <a href="http://3.17.142.65/" rel="nofollow noreferrer">home page</a> & <a href="http://3.17.142.65/COE/" rel="nofollow noreferrer">Who Are We</a> pages, it does not serve the static files, and I am also not getting these pages' contents (raw data). Not serving the static files would be one thing, but <strong>where has the content of both pages gone</strong>? What am I missing here? I am new to AWS and website deployment, so if I made a mistake, please let me know and I will correct it. If you require any additional information, I will add it.</p>
<p>Here you can access both home & Who we are pages -<a href="https://github.com/rahul6612/testing-repo/tree/main" rel="nofollow noreferrer">link</a></p>
<p><strong>Configure Nginx to Proxy Pass to Gunicorn</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/try-django-digital-marketing/try-django-digital-marketing
ExecStart=/home/ubuntu/try-django-digital-marketing/try-django-digital-marketing/env/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
BE.wsgi:application
server {
listen 80;
server_name 3.17.142.65;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}</code></pre>
</div>
</div>
</p>
<p><strong>Bucket Policy, Cross-origin resource sharing (CORS) & User Policy</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::try-marketing/*"
}
]
}
__________________________________________________
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"POST",
"GET",
"PUT"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
}
]
___________________________________________________
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:AbortMultipartUpload"
],
"Resource": [
"arn:aws:s3:::try-marketing",
"arn:aws:s3:::try-marketing/*"
],
"Effect": "Allow"
}
]
}</code></pre>
</div>
</div>
</p>
<p>AWS Conf file</p>
<pre><code>AWS_USERNAME = 'user11111'
AWS_ACCESS_KEY_ID = 'xxxxxxxxxx'
AWS_SECRET_ACCESS_KEY = 'xxxxxxxxxxxx'
AWS_PRELOAD_METADATA = True
AWS_QUERYSTRING_AUTH = False
AWS_S3_SIGNATURE_VERSION = "s3v4"
AWS_S3_REGION_NAME = 'us-east-2'
DEFAULT_FILE_STORAGE = 'BE.aws.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'BE.aws.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = 'try-marketing'
S3DIRECT_REGION = 'us-east-2'
S3_URL = '//%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
MEDIA_URL = '//%s.s3.amazonaws.com/media/' % AWS_STORAGE_BUCKET_NAME
MEDIA_ROOT = MEDIA_URL
STATIC_URL = S3_URL + 'static/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
AWS_DEFAULT_ACL = None
</code></pre>
|
<python><django><amazon-web-services><amazon-s3><django-staticfiles>
|
2024-04-14 12:13:05
| 2
| 563
|
Rahul Verma
|
78,323,737
| 1,440,299
|
Python+Selenium. Page won't load in headless mode
|
<p>I'm trying load page with that code</p>
<pre><code> from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
chrome_options = Options()
chrome_options.add_argument("--headless=new")
driver = webdriver.Chrome(options=chrome_options)
driver.get('https://www.chabad.org/calendar/zmanim_cdo/aid/143790/locationid/790/locationtype/1/save/1/tdate/4-15-2024/jewish/Zmanim-Halachic-Times.htm')
try:
myElem = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.ID, 'group_4/15/2024')))
print("Page is ready!")
except TimeoutException:
print("Loading took too much time!")
</code></pre>
<p>and I get a <code>TimeoutException</code>. But if I run the browser in standard mode (not headless), it works fine.</p>
<p>Because I need to get information from the page on a server, I want to run the browser in headless mode. So how can I load information from that page?</p>
|
<python><selenium-webdriver><selenium-chromedriver><google-chrome-headless>
|
2024-04-14 12:09:41
| 1
| 355
|
Ishayahu
|
78,323,703
| 2,913,106
|
What sparse object should be used for indexing via coordinates and efficient summing across dimensions?
|
<p>I'd like to find some container object to contain scalar values (it doesn't really matter whether integral or fractional or floating point types). The following snippet roughly outlines the use cases and the criteria it needs to fulfill. In this snippet I'm using a numpy array, but I'd like to avoid that as it needs a lot of memory, because it will only be sparsely filled. But other than that, it fulfills my requirements.</p>
<ul>
<li>The object should hold scalar values, that are indexed by a <code>d</code>-dimensional index with values of <code>0,1,2,...,n-1</code>. (Imagine something like <code>5 <= d <= 100</code>, <code>20 <= n <= 200</code>).</li>
<li><s>The object should be mutable, i.e. the values need to be updatable.</s> (Edit: Not actually necessary, see discussion below.)</li>
<li>All possible indices should be initialized with zero at the start. That is, all indices that have not been accessed should implicitly assumed to hold zero.</li>
<li>It should be efficient to sum across one or multiple dimensions.</li>
</ul>
<p>Is there a built-in Python object that satisfies this, or is there a data structure that can be efficiently implemented in Python?</p>
<p>So far I have found the scipy <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_array.html#scipy.sparse.coo_array" rel="nofollow noreferrer">COO arrays</a>, that satisfy some of the criteria, but they only support 1- and 2-d indexing.</p>
<p>For context: The idea is building a frequency table of certain objects, and use this then as a distribution to sample from, marginalize, etc.</p>
<pre><code>import numpy as np
# just some placeholder data
data = [((1,0,5),6),((2,6,5),100),((5,3,1),1),((2,0,5),4),((2,6,5),100)]
# structure mapping [0,n]^d coordinates to scalars
data_structure = np.zeros((10, 10, 10))
# needs to be mutable #EDIT: doesn't need to be, see discussion below
for coords, value in data:
data_structure[coords] += value #EDIT: can be done as preprocessing step
# needs to be able to efficiently sum across dimensions
for k in range(100):
x = data_structure.sum(axis=(1,2))
y = data_structure.sum(axis=(0,))
z = data_structure.sum(axis=(0,2))
</code></pre>
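<p>For what it's worth, one stdlib-only sketch (my own suggestion, not from the question) is a <code>dict</code> keyed by coordinate tuples, with marginal sums computed by projecting each key onto the dimensions you keep:</p>

```python
from collections import defaultdict

def build(data):
    # Absent keys are implicitly zero, matching the "initialized with zero" criterion.
    table = defaultdict(float)
    for coords, value in data:
        table[coords] += value
    return table

def marginal(table, keep_dims):
    # Sum across every dimension *not* listed in keep_dims by projecting each key.
    out = defaultdict(float)
    for coords, value in table.items():
        out[tuple(coords[d] for d in keep_dims)] += value
    return dict(out)

data = [((1, 0, 5), 6), ((2, 6, 5), 100), ((5, 3, 1), 1),
        ((2, 0, 5), 4), ((2, 6, 5), 100)]
table = build(data)
```

<p>Memory is proportional to the number of non-zero entries rather than <code>n**d</code>, though each marginal is a full pass over the stored keys.</p>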
|
<python><arrays><numpy><data-structures><probability-distribution>
|
2024-04-14 11:55:38
| 1
| 11,728
|
flawr
|
78,323,611
| 1,725,836
|
OpenCV play video without waitKey
|
<p>I'm trying to play a video with OpenCV in Python. For that I use this code:</p>
<pre class="lang-py prettyprint-override"><code>i = 0
while True:
frame = frames[i]
cv2.imshow("video", frame)
cv2.waitKey(int(1000 / frame_rate))
i += 1
</code></pre>
<p>But once the user uses his keyboard, the playback speed changes and the video plays in a "fast-forward" mode. I don't need key detection, so I tried to replace the <code>waitKey</code> with <code>time.sleep</code> but the video is not playing at all.</p>
<p>How can I prevent the keyboard from changing the playback speed?</p>
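<p>One way I'd sketch around this (assuming the cause is that <code>waitKey</code> returns early whenever a key is pressed, shortening the frame delay) is to keep re-waiting until the full frame interval has elapsed, so key presses can no longer speed up playback. In the real loop, <code>wait_fn</code> would be <code>cv2.waitKey</code>:</p>

```python
import time

def paced_wait(wait_fn, frame_ms):
    """Wait frame_ms milliseconds even if wait_fn returns early on a key press."""
    deadline = time.monotonic() + frame_ms / 1000.0
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return
        # wait_fn may return early (key press); the loop then waits for the remainder
        wait_fn(max(1, int(remaining * 1000)))
```

<p>The playback loop then calls <code>paced_wait(cv2.waitKey, 1000 / frame_rate)</code> in place of the single <code>waitKey</code> call, which keeps the window responsive (unlike <code>time.sleep</code>, which blocks GUI event processing).</p>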
|
<python><opencv><user-interface>
|
2024-04-14 11:24:06
| 1
| 9,887
|
nrofis
|
78,323,579
| 6,709,460
|
FastAPI TrustedHostMiddleware refuses my host
|
<p>When I was developing the application on my local machine and tried to reach <code>/docs</code>, I had to set up <code>127.0.0.1</code> as my trusted host. This worked fine. But now I have put the app on the server, and when I try to reach <code>/docs</code> with the IP of my computer, I am denied access.</p>
<p>I have checked my IP on several websites, I have also checked <code>request.client.host</code> to confirm the IP. But I can not access the <code>/docs</code>.</p>
<p>The only way I can access the route is, if I set <code>trusted_hosts = ["*"]</code>. But this doesn't make sense.</p>
<p>Here is my code:</p>
<pre><code>from fastapi import FastAPI
from fastapi.middleware.trustedhost import TrustedHostMiddleware
app = FastAPI()
# Define a list of trusted hosts
trusted_hosts = [
"my_ip_address"
]
# Add TrustedHostMiddleware to the app with the list of trusted hosts
app.add_middleware(TrustedHostMiddleware, allowed_hosts=trusted_hosts)
</code></pre>
|
<python><fastapi><middleware><starlette>
|
2024-04-14 11:14:46
| 1
| 741
|
Testing man
|
78,323,390
| 10,480,181
|
How to install wheel file dependency with extras on Databricks?
|
<p>One of my workflows has a Python wheel file dependency. However, that wheel file also has extra dependencies that need to be installed. How can I do that on Databricks?</p>
<p>I tried using a init script but it doesn't work. I want to do something analogous to:</p>
<pre><code>pip install "path/to/wheel/file/mywheel.whl[extras]"
</code></pre>
<p>How to achieve this in a Databricks workflow? The entry point to the task is a python function defined in the wheel file.</p>
|
<python><azure-databricks><python-packaging><python-wheel>
|
2024-04-14 10:04:10
| 0
| 883
|
Vandit Goel
|
78,323,238
| 7,611,838
|
Timeout before the position for partition could be determined in one topic in apache beam ReadFromKafka
|
<p>I have a Google Dataflow job that uses Apache Beam's <code>ReadFromKafka</code> to consume messages from 4 topics. The pipeline used to work fine, but after we added a new broker to our Kafka cluster and triggered a rebalancing, the consumer started failing on one specific topic while successfully continuing to work for the
other 3 topics.</p>
<pre><code>The error is: org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms expired before the position for partition topic_name-0 could be determined
</code></pre>
<p>Here is my code:</p>
<pre><code>(
pcoll
| f"Read From Kafka {self.transf_label}"
>> ReadFromKafka(
consumer_config=self.kafka_consumer_config,
topics=_topic_list,
with_metadata=True,
)
| f"Extract KVT {self.transf_label}" >> Map(extract_key_value_topic)
| f"Decode messages {self.transf_label}" >> ParDo(DecodeKafkaMessage())
)
</code></pre>
<p>In the logs I can see</p>
<pre><code>[Consumer clientId=consumer-group-dummy-4, groupId=consumer-group-dummy] Subscribed to partition(s): topic_name-0
</code></pre>
<p>But after few secs it fails with the timeout error, while continuously pulling messages of other topics</p>
<p>Here is the config:</p>
<pre><code>{
"bootstrap.servers": BROKERS,
"security.protocol": SECURITY_PROTOCOL,
"sasl.mechanism": SASL_MECHANISM,
"group.id": "consumer-group-dummy",
"session.timeout.ms": "60000",
"sasl.jaas.config": f'org.apache.kafka.common.security.scram.ScramLoginModule required username="{SASL_USERNAME}" password="{SASL_PASSWORD}";',
"auto.offset.reset": "latest",
}
</code></pre>
<p>One thing to consider: our other Kafka consumer apps are not throwing any issues; this behavior is noticed only with the Apache Beam pipeline. Also, when I tried triggering a consumer locally with the same config and the same consumer group, using the Confluent Kafka library and a simple Python script, it worked fine and pulled the messages.</p>
<p>I found this <a href="https://github.com/apache/flink-connector-kafka/pull/91" rel="nofollow noreferrer">bug</a> in Apache flink, can it be the same in the beam Kafka connector in Java?</p>
|
<python><java><apache-kafka><google-cloud-dataflow><apache-beam>
|
2024-04-14 08:56:13
| 1
| 974
|
Idhem
|
78,323,213
| 13,443,114
|
Environment variable set as password is appearing as empty string in Python Environ library
|
<p>I just noticed that when I set an environment variable named <code>PASSWORD</code> for my script in Python (I am using the <a href="https://pypi.org/project/python-environ/" rel="nofollow noreferrer">python-environ</a> library to read and access environment variables), its value appears as an empty string. Setting the environment variable to anything other than the word PASSWORD works. I do not understand why this is the case, and I cannot seem to find a documented explanation or justification for it.</p>
<p><strong>.env</strong></p>
<pre><code>USERNAME=testuser
PASSWORD=sdfnsidufbsdufbsiudf
PASSWORDU=sdfnsidufbsdufbsiudf
</code></pre>
<p><strong>myscript.py</strong></p>
<pre><code>import environ
env = environ.Env()
# reading .env file
environ.Env.read_env('.env')
USERNAME = env.str("USERNAME")
PASSWORD = env.str("PASSWORD")
PASSWORDU = env.str("PASSWORDU")
print(f"Username {USERNAME}") # testuser
print(f"Password {PASSWORD}") #
print(f"Passwordu {PASSWORDU}") # sdfnsidufbsdufbsiudf
</code></pre>
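<p>A guess worth checking (hedged; I cannot reproduce your environment): <code>read_env</code> in django-environ-style libraries typically does <em>not</em> overwrite variables that already exist in the process environment, so a pre-existing (possibly empty) <code>PASSWORD</code> exported by your shell, IDE, or CI would shadow the <code>.env</code> value. A quick stdlib-only way to test that hypothesis:</p>

```python
import os

def diagnose(name):
    # A .env loader that does not overwrite existing values will keep whatever
    # is already in the process environment for this name.
    if name in os.environ:
        return f"{name} pre-set in process env to {os.environ[name]!r}"
    return f"{name} not pre-set; .env value should be used"

# Simulate a shell that exports an empty PASSWORD before Python starts
os.environ["PASSWORD"] = ""
msg = diagnose("PASSWORD")
```

<p>If <code>PASSWORD</code> turns out to be pre-set to <code>''</code>, unsetting it in the shell (or passing <code>overwrite=True</code> to <code>read_env</code>, if your version supports it) should make the <code>.env</code> value visible.</p>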
|
<python><python-3.x><environment-variables>
|
2024-04-14 08:43:38
| 2
| 428
|
Brian Obot
|
78,323,033
| 12,454,639
|
I'm not sure I understand why Django won't let me do asynchronous database transactions
|
<p>I am trying to create a function that will <code>get_or_create</code> a user record for my Discord bot in my Django User model as soon as someone runs the <code>!home</code> command, but I am encountering an issue that (as far as I understand) is a normal part of Django's design, wherein the ORM disallows database access from an asynchronous context.</p>
<p>I've tried using the built-in sync_to_async method, but it raises similar issues. Is there something I can change in my settings file, or something I haven't been able to find online, that would achieve this functionality here?</p>
<p>Here is my home command that is invoked by the command:</p>
<pre><code>@bot.command(name="home", description="Displays the character sheet of your character in Oblivion After you have already created your guy")
async def home_async(ctx):
user_instance, created = await sync_to_async(User.objects.get_or_create(
discord_id=discord_user_id,
defaults={'username': ctx.author.name}
))
if created:
character = Character.objects.create(name=f"New Character for {ctx.author.name}")
user_instance.character = character
user_instance.save()
view = HomePageView(user=user_instance)
await ctx.send(f"Welcome {ctx.author.display_name}! Manage your character and inventory here.", view=view)
</code></pre>
<p>Currently this is the issue that I am encountering this error:</p>
<pre><code>Traceback (most recent call last):
File "/mnt/m/ZocPy/OblivionAlchemy/OblivionAlchemy/venv/oblivion/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1350, in invoke
await ctx.command.invoke(ctx)
File "/mnt/m/ZocPy/OblivionAlchemy/OblivionAlchemy/venv/oblivion/lib/python3.10/site-packages/discord/ext/commands/core.py", line 1029, in invoke
await injected(*ctx.args, **ctx.kwargs) # type: ignore
File "/mnt/m/ZocPy/OblivionAlchemy/OblivionAlchemy/venv/oblivion/lib/python3.10/site-packages/discord/ext/commands/core.py", line 244, in wrapped
raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async
</code></pre>
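<p>For what it's worth, the parenthesis placement in the snippet above wraps the <em>result</em> of <code>get_or_create(...)</code>, so the query still runs synchronously inside the async context; <code>sync_to_async</code> must wrap the callable itself and then be called with the arguments. A stdlib-only sketch of the pattern difference, using <code>asyncio.to_thread</code> (which behaves like <code>sync_to_async</code> for this purpose) and a hypothetical <code>fake_get_or_create</code> standing in for the ORM call:</p>

```python
import asyncio

def fake_get_or_create(**kwargs):
    # Stand-in for User.objects.get_or_create: a blocking, sync-only call
    return {"discord_id": kwargs["discord_id"]}, True

async def home():
    # WRONG: sync_to_async(User.objects.get_or_create(...)) evaluates the query
    # first, which raises SynchronousOnlyOperation under Django.
    # RIGHT: wrap the callable, then call the wrapper with the arguments:
    user, created = await asyncio.to_thread(fake_get_or_create, discord_id=123)
    return user, created

user, created = asyncio.run(home())
```

<p>With asgiref, the equivalent is <code>await sync_to_async(User.objects.get_or_create)(discord_id=..., defaults=...)</code>.</p>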
|
<python><django><python-asyncio>
|
2024-04-14 07:17:46
| 1
| 314
|
Syllogism
|
78,322,897
| 3,224,483
|
Why does this code use more and more memory over time?
|
<p>Python: 3.11
Saxonche: 12.4.2</p>
<p>My website keeps consuming more and more memory until the server runs out of memory and crashes. I isolated the problematic code to the following script:</p>
<pre class="lang-python prettyprint-override"><code>import gc
from time import sleep
from saxonche import PySaxonProcessor
xml_str = """
<root>
<stuff>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ac auctor ex. Nunc in tincidunt urna. Sed tincidunt eros lacus, sed pulvinar sem venenatis et. Donec euismod orci quis pellentesque sagittis. Donec at tortor in dui mattis facilisis. Pellentesque vel varius lectus. Nunc sed gravida risus, ac finibus elit. Etiam sollicitudin nunc a velit efficitur molestie in ac lectus. Donec vulputate orci odio, sit amet hendrerit odio rhoncus commodo.</stuff>
<stuff>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ac auctor ex. Nunc in tincidunt urna. Sed tincidunt eros lacus, sed pulvinar sem venenatis et. Donec euismod orci quis pellentesque sagittis. Donec at tortor in dui mattis facilisis. Pellentesque vel varius lectus. Nunc sed gravida risus, ac finibus elit. Etiam sollicitudin nunc a velit efficitur molestie in ac lectus. Donec vulputate orci odio, sit amet hendrerit odio rhoncus commodo.</stuff>
<stuff>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ac auctor ex. Nunc in tincidunt urna. Sed tincidunt eros lacus, sed pulvinar sem venenatis et. Donec euismod orci quis pellentesque sagittis. Donec at tortor in dui mattis facilisis. Pellentesque vel varius lectus. Nunc sed gravida risus, ac finibus elit. Etiam sollicitudin nunc a velit efficitur molestie in ac lectus. Donec vulputate orci odio, sit amet hendrerit odio rhoncus commodo.</stuff>
</root>
"""
while True:
print('Running once...')
with PySaxonProcessor(license=False) as proc:
proc.parse_xml(xml_text=xml_str)
gc.collect()
sleep(1)
</code></pre>
<p>This script consumes memory at a rate of about 0.5 MB per second. The memory usage does not plateau after a while. I have logs showing that memory usage continues to grow for hours until the server runs out of memory and crashes.</p>
<p>Other things I tried that aren't shown above:</p>
<ul>
<li>Using a PyDocumentBuilder to parse the XML instead of a PySaxonProcessor. It didn't appear to change anything.</li>
<li>Deleting the Saxon processor and the return value of <code>parse_xml()</code> using the <code>del</code> Python keyword. No change.</li>
</ul>
<p>I have to use Saxon instead of lxml because I need XPath 3.0 support.</p>
<p>What am I doing wrong? How do I parse XML using Saxon in a way that doesn't leak?</p>
<hr />
<p>A few folks have suggested that instantiating the PySaxonProcessor once before the loop will fix the leak. It doesn't. This still leaks:</p>
<pre class="lang-python prettyprint-override"><code>with PySaxonProcessor(license=False) as proc:
while True:
print('Running once...')
proc.parse_xml(xml_text=xml_str)
gc.collect()
sleep(1)
</code></pre>
|
<python><saxon><saxon-c>
|
2024-04-14 06:14:07
| 3
| 3,659
|
Rainbolt
|
78,322,641
| 13,142,245
|
SqlModel - specify index type (B-Tree, Hash, etc.)
|
<p>I'm looking at documentation for SQLModel. The python library is written by the author of FastAPI, so I'm eager to leverage the seamless db integration via SqlModel.</p>
<p>In the <a href="https://sqlmodel.tiangolo.com/tutorial/indexes/" rel="nofollow noreferrer">docs</a>, he illustrates what an index is and how it works by graphically depicting binary search. So I think that it's reasonable to conclude that SQLModel can index on a given variable via B-Tree by default.</p>
<p>B-Trees (and binary search) make sense for range and inequality based queries. But equality or identity queries (<code>name=='John'</code>) can be greatly sped up through hash-based indices. But if you could only support one index type, B-Tree obviously makes more sense than hash.</p>
<p>Here's an example</p>
<pre><code>from sqlmodel import Field, Session, SQLModel, create_engine, select
class Hero(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str = Field(index=True)
secret_name: str
age: int | None = Field(default=None, index=True)
</code></pre>
<p>But suppose that I want to use a Hash index, can this be done through some SQLModel configuration?</p>
<p>Under the hood, SQLModel is built on top of SQLAlchemy. Perhaps as a backup, one should drop down a level and define the index type through SQLAlchemy. Ideally, building Db-provider specific solutions (ex: Postgres) can be avoided.</p>
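<p>Since SQLModel fields compile down to SQLAlchemy columns, one approach (a sketch of the underlying SQLAlchemy form; with SQLModel you would attach the same <code>Index</code> via <code>__table_args__</code>) is to declare the index explicitly with the Postgres-specific <code>postgresql_using</code> flag:</p>

```python
from sqlalchemy import Column, Index, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateIndex

metadata = MetaData()
hero = Table(
    "hero",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# Hash index: fast for equality lookups (name == 'John'), no range support.
# The postgresql_-prefixed kwarg is ignored by other dialects.
name_hash_idx = Index("ix_hero_name_hash", hero.c.name, postgresql_using="hash")

ddl = str(CreateIndex(name_hash_idx).compile(dialect=postgresql.dialect()))
```

<p>Compiling the DDL, as above, is a cheap way to confirm the emitted statement says <code>USING hash</code> before pointing it at a real database.</p>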
|
<python><sqlalchemy><orm><sqlmodel>
|
2024-04-14 03:38:53
| 1
| 1,238
|
jbuddy_13
|
78,322,538
| 4,117,496
|
ModuleNotFoundError: No module named 'torch' on AWS Batch GPU instance
|
<p>I have a job that runs on AWS Batch on a GPU instance, my application uses torch, i.e.</p>
<pre><code>import torch
</code></pre>
<p>The Compute Environment has only one GPU instance, I was able to confirm that <code>torch</code> is available there by connecting to the instance via AWS Console and ran:</p>
<pre><code>sh-4.2$ python3
Python 3.7.16 (default, Aug 30 2023, 20:37:53)
[GCC 7.3.1 20180712 (Red Hat 7.3.1-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.13.1+cu117
>>>
</code></pre>
<p>However when I submit my Batch job, it fails saying</p>
<pre><code> ModuleNotFoundError: No module named 'torch'
</code></pre>
<p>I looked into this and found that:
My AWS Batch job definition has this Command:</p>
<pre><code> CoolGPUJobDefinition:
DependsOn: ComputeRole
Type: AWS::Batch::JobDefinition
Properties:
Type: container
ContainerProperties:
Command:
- "/opt/prod/bin/python3"
- "/opt/prod/bin/start.py"
</code></pre>
<p>From the stacktrace of my application, it shows:</p>
<pre><code>File "/opt/prod/lib/python3.8/site-packages/cool_service/slowfast/utils/distributed.py", line 9, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
</code></pre>
<p>But when I tried to <code>ls -a</code> in the <code>/opt</code> directory of this GPU instance, it didn't even have <code>prod</code>:</p>
<pre><code>sh-4.2$ pwd
/opt
sh-4.2$ ls -a
. .. aws containerd nvidia
</code></pre>
<p>Somehow there's a disconnect from what I can see in the GPU instance and how AWS Batch runs my application on this GPU instance.</p>
<p>I'd like to understand:</p>
<ol>
<li>Why this disconnect/discrepancy?</li>
<li>How could I resolve this module not found error in this case?</li>
</ol>
<p>Thanks!</p>
|
<python><amazon-web-services><tensorflow><pytorch><aws-batch>
|
2024-04-14 02:31:57
| 1
| 3,648
|
Fisher Coder
|
78,322,530
| 25,891
|
How to use OpenCV estimateAffinePartial2D
|
<p>I need to align two (actually hundreds of) images, and I'm at a loss on how to do that in Python with OpenCV (which I had never heard of before). It looks like I should first estimate the transformation to apply as follows, and then apply it to one of the images (rinse and repeat hundreds of times). However, even the simplest</p>
<pre><code>import cv2
img1 = cv2.imread("img1.jpg", cv2.IMREAD_COLOR)
img2 = cv2.imread("img2.jpg", cv2.IMREAD_COLOR)
cv2.estimateAffinePartial2D(img1, img2)
</code></pre>
<p>fails with</p>
<pre><code>cv2.error: OpenCV(4.9.0) /io/opencv/modules/calib3d/src/ptsetreg.cpp:1108: error: (-215:Assertion failed) count >= 0 && to.checkVector(2) == count in function 'estimateAffinePartial2D'
</code></pre>
<p>Stack overflow and OpenCV forum has a few questions about this problem but no solution, other than the one mentioned at <a href="https://stackoverflow.com/questions/35428739/opencv-estimateaffine3d-failes-with-cryptic-error-message">OpenCV estimateAffine3D failes with cryptic error message</a> (which is even more cryptic than the error message itself).</p>
<p>How to do that estimation <strong>in Python</strong>?</p>
<p>EDIT:</p>
<pre><code>$ python
Python 3.8.18 (default, Aug 25 2023, 13:20:30)
[GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'4.9.0'
</code></pre>
|
<python><opencv>
|
2024-04-14 02:29:49
| 1
| 17,904
|
Davide
|
78,322,515
| 5,264,310
|
Is there a way to make a long for loop to run faster?
|
<p>I'm making a simple Monte Carlo simulator that takes a 3×4 matrix of probabilities and the number of iterations you want to simulate. The output is a table with all the results; each row of the table is a list that contains: [iter no, random no, result1, result2].
The logic is simple: just generate a random number and compare it with the cumulative probability. When I try 1,000,000 iterations it takes 16 seconds or more, and I'm trying to find out whether a better time is possible.
So far I've tried:</p>
<ul>
<li>Using memoization, which I guess doesn't make sense since the return data depends on a random number.</li>
<li>Using numpy arrays and the method cumsum() to get cumulative probabilities.</li>
<li>The best data structure I could come up with is a list of tuples of the possible outcomes and a 2-dimensional list to store the probabilities.</li>
</ul>
<pre><code>from random import random
import numpy as np
matrix = [
[0.2, 0.2, 0.05, 0.1],
[0.7, 0.5, 0.6, 0.4],
[0.2, 0.25, 0.2, 0.04],
]
estados = [
("buena", "buena"),
("buena", "regular"),
("buena", "critica"),
("buena", "alta"),
("regular", "buena"),
("regular", "regular"),
("regular", "critica"),
("regular", "alta"),
("critica", "buena"),
("critica", "regular"),
("critica", "critica"),
("critica", "alta"),
]
estado_siguiente = {"buena": 0, "regular": 1, "critica": 2, "alta": 3}
def get_result(rnd, probabilities, sig_estado=None):
probabilities= (
probabilities[0]
if (sig_estado == "" or sig_estado == "alta")
else probabilities[estado_siguiente[sig_estado] + 1]
)
for i in range(len(probabilities[0])):
if rnd < probabilities[0][i]:
return probabilities[1][i]
    return probabilities[1][len(probabilities[1]) - 1]  # fall through to the last outcome
def start_simulation(probabilities, iter):
vector_estado = [0, 0, "", "", 0, ""]
    tabla_final = list()
#Here i have 4 accumulators for the probabilities of the 3 rows plus one for the whole table because a new result1 depends on the previous result2 meaning sometimes i just need the probs of just a row
to_np_arr = np.array(probabilities)
all_acc = (np.cumsum(to_np_arr), estados[:])
buena_acc = (all_acc[0][0:4], estados[0:4])
regular_acc = (all_acc[0][4:8], estados[4:8])
critico_acc = (all_acc[0][8:], estados[8:])
    for _ in range(iter):
        rnd1 = random()
        estado1, estado2 = get_result(
            rnd1,
            probabilities=[all_acc, buena_acc, regular_acc, critico_acc],
            sig_estado=vector_estado[3],
        )
        # rnd2 and condicion_alta are computed elsewhere in the full program;
        # placeholders here so the snippet is self-consistent
        rnd2, condicion_alta = None, ""
        # Add the list to the rest of the rows
        tabla_final.append(vector_estado)
        # Here I replace the old list with the new one
        vector_estado = [
            vector_estado[0] + 1,
            "%.3f" % (rnd1),
            estado1,
            estado2,
            "%.3f" % (rnd2) if rnd2 else "-",
            condicion_alta,
        ]
return tabla_final
</code></pre>
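<p>The biggest win for a loop like this is usually vectorization: draw all the random numbers at once and map them to outcomes with <code>np.searchsorted</code> on the cumulative probabilities, instead of doing a Python-level comparison per iteration. A simplified sketch of the technique (it drops the state-dependent row selection to keep the idea visible):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

probs = np.array([0.2, 0.5, 0.2, 0.1])  # one row of outcome probabilities
cum = np.cumsum(probs)                   # [0.2, 0.7, 0.9, 1.0]

n_iter = 1_000_000
rnd = rng.random(n_iter)                 # all random draws at once
# For each draw, the index of the first cumulative value exceeding it
# is the sampled outcome; this replaces the per-iteration inner loop.
outcomes = np.searchsorted(cum, rnd, side="right")
```

<p><code>np.bincount(outcomes, minlength=len(probs))</code> then gives the frequency table directly; only the state-dependent bookkeeping would remain in Python.</p>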
|
<python><list><numpy><loops>
|
2024-04-14 02:23:11
| 1
| 1,147
|
Xcecution
|
78,322,507
| 10,483,893
|
sqlalchemy session "Cannot use autocommit mode with future=True."
|
<p>Since which version does a SQLAlchemy session raise <em>"Cannot use autocommit mode with future=True."</em>?
<a href="https://rattailproject.org/docs/wuttjamaican/_modules/sqlalchemy/orm/session.html" rel="nofollow noreferrer">https://rattailproject.org/docs/wuttjamaican/_modules/sqlalchemy/orm/session.html</a></p>
<pre><code> if autocommit:
if future: <-- I don't really get this...
raise sa_exc.ArgumentError(
"Cannot use autocommit mode with future=True."
)
self.autocommit = True
else:
self.autocommit = False
</code></pre>
<p>This is breaking changes, I found out only after I had to reinstall Anaconda.</p>
<pre><code>Python 3.9.19 (main, Mar 21 2024, 17:21:27) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlalchemy
>>> print(sqlalchemy.__version__)
1.4.32
>>>
</code></pre>
<p>Anyone else run into this?
In the past, <em>session.flush()</em> with auto-commit = True would propagate changes to the database.
Now:</p>
<pre><code>session.flush() <-- Should I remove Flush altogether?
session.commit()
</code></pre>
<p>Thanks</p>
|
<python><sqlalchemy>
|
2024-04-14 02:20:47
| 1
| 1,404
|
user3761555
|
78,322,464
| 10,061,193
|
Including `.jinja` template files inside the Python distribution using `pyproject.toml`
|
<p>I'm working on a package. It has a console command that enables users to generate a template from the <code>.jinja</code> files within the distribution.</p>
<p>My issue is that I can't include them (template files) in my distribution. Here is a tree-look of my package and where my templates are located. (<code>template/</code>)</p>
<pre><code>mypack/
├── mypack/
│ ├── package1/
│ ├── package2/
│ └── template
│ └── {{tmpl_slug}}
│ └── file1.py.jinja
│ └── file2.json.jinja
│ └── config.yml
│ └── ...
└── pyproject.toml
</code></pre>
<p>I want my distribution to have the exact <code>template/</code> directory with all its sub-files and sub-directories.</p>
<p><strong>I can do what I need using a <code>MANIFEST.in</code> file that has the following single line in it.</strong></p>
<pre><code>graft mypack/template
</code></pre>
<p>How can I do it without the MANIFEST file, just via <code>pyproject.toml</code>?!</p>
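<p>Assuming a setuptools build backend (the question doesn't say which backend is in use), the MANIFEST-free equivalent is usually the <code>[tool.setuptools.package-data]</code> table in <code>pyproject.toml</code>; a hedged sketch, noting that recursive <code>**</code> globs need a reasonably recent setuptools:</p>

```toml
[tool.setuptools]
include-package-data = true

[tool.setuptools.package-data]
# recursively include everything under mypack/template/, including the
# {{tmpl_slug}} directory, .jinja files and config.yml
mypack = ["template/**/*"]
```

<p>The key is the package name (<code>mypack</code>), and the patterns are resolved relative to that package's directory; it is worth verifying the exact glob behavior against the setuptools documentation for your pinned version.</p>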
|
<python><templates><jinja2><packaging><python-packaging>
|
2024-04-14 01:40:26
| 0
| 394
|
Sadra
|
78,322,364
| 1,802,225
|
Ethereum: how to call Smart Contract functions directly with JSON-RPC method?
|
<p>I need to make HTTP JSON-RPC calls to an Ethereum node and get results without web3.py or web3.js. How can I do that? I have read there is an <code>eth_call</code> method, but I didn't find any examples of how to use it with an ABI. Below is an example with <code>eth_getTransactionByHash</code>.</p>
<pre class="lang-py prettyprint-override"><code>import json
import requests
from web3 import Web3
w3 = Web3(Web3.HTTPProvider('https://eth-pokt.nodies.app'))
if not w3.is_connected():
exit("w3 not connected")
# basic ERC-20: name(), symbol(), decimals() balanceOf()
abi = json.loads('[{"constant":true,"inputs":[],"name":"name","outputs":[{"name":"","type":"string"}],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"decimals","outputs":[{"name":"","type":"uint8"}],"payable":false,"type":"function"},{"constant":true,"inputs":[{"name":"_owner","type":"address"}],"name":"balanceOf","outputs":[{"name":"balance","type":"uint256"}],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"symbol","outputs":[{"name":"","type":"string"}],"payable":false,"type":"function"}]')
contract = w3.eth.contract(address='0x40E64405F18e4FB01c6fc39f4F0c78df5eF9D0E0', abi=abi)
balance = contract.functions.balanceOf('0x7Ba83AC9243Ff520A28FC218758608F900E2d958').call(block_identifier=19649654)
print(balance)
# 976055241381593241650
# example of http request to get transaction
tx = requests.post('https://eth-pokt.nodies.app', json={"method":"eth_getTransactionByHash","params":["0xd8feb6cd2e76f738e2b42da99f9746ade1dc8e4034ae79acec01fda7b293e6c1"],"id":1,"jsonrpc":"2.0"}, headers={"Content-Type": "application/json"}).json()
</code></pre>
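<p>A stdlib-only sketch of the idea: <code>eth_call</code> takes the contract address plus hand-encoded calldata, i.e. the 4-byte function selector followed by each argument left-padded to 32 bytes. The selector <code>0x70a08231</code> for <code>balanceOf(address)</code> is the standard ERC-20 value, but treat it as an assumption and verify it for your contract:</p>

```python
import json
from urllib import request

BALANCE_OF_SELECTOR = "0x70a08231"  # first 4 bytes of keccak256("balanceOf(address)")

def encode_balance_of(holder: str) -> str:
    # calldata = selector + holder address left-padded to 32 bytes (64 hex chars)
    return BALANCE_OF_SELECTOR + holder.lower().replace("0x", "").rjust(64, "0")

def eth_call_balance_of(rpc_url: str, token: str, holder: str) -> int:
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [{"to": token, "data": encode_balance_of(holder)}, "latest"],
    }
    req = request.Request(rpc_url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.load(resp)["result"]  # a 32-byte hex word
    return int(result, 16)
```

<p>Calling <code>eth_call_balance_of</code> with the RPC URL, token address, and holder address from the question should reproduce the web3.py balance; to query a historical block like <code>19649654</code> instead of <code>"latest"</code>, pass its hex form, which generally requires an archive node.</p>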
|
<python><ethereum><smartcontracts><web3py>
|
2024-04-14 00:30:41
| 0
| 1,770
|
sirjay
|
78,322,358
| 219,153
|
Does eval() have performance advantage in Mojo vs. Python?
|
<p>I would like to use <code>eval()</code> for validation of short function synthesis. The search space may be fairly large, e.g. more than a million different functions. Functions will be synthesized as a string containing their source code and executed with <code>eval()</code>.</p>
<p>Will Mojo have a performance advantage vs. Python for this task, or does it just call Python's <code>eval()</code> through the compatibility layer? Is there a different method than strings and <code>eval()</code> that would work for my scenario?</p>
<hr />
<p>Just to add some context, so concerns about security of calling <code>eval</code> are put to rest. What I'm doing is called Program Synthesis, see: <a href="https://en.wikipedia.org/wiki/Program_synthesis" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Program_synthesis</a>. I'm not calling <code>eval</code> on an unknown source code.</p>
|
<python><eval><mojolang>
|
2024-04-14 00:29:05
| 1
| 8,585
|
Paul Jurczak
|
78,322,249
| 17,517,315
|
Correct inheritance of methods from dataclass in Python
|
<p>I'm quite new to dataclasses and OOP in Python and I have been trying to figure out how inheritance works. Before anything, I am really sorry if my question has been asked in one way or another; I really tried to look for similar questions but there is nothing out there that answered me. So apologies beforehand if there is anything unclear.</p>
<p>Basically I want to achieve the following:</p>
<pre><code>
metadata.py
@dataclass(frozen=True)
class IngestionMetadata:
file: str
sheet_name: str
col_converters: dict
col_names: list = field(init=False)
def __post_init__(self):
object.__setattr__(self, 'col_names', list(self.col_converters.keys()))
@classmethod
def get_columns(cls):
return list(cls.col_converters.keys())
@dataclass(frozen=True)
class ProductGroupLookup(IngestionMetadata):
file: str = 'product_group_table.xlsx'
sheet_name: str = 'INGESTION'
col_converters: dict = field(default_factory= lambda: {
'Product 1': pl.String,
'Product 2': pl.String,
'Product 3': pl.String,
'Product Category Code 4': pl.String
})
</code></pre>
<p>My goal is to be able to import the module from another script like:</p>
<pre><code>from metadata import IngestionMetadata as im
product_cols = im.ProductGroupLookup.get_columns() # Should output ['Product 1', 'Product 2'...]
</code></pre>
<p>I tried already separating the classes into subclasses or just putting them all inside the base class but without success. I would appreciate some guidance.</p>
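<p>For reference, a minimal self-contained sketch of the pattern (the polars dtypes are replaced by plain strings here so the snippet runs standalone): <code>default_factory</code> values only exist on <em>instances</em>, so a classmethod like <code>get_columns()</code> has to build an instance instead of reading <code>cls.col_converters</code> directly:</p>

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class IngestionMetadata:
    file: str = ""
    sheet_name: str = ""
    col_converters: dict = field(default_factory=dict)

    @classmethod
    def get_columns(cls):
        # cls.col_converters does not exist at class level when a
        # default_factory is used, so instantiate first.
        return list(cls().col_converters.keys())

@dataclass(frozen=True)
class ProductGroupLookup(IngestionMetadata):
    file: str = "product_group_table.xlsx"
    sheet_name: str = "INGESTION"
    col_converters: dict = field(default_factory=lambda: {
        "Product 1": "String",
        "Product 2": "String",
    })

print(ProductGroupLookup.get_columns())  # ['Product 1', 'Product 2']
```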
|
<python><oop><python-dataclasses>
|
2024-04-13 23:04:00
| 0
| 391
|
matt.aurelio
|
78,322,169
| 1,498,178
|
How to merge Django model query results?
|
<p>I have to modify a legacy Django web application where there are 2 tables to store the same information using the same database structure, and the only difference is the names of the tables (and the corresponding Django models):</p>
<ul>
<li><code>ProgramAAssignments</code> model for <code>program_a_assignments</code> table</li>
<li><code>ProgramBAssignments</code> model for <code>program_b_assignments</code> table</li>
</ul>
<p>They have 2 distinct views that query these models like so:</p>
<pre><code>ProgramAAssignments.objects.all().order_by('-assignment_date')
ProgramBAssignments.objects.all().order_by('-assignment_date')
</code></pre>
<p>I decided to leave the database untouched and just combine the views, but don't know how to merge these queries.</p>
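<p>One common pattern is to evaluate both querysets and merge them in Python with <code>itertools.chain</code>, re-sorting on the shared field. A hedged sketch — simple namedtuples stand in for the two model querysets here so the snippet is self-contained:</p>

```python
from itertools import chain
from operator import attrgetter
from collections import namedtuple
from datetime import date

# Stand-ins for ProgramAAssignments / ProgramBAssignments rows.
Assignment = namedtuple("Assignment", ["program", "assignment_date"])
qs_a = [Assignment("A", date(2024, 1, 5)), Assignment("A", date(2024, 3, 1))]
qs_b = [Assignment("B", date(2024, 2, 10))]

# With the real models this would be:
# chain(ProgramAAssignments.objects.all(), ProgramBAssignments.objects.all())
combined = sorted(chain(qs_a, qs_b),
                  key=attrgetter("assignment_date"), reverse=True)
print([a.program for a in combined])  # ['A', 'B', 'A']
```

<p>The merge happens in Python rather than SQL, which is fine for view-sized result sets; a database-level alternative would be a UNION, but chaining keeps the legacy tables untouched.</p>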
|
<python><django-models>
|
2024-04-13 22:10:12
| 1
| 8,705
|
toraritte
|
78,322,167
| 2,030,532
|
Why is tab completion in the PyCharm Python console much slower than in a terminal?
|
<p>I am running my Python script in the Python console of PyCharm, and tab completion of a custom object's attributes is pretty slow (unusable). However, when I run the same code in a terminal using IPython, tab completion is fast. I tried increasing PyCharm's memory limits, to no avail. Is this a limitation of PyCharm being inefficient at looking up attributes, or is there some setting that needs to be corrected?</p>
|
<python><pycharm>
|
2024-04-13 22:09:29
| 0
| 3,874
|
motam79
|
78,321,952
| 8,713,442
|
Python Code failing : dedupe library error
|
<p>I am trying to learn about the dedupe library. I am trying to match names that are more than an 80% match.</p>
<p>Sharing the code and the error below. Please help.</p>
<pre><code>import dedupe
from Levenshtein import distance
def test():
# Sample data (replace with your actual library data)
data = [
{'name': 'Alice Smith', 'address': '123 Main St', 'phone': '555-1212'},
{'name': 'Alice SmIth', 'address': '123 Main Street', 'phone': '555-1213'},
{'name': 'Bob Johnson', 'address': '456 Elm St', 'phone': '555-3434'},
{'name': 'Charlie Brown', 'address': '789 Maple Ave', 'phone': '555-5656'},
]
# Define fields for comparison (adjust based on your data)
# Define data fields and comparison functions
fields = [
{'field': 'name', 'comparators': ['name_similarity']},
]
# Define similarity functions - customize based on your matching criteria
def name_similarity(s1, s2):
# Implement your name comparison logic here (e.g., Levenshtein distance, etc.)
distance1 = distance(s1, s2)
similarity = 1 - (distance1 / max(len(s1), len(s2))) # Normalize distance to 0-1 similarity
return similarity
# Set thresholds for field-wise and overall similarity (adjust as needed)
deduper = dedupe.Dedupe(fields)
deduper.threshold( threshold=0.8)
# Process the data for deduplication
deduped_data = deduper.dedupe(data)
# Print the deduplicated results
print("Deduplicated Data:")
for cluster in deduped_data:
print(cluster)
if __name__ == '__main__':
test()
</code></pre>
<p>The error:</p>
<pre><code>C:\PythonProject\pythonProject\venv\Graph_POC\Scripts\python.exe C:\PythonProject\pythonProject\matching.py Traceback (most recent call last): File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\dedupe\datamodel.py", line 152, in typify_variables
variable_type = definition["type"]
~~~~~~~~~~^^^^^^^^ KeyError: 'type'
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File "C:\PythonProject\pythonProject\matching.py", line 45, in <module>
test() File "C:\PythonProject\pythonProject\matching.py", line 32, in test
deduper = dedupe.Dedupe(fields)
^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\dedupe\api.py", line 1155, in __init__
self.data_model = datamodel.DataModel(variable_definition)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\dedupe\datamodel.py", line 42, in __init__
self.primary_variables, all_variables = typify_variables(variable_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\dedupe\datamodel.py", line 161, in typify_variables
raise KeyError( KeyError: "Missing variable type: variable specifications are dictionaries that must include a type definition, ex. {'field' : 'Phone', type: 'String'}"
Process finished with exit code 1
</code></pre>
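<p>Per the traceback, each variable definition must carry a <code>'type'</code> key — the error message itself shows the expected shape (<code>{'field': 'Phone', 'type': 'String'}</code>). A sketch of a definition dedupe's data model would accept; the <code>Dedupe</code> call is left commented out so the snippet stands alone:</p>

```python
# Variable definitions for dedupe must include a 'type' key, e.g. 'String';
# a 'comparators' list is not a recognized key.
fields = [
    {"field": "name", "type": "String"},
]
# deduper = dedupe.Dedupe(fields)   # should now pass datamodel validation
print(fields[0]["type"])
```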
|
<python><python-3.x><python-dedupe>
|
2024-04-13 20:36:13
| 1
| 464
|
pbh
|
78,321,701
| 8,713,442
|
pip install dedupe not working for python 3.11
|
<p>While doing <code>pip install dedupe</code> I get the following error.</p>
<p>Python 3.11 is installed. I am trying to collect cluster ids using dedupe, running it from PyCharm. Please help.</p>
<blockquote>
<pre><code> Building wheels for collected packages: affinegap, doublemetaphone
Building wheel for affinegap (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for affinegap (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\affinegap
copying affinegap\__init__.py -> build\lib.win-amd64-cpython-311\affinegap
running build_ext
building 'affinegap.affinegap' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools":
</code></pre>
<p><a href="https://visualstudio.microsoft.com/visual-cpp-build-tools/" rel="nofollow noreferrer">https://visualstudio.microsoft.com/visual-cpp-build-tools/</a>
[end of output]</p>
<pre><code> note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for affinegap
Building wheel for doublemetaphone (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for doublemetaphone (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\doublemetaphone
copying doublemetaphone\__init__.py -> build\lib.win-amd64-cpython-311\doublemetaphone
running build_ext
building 'doublemetaphone.doublemetaphone' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools":
</code></pre>
<p><a href="https://visualstudio.microsoft.com/visual-cpp-build-tools/" rel="nofollow noreferrer">https://visualstudio.microsoft.com/visual-cpp-build-tools/</a>
[end of output]</p>
<pre><code> note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for doublemetaphone
Failed to build affinegap doublemetaphone
ERROR: Could not build wheels for affinegap, doublemetaphone, which is required to install pyproject.toml-based projects
[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
</code></pre>
</blockquote>
|
<python><python-dedupe>
|
2024-04-13 18:56:47
| 1
| 464
|
pbh
|
78,321,696
| 15,474,507
|
GetMessagesReactionsRequest - distinguish reactions between 2 users
|
<p>Is there a way to distinguish the reaction of one user from another? <br>I'm testing with two of my personal Telegram accounts, but the terminal always gives me the same username, when in reality it is two different users reacting, not the same one.</p>
<p><a href="https://i.sstatic.net/5EqYt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5EqYt.png" alt="enter image description here" /></a></p>
<p>I am testing with this:</p>
<pre><code>from telethon.sync import TelegramClient
from telethon.tl.functions.messages import GetMessagesReactionsRequest
from telethon.tl.types import DocumentAttributeFilename
api_id = '2++++'
api_hash = 'f++++'
channel_id = -100+++
client = TelegramClient('session_name', api_id, api_hash)
async def main():
async with client:
channel_entity = await client.get_input_entity(channel_id)
messages = await client.get_messages(channel_entity, limit=100)
me = await client.get_me()
for message in messages:
reactions = await client(GetMessagesReactionsRequest(channel_entity, [message.id]))
if reactions.updates: # check if the reactions list is not empty
for update in reactions.updates:
if hasattr(update, 'reactions'):
for reaction_count in update.reactions.results:
content = message.message if message.message else "No caption"
if hasattr(message, 'media') and message.media:
if hasattr(message.media, 'document') and message.media.document:
for attr in message.media.document.attributes:
if isinstance(attr, DocumentAttributeFilename):
content = attr.file_name
username = me.username if me.username else "ignoto"
print(f"Post ID: {message.id} - Content: {content[:50]} - Reaction: {reaction_count.reaction.emoticon}, Count: {reaction_count.count}, User: @{username}")
client.loop.run_until_complete(main())
</code></pre>
|
<python><telegram><telethon>
|
2024-04-13 18:55:11
| 0
| 307
|
Alex Doc
|
78,321,650
| 13,392,257
|
Keycloak: requests.exceptions.MissingSchema: Invalid URL 'None': No scheme supplied
|
<p>I am trying to run KeyCloak server and FastAPI application locally. I am following this tutorial <a href="https://fastapi-keycloak.code-specialist.com/quick_start/" rel="nofollow noreferrer">https://fastapi-keycloak.code-specialist.com/quick_start/</a></p>
<p>My actions:</p>
<p>1 Created docker-compose.yaml</p>
<pre><code>version: '3.9'
services:
postgres:
image: postgres:14.11
container_name: postgres_db
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: keycloak
POSTGRES_USER: keycloak
POSTGRES_PASSWORD: password
keycloak_server:
image: quay.io/keycloak/keycloak:20.0.2
volumes:
- ./realm-export.json:/opt/keycloak/data/import/realm.json
container_name: keycloak_server
environment:
KC_DB: postgres
KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
KC_DB_USERNAME: keycloak
KC_DB_PASSWORD: password
KC_HOSTNAME: localhost
KC_HOSTNAME_STRICT: false
KC_HOSTNAME_STRICT_HTTPS: false
KC_LOG_LEVEL: info
KC_METRICS_ENABLED: true
KC_HEALTH_ENABLED: true
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
command:
- start-dev
- --import-realm
depends_on:
- postgres
ports:
- 8280:8080
volumes:
postgres_data:
</code></pre>
<p>I ran into an error following the tutorial and fixed it according to this guide: <a href="https://howtodoinjava.com/devops/keycloak-script-upload-is-disabled/" rel="nofollow noreferrer">https://howtodoinjava.com/devops/keycloak-script-upload-is-disabled/</a> (removed authorizationSettings).</p>
<p>I see the following log</p>
<pre><code>keycloak_server | 2024-04-13 18:30:02,620 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: localhost, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: false
keycloak_server | 2024-04-13 18:30:03,742 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
keycloak_server | 2024-04-13 18:30:04,447 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
keycloak_server | 2024-04-13 18:30:04,566 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
keycloak_server | 2024-04-13 18:30:04,615 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
keycloak_server | 2024-04-13 18:30:04,892 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
keycloak_server | 2024-04-13 18:30:05,275 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_591749, Site name: null
keycloak_server | 2024-04-13 18:30:05,281 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
keycloak_server | 2024-04-13 18:30:06,011 INFO [org.keycloak.services] (main) KC-SERVICES0003: Not importing realm Test from file /opt/keycloak/bin/../data/import/realm.json. It already exists.
keycloak_server | 2024-04-13 18:30:06,024 INFO [org.keycloak.services] (main) KC-SERVICES0003: Not importing realm Test from file /opt/keycloak/bin/../data/import/realm.json. It already exists.
keycloak_server | 2024-04-13 18:30:06,163 INFO [io.quarkus] (main) Keycloak 20.0.2 on JVM (powered by Quarkus 2.13.3.Final) started in 4.937s. Listening on: http://0.0.0.0:8080
keycloak_server | 2024-04-13 18:30:06,163 INFO [io.quarkus] (main) Profile dev activated.
keycloak_server | 2024-04-13 18:30:06,163 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
keycloak_server | 2024-04-13 18:30:06,209 ERROR [org.keycloak.services] (main) KC-SERVICES0010: Failed to add user 'admin' to realm 'master': user with username exists
keycloak_server | 2024-04-13 18:30:06,210 WARN [org.keycloak.quarkus.runtime.KeycloakMain] (main) Running the server in development mode. DO NOT use this configuration in production.
</code></pre>
<p>I am not sure whether the error <strong>Failed to add user 'admin' to realm 'master': user with username exists</strong> is critical (I think it happens because this is the second run of the docker-compose file).</p>
<p>Then I run the FastAPI application with <code>python app.py</code> (file and configs copied from the tutorial):</p>
<pre><code>import uvicorn
from fastapi import FastAPI, Depends
from fastapi.responses import RedirectResponse
from fastapi_keycloak import FastAPIKeycloak, OIDCUser
app = FastAPI()
idp = FastAPIKeycloak(
server_url="http://localhost:8280/auth",
client_id="test-client",
client_secret="GzgACcJzhzQ4j8kWhmhazt7WSdxDVUyE",
admin_client_secret="BIcczGsZ6I8W5zf0rZg5qSexlloQLPKB",
realm="Test",
callback_uri="http://localhost:8081/callback"
)
idp.add_swagger_config(app)
...
if __name__ == '__main__':
uvicorn.run('app:app', host="127.0.0.1", port=8081)
</code></pre>
<p>And I see this error:</p>
<pre><code>idp = FastAPIKeycloak(
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/fastapi_keycloak/api.py", line 165, in __init__
self._get_admin_token() # Requests an admin access token on startup
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/fastapi_keycloak/api.py", line 349, in _get_admin_token
response = requests.post(url=self.token_uri, headers=headers, data=data, timeout=self.timeout)
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/requests/sessions.py", line 575, in request
prep = self.prepare_request(req)
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/requests/sessions.py", line 486, in prepare_request
p.prepare(
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/requests/models.py", line 368, in prepare
self.prepare_url(url, params)
File "/Users/aamoskalenko/0_root_folder/02_dev/33_keykloak_fastapi/venv/lib/python3.8/site-packages/requests/models.py", line 439, in prepare_url
raise MissingSchema(
requests.exceptions.MissingSchema: Invalid URL 'None': No scheme supplied. Perhaps you meant https://None?
</code></pre>
|
<python><keycloak><fastapi>
|
2024-04-13 18:37:28
| 0
| 1,708
|
mascai
|
78,321,309
| 13,174,189
|
How to group rows in dataframe based on timestamps ranges and mean group size maximization?
|
<p>I have a data frame:</p>
<pre><code>prod_id timestamp1 timestamp2
1 2023-12-02 2023-12-01
2 2023-12-05 2023-12-01
3 2023-12-06 2023-12-01
4 2023-12-07 2023-12-01
5 2023-12-08 2023-12-01
6 2023-12-08 2023-12-02
7 2023-10-10 2023-09-02
8 2023-12-11 2023-12-22
9 2023-12-12 2023-12-24
</code></pre>
<p>I want to group prod_id (creating a new column group_id) by timestamp ranges. The spread of dates in timestamp1 must not exceed 3 days (so the difference between max and min within a group_id must be no more than 3 days), and likewise the spread of dates in timestamp2 must not exceed 3 days. I need to maximize the average number of prod_id per group_id, so the desired result here is:</p>
<pre><code>prod_id timestamp1 timestamp2 group_id
1 2023-12-02 2023-12-01 1
2 2023-12-05 2023-12-01 2
3 2023-12-06 2023-12-01 2
4 2023-12-07 2023-12-01 2
5 2023-12-08 2023-12-01 2
6 2023-12-08 2023-12-02 2
7 2023-10-10 2023-09-02 3
8 2023-12-11 2023-12-22 4
9 2023-12-12 2023-12-24 4
</code></pre>
<p>As you can see, prod_id 2 is in group_id 2 because that creates a larger group than if it were in group_id 1, even though the conditions would allow it in group_id 1. This grouping maximizes the mean number of prod_id per group_id.</p>
<p>How can I do this? I wrote the following, but it does not algorithmically maximize the mean number of prod_id per group_id:</p>
<pre><code>df['timestamp1'] = pd.to_datetime(df['timestamp1'])
df['timestamp2'] = pd.to_datetime(df['timestamp2'])
df_sorted = df.sort_values(by=['timestamp1', 'timestamp2'])
group_id = 1
df_sorted['group_id'] = 0
while not df_sorted.empty:
group_start = df_sorted.iloc[0]
valid_rows_t1 = df_sorted[(df_sorted['timestamp1'] <= group_start['timestamp1'] + pd.Timedelta(days=30))]
valid_rows_t2 = valid_rows_t1[(valid_rows_t1['timestamp2'] >= valid_rows_t1['timestamp2'].min()) &
(valid_rows_t1['timestamp2'] <= valid_rows_t1['timestamp2'].min() + pd.Timedelta(days=30))]
df_sorted.loc[valid_rows_t2.index, 'group_id'] = group_id
df_sorted = df_sorted.drop(valid_rows_t2.index)
group_id += 1
</code></pre>
<p>Here is code to create the dataframe:</p>
<pre><code># Create the dataframe
data = {
'prod_id': [1, 2, 3, 4, 5, 6, 7, 8, 9],
'timestamp1': ['2023-12-02', '2023-12-05', '2023-12-06', '2023-12-07', '2023-12-08', '2023-12-08', '2023-10-10', '2023-12-11', '2023-12-12'],
'timestamp2': ['2023-12-01', '2023-12-01', '2023-12-01', '2023-12-01', '2023-12-01', '2023-12-02', '2023-09-02', '2023-12-22', '2023-12-24']
}
df = pd.DataFrame(data)
</code></pre>
|
<python><python-3.x><algorithm><function><genetic-algorithm>
|
2024-04-13 16:39:34
| 1
| 1,199
|
french_fries
|
78,321,225
| 12,560,241
|
Find if a set of 2-value lists is the result of the possible combinations of an n-value list
|
<p>I have used itertools to find all possible 2-element combinations (without repetition) from a set of 10 elements, to which I have applied some filtering to reduce the 2-element combinations based on a given condition.</p>
<p>For simplicity, suppose the code below, where the 10 elements are the numbers 1 to 10. The code I have used is the following:</p>
<pre><code>import numpy as np
M = np.array(np.transpose(np.loadtxt('C:\\py\\10.csv')))
n=2
from itertools import combinations
combos = list(filter(lambda e: ((e[0]+e[1]) > 14 ) , combinations(M, n)))
print (combos)
</code></pre>
<p>The output will then be</p>
<pre><code>[(5.0, 10.0), (6.0, 9.0), (6.0, 10.0), (7.0, 8.0), (7.0, 9.0), (7.0, 10.0), (8.0, 9.0), (8.0, 10.0), (9.0, 10.0)]
</code></pre>
<p>What I am now trying to achieve, programmatically, is to compress the above results into the n-element lists that would have generated them.</p>
<p>For example I can say <code>(6.0, 9.0), (6.0, 10.0), (9.0, 10.0)</code> are the two element combination of a list of 3 elements <code>(6,9,10)</code></p>
<p>I can also say <code>(7.0, 8.0), (7.0, 9.0), (7.0, 10.0), (8.0, 9.0), (8.0, 10.0), (9.0, 10.0)</code> are the two element combination of a list of 4 elements (7,8,9,10)</p>
<p>I can also say that the (5.0, 10.0) pair is not the result of the combinations of any larger sublist of the 10 numbers.</p>
<p>If anyone has come across this problem, I'd like to know how it could be achieved.</p>
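<p>One way to look at this: the filtered pairs are the edges of a graph over the 10 elements, and an n-element "source" list exists exactly when every one of its C(n, 2) pairs survived the filter — i.e. the elements form a clique. A hedged sketch of that membership check (integers instead of the floats above, and <code>is_source_list</code> is a hypothetical helper name):</p>

```python
from itertools import combinations

# Edges of the graph induced by the filtered pairs.
pairs = {frozenset(p) for p in [(5, 10), (6, 9), (6, 10), (7, 8), (7, 9),
                                (7, 10), (8, 9), (8, 10), (9, 10)]}

def is_source_list(candidate):
    """True if every 2-combination of candidate is among the pairs."""
    return all(frozenset(c) in pairs for c in combinations(candidate, 2))

print(is_source_list((6, 9, 10)))     # True
print(is_source_list((7, 8, 9, 10)))  # True
print(is_source_list((5, 6, 10)))     # False: (5, 6) was filtered out
```

<p>Enumerating the largest such lists automatically is maximal-clique finding; if pulling in a dependency is acceptable, something like networkx's clique enumeration would do that part.</p>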
|
<python><combinations><python-itertools><combinatorics><more-itertools>
|
2024-04-13 16:12:56
| 1
| 317
|
Marco_sbt
|
78,321,161
| 8,641,778
|
How to cast winId handle to capsule in PySide6
|
<p>In PyQt, one can access the wid as a <code>sip.voidptr</code> using <code>widget.winId()</code>, and then turn it into a capsule object using <code>wid.ascapsule()</code>.</p>
<p>In PySide6, <code>widget.winId()</code> only gives me an int.</p>
<p>How can I get winId() as a pointer in PySide6, like in PyQt?</p>
<p>I need the pointer as a capsule because the API I use gives the following error:</p>
<pre><code>return WNT_Window(ctypes.c_void_p(wid))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. OCP.WNT.WNT_Window(theTitle: str, theClass: OCP.WNT.WNT_WClass, theStyle: int, thePxLeft: int, thePxTop: int, thePxWidth: int, thePxHeight: int, theBackColor: OCP.Quantity.Quantity_NameOfColor = <Quantity_NameOfColor.Quantity_NOC_MATRAGRAY: 2>, theParent: capsule = None, theMenu: capsule = None, theClientStruct: capsule = None)
2. OCP.WNT.WNT_Window(theHandle: capsule, theBackColor: OCP.Quantity.Quantity_NameOfColor = <Quantity_NameOfColor.Quantity_NOC_MATRAGRAY: 2>)
</code></pre>
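<p>For reference, one way to wrap the integer that PySide6's <code>winId()</code> returns in a capsule is CPython's C API through ctypes. A sketch — the address below is a dummy integer standing in for a real window handle, and <code>wid_to_capsule</code> is a hypothetical helper name:</p>

```python
import ctypes

# PyCapsule_New(void *pointer, const char *name, destructor) -> capsule
ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
ctypes.pythonapi.PyCapsule_New.argtypes = [
    ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p,
]

def wid_to_capsule(wid: int):
    """Wrap an integer window handle (e.g. int(widget.winId())) in a PyCapsule."""
    return ctypes.pythonapi.PyCapsule_New(wid, None, None)

cap = wid_to_capsule(0x1234)  # dummy address for illustration
print(type(cap).__name__)     # PyCapsule
```

<p>The capsule carries no ownership of the handle; it merely boxes the pointer value, which is presumably all the OCP API needs.</p>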
|
<python><qt><pyqt><pyside>
|
2024-04-13 15:53:29
| 1
| 323
|
C-Entropy
|
78,321,095
| 4,398,966
|
Spyder stdout not being flushed
|
<p>Updated with a complete example. I'm running this in Spyder, but it's been noted in comments that this works as expected from a Windows command prompt.</p>
<p>I have the following:</p>
<pre><code>import sys
import os
class Portfolio:
@classmethod
def menu(cls):
print("Main Menu")
p = Portfolio()
p.clear_screen()
p.transaction()
def clear_screen(self):
os.system('cls' if os.name == 'nt' else 'clear')
print(flush=True)
sys.stdout.flush()
def transaction(self):
print("Transaction Menu")
if __name__ == "__main__" :
Portfolio.menu()
</code></pre>
<p>When I run this the screen clears AFTER not BEFORE transaction() is run. How can I correct that?</p>
|
<python><python-3.x><buffer><screen>
|
2024-04-13 15:33:45
| 0
| 15,782
|
DCR
|
78,321,036
| 2,649,312
|
Python how to pull data from dictionary
|
<p>Using the <code>weerlive.nl</code> API to pull weather data from the Dutch meteorological institute (KNMI), I get a dictionary with one key, <code>'liveweer'</code>, whose value is a list containing a long dictionary:</p>
<pre><code>dict_values([[{'plaats': 'Tilburg', 'timestamp': '1713019987', 'time': '13-04-2024 16:53', 'temp': '23.3', 'gtemp': '22.3', 'samenv': 'Licht bewolkt', 'lv': '48', 'windr': 'ZW', 'windrgr': '225', 'windms': '6', 'winds': '4', 'windk': '11.7', 'windkmh': '21.6', 'luchtd': '1020.5', 'ldmmhg': '765', 'dauwp': '11.7', 'zicht': '40', 'verw': 'Vandaag zonnige perioden, droog en vrij warm. Zondag frisser', 'sup': '06:45', 'sunder': '20:35', 'image': 'lichtbewolkt', 'd0weer': 'bewolkt', 'd0tmax': '22', 'd0tmin': '12', 'd0windk': '3', 'd0windknp': '10', 'd0windms': '5', 'd0windkmh': '19', 'd0windr': 'ZW', 'd0windrgr': '225', 'd0neerslag': '0', 'd0zon': '0', 'd1weer': 'halfbewolkt', 'd1tmax': '15', 'd1tmin': '10', 'd1windk': '3', 'd1windknp': '8', 'd1windms': '4', 'd1windkmh': '15', 'd1windr': 'W', 'd1windrgr': '270', 'd1neerslag': '20', 'd1zon': '50', 'd2weer': 'regen', 'd2tmax': '11', 'd2tmin': '7', 'd2windk': '3', 'd2windknp': '10', 'd2windms': '5', 'd2windkmh': '19', 'd2windr': 'ZW', 'd2windrgr': '225', 'd2neerslag': '90', 'd2zon': '20', 'alarm': '0', 'alarmtxt': ''}]])
</code></pre>
<p>For my RaspberryPi weather station I only need the values of <code>'temp'</code> and <code>'lv'</code> (luchtvochtigheid = humidity).</p>
<p>How do I pull these values from the dictionary?</p>
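<p>Assuming the response has been parsed into a dict shaped like the dump above, it is a matter of indexing the one-element list under <code>'liveweer'</code> and reading the keys. A minimal sketch with a stand-in for the API response:</p>

```python
# Stand-in for the parsed weerlive.nl response: one key, whose value is a
# list containing a single dict of readings.
data = {"liveweer": [{"plaats": "Tilburg", "temp": "23.3", "lv": "48"}]}

live = data["liveweer"][0]     # first (and only) element of the list
temp = float(live["temp"])     # temperature, °C (values arrive as strings)
humidity = int(live["lv"])     # relative humidity, %
print(temp, humidity)          # 23.3 48
```

<p>Note the values arrive as strings, so they need converting before any arithmetic on the weather station.</p>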
|
<python><dictionary>
|
2024-04-13 15:15:19
| 2
| 811
|
jdelange
|
78,321,025
| 3,437,012
|
Why is this trivial numba function with a List input so slow
|
<pre><code>import numba
from typing import List
@numba.njit
def test(a: List[int]) -> int:
return 1
test([i for i in range(2_000_000)])
</code></pre>
<p>This takes 2 s and scales linearly with the size of the list. Wrapping the input argument in <code>numba.typed.List</code> takes even longer (all the time is spent on the <code>numba.typed.List</code> call).</p>
<p>The timings don't get better if the function is called multiple times (while only being defined once), i.e., this is not a matter of compilation time.</p>
<p>Is there a way to tell numba to just use the list as is?</p>
<p>In my actual application, the raw data comes from an external library that cannot return numpy arrays or numba lists directly, only Python lists.</p>
<p>I'm using numba 0.59.1 and Python 3.12 on a 4 core Ubuntu22 laptop with 16GB RAM.</p>
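<p>For context, the linear cost comes from numba reflecting each element of the plain Python list into native storage at every call; the usual workaround is one bulk conversion to a NumPy array up front and passing that to the jitted function instead. A sketch of just the conversion step (numba itself is left out so the snippet stays self-contained):</p>

```python
import numpy as np

lst = list(range(2_000_000))

# One O(n) bulk conversion instead of numba's per-call list reflection;
# the resulting array can be passed to the @njit function directly.
arr = np.asarray(lst, dtype=np.int64)

# If the external library can yield values lazily, np.fromiter avoids the
# intermediate list entirely:
# arr = np.fromiter(gen, dtype=np.int64, count=n)

print(arr.shape, arr.dtype)  # (2000000,) int64
```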
|
<python><numpy><numba>
|
2024-04-13 15:12:13
| 1
| 2,370
|
Bananach
|
78,320,873
| 19,576,917
|
How to achieve concurrency with python telegram bot module
|
<p>I am using the python-telegram-bot module to create a bot that takes an image in Braille as input, translates it, and returns the English translation. The method responsible for the translation is defined as follows:</p>
<pre><code>async def start_translation(update: Update, context: CallbackContext):
# translation logic
return translated_text
</code></pre>
<p>The problem is that this method takes too long to finish. So, when a user sends an image to the bot while another user's image is already being translated, the second user has to wait until the first is done. How can I employ parallel processing in my code so that multiple users can translate at the same time? I have tried to use the <code>threading</code> module, but it does not seem to be supported with <code>async</code> functions. Below is how the translation process starts:</p>
<pre><code>def create_handle_photo(oVoice, oCorrect, oInference, current_dir):
async def handle_photo(update: Update, context: CallbackContext):
image_path = await get_image(update, context, current_dir) # download the image
oProcessImage = ProcessImage(image_path)
text = await start_translation(update, context, oProcessImage, oInference, oCorrect)
if text != '':
await generate_voice(update, context, current_dir, oVoice, text)
await clean_up(current_dir, image_path) #delete downloaded files
return handle_photo
def main():
app = ApplicationBuilder().token(TOKEN).build()
handle_photo_handler = create_handle_photo(oVoice, oCorrect, oInference, current_dir)
app.add_handler(MessageHandler(filters.PHOTO, handle_photo_handler)) # handle photo
app.run_polling()
if __name__ == '__main__':
main()
</code></pre>
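<p>The standard asyncio pattern for this is to off-load the blocking translation into a thread (or process) pool with <code>loop.run_in_executor</code>, so the event loop stays free to serve other updates. A self-contained sketch — <code>translate_blocking</code> is a hypothetical stand-in for the slow translation step:</p>

```python
import asyncio
import time

def translate_blocking(image: str) -> str:
    """Stand-in for the slow, blocking translation step."""
    time.sleep(0.2)
    return f"translated:{image}"

async def handle(image: str) -> str:
    loop = asyncio.get_running_loop()
    # Off-load the blocking call so the event loop can serve other users.
    return await loop.run_in_executor(None, translate_blocking, image)

async def main():
    start = time.perf_counter()
    # Two "users" handled concurrently: total time ~ one translation, not two.
    results = await asyncio.gather(handle("a.png"), handle("b.png"))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```

<p>python-telegram-bot v20 also exposes concurrency at the framework level (e.g. per-handler <code>block=False</code>), which may be simpler if only the handlers need to overlap; for a truly CPU-bound translation, a <code>ProcessPoolExecutor</code> in place of the default thread pool avoids the GIL.</p>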
<p>The input and output of the bot:</p>
<p><a href="https://i.sstatic.net/j3nRY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j3nRY.png" alt="Bot receives image starts translation" /></a></p>
|
<python><parallel-processing><python-telegram-bot>
|
2024-04-13 14:22:00
| 0
| 488
|
Chandler Bong
|
78,320,840
| 16,569,183
|
Piping pip freeze and pip uninstall
|
<p>I can clean my current python environment in two steps</p>
<pre class="lang-bash prettyprint-override"><code>pip freeze > requirements.txt
pip uninstall -r requirements.txt -y
</code></pre>
<p>I was wondering if it's possible to pipe these two commands together to avoid creating the temporary file (and why or why not). The naive approach (below) does not seem to work.</p>
<pre class="lang-bash prettyprint-override"><code>pip freeze | pip uninstall -y -r
</code></pre>
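<p>The naive pipe fails because <code>-r</code> expects a file name argument, not data on stdin; <code>xargs</code> bridges that gap by turning the piped lines into command-line arguments. A sketch — <code>echo</code> is left in front of the command so this only prints what would run; drop it to uninstall for real (which empties the environment):</p>

```shell
# Dry run: xargs converts stdin lines into arguments for the command.
printf 'packageA==1.0\npackageB==2.0\n' | xargs echo pip uninstall -y
# prints: pip uninstall -y packageA==1.0 packageB==2.0

# For real (careful -- this uninstalls everything pip freeze lists):
# pip freeze | xargs pip uninstall -y
```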
|
<python><pip>
|
2024-04-13 14:11:27
| 1
| 313
|
alfonsoSR
|
78,320,789
| 2,494,795
|
Register a Pandas Dataframe to Azure Machine Learning: NOT_SUPPORTED_API_USE_ATTEMPT
|
<p>I have been using the following code to register Pandas dataframes to Azure ML:</p>
<pre><code>workspace = Workspace(<subscription_id>, <resource_group>, <workspace_name>)
from azureml.core import Dataset
datastore = workspace.get_default_datastore()
ds = Dataset.Tabular.register_pandas_dataframe(df, datastore, "output_dataset_name", "description")
</code></pre>
<p>However, starting today, I am receiving a deprecation error message:</p>
<pre><code> Message: [NOT_SUPPORTED_API_USE_ATTEMPT] The [read_pandas_dataframe] API has been deprecated and is no longer supported
Payload: {"rslex_version": "2.22.2", "api_name": "read_pandas_dataframe", "version": "5.1.6"}
ActivityCompleted: Activity=register_pandas_dataframe, HowEnded=Failure, Duration=14.09 [ms], Info = {'activity_name': 'register_pandas_dataframe', 'activity_type': 'PublicApi', 'app_name': 'TabularDataset', 'source': 'azureml.dataset', 'version': '1.51.0.post1', 'dataprepVersion': '5.1.6', 'completionStatus': 'Failure', 'durationMs': 1.7}, Exception=NotImplementedError; read_pandas_dataframe is no longer supported. Write dataframe out as Parquet or csv and use read_parquet or read_csv instead..
</code></pre>
<p>Can anybody recommend the best way to register Pandas dataframes as AML datasets after this deprecation? Thanks!</p>
|
<python><pandas><azure><azure-machine-learning-service>
|
2024-04-13 13:58:03
| 1
| 1,636
|
Irina
|
78,320,581
| 1,089,412
|
Pydantic - change data in nested model before validation
|
<p>I have this small example of nested object with pydantic.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict
from pydantic import BaseModel, Field, ValidationError
class UserType(BaseModel):
name: str = Field(min_length=1)
type: str = Field(min_length=1)
class AppConfig(BaseModel):
key1: int = Field(gt=0)
objects: Dict[str, UserType]
try:
data = {
"key1": 1,
"objects": {
"type1": {
"name": "Name 2",
},
"type2": {
"name": "Name 1"
}
}
}
c = AppConfig(**data)
print(c.model_dump_json())
except ValidationError as e:
print(e)
</code></pre>
<p>This obviously fails because <code>type</code> is not set in the <code>UserType</code> model. My goal is to somehow set each <code>UserType.type</code> from the associated key of the <code>objects</code> dictionary before the actual validation. Something like:</p>
<ul>
<li>I pass data to model</li>
<li>Model validates main keys</li>
<li>Before validating the nested objects, it copies each dict key as <code>type = key</code> into the data that is then passed to the <code>UserType</code> model</li>
</ul>
<p>Is this possible somehow with <code>pydantic</code>?</p>
<p>I know I could do all this before passing data to main model but I want to know if this is possible in <code>pydantic</code> model.</p>
<p>I also tried using <code>model_post_init()</code> method like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict
from pydantic import BaseModel, Field, ValidationError
class UserType(BaseModel):
name: str = Field(min_length=1)
type: str = Field(min_length=1)
class AppConfig(BaseModel):
key1: int = Field(gt=0)
objects: Dict[str, UserType]
def model_post_init(self, __context) -> None:
values = self.dict()
for obj_type, obj in values["objects"].items():
print(obj, obj_type)
obj["type"] = obj_type
try:
data = {
"key1": 1,
"objects": {
"type1": {
"name": "Name 2",
#"type": "t"
},
"type2": {
"name": "Name 1",
#"type": "t"
}
}
}
c = AppConfig(**data)
print(c.model_dump_json())
except ValidationError as e:
print(e)
</code></pre>
<p>But this method is executed after validation, which has already failed by then. I also tried setting dummy type values in the data payload and then overriding them in <code>model_post_init()</code>, but that did not work at all; the model kept only the original dummy values for type.</p>
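<p>For what it's worth, in pydantic v2 the usual hook for this is a <code>@model_validator(mode="before")</code> on <code>AppConfig</code>, which receives the raw dict before field validation runs. The transformation itself is just a dict rewrite; a sketch (<code>inject_type</code> is a hypothetical name, and whether you run it inside such a validator or on the payload beforehand is up to you):</p>

```python
def inject_type(data: dict) -> dict:
    """Copy each key of "objects" into the nested dict as its "type".

    Hypothetical helper: in pydantic v2 this rewrite is what you would run
    inside a @model_validator(mode="before") on AppConfig, so that each
    UserType sees its "type" field before validation.
    """
    out = dict(data)  # shallow copy; nested dicts below are rebuilt, not mutated
    out["objects"] = {
        key: {**obj, "type": obj.get("type", key)}
        for key, obj in data.get("objects", {}).items()
    }
    return out
```

<p>If a validator feels too heavy, the same function can simply be applied to <code>data</code> before constructing <code>AppConfig</code>.</p>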
|
<python><validation><pydantic>
|
2024-04-13 12:30:53
| 1
| 3,306
|
piotrekkr
|
78,320,551
| 5,473,482
|
Find intersection point between line and ellipse in Python
|
<p>I am trying to find the point where the line intersects the ellipse. I can find the point manually, as seen in the code, but how can I find it automatically? Also, when the red point is inside the ellipse, the green point should still be projected onto the ellipse by extending the line. How can I solve this? Code examples are appreciated! Thanks!</p>
<p><a href="https://i.sstatic.net/zOZLf1J5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOZLf1J5.png" alt="enter image description here" /></a></p>
<pre><code>import cv2
import numpy as np
# Parameters
ellipse_center = np.array([238, 239])
ellipse_axes = np.array([150, 100]) # Semi-major and semi-minor axes
ellipse_angle = np.radians(70) # Angle of rotation (in radians)
line_point1 = ellipse_center # Point on the line
line_point2 = np.array([341, 125]) # Another point on the line (moving point)
# Initialize OpenCV window
window_name = "Projecting Point on Rotated Ellipse"
cv2.namedWindow(window_name)
canvas = np.zeros((400, 400, 3), dtype=np.uint8)
while True:
# Clear canvas
canvas.fill(0)
# Draw the rotated ellipse
cv2.ellipse(canvas, tuple(ellipse_center), tuple(ellipse_axes), np.degrees(ellipse_angle), 0, 360, (255, 255, 255), 2)
# Draw the moving point
cv2.circle(canvas, (int(line_point2[0]), int(line_point2[1])), 5, (0, 0, 255), -1)
# Draw the center point of the ellipse
cv2.circle(canvas, tuple(ellipse_center), 5, (0, 255, 255), -1)
# Draw the line between moving point and the center of the ellipse
cv2.line(canvas, (int(line_point2[0]), int(line_point2[1])), tuple(ellipse_center), (0, 0, 255), 1)
intersection_points = (310, 159) # <- How to find this?
cv2.circle(canvas, intersection_points, 5, (0, 255, 0), -1)
# Show canvas
cv2.imshow(window_name, canvas)
# Exit if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Close OpenCV window
cv2.destroyAllWindows()
</code></pre>
<p>When using the ellipse equation and the line equation I get this:</p>
<pre><code>import cv2
import numpy as np
from scipy.optimize import fsolve
# Parameters
ellipse_center = np.array([238, 239])
ellipse_axes = np.array([150, 100]) # Semi-major and semi-minor axes
ellipse_angle = np.radians(70) # Angle of rotation (in radians)
line_point1 = ellipse_center # Point on the line
line_point2 = np.array([341, 125]) # Another point on the line (moving point)
# Ellipse equation: (x - h)^2 / a^2 + (y - k)^2 / b^2 = 1, where (h, k) is the center of the ellipse
def ellipse_eq(params):
x, y = params
h, k = ellipse_center
a, b = ellipse_axes
return ((x - h) ** 2 / a ** 2) + ((y - k) ** 2 / b ** 2) - 1
# Line equation: y = mx + c
def line_eq(x):
m = (line_point2[1] - line_point1[1]) / (line_point2[0] - line_point1[0])
c = line_point1[1] - m * line_point1[0]
return m * x + c
# Intersection of the ellipse and the line
def intersection_point():
x_intercept = fsolve(lambda x: ellipse_eq([x, line_eq(x)]), line_point2[0])
y_intercept = line_eq(x_intercept)
return x_intercept[0], y_intercept[0]
# Initialize OpenCV window
window_name = "Projecting Point on Rotated Ellipse"
cv2.namedWindow(window_name)
canvas = np.zeros((400, 400, 3), dtype=np.uint8)
while True:
# Clear canvas
canvas.fill(0)
# Draw the rotated ellipse
cv2.ellipse(canvas, tuple(ellipse_center), tuple(ellipse_axes), np.degrees(ellipse_angle), 0, 360, (255, 255, 255), 2)
# Draw the moving point
cv2.circle(canvas, (int(line_point2[0]), int(line_point2[1])), 5, (0, 0, 255), -1)
# Draw the center point of the ellipse
cv2.circle(canvas, tuple(ellipse_center), 5, (0, 255, 255), -1)
# Draw the line between moving point and the center of the ellipse
cv2.line(canvas, (int(line_point2[0]), int(line_point2[1])), tuple(ellipse_center), (0, 0, 255), 1)
# Find and draw the intersection point
intersection_points = intersection_point()
cv2.circle(canvas, (int(intersection_points[0]), int(intersection_points[1])), 5, (0, 255, 0), -1)
# Show canvas
cv2.imshow(window_name, canvas)
# Exit if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Close OpenCV window
cv2.destroyAllWindows()
</code></pre>
<p><a href="https://i.sstatic.net/8M6M2H0T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M6M2H0T.png" alt="enter image description here" /></a></p>
<p>Which does not solve my problem. What am I doing wrong here?</p>
<p>EDIT:
Using MBo's approach with the code:</p>
<pre><code>import cv2
import numpy as np
# Parameters
ellipse_center = np.array([238, 239])
ellipse_axes = np.array([150, 100]) # Semi-major and semi-minor axes
ellipse_angle = np.radians(70) # Angle of rotation (in radians)
line_point1 = ellipse_center # Point on the line (ellipse center)
line_point2 = np.array([341, 125]) # Another point on the line (moving point)
# Initialize OpenCV window
window_name = "Projecting Point on Rotated Ellipse"
cv2.namedWindow(window_name)
canvas = np.zeros((400, 400, 3), dtype=np.uint8)
while True:
# Clear canvas
canvas.fill(0)
# Draw the rotated ellipse
cv2.ellipse(canvas, tuple(ellipse_center), tuple(ellipse_axes), np.degrees(ellipse_angle), 0, 360, (255, 255, 255), 2)
# Draw the moving point
cv2.circle(canvas, (int(line_point2[0]), int(line_point2[1])), 5, (0, 0, 255), -1)
# Draw the center point of the ellipse
cv2.circle(canvas, tuple(ellipse_center), 5, (0, 255, 255), -1)
# Draw the line between moving point and the center of the ellipse
cv2.line(canvas, (int(line_point2[0]), int(line_point2[1])), tuple(ellipse_center), (0, 0, 255), 1)
# Calculate the intersection point
t = np.arctan2(line_point2[1] - ellipse_center[1], line_point2[0] - ellipse_center[0]) - np.arctan2(np.sin(ellipse_angle) * ellipse_axes[0], np.cos(ellipse_angle) * ellipse_axes[1])
x_intersection = ellipse_center[0] + ellipse_axes[0] * np.cos(t) * np.cos(ellipse_angle) - ellipse_axes[1] * np.sin(t) * np.sin(ellipse_angle)
y_intersection = ellipse_center[1] + ellipse_axes[0] * np.cos(t) * np.sin(ellipse_angle) + ellipse_axes[1] * np.sin(t) * np.cos(ellipse_angle)
intersection_point = (int(x_intersection), int(y_intersection))
# Draw the intersection point
cv2.circle(canvas, intersection_point, 5, (0, 255, 0), -1)
# Show canvas
cv2.imshow(window_name, canvas)
# Exit if 'q' is pressed
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Close OpenCV window
cv2.destroyAllWindows()
</code></pre>
<p>Gets me:</p>
<p><a href="https://i.sstatic.net/s1mAK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s1mAK.png" alt="enter image description here" /></a></p>
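<p>For what it's worth, this particular intersection has a closed form, so no solver is needed: rotate the direction from the center into the ellipse's own axis-aligned frame, compute the scale factor that lands it on the boundary, and the rotation cancels out when going back. This also covers the inside-the-ellipse case, since the direction is always scaled out to the boundary. A sketch (the function name is mine, not from any library):</p>

```python
import numpy as np

def line_ellipse_intersection(center, axes, angle, point):
    """Intersect the ray from `center` through `point` with a rotated ellipse.

    center: (h, k), axes: (a, b) semi-axes, angle: rotation in radians.
    Works whether `point` is inside or outside the ellipse.
    """
    center = np.asarray(center, dtype=float)
    d = np.asarray(point, dtype=float) - center
    c, s = np.cos(angle), np.sin(angle)
    # rotate the direction by -angle into the ellipse's own frame
    dx = c * d[0] + s * d[1]
    dy = -s * d[0] + c * d[1]
    a, b = axes
    # scale factor so (t*dx/a)^2 + (t*dy/b)^2 == 1, i.e. the point lies on the ellipse
    t = 1.0 / np.hypot(dx / a, dy / b)
    return center + t * d
```

<p>With the question's values (<code>ellipse_center</code>, <code>ellipse_axes</code>, <code>ellipse_angle</code>, <code>line_point2</code>) this replaces both the hard-coded <code>(310, 159)</code> and the <code>fsolve</code> attempt, which ignored the rotation entirely.</p>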
|
<python><math><geometry><ellipse>
|
2024-04-13 12:17:38
| 2
| 1,047
|
Blind0ne
|
78,320,404
| 3,047,069
|
Facing problems saving JSON data to postgres using Python FastAPI
|
<p>I am using Python and FastAPI to build a project. I am fairly new to both technologies, and I am now stuck on the task of uploading a JSON file to an endpoint and then saving its contents to PostgreSQL.</p>
<p>Here are some details:</p>
<p><strong>vehicles.json: It can have list of 500-600 entries in the format shown below</strong></p>
<pre><code>{
"vehicleList": [
{
"id": 1,
"coordinate": {
"latitude": 54.5532316,
"longitude": 67.0087783
},
"condition": "GOOD"
},
{
"id": 2,
"coordinate": {
"latitude": 37.442316,
"longitude": 38.0087783
},
"condition": "BREAKDOWM"
        }
    ]
}
</code></pre>
<p><strong>connect.py:</strong></p>
<pre><code>vehicle_data = Table(
"vehicle_data",
metadata_obj,
Column("coordinates", ARRAY(Float), nullable=False),
Column("condition", String(6), nullable=False),
Column("id", Integer, primary_key=True),
)
</code></pre>
<p><strong>router.py:</strong></p>
<pre><code>@router.post("/upload")
async def upload_json(file: UploadFile = File(...)):
try:
# Read the uploaded file as bytes
contents = await file.read()
# Decode the bytes to string assuming it's JSON
decoded_content = contents.decode("utf-8")
# Parse the JSON content
json_data = json.loads(decoded_content)
fields = [
'coordinates', #List of floats
'condition', #str
'id' #int
]
for item in json_data:
my_data = [tuple(item[field] for field in fields) for item in json_data]
insert_query = "INSERT INTO vehicle_data (coordinates, condition, id) VALUES %s"
execute_values(insert_query, tuple(my_data))
return JSONResponse(status_code=200, content={"message": "JSON file uploaded successfully", "data": json_data})
except Exception as e:
return JSONResponse(content={"error": str(e)}, status_code=500)
</code></pre>
<p><strong>Expected output: I only want to store all the JSON entries as Postgres rows</strong></p>
<p><a href="https://i.sstatic.net/r3PvF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r3PvF.png" alt="enter image description here" /></a></p>
<p><strong>What I tried:</strong></p>
<p>I tried reading about file uploads in FastAPI, went through multiple Stack Overflow questions, and even found one that was close to the solution I am attempting:</p>
<pre><code>for item in json_data:
my_data = [tuple(item[field] for field in fields) for item in json_data]
insert_query = "INSERT INTO vehicle_data (coordinates, condition, id) VALUES %s"
execute_values(insert_query, tuple(my_data))
</code></pre>
<p>I suspect the mistake is somewhere in the code above, but I am not able to find it. The data parses correctly into a Python dict up to the <code>json_data = json.loads(decoded_content)</code> line.</p>
<p>It would be very helpful if you could point me in the right direction or state what needs to replace the lines above.</p>
<p>Here is the error I am getting:
<a href="https://i.sstatic.net/8bWVi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8bWVi.png" alt="enter image description here" /></a></p>
<p>Have a great day!</p>
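<p>As a sketch of the flattening step (the helper name is hypothetical): note that the payload nests <code>coordinate</code> (singular) with <code>latitude</code>/<code>longitude</code>, and the list sits under <code>vehicleList</code>, while the loop above iterates the outer dict's keys and looks up flat fields that do not exist. Separately, psycopg2's <code>execute_values</code> expects a cursor as its first argument, i.e. <code>execute_values(cur, insert_query, my_data)</code>.</p>

```python
def rows_from_payload(json_data: dict) -> list:
    """Hypothetical helper: flatten the parsed JSON into one tuple per vehicle,
    matching the (coordinates, condition, id) column order of vehicle_data."""
    rows = []
    for item in json_data["vehicleList"]:
        coord = item["coordinate"]
        rows.append(
            ([coord["latitude"], coord["longitude"]], item["condition"], item["id"])
        )
    return rows
```
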
|
<python><json><postgresql><fastapi><psycopg2>
|
2024-04-13 11:21:27
| 0
| 821
|
Radheya
|
78,320,399
| 5,378,816
|
Type hint for a function returning the same type not working
|
<p>What is wrong in this example? Isn't the <code>str -> str</code> case included in <code>T -> T</code>?</p>
<pre><code>from typing import TypeVar
T = TypeVar("T")
def unch(arg:T) -> T:
if isinstance(arg, str):
return arg
return arg
</code></pre>
<p><code>mypy</code> checking result:</p>
<pre><code># error: Incompatible return value type (got "str", expected "T")
</code></pre>
<hr />
<p>Update 1: deleted</p>
<hr />
<p>Update 2: don't know why, but this works:</p>
<pre><code>from typing import TypeVar, Any
T = TypeVar("T", str, Any)
def unch(arg:T) -> T:
if isinstance(arg, str):
return arg
return arg
</code></pre>
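<p>For comparison, a commonly cited workaround that keeps the unconstrained <code>T</code> is <code>typing.cast</code>: after <code>isinstance</code> narrows <code>arg</code> to <code>str</code>, mypy no longer treats it as <code>T</code>, and the cast tells the checker it still is. A sketch (runtime behaviour is unchanged, since <code>cast</code> is a no-op at runtime):</p>

```python
from typing import TypeVar, cast

T = TypeVar("T")

def unch(arg: T) -> T:
    if isinstance(arg, str):
        # narrowing turned `arg` into `str` for the checker; cast restores T
        return cast(T, arg)
    return arg
```
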
|
<python><mypy><python-typing>
|
2024-04-13 11:19:05
| 0
| 17,998
|
VPfB
|
78,320,329
| 6,303,377
|
Can't run inference SDK on AWS Lambda due to error with multiprocessing
|
<p>I am running this code on AWS Lambda</p>
<pre><code>import os
from inference_sdk import InferenceHTTPClient
def handler(event, context):
client = InferenceHTTPClient(api_url="https://detect.roboflow.com",
api_key=os.environ["ROBOFLOW_API_KEY"])
img_path = "./pizza.jpg"
return client.infer(img_path, model_id="pizza-identifier/3")
</code></pre>
<p>As part of a docker container that looks like this:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.11
RUN yum install -y mesa-libGL
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt
COPY pizza.jpg ${LAMBDA_TASK_ROOT}
COPY lambda_function.py ${LAMBDA_TASK_ROOT}
CMD [ "lambda_function.handler" ]
</code></pre>
<p>My requirements.txt contains nothing but <code>inference==0.9.17</code></p>
<p>When the code runs I get the following error. I have been trying to fix this and tried workarounds, but to no avail. I understand that the error is somehow related to multiprocessing. I found <a href="https://stackoverflow.com/questions/59638035/using-python-multiprocessing-queue-inside-aws-lambda-function">this post</a>, from which I understand that multiprocessing isn't possible on AWS Lambda; however, my script does not control or trigger any multiprocessing itself. I have been working on this for 9 hours now and would appreciate any hints!</p>
<p>This is the full error:</p>
<pre><code>{
"errorMessage": "[Errno 38] Function not implemented",
"errorType": "OSError",
"requestId": "703be804-fd86-4b44-88f9-ac54c87717be",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 10, in handler\n return client.infer(img_path, model_id=\"pizza-identifier/3\")\n",
" File \"/var/lang/lib/python3.11/site-packages/inference_sdk/http/client.py\", line 82, in decorate\n return function(*args, **kwargs)\n",
" File \"/var/lang/lib/python3.11/site-packages/inference_sdk/http/client.py\", line 237, in infer\n return self.infer_from_api_v0(\n",
" File \"/var/lang/lib/python3.11/site-packages/inference_sdk/http/client.py\", line 299, in infer_from_api_v0\n responses = execute_requests_packages(\n",
" File \"/var/lang/lib/python3.11/site-packages/inference_sdk/http/utils/executors.py\", line 42, in execute_requests_packages\n responses = make_parallel_requests(\n",
" File \"/var/lang/lib/python3.11/site-packages/inference_sdk/http/utils/executors.py\", line 58, in make_parallel_requests\n with ThreadPool(processes=workers) as pool:\n",
" File \"/var/lang/lib/python3.11/multiprocessing/pool.py\", line 930, in __init__\n Pool.__init__(self, processes, initializer, initargs)\n",
" File \"/var/lang/lib/python3.11/multiprocessing/pool.py\", line 196, in __init__\n self._change_notifier = self._ctx.SimpleQueue()\n",
" File \"/var/lang/lib/python3.11/multiprocessing/context.py\", line 113, in SimpleQueue\n return SimpleQueue(ctx=self.get_context())\n",
" File \"/var/lang/lib/python3.11/multiprocessing/queues.py\", line 341, in __init__\n self._rlock = ctx.Lock()\n",
" File \"/var/lang/lib/python3.11/multiprocessing/context.py\", line 68, in Lock\n return Lock(ctx=self.get_context())\n",
" File \"/var/lang/lib/python3.11/multiprocessing/synchronize.py\", line 169, in __init__\n SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)\n",
" File \"/var/lang/lib/python3.11/multiprocessing/synchronize.py\", line 57, in __init__\n sl = self._semlock = _multiprocessing.SemLock(\n"
]
}
</code></pre>
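<p>For context on the traceback: <code>multiprocessing.pool.ThreadPool</code> allocates a POSIX semaphore (<code>SemLock</code>) when it is constructed, and Lambda's runtime does not implement those, hence <code>OSError 38</code> (function not implemented). A pool built on <code>concurrent.futures.ThreadPoolExecutor</code> uses plain threading primitives and does run on Lambda. This does not patch <code>inference_sdk</code> itself (its internals call <code>ThreadPool</code>, per the stack trace), but it illustrates the distinction and what a workaround would swap in. A sketch (<code>run_parallel</code> is a hypothetical helper):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(func, items, workers=4):
    """Thread-based fan-out without multiprocessing semaphores,
    so it works inside Lambda's restricted runtime."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))
```
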
|
<python><python-3.x><amazon-web-services><aws-lambda>
|
2024-04-13 10:50:15
| 0
| 1,789
|
Dominique Paul
|
78,320,199
| 1,920,003
|
How to improve Nginx/Uvicorn upload speed in FastAPI application?
|
<p>I have an Ubuntu server running an Nginx reverse proxy for a FastAPI application using Uvicorn. The server is an AWS EC2 g4n.xlarge in Virginia. On the frontend I'm using HTMX. My home upload speed is close to 1 Gbps (fiber optics; I'm connected via ethernet cable).</p>
<p>The application uploads the file first and then processes it. Uploading a 48 MB, 35-minute MP3 file takes the application 10 minutes or so, which is not acceptable. In fact, given my fast internet speed, anything above 30 seconds or a minute is not acceptable. I already tried chunked upload; it didn't make a difference:</p>
<pre><code>async with aiofiles.open(video_file_path, "wb") as out_file:
while True:
content = await file.read(1024 * 1024) # Read chunks of 1 MB
if not content:
break
await out_file.write(content)
</code></pre>
<p>I believe the issue has to do with NGINX, because on my PC for testing, I use Uvicorn directly without NGINX and the upload is instant. My upload speed is fast and EC2 upload speed is fast, so the only one left to blame is Nginx, I think</p>
<p>nginx config</p>
<pre><code>server {
server_name example.com;
# Increase client max body size
client_max_body_size 6000M; # Allow 6GB uploads
# Adjust timeouts
client_body_timeout 7200s;
client_header_timeout 7200s;
location / {
proxy_pass http://127.0.0.1:8001; # Proxy pass to the FastAPI port
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Necessary for WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Proxy timeouts
proxy_read_timeout 7200s;
proxy_connect_timeout 7200s;
proxy_send_timeout 7200s;
}
</code></pre>
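<p>One commonly suggested Nginx tweak for large uploads is to disable request buffering, so the body streams to Uvicorn as it arrives instead of being spooled to disk first. A sketch to test against your setup (these directives exist in stock Nginx; <code>proxy_request_buffering</code> since 1.7.11):</p>

```nginx
location / {
    proxy_pass http://127.0.0.1:8001;
    # stream the upload straight to the upstream instead of buffering it to disk
    proxy_request_buffering off;
    # if you keep buffering, at least enlarge the in-memory body buffer
    client_body_buffer_size 16m;
}
```

<p>Whether this is the actual bottleneck is worth verifying first, e.g. by uploading the same file to the EC2 instance over plain Uvicorn on a second port and comparing timings.</p>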
|
<python><nginx><fastapi><uvicorn>
|
2024-04-13 09:55:03
| 1
| 5,375
|
Lynob
|
78,320,086
| 710,955
|
How get the value of a 'charset' meta element with Xpath?
|
<p>With Selenium webdriver, I'm trying to parse a <code>charset</code> meta element from a page.</p>
<pre class="lang-html prettyprint-override"><code><meta charset="UTF-8">
</code></pre>
<p>This is what I have so far</p>
<pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By
xpath='//meta[@charset]'
charset_meta_element = driver.find_element(By.XPATH, xpath)
</code></pre>
<p>I get a <code>WebElement</code> object.
How can I get the value (e.g. 'UTF-8') from this element?</p>
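<p>For reference, with Selenium the value comes from <code>charset_meta_element.get_attribute('charset')</code>. As a driver-free illustration of the same extraction, here is a stdlib-only sketch that needs no browser:</p>

```python
from html.parser import HTMLParser

class CharsetFinder(HTMLParser):
    """Record the value of the first <meta charset=...> tag encountered."""

    def __init__(self):
        super().__init__()
        self.charset = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "charset" in attrs:
                self.charset = attrs["charset"]

finder = CharsetFinder()
finder.feed('<html><head><meta charset="UTF-8"></head></html>')
print(finder.charset)  # UTF-8
```
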
|
<python><selenium-webdriver><xpath><html-meta>
|
2024-04-13 09:03:57
| 2
| 5,809
|
LeMoussel
|