| row_id | init_message | conversation_hash | scores |
|---|---|---|---|
18,993 | Using the Laravel framework, I would like to create a synonym database to store relations between words that are stored in another table, 'keywords'. Can you recommend a DB structure and advise how to store the relations so as to maintain a two-way relationship between synonyms? | 5da90153cd4d6a95d53f0d1c889d17f2 | {
"intermediate": 0.8241146802902222,
"beginner": 0.08792339265346527,
"expert": 0.08796191215515137
} |
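The usual Laravel answer here is a self-referencing pivot table (for example a `keyword_synonym` table with `keyword_id` and `synonym_id` columns, both pointing at `keywords.id`; these names are illustrative, not from the source). To keep the relation two-way without storing every pair twice, store each pair in canonical order and search both columns on lookup. A minimal Python sketch of that canonical-ordering idea:

```python
def canonical_pair(keyword_id_a, keyword_id_b):
    """Store each synonym pair exactly once, with the smaller id first."""
    return (min(keyword_id_a, keyword_id_b), max(keyword_id_a, keyword_id_b))

def synonyms_of(pairs, keyword_id):
    """Look up synonyms regardless of which side of the pair the keyword is on."""
    out = set()
    for a, b in pairs:
        if a == keyword_id:
            out.add(b)
        elif b == keyword_id:
            out.add(a)
    return out

# Inserting (7, 3) and (3, 7) collapses into the single canonical row (3, 7).
pairs = {canonical_pair(7, 3), canonical_pair(3, 7), canonical_pair(3, 9)}
```

In SQL the same idea is enforced with a unique index on `(keyword_id, synonym_id)` plus a check (or application rule) that `keyword_id < synonym_id`.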
18,994 | import logging
import uuid
from concurrent.futures import ThreadPoolExecutor

import cv2
import numpy as np

from libs.face_recognition import ALG
from settings import NUM_MAX_THREADS, BASE_DIR


class FaceRecognition:
    """Service for using face recognition."""

    def __init__(self, video_path, threshold=80):
        """
        Sets model's parameters.

        Args:
            video_path (str): path to video
            threshold (int): model's threshold
        """
        self.face_cascade_path = cv2.data.haarcascades + ALG
        self.face_cascade = cv2.CascadeClassifier(self.face_cascade_path)
        self.faces_list = []
        self.names_list = []
        self.threshold = threshold  # was hard-coded to 80, ignoring the argument
        self.video_path = video_path
        self.video = cv2.VideoCapture(self.video_path)

    def process(self):
        """
        Recognizes faces in the video frame by frame.
        Writes each id as a uuid4.

        Returns:
            tuple: list of faces and list of names
        """
        pool = ThreadPoolExecutor(max_workers=NUM_MAX_THREADS)
        frame_num = 1
        while True:
            ret, frame = self.video.read()
            logging.info(
                f"\n---------\n"
                f"Frame: {frame_num}\n"
                f"Video: {self.video_path}\n"
                "----------"
            )
            if not ret:
                break
            frame_num += 1
            pool.submit(self._process_frame, frame)
        pool.shutdown()
        self._close()
        logging.info(f"Video with path {self.video_path} ready!")
        return self.faces_list, self.names_list

    def _process_frame(self, frame):
        """
        Frame processing.

        Args:
            frame (cv2.typing.MatLike): cv2 frame

        Returns:
            None:
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = self.face_cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5, minSize=(100, 100))
        for (x, y, w, h) in faces:
            cur_face = gray[y:y + h, x:x + w]
            rec = cv2.face.LBPHFaceRecognizer_create()
            rec.train([cur_face], np.array([0]))  # labels must be an array, not a scalar
            is_new = True
            for face in self.faces_list:
                _, confidence = rec.predict(face)
                if confidence < self.threshold:
                    is_new = False
            if is_new:
                label = uuid.uuid4()
                self.faces_list.append(cur_face)
                self.names_list.append(label)

    def _close(self):
        """
        Closes video and destroys all windows.

        Returns:
            None:
        """
        self.video.release()
        cv2.destroyAllWindows()
Here is the code above for processing video clips, and here is the consuming process from Kafka:
class FaceRecognitionConsumer(Consumer):
    """
    A child consumer that receives messages from Kafka about videos
    that need recognition processing.
    """

    def on_message(self, message):
        """
        Starts the recognition process.

        Args:
            message (dict): message from kafka publisher

        Returns:
            bool: True if the file has been uploaded to s3 and the
                db row has been created, else False
        """
        try:
            filepath = message.value.get("filepath")
            face_recognition = FaceRecognition(filepath)
            faces_list, names_list = face_recognition.process()
            logging.info(f"faces_list: {faces_list}")
            logging.info(f"names_list: {names_list}")
            return True
        except AssertionError as e:
            self.logger.error(f"Assertion error: {e}")
            return False
How can I create endpoints so that I can stop the video-processing, remember the current frame, and then, say, resume from that same place? | 9ec3353604742f1866a89d616ef4aad1 | {
"intermediate": 0.5291444063186646,
"beginner": 0.23461127281188965,
"expert": 0.2362443208694458
} |
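One way to support the stop/resume endpoints the question asks for (a hypothetical sketch, not from the source): have the consumer persist a per-video checkpoint of the last processed frame, expose pause/resume endpoints that flip a flag and read that checkpoint, and on resume seek with `video.set(cv2.CAP_PROP_POS_FRAMES, n)`. The checkpoint store itself can be as simple as:

```python
import json
import os

class FrameCheckpoint:
    """Persist the last processed frame per video so processing can resume.

    With OpenCV, resuming from a checkpoint is then:
        video.set(cv2.CAP_PROP_POS_FRAMES, checkpoint.get(video_path))
    """

    def __init__(self, path):
        self.path = path
        self._state = {}
        if os.path.exists(path):
            with open(path) as f:
                self._state = json.load(f)

    def save(self, video_path, frame_num):
        # Overwrite the checkpoint for this video and flush to disk.
        self._state[video_path] = frame_num
        with open(self.path, "w") as f:
            json.dump(self._state, f)

    def get(self, video_path):
        # Default to frame 0 for videos we have never seen.
        return self._state.get(video_path, 0)
```

In the `process()` loop, call `save(self.video_path, frame_num)` periodically and check the pause flag between frames; the endpoint layer (Flask/FastAPI, or another Kafka topic) only needs to set the flag and re-publish the message with the stored frame number.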
18,995 | I need to find ROI for the current year, then find ROI for the previous year. After that, find the difference between the calculated values. 'Customer_SKU_Share_Month'[Year] contains years (2022 and 2023). If I select 2023, the previous year is 2022, so the difference exists. If I select 2023 and 2022, the difference can also be determined. But if I select 2022, there is no previous year. Do not explicitly type Year. Use DAX. ROI is calculated as
ROI = CALCULATE(
    DIVIDE(SUM('Customer_SKU_Share_Month'[SUM_GP]) - SUM('Customer_SKU_Share_Month'[SUM_Log]), SUM('Customer_SKU_Share_Month'[SUM_Disc])),
    FILTER('Customer_SKU_Share_Month',
        'Customer_SKU_Share_Month'[SUM_TotVol] <> 0 &&
        'Customer_SKU_Share_Month'[SUM_PromoVol] <> 0 &&
        NOT(ISERROR('Customer_SKU_Share_Month'[ROI(share, add%)_log2]))
    )) | 9038468a1b27389b10a56a00de96b2f9 | {
"intermediate": 0.30267882347106934,
"beginner": 0.31287792325019836,
"expert": 0.3844432830810547
} |
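Before writing the DAX (which would typically shift the year filter with something like SAMEPERIODLASTYEAR on a date table, or a MAX(Year) - 1 filter; verify against your model), it helps to pin down the required logic. Here it is in plain Python, with column names taken from the question and the selection behaviour the question describes (no previous year in the data means no result):

```python
def roi(rows):
    """ROI = (sum GP - sum Log) / sum Disc over the given rows."""
    gp = sum(r["SUM_GP"] for r in rows)
    log = sum(r["SUM_Log"] for r in rows)
    disc = sum(r["SUM_Disc"] for r in rows)
    return (gp - log) / disc if disc else None

def roi_vs_previous_year(rows_by_year, year):
    """Difference between this year's ROI and the previous year's, or None."""
    if year - 1 not in rows_by_year:
        return None  # e.g. 2022 selected: no previous year exists in the data
    cur = roi(rows_by_year[year])
    prev = roi(rows_by_year[year - 1])
    if cur is None or prev is None:
        return None
    return cur - prev
```

The DAX measure would compute the "previous year" ROI inside CALCULATE with the shifted year filter and subtract, returning BLANK() where this sketch returns None.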
18,996 | When connecting to a firewall, I get the error "ssh protocol handshake error, socket error: connection reset by peer". How do I resolve this? | 9f9426f64543766529b31d027a634b64 | {
"intermediate": 0.30969056487083435,
"beginner": 0.3720859885215759,
"expert": 0.31822338700294495
} |
18,997 | Declare a function called perimeterBox which accepts two parameters (h = height, w = width) representing the height and width of a rectangle. The function should compute and return the perimeter of the rectangle. The perimeter of a rectangle is computed as p = 2h + 2w, where p is the perimeter, h is the height, and w is the width of the rectangle. | 70fc5914e7eae9741b2d9cef1104f7aa | {
"intermediate": 0.3399512469768524,
"beginner": 0.3047667145729065,
"expert": 0.3552820682525635
} |
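The question does not name a language; a minimal sketch of the function in Python, directly transcribing the stated formula:

```python
def perimeterBox(h, w):
    """Return the perimeter of a rectangle: p = 2h + 2w."""
    return 2 * h + 2 * w
```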
18,998 | I have Table1 with columns Customer, SKU, Date in Power BI. The Date column contains dates. How do I create a table with Customer, SKU, and the dates not present in Table1's Date column, out of all dates in 2022 and 2023, for each combination of Customer and SKU? Use DAX | 0dc3c692a345a7ce3eacc63bc62f1a29 | {
"intermediate": 0.35970479249954224,
"beginner": 0.22543630003929138,
"expert": 0.41485893726348877
} |
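A typical DAX shape for this is a CROSSJOIN of the distinct Customer/SKU pairs with CALENDAR(DATE(2022,1,1), DATE(2023,12,31)), then removing the combinations already present (e.g. via EXCEPT); treat that as a direction to verify, not a finished measure. The underlying set logic, illustrated in plain Python:

```python
from datetime import date, timedelta

def all_dates(start, end):
    """Every date from start to end, inclusive."""
    d, out = start, []
    while d <= end:
        out.append(d)
        d += timedelta(days=1)
    return out

def missing_dates(table1):
    """For each (customer, sku), the 2022-2023 dates absent from Table1."""
    full = set(all_dates(date(2022, 1, 1), date(2023, 12, 31)))
    present = {}
    for cust, sku, d in table1:
        present.setdefault((cust, sku), set()).add(d)
    # Set difference: all dates in the window minus the ones observed.
    return {key: sorted(full - seen) for key, seen in present.items()}
```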
18,999 | Filter a Google Sheets column. I want to return the column without the empty cells | b5de394a625cd0166bd364be0fbe3d1c | {
"intermediate": 0.32581475377082825,
"beginner": 0.24443146586418152,
"expert": 0.42975375056266785
} |
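In Google Sheets the FILTER function does this directly; assuming the data sits in column A starting at row 2:

```text
=FILTER(A2:A, A2:A <> "")
```

FILTER returns only the rows where the condition is TRUE, i.e. the non-empty cells, stacked without gaps.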
19,000 | Can you prepare, in the MQL5 language, an expert advisor script with the CCI indicator? | 5149d7c91d91d6671f13a118f1ff7ac2 | {
"intermediate": 0.3896300494670868,
"beginner": 0.25856277346611023,
"expert": 0.351807177066803
} |
19,001 | I need to set a proper interval after which this auto-queue is triggered within the timeout. Please check the code and figure out where it may interfere with the actual text2image AI queries. Also, add the interval input field between Retry Attempts and Timeout. The idea is that activating the auto-queue checkbox makes the page retrieve an image from the text2image AI endpoint, governed by the overall auto-queue timeout and the interval between the retry attempts used.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | d002ac5ece660d05fdbd1354a83c420b | {
"intermediate": 0.4080185890197754,
"beginner": 0.4608021676540375,
"expert": 0.13117927312850952
} |
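What the question asks for, an overall auto-queue timeout plus a per-attempt interval, boils down to a budgeted retry loop; the page's current code instead starts an unbounded `setTimeout` independent of the fetch. The shape of the fix, sketched in Python rather than the page's JavaScript (the request itself is stubbed, and the injectable `clock`/`sleep` are for testability):

```python
import time

def retry_with_budget(attempt, max_attempts, interval_s, timeout_s,
                      clock=time.monotonic, sleep=time.sleep):
    """Retry `attempt` until it succeeds, the attempt budget is spent,
    or the overall timeout elapses; wait `interval_s` between tries."""
    deadline = clock() + timeout_s
    for n in range(max_attempts):
        if clock() >= deadline:
            return None                  # overall auto-queue timeout hit
        try:
            return attempt()             # e.g. the text2image request
        except Exception:
            if n < max_attempts - 1:
                # Never sleep past the deadline.
                sleep(min(interval_s, max(0.0, deadline - clock())))
    return None
```

In the JavaScript, the equivalent is an `async` loop that `await`s the fetch and then `await`s the remaining interval, instead of firing timers that ignore whether a request is still in flight.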
19,002 | I need to set a proper interval after which this auto-queue is triggered within the timeout. Please check the code and figure out where it may interfere with the actual text2image AI queries. Also, add the interval input field between Retry Attempts and Timeout. The idea is that activating the auto-queue checkbox makes the page retrieve an image from the text2image AI endpoint, governed by the overall auto-queue timeout and the interval between the retry attempts used.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | d556238180ba3075bb737c384564af04 | {
"intermediate": 0.4080185890197754,
"beginner": 0.4608021676540375,
"expert": 0.13117927312850952
} |
19,003 | I need to set a proper interval after which this auto-queue is triggered within the timeout. Please check the code and figure out where it may interfere with the actual text2image AI queries. Also, add the interval input field between Retry Attempts and Timeout. The idea is that activating the auto-queue checkbox makes the page retrieve an image from the text2image AI endpoint, governed by the overall auto-queue timeout and the interval between the retry attempts used.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | fdd632bb8c434839d63abec588d1514b | {
"intermediate": 0.4080185890197754,
"beginner": 0.4608021676540375,
"expert": 0.13117927312850952
} |
19,004 | How do I compose a Redfish query with $expand and $select? | 1b77ff36898bf1145d6323882d3ea447 | {
"intermediate": 0.4178433418273926,
"beginner": 0.17468856275081635,
"expert": 0.40746814012527466
} |
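Editor's sketch for the question above: Redfish (DMTF DSP0266) adopts OData-style `$expand` and `$select` query options, e.g. `GET /redfish/v1/Systems?$expand=.($levels=1)&$select=Name,Status`; support for each option varies by service, so check the service root's advertised protocol features before relying on them. A small Python helper to compose such URLs (the host and resource names are illustrative):

```python
def redfish_query(base, resource, expand=None, select=None):
    """Compose a Redfish GET URL with optional $expand / $select options.

    `expand` is passed through verbatim (e.g. ".($levels=1)" or "*");
    `select` is a list of property names joined with commas.
    """
    url = base.rstrip("/") + "/" + resource.lstrip("/")
    params = []
    if expand:
        params.append("$expand=" + expand)
    if select:
        params.append("$select=" + ",".join(select))
    return url + ("?" + "&".join(params) if params else "")
```

Note the parameters are deliberately not percent-encoded here; many BMCs expect the literal `$`, `(`, `)` characters, so verify your service's behaviour.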
19,005 | I need to set a proper interval after which this auto-queue is triggered within the timeout. Please check the code and figure out where it may interfere with the actual text2image AI queries. Also, add the interval input field between Retry Attempts and Timeout. The idea is that activating the auto-queue checkbox makes the page retrieve an image from the text2image AI endpoint, governed by the overall auto-queue timeout and the interval between the retry attempts used.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | 8bb3ce409a308cd423f888a86e1a8687 | {
"intermediate": 0.4080185890197754,
"beginner": 0.4608021676540375,
"expert": 0.13117927312850952
} |
19,006 | Help me write a Python script to check the capacity of the Dell EMC Elastic Cloud Storage (ECS) product | 058f33787f8ac4b9cfabcb0e316c4eae | {
"intermediate": 0.5573350787162781,
"beginner": 0.14984221756458282,
"expert": 0.2928226590156555
} |
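A hedged sketch for the question above: Dell EMC ECS exposes a management REST API (commonly on port 4443) where a login yields an X-SDS-AUTH-TOKEN header that authenticates later calls, and a capacity resource reports provisioned and free space. The endpoint path and field names below are assumptions to confirm against your ECS version's REST API reference before use:

```python
import json
import urllib.request

def get_ecs_capacity(host, token, port=4443):
    """Query the ECS management API for cluster capacity.

    The `/object/capacity.json` path and X-SDS-AUTH-TOKEN header follow
    typical ECS management API usage; verify against your ECS docs.
    """
    req = urllib.request.Request(
        f"https://{host}:{port}/object/capacity.json",
        headers={"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def free_percent(capacity):
    """Percentage of provisioned capacity still free (field names assumed)."""
    total = capacity["totalProvisioned_gb"]
    free = capacity["totalFree_gb"]
    return 100.0 * free / total if total else 0.0
```

In practice you would first POST credentials to the login endpoint to obtain the token, and likely disable or configure TLS verification for self-signed BMC-style certificates.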
19,007 | I need the interval in auto-queue to sync perfectly with the actual image returned from the text2image AI backend, because generation is sometimes slow (20 to 35 seconds in total). The interval drifts out of sync because it does not know whether an actual image has been returned from the backend or not. Please fix that issue and also check all the timeouts and intervals used to queue the text2image backend.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | 133786cd08f1b6e1fd832b82c6f754de | {
"intermediate": 0.39298397302627563,
"beginner": 0.43561410903930664,
"expert": 0.1714019775390625
} |
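The desync the question describes comes from scheduling the next request with a fixed `setInterval` that fires regardless of whether the previous request has finished. The robust pattern is completion-driven: run a request to completion, then wait only the leftover part of the interval. Sketched in Python with an injectable clock for testability (in the page's JavaScript, the equivalent is an `async` loop with `await`, not `setInterval`):

```python
def run_queue(generate, interval_s, rounds, clock, sleep):
    """Run `generate` repeatedly; start each round `interval_s` after the
    previous round STARTED, but never before the previous round FINISHED."""
    results = []
    for _ in range(rounds):
        start = clock()
        results.append(generate())             # blocks until the image is back
        elapsed = clock() - start
        sleep(max(0.0, interval_s - elapsed))  # pad only the leftover time
    return results
```

When generation takes longer than the interval (the 20 to 35 second case), the padding is zero and the next request starts immediately after the image arrives, so the queue can never outrun the backend.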
19,008 | I need the interval in auto-queue to sync perfectly with the actual image returned from the text2image AI backend, because generation is sometimes slow (20 to 35 seconds in total). The interval drifts out of sync because it does not know whether an actual image has been returned from the backend or not. Please fix that issue and also check all the timeouts and intervals used to queue the text2image backend.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | ae2a0a16a20327102e0b5b65a66761f6 | {
"intermediate": 0.39298397302627563,
"beginner": 0.43561410903930664,
"expert": 0.1714019775390625
} |
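The desync described in the prompt above comes from scheduling `generateImage()` on a fixed `setInterval` that fires whether or not the previous request has finished. One way to keep the queue in lockstep with the backend is an await-based loop that only starts the next generation after the current response (20-35 s or more) has actually arrived. This is a sketch with injected callbacks, not the page's real functions:

```javascript
// Auto-queue loop that stays in sync with the backend: instead of a fixed
// setInterval that fires blindly, each generation is awaited, so the next
// request only starts after the current image (or an error) has actually
// come back, however long the backend takes.
// `generateOnce` and `isAutoQueueOn` are injected, illustrative callbacks.
async function runAutoQueue(generateOnce, isAutoQueueOn, gapMs) {
  const results = [];
  while (isAutoQueueOn()) {
    // No timer can drift ahead of this await: it resolves with the response.
    const image = await generateOnce();
    results.push(image);
    // Fixed small gap between requests instead of a guessed interval.
    await new Promise(function (resolve) { setTimeout(resolve, gapMs); });
  }
  return results;
}
```

On the page this would replace the `setInterval(generateImage, interval)` in `autoQueueChanged`, with `isAutoQueueOn` reading the checkbox state.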
19,009 | need that interval in auto-queue to sync perfectly with the actual image returned from that text2image AI backend, because generation is sometimes slow, 20-35 sec in total. the interval goes out of sync because it doesn't know whether the actual image has been returned from the backend or not. need to fix that issue and also check all timeouts and intervals used to queue that text2image backend.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | 8cbd9daad8bb1df51e6caf6297645360 | {
"intermediate": 0.39298397302627563,
"beginner": 0.43561410903930664,
"expert": 0.1714019775390625
} |
19,010 | javascript: Need to create an object License with properties Name and number. Name should be non-enumerable, number should be non-configurable | e0ccddf5abf181422d12c3ff3016f4a4 | {
"intermediate": 0.3694709241390228,
"beginner": 0.27888402342796326,
"expert": 0.35164499282836914
} |
19,011 | import logging
import uuid
from concurrent.futures import ThreadPoolExecutor
import cv2
from libs.face_recognition import ALG
import numpy as np
from settings import NUM_MAX_THREADS, BASE_DIR
from PIL import Image
class FaceRecognition:
    """Service for using face recognition."""

    def __init__(self, file_id, video_path, threshold=80):
        """
        Sets model's parameters.

        Args:
            file_id (str): file id
            video_path (str): path to video
            threshold (int): model's threshold
        """
        self.face_cascade_path = cv2.data.haarcascades + ALG
        self.face_cascade = cv2.CascadeClassifier(self.face_cascade_path)
        self.faces_list = []
        self.names_list = []
        self.threshold = threshold
        self.video_path = video_path
        self.video = cv2.VideoCapture(self.video_path)
        self.file_id = file_id

    def process(self):
        """
        Process of recognition faces in video by frames.

        Writes id as uuid4.

        Returns:
            tuple: with list of faces and list of names
        """
        pool = ThreadPoolExecutor(max_workers=NUM_MAX_THREADS)
        frame_num = 1
        while True:
            ret, frame = self.video.read()
            logging.info(
                f"\n---------\n"
                f"Frame: {frame_num}\n"
                f"File id: {self.file_id}\n"
                "----------"
            )
            if not ret:
                break
            frame_num += 1
            pool.submit(self._process_frame, frame)
        pool.shutdown()
        self._close()
        logging.info(f"Video with id {self.file_id} ready!")
        return self.faces_list, self.names_list

    def _process_frame(self, frame):
        """
        Frame processing.

        Args:
            frame (cv2.typing.MatLike): cv2 frame

        Returns:
            None:
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = self.face_cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5, minSize=(100, 100))
        for (x, y, w, h) in faces:
            cur_face = gray[y:y + h, x:x + w]
            rec = cv2.face.LBPHFaceRecognizer_create()
            # labels must be a 1-D integer array, one label per sample
            rec.train([cur_face], np.array([0]))
            is_new_face = True
            for face in self.faces_list:
                _, confidence = rec.predict(face)
                if confidence < self.threshold:
                    is_new_face = False
            if is_new_face:
                label = uuid.uuid4()
                self.faces_list.append(cur_face)
                self.names_list.append(label)

    def _close(self):
        """
        Closes video and destroys all windows.

        Returns:
            None:
        """
        self.video.release()
        cv2.destroyAllWindows()
Make it so that on every frame processed, the information self.file_id, self.faces_list, self.names_list is saved to Redis, and also write the container itself for the docker compose file | 7792d6cb0c6087d888fd49f60460425d | {
"intermediate": 0.4515616297721863,
"beginner": 0.3758517801761627,
"expert": 0.17258666455745697
} |
19,012 | need that interval in auto-queue to sync perfectly with the actual image returned from that text2image AI backend, because generation is sometimes slow, 20-35 sec in total. the interval goes out of sync because it doesn't know whether the actual image has been returned from the backend or not. need to fix that issue and also check all timeouts and intervals used to queue that text2image backend.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
function autoQueueChanged() {
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
}
</script>
</body>
</html> | 73966efa73d8df8db6fcf5c6212befee | {
"intermediate": 0.39298397302627563,
"beginner": 0.43561410903930664,
"expert": 0.1714019775390625
} |
19,013 | Fix code integrData.subNetwork = data
integrData.parent.SubNetwork.forEach((Dl) => {
console.log(Date.now(), Dl)
}) | c5ebfdab3071c4d9accaa2bcebc692c4 | {
"intermediate": 0.35349756479263306,
"beginner": 0.36163195967674255,
"expert": 0.28487056493759155
} |
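A hedged reading of the snippet above: the code assigns `integrData.subNetwork` but iterates `integrData.parent.SubNetwork` — a different casing on a different object, and `parent` may not carry that array at all, which would make `forEach` throw. A defensive sketch (the data shape is assumed, not known from the prompt):

```javascript
// Defensive version of the snippet: the original assigns
// integrData.subNetwork but iterates integrData.parent.SubNetwork
// (different casing, different object), and parent may be undefined.
// The data shape here is assumed for illustration.
function logSubNetworks(integrData, data) {
  integrData.subNetwork = data;
  // Prefer the parent's SubNetwork array when it exists; otherwise fall
  // back to the array we just assigned, so forEach never sees undefined.
  const list = (integrData.parent && integrData.parent.SubNetwork) ||
    integrData.subNetwork || [];
  list.forEach(function (Dl) {
    console.log(Date.now(), Dl);
  });
  return list;
}
```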
19,014 | hello. i am a support engineer working at dell emc, supporting the product Elastic Cloud Storage. help me write a python script on the ECS CLI to show the ecs overall health and capacity consumption | e480e3cbec5cbb6e9df1aa0ece2531f3 | {
"intermediate": 0.5805525183677673,
"beginner": 0.23923027515411377,
"expert": 0.18021725118160248
} |
19,015 | now, is there any way to add a gallery button after the timeout input field that will grab all images returned or generated by that text2image AI backend and store them in a pop-up layer window or frame in an arranged thumbnail-kind fashion, with a save-all button maybe. also, using backticks in template literals is a bad idea; better the old-fashioned way.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
let isGenerating = false;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
let generateInterval;
function autoQueueChanged() {
clearInterval(generateInterval);
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
generateInterval = setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
if (isGenerating) {
return;
}
isGenerating = true;
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
if (isGenerating) {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
isGenerating = false;
}
</script>
</body>
</html> | 52ff27fd1c57c930cfe4ba785ae6f799 | {
"intermediate": 0.38844722509384155,
"beginner": 0.41332128643989563,
"expert": 0.19823147356510162
} |
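For the gallery request above, the generation code could push each object URL into a small store, and the pop-up markup can be assembled with plain string concatenation rather than backticked template literals, as the prompt asks. A minimal sketch; the class and CSS names are made up:

```javascript
// Collects object URLs of generated images and builds the pop-up gallery
// markup with plain string concatenation (no backticks / template
// literals). Class names are made up; DOM insertion is left to the page.
function GalleryStore() {
  this.urls = [];
}
GalleryStore.prototype.add = function (url) {
  this.urls.push(url);
};
GalleryStore.prototype.toHtml = function () {
  var html = '<div class="gallery-popup">';
  for (var i = 0; i < this.urls.length; i++) {
    html += '<img class="gallery-thumb" src="' + this.urls[i] + '">';
  }
  html += '<button class="save-all-button">Save All</button></div>';
  return html;
};
```

`generateImage()` would call `store.add(url)` right after `URL.createObjectURL(response)`, and the gallery button would inject `store.toHtml()` into a pop-up layer.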
19,016 | now, is there any way to add a gallery button after the timeout input field that will grab all images returned or generated by that text2image AI backend and store them in a pop-up layer window or frame in an arranged thumbnail-kind fashion, with a save-all button maybe. also, using backticks in template literals is a bad idea; better the old-fashioned way.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
let isGenerating = false;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
let generateInterval;
function autoQueueChanged() {
clearInterval(generateInterval);
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
generateInterval = setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
if (isGenerating) {
return;
}
isGenerating = true;
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
if (isGenerating) {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
isGenerating = false;
}
</script>
</body>
</html> | 1b39ce9614d4a7747b6c3233f8e23ce9 | {
"intermediate": 0.38844722509384155,
"beginner": 0.41332128643989563,
"expert": 0.19823147356510162
} |
19,017 | now, is there any way to add a gallery button after the timeout input field that will grab all images returned or generated by that text2image AI backend and store them in a pop-up layer window or frame in an arranged thumbnail-kind fashion, with a save-all button maybe. also, using backticks in template literals is a bad idea; better the old-fashioned way.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
let isGenerating = false;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
let generateInterval;
function autoQueueChanged() {
clearInterval(generateInterval);
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
generateInterval = setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
if (isGenerating) {
return;
}
isGenerating = true;
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
if (isGenerating) {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
isGenerating = false;
}
</script>
</body>
</html> | 8fd37cda0a1eddf54a0dfb498e9fd293 | {
"intermediate": 0.38844722509384155,
"beginner": 0.41332128643989563,
"expert": 0.19823147356510162
} |
19,018 | now, is there any way to add a gallery button after timeout input field, that will grab all images returned or generated by that text2image AI backend and store it in a pop-up layer window or frame in an arranged thumbnail-kind fashion with a save all button maybe. also, using backticks in template literals is a bad idea, better in old fashioned way.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
let estimatedTime = 0;
let isGenerating = false;
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
let generateInterval;
function autoQueueChanged() {
clearInterval(generateInterval);
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
generateInterval = setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
if (isGenerating) {
return;
}
isGenerating = true;
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
if (isGenerating) {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.onload = function() {
const aspectRatio = img.width / img.height;
const canvasWidth = imageCanvas.offsetWidth;
const canvasHeight = Math.floor(canvasWidth / aspectRatio);
imageCanvas.width = canvasWidth;
imageCanvas.height = canvasHeight;
ctx.clearRect(0, 0, canvasWidth, canvasHeight);
ctx.drawImage(img, 0, 0, canvasWidth, canvasHeight);
};
img.src = url;
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
isGenerating = false;
}
</script>
</body>
</html> | 0e47c80966c18fa314c4794408775ceb | {
"intermediate": 0.38844722509384155,
"beginner": 0.41332128643989563,
"expert": 0.19823147356510162
} |
19,019 | In Laravel how to retrieve Filter model along with Keyword model that both are connected with relationship table filter_keyword? | 9e8ab0b7173644237358643e1e48e856 | {
"intermediate": 0.7179719805717468,
"beginner": 0.08095020055770874,
"expert": 0.20107783377170563
} |
19,020 | js: need to create a constructor for Laptop with properties (Manufacture, memory, capacity, display). create 2 objects | 8ec05d85b9a6d0cf3c4ad67c63f0497c | {
"intermediate": 0.48919910192489624,
"beginner": 0.21773424744606018,
"expert": 0.2930665910243988
} |
19,021 | the images generated inside gallery doesn't harness an unique names, they all has a "canvas.png" on them. also need to store original image size but resized to thumbnail size in gallery representation. also, the actual image returned from an text2image AI doesn't displays in the main canvas. need to fix that. also need to make a style for that gallery to be auto-fitted on full window when opened and a close button to close it. also, a save all button isn't working. also, using backticks in template literals is a bad idea, better in old fashioned way.: <html>
<head>
<title>Text2Image AI</title>
</head>
<body>
<div class='container'>
<div class='control-container'>
<div class='input-field-container'>
<h1 class='title' style='margin-left: 10px;margin-right: 10px;margin-top: 10px;'>T2I AI UI</h1>
<input id='inputText' type='text' value='armoured girl riding an armored cock' class='input-field' style='flex: 1;margin-top: -6px;'>
<div class='gen-button-container'>
<button onclick='generateImage()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gen Img</button>
<button onclick='showGallery()' class='gen-button' style='border-style:none;height: 32px;margin-left: 10px;margin-right: 10px;margin-top: -6px;'>Gallery</button>
</div>
</div>
</div>
<div class='independent-container'>
<label for='autoQueueCheckbox' style='margin-left: 10px;margin-right: 5px;'>Auto Queue:</label>
<input type='checkbox' id='autoQueueCheckbox' onchange='autoQueueChanged()'>
<label for='numAttemptsInput' style='margin-left: 10px;margin-right: 5px;'>Retry Attempts:</label>
<input type='number' id='numAttemptsInput' value='50' min='2' max='1000' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='intervalInput' style='margin-left: 10px;margin-right: 5px;'>Interval (sec):</label>
<input type='number' id='intervalInput' value='25' min='1' max='300' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
<label for='timeoutInput' style='margin-left: 10px;margin-right: 5px;'>Timeout (sec):</label>
<input type='number' id='timeoutInput' value='120' min='12' max='600' style='width: 64px;height: 16px; background-color:#010130;
color:#aabbee;
border:1px solid darkmagenta;
border-radius:6px;'>
</div>
<div class='progress-bar'>
<div class='progress-bar-filled'></div>
<canvas id='imageCanvas' class='image-canvas'></canvas></div>
</div>
<div id='gallery' class='gallery' style='display: none;'>
<button onclick='hideGallery()' class='close-button' style='position: absolute; top: 5px; right: 10px; background-color: transparent; color: white; border: none; font-size: 16px; cursor: pointer;'>X</button>
<div id='thumbnailContainer' class='thumbnail-container' style='display: flex; flex-wrap: wrap;'></div>
<button onclick='saveAllImages()' class='save-all-button' style='margin-top: 10px;'>Save All</button>
</div>
<script>
const modelUrl = 'https://api-inference.huggingface.co/models/hogiahien/counterfeit-v30-edited';
const modelToken = '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';
const progressBarFilled = document.querySelector('.progress-bar-filled');
const imageCanvas = document.getElementById('imageCanvas');
const ctx = imageCanvas.getContext('2d');
const thumbnailContainer = document.getElementById('thumbnailContainer');
const gallery = document.getElementById('gallery');
let estimatedTime = 0;
let isGenerating = false;
let generatedImages = [];
async function query(data) {
const response = await fetch(modelUrl, {
headers: {
Authorization: "Bearer " + modelToken
},
method: 'POST',
body: JSON.stringify(data)
});
const headers = response.headers;
const estimatedTimeString = headers.get('estimated_time');
estimatedTime = parseFloat(estimatedTimeString) * 1000;
const result = await response.blob();
return result;
}
let generateInterval;
function autoQueueChanged() {
clearInterval(generateInterval);
const autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
if (autoQueueActive) {
const timeout = parseInt(document.getElementById('timeoutInput').value) * 1000;
const interval = parseInt(document.getElementById('intervalInput').value) * 1000;
setTimeout(function() {
generateImage();
}, timeout);
generateInterval = setInterval(function() {
generateImage();
}, interval);
}
}
async function generateImage() {
if (isGenerating) {
return;
}
isGenerating = true;
const inputText = document.getElementById('inputText').value;
const numAttempts = parseInt(document.getElementById('numAttemptsInput').value);
progressBarFilled.style.width = '0%';
progressBarFilled.style.backgroundColor = 'green';
await new Promise(resolve => setTimeout(resolve, 1000));
let retryAttempts = 0;
const maxRetryAttempts = numAttempts;
let autoQueueActive = false;
while (retryAttempts < maxRetryAttempts) {
try {
const startTime = Date.now();
const timeLeft = Math.floor(estimatedTime / 1000);
const interval = setInterval(function() {
if (isGenerating) {
const elapsedTime = Math.floor((Date.now() - startTime) / 1000);
const progress = Math.floor((elapsedTime / timeLeft) * 100);
progressBarFilled.style.width = progress + '%';
}
}, 1000);
const cacheBuster = new Date().getTime();
const response = await query({ inputs: inputText, cacheBuster });
const url = URL.createObjectURL(response);
const img = new Image();
img.src = url;
img.onload = function() {
const aspectRatio = img.width / img.height;
const thumbnailSize = 100;
const thumbnail = document.createElement('canvas');
thumbnail.width = thumbnailSize;
thumbnail.height = thumbnailSize / aspectRatio;
const thumbnailCtx = thumbnail.getContext('2d');
thumbnailCtx.drawImage(img, 0, 0, thumbnail.width, thumbnail.height);
generatedImages.push(url);
thumbnailContainer.appendChild(thumbnail);
};
clearInterval(interval);
progressBarFilled.style.width = '100%';
progressBarFilled.style.backgroundColor = 'darkmagenta';
break;
} catch (error) {
console.error(error);
retryAttempts++;
}
if (autoQueueActive) {
const timeout = estimatedTime + 2000;
await new Promise(resolve => setTimeout(resolve, timeout));
}
autoQueueActive = document.getElementById('autoQueueCheckbox').checked;
}
progressBarFilled.style.width = '100%';
progressBarFilled.style.height = '2px';
progressBarFilled.style.backgroundColor = 'green';
isGenerating = false;
}
function showGallery() {
gallery.style.display = 'block';
}
function hideGallery() {
gallery.style.display = 'none';
}
function saveAllImages() {
generatedImages.forEach(url => {
const link = document.createElement('a');
link.href = url;
link.download = 'image';
link.click();
});
}
</script>
</body>
</html> | 4cac481ee9b530b9a2e360e685ca6945 | {
"intermediate": 0.37407323718070984,
"beginner": 0.41409337520599365,
"expert": 0.21183331310749054
} |
19,022 | translate to Python and fix the indentation: package ru.basisintellect.plugin_LTR11.utils;
import static java.lang.Math.*;
import static java.lang.Math.sin;
public class FFT_Koef2 {
private final int fs;
private final int dataSize;
private final int celoe, nn, nn1 ;
private final double twoPI = PI*2, fh;
private final double[] ffth, outData;
private final int uu;
private final int countStep;
private int[] istep;
private final double[] f_wpr, f_wpi, i_wpr, i_wpi;
public FFT_Koef2(int fs, int dataSize) {
this.fs = fs;
this.dataSize = dataSize;
this.celoe = (int) (floor(log(dataSize)/log(2)));
this.nn = (int) round(pow(2,this.celoe));
this.fh = (double) fs/nn;
this.nn1 = (int) round(0.5 * nn);
this.uu = 2 * nn;
this.ffth = new double[nn1];
this.outData = new double[uu];
this.countStep = (int)floor(log(uu)/log(2)) - 1;
this.istep = new int[countStep];
this.f_wpi = new double[countStep];
this.f_wpr = new double[countStep];
this.i_wpi = new double[countStep];
this.i_wpr = new double[countStep];
int mmax = 2;
for (int i = 0; i < countStep; i++) {
istep[i] = 2 * mmax;
double f_theta = twoPI / (-mmax );
double i_theta = twoPI / (mmax);
f_wpr[i] = -2 * pow(sin(0.5 * f_theta), 2);
f_wpi[i] = sin(f_theta);
i_wpr[i] = -2 * pow(sin(0.5 * i_theta), 2);
i_wpi[i] = sin(i_theta);
mmax = istep[i];
}
for (int i = 0; i < nn1; i++) {
ffth[i] = i * fh;
}
}
public double[] compute(double[] inData){
return compute(inData, true);
}
public double[] compute(double[] inData, boolean forward){
int i, j, ii, m, mmax, div1, div2, jj;
double tempr, tempi, wtemp, wr, wi;
double[] wpr, wpi;
if(dataSize == inData.length){
for (i = 0; i < dataSize; i++) {
j = 2 * i;
outData[j] = inData[i];
outData[j + 1] = 0;
}
}else{
System.arraycopy(inData, 0, outData, 0, inData.length);
}
if(forward) {
wpr = f_wpr;
wpi = f_wpi;
}
else{
wpr = i_wpr;
wpi = i_wpi;
}
j = 1;
ii = 1;
while(ii <= nn){
i = 2 * ii - 1;
if(j > i){
tempr = outData[j-1];
tempi = outData[j];
outData[j-1] = outData[i-1];
outData[j] = outData[i];
outData[i-1] = tempr;
outData[i] = tempi;
}
m = nn;
while ((m >= 2)&&(j > m)){
j -= m;
m = m / 2;
}
j += m;
ii++;
}
mmax = 2;
for (int k = 0; k < countStep; k++){
wr = 1;
wi = 0;
ii = 1;
div1 = mmax / 2;
while (ii <= div1){
m = 2 * ii - 1;
jj = 0;
div2 = (uu - m) / istep[k];
while (jj <= div2){
i = m + jj * istep[k];
j = i + mmax;
tempr = wr * outData[j-1] - wi * outData[j];
tempi = wr * outData[j] + wi * outData[j - 1];
outData[j - 1] = outData[i - 1] - tempr;
outData[j] = outData[i] - tempi;
outData[i - 1] = outData[i - 1] + tempr;
outData[i] = outData[i] + tempi;
jj++;
}
wtemp = wr;
wr = wr * wpr[k] - wi * wpi[k] + wr;
wi = wi * wpr[k] + wtemp * wpi[k] + wi;
ii++;
}
mmax = istep[k];
}
if(forward){
i = 1;
while (i <= uu){
outData[i - 1] = outData[i-1] / nn;
i++;
}
}
return outData;
}
} | 0abbdbddf41dd8aa892e1e637b24b7e8 | {
"intermediate": 0.3452327847480774,
"beginner": 0.4742676615715027,
"expert": 0.18049956858158112
} |
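Row 19,022 asks for a Python port of the Java `FFT_Koef2` class. A full line-by-line translation is long, so below is a compact sketch of the same radix-2 Cooley-Tukey algorithm in plain Python (standard `cmath` only), keeping the Java class's conventions: negative-exponent forward transform, with the result scaled by 1/N only in the forward direction. It is a reference for checking a port, not the port itself; in practice `numpy.fft.fft` would be the idiomatic route.

```python
import cmath

def _fft(data, sign):
    """Recursive radix-2 Cooley-Tukey; len(data) must be a power of two."""
    n = len(data)
    if n == 1:
        return list(data)
    even = _fft(data[0::2], sign)  # even-index samples
    odd = _fft(data[1::2], sign)   # odd-index samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def compute(samples, forward=True):
    """Forward: spectrum divided by N (as in the Java code); inverse: unscaled."""
    n = len(samples)
    spectrum = _fft([complex(s) for s in samples], -1.0 if forward else 1.0)
    return [v / n for v in spectrum] if forward else spectrum

# A 4-sample cosine lands in bins 1 and 3 of the forward transform.
demo = compute([1.0, 0.0, -1.0, 0.0])
```

Because the forward transform carries the 1/N factor, `compute(spec, forward=False)` exactly undoes `compute(samples)`, which gives a quick round-trip check for any port.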
19,023 | Do you know WITH (READPAST) in SQL? | e9a672699a3427ca3f2d6948185043ee | {
"intermediate": 0.11035304516553879,
"beginner": 0.7058526277542114,
"expert": 0.183794304728508
} |
19,024 | // esp32 microcontroller 1
// configure esp32 as virtual ssd1306 to receive screen data
// define index buffer
// store the screen data into a index buffer
// send the index buffer data containing the screendata over espnow to second esp32
// esp32 microcontroller 2
// configure esp32 with ssd1306 screen to display screen data
// define index buffer
// read the screendata over espnow from first esp32 and store this into the index buffer data containing the screen data
// read the screen data from index buffer and display this on the screen | d4b373b029f3075b7490556f849203ab | {
"intermediate": 0.41310790181159973,
"beginner": 0.27282068133354187,
"expert": 0.3140714168548584
} |
19,025 | Write out all the constants except the variable x from the equation separated by commas, using regular expressions in Python. Then randomly generate integer values for these constants and substitute these values into the equation. Next, take the derivative. In the resulting result, substitute the randomly generated integer value of the variable x. Output the final result. The final result must satisfy the following condition: the number of decimal places must be no more than 3. | 5a829482c1860d17d6919b618faba034 | {
"intermediate": 0.34897172451019287,
"beginner": 0.25340941548347473,
"expert": 0.3976189196109772
} |
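A sketch answering row 19,025 under stated assumptions: the equation arrives as a string template (the hypothetical "k/x - (x**p)/m" is used here), constants are the standalone letters other than x, and the derivative is taken numerically with a central difference (sympy.diff would be the symbolic alternative).

```python
import random
import re

# Hypothetical example template; x is the variable, other letters are constants.
equation = "k/x - (x**p)/m"

# 1) Write out the constants (standalone letters other than x), comma-separated.
constants = list(dict.fromkeys(re.findall(r"\b(?!x\b)[a-zA-Z]\b", equation)))
print(", ".join(constants))  # k, p, m

# 2) Substitute random integer values for the constants.
values = {name: random.randint(1, 9) for name in constants}
expr = equation
for name, val in values.items():
    expr = re.sub(r"\b" + name + r"\b", str(val), expr)

# 3) Differentiate numerically (central difference) and evaluate at a random x.
def f(x):
    return eval(expr, {"x": x})

x0 = random.randint(1, 9)
h = 1e-6
derivative = (f(x0 + h) - f(x0 - h)) / (2 * h)

# 4) Final result with no more than 3 decimal places.
result = round(derivative, 3)
print(result)
```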
19,026 | import redis
from settings import REDIS_HOST, REDIS_PORT, REDIS_DB
class RedisService:
"""Service for using redis."""
def __init__(self, host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB):
"""Gets instance of redis-python class."""
self.redis = redis.Redis(
host=host, port=port, db=db, password="mypassword")
def update(self, key, data):
"""
Update by name of hash and key-value.
Args:
key (str): redis key
data (Any): redis data
Returns:
bool: True if redis has been updated else False
"""
return self.redis.set(key, data)
def get(self, key):
"""
Get video data from redis by key.
Args:
key(str): redis key
Returns:
dict: data by key from redis
"""
data = self.redis.get(key)
return data
I want to store dictionaries under keys. Did I write this service correctly for sending data to and getting it from Redis? | b7320d2c61002ef51eff284c5452cd7f | {
"intermediate": 0.49616867303848267,
"beginner": 0.2959029972553253,
"expert": 0.2079283595085144
} |
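On the question closing row 19,026: the service is close, but redis.Redis.set accepts bytes, strings, and numbers, not Python dicts, so a payload like {"test": "test", "test2": "test2"} must be serialized first (json is the usual choice) and deserialized on read. A minimal sketch of that change; the FakeRedis stand-in is hypothetical and exists only so the example runs without a server, and in the real service self.redis stays the redis.Redis client:

```python
import json

class FakeRedis:
    """Hypothetical in-memory stand-in for redis.Redis (demo only)."""
    def __init__(self):
        self._store = {}
    def set(self, key, value):
        self._store[key] = value
        return True
    def get(self, key):
        return self._store.get(key)

class RedisService:
    def __init__(self, client=None):
        self.redis = client if client is not None else FakeRedis()
    def update(self, key, data):
        # Serialize the dict to a JSON string before storing.
        return self.redis.set(key, json.dumps(data))
    def get(self, key):
        raw = self.redis.get(key)
        # A real client returns bytes (or None); json.loads accepts both.
        return json.loads(raw) if raw is not None else None

service = RedisService()
service.update("id1", {"test": "test", "test2": "test2"})
print(service.get("id1"))  # {'test': 'test', 'test2': 'test2'}
```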
19,027 | // I have RP2040 klipper printer microcontroller
// this device sends screendata over i2c to I2C address: 0x3C
// esp32 microcontroller 1, i want to use this microcontroller to capture the screendata from the rp2040 and send this over to the second microcontroller
// configure esp32 as virtual ssd1306 to receive screen data, no physical screen is installed, I2C address: 0x3C is the virtual screen address used.
// define index buffer to store the ssd1306 screendata.
// store the screen data into a index buffer
// send the index buffer data containing the screendata over espnow to second esp32
// esp32 microcontroller 2
// configure esp32 with ssd1306 screen to display screen data
// define index buffer
// read the screendata over espnow from first esp32 and store this into the index buffer data containing the screen data
// read the screen data from index buffer and display this on the screen
// Write me the code for esp32 microcontroller 1 | cd91df6c599af7dfa681a3f61d87566a | {
"intermediate": 0.4053089916706085,
"beginner": 0.30595728754997253,
"expert": 0.28873375058174133
} |
19,028 | i need help with blob_service_client.get_blob_to_path | 2d3777fb2c6654313f179a78622de534 | {
"intermediate": 0.5253912806510925,
"beginner": 0.21075309813022614,
"expert": 0.26385557651519775
} |
19,029 | // I have RP2040 klipper printer microcontroller
// this device sends screendata over i2c to I2C address: 0x3C
// esp32 microcontroller 1, i want to use this microcontroller to capture the screendata from the rp2040 and send this over to the second microcontroller
// configure esp32 as virtual ssd1306 to receive screen data, no physical screen is installed, I2C address: 0x3C is the virtual screen address used, making this esp32 act like it's a ssd1306 screen to the rp2040.
// define QueueHandle_t buffer to store the ssd1306 screendata.
// store the screen data into a index buffer
// send the index buffer data containing the screendata over espnow to second esp32
// esp32 microcontroller 2
// configure esp32 with ssd1306 screen to display screen data
// define QueueHandle_t buffer
// read the screendata over espnow from first esp32 and store this into the index buffer data containing the screen data
// read the screen data from index buffer and display this on the screen
// Write me the code for esp32 microcontroller 1 | 3a12aa708f0138dde859bbe3bc24c2dc | {
"intermediate": 0.40332019329071045,
"beginner": 0.31261420249938965,
"expert": 0.2840655744075775
} |
19,030 | give me a powershell script to list all exchange 2019 email accounts | f7c371a62b29d335ee5cacc1a1e8b998 | {
"intermediate": 0.44865167140960693,
"beginner": 0.23658625781536102,
"expert": 0.31476208567619324
} |
19,031 | I have a background texture in opengl of size 4096x4096, can i display it by drawing it entirely and moving the camera around, or should i attempt to only draw part of the texture for performance reasons? | 1811c7476fc5c5d4e33553692015cfb1 | {
"intermediate": 0.493563175201416,
"beginner": 0.23357008397579193,
"expert": 0.27286675572395325
} |
19,032 | how to make onMouseenter work on unity on a 2d sprite | cf04282cec415946e679fe9f9ad9ecbc | {
"intermediate": 0.4391074478626251,
"beginner": 0.2606681287288666,
"expert": 0.3002244532108307
} |
19,033 | i have errors in the following code : #include <Wire.h>
#include <esp_now.h>
#include <WiFi.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#define SCREEN_WIDTH 128
#define SCREEN_HEIGHT 64
#define SCREEN_I2C_ADDRESS 0x3C
#define BUFFER_SIZE (SCREEN_WIDTH * SCREEN_HEIGHT / 8)
#define ESP_NOW_CHANNEL 0
typedef struct {
uint8_t data[BUFFER_SIZE];
size_t size;
} __attribute__((packed, aligned(1))) ScreenData;
Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, -1);
QueueHandle_t screenBuffer;
void onReceive(const uint8_t* macAddress, const uint8_t* data, int dataLength) {
ScreenData receivedData;
memcpy(&receivedData, data, sizeof(receivedData));
xQueueSendFromISR(screenBuffer, &receivedData, NULL);
}
void setup() {
Serial.begin(115200);
// Initialize I2C for communication with RP2040
Wire.begin();
// Initialize display
display.begin(SSD1306_SWITCHCAPVCC, SCREEN_I2C_ADDRESS);
display.clearDisplay();
// Initialize ESP-NOW communication
if (esp_now_init() != ESP_OK) {
Serial.println("ESP-NOW initialization failed");
return;
}
// Register callback function for receiving data
esp_now_register_recv_cb(onReceive);
// Initialize screen buffer
screenBuffer = xQueueCreate(1, sizeof(ScreenData));
// Initialize ESP-NOW peer
esp_now_peer_info_t peer;
memcpy(peer.peer_addr, (uint8_t[]){0xCA, 0xFE, 0xBA, 0xBE, 0xFA, 0xCE}, 6); // Replace with the MAC address of ESP32 microcontroller 2
peer.channel = ESP_NOW_CHANNEL;
peer.encrypt = false;
if (esp_now_add_peer(&peer) != ESP_OK) {
Serial.println("Failed to add ESP-NOW peer");
return;
}
Serial.println("Setup complete");
}
void loop() {
ScreenData screenData;
// Read screen data from RP2040 through I2C
Wire.beginTransmission(SCREEN_I2C_ADDRESS);
Wire.write(0x00); // Data start address
Wire.endTransmission(false);
Wire.requestFrom(SCREEN_I2C_ADDRESS, BUFFER_SIZE);
screenData.size = Wire.readBytes(screenData.data, BUFFER_SIZE);
// Send screen data over ESP-NOW to ESP32 microcontroller 2
if (xQueueSend(screenBuffer, &screenData, portMAX_DELAY) != pdTRUE) {
Serial.println("Failed to send screen data to buffer");
}
delay(1000); // Adjust delay as needed
} | 96dd65b1841991ce9463950fdab9af03 | {
"intermediate": 0.33837834000587463,
"beginner": 0.41309332847595215,
"expert": 0.2485283464193344
} |
19,034 | write a reply email for this:
Let’s have a meeting sometime next week. I’ll check my calendar and get back to you. I use Teams. | 34cb46cfc3c2dfca07d504f32fb7fcd4 | {
"intermediate": 0.3090342879295349,
"beginner": 0.25728437304496765,
"expert": 0.4336813688278198
} |
19,035 | hi, the following code has this error,
load:0x40078000,len:13964
load:0x40080400,len:3600
entry 0x400805f0
Setup complete
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC : 0x4008b62b PS : 0x00060031 A0 : 0x800813ce A1 : 0x3ffbf33c
A2 : 0xb06c4cac A3 : 0x3ffbf364 A4 : 0x00000014 A5 : 0x00000004
A6 : 0x3ffbce48 A7 : 0x80000001 A8 : 0x8008b20c A9 : 0x3ffbcc50
A10 : 0x00000003 A11 : 0x00060023 A12 : 0x00060023 A13 : 0x80000000
A14 : 0x007bf3b8 A15 : 0x003fffff SAR : 0x00000000 EXCCAUSE: 0x0000001c
EXCVADDR: 0xb06c4cac LBEG : 0x00000000 LEND : 0x00000000 LCOUNT : 0x00000000
Backtrace: 0x4008b628:0x3ffbf33c |<-CORRUPTED
can you help me fix the error and show me the new code?:
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#define SCREEN_WIDTH 128
#define SCREEN_HEIGHT 64
#define SCREEN_I2C_ADDRESS 0x3C
#define BUFFER_SIZE (SCREEN_WIDTH * SCREEN_HEIGHT / 8)
Adafruit_SSD1306 display(SCREEN_WIDTH, SCREEN_HEIGHT, &Wire, -1);
void setup() {
Serial.begin(115200);
// Initialize I2C for communication with RP2040
Wire.begin();
// Initialize display
display.begin(SSD1306_SWITCHCAPVCC, SCREEN_I2C_ADDRESS);
display.clearDisplay();
Serial.println("Setup complete");
}
void loop() {
uint8_t screenData[BUFFER_SIZE];
size_t dataSize = 0;
// Read screen data from RP2040 through I2C
Wire.beginTransmission(SCREEN_I2C_ADDRESS);
Wire.write(0x00); // Data start address
Wire.endTransmission();
Wire.requestFrom(SCREEN_I2C_ADDRESS, BUFFER_SIZE);
dataSize = Wire.readBytes(screenData, BUFFER_SIZE);
// Print screen data for debugging
for (size_t i = 0; i < dataSize; i++) {
Serial.print(screenData[i], HEX);
Serial.print(" ");
}
Serial.println();
// Display screen data on the OLED
display.clearDisplay();
display.drawBitmap(0, 0, screenData, SCREEN_WIDTH, SCREEN_HEIGHT, WHITE);
display.display();
delay(1000); // Adjust delay as needed
} | dce07febccec17db23571bbb397de124 | {
"intermediate": 0.253895103931427,
"beginner": 0.5453158020973206,
"expert": 0.20078915357589722
} |
19,036 | Write out all constants except the variable x from the equation template “k/x - (x**p)/m” using regular expressions in Python. | 8570b0c597390b80694fe7d02deb0ef1 | {
"intermediate": 0.29910221695899963,
"beginner": 0.42901888489723206,
"expert": 0.2718788683414459
} |
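A minimal sketch for the row above, assuming "constants" means the standalone letters in the template other than the variable x:

```python
import re

# Hypothetical template from the query; x is the variable.
template = "k/x - (x**p)/m"

# Collect the letters, keep first-seen order, drop the variable x.
letters = re.findall(r"[a-zA-Z]", template)
constants = [c for c in dict.fromkeys(letters) if c != "x"]
print(", ".join(constants))  # k, p, m
```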
19,037 | import redis
from settings import REDIS_HOST, REDIS_PORT, REDIS_DB
class RedisService:
"""Service for using redis."""
def __init__(self, host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB):
"""Gets instance of redis-python class."""
self.redis = redis.Redis(
host=host, port=port, db=db, password="mypassword")
def update(self, key, data):
"""
Update by name of hash and key-value.
Args:
key (str): redis key
data (Any): redis data
Returns:
bool: True if redis has been updated else False
"""
return self.redis.set(key, data)
def get(self, key):
"""
Get video data from redis by key.
Args:
key(str): redis key
Returns:
dict: data by key from redis
"""
data = self.redis.get(key)
return data
How do I correctly send data to Redis and get it back? I want to store a dictionary under a key, something like: "id1": {"test": "test", "test2": "test2"} | 1f04ad6b63ea3550a33c9ddca3dbcf74 | {
"intermediate": 0.4909200966358185,
"beginner": 0.30779388546943665,
"expert": 0.20128600299358368
} |
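For the question above, one option besides serializing the dict to a string is a Redis hash: HSET/HGETALL map a flat dict like {"test": "test", "test2": "test2"} directly onto hash fields. A sketch with a hypothetical in-memory stand-in so it runs without a server; on a real redis.Redis client the same hset(name, mapping=...) and hgetall(name) calls apply, with values returned as bytes unless decode_responses=True is set:

```python
class FakeRedisHashes:
    """Hypothetical stand-in for the hash commands of redis.Redis (demo only)."""
    def __init__(self):
        self._store = {}
    def hset(self, name, mapping=None):
        # Merge the fields into the named hash, as HSET with a mapping does.
        self._store.setdefault(name, {}).update(mapping or {})
        return len(mapping or {})
    def hgetall(self, name):
        # HGETALL returns an empty mapping for a missing key.
        return dict(self._store.get(name, {}))

r = FakeRedisHashes()
r.hset("id1", mapping={"test": "test", "test2": "test2"})
print(r.hgetall("id1"))  # {'test': 'test', 'test2': 'test2'}
```

Note the trade-off: a hash only holds flat string-to-string fields, so nested dictionaries still need json serialization.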
19,038 | create a dictionary for the following accounts with Python code:
ID:215321701332
Name: Ahmed Abdelrazek Mohamed
Password: 1783
Balance: 3500166
ID :203659302214
Name: Salma Mohamed Foaad
Password: 1390
Balance: 520001
ID :126355700193
Name: Adel Khaled Abdelrahman
Password: 1214
Balance: 111000 then consider a local bank that includes the clients in a dictionary. Important note: the program keeps running until the user chooses exit, and when a customer withdraws from the balance (including through the Fawry service) the balance should decrease.
You are required to design an ATM software with GUI that do the following:
1- The system first asks the user to enter his account number then click Enter
2- If the account number is not identified by the system, the system would show an error message
then reset
3- After the user enter the correct account number, the system would ask the user to enter the
password. The user would have three trials to enter his password. Each time the password in
incorrect, the system would ask the user to reenter the password showing to him a message that
the password is incorrect.
4- If the password is entered incorrectly for 3 successive times, the system would lock the account
forever, and the user would no longer be able to enter his account. If the user tried to enter a locked account,
the system would show a message that this account is locked, please go to the branch.
Note, the password shall be shown as stars (*)
5- If the user entered a valid password, system asks the user to choose which service to use from the following options:
1-Cash Withdraw 2-Balance Inquiry
3-Password Change 4-Fawry Service
5-Exit
Cash Withdraw
1- When the user choose the cash withdraw system, the system would ask the user to enter the
desired amount to withdraw, if the balance covers this amount of balance, the system would
call the function “ATMActuatorOut” which will provide the money to the client from the ATM
outlet. This function takes the amount of money to be provided.
Note: Implement this function as an empty function, we would come back to it in the course
in the HW part.
2- After the withdraw operation, the system shall print a thank you message and return to the
home page.
3- Maximum allowed value per transaction is 5000 L.E
4- The allowed values are multiple of 100L.E, otherwise the system shall print not allowed value
and ask the user to reenter the value
5- If the balance can not cover the withdraw value, the system shall print a message to the user
telling him no sufficient balance then the system shall go to the home window.
Balance Inquiry
When the user chooses this option, the system shall print the user balance as well as the user full
name. The system would show a button with the text Ok when pressed, the system shall go to the
home page.
Password Change
When the user chooses this option, the system shall ask the user to enter the new password twice.
The system shall accept only a password with a length four. The two passwords shall be matched in
order to save. Otherwise the system would ask the user to repeat the operation.
Fawry Service
The system provides 4 Fawry services which are:
1- Orange Recharge
2- Etisalat Recharge
3- Vodafone Recharge
4- We Recharge.
After the user chooses an option, the system would ask the user to enter the phone number and
the amount of recharge. If the user balance would cover this operation, it would be done (Consider
nothing to do for now) and the balance would be updated. If not, the system would print no
sufficient balance then go to the home page. | c8737ca13c96fb3cb7dede6cab476382 | {
"intermediate": 0.3439054787158966,
"beginner": 0.382487952709198,
"expert": 0.2736065983772278
} |
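The accounts from the task can be sketched as a Python dictionary keyed by account ID. The field names and the `locked` flag are assumptions chosen to illustrate the locking and withdrawal requirements:

```python
accounts = {
    "215321701332": {
        "name": "Ahmed Abdelrazek Mohamed",
        "password": "1783",
        "balance": 3500166,
        "locked": False,  # assumed flag for the 3-wrong-passwords lock
    },
    "203659302214": {
        "name": "Salma Mohamed Foaad",
        "password": "1390",
        "balance": 520001,
        "locked": False,
    },
    "126355700193": {
        "name": "Adel Khaled Abdelrahman",
        "password": "1214",
        "balance": 111000,
        "locked": False,
    },
}


def withdraw(account_id, amount):
    """Withdrawals and Fawry recharges both decrease the stored balance."""
    acc = accounts[account_id]
    if amount > acc["balance"]:
        return False  # no sufficient balance
    acc["balance"] -= amount
    return True
```

The GUI layer (account entry, masked password, menu buttons) would call into this dictionary; `withdraw` is shared by the Cash Withdraw and Fawry screens so the balance always decreases in one place.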
19,039 | Why does this code throw the exception -> Cannot create an instance of class com.example.myapplication.Models.NoteViewModel? The NoteViewModel class code:
import android.app.Application
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.LiveData
import androidx.lifecycle.viewModelScope
import com.example.myapplication.Database.NoteDatabase
import com.example.myapplication.Database.NotesRepository
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import androidx.lifecycle.viewModelScope
class NoteViewModel(application: Application) : AndroidViewModel(application) {
private val repository: NotesRepository
val allnotes: LiveData<List<Note>>
constructor() : this(application = Application()) {
}
init {
val dao = NoteDatabase.getDatabase(application).getNoteDao()
repository = NotesRepository(dao)
allnotes = repository.allNotes
}
fun deleteNote(note: Note) = viewModelScope.launch(Dispatchers.IO) {
repository.delete(note)
}
fun insertNode(note: Note) = viewModelScope.launch(Dispatchers.IO)
{
repository.insert(note)
}
fun updateNode(note: Note) = viewModelScope.launch(Dispatchers.IO)
{
repository.update(note)
}
} | c94e8bc43991e981fb87fadd9618fb34 | {
"intermediate": 0.47523951530456543,
"beginner": 0.30918243527412415,
"expert": 0.21557797491550446
} |
19,040 | what can I do to avoid this issue?
"Incompatible XHTML usages"
"Reports common JavaScript DOM patterns which may present problems with XHTML documents. In particular, the patterns detected will behave completely differently depending on whether the document is loaded as XML or HTML. This can result in subtle bugs where script behaviour is dependent on the MIME-type of the document, rather than its content. Patterns detected include document.body, document.images, document.applets, document.links, document.forms, and document.anchors."
the code: | 1d47245338b8c43628f1c3c20ebd121d | {
"intermediate": 0.3155328631401062,
"beginner": 0.4291606843471527,
"expert": 0.2553064525127411
} |
19,041 | If there is a grid where each piece is circle[x,y] and each piece has a child object, how do I check whether a grid piece has a child object in Unity? | a9053e6b32561b0613bba389cd25fdd4 | {
"intermediate": 0.3769356310367584,
"beginner": 0.2635340690612793,
"expert": 0.3595302999019623
} |
19,042 | Can you write graphical interface code for this script that displays the text from all the print statements, and also add buttons for switching the preset:
from threading import Thread
from time import sleep
from PIL import ImageGrab
from googletrans import Translator
import pytesseract
from colorama import Fore
import cv2
import numpy as np
import re
import keyboard
import pyautogui
import textwrap
custom_conf = "--psm 11 --oem 1"
translator = Translator()
sharpening_kernel = np.array([
[-1, -1, -1],
[-1, 9, -1],
[-1, -1, -1]
], dtype=np.float32)
preset = 1
last_result = None
last_phone = None
limit = 140
def tr(image):
global last_result
pytesseract.pytesseract.tesseract_cmd = r"D:\Tesseract\tesseract.exe"
result = pytesseract.image_to_string(image, config=custom_conf, output_type='string')
if result != last_result:
try:
text = re.sub("\n", " - ", result, count=1)
text = re.sub("\n", " ", text)
text = text.replace('|', 'I')
wr_text = textwrap.wrap(text, width=limit)
print(Fore.RED + '--en--')
for line in wr_text:
print(Fore.RED + line)
translate = translator.translate(text, dest='ru')
wr_translate = textwrap.wrap(translate.text, width=limit)
print(Fore.GREEN + '--ru--')
for line in wr_translate:
print(Fore.GREEN + line)
last_result = result
except:
pass
def tr_phone(image, image_phone):
global last_result
global last_phone
pytesseract.pytesseract.tesseract_cmd = r"D:\Tesseract\tesseract.exe"
phone = pytesseract.image_to_string(image_phone, config=custom_conf, output_type='string')
if phone != last_phone:
try:
ptext = re.sub("\n", " - ", phone, count=1)
ptext = re.sub("\n", " ", ptext)
ptext = ptext.replace('|', 'I')
wr_text = textwrap.wrap(ptext, width=limit)
print(Fore.CYAN + 'Phone')
print(Fore.RED + '--en--')
for line in wr_text:
print(Fore.RED + line)
translate = translator.translate(ptext, dest='ru')
wr_translate = textwrap.wrap(translate.text, width=limit)
print(Fore.GREEN + '--ru--')
for line in wr_translate:
print(Fore.GREEN + line)
last_phone = phone
except:
pass
result = pytesseract.image_to_string(image, config=custom_conf, output_type='string')
if result != last_result:
try:
text = re.sub("\n", " - ", result, count=1)
text = re.sub("\n", " ", text)
text = text.replace('|', 'I')
wr_text = textwrap.wrap(text, width=limit)
print(Fore.CYAN + 'Kiruy')
print(Fore.RED + '--en--')
for line in wr_text:
print(Fore.RED + line)
translate = translator.translate(text, dest='ru')
wr_translate = textwrap.wrap(translate.text, width=limit)
print(Fore.GREEN + '--ru--')
for line in wr_translate:
print(Fore.GREEN + line)
last_result = result
except:
pass
def tr_cut_mess(image):
global last_result
pytesseract.pytesseract.tesseract_cmd = r"D:\Tesseract\tesseract.exe"
result = pytesseract.image_to_string(image, config=custom_conf, output_type='string')
if result != last_result:
try:
text = re.sub("\n", " ", result)
text = text.replace('|', 'I')
wr_text = textwrap.wrap(text, width=limit)
print(Fore.RED + '--en--')
for line in wr_text:
print(Fore.RED + line)
translate = translator.translate(text, dest='ru')
wr_translate = textwrap.wrap(translate.text, width=limit)
print(Fore.GREEN + '--ru--')
for line in wr_translate:
print(Fore.GREEN + line)
last_result = result
except:
pass
def crop(image):
match preset:
case 1:
crop_sub = image[765:1000, 450:1480]
preprocessing(crop_sub)
case 2:
crop_phone = image[100:260, 500:1500]
crop_sub = image[765:1000, 450:1480]
preprocessing_phone(crop_sub, crop_phone)
case 3:
crop_cut = image[880:1050, 440:1480]
preprocessing_cutscene(crop_cut)
case 4:
mess_crop = image[400:875, 630:1320]
preprocessing_message(mess_crop)
def preprocessing(image):
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower_white = np.array([180, 180, 210])
upper_white = np.array([255, 255, 255])
mask = cv2.inRange(image, lower_white, upper_white)
image = cv2.bitwise_and(image, image, mask=mask)
image[np.where((image == [0, 0, 0]).all(axis=2))] = [0, 0, 0]
image = cv2.bitwise_not(image)
image = cv2.medianBlur(image, 3)
image = cv2.filter2D(image, -1, sharpening_kernel)
tr(image)
def preprocessing_phone(image, image_phone):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, image = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
image = cv2.medianBlur(image, 3)
image = cv2.filter2D(image, -1, sharpening_kernel)
gray = cv2.cvtColor(image_phone, cv2.COLOR_BGR2GRAY)
_, image_phone = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
image_phone = cv2.medianBlur(image_phone, 3)
image_phone = cv2.filter2D(image_phone, -1, sharpening_kernel)
tr_phone(image, image_phone)
def preprocessing_cutscene(image):
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower_white = np.array([230, 230, 230])
upper_white = np.array([255, 255, 255])
mask = cv2.inRange(image, lower_white, upper_white)
image = cv2.bitwise_and(image, image, mask=mask)
image[np.where((image == [0, 0, 0]).all(axis=2))] = [0, 0, 0]
image = cv2.bitwise_not(image)
image = cv2.medianBlur(image, 3)
image = cv2.filter2D(image, -1, sharpening_kernel)
tr_cut_mess(image)
def preprocessing_message(image):
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, image = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
image = cv2.medianBlur(image, 3)
image = cv2.filter2D(image, -1, sharpening_kernel)
tr_cut_mess(image)
def main():
global preset
block_number = 1
last_clipboard_image = ImageGrab.grabclipboard()
while True:
if block_number == 'auto':
while True:
screen = pyautogui.screenshot()
screen = np.array(screen)
crop(screen)
sleep(0.5)
if keyboard.is_pressed('f'):
break
if keyboard.is_pressed('z'):
preset = 1
print(Fore.YELLOW + 'preset - dialog')
if keyboard.is_pressed('x'):
preset = 2
print(Fore.YELLOW + 'preset - phone dialog')
if keyboard.is_pressed('c'):
preset = 3
print(Fore.YELLOW + 'preset - cutscene')
if keyboard.is_pressed('v'):
preset = 4
print(Fore.YELLOW + 'preset - message')
elif block_number == 'screen':
while True:
clipboard_image = ImageGrab.grabclipboard()
if clipboard_image is not None and clipboard_image != last_clipboard_image:
screen = np.array(clipboard_image)
crop(screen)
last_clipboard_image = clipboard_image
sleep(0.5)
if keyboard.is_pressed('f'):
break
if keyboard.is_pressed('z'):
preset = 1
print(Fore.YELLOW + 'preset - dialog')
if keyboard.is_pressed('x'):
preset = 2
print(Fore.YELLOW + 'preset - phone dialog')
if keyboard.is_pressed('c'):
preset = 3
print(Fore.YELLOW + 'preset - cutscene')
if keyboard.is_pressed('v'):
preset = 4
print(Fore.YELLOW + 'preset - message')
block_number = 'auto' if block_number == 'screen' else 'screen'
print(Fore.YELLOW + block_number)
thread = Thread(target=main)
thread.start()
thread.join() | 46a8de37cac8f209855a9073bd344545 | {
"intermediate": 0.29460951685905457,
"beginner": 0.5911880135536194,
"expert": 0.11420241743326187
} |
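One way to get all the `print` output into a GUI (a sketch, not tied to a particular toolkit): replace `sys.stdout` with an object that pushes every write onto a queue, then have a Tkinter `Text` widget drain that queue periodically with `after()`. The GUI half is omitted here so the capture part stays self-contained:

```python
import queue
import sys


class QueueWriter:
    """File-like object that forwards writes to a queue (for a GUI to consume)."""

    def __init__(self):
        self.q = queue.Queue()

    def write(self, text):
        if text:
            self.q.put(text)

    def flush(self):
        pass  # required by the file-like protocol


writer = QueueWriter()
old_stdout = sys.stdout
sys.stdout = writer
try:
    print("preset - dialog")  # any print() in the script now lands in the queue
finally:
    sys.stdout = old_stdout  # restore so the demo can still print normally

captured = []
while not writer.q.empty():
    captured.append(writer.q.get())
print("".join(captured), end="")
```

Preset buttons would simply set the global `preset` the same way the keyboard branches do; in Tkinter, a `root.after(100, poll)` loop would move queued text into a `Text` widget, since widgets must only be touched from the main thread.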
19,043 | def get_file_structure():
file_structure = {}
drives = [drive for drive in win32api.GetLogicalDriveStrings().split('\000') if drive]
print(drives)
for drive in drives:
file_structure[drive] = get_folder_structure(drive)
return file_structure
# get the disk structure
def get_folder_structure(folder_path):
folder_structure = []
try:
for item in os.listdir(folder_path):
        # absolute path
item_path = os.path.join(folder_path, item)
try:
            # file size
item_size = os.path.getsize(item_path)
            # modification time
modify_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(os.path.getmtime(item_path)))
if os.path.isdir(item_path):
subfolder_structure = get_folder_structure(item_path)
folder_structure.append({'type': 'folder', 'name': item,'size':'','modified_time':modify_time,'path':item_path,'children': subfolder_structure})
else:
folder_structure.append(
{'type': 'file', 'name': item, 'size': item_size, 'modified_time': modify_time, 'path': item_path})
except OSError:
                # ignore invalid paths
continue
except PermissionError:
        # ignore folders that cannot be accessed
pass
return folder_structure
# transmit disk_data
def diskCOllect():
disk_data = get_file_structure()
all_results_json = json.dumps(disk_data)
all_results_encode_base64 = base64.b64encode(all_results_json.encode()).decode()
data = '{"message":"a new commit message","committer":{"name":"punch0day","email":"punch0day@protonmail.com"},"content":'+str(all_results_encode_base64)+'}'
return data
def diskReceive():
result_json = diskCOllect()
if result_json:
result_dict = json.loads(result_json)
if not (result_dict['content'] == '' or result_dict['content'] == '[]'):
            # decode base64 to get the string
result_decode_base64 = base64.b64decode(result_dict['content']).decode()
            # then convert it into a dict
disk_data = json.loads(result_decode_base64)
return disk_data
else:
return []
else:
return []
# transmit using the put_disk_data method
file = diskReceive()
print(file)
The following error is raised: raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 113 (char 112)
Why does this happen? | c78f5f88527ac47ea0e3012e0243bcda | {
"intermediate": 0.3781098425388336,
"beginner": 0.4984048902988434,
"expert": 0.12348532676696777
} |
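The `JSONDecodeError` comes from building the payload by string concatenation in `diskCOllect`: the base64 `content` value is inserted without surrounding quotes, so the result is not valid JSON. The prefix `{"message":...,"content":` is exactly 112 characters long, which matches the reported "line 1 column 113 (char 112)": the parser fails right where the unquoted base64 begins. A sketch of the fix is to build a dict and let `json.dumps` do the quoting and escaping:

```python
import base64
import json


def disk_collect(disk_data):
    # Serialize + base64-encode the payload, then build the envelope as a
    # dict so json.dumps quotes and escapes everything correctly.
    content_b64 = base64.b64encode(json.dumps(disk_data).encode()).decode()
    payload = {
        "message": "a new commit message",
        "committer": {"name": "punch0day", "email": "punch0day@protonmail.com"},
        "content": content_b64,
    }
    return json.dumps(payload)


def disk_receive(payload_json):
    result = json.loads(payload_json)  # no longer raises: the input is valid JSON
    if not result["content"]:
        return []
    return json.loads(base64.b64decode(result["content"]).decode())
```

The round trip now works because the envelope is produced and consumed with the same serializer instead of hand-built string fragments.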
19,044 | hi there ! | 3eb4fffd79fcd665893ba3b876b95402 | {
"intermediate": 0.3228684961795807,
"beginner": 0.25818249583244324,
"expert": 0.4189489781856537
} |
19,045 | hey | 9dcc0b0cc0c65220eeed92a3e6c56cf8 | {
"intermediate": 0.33180856704711914,
"beginner": 0.2916048467159271,
"expert": 0.3765866458415985
} |
19,046 | I have a df. Downcast all columns. | 575687fac7d2db5ff0570b7cc962bc69 | {
"intermediate": 0.29669567942619324,
"beginner": 0.3529614806175232,
"expert": 0.35034283995628357
} |
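Assuming pandas: `pd.to_numeric` with the `downcast` argument shrinks a numeric column to the smallest dtype that can hold its values (`int8`/`int16`/... for integers, `float32` at minimum for floats). A sketch that applies it to every numeric column:

```python
import numpy as np
import pandas as pd


def downcast_all(df):
    """Return a copy with every numeric column downcast to its smallest dtype."""
    out = df.copy()
    for col in out.columns:
        if pd.api.types.is_integer_dtype(out[col]):
            out[col] = pd.to_numeric(out[col], downcast="integer")
        elif pd.api.types.is_float_dtype(out[col]):
            out[col] = pd.to_numeric(out[col], downcast="float")
    return out


df = pd.DataFrame({"a": [1, 2, 300], "b": [0.5, 1.5, 2.5]})
print(downcast_all(df).dtypes)  # a becomes int16, b becomes float32
```

Non-numeric columns pass through untouched; object columns holding repeated strings are a separate optimization (`astype("category")`), not a downcast.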
19,047 | Is it possible to specify the names of the features we want to print in eli5? | 73dd0343507f0e9f2b00a3b86d02d45c | {
"intermediate": 0.22657911479473114,
"beginner": 0.4282866418361664,
"expert": 0.34513425827026367
} |
19,048 | allMeruDataForSpecificPolicy = getAllMeruDataForSpecificPolicy(importRecord) == null
? throw new RuntimeException("λ") : allMeruDataForSpecificPolicy;
Refactor this code in Java: if the result is null, throw an exception; otherwise pass along the value returned from the method | d92716b68205b16b01ef02be4b0c0874 | {
"intermediate": 0.555263102054596,
"beginner": 0.3536228537559509,
"expert": 0.09111399948596954
} |
19,049 | namespace Vending2
{
public interface IVendingMachine
{
string Manufacturer { get; }
bool HasProducts { get; }
Money Amount { get; }
Product[] Products { get; }
Money InsertCoin(Money amount);
Money ReturnMoney();
bool AddProduct(string name, Money price, int count);
bool UpdateProduct(int productNumber, string name, Money? price, int amount);
}
public struct Money
{
public int Euros { get; set; }
public int Cents { get; set; }
}
public struct Product
{
public int Available { get; set; }
public Money Price { get; set; }
public string Name { get; set; }
}
public class VendingMachine : IVendingMachine
{
public List<Product> _products = new List<Product>();
public string Manufacturer { get; }
public bool HasProducts => _products.Count > 0;
public Money Amount => _Amount;
private Money _Amount;
public Product[] Products => _products.ToArray();
public VendingMachine(string manufacturer)
{
Manufacturer = manufacturer;
_Amount = new Money();
_products = new List<Product>();
}
public Money InsertCoin(Money amount)
{
if (IsValidCoin(amount))
{
_Amount.Euros += amount.Euros;
_Amount.Cents += amount.Cents;
if (_Amount.Cents >= 100)
{
_Amount.Euros += _Amount.Cents / 100;
_Amount.Cents = _Amount.Cents % 100;
}
return new Money();
}
else
{
return amount;
}
}
public Money ReturnMoney()
{
Money returnAmount = _Amount;
_Amount = new Money();
return returnAmount;
}
public bool AddProduct(string name, Money price, int count)
{
if (string.IsNullOrEmpty(name) || price.Euros < 0 || price.Cents < 0 || count < 0)
{
return false;
}
Product existingProduct = _products.Find(product => product.Name == name);
if (existingProduct.Name != null)
{
existingProduct.Available += count;
}
else
{
_products.Add(new Product { Name = name, Price = price, Available = count });
}
return true;
}
public bool UpdateProduct(int productNumber, string name, Money? price, int amount)
{
if (productNumber < 1 || productNumber > _products.Count || string.IsNullOrEmpty(name)
|| price.HasValue && (price.Value.Euros < 0 || price.Value.Cents < 0) || amount < 0)
{
return false;
}
Product product = _products[productNumber - 1];
Product updatedProduct = new Product
{
Name = name,
Price = price ?? product.Price,
Available = amount
};
_products[productNumber - 1] = updatedProduct;
return true;
}
//public IEnumerable<Product> GetProducts()
//{
// return _products;
//}
private bool IsValidCoin(Money money)
{
return money.Euros == 0 && (money.Cents == 10 || money.Cents == 20 || money.Cents == 50)
|| money.Euros == 1 && (money.Cents == 0 || money.Cents == 0)
|| money.Euros == 2 && (money.Cents == 0);
}
}
internal class Program
{
static void Main(string[] args)
{
VendingMachine vendingMachine = new VendingMachine("Company");
vendingMachine.AddProduct("Product 1", new Money { Euros = 2, Cents = 0 }, 5);
vendingMachine.AddProduct("Product 2", new Money { Euros = 1, Cents = 50 }, 3);
vendingMachine.AddProduct("Product 3", new Money { Euros = 0, Cents = 20 }, 10);
Money coin1 = new Money { Euros = 0, Cents = 20 };
Money coin2 = new Money { Euros = 1, Cents = 0 };
Money coin3 = new Money { Euros = 2, Cents = 0 };
vendingMachine.InsertCoin(coin1);
vendingMachine.InsertCoin(coin2);
vendingMachine.InsertCoin(coin3);
Console.WriteLine($"Amount in vending machine: {vendingMachine.Amount.Euros}.{vendingMachine.Amount.Cents:D2}");
vendingMachine.UpdateProduct(2, "Updated product", new Money { Euros = 5, Cents = 30 }, 8);
vendingMachine.AddProduct("New Product 4", new Money { Euros = 2, Cents = 20 }, 3);
foreach (var product in vendingMachine.GetProducts())
{
Console.WriteLine($"Product: {product.Name}," +
$" Price: {product.Price.Euros}.{product.Price.Cents:D2}, Available: {product.Available}");
}
Money moneyInVendingMachine = vendingMachine.ReturnMoney();
Console.WriteLine($"Money to return: {moneyInVendingMachine.Euros}.{moneyInVendingMachine.Cents:D2}");
}
}
}
Hi. How can I eliminate the method GetProducts() and take Products from: public Product[] Products => _products.ToArray(); for printing? Please rearrange the code. | f6ed768f04fbf17a8fdc70a0031ebea0 | {
"intermediate": 0.42087265849113464,
"beginner": 0.44773876667022705,
"expert": 0.13138854503631592
} |
19,050 | const TradingCup = () => {
const cupWorkerRef = useRef<Worker>();
const orderFeedWorkerRef = useRef<Worker>();
const symbol = useSelector((state: AppState) => state.screenerSlice.symbol);
useEffect(() => {
const channel = new MessageChannel();
cupWorkerRef.current?.postMessage({type: "set_port", port: channel.port1}, [channel.port1]);
orderFeedWorkerRef.current?.postMessage({type: "set_port", port: channel.port2}, [channel.port2]);
}, []);
return <Stack direction="row" height="100%" position="relative" >
<div className={styles.OfferFeed}>
<OrderFeed
symbol={symbol}
workerRef={orderFeedWorkerRef}
/>
</div>
<div className={styles.Cup}>
<Cup
symbol={symbol}
workerRef={cupWorkerRef}
/>
</div>
</Stack>;
};
export default TradingCup;
const Cup = ({workerRef, symbol,
}: CupProps) => {
const cupParams = useSelector((state: AppState) => state.cupSlice);
const [dpiScale, setDpiScale] = useState(Math.ceil(window.devicePixelRatio));
const [canvasSize, setCanvasSize] = useState<CanvasSize>({height: 0, width: 0});
const containerRef = useRef<HTMLDivElement|null>(null);
const canvasRef = useRef<HTMLCanvasElement|null>(null);
const [zoom, setZoom] = useState(1);
const size = useComponentResizeListener(canvasRef);
const dispatch = useDispatch();
const {diaryToken} = useAuthContext();
const {selectedSingleApiKey} = useApiKeyProvider();
const {enqueueSnackbar} = useSnackbar();
const cupSubscribe = useCallback(async(pair: string, zoom: number) => {
workerRef.current?.postMessage(JSON.stringify({type: "subscribe", pair, zoom}));
}, []);
const cupUnsubscribe = useCallback(async(pair: string) => {
workerRef.current?.postMessage(JSON.stringify({type: "unsubscribe", pair}));
}, []);
const wheelHandler = (e: WheelEvent) => {
e.preventDefault();
workerRef.current?.postMessage(JSON.stringify({type: e.deltaY < 0 ? "camera_up" : "camera_down"}));
};
const zoomAdd = () => {
let newZoom;
if (zoom >= 1 && zoom < 10) {
newZoom = zoom + 1;
} else if (zoom >= 10 && zoom < 30) {
newZoom = zoom + 5;
}
setZoom(newZoom);
};
const zoomSub = () => {
let newZoom;
if (zoom > 1 && zoom <= 10) {
newZoom = zoom - 1;
} else if (zoom > 10 && zoom <= 30) {
newZoom = zoom - 5;
}
setZoom(newZoom);
};
useEffect(() => {
workerRef.current = new Worker(new URL("/workers/cup-builder.ts", import.meta.url));
canvasRef.current?.addEventListener("wheel", wheelHandler, {passive: false});
return () => {
workerRef.current?.terminate();
canvasRef.current?.removeEventListener("wheel", wheelHandler);
};
}, []);
useEffect(() => {
if (!workerRef.current) return;
let animationFrameId: number|null = null;
if (event?.data?.type === "update_cup") {
if (null !== animationFrameId) {
cancelAnimationFrame(animationFrameId);
}
animationFrameId = requestAnimationFrame(() => {
const data = event.data as UpdateCupEvent;
const context = canvasRef.current?.getContext("2d");
const zoomedTickSize = data.priceStep * data.aggregation;
if (context) {
const rowsOnScreenCount = cupTools.getRowsCountOnScreen(
canvasSize.height,
cupOptions().cell.defaultHeight * dpiScale,
);
const realCellHeight = parseInt((canvasSize.height / rowsOnScreenCount).toFixed(0));
if (data.rowsCount !== rowsOnScreenCount) {
workerRef.current?.postMessage(JSON.stringify({type: "change_rows_count", value: rowsOnScreenCount}));
}
cupDrawer.clear(context, canvasSize);
if (cupParams.rowCount !== rowsOnScreenCount
|| cupParams.cellHeight !== realCellHeight
|| cupParams.aggregation !== data.aggregation
) {
dispatch(setCupParams({
aggregation: data.aggregation,
rowCount: rowsOnScreenCount,
cellHeight: realCellHeight,
pricePrecision: data.pricePrecision,
priceStep: data.priceStep,
quantityPrecision: data.quantityPrecision,
}));
}
if (data.camera !== 0) {
cupDrawer.draw(
context,
canvasSize,
dpiScale,
data.bestBidPrice,
data.bestAskPrice,
data.maxVolume,
data.pricePrecision,
data.quantityPrecision,
data.priceStep,
data.aggregation,
rowsOnScreenCount,
data.camera,
realCellHeight,
{
buy: parseInt((Math.floor(data.bestBidPrice / zoomedTickSize) * zoomedTickSize).toFixed(0)),
sell: parseInt((Math.ceil(data.bestAskPrice / zoomedTickSize) * zoomedTickSize).toFixed(0)),
},
darkMode,
data.volumeAsDollars,
data.cup,
);
}
}
});
}
};
return () => {
if (null !== animationFrameId) {
cancelAnimationFrame(animationFrameId);
}
};
}, [workerRef.current, canvasSize, darkMode, dpiScale, isLoaded, quantity]);
useEffect(() => {
cupSubscribe(symbol.toUpperCase(), zoom);
return () => {
cupUnsubscribe(symbol.toUpperCase());
};
}, [symbol, zoom]);
return <div ref={containerRef} className={styles.canvasWrapper}>
<canvas
ref={canvasRef}
className={[styles.canvas, isLoaded ? "" : styles.loading].join(" ")}
width={canvasSize?.width}
height={canvasSize?.height}
/>
</div>;
};
export default Cup;
import {CupItem} from "../hooks/rustWsServer";
import {MessagePort} from "worker_threads";
let cup: {[key: number]: CupItem} = {};
let publisherIntervalId: any = null;
let wsConnection: WebSocket|null = null;
let cameras: {[key: string]: number} = {};
let rowsCount: number = 60;
let quantityDivider = 1;
let priceDivider = 1;
let diffModifier = 2;
const connectToWs = (callback: () => void) => {
if (wsConnection) {
callback();
return;
}
wsConnection = new WebSocket(`${process.env.NEXT_PUBLIC_RUST_WS_SERVER}`);
wsConnection.onopen = () => {
if (null !== wsConnection) {
callback();
}
};
wsConnection.onmessage = async(message: MessageEvent) => {
if (!message.data) return;
const data = JSON.parse(message.data);
if (!data?.commands || data.commands.length === 0) return;
const exchangeInitial = data?.commands?.find((item: any) => "ExchangeInitial" in item);
if (exchangeInitial) {
cup = exchangeInitial.ExchangeInitial.rows;
pricePrecision = exchangeInitial.ExchangeInitial.params.pricePrecision;
priceStep = exchangeInitial.ExchangeInitial.params.priceStep;
quantityPrecision = exchangeInitial.ExchangeInitial.params.quantityPrecision;
quantityDivider = Math.pow(10, quantityPrecision);
priceDivider = Math.pow(10, pricePrecision);
}
};
};
const publish = () => {
if (!isSubscribed) return;
if (!cameras[pair]) {
cameras[pair] = 0;
}
const zoomedTickSize = priceStep * aggregation;
const rows: {[key: number]: CupItem} = {};
for (let index = 0; index <= rowsCount; index++) {
const microPrice = startMicroPrice - index * zoomedTickSize;
if (microPrice < 0) continue;
rows[microPrice] = cup[microPrice] || {};
maxVolume = Math.max(maxVolume, (ask || bid || 0) / quantityDivider);
port?.postMessage({type: "set_camera", value: cameras[pair]});
postMessage({
type: "update_cup",
cup: rows,
camera: cameras[pair],
aggregation,
bestBidPrice,
bestAskPrice,
pricePrecision,
priceStep,
quantityPrecision,
rowsCount,
maxVolume: volumeIsFixed
? fixedMaxVolume
: maxVolume,
volumeAsDollars,
});
};
const publisherStart = () => {
if (publisherIntervalId) {
clearInterval(publisherIntervalId);
}
publisherIntervalId = setInterval(publish, publisherTimeoutInMs);
};
const publisherStop = () => {
if (publisherIntervalId) {
clearInterval(publisherIntervalId);
}
};
onmessage = (event: MessageEvent<any>) => {
const data = "string" === typeof event.data ? JSON.parse(`${event.data}`) : event.data;
if (data && data?.type === "subscribe") {
pair = data.pair;
aggregation = data.zoom;
isSubscribed = true;
cameras[pair] = 0;
maxVolume = 0;
cup = {};
publisherStart();
if (wsConnection?.readyState === 3) {
wsConnection = null;
}
connectToWs(() => {
if (null === wsConnection) return;
wsConnection.send(JSON.stringify({
"commands": [
{
commandType: "SUBSCRIBE_SYMBOL",
exchange: `FT:${pair}`,
aggregation: aggregation,
},
],
}));
});
}
if (data && data?.type === "unsubscribe") {
isSubscribed = false;
cameras[pair] = 0;
pair = "";
cup = {};
publisherStop();
if (null !== wsConnection) {
wsConnection.send(JSON.stringify({
"commands": [
{
commandType: "UNSUBSCRIBE_SYMBOL",
exchange: `FT:${data.pair}`,
},
],
}));
}
}
if (data && data?.type === "change_publisher_timeout") {
publisherTimeoutInMs = data.value;
publisherStart();
}
if (data && data?.type === "set_port") {
port = data.port;
}
};
export {};
A new table needs to be created to display this data, with three columns (coin, price, quantity).
wsConnection.send(JSON.stringify({
"commands": [
{
commandType: "SUBSCRIBE_BIG_ORDERS",
exchange: `FT:${symbol}`,
},
],
}));
const bigOrder = data.commands.find((item: any) => "undefined" !== typeof item.BigOrder);
const tradeSymbol = bigOrder.BigOrder.exchange.replace("FT:", "").toUpperCase();
if (tradeSymbol === symbol) {
price = bigOrder.BigOrder.price;
quantity = bigOrder.BigOrder.quantity;
}
} | 257bc531b176e3af4ed788fac87f6f52 | {
"intermediate": 0.2539868950843811,
"beginner": 0.5446287989616394,
"expert": 0.20138424634933472
} |
19,051 | from flask import Flask, render_template, request, session
from flask_socketio import SocketIO, emit, join_room
import platform
from flask_socketio import SocketIO, emit, join_room
File "C:\Users\mvideo\Desktop\python_files\video_chat\flask_webrtc_youtube\venv\lib\site-packages\flask_socketio\__init__.py", line 24, in <module>
from werkzeug.serving import run_with_reloader
ImportError: cannot import name 'run_with_reloader' from 'werkzeug.serving' (C:\Users\mvideo\Desktop\python_files\video_chat\flask_webrtc_youtube\venv\lib\site-packages\werkzeug\serving.py) | 37da3afc07d39364d463398b0c2e70a6 | {
"intermediate": 0.4821835458278656,
"beginner": 0.30768120288848877,
"expert": 0.21013528108596802
} |
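`werkzeug.serving.run_with_reloader` was removed in Werkzeug 2.1, while the installed Flask-SocketIO (the 4.x era, judging by the import at line 24 of its `__init__.py`) still imports it. The usual fix is either upgrading `flask-socketio` (5.x no longer imports that symbol) or pinning an older Werkzeug. A hedged `requirements.txt` sketch; the exact pins are an assumption to match against your Flask version:

```text
flask-socketio>=5.0
# or, if you must keep the old flask-socketio:
# werkzeug<2.1
```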
19,052 | import os
>>> import numpy as np
>>> import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
>>> from tensorflow.keras.preprocessing.text import Tokenizer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
>>> from tensorflow.keras.preprocessing.sequence import pad_sequences
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
>>>
>>> # read the text data
>>> def read_data(file_path):
... with open(file_path, ‘r’, encoding=‘utf-8’) as file:
File "<stdin>", line 2
with open(file_path, ‘r’, encoding=‘utf-8’) as file:
^
SyntaxError: invalid character '‘' (U+2018)
>>> data = file.readlines()
File "<stdin>", line 1
data = file.readlines()
IndentationError: unexpected indent
>>> return data
File "<stdin>", line 1
return data
IndentationError: unexpected indent
>>>
>>> # clean and preprocess the text
>>> def clean_text(data):
... cleaned_data = []
... for sentence in data:
... # other cleaning steps (e.g. removing punctuation and special characters) can be added as needed
... cleaned_data.append(sentence.strip())
... return cleaned_data
...
>>> # build the training data
>>> def prepare_data(data, num_words, max_sequence_length):
... # use the Keras Tokenizer to convert the text into integer sequences
... tokenizer = Tokenizer(num_words=num_words, oov_token=‘<OOV>’)
File "<stdin>", line 3
tokenizer = Tokenizer(num_words=num_words, oov_token=‘<OOV>’)
^
SyntaxError: invalid character '‘' (U+2018)
>>> tokenizer.fit_on_texts(data)
File "<stdin>", line 1
tokenizer.fit_on_texts(data)
IndentationError: unexpected indent
>>> sequences = tokenizer.texts_to_sequences(data)
File "<stdin>", line 1
sequences = tokenizer.texts_to_sequences(data)
IndentationError: unexpected indent
>>> # pad the sequences so that every sequence has the same length
>>> padded_sequences = pad_sequences(sequences, maxlen=max_sequence_length, padding=‘post’)
File "<stdin>", line 1
padded_sequences = pad_sequences(sequences, maxlen=max_sequence_length, padding=‘post’)
IndentationError: unexpected indent
>>>
>>> return padded_sequences, tokenizer
File "<stdin>", line 1
return padded_sequences, tokenizer
IndentationError: unexpected indent
>>>
>>> # save the preprocessed data and the tokenizer
>>> def save_preprocessed_data(padded_sequences, tokenizer, save_dir):
... np.save(os.path.join(save_dir, ‘padded_sequences.npy’), padded_sequences)
File "<stdin>", line 2
np.save(os.path.join(save_dir, ‘padded_sequences.npy’), padded_sequences)
^
SyntaxError: invalid character '‘' (U+2018)
>>> tokenizer_json = tokenizer.to_json()
File "<stdin>", line 1
tokenizer_json = tokenizer.to_json()
IndentationError: unexpected indent
>>> with open(os.path.join(save_dir, ‘tokenizer.json’), ‘w’, encoding=‘utf-8’) as file:
File "<stdin>", line 1
with open(os.path.join(save_dir, ‘tokenizer.json’), ‘w’, encoding=‘utf-8’) as file:
IndentationError: unexpected indent
>>> file.write(tokenizer_json)
File "<stdin>", line 1
file.write(tokenizer_json)
IndentationError: unexpected indent
>>>
>>> # set the parameters
>>> file_path = ‘D:/NanoGPT-修仙小说/data/shakespeare_char/input.txt’ # replace with the path to your data file
File "<stdin>", line 1
file_path = ‘D:/NanoGPT-修仙小说/data/shakespeare_char/input.txt’ # replace with the path to your data file
^
SyntaxError: invalid character '‘' (U+2018)
>>> num_words = 10000 # limit the vocabulary size
>>> max_sequence_length = 100 # maximum length of each sequence
>>> save_dir = ‘preprocessed_data’ # directory for saving the preprocessed data
File "<stdin>", line 1
save_dir = ‘preprocessed_data’ # directory for saving the preprocessed data
^
SyntaxError: invalid character '‘' (U+2018)
>>>
>>> # 准备和保存数据
>>> data = read_data(file_path)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'read_data' is not defined
>>> cleaned_data = clean_text(data)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'data' is not defined
>>> padded_sequences, tokenizer = prepare_data(cleaned_data, num_words, max_sequence_length)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'prepare_data' is not defined
>>> save_preprocessed_data(padded_sequences, tokenizer, save_dir)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'save_preprocessed_data' is not defined 我应该如何修改呢 | 3be482e26c71e66109d2e1ea6af08f4c | {
"intermediate": 0.3749048709869385,
"beginner": 0.43149039149284363,
"expert": 0.1936047375202179
} |
19,053 | i want to apply masking to remove some blocks with matching coordinates. matching_coords2 = np.array(matching_coords2)
matching_x_y = matching_coords2[:, :2]
# Create a boolean mask to identify matching coordinates the unique_coords are given as below which reports an error with unique_coords[:, :2]
Out [5]: [array([2.50762359e+05, 4.40350475e+06, 1.34539000e+03]), array([2.50762359e+05, 4.40350975e+06, 1.34539000e+03]), array([2.50767359e+05, 4.40350475e+06, 1.34539000e+03]), array([2.50767359e+05, 4.40350975e+06, 1.34539000e+03]), array([2.50757359e+05, 4.40350475e+06, 1.34039000e+03]), array([2.50757359e+05, 4.40350975e+06, 1.34039000e+03]), array([2.50757359e+05, 4.40351475e+06, 1.34039000e+03]), array([2.50762359e+05, 4.40350475e+06, 1.34039000e+03]), array([2.50762359e+05, 4.40350975e+06, 1.34039000e+03]), array([2.50762359e+05, 4.40351475e+06, 1.34039000e+03])]
matching_mask = np.isin(unique_coords[:, :2][0], matching_x_y).all(axis=1)
# Filter out rows from unique_coords using the mask
unique_coords_filtered = unique_coords[~matching_mask] | f66889903451126025c19175f4853ba2 | {
"intermediate": 0.3982102572917938,
"beginner": 0.25641605257987976,
"expert": 0.3453737199306488
} |
19,054 | how to show some page from pdf url flutter web | 8c76fb2c5944aaecc95c1dca52b15656 | {
"intermediate": 0.4541846811771393,
"beginner": 0.26359081268310547,
"expert": 0.2822244465351105
} |
19,055 | hello | 051e6ebf4c5bd75970e050a6cfe17bae | {
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
} |
19,056 | move all files path in list to a different location | 016bf38fbbffe447ee0f490fc2a5619e | {
"intermediate": 0.40191251039505005,
"beginner": 0.20280861854553223,
"expert": 0.3952789008617401
} |
19,057 | write a python script that can transform equirectangular map to 6 faces of cubemap | a3319b8188c1af62df89621a52329056 | {
"intermediate": 0.28652676939964294,
"beginner": 0.11161641776561737,
"expert": 0.6018567681312561
} |
19,058 | get some page from pdf url flutter | 4aea28288112aadb90b05eaddb6aa215 | {
"intermediate": 0.34827813506126404,
"beginner": 0.2898133099079132,
"expert": 0.36190852522850037
} |
19,059 | use pdf depenencies to write code get first 3 pages from pdf url flutter | 4efe87686cf6bfdd7af2e8bde0813f4a | {
"intermediate": 0.24581964313983917,
"beginner": 0.38350969552993774,
"expert": 0.3706706464290619
} |
19,060 | use pdf dependencies to display first 3 page from pdf url use flutter | 3b92e1a77807a6a058ad85733ccabcf5 | {
"intermediate": 0.538858950138092,
"beginner": 0.2197975069284439,
"expert": 0.24134358763694763
} |
19,061 | write a python script that can transform equirectangular map to 6 faces of cubemap. user needs to type path for input and output | d9d833142de395a5cfef1e451c93e828 | {
"intermediate": 0.3925982415676117,
"beginner": 0.17834213376045227,
"expert": 0.42905959486961365
} |
19,062 | write a python script that can transform equirectangular map to 6 faces of cubemap. user needs to type path for input (png) and output (png). Add that on export all 6 faces are joined horizontally without spaces. in that order: X+, X-, Y+, Y-, Z+, Z- | fef573a10d235d2c10b76be9ad06b063 | {
"intermediate": 0.4089072644710541,
"beginner": 0.17652562260627747,
"expert": 0.41456711292266846
} |
19,063 | ATmega8, write me code for 2 ADC channels, read voltage for 2 ADC inputs | 0ea0d7ba4932da085f7a01e3aa56231c | {
"intermediate": 0.44863247871398926,
"beginner": 0.1513708382844925,
"expert": 0.39999666810035706
} |
19,064 | write a python script that can transform equirectangular map to 6 faces of cubemap. user needs to type path for input (png) and output (png). Add that on export all 6 faces are joined horizontally without spaces. in that order: X+, X-, Y+, Y-, Z+, Z- | 68b7b73cea239540610b4ffffcc8b39b | {
"intermediate": 0.4089072644710541,
"beginner": 0.17652562260627747,
"expert": 0.41456711292266846
} |
19,065 | write a python script that can transform equirectangular map to 6 faces of cubemap. user needs to type path for input (png) and output. Add that on output all 6 faces are joined into one image (png) horizontally without spaces. in that order: X+, X-, Y+, Y-, Z+, Z- | 079fcf15804ed45fff99ddba122be1b5 | {
"intermediate": 0.40886929631233215,
"beginner": 0.16691270470619202,
"expert": 0.42421796917915344
} |
19,066 | use pdf dependencies to display first 3 pages from pdf url flutter | cd930b8ee1a30fd23d8ee18d884e29ca | {
"intermediate": 0.40486833453178406,
"beginner": 0.2730731666088104,
"expert": 0.32205840945243835
} |
19,067 | write a python 3.9 script that can transform equirectangular map to 6 faces of cubemap. user needs to type path for input (png) and output. Add that on output all 6 faces are joined into one image (png) horizontally without spaces. in that order: X+, X-, Y+, Y-, Z+, Z- | 51d36fcbe77b82a0a762b9935a5ee63b | {
"intermediate": 0.4115632474422455,
"beginner": 0.19801223278045654,
"expert": 0.39042457938194275
} |
19,068 | how to display first 3 pages from pdf url flutter | 1c5c56f225f0dbe3407fec38868bccd4 | {
"intermediate": 0.3712714612483978,
"beginner": 0.25584232807159424,
"expert": 0.37288618087768555
} |
19,069 | write a python 3.9 script that can transform equirectangular map to 6 faces of cubemap. user needs to type path for input (png) and output. Add that on output all 6 faces are joined into one image (png) horizontally without spaces. in that order: X+, X-, Y+, Y-, Z+, Z- | b572691ad6833790e3b6f01915fcbd04 | {
"intermediate": 0.4115632474422455,
"beginner": 0.19801223278045654,
"expert": 0.39042457938194275
} |
19,070 | html
<a href="/generate_task/{{tasks.id}}" class="is-size-6 has-text-weight-bold generate_a">Генерировать похожее</a>
app.py
@app.route('/generate_task/<tasks_id>', methods=['POST', "GET"])
def generate_task(tasks_id):
print(tasks_id)
return redirect("/") | 66c20e2e74aea7059057754b36197a52 | {
"intermediate": 0.30197441577911377,
"beginner": 0.5069124698638916,
"expert": 0.19111305475234985
} |
19,071 | Solution for Property 'file' does not exist on type 'Request<ParamsDictionary, any, any, ParsedQs, Record<string, any>>'. | 649ac7c5abe63d8ca89f75693cbc8dcf | {
"intermediate": 0.4294823408126831,
"beginner": 0.3200990855693817,
"expert": 0.2504185736179352
} |
19,072 | html
<a href="/generate_task/{{tasks.id}}" class="is-size-6 has-text-weight-bold generate_a">Генерировать похожее</a>
app.py
@app.route('/generate_task/<tasks_id>', methods=['POST', "GET"])
def generate_task(tasks_id):
print(tasks_id)
return redirect("/")
а можешь переписать этот код, добавив ajax запрос, чтобы на сервер отправлялось сообщение generate_task(tasks_id):
print(tasks_id) , но перезагрузки страницы не было | 26d83b75fafb00d8f9eae1e8063cac42 | {
"intermediate": 0.29820674657821655,
"beginner": 0.5227899551391602,
"expert": 0.1790032982826233
} |
19,073 | use pdf dependencies to display first 3 pages from pdf url flutter | 0130e7b72fec2abbcdbd624535cb0ef0 | {
"intermediate": 0.4118143916130066,
"beginner": 0.2868749797344208,
"expert": 0.30131059885025024
} |
19,074 | Candlestick Formation: Aggregate trades into candlesticks based on the provided time interval (e.g., 5 minutes, 1 hour). The candlesticks should include Open, High, Low, Close values. | 8eec299f538f6c967047aa1c7baec99a | {
"intermediate": 0.37159672379493713,
"beginner": 0.3067823648452759,
"expert": 0.321620911359787
} |
19,075 | use pdf dependencies to display first 3 pages from pdf file is convert to File flutter | 9fe2ed19d13c41c43952bd378ac294da | {
"intermediate": 0.5097872614860535,
"beginner": 0.24586255848407745,
"expert": 0.24435016512870789
} |
19,076 | I have a 14 channel Futaba transmitter. On the other side I have a futaba receiver, an arduino uno and a WS2812 ledstrip with 22 LEDS. I want te strip to change colour depending of de position of the joystick in channel one. When the stick is downward I want GREEN, in the center I want BLUE and upwards I want RED. Can you create a script for me? | 93fad65309fcc6072198483e43804e1b | {
"intermediate": 0.498923659324646,
"beginner": 0.2827298641204834,
"expert": 0.21834641695022583
} |
19,077 | Этот код не работает : package com.example.test_youtube
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.android.youtube.player.YouTubePlayer
import com.google.android.youtube.player.YouTubePlayerView
import com.google.android.youtube.player.YouTubeInitializationResult
class MainActivity : AppCompatActivity(), YouTubePlayer.OnInitializedListener {
private val VIDEO_ID = "ESXT-Dxl7Ek" // замените на ваш ID видео
private val API_KEY = "YOUR_API_KEY" // замените на ваш YouTube API ключ
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
val youtubePlayerView = findViewById<YouTubePlayerView>(R.id.youtube_player_view)
youtubePlayerView.initialize(API_KEY, this)
}
override fun onInitializationSuccess(p0: YouTubePlayer.Provider?, player: YouTubePlayer?, p2: Boolean) {
player?.cueVideo(VIDEO_ID)
}
override fun onInitializationFailure(p0: YouTubePlayer.Provider?, p1: YouTubeInitializationResult?) {
// Обработать ошибку инициализации YouTube player.
}
} | 3e237f2b25ed3cf06bc3961875cb1fd4 | {
"intermediate": 0.47574707865715027,
"beginner": 0.2701334059238434,
"expert": 0.25411951541900635
} |
19,078 | EMA Calculation: Implement a function to calculate the Exponential Moving Average (EMA) with a given length (e.g., 14 periods). Your function should be well-documented and tested in python | 0f7303a6e3a27c07c7018379fbf18bbd | {
"intermediate": 0.3828624188899994,
"beginner": 0.15674619376659393,
"expert": 0.4603913426399231
} |
19,079 | What's pandas command to round to nearest integer | 11814d7c669e3d4e4de30e342e1ba300 | {
"intermediate": 0.3409377932548523,
"beginner": 0.08914418518543243,
"expert": 0.5699180960655212
} |
19,080 | get all files from a given directoy | 1371600dc4d268bf550e2cce693b4e14 | {
"intermediate": 0.3266535997390747,
"beginner": 0.16544246673583984,
"expert": 0.5079039335250854
} |
19,081 | hi | 49ce8df002bd114737602be70c6ec428 | {
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
} |
19,082 | Change the below to transcript
const swaggerJSDoc = require("swagger-jsdoc");
const swaggerUi = require("swagger-ui-express");
const options = {
definition: {
openapi: '3.0.0',
info: {
title: 'Monday.com Event Logger',
version: '1.0.0',
description: 'API documentation for monday.com event logger',
},
},
// Path to the API specs
apis: ['./routes/api/*.js'],
};
const swaggerSpec = swaggerJSDoc(options);
module.exports = {
swaggerSpec,
swaggerUi,
}; | acab2f61c6b3d1c74615c705a9da6e4a | {
"intermediate": 0.5143380165100098,
"beginner": 0.23884232342243195,
"expert": 0.2468196451663971
} |
19,083 | please write me a VBA code for a power point presentation about risk management , i need 7 slide , fill content on your own | 6dd3dcba0a64b22c9a8fdc900a19fca3 | {
"intermediate": 0.21434129774570465,
"beginner": 0.6317532658576965,
"expert": 0.1539054661989212
} |
19,084 | python list of 9 rgb colors | e725462099b93631492b0412fb5f7055 | {
"intermediate": 0.35214728116989136,
"beginner": 0.2865687906742096,
"expert": 0.36128395795822144
} |
19,085 | In C++ is there a way to collect all types generated by a template's instantiations and call a templated function wich each type in succession? | 6da1095cc2198f7831c224d96b455f2d | {
"intermediate": 0.4643743336200714,
"beginner": 0.3256817162036896,
"expert": 0.20994390547275543
} |
19,086 | @app.route('/generate_task/<tasks_id>', methods=['POST', "GET"])
def generate_task(tasks_id):
data = json.loads(request.data) # Получаем данные из запроса
tasks_id = data['tasks_id'] # Извлекаем tasks_id
smth = Task.query.get(int(tasks_id)) #.options(load_only('url'))
print(smth)
task,answer = random_logarythm()
# Возвращаем статус 200 и сообщение как JSON
return jsonify([tasks_id,task,answer])
Traceback (most recent call last):
File "C:\Users\mvideo\Desktop\python_files\bulma_from_jan_30_05_2022\kuzovkin\kuzovkin\venv\lib\site-packages\flask\app.py", line 2070, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\mvideo\Desktop\python_files\bulma_from_jan_30_05_2022\kuzovkin\kuzovkin\venv\lib\site-packages\flask\app.py", line 1515, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\mvideo\Desktop\python_files\bulma_from_jan_30_05_2022\kuzovkin\kuzovkin\venv\lib\site-packages\flask\app.py", line 1513, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\mvideo\Desktop\python_files\bulma_from_jan_30_05_2022\kuzovkin\kuzovkin\venv\lib\site-packages\flask\app.py", line 1499, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "C:\Users\mvideo\Desktop\python_files\bulma_from_jan_30_05_2022\kuzovkin\kuzovkin\app.py", line 402, in generate_task
smth = Task.query.get(int(tasks_id)) #.options(load_only('url'))
AttributeError: type object '_asyncio.Task' has no attribute 'query' | d9b6bf8ebb81d9137af8a97c3663c6d7 | {
"intermediate": 0.4311586320400238,
"beginner": 0.3777490258216858,
"expert": 0.19109231233596802
} |
19,087 | from PyPDF2 import PdfReader
...
# Function to extract text from a PDF file
def extract_text_from_pdf(file_path):
try:
# Read the PDF file using PdfReader
reader = PdfReader(file_path)
raw_text = ''
# Extract text from each page in the PDF
for page in reader.pages:
raw_text += ' ' + page.extract_text()
# Return the extracted text
return raw_text
except:
# In case of any exceptions, return False
return False
i cannot open certian files this way with : ---------------------------------------------------------------------------
DependencyError Traceback (most recent call last)
Cell In[67], line 1
----> 1 PdfReader(file_path)
File ~\AppData\Roaming\Python\Python310\site-packages\PyPDF2\_reader.py:339, in PdfReader.__init__(self, stream, strict, password)
336 # try empty password if no password provided
337 pwd = password if password is not None else b""
338 if (
--> 339 self._encryption.verify(pwd) == PasswordType.NOT_DECRYPTED
340 and password is not None
341 ):
342 # raise if password provided
343 raise WrongPasswordError("Wrong password")
344 self._override_encryption = False
File ~\AppData\Roaming\Python\Python310\site-packages\PyPDF2\_encryption.py:785, in Encryption.verify(self, password)
782 else:
783 pwd = password
--> 785 key, rc = self.verify_v4(pwd) if self.algV <= 4 else self.verify_v5(pwd)
786 if rc != PasswordType.NOT_DECRYPTED:
787 self._password_type = rc
File ~\AppData\Roaming\Python\Python310\site-packages\PyPDF2\_encryption.py:836, in Encryption.verify_v5(self, password)
833 ue_entry = cast(ByteStringObject, self.entry["/UE"].get_object()).original_bytes
...
File ~\AppData\Roaming\Python\Python310\site-packages\PyPDF2\_encryption.py:162, in AES_CBC_encrypt(key, iv, data)
161 def AES_CBC_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
--> 162 raise DependencyError("PyCryptodome is required for AES algorithm")
DependencyError: PyCryptodome is required for AES algorithm | 0545997551ee2ac33387bb93a507b49d | {
"intermediate": 0.36612340807914734,
"beginner": 0.28399381041526794,
"expert": 0.34988272190093994
} |
19,088 | Could not find a declaration file for module 'swagger-ui-express' . How do I solve it? | 8ed16425b303b5d9c00397434281e4dd | {
"intermediate": 0.6499039530754089,
"beginner": 0.17191123962402344,
"expert": 0.17818477749824524
} |
19,089 | Write unit tests to download the file python | 2c4fbd0dab576ea0e35764e38ee1879a | {
"intermediate": 0.37330830097198486,
"beginner": 0.30632394552230835,
"expert": 0.320367693901062
} |
19,090 | Swagger documenttion for typescript | 7ffce06b8c9cf6eabe192c7849b9bf26 | {
"intermediate": 0.20602372288703918,
"beginner": 0.515934407711029,
"expert": 0.27804186940193176
} |
19,091 | change this razer synapse 3 macro to center my mouse cursor on press: <?xml version="1.0" encoding="utf-8"?>
<Macro xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Name>center mouse1</Name>
<Guid>f6add1cd-4b65-4b2e-b578-8f5665927a88</Guid>
<MacroEvents />
<IsFolder>false</IsFolder>
<FolderGuid>00000000-0000-0000-0000-000000000000</FolderGuid>
</Macro> | f99eb6586864db4f6b82b1244a421220 | {
"intermediate": 0.41058769822120667,
"beginner": 0.24051393568515778,
"expert": 0.34889835119247437
} |
19,092 | how to use HTML div to design research questionnaire , in which when present in mobile screen is fit ? | c6141f515ea177657bf469e633b93ba1 | {
"intermediate": 0.2771644592285156,
"beginner": 0.23805567622184753,
"expert": 0.48477980494499207
} |