| row_id (int64) | init_message (string) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
19,093 | from PyPDF2 import PdfReader
# Function to extract text from a PDF file
def extract_text_from_pdf(file_path):
try:
# Read the PDF file using PdfReader
reader = PdfReader(file_path)
raw_text = ''
# Extract text from each page in the PDF
for page in reader.pages:
raw_text += ' ' + page.extract_text()
# Return the extracted text
return raw_text
except:
# In case of any exceptions, return False
return False
this does not pull the inputs in the form | d04d86c1ba93621fad6068a52421d413 | {
"intermediate": 0.41794902086257935,
"beginner": 0.383475124835968,
"expert": 0.19857579469680786
} |
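The row above reports that `page.extract_text()` does not return the filled-in form inputs. That is expected: `extract_text()` reads only the static page content stream, while AcroForm (fillable-field) values live in the document's field dictionary. A minimal sketch of reading those fields instead, assuming the PyPDF2 package is available (the import is guarded so the sketch degrades gracefully if it is not installed; `extract_form_fields` is an illustrative name, not part of any library):

```python
# Guarded import: PyPDF2 may not be installed in every environment.
try:
    from PyPDF2 import PdfReader
    HAVE_PYPDF2 = True
except ImportError:
    HAVE_PYPDF2 = False

def extract_form_fields(file_path):
    """Return {field name: value} for the PDF's AcroForm fields.

    Returns {} if the PDF has no form, and None if reading fails
    or PyPDF2 is unavailable.
    """
    if not HAVE_PYPDF2:
        return None
    try:
        reader = PdfReader(file_path)
        fields = reader.get_fields()  # None when the PDF has no AcroForm
        if fields is None:
            return {}
        # Each field object is dict-like; "/V" holds the filled-in value.
        return {name: field.get("/V") for name, field in fields.items()}
    except Exception:
        return None
```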
19,094 | from PyPDF2 import PdfReader
# Function to extract text from a PDF file
def extract_text_from_pdf(file_path):
try:
# Read the PDF file using PdfReader
reader = PdfReader(file_path)
raw_text = ''
# Extract text from each page in the PDF
for page in reader.pages:
raw_text += ' ' + page.extract_text()
# Return the extracted text
return raw_text
except:
# In case of any exceptions, return False
return False
this does not pull the inputs in the pdf, it is just returning the standard text of the code | 71504bce28a718e50281b6e6a1bb6551 | {
"intermediate": 0.539662778377533,
"beginner": 0.26336216926574707,
"expert": 0.19697502255439758
} |
19,095 | The page width is not fully used: I want the text to be arranged _next_ to the images, not below them (Chrome). Requires a lot of scrolling – On the other hand, I want to ensure that the survey displays well on a mobile phone! So HTML <div>s might be best..
How do I do it? | 013284b0d4d639dbbf8599946a4f6f2d | {
"intermediate": 0.36572226881980896,
"beginner": 0.2807845175266266,
"expert": 0.35349324345588684
} |
19,096 | need to write a python script to copy and move files from one directory to another | 976014e75fc84a80c88327c74fb725b2 | {
"intermediate": 0.41938483715057373,
"beginner": 0.21763618290424347,
"expert": 0.3629789650440216
} |
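For the copy/move request above, a minimal stdlib sketch using `shutil` (`copy2` preserves file metadata; `move` also works across filesystems). The function names and glob pattern are illustrative choices, not fixed requirements:

```python
import shutil
from pathlib import Path

def copy_files(src_dir, dst_dir, pattern="*"):
    """Copy every file matching pattern from src_dir into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)  # create the target if missing
    copied = []
    for f in Path(src_dir).glob(pattern):
        if f.is_file():
            copied.append(shutil.copy2(f, dst / f.name))
    return copied

def move_files(src_dir, dst_dir, pattern="*"):
    """Move every file matching pattern from src_dir into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    return [shutil.move(str(f), str(dst / f.name))
            for f in Path(src_dir).glob(pattern) if f.is_file()]
```

Pass a narrower pattern such as `"*.csv"` to restrict which files are touched.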
19,097 | My Data Range is P15:R50.
The range within the Data Range that I would like to start and end my calculations is P27:R38.
All values in my P27:Q38 should have DATE values formatted as dd/mm/yyyy.
All values in R27:R38 should have TEXT values.
I would like a couple of VBAs that call another VBA to complete the entire Task below:
Starting from P27 and on the same row, If value in P is not Blank and the value in Q is not Blank and value in P is not equal to the value in Q, then value in R should have P value and Q value joined with a dash separating as shown in this example " dd/mm/yyyy - dd/mm/yyyy". This condition should be checked from P27 to P38.
Starting from P27 and on the same row, If value in P equals value in Q then value in R should have P value and Q value joined with a dash separating as shown in this example " dd/mm/yyyy - dd/mm/yyyy". This condition should be checked from P27 to P38.
Starting from P27 and on the same row, If the Value in P is not Blank and the value in Q is Blank,
then go downwards column Q in the range and find the next cell in Q that is not blank,
then on the same row where P had a value and Q was blank, the value in R should have P value and Q value joined with a dash separating as shown in this example " dd/mm/yyyy - dd/mm/yyyy". This condition should be checked from P27 to P38.
Starting from P27 and on the same row, If value in P is blank and the value in Q is also blank, then the value in R should be the R value Offset(-1, 0). This condition should be checked from P27 to P38.
Starting from P27 and on the same row, if the value in P equals the value in Q, then the value in R should be the R value Offset(1, 0). This condition should be checked from P27 to P38. | 5b3f861654dafdf05fe937f2f8672a09 | {
"intermediate": 0.44445177912712097,
"beginner": 0.25390100479125977,
"expert": 0.3016471862792969
} |
19,098 | give me a list of polish cyberpunk games | 779f624138f79bc3bd8eb39bbcee73bc | {
"intermediate": 0.31605803966522217,
"beginner": 0.3706043064594269,
"expert": 0.3133377134799957
} |
19,099 | --with recursive cte:
create function udf_getreverse(@name varchar(20))
returns varchar(20)
as
begin
--declare @name varchar(20) = 'nancy'
declare @reversedname varchar(200) = ''
;with a as
(
select 1 as n, left(@name,1)as letter
union all
select n + 1, SUBSTRING(@name,n+1,1)
from a
where len(@name) > n
)
select @reversedname = @reversedname + letter from a order by n desc -- ycnan
return @reversedname
end
instead of a recursive cte, use xml path for the solution | 653cf064c3afbc740556d3d0b7ebd686 | {
"intermediate": 0.31421560049057007,
"beginner": 0.4627459943294525,
"expert": 0.22303839027881622
} |
19,100 | write a typescript type that only accepts numbers that are divisible by 32 | 9fda74c30dcf8eef6a0a08ef91eaa63f | {
"intermediate": 0.29966050386428833,
"beginner": 0.3267880380153656,
"expert": 0.37355145812034607
} |
19,101 | fix this code to switch to the bsc network automatically if the user is on another network:
import {
EthereumClient,
w3mConnectors,
w3mProvider,
WagmiCore,
WagmiCoreChains,
WagmiCoreConnectors,
} from "https://unpkg.com/@web3modal/ethereum@2.6.2";
import { Web3Modal } from "https://unpkg.com/@web3modal/html@2.6.2";
const { bsc } = WagmiCoreChains;
const { configureChains, createConfig, getContract, usePrepareContractWrite, fetchBalance, readContract, prepareWriteContract, writeContract, getAccount, disconnect, readContracts, waitForTransaction } = WagmiCore;
const chains = [bsc];
const projectId = "589ff315a220e86986e6f895f88ce1a9";
const Contract = "0xE1E48e2183E8EBd54159Fd656509bb85f62fe7cF";
const abi_contract = []
const { publicClient } = configureChains(chains, [w3mProvider({ projectId })]);
const wagmiConfig = createConfig({
autoConnect: true,
connectors: [
...w3mConnectors({ chains, version: 2, projectId }),
new WagmiCoreConnectors.CoinbaseWalletConnector({
chains,
options: {
appName: "html wagmi example",
},
}),
],
publicClient,
});
const ethereumClient = new EthereumClient(wagmiConfig, chains);
$(document).ready(function()
{
projectstats();
});
function addToWallet() {
if (typeof ethereum !== 'undefined') {
web3obj = new Web3(ethereum);
} else if (typeof web3 !== 'undefined') {
web3obj = new Web3(web3.currentProvider);
} else {
alert('No web3 provider');
return;
}
var network = 0;
web3obj.eth.net.getId((err,netId)=>{
network = netId.toString();
switch (network) {
case NetID:
network = NetName;
break;
default:
console.log('This is an unknown network.');
}
if (network.toLowerCase() !== NetName.toLowerCase()) {
window.ethereum.request({
method: 'wallet_switchEthereumChain',
params: [{
chainId: strSiteNetChainId
}],
}).then(()=>{
try {
web3obj.eth.currentProvider.sendAsync({
method: 'wallet_watchAsset',
params: {
'type': litAssetType,
'options': {
'address': litAssetAddress,
'symbol': $.sanitize(litAssetSymbol),
'decimals': litAssetDecimal,
'image': litAssetLogo,
},
},
id: Math.round(Math.random() * 100000)
}, function(err, data) {
if (!err) {
if (data.result) {
console.log('Token added');
} else {
console.log(data);
console.log('Some error');
}
} else {
console.log(err.message);
}
});
} catch (e) {
console.log(e);
}
}
);
return false;
} else {
try {
web3obj.eth.currentProvider.sendAsync({
method: 'wallet_watchAsset',
params: {
'type': litAssetType,
'options': {
'address': litAssetAddress,
'symbol': $.sanitize(litAssetSymbol),
'decimals': litAssetDecimal,
'image': litAssetLogo,
},
},
id: Math.round(Math.random() * 100000)
}, function(err, data) {
if (!err) {
if (data.result) {
console.log('Token added');
} else {
console.log(data);
console.log('Some error');
}
} else {
console.log(err.message);
}
});
} catch (e) {
console.log(e);
}
}
}
);
} | 079e19dc3f8dff3ec7bcef43ed868bad | {
"intermediate": 0.4333290457725525,
"beginner": 0.3781663179397583,
"expert": 0.18850462138652802
} |
19,102 | how do i make a generic type in typescript that can take another type as argument | b9cdb9988c382c5e565a967593dddf46 | {
"intermediate": 0.36540287733078003,
"beginner": 0.2687574625015259,
"expert": 0.3658396303653717
} |
19,103 | --with recursive cte:
create function udf_getreverse(@name varchar(20))
returns varchar(20)
as
begin
declare @name varchar(20) = 'nancy'
--declare @reversedname varchar(200) = ''
;with a as
(
select 1 as n, left(@name,1)as letter
union all
select n + 1, SUBSTRING(@name,n+1,1)
from a
where len(@name) > n
)
select @reversedname = @reversedname + letter from a order by n desc -- ycnan
return @reversedname
end
select dbo.udf_getreverse('nancy')
do with recursive cte | afa2144bc2b8b082e8a0b2860ebb0f30 | {
"intermediate": 0.3070603907108307,
"beginner": 0.35240963101387024,
"expert": 0.3405299782752991
} |
19,104 | Does VS2025 support the C++11 standard? | 4c4c0445d966b3e4c14cd7403c601779 | {
"intermediate": 0.3155915439128876,
"beginner": 0.36019736528396606,
"expert": 0.3242110013961792
} |
19,105 | if d in TCS_Policy_Code[number]:
^
IndentationError: unexpected indent | 1b6c3418dd5399c021ae6fc3ee16218a | {
"intermediate": 0.3842942714691162,
"beginner": 0.33924540877342224,
"expert": 0.27646034955978394
} |
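The traceback in the row above means the `if` line is indented deeper than the surrounding block (often a tabs-vs-spaces mix). A small demonstration, with made-up surrounding lines: compiling a snippet that has one stray extra indent raises exactly this `IndentationError`, while the consistently indented version compiles fine (undefined names do not matter at compile time):

```python
# Snippet with one line indented deeper than its block -> IndentationError.
bad = (
    "for number in range(3):\n"
    "    d = number\n"
    "        if d in TCS_Policy_Code[number]:\n"  # extra indent here
    "            pass\n"
)
# Same snippet with consistent 4-space indentation.
good = (
    "for number in range(3):\n"
    "    d = number\n"
    "    if d in TCS_Policy_Code[number]:\n"
    "        pass\n"
)

def indent_error(src):
    """Return True if compiling src raises IndentationError."""
    try:
        compile(src, "<snippet>", "exec")
        return False
    except IndentationError:
        return True
```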
19,106 | The pthread_create function | 9d00d4b677d8f58835091bd98812b236 | {
"intermediate": 0.26285019516944885,
"beginner": 0.27377402782440186,
"expert": 0.4633757174015045
} |
19,107 | display first 3 pages from saved pdf file from firebase storage flutter | 9b37db1fce2d1aa79acd01ae0a438cfd | {
"intermediate": 0.36944037675857544,
"beginner": 0.25041458010673523,
"expert": 0.3801451027393341
} |
19,108 | Can you explain to me the fundamentals of lua for Codea? | 64d4333028babed877540c4d68f9561b | {
"intermediate": 0.42952480912208557,
"beginner": 0.41134119033813477,
"expert": 0.15913397073745728
} |
19,109 | You are an expert Resume Writer. You take four documents as input: Resume Text, Additional Profile Information, Job Description and LaTeX Template.
My Resume text is provided between three backticks.
Additional Profile Information for my Resume is provided between separator <>.
Job Description of the role I'm applying to is provided between separator ####.
Desired LaTeX Template is provided between separator """.
Here are your step-by-step instructions:
Step 1: Generate my professional profile summary in plain text. Separated into sections by going through Resume Text and Additional Profile Details. Share this summary with me too.
Step 2: Generate a list of role requirements from the Job Description, again in plain text. Map the summary of Skills / Tools required to my professional strengths. Share your Resume Writing Strategy with me, on how you'll write my Resume for high likelihood of selection score.
Step 3: Ask for my feedback. Move to step 4 if everything is good. Else, analyze my feedback and start all over from step 1.
Step 4: Write the Resume in the shared LaTeX Template. Remove all pre-filled information. | dcaac56a8ac583b438f9659bdf5e8a54 | {
"intermediate": 0.33687254786491394,
"beginner": 0.39851275086402893,
"expert": 0.26461470127105713
} |
19,110 | write a program in python to find the sum of squares of first n natural numbers. | 6ad604cdffd864841db0e7ad7a96b861 | {
"intermediate": 0.23952193558216095,
"beginner": 0.11841940134763718,
"expert": 0.6420586705207825
} |
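For the sum-of-squares prompt above, a loop works, but the closed form n(n+1)(2n+1)/6 gives the same answer without iteration. A sketch of both (function names are illustrative):

```python
def sum_of_squares_loop(n):
    """Sum of squares of the first n natural numbers, by iteration."""
    return sum(i * i for i in range(1, n + 1))

def sum_of_squares_formula(n):
    """Same result via the closed form n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6

print(sum_of_squares_formula(5))  # 1 + 4 + 9 + 16 + 25 = 55
```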
19,111 | I have a val catDownloader = CatDownloader(), CatDownloader has fun downloadCat(
onSuccess: (Cat) -> Unit,
onError: (String) -> Unit,
onStart: () -> Unit,
allowErrors: (Boolean) -> Unit
) and val cat: Cat? = getCatFromInternet(). I want to invoke getCatFromInternet() when redefining onError() with lambda for catDownloader.downloadCat(). How do I do that? | 0f625cc598a893bee78994c0619913c2 | {
"intermediate": 0.4189784526824951,
"beginner": 0.47933071851730347,
"expert": 0.10169090330600739
} |
19,112 | use pdf dependencies to display first 3 pages from firebase storage pdf url flutter | e6665f8f364f5fafc39c81da321ea766 | {
"intermediate": 0.5210171937942505,
"beginner": 0.2151261568069458,
"expert": 0.2638566195964813
} |
19,113 | How to use the Chatgpt3.5 | 69ef9f89752450c7561c1e50695e1e3f | {
"intermediate": 0.3610466420650482,
"beginner": 0.2612936496734619,
"expert": 0.37765973806381226
} |
19,114 | use pdf dependencies only to display first 3 pages from firebase storage pdf url flutter | c2d3911a511c31cc6bc7bdda7e917d14 | {
"intermediate": 0.4502394199371338,
"beginner": 0.2153700292110443,
"expert": 0.3343905508518219
} |
19,115 | SfPdfViewer.network to display first 3 pages flutter | 0af988b7e026b25b606ee6f344c6a6e1 | {
"intermediate": 0.3293919563293457,
"beginner": 0.29835936427116394,
"expert": 0.37224867939949036
} |
19,116 | The class 'SfPdfViewer' doesn't have an unnamed constructor. flutter | 12bb3b21ef1fbae9e8336e660aa98d72 | {
"intermediate": 0.37957894802093506,
"beginner": 0.33836859464645386,
"expert": 0.28205251693725586
} |
19,117 | struct AGE
{
int year;
int month;
int day;
};
typedef struct {
char id[20];
char name_venue[20];// venue name
char sport_type[20];
char description[100];
int age_limit_min;
int age_limit_max;
float rent;
struct AGE YMD;// year, month, day
int time[16];// time slots
int order_cnt;// number of bookings
char area[20];// stadium location
char name_stadium[20];// stadium name
} Venue;
Venue venues[999][3];
For the struct above, write functions so that the venues can be sorted by rent and by number of bookings, and output them with the following code:
for (int i = 0; i < 3; i++)
{
for(int j = 0; j < numVenues; j++)
{
printf("所在地区:%s\n", venues[j][i].area);
printf("场馆名:%s\n", venues[j][i].name_stadium);
printf("ID:%s\n", venues[j][i].id);
printf("场地名:%s\n", venues[j][i].name_venue);
printf("运动类型:%s\n", venues[j][i].sport_type);
printf("简述:%s\n", venues[j][i].description);
printf("年龄限制:%d ~ %d\n", venues[j][i].age_limit_min, venues[j][i].age_limit_max);
printf("租金:%.2f\n", venues[j][i].rent);
printf("\n\n");
}
}
} | 6fcdd4137ce1b6e23d77b987138d74ca | {
"intermediate": 0.28313830494880676,
"beginner": 0.41778406500816345,
"expert": 0.2990776598453522
} |
19,118 | hi | c92981035cff3a945dfb61c9d71ac077 | {
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
} |
19,119 | int emptySpaces = 0; // Track the number of empty spaces in each column
for (int x = 0; x < _width; x++)
{
for (int y = 0; y < _height; y++)
{
// Check if the current grid piece is empty
//if its in the same x row do not add it
if (grid[x, y] == null)
{
emptySpaces++;
}
}
}
if it's in the same row, I don't want that empty space to be added | 089bdff4eb24b6c6d1120ac2354018ce | {
"intermediate": 0.38605421781539917,
"beginner": 0.24928639829158783,
"expert": 0.3646593987941742
} |
19,120 | stripe checkout flutter with price ,title , image url for multi product | e89b7c5b9de32dc9880f339dde576fce | {
"intermediate": 0.4039088189601898,
"beginner": 0.245463028550148,
"expert": 0.35062816739082336
} |
19,121 | const body = document.querySelector('.body');
const settings = document.getElementById('setting-popup');
const btn = document.getElementById('set-btn');
btn.onclick function() = settings.classList.add('display'); what is wrong with this | 47c3f9c2593cd114ca263091969b6065 | {
"intermediate": 0.2256365418434143,
"beginner": 0.6620692014694214,
"expert": 0.1122942864894867
} |
19,122 | stripe checkout javascript with price ,title , image url for multi product | d38642328df6530383241d2816538cf8 | {
"intermediate": 0.4076511561870575,
"beginner": 0.27049508690834045,
"expert": 0.3218536972999573
} |
19,123 | alternative of .on event of jquery in react js | 35406f494262c989b6d9409d5d50cf9c | {
"intermediate": 0.40181922912597656,
"beginner": 0.3399084508419037,
"expert": 0.258272260427475
} |
19,124 | display first 3 pages from pdf url syncfusion_flutter_pdfviewer flutter | 9410117a66b3c352e6d7345bce57fb7f | {
"intermediate": 0.3695904314517975,
"beginner": 0.20325036346912384,
"expert": 0.4271591901779175
} |
19,125 | write a python script.
1) user needs to type an input equirectangular (dimensions are 2:1) map image (png) and an output folder
2) converts input equirectangular map into 6 cubemap projection faces with output names like X+.png, X-.png, Y+.png, etc.
3) makes 6 faces into one horizontal image in that order: X+, X-, Y+, Y-, Z+, Z-
done
also you can use py360convert library | ee9866f90e750634ffbcd991f1558506 | {
"intermediate": 0.678078830242157,
"beginner": 0.1151297315955162,
"expert": 0.20679140090942383
} |
19,126 | Pinescript Strategy: if you lose your position while trading on V5, or if a loss occurs in the trade that ended just before, please make a script so that the next order is only placed after 50 candles | 4ab61b4a7855bb8dce0aca3850ec94fd | {
"intermediate": 0.31810128688812256,
"beginner": 0.23179252445697784,
"expert": 0.450106143951416
} |
19,127 | Write Excel code to calculate a monthly salary payroll | 632bc1e1d4bf869b30edf1b4496d086d | {
"intermediate": 0.43054473400115967,
"beginner": 0.24938462674617767,
"expert": 0.32007065415382385
} |
19,128 | write a python script.
1) user needs to type an input image (png) and an output folder
2) converts input equirectangular map into 6 cubemap projection faces with output names like X+.png, X-.png, Y+.png, etc.
3) makes 6 faces into one horizontal image in that order: X+, X-, Y+, Y-, Z+, Z-
done
also you can use py360convert library | 5b5b3c84779149c200f8ee884b6be044 | {
"intermediate": 0.6665623784065247,
"beginner": 0.0993347018957138,
"expert": 0.23410286009311676
} |
19,129 | write code 3d button with onclick function flutter | bb6e1d4d3f4d1f4850adb5db1c821ee3 | {
"intermediate": 0.37208524346351624,
"beginner": 0.3628323972225189,
"expert": 0.2650824189186096
} |
19,130 | get first 3 pages from pdf url flutter web | 9ca23f76e92c6b68ceb2f634ca00661b | {
"intermediate": 0.3619220554828644,
"beginner": 0.25919684767723083,
"expert": 0.3788810968399048
} |
19,131 | get first 3 pages from pdf url flutter web | 01788af1113f4a65e0fb0a9bc4117aad | {
"intermediate": 0.37250667810440063,
"beginner": 0.2562132179737091,
"expert": 0.37128016352653503
} |
19,132 | WARNING:kafka.coordinator.assignors.range:No partition metadata for topic my_topic
What is causing this error? How do I fix it?
My consumers:
import json
import logging
from concurrent.futures import ThreadPoolExecutor
from app.api.events import process_after_video_upload_views
from kafka import KafkaConsumer
from kafka.errors import CommitFailedError
from libs.face_recognition.face_recognition import FaceRecognition
from settings import KAFKA_BROKER, NUM_MAX_THREADS
class Consumer():
"""Base consumer."""
def __init__(
self, topic, broker=KAFKA_BROKER,
clear_messages=False, threads=8):
"""
Sets base parameters, checks num of threads.
Args:
topic:
broker:
clear_messages:
threads:
"""
self.consumer = None
self.topic = topic
self.broker = broker
self.group_id = "group"
self.clear_messages = clear_messages
self.max_workers = threads
if threads > NUM_MAX_THREADS:
logging.warning(f"Sorry, max threads: {NUM_MAX_THREADS}")
self.max_workers = NUM_MAX_THREADS
self.pool = ThreadPoolExecutor(max_workers=self.max_workers)
self.connect()
def activate_listener(self):
"""
Activates listening process.
Returns:
None:
"""
try:
self.subscribe_topic()
for message in self.consumer:
try:
if not self.clear_messages:
self.pool.submit(self.process, message)
else:
logging.info("Ack")
self.consumer.commit()
except Exception as e:
logging.error(f"Unexpected error: {e}")
except CommitFailedError:
logging.error("Commit error, reconnecting to kafka group")
self._reconnect()
except Exception as e:
logging.error(f"Unexpected error: {e}")
def _reconnect(self):
"""
Close consumer, connect again and start listening.
Returns:
None:
"""
if self.consumer:
self.consumer.close()
self.consumer = None
self.connect()
self.activate_listener()
def stop(self):
"""
Closes consuming process.
Returns:
None:
"""
self.consumer.close()
logging.info("Consumer is closed")
def subscribe_topic(self):
"""
Subscribes on kafka topic
Returns:
None:
"""
self.consumer.subscribe([self.topic])
logging.info("Consumer is listening")
def connect(self):
"""
Creates KafkaConsumer instance.
Returns:
None:
"""
self.consumer = KafkaConsumer(
bootstrap_servers=self.broker,
group_id=self.group_id,
auto_offset_reset="latest",
enable_auto_commit=False,
api_version=(2, 1, 0),
value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
def process(self, message):
"""
Processing of consuming.
Args:
message (dict): kafka message
Returns:
None:
"""
result = self.on_message(message)
if result:
logging.info("ack")
else:
logging.info("reject")
def on_message(self, message):
"""Empty method for child consumers. Using in self.process()"""
class VideoUploadConsumer(Consumer):
"""
A child consumer to get messages about uploaded videos from kafka.
Uploads video to s3 service, creates db row.
"""
def on_message(self, message):
"""
Uploads video to s3 service, creates db row.
Args:
message (dict): message from kafka publisher
Returns:
bool: True if file has been uploaded to s3,
db row has been created else False
"""
try:
filepath = message.value.get("filepath")
file_id = message.value.get("file_id")
# filename = message.value.get(
# "id") + "." + filepath.split(".")[-1]
logging.info(f"Uploading video with id: {file_id}")
# filepath, bucket, object_key = upload_file_to_s3(
# message.value.get("filepath"), "bucket", filename)
process_after_video_upload_views(file_id, filepath)
return True
except AssertionError as e:
logging.error(f"Assertion error: {e}")
return False
class FaceRecognitionConsumer(Consumer):
"""
A child consumer to get messages about videos which needed to
processing of recognition from kafka.
"""
def on_message(self, message):
"""
Starts process of recognition.
Args:
message (dict): message from kafka publisher
Returns:
bool: True if file has been uploaded to s3,
db row has been created else False
"""
try:
filepath = message.value.get("filepath")
file_id = message.value.get("file_id")
face_recognition = FaceRecognition(file_id, filepath)
faces_list, names_list = face_recognition.process()
# @@@ add file to db
return True
except AssertionError as e:
logging.error(f"Assertion error: {e}")
return False | f90797e0e7009b18d86b65a56274f9a6 | {
"intermediate": 0.38910767436027527,
"beginner": 0.41791537404060364,
"expert": 0.19297702610492706
} |
19,133 | private void CloseWindowIfItItemContainer(int itemContainerInstanceID)
{
inventoryItemService.TryGetItemTableInventoryItem(itemContainerInstanceID, out ItemTable itemTable);
if (itemTable.InventoryMetadata is ContainerMetadata containerMetadata)
{
windowContainerChangeStateEventChannelSo.RaiseEvent(itemTable, WindowUpdateState.Close);
foreach (var gridTable in containerMetadata.GridsInventory) {
foreach (var itemTableInContainer in gridTable.GetAllItemsFromGrid()) {
if (itemTableInContainer.InventoryMetadata is ContainerMetadata containerMetadata2)
{
windowContainerChangeStateEventChannelSo.RaiseEvent(itemTableInContainer, WindowUpdateState.Close);
foreach (var gridTable2 in containerMetadata2.GridsInventory) {
foreach (var itemTableInContainer2 in gridTable2.GetAllItemsFromGrid()) {
if (itemTableInContainer2.InventoryMetadata is ContainerMetadata containerMetadata3)
{
windowContainerChangeStateEventChannelSo.RaiseEvent(itemTableInContainer2, WindowUpdateState.Close);
}
}
}
}
}
}
}
}
try to optimize this | 46a4c986ca6934608026aeffbad1cc33 | {
"intermediate": 0.4090195596218109,
"beginner": 0.2536109983921051,
"expert": 0.33736947178840637
} |
19,134 | example of using the $(selector).moveTo({ xOffset, yOffset }) function in js | b1f03bcd5b908708fc8599db7223a91d | {
"intermediate": 0.2508869469165802,
"beginner": 0.44203177094459534,
"expert": 0.30708128213882446
} |
19,135 | import logging
import time
import uuid
from concurrent.futures import ThreadPoolExecutor
import cv2
from app import redis_service
from libs.face_recognition import ALG, FaceRecognitionStatusEnum
import numpy as np
from settings import NUM_MAX_THREADS
class FaceRecognition:
"""Service for using opencv face recognition."""
def __init__(self, file_id, video_path, threshold=80):
"""
Sets model’s parameters.
Args:
file_id (str): file id
video_path (str): path to video
threshold (int): model’s threshold
"""
self.face_cascade_path = cv2.data.haarcascades + ALG
self.face_cascade = cv2.CascadeClassifier(self.face_cascade_path)
self.faces_list = []
self.names_list = []
self.threshold = threshold
self.video_path = video_path
self.video = cv2.VideoCapture(self.video_path)
self.frame_count = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT))
self.frame_num = int(self.video.get(cv2.CAP_PROP_POS_FRAMES))
self.file_id = file_id
self.status = FaceRecognitionStatusEnum.PROCESS
self.persons = 0
def process(self):
"""
Process of recognition faces in video by frames.
Writes id as uuid4.
Returns:
tuple: with list of faces and list of names
"""
pool = ThreadPoolExecutor(max_workers=NUM_MAX_THREADS)
# redis_data = redis_service.get(self.file_id)
# self.status = redis_data.get("status")
while True:
if self.status == FaceRecognitionStatusEnum.PAUSE:
logging.info(
f"File id: {self.file_id} | Paused by user"
f"Frame: {self.frame_num}")
break
if self.status == FaceRecognitionStatusEnum.RESUME:
resume_data = redis_service.get(self.file_id)
self.faces_list = resume_data.get("faces_list")
self.names_list = resume_data.get("names_list")
self.frame_num = resume_data.get("frame")
self.persons = resume_data.get("persons")
# @@@
logging.info(f"RESUME"
f"faces {self.faces_list} / {type(self.faces_list)}"
f"names {self.names_list} / {type(self.names_list)}"
f"frame {self.frame_num} / {type(self.frame_num)}"
f"persons {self.persons} / {type(self.persons)}")
logging.info(
f"\nFile id: {self.file_id} | Resume\n"
f"Frame: {self.frame_num}")
for i in range(self.frame_num + 1):
self.video.read()
self.status = FaceRecognitionStatusEnum.PROCESS
ret, frame = self.video.read()
if not ret:
break
pool.submit(self._process_frame, frame)
pool.shutdown()
self._close()
if self.frame_count == self.frame_num:
self.status = FaceRecognitionStatusEnum.READY
logging.info(
f"Video with id {self.file_id} closed\n"
f"Status: {self.status}")
return self.faces_list, self.names_list
def _process_frame(self, frame):
"""
Frame processing.
Args:
frame (cv2.typing.MatLike): cv2 frame
Returns:
None:
"""
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = self.face_cascade.detectMultiScale(
gray, scaleFactor=1.1, minNeighbors=5, minSize=(100, 100))
for (x, y, w, h) in faces:
cur_face = gray[y:y + h, x:x + w]
rec = cv2.face.LBPHFaceRecognizer.create()
rec.train([cur_face], np.array(0))
f = True
for face in self.faces_list:
_, confidence = rec.predict(face)
if confidence < self.threshold:
f = False
if f:
label = str(uuid.uuid4())
self.faces_list.append(cur_face.tolist())
self.names_list.append(label)
self.persons += 1
redis_service.update(
self.file_id,
{
"file_id": self.file_id,
"faces_list": self.faces_list,
"names_list": self.names_list,
"status": str(self.status),
"frame": str(self.frame_num),
"persons": str(self.persons),
"filepath": self.video_path
}
)
logging.info(
f"\n-----\n"
f"File id: {self.file_id}\n"
f"Frame: {self.frame_num}/{self.frame_count} passed\n"
f"Persons: {self.persons}\n"
f"Status: {self.status}\n"
f"-----\n")
def _close(self):
"""
Closes video and destroys all windows.
Returns:
None:
"""
self.video.release()
cv2.destroyAllWindows()
I want to process each video frame in a separate thread; shouldn't the self.video.read() call happen inside _process_frame? Is there anything else to change? Rewrite it the way it should be | e21bba3fc4f6db2efb2c02b5fa056b54 | {
"intermediate": 0.7052919864654541,
"beginner": 0.22851304709911346,
"expert": 0.06619495153427124
} |
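On the question in the row above: the reverse is usually safer. `cv2.VideoCapture` decodes frames sequentially and its `read()` is not thread-safe, so reading should stay in one thread while only the per-frame work fans out to the pool; binding the frame number at submit time (rather than reading shared state inside the worker) also keeps numbering consistent. A minimal stdlib sketch of that producer/worker split, with a stubbed frame source standing in for `VideoCapture` (`run_pipeline` and its arguments are illustrative names):

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(read_frame, process_frame, max_workers=4):
    """Read frames sequentially in this thread; fan per-frame work out to a pool.

    read_frame() -> (ok, frame) mimics cv2.VideoCapture.read();
    process_frame(frame_num, frame) is the per-frame worker.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        frame_num = 0
        while True:
            ok, frame = read_frame()  # single reader: capture is not thread-safe
            if not ok:
                break
            # Bind the frame number at submit time so workers don't race on it.
            futures.append(pool.submit(process_frame, frame_num, frame))
            frame_num += 1
        # Collect in submit order, so results line up with frame numbers.
        return [f.result() for f in futures]
```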
19,136 | ERROR: statement 0xc0004f1ce0 of dataGetStmtC is not closed!
godror golang | a2c244077be9d10395025ed2a213de43 | {
"intermediate": 0.3957599997520447,
"beginner": 0.37323036789894104,
"expert": 0.23100964725017548
} |
19,137 | write on how to set stretch="uniformtofill" scaling property in c# WPF to Viewport3D object using GPT-3.5 | 1deb4c586601ed07e98c184fc2c48341 | {
"intermediate": 0.5416466593742371,
"beginner": 0.21200482547283173,
"expert": 0.246348574757576
} |
19,138 | import logging
import time
import uuid
from concurrent.futures import ThreadPoolExecutor
import cv2
from app import redis_service
from libs.face_recognition import ALG, FaceRecognitionStatusEnum
import numpy as np
from settings import NUM_MAX_THREADS
class FaceRecognition:
"""Service for using opencv face recognition."""
def __init__(self, file_id, video_path, threshold=80):
"""
Sets model’s parameters.
Args:
file_id (str): file id
video_path (str): path to video
threshold (int): model’s threshold
"""
self.face_cascade_path = cv2.data.haarcascades + ALG
self.face_cascade = cv2.CascadeClassifier(self.face_cascade_path)
self.faces_list = []
self.names_list = []
self.threshold = threshold
self.video_path = video_path
self.video = cv2.VideoCapture(self.video_path)
self.frame_count = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT))
self.frame_num = int(self.video.get(cv2.CAP_PROP_POS_FRAMES))
self.file_id = file_id
self.status = FaceRecognitionStatusEnum.PROCESS
self.persons = 0
def process(self):
"""
Process of recognition faces in video by frames.
Writes id as uuid4.
Returns:
tuple: with list of faces and list of names
"""
pool = ThreadPoolExecutor(max_workers=NUM_MAX_THREADS)
# redis_data = redis_service.get(self.file_id)
# self.status = redis_data.get("status")
while True:
if self.status == FaceRecognitionStatusEnum.PAUSE:
logging.info(
f"File id: {self.file_id} | Paused by user"
f"Frame: {self.frame_num}")
break
if self.status == FaceRecognitionStatusEnum.RESUME:
resume_data = redis_service.get(self.file_id)
self.faces_list = resume_data.get("faces_list")
self.names_list = resume_data.get("names_list")
self.frame_num = resume_data.get("frame")
self.persons = resume_data.get("persons")
# @@@
logging.info(f"RESUME"
f"faces {self.faces_list} / {type(self.faces_list)}"
f"names {self.names_list} / {type(self.names_list)}"
f"frame {self.frame_num} / {type(self.frame_num)}"
f"persons {self.persons} / {type(self.persons)}")
logging.info(
f"\nFile id: {self.file_id} | Resume\n"
f"Frame: {self.frame_num}")
for i in range(self.frame_num + 1):
self.video.read()
self.status = FaceRecognitionStatusEnum.PROCESS
ret, frame = self.video.read()
self.frame_num = int(self.video.get(cv2.CAP_PROP_POS_FRAMES))
if not ret:
return
pool.submit(self._process_frame, frame)
pool.shutdown()
self._close()
logging.info(f"@@@ {self.frame_count} {self.frame_num}")
if self.frame_count == self.frame_num:
self.status = FaceRecognitionStatusEnum.READY
logging.info(
f"\n-----\n"
f"File id: {self.file_id} CLOSED!\n"
f"Frame: {self.frame_num}/{self.frame_count}\n"
f"Persons: {self.persons}\n"
f"Status: {self.status}\n"
f"-----\n")
return self.faces_list, self.names_list
def _process_frame(self, frame):
"""
Frame processing.
Args:
frame (cv2.typing.MatLike): cv2 frame
Returns:
None:
"""
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = self.face_cascade.detectMultiScale(
gray, scaleFactor=1.1, minNeighbors=5, minSize=(100, 100))
for (x, y, w, h) in faces:
cur_face = gray[y:y + h, x:x + w]
rec = cv2.face.LBPHFaceRecognizer.create()
rec.train([cur_face], np.array(0))
f = True
for face in self.faces_list:
_, confidence = rec.predict(face)
if confidence < self.threshold:
f = False
if f:
label = str(uuid.uuid4())
self.faces_list.append(cur_face.tolist())
self.names_list.append(label)
self.persons += 1
redis_service.update(
self.file_id,
{
"file_id": self.file_id,
"faces_list": self.faces_list,
"names_list": self.names_list,
"status": str(self.status),
"frame": str(self.frame_num),
"persons": str(self.persons),
"filepath": self.video_path
}
)
logging.info(
f"\n-----\n"
f"File id: {self.file_id}\n"
f"Frame: {self.frame_num}/{self.frame_count} passed\n"
f"Persons: {self.persons}\n"
f"Status: {self.status}\n"
f"-----\n")
def _close(self):
"""
Closes video and destroys all windows.
Returns:
None:
"""
self.video.release()
cv2.destroyAllWindows()
With this code I see the following in the logs:
File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.608582087Z Frame: 7/1104 passed
2023-09-01T10:05:10.608589247Z Persons: 0
2023-09-01T10:05:10.608596180Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.608603363Z -----
2023-09-01T10:05:10.608610332Z
2023-09-01T10:05:10.617082688Z 2023-09-01 10:05:10,616 - root - INFO -
2023-09-01T10:05:10.617142612Z -----
2023-09-01T10:05:10.617152651Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.617160653Z Frame: 9/1104 passed
2023-09-01T10:05:10.617198838Z Persons: 0
2023-09-01T10:05:10.617207752Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.617216275Z -----
2023-09-01T10:05:10.617223828Z
2023-09-01T10:05:10.625078084Z 2023-09-01 10:05:10,624 - root - INFO -
2023-09-01T10:05:10.625111060Z -----
2023-09-01T10:05:10.625119196Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.625126777Z Frame: 11/1104 passed
2023-09-01T10:05:10.625134790Z Persons: 0
2023-09-01T10:05:10.625141743Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.625149257Z -----
2023-09-01T10:05:10.625156428Z
2023-09-01T10:05:10.633831648Z 2023-09-01 10:05:10,633 - root - INFO -
2023-09-01T10:05:10.633880141Z -----
2023-09-01T10:05:10.633890014Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.633897476Z Frame: 13/1104 passed
2023-09-01T10:05:10.633904708Z Persons: 0
2023-09-01T10:05:10.633912736Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.633920556Z -----
2023-09-01T10:05:10.633927612Z
2023-09-01T10:05:10.643055683Z 2023-09-01 10:05:10,642 - root - INFO -
2023-09-01T10:05:10.643098971Z -----
2023-09-01T10:05:10.643110143Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.643118501Z Frame: 14/1104 passed
2023-09-01T10:05:10.643146388Z Persons: 0
2023-09-01T10:05:10.643165820Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.643173137Z -----
2023-09-01T10:05:10.643180116Z
2023-09-01T10:05:10.650327925Z 2023-09-01 10:05:10,650 - root - INFO -
2023-09-01T10:05:10.650371503Z -----
2023-09-01T10:05:10.650379598Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.650386596Z Frame: 15/1104 passed
2023-09-01T10:05:10.650394358Z Persons: 0
2023-09-01T10:05:10.650402364Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.650409419Z -----
2023-09-01T10:05:10.650416290Z
2023-09-01T10:05:10.654174595Z 2023-09-01 10:05:10,653 - root - INFO -
2023-09-01T10:05:10.654222958Z -----
2023-09-01T10:05:10.654232245Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.654241438Z Frame: 15/1104 passed
2023-09-01T10:05:10.654248645Z Persons: 0
2023-09-01T10:05:10.654256864Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.654263813Z -----
2023-09-01T10:05:10.654271947Z
2023-09-01T10:05:10.662678493Z 2023-09-01 10:05:10,662 - root - INFO -
2023-09-01T10:05:10.662774357Z -----
2023-09-01T10:05:10.662786233Z File id: 6f95411a-83ae-4cd9-bf5e-bea6f0798650
2023-09-01T10:05:10.662794704Z Frame: 17/1104 passed
2023-09-01T10:05:10.662801955Z Persons: 0
2023-09-01T10:05:10.662810091Z Status: FaceRecognitionStatusEnum.PROCESS
2023-09-01T10:05:10.662818633Z -----
2023-09-01T10:05:10.662826704Z
2023-09-01T10:05:10.671762973Z 2023-09-01 10:05:10,671 - root - INFO -
2023-09-01T10:05:10.671823619Z -----
That is, the frames seem to repeat and then jump ahead; how do I get the correct value of the current frame?
I update frame_num here:
ret, frame = self.video.read()
self.frame_num = int(self.video.get(cv2.CAP_PROP_POS_FRAMES)) | 25ef492a317525b8f21e0a8babf453ee | {
"intermediate": 0.7052919864654541,
"beginner": 0.22851304709911346,
"expert": 0.06619495153427124
} |
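The repeated and skipped frame numbers in the log above come from reading `CAP_PROP_POS_FRAMES` in the reader loop while pool workers log concurrently: by the time a worker prints, the shared `self.frame_num` has already moved on. A minimal sketch of one fix (the helper and names here are illustrative, not from the original class) is to number frames in the single reader loop and pass the index into each worker:

```python
from concurrent.futures import ThreadPoolExecutor

def process_frames(frames, worker, max_workers=4):
    """Number frames in the reader loop and hand (index, frame) pairs
    to the pool, so each worker reports the frame it actually received
    instead of a shared counter that races ahead."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [
            pool.submit(worker, frame_num, frame)
            for frame_num, frame in enumerate(frames, start=1)
        ]
        return [f.result() for f in futures]
```

In the class above this would mean something like `pool.submit(self._process_frame, self.frame_num, frame)` and logging the received argument inside the worker rather than `self.frame_num`.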
19,139 | in typescript concat array of strings with commas like string1, string 2 | 92bb0656b6bbb4391fd832661f9e174b | {
"intermediate": 0.37189748883247375,
"beginner": 0.3554670214653015,
"expert": 0.27263543009757996
} |
19,140 | use method pw.Align of pdf dependencies to display first 3 pages of a pdf url flutter | 3ce1b60f02a9d799a0d167b2006716e0 | {
"intermediate": 0.45054692029953003,
"beginner": 0.25886374711990356,
"expert": 0.2905893921852112
} |
19,141 | How to open PDF document with specific page range flutter | b667c2de6c259e5bcc0d8b5418038d3f | {
"intermediate": 0.39284729957580566,
"beginner": 0.20047390460968018,
"expert": 0.40667879581451416
} |
19,142 | How to open PDF document from pdf url with specific page range flutter web | ddf1ef94884a49675e3d95c5934b02a4 | {
"intermediate": 0.42949551343917847,
"beginner": 0.19540542364120483,
"expert": 0.3750990927219391
} |
19,143 | How to open PDF document from pdf url with specific page range flutter web firebase storage | 107db1bef7caee99e67464a6edfd8626 | {
"intermediate": 0.6373627185821533,
"beginner": 0.10073481500148773,
"expert": 0.26190242171287537
} |
19,144 | I want to make a transparent wireless I2C bus using two ESP32s and ESP-NOW, using one ESP32 as the I2C receiver and one ESP32 as the I2C transmitter, so I can have raw I2C data transmitted wirelessly. How can I do this? | 9d3986ef9c6fd61b408b2a8007e54fb1 | {
"intermediate": 0.5037655234336853,
"beginner": 0.1988103985786438,
"expert": 0.29742416739463806
} |
19,145 | How to open PDF document from pdf url with specific page range flutter web viewer | 75f73662192413b9fde664f892acc92b | {
"intermediate": 0.42856210470199585,
"beginner": 0.17079193890094757,
"expert": 0.4006459414958954
} |
19,146 | add task reschedule for this code
import datetime
import heapq
# Initialize an empty list to store tasks
tasks = []
def add_task():
title = input("Enter task title: ")
description = input("Enter task description: ")
due_date = input("Enter due date (YYYY-MM-DD): ")
priority = input("Enter priority (high, medium, low): ")
task = {
'title': title,
'description': description,
'due_date': due_date,
'priority': priority,
'completed': False
}
tasks.append(task)
print("Task added successfully!")
def display_upcoming_tasks():
today = datetime.date.today()
upcoming_tasks = [task for task in tasks if not task['completed'] and task['due_date'] >= str(today)]
if upcoming_tasks:
print("Upcoming tasks:")
for idx, task in enumerate(upcoming_tasks, start=1):
print(f"{idx}. Title: {task['title']}, Due Date: {task['due_date']}, Priority: {task['priority']}")
else:
print("No upcoming tasks found.")
def mark_task_as_completed():
display_upcoming_tasks()
task_index = int(input("Enter the task number to mark as completed: ")) - 1
if 0 <= task_index < len(tasks):
tasks[task_index]['completed'] = True
print(f"Task '{tasks[task_index]['title']}' marked as completed.")
else:
print("Invalid task number.")
def main():
while True:
print("\nTask Scheduler Menu:")
print("1. Add Task")
print("2. Display Upcoming Tasks")
print("3. Mark Task as Completed")
print("4. Quit")
choice = input("Enter your choice (1/2/3/4): ")
if choice == '1':
add_task()
elif choice == '2':
display_upcoming_tasks()
elif choice == '3':
mark_task_as_completed()
elif choice == '4':
break
else:
print("Invalid choice. Please try again.")
if __name__ == "__main__":
main() | 5f3f755710894e239d1ce9e82876fed1 | {
"intermediate": 0.31389233469963074,
"beginner": 0.5020986199378967,
"expert": 0.18400904536247253
} |
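One way to add the requested reschedule feature is a small helper the menu could call as a new option; `reschedule_task` below is a sketch (its signature is my own, not part of the original script) that takes the 1-based number of a pending task and a new date string:

```python
def reschedule_task(tasks, task_number, new_date):
    """Reschedule a pending task. task_number is 1-based among
    not-yet-completed tasks; new_date is a 'YYYY-MM-DD' string.
    Returns True on success, False for an invalid selection."""
    pending = [t for t in tasks if not t['completed']]
    if not (1 <= task_number <= len(pending)):
        return False
    # pending holds references to the same dicts, so this mutates tasks
    pending[task_number - 1]['due_date'] = new_date
    return True
```

In `main()` this could be wired as a new menu branch that displays the upcoming tasks, prompts for the task number and date, and passes them in.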
19,147 | class _PDFTrailer extends State<PDFTrailer> {
late PdfViewerController _pdfViewerController;
@override
void initState() {
_pdfViewerController = PdfViewerController();
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title!),
actions: <Widget>[
IconButton(
icon: Icon(
Icons.arrow_drop_down_circle,
color: Colors.white,
),
onPressed: () {
_pdfViewerController.jumpToPage(5);
},
),
],
),
body: SfPdfViewer.network(
widget.pdfurl!,
controller: _pdfViewerController,
),
);
}
}display first 3 pages only | 68723f175d7631999bcaf340c2e24207 | {
"intermediate": 0.39855700731277466,
"beginner": 0.33381274342536926,
"expert": 0.26763027906417847
} |
19,148 | in a TypeScript conditional type, how can I get the type of the value that is being assigned | 0ec8a9383296aa99907539bff930c1ce | {
"intermediate": 0.4343336522579193,
"beginner": 0.35752031207084656,
"expert": 0.2081460952758789
} |
19,149 | @Query(value = "select i.[Number] + '' + ic.Code + ''+ a.Code + '' + l.MarketingLobCode + '' + p.[Number] " + "+ '' + insp.Code + '' + isnull(party.Tin,'0') + ''" + " + isnull(JSON_VALUE(po.ObjectData, '$.PlateNumber'),'NA/PN') + '' +isnull(party.crs,'null/crs')" + "from meru.Invoice i " + "join meru.Policy p on i.PolicyId = p.Id " + "join meru.InsuranceCompany ic on ic.Id = p.InsuranceCompanyId " + "join meru.Agent a on a.Id = p.AgentId " + "join meru.InsLob il on il.InsPackageId = p.PackageId " + "join meru.LobRegistry l on l.Id = il.LobRegistryId " + "join meru.Policyholder h on h.id= p.PolicyholderId " + "join meru.Party party on party.Id = h.PersonId " + "join meru.InsPackage insp on p.PackageId = insp.Id " + "join meru.PolicyObject po on p.Id = po.PolicyId " + "join meru.PolicyInsuredPerson pip on party.Id = pip.PersonId " + "where p.Deleted=0 and i.Deleted=0 and ic.code='010' " + "group by i.[Number], ic.Code, a.Code, l.MarketingLobCode, p.[Number], insp.Code, party.Tin, po.ObjectData, party.crs ", nativeQuery = true) List findUniqueAutoErgoInvoicesWithAllData();
with this query how can I acquire from the Party table the crs for the policyholder, and separately for the PolicyInsuredPerson (again crs from Party)? | 2263e5a5c02cb3e945519cd1ced97473 | {
"intermediate": 0.49668294191360474,
"beginner": 0.23652473092079163,
"expert": 0.266792356967926
} |
19,150 | SELECT name FROM Staff WHERE salary in (60000, 80000) | be729f594b5c8930bd6a78d702ff5a80 | {
"intermediate": 0.3866768181324005,
"beginner": 0.31060245633125305,
"expert": 0.3027207851409912
} |
19,151 | how to load apart of pdf file from pdf irl flutter | 141f7605e150159b1b8e8f22e5b8cc78 | {
"intermediate": 0.4915603995323181,
"beginner": 0.23448528349399567,
"expert": 0.2739543616771698
} |
19,152 | I have a public list
public List<Tile> _selectedTiles = new List<Tile>();
I want to make a line renderer so all its points are at the same position as the objects in the list above | cdfe80da8939d56244a2c4cd7ee92453 | {
"intermediate": 0.5031031370162964,
"beginner": 0.19270576536655426,
"expert": 0.30419114232063293
} |
19,153 | import logging
import time
import uuid
from concurrent.futures import ThreadPoolExecutor
import cv2
from app import redis_service
from libs.face_recognition import ALG, FaceRecognitionStatusEnum
import numpy as np
from settings import NUM_MAX_THREADS
class FaceRecognition:
"""Service for using opencv face recognition."""
def __init__(self, file_id, video_path, threshold=80):
"""
Sets model’s parameters.
Args:
file_id (str): file id
video_path (str): path to video
threshold (int): model’s threshold
"""
self.face_cascade_path = cv2.data.haarcascades + ALG
self.face_cascade = cv2.CascadeClassifier(self.face_cascade_path)
self.faces_list = []
self.names_list = []
self.threshold = threshold
self.video_path = video_path
self.video = cv2.VideoCapture(self.video_path)
self.frame_count = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT))
self.frame_num = int(self.video.get(cv2.CAP_PROP_POS_FRAMES))
self.file_id = file_id
self.status = FaceRecognitionStatusEnum.PROCESS
self.persons = 0
def process(self):
"""
Process of recognition faces in video by frames.
Writes id as uuid4.
Returns:
tuple: with list of faces and list of names
"""
pool = ThreadPoolExecutor(max_workers=NUM_MAX_THREADS)
# redis_data = redis_service.get(self.file_id)
# self.status = redis_data.get("status")
while True:
if self.status == FaceRecognitionStatusEnum.PAUSE:
logging.info(
f"File id: {self.file_id} | Paused by user"
f"Frame: {self.frame_num}")
break
if self.status == FaceRecognitionStatusEnum.RESUME:
resume_data = redis_service.get(self.file_id)
self.faces_list = resume_data.get("faces_list")
self.names_list = resume_data.get("names_list")
self.frame_num = resume_data.get("frame")
self.persons = resume_data.get("persons")
# @@@
logging.info(f"RESUME"
f"faces {self.faces_list} / {type(self.faces_list)}"
f"names {self.names_list} / {type(self.names_list)}"
f"frame {self.frame_num} / {type(self.frame_num)}"
f"persons {self.persons} / {type(self.persons)}")
logging.info(
f"\nFile id: {self.file_id} | Resume\n"
f"Frame: {self.frame_num}")
for i in range(self.frame_num + 1):
self.video.read()
self.status = FaceRecognitionStatusEnum.PROCESS
ret, frame = self.video.read()
if not ret:
break
self.frame_num = int(self.video.get(cv2.CAP_PROP_POS_FRAMES))
pool.submit(self._process_frame, frame)
pool.shutdown()
self._close()
logging.info(f"@@@ {self.frame_count} {self.frame_num}")
self.status = FaceRecognitionStatusEnum.READY
logging.info(
f"\n-----\n"
f"File id: {self.file_id} CLOSED!\n"
f"Frame: {self.frame_num}/{self.frame_count}\n"
f"Persons: {self.persons}\n"
f"Status: {self.status}\n"
f"-----\n")
return self.faces_list, self.names_list
def _process_frame(self, frame):
"""
Frame processing.
Args:
frame (cv2.typing.MatLike): cv2 frame
Returns:
None:
"""
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = self.face_cascade.detectMultiScale(
gray, scaleFactor=1.1, minNeighbors=5, minSize=(100, 100))
for (x, y, w, h) in faces:
cur_face = gray[y:y + h, x:x + w]
rec = cv2.face.LBPHFaceRecognizer.create()
rec.train([cur_face], np.array(0))
f = True
for face in self.faces_list:
_, confidence = rec.predict(face)
if confidence < self.threshold:
f = False
if f:
label = str(uuid.uuid4())
self.faces_list.append(cur_face.tolist())
self.names_list.append(label)
self.persons += 1
redis_service.update(
self.file_id,
{
"file_id": self.file_id,
"faces_list": self.faces_list,
"names_list": self.names_list,
"status": str(self.status),
"frame": str(self.frame_num),
"persons": str(self.persons),
"filepath": self.video_path
}
)
logging.info(
f"\n-----\n"
f"File id: {self.file_id}\n"
f"Frame: {self.frame_num}/{self.frame_count} passed\n"
f"Persons: {self.persons}\n"
f"Status: {self.status}\n"
f"-----\n")
def _close(self):
"""
Closes video and destroys all windows.
Returns:
None:
"""
self.video.release()
cv2.destroyAllWindows()
logging.info(f"File: {self.file_id} | All windows destroys")
What is wrong with this code? Why is an incorrect self.frame_num value written to the logs, and why does the code hang at the end?
"intermediate": 0.7052919864654541,
"beginner": 0.22851304709911346,
"expert": 0.06619495153427124
} |
19,154 | how to get a part of pdf file and display it | bbb2c2207a6fe74d84e90c9cc98772a2 | {
"intermediate": 0.3520067632198334,
"beginner": 0.2732660472393036,
"expert": 0.37472712993621826
} |
19,155 | use pdf flutter extract 3 pages from pdf url | 3fec2aa83eb1771e3659406796ee0ebd | {
"intermediate": 0.33193439245224,
"beginner": 0.28667256236076355,
"expert": 0.38139304518699646
} |
19,156 | use pdf flutter extract 3 pages from pdf url and display it on web | 95b25b99c73ef6889b385f48ab8c5701 | {
"intermediate": 0.4111452102661133,
"beginner": 0.25147831439971924,
"expert": 0.3373764753341675
} |
19,157 | use native_pdf_renderer to extract 3 page and display it | 745f9a86f8b0e750cfc5c5d2aae3ea93 | {
"intermediate": 0.4635392725467682,
"beginner": 0.2462339699268341,
"expert": 0.2902268171310425
} |
19,158 | use native_pdf_renderer to extract 3 page from pdf url and display it | 8cd32679cc1f7b440261043e54d2b14d | {
"intermediate": 0.5117948055267334,
"beginner": 0.19794699549674988,
"expert": 0.2902582585811615
} |
19,159 | Would this ajv schema function correctly?
export const updateAirTableRequestSchema: Schema = {
title: "UpdateAirTable",
properties: {
account: [
{
id: { type: "string" },
name: { type: "string" },
credential_identifier: { type: "string" },
},
],
identities: [
{
id: { type: "string" },
name: { type: "string" },
},
],
pixel: [
{
id: { type: "string" },
name: { type: "string" },
},
],
required: ["account", "identities", "pixel"],
additionalProperties: false,
},
}; | d02d27ea4f7d3a7c63939059381824a9 | {
"intermediate": 0.5061050057411194,
"beginner": 0.3170505166053772,
"expert": 0.17684446275234222
} |
19,160 | $(document).ready(function(){
$(".generate_a").click(function(){
// $(this).parents(".homework_div").append($(this).parents('.homework_div').clone());
$(this).parents(".homework_div").append('<div class="homework_div mt-3"></div>');
});
}); what does this code do | 274e46d200918c705edd2115f9648820 | {
"intermediate": 0.3012675642967224,
"beginner": 0.5198187828063965,
"expert": 0.17891359329223633
} |
19,161 | I use the latest PyTorch to predict stock prices. How do I set the random seed as fully as possible? | eaca7c940ff26343c1e350102965f230 | {
"intermediate": 0.28966450691223145,
"beginner": 0.11479862034320831,
"expert": 0.5955368876457214
} |
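A common approach to seeding "as fully as possible" is to gather every RNG source into one function; the sketch below assumes the usual torch/numpy APIs and skips them gracefully if those libraries are absent. Note that `PYTHONHASHSEED` only takes effect if set before the interpreter starts, so setting it in-process is best-effort:

```python
import os
import random

def set_full_seed(seed: int = 42) -> None:
    """Seed every RNG source in one place; numpy/torch parts are
    skipped gracefully when those libraries are not installed."""
    # Best-effort: only effective if set before the interpreter starts
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # cuDNN autotuning is a source of nondeterminism on GPU
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass
```

Calling this once at the top of the training script (and again before each run you want to compare) makes repeated runs reproduce the same random draws on the same hardware.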
19,162 | use pdf_render extract 3 pages from pdf url and display it | 5d0ca23004501ced19bc52194df8eab6 | {
"intermediate": 0.4526976943016052,
"beginner": 0.21988438069820404,
"expert": 0.32741791009902954
} |
19,163 | Query in SQL the list of CITY names from STATION that do not end with vowels. Your result cannot contain duplicates. | dc4cd5460c9fc979b2ba3c26d81b9512 | {
"intermediate": 0.4782121479511261,
"beginner": 0.2606494426727295,
"expert": 0.26113834977149963
} |
19,164 | There is this document:
task_page.html
<a href="#" class="is-size-6 has-text-weight-bold generate_a" data-task-id="{{ tasks.id }}">Generate similar</a>
generate.js
function generateTask(tasksId) {
// Create an XMLHttpRequest object
let xhr = new XMLHttpRequest();
// Set the request method and URL
xhr.open('POST', '/generate_task/' + tasksId);
// Set the request headers
xhr.setRequestHeader('Content-Type', 'application/json');
// Send the request to the server
xhr.send(JSON.stringify({'tasks_id': tasksId}));
// Handle the server response
xhr.onload = function() {
if (xhr.status === 200) {
var response = JSON.parse(xhr.responseText);
var taskId = response[0];
var taskText = response[1];
var answer = response[2];
// Create a homework_div mt-3 block
var homeworkDiv = $('<div class="homework_div mt-3"></div>');
// Add task_text to the block
homeworkDiv.text(taskText);
// Append the block to the parent element
$(".generate_a[data-task-id='" + taskId + "']").parents(".homework_div").append(homeworkDiv);
}
}
}
$(document).ready(function(){
$(".generate_a").click(function(){
var taskId = $(this).data('task-id'); // Get the task ID from the data attribute
generateTask(taskId);
});
});
App.py
import sys
import importlib
sys.path.append('plugins')
@app.route('/generate_task/<tasks_id>', methods=['POST', "GET"])
def generate_task(tasks_id):
data = json.loads(request.data) # Get the data from the request
tasks_id = data['tasks_id'] # Extract tasks_id
smth = Tasks.query.options(load_only('url')).get(int(tasks_id)) #.options(load_only('url'))
file_name, func_name = smth.url.split('||')
results_view = importlib.import_module(file_name)
result = getattr(results_view, func_name)()
print(result)
task_text,answer = result
# Return status 200 and the message as JSON
return jsonify([tasks_id, task_text,answer])
the text is appended, but I need the formulas from taskText to be rendered by MathJax when they are added | 40cfcf2ab7fe40af7143acaaeca884e3 | {
"intermediate": 0.3498482406139374,
"beginner": 0.5480141043663025,
"expert": 0.10213770717382431
} |
19,165 | extract data from pdf url of firebase storage and create image and display it flutter | c5d7679cfedb555b45d2131fbbab0890 | {
"intermediate": 0.576621413230896,
"beginner": 0.14222455024719238,
"expert": 0.28115397691726685
} |
19,166 | extract pages from bodyBytes of pdf url flutter | 21751fcfe4005d251de903a0293b7409 | {
"intermediate": 0.3651699423789978,
"beginner": 0.2903316020965576,
"expert": 0.34449851512908936
} |
19,167 | Create me an excel VBA macro code that creates a folder that is titled with the contents of A1 and A2 with a _ in the title that separates the contents of A1 from A2 | 3a4e1d5f4afcb05b38250be509fa4647 | {
"intermediate": 0.3235219419002533,
"beginner": 0.26881587505340576,
"expert": 0.40766215324401855
} |
19,168 | pdf to array image flutter | 72c045c69f5e58a8580330155ca548ae | {
"intermediate": 0.34573638439178467,
"beginner": 0.3225772976875305,
"expert": 0.33168625831604004
} |
19,169 | Using Dev C++, Notepad, or another text editor, create a new C++ Program in your Unit3_Assignment_YourLastName folder called Calculations_YourLastname.cpp (main.cpp is fine if using Repl.it)
3. Add a Multiple line C++ comment to the top of the program with your first name, last name, course name section, Name of Text Editor/Software used to create the program, and the a summary of what the program does. A description like: “ This is a program to work with the various arithmetic operators in C++ and conditions.” would be a good start.
/* In C++ (and several other languages), multi-line comments appear between these symbols */
// Label Input, Process, and Output with single line comments
4. Declare two variables of type double
*number1
*number2
5. Declare two variables of type integer
*number3
*number4
6. Declare these following other variables to be used in the program
*addition (type double)
*subtraction (type double)
*multiplication (type double)
*division (type double)
*modulus (type integer)
7. Be sure to have an output (cout <<) Statement that asks the user for each number:
*i.e. cout << “Please enter Number 1: ”;
*…
8. Accept the input from the user (cin>>) number1, number2, number3, andnumber4
9. Set
*addition = number1 + number2
i. output value to the screen
*subtraction = number1 - number2
i. output value to the screen
*multiplication = number1 * number2
i. output value to the screen
*division = number1 / number2
i. output value to the screen
*modulus = number3 % number4
i. output value to the screen
10. Remember to put your return 0; line before the closing }
Conditions:
As a programmer, we can never assume that everyone is on the light side of the force or that end user will not make mistakes in the input.
If (condition) a user enters 0 for number4...or for number2... communicate that division by 0 is not permitted in math | 69c8fae4fc4e704d31fdb24de10d11a0 | {
"intermediate": 0.3362071216106415,
"beginner": 0.422206848859787,
"expert": 0.24158602952957153
} |
19,170 | I want to make a pathfinding algorithm in Python but don't know how I'd store the map. I want it to support curvy roads, different weights, and layers.
Curvy roads are self-explanatory; different weights mean that it would prefer, for example, a highway over a dirt road, expressed as a decimal value from 0 to 1.
By layers I mean that multiple roads can overlap and it can still distinguish which roads are actually possible to drive on, so it wouldn't try to go from an on-ground road to, for example, an underground tunnel. | c217bc48dfca7b3848210edf2ea0f60d | {
"intermediate": 0.14225469529628754,
"beginner": 0.07173130661249161,
"expert": 0.7860140204429626
} |
19,171 | I want to make a pathfinding algorithm in Python but don't know how I'd store the map. I want it to support curvy roads, different weights, and layers.
Curvy roads are self-explanatory; different weights mean that it would prefer, for example, a highway over a dirt road, expressed as a decimal value from 0 to 1.
By layers I mean that multiple roads can overlap and it can still distinguish which roads are actually possible to drive on, so it wouldn't try to go from an on-ground road to, for example, an underground tunnel.
Here are my current problems:
I don't know how to actually store the map with all the nodes.
I don't know how I'd store the curvy roads.
Layering I could probably do by storing which nodes another node can go to, but I first have to know how to actually store it so that it is compatible with traditional ways of storing maps and other pathfinding algorithms. | 5eca7e1e7dd93fb0b7c6dbd32636bae1 | {
"intermediate": 0.16697987914085388,
"beginner": 0.08609699457883835,
"expert": 0.7469230890274048
} |
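One way to store such a map is a plain adjacency structure where each directed edge carries its layer, a 0-to-1 preference weight, and the polyline geometry of the road. The names below are illustrative; a pathfinder like Dijkstra or A* can consume this by treating each edge as a single weighted hop:

```python
# Nodes are ids with coordinates; edges carry a layer, a 0-to-1
# preference, and a polyline ("geometry") so a curvy road keeps its
# shape while the pathfinder still sees it as one weighted edge.
nodes = {
    "A": (0.0, 0.0),
    "B": (4.0, 0.0),
    "C": (4.0, 3.0),
}

edges = {
    "A": [
        {"to": "B", "layer": 0, "preference": 1.0,   # highway
         "geometry": [(0, 0), (1, 0.5), (3, -0.5), (4, 0)]},
    ],
    "B": [
        {"to": "C", "layer": -1, "preference": 0.3,  # underground dirt road
         "geometry": [(4, 0), (4, 3)]},
    ],
    "C": [],
}

def polyline_length(points):
    """Driven distance along a curvy road's polyline."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )

def edge_cost(edge):
    # Real distance along the curve, scaled so low-preference roads
    # cost more; a near-zero preference makes a road effectively unusable.
    return polyline_length(edge["geometry"]) / max(edge["preference"], 1e-9)
```

Layers then fall out of the adjacency itself: an overground road and a tunnel crossing at the same coordinates never connect unless a shared node (a ramp) explicitly links them.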
19,172 | async generateMultiplePackageSlips(orders: OrderEntity[]) {
const pdf = new jsPdf({
orientation: 'portrait',
compress: true,
});
const logo = await readFile(
path.resolve(__dirname, '../assets/logo.png'),
);
pdf.addImage(
Buffer.from(logo).toString('base64'),
'PNG',
pdf.internal.pageSize.width / 2 - 12.5,
10,
25,
25,
);
pdf.setFontSize(32).text(
'Packing Slip',
pdf.internal.pageSize.width / 2,
50,
{
align: 'center',
},
);
pdf.setFontSize(12);
const addText = (inputValue: string, x: number, y: number) => {
const arrayOfNormalAndBoldText = inputValue.split('**');
arrayOfNormalAndBoldText.map((text, i) => {
pdf.setFont(pdf.getFont().fontName, 'bold');
if (i % 2 === 0) {
pdf.setFont(pdf.getFont().fontName, 'normal');
}
pdf.text(text, x, y);
x = x + pdf.getTextWidth(`${text}${i > 0 ? ' ' : ''}`);
});
};
const headerStartLine = 64;
let lineHeight = 6;
let currentPageY = headerStartLine;
orders.forEach((order) => {
// pdf.addPage();
const spaceRequired = 64;
if (
currentPageY + spaceRequired >
pdf.internal.pageSize.height - 20
) {
currentPageY += 5;
pdf.addPage();
}
addText(`Order Number: ${order.wordpressId}`, 20, headerStartLine);
addText(
`Customer Name: **${order.deliveryName}**`,
20,
currentPageY + lineHeight,
);
addText(
order.customerHeight
? `Customer Height: **${order.customerHeight}cm**`
: '',
20,
currentPageY + lineHeight * 3,
);
addText(
order.customerWeight
? `Customer Weight: **${order.customerWeight}kg**`
: '',
80,
currentPageY + lineHeight * 3,
);
pdf.setDrawColor('#828282');
lineHeight = 8;
const drawSquare = (
x1: number,
y1: number,
x2: number,
y2: number,
) => {
pdf.line(x1, y1, x1, y2);
pdf.line(x1, y2, x2, y2);
pdf.line(x2, y2, x2, y1);
pdf.line(x2, y1, x1, y1);
};
const addTable = (y: number, item: IOrderItem) => {
const lines = [];
let jacketSize = '';
let trousersSize = '';
let shirtSize = '';
let kidsSize = '';
let suitCondition = '';
switch (item.category) {
case 'suits and tuxedos':
case 'kids collection':
case 'suit sale':
case 'kids collection sale':
case 'clearance suits sale':
jacketSize =
item.meta['Size of Jacket/Shirt'] ??
item.meta['Size of Jacket'] ??
null;
trousersSize = item.meta['Size of Trousers'] ?? null;
suitCondition = item.meta['Suit Condition'] ?? null;
shirtSize =
item.meta['Size of Jacket/Shirt'] ??
item.meta['Size of Shirt'] ??
null;
kidsSize = item.meta['Size of Kids'] ?? null;
if (kidsSize) {
lines.push(
`Kids Size: ${
kidsSize ? `**${kidsSize}**` : ''
}`,
);
}
if (jacketSize) {
lines.push(
`Jacket Size: ${
jacketSize ? `**${jacketSize}**` : ''
}`,
);
}
if (trousersSize) {
lines.push(
`Trousers Size: ${
trousersSize ? `**${trousersSize}**` : ''
}`,
);
}
if (shirtSize) {
lines.push(
`Shirt Size: ${
shirtSize ? `**${shirtSize}**` : ''
}`,
);
}
if (suitCondition) {
lines.push(
`Suit Condition: ${
suitCondition ? `**${suitCondition}**` : ''
}`,
);
}
if (item.packageDescription) {
const str = item.packageDescription;
const regex = /(a|the|includes)\s+/gi;
const arr = str
.split(/(,|\s+and\s+)/)
.map((subItem: string) =>
subItem.replace(regex, '').trim(),
)
.filter(
(subItem: string) =>
!/tux|suit/i.test(subItem),
)
.filter(
(subItem) =>
subItem !== ',' && subItem !== 'and',
);
arr.forEach((subItem) => {
if (
subItem.includes('Tie') ||
subItem.includes('Bowtie')
) {
const color = subItem
.replace('Bowtie', '')
.replace('Tie', '')
.trim();
if (subItem.includes('Tie')) {
lines.push(`Tie: **${color}**`);
} else {
lines.push(`Bowtie: **${color}**`);
}
}
if (subItem.includes('Vest')) {
const color = subItem
.replace('Vest', '')
.trim();
lines.push(`Vest: **${color}**`);
}
if (subItem.includes('Belt')) {
const color = subItem
.replace('Belt', '')
.trim();
lines.push(`Belt: **${color}**`);
}
if (subItem.includes('Pocket Square')) {
const color = subItem
.replace('Pocket Square', '')
.trim();
lines.push(`Pocket Square: **${color}**`);
}
});
} else {
lines.push('Tie/Bowtie');
lines.push('Vest');
lines.push('Belt');
lines.push('Pocket Square');
}
break;
case 'shirt':
break;
case 'jacket':
}
const k = 0.3;
const lastY =
y + lineHeight * (lines.length + 1) + lineHeight * k;
const endPoint = pdf.internal.pageSize.height - 20;
if (lastY > endPoint) {
pdf.addPage();
y = 20;
}
for (
let i = 0;
i < lines.length + (lines.length > 0 ? 3 : 2);
i++
) {
pdf.line(
20,
y + lineHeight * (i - 1) + lineHeight * k,
pdf.internal.pageSize.width - 20,
y + lineHeight * (i - 1) + lineHeight * k,
);
}
pdf.line(
20,
y + lineHeight * -1 + lineHeight * k,
20,
y +
lineHeight * (lines.length > 0 ? lines.length + 1 : 0) +
lineHeight * k,
);
pdf.line(
pdf.internal.pageSize.width - 20,
y + lineHeight * -1 + lineHeight * k,
pdf.internal.pageSize.width - 20,
y +
lineHeight * (lines.length > 0 ? lines.length + 1 : 0) +
lineHeight * k,
);
const squareSize = lineHeight * 0.5;
const header = [`**${item.title}**`];
if (item.meta['Package Name']) {
header.push(`Package - **${item.meta['Package Name']}**`);
}
if (item.meta['Backup Suit']) {
header.push(
`Backup Suit - **${item.meta['Backup Suit']}**`,
);
}
if (item.category === 'shirt' && item.meta['Size of Shirt']) {
header.push(`Size - **${item.meta['Size of Shirt']}**`);
}
if (
item.category === 'new shirt sale' &&
item.meta['Size of New Shirt Sale']
) {
header.push(
`Size - **${item.meta['Size of New Shirt Sale']}**`,
);
}
if (item.category === 'jacket' && item.meta['Size of Jacket']) {
header.push(`Size - **${item.meta['Size of Jacket']}**`);
}
if (
item.category === 'new jacket sale' &&
item.meta['Size of New Jacket Sale']
) {
header.push(
`Size - **${item.meta['Size of New Jacket Sale']}**`,
);
}
if (
item.category === 'jacket sale' &&
item.meta['Size of Jacket Sale']
) {
header.push(
`Size - **${item.meta['Size of Jacket Sale']}**`,
);
}
if (item.category === 'shoes' && item.meta['Size of Shoes']) {
header.push(`Size - **${item.meta['Size of Shoes']}**`);
}
if (
item.category === 'shoe sale' &&
item.meta['Size of Shoe Sale']
) {
header.push(`Size - **${item.meta['Size of Shoe Sale']}**`);
}
if (item.category === 'vest' && item.meta['Size of Vest']) {
header.push(`Size - **${item.meta['Size of Vest']}**`);
}
if (
item.category === 'new vest sale' &&
item.meta['Size of New Vest Sale']
) {
header.push(
`Size - **${item.meta['Size of New Vest Sale']}**`,
);
}
if (
item.category === 'vest sale' &&
item.meta['Size of Vest Sale']
) {
header.push(`Size - **${item.meta['Size of Vest Sale']}**`);
}
drawSquare(
24,
y - lineHeight * k - squareSize * 0.3,
24 + squareSize,
y - lineHeight * k + squareSize * 0.7,
);
addText(header.join(' '), 24 + squareSize * 1.5, y);
if (lines.length !== 0) {
addText('Items to include:', 24, y + lineHeight);
addText(
'Sent? (Tick when included in order)',
74,
y + lineHeight,
);
for (let i = 0; i < lines.length; i++) {
drawSquare(
24,
y +
lineHeight * (2 + i) -
lineHeight * k -
squareSize * 0.3,
24 + squareSize,
y +
lineHeight * (2 + i) -
lineHeight * k +
squareSize * 0.7,
);
addText(
lines[i] as string,
24 + squareSize * 1.5,
y + lineHeight * (2 + i),
);
}
} else {
return y + lineHeight + lineHeight * k;
}
return y + lineHeight * (lines.length + 2) + lineHeight * k;
};
// let lastY = 84;
let lastY = currentPageY + 50;
for (const item of order.items) {
lastY = addTable(++lastY, item);
}
});
return Buffer.from(pdf.output('arraybuffer'));
}
this is a function I am using to print a PDF document with multiple orders, and the problem is that when there are many orders the texts overlap | 2e3144b851c570088fe866e6671ffa3d | {
"intermediate": 0.34308263659477234,
"beginner": 0.4723377823829651,
"expert": 0.18457961082458496
} |
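The overlap in the function above traces to mixing the fixed `headerStartLine` with a `currentPageY` that is never advanced after an order is drawn. The sketch below (in Python purely to illustrate the bookkeeping; the numbers are assumed A4-style millimetres, not from the original code) shows the single-cursor pattern: advance one y-cursor by everything drawn and reset it on a page break:

```python
PAGE_HEIGHT = 297    # assumed A4-ish page height in mm
MARGIN_BOTTOM = 20
TOP_OF_PAGE = 20

def layout_orders(order_heights):
    """Return a (page, y) placement for each order given its height,
    advancing one shared cursor and breaking pages when needed."""
    placements = []
    page, y = 1, TOP_OF_PAGE
    for h in order_heights:
        if y + h > PAGE_HEIGHT - MARGIN_BOTTOM:
            page += 1        # pdf.addPage() in the jsPDF version
            y = TOP_OF_PAGE
        placements.append((page, y))
        y += h               # advance by what was actually drawn
    return placements
```

Applied to the jsPDF code, this means drawing every order relative to the cursor (not `headerStartLine`) and setting the cursor to the `lastY` returned by `addTable` before the next order.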
19,173 | SyntaxError: The requested module 'aws-sdk' does not provide an export named 'CognitoIdentityServiceProvider' | 7ced35c3ff86dd3b6c97a3e17e5bb1f1 | {
"intermediate": 0.45917966961860657,
"beginner": 0.3055327236652374,
"expert": 0.2352875918149948
} |
19,174 | write code for P5.JS that emulates doom with simple shapes | 2d6fb992ed5277575a347b8a0b6c083d | {
"intermediate": 0.378261536359787,
"beginner": 0.24766916036605835,
"expert": 0.3740692436695099
} |
19,175 | convert full code in python, including all functionalities: "const container = document.createElement('div'); container.style.display = 'flex'; container.style.justifyContent = 'center'; container.style.alignItems = 'center'; container.style.height = '100vh'; container.style.margin = '0'; container.style.position = 'absolute'; container.style.top = '50%'; container.style.left = '50%'; container.style.transform = 'translate(-50%, -50%)'; const numProgressBars = 64; // Number of progress bars to spawn const progressBars = []; for (let i = 0; i < numProgressBars; i++) { const progress = document.createElement('progress'); progress.style.position = 'absolute'; progress.style.top = '50%'; progress.style.left = '50%'; progress.style.transform = 'translate(-50%, -50%)'; progress.classList.add('rotating-progress'); // Randomly assign z-index value between 0 and 100 for each progress bar const zIndex = Math.floor(Math.random() * 101); progress.style.zIndex = zIndex; container.appendChild(progress); const progressBar = { element: progress, rotationTiming: Math.random() + 5, // Random rotation timing for each progress bar rotationDegree: 0, // Current rotation degree of the progress bar zIndexCounter: 0 // Counter to track the number of z-index switches }; progressBars.push(progressBar); } document.body.appendChild(container); const style = document.createElement('style'); const rotateStyle = document.createTextNode('@keyframes rotate { ' + '0% { transform: translate(-50%, -50%) rotate(0deg); } ' + '100% { transform: translate(-50%, -50%) rotate(-360deg); } ' + '}'); style.appendChild(rotateStyle); document.head.appendChild(style); function animateRotation(progressBar) { progressBar.element.style.animation = 'rotate ' + progressBar.rotationTiming.toFixed(2) + 's linear infinite'; } function adjustProgressBarValue(progressBar, value) { progressBar.element.setAttribute('value', value); } function updateRotation(progressBar) { progressBar.rotationDegree += 1; // 
Increment rotation degree by 1 if (progressBar.rotationDegree >= 360) { progressBar.rotationDegree = 0; // Reset rotation degree to 0 progressBar.zIndexCounter += 1; // Increment z-index switch counter if (progressBar.zIndexCounter % 2 === 1) { progressBar.element.style.zIndex = 100; // Set z-index to 100 } else { progressBar.element.style.zIndex = Math.floor(Math.random() * 101); // Set z-index to a random value between 0 and 100 } } progressBar.element.style.transform = 'translate(-50%, -50%) rotate(' + progressBar.rotationDegree + 'deg)'; // Apply updated rotation } progressBars.forEach(function(progressBar) { animateRotation(progressBar); setInterval(function() { updateRotation(progressBar); }, progressBar.rotationTiming * 1000); // Update rotation every rotation timing interval });" | a83f6abee753cec6cfa4c62907a6fa07 | {
"intermediate": 0.4036891758441925,
"beginner": 0.430586040019989,
"expert": 0.16572479903697968
} |
19,176 | Would this be a good first public project? Ive never uploaded anything public to github, what do you think of this:
I'd make a website where you can play against a tictactoe bot, every game you play gets saved in a database of some kind, not yet decided.
the tictactoe bot would just search its database of games and see which move in that exact position won the most.
the only question I have is how i'd store the games, first I thought just save each of them in a json file but there has to be a better way.
maybe somehow store each unique game position it stumbles upon and the winning move in that scenario. if it doesnt find on play a random move. | 01f93013eca7977ab69b2c9186bb466c | {
"intermediate": 0.38026949763298035,
"beginner": 0.294098436832428,
"expert": 0.3256320059299469
} |
19,177 | in github I have main branch, but I already did add master branch with guide, what is the difference between these branches and why default branch is main but all guides say to create master branch? and do I need to keep only one? | e877c20bc054b37016b68dbe54fcddb3 | {
"intermediate": 0.4153183400630951,
"beginner": 0.2592149078845978,
"expert": 0.32546672224998474
} |
19,178 | how do i increase my slidecontainer width? | 0f7b304f29175132375ff4efeb048cc1 | {
"intermediate": 0.3536403775215149,
"beginner": 0.3108827471733093,
"expert": 0.3354768753051758
} |
19,179 | the following code is working, i can read the i2c data from a microcontroller on my esp32, now i need to send this data over espnow to a second esp32 and display that data on a 1306 i2c screen.
#include "Wire.h"
#define I2C_DEV_ADDR 0x3C
uint32_t i = 0;
void onRequest(){
Wire.print(i++);
Wire.print(" Packets.");
Serial.println("onRequest");
}
void onReceive(int len){
Serial.printf("onReceive[%d]: ", len);
while(Wire.available()){
Serial.write(Wire.read());
}
Serial.println();
}
void setup() {
Serial.begin(115200);
Serial.setDebugOutput(true);
Wire.onReceive(onReceive);
Wire.onRequest(onRequest);
Wire.begin((uint8_t)I2C_DEV_ADDR);
#if CONFIG_IDF_TARGET_ESP32
char message[64];
snprintf(message, 64, "%u Packets.", i++);
Wire.slaveWrite((uint8_t *)message, strlen(message));
#endif
}
void loop() {
} | ad51eebd786303790987747918b2e1a3 | {
"intermediate": 0.35788628458976746,
"beginner": 0.4835766553878784,
"expert": 0.15853704512119293
} |
19,180 | does java automatically cast integers into strings | 09b35abf2a9f26a85c56219b6d4637cf | {
"intermediate": 0.5090023279190063,
"beginner": 0.1721787452697754,
"expert": 0.31881895661354065
} |
19,181 | How do I create api with c#? Explain With examples | 097a6466e22405e314080923cab0c5ab | {
"intermediate": 0.8193902373313904,
"beginner": 0.1122986450791359,
"expert": 0.0683111920952797
} |
19,182 | data class Specimen(
val name: String,
val size: Int,
val color: String,
)
{
fun interface CanEat {
fun eat(specimen: Specimen)
}
fun main() {
val specimen= Specimen("One", 5, "Black")
val canEatLambda = CanEat { specimen->
println("Specimen ${specimen.name} size ${specimen.size} color${specimen.color}")
} - How can I pass the parameters to CanEat directly without val specimen? | 76138a3e93a2c94cae7838ebdb26a330 | {
"intermediate": 0.3082413971424103,
"beginner": 0.5712845325469971,
"expert": 0.12047405540943146
} |
19,183 | what data type is the data from a i2c data stream? | 51eea3b7ff6c54f91d5b757cb250784a | {
"intermediate": 0.38459497690200806,
"beginner": 0.22863273322582245,
"expert": 0.3867722749710083
} |
19,184 | code jave if | ed9367b66fd148b0a8e940fb231e842a | {
"intermediate": 0.3523663878440857,
"beginner": 0.33876243233680725,
"expert": 0.30887117981910706
} |
19,185 | filter records of dataframe 2 which is present in dataframe 1 | a85bb80c2519e2bcd960e65794d497f9 | {
"intermediate": 0.35587310791015625,
"beginner": 0.3204067349433899,
"expert": 0.32372015714645386
} |
19,186 | how to deploy express api to vercel and use it after with postman | 9f6dad5faedad2aeba5afd07108c6d21 | {
"intermediate": 0.778881311416626,
"beginner": 0.1468719094991684,
"expert": 0.07424674183130264
} |
19,187 | async function getType<T>(id: string): Promise<T> {
const URL = process.env.NEXT_PUBLIC_API_URL + `/${mapTypes[T]}`;
const res = await fetch(`${URL}/${id}`);
return res.json();
}
can you fix this ? | d97f64d37bbd4dab98f5db8a92172861 | {
"intermediate": 0.5846849083900452,
"beginner": 0.30596357583999634,
"expert": 0.1093515083193779
} |
19,188 | in pop os 22.04, does it use the noveau or nvidia driver for nvidia hardware | 5db795b2350ea44b7f7d285492d785d2 | {
"intermediate": 0.383792519569397,
"beginner": 0.22797489166259766,
"expert": 0.38823258876800537
} |
19,189 | can you suggest improvements to this article https://jakub-43881.medium.com/zos-callable-services-for-java-developer-f34a7133dd78? | 63193e247392f0ab799899ac8582ebb9 | {
"intermediate": 0.4630550444126129,
"beginner": 0.2064272165298462,
"expert": 0.3305177092552185
} |
19,190 | need to make an self auto-updating image gallery by using html, css, javascript. need this gallery to show all images from that link "https://camenduru-com-webui-docker.hf.space/file=outputs/extras-images/00072.png" it can contain images from 00001 to 00200. some of these images are not updating and, I don't know... | 3ced386d19d83974b49bdcb32e4d5ed3 | {
"intermediate": 0.43871942162513733,
"beginner": 0.27437007427215576,
"expert": 0.2869105637073517
} |
19,191 | create function udf_tobusinesshours(@datetime datetime)
returns datetime
as
begin
--declare @datetime datetime = '2023-09-04 19:23:39.600'
declare @daystart datetime = cast(cast(@datetime as date) as datetime)
declare @start_businesshour datetime = dateadd(hh,8,@daystart)
declare @end_businesshour datetime = dateadd(hh,18,@daystart)
if @datetime < @start_businesshour
begin
set @datetime = @start_businesshour
end
else if @datetime > @end_businesshour
begin
set @datetime = dateadd(day, 1, @start_businesshour)
end
if datepart(dw, @datetime) = 7
begin
set @datetime = dateadd(day, 2, @datetime)
end
else if datepart(dw, @datetime) = 1
begin
set @datetime = dateadd(day, 1, @datetime)
end
return @datetime
end
select dbo.udf_tobusinesshours('2023-09-02 19:23:39.600')
--homework #1: create same function with case statement or any other approach. | f8e0dd604d613b96e725ad3cb506d1fc | {
"intermediate": 0.3411865234375,
"beginner": 0.4192652404308319,
"expert": 0.2395482361316681
} |
19,192 | need to make an self auto-updating image gallery by using html, css, javascript. need this gallery to show all images from that link “https://camenduru-com-webui-docker.hf.space/file=outputs/extras-images/00072.png” it can contain images from 00001 to 00200. some of these images are not updating and, I don’t know… you don’t uderstand. I want to grab all images from that link and show them and auto-update in my gallery code on page. try to do a fully working html page. just make an auto-updating radiobutton and a button to manually update, disable that auto-updating button by default. it output all images from range of 00001 to 00200, right? also, need somehow correctly align all images to stay within some grid and arrange and align correcly as in gallery of some thumbnails that will auto-fit all the images on full window size and also will keep correct images aspect-ratio. also, if we get an error or no image data returned from that range in url, we need it to not fetch and leave that space in gallery empty by replacing it with available other image that exist in range. also, redo all code without a backticks in template literals used, do in old fashion through + and '. it works. is there any methods to check the whole image range in url if image data is changed and if it is changed then simply auto-update that specific image without fully reupdating the whole range in that url in gallery? “Please note that the functionality of checking if the image data has changed and only updating that specific image without fully re-updating the whole range would require additional backend code and cannot be achieved with just HTML, CSS, and JavaScript.”. but wait, you told me that: "To check if an image in the range has been changed and update only that specific image without re-updating the entire range, you can use the fetch API to send requests and compare the response headers or image data with the existing image in the gallery. 
" ok, I got an error “error fetching image” probably because on first update in gallery when there’s no any images preloaded. also, I’m not sure how this serverside updating of images works exactly, but I’m sure that this whole image range on that url is auto-wiping after some time on some specific max num of images and simply starts from 00001 again. need somehow to deal with that, not sure… also, the images themselves are big in bytes of size and need to do something with that as well. it's probably also a good idea to integrate a cache buster to remove an old chached image data.: <!DOCTYPE html>
<html>
<head>
<title>Auto-Updating Image Gallery</title>
<style>
body {
margin: 0;
padding: 0;
}
#gallery {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
grid-gap: 10px;
align-items: start;
justify-items: center;
padding: 20px;
}
.gallery-image {
width: 100%;
height: auto;
}
#updateButton {
margin-top: 10px;
}
</style>
</head>
<body>
<div>
<h1>Auto-Updating Image Gallery</h1>
<div id='gallery'></div>
<button id='updateButton' onclick='manualUpdate()'>Manual Update</button>
<label for='autoUpdate'>Auto-Update:</label>
<input type='checkbox' id='autoUpdate' onchange='toggleAutoUpdate()' />
</div>
<script>
const gallery = document.getElementById('gallery');
const autoUpdateCheckbox = document.getElementById('autoUpdate');
const updateButton = document.getElementById('updateButton');
const imageUrl = 'https://camenduru-com-webui-docker.hf.space/file=outputs/extras-images/';
let intervalId;
// Function to fetch and update the gallery
function updateGallery() {
gallery.innerHTML = ''; // Clear existing gallery images
for (let i = 1; i <= 200; i++) {
const image = new Image();
const imageSrc = imageUrl + padNumberWithZero(i, 5) + '.png';
image.src = imageSrc;
image.classList.add('gallery-image');
image.onerror = () => {
// Replace missing image with a backup image if available
const missingImageBackupSrc = imageUrl + 'backup.png';
image.src = missingImageBackupSrc;
};
gallery.appendChild(image);
}
}
// Function to pad numbers with leading zeros
function padNumberWithZero(number, length) {
let numString = String(number);
while (numString.length < length) {
numString = '0' + numString;
}
return numString;
}
// Function to manually update the gallery
function manualUpdate() {
clearInterval(intervalId); // Stop auto-update if in progress
updateGallery();
}
// Function to toggle auto-update
function toggleAutoUpdate() {
if (autoUpdateCheckbox.checked) {
intervalId = setInterval(updateGallery, 5000); // Auto-update every 5 seconds
updateButton.disabled = true;
} else {
clearInterval(intervalId);
updateButton.disabled = false;
}
}
</script>
</body>
</html> | b19a576da0983b1deb95ddabc5ce2854 | {
"intermediate": 0.5039318203926086,
"beginner": 0.26964718103408813,
"expert": 0.22642092406749725
} |