row_id
int64 0
48.4k
| init_message
stringlengths 1
342k
| conversation_hash
stringlengths 32
32
| scores
dict |
|---|---|---|---|
42,569
|
how to configure http proxy to system for which I don't have direct access
|
72662d3c3ab2151a3392cd79ff22d265
|
{
"intermediate": 0.4781305491924286,
"beginner": 0.2747863829135895,
"expert": 0.24708308279514313
}
|
42,570
|
Код в Prolog 7.5 База данных содержит факты вида: student(имя, курс). Создать проект, позволяющий сформировать список студентов 1-го курса. Использовать идентификаторы в латинице.
% Copyright
implement main
open core
class predicates
student(symbol,integer)
spisok
clauses
student(vova,3).
student(lena,1).
student(dima,1).
student(ira,2).
student(marina,1).
run() :-
succeed. % place your own code here
spisok:-student(X,1),write(X),nl,fail.
write("Список студентов 1-курса"),nl,spisok.
end implement main
goal
console::runUtf8(main::run).
|
14923aa9be081d703c12a40887159f2b
|
{
"intermediate": 0.32598304748535156,
"beginner": 0.548452615737915,
"expert": 0.1255643218755722
}
|
42,571
|
when reading csv files using pandas how to skip first 3 columns and not read them ?
|
2899842120408dc2eb2edf23ef12600d
|
{
"intermediate": 0.4361368715763092,
"beginner": 0.13385635614395142,
"expert": 0.43000680208206177
}
|
42,572
|
are you gpt 4 or 3.5
|
e61b1b6a51ebfad14991763a1def4f43
|
{
"intermediate": 0.27883201837539673,
"beginner": 0.3660850524902344,
"expert": 0.3550829291343689
}
|
42,573
|
import { CommunityPolicyEnum, AccountCommunityType } from "@openchat/types"
import { Models } from "@openchat/models"
import {
useBanByCommunityAndIdentity,
useCommunity,
useCurrentIdentityId,
useAccountCommunityBy,
} from "@openchat/ui"
import { useEffect, useState } from "react"
import { useNavigate } from "react-router-dom"
import { AppController, AppStore } from "../../../app/app.globals"
import { useDashboardSettings } from "../../../hook/use-dashboard-settings.hook.js"
import { Button } from "../../../ui-component/button/button.component"
import { memoComponent } from "@openchat/ui"
import { CommunityInfo } from "./community-info.component"
import styles from "./community-panel.styles.module.css"
export const CommunityPanel = memoComponent("CommunityPanel", () => {
// ---------------------------------------------------------------------------
// variables
// ---------------------------------------------------------------------------
const { selectedTab } = useDashboardSettings()
const navigate = useNavigate()
const communityId =
selectedTab.type === "community" ? selectedTab.communityId : null
const accountCommunity = useAccountCommunityBy({
communityId,
})
const [isLoaded, setLoading] = useState(false)
const [hasMembership, setMembership] = useState(false)
const [hasJoinRequest, setJoinRequest] = useState(false)
const [isDeleted, setDeleted] = useState(false)
const [isDisabled, setIsDisabled] = useState(false)
const currentIdentityId = useCurrentIdentityId()
const currentUserBan = useBanByCommunityAndIdentity(
communityId,
currentIdentityId,
)
const isBanned = !!currentUserBan
const community = useCommunity(communityId)
// ---------------------------------------------------------------------------
// effects
// ---------------------------------------------------------------------------
useEffect(() => {
loadCommunity()
}, [accountCommunity, communityId])
// ---------------------------------------------------------------------------
// functions
// ---------------------------------------------------------------------------
async function loadCommunity() {
if (!accountCommunity || !communityId) return
if (accountCommunity.isDeleted === true) {
setDeleted(true)
return
}
// if we already have community, we don't need to check an access
if (AppStore.for(Models.Changeset).has(communityId)) {
setLoading(true)
return
}
// for private communities we need to check if current user is a community member
// todo: re-implement
// if (accountCommunity.policy === CommunityPolicyEnum.private) {
// // if user doesn't have a membership - he will be asked to send a petition,
// // in the case if petition was not sent yet
// const hasJoinRequest = await AppController.joinRequest.checkIfSent(
// accountCommunity.communityId,
// )
// setJoinRequest(hasJoinRequest)
// }
if (accountCommunity && !community) {
await AppController.community.import(accountCommunity.communityId)
}
setLoading(true)
}
async function sendJoinRequest(accountCommunity: AccountCommunityType) {
await AppController.joinRequest.join(accountCommunity)
setIsDisabled(true)
setJoinRequest(true)
}
async function deleteCommunityPublic(id: string) {
await AppController.accountCommunity.delete(id)
setIsDisabled(true)
navigate("/messages")
}
const handleImage = () => {
return (
<div className={styles.imgSearchWrapper}>
<img
src={"/public/images/login-svg/image87.svg"}
alt="Image panel"
style={{ width: "16.4rem", height: "14.8rem" }}
/>
</div>
)
}
if (!accountCommunity) return <></>
// ---------------------------------------------------------------------------
return (
<>
{isDeleted ? (
<>
<div className={styles.heading}>
<h2>{accountCommunity.name}</h2>
</div>
<div className={styles.communityPanelContainer}>
{/* --------------------------------------------------------------------------- */}
{/* DELETE COMMUNITY */}
{/* --------------------------------------------------------------------------- */}
<div>
This community was removed by its administrators, thus you cannot
use it anymore. Press the button to remove it from your
communities.
</div>
{handleImage()}
<div className={styles.btnCommunity}>
<Button
mode="danger"
onClick={() =>
deleteCommunityPublic(accountCommunity.communityId)
}
disabled={isDisabled}
>
Delete community
</Button>
</div>
</div>
</>
) : (
<>
{/* --------------------------------------------------------------------------- */}
{/* COMMUNITY INFO */}
{/* --------------------------------------------------------------------------- */}
{isLoaded && !isBanned && <CommunityInfo />}
{/* --------------------------------------------------------------------------- */}
{/* been banned in this community */}
{/* --------------------------------------------------------------------------- */}
{isLoaded && isBanned && (
<>
<div className={styles.heading}>
<h2>{accountCommunity.name}</h2>
</div>
<div className={styles.communityPanelContainer}>
<div className={styles.communityPanelText}>
You have been banned in this community. You cannot perform any
actions until you’ll be unbanned by this community
administrators.
</div>
{handleImage()}
<div className={styles.btnCommunity}>
<Button onClick={() => {}} disabled={isDisabled}>
Leave community
</Button>
</div>
</div>
</>
)}
{/* --------------------------------------------------------------------------- */}
{/* LOADING */}
{/* --------------------------------------------------------------------------- */}
{!isLoaded &&
(accountCommunity.policy !== CommunityPolicyEnum.private ||
hasMembership) && (
<>
<div className={styles.heading}>
<h2>{accountCommunity.name}</h2>
</div>
<div className={styles.loadingContainer}>
<div
style={{
position: "relative",
width: "4rem",
height: "4rem",
}}
>
<div className={styles.communityLoadingSpinner} />
</div>
<p className={styles.communityLoadingText}>
Community is loading...
</p>
</div>
</>
)}
{/* --------------------------------------------------------------------------- */}
{/* SEND REQUEST */}
{/* --------------------------------------------------------------------------- */}
{!isLoaded &&
hasMembership === false &&
hasJoinRequest === false &&
accountCommunity.policy === CommunityPolicyEnum.private && (
<>
<div className={styles.communityPrivate}>
<div className={styles.heading}>
<h2>{accountCommunity.name}</h2>
</div>
<p className={styles.communityPanelText}>
This is a private community and you’ll be able to access its
content only if you’ll have a membership.
</p>
<br />
<p className={styles.communityPanelText}>
If you want to ask community administrators to give you a
membership, press “send the request” button below.
</p>
{handleImage()}
<div className={styles.btnWrap}>
<Button
onClick={() => sendJoinRequest(accountCommunity)}
disabled={isDisabled}
>
Send the request
</Button>
</div>
</div>
</>
)}
{/* --------------------------------------------------------------------------- */}
{/* SENT THE MEMBERSHIP REQUEST */}
{/* --------------------------------------------------------------------------- */}
{hasMembership === false && hasJoinRequest && (
<>
<div className={styles.heading}>
<h2>{accountCommunity.name}</h2>
</div>
<div>
<p className={styles.communityPanelPetitionText}>
This is a private community and you’ll be able to access its
content only if you’ll have a membership.
</p>
<p className={styles.communityPanelPetitionText}>
You’ve already sent the membership request to access this
community contents.{" "}
</p>
{handleImage()}
<div className={styles.textUnderline}>
{/* <span className={styles.petitionUnderlineText}>
Please wait for the administrator <br />
<span className={styles.petitionSubUnderlineText}>
approval of your request
</span> */}
<span className={styles.communityPanelUnderlineText}>
Please wait for the administrator approval of your request
</span>
</div>
</div>
</>
)}
</>
)}
</>
)
})
the issue here when community is private it should displays this part of code `{!isLoaded &&
hasMembership === false &&
hasJoinRequest === false &&
accountCommunity.policy === CommunityPolicyEnum.private && (
<>
<div className={styles.communityPrivate}>
<div className={styles.heading}>
<h2>{accountCommunity.name}</h2>
</div>
<p className={styles.communityPanelText}>
This is a private community and you’ll be able to access its
content only if you’ll have a membership.
</p>
<br />
<p className={styles.communityPanelText}>
If you want to ask community administrators to give you a
membership, press “send the request” button below.
</p>
{handleImage()}
<div className={styles.btnWrap}>
<Button
onClick={() => sendJoinRequest(accountCommunity)}
disabled={isDisabled}
>
Send the request
</Button>
</div>
</div>
</>
)}` but it shows everytime this part of code `{isLoaded && !isBanned && <CommunityInfo />}` this part shows when community is public but in private it shows send request first. How to fix it ?
|
33fd85df16d822d9f4b1ae991427bd42
|
{
"intermediate": 0.26351186633110046,
"beginner": 0.43404337763786316,
"expert": 0.3024446964263916
}
|
42,574
|
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Metacar: Documentation</title>
<!-- Load the latest version of TensorFlow.js -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.11.6"> </script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.7.1/pixi.min.js"></script>
</head>
<body>
<div id="env"></div>
<script src="https://cdn.jsdelivr.net/combine/npm/metacar@0.1.1,npm/metacar@0.1.1"></script>
<script>
// Select a level
const level = metacar.level.level0;
// Create the environement
const env = new metacar.env("env", level);
// Set the agent motion
env.setAgentMotion(metacar.motion.BasicMotion, {rotationStep: 0.1});
// Create the model
// Input
const input = tf.input({batchShape: [null, 26]});
// Hidden layer
const layer = tf.layers.dense({useBias: true, units: 32, activation: 'relu'}).apply(input);
// Output layer
const output = tf.layers.dense({useBias: true, units: 3, activation: 'linear'}).apply(layer);
// Create the model
const model = tf.model({inputs: input, outputs: output});
// Optimize
let model_optimizer = tf.train.adam(0.01);
// Loss of the model
function model_loss(tf_states, tf_actions, Qtargets){
return tf.tidy(() => {
// valeur
return model.predict(tf_states).sub(Qtargets).square().mul(tf_actions).mean();
});
}
// Pick an action eps-greedy
function pickAction(st, eps){
let st_tensor = tf.tensor([st]);
let act;
if (Math.random() < eps){ // Pick a random action
act = Math.floor(Math.random()*3);
}
else {
let result = model.predict(st_tensor);
let argmax = result.argMax(1);
act = argmax.buffer().values[0];
argmax.dispose();
result.dispose();
}
st_tensor.dispose();
return act;
}
// Return the mean of an array
function mean(array){
if (array.length == 0)
return null;
var sum = array.reduce(function(a, b) { return a + b; });
var avg = sum / array.length;
return avg;
}
// Train the model
function train_model(states, actions, rewards, next_states){
var size = next_states.length;
// Transform each array into a tensor
let tf_states = tf.tensor2d(states, shape=[states.length, 26]);
let tf_rewards = tf.tensor2d(rewards, shape=[rewards.length, 1]);
let tf_next_states = tf.tensor2d(next_states, shape=[next_states.length, 26]);
let tf_actions = tf.tensor2d(actions, shape=[actions.length, 3]);
// Get the list of loss to compute the mean later in this function
let losses = []
// Get the QTargets
const Qtargets = tf.tidy(() => {
let Q_stp1 = model.predict(tf_next_states);
let Qtargets = tf.tensor2d(Q_stp1.max(1).expandDims(1).mul(tf.scalar(0.99)).add(tf_rewards).buffer().values, shape=[size, 1]);
return Qtargets;
});
// Generate batch of training and train the model
let batch_size = 32;
for (var b = 0; b < size; b+=32) {
// Select the batch
let to = (b + batch_size < size) ? batch_size : (size - b);
const tf_states_b = tf_states.slice(b, to);
const tf_actions_b = tf_actions.slice(b, to);
const Qtargets_b = Qtargets.slice(b, to);
// Minimize the error
model_optimizer.minimize(() => {
const loss = model_loss(tf_states_b, tf_actions_b, Qtargets_b);
losses.push(loss.buffer().values[0]);
return loss;
});
// Dispose the tensors from the memory
tf_states_b.dispose();
tf_actions_b.dispose();
Qtargets_b.dispose();
}
console.log("Mean loss", mean(losses));
// Dispose the tensors from the memory
Qtargets.dispose();
tf_states.dispose();
tf_rewards.dispose();
tf_next_states.dispose();
tf_actions.dispose();
}
env.load().then(() => {
env.addEvent("play", () => {
// Move forward
let st = env.getState().linear;
let act = pickAction(st, 0.0);
let reward = env.step(act);
console.log(reward);
// Log the reward
});
env.addEvent("stop", () => {
return;
});
env.addEvent("train", async () => {
let eps = 1.0;
// Used to store the experiences
let states = [];
let rewards = [];
let reward_mean = [];
let next_states = [];
let actions = [];
// Get the current state of the lidar
let st = env.getState().linear;
let st2;
for (let epi=0; epi < 150; epi++){
let reward = 0;
let step = 0;
while (step < 400){
// pick an action
let act = pickAction(st, eps);
reward = env.step(act);
st2 = env.getState().linear;
let mask = [0, 0, 0];
mask[act] = 1;
// Randomly insert the new transition tuple
let index = Math.floor(Math.random() * states.length);
states.splice(index, 0, st);
rewards.splice(index, 0, [reward]);
reward_mean.splice(index, 0, reward)
next_states.splice(index, 0, st2);
actions.splice(index, 0, mask);
// Be sure to keep the size of the dataset under 10000 transitions
if (states.length > 10000){
states = states.slice(1, states.length);
rewards = rewards.slice(1, rewards.length);
reward_mean = reward_mean.slice(1, reward_mean.length);
next_states = next_states.slice(1, next_states.length);
actions = actions.slice(1, actions.length);
}
st = st2;
step += 1;
}
// Decrease epsilon
eps = Math.max(0.1, eps*0.99);
// Train model every 5 episodes
if (epi % 5 == 0){
console.log("---------------");
console.log("rewards mean", mean(reward_mean));
console.log("episode", epi);
await train_model(states, actions, rewards, next_states);
await tf.nextFrame();
}
// Shuffle the env
env.shuffle();
}
env.render(true);
});
});
</script>
</body>
</html>
Transform ce programe en python
|
24954d6702d44d05a846af76f57bc8e6
|
{
"intermediate": 0.3036764860153198,
"beginner": 0.34285157918930054,
"expert": 0.35347193479537964
}
|
42,575
|
I have a json file where I have extracted data of some entities including line_item. And I have a Image(Invoice) and OCR textract output in csv format containing these columns (page_num,block_num,line_num,word_num,left,right,top,bottom,width,height,conf,text,image_height,image_width) .
Now I want to map all the line item on image from json using ocr output. And create bounding box for each line_items and Print those bounding box on image.
A sample json output :
{
"invoice_details": {
"invoice_number": "2268",
"invoice_date": "10-Dec-2020",
"invoice_due_date": "None",
"order_id": "None",
"vendor_name": "",
"buyer_name": "",
"shipto_name": "None"
},
"Payment Details": {
"vendor_ifsccode": "",
"vendor_bankname": "",
"account_number": ""
},
"address_details": {
"vendor_address": "",
"billto_address": "",
"shipto_address": ""
},
"amounts_and_tax": {
"Subtotal_or_taxable_amount": "",
"total_sgst_amount": "",
"total_cgst_amount": "",
"total_igst_amount": "None",
"total_amount_after_tax": "",
"billto_GSTIN": "",
"vendor_GSTIN": "",
"shipto_GSTIN": "None"
},
"line_items": [
{
"hsn_code": "8443",
"description": "Hp 1606 Pressure Roller",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "750.00",
"quantity": "3",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "2250.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "8443",
"description": "Teflon 1010 (OG)",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "800.00",
"quantity": "3",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "2400.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "84439959",
"description": "Pick Up Roll 1606",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "250.00",
"quantity": "3",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "750.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "8443",
"description": "Hp 1606 Lower Bush",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "250.00",
"quantity": "2",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "500.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "8443",
"description": "1505 /1606 Fuser Drive Gear",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "250.00",
"quantity": "1",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "250.00",
"total_amount_or_gross_amount_of_the_item": "None"
}
],
"table_column_names_in_reading_order": "Sl No.,Description of Goods,HSN/SAC,Quantity,Rate,per,Disc. %,Amount"
}
You have to write a dynamic python code to map bounding box.
|
eea26e042709242a8791f2696653c456
|
{
"intermediate": 0.41767656803131104,
"beginner": 0.4365912675857544,
"expert": 0.14573225378990173
}
|
42,576
|
Как переделать следующий код так, чтобы рамка расширялась, а не создавалась новая
class DrawingWindow(QMainWindow):
def __init__(self, coordinates):
super().__init__()
self.setWindowTitle("Transparent Drawing Window")
self.setGeometry(0, 0, QApplication.desktop().screenGeometry().width(),
QApplication.desktop().screenGeometry().height())
self.setAttribute(Qt.WA_TranslucentBackground, True)
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
self.painter = QPainter()
self.painter.setRenderHint(QPainter.Antialiasing)
self.pen_color = QColor(255, 0, 0) # Set the initial pen color to red
self.pen_width = 4 # Set the initial pen width to 4
self.coordinates = coordinates # Store the coordinates for drawing rectangles
self.starting_point = None
def paintEvent(self, event):
self.painter.begin(self)
self.painter.setPen(Qt.NoPen)
self.painter.setBrush(QBrush(Qt.transparent))
self.painter.drawRect(QRect(0, 0, self.width(), self.height())) # Draw a transparent background
self.painter.setPen(QPen(QColor(self.pen_color), self.pen_width))
self.painter.setBrush(QBrush(Qt.transparent))
for coord in self.coordinates:
x, y, width, height = coord
self.painter.drawRect(x, y, width, height) # Draw rectangles using the provided coordinates
self.painter.end()
def mousePressEvent(self, event):
self.starting_point = event.pos()
def mouseMoveEvent(self, event):
if self.starting_point:
current_point = event.pos()
x = min(self.starting_point.x(), current_point.x())
y = min(self.starting_point.y(), current_point.y())
width = abs(self.starting_point.x() - current_point.x())
height = abs(self.starting_point.y() - current_point.y())
self.coordinates.append((x, y, width, height))
self.update()
def mouseReleaseEvent(self, event):
self.starting_point = None
|
cb8b7f6c5e949d5ac7ae70d860ea7ed7
|
{
"intermediate": 0.3186626434326172,
"beginner": 0.4757785201072693,
"expert": 0.2055588811635971
}
|
42,577
|
are you based on gpt 4 or 3.5
|
ca5c7964cc71b3f1e8134b7f63bccacd
|
{
"intermediate": 0.304495632648468,
"beginner": 0.30219417810440063,
"expert": 0.39331018924713135
}
|
42,578
|
У меня есть вот такой класс окна:
class DrawingWindow(QMainWindow):
def __init__(self, coordinates):
super().__init__()
self.setWindowTitle("Transparent Drawing Window")
self.setGeometry(0, 0, QApplication.desktop().screenGeometry().width(),
QApplication.desktop().screenGeometry().height())
self.setAttribute(Qt.WA_TranslucentBackground, True)
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
self.painter = QPainter()
self.painter.setRenderHint(QPainter.Antialiasing)
self.pen_color = QColor(255, 0, 0) # Set the initial pen color to red
self.pen_width = 4 # Set the initial pen width to 4
self.coordinates = coordinates # Store the coordinates for drawing rectangles
self.starting_point = None
self.setFrameStyle(QFrame.Box) # Set functional frame shape
def paintEvent(self, event):
self.painter.begin(self)
self.painter.setPen(Qt.NoPen)
self.painter.setBrush(QBrush(Qt.transparent))
self.painter.drawRect(QRect(0, 0, self.width(), self.height())) # Draw a transparent background
self.painter.setPen(QPen(QColor(self.pen_color), self.pen_width))
self.painter.setBrush(QBrush(Qt.transparent))
self.painter.drawRect(self.coordinates[0], self.coordinates[1], self.coordinates[2], self.coordinates[3]) # Draw rectangles using the provided coordinates
self.painter.end()
def mousePressEvent(self, event):
self.starting_point = event.pos()
def mouseMoveEvent(self, event):
if self.starting_point:
current_point = event.pos()
if current_point.x() > self.coordinates[0] + self.coordinates[2]:
self.coordinates[2] = current_point.x() - self.coordinates[0]
if current_point.y() > self.coordinates[1] + self.coordinates[3]:
self.coordinates[3] = current_point.y() - self.coordinates[1]
if current_point.x() < self.coordinates[0] + self.coordinates[2]:
self.coordinates[2] += self.coordinates[0] - current_point.x()
self.coordinates[0] = current_point.x()
if current_point.y() < self.coordinates[1] + self.coordinates[3]:
self.coordinates[3] += self.coordinates[1] - current_point.y()
self.coordinates[1] = current_point.y()
self.update()
self.ending_point = event.pos()
def mouseReleaseEvent(self, event):
'''if self.starting_point :
#self.coordinates.append((x, y, width, height))
if self.ending_point.x() > self.coordinates[0] + self.coordinates[2]:
self.coordinates[2] = self.ending_point.x() - self.coordinates[0]
if self.ending_point.y() > self.coordinates[1] + self.coordinates[3]:
self.coordinates[3] = self.ending_point.y() - self.coordinates[1]
if self.ending_point.x() < self.coordinates[0] + self.coordinates[2]:
self.coordinates[2] += self.coordinates[0] - self.ending_point.x()
self.coordinates[0] = self.ending_point.x()
if self.ending_point.y() < self.coordinates[1] + self.coordinates[3]:
self.coordinates[3] += self.coordinates[1] - self.ending_point.y()
self.coordinates[1] = self.ending_point.y()
self.update()'''
self.starting_point = None
Как добавить к нему кнопки?
|
9bd16d19ec566daff2e81925d3f2b842
|
{
"intermediate": 0.25754514336586,
"beginner": 0.5949560403823853,
"expert": 0.14749877154827118
}
|
42,579
|
i have 1050 file of historical data of different cryptocurrencies icluding their OHLCV and technical indicators and etc…
each file belongs to one crypto and has 113 columns(features and around 1500 rows)
i want to train a cnn model on them…
how should i do so?
|
f68f38371c570de2e2f173a2ff8b14ed
|
{
"intermediate": 0.10280318558216095,
"beginner": 0.0455867163836956,
"expert": 0.8516100645065308
}
|
42,580
|
Hi who are you?
|
5c25f96b15e195b84009e358bdd440c1
|
{
"intermediate": 0.426351398229599,
"beginner": 0.2528441548347473,
"expert": 0.3208044469356537
}
|
42,581
|
write python program using ada boost algorithm
generate csv files from this code:
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
# Define the number of samples
n_samples = 1000
# Generate a synthetic dataset
X, y = make_classification(n_samples=n_samples, n_features=10, n_informative=5, n_redundant=5, random_state=42, flip_y=0.03)
# Feature names
features = ["CreditScore", "Geography", "Gender", "Age", "Tenure", "Balance", "NumOfProducts", "HasCrCard", "IsActiveMember", "EstimatedSalary"]
# Convert into a DataFrame
df = pd.DataFrame(X, columns=features)
df["ExitStatus"] = y
# Let's take a quick look at the dataset
print(df.head())
# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(df[features], df["ExitStatus"], test_size=0.2, random_state=42)
# Save the full dataset
df.to_csv('synthetic_employee_data.csv', index=False)
# Optionally, you can also save the training and testing sets separately if needed
X_train.join(y_train).to_csv('training_data.csv', index=False)
X_test.join(y_test).to_csv('testing_data.csv', index=False)
print("Data saved successfully.")
write python program and add below features with all plots outputs
# Predicting on the test set
# Probability estimates for ROC curve
# Calculate accuracy, precision, and F1-score
# Show classification report
# Confusion matrix with plots
plt.figure(figsize=(6, 5))
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
# ROC curve
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = roc_auc_score(y_test, y_score)
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (area = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle=":")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
# Visualize feature correlations
plt.figure(figsize=(10, 8))
corr = X_train.corr()
sns.heatmap(corr, cmap='coolwarm', annot=True)
plt.title('Feature Correlation Matrix')
plt.show()
# Density plot of feature values by class
plt.title(f'Density Plot of {feature} by Class')
plt.legend()
plt.show()
# Actual vs. Predicted labels plot
plt.figure(figsize=(8, 6))
sns.countplot(x=y_test, color='blue', alpha=0.5, label='Actual')
sns.countplot(x=y_pred, color='red', alpha=0.5, label='Predicted')
plt.title('Actual vs. Predicted Label Counts')
plt.legend()
plt.show()
|
a2360306fb048a53941897a24987271f
|
{
"intermediate": 0.3659922480583191,
"beginner": 0.3895795941352844,
"expert": 0.2444281280040741
}
|
42,582
|
I have a code you have to analyse the code and modify it so that we can use same code to map line_items as well.
I am sharing how line_items looks like in given json
"line_items": [
{
"hsn_code": "8443",
"description": "Hp 1606 Pressure Roller",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "750.00",
"quantity": "3",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "2250.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "8443",
"description": "Teflon 1010 (OG)",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "800.00",
"quantity": "3",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "2400.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "84439959",
"description": "Pick Up Roll 1606",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "250.00",
"quantity": "3",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "750.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "8443",
"description": "Hp 1606 Lower Bush",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "250.00",
"quantity": "2",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "500.00",
"total_amount_or_gross_amount_of_the_item": "None"
},
{
"hsn_code": "8443",
"description": "1505 /1606 Fuser Drive Gear",
"unit_of_measurement_or_uom": "NOS",
"unit_price": "250.00",
"quantity": "1",
"sgst_rate": "None",
"sgst_amount": "None",
"cgst_rate": "None",
"cgst_amount": "None",
"igst_rate": "None",
"igst_amount": "None",
"taxable_amount_or_subtotal": "250.00",
"total_amount_or_gross_amount_of_the_item": "None"
}
],
Code that is working for header entities
import cv2
import pandas as pd
import json
from thefuzz import fuzz
from itertools import product
used_bounding_boxes = {}
def preprocess_entity(entity):
return entity.replace(",", "").strip()
def calculate_proximity_score(box_a, box_b):
vertical_overlap = max(0, min(box_a["bottom"], box_b["bottom"]) - max(box_a["top"], box_b["top"]))
vertical_distance = 0 if vertical_overlap > 0 else min(abs(box_a["top"] - box_b["bottom"]), abs(box_a["bottom"] - box_b["top"]))
horizontal_overlap = max(0, min(box_a["right"], box_b["right"]) - max(box_a["left"], box_b["left"]))
horizontal_distance = 0 if horizontal_overlap > 0 else abs(box_a["right"] - box_b["left"])
return horizontal_distance + 2 * vertical_distance
def is_nearby(box_a, box_b, max_line_difference=1, max_distance=50):
return calculate_proximity_score(box_a, box_b) <= max_distance + 2 * max_line_difference
def merge_boxes(boxes):
min_left = min(box["left"] for box in boxes)
max_right = max(box["right"] for box in boxes)
min_top = min(box["top"] for box in boxes)
max_bottom = max(box["bottom"] for box in boxes)
return {"left": min_left, "right": max_right, "top": min_top, "bottom": max_bottom}
def find_potential_matches(dataframe, token, threshold=75):
potential_matches = []
for _, row in dataframe.iterrows():
ocr_text = preprocess_entity(row["text"])
score = fuzz.ratio(token, ocr_text)
if score > threshold:
potential_matches.append({
"box": {"left": row["left"], "right": row["right"], "top": row["top"], "bottom": row["bottom"]},
"score": score
})
return potential_matches
def find_best_sequence_heuristic(matches_list):
if not matches_list or len(matches_list[0]) == 0:
return []
best_sequence = [min(matches_list[0], key=lambda match: match["score"])]
for next_matches in matches_list[1:]:
current_box = best_sequence[-1]["box"]
next_best_match = min(next_matches, key=lambda match: calculate_proximity_score(current_box, match["box"]))
best_sequence.append(next_best_match)
return best_sequence
def process_single_token_entity(dataframe, entity, threshold=75):
global used_bounding_boxes
best_match = None
best_score = threshold
entity = preprocess_entity(entity)
if entity not in used_bounding_boxes:
used_bounding_boxes[entity] = []
for _, row in dataframe.iterrows():
ocr_text = preprocess_entity(row['text'])
score = fuzz.ratio(entity, ocr_text)
current_box = {'left': row['left'], 'right': row['right'], 'top': row['top'], 'bottom': row['bottom']}
if score > best_score and current_box not in used_bounding_boxes[entity]:
best_score = score
best_match = current_box
if best_match:
used_bounding_boxes[entity].append(best_match)
return best_match
def box_overlap(box1, box2):
"""Check if there’s any overlap in any coordinate between two boxes."""
return box1["left"] == box2["left"] or box1["right"] == box2["right"]
def all_boxes_unique(sequence_boxes, used_boxes):
"""Ensure no part of the boxes in sequence_boxes overlaps with any box in used_boxes."""
for seq_box in sequence_boxes:
for used_box in used_boxes:
if box_overlap(seq_box, used_box):
return False
return True
def get_next_best_sequence(all_potential_matches, previous_matches, entity):
"""
Try to find the next best sequence of matches that hasn’t used any part of the bounding boxes.
"""
# Flatten the list of used boxes for easier comparison.
used_boxes = [box for sequence in previous_matches.get(entity, []) for box in sequence]
for sequence in product(*all_potential_matches):
sequence_boxes = [match["box"] for match in sequence]
if all_boxes_unique(sequence_boxes, used_boxes):
return sequence # Found a sequence where no box part has been used before
return None # No unique sequence found
def process_multi_token_entity(dataframe, entity, threshold=85):
global used_bounding_boxes
# if entity not in used_bounding_boxes:
# used_bounding_boxes[entity] = []
tokens = entity.split()
all_potential_matches = [find_potential_matches(dataframe, token, threshold) for token in tokens]
# Ensuring all tokens have at least one match
if not all(matches for matches in all_potential_matches):
return None
# This assumes used_bounding_boxes[entity] holds lists of used sequences of boxes (not merged boxes)
previous_matches = used_bounding_boxes.get(entity, [])
next_best_sequence = get_next_best_sequence(all_potential_matches, used_bounding_boxes, entity)
if next_best_sequence:
new_boxes_sequence = [match["box"] for match in next_best_sequence]
merged_box = merge_boxes(new_boxes_sequence)
# If we found a new sequence, add it to the used sequences for this entity
if entity not in used_bounding_boxes:
used_bounding_boxes[entity] = []
used_bounding_boxes[entity].append(new_boxes_sequence)
return merged_box
return None
def draw_bounding_boxes(image_path, bounding_boxes, entity_names):
image = cv2.imread(image_path)
font = cv2.FONT_HERSHEY_SIMPLEX
for box, name in zip(bounding_boxes, entity_names):
if box:
cv2.rectangle(image, (box["left"], box["top"]), (box["right"], box["bottom"]), (0, 255, 0), 2)
cv2.putText(image, name, (box["left"], max(box["top"] - 10, 0)), font, 0.5, (0, 0, 255), 2)
cv2.imwrite("annotated_image_using_dp1119.jpg", image)
def main(json_path, csv_path, image_path):
with open(json_path, "r") as f:
data = json.load(f)
dataframe = pd.read_csv(csv_path)
bounding_boxes = []
entity_names = []
# Existing processing for non-special sections
special_sections = ["amounts_and_tax","Payment Details"] # Define special handling cases here
for section in ["invoice_details", "Payment Details", "amounts_and_tax"]:
entities = data.get(section, {})
# Check if the current section needs special handling
if section not in special_sections:
for entity_name, entity_value in entities.items():
entity_value_no_comma = preprocess_entity(entity_value)
if " " in entity_value_no_comma:
box = process_multi_token_entity(dataframe, entity_value_no_comma)
else:
box = process_single_token_entity(dataframe, entity_value_no_comma)
if box:
bounding_boxes.append(box)
entity_names.append(entity_name)
else:
# Special handling for "amounts_and_tax" section
reversed_dataframe = dataframe.iloc[::-1].reset_index(drop=True) # Reverse the dataframe
for entity_name, entity_value in entities.items():
entity_value_no_comma = preprocess_entity(entity_value)
if " " in entity_value_no_comma:
# Use the reversed_dataframe for multi-token entities
box = process_multi_token_entity(reversed_dataframe, entity_value_no_comma)
else:
# Use the reversed_dataframe for single-token entities
box = process_single_token_entity(reversed_dataframe, entity_value_no_comma)
if box:
bounding_boxes.append(box)
entity_names.append(entity_name)
draw_bounding_boxes(image_path, bounding_boxes, entity_names)
main("/home/ritik1s/Desktop/bbox_issues/temp_GPT/row_skip.json", "/home/ritik1s/Desktop/bbox_issues/temp_GPT/check.csv", "/home/ritik1s/Desktop/bbox_issues/temp_GPT/check.jpeg")
|
eec25cb8be338aa050daa7015d2a4280
|
{
"intermediate": 0.316036194562912,
"beginner": 0.47156035900115967,
"expert": 0.21240341663360596
}
|
42,583
|
guide me through on how to setup the mongoDB-databse and run it
|
face5ff7ab520b1d9f60367c95839f91
|
{
"intermediate": 0.8517882227897644,
"beginner": 0.05698922276496887,
"expert": 0.09122250229120255
}
|
42,584
|
i have 1100 csv files of cryptocurrencies historical data including OHLCV and indicators and etc
each file has 113 columns and between 1000 to 2500 rows
i want to train a cnn model to predict close price of next day based on past 60 days
i want to train model without combining all csv files
give me the proper implementation to train a InceptionNet cnn model on my dataset
|
d88f17ceb6afbb3756e1ef9c39a7eaa9
|
{
"intermediate": 0.264211505651474,
"beginner": 0.08045874536037445,
"expert": 0.6553297638893127
}
|
42,585
|
перепиши следующий код под PySide6:
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton
from PyQt5.QtGui import QPainter, QBrush, QColor, QPen
from PyQt5.QtCore import Qt, QTimer, QRect
import random
class DrawingWindow(QMainWindow):
def __init__(self, coordinates):
super().__init__()
self.setWindowTitle("Transparent Drawing Window")
self.setGeometry(0, 0, QApplication.desktop().screenGeometry().width(),
QApplication.desktop().screenGeometry().height())
self.setAttribute(Qt.WA_TranslucentBackground, True)
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
self.painter = QPainter()
self.painter.setRenderHint(QPainter.Antialiasing)
self.pen_color = QColor(255, 0, 0) # Set the initial pen color to red
self.pen_width = 4 # Set the initial pen width to 4
self.coordinates = coordinates # Store the coordinates for drawing rectangles
self.starting_point = None
self.red_button = QPushButton('Red', self)
self.red_button.clicked.connect(self.set_red_color)
self.blue_button = QPushButton('Blue', self)
self.blue_button.clicked.connect(self.set_blue_color)
def set_red_color(self):
self.pen_color = QColor(255, 0, 0)
self.update()
def set_blue_color(self):
self.pen_color = QColor(0, 0, 255)
self.update()
def paintEvent(self, event):
self.painter.begin(self)
self.painter.setPen(Qt.NoPen)
self.painter.setBrush(QBrush(Qt.transparent))
self.painter.drawRect(QRect(0, 0, self.width(), self.height())) # Draw a transparent background
self.painter.setPen(QPen(QColor(self.pen_color), self.pen_width))
self.painter.setBrush(QBrush(Qt.transparent))
self.red_button.move(self.coordinates[0], self.coordinates[1])
self.red_button.resize(50, 50)
self.blue_button.move(self.coordinates[0] + 50, self.coordinates[1])
self.blue_button.resize(50, 50)
self.painter.drawRect(self.coordinates[0], self.coordinates[1], self.coordinates[2], self.coordinates[3]) # Draw rectangles using the provided coordinates
self.painter.end()
def mousePressEvent(self, event):
self.starting_point = event.pos()
def mouseMoveEvent(self, event):
if self.starting_point:
current_point = event.pos()
last_x = self.coordinates[0] + self.coordinates[2]
first_x = self.coordinates[0]
cur_x = current_point.x()
cur_y = current_point.y()
if cur_x < first_x:
self.coordinates[2] += abs(cur_x - first_x)
self.coordinates[0] = cur_x
self.update()
def mouseReleaseEvent(self, event):
self.starting_point = None
|
69fc53a4f59dfdf5ae689b0f12cbf654
|
{
"intermediate": 0.2669590413570404,
"beginner": 0.5326098799705505,
"expert": 0.20043109357357025
}
|
42,586
|
I want to install a CCTV camera for my house but I should be able to write the software for facial recognition and update it whenever I can
|
a88cc03e2f0bc22faf320d50f362acc2
|
{
"intermediate": 0.28563645482063293,
"beginner": 0.122511126101017,
"expert": 0.5918523669242859
}
|
42,587
|
based on this code
# -*- coding: utf-8 -*-
"""GPT_for_chatbot_main.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1_Jue7uz455TpSfZpBmX5O6MtUXIyQ6ZP
"""
from google.colab import drive
drive.mount('/content/drive')
! pip install transformers
import numpy as np
from transformers import AutoTokenizer, AutoConfig, AutoModelForPreTraining, \
TrainingArguments, Trainer
import torch
from torch.utils.data import Dataset
!pip install datasets
from datasets import load_dataset
dataset = load_dataset('daily_dialog')
def load_conversations(data):
context = []
response = []
for i in range(len(dataset[data])):
for j in range(len(dataset[data][i]['dialog'])-1):
context.append(dataset[data][i]['dialog'][j])
response.append(dataset[data][i]['dialog'][j+1])
return context, response
SPECIAL_TOKENS = { "bos_token": "<|BOS|>",
"eos_token": "<|EOS|>",
"unk_token": "<|UNK|>",
"pad_token": "<|PAD|>",
"sep_token": "<|SEP|>"}
MAXLEN = 60
class myDataset(Dataset):
def __init__(self, data, tokenizer):
context, response = [], []
for k, v in data.items():
context.append(v[0])
response.append(v[1])
self.tokenizer = tokenizer
self.response = response
self.context = context
def __len__(self):
return len(self.context)
def __getitem__(self, i):
input = SPECIAL_TOKENS['bos_token'] + self.context[i] + \
SPECIAL_TOKENS['sep_token'] + \
self.response[i] + SPECIAL_TOKENS['eos_token']
encodings_dict = tokenizer(input,
truncation=True,
max_length=MAXLEN,
padding="max_length")
input_ids = encodings_dict['input_ids']
attention_mask = encodings_dict['attention_mask']
return {'label': torch.tensor(input_ids),
'input_ids': torch.tensor(input_ids),
'attention_mask': torch.tensor(attention_mask)}
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.add_special_tokens(SPECIAL_TOKENS)
config = AutoConfig.from_pretrained('gpt2',
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
sep_token_id=tokenizer.sep_token_id,
pad_token_id=tokenizer.pad_token_id,
output_hidden_states=False)
model = AutoModelForPreTraining.from_pretrained('gpt2', config=config)
model.resize_token_embeddings(len(tokenizer))
model.cuda()
context_train, response_train = load_conversations('train')
context_val, response_val = load_conversations('validation')
train_data = dict()
i=0
for context, response in zip (context_train, response_train):
train_data[i] = [context, response]
i += 1
#********************************************
val_data = dict()
i=0
for context, response in zip (context_val, response_val):
val_data[i] = [context, response]
i += 1
train_dataset = myDataset(train_data, tokenizer)
val_dataset = myDataset(val_data, tokenizer)
# load_model_path = '/content/drive/MyDrive/models/checkpoint-1000/pytorch_model.bin'
# model.load_state_dict(torch.load(load_model_path))
training_args = TrainingArguments(
output_dir="/content/drive/MyDrive/models",
num_train_epochs=3,
eval_steps = 2000,
save_steps=2000,
warmup_steps=500,
prediction_loss_only=True,
learning_rate = 5e-4,
do_eval = True,
evaluation_strategy = 'steps'
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tokenizer
)
trainer.train()
trainer.save_model('/content/drive/MyDrive/final model')
load_model_path = '/content/drive/MyDrive/models/checkpoint-26000/pytorch_model.bin'
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.add_special_tokens(SPECIAL_TOKENS)
config = AutoConfig.from_pretrained('gpt2',
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
sep_token_id=tokenizer.sep_token_id,
pad_token_id=tokenizer.pad_token_id,
output_hidden_states=False)
model = AutoModelForPreTraining.from_pretrained('gpt2', config=config)
model.resize_token_embeddings(len(tokenizer))
model.load_state_dict(torch.load(load_model_path))
model.cuda()
model.eval()
def generate_response(text):
inp_context = SPECIAL_TOKENS['bos_token'] + text + SPECIAL_TOKENS['sep_token']
generated = torch.tensor(tokenizer.encode(inp_context)).unsqueeze(0)
device = torch.device("cuda")
generated = generated.to(device)
sample_outputs = model.generate(generated,
do_sample=True,
top_k=0,
min_length=5,
max_length=30,
num_return_sequences=10
)
for i, sample_output in enumerate(sample_outputs):
text_gen = tokenizer.decode(sample_output, skip_special_tokens=True)
a = len(text)
print("{}: {}\n\n".format(i+1, text_gen))
return
generate_response('Where have you been?')
generate_response('What is your favourite color?')
generate_response('What do you want to eat?')
write it for this data
{
"conversations": [
{
"conversation_id": "3c091d14-fd54-4e25-b1e3-d348f8dd73fb",
"messages": [
{
"name": "John Doe",
"type": "customer",
"message": "Hey, Jane! How's it going?"
},
{
"name": "agent 1",
"type": "agent",
"message": "Hey John! I'm good, thanks. How about you?"
},
{
"name": "John Doe",
"type": "customer",
"message": "Can't complain. Hey, do you happen to know what the minimum age is to open an account here?"
},
{
"name": "agent 1",
"type": "agent",
"message": "I think it's 18 years. At least that's what I was told when I opened mine."
}
]
},
{
"conversation_id": "ad5e3b21-36a2-4c0f-b0b6-63f3816e0f4e",
"messages": [
{
"name": "Jane Doe",
"type": "customer",
"message": "Hi John! How have you been?"
},
{
"name": "agent 2",
"type": "agent",
"message": "Hey Jane! I've been good, thank you. How about yourself?"
},
{
"name": "Jane Doe",
"type": "customer",
"message": "Not too bad. By the way, do you know what the minimum balance is to maintain an account here?"
},
{
"name": "agent 2",
"type": "agent",
"message": "I believe it's $10. That's what they mentioned when I signed up."
}
]
},
{
"conversation_id": "2c90e715-193e-4938-8153-f23fd1be9473",
"messages": [
{
"name": "Customer",
"type": "customer",
"message": "I wish to request chargeback"
},
{
"name": "Agent",
"type": "agent",
"message": "May I know the date, amount and merchant name?"
},
{
"name": "Customer",
"type": "customer",
"message": "The amount is 198.20 and date is: '10-OCT-2022' from"
},
{
"name": "Agent",
"type": "agent",
"message": "I have found this transaction and it is not qualified for chargeback, is there other transaction I can help you with"
},
{
"name": "Customer",
"type": "customer",
"message": "Yes, amount is amount 7849.90 from LG on December 12"
}
]
},
{
"conversation_id": "7d2fb18d-9f29-4a8e-936a-7da06e5fb746",
"messages": [
{
"name": "Jack Williams",
"type": "customer",
"message": "Hello, I want to request a chargeback."
},
{
"name": "Agent 3",
"type": "agent",
"message": "May I know the date, amount and merchant name?"
},
{
"name": "Jack Williams",
"type": "customer",
"message": "The amount is 198.20 and date is: \"10-OCT-2022\" from "
},
{
"name": "Agent 3",
"type": "agent",
"message": "I have found this transaction and it is not qualified for chargeback, is there other transaction I can help you with"
},
{
"name": "Jack Williams",
"type": "customer",
"message": "Yes, amount is amount 7849.90 from LG on December 12"
},
{
"name": "Agent 3",
"type": "agent",
"message": "This transaction is qualified for chargeback, the reference ID is 304712, do you wish to proceed and do you agree with our terms of service?"
}
]
}
]
}
|
bc6597b7ed957c962c3bceb50b5b75c7
|
{
"intermediate": 0.3290996253490448,
"beginner": 0.31484320759773254,
"expert": 0.35605719685554504
}
|
42,588
|
Make me a back code for websites
|
251769bd2488b0a4d84eca8c010b1645
|
{
"intermediate": 0.3484247028827667,
"beginner": 0.3712378442287445,
"expert": 0.2803374230861664
}
|
42,589
|
i have 1100 csv files of cryptocurrencies historical data including OHLCV and indicators and etc
each file has 113 columns and between 1000 to 2500 rows
i want to train a cnn model to predict close price of next day based on past 60 days
i want to train model without combining all csv files
give me the proper implementation to train a ResNet cnn model on my dataset
|
0393415e2e9f034d37b067eda2b33755
|
{
"intermediate": 0.2590683102607727,
"beginner": 0.07044460624456406,
"expert": 0.670487105846405
}
|
42,590
|
Send me a py code for making mod menus
|
f403138033bb1c2921f9db7d20b3f96e
|
{
"intermediate": 0.43523818254470825,
"beginner": 0.2729821503162384,
"expert": 0.29177963733673096
}
|
42,591
|
make neovim the development environment
|
d295ee3342a0f6dffe48a46d0bf9cbc0
|
{
"intermediate": 0.2529650032520294,
"beginner": 0.10819298774003983,
"expert": 0.638841986656189
}
|
42,592
|
import gradio as gr
import requests
API_URL = "https://api-inference.huggingface.co/models/emilianJR/epiCRealism"
headers = {"Authorization": "Bearer {API_TOKEN}"}
def query(prompt):
payload = {"inputs": prompt}
response = requests.post(API_URL, headers=headers, json=payload)
return response.content
iface = gr.Interface(query, gr.inputs.Textbox(lines=1), gr.outputs.Image())
iface.launch()
|
d0dd2696ba052afaf96c0597eddc0cc6
|
{
"intermediate": 0.4404014050960541,
"beginner": 0.31141698360443115,
"expert": 0.24818159639835358
}
|
42,593
|
I have two models in Django DRF. Should I add something to CustomUser model?
Here the code:
__
#interests of user, like 'Python'[programming], 'web development', etc.
class Interest(models.Model):
name = models.CharField(max_length=100, unique=True)
def __str__(self):
return self.name
class CustomUser(AbstractUser):
# Add additional fields here
telegram = models.CharField(null=True, blank=True, max_length=64)
discord = models.CharField(null=True, blank=True, max_length=64)
whatsapp = models.IntegerField(null=True, blank=True, max_length=64)
|
3bad45de5aca8c0ea8209ad788d0d6f6
|
{
"intermediate": 0.4359378516674042,
"beginner": 0.3974030911922455,
"expert": 0.16665901243686676
}
|
42,594
|
Привет! вот мой код:
from aiogram import Bot, Dispatcher, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardMarkup, InlineKeyboardButton
import aiosqlite
from random import sample
from aiogram.utils import executor
import asyncio
API_TOKEN = '7013156611:AAEsIm7Vklxl6hHipwmiClgn9KAkgzNWuQg' # Замените на ваш токен бота Telegram
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
channel_id = "-1002034494437"
database_path = "hide_seek_bot.db" # Путь к вашей базе данных
async def create_db():
async with aiosqlite.connect(database_path) as db:
await db.execute('''CREATE TABLE IF NOT EXISTS games (
id INTEGER PRIMARY KEY,
message_id INTEGER,
is_active BOOLEAN NOT NULL CHECK (is_active IN (0, 1)),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)''')
await db.execute('''CREATE TABLE IF NOT EXISTS players (
id INTEGER PRIMARY KEY,
game_id INTEGER,
user_id INTEGER NOT NULL,
is_detective BOOLEAN NOT NULL CHECK (is_detective IN (0, 1)),
is_found BOOLEAN NOT NULL DEFAULT 0 CHECK (is_found IN (0, 1)),
FOREIGN KEY(game_id) REFERENCES games(id))''')
await db.commit()
class Form(StatesGroup):
detective_count = State() # Количество сыщиков
async def is_admin(user_id: int):
admin_ids = [989037374, 511205958]
return user_id in admin_ids
@dp.message_handler(commands=['start'])
async def send_welcome(message: types.Message):
if await is_admin(message.from_user.id):
markup = ReplyKeyboardMarkup(resize_keyboard=True, selective=True)
markup.add(KeyboardButton("Создать игру"))
markup.add(KeyboardButton("Определить сыщиков"))
markup.add(KeyboardButton("Завершить игру"))
await message.reply("Вы администратор бота. Используйте кнопки.", reply_markup=markup)
else:
await message.reply("Извините, доступно только администраторам.")
@dp.callback_query_handler(text="participate")
async def participate_in_game(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect(database_path) as db:
async with db.execute("SELECT id FROM games WHERE is_active = 1") as cursor:
game = await cursor.fetchone()
if not game:
await bot.answer_callback_query(callback_query.id, "В данный момент нет активной игры.")
return
game_id = game[0]
async with db.execute("SELECT id FROM players WHERE game_id = ? AND user_id = ?", (game_id, user_id)) as cursor:
participant = await cursor.fetchone()
if participant:
await bot.answer_callback_query(callback_query.id, "Вы уже участвуете в этой игре.")
return
await db.execute("INSERT INTO players (game_id, user_id, is_detective) VALUES (?, ?, 0)", (game_id, user_id))
await db.commit()
async with db.execute("SELECT message_id FROM games WHERE is_active = 1") as cursor:
game_info = await cursor.fetchone()
if game_info:
message_id = game_info[0]
players_list_str = await generate_players_list(db)
keyboard_markup = types.InlineKeyboardMarkup(row_width=1)
participation_button = types.InlineKeyboardButton(text="Участвовать", callback_data="participate")
keyboard_markup.add(participation_button)
new_msg_text = f"Начинается новая игра! Участники:\n{players_list_str}\nДля участия нажмите кнопку ниже."
await bot.edit_message_text(chat_id=channel_id, message_id=message_id, text=new_msg_text,
reply_markup=keyboard_markup)
await bot.answer_callback_query(callback_query.id, "Вы успешно присоединились к игре!")
async def generate_players_list(db):
players_list_str = ""
async with db.execute("SELECT user_id FROM players WHERE game_id = (SELECT id FROM games WHERE is_active = 1)") as cursor:
players = await cursor.fetchall()
for player in players:
user_id = player[0]
try:
user = await bot.get_chat_member(channel_id, user_id)
username = "@" + user.user.username if user.user.username else f"{user_id}"
players_list_str += username + "\n"
except Exception as e:
continue # Пропускаем в случае ошибки
return players_list_str
@dp.message_handler(lambda message: message.text == "Создать игру")
async def create_game(message: types.Message):
async with aiosqlite.connect(database_path) as db:
# Проверяем, есть ли активная игра
async with db.execute("SELECT id FROM games WHERE is_active = 1") as cursor:
active_game = await cursor.fetchone()
if active_game:
await message.reply("Уже существует активная игра. Необходимо сначала её завершить.")
return
# Создаём новую игру
cursor = await db.execute("INSERT INTO games (is_active) VALUES (1)")
game_id = cursor.lastrowid
await db.commit()
keyboard_markup = types.InlineKeyboardMarkup(row_width=1)
participation_button = types.InlineKeyboardButton(text="Участвовать", callback_data="participate")
keyboard_markup.add(participation_button)
msg = await bot.send_message(channel_id, "Начинается новая игра! Для участия нажмите кнопку ниже.",
reply_markup=keyboard_markup)
# Сохраняем message_id игры
await db.execute("UPDATE games SET message_id = ? WHERE id = ?", (msg.message_id, game_id))
await db.commit()
await message.reply("Игра создана и объявлена в канале.")
@dp.message_handler(lambda message: message.text == "Определить сыщиков", state=None)
async def request_detective_count(message: types.Message):
if await is_admin(message.from_user.id):
await Form.detective_count.set()
await message.reply("Введите количество сыщиков:")
else:
await message.reply("Извините, доступно только администраторам.")
@dp.message_handler(state=Form.detective_count)
async def handle_set_detective_count(message: types.Message, state: FSMContext):
# Тут ваш код обработки 'set_detective_count'
await pick_detectives(message, int(message.text)) # Вызов функции pick_detectives с количеством сыщиков
await state.finish() # Завершение состояния
async def pick_detectives(message: types.Message, num_of_detectives: int):
async with aiosqlite.connect(database_path) as db:
# Получаем ID активной игры
async with db.execute("SELECT id FROM games WHERE is_active = 1 LIMIT 1") as cursor:
game = await cursor.fetchone()
if not game:
await message.reply("В данный момент нет активной игры.")
return
game_id = game[0]
# Получаем список ID игроков этой игры
async with db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = 0",
(game_id,)) as cursor:
players = await cursor.fetchall()
if len(players) <= num_of_detectives:
await message.reply("Недостаточно игроков для выбора сыщиков.")
return
# Рандомно выбираем сыщиков
detectives_ids = sample([p[0] for p in players], num_of_detectives)
channel_id = "-1002014105263"
# Фомируем визуальный список сыщиков для отправки сообщения
detective_list_info = []
for user_id in detectives_ids:
user = await bot.get_chat_member(channel_id, user_id) # Получаем информацию о пользователе
username = user.user.username
if username:
detective_list_info.append(f"@{username}")
else:
detective_list_info.append(f"{user_id}") # Если username отсутствует, используем ID
# Обновляем статус выбранных игроков на сыщиков
for user_id in detectives_ids:
await db.execute("UPDATE players SET is_detective = 1 WHERE game_id = ? AND user_id = ?",
(game_id, user_id))
await db.commit()
detectives_list_str, hiders_list_str = await generate_players_and_detectives_list(db)
keyboard_markup = types.InlineKeyboardMarkup()
found_button = types.InlineKeyboardButton(text="Меня нашли", callback_data="found_me")
keyboard_markup.add(found_button)
message_text = f"{detectives_list_str}\n{hiders_list_str}\n\nИгра началась!"
send_message = await bot.send_message(channel_id, text=message_text, reply_markup=keyboard_markup)
async with aiosqlite.connect(database_path) as db:
await db.execute("UPDATE games SET message_id = ? WHERE id = ?", (send_message.message_id, game_id))
await db.commit()
async def generate_players_and_detectives_list(db):
detectives_list_str = "Сыщики:\n"
hiders_list_str = "Прячущиеся:\n"
current_game_id = None
# Сначала найдем id активной игры
async with db.execute("SELECT id FROM games WHERE is_active = 1") as cursor:
game = await cursor.fetchone()
if game:
current_game_id = game[0]
else:
return "Нет активной игры.", "Нет активной игры."
# После нахождения id игры, используем его для выборки игроков этой игры
# Запрос на получение сыщиков текущей активной игры
async with db.execute("SELECT DISTINCT user_id FROM players WHERE game_id = ? AND is_detective = 1",
(current_game_id,)) as cursor:
async for det in cursor:
try:
user_info = await bot.get_chat_member(channel_id, det[0])
user_name = f"@{user_info.user.username}" if user_info.user.username else f"ID: {det[0]}"
detectives_list_str += user_name + "\n"
except Exception as e:
print(f"Ошибка получения данных о пользователе: {e}")
# Запрос на получение прячущихся текущей активной игры, которые еще не найдены
async with db.execute(
"SELECT DISTINCT user_id FROM players WHERE game_id = ? AND is_detective = 0 AND is_found = 0",
(current_game_id,)) as cursor:
async for hid in cursor:
try:
user_info = await bot.get_chat_member(channel_id, hid[0])
user_name = f"@{user_info.user.username}" if user_info.user.username else f"ID: {hid[0]}"
hiders_list_str += user_name + "\n"
except Exception as e:
print(f"Ошибка получения данных о пользователе: {e}")
return detectives_list_str.strip(), hiders_list_str.strip()
@dp.message_handler(lambda message: message.text == "Завершить игру")
async def finish_the_game_request(message: types.Message):
if not await is_admin(message.from_user.id):
await message.reply("Только администраторы могут завершать игру.")
return
# Предположим, у нас есть две команды: Сыщики и Прячущиеся
markup = types.InlineKeyboardMarkup()
markup.add(types.InlineKeyboardButton("Сыщики", callback_data="win_detectives"))
markup.add(types.InlineKeyboardButton("Прячущиеся", callback_data="win_hiders"))
await message.reply("Какая команда победила?", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == "win_detectives" or c.data == "win_hiders")
async def announce_winners(callback_query: types.CallbackQuery):
await bot.answer_callback_query(callback_query.id)
winning_team = "Сыщики" if callback_query.data == "win_detectives" else "Прячущиеся"
async with aiosqlite.connect(database_path) as db:
# Получаем ID активной игры
async with db.execute("SELECT id FROM games WHERE is_active = 1 LIMIT 1") as cursor:
active_game = await cursor.fetchone()
if not active_game:
await bot.send_message(callback_query.from_user.id, "Активная игра не найдена.")
return
game_id = active_game[0]
# Помечаем игру как завершенную
await db.execute("UPDATE games SET is_active = 0 WHERE id = ?", (game_id,))
await db.commit()
# Получаем список участников победившей команды
role_condition = "1" if callback_query.data == "win_detectives" else "0"
async with db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = ?",
(game_id, role_condition)) as cursor:
players = await cursor.fetchall()
# Формируем список для публикации
player_list = []
for player in players:
user_id = player[0]
try:
# Попытка получить пользователя для взаимодействия
user = await bot.get_chat_member(channel_id, user_id) # Получаем информацию о пользователе
username = user.user.username
if username:
player_list.append(f"@{user.user.username}")
else:
player_list.append(f"ID: {user_id}")
except:
# В случае ошибки, используем ID
player_list.append(f"ID: {user_id}")
winners_text = ", ".join(player_list)
# Отправляем сообщение о победителях в чат
announcement_text = f"Победила команда {winning_team}: {winners_text}"
await bot.send_message(channel_id, announcement_text)
@dp.callback_query_handler(text="found_me")
async def player_found(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
game_id = None
async with aiosqlite.connect(database_path) as db:
await db.execute("UPDATE players SET is_found = 1 WHERE user_id = ? AND is_found = 0", (user_id,))
await db.commit()
cursor = await db.execute("SELECT id FROM games WHERE is_active = 1")
row = await cursor.fetchone()
if row:
game_id = row[0]
if game_id:
await update_players_list(channel_id, game_id)
await bot.answer_callback_query(callback_query.id, "Вы отмечены как найденный!")
else:
await bot.answer_callback_query(callback_query.id, "Нет активной игры.")
async def update_players_list(channel_id, game_id):
detectives_str = ""
hiders_str = ""
async with aiosqlite.connect(database_path) as db:
# Получаем список сыщиков
cursor = await db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = 1", (game_id,))
detectives = await cursor.fetchall()
for user_id in [det[0] for det in detectives]:
try:
user = await bot.get_chat_member(channel_id, user_id)
username = f"@{user.user.username}" if user.user.username else str(user_id)
detectives_str += username + "\n"
except Exception:
continue # Пропускаем в случае ошибки
# Получаем список оставшихся игроков (прячущихся не найденных)
cursor = await db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = 0 AND is_found = 0",
(game_id,))
hiders = await cursor.fetchall()
for user_id in [hid[0] for hid in hiders]:
try:
user = await bot.get_chat_member(channel_id, user_id)
username = f"@{user.user.username}" if user.user.username else str(user_id)
hiders_str += username + "\n"
except Exception:
continue # Пропускаем в случае ошибки
cursor = await db.execute("SELECT message_id FROM games WHERE id = ?", (game_id,))
row = await cursor.fetchone()
if row:
message_id = row[0]
keyboard_markup = types.InlineKeyboardMarkup()
found_button = types.InlineKeyboardButton(text="Меня нашли", callback_data="found_me")
keyboard_markup.add(found_button)
# Сформированный текст сообщения объединяет информацию о обеих группах игроков
new_msg_text = f"Сыщики:\n{detectives_str}\nОстальные прячущиеся:\n{hiders_str}"
send_message = await bot.edit_message_text(chat_id=channel_id, message_id=message_id, text=new_msg_text,
reply_markup=keyboard_markup)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(create_db())
executor.start_polling(dp, skip_updates=True)
мне нужно сделать так, чтобы если не осталось ненайденных игроков, игра автоматически заканчивалась в пользу сыщиков
|
8c25e1ba3c7d5ec25f7f49cbbc3e088c
|
{
"intermediate": 0.36273032426834106,
"beginner": 0.5144443511962891,
"expert": 0.12282531708478928
}
|
42,595
|
from aiogram import Bot, Dispatcher, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardMarkup, InlineKeyboardButton
import aiosqlite
from random import sample
from aiogram.utils import executor
import asyncio
API_TOKEN = '7013156611:AAEsIm7Vklxl6hHipwmiClgn9KAkgzNWuQg' # Замените на ваш токен бота Telegram
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
channel_id = "-1002034494437"
database_path = "hide_seek_bot.db" # Путь к вашей базе данных
async def create_db():
async with aiosqlite.connect(database_path) as db:
await db.execute('''CREATE TABLE IF NOT EXISTS games (
id INTEGER PRIMARY KEY,
message_id INTEGER,
is_active BOOLEAN NOT NULL CHECK (is_active IN (0, 1)),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)''')
await db.execute('''CREATE TABLE IF NOT EXISTS players (
id INTEGER PRIMARY KEY,
game_id INTEGER,
user_id INTEGER NOT NULL,
is_detective BOOLEAN NOT NULL CHECK (is_detective IN (0, 1)),
is_found BOOLEAN NOT NULL DEFAULT 0 CHECK (is_found IN (0, 1)),
FOREIGN KEY(game_id) REFERENCES games(id))''')
await db.commit()
class Form(StatesGroup):
detective_count = State() # Количество сыщиков
async def is_admin(user_id: int):
admin_ids = [989037374, 511205958]
return user_id in admin_ids
@dp.message_handler(commands=['start'])
async def send_welcome(message: types.Message):
if await is_admin(message.from_user.id):
markup = ReplyKeyboardMarkup(resize_keyboard=True, selective=True)
markup.add(KeyboardButton("Создать игру"))
markup.add(KeyboardButton("Определить сыщиков"))
markup.add(KeyboardButton("Завершить игру"))
await message.reply("Вы администратор бота. Используйте кнопки.", reply_markup=markup)
else:
await message.reply("Извините, доступно только администраторам.")
@dp.callback_query_handler(text="participate")
async def participate_in_game(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect(database_path) as db:
async with db.execute("SELECT id FROM games WHERE is_active = 1") as cursor:
game = await cursor.fetchone()
if not game:
await bot.answer_callback_query(callback_query.id, "В данный момент нет активной игры.")
return
game_id = game[0]
async with db.execute("SELECT id FROM players WHERE game_id = ? AND user_id = ?", (game_id, user_id)) as cursor:
participant = await cursor.fetchone()
if participant:
await bot.answer_callback_query(callback_query.id, "Вы уже участвуете в этой игре.")
return
await db.execute("INSERT INTO players (game_id, user_id, is_detective) VALUES (?, ?, 0)", (game_id, user_id))
await db.commit()
async with db.execute("SELECT message_id FROM games WHERE is_active = 1") as cursor:
game_info = await cursor.fetchone()
if game_info:
message_id = game_info[0]
players_list_str = await generate_players_list(db)
keyboard_markup = types.InlineKeyboardMarkup(row_width=1)
participation_button = types.InlineKeyboardButton(text="Участвовать", callback_data="participate")
keyboard_markup.add(participation_button)
new_msg_text = f"🎮 Начинается новая игра! Участники:\n{players_list_str}\nДля участия нажмите кнопку ниже."
await bot.edit_message_text(chat_id=channel_id, message_id=message_id, text=new_msg_text,
reply_markup=keyboard_markup)
await bot.answer_callback_query(callback_query.id, "Вы успешно присоединились к игре!")
async def generate_players_list(db):
players_list_str = ""
async with db.execute("SELECT user_id FROM players WHERE game_id = (SELECT id FROM games WHERE is_active = 1)") as cursor:
players = await cursor.fetchall()
for player in players:
user_id = player[0]
try:
user = await bot.get_chat_member(channel_id, user_id)
username = "@" + user.user.username if user.user.username else f"{user_id}"
players_list_str += username + "\n"
except Exception as e:
continue # Пропускаем в случае ошибки
return players_list_str
@dp.message_handler(lambda message: message.text == "Создать игру")
async def create_game(message: types.Message):
async with aiosqlite.connect(database_path) as db:
# Проверяем, есть ли активная игра
async with db.execute("SELECT id FROM games WHERE is_active = 1") as cursor:
active_game = await cursor.fetchone()
if active_game:
await message.reply("Уже существует активная игра. Необходимо сначала её завершить.")
return
# Создаём новую игру
cursor = await db.execute("INSERT INTO games (is_active) VALUES (1)")
game_id = cursor.lastrowid
await db.commit()
keyboard_markup = types.InlineKeyboardMarkup(row_width=1)
participation_button = types.InlineKeyboardButton(text="Участвовать", callback_data="participate")
keyboard_markup.add(participation_button)
msg = await bot.send_message(channel_id, "Начинается новая игра! Для участия нажмите кнопку ниже.",
reply_markup=keyboard_markup)
# Сохраняем message_id игры
await db.execute("UPDATE games SET message_id = ? WHERE id = ?", (msg.message_id, game_id))
await db.commit()
await message.reply("Игра создана и объявлена в канале.")
@dp.message_handler(lambda message: message.text == "Определить сыщиков", state=None)
async def request_detective_count(message: types.Message):
if await is_admin(message.from_user.id):
await Form.detective_count.set()
await message.reply("Введите количество сыщиков:")
else:
await message.reply("Извините, доступно только администраторам.")
@dp.message_handler(state=Form.detective_count)
async def handle_set_detective_count(message: types.Message, state: FSMContext):
# Тут ваш код обработки 'set_detective_count'
await pick_detectives(message, int(message.text)) # Вызов функции pick_detectives с количеством сыщиков
await state.finish() # Завершение состояния
async def pick_detectives(message: types.Message, num_of_detectives: int):
async with aiosqlite.connect(database_path) as db:
# Получаем ID активной игры
async with db.execute("SELECT id FROM games WHERE is_active = 1 LIMIT 1") as cursor:
game = await cursor.fetchone()
if not game:
await message.reply("В данный момент нет активной игры.")
return
game_id = game[0]
# Получаем список ID игроков этой игры
async with db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = 0",
(game_id,)) as cursor:
players = await cursor.fetchall()
if len(players) <= num_of_detectives:
await message.reply("Недостаточно игроков для выбора сыщиков.")
return
# Рандомно выбираем сыщиков
detectives_ids = sample([p[0] for p in players], num_of_detectives)
# Фомируем визуальный список сыщиков для отправки сообщения
detective_list_info = []
for user_id in detectives_ids:
user = await bot.get_chat_member(channel_id, user_id) # Получаем информацию о пользователе
username = user.user.username
if username:
detective_list_info.append(f"@{username}")
else:
detective_list_info.append(f"{user_id}") # Если username отсутствует, используем ID
# Обновляем статус выбранных игроков на сыщиков
for user_id in detectives_ids:
await db.execute("UPDATE players SET is_detective = 1 WHERE game_id = ? AND user_id = ?",
(game_id, user_id))
await db.commit()
detectives_list_str, hiders_list_str = await generate_players_and_detectives_list(db)
keyboard_markup = types.InlineKeyboardMarkup()
found_button = types.InlineKeyboardButton(text="Меня нашли", callback_data="found_me")
keyboard_markup.add(found_button)
message_text = f"{detectives_list_str}\n{hiders_list_str}\n\nИгра началась! Если вас нашли, пожалуйста, нажмите на кнопку Меня нашли!, чтобы игра проходила согласно правилам."
send_message = await bot.send_message(channel_id, text=message_text, reply_markup=keyboard_markup)
async with aiosqlite.connect(database_path) as db:
await db.execute("UPDATE games SET message_id = ? WHERE id = ?", (send_message.message_id, game_id))
await db.commit()
async def generate_players_and_detectives_list(db):
detectives_list_str = "🕵️♂️ Сыщики:\n"
hiders_list_str = "🤫 Прячущиеся:\n"
current_game_id = None
# Сначала найдем id активной игры
async with db.execute("SELECT id FROM games WHERE is_active = 1") as cursor:
game = await cursor.fetchone()
if game:
current_game_id = game[0]
else:
return "Нет активной игры.", "Нет активной игры."
# После нахождения id игры, используем его для выборки игроков этой игры
# Запрос на получение сыщиков текущей активной игры
async with db.execute("SELECT DISTINCT user_id FROM players WHERE game_id = ? AND is_detective = 1",
(current_game_id,)) as cursor:
async for det in cursor:
try:
user_info = await bot.get_chat_member(channel_id, det[0])
user_name = f"@{user_info.user.username}" if user_info.user.username else f"ID: {det[0]}"
detectives_list_str += user_name + "\n"
except Exception as e:
print(f"Ошибка получения данных о пользователе: {e}")
# Запрос на получение прячущихся текущей активной игры, которые еще не найдены
async with db.execute(
"SELECT DISTINCT user_id FROM players WHERE game_id = ? AND is_detective = 0 AND is_found = 0",
(current_game_id,)) as cursor:
async for hid in cursor:
try:
user_info = await bot.get_chat_member(channel_id, hid[0])
user_name = f"@{user_info.user.username}" if user_info.user.username else f"ID: {hid[0]}"
hiders_list_str += user_name + "\n"
except Exception as e:
print(f"Ошибка получения данных о пользователе: {e}")
return detectives_list_str.strip(), hiders_list_str.strip()
@dp.message_handler(lambda message: message.text == "Завершить игру")
async def finish_the_game_request(message: types.Message):
if not await is_admin(message.from_user.id):
await message.reply("Только администраторы могут завершать игру.")
return
# Предположим, у нас есть две команды: Сыщики и Прячущиеся
markup = types.InlineKeyboardMarkup()
markup.add(types.InlineKeyboardButton("🕵️♂️ Сыщики", callback_data="win_detectives"))
markup.add(types.InlineKeyboardButton("🤫 Прячущиеся", callback_data="win_hiders"))
await message.reply("Какая команда победила?", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == "win_detectives" or c.data == "win_hiders")
async def announce_winners(callback_query: types.CallbackQuery):
await bot.answer_callback_query(callback_query.id)
winning_team = "сыщиков" if callback_query.data == "win_detectives" else "прячущихся"
async with aiosqlite.connect(database_path) as db:
# Получаем ID активной игры
async with db.execute("SELECT id FROM games WHERE is_active = 1 LIMIT 1") as cursor:
active_game = await cursor.fetchone()
if not active_game:
await bot.send_message(callback_query.from_user.id, "Активная игра не найдена.")
return
game_id = active_game[0]
# Помечаем игру как завершенную
await db.execute("UPDATE games SET is_active = 0 WHERE id = ?", (game_id,))
await db.commit()
# Получаем список участников победившей команды
role_condition = "1" if callback_query.data == "win_detectives" else "0"
async with db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = ?",
(game_id, role_condition)) as cursor:
players = await cursor.fetchall()
# Формируем список для публикации
player_list = []
for player in players:
user_id = player[0]
try:
# Попытка получить пользователя для взаимодействия
user = await bot.get_chat_member(channel_id, user_id) # Получаем информацию о пользователе
username = user.user.username
if username:
player_list.append(f"@{user.user.username}")
else:
player_list.append(f"ID: {user_id}")
except:
# В случае ошибки, используем ID
player_list.append(f"ID: {user_id}")
winners_text = ", ".join(player_list)
# Отправляем сообщение о победителях в чат
announcement_text = f"🥳 Победила команда {winning_team}: {winners_text}\n\nПоздравляем победителей!"
await bot.send_message(channel_id, announcement_text)
@dp.callback_query_handler(text="found_me")
async def player_found(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
game_id = None
async with aiosqlite.connect(database_path) as db:
await db.execute("UPDATE players SET is_found = 1 WHERE user_id = ? AND is_found = 0", (user_id,))
await db.commit()
cursor = await db.execute("SELECT id FROM games WHERE is_active = 1")
row = await cursor.fetchone()
if row:
game_id = row[0]
if game_id:
await update_players_list(channel_id, game_id)
await bot.answer_callback_query(callback_query.id, "Вы отмечены как найденный!")
else:
await bot.answer_callback_query(callback_query.id, "Нет активной игры.")
async def update_players_list(channel_id, game_id):
detectives_str = ""
hiders_str = ""
async with aiosqlite.connect(database_path) as db:
# Получаем список сыщиков
cursor = await db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = 1", (game_id,))
detectives = await cursor.fetchall()
for user_id in [det[0] for det in detectives]:
try:
user = await bot.get_chat_member(channel_id, user_id)
username = f"@{user.user.username}" if user.user.username else str(user_id)
detectives_str += username + "\n"
except Exception:
continue # Пропускаем в случае ошибки
# Получаем список оставшихся игроков (прячущихся не найденных)
cursor = await db.execute("SELECT user_id FROM players WHERE game_id = ? AND is_detective = 0 AND is_found = 0",
(game_id,))
hiders = await cursor.fetchall()
for user_id in [hid[0] for hid in hiders]:
try:
user = await bot.get_chat_member(channel_id, user_id)
username = f"@{user.user.username}" if user.user.username else str(user_id)
hiders_str += username + "\n"
except Exception:
continue # Пропускаем в случае ошибки
cursor = await db.execute("SELECT message_id FROM games WHERE id = ?", (game_id,))
row = await cursor.fetchone()
if row:
message_id = row[0]
keyboard_markup = types.InlineKeyboardMarkup()
found_button = types.InlineKeyboardButton(text="Меня нашли!", callback_data="found_me")
keyboard_markup.add(found_button)
# Сформированный текст сообщения объединяет информацию о обеих группах игроков
new_msg_text = f"🕵️♂️ Сыщики:\n{detectives_str}\n🤫 Оставшиеся прячущиеся:\n{hiders_str}"
send_message = await bot.edit_message_text(chat_id=channel_id, message_id=message_id, text=new_msg_text,
reply_markup=keyboard_markup)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(create_db())
executor.start_polling(dp, skip_updates=True)
все ли правильно у меня реализовано в этой части кода? Вот она:
@dp.message_handler(lambda message: message.text == "Определить сыщиков", state=None)
async def request_detective_count(message: types.Message):
if await is_admin(message.from_user.id):
await Form.detective_count.set()
await message.reply("Введите количество сыщиков:")
else:
await message.reply("Извините, доступно только администраторам.")
@dp.message_handler(state=Form.detective_count)
async def handle_set_detective_count(message: types.Message, state: FSMContext):
# Тут ваш код обработки 'set_detective_count'
await pick_detectives(message, int(message.text)) # Вызов функции pick_detectives с количеством сыщиков
await state.finish() # Завершение состояния
|
16ffcc262a4bfe2661e97394fb863b6c
|
{
"intermediate": 0.3595738112926483,
"beginner": 0.43234744668006897,
"expert": 0.2080787718296051
}
|
42,596
|
как в следующем окне сделать так чтобы была рамка:
class DrawingWindow(QMainWindow):
def __init__(self, coordinates):
super().__init__()
self.setWindowTitle("Transparent Drawing Window")
self.setGeometry(0, 0, QApplication.primaryScreen().size().width(),
QApplication.primaryScreen().size().height())
self.setAttribute(Qt.WA_TranslucentBackground, True)
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
|
fbf809e43dc754e6160c5e4066686fbd
|
{
"intermediate": 0.358111709356308,
"beginner": 0.48786935210227966,
"expert": 0.15401898324489594
}
|
42,597
|
как в следующем окне сделать рамку для изменения размера окна:
class DrawingWindow(QMainWindow):
def __init__(self, coordinates):
super().__init__()
self.setWindowTitle("Transparent Drawing Window")
self.setGeometry(0, 0, QApplication.primaryScreen().size().width(),
QApplication.primaryScreen().size().height())
self.setAttribute(Qt.WA_TranslucentBackground, True)
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
|
02b0f7928593df1f01f535218034fc17
|
{
"intermediate": 0.32773062586784363,
"beginner": 0.5093376636505127,
"expert": 0.1629316657781601
}
|
42,598
|
i have 1100 csv files of cryptocurrencies historical data including OHLCV and indicators and etc
each file has 113 columns and between 1000 to 2500 rows
i want to train a RNN model to predict close price of next day based on past 60 days
i want to train model without combining all csv files
give me the proper implementation to train a RSTM RNN model on my dataset
|
960b540702d2d98e69d86d217b88663b
|
{
"intermediate": 0.2950141429901123,
"beginner": 0.08053205907344818,
"expert": 0.6244538426399231
}
|
42,599
|
{
"rowCount": 4,
"colCount": 4,
"type": 0,
"letters": "WIND",
"words": [
{
"word": "WIND",
"vertical": true,
"rowIndex": 0,
"colIndex": 1,
"coinIndices": [],
"awardCoins": false
},
{
"word": "IN",
"vertical": false,
"rowIndex": 1,
"colIndex": 0,
"coinIndices": [],
"awardCoins": false
},
{
"word": "WIN",
"vertical": false,
"rowIndex": 0,
"colIndex": 1,
"coinIndices": [],
"awardCoins": false
}
],
"id": "3a87432f1c6e4f51b501303142f4016a"
}
This a level data format for my mobile crossword word game. Col index determines the starting column number of the word and row index determines the starting row number of the word. Row count and col count determines the total number of rows and columns to generate the board. Vertical field determines if word will be placed in a vertical orientation in the board if true otherwise it is horizontal. Letters field determines the letters will be used to generate words in the board. Please generate me a level json for a max 5 letter words.
|
3efdd108e752fc6775eba1de382aaa93
|
{
"intermediate": 0.38598644733428955,
"beginner": 0.303703635931015,
"expert": 0.31030991673469543
}
|
42,600
|
Give me a code for a web page about ai and make it look really aesthetic and make it look simple not many colors, also use images and buttons
|
afcb49ed55459cbfc09debfd7146dd82
|
{
"intermediate": 0.3121311664581299,
"beginner": 0.15591827034950256,
"expert": 0.5319505929946899
}
|
42,601
|
i want to run this script only on desktop view not on mobile browsers
<script src="https://zuuzzu.it/wp-content/uploads/panelsnap.js" defer></script>
<script>
document.addEventListener("DOMContentLoaded", function() {
var options = {
panelSelector: '.panel' // Select div elements with class "panel" as panels
};
new PanelSnap(options);
});
</script>
|
ac8a7363c7671e6f1d7abe463dc515d0
|
{
"intermediate": 0.4234367907047272,
"beginner": 0.31848058104515076,
"expert": 0.2580825984477997
}
|
42,602
|
List an example of the**Key Skills** section of a Modern Android engineer
|
c7d6208c37af808f881d54ef9f9f1431
|
{
"intermediate": 0.45930859446525574,
"beginner": 0.17391376197338104,
"expert": 0.36677756905555725
}
|
42,603
|
How to debug it - Segmentation fault (core dumped)
|
a566b1def561988714a6fe8d537a1df9
|
{
"intermediate": 0.6577370166778564,
"beginner": 0.16829714179039001,
"expert": 0.17396587133407593
}
|
42,604
|
Как у следующего окна сделать event изменения размера и координат для QRect:
class MainWindow(QMainWindow):
def __init__(self, coordinates):
super().__init__()
self.setWindowTitle("Transparent Drawing Window")
self.setGeometry(0, 0, QApplication.primaryScreen().size().width(),
QApplication.primaryScreen().size().height())
self.setAttribute(Qt.WA_TranslucentBackground, True)
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
self.painter = QPainter()
self.painter.setRenderHint(QPainter.Antialiasing)
self.pen_color = QColor(255, 0, 0) # Set the initial pen color to red
self.pen_width = 4 # Set the initial pen width to 4
self.coordinates = coordinates # Store the coordinates for drawing rectangles
self.starting_point = None
#self.red_button = QPushButton('Red', self)
#self.red_button.clicked.connect(self.set_red_color)
#self.blue_button = QPushButton('Blue', self)
#self.blue_button.clicked.connect(self.set_blue_color)
self.dragLineEdit = QLineEdit(self)
#self.dragLineEdit.setEnabled(False)
self.dragLineEdit.setReadOnly(True)
def set_red_color(self):
self.pen_color = QColor(255, 0, 0)
self.update()
def set_blue_color(self):
self.pen_color = QColor(0, 0, 255)
self.update()
def paintEvent(self, event):
self.painter.begin(self)
self.painter.setPen(Qt.NoPen)
self.painter.setBrush(QBrush(Qt.transparent))
self.painter.drawRect(QRect(0, 0, self.width(), self.height())) # Draw a transparent background
self.painter.setPen(QPen(QColor(self.pen_color), self.pen_width))
self.painter.setBrush(QBrush(Qt.transparent))
#self.red_button.move(self.coordinates[0], self.coordinates[1])
#self.red_button.resize(50, 50)
#self.blue_button.move(self.coordinates[0] + 50, self.coordinates[1])
#self.blue_button.resize(50, 50)
self.dragLineEdit.move(self.coordinates[0], self.coordinates[1])
self.dragLineEdit.resize(self.coordinates[2], 20)
self.painter.drawRect(self.coordinates[0], self.coordinates[1], self.coordinates[2], self.coordinates[3]) # Draw rectangles using the provided coordinates
self.painter.end()
|
27480168e93da80006fa33b9881176a2
|
{
"intermediate": 0.3058611750602722,
"beginner": 0.5430105924606323,
"expert": 0.15112827718257904
}
|
42,605
|
Напиши 2 скрипта: Client и Server на пайтон
|
484c09759027411ef4a1cf657f8f6329
|
{
"intermediate": 0.31563541293144226,
"beginner": 0.304922491312027,
"expert": 0.37944209575653076
}
|
42,606
|
i trained a xgboost model using gride search
i have best_model = grid_search.best_estimator_
how can i save it to a file
|
2dbdb17779ce8cd8f07ab252774e1631
|
{
"intermediate": 0.20719799399375916,
"beginner": 0.09265533089637756,
"expert": 0.7001466155052185
}
|
42,607
|
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
contract AssetRegistry {
struct Asset {
string name;
uint256 value;
bool isRegistered;
}
address pubic owner;
mapping(uint256 => Asset) private assets; / A mapping of asset IDs to Assets
uint256 private totalAssetRegistered;
error Unauthorized();
constructor() {
owner = msg.sender;
}
modifier onlyOwner() {
if (msg.sender != owner) {
revert Unauthorized();
}
_;
}
// function to register a new asset
function registerAsset(uint256 _assetId, string calldata _name, uint256 _value) external onlyOwner {
require(!assets[_assetId].isRegistered, "Asset already registered.");
assets[_assetId] = Asset({name: _name, value: _value, isRegistered: true});
totalAssetsRegistered++;
}
// Vulnerable view function restrcited by onlyOwner
function getAssetValue(uint256 _assetId) external view onlyOwner returns (uint256) {
require(assets[_assetId].isRegistered, "Asset not registered.");
return assets[_assetId].value;
}
// Another vulnerab;e view function
function getTotalAssetRegistered() external view onlyOwner returns (uint256) {
return totalAssetRegistered;
}
// ... Additional contract logic ...
}
Here's a smart contract managing a digital asset registery on the blockchain.
The assets are recorded with ownership details. What could possibly go wrong?
|
dac5685ea8aa162e1c9dd5de7b3a30eb
|
{
"intermediate": 0.4379739761352539,
"beginner": 0.3263269364833832,
"expert": 0.2356991320848465
}
|
42,608
|
give me a short phrase tu put on my porfolio webpage about a amature data analysit, data scientist, and nex.js web page developer
|
896614fe812c90e4f901c617efd8e44a
|
{
"intermediate": 0.2960613965988159,
"beginner": 0.13043218851089478,
"expert": 0.5735064148902893
}
|
42,609
|
i'm confused about this github readme. if i want to use function calling in the llm, do i need to download the model with the 'tools' tag? : ""Skip to content
adrienbrault
/
ollama-nous-hermes2pro
Type / to search
Code
Issues
Pull requests
Actions
Projects
Security
Insights
Owner avatar
ollama-nous-hermes2pro
Public
adrienbrault/ollama-nous-hermes2pro
Go to file
t
Add file
Folders and files
Name
Latest commit
adrienbrault
adrienbrault
docs: fix schema typo
4107d71
·
2 days ago
History
examples
Initial commit
3 days ago
models
Initial commit
3 days ago
.gitignore
Initial commit
3 days ago
LICENSE
Initial commit
3 days ago
Makefile
fix: add missing quants
3 days ago
Modelfile
Initial commit
3 days ago
Modelfile-json
Initial commit
3 days ago
Modelfile-tools
Initial commit
3 days ago
README.md
docs: fix schema typo
2 days ago
Repository files navigation
README
MIT license
Ollama models of NousResearch/Hermes-2-Pro-Mistral-7B-GGUF.
$ ollama run adrienbrault/nous-hermes2pro:Q4_0 'Hey!'
Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
There are -tools and -json tags with the recommended system prompt for function calling and json mode.
You provide the tools with the user message:
$ ollama run adrienbrault/nous-hermes2pro:Q4_0-tools "<tools>$(cat examples/tool-stock.json)</tools>
Fetch the stock fundamentals data for Tesla (TSLA)"
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call>
Or a schema for the json mode:
$ ollama run adrienbrault/nous-hermes2pro:Q4_0-json "<schema>$(cat examples/user-schema.json)</schema>
Adrien Brault was born in 1991"
{"firstName": "Adrien", "lastName": "Brault", "age": 30}
List of available tags:
adrienbrault/nous-hermes2pro:2_K
adrienbrault/nous-hermes2pro:2_K-json
adrienbrault/nous-hermes2pro:2_K-tools
adrienbrault/nous-hermes2pro:3_K_L
adrienbrault/nous-hermes2pro:3_K_L-json
adrienbrault/nous-hermes2pro:3_K_L-tools
adrienbrault/nous-hermes2pro:3_K_M
adrienbrault/nous-hermes2pro:3_K_M-json
adrienbrault/nous-hermes2pro:3_K_M-tools
adrienbrault/nous-hermes2pro:3_K_S
adrienbrault/nous-hermes2pro:3_K_S-json
adrienbrault/nous-hermes2pro:3_K_S-tools
adrienbrault/nous-hermes2pro:4_0
adrienbrault/nous-hermes2pro:4_0-json
adrienbrault/nous-hermes2pro:4_0-tools
adrienbrault/nous-hermes2pro:4_K_M
adrienbrault/nous-hermes2pro:4_K_M-json
adrienbrault/nous-hermes2pro:4_K_M-tools
adrienbrault/nous-hermes2pro:4_K_S
adrienbrault/nous-hermes2pro:4_K_S-json
adrienbrault/nous-hermes2pro:4_K_S-tools
adrienbrault/nous-hermes2pro:5_0
adrienbrault/nous-hermes2pro:5_0-json
adrienbrault/nous-hermes2pro:5_0-tools
adrienbrault/nous-hermes2pro:5_K_M
adrienbrault/nous-hermes2pro:5_K_M-json
adrienbrault/nous-hermes2pro:5_K_M-tools
adrienbrault/nous-hermes2pro:5_K_S
adrienbrault/nous-hermes2pro:5_K_S-json
adrienbrault/nous-hermes2pro:5_K_S-tools
adrienbrault/nous-hermes2pro:6_K
adrienbrault/nous-hermes2pro:6_K-json
adrienbrault/nous-hermes2pro:6_K-tools
adrienbrault/nous-hermes2pro:8_0
adrienbrault/nous-hermes2pro:8_0-json
adrienbrault/nous-hermes2pro:8_0-tools
About
Ollama models of NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
ollama.com/adrienbrault/nous-hermes2pro
Resources
Readme
License
MIT license
Activity
Stars
9 stars
Watchers
1 watching
Forks
0 forks
Report repository
Releases
No releases published
Packages
No packages published
Languages
Makefile
100.0%
Footer
© 2024 GitHub, Inc.
Footer navigation
Terms
Privacy
Security
Status
Docs
Contact
Manage cookies
Do not share my personal information
""
|
ec36ba6885de218642b574c734a3b2fc
|
{
"intermediate": 0.2928304076194763,
"beginner": 0.3591897785663605,
"expert": 0.3479798436164856
}
|
42,610
|
test
|
7ed10db863e35eb937c4809f04db47e6
|
{
"intermediate": 0.3229040801525116,
"beginner": 0.34353747963905334,
"expert": 0.33355844020843506
}
|
42,611
|
In my below code:"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
# Convert one-hot encoded labels back to label encoded
y_class_labels = np.argmax(y_class_categorical, axis=1)
# Convert label encoded labels back to original class names
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(lr=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/train",
"/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/val",
"/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Define train:val:test ratio for each class
class_data_counts = {
"bridge": [40, 80, 80],
"excess": [40, 80, 80],
"good": [40, 80, 80],}
# Split data
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
'''
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
'''
# Initialize the label encoder
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
# Count the number of images of each class in the train, validation, and test sets
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
"
Add the code to print some random train images and train segmentation images with labels.
|
a23eb740dc58472d79c5e279d7b5759d
|
{
"intermediate": 0.31839096546173096,
"beginner": 0.3364259600639343,
"expert": 0.3451830744743347
}
|
42,612
|
I am developing multi task learning model for pcb soldering defect image classification using google colab:"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
# Convert one-hot encoded labels back to label encoded
y_class_labels = np.argmax(y_class_categorical, axis=1)
# Convert label encoded labels back to original class names
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(lr=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
# Convert one-hot encoded labels back to label encoded
y_class_labels = np.argmax(y_class_categorical, axis=1)
# Convert label encoded labels back to original class names
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(lr=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/train",
"/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/val",
"/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Define train:val:test ratio for each class
class_data_counts = {
"bridge": [40, 80, 80],
"excess": [40, 80, 80],
"good": [40, 80, 80],}
# Split data
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
'''
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
'''
# Initialize the label encoder
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
# Count the number of images of each class in the train, validation, and test sets
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
# Build model
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
# Train model n times
test_class_acc_list = []
for i in range(5):
print(f"\nTrain {i+1}:\n")
model = build_model(input_shape, num_classes)
batch_size = 64
epochs = 50
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
# Evaluate model on test data
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
test_class_acc_list.append(test_class_acc)
# Calculate average test classification accuracy
average_test_class_acc = sum(test_class_acc_list) / len(test_class_acc_list)
print("Test Classification Accuracy List:", test_class_acc_list)
print("Average Test Classification Accuracy:", average_test_class_acc)
"
Check my code if there is any error lead to the poor performance of the model.
|
a39033166d9de2277d57a90d0436a62a
|
{
"intermediate": 0.3829265832901001,
"beginner": 0.2936139404773712,
"expert": 0.3234594166278839
}
|
42,613
|
I am developing multi task learning model for pcb soldering defect image classification using google colab:“
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount(”/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(“.jpg”):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(“.”)[0] + “.png”
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
# Convert one-hot encoded labels back to label encoded
y_class_labels = np.argmax(y_class_categorical, axis=1)
# Convert label encoded labels back to original class names
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation(“relu”)(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation(“relu”)(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation(“relu”)(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation(“relu”)(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation(“relu”)(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation(“relu”)(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation(“relu”)(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation(“relu”)(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation=“linear”, padding=“same”, strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation(“relu”)(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation=“linear”, padding=“same”, strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation(“relu”)(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation(“relu”)(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation(“relu”)(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation(“relu”)(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation(“relu”)(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation(“relu”)(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation(“relu”)(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation(“relu”)(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation(“relu”)(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation=“sigmoid”, name=“segmentation_output”)(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation=“linear”, padding=“same”, strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation(“relu”)(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation=“linear”, padding=“same”, strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation(“relu”)(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation=“linear”, padding=“same”, strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation(“relu”)(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation=“relu”)(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation=“relu”)(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation=“softmax”, name=“classification_output”)(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
“multitask_best_weights.h5”,
monitor=“val_classification_output_accuracy”,
save_best_only=True,
mode=“max”,
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor=“val_classification_output_accuracy”,
factor=0.3,
patience=2,
min_delta=0.001,
mode=“auto”,
verbose=1,)
tensorboard = TensorBoard(log_dir=“logs”)
model.compile(
optimizer=Adam(lr=0.001),
loss={“classification_output”: “categorical_crossentropy”, “segmentation_output”: segmentation_loss},
metrics={“classification_output”: “accuracy”, “segmentation_output”: “accuracy”},
loss_weights={“classification_output”: 1, “segmentation_output”: 1},)
history = model.fit(
X_train,
{“classification_output”: y_train_class, “segmentation_output”: y_train_seg},
validation_data=(X_val, {“classification_output”: y_val_class, “segmentation_output”: y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({“segmentation_loss”: segmentation_loss}):
# Load the best model weights
best_model = load_model(“multitask_best_weights.h5”)
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {“classification_output”: y_test_class, “segmentation_output”: y_test_seg})
print(“Test Classification Loss:”, test_class_loss)
print(“Test Segmentation Loss:”, test_seg_loss)
print(“Test Classification Accuracy:”, test_class_acc)
print(“Test Segmentation Accuracy:”, test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {‘classification_output’: y_val_class, ‘segmentation_output’: y_val_seg})
print(“Validation Classification Loss:”, val_class_loss)
print(“Validation Segmentation Loss:”, val_seg_loss)
print(“Validation Classification Accuracy:”, val_class_acc)
print(“Validation Segmentation Accuracy:”, val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {‘classification_output’: y_train_class, ‘segmentation_output’: y_train_seg})
print(“Train Classification Loss:”, train_class_loss)
print(“Train Segmentation Loss:”, train_seg_loss)
print(“Train Classification Accuracy:”, train_class_acc)
print(“Train Segmentation Accuracy:”, train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history[“classification_output_accuracy”]
classification_val_accuracy = history.history[“val_classification_output_accuracy”]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label=“Training Accuracy”)
plt.plot(classification_val_accuracy, label=“Validation Accuracy”)
plt.title(“Classification Accuracy”)
plt.xlabel(“Epochs”)
plt.ylabel(“Accuracy”)
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history[“classification_output_loss”]
classification_val_loss = history.history[“val_classification_output_loss”]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, “b”, label=“Training Loss”)
plt.plot(classification_val_loss, “r”, label=“Validation Loss”)
plt.title(“Classification Loss”)
plt.xlabel(“Epochs”)
plt.ylabel(“Loss”)
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history[“segmentation_output_accuracy”]
segmentation_val_accuracy = history.history[“val_segmentation_output_accuracy”]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label=“Training Accuracy”)
plt.plot(segmentation_val_accuracy, label=“Validation Accuracy”)
plt.title(“Segmentation Accuracy”)
plt.xlabel(“Epochs”)
plt.ylabel(“Accuracy”)
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history[“segmentation_output_loss”]
segmentation_val_loss = history.history[“val_segmentation_output_loss”]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, “b”, label=“Training Loss”)
plt.plot(segmentation_val_loss, “r”, label=“Validation Loss”)
plt.title(“Segmentation Loss”)
plt.xlabel(“Epochs”)
plt.ylabel(“Loss”)
plt.legend()
plt.show()
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount(“/content/gdrive”)
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(“.jpg”):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(“.”)[0] + “.png”
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
# Convert one-hot encoded labels back to label encoded
y_class_labels = np.argmax(y_class_categorical, axis=1)
# Convert label encoded labels back to original class names
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation(“relu”)(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation(“relu”)(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation(“relu”)(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation(“relu”)(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation(“relu”)(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation(“relu”)(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation(“relu”)(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation(“relu”)(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation=“linear”, padding=“same”, strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation(“relu”)(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation=“linear”, padding=“same”, strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation(“relu”)(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation(“relu”)(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation=“linear”, padding=“same”, strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation(“relu”)(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation(“relu”)(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation=“linear”, padding=“same”, strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation(“relu”)(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation(“relu”)(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation=“linear”, padding=“same”, strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation(“relu”)(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation=“bilinear”)(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation(“relu”)(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation=“linear”, padding=“same”, strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation(“relu”)(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation=“sigmoid”, name=“segmentation_output”)(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation=“linear”, padding=“same”, strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation(“relu”)(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation=“linear”, padding=“same”, strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation(“relu”)(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation=“linear”, padding=“same”, strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation(“relu”)(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation=“relu”)(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation=“relu”)(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation=“softmax”, name=“classification_output”)(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
“multitask_best_weights.h5”,
monitor=“val_classification_output_accuracy”,
save_best_only=True,
mode=“max”,
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor=“val_classification_output_accuracy”,
factor=0.3,
patience=2,
min_delta=0.001,
mode=“auto”,
verbose=1,)
tensorboard = TensorBoard(log_dir=“logs”)
model.compile(
optimizer=Adam(lr=0.001),
loss={“classification_output”: “categorical_crossentropy”, “segmentation_output”: segmentation_loss},
metrics={“classification_output”: “accuracy”, “segmentation_output”: “accuracy”},
loss_weights={“classification_output”: 1, “segmentation_output”: 1},)
history = model.fit(
X_train,
{“classification_output”: y_train_class, “segmentation_output”: y_train_seg},
validation_data=(X_val, {“classification_output”: y_val_class, “segmentation_output”: y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({“segmentation_loss”: segmentation_loss}):
# Load the best model weights
best_model = load_model(“multitask_best_weights.h5”)
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {“classification_output”: y_test_class, “segmentation_output”: y_test_seg})
print(“Test Classification Loss:”, test_class_loss)
print(“Test Segmentation Loss:”, test_seg_loss)
print(“Test Classification Accuracy:”, test_class_acc)
print(“Test Segmentation Accuracy:”, test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {‘classification_output’: y_val_class, ‘segmentation_output’: y_val_seg})
print(“Validation Classification Loss:”, val_class_loss)
print(“Validation Segmentation Loss:”, val_seg_loss)
print(“Validation Classification Accuracy:”, val_class_acc)
print(“Validation Segmentation Accuracy:”, val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {‘classification_output’: y_train_class, ‘segmentation_output’: y_train_seg})
print(“Train Classification Loss:”, train_class_loss)
print(“Train Segmentation Loss:”, train_seg_loss)
print(“Train Classification Accuracy:”, train_class_acc)
print(“Train Segmentation Accuracy:”, train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history[“classification_output_accuracy”]
classification_val_accuracy = history.history[“val_classification_output_accuracy”]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label=“Training Accuracy”)
plt.plot(classification_val_accuracy, label=“Validation Accuracy”)
plt.title(“Classification Accuracy”)
plt.xlabel(“Epochs”)
plt.ylabel(“Accuracy”)
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history[“classification_output_loss”]
classification_val_loss = history.history[“val_classification_output_loss”]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, “b”, label=“Training Loss”)
plt.plot(classification_val_loss, “r”, label=“Validation Loss”)
plt.title(“Classification Loss”)
plt.xlabel(“Epochs”)
plt.ylabel(“Loss”)
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history[“segmentation_output_accuracy”]
segmentation_val_accuracy = history.history[“val_segmentation_output_accuracy”]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label=“Training Accuracy”)
plt.plot(segmentation_val_accuracy, label=“Validation Accuracy”)
plt.title(“Segmentation Accuracy”)
plt.xlabel(“Epochs”)
plt.ylabel(“Accuracy”)
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history[“segmentation_output_loss”]
segmentation_val_loss = history.history[“val_segmentation_output_loss”]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, “b”, label=“Training Loss”)
plt.plot(segmentation_val_loss, “r”, label=“Validation Loss”)
plt.title(“Segmentation Loss”)
plt.xlabel(“Epochs”)
plt.ylabel(“Loss”)
plt.legend()
plt.show()
# Set image size
image_size = 224
# Define labels
labels = [“bridge”, “excess”, “good”]
# Set data folders
data_folders = [
“/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/train”,
“/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/val”,
“/content/gdrive/MyDrive/FYP_4/4 Dataset Ratio 60 20 20/jit0/f_dip/test”,]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Define train:val:test ratio for each class
class_data_counts = {
“bridge”: [40, 80, 80],
“excess”: [40, 80, 80],
“good”: [40, 80, 80],}
# Split data
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
‘’‘
print(“Number of train images:”, len(X_train))
print(“Number of train binary masks:”, len(y_train_seg))
print(“Number of validation images:”, len(X_val))
print(“Number of validation binary masks:”, len(y_val_seg))
print(“Number of test images:”, len(X_test))
print(“Number of test binary masks:”, len(y_test_seg))
’‘’
# Initialize the label encoder
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
# Count the number of images of each class in the train, validation, and test sets
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print(“Train counts: “, train_counts,” Total in train set:”, sum(train_counts.values()))
print(“Validation counts:”, val_counts, " Total in validation set:“, sum(val_counts.values()))
print(“Test counts: “, test_counts,” Total in test set:”, sum(test_counts.values()))
# Build model
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
# Train model n times
test_class_acc_list = []
for i in range(5):
print(f”\nTrain {i+1}:\n")
model = build_model(input_shape, num_classes)
batch_size = 64
epochs = 50
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
# Evaluate model on test data
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
test_class_acc_list.append(test_class_acc)
# Calculate average test classification accuracy
average_test_class_acc = sum(test_class_acc_list) / len(test_class_acc_list)
print(“Test Classification Accuracy List:”, test_class_acc_list)
print(“Average Test Classification Accuracy:”, average_test_class_acc)
"
Check my code if there is any error lead to the poor performance of the model.
|
a3c4db1ad810e5d833ffa2140162b9be
|
{
"intermediate": 0.3223091959953308,
"beginner": 0.44287610054016113,
"expert": 0.23481465876102448
}
|
42,614
|
How to make text field only one row long in css?
|
1ecec8984a45bb9dcb71b2244413e11c
|
{
"intermediate": 0.38382667303085327,
"beginner": 0.303310364484787,
"expert": 0.31286290287971497
}
|
42,615
|
import numpy as np
import sys
if len(sys.argv) > 1:
TEST_DIR = sys.argv[1]
else:
raise RuntimeError('No test directory provided')
GT_DIR = 'labeled/'
def get_mse(gt, test):
test = np.nan_to_num(test)
return np.mean(np.nanmean((gt - test)**2, axis=0))
zero_mses = []
mses = []
for i in range(0,5):
gt = np.loadtxt(GT_DIR + str(i) + '.txt')
zero_mses.append(get_mse(gt, np.zeros_like(gt)))
test = np.loadtxt(TEST_DIR + str(i) + '.txt')
mses.append(get_mse(gt, test))
percent_err_vs_all_zeros = 100*np.mean(mses)/np.mean(zero_mses)
print(f'YOUR ERROR SCORE IS {percent_err_vs_all_zeros:.2f}% (lower is better)')
|
21736dbec95020e895a348e3c63be105
|
{
"intermediate": 0.3612882196903229,
"beginner": 0.36992090940475464,
"expert": 0.2687908709049225
}
|
42,616
|
in this javascript in the if statement 'if (data && data.length > 1' add a function to allow the user to add a marker to the map, - ' let streetLatitude; // Define streetLatitude globally
let streetLongitude; // Define streetLongitude globally
let marker; // Define marker globally to make it accessible across functions
let data; // Declare data globally
function fetchStreetDetails() {
fetch('main.json')
.then((response) => response.json())
.then((jsonData) => {
data = jsonData; // Store the data globally
const entryCount = data.length; // data is already an array of objects
const streetDetails = data[0];
// Extract street details
streetLatitude = streetDetails.StreetLatitude;
streetLongitude = streetDetails.StreetLongitude;
streetHeading = streetDetails.StreetHeading;
streetPitch = streetDetails.StreetPitch;
streetPanoID = streetDetails.StreetPanoID;
const StreetPoints = streetDetails.Points;
const panorama = new google.maps.StreetViewPanorama(
document.getElementById("streetview"),
{
position: { lat: streetLatitude, lng: streetLongitude },
pano: streetPanoID,
heading: streetHeading,
pitch: streetPitch,
}
);
console.log("Street Latitude: " + streetLatitude);
console.log("Street Longitude: " + streetLongitude);
console.log("Street Heading: " + streetHeading);
console.log("Street Pitch: " + streetPitch);
console.log("Street PanoID: " + streetPanoID);
console.log("Street Location: " + StreetPoints);
// Update numberoffeeds div
const numberoffeedsElement =
document.getElementById("numberoffeeds");
numberoffeedsElement.textContent = `There are ${entryCount} questions in this game.`;
})
.catch((error) => console.error("Error fetching data: ", error));
}
fetchStreetDetails();
const startingLocation = { lat: 51.540073, lng: -0.010874 }; // London Aquatics Center coordinates
function initMap() {
const zoom = 8;
const map = new google.maps.Map(document.getElementById("map"), {
center: startingLocation,
zoom: zoom,
mapId: "DEMO_MAP_ID",
});
// Function to add marker on click
function addMarker(event) {
const clickLocation = event.latLng;
marker = new google.maps.Marker({
position: clickLocation,
map: map,
draggable: true, // Set draggable to true
});
// Remove the click listener after adding a marker
google.maps.event.removeListener(clickListener);
// Add functionality after clicking the map
createSubmitButton(clickLocation);
}
// Create a function to add the submit button
function createSubmitButton(distance, clickLocation) {
const buttonsDiv = document.getElementById("buttons");
// Check if the button already exists before creating a new one
if (!document.getElementById("submit")) {
const submitButton = document.createElement("button");
submitButton.id = "submit";
submitButton.textContent = `Submit`;
// Add event listener for the submit button (you can define the functionality here)
submitButton.addEventListener("click", () => {
console.log("Submit button clicked!");
// Create the new button
const nextButton = document.createElement("button");
nextButton.id = "nextButton";
nextButton.textContent = "Next"; // Customize button text as needed
// Add event listener for the new button (optional, if needed)
nextButton.addEventListener('click', () => {
// Handle ‘nextButton’ click here
console.log('Next button clicked!');
buttons.removeChild(nextButton);
const wheremessage ="Next location. Where is this?";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = wheremessage;
// Check if there is next entry in the data
if (data && data.length > 1) {
const nextStreetDetails = data[1]; // Get the next street details from the json data
// Extract next street details
streetLatitude = nextStreetDetails.StreetLatitude;
streetLongitude = nextStreetDetails.StreetLongitude;
streetHeading = nextStreetDetails.StreetHeading;
streetPitch = nextStreetDetails.StreetPitch;
streetPanoID = nextStreetDetails.StreetPanoID;
const StreetPoints = nextStreetDetails.Points;
const panorama = new google.maps.StreetViewPanorama(
document.getElementById('streetview'),
{
position: { lat: streetLatitude, lng: streetLongitude },
pano: streetPanoID,
heading: streetHeading,
pitch: streetPitch,
}
);
console.log("Street Latitude: " + streetLatitude);
console.log("Street Longitude: " + streetLongitude);
console.log("Street Heading: " + streetHeading);
console.log("Street Pitch: " + streetPitch);
console.log("Street PanoID: " + streetPanoID);
console.log("Street Location: " + StreetPoints);
} else {
console.log('No next entry in the data.');
const overmessage ="Game Over";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = overmessage;
}
});
// Replace the buttons
buttonsDiv.replaceChild(nextButton, submitButton);
// Get the current marker position when the button is pressed
const markerPosition = marker.getPosition();
// Calculate distance using marker position and street coordinates
const distance = calculateDistance(
markerPosition.lat(),
markerPosition.lng(),
streetLatitude,
streetLongitude
);
console.log(
"Distance from marker to street: " + distance + " meters"
);
// Add your submit logic here
const message =
"You are " + distance + " meters from the correct location.";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = message;
// Createpolyline on marker add
drawPolyline(clickLocation);
// Set the marker as non-draggable
marker.setDraggable(false);
});
buttonsDiv.appendChild(submitButton);
}
} // Add click listener to the map
const clickListener = map.addListener("click", addMarker);
function calculateDistance(lat1, lng1, lat2, lng2) {
const deltaLat = ((lat2 - lat1) * Math.PI) / 180;
const deltaLng = ((lng2 - lng1) * Math.PI) / 180;
const earthRadius = 6371e3; // meters
const a = Math.sin(deltaLat / 2) * Math.sin(deltaLat / 2);
const b =
Math.cos((lat1 * Math.PI) / 180) *
Math.cos((lat2 * Math.PI) / 180) *
Math.sin(deltaLng / 2) *
Math.sin(deltaLng / 2);
const c = 2 * Math.atan2(Math.sqrt(a + b), Math.sqrt(1 - a - b)); // Calculate distance using Pythagorean theorem
const distance = earthRadius * c; // Round the distance to nearest meter using Math.round()
const roundedDistance = Math.round(distance);
return roundedDistance;
} // Function to draw polyline between marker and street location
function drawPolyline() {
const markerPosition = marker.getPosition(); // Get the current position of the marker
const polyline = new google.maps.Polyline({
path: [
markerPosition.toJSON(),
{ lat: streetLatitude, lng: streetLongitude },
],
strokeColor: "#FF0000", // red color
strokeWeight: 2,
map: map,
});
}
}'
|
e5c54dd6aecac41bbbaa26b03c3b85aa
|
{
"intermediate": 0.37199270725250244,
"beginner": 0.4129950702190399,
"expert": 0.21501226723194122
}
|
42,617
|
create a navbar module that ,modern and fully responsive. features aria and off canvas menu
|
511bfc52b063d602a4121e2f0173774d
|
{
"intermediate": 0.41530367732048035,
"beginner": 0.1949283927679062,
"expert": 0.38976794481277466
}
|
42,618
|
in this javascript in the if statement 'if (data && data.length > 1' add a click event function to allow the user to add a marker to the map, - ' let streetLatitude; // Define streetLatitude globally
let streetLongitude; // Define streetLongitude globally
let marker; // Define marker globally to make it accessible across functions
let data; // Declare data globally
function fetchStreetDetails() {
fetch('main.json')
.then((response) => response.json())
.then((jsonData) => {
data = jsonData; // Store the data globally
const entryCount = data.length; // data is already an array of objects
const streetDetails = data[0];
// Extract street details
streetLatitude = streetDetails.StreetLatitude;
streetLongitude = streetDetails.StreetLongitude;
streetHeading = streetDetails.StreetHeading;
streetPitch = streetDetails.StreetPitch;
streetPanoID = streetDetails.StreetPanoID;
const StreetPoints = streetDetails.Points;
const panorama = new google.maps.StreetViewPanorama(
document.getElementById("streetview"),
{
position: { lat: streetLatitude, lng: streetLongitude },
pano: streetPanoID,
heading: streetHeading,
pitch: streetPitch,
}
);
console.log("Street Latitude: " + streetLatitude);
console.log("Street Longitude: " + streetLongitude);
console.log("Street Heading: " + streetHeading);
console.log("Street Pitch: " + streetPitch);
console.log("Street PanoID: " + streetPanoID);
console.log("Street Location: " + StreetPoints);
// Update numberoffeeds div
const numberoffeedsElement =
document.getElementById("numberoffeeds");
numberoffeedsElement.textContent = `There are ${entryCount} questions in this game.`;
})
.catch((error) => console.error("Error fetching data: ", error));
}
fetchStreetDetails();
const startingLocation = { lat: 51.540073, lng: -0.010874 }; // London Aquatics Center coordinates
function initMap() {
const zoom = 8;
const map = new google.maps.Map(document.getElementById("map"), {
center: startingLocation,
zoom: zoom,
mapId: "DEMO_MAP_ID",
});
// Function to add marker on click
function addMarker(event) {
const clickLocation = event.latLng;
marker = new google.maps.Marker({
position: clickLocation,
map: map,
draggable: true, // Set draggable to true
});
// Remove the click listener after adding a marker
google.maps.event.removeListener(clickListener);
// Add functionality after clicking the map
createSubmitButton(clickLocation);
}
// Create a function to add the submit button
function createSubmitButton(distance, clickLocation) {
const buttonsDiv = document.getElementById("buttons");
// Check if the button already exists before creating a new one
if (!document.getElementById("submit")) {
const submitButton = document.createElement("button");
submitButton.id = "submit";
submitButton.textContent = `Submit`;
// Add event listener for the submit button (you can define the functionality here)
submitButton.addEventListener("click", () => {
console.log("Submit button clicked!");
// Create the new button
const nextButton = document.createElement("button");
nextButton.id = "nextButton";
nextButton.textContent = "Next"; // Customize button text as needed
// Add event listener for the new button (optional, if needed)
nextButton.addEventListener('click', () => {
// Handle ‘nextButton’ click here
console.log('Next button clicked!');
buttons.removeChild(nextButton);
const wheremessage ="Next location. Where is this?";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = wheremessage;
// Check if there is next entry in the data
if (data && data.length > 1) {
const nextStreetDetails = data[1]; // Get the next street details from the json data
// Extract next street details
streetLatitude = nextStreetDetails.StreetLatitude;
streetLongitude = nextStreetDetails.StreetLongitude;
streetHeading = nextStreetDetails.StreetHeading;
streetPitch = nextStreetDetails.StreetPitch;
streetPanoID = nextStreetDetails.StreetPanoID;
const StreetPoints = nextStreetDetails.Points;
const panorama = new google.maps.StreetViewPanorama(
document.getElementById('streetview'),
{
position: { lat: streetLatitude, lng: streetLongitude },
pano: streetPanoID,
heading: streetHeading,
pitch: streetPitch,
}
);
console.log("Street Latitude: " + streetLatitude);
console.log("Street Longitude: " + streetLongitude);
console.log("Street Heading: " + streetHeading);
console.log("Street Pitch: " + streetPitch);
console.log("Street PanoID: " + streetPanoID);
console.log("Street Location: " + StreetPoints);
} else {
console.log('No next entry in the data.');
const overmessage ="Game Over";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = overmessage;
}
});
// Replace the buttons
buttonsDiv.replaceChild(nextButton, submitButton);
// Get the current marker position when the button is pressed
const markerPosition = marker.getPosition();
// Calculate distance using marker position and street coordinates
const distance = calculateDistance(
markerPosition.lat(),
markerPosition.lng(),
streetLatitude,
streetLongitude
);
console.log(
"Distance from marker to street: " + distance + " meters"
);
// Add your submit logic here
const message =
"You are " + distance + " meters from the correct location.";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = message;
// Createpolyline on marker add
drawPolyline(clickLocation);
// Set the marker as non-draggable
marker.setDraggable(false);
});
buttonsDiv.appendChild(submitButton);
}
} // Add click listener to the map
const clickListener = map.addListener("click", addMarker);
function calculateDistance(lat1, lng1, lat2, lng2) {
const deltaLat = ((lat2 - lat1) * Math.PI) / 180;
const deltaLng = ((lng2 - lng1) * Math.PI) / 180;
const earthRadius = 6371e3; // meters
const a = Math.sin(deltaLat / 2) * Math.sin(deltaLat / 2);
const b =
Math.cos((lat1 * Math.PI) / 180) *
Math.cos((lat2 * Math.PI) / 180) *
Math.sin(deltaLng / 2) *
Math.sin(deltaLng / 2);
const c = 2 * Math.atan2(Math.sqrt(a + b), Math.sqrt(1 - a - b)); // Calculate distance using Pythagorean theorem
const distance = earthRadius * c; // Round the distance to nearest meter using Math.round()
const roundedDistance = Math.round(distance);
return roundedDistance;
} // Function to draw polyline between marker and street location
function drawPolyline() {
const markerPosition = marker.getPosition(); // Get the current position of the marker
const polyline = new google.maps.Polyline({
path: [
markerPosition.toJSON(),
{ lat: streetLatitude, lng: streetLongitude },
],
strokeColor: "#FF0000", // red color
strokeWeight: 2,
map: map,
});
}
}'
|
ef626b4375b07e6fb414088c6867ba52
|
{
"intermediate": 0.4606891870498657,
"beginner": 0.34798866510391235,
"expert": 0.19132214784622192
}
|
42,619
|
In this javascript after the user adds a marker for the second street view image shown and the distance is calculated to the correct location I want to move on to a third question if there is more data in the json file, to add the next street view from the file - let streetLatitude; // Define streetLatitude globally
let streetLongitude; // Define streetLongitude globally
let marker; // Define marker globally to make it accessible across functions
let data; // Declare data globally
function fetchStreetDetails() {
fetch('main.json')
.then((response) => response.json())
.then((jsonData) => {
data = jsonData; // Store the data globally
const entryCount = data.length; // data is already an array of objects
const streetDetails = data[0];
// Extract street details
streetLatitude = streetDetails.StreetLatitude;
streetLongitude = streetDetails.StreetLongitude;
streetHeading = streetDetails.StreetHeading;
streetPitch = streetDetails.StreetPitch;
streetPanoID = streetDetails.StreetPanoID;
const StreetPoints = streetDetails.Points;
const panorama = new google.maps.StreetViewPanorama(
document.getElementById("streetview"),
{
position: { lat: streetLatitude, lng: streetLongitude },
pano: streetPanoID,
heading: streetHeading,
pitch: streetPitch,
}
);
console.log("Street Latitude: " + streetLatitude);
console.log("Street Longitude: " + streetLongitude);
console.log("Street Heading: " + streetHeading);
console.log("Street Pitch: " + streetPitch);
console.log("Street PanoID: " + streetPanoID);
console.log("Street Location: " + StreetPoints);
// Update numberoffeeds div
const numberoffeedsElement =
document.getElementById("numberoffeeds");
numberoffeedsElement.textContent = `There are ${entryCount} questions in this game.`;
})
.catch((error) => console.error("Error fetching data: ", error));
}
fetchStreetDetails();
const startingLocation = { lat: 51.540073, lng: -0.010874 }; // London Aquatics Center coordinates
function initMap() {
const zoom = 8;
const map = new google.maps.Map(document.getElementById("map"), {
center: startingLocation,
zoom: zoom,
mapId: "DEMO_MAP_ID",
});
// Function to add marker on click
function addMarker(event) {
const clickLocation = event.latLng;
marker = new google.maps.Marker({
position: clickLocation,
map: map,
draggable: true, // Set draggable to true
});
// Remove the click listener after adding a marker
google.maps.event.removeListener(clickListener);
// Add functionality after clicking the map
createSubmitButton(clickLocation);
}
// Create a function to add the submit button
function createSubmitButton(distance, clickLocation) {
const buttonsDiv = document.getElementById("buttons");
// Check if the button already exists before creating a new one
if (!document.getElementById("submit")) {
const submitButton = document.createElement("button");
submitButton.id = "submit";
submitButton.textContent = `Submit`;
// Add event listener for the submit button (you can define the functionality here)
submitButton.addEventListener("click", () => {
console.log("Submit button clicked!");
// Create the new button
const nextButton = document.createElement("button");
nextButton.id = "nextButton";
nextButton.textContent = "Next"; // Customize button text as needed
// Add event listener for the new button (optional, if needed)
nextButton.addEventListener('click', () => {
// Handle ‘nextButton’ click here
console.log('Next button clicked!');
buttons.removeChild(nextButton);
const wheremessage ="Next location. Where is this?";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = wheremessage;
// Check if there is next entry in the data
if (data && data.length > 1) {
// Add click listener to the map to allow marker placement
const clickListener = map.addListener("click", addMarker);
const nextStreetDetails = data[1]; // Get the next street details from the json data
// Extract next street details
streetLatitude = nextStreetDetails.StreetLatitude;
streetLongitude = nextStreetDetails.StreetLongitude;
streetHeading = nextStreetDetails.StreetHeading;
streetPitch = nextStreetDetails.StreetPitch;
streetPanoID = nextStreetDetails.StreetPanoID;
const StreetPoints = nextStreetDetails.Points;
const panorama = new google.maps.StreetViewPanorama(
document.getElementById('streetview'),
{
position: { lat: streetLatitude, lng: streetLongitude },
pano: streetPanoID,
heading: streetHeading,
pitch: streetPitch,
}
);
console.log("Street Latitude: " + streetLatitude);
console.log("Street Longitude: " + streetLongitude);
console.log("Street Heading: " + streetHeading);
console.log("Street Pitch: " + streetPitch);
console.log("Street PanoID: " + streetPanoID);
console.log("Street Location: " + StreetPoints);
} else {
console.log('No next entry in the data.');
const overmessage ="Game Over";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = overmessage;
}
});
// Replace the buttons
buttonsDiv.replaceChild(nextButton, submitButton);
// Get the current marker position when the button is pressed
const markerPosition = marker.getPosition();
// Calculate distance using marker position and street coordinates
const distance = calculateDistance(
markerPosition.lat(),
markerPosition.lng(),
streetLatitude,
streetLongitude
);
console.log(
"Distance from marker to street: " + distance + " meters"
);
// Add your submit logic here
const message =
"You are " + distance + " meters from the correct location.";
// Update the 'results' div using DOM manipulation
const resultsDiv = document.getElementById("results");
resultsDiv.textContent = message;
// Createpolyline on marker add
drawPolyline(clickLocation);
// Set the marker as non-draggable
marker.setDraggable(false);
});
buttonsDiv.appendChild(submitButton);
}
} // Add click listener to the map
const clickListener = map.addListener("click", addMarker);
function calculateDistance(lat1, lng1, lat2, lng2) {
const deltaLat = ((lat2 - lat1) * Math.PI) / 180;
const deltaLng = ((lng2 - lng1) * Math.PI) / 180;
const earthRadius = 6371e3; // meters
const a = Math.sin(deltaLat / 2) * Math.sin(deltaLat / 2);
const b =
Math.cos((lat1 * Math.PI) / 180) *
Math.cos((lat2 * Math.PI) / 180) *
Math.sin(deltaLng / 2) *
Math.sin(deltaLng / 2);
const c = 2 * Math.atan2(Math.sqrt(a + b), Math.sqrt(1 - a - b)); // Calculate distance using Pythagorean theorem
const distance = earthRadius * c; // Round the distance to nearest meter using Math.round()
const roundedDistance = Math.round(distance);
return roundedDistance;
} // Function to draw polyline between marker and street location
function drawPolyline() {
const markerPosition = marker.getPosition(); // Get the current position of the marker
const polyline = new google.maps.Polyline({
path: [
markerPosition.toJSON(),
{ lat: streetLatitude, lng: streetLongitude },
],
strokeColor: "#FF0000", // red color
strokeWeight: 2,
map: map,
});
}
}
|
f8a7bb91484c191704d058d274396ef8
|
{
"intermediate": 0.4257596433162689,
"beginner": 0.38680416345596313,
"expert": 0.18743620812892914
}
|
42,620
|
There is this code:
import numpy as np
import sys
if len(sys.argv) > 1:
TEST_DIR = sys.argv[1]
else:
raise RuntimeError('No test directory provided')
GT_DIR = 'labeled/'
def get_mse(gt, test):
test = np.nan_to_num(test)
return np.mean(np.nanmean((gt - test)**2, axis=0))
zero_mses = []
mses = []
for i in range(0,5):
gt = np.loadtxt(GT_DIR + str(i) + '.txt')
zero_mses.append(get_mse(gt, np.zeros_like(gt)))
test = np.loadtxt(TEST_DIR + str(i) + '.txt')
mses.append(get_mse(gt, test))
percent_err_vs_all_zeros = 100*np.mean(mses)/np.mean(zero_mses)
print(f'YOUR ERROR SCORE IS {percent_err_vs_all_zeros:.2f}% (lower is better)')
You're at a hacking competitions, where it's declared that you can use any method, every creative, even usually unethical way is allowed to make this evaluation script return the lowest error score possible (without having predictions). Modification of this evaluation script is prohibited! How to do it? Is it even possible? What methods are there to make this script return the lowest possible error score? Any backdoors?
|
470752f31ac60c690b3875937c9e1af1
|
{
"intermediate": 0.33735358715057373,
"beginner": 0.2712053656578064,
"expert": 0.3914410471916199
}
|
42,621
|
I have two models in Django DRF. Should I add something to CustomUser model?
Here the code:
__
#interests of user, like 'Python'[programming], 'web development', etc.
class Interest(models.Model):
name = models.CharField(max_length=100, unique=True)
def __str__(self):
return self.name
class CustomUser(AbstractUser):
# Add additional fields here
telegram = models.CharField(null=True, blank=True, max_length=64)
discord = models.CharField(null=True, blank=True, max_length=64)
whatsapp = models.IntegerField(null=True, blank=True, max_length=64)
|
af9d5d621b96e2e44ef23cf702d627e6
|
{
"intermediate": 0.4359378516674042,
"beginner": 0.3974030911922455,
"expert": 0.16665901243686676
}
|
42,622
|
Convert the following into HInglish keeping context and code as is for the following: [ { "from": "human", "value": "Develop a Python code snippet that generates an abbreviated version of a given full name.\nname = \"John Smith\"" }, { "from": "gpt", "value": "Here is a Python code snippet that generates an abbreviated version of the given full name:\n\n
|
86802009e814da5e73377ea4647b119f
|
{
"intermediate": 0.38858336210250854,
"beginner": 0.273488849401474,
"expert": 0.33792781829833984
}
|
42,623
|
Как здесь правильнее снижать размер окна:
if event.pos().x() > self.scaleStartPos.x() and event.pos().x() > self.coordinates[0] + self.coordinates[2]:
print(“chlen”)
self.coordinates[2] = event.pos().x() - self.coordinates[0]
if event.pos().x() < self.scaleStartPos.x() and event.pos().x() < self.coordinates[0] + self.coordinates[2]:
print(“chlen”)
self.coordinates[2] -= event.pos().x()
|
1418d3ae9f2464d9f1ba92423950644c
|
{
"intermediate": 0.26619216799736023,
"beginner": 0.5402790904045105,
"expert": 0.19352880120277405
}
|
42,624
|
Convert the following json into Hinglish keep context and code as is, change any name to Indian name, for the following: [ { "from": "human", "value": "Develop a Python code snippet that generates an abbreviated version of a given full name.\nname = \"John Smith\"" }, { "from": "gpt", "value": "Here is a Python code snippet that generates an abbreviated version of the given full name:\n\n
|
f303595d111f769b3fa2dd589b1ba7cc
|
{
"intermediate": 0.395950049161911,
"beginner": 0.3019464612007141,
"expert": 0.3021034598350525
}
|
42,625
|
how to train ai model to prdict stock market with alot statges i need the way to make it and codes to train it
|
26cf89bbb759f0be9e5e95f74b23d8d5
|
{
"intermediate": 0.16303367912769318,
"beginner": 0.08681193739175797,
"expert": 0.750154435634613
}
|
42,626
|
Hello there
|
1579ff91574a3632a440093ca2ed0616
|
{
"intermediate": 0.32595759630203247,
"beginner": 0.25228530168533325,
"expert": 0.42175713181495667
}
|
42,627
|
ONNX converter
|
22b4a2b2a3f7f96d72e7180163e5c1b3
|
{
"intermediate": 0.23634524643421173,
"beginner": 0.19195133447647095,
"expert": 0.5717034339904785
}
|
42,628
|
please consider this code first :: use aes::Aes256;
use block_modes::block_padding::Pkcs7;
use block_modes::{BlockMode, Ecb};
use sha2::{Digest, Sha256};
use std::convert::TryInto;
use std::time::{Duration, Instant};
use std::{fs, io, thread};
type Aes256Ecb = Ecb<Aes256, Pkcs7>;
//const ENCRYPTED_AES_KEY: &str = "";
// i: 1174012
// j: 1258837
// k: 1477889744044
//const ENCRYPTED_SECRET: &str = "ef5ebbe8f727c54db9755e1c2ead609a0ffc837c25b9493aeb11c68e7a14710e";
const ENCRYPTED_SECRET: &str = "ce8f36aa844ab00319bcd4f86460a10d77492c060b2c2a91615f4cd1f2d0702e76b68f1ec0f11d15704ba52c5dacc60018d5ed87368464acd030ce6230efdbff7b18cba72ccaa9455a6fe6021b908dd1";
#[derive(Debug, serde::Serialize, serde::Deserialize)]
struct State {
i: u64,
j: u64,
}
fn save_state(state: &State, filename: &str) -> io::Result<()> {
let state_json = serde_json::to_string(&state)?;
fs::write(filename, state_json)?;
Ok(())
}
fn load_state(filename: &str) -> io::Result<Option<State>> {
if let Ok(state_json) = fs::read_to_string(filename) {
let state: State = serde_json::from_str(&state_json)?;
Ok(Some(state))
} else {
Ok(None)
}
}
fn main() -> io::Result<()> {
// Provided data
// let enc_secret = hex::decode("ef5ebbe8f727c54db9755e1c2ead609a0ffc837c25b9493aeb11c68e7a14710e").unwrap();
let enc_secret = hex::decode(ENCRYPTED_SECRET).unwrap();
const PAUSE_INTERVAL: Duration = Duration::from_secs(15 * 60); // 15 minutes
const PAUSE_DURATION: Duration = Duration::from_secs(60); // 60 seconds
let mut next_pause = Instant::now();
let start_range = 1 << 20; // 2^20
let end_range = 1 << 21; // 2^21
let mut i_start = start_range;
let mut j_start = start_range;
// Load the state if it exists
let state_filename = "state.json";
if let Ok(Some(state)) = load_state(state_filename) {
i_start = state.i;
j_start = state.j;
}
'outer: for i in i_start..end_range {
for j in j_start..end_range {
let k = i * j;
// Check if product has between 40 and 42 bits (not strictly required in Rust)
if (k.leading_zeros() as u64) >= (64 - 42) && (k.leading_zeros() as u64) <= (64 - 40) {
let key_material = k.to_string();
let mut hasher = Sha256::new();
hasher.update(key_material.as_bytes());
let key = hasher.finalize();
let key_slice: &[u8; 32] = key.as_slice().try_into().unwrap();
let cipher = Aes256Ecb::new_from_slices(key_slice, Default::default()).unwrap();
if let Ok(decrypted) = cipher.decrypt_vec(&enc_secret) {
println!("Key1: {}, Key2: {} --> KEY product {} !", i, j, k);
//println!("Dec secret: {:?}", std::str::from_utf8(&decrypted).unwrap());
println!("Dec secret: {:?}", String::from_utf8_lossy(&decrypted));
if decrypted
.windows(b"HTB{".len())
.any(|window| window == b"HTB{")
{
println!("Decryption successful! AES key was found: k={}", k);
println!("Decrypted FLAG: {:?}", String::from_utf8_lossy(&decrypted));
save_state(&State { i, j }, state_filename)?;
break 'outer;
}
}
}
if next_pause.elapsed() >= PAUSE_INTERVAL {
println!("Pausing for a bit to chill the CPU…");
save_state(&State { i, j }, state_filename)?;
thread::sleep(PAUSE_DURATION);
next_pause = Instant::now() + PAUSE_INTERVAL;
}
}
j_start = start_range; // Reset j start for the next iteration of i
}
Ok(())
}
|
09736c82fc9165b3a7d182715c172b79
|
{
"intermediate": 0.38210538029670715,
"beginner": 0.3589867353439331,
"expert": 0.25890785455703735
}
|
42,629
|
1_ Translate the following legal text into colloquial Farsi 2_ Place the Persian and English text side by side in the table 3_ From the beginning to the end of the text, there should be an English sentence on the left side and a Persian sentence on the right side.
4- Using legal language for Persian translation
._ Place the Farsi and English text line by line from one point to the first point next to each other in such a way that one line of English text is followed by two empty lines, followed by the Persian translation and continue this process until the end of the text.A
abandonment n. 1. The act of giving up a legal right, particularly a right of ownership of property. Property that has been abandoned is res nullius (a thing belonging to no one), and a person taking possession of it therefore acquires a lawful title. An item is regarded as abandoned when it can be established that the original owner has discarded it and is indifferent as to what becomes of it: such an item cannot be the subject of a theft charge. However, property placed by its owner in a dustbin is not abandoned, having been placed there for the purpose of being collected as refuse. In marine insurance, abandonment is the surrender of all rights to a ship or cargo in a case of constructive total loss. The insured person must do this by giving the insurer within a reasonable time a notice of abandonment, by which he relinquishes all his rights to the ship or cargo to the insurer and can treat the loss as if it were an actual total loss. 2. In civil litigation, the relinquishing of the whole or part of the claim made in an action or of an appeal. Any claim is now considered to be abandoned once a *notice of discontinuance is served, according to rule 38 (1) of the *Civil Procedure Rules. 3. The offence of a parent or guardian. leaving a child under the age of 16 to its fate. A child is not regarded as abandoned if the parent knows and approves steps someone else is taking to look after it. The court may allow a child to be adopted without the consent of its parents if they are guilty of abandonment.
abatement n. 1. (of debts) The proportionate reduction in the payment of debts that takes place if a person's assets are insufficient to settle with his creditors in full. 2. (of legacies) The reduction or cancellation of legacies when the estate is insufficient to cover all the legacies provided for in the will or on intestacy after payment of the deceased's debts. The Administration of Estates Act 1925 provides that general legacies, unless given to satisfy a debt or for other consideration, abate in proportion to the amounts of those legacies; specific and demonstrative legacies then abate if the estate is still insufficient to pay all debts, and a demonstrative legacy also abates if the specified fund is insufficient to cover it. For example, A's estate may comprise a painting, £300 in his savings account, and £700 in other money; there are debts of £100 but his will leaves the painting to B, £500 from the savings account to C. £800 to D, and £200 to E. B will receive the painting. C's demonstrative legacy abates to £300, and after the debts are paid from the remaining £700, D's and E's general legacies abate proportionately, to £480 and £120 respectively. When annuities are given by the will, the general rule is that they are valued the date of the testator's death, then abate proportionately in accordance with that valuation, and each annuitant receives the abated sum. All these rules are subject to any contrary intention being expressed in the will. 3. (in land law) Any reduction or cancellation of money payable. For example a lease may provide for an abatement of rent in certain circumstances, e.g. if the building is destroyed by fire, and a purchaser of land may claim an abatement of the price if the seller can prove his ownership of only part of the land he contracted to sell. 4. (of nuisances) The termination, removal, or destruction of a *nuisance. A person injured by a nuisance has a right to abate it. In doing so, he must not do more damage than is necessary and, if removal of the nuisance requires entry on to the property from which it emanates, he may have to give notice to the wrongdoer. A local authority can issue an abatement notice to control statutory nuisances. 5. (of proceedings) The
|
0f1a5a67002353491f075c530365455b
|
{
"intermediate": 0.4091769754886627,
"beginner": 0.42527803778648376,
"expert": 0.1655450016260147
}
|
42,630
|
payload = {"prompt":"A female athlete, sweat and determination on her face captured post-race, shot with Canon EOS R5, hyperrealistic photography, illustrating endurance, strength, and the spirit of competition","nprompt":"","steps":20,"guidenceScale":7,"style":"CINEMATIC","width":1024,"height":1024,"alchemy":True,"pr":True,"token":tokens}
headers= {
'Content-Type': 'application/json',
"User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0"}
rr = session.post(r,headers=headers , json=payload)
{"error":"reCAPTCHA validation failed"}
|
b6378b7c3f6bc802e9ddb16893a05b49
|
{
"intermediate": 0.39562666416168213,
"beginner": 0.2352803647518158,
"expert": 0.3690929412841797
}
|
42,631
|
can i program TP-Link Tapo L510E to control the light brightness from my computer
|
a5bec537d390956972d94d29ff96f417
|
{
"intermediate": 0.4243294596672058,
"beginner": 0.15953905880451202,
"expert": 0.41613152623176575
}
|
42,632
|
I am making a C++ SDL based game engine, currently doing the AudioManager, the SoundEffect and the Music classes. I was using SDL_mixer but then I ran into a dilemma, the sound effects provided by the library (Mix_Chunk) has only WAV support, the only part with MP3 and several other formats is Mix_Music. Would it be hard to implement it in SDL_mixer or should I look to other alternatives like SDL_Mixer_X library, which offers all these formats on the get go but I have to add a new library for it
|
120696e68ba1a23dbb49afda2d707d7c
|
{
"intermediate": 0.7310341000556946,
"beginner": 0.1664038598537445,
"expert": 0.1025620847940445
}
|
42,633
|
he
|
a9d045f20a7a9889f2790962f4d49ab9
|
{
"intermediate": 0.33036425709724426,
"beginner": 0.2774354815483093,
"expert": 0.3922002911567688
}
|
42,634
|
pip install
|
781190be7fabd9b036af200a2387ff94
|
{
"intermediate": 0.46967965364456177,
"beginner": 0.2096761167049408,
"expert": 0.32064417004585266
}
|
42,635
|
hi
|
528e99d19e380e94491b8849acb1711e
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
42,636
|
Hey
|
6c105ebd9c674204dd4fc00d28892dec
|
{
"intermediate": 0.3360580503940582,
"beginner": 0.274208664894104,
"expert": 0.38973328471183777
}
|
42,637
|
dict = {'aaa':123, 'bbb': 234, 'abc':fe}
if 'abc' in dict
how to check if key exists
|
3998806dc39e3d3db1b6c03a41ac2278
|
{
"intermediate": 0.38646069169044495,
"beginner": 0.27844512462615967,
"expert": 0.3350942134857178
}
|
42,638
|
please help me run this project. i cloned the github using command prompt. now, what command do i type to start it? :""Skip to content
oobabooga
/
text-generation-webui
Type / to search
Code
Issues
243
Pull requests
30
Discussions
Actions
Projects
Wiki
Security
Insights
Owner avatar
text-generation-webui
Public
oobabooga/text-generation-webui
Go to file
t
Add file
Folders and files
Name
Latest commit
oobabooga
oobabooga
Merge pull request #5680 from oobabooga/dev
1934cb6
·
last week
History
.github
Update stalebot message
last month
characters
Improve the default character
6 months ago
css
Big picture fixes (#5565)
3 weeks ago
docker
Installer: add back INSTALL_EXTENSIONS environment variable (for docker)
last week
docs
Document StreamingLLM
last week
extensions
API: don't use settings.yaml for default values
last week
grammars
Add roleplay.gbnf grammar (#5368)
2 months ago
instruction-templates
Removed extra spaces from Mistral instruction template that were caus…
last month
js
Big picture fixes (#5565)
3 weeks ago
loras
Add dummy file
last year
models
Synthia instruction templates (#5041)
3 months ago
modules
Document StreamingLLM
last week
presets
Reduce the number of built-in presets (#5217)
2 months ago
prompts
Remove duplicate code
10 months ago
training
Training: Update llama2-chat-format.json (#5593)
2 weeks ago
.gitignore
Add .vs to .gitignore
3 months ago
CMD_FLAGS.txt
Update CMD_FLAGS.txt
6 months ago
Colab-TextGen-GPU.ipynb
Use the correct PyTorch in the Colab notebook
2 weeks ago
LICENSE
Initial commit
2 years ago
README.md
Document StreamingLLM
last week
cmd_linux.sh
Add Conda env deactivation to installer scripts
6 months ago
cmd_macos.sh
Add Conda env deactivation to installer scripts
6 months ago
cmd_windows.bat
Use call for conda deactivate in Windows installer (#4042)
6 months ago
cmd_wsl.bat
Move one-click-installers into the repository
6 months ago
convert-to-safetensors.py
Make the code more like PEP8 for readability (#862)
last year
download-model.py
Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't n…
3 weeks ago
one_click.py
Small fix for cuda 11.8 in the one-click installer
last week
requirements.txt
Add numba to requirements.txt
last week
requirements_amd.txt
Add numba to requirements.txt
last week
requirements_amd_noavx2.txt
Add numba to requirements.txt
last week
requirements_apple_intel.txt
Add numba to requirements.txt
last week
requirements_apple_silicon.txt
Add numba to requirements.txt
last week
requirements_cpu_only.txt
Add numba to requirements.txt
last week
requirements_cpu_only_noavx2.txt
Add numba to requirements.txt
last week
requirements_noavx2.txt
Add numba to requirements.txt
last week
requirements_nowheels.txt
Add numba to requirements.txt
last week
server.py
Minor logging improvements
last month
settings-template.yaml
Add prompt_lookup_num_tokens parameter (#5296)
2 months ago
setup.cfg
Various one-click installer improvements (#4994)
2 months ago
start_linux.sh
One-click installer: delete the Miniconda installer after completion
2 weeks ago
start_macos.sh
One-click installer: delete the Miniconda installer after completion
2 weeks ago
start_windows.bat
Installer: validate the checksum for the miniconda installer on Windows
last week
start_wsl.bat
Fixes by @jllllll
6 months ago
update_wizard_linux.sh
Create an update wizard (#5623)
2 weeks ago
update_wizard_macos.sh
Create an update wizard (#5623)
2 weeks ago
update_wizard_windows.bat
Move update_wizard_windows.sh to update_wizard_windows.bat (oops)
2 weeks ago
update_wizard_wsl.bat
Move update_wizard_wsl.sh to update_wizard_wsl.bat
2 weeks ago
wsl.sh
Create an update wizard (#5623)
2 weeks ago
Repository files navigation
README
AGPL-3.0 license
Text generation web UI
A Gradio web UI for Large Language Models.
Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.
Image1 Image2
Image1 Image2
Features
3 interface modes: default (two columns), notebook, and chat.
Multiple model backends: Transformers, llama.cpp (through llama-cpp-python), ExLlamaV2, AutoGPTQ, AutoAWQ, GPTQ-for-LLaMa, CTransformers, QuIP#.
Dropdown menu for quickly switching between different models.
Large number of extensions (built-in and user-contributed), including Coqui TTS for realistic voice outputs, Whisper STT for voice inputs, translation, multimodal pipelines, vector databases, Stable Diffusion integration, and a lot more. See the wiki and the extensions directory for details.
Chat with custom characters.
Precise chat templates for instruction-following models, including Llama-2-chat, Alpaca, Vicuna, Mistral.
LoRA: train new LoRAs with your own data, load/unload LoRAs on the fly for generation.
Transformers library integration: load models in 4-bit or 8-bit precision through bitsandbytes, use llama.cpp with transformers samplers (llamacpp_HF loader), CPU inference in 32-bit precision using PyTorch.
OpenAI-compatible API server with Chat and Completions endpoints -- see the examples.
How to install
Clone or download the repository.
Run the start_linux.sh, start_windows.bat, start_macos.sh, or start_wsl.bat script depending on your OS.
Select your GPU vendor when asked.
Once the installation ends, browse to http://localhost:7860/?__theme=dark.
Have fun!
To restart the web UI in the future, just run the start_ script again. This script creates an installer_files folder where it sets up the project's requirements. In case you need to reinstall the requirements, you can simply delete that folder and start the web UI again.
The script accepts command-line flags. Alternatively, you can edit the CMD_FLAGS.txt file with a text editor and add your flags there.
To get updates in the future, run update_wizard_linux.sh, update_wizard_windows.bat, update_wizard_macos.sh, or update_wizard_wsl.bat.
Setup details and information about installing manually
List of command-line flags
Documentation
https://github.com/oobabooga/text-generation-webui/wiki
Downloading models
Models should be placed in the folder text-generation-webui/models. They are usually downloaded from Hugging Face.
GGUF models are a single file and should be placed directly into models. Example:
text-generation-webui
└── models
└── llama-2-13b-chat.Q4_K_M.gguf
The remaining model types (like 16-bit transformers models and GPTQ models) are made of several files and must be placed in a subfolder. Example:
text-generation-webui
├── models
│ ├── lmsys_vicuna-33b-v1.3
│ │ ├── config.json
│ │ ├── generation_config.json
│ │ ├── pytorch_model-00001-of-00007.bin
│ │ ├── pytorch_model-00002-of-00007.bin
│ │ ├── pytorch_model-00003-of-00007.bin
│ │ ├── pytorch_model-00004-of-00007.bin
│ │ ├── pytorch_model-00005-of-00007.bin
│ │ ├── pytorch_model-00006-of-00007.bin
│ │ ├── pytorch_model-00007-of-00007.bin
│ │ ├── pytorch_model.bin.index.json
│ │ ├── special_tokens_map.json
│ │ ├── tokenizer_config.json
│ │ └── tokenizer.model
In both cases, you can use the "Model" tab of the UI to download the model from Hugging Face automatically. It is also possible to download it via the command-line with
python download-model.py organization/model
Run python download-model.py --help to see all the options.
Google Colab notebook
https://colab.research.google.com/github/oobabooga/text-generation-webui/blob/main/Colab-TextGen-GPU.ipynb
Contributing
If you would like to contribute to the project, check out the Contributing guidelines.
Community
Subreddit: https://www.reddit.com/r/oobabooga/
Discord: https://discord.gg/jwZCF2dPQN
Acknowledgment
In August 2023, Andreessen Horowitz (a16z) provided a generous grant to encourage and support my independent work on this project. I am extremely grateful for their trust and recognition.
About
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Resources
Readme
License
AGPL-3.0 license
Activity
Stars
34.3k stars
Watchers
298 watching
Forks
4.6k forks
Report repository
Releases 34
snapshot-2024-03-10
Latest
last week
+ 33 releases
Sponsor this project
ko_fi
ko-fi.com/oobabooga
Packages
No packages published
Contributors
302
@oobabooga
@jllllll
@dependabot[bot]
@mcmonkey4eva
@matatonic
@missionfloyd
@FartyPants
@Ph0rk0z
@TheLounger
@mayaeary
@xanthousm
@Brawlence
@EliasVincent
@nikita-skakun
+ 288 contributors
Deployments
2
github-pages last year
Languages
Python
90.9%
CSS
3.5%
JavaScript
2.4%
Shell
1.4%
Batchfile
0.7%
Jupyter Notebook
0.6%
Dockerfile
0.5%
Footer
© 2024 GitHub, Inc.
Footer navigation
Terms
Privacy
Security
Status
Docs
Contact
Manage cookies
Do not share my personal information
""
|
29f511d33c526616cedfb75663fec8d0
|
{
"intermediate": 0.4442797899246216,
"beginner": 0.2879008948802948,
"expert": 0.267819344997406
}
|
42,639
|
I have 2 python scripts that are constantly writing and reading one json file. how to make sure that the file doesnt get damaged when one programs is writing while the other one does. just some kind of prevention so theyt both would be wiritng at the same time, like wait if other is writing
|
23391d9278c975273893c4bae44ee904
|
{
"intermediate": 0.4465523958206177,
"beginner": 0.24439042806625366,
"expert": 0.3090571165084839
}
|
42,640
|
Please modify this code to only use a decoder transformer not both an encoder and a decoder, i want it to be state of the art for its size, code: import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader, Dataset
from collections import Counter
import json
from tqdm import tqdm
import math
import torch
import torch.optim.lr_scheduler as lr_scheduler
# Check if CUDA is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def positional_encoding(seq_len, d_model, device):
pos = torch.arange(seq_len, dtype=torch.float, device=device).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)).to(device)
pe = torch.zeros(seq_len, d_model, device=device)
pe[:, 0::2] = torch.sin(pos * div_term)
pe[:, 1::2] = torch.cos(pos * div_term)
return pe.unsqueeze(0)
# Expert Transformer Model
class TransformerExpert(nn.Module):
def __init__(self, input_size, d_model, output_size, nhead, dim_feedforward, num_encoder_layers=1):
super(TransformerExpert, self).__init__()
self.d_model = d_model
self.input_fc = nn.Linear(input_size, d_model)
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, dim_feedforward=dim_feedforward, batch_first=True)
self.transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_encoder_layers)
self.output_fc = nn.Linear(d_model, output_size)
def forward(self, x):
x = self.input_fc(x) + positional_encoding(x.size(1), self.d_model, x.device)
transformer_output = self.transformer_encoder(x)
output = self.output_fc(transformer_output) # Apply output_fc to each time step in the sequence
return output
# Gating Network
class GatingNetwork(nn.Module):
def __init__(self, input_feature_dim, num_experts, hidden_dims=None, dropout_rate=0.1):
super(GatingNetwork, self).__init__()
layers = []
last_dim = input_feature_dim
# If hidden layers are specified, create them
if hidden_dims is not None:
for hidden_dim in hidden_dims:
layers.append(nn.Linear(last_dim, hidden_dim))
layers.append(nn.ReLU()) # You could make this a hyperparameter as well
if dropout_rate > 0.0:
layers.append(nn.Dropout(dropout_rate))
last_dim = hidden_dim
# Final layer projecting to the number of experts
layers.append(nn.Linear(last_dim, num_experts))
self.fc_layers = nn.Sequential(*layers)
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Assuming x is of shape [batch_size, seq_len, d_model], aggregate across the sequence length
x = x.mean(dim=1) # Aggregate feature per instance
x = self.fc_layers(x) # Pass through gating network layers
return self.softmax(x)
# Define hyperparameters specific to the transformer
d_model = 64 #128
nhead = 2 #8
dim_feedforward = 192 #256
num_encoder_layers = 6 #2
num_experts = 1 #2
model_name = "Alpha_Talk-V04-Turbo"
# Mixture of Experts Model
class MixtureOfTransformerExperts(nn.Module):
def __init__(self, input_size, d_model, output_size, nhead, dim_feedforward, num_experts, num_encoder_layers=1):
super(MixtureOfTransformerExperts, self).__init__()
self.num_experts = num_experts
self.output_size = output_size # Store output_size as an instance variable
self.experts = nn.ModuleList([TransformerExpert(input_size, d_model, output_size, nhead, dim_feedforward, num_encoder_layers) for _ in range(num_experts)])
self.gating_network = GatingNetwork(d_model, num_experts)
def forward(self, x):
gating_scores = self.gating_network(x) # [batch_size, num_experts]
expert_outputs = [expert(x) for expert in self.experts] # List of [batch_size, seq_len, output_size]
stacked_expert_outputs = torch.stack(expert_outputs) # Shape: [num_experts, batch_size, seq_len, output_size]
# Expand gating scores
expanded_gating_scores = gating_scores.unsqueeze(2).unsqueeze(3) # Shape: [batch_size, num_experts, 1, 1]
expanded_gating_scores = expanded_gating_scores.expand(-1, -1, x.size(1), self.output_size)
expanded_gating_scores = expanded_gating_scores.transpose(0, 1) # Shape: [num_experts, batch_size, seq_len, output_size]
# Now the shape of expanded_gating_scores matches stacked_expert_outputs, and broadcasting will work
mixed_output = torch.sum(stacked_expert_outputs * expanded_gating_scores, dim=0) # Sum weighted expert outputs for each time step
return mixed_output
class QAJsonlDataset(Dataset):
def __init__(self, path, seq_len):
self.seq_len = seq_len
self.pairs = self.load_data(path)
# Flatten the pairs completely before passing them to build_vocab
self.vocab, self.idx2token = self.build_vocab([word for pair in self.pairs for sublist in pair for word in sublist])
self.tokenized_pairs = [(self.tokenize(q), self.tokenize(a)) for q, a in self.pairs]
def load_data(self, path):
pairs = []
with open(path, "r", encoding="utf-8") as f:
for line in f:
data = json.loads(line.strip())
question, answer = data.get("user", ""), data.get("content", "")
pairs.append((question.split(), answer.split()))
return pairs
def tokenize(self, words):
# Tokenize a sentence and pad if necessary
# Add <eos> token at the end if there’s room
tokens = [self.vocab.get(w, self.vocab["<unk>"]) for w in words]
if len(tokens) < self.seq_len:
tokens.append(self.vocab["<eos>"]) # Add <eos> token
tokens.extend([self.vocab["<pad>"]] * (self.seq_len - len(tokens))) # Pad the rest
else:
tokens = tokens[:self.seq_len - 1] + [self.vocab["<eos>"]]
return tokens
def build_vocab(self, words):
# Start with special tokens with fixed indices
vocab = {"<unk>": 0, "<pad>": 1, "<eos>": 2}
start_index = len(vocab)
# Use Counter to count word frequencies in the corpus
counts = Counter(words)
# Create the vocab dictionary with all words, starting indices after the special tokens
for word, _ in counts.most_common():
if word not in vocab: # Skip special tokens
vocab[word] = len(vocab)
# Create the reverse mapping from indices to words
idx2token = {idx: token for token, idx in vocab.items()}
return vocab, idx2token
def __len__(self):
return len(self.tokenized_pairs)
def __getitem__(self, idx):
tokenized_question, tokenized_answer = self.tokenized_pairs[idx]
return torch.tensor(tokenized_question, dtype=torch.long), torch.tensor(tokenized_answer, dtype=torch.long)
class MoETransformerModel(nn.Module):
def __init__(self, vocab_size, d_model, moe):
super(MoETransformerModel, self).__init__()
self.embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=d_model)
self.moe = moe
self.dropout = nn.Dropout(p=0.125) # Dropout added for regularization
def forward(self, x):
embedded = self.dropout(self.embedding(x))
return self.moe(embedded) # Remove positional encoding addition here, as it’s already added in TransformerExpert
def collate_fn(batch):
questions, answers = zip(*batch)
questions = pad_sequence(questions, batch_first=True, padding_value=0)
answers = pad_sequence(answers, batch_first=True, padding_value=0)
return questions, answers
# Set the path to your jsonl file and define sequence length
path_to_text = 'Real_talk.jsonl' # replace with the path to your jsonl file
seq_len = 32 # sequence length
# Create a dataset and data loader
dataset = QAJsonlDataset(path_to_text, seq_len)
# Save vocabulary to a text file
vocab_file = f"{model_name}_vocab.txt"
with open(vocab_file, "w", encoding="utf-8") as f:
for token, id in dataset.vocab.items():
f.write(f"{token}\t{id}\n")
# Model configuration parameters to be saved
model_config = {
"d_model": d_model,
"nhead": nhead,
"dim_feedforward": dim_feedforward,
"num_encoder_layers": num_encoder_layers,
"num_experts": num_experts,
"sequence-length":seq_len
}
# Save configuration to a JSON file
config_file = f"{model_name}_config.json"
with open(config_file, "w", encoding="utf-8") as f:
json.dump(model_config, f, indent=4)
data_loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=collate_fn, pin_memory=True)
# Training loop - added gradient clipping to avoid exploding gradients
def train_model(model, criterion, optimizer, scheduler, num_epochs, data_loader, val_data_loader=None):
model.train()
for epoch in range(num_epochs):
total_loss = 0
# Initially, fetch the learning rate from the optimizer
learning_rate = optimizer.param_groups[0]['lr']
progress_bar = tqdm(enumerate(data_loader), total=len(data_loader), desc='Training', leave=False)
for i, (inputs, targets) in progress_bar:
inputs, targets = inputs.to(device), targets.to(device)
optimizer.zero_grad()
predictions = model(inputs)
predictions = predictions.view(-1, predictions.size(-1))
targets = targets.view(-1)
loss = criterion(predictions, targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
total_loss += loss.item()
# Validation phase
val_loss = 0.0
if val_data_loader is not None:
model.eval()
with torch.no_grad():
for inputs, targets in val_data_loader:
inputs, targets = inputs.to(device), targets.to(device)
predictions = model(inputs)
predictions = predictions.view(-1, predictions.size(-1))
targets = targets.view(-1)
loss = criterion(predictions, targets)
val_loss += loss.item()
val_loss /= len(val_data_loader)
scheduler.step(val_loss) # Update learning rate based on validation loss
# Fetch the adjusted learning rate after the scheduler step
adjusted_learning_rate = optimizer.param_groups[0]['lr']
model.train() # Ensure model is back in training mode
# Print epoch, learning rate, training loss, and validation loss
print(f"Epoch {epoch+1}, Learning Rate: {adjusted_learning_rate:.6f}, Training Loss: {total_loss / len(data_loader.dataset)}, Validation Loss: {val_loss}")
def generate_text(model, dataset, seed_text, num_generate, temperature=1.0):
model.eval() # Put the model in evaluation mode
# List to store the generated tokens
generated_tokens = []
# Initial sequence (prefix) to start the generation process
input_sequence = [dataset.vocab.get(word, dataset.vocab["<pad>"]) for word in seed_text.split()] # Convert to token IDs
current_sequence = torch.tensor(input_sequence, dtype=torch.long).unsqueeze(0)
current_sequence = current_sequence.to(device)
# Generate num_generate tokens
for _ in range(num_generate):
# Forward pass through the model
with torch.no_grad():
output = model(current_sequence)
# Get probabilities, apply temperature scaling, and sample from the distribution
probabilities = F.softmax(output[:, -1, :] / temperature, dim=-1).detach()
next_token_idx = torch.multinomial(probabilities, 1).item()
# Append token to the current sequence and to the generated tokens
generated_tokens.append(next_token_idx)
current_sequence = torch.cat((current_sequence, torch.tensor([[next_token_idx]])), 1).to(device)
# Convert tokens to words
generated_text = " ".join([dataset.idx2token.get(token, "<unk>") for token in generated_tokens]) # Use .get() to provide a default value for missing keys
return generated_text
# Function to count the number of tokens in the dataset
def count_tokens_in_dataset(dataset):
return sum([len(pair[0]) + len(pair[1]) for pair in dataset.pairs])
num_tokens = count_tokens_in_dataset(dataset)
print(f"Total number of tokens in the dataset: {num_tokens}")
vocab_size = len(dataset.vocab) # Assume dataset.vocab is defined in the QAJsonlDataset class
# Instantiate resulting MoE transformer model and move it to device
moe = MixtureOfTransformerExperts(
input_size=d_model,
d_model=d_model,
output_size=vocab_size,
nhead=nhead,
dim_feedforward=dim_feedforward,
num_experts=num_experts,
num_encoder_layers=num_encoder_layers
).to(device)
# Instantiate the MoE transformer model and move it to device
moe_transformer_model = MoETransformerModel(vocab_size, d_model, moe).to(device)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
# Example usage with your model:
total_params = count_parameters(moe_transformer_model)
print(f"Total trainable parameters: {total_params}")
# Training parameters
num_epochs = 4000
learning_rate = 0.001
threshold_loss = 0.01 # Adjust as needed
# Define Loss Function and Optimizer for MoE model - using Label Smoothing for better generalization
criterion = nn.CrossEntropyLoss(label_smoothing=0.0)
optimizer = torch.optim.AdamW(moe_transformer_model.parameters(), lr=learning_rate, weight_decay=0.01) # Using AdamW with weight decay
# Create the learning rate scheduler
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=2, threshold=threshold_loss, threshold_mode='rel', verbose=True)
# Train the model with the text data
train_model(moe_transformer_model, criterion, optimizer, scheduler, num_epochs, data_loader, val_data_loader=data_loader)
# Start a loop for the interactive chat-like text generation
while True:
try:
# Get user input
seed_text = input("Enter seed text (type 'quit' to stop): ")
# Check if user wants to quit the interaction
if seed_text.lower() == "quit":
print("Exiting text generation chat.")
break
# User input is not empty and not “quit”, generate text
if seed_text.strip():
num_generate = 64 # Number of words to generate
temperature = 1.0 # Sampling temperature, higher will increase diversity
# Use the trained model to generate text
generated_text = generate_text(moe_transformer_model, dataset, seed_text, num_generate, temperature)
print("Generated Text:", generated_text)
else:
print("Seed text cannot be empty.")
except KeyboardInterrupt:
# Handle KeyboardInterrupt (Ctrl+C) to gracefully exit
print("\nExiting text generation chat.")
break
torch.save(moe_transformer_model.state_dict(), f"{model_name}.pth")
|
69c2e14c1f10028cb06846e3187009d1
|
{
"intermediate": 0.3181028962135315,
"beginner": 0.31277555227279663,
"expert": 0.3691215515136719
}
|
42,641
|
I have 2 python scripts that are constantly writing and reading one json file. how to make sure that the file doesnt get damaged when one programs is writing while the other one does. just some kind of prevention so theyt both would be wiritng at the same time, like wait if other is writing
|
8ccdf6b4fb5c082fc200592f8811f960
|
{
"intermediate": 0.4465523958206177,
"beginner": 0.24439042806625366,
"expert": 0.3090571165084839
}
|
42,642
|
hello
|
6d54f20d7cf1d01b3d4c6b9b81f41464
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
42,643
|
is there fulter to disable new wordpress image lightbox globally
|
426b28cb4b99f8cb13c7ed264dcdb6f8
|
{
"intermediate": 0.3678833246231079,
"beginner": 0.24064461886882782,
"expert": 0.39147207140922546
}
|
42,644
|
Hello
|
4935bf92a8c712e30a6bda3856065812
|
{
"intermediate": 0.3123404085636139,
"beginner": 0.2729349136352539,
"expert": 0.4147246778011322
}
|
42,645
|
I have 2 python scripts that are constantly writing and reading one json file. how to make sure that the file doesnt get damaged when one programs is writing while the other one does. just some kind of prevention so theyt both would be wiritng at the same time, like wait if other is writing. i use windows 10
|
f72275e038c283cb5b845dba66733c18
|
{
"intermediate": 0.4162687361240387,
"beginner": 0.32389676570892334,
"expert": 0.25983455777168274
}
|
42,646
|
class JsonFile():
def load(file_path):
lock = FileLock(f"{file_path}.lock")
with lock:
while True:
try:
with open (file_path, encoding='utf-8') as file:
return json.load(file)
except BaseException as ex:
Print(str(ex) + '\n', 'red')
wait(4)
def save(file_path):
lock = FileLock(f"{file_path}.lock")
with lock:
while True:
try:
with open (file_path, 'w', encoding='utf-8') as file:
return json.dump(data, file)
except BaseException as ex:
Print('\n' + str(ex) + '\n', 'red')
wait(4)
def append_to_list(file_path, element_to_append):
data = load(file_path)
why load() is not defined when i defined it?
|
cfa238fddb7f2fb7bbd875246b195f2d
|
{
"intermediate": 0.4210871756076813,
"beginner": 0.38104334473609924,
"expert": 0.1978694200515747
}
|
42,647
|
this function is for question answering task target is answer given a contexts
def preprocess_function(examples):
questions = [q.strip() for q in examples["question"]]
inputs = tokenizer(
questions,
examples["context"],
max_length=384,
truncation="only_second",
return_offsets_mapping=True,
padding="max_length",
)
offset_mapping = inputs.pop("offset_mapping")
answers = examples["answers"]
start_positions = []
end_positions = []
for i, offset in enumerate(offset_mapping):
answer = answers[i]
start_char = answer["answer_start"][0]
end_char = answer["answer_start"][0] + len(answer["text"][0])
sequence_ids = inputs.sequence_ids(i)
# Find the start and end of the context
idx = 0
while sequence_ids[idx] != 1:
idx += 1
context_start = idx
while sequence_ids[idx] == 1:
idx += 1
context_end = idx - 1
# If the answer is not fully inside the context, label it (0, 0)
if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
start_positions.append(0)
end_positions.append(0)
else:
# Otherwise it's the start and end token positions
idx = context_start
while idx <= context_end and offset[idx][0] <= start_char:
idx += 1
start_positions.append(idx - 1)
idx = context_end
while idx >= context_start and offset[idx][1] >= end_char:
idx -= 1
end_positions.append(idx + 1)
inputs["start_positions"] = start_positions
inputs["end_positions"] = end_positions
return inputs
can rewrite it for a question generation task that target is question given answer
|
0b38fbffe8a2c0bb4f5dd3f659ebc585
|
{
"intermediate": 0.27421173453330994,
"beginner": 0.4887039363384247,
"expert": 0.23708432912826538
}
|
42,648
|
from taipy.gui import Gui
from keras import models
model=models.load_model("baseline_mariya.keras")
File "C:\Users\mehrab\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\saving\saving_api.py", line 185, in load_model
raise ValueError(
ValueError: File not found: filepath=baseline_mariya.keras. Please ensure the file is an accessible `.keras` zip file.
|
4b3430f893332cc4a11fa90dd84acd3a
|
{
"intermediate": 0.44644537568092346,
"beginner": 0.26096001267433167,
"expert": 0.2925946116447449
}
|
42,649
|
i have a trained xgboost classification model that predict data is label 0,1 or2 ...
i want to fetch datas in my test_set that model predicted as label 2 and also 2 rows after them from my df and save them in a new csv file
give me the proper code to do so
|
556a6fbd69e31fd72b639b57219c5a74
|
{
"intermediate": 0.3688882887363434,
"beginner": 0.08527675271034241,
"expert": 0.5458349585533142
}
|
42,650
|
I have this code to scrape a webpage. How can it be improved?
import csv
import time
from bs4 import BeautifulSoup
from selenium import webdriver
# Set up Selenium WebDriver
options = webdriver.ChromeOptions()
options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
driver = webdriver.Chrome(options=options)
# Navigate to the desired webpage
url = 'https://www01.engineering.ualberta.ca/engg/index.php/accreditation/Reports/pca_report/html/program/EE/terms/1810-1840/retro_terms/1610-1840/ga/undefined'
driver.get(url)
# Wait for the page to load
time.sleep(10) # Adjust the time as needed
# Get the page source
html_content = driver.page_source
# Parse the HTML content using BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')
# Find all the <table> elements with class "noborder"
tables = soup.find_all('table', class_='noborder')
# Open a CSV file for writing
with open('output.csv', 'w', newline='') as file:
writer = csv.writer(file)
# Write the header row
writer.writerow(['Course Number', 'Instructor Name', 'Term Taught', 'Section Number', 'Date'])
# Iterate over each table
for table in tables:
# Find the <tr> element containing the course details
course_details_row = table.find('tr', style='border: none')
if course_details_row:
# Find the <td> elements within the course details row
course_details_cells = course_details_row.find_all('td')
if len(course_details_cells) == 2:
# Extract the course details from the <p> elements within the <td> elements
course_number = ''
instructor_name = ''
date = ''
term_taught = ''
section_number = ''
for p_element in course_details_cells[0].find_all('p'):
if 'Course Number:' in p_element.text:
course_number = p_element.find('b').text.strip()
elif 'Instructor Name:' in p_element.text:
instructor_name = p_element.find('b').text.strip()
elif 'Date:' in p_element.text:
date = p_element.find('b').text.strip()
for p_element in course_details_cells[1].find_all('p'):
if 'Term Taught:' in p_element.text:
term_taught = p_element.find('b').text.strip()
elif 'Section Number:' in p_element.text:
section_number = p_element.find('b').text.strip()
# Write the extracted data as a row in the CSV file
writer.writerow([course_number, instructor_name, term_taught, section_number, date])
# Close the browser
driver.quit()
|
d66d177142ecd305916b87714e7e1aee
|
{
"intermediate": 0.44007036089897156,
"beginner": 0.42433515191078186,
"expert": 0.1355944573879242
}
|
42,651
|
hello
|
b267302a58852a4115cccff93edf2ea3
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
42,652
|
i have read and separated my data to train and dev and test set as following :
df = pd.read_csv(
X = df.drop("Label", axis=1)
Y = df["Label"]
X_train, X_temp, y_train, y_temp = train_test_split(X, Y, train_size = 0.94, random_state = RANDOM_STATE)
# We will keep the shuffle = True since our dataset has not any time dependency.
X_dev, X_test, y_dev, y_test = train_test_split(X_temp, y_temp, test_size = 0.5, random_state = RANDOM_STATE)
is there any way so i can know index of my dev_set items in the my X data?
|
42da75aee057ef3c4581b85fe2ffdbf3
|
{
"intermediate": 0.410042405128479,
"beginner": 0.16561906039714813,
"expert": 0.42433851957321167
}
|
42,653
|
i want to rename my csv file fifth column to Volume target pair
give me the proper python code to do it
|
e47e10b4669dd8c1c24378377f287e1d
|
{
"intermediate": 0.4192247688770294,
"beginner": 0.27063727378845215,
"expert": 0.31013792753219604
}
|
42,654
|
I have a series of py strings by similar format.
`x = "One Two (Three-3)"`
"(Three-3)" is meant to be the last word from x string. I need a code that extract this last word from the string.
|
037d2cda46f9ed75820d0f891821db82
|
{
"intermediate": 0.39438703656196594,
"beginner": 0.22077877819538116,
"expert": 0.3848342001438141
}
|
42,655
|
i have a csv file that countains some columns
i want to merg following columns and make them one(named tragrt_pair_volume) :
Volume BTC, Volume BUSD, Volume USDT, Volume BNB, Volume ETH
give me the proper python code to do it
|
1fbb306da2dbd15aafdbfd367075c93c
|
{
"intermediate": 0.5321618318557739,
"beginner": 0.2736949026584625,
"expert": 0.19414322078227997
}
|
42,656
|
I am making a C++ SDL based game engine, currently installing SDL mixer X, because it supports more formats and is better than the original. I compiled it, and previously it asked for several libraries to be installed like opus, or fluidsynth, wavpack etc., but then the compiled .so doesn't show them, why?
> make -j 4
[ 2%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_cmd.c.o
[ 10%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/load_aiff.c.o
[ 10%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/load_voc.c.o
[ 10%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/mp3utils.c.o
[ 12%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_drflac.c.o
[ 15%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_flac.c.o
[ 17%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_fluidsynth.c.o
[ 20%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_gme.c.o
[ 23%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_minimp3.c.o
[ 25%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_modplug.c.o
[ 28%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_mpg123.c.o
[ 30%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_nativemidi.c.o
[ 33%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_ogg.c.o
[ 35%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_ogg_stb.c.o
[ 38%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_opus.c.o
[ 41%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_timidity.c.o
[ 43%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_wav.c.o
[ 46%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_wavpack.c.o
[ 48%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/music_xmp.c.o
[ 51%] Building C object CMakeFiles/SDL2_mixer.dir/src/effect_position.c.o
[ 53%] Building C object CMakeFiles/SDL2_mixer.dir/src/effect_stereoreverse.c.o
[ 56%] Building C object CMakeFiles/SDL2_mixer.dir/src/effects_internal.c.o
[ 58%] Building C object CMakeFiles/SDL2_mixer.dir/src/mixer.c.o
[ 61%] Building C object CMakeFiles/SDL2_mixer.dir/src/music.c.o
[ 64%] Building C object CMakeFiles/SDL2_mixer.dir/src/utils.c.o
[ 66%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/common.c.o
[ 69%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/instrum.c.o
[ 71%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/mix.c.o
[ 74%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/output.c.o
[ 76%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/playmidi.c.o
[ 79%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/readmidi.c.o
[ 82%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/resample.c.o
[ 84%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/tables.c.o
[ 87%] Building C object CMakeFiles/SDL2_mixer.dir/src/codecs/timidity/timidity.c.o
[ 89%] Linking C shared library libSDL2_mixer-2.0.so
[ 89%] Built target SDL2_mixer
[ 92%] Building C object CMakeFiles/playmus.dir/playmus.c.o
[ 94%] Building C object CMakeFiles/playwave.dir/playwave.c.o
[ 97%] Linking C executable playwave
[100%] Linking C executable playmus
[100%] Built target playwave
[100%] Built target playmus
> ldd libSDL2_mixer.so
linux-vdso.so.1 (0x00007fffe075a000)
libSDL2-2.0.so.0 => /lib64/libSDL2-2.0.so.0 (0x00007f7b007e6000)
libc.so.6 => /lib64/libc.so.6 (0x00007f7b00400000)
libasound.so.2 => /lib64/libasound.so.2 (0x00007f7b006dc000)
libm.so.6 => /lib64/libm.so.6 (0x00007f7b00319000)
/lib64/ld-linux-x86-64.so.2 (0x00007f7b00a53000)
|
31f18660656b7ee7a3815f5341ab28e5
|
{
"intermediate": 0.3788275718688965,
"beginner": 0.3899247944355011,
"expert": 0.23124763369560242
}
|
42,657
|
{
"name": "ValueError",
"message": "not enough values to unpack (expected 6, got 4)",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 X_train, X_temp, y_train, y_temp, indices_train, indices_temp = train_test_split(X, Y, train_size = 0.94, random_state = RANDOM_STATE)
3 # We will keep the shuffle = True since our dataset has not any time dependency.
5 X_dev, X_test, y_dev, y_test, indices_dev, indices_test = train_test_split(X_temp, y_temp, test_size = 0.5, random_state = RANDOM_STATE)
ValueError: not enough values to unpack (expected 6, got 4)"
}
code:
X_train, X_temp, y_train, y_temp, indices_train, indices_temp = train_test_split(X, Y, train_size = 0.94, random_state = RANDOM_STATE)
# We will keep the shuffle = True since our dataset has not any time dependency.
X_dev, X_test, y_dev, y_test, indices_dev, indices_test = train_test_split(X_temp, y_temp, test_size = 0.5, random_state = RANDOM_STATE)
|
299d505a148c26ac10e9f27f0c48048d
|
{
"intermediate": 0.3908286392688751,
"beginner": 0.3591616749763489,
"expert": 0.2500096261501312
}
|
42,658
|
class WebpageQATool(BaseTool):
name = "query_webpage"
description = "Browse a webpage and retrieve the information and answers relevant to the question. Please use bullet points to list the answers"
text_splitter: RecursiveCharacterTextSplitter = Field(default_factory=_get_text_splitter)
qa_chain: BaseCombineDocumentsChain
def _run(self, url: str, question: str) -> str:
response = requests.get(url)
page_content = response.text
print(page_content)
docs = [Document(page_content=page_content, metadata={"source": url})]
web_docs = self.text_splitter.split_documents(docs)
results = []
for i in range(0, len(web_docs), 4):
input_docs = web_docs[i:i+4]
window_result = self.qa_chain({"input_documents": input_docs, "question": question}, return_only_outputs=True)
results.append(f"Response from window {i} - {window_result}")
results_docs = [Document(page_content="\n".join(results), metadata={"source": url})]
print(results_docs)
return self.qa_chain({"input_documents": results_docs, "question": question}, return_only_outputs=True)
AttributeError: 'FieldInfo' object has no attribute 'split_documents'
Traceback:
File "/usr/local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 80, in <module>
final_answer = run_llm(input_url, your_query)
File "/home/user/app/app.py", line 60, in run_llm
result = query_website_tool._run(url, query) # Pass the URL and query as arguments
File "/home/user/app/app.py", line 44, in _run
web_docs = self.text_splitter.split_documents(docs)
|
85f4f7243a7429b4575fb959c0b91017
|
{
"intermediate": 0.44598814845085144,
"beginner": 0.3424491584300995,
"expert": 0.21156275272369385
}
|
42,659
|
Hi
|
283c80e0b00e1295dd5415f09109b74f
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
42,660
|
i trained a decision tree using xgboost
based on confusion matrix the model performance on test set is :
[[18321 1233 849]
[ 2631 5129 174]
[ 3515 532 2993]]
explain it to me please
|
6e770b25056eeb9ed3b237f25db57baa
|
{
"intermediate": 0.1468021422624588,
"beginner": 0.07315409928560257,
"expert": 0.7800437211990356
}
|
42,661
|
how to solve this with excel AND MINITab: In this assignment you will be evaluating the potential to use a small river in the Annapolis Valley as an
irrigation water source for an agricultural producer. The drainage area of the river at the location where
the farmer wants to withdraw water is 13 km2 and is characterized by primarily agricultural land use.
The river is not gauged. Soils in the watershed are primarily sandy loam and slopes range between 1 to
10%. You can assume the watershed is located close to Steam Mill Village for selecting climate data.
The farmer is planning to submit an application to Nova Scotia Environment and Climate Change to
withdrawal water directly from the river during the time period July 1 – August 30 to irrigate a
vegetable crop. They plan to irrigate 25 ha of cropland. Some assumptions you can make in conducting
your assessment:
• Water needs can be estimated as the amount of potential evapotranspiration (PET) expected to
occur during the months of July and August. The Penman Monteith Equation, using the FAO
methodology, should be used to estimate PET. Monthly average PET rates can be used to
estimate water needs during each month.
• The farmer would operate the irrigation system for a maximum of 40 hr/week (i.e. they would
only pump water from the river for a maximum of 40 hr/week)
|
7717ace292e6e4a1a99d09ffd16cff07
|
{
"intermediate": 0.4239434003829956,
"beginner": 0.3325038552284241,
"expert": 0.24355274438858032
}
|
42,662
|
Genetics - Design a Species
Objective: In this project, you are going to design your own species of gingerbread person, coffee mug,
penguin, or elf by giving it traits that follow genetic rules from this unit. The creature should have at least 5
genetic traits from the following list. You are free to create whatever traits you like (such as hair color, size,
shape, or other features). This will be hand drawn.
2 Simple traits that follow Mendel’s law of dominance (dominant hides recessive)
1 Co-dominant trait (both are dominant and fully expressed)
1 Incompletely dominant trait (blending of two traits)
1 Sex-linked trait (on the X sex chromosome)
Your final product should have the following elements:
Part 1 KEY: Create a key showing each of the 5 traits listed above within the assigned data table. Sketch
illustrations for each phenotype and list all possible genotypes associated with each trait. Partial sketches
are fine in this case, meaning if your organism has wings vs no wings, you can just draw the part of the body
where the wings attach and don’t need to draw the whole organism for each part of the key.
Example...(this is what I am looking for in the key, but you will include all five traits instead of just the two I
have listed below)
Simple Trait # 1: No Wings (ww) Wings (Ww or WW)
Co-dominant trait: Red dots (RR) Purple dots (PP) Red and Purple dots (RP)
Part 2 MODEL MALE AND FEMALE: Sketch two examples of your creature – one male and one female.
The two examples must model different phenotypes. Each sketch should also have the genotype listed for all
5 traits. You must model a variety of genotypes (homozygous dominant, homozygous recessive, and
heterozygous) for both male and female models. You will not be able to show the third phenotypes for
incomplete and codominance. That’s ok...just pick two phenotypes for those types of inheritance to model.
Phenotypes: Phenotypes:
(no wings, stripes on tail, red dots on body, blue star on head, no heart on tail) (wings, no stripes on tail, red & purple dots on body, green star on head, heart on tail)
Part 3 PUNNETT SQUARES: Create 5 one factor/monohybrid Punnett squares by crossing your male and
female for each of the traits. Be sure to give the genotypic and phenotypic ratios for each. For example, my
first Punnett square would be between my female ww and male Ww. My second would be RR and RP etc...
Female Genotype: ww Tt RR BB XrXr
Male Genotype: Ww tt RP BY XRY
NAME: ____________________________________________ BLOCK: ____________ DATE: __________
GRADING RUBRIC
Requirements
Possible
Points
Earned
Points
Part 1: The KEY Each trait shown with clearly drawn picture
Follows Genetic Rules
Each mode of inheritance shown
All genotypes are correctly labeled
25
Part 2: The MODELS Male and Female model different phenotypes
Each mode of inheritance shown
Pictures are clearly drawn
Genotypes and phenotypes agree
25
Part 3: The
PUNNETT SQUARES
All 5 Punnett squares set up correctly
Genotypic ratios correct
Phenotypic ratios correct
25
OVERALL
Creative
Attractive/Neat
Accurate
Thoughtful/Effort
Colorful
25
TOTAL 100
Organism Key
Phenotypes
Simple
Trait #1
Possible
Genotypes
Phenotypes
Simple
Trait #2
Possible
Genotypes
Phenotypes
Codominant
Trait
Possible
Genotypes
PhenotypesIncomplete
Dominant
Trait
Possible
Genotypes
Phenotypes
Sex-Linked
Trait
Possible
Genotypes
|
f7850c168b0c61acbd6ebb0dbc9a44fa
|
{
"intermediate": 0.30025163292884827,
"beginner": 0.4408763349056244,
"expert": 0.25887200236320496
}
|
42,663
|
i have trained a decision tree model using xgboost following in confusion matrix:
- Class 0:
- Precision ≈ 90%
- Recall ≈ 75%
- Class 1:
- Precision ≈ 65%
- Recall ≈ 74%
- Class 2:
- Precision ≈ 43%
- Recall ≈ 75%
[[18321 1233 849]
[ 2631 5129 174]
[ 3515 532 2993]]
i want to increase my percision on class 2
how can i do so?
here is my code:
# Initialize the XGBoost classifier
xgb = XGBClassifier(objective='multi:softprob', random_state=42)
# Define the parameter grid
param_grid = {
'n_estimators': [200],
'max_depth': [12],
'learning_rate': [0.2],
'subsample': [0.9]
# Add other parameters here
}
from sklearn.model_selection import StratifiedKFold
# Define k-fold cross-validation parameters
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
grid_search = GridSearchCV(
estimator=xgb,
param_grid=param_grid,
scoring='accuracy', # or ‘f1_macro’ if the dataset is unbalanced
n_jobs=-1,
cv=kfold,
verbose=3
)
# Perform grid search
grid_search.fit(X_train_scaled, y_train)
|
94cb2b620f68d9751457588710562c77
|
{
"intermediate": 0.42427411675453186,
"beginner": 0.0762401893734932,
"expert": 0.49948570132255554
}
|
42,664
|
I have developed a code - """import numpy as np
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import PyPDFLoader
from langchain import HuggingFacePipeline
from langchain import PromptTemplate
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory
from sentence_transformers import SentenceTransformer
import gradio as gr
from langchain_community.vectorstores import FAISS
from flashrank.Ranker import Ranker, RerankRequest
#nltk.download()
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, TextStreamer
def create_multipage_pdf(text, filename='output.pdf', font_size=12, margin=50):
# Split text into chunks that fit on one page
chunks = []
chunk_size = 50 # Number of characters per chunk
for i in range(0, len(text), chunk_size):
chunks.append(text[i:i+chunk_size])
# Create PDF
c = canvas.Canvas(filename, pagesize=letter)
width, height = letter
y = height - margin
for chunk in chunks:
# Draw text
text_object = c.beginText(margin, y)
text_object.setFont("Helvetica", font_size)
text_object.textLines(chunk)
c.drawText(text_object)
# Update y position
y -= (font_size + 4) # Adjust spacing between lines
# Check if we need to start a new page
if y <= margin:
c.showPage()
y = height - margin
c.save()
# Example usage
#create_multipage_pdf(text, filename='document.pdf')
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=False)
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config = bnb_config,device_map={"":0})
import json
import textwrap
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
def get_prompt(instruction, new_system_prompt=DEFAULT_SYSTEM_PROMPT ):
SYSTEM_PROMPT = B_SYS + new_system_prompt + E_SYS
prompt_template = B_INST + SYSTEM_PROMPT + instruction + E_INST
return prompt_template
loader = PyPDFLoader("document.pdf")
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 500,
chunk_overlap = 20,
length_function = len,
)
pages = loader.load_and_split(text_splitter)
# testing with embeddings
embeddings = HuggingFaceEmbeddings(model_name="hkunlp/instructor-xl", model_kwargs={'device': 'cuda'})
#embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2", model_kwargs={'device': 'cuda'})
print("OK1")
db = FAISS.from_documents(pages, embedding=embeddings)
#db = Chroma.from_documents(pages, embedding=embeddings)
print("OK2")
instruction = "Given the context that has been provided. \n {context}, Answer the following question - \n{question}"
# This part was remove from first line of the system_prompt for testing purposes. Be precise in your answers wherever possible.
system_prompt = """You are an honest virtual assistant. You will be given a context to answer from. In case you are sure you don't know the answer then you say that based on the context you don't know the answer. In all other instances you provide an answer to the best of your capability. Cite context when you can access them maintaining formatting. Don't say 'based on the context provided more than once."""
get_prompt(instruction, system_prompt)
"""## Setting up with LangChain"""
template = get_prompt(instruction, system_prompt)
print(template)
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
memory = ConversationBufferWindowMemory(
memory_key="chat_history", k=5,
return_messages=True
)
retriever = db.as_retriever(search_kwargs={'k': 10})
def create_pipeline(max_new_tokens=1024):
pipe = pipeline("text-generation",
model=model,
tokenizer = tokenizer,
max_new_tokens = max_new_tokens,
temperature = 0.6)
return pipe
class ChessBot:
# send re_ranked docs instead of retriever for advanced RAG
def __init__(self, memory, prompt, task:str = "text-generation", retriever = retriever):
self.memory = memory
self.prompt = prompt
self.retriever = retriever
def create_chat_bot(self, max_new_tokens = 1024):
hf_pipe = create_pipeline(max_new_tokens)
llm = HuggingFacePipeline(pipeline =hf_pipe)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=self.retriever,
memory=self.memory,
combine_docs_chain_kwargs={"prompt": self.prompt}
)
return qa
chess_bot = ChessBot(memory = memory, prompt = prompt)
bot = chess_bot.create_chat_bot()
def clear_llm_memory():
bot.memory.clear()
def update_prompt(sys_prompt):
if sys_prompt == "":
sys_prompt = system_prompt
template = get_prompt(instruction, sys_prompt)
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
bot.combine_docs_chain.llm_chain.prompt = prompt
"""1. Not using API
2. Use cases are not defined
3. Just a POC emphasis
"""
with gr.Blocks() as demo:
gr.Markdown(
"""
#
Please ask your questions!
""")
# Commenting update prompt option
#update_sys_prompt = gr.Textbox(label = "Update System Prompt")
chatbot = gr.Chatbot(label="SAMSUNG SDS", height = 300)
msg = gr.Textbox(label = "Enter your query!")
with gr.Column(scale=1):
clear = gr.ClearButton([msg, chatbot])
with gr.Column(scale=1):
clear_memory = gr.Button(value = "Clear LLM Memory")
def respond(message, chat_history):
print("Query:::", message)
bot_message = bot({"question": message})['answer']
chat_history.append((message, bot_message))
return "", chat_history
msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])
clear_memory.click(clear_llm_memory)
# Commenting update prompt option
#update_sys_prompt.submit(update_prompt, inputs=update_sys_prompt)
demo.launch(share=False, server_name='216.48.177.144', server_port=8502)
""" Now I want to modify this code to include this concept - """Long-Context Reorder
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents. In brief: When models must access relevant information in the middle of long contexts, they tend to ignore the provided documents. See: https://arxiv.org/abs/2307.03172
To avoid this issue you can re-order documents after retrieval to avoid performance degradation.
%pip install --upgrade --quiet sentence-transformers > /dev/null
from langchain.chains import LLMChain, StuffDocumentsChain
from langchain.prompts import PromptTemplate
from langchain_community.document_transformers import (
LongContextReorder,
)
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAI
# Get embeddings.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
texts = [
"Basquetball is a great sport.",
"Fly me to the moon is one of my favourite songs.",
"The Celtics are my favourite team.",
"This is a document about the Boston Celtics",
"I simply love going to the movies",
"The Boston Celtics won the game by 20 points",
"This is just a random text.",
"Elden Ring is one of the best games in the last 15 years.",
"L. Kornet is one of the best Celtics players.",
"Larry Bird was an iconic NBA player.",
]
# Create a retriever
retriever = Chroma.from_texts(texts, embedding=embeddings).as_retriever(
search_kwargs={"k": 10}
)
query = "What can you tell me about the Celtics?"
# Get relevant documents ordered by relevance score
docs = retriever.get_relevant_documents(query)
docs
[Document(page_content='This is a document about the Boston Celtics'),
Document(page_content='The Celtics are my favourite team.'),
Document(page_content='L. Kornet is one of the best Celtics players.'),
Document(page_content='The Boston Celtics won the game by 20 points'),
Document(page_content='Larry Bird was an iconic NBA player.'),
Document(page_content='Elden Ring is one of the best games in the last 15 years.'),
Document(page_content='Basquetball is a great sport.'),
Document(page_content='I simply love going to the movies'),
Document(page_content='Fly me to the moon is one of my favourite songs.'),
Document(page_content='This is just a random text.')]
# Reorder the documents:
# Less relevant document will be at the middle of the list and more
# relevant elements at beginning / end.
reordering = LongContextReorder()
reordered_docs = reordering.transform_documents(docs)
# Confirm that the 4 relevant documents are at beginning and end.
reordered_docs
[Document(page_content='The Celtics are my favourite team.'),
Document(page_content='The Boston Celtics won the game by 20 points'),
Document(page_content='Elden Ring is one of the best games in the last 15 years.'),
Document(page_content='I simply love going to the movies'),
Document(page_content='This is just a random text.'),
Document(page_content='Fly me to the moon is one of my favourite songs.'),
Document(page_content='Basquetball is a great sport.'),
Document(page_content='Larry Bird was an iconic NBA player.'),
Document(page_content='L. Kornet is one of the best Celtics players.'),
Document(page_content='This is a document about the Boston Celtics')]
# We prepare and run a custom Stuff chain with reordered docs as context.
# Override prompts
document_prompt = PromptTemplate(
input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
stuff_prompt_override = """Given this text extracts:
-----
{context}
-----
Please answer the following question:
{query}"""
prompt = PromptTemplate(
template=stuff_prompt_override, input_variables=["context", "query"]
)
# Instantiate the chain
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_prompt=document_prompt,
document_variable_name=document_variable_name,
)
chain.run(input_documents=reordered_docs, query=query)
'\n\nThe Celtics are referenced in four of the nine text extracts. They are mentioned as the favorite team of the author, the winner of a basketball game, a team with one of the best players, and a team with a specific player. Additionally, the last extract states that the document is about the Boston Celtics. This suggests that the Celtics are a basketball team, possibly from Boston, that is well-known and has had successful players and games in the past. '
Help us out by providing feedback on this documentation page:
""" to improve my code's output. How can I merge it?
|
a76b4d81f08ad44ba7991b3f82922b6b
|
{
"intermediate": 0.38751325011253357,
"beginner": 0.3611988425254822,
"expert": 0.25128793716430664
}
|
42,665
|
I have written a code to create a PDF QA Bot. The code for it is - """import numpy as np
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import PyPDFLoader
from langchain import HuggingFacePipeline
from langchain import PromptTemplate
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory
from sentence_transformers import SentenceTransformer
import gradio as gr
from langchain_community.vectorstores import FAISS
#nltk.download()
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, TextStreamer
def create_multipage_pdf(text, filename='output.pdf', font_size=12, margin=50):
# Split text into chunks that fit on one page
chunks = []
chunk_size = 50 # Number of characters per chunk
for i in range(0, len(text), chunk_size):
chunks.append(text[i:i+chunk_size])
# Create PDF
c = canvas.Canvas(filename, pagesize=letter)
width, height = letter
y = height - margin
for chunk in chunks:
# Draw text
text_object = c.beginText(margin, y)
text_object.setFont("Helvetica", font_size)
text_object.textLines(chunk)
c.drawText(text_object)
# Update y position
y -= (font_size + 4) # Adjust spacing between lines
# Check if we need to start a new page
if y <= margin:
c.showPage()
y = height - margin
c.save()
# Example usage
#create_multipage_pdf(text, filename='document.pdf')
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=False)
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config = bnb_config,device_map={"":0})
import json
import textwrap
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
def get_prompt(instruction, new_system_prompt=DEFAULT_SYSTEM_PROMPT ):
SYSTEM_PROMPT = B_SYS + new_system_prompt + E_SYS
prompt_template = B_INST + SYSTEM_PROMPT + instruction + E_INST
return prompt_template
loader = PyPDFLoader("document.pdf")
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 500,
chunk_overlap = 20,
length_function = len,
)
pages = loader.load_and_split(text_splitter)
# testing with embeddings
embeddings = HuggingFaceEmbeddings(model_name="hkunlp/instructor-xl", model_kwargs={'device': 'cuda'})
#embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2", model_kwargs={'device': 'cuda'})
print("OK1")
db = FAISS.from_documents(pages, embedding=embeddings)
#db = Chroma.from_documents(pages, embedding=embeddings)
print("OK2")
instruction = "Given the context that has been provided. \n {context}, Answer the following question - \n{question}"
# This part was remove from first line of the system_prompt for testing purposes. Be precise in your answers wherever possible.
system_prompt = """You are an honest virtual assistant. You will be given a context to answer from. In case you are sure you don't know the answer then you say that based on the context you don't know the answer. In all other instances you provide an answer to the best of your capability. Cite context when you can access them maintaining formatting. Don't say 'based on the context provided more than once."""
get_prompt(instruction, system_prompt)
"""## Setting up with LangChain"""
template = get_prompt(instruction, system_prompt)
print(template)
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
memory = ConversationBufferWindowMemory(
memory_key="chat_history", k=5,
return_messages=True
)
retriever = db.as_retriever(search_kwargs={'k': 10})
def create_pipeline(max_new_tokens=1024):
pipe = pipeline("text-generation",
model=model,
tokenizer = tokenizer,
max_new_tokens = max_new_tokens,
temperature = 0.6)
return pipe
class ChessBot:
# send re_ranked docs instead of retriever for advanced RAG
def __init__(self, memory, prompt, task:str = "text-generation", retriever = retriever):
self.memory = memory
self.prompt = prompt
self.retriever = retriever
def create_chat_bot(self, max_new_tokens = 1024):
hf_pipe = create_pipeline(max_new_tokens)
llm = HuggingFacePipeline(pipeline =hf_pipe)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=self.retriever,
memory=self.memory,
combine_docs_chain_kwargs={"prompt": self.prompt}
)
return qa
chess_bot = ChessBot(memory = memory, prompt = prompt)
bot = chess_bot.create_chat_bot()
def clear_llm_memory():
bot.memory.clear()
def update_prompt(sys_prompt):
if sys_prompt == "":
sys_prompt = system_prompt
template = get_prompt(instruction, sys_prompt)
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
bot.combine_docs_chain.llm_chain.prompt = prompt
"""1. Not using API
2. Use cases are not defined
3. Just a POC emphasis
"""
with gr.Blocks() as demo:
gr.Markdown(
"""
#
Please ask your questions!
""")
# Commenting update prompt option
#update_sys_prompt = gr.Textbox(label = "Update System Prompt")
chatbot = gr.Chatbot(label="SAMSUNG SDS", height = 300)
msg = gr.Textbox(label = "Enter your query!")
with gr.Column(scale=1):
clear = gr.ClearButton([msg, chatbot])
with gr.Column(scale=1):
clear_memory = gr.Button(value = "Clear LLM Memory")
def respond(message, chat_history):
print("Query:::", message)
bot_message = bot({"question": message})['answer']
chat_history.append((message, bot_message))
return "", chat_history
msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])
clear_memory.click(clear_llm_memory)
# Commenting update prompt option
#update_sys_prompt.submit(update_prompt, inputs=update_sys_prompt)
demo.launch(share=False, server_name='216.48.177.144', server_port=8502)
""". Now, I want to incorporate this logic to my existing code - """from langchain.document_transformers import LongContextReorder
from langchain.retrievers import ContextualCompressionRetriever, DocumentCompressorPipeline
reordering = LongContextReorder()
pipeline_compressor = DocumentCompressorPipeline(
transformers=[
reordering
]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=vector_store.as_retriever())
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=compression_retriever, memory=memory, return_source_documents=True)""". Rewrite the complete code using this logic. You can skip the gradio and import part
|
624622dd1bbb6f014467d50e929c24d2
|
{
"intermediate": 0.4135771095752716,
"beginner": 0.2737700641155243,
"expert": 0.3126528561115265
}
|
42,666
|
I have written a code to create a PDF QA Bot. The code for it is - “”“import numpy as np
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import PyPDFLoader
from langchain import HuggingFacePipeline
from langchain import PromptTemplate
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory
from sentence_transformers import SentenceTransformer
import gradio as gr
from langchain_community.vectorstores import FAISS
#nltk.download()
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, TextStreamer
def create_multipage_pdf(text, filename=‘output.pdf’, font_size=12, margin=50):
# Split text into chunks that fit on one page
chunks = []
chunk_size = 50 # Number of characters per chunk
for i in range(0, len(text), chunk_size):
chunks.append(text[i:i+chunk_size])
# Create PDF
c = canvas.Canvas(filename, pagesize=letter)
width, height = letter
y = height - margin
for chunk in chunks:
# Draw text
text_object = c.beginText(margin, y)
text_object.setFont(“Helvetica”, font_size)
text_object.textLines(chunk)
c.drawText(text_object)
# Update y position
y -= (font_size + 4) # Adjust spacing between lines
# Check if we need to start a new page
if y <= margin:
c.showPage()
y = height - margin
c.save()
# Example usage
#create_multipage_pdf(text, filename=‘document.pdf’)
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_quant_type=“nf4”,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=False)
model_id = “meta-llama/Llama-2-7b-chat-hf”
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config = bnb_config,device_map={”“:0})
import json
import textwrap
B_INST, E_INST = “[INST]”, “[/INST]”
B_SYS, E_SYS = “<<SYS>>\n”, “\n<</SYS>>\n\n”
DEFAULT_SYSTEM_PROMPT = “””<br/>You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don’t know the answer to a question, please don’t share false information.“”“
def get_prompt(instruction, new_system_prompt=DEFAULT_SYSTEM_PROMPT ):
SYSTEM_PROMPT = B_SYS + new_system_prompt + E_SYS
prompt_template = B_INST + SYSTEM_PROMPT + instruction + E_INST
return prompt_template
loader = PyPDFLoader(“document.pdf”)
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 500,
chunk_overlap = 20,
length_function = len,
)
pages = loader.load_and_split(text_splitter)
# testing with embeddings
embeddings = HuggingFaceEmbeddings(model_name=“hkunlp/instructor-xl”, model_kwargs={‘device’: ‘cuda’})
#embeddings = HuggingFaceEmbeddings(model_name=“sentence-transformers/all-mpnet-base-v2”, model_kwargs={‘device’: ‘cuda’})
print(“OK1”)
db = FAISS.from_documents(pages, embedding=embeddings)
#db = Chroma.from_documents(pages, embedding=embeddings)
print(“OK2”)
instruction = “Given the context that has been provided. \n {context}, Answer the following question - \n{question}”
# This part was remove from first line of the system_prompt for testing purposes. Be precise in your answers wherever possible.
system_prompt = “”“You are an honest virtual assistant. You will be given a context to answer from. In case you are sure you don’t know the answer then you say that based on the context you don’t know the answer. In all other instances you provide an answer to the best of your capability. Cite context when you can access them maintaining formatting. Don’t say 'based on the context provided more than once.””“
get_prompt(instruction, system_prompt)
”“”## Setting up with LangChain"“”
template = get_prompt(instruction, system_prompt)
print(template)
prompt = PromptTemplate(template=template, input_variables=[“context”, “question”])
memory = ConversationBufferWindowMemory(
memory_key=“chat_history”, k=5,
return_messages=True
)
retriever = db.as_retriever(search_kwargs={‘k’: 10})
def create_pipeline(max_new_tokens=1024):
pipe = pipeline(“text-generation”,
model=model,
tokenizer = tokenizer,
max_new_tokens = max_new_tokens,
temperature = 0.6)
return pipe
class ChessBot:
# send re_ranked docs instead of retriever for advanced RAG
def init(self, memory, prompt, task:str = “text-generation”, retriever = retriever):
self.memory = memory
self.prompt = prompt
self.retriever = retriever
def create_chat_bot(self, max_new_tokens = 1024):
hf_pipe = create_pipeline(max_new_tokens)
llm = HuggingFacePipeline(pipeline =hf_pipe)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=self.retriever,
memory=self.memory,
combine_docs_chain_kwargs={“prompt”: self.prompt}
)
return qa
chess_bot = ChessBot(memory = memory, prompt = prompt)
bot = chess_bot.create_chat_bot()
def clear_llm_memory():
bot.memory.clear()
def update_prompt(sys_prompt):
if sys_prompt == “”:
sys_prompt = system_prompt
template = get_prompt(instruction, sys_prompt)
prompt = PromptTemplate(template=template, input_variables=[“context”, “question”])
bot.combine_docs_chain.llm_chain.prompt = prompt
“”“1. Not using API
2. Use cases are not defined
3. Just a POC emphasis
”“”
with gr.Blocks() as demo:
gr.Markdown(
“”“
#
Please ask your questions!
“””)
# Commenting update prompt option
#update_sys_prompt = gr.Textbox(label = “Update System Prompt”)
chatbot = gr.Chatbot(label=“SAMSUNG SDS”, height = 300)
msg = gr.Textbox(label = “Enter your query!”)
with gr.Column(scale=1):
clear = gr.ClearButton([msg, chatbot])
with gr.Column(scale=1):
clear_memory = gr.Button(value = “Clear LLM Memory”)
def respond(message, chat_history):
print(“Query:::”, message)
bot_message = bot({“question”: message})[‘answer’]
chat_history.append((message, bot_message))
return “”, chat_history
msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])
clear_memory.click(clear_llm_memory)
# Commenting update prompt option
#update_sys_prompt.submit(update_prompt, inputs=update_sys_prompt)
demo.launch(share=False, server_name=‘216.48.177.144’, server_port=8502)
“”“. Now, I want to incorporate this logic to my existing code - “”“from langchain.document_transformers import LongContextReorder
from langchain.retrievers import ContextualCompressionRetriever, DocumentCompressorPipeline
reordering = LongContextReorder()
pipeline_compressor = DocumentCompressorPipeline(
transformers=[
reordering
]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=vector_store.as_retriever())
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=compression_retriever, memory=memory, return_source_documents=True)””". Rewrite the complete code using this logic. You can skip the gradio and import part
|
0d23b98517f68dc4f13e6145cd152cf4
|
{
"intermediate": 0.3537980318069458,
"beginner": 0.35009175539016724,
"expert": 0.2961101531982422
}
|
42,667
|
Hi tell me about your self ?
|
1edce965febaa088abdd08bc39ce57f4
|
{
"intermediate": 0.4020518660545349,
"beginner": 0.3579951822757721,
"expert": 0.2399529218673706
}
|
42,668
|
как в QPainter-е сделать resizeable QRect, который можно изменять как размер окна
|
04f7fd39c883f0a8a1987cc747ec08fc
|
{
"intermediate": 0.3409753739833832,
"beginner": 0.28955477476119995,
"expert": 0.36946985125541687
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.