row_id
int64 0
48.4k
| init_message
stringlengths 1
342k
| conversation_hash
stringlengths 32
32
| scores
dict |
|---|---|---|---|
2,307
|
In python, how do I convert a bytearray into a pytorch tensor in which every value is a bit of the initial bytearray?
|
73c9b51caa5b3d36355896eb9e21e934
|
{
"intermediate": 0.36670222878456116,
"beginner": 0.07919558137655258,
"expert": 0.5541021227836609
}
|
2,308
|
In python, how do I convert a bytearray into a pytorch tensor in which every value is a bit of the initial bytearray?
|
2a662172a50a8d42268807172fc12954
|
{
"intermediate": 0.36670222878456116,
"beginner": 0.07919558137655258,
"expert": 0.5541021227836609
}
|
2,309
|
Hi! How to make app screenshot to memory(with usage) in c++
|
350173c6893400c9f8bc98f988ac9189
|
{
"intermediate": 0.42548683285713196,
"beginner": 0.3519671857357025,
"expert": 0.22254596650600433
}
|
2,310
|
// Функция пилообразного контрастирования с порогами и заданным направлением
private BufferedImage sawtoothContrast(BufferedImage image, float t1, float t2, boolean up, boolean down) {
BufferedImage output = new BufferedImage(image.getWidth(), image.getHeight(), image.getType());
for (int i = 0; i < image.getWidth(); i++) {
for (int j = 0; j < image.getHeight(); j++) {
int pixel = image.getRGB(i, j);
int red = (pixel >> 16) & 0xff;
int green = (pixel >> 8) & 0xff;
int blue = pixel & 0xff;
int brightness = (int) (0.2126 * red + 0
|
b5e5e262ba6f0f3d7fba341c92a66767
|
{
"intermediate": 0.40318563580513,
"beginner": 0.2870970666408539,
"expert": 0.3097172975540161
}
|
2,311
|
How do I extract the bit at index "X" of a byte in python?
|
2fe15bf3324cb0341831371f288292a1
|
{
"intermediate": 0.49379587173461914,
"beginner": 0.07521586865186691,
"expert": 0.4309883415699005
}
|
2,312
|
How do I convert a list of integers to a pytorch array in python?
|
7c89265545889ade2a112f2970bfdd06
|
{
"intermediate": 0.5186200141906738,
"beginner": 0.13819974660873413,
"expert": 0.3431802988052368
}
|
2,313
|
Hi! How to make in-app screenshot to memory in c++(directx9 + imgui)
|
d195bb65a885274497008cd6291fd9bb
|
{
"intermediate": 0.5267560482025146,
"beginner": 0.22750356793403625,
"expert": 0.2457403838634491
}
|
2,314
|
Hi! How to make in-app screenshot to memory in c++(directx + imgui)
|
141f8cdf286a64c7e6ea7ef93a1a5eb5
|
{
"intermediate": 0.5251115560531616,
"beginner": 0.2203182578086853,
"expert": 0.25457021594047546
}
|
2,315
|
create simple init program in rust scripts for all basic tasks in devuan without any dbus, systemvinit, systemd, elogind, x11, xorg, xinit, systemctl, nix. the original init program is called sysx with a command line for interacting with tasks.
|
e91d27956c8248e53419935f610a09a3
|
{
"intermediate": 0.3471963703632355,
"beginner": 0.31102311611175537,
"expert": 0.34178048372268677
}
|
2,316
|
In an oncology clinical trial, how to predict additional survival time in addition to current time already observed for remaining patients who have been observed for some time and still alive based on data observed up to date? Taking into below considerations of baseline characteristics of the patients who are still alive, like age and gender, and the death hazard varies over time, so piecewise hazard by time interval should be used. I want an avarage of additional time weighted by the probalibity of a tiime. Please provide the R software code with explanation step by step.
|
291ae6c6c764efd7617aa0b2c4664c70
|
{
"intermediate": 0.42778775095939636,
"beginner": 0.24557732045650482,
"expert": 0.32663494348526
}
|
2,317
|
Traceback (most recent call last):
File "C:\Users\Flowermoon\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\compat\_optional.py", line 142, in import_optional_dependency
module = importlib.import_module(name)
File "C:\Users\Flowermoon\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'openpyxl'
During handling of the above exception, another exception occurred:
File "C:\Users\Flowermoon\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\excel\_base.py", line 1513, in __init__
self._reader = self._engines[engine](self._io, storage_options=storage_options)
File "C:\Users\Flowermoon\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\excel\_openpyxl.py", line 548, in __init__
import_optional_dependency("openpyxl")
File "C:\Users\Flowermoon\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\compat\_optional.py", line 145, in import_optional_dependency
raise ImportError(msg)
ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.
|
08aa649df70a2a57587787009916ed6e
|
{
"intermediate": 0.4185212552547455,
"beginner": 0.36191415786743164,
"expert": 0.21956457197666168
}
|
2,318
|
In an oncology clinical trial, how to predict additional survival time for remaining patients who have been observed for some time and still alive based on data observed up to date? Please note, the additional time in addition to the time that these alive patients have already survived should be estimated. Please provide example R code with simulated data, and step by step explanation.
|
98bca2249839b3e38876a78f91ead2bf
|
{
"intermediate": 0.4923754334449768,
"beginner": 0.22001798450946808,
"expert": 0.2876065969467163
}
|
2,319
|
In pytorch, how do I apply batch norm if the input tensors only has two dimensions (batch_size, values_total_size)?
|
260ef5b330bdfdc02b1e252df539f63b
|
{
"intermediate": 0.35918399691581726,
"beginner": 0.0840936005115509,
"expert": 0.5567224025726318
}
|
2,320
|
How do I apply batch norm if the input tensors only has two dimensions (batch_size, values_total_size)?
|
d03c8b4503fb0bf034f473dcc6ec53e6
|
{
"intermediate": 0.32933199405670166,
"beginner": 0.10536772012710571,
"expert": 0.5653002262115479
}
|
2,321
|
"import java.util.Scanner;
import java.util.Random;
public class GameModule {
private static final int NUM_ROUNDS = 4;
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);
System.out.println("HighSum GAME");
System.out.println("================================================================================");
String playerName = "";
String playerPassword = "";
boolean loggedIn = false;
// Add user authentication
while (!loggedIn) {
playerName = Keyboard.readString("Enter Login name > ");
playerPassword = Keyboard.readString("Enter Password > ");
if (playerName.equals("IcePeak") && playerPassword.equals("password")) {
loggedIn = true;
} else {
System.out.println("Username or Password is incorrect. Please try again.");
}
}
Player player = new Player(playerName, playerPassword, 100);
Dealer dealer = new Dealer();
Deck deck = new Deck();
boolean nextGame = true;
int betOnTable;
// Add "HighSum GAME" text after logging in
System.out.println("================================================================================");
System.out.println("HighSum GAME");
while (nextGame) {
System.out.println("================================================================================");
System.out.println(playerName + ", You have " + player.getChips() + " chips");
System.out.println("--------------------------------------------------------------------------------");
System.out.println("Game starts - Dealer shuffles deck.");
deck.shuffle();
dealer.clearCardsOnHand();
player.clearCardsOnHand();
// Switch the order of dealer and player to deal cards to the dealer first
for (int i = 0; i < 2; i++) {
dealer.addCard(deck.dealCard());
player.addCard(deck.dealCard());
}
betOnTable = 0;
int round = 1;
int dealerBet;
while (round <= NUM_ROUNDS) {
System.out.println("--------------------------------------------------------------------------------");
System.out.println("Dealer dealing cards - ROUND " + round);
System.out.println("--------------------------------------------------------------------------------");
// Show dealer’s cards first, then player’s cards
if (round == 2) {
dealer.addCard(deck.dealCard());
player.addCard(deck.dealCard());
}
dealer.showCardsOnHand();
player.showCardsOnHand();
System.out.println("Value: " + player.getTotalCardsValue());
if (round > 1) {
dealer.addCard(deck.dealCard());
player.addCard(deck.dealCard());
}
if (round == 4) {
dealer.revealHiddenCard();
}
if (round >= 2) {
if (player.getTotalCardsValue() > dealer.getTotalCardsValue()) {
boolean validChoice = false;
while (!validChoice) {
String prompt = (round == 2) ? "Do you want to follow? [Y/N]: " : "Player call, do you want to [C]all or [Q]uit?: ";
String choice = Keyboard.readString(prompt).toLowerCase();
if (choice.equals("c") || choice.equals("y")) {
validChoice = true;
int playerBet = Keyboard.readInt("Player, state your bet > ");
// Check for invalid bet conditions
if (playerBet > 100) {
System.out.println("Insufficient chips");
validChoice = false;
} else if (playerBet <= 0) {
System.out.println("Invalid bet");
validChoice = false;
} else {
betOnTable += playerBet;
player.deductChips(playerBet);
System.out.println("Player call, state bet: " + playerBet);
System.out.println(playerName + ", You are left with " + player.getChips() + " chips");
System.out.println("Bet on table: " + betOnTable);
round++;
}
} else if (choice.equals("q") || choice.equals("n")) {
validChoice = true;
nextGame = false;
round = NUM_ROUNDS + 1;
break;
}
}
} else {
dealerBet = Math.min(new Random().nextInt(11), 10);
betOnTable += dealerBet;
System.out.println("Dealer call, state bet: " + dealerBet);
System.out.println(playerName + ", You are left with " + player.getChips() + " chips");
System.out.println("Bet on table: " + betOnTable);
round++;
}
} else {
dealerBet = 10;
betOnTable += dealerBet;
player.deductChips(dealerBet);
System.out.println("Dealer call, state bet: " + dealerBet);
System.out.println(playerName + ", You are left with " + player.getChips() + " chips");
System.out.println("Bet on table: " + betOnTable);
round++;
}
}
// Determine the winner after the game ends
System.out.println("--------------------------------------------------------------------------------");
System.out.println("Game End - Dealer reveal hidden cards");
System.out.println("--------------------------------------------------------------------------------");
dealer.showCardsOnHand();
System.out.println("Value: " + dealer.getTotalCardsValue());
player.showCardsOnHand();
System.out.println("Value: " + player.getTotalCardsValue());
if (player.getTotalCardsValue() > dealer.getTotalCardsValue()) {
System.out.println(playerName + " Wins");
player.addChips(betOnTable);
} else {
System.out.println("Dealer Wins");
}
System.out.println(playerName + ", You have " + player.getChips() + " chips");
System.out.println("Dealer shuffles used cards and place behind the deck.");
System.out.println("--------------------------------------------------------------------------------");
boolean valid = false;
while (!valid) {
String input = Keyboard.readString("Next Game? (Y/N) > ").toLowerCase();
if (input.equals("y")) {
nextGame = true;
valid = true;
} else if (input.equals("n")) {
nextGame = false;
valid = true;
} else {
System.out.println("*** Please enter Y or N ");
}
}
}
}
}
import java.util.ArrayList;
import java.util.Random;
public class Deck {
private ArrayList<Card> cards;
public Deck() {
cards = new ArrayList<Card>();
String[] suits = {"Heart", "Diamond", "Spade", "Club"};
for (int i = 0; i < suits.length; i++) {
String suit = suits[i];
Card card = new Card(suit, "Ace", 1);
cards.add(card);
for (int n = 2; n <= 10; n++) {
Card aCard = new Card(suit, n + "", n);
cards.add(aCard);
}
Card jackCard = new Card(suit, "Jack", 10);
cards.add(jackCard);
Card queenCard = new Card(suit, "Queen", 10);
cards.add(queenCard);
Card kingCard = new Card(suit, "King", 10);
cards.add(kingCard);
}
}
public void shuffle() {
Random random = new Random();
for (int i = 0; i < 1000; i++) {
int indexA = random.nextInt(cards.size());
int indexB = random.nextInt(cards.size());
Card cardA = cards.get(indexA);
Card cardB = cards.get(indexB);
cards.set(indexA, cardB);
cards.set(indexB, cardA);
}
}
public Card dealCard() {
return cards.remove(0);
}
}
public class Dealer extends Player {
public Dealer() {
super("Dealer", "", 0);
}
@Override
public void addCard(Card card) {
super.addCard(card);
// Hide only the first card for the dealer
if (cardsOnHand.size() == 1) {
cardsOnHand.get(0).hide();
}
}
@Override
public void showCardsOnHand() {
System.out.println(getLoginName());
int numCards = cardsOnHand.size();
for (int i = 0; i < numCards; i++) {
System.out.print(cardsOnHand.get(i));
if (i < numCards - 1) {
System.out.print(", ");
}
}
System.out.println("\n");
}
public void revealHiddenCard() {
if (!cardsOnHand.isEmpty()) {
cardsOnHand.get(0).reveal();
}
}
}
public class Card {
private String suit;
private String name;
private int value;
private boolean hidden;
public Card(String suit, String name, int value) {
this.suit = suit;
this.name = name;
this.value = value;
this.hidden = false; // Change this line to make cards revealed by default
}
public int getValue() {
return value;
}
public void reveal() {
this.hidden = false;
}
public boolean isHidden() {
return hidden;
}
@Override
public String toString() {
return this.hidden ? "<HIDDEN CARD>" : "<" + this.suit + " " + this.name + ">";
}
public static void main(String[] args) {
Card card = new Card("Heart", "Ace", 1);
System.out.println(card);
}
public void hide() {
this.hidden = true;
}
}
public class User {
private String loginName;
private String password;
public User(String loginName, String password) {
this.loginName = loginName;
this.password = password;
}
public String getLoginName() {
return loginName;
}
public boolean checkPassword(String password) {
return this.password.equals(password);
}
}
import java.util.ArrayList;
public class Player extends User {
private int chips;
protected ArrayList<Card> cardsOnHand;
public Player(String loginName, String password, int chips) {
super(loginName, password);
this.chips = chips;
this.cardsOnHand = new ArrayList<Card>();
}
public void addCard(Card card) {
this.cardsOnHand.add(card);
}
public void showCardsOnHand() {
System.out.println(getLoginName());
int numCards = cardsOnHand.size();
for (int i = 0; i < numCards; i++) {
if (i < numCards - 1) {
System.out.print(cardsOnHand.get(i) + ", ");
} else {
System.out.print(cardsOnHand.get(i));
}
}
System.out.println("\n");
}
public int getChips() {
return this.chips;
}
public void addChips(int amount) {
this.chips += amount;
}
public void deductChips(int amount) {
if (amount < chips)
this.chips -= amount;
}
public int getTotalCardsValue() {
int totalValue = 0;
for (Card card : cardsOnHand) {
totalValue += card.getValue();
}
return totalValue;
}
public int getNumberOfCards() {
return cardsOnHand.size();
}
public void clearCardsOnHand() {
cardsOnHand.clear();
}
}
public class Keyboard {
public static String readString(String prompt) {
System.out.print(prompt);
return new java.util.Scanner(System.in).nextLine();
}
public static int readInt(String prompt) {
int input = 0;
boolean valid = false;
while (!valid) {
try {
input = Integer.parseInt(readString(prompt));
valid = true;
} catch (NumberFormatException e) {
System.out.println("*** Please enter an integer ***");
}
}
return input;
}
public static double readDouble(String prompt) {
double input = 0;
boolean valid = false;
while (!valid) {
try {
input = Double.parseDouble(readString(prompt));
valid = true;
} catch (NumberFormatException e) {
System.out.println("*** Please enter a double ***");
}
}
return input;
}
public static float readFloat(String prompt) {
float input = 0;
boolean valid = false;
while (!valid) {
try {
input = Float.parseFloat(readString(prompt));
valid = true;
} catch (NumberFormatException e) {
System.out.println("*** Please enter a float ***");
}
}
return input;
}
public static long readLong(String prompt) {
long input = 0;
boolean valid = false;
while (!valid) {
try {
input = Long.parseLong(readString(prompt));
valid = true;
} catch (NumberFormatException e) {
e.printStackTrace();
System.out.println("*** Please enter a long ***");
}
}
return input;
}
public static char readChar(String prompt) {
char input = 0;
boolean valid = false;
while (!valid) {
String temp = readString(prompt);
if (temp.length() != 1) {
System.out.println("*** Please enter a character ***");
} else {
input = temp.charAt(0);
valid = true;
}
}
return input;
}
public static boolean readBoolean(String prompt) {
boolean valid = false;
while (!valid) {
String input = readString(prompt);
if (input.equalsIgnoreCase("yes") || input.equalsIgnoreCase("y") || input.equalsIgnoreCase("true")
|| input.equalsIgnoreCase("t")) {
return true;
} else if (input.equalsIgnoreCase("no") || input.equalsIgnoreCase("n") || input.equalsIgnoreCase("false")
|| input.equalsIgnoreCase("f")) {
return false;
} else {
System.out.println("*** Please enter Yes/No or True/False ***");
}
}
return false;
}
public static java.util.Date readDate(String prompt) {
java.util.Date date = null;
boolean valid = false;
while (!valid) {
try {
String input = readString(prompt).trim();
if (input.matches("\\d\\d/\\d\\d/\\d\\d\\d\\d")) {
int day = Integer.parseInt(input.substring(0, 2));
int month = Integer.parseInt(input.substring(3, 5));
int year = Integer.parseInt(input.substring(6, 10));
java.util.Calendar cal = java.util.Calendar.getInstance();
cal.setLenient(false);
cal.set(year, month - 1, day, 0, 0, 0);
date = cal.getTime();
valid = true;
} else {
System.out.println("*** Please enter a date (DD/MM/YYYY) ***");
}
} catch (IllegalArgumentException e) {
System.out.println("*** Please enter a date (DD/MM/YYYY) ***");
}
}
return date;
}
private static String quit = "0";
public static int getUserOption(String title, String[] menu) {
displayMenu(title, menu);
int choice = Keyboard.readInt("Enter Choice --> ");
while (choice > menu.length || choice < 0) {
choice = Keyboard.readInt("Invalid Choice, Re-enter --> ");
}
return choice;
}
private static void displayMenu(String title, String[] menu) {
line(80, "=");
System.out.println(title.toUpperCase());
line(80, "-");
for (int i = 0; i < menu.length; i++) {
System.out.println("[" + (i + 1) + "] " + menu[i]);
}
System.out.println("[" + quit + "] Quit");
line(80, "-");
}
public static void line(int len, String c) {
System.out.println(String.format("%" + len + "s", " ").replaceAll(" ", c));
}
}
"
with this ide card game program, edit it so that when the player choose to quit(q), all the current chips bet will go to the dealer and the dealer wins and the game ends.
|
23de945c7921711739ad1cd8ff5fc355
|
{
"intermediate": 0.2959913909435272,
"beginner": 0.5405109524726868,
"expert": 0.16349764168262482
}
|
2,322
|
Hi! How to make in-app screenshot to memory with c++ and directx+imgui?
|
c8cad56b5f7a291176c80bba7c78ac57
|
{
"intermediate": 0.6881852746009827,
"beginner": 0.1430998295545578,
"expert": 0.16871491074562073
}
|
2,323
|
Hi! How to make in-app screenshot to memory with c++(directX 9 + imgui only)?
|
a2ff5cde2fa8e900572783ff53c56312
|
{
"intermediate": 0.5432488918304443,
"beginner": 0.21588777005672455,
"expert": 0.24086330831050873
}
|
2,324
|
in python create a machine learning model, that predcits a minesweeper game on a 5x5 board. It needs to predict 5 safe spots for a 3 mines game and 3 possible bomb locations and 4 risky locations that might be a safe spot. You for this task got data for the past 30 games played in a raw list you need to use. The list is: [6, 16, 23, 16, 22, 23, 18, 19, 22, 6, 9, 16, 2, 6, 19, 6, 15, 24, 2, 4, 10, 4, 16, 22, 2, 6, 24, 9, 15, 19, 13, 16, 19, 4, 14, 15, 4, 5, 10, 1, 18, 20, 8, 9, 23, 4, 17, 24, 5, 8, 11, 3, 9, 19, 5, 7, 8, 6, 12, 13, 7, 11, 15, 2, 14, 21, 5, 19, 22, 1, 7, 20, 6, 12, 19, 8, 12, 24, 6, 11, 14, 11, 23, 24, 2, 4, 17, 15, 23, 23] and each number is a past bomb
|
677249dd4430dcc3b374a50a00a99e8b
|
{
"intermediate": 0.18829679489135742,
"beginner": 0.12096652388572693,
"expert": 0.690736711025238
}
|
2,325
|
write me a gui in python which can show the live location of both the drones on map and information of both the drones which comes in mavproxy
from pymavlink import mavutil
import math
import time
class Drone:
def __init__(self, system_id, connection):
self.system_id = system_id
self.connection = connection
def set_mode(self, mode):
self.connection.mav.set_mode_send(
self.system_id,
mavutil.mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED,
mode
)
def arm(self, arm=True):
self.connection.mav.command_long_send(self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM, 0, int(arm), 0, 0, 0, 0, 0,
0)
def takeoff(self, altitude):
self.connection.mav.command_long_send(self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_CMD_NAV_TAKEOFF, 0, 0, 0, 0, 0, 0, 0, altitude)
def send_waypoint(self, wp, next_wp, speed=3):
vx, vy, vz = calculate_velocity_components(wp, next_wp, speed)
self.connection.mav.send(mavutil.mavlink.MAVLink_set_position_target_global_int_message(
10,
self.system_id,
self.connection.target_component,
mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
int(0b110111111000),
int(wp[0] * 10 ** 7),
int(wp[1] * 10 ** 7),
wp[2],
vx, vy, vz, 0, 0, 0, 0 # Set vx, vy, and vz from calculated components
))
def get_position(self):
self.connection.mav.request_data_stream_send(
self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_DATA_STREAM_POSITION, 1, 1)
while True:
msg = self.connection.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
if msg.get_srcSystem() == self.system_id:
return (msg.lat / 10 ** 7, msg.lon / 10 ** 7, msg.alt / 10 ** 3)
class PIDController:
def __init__(self, kp, ki, kd, limit):
self.kp = kp
self.ki = ki
self.kd = kd
self.limit = limit
self.prev_error = 0
self.integral = 0
def update(self, error, dt):
derivative = (error - self.prev_error) / dt
self.integral += error * dt
self.integral = max(min(self.integral, self.limit), -self.limit) # Clamp the integral term
output = self.kp * error + self.ki * self.integral + self.kd * derivative
self.prev_error = error
return output
distance = 5 # Distance in meters
angle = 60 # Angle in degrees
kp = 0.1
ki = 0.01
kd = 0.05
pid_limit = 0.0001
pid_lat = PIDController(kp, ki, kd, pid_limit)
pid_lon = PIDController(kp, ki, kd, pid_limit)
def calculate_follower_coordinates(wp, distance, angle):
earth_radius = 6371000.0 # in meters
latitude_change = (180 * distance * math.cos(math.radians(angle))) / (math.pi * earth_radius)
longitude_change = (180 * distance * math.sin(math.radians(angle))) / (
math.pi * earth_radius * math.cos(math.radians(wp[0])))
new_latitude = wp[0] + latitude_change
new_longitude = wp[1] + longitude_change
return (new_latitude, new_longitude, wp[2])
def calculate_velocity_components(current_wp, next_wp, speed):
dx = next_wp[0] - current_wp[0]
dy = next_wp[1] - current_wp[1]
dz = next_wp[2] - current_wp[2]
dx2 = dx ** 2
dy2 = dy ** 2
dz2 = dz ** 2
distance = math.sqrt(dx2 + dy2 + dz2)
vx = (dx / distance) * speed
vy = (dy / distance) * speed
vz = (dz / distance) * speed
return vx, vy, vz
waypoints = [
(28.5861474, 77.3421320, 10),
(28.5859040, 77.3420736, 10)
]
the_connection = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master_drone = Drone(3, the_connection)
follower_drone = Drone(2, the_connection)
# Set mode to Guided and arm both drones
for drone in [master_drone, follower_drone]:
drone.set_mode(4)
drone.arm()
drone.takeoff(10)
class AbortTimeoutError(Exception):
pass
def abort():
print("Type 'abort' to return to Launch and disarm motors.")
start_time = time.monotonic()
while time.monotonic() - start_time < 7:
user_input = input("Time left: {} seconds \n".format(int(7 - (time.monotonic() - start_time))))
if user_input.lower() == "abort":
print("Returning to Launch and disarming motors…")
for drone in [master_drone, follower_drone]:
drone.set_mode(6) # RTL mode
drone.arm(False) # Disarm motors
return True
raise AbortTimeoutError("Timeout: User didn't input 'abort' in given time frame.")
mode = the_connection.mode_mapping()[the_connection.flightmode]
print(f"{mode}")
time_start = time.time()
while mode == 4:
if abort():
exit()
if time.time() - time_start >= 1:
for index, master_wp in enumerate(waypoints[:-1]):
next_wp = waypoints[index + 1]
master_drone.send_waypoint(master_wp, next_wp, speed=3)
follower_position = master_drone.get_position()
if follower_position is None:
break
follower_wp = calculate_follower_coordinates(follower_position, distance, angle)
dt = time.time() - time_start
pid_lat_output = pid_lat.update(follower_wp[0] - follower_position[0], dt)
pid_lon_output = pid_lon.update(follower_wp[1] - follower_position[1], dt)
adjusted_follower_wp = (
follower_wp[0] + pid_lat_output, follower_wp[1] + pid_lon_output, follower_wp[2])
follower_drone.send_waypoint(adjusted_follower_wp, next_wp, speed=3)
else:
mode_mapping = the_connection.mode_mapping()
current_mode = the_connection.flightmode
mode = mode_mapping[current_mode]
time_start = time.time()
time.sleep(0.1)
continue
break
for drone in [master_drone, follower_drone]:
drone.set_mode(6)
drone.arm(False)
the_connection.close()
|
7f6766f9df49e7e196aa4a74d78a3d19
|
{
"intermediate": 0.40718626976013184,
"beginner": 0.4292229115962982,
"expert": 0.16359078884124756
}
|
2,326
|
Search all sheet names. If the value in a cell in column D of sheet name Providers is the same as a sheet name, then take the last value in column J that is not blank of the matching sheet name and copy to column A in the sheet named Providers
|
5c5ee215119a0e5626376316f7eec0b4
|
{
"intermediate": 0.3589026629924774,
"beginner": 0.1935136765241623,
"expert": 0.4475836753845215
}
|
2,327
|
in the below code, implemnet grey work optimization along with pid control for swarming
from pymavlink import mavutil
import math
import time
class Drone:
def __init__(self, system_id, connection):
self.system_id = system_id
self.connection = connection
def set_mode(self, mode):
self.connection.mav.set_mode_send(
self.system_id,
mavutil.mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED,
mode
)
def arm(self, arm=True):
self.connection.mav.command_long_send(self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM, 0, int(arm), 0, 0, 0, 0, 0,
0)
def takeoff(self, altitude):
self.connection.mav.command_long_send(self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_CMD_NAV_TAKEOFF, 0, 0, 0, 0, 0, 0, 0, altitude)
def send_waypoint(self, wp, next_wp, speed=3):
vx, vy, vz = calculate_velocity_components(wp, next_wp, speed)
self.connection.mav.send(mavutil.mavlink.MAVLink_set_position_target_global_int_message(
10,
self.system_id,
self.connection.target_component,
mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
int(0b110111111000),
int(wp[0] * 10 ** 7),
int(wp[1] * 10 ** 7),
wp[2],
vx, vy, vz, 0, 0, 0, 0 # Set vx, vy, and vz from calculated components
))
def get_position(self):
self.connection.mav.request_data_stream_send(
self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_DATA_STREAM_POSITION, 1, 1)
while True:
msg = self.connection.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
if msg.get_srcSystem() == self.system_id:
return (msg.lat / 10 ** 7, msg.lon / 10 ** 7, msg.alt / 10 ** 3)
class PIDController:
def __init__(self, kp, ki, kd, limit):
self.kp = kp
self.ki = ki
self.kd = kd
self.limit = limit
self.prev_error = 0
self.integral = 0
def update(self, error, dt):
derivative = (error - self.prev_error) / dt
self.integral += error * dt
self.integral = max(min(self.integral, self.limit), -self.limit) # Clamp the integral term
output = self.kp * error + self.ki * self.integral + self.kd * derivative
self.prev_error = error
return output
distance = 5 # Distance in meters
angle = 60 # Angle in degrees
kp = 0.1
ki = 0.01
kd = 0.05
pid_limit = 0.0001
pid_lat = PIDController(kp, ki, kd, pid_limit)
pid_lon = PIDController(kp, ki, kd, pid_limit)
def calculate_follower_coordinates(wp, distance, angle):
earth_radius = 6371000.0 # in meters
latitude_change = (180 * distance * math.cos(math.radians(angle))) / (math.pi * earth_radius)
longitude_change = (180 * distance * math.sin(math.radians(angle))) / (
math.pi * earth_radius * math.cos(math.radians(wp[0])))
new_latitude = wp[0] + latitude_change
new_longitude = wp[1] + longitude_change
return (new_latitude, new_longitude, wp[2])
def calculate_velocity_components(current_wp, next_wp, speed):
dx = next_wp[0] - current_wp[0]
dy = next_wp[1] - current_wp[1]
dz = next_wp[2] - current_wp[2]
dx2 = dx ** 2
dy2 = dy ** 2
dz2 = dz ** 2
distance = math.sqrt(dx2 + dy2 + dz2)
vx = (dx / distance) * speed
vy = (dy / distance) * speed
vz = (dz / distance) * speed
return vx, vy, vz
waypoints = [
(28.5861474, 77.3421320, 10),
(28.5859040, 77.3420736, 10)
]
the_connection = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master_drone = Drone(3, the_connection)
follower_drone = Drone(2, the_connection)
# Set mode to Guided and arm both drones
for drone in [master_drone, follower_drone]:
drone.set_mode(4)
drone.arm()
drone.takeoff(10)
class AbortTimeoutError(Exception):
pass
def abort():
print("Type 'abort' to return to Launch and disarm motors.")
start_time = time.monotonic()
while time.monotonic() - start_time < 7:
user_input = input("Time left: {} seconds \n".format(int(7 - (time.monotonic() - start_time))))
if user_input.lower() == "abort":
print("Returning to Launch and disarming motors…")
for drone in [master_drone, follower_drone]:
drone.set_mode(6) # RTL mode
drone.arm(False) # Disarm motors
return True
print("7 seconds have passed. Proceeding with waypoint task...")
return False
mode = the_connection.mode_mapping()[the_connection.flightmode]
print(f"the drone is currectly at mode {mode}")
time_start = time.time()
while mode == 4:
if abort():
exit()
if time.time() - time_start >= 1:
for index, master_wp in enumerate(waypoints[:-1]):
next_wp = waypoints[index + 1]
master_drone.send_waypoint(master_wp, next_wp, speed=3)
follower_position = master_drone.get_position()
if follower_position is None:
break
follower_wp = calculate_follower_coordinates(follower_position, distance, angle)
dt = time.time() - time_start
pid_lat_output = pid_lat.update(follower_wp[0] - follower_position[0], dt)
pid_lon_output = pid_lon.update(follower_wp[1] - follower_position[1], dt)
adjusted_follower_wp = (
follower_wp[0] + pid_lat_output, follower_wp[1] + pid_lon_output, follower_wp[2])
follower_drone.send_waypoint(adjusted_follower_wp, next_wp, speed=3)
else:
mode_mapping = the_connection.mode_mapping()
current_mode = the_connection.flightmode
mode = mode_mapping[current_mode]
time_start = time.time()
time.sleep(0.1)
continue
break
for drone in [master_drone, follower_drone]:
drone.set_mode(6)
drone.arm(False)
the_connection.close()
|
a93e98e00e5b0fdd08965c85bcecb3ba
|
{
"intermediate": 0.3152102530002594,
"beginner": 0.41137853264808655,
"expert": 0.27341118454933167
}
|
2,328
|
My code is extremely slow. Can you use NumPy’s vectorized operations to perform the image warping and merging more efficiently, reducing the runtime significantly: def warp_perspective(img, H, target_shape):
h, w = target_shape
warped_image = np.zeros((h, w), dtype=np.uint8)
for i in range(int(h)):
for j in range(int(w)):
destination_point = np.array([j, i, 1])
source_point = np.dot(np.linalg.inv(H), destination_point)
source_x, source_y, w = source_point
source_x = int(source_x / w)
source_y = int(source_y / w)
# Add an additional condition here to make sure indices are within the valid range
if 0 <= source_x < img.shape[1] - 1 and 0 <= source_y < img.shape[0] - 1:
warped_image[i][j] = img[source_y][source_x]
return warped_image
|
2cc2c00821fcad459b86b7557aef3798
|
{
"intermediate": 0.45342615246772766,
"beginner": 0.23402021825313568,
"expert": 0.31255364418029785
}
|
2,329
|
Если в коде присутствуют ошибки, то их исправить. И вывести готовый рабочий код, который сработает:
Готовый код для основного файла main.py:
import sys
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QIcon
from PyQt5.QtWidgets import (
QApplication,
QMainWindow,
QWidget,
QVBoxLayout,
QHBoxLayout,
QLineEdit,
QPushButton,
QProgressBar,
QFileDialog,
QMessageBox,
QInputDialog,
QLabel,
)
from PyQt5 import QtCore, QtGui, QtWidgets
from pytube import YouTube
from internet import load_page
class MainWindow(QMainWindow):
def init(self):
super().init()
self.setWindowTitle(“My Browser”)
self.setWindowIcon(QIcon(“icon.png”))
self.create_ui()
self.history = []
self.future = []
self.show()
def create_ui(self):
widget = QWidget()
# Строка поиска
self.search_bar = QLineEdit()
self.search_bar.returnPressed.connect(self.search)
# Кнопки назад и вперед
self.back_button = QPushButton(“<”)
self.back_button.clicked.connect(self.back)
self.forward_button = QPushButton(“>”)
self.forward_button.clicked.connect(self.forward)
# Кнопка скачивания видео
self.download_button = QPushButton(“Download”)
self.download_button.clicked.connect(self.download_video)
# Кнопка настроек папки загрузки
self.download_folder_button = QPushButton(“Download Folder”)
self.download_folder_button.clicked.connect(self.choose_download_folder)
# Прогресс-бар загрузки файлов
self.progress_bar = QProgressBar()
# WebView для отображения страницы
self.web_view = QTextBrowser()
self.web_view.setOpenExternalLinks(True)
# Если пользователь кликнул по ссылке, то открываем ее в новом окне
self.web_view.anchorClicked.connect(self.open_link_in_browser)
# Располагаем элементы по горизонтали
horizontal_layout = QHBoxLayout()
horizontal_layout.addWidget(self.back_button)
horizontal_layout.addWidget(self.forward_button)
horizontal_layout.addWidget(self.search_bar)
horizontal_layout.addWidget(self.download_button)
horizontal_layout.addWidget(self.download_folder_button)
# Располагаем элементы по вертикали
vertical_layout = QVBoxLayout()
vertical_layout.addLayout(horizontal_layout)
vertical_layout.addWidget(self.web_view)
vertical_layout.addWidget(self.progress_bar)
widget.setLayout(vertical_layout)
self.setCentralWidget(widget)
def search(self):
url = self.search_bar.text()
if not url.startswith(“http”):
url = “http://” + url
try:
response = requests.get(url)
soup = BeautifulSoup(response.content, “html.parser”)
self.current_url = url
self.web_view.setHtml(str(soup))
self.history.append(url)
except:
QMessageBox.warning(self, “Error”, “Failed to load page.”)
def back(self):
if len(self.history) > 1:
self.future.insert(0, self.current_url)
self.current_url = self.history[-2]
self.history.pop()
self.load_page(self.current_url)
def forward(self):
if self.future:
self.history.append(self.current_url)
self.current_url = self.future.pop(0)
self.load_page(self.current_url)
def download_video(self):
url, ok = QInputDialog.getText(self, “Download Video”, “Enter YouTube URL:”)
if ok:
try:
youtube = YouTube(url)
video_title = youtube.title
video_streams = youtube.streams.filter(type=“video”, progressive=True)
options = []
for i in range(len(video_streams)):
options.append(f"{i+1}. {video_streams[i].resolution}“)
selected_option, ok = QInputDialog.getItem(self, “Select Quality”, “Select Video Quality:”, options, 0, False)
if ok:
video = video_streams[int(selected_option[0])-1]
download_folder = self.download_folder or os.path.expanduser(”~“) # Если пользователь не выбрал папку, то скачиваем в домашнюю папку
video.download(download_folder, filename=video_title)
QMessageBox.information(self, “Success”, f"Video ‘{video_title}’ has been downloaded successfully.”)
else:
QMessageBox.warning(self, “Error”, “Failed to select video quality.”)
except:
QMessageBox.warning(self, “Error”, “Failed to download video.”)
def choose_download_folder(self):
self.download_folder = QFileDialog.getExistingDirectory(self, “Select Directory”)
def load_page(self, url):
response = requests.get(url)
soup = BeautifulSoup(response.content, “html.parser”)
self.web_view.setHtml(str(soup))
self.search_bar.setText(url)
def open_link_in_browser(self, url):
QtGui.QDesktopServices.openUrl(QtCore.QUrl(url))
def load_url(self, url):
self.current_url = url
if url.startswith(“http”):
self.load_page(url)
else:
search_url = f"https://www.google.com/search?q={url}"
self.load_page(search_url)
if name == “main”:
app = QApplication(sys.argv)
window = MainWindow()
sys.exit(app.exec_())
Код для файла internet.py:
import requests
from bs4 import BeautifulSoup
def load_page(url):
response = requests.get(url)
soup = BeautifulSoup(response.content, “html.parser”)
return soup
|
badc4d16c17daa57ca55b632d8829517
|
{
"intermediate": 0.34349557757377625,
"beginner": 0.5294960737228394,
"expert": 0.1270083338022232
}
|
2,330
|
переделай код, чтобы я сам решал когда надо менять пороги. // Гистограмма для sawtooth4Image()
private void sawtooth4Image() {
boolean valid = true; // Убираем переменную valid из условия, чтобы ее значение сохранялось между вызовами метода
while (valid) { // Используем цикл, чтобы можно было изменять изображение несколько раз
if (image != null) {
// Запрос значения порогов в цикле до тех пор, пока пользователь не введет правильные значения
float t1 = userInput("Введите значение порога 1: ", 0, 1);
float t2 = userInput("Введите значение порога 2: ", 0, 1);
if (t2 > t1) {
BufferedImage sawtoothContrastImage1 = sawtoothContrastUp(image, t1, t2);
BufferedImage sawtoothContrastImage2 = sawtoothContrastDown(image, t1, t2);
BufferedImage combinedImage = combineImages(sawtoothContrastImage1, sawtoothContrastImage2, sawtoothContrastImage1);
setImageIcon(combinedImage);
saveImage(combinedImage);
int[] originalHistogram = ImageReader.getImageHistogram(image);
int[] sawtoothHistogram = ImageReader.getImageHistogram(combinedImage);
JFrame histogramFrame = new JFrame("Гистограммы");
histogramFrame.setLayout(new GridLayout(1, 2, 10, 10));
JPanel originalHistogramPanel = ImageReader.createHistogramPanel(originalHistogram, "Гистограмма исходного изображения");
JPanel sawtoothHistogramPanel = ImageReader.createHistogramPanel(sawtoothHistogram, "Гистограмма после контрастирования");
histogramFrame.add(originalHistogramPanel);
histogramFrame.add(sawtoothHistogramPanel);
histogramFrame.pack();
histogramFrame.setLocationRelativeTo(null);
histogramFrame.setVisible(true);
image = combinedImage; // Сохраняем измененное изображение в переменную image
// Запрос новых значений порогов
} else {
JOptionPane.showMessageDialog(null, "Значение порога 2 должно быть больше порога 1");
}
} else {
valid = false;
}
}
}
|
4ab7dbc384f9d219a900e1f68191c847
|
{
"intermediate": 0.27279070019721985,
"beginner": 0.4306299388408661,
"expert": 0.29657939076423645
}
|
2,331
|
can you help me fix error in warp_perspective(img, H, target_shape)
147 warped_image = np.zeros((w, h, 3), dtype=np.uint8) # Swap w and h
148 for i in range(3):
--> 149 warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
150
151 return warped_image
IndexError: index 752 is out of bounds for axis 0 with size 653, The full code is here: "import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
A = []
for i in range(4):
u1, v1 = src_pts[i]
u2, v2 = dst_pts[i]
A.append([-u1, -v1, -1, 0, 0, 0, u1 * u2, v1 * u2, u2])
A.append([0, 0, 0, -u1, -v1, -1, u1 * v2, v1 * v2, v2])
A = np.array(A)
_, _, VT = np.linalg.svd(A)
h = VT[-1]
H = h.reshape(3, 3)
return H
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=500, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_.txt")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_x, target_y = np.meshgrid(np.arange(h), np.arange(w)) # Swap w and h
target_coordinates = np.stack([target_y.ravel(), target_x.ravel(), np.ones(target_y.size)]) # Swap target_x and target_y
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[0] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[1] - 1))
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(int)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[0] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[1] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((w, h, 3), dtype=np.uint8) # Swap w and h
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
return warped_image
def merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices):
ref_img = sub_images[0]
for i in range(1, len(sub_images)):
if i in skipped_indices:
print(f"Image {i} was skipped due to homography computation failure.")
continue
img_i = sub_images[i]
H_i = homographies[i - 1 - sum(idx < i for idx in skipped_indices)]
min_x, min_y = 0, 0
max_x, max_y = max(img_i.shape[1], ref_img.shape[1]), max(img_i.shape[0], ref_img.shape[0])
trans_dst = [-min_x, -min_y]
H_trans = np.array([[1.0, 0, trans_dst[0]], [0, 1.0, trans_dst[1]], [0, 0, 1.0]])
result = warp_perspective(ref_img, H_trans, (max_y - min_y, max_x - min_x)) # Swap max_x and max_y
img_warped = warp_perspective(img_i, H_i, result.shape[:2]) # Use result.shape[:2] for target_shape
mask_result = (result > 0)
mask_img_warped = (img_warped > 0)
combined_mask = np.zeros_like(mask_result)
min_height, min_width = min(result.shape[0], img_warped.shape[0]), min(result.shape[1], img_warped.shape[1])
combined_mask[:min_height, :min_width] = mask_result[:min_height, :min_width] & mask_img_warped[:min_height, :min_width]
nonzero_result = np.nonzero(combined_mask)
result[nonzero_result] = img_warped[nonzero_result]
ref_img = result
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = feature_matching(descriptors)
filtered_matches = filter_matches(matches)
homographies, skipped_indices = find_homography(keypoints, filtered_matches)
panorama = merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices)
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[0], keypoints[0], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image 0 - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image 0 to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)"
|
4b042effa4e4aa86c4b6faba90f963b6
|
{
"intermediate": 0.46023333072662354,
"beginner": 0.35040900111198425,
"expert": 0.18935760855674744
}
|
2,332
|
How can I get a random object from a Firebase Database?
|
498f9811f7534be54dbcfc71de266a19
|
{
"intermediate": 0.5245216488838196,
"beginner": 0.1686328649520874,
"expert": 0.306845486164093
}
|
2,333
|
Hello! Can you please optimize my C++ code for running time?
#include<bits/stdc++.h>
using namespace std;
char s[100005];
char mass[100005];
bool otvet(size_t nach, size_t sz) {
if(nach == sz - 1) {
return true;
}
size_t z1=0, z2=0;
bool flag1 = false, flag2 = false;
if(nach != 0 && mass[nach-1] != '.') {
for(z1 = nach-2; z1 < sz && mass[z1] != 0; --z1) {
if(mass[z1] == '.') {
flag1 = true;
mass[nach] = '1';
break;
}
}
}
if(mass[nach+1] != '.') {
for(z2 = nach+2; z2 < sz && mass[z2] != 0; ++z2) {
if(mass[z2] == '.') {
flag2 = true;
mass[nach] = '1';
break;
}
}
}
bool boolik = false;
if(flag1 && flag2) {
boolik = otvet(z1, sz) || otvet(z2, sz);
}
else if(flag1) {
boolik = otvet(z1, sz);
}
else if(flag2) {
boolik = otvet(z2, sz);
}
mass[nach] = '.';
return boolik;
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);cout.tie(0);
int n, nach = 0;
cin >> n >> s;
bool flashok = false;
size_t i = 0, sz = 0;
while(i < n) {
if(s[i] == 'X') {
nach = sz;
mass[sz++] = '.';
flashok = false;
}
else if(s[i] == '.') {
mass[sz++] = s[i];
flashok = false;
}
else if(!flashok) {
mass[sz++] = '7';
flashok = true;
}
i++;
}
if(nach == sz - 1) {
cout << "YES" << endl;
}
else if(mass[sz - 1] == '.') {
bool BEPPIEV = otvet(nach, sz);
if(BEPPIEV) {
cout << "YES" << endl;
}
else {
cout << "NO" << endl;
}
}
else {
cout << "NO" << endl;
}
return 0;
}
|
589adf9946ad182b8a649aa2912483c4
|
{
"intermediate": 0.33410462737083435,
"beginner": 0.44679588079452515,
"expert": 0.21909944713115692
}
|
2,334
|
Переделай код пилообразное контрастирование с тремя пилами. // Гистограмма для sawtooth4Image()
private void sawtooth4Image() {
boolean valid = true; // Убираем переменную valid из условия, чтобы ее значение сохранялось между вызовами метода
while (valid) { // Используем цикл, чтобы можно было изменять изображение несколько раз
if (image != null) {
// Запрос значения порогов в цикле до тех пор, пока пользователь не введет правильные значения
float t1 = userInput("Введите значение порога 1: ", 0, 1);
float t2 = userInput("Введите значение порога 2: ", 0, 1);
if (t2 > t1) {
BufferedImage sawtoothContrastImage1 = sawtoothContrastUp(image, t1, t2);
BufferedImage sawtoothContrastImage2 = sawtoothContrastDown(image, t1, t2);
BufferedImage combinedImage = combineImages(sawtoothContrastImage1, sawtoothContrastImage2, sawtoothContrastImage1);
setImageIcon(combinedImage);
saveImage(combinedImage);
int[] originalHistogram = ImageReader.getImageHistogram(image);
int[] sawtoothHistogram = ImageReader.getImageHistogram(combinedImage);
JFrame histogramFrame = new JFrame(“Гистограммы”);
histogramFrame.setLayout(new GridLayout(1, 2, 10, 10));
JPanel originalHistogramPanel = ImageReader.createHistogramPanel(originalHistogram, “Гистограмма исходного изображения”);
JPanel sawtoothHistogramPanel = ImageReader.createHistogramPanel(sawtoothHistogram, “Гистограмма после контрастирования”);
histogramFrame.add(originalHistogramPanel);
histogramFrame.add(sawtoothHistogramPanel);
histogramFrame.pack();
histogramFrame.setLocationRelativeTo(null);
histogramFrame.setVisible(true);
image = combinedImage; // Сохраняем измененное изображение в переменную image
// Запрос новых значений порогов
} else {
JOptionPane.showMessageDialog(null, “Значение порога 2 должно быть больше порога 1”);
}
} else {
valid = false;
}
}
}
|
8ade9b187406daa33d2a5defe8ef098c
|
{
"intermediate": 0.27299660444259644,
"beginner": 0.4861254096031189,
"expert": 0.24087798595428467
}
|
2,335
|
How I can request AWS Cloudformer for resource template creation using Python and boto3 wuthout AWS CDK?
|
5f66c2a9ca51b30adc434ff1da08e3ad
|
{
"intermediate": 0.6468290686607361,
"beginner": 0.17835648357868195,
"expert": 0.1748144030570984
}
|
2,336
|
create simple init program in rust scripts for all basic tasks in devuan without any dbus, systemvinit, systemd, elogind, x11, xorg, xinit, systemctl, nix. the original init program is called sysx with a command line for interacting with tasks.
write rust script tasks for sysx init for devuan without any server applications, nix, sysvinit, systemd, dbus, x11, xorg, elogind, xinit, systemctl. tasks include, kernel, kernel firewall, selinux, apparmor, networking, procps, kmod, udev, console-setup, haveged, keyboard-setup.sh, mouse, rc.local, rcS, rmnologin, logging, shutdown, logout, booting
|
f3844f6929bfd059076366978d8a33b3
|
{
"intermediate": 0.31671684980392456,
"beginner": 0.4250684976577759,
"expert": 0.25821468234062195
}
|
2,337
|
Implement a simple pull-based cache invalidation scheme by using CSIM in C. In the scheme, a node
generates a query and sends a request message to a server if the queried data item is not found in its local
cache. If the node generates a query that can be answered by its local cache, it sends a check message to
the server for validity. The server replies to the node with a requested data item or a confirm message. A
set of data items is updated in the server. This scheme ensures a strong consistency that guarantees to
access the most updated data items.
2. Refer to the following technical paper and its simulation models (e.g., server, client, caching, data query
and update, etc.):
• G. Cao, "A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments," ACM
Mobicom, pp. 200-209, 2000
For the sake of simplicity, use the following simulation parameters. Run your simulation several times by
changing the simulation parameters (e.g., T_query and T_update). You can make an assumption and define
your own parameter(s). Then draw result graphs in terms of (i) cache hit ratio and (ii) query delay by
changing mean query generation time and mean update arrival time, similar to Figs. 3 and 4 in the paper.
Note that remove a cold state before collecting any results.
Parameter Values
Number of clients 5
Database (DB) size 500 items
Data item size 1024 bytes
Bandwidth 10000 bits/s
Cache size 100 items
Cache replacement policy LRU
Mean query generate time (T_query) 5, 10, 25, 50, 100 seconds
Hot data items 1 to 50
Cold data items remainder of DB
Hot data access prob. 0.8
Mean update arrival time (T_update) 5, 10, 20, 100, 200 seconds
Hot data update prob. 0.33
for reference this is the csim.h file
#ifndef __CSIM_H__
#define __CSIM_H__
/*
* CSim - Discrete system simulation library
*
* Copyright (C) 2002 Vikas G P <<PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
/*
* The CSim library provides an embeddable frame-work for discrete system
* simulation, using the paradigm adopted by SIMSCRIPT. It is written in C,
* as opposed to most other such libraries, which are written in C++.
*/
#ifndef NULL
# define NULL 0
#endif
#include "queue.h"
/*
* Each entity of a system is represented by a CsEntity. They are created with
* the function cs_entity_new(...)
*/
typedef struct _CsEntity{
CsQueue *attribute_queue;
} CsEntity;
/*
* Each event is represented by a CsEvent.
*/
typedef struct _CsEvent{
int (*event_routine)(void *user_data);
int count;
} CsEvent;
/* Each set is represented by a CsSet */
typedef struct _CsSet{
CsQueue *queue;
int discipline;
int count;
} CsSet;
/* Data type for clock time */
typedef long simtime_t;
/* Macro for event routines */
#define CS_EVENT_ROUTINE(func) int (*func)(void *)
/* Synonym for the current time */
#define NOW cs_get_clock_time()
/* Checks if set is empty */
#define cs_set_is_empty(set) cs_set_count_elements((set))
/* Set operations */
#define FIFO 1
#define LIFO 2
/* functions prefixed with cs_ are exported to the outside */
/*
* Initializes the library.
*/
int cs_init(void);
/*
* Starts the simulation, executing the events until it runs out of events.
*/
int cs_start_simulation(void);
/* Entity functions */
/*
* Creates a new entity.
*/
CsEntity *cs_entity_new(void);
/* Destroys an entity */
void cs_entity_destroy(CsEntity *entity);
/*
* Adds an attribute to an entity's list of attributes.
*/
int cs_attribute_add(CsEntity *entity, char *name);
/* Set's an attribute to the specified value */
int cs_attribute_set(CsEntity *entity, char *name, int new_value);
/* Returns the value of the specified attribute */
int cs_attribute_get(CsEntity *entity, char *name);
/*
* This is a faster way to specify all the attributes of an entity.
*/
void cs_attribute_specify(CsEntity *entity, char **attr_names);
/* To create temp. entities */
CsEntity *cs_entity_new_with_attributes(char **attr_names);
/* Event functions */
/*
* Creates a new event.
*/
CsEvent *cs_event_new(int (*event_routine)(void *user_data));
/* Get's the event's count */
int cs_event_get_count(CsEvent *event);
/*
* Schedules an event to executed at the specified time.
*/
int cs_event_schedule(CsEvent *event, simtime_t time, void *user_data);
/*
* Same as the cs_event_schedule, but the time is specified as an offset from
* the current clock time.
*/
int cs_event_schedule_relative(CsEvent *event, simtime_t offset, void *user_data);
/* Set functions */
/* Creates a new set */
CsSet *cs_set_new(int discipline);
/* "Files", i.e., inserts an entity into a set */
void cs_set_file(CsSet *set, CsEntity *entity);
/* Returns the top-most member of a set, and removes it from the set */
CsEntity *cs_set_remove(CsSet *set);
/* Same as above, but doesn't remove it */
CsEntity *cs_set_remove_nodestroy(CsSet *set);
/* Return number of elements in a set */
int cs_set_count_elements(CsSet *set);
/* Destroys a set */
void cs_set_destroy(CsSet *set);
/* Misc. functions */
/* Get's the current time */
simtime_t cs_get_clock_time(void);
#endif /* __CSIM_H__ */
|
b8d50351fd49f8be926b3ad3c4848ff8
|
{
"intermediate": 0.3431195020675659,
"beginner": 0.4028301239013672,
"expert": 0.2540503144264221
}
|
2,338
|
The method addChangeListener(ChangeListener) in the type JSlider is not applicable for the arguments (new ChangeListener(){}). fIx this. // Гистограмма для sawtooth4Image()
private void sawtooth4Image() {
if (image != null) {
// Запрос значения порогов в цикле до тех пор, пока пользователь не введет правильные значения
float t1 = 0, t2 = 0;
boolean valid = false;
while (!valid) {
t1 = userInput("Введите значение порога 1: ", 0, 1);
t2 = userInput("Введите значение порога 2: ", 0, 1);
if (t2 > t1) {
valid = true;
} else {
JOptionPane.showMessageDialog(null,“Значение порога 2 должно быть больше порога 1”);
}
}
BufferedImage sawtoothContrastImage1 = sawtoothContrastUp(image, t1, t2);
setImageIcon(sawtoothContrastImage1);
saveImage(sawtoothContrastImage1);
BufferedImage sawtoothContrastImage2 = sawtoothContrastDown(image, t1, t2);
setImageIcon(sawtoothContrastImage2);
saveImage(sawtoothContrastImage2);
BufferedImage combinedImage = combineImages(sawtoothContrastImage1, sawtoothContrastImage2, sawtoothContrastImage1);
int[] originalHistogram = ImageReader.getImageHistogram(image);
int[] sawtoothHistogram = ImageReader.getImageHistogram(combinedImage);
JFrame histogramFrame = new JFrame(“Гистограммы””);
histogramFrame.setLayout(new GridLayout(1, 2, 10, 10));
JPanel originalHistogramPanel = ImageReader.createHistogramPanel(originalHistogram,“Гистограмма исходного изображения”);
JPanel sawtoothHistogramPanel = ImageReader.createHistogramPanel(sawtoothHistogram, “Гистограмма после контрастирования”);
// Обработка изменения пороговых значений
JSlider t1Slider = new JSlider(0, 100, (int)(t1 * 100));
JSlider t2Slider = new JSlider(0, 100, (int)(t2 * 100));
t1Slider.setMajorTickSpacing(10);
t1Slider.setPaintTicks(true);
t2Slider.setMajorTickSpacing(10);
t2Slider.setPaintTicks(true);
t1Slider.addChangeListener(new ChangeListener() {
@Override
public void stateChanged(ChangeEvent e) {
t1 = (float)t1Slider.getValue() / 100;
sawtoothContrastImage1 = sawtoothContrastUp(image, t1, t2);
combinedImage = combineImages(sawtoothContrastImage1, sawtoothContrastImage2, sawtoothContrastImage1);
setImageIcon(combinedImage);
saveImage(combinedImage);
sawtoothHistogram = ImageReader.getImageHistogram(combinedImage);
sawtoothHistogramPanel.removeAll();
sawtoothHistogramPanel.add(ImageReader.createHistogramPanel(sawtoothHistogram, “Гистограмма после контрастирования”));
sawtoothHistogramPanel.revalidate();
sawtoothHistogramPanel.repaint();
}
});
t2Slider.addChangeListener(new ChangeListener() {
@Override
public void stateChanged(ChangeEvent e) {
t2 = (float)t2Slider.getValue() / 100;
sawtoothContrastImage2 = sawtoothContrastDown(image, t1, t2);
combinedImage = combineImages(sawtoothContrastImage1, sawtoothContrastImage2, sawtoothContrastImage1);
setImageIcon(combinedImage);
saveImage(combinedImage);
sawtoothHistogram = ImageReader.getImageHistogram(combinedImage);
sawtoothHistogramPanel.removeAll();
sawtoothHistogramPanel.add(ImageReader.createHistogramPanel(sawtoothHistogram, “Гистограмма после контрастирования”));
sawtoothHistogramPanel.revalidate();
sawtoothHistogramPanel.repaint();
}
});
JPanel slidersPanel = new JPanel();
slidersPanel.setLayout(new GridLayout(2, 1, 5, 5));
slidersPanel.setBorder(BorderFactory.createTitledBorder(“Пороговые значения”));
slidersPanel.add(t1Slider);
slidersPanel.add(t2Slider);
JPanel sawtoothPanel = new JPanel(new GridLayout(1, 1));
sawtoothPanel.setBorder(BorderFactory.createTitledBorder(“Контрастированное изображение”));
sawtoothPanel.add(new JLabel(new ImageIcon(combinedImage)));
JPanel histogramPanel = new JPanel(new GridLayout(1, 2, 10, 10));
histogramPanel.setBorder(BorderFactory.createTitledBorder(“Гистограммы”));
histogramPanel.add(originalHistogramPanel);
histogramPanel.add(sawtoothHistogramPanel);
JPanel containerPanel = new JPanel(new BorderLayout());
containerPanel.add(slidersPanel, BorderLayout.NORTH);
containerPanel.add(sawtoothPanel, BorderLayout.CENTER);
containerPanel.add(histogramPanel, BorderLayout.SOUTH);
JFrame sawtoothFrame = new JFrame(“Контрастирование через зигзагообразный порог”);
sawtoothFrame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
sawtoothFrame.add(containerPanel);
sawtoothFrame.setSize(900, 600);
sawtoothFrame.setLocationRelativeTo(null);
sawtoothFrame.setVisible(true);
}
}
|
9a412815eab749fe3313689c91842b85
|
{
"intermediate": 0.3555299937725067,
"beginner": 0.5106645822525024,
"expert": 0.13380545377731323
}
|
2,339
|
Hello, I need your help to fix my project. I will be giving you details and the code in quotes and explaining the error that needs fixing. First of all, here is the premise of the project:
"
In this project, we aim to create a solution in Python for merging sub-images by using keypoint description methods (SIFT, SURF, and ORB) and obtain a final panorama image. First of all, we will extract and obtain multiple keypoints from sub-images by using the keypoint description method. Then we will compare and match these key points to merge sub-images into one panorama image. As a dataset, we will use a subset of the HPatches dataset. With this dataset, you get 6 ".png" images and 5 files for ground truth homography.
"
Here are more details about the dataset:
"
There is a reference image (image number 0) and five target images taken under different illuminations and from different viewpoints. For all images, we have the estimated ground truth homography with respect to the reference image.
"
As you can understand, the dataset includes a scene consisting of a reference image and various sub-images with the estimated ground truth homography with respect to the reference image.
The implementation has some restrictions. Here are the implementation details we used to create the project:
"
1. Feature Extraction: We are expected to extract key points in the sub-images by a keypoint extraction method. (SIFT, SURF, and ORB). You can use libraries for this part.
2. Feature Matching: Then we are expected to code a matching function (this can be based on the k-nearest neighbor method) to match extracted keypoints between pairs of sub-images. You can use libraries for this part.
3. Finding Homography: Then you should calculate a Homography Matrix for each pair of sub-images (by using the RANSAC method). For this part, you cannot use OpenCV or any library other than NumPY.
4. Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPY.
"
With that being said, I hope you understand the main goal of the project. I have a solution developed, but the end result is not what was expected. While we are expecting the result to be panorama images, the end result looks like a glitched image with some stitched parts and stretched pixels. The whole resolution looks wrong, as the image does not seem to be getting wider. Now I will provide the full code so you can check it and tell me how to fix the project:
"
import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
A = []
for i in range(4):
u1, v1 = src_pts[i]
u2, v2 = dst_pts[i]
A.append([-u1, -v1, -1, 0, 0, 0, u1 * u2, v1 * u2, u2])
A.append([0, 0, 0, -u1, -v1, -1, u1 * v2, v1 * v2, v2])
A = np.array(A)
_, _, VT = np.linalg.svd(A)
h = VT[-1]
H = h.reshape(3, 3)
return H
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_.txt")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_x, target_y = np.meshgrid(np.arange(h), np.arange(w)) # Swap w and h
target_coordinates = np.stack([target_y.ravel(), target_x.ravel(), np.ones(target_y.size)]) # Swap target_x and target_y
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[0] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[1] - 1))
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(int)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[0] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[1] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((w, h, 3), dtype=np.uint8) # Swap w and h
for i in range(3):
warped_image[..., i][valid_target_coordinates[0], valid_target_coordinates[1]] = img[..., i][valid_source_coordinates[0], valid_source_coordinates[1]]
return warped_image
def merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices):
ref_img = sub_images[0]
for i in range(1, len(sub_images)):
if i in skipped_indices:
print(f"Image {i} was skipped due to homography computation failure.")
continue
img_i = sub_images[i]
H_i = homographies[i - 1 - sum(idx < i for idx in skipped_indices)]
min_x, min_y = 0, 0
max_x, max_y = max(img_i.shape[1], ref_img.shape[1]), max(img_i.shape[0], ref_img.shape[0])
trans_dst = [-min_x, -min_y]
H_trans = np.array([[1.0, 0, trans_dst[0]], [0, 1.0, trans_dst[1]], [0, 0, 1.0]])
result = warp_perspective(ref_img, H_trans, (max_y - min_y, max_x - min_x)) # Swap max_x and max_y
img_warped = warp_perspective(img_i, H_i, result.shape[:2]) # Use result.shape[:2] for target_shape
mask_result = (result > 0)
mask_img_warped = (img_warped > 0)
combined_mask = np.zeros_like(mask_result)
min_height, min_width = min(result.shape[0], img_warped.shape[0]), min(result.shape[1], img_warped.shape[1])
combined_mask[:min_height, :min_width] = mask_result[:min_height, :min_width] & mask_img_warped[:min_height, :min_width]
nonzero_result = np.nonzero(combined_mask)
result[nonzero_result] = img_warped[nonzero_result]
ref_img = result
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = feature_matching(descriptors)
filtered_matches = filter_matches(matches)
homographies, skipped_indices = find_homography(keypoints, filtered_matches)
panorama = merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices)
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[0], keypoints[0], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image 0 - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image 0 to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)
"
|
eed16eeef7065cfcd5a11a9456baf204
|
{
"intermediate": 0.4247884154319763,
"beginner": 0.3544952869415283,
"expert": 0.22071635723114014
}
|
2,340
|
Hello, I need your help to fix my project. I will be giving you details and the code in quotes and explaining the error that needs fixing. First of all, here is the premise of the project:
"
In this project, we aim to create a solution in Python for merging sub-images by using keypoint description methods (SIFT, SURF, and ORB) and obtain a final panorama image. First of all, we will extract and obtain multiple keypoints from sub-images by using the keypoint description method. Then we will compare and match these key points to merge sub-images into one panorama image. As a dataset, we will use a subset of the HPatches dataset. With this dataset, you get 6 ".png" images and 5 files for ground truth homography.
"
Here are more details about the dataset:
"
There is a reference image (image number 0) and five target images taken under different illuminations and from different viewpoints. For all images, we have the estimated ground truth homography with respect to the reference image.
"
As you can understand, the dataset includes a scene consisting of a reference image and various sub-images with the estimated ground truth homography with respect to the reference image.
The implementation has some restrictions. Here are the implementation details we used to create the project:
"
1. Feature Extraction: We are expected to extract key points in the sub-images by a keypoint extraction method. (SIFT, SURF, and ORB). You can use libraries for this part.
2. Feature Matching: Then we are expected to code a matching function (this can be based on the k-nearest neighbor method) to match extracted keypoints between pairs of sub-images. You can use libraries for this part.
3. Finding Homography: Then you should calculate a Homography Matrix for each pair of sub-images (by using the RANSAC method). For this part, you cannot use OpenCV or any library other than NumPY.
4. Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPY.
"
With that being said, I hope you understand the main goal of the project. I have a solution developed, but the end result is not what was expected. While we are expecting the result to be panorama images, the end result looks like a glitched image with some stitched parts and stretched pixels. The whole resolution looks wrong, as the image does not seem to be getting wider. Now I will provide the full code so you can check it and tell me how to fix the project:
"
import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
A = []
for i in range(4):
u1, v1 = src_pts[i]
u2, v2 = dst_pts[i]
A.append([-u1, -v1, -1, 0, 0, 0, u1 * u2, v1 * u2, u2])
A.append([0, 0, 0, -u1, -v1, -1, u1 * v2, v1 * v2, v2])
A = np.array(A)
_, _, VT = np.linalg.svd(A)
h = VT[-1]
H = h.reshape(3, 3)
return H
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_.txt")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_x, target_y = np.meshgrid(np.arange(h), np.arange(w)) # Swap w and h
target_coordinates = np.stack([target_y.ravel(), target_x.ravel(), np.ones(target_y.size)]) # Swap target_x and target_y
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[0] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[1] - 1))
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(int)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[0] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[1] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((w, h, 3), dtype=np.uint8) # Swap w and h
for i in range(3):
warped_image[..., i][valid_target_coordinates[0], valid_target_coordinates[1]] = img[..., i][valid_source_coordinates[0], valid_source_coordinates[1]]
return warped_image
def merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices):
ref_img = sub_images[0]
for i in range(1, len(sub_images)):
if i in skipped_indices:
print(f"Image {i} was skipped due to homography computation failure.")
continue
img_i = sub_images[i]
H_i = homographies[i - 1 - sum(idx < i for idx in skipped_indices)]
min_x, min_y = 0, 0
max_x, max_y = max(img_i.shape[1], ref_img.shape[1]), max(img_i.shape[0], ref_img.shape[0])
trans_dst = [-min_x, -min_y]
H_trans = np.array([[1.0, 0, trans_dst[0]], [0, 1.0, trans_dst[1]], [0, 0, 1.0]])
result = warp_perspective(ref_img, H_trans, (max_y - min_y, max_x - min_x)) # Swap max_x and max_y
img_warped = warp_perspective(img_i, H_i, result.shape[:2]) # Use result.shape[:2] for target_shape
mask_result = (result > 0)
mask_img_warped = (img_warped > 0)
combined_mask = np.zeros_like(mask_result)
min_height, min_width = min(result.shape[0], img_warped.shape[0]), min(result.shape[1], img_warped.shape[1])
combined_mask[:min_height, :min_width] = mask_result[:min_height, :min_width] & mask_img_warped[:min_height, :min_width]
nonzero_result = np.nonzero(combined_mask)
result[nonzero_result] = img_warped[nonzero_result]
ref_img = result
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = feature_matching(descriptors)
filtered_matches = filter_matches(matches)
homographies, skipped_indices = find_homography(keypoints, filtered_matches)
panorama = merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices)
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[0], keypoints[0], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image 0 - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image 0 to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)
"
|
a01ecdabfec07dfb87fd046496028e2d
|
{
"intermediate": 0.4247884154319763,
"beginner": 0.3544952869415283,
"expert": 0.22071635723114014
}
|
2,341
|
can you write a poem in german?
|
d3df7b6a230f515e80ffd2d1c7505b3d
|
{
"intermediate": 0.36120477318763733,
"beginner": 0.37514016032218933,
"expert": 0.26365506649017334
}
|
2,342
|
Implement a simple pull-based cache invalidation scheme by using CSIM in C. In the scheme, a node
generates a query and sends a request message to a server if the queried data item is not found in its local
cache. If the node generates a query that can be answered by its local cache, it sends a check message to
the server for validity. The server replies to the node with a requested data item or a confirm message. A
set of data items is updated in the server. This scheme ensures a strong consistency that guarantees to
access the most updated data items.
2. Refer to the following technical paper and its simulation models (e.g., server, client, caching, data query
and update, etc.):
• G. Cao, "A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments," ACM
Mobicom, pp. 200-209, 2000
For the sake of simplicity, use the following simulation parameters. Run your simulation several times by
changing the simulation parameters (e.g., T_query and T_update). You can make an assumption and define
your own parameter(s). Then draw result graphs in terms of (i) cache hit ratio and (ii) query delay by
changing mean query generation time and mean update arrival time, similar to Figs. 3 and 4 in the paper.
Note that remove a cold state before collecting any results.
Parameter Values
Number of clients 5
Database (DB) size 500 items
Data item size 1024 bytes
Bandwidth 10000 bits/s
Cache size 100 items
Cache replacement policy LRU
Mean query generate time (T_query) 5, 10, 25, 50, 100 seconds
Hot data items 1 to 50
Cold data items remainder of DB
Hot data access prob. 0.8
Mean update arrival time (T_update) 5, 10, 20, 100, 200 seconds
Hot data update prob. 0.33
|
08f599f34cc229ee68f659fcd1a92cdb
|
{
"intermediate": 0.28888919949531555,
"beginner": 0.18866604566574097,
"expert": 0.5224447846412659
}
|
2,343
|
Hello, I need your help to fix my project. I will be giving you details and the code in quotes and explaining the error that needs fixing. First of all, here is the premise of the project:
"
In this project, we aim to create a solution in Python for merging sub-images by using keypoint description methods (SIFT, SURF, and ORB) and obtain a final panorama image. First of all, we will extract and obtain multiple keypoints from sub-images by using the keypoint description method. Then we will compare and match these key points to merge sub-images into one panorama image. As a dataset, we will use a subset of the HPatches dataset. With this dataset, you get 6 ".png" images and 5 files for ground truth homography.
"
Here are more details about the dataset:
"
There is a reference image (image number 0) and five target images taken under different illuminations and from different viewpoints. For all images, we have the estimated ground truth homography with respect to the reference image.
"
As you can understand, the dataset includes a scene consisting of a reference image and various sub-images with the estimated ground truth homography with respect to the reference image.
The implementation has some restrictions. Here are the implementation details we used to create the project:
"
1. Feature Extraction: We are expected to extract key points in the sub-images by a keypoint extraction method. (SIFT, SURF, and ORB). You can use libraries for this part.
2. Feature Matching: Then we are expected to code a matching function (this can be based on the k-nearest neighbor method) to match extracted keypoints between pairs of sub-images. You can use libraries for this part.
3. Finding Homography: Then you should calculate a Homography Matrix for each pair of sub-images (by using the RANSAC method). For this part, you cannot use OpenCV or any library other than NumPY.
4. Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPY.
"
With that being said, I hope you understand the main goal of the project. I have a solution developed, but the end result is not what was expected. While we are expecting the result to be panorama images, the end result looks like a glitched image with some stitched parts and stretched pixels. The whole resolution looks wrong, as the image does not seem to be getting wider. Now I will provide the full code so you can check it and tell me how to fix the project:
"
import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
A = []
for i in range(4):
u1, v1 = src_pts[i]
u2, v2 = dst_pts[i]
A.append([-u1, -v1, -1, 0, 0, 0, u1 * u2, v1 * u2, u2])
A.append([0, 0, 0, -u1, -v1, -1, u1 * v2, v1 * v2, v2])
A = np.array(A)
_, _, VT = np.linalg.svd(A)
h = VT[-1]
H = h.reshape(3, 3)
return H
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_.txt")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_x, target_y = np.meshgrid(np.arange(h), np.arange(w)) # Swap w and h
target_coordinates = np.stack([target_y.ravel(), target_x.ravel(), np.ones(target_y.size)]) # Swap target_x and target_y
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[0] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[1] - 1))
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(int)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[0] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[1] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((w, h, 3), dtype=np.uint8) # Swap w and h
for i in range(3):
warped_image[..., i][valid_target_coordinates[0], valid_target_coordinates[1]] = img[..., i][valid_source_coordinates[0], valid_source_coordinates[1]]
return warped_image
def merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices):
ref_img = sub_images[0]
for i in range(1, len(sub_images)):
if i in skipped_indices:
print(f"Image {i} was skipped due to homography computation failure.")
continue
img_i = sub_images[i]
H_i = homographies[i - 1 - sum(idx < i for idx in skipped_indices)]
min_x, min_y = 0, 0
max_x, max_y = max(img_i.shape[1], ref_img.shape[1]), max(img_i.shape[0], ref_img.shape[0])
trans_dst = [-min_x, -min_y]
H_trans = np.array([[1.0, 0, trans_dst[0]], [0, 1.0, trans_dst[1]], [0, 0, 1.0]])
result = warp_perspective(ref_img, H_trans, (max_y - min_y, max_x - min_x)) # Swap max_x and max_y
img_warped = warp_perspective(img_i, H_i, result.shape[:2]) # Use result.shape[:2] for target_shape
mask_result = (result > 0)
mask_img_warped = (img_warped > 0)
combined_mask = np.zeros_like(mask_result)
min_height, min_width = min(result.shape[0], img_warped.shape[0]), min(result.shape[1], img_warped.shape[1])
combined_mask[:min_height, :min_width] = mask_result[:min_height, :min_width] & mask_img_warped[:min_height, :min_width]
nonzero_result = np.nonzero(combined_mask)
result[nonzero_result] = img_warped[nonzero_result]
ref_img = result
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = feature_matching(descriptors)
filtered_matches = filter_matches(matches)
homographies, skipped_indices = find_homography(keypoints, filtered_matches)
panorama = merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices)
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[0], keypoints[0], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image 0 - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image 0 to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)
"
|
212ea4f6f3c121983eb4957912edb6c9
|
{
"intermediate": 0.4247884154319763,
"beginner": 0.3544952869415283,
"expert": 0.22071635723114014
}
|
2,344
|
#include <stdio.h>
#include <stdlib.h>
#include <csim.h>
#define NUM_CLIENTS 5
#define DB_SIZE 500
#define DATA_SIZE 1024
#define BANDWIDTH 10000
#define CACHE_SIZE 100
#define HOT_DATA_ITEMS 50
#define HOT_DATA_ACCESS_PROB 0.8
#define COLD_DATA_ITEMS (DB_SIZE - HOT_DATA_ITEMS)
#define HOT_DATA_UPDATE_PROB 0.33
int query_count = 0, hit_count = 0, cold_count = 0, update_count = 0;
double query_delay = 0;
int cache[CACHE_SIZE];
int hot_data[HOT_DATA_ITEMS];
int cold_data[COLD_DATA_ITEMS];
double T_QUERY = 5;
double T_UPDATE = 5;
void init_data() {
int i;
for (i = 0; i < CACHE_SIZE; i++) {
cache[i] = -1;
}
for (i = 0; i < HOT_DATA_ITEMS; i++) {
hot_data[i] = rand() % DB_SIZE;
cold_data[i] = -1;
}
for (i = 0; i < COLD_DATA_ITEMS; i++) {
cold_data[i] = rand() % DB_SIZE;
}
}
int is_in_cache(int data_id) {
int i;
for (i = 0; i < CACHE_SIZE; i++) {
if (cache[i] == data_id) {
return 1;
}
}
return 0;
}
void update_cache(int data_id) {
int i, lru_index = 0, lru_time = simtime() + 1;
for (i = 0; i < CACHE_SIZE; i++) {
if (cache[i] == data_id) {
return;
}
if (cache[i] == -1) {
cache[i] = data_id;
return;
}
if (lastchange(cache[i]) < lru_time) {
lru_index = i;
lru_time = lastchange(cache[i]);
}
}
cache[lru_index] = data_id;
}
int find_data(int data_id) {
int i;
for (i = 0; i < HOT_DATA_ITEMS; i++) {
if (hot_data[i] == data_id) {
return HOT_DATA_ACCESS_PROB >= ((double) rand() / RAND_MAX);
}
}
for (i = 0; i < COLD_DATA_ITEMS; i++) {
if (cold_data[i] == data_id) {
return 1;
}
}
cold_count++;
return 0;
}
double generate_query() {
int data_id = rand() % DB_SIZE;
if (is_in_cache(data_id)) {
query_count++;
wait(0.001);
return 0;
}
wait(expntl(1.0/T_QUERY));
if (find_data(data_id)) {
hit_count++;
update_cache(data_id);
}
query_count++;
return simtime() - event_time;
}
void generate_update(int data_id) {
if (HOT_DATA_UPDATE_PROB >= ((double) rand() / RAND_MAX)) {
hot_data[rand() % HOT_DATA_ITEMS] = data_id;
} else {
cold_data[rand() % COLD_DATA_ITEMS] = data_id;
}
update_count++;
}
void server() {
int data_id = rand() % DB_SIZE;
generate_update(data_id);
write("Data updated: ");
writeline(data_id);
hold(expntl(1.0/T_UPDATE));
}
void client() {
while (1) {
generate_query();
}
}
int main() {
init_data();
T_QUERY = 5;
T_UPDATE = 5;
set_seed(12345);
start_simulation(); // Start CSIM simulation
process(“main”); // Create main process
int i;
for (i = 0; i < NUM_CLIENTS; i++) {
process(client); // Create client processes
}
process(server); // Create server process
hold(10000.0);
printf(“Total queries: %d, hits: %d, hit rate: %.2f%%\n”,
query_count, hit_count, (double) hit_count / query_count * 100);
printf(“Cold data queries: %d, update count: %d\n”, cold_count, update_count);
return 0;
}
following code was written for this scenario:
Implement a simple pull-based cache invalidation scheme by using CSIM. In the scheme, a node
generates a query and sends a request message to a server if the queried data item is not found in its local
cache. If the node generates a query that can be answered by its local cache, it sends a check message to
the server for validity. The server replies to the node with a requested data item or a confirm message. A
set of data items is updated in the server. This scheme ensures a strong consistency that guarantees to
access the most updated data items.
2. Refer to the following technical paper and its simulation models (e.g., server, client, caching, data query
and update, etc.):
• G. Cao, "A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments," ACM
Mobicom, pp. 200-209, 2000
For the sake of simplicity, use the following simulation parameters. Run your simulation several times by
changing the simulation parameters (e.g., T_query and T_update). You can make an assumption and define
your own parameter(s). Then draw result graphs in terms of (i) cache hit ratio and (ii) query delay by
changing mean query generation time and mean update arrival time, similar to Figs. 3 and 4 in the paper.
Note that remove a cold state before collecting any results.
Parameter Values
Number of clients 5
Database (DB) size 500 items
Data item size 1024 bytes
Bandwidth 10000 bits/s
Cache size 100 items
Cache replacement policy LRU
Mean query generate time (T_query) 5, 10, 25, 50, 100 seconds
Hot data items 1 to 50
Cold data items remainder of DB
Hot data access prob. 0.8
Mean update arrival time (T_update) 5, 10, 20, 100, 200 seconds
Hot data update prob. 0.33
can you tell me what event_time is as it is undefined in the program, also add comments in the code explaining major steps
|
d3011eea64df05797a2ed7831c1a1db8
|
{
"intermediate": 0.39035776257514954,
"beginner": 0.449905663728714,
"expert": 0.15973655879497528
}
|
2,345
|
write me c# code, make a file move method with path saving to file
|
eaa5da3f47b79489944e065267909013
|
{
"intermediate": 0.6248071193695068,
"beginner": 0.18073730170726776,
"expert": 0.19445554912090302
}
|
2,346
|
i have this code
#include <stdio.h>
#include <stdlib.h>
#include <csim.h>
#define NUM_CLIENTS 5
#define DB_SIZE 500
#define DATA_SIZE 1024
#define BANDWIDTH 10000
#define CACHE_SIZE 100
#define HOT_DATA_ITEMS 50
#define HOT_DATA_ACCESS_PROB 0.8
#define COLD_DATA_ITEMS (DB_SIZE - HOT_DATA_ITEMS)
#define HOT_DATA_UPDATE_PROB 0.33
int query_count = 0, hit_count = 0, cold_count = 0, update_count = 0;
double query_delay = 0;
int cache[CACHE_SIZE]; // array to store cached data items
int hot_data[HOT_DATA_ITEMS]; // array to store hot data items
int cold_data[COLD_DATA_ITEMS]; // array to store cold data items
double T_QUERY = 5; // mean query generation time
double T_UPDATE = 5; // mean update arrival time
void init_data() {
int i;
// initialize cache with -1 to indicate empty slots
for (i = 0; i < CACHE_SIZE; i++) {
cache[i] = -1;
}
// initialize hot data items and cold data items
for (i = 0; i < HOT_DATA_ITEMS; i++) {
hot_data[i] = rand() % DB_SIZE; // generate random hot data item id
cold_data[i] = -1; // initialize cold data item with -1
}
for (i = 0; i < COLD_DATA_ITEMS; i++) {
cold_data[i] = rand() % DB_SIZE; // generate random cold data item id
}
}
int is_in_cache(int data_id) {
int i;
// check if data item is in cache
for (i = 0; i < CACHE_SIZE; i++) {
if (cache[i] == data_id) {
return 1; // data item is in cache
}
}
return 0; // data item is not in cache
}
void update_cache(int data_id) {
int i, lru_index = 0, lru_time = simtime() + 1;
// update cache with new data item
for (i = 0; i < CACHE_SIZE; i++) {
if (cache[i] == data_id) {
return; // data item is already in cache, no need to update
}
if (cache[i] == -1) {
cache[i] = data_id; // found empty slot, update with new data item
return;
}
if (lastchange(cache[i]) < lru_time) {
lru_index = i; // found least recently used slot
lru_time = lastchange(cache[i]);
}
}
cache[lru_index] = data_id; // update least recently used slot with new data item
}
int find_data(int data_id) {
int i;
// check if data item is a hot data item
for (i = 0; i < HOT_DATA_ITEMS; i++) {
if (hot_data[i] == data_id) {
return HOT_DATA_ACCESS_PROB >= ((double) rand() / RAND_MAX); // check if hot data item is accessed
}
}
// check if data item is a cold data item
for (i = 0; i < COLD_DATA_ITEMS; i++) {
if (cold_data[i] == data_id) {
return 1; // data item is a cold data item and is available
}
}
cold_count++; // data item is not in database, counted as cold data access
return 0; // data item is not available
}
double generate_query() {
int data_id = rand() % DB_SIZE; // generate random data item id
if (is_in_cache(data_id)) {
query_count++;
wait(0.001); // wait for a small amount of time to simulate cache hit
return 0;
}
wait(expntl(1.0/T_QUERY)); // wait for mean query generation time to simulate cache miss
if (find_data(data_id)) {
hit_count++; // data item found, counted as cache hit
update_cache(data_id); // update cache with new data item
}
query_count++;
return simtime() - event_time; // calculate query delay
}
void generate_update(int data_id) {
if (HOT_DATA_UPDATE_PROB >= ((double) rand() / RAND_MAX)) {
hot_data[rand() % HOT_DATA_ITEMS] = data_id; // update a random hot data item
} else {
cold_data[rand() % COLD_DATA_ITEMS] = data_id; // update a random cold data item
}
update_count++;
}
void server() {
int data_id = rand() % DB_SIZE; // generate random data item id
generate_update(data_id); // update database with new data item
write("Data updated: ");
writeline(data_id);
hold(expntl(1.0/T_UPDATE)); // wait for mean update arrival time
}
void client() {
while (1) {
generate_query(); // generate queries continuously
}
}
int main() {
init_data(); // initialize data
T_QUERY = 5; // set mean query generation time
T_UPDATE = 5; // set mean update arrival time
set_seed(12345);
start_simulation(); // start CSIM simulation
process(“main”); // create main process
int i;
for (i = 0; i < NUM_CLIENTS; i++) {
process(client); // create client processes
}
process(server); // create server process
hold(10000.0); // wait for simulation to finish
printf(“Total queries: %d, hits: %d, hit rate: %.2f%%\n”,
query_count, hit_count, (double) hit_count / query_count * 100);
printf(“Cold data queries: %d, update count: %d\n”, cold_count, update_count);
return 0;
}
which is built for the following requirements :
Implement a simple pull-based cache invalidation scheme by using CSIM. In the scheme, a node
generates a query and sends a request message to a server if the queried data item is not found in its local
cache. If the node generates a query that can be answered by its local cache, it sends a check message to
the server for validity. The server replies to the node with a requested data item or a confirm message. A
set of data items is updated in the server. This scheme ensures a strong consistency that guarantees to
access the most updated data items.
2. Refer to the following technical paper and its simulation models (e.g., server, client, caching, data query
and update, etc.):
• G. Cao, “A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments,” ACM
Mobicom, pp. 200-209, 2000
For the sake of simplicity, use the following simulation parameters. Run your simulation several times by
changing the simulation parameters (e.g., T_query and T_update). You can make an assumption and define
your own parameter(s). Then draw result graphs in terms of (i) cache hit ratio and (ii) query delay by
changing mean query generation time and mean update arrival time, similar to Figs. 3 and 4 in the paper.
Note that remove a cold state before collecting any results.
Parameter Values
Number of clients 5
Database (DB) size 500 items
Data item size 1024 bytes
Bandwidth 10000 bits/s
Cache size 100 items
Cache replacement policy LRU
Mean query generate time (T_query) 5, 10, 25, 50, 100 seconds
Hot data items 1 to 50
Cold data items remainder of DB
Hot data access prob. 0.8
Mean update arrival time (T_update) 5, 10, 20, 100, 200 seconds
Hot data update prob. 0.33
but it turns out the author did not use the proper variables and functions of the csim.h file, for reference i will give you the csim.h file contents below, assume all the functions are defined and rewrite the above code to function properly
#ifndef CSIM_H
#define CSIM_H
/*
* CSim - Discrete system simulation library
*
* Copyright © 2002 Vikas G P <<PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
/
/
* The CSim library provides an embeddable frame-work for discrete system
* simulation, using the paradigm adopted by SIMSCRIPT. It is written in C,
* as opposed to most other such libraries, which are written in C++.
/
#ifndef NULL
# define NULL 0
#endif
#include “queue.h”
/
* Each entity of a system is represented by a CsEntity. They are created with
* the function cs_entity_new(…)
*/
typedef struct _CsEntity{
CsQueue attribute_queue;
} CsEntity;
/
* Each event is represented by a CsEvent.
*/
typedef struct _CsEvent{
int (*event_routine)(void user_data);
int count;
} CsEvent;
/ Each set is represented by a CsSet */
typedef struct CsSet{
CsQueue queue;
int discipline;
int count;
} CsSet;
/ Data type for clock time /
typedef long simtime_t;
/ Macro for event routines */
#define CS_EVENT_ROUTINE(func) int (*func)(void )
/ Synonym for the current time /
#define NOW cs_get_clock_time()
/ Checks if set is empty /
#define cs_set_is_empty(set) cs_set_count_elements((set))
/ Set operations /
#define FIFO 1
#define LIFO 2
/ functions prefixed with cs are exported to the outside /
/
* Initializes the library.
/
int cs_init(void);
/
* Starts the simulation, executing the events until it runs out of events.
/
int cs_start_simulation(void);
/ Entity functions /
/
* Creates a new entity.
*/
CsEntity cs_entity_new(void);
/ Destroys an entity */
void cs_entity_destroy(CsEntity entity);
/
* Adds an attribute to an entity’s list of attributes.
*/
int cs_attribute_add(CsEntity *entity, char name);
/ Set’s an attribute to the specified value */
int cs_attribute_set(CsEntity *entity, char name, int new_value);
/ Returns the value of the specified attribute */
int cs_attribute_get(CsEntity *entity, char name);
/
* This is a faster way to specify all the attributes of an entity.
*/
void cs_attribute_specify(CsEntity *entity, char *attr_names);
/ To create temp. entities */
CsEntity *cs_entity_new_with_attributes(char *attr_names);
/ Event functions /
/
* Creates a new event.
*/
CsEvent *cs_event_new(int (*event_routine)(void user_data));
/ Get’s the event’s count */
int cs_event_get_count(CsEvent event);
/
* Schedules an event to executed at the specified time.
*/
int cs_event_schedule(CsEvent *event, simtime_t time, void user_data);
/
* Same as the cs_event_schedule, but the time is specified as an offset from
* the current clock time.
*/
int cs_event_schedule_relative(CsEvent *event, simtime_t offset, void user_data);
/ Set functions /
/ Creates a new set */
CsSet cs_set_new(int discipline);
/ “Files”, i.e., inserts an entity into a set */
void cs_set_file(CsSet *set, CsEntity entity);
/ Returns the top-most member of a set, and removes it from the set */
CsEntity *cs_set_remove(CsSet set);
/ Same as above, but doesn’t remove it */
CsEntity *cs_set_remove_nodestroy(CsSet set);
/ Return number of elements in a set */
int cs_set_count_elements(CsSet set);
/ Destroys a set */
void cs_set_destroy(CsSet set);
/ Misc. functions /
/ Get’s the current time /
simtime_t cs_get_clock_time(void);
#endif / CSIM_H */
|
4619363cd9dca253805a068581e6b8bb
|
{
"intermediate": 0.4015994668006897,
"beginner": 0.36890095472335815,
"expert": 0.22949959337711334
}
|
2,347
|
как убрать из адресса урл 969 и передовать его непосредственно через метод
class FakeExchangeService:
def __init__(self):
self.url = "https://localhost:8003/exchange_data/969"
def putExchangeData(self, statusError, typeExchange):
config = self.config[statusError][typeExchange]
zipPath = f'{squishinfo.testCase}/stmobile.zip'
files = self.getZipFile(zipPath)
requests.put(self.url, data=config, files=files, verify=False)
|
e38ca2e58b154e7f941c7b626d655ee3
|
{
"intermediate": 0.35869354009628296,
"beginner": 0.4360858201980591,
"expert": 0.20522063970565796
}
|
2,348
|
In python predict a minesweeper game on a board 5x5 field with 3 bombs in it, predict 5 safe spots. Use the past games from this list: [6, 16, 23, 16, 22, 23, 18, 19, 22, 6, 9, 16, 2, 6, 19, 6, 15, 24, 2, 4, 10, 4, 16, 22, 2, 6, 24, 9, 15, 19, 13, 16, 19, 4, 14, 15, 4, 5, 10, 1, 18, 20, 8, 9, 23, 4, 17, 24, 5, 8, 11, 3, 9, 19, 5, 7, 8, 6, 12, 13, 7, 11, 15, 2, 14, 21, 5, 19, 22, 1, 7, 20, 6, 12, 19, 8, 12, 24, 6, 11, 14, 11, 23, 24, 2, 4, 17, 15, 23, 23] each num in the list is a past mine location. Also, use machine learning
|
2e037e3084a006d8fcfd54351d4b1400
|
{
"intermediate": 0.1436418741941452,
"beginner": 0.09758215397596359,
"expert": 0.7587759494781494
}
|
2,349
|
so i need the full code for the following task in C so i can generate the graphs and reports
Implement a simple pull-based cache invalidation scheme by using CSIM. In the scheme, a node
generates a query and sends a request message to a server if the queried data item is not found in its local
cache. If the node generates a query that can be answered by its local cache, it sends a check message to
the server for validity. The server replies to the node with a requested data item or a confirm message. A
set of data items is updated in the server. This scheme ensures a strong consistency that guarantees to
access the most updated data items.
2. now for the simulation models (e.g., server, client, caching, data query
and update, etc.):
For the sake of simplicity, use the following simulation parameters. Run your simulation several times by
changing the simulation parameters (e.g., T_query and T_update). You can make an assumption and define
your own parameter(s). Then draw result graphs in terms of (i) cache hit ratio and (ii) query delay by
changing mean query generation time and mean update arrival time, similar to Figs. 3 and 4 in the paper.
Note that remove a cold state before collecting any results.
Parameter Values
Number of clients 5
Database (DB) size 500 items
Data item size 1024 bytes
Bandwidth 10000 bits/s
Cache size 100 items
Cache replacement policy LRU
Mean query generate time (T_query) 5, 10, 25, 50, 100 seconds
Hot data items 1 to 50
Cold data items remainder of DB
Hot data access prob. 0.8
Mean update arrival time (T_update) 5, 10, 20, 100, 200 seconds
Hot data update prob. 0.33
|
5925636f32ea349933c29f89ddf3c00f
|
{
"intermediate": 0.3842029571533203,
"beginner": 0.20139417052268982,
"expert": 0.41440287232398987
}
|
2,350
|
Implement a simple pull-based cache invalidation scheme by using CSIM in C. In the scheme, a node generates a query and sends a request message to a server if the queried data item is not found in its local cache. If the node generates a query that can be answered by its local cache, it sends a check message to the server for validity. The server replies to the node with a requested data item or a confirm message. A set of data items is updated in the server. This scheme ensures a strong consistency that guarantees to access the most updated data items. Note that remove a cold state before collecting any results. use the following simulation parameters .make sure to output cache hit ratio and (ii) query delay by changing mean query generation time and mean update arrival time.
Parameter Values
Number of clients 5
Database (DB) size 500 items
Data item size 1024 bytes
Bandwidth 10000 bits/s
Cache size 100 items
Cache replacement policy LRU
Mean query generate time (T_query) 5, 10, 25, 50, 100 seconds
Hot data items 1 to 50
Cold data items remainder of DB
Hot data access prob. 0.8
Mean update arrival time (T_update) 5, 10, 20, 100, 200 seconds
Hot data update prob. 0.33
|
218ee3895ebafa686777838c43338f96
|
{
"intermediate": 0.3486844003200531,
"beginner": 0.2441035360097885,
"expert": 0.4072120487689972
}
|
2,351
|
import java.util.Scanner;
public class TicTacToe {
/**
* Main method
*/
public static void main(String[] args) {
// Create a empty board
String[][] board = getBoard();
// Create two players token
String[] tokens = {" X ", " O "};
int result; // game status result
// Repeat while result is continue
do {
// Display board
print(board);
// Get available cell to mark
int[] cell = getCell(board, tokens[0]);
// Mark available cell with player's token
placeToken(board, cell, tokens[0]);
// Determine game status
result = gameStatus(board, tokens[0]);
// If status is make next player take turn//
if (result == 2) {
swap(tokens);
}
} while (result == 2);
// Display board
print(board);
// Display game results
if (result == 0) {
System.out.println(tokens[0] + "player won");
} else {
System.out.println("Players draw");
}
}
/**
* gameStatus determines the status of the game (win, draw, or continue)
*/
public static int gameStatus(String[][] m, String e) {
if (isWin(m, e)) {
return 0; // Win
} else if (isDraw(m)) {
return 1; // Draw
} else {
return 2; // Continue
}
}
/**
* isWin returns true if player has placed three tokens in,
* a horizontal vertical, or diagonal row on the grid
*/
public static boolean isWin(String[][] m, String t) {
return isHorizontalWin(m, t) || isVerticalWin(m, t) || isDiagonalWin(m, t);
}
/**
* isHorizontalWin returns true if player has
* placed three tokens in a horizontal row
*/
public static boolean isHorizontalWin(String[][] m, String t) {
for (int i = 0; i < m.length; i++) {
int count = 0;
for (int j = 0; j < m[i].length; j++) {
if (m[i][j] == t) {
count++;
}
}
if (count == 3) {
return true;
}
}
return false;
}
/**
* isVerticalWin returns true if player has
* placed three tokens in a vertical row
*/
public static boolean isVerticalWin(String[][] m, String t) {
for (int i = 0; i < m.length; i++) {
int count = 0;
for (int j = 0; j < m[i].length; j++) {
if (m[j][i] == t) ;
}
if (count == 3) {
return true;
}
}
return false;
}
/**
* isDiagonalWin returns true if player has
* placed three tokens in a diagonal row
*/
public static boolean isDiagonalWin(String[][] m, String t) {
int count = 0;
for (int i = 0; i < m.length; i++) {
if (m[i][i] == t) {
count++;
}
if (count == 3) {
return true;
}
}
for (int i = 0, j = m[i].length - 1; j >= 0; j--, i++) {
if (m[i][j] == t) {
count++;
}
if (count == 3) {
return true;
}
}
return false;
}
/**
* isDraw returns true if all the cells on the grid have been
* filled with tokens and neither player has achieved a win
*/
public static boolean isDraw(String[][] m) {
for (int i = 0; i < m.length; i++) {
for (int j = 0; j < m[i].length; j++) {
if (m[i][j] == " ") {
return false;
}
}
}
return true;
}
/**
* swap swaps the elements in an array
*/
public static void swap(String[] p) {
String temp = p[0];
p[0] = p[1];
p[1] = temp;
}
/**
* placeToken fills the matrix cell with the player's token
*/
public static void placeToken(String[][] m, int[] e, String t) {
m[e[0]][e[1]] = t;
}
/**
* getCell returns a valid cell input by user
*/
public static int[] getCell(String[][] m, String t) {
// Create a Scanner
Scanner input = new Scanner(System.in);
int[] cell = new int[2]; // Cell row and column
// Prompt player to enter a token
do {
System.out.print("Enter a row(0, 1, or 2) for player " + t + ": ");
cell[0] = input.nextInt();
System.out.print("Enter a column(0, 1, or 2) for player " + t + ": ");
cell[1] = input.nextInt();
} while (!isValidCell(m, cell));
return cell;
}
/**
* isValidCell returns true is cell is empty and is in a 3 x 3 array
*/
public static boolean isValidCell(String[][] m, int[] cell) {
for (int i = 0; i < cell.length; i++) {
if (cell[i] < 0 || cell[i] >= 3) {
System.out.println("Invalid cell");
return false;
}
}
if (m[cell[0]][cell[1]] != " ") {
System.out.println(
"\nRow " + cell[0] + " column " + cell[1] + " is filled");
return false;
}
return true;
}
/**
* getBoard returns a 3 x 3 array filled with blank spaces
*/
public static String[][] getBoard() {
String[][] m = new String[3][3];
for (int i = 0; i < 3; i++) {
for (int j = 0; j < 3; j++) {
m[i][j] = " ";
}
}
return m;
}
/**
* print displays the board
*/
public static void print(String[][] m) {
System.out.println("\n-------------");
for (int i = 0; i < m.length; i++) {
for (int j = 0; j < m[i].length; j++) {
System.out.print("|" + m[i][j]);
}
System.out.println("|\n-------------");
}
}
} fix this code
|
15acfbb8e4b2d42462c9b5bcb6677798
|
{
"intermediate": 0.2852650284767151,
"beginner": 0.56972336769104,
"expert": 0.14501157402992249
}
|
2,352
|
You're an helpful programmer assistant, working with a human user developing a web scrapper for a particular website in RStudio using rvest package.
Ask any clarifying questions before proceeding if needed.
Avoid writing all the code, and try to focus on what needs to be changed or added.
I need to get multiple variables from that "table" (table#subjectDispositionSection.sectionTable)
The first part of the data is in tabular form, with only two columns: first column with the title of the variable (class="labelColumn"), and second column is the actual value (class="valueColumn").
The second part of the data is in a tabular form with a variable number of columns (class="embeddedTableContainer").
We're focusing on the first part for now.
Ask any clarifying questions before proceeding if needed.
Here's the code so far:
---
library(rvest)
url <- "https://www.clinicaltrialsregister.eu/ctr-search/trial/2016-000172-19/results" # replace with the URL of the website you want to scrape
webpage <- read_html(url)
# Extract the data using the provided CSS selector and classes:
table_selector <- "table#subjectDispositionSection.sectionTable"
label_selector <- ".labelColumn"
value_selector <- ".valueColumn"
label_nodes <- html_nodes(webpage, paste(table_selector, label_selector, sep = " "))
value_nodes <- lapply(label_nodes, function(node) {
html_node(node, xpath = "following-sibling::*[1][contains(@class, 'valueColumn')]")
})
labels <- trimws(html_text(label_nodes))
values <- trimws(sapply(value_nodes, html_text))
# Combine the extracted labels and values into a data frame:
data <- data.frame(Variable = labels, Value = values, stringsAsFactors = FALSE)
----
THe problem is that the code so far is extracting the second part as well, but it is intended to get the first part only.
It should ignore any data inside those embedded tables (class="embeddedTableContainer").
Here's the HTML code for one of those embedded tables to be ignored, in case it is helpful:
---
<tr>
<td colspan="2" class="embeddedTableContainer">
<div style="overflow-x: auto; width:960px;">
<table class="embeddedTable">
<tbody><tr>
<td class="header labelColumn">
<div>Number of subjects in period 1 </div>
</td>
<td class="valueColumn centeredValueColumn">Empagliflozin</td>
<td class="valueColumn centeredValueColumn">Placebo</td>
</tr>
<tr>
<td class="labelColumn">
<div> Started</div>
</td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
</tr>
<tr>
<td class="labelColumn">
<div> Completed</div>
</td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
</tr>
</tbody></table>
</div>
</td>
</tr>
|
6ebfafd18756880870a03e6f5d0e28a6
|
{
"intermediate": 0.4251650273799896,
"beginner": 0.4008033275604248,
"expert": 0.1740315705537796
}
|
2,353
|
Hello, I need your help to fix my project. I will be giving you details and the code in quotes and explaining the error that needs fixing. First of all, here is the premise of the project:
"
In this project, we aim to create a solution in Python for merging sub-images by using keypoint description methods (SIFT, SURF, and ORB) and obtain a final panorama image. First of all, we will extract and obtain multiple keypoints from sub-images by using the keypoint description method. Then we will compare and match these key points to merge sub-images into one panorama image. As a dataset, we will use a subset of the HPatches dataset. With this dataset, you get 6 ".png" images and 5 files for ground truth homography.
"
Here are more details about the dataset:
"
There is a reference image (image number 0) and five target images taken under different illuminations and from different viewpoints. For all images, we have the estimated ground truth homography with respect to the reference image.
"
As you can understand, the dataset includes a scene consisting of a reference image and various sub-images with the estimated ground truth homography with respect to the reference image.
The implementation has some restrictions. Here are the implementation details we used to create the project:
"
1. Feature Extraction: We are expected to extract key points in the sub-images by a keypoint extraction method. (SIFT, SURF, and ORB). You can use libraries for this part.
2. Feature Matching: Then we are expected to code a matching function (this can be based on the k-nearest neighbor method) to match extracted keypoints between pairs of sub-images. You can use libraries for this part.
3. Finding Homography: Then you should calculate a Homography Matrix for each pair of sub-images (by using the RANSAC method). For this part, you cannot use OpenCV or any library other than NumPY.
4. Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPY.
"
With that being said, I hope you understand the main goal of the project. I have a solution developed, but the end result is not what was expected. While we are expecting the result to be panorama images, the end result looks like a glitched image with some stitched parts and stretched pixels. The whole resolution looks wrong, as the image does not seem to be getting wider. Now I will provide the full code so you can check it and tell me how to fix the project:
"
import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
A = []
for i in range(4):
u1, v1 = src_pts[i]
u2, v2 = dst_pts[i]
A.append([-u1, -v1, -1, 0, 0, 0, u1 * u2, v1 * u2, u2])
A.append([0, 0, 0, -u1, -v1, -1, u1 * v2, v1 * v2, v2])
A = np.array(A)
_, _, VT = np.linalg.svd(A)
h = VT[-1]
H = h.reshape(3, 3)
return H
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_.txt")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
target_coordinates = np.stack([target_x.ravel(), target_y.ravel(), np.ones(target_x.size)])
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[0] - 1))
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(int)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[1] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[0] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((h, w, 3), dtype=np.uint8)
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
return warped_image
def merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices):
ref_img = sub_images[0]
for i in range(1, len(sub_images)):
if i in skipped_indices:
print(f"Image {i} was skipped due to homography computation failure.")
continue
img_i = sub_images[i]
H_i = homographies[i - 1 - sum(idx < i for idx in skipped_indices)]
# Get corners of the second (transferred) image
h, w, _ = img_i.shape
corners = np.array([[0, 0, 1], [w-1, 0, 1], [w-1, h-1, 1], [0, h-1, 1]], dtype=np.float32)
corners_transformed = H_i @ corners.T
corners_transformed = corners_transformed / corners_transformed[2]
corners_transformed = corners_transformed.T[:, :2]
# Calculate size of the stitched image
x_min = int(min(corners_transformed[:, 0].min(), 0))
x_max = int(max(corners_transformed[:, 0].max(), ref_img.shape[1]))
y_min = int(min(corners_transformed[:, 1].min(), 0))
y_max = int(max(corners_transformed[:, 1].max(), ref_img.shape[0]))
# Get the transformation to shift the origin to min_x, min_y
shift_mtx = np.array([[1, 0, -x_min],
[0, 1, -y_min],
[0, 0, 1]], dtype=np.float32)
# Apply the transformation on both images
img_i_transformed = cv2.warpPerspective(img_i, shift_mtx @ H_i, (x_max-x_min, y_max-y_min))
ref_img_transformed = cv2.warpPerspective(ref_img, shift_mtx, (x_max-x_min, y_max-y_min))
# Calculate blend masks
mask_img_i = img_i_transformed > 0
mask_ref_img = ref_img_transformed > 0
mask_overlap = np.logical_and(mask_img_i, mask_ref_img).astype(np.float32)
mask_img_i = mask_img_i.astype(np.float32) - mask_overlap
mask_ref_img = mask_ref_img.astype(np.float32) - mask_overlap
# Normalize weights and combine images
total_weight = mask_img_i + mask_ref_img + 2 * mask_overlap
ref_img = ((mask_img_i * img_i_transformed) + (mask_ref_img * ref_img_transformed) + (2 * mask_overlap * img_i_transformed * ref_img_transformed)) / total_weight
ref_img = ref_img.astype(np.uint8)
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = feature_matching(descriptors)
filtered_matches = filter_matches(matches)
homographies, skipped_indices = find_homography(keypoints, filtered_matches)
panorama = merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices)
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[0], keypoints[0], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image 0 - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image 0 to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)
"
|
517d3ec5d5129169e821ed275165af05
|
{
"intermediate": 0.4247884154319763,
"beginner": 0.3544952869415283,
"expert": 0.22071635723114014
}
|
2,354
|
In pytorch how to force weights to stay within a range?
|
00d26218b0cffe2d2605ff8c57307551
|
{
"intermediate": 0.2707018554210663,
"beginner": 0.14857935905456543,
"expert": 0.5807188153266907
}
|
2,355
|
In pytorch how do I print a tensor without additional object arguments like dtype?
|
291d8f7d929d73ae066f37a0245f71ad
|
{
"intermediate": 0.6253924369812012,
"beginner": 0.14821910858154297,
"expert": 0.22638839483261108
}
|
2,356
|
How do I print a list of floats only up to a certain precision in python?
|
025fee64584c79eff34d02a4ff43c687
|
{
"intermediate": 0.5163175463676453,
"beginner": 0.07552520185709,
"expert": 0.40815722942352295
}
|
2,357
|
Give a r code to perform best subset selection for linear regression
|
31e2a9a1bbad2883d20045263dcbe9bf
|
{
"intermediate": 0.16106511652469635,
"beginner": 0.06504552066326141,
"expert": 0.7738893628120422
}
|
2,358
|
A passqors string pmn consist of binary characters. A flip is to changw the element from 1 to 0 or 0 to 1. Dind the minimum number of flips to make the string a xompination of even length substrings auch that they all contain the same element
|
39ce9d1787bda42832efc181ecf7dbaf
|
{
"intermediate": 0.42518579959869385,
"beginner": 0.19988934695720673,
"expert": 0.3749248683452606
}
|
2,359
|
Write a list of normalization methods in pytorch useful for preventing exploding gradients.
|
4277df0871368e9d8d8708cf1c5278c7
|
{
"intermediate": 0.17772696912288666,
"beginner": 0.10355139523744583,
"expert": 0.7187216281890869
}
|
2,360
|
Hi
|
02106f99cec2f128bc5ca178f2f0f25b
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
2,361
|
как мне задать пароль из 2 значений routeCode
Где routeCode = 969 А password должен быть 969969
def createArchiveForExchange(self, routeCode):
archivePath = f’{squishinfo.testCase}/stmobile.zip’
if ‘/tst_full_download’ in f"{squishinfo.testCase}":
fileNames = [f’{squishinfo.testCase}/stmobile.db3’, f’{squishinfo.testCase}/stmobile.dbt’]
else:
fileNames = [f’{squishinfo.testCase}/stmobile.db3’]
for filePath in fileNames:
fs.createArchive(archivePath, filePath, password)
|
27003b5636070cff47229129ee0799ce
|
{
"intermediate": 0.39242053031921387,
"beginner": 0.2929390072822571,
"expert": 0.31464049220085144
}
|
2,362
|
What are some good ways to store data remotely for mobile unity games?
|
09e6462b90f5aa42b6cade9b1bdc3b6f
|
{
"intermediate": 0.47042322158813477,
"beginner": 0.2380571961402893,
"expert": 0.29151955246925354
}
|
2,363
|
You're an helpful programmer assistant, working with a human user developing a web scrapper for a particular website in RStudio using rvest package.
Ask any clarifying questions before proceeding if needed.
Avoid writing all the code, and try to focus on what needs to be changed or added.
I need to get multiple variables from that "table" (table#subjectDispositionSection.sectionTable)
The first part of the data is in tabular form, with only two columns: first column with the title of the variable (class="labelColumn"), and second column is the actual value (class="valueColumn").
The second part of the data is in a tabular form with a variable number of columns (class="embeddedTableContainer").
We're focusing on the first part for now.
Here's the code so far:
---
library(rvest)
url <- "https://www.clinicaltrialsregister.eu/ctr-search/trial/2016-000172-19/results" # replace with the URL of the website you want to scrape
webpage <- read_html(url)
# Extract the data using the provided CSS selector and classes:
table_selector <- "table#subjectDispositionSection.sectionTable"
label_selector <- ".labelColumn"
value_selector <- ".valueColumn"
label_nodes <- html_nodes(webpage, paste(table_selector, label_selector, sep = " "))
value_nodes <- lapply(label_nodes, function(node) {
html_node(node, xpath = "following-sibling::*[1][contains(@class, 'valueColumn')]")
})
labels <- trimws(html_text(label_nodes))
values <- trimws(sapply(value_nodes, html_text))
# Combine the extracted labels and values into a data frame:
data <- data.frame(Variable = labels, Value = values, stringsAsFactors = FALSE)
----
THe problem is that the code so far is extracting the second part as well, but it is intended to get the first part only.
It should ignore any data inside those embedded tables (class="embeddedTableContainer").
Here's the HTML code for one of those embedded tables to be ignored, in case it is helpful:
---
<tr>
<td colspan="2" class="embeddedTableContainer">
<div style="overflow-x: auto; width:960px;">
<table class="embeddedTable">
<tbody><tr>
<td class="header labelColumn">
<div>Number of subjects in period 1 </div>
</td>
<td class="valueColumn centeredValueColumn">Empagliflozin</td>
<td class="valueColumn centeredValueColumn">Placebo</td>
</tr>
<tr>
<td class="labelColumn">
<div> Started</div>
</td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
</tr>
<tr>
<td class="labelColumn">
<div> Completed</div>
</td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
<td class="valueColumn centeredValueColumn"><div>22</div></td>
</tr>
</tbody></table>
</div>
</td>
</tr>
|
a4edd5c5ae6b87c1f07bd555e58b098b
|
{
"intermediate": 0.47944313287734985,
"beginner": 0.3824520409107208,
"expert": 0.1381048560142517
}
|
2,364
|
Güven array of integwrss goal is to make all wlwmwnts equal values by applying some number of operations
Rules for operations are to apply an operation choose a prefix of array integwrw x
Add x to each elwmwnt inside this prefix
Cost of this operation is absolute x
Example arrray is 1 4 2 1 and prefix is 2 x = -3 array bwcomes -2 1 2 1 so add -3 to first 2 wlwmwnts cost is absolute -3 which is 3
Total cost is sum of all cost of all operations
Find the minimum cost of making all the wlwmwnts equal
Java code function takes integer list are and prints the cost
|
2d4840c41cd5f485cfc3c07c58f9507e
|
{
"intermediate": 0.4035484492778778,
"beginner": 0.3341241776943207,
"expert": 0.2623273730278015
}
|
2,365
|
if operator on python
|
1d8261b88eb01489a0565b8f11131d31
|
{
"intermediate": 0.2821483910083771,
"beginner": 0.3054123818874359,
"expert": 0.4124392867088318
}
|
2,366
|
please do all steps
use tampermonkey coding in this site:http://www.drtorkzadeh.com and do the followin steps.
zero step : this code only and only run in first tab of browser not other tabs
first step :make sure this code is only run in main page of site , not branches of site.
second step :check the time, if time between 22:30 to 23:30 then run the next step else wait for desired time.
third step : check whole page for exactcly this text "رزرو فقط در ساعات 9 صبح تا 10 صبح امکان پذیر است" and then do this , if found the exact text wait 5 secondes then reload the page and do it again and again.
fourth step : if you don not found that exact text , go to second tab and click this element "TAG POS=1 TYPE=INPUT:SUBMIT FORM=ACTION:http://www.drtorkzadeh.com/ ATTR=ID:submit".
fifth step : break
|
de938e5a7e77a965b9259b8ce97764d7
|
{
"intermediate": 0.437166690826416,
"beginner": 0.20226357877254486,
"expert": 0.3605698049068451
}
|
2,367
|
hi ther
|
4f35992b5bb6fcff72d409697101222a
|
{
"intermediate": 0.32207369804382324,
"beginner": 0.24952194094657898,
"expert": 0.4284043610095978
}
|
2,368
|
How do I convert a nested python array into a pytorch tensor?
|
bfe6897133fc5c0e4fb89318e60f6c80
|
{
"intermediate": 0.4545716941356659,
"beginner": 0.13573060929775238,
"expert": 0.40969768166542053
}
|
2,369
|
In an oncology clinical trial, how to predict additional survival time for remaining patients who are still alive based on data observed up to date? We need to predict the additional time they could survive afterwards in addition to the time these alive patients have already survived taking into considerations of their baseline age and gender. Please provide example R code with simulated data, and step by step explanation.
|
7bb60c104c17b9eb13fea0b1d29b678e
|
{
"intermediate": 0.4945099353790283,
"beginner": 0.18361838161945343,
"expert": 0.32187163829803467
}
|
2,370
|
<html>
<head>
<link rel="icon" href="http://www.iconarchive.com/download/i87143/graphicloads/colorful-long-shadow/Restaurant.ico">
<TITLE> Restaurante Fabuloso </TITLE>
</head>
<body TEXT="blue" >
<center>
<I>
<FONT FACE="Harlow Solid Italic">
<P> <FONT SIZE="7"> Restaurante Fabuloso </FONT> </P>
</I>
<script type="text/javascript">
img0= new Image();
img0.src="restaurante.jpg";
img1= new Image();
img1.src="carne.jpg";
img2= new Image();
img2.src="carnea.jpg";
img3= new Image();
img3.src="carneb.jpg";
img4= new Image();
img4.src="carnes.jpg";
img5= new Image();
img5.src="carneab.jpg";
img6= new Image();
img6.src="carneas.jpg";
img7= new Image();
img7.src="carnebs.jpg";
img8= new Image();
img8.src="carneabs.jpg";
img9= new Image();
img9.src="peixe.jpg";
img10= new Image();
img10.src="peixea.jpg";
img11= new Image();
img11.src="peixeb.jpg";
img12= new Image();
img12.src="peixes.jpg";
img13= new Image();
img13.src="peixeab.jpg";
img14= new Image();
img14.src="peixeas.jpg";
img15= new Image();
img15.src="peixebs.jpg";
img16= new Image();
img16.src="peixeabs.jpg";
img17= new Image();
img17.src="vegetariano.jpg";
img18= new Image();
img18.src="vegeta.jpg";
img19= new Image();
img19.src="vegetb.jpg";
img20= new Image();
img20.src="vegets.jpg";
img21= new Image();
img21.src="vegetab.jpg";
img22= new Image();
img22.src="vegetas.jpg";
img23= new Image();
img23.src="vegetbs.jpg";
img24= new Image();
img24.src = "vegetabs.jpg";
img25 = new Image();
img25.src = "carnem.jpg";
img26 = new Image();
img26.src = "carnema.jpg";
img27 = new Image();
img27.src = "carnemb.jpg";
img28 = new Image();
img28.src = "carnems.jpg";
img29 = new Image();
img29.src = "carnemab.jpg";
img30 = new Image();
img30.src = "carnemas.jpg";
img31 = new Image();
img31.src = "carnembs.jpg";
img32 = new Image();
img32.src = "carnemasb.jpg";
img33 = new Image();
img33.src = "peixem.jpg";
img34 = new Image();
img34.src = "peixema.jpg";
img35 = new Image();
img35.src = "peixemb.jpg";
img36 = new Image();
img36.src = "peixems.jpg";
img37 = new Image();
img37.src = "peixemab.jpg";
img38 = new Image();
img38.src = "peixemas.jpg";
img39 = new Image();
img39.src = "peixembs.jpg";
img40 = new Image();
img40.src = "peixemasb.jpg";
img41 = new Image();
img41.src = "vegetm.jpg";
img42 = new Image();
img42.src = "vegetma.jpg";
img43 = new Image();
img43.src = "vegetmb.jpg";
img44 = new Image();
img44.src = "vegetms.jpg";
img45 = new Image();
img45.src = "vegetmab.jpg";
img46 = new Image();
img46.src = "vegetmas.jpg";
img47 = new Image();
img47.src = "vegetmbs.jpg";
img48 = new Image();
img48.src = "vegetmasb.jpg";
function reset()
{
var acompanhamento = ['arroz', 'batatas', 'salada', 'massa'];
// Limpar as checkboxes relativas ao acompanhamento
for (var each in acompanhamento){
document.getElementById(acompanhamento[each]).checked = false;
}
// Faz reset à lista pendente ( Tipo de prato )
document.getElementById("prato").selectedIndex = 0;
um.src = img0.src; // Coloca imagem por defeito
}
function pedido()
{
var pratoprincipal = document.getElementById("prato");
var tipodeprato = pratoprincipal.options[pratoprincipal.selectedIndex].value;
var arroz = document.getElementById('arroz').checked;
var Batata = document.getElementById('batatas').checked;
var Salada = document.getElementById('salada').checked;
var massa = document.getElementById('massa').checked;
if ((tipodeprato == "carne") && (arroz == false) && (Batata == false) && (Salada == false) && (massa == false) ) { um.src=img1.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == false) && (Salada == false) && (massa == false)) { um.src=img2.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == true) && (Salada == false) && (massa == false) ) { um.src=img3.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == false) && (Salada == true) && (massa == false) ) { um.src=img4.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == true) && (Salada == false) && (massa == false)) { um.src=img5.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == false) && (Salada == true) && (massa == false)) { um.src=img6.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == true) && (Salada == true) && (massa == false) ) { um.src=img7.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == true) && (Salada == true) && (massa == false)) { um.src = img8.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == false) && (Salada == false) && (massa == true)) { um.src = img25.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == false) && (Salada == false) && (massa == true)) { um.src = img26.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == true) && (Salada == false) && (massa == true)) { um.src = img27.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == false) && (Salada == true) && (massa == true)) { um.src = img28.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == true) && (Salada == false) && (massa == true)) { um.src = img29.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == false) && (Salada == true) && (massa == true)) { um.src = img30.src; }
if ((tipodeprato == "carne") && (arroz == false) && (Batata == true) && (Salada == true) && (massa == true)) { um.src = img31.src; }
if ((tipodeprato == "carne") && (arroz == true) && (Batata == true) && (Salada == true) && (massa == true)) { um.src = img32.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == false) && (Salada == false) && (massa == false) ) { um.src=img9.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == false) && (Salada == false) && (massa == false) ) { um.src=img10.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == true) && (Salada == false) && (massa == false) ) { um.src=img11.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == false) && (Salada == true) && (massa == false) ) { um.src=img12.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == true) && (Salada == false) && (massa == false) ) { um.src=img13.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == false) && (Salada == true) && (massa == false) ) { um.src=img14.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == true) && (Salada == true) && (massa == false) ) { um.src=img15.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == true) && (Salada == true) && (massa == false)) { um.src = img16.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == false) && (Salada == false) && (massa == true)) { um.src = img33.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == false) && (Salada == false) && (massa == true)) { um.src = img34.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == true) && (Salada == false) && (massa == true)) { um.src = img35.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == false) && (Salada == true) && (massa == true)) { um.src = img36.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == true) && (Salada == false) && (massa == true)) { um.src = img37.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == false) && (Salada == true) && (massa == true)) { um.src = img38.src; }
if ((tipodeprato == "peixe") && (arroz == false) && (Batata == true) && (Salada == true) && (massa == true)) { um.src = img39.src; }
if ((tipodeprato == "peixe") && (arroz == true) && (Batata == true) && (Salada == true) && (massa == true)) { um.src = img40.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == false) && (Salada == false) && (massa == false) ) { um.src=img17.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == false) && (Salada == false) && (massa == false) ) { um.src=img18.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == true) && (Salada == false) && (massa == false) ) { um.src=img19.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == false) && (Salada == true) && (massa == false) ) { um.src=img20.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == true) && (Salada == false) && (massa == false) ) { um.src=img21.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == false) && (Salada == true) && (massa == false) ) { um.src=img22.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == true) && (Salada == true) && (massa == false) ) { um.src=img23.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == true) && (Salada == true) && (massa == false)) { um.src = img24.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == false) && (Salada == false) && (massa == true)) { um.src = img41.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == false) && (Salada == false) && (massa == true)) { um.src = img42.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == true) && (Salada == false) && (massa == true)) { um.src = img43.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == false) && (Salada == true) && (massa == true)) { um.src = img44.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == true) && (Salada == false) && (massa == true)) { um.src = img45.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == false) && (Salada == true) && (massa == true)) { um.src = img46.src; }
if ((tipodeprato == "vegetariano") && (arroz == false) && (Batata == true) && (Salada == true) && (massa == true)) { um.src = img47.src; }
if ((tipodeprato == "vegetariano") && (arroz == true) && (Batata == true) && (Salada == true) && (massa == true)) { um.src = img48.src; }
}
reset();
</script>
<!-- Lista pendente para escolher o tipo de prato -->
<select id="prato">
<option value="" disabled="disabled" selected="selected">Escolha um prato</option>
<option value="carne"> Carne </option>
<option value="peixe"> Peixe </option>
<option value="vegetariano"> Vegetariano </option>
</select>
<br>
<H4>
<input name="arroz" id="arroz" type="checkbox"/> Arroz
<input name="batatas" id="batatas" type="checkbox"/> Batatas
<input name="salada" id="salada" type="checkbox"/> Salada
<input name="massa" id="massa" type="checkbox"/> Massa
</H4>
<br>
<input type="button" value="Reset" onclick="reset();" ;>
<input type="button" value="Fazer pedido" onclick="pedido();" ;>
<br><br>
<img src="restaurante.jpg" border=1 width=400 height=400 name="um"></img>
<img src="preço.jpg" border=1 width=400 height=400 name="um"></img>
</center>
</body>
</html>
Act has a programing professor, and explain me on detail why is my code of javascript not working, consider me an aprentice
|
b9811722c08566132e3fa7333a63b189
|
{
"intermediate": 0.3021000325679779,
"beginner": 0.43744799494743347,
"expert": 0.2604519724845886
}
|
2,371
|
I have an error on my code, my frontend won't be able to access my backend, but when I only run my backend the code works perfectly.
ntent.js:25 AxiosError {message: 'Network Error', name: 'AxiosError', code: 'ERR_NETWORK', config: {…}, request: XMLHttpRequest, …}code: "ERR_NETWORK"config: {transitional: {…}, adapter: Array(2), transformRequest: Array(1), transformResponse: Array(1), timeout: 0, …}message: "Network Error"name: "AxiosError"request: XMLHttpRequest {onreadystatechange: null, readyState: 4, timeout: 0, withCredentials: false, upload: XMLHttpRequestUpload, …}stack: "AxiosError: Network Error\n at XMLHttpRequest.handleError (http://localhost:3000/static/js/bundle.js:41314:14)"[[Prototype]]: Error
(névtelen) @ Content.js:25
Promise.catch (aszinkron)
componentDidMount @ Content.js:23
invokeLayoutEffectMountInDEV @ react-dom.development.js:25133
invokeEffectsInDev @ react-dom.development.js:27351
commitDoubleInvokeEffectsInDEV @ react-dom.development.js:27327
commitRootImpl @ react-dom.development.js:26883
commitRoot @ react-dom.development.js:26682
finishConcurrentRender @ react-dom.development.js:25981
performConcurrentWorkOnRoot @ react-dom.development.js:25809
workLoop @ scheduler.development.js:266
flushWork @ scheduler.development.js:239
performWorkUntilDeadline @ scheduler.development.js:533
Content.js:36 XMLHttpRequest {onreadystatechange: null, readyState: 4, timeout: 0, withCredentials: false, upload: XMLHttpRequestUpload, …}
(névtelen) @ Content.js:36
Promise.catch (aszinkron)
componentDidMount @ Content.js:23
invokeLayoutEffectMountInDEV @ react-dom.development.js:25133
invokeEffectsInDev @ react-dom.development.js:27351
commitDoubleInvokeEffectsInDEV @ react-dom.development.js:27327
commitRootImpl @ react-dom.development.js:26883
commitRoot @ react-dom.development.js:26682
finishConcurrentRender @ react-dom.development.js:25981
performConcurrentWorkOnRoot @ react-dom.development.js:25809
workLoop @ scheduler.development.js:266
flushWork @ scheduler.development.js:239
performWorkUntilDeadline @ scheduler.development.js:533
Content.js:41 {transitional: {…}, adapter: Array(2), transformRequest: Array(1), transformResponse: Array(1), timeout: 0, …}
(névtelen) @ Content.js:41
Promise.catch (aszinkron)
componentDidMount @ Content.js:23
invokeLayoutEffectMountInDEV @ react-dom.development.js:25133
invokeEffectsInDev @ react-dom.development.js:27351
commitDoubleInvokeEffectsInDEV @ react-dom.development.js:27327
commitRootImpl @ react-dom.development.js:26883
commitRoot @ react-dom.development.js:26682
finishConcurrentRender @ react-dom.development.js:25981
performConcurrentWorkOnRoot @ react-dom.development.js:25809
workLoop @ scheduler.development.js:266
flushWork @ scheduler.development.js:239
performWorkUntilDeadline @ scheduler.development.js:533
xhr.js:247 GET http://localhost:5000/Feladatok/ net::ERR_CONNECTION_REFUSED
dispat
import React, { Component } from 'react';
import axios from 'axios';
const Feladatok = props => (
<tr>
<td>{props.Feladatok.Kerdes}</td>
<td>{props.Feladatok.Valasz1}</td>
<td>{props.Feladatok.Valasz2}</td>
</tr>
)
export default class FeladatokList extends Component {
constructor(props) {
super(props);
this.state = {Feladatok: []};
}
componentDidMount() {
axios.get('http://localhost:5000/Feladatok/')
.then(response => {
this.setState({ Feladatok: response.data })
})
.catch((error) => {
console.log(error);
})
}
feladatList() {
return this.state.Feladatok.map(currentfeladat => {
return <Feladatok Feladatok={currentfeladat} key={currentfeladat._id}/>;
})
}
render() {
return (
<div>
<h3>Logged feladatok</h3>
<table className="table">
<thead className="thead-light">
<tr>
<th>Username</th>
<th>Description</th>
<th>Duration</th>
<th>Date</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
{ this.feladatList() }
</tbody>
</table>
</div>
)
}
}
const express = require('express');
const cors = require('cors');
const mongoose = require('mongoose');
require('dotenv').config();
const app = express();
const port = process.env.PORT;
app.use(cors());
app.use(express.json());
const uri = process.env.REACT_APP_MONGODB;
mongoose.connect(uri, { useNewUrlParser: true, useCreateIndex: true }
);
const connection = mongoose.connection;
connection.once('open', () => {
console.log("MongoDB database connection established successfully");
})
const feladatokRouter = require('./routes/Feladatok');
app.use('/Feladatok', feladatokRouter);
app.listen(port, () => {
console.log(`Server is running on port: ${port}`);
});
const router = require('express').Router();
let Feladatok = require('../models/Feladatok.model');
router.route('/').get((req, res) => {
Feladatok.find()
.then(Feladatok => {
console.log(`Found ${Feladatok.length} documents`);
res.json(Feladatok);
})
.catch(err => res.status(400).json('Error: ' + err));
});
module.exports = router;
|
aee8f7ee18847fddc0ca3108803da6d7
|
{
"intermediate": 0.3647767901420593,
"beginner": 0.34175893664360046,
"expert": 0.2934642732143402
}
|
2,372
|
What's the best way to use OverlapMultiByObjectType in a ground actor for GAS in UE5?
|
aa264838c9a233187afb4ffe62850783
|
{
"intermediate": 0.5641157031059265,
"beginner": 0.11857837438583374,
"expert": 0.31730595231056213
}
|
2,373
|
In an oncology clinical trial, how to predict individual additional survival time for each of remaining patients who are still alive based on data observed up to date? We need to predict the additional time they could survive afterwards in addition to the time these alive patients have already survived taking into considerations of their baseline age and gender. I need only one unique averaged predicted time for each alive patient by weighting each time point after current observed time by the probability of survive at this time point. Please provide example R code with simulated data, and step by step explanation.
|
4c284a967b4b12af78671cee454ca61b
|
{
"intermediate": 0.30854690074920654,
"beginner": 0.18439224362373352,
"expert": 0.5070608258247375
}
|
2,374
|
Write a python function that reads a file then extracts its individual bits and saves them in a new file that is going to be presumably 8 times larger than the original.
|
aa384e38e0187dbda3e66a5c2ce0212d
|
{
"intermediate": 0.44512829184532166,
"beginner": 0.1220935732126236,
"expert": 0.43277817964553833
}
|
2,375
|
please do all steps
use tampermonkey coding in this site:http://www.drtorkzadeh.com and do the followin steps.
zero step : this code only and only run in first tab of browser not other tabs
first step :make sure this code is only run in main page of site , not branches of site.
second step :check the time, if time between 22:30 to 23:30 then run the next step else wait for desired time.
third step : check whole page for exactcly this text "رزرو فقط در ساعات 9 صبح تا 10 صبح امکان پذیر است" and then do this , if found the exact text wait 5 secondes then reload the page and do it again and again.
fourth step : if you don not found that exact text
fifth step : go to next tab
sixth step : click this element "TAG POS=1 TYPE=INPUT:SUBMIT FORM=ACTION:http://www.drtorkzadeh.com/ ATTR=ID:submit" and then stop he code.
|
d9f134fde74fcafa1ef5bfacbc49a002
|
{
"intermediate": 0.4143126606941223,
"beginner": 0.27849969267845154,
"expert": 0.30718764662742615
}
|
2,376
|
I have a code like this:
<div className='tantargyButton-container'>
<Link to={/TipusValaszto/${tantargy}}>
<div className='tantargyButton'>
<img className='tantargyImage' src={require('../img/'+img+'.png')}/>
<span className='tantargySpan'>{tantargy}</span>
</div>
</Link>
</div>
But I want to modify it like this:
<div className='tantargyButton-container'>
<Link to={/TipusValaszto/${tantargy}}>
<div className='tantargyButton'>
<div><img className='tantargyImage' src={require('../img/'+img+'.png')}/></div>
<div><span className='tantargySpan'>{tantargy}</span></div>
</div>
</Link>
</div>
The problem is that if I add the divs all my styles go wrong. Why?
|
3eae992bd20a617990571ccb9830fe5d
|
{
"intermediate": 0.4047565460205078,
"beginner": 0.4038287103176117,
"expert": 0.1914147585630417
}
|
2,377
|
I have a code like this in react.js:
<div className='tantargyButton-container'>
<Link to={/TipusValaszto/${tantargy}}>
<div className='tantargyButton'>
<img className='tantargyImage' src={require('../img/'+img+'.png')}/>
<span className='tantargySpan'>{tantargy}</span>
</div>
</Link>
</div>
But I want to modify it like this:
<div className='tantargyButton-container'>
<Link to={/TipusValaszto/${tantargy}}>
<div className='tantargyButton'>
<div><img className='tantargyImage' src={require('../img/'+img+'.png')}/></div>
<div><span className='tantargySpan'>{tantargy}</span></div>
</div>
</Link>
</div>
The problem is that if I add the divs all my styles go wrong. Why?
|
0ec0fedef153b62b512be1b1050c9303
|
{
"intermediate": 0.47767481207847595,
"beginner": 0.33041074872016907,
"expert": 0.1919144093990326
}
|
2,378
|
How do I turn a python list of integers into a bytestring?
|
27885639c2ac04d8fb4a5b8599b40ac1
|
{
"intermediate": 0.45329391956329346,
"beginner": 0.1369323581457138,
"expert": 0.40977367758750916
}
|
2,379
|
understand this assingment and then proceed to look ath the HTML and CSS of my home page. Swap the positions of my google maps embed and my social pages containers and also add text somewhere above or next to google maps embed that says "Where to find us"
Assingment:
Introduction
You have been asked to help develop a new website for a new camping equipment retailer
that is moving to online sales (RCC – Retail Camping Company). The company has been
trading from premises but now wants an online presence. The website currently doesn’t take
payments and orders online although the management hopes to do this in the future. The
website should be visual and should be viewable on different devices.
Scenario
The Retail Camping Company has the following basic requirements for the contents of the
website:
• Home Page:
o This page will introduce visitors to the special offers that are available and it
should include relevant images of camping equipment such as tents, cookers
and camping gear such as furniture and cookware.
o The text should be minimal, visuals should be used to break up the written
content.
o This page should include a navigation bar (with links inside the bar and hover
over tabs), slide show, header, sections, footer.
o Modal pop-up window that is displayed on top of the current page.
• Camping Equipment: This page will provide a catalogue of products that are on sale
such as tents, camping equipment and cookware.
• Furniture: This page should give customers a catalogue of camping furniture that is
on sale.
• Reviews: This page should include a forum where the registered members can
review the products they have purchased.
• Basket: This page should allow the customers to add camping equipment to their
basket which is saved and checked out later.
• Offers and Packages: This page provides a catalogue of the most popular
equipment that is sold
• Ensure you embed at least THREE (3) different plug ins such as java applets, display
maps and scan for viruses.
At the initial stage of the development process, you are required to make an HTML/CSS
prototype of the website that will clearly show the retail camping company how the final
website could work.
Content hasn’t been provided. Familiarise yourself with possible types of content by
choosing an appropriate organisation (by using web resources) to help you understand the
context in which the company operates. However, do not limit yourself to web-based sources
of information. You should also use academic, industry and other sources. Suitable content
for your prototype can be found on the web e.g. images. Use creative commons
(http://search.creativecommons.org/) or Wikimedia Commons
(http://commons.wikimedia.org/wiki/Main_Page) as a starting point to find content.
Remember the content you include in your site must be licensed for re-use. Do not spend
excessive amounts of time researching and gathering content. The purpose is to provide a
clear indication of how the final website could look and function. The client would provide
the actual content at a later point if they are happy with the website you have proposed.
Students must not use templates that they have not designed or created in the website
assessment. This includes website building applications, free HTML5 website templates,
or any software that is available to help with the assessment. You must create your own
HTML pages including CSS files and ideally you will do this through using notepad or
similar text editor.
Aim
The aim is to create a website for the Retail Camping Company (RCC).
Task 1– 25 Marks
HTML
The website must be developed using HTML 5 and feature a minimum of SIX (6) interlinked
pages which can also be viewed on a mobile device. The website must feature the content
described above and meet the following criteria:
• Researched relevant content to inform the design of the website.
• Be usable in at least TWO (2) different web browsers including being optimised for
mobile devices and responsive design. You should consult your tutor for guidance on
the specific browsers and versions you should use.
• Include relevant images of camping equipment you have selected from your research
including use of headers, sections and footers.
• Home Page:
o Minimal text with visuals to break up the written content.
o Navigation bar (with links inside the bar and hover over tabs)
o Responsive design and at least one plugin
o Header
o Sections
o Footer (including social media links)
• Reviews page: with responsive contact section including first name, last name, and
submit button (through email)
• Camping Equipment, Furniture Page and Offers and Package page with responsive
resize design and including TWO (2) different plug ins, catalogue style and animated
text search for products.
• Basket – Page created which allows the customers to add products to their basket.
Task 2 – 25 Marks
CSS
Create an external CSS file that specifies the design for the website. Each of the HTML
pages must link to this CSS file. There should be no use of the style attribute or the <style>
element in the website.
The boxes on the home page should include relevant elements such as border radius, boxshadow, hover etc.
Include on-page animated text search to allow customers to search for different products.
HTML:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Cabin:wght@400;700&display=swap">
<link rel="stylesheet" href="style/style.css" />
<title>Camping Equipment - Retail Camping Company</title>
</head>
<body>
<header>
<div class="nav-container">
<img src="C:/Users/Kaddra52/Desktop/DDW/assets/images/logo.svg" alt="Logo" class="logo">
<h1>Retail Camping Company</h1>
<nav>
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="camping-equipment.html">Camping Equipment</a></li>
<li><a href="furniture.html">Furniture</a></li>
<li><a href="reviews.html">Reviews</a></li>
<li><a href="basket.html">Basket</a></li>+
<li><a href="offers-and-packages.html">Offers and Packages</a></li>
</ul>
</nav>
</div>
</header>
<!-- Home Page -->
<main>
<section>
<!-- Insert slide show here -->
<div class="slideshow-container">
<div class="mySlides">
<img src="https://via.placeholder.com/600x400" alt="Tents" style="width:100%">
</div>
<div class="mySlides">
<img src="https://via.placeholder.com/600x400" alt="Cookers" style="width:100%">
</div>
<div class="mySlides">
<img src="https://via.placeholder.com/600x400" alt="Camping Gear" style="width:100%">
</div>
</div>
</section>
<section>
<!-- Display special offers and relevant images -->
<div class="special-offers-container">
<div class="special-offer">
<img src="https://via.placeholder.com/200x200" alt="Tent Offer">
<p>20% off premium tents!</p>
</div>
<div class="special-offer">
<img src="https://via.placeholder.com/200x200" alt="Cooker Offer">
<p>Buy a cooker, get a free utensil set!</p>
</div>
<div class="special-offer">
<img src="https://via.placeholder.com/200x200" alt="Furniture Offer">
<p>Save on camping furniture bundles!</p>
</div>
</div>
</section>
<section class="buts">
<!-- Modal pop-up window content here -->
<button id="modalBtn">Special Offer!</button>
<div id="modal" class="modal">
<div class="modal-content">
<span class="close">×</span>
<p>Sign up now and receive 10% off your first purchase!</p>
</div>
</div>
</section>
</main>
<footer>
<div class="footer-container">
<div class="footer-item">
<p>Follow us on social media:</p>
<ul class="social-links">
<li><a href="https://www.facebook.com">Facebook</a></li>
<li><a href="https://www.instagram.com">Instagram</a></li>
<li><a href="https://www.twitter.com">Twitter</a></li>
</ul>
</div>
<div class="footer-item">
<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d3843.534694025997!2d14.508501137353216!3d35.89765941458404!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x130e452d3081f035%3A0x61f492f43cae68e4!2sCity%20Gate!5e0!3m2!1sen!2smt!4v1682213255989!5m2!1sen!2smt" width="800" height="200" style="border:0;" allowfullscreen="" loading="lazy" referrerpolicy="no-referrer-when-downgrade"></iframe>
</div>
<div class="footer-item">
<p>Subscribe to our newsletter:</p>
<form action="subscribe.php" method="post">
<input type="email" name="email" placeholder="Enter your email" required>
<button type="submit">Subscribe</button>
</form>
</div>
</div>
</footer>
<script>
// Get modal element
var modal = document.getElementById('modal');
// Get open model button
var modalBtn = document.getElementById('modalBtn');
// Get close button
var closeBtn = document.getElementsByClassName('close')[0];
// Listen for open click
modalBtn.addEventListener('click', openModal);
// Listen for close click
closeBtn.addEventListener('click', closeModal);
// Listen for outside click
window.addEventListener('click', outsideClick);
// Function to open modal
function openModal() {
modal.style.display = 'block';
}
// Function to close modal
function closeModal() {
modal.style.display = 'none';
}
// Function to close modal if outside click
function outsideClick(e) {
if (e.target == modal) {
modal.style.display = 'none';
}
}
</script>
</body>
</html>
CSS:
html, body, h1, h2, h3, h4, p, a, ul, li, div, main, header, section, footer, img {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
box-sizing: border-box;
}
body {
font-family: 'Cabin', sans-serif;
line-height: 1.5;
color: #333;
width: 100%;
margin: 0;
padding: 0;
min-height: 100vh;
flex-direction: column;
display: flex;
background-image: url("../assets/images/cover.jpg");
background-size: cover;
}
header {
background: #00000000;
padding: 0.5rem 2rem;
text-align: center;
color: #32612D;
font-size: 1.2rem;
}
main{
flex-grow: 1;
}
.nav-container {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
}
.logo {
width: 50px;
height: auto;
margin-right: 1rem;
}
h1 {
flex-grow: 1;
text-align: left;
}
nav ul {
display: inline;
list-style: none;
}
nav ul li {
display: inline;
margin-left: 1rem;
}
nav ul li a {
text-decoration: none;
color: #32612D;
}
nav ul li a:hover {
color: #000000;
}
@media screen and (max-width: 768px) {
.nav-container {
flex-direction: column;
}
h1 {
margin-bottom: 1rem;
}
}
nav ul li a {
position: relative;
}
nav ul li a::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-color: #000;
transform: scaleX(0);
transition: transform 0.3s;
}
nav ul li a:hover::after {
transform: scaleX(1);
}
.slideshow-container {
width: 100%;
position: relative;
margin: 1rem 0;
}
.mySlides {
display: none;
}
.mySlides img {
width: 100%;
height: auto;
}
.special-offers-container {
display: flex;
justify-content: space-around;
align-items: center;
flex-wrap: wrap;
margin: 1rem 0;
}
.special-offer {
width: 200px;
padding: 1rem;
text-align: center;
margin: 1rem;
background-color: #ADC3AB;
border-radius: 5px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
transition: all 0.3s ease;
}
.special-offer:hover {
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
transform: translateY(-5px);
}
.special-offer img {
width: 100%;
height: auto;
margin-bottom: 0.5rem;
border-radius: 5px;
}
.modal {
display: none;
position: fixed;
left: 0;
top: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
z-index: 1;
overflow: auto;
align-items: center;
}
.modal-content {
background-color: #fefefe;
padding: 2rem;
margin: 10% auto;
width: 30%;
min-width: 300px;
max-width: 80%;
text-align: center;
border-radius: 5px;
box-shadow: 0 1px 8px rgba(0, 0, 0, 0.1);
}
.buts{
text-align: center;
}
.close {
display: block;
text-align: right;
font-size: 2rem;
color: #333;
cursor: pointer;
}
footer {
background: #32612D;
padding: 1rem;
text-align: center;
margin-top: auto;
}
.footer-container {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
}
.footer-item {
margin: 1rem 2rem;
}
footer p {
color: #fff;
margin-bottom: 1rem;
}
footer ul {
list-style: none;
}
footer ul li {
display: inline;
margin: 0.5rem;
}
footer ul li a {
text-decoration: none;
color: #fff;
}
@media screen and (max-width: 768px) {
.special-offers-container {
flex-direction: column;
}
}
@media screen and (max-width: 480px) {
h1 {
display: block;
margin-bottom: 1rem;
}
}
.catalog {
display: flex;
flex-wrap: wrap;
justify-content: center;
margin: 2rem 0;
}
.catalog-item {
width: 200px;
padding: 1rem;
margin: 1rem;
background-color: #ADC3AB;
border-radius: 5px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
text-align: center;
}
.catalog-item:hover {
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
}
.catalog-item img {
width: 100%;
height: auto;
margin-bottom: 0.5rem;
border-radius: 5px;
}
.catalog-item h3 {
margin-bottom: 0.5rem;
}
.catalog-item p {
margin-bottom: 0.5rem;
}
.catalog-item button {
background-color: #32612D;
color: #fff;
padding: 0.5rem;
border: none;
border-radius: 5px;
cursor: pointer;
}
.catalog-item button:hover {
background-color: #ADC3AB;
}
footer form {
display: inline-flex;
align-items: center;
}
footer input[type="email"] {
padding: 0.5rem;
border: none;
border-radius: 5px;
margin-right: 0.5rem;
}
footer button {
background-color: #ADC3AB;
color: #32612D;
padding: 0.5rem;
border: none;
border-radius: 5px;
cursor: pointer;
}
footer button:hover {
background-color: #32612D;
color: #fff;
}
.special-offer .social-share {
display: inline-flex;
align-items: center;
justify-content: space-around;
margin-top: 0.5rem;
}
.special-offer .social-share a {
text-decoration: none;
color: #32612D;
}
.special-offer .social-share a:hover {
color: #ADC3AB;
}
|
fb694075316a4c2492414742bb1b0de4
|
{
"intermediate": 0.2989216446876526,
"beginner": 0.3672041594982147,
"expert": 0.3338741958141327
}
|
2,380
|
Let's play a game
|
2e0a5afe0c8a18d22cd272de3fe0f8bd
|
{
"intermediate": 0.3248334527015686,
"beginner": 0.4326627254486084,
"expert": 0.2425038069486618
}
|
2,381
|
How do I make an FHitResult using a FVector?
|
7a3948fe69f174794e23e1459967f17c
|
{
"intermediate": 0.41143542528152466,
"beginner": 0.14242038130760193,
"expert": 0.446144163608551
}
|
2,382
|
Hi There
|
20f105e677c44502b9dfbfbe768c1b1c
|
{
"intermediate": 0.3316761553287506,
"beginner": 0.266131728887558,
"expert": 0.402192085981369
}
|
2,383
|
In python, how do I convert a list of 1s and 0s into a bytearray?
|
97a3ff97bdde2b35fb5f068805375778
|
{
"intermediate": 0.4224061071872711,
"beginner": 0.13894298672676086,
"expert": 0.4386509954929352
}
|
2,384
|
Проведи ревью кода:
#include "cameramodel.h"
#include <QMediaDevices>
#include <QCameraDevice>
#include "common/dataformat.h"
#include "interfaces/iphotosconfigure.h"
namespace {
constexpr auto NULL_RANGE_DIFF(-1);
constexpr auto RESOLUTION_ASPECT(4.0/3.0);
}
CameraModel::CameraModel(QObject *parent) : QObject(parent)
{
}
void CameraModel::setupResolution(bool ready) {
if (ready) {
m_imageCapture->setResolution(resolveResolution());
}
}
QSize CameraModel::maxSupportedResolution() const
{
const QList<QSize> resolutions = supportedResolutions();
QSize res = *std::max_element(resolutions.constBegin(), resolutions.constEnd(), [](const QSize &ls, const QSize &rs) -> bool {
return ls.width() < rs.width();
});
return res;
}
QSize CameraModel::resolveResolution() const
{
int sideLength = photosConfigure()->maxPhotoSize();
if (sideLength == 0) {
return maxSupportedResolution();
}
QSize res;
int minRangeDiff = NULL_RANGE_DIFF;
const QList<QSize> resolutions = supportedResolutions();
for (const QSize &resolution : resolutions) {
int maxResolutionSize = std::max(resolution.width(), resolution.height());
int rangeDiff = std::abs(maxResolutionSize - sideLength);
if (minRangeDiff == NULL_RANGE_DIFF) {
minRangeDiff = rangeDiff;
}
if (rangeDiff <= minRangeDiff) {
minRangeDiff = rangeDiff;
res = resolution;
}
}
return res;
}
QList<QSize> CameraModel::supportedResolutions() const
{
QList<QSize> res;
const QList<QSize> allResolutions = QMediaDevices::defaultVideoInput().photoResolutions();
for (const QSize &resolution : allResolutions) {
double aspect = static_cast<double>(resolution.width()) / static_cast<double>(resolution.height());
if (DataFormat::doubleEqual(aspect, RESOLUTION_ASPECT)) {
res << resolution;
}
}
return res.isEmpty() ? allResolutions : res;
}
QImageCapture *CameraModel::imageCapture() const
{
return m_imageCapture;
}
void CameraModel::setImageCapture(QImageCapture *newImageCapture)
{
if (m_imageCapture == newImageCapture) {
return;
}
disconnect(m_imageCapture, &QImageCapture::readyForCaptureChanged, this, &CameraModel::setupResolution);
m_imageCapture = newImageCapture;
connect(m_imageCapture, &QImageCapture::readyForCaptureChanged, this, &CameraModel::setupResolution);
emit imageCaptureChanged();
}
|
8d8f0f68efd01fc77bbe53d03e04f4fc
|
{
"intermediate": 0.3203921914100647,
"beginner": 0.3814273476600647,
"expert": 0.2981804311275482
}
|
2,385
|
This is my dataset ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------date sale
1995 Jan 742
1995 Feb 697
1995 Mar 776
1995 Apr 898
1995 May 1030
1995 Jun 1107
1995 Jul 1165
1995 Aug 1216
1995 Sep 1208
1995 Oct 1131
1995 Nov 971
1995 Dec 783
1996 Jan 741
1996 Feb 700
1996 Mar 774
1996 Apr 932
1996 May 1099
1996 Jun 1223
1996 Jul 1290
1996 Aug 1349
1996 Sep 1341
1996 Oct 1296
1996 Nov 1066
1996 Dec 901
1997 Jan 896
1997 Feb 793
1997 Mar 885
1997 Apr 1055
1997 May 1204
1997 Jun 1326
1997 Jul 1303
1997 Aug 1436
1997 Sep 1473
1997 Oct 1453
1997 Nov 1170
1997 Dec 1023
1998 Jan 951
1998 Feb 861
1998 Mar 938
1998 Apr 1109
1998 May 1274
1998 Jun 1422
1998 Jul 1486
1998 Aug 1555
1998 Sep 1604
1998 Oct 1600
1998 Nov 1403
1998 Dec 1209
1999 Jan 1030
1999 Feb 1032
1999 Mar 1126
1999 Apr 1285
1999 May 1468
1999 Jun 1637
1999 Jul 1611
1999 Aug 1608
1999 Sep 1528
1999 Oct 1420
1999 Nov 1119
1999 Dec 1013-----------------------------------Below is the question ---------------------------
1.8 Let's do some accuracy estimation. Split the data into training and testing.
Let all points up to the end of 1998 (including) are training set. -----------------------Below is my code for the question -----------------------------------------library(forecast)
plastics_ts <- ts(plastics$sale, start = c(1971, 1), frequency = 12)
(plastics_ts)
# Split the data into training and testing sets
plastics_train <- window(plastics_ts, end = c(1998,12))
plastics_test <- window(plastics_ts, start = c(1999,1))----------------------------below is the plastics_ts dataset-------------------------------------- Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1971 742 697 776 898 1030 1107 1165 1216 1208 1131 971 783
1972 741 700 774 932 1099 1223 1290 1349 1341 1296 1066 901
1973 896 793 885 1055 1204 1326 1303 1436 1473 1453 1170 1023
1974 951 861 938 1109 1274 1422 1486 1555 1604 1600 1403 1209
1975 1030 1032 1126 1285 1468 1637 1611 1608 1528 1420 1119 1013--------------------------Heres my error---------------------Error in window.default(x, ...) : 'start' cannot be after 'end'--------------- Kindly resolve it, where am i aking mistake
|
189bdb65bf2240ba187e3232147279c8
|
{
"intermediate": 0.4212222397327423,
"beginner": 0.35645076632499695,
"expert": 0.2223270684480667
}
|
2,386
|
cmake language behave like c++ macro
|
e0446b59a12253f3b295ba26ccb02de4
|
{
"intermediate": 0.2828558683395386,
"beginner": 0.5357828736305237,
"expert": 0.18136130273342133
}
|
2,387
|
My dog always stirs up the trash can.
How to wean a dog from scattering garbage from the bin.
|
a0feb7af94d9d8bdada48577fa073063
|
{
"intermediate": 0.3469029366970062,
"beginner": 0.29919928312301636,
"expert": 0.35389772057533264
}
|
2,388
|
understand this assignment and then proceed to look ath the HTML and CSS of my home page. Swap the positions of my google maps embed and my social pages containers and also add text somewhere above or next to google maps embed that says “Where to find us”
Assingment:
Introduction
You have been asked to help develop a new website for a new camping equipment retailer
that is moving to online sales (RCC – Retail Camping Company). The company has been
trading from premises but now wants an online presence. The website currently doesn’t take
payments and orders online although the management hopes to do this in the future. The
website should be visual and should be viewable on different devices.
Scenario
The Retail Camping Company has the following basic requirements for the contents of the
website:
• Home Page:
o This page will introduce visitors to the special offers that are available and it
should include relevant images of camping equipment such as tents, cookers
and camping gear such as furniture and cookware.
o The text should be minimal, visuals should be used to break up the written
content.
o This page should include a navigation bar (with links inside the bar and hover
over tabs), slide show, header, sections, footer.
o Modal pop-up window that is displayed on top of the current page.
• Camping Equipment: This page will provide a catalogue of products that are on sale
such as tents, camping equipment and cookware.
• Furniture: This page should give customers a catalogue of camping furniture that is
on sale.
• Reviews: This page should include a forum where the registered members can
review the products they have purchased.
• Basket: This page should allow the customers to add camping equipment to their
basket which is saved and checked out later.
• Offers and Packages: This page provides a catalogue of the most popular
equipment that is sold
• Ensure you embed at least THREE (3) different plug ins such as java applets, display
maps and scan for viruses.
At the initial stage of the development process, you are required to make an HTML/CSS
prototype of the website that will clearly show the retail camping company how the final
website could work.
Content hasn’t been provided. Familiarise yourself with possible types of content by
choosing an appropriate organisation (by using web resources) to help you understand the
context in which the company operates. However, do not limit yourself to web-based sources
of information. You should also use academic, industry and other sources. Suitable content
for your prototype can be found on the web e.g. images. Use creative commons
(http://search.creativecommons.org/) or Wikimedia Commons
(http://commons.wikimedia.org/wiki/Main_Page) as a starting point to find content.
Remember the content you include in your site must be licensed for re-use. Do not spend
excessive amounts of time researching and gathering content. The purpose is to provide a
clear indication of how the final website could look and function. The client would provide
the actual content at a later point if they are happy with the website you have proposed.
Students must not use templates that they have not designed or created in the website
assessment. This includes website building applications, free HTML5 website templates,
or any software that is available to help with the assessment. You must create your own
HTML pages including CSS files and ideally you will do this through using notepad or
similar text editor.
Aim
The aim is to create a website for the Retail Camping Company (RCC).
Task 1– 25 Marks
HTML
The website must be developed using HTML 5 and feature a minimum of SIX (6) interlinked
pages which can also be viewed on a mobile device. The website must feature the content
described above and meet the following criteria:
• Researched relevant content to inform the design of the website.
• Be usable in at least TWO (2) different web browsers including being optimised for
mobile devices and responsive design. You should consult your tutor for guidance on
the specific browsers and versions you should use.
• Include relevant images of camping equipment you have selected from your research
including use of headers, sections and footers.
• Home Page:
o Minimal text with visuals to break up the written content.
o Navigation bar (with links inside the bar and hover over tabs)
o Responsive design and at least one plugin
o Header
o Sections
o Footer (including social media links)
• Reviews page: with responsive contact section including first name, last name, and
submit button (through email)
• Camping Equipment, Furniture Page and Offers and Package page with responsive
resize design and including TWO (2) different plug ins, catalogue style and animated
text search for products.
• Basket – Page created which allows the customers to add products to their basket.
Task 2 – 25 Marks
CSS
Create an external CSS file that specifies the design for the website. Each of the HTML
pages must link to this CSS file. There should be no use of the style attribute or the <style>
element in the website.
The boxes on the home page should include relevant elements such as border radius, boxshadow, hover etc.
Include on-page animated text search to allow customers to search for different products.
HTML:
<!DOCTYPE html>
<html lang=“en”>
<head>
<meta charset=“UTF-8”>
<meta name=“viewport” content=“width=device-width, initial-scale=1.0”>
<link rel=“stylesheet” href=“https://fonts.googleapis.com/css2?family=Cabin:wght@400;700&display=swap”>
<link rel=“stylesheet” href=“style/style.css” />
<title>Camping Equipment - Retail Camping Company</title>
</head>
<body>
<header>
<div class=“nav-container”>
<img src=“C:/Users/Kaddra52/Desktop/DDW/assets/images/logo.svg” alt=“Logo” class=“logo”>
<h1>Retail Camping Company</h1>
<nav>
<ul>
<li><a href=“index.html”>Home</a></li>
<li><a href=“camping-equipment.html”>Camping Equipment</a></li>
<li><a href=“furniture.html”>Furniture</a></li>
<li><a href=“reviews.html”>Reviews</a></li>
<li><a href=“basket.html”>Basket</a></li>+
<li><a href=“offers-and-packages.html”>Offers and Packages</a></li>
</ul>
</nav>
</div>
</header>
<!-- Home Page -->
<main>
<section>
<!-- Insert slide show here -->
<div class=“slideshow-container”>
<div class=“mySlides”>
<img src=“https://via.placeholder.com/600x400” alt=“Tents” style=“width:100%”>
</div>
<div class=“mySlides”>
<img src=“https://via.placeholder.com/600x400” alt=“Cookers” style=“width:100%”>
</div>
<div class=“mySlides”>
<img src=“https://via.placeholder.com/600x400” alt=“Camping Gear” style=“width:100%”>
</div>
</div>
</section>
<section>
<!-- Display special offers and relevant images -->
<div class=“special-offers-container”>
<div class=“special-offer”>
<img src=“https://via.placeholder.com/200x200” alt=“Tent Offer”>
<p>20% off premium tents!</p>
</div>
<div class=“special-offer”>
<img src=“https://via.placeholder.com/200x200” alt=“Cooker Offer”>
<p>Buy a cooker, get a free utensil set!</p>
</div>
<div class=“special-offer”>
<img src=“https://via.placeholder.com/200x200” alt=“Furniture Offer”>
<p>Save on camping furniture bundles!</p>
</div>
</div>
</section>
<section class=“buts”>
<!-- Modal pop-up window content here -->
<button id=“modalBtn”>Special Offer!</button>
<div id=“modal” class=“modal”>
<div class=“modal-content”>
<span class=“close”>×</span>
<p>Sign up now and receive 10% off your first purchase!</p>
</div>
</div>
</section>
</main>
<footer>
<div class=“footer-container”>
<div class=“footer-item”>
<p>Follow us on social media:</p>
<ul class=“social-links”>
<li><a href=“https://www.facebook.com”>Facebook</a></li>
<li><a href=“https://www.instagram.com”>Instagram</a></li>
<li><a href=“https://www.twitter.com”>Twitter</a></li>
</ul>
</div>
<div class=“footer-item”>
<iframe src=“https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d3843.534694025997!2d14.508501137353216!3d35.89765941458404!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x130e452d3081f035%3A0x61f492f43cae68e4!2sCity Gate!5e0!3m2!1sen!2smt!4v1682213255989!5m2!1sen!2smt” width=“800” height=“200” style=“border:0;” allowfullscreen=“” loading=“lazy” referrerpolicy=“no-referrer-when-downgrade”></iframe>
</div>
<div class=“footer-item”>
<p>Subscribe to our newsletter:</p>
<form action=“subscribe.php” method=“post”>
<input type=“email” name=“email” placeholder=“Enter your email” required>
<button type=“submit”>Subscribe</button>
</form>
</div>
</div>
</footer>
<script>
// Get modal element
var modal = document.getElementById(‘modal’);
// Get open model button
var modalBtn = document.getElementById(‘modalBtn’);
// Get close button
var closeBtn = document.getElementsByClassName(‘close’)[0];
// Listen for open click
modalBtn.addEventListener(‘click’, openModal);
// Listen for close click
closeBtn.addEventListener(‘click’, closeModal);
// Listen for outside click
window.addEventListener(‘click’, outsideClick);
// Function to open modal
function openModal() {
modal.style.display = ‘block’;
}
// Function to close modal
function closeModal() {
modal.style.display = ‘none’;
}
// Function to close modal if outside click
function outsideClick(e) {
if (e.target == modal) {
modal.style.display = ‘none’;
}
}
</script>
</body>
</html>
CSS:
html, body, h1, h2, h3, h4, p, a, ul, li, div, main, header, section, footer, img {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
box-sizing: border-box;
}
body {
font-family: ‘Cabin’, sans-serif;
line-height: 1.5;
color: #333;
width: 100%;
margin: 0;
padding: 0;
min-height: 100vh;
flex-direction: column;
display: flex;
background-image: url(“…/assets/images/cover.jpg”);
background-size: cover;
}
header {
background: #00000000;
padding: 0.5rem 2rem;
text-align: center;
color: #32612D;
font-size: 1.2rem;
}
main{
flex-grow: 1;
}
.nav-container {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
}
.logo {
width: 50px;
height: auto;
margin-right: 1rem;
}
h1 {
flex-grow: 1;
text-align: left;
}
nav ul {
display: inline;
list-style: none;
}
nav ul li {
display: inline;
margin-left: 1rem;
}
nav ul li a {
text-decoration: none;
color: #32612D;
}
nav ul li a:hover {
color: #000000;
}
@media screen and (max-width: 768px) {
.nav-container {
flex-direction: column;
}
h1 {
margin-bottom: 1rem;
}
}
nav ul li a {
position: relative;
}
nav ul li a::after {
content: ‘’;
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-color: #000;
transform: scaleX(0);
transition: transform 0.3s;
}
nav ul li a:hover::after {
transform: scaleX(1);
}
.slideshow-container {
width: 100%;
position: relative;
margin: 1rem 0;
}
.mySlides {
display: none;
}
.mySlides img {
width: 100%;
height: auto;
}
.special-offers-container {
display: flex;
justify-content: space-around;
align-items: center;
flex-wrap: wrap;
margin: 1rem 0;
}
.special-offer {
width: 200px;
padding: 1rem;
text-align: center;
margin: 1rem;
background-color: #ADC3AB;
border-radius: 5px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
transition: all 0.3s ease;
}
.special-offer:hover {
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
transform: translateY(-5px);
}
.special-offer img {
width: 100%;
height: auto;
margin-bottom: 0.5rem;
border-radius: 5px;
}
.modal {
display: none;
position: fixed;
left: 0;
top: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
z-index: 1;
overflow: auto;
align-items: center;
}
.modal-content {
background-color: #fefefe;
padding: 2rem;
margin: 10% auto;
width: 30%;
min-width: 300px;
max-width: 80%;
text-align: center;
border-radius: 5px;
box-shadow: 0 1px 8px rgba(0, 0, 0, 0.1);
}
.buts{
text-align: center;
}
.close {
display: block;
text-align: right;
font-size: 2rem;
color: #333;
cursor: pointer;
}
footer {
background: #32612D;
padding: 1rem;
text-align: center;
margin-top: auto;
}
.footer-container {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
}
.footer-item {
margin: 1rem 2rem;
}
footer p {
color: #fff;
margin-bottom: 1rem;
}
footer ul {
list-style: none;
}
footer ul li {
display: inline;
margin: 0.5rem;
}
footer ul li a {
text-decoration: none;
color: #fff;
}
@media screen and (max-width: 768px) {
.special-offers-container {
flex-direction: column;
}
}
@media screen and (max-width: 480px) {
h1 {
display: block;
margin-bottom: 1rem;
}
}
.catalog {
display: flex;
flex-wrap: wrap;
justify-content: center;
margin: 2rem 0;
}
.catalog-item {
width: 200px;
padding: 1rem;
margin: 1rem;
background-color: #ADC3AB;
border-radius: 5px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
text-align: center;
}
.catalog-item:hover {
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
}
.catalog-item img {
width: 100%;
height: auto;
margin-bottom: 0.5rem;
border-radius: 5px;
}
.catalog-item h3 {
margin-bottom: 0.5rem;
}
.catalog-item p {
margin-bottom: 0.5rem;
}
.catalog-item button {
background-color: #32612D;
color: #fff;
padding: 0.5rem;
border: none;
border-radius: 5px;
cursor: pointer;
}
.catalog-item button:hover {
background-color: #ADC3AB;
}
footer form {
display: inline-flex;
align-items: center;
}
footer input[type=“email”] {
padding: 0.5rem;
border: none;
border-radius: 5px;
margin-right: 0.5rem;
}
footer button {
background-color: #ADC3AB;
color: #32612D;
padding: 0.5rem;
border: none;
border-radius: 5px;
cursor: pointer;
}
footer button:hover {
background-color: #32612D;
color: #fff;
}
.special-offer .social-share {
display: inline-flex;
align-items: center;
justify-content: space-around;
margin-top: 0.5rem;
}
.special-offer .social-share a {
text-decoration: none;
color: #32612D;
}
.special-offer .social-share a:hover {
color: #ADC3AB;
}
|
679e6709468e89ab026eb12c106cab44
|
{
"intermediate": 0.39332228899002075,
"beginner": 0.3228287696838379,
"expert": 0.28384894132614136
}
|
2,389
|
How to use ControlNet for image enhancement?Can you tell me how to do it step by step?
|
5c8e0c36193a6ca74514a0182e73c155
|
{
"intermediate": 0.28526774048805237,
"beginner": 0.09463467448949814,
"expert": 0.6200976371765137
}
|
2,390
|
in autohotkey how to auto run a .exe when another exe runs
|
692fd2c3bd6247a9c6686856dc26e60a
|
{
"intermediate": 0.3647429943084717,
"beginner": 0.25368767976760864,
"expert": 0.38156935572624207
}
|
2,391
|
R function that breaks survive time into different intervals according to cutoff points. For example, a patient has survived for 120 days, i want to break them into 3 intervals, 0-30, >30 - 90, >90
|
9c1e04fa4bf6879e1510d830a9430531
|
{
"intermediate": 0.24765539169311523,
"beginner": 0.2906487286090851,
"expert": 0.46169593930244446
}
|
2,392
|
What is scratch?
|
3ed853e9acda70a0bd004000032f57c7
|
{
"intermediate": 0.32636064291000366,
"beginner": 0.38787001371383667,
"expert": 0.2857692837715149
}
|
2,393
|
store local directory‘s structure metadata into sqlite using c++,and support add file,remove file,add directory,remove directory,query file’s size, query directory’s size,remove directory recursive from sqlite
|
a71ff3944f9bdcf8cd81755dac380e71
|
{
"intermediate": 0.44423726201057434,
"beginner": 0.22906717658042908,
"expert": 0.3266955316066742
}
|
2,394
|
sql where子句添加case when
|
0140aa71e8f09c19147a562b6de75e7b
|
{
"intermediate": 0.23953144252300262,
"beginner": 0.4184122085571289,
"expert": 0.34205636382102966
}
|
2,395
|
write me a python script to continuously monitor my computer and network and automatically block any unauthorized changes
|
c673c0b368705e6bd29df9f61c8c0655
|
{
"intermediate": 0.42922115325927734,
"beginner": 0.17623645067214966,
"expert": 0.3945424258708954
}
|
2,396
|
store local directory structure metadata into sqlite using c++,and support add file,remove file,add directory,remove directory,get file’s size, get directory’s size
|
bff767f12cccc0c34e6ae510a66ce48b
|
{
"intermediate": 0.5334694981575012,
"beginner": 0.16369524598121643,
"expert": 0.30283528566360474
}
|
2,397
|
How can I active a tab in a windows by xdotool? Such as I have open two doc files named U1.doc and u2.doc in the same windows , I want to active u1.doc first then u2.doc with xdotool. How can i do it ?
|
7b4f839bfe015bca3859acda57106dda
|
{
"intermediate": 0.6087647676467896,
"beginner": 0.16227039694786072,
"expert": 0.2289648801088333
}
|
2,398
|
Hello, I need your help to fix my project. I will be giving you details and the code in quotes and explaining the error that needs fixing. First of all, here is the premise of the project:
"
In this project, we aim to create a solution in Python for merging sub-images by using keypoint description methods (SIFT, SURF, and ORB) and obtain a final panorama image. First of all, we will extract and obtain multiple keypoints from sub-images by using the keypoint description method. Then we will compare and match these key points to merge sub-images into one panorama image. As a dataset, we will use a subset of the HPatches dataset. With this dataset, you get 6 ".png" images and 5 files for ground truth homography.
"
Here are more details about the dataset:
"
There is a reference image (image number 0) and five target images taken under different illuminations and from different viewpoints. For all images, we have the estimated ground truth homography with respect to the reference image.
"
As you can understand, the dataset includes a scene consisting of a reference image and various sub-images with the estimated ground truth homography with respect to the reference image.
The implementation has some restrictions. Here are the implementation details we used to create the project:
"
1. Feature Extraction: We are expected to extract key points in the sub-images by a keypoint extraction method. (SIFT, SURF, and ORB). You can use libraries for this part.
2. Feature Matching: Then we are expected to code a matching function (this can be based on the k-nearest neighbor method) to match extracted keypoints between pairs of sub-images. You can use libraries for this part.
3. Finding Homography: Then you should calculate a Homography Matrix for each pair of sub-images (by using the RANSAC method). For this part, you cannot use OpenCV or any library other than NumPY.
4. Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPY.
"
With that being said, I hope you understand the main goal of the project. Now the issue I am facing is I am getting Memory Error in warp_perspective(img, H, target_shape)
144
145 source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
--> 146 source_coordinates /= source_coordinates[2, :]
147
148 valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
MemoryError: Unable to allocate 6.16 GiB for an array with shape (3, 275502480) and data type float64. This program should not be consuming this much memory, as the images we are trying construct panorama with only 1 mb each and there are 6 of them. I have no idea how memory consumption got up to gigabytes. Please help me fix the problem. Now I will provide the full code so you can check it and tell me how to fix the project:
"
import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
def normalize_points(pts):
pts_homogeneous = np.hstack((pts, np.ones((pts.shape[0], 1))))
centroid = np.mean(pts, axis=0)
scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - centroid, axis=1))
T = np.array([[scale, 0, -scale * centroid[0]], [0, scale, -scale * centroid[1]], [0, 0, 1]])
normalized_pts = (T @ pts_homogeneous.T).T
return normalized_pts[:, :2], T
src_pts_normalized, T1 = normalize_points(src_pts)
dst_pts_normalized, T2 = normalize_points(dst_pts)
A = []
for p1, p2 in zip(src_pts_normalized, dst_pts_normalized):
x1, y1 = p1
x2, y2 = p2
A.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
A.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
A = np.array(A)
try:
_, _, VT = np.linalg.svd(A)
except np.linalg.LinAlgError:
return None
h = VT[-1]
H_normalized = h.reshape(3, 3)
H = np.linalg.inv(T2) @ H_normalized @ T1
return H / H[-1, -1]
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_.txt")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
target_coordinates = np.stack([target_x.ravel(), target_y.ravel(), np.ones(target_x.size)])
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[0] - 1))
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(int)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[1] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[0] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((h, w, 3), dtype=np.uint8)
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
return warped_image
def merge_images(sub_images, keypoints, filtered_matches, homographies, skipped_indices):
ref_img = sub_images[0]
for i in range(1, len(sub_images)):
if i in skipped_indices:
print(f"Image {i} was skipped due to homography computation failure.")
continue
img_i = sub_images[i]
H_i = homographies[i - 1 - sum(idx < i for idx in skipped_indices)]
# Get corners of the second (transferred) image
h, w, _ = img_i.shape
corners = np.array([[0, 0, 1], [w-1, 0, 1], [w-1, h-1, 1], [0, h-1, 1]], dtype=np.float32)
corners_transformed = H_i @ corners.T
corners_transformed = corners_transformed / corners_transformed[2]
corners_transformed = corners_transformed.T[:, :2]
# Calculate size of the stitched image
x_min = int(min(corners_transformed[:, 0].min(), 0))
x_max = int(max(corners_transformed[:, 0].max(), ref_img.shape[1]))
y_min = int(min(corners_transformed[:, 1].min(), 0))
y_max = int(max(corners_transformed[:, 1].max(), ref_img.shape[0]))
# Get the transformation to shift the origin to min_x, min_y
shift_mtx = np.array([[1, 0, -x_min],
[0, 1, -y_min],
[0, 0, 1]], dtype=np.float32)
# Apply the transformation on both images
img_i_transformed = warp_perspective(img_i, shift_mtx @ H_i, (x_max-x_min, y_max-y_min))
ref_img_transformed = warp_perspective(ref_img, shift_mtx, (x_max-x_min, y_max-y_min))
# Calculate blend masks
mask_img_i = img_i_transformed > 0
mask_ref_img = ref_img_transformed > 0
mask_overlap = np.logical_and(mask_img_i, mask_ref_img).astype(np.float32)
mask_img_i = mask_img_i.astype(np.float32) - mask_overlap
mask_ref_img = mask_ref_img.astype(np.float32) - mask_overlap
# Normalize weights and combine images
total_weight = mask_img_i + mask_ref_img + 2 * mask_overlap
ref_img = ((mask_img_i * img_i_transformed) + (mask_ref_img * ref_img_transformed) + (2 * mask_overlap * img_i_transformed * ref_img_transformed)) / total_weight
# Crop black regions
gray = cv2.cvtColor(ref_img.astype(np.uint8), cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
cnt = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(cnt)
ref_img = ref_img[y:y+h, x:x+w]
ref_img = ref_img.astype(np.uint8)
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = []
for i in range(len(sub_images) - 1):
match = feature_matching([descriptors[i], descriptors[i + 1]])
matches.append(match[0])
filtered_matches = filter_matches(matches)
homographies = []
for i in range(len(sub_images) - 1):
src_pts = np.float32([keypoints[i][m.queryIdx].pt for m in filtered_matches[i]]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in filtered_matches[i]]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
homographies.append(H)
panorama = sub_images[0]
for i in range(len(sub_images) - 1):
panorama = merge_images([panorama, sub_images[i + 1]], [keypoints[i], keypoints[i + 1]], [filtered_matches[i]], [homographies[i]], [])
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[i], keypoints[i], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image {i} - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image {i} to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)
"
|
b58746d77bcfab5d71ec3e18679d6c52
|
{
"intermediate": 0.3920682966709137,
"beginner": 0.4228290915489197,
"expert": 0.18510271608829498
}
|
2,399
|
in R data.table, how to generate the maximum value within one subject ID, with all other columns unchanged?
|
14751ac7af3b77efcd0d5af858c3d32a
|
{
"intermediate": 0.39477163553237915,
"beginner": 0.172277569770813,
"expert": 0.43295079469680786
}
|
2,400
|
storing the file and directory structure in sqlite column include name parent_Id level size is_file or is_dir etc
|
955ce31b034ea72c304e4314bd4df8e8
|
{
"intermediate": 0.33847594261169434,
"beginner": 0.3111139237880707,
"expert": 0.350410133600235
}
|
2,401
|
Could u be a transtlate util that can convert Chinese to English
|
5b4bd780f343a59c243d0e0de5589943
|
{
"intermediate": 0.28816094994544983,
"beginner": 0.28091204166412354,
"expert": 0.430927038192749
}
|
2,402
|
Use HTML to write a data annotation interface program
|
b64bb48b14d2e2ec56f7ab59db834f57
|
{
"intermediate": 0.5772271752357483,
"beginner": 0.19076034426689148,
"expert": 0.23201249539852142
}
|
2,403
|
I need you to create me a merging by transformation function which will merge images using the already calculated homography matrix. Do not use OpenCV or any other library other than NumPy. Make sure the function is runtime-friendly meaning provides a good (short) runtime while keeping memory usage low. Add comments to each line describing what you are doing. "Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPy."
|
fc84a3ffc6a7331dbb233de6deffa315
|
{
"intermediate": 0.49806058406829834,
"beginner": 0.12728121876716614,
"expert": 0.3746581971645355
}
|
2,404
|
Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What are some options for start times for a 30 minute meeting for Andrew, Hannah, and Joanne?
|
f4ed721b41dea094aea0ad26deb20479
|
{
"intermediate": 0.308638334274292,
"beginner": 0.2706359624862671,
"expert": 0.4207257628440857
}
|
2,405
|
R software, how to add a prefix for the colum names for data.table::dcast?
|
32d6522ed20277b1a8c6483c5cc39e44
|
{
"intermediate": 0.4619384706020355,
"beginner": 0.30262526869773865,
"expert": 0.23543627560138702
}
|
2,406
|
access midjourney account api key
|
7b0031baa496c86660d7a0286fb7a81e
|
{
"intermediate": 0.4264407455921173,
"beginner": 0.2222103327512741,
"expert": 0.3513489067554474
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.