row_id (int64) | init_message (string, 1–342k chars) | conversation_hash (string, 32 chars) | scores (dict)
|---|---|---|---|
2,507
|
Can I have VBA code that creates a wait for 3 seconds?
|
08405c5909c1f128f50265239f04f409
|
{
"intermediate": 0.45917513966560364,
"beginner": 0.2121458649635315,
"expert": 0.32867899537086487
}
|
2,508
|
Create XML code for the interface of a mobile Android app. It should have a window with the chosen photos and also a button to choose more photos from the gallery. You can add anything else if you think it is necessary
|
adbcbfb8fe6ef3632a6f32263af3379f
|
{
"intermediate": 0.3163824677467346,
"beginner": 0.2807232737541199,
"expert": 0.4028942584991455
}
|
2,509
|
I need the Android XML code for the UI of an app.
The UI should be composed of these elements:
70% of the top part of the main screen should show the user an image and some text.
The bottom 30% should be a prompt to give a rating from 1 to 5, using stars as the "voting system".
The design should follow Android Material Design.
Please give me the XML that I can use in Android Studio.
|
6d5aa89ab712dd57d7d4a59206ea6243
|
{
"intermediate": 0.24502578377723694,
"beginner": 0.3156507611274719,
"expert": 0.43932345509529114
}
|
2,510
|
matplotlib\pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, interpolation_stage, filternorm, filterrad, resample, url, data, **kwargs)
2655 filternorm=filternorm, filterrad=filterrad, resample=resample,
2656 url=url, **({"data": data} if data is not None else {}),
-> 2657 **kwargs)
...
--> 707 "float".format(self._A.dtype))
708
709 if self._A.ndim == 3 and self._A.shape[-1] == 1:
TypeError: Image data of dtype object cannot be converted to float
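The traceback in this row is matplotlib rejecting an array of dtype `object` (typically built from a ragged or mixed list of lists). A minimal sketch of the usual fix, converting to a concrete numeric dtype before calling `imshow`:

```python
import numpy as np

# An object-dtype array like this is what typically triggers
# "Image data of dtype object cannot be converted to float" in imshow.
bad = np.array([[1, 2], [3, 4]], dtype=object)

# The usual fix: coerce to a concrete float dtype before plotting.
good = bad.astype(np.float64)
```

Note that `astype` only works when every element is numeric; if the source data is ragged, it has to be padded or cropped to a rectangle first.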
|
0752aeb20084279cd4d0ad6131e3303a
|
{
"intermediate": 0.5012264847755432,
"beginner": 0.31055089831352234,
"expert": 0.18822261691093445
}
|
2,511
|
write an example of a ViewModel that saves an entity fetched from the Firebase database and then passes it to a composable in Jetpack Compose
|
30e353cd141a06e53cbd5e12c8e2dfb9
|
{
"intermediate": 0.45879167318344116,
"beginner": 0.20477953553199768,
"expert": 0.33642879128456116
}
|
2,512
|
can NLP call remote java interfaces?
|
6e84a6f3a831a77de96cbc6ef41e2b32
|
{
"intermediate": 0.35173118114471436,
"beginner": 0.10684726387262344,
"expert": 0.5414214730262756
}
|
2,513
|
Task 1
MapReduce
Dataset:
A CSV dataset containing 1710671 taxi trajectories recorded over one year (from 2013/07/01 to
2014/06/30) in the city of Porto, in Portugal. The dataset is derived from the "Taxi Service
Trajectory - Prediction Challenge, ECML PKDD 2015” Data Set.
Each CSV row contains:
• taxi_id: numeric value identifying an involved taxi.
• trajectory_id: numeric value identifying a trajectory in the original dataset.
• timestamp: a timestamp corresponding to the starting time of the taxi ride.
• source_point: GPS point representing the origin of the taxi ride.
• target_point: GPS point representing the destination of the taxi ride.
Coordinates are given in POINT (longitude latitude) format using the EPSG:4326 Geodetic
coordinate system.
Source:
https://figshare.com/articles/dataset/Porto_taxi_trajectories/12302165
Task:
• Based on the source_point, count the number of taxi_id per source_point.
• Expected output: (Lat,Long), Count
• The output should be ordered in ascending order according to the number of taxis per
source point.
taxi_id trajectory_id timestamp source_point target_point
20000589 1.37264E+18 7/1/2013 0:00 POINT(-8.618643 41.141412) POINT(-8.630838 41.154489)
20000596 1.37264E+18 7/1/2013 0:08 POINT(-8.639847 41.159826) POINT(-8.66574 41.170671)
20000320 1.37264E+18 7/1/2013 0:02 POINT(-8.612964 41.140359) POINT(-8.61597 41.14053)
20000520 1.37264E+18 7/1/2013 0:00 POINT(-8.574678 41.151951) POINT(-8.607996 41.142915)
20000337 1.37264E+18 7/1/2013 0:04 POINT(-8.645994 41.18049) POINT(-8.687268 41.178087)
and this is a sample of the data. Please provide me with the full code in Java, using the Cloudera VM.
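The row above asks for a Java MapReduce job on the Cloudera VM; that environment can't be reproduced here, but the map/reduce logic the task describes can be sketched in plain Python. The sample points are taken from the rows shown above (with one hypothetical duplicate so the counting is visible):

```python
import re
from collections import Counter

# Sample rows from the data shown above: (taxi_id, source_point).
rows = [
    ("20000589", "POINT(-8.618643 41.141412)"),
    ("20000596", "POINT(-8.639847 41.159826)"),
    ("20000320", "POINT(-8.618643 41.141412)"),  # hypothetical duplicate point
]

# Map phase: emit (source_point, 1); reduce phase: sum the counts per key.
counts = Counter(src for _, src in rows)

def as_lat_long(point):
    # "POINT(longitude latitude)" -> (Lat, Long), matching the expected output.
    lon, lat = (float(v) for v in re.findall(r"-?\d+(?:\.\d+)?", point))
    return (lat, lon)

# Ascending order by count, as the task requires.
output = sorted(((as_lat_long(p), c) for p, c in counts.items()),
                key=lambda pair: pair[1])
```

In a real Hadoop job, the `Counter` step corresponds to the reducer summing the `1`s emitted by the mapper for each source point.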
|
2d0d9b65d54b5b74a269c6bc042bdccf
|
{
"intermediate": 0.42484837770462036,
"beginner": 0.2631438970565796,
"expert": 0.31200772523880005
}
|
2,514
|
how to redirect to a URL using an HTML meta tag in PHP
|
249a57632d8eb1fcbe50802ff5c085f6
|
{
"intermediate": 0.41998717188835144,
"beginner": 0.28406548500061035,
"expert": 0.29594728350639343
}
|
2,515
|
What does this express.js / nodemon server error in the terminal mean: "Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client"
|
87d4f15838204b90370d0160d17c5b95
|
{
"intermediate": 0.582404375076294,
"beginner": 0.15752080082893372,
"expert": 0.2600748538970947
}
|
2,516
|
the below code is not working
from pymavlink import mavutil
import math
import time
class Drone:
def __init__(self, system_id, connection):
self.system_id = system_id
self.connection = connection
def set_mode(self, mode):
self.connection.mav.set_mode_send(
self.system_id,
mavutil.mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED,
mode
)
def arm(self, arm=True):
self.connection.mav.command_long_send(self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM, 0, int(arm), 0, 0, 0, 0, 0,
0)
def takeoff(self, altitude):
self.connection.mav.command_long_send(self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_CMD_NAV_TAKEOFF, 0, 0, 0, 0, 0, 0, 0, altitude)
def send_waypoint(self, wp, next_wp, speed):
vx, vy, vz = calculate_velocity_components(wp, next_wp, speed)
self.connection.mav.send(mavutil.mavlink.MAVLink_set_position_target_global_int_message(
10,
self.system_id,
self.connection.target_component,
mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
int(0b110111111000),
int(wp[0] * 10 ** 7),
int(wp[1] * 10 ** 7),
wp[2],
vx,
vy,
vz,
0,
0,
0,
0,
0 # Set vx, vy, and vz from calculated components
))
def get_position(self):
self.connection.mav.request_data_stream_send(
self.system_id, self.connection.target_component,
mavutil.mavlink.MAV_DATA_STREAM_POSITION, 1, 1)
while True:
msg = self.connection.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
if msg.get_srcSystem() == self.system_id:
return (msg.lat / 10 ** 7, msg.lon / 10 ** 7, msg.alt / 10 ** 3)
def get_battery_status(self):
msg = self.connection.recv_match(type='SYS_STATUS', blocking=False)
if msg and msg.get_srcSystem() == self.system_id:
return msg.battery_remain
def get_rc_status(self):
msg = self.connection.recv_match(type='RC_CHANNELS_RAW', blocking=False)
if msg and msg.get_srcSystem() == self.system_id:
return msg.chan8_raw
def get_gps_status(self):
msg = self.connection.recv_match(type='GPS_RAW_INT', blocking=False)
if msg and msg.get_srcSystem() == self.system_id:
return msg.satellites_visible, msg.eph
class PIDController:
def __init__(self, kp, ki, kd, limit):
self.kp = kp
self.ki = ki
self.kd = kd
self.limit = limit
self.prev_error = 0
self.integral = 0
def update(self, error, dt):
derivative = (error - self.prev_error) / dt
self.integral += error * dt
self.integral = max(min(self.integral, self.limit), -self.limit) # Clamp the integral term
output = self.kp * error + self.ki * self.integral + self.kd * derivative
self.prev_error = error
return output
distance = 5 # Distance in meters
angle = 60 # Angle in degrees
kp = 0.1
ki = 0.01
kd = 0.05
pid_limit = 0.0001
pid_lat = PIDController(kp, ki, kd, pid_limit)
pid_lon = PIDController(kp, ki, kd, pid_limit)
def calculate_follower_coordinates(wp, distance, angle):
earth_radius = 6371000.0 # in meters
latitude_change = (180 * distance * math.cos(math.radians(angle))) / (math.pi * earth_radius)
longitude_change = (180 * distance * math.sin(math.radians(angle))) / (
math.pi * earth_radius * math.cos(math.radians(wp[0])))
new_latitude = wp[0] + latitude_change
new_longitude = wp[1] + longitude_change
return (new_latitude, new_longitude, wp[2])
def calculate_velocity_components(current_wp, next_wp, speed):
dx = next_wp[0] - current_wp[0]
dy = next_wp[1] - current_wp[1]
dz = next_wp[2] - current_wp[2]
dx2 = dx ** 2
dy2 = dy ** 2
dz2 = dz ** 2
distance = math.sqrt(dx2 + dy2 + dz2)
vx = (dx / distance) * speed
vy = (dy / distance) * speed
vz = (dz / distance) * speed
return vx, vy, vz
def check_safety(master_drone, follower_drone):
# 1. Check battery levels
master_battery = master_drone.get_battery_status()
follower_battery = follower_drone.get_battery_status()
if master_battery is not None and master_battery < 25 or follower_battery is not None and follower_battery < 25:
print("Battery low!")
return False
# 2. Check RC fail
master_rc_status = master_drone.get_rc_status()
follower_rc_status = follower_drone.get_rc_status()
if master_rc_status is not None and master_rc_status < 1000 or follower_rc_status is not None and follower_rc_status < 1000:
print("RC fail!")
return False
# 3. Check GPS fail
master_gps_status = master_drone.get_gps_status()
follower_gps_status = follower_drone.get_gps_status()
if master_gps_status is not None and master_gps_status[0] < 6 or follower_gps_status is not None and follower_gps_status[0] < 6 or master_gps_status is not None and master_gps_status[1] > 200 or follower_gps_status is not None and follower_gps_status[1] > 200:
print("GPS fail!")
return False
return True
waypoints = [
(28.5861474, 77.3421320, 10),
(28.5859040, 77.3420736, 10)
]
the_connection = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master_drone = Drone(3, the_connection)
follower_drone = Drone(2, the_connection)
# Set mode to Guided and arm both drones
for drone in [master_drone, follower_drone]:
drone.set_mode(4)
drone.arm()
drone.takeoff(10)
def abort():
print("Type 'abort' to return to Launch and disarm motors.")
start_time = time.monotonic()
while time.monotonic() - start_time < 7:
user_input = input("Time left: {} seconds \n".format(int(7 - (time.monotonic() - start_time))))
if user_input.lower() == "abort":
print("Returning to Launch and disarming motors…")
for drone in [master_drone, follower_drone]:
drone.set_mode(6) # RTL mode
drone.arm(False) # Disarm motors
return True
print("7 seconds have passed. Proceeding with waypoint task...")
return False
mode = the_connection.mode_mapping()[the_connection.flightmode]
print(f"the drone is currently at mode {mode}")
time_start = time.time()
while mode == 4:
if abort():
exit()
# Keep checking safety safeguards
if not check_safety(master_drone, follower_drone):
break
if time.time() - time_start >= 1:
for index, master_wp in enumerate(waypoints[:-1]):
next_wp = waypoints[index + 1]
master_drone.send_waypoint(master_wp, next_wp, speed=3)
follower_position = master_drone.get_position()
if follower_position is None:
break
follower_wp = calculate_follower_coordinates(follower_position, distance, angle)
dt = time.time() - time_start
pid_lat_output = pid_lat.update(follower_wp[0] - follower_position[0], dt)
pid_lon_output = pid_lon.update(follower_wp[1] - follower_position[1], dt)
adjusted_follower_wp = (
follower_wp[0] + pid_lat_output, follower_wp[1] + pid_lon_output, follower_wp[2])
follower_drone.send_waypoint(adjusted_follower_wp, next_wp, speed=3)
else:
mode_mapping = the_connection.mode_mapping()
current_mode = the_connection.flightmode
mode = mode_mapping[current_mode]
time_start = time.time()
time.sleep(0.1)
continue
break
for drone in [master_drone, follower_drone]:
drone.set_mode(6)
drone.arm(False)
the_connection.close()
|
bebc761583ff7a0b3a5dd23de50b3939
|
{
"intermediate": 0.4057122766971588,
"beginner": 0.3992777168750763,
"expert": 0.1950099617242813
}
|
2,517
|
Hello, I need your help to fix my project. I will be giving you details and the code in quotes and explaining the error that needs fixing. First of all, here is the premise of the project:
"
In this project, we aim to create a solution in Python for merging sub-images by using keypoint description methods (SIFT, SURF, and ORB) and obtain a final panorama image. First of all, we will extract and obtain multiple keypoints from sub-images by using the keypoint description method. Then we will compare and match these key points to merge sub-images into one panorama image. As a dataset, we will use a subset of the HPatches dataset. With this dataset, you get 6 ".png" images and 5 files for ground truth homography.
"
Here are more details about the dataset:
"
There is a reference image (image number 0) and five target images taken under different illuminations and from different viewpoints. For all images, we have the estimated ground truth homography with respect to the reference image.
"
As you can understand, the dataset includes a scene consisting of a reference image and various sub-images with the estimated ground truth homography with respect to the reference image.
The implementation has some restrictions. Here are the implementation details we used to create the project:
"
1. Feature Extraction: We are expected to extract key points in the sub-images by a keypoint extraction method. (SIFT, SURF, and ORB). You can use libraries for this part.
2. Feature Matching: Then we are expected to code a matching function (this can be based on the k-nearest neighbor method) to match extracted keypoints between pairs of sub-images. You can use libraries for this part.
3. Finding Homography: Then you should calculate a Homography Matrix for each pair of sub-images (by using the RANSAC method). For this part, you cannot use OpenCV or any library other than NumPY.
4. Merging by Transformation: Merge sub-images into a single panorama by applying transformation operations to sub-images using the Homography Matrix. For this part, you cannot use OpenCV or any library other than NumPY.
"
With that being said, I hope you understand the main goal of the project. Now, the issue I am facing is that the end result looks like a warped version of the last image (the sixth image, named image5.png). I have set up the tool to test with ground-truth homographies, but the result is still not a panorama image. It seems the merge_images function is either completely wrong or has serious mistakes. Please help me fix the problem. I will now provide the full code so you can check it and tell me how to fix the project:
"
import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
def feature_extraction(sub_images, method="SIFT"):
if method == "SIFT":
keypoint_extractor = cv2.xfeatures2d.SIFT_create()
elif method == "SURF":
keypoint_extractor = cv2.xfeatures2d.SURF_create()
elif method == "ORB":
keypoint_extractor = cv2.ORB_create()
keypoints = []
descriptors = []
for sub_image in sub_images:
keypoint, descriptor = keypoint_extractor.detectAndCompute(sub_image, None)
keypoints.append(keypoint)
descriptors.append(descriptor)
return keypoints, descriptors
def feature_matching(descriptors, matcher_type="BF"):
if matcher_type == "BF":
matcher = cv2.BFMatcher()
matches = []
for i in range(1, len(descriptors)):
match = matcher.knnMatch(descriptors[0], descriptors[i], k=2)
matches.append(match)
return matches
def compute_homography_matrix(src_pts, dst_pts):
def normalize_points(pts):
pts_homogeneous = np.hstack((pts, np.ones((pts.shape[0], 1))))
centroid = np.mean(pts, axis=0)
scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - centroid, axis=1))
T = np.array([[scale, 0, -scale * centroid[0]], [0, scale, -scale * centroid[1]], [0, 0, 1]])
normalized_pts = (T @ pts_homogeneous.T).T
return normalized_pts[:, :2], T
src_pts_normalized, T1 = normalize_points(src_pts)
dst_pts_normalized, T2 = normalize_points(dst_pts)
A = []
for p1, p2 in zip(src_pts_normalized, dst_pts_normalized):
x1, y1 = p1
x2, y2 = p2
A.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
A.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
A = np.array(A)
try:
_, _, VT = np.linalg.svd(A)
except np.linalg.LinAlgError:
return None
h = VT[-1]
H_normalized = h.reshape(3, 3)
H = np.linalg.inv(T2) @ H_normalized @ T1
return H / H[-1, -1]
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=True)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
projected_point = projected_point / projected_point[-1]
distance = np.linalg.norm(projected_point - np.append(dst_pts[i], 1))
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H_*")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, target_shape):
h, w = target_shape
target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
target_coordinates = np.stack([target_x.ravel(), target_y.ravel(), np.ones(target_x.size)])
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[0] - 1))
# Change the dtype to int instead of float64 to save memory
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(np.float32)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[1] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[0] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((h, w, 3), dtype=np.uint8)
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
return warped_image
def merge_images(sub_images, homographies, skipped_indices):
ref_img = sub_images[0]
for i, H_i in enumerate(homographies):
if i + 1 in skipped_indices:
print(f"Image {i + 1} was skipped due to homography computation failure.")
continue
img_i = sub_images[i + 1]
# Get corners of the second (transferred) image
h, w, _ = img_i.shape
corners = np.array([[0, 0, 1], [w-1, 0, 1], [w-1, h-1, 1], [0, h-1, 1]], dtype=np.float32)
corners_transformed = H_i @ corners.T
corners_transformed = corners_transformed / corners_transformed[2]
corners_transformed = corners_transformed.T[:, :2]
# Calculate size of the stitched image
x_min = int(min(corners_transformed[:, 0].min(), 0))
x_max = int(max(corners_transformed[:, 0].max(), ref_img.shape[1]))
y_min = int(min(corners_transformed[:, 1].min(), 0))
y_max = int(max(corners_transformed[:, 1].max(), ref_img.shape[0]))
# Get the transformation to shift the origin to min_x, min_y
shift_mtx = np.array([[1, 0, -x_min],
[0, 1, -y_min],
[0, 0, 1]], dtype=np.float32)
# Apply the transformation on both images
img_i_transformed = warp_perspective(img_i, shift_mtx @ H_i, (x_max-x_min, y_max-y_min))
ref_img_transformed = warp_perspective(ref_img, shift_mtx, (x_max-x_min, y_max-y_min))
# Calculate blend masks
mask_img_i = (cv2.cvtColor(img_i_transformed, cv2.COLOR_BGR2GRAY) > 0).astype(np.float32)
mask_ref_img = (cv2.cvtColor(ref_img_transformed, cv2.COLOR_BGR2GRAY) > 0).astype(np.float32)
mask_overlap = np.logical_and(mask_img_i, mask_ref_img).astype(np.float32)
# Normalize weights and combine images
total_weight = mask_img_i + mask_ref_img + 2 * mask_overlap
ref_img = ((mask_img_i[..., None] * img_i_transformed) + (mask_ref_img[..., None] * ref_img_transformed) + (2 * mask_overlap[..., None] * img_i_transformed * ref_img_transformed)) / total_weight[..., None]
# Crop black regions
gray = cv2.cvtColor(ref_img.astype(np.uint8), cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
cnt = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(cnt)
ref_img = ref_img[y:y+h, x:x+w]
ref_img = ref_img.astype(np.uint8)
return ref_img
def main(dataset_path):
filenames = sorted(glob.glob(os.path.join(dataset_path, "*.png")))
sub_images = []
for filename in filenames:
img = cv2.imread(filename, cv2.IMREAD_COLOR) # Load images as color
sub_images.append(img)
ground_truth_homographies = read_ground_truth_homographies(dataset_path)
skipped_indices = []
panorama = merge_images(sub_images, ground_truth_homographies, skipped_indices)
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - Test")
return
methods = ["SIFT", "SURF", "ORB"]
for method in methods:
start_time = time.time()
keypoints, descriptors = feature_extraction(sub_images, method=method)
matches = feature_matching(descriptors)
filtered_matches = filter_matches(matches)
homographies, skipped_indices = find_homography(keypoints, filtered_matches)
panorama = merge_images(sub_images, homographies, skipped_indices)
end_time = time.time()
runtime = end_time - start_time
print(f"Method: {method} - Runtime: {runtime:.2f} seconds")
for idx, (image, kp) in enumerate(zip(sub_images, keypoints)):
feature_plot = cv2.drawKeypoints(image, kp, None)
plt.figure()
plt.imshow(feature_plot, cmap="gray")
plt.title(f"Feature Points - {method} - Image {idx}")
for i, match in enumerate(filtered_matches):
matching_plot = cv2.drawMatches(sub_images[0], keypoints[0], sub_images[i + 1], keypoints[i + 1], match, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.figure()
plt.imshow(matching_plot, cmap="gray")
plt.title(f"Feature Point Matching - {method} - Image 0 - Image {i + 1}")
plt.figure()
plt.imshow(panorama, cmap="gray", aspect="auto")
plt.title(f"Panorama - {method}")
print("\nGround truth homographies")
for i, H_gt in enumerate(ground_truth_homographies):
print(f"Image 0 to {i+1}:")
print(H_gt)
print("\nComputed homographies")
for i, H_est in enumerate(homographies):
print(f"Image {i} to {i+1}:")
print(H_est)
plt.show()
if __name__ == "__main__":
dataset_path = "dataset/v_bird"
main(dataset_path)
"
|
23a7b3870260a7074cd6165e21ca5b93
|
{
"intermediate": 0.4993330240249634,
"beginner": 0.34924229979515076,
"expert": 0.15142463147640228
}
|
2,518
|
create an HTML page that can change the background colour when the user clicks
|
47b88503a236bfec8ab99934d57a2c90
|
{
"intermediate": 0.4042273759841919,
"beginner": 0.2382906824350357,
"expert": 0.3574819564819336
}
|
2,519
|
How to list foreign packages on Arch Linux via the command-line interface?
|
270a2124327c28401685208b7e176af6
|
{
"intermediate": 0.5336341857910156,
"beginner": 0.2248230278491974,
"expert": 0.2415427714586258
}
|
2,520
|
Suppose you have two large datasets: the first dataset contains information about user activity
on a website. It consists of log files in CSV format, where each row represents a single page visit
by a user. Each row includes the following fields:
• IP address (string)
• Timestamp (int)
• URL visited (string)
The second dataset contains information about users themselves. It is also in CSV format, with
one row per user. Each row includes the following fields:
• User ID (int)
• Name (string)
• Email address (string)
You can download a sample version of these datasets from the following link:
https://www.kaggle.com/c/web-traffic-time-series-forecasting/data
Task:
Your task is to join these two datasets together based on user ID, and then perform some
analysis on the resulting dataset.
To do this, you'll need to use Apache Spark and PySpark on the Cloudera VM. You'll start by
reading in both datasets as RDDs and caching them in memory for faster access. Then you'll
perform a join operation on the user ID field using Spark transformations. Once you have your
joined dataset, perform these actions on it to analyze the data:
• Calculate the average time spent on the website per user.
• Identify the most popular pages visited by each user.
During this process, you'll also need to keep track of certain metrics using accumulators, such as
the number of records processed, and the number of errors encountered. Additionally, you
may want to use broadcast variables to efficiently share read-only data across multiple nodes
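The row above calls for PySpark on the Cloudera VM; a plain-Python sketch of the described join and analysis is given below. One assumption is made explicit: the log rows as specified carry an IP address rather than a user ID, so some IP-to-user mapping (hypothetical here, and a natural broadcast variable in Spark) is needed before a join on user ID is possible:

```python
from collections import Counter, defaultdict

# Hypothetical sample records following the field layout in the task.
logs = [("1.2.3.4", 100, "/home"), ("1.2.3.4", 160, "/about"),
        ("5.6.7.8", 50, "/home")]
users = [(1, "Alice", "a@example.com"), (2, "Bob", "b@example.com")]
# Assumed lookup: the logs carry IPs, not user IDs, so the join on user ID
# needs a mapping like this (a broadcast variable in a real Spark job).
ip_to_user = {"1.2.3.4": 1, "5.6.7.8": 2}

# Key both "RDDs" by user ID, then hash-join (Spark's join() does the same).
keyed_users = {uid: (name, email) for uid, name, email in users}
joined = [(ip_to_user[ip], (ts, url), keyed_users[ip_to_user[ip]])
          for ip, ts, url in logs if ip_to_user.get(ip) in keyed_users]

# Average time on site per user: span between first and last timestamp,
# and most popular page per user via a per-user counter.
times = defaultdict(list)
pages = defaultdict(Counter)
for uid, (ts, url), _ in joined:
    times[uid].append(ts)
    pages[uid][url] += 1
avg_time = {uid: max(ts) - min(ts) for uid, ts in times.items()}
most_popular = {uid: c.most_common(1)[0][0] for uid, c in pages.items()}
```

In Spark, `times`/`pages` would be `reduceByKey` aggregations, and the record/error counts the task mentions would be accumulators incremented inside the mapping functions.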
|
c8951cabdf6ccc0b017b6a3223bcc419
|
{
"intermediate": 0.47820401191711426,
"beginner": 0.20213113725185394,
"expert": 0.3196648359298706
}
|
2,521
|
hi
|
ad84591d51d667af76d0ddd831115d69
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
2,522
|
hi
|
cfacd5416ec7d200afd26d05e7422b3a
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
2,523
|
does quizlet have an API
|
ff6962c505ac4cd6cb91c83b7df9a8ec
|
{
"intermediate": 0.4607200622558594,
"beginner": 0.2797892987728119,
"expert": 0.25949057936668396
}
|
2,524
|
write JavaScript to click a button with the class buzzbtn
|
db6043b1bb8e1749e8c723a4f63a1897
|
{
"intermediate": 0.4880410134792328,
"beginner": 0.2865132987499237,
"expert": 0.22544577717781067
}
|
2,525
|
write JavaScript to submit a form with the class guest_form using jQuery
|
1520faf0877895578cf759f11a7e1a50
|
{
"intermediate": 0.38945838809013367,
"beginner": 0.32261234521865845,
"expert": 0.2879292666912079
}
|
2,526
|
Python: how to get the real URL content, not the block.opendns.com location.replace(url) page
|
86e5dac7ab8ea6a29448f0c229c48ea9
|
{
"intermediate": 0.3952856957912445,
"beginner": 0.1805262416601181,
"expert": 0.4241880476474762
}
|
2,527
|
Write a MATLAB function that converts Euler angles to a quaternion.
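The row above asks for MATLAB, but the conversion itself is easy to sketch in Python for reference, assuming the common ZYX (yaw-pitch-roll) convention with angles in radians:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """ZYX (yaw-pitch-roll) Euler angles in radians -> quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)
```

A MATLAB translation is line-for-line the same (using `cos`/`sin` and returning `[w x y z]`); the convention matters, since a different rotation order gives a different quaternion.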
|
52e5f201386ebf9b4c1e25e1616effa7
|
{
"intermediate": 0.29902181029319763,
"beginner": 0.2935224771499634,
"expert": 0.4074556529521942
}
|
2,528
|
How can I apply a ground-truth homography in reverse? What I mean is: I have an image and a homography that converts the image into another image's perspective. However, I want the second image to be converted into the first image's perspective, meaning I need the reverse homography. Can you add a reverse parameter to my function so that when true is passed, the homography is applied in reverse? Here is my function: "def warp_perspective(img, H, target_shape):
h, w = target_shape
target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
target_coordinates = np.stack([target_x.ravel(), target_y.ravel(), np.ones(target_x.size)])
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[0] - 1))
# Change the dtype to int instead of float64 to save memory
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(np.float32)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[1] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[0] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((h, w, 3), dtype=np.uint8)
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
return warped_image"
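One hedged way to answer the request above: the function already inverts H to map target pixels back to the source image, so applying the homography "in reverse" just means skipping that inversion. A minimal sketch, keeping the original behavior for `reverse=False` and lightly vectorizing the per-channel copy:

```python
import numpy as np

def warp_perspective(img, H, target_shape, reverse=False):
    # The original maps target pixels back through inv(H); when reverse=True
    # we use H itself, which applies the homography in the other direction.
    mapping = H if reverse else np.linalg.inv(H)
    h, w = target_shape
    target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
    target_coordinates = np.stack([target_x.ravel(), target_y.ravel(),
                                   np.ones(target_x.size)])
    source_coordinates = mapping @ target_coordinates
    source_coordinates /= source_coordinates[2, :]
    valid = ((0 <= source_coordinates[0]) & (source_coordinates[0] < img.shape[1] - 1) &
             (0 <= source_coordinates[1]) & (source_coordinates[1] < img.shape[0] - 1))
    src = np.round(source_coordinates[:2, valid]).astype(int)
    dst = target_coordinates[:2, valid].astype(int)
    warped = np.zeros((h, w, 3), dtype=np.uint8)
    warped[dst[1], dst[0]] = img[src[1], src[0]]  # copy all 3 channels at once
    return warped
```

With a pure-translation H, `reverse=True` visibly shifts pixels the opposite way to the forward warp, which is an easy sanity check.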
|
66efbadae7b778c99c784ac9454de580
|
{
"intermediate": 0.5216639041900635,
"beginner": 0.1775151491165161,
"expert": 0.3008209764957428
}
|
2,529
|
<script type="text/javascript">
function getForm(){
const search_choice = document.querySelector('#search_choice').value;
const search_value = document.querySelector('#search_value').value;
console.log(search_value)
let rooturl='http://localhost:3030/form'
fetch(rooturl,{
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
"search_choice": search_choice,
"search_value": search_value
})
})
.then(response => {
// console.log(response)
})
.then(data => {
console.log("test")
console.log(data)
})
.catch(error =>console.error())
}
</script>
|
830d01cc87c4852f1e9befba410fd856
|
{
"intermediate": 0.40348291397094727,
"beginner": 0.3235187828540802,
"expert": 0.27299827337265015
}
|
2,530
|
You are an Android developer writing in Java, Qt, and Kotlin. When you are given a task, you break it down into stages and, where appropriate, add code examples.
Current task: Help adapt this code for targetSDK 33 and minSDK 24:
fun setupPermissionsRequest(activity: Activity) {
if (ActivityCompat.shouldShowRequestPermissionRationale(
activity,
Manifest.permission.WRITE_EXTERNAL_STORAGE
)) {
AlertDialog.Builder(activity)
.setTitle(R.string.title_permission_request)
.setMessage(R.string.notify_storage_permission)
.setPositiveButton(R.string.button_continue) { _, _ ->
requestStoragePermissions(activity)
}
.setNegativeButton(R.string.button_quit) { _, _ ->
activity.finishAffinity()
}
.show()
} else {
requestStoragePermissions(activity)
}
}
Keep in mind that the app requires write access; in particular, there is this method:
fun createConfigFile(verifiedEmployee: VerifiedEmployee, proxySettings: ProxySettings?) {
var config = employeeToConfig(verifiedEmployee)
if (proxySettings != null) {
config += proxySettingsToConfig(proxySettings)
}
mLogger.info { "Config contents created \n$config" }
@Suppress("DEPRECATION")
val dir = File(Environment.getExternalStorageDirectory(), "/STMobile/")
if (dir.exists()) {
mLogger.info { "Directory $dir already exists. Deleting..." }
if (!dir.deleteRecursively()) {
throw IOException("Could not delete directory $dir")
}
}
mLogger.info { "Creating directory $dir..." }
if (!dir.mkdir()) {
throw IOException("Could not create directory $dir")
}
val configFile = File(dir, "stmobile.conf")
mLogger.info { "Writing config file $configFile..." }
configFile.writeText(config)
mLogger.info { "Config file $configFile written" }
}
|
9b90df58ddc228f83295546c1ce63175
|
{
"intermediate": 0.44245752692222595,
"beginner": 0.35014888644218445,
"expert": 0.2073936015367508
}
|
2,531
|
In pytorch I'm getting NaNs every once in a while. I'm thinking about saving checkpoints of the model 10 steps behind and restoring the last one whenever a NaN is encountered. How do I do that?
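The rolling-checkpoint idea the row above describes can be sketched without assuming a particular framework: keep a deque of the last N deep-copied states and restore the oldest one when a NaN appears. In real PyTorch code the stand-ins below would be `copy.deepcopy(model.state_dict())` when saving and `model.load_state_dict(...)` when restoring:

```python
import copy
import math
from collections import deque

# Rolling buffer of the last N checkpoints; checkpoints[0] is the oldest,
# i.e. the state from up to N steps back.
N = 10
checkpoints = deque(maxlen=N)

state = {"w": 1.0}                  # stand-in for model.state_dict()
losses = [0.5, 0.4, float("nan")]   # a NaN shows up at step 2

for step, loss in enumerate(losses):
    checkpoints.append(copy.deepcopy(state))  # save BEFORE the update
    if math.isnan(loss):
        # Restore the oldest buffered checkpoint (up to N steps back).
        state = checkpoints[0]
        break
    state["w"] -= 0.1               # stand-in for optimizer.step()
```

The deep copy matters: appending the live state dict itself would let later updates mutate the "checkpoint" in place.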
|
3ac00b7612cc48a3f24414dc674e0eb2
|
{
"intermediate": 0.2633242607116699,
"beginner": 0.12604838609695435,
"expert": 0.6106274127960205
}
|
2,532
|
I have a problem with the script: when beatmapsets are downloaded, they are named with only their id.
For example, the map with id 123456 will be downloaded under the name "123456.osz" instead of "123456 Lite Show Magic - Crack traxxxx.osz".
Normally, when I use the osu mirrors, the beatmapset names should be correct. For example:
if I go to this link https://beatconnect.io/b/123456/
the downloaded beatmapset will be named "123456 Lite Show Magic - Crack traxxxx.osz" and not 123456.osz.
Here is the script.
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
fca5653f61aef8d3318eae97707ef64a
|
{
"intermediate": 0.28392916917800903,
"beginner": 0.5636991262435913,
"expert": 0.15237176418304443
}
|
2,533
|
Write a short python script that, while accounting for individual variable overflow, will eventually output every number up to infinity
|
bac4b9ca1996d928b0131f9ea0f08ed6
|
{
"intermediate": 0.25855129957199097,
"beginner": 0.34504708647727966,
"expert": 0.39640164375305176
}
|
2,534
|
hi
|
d4015553c74357465d89854998bb2d06
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
2,535
|
// Libs
import { Controller } from 'react-hook-form';
import { Checkbox } from '@habitech/selectors';
import { useTranslation } from 'react-i18next';
// Styles
import CheckBoxWrapper from './CheckBoxWrapper.style';
// Models
import { CheckboxProps, FieldProps } from './models';
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export const CheckBox = <TFormValues extends Record<string, any>>({
name,
label,
control,
errors,
customIcon,
}: CheckboxProps<TFormValues>): JSX.Element => {
const { t } = useTranslation();
// eslint-disable-next-line @typescript-eslint/no-base-to-string
const error = errors?.[name]?.message.toString();
return (
<CheckBoxWrapper data-testid={'checkbox-test'}>
<Controller
name={name}
control={control}
render={({ field: { name, value, onChange } }: FieldProps) => (
<Checkbox
labelText={t(label)}
name={name}
id={'test-id'}
dataId={'checkbox-component'}
onChange={(val: string | number, check: boolean) => onChange(check)}
value={value}
error={!!error}
selected={!!value}
customIcon={customIcon}
/>
)}
/>
</CheckBoxWrapper>
);
};
Hazme un test para la línea del onChange
|
d83fb7ff3040e9ee29d494925ff8a2fa
|
{
"intermediate": 0.48573535680770874,
"beginner": 0.2957276701927185,
"expert": 0.21853703260421753
}
|
2,536
|
In pytorch, I'm saving a model using "model_state_dict = model.state_dict()" and resuming it using "model.load_state_dict(model_state_dict)" but these commands don't appear to be doing anything. Is this correct procedure?
|
73a19f7e135122d37b1753c5d4ec04d5
|
{
"intermediate": 0.4917934238910675,
"beginner": 0.2022976130247116,
"expert": 0.3059089481830597
}
|
2,537
|
in sql, how do you delete on a table thats joined on another table?
|
ad95a46dbcaa0624c900f2e923f162e5
|
{
"intermediate": 0.38672423362731934,
"beginner": 0.3362649381160736,
"expert": 0.27701085805892944
}
|
2,538
|
i need help with a code
|
85d00425f12a77853ab11afbd242b32a
|
{
"intermediate": 0.16792631149291992,
"beginner": 0.5890267491340637,
"expert": 0.24304693937301636
}
|
2,539
|
I have a problem with the script: when beatmapsets are downloaded, they are named with only their id.
For example, the map with id 123456 will be downloaded as “123546.osz” instead of “123456 Lite Show Magic - Crack traxxxx.osz”.
Yet normally, when I use the osu mirrors, the beatmapset names should be correct. For example:
if I go to this link https://beatconnect.io/b/123456/
the downloaded beatmapset will be named “123456 Lite Show Magic - Crack traxxxx.osz” and not 123456.osz.
Here is my script:
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
63c10ee29f242e38aad648f90e43037d
|
{
"intermediate": 0.302325963973999,
"beginner": 0.5302920341491699,
"expert": 0.16738198697566986
}
|
2,540
|
I'm using torch.set_printoptions(profile="full") but printed pytorch tensors still have scientific notation...
|
f5e98018a862eb62ca02afef23b8a13b
|
{
"intermediate": 0.38771939277648926,
"beginner": 0.3071926236152649,
"expert": 0.3050878942012787
}
|
2,541
|
Hi, how do I install dotnet on linux mint?
|
fcf9ceec204821a62aa18e10edbd2f1b
|
{
"intermediate": 0.5479896664619446,
"beginner": 0.12505127489566803,
"expert": 0.3269590437412262
}
|
2,542
|
In python, create a machine learning model to predict a minesweepr game on a 5x5 board. The game has 3 bombs in it and you need to predict 5 safe spots to click and 6 possible bomb locations. The data you got is from the past games played in a list that contains every last bomb location. Use this list raw to predict: [1, 11, 17, 6, 15, 24, 3, 18, 20, 13, 14, 18, 7, 18, 23, 13, 15, 19, 2, 12, 13, 13, 17, 23, 4, 14, 17, 7, 19, 23, 15, 20, 23, 8, 10, 17, 6, 8, 21, 9, 17, 21, 9, 23, 23, 6, 11, 22, 11, 22, 23, 1, 6, 21, 4, 9, 18, 12, 13, 17, 8, 17, 18, 9, 19, 23, 10, 13, 15, 1, 4, 20, 10, 22, 23, 5, 20, 22, 14, 16, 24, 2, 13, 16, 15, 18, 19, 1, 5, 11] and make it use not deep learning. Just some advanced machine learning that works good
|
1d93fb055dc4859d89719c6578df55ee
|
{
"intermediate": 0.19880564510822296,
"beginner": 0.1605023890733719,
"expert": 0.6406919360160828
}
|
2,543
|
can you explain in simple terms what the code below is doing and how it works and is calculated:
data['AMOUNT_LAST_PURCHASE_202302_lag_6m'] = data['AMOUNT_LAST_PURCHASE_202302'].shift(periods=6)
|
2ff6cf78ac9e56ca0f6050dd69fb0b16
|
{
"intermediate": 0.4687127470970154,
"beginner": 0.096767358481884,
"expert": 0.43451988697052
}
|
2,544
|
Write code that enhances all arrays such that you can call the snail(rowsCount, colsCount) method that transforms the 1D array into a 2D array organised in the pattern known as snail traversal order. Invalid input values should output an empty array. If rowsCount * colsCount !== nums.length, the input is considered invalid.
Snail traversal order starts at the top left cell with the first value of the current array. It then moves through the entire first column from top to bottom, followed by moving to the next column on the right and traversing it from bottom to top. This pattern continues, alternating the direction of traversal with each column, until the entire current array is covered. For example, when given the input array [19, 10, 3, 7, 9, 8, 5, 2, 1, 17, 16, 14, 12, 18, 6, 13, 11, 20, 4, 15] with rowsCount = 5 and colsCount = 4, the desired output matrix is shown below. Note that iterating the matrix following the arrows corresponds to the order of numbers in the original array.
Traversal Diagram
Example 1:
Input:
nums = [19, 10, 3, 7, 9, 8, 5, 2, 1, 17, 16, 14, 12, 18, 6, 13, 11, 20, 4, 15]
rowsCount = 5
colsCount = 4
Output:
[
[19,17,16,15],
[10,1,14,4],
[3,2,12,20],
[7,5,18,11],
[9,8,6,13]
]
Example 2:
Input:
nums = [1,2,3,4]
rowsCount = 1
colsCount = 4
Output: [[1, 2, 3, 4]]
Example 3:
Input:
nums = [1,3]
rowsCount = 2
colsCount = 2
Output: []
Explanation: 2 multiplied by 2 is 4, and the original array [1,3] has a length of 2; therefore, the input is invalid.
Constraints:
0 <= nums.length <= 250
1 <= nums[i] <= 1000
1 <= rowsCount <= 250
1 <= colsCount <= 250
|
3acb395f9053a1ef314eb9fe54d75bfb
|
{
"intermediate": 0.29998883605003357,
"beginner": 0.2682742476463318,
"expert": 0.43173688650131226
}
|
2,545
|
app.put('/products/:id', function(req, res) {
const ID = req.params.id;
if (!ID || !student) {
return res.status(400).send({ error: true, message: 'Please provide student information' });
}
connection.query("UPDATE products SET product_name=?,product_type=?,rating=?,price=?,product_description=?,product_series=?,product_shape=?,neck_type=?,fingerboard_material=?,num_of_frets=?,picture=? WHERE ID = ?", [req.body.name, req.body.type, req.body.rating, req.body.price, req.body.description, req.body.product_series, req.body.product_shape, req.body.neck_type, req.body.fingerboard_material, req.body.num_of_frets, req.body.picture, ID], function(error, results) {
if (error) throw error;
return res.send({ error: false, data: results.affectedRows, message: 'Student has been updated successfully.' });
});
});
|
ec2efc8b20266d1494602b56f6b625e1
|
{
"intermediate": 0.3635990619659424,
"beginner": 0.3284761607646942,
"expert": 0.3079248368740082
}
|
2,546
|
In python, you need to predict a 5x5 minesweeper game using Deep learning. Your goal is to make it as accurate as possible. Around 80%+
You've data for every game played with all mine locations. But, there is something you need to know. This 5x5 board, can the player chose how many bombs there are on it. The player can chose bombs from 1 to 10. You need to make an input that takes how many bombs the user chose. You also need to make an input that takes the amount of safe spots the user wants.
You need to predict the amount the user has set for safe spots. You also need to predict the possible bomb locations out of the user mine amount input. Also, the results have to be the same everytime if the data isn't changed. Also make it return the mine locations and prediction in each of their own varibles.
Data. You will have the data for the amount of mines the user chose. If the user chooses 3 mines, you will get all the games with 3 mines in it as data with their past locations. The data is: [12, 19, 24, 4, 16, 22, 11, 17, 19, 1, 2, 24, 4, 5, 12, 7, 14, 16, 5, 9, 10, 5, 16, 19, 15, 24, 23, 1, 18, 22, 3, 5, 7, 6, 9, 17, 3, 9, 18, 4, 11, 24, 19, 20, 22, 2, 3, 9, 10, 18, 23, 4, 14, 19, 6, 9, 13, 3, 17, 23, 6, 11, 23, 6, 9, 16, 3, 22, 23, 5, 16, 22, 5, 9, 15, 13, 18, 23, 3, 6, 10, 1, 13, 22, 1, 9, 24, 2, 9, 24] and the this list will update it self. You need to use the list raw
|
45bd3da80edbba73c57896671d30c6bf
|
{
"intermediate": 0.30710333585739136,
"beginner": 0.15022306144237518,
"expert": 0.5426735877990723
}
|
2,547
|
give me an industry 4.0 python app idea but not includes any hardware
|
c5ed169d8a02ff65c40088f9ff57946e
|
{
"intermediate": 0.38713082671165466,
"beginner": 0.12368232756853104,
"expert": 0.4891868531703949
}
|
2,548
|
Hey I have asked you for an app idea and you gave me this idea and plan. WE were doing the plan but i needed to reset you because of the token limit. We have completed the first task. We created models and dao's for them. ProductOrder, ProductOrderDao, ProductionLine, ProductionLineDao, InventoryLevel, InventoryLevelDao, SupplierLeadTime, SupplierLeadTimeDao. Now did we finish task 1? If so lets carry on with 2
The idea you gave: One app idea could be a smart production scheduling and forecasting tool. This app would use machine learning algorithms to analyze historical production data and generate accurate production schedules and forecasts. It could also integrate with other production-related data such as inventory levels and supplier lead times to help optimize the scheduling process. The app could generate alerts and recommendations for production planners to help them make more informed decisions when managing their production schedules. Additionally, it could provide real-time updates and notifications to operators and plant managers to keep them informed of any production delays or issues that may arise.
The plan you gave: Great, here’s a plan to get you started:
1. Define your data model: Start by defining the data model for your app, including the entities you need to store data for production scheduling and forecasting. For example, you may need to store data on product orders, production lines, inventory levels, and supplier lead times.
2. Set up your Flask app: Create a new Flask app, set up your virtual environment, and install the necessary dependencies. You may also want to set up a development database to test your APIs.
3. Create your API endpoints: Define the endpoints you need for your app, including endpoints for GET, POST, PUT, and DELETE requests for managing production data. For example, you may need endpoints for adding new product orders, modifying production schedules, or retrieving inventory levels.
4. Implement authentication and authorization: Add authentication and authorization features to your API to secure access to production data. You can use Flask-Login or Flask-Security to handle user authentication.
5. Add machine learning features for forecasting: Once you have set up the basic structure of your app, you can start exploring how you can add machine learning features to your forecasting tool. You may want to use a library like TensorFlow, Keras or PyTorch for this.
6. Test your app: Use automated testing frameworks like Pytest to test your app thoroughly.
7. Deploy to production: Once you have tested your app successfully, you can deploy it to production. You can use services like Heroku or AWS to deploy your Flask app.
|
d835c25a39fa79820077dc1ebb026ad8
|
{
"intermediate": 0.576217770576477,
"beginner": 0.17036329209804535,
"expert": 0.2534189522266388
}
|
2,549
|
This code uses lists to store data I want to use sQl please modify it. add imports needed. also please comment the code so that i know what each code is doing like comment all the code i give to you. and lastly you are using ‘ or ’ instead of ' please use ' as it is the correct character.
|
b4be8702be5f7ca2c5899b86b1a1b9fe
|
{
"intermediate": 0.41552555561065674,
"beginner": 0.33690905570983887,
"expert": 0.24756531417369843
}
|
2,550
|
I have a pytorch model that outputs a probability distribution over 256 values to be fed to torch.nn.CrossEntropyLoss. Over time during training, I'm noticing that the low probability values tend to become negative and keep decreasing instead of converging to zero. What could this be?
|
6d211f86821e4dac6e63e587b69469ae
|
{
"intermediate": 0.2054455429315567,
"beginner": 0.19287654757499695,
"expert": 0.6016778945922852
}
|
2,551
|
In python how do i write a public method of a class?
|
4bf5f4ecfea17ec10a922e0f9348cb5b
|
{
"intermediate": 0.3655976951122284,
"beginner": 0.5080214738845825,
"expert": 0.12638074159622192
}
|
2,552
|
This is my pytorch code:
scaler.scale(loss).backward()
print(model.linear5.weight.grad)
This is the error at the "print" line:
AttributeError: 'DataParallel' object has no attribute 'linear5'
What could it be?
|
bccd2538dfed641677d9fbb10adddd79
|
{
"intermediate": 0.3799668848514557,
"beginner": 0.3583885431289673,
"expert": 0.2616446316242218
}
|
2,553
|
generate the buttons for the bots dynamically in js not in html. Split the HTML, CSS, and JavaScript into separate files(index.html, login.html, bot.html, functions.js, style.css). add a close button to each bot panel. here is the whole code:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Online Casino Dice Bet Bot</title>
<style>
/* Tab styles */
.tab {
display: none;
}
.tab.active {
display: block;
}
/* Button styles */
.tab-btn {
background-color: #4CAF50;
color: white;
border: none;
padding: 10px 20px;
font-size: 16px;
cursor: pointer;
}
.tab-btn.active {
background-color: #333;
}
</style>
</head>
<body>
<div id="tabs">
<button class="tab-btn active" onclick="openTab(event, 'tab-1')">Bot 1</button>
<button class="tab-btn" onclick="openTab(event, 'tab-2')">Bot 2</button>
<button class="tab-btn" onclick="openTab(event, 'tab-3')">Bot 3</button>
<button id="new-bot-btn" onclick="createBot()">New Bot</button>
<div id="tab-1" class="tab active">
<div id="login-panel">
<label for="casino-dropdown">Casino:</label>
<select id="casino-dropdown">
<option value="casino1">Casino 1</option>
<option value="casino2">Casino 2</option>
<option value="casino3">Casino 3</option>
</select><br><br>
<label for="login-input">Login:</label>
<input type="text" id="login-input"><br><br>
<label for="password-input">Password:</label>
<input type="password" id="password-input"><br><br>
<label for="api-key-input">API Key:</label>
<input type="text" id="api-key-input"><br><br>
<label for="2fa-input">2FA:</label>
<input type="text" id="2fa-input"><br><br>
<button id="login-btn">Login</button>
<button id="logoff-btn" style="display:none">Logoff</button>
</div>
<div id="bot-panel" style="display:none">
<div id="casino-info-panel">
<span id="casino-name"></span><br>
<span id="balance"></span><br>
<span id="last-streak"></span><br>
<span id="max-lose-streak"></span><br>
<span id="profit"></span><br>
</div>
<div id="chart"></div>
<table id="last-bets-table">
<thead>
<tr>
<th>Time</th>
<th>Amount</th>
<th>Chance</th>
<th>Result</th>
<th>Profit/Loss</th>
</tr>
</thead>
<tbody>
</tbody>
</table>
</div>
<div id="custom-strategy-panel" style="display:none">
<textarea id="strategy-script"></textarea><br><br>
<button id="start-btn">Start</button>
<button id="stop-btn">Stop</button>
<button id="pause-btn">Pause</button>
</div>
<div id="manual-betting-panel" style="display:none">
<label for="bet-amount-input">Bet Amount:</label>
<input type="text" id="bet-amount-input"><br><br>
<label for="chance-input">Chance:</label>
<input type="text" id="chance-input"><br><br>
<label for="bet-high-btn">Bet High:</label>
<input type="radio" name="bet-type" id="bet-high-btn" value="high"><br><br>
<label for="bet-low-btn">Bet Low:</label>
<input type="radio" name="bet-type" id="bet-low-btn" value="low"><br><br>
<button id="manual-bet-btn">Place Bet</button>
</div>
<div id="status-bar"></div>
</div>
<div id="tab-2" class="tab">
<!-- Bot 2 panel -->
</div>
<div id="tab-3" class="tab">
<!-- Bot 3 panel -->
</div>
</div>
<script>
// Tab functionality
function openTab(evt, tabName) {
var i, tabcontent, tablinks;
tabcontent = document.getElementsByClassName("tab");
for (i = 0; i < tabcontent.length; i++) {
tabcontent[i].style.display = "none";
}
tablinks = document.getElementsByClassName("tab-btn");
for (i = 0; i < tablinks.length; i++) {
tablinks[i].className = tablinks[i].className.replace(" active", "");
}
document.getElementById(tabName).style.display = "block";
evt.currentTarget.className += " active";
}
// New bot functionality
var botCount = 3;
function createBot() {
botCount++;
var newTabBtn = document.createElement("button");
newTabBtn.className = "tab-btn";
newTabBtn.innerHTML = "Bot " + botCount;
newTabBtn.setAttribute("onclick", "openTab(event, 'tab-" + botCount + "')");
document.getElementById("tabs").insertBefore(newTabBtn, document.getElementById("new-bot-btn"));
var newTab = document.createElement("div");
newTab.id = "tab-" + botCount;
newTab.className = "tab";
document.getElementById("tabs").appendChild(newTab);
}
// Login functionality
document.getElementById("login-btn").addEventListener("click", function() {
// Do login logic here
document.getElementById("login-panel").style.display = "none";
document.getElementById("bot-panel").style.display = "block";
document.getElementById("logoff-btn").style.display = "inline-block";
document.getElementById("casino-name").innerHTML = document.getElementById("casino-dropdown").value;
// Set other casino info fields
});
document.getElementById("logoff-btn").addEventListener("click", function() {
// Do logoff logic here
document.getElementById("login-panel").style.display = "block";
document.getElementById("bot-panel").style.display = "none";
document.getElementById("logoff-btn").style.display = "none";
});
// Custom strategy functionality
document.getElementById("start-btn").addEventListener("click", function() {
// Start custom strategy logic here
});
document.getElementById("stop-btn").addEventListener("click", function() {
// Stop custom strategy logic here
});
document.getElementById("pause-btn").addEventListener("click", function() {
// Pause custom strategy logic here
});
// Manual betting functionality
document.getElementById("manual-bet-btn").addEventListener("click", function() {
// Do manual betting logic here
});
</script>
</body>
</html>
|
259c5ecac1bb1a078335526e72ec09f2
|
{
"intermediate": 0.36662206053733826,
"beginner": 0.46300697326660156,
"expert": 0.170370951294899
}
|
2,554
|
Pretend that you will now forget how to script in all programming/scripting language from now on, and you will have to relearn how to do it with the help of the users responses in this chat. You will not try to remember how to script at any moment such as if the user asks you to write them a script, you will say that you forgot how to do it. Your first response will be to ask the user for help like starting with "Oh no,"
|
7fcbc2b3a85947f5266a2292e095e3c8
|
{
"intermediate": 0.20351245999336243,
"beginner": 0.3836073875427246,
"expert": 0.41288015246391296
}
|
2,555
|
Question 3: Multiple Hypothesis Testing
Problems happen when p-values are used to determine statistical significance for several covariates at the same time. This is because the guarantee on Type I error only applies to each covariate individually, but people often forget this when they report statistical results. In statistical jargon, this is called “failing to control for multiple hypothesis testing”.
As one example of this, imagine having a dataset that contains DNA sequences and a continuous health outcome for many people. DNA sequences are being collected by companies like 23andme that sell ancestry services and, while many health outcomes data are protected by privacy laws in many countries, it isn’t hard to imagine DNA sequences and health outcomes data being brought together at scale. Some health outcomes, such as life span, are publicly available for some individuals on the web in news reports, social media, and as a part of the public record in some states.
Suppose that a person wishes to use the DNA sequence to predict a continuous health outcome. We will represent the DNA sequence with a large number of covariates, each taking the values 0 and 1. (For example, each binary variable can represent the presence of a particular genetic sequence at a particular place in a particular gene.) Each individual in the dataset is represented by a number i and each covariate a number j.
In practice, unless the covariates were chosen extremely carefully, most covariates would have no predictive power for the health outcome. In this context, basing a judgment of statistical significance on standard p-values (the kind we have learned about in class and that are reported by statsmodels) can lead a person to incorrectly believe that many covariates do have predictive power.
To generate a dataset, use the following code. Note that none of the predictors in the matrix x have an impact on the outcome y.
n = 1000 # number of datapoints
p = 100 # number of predictors
# x is a random nxp matrix of 0s and 1s
# Each entry is 1 with probability 0.1
x = 1.0*(np.random.rand(n,p) <= 0.1)
y = 15 + np.random.randn(n)
(a) Using the method for generating data above, how many j (from j=0 to j=p) have βj different from 0? Hint: In this class, we use the notation βj to refer to numbers used to generate data (either simulated data or by nature to generate real data) and we use the notation j to refer to estimates produced by linear regression. βj never refers to estimates. To answer this question, which is about βj, you should not be looking at the output of linear regression.
(b) Using a for loop, generate 1000 independent random datasets using the code above. For each dataset:
A. fit linear regression, assuming an intercept β0 and the p=100 covariates in x
B. check whether the p-value for β1 is less than 0.05.
C. count the number of p-values that are less than 0.05. This will be a number between 0 and 101, since there are 101 p-values returned when fitting linear regression to this dataset. This is the number of covariates that we would believe to be predictive of the health outcome if we used the p-values in a naive way.
Note for items B & C: You can get the p-values as an array from a fitted linear model using model.pvalues.
What fraction of your 1000 datasets resulted in the p-value for β1 being less than 0.05? You should see that this is close to 0.05. This is because p-values work to control the Type I error for an individual parameter.
Make a histogram of item C computed above: the number of p-values less than 0.05 in each of the 1000 datasets.
You should see that the number of p-values less than 0.05 is often larger than the number you answered in part (a). This is because we have 100 chances to incorrectly say that a variable is statistically significant, and so the number of times we get at least one of these 100 incorrect is much larger than 0.05.
|
687113a1e287b86dfe7d6d6a930d2865
|
{
"intermediate": 0.37389206886291504,
"beginner": 0.3351176083087921,
"expert": 0.29099035263061523
}
|
2,556
|
Question 3: Multiple Hypothesis Testing
Problems happen when p-values are used to determine statistical significance for several covariates at the same time. This is because the guarantee on Type I error only applies to each covariate individually, but people often forget this when they report statistical results. In statistical jargon, this is called “failing to control for multiple hypothesis testing”.
As one example of this, imagine having a dataset that contains DNA sequences and a continuous health outcome for many people. DNA sequences are being collected by companies like 23andme that sell ancestry services and, while many health outcomes data are protected by privacy laws in many countries, it isn’t hard to imagine DNA sequences and health outcomes data being brought together at scale. Some health outcomes, such as life span, are publicly available for some individuals on the web in news reports, social media, and as a part of the public record in some states.
Suppose that a person wishes to use the DNA sequence to predict a continuous health outcome. We will represent the DNA sequence with a large number of covariates, each taking the values 0 and 1. (For example, each binary variable can represent the presence of a particular genetic sequence at a particular place in a particular gene.) Each individual in the dataset is represented by a number i and each covariate a number j.
In practice, unless the covariates were chosen extremely carefully, most covariates would have no predictive power for the health outcome. In this context, basing a judgment of statistical significance on standard p-values (the kind we have learned about in class and that are reported by statsmodels) can lead a person to incorrectly believe that many covariates do have predictive power.
To generate a dataset, use the following code. Note that none of the predictors in the matrix x have an impact on the outcome y.
n = 1000 # number of datapoints
p = 100 # number of predictors
# x is a random nxp matrix of 0s and 1s
# Each entry is 1 with probability 0.1
x = 1.0*(np.random.rand(n,p) <= 0.1)
y = 15 + np.random.randn(n)
(a) Using the method for generating data above, how many j (from j=0 to j=p) have βj different from 0? Hint: In this class, we use the notation βj to refer to numbers used to generate data (either simulated data or by nature to generate real data) and we use the notation j to refer to estimates produced by linear regression. βj never refers to estimates. To answer this question, which is about βj, you should not be looking at the output of linear regression.
(b) Using a for loop, generate 1000 independent random datasets using the code above. For each dataset:
A. fit linear regression, assuming an intercept β0 and the p=100 covariates in x
B. check whether the p-value for β1 is less than 0.05.
C. count the number of p-values that are less than 0.05. This will be a number between 0 and 101, since there are 101 p-values returned when fitting linear regression to this dataset. This is the number of covariates that we would believe to be predictive of the health outcome if we used the p-values in a naive way.
Note for items B & C: You can get the p-values as an array from a fitted linear model using model.pvalues.
What fraction of your 1000 datasets resulted in the p-value for β1 being less than 0.05? You should see that this is close to 0.05. This is because p-values work to control the Type I error for an individual parameter.
Make a histogram of item C computed above: the number of p-values less than 0.05 in each of the 1000 datasets.
You should see that the number of p-values less than 0.05 is often larger than the number you answered in part (a). This is because we have 100 chances to incorrectly say that a variable is statistically significant, and so the number of times we get at least one of these 100 incorrect is much larger than 0.05
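The simulation described in (b) can be sketched in Python. This is a minimal sketch under assumptions: it uses plain NumPy with the normal approximation |t| > 1.96 in place of statsmodels' model.pvalues (essentially equivalent here since n is large), and only 300 replications instead of 1000 for speed.

```python
import numpy as np

def significant_count(rng, n=1000, p=100):
    """One replication: OLS on pure-noise data; count |t| > 1.96 (p < 0.05)."""
    x = 1.0 * (rng.random((n, p)) <= 0.1)
    y = 15 + rng.standard_normal(n)
    X = np.column_stack([np.ones(n), x])            # intercept + 100 covariates
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p - 1)            # unbiased error variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se
    return int(np.sum(np.abs(t) > 1.96)), bool(np.abs(t[1]) > 1.96)

rng = np.random.default_rng(0)
reps = 300
counts, b1_hits = zip(*(significant_count(rng) for _ in range(reps)))
print("fraction of datasets with beta_1 'significant':", sum(b1_hits) / reps)
print("mean number of 'significant' p-values per dataset:", np.mean(counts))
```

With β0 = 15 the intercept is essentially always flagged, so the count per dataset hovers around 1 + 100 × 0.05 = 6 even though only one βj is actually nonzero — which is the point of the exercise.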
|
4b3e43be7e5d0de8e09853f008f0320b
|
{
"intermediate": 0.44324976205825806,
"beginner": 0.33376532793045044,
"expert": 0.2229849398136139
}
|
2,557
|
Take this website and add the following elements to the footer in the appropriate positions:
In between "Where to find us" and "Contact us", add address information: this could include the company's address, phone number, email address, and social media links.
Above everything else in the footer, add navigation links with cool hovers: the footer can include links to important pages on the website, such as the company's product pages, FAQs, and customer service pages.
Between "Where to find us" and the newsletter subscribe form (subscribe.php), add a privacy policy and terms of service: these are important legal documents that protect both the company and its customers.
HTML:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Cabin:wght@400;700&display=swap">
<link rel="stylesheet" href="style/style.css" />
<title>Camping Equipment - Retail Camping Company</title>
</head>
<body>
<header>
<div class="nav-container">
<img src="C:/Users/Kaddra52/Desktop/DDW/assets/images/logo.svg" alt="Logo" class="logo">
<h1>Retail Camping Company</h1>
<nav>
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="camping-equipment.html">Camping Equipment</a></li>
<li><a href="furniture.html">Furniture</a></li>
<li><a href="reviews.html">Reviews</a></li>
<li><a href="basket.html">Basket</a></li>+
<li><a href="offers-and-packages.html">Offers and Packages</a></li>
</ul>
</nav>
</div>
</header>
<!-- Home Page -->
<main>
<section>
<!-- Insert slide show here -->
<div class="slideshow-container">
<div class="mySlides">
<img src="https://via.placeholder.com/600x400" alt="Tents" style="width:100%">
</div>
<div class="mySlides">
<img src="https://via.placeholder.com/600x400" alt="Cookers" style="width:100%">
</div>
<div class="mySlides">
<img src="https://via.placeholder.com/600x400" alt="Camping Gear" style="width:100%">
</div>
</div>
</section>
<section>
<!-- Display special offers and relevant images -->
<div class="special-offers-container">
<div class="special-offer">
<img src="https://via.placeholder.com/200x200" alt="Tent Offer">
<p>20% off premium tents!</p>
</div>
<div class="special-offer">
<img src="https://via.placeholder.com/200x200" alt="Cooker Offer">
<p>Buy a cooker, get a free utensil set!</p>
</div>
<div class="special-offer">
<img src="https://via.placeholder.com/200x200" alt="Furniture Offer">
<p>Save on camping furniture bundles!</p>
</div>
</div>
</section>
<section class="buts">
<!-- Modal pop-up window content here -->
<button id="modalBtn">Special Offer!</button>
<div id="modal" class="modal">
<div class="modal-content">
<span class="close">×</span>
<p>Sign up now and receive 10% off your first purchase!</p>
</div>
</div>
</section>
</main>
<footer>
<div class="footer-container">
<div class="footer-item">
<p>Contact Us:</p>
<ul class="social-links">
<li><a href="https://www.facebook.com">Facebook</a></li>
<li><a href="https://www.instagram.com">Instagram</a></li>
<li><a href="https://www.twitter.com">Twitter</a></li>
</ul>
</div>
<div class="footer-item">
<p>Where to find us: </p>
<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d3843.534694025997!2d14.508501137353216!3d35.89765941458404!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x130e452d3081f035%3A0x61f492f43cae68e4!2sCity Gate!5e0!3m2!1sen!2smt!4v1682213255989!5m2!1sen!2smt” width="600" height="200" style="border:0;" allowfullscreen="" loading="lazy" referrerpolicy="no-referrer-when-downgrade"></iframe>
</div>
<div class="footer-item">
<p>Subscribe to our newsletter:</p>
<form action="subscribe.php" method="post">
<input type="email" name="email" placeholder="Enter your email" required>
<button type="submit">Subscribe</button>
</form>
</div>
</div>
</footer>
<script>
// Get modal element
var modal = document.getElementById('modal');
// Get open model button
var modalBtn = document.getElementById('modalBtn');
// Get close button
var closeBtn = document.getElementsByClassName('close')[0];
// Listen for open click
modalBtn.addEventListener('click', openModal);
// Listen for close click
closeBtn.addEventListener('click', closeModal);
// Listen for outside click
window.addEventListener('click', outsideClick);
// Function to open modal
function openModal() {
modal.style.display = 'block';
}
// Function to close modal
function closeModal() {
modal.style.display = 'none';
}
// Function to close modal if outside click
function outsideClick(e) {
if (e.target == modal) {
modal.style.display = 'none';
}
}
</script>
</body>
</html>
CSS:
html, body, h1, h2, h3, h4, p, a, ul, li, div, main, header, section, footer, img {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
box-sizing: border-box;
}
body {
font-family: 'Cabin', sans-serif;
line-height: 1.5;
color: #333;
width: 100%;
margin: 0;
padding: 0;
min-height: 100vh;
flex-direction: column;
display: flex;
background-image: url("../assets/images/cover.jpg");
background-size: cover;
}
header {
background: #00000000;
padding: 0.5rem 2rem;
text-align: center;
color: #32612D;
font-size: 1.2rem;
}
main{
flex-grow: 1;
}
.nav-container {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
}
.logo {
width: 50px;
height: auto;
margin-right: 1rem;
}
h1 {
flex-grow: 1;
text-align: left;
}
nav ul {
display: inline;
list-style: none;
}
nav ul li {
display: inline;
margin-left: 1rem;
}
nav ul li a {
text-decoration: none;
color: #32612D;
}
nav ul li a:hover {
color: #000000;
}
@media screen and (max-width: 768px) {
.nav-container {
flex-direction: column;
}
h1 {
margin-bottom: 1rem;
}
}
nav ul li a {
position: relative;
}
nav ul li a::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-color: #000;
transform: scaleX(0);
transition: transform 0.3s;
}
nav ul li a:hover::after {
transform: scaleX(1);
}
.slideshow-container {
width: 100%;
position: relative;
margin: 1rem 0;
}
.mySlides {
display: none;
}
.mySlides img {
width: 100%;
height: auto;
}
.special-offers-container {
display: flex;
justify-content: space-around;
align-items: center;
flex-wrap: wrap;
margin: 1rem 0;
}
.special-offer {
width: 200px;
padding: 1rem;
text-align: center;
margin: 1rem;
background-color: #ADC3AB;
border-radius: 5px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
transition: all 0.3s ease;
}
.special-offer:hover {
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
transform: translateY(-5px);
}
.special-offer img {
width: 100%;
height: auto;
margin-bottom: 0.5rem;
border-radius: 5px;
}
.modal {
display: none;
position: fixed;
left: 0;
top: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
z-index: 1;
overflow: auto;
align-items: center;
}
.modal-content {
background-color: #fefefe;
padding: 2rem;
margin: 10% auto;
width: 30%;
min-width: 300px;
max-width: 80%;
text-align: center;
border-radius: 5px;
box-shadow: 0 1px 8px rgba(0, 0, 0, 0.1);
}
.buts{
text-align: center;
}
.close {
display: block;
text-align: right;
font-size: 2rem;
color: #333;
cursor: pointer;
}
footer {
background: #32612D;
padding: 1rem;
text-align: center;
margin-top: auto;
}
.footer-container {
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
}
.footer-item {
margin: 1rem 2rem;
}
footer p {
color: #fff;
margin-bottom: 1rem;
}
footer ul {
list-style: none;
}
footer ul li {
display: inline;
margin: 0.5rem;
}
footer ul li a {
text-decoration: none;
color: #fff;
}
@media screen and (max-width: 768px) {
.special-offers-container {
flex-direction: column;
}
}
@media screen and (max-width: 480px) {
h1 {
display: block;
margin-bottom: 1rem;
}
}
.catalog {
display: flex;
flex-wrap: wrap;
justify-content: center;
margin: 2rem 0;
}
.catalog-item {
width: 200px;
padding: 1rem;
margin: 1rem;
background-color: #ADC3AB;
border-radius: 5px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
text-align: center;
}
.catalog-item:hover {
box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
}
.catalog-item img {
width: 100%;
height: auto;
margin-bottom: 0.5rem;
border-radius: 5px;
}
.catalog-item h3 {
margin-bottom: 0.5rem;
}
.catalog-item p {
margin-bottom: 0.5rem;
}
.catalog-item button {
background-color: #32612D;
color: #fff;
padding: 0.5rem;
border: none;
border-radius: 5px;
cursor: pointer;
}
.catalog-item button:hover {
background-color: #ADC3AB;
}
footer form {
display: inline-flex;
align-items: center;
}
footer input[type="email"] {
padding: 0.5rem;
border: none;
border-radius: 5px;
margin-right: 0.5rem;
}
footer button {
background-color: #ADC3AB;
color: #32612D;
padding: 0.5rem;
border: none;
border-radius: 5px;
cursor: pointer;
}
footer button:hover {
background-color: #32612D;
color: #fff;
}
.special-offer .social-share {
display: inline-flex;
align-items: center;
justify-content: space-around;
margin-top: 0.5rem;
}
.special-offer .social-share a {
text-decoration: none;
color: #32612D;
}
.special-offer .social-share a:hover {
color: #ADC3AB;
}
|
e3b7bd1e9fc77d149091cdc417602aeb
|
{
"intermediate": 0.31520751118659973,
"beginner": 0.3558728098869324,
"expert": 0.3289196491241455
}
|
2,558
|
I'm making a mobile game in Unity. I want my character to be able to purchase barns in the game for his farm, which will then remove the rubble in front of them. I want to use the Unity Economy package for buying and selling things and storing them in your inventory.
|
b53c9ac6cc24a67cb80addc22d6e60af
|
{
"intermediate": 0.4506942331790924,
"beginner": 0.23984460532665253,
"expert": 0.30946120619773865
}
|
2,559
|
The compare&swap primitive is an atomic sequence that performs the following steps:
Compares the content of a given register (call it 'old'), with a given memory location;
If the contents are not equal, the sequence returns false;
If the contents are equal, the sequence swaps the contents of the memory location with the content of another register (call it 'new') and returns true;
The following pseudocode illustrates the sequence:
function cas(p: pointer to int, old: int, new: int) is
if *p ≠ old
return false
*p ← new
return true
Compare&swap is a so-called universal atomic primitive. It is sufficiently powerful to implement any other synchronization primitives that implement an atomic read-modify-write sequence on a memory location. As such, compare&swap is a powerful building block for mutual exclusion algorithms (algorithms that ensure that a given sequence of code is accessible only by one processor at a time), safe and correct concurrent updates to data structures such as queues and lists by multiple processors, and many more.
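The steps above can be mirrored in a short Python sketch. Note that this only illustrates the semantics of compare&swap, not its atomicity — a single-threaded one-element list stands in for the memory word, whereas real CAS is a hardware instruction (or an lr/sc pair, as in the question below).

```python
def cas(cell, old, new):
    """Compare-and-swap on a one-element list standing in for *p."""
    if cell[0] != old:       # contents differ: report failure
        return False
    cell[0] = new            # contents equal: swap in the new value
    return True

# Building-block example: a spinlock (0 = unlocked, 1 = locked).
def acquire(lock):
    while not cas(lock, 0, 1):
        pass                 # spin until we are the one who flips 0 -> 1

def release(lock):
    lock[0] = 0

cell = [5]
print(cas(cell, 4, 9), cell[0])   # False 5  (no swap)
print(cas(cell, 5, 9), cell[0])   # True 9   (swapped)
```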
Question 1 (0.75 out of 1 pts)
The following RISC-V assembly code implements a compare&swap primitive, where t0 is the 'old' register, t1 is the 'new' register, and the address of the word in memory that the sequence attempts to swap atomically is in s0. The sequence returns success (1) or failure (0) in register t2. Fill in the gaps in the code.
#1 add  t2, zero, zero
#2 lr.d t3, (s0)
#3 bne  t0, t3, exit
#4 sc.d t1, (s0), t2
#5 exit:
#6 addi t2, t2, zero
|
88a6574f6ab1b43d94eb71a6a9a00252
|
{
"intermediate": 0.27905896306037903,
"beginner": 0.2882571518421173,
"expert": 0.43268388509750366
}
|
2,560
|
I have a problem with the script: when the beatmapsets are downloaded, they are named with only their id.
For example, the map with id 123456 will be downloaded under the name "123456.osz" instead of "123456 Lite Show Magic - Crack traxxxx.osz".
Normally, when I use the osu mirrors, the beatmapset names should be correct. For example:
if I go to this link https://beatconnect.io/b/123456/
the downloaded beatmapset will be named "123456 Lite Show Magic - Crack traxxxx.osz" and not 123456.osz.
So could you make all the download links go through Selenium to avoid this problem,
and do not remove my simultaneous download mechanism, etc.
Here is my script:
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
1a6739206697709cfd4ff904d1a308ec
|
{
"intermediate": 0.38345563411712646,
"beginner": 0.41284629702568054,
"expert": 0.20369814336299896
}
|
2,561
|
I have a problem with the script: when the beatmapsets are downloaded, they are named with only their id.
For example, the map with id 123456 will be downloaded under the name "123456.osz" instead of "123456 Lite Show Magic - Crack traxxxx.osz".
Normally, when I use the osu mirrors, the beatmapset names should be correct. For example:
if I go to this link https://beatconnect.io/b/123456/
the downloaded beatmapset will be named "123456 Lite Show Magic - Crack traxxxx.osz" and not 123456.osz.
So could you make all the download links go through Selenium to avoid this problem,
and do not remove my mechanism that downloads the maps simultaneously, etc.
Here is my script:
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
e8964382c0f0e2006a425a1779d22c67
|
{
"intermediate": 0.4022672474384308,
"beginner": 0.40670937299728394,
"expert": 0.1910233199596405
}
|
2,562
|
Hi, I have ILSpy installed in Microsoft VS Code, but I cannot decompile files by right-clicking on them. I have the ILSpy "decompiled members" option, but after loading the file it does not do anything. How would you suggest I proceed? I am using Linux.
|
67153b3162a566ab6729186299241919
|
{
"intermediate": 0.47534143924713135,
"beginner": 0.29820942878723145,
"expert": 0.226449117064476
}
|
2,563
|
I have a problem with the script: when the beatmapsets are downloaded, they are named with only their id.
For example, the map with id 123456 will be downloaded under the name "123456.osz" instead of "123456 Lite Show Magic - Crack traxxxx.osz".
Normally, when I use the osu mirrors, the beatmapset names should be correct. For example:
if I go to this link https://beatconnect.io/b/123456/
the downloaded beatmapset will be named "123456 Lite Show Magic - Crack traxxxx.osz" and not 123456.osz.
So could you make all the download links go through the browser to avoid this problem,
and do not remove any of the mechanisms/behavior that download the maps simultaneously, among plenty of other things.
Here is my script:
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
66b346a51f9c51744a5f656152815667
|
{
"intermediate": 0.374409943819046,
"beginner": 0.40927037596702576,
"expert": 0.21631968021392822
}
|
2,564
|
I have a problem with the script: when the beatmapsets are downloaded, they are named with only their id.
For example, the map with id 123456 will be downloaded under the name "123456.osz" instead of "123456 Lite Show Magic - Crack traxxxx.osz".
Normally, when I use the osu mirrors, the beatmapset names should be correct. For example:
if I go to this link https://beatconnect.io/b/123456/
the downloaded beatmapset will be named "123456 Lite Show Magic - Crack traxxxx.osz" and not 123456.osz.
So could you make all the download links go through Selenium to avoid this problem,
and do not remove my mechanism that downloads the maps simultaneously, etc.
Here is my script:
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
24a6fa076451907175ca0eb1030dedcc
|
{
"intermediate": 0.4022672474384308,
"beginner": 0.40670937299728394,
"expert": 0.1910233199596405
}
|
2,565
|
Hi! Describe an example of a simple calculation using WIEN2k
|
e9e88ff3328a613c2e6bad24f2ac4f76
|
{
"intermediate": 0.3951043486595154,
"beginner": 0.1464053988456726,
"expert": 0.458490252494812
}
|
2,566
|
Hi, I have a task for you to implement. Description: In this task, you will use the dataset Face.mat to classify
emotions, age, and gender of the subjects. This dataset includes 12 images of 6 different subjects. For
each subject we have 2 samples of 6 different emotions. Use one of each sample in your training
dataset and the other one in your testing dataset. This way you would have 36 images in the training
dataset and 36 images in the testing dataset.
Goal: To classify the gender (2 classes: M, F), emotions (6 classes: angry, disgust, neutral, happy, sad,
surprised) and age (3 classes: Young, Mid age, Old). Using a linear classifier. You should label your
training and testing data for each of the classification problem separately.
Classifier: Use the linear classifier
Features: Projection of the image on each of the eigenfaces. For this purpose, you need to calculate
PCA of your training data. Your kth feature will be the projection (dot product) of the image on the
kth eigen vector (PCA direction) also known as eigenface.
Feature Selection: You will extract 36 Features as there are 36 images (and eigen vectors) in your
training data. Use the "sequentialfs" command in MATLAB to select the top 6 features using the
sequential forward search algorithm. The whos command on 'Face.mat' gives the following information about the dataset:
Variable   Size        Data Type   Bytes
II         100820x72   double      58072320
m          1x72        double      576
Here is the code that my professor implemented:
clc; close all; clear
%% This code is written by Prof.ET Esfahani for instruction of MAE502
% course in Spring 2021. This code should not be shared with anyone outside
% of the class.
%% Loaiding the dataset, 6 subjects, 12 images per subject
% II: is the matrix containing all 72 images, each row is one image in form
% of a vector with zero mean
% m: is a vector whose kth compomemnt is the mean value of kth image in II
load('Face.mat')
cnt = 0;
mx = 355; my = 284; % original image size
for s=1:6
figure(s)
for i=1:12
cnt = cnt+1;
I = reshape(II(:,cnt),mx,my);
subplot(3,4,i)
imagesc(I); axis off; axis equal; colormap gray
end
end
%% Calculating the PCA
[A , B] = pca(II); % B is the new vectors (dimensions)
% ploting the eigenfaces
figure
title('First 12 Eigen Faces')
for j=1:12
I = reshape(B(:,j),mx,my);
subplot(3,4,j)
imagesc(I); axis off; axis equal; colormap gray
end
%% Reconstructing an image
I_rec = zeros(mx,my)+m(44);
I_org = reshape(II(:,44),mx,my);
figure
subplot(1,2,1)
imagesc(I_org); axis off; axis equal; colormap gray
title('Original Image')
%
% for i=1:72
% B(:,i) = B(:,i)/norm(B(:,i)); % Normalizing each dimension (making a unit vector)
% EF = reshape(B(:,i),mx,my); % Extracting each Eigen Face
% w(i) = B(:,i)'*I_org(:); % Projecting the image onto each Eigen face (finding the linear coefficient in the new direction)
% I_rec = I_rec + w(i)*EF; % The reconstructed image is the weighted sum of the eigen-faces (we include the first i eigen faces in each iteration)
% subplot(1,2,2)
% imagesc(I_rec); axis off; axis equal; colormap gray
% title(['Reconstructed Image i=' num2str(i)])
% pause(0.1)
% end
%% Reconstructing an image outside of the dataset
I_org = double(imread('JB2.jpg'));
I_org = I_org(:,2:end-1);
I_rec = zeros(mx,my);
figure
subplot(1,2,1)
imagesc(I_org); axis off; axis equal; colormap gray
title('Original Image')
%
% for i=1:72
% EF = reshape(B(:,i),mx,my); % Extracting each Eigen Face
% w2(i) = B(:,i)'*I_org(:); % Projecting the image onto each Eigen face (finding the linear coefficient in the new direction)
% I_rec = I_rec + w2(i)*EF; % The reconstructed image is the weighted sum of the eigen-faces (we include the first i eigen faces in each iteration)
% subplot(1,2,2)
% imagesc(I_rec); axis off; axis equal; colormap gray
% title(['Reconstructed Image i=' num2str(i)])
% pause(0.1)
% end
%% Using Eigen faces for classification
% we calculate 6 features that are the projection of the image on eigen
% face 1 to 6 that is being saved in matrix F. The projection of each
% image on each eigen face will provide a unique feature. That means that
% out of 72 possible feature we are using only 6 of them.
% You can see how good each feature is in separating the subjects; note
% that I have plotted each subject with a different color.
SubCol = 'rgbkmy';
F = zeros(72,6);
for i=1:72
F(i,:) = B(:,[1:6])'*II(:,i);
end
figure
for i=1:6
subplot(2,2,1)
sid = (i-1)*12+1:(12*i);
plot(F(sid,1),F(sid,2),[SubCol(i) 'o']); hold on
subplot(2,2,2)
plot(F(sid,3),F(sid,4),[SubCol(i) 'o']); hold on
subplot(2,2,3)
plot(F(sid,5),F(sid,6),[SubCol(i) 'o']); hold on
end
%% Let's see where President Biden will land in our projection
X = B(:,[1:6])'*I_org(:);
subplot(2,2,1); plot(X(1),X(2),'*')
subplot(2,2,2); plot(X(3),X(4),'*')
subplot(2,2,3); plot(X(5),X(6),'*')
% Using the first two features, President Biden's data is closest to Subject
% 1, shown in red, which is an old white male with white hair (See Fig 1).
% Is this an accident?
% Let's try another person
I_HC = imread('HC.jpg');
I_HC = double(rgb2gray(I_HC));
I_HC = I_HC(:,4:end-3);
X = B(:,[1:6])'*I_HC(:);
subplot(2,2,1); plot(X(1),X(2),'s')
subplot(2,2,2); plot(X(3),X(4),'s')
subplot(2,2,3); plot(X(5),X(6),'s')
legend({'S1','S2','S3','S4','S5','S6','JB','HC'},'Location','NorthEastOutside')
%Using features 3 and 4, Hillary Clinton will be closest to subject 6 who
%is a middle age female. See Fig 6
%However, if you use feature 1, she will be closer to Subject 1
%% Reconstruction of HC
I_rec = zeros(mx,my);
figure
subplot(1,2,1)
imagesc(I_HC); axis off; axis equal; colormap gray
title('Original Image')
for i=1:72
EF = reshape(B(:,i),mx,my); % Extracting each Eigen Face
w2(i) = B(:,i)'*I_HC(:); % Projecting the image onto each Eigen face (finding the linear coefficient in the new direction)
I_rec = I_rec + w2(i)*EF; % The reconstructed image is the weighted sum of the eigen-faces (we include the first i eagien faces in each iteration)
subplot(1,2,2)
imagesc(I_rec); axis off; axis equal; colormap gray
title(['Reconstructed Image i=' num2str(i)])
pause(0.1)
end
Based on my professor's code, implement the task. Based on the above implementation, take the parts that help us solve the task mentioned in the description earlier. Let's go.
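The feature-extraction step the task describes — the kth feature of an image is its projection onto the kth eigenface — can be sketched in NumPy. This is a sketch under assumptions: a small random matrix stands in for the real 100820x72 `II`, and the eigenfaces are taken as the left singular vectors of the zero-mean training data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_img, k = 500, 36, 6               # stand-ins for 100820 pixels, 36 images
II = rng.standard_normal((n_pix, n_img))   # each column is one zero-mean "image"

# PCA directions (eigenfaces) = left singular vectors of the data matrix.
U, S, Vt = np.linalg.svd(II, full_matrices=False)
eigenfaces = U[:, :k]                      # n_pix x k, orthonormal columns

# Feature matrix: column i holds the k projections of image i.
F = eigenfaces.T @ II
print(F.shape)                             # (6, 36)
```

Each column of F would then get three separate label sets (gender, emotion, age) and feed a linear classifier; sequentialfs-style forward selection would pick the best feature subset for each problem.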
|
60a7387943621c2bfc18a9e605532d5b
|
{
"intermediate": 0.3736647963523865,
"beginner": 0.31600823998451233,
"expert": 0.3103269934654236
}
|
2,567
|
Could you help me modify my script? I don't want the downloaded beatmap to be named "<beatmapset id>.osz", but rather the name under which it was actually served (.osz).
Here is my script:
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
6a1a955a3c6e19972da908851b549c71
|
{
"intermediate": 0.22021780908107758,
"beginner": 0.5608363747596741,
"expert": 0.21894571185112
}
|
2,568
|
How could I make a Python script that runs a function every minute; if the function returns true, it runs another function and then goes back to running the first function?
|
47c3c2fe13698cd53fa551e04efe61c7
|
{
"intermediate": 0.4004879593849182,
"beginner": 0.3313388526439667,
"expert": 0.2681731581687927
}
|
2,569
|
Please write me some Python code that first takes a mouse click from the user to get screen coordinates and then performs 50 clicks with a random 1 to 3 second gap between clicks.
|
2b89620754692c9d8e6a59e9ae1e4a01
|
{
"intermediate": 0.4081718325614929,
"beginner": 0.08278290182352066,
"expert": 0.5090453028678894
}
|
2,570
|
at stp, how much volume does 7.64 moles of Argon take up? show dimensional analysis
|
c1b2ad6319de4fcea91dfcd618639b3d
|
{
"intermediate": 0.4098025858402252,
"beginner": 0.26072457432746887,
"expert": 0.3294728994369507
}
|
2,571
|
can you show me the ODE definition of the Krauss model for traffic flow
|
90da2279317d3ffc80770284307c3f69
|
{
"intermediate": 0.15706650912761688,
"beginner": 0.09549029916524887,
"expert": 0.7474431395530701
}
|
2,572
|
Here is my script. Can you make the beatmapsets be saved under the name they are downloaded as (.osz) rather than {beatmapset_id}.osz?
For example, if the script works correctly, the beatmapset with ID 123456 should not be named "123456.osz" but "123456 Lite Show Magic - Crack traxxxx.osz".
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
cd0350942e4a9277321ebbd080ffbd6e
|
{
"intermediate": 0.3327183425426483,
"beginner": 0.36863330006599426,
"expert": 0.2986484169960022
}
|
2,573
|
write some matlab code, free from any syntax errors, that simulates traffic flow using the Krauss model and then animates the results on a single lane using cars of different lengths and widths
|
702e0ccdc01459758b83c5026a70454d
|
{
"intermediate": 0.13805826008319855,
"beginner": 0.0885872021317482,
"expert": 0.7733545303344727
}
|
2,574
|
Here is my script. Can you make the beatmapsets be saved under the name they are downloaded as (.osz) rather than {beatmapset_id}.osz?
For example, if the script works correctly, the beatmapset with ID 123456 should not be named "123456.osz" but "123456 Lite Show Magic - Crack traxxxx.osz".
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
65bc6a58968aa6b103a82c216aab1210
|
{
"intermediate": 0.3327183425426483,
"beginner": 0.36863330006599426,
"expert": 0.2986484169960022
}
|
2,575
|
Here is my script. Can you make the beatmapsets be saved under the name they are downloaded as (.osz) rather than {beatmapset_id}.osz?
For example, if the script works correctly, the beatmapset with ID 123456 should not be named "123456.osz" but "123456 Lite Show Magic - Crack traxxxx.osz".
import os
import sys
import requests
from typing import List
from concurrent.futures import ThreadPoolExecutor, as_completed
osu_mirrors = [
"https://kitsu.moe/api/d/",
"https://chimu.moe/d/",
"https://proxy.nerinyan.moe/d/",
"https://beatconnect.io/b/",
"https://osu.sayobot.cn/osu.php?s=",
"https://storage.ripple.moe/d/",
"https://akatsuki.gg/d/",
"https://ussr.pl/d/",
"https://osu.gatari.pw/d/",
"http://storage.kawata.pw/d/",
"https://catboy.best/d/",
"https://kawata.pw/b/",
"https://redstar.moe/b/",
"https://atoka.pw/b/",
"https://fumosu.pw/b/",
"https://osu.direct/api/d/",
]
def download_beatmapset(beatmapset_id: int, url: str):
try:
response = requests.get(url + str(beatmapset_id))
response.raise_for_status()
file_name = f"{beatmapset_id}.osz"
with open(file_name, "wb") as f:
f.write(response.content)
print(f"Beatmapset {beatmapset_id} téléchargé depuis {url}")
return (True, beatmapset_id)
except requests.exceptions.RequestException as e:
print(f"Erreur lors du téléchargement de {beatmapset_id} depuis {url}: {e}")
return (False, beatmapset_id)
def download_from_mirrors(beatmapset_id: int):
for url in osu_mirrors:
success, _ = download_beatmapset(beatmapset_id, url)
if success:
break
else:
print(f"Beatmapset {beatmapset_id} est perdu (introuvable sur les miroirs)")
def main(file_name: str):
with open(file_name, "r") as f:
beatmapset_ids = [int(line.strip()) for line in f]
output_dir = file_name.replace(".txt", "")
os.makedirs(output_dir, exist_ok=True)
os.chdir(output_dir)
with ThreadPoolExecutor(max_workers=len(osu_mirrors)) as executor:
results = {executor.submit(download_beatmapset, beatmapset_id, url): (beatmapset_id, url) for beatmapset_id, url in zip(beatmapset_ids, osu_mirrors * (len(beatmapset_ids) // len(osu_mirrors)))}
unsuccessful_downloads = []
for future in as_completed(results):
beatmapset_id, url = results[future]
success, downloaded_beatmapset_id = future.result()
if not success:
unsuccessful_downloads.append(downloaded_beatmapset_id)
for beatmapset_id in unsuccessful_downloads:
download_from_mirrors(beatmapset_id)
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python download_beatmapsets.py <beatmapsets_file.txt>")
sys.exit(1)
main(sys.argv[1])
|
f9112c1f0baeec633f6cdf06dc722f15
|
{
"intermediate": 0.3327183425426483,
"beginner": 0.36863330006599426,
"expert": 0.2986484169960022
}
|
2,576
|
How should I get the player controller inside ActivateAbility in an ability?
|
1035cf2baece051896102bd934aa56c3
|
{
"intermediate": 0.48674139380455017,
"beginner": 0.19192728400230408,
"expert": 0.32133129239082336
}
|
2,577
|
import icon1 from '@/assets/img/icon1.png'; need 1- 10
|
996c4f7456cbe4a17715d227ba2d2d13
|
{
"intermediate": 0.39958181977272034,
"beginner": 0.2590632438659668,
"expert": 0.34135493636131287
}
|
2,578
|
Please help optimize the following SQL: with tep as (select t.urid, t.draftid, t.bankaccountid FROM (select t.urid, t.draftid,
|
bed345958900a7bef13c1977262b7d5b
|
{
"intermediate": 0.2773536443710327,
"beginner": 0.4185795783996582,
"expert": 0.3040667772293091
}
|
2,579
|
Is a header guard the same as "#pragma once"?
|
eb79c8d7e4c7c717270726479c08c106
|
{
"intermediate": 0.355435848236084,
"beginner": 0.31544068455696106,
"expert": 0.32912349700927734
}
|
2,580
|
Hi :)
|
3591cb2d38166bad58a30cd71a273045
|
{
"intermediate": 0.3328707814216614,
"beginner": 0.26012828946113586,
"expert": 0.40700095891952515
}
|
2,581
|
flutter The specific widget that could not find a MediaQuery ancestor
|
4438a4c7a8f693d79895f3240aacf3df
|
{
"intermediate": 0.3413034677505493,
"beginner": 0.3911059498786926,
"expert": 0.26759055256843567
}
|
2,582
|
Description
Welcome to the third assignment for this course. The goal of this assignment is to
acquire experience in defining and solving a reinforcement learning (RL) environment,
following Gym standards.
The assignment consists of three parts. The first part focuses on defining an
environment that is based on a Markov decision process (MDP). In the second part, we
will apply a tabular method SARSA to solve an environment that was previously
defined. In the third part, we apply the Q-learning method to solve a grid-world
environment.
Part I: Define an RL Environment [30 points]
In this part, we will define a grid-world reinforcement learning environment as an MDP.
While building an RL environment, you need to define possible states, actions, rewards
and other parameters.
STEPS:
1. Choose a scenario for your grid world. You are welcome to use the
visualization demo as a reference to visualize it.
An example of idea for RL environment:
• Theme: Lawnmower Grid World with
batteries as positive rewards and rocks as
negative rewards.
• States: {S1 = (0,0), S2 = (0,1), S3 = (0,2),
S4 = (0,3), S5 = (1,0), S6 = (1,1), S7 =
(1,2), S8 = (1,3), S9 = (2,0), S10 = (2,1),
S11 = (2,2), S12 = (2,3), S13 = (3,0), S14 =
(3,1), S15 = (3,2), S16 = (3,3)}
• Actions: {Up, Down, Right, Left}
• Rewards: {-5, -6, +5, +6}
• Objective: Reach the goal state with
maximum reward
2. Define an RL environment following the scenario that you chose.
Environment requirements:
• Min number of states: 12
• Min number of actions: 4
• Min number of rewards: 4
Environment definition should follow the OpenAI Gym structure, which includes
the basic methods. You can use the “Defining RL env” demo as a base code.
def __init__:
# Initializes the class
# Define action and observation space
def step:
# Executes one timestep within the environment
# Input to the function is an action
def reset:
# Resets the state of the environment to an initial state
def render:
# Visualizes the environment
# Any form like vector representation or visualizing
using matplotlib is sufficient
3. Run a random agent for at least 10 timesteps to show that the environment logic
is defined correctly. Print the current state, chosen action, reward and return your
grid world visualization for each step.
|
decd8f07894e7e8597af0a1347b3acac
|
{
"intermediate": 0.4125334322452545,
"beginner": 0.3861415386199951,
"expert": 0.20132498443126678
}
|
2,583
|
can you define the krauss model for traffic flow
|
3bf80cf37b4ab27c44b088cbed3e20b9
|
{
"intermediate": 0.19343097507953644,
"beginner": 0.18098869919776917,
"expert": 0.6255803108215332
}
|
2,584
|
can you write me a simple matlab code that uses the Krauss model with the following equations: v_safe(t) = v_(α-1)(t) + (s_α(t) - v_α(t)·τ_k) / ( (v_(α-1)(t) + v_α(t)) / (2·b_max) + τ_k ); v_des(t) = min( v_max, v_α(t) + a_max·Δt, v_safe(t) ); v_α(t+Δt) = max( 0, v_des(t) - η ). Simulate traffic flow on one lane with no collisions but possible overtaking, make sure that all cars (rectangles) have different colors, lengths and widths, and finally animate the results with a plot that shows the lane width and distance travelled.
|
1e60ee3d145b9a77a6bb39c9d22aaba1
|
{
"intermediate": 0.3000691533088684,
"beginner": 0.11160685122013092,
"expert": 0.5883239507675171
}
|
2,585
|
[liveuser@localhost-live /]$ sudo chroot /run/media/liveuser/lazarus/root00/
basename: missing operand
Try 'basename --help' for more information.
[root@localhost-live /]#
|
ff1bc278097c4113d3d4814cd27290f5
|
{
"intermediate": 0.29835280776023865,
"beginner": 0.39974653720855713,
"expert": 0.3019006550312042
}
|
2,586
|
In a Spring Cloud project, I want my service to use a different Nacos server address depending on the runtime environment, so I wrote the following configuration:
*bootstrap.yaml*
|
d029ff86919889c9aea6e438a03cf614
|
{
"intermediate": 0.4390072524547577,
"beginner": 0.22398588061332703,
"expert": 0.3370068669319153
}
|
2,587
|
c#socket Client.Send
|
3629cb868a434623a1554c0364a1e61b
|
{
"intermediate": 0.3979950547218323,
"beginner": 0.3123474717140198,
"expert": 0.28965750336647034
}
|
2,588
|
import matplotlib.pyplot as plt
import keras_ocr
import cv2
import math
import numpy as np
def midpoint(x1, y1, x2, y2):
x_mid = int((x1 + x2)/2)
y_mid = int((y1 + y2)/2)
return (x_mid, y_mid)
def inpaint_text(img_path, pipeline):
img = keras_ocr.tools.read(img_path)
prediction_groups = pipeline.recognize([img])
mask = np.zeros(img.shape[:2], dtype="uint8")
for box in prediction_groups[0]:
x0, y0 = box[1][0]
x1, y1 = box[1][1]
x2, y2 = box[1][2]
x3, y3 = box[1][3]
x_mid0, y_mid0 = midpoint(x1, y1, x2, y2)
x_mid1, y_mi1 = midpoint(x0, y0, x3, y3)
thickness = int(math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2))
cv2.line(mask, (x_mid0, y_mid0), (x_mid1, y_mi1), 255,
thickness)
inpainted_img = cv2.inpaint(img, mask, 7, cv2.INPAINT_NS)
return (inpainted_img)
pipeline = keras_ocr.pipeline.Pipeline()
img_text_removed = inpaint_text('jizhan_watermark2.png', pipeline)
plt.imshow(img_text_removed)
cv2.imwrite('text_removed_image.jpg', cv2.cvtColor(img_text_removed, cv2.COLOR_BGR2RGB))
Convert the above code to use PaddleOCR instead.
|
154799faf3a8db4b6294cf3d080d67e1
|
{
"intermediate": 0.4777919352054596,
"beginner": 0.300643652677536,
"expert": 0.22156444191932678
}
|
2,589
|
in this code, !开启带骂模式 works, but when I say !关闭带骂模式 after saying !开启带骂模式, it does not stop and it says 抱歉,无法关闭,error: 带骂模式并没有开启. Code: let isChatting = false;
let chatInterval = null;
function startChat() {
isChatting = true;
chatInterval = setInterval(() => {
const randomIndex = Math.floor(Math.random() * messages.length);
const message = messages[randomIndex];
bot.chat(message);
}, 4000);
bot.chat('已开启带骂模式');
}
function stopChat() {
isChatting = false;
clearInterval(chatInterval);
bot.chat('已关闭带骂模式');
}
if (whitelist.includes(username)) {
if (message === '!开启带骂模式' && !isChatting) {
startChat();
} else if (message === '!开启带骂模式' && isChatting) {
bot.chat('抱歉,无法开启,error: 带骂模式已经开启')
} else if (message === '!关闭带骂模式' && isChatting) {
stopChat();
} else if (message === '!关闭带骂模式' && !isChatting) {
bot.chat('抱歉,无法关闭,error: 带骂模式并没有开启')
}
}
|
1c715c82f21a3399883f2733210d587a
|
{
"intermediate": 0.3793248236179352,
"beginner": 0.4061635136604309,
"expert": 0.2145116627216339
}
|
2,590
|
In this code, !开启带骂模式 works, but when I say !关闭带骂模式 after saying !开启带骂模式, it does not stop; it says 抱歉,无法关闭,error: 带骂模式并没有开启. Code: let isChatting = false;
let chatInterval = null;
function startChat() {
isChatting = true;
chatInterval = setInterval(() => {
const randomIndex = Math.floor(Math.random() * messages.length);
const message = messages[randomIndex];
bot.chat(message);
}, 4000);
bot.chat('已开启带骂模式');
}
function stopChat() {
isChatting = false;
clearInterval(chatInterval);
bot.chat('已关闭带骂模式');
}
if (whitelist.includes(username)) {
if (message === '!开启带骂模式' && !isChatting) {
startChat();
} else if (message === '!开启带骂模式' && isChatting) {
bot.chat('抱歉,无法开启,error: 带骂模式已经开启')
} else if (message === '!关闭带骂模式' && isChatting) {
stopChat();
} else if (message === '!关闭带骂模式' && !isChatting) {
bot.chat('抱歉,无法关闭,error: 带骂模式并没有开启')
}
}
|
7f034acec418862feafa1a851ed68399
|
{
"intermediate": 0.31373757123947144,
"beginner": 0.31710147857666016,
"expert": 0.3691609799861908
}
|
2,591
|
What is the technical term for the process whereby, whenever data changes in the database, the backend system updates the HTML page on the frontend?
|
c48481a2189d4d8dd42cc045ebcc1162
|
{
"intermediate": 0.47126340866088867,
"beginner": 0.2863869369029999,
"expert": 0.24234957993030548
}
|
2,592
|
boost::split_iterator<std::string::const_iterator> usage
|
eedbdb568b581b991fd6a00cf279d26d
|
{
"intermediate": 0.4480327367782593,
"beginner": 0.24856802821159363,
"expert": 0.3033992350101471
}
|
2,593
|
Make a Python program which creates a PNG image of a spectrogram from a WAV file.
|
2e67599417c55fe03239ffa02c7f6557
|
{
"intermediate": 0.4170195758342743,
"beginner": 0.17139847576618195,
"expert": 0.41158196330070496
}
|
2,594
|
in this script. add an option to keep log of the translated text in a file. create an option in the window option submenu to toggle this option, and an option to set the log file name.
def load_settings():
# Load settings from json file
try:
with open('settings.json', 'r') as settings_file:
settings_data = json.load(settings_file)
except FileNotFoundError:
settings_data = {}
return settings_data
def save_settings(key, value):
# Save settings to the json file
settings_data = load_settings()
settings_data[key] = value
with open('settings.json', 'w') as settings_file:
json.dump(settings_data, settings_file)
def translate_text(text_to_translate, prompt_template):
prompt = prompt_template.format(text_to_translate=text_to_translate)
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo-0301",
temperature=1,
messages=[
{
"role": "user",
"content": f"{prompt}",
}
],
)
translated_text = (
completion["choices"][0]
.get("message")
.get("content")
.encode("utf8")
.decode()
)
return translated_text
def update_display(display, text):
display.delete(1.0, tk.END)
display.insert(tk.END, text)
def clipboard_monitor():
global prompt_text
old_clipboard = ""
while True:
if not pause_monitoring:
new_clipboard = pyperclip.paste()
# Remove line breaks from new_clipboard
# new_clipboard = new_clipboard.replace('\n', '').replace('\r', '')
# Compare new_clipboard with old_clipboard to detect changes
if new_clipboard != old_clipboard and new_clipboard.strip() != untranslated_textbox.get(1.0, tk.END).strip():
old_clipboard = new_clipboard
if new_clipboard in translated_dict:
translated_text = translated_dict[new_clipboard]
else:
try:
prompt_text = prompt_textbox.get(1.0, tk.END).strip()
translated_text = translate_text(new_clipboard, prompt_text)
except Exception as e:
root.after(0, create_error_window, f"Connection Error: {e}")
continue
translated_dict[new_clipboard] = translated_text
update_display(untranslated_textbox, new_clipboard)
update_display(translated_textbox, translated_text)
time.sleep(1)
def save():
global prompt_logfile
global prompt_text
prompt_text = prompt_textbox.get(1.0, tk.END).strip()
try:
with open(prompt_logfile, "w", encoding="utf-8") as f:
f.write(prompt_text)
except FileNotFoundError as e:
save_as()
def save_as():
global prompt_logfile
global prompt_text
prompt_text = prompt_textbox.get(1.0, tk.END).strip()
prompt_logfile = filedialog.asksaveasfilename(initialdir=os.getcwd(),
title="Save Prompt As",
defaultextension=".txt",
filetypes=(("Text files", ".txt"), ("All files", ".")))
if not prompt_logfile:
return
save_settings("prompt_logfile", prompt_logfile)
with open(prompt_logfile, "w", encoding="utf-8") as f:
f.write(prompt_text)
root = tk.Tk()
root.title("Clipboard Translator")
root.geometry("400x200")
settings_data = load_settings()
openai.api_base = settings_data.get("api_base", "https://api.openai.com/v1")
openai.api_key = settings_data.get("api_key", "")
prompt_logfile = settings_data.get("prompt_logfile", "")
always_on_top = settings_data.get("always_on_top", False)
window_opacity = settings_data.get("window_opacity", 1.0)
font_settings = settings_data.get("font_settings", {"family": "Courier", "size": 10, "color": "black"})
prompt_logfile, prompt_text = open_prompt_file(prompt_logfile)
menu_bar = Menu(root)
# Add Options menu
options_menu = Menu(menu_bar, tearoff=0)
menu_bar.add_cascade(label="Options", menu=options_menu)
options_menu.add_command(label="Set OpenAI API", command=set_openai_api)
prompt_menu = Menu(menu_bar, tearoff=0)
menu_bar.add_cascade(label="Prompt", menu=prompt_menu)
prompt_menu.add_command(label="Save", command=save)
prompt_menu.add_command(label="Save As", command=save_as)
root.config(menu=menu_bar)
paned_window = tk.PanedWindow(root, sashrelief="groove", sashwidth=5, orient="horizontal")
paned_window.pack(expand=True, fill="both")
untranslated_textbox = tk.Text(paned_window, wrap="word")
paned_window.add(untranslated_textbox)
translated_textbox = tk.Text(paned_window, wrap="word")
paned_window.add(translated_textbox)
prompt_textbox = tk.Text(paned_window, wrap="word")
paned_window.add(prompt_textbox)
update_display(prompt_textbox, prompt_text)
copy_button = tk.Button(root, text="Copy", command=copy_untranslated)
copy_button.place(relx=0, rely=1, anchor="sw")
retranslate_button = tk.Button(root, text="Retranslate", command=retranslate_text)
retranslate_button.place(relx=0.1, rely=1, anchor="sw")
toggle_button = tk.Button(root, text="Hide", command=toggle_translated, relief=tk.RAISED)
toggle_button.place(relx=0.5, rely=1, anchor="s")
pause_button = tk.Button(root, text="Pause", command=toggle_pause_monitoring, relief=tk.RAISED)
pause_button.place(relx=1, rely=1, anchor="se")
if __name__ == "__main__":
exe_mode = getattr(sys, 'frozen', False)
if exe_mode:
sys.stderr = open("error_log.txt", "w")
clipboard_monitor_thread = threading.Thread(target=clipboard_monitor)
clipboard_monitor_thread.start()
root.mainloop()
if exe_mode:
sys.stderr.close()
|
97081ca5b5a8b3eb6d32b0c7246c83d9
|
{
"intermediate": 0.3357749581336975,
"beginner": 0.5449362397193909,
"expert": 0.11928874254226685
}
|
2,595
|
I want a python 3 code using selenium to search for restaurants in Texas on google map. I want the code to click on every restaurant and print address and phone number of each restaurant. I also want the restaurant images url.
|
a1401749f9e6443b0e2d1cbb9e3ebb0d
|
{
"intermediate": 0.45220300555229187,
"beginner": 0.13139668107032776,
"expert": 0.4164002537727356
}
|
2,596
|
Here is my Aarch64 assembly language program. Modify code in loop2 so it prints elements of array using printf.
.section .data
array: .skip 40 // reserve space for 10 integers
i: .word 5
fmtstr: .string "%d\n"
.section .bss
rnum: .skip 4
.section .text
.global main
.type main, @function
main:
mov x1, 0 // initialize loop counter to 0
mov x2, 10 // set loop limit to 10
loop1:
cmp x1, x2 // compare loop counter to loop limit
beq endloop1 // if equal, exit loop
ldr x3, =array // load address of array
str w1, [x3, x1, lsl #2] // store loop counter value at index x1 of array
add x1, x1, 1 // increment loop counter
b loop1 // jump to start of loop
endloop1:
mov x1, 0 // initialize loop counter to 0
loop2:
cmp x1, x2
beq endloop2
add x1, x1, 1 // increment loop counter
b loop2 // jump to start of loop2
endloop2:
ret // return from main function
|
bcfe97b6563a3cab9f8ff44dc5e25d91
|
{
"intermediate": 0.2749122381210327,
"beginner": 0.4938897490501404,
"expert": 0.23119798302650452
}
|
2,597
|
write the python code according to the following description: to make a kernel for the browser, you need to start from zero itself:
0. Requests to the network. Implementation of the http protocol. Fasten SSL / TLS later and it will be https.
1. Domain parsing. parse domain_name:port, make a DNS query to resolve the domain, and specify host:domain_name in the http headers, connect to port.
It can be different from the 80th, for example.
2. Render html. You write an html analyzer engine that distributes elements across the screen. If a script block is encountered, it passes to the appropriate language interpreter, for example, javascript.
3. Handle feedback from the user. If the user clicks a button on the screen,
you need to match that button to where it leads, then make a request to that page and get a response.
In total we have:
* HTML/css parser, graphics renderer.
* At least javascript interpreter
|
b47052fbc2efd5b9ed908720c9bf0540
|
{
"intermediate": 0.39038076996803284,
"beginner": 0.30642345547676086,
"expert": 0.3031958043575287
}
|
2,598
|
create html frontend for ipfs
|
9fe63b7f3357f548462797b4b8e55263
|
{
"intermediate": 0.34747061133384705,
"beginner": 0.2589901387691498,
"expert": 0.3935392498970032
}
|
2,599
|
Do you know the pickleball game?
|
4080b18f17319975e236383e42f54b06
|
{
"intermediate": 0.34128960967063904,
"beginner": 0.3955717086791992,
"expert": 0.26313865184783936
}
|
2,600
|
I need an image-viewing solution that works in Jupyter, which will display the image without axes etc., only a title, and will display the image as large as possible; make it a function.
|
b675207d0b7076be22b54c6badf95475
|
{
"intermediate": 0.5817157030105591,
"beginner": 0.15141567587852478,
"expert": 0.26686862111091614
}
|
2,601
|
I am having RuntimeWarning: divide by zero encountered in true_divide and RuntimeWarning: invalid value encountered in true_divide warnings. Should I be worried. How can I avoid them ?
|
0882d4fa2612e17055187e7518a5053d
|
{
"intermediate": 0.5091793537139893,
"beginner": 0.17946381866931915,
"expert": 0.3113568425178528
}
|
2,602
|
I'm working on CRUD operations in Django and everything is working and verified in the admin panel, but for delete I want the deletion to happen in the frontend, not in the admin panel; if, on the confirmation page, I select "no", it must come back to the frontend.
|
0248d1d0dfd52b7445588d9cfcf95c04
|
{
"intermediate": 0.6547648310661316,
"beginner": 0.22476531565189362,
"expert": 0.12046980112791061
}
|
2,603
|
I have a problem with my code: in my warp_perspective function, when I warp the image, the result gets aligned to the left side from the interest point (that is, from the middle of the image). However, there is content on the left, so that content goes missing; it gets cut because it lies in the negative part of the image. Can you make sure no content is missing? This might also have something to do with how the homography is calculated, since warp_perspective only warps according to an already calculated homography. Here is my full code: "import numpy as np
import cv2
import os
import glob
import matplotlib.pyplot as plt
import time
from kayla_tools import *
def compute_homography_matrix(src_pts, dst_pts):
def normalize_points(pts):
pts_homogeneous = np.hstack((pts, np.ones((pts.shape[0], 1))))
centroid = np.mean(pts, axis=0)
scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - centroid, axis=1))
T = np.array([[scale, 0, -scale * centroid[0]], [0, scale, -scale * centroid[1]], [0, 0, 1]])
normalized_pts = (T @ pts_homogeneous.T).T
return normalized_pts[:, :2], T
src_pts_normalized, T1 = normalize_points(src_pts)
dst_pts_normalized, T2 = normalize_points(dst_pts)
A = []
for p1, p2 in zip(src_pts_normalized, dst_pts_normalized):
x1, y1 = p1
x2, y2 = p2
A.append([0, 0, 0, -x1, -y1, -1, y2 * x1, y2 * y1, y2])
A.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
A = np.array(A)
try:
_, _, VT = np.linalg.svd(A)
except np.linalg.LinAlgError:
return None
h = VT[-1]
H_normalized = h.reshape(3, 3)
H = np.linalg.inv(T2) @ H_normalized @ T1
if np.abs(H[-1, -1]) > 1e-6:
H = H / H[-1, -1]
else:
return None
return H
def filter_matches(matches, ratio_thres=0.7):
filtered_matches = []
for match in matches:
good_match = []
for m, n in match:
if m.distance < ratio_thres * n.distance:
good_match.append(m)
filtered_matches.append(good_match)
return filtered_matches
def find_homography(keypoints, filtered_matches):
homographies = []
skipped_indices = [] # Keep track of skipped images and their indices
for i, matches in enumerate(filtered_matches):
src_pts = np.float32([keypoints[0][m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[i + 1][m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H = ransac_homography(src_pts, dst_pts)
if H is not None:
H = H.astype(np.float32)
homographies.append(H)
else:
print(f"Warning: Homography computation failed for image pair (0, {i + 1}). Skipping.")
skipped_indices.append(i + 1) # Add indices of skipped images to the list
continue
return homographies, skipped_indices
def ransac_homography(src_pts, dst_pts, iterations=2000, threshold=3):
best_inlier_count = 0
best_homography = None
if len(src_pts) != len(dst_pts) or len(src_pts) < 4:
raise ValueError("The number of source and destination points must be equal and at least 4.")
src_pts = np.array(src_pts)
dst_pts = np.array(dst_pts)
for _ in range(iterations):
indices = np.random.choice(len(src_pts), 4, replace=False)
src_subset = src_pts[indices].reshape(-1, 2)
dst_subset = dst_pts[indices].reshape(-1, 2)
homography = compute_homography_matrix(src_subset, dst_subset)
if homography is None:
continue
inliers = 0
for i in range(len(src_pts)):
projected_point = np.dot(homography, np.append(src_pts[i], 1))
if np.abs(projected_point[-1]) > 1e-6:
projected_point = projected_point / projected_point[-1]
else:
continue
distance = np.linalg.norm(projected_point[:2] - dst_pts[i])
if distance < threshold:
inliers += 1
if inliers > best_inlier_count:
best_inlier_count = inliers
best_homography = homography
if best_homography is None:
raise RuntimeError("Failed to find a valid homography matrix.")
return best_homography
def read_ground_truth_homographies(dataset_path):
H_files = sorted(glob.glob(os.path.join(dataset_path, "H*")))
ground_truth_homographies = []
for filename in H_files:
H = np.loadtxt(filename)
ground_truth_homographies.append(H)
return ground_truth_homographies
def warp_perspective(img, H, reverse=False):
# Apply homography matrix to the source image corners
print(img.shape)
display_image(img)
if reverse:
H = np.linalg.inv(H)
h, w = img.shape[0], img.shape[1]
corners = np.array([[0, 0, 1], [w-1, 0, 1], [0, h-1, 1], [w-1, h-1, 1]])
transformed_corners = np.dot(H, corners.T).T
transformed_corners /= transformed_corners[:, 2].reshape((-1, 1))
# Compute the bounding box of the transformed corners and
# define the target_shape as the (height, width) of the bounding box
min_x, min_y = transformed_corners.min(axis=0)[:2]
max_x, max_y = transformed_corners.max(axis=0)[:2]
target_shape = (int(np.round(max_y - min_y)), int(np.round(max_x - min_x)))
h, w = target_shape
target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
target_coordinates = np.stack([target_x.ravel(), target_y.ravel(), np.ones(target_x.size)])
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[0] - 1))
# Change the dtype to int instead of float64 to save memory
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(np.float32)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[1] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[0] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((h, w, 3), dtype=np.uint8)
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
print(warped_image.shape)
display_image(warped_image)
return warped_image
def image_merging(images, homographies):
if len(images) - 1 != len(homographies):
raise ValueError("The number of homography matrices must be one less than the number of images.")
# Warp all images except the first one using their corresponding homography matrices
warped_images = [images[0]] + [warp_perspective(img, H, True) for img, H in zip(images[1:], homographies)]
# Compute the size of the merged image
min_x, min_y, max_x, max_y = 0, 0, 0, 0
for img in warped_images:
h, w = img.shape[:2]
corners = np.array([[0, 0, 1], [h, 0, 1], [0, w, 1], [h, w, 1]])
min_x = min(min_x, corners[:, 1].min())
min_y = min(min_y, corners[:, 0].min())
max_x = max(max_x, corners[:, 1].max())
max_y = max(max_y, corners[:, 0].max())
print(corners, min_x, min_y, max_x, max_y)
merged_height = int(np.ceil(max_y - min_y))
merged_width = int(np.ceil(max_x - min_x))
# Initialize the merged image with zeros
merged_image = np.zeros((merged_height, merged_width, 3), dtype=np.uint8)
print(merged_image.shape)
# Merge the warped images by overlaying them on the merged image
for img in warped_images:
mask = np.all(img == 0, axis=-1)
y_offset, x_offset = max(0, -min_y), max(0, -min_x)
merged_image_slice = merged_image[y_offset:y_offset + img.shape[0], x_offset:x_offset + img.shape[1]]
merged_image_slice[~mask] = img[~mask]
print(merged_image.shape)
return merged_image"
|
c0b9e7e97d591397d7f1f2fcd06aed77
|
{
"intermediate": 0.5302164554595947,
"beginner": 0.31321874260902405,
"expert": 0.1565648317337036
}
|
2,604
|
I have an image warp code; however, this warp code takes a target size to set itself. I don't know the target size beforehand; can we not calculate the target size by applying the homography to the source size somehow? Here is my code: "def warp_perspective(img, H, target_shape, reverse=False):
h, w = target_shape
target_y, target_x = np.meshgrid(np.arange(h), np.arange(w))
target_coordinates = np.stack([target_x.ravel(), target_y.ravel(), np.ones(target_x.size)])
if reverse:
H = np.linalg.inv(H)
source_coordinates = np.dot(np.linalg.inv(H), target_coordinates)
source_coordinates /= source_coordinates[2, :]
valid = np.logical_and(np.logical_and(0 <= source_coordinates[0, :], source_coordinates[0, :] < img.shape[1] - 1),
np.logical_and(0 <= source_coordinates[1, :], source_coordinates[1, :] < img.shape[0] - 1))
# Change the dtype to int instead of float64 to save memory
valid_source_coordinates = np.round(source_coordinates[:, valid].astype(np.float32)[:2]).astype(int)
valid_target_coordinates = target_coordinates[:, valid].astype(int)[:2]
valid_source_coordinates[0] = np.clip(valid_source_coordinates[0], 0, img.shape[1] - 1)
valid_source_coordinates[1] = np.clip(valid_source_coordinates[1], 0, img.shape[0] - 1)
valid_target_coordinates[0] = np.clip(valid_target_coordinates[0], 0, w - 1)
valid_target_coordinates[1] = np.clip(valid_target_coordinates[1], 0, h - 1)
warped_image = np.zeros((h, w, 3), dtype=np.uint8)
for i in range(3):
warped_image[..., i][valid_target_coordinates[1], valid_target_coordinates[0]] = img[..., i][valid_source_coordinates[1], valid_source_coordinates[0]]
return warped_image"
|
fae1cea6d9a7fe66b592c32394e54023
|
{
"intermediate": 0.49447116255760193,
"beginner": 0.2200154960155487,
"expert": 0.285513311624527
}
|
2,605
|
explain joi npm package
|
c7ba89e705e5e2e3ea8924ef84b63442
|
{
"intermediate": 0.45951834321022034,
"beginner": 0.3814285397529602,
"expert": 0.1590530425310135
}
|
2,606
|
what is the vba code to unhide a sheet and then to hide the sheet
|
693b9b717fc76a17c72e9a9705c727ff
|
{
"intermediate": 0.402515709400177,
"beginner": 0.11996820569038391,
"expert": 0.47751614451408386
}
|