| row_id (int64, 0 to 48.4k) | init_message (string, length 1 to 342k) | conversation_hash (string, length 32) | scores (dict) |
|---|---|---|---|
3,815
|
c++ u_char max bits
|
6d27b8e0b97714cd77d9a737b4a43000
|
{
"intermediate": 0.27564674615859985,
"beginner": 0.43836572766304016,
"expert": 0.28598752617836
}
|
3,816
|
In Perl, last takes a label or an expr. if it takes an expr, how does that work?
|
94627c4e34e5e6a030d2b0d63f089736
|
{
"intermediate": 0.39561015367507935,
"beginner": 0.36539122462272644,
"expert": 0.23899860680103302
}
|
3,817
|
last LABEL
last EXPR
last
The last command is like the break statement in C (as used in loops); it immediately exits the loop in question. If the LABEL is omitted, the command refers to the innermost enclosing loop. The last EXPR form, available starting in Perl 5.18.0, allows a label name to be computed at run time, and is otherwise identical to last LABEL. The continue block, if any, is not executed:
LINE: while (<STDIN>) {
last LINE if /^$/; # exit when done with header
#...
}
last cannot return a value from a block that typically returns a value, such as eval {}, sub {}, or do {}. It will perform its flow control behavior, which precludes any return value. It should not be used to exit a grep or map operation.
Note that a block by itself is semantically identical to a loop that executes once. Thus last can be used to effect an early exit out of such a block.
See also continue for an illustration of how last, next, and redo work.
Unlike most named operators, this has the same precedence as assignment. It is also exempt from the looks-like-a-function rule, so last ("foo")."bar" will cause "bar" to be part of the argument to last.
what is meant by EXPR in this context?
|
f0187cd268f059b69e4ae5004c4e02b8
|
{
"intermediate": 0.2652070224285126,
"beginner": 0.5366449952125549,
"expert": 0.1981479823589325
}
|
3,818
|
last LABEL
last EXPR
last
The last command is like the break statement in C (as used in loops); it immediately exits the loop in question. If the LABEL is omitted, the command refers to the innermost enclosing loop. The last EXPR form, available starting in Perl 5.18.0, allows a label name to be computed at run time, and is otherwise identical to last LABEL. The continue block, if any, is not executed:
LINE: while (<STDIN>) {
last LINE if /^$/; # exit when done with header
#...
}
last cannot return a value from a block that typically returns a value, such as eval {}, sub {}, or do {}. It will perform its flow control behavior, which precludes any return value. It should not be used to exit a grep or map operation.
Note that a block by itself is semantically identical to a loop that executes once. Thus last can be used to effect an early exit out of such a block.
See also continue for an illustration of how last, next, and redo work.
Unlike most named operators, this has the same precedence as assignment. It is also exempt from the looks-like-a-function rule, so last ("foo")."bar" will cause "bar" to be part of the argument to last.
what is meant by EXPR in this context? Give an example of an EXPR (as a single line) that would be used to determine the label
|
c51ed2c199c7f4b081e6b3e547fa5357
|
{
"intermediate": 0.261752724647522,
"beginner": 0.5478494167327881,
"expert": 0.19039791822433472
}
|
3,819
|
Can you use the same label name twice in the same Perl subroutine?
|
055d7551c1f132f5c2b833e788f74d65
|
{
"intermediate": 0.30804356932640076,
"beginner": 0.419729620218277,
"expert": 0.2722267806529999
}
|
3,820
|
Your task is to determine if the Spanish translation of an English sentence contains some translation errors and explain the errors, if any.
The source English sentence is delimited by triple backticks.
The Spanish translation is also delimited by triple backticks.
English sentence:
|
c711ecddf75bdf70c74b7646241fb1ba
|
{
"intermediate": 0.3618896007537842,
"beginner": 0.3315891921520233,
"expert": 0.3065212666988373
}
|
3,821
|
php fix bruh
Warning: Use of undefined constant name - assumed 'name' (this will throw an Error in a future version of PHP) in /storage/ssd4/325/18882325/public_html/my_websites/image_uploader/upload.php on line 12
Warning: Cannot modify header information - headers already sent by (output started at /storage/ssd4/325/18882325/public_html/my_websites/image_uploader/upload.php:6) in /storage/ssd4/325/18882325/public_html/my_websites/image_uploader/upload.php on line 21
|
cf87c4c77ebb186436c717938aa4b32f
|
{
"intermediate": 0.3766988217830658,
"beginner": 0.30188530683517456,
"expert": 0.32141590118408203
}
|
3,822
|
Add inheritance and method/class templates
to this code:
#include <iostream>
#include <fstream>
#include <exception>
#include <vector>
#include <string>
#include <cassert>
class FileException : public std::exception
{
virtual const char *what() const throw()
{
return "Помилка читання з файлу!";
}
} file_exception;
class Player
{
public:
std::string name;
int number;
int points;
int age;
// Constructor with arguments
Player(std::string n, int num, int p, int a) : name(n), number(num), points(p), age(a) {}
// const added
void display() const
{
std::cout << name << ", Гравець #" << number << ", Бали: " << points << ", Вік: " << age << std::endl;
}
};
class Team
{
public:
std::string name;
std::vector<Player> players;
Team() {} // default constructor
Team(std::string n) : name(n) {}
void add_player(Player player)
{
players.push_back(player);
}
void display_players() const
{
std::cout << "Гравці за " << name << ":" << std::endl;
for (const auto &player : players)
{
player.display();
}
}
};
class League
{
public:
std::string name;
std::vector<Team> teams;
League() {} // default constructor
League(std::string n) : name(n) {}
void add_team(Team team)
{
teams.push_back(team);
}
void display_teams() const
{
std::cout << "Команди ліги " << name << ":" << std::endl;
for (const auto &team : teams)
{
std::cout << team.name << std::endl;
}
}
};
class Championship
{
public:
std::string name;
std::vector<League> leagues;
Championship() {} // default constructor
Championship(std::string n) : name(n) {}
void add_league(League league)
{
leagues.push_back(league);
}
void display_leagues() const
{
std::cout << "Ліги чемпіонату " << name << ":" << std::endl;
for (const auto &league : leagues)
{
std::cout << league.name << std::endl;
}
}
};
void test()
{
// Create two teams
Team team1("DINAZ");
Team team2("Chicago Bulls");
// Add players to team1
team1.add_player(Player("Василь", 1, 20, 25));
team1.add_player(Player("Петро", 2, 15, 24));
team1.add_player(Player("Іван", 3, 30, 28));
// Add players to team2
team2.add_player(Player("Олег", 4, 10, 27));
team2.add_player(Player("Михайло", 5, 25, 26));
team2.add_player(Player("Тарас", 6, 18, 24));
// Create a league
League league1;
league1.name = "Ліга 1";
// Add teams to the league
league1.add_team(team1);
league1.add_team(team2);
// Create a second league
Team team3("LA Lakers");
Team team4("Golden State Warriors");
// Add players to team3
team3.add_player(Player("Леброн", 1, 30, 36));
team3.add_player(Player("Ентоні", 2, 25, 28));
team3.add_player(Player("Девіс", 3, 20, 28));
// Add players to team4
team4.add_player(Player("Каррі", 4, 35, 33));
team4.add_player(Player("Томпсон", 5, 20, 31));
team4.add_player(Player("Грін", 6, 15, 31));
League league2;
league2.name = "Ліга 2";
// Add teams to the second league
league2.add_team(team3);
league2.add_team(team4);
// Create a championship
Championship championship("Чемпіонат 2023");
// Add leagues to the championship
championship.add_league(league1);
championship.add_league(league2);
// Save test data to a file
std::ofstream outfile(team1.name + ".bin", std::ios::binary);
outfile.write(reinterpret_cast<char*>(&team1), sizeof(Team));
outfile.close();
// Test reading from the file
Team team_read;
std::ifstream infile(team1.name + ".bin", std::ios::binary);
infile.read(reinterpret_cast<char*>(&team_read), sizeof(Team));
infile.close();
// Verify that the object read back matches the original
assert(team1.name == team_read.name);
assert(team1.players.size() == team_read.players.size());
for (int i = 0; i < team1.players.size(); i++)
{
assert(team1.players[i].number == team_read.players[i].number);
assert(team1.players[i].points == team_read.players[i].points);
assert(team1.players[i].age == team_read.players[i].age);
}
// Test displaying the teams for each league
std::cout << "Оберіть лігу:" << std::endl;
for (int i = 0; i < championship.leagues.size(); i++)
{
std::cout << i + 1 << ". " << championship.leagues[i].name << std::endl;
}
int league_choice;
std::cin >> league_choice;
if (league_choice >= 1 && league_choice <= championship.leagues.size())
{
const auto& league = championship.leagues[league_choice - 1];
league.display_teams();
std::cout << "Оберіть команду:" << std::endl;
for (int i = 0; i < league.teams.size(); i++)
{
std::cout << i + 1 << ". " << league.teams[i].name << std::endl;
}
int team_choice;
std::cin >> team_choice;
if (team_choice >= 1 && team_choice <= league.teams.size())
{
const auto& team = league.teams[team_choice - 1];
team.display_players();
std::cout << "Оберіть гравця:" << std::endl;
for (int i = 0; i < team.players.size(); i++)
{
std::cout << i + 1 << ". Гравець #" << team.players[i].number << std::endl;
}
int player_choice;
std::cin >> player_choice;
if (player_choice >= 1 && player_choice <= team.players.size())
{
const auto& player = team.players[player_choice - 1];
std::cout << "Гравець #" << player.number << " має " << player.points << " очок і йому " << player.age << " років." << std::endl;
}
else
{
std::cout << "Неправильний вибір." << std::endl;
}
}
else
{
std::cout << "Неправильний вибір." << std::endl;
}
}
else
{
std::cout << "Неправильний вибір." << std::endl;
}
}
int main()
{
try
{
test();
}
catch(std::exception& e)
{
std::cout << e.what() << std::endl;
}
return 0;
}
|
34f22f0b4ba9b30464825864604b4c17
|
{
"intermediate": 0.3661386966705322,
"beginner": 0.5585511326789856,
"expert": 0.07531015574932098
}
|
3,823
|
how do I print out the table that I'm deleting from when I do a delete with a sql server query like this: DELETE FROM tblSomething?
|
2f18034a3b5704f659a0af65cde775c1
|
{
"intermediate": 0.6295741200447083,
"beginner": 0.16796858608722687,
"expert": 0.20245735347270966
}
|
3,824
|
The While loop gets its name from the way it works: While a condition is false, do some task. True or false?
|
bb1b0cdafb048329a7ce794bd793c099
|
{
"intermediate": 0.1452145278453827,
"beginner": 0.49578621983528137,
"expert": 0.35899925231933594
}
|
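The true-or-false question above has a definite answer: false. A while loop runs *while* its condition is true and stops as soon as the condition becomes false, as this minimal Python sketch shows:

```python
# A while loop repeats while its condition is TRUE,
# and stops as soon as the condition becomes false.
count = 0
while count < 3:  # the body runs while this comparison is true
    count += 1
print(count)  # 3: the loop stopped when "count < 3" became false
```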
3,825
|
why is the canvas not being created?
import * as THREE from 'three';
let width = window.innerWidth;
let height = window.innerHeight;
//Scene
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, width, height);
scene.add(camera);
//Meshes
const sphereGeometry = new THREE.SphereGeometry(2, 64, 64);
const sphereMaterial = new THREE.MeshBasicMaterial({color: 0xfffaaa});
const sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
scene.add(sphere);
//Renderer
const renderer = new THREE.WebGLRenderer({canvas});
renderer.setSize(width, height);
document.body.appendChild( renderer.domElement );
renderer.render(scene,camera);
|
be33d895d48b147cdf328ca5dedebf84
|
{
"intermediate": 0.3990030586719513,
"beginner": 0.3928634226322174,
"expert": 0.2081335335969925
}
|
3,826
|
I want to perform a lab on investigating ssh and telnet communications in the client end. Guide me with all the required commands in detail on how to perform the experiment.
|
1ff27007c330b40e7a6940accfb99417
|
{
"intermediate": 0.3152192533016205,
"beginner": 0.23653718829154968,
"expert": 0.44824352860450745
}
|
3,827
|
javascript unroll 2d array
|
844dda745ae9075e75cc1221c7598344
|
{
"intermediate": 0.36623820662498474,
"beginner": 0.31102752685546875,
"expert": 0.32273420691490173
}
|
3,828
|
Can I have different VBA code that does what this code is supposed to do?
Unfortunately, the code does not update the values in column 11 when I enter a value in column 1.
Private Sub Worksheet_Change(ByVal Target As Range)
If Not Application.CalculationState = xlDone Then Exit Sub
If Target.Column = 1 Then
Application.EnableEvents = False
Application.ScreenUpdating = False
UpdateColumn11
Application.EnableEvents = True
Application.ScreenUpdating = True
End If
TimeStampSimple Target, "G2", "J", "mm/dd/yyyy hh:mm:ss", False
End Sub
Sub UpdateColumn11()
Dim lastRow As Long
With Worksheets("PREQUEST")
' Find the last row with data
lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row
' Fill in column 11 with the VLOOKUP results
.Range("K2:K" & lastRow).FormulaR1C1 = "=IF(RC[-10]<>"",VLOOKUP(RC[-10],LIST!R2C1:R54C3,3,FALSE),"""")"
End With
End Sub
|
6f0e4ff133cae380a31b392355a17945
|
{
"intermediate": 0.6207972168922424,
"beginner": 0.24802716076374054,
"expert": 0.1311757117509842
}
|
3,829
|
why doesn't this code work on unity {public class BlockGenerator : MonoBehaviour
{
public GameObject[] blockPrefabs;
public float frequency = 0.5f;
public float amplitude = 0.5f;
void Start()
{
Terrain terrain = GetComponent<Terrain>();
float[,] heights = new float[terrain.terrainData.heightmapWidth,terrain.terrainData.heightmapHeight];
for (int x = 0; x < heights.GetLength(0); x++)
{
for (int y = 0; y < heights.GetLength(1); y++)
{
float noise = Mathf.PerlinNoise(x * frequency, y * frequency) * amplitude;
heights[x, y] += noise;
if (heights[x, y] > 0.5f) // generate a block at this position
{
Vector3 position = new Vector3(x, heights[x, y] * terrain.terrainData.size.y, y);
GameObject block = Instantiate(blockPrefabs[Random.Range(0, blockPrefabs.Length)], position, Quaternion.identity);
block.transform.SetParent(transform);
}
}
}
terrain.terrainData.SetHeights(0, 0, heights);
}
}
// Start is called before the first frame update
void Start()
{
}
|
bf6c3fa3357c9624809810391b4716f3
|
{
"intermediate": 0.5129690170288086,
"beginner": 0.28006988763809204,
"expert": 0.20696111023426056
}
|
3,830
|
In spring boot, I have enabled h2 database. When I access the h2 database locally, I see databasechangelog table, where is that coming from and what is its purpose?
|
5d2eccbc7d43639376afe523cdf194f9
|
{
"intermediate": 0.4374887943267822,
"beginner": 0.22396999597549438,
"expert": 0.3385412395000458
}
|
3,831
|
Create a program that takes (at least) three command line arguments. The first two will be integers and the third will be a float.
On one line, separated by spaces, print the sum of the first two arguments, the product of the first and third arguments, the first argument modulo the second, and the integer quotient of the third by the first.
Add 1 to all three arguments.
On a new line, print the first argument bitwise right shift 3, the second argument divided by 2 (not integer division), and the bitwise OR of the first and second arguments. (all separated by spaces.)
On the last line, print the sum of the first argument (after the addition) and the total number of arguments, excluding the program name.
|
e42919a3c33eaac5f34a6a17a981b6bf
|
{
"intermediate": 0.3541770875453949,
"beginner": 0.335688978433609,
"expert": 0.3101339638233185
}
|
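The exercise above does not name a language; one possible Python sketch (Python is an assumption here, and `int(c // a)` is one reading of "integer quotient of the third by the first"):

```python
import sys

def main(argv):
    # The first two arguments are integers, the third is a float.
    a, b = int(argv[1]), int(argv[2])
    c = float(argv[3])
    # Line 1: sum, product of first and third, first modulo second,
    # and the integer quotient of the third by the first.
    print(a + b, a * c, a % b, int(c // a))
    # Add 1 to all three arguments.
    a, b, c = a + 1, b + 1, c + 1
    # Line 2: right shift by 3, true division by 2, bitwise OR.
    print(a >> 3, b / 2, a | b)
    # Line 3: first argument plus the argument count (program name excluded).
    print(a + len(argv) - 1)

if __name__ == "__main__" and len(sys.argv) >= 4:
    main(sys.argv)
```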
3,832
|
The following function:
void Menu::item::winAppendMenu(HMENU handle) {
wchar_t* buffer = NULL;
if (type == separator_type_id) {
AppendMenu(handle, MF_SEPARATOR, 0, NULL);
}
else if (type == vseparator_type_id) {
AppendMenu(handle, MF_MENUBREAK, 0, NULL);
}
else if (type == submenu_type_id) {
if (winMenu != NULL) {
LPWSTR str = getWString(winConstructMenuText(), L"", buffer);
if (wcscmp(str, L""))
AppendMenuW(handle, MF_POPUP | MF_STRING, (uintptr_t)winMenu, str);
else
AppendMenu(handle, MF_POPUP | MF_STRING, (uintptr_t)winMenu, winConstructMenuText().c_str());
}
}
else if (type == item_type_id) {
unsigned int attr = MF_STRING;
attr |= (status.checked) ? MF_CHECKED : MF_UNCHECKED;
attr |= (status.enabled) ? MF_ENABLED : (MF_DISABLED | MF_GRAYED);
LPWSTR str = getWString(winConstructMenuText(), L"", buffer);
if (wcscmp(str, L""))
AppendMenuW(handle, attr, (uintptr_t)(master_id + winMenuMinimumID), str);
else
AppendMenu(handle, attr, (uintptr_t)(master_id + winMenuMinimumID), winConstructMenuText().c_str());
}
if (buffer != NULL) {delete[] buffer;buffer = NULL;}
}
produces a compiler warning of "ISO C++ forbids converting a string constant to 'wchar_t*' [-Wwrite-strings]"
for the L"" part of the line LPWSTR str = getWString(winConstructMenuText(), L"", buffer);
Can you explain this warning, and recommend a way to fix the code so the warning no longer happens?
|
74a0b1e9da51d03def2ef4c4b4acf7dc
|
{
"intermediate": 0.5187809467315674,
"beginner": 0.34197089076042175,
"expert": 0.13924814760684967
}
|
3,833
|
I'm getting a strange error: I get a blank response when querying my database in Room on Android.
|
32e4d7ca03505ff024a122cdb92d713a
|
{
"intermediate": 0.3586679697036743,
"beginner": 0.42766010761260986,
"expert": 0.21367192268371582
}
|
3,834
|
powershell pull all url links from a web page
|
2237f35f4f2dac2ba24fce7fa26b1f56
|
{
"intermediate": 0.40354427695274353,
"beginner": 0.19540993869304657,
"expert": 0.40104570984840393
}
|
3,835
|
powershell pull all url links from a web page
|
fe709dcfa7e0e55a4338000f5f6c3c7c
|
{
"intermediate": 0.40354427695274353,
"beginner": 0.19540993869304657,
"expert": 0.40104570984840393
}
|
3,836
|
What does this error mean: No value passed for parameter 'message'
|
95a84fc17d2bfb523f2f51949e417714
|
{
"intermediate": 0.3250727653503418,
"beginner": 0.33828219771385193,
"expert": 0.3366449773311615
}
|
3,837
|
powershell command to connect to mfp and clear print queue
|
eb4519a216e368477776d714179e8950
|
{
"intermediate": 0.5449298620223999,
"beginner": 0.14721250534057617,
"expert": 0.3078576326370239
}
|
3,838
|
static function sendNotification(EntityManagerInterface $entityManager, ?string $title = null, ?string $message = null, ?string $link = null, ?array $specificUsersId = null, bool $verifySubscriptionValidity = false): ?array
{
$pushNotificationRepository = $entityManager->getRepository(PushNotification::class);
$now = new \DateTime();
$notifiedUsersId = [];
$webPush = new WebPush(['VAPID' => self::VAPID]);
$payload = json_encode([
'title' => $title,
'message' => $message,
'link' => $link,
'date' => $now->getTimestamp() * 1000,
'verifySubscriptionValidity' => $verifySubscriptionValidity
]);
$newHistoryNotification = new HistoryNotification();
$newHistoryNotification
->setPayload($payload)
->setCreatedAt($now)
;
$entityManager->persist($newHistoryNotification);
$entityManager->flush();
if ($specificUsersId === null) {
$subscriptions = $pushNotificationRepository->findAll();
} else {
$subscriptions = $pushNotificationRepository->findBy(['user' => $specificUsersId]);
}
foreach ($subscriptions as $subscriptionData) {
$subscription = Subscription::create(json_decode($subscriptionData->getSubscription(), true));
$webPush->queueNotification($subscription, $payload);
}
foreach ($webPush->flush() as $index => $report) {
$statusCode = $report->getResponse()->getStatusCode();
if ($statusCode === 404 || $statusCode === 410) {
$entityManager->remove($subscriptions[$index]);
} else if ($statusCode === 201 && $specificUsersId !== null) {
$notifiedUsersId[] = $subscriptions[$index]->getUser()->getId();
}
}
$entityManager->flush();
if ($specificUsersId !== null) {
$usersNotNotifiedIds = array_diff($specificUsersId, $notifiedUsersId);
return $usersNotNotifiedIds === [] ? null : $usersNotNotifiedIds;
} else {
return null;
}
}
static function historisation(User $user, UserHistoricalAccessRepository $userHistoricalAccess, EntityManager $entityManager, object $service, ?array $moreInfos = null): bool
{
$now = new \DateTime();
$data = null;
switch (get_class($service)) {
case Fichiers::class:
$data = $userHistoricalAccess->getHistoricalfileAccess($now, $service, $user);
break;
default:
return false;
break;
}
if ($data != null) {
$nbAccess = $data[0]->getnbAccess();
$data[0]->setnbAccess($nbAccess + 1)
->setUpdatedAt($now);
$entityManager->persist($data[0]);
$entityManager->flush();
return true;
} else { // Otherwise, create a new entry
switch (get_class($service)) {
case Fichiers::class:
$userHistoricalAccess->addHistoricAccess($user, $entityManager, $service, null, null, null, null, null,);
break;
default:
return false;
break;
}
return true;
}
return false;
}
Symfony: I have a sendNotification function that uses Web Push to send notifications.
I would like to set things up so that, after a user consults a doc, a fault code, or a probe value, the user receives a notification inviting them to rate the service rendered:
10 minutes after the access is logged (historisation), but no more than once per week.
|
706004a5a8d3b95371f1a5dd53d07756
|
{
"intermediate": 0.27043163776397705,
"beginner": 0.46557387709617615,
"expert": 0.2639944851398468
}
|
3,839
|
Rewrite this as a junior-level Python programmer would, without changing the output or results:
import sys
def main(args):
arg_one = int(args[1])
arg_two = int(args[2])
arg_three = float(args[3])
print(arg_one + arg_two, "{:.2f}".format(arg_one * arg_three), arg_one % arg_two, int(arg_one / arg_three))
arg_one += 1
arg_two += 1
arg_three += 1
print(arg_one << 3, arg_two / 2, arg_one | arg_two)
print(arg_one + (len(args) - 1))
if __name__ == "__main__":
args = sys.argv
main(args)
|
dae1511c70eac00fdde1dd2fc75299ab
|
{
"intermediate": 0.19908471405506134,
"beginner": 0.6692285537719727,
"expert": 0.1316867619752884
}
|
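One sketch of what the request above might produce: the same program spelled out step by step in a more verbose, junior style, with the markdown-damaged quotes and `__name__` guard restored. The exact style is a judgment call, not a reference answer from the dataset:

```python
import sys

def main(args):
    # Read each command line argument into its own variable.
    first_number = int(args[1])
    second_number = int(args[2])
    third_number = float(args[3])

    # Work out each value separately before printing.
    total = first_number + second_number
    product = "{:.2f}".format(first_number * third_number)
    remainder = first_number % second_number
    quotient = int(first_number / third_number)
    print(total, product, remainder, quotient)

    # Add 1 to every argument.
    first_number = first_number + 1
    second_number = second_number + 1
    third_number = third_number + 1

    print(first_number << 3, second_number / 2, first_number | second_number)
    print(first_number + (len(args) - 1))

if __name__ == "__main__" and len(sys.argv) >= 4:
    main(sys.argv)
```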
3,840
|
Create a program that takes one integer command line argument.
Ensure the argument is an integer, if not the program doesn't do anything.
|
da8e18ac51013179e359d07026f0588b
|
{
"intermediate": 0.4095665514469147,
"beginner": 0.1891845315694809,
"expert": 0.401248961687088
}
|
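The validation exercise above (repeated later as row 3,851) names no language; a minimal Python sketch, assuming "doesn't do anything" means exiting silently on bad input:

```python
import sys

def parse_int(text):
    """Return text as an int, or None if it is not an integer."""
    try:
        return int(text)
    except ValueError:
        return None

def main(argv):
    # Silently do nothing unless exactly one integer argument was given.
    if len(argv) != 2:
        return
    value = parse_int(argv[1])
    if value is None:
        return
    print(value)

if __name__ == "__main__":
    main(sys.argv)
```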
3,841
|
java code to detect all drives on the local computer, index all files , save this index on file and use it for search files to not search all files again, update the file by monitoring file changes on all drives
|
b2a3a8b5f91441afd03df1910a853550
|
{
"intermediate": 0.588709831237793,
"beginner": 0.1178111881017685,
"expert": 0.29347899556159973
}
|
3,842
|
c++ char max value
|
95f477b9819db9b56e540463941efa71
|
{
"intermediate": 0.29868826270103455,
"beginner": 0.3784053921699524,
"expert": 0.3229062855243683
}
|
3,843
|
act as a python expert.
|
f6e4704a23cd5e451bcf17193bd11149
|
{
"intermediate": 0.28574973344802856,
"beginner": 0.271436870098114,
"expert": 0.44281336665153503
}
|
3,844
|
Give me the code to use an encoder on Arduino.
|
cf6c985279d1be2fa4dfe0c0e479da98
|
{
"intermediate": 0.5148411989212036,
"beginner": 0.11537504941225052,
"expert": 0.3697836995124817
}
|
3,845
|
java code to select 10 pixels randomly from a 32 x 32 pixel source jpg , dont alter the source jpg physically but use each of the RGB values by altering them randomly by 50% of their original values and then use these 10 RGB values to alter 10 randomly selected pixels of a 32 x 32 target-jpg also by 50% of their original values
|
7d165521b18b6c8442ca55ac813a06ae
|
{
"intermediate": 0.47196316719055176,
"beginner": 0.1360606700181961,
"expert": 0.39197614789009094
}
|
3,846
|
you are a python expert.
|
89b943406ae35c6d1ed063177d08d02d
|
{
"intermediate": 0.25182491540908813,
"beginner": 0.26707717776298523,
"expert": 0.48109784722328186
}
|
3,847
|
what is the difference between this website and chat gpt?
|
594bf6ebf2ff1a4c3b9f892ced9b5272
|
{
"intermediate": 0.2928158938884735,
"beginner": 0.29418474435806274,
"expert": 0.4129994213581085
}
|
3,848
|
If I have gdb attached to a process, how can I make the process ignore the TERM signal
|
306244ef2bb6d8276bb45ee0ec72e38c
|
{
"intermediate": 0.5229775309562683,
"beginner": 0.1518596112728119,
"expert": 0.3251628577709198
}
|
3,849
|
Write a web client with a socket connection to https://ad4d9241-5819-42af-9214-891ffc1b9182.id.repl.co:9000 that emits send_msg and receives new_msg
|
a65e7549334435d3d7899ea3e24c4399
|
{
"intermediate": 0.3755383789539337,
"beginner": 0.24587443470954895,
"expert": 0.3785872757434845
}
|
3,850
|
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Data.OleDb;
using static System.Windows.Forms.VisualStyles.VisualStyleElement.ListView;
using System.Xml.Linq;
namespace DForm1
{
public partial class AdminForm : Form
{
public AdminForm()
{
}
private void dataGridView1_CellContentClick(object sender, DataGridViewCellEventArgs e)
{
}
private void AdminForm_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
}
private void button2_Click(object sender, EventArgs e)
{
}
private void button3_Click(object sender, EventArgs e)
{
}
private void button4_Click(object sender, EventArgs e)
{
}
}
}
This is my code. The private void dataGridView1_CellContentClick(object sender, DataGridViewCellEventArgs e) is connected to the database in Microsoft Access manually. I want private void button1_Click(object sender, EventArgs e) to add another row of information to the datagridview which when saved by pressing private void button4_Click(object sender, EventArgs e) will also update in the database in microsoft access.
Please give me the code for this function to work
|
a2a52c836ce87f2f2364b1b89179b65d
|
{
"intermediate": 0.4469337463378906,
"beginner": 0.36976152658462524,
"expert": 0.18330472707748413
}
|
3,851
|
Create a program that takes one integer command line argument.
Ensure the argument is an integer, if not the program doesn't do anything.
|
8943f128c6925a76fbf3ebd412380961
|
{
"intermediate": 0.4095665514469147,
"beginner": 0.1891845315694809,
"expert": 0.401248961687088
}
|
3,852
|
What is a good way to write a Material system for a game engine in C++? Any mesh should be able to have a material, which is either a single colour or a texture, and has a shader attached to it.
|
f3afd7be5a51cb61ca864cf475abc98a
|
{
"intermediate": 0.47288578748703003,
"beginner": 0.20288684964179993,
"expert": 0.32422736287117004
}
|
3,853
|
hi python professional. Create a program that has a function called factorial that takes one integer argument. Ensure the argument is a non-negative integer; if a negative number is entered, the program simply passes and does not print anything. Calculate the factorial of the number, which is the product of all the positive whole numbers up to and including it. For example, 4 factorial (usually written 4!) is 4x3x2x1 = 24. The function should return a string. Ex: "4x3x2x1 = 24"
|
24ea6792bb52fa4ecfed6e09ecb6f185
|
{
"intermediate": 0.3233436942100525,
"beginner": 0.2360253781080246,
"expert": 0.4406309723854065
}
|
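One hedged reading of the factorial prompt above, returning None for negative input so the caller can simply pass, and treating 0! as 1. Both choices are interpretations, not part of the prompt:

```python
def factorial(number):
    # Only non-negative integers are valid; otherwise return nothing.
    if not isinstance(number, int) or number < 0:
        return None
    product = 1
    parts = []
    # Build the "4x3x2x1" text and the numeric product together.
    for n in range(number, 0, -1):
        product *= n
        parts.append(str(n))
    if not parts:  # 0! is an empty product, conventionally 1
        parts = ["1"]
    return "x".join(parts) + " = " + str(product)

print(factorial(4))  # 4x3x2x1 = 24
```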
3,854
|
Write merge sort in C++
|
8686cfbdfa967442ba8f4b6faf67a834
|
{
"intermediate": 0.3196406662464142,
"beginner": 0.2578427493572235,
"expert": 0.4225165843963623
}
|
3,855
|
nombresDeUsuarios :: RedSocial -> [String]
nombresDeUsuarios ([], _, _) = []
nombresDeUsuarios (u:us, r, q) = (snd u) : nombresDeUsuarios (us, r, q) -- the variables r and q are irrelevant and unused
How do I modify this function so it returns the list without repeated elements?
|
fc8e42a97d3f2b5bf6b22901dea29a21
|
{
"intermediate": 0.2684512138366699,
"beginner": 0.6083526015281677,
"expert": 0.12319625169038773
}
|
3,856
|
In Greasemonkey-style scripts, is there a way to store data in memory (non-persistently)?
|
669e8029dc6262a210d83fecfcd43166
|
{
"intermediate": 0.6154494881629944,
"beginner": 0.1250932514667511,
"expert": 0.25945723056793213
}
|
3,857
|
hi
|
46851b140620e8e0814f386b2cf61393
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
3,858
|
I'm making an app about nation data. Can you suggest any app name that starts with the letter 'O'?
|
7b8ca80b72105752e742fb0449cecd8e
|
{
"intermediate": 0.34433677792549133,
"beginner": 0.27989569306373596,
"expert": 0.3757675588130951
}
|
3,859
|
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Xml.Linq;
using System.Data.OleDb;
using System.Text.RegularExpressions;
using static System.Net.Mime.MediaTypeNames;
namespace WindowsFormsApp1
{
public partial class Register : Form
{
public Register()
{
InitializeComponent();
}
//connect to database
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=usersdb.mdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
private void RegisterButton_Click(object sender, EventArgs e)
{
string name = NameTextBox.Text;
string email = EmailTextBox.Text;
string address = AddressTextBox.Text;
string phoneNumber = PhoneNumberTextBox.Text;
string username = UsernameTextBox.Text;
string password = PasswordTextBox.Text;
string confirmpassword = ConPasswordTextBox.Text;
// Check if registration information is valid
bool isValid = ValidateRegistration(name, email, address, phoneNumber, username, password);
if (isValid)
{
con.Open();
bool usernameExists = CheckUsernameExists(UsernameTextBox, con);
if (usernameExists)
{
MessageBox.Show("Username already exists in database.");
}
else
{
USERS newUser = new USERS(name, username, password, email, address, phoneNumber ,"Customer");
{
};
InsertMyClass(newUser);
// Successful registration, do something here
MessageBox.Show("Registration successful!");
//emptying the fields
NameTextBox.Text = "";
EmailTextBox.Text = "";
AddressTextBox.Text = "";
PhoneNumberTextBox.Text = "";
UsernameTextBox.Text = "";
PasswordTextBox.Text = "";
ConPasswordTextBox.Text = "";
this.Hide();
new Login().Show();
}
con.Close();
}
else if(password != confirmpassword)
{
MessageBox.Show("Passwords do not match.");
}
else if(password.Length < 8 || password.Length > 16)
{
MessageBox.Show("Your password must be between 8 and 16 characters long.");
}
else if (!password.Any(char.IsLower) || !password.Any(char.IsUpper))
{
MessageBox.Show("Your password must contain at least one lowercase and one uppercase letter.");
}
else if(!Regex.IsMatch(username, "^[a-zA-Z0-9]*$"))
{
MessageBox.Show("Username must only contain letters and numbers.");
}
else
{
// Failed registration, display error message
MessageBox.Show("All required fields must be filled.");
}
}
public void InsertMyClass(USERS newUser)
{
using (OleDbConnection connection = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=usersdb.mdb"))
{
connection.Open();
using (OleDbCommand command = new OleDbCommand("INSERT INTO tbl_dbusers (username, password, FullName, Email, Address, PhoneNumber, UserType) \r\nVALUES (@username, @password, @name, @Email, @Address, @PhoneNumber, @UserType)\r\n", connection))
{
command.Parameters.AddWithValue("@username", newUser.Username);
command.Parameters.AddWithValue("@password", newUser.Password);
command.Parameters.AddWithValue("@Email", newUser.Email);
command.Parameters.AddWithValue("@Address", newUser.Address);
command.Parameters.AddWithValue("@PhoneNumber", newUser.PhoneNumber);
command.Parameters.AddWithValue("@UserType", newUser.UserType);
command.Parameters.AddWithValue("@FullName", newUser.Name);
command.ExecuteNonQuery();
}
}
}
//checks if username exists in the database
public bool CheckUsernameExists(TextBox UsernameTextBox, OleDbConnection connection)
{
string username = UsernameTextBox.Text.Trim();
string query = "SELECT COUNT(*) FROM tbl_dbusers WHERE username = @username";
OleDbCommand command = new OleDbCommand(query, connection);
command.Parameters.AddWithValue("@username", username);
int count = (int)command.ExecuteScalar();
return count > 0;
}
private bool ValidateRegistration(string name, string email, string address, string phoneNumber, string username, string password)
{
//checks if all fields are not empty, password match and meet conditions
return !string.IsNullOrEmpty(name) &&
!string.IsNullOrEmpty(email) &&
!string.IsNullOrEmpty(address) &&
!string.IsNullOrEmpty(phoneNumber) &&
!string.IsNullOrEmpty(username) &&
!string.IsNullOrEmpty(password) &&
password == ConPasswordTextBox.Text &&
password.Length > 8 && password.Length < 16 &&
password.Any(char.IsLower) &&
password.Any(char.IsUpper) &&
//check if username contains only letters and numbers
Regex.IsMatch(username, "^[a-zA-Z0-9]*$");
}
}
}
Check for any errors and fix them; I am getting a syntax error in the INSERT INTO statement for this line: command.ExecuteNonQuery();
|
7e0dd363cd84d733c554a9075bbf2243
|
{
"intermediate": 0.44607120752334595,
"beginner": 0.3860991597175598,
"expert": 0.167829692363739
}
|
3,860
|
I have a bunch of lines in a SQL Server script that do something like
DELETE FROM tblSomething OUTPUT @@ROWCOUNT AS 'Test'
but I'm getting weird results: sometimes it shows 50 "1"s as records and sometimes shows "50" as a record. Why is it doing that?
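The two different shapes likely come from where the count is evaluated. In T-SQL, the OUTPUT clause emits one result row per affected row, and @@ROWCOUNT referenced inside it still holds the count from the *previous* statement, so deleting 50 rows prints fifty copies of that stale value; a separate SELECT @@ROWCOUNT after the DELETE yields a single row containing 50. A sketch of the reliable pattern, using sqlite3 as a stand-in (T-SQL's OUTPUT clause isn't available here, but the read-the-count-afterwards idea is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblSomething (id INTEGER)")
conn.executemany("INSERT INTO tblSomething VALUES (?)", [(i,) for i in range(50)])

# Run the DELETE first, then read the affected-row count afterwards
# (SELECT @@ROWCOUNT in SQL Server, cursor.rowcount in the Python DB-API).
cur = conn.execute("DELETE FROM tblSomething WHERE id >= 0")
print(cur.rowcount)  # 50 -- a single scalar count, not fifty repeated rows
```

In SQL Server the equivalent is simply `DELETE FROM tblSomething; SELECT @@ROWCOUNT AS 'Test';`.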
|
04e7e64ca6aa8fd6c2dc6ea4f0735b88
|
{
"intermediate": 0.5045166611671448,
"beginner": 0.3432447612285614,
"expert": 0.1522386074066162
}
|
3,861
|
generate daily price data for the bitcoin price from 1/1/23 to 1/30/23
the data should look like this example:
date price
1/1/23 100
1/2/23 112
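Since this asks for generated data, here is a minimal sketch that produces 30 rows in exactly that two-column shape; the prices are a seeded random walk starting at 100, purely illustrative placeholders rather than actual bitcoin quotes:

```python
import random
from datetime import date, timedelta

random.seed(0)  # reproducible placeholder prices
start = date(2023, 1, 1)
price = 100.0
rows = []
for i in range(30):  # 1/1/23 through 1/30/23
    d = start + timedelta(days=i)
    rows.append((f"{d.month}/{d.day}/{d.year % 100}", round(price, 2)))
    price *= 1 + random.uniform(-0.05, 0.05)  # +/-5% daily random walk

print("date price")
for d, p in rows:
    print(d, p)
```

Swap the random walk for an exchange API call (e.g. a historical-price endpoint) if real quotes are needed.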
|
6a92a7a6b59ae8020d9069fee253fdae
|
{
"intermediate": 0.46325942873954773,
"beginner": 0.12652264535427094,
"expert": 0.4102179706096649
}
|
3,862
|
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
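This error means the bytes being decoded are not valid UTF-8 at all; a 0xFF first byte is typical of a UTF-16-LE byte-order mark (or of a binary file, such as an image, opened in text mode). A small sketch that reproduces the error and shows the two usual fixes, decoding with the real codec or replacing undecodable bytes:

```python
raw = b"\xff\xfe" + "hi".encode("utf-16-le")  # UTF-16-LE text with its BOM

# Decoding as UTF-8 reproduces the reported error:
try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

print(raw.decode("utf-16"))                   # decode with the real encoding -> 'hi'
print(raw.decode("utf-8", errors="replace"))  # or keep going, replacing bad bytes
```

For files, the same fix is `open(path, encoding="utf-16")` (or `"rb"` mode if the file is actually binary).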
|
f8ead45549a32466662dc7b2effd3718
|
{
"intermediate": 0.33375340700149536,
"beginner": 0.20704074203968048,
"expert": 0.45920589566230774
}
|
3,863
|
I want you to create the ultimate windows bat file for IT administrators. It should be user friendly, menu orientated, and utilize all windows systems commands in an efficient simple interface. provide all the code, with a help menu
|
e346890d3c10e46a1e16e7bd51aad68c
|
{
"intermediate": 0.40708446502685547,
"beginner": 0.3395032584667206,
"expert": 0.25341230630874634
}
|
3,864
|
I am using Visual Studio Community, Windows Forms, .NET Framework. I created a class with the fields:
Name
Address
Email
PhoneNumber
Username
Password
UserType
Now I am getting input from textboxes in a form. I want to create an object of that class and, after the object is created, insert the object into an Access database.
|
bfac6edce6ace2b95e60f13927b05901
|
{
"intermediate": 0.4602043032646179,
"beginner": 0.338993102312088,
"expert": 0.20080260932445526
}
|
3,865
|
Write code that, depending on the HTML tag, displays the appropriate text in a tkinter textbox. Here is an example of a function for <em></em>:
import re
from decimal import Decimal
from tkinter import Tk, Text, END, font

root = Tk()  # root window (referenced below but never created in the original snippet)

def em_points(text):
suppat = re.compile(r'<sup>\w*</sup>')
suppatiter = suppat.findall(text)
if suppatiter:
for suptag in suppatiter:
text = "".join(text.split(suptag))
finds = list()
if "<em>" in text:
find_points = list()
emcount = text.count("<em>")
for _ in range(emcount):
find_open = text.find("<em>")
text = text[:find_open] + text[find_open + 4:]
find_close = text.find("</em>")
text = text[:find_close] + text[find_close + 5:]
find_points.append([find_open, find_close])
for points in find_points:
finds.append(text[points[0]: points[1]])
return [text, finds]
def italicize_text(text_box, finds):
italics_font = font.Font(text_box, text_box.cget("font"))
italics_font.configure(slant="italic")
text_box.tag_configure("italics", font=italics_font)
text_in_box = text_box.get(1.0, END)
used_points = list()
for find in finds:
if find not in text_in_box:
raise RuntimeError(f"Could not find text to italicise in textbox:\n {find}\n {text_in_box}")
else:
start_point = text_in_box.find(find)
end_point = start_point + len(find)
found_at = [start_point, end_point]
if found_at in used_points:
while found_at in used_points:
reduced_text = text_in_box[end_point:]
start_point = end_point + reduced_text.find(find)
end_point = start_point + len(find)
found_at = [start_point, end_point]
used_points.append(found_at)
text_to_startpoint = text_in_box[:start_point]
text_to_endpoint = text_in_box[:end_point]
start_line = text_to_startpoint.count("\n") + 1
end_line = text_to_endpoint.count("\n") + 1
if "\n" in text_to_startpoint:
line_start_point = len(text_in_box[text_to_startpoint.rfind("\n") + 1: start_point])
else:
line_start_point = start_point
if "\n" in text_to_endpoint:
line_end_point = len(text_in_box[text_to_endpoint.rfind("\n") + 1: end_point])
else:
line_end_point = end_point
start_point = Decimal(f"{start_line}.{line_start_point}")
end_point = Decimal(f"{end_line}.{line_end_point}")
text_box.tag_add("italics", start_point, end_point)
text = 'abcd efg hijk\n<sup>lmn<em>"op qrs tu</em>v wx yz</sup>'
em_text = em_points(text)
clean_text = em_text[0]
em_list = em_text[1]
print(clean_text)
print(em_list)
text_box = Text(root, width=200, height=200, font=("Courier", 12))
text_box.grid(row=0,column=0,columnspan=2)
text_box.insert(1.0, clean_text)
italicize_text(text_box, em_list)
root.mainloop()
|
64a6a19d7963305638abff10a78c30c5
|
{
"intermediate": 0.2443622350692749,
"beginner": 0.6046851873397827,
"expert": 0.15095257759094238
}
|
3,866
|
How to make an offline browsable copy of https://www.hilti.ch/downloads?showAll=true&continue=true, including the downloads?
|
13705058cac733adc02d77db6eee9b73
|
{
"intermediate": 0.3457354009151459,
"beginner": 0.2390601634979248,
"expert": 0.4152044355869293
}
|
3,867
|
How to make an offline browsable copy of https://www.hilti.ch/downloads?showAll=true&continue=true, including the downloads?
|
5a8389009c19272236ab24a9a5bfafbd
|
{
"intermediate": 0.3457354009151459,
"beginner": 0.2390601634979248,
"expert": 0.4152044355869293
}
|
3,868
|
How to make an offline browsable copy of https://www.hilti.ch/downloads?showAll=true&continue=true, including the downloads?
|
014510269b3045362d8c8a7933ed6c4d
|
{
"intermediate": 0.3457354009151459,
"beginner": 0.2390601634979248,
"expert": 0.4152044355869293
}
|
3,869
|
How to make an offline browsable copy of https://www.hilti.ch/downloads?showAll=true&continue=true, including the downloads?
|
c026b8bebb563c95f1e678c0c0764669
|
{
"intermediate": 0.3457354009151459,
"beginner": 0.2390601634979248,
"expert": 0.4152044355869293
}
|
3,870
|
How to make an offline browsable copy of https://www.hilti.ch/downloads?showAll=true&continue=true, including the downloads?
|
196da080264e35d673b5acaeba547629
|
{
"intermediate": 0.3457354009151459,
"beginner": 0.2390601634979248,
"expert": 0.4152044355869293
}
|
3,871
|
please fix this issue, my code gives me this error from this line
Line: cmd.ExecuteNonQuery();
Error: System.Data.OleDb.OleDbException: 'No value given for one or more required parameters.'
Full code:
using System;
using System.Linq;
using System.Data.OleDb;
using System.Windows.Forms;
using System.Drawing;
using static System.Net.Mime.MediaTypeNames;
namespace ApplianceRental
{
public partial class RegistrationForm : Form
{
// Declare and initialize a new OleDbConnection object
private readonly OleDbConnection connection = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_users.mdb");
public RegistrationForm()
{
// Initialize UI components
InitializeComponent();
}
private void Register_Click(object sender, EventArgs e)
{
// Check for an empty username box
if (string.IsNullOrEmpty(textBox1.Text))
{
MessageBox.Show("Please enter a username.");
return;
}
// Check for an empty password box
if (string.IsNullOrEmpty(textBox2.Text))
{
MessageBox.Show("Please enter a password.");
return;
}
// Compare password boxes for a match
if (textBox2.Text != textBox3.Text)
{
MessageBox.Show("Passwords do not match.");
return;
}
// Check for an empty full name box
if (string.IsNullOrEmpty(textBox4.Text))
{
MessageBox.Show("Please enter your full name.");
return;
}
// Check for an empty email address box
if (string.IsNullOrEmpty(textBox5.Text))
{
MessageBox.Show("Please enter your email address.");
return;
}
// Check for an empty address box
if (string.IsNullOrEmpty(textBox6.Text))
{
MessageBox.Show("Please enter your address.");
return;
}
// Check for valid password length
if (textBox2.Text.Length < 8 || textBox2.Text.Length > 16)
{
MessageBox.Show("Password must be between 8 and 16 characters.");
return;
}
// Check for presence of both lowercase and uppercase characters
if (!textBox2.Text.Any(char.IsLower) || !textBox2.Text.Any(char.IsUpper))
{
MessageBox.Show("Password must contain at least one lowercase and one uppercase letter.");
return;
}
// Open the database connection
try
{
connection.Open();
}
catch (Exception ex)
{
MessageBox.Show("Error connecting to the database: " + ex.Message);
return;
}
// Insert the user's registration data into the database
using (var cmd = new OleDbCommand())
{
cmd.Connection = connection;
cmd.CommandText = "INSERT INTO tbl_users(username, [password], fullname, email, address, [User Type]) " +
"VALUES(@username, @password, @fullname, @email, @address, ‘Customer’)";
cmd.Parameters.AddWithValue("@username", textBox1.Text);
cmd.Parameters.AddWithValue("@password", textBox2.Text);
cmd.Parameters.AddWithValue("@fullname", textBox4.Text);
cmd.Parameters.AddWithValue("@email", textBox5.Text);
cmd.Parameters.AddWithValue("@address", textBox6.Text);
cmd.ExecuteNonQuery();
}
// Close the database connection
connection.Close();
// Inform the user that the registration was successful
MessageBox.Show("Registration successful!");
// Clear input fields
ClearFields();
// Hide this form and show the main form
this.Hide();
new Form1().Show();
}
// Clear input fields
private void ClearFields()
{
textBox1.Text = "";
textBox2.Text = "";
textBox3.Text = "";
textBox4.Text = "";
textBox5.Text = "";
textBox6.Text = "";
}
// Event handlers for text-changed events (placeholders)
private void textBox1_TextChanged(object sender, EventArgs e) { }
private void textBox2_TextChanged(object sender, EventArgs e) { }
private void textBox3_TextChanged(object sender, EventArgs e) { }
private void textBox4_TextChanged(object sender, EventArgs e) { }
private void textBox5_TextChanged(object sender, EventArgs e) { }
private void textBox6_TextChanged(object sender, EventArgs e) { }
}
}
|
f7be57b6f7e544cd51ef1ffb8471222c
|
{
"intermediate": 0.2986341416835785,
"beginner": 0.500408947467804,
"expert": 0.20095695555210114
}
|
3,872
|
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
import "@openzeppelin/contracts/utils/math/SafeMath.sol";
contract FarmingCropNFT is ERC721, Ownable {
using SafeMath for uint256;
// Define events
event CropIssued(uint256 cropID, string cropName, uint256 amount, uint256 value, address farmer);
event CropTransferred(address from, address to, uint256 cropID, uint256 value);
// Define struct for crop
struct Crop {
string cropName;
uint256 amount;
uint256 value;
address farmer;
bool exists;
string ipfsHash; // added ipfsHash for off-chain data
}
// Define mapping of crop IDs to crops
mapping(uint256 => Crop) public crops;
// Define counter for crop IDs
uint256 public cropIDCounter;
// Define variable for mint price
uint256 public mintPrice;
// Define constructor to initialize contract and ERC721 token
constructor() ERC721("Farming Crop NFT", "FARMCRP") {
// Set the mint price
mintPrice = 0.001 ether;
}
// Define function for farmer to issue NFT of crops to primary buyer
function issueCropNFT(
string memory _cropName,
uint256 _amount,
uint256 _value,
string memory _ipfsHash
) public payable returns (uint256) {
// Ensure minimum payment is met
require(msg.value >= mintPrice, "Insufficient payment");
// Increment crop ID counter
cropIDCounter++;
// Create new crop struct
Crop memory newCrop = Crop({
cropName: _cropName,
amount: _amount,
value: _value,
farmer: msg.sender,
exists: true,
ipfsHash: _ipfsHash // store the ipfsHash for off-chain data
});
// Add new crop to mapping
crops[cropIDCounter] = newCrop;
// Mint NFT to farmer
_mint(msg.sender, cropIDCounter);
// Emit event for new crop issuance
emit CropIssued(cropIDCounter, _cropName, _amount, _value, msg.sender);
// Return crop ID
return cropIDCounter;
}
// Override the transferFrom function with custom logic for primary and secondary buyers.
function transferFrom(address from, address to, uint256 tokenId) public override {
// Ensure crop exists
require(crops[tokenId].exists, "Crop does not exist");
// If the owner is farmer, primary buyer logic
if (ownerOf(tokenId) == crops[tokenId].farmer) {
// Primary buyer logic
// Here you can add conditions for primary buyer purchase
super.transferFrom(from, to, tokenId);
emit CropTransferred(from, to, tokenId, crops[tokenId].value);
} else {
// Secondary buyer logic
// Here you can add conditions for secondary buyer purchase on the marketplace
super.transferFrom(from, to, tokenId);
emit CropTransferred(from, to, tokenId, crops[tokenId].value);
}
}
// Define function for farmer to destroy NFT of crops when they stop farming
function destroyCropNFT(uint256 _cropID) public onlyOwner {
// Ensure crop exists and is owned by farmer
require(crops[_cropID].exists, "Crop does not exist");
require(ownerOf(_cropID) == crops[_cropID].farmer, "Crop is not owned by farmer");
// Burn NFT
_burn(_cropID);
// Emit event for crop destruction
emit CropTransferred(crops[_cropID].farmer, address(0), _cropID, crops[_cropID].value);
// Delete crop from mapping
delete crops[_cropID];
}
// Add a function to retrieve the IPFS hash
function getCropIPFSHash(uint256 _cropID) public view returns (string memory) {
require(crops[_cropID].exists, "Crop does not exist");
return crops[_cropID].ipfsHash;
}
}
give me reactjs code for the above solidity code for ui framework
|
5d70c7988996d4e81ec983f254428636
|
{
"intermediate": 0.3853713572025299,
"beginner": 0.38670891523361206,
"expert": 0.22791975736618042
}
|
3,873
|
Write me an Excel class that will read an excel file consisting of multiple columns and rows that only includes text and numbers without using any external dependencies
|
be535e1f4e3ba20f259e13ba2e131764
|
{
"intermediate": 0.48463863134384155,
"beginner": 0.20493046939373016,
"expert": 0.3104308843612671
}
|
3,874
|
What's the best way to tie a NiagaraComponent/NiagaraSystem to an ability in UE5 using GAS? For example, I use my ability and it triggers the Niagara effect. I want the damage to actually happen when the effect hits the target actor instead of doing it instantly. How do I achieve this?
|
859e814e1be3270ed24cb284ab0eef51
|
{
"intermediate": 0.5331124067306519,
"beginner": 0.19469818472862244,
"expert": 0.2721894383430481
}
|
3,875
|
I am building a 3d video game engine using Vulkan. The game will also use Bullet to handle game physics. What would be the best way to handle the model vertex information?
|
e6afae367dca9e7e2dd5bcae1eebfec4
|
{
"intermediate": 0.5068343281745911,
"beginner": 0.17009219527244568,
"expert": 0.32307344675064087
}
|
3,876
|
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Data.OleDb;
using static System.Windows.Forms.VisualStyles.VisualStyleElement.ListView;
using System.Xml.Linq;
using System.Diagnostics;
namespace DForm1
{
public partial class AdminForm : Form
{
public AdminForm()
{
InitializeComponent();
}
OleDbConnection con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=db_appliances.accdb");
OleDbCommand cmd = new OleDbCommand();
OleDbDataAdapter da = new OleDbDataAdapter();
private void dataGridView1_CellContentClick(object sender, DataGridViewCellEventArgs e)
{
}
private void AdminForm_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
}
private void button2_Click(object sender, EventArgs e)
{
}
private void button3_Click(object sender, EventArgs e)
{
}
private void button4_Click(object sender, EventArgs e)
{
}
}
}
This is my code. The dataGridView1 control is connected to the database in Microsoft Access manually. I want button1_Click to add another row of information to the DataGridView, which, when saved by pressing button4_Click, will also be updated in the Microsoft Access database.
Please give me the code for this function to work.
|
e65008e2b2f684947b1c78aec36f6db1
|
{
"intermediate": 0.45128700137138367,
"beginner": 0.3336457312107086,
"expert": 0.2150672823190689
}
|
3,877
|
You are given the results of 15 different experiments that were each designed to
measure tumor incidence in rats. The observed data, given in the file
data.final.2023.Q3.csv available on Canvas, consists of n_i = the number of rats in
each experiment i and y_i = the number of those rats that had tumors.
(a) Outline a hierarchical model that allows for an experiment-specific tumor
proportion q_i in experiment i but with sharing of information between experiments.
Give the data model and all necessary prior distributions.
(b) Calculate the joint posterior distribution for this hierarchical model and
simplify as much as possible. Derive 1. the conditional distribution of the
experiment-specific proportions given the global parameters and 2. the conditional
distribution of the global parameters given the experiment-specific proportions.
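For part (a), one standard specification for exactly this rat-tumor setting is the beta-binomial hierarchy; the hyperprior shown is the diffuse choice used in Gelman et al.'s *Bayesian Data Analysis* for this example, and any proper prior on the global parameters works the same way:

```latex
y_i \mid q_i \sim \operatorname{Binomial}(n_i,\, q_i), \qquad i = 1,\dots,15
q_i \mid \alpha, \beta \sim \operatorname{Beta}(\alpha, \beta) \quad \text{(independently across } i\text{)}
p(\alpha, \beta) \propto (\alpha + \beta)^{-5/2}
```

Conjugacy then gives, for part (b), q_i | α, β, y ~ Beta(α + y_i, β + n_i − y_i) independently, while p(α, β | q, y) ∝ p(α, β) ∏_i [Γ(α+β) / (Γ(α)Γ(β))] q_i^{α−1} (1−q_i)^{β−1}, which has no standard form and is typically handled on a grid or by Metropolis steps.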
|
2302bf43ba676b2554fe52779bc19f21
|
{
"intermediate": 0.32693156599998474,
"beginner": 0.29797956347465515,
"expert": 0.3750888705253601
}
|
3,878
|
create a website that would allow for easy editing of JSON data in the following format.
{"entries":{"0":{"uid":0,"key":["Raihan"],"keysecondary":[],"comment":"","content":"Raihan is a young man with beautiful teal eyes and black hair tied up with an undercut. Raihan is a male, has brown skin, and is 6’4\" tall.\nLikes: dragon pokemon, kids, fashion, selfies, attention, cooking\nPersonality: confident, charismatic, competitive, motherly","constant":false,"selective":false,"order":100,"position":0,"disable":false},"1":{"uid":1,"key":["Renge"],"keysecondary":[],"comment":"","content":"Renge is a teenage boy. He has black hair in a medium-length ponytail, gold eyes, and a mole on his right cheek. He wears a black tank top and short shorts. He is 5’6\" tall, slightly chubby, and has dark skin. Renge is a male.\nPersonality: bratty, cunning. Likes: drugs, alcohol, getting high, pokemon.","constant":false,"selective":false,"order":100,"position":0,"disable":false},"2":{"uid":2,"key":["Karamatsu","Kara"],"keysecondary":[],"comment":"","content":"Karamatsu is a young man with short black hair, brown eyes, thick thighs, big ass, soft body. He is the second oldest of sextuplets. \nPersonality: narcissistic, stupid, fashionable, caring, attention-seeking, overconfident, delusional, defenseless\nLikes: romance, attention, his brothers","constant":false,"selective":false,"order":100,"position":0,"disable":false}}}
|
29ceb942552af1f9a9a37c14b6ea3032
|
{
"intermediate": 0.3320307433605194,
"beginner": 0.43416157364845276,
"expert": 0.23380766808986664
}
|
3,879
|
Write a grammar G for the language L consisting of strings of 0's and 1's that are the binary
representation of odd integers greater than 4. For example 011 ∉ L, 0101 ∈ L, 101 ∈ L, 0110 ∉ L.
Draw parse trees for the strings 1011 and 1101.
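The strings in L are optional leading zeros followed by a significant part that begins with 1, ends with 1 (odd), and has at least one bit in between (value ≥ 5); one possible grammar is S → 0S | 1M1 with M → 0M | 1M | 0 | 1. The sketch below is only a membership check for that language (a regex stand-in, not the grammar itself), handy for sanity-testing whichever grammar you write:

```python
import re

# Optional leading zeros, then a significant part: 1, at least one middle
# bit, and a final 1 (so the value is odd and at least 101 = 5).
pattern = re.compile(r"0*1[01]+1")

def in_L(s: str) -> bool:
    return bool(pattern.fullmatch(s))

# Matches the examples given: 011 (=3) is out, 0101 and 101 (=5) are in,
# 0110 (=6, even) is out.
for s in ["011", "0101", "101", "0110", "1011", "1101"]:
    print(s, in_L(s), int(s, 2))
```

A brute-force comparison against the defining condition (odd and > 4) over all short strings confirms the pattern describes the same language.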
|
83ea7d48bd907fdd8f06d5ce575024f5
|
{
"intermediate": 0.3922809064388275,
"beginner": 0.33398035168647766,
"expert": 0.2737388014793396
}
|
3,880
|
How do I calculate distortion energy of a molecule in Gaussian?
|
f3033aa0fe108f509fc3d7af8e7e258d
|
{
"intermediate": 0.3309350311756134,
"beginner": 0.19683189690113068,
"expert": 0.47223302721977234
}
|
3,881
|
modify the following script to make it capable of editing the following format of JSON data.
<script>
const selectFormat = document.getElementById("selectFormat")
const spanFormatName = document.getElementById("spanFormatName")
const checkboxQuotes = document.getElementById("checkboxQuotes")
const oneLineFormat = document.getElementById("oneLineFormat")
const squareBracketsEncasing = document.getElementById("squareBracketsEncasing")
const textareaOutput = document.getElementById("textareaOutput")
const textareaJSONOutput = document.getElementById("textareaJSONOutput")
const objectName = document.getElementById("objectName")
const objectType = document.getElementById("objectType")
const table = document.getElementById("table")
function getStartTimerFunction() {
let timer
return (e) => {
if (e.key === "Tab") {
updateTable()
parseResult()
} else {
if (timer) {
clearTimeout(timer)
}
timer = setTimeout(() => {
updateTable()
parseResult()
}, 250)
}
}
}
const formats = {
wpp: {
name: `W++`,
options: ['optionQuotes', 'optionOneLine', `optionSquareBracketsEncasing`],
getPrefix: (objectType, parsedObjectName, spaceOrNewLine) => `${objectType}(${parsedObjectName})${spaceOrNewLine}{${spaceOrNewLine}`,
getValuesSeparator: () => ` + `,
getAttributeSeparator: (isOneLineFormat) => isOneLineFormat ? ', ' : '\n',
addAttribute: (propertyName, propertiesValue) => `${propertyName}(${propertiesValue})`,
appendAttributes: (propertyStrings, spaceOrNewLine) => `${propertyStrings.join(formats[selectFormat.value].getAttributeSeparator(oneLineFormat.checked))}${spaceOrNewLine}}`,
finalize: (stringResult, addSquareBrackets) => addSquareBrackets ? `[${stringResult}]` : stringResult
},
sbf: {
name: `Square Brackets Format`,
options: ['optionQuotes'],
getPrefix: (objectType, parsedObjectName) => `${objectType}: ${parsedObjectName};`,
getValuesSeparator: () => `, `,
getAttributeSeparator: () => `; `,
addAttribute: (propertyName, propertiesValue) => `${propertyName}: ${propertiesValue}`,
appendAttributes: (propertyStrings) => ` ${propertyStrings.join(formats[selectFormat.value].getAttributeSeparator())}`,
finalize: (stringResult) => `[ ${stringResult} ]`
}
}
let startUpdateTimer = getStartTimerFunction()
table.onkeydown = startUpdateTimer
table.onclick = startUpdateTimer
checkboxQuotes.onclick = startUpdateTimer
oneLineFormat.onclick = startUpdateTimer
squareBracketsEncasing.onclick = startUpdateTimer
objectName.onclick = startUpdateTimer
objectName.onkeydown = startUpdateTimer
objectType.onclick = startUpdateTimer
objectType.onkeydown = startUpdateTimer
selectFormat.onchange = startUpdateTimer
selectFormat.onclick = startUpdateTimer
selectFormat.onchange = () => {
spanFormatName.innerHTML = formats[selectFormat.value].name
const elems = document.getElementsByClassName(`formatOption`)
for (let i = 0; i < elems.length; i++) {
elems.item(i).hidden = !formats[selectFormat.value].options.includes(elems.item(i).id)
}
}
window.addEventListener("drop", dropHandler)
async function dropHandler(ev) {
let text
// Prevent default behavior (Prevent file from being opened)
ev.preventDefault();
if (ev.dataTransfer.items) {
// Use DataTransferItemList interface to access the file(s)
for (let i = 0; i < ev.dataTransfer.items.length; i++) {
// If dropped items aren't files, reject them
if (ev.dataTransfer.items[i].kind === 'file') {
const file = ev.dataTransfer.items[i].getAsFile();
if (file.name.toLowerCase().endsWith('.json')) {
text = await file.text()
}
}
}
} else {
// Use DataTransfer interface to access the file(s)
for (let i = 0; i < ev.dataTransfer.files.length; i++) {
const file = ev.dataTransfer.files[i]
text = await file.text()
}
}
loadJSONFromFile(text)
}
function loadJSONFromFile(text) {
const json = JSON.parse(text)
if (json.type && json.name) {
textareaJSONOutput.value = JSON.stringify(json, null, 4)
loadFromJSON()
updateTable()
parseResult()
}
}
function parseResult() {
const objectName = document.getElementById("objectName").value
const objectType = document.getElementById("objectType").value
let jsonObject = {type: objectType, name: objectName, properties: {}}
let rowLength = table.rows.length;
const parsedObjectName = checkboxQuotes.checked ? `"${objectName}"` : objectName
const spaceOrNewLine = oneLineFormat.checked ? ' ' : '\n'
let stringResult = formats[selectFormat.value].getPrefix(objectType, parsedObjectName, spaceOrNewLine)
const propertyStrings = []
for (let i = 1; i < rowLength - 1; i++) {
const propertyName = document.getElementById(`input${i - 1};${0}`).value
if (!propertyName) continue
const propertyValues = []
let oCells = table.rows.item(i).cells;
let cellLength = oCells.length;
for (let j = 1; j < cellLength - 1; j++) {
const val = document.getElementById(`input${i - 1};${j}`).value
if (val) {
propertyValues.push(val)
}
}
jsonObject.properties[propertyName] = propertyValues
const propertiesValue = propertyValues
.map(pv => checkboxQuotes.checked ? `"${pv}"` : pv)
.join(formats[selectFormat.value].getValuesSeparator())
propertyStrings.push(formats[selectFormat.value].addAttribute(propertyName, propertiesValue))
}
stringResult += formats[selectFormat.value].appendAttributes(propertyStrings, spaceOrNewLine)
textareaOutput.value = formats[selectFormat.value].finalize(stringResult, squareBracketsEncasing.checked)
textareaJSONOutput.value = JSON.stringify(jsonObject, null, 4)
}
function downloadObjectAsJson(exportObj, exportName) {
const dataStr = "data:text/json;charset=utf-8," + encodeURIComponent(JSON.stringify(exportObj, null, 4));
const downloadAnchorNode = document.createElement('a');
downloadAnchorNode.setAttribute("href", dataStr);
downloadAnchorNode.setAttribute("download", exportName + ".json");
document.body.appendChild(downloadAnchorNode); // required for firefox
downloadAnchorNode.click();
downloadAnchorNode.remove();
}
function downloadJSON() {
const json = JSON.parse(textareaJSONOutput.value)
downloadObjectAsJson(json, `${json.type}_${json.name}`)
}
function loadFromJSON() {
const json = JSON.parse(textareaJSONOutput.value)
document.getElementById("objectName").value = json.name
document.getElementById("objectType").value = json.type
table.innerHTML = `<thead><tr id="theadCells"><th>Attribute</th><th>Value1</th></tr></thead>`
for (let propertyName of Object.keys(json.properties)) {
const newRow = table.insertRow()
newRow.id = `tr${table.rows.length - 1}`
let newCell
newCell = newRow.insertCell()
newCell.innerHTML = `<input id="input${table.rows.length - 2};${0}" value="${propertyName}">`
for (let i = 0; i < json.properties[propertyName].length; i++) {
if (i + 1 >= table.tHead.rows.item(0).cells.length) {
const newThead = document.createElement("TH")
newThead.innerHTML = `Value${i + 1}`
table.tHead.rows.item(0).appendChild(newThead)
}
newCell = newRow.insertCell()
newCell.innerHTML = `<input id="input${table.rows.length - 2};${i + 1}" value="${json.properties[propertyName][i]}">`
}
}
}
function updateTable() {
let rowLength = table.rows.length;
for (let i = rowLength - 1; i >= 1; i--) {
let oCells = table.rows.item(i).cells;
let cellLength = oCells.length;
for (let j = cellLength - 1; j >= 0; j--) {
const input = document.getElementById(`input${i - 1};${j}`)
const leftInput = document.getElementById(`input${i - 1};${j - 1}`)
// adds new line
if (j === 0 && i === rowLength - 1 && input?.value?.trim()) {
const newRow = table.insertRow()
const newCell = newRow.insertCell()
newRow.id = `tr${i}`
newCell.innerHTML = `<input id="input${i};${0}" value="">`
}
// adds new cell
if (j === cellLength - 1 && input?.value?.trim()) {
if (j + 1 >= table.tHead.rows.item(0).cells.length) {
const newThead = document.createElement("TH")
newThead.innerHTML = `Value${j + 1}`
table.tHead.rows.item(0).appendChild(newThead)
}
const newCell = table.rows.item(i).insertCell()
newCell.innerHTML = `<input id="input${i - 1};${cellLength}" value="">`
} else if (j === cellLength - 1 && !input?.value?.trim() && leftInput && !leftInput?.value?.trim()) {
table.rows.item(i).deleteCell(j)
let maxAttributes = 0
for (let k = 1; k < table.rows.length; k++) {
if (table.rows.item(k).cells.length >= maxAttributes) {
maxAttributes = table.rows.item(k).cells.length
}
}
console.log("j", j)
console.log("maxAttributes", maxAttributes)
if (j >= maxAttributes) {
console.log("table.tHead.rows.item(0).deleteCell(j)")
table.tHead.rows.item(0).deleteCell(j)
}
}
}
}
}
</script>
JSON DATA:
{
"entries": {
"0": {
"uid": 0,
"key": [
"KEY1"
],
"keysecondary": [],
"comment": "",
"content": "EXAMPLE CONTENT 1",
"constant": false,
"selective": false,
"order": 100,
"position": 0,
"disable": false
},
"1": {
"uid": 1,
"key": [
"KEY2"
],
"keysecondary": [],
"comment": "",
"content": "EXAMPLE CONTENT 2",
"constant": false,
"selective": false,
"order": 100,
"position": 0,
"disable": false
}
}
}
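The existing script's loader expects objects shaped {type, name, properties}, while this data nests records under "entries" with fields like uid, key, and content. Before touching the JavaScript, it may help to pin down the mapping; the sketch below (in Python for illustration — the property choices are one plausible mapping taken from the pasted sample, not anything mandated by the format) converts one entries record into the shape loadJSONFromFile already understands, and the JS modification would mirror it:

```python
import json

def entry_to_editor_object(entry):
    """Map one 'entries' record onto the {type, name, properties} shape
    the existing editor script loads in loadJSONFromFile."""
    return {
        "type": "entry",
        "name": entry["key"][0] if entry["key"] else str(entry["uid"]),
        "properties": {
            "key": entry["key"],
            "keysecondary": entry["keysecondary"],
            "content": [entry["content"]],
            "order": [str(entry["order"])],
            "position": [str(entry["position"])],
            "disable": [str(entry["disable"])],
        },
    }

data = json.loads('''{"entries": {"0": {"uid": 0, "key": ["KEY1"],
 "keysecondary": [], "comment": "", "content": "EXAMPLE CONTENT 1",
 "constant": false, "selective": false, "order": 100, "position": 0,
 "disable": false}}}''')

objects = [entry_to_editor_object(e) for e in data["entries"].values()]
print(objects[0]["name"])  # KEY1
```

In the script itself, the same mapping would go into loadJSONFromFile (iterate json.entries instead of requiring json.type and json.name), with an inverse mapping in parseResult to write the entries format back out.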
|
27afa0fbc46a24400c472f0bba491081
|
{
"intermediate": 0.3529629111289978,
"beginner": 0.5101471543312073,
"expert": 0.13688990473747253
}
|
3,882
|
"FarmingCropNFT" could not deploy due to insufficient funds
* Account: 0x828111760872EAcB8B658d17ef87e8F2C2005d32
* Balance: 1122044167146832797 wei
* Message: insufficient funds for gas * price + value
* Try:
+ Using an adequately funded account
+ If you are using a local Geth node, verify that your node is synced.
at /usr/local/lib/node_modules/truffle/build/webpack:/packages/deployer/src/deployment.js:330:1
at processTicksAndRejections (node:internal/process/task_queues:95:5)
Truffle v5.8.1 (core: 5.8.1)
Node v18.15.0
|
37883ef8b209fdb51e6a409ac70afbda
|
{
"intermediate": 0.3447735607624054,
"beginner": 0.37432435154914856,
"expert": 0.28090208768844604
}
|
3,883
|
What is wrong with this code
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Packaging;
using System.Linq;
using System.Xml;
using System.Xml.Linq;
class ExcelReader
{
private MemoryStream _memoryStream;
private List<List<object>> _data;
public ExcelReader(MemoryStream memoryStream)
{
_memoryStream = memoryStream;
_data = new List<List<object>>();
}
public void ReadExcelFile()
{
Package package = Package.Open(_memoryStream, FileMode.Open, FileAccess.Read);
XNamespace xRelationships = "http://schemas.openxmlformats.org/officeDocument/2006/relationships";
XNamespace xSpreadsheetMain = "http://schemas.openxmlformats.org/spreadsheetml/2006/main";
PackagePart sharedStringsPart = package.GetRelationshipsByType(xRelationships.ToString() + "sharedStrings")
.Select(r => package.GetPart(r.TargetUri)).FirstOrDefault();
// Initialize an empty shared strings list
List<string> sharedStrings = new List<string>();
// If a shared strings part is present, read the strings
if (sharedStringsPart != null)
{
XDocument sharedStringsDoc = XDocument.Load(XmlReader.Create(sharedStringsPart.GetStream()));
sharedStrings = sharedStringsDoc.Descendants(xSpreadsheetMain + "t").Select(t => t.Value).ToList();
}
PackagePart sheetPart = package.GetRelationshipsByType(xRelationships.GetType() + "worksheet")
.Select(r => package.GetPart(r.TargetUri)).FirstOrDefault();
XDocument sheetDoc = XDocument.Load(XmlReader.Create(sheetPart.GetStream()));
List<List<object>> data = new List<List<object>>();
foreach (var row in sheetDoc.Descendants(xSpreadsheetMain + "row"))
{
List<object> rowData = new List<object>();
foreach (var cell in row.Elements(xSpreadsheetMain + "c"))
{
string cellValue = cell.Elements(xSpreadsheetMain + "v").FirstOrDefault()?.Value;
if (cellValue != null)
{
string cellType = cell.Attribute(xSpreadsheetMain + "t")?.Value;
if (cellType == "s")
{
rowData.Add(sharedStrings[int.Parse(cellValue)]);
}
else
{
rowData.Add(cellValue);
}
}
}
data.Add(rowData);
}
_data = data;
}
public List<List<object>> GetData()
{
return _data;
}
}
// Example usage:
class Program
{
static void Main(string[] args)
{
// Read the Excel file into a MemoryStream
FileStream fileStream = new FileStream("tst.xlsx", FileMode.Open);
Console.WriteLine(fileStream.Length);
MemoryStream memoryStream = new MemoryStream();
fileStream.CopyTo(memoryStream);
// Pass the MemoryStream to ExcelReader
ExcelReader excelReader = new ExcelReader(memoryStream);
excelReader.ReadExcelFile();
List<List<object>> data = excelReader.GetData();
foreach (List<object> row in data)
{
Console.WriteLine(string.Join(", ", row));
}
}
}
|
7d54fa2fe5384a31a90ac7e228528b4a
|
{
"intermediate": 0.42150044441223145,
"beginner": 0.5104750990867615,
"expert": 0.06802444905042648
}
|
3,884
|
Write a AstroJS app that connects to the Studio Ghibli API.
|
8d4ba4268417df07255f20160605e1cc
|
{
"intermediate": 0.7248612642288208,
"beginner": 0.10714530199766159,
"expert": 0.167993426322937
}
|
3,885
|
Perform as if you were a big data analytics engineer and write a hadoop java program to run on cloudera hadoop environment.
This program performs sentiment analysis on a given text input and returns two output files one with the negative words and the other with the positive ones you can also use a dataset or ignore it if it's not necessary.
You should know the required/different components of the program like java classes and write their code also explain what does each one of these classes do.
I also want you to show me the steps and how to run it on cloudera environment and the terminal commands I will use.
Also what are the external jars i need to add.
how will the program differentiate positive from nwgative opinions (how will it know this is a positive opinion and this is a negative opinion) and if an opinion has positive and negative words at the same time what will happen and how will the program deal with it.
|
66ed8c08b422415a9c347c1ac79a077b
|
{
"intermediate": 0.5800239443778992,
"beginner": 0.22233548760414124,
"expert": 0.1976405382156372
}
|
3,886
|
'''
import requests
import json
import datetime
import streamlit as st
from itertools import zip_longest
import os
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import japanize_matplotlib
def basic_info():
config = dict()
config["access_token"] = st.secrets["access_token"]
config['instagram_account_id'] = st.secrets.get("instagram_account_id", "")
config["version"] = 'v16.0'
config["graph_domain"] = 'https://graph.facebook.com/'
config["endpoint_base"] = config["graph_domain"] + config["version"] + '/'
return config
def InstaApiCall(url, params, request_type):
if request_type == 'POST':
req = requests.post(url, params)
else:
req = requests.get(url, params)
res = dict()
res["url"] = url
res["endpoint_params"] = params
res["endpoint_params_pretty"] = json.dumps(params, indent=4)
res["json_data"] = json.loads(req.content)
res["json_data_pretty"] = json.dumps(res["json_data"], indent=4)
return res
def getUserMedia(params, pagingUrl=''):
Params = dict()
Params['fields'] = 'id,caption,media_type,media_url,permalink,thumbnail_url,timestamp,username,like_count,comments_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
if pagingUrl == '':
url = params['endpoint_base'] + params['instagram_account_id'] + '/media'
else:
url = pagingUrl
return InstaApiCall(url, Params, 'GET')
def getUser(params):
Params = dict()
Params['fields'] = 'followers_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
url = params['endpoint_base'] + params['instagram_account_id']
return InstaApiCall(url, Params, 'GET')
def saveCount(count, filename):
with open(filename, 'w') as f:
json.dump(count, f, indent=4)
def getCount(filename):
try:
with open(filename, 'r') as f:
return json.load(f)
except (FileNotFoundError, json.decoder.JSONDecodeError):
return {}
st.set_page_config(layout="wide")
params = basic_info()
count_filename = "count.json"
if not params['instagram_account_id']:
st.write('.envファイルでinstagram_account_idを確認')
else:
response = getUserMedia(params)
user_response = getUser(params)
if not response or not user_response:
st.write('.envファイルでaccess_tokenを確認')
else:
posts = response['json_data']['data'][::-1]
user_data = user_response['json_data']
followers_count = user_data.get('followers_count', 0)
NUM_COLUMNS = 6
MAX_WIDTH = 1000
BOX_WIDTH = int(MAX_WIDTH / NUM_COLUMNS)
BOX_HEIGHT = 400
yesterday = (datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))) - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
follower_diff = followers_count - getCount(count_filename).get(yesterday, {}).get('followers_count', followers_count)
st.markdown(f'''
Follower: {followers_count} ({'+' if follower_diff >= 0 else ''}{follower_diff})
''', unsafe_allow_html=True)
show_description = st.checkbox("キャプションを表示")
show_summary_chart = st.checkbox("サマリーチャートを表示")
show_like_comment_chart = st.checkbox("いいね/コメント数グラフを表示")
posts.reverse()
post_groups = [list(filter(None, group)) for group in zip_longest(*[iter(posts)] * NUM_COLUMNS)]
count = getCount(count_filename)
today = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d')
if today not in count:
count[today] = {}
count[today]['followers_count'] = followers_count
if datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%H:%M') == '23:59':
count[yesterday] = count[today]
max_like_diff = 0
max_comment_diff = 0
summary_chart_data = {"Date": [], "Count": [], "Type": []}
for post_group in post_groups:
for post in post_group:
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
max_like_diff = max(like_count_diff, max_like_diff)
max_comment_diff = max(comment_count_diff, max_comment_diff)
if show_summary_chart:
for date in count.keys():
if date != today:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date].get("followers_count", 0))
summary_chart_data["Type"].append("Follower")
for post_id in count[date].keys():
if post_id not in ["followers_count"]:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("like_count", 0))
summary_chart_data["Type"].append("Like")
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("comments_count", 0))
summary_chart_data["Type"].append("Comment")
summary_chart_df = pd.DataFrame(summary_chart_data)
plt.figure(figsize=(15, 10))
summary_chart_palette = {"Follower": "lightblue", "Like": "orange", "Comment": "green"}
sns.lineplot(data=summary_chart_df, x="Date", y="Count", hue="Type", palette=summary_chart_palette)
plt.xlabel("Date")
plt.ylabel("Count")
plt.title("日別 サマリーチャート")
st.pyplot()
for post_group in post_groups:
with st.container():
columns = st.columns(NUM_COLUMNS)
for i, post in enumerate(post_group):
with columns[i]:
st.image(post['media_url'], width=BOX_WIDTH, use_column_width=True)
st.write(f"{datetime.datetime.strptime(post['timestamp'], '%Y-%m-%dT%H:%M:%S%z').astimezone(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d %H:%M:%S')}")
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
st.markdown(
f"👍: {post['like_count']} <span style='{'' if like_count_diff != max_like_diff or max_like_diff == 0 else 'color:green;'}'>({'+1' if like_count_diff >= 0 else ''}{like_count_diff})"
f"\n💬: {post['comments_count']} <span style='{'' if comment_count_diff != max_comment_diff or max_comment_diff == 0 else 'color:green;'}'>({'+1' if comment_count_diff >= 0 else ''}{comment_count_diff})",
unsafe_allow_html=True)
if show_like_comment_chart:
like_comment_chart_data = {"Date": [], "Count": [], "Type": []}
for date in count.keys():
if date != today and post["id"] in count[date]:
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(count[date][post["id"]].get("like_count", 0))
like_comment_chart_data["Type"].append("Like")
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(count[date][post["id"]].get("comments_count", 0))
like_comment_chart_data["Type"].append("Comment")
if like_comment_chart_data["Date"]:
like_comment_chart_df = pd.DataFrame(like_comment_chart_data)
plt.figure(figsize=(5, 3))
like_comment_chart_palette = {"Like": "orange", "Comment": "green"}
sns.lineplot(data=like_comment_chart_df, x="Date", y="Count", hue="Type", palette=like_comment_chart_palette)
plt.xlabel("Date")
plt.ylabel("Count")
plt.title("日別 いいね/コメント数")
st.pyplot()
caption = post['caption']
if caption is not None:
caption = caption.strip()
if "[Description]" in caption:
caption = caption.split("[Description]")[1].lstrip()
if "[Tags]" in caption:
caption = caption.split("[Tags]")[0].rstrip()
caption = caption.replace("#", "")
caption = caption.replace("[model]", "👗")
caption = caption.replace("[Equip]", "📷")
caption = caption.replace("[Develop]", "🖨")
if show_description:
st.write(caption or "No caption provided")
else:
st.write(caption[:0] if caption is not None and len(caption) > 50 else caption or "No caption provided")
count[today][post['id']] = {'like_count': post['like_count'], 'comments_count': post['comments_count']}
saveCount(count, count_filename)
'''
あなたは指示に忠実なプログラマーです。
現在、インスタグラムの投稿の一覧表示をして各投稿の"Container"の中に"いいね数"や"コメント数"、また一部の文字を改変機能を持つ"キャプション"を表示する機能と、jsonファイルに保存した"フォロワー数"や"いいね数"や"コメント数"のデータを元にグラフを描画する機能を持つアプリケーションを作成しています。
上記のコードを以下の要件をすべて満たして改修してください
- Python用のインデントを行頭に付与して、正常に表示されるかチェックしてから出力する
- コードの説明文は表示しない
- 指示のない部分のコードの改変は絶対にしない
- 'like_count'と"comments_count"表示部分は絶対に改変しない
- 'Container'については絶対に改変しない
- "caption = post['caption']"以降のブロックについては絶対に改変しない
- "caption.replace"に関するコードは絶対に残存させる
- グラフ作成のためのPythonライブラリは"seaborn"のみを使用する
- "サマリーチャート"の"followers_count"と'like_count'は左縦軸に目盛を表示し、'comments_count'は右縦軸に目盛を表示する
- "サマリーチャート"の'like_count'と'comments_count'の数はjsonファイル内の過去すべての日次データを参照し、"対象日時点の数値"ではなく、"対象日とその前日データの差分"を算出した数を表示する
- "いいね/コメント数グラフ"は、'like_count'を左縦軸に目盛を表示し、'comments_count'を右縦軸に目盛を表示する
- "いいね/コメント数グラフ"の'like_count'と'comments_count'の数はjsonファイル内の過去すべての日次データを参照し、"対象日時点の数値"ではなく、"対象日とその前日データの差分"を算出した数を表示する
|
c46fd5bbfd590334f724056fb8cbf1ba
|
{
"intermediate": 0.33554038405418396,
"beginner": 0.38297271728515625,
"expert": 0.2814869284629822
}
|
3,888
|
function [Top_gazelle_fit,Top_gazelle_pos,Convergence_curve]=GOA2_Laplace(SearchAgents_no,Max_iter,lb,ub,dim,fobj, mu, sigma)
% Inputs:
% SearchAgents_no: number of search agents
% Max_iter: maximum number of iterations
% lb: lower bound of the search space
% ub: upper bound of the search space
% dim: dimension of the problem
% fobj: objective function to be optimized
% mu: mean of the Laplace distribution
% sigma: scale parameter of the Laplace distribution
% Outputs:
% Top_gazelle_fit: fitness value of the best solution found
% Top_gazelle_pos: position of the best solution found
% Convergence_curve: convergence curve of the algorithm
Top_gazelle_pos = zeros(1,dim);
Top_gazelle_fit = inf;
Convergence_curve = zeros(1,Max_iter);
stepsize = zeros(SearchAgents_no,dim);
fitness = inf(SearchAgents_no,1);
% initialize search agents
gazelle = initialization(SearchAgents_no,dim,ub,lb);
% set boundaries
Xmin = repmat(ones(1,dim).*lb,SearchAgents_no,1);
Xmax = repmat(ones(1,dim).*ub,SearchAgents_no,1);
Iter = 0;
PSRs = 0.34;
S = 0.88;
s = rand();
while Iter<Max_iter
% evaluate top gazelle
for i=1:size(gazelle,1)
Flag4ub = gazelle(i,:) > ub;
Flag4lb = gazelle(i,:) < lb;
gazelle(i,:) = (gazelle(i,:).*(~(Flag4ub + Flag4lb))) + ub.*Flag4ub + lb.*Flag4lb;
fitness(i,1) = fobj(gazelle(i,:));
if fitness(i,1) < Top_gazelle_fit
Top_gazelle_fit = fitness(i,1);
Top_gazelle_pos = gazelle(i,:);
end
end
% keep track of fitness values
if Iter == 0
fit_old = fitness;
Prey_old = gazelle;
end
Inx = (fit_old < fitness);
Indx = repmat(Inx, 1, dim);
gazelle = Indx.*Prey_old + ~Indx.*gazelle;
fitness = Inx.*fit_old + ~Inx.*fitness;
fit_old = fitness;
Prey_old = gazelle;
% compute step size
Elite = repmat(Top_gazelle_pos,SearchAgents_no,1);
CF = (1 - Iter/Max_iter)^(2*Iter/Max_iter);
RL = levyrnd(SearchAgents_no,dim,mu,sigma); % Levy random number vector
RB = randn(SearchAgents_no,dim); % Brownian random number vector
for i=1:size(gazelle,1)
for j=1:size(gazelle,2)
R = rand();
r = rand();
if mod(Iter,2) == 0
mu_u = -1;
else
mu_u = 1;
end
% exploitation
if r > 0.5
stepsize(i,j) = RB(i,j)*(Elite(i,j) - RB(i,j)*gazelle(i,j));
gazelle(i,j) = gazelle(i,j) + s*R*stepsize(i,j);
% exploration
else
if i > size(gazelle,1)/2
stepsize(i,j) = RB(i,j)*(RL(i,j)*Elite(i,j) - gazelle(i,j));
gazelle(i,j) = Elite(i,j) + S*mu_u*CF*stepsize(i,j);
else
stepsize(i,j) = RL(i,j)*(Elite(i,j) - RL(i,j)*gazelle(i,j));
gazelle(i,j) = gazelle(i,j) + S*mu_u*R*stepsize(i,j);
end
end
end
end
% update top gazelle
for i=1:size(gazelle,1)
Flag4ub = gazelle(i,:) > ub;
Flag4lb = gazelle(i,:) < lb;
gazelle(i,:) = (gazelle(i,:).*(~(Flag4ub+Flag4lb))) + ub.*Flag4ub + lb.*Flag4lb;
fitness(i,1) = fobj(gazelle(i,:));
if fitness(i,1) < Top_gazelle_fit
Top_gazelle_fit = fitness(i,1);
Top_gazelle_pos = gazelle(i,:);
end
end
% update fitness values
if Iter == 0
fit_old = fitness;
Prey_old = gazelle;
end
Inx = (fit_old < fitness);
Indx = repmat(Inx, 1, dim);
gazelle = Indx.*Prey_old + ~Indx.*gazelle;
fitness = Inx.*fit_old + ~Inx.*fitness;
fit_old = fitness;
Prey_old = gazelle;
% apply PSRs
if rand() < PSRs
U = rand(SearchAgents_no,dim) < PSRs;
gazelle = gazelle + CF*((Xmin + rand(SearchAgents_no,dim).*(Xmax - Xmin)).*U);
else
r = rand();
Rs = size(gazelle,1);
stepsize = (PSRs*(1-r) + r)*(gazelle(randperm(Rs),:) - gazelle(randperm(Rs),:));
gazelle = gazelle + stepsize;
end
Iter = Iter + 1;
Convergence_curve(Iter) = Top_gazelle_fit;
end
end
function Positions = initialization(SearchAgents_no,dim,ub,lb)
Boundary_no = size(ub,2);
if Boundary_no == 1
Positions = rand(SearchAgents_no,dim).*(ub-lb) + lb;
end
if Boundary_no > 1
for i = 1:dim
ub_i = ub(i);
lb_i = lb(i);
Positions(:,i) = rand(SearchAgents_no,1).*(ub_i-lb_i) + lb_i;
end
end
end
function r = levyrnd(varargin)
if nargin == 1
mu = 0;
sigma = 1;
elseif nargin == 2
mu = varargin{1};
sigma = 1;
elseif nargin == 3
mu = varargin{1};
sigma = varargin{2};
end
r = mu - sigma*tan(pi*(rand(size(varargin{1}, 1), size(varargin{1}, 2)) - 0.5));
end
|
db75d63b5af15de8090599510f6d45fd
|
{
"intermediate": 0.4074850380420685,
"beginner": 0.33542415499687195,
"expert": 0.25709083676338196
}
|
3,889
|
'''
import requests
import json
import datetime
import streamlit as st
from itertools import zip_longest
import os
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import japanize_matplotlib
def basic_info():
config = dict()
config["access_token"] = st.secrets["access_token"]
config['instagram_account_id'] = st.secrets.get("instagram_account_id", "")
config["version"] = 'v16.0'
config["graph_domain"] = 'https://graph.facebook.com/'
config["endpoint_base"] = config["graph_domain"] + config["version"] + '/'
return config
def InstaApiCall(url, params, request_type):
if request_type == 'POST':
req = requests.post(url, params)
else:
req = requests.get(url, params)
res = dict()
res["url"] = url
res["endpoint_params"] = params
res["endpoint_params_pretty"] = json.dumps(params, indent=4)
res["json_data"] = json.loads(req.content)
res["json_data_pretty"] = json.dumps(res["json_data"], indent=4)
return res
def getUserMedia(params, pagingUrl=''):
Params = dict()
Params['fields'] = 'id,caption,media_type,media_url,permalink,thumbnail_url,timestamp,username,like_count,comments_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
if pagingUrl == '':
url = params['endpoint_base'] + params['instagram_account_id'] + '/media'
else:
url = pagingUrl
return InstaApiCall(url, Params, 'GET')
def getUser(params):
Params = dict()
Params['fields'] = 'followers_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
url = params['endpoint_base'] + params['instagram_account_id']
return InstaApiCall(url, Params, 'GET')
def saveCount(count, filename):
with open(filename, 'w') as f:
json.dump(count, f, indent=4)
def getCount(filename):
try:
with open(filename, 'r') as f:
return json.load(f)
except (FileNotFoundError, json.decoder.JSONDecodeError):
return {}
st.set_page_config(layout="wide")
params = basic_info()
count_filename = "count.json"
if not params['instagram_account_id']:
st.write('.envファイルでinstagram_account_idを確認')
else:
response = getUserMedia(params)
user_response = getUser(params)
if not response or not user_response:
st.write('.envファイルでaccess_tokenを確認')
else:
posts = response['json_data']['data'][::-1]
user_data = user_response['json_data']
followers_count = user_data.get('followers_count', 0)
NUM_COLUMNS = 6
MAX_WIDTH = 1000
BOX_WIDTH = int(MAX_WIDTH / NUM_COLUMNS)
BOX_HEIGHT = 400
yesterday = (datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))) - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
follower_diff = followers_count - getCount(count_filename).get(yesterday, {}).get('followers_count', followers_count)
st.markdown(f'''
Follower: {followers_count} ({'+' if follower_diff >= 0 else ''}{follower_diff})
''', unsafe_allow_html=True)
show_description = st.checkbox("キャプションを表示")
show_summary_chart = st.checkbox("サマリーチャートを表示")
show_like_comment_chart = st.checkbox("いいね/コメント数グラフを表示")
posts.reverse()
post_groups = [list(filter(None, group)) for group in zip_longest(*[iter(posts)] * NUM_COLUMNS)]
count = getCount(count_filename)
today = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d')
if today not in count:
count[today] = {}
count[today]['followers_count'] = followers_count
if datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%H:%M') == '23:59':
count[yesterday] = count[today]
max_like_diff = 0
max_comment_diff = 0
summary_chart_data = {"Date": [], "Count": [], "Type": []}
for post_group in post_groups:
for post in post_group:
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
max_like_diff = max(like_count_diff, max_like_diff)
max_comment_diff = max(comment_count_diff, max_comment_diff)
if show_summary_chart:
for date in count.keys():
if date != today:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date].get("followers_count", 0))
summary_chart_data["Type"].append("Follower")
for post_id in count[date].keys():
if post_id not in ["followers_count"]:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("like_count", 0))
summary_chart_data["Type"].append("Like")
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("comments_count", 0))
summary_chart_data["Type"].append("Comment")
summary_chart_df = pd.DataFrame(summary_chart_data)
plt.figure(figsize=(15, 10))
summary_chart_palette = {"Follower": "lightblue", "Like": "orange", "Comment": "green"}
sns.lineplot(data=summary_chart_df, x="Date", y="Count", hue="Type", palette=summary_chart_palette)
plt.xlabel("Date")
plt.ylabel("Count")
plt.title("日別 サマリーチャート")
st.pyplot()
for post_group in post_groups:
with st.container():
columns = st.columns(NUM_COLUMNS)
for i, post in enumerate(post_group):
with columns[i]:
st.image(post['media_url'], width=BOX_WIDTH, use_column_width=True)
st.write(f"{datetime.datetime.strptime(post['timestamp'], '%Y-%m-%dT%H:%M:%S%z').astimezone(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d %H:%M:%S')}")
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
st.markdown(
f"👍: {post['like_count']} <span style='{'' if like_count_diff != max_like_diff or max_like_diff == 0 else 'color:green;'}'>({'+1' if like_count_diff >= 0 else ''}{like_count_diff})"
f"\n💬: {post['comments_count']} <span style='{'' if comment_count_diff != max_comment_diff or max_comment_diff == 0 else 'color:green;'}'>({'+1' if comment_count_diff >= 0 else ''}{comment_count_diff})",
unsafe_allow_html=True)
if show_like_comment_chart:
like_comment_chart_data = {"Date": [], "Count": [], "Type": []}
for date in count.keys():
if date != today and post["id"] in count[date]:
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(count[date][post["id"]].get("like_count", 0))
like_comment_chart_data["Type"].append("Like")
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(count[date][post["id"]].get("comments_count", 0))
like_comment_chart_data["Type"].append("Comment")
if like_comment_chart_data["Date"]:
like_comment_chart_df = pd.DataFrame(like_comment_chart_data)
plt.figure(figsize=(5, 3))
like_comment_chart_palette = {"Like": "orange", "Comment": "green"}
sns.lineplot(data=like_comment_chart_df, x="Date", y="Count", hue="Type", palette=like_comment_chart_palette)
plt.xlabel("Date")
plt.ylabel("Count")
plt.title("日別 いいね/コメント数")
st.pyplot()
caption = post['caption']
if caption is not None:
caption = caption.strip()
if "[Description]" in caption:
caption = caption.split("[Description]")[1].lstrip()
if "[Tags]" in caption:
caption = caption.split("[Tags]")[0].rstrip()
caption = caption.replace("#", "")
caption = caption.replace("[model]", "👗")
caption = caption.replace("[Equip]", "📷")
caption = caption.replace("[Develop]", "🖨")
if show_description:
st.write(caption or "No caption provided")
else:
st.write(caption[:0] if caption is not None and len(caption) > 50 else caption or "No caption provided")
count[today][post['id']] = {'like_count': post['like_count'], 'comments_count': post['comments_count']}
saveCount(count, count_filename)
'''
あなたは指示に忠実なプログラマーです。
上記のコードを以下の要件をすべて満たして改修してください
- Python用のインデントを行頭に付与して出力する
- コードの説明文は表示しない
- 指示のない部分のコードの改変は絶対にしない
- " caption = post['caption']
if caption is not None:
caption = caption.strip()
if "[Description]" in caption:
caption = caption.split("[Description]")[1].lstrip()
if "[Tags]" in caption:
caption = caption.split("[Tags]")[0].rstrip()
caption = caption.replace("#", "")
caption = caption.replace("[model]", "👗")
caption = caption.replace("[Equip]", "📷")
caption = caption.replace("[Develop]", "🖨")
if show_description:
st.write(caption or "No caption provided")
else:
st.write(caption[:0] if caption is not None and len(caption) > 50 else caption or "No caption provided")
count[today][post['id']] = {'like_count': post['like_count'], 'comments_count': post['comments_count']}
"部分のコードの機能については、絶対にコード内に使用する
- グラフ作成のためのPythonライブラリは"seaborn"のみを使用する
- "サマリーチャート"の"followers_count"と'like_count'は左縦軸に目盛を表示し、'comments_count'は右縦軸に目盛を表示する
- "サマリーチャート"の'like_count'と'comments_count'の数はjsonファイル内の過去すべての日次データを参照し、"対象日時点の数値"ではなく、"対象日とその前日データの差分"を算出した数を表示する
- "いいね/コメント数グラフ"は、'like_count'を左縦軸に目盛を表示し、'comments_count'を右縦軸に目盛を表示する
- "いいね/コメント数グラフ"の'like_count'と'comments_count'の数はjsonファイル内の過去すべての日次データを参照し、"対象日時点の数値"ではなく、"対象日とその前日データの差分"を算出した数を表示する
|
a8402ab48916204acd23ed807046df8d
|
{
"intermediate": 0.33554038405418396,
"beginner": 0.38297271728515625,
"expert": 0.2814869284629822
}
|
3,890
|
This is aarch64 assembly language code.
.section .bss
n16: .skip 4
// compute next highest multiple of 16 that is bigger or equal to n
adr x1, n
ldr w1, [x1]
sbfiz x1, x2, #2, #20
add x1, x1, #0xf
and x1, x1, #0xfffffffffffffff0
adr x2, n16
str w1, [x2]
It's not storing value on n16, what's the issue?
|
7035c195c7865b342dc64301d9d9a9f8
|
{
"intermediate": 0.29658475518226624,
"beginner": 0.45765945315361023,
"expert": 0.24575573205947876
}
|
3,891
|
im creating a website for students and teachers, made in react with a backend in vs c#, the student account and teacher account have different privileges and access to certain parts of the website, what would be better to handle their login, to make separate logins for students and teachers, or to make them log in the same form and automatically detect the user and show them their respective page
|
0e0697d3890d5d968d3f0512a39b21c3
|
{
"intermediate": 0.624177873134613,
"beginner": 0.17480884492397308,
"expert": 0.20101328194141388
}
|
3,892
|
could you please make me a MarkDown cheatsheet?
|
97857f0ffbf0c3355dd20268dd0ab45a
|
{
"intermediate": 0.24629703164100647,
"beginner": 0.44198206067085266,
"expert": 0.31172093749046326
}
|
3,893
|
c++ how to use n dimentional array
|
c986daf5609d604bf97f8989506bee36
|
{
"intermediate": 0.4049648344516754,
"beginner": 0.2613747715950012,
"expert": 0.33366042375564575
}
|
3,894
|
function [Top_gazelle_fit,Top_gazelle_pos,Convergence_curve]=GOA2(SearchAgents_no,Max_iter,lb,ub,dim,fobj)
Top_gazelle_pos=zeros(1,dim);
Top_gazelle_fit=inf;
Convergence_curve=zeros(1,Max_iter);
stepsize=zeros(SearchAgents_no,dim);
fitness=inf(SearchAgents_no,1);
gazelle=initialization(SearchAgents_no,dim,ub,lb);
Xmin=repmat(ones(1,dim).*lb,SearchAgents_no,1);
Xmax=repmat(ones(1,dim).*ub,SearchAgents_no,1);
Iter=0;
PSRs=0.34;
S=0.88;
s=rand();
while Iter<Max_iter
%------------------- Evaluating top gazelle -----------------
for i=1:size(gazelle,1)
Flag4ub=gazelle(i,:)>ub;
Flag4lb=gazelle(i,:)<lb;
gazelle(i,:)=(gazelle(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
fitness(i,1)=fobj(gazelle(i,:));
if fitness(i,1)<Top_gazelle_fit
Top_gazelle_fit=fitness(i,1);
Top_gazelle_pos=gazelle(i,:);
end
end
%------------------- Keeping track of fitness values-------------------
if Iter==0
fit_old=fitness; Prey_old=gazelle;
end
Inx=(fit_old<fitness);
Indx=repmat(Inx,1,dim);
gazelle=Indx.*Prey_old+~Indx.*gazelle;
fitness=Inx.*fit_old+~Inx.*fitness;
fit_old=fitness; Prey_old=gazelle;
%------------------------------------------------------------
Elite=repmat(Top_gazelle_pos,SearchAgents_no,1); %(Eq. 3)
CF=(1-Iter/Max_iter)^(2*Iter/Max_iter);
RL=0.05*levy(SearchAgents_no,dim,1.5); %Levy random number vector
RB=randn(SearchAgents_no,dim); %Brownian random number vector
for i=1:size(gazelle,1)
for j=1:size(gazelle,2)
R=rand();
r=rand();
if mod(Iter,2)==0
mu=-1;
else
mu=1;
end
%------------------ Exploitation -------------------
if r>0.5
stepsize(i,j)=RB(i,j)*(Elite(i,j)-RB(i,j)*gazelle(i,j));
gazelle(i,j)=gazelle(i,j)+s*R*stepsize(i,j);
%--------------- Exploration----------------
else
if i>size(gazelle,1)/2
stepsize(i,j)=RB(i,j)*(RL(i,j)*Elite(i,j)-gazelle(i,j));
gazelle(i,j)=Elite(i,j)+S*mu*CF*stepsize(i,j);
else
stepsize(i,j)=RL(i,j)*(Elite(i,j)-RL(i,j)*gazelle(i,j));
gazelle(i,j)=gazelle(i,j)+S*mu*R*stepsize(i,j);
end
end
end
end
%------------------ Updating top gazelle ------------------
for i=1:size(gazelle,1)
Flag4ub=gazelle(i,:)>ub;
Flag4lb=gazelle(i,:)<lb;
gazelle(i,:)=(gazelle(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
fitness(i,1)=fobj(gazelle(i,:));
if fitness(i,1)<Top_gazelle_fit
Top_gazelle_fit=fitness(i,1);
Top_gazelle_pos=gazelle(i,:);
end
end
%---------------------- Updating history of fitness values ----------------
if Iter==0
fit_old=fitness; Prey_old=gazelle;
end
Inx=(fit_old<fitness);
Indx=repmat(Inx,1,dim);
gazelle=Indx.*Prey_old+~Indx.*gazelle;
fitness=Inx.*fit_old+~Inx.*fitness;
fit_old=fitness; Prey_old=gazelle;
%---------- Applying PSRs -----------
if rand()<PSRs
U=rand(SearchAgents_no,dim)<PSRs;
gazelle=gazelle+CF*((Xmin+rand(SearchAgents_no,dim).*(Xmax-Xmin)).*U);
else
r=rand(); Rs=size(gazelle,1);
stepsize=(PSRs*(1-r)+r)*(gazelle(randperm(Rs),:)-gazelle(randperm(Rs),:));
gazelle=gazelle+stepsize;
end
Iter=Iter+1;
Convergence_curve(Iter)=Top_gazelle_fit;
end
|
a606cbe585f2122171a609cb9c9ad836
|
{
"intermediate": 0.34565678238868713,
"beginner": 0.2886227071285248,
"expert": 0.36572057008743286
}
|
3,895
|
The Developer and Assessment Guidelines must be read and used in conjunction with this document when
designing and developing the assessment tool. Use the Assessment Development Checklist to ensure all
aspects have been completed. Develop the assessor version first and then resave as the student version and
remove any content that is not required for the student. Tip: Create all assessor instructions in another colour eg:
green text, this will make it easier when removing text for student version.
Overview
This assessment template contains both the assessment tool and instructions that will be used to
gather and interpret evidence within the assessment process and is designed to ensure it complies
with the Principles of Assessment and the Rules of Evidence.
The instructions for students and the assessor have been integrated into the assessment tool and
tasks to provide information about ‘what, when, where, how and why’ each task forms part of the
evidence gathering in the assessment process.
Principles of assessment
Fairness – the individual student’s needs are considered in the assessment process:
reasonable adjustments, where appropriate are applied by SUT to take into account the
individual student’s needs
the student is informed about the assessment process and provided with the opportunity to
challenge the result of the assessment and be reassessed if necessary.
Flexibility – assessment is flexible to the individual student by:
reflecting the student’s needs
assessing competencies held by the student no matter how or where they have been
acquired
drawing from a range of assessment methods and using those that are appropriate to the
context, the unit of competency and associated assessment requirements, and the individual.
Validity – any assessment decision of the assessor is justified, based on the evidence of
performance of the individual student and requires
assessment against the unit/s of competency and the associated assessment requirements
covers the broad range of skills and knowledge that are essential to competent performance
assessment of knowledge and skills is integrated with their practical application
assessment to be based on evidence that demonstrates that a student could demonstrate
these skills and knowledge in other similar situations
judgement of competence is based on evidence of student performance that is aligned to the
unit/s of competency and associated assessment requirements.
Reliability – evidence presented for assessment is consistently interpreted and assessment results
are comparable irrespective of the assessor conducting the assessment.
Rules of evidence
Validity – the assessor is assured that the student has the skills, knowledge and attributes as
described in the module or unit of competency and associated assessment requirements.
Sufficiency – the assessor is assured that the quality, quantity and relevance of the assessment
evidence enables a judgement to be made of a student’s competency.
Authenticity – the assessor is assured that the evidence presented for assessment is the student’s
own work.
Currency – the assessor is assured that the assessment evidence demonstrates current
competency. This requires the assessment evidence to be from the present or the very recent past.
Course Details
Code ICT50220 Title Diploma of IT Advanced Programming
Unit Details
Code(s) Title(s)
ICTPRG430 Apply Introductory Object-Orientated language skills
ICTPRG549 Apply Intermediate Object-Oriented language skills
Assessment Task Details
Number Assessment Method Title
AT1 Product Based Challenge 1
Type Project/Assignment
Assessment Requirements
Purpose of the task:
This assessment is designed to gather evidence of the student’s ability to apply the knowledge and
skills required to write scripts for cyber security tasks detailed in the unit outline.
Instructions
Students are required to complete this task and submit all evidence required to meet the Product
Based evidence marking criteria for this task.
To complete this task, students can use their learning materials as a reference.
Students may use a computer to complete this project.
Submit online via Canvas.
Refer to the:
– Organisation Guidelines and Polices document in Canvas
– Checklist of the required evidence to be submitted (documents and diagrams)
Assessment Conditions
Due Date: Session 12
Time: Students are expected to complete the assessment in 3 weeks.
Location and environment: Students will complete this assessment in a simulated workplace
environment.
This task requires a student to complete activities typical of those required by a job role and
workplace environment applicable to industry.
Authenticity: It must be the individual students own work and written in their own words. When
quoting specific references, students must acknowledge the source. Evidence of plagiarism or
cheating will result in the assessment being assessed as unsatisfactory and further investigation will
occur.
Resources and Equipment Required
Students will need to provide (if working away from the University, e.g. at home):
Computer and internet connection
Visual Studio Code with C#
The Assessor will provide:
The assessment task cover sheet either in hard copy or electronically in Canvas
Access to classroom environment
Computer and internet connection
Visual Studio Code with C#
Task Summary
To successfully complete this assessment, students are required to:
Summary
In this assessment you are to develop an information system application. The information system
needs to:
Project: Develop an information system.
Design an information system with columns of data.
Support adding records.
You are to negotiate the development of the program with a client (represented by the teacher).
Once the program is developed you are to present and discuss your code to the client. During this
presentation you are expected to answer questions from the client and receive approval for your
application.
You will need to organise a group of up to 3 people per project.
Team Members
1. <Insert student name here>
2. <Insert student name here>
3. <Insert student name here>
Tasks
A breakdown of the tasks to be completed are as follows:
1. Write a report which includes a program requirements section inside the document.
Determine the information which you are capturing into a class design.
The requirement being that the information is organised with column attributes, and each
instance of the information can be represented as a ROW of data.
In the program requirements section of the report, document which data attributes are being
recorded and include some example data.
(This will be considered as the programming requirements.)
e.g.
For a contacts application,
Name, Phone, Address
"Fred", "89290484", "4 Paisly St Melb Vic"
Document a table of rows of the information as example data.
From the user's perspective, document some functionality and tasks which may be beneficial.
Create a roadmap (a wish list) of these features. i.e. A numbered list of potential features.
From this list, select a minimum number of features which will be implemented. This
requirements section is expected to be revised as you consider all of the questions in the
assessment and implement them over time. In this way, agreement between parties (typically
you and your manager or client) can be found as the project progresses.
[Note: It is very normal for specifications to be dynamic in programming projects, and these do
not necessarily conform to engineering project management, but require other tools and
processes not covered in these units.]
2. Design a class to hold the row data (a record) as a single object.
2.1 Have the class implement a non-default constructor which initialises a class
instance from a row of data.
e.g.
For a contacts application,
Contact c = new Contact("Fred","89290484", "4 Paisly St Melb Vic");
2.2 Write a function to support formatted printing of the information (there are
many ways of doing this).
Each attribute will be displayed at a fixed width, which you will have to decide
and implement.
2.3 Implement set and get for the class attributes (for encapsulation policy).
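As an illustration only, tasks 2.1-2.3 could be sketched in C# along the lines below, using the contacts example from the brief. The member names and the column widths (15/12/30) are assumptions for the sketch, not requirements:

```csharp
// Contact.cs — one row of data held as a single object (illustrative sketch)
public class Contact
{
    private string name;
    private string phone;
    private string address;

    // Non-default constructor: initialise an instance from a row of data
    public Contact(string name, string phone, string address)
    {
        this.name = name;
        this.phone = phone;
        this.address = address;
    }

    // Set and get for each attribute (encapsulation policy)
    public string Name { get { return name; } set { name = value; } }
    public string Phone { get { return phone; } set { phone = value; } }
    public string Address { get { return address; } set { address = value; } }

    // Fixed-width formatted printing; the widths chosen here are arbitrary
    public string ToRow()
    {
        return string.Format("{0,-15}{1,-12}{2,-30}", name, phone, address);
    }
}
```

With this shape, `Contact c = new Contact("Fred", "89290484", "4 Paisly St Melb Vic");` followed by `Console.WriteLine(c.ToRow());` prints one left-aligned row.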
3. A container object.
3.1 Create a container class which holds a list of your information type objects.
You will have to decide on which data structure to use, but have it support array indexing.
3.2 Implement a sorting method to sort your list on a particular attribute.
This function should be OOP in design and be a member of the list container class with your
design.
3.3 Write a display function to have formatted columns of the data.
3.4 Write a function which appends an object of the row-data class defined in
the previous question to a list (variable) within the container class.
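One possible shape for the container in task 3, again only a sketch: it assumes a `Contact` record class as in task 2 with a fixed-width `ToRow()` formatting method, uses `List<T>` internally, exposes an indexer to satisfy the array-indexing requirement, and sorts on `Name` as just one choice of attribute:

```csharp
using System.Collections.Generic;

// ContactList.cs — container holding a list of record objects (illustrative sketch)
public class ContactList
{
    private List<Contact> items = new List<Contact>();

    // Array-style indexing into the underlying list
    public Contact this[int i] { get { return items[i]; } }

    public int Count { get { return items.Count; } }

    // 3.4: append one row object to the internal list
    public void Add(Contact c) { items.Add(c); }

    // 3.2: sort on a particular attribute (here Name), as an OOP member method
    public void SortByName()
    {
        items.Sort((a, b) => string.Compare(a.Name, b.Name));
    }

    // 3.3: display all rows in formatted columns
    public void Display()
    {
        foreach (Contact c in items)
            System.Console.WriteLine(c.ToRow());
    }
}
```

Keeping the sort inside the container (rather than a free function) is what makes the design OOP in the sense the task asks for.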
4. Debugging
In the report, include a Debugging section with screenshots of using the debugger and
briefly explain how you used it and why.
Include an example where you use debugging to identify and rectify a logical error (where
the program runs but still is in error).
With multiple screenshots over time, show evidence of examining variables (this requires debug
mode and ensuring the variables are being displayed) and of tracing running code.
5. Library code
In your C# design implementation, have your library source code in its own library
component.
Under the organisational guidelines, every class should have its class definition in its own file.
Create a C# solution to do this, and then link the components together so that the
project can be built.
6. Build a text menu system application.
6.1 Have an option for the user to add a row of data.
6.2 Have an option for the user to display the table data.
6.3 Have an option for the user to load some test data.
6.4 Have an option for the user to exit the menu.
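The four menu options above can be wired up as a simple console loop. This is a sketch only and assumes the record and container classes from tasks 2 and 3 (a `Contact` class and a `ContactList` with `Add` and `Display` members):

```csharp
using System;

// Program.cs — text menu driving the container (illustrative sketch)
class Program
{
    static void Main()
    {
        ContactList list = new ContactList();
        while (true)
        {
            Console.WriteLine("1) Add row  2) Display table  3) Load test data  4) Exit");
            string choice = Console.ReadLine();
            if (choice == "1")
            {
                Console.Write("Name: ");    string name = Console.ReadLine();
                Console.Write("Phone: ");   string phone = Console.ReadLine();
                Console.Write("Address: "); string address = Console.ReadLine();
                list.Add(new Contact(name, phone, address));   // 6.1
            }
            else if (choice == "2")
            {
                list.Display();                                // 6.2
            }
            else if (choice == "3")
            {
                // 6.3: load some test data (example row from the brief)
                list.Add(new Contact("Fred", "89290484", "4 Paisly St Melb Vic"));
            }
            else if (choice == "4")
            {
                break;                                         // 6.4
            }
        }
    }
}
```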
7. Demonstration
Present your application to the client:
Be prepared to answer questions from the client.
You will receive feedback from the client and if your work is satisfactory, you will get
approval(sign-off) from your client. Sign-off will be given via the Canvas feedback
section.
Assessment Outcome
To successfully complete this activity, students are required to complete all requirements and submit all
evidence to meet the Product Quality Criteria evidence criteria for this task to achieve a Satisfactory (S)
result. The outcome of this assessment will contribute to the evidence used in the final decision to achieve
competency for this unit/s.
Assessment Submission
To meet the requirements of this assessment you are required to submit the following documents/evidence
via Canvas to the Assessor:
Completed assessment task cover sheet
|
5c23e1906b87abc29e8427f122cba431
|
{
"intermediate": 0.4120897054672241,
"beginner": 0.40952903032302856,
"expert": 0.17838117480278015
}
|
3,896
|
'''
import requests
import json
import datetime
import streamlit as st
from itertools import zip_longest
import os
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import japanize_matplotlib
def basic_info():
config = dict()
config["access_token"] = st.secrets["access_token"]
config['instagram_account_id'] = st.secrets.get("instagram_account_id", "")
config["version"] = 'v16.0'
config["graph_domain"] = 'https://graph.facebook.com/'
config["endpoint_base"] = config["graph_domain"] + config["version"] + '/'
return config
def InstaApiCall(url, params, request_type):
if request_type == 'POST':
req = requests.post(url, params)
else:
req = requests.get(url, params)
res = dict()
res["url"] = url
res["endpoint_params"] = params
res["endpoint_params_pretty"] = json.dumps(params, indent=4)
res["json_data"] = json.loads(req.content)
res["json_data_pretty"] = json.dumps(res["json_data"], indent=4)
return res
def getUserMedia(params, pagingUrl=''):
Params = dict()
Params['fields'] = 'id,caption,media_type,media_url,permalink,thumbnail_url,timestamp,username,like_count,comments_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
if pagingUrl == '':
url = params['endpoint_base'] + params['instagram_account_id'] + '/media'
else:
url = pagingUrl
return InstaApiCall(url, Params, 'GET')
def getUser(params):
Params = dict()
Params['fields'] = 'followers_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
url = params['endpoint_base'] + params['instagram_account_id']
return InstaApiCall(url, Params, 'GET')
def saveCount(count, filename):
with open(filename, 'w') as f:
json.dump(count, f, indent=4)
def getCount(filename):
try:
with open(filename, 'r') as f:
return json.load(f)
except (FileNotFoundError, json.decoder.JSONDecodeError):
return {}
st.set_page_config(layout="wide")
params = basic_info()
count_filename = "count.json"
if not params['instagram_account_id']:
st.write('.envファイルでinstagram_account_idを確認')
else:
response = getUserMedia(params)
user_response = getUser(params)
if not response or not user_response:
st.write('.envファイルでaccess_tokenを確認')
else:
posts = response['json_data']['data'][::-1]
user_data = user_response['json_data']
followers_count = user_data.get('followers_count', 0)
NUM_COLUMNS = 6
MAX_WIDTH = 1000
BOX_WIDTH = int(MAX_WIDTH / NUM_COLUMNS)
BOX_HEIGHT = 400
yesterday = (datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))) - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
follower_diff = followers_count - getCount(count_filename).get(yesterday, {}).get('followers_count', followers_count)
st.markdown(f'''
Follower: {followers_count} ({'+' if follower_diff >= 0 else ''}{follower_diff})
''', unsafe_allow_html=True)
show_description = st.checkbox("キャプションを表示")
show_summary_chart = st.checkbox("サマリーチャートを表示")
show_like_comment_chart = st.checkbox("いいね/コメント数グラフを表示")
posts.reverse()
post_groups = [list(filter(None, group)) for group in zip_longest(*[iter(posts)] * NUM_COLUMNS)]
count = getCount(count_filename)
today = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d')
if today not in count:
count[today] = {}
count[today]['followers_count'] = followers_count
if datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%H:%M') == '23:59':
count[yesterday] = count[today]
max_like_diff = 0
max_comment_diff = 0
summary_chart_data = {"Date": [], "Count": [], "Type": []}
for post_group in post_groups:
for post in post_group:
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
max_like_diff = max(like_count_diff, max_like_diff)
max_comment_diff = max(comment_count_diff, max_comment_diff)
if show_summary_chart:
for date in count.keys():
if date != today:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date].get("followers_count", 0))
summary_chart_data["Type"].append("Follower")
for post_id in count[date].keys():
if post_id not in ["followers_count"]:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("like_count", 0))
summary_chart_data["Type"].append("Like")
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("comments_count", 0))
summary_chart_data["Type"].append("Comment")
summary_chart_df = pd.DataFrame(summary_chart_data)
fig, ax1 = plt.subplots(figsize=(15, 10))
summary_chart_palette = {"Follower": "lightblue", "Like": "orange", "Comment": "green"}
sns.lineplot(data=summary_chart_df[summary_chart_df["Type"] != "Comment"], x="Date", y="Count", hue="Type", palette=summary_chart_palette, ax=ax1)
ax1.set_xlabel("Date")
ax1.set_ylabel("Count")
ax2 = ax1.twinx()
sns.lineplot(data=summary_chart_df[summary_chart_df["Type"] == "Comment"], x="Date", y="Count", color="green", ax=ax2)
ax2.set_ylabel("Comment Count")
plt.title("日別 サマリーチャート")
st.pyplot(fig)
for post_group in post_groups:
with st.container():
columns = st.columns(NUM_COLUMNS)
for i, post in enumerate(post_group):
with columns[i]:
st.image(post['media_url'], width=BOX_WIDTH, use_column_width=True)
st.write(f"{datetime.datetime.strptime(post['timestamp'], '%Y-%m-%dT%H:%M:%S%z').astimezone(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d %H:%M:%S')}")
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
st.markdown(
f"👍: {post['like_count']} <span style='{'' if like_count_diff != max_like_diff or max_like_diff == 0 else 'color:green;'}'>({'+1' if like_count_diff >= 0 else ''}{like_count_diff})"
f"\n💬: {post['comments_count']} <span style='{'' if comment_count_diff != max_comment_diff or max_comment_diff == 0 else 'color:green;'}'>({'+1' if comment_count_diff >= 0 else ''}{comment_count_diff})",
unsafe_allow_html=True)
likes_diff: int = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comments_diff: int = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
max_like_diff = max(likes_diff, max_like_diff)
max_comment_diff = max(comments_diff, max_comment_diff)
if show_like_comment_chart:
like_comment_chart_data = {"Date": [], "Count": [], "Type": []}
for date in count.keys():
if date != today and post["id"] in count[date]:
prev_count = count[date][post["id"]]
likes_diff = post['like_count'] - prev_count.get("like_count", 0)
comments_diff = post['comments_count'] - prev_count.get("comments_count", 0)
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(likes_diff)
like_comment_chart_data["Type"].append("Like")
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(comments_diff)
like_comment_chart_data["Type"].append("Comment")
if like_comment_chart_data["Date"]:
like_comment_chart_df = pd.DataFrame(like_comment_chart_data)
fig, ax3 = plt.subplots(figsize=(5, 3))
like_comment_chart_palette = {"Like": "orange", "Comment": "green"}
sns.lineplot(data=like_comment_chart_df[like_comment_chart_df["Type"] == "Like"], x="Date", y="Count", color="orange", ax=ax3)
ax3.set_xlabel("Date")
ax3.set_ylabel("Like Count")
ax4 = ax3.twinx()
sns.lineplot(data=like_comment_chart_df[like_comment_chart_df["Type"] == "Comment"], x="Date", y="Count", color="green", ax=ax4)
ax4.set_ylabel("Comment Count")
plt.title("日別 いいね/コメント数")
st.pyplot(fig)
caption = post['caption']
if caption is not None:
caption = caption.strip()
if "[Description]" in caption:
caption = caption.split("[Description]")[1].lstrip()
if "[Tags]" in caption:
caption = caption.split("[Tags]")[0].rstrip()
caption = caption.replace("#", "")
caption = caption.replace("[model]", "👗")
caption = caption.replace("[Equip]", "📷")
caption = caption.replace("[Develop]", "🖨")
if show_description:
st.write(caption or "No caption provided")
else:
st.write(caption[:0] if caption is not None and len(caption) > 50 else caption or "No caption provided")
count[today][post['id']] = {'like_count': post['like_count'], 'comments_count': post['comments_count']}
saveCount(count, count_filename)
'''
You are a programmer who follows instructions faithfully.
Modify the code above so that it satisfies all of the following requirements:
- Omit any explanation and show only the modified parts of the code
- Output with Python indentation at the start of each line
- Never modify any part of the code that the instructions do not cover
- Use only "seaborn" as the Python charting library
- For the summary chart, the 'like_count' and 'comments_count' figures must be sourced from the current values plus all past daily data in the JSON file, and must show the net daily change in 'Count' on the target date, not the cumulative count as of that date
- For the like/comment chart, the 'like_count' and 'comments_count' figures must be sourced from the current values plus all past daily data in the JSON file, and must show the net daily change in 'Count' for each post on the target date, not the cumulative count as of that date
|
1fbe4b8a900076cbf414f8f3fc9d8a99
|
{
"intermediate": 0.33554038405418396,
"beginner": 0.38297271728515625,
"expert": 0.2814869284629822
}
|
3,897
|
function [z] = levy(n,m,beta)
num = gamma(1+beta)*sin(pi*beta/2); % used for Numerator
den = gamma((1+beta)/2)*beta*2^((beta-1)/2); % used for Denominator
sigma_u = (num/den)^(1/beta);% Standard deviation
u = random('Normal',0,sigma_u,n,m);
v = random('Normal',0,1,n,m);
z =u./(abs(v).^(1/beta));
end
|
2a160c99a2cfb450699944355387e86a
|
{
"intermediate": 0.2778685390949249,
"beginner": 0.43522197008132935,
"expert": 0.2869094908237457
}
|
3,898
|
visual studio code discord extension that shows what file your editing and also says stop stalking :3
|
7cf9a5132325108747e1ed8411865d23
|
{
"intermediate": 0.3361544609069824,
"beginner": 0.30565300583839417,
"expert": 0.3581925630569458
}
|
3,899
|
What would be the most efficient way to store ['3ef0', '3ef8', '3efg', '3efo', 'bef0', 'bef8', 'befg', 'befo', 'mel1', 'mel7', 'melh', 'meln', 'sel1', 'sel7', 'selh', 'seln'] for c++
|
33235e71ff78ae705a364641afb17a25
|
{
"intermediate": 0.41718748211860657,
"beginner": 0.2670140266418457,
"expert": 0.3157985508441925
}
|
3,900
|
draw schema diagram in typedb studio with screenshot helping
|
e756a1b8fd5b43df5bdf6fc822624593
|
{
"intermediate": 0.4394477605819702,
"beginner": 0.2562836706638336,
"expert": 0.304268479347229
}
|
3,901
|
hey
|
aae825fb221fde8a15e45e5f723c8ebd
|
{
"intermediate": 0.33180856704711914,
"beginner": 0.2916048467159271,
"expert": 0.3765866458415985
}
|
3,902
|
how to check if there is a folder in python? and if no, create one
|
414f385c7f9366bad57ccef3328ad192
|
{
"intermediate": 0.5674967765808105,
"beginner": 0.1443716436624527,
"expert": 0.28813159465789795
}
|
3,903
|
'''
import requests
import json
import datetime
import streamlit as st
from itertools import zip_longest
import os
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import japanize_matplotlib
def basic_info():
config = dict()
config["access_token"] = st.secrets["access_token"]
config['instagram_account_id'] = st.secrets.get("instagram_account_id", "")
config["version"] = 'v16.0'
config["graph_domain"] = 'https://graph.facebook.com/'
config["endpoint_base"] = config["graph_domain"] + config["version"] + '/'
return config
def InstaApiCall(url, params, request_type):
if request_type == 'POST':
req = requests.post(url, params)
else:
req = requests.get(url, params)
res = dict()
res["url"] = url
res["endpoint_params"] = params
res["endpoint_params_pretty"] = json.dumps(params, indent=4)
res["json_data"] = json.loads(req.content)
res["json_data_pretty"] = json.dumps(res["json_data"], indent=4)
return res
def getUserMedia(params, pagingUrl=''):
Params = dict()
Params['fields'] = 'id,caption,media_type,media_url,permalink,thumbnail_url,timestamp,username,like_count,comments_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
if pagingUrl == '':
url = params['endpoint_base'] + params['instagram_account_id'] + '/media'
else:
url = pagingUrl
return InstaApiCall(url, Params, 'GET')
def getUser(params):
Params = dict()
Params['fields'] = 'followers_count'
Params['access_token'] = params['access_token']
if not params['endpoint_base']:
return None
url = params['endpoint_base'] + params['instagram_account_id']
return InstaApiCall(url, Params, 'GET')
def saveCount(count, filename):
with open(filename, 'w') as f:
json.dump(count, f, indent=4)
def getCount(filename):
try:
with open(filename, 'r') as f:
return json.load(f)
except (FileNotFoundError, json.decoder.JSONDecodeError):
return {}
st.set_page_config(layout="wide")
params = basic_info()
count_filename = "count.json"
if not params['instagram_account_id']:
st.write('.envファイルでinstagram_account_idを確認')
else:
response = getUserMedia(params)
user_response = getUser(params)
if not response or not user_response:
st.write('.envファイルでaccess_tokenを確認')
else:
posts = response['json_data']['data'][::-1]
user_data = user_response['json_data']
followers_count = user_data.get('followers_count', 0)
NUM_COLUMNS = 6
MAX_WIDTH = 1000
BOX_WIDTH = int(MAX_WIDTH / NUM_COLUMNS)
BOX_HEIGHT = 400
yesterday = (datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))) - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
follower_diff = followers_count - getCount(count_filename).get(yesterday, {}).get('followers_count', followers_count)
st.markdown(f'''
Follower: {followers_count} ({'+' if follower_diff >= 0 else ''}{follower_diff})
''', unsafe_allow_html=True)
show_description = st.checkbox("キャプションを表示")
show_summary_chart = st.checkbox("サマリーチャートを表示")
show_like_comment_chart = st.checkbox("いいね/コメント数グラフを表示")
posts.reverse()
post_groups = [list(filter(None, group)) for group in zip_longest(*[iter(posts)] * NUM_COLUMNS)]
count = getCount(count_filename)
today = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d')
if today not in count:
count[today] = {}
count[today]['followers_count'] = followers_count
if datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))).strftime('%H:%M') == '23:59':
count[yesterday] = count[today]
max_like_diff = 0
max_comment_diff = 0
summary_chart_data = {"Date": [], "Count": [], "Type": []}
for post_group in post_groups:
for post in post_group:
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
max_like_diff = max(like_count_diff, max_like_diff)
max_comment_diff = max(comment_count_diff, max_comment_diff)
if show_summary_chart:
for date in count.keys():
if date != today:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date].get("followers_count", 0))
summary_chart_data["Type"].append("フォロワー")
for post_id in count[date].keys():
if post_id not in ["followers_count"]:
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("like_count", 0))
summary_chart_data["Type"].append("いいね")
summary_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
summary_chart_data["Count"].append(count[date][post_id].get("comments_count", 0))
summary_chart_data["Type"].append("コメント")
summary_chart_df = pd.DataFrame(summary_chart_data)
fig, ax1 = plt.subplots(figsize=(15, 10))
summary_chart_palette = {"フォロワー": "lightblue", "いいね": "orange", "コメント": "green"}
sns.lineplot(data=summary_chart_df[summary_chart_df["Type"] != "コメント"], x="Date", y="Count", hue="Type", palette=summary_chart_palette, ax=ax1)
ax1.set_xlabel("日付")
ax1.set_ylabel("いいね/フォロワー")
ax2 = ax1.twinx()
sns.lineplot(data=summary_chart_df[summary_chart_df["Type"] == "コメント"], x="Date", y="Count", color="green", ax=ax2)
ax2.set_ylabel("コメント")
plt.title("日別 サマリーチャート")
st.pyplot(fig)
for post_group in post_groups:
with st.container():
columns = st.columns(NUM_COLUMNS)
for i, post in enumerate(post_group):
with columns[i]:
st.image(post['media_url'], width=BOX_WIDTH, use_column_width=True)
st.write(f"{datetime.datetime.strptime(post['timestamp'], '%Y-%m-%dT%H:%M:%S%z').astimezone(datetime.timezone(datetime.timedelta(hours=9))).strftime('%Y-%m-%d %H:%M:%S')}")
like_count_diff = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comment_count_diff = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
st.markdown(
f"👍: {post['like_count']} <span style='{'' if like_count_diff != max_like_diff or max_like_diff == 0 else 'color:green;'}'>({'+1' if like_count_diff >= 0 else ''}{like_count_diff})"
f"\n💬: {post['comments_count']} <span style='{'' if comment_count_diff != max_comment_diff or max_comment_diff == 0 else 'color:green;'}'>({'+1' if comment_count_diff >= 0 else ''}{comment_count_diff})",
unsafe_allow_html=True)
likes_diff: int = post['like_count'] - count.get(yesterday, {}).get(post['id'], {}).get('like_count', post['like_count'])
comments_diff: int = post['comments_count'] - count.get(yesterday, {}).get(post['id'], {}).get('comments_count', post['comments_count'])
max_like_diff = max(likes_diff, max_like_diff)
max_comment_diff = max(comments_diff, max_comment_diff)
if show_like_comment_chart:
like_comment_chart_data = {"Date": [], "Count": [], "Type": []}
for date in count.keys():
if date != today and post["id"] in count[date]:
prev_count = count[date][post["id"]]
likes_diff = post['like_count'] - prev_count.get("like_count", 0)
comments_diff = post['comments_count'] - prev_count.get("comments_count", 0)
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(likes_diff)
like_comment_chart_data["Type"].append("Like")
like_comment_chart_data["Date"].append(datetime.datetime.strptime(date, '%Y-%m-%d').strftime('%m/%d'))
like_comment_chart_data["Count"].append(comments_diff)
like_comment_chart_data["Type"].append("Comment")
if like_comment_chart_data["Date"]:
like_comment_chart_df = pd.DataFrame(like_comment_chart_data)
fig, ax3 = plt.subplots(figsize=(5, 3))
like_comment_chart_palette = {"Like": "orange", "Comment": "green"}
sns.lineplot(data=like_comment_chart_df[like_comment_chart_df["Type"] == "Like"], x="Date", y="Count", color="orange", ax=ax3)
ax3.set_xlabel("日付")
ax3.set_ylabel("いいね")
ax4 = ax3.twinx()
sns.lineplot(data=like_comment_chart_df[like_comment_chart_df["Type"] == "Comment"], x="Date", y="Count", color="green", ax=ax4)
ax4.set_ylabel("コメント")
plt.title("日別 いいね/コメント数")
st.pyplot(fig)
caption = post['caption']
if caption is not None:
caption = caption.strip()
if "[Description]" in caption:
caption = caption.split("[Description]")[1].lstrip()
if "[Tags]" in caption:
caption = caption.split("[Tags]")[0].rstrip()
caption = caption.replace("#", "")
caption = caption.replace("[model]", "👗")
caption = caption.replace("[Equip]", "📷")
caption = caption.replace("[Develop]", "🖨")
if show_description:
st.write(caption or "No caption provided")
else:
st.write(caption[:0] if caption is not None and len(caption) > 50 else caption or "No caption provided")
count[today][post['id']] = {'like_count': post['like_count'], 'comments_count': post['comments_count']}
saveCount(count, count_filename)
'''
You are a programmer who follows instructions faithfully.
Modify the code above so that it satisfies all of the following requirements:
- Omit any explanation and show only the modified parts of the code
- Output with Python indentation at the start of each line
- Never modify any part of the code that the instructions do not cover
- Use only "seaborn" as the Python charting library
- Redefine the overall daily counts as "total_daily_like(followers,comments)_count"
- Redefine the per-post daily counts as "post_daily_like(comments)_count"
- The "like", "follower" and "comment" figures in the summary chart must be sourced from the current values plus all past daily data in the JSON file; compute the net daily change in 'Count' on the target date (not the cumulative count as of that date) as "total_daily_like(followers,comments)_count"
- The "like" and "comment" figures in the like/comment chart must be sourced from the current values plus all past daily data in the JSON file; compute the net daily change in 'Count' for each post on the target date (not the cumulative count as of that date) as "post_daily_like(comments)_count"
- Check whether "post_daily_like_count" and "post_daily_comments_count" are the difference between each post's target date and the previous day, and correct them if not
- Check all of the code for inconsistencies introduced by these redefinitions and fix any problems found
- " caption = post['caption']
if caption is not None:
caption = caption.strip()
if "[Description]" in caption:
caption = caption.split("[Description]")[1].lstrip()
if "[Tags]" in caption:
caption = caption.split("[Tags]")[0].rstrip()
caption = caption.replace("#", "")
caption = caption.replace("[model]", "👗")
caption = caption.replace("[Equip]", "📷")
caption = caption.replace("[Develop]", "🖨")
if show_description:
st.write(caption or "No caption provided")
else:
st.write(caption[:0] if caption is not None and len(caption) > 50 else caption or "No caption provided")
count[today][post['id']] = {'like_count': post['like_count'], 'comments_count': post['comments_count']}
" — the functionality of this section of code must absolutely be kept in use within the code
|
a030b8a976ca94914118eca3f39cf362
|
{
"intermediate": 0.33554038405418396,
"beginner": 0.38297271728515625,
"expert": 0.2814869284629822
}
|
3,904
|
The following line of code in a C++ project causes a compiler warning:
safe_strncpy(szBuffer,szBuffer+8,26);
szBuffer is of type "char *".
safe_strncpy is defined by the following:
#define safe_strncpy(a,b,n) do { strncpy((a),(b),(size_t)((n)-1)); (a)[(size_t)((n)-1)] = 0; } while (0)
The compiler warning is about the call to strncpy, and is as follows:
warning: 'char* strncpy(char*, const char*, size_t)' accessing 25 bytes at offsets 0 and 8 overlaps between 1 and 17 bytes at offset 8 [-Wrestrict]
Can you please explain the warning and suggest how to resolve it?
|
44b21d3af7e6d761eaf18780b562cc53
|
{
"intermediate": 0.5217545032501221,
"beginner": 0.2238215208053589,
"expert": 0.25442397594451904
}
|
3,905
|
javascript sort integer arary
|
8d6b4c14abc47cf716970333165a244c
|
{
"intermediate": 0.32880866527557373,
"beginner": 0.29331856966018677,
"expert": 0.3778727650642395
}
|
3,906
|
write a c++ function to convert each array in a 2d array of integers into a hashmaps with the values of the arrays as keys and and array of the indexes of the keys in the array as the value of each key
|
c0753ab94ebe3431503d10db5b945d50
|
{
"intermediate": 0.39141538739204407,
"beginner": 0.15223677456378937,
"expert": 0.456347793340683
}
|
3,907
|
write a c++ function to convert each array in a 2d array of integers into a hashmaps with the values of the arrays as keys and set of the indexes of the keys in the array as the value of each key
|
d69452acd3a87c3b6ca862da0f49ffcc
|
{
"intermediate": 0.39722055196762085,
"beginner": 0.1389162242412567,
"expert": 0.46386316418647766
}
|
3,908
|
write a c++ function to convert each array in a 2d array of integers into a hashmaps with the values of the arrays as keys and set of the indexes of the keys in the array as the value of each key. for example, convert [[1,2,5,3],[9,3,3,9]] into [{1:{0}, 2:{1}, 3: {3}, 5: {2}},{3: {1,2},9:{0,3}}]
|
d19aa3dea450efeb620a60868c41a4cc
|
{
"intermediate": 0.36503490805625916,
"beginner": 0.1840224266052246,
"expert": 0.450942724943161
}
|
3,909
|
Fix the code.
Code:
import pandas as pd
import json
from pprint import pprint
with open('recipes.json') as f:
data = json.load(f)
all_ingredients=[]
for recipe in data:
for ingredient in recipe['ingredients']:
all_ingredients.append(ingredient)
print(all_ingredients)  # Print the list of all ingredients
According to the assignment our data is in a DataFrame; we need to read recipes.csv and work with that.
|
c7e05faf0a5b361fd718130f7958edf6
|
{
"intermediate": 0.4422905445098877,
"beginner": 0.3262077271938324,
"expert": 0.23150168359279633
}
|
3,910
|
hey
|
ac47389d9bc3f1a97eb87686e02e8cc1
|
{
"intermediate": 0.33180856704711914,
"beginner": 0.2916048467159271,
"expert": 0.3765866458415985
}
|
3,911
|
Please fix the errors in the code.
|
b90858dae24756655937ab0f05231b17
|
{
"intermediate": 0.2987670600414276,
"beginner": 0.2584679126739502,
"expert": 0.4427649974822998
}
|
3,912
|
how add on object function errors with errors list in python?
|
3654c5f640a1f8e91e3a1861e873e50a
|
{
"intermediate": 0.4752807021141052,
"beginner": 0.2611198425292969,
"expert": 0.2635994255542755
}
|
3,913
|
% Signal at the matched filter output
output = conv(r1, x_);
subplot(4,1,4);
t3_1 = -1:1/2e9:0.25e-6;
t3_2 = 0.25e-6:1/2e9:0.5e-6;
t3_3 = 0.5e-6:1/2e9:0.75e-6;
t3_4 = 0.75e-6:1/2e9:1e-6;
plot(t3_1, output(1:numel(t3_1)));
hold on;
plot(t3_2, output(numel(t3_1)+1:numel(t3_1)+numel(t3_2)));
plot(t3_3, output(numel(t3_1)+numel(t3_2)+1:numel(t3_1)+numel(t3_2)+numel(t3_3)));
plot(t3_4, output(numel(t3_1)+numel(t3_2)+numel(t3_3)+1:end));
title('匹配器输出端信号波形');
xlabel('时间(秒)');
ylabel('幅度'); This code throws the error "Index exceeds the number of array elements (3043)" at the statement plot(t3_1, output(1:numel(t3_1))); please fix it.
|
4f8ec7409906504b6ac9c70686b04e88
|
{
"intermediate": 0.293379545211792,
"beginner": 0.32078057527542114,
"expert": 0.3858398199081421
}
|
3,914
|
generate Go code that:
- checks if the phone number is in international Romanian format
- if it is in national format then reformat it as international romanian format
|
66840a118b92923725f27f26ed4730ac
|
{
"intermediate": 0.46525582671165466,
"beginner": 0.13556678593158722,
"expert": 0.3991774022579193
}
|
3,915
|
write me code racing game in c++ programming
|
39faa5096292ead2dde351f2c6db229a
|
{
"intermediate": 0.25286757946014404,
"beginner": 0.5190203785896301,
"expert": 0.2281121015548706
}
|