| row_id (int64: 0 to 48.4k) | init_message (string: lengths 1 to 342k) | conversation_hash (string: length 32) | scores (dict) |
|---|---|---|---|
32,523
|
How I get all class Baseline dict names
|
a7f04711669463eb4484d9dd0e420f9e
|
{
"intermediate": 0.2824676036834717,
"beginner": 0.4645046889781952,
"expert": 0.2530277967453003
}
|
32,524
|
hi
|
b29aa07ed33c9fa258e446a2c30444f8
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
32,525
|
Whats up
|
8c8aa38b0b1549b1d77859a9c7f038d8
|
{
"intermediate": 0.33414027094841003,
"beginner": 0.2937088906764984,
"expert": 0.37215080857276917
}
|
32,526
|
ai using Random Chaos Number Generator algorithm generate 5 main numbers 1 to 70 and 1 bonus 1 to 25 1 line winning combinations from the next mega millions lotto, exclude only from main numbers line the following numbers 1 10 12 23 24 26 35 42 43 44 45 47 49 50 51 53 55 56 57 65 67 70
|
8c1379f402a6796c0840e555a22b8ebe
|
{
"intermediate": 0.14379827678203583,
"beginner": 0.11397191882133484,
"expert": 0.7422298192977905
}
|
32,527
|
quora_data["question1","question2"]=append(quora_data["question1"],quora_data["question2"])
NameError: name 'append' is not defined
|
3a34729ff994643c48604081d2798299
|
{
"intermediate": 0.3422219455242157,
"beginner": 0.27501380443573,
"expert": 0.3827643096446991
}
|
32,528
|
𝐾 = [
0 1 0
1 −4 1
0 1 0
]
a. What is the visual effect of applying this kernel to an image?
The image shown below has allowable gray-scale range (1 to 9)
𝐼 = [
1 5 1
3 7 2
1 4 9
]
b. Apply the convolution operation between 𝐼 and 𝐾 with the given parameters: padding=1 and
stride=1. Show the steps for every pixel in the first row.
|
1167d6ccb684ed258c2e58ae1d5d8870
|
{
"intermediate": 0.22449776530265808,
"beginner": 0.1594606637954712,
"expert": 0.6160416007041931
}
|
32,529
|
Rewrite this in Python
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;
int main() {
int n; cin >> n;
int dp[101][2];
dp[1][0] = 1;
dp[1][1] = 1;
for (int i = 2; i < n + 1; i++) {
dp[i][0] = dp[i - 1][0] + dp[i - 1][1];
dp[i][1] = dp[i - 1][0];
}
cout << dp[n][1] + dp[n][0];
return 0;
}
|
f7a2144c23bf17c9d9de22449abde28d
|
{
"intermediate": 0.33488738536834717,
"beginner": 0.37390655279159546,
"expert": 0.291206032037735
}
|
32,530
|
give step by step commands needed to install this /
gpt-crawler
Public
BuilderIO/gpt-crawler
2 branches
1 tag
Latest commit
@steve8708
steve8708 Merge pull request #67 from Daethyra/main
…
82b70f2
13 hours ago
Git stats
97 commits
Files
Type
Name
Latest commit message
Commit time
.github/workflows
chore: add build step to build workflow
5 days ago
containerapp
modified config.ts to fix containerized execution
5 days ago
src
Update configSchema in src/config.ts
4 days ago
.dockerignore
modified: .dockerignore
yesterday
.gitignore
Merge pull request #58 from luissuil/main
14 hours ago
.releaserc
chore(ci): release workflow
5 days ago
Dockerfile
Continuing file reversion for sake of PR clarity
yesterday
License
Create License
16 hours ago
README.md
Add resourceExclusions to Config type
5 days ago
config.ts
cleanup
last week
package-lock.json
chore(release): 1.0.0 [skip ci]
14 hours ago
package.json
Merge pull request #67 from Daethyra/main
13 hours ago
tsconfig.json
build: use strict ts mode
5 days ago
README.md
GPT Crawler
Crawl a site to generate knowledge files to create your own custom GPT from one or multiple URLs
Example
Get started
Running locally
Clone the repository
Install dependencies
Configure the crawler
Run your crawler
Alternative methods
Running in a container with Docker
Running as a CLI
Development
Upload your data to OpenAI
Create a custom GPT
Create a custom assistant
Contributing
Example
Here is a custom GPT that I quickly made to help answer questions about how to use and integrate Builder.io by simply providing the URL to the Builder docs.
This project crawled the docs and generated the file that I uploaded as the basis for the custom GPT.
Try it out yourself by asking questions about how to integrate Builder.io into a site.
Note that you may need a paid ChatGPT plan to access this feature
Get started
Running locally
Clone the repository
Be sure you have Node.js >= 16 installed.
git clone https://github.com/builderio/gpt-crawler
Install dependencies
npm i
Configure the crawler
Open config.ts and edit the url and selectors properties to match your needs.
E.g. to crawl the Builder.io docs to make our custom GPT you can use:
export const defaultConfig: Config = {
url: "https://www.builder.io/c/docs/developers",
match: "https://www.builder.io/c/docs/**",
selector: `.docs-builder-container`,
maxPagesToCrawl: 50,
outputFileName: "output.json",
};
See config.ts for all available options. Here is a sample of the common config options:
type Config = {
/** URL to start the crawl, if sitemap is provided then it will be used instead and download all pages in the sitemap */
url: string;
/** Pattern to match against for links on a page to subsequently crawl */
match: string;
/** Selector to grab the inner text from */
selector: string;
/** Don't crawl more than this many pages */
maxPagesToCrawl: number;
/** File name for the finished data */
outputFileName: string;
/** Optional resources to exclude
*
* @example
* ['png','jpg','jpeg','gif','svg','css','js','ico','woff','woff2','ttf','eot','otf','mp4','mp3','webm','ogg','wav','flac','aac','zip','tar','gz','rar','7z','exe','dmg','apk','csv','xls','xlsx','doc','docx','pdf','epub','iso','dmg','bin','ppt','pptx','odt','avi','mkv','xml','json','yml','yaml','rss','atom','swf','txt','dart','webp','bmp','tif','psd','ai','indd','eps','ps','zipx','srt','wasm','m4v','m4a','webp','weba','m4b','opus','ogv','ogm','oga','spx','ogx','flv','3gp','3g2','jxr','wdp','jng','hief','avif','apng','avifs','heif','heic','cur','ico','ani','jp2','jpm','jpx','mj2','wmv','wma','aac','tif','tiff','mpg','mpeg','mov','avi','wmv','flv','swf','mkv','m4v','m4p','m4b','m4r','m4a','mp3','wav','wma','ogg','oga','webm','3gp','3g2','flac','spx','amr','mid','midi','mka','dts','ac3','eac3','weba','m3u','m3u8','ts','wpl','pls','vob','ifo','bup','svcd','drc','dsm','dsv','dsa','dss','vivo','ivf','dvd','fli','flc','flic','flic','mng','asf','m2v','asx','ram','ra','rm','rpm','roq','smi','smil','wmf','wmz','wmd','wvx','wmx','movie','wri','ins','isp','acsm','djvu','fb2','xps','oxps','ps','eps','ai','prn','svg','dwg','dxf','ttf','fnt','fon','otf','cab']
*/
resourceExclusions?: string[];
};
Run your crawler
npm start
Alternative methods
Running in a container with Docker
To obtain the output.json with a containerized execution, go into the containerapp directory. Modify the config.ts the same as above; the output.json file should be generated in the data folder. Note: the outputFileName property in the config.ts file in the containerapp folder is configured to work with the container.
Upload your data to OpenAI
The crawl will generate a file called output.json at the root of this project. Upload that to OpenAI to create your custom assistant or custom GPT.
Create a custom GPT
Use this option for UI access to your generated knowledge that you can easily share with others
Note: you may need a paid ChatGPT plan to create and use custom GPTs right now
Go to https://chat.openai.com/
Click your name in the bottom left corner
Choose "My GPTs" in the menu
Choose "Create a GPT"
Choose "Configure"
Under "Knowledge" choose "Upload a file" and upload the file you generated
Create a custom assistant
Use this option for API access to your generated knowledge that you can integrate into your product.
Go to https://platform.openai.com/assistants
Click "+ Create"
Choose "upload" and upload the file you generated
Contributing
Know how to make this project better? Send a PR!
About
Crawl a site to generate knowledge files to create your own custom GPT from a URL
www.builder.io/blog/custom-gpt
Resources
Readme
License
ISC license
Activity
Stars
12.1k stars
Watchers
84 watching
Forks
849 forks
Report repository
Releases 1
v1.0.0
Latest
14 hours ago
Contributors
18
+ 7 contributors
Languages
TypeScript
55.9%
Dockerfile
22.6%
JavaScript
15.4%
Shell
6.1%
|
e0ad2eff8cf7a529929c3c184a55004f
|
{
"intermediate": 0.4026893377304077,
"beginner": 0.3165959119796753,
"expert": 0.2807146906852722
}
|
32,531
|
translate this from js to rust:
eval("2+2")
|
8229313d2f0dcb5185743c9e4c538816
|
{
"intermediate": 0.17062105238437653,
"beginner": 0.6805993318557739,
"expert": 0.14877961575984955
}
|
32,532
|
write c# code that call an asmx web service
|
2c186912ed5da8fe23072f9067e686c9
|
{
"intermediate": 0.4934045970439911,
"beginner": 0.268939733505249,
"expert": 0.2376556694507599
}
|
32,533
|
make me a JS snippet that creates a string long as a given number
|
7aa76099819d03e431ebf7b23699f8fc
|
{
"intermediate": 0.3619025647640228,
"beginner": 0.41653040051460266,
"expert": 0.2215670645236969
}
|
32,534
|
how can i optimize my dxvk for wow 3.3.5a wotlk warmane for my hardware specs? amd 14 core 28 thread newest cpu. amd 7900 xt gpu with 20gb vram with 64gb system ram. desktop i want max graphics with still good performance. the same uses a 32bit exe but i patched it to up to 4gb of memory. I use free sync and vsync in my amd drivers. 3840x2160 42" monitor @ 120hz
dxvk.enableAsync = True
dxvk.numCompilerThreads = 14
dxvk.numAsyncThreads = 14
dxvk.maxFrameRate = 0
d3d9.maxFrameLatency = 1
d3d9.numBackBuffers = 2
d3d9.presentInterval = 0
d3d9.tearFree = Auto
d3d9.maxAvailableMemory = 4096
d3d9.evictManagedOnUnlock = True
d3d9.allowDiscard = True
d3d9.samplerAnisotropy = 16
d3d9.invariantPosition = False
d3d9.memoryTrackTest = True
d3d9.noExplicitFrontBuffer = True
d3d9.strictConstantCopies = False
d3d9.lenientClear = True
d3d9.longMad = False
d3d9.floatEmulation = Auto
d3d9.forceSwapchainMSAA = 0
d3d9.supportVCache = False
d3d9.forceSamplerTypeSpecConstants = False
dxvk.useRawSsbo = False
dxgi.maxDeviceMemory = 0
dxgi.maxSharedMemory = 0
dxgi.customVendorId = 0
dxgi.customDeviceId = 0
dxgi.customDeviceDesc = ""
dxvk.logLevel = none
dxvk.debugName = False
dxvk.debugOverlay = False
d3d9.shaderModel = 3
d3d9.dpiAware = True
please don't explain anything. I just want the config.wtf with no garbage or details
|
83df2fe191d1332c9e41d73b8589cdf4
|
{
"intermediate": 0.3747209906578064,
"beginner": 0.31824222207069397,
"expert": 0.30703675746917725
}
|
32,535
|
how can i optimize my dxvk for wow 3.3.5a wotlk warmane for my hardware specs? amd 14 core 28 thread newest cpu. amd 7900 xt gpu with 20gb vram with 64gb system ram. desktop i want max graphics with still good performance. the same uses a 32bit exe but i patched it to up to 4gb of memory. I use free sync and vsync in my amd drivers. 3840x2160 42" monitor @ 120hz
dxvk.enableAsync = True
dxvk.numCompilerThreads = 14
dxvk.numAsyncThreads = 14
dxvk.maxFrameRate = 0
d3d9.maxFrameLatency = 1
d3d9.numBackBuffers = 2
d3d9.presentInterval = 0
d3d9.tearFree = Auto
d3d9.maxAvailableMemory = 4096
d3d9.evictManagedOnUnlock = True
d3d9.allowDiscard = True
d3d9.samplerAnisotropy = 16
d3d9.invariantPosition = False
d3d9.memoryTrackTest = True
d3d9.noExplicitFrontBuffer = True
d3d9.strictConstantCopies = False
d3d9.lenientClear = True
d3d9.longMad = False
d3d9.floatEmulation = Auto
d3d9.forceSwapchainMSAA = 0
d3d9.supportVCache = False
d3d9.forceSamplerTypeSpecConstants = False
dxvk.useRawSsbo = False
dxgi.maxDeviceMemory = 0
dxgi.maxSharedMemory = 0
dxgi.customVendorId = 0
dxgi.customDeviceId = 0
dxgi.customDeviceDesc = ""
dxvk.logLevel = none
dxvk.debugName = False
dxvk.debugOverlay = False
d3d9.shaderModel = 3
d3d9.dpiAware = True
I am on windows 11 using 10 bit panel oled with reshade with a sharpening filter and a clarity filter. Can you optimize my dxvk.conf
|
b9e016d14adc962ef4c501d41fafe1e6
|
{
"intermediate": 0.4334619343280792,
"beginner": 0.21964862942695618,
"expert": 0.3468894064426422
}
|
32,536
|
in matlab write a code to Compute and imagesc Continuous point Source in a channel with steady flow. the inputs should be:
M as the mass of pollution
Dx as diffusion in x direction
Dy as diffusion in y direction
W as the width of channel
L as the length of channel
R as retardation coefficient
U as flow rate in x direction
the default values of the inputs are:
M = 50 gram
Dx = 2 cm^2/s
Dy = 2 cm^2/s
W = 100 cm
L = 350 cm
R = 1
U = 1 cm/s
the parameters are:
dt = 1s as time step
dx = 0.5 cm as moving step in horizontal direction
dy = 0.5 cm as moving step in vertical direction
start time = 1
endtime = 200
the point of pollution source is at x=0 and y=50
Your Code should be such that many parameters of the program can be adjusted by users
and have a default value in the beginning.
|
3e68ba121c5478d5fefe5a2f1577e8f8
|
{
"intermediate": 0.35627666115760803,
"beginner": 0.11780298501253128,
"expert": 0.5259203910827637
}
|
32,537
|
Optimize code
|
bb23c2ed0d6e428eb8d6e51f72b0a6ad
|
{
"intermediate": 0.2213270664215088,
"beginner": 0.19202853739261627,
"expert": 0.5866443514823914
}
|
32,538
|
how can i optimize my dxvk for wow 3.3.5a wotlk warmane for my hardware specs? amd 14 core 28 thread newest cpu. amd 7900 xt gpu with 20gb vram with 64gb system ram. desktop i want max graphics with still good performance. the same uses a 32bit exe but i patched it to up to 4gb of memory. I use free sync and vsync in my amd drivers. 3840x2160 42" monitor @ 120hz
dxvk.enableAsync = True
dxvk.numCompilerThreads = 14
dxvk.numAsyncThreads = 14
dxvk.maxFrameRate = 0
d3d9.maxFrameLatency = 1
d3d9.numBackBuffers = 2
d3d9.presentInterval = 0
d3d9.tearFree = Auto
d3d9.maxAvailableMemory = 4096
d3d9.evictManagedOnUnlock = True
d3d9.allowDiscard = True
d3d9.samplerAnisotropy = 16
d3d9.invariantPosition = False
d3d9.memoryTrackTest = True
d3d9.noExplicitFrontBuffer = True
d3d9.strictConstantCopies = False
d3d9.lenientClear = True
d3d9.longMad = False
d3d9.floatEmulation = Auto
d3d9.forceSwapchainMSAA = 0
d3d9.supportVCache = False
d3d9.forceSamplerTypeSpecConstants = False
dxvk.useRawSsbo = False
dxgi.maxDeviceMemory = 0
dxgi.maxSharedMemory = 0
dxgi.customVendorId = 0
dxgi.customDeviceId = 0
dxgi.customDeviceDesc = ""
dxvk.logLevel = none
dxvk.debugName = False
dxvk.debugOverlay = False
d3d9.shaderModel = 3
d3d9.dpiAware = True
I am on windows 11 using 10 bit panel oled with reshade with a sharpening filter and a clarity filter. Can you optimize my dxvk.conf All i care is about the single file. I don't want any explanation or anything. just the tweaks nothing else.
|
49ec40d3efff130e042238bf71f9950a
|
{
"intermediate": 0.4274168610572815,
"beginner": 0.338985800743103,
"expert": 0.23359732329845428
}
|
32,539
|
MATLAB:
Newton's method
The method is based on determining the value of the first and second derivatives of the function f. The initial point x1 should be close enough to the minimum sought. Then, the next point is determined according to the formula xk+1 = xk - f'(xk)/f''(xk). The search for the minimum should end when the condition |f'(xk)| < eps is met.
Usage:
[x2, n2] = method(f, a, b, eps);
Numerical calculation of derivatives at point x
function df = f_prime(x)
h = 1e-5;
df = (f(x + h) - f(x - h)) / (2 * h);
end
function d2f = f_double_prime(x)
h = 1e-5;
d2f = (f(x + h) - 2 * f(x) + f(x - h)) / h^2;
end
|
bba6a7a688d874c8252fa33860333516
|
{
"intermediate": 0.3094692826271057,
"beginner": 0.2573980987071991,
"expert": 0.4331326186656952
}
|
32,540
|
i wrote this in rust:
#![no_std]
#![no_main]
use core::panic::PanicInfo;
#[no_mangle]
pub extern "C" fn _start() -> ! {
loop {}
}
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
loop {}
}
throws:
Compiling os v0.1.0 (/Users/alexkatkov/Programs/Rust/os)
error[E0463]: can't find crate for `core`
|
= note: the `os` target may not be installed
= help: consider downloading the target with `rustup target add os`
error[E0463]: can't find crate for `compiler_builtins`
error[E0463]: can't find crate for `core`
--> src/main.rs:4:5
|
4 | use core::panic::PanicInfo;
| ^^^^ can't find crate
|
= note: the `os` target may not be installed
= help: consider downloading the target with `rustup target add os`
error: requires `sized` lang_item
For more information about this error, try `rustc --explain E0463`.
error: could not compile `os` (bin "os") due to 4 previous errors
|
5f9e27d3344e069727252bb757d76007
|
{
"intermediate": 0.1919230967760086,
"beginner": 0.6375399827957153,
"expert": 0.17053693532943726
}
|
32,541
|
$(document).ready(function() {
$('.form-group').addClass('hidden');
$('.form-group').first().removeClass('hidden');
$('.form-control').on('change', function() {
$(this).closest('.form-group').next().removeClass('hidden');
});
}); <div id="first-step">
<img class="modal-img" id="image-placeholder" src="{% static 'img/building.jpg' %}" alt="Изображение">
<form id="first-form" class="m-t" role="form" method="post">
{% csrf_token %}
{% for field in form %}
<div class="form-group"
{% if field.field.required %}
aria-required="true"
{% else %}
aria-required="false"
{% endif %}
>
<div class="container-input">
{{ field|addclass:'select2_demo_1 form-control' }}
<span class="fa fa-question-circle tooltip-icon tooltip-{{ forloop.counter }}" data-toggle="tooltip" title="{{ field.help_text|safe }}"></span>
</div>
{% if field.help_text %}
<small id="{{ field.id_for_label }}-help" class="form-text text-muted">
{{ field.help_text|safe }}
</small>
{% endif %}
</div>
{% endfor %}
<input class="input-invisible" type="text" name="longitude" id="longitude">
<input class="input-invisible" type="text" name="latitude" id="latitude">
<div class="container-input">
<div contenteditable="true" name="address" id="address" class="form-control"></div>
<span class="fa fa-question-circle tooltip-icon tooltip-3" data-toggle="tooltip" title="Задайте адрес сайта"></span>
</div>
<small id="id_address-help" class="form-text text-muted mb">
Задайте адрес сайта
</small>
<div class="yandex-map" id="yandex-map"></div>
<button type="button" id="next-button" title="Новая прощадка"
class="btn btn-primary block full-width m-b">Добавить площадку
</button>
</form>
</div> Make it possible to select any option from the field, not only when the first value is changed to a different one; the first value should also be selectable
|
77765e6fa1b8d9daf4cb27898c0900ac
|
{
"intermediate": 0.3992963135242462,
"beginner": 0.522209644317627,
"expert": 0.07849404960870743
}
|
32,542
|
private void GSMFilterOTPBtn_Click(object sender, EventArgs e)
{
var filter = GSMFilterOTPTextBox.Text.Trim();
var messages = GlobalVar.Messages.OrderByDescending(mess => mess.RecievedTime);
foreach (var gsmCom in GSMComBingingList)
{
var messageOTP = messages.FirstOrDefault(mess =>
mess.Sender.Equals(filter, StringComparison.OrdinalIgnoreCase)
&& mess.PhoneNumber == gsmCom.PhoneNumber
&& mess.OTPCode != string.Empty);
if (messageOTP != null)
{
gsmCom.OTP = messageOTP.OTPCode;
gsmCom.OTPTime = messageOTP.RecievedTime.ToString();
}
else
{
gsmCom.OTP = string.Empty;
gsmCom.OTPTime = string.Empty;
}
}
}
I don't want to display spaces before and after the string
|
228a3423c7aa1db62f71a5d32235449e
|
{
"intermediate": 0.44165438413619995,
"beginner": 0.29904505610466003,
"expert": 0.2593006193637848
}
|
32,543
|
What does logprob exactly mean in the Viterbi algorithm output?
|
a98bfef5de077768571f2e024e2bbb4c
|
{
"intermediate": 0.13558819890022278,
"beginner": 0.07642115652561188,
"expert": 0.7879905700683594
}
|
32,544
|
<form action="bookingConfirmationPage.jsp" method="post">
<label for="roomSelection">Select Room:</label>
<select id="roomSelection" name="room">
<option value="room1">Room 1 - $100 per night</option>
<option value="room2">Room 2 - $120 per night</option>
<option value="room3">Room 3 - $90 per night</option>
<!-- Add options for more rooms -->
</select>
<div class="dropdown">
<label for="dropdown-toggle" class="dropdown-toggle">1客房,1房客▾</label>
<div class="dropdown-content">
<label for="guests">房客:</label>
<button id="guests-minus">-</button>
<span id="guests-count">1</span>
<button id="guests-plus">+</button>
<label for="room-adults">每客房成人人数:</label>
<button id="room-adults-minus">-</button>
<span id="room-adults-count">1</span>
<button id="room-adults-plus">+</button>
<label for="children">儿童:</label>
<button id="children-minus">-</button>
<span id="children-count">0</span>
<button id="children-plus">+</button>
<div id="children-ages-container"></div>
</div>
</div>
<script>
// Get the dropdown elements
var dropdownToggle = document.querySelector(".dropdown-toggle");
var dropdownContent = document.querySelector(".dropdown-content");
// Get the children-ages container element
var childrenAgesContainer = document.getElementById("children-ages-container");
// Click handler for the dropdown toggle
dropdownToggle.addEventListener("click", function() {
if (dropdownContent.style.display === "none") {
dropdownContent.style.display = "block";
} else {
dropdownContent.style.display = "none";
}
});
// Get the plus/minus buttons and counter elements
var guestsMinusBtn = document.getElementById("guests-minus");
var guestsCount = document.getElementById("guests-count");
var guestsPlusBtn = document.getElementById("guests-plus");
var roomAdultsMinusBtn = document.getElementById("room-adults-minus");
var roomAdultsCount = document.getElementById("room-adults-count");
var roomAdultsPlusBtn = document.getElementById("room-adults-plus");
var childrenMinusBtn = document.getElementById("children-minus");
var childrenCount = document.getElementById("children-count");
var childrenPlusBtn = document.getElementById("children-plus");
// Update the text shown on the dropdown toggle
function updateDropdownText() {
var guests = parseInt(guestsCount.innerText);
var roomAdults = parseInt(roomAdultsCount.innerText);
var children = parseInt(childrenCount.innerText);
var totalGuests = guests * (roomAdults + children);
dropdownToggle.innerText = guests + "客房," + totalGuests + "房客 ";
}
// Click handler for the guests minus button
guestsMinusBtn.addEventListener("click", function() {
var count = parseInt(guestsCount.innerText);
if (count > 1) {
guestsCount.innerText = count - 1;
updateDropdownText();
}
});
// Click handler for the guests plus button
guestsPlusBtn.addEventListener("click", function() {
var count = parseInt(guestsCount.innerText);
if (count < 2) {
guestsCount.innerText = count + 1;
updateDropdownText();
}
});
// Click handler for the adults-per-room minus button
roomAdultsMinusBtn.addEventListener("click", function() {
var count = parseInt(roomAdultsCount.innerText);
if (count > 1) {
roomAdultsCount.innerText = count - 1;
updateDropdownText();
}
});
// Click handler for the adults-per-room plus button
roomAdultsPlusBtn.addEventListener("click", function() {
var count = parseInt(roomAdultsCount.innerText);
if (count < 9) {
roomAdultsCount.innerText = count + 1;
updateDropdownText();
}
});
// Click handler for the children minus button
childrenMinusBtn.addEventListener("click", function() {
var count = parseInt(childrenCount.innerText);
if (count > 0) {
childrenCount.innerText = count - 1;
removeChildAgeSelect();
updateDropdownText();
}
});
// Click handler for the children plus button
childrenPlusBtn.addEventListener("click", function() {
var count = parseInt(childrenCount.innerText);
if (count < 4) {
childrenCount.innerText = count + 1;
createChildAgeSelect(count + 1);
updateDropdownText();
}
});
// Create a child-age select box
function createChildAgeSelect(childIndex) {
var childAgeSelect = document.createElement("select");
childAgeSelect.id = "child-age-" + childIndex;
childAgeSelect.name = "childAge" + childIndex;
// Add the age options
var ageOptions = ["0岁", "1岁", "2岁"];
for (var i = 0; i < ageOptions.length; i++) {
var option = document.createElement("option");
option.value = i;
option.innerText = ageOptions[i];
childAgeSelect.appendChild(option);
}
childrenAgesContainer.appendChild(childAgeSelect);
}
// Remove the last child-age select box
function removeChildAgeSelect() {
var lastChildIndex = parseInt(childrenCount.innerText) + 1;
var lastChildAgeSelect = document.getElementById("child-age-" + lastChildIndex);
if (lastChildAgeSelect) {
childrenAgesContainer.removeChild(lastChildAgeSelect);
}
}
</script>
<label for="specialRate">Special Rate:</label>
<select id="specialRate" name="specialRate">
<option value="none">No Special Rate</option>
<option value="government">Government/Military</option>
<option value="business">Business</option>
<option value="special offer number">特惠编号</option>
<option value="company or group number">公司或团体编号</option>
</select>
<div id="specialNumberContainer" style="display: none;">
<label for="specialNumber">Special Number:</label>
<input type="text" id="specialNumber" name="specialNumber">
</div>
<div id="companyNumberContainer" style="display: none;">
<label for="companyNumber">Company/Group Number:</label>
<input type="text" id="companyNumber" name="companyNumber">
</div>
<script>
var specialRateSelect = document.getElementById("specialRate");
var specialNumberContainer = document.getElementById("specialNumberContainer");
var companyNumberContainer = document.getElementById("companyNumberContainer");
specialRateSelect.addEventListener("change", function() {
if (specialRateSelect.value === "special offer number") {
specialNumberContainer.style.display = "block";
companyNumberContainer.style.display = "none";
}
else if (specialRateSelect.value === "company or group number"){
companyNumberContainer.style.display = "block";
specialNumberContainer.style.display = "none";
}
else {
specialNumberContainer.style.display = "none";
companyNumberContainer.style.display = "none";
}
});
</script>
<label for="usePoints">Use Points:</label>
<input type="checkbox" id="usePoints" name="usePoints">
<label for="accessibleRoom">Accessible Room:</label>
<input type="checkbox" id="accessibleRoom" name="accessibleRoom">
<label for="checkInDate">Check-in Date:</label>
<input type="date" id="checkInDate" name="checkInDate">
<label for="checkOutDate">Check-out Date:</label>
<input type="date" id="checkOutDate" name="checkOutDate">
<input type="submit" value="Submit">
</form>
bookingConfirmationPage.jsp should display the selection and booking of guest rooms and suites; please provide the relevant code
|
2337a47b1959b2343ef6b4094ed1da1c
|
{
"intermediate": 0.33898860216140747,
"beginner": 0.5221661925315857,
"expert": 0.1388452649116516
}
|
32,545
|
With the ECOC strategy for classifying seven different types of bacteria, what are the minimum and the
maximum lengths of the codewords? Redundant codes (i.e. a code where all classes get the same label, or
codes that describe the same class division) should not be included for maximum length.
What are the maximum and minimum length?
|
93fa7c774a2b258d10fe6bef856b144c
|
{
"intermediate": 0.25080248713493347,
"beginner": 0.34493133425712585,
"expert": 0.40426623821258545
}
|
32,546
|
I have 2 dice, one is fair and the other one is loaded. I select one of them with equal probability and roll it. Then after every roll there is a 0.9 probability that I roll the same die next time and 0.1 I switch to the other one. I need to calculate the probability of having a fair die always rolled for 300 rolls. Can this be done by using CategoricalHMM from hmmlearn package in Python?
|
7e8f3c1539e3ed371b423c9d3f52fcdd
|
{
"intermediate": 0.41995325684547424,
"beginner": 0.0794404074549675,
"expert": 0.5006063580513
}
|
32,547
|
gpt-crawler
Public
BuilderIO/gpt-crawler
2 branches
1 tag
Latest commit
@steve8708
steve8708 Merge pull request #67 from Daethyra/main
…
82b70f2
13 hours ago
Git stats
97 commits
Files
Type
Name
Latest commit message
Commit time
.github/workflows
chore: add build step to build workflow
5 days ago
containerapp
modified config.ts to fix containerized execution
5 days ago
src
Update configSchema in src/config.ts
4 days ago
.dockerignore
modified: .dockerignore
yesterday
.gitignore
Merge pull request #58 from luissuil/main
14 hours ago
.releaserc
chore(ci): release workflow
5 days ago
Dockerfile
Continuing file reversion for sake of PR clarity
yesterday
License
Create License
16 hours ago
README.md
Add resourceExclusions to Config type
5 days ago
config.ts
cleanup
last week
package-lock.json
chore(release): 1.0.0 [skip ci]
14 hours ago
package.json
Merge pull request #67 from Daethyra/main
13 hours ago
tsconfig.json
build: use strict ts mode
5 days ago
README.md
GPT Crawler
Crawl a site to generate knowledge files to create your own custom GPT from one or multiple URLs
Gif showing the crawl run
Example
Get started
Running locally
Clone the repository
Install dependencies
Configure the crawler
Run your crawler
Alternative methods
Running in a container with Docker
Running as a CLI
Development
Upload your data to OpenAI
Create a custom GPT
Create a custom assistant
Contributing
Example
Here is a custom GPT that I quickly made to help answer questions about how to use and integrate Builder.io by simply providing the URL to the Builder docs.
This project crawled the docs and generated the file that I uploaded as the basis for the custom GPT.
Try it out yourself by asking questions about how to integrate Builder.io into a site.
Note that you may need a paid ChatGPT plan to access this feature
Get started
Running locally
Clone the repository
Be sure you have Node.js >= 16 installed.
git clone https://github.com/builderio/gpt-crawler
Install dependencies
npm i
Configure the crawler
Open config.ts and edit the url and selectors properties to match your needs.
E.g. to crawl the Builder.io docs to make our custom GPT you can use:
export const defaultConfig: Config = {
url: "https://www.builder.io/c/docs/developers",
match: "https://www.builder.io/c/docs/**",
selector: `.docs-builder-container`,
maxPagesToCrawl: 50,
outputFileName: "output.json",
};
See config.ts for all available options. Here is a sample of the common config options:
type Config = {
/** URL to start the crawl, if sitemap is provided then it will be used instead and download all pages in the sitemap */
url: string;
/** Pattern to match against for links on a page to subsequently crawl */
match: string;
/** Selector to grab the inner text from */
selector: string;
/** Don't crawl more than this many pages */
maxPagesToCrawl: number;
/** File name for the finished data */
outputFileName: string;
/** Optional resources to exclude
*
* @example
* ['png','jpg','jpeg','gif','svg','css','js','ico','woff','woff2','ttf','eot','otf','mp4','mp3','webm','ogg','wav','flac','aac','zip','tar','gz','rar','7z','exe','dmg','apk','csv','xls','xlsx','doc','docx','pdf','epub','iso','dmg','bin','ppt','pptx','odt','avi','mkv','xml','json','yml','yaml','rss','atom','swf','txt','dart','webp','bmp','tif','psd','ai','indd','eps','ps','zipx','srt','wasm','m4v','m4a','webp','weba','m4b','opus','ogv','ogm','oga','spx','ogx','flv','3gp','3g2','jxr','wdp','jng','hief','avif','apng','avifs','heif','heic','cur','ico','ani','jp2','jpm','jpx','mj2','wmv','wma','aac','tif','tiff','mpg','mpeg','mov','avi','wmv','flv','swf','mkv','m4v','m4p','m4b','m4r','m4a','mp3','wav','wma','ogg','oga','webm','3gp','3g2','flac','spx','amr','mid','midi','mka','dts','ac3','eac3','weba','m3u','m3u8','ts','wpl','pls','vob','ifo','bup','svcd','drc','dsm','dsv','dsa','dss','vivo','ivf','dvd','fli','flc','flic','flic','mng','asf','m2v','asx','ram','ra','rm','rpm','roq','smi','smil','wmf','wmz','wmd','wvx','wmx','movie','wri','ins','isp','acsm','djvu','fb2','xps','oxps','ps','eps','ai','prn','svg','dwg','dxf','ttf','fnt','fon','otf','cab']
*/
resourceExclusions?: string[];
};
Run your crawler
npm start
Alternative methods
Running in a container with Docker
To obtain the output.json with a containerized execution, go into the containerapp directory. Modify the config.ts the same as above; the output.json file should be generated in the data folder. Note: the outputFileName property in the config.ts file in the containerapp folder is configured to work with the container.
Upload your data to OpenAI
The crawl will generate a file called output.json at the root of this project. Upload that to OpenAI to create your custom assistant or custom GPT.
Create a custom GPT
Use this option for UI access to your generated knowledge that you can easily share with others
Note: you may need a paid ChatGPT plan to create and use custom GPTs right now
Go to https://chat.openai.com/
Click your name in the bottom left corner
Choose "My GPTs" in the menu
Choose "Create a GPT"
Choose "Configure"
Under "Knowledge" choose "Upload a file" and upload the file you generated
Gif of how to upload a custom GPT
Create a custom assistant
Use this option for API access to your generated knowledge that you can integrate into your product.
Go to https://platform.openai.com/assistants
Click "+ Create"
Choose "upload" and upload the file you generated
Gif of how to upload to an assistant
Contributing
Know how to make this project better? Send a PR!
Made with love by Builder.io
About
Crawl a site to generate knowledge files to create your own custom GPT from a URL
www.builder.io/blog/custom-gpt
License: ISC
|
5594b320c7001b11edc847ecbddd2182
|
{
"intermediate": 0.41574978828430176,
"beginner": 0.23659905791282654,
"expert": 0.3476511538028717
}
|
32,548
|
split the text in flutter into two lines
|
a05ecde446700e8953f6417e8a8c70c1
|
{
"intermediate": 0.3718467652797699,
"beginner": 0.25856125354766846,
"expert": 0.36959195137023926
}
|
32,549
|
using System.Collections.Generic;
namespace yield
{
public static class ExpSmoothingTask
{
public static IEnumerable<DataPoint> SmoothExponentialy(this IEnumerable<DataPoint> data, double alpha)
{
var isFirstItem = true;
double previousItem = 0;
foreach (var e in data)
{
var item = new DataPoint();
item.AvgSmoothedY = e.AvgSmoothedY;
item.MaxY = e.MaxY;
item.OriginalY = e.OriginalY;
item.X = e.X;
if(isFirstItem)
{
isFirstItem = false;
previousItem = e.OriginalY;
}
else
{
previousItem = alpha * e.OriginalY + (1 - alpha) * previousItem;
}
item.ExpSmoothedY = previousItem;
yield return item;
}
}
}
}
I get this error:
/app/checking/EthalonSolutions.cs(41,9): warning CS8602: Dereference of a possibly null reference. [/app/NUnitTestRunner.csproj]
/app/checking/EthalonSolutions.cs(43,35): warning CS8602: Dereference of a possibly null reference. [/app/NUnitTestRunner.csproj]
/app/checking/EthalonSolutions.cs(47,36): warning CS8602: Dereference of a possibly null reference. [/app/NUnitTestRunner.csproj]
/app/MovingMaxTask.cs(16,9): warning CS8602: Dereference of a possibly null reference. [/app/NUnitTestRunner.csproj]
/app/MovingMaxTask.cs(18,35): warning CS8602: Dereference of a possibly null reference. [/app/NUnitTestRunner.csproj]
/app/MovingMaxTask.cs(22,36): warning CS8602: Dereference of a possibly null reference. [/app/NUnitTestRunner.csproj]
/app/ExpSmoothingTask.cs(14,32): error CS1729: 'DataPoint' does not contain a constructor that takes 0 arguments [/app/NUnitTestRunner.csproj]
/app/ExpSmoothingTask.cs(15,17): error CS0272: The property or indexer 'DataPoint.AvgSmoothedY' cannot be used in this context because the set accessor is inaccessible [/app/NUnitTestRunner.csproj]
/app/ExpSmoothingTask.cs(16,17): error CS0272: The property or indexer 'DataPoint.MaxY' cannot be used in this context because the set accessor is inaccessible [/app/NUnitTestRunner.csproj]
/app/ExpSmoothingTask.cs(17,17): error CS0191: A readonly field cannot be assigned to (except in a constructor or init-only setter of the type in which the field is defined or a variable initializer) [/app/NUnitTestRunner.csproj]
/app/ExpSmoothingTask.cs(18,17): error CS0191: A readonly field cannot be assigned to (except in a constructor or init-only setter of the type in which the field is defined or a variable initializer) [/app/NUnitTestRunner.csproj]
/app/ExpSmoothingTask.cs(28,17): error CS0272: The property or indexer 'DataPoint.ExpSmoothedY' cannot be used in this context because the set accessor is inaccessible [/app/NUnitTestRunner.csproj]
The build failed. Fix the build errors and run again.
What should I do?!
|
0d1ba4964c419f1044fb6f3d58e6adf9
|
{
"intermediate": 0.301104336977005,
"beginner": 0.4318719506263733,
"expert": 0.26702365279197693
}
|
32,550
|
hey! I wanna make an IoT project for myself: an 'Automatic WiFi Door Opener'. For this purpose I need to write code, but unfortunately I don't know the specific C++ used in the Arduino IDE. For my project I have decided to use this hardware: ESP32-CAM module, servo motor, FTDI module (for programming the ESP32-CAM), doorbell sensor (that senses whether the doorbell rings or not), and a Hall sensor (Hall magnetic sensor for sensing whether the door opens or not). For software I need to make a web application (for controlling the hardware and for displaying the ESP32 camera output). In the web application there are two buttons (one for opening the door, and one for capturing images from the ESP32 camera); there are also two windows (one for displaying WiFi status as Connected, Not-connected, or Connecting.., plus the SSID, and the other window is used to stream MJPEG video from the ESP32-CAM only when the doorbell rings). Now I will tell you the working so that you can understand better: when someone presses the doorbell, the system sends MJPEG video to the web app so that I can see who is there; if the person is known, I press the 'open the door' button on the web app and the system should open the door! Then the person enters the home; as the Hall sensor senses the door is now open, it deactivates the whole system, which eventually stops the MJPEG video streaming to the web app. And when the doorbell rings again the whole process starts again seamlessly. Furthermore, I want my ESP32-CAM always connected to WiFi for a seamless connection, and it should try to reconnect to the known WiFi when the connection is lost. I also want the ESP32-CAM to send MJPEG video to the web app only when the doorbell rings, and when I press the 'Capture image' button on the web application, even when the doorbell has not rung, the module should capture images and store them on the module's memory card! Well, now you understand the whole process; I want you to please write Arduino code for the ESP32-CAM module for this small IoT project!
|
80d235073f38562d9aea3ed456af3e25
|
{
"intermediate": 0.4865524470806122,
"beginner": 0.22995226085186005,
"expert": 0.2834952473640442
}
|
32,551
|
import random
print("Ask the Magic Python Ball a question:")
question = input()
print("")
print("Loading...")
num = random.randint(1,13)
if num == 1:
print("")
print("Magic Python Ball: Ask again later, I'm not gonna answer your stupid question.")
elif num == 2:
print("")
print("Magic Python Ball: I'm not going to answer just to be petty. ")
elif num == 3:
print("")
print("Magic Python Ball: Totally not, bro.")
elif num == 4:
print("")
print("Magic Python Ball: Nah")
elif num == 5:
print("")
print("Magic Python Ball: IDK")
elif num == 6:
print("")
print("Magic Python Ball: Yuh")
elif num == 7:
print("")
print("Magic Python Ball: Nuh uh")
elif num == 8:
print("")
print("Magic Python Ball: I uhhhh")
elif num == 9:
print("")
print("Magic Python Ball: Sureeeee")
elif num == 10:
print("")
print("Magic Python Ball: YOU can solve this question.")
elif num == 11:
print("")
print("Magic Python Ball: You can take a guess on this one.")
elif num == 12:
print("")
print("Magic Python Ball: Believe whatever you want.")
elif num == 13:
print("")
print("Magic Python Ball: Error Code: 2904 (This is a real Error Code and isn't me trying to avoid your question.)")
I want this to loop back to the beginning after a question is answered.
|
4349bf1b514974b76ba3bd2a9a751006
|
{
"intermediate": 0.2872774302959442,
"beginner": 0.39593392610549927,
"expert": 0.3167886435985565
}
|
32,552
|
This program has a bug, which means we need to fix it! This bug is a logic error.
RULE: A function will not run until it is called in the program.
Click Run and watch the stage to see what's wrong.
Nothing happens because there is no Function Call!
Debug the program by calling the function. Notice, the function's name is moon_scene().
def moon_scene():
moon_scene()
stage.set_background("moon")
sprite = codesters.Sprite("rocket")
sprite.say("3 - 2 - 1...")
stage.wait(1)
sprite.say("... blast off!")
sprite.move_up(550)
|
ec9ec3cf14894e889ec851eab76a27df
|
{
"intermediate": 0.43412259221076965,
"beginner": 0.35923391580581665,
"expert": 0.2066434770822525
}
|
32,553
|
make a raycast in godot4 on the player character that checks in a square. I will use it to check the surroundings when you go into platform view to determine the layout of the platformer, so it will be a top down with a squared raycast that will decide what will be in the platformer or not
|
553a38680165fbbd099db49daef25dc7
|
{
"intermediate": 0.5466156601905823,
"beginner": 0.10533283650875092,
"expert": 0.348051518201828
}
|
32,554
|
I have 2 dice, one is fair and the other one is loaded. I select one of them with equal probability and roll it. Then after every roll there is a 0.9 probability that I roll the same die next time and 0.1 I switch to the other one. I have the results as a sequence for a total of 300 rolls. Based on that sequence I need to calculate the probability of having all rolls rolled with a fair die. Can this be done by using CategoricalHMM from hmmlearn package in Python?
|
e4f0378012ccb0510f241f80ed810083
|
{
"intermediate": 0.39273783564567566,
"beginner": 0.07695946097373962,
"expert": 0.5303027629852295
}
|
32,555
|
how would i get the error code 'TypeError: __init__() takes 3 positional arguments but 5 were given', in the code:
class Animal(object):
def __init__(self, s, n):
self.state = s
self.size = n
def getState(self):
return self._state
def getSize(size):
return self._size
def feed(self):
self._size +=1
print(self._state, "has been fed")
class Fish(Animal):
def __init__(self, s, n, m):
super().__init__(s,n)
self.maxSize = m
def setMaxSize(self, m):
self.maxSize = m
def feed(self):
self._size += 2
print("fed")
if n >= m:
print("BIG")
class Duck(Animal):
def __init__(self,s,n,m):
super().__init__(s,n,m)
def stopFeed(self):
if n == 5:
print("big duck")
|
98f1cafe857bd42da2d59a42c7711b22
|
{
"intermediate": 0.2828770577907562,
"beginner": 0.6145075559616089,
"expert": 0.10261531919240952
}
|
32,556
|
audit solidity code
for (uint256 i = 0; i < getRoleMemberCount(role); i++) {
address account = getRoleMember(role, getRoleMemberCount(role) - 1 - i);
if (account != address(0)) {
_revokeRole(role, account);
_roleMembers[role].remove(account);
}
}
|
c2ce3ea84bd700a263dfeffccc2d622d
|
{
"intermediate": 0.4663965702056885,
"beginner": 0.25775694847106934,
"expert": 0.27584654092788696
}
|
32,557
|
How do I fetch records with other statuses in this query, i.e. specify several statuses? return dataService.selectFromWhere(QAbsDisposalCaption.absDisposalCaption, QAbsDisposalCaption.class, (p) -> p.state.eq(AbsState.NOT_PASS).and(p.executing.eq(false))).fetch();
|
227938fcb708738601834884236a7ff8
|
{
"intermediate": 0.39727693796157837,
"beginner": 0.3014518916606903,
"expert": 0.3012712597846985
}
|
32,558
|
You are an expert in AI-assisted writing and also a professional developmental editor for fiction books. I need help describing a very minor character in my outline for AI parsing. The character operates behind the scenes and doesn't appear in the plot himself; he is only mentioned in letters and in other characters' conversations.
I used the following format:
<storyRole>
Only mentioned. Potential ally to Rhaenyra.
</storyRole>
But the ai tool keeps creating scenes with this character appearing in the text himself.
|
8fe586dd1d7c164ed3d3da34de020156
|
{
"intermediate": 0.306123286485672,
"beginner": 0.28215816617012024,
"expert": 0.411718487739563
}
|
32,559
|
/*
This is a simple MJPEG streaming webserver implemented for AI-Thinker ESP32-CAM
and ESP-EYE modules.
This is tested to work with VLC and Blynk video widget and can support up to 10
simultaneously connected streaming clients.
Simultaneous streaming is implemented with FreeRTOS tasks.
Inspired by and based on this Instructable: $9 RTSP Video Streamer Using the ESP32-CAM Board
(https://www.instructables.com/id/9-RTSP-Video-Streamer-Using-the-ESP32-CAM-Board/)
Board: AI-Thinker ESP32-CAM or ESP-EYE
Compile as:
ESP32 Dev Module
CPU Freq: 240
Flash Freq: 80
Flash mode: QIO
Flash Size: 4Mb
Partition: Minimal SPIFFS
PSRAM: Enabled
*/
// ESP32 has two cores: APPlication core and PROcess core (the one that runs ESP32 SDK stack)
#define APP_CPU 1
#define PRO_CPU 0
#include "src/OV2640.h"
#include <WiFi.h>
#include <WebServer.h>
#include <WiFiClient.h>
#include <esp_bt.h>
#include <esp_wifi.h>
#include <esp_sleep.h>
#include <driver/rtc_io.h>
// Select camera model
//#define CAMERA_MODEL_WROVER_KIT
#define CAMERA_MODEL_ESP_EYE
//#define CAMERA_MODEL_M5STACK_PSRAM
//#define CAMERA_MODEL_M5STACK_WIDE
//#define CAMERA_MODEL_AI_THINKER
#include "camera_pins.h"
/*
Next one is an include with wifi credentials.
This is what you need to do:
1. Create a file called "home_wifi_multi.h" in the same folder OR under a separate subfolder of the "libraries" folder of Arduino IDE. (You are creating a "fake" library really - I called it "MySettings").
2. Place the following text in the file:
#define SSID1 "replace with your wifi ssid"
#define PWD1 "replace your wifi password"
3. Save.
Should work then
*/
#include "home_wifi_multi.h"
OV2640 cam;
WebServer server(80);
// ===== rtos task handles =========================
// Streaming is implemented with 3 tasks:
TaskHandle_t tMjpeg; // handles client connections to the webserver
TaskHandle_t tCam; // handles getting picture frames from the camera and storing them locally
TaskHandle_t tStream; // actually streaming frames to all connected clients
// frameSync semaphore prevents the frame buffer from being swapped while it is being streamed
SemaphoreHandle_t frameSync = NULL;
// Queue stores currently connected clients to whom we are streaming
QueueHandle_t streamingClients;
// Target frame rate we will try to achieve (frames per second)
const int FPS = 14;
// We will handle web client requests every 100 ms (10 Hz)
const int WSINTERVAL = 100;
// ======== Server Connection Handler Task ==========================
void mjpegCB(void* pvParameters) {
TickType_t xLastWakeTime;
const TickType_t xFrequency = pdMS_TO_TICKS(WSINTERVAL);
// Creating frame synchronization semaphore and initializing it
frameSync = xSemaphoreCreateBinary();
xSemaphoreGive( frameSync );
// Creating a queue to track all connected clients
streamingClients = xQueueCreate( 10, sizeof(WiFiClient*) );
//=== setup section ==================
// Creating RTOS task for grabbing frames from the camera
xTaskCreatePinnedToCore(
camCB, // callback
"cam", // name
4096, // stack size
NULL, // parameters
2, // priority
&tCam, // RTOS task handle
APP_CPU); // core
// Creating task to push the stream to all connected clients
xTaskCreatePinnedToCore(
streamCB,
"strmCB",
4 * 1024,
NULL, //(void*) handler,
2,
&tStream,
APP_CPU);
// Registering webserver handling routines
server.on("/mjpeg/1", HTTP_GET, handleJPGSstream);
server.on("/jpg", HTTP_GET, handleJPG);
server.onNotFound(handleNotFound);
// Starting webserver
server.begin();
//=== loop() section ===================
xLastWakeTime = xTaskGetTickCount();
for (;;) {
server.handleClient();
// After every server client handling request, we let other tasks run and then pause
taskYIELD();
vTaskDelayUntil(&xLastWakeTime, xFrequency);
}
}
// Commonly used variables:
volatile size_t camSize; // size of the current frame, byte
volatile char* camBuf; // pointer to the current frame
// ==== RTOS task to grab frames from the camera =========================
void camCB(void* pvParameters) {
TickType_t xLastWakeTime;
// A running interval associated with currently desired frame rate
const TickType_t xFrequency = pdMS_TO_TICKS(1000 / FPS);
// Mutex for the critical section of switching the active frames around
portMUX_TYPE xSemaphore = portMUX_INITIALIZER_UNLOCKED;
// Pointers to the 2 frames, their respective sizes and index of the current frame
char* fbs[2] = { NULL, NULL };
size_t fSize[2] = { 0, 0 };
int ifb = 0;
//=== loop() section ===================
xLastWakeTime = xTaskGetTickCount();
for (;;) {
// Grab a frame from the camera and query its size
cam.run();
size_t s = cam.getSize();
// If frame size is more that we have previously allocated - request 125% of the current frame space
if (s > fSize[ifb]) {
fSize[ifb] = s * 4 / 3;
fbs[ifb] = allocateMemory(fbs[ifb], fSize[ifb]);
}
// Copy current frame into local buffer
char* b = (char*) cam.getfb();
memcpy(fbs[ifb], b, s);
// Let other tasks run and wait until the end of the current frame rate interval (if any time left)
taskYIELD();
vTaskDelayUntil(&xLastWakeTime, xFrequency);
// Only switch frames around if no frame is currently being streamed to a client
// Wait on a semaphore until client operation completes
xSemaphoreTake( frameSync, portMAX_DELAY );
// Do not allow interrupts while switching the current frame
portENTER_CRITICAL(&xSemaphore);
camBuf = fbs[ifb];
camSize = s;
ifb++;
ifb &= 1; // this should produce 1, 0, 1, 0, 1 ... sequence
portEXIT_CRITICAL(&xSemaphore);
// Let anyone waiting for a frame know that the frame is ready
xSemaphoreGive( frameSync );
// Technically only needed once: let the streaming task know that we have at least one frame
// and it could start sending frames to the clients, if any
xTaskNotifyGive( tStream );
// Immediately let other (streaming) tasks run
taskYIELD();
// If streaming task has suspended itself (no active clients to stream to)
// there is no need to grab frames from the camera. We can save some juice
// by suspending the tasks
if ( eTaskGetState( tStream ) == eSuspended ) {
vTaskSuspend(NULL); // passing NULL means "suspend yourself"
}
}
}
// ==== Memory allocator that takes advantage of PSRAM if present =======================
char* allocateMemory(char* aPtr, size_t aSize) {
// Since the current buffer is too small, free it
if (aPtr != NULL) free(aPtr);
size_t freeHeap = ESP.getFreeHeap();
char* ptr = NULL;
// If memory requested is more than 2/3 of the currently free heap, try PSRAM immediately
if ( aSize > freeHeap * 2 / 3 ) {
if ( psramFound() && ESP.getFreePsram() > aSize ) {
ptr = (char*) ps_malloc(aSize);
}
}
else {
// Enough free heap - let's try allocating fast RAM as a buffer
ptr = (char*) malloc(aSize);
// If allocation on the heap failed, let's give PSRAM one more chance:
if ( ptr == NULL && psramFound() && ESP.getFreePsram() > aSize) {
ptr = (char*) ps_malloc(aSize);
}
}
// Finally, if the memory pointer is NULL, we were not able to allocate any memory, and that is a terminal condition.
if (ptr == NULL) {
ESP.restart();
}
return ptr;
}
// ==== STREAMING ======================================================
const char HEADER[] = "HTTP/1.1 200 OK\r\n" \
"Access-Control-Allow-Origin: *\r\n" \
"Content-Type: multipart/x-mixed-replace; boundary=123456789000000000000987654321\r\n";
const char BOUNDARY[] = "\r\n--123456789000000000000987654321\r\n";
const char CTNTTYPE[] = "Content-Type: image/jpeg\r\nContent-Length: ";
const int hdrLen = strlen(HEADER);
const int bdrLen = strlen(BOUNDARY);
const int cntLen = strlen(CTNTTYPE);
// ==== Handle connection request from clients ===============================
void handleJPGSstream(void)
{
// Can only accommodate 10 clients. The limit is a default for WiFi connections
if ( !uxQueueSpacesAvailable(streamingClients) ) return;
// Create a new WiFi Client object to keep track of this one
WiFiClient* client = new WiFiClient();
*client = server.client();
// Immediately send this client a header
client->write(HEADER, hdrLen);
client->write(BOUNDARY, bdrLen);
// Push the client to the streaming queue
xQueueSend(streamingClients, (void *) &client, 0);
// Wake up streaming tasks, if they were previously suspended:
if ( eTaskGetState( tCam ) == eSuspended ) vTaskResume( tCam );
if ( eTaskGetState( tStream ) == eSuspended ) vTaskResume( tStream );
}
// ==== Actually stream content to all connected clients ========================
void streamCB(void * pvParameters) {
char buf[16];
TickType_t xLastWakeTime;
TickType_t xFrequency;
// Wait until the first frame is captured and there is something to send
// to clients
ulTaskNotifyTake( pdTRUE, /* Clear the notification value before exiting. */
portMAX_DELAY ); /* Block indefinitely. */
xLastWakeTime = xTaskGetTickCount();
for (;;) {
// Default assumption we are running according to the FPS
xFrequency = pdMS_TO_TICKS(1000 / FPS);
// Only bother to send anything if there is someone watching
UBaseType_t activeClients = uxQueueMessagesWaiting(streamingClients);
if ( activeClients ) {
// Adjust the period to the number of connected clients
xFrequency /= activeClients;
// Since we are sending the same frame to everyone,
// pop a client from the front of the queue
WiFiClient *client;
xQueueReceive (streamingClients, (void*) &client, 0);
// Check if this client is still connected.
if (!client->connected()) {
// delete this client reference if s/he has disconnected
// and don't put it back on the queue anymore. Bye!
delete client;
}
else {
// Ok. This is an actively connected client.
// Let's grab a semaphore to prevent frame changes while we
// are serving this frame
xSemaphoreTake( frameSync, portMAX_DELAY );
client->write(CTNTTYPE, cntLen);
sprintf(buf, "%d\r\n\r\n", camSize);
client->write(buf, strlen(buf));
client->write((char*) camBuf, (size_t)camSize);
client->write(BOUNDARY, bdrLen);
// Since this client is still connected, push it to the end
// of the queue for further processing
xQueueSend(streamingClients, (void *) &client, 0);
// The frame has been served. Release the semaphore and let other tasks run.
// If there is a frame switch ready, it will happen now in between frames
xSemaphoreGive( frameSync );
taskYIELD();
}
}
else {
// Since there are no connected clients, there is no reason to waste battery running
vTaskSuspend(NULL);
}
// Let other tasks run after serving every client
taskYIELD();
vTaskDelayUntil(&xLastWakeTime, xFrequency);
}
}
const char JHEADER[] = "HTTP/1.1 200 OK\r\n" \
"Content-disposition: inline; filename=capture.jpg\r\n" \
"Content-type: image/jpeg\r\n\r\n";
const int jhdLen = strlen(JHEADER);
// ==== Serve up one JPEG frame =============================================
void handleJPG(void)
{
WiFiClient client = server.client();
if (!client.connected()) return;
cam.run();
client.write(JHEADER, jhdLen);
client.write((char*)cam.getfb(), cam.getSize());
}
// ==== Handle invalid URL requests ============================================
void handleNotFound()
{
String message = "Server is running!\n\n";
message += "URI: ";
message += server.uri();
message += "\nMethod: ";
message += (server.method() == HTTP_GET) ? "GET" : "POST";
message += "\nArguments: ";
message += server.args();
message += "\n";
server.send(200, "text/plain", message);
}
// ==== SETUP method ==================================================================
void setup()
{
// Setup Serial connection:
Serial.begin(115200);
delay(1000); // wait for a second to let Serial connect
// Configure the camera
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sscb_sda = SIOD_GPIO_NUM;
config.pin_sscb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.pixel_format = PIXFORMAT_JPEG;
// Frame parameters: pick one
// config.frame_size = FRAMESIZE_UXGA;
// config.frame_size = FRAMESIZE_SVGA;
// config.frame_size = FRAMESIZE_QVGA;
config.frame_size = FRAMESIZE_VGA;
config.jpeg_quality = 12;
config.fb_count = 2;
#if defined(CAMERA_MODEL_ESP_EYE)
pinMode(13, INPUT_PULLUP);
pinMode(14, INPUT_PULLUP);
#endif
if (cam.init(config) != ESP_OK) {
Serial.println("Error initializing the camera");
delay(10000);
ESP.restart();
}
// Configure and connect to WiFi
IPAddress ip;
WiFi.mode(WIFI_STA);
WiFi.begin(SSID1, PWD1);
Serial.print("Connecting to WiFi");
while (WiFi.status() != WL_CONNECTED)
{
delay(500);
Serial.print(F("."));
}
ip = WiFi.localIP();
Serial.println(F("WiFi connected"));
Serial.println("");
Serial.print("Stream Link: http://");
Serial.print(ip);
Serial.println("/mjpeg/1");
// Start mainstreaming RTOS task
xTaskCreatePinnedToCore(
mjpegCB,
"mjpeg",
4 * 1024,
NULL,
2,
&tMjpeg,
APP_CPU);
}
void loop() {
vTaskDelay(1000);
}
Can you please tell me how this IoT project using the ESP32-CAM module works? Tell me in detail, including the implementation. Also, is this code ready to be flashed to an ESP32-CAM module?
|
b013c852a467b3c39adf8d5ab37b1fdd
|
{
"intermediate": 0.38688161969184875,
"beginner": 0.3035695254802704,
"expert": 0.3095487654209137
}
|
32,560
|
how to redirect stdout and stderr to /dev/null for linux service
|
38eacc152413a6b6df4976ec7381fe72
|
{
"intermediate": 0.4758942723274231,
"beginner": 0.2673429250717163,
"expert": 0.2567628026008606
}
|
32,561
|
You are an expert in AI-assisted writing and also a professional developmental editor for fiction books. I need help describing a very minor character in my outline for AI parsing. The character is currently dead and doesn't appear in the plot himself; he is only mentioned in letters and in other characters' conversations.
I used the following format:
<storyRole>
Currently dead. Mentioned in other characters' talks.
</storyRole>
But the ai tool keeps creating scenes with this character appearing in the text himself.
|
558dfdbe918eae672e19f0b6ba9a6e1e
|
{
"intermediate": 0.27619460225105286,
"beginner": 0.29982224106788635,
"expert": 0.4239831864833832
}
|
32,562
|
local i = io.read()
while true do
if i == "the" then
print("Yes")
elseif i ~= "the" then
break
end
end
|
931ab76b393c2e59ba00f85e18a6b86b
|
{
"intermediate": 0.1525276154279709,
"beginner": 0.7760986089706421,
"expert": 0.07137373834848404
}
|
32,563
|
I want to extract the numbers from sentences similar to this one, but it will not always be the same; also, I only want to extract numbers with kg/kilo/hg/hekto/gram/g next to them:
Hej, jag skulle vilja lägga in 102 kilo.
so if it was that it would extract 102 kilo
but here
Hej, jag 15 skulle 230 vilja lägga in 102 kilo.
or
Hej, jag 102 kilo skulle vilja lägga in 40.
it still would only extract 102
|
0cd13a345219a2ba3db7b05a1ba326cf
|
{
"intermediate": 0.29166364669799805,
"beginner": 0.2613086402416229,
"expert": 0.44702771306037903
}
|
32,564
|
Hello! Can you please create a url shortener in python?
|
5271d521781014712ed8c952d4ac22a9
|
{
"intermediate": 0.6096484065055847,
"beginner": 0.1496810019016266,
"expert": 0.2406705617904663
}
|
32,565
|
Minecraft Fabric modding. I need to print all blocks in Minecraft to the console. For example: minecraft:stone, minecraft:grass_block ...
|
3839cd61fc02940cd5b00b77b58f732f
|
{
"intermediate": 0.5251967906951904,
"beginner": 0.15144315361976624,
"expert": 0.32336005568504333
}
|
32,566
|
instance.new("Vector3") to turn a block 180 degrees
|
e3491bacbe395f8c82a8b9ce969fc4c7
|
{
"intermediate": 0.3533744215965271,
"beginner": 0.36673346161842346,
"expert": 0.2798921465873718
}
|
32,567
|
this code should show me an animation but it doesn't. why?
clear all
clc
%% User Input
D = input('Diffusion coefficient (m^2/s): ');
t_end = input('End time (s): ');
L = input('Spatial dimension-L (m): ');
k = input('Reaction rate (1/s): ');
R = input('Retardation coefficient (R): ');
To = input('Tortuosity : ');
boundary_condition = input('Boundary condition ([1]: No-Flux, [2]: Perfectly Absorbing): ');
%% Parameters
M = 1;
t_start = 1;
dt = 1;
dx = 0.1;
dy = 0.1;
time_range = t_start:dt:t_end;
x_range = [-L:dx:L];
y_range = [-L:dy:L];
D_eff = D/R*To;
%% Initialize result matrix
Result = zeros(numel(y_range), numel(x_range), numel(time_range));
%% Compute concentrations
[X, Y, T] = meshgrid(x_range, y_range, time_range);
C_M = M ./ (4 * pi * T).^1.5 .* D_eff;
% Compute concentration based on the selected boundary condition
if boundary_condition == 1
c = exp((-X.^2) ./ (4 * D_eff * T) + (-Y.^2) ./ (4 * D_eff * T) - k * T) + exp((-X.^2) ./ (4 * D_eff * T) + (-(Y + 2 * L).^2) ./ (4 * D_eff * T) - k * T) + exp((-X.^2) ./ (4 * D_eff * T) + (-(Y - 2 * L).^2) ./ (4 * D_eff * T) - k * T);
elseif boundary_condition == 2
c = 0;
for n = -1:1:1
c = c + exp( (-X.^2)./(4*D_eff*T) + (-(Y+4*n*L).^2)./(4*D_eff*T)) - exp( (-X.^2)./(4*D_eff*T) + (-(Y+(4*n-2)*L).^2)./(4*D_eff*T));
end
else
error('Invalid choice for boundary condition. Choose 1 for No-Flux or 2 for Perfectly Absorbing.');
end
Result = C_M .* c;
%% Visualization (2D Plot)
for t = time_range
imagesc(Result(:, :, t));
title(strcat('time: ', num2str(t), ' s'))
xticks(1:(numel(x_range) - 1)/2:numel(x_range));
xticklabels({num2str(-L), num2str(0), num2str(L)})
xlabel('X-Axis (m)');
yticks(1:(numel(x_range) - 1)/2:numel(x_range));
yticklabels({num2str(L), num2str(0), num2str(-L)})
ylabel('Y-Axis (m)');
colorbar
hold off
pause(0.01)
colormap('jet');
drawnow
end
|
b50f887f281e71522ed7f61bc56b2ac1
|
{
"intermediate": 0.3607894778251648,
"beginner": 0.3757117986679077,
"expert": 0.2634987533092499
}
|
32,568
|
#include <iostream>
class UtoplyusVKostrule {
public:
UtoplyusVKostrule(int x, int y, int z) : width_(x), height_(y), depth_(z) {
arr_ = new int32_t[kNumLenght_ * x * y * z];
std::memset(arr_, 0, kNumLenght_ * x * y * z * sizeof(int32_t));
}
~UtoplyusVKostrule() {
delete[] arr_;
}
UtoplyusVKostrule(const UtoplyusVKostrule& other) : width_(other.width_), height_(other.height_), depth_(other.depth_) {
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
}
UtoplyusVKostrule& operator=(const UtoplyusVKostrule& other) {
if (this != &other) {
delete[] arr_;
width_ = other.width_;
height_ = other.height_;
depth_ = other.depth_;
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
}
return *this;
}
static UtoplyusVKostrule make_array(int x, int y, int z) {
return UtoplyusVKostrule(x, y, z);
}
UtoplyusVKostrule& operator()(int x, int y, int z) {
current_index_ = x + y * width_ + z * width_ * height_;
return *this;
}
UtoplyusVKostrule& operator=(int32_t value) {
int first_bit_index = current_index_ * kNumLenght_;
char* c_value = new char[kNumLenght_];
int a = -1;
while (value != 0) {
c_value[++a] = (value % 2) + '0';
value /= 2;
}
for (int i = 0; i < kNumLenght_; ++i) {
if (c_value[i] - '0') {
SetBit(arr_, first_bit_index + i);
}
}
delete[] c_value;
return *this;
}
private:
const int8_t kNumLenght_ = 17;
int32_t current_index_;
int32_t width_;
int32_t height_;
int32_t depth_;
int32_t* arr_;
bool TakeBit(const int32_t* value, uint32_t bit_position) {
return (value[bit_position / 32] >> (bit_position % 32)) & 1;
}
UtoplyusVKostrule& SetBit(int32_t* value, int32_t bit_position) {
value[bit_position / 32] |= (1 << (bit_position % 32));
return *this;
}
int32_t ToDec(int32_t* arr_, int32_t current_index_) const {
int first_bit_index = current_index_ * kNumLenght_;
int32_t decimal_value = 0;
int32_t exp = 1;
for (int i = first_bit_index + kNumLenght_ - 1; i >= first_bit_index; --i) {
decimal_value += (arr_[i / 32] >> (i % 32) & 1) * exp;
exp *= 2;
}
return decimal_value;
}
UtoplyusVKostrule& TwoImplement(int32_t* arr_) {
int32_t current_value = ToDec(arr_, current_index_);
int32_t inverted_value = ~current_value;
int32_t new_value = inverted_value + 1;
for (int i = 0; i < kNumLenght_; ++i) {
arr_[current_index_ / 32] = (arr_[current_index_ / 32] & ~(1 << (current_index_ % 32))) | ((new_value & 1) << (current_index_ % 32));
}
return *this;
}
friend UtoplyusVKostrule& operator+(const UtoplyusVKostrule& lhs, const UtoplyusVKostrule& rhs);
friend std::ostream& operator<<(std::ostream& stream, const UtoplyusVKostrule& array);
};
UtoplyusVKostrule& operator+(const UtoplyusVKostrule& lhs, const UtoplyusVKostrule& rhs) {
if (lhs.width_ != rhs.width_ || lhs.height_ != rhs.height_ || lhs.depth_ != rhs.depth_) {
throw std::invalid_argument("Dimensions of both arrays must match");
}
UtoplyusVKostrule result(lhs.width_, lhs.height_, lhs.depth_);
for (int z = 0; z < lhs.depth_; ++z) {
for (int y = 0; y < lhs.height_; ++y) {
for (int x = 0; x < lhs.width_; ++x) {
int32_t decimal_lhs = lhs.ToDec(lhs.arr_, x + y * lhs.width_ + z * lhs.width_ * lhs.height_);
int32_t decimal_rhs = rhs.ToDec(rhs.arr_, x + y * rhs.width_ + z * rhs.width_ * rhs.height_);
result(x, y, z) = decimal_lhs + decimal_rhs;
}
}
}
return result;
}
std::ostream& operator<<(std::ostream& stream, const UtoplyusVKostrule& array) {
int32_t decimal_value = array.ToDec(array.arr_, array.current_index_);
stream << decimal_value;
return stream;
}
int main() {
UtoplyusVKostrule a = UtoplyusVKostrule::make_array(1, 1, 1);
UtoplyusVKostrule b = UtoplyusVKostrule::make_array(1, 1, 1);
UtoplyusVKostrule c = UtoplyusVKostrule::make_array(1, 1, 1);
a(1, 1, 1) = 2;
b(1, 1, 1) = 4;
c = a + b;
std::cout << c(1, 1, 1);
}
/bin/sh: line 1: 17202 Segmentation fault: 11 "/Users/alex/labwork5-SPLOIT47/lib/"main
fix this
|
19b7cd0a8e98e1c777965a9515900f63
|
{
"intermediate": 0.3156784176826477,
"beginner": 0.43461933732032776,
"expert": 0.24970227479934692
}
|
32,569
|
programs like
Xmouse buttons control
|
9d42fe5643850cfc00e18a44c8d1f6f6
|
{
"intermediate": 0.3341737389564514,
"beginner": 0.23770780861377716,
"expert": 0.42811843752861023
}
|
32,570
|
I want you to generate code that implements GridSearchCV with the DBSCAN algorithm in Python.
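GridSearchCV assumes an estimator with predict(), which DBSCAN lacks, so it needs a custom scoring callable and a single full-data "split"; a sketch along those lines (the blob data, parameter grid, and silhouette scoring are illustrative assumptions, not from the question):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.model_selection import GridSearchCV

def dbscan_silhouette(estimator, X, y=None):
    # DBSCAN has no predict(); score the labels produced during fit.
    labels = estimator.labels_
    if len(set(labels)) < 2:
        return -1.0  # silhouette is undefined for a single label
    return silhouette_score(X, labels)

X, _ = make_blobs(n_samples=300, centers=[[0, 0], [5, 5], [10, 0]],
                  cluster_std=0.5, random_state=0)

param_grid = {"eps": [0.2, 0.5, 1.0, 5.0], "min_samples": [3, 5]}
# Clustering has no train/test notion, so "cross-validate" on one full split.
all_idx = np.arange(len(X))
search = GridSearchCV(DBSCAN(), param_grid,
                      scoring=dbscan_silhouette, cv=[(all_idx, all_idx)])
search.fit(X)
print(search.best_params_, round(search.best_score_, 3))
```

Because the scorer reads `estimator.labels_`, train and test indices must be the same full set; that is the price of forcing a cross-validation tool onto a density-based clusterer.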
|
2f2ec7ab0e17654b41d810c4c633e896
|
{
"intermediate": 0.14452402293682098,
"beginner": 0.05075075477361679,
"expert": 0.804725170135498
}
|
32,571
|
In C++ wxWidgets, I am trying to make a move function for a wxGrid: it takes the old position and the new position and moves the row data to the new position. So far it works perfectly for one item, but when I try to add a new item the order gets wrong; help me solve this issue.
This is how it works right now: A - B - C - D, I move D to 0, the output is: D - A - B - C, but when I move B to 1 the output is wrong: C - B - D - A, so the first position is now wrong, but the newly added one is correct.
This is my code:
void MoveRow(wxGrid* grid, int rowPos, int newRowPos)
{
grid->InsertRows(newRowPos);
for (int col = 0; col < grid->GetNumberCols(); col++)
{
grid->SetCellValue(newRowPos, col, grid->GetCellValue(rowPos + 1, col));
}
grid->DeleteRows(rowPos + 1);
}
std::vector<int> pinnedRows;
void FrameMain::OnPinRows(wxCommandEvent& event)
{
wxArrayString rowValues = Utils::GetSelectedRowValues(gridGames, 0);
if (rowValues.Count() >= 1)
{
for (size_t i = 0; i < rowValues.Count(); i++)
{
int row = games->GetGameRow(rowValues[i]);
std::vector<int>::iterator it = std::find(pinnedRows.begin(), pinnedRows.end(), row);
if (it == pinnedRows.end())
{
pinnedRows.push_back(row);
}
else
{
pinnedRows.erase(it);
}
}
SetPinnedRows();
}
}
void FrameMain::SetPinnedRows()
{
for (size_t i = 0; i < pinnedRows.size(); i++)
{
Utils::MoveRow(gridGames, pinnedRows[i], i);
}
SetPinnedRowsColor();
}
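The likely root cause: pinnedRows stores row indices captured at pin time, and every MoveRow shifts the positions of the rows after the moved one, so all but the most recent stored index go stale. A language-neutral sketch (Python, with hypothetical names) of an index-stable approach that resolves each pinned row's current position by key immediately before moving it:

```python
def set_pinned(rows, pinned_keys):
    """Move the pinned rows to the top, in pin order, keeping the rest in order."""
    for target, key in enumerate(pinned_keys):
        src = rows.index(key)          # look up the CURRENT position each time
        rows.insert(target, rows.pop(src))
    return rows

rows = ["A", "B", "C", "D"]
set_pinned(rows, ["D"])        # -> ["D", "A", "B", "C"]
set_pinned(rows, ["D", "B"])   # -> ["D", "B", "A", "C"]
print(rows)
```

In the wxGrid code this corresponds to storing stable keys (e.g. the game name) in pinnedRows instead of row indices, and calling games->GetGameRow(key) inside the SetPinnedRows loop rather than once at pin time.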
|
59199fe9c9de185217e8546a466c9322
|
{
"intermediate": 0.6514373421669006,
"beginner": 0.18109020590782166,
"expert": 0.16747243702411652
}
|
32,572
|
Let {a, b}* denote the set of all possible finite-length strings consisting of the symbols a and b, including the empty string. For two strings x, y, let xy denote their concatenation.
(a) Consider the set (also called a language) L = {ww : w ∈ {a, b}*}. Write a function that on input a string x tests and outputs whether or not x ∈ L.
(b) For any set of strings S, let S^i = {x1x2···xi : x1, x2, ···, xi ∈ S}, the set of all strings obtained by concatenating i arbitrary strings from S. Define S* as
S* = ∪_{i=0}^∞ S^i,
that is, S* is a set consisting of the empty string and all strings of finite length obtained by concatenating arbitrary elements of S.
Write a function that takes as input a string y and determines whether or not y ∈ L* as efficiently as possible. Here, L is the language defined in Part (a). You will need the function from the previous part here. Try to minimize the number of calls to the function from Part (a).
In the main() function,
- Read a string y.
- Call the function from the second part and output ‘Yes’ indicating that y ∈ L* or ‘No’ indicating that y ∉ L*.
You are not allowed to use any library functions for string manipulation other than strlen only.
Sample Input/Output:
1. Enter string: bbbbababbaabbaababaaabaa
Yes
2. Enter string: bbaabbaaaaba
No
3. Enter string: aaabbb
No
4. Enter string: aabb
Yes
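The assignment asks for C using only strlen; purely to illustrate the logic, the same idea in Python: x ∈ L is an equal-halves check, and y ∈ L* is a reachability DP over prefixes, where only even-length pieces need testing (every string in L has even length, which already roughly halves the calls to the part-(a) function):

```python
def in_L(x):
    # x is in L = {ww} iff x has even length and its two halves are equal
    n = len(x)
    return n % 2 == 0 and x[:n // 2] == x[n // 2:]

def in_L_star(y):
    # ok[i] is True iff the prefix y[:i] is a concatenation of strings from L
    n = len(y)
    ok = [False] * (n + 1)
    ok[0] = True  # the empty string is in L* by definition
    for i in range(2, n + 1):
        # only even-length pieces y[j:i] can be in L, so step j by 2
        for j in range(i - 2, -1, -2):
            if ok[j] and in_L(y[j:i]):
                ok[i] = True
                break
    return ok[n]

for s in ["bbbbababbaabbaababaaabaa", "bbaabbaaaaba", "aaabbb", "aabb"]:
    print(s, "Yes" if in_L_star(s) else "No")
```

The driver loop reproduces the four sample inputs; translating the DP back to C needs only strlen plus the two index loops.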
|
635f0472a6ac02242563417747b2e5d1
|
{
"intermediate": 0.33311793208122253,
"beginner": 0.2491770088672638,
"expert": 0.41770505905151367
}
|
32,573
|
Let {a, b}* denote the set of all possible finite-length strings consisting of the symbols a and b, including the empty string. For two strings x, y, let xy denote their concatenation.
(a) Consider the set (also called a language) L = {ww : w ∈ {a, b}*}. Write a function that on input a string x tests and outputs whether or not x ∈ L.
(b) For any set of strings S, let S^i = {x1x2···xi : x1, x2, ···, xi ∈ S}, the set of all strings obtained by concatenating i arbitrary strings from S. Define S* as
S* = ∪_{i=0}^∞ S^i,
that is, S* is a set consisting of the empty string and all strings of finite length obtained by concatenating arbitrary elements of S.
Write a function that takes as input a string y and determines whether or not y ∈ L* as efficiently as possible. Here, L is the language defined in Part (a). You will need the function from the previous part here. Try to minimize the number of calls to the function from Part (a).
In the main() function,
- Read a string y.
- Call the function from the second part and output ‘Yes’ indicating that y ∈ L* or ‘No’ indicating that y ∉ L*.
You are not allowed to use any library functions for string manipulation other than strlen only.
Sample Input/Output:
1. Enter string: bbbbababbaabbaababaaabaa
Yes
2. Enter string: bbaabbaaaaba
No
3. Enter string: aaabbb
No
4. Enter string: aabb
Yes
|
142c715dc432f7f95e4c2337845ad72b
|
{
"intermediate": 0.33311793208122253,
"beginner": 0.2491770088672638,
"expert": 0.41770505905151367
}
|
32,574
|
this is header
#include <iostream>
class Int17Matrix3D {
public:
Int17Matrix3D(int x, int y, int z) : width_(x), height_(y), depth_(z) {
arr_ = new int32_t[kNumLenght_ * x * y * z];
std::memset(arr_, 0, kNumLenght_ * x * y * z * sizeof(int32_t));
}
~Int17Matrix3D() {
delete[] arr_;
}
Int17Matrix3D(const Int17Matrix3D& other) : width_(other.width_), height_(other.height_), depth_(other.depth_) {
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
}
Int17Matrix3D& operator=(const Int17Matrix3D& other);
static Int17Matrix3D make_array(int x, int y, int z);
Int17Matrix3D& operator()(int x, int y, int z);
Int17Matrix3D& operator=(int32_t value);
private:
const int8_t kNumLenght_ = 17;
int32_t current_index_;
int32_t width_;
int32_t height_;
int32_t depth_;
int32_t* arr_;
bool TakeBit(const int32_t* value, uint32_t bit_position) const;
Int17Matrix3D& ClearBit(int32_t* value, int32_t bit_position);
Int17Matrix3D& SetBit(int32_t* value, int32_t bit_position);
int32_t ToDec(const int32_t* arr_, int32_t current_index_) const;
friend Int17Matrix3D operator+(const Int17Matrix3D& lhs, const Int17Matrix3D& rhs);
friend std::ostream& operator<<(std::ostream& stream, const Int17Matrix3D& array);
};
Int17Matrix3D operator+(const Int17Matrix3D& lhs, const Int17Matrix3D& rhs);
Int17Matrix3D operator-(const Int17Matrix3D& lhs, const Int17Matrix3D& rhs);
Int17Matrix3D operator*(const Int17Matrix3D& lhs, const int32_t& rhs);
std::ostream& operator<<(std::ostream& stream, const Int17Matrix3D& array);
this is cpp
#include <iostream>
#include "Int17Matrix3D.h"
Int17Matrix3D& operator=(const Int17Matrix3D& other) {
if (this != &other) {
delete[] arr_;
width_ = other.width_;
height_ = other.height_;
depth_ = other.depth_;
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
}
return *this;
}
static Int17Matrix3D make_array(int x, int y, int z) {
return Int17Matrix3D(x, y, z);
}
Int17Matrix3D& operator()(int x, int y, int z) {
current_index_ = GetIndex(x, y, z);
return *this;
}
Int17Matrix3D& operator=(int32_t value) {
int first_bit_index = current_index_ * kNumLenght_;
for (int bit = 0; bit < kNumLenght_; ++bit) {
if (value & (1u << bit)) {
SetBit(arr_, first_bit_index + bit);
} else {
ClearBit(arr_, first_bit_index + bit);
}
}
return *this;
}
bool TakeBit(const int32_t* value, int bit_position) const {
int array_index = bit_position / 32;
int bit_index = bit_position % 32;
return ((value[array_index] >> bit_index) & 1) != 0;
}
Int17Matrix3D& ClearBit(int32_t* value, int32_t bit_position) {
value[bit_position / 32] &= ~(1 << (bit_position % 32));
return *this;
}
Int17Matrix3D& SetBit(int32_t* value, int32_t bit_position) {
value[bit_position / 32] |= (1 << (bit_position % 32));
return *this;
}
int GetIndex(int x, int y, int z) const {
return x + y * (width_ * kNumLenght_) + z * (width_ * height_ * kNumLenght_);
}
int32_t ToDec(const int32_t* arr_, int32_t current_index_) const {
int first_bit_index = current_index_ * kNumLenght_;
int32_t decimal_value = 0;
int32_t exp = 1;
for (int i = 0; i < kNumLenght_; ++i) {
if (TakeBit(arr_, first_bit_index + i)) {
decimal_value += exp;
}
exp *= 2;
}
return decimal_value;
}
Int17Matrix3D operator+(const Int17Matrix3D& lhs, const Int17Matrix3D& rhs) {
Int17Matrix3D result(lhs.width_, lhs.height_, lhs.depth_);
for (int z = 0; z < lhs.depth_; ++z) {
for (int y = 0; y < lhs.height_; ++y) {
for (int x = 0; x < lhs.width_; ++x) {
int index = lhs.GetIndex(x, y, z);
int val_lhs = lhs.ToDec(lhs.arr_, index);
int val_rhs = rhs.ToDec(rhs.arr_, index);
result(x, y, z) = val_lhs + val_rhs;
}
}
}
return result;
}
Int17Matrix3D operator-(const Int17Matrix3D& lhs, const Int17Matrix3D& rhs) {
Int17Matrix3D result(lhs.width_, lhs.height_, lhs.depth_);
for (int z = 0; z < lhs.depth_; ++z) {
for (int y = 0; y < lhs.height_; ++y) {
for (int x = 0; x < lhs.width_; ++x) {
int index = lhs.GetIndex(x, y, z);
int val_lhs = lhs.ToDec(lhs.arr_, index);
int val_rhs = rhs.ToDec(rhs.arr_, index);
result(x, y, z) = val_lhs - val_rhs;
}
}
}
return result;
}
Int17Matrix3D operator*(const Int17Matrix3D& lhs, const int32_t& rhs) {
Int17Matrix3D result(lhs.width_, lhs.height_, lhs.depth_);
for (int z = 0; z < lhs.depth_; ++z) {
for (int y = 0; y < lhs.height_; ++y) {
for (int x = 0; x < lhs.width_; ++x) {
int index = lhs.GetIndex(x, y, z);
int val_lhs = lhs.ToDec(lhs.arr_, index);
result(x, y, z) = val_lhs * rhs;
}
}
}
return result;
}
std::ostream& operator<<(std::ostream& stream, const Int17Matrix3D& array) {
int32_t decimal_value = array.ToDec(array.arr_, array.current_index_);
stream << decimal_value;
return stream;
}
fix
❯ clang++ Int17Matrix3D.cpp -o main
In file included from Int17Matrix3D.cpp:2:
./Int17Matrix3D.h:28:28: warning: default member initializer for non-static data member is a C++11 extension [-Wc++11-extensions]
const int8_t kNumLenght_ = 17;
^
Int17Matrix3D.cpp:4:16: error: overloaded 'operator=' must be a binary operator (has 1 parameter)
Int17Matrix3D& operator=(const Int17Matrix3D& other) {
^
Int17Matrix3D.cpp:5:7: error: invalid use of 'this' outside of a non-static member function
if (this != &other) {
^
Int17Matrix3D.cpp:6:16: error: use of undeclared identifier 'arr_'
delete[] arr_;
^
Int17Matrix3D.cpp:7:7: error: use of undeclared identifier 'width_'
width_ = other.width_;
^
Int17Matrix3D.cpp:7:22: error: 'width_' is a private member of 'Int17Matrix3D'
width_ = other.width_;
^
./Int17Matrix3D.h:30:11: note: declared private here
int32_t width_;
^
Int17Matrix3D.cpp:8:7: error: use of undeclared identifier 'height_'
height_ = other.height_;
^
Int17Matrix3D.cpp:8:23: error: 'height_' is a private member of 'Int17Matrix3D'
height_ = other.height_;
^
./Int17Matrix3D.h:31:11: note: declared private here
int32_t height_;
^
Int17Matrix3D.cpp:9:7: error: use of undeclared identifier 'depth_'
depth_ = other.depth_;
^
Int17Matrix3D.cpp:9:22: error: 'depth_' is a private member of 'Int17Matrix3D'
depth_ = other.depth_;
^
./Int17Matrix3D.h:32:11: note: declared private here
int32_t depth_;
^
Int17Matrix3D.cpp:10:7: error: use of undeclared identifier 'arr_'
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
^
Int17Matrix3D.cpp:10:26: error: use of undeclared identifier 'kNumLenght_'
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
^
Int17Matrix3D.cpp:10:40: error: use of undeclared identifier 'width_'
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
^
Int17Matrix3D.cpp:10:49: error: use of undeclared identifier 'height_'
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
^
Int17Matrix3D.cpp:10:59: error: use of undeclared identifier 'depth_'
arr_ = new int32_t[kNumLenght_ * width_ * height_ * depth_];
^
Int17Matrix3D.cpp:11:31: error: 'arr_' is a private member of 'Int17Matrix3D'
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
^
./Int17Matrix3D.h:33:12: note: declared private here
int32_t* arr_;
^
Int17Matrix3D.cpp:11:19: error: use of undeclared identifier 'arr_'
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
^
Int17Matrix3D.cpp:11:37: error: use of undeclared identifier 'kNumLenght_'
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
^
Int17Matrix3D.cpp:11:51: error: use of undeclared identifier 'width_'
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
^
Int17Matrix3D.cpp:11:60: error: use of undeclared identifier 'height_'
std::memcpy(arr_, other.arr_, kNumLenght_ * width_ * height_ * depth_ * sizeof(int32_t));
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
1 warning and 20 errors generated.
|
8d2e22ff0907422aed6707f18fdba154
|
{
"intermediate": 0.3031427562236786,
"beginner": 0.420168399810791,
"expert": 0.2766888737678528
}
|
32,575
|
I have this error:
Error: File: p3.m Line: 118 Column: 1
Function definition not supported in this context.
Create functions in code file.
for this code:
clear all
clc
%% User Input
D = input('Diffusion coefficient (m^2/s): ');
t_end = input('End time (s): ');
L = input('Spatial dimension-L (m): ');
k = input('Reaction rate (1/s): ');
R = input('Retardation coefficient (R): ');
To = input('Tortuosity : ');
U = input('Velocity of channel flow : ');
boundary_condition = input('Boundary condition ([1]: No-Flux, [2]: Perfectly Absorbing): ');
%% Parameters
M = 1;
t_start = 1;
dt = 1;
dx = 0.1;
dy = 0.1;
time_range = t_start:dt:t_end;
x_range = [0:dx:2*L];
y_range = [-L:dy:L];
D_eff = D/R*To;
U_eff = U/R;
%% Initialize result matrix
Result = zeros(numel(y_range), numel(x_range), numel(time_range));
%% Compute concentrations
[X, Y, T] = meshgrid(x_range, y_range, time_range);
C_M = M ./ (4 * pi * T).^1.5 .* D_eff;
% Compute concentration based on the selected boundary condition
if boundary_condition == 1
c = exp((-(X-U_eff*T).^2) ./ (4 * D_eff * T) + (-Y.^2) ./ (4 * D_eff * T) - k * T) + exp((-(X-U_eff*T).^2) ./ (4 * D_eff * T) + (-(Y + 2 * L).^2) ./ (4 * D_eff * T) - k * T) + exp((-(X-U_eff*T).^2) ./ (4 * D_eff * T) + (-(Y - 2 * L).^2) ./ (4 * D_eff * T) - k * T);
elseif boundary_condition == 2
c = 0;
for n = -1:1:1
c = c + exp( (-(X - U_eff * T).^2)./(4 * D_eff * T) + (-(Y+ 4 * n *L).^2)./(4 * D_eff * T)) - exp( (-(X - U_eff * T).^2)./(4 * D_eff * T) + (-(Y+(4*n-2)*L).^2)./(4*D_eff*T));
end
% Add continuous point source
c = c + f_source(X, Y, T);
% c = 0;
% for n = -1:1:1
% c = c + exp( (-(X-U_eff*T).^2)./(4*D_eff*T) + (-(Y+4*n*L).^2)./(4*D_eff*T)) - exp( (-(X-U_eff*T).^2)./(4*D_eff*T) + (-(Y+(4*n-2)*L).^2)./(4*D_eff*T));
% end
% else
%
% error('Invalid choice for boundary condition. Choose 1 for No-Flux or 2 for Perfectly Absorbing.');
%
% end
Result = C_M .* c;
%% Visualization (2D Plot)
figure;
for t = time_range
imagesc(Result(:, :, t));
title(strcat('time: ', num2str(t), ' s'))
xticks(1:(numel(x_range) - 1)/2:numel(x_range));
xticklabels({num2str(-L), num2str(0), num2str(L)})
xlabel('X-Axis (m)');
yticks(1:(numel(y_range) - 1)/2:numel(y_range));
yticklabels({num2str(L), num2str(0), num2str(-L)})
ylabel('Y-Axis (m)');
colorbar
pause(0.01)
colormap('jet');
end
close;
% 1D Concentration Profiles
x = round(numel(x_range) / 2);
t_1 = numel(time_range);
t_2 = round(numel(time_range) / 2);
t_3 = round(numel(time_range) / 4);
c_1 = Result(:, x, t_1);
c_2 = Result(:, x, t_2);
c_3 = Result(:, x, t_3);
figure;
plot(c_1, 1:numel(c_1));
hold on
plot(c_2, 1:numel(c_2));
hold on
plot(c_3, 1:numel(c_3));
hold off
xlim([0, 1.5 * max([max(c_1), max(c_2), max(c_3)])])
xlabel('Concentration (g/m^2)');
ylabel('Y-Axis (m)');
yticks(1:(numel(y_range) - 1)/4:numel(y_range));
yticklabels({num2str(-L), num2str(-L/2), num2str(0), num2str(L/2), num2str(L)})
legend({strcat('t = ', num2str(t_3), ' s'), strcat('t = ', num2str(t_2), ' s'), strcat('t = ', num2str(t_1), ' s') }, 'Location', 'northeast')
function c_source = f_source(X, Y, T)
% Parameters for the continuous point source
x_source = 0; % x-coordinate of the source
y_source = 0; % y-coordinate of the source
C_source = 1; % concentration of the source
r_source = 0.5; % radius of the source
% Calculate distance from each point to the source
r = sqrt((X - x_source).^2 + (Y - y_source).^2);
% Calculate concentration contribution from the source at each point
c_source = (r <= r_source) .* C_source; % Only points within the radius of the source will have a nonzero concentration
end
|
27599eb51cb154a0ea9fd29d440c706b
|
{
"intermediate": 0.3023439049720764,
"beginner": 0.4101235568523407,
"expert": 0.2875325679779053
}
|
32,576
|
public int SendMessage(string reciever, string content)
{
PortStatus = "Đang gửi tin";
var result = 0;
if (string.IsNullOrEmpty(PhoneNumber) || SIMCardCarrier == SIMCarrier.None)
return result;
var command = $@"AT+CMGS=""{reciever}""{Environment.NewLine}{content}";
var response = GetResponse(command);
if (response.Contains("OK"))
{
result = 1;
PortStatus = "Gửi thành công";
}
else
{
result = 0;
PortStatus = "Gửi thất bại";
}
return result;
}
if result = 1, display it in green; if result = 0, display it in red
|
c1512d8dc8b97322cfb27843e2a6d70c
|
{
"intermediate": 0.3144533932209015,
"beginner": 0.3671647906303406,
"expert": 0.31838178634643555
}
|
32,577
|
Write a function odds_outnumber(numbers) that takes a list of integer numbers and returns
True if the number of odd numbers is strictly bigger than the number of even ones
False otherwise.
In this problem, zero is considered to be neither odd nor even number.
Example input: [1, 2, 3, 4, 5, 6, 7]
Example output: True
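One straightforward way to write the requested function (treating zero as neither odd nor even, as the statement requires):

```python
def odds_outnumber(numbers):
    """True iff strictly more odd numbers than even ones; zero counts as neither."""
    odds = sum(1 for n in numbers if n != 0 and n % 2 != 0)
    evens = sum(1 for n in numbers if n != 0 and n % 2 == 0)
    return odds > evens

print(odds_outnumber([1, 2, 3, 4, 5, 6, 7]))  # True: 4 odd vs 3 even
```

Note that Python's `%` returns a non-negative remainder for a positive modulus, so negative integers are classified correctly as well.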
|
58c2f9cfe9c0972bbbe6faeb8d7d8760
|
{
"intermediate": 0.3209950029850006,
"beginner": 0.37855979800224304,
"expert": 0.30044522881507874
}
|
32,578
|
Given the following radial grid, I would like to make the mouse functions work for that radial grid (it was previously a rectangular grid instead): -- Define variables to track mouse state
local isMousePressed = false
local pixelStartX, pixelStartY = 0, 0 -- Initialize with default values
local gridStartX, gridStartY = 0, 0 -- Track grid cell coordinates (in cells)
local endX, endY = 0, 0
drawingMode="wall"
function love.mousepressed(x, y, button, istouch, presses)
if love.mouse.isDown(1) then
-- Start tracking mouse coordinates and state
isMousePressed = true
pixelStartX, pixelStartY = x, y
-- Calculate the grid cell position based on the initial coordinates
--local adjustedX = x - gridOffsetX
--local adjustedY = y - gridOffsetY
--gridStartX = math.floor(adjustedX / cellSize) + 1
--gridStartY = math.floor(adjustedY / cellSize) + 1
end
end
function love.mousemoved(x, y, dx, dy, istouch)
local gridOffsetX = 200
local gridOffsetY = 200
if gamestatus == "levelEditor" then
if love.mouse.isDown(1) then
-- Adjust x and y coordinates for the zoomed grid
x = x - gridOffsetX
y = y - gridOffsetY
-- Calculate the cell position based on the adjusted coordinates
local cellX = math.floor(x / cellSize) + 1
local cellY = math.floor(y / cellSize) + 1
if cellX >= 1 and cellX <= gridWidth and cellY >= 1 and cellY <= gridHeight then
-- Update the aesthetic grid based on the selected tile
if TileSet then
aestheticTileData[cellX] = aestheticTileData[cellX] or {}
aestheticTileData[cellX][cellY] = selectedTile
end
if drawingMode == "wall" then
grid[cellX][cellY] = 1
elseif drawingMode == "agent" then
addAgent(cellX, cellY)
elseif drawingMode == "agent2" then
addAgent2(cellX, cellY)
elseif drawingMode == "exit" then
addExitArea(cellX, cellY)
elseif drawingMode == "border" then
grid[cellX][cellY] = 3
elseif drawingMode == "emptyCell" then
grid[cellX][cellY] = 0
end
-- Update pixelStart
--pixelStartX, pixelStartY = x, y
-- Update gridStartX and gridStartY for grid cell coordinates
--gridStartX, gridStartY = cellX, cellY
-- Update endX and endY
endX, endY = cellX, cellY
-- Update endX and endY
endX, endY = cellX, cellY
end
end
end
end
function love.mousereleased(x, y, button, istouch, presses)
-- Stop tracking mouse coordinates and state on mouse release
isMousePressed = false
if pixelStartX == nil then pixelStartX = 0 end
if pixelStartY == nil then pixelStartY = 0 end
-- Calculate the grid cell positions based on the starting pixel coordinates
gridStartX = math.floor(pixelStartX / cellSize) + 1
gridStartY = math.floor(pixelStartY / cellSize) + 1
endX = math.floor(x / cellSize) + 1
endY = math.floor(y / cellSize) + 1
-- Update the grid based on the drawn shape
if drawingMode == "line" then
print("Drawing line from (" .. gridStartX .. ", " .. gridStartY .. ") to (" .. endX .. ", " .. endY .. ")")
addLineToGrid(gridStartX, gridStartY, endX, endY)
elseif drawingMode == "rectangle" then
print("Drawing rectangle from (" .. gridStartX .. ", " .. gridStartY .. ") to (" .. endX .. ", " .. endY .. ")")
addRectangleToGrid(gridStartX, gridStartY, endX, endY)
elseif drawingMode == "circle" then
print("Drawing circle at (" .. gridStartX .. ", " .. gridStartY .. ") with radius " .. endX)
addCircleToGrid(gridStartX, gridStartY, endX, endY)
end
end
function drawLevelEditor(dt)
local gridOffsetX = 200
local gridOffsetY = 200
if BackgroundSet == true then
drawforshader2()
end
if drawingMode == "object" then
-- (existing code for drawing mode)
end
if gridVisible == true then
if BackgroundSet then
love.graphics.setColor(1, 1, 1, 0.5)
else
love.graphics.setColor(0.5, 0.5, 0.5) -- Set grid color (gray)
end
-- Draw the radial grid
drawRadialGrid(gridOffsetX, gridOffsetY)
end
-- Draw agents and objects as before (existing code)
drawLevelEditorHelp()
drawObjectHelp()
love.graphics.setFont(poorfishsmall)
if not (drawingMode == "object") then
if not (drawingMode == "emptyCell") then
love.graphics.print("Drawing:" .. drawingMode, DrawWallsButton.x, DrawWallsButton.y - 100, 0, 0.8, 1)
elseif drawingMode == "emptyCell" then
love.graphics.print("Erasing:", DrawWallsButton.x, DrawWallsButton.y - 100, 0, 0.8, 1)
end
elseif drawingMode == "object" then
if isDrawingObject then
love.graphics.print("Drawing object:", DrawWallsButton.x, DrawWallsButton.y - 100, 0, 0.8, 1)
elseif isMovingObject then
love.graphics.print("Moving object:", DrawWallsButton.x, DrawWallsButton.y - 100, 0, 0.8, 1)
end
end
drawLevelEditorButtons()
drawLevelEditorTabs()
drawAbout()
fadesExportInformation() -- render the path and button to access the saved custom level
if screenshottaken then
--fadescreenshotinformation()
end
end
function drawRadialGrid(gridOffsetX, gridOffsetY)
local centerX = gridOffsetX + gridWidth * cellSize / 2
local centerY = gridOffsetY + gridHeight * cellSize / 2
for r = 1, math.max(gridWidth, gridHeight) do
local angleIncrement = 2 * math.pi / (r * 6) -- Adjust the divisor to control the density of circles
for theta = 0, 2 * math.pi, angleIncrement do
local cellX = centerX + r * cellSize * math.cos(theta)
local cellY = centerY + r * cellSize * math.sin(theta)
-- Draw cells based on their type (similar to existing code)
love.graphics.setColor(1, 1, 1) -- white (adjust as needed)
love.graphics.rectangle("fill", cellX, cellY, cellSize, cellSize)
end
end
end
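The mouse handlers above still map the cursor with a rectangular floor(x/cellSize); for the radial grid the inverse mapping is polar: distance from the center gives the ring, and the angle gives the sector. A sketch of that mapping (Python for brevity; it assumes the same 2π/(r·6) sector density that drawRadialGrid uses):

```python
import math

def radial_cell(x, y, center_x, center_y, cell_size, sectors_per_ring=6):
    """Map a pixel position to a (ring, sector) cell in the radial grid."""
    dx, dy = x - center_x, y - center_y
    dist = math.hypot(dx, dy)
    ring = int(dist // cell_size)          # 0 = the single center cell
    if ring == 0:
        return 0, 0
    n_sectors = ring * sectors_per_ring    # matches angleIncrement = 2*pi/(r*6)
    theta = math.atan2(dy, dx) % (2 * math.pi)
    sector = int(theta / (2 * math.pi / n_sectors))
    return ring, sector

print(radial_cell(25, 0, 0, 0, 10))   # (2, 0)
print(radial_cell(20, 20, 0, 0, 10))  # ring 2, mid-sector at 45 degrees
```

In love.mousemoved this polar lookup would replace the two floor(x/cellSize) lines, with the grid then indexed as grid[ring][sector] instead of grid[cellX][cellY].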
|
c8f37a8aaac8727ee69d2d74d28f3b78
|
{
"intermediate": 0.30165547132492065,
"beginner": 0.5404021739959717,
"expert": 0.15794239938259125
}
|
32,579
|
hello
|
7a1835f041d07e8e62087ba7cb5f9f48
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
32,580
|
def update_video():
# Get the current video frame
ret, frame = video_capture.read()
if ret:
# Convert the BGR color space to RGB
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# Create a Tkinter image from the numpy array
img = Image.fromarray(frame_rgb)
imgtk = ImageTk.PhotoImage(image=img)
# Update the image in the window
video_label.imgtk = imgtk
video_label.configure(image=imgtk)
video_label.after(10, update_video)
# Create a canvas for displaying the video
canvas = tk.Canvas(root, width=640, height=480)
canvas.pack()
how do I link these functions so that the video is shown in the canvas?
|
58c8a73c3a9559f32fe2d44aa40b98c0
|
{
"intermediate": 0.4358106851577759,
"beginner": 0.2635747194290161,
"expert": 0.300614595413208
}
|
32,581
|
In a C++ wxWidgets project, is there a way to change the order in which rows are visualized? In my case, I have a vector with the int positions of pinned rows. I want to control the order of displayed rows so that the rows in the pinned-rows vector are shown first, followed by the remaining rows in their original order.
|
ed9ae259c0bf042a1df7b2f31557dbac
|
{
"intermediate": 0.7773308753967285,
"beginner": 0.10981820523738861,
"expert": 0.11285087466239929
}
|
32,582
|
// Decompiled by AS3 Sorcerer 6.78
// www.buraks.com/as3sorcerer
//Game
package
{
import flash.display.MovieClip;
import flash.display.Stage;
import alternativa.init.OSGi;
import alternativa.init.Main;
import alternativa.init.BattlefieldModelActivator;
import alternativa.init.BattlefieldSharedActivator;
import alternativa.init.PanelModelActivator;
import alternativa.init.TanksLocaleActivator;
import alternativa.init.TanksServicesActivator;
import alternativa.init.TanksWarfareActivator;
import alternativa.init.BattlefieldGUIActivator;
import alternativa.init.TanksLocaleRuActivator;
import alternativa.init.TanksLocaleEnActivator;
import alternativa.init.TanksFonts;
import alternativa.object.ClientObject;
import flash.display.BitmapData;
import alternativa.tanks.models.battlefield.BattlefieldModel;
import alternativa.tanks.models.tank.TankModel;
import scpacker.networking.Network;
import flash.text.TextField;
import scpacker.networking.INetworker;
import com.reygazu.anticheat.events.CheatManagerEvent;
import flash.display.StageScaleMode;
import flash.display.StageAlign;
import specter.utils.Logger;
import flash.net.SharedObject;
import alternativa.tanks.loader.LoaderWindow;
import alternativa.tanks.loader.ILoaderWindowService;
import scpacker.SocketListener;
import alternativa.register.ObjectRegister;
import scpacker.gui.IGTanksLoader;
import scpacker.gui.GTanksLoaderWindow;
import alternativa.osgi.service.storage.IStorageService;
import scpacker.resource.ResourceUtil;
import scpacker.tanks.turrets.TurretsConfigLoader;
import scpacker.tanks.hulls.HullsConfigLoader;
import scpacker.networking.connecting.ServerConnectionServiceImpl;
import scpacker.networking.connecting.ServerConnectionService;
import alternativa.tanks.model.panel.PanelModel;
import alternativa.tanks.model.panel.IPanel;
public class Game extends MovieClip
{
public static var getInstance:Game;
public static var currLocale:String;
public static var local:Boolean = false;
public static var _stage:Stage;
public var osgi:OSGi;
public var main:Main;
public var battlefieldModel:BattlefieldModelActivator;
public var battlefieldShared:BattlefieldSharedActivator;
public var panel:PanelModelActivator;
public var locale:TanksLocaleActivator;
public var services:TanksServicesActivator;
public var warfare:TanksWarfareActivator;
public var battleGui:BattlefieldGUIActivator;
public var localeRu:TanksLocaleRuActivator = new TanksLocaleRuActivator();
public var localeEn:TanksLocaleEnActivator = new TanksLocaleEnActivator();
public var fonts:TanksFonts = new TanksFonts();
public var classObject:ClientObject;
public var colorMap:BitmapData = new BitmapData(100, 100);
public var battleModel:BattlefieldModel;
public var tankModel:TankModel;
public var loaderObject:Object;
public function Game()
{
if (numChildren > 1)
{
removeChildAt(0);
removeChildAt(0);
};
}
public static function onUserEntered(e:CheatManagerEvent):void
{
var network:Network;
var cheaterTextField:TextField;
var osgi:OSGi = Main.osgi;
if (osgi != null)
{
network = (osgi.getService(INetworker) as Network);
};
if (network != null)
{
network.send("system;c01");
}
else
{
while (_stage.numChildren > 0)
{
_stage.removeChildAt(0);
};
cheaterTextField = new TextField();
cheaterTextField.textColor = 0xFF0000;
cheaterTextField.text = "CHEATER!";
_stage.addChild(cheaterTextField);
};
}
public function activateAllModels():void
{
var localize:String;
this.main.start(this.osgi);
this.fonts.start(this.osgi);
try
{
localize = root.loaderInfo.url.split("?")[1].split("&")[0].split("=")[1];
}
catch(e:Error)
{
localize = null;
};
if (((localize == null) || (localize == "ru")))
{
this.localeRu.start(this.osgi);
currLocale = "RU";
}
else
{
this.localeEn.start(this.osgi);
currLocale = "EN";
};
this.panel.start(this.osgi);
this.locale.start(this.osgi);
this.services.start(this.osgi);
}
public function SUPER(stage:Stage):void
{
_stage = stage;
this.focusRect = false;
stage.focus = this;
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;
_stage = stage;
this.osgi = OSGi.init(false, stage, this, "127.0.0.1", [12345], "127.0.0.1", 12345, "res/", new Logger(), SharedObject.getLocal("gtanks"), "RU", Object);
this.main = new Main();
this.battlefieldModel = new BattlefieldModelActivator();
this.panel = new PanelModelActivator();
this.locale = new TanksLocaleActivator();
this.services = new TanksServicesActivator();
getInstance = this;
this.activateAllModels();
specter.utils.Logger.init();
var loaderService:LoaderWindow = (Main.osgi.getService(ILoaderWindowService) as LoaderWindow);
this.loaderObject = new Object();
var listener:SocketListener = new SocketListener();
var objectRegister:ObjectRegister = new ObjectRegister(listener);
this.classObject = new ClientObject("sdf", null, "GTanks", listener);
this.classObject.register = objectRegister;
objectRegister.createObject("sdfsd", null, "GTanks");
Main.osgi.registerService(IGTanksLoader, new GTanksLoaderWindow(IStorageService(Main.osgi.getService(IStorageService)).getStorage().data["use_new_loader"]));
ResourceUtil.addEventListener(function ():void
{
var l:TurretsConfigLoader = new TurretsConfigLoader("turretConfig.json");
(Main.osgi.getService(IGTanksLoader) as GTanksLoaderWindow).addProgress(100);
l.addListener(function ():void
{
(Main.osgi.getService(IGTanksLoader) as GTanksLoaderWindow).addProgress(300);
var h:HullsConfigLoader = new HullsConfigLoader("hullsConfig.json");
h.addListener(function ():void
{
(Main.osgi.getService(IGTanksLoader) as GTanksLoaderWindow).addProgress(300);
onResourceLoaded();
});
h.load();
});
l.load();
});
ResourceUtil.loadResource();
if (IStorageService(Main.osgi.getService(IStorageService)).getStorage().data["k_V"] == null)
{
IStorageService(Main.osgi.getService(IStorageService)).getStorage().setProperty("k_V", 0.02);
};
if (IStorageService(Main.osgi.getService(IStorageService)).getStorage().data["k_AV"] == null)
{
IStorageService(Main.osgi.getService(IStorageService)).getStorage().setProperty("k_AV", 6);
};
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;
specter.utils.Logger.log(("Loader url: " + stage.loaderInfo.url));
}
private function onResourceLoaded():void
{
var serverConnectionServie:ServerConnectionService = new ServerConnectionServiceImpl();
serverConnectionServie.connect(("socket.cfg?rand=" + Math.random()));
var lobbyServices:Lobby = new Lobby();
var panel:PanelModel = new PanelModel();
Main.osgi.registerService(IPanel, panel);
Main.osgi.registerService(ILobby, lobbyServices);
(Main.osgi.getService(IGTanksLoader) as GTanksLoaderWindow).setFullAndClose(null);
var auth:Authorization = new Authorization();
Main.osgi.registerService(IAuthorization, auth);
specter.utils.Logger.log("Game::onResourceLoaded()");
}
}
}//package
How can I remove hullsConfig.json and turretConfig.json?
|
f98a5a2c4a4352df170bc0b74607bd0a
|
{
"intermediate": 0.24464894831180573,
"beginner": 0.48273423314094543,
"expert": 0.27261683344841003
}
|
32,583
|
Please fix this to be the most efficient and fastest way:
if feat_type == "transcript" {
gff_line += &format!(
"ID={};Parent={};gene_id={};transcript_id={}\n",
record.name(),
gene_name,
gene_name,
record.name()
);
} else {
let prefix = match feat_type {
"exon" => "exon",
"CDS" => "CDS",
"five_prime_utr" => "UTR5",
"three_prime_utr" => "UTR3",
"start_codon" => "start_codon",
"stop_codon" => "stop_codon",
_ => panic!("Unknown feature type {}", feat_type),
};
// Excludes UTRs
if exon >= 0 {
match record.strand() {
"-" => {
gff_line += &format!(
"ID={}:{}.{};Parent={};gene_id={};transcript_id={},exon_number={}\n",
prefix,
record.name(),
record.exon_count() - exon,
record.name(),
gene_name,
record.name(),
record.exon_count() - exon
);
}
"+" => {
gff_line += &format!(
"ID={}:{}.{};Parent={};gene_id={};transcript_id={},exon_number={}\n",
prefix,
record.name(),
exon + 1,
record.name(),
gene_name,
record.name(),
exon + 1
);
}
_ => panic!("Invalid strand {}", record.strand()),
}
} else {
gff_line += &format!(
"ID={}:{};Parent={};gene_id={};transcript_id={}\n",
prefix,
record.name(),
record.name(),
gene_name,
record.name()
);
}
}
|
45a51297342b37bca398bdd30a16bbf4
|
{
"intermediate": 0.32676661014556885,
"beginner": 0.44012096524238586,
"expert": 0.2331124097108841
}
|
32,584
|
[ WARN:1@25.047] global cap_msmf.cpp:471 `anonymous-namespace'::SourceReaderCB::OnReadSample videoio(MSMF): OnReadSample() is called with error status: -1072875772
[ WARN:1@25.047] global cap_msmf.cpp:483 `anonymous-namespace'::SourceReaderCB::OnReadSample videoio(MSMF): async ReadSample() call is failed with error status: -1072875772
[ WARN:0@25.047] global cap_msmf.cpp:1759 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -1072875772
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\HOME\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "D:\pythonProject5\video_vebcam.py", line 147, in <lambda>
register_button = tk.Button(new_user_window, text="Зарегистрироваться", command=lambda: register(login_entry.get(), password_entry.get()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\pythonProject5\video_vebcam.py", line 167, in register
if not detect_face(frame):
^^^^^^^^^^^^^^^^^^
File "D:\pythonProject5\video_vebcam.py", line 121, in detect_face
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
cv2.error: OpenCV(4.8.1) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
What could be causing this error?
|
e45e3dacc0ec56b5074f3ac7300e0458
|
{
"intermediate": 0.47000518441200256,
"beginner": 0.1618974655866623,
"expert": 0.3680972754955292
}
|
32,585
|
Can you help me on my coding project?
|
88782ca06588ea18560def33d7470b5d
|
{
"intermediate": 0.4925914704799652,
"beginner": 0.24451009929180145,
"expert": 0.2628984749317169
}
|
32,586
|
in a c++ wxwidgets project, is there a way to change the order rows are visualized? In my case, I have a vector with int positions of pinned rows. So that way I want to control the order of displayed rows when first the rows in the pinned rows vector are shown and then the remaining rows in the original order.
So make a way to change the visualization order of my wxGrid so the pinned rows are shown first, followed by the rest of the rows.
|
88f7cfc4db3284d3d2cb6a5ff05d703a
|
{
"intermediate": 0.7448514103889465,
"beginner": 0.10061376541852951,
"expert": 0.15453481674194336
}
|
32,587
|
I have 2 dice, one is fair and the other one is loaded. I select one of them with equal probability and roll it. Then after every roll there is a 0.9 probability that I roll the same die next time and 0.1 I switch to the other one. I need to calculate the probability of rolling 300 rolls with the fair die only.
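One reading of this setup (a sketch, assuming "300 rolls with the fair die only" means the fair die is selected at the start and every one of the 299 subsequent transitions stays on it):

```python
# Hedged sketch: assumes the fair die is picked first (prob 1/2) and each
# of the (n_rolls - 1) later transitions keeps the same die (prob 0.9).
def prob_all_fair(n_rolls, p_pick_fair=0.5, p_stay=0.9):
    # one initial choice, then (n_rolls - 1) consecutive "stay" events
    return p_pick_fair * p_stay ** (n_rolls - 1)

print(prob_all_fair(300))  # on the order of 1e-14
```

If instead the question means something weaker (e.g. at least one fair stretch of 300), the full 2-state Markov chain treatment would be needed; the closed form above only covers the all-fair path.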
|
d3a85ebd443f6dc7ec6e9ce5093afd9e
|
{
"intermediate": 0.3743319809436798,
"beginner": 0.31587088108062744,
"expert": 0.30979713797569275
}
|
32,588
|
Stata code
|
de181bb42b79fbe83333f59f88589475
|
{
"intermediate": 0.17854663729667664,
"beginner": 0.24199481308460236,
"expert": 0.5794585347175598
}
|
32,589
|
finish the code:
//Adds the star ranking of a student for a course.
//If the student or the course does not exist, return false.
//Else if the student already has a star ranking for that course then return false.
//Else, add a star ranking, update the stars_count of the course, and increase by one the ranks_count of the student and return true.
// IMPORTANT: Always add in a new StarRank at the end of the list
// @param: student_head points to the head of Student list
// @param: the id of the student (sid)
// @param: course_array the array of pointers for the Course
// @param: the id of the course the student ranks (course_id)
// @param: the number of courses in the site
// @param: the rating which is a between 1 and MAX_RANKING_STARS
// @out: a boolean value indicating whether the insertion was successful
bool add_star_rank(Student *&student_head, unsigned int sid,
Course **&course_array, unsigned int course_id,
const unsigned int num_courses, int star) {
// TODO: Write code to implement add_star_rank
// use error cout carefully
}
|
35904c9948f94f6ade0eb195c31ae001
|
{
"intermediate": 0.40105077624320984,
"beginner": 0.31536629796028137,
"expert": 0.2835829257965088
}
|
32,590
|
I have a gray image as a two-dimensional numpy array. I need to count the number of pairs for all pixel values located next to each other. The result should be a 256 X 256 matrix where element in row i and column j shows the number of pairs for pixel i and pixel j.
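A sketch of one way to build that matrix with NumPy, assuming 4-connected (horizontal/vertical) adjacency and that each pair is counted in both (i, j) and (j, i) orders:

```python
import numpy as np

def neighbor_pair_counts(img):
    """256x256 matrix m where m[i, j] counts adjacent pixel pairs (i, j)."""
    img = np.asarray(img, dtype=np.int64)
    m = np.zeros((256, 256), dtype=np.int64)
    # horizontal neighbors: each pixel paired with the one to its right
    np.add.at(m, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    # vertical neighbors: each pixel paired with the one below it
    np.add.at(m, (img[:-1, :].ravel(), img[1:, :].ravel()), 1)
    # count the reverse orientation too, so m ends up symmetric
    return m + m.T
```

`np.add.at` is used instead of fancy-indexed `+=` because it accumulates correctly when the same (i, j) pair occurs more than once in the index arrays.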
|
a1071c3482e3bfdc987307bed905484f
|
{
"intermediate": 0.43223655223846436,
"beginner": 0.2272939532995224,
"expert": 0.34046947956085205
}
|
32,591
|
def detect_face(frame):
    # Load the Haar cascade classifier for face detection
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    # Convert the frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces in the frame
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4, minSize=(30, 30))
    if len(faces) > 0:
        return True
    else:
        return False
# Create the main application window
root = tk.Tk()
def new_user():
    # Create a new window
    new_user_window = tk.Toplevel(root)
    # Create a canvas to display the webcam video
    canvas = tk.Canvas(new_user_window, width=640, height=480)
    canvas.pack()
    # Build the registration form
    login_label = tk.Label(new_user_window, text="Login:")
    login_entry = tk.Entry(new_user_window)
    password_label = tk.Label(new_user_window, text="Password:")
    password_entry = tk.Entry(new_user_window, show='*')
    register_button = tk.Button(new_user_window, text="Register", command=lambda: register(login_entry.get(), password_entry.get()))
    # Lay out the registration form elements in the window
    login_label.pack()
    login_entry.pack()
    password_label.pack()
    password_entry.pack()
    register_button.pack()
def register(login, password):
    # Check whether a user with this login already exists
    if os.path.exists(f"dataset/{login}"):
        print("User already exists!")
        return
    # Capture video from the webcam
    cap = cv2.VideoCapture(2)
    ret, frame = cap.read()
    if not detect_face(frame):
        print("No face detected!")
        cap.release()
        cv2.destroyAllWindows()
        return
    # Create a folder for the new user
    os.mkdir(f"dataset/{login}")
    # Save the frame with the user's face
    cv2.imwrite(f"dataset/{login}/face.jpg", frame)
    # Save the login and password to a file
    with open(f"dataset/{login}/info.txt", "w") as file:
        file.write(f"Login: {login}\n")
        file.write(f"Password: {password}")
    # Close the window and release resources
    cap.release()
    cv2.destroyAllWindows()
# Create the "New user" and "Log in" buttons
new_user_button = tk.Button(root, text="New user", command=new_user)
login_button = tk.Button(root, text="Log in")
# Place the buttons in the window
new_user_button.pack()
login_button.pack()
video_label = tk.Label(root)
video_label.pack()
video_capture = cv2.VideoCapture(2)
def update_video():
    # Grab the current video frame
    ret, frame = video_capture.read()
    if ret:
        # Convert the BGR color space to RGB
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Build a Tkinter image from the numpy array
        img = Image.fromarray(frame_rgb)
        imgtk = ImageTk.PhotoImage(image=img)
        # Update the image in the Label widget
        video_label.imgtk = imgtk
        video_label.configure(image=imgtk)
    video_label.after(10, update_video)
# Call update_video() to start refreshing the video
update_video()
Change my code so that when the register button is pressed, before creating a new user the program checks whether a user with the same face already exists in the database; if one does, it should display a message that such a user already exists, and if not, the new user should be created successfully.
|
773230ea770471dda9336e2948b65b49
|
{
"intermediate": 0.3080834746360779,
"beginner": 0.5285999178886414,
"expert": 0.16331662237644196
}
|
32,592
|
Using bash, from the line "● cpupower.service - Configure CPU power related setting" print only "Configure CPU power related setting"
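In bash itself this is typically a one-liner such as `sed 's/^.* - //'` or `awk -F' - ' '{print $2}'`; to keep the examples here in one language, the same extraction is sketched in Python:

```python
# Split on the first " - " separator and keep everything after it.
line = "● cpupower.service - Configure CPU power related setting"
desc = line.split(" - ", 1)[1]
print(desc)  # Configure CPU power related setting
```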
|
2217925d31ae68694334bc2471e80083
|
{
"intermediate": 0.34859904646873474,
"beginner": 0.2893542945384979,
"expert": 0.3620466887950897
}
|
32,593
|
[ 7%] Building CXX object lib/CMakeFiles/Int17Matrix3D.dir/Int17Matrix3D.cpp.o
[ 14%] Linking CXX static library libInt17Matrix3D.a
[ 14%] Built target Int17Matrix3D
[ 21%] Building CXX object bin/CMakeFiles/labwork5.dir/main.cpp.o
[ 28%] Linking CXX executable labwork5
Undefined symbols for architecture arm64:
"_main", referenced from:
implicit entry/start for main executable
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [bin/labwork5] Error 1
make[1]: *** [bin/CMakeFiles/labwork5.dir/all] Error 2
make: *** [all] Error 2
how to fix it
|
ec2913a70759944c7801b7ddb98805ae
|
{
"intermediate": 0.499271035194397,
"beginner": 0.270476758480072,
"expert": 0.2302522361278534
}
|
32,594
|
I need a way to know the system state at the start of the launcher (we write Kotlin code in a launcher for Android TV). I want a way to know about:
1. GC activity
2. whether YouTube / live TV was running for a long time before the launcher started
3. whether RAM was overloaded
at the start of our app launcher. All this can be output with println for the time being.
|
4fadc0c6720c19bb1d7f05823f4ba885
|
{
"intermediate": 0.2631385326385498,
"beginner": 0.5889680981636047,
"expert": 0.14789338409900665
}
|
32,595
|
Hello! Please create a URL shortener in Racket please!
|
c036de1a13af8cad0859873818bd56b4
|
{
"intermediate": 0.3787052631378174,
"beginner": 0.23277826607227325,
"expert": 0.38851654529571533
}
|
32,596
|
I need a way to know the system state at the start of the launcher (we write Kotlin code in a launcher for Android TV). I want a way to know about:
1. GC activity
2. whether YouTube / live TV was running for a long time before the launcher started
3. whether RAM was overloaded
at the start of our app launcher. All this can be output with println for the time being.
|
a24bb7b79ca1af577f89f5472761e2d7
|
{
"intermediate": 0.2631385326385498,
"beginner": 0.5889680981636047,
"expert": 0.14789338409900665
}
|
32,597
|
Hi ChatGPT, I hope you are feeling good today. I have a locally built docker image and a docker hub account. Could you please tell me how to publish my image to docker hub?
|
2ee08744f671e35ad6d8c290cae26c37
|
{
"intermediate": 0.4521872401237488,
"beginner": 0.25024592876434326,
"expert": 0.29756689071655273
}
|
32,598
|
can u give me TASM code accepting a 2-digit input
here is the step-by-step process:
-the program will ask the user to input the first 2-digit binary number
-after the user inputs the first number, it asks the user to input the second number
-the program will add both numbers the user entered
-display the output
|
41ca3bfaa086a8a96869ea0ee96682cd
|
{
"intermediate": 0.4653235673904419,
"beginner": 0.06529886275529861,
"expert": 0.4693776071071625
}
|
32,599
|
How do i get the list of my system's specs. I want cpu, ram, gpu, nvme
|
c81a2d66c5c846c8eba4974655398fec
|
{
"intermediate": 0.3006208837032318,
"beginner": 0.37053990364074707,
"expert": 0.3288392424583435
}
|
32,600
|
hi there
|
f3f91995d616af0fe5e345c48d3afa84
|
{
"intermediate": 0.32885003089904785,
"beginner": 0.24785484373569489,
"expert": 0.42329514026641846
}
|
32,601
|
def HTTP_Request(endPoint, method, payload, Info):
timestamp = str(int(time.time() * 10 ** 3))
user.headers.update({'X-BAPI-TIMESTAMP': timestamp, 'X-BAPI-SIGN': genSignature(timestamp, payload, recv_window)})
url_with_endpoint = user.url_bybit_trade + endPoint
data = payload if method == "POST" else None
response = requests.Session().request(method, f"{url_with_endpoint}?{payload}" if method != "POST" else url_with_endpoint, headers=user.headers, data=data)
return response.json()
reduce
|
42f1e7055563450eee80b68bf388cc0a
|
{
"intermediate": 0.426220178604126,
"beginner": 0.37804704904556274,
"expert": 0.19573275744915009
}
|
32,602
|
def HTTP_Request(endPoint, method, payload, Info):
timestamp = str(int(time.time() * 10 ** 3))
user.headers.update({'X-BAPI-TIMESTAMP': timestamp, 'X-BAPI-SIGN': genSignature(timestamp, payload, recv_window)})
url_with_endpoint = user.url_bybit_trade + endPoint
data = payload if method == "POST" else None
response = requests.Session().request(method, f"{url_with_endpoint}?{payload}" if method != "POST" else url_with_endpoint, headers=user.headers, data=data)
return response.json()
reduce
|
7e5fcf570be8bf0888476093a172627f
|
{
"intermediate": 0.426220178604126,
"beginner": 0.37804704904556274,
"expert": 0.19573275744915009
}
|
32,604
|
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Arrow : MonoBehaviour
{
public int damage = 1; // Damage value
void Start()
{
Invoke("Despawn", 3);
}
void Despawn()
{
Destroy(gameObject);
}
private void OnCollisionEnter2D(Collision2D collision)
{
// Check if the collided object has an EnemyHealth component attached to it
EnemyHealth enemyHealth = collision.gameObject.GetComponent<EnemyHealth>();
if (enemyHealth != null)
{
enemyHealth.TakeDamage(damage); // Call the TakeDamage method of the enemyHealth
}
// Check if the collided object has an EnemyHealth component attached to it
BossHealth bossHealth = collision.gameObject.GetComponent<BossHealth>();
if (bossHealth != null)
{
bossHealth.TakeDamage(damage); // Call the TakeDamage method of the enemyHealth
}
Destroy(gameObject); // Destroy the arrow
}
}
make it so if the arrow collides with an object tagged "Wall" the arrow is destroyed
|
cf2d008ea378401f20fb671164bb5e50
|
{
"intermediate": 0.3983490765094757,
"beginner": 0.3661248981952667,
"expert": 0.23552604019641876
}
|
32,605
|
Can you read PDF?
|
45c3356ec2cff361eb70662b0ab0658d
|
{
"intermediate": 0.3503878712654114,
"beginner": 0.273554265499115,
"expert": 0.37605786323547363
}
|
32,606
|
1. TASK
The starting point for your term paper will be the course book, the contents of which will serve as the basis for an in-depth examination of one of the following questions. You are expected to research and cite from sources corresponding to your chosen topic.
1.1 Description of the Task
You get (A) 4 training datasets and (B) one test dataset, as well as (C) datasets for 50 ideal functions. All data respectively consists of x-y-pairs of values.
Structure of all CSV-files provided:
X Y
x1 y1
... ...
xn yn
Your task is to write a Python-program that uses training data to choose the four ideal functions which are the best fit out of the fifty provided (C) *.
i) Afterwards, the program must use the test data provided (B) to determine for each and every x-y-pair of values whether or not they can be assigned to the four chosen ideal functions**; if so, the program also needs to execute the mapping and save it together with the deviation at hand
ii) All data must be visualized logically
iii) Where possible, create/ compile suitable unit-test
* The criterion for choosing the ideal functions for the training function is how they minimize the sum of all y-deviations squared (Least-Square)
** The criterion for mapping the individual test case to the four ideal functions is that the existing maximum deviation of the calculated regression does not exceed the largest deviation between training dataset (A) and the ideal function (C) chosen for it by more than factor sqrt(2)
In order to give proof of your skills in Python related to this course, you need to adhere to certain criteria when solving the exercise; these criteria are subsequently described under ‘Details.’
Seite 2 von 5
EXAMINATION OFFICE
IU.ORG
1.2 Details
You are given four training datasets in the form of csv-files. Your Python program needs to be able to independently compile a SQLite database (file) ideally via sqlalchemy and load the training data into a single five-column spreadsheet / table in the file. Its first column depicts the x-values of all functions. Table 1, at the end of this subsection, shows you which structure your table is expected to have. The fifty ideal functions, which are also provided via a CSV-file, must be loaded into another table. Likewise, the first column depicts the x-values, meaning there will be 51 columns overall. Table 2, at end of this subsection, schematically describes what structure is expected.
After the training data and the ideal functions have been loaded into the database, the test data (B) must be loaded line-by-line from another CSV-file and – if it complies with the compiling criterion – matched to one of the four functions chosen under i (subsection above). Afterwards, the results need to be saved into another four-column table in the SQLite database. In accordance with table 3 at end of this subsection, this table contains four columns with x- and y-values as well as the corresponding chosen ideal function and the related deviation.
Finally, the training data, the test data, the chosen ideal functions as well as the corresponding / assigned datasets are visualized under an appropriately chosen representation of the deviation.
Please create a Python-program which also fulfills the following criteria:
− Its design is sensibly object-oriented
− It includes at least one inheritance
− It includes standard- und user-defined exception handlings
− For logical reasons, it makes use of Pandas’ packages as well as data visualization via Bokeh, sqlalchemy, as well as others
− Write unit-tests for all useful elements
− Your code needs to be documented in its entirety and also include Documentation Strings, known as "docstrings"
Table 1: The training data's database table:
X    Y1 (training func)   Y2 (training func)   Y3 (training func)   Y4 (training func)
x1   y11                  y21                  y31                  y41
...  ...                  ...                  ...                  ...
xn   y1n                  y2n                  y3n                  y4n
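The selection criterion (*) above can be sketched in a few lines; the array names and shapes below (`ideal` as an (n, 50) column-per-function array) are assumptions for illustration, not part of the assignment:

```python
import numpy as np

def best_ideal_index(y_train, ideal):
    """Index of the ideal-function column minimizing the sum of squared y-deviations."""
    sse = ((ideal - y_train[:, None]) ** 2).sum(axis=0)  # one SSE per candidate column
    return int(np.argmin(sse))
```

Running this once per training column yields the four chosen ideal functions; the sqrt(2) mapping test in (**) would then compare each test point's deviation against the maximum training deviation recorded for that pair.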
|
0447e255d94a012a484e03fd0c1c92ea
|
{
"intermediate": 0.38353079557418823,
"beginner": 0.3359762728214264,
"expert": 0.2804929316043854
}
|
32,607
|
After doing :split in vim, how can I change the file loaded in the other view or whatever it's called?
|
0bdd945f6441d255627cced488c18be3
|
{
"intermediate": 0.49798712134361267,
"beginner": 0.26333561539649963,
"expert": 0.23867720365524292
}
|
32,608
|
generate an image of a dog
|
f6822bb4b807f220dd6218e79f870889
|
{
"intermediate": 0.2711998522281647,
"beginner": 0.21272119879722595,
"expert": 0.5160789489746094
}
|
32,609
|
how to append to an existing excel file in python
|
f72011118d594ba7bd23af940bd2cf85
|
{
"intermediate": 0.4182031750679016,
"beginner": 0.2909822165966034,
"expert": 0.2908145785331726
}
|
32,610
|
package com.krieq.ka;
import org.bukkit.plugin.java.JavaPlugin;
import org.bukkit.command.Command;
import org.bukkit.command.CommandExecutor;
import org.bukkit.command.CommandSender;
import org.bukkit.command.PluginCommand;
import java.util.List;
public class Main extends JavaPlugin {
@Override
public void onEnable() {
// Load and register commands from the configuration file here
loadCommands();
}
private void loadCommands() {
List commands = getConfig().getList("commands");
if (commands == null) {
return;
}
for (Object obj : commands) {
if (obj instanceof CommandConfig) {
CommandConfig commandConfig = (CommandConfig) obj;
String commandName = commandConfig.getCommand();
String description = commandConfig.getDescription();
String permission = commandConfig.getPermission();
PluginCommand command = getCommand(commandName);
if (command != null) {
command.setExecutor(new MyCommandExecutor());
command.setDescription(description);
command.setPermission(permission);
// Additional command settings can be added here if desired
}
}
}
}
private class MyCommandExecutor implements CommandExecutor {
// Implement the command execution in this method
@Override
public boolean onCommand(CommandSender sender, Command command, String label, String[] args) {
// Your command-handling code goes here
return true;
}
}
private class CommandConfig {
private String command;
private String description;
private String permission;
// Implement getters and setters for the fields
public CommandConfig(String command, String description, String permission) {
this.command = command;
this.description = description;
this.permission = permission;
}
}
}
Help me: the methods used in loadCommands() are not found.
|
a05db143b0d3be7b39d949f9db698b36
|
{
"intermediate": 0.28570348024368286,
"beginner": 0.6624984741210938,
"expert": 0.05179804936051369
}
|
32,611
|
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import random
import math
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
iris_dataset = load_iris()
x = random.randint(0, 100)
y = random.randint(0, 100)
def cartesian_distance(x1, y1, x2, y2):
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
point1 = (x, y)  # a random point
points = iris_dataset  # the list of all points
distances = []
# compute the Cartesian distance between the random point and every other point
for point2 in points:
    dist = cartesian_distance(point1[0], point1[1], point2[0], point2[1])
    distances.append(dist)
# sort the distances in ascending order
distances.sort()
# find the 3 nearest points
closest_points = []
for i in range(3):
    dist = distances[i]
    index = distances.index(dist)  # index of the point at this distance
    closest_points.append(points[index])
print("The three nearest points:", closest_points)
X_new = np.array([[random.randint(int(iris_dataset.data[:, 0].min()), int(iris_dataset.data[:, 0].max())),
                   random.uniform(iris_dataset.data[:, 1].min(), iris_dataset.data[:, 1].max()),
                   random.uniform(iris_dataset.data[:, 2].min(), iris_dataset.data[:, 2].max()),
                   random.uniform(iris_dataset.data[:, 3].min(), iris_dataset.data[:, 3].max())]])
print(X_new)
What does this code do, and what will it output?
|
3db999fc33337fe8022aa4d61e856fb4
|
{
"intermediate": 0.31876280903816223,
"beginner": 0.457103967666626,
"expert": 0.22413326799869537
}
|
32,612
|
CPU: Ryzen 9 7950x 16 core 32 thread
GPU: Sapphire 11323-02-20G Pulse AMD Radeon RX 7900 XT Gaming Graphics Card with 20GB GDDR6, AMD RDNA 3 (vsync and freesync enabled in drivers)
Memory: DDR5 5600 (PC5 44800) Timing 28-34-34-89 CAS Latency 28 Voltage 1.35V
Drives: Samsung 990 Pro 2tb + WD_Black SN850X 4000GB
LAN: realtek Gaming 2.5GbE Family Controller
Wireless: RZ616 Bluetooth+WiFi 6E 160MHz (just use for bluetooth sound and xbox x and dualsense controllers)
USB DAC: Fiio New K3 Series DAC 32 bit, 384000 Hz
Monitor: 42" LG C2 TV 120 hz with freesync
Mouse: SteelSeries Aerox 9 Wireless @ 3100 DPI, no acceleration or smoothing.
Software:
Windows 11 23H2
Process Lasso: I disable the first 2 cores, 4 threads of my process affinity.
MSI Afterburner: mV 1044, power limit +0, core clock 2745, memory clock 2600
Reshade: Immerse Sharpen maxed out at 1, and Immerse Pro Clarity at about 50%
AMD5 105 cTDP PPT: 142W - TDC: 110A - EDC 170A with a curve of -20
Can you optimize my async 2.3 dxvk.conf for WoW 3.3.5a? The client is about 10 years old. I use an HD texture overhaul and tons and tons of addons. This game is very CPU limited. I use a 4GB patch on the exe to allow for more memory. I get about 50-70 fps in Dalaran. I want really good graphics and care highly about sharpness. My fps does dip down during some intense 25-man fights, but I'm not too worried.
Can you cut out all of the comments and details? I just want the optimized dxvk.conf.
dxvk.enableAsync = True
dxvk.numCompilerThreads = 14
dxvk.numAsyncThreads = 14
dxvk.maxFrameRate = 0
d3d9.maxFrameLatency = 1
d3d9.numBackBuffers = 3
d3d9.presentInterval = 1
d3d9.tearFree = False
d3d9.maxAvailableMemory = 4096
d3d9.evictManagedOnUnlock = True
d3d9.allowDiscard = True
d3d9.samplerAnisotropy = 16
d3d9.invariantPosition = False
d3d9.memoryTrackTest = False
d3d9.noExplicitFrontBuffer = False
d3d9.strictConstantCopies = False
d3d9.lenientClear = True
d3d9.longMad = False
d3d9.floatEmulation = Auto
d3d9.forceSwapchainMSAA = 0
d3d9.supportVCache = True
d3d9.forceSamplerTypeSpecConstants = False
dxvk.useRawSsbo = False
dxgi.maxDeviceMemory = 20000
dxgi.maxSharedMemory = 65536
dxgi.customVendorId = 0
dxgi.customDeviceId = 0
dxgi.customDeviceDesc = ""
dxvk.logLevel = none
dxvk.debugName = False
dxvk.debugOverlay = False
d3d9.shaderModel = 3
d3d9.dpiAware = True
|
fa5cef1df3b40b4366e66820e907f859
|
{
"intermediate": 0.3613569438457489,
"beginner": 0.34145134687423706,
"expert": 0.2971917390823364
}
|
32,613
|
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import random
import math
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
iris_dataset = load_iris()
x = random.randint(0, 100)
y = random.randint(0, 100)
def cartesian_distance(x1, y1, x2, y2):
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
point1 = (x, y)  # a random point
points = iris_dataset  # the list of all points
distances = []
# compute the Cartesian distance between the random point and every other point
for point2 in points:
    dist = cartesian_distance(point1[0], point1[1], point2[0], point2[1])
    distances.append(dist)
# sort the distances in ascending order
distances.sort()
# find the 3 nearest points
closest_points = []
for i in range(3):
    dist = distances[i]
    index = distances.index(dist)  # index of the point at this distance
    closest_points.append(points[index])
print("The three nearest points:", closest_points)
df = pd.DataFrame(closest_points, columns=iris_dataset.feature_names)  # build a DataFrame to analyze the data
counts = df.value_counts()  # count points with identical characteristics
How do I assign point1 the characteristics that occur most often?
|
94de77893a96887d88636ae5eb305274
|
{
"intermediate": 0.5899894833564758,
"beginner": 0.17532813549041748,
"expert": 0.2346823811531067
}
|
32,614
|
make me a game
|
cc8958b79caf1194245bda26f93ecfa1
|
{
"intermediate": 0.33053603768348694,
"beginner": 0.44182756543159485,
"expert": 0.22763638198375702
}
|
32,615
|
please make a 4 by 4 table in latex with names at the left and on the top of the table for each row and column
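A minimal sketch with placeholder names (the Row/Col labels and the cell values a through p are filler to be replaced):

```latex
\begin{tabular}{l|cccc}
      & Col 1 & Col 2 & Col 3 & Col 4 \\ \hline
Row 1 & a     & b     & c     & d     \\
Row 2 & e     & f     & g     & h     \\
Row 3 & i     & j     & k     & l     \\
Row 4 & m     & n     & o     & p     \\
\end{tabular}
```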
|
4db3856fe63ca4bc10172abf22df9b52
|
{
"intermediate": 0.3930029273033142,
"beginner": 0.21464328467845917,
"expert": 0.3923538327217102
}
|
32,616
|
Imagine you are a researcher and write new, clean code for similarity detection using transformers
|
b8a92566532bc4720a5b62082a607a24
|
{
"intermediate": 0.05341588333249092,
"beginner": 0.029315538704395294,
"expert": 0.9172685742378235
}
|
32,617
|
I want to scrape content started with "sk-" in .js files of the sites that can be accessed from Google with the search query "chat site:github.io"
|
03cf56d75ab809027c772b53f5536031
|
{
"intermediate": 0.27889376878738403,
"beginner": 0.33682262897491455,
"expert": 0.38428357243537903
}
|
32,618
|
I have 300 text files of JSON data. I need to extract all the phone numbers from the .txt files. How do I accomplish this quickly?
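One hedged sketch: the exact phone format inside the JSON is unknown, so the regex below (optional leading +, 8 to 16 digits with common separators) is an assumption to tune to the actual data:

```python
import re
from pathlib import Path

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{6,14}\d")  # assumed format; adjust to your data

def phones_in_text(text):
    """Return every substring of text that matches the assumed phone pattern."""
    return [m.group(0) for m in PHONE_RE.finditer(text)]

def phones_in_dir(directory):
    # scan every .txt file in the directory and collect all matches
    found = []
    for path in Path(directory).glob("*.txt"):
        found.extend(phones_in_text(path.read_text(errors="ignore")))
    return found
```

For 300 files this runs in well under a second; the per-file loop only needs parallelizing if the files are very large.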
|
b3b8691634bad660e9b59d554a792ef7
|
{
"intermediate": 0.5334286093711853,
"beginner": 0.25367972254753113,
"expert": 0.2128916084766388
}
|
32,619
|
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Arrow : MonoBehaviour
{
public int damage = 1; // Damage value
void Start()
{
Invoke("Despawn", 3);
}
void Despawn()
{
Destroy(gameObject);
}
private void OnCollisionEnter2D(Collision2D collision)
{
// Check if the collided object has an EnemyHealth component attached to it
EnemyHealth enemyHealth = collision.gameObject.GetComponent<EnemyHealth>();
if (enemyHealth != null)
{
enemyHealth.TakeDamage(damage); // Call the TakeDamage method of the enemyHealth
Destroy(gameObject); // Destroy the arrow
return;
}
// Check if the collided object has a BossHealth component attached to it
BossHealth bossHealth = collision.gameObject.GetComponent<BossHealth>();
if (bossHealth != null)
{
bossHealth.TakeDamage(damage); // Call the TakeDamage method of the bossHealth
Destroy(gameObject); // Destroy the arrow
return;
}
}
}
make it so if the arrow is traveling in the positive Y direction, the arrow will be rotated 90 degrees on the Z axis. If the arrow is traveling in the negative Y direction, the arrow will be rotated -90 degrees on the Z axis. And if the arrow is traveling in the negative X direction, the arrow will be rotated 180 degrees on the Y axis
|
f1603b725541c0bd2c781437602f52a9
|
{
"intermediate": 0.4054803252220154,
"beginner": 0.35116827487945557,
"expert": 0.24335132539272308
}
|
32,620
|
Write a MatLab code for the following problem.
Consider a 2D problem that models a pipe. In the y-direction, there is a finite width that can be specified by the user. In the x-direction, the system is not bounded. Start a random walker at the origin in the x-direction, and in the middle of the pipe along the width of the pipe in the y-direction. The random walker can only step between vertices on a regular square grid. The spacing between vertices is called the “lattice constant”. At each time step, the random walker can wait in its current location with probability "pw" and with "1 − pw" probability make a step. The step can be to the right, up, left, down, with probabilities of 1/4 for each direction. Assigning equal probabilities for each possible direction indicates that the material has isotropic transport properties. Use reflective boundary conditions in the y-direction at the edge of the pipe. The lattice constant, a, that gives the length of a step when the walker moves, and the time, t, for each time step must be set. In this first problem use pw = 0. Simulate the random walk as just described. In this example, set the width of the pipe to be 5 mm and set a = 0.01 mm.
Note the two formulas: R^2(n) = n*a^2 and R^2(T) = D*T. The first relates the mean squared displacement over many unrestricted random walks to the number of random walk steps n. The second expresses the mean squared displacement of an unrestricted particle in continuous time, where it can be assumed that T = n*t. Express the diffusion constant in terms of the model parameters {a, t}.
1) Visualize the random walk trajectories
2) In the x-direction, calculate the mean squared displacement as a function of number of steps.
3) In the y-direction, calculate the mean squared displacement as a function of number of steps.
4) For 1,000,000 walks, plot the histogram of locations of the random walker in the x-direction after 100, 1000, 10000, 100000, 1000000 steps.
5) For 1,000,000 walks, plot the histogram of locations of the random walker in the y-direction after 100, 1000, 10000, 100000, 1000000 steps.
6) In order to obtain a diffusion constant of 10^-7 cm^2/sec, what should the average time per step be (given by t) in units of seconds?
7) Working in the x-direction only, find the diffusion constant, D by taking the slope of the mean squared displacement as a function of time.
|
5195b2fcbd5a3809bc63cb07f67327ea
|
{
"intermediate": 0.3451307415962219,
"beginner": 0.3140028417110443,
"expert": 0.3408663868904114
}
|
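The question above asks for MATLAB; as a language-agnostic check of the core mechanics (reflective walls in y, mean squared displacement in x), here is a minimal Python sketch. Function names, the seeding scheme, and the use of per-walk final positions (rather than full trajectories) are illustrative assumptions:

```python
import random

def walk(n_steps, width=5.0, a=0.01, pw=0.0, rng=None):
    """One 2D random walk in a pipe: unbounded in x, reflective walls in y.

    Starts at x = 0, y = width/2. Each time step: wait with probability pw,
    otherwise move a distance a right/left/up/down with probability 1/4 each.
    Returns the final (x, y) position.
    """
    rng = rng or random.Random()
    x, y = 0.0, width / 2.0
    for _ in range(n_steps):
        if rng.random() < pw:
            continue  # walker waits this step
        dx, dy = rng.choice([(a, 0.0), (-a, 0.0), (0.0, a), (0.0, -a)])
        x += dx
        y += dy
        if y < 0.0:          # reflect at the lower wall
            y = -y
        elif y > width:      # reflect at the upper wall
            y = 2.0 * width - y
    return x, y

def msd_x(n_walks, n_steps, seed=0, **kw):
    """Mean squared x-displacement over n_walks independent walks."""
    rng = random.Random(seed)
    return sum(walk(n_steps, rng=rng, **kw)[0] ** 2 for _ in range(n_walks)) / n_walks
```

By the formulas as stated in the problem (R^2(n) = n*a^2 with T = n*t, so R^2 = (a^2/t)*T), the diffusion constant is D = a^2/t; for D = 10^-7 cm^2/s = 10^-5 mm^2/s and a = 0.01 mm this gives t = a^2/D = 10 s. Note that since only half the steps move in x, the x-direction MSD grows as roughly n*a^2/2.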
32,621
|
In a C++ wxWidgets project, is there a way to change the order in which rows are visualized? In my case, I have a vector with the int positions of pinned rows, and I want to control the display order so that the rows in the pinned vector are shown first, followed by the remaining rows in their original order.
So provide a way to change the visualization order of my wxGrid so that the pinned rows are shown before the rest, without inheriting from base wx classes and using only the methods the classes themselves provide. DO NOT INHERIT from wxGridTableBase OR wxGrid.
|
085fe887f39cfcf6d9e67a9adc9be965
|
{
"intermediate": 0.7708988785743713,
"beginner": 0.14404934644699097,
"expert": 0.08505173772573471
}
|
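Independently of the wxWidgets specifics in the question above, the display-order computation it describes — pinned rows first in pin order, then the rest in original order — can be sketched as follows. This is plain Python illustrating the mapping, not wx API; in an actual grid one would then move or repopulate rows according to the computed order:

```python
def display_order(n_rows, pinned):
    """Return row indices with pinned rows first (in pin order),
    followed by the remaining rows in their original order."""
    pinned_set = set(pinned)
    rest = [r for r in range(n_rows) if r not in pinned_set]
    return list(pinned) + rest
```

For example, with 5 rows and rows 3 and 1 pinned (in that order), the display order is [3, 1, 0, 2, 4].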
32,622
|
I have a Java program that needs to be run with provided arguments: a token, or a file that provides a token. Give me an example of what this command could look like.
|
7332808c0c40feac784c9e9d6787de46
|
{
"intermediate": 0.45096808671951294,
"beginner": 0.2794634997844696,
"expert": 0.26956844329833984
}
|
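For the question above, the actual flag names depend on the program; assuming hypothetical `--token` and `--token-file` flags, an invocation might look like `java -jar app.jar --token abc123` or `java -jar app.jar --token-file /path/to/token.txt`. The resolution logic behind such a pair of flags can be sketched as follows (the flag names and the file-fallback precedence are assumptions, not any specific program's behavior):

```python
def resolve_token(argv, read_file=lambda p: open(p).read().strip()):
    """Pick a token from CLI args: --token VALUE wins over --token-file PATH.

    argv is the program's argument list (like Java's args[]). The naive
    flag->value pairing below assumes each flag is followed by its value.
    """
    pairs = dict(zip(argv, argv[1:]))  # map each token to the one after it
    if "--token" in pairs:
        return pairs["--token"]
    if "--token-file" in pairs:
        return read_file(pairs["--token-file"])
    return None
```

The injectable `read_file` parameter is only there to keep the sketch testable without touching the filesystem.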
32,623
|
What does -- mean in a command-line argument, and what would that look like in Java's args[] array in the main method?
|
4fe5ea91e6c2d156aa6994f0b2ff9569
|
{
"intermediate": 0.4538539946079254,
"beginner": 0.3430657982826233,
"expert": 0.20308023691177368
}
|
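Regarding the question above: in most CLIs, `--` prefixes long option names (e.g. `--verbose`), and a lone `--` conventionally marks the end of options, after which everything is treated as a positional argument. Java's `main(String[] args)` receives these tokens verbatim — `args[0]` would literally be the string `"--verbose"`, since the JVM does no option parsing for the program. A Python sketch of the convention itself (not of any Java API):

```python
def split_options(args):
    """Split args into (options, positionals); a lone "--" ends options.

    Tokens starting with "--" are treated as long options until a bare "--"
    is seen, after which every remaining token is positional.
    """
    opts, pos = [], []
    seen_dd = False
    for a in args:
        if seen_dd:
            pos.append(a)
        elif a == "--":
            seen_dd = True
        elif a.startswith("--"):
            opts.append(a)
        else:
            pos.append(a)
    return opts, pos
```

So for `["--verbose", "file1", "--", "--not-an-option"]`, the first `--verbose` is an option while `--not-an-option` is positional because it follows the bare `--`.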