Harbor docker push first path segment in URL cannot contain colon |
|docker|harbor| |
I solved this issue by adding the following to my nginx config (the `Host` header is what made it work; the others are also needed).
```
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```
|
[enter image description here][1]I ran a deep learning model for regression; it is a multimodal model with two loss functions. The MSE and MAE are acceptable, but the R² is horrible. Any idea what can explain this? What should I check? Thanks!
This is my code:
```
def r_squared(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())

optimizer1 = keras.optimizers.Adam(1e-5, clipnorm=0.3, epsilon=1e-4)
model.compile(optimizer=optimizer1, loss=losses1, loss_weights=loss_weights1, metrics=['mse', r_squared, 'mae'])
```
[1]: https://i.stack.imgur.com/MA8x2.jpg |
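Not part of the original post, but a quick way to see how an acceptable MSE can coexist with a terrible R²: R² compares the residual error against the spread of the targets, so if the targets have little variance, even tiny errors yield a low or negative R². A small self-contained sketch in plain Python (the numbers are made up for illustration):

```python
# R-squared compares the residual error to the spread of the targets,
# so near-constant targets make even a tiny MSE look terrible.
def r_squared(y_true, y_pred):
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [10.00, 10.10, 9.90, 10.05, 9.95]    # targets with very little spread
y_pred = [10.10, 10.00, 10.00, 10.00, 10.00]  # predictions with small errors

mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(mse)                         # tiny MSE (about 0.007)
print(r_squared(y_true, y_pred))   # yet R-squared is negative
```

So a horrible R² next to an acceptable MSE often just means the targets barely vary; checking the variance of `y_true` per output head is a good first step.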
null |
null |
null |
null |
I too faced the same issue.
Then I checked my compileSdk version and targetSdk version in the build.gradle file in the Gradle Scripts folder.
I changed this:
```
plugins {
    id 'com.android.application'
}

android {
    namespace 'com.example.classify'
    compileSdk 33

    defaultConfig {
        applicationId "com.example.classify"
        minSdk 24
        targetSdk 33
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

dependencies {
    implementation 'androidx.appcompat:appcompat:1.6.1'
    implementation 'com.google.android.material:material:1.10.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.1.4'
    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
}
```
into this:
```
plugins {
    id 'com.android.application'
}

android {
    namespace 'com.example.classify'
    compileSdk 34

    defaultConfig {
        applicationId "com.example.classify"
        minSdk 24
        targetSdk 34
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

dependencies {
    implementation 'androidx.appcompat:appcompat:1.6.1'
    implementation 'com.google.android.material:material:1.8.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.1.4'
    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
}
```
I just updated the SDK version and clicked the Sync Gradle button.
When I run the code now, it works. |
When I use `./gradlew build -Dquarkus.package.type=uber-jar` to create an executable package, an error is reported and I cannot continue packaging.
```
class EncryptConfigInterceptor : ConfigSourceInterceptor {
    override fun getValue(context: ConfigSourceInterceptorContext, name: String): ConfigValue? {
        // System.getProperty returns null when the property is missing,
        // so use a null-safe check
        val key = System.getProperty("gs4k")
        val ePrefix = System.getProperty("gs4kp")
        if (key.isNullOrEmpty() || ePrefix.isNullOrEmpty()) {
            throw RuntimeException("key or prefix is null")
        }
        val config: ConfigValue? = context.proceed(name)
        return config?.value?.let { value ->
            if (value.startsWith(ePrefix)) {
                val encryptValue = value.removePrefix(ePrefix)
                val decryptedValue = SM4Utils.decryptCbc(key, encryptValue)
                config.withValue(decryptedValue)
            } else {
                config
            }
        } ?: config
    }
}
```
I use `./gradlew build -Dgs4k=xxx -Dgs4kp=xxx -Dquarkus.package.type=uber-jar` to set the system properties and package, but `System.getProperty` cannot obtain the values. When I use environment variables instead, they can be read through `System.getenv`. With `System.getProperty` the build reports an error and I cannot package it. |
When gradlew is packaged, System.getProperty cannot obtain the configuration |
|quarkus| |
null |
I am using R for deep learning with the MNIST dataset.
I have written this code to store the training and testing data, and define and fit the model:
```
library(keras)
#Obtain data
mnist <- dataset_mnist()
train_data <- mnist$train$x
train_labels <- mnist$train$y
test_data <- mnist$test$x
test_labels <- mnist$test$y
#Reshape & normalize
train_data <- array_reshape(train_data,c(nrow(train_data), 784))
train_data <- train_data / 255
test_data <- array_reshape(test_data,c(nrow(test_data), 784))
test_data <- test_data / 255
#One hot encoding
train_labels <- to_categorical(train_labels, 10)
test_labels <- to_categorical(test_labels, 10)
#Model
model <- keras_model_sequential()
model %>% layer_dense(units=128,activation="relu") %>%
layer_dropout(rate=0.3) %>%
layer_dense(units=64,activation="relu") %>%
layer_dropout(rate=0.2) %>%
layer_dense(units=10,activation="softmax")
#Compile
model %>% compile(loss="categorical_crossentropy",
optimizer="rmsprop",
metrics="accuracy")
#Train
history <- model %>% fit(train_data,
train_labels,
epochs=10,
batch_size=784,
validation_split=0.2,
verbose=2)
#Evaluation and prediction
model %>% evaluate(test_data, test_labels)
pred <- model %>% predict(test_data)
print(table(Predicted=pred, Actual=test_labels))
```
When running it in R studio, the following error occurs:
```
ValueError: No gradients provided for any variable: (['dense_124/kernel:0', 'dense_124/bias:0', 'dense_123/kernel:0', 'dense_123/bias:0', 'dense_122/kernel:0', 'dense_122/bias:0'],). Provided `grads_and_vars` is ((None, <tf.Variable 'dense_124/kernel:0' shape=(784, 128) dtype=float32>), (None, <tf.Variable 'dense_124/bias:0' shape=(128,) dtype=float32>), (None, <tf.Variable 'dense_123/kernel:0' shape=(128, 64) dtype=float32>), (None, <tf.Variable 'dense_123/bias:0' shape=(64,) dtype=float32>), (None, <tf.Variable 'dense_122/kernel:0' shape=(64, 10) dtype=float32>), (None, <tf.Variable 'dense_122/bias:0' shape=(10,) dtype=float32>)).
```
I think the problem may be conflicting shapes between the input data and the model's input, but I have no idea how to solve this.
Thanks for help! |
Firstly, next time please provide the log of the attempted task (by clicking the task logs).
Next, I think the problem comes from `sudo`, which means you cannot type the password.
Moreover, I suggest you use a venv rather than installing the library directly. The setup file should be:
```yaml
runners:
  hadoop:
    setup:
      - 'set -e'
      - VENV=/tmp/$mapreduce_job_id
      - if [ ! -e $VENV ]; then virtualenv $VENV; fi
      - . $VENV/bin/activate
      - 'pip install numpy'
Remember that the venv should be created as the root user.
Please refer to this [thread](https://stackoverflow.com/questions/31133050/virtualenv-command-not-found) also. |
Based on my experience, I've outlined the Kubernetes request flow. Could someone please add or highlight any points I might have overlooked?
When a user deploys an Nginx pod using `kubectl create -f nginx.yaml`:
1. The request is processed by the kube-apiserver, which validates it and stores the pod's configuration in the etcd database.
2. The Kubernetes controllers within the kube-controller-manager then ensure the cluster's state matches the desired state.
3. The kube-scheduler assigns the pod to an appropriate node based on cluster resources and requirements.
4. The kubelet on the assigned node interacts with the container runtime to start the pod.
5. kube-proxy manages the networking aspects to ensure traffic can reach the pod via services' virtual IPs.

Together these maintain the desired application state, high availability, and scalable deployments across the cluster.
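For reference, a minimal manifest that this flow would start from might look like the following (the names and image tag are illustrative, not from the original question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx            # the object the kube-apiserver validates and stores in etcd
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # pulled by the container runtime on the scheduled node
      ports:
        - containerPort: 80
```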
Kubernetes architecture flow |
Based on my experience, I've outlined the Kubernetes request flow. Could someone please add or highlight any points I might have overlooked? |
|kubernetes| |
null |
Use a TraceSource logger, but add a switch with level Verbose; then all your logging goes to the trace listener trace.log :)
```
var builder = WebApplication.CreateBuilder(args);
builder.Logging.AddTraceSource(
    new SourceSwitch("trace", "Verbose"),
    new TextWriterTraceListener(Path.Combine(builder.Environment.ContentRootPath, "trace.log")));
```
|
|javascript|firebase|express|firebase-authentication|google-cloud-functions| |
null |
I am looking to optimally solve a simple resource allocation problem that has a set of tasks and a set of men as its input. Can anyone please suggest some textbooks that would give me a clear idea and serve as references?
I am new to research and I am looking to learn about the resource allocation problem and the algorithmic ways to solve it. I have tried looking for material, but my efforts were in vain. |
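Not from the original post, but for concreteness: with one task per man and a cost for each pairing, this is the classic assignment problem. A brute-force sketch over all permutations is shown below (the cost numbers are made up; this is fine for tiny inputs, while the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`, scales to larger ones):

```python
from itertools import permutations

# cost[w][t] = cost of giving task t to worker w (illustrative numbers)
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

def best_assignment(cost):
    n = len(cost)
    # try every one-to-one mapping of workers to tasks and keep the cheapest
    return min(
        (sum(cost[w][t] for w, t in enumerate(perm)), perm)
        for perm in permutations(range(n))
    )

total, tasks = best_assignment(cost)
print(total, tasks)  # minimum total cost and the task chosen for each worker
```

Textbooks on combinatorial optimization cover this family of problems (assignment, transportation, scheduling) in depth.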
need help solving this resource allocation problem |
|resources|allocation| |
null |
I am new to firebase cloud functions and I'm using cloud functions with express.js. I need to retrieve an access token after registering a new user. i am using the below method for that.
```
const { admin, db } = require("../util/admin");

exports.registerUser = (req, res) => {
  const newUser = {
    email: req.body.email,
    password: req.body.password,
  };
  // TODO: validate data
  db.collection("users")
    .doc(newUser.email)
    .get()
    .then((doc) => {
      if (doc.exists) {
        return res.status(400).json({ email: "email already exists" });
      } else {
        return admin
          .auth()
          .createUser({ email: newUser.email, password: newUser.password });
      }
    })
    .then(async (userRecord) => {
      // Get the access token
      const token = await userCredential.user.getIdToken();
      return token;
    })
    .then((token) => {
      return res.status(201).json({ token });
    })
    .catch((err) => {
      console.error(err);
      return res.status(500).json({ error: err.code });
    });
};
```
After sending the POST request with email and password, the user is registered but I receive a status 500 response. In the Firebase authentication tab the registered user is visible, but I'm unable to receive the access token. How do I retrieve the token? Below are the dependency versions I'm using.
```
"dependencies": {
"express": "^4.18.3",
"firebase": "^10.10.0",
"firebase-admin": "^11.8.0",
"firebase-functions": "^4.3.1"
},
``` |
Starting a new project in Next.js I have this folder structure
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/HhiJo.png
But when I navigate to localhost:3000 I get a 404 screen, so there is some issue with the routing; I can't find any info on it online though. |
Next.js localhost:3000 is showing a 404 |
|typescript|next.js|frontend| |
I have a symmetrical dataframe with roughly 400 rows & columns. I would like to turn it into a 3-dimensional object (cube) by taking the sums of each pair between the three X,Y,Z indices.
For example, if the coordinates on the cube were A-B-C, I would take the sum of the following cells in the 2D dataframe/array: A-B, B-C, A-C. (Again, since it is symmetrical, whether the first or second index is the row or column does not matter here.)
Here's a 3x3 snapshot of the data:
```
# A B C
df_in = [[ 1, 5, 2], # A
[ 5, 1, 3], # B
[ 2, 3, 1]] # C
```
And then these would be the 3 planes that make up the outputted cube:
```
# A B C
outputA = [[ 3, 11, 5], # A
[ 11, 11, 10], # B
[ 5, 10, 5]] # C
# A B C
outputB = [[ 11, 11, 10], # A
[ 11, 3, 7], # B
[ 10, 7, 7]] # C
# A B C
outputC = [[ 5, 10, 5], # A
[ 10, 7, 7], # B
[ 5, 7, 3]] # C
```
So finally the cube would be the combination of these three arrays stacked along the Z-axis. Given the size, I'm evidently attempting to avoid loops but I'm coming up blank on thinking of efficient, or even just not-super-slow ways to complete this. |
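One loop-free approach is broadcasting, a sketch of which is below (verified against the 3x3 example above; `M` stands for `df_in` as a NumPy array): each cube cell `[z, x, y]` is `M[x, y] + M[y, z] + M[x, z]`, and because the matrix is symmetric, three broadcast views of `M` produce exactly those three terms.

```python
import numpy as np

M = np.array([[1, 5, 2],
              [5, 1, 3],
              [2, 3, 1]])  # the symmetric 2D input

# cube[z, x, y] = M[x, y] + M[y, z] + M[x, z]
# Symmetry (M == M.T) lets each term be a broadcast view of M itself.
cube = M[None, :, :] + M[:, None, :] + M[:, :, None]

print(cube[0])  # the plane for z = A
```

For a 400x400 input this builds a 400x400x400 array in one vectorized expression, so memory (roughly 400**3 * 8 bytes for float64) is the main constraint rather than speed.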
Creating 3D python data from index sums of 2D data |
|python|pandas|dataframe|numpy|3d| |
null |
This is now supported in hosted onboarding by enabling this toggle in the Stripe dashboard: https://dashboard.stripe.com/settings/connect/payouts/onboarding |
I have scaled my dataset using the MinMaxScaler from sklearn like this:
```
from sklearn.preprocessing import MinMaxScaler

# create a MinMaxScaler object
self.scaler = MinMaxScaler(feature_range=(0, 1))
# fit the scaler to the dataset
self.scaler.fit(self.X_org)
# transform the dataset using the scaler
self.X_scalled = pd.DataFrame(self.scaler.transform(self.X_org), columns=self.X_org.columns)
return self.X_scalled
```
However, I am now using the last 10% of the entire dataset for a validation run, also scaling the data with the scaler from the training dataset like so:
```
X_input_val_data_scalled = pd.DataFrame(self.scaler.transform(X_input_val_data), columns=X_input_val_data.columns)
```
Now my challenge:
In the training X_org set I get a nicely scaled dataset from 0 to 1. In the scaled validation X dataset I get completely weird data ranging from 7.5 to 8...
What am I doing wrong? |
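This is not necessarily a bug in the calls: `transform` reuses the min/max learned from the training data, so validation values outside the training range land outside [0, 1] by design. A minimal sketch reproducing the effect (the toy numbers are chosen to match the 7.5-8 range reported, not taken from the original data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(np.array([[0.0], [10.0]]))   # training range: 0..10

val = np.array([[75.0], [80.0]])        # validation values far outside that range
print(scaler.transform(val))            # roughly [[7.5], [8.0]] -- not clipped to [0, 1]
```

If clipping is acceptable for the use case, `MinMaxScaler(clip=True)` keeps transformed values inside the feature range, at the cost of collapsing out-of-range values onto the boundary.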
Calling MinMaxScaler differs between same sets |
|scikit-learn| |
I have a requirement to upload a video over an API and embed it in a WordPress page; it needs to be one-time viewable and then never accessible again. The video must also have no buttons to stop/pause/rewind/rewatch; once you land on the page, it starts playing and you will never see the video again. Access can be granted either by a URL sent by email or by password protection.
A second option would be not embedding it: the video could be viewable on some streaming or video-providing service like YouTube or Vimeo or any other; I just need the previously mentioned privacy requirements (one-time viewable is the most important).
Current solution is on wordpress web page which currently implements WordPress's **PPWP plugin**, to lock content per path with a password.
But this solution has a flaw.
Videos uploaded to let's say `www.wordpress-domain.com/video-storage/video1.mp4` need to be public to be embedded for instance to path: `www.wordpress-domain.com/page1/private-sub-page/`.
So if the user knows the video path (or paths) `/video-storage/video1.mp4` they can just paste the video urls and play all of the videos that should be "private".
So my question now is what would be the best solution to upload/share a video using API and make it one-time available as described in the requirements? Is there any tool/video hosting platform that enables this?
I have searched and looked up for instance Vimeo, but all vimeo offers is domain based links that are private, which is fine, but they don't offer one time views, which is a necessary requirement. And most of other video hosting apps offer the same functionalities as Vimeo does. |
Never mind, the loader ld-linux-x86-64.so.2 was the cause of the problem, for reasons which I do not know. The problem was fixed by removing the loader and letting spwn download the loader instead. |
The problem was the buffer size, which was 30720 bytes. I changed it to approximately 500 MB and now allocate the buffer from the heap:
```
const int BUFFER_SIZE = 500000000;
char* buffer = new char[BUFFER_SIZE];
```
This is where the buffer is used; I simply take data from it and then pass it as an argument to a function that writes this data to a file:
```
void TcpServer::startListen()
{
    if (listen(m_socket, 20) < 0)
    {
        exitWithError("Socket listen failed");
    }
    std::ostringstream ss;
    ss << "\n*** Listening on ADDRESS: " << inet_ntoa(m_socketAddress.sin_addr)
       << " PORT: " << ntohs(m_socketAddress.sin_port) << " ***\n\n";
    log(ss.str());

    int bytesReceived;
    while (true)
    {
        log("====== Waiting for a new connection ======\n\n\n");
        acceptConnection(m_new_socket);

        char* buffer = new char[BUFFER_SIZE];
        bytesReceived = recv(m_new_socket, buffer, BUFFER_SIZE, 0);
        if (bytesReceived < 0)
        {
            exitWithError("Failed to receive bytes from client socket connection");
        }

        std::ostringstream ss;
        ss << "------ Received Request from client ------\n\n";
        log(ss.str());
        sendResponse();

        // construct from the received byte count; recv() does not NUL-terminate
        std::string requestData(buffer, bytesReceived);
        handleRequest(requestData);

        delete[] buffer;
        closesocket(m_new_socket);
    }
}
```
Before this change, the data was not written to the file as needed because the buffer could not accommodate large data. |
No gradients provided for any variable in R |
|r|machine-learning|deep-learning|mnist| |
null |
I would appreciate any guidance on correcting my mistakes. I'm attempting to implement the RCPSP model by Pritsker et al. (1969).
Link : [Here](https://www.google.com/books/edition/Scheduling_of_Resource_Constrained_Proje/CNncBwAAQBAJ?hl=fr&gbpv=1&dq=RCPSP*&pg=PA79&printsec=frontcover)
```
// Example data
range J = 1..5;
range R = 1..3;
range T = 1..10;
int P[J] = [0,1,2,3,4];
int d[J] = [1, 1, 3, 5, 2];
int EF[J] = [1, 2, 1, 1, 1];
int LF[J] = [2, 3, 4, 6, 2];
int Tbar = sum(j in J) d[j];
// Resource usage matrix
int u[J][R] = [[1, 0, 2],
[1, 1, 1],
[1, 1, 0],
[2, 0, 3],
[1, 1, 2]];
// Resource availability
int a[R]= [1,2,3];
// Decision Variables
dvar boolean x[J][T];
dexpr int CT = sum(j in J, t in EF[j]..LF[j])t*x[j][t];
minimize CT;
subject to {
forall(j in J)
sum (t in EF[j]..LF[j]) x[j][t] == 1;
forall(j in J, i in J)
sum (t in EF[j]..LF[j]) (t - d[j]) * x[j][t] - sum (t in EF[i]..LF[i]) t * x[i][t] >= 0;
forall(r in R, t in 1..Tbar) {
sum(j in J) u[j][r] * sum(q in max(t, EF[j])..min(t + d[j] - 1, LF[j])) x[j][q] <= a[r];
}
}
```
|
Resource-Constrained Project Scheduling Problem (RCPSP) Implementation in OPL for CPLEX |
{"OriginalQuestionIds":[77690729],"Voters":[{"Id":3358074,"DisplayName":"dmitryro"},{"Id":8770336,"DisplayName":"lucutzu33"},{"Id":14991864,"DisplayName":"Abdul Aziz Barkat","BindingReason":{"GoldTagBadge":"django"}}]} |
The Python 3.12 documentation for embedding gives this example:
```
#define PY_SSIZE_T_CLEAN
#include <Python.h>
int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    if (Py_FinalizeEx() < 0) {
        exit(120);
    }
    PyMem_RawFree(program);
    return 0;
}
```
Although calling `Py_SetProgramName()` is recommended, it throws a compile warning:
```
test01.c:12:5: warning: 'Py_SetProgramName' is deprecated [-Wdeprecated-declarations]
Py_SetProgramName(program); /* optional but recommended */
^
/opt/python/3.11/include/python3.11/pylifecycle.h:37:1: note: 'Py_SetProgramName' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) PyAPI_FUNC(void) Py_SetProgramName(const wchar_t *);
^
/opt/python/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
1 warning generated.
```
The resulting executable runs, and if you add `import sys` and `print(sys.executable)` to the `PyRun_SimpleString()` argument, the correct executable name is shown.
As this was deprecated in 3.11, and although it is still recommended for 3.12, I'd rather get rid of the warning. How should I change the program? |
It's not required to kill the app when waiting until the user presses the back button.
```
@RequiresApi(api = Build.VERSION_CODES.R)
public void gotoExternalStorageSettings() {
    Intent intent = new Intent(Settings.ACTION_MANAGE_APP_ALL_FILES_ACCESS_PERMISSION);
    intent.setData(Uri.fromParts("package", requireContext().getPackageName(), null));
    this.startActivityAndWait.launch(intent);
}

ActivityResultLauncher<Intent> startActivityAndWait = registerForActivityResult(
    new ActivityResultContracts.StartActivityForResult(), (ActivityResult result) -> {
        if (Environment.isExternalStorageManager()) {
            // permission was granted
            // continue in here ...
        }
    });
```
The culprit otherwise is indeed that the app does not know when the user returns.<br/>
There isn't any result returned, but one does not need any result ... |
I'm working on a Laravel project and trying to create an anonymous Blade component that doesn't rely on a controller. I want this component to have a variable and a method that can be called within it to change the variable's state.
For instance, I have several links (like a navigation bar) and want to change the `activeTab` value when clicking a link.
However, I've noticed that Laravel always looks for the called function inside a controller. How can I change this behavior and implement the functionality I need directly within the anonymous Blade component? |
Function in anonymous Laravel Blade component |
Do not mix CSS/jQuery syntax (`#` for identifier) with native JS.
**Native JS solution:**
```
document.getElementById('_1234').checked = true;
```
To also trigger the associated event (e.g. if related controls need to be shown/hidden), call `click()`:
```
document.getElementById('_1234').click();
```
**jQuery solution:**
```
$('#_1234').prop('checked', true);
```
|
You can scroll to a particular view in a `ScrollView` by using its id. This can either be done using a `ScrollViewReader` (as suggested in a comment) or by using a `.scrollPosition` modifier on the `ScrollView` (requires iOS 17).
Here is an example of how it can be done using `.scrollPosition`.
- The images used here are just system images (symbols). The symbol name is used as its id.
- The images in the scrolled view all have different heights. This makes it difficult for a `LazyVStack` to predict the scroll distance correctly, so a `VStack` is used instead. But if your images all have the same height then you could try using a `LazyVStack`.
- Set the target to scroll to in `.onAppear`.
- You could try using different anchors for the scroll, but it may be difficult to control the exact screen position of the target image.
```swift
struct ContentView: View {
private let imageNames = ["hare", "tortoise", "dog", "cat", "lizard", "bird", "ant", "ladybug", "fossil.shell", "fish"]
private let threeColumnGrid: [GridItem] = [.init(), .init(), .init()]
var body: some View {
NavigationStack {
LazyVGrid(columns: threeColumnGrid, alignment: .center, spacing: 20) {
ForEach(imageNames, id: \.self) { imageName in
NavigationLink(destination: ScrollPostView(imageNames: imageNames, selectedName: imageName)){
Image(systemName: imageName)
.resizable()
.scaledToFit()
.padding()
.frame(maxHeight: 100)
.background(.yellow)
.clipShape(RoundedRectangle(cornerRadius: 15))
}
.overlay(
RoundedRectangle(cornerRadius: 15)
.stroke(Color.black, lineWidth: 2)
)
}
}
}
}
}
struct ScrollPostView: View {
let imageNames: [String]
let selectedName: String
@State private var scrollPosition: String?
var body: some View {
ScrollView {
VStack {
ForEach(imageNames, id: \.self) { imageName in
Image(systemName: imageName)
.resizable()
.scaledToFit()
.padding()
.background(selectedName == imageName ? .orange : .yellow)
.clipShape(RoundedRectangle(cornerRadius: 15))
}
}
.scrollTargetLayout()
}
.scrollPosition(id: $scrollPosition)
.onAppear {
scrollPosition = selectedName
}
}
}
```
 |
I need to separate texts into paragraphs and be able to work with each of them. How can I do that? Between every two paragraphs there can be at least one empty line. Like this:
<!-- language: none -->
Hello world,
this is an example.
Let´s program something.
Creating new program. |
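One common approach (a sketch, assuming the text is available as a single string): split on runs of blank lines with a regular expression, then work with each paragraph separately.

```python
import re

text = """Hello world,
this is an example.


Let's program something.

Creating new program."""

# one or more blank (possibly whitespace-only) lines separate paragraphs
paragraphs = [p for p in re.split(r"\n\s*\n", text.strip()) if p]
for i, p in enumerate(paragraphs, 1):
    print(i, repr(p))
```

Note that a paragraph keeps its internal single newlines (the first one here spans two lines), which is usually what you want when processing each paragraph further.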
I have the string ```"This auction will run from Friday 28 July - Monday 7 August. It will close from 7pm (GMT) on Monday 7 August 2023. Read here for information on how our auctions end."```
and I'm trying to get the dates from this string.
```
const regex = /(\d{1,2} \w+(?: \d{4})?)/g;
const dates = str.match(regex)
```
This is the regex I wrote, and in a regex checker site (`(\d{1,2} \w+(?: \d{4})?)`) it matches 28 July, 7 August and 7 August 2023, but in JavaScript it only matches 7 August and 7 August 2023. What's wrong?
The strange part is that with "This auction will run from Friday 23 February - Monday 4 March. It will close from 7pm (GMT) on Monday 4 March 2024." it works perfectly.
|
I'm taking an online course on website development and ran into issues setting up dependencies using npm. The course materials are a few years old, so many dependencies are outdated, which is causing headaches. Strangely, my package.json file in VS Code is filled with hundreds of dependencies when there should only be about 5-10. The devDependencies section is fine, though, as I installed those intentionally. I suspect that Warp, the terminal tool I'm using, might have automatically installed these extra dependencies. Here is my list of active devDependencies, some of which are outdated, in case they somehow installed all the other dependencies; I'm also sharing a few of the dependencies, since I have loads of them. Any help fixing this mess would be appreciated. Thanks!
```
"devDependencies": {
    "babel-loader": "^9.1.3",
    "clean-webpack-plugin": "^4.0.0",
    "copy-webpack-plugin": "^12.0.2",
    "css-loader": "^6.10.0",
    "eslint": "^8.56.0",
    "eslint-config-standard": "^17.1.0",
    "eslint-loader": "^4.0.2",
    "eslint-plugin-import": "^2.29.1",
    "eslint-plugin-promise": "^6.1.1",
    "eslint-plugin-standard": "^5.0.0",
    "file-loader": "^6.2.0",
    "imagemin": "^8.0.1",
    "mini-css-extract-plugin": "^2.8.0",
    "postcss-loader": "^8.1.0",
    "sass": "^1.71.0",
    "sass-loader": "^14.1.0",
    "terser-webpack-plugin": "^5.3.10",
    "webpack": "^5.90.2",
    "webpack-cli": "^5.1.4",
    "webpack-dev-server": "^5.0.2",
    "webpack-merge": "^5.10.0"
},
"main": "webpack.config.build.js",
"dependencies": {
    "accepts": "^1.3.8",
    "acorn": "^8.11.3",
    "acorn-import-assertions": "^1.9.0",
    "ajv": "^6.12.6",
    "ajv-formats": "^2.1.1",
    "ajv-keywords": "^3.5.2",
    "ansi-html-community": "^0.0.8",
    "ansi-regex": "^6.0.1",
    "ansi-styles": "^6.2.1",
    "anymatch": "^3.1.3",
    "arg": "^5.0.2",
    "array-flatten": "^1.1.1",
    "array-union": "^1.0.2",
    "array-uniq": "^1.0.3",
    "balanced-match": "^1.0.2",
    "batch": "^0.6.1",
    "binary-extensions": "^2.2.0",
    "body-parser": "^1.20.1",
    "bonjour-service": "^1.2.1",
    "brace-expansion": "^2.0.1",
    "braces": "^3.0.2",
    "browserslist": "^4.23.0",
    "buffer-from": "^1.1.2",
    "bundle-name": "^4.1.0",
    "bytes": "^3.0.0",
    "call-bind": "^1.0.7",
    "caniuse-lite": "^1.0.30001587",
    "chokidar": "^3.6.0",
    "chrome-trace-event": "^1.0.3",
    "clone-deep": "^4.0.1",
    "color-convert": "^2.0.1",
    "color-name": "^1.1.4",
    "colorette": "^2.0.20"
}
```
|
null |
{"OriginalQuestionIds":[4660142],"Voters":[{"Id":1070452,"DisplayName":"Ňɏssa Pøngjǣrdenlarp"},{"Id":5836671,"DisplayName":"VDWWD","BindingReason":{"GoldTagBadge":"c#"}}]} |
I'm deploying an application (on Docker) that is totally dependent on `sf`, but I'm having trouble loading it in Docker:
```
FROM rocker/shiny

# Make a directory in the container
RUN mkdir /home/shiny-app
RUN R -e "install.packages('sf')"
RUN R -e "library('sf')"

COPY . /home/shiny-app/
```
This is the error message I received. I have no idea what caused this error. Probably GDAL?
I need help to solve this problem.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/XeNjj.png |
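A common cause in this situation is that `sf` compiles against system libraries (GDAL, GEOS, PROJ, udunits) that the base image does not ship. A hedged sketch of a Dockerfile that installs them first (the package names assume the Debian-based rocker image; not tested against this exact setup):

```
FROM rocker/shiny

# system libraries that sf links against
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgdal-dev libgeos-dev libproj-dev libudunits2-dev \
    && rm -rf /var/lib/apt/lists/*

RUN mkdir /home/shiny-app
RUN R -e "install.packages('sf')"
RUN R -e "library('sf')"

COPY . /home/shiny-app/
```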
I can not load sf in docker |
|r|docker|gdal| |
After drop_na it shows 0 obs. of 68 variables:
[![(https://i.stack.imgur.com/elrGR.png)][1]][1]
[![(https://i.stack.imgur.com/txJEv.png)\]][2]][2]
After `drop_na()` I don't see any values in the table except the dates. This was not the case when I tried it before; I could see the values in the table.
```
library(tidyverse)
WDI_GDP <- read_csv("C:/Users/ASYA/Desktop/P_Data_Extract_From_World_Development_Indicators/b0351889-13b3-4cbe-a5c0-a2dd9d633eab_Data.csv")
WDI_GDP <- WDI_GDP %>%
  mutate(across(contains("[YR"), ~na_if(.x, ".."))) %>%
  mutate(across(contains("[YR"), as.numeric))
WDI_GDP <- drop_na(WDI_GDP)
```
[1]: https://i.stack.imgur.com/YNczn.png
[2]: https://i.stack.imgur.com/9kSfO.png |
Why are zero variables shown after using drop_na()? |
|r|tidyr| |
Dear all, I'm hoping someone can help me defeat a SQL query performance issue.
I'm generating an Inventory Report that handles locations for items in stock, and I want to show the user a flag saying "Not available any more", meaning the location is no longer available because it's taken by another item.
That is mainly done by this part of the code, using the condition below:
```
IIF(ItemLocationsCount > 1 and ItemsInStock = 0 and StoreId <> MainStoreId, 'Not available any more', Store)
```
This condition shows the flag only for items that have more than one location and 0 quantity.
This aggregate counts the number of locations taken by an item:
```
ItemsInStockByCurrentItemLocation as (
    select *
        ,(select count(*)
          from ItemsInStock ItemsInStockLocationsCount
          where ItemsInStock.StoreId = ItemsInStockLocationsCount.StoreId
          group by MainStoreId) ItemLocationsCount
    from ItemsInStock
)
```
This is the view that shows the whole data with the required flag (this runs too slowly: 30 minutes):
```
,FilteredUnitifiedConsederingLocation as (
    select (select
                IIF(ItemLocationsCount > 1 and ItemsInStock = 0 and StoreId <> MainStoreId, 'Not available any more', Store)
            from ItemsInStockByCurrentItemLocation ItemsInStockByCurrentItemLocationAvalability
            where ItemsInStockByCurrentItemLocationAvalability.StoreId = ItemsInStockByCurrentItemLocation.StoreId and
                  ItemsInStockByCurrentItemLocationAvalability.ItemId = ItemsInStockByCurrentItemLocation.ItemId) StoreAvalability,
           ItemsInStockByCurrentItemLocation.*,
           MainAndWorkFlowUnitified.Date_P,
           MainAndWorkFlowUnitified.Date
    from ItemsInStockByCurrentItemLocation
    inner join MainAndWorkFlowUnitified
        on MainAndWorkFlowUnitified.StoreId = ItemsInStockByCurrentItemLocation.StoreId and
           MainAndWorkFlowUnitified.ItemId = ItemsInStockByCurrentItemLocation.ItemId
    where
        --(ItemLocationsCount = 1 or
        --(ItemLocationsCount > 1 and ItemsInStock <> 0)) and
        (ItemsInStockByCurrentItemLocation.StorePath Like '%' + @StorePath + '%'
         or (ItemsInStockByCurrentItemLocation.StorePath is null and MSId is not null))
        and ItemsInStockByCurrentItemLocation.ItemPath Like @ItemPath + '%'
        and (ItemsInStockByCurrentItemLocation.Description is null or ItemsInStockByCurrentItemLocation.Description like '%' + @Description + '%')
        and (Date_P is null or (Date >= @FromDate and Date <= @ToDate))
)
```
This is the view that shows the whole data without the section that generates the required flag (this runs fast: 1 second):
```
,FilteredUnitified as (
    select ItemsInStock.*,
           MainAndWorkFlowUnitified.Date_P,
           MainAndWorkFlowUnitified.Date
    from ItemsInStock
    inner join MainAndWorkFlowUnitified
        on MainAndWorkFlowUnitified.StoreId = ItemsInStock.StoreId and
           MainAndWorkFlowUnitified.ItemId = ItemsInStock.ItemId
    where
        (ItemsInStock.StorePath Like '%' + @StorePath + '%'
         or (ItemsInStock.StorePath is null and MSId is not null))
        and ItemsInStock.ItemPath Like @ItemPath + '%'
        and (ItemsInStock.Description is null or ItemsInStock.Description like '%' + @Description + '%')
        and (Date_P is null or (Date >= @FromDate and Date <= @ToDate))
)
```
This is the view that switches between the two based on a parameter value:
```
ItemsInStocks as (
    select distinct
        ItemId,
        ParentGroup,
        Unit,
        Item, ItemPath,
        StoreAvalability Store,
        ItemLocationsCount,
        MainStorePath,
        --Store,
        StorePath,
        ItemsInStock,
        Description,
        Code,
        Name,
        NameEn
    from FilteredUnitifiedConsederingLocation where @ShowItemsWithUnavailableLocations = 1
    union all
    select distinct
        ItemId,
        ParentGroup,
        Unit,
        Item, ItemPath,
        Store,
        null ItemLocationsCount,
        MainStorePath,
        StorePath,
        ItemsInStock,
        Description,
        Code,
        Name,
        NameEn
    from FilteredUnitified where @ShowItemsWithUnavailableLocations = 0
)
```
I have tried not showing the row instead of putting a flag (you can see the related code commented out):
```
--(ItemLocationsCount = 1 or
--(ItemLocationsCount > 1 and ItemsInStock <> 0))
```
But the same thing is happening: the view still runs too slowly.
I have referred to the related questions in the knowledge base and found one suggesting the use of indexes. I figure that all the required columns are indexed, like ItemId, StoreId, and MainStoreId, because they are foreign keys. Kindly let me know if there is any additional column that should be indexed to optimize performance.
|
I’m deleting your post because it is written in a language other than English. While we understand that this may be frustrating, we don’t have the moderation capacity to allow posts to be written in any human language. We suggest using machine translation (such as Google Translate) to translate your post into English and then reposting it. [See this page](https://stackoverflow.com/help/non-english-questions) for more information. |
null |
You may want to call `mysql_secure_installation` command first to deal with the `root` password and allow the connection. |
|javascript|reactjs|jsx| |
I'm deleting your post because this seems like a programming-specific question, rather than a conversation starter. With more detail added, this may be better as a Question rather than a Discussions post.
Please see [this page](https://stackoverflow.com/help/how-to-ask) for help on asking a question on Stack Overflow.
If you are interested in starting a more general conversation about how to approach an issue or concept related to the topic of this collective, feel free to make another Discussion post. You can check the discussions guidelines at https://stackoverflow.com/help/discussions-guidelines |
{"OriginalQuestionIds":[21291675],"Voters":[{"Id":285587,"DisplayName":"Your Common Sense","BindingReason":{"GoldTagBadge":"php"}}]} |
I recently encountered an issue while initializing a new project using Composer. Upon running Composer, I received an error indicating a PHP version mismatch between the required version by my dependencies and the version installed on my system.
The error message from Composer reads:
> Composer detected issues in your platform: Your Composer dependencies
> require a PHP version '>= 8.2.0'. You are running 8.1.2.
To resolve this, I've considered updating my PHP version. However, I'm hesitant to reinstall XAMPP due to concerns about preserving my database. I'm currently using Windows 10.
I researched how to update PHP on Windows 10 without reinstalling XAMPP to avoid losing the existing database. However, I was uncertain about the best method to achieve this while ensuring the integrity of my database.
Ultimately, I'm seeking guidance on the most effective approach to update the PHP version in my XAMPP environment on Windows 10 while preserving the database, as well as exploring alternative solutions to address the PHP version mismatch issue encountered with Composer.
What steps can I take to update my PHP version without reinstalling XAMPP, ensuring that my database remains intact? Additionally, is there a more optimal solution than the ones I've considered? |
I'm encountering an error in my React Native application when trying to save user data and login details to Firebase using Redux. The error message I'm receiving is:
> [Error: Actions must be plain objects. Instead, the actual type was: 'undefined'. You may need to add middleware to your store setup to handle dispatching other values, such as 'redux-thunk' to handle dispatching functions.
I'm using Redux to manage my application's state, and I've configured a store using @reduxjs/toolkit. The strange thing is that the data is being saved successfully to Firebase, but I'm still getting this error.
Here's a simplified version of my Redux setup:
**Signup.js**
```js
const isTestMode = true;
const initialState = {
inputValues: {
username: isTestMode ? "John Doe" : "",
email: isTestMode ? "email@gmail.com" : "",
password: isTestMode ? "12121212" : "",
},
inputValidities: {
username: false,
email: false,
password: false,
},
formIsValid: false,
};
export default function Signup({ navigation }) {
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const [formState, dispatchFormState] = useReducer(reducer, initialState);
const dispatch = useDispatch();
const inputChangedHandler = useCallback(
(inputId, inputValue) => {
const result = validateInput(inputId, inputValue);
dispatchFormState({ inputId, validationResult: result, inputValue });
},
[dispatchFormState]
);
const signupHandler = async () => {
try {
setIsLoading(true);
const action = signUp(
formState.inputValues.username,
formState.inputValues.email,
formState.inputValues.password
);
await dispatch(action);
Alert.alert("Account Successfully created", "Account created");
setError(null);
setIsLoading(false);
navigation.navigate("Login");
} catch (error) {
console.log(error);
setIsLoading(false);
setError(error.message);
}
};
return (
<SafeAreaProvider>
<View style={styles.container}>
<View style={styles.inputView}>
<Inputs
id="username"
placeholder="Username"
errorText={formState.inputValidities["username"]}
onInputChanged={inputChangedHandler}
/>
<Inputs
id="email"
placeholder="Enter your email"
errorText={formState.inputValidities["email"]}
onInputChanged={inputChangedHandler}
/>
<InputsPassword
id="password"
placeholder="Password"
errorText={formState.inputValidities["password"]}
onInputChanged={inputChangedHandler}
/>
</View>
<Buttons
title="SIGN UP"
onPress={signupHandler}
isLoading={isLoading}
/>
<StatusBar style="auto" />
</View>
</SafeAreaProvider>
);
}
```
**AuthSlice.js**
```js
import { createSlice } from "@reduxjs/toolkit";
const authSlice = createSlice({
name: "auth",
initialState: {
token: null,
userData: null,
didTryAutoLogin: false,
},
reducers: {
authenticate: (state, action) => {
const { payload } = action;
state.token = payload.token;
state.userData = payload.userData;
state.didTryAutoLogin = true;
},
setDidTryAutoLogin: (state, action) => {
state.didTryAutoLogin = true;
},
},
});
export const authenticate = authSlice.actions.authenticate;
export default authSlice.reducer;
```
**Store.js**
```js
import { configureStore } from "@reduxjs/toolkit";
import authSlice from "./authSlice";
export const store = configureStore({
reducer: {
auth: authSlice,
},
});
``` |
I have a transformation matrix with scaling (20, 20, 1) and rotation by 90 degrees:
```
0, -20, 0, 80
20, 0, 0, 20
0, 0, 1, 0
0, 0, 0, 1
```
I want to extract the scaling. Here is an example in JavaScript with the glMatrix library that extracts the scaling from the transformation matrix above:
```html
<body>
<!-- Since import maps are not yet supported by all browsers, it is
necessary to add the polyfill es-module-shims.js -->
<script async src="https://unpkg.com/es-module-shims@1.7.3/dist/es-module-shims.js">
</script>
<script type="importmap">
{
"imports": {
"gl-matrix": "https://cdn.jsdelivr.net/npm/gl-matrix@3.4.3/+esm"
}
}
</script>
<script type="module">
import { mat4, vec3 } from "gl-matrix";
// Create a matrix
const matrix = mat4.fromValues(
0, 20, 0, 0,
-20, 0, 0, 0,
0, 0, 1, 0,
80, 20, 0, 1);
// Extract scaling
const scaling = vec3.create();
mat4.getScaling(scaling, matrix);
console.log(scaling[0], scaling[1], scaling[2]);
</script>
</body>
```
I want to rewrite it in Qt. I read the documentation and tried to Google it, but I didn't find how to extract the scaling from a transformation matrix.
```cpp
#include <QtGui/QMatrix4x4>
#include <QtWidgets/QApplication>
#include <QtWidgets/QWidget>
class Widget : public QWidget
{
public:
Widget()
{
QMatrix4x4 matrix(
0, -20, 0, 80,
20, 0, 0, 20,
0, 0, 1, 0,
0, 0, 0, 1);
qDebug() << "hello";
}
};
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
Widget w;
w.show();
return app.exec();
}
```
|
How to get scaling from transformation matrix |
# **here is my Addtodo.jsx**
```
import { useEffect, useState ,useContext} from "react";
import Usercontext from "./context/usercontext";
export default function Addtodo(){
let [options,setoptions] = useState([]);
let id = 1;
let {setuser} = useContext(Usercontext);
return(
<>
<input type="text" placeholder="ADD TO DO" defaultValue={''} className="border rounded-l-lg w-64 h-6 text-black" id='todo'/>
<button className="bg-orange-500 rounded-r-lg" onClick={()=>{
let todo = document.querySelector("#todo").value;
options.push({
'id':id,
'tick':false,
'edit':false,
'todo':{todo}
});
setuser({options});
id++;
document.querySelector('#todo').value = '';
}}>ADD</button>
</>
)
}
```
# **and here is my Usercontext.js**
```
import React from "react";
const Usercontext = React.createContext();
export default Usercontext;
```
# **and here is my Usercontextprovider.jsx**
```
import React, { useState } from "react";
import Usercontext from "./usercontext";
const Usercontextprovider = ({children})=>{
let [user,Setuser] = useState(null);
<Usercontext.Provider value={{user,Setuser}}>
{children}
</Usercontext.Provider>
}
export default Usercontextprovider;
```
# **and here is my app.jsx**
```
import Addtodo from './Addtodo'
import Usercontextprovider from './context/usercontextprovider'
import List from './List'
function App() {
return (
<Usercontextprovider>
<h1>Hello</h1>
<Addtodo />
</Usercontextprovider>
)
}
export default App;
```
My webpage is currently empty, with no elements inside the root.
I was expecting it to render Addtodo.jsx, but it stays empty. |
I have hundreds of dependencies in my package.json file which I didn't install (npm and using Warp) |
|npm|dependencies|warp-terminal| |
null |
If you have a list of the buttons then you can select a list entry at random. Always remember that the first entry in a list or an array has an index of 0.
If you put all the buttons in a Panel, it can be easier to keep them organised: for example, you can get VB.NET to select all the buttons within that panel. Then you can also get it to count them for you, so if you ever change how many buttons there are, you won't need to go through the code changing every applicable instance of 15.
For a random number generator, you'll be better off using the .NET [Random](https://learn.microsoft.com/en-us/dotnet/api/system.random) class instead of the old VB Rnd() function. You only need one instance of it, so it can be created at the same time as the main form.
- Note: the random-generator in the code in the question will produce floating-point numbers like 11.2342 - I doubt you have a button with that number, although there is no indication of what the `Order` method does with the value.
Please start a new VB.NET Windows Forms project, add a Panel named "TheButtonPanel" to the default form, and as many buttons as you like, just for testing, to the panel.
I have added a simple handler to each of the buttons, so you can see some of the possibilities of getting the code to do work instead of you.
```
Public Class Form1
Public rand As New Random()
Public theButtons As List(Of Button)
Sub Game()
Dim nButtons = theButtons.Count()
Dim n = rand.Next(0, nButtons)
Dim selectedButton = theButtons(n)
selectedButton.PerformClick()
End Sub
Private Sub bnClick(sender As Object, e As EventArgs)
Dim bn = DirectCast(sender, Button)
MessageBox.Show(bn.Text)
End Sub
Private Sub Init()
theButtons = TheButtonPanel.Controls.OfType(Of Button).ToList()
' An example of doing something with the buttons:
For Each bn In theButtons
AddHandler bn.Click, AddressOf bnClick
Next
' Other initialisation code.
End Sub
Private Sub Form1_Shown(sender As Object, e As EventArgs) Handles MyBase.Shown
Game()
End Sub
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
Init()
End Sub
End Class
```
When you run it, it will show the form and automatically click a random button, and the click handler will show you the value of the button's .Text property (because I wanted to put *something* in the `Game()` method). You can still click the buttons to run the click handler.
Finally, try to give controls more descriptive names than things like "TheButtonPanel"—I have no idea what would be a good name in your program. |
I have the string ```"This auction will run from Friday 28 July - Monday 7 August. It will close from 7pm (GMT) on Monday 7 August 2023. Read here for information on how our auctions end."```
and I'm trying to extract the dates from this string.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const str = "This auction will run from Friday 28 July - Monday 7 August. It will close from 7pm (GMT) on Monday 7 August 2023. Read here for information on how our auctions end.";
const regex = /(\d{1,2} \w+(?: \d{4})?)/g;
const dates = str.match(regex);
console.dir(dates);
<!-- end snippet -->
This is the regex I wrote. On a regex checker site, `(\d{1,2} \w+(?: \d{4})?)` matches 28 July, 7 August, and 7 August 2023, but in JavaScript it only matches 7 August and 7 August 2023. What's wrong?
|
I am trying to use Context api for a To-do website in vite and somehow its not working out |
|web|vite| |
null |
To check a field for a particular value, there is no need to attach a persistent listener; you can simply perform a one-time [Query#get()][1] call.
So assuming that you have two EditText objects:
EditText previousPasswordEditText = findViewById(R.id.previous_password_edit_text);
EditText newPasswordEditText = findViewById(R.id.new_password_edit_text);
To check the value of the `password` field that exists in the database against the value that is introduced by the user inside the `previousPasswordEditText` and only then perform the update with the new password that was typed inside the `newPasswordEditText`, please use the following lines of code:
DatabaseReference db = FirebaseDatabase.getInstance().getReference();
DatabaseReference userRef = db.child("user").child("user22");
userRef.get().addOnCompleteListener(new OnCompleteListener<DataSnapshot>() {
@Override
public void onComplete(@NonNull Task<DataSnapshot> task) {
if (task.isSuccessful()) {
DataSnapshot userSnapshot = task.getResult();
String oldPassword = userSnapshot.child("password").getValue(String.class);
String previousPassword = previousPasswordEditText.getText().toString();
String newPassword = newPasswordEditText.getText().toString();
if (oldPassword.equals(previousPassword)) {
Map<String, Object> updatePassword = new HashMap<>();
updatePassword.put("password", newPassword);
userSnapshot.getRef().updateChildren(updatePassword).addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> updateTask) {
if (updateTask.isSuccessful()) {
Log.d("TAG", "Update successful!");
} else {
Log.d("TAG", "Failed with: " + updateTask.getException().getMessage());
}
}
});
} else {
Log.d("TAG", "The oldPassword and the previousPassword don't match!");
}
}
}
});
While the above code will certainly work, please note that it's very important not to store sensitive data like passwords in plain text in the database. Malicious users might take advantage of that. I recommend you use [Firebase Authentication][2] and right after that secure the database using [Firebase Realtime Database Security Rules][3].
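As a sketch of that last point, security rules along these lines (assuming the user nodes are keyed by the Authentication UID rather than by names like `user22`) would restrict each user's node to its owner:

```json
{
  "rules": {
    "user": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```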
[1]: https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/Query#get()
[2]: https://firebase.google.com/docs/auth
[3]: https://firebase.google.com/docs/database/security |
null |
null |
I opened the source code of `getScaling` and saw how the scaling is calculated there: https://glmatrix.net/docs/mat4.js.html#line1197
```js
export function getScaling(out, mat) {
let m11 = mat[0];
let m12 = mat[1];
let m13 = mat[2];
let m21 = mat[4];
let m22 = mat[5];
let m23 = mat[6];
let m31 = mat[8];
let m32 = mat[9];
let m33 = mat[10];
out[0] = Math.hypot(m11, m12, m13);
out[1] = Math.hypot(m21, m22, m23);
out[2] = Math.hypot(m31, m32, m33);
return out;
}
```
I did the same with [qHypot][1]:
```cpp
#include <QtGui/QMatrix4x4>
#include <QtGui/QVector3D>
#include <QtMath>
#include <QtWidgets/QApplication>
#include <QtWidgets/QWidget>
class Widget : public QWidget
{
public:
Widget()
{
QMatrix4x4 m(
0, -20, 0, 80,
20, 0, 0, 20,
0, 0, 1, 0,
0, 0, 0, 1);
float sx = qHypot(m.row(0)[0], m.row(1)[0], m.row(2)[0]);
float sy = qHypot(m.row(0)[1], m.row(1)[1], m.row(2)[1]);
float sz = qHypot(m.row(0)[2], m.row(1)[2], m.row(2)[2]);
QVector3D scaling(sx, sy, sz);
qDebug() << scaling;
}
};
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
Widget w;
w.show();
return app.exec();
}
```
P.S. [The QMatrix4x4 constructor](https://doc.qt.io/qt-6/qmatrix4x4.html#QMatrix4x4-3) uses row-major order. [The glMatrix fromValues method](https://glmatrix.net/docs/mat4.js.html#line111) uses column-major order.
[1]: https://doc.qt.io/qt-6/qtmath.html#qHypot |
null |
The problem seems to come from there:
```jsx
getCourses().map((course) => (
<Card key={course.id}
course={course}
likedCourses={likedCourses}
setLikedCourses={setLikedCourses}
/>
))
```
It looks like the call to `getCourses` returns `undefined`.
Since you either return `allCourses` or `courses[category]` from that function, I'd say that you sometimes call your `Cards` component with a `category` that is not present in your `courses` object.
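A minimal sketch of a defensive fix (the `courses` data and category names here are made up for illustration): fall back to an empty array so `.map` is never called on `undefined`.

```javascript
// Hypothetical course data standing in for the real object from the question.
const courses = {
  react: ["Hooks 101", "State Management"],
  vue: ["Vue Basics"],
};
const allCourses = Object.values(courses).flat();

// Return an empty array for unknown categories instead of undefined,
// so callers can always .map() over the result.
function getCourses(category) {
  if (category === "all") return allCourses;
  return courses[category] ?? [];
}

console.log(getCourses("react"));   // known category
console.log(getCourses("angular")); // unknown category -> []
```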
You can find more information in a similar post: https://stackoverflow.com/questions/69080597/%C3%97-typeerror-cannot-read-properties-of-undefined-reading-map. |
I am trying to run Random Forest regression from `cuml.ensemble` on my dataset using the GPU. My input is 7 features and my output is 2 features. But for some reason the RAPIDS Random Forest regression doesn't accept 2 columns as output: it shows the error "Expected 1 columns but got 2 columns." How can I resolve this issue? Is there any way I can train the RF regression model for multiple outputs?
```python
X = df.iloc[:, 0:7]
Z = df[['PCA_1','PCA_2']]
x_train, x_test, y_train, y_test = cuml.train_test_split(X, Z, train_size=0.8, test_size = 0.2)
from cuml.ensemble import RandomForestRegressor
ranforest = RandomForestRegressor(n_estimators=120, n_bins = 8, accuracy_metric='r2', max_depth = 8, split_criterion = 'mse')
ranforest.fit(x_train, y_train)
```
```
ValueError: Expected 1 columns but got 2 columns.
``` |
This should do the job
```
const dialogs = document.querySelectorAll("dialog");
const imgButtons = document.querySelectorAll(".btn");
imgButtons.forEach((button,index) => {
button.addEventListener("click", () => {
const dialog = dialogs[index]
const dialogImage = dialog.querySelector("img");
const imageSrc = button.querySelector("img").src;
dialogImage.src = imageSrc;
dialog.showModal();
});
})
dialogs.forEach(dialog => {
const closeButton = dialog.querySelector("button");
closeButton.addEventListener("click", () => {
dialog.close();
});
});
```
|
I set up a Go environment on a remote Debian 9 Linux machine and can run Go code without problems. Then I connected to the remote machine using VS Code. The connection was successful, but VS Code complained that it could not find GOROOT. I've set the environment variables in .bashrc. Where can I specify these environment variables in VS Code? Thanks!
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/W8C7c.png |
VS Code cannot find GOROOT when VS Code connects to a remote server |
|go|visual-studio-code| |
1. **Utilize parallel processing**
   - *s5cmd concurrency*: The s5cmd tool itself might offer parallel processing capabilities. Check whether it supports a `-c` (concurrency) flag; if so, you can specify a higher number of concurrent connections to list files simultaneously. Refer to `s5cmd --help` for details.
   - *Python scripting*: Write a Python script using libraries like `boto3` (for Minio interaction) and `multiprocessing` to parallelize the listing across multiple cores. This allows concurrent listing of files from different subfolders.
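The parallel-listing idea can be sketched like this. Note that the bucket name, endpoint, and year range are placeholder assumptions, and the boto3 call is shown only as a comment; the demo uses a fake lister so the sketch runs without a Minio server.

```python
from concurrent.futures import ThreadPoolExecutor

def year_prefixes(base="minhash/", start=2020, end=2024):
    """One listing prefix per year, so each worker lists a disjoint slice."""
    return [f"{base}{year}" for year in range(start, end + 1)]

def parallel_count(list_prefix, prefixes, workers=8):
    """Run the per-prefix lister concurrently and sum the counts.

    In practice `list_prefix` would wrap boto3 against Minio, roughly:
        s3 = boto3.client("s3", endpoint_url="http://minio:9000")
        pages = s3.get_paginator("list_objects_v2").paginate(
            Bucket="ccdata", Prefix=prefix)
        return sum(len(p.get("Contents", [])) for p in pages)
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(list_prefix, prefixes))

# Demo with a fake lister standing in for the real Minio listing.
fake_counts = {"minhash/2020": 3, "minhash/2021": 5}
total = parallel_count(lambda p: fake_counts.get(p, 0), year_prefixes())
print(total)  # 8
```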
2. **Leverage Minio server-side listing**
   - *Minio CLI `stat` command*: The Minio CLI offers a `stat` command that can retrieve bucket statistics, including the number of objects. You can use this to get an approximate file count without listing each file individually.
   - *Minio Python SDK*: The Minio Python SDK provides methods like `list_objects` that allow listing objects with filtering options. You can potentially filter by prefix to list only objects under `s3://ccdata/minhash/20*`, reducing the number of objects retrieved.
3. **Optimize the command structure**
   - *Reduce awk usage*: The `awk` command likely adds some overhead. Consider modifying the s5cmd command to output the filename part directly (using options like `--csv` or custom formatting) instead of piping it through `awk`.
4. **Minio server-side filtering (if supported)**
   - *Minio lifecycle rules*: If your Minio server supports lifecycle rules, you could potentially configure a rule to automatically generate a daily manifest file containing the list of minhash files. This would eliminate the need to run the s5cmd command altogether. |
I have a working postgres query that I am trying to setup in Mybatis but keep receiving syntax errors. The query that works in my PgAdmin that I would like to implement in Mybatis checks if there are any common items between 2 arrays. The working query in my pgAdmin goes like:
```
SELECT * FROM weather_schema.weather weather
WHERE STRING_TO_ARRAY(weather.phenoms, ',') && '{"TO", "WI"}'
```
Below is how I have this set up in my MyBatis XML mapper, with '{"TO", "WI"}' replaced by an injectable list of strings called "phenoms":
```
<select id="getFilteredWeather" resultMap="WeatherObj">
    SELECT
    *
    FROM weather_schema.weather weather
    WHERE
    string_to_array(weather.custom_phenoms, ',') &&
    <foreach item="phenom" index="," collection="phenoms"
        open="'{"" separator="","" close=""}'">
        #{phenom}
    </foreach>
</select>
```
This gives the following result:
```
org.mybatis.spring.MyBatisSystemException] with root cause
org.postgresql.util.PSQLException: The column index is out of range: 7, number of columns: 6.
``` |
Postgres && statement Error in Mybatis Mapper? |