I do not understand how the following operations `2s – 1` and `1 - 2s` are performed in the following expressions:
```vhdl
R <= (s & '0') - 1;
L <= 1-(s & '0');
```
Considering the fact that `R` and `L` are of type `signed(1 downto 0)` and `s` of type `std_logic`. I have extracted them from a `vhdl` code snippet in my professor's notes.
__What I understand (or at least believe I understand – the premises of my reasoning):__
1. The concatenation with the `0` literal achieves multiplication by `2` (that is what shifting to the left does).
2. The concatenation also achieves a `std_logic_vector` of 2 bits (I am not so sure about that; I inferred it from a comment in the following [StackOverflow question](https://stackoverflow.com/questions/18689477/how-to-add-std-logic-using-numeric-std)).
3. "`std_logic_vector` is great for implementing data buses, it’s useless for performing arithmetic operations" - source: [vhdlwhiz](https://vhdlwhiz.com/signed-unsigned/#:~:text=Finally%2C%20signed%20and%20unsigned%20can,can%20only%20have%20number%20values).
__What baffles me:__
1. What type does the compiler interpret the `1` literal as?
- An integer? If so, can an `integer` be used without casting in an arithmetic expression with a `std_logic_vector`? This option doesn't seem very plausible to me...
- Assuming that `(s & '0')` is indeed interpreted as a `std_logic_vector` (second premise), the possibility also comes to mind that the compiler, based on the type of the other operand in the expression (i.e., `s`), inferred `1` to be of type `std_logic_vector` as well. However, even if both `(s & '0')` and `1` were interpreted as `std_logic_vector`, they should not behave correctly according to my third premise.
- A thought that comes to mind to justify the possibility of both operands being of type `std_logic_vector` is that both `(s & '0')` and `1` are implicitly cast to `signed` by the compiler because it acknowledges that the signal in which the result is stored is of type `signed`. This doesn't seem to make sense to me either.
`R <= (s & '0') - 1;` --suppose `s` is equal to `1`
Both are converted to `std_logic_vectors(1 downto 0)`
`R <= "10" - "01"`
Now, if the contents of the std_logic_vectors were interpreted as `signed` the result of the subtraction would be
`R <= (-2) - (1) = -3`
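Sketching my best guess, assuming my professor's snippet uses `ieee.numeric_std` (an assumption on my part, since the context clause isn't shown):
```vhdl
-- My guess at a self-contained version of the snippet; the entity and
-- port names are mine, only the two assignments come from the notes.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shift_demo is
  port (
    s    : in  std_logic;
    R, L : out signed(1 downto 0)
  );
end entity shift_demo;

architecture rtl of shift_demo is
begin
  -- numeric_std declares "-"(SIGNED, INTEGER) and "-"(INTEGER, SIGNED),
  -- so the 1 would stay a plain integer literal with no cast, and the
  -- concatenation (s & '0') would be picked up as a 2-bit signed (not
  -- std_logic_vector), because only that overload returns signed.
  -- With s = '1': "10" = -2, and -2 - 1 wraps to "01" = +1 in 2 bits.
  R <= (s & '0') - 1;
  L <= 1 - (s & '0');
end architecture rtl;
```
If that overload resolution is right, none of my guesses about `std_logic_vector` would apply, but I can't verify it.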
As you can tell, I am really confused. I believe we've only scratched the surface when it comes to discussing data types in class, and I am running into a lot of problems when solving exercises because I choose the wrong data types.
I sincerely apologize for the questions not being as clear as I would like, but they are only a reflection of my understanding of the subject. I appreciate your patience.
|
Need clarification on VHDL expressions involving std_logic_vector, unsigned and literals, unsure about compiler interpretation |
|math|types|vhdl|unsigned|ieee| |
null |
I had similar problems with webpack 5 and http-proxy-middleware.
The issue is that webpack DevServer uses the `/ws` path, but my proxy and socket server were also using `/ws`.
**Option 1**
Before: proxy and socket server were using `/ws`.
```js
const setupProxy = [
createProxyMiddleware('/api', options),
createProxyMiddleware('/ws', { ...options, ws: true }),
];
module.exports = (app) => app.use(setupProxy);
```
After: change to use `/wss`, or anything else.
```js
const setupProxy = [
createProxyMiddleware('/api', options),
createProxyMiddleware('/wss', { ...options, ws: true }),
];
module.exports = (app) => app.use(setupProxy);
```
**Option 2**
Alternatively, you could modify webpack's socket `pathname`.
Docs: https://webpack.js.org/configuration/dev-server/#websocketurl
Fix: https://github.com/webpack/webpack/discussions/15520#discussioncomment-2343375
And if you're using CRA, you can set this in your `.env`.
```env
WDS_SOCKET_PATH=/wds
``` |
What is the reason I'm seeing a Lookup index which is null when I run `graph.run("SHOW INDEXES;")`? |
|python|neo4j|py2neo| |
null |
The efficiency of either approach depends on whether you have a greater number of cells per grid or more layers, essentially dictated by your region of interest and spatial resolution, as well as the period or temporal resolution under investigation. Both methods could be viable, but their suitability varies depending on the specific circumstances, in my opinion.
In the referenced question, however, @mastefan appeared quite confident that he had managed to fix @Robert Hijmans' approach. But that exchange took place in 2018, and I have been unable to replicate the solution myself. Therefore, I decided to try the 2018 version of the {SPEI} package available on CRAN (and I probably should have also used the legacy versions of `{raster}` and `{zoo}` to ensure consistency in results, but you get the idea for now...):
```r
url <- "https://cran.r-project.org/src/contrib/Archive/SPEI/SPEI_1.7.tar.gz"
install.packages(url, repos = NULL, type = "source")
```
Let's run the code (and please forgive me for not reiterating all initialization and pre-processing steps):
``` r
raster::overlay(tm, lat, fun = Vectorize(th))
#> class : RasterBrick
#> dimensions : 3, 4, 12, 768 (nrow, ncol, ncell, nlayers)
#> resolution : 0.25, 0.3333333 (x, y)
#> extent : 0, 1, 0, 1 (xmin, xmax, ymin, ymax)
#> crs : NA
#> source : memory
#> names : layer.1, layer.2, layer.3, layer.4, layer.5, layer.6, layer.7, layer.8, layer.9, layer.10, layer.11, layer.12, layer.13, layer.14, layer.15, ...
#> min values : 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, ...
#> max values : 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, ...
```
Great, this is now executable. It appears that the behavior of `SPEI::thornthwaite()` may have changed since 2018. However, let's examine the execution speed as you expressed concerns about efficiency:
``` r
mbm <- microbenchmark::microbenchmark("raster::overlay()" = raster::overlay(tm, lat, fun = Vectorize(th)),
"cell-wise loop" = for (i in 1:ncell(tm)) {
out[i] <- th(tm[i], lat[i])
},
times = 100)
mbm
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> raster::overlay() 7694.4259 7865.7535 8040.483 7995.6493 8126.7056 9043.048 100
#> cell-wise loop 102.5918 106.5566 116.190 114.4493 121.6464 171.598 100
```
The execution speed doesn't seem to be faster; in fact, it appears significantly slower. However, to be fair, I'm not very familiar with `{raster}`, so perhaps someone more experienced could have optimized this quite easily. My intention was to profile these code snippets as they are. Additionally, it's worth noting that `{terra}` would likely be the preferred package choice in 2024.
However, what worries me more are the differing results from both approaches. Perhaps we should prioritize obtaining accurate results over maximizing speed.
``` r
out1 <- raster::overlay(tm, lat, fun = Vectorize(th))
out1
#> class : RasterBrick
#> dimensions : 3, 4, 12, 768 (nrow, ncol, ncell, nlayers)
#> resolution : 0.25, 0.3333333 (x, y)
#> extent : 0, 1, 0, 1 (xmin, xmax, ymin, ymax)
#> crs : NA
#> source : memory
#> names : layer.1, layer.2, layer.3, layer.4, layer.5, layer.6, layer.7, layer.8, layer.9, layer.10, layer.11, layer.12, layer.13, layer.14, layer.15, ...
#> min values : 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, 125.1804, ...
#> max values : 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, 125.5248, ...
```
``` r
out2 <- raster::brick(tm, values = FALSE)
for (i in 1:ncell(tm)) {
out2[i] <- th(tm[i], lat[i])
}
out2
#> class : RasterBrick
#> dimensions : 3, 4, 12, 768 (nrow, ncol, ncell, nlayers)
#> resolution : 0.25, 0.3333333 (x, y)
#> extent : 0, 1, 0, 1 (xmin, xmax, ymin, ymax)
#> crs : NA
#> source : memory
#> names : layer.1, layer.2, layer.3, layer.4, layer.5, layer.6, layer.7, layer.8, layer.9, layer.10, layer.11, layer.12, layer.13, layer.14, layer.15, ...
#> min values : 76.06896, 68.79146, 76.29749, 73.89062, 76.37712, 73.92394, 76.38355, 76.36436, 73.87383, 76.21108, 73.64047, 76.04259, 76.06896, 68.79146, 76.29749, ...
#> max values : 76.27825, 68.91329, 76.32396, 73.97994, 76.56333, 74.14654, 76.59547, 76.49955, 73.89596, 76.30668, 73.82273, 76.27298, 76.27825, 68.91329, 76.32396, ...
```
**Edit:**
What leaves me baffled are the oscillations in the results of `out2`. Remember, our average air temperature is fixed (= 20 °C), so the monthly long-term average, which is used implicitly, is the same for every month. Accordingly, variability in the results should be a function of latitude only, causing some minor spread per layer, but not between layers. What we actually have here, however, is variability as a function of time, which makes no sense at all from my point of view, since `tm` is constant.
My careful guess: `SPEI::thornthwaite()` eventually makes assumptions, which are not applicable on this data, so that `out2` is fast but not correct. The results in `out1` seem to have a credible range when comparing results manually making use of the equations given in [Thornthwaite (1948)](https://www.researchgate.net/publication/275605891_Epocas_de_florescimento_e_colheita_da_nogueira-macadamia_para_areas_cafeicolasda_regiao_sudeste/fulltext/55f5a93c08ae63926cf4e732/Epocas-de-florescimento-e-colheita-da-nogueira-macadamia-para-areas-cafeicolasda-regiaeo-sudeste.pdf), eq. 9+10, but don't pin me down on that. |
I'm trying to make a search bar that will look nice. What I did is, I made an image of a search bar, added the image as the background of the input, and adjusted the position and size at which the font appears.
The only thing I can't find a way to edit is the small 'x' button that appears when I use `input type="search"`.
I want to move it a little bit to the left so it will fit my search bar image.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
#search {
width: 480px;
height: 49px;
border: 3px solid black;
padding: 1px 0 0 48px;
font-size: 22px;
color: blue;
background-image: url('images/search.jpg');
background-repeat: no-repeat;
background-position: center;
outline: 0;
}
<!-- language: lang-html -->
<input id="search" name="Search" type="search" value="Search" />
<!-- end snippet -->
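The closest thing I have found so far is the non-standard `::-webkit-search-cancel-button` pseudo-element, which (as far as I can tell) only works in WebKit/Blink browsers; the offset below is a guess to be tuned against my image:
```css
/* Non-standard, WebKit/Blink only; Firefox shows no clear button at all. */
#search::-webkit-search-cancel-button {
  position: relative;
  right: 10px; /* nudge the 'x' to the left to line up with the image */
}
```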
|
The answer would definitely be to use [Module Federation](https://module-federation.io/), since you'll get the low-level build and runtime primitives that make it easy to load separate remote builds into your main application host, instead of starting from scratch.
This will also give you programmatic hooks which, depending on your needs, you can use to add additional strategies, for example refreshing a remote container if the main application discovers that a newer version has been deployed. |
null |
Auto delete old posts using Snippets for special category |
I am trying to generate a 38 kHz carrier frequency for an IR remote control using the delay functions. When I press the key named "tasto1", the remote should send the carrier from the IR LED, but I see nothing on the oscilloscope. The remote control has an ATtiny2313V MCU, and I wrote the code in AVR C using the avr-gcc compiler. Is there something wrong with this code?
```c
#include <avr/io.h>
#define F_CPU 4000000UL
#include <util/delay.h>
#include <avr/pgmspace.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
#include <util/atomic.h>

volatile uint8_t tasto1 = !(PINB6 & PIND2);

int main()
{
    DDRD = 0x01;
    PORTD = 0x00; // LED on PD0 output low (sink) for hardware modulator
    while (1) {
        if (tasto1 == 0) { // enable IR driver
            PORTD &= ~(1 << PD0);
            _delay_us(18);
            PORTD |= (1 << PD0);
            _delay_us(8);
        }
        else { DDRD = 0x00; }
    }
}
```
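Separately, here is a stripped-down variant I reasoned out on paper but have not tested: the pin is read inside the loop rather than once at startup, and the half-period comes from 1 / 38 kHz ≈ 26.3 µs, i.e. about 13 µs per half. The pin assignments are carried over from my code above, so treat them as assumptions:
```c
#include <avr/io.h>
#define F_CPU 4000000UL
#include <util/delay.h>

int main(void)
{
    DDRD = 0x01;                       /* PD0 drives the IR LED       */
    for (;;) {
        if (!(PIND & (1 << PIND2))) {  /* key pressed? (active low)   */
            PORTD |= (1 << PD0);       /* one ~38 kHz carrier cycle   */
            _delay_us(13);
            PORTD &= ~(1 << PD0);
            _delay_us(13);
        }
    }
}
```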
When I compile the code and upload the hex file to the MCU, I get no errors. I must use the delay-function solution because the dedicated timer pins are wired up to be used as keys. |
|git|github|version-control| |
I made it! I added `alpha=year` to my code and I got this:
[Final graph][1]
[1]: https://i.stack.imgur.com/Lt27G.jpg
|
Usually using a variable is additional overhead, but in your case Chrome was able to provide the same performance. If a variable is reused, though, it could actually boost performance. So the rule could be: don't create unnecessary variables.
Also note that the JS engine can optimize code while compiling, so in reality both of your examples could end up exactly the same after compilation.
Creating a variable could be considered a write operation, but write operations are anything that mutates data or creates new data. In your case you join an array, and that is a fairly big write operation that stores the result as a temporary string; assigning this string to a real variable adds almost nothing to that already large overhead. The fewer write operations, the faster the code. But that's about constant factors in an algorithm; I suggest learning about time complexity and big-O notation.
```
` Chrome/123
---------------------------------------------------------------------------------------
> n=10 | n=100 | n=1000 | n=10000
without vars ■ 1.00x x100k 565 | 1.00x x10k 594 | ■ 1.00x x1k 629 | ■ 1.00x x10 125
with vars 1.02x x100k 577 | ■ 1.00x x10k 592 | 1.01x x1k 635 | 1.03x x10 129
---------------------------------------------------------------------------------------
https://github.com/silentmantra/benchmark `
```
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
let $length = 10;
const big_strings = [];
const palette = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
for (let i = 0; i < $length; i++) {
let big_string = "";
for (let i = 0; i < 100; i++) {
big_string += palette[Math.floor(Math.random() * palette.length)];
}
big_strings.push(big_string);
}
let $input = big_strings;
var arrayStringsAreEqual = function(word1, word2) {
let a = word1.join('');
let b = word2.join('');
if (a == b) {
return true;
} else {
return false;
}
};
var arrayStringsAreEqual2 = function(word1, word2) {
return word1.join('') == word2.join('');
};
// @benchmark with vars
arrayStringsAreEqual($input, $input);
// @benchmark without vars
arrayStringsAreEqual2($input, $input);
/*@skip*/ fetch('https://cdn.jsdelivr.net/gh/silentmantra/benchmark/loader.js').then(r => r.text().then(eval));
<!-- end snippet -->
If we manage to avoid the intermediate join string we get faster code with bigger arrays:
```
` Chrome/123
----------------------------------------------------------------------------------------
> n=10 | n=100 | n=1000 | n=10000
without join 1.33x x100k 762 | 1.41x x10k 823 | 1.29x x1k 821 | ■ 1.00x x100 920
with join ■ 1.00x x100k 571 | ■ 1.00x x10k 583 | ■ 1.00x x1k 638 | 1.28x x10 118
----------------------------------------------------------------------------------------
https://github.com/silentmantra/benchmark `
```
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
let $length = 10;
const big_strings = [];
const palette = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
for (let i = 0; i < $length; i++) {
let big_string = "";
for (let i = 0; i < 100; i++) {
big_string += palette[Math.floor(Math.random() * palette.length)];
}
big_strings.push(big_string);
}
let $input = big_strings;
var arrayStringsAreEqual = function(word1, word2) {
return word1.join('') == word2.join('');
};
function iter(word){
let i = 0, j = 0;
return () => {
if(i === word.length) return '';
if(j === word[i].length) {
if(++i === word.length) return '';
j = 0;
}
return word[i][j++];
}
}
var arrayStringsAreEqual2 = function(word1, word2) {
const a = iter(word1), b = iter(word2);
let x, y;
do{
x = a();
y = b();
}while(x && x === y)
return !x && !y;
};
// @benchmark with join
arrayStringsAreEqual($input, $input);
// @benchmark without join
arrayStringsAreEqual2($input, $input);
/*@skip*/ fetch('https://cdn.jsdelivr.net/gh/silentmantra/benchmark/loader.js').then(r => r.text().then(eval));
<!-- end snippet -->
|
A `dplyr` solution:
```
library(dplyr)
df |>
filter(Disease == 1 & lead(Disease) == 2, .by = ID) |>
pull(ID)
```
Result:
```
[1] 2 4
```
**Edit:**
The original question was extended such that cases with the disease sequence (1, 3, 2) should also be included:
```
df |>
filter(Disease == 1 &
(lead(Disease) == 2 |
(lead(Disease) == 3 & lead(Disease, n = 2) == 2)), .by = ID) |>
pull(ID)
```
Result:
```
[1] 2 4 6
```
|
I don't understand how to implement the SLT instruction inside my 4-bit ripple ALU, which consists of 1-bit ALUs. I don't know what to put in the `2'b11` case of the 1-bit ALU or how to connect the slices together in the 4-bit ALU. How would I set the LSB if I can't tell whether a \< b until the MSB ALU? Here's my code; any help at all would be appreciated.
Full Adder:
```
module FullAdder(a, b, cin, sum, cout);
input a, b, cin;
output sum, cout;
assign sum = a ^ b ^ cin;
assign cout = (a & b) | ((a ^ b) & cin);
endmodule
```
1-Bit ALU:
```
module OneBitALU(a, b, cin, ainv, binv, less, op, result,
cout, set);
input a, b, cin;
input ainv, binv;
input less;
input [1:0] op;
output result;
output cout;
output set;
wire aneg, bneg;
reg result;
reg tmp;
reg set;
assign aneg = ainv ? ~a : a;
assign bneg = binv ? ~b : b;
FullAdder fulladder(aneg, bneg, cin, tmp, cout);
always @ (*) begin
case(op)
2'b00: result = aneg & bneg;
2'b01: result = aneg | bneg;
2'b10: result = tmp;
2'b11: begin
result = 1'b0;
end
default: result = 1'b0;
endcase
end
endmodule
```
4-bit ALU:
```
module FourBitALU(a, b, op, result, cout);
input [3:0] a, b;
input [3:0] op;
output [3:0] result;
output cout;
reg [3:0] result;
reg [3:0] sum;
wire [2:0] co;
OneBitALU oba0(a[0], b[0], op[2], op[3], op[2], sum[3], op[1:0], result[0], co[0], sum[0]);
OneBitALU oba1(a[1], b[1], co[0], op[3], op[2], 0, op[1:0], result[1], co[1], sum[1]);
OneBitALU oba2(a[2], b[2], co[1], op[3], op[2], 0, op[1:0], result[2], co[2], sum[2]);
OneBitALU oba3(a[3], b[3], co[2], op[3], op[2], 0, op[1:0], result[3], cout, sum[3]);
endmodule
```
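From the textbook figure (Patterson and Hennessy style), my guess is that the wiring should look roughly like this, but I'm not sure; the `set3` and `unused*` wire names are mine:
```
// Inside OneBitALU: pass the less input through for slt, and expose the
// adder's sum bit as set (only the MSB slice's set is meaningful):
//   2'b11: result = less;
//   assign set = tmp;

// Inside FourBitALU: feed the MSB's set (the sign of a - b, ignoring
// overflow) back into bit 0's less input; all other slices get 0:
wire set3;
wire unused0, unused1, unused2;
OneBitALU oba0(a[0], b[0], op[2], op[3], op[2], set3, op[1:0], result[0], co[0], unused0);
OneBitALU oba1(a[1], b[1], co[0], op[3], op[2], 1'b0, op[1:0], result[1], co[1], unused1);
OneBitALU oba2(a[2], b[2], co[1], op[3], op[2], 1'b0, op[1:0], result[2], co[2], unused2);
OneBitALU oba3(a[3], b[3], co[2], op[3], op[2], 1'b0, op[1:0], result[3], cout, set3);
```
Is something like that what's intended? Taking just the sign bit ignores overflow, which seems wrong for a true signed compare, but maybe that's acceptable at this stage.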
I don't understand how to do this. |
4-bit ALU SLT operation |
|verilog| |
null |
So I have a problem where I simply cannot change my nginx error page (even after setting it up in the nginx site config, it still shows the default one).
My website is behind a Cloudflare proxy, if that matters.
I tried looking it up on the internet; basically everyone had a similar config, so I really don't know what could be broken. Maybe it's something in the config that blocks it, maybe not. I would really appreciate some help.
**Changed real creds to mysite.com, I am aware of this**
Here's the code
```
server {
listen 80;
server_name mysite.com www.mysite.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name mysite.com www.mysite.com;
root /var/www/html;
index index.php index.html;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log error;
# allow larger file uploads and longer script runtimes
client_max_body_size 100m;
client_body_timeout 120s;
sendfile off;
ssl_certificate /etc/ssl/origin.pem;
ssl_certificate_key /etc/ssl/origin.key;
ssl_session_cache shared:SSL:10m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM->
ssl_prefer_server_ciphers on;
# See https://hstspreload.org/ before uncommenting the line below.
# add_header Strict-Transport-Security "max-age=15768000; preload;";
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;
add_header Content-Security-Policy "frame-ancestors 'self'";
add_header X-Frame-Options DENY;
add_header Referrer-Policy same-origin;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param PHP_VALUE "upload_max_filesize = 100M \n post_max_size=100M";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTP_PROXY "";
fastcgi_intercept_errors off;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
include /etc/nginx/fastcgi_params;
}
location ~ /\.ht {
deny all;
}
# Custom error pages
error_page 404 /404.html;
location = /404.html {
root /var/www/html/error;
internal;
}
error_page 403 /403.html;
location = /403.html {
root /var/www/html/error;
internal;
}
}
``` |
Getting Right Answer on Console.Log, but return function not giving right answer |
|javascript| |
null |
I am using an ASP.NET Core Web API with Entity Framework Core (pomelo). I have a MariaDB database. I use Swagger UI to explore my API, as per the template. When I try to use it to delete a row, I get the following error:
> Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details.
>
> MySqlConnector.MySqlException (0x80004005): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'RETURNING 1' at line 3
>
> at MySqlConnector.Core.ServerSession.ReceiveReplyAsync(IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/ServerSession.cs:line 894
at MySqlConnector.Core.ResultSet.ReadResultSetHeaderAsync(IOBehavior ioBehavior) in /_/src/MySqlConnector/Core/ResultSet.cs:line 37
at MySqlConnector.MySqlDataReader.ActivateResultSet(CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 130
at MySqlConnector.MySqlDataReader.InitAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, IDictionary`2 cachedProcedures, IMySqlCommand command, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 483
at MySqlConnector.Core.CommandExecutor.ExecuteReaderAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/CommandExecutor.cs:line 56
at MySqlConnector.MySqlCommand.ExecuteReaderAsync(CommandBehavior behavior, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 357
at MySqlConnector.MySqlCommand.ExecuteDbDataReaderAsync(CommandBehavior behavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 350
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.ExecuteAsync(IRelationalConnection connection, CancellationToken cancellationToken)
The delete should be handled here in my controller and repository, like this:
```csharp
[HttpDelete("alertId")]
public async Task<IActionResult> DeleteAlert(int alertId)
{
    var alert = await _dataRepository.GetAlertAsync(alertId);
    if (alert is null)
    {
        return NotFound("Alert not found");
    }
    await _dataRepository.DeleteAlertAsync(alert);
    return NoContent();
}
```
and this
```csharp
public class AlertRepository(IrtsContext context) : IDataRepositoryAlerts
{
    readonly IrtsContext _alertContext = context;

    public async Task DeleteAlertAsync(Alert entity)
    {
        if (entity != null)
        {
            _alertContext.Remove(entity);
            await _alertContext.SaveChangesAsync();
        }
        else
        {
            throw new NotImplementedException();
        }
    }
}
```
I do not understand this. I believe it is my `dbContext` that handles the "saving the entity changes". How can I have a SQL syntax error? I cannot find "Returning 1" anywhere in my code.
I have tried deleting the row manually in my database. That works.
All other operations (GET, POST and PUT) work just fine.
I have tried stepping through this with breakpoints to see where the error occurs, but everything seems to execute without issue.
I am grateful for any hints. I am obviously very new to this ;)
Edit: MariaDB version 11.2.2
Edit2: This is my Alert class:
```csharp
public partial class Alert
{
    public int AlertId { get; set; }
    public DateTime? Zeitpunkt { get; set; }
    public string? Quelle { get; set; }
    public string? AlertStatus { get; set; }
    public string? AlertTyp { get; set; }
    public string? BetroffeneSysteme { get; set; }
    public virtual ICollection<Vorfall> Vorfalls { get; set; } = new List<Vorfall>();
}
```
and this is its entity configuration:
```csharp
modelBuilder.Entity<Alert>(entity =>
{
    entity.HasKey(e => e.AlertId).HasName("PRIMARY");
    entity
        .ToTable("alert")
        .HasCharSet("utf8mb4")
        .UseCollation("utf8mb4");
    entity.Property(e => e.AlertId)
        .HasColumnType("int(11)")
        .HasColumnName("AlertID");
    entity.Property(e => e.AlertStatus).HasMaxLength(255);
    entity.Property(e => e.AlertTyp).HasMaxLength(255);
    entity.Property(e => e.BetroffeneSysteme).HasMaxLength(255);
    entity.Property(e => e.Quelle).HasMaxLength(255);
    entity.Property(e => e.Zeitpunkt).HasColumnType("datetime");
});
```
edit3: I found the parameterized query. It goes thus:
[The error in the log with sql query]
Edit 4: copying the query into workbench gives the following error:
https://i.stack.imgur.com/5jiaK.png
It appears that the problem really is the "RETURNING 1", and I must admit I have no idea what it is for. Is there a way to remove it? I have found no way to edit the EF queries themselves. Alternatively, I might need to try a different database.
Thank you for your help!
|
|
There are several pieces to this: either specify `runApp(.., host=, port=)` or shift to using the built-in shiny-server in the parent image.
# Fix `runApp`
First is that you expose port 8180 but the default of `runApp` may be to randomly assign a port. From [`?runApp`](https://shiny.posit.co/r/reference/shiny/latest/runapp):
```
port: The TCP port that the application should listen on. If the
‘port’ is not specified, and the ‘shiny.port’ option is set
(with ‘options(shiny.port = XX)’), then that port will be
used. Otherwise, use a random port between 3000:8000,
excluding ports that are blocked by Google Chrome for being
considered unsafe: 3659, 4045, 5060, 5061, 6000, 6566,
6665:6669 and 6697. Up to twenty random ports will be tried.
```
My guess is that it does not randomly choose 8180, at least not reliably enough for you to count on that.
The second problem is that network port-forwarding using docker's `-p` forwards to the container host, but not to the container's `localhost` (`127.0.0.1`). So we also should assign a host to your call to `runApp`. The magic `'0.0.0.0'` in TCP/IP networking means "all applicable network interfaces", which will include those that you don't know about before hand (i.e., the default routing network interface within the docker container). Thus,
```
CMD ["R", "-e", "shiny::runApp('/home/shiny-app',host='0.0.0.0',port=8180)"]
```
When I do that, I'm able to run the container and connect to `http://localhost:8180` and that shiny app works. (Granted, I modified the shiny code _a little_ since I don't have your data, but that's tangential.)
FYI, if you base your image on `FROM rocker/shiny-verse` instead of `FROM rocker/shiny`, you don't need to `install.packages('tidyverse')`, which can be a large savings. Also, with both `rocker/shiny` and `rocker/shiny-verse`, you don't need to `install.packages('shiny')` since it is already included. Two packages saved.
# Use the built-in shiny-server
The recommended way to use `rocker/shiny-verse` is to put your app in `/srv/shiny-server/appnamegoeshere`, and use the already-functional shiny-server baked in to the docker image.
Two benefits, one consequence:
- Benefit #1: you can deploy and serve multiple apps in one docker image;
- Benefit #2: if/when the shiny app fails or exits, the built-in shiny-server will automatically restart it; when `runApp(.)` fails, it just stops. (Granted, this is governed by shiny's restart logic in the presence of clear errors in the code.)
- Consequence: your local browser must include the app name in the URL, as in `http://localhost:8180/appnamegoeshere`. The `http://localhost:8180` page is a mostly-static landing page to say that shiny-server is working, and it does not by default list all of the apps that are being served by the server.
This means that your `Dockerfile` could instead be this:
```docker
# Base image
FROM rocker/shiny-verse
# Install dependencies (tidyverse and shiny are already included)
RUN R -e "install.packages(c('shinydashboard', 'DT'))"
# Make a directory in the container
RUN mkdir /srv/shiny-server/myapp
COPY . /srv/shiny-server/myapp
```
That's it, nothing more required to get everything you need since `CMD` is already defined in the parent image. Because shiny-server defaults to port 3838, your run command is now
```bash
docker run -p 3838:3838 deploy_test
```
and your local browser uses `http://localhost:3838/myapp` for browsing.
(FYI, the order of `RUN` and other commands in a `Dockerfile` can be influential. If, for instance, you change anything _before_ the `install.packages(.)`, then when you re-build the image it will have to reinstall those packages. Since we're no longer needing to (re)install `"tidyverse"` this should be rather minor, but if you stick with `rocker/shiny` and you have to `install.packages("tidyverse")`, then this can be substantial savings. By putting the `RUN` and `COPY` commands for this app _after_ `install.packages(..)`, then if we rename the app and/or add more docker commands later, then that `install.packages` step is cached/preserved and does not need to be rerun.) |
null |
I have an Avalonia project in C# that was just set up; I am currently testing whether Avalonia fits my requirements, and I am stuck at a simple point.
I want to create a custom user control of type `UserControl` and provide a property in that control, that should be set by a binding from the view that is using the control.
What I have:
1. A new `MainWindow.axaml` with this content
```
...
xmlns:c="using:Project.Avalonia.Controls"
...
<c:MyControl Text="{Binding myObject.DisplayText, Mode=OneWay}" />
```
2. A new `UserControl`, created by the Avalonia Template.
In `MyControl.axaml`:
```
<Label Content="{Binding Text, Mode=OneWay}" />
```
In the `MyControl.axaml.cs`:
```
public MyControl()
{
InitializeComponent();
DataContext = this;
}
public static readonly StyledProperty<string?> TextProperty = AvaloniaProperty.Register<MyControl, string?>(nameof(Text));
public string? Text
{
get { return GetValue(TextProperty); }
set { SetValue(TextProperty, value); }
}
```
My Problem:
When I run the program, the binding simply does not work. The source string variable holds a valid string: when I bind it to a standard label, it works.
In the Debug output I see the following line:
`Exception thrown: 'System.InvalidCastException' in System.Private.CoreLib.dll`. It disappears if I remove the binding. It stays if I remove the Label in `MyControl`, so the source seems to be the binding between the MainWindow and the UserControl. I don't understand why this is thrown, as both properties have the type `string?`.
Did I miss something in the Avalonia documentation? |
Avalonia Binding Custom Control throws System.InvalidCastException |
|c#|avalonia| |
Just use a normal **window.onerror** event to catch it...
    <!DOCTYPE html><html><head></head><body>
    <script>
    window.onerror=(e)=>{alert(e);};
    setTimeout(function(){
        console.log(window.frames.test_frame.location.href);
    },2000);
    </script>
    <iframe name="test_frame" id="test_frame" src="https://orthodoxchurchfathers.com"></iframe>
    </body></html>
If you want to go further you can also...
    window.onunhandledrejection=(e)=>{alert(e.reason);};
|
Flutter has changed its default theme color from blue to purple, along with some of its widgets (if you set `useMaterial3: true` in **ThemeData**).
After you edited your question, there are two ways.
To change app theme color you need to add
    colorScheme: ColorScheme.fromSeed(seedColor: Colors.blue),
inside your **ThemeData**
If you create a new project with Flutter version 3.16.9, your MaterialApp looks like this:
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'Flutter Demo',
          theme: ThemeData(
            colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
            useMaterial3: true,
          ),
          home: const MyHomePage(title: 'Flutter Demo Home Page'),
        );
      }
    }
The second way changes only the AppBar: you need to add `appBarTheme: AppBarTheme(color: Colors.blue),` in *MaterialApp/theme*.
For Example:
    MaterialApp(
      title: 'App name',
      // initialRoute: ,
      navigatorKey: _navigator,
      theme: ThemeData(
        appBarTheme: AppBarTheme( // Add here
          color: Colors.blue,
        ), // until here
        primarySwatch: Colors.blue,
      ),
    );
**Happy Coding :)** |
Paste the pieces together, and evaluate it as an expression, for example
```r
x1 <- c(1, 2, 3)
y1 <- c(1, 2, 4)
string <- paste0(c('x','y'), 1, '==', c(1,2), collapse='&')
eval(parse(text=string))
# [1] FALSE FALSE FALSE
``` |
The download link is not working on my WordPress website. The file and the link address are both correct; right-clicking the file even opens it in another tab, but it does not download directly. How do I solve this problem?
I have cleared the website's cache, but nothing changed. |
Download button not working on WordPress website. How can I solve it? |
|wordpress| |
null |
I'm encountering an issue with Hibernate where I'm getting the following SQL error:
Please help me resolve this error:
SQL Error: 0, SQLState: 42P01
ERROR: missing FROM-clause entry for table "th1_1"
Position: 14
I'm working on a Spring Boot application where I'm using Hibernate for ORM mapping. I have entities defined for dx_entity and dx_temporary_hazard, and I'm using a join strategy inheritance between them.
I'm attempting to retrieve data using Hibernate's findAll method.
@Getter
@Setter
@Entity
@Table(name = "dx_entity")
@Inheritance(strategy = InheritanceType.JOINED)
@DiscriminatorColumn(name = "table_name", discriminatorType =
DiscriminatorType.STRING)
@Where(clause = "deleted_at is null")
@NoArgsConstructor
public class DxEntity extends MultiTenantEntity implements Serializable
{
// Entity fields...
}
@Entity
@Table(name = "dx_temporary_hazard")
@Setter
@Getter
@DiscriminatorValue("dx_temporary_hazard")
public class TemporaryHazard extends DxEntity {
// Entity fields...
}
@RestController
public class TController {
private final TemporaryHazardRepository
temporaryHazardRepository;
public TController(TemporaryHazardRepository
temporaryHazardRepository) {
this.temporaryHazardRepository = temporaryHazardRepository;
}
@GetMapping("/test")
public Page<TemporaryHazard> test() {
return
temporaryHazardRepository.findAll(PageRequest.of(0,20));
}
}
When we trigger the /test controller, the following queries are executed (as seen in the console):
`SELECT th1_1.pk_id,
th1_0.changed_at,
th1_0.changed_by,
th1_0.created_at,
th1_0.created_by,
th1_0.created_layout,
th1_0.deleted_at,
th1_0.module_name,
th1_0.status,
th1_0.tag,
th1_0.tenant_id,
th1_0.updated_layout,
th1_1.abc,
th1_1.xyz,
th1_1.abc1,
th1_1.control_measures_required,
th1_1.xyz1,
th1_1.type
FROM PUBLIC.dx_entity th1_0
JOIN PUBLIC.dx_temporary_hazard th1_1
ON th1_0.pk_id=th1_1.pk_id
WHERE th1_0.tenant_id = ?
AND (
th1_0.deleted_at IS NULL)
AND th1_0.table_name='dx_temporary_hazard' OFFSET ? ROWS FETCH FIRST ? ROWS ONLY`
` SELECT count(th1_1.pk_id)
FROM PUBLIC.dx_entity th1_0
WHERE th1_0.tenant_id = ?
AND (
th1_0.deleted_at IS NULL)
AND th1_0.table_name='dx_temporary_hazard'
`
**Error:**
SQL Error: 0, SQLState: 42P01
ERROR: missing FROM-clause entry for table
We can see that `th1_1.pk_id` references the wrong alias in the count query: the count query has no join to `dx_temporary_hazard` (alias `th1_1`), so the query fails to return a result.
I tried Hibernate versions 6.2.x and 6.4.x, with no luck. |
Dynamically retrieve style from CSS with JavaScript |
|javascript|html|css| |
Making an open-world choose-your-own-adventure text-based game in Python. Sorta new to Python, so I'm not sure what the best way to do this is. I want to store information about lots of different items such as armor, weapons, etc. I also want to store information about different towns and the NPCs in them. How can I store this info, all with its own subdata such as a sword's damage, in an easy-to-use, sorted manner?
My first thought was to use classes. This was so I could do something like
```
class goblin:
    health = 10

class items:
    class weapons:
        class ironSword:
            damage = 1

goblin.health -= items.weapons.ironSword
```
But I figured this wouldn't work because I need the items to be accessible, like a variable. This is because I am using a class for the player, which has a list for their inventory. I want to be able to store these items in said list, which I am not sure how to do with classes. |
How to store data with lots of subdata but keep easy and simple access in python |
|python|text| |
null |
I recently upgraded my laravel application to bootstrap 5 and I cannot (for the life of me) get bootstrap's tooltips working. My application is running Laravel Framework 9.52.16 with bootstrap 5.3.3 and popper.js 1.16.1.
My default HTML tooltips work, but I would like to use Bootstrap 5's tooltips as they are cleaner, with better functionality and positioning. However, they do not appear no matter how I try to instantiate or use them. There are no errors in my console; the tooltips simply do not appear.
My site uses a structure where I have a layout.blade.php file which loads all of my dependencies and provides a basic structure for the site. It then uses a series of @include or @yield statements to include content appropriate for the page the user is on at any given time.
Initially I tried including the suggested Bootstrap instantiation code in the `<script>` tag at the bottom of layout.blade.php:
    /** Instantiate all BS5 Tooltips */
    var tooltipTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="tooltip"]'))
    var tooltipList = tooltipTriggerList.map(function (tooltipTriggerEl) {
        return new bootstrap.Tooltip(tooltipTriggerEl)
    })
I've tried placing this in my $(function() {}); tag as well as in $(document).ready(function(){}); but neither corrected the issue. I've also tried adding defer to the script tag, that also did not resolve the issue. I've copied a few other various shorthand solutions from codepen in the same locations and they also did not work. Finally, I've also tried including the above code directly in the lowest-level page (the blade file that actually has the tooltips) but that ALSO did not resolve the issue.
If I stick an alert before the return statement, I can clearly see that the code is identifying each tooltip, but the fact doesn't change that the tooltips do not appear when I hover over their container.
Here is an example tooltip from my HTML as well (I am only adding title; popper is adding the data-bs-original-title tag - so I know it is doing SOMETHING, just not making the tooltip appear.)
    <i class="fas fa-question-circle help-icon" data-bs-toggle="tooltip" data-bs-placement="top" container="body" aria-label="This will place the character on the list of characters that can be drawn for gift art. This does not have any other functionality, but allow users looking for characters to draw to find your character easily." data-bs-original-title="This will place the character on the list of characters that can be drawn for gift art. This does not have any other functionality, but allow users looking for characters to draw to find your character easily."></i>
I am a hobby developer and I am at my wits end on this. I spent days figuring out how to use mix properly so I could use all of BS5's new functionality only to hit this very silly wall over tooltips of all things. Any help is appreciated! |
I am probably missing something simple but...
I have a website created with WordPress that has a WooCommerce shop on it.
I want to put an image behind the "Shop" text in the banner on the page.
Can't seem to find a way to do it.
Any help would be greatly appreciated.
[Link to website page in question](https://malcolmwray.com/shop/)
Thanks
Not editable with Elementor! |
Put an image behind the title in a WP, WooCommerce "shop" page |
|wordpress|image|woocommerce|background-image| |
null |
I was asked the question below at one of the top companies, and I was not able to answer it.
I just replied: "I need to update myself on this topic."
After my research I got stuck at this point : `Index Skip Scan`
What is the relationship between `Composite indexing` and `Index Skip Scan`
It was asked in order to confuse me. Any solution to the questions below is much appreciated.
**Question :**
**If you create a composite indexing on 3 columns (eid , ename , esal ) ?**
- If I mention only eid=10 after the where clause, will the index be used?
`select * from emp where eid=10`;
- If I mention only eid=10 and ename='Raj', will the index be used?
`select * from emp where eid=10 and ename='Raj';`
- If I mention them in a different order, like esal=1000 and eid=10, will the index be used?
`select * from emp where esal=1000 and eid=10;`
- If I mention them in reverse order, like esal=1000 and ename='Raj' and eid=10, will the index be used?
`select * from emp where esal=1000 and ename='Raj' and eid=10;`
I need a solution with a detailed table representation and data showing how it works.
I will upvote a great solution. |
I have a project with the `composer.lock` file.
I installed packages with the command:
composer install
Now I would like to roll back that `composer install` command to the state as it was before running it.
How to remove all packages without affecting `composer.lock` file?
Is there any single `composer` command to do that?
I tried:
composer remove *
but I got:
> [UnexpectedValueException]<br>
> "LICENSE" is not a valid alias.
I tried:
composer remove */*
But then I get a bunch of output like:
> bin/console is not required in your composer.json and has not been removed
> Package "bin/console" listed for update is not locked.
Why did `composer remove *` not work at all? AFAIK the package name in the form `VendorName/PackageName` is a common convention for Packagist but not a must (if you use private repos), so how would one be able to remove all packages named `IdontHaveAnySlash` etc. at once?
I may use something similar to:
for package in $(composer show | awk '{print $1}'); do composer remove --no-interaction --dev --no-install "$package"; done
But that is not a simple and single `composer` command.
Also composer often complains about a package being a part (dependency) of another one so `composer` does not uninstall it.
> Removal failed, doctrine/annotations is still present, it may be required by another package. See `composer why doctrine/annotations`.
As my intention is to roll back to the state that did not have any packages installed, but only the files `composer.lock` and potentially `composer.json`, I really don't care about any dependencies, package versions, repository download URLs, etc.
I just want to have a project without any installed dependencies as it was before.
Is there any single `composer` command to do that?
My:
composer --version
is:
> version 2.2.7 2022-02-25 11:12:27
|
While building an App, I came across an error:
[![enter image description here][1]][1]
I'm guessing the error was saying I needed to update the location package in order to use it. While researching how to update the package, I came across this website :
https://www.fluttercampus.com/guide/391/android-gradle-plugin-supports-only-kotlin-gradle-plugin-version-and-heigher/
It basically said that in order to update the **Kotlin Gradle plugin version**, I would have to change the **ext.kotlin_version** number in the package's **android/build.gradle** file, then create a path to the file in **pubspec.yaml**. I made the proper changes to the location package as the website said, but the problem came when I tried creating a path to the file:
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/eqhex.png
[2]: https://i.stack.imgur.com/FatII.png
Is this even allowed? The website said to create a link straight from your computer's downloads file but Flutter isn't letting me. Are version numbers like "**1.1.1**" the only thing that can be accepted? I tried getting rid of the **path:** but that didn't work. A little help would be appreciated.
|
How Do I Create A Path In The pubspec.yaml File? |
|flutter|kotlin|dart|gradle|pubspec.yaml| |
If you pay attention to the design of your data structures then I don't see any need for `fmt.Sprintf`.
For example,
type TTKey struct {
Bitboard [2]uint64
PlayerTurn int
}
type TTFlag int64
type TTEntry struct {
BestScore float64
BestMove int
Flag TTFlag
Depth int
IsValid bool
}
type Solver struct {
NodeVisitCounter int
TTMapHitCounter int
TTMap map[TTKey]TTEntry
}
type Position struct {
Key TTKey
HeightBB [7]int
NumberOfMoves int
}
func (s *Solver) StoreEntry(p *Position, entry TTEntry) {
s.TTMap[p.Key] = entry
}
func (s *Solver) RetrieveEntry(p *Position) TTEntry {
if entry, exists := s.TTMap[p.Key]; exists {
s.TTMapHitCounter++
return entry
}
return TTEntry{}
} |
git ls-files -d | xargs git checkout --
this assumes that with "reset" you mean restore or undelete the file. because "reset" is another git concept which is related in some obscure ways. more on that later
explanation:
`git ls-files -d` will list all files that are deleted. note it will only list unstaged files. this is related to the "reset". more on that later.
`git checkout` will take the files from the current HEAD and write them in the working copy. in other words it will restore/undelete those files.
the double dash `--` is a safeguard against funny filenames starting with dash and name collisions with branches. see here for more info: https://stackoverflow.com/questions/13321458/meaning-of-git-checkout-double-dashes
`xargs` will collect the filenames from the first part and execute the second part with the collected filenames as argument.
so if the first part looks like this
$ git ls-files -d
file1
file2
the second part after `xargs` will be this
git checkout -- file1 file2
`xargs` is not part of git but a common command line tool.
----
note about space in filename:
this command, like all involving `xargs`, will break if the files have spaces in the name.
to accommodate for spaces in names, do it like this:
    git ls-files -d -z | xargs -0 git checkout --
                    ^^         ^^
----
note about staging and "reset"
git has this concept called staging. it is quite powerful but rarely needed in day to day work with git. if you commonly use `git commit -a` then you are staging implicitly.
staging is typically done using `git add`. unstaging using `git reset`. some commands like `git rm` will delete and stage automatically. a normal `rm` will only delete but not stage.
note "reset" does not do "undelete". it only does "unstage". depending on how you deleted the files, you may need to unstage them first.
observe:
$ ls
file1 file2 file3
we have three files.
$ git status
On branch main
nothing to commit, working tree clean
nothing is changed. also nothing is staged.
now we delete a file using normal `rm` and `git rm`
$ rm file2
$ git rm file3
rm 'file3'
check status
$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# deleted: file3
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# deleted: file2
#
the verbose variant of git status tells it as it is: file3 is deleted and staged, ready to be committed; file2 is deleted but not staged. a `git commit` at this point will only commit file3, leaving file2 unchanged in the commit but still deleted in the working copy.
now let us "reset" those deleted files
$ git ls-files -d
file2
`git ls-files -d` will only list all files that are deleted but not staged.
$ git checkout -- file2
file2 is now restored/undeleted in the working copy.
$ git checkout -- file3
error: pathspec 'file3' did not match any file(s) known to git
trying to restore file3 will give a very misleading error message.
$ git reset
Unstaged changes after reset:
D file3
file3 is now unstaged
$ git checkout -- file3
file3 is now restored in the working copy
$ ls
file1 file2 file3
----
note
newer git introduced a `git restore` command. it replaces `git reset` when all you want is to restore/undelete a file. this is perhaps safer because `git reset` can also change commit history.
see this question and answer for more info: https://stackoverflow.com/questions/58003030/what-is-git-restore-and-how-is-it-different-from-git-reset
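for example, a quick throwaway demo of `git restore` doing the unstage + undelete in one step (assumes git 2.23+; the scratch repo, identity, and file name are just for illustration, matching the walkthrough's file3):

```shell
# create a scratch repo with one committed file
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
echo hello > file3
git add file3
git commit -qm "add file3"

# delete AND stage the deletion, like `git rm` above
git rm -q file3

# one command instead of `git reset` followed by `git checkout --`:
git restore --staged --worktree file3

test -f file3 && echo restored    # prints: restored
```

with `--staged --worktree` the file is copied out of HEAD into both the index and the working copy, so no separate unstage step is needed.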
----
data loss warning
if you do an unconditional `git reset` you will lose all information about which files and which changes are staged and which are not.
you will not lose the actual code changes as those will still be in the working copy. but if you just worked hard resolving a merge conflict, and have not committed yet, then you might want to think twice before a `git reset`.
explaining this is a bit out of scope for this question and answer. i assume that when you are asking about restoring files from HEAD then you will not have been working with a complicated staging. so a `git reset` is safe to do for you.
|
I do not know why Amass only shows 2 subcommands, Enum and Intel. Why? On GitHub there are more than 2 commands.
My second question: what are FQDN, manage, and the other Amass Enum results? Thank you so much for helping me out. I am so confused right now.
A detailed and easy explanation for all my questions would be appreciated. |
OWASP Amass Subcommands |
|security|owasp| |
null |
Use the [esbuild][1] CLI:
```bash
esbuild --minify file.js
```
See: https://esbuild.github.io/api/#minify
[1]: https://www.npmjs.com/package/esbuild |
Actually, you are importing from the wrong package. Instead, use:

    import { redirect } from "next/navigation";

This will hopefully work! |
I am unable to create a new market on openbook-dex. I am trying to get the createMarket.ts script from their example repo (https://github.com/openbook-dex/scripts-v2/) to function. It is not clear from the example what values should be set for all parameters, as there are a number of commented out lines. Code below as I am executing it, with the commented lines removed and loading a helper script where I load the rpc. I have been using the solana devnet for my tests. I have minted a new token for this test, and using that address for the baseMint, and the authority loaded via the keyphrase json is the mint authority for that token.
```typescript
import {
  Keypair,
  PublicKey,
  ComputeBudgetProgram,
  SystemProgram,
  Transaction,
  Connection,
} from "@solana/web3.js";
import {
  AnchorProvider,
  BN,
  Program,
  Wallet,
  getProvider,
} from "@coral-xyz/anchor";
import { createAccount } from "./solana_utils";
import { MintUtils } from "./mint_utils";
import { OpenBookV2Client } from "@openbook-dex/openbook-v2";
import { RPC, authority, connection, programId } from "./utils";
import { HTTP_URL, WSS_URL } from "./constants";

function delay(ms: number) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function main() {
  const wallet = new Wallet(authority);
  const provider = new AnchorProvider(new Connection(HTTP_URL, { wsEndpoint: WSS_URL }), wallet, {
    commitment: "confirmed",
  });
  const client = new OpenBookV2Client(provider, programId);
  console.log(
    "starting with balance: ",
    await provider.connection.getBalance(authority.publicKey)
  );

  const baseMint = new PublicKey("EoSHYd5zpfZDkC33gssK98sy8Q3QGXKkVaAwsEAqBTZi"); // Base Mint: your token mint address.
  // SOL
  const quoteMint = new PublicKey("So11111111111111111111111111111111111111112"); // Quote Mint: the token you wish to pair with your token.
  const oracleAId = null;
  const oracleBId = null;
  const name = "VVCT-SOL";

  const [ixs, signers] = await client.createMarketIx(
    authority.publicKey, // payer
    name, // name
    quoteMint, // quoteMint
    baseMint, // baseMint
    new BN(1), // quoteLotSize
    new BN(1000000), // baseLotSize
    new BN(0), // makerfee
    new BN(0), // takerfee
    new BN(0), // timeExpiry
    oracleAId, // oracleA
    oracleBId, // oracleB
    null, // openOrdersAdmin
    null, // consumeEventsAdmin
    null, // closeMarketAdmin
  );

  const tx = await client.sendAndConfirmTransaction(ixs, {
    additionalSigners: signers,
  });
  console.log("created market", tx);
  console.log(
    "finished with balance: ",
    await connection.getBalance(authority.publicKey)
  );
}
main();
```
and the contents of my helper script:
```typescript
const NETWORK = "DEVNET"; // DEVNET or MAINNET

export var HTTP_URL = '';
export var WSS_URL = '';

if (NETWORK == "DEVNET") {
  HTTP_URL = "https://api.devnet.solana.com";
  WSS_URL = "wss://api.devnet.solana.com";
} else if (NETWORK == "MAINNET") {
  HTTP_URL = "https://api.mainnet-beta.solana.com";
  WSS_URL = "wss://api.mainnet-beta.solana.com";
}
```
when run, this returns:
```shell
(node:4183492) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
starting with balance: 980597400
[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "#<Object>".] {
code: 'ERR_UNHANDLED_REJECTION'
}
```
the rejection is happening in the sendAndConfirmTransaction call, but the error isn't helpful in determining why.
I found a successful transaction to the same program on devnet here: https://explorer.solana.com/tx/3JkmfJE6tKoodGhRkQaq4dP4LXSw3Je8rQCw6X7YW4qB6HT4mAqxk1HPGpLGhgQ6ce1w3WwPmR3wUNcq1FeD6zpz?cluster=devnet but it is not clear to me why mine fails. Based on the successful transaction, I have also tried setting `oracleAId` and `oracleBId` to `new PublicKey("opnb2LAfJYbRMAHHvqjCwQxanZn7ReEHp1k81EohpZb")`, which is the openbook v2 programId, but I get the same error. |
Solana openbook-dex createMarket script |
Databricks documentation says that Auto Loader can ingest millions of files per second from cloud storage using file notification mode. However, when I enable file notification mode, the speed at best reaches 300 files per second. All files are quite small, 10-50 KB in size.
I have tried both directory listing and file notification modes; both show the same speeds. I suspect file notification mode is not working properly, even though I can see that it automatically set up the Event Grid and queue storage in Azure after I provided the required access/permissions. |
Ingesting a high volume of small files in Azure Databricks |
|apache-spark|pyspark|azure-databricks|spark-structured-streaming| |
null |
I am new to Flask and JavaScript, so help would be appreciated. I am making a mock draft simulator, and I have a function that contains a for loop, that simulates a selection for each team, adds the pick to a dictionary, and after the loop breaks, the dictionary is sent to JavaScript.
```
@app.route('/simulate-draft', methods=['POST'])
def simulate_draft():
    # Get info from user and send to JavaScript
    for pick, info in draft_results.items():
        if team == user_team:
            # get pick from user
            # append to draft_results
            ...
        else:
            # simulate pick
            # append to draft_results
            ...
    return jsonify(draft_results)
```
However, I want the selections to be sent to JavaScript one by one, so the team the user chose to select for can make their selection, instead of the whole dictionary being sent at the end of the function. I have read that `yield` could be a potential solution for this, but I keep getting errors when implementing it. |
You want the `$`<sup>[(docs)](https://www.autohotkey.com/docs/v2/Hotkeys.htm#prefixdollar)</sup> prefix.
    #Requires AutoHotkey 2

    $1::
    {
        static num := 1
        SendInput(num++)
        if (num > 3)
            num := 1
    }
|
You can use [`full()`][1] filter.
```rust
use warp::{filters::path::FullPath, Filter};

let hello = warp::path!("fullpath")
    .and(warp::filters::path::full())
    .map(|path: FullPath| format!("{}", path.as_str()));
```
[1]: https://docs.rs/warp/latest/warp/filters/path/fn.full.html |
When I try to add an AAR dependency from a Maven repository in Gradle, the files are not resolved and the class files are not available in External Libraries. When I inspect the code, only the BuildConfig is available. How do I overcome this issue?
The issue is specific to this machine; the same setup works fine on other machines. |
Android Studio is not resolving Maven repository AAR files; class files are not available |
|aar| |
null |
This could be something I overlooked: the commit is not actually on any branch, and now it cannot be seen under Xcode -> Source Control navigator -> Repositories.
The only place that seems to have some details of this commit is under .git -> objects, where 'f5' is definitely the first 2 characters of the ID of that commit.
But neither of these 2 files can be viewed in any way:
[](https://i.stack.imgur.com/vjQOn.png)
Is it possible to get this commit back and how?
git reflog and git log only display the commits on a branch; this should be the ID of the commit that I'm looking for: `f513102`
<pre><code>$ git reflog
62566f0 (HEAD -> onboarding-ch) HEAD@{0}: checkout: moving from 74f55303e8b94ce00f7b34b9e9ad3094d558be21 to onboarding-ch
74f5530 (main) HEAD@{1}: checkout: moving from f5131027b344163de473b7cc4e0a3b9a83e133cd to f5131027b344163de473b7cc4e0a3b9a83e133cd
<b>f513102</b> HEAD@{2}: commit (amend): - branch 'onboarding-ch'
6f82ade HEAD@{3}: commit: - branch 'onboarding-ch'
</code></pre> |
I am using an ASP.NET Core Web API with Entity Framework Core (pomelo). I have a MariaDB database. I use Swagger UI to explore my API, as per the template. When I try to use it to delete a row, I get the following error:
> Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details.
>
> MySqlConnector.MySqlException (0x80004005): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'RETURNING 1' at line 3
>
> at MySqlConnector.Core.ServerSession.ReceiveReplyAsync(IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/ServerSession.cs:line 894
at MySqlConnector.Core.ResultSet.ReadResultSetHeaderAsync(IOBehavior ioBehavior) in /_/src/MySqlConnector/Core/ResultSet.cs:line 37
at MySqlConnector.MySqlDataReader.ActivateResultSet(CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 130
at MySqlConnector.MySqlDataReader.InitAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, IDictionary`2 cachedProcedures, IMySqlCommand command, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 483
at MySqlConnector.Core.CommandExecutor.ExecuteReaderAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/CommandExecutor.cs:line 56
at MySqlConnector.MySqlCommand.ExecuteReaderAsync(CommandBehavior behavior, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 357
at MySqlConnector.MySqlCommand.ExecuteDbDataReaderAsync(CommandBehavior behavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 350
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.ExecuteAsync(IRelationalConnection connection, CancellationToken cancellationToken)
The delete should be handled here in my controller and repository, like this:
    [HttpDelete("alertId")]
    public async Task<IActionResult> DeleteAlert(int alertId)
    {
        var alert = await _dataRepository.GetAlertAsync(alertId);
        if (alert is null)
        {
            return NotFound("Alert not found");
        }
        await _dataRepository.DeleteAlertAsync(alert);
        return NoContent();
    }
and this
    public class AlertRepository(IrtsContext context) : IDataRepositoryAlerts
    {
        readonly IrtsContext _alertContext = context;

        public async Task DeleteAlertAsync(Alert entity)
        {
            if (entity != null)
            {
                _alertContext.Remove(entity);
                await _alertContext.SaveChangesAsync();
            }
            else
            {
                throw new NotImplementedException();
            }
        }
    }
I do not understand this. I believe it is my `dbContext` that handles the "saving the entity changes". How can I have a SQL syntax error? I cannot find "Returning 1" anywhere in my code.
I have tried deleting the row manually in my database. That works.
All other operations (GET, POST and PUT) work just fine.
I have tried running this with breakpoints to see where the error occurs, but everything seems to execute without issue.
I am grateful for any hints. I am obviously very new to this ;)
Edit: MariaDB version 11.2.2
Edit2: This is my Alert class:
    public partial class Alert
    {
        public int AlertId { get; set; }
        public DateTime? Zeitpunkt { get; set; }
        public string? Quelle { get; set; }
        public string? AlertStatus { get; set; }
        public string? AlertTyp { get; set; }
        public string? BetroffeneSysteme { get; set; }
        public virtual ICollection<Vorfall> Vorfalls { get; set; } = new List<Vorfall>();
    }
and this is its entity configuration:
    modelBuilder.Entity<Alert>(entity =>
    {
        entity.HasKey(e => e.AlertId).HasName("PRIMARY");

        entity
            .ToTable("alert")
            .HasCharSet("utf8mb4")
            .UseCollation("utf8mb4");

        entity.Property(e => e.AlertId)
            .HasColumnType("int(11)")
            .HasColumnName("AlertID");
        entity.Property(e => e.AlertStatus).HasMaxLength(255);
        entity.Property(e => e.AlertTyp).HasMaxLength(255);
        entity.Property(e => e.BetroffeneSysteme).HasMaxLength(255);
        entity.Property(e => e.Quelle).HasMaxLength(255);
        entity.Property(e => e.Zeitpunkt).HasColumnType("datetime");
    });
edit3: I found the parameterized query. It goes thus:
[The error in the log with sql query]
Edit 4: copying the query into workbench gives the following error:
https://i.stack.imgur.com/5jiaK.png
It appears that the problem really is the "RETURNING 1" and I must admit I have no idea what it is for. Is there a way to remove it? I have found no way to edit the EF queries themselves. Alternatively I might need to try a different database.
Thank you for your help!
|
I have created a new repository on GitHub and uploaded a test index.html file to it. How can I find the File_ID for this file?
It should be something like: File_ID=765345_index.html
I was able to get the User_ID via the API at https://api.github.com/ but I have no idea what to do for the File_ID. |
How do I find Github File_ID? |
|github|github-pages|github-api| |
null |
Based purely on [this documentation](http://re2c.org/manual/manual_c.html#regular-expressions), a form of lookahead is supported.
So one might hope that this would work:
```
BINARY_NUM = "0b" ("0"|"1") ("_"? ("0"|"1"))* / [^_01] ;
```
or slightly more compactly:
```
BINARY_NUM = "0b" [01]+ ( "_" [01]+ )* / [^_01] ;
```
although something more sophisticated would be needed, since the example above implies that:
```
0b010101010222
```
would be parsed as `0b010101010` followed by `222`.
However, I discovered "trailing contexts are not allowed in named definitions" when I tried substituting the above into the [introductory sample code in the manual](http://re2c.org/manual/manual_c.html#introduction).
Modifying it with [the "sentinel" example](http://re2c.org/manual/manual_c.html#sentinel), I get:
```
// re2c $INPUT -o $OUTPUT -i --case-ranges
#include <assert.h>
#include <stdbool.h> /* needed for bool when compiled as C */
bool lex(const char *s) {
const char *YYCURSOR = s;
const char *YYMARKER;
for(;;) {
/*!re2c
re2c:yyfill:enable = 0;
re2c:define:YYCTYPE = char;
number = "0b" [01]+ ( "_" [01]+ )*;
number { continue; }
[\x00] { return true; }
* { return false; }
*/
}
}
int main() {
assert(lex("0b01_001"));
assert(lex("0b00000_"));
return 0;
}
```
This successfully rejects trailing `_`.
It is also possible to just include the null directly:
```
number = "0b" [01]+ ("_"[01]+)* "\x00";
``` |
I found a way to do this in just HTML, if that's what you're looking for. If you re-write your code to look like this:
```
<a href="website_homepage.htm">Home</a>
<a href="website_hills.htm">Hills Pupil Tailored Website</a>
```
Then your links will show side by side.
Full transparency, I'm just starting out on my developer journey. I'm currently taking a full stack developer course and I'm just on the HTML section of the course (I haven't hit the css or javascript portion of it yet). So there may be a better way to do it using CSS or other tricks that I haven't learned yet.
I'm assuming I'll eventually learn how to format this through CSS but I figured I'd add what I know in case it's what you're looking for. |
[As Moshi pointed out][1], I conflated the concepts of "case-sensitivity" and "case of letters":
The "scheme" component and the "host" authority sub-component being **case-insensitive** means that they can contain letters of any cases, but an implementation should treat "scheme" and "host" values, respectively, as identical if they only differ in the cases of the letters contained. (See `ALPHA`'s definition in [Section 1.3 Syntax Notation][2].)
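A small illustration of that distinction (a hedged sketch, not part of the original answer): Python's standard-library `urlsplit` already lowercases the scheme and host, which makes an RFC 3986–style equivalence check straightforward, while the path stays case-sensitive.

```python
# Sketch: scheme and host compare case-insensitively per RFC 3986,
# but the path does not. urlsplit (stdlib) normalizes scheme/host case.
from urllib.parse import urlsplit

def equivalent(u1: str, u2: str) -> bool:
    a, b = urlsplit(u1), urlsplit(u2)
    return (a.scheme == b.scheme            # urlsplit lowercases the scheme
            and a.hostname == b.hostname    # .hostname is lowercased too
            and a.path == b.path)           # path remains case-sensitive

print(equivalent("HTTP://Example.COM/pg", "http://example.com/pg"))  # True
print(equivalent("http://example.com/PG", "http://example.com/pg"))  # False
```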
  [1]: https://software.codidact.com/posts/291216/291217#answer-291217
[2]: https://www.rfc-editor.org/rfc/rfc3986#section-3.1 |
Cannot get Bootstrap Tooltips working in Laravel Application after BS5 upgrade even when tooltips are enabled per BS5 documentation |
|user-interface|bootstrap-5|laravel-9| |
null |
You have two separate variables called `ch` -- a local parameter to the `Get` function and a global static `ch`. The global static is used as the argument to `Get` in `main`, so its value will be used to initialize the parameter `ch`, but other than that, these are two completely separate variables with no connection between them.
So within `Get`, `ch` will refer to the parameter `ch`, and changes to it (by assignment or with `scanf`) will have no effect on the global `ch`. |
I have created this Swiper slider with pagination bullets that act as a progress bar. The bullet of the active slide changes color (to `#000`). Is there a way to also change the color of bullets for slides that have already been visited (to `#000`), so that only the bullets of unvisited slides keep the grey `#DDD` background?
Here's my code:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
var mySwiper = new Swiper('.swiper-container', {
loop: true,
slidesPerView: 1,
autoplay: {
delay: 5000,
},
effect: 'fade',
fadeEffect: {
crossFade: true
},
pagination: {
el: '.swiper-pagination',
clickable: 'true',
type: 'bullets',
renderBullet: function (index, className) {
return '<span class="' + className + '">' + '<i class="progress-bar-bg"></i>' + '<b class="progress-bar-cover"></b>' + '</span>';
},
},
})
<!-- language: lang-css -->
:root {
--swiper-pagination-bullet-border-radius: 0;
--swiper-pagination-bullet-width: 40px;
--swiper-pagination-bullet-height: 2px;
}
body {
font-family: Helvetica;
color: #000;
}
.swiper-container {
width: 100%; height: 100vh;
}
.swiper-wrapper {
width: 100%; height: 100%;
}
.swiper-slide {
font-size: 100px; text-align: center;
line-height:100vh;
}
.swiper-pagination-bullet {
position: relative;
height: auto;
opacity: 1;
margin-right: 20px;
background-color: transparent;
.progress-bar-bg {
position: absolute;
bottom: 0;
left: 0;
z-index: 1;
width: 100%;
height: 2px;
background-color: #DDD;
}
.progress-bar-cover {
position: absolute;
bottom: 0;
left: 0;
z-index: 2;
width: 0%;
height: 2px;
background-color: #000;
}
}
.swiper-pagination-bullet-active {
background-color: transparent;
b {
animation-name: countingBar;
animation-duration: 3s;
animation-timing-function: ease-in;
animation-iteration-count: 1;
animation-direction: alternate ;
animation-fill-mode:forwards;
}
}
@keyframes countingBar {
0% {width: 0;}
100% {width:100%;}
}
<!-- language: lang-html -->
<link
rel="stylesheet"
href="https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.css"
/>
<script src="https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.js"></script>
<!-- Slider main container -->
<div class="swiper-container">
<!-- Additional required wrapper -->
<div class="swiper-wrapper">
<!-- Slides -->
<div class="swiper-slide">Slide 1</div>
<div class="swiper-slide">Slide 2</div>
<div class="swiper-slide">Slide 3</div>
...
</div>
<!-- If we need pagination -->
<div class="swiper-pagination"></div>
</div>
<!-- end snippet -->
Any pointers would be an immense help. Thank you so much.
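One possible direction for marking visited bullets automatically (a hedged sketch, not a verified Swiper recipe: it assumes Swiper's documented `slideChange` event, `realIndex` property, and `pagination.bullets` array, plus a hypothetical `visited-slide` CSS class) is to track visited indices and restyle the bullets on every slide change instead of on click:

```javascript
// Pure helper: remembers which slide indices have been shown.
function makeVisitedTracker() {
  const visited = new Set();
  return {
    visit(index) { visited.add(index); },
    has(index) { return visited.has(index); },
  };
}

// Browser-only wiring (assumes the Swiper instance `mySwiper` from above):
// const tracker = makeVisitedTracker();
// mySwiper.on('slideChange', function () {
//   tracker.visit(mySwiper.realIndex);
//   mySwiper.pagination.bullets.forEach(function (bullet, i) {
//     // 'visited-slide' is a hypothetical class you would style with #000.
//     bullet.classList.toggle('visited-slide', tracker.has(i));
//   });
// });
```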
In JavaScript, I tried adding a click event listener that marks visited slides with a 'visited-slide' class. However, that requires a click and won't automatically update the bullet's color as the slide animation advances on its own. |
A windowed function is processed at the same time as the SELECT.
More specifically, this is the order of operations:

1. FROM and JOINs
2. WHERE
3. GROUP BY
4. HAVING
5. SELECT

Notice how SELECT is processed last.
Therefore your queries are not the same: your first query filters before the windowed function is evaluated, while the second filters after it.
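That difference can be demonstrated end to end. The sketch below uses Python's bundled sqlite3 (which supports window functions since SQLite 3.25) rather than your actual database, so the table and column names are made up:

```python
# Demonstration: WHERE runs before a window function, so filtering first
# changes the rows the window sees. (Illustrative table, not the OP's schema.)
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Filter BEFORE the window: COUNT(*) OVER () sees only the 2 surviving rows.
before = sorted(con.execute(
    "SELECT val, COUNT(*) OVER () FROM t WHERE val > 1"))

# Filter AFTER the window (outer query): COUNT(*) OVER () saw all 3 rows.
after = sorted(con.execute(
    "SELECT * FROM (SELECT val, COUNT(*) OVER () FROM t) WHERE val > 1"))

print(before)  # [(2, 2), (3, 2)]
print(after)   # [(2, 3), (3, 3)]
```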
|
```
pip install numpy
Collecting numpy
Using cached numpy-1.26.4.tar.gz (15.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
+ C:\Users\Ya Zahra\AppData\Local\Programs\Python\Python313\python.exe C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef\vendored-meson\meson\meson.py setup C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef\.mesonpy-0_r_213r -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef\.mesonpy-0_r_213r\meson-python-native-file.ini
The Meson build system
Version: 1.2.99
Source dir: C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef
Build dir: C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef\.mesonpy-0_r_213r
Build type: native build
Project name: NumPy
Project version: 1.26.4
WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe
..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']]
The following exception(s) were encountered:
Running `icl ""` gave "[WinError 2] The system cannot find the file specified"
Running `cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `cc --version` gave "[WinError 2] The system cannot find the file specified"
Running `gcc --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang --version` gave "[WinError 2] The system cannot find the file specified"
Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified"
Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified"
A full log can be found at C:\Users\Ya Zahra\AppData\Local\Temp\pip-install-9k0yru47\numpy_aa35ee8d6f314c8383668c5ac8a97aef\.mesonpy-0_r_213r\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
```
I wanted to install the numpy module when I encountered the above error. It's not just this module: every module I try to install fails the same way. |
Preparing metadata (pyproject.toml) ... error |
|python|module| |
null |
|typescript|solana|solana-web3js|anchor-solana| |
null |