I have an HTML document whose content spans multiple pages and has to be printed. I have attached a sample below.
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Dynamically Generated HTML with Header for PDF</title>
<style>
/* Header styles for print */
@media print {
.header {
position: fixed;
top: 0;
left: 0;
right: 0;
height: 50px;
/* Adjust height as needed */
color: white;
text-align: center;
line-height: 50px;
/* Adjust line height as needed */
}
}
/* Content styles */
.content {
margin-top: 60px;
/* Ensure content starts below the header */
}
</style>
</head>
<body>
<div class="header">Header Content</div>
<div class="content">
<!-- Your dynamically generated HTML content here -->
<p>This is a sample dynamically generated content.</p>
<!-- More content -->
</div>
</body>
</html>
```
This has a header at a fixed position. When the HTML is printed to PDF, the content overlaps with the header from the 2nd page onwards. The margin-top CSS of the content is applied only on the first page.
I also tried setting @page CSS with a margin, but in that case the fixed-position header is moved down by the margin as well. I tried different CSS techniques but none of them worked. The main problem is that the CSS is applied only to the first printed page and not to subsequent pages. |
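For reference, the `@page` attempt looked roughly like this (a sketch; the margin value is an assumption, not taken from the actual code):
```css
/* Reserve space at the top of every printed page. The problem
   described above: the fixed-position header is laid out inside
   the page margin box too, so it moves down along with the content. */
@page {
    margin-top: 60px; /* assumed value */
}
```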
|javascript|visual-studio-code|microsoft-edge| |
null |
What steps should I follow to correctly set up two foreign keys in Laravel 11, as I've been having difficulties with this task?
```php
$table->id();
$table->foreignId('news_id')
->constrained(table: 'news')
->cascadeOnDelete()
->cascadeOnUpdate();
$table->foreignId('category_id')
->constrained(table: 'categories')
->cascadeOnDelete()
->cascadeOnUpdate();
$table->timestamps();
```
 |
Laravel 11 foreign key not showing up in SQL |
OK, so if you have an h1 element with a margin or padding greater than 0, you want the wrapping element to have the style overflow: hidden. Sometimes applying it to the body won't work, as the element containing the h1 or p element may be a div.
The wrapping element could be main, a div, or something else.
So don't do this:
* {
margin: 0px;
padding: 0px;
}
Instead, inspect each element with a default margin, like an h1 or p, find its immediate parent, and apply overflow: hidden to that parent element. |
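The reason this works is margin collapsing: the h1's top margin collapses through its parent and pushes the parent down, while `overflow: hidden` makes the parent establish a new block formatting context that contains the margin. A minimal sketch (the class name is made up for illustration):
```css
/* Without overflow: hidden, the h1's top margin would collapse
   out of .wrapper and push the whole wrapper down instead. */
.wrapper {
    overflow: hidden; /* establishes a block formatting context */
}
.wrapper h1 {
    margin-top: 20px; /* now contained inside .wrapper */
}
```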
I'm literally going crazy. I can't figure out the error in the code below, whose purpose is to retrieve an image from a database and display it in a frame with id 'img1'. I've tried every way, but all I get is a string that isn't transformed into an image. Any advice?
```
<?php
//file foto.php
include './admin/conn.php';
if (!empty($_POST['photo'])) {
$Id = $_POST['photo'];
$query=$conny->query("SELECT * from picture where cf ='$Id'");
while($row=$query->fetch_array()) {
echo $row["image"];
}
} else {
echo $searchErr = "Errore nell'estrapolazione del dato IN/OUT";
}
?>
```
```
//file index.php
$.ajax( {
method: 'POST',
url: 'foto.php',
async: false,
data:{
photo: data14
},
success: function (result) {
if (result != null && result != "") {
alert(result); //<<<<<< (i get a JFIF string)
document.getElementById("img1").src= '<img src="data:image/jpeg;base64, base64_encode('+result+') width="150px" height="150px" />'; //<<<<< (i get an empty frame)
}}
})
``` |
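For what it's worth, a data URI is usually assembled like this on the client, assuming the server echoes `base64_encode($row["image"])` rather than the raw bytes (the function name below is illustrative; note that `base64_encode()` is a PHP function and cannot be called from JavaScript):

```javascript
// Build a data URI from base64 text returned by the server.
function toDataUri(result) {
  return 'data:image/jpeg;base64,' + result.trim();
}

// Usage: assign the src of the <img> element directly,
// not an HTML string:
// document.getElementById('img1').src = toDataUri(result);
```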
Blob image displaying from database |
|javascript|php|mysql|ajax| |
The function you are fitting has two properties that the data does not have:
* It necessarily has its maximum at the beginning (t=0) since you use the cos function without a phase shift. You should definitely add a phase shift (additive constant within the argument)
* Also, it necessarily oscillates around 0 because you don't allow for a shift in the y direction. You should also add this.
(Since you allude to this in the question: yes, you could also shift down to a zero average value your data by simply subtracting the mean value from all data values. But adding an additive constant to the function used for the fit could be more versatile and would give a result more directly related to the original data.)
PS: not only was the camera shaking considerably, but you also seem to have followed the pendulum with the camera. As a consequence, the position of the object in the image does not correspond to the absolute position of the object in space... But you'll still get a cosine curve that goes as well as possible through the data points.
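As a sketch of both suggestions (phase shift and vertical offset) on synthetic data: if the angular frequency is roughly known, the identity A·cos(wt + φ) + c = a·cos(wt) + b·sin(wt) + c even makes the fit linear. All values below are illustrative, not the asker's data.

```python
import numpy as np

# A*cos(w*t + phi) + c == a*cos(w*t) + b*sin(w*t) + c
# with a = A*cos(phi), b = -A*sin(phi): linear in (a, b, c),
# so for a known frequency w the phase shift and the vertical
# offset both fall out of ordinary least squares.
rng = np.random.default_rng(0)
w = 1.5                                  # assumed known angular frequency
t = np.linspace(0, 10, 200)
y = 2.0 * np.cos(w * t + 0.7) + 5.0 + rng.normal(0, 0.05, t.size)

X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)

A = np.hypot(a, b)                       # fitted amplitude
phi = np.arctan2(-b, a)                  # fitted phase shift
```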
|
I am so silly! I just set the x,y and z in `setAngularFactor()` to 0 and it works as intended! |
I'm trying to send an object to my API but I always get null values.
Here is my angular method:
createNormalPost(post: CreateNormalPostInterface) {
const formData = 'ceasdasdas'
return this.httpClient.post<any>(this.API, formData,
{headers: {'Content-Type': 'multipart/form-data'}}
);
}
And here is my controller:
[Route("api/[controller]")]
[ApiController]
[AllowAnonymous]
public class PostController : ControllerBase
{
private readonly IPostService _postService;
private readonly ILogger<AuthController> _logger;
public PostController(IPostService postService, ILogger<AuthController> logger)
{
_postService = postService;
_logger = logger;
}
[HttpPost]
public async Task<IActionResult> CreatePostNormalAsync([FromForm] string formData)
{
try
{
Console.WriteLine(formData);
//var response = await _postService.CreatePostNormalAsync(post);
return Ok(formData);
}
catch (Exception ex)
{
_logger.LogError(ex, ex.Message);
return StatusCode(StatusCodes.Status500InternalServerError, ex.Message);
}
}
}
I'm currently trying to send a string just to check if the data is being sent, but I still get null.
EDIT: I think I found the problem: somehow the request is sent as 'application/json', not as form data. |
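The observation in the edit matches the usual pitfall: setting `Content-Type: multipart/form-data` by hand drops the required boundary parameter. The common approach (a sketch; the field names are illustrative) is to send a real `FormData` body and let the browser, or Angular's `HttpClient`, supply the header itself:

```javascript
// Build the multipart body explicitly. When a FormData instance is
// passed as the request body, the Content-Type header (including
// the boundary) is set automatically — so do NOT set it manually.
function buildPostBody(post) {
  const formData = new FormData();
  formData.append('title', post.title);
  formData.append('content', post.content);
  return formData;
}

const body = buildPostBody({ title: 'hello', content: 'world' });
// e.g. this.httpClient.post(this.API, body)  — with no explicit headers
```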
Finally, I've found a solution to this problem without utilizing the redirect and refreshListenable properties of GoRouter.
The concept involves creating a custom redirection component. Essentially, authStateChanges is listened to globally, and upon any change in AuthenticationState, we navigate to our redirection component. In my implementation, this redirection component is a splash page. From there, you can proceed to redirect to the desired page.
Here is my code
Router Class
class AppRouter {
factory AppRouter() => _instance;
AppRouter._();
static final AppRouter _instance = AppRouter._();
final _config = GoRouter(
initialLocation: RoutePaths.splashPath,
routes: <RouteBase>[
GoRoute(
path: RoutePaths.splashPath,
builder: (context, state) => const SplashPage(),
),
GoRoute(
path: RoutePaths.loginPath,
builder: (context, state) => const LoginPage(),
),
GoRoute(
path: RoutePaths.registrationPath,
builder: (context, state) => const RegistrationPage(),
),
GoRoute(
path: RoutePaths.homePath,
builder: (context, state) => const HomePage(),
),
],
);
GoRouter get config => _config;
}
class RoutePaths {
static const String loginPath = '/login';
static const String registrationPath = '/register';
static const String homePath = '/home';
static const String splashPath = '/';
}
Authentication Bloc
class AuthenticationBloc
extends Bloc<AuthenticationEvent, AuthenticationState> {
AuthenticationBloc(
this.registerUseCase,
this.loginUseCase,
) : super(const AuthenticationInitial()) {
on<AuthenticationStatusCheck>((event, emit) async {
await emit.onEach(
FirebaseAuth.instance.authStateChanges(),
onData: (user) async {
this.user = user;
emit(const AuthenticationLoading());
await Future.delayed(const Duration(seconds: 1), () {
if (user == null) {
emit(const AuthenticationInitial());
} else {
emit(AuthenticationSuccess(user));
}
});
},
);
});
  }
}
Main
final router = AppRouter().config;
return MultiBlocProvider(
providers: [
BlocProvider<AuthenticationBloc>(
create: (context) =>
sl<AuthenticationBloc>()..add(const AuthenticationStatusCheck()),
),
],
child: BlocListener<AuthenticationBloc, AuthenticationState>(
listenWhen: (previous, current) =>
previous.runtimeType != current.runtimeType,
listener: (context, state) {
if (state is AuthenticationLoading) {
router.go(RoutePaths.splashPath);
}
},
child: MaterialApp.router(
title: 'Hero Games Case Study',
theme: ThemeData.dark(
useMaterial3: true,
),
routerConfig: router,
),
),
);
splash_page
return BlocListener<AuthenticationBloc, AuthenticationState>(
listener: (context, state) {
if (state is AuthenticationSuccess) {
context.go(RoutePaths.homePath);
} else if (state is AuthenticationInitial) {
context.go(RoutePaths.loginPath);
}
},
child: Scaffold(
appBar: AppBar(
title: const Text('Splash Page'),
),
body: const Center(
child: CircularProgressIndicator(),
),
),
);
NOTE:
If you will use deeplinking, you can add the code below to your GoRouter
redirect: (context, state) {
if (context.read<AuthenticationBloc>().user == null &&
(state.fullPath != RoutePaths.loginPath &&
state.fullPath != RoutePaths.registrationPath &&
state.fullPath != RoutePaths.splashPath)) {
return RoutePaths.splashPath;
}
return null;
},
|
How can I lock a rigidbody's rotation on all axis? Bullet Physics [SOLVED] |
I am getting this error, and also a "Not starting debugger since process cannot load the jdwp agent" error, when I connect my device with a cable and USB debugging enabled. Android Studio shows my account, but I still cannot use my device. Can anyone please help me?
I have tried checking and updating my SDK, and also removing my cable and attaching it again. |
Failed to getEnergyData |
|android|kotlin|android-studio| |
null |
I followed the basic *getting started* instructions for Node.js on Heroku here:
https://devcenter.heroku.com/categories/nodejs
These instructions don't tell you to add *node_modules* to a *.gitignore*, and therefore imply that folder *node_modules* should be checked into Git. When I included *node_modules* in the Git repository, my getting started application ran correctly.
When I followed the more advanced example at:
* *[Building a Real-time, Polyglot Application with Node.js, Ruby, MongoDB and Socket.IO][1]*
* https://github.com/mongolab/tractorpush-server (source)
It instructed me to add folder *node_modules* to file *.gitignore*. So I removed folder *node_modules* from Git, added it to file *.gitignore*, and then redeployed. This time the deployment failed like so:
```lang-none
-----> Heroku receiving push
-----> Node.js app detected
-----> Resolving engine versions
Using Node.js version: 0.8.2
Using npm version: 1.0.106
-----> Fetching Node.js binaries
-----> Vendoring node into slug
-----> Installing dependencies with npm
Error: npm doesn't work with node v0.8.2
Required: node@0.4 || 0.5 || 0.6
at /tmp/node-npm-5iGk/bin/npm-cli.js:57:23
at Object.<anonymous> (/tmp/node-npm-5iGk/bin/npm-cli.js:77:3)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:362:17)
at require (module.js:378:17)
at Object.<anonymous> (/tmp/node-npm-5iGk/cli.js:2:1)
at Module._compile (module.js:449:26)
Error: npm doesn't work with node v0.8.2
Required: node@0.4 || 0.5 || 0.6
at /tmp/node-npm-5iGk/bin/npm-cli.js:57:23
at Object.<anonymous> (/tmp/node-npm-5iGk/bin/npm-cli.js:77:3)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:362:17)
at require (module.js:378:17)
at Object.<anonymous> (/tmp/node-npm-5iGk/cli.js:2:1)
at Module._compile (module.js:449:26)
Dependencies installed
-----> Discovering process types
Procfile declares types -> mongod, redis, web
-----> Compiled slug size is 5.0MB
-----> Launching... done, v9
```
Running "heroku ps" confirms the crash. OK, no problem, so I rolled back the change, added folder *node_modules* back to the Git repository and removed it from file *.gitignore*. However, even after reverting, I still get the same error message on deployment, but now the application is running correctly again. Running "heroku ps" tells me the application is running.
What's the right way to do this? Include folder *node_modules* or not? And why would I still be getting the error message when I roll back? My guess is the Git repository is in a bad state on the Heroku side.
[1]: https://devcenter.heroku.com/articles/realtime-polyglot-app-node-ruby-mongodb-socketio
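For reference, the convention the Node ecosystem (and Heroku's current documentation) settled on is the second approach: keep *node_modules* out of Git and let the build step install dependencies from *package.json*. The ignore entry is just:
```
node_modules/
```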
|
I'm very new to JS. I'm trying to return a matched value from user input from a global array (aStoreItems) for a theoretical online book store. The array consists of objects passed through a constructor.
```
// Function to find object by ID value
function getBookById(id) {
console.log("Variable being passed in(in getbookbyid function)= " + id)
console.log(" type Variable being passed in(ingetbookbyid function)= " + typeof (id))
for (var i = 0; i < aStoreItems.length; ++i) {
if (aStoreItems[i].bID == id) {
return aStoreItems[i];
selectedBook = aStoreItems[i];
}
}
};
// Sample of object in array
var ID23 = new StoreItem("ID23", "The Slow Regard of Silent Things", 12.99, 50, 1, "Fiction", 1.99, ["Great story", 5], "Desc", "<img src='./imgs/the_slow_regard_of_silent_things.jpeg' />");
```
Now I'm trying to implement an add to cart function that passes a user typed value and passes it through the getBookById() function.
Right now I'm just trying to pass the variable created by the users input to find the book that they want to add to cart
```
function addToCart() {
var bookSelection = document.getElementById("pID").value;
console.log("user typed (in add to cart function) " + bookSelection);
getBookById(bookSelection);
console.log("return is(in add to cart function) " + selectedBook);
};
```
console output shows this:
user typed (in add to cart function) ID23
index.html:264 Variable being passed in(in get book by id function)= ID23
index.html:265 type Variable being passed in(in get book by id function)= string
index.html:280 return is(in add to cart function) undefined
I don't know where I'm going wrong, and other solutions I've researched are not faring any better.
Thanks to all for your help!
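For diagnosis: as posted, `getBookById` executes `return` before the line that assigns `selectedBook`, so that assignment is unreachable, and `addToCart` never captures the returned value either. A sketch of the usual shape, using the same names as above (the array is passed in explicitly here so the sketch is self-contained):

```javascript
// Return the match and let the caller capture it; an assignment
// placed after `return` can never execute.
function getBookById(id, aStoreItems) {
  for (var i = 0; i < aStoreItems.length; ++i) {
    if (aStoreItems[i].bID == id) {
      return aStoreItems[i];
    }
  }
  return null; // nothing matched
}

// The caller captures the return value instead of relying on a
// global being set inside the loop:
var items = [{ bID: 'ID23', title: 'The Slow Regard of Silent Things' }];
var selectedBook = getBookById('ID23', items);
```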
|
Function is returning undefined but should be returning a matched object from array in JavaScript |
|javascript|arrays|function| |
null |
If I understand correctly, you want to exit the loop when any of the child notebook executions fails. Currently your exit condition is based on the status "New", and for multiple child notebooks this is not sufficient.
Along with the current exit condition, add another OR condition to check for any failure in the execution.
Declare a variable with a default value like
> varIsFail = 0
Set a variable activity in the Failure flow of Execute pipeline activity to change this variable when execute pipeline activity fails.
> varIsFail = 1
Then add a check for varIsFail = 1 to your loop condition to exit the loop.
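A sketch of what the Until activity's exit expression might then look like with the extra OR clause (the variable names and the "New" status check are illustrative, matching the description above):
```
@or(
    not(equals(variables('varStatus'), 'New')),
    equals(variables('varIsFail'), 1)
)
```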
Hope this helps.
|
I am trying to convert some R code I've written into an R shiny app so others can use it more readily. The code utilizes a package called `IPDfromKM`. The main function of issue is `getpoints()`, which in R will generate a plot and the user will need to click the max X and max Y coordinates, followed by clicking through the entire KM curve, which extracts the coordinates into a data frame. However, I cannot get this to work in my R shiny app.
There is a [link](https://biostatistics.mdanderson.org/shinyapps/IPDfromKM/) to the working R Shiny app from the creator.
This is the getpoints() code:
```
getpoints <- function(f,x1,x2,y1,y2){
## if bitmap
if (typeof(f)=="character")
{ lfname <- tolower(f)
if ((strsplit(lfname,".jpeg")[[1]]==lfname) && (strsplit(lfname,".tiff")[[1]]==lfname) &&
(strsplit(lfname,".bmp")[[1]]==lfname) && (strsplit(lfname,".png")[[1]]==lfname) &&
(strsplit(lfname,".jpg")[[1]]==lfname))
{stop ("This function can only process bitmap images in JPEG, PNG, BMP, or TIFF format.")}
img <- readbitmap::read.bitmap(f)
} else if (typeof(f)=="double")
{
img <- f
} else {
stop ("Please double check the format of the image file.")
}
## function to read the bitmap and points on x-axis and y-axis
axispoints <- function(img){
op <- par(mar = c(0, 0, 0, 0))
on.exit(par(op))
plot.new()
rasterImage(img, 0, 0, 1, 1)
message("You need to define the points on the x and y axis according to your input x1,x2,y1,y2. \n")
message("Click in the order of left x-axis point (x1), right x-axis point(x2),
lower y-axis point(y1), and upper y-axis point(y2). \n")
x1 <- as.data.frame(locator(n = 1,type = 'p',pch = 4,col = 'blue',lwd = 2))
x2 <- as.data.frame(locator(n = 1,type = 'p',pch = 4,col = 'blue',lwd = 2))
y1 <- as.data.frame(locator(n = 1,type = 'p',pch = 3,col = 'red',lwd = 2))
y2 <- as.data.frame(locator(n = 1,type = 'p',pch = 3,col = 'red',lwd = 2))
ap <- rbind(x1,x2,y1,y2)
return(ap)
}
## function to calibrate the points to the appropriate coordinates
calibrate <- function(ap,data,x1,x2,y1,y2){
x <- ap$x[c(1,2)]
y <- ap$y[c(3,4)]
cx <- lm(formula = c(x1,x2) ~ c(x))$coeff
cy <- lm(formula = c(y1,y2) ~ c(y))$coeff
data$x <- data$x*cx[2]+cx[1]
data$y <- data$y*cy[2]+cy[1]
return(as.data.frame(data))
}
## take the points
ap <- axispoints(img)
message("Mouse click on the K-M curve to take the points of interest. The points will only be labeled when you finish all the clicks.")
takepoints <- locator(n=512,type='p',pch=1,col='orange4',lwd=1.2,cex=1.2)
df <- calibrate(ap,takepoints,x1,x2,y1,y2)
par()
return(df)
}
```
I'm a bit at a loss as to how to execute this in my main panel. I've tried using `plotOutput()`, `imageOutput()`, and calling variations of the functions below, but nothing pops up or works like it does in RStudio. I feel like the issue is that the `getpoints()` function does 3 things:
1. Displaying an image from an uploaded file.
2. Allow user to click and store multiple points.
3. Return a data frame.
Will I need to split out the components of the function into individual steps?
```
createPoints<-reactive({
#Read File
file <- input$file1
ext <- tools::file_ext(file$datapath)
req(file)
validate(need(ext == "png", "Please upload a .png file"))
##should run the function that generates a plot for clicking coordinates and stores them in a data frame
points<-getpoints(file,x1=0, x2=input$Xmax,y1=0, y2=100)
return(points)
})
```
```
output$getPointsPlot<-renderPlot(
createPoints()
)
```
**MAJOR EDIT:**
The solution was indeed to break the function out into its component parts. If anyone is curious, here is how I handled it:
```
library(shiny)
library(IPDfromKM)
library(png)
ui <- fluidPage(
titlePanel("Extracting Coordinates from KM Curves"),
sidebarLayout(
sidebarPanel(
fileInput("file", "Upload Image", accept = c(".png", ".jpeg", ".jpg", ".bmp", ".tiff")),
actionButton("reset", "Reset"),
downloadButton("download", "Download Data")
),
mainPanel(
plotOutput("plot", click = "plot_click", width="1000px", height="800px"),
tableOutput("coordinates")
)
)
)
server <- function(input, output) {
values <- reactiveValues(
x = numeric(),
y = numeric(),
xmin = NULL,
xmax = NULL,
ymin = NULL,
ymax = NULL,
click_count = 0
)
observeEvent(input$reset, {
values$x <- numeric()
values$y <- numeric()
values$xmin <- NULL
values$xmax <- NULL
values$ymin <- NULL
values$ymax <- NULL
values$click_count <- 0
})
output$plot <- renderPlot({
req(input$file)
img <- readPNG(input$file$datapath)
plot(NA, xlim = c(0, ncol(img)), ylim = c(0, nrow(img)), type = "n", xlab = "", ylab = "", xaxt = "n", yaxt = "n")
rasterImage(img, 0, 0, ncol(img), nrow(img))
points(values$x, values$y, pch = 19, col = "red")
})
observeEvent(input$plot_click, {
values$click_count <- values$click_count + 1
if (values$click_count == 1) {
values$xmin <- input$plot_click$x
} else if (values$click_count == 2) {
values$xmax <- input$plot_click$x
} else if (values$click_count == 3) {
values$ymin <- input$plot_click$y
} else if (values$click_count == 4) {
values$ymax <- input$plot_click$y
} else {
values$x <- c(values$x, input$plot_click$x)
values$y <- c(values$y, input$plot_click$y)
}
})
output$coordinates <- renderTable({
if (!is.null(values$xmin) && !is.null(values$xmax) && !is.null(values$ymin) && !is.null(values$ymax)) {
calibrated_x <- (values$x - values$xmin) / (values$xmax - values$xmin)
calibrated_y <- (values$y - values$ymin) / (values$ymax - values$ymin)
data.frame(x = calibrated_x, y = calibrated_y)
} else {
data.frame(x = values$x, y = values$y)
}
})
output$download <- downloadHandler(
filename = function() {
paste("data-", Sys.Date(), ".txt", sep = "")
},
content = function(file) {
# write the raw clicked points; output$coordinates cannot be read back,
# so store or recompute the calibrated frame in a reactive if needed
write.table(data.frame(x = values$x, y = values$y), file, row.names = FALSE, sep = "\t")
}
)
}
shinyApp(ui, server)
```
|
If the code to handle reaching the end of one run before the other is included in the if/else compare logic, then only one loop is run for each call of `Merge`. A one-time allocation of the second array and a one-time copy on entry to merge sort, combined with alternating the direction of merge with each level of recursion, eliminates the need for a copy back in `Merge`. The code below is C or C++ code (it uses malloc instead of new); the range of data is [bgn, end) instead of [low, high], where `end` is 1 + the index of the last element, which is more common in the case of merge sort.
```
#include <stdlib.h> // for malloc / free

void Merge(int a[], int bgn, int mid, int end, int b[])
{
int i, j, k;
i = bgn, j = mid, k = bgn;
while(1){
if(a[i] <= a[j]){ // if left smaller
b[k++] = a[i++]; // copy left element
if(i < mid) // if not end of left run
continue; // continue
do // else copy rest of right run
b[k++] = a[j++];
while(j < end);
break; // and break
} else { // else right smaller
b[k++] = a[j++]; // copy right element
if(j < end) // if not end of right run
continue; // continue
do // else copy rest of left run
b[k++] = a[i++];
while(i < mid);
break; // and break
}
}
}
void MergeSortR(int b[], int bgn, int end, int a[])
{
if (end - bgn <= 1) // if run size == 1
return; // consider it sorted
int mid = (end + bgn) / 2;
MergeSortR(a, bgn, mid, b);
MergeSortR(a, mid, end, b);
Merge(b, bgn, mid, end, a);
}
void MergeSort(int a[], int n) // n = size (not size-1)
{
if(n < 2)
return;
int *b = malloc(n*sizeof(int)); // 1 time allocate and copy
for(size_t i = 0; i < n; i++)
b[i] = a[i];
MergeSortR(b, 0, n, a); // sort data from b[] into a[]
free(b);
}
```
Link to Wikipedia article showing the same logic:
https://en.wikipedia.org/wiki/Merge_sort#Top-down_implementation
|
null |
The redis-stack-server does include RediSearch: https://hub.docker.com/r/redis/redis-stack-server
However, the image does NOT include the latest RSCoordinator (included in RediSearch), which synchronizes indices among Redis cluster nodes. Are there any images that upgrade the image with this functionality? Or any documents/manuals on how to do it?
Also, I see neither module-oss.so nor module-enterprise.so in the redis-stack-server lib folder [![enter image description here][1]][1].
Does anyone have a tip on how to activate the synchronization of indices with the redis-stack-server image? (It's puzzling to me why the community would NOT want that functionality.)
Also, the building instructions on RediSearch README.md are simply wrong:
https://github.com/RediSearch/RediSearch/tree/master/coord
FYI: an attempt was made to solve the same case on [GitHub for the Bitnami Image][2].
[1]: https://i.stack.imgur.com/IBMDt.png
[2]: https://github.com/bitnami/charts/issues/21597 |
I think the best way is to `malloc` `a` in the `main` function and pass it to the `fn` function.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
void fn(char *n) {
/*
* imagine this variable 'b' is
* a part of a structure of
* some library
*/
char *b = "Hello, world";
memcpy(n, b, strlen(b) + 1); /* +1 also copies the terminating '\0' */
}
int main() {
char *a = malloc(100); // 100 is a sample
fn(a);
printf("%s\n", a); /* prints "Hello, world" */
free(a);
return 0;
}
Also, you can avoid `memcpy` by assigning the structure variable to `n` directly, instead of copying from `b` |
I tried the following approach, which can solve the first problem in the problem details to some extent, but does not completely resolve it.
```python
import torch
a1 = torch.randn(15000, 30000).cuda(0)
a2 = torch.randn(15000, 30000).cuda(1)
b1 = torch.randn(30000, 30000).cuda(0)
b2 = b1.cuda(1)
# create an empty tensor first,
# then use it directly to store the computation result,
# but its maximum memory usage on a single GPU is still high
c = torch.empty(30000, 30000).cuda(0)
c[:15000] = torch.mm(a1,b1)
c[15000:] = torch.mm(a2,b2).to(0)
```
UPDATE: this code can reduce the maximum mem usage on a single GPU when using multiple GPUs (here 2 GPUs used):
```python
import torch
# assuming a1 and a2 are parts of a big matrix
a1 = torch.randn(15000, 30000).cuda(0)
a2 = torch.randn(15000, 30000).cuda(1)
b1 = torch.randn(30000, 15000).cuda(0)
b2 = torch.randn(30000, 15000).cuda(1)
c = torch.empty(30000, 30000).cuda(0)
c[:15000, :15000] = torch.mm(a1,b1)
c[:15000, 15000:] = torch.mm(a1.cuda(1),b2).cuda(0)
c[15000:, :15000] = torch.mm(a2,b1.cuda(1)).cuda(0)
c[15000:, 15000:] = torch.mm(a2,b2)
```
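The decomposition in the update is just blockwise matrix multiplication. A device-free sketch with NumPy (small sizes standing in for 15000 and 30000) showing that the four quadrant products reassemble the full result, exactly as in the multi-GPU code above where each quadrant can live on its own device:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 8  # half-sizes standing in for 15000 and 30000

# Row halves of A and column halves of B, as in a1/a2 and b1/b2 above.
a1, a2 = rng.standard_normal((n, 2 * k)), rng.standard_normal((n, 2 * k))
b1, b2 = rng.standard_normal((2 * k, n)), rng.standard_normal((2 * k, n))

# Full operands, for comparison against the blockwise product.
a = np.vstack([a1, a2])
b = np.hstack([b1, b2])

# Each quadrant is an independent product and could be computed
# on a separate device before being gathered into c.
c = np.empty((2 * n, 2 * n))
c[:n, :n] = a1 @ b1
c[:n, n:] = a1 @ b2
c[n:, :n] = a2 @ b1
c[n:, n:] = a2 @ b2
```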
|
1. You're overwriting the value on each match; you should instead store the matches in a [List][1] and then get a random item from the list, something like:
import java.util.Random;
import java.util.regex.Pattern;
import java.util.regex.Matcher;
import org.apache.commons.lang.math.RandomUtils;
String responseData = prev.getResponseDataAsString();
// log.info("the response data is : " + responseData);
Pattern pattern = Pattern.compile("<option value=\"(.+?)\">(.+?)</option>");
// log.info("pattern is : " + pattern);
Matcher matcher = pattern.matcher(responseData);
// log.info("matcher is :" + matcher);
List cities = new ArrayList();
while(matcher.find())
{
log.info("the cities are : " + matcher.group(1));
String extractedCity = matcher.group(1);
log.info("the extracted city is : " + extractedCity);
cities.add(extractedCity);
}
vars.put("Depart_city", cities.get(RandomUtils.nextInt(cities.size())));
2. Using Beanshell is a form of performance anti-pattern; [since JMeter 3.1 you're supposed to be using JSR223 Test Elements and the Groovy language][2] for scripting
3. [Using regular expressions for parsing HTML is not the best idea][3], you could rather go for [jsoup][4] library
4. And last but not least, you don't need any scripting at all: you could get a random city from the response into a JMeter Variable using the [CSS Selector Extractor][5]:
[![enter image description here][6]][6]
[1]: https://docs.oracle.com/javase/8/docs/api/java/util/List.html
[2]: https://jmeter.apache.org/usermanual/best-practices.html#bsh_scripting
[3]: https://stackoverflow.com/a/1732454/8592047
[4]: https://jsoup.org/
[5]: https://www.blazemeter.com/blog/cssjquery-extractor
[6]: https://i.stack.imgur.com/G3aHl.png |
null |
I'm deleting your post because this seems like a programming-specific question, rather than a conversation starter. With more detail added, this may be better as a Question rather than a Discussions post.
Please see [this page](https://stackoverflow.com/help/how-to-ask) for help on asking a question on Stack Overflow.
If you are interested in starting a more general conversation about how to approach an issue or concept related to the topic of this collective, feel free to make another Discussion post. You can check the discussions guidelines at https://stackoverflow.com/help/discussions-guidelines |
null |
In a basic Remix V2 app, I need help understanding whether the following is expected behavior, a bug in V2, or possibly a missing configuration option or setting.
I could not find anything related to this issue in the Remix documentation.
I created a demo Remix V2 app by running `npx create-remix@latest`.
I wrote a simulated API backend method for basic testing, which simply returns JSON data as retrieved from a JSON file:
export async function getStoredNotes() {
const rawFileContent = await fs.readFile('notes.json', { encoding: 'utf-8' });
const data = JSON.parse(rawFileContent);
const storedNotes = data.notes ?? [];
return storedNotes;
}
The client side consists of 2 simple pages that uses `NavLink` for navigation between the 2 pages:
import { NavLink } from '@remix-run/react';
...
<NavLink to="/">Home</NavLink>
<NavLink to="/notes">Notes</NavLink>
In a component for the `notes` route, I have the following loader function, which makes a call to my simulated API method:
export const loader:LoaderFunction = async () => {
try {
const notes = await getStoredNotes();
return json(notes);
} catch (error) {
return json({ errorMessage: 'Failed to retrieve stored notes.', errorDetails: error }, { status: 400 });
}
}
And in the `notes` main component function, I receive that data using the `useLoaderData` hook and attempt to print the returned JSON data to the console:
export default function NotesView() {
const notes = useLoaderData<typeof loader>();
console.log("JSON Data retrieved:", notes);
return (
<main>
<NoteList notes={notes as Note[] || []} />
</main>
)
}
**When I run a `build` and subsequently `serve` the application, everything works correctly:**
The initial page load receives the data successfully and prints the value to the console as JSON.
{"notes":[{"title":"my title","content":"my note","id":"2024-03-28T05:22:52.875Z"}]}
Navigating between the index route and back to the notes route using the `NavLink` also works, and I see the data printed to the console again correctly on subsequent page visits.
**When I run in `dev` mode using `npm run dev`, the following problem occurs:**
The initial page load coming from the dev server also loads correctly and prints the JSON to the console.
Navigating client side between the index route and back to the notes route using the `NavLink` causes an issue where the data printed to the console is not JSON. Instead, I see a strange output of an exported JavaScript array definition:
export const notes = [
{
title: "my title",
content: "my note",
id: "2024-03-28T05:22:52.875Z"
}
];
Again, to be clear, this behavior only occurs when navigating client side by clicking `NavLink` or `Link` elements as noted. Initial page loads rendered on the server work correctly without issue.
Is this expected behavior when running in `dev` mode?
|
DJI Tello won't follow me |
|python-3.x|pycharm|tello-drone| |
null |
**1st solution** - automatic.
The PyCharm plugin "**PyCharm Help**" allows you to automatically download web help for offline use: when help is invoked, pages are delivered via a built-in web server.
This solution has drawbacks - for me, it downloaded help only for one version of Python, but not for others. Also, in that version of Python help, the search doesn't work.
**2nd solution** - better, very flexible, but manual.
1. Download [pythons' html help][1] and unpack it into the folder with the corresponding version name, e.g., for Windows to "C:\py_help_server\3.12".
*Folder "py_help_server" will become root folder for our server, and "3.12" naming should correspond online helps' URL format.*
2. Run cmd as admin and run such commands:
cd C:\py_help_server\3.12
python -m http.server 80 --bind 127.0.0.1
3. For Chrome/Brave, download the plugin "Requestly - Intercept, Modify & Mock HTTP Requests". In its settings, go to "HTTP Rules", then "My Rules", click "New Rule" with the type "Replace String".
And create a rule like this:
If the URL contains "https://docs.python.org/3.12/", replace "https://docs.python.org/" with "http://127.0.0.1/".
Now, all pages of the Python help for version 3.12 will be redirected to our local server, which we started in step 2.
This works for me like a charm. I tried to edit the hosts file too, but that didn't work for me at all.
Also, this last method has an advantage over the "PyCharm Help" plugin - the local web help's search function works well!
[1]: https://docs.python.org/3/download.html
|
1. Open an Excel workbook.
2. Turn on "Record Macro" (Developer ribbon).
3. Perform manually everything what you need to do.
4. Stop macro recording.
5. Open the VBA editor and review the recorded macro. You can correct and improve things.
6. Run the corrected macro to verify it.
7. Copy the code to your c# program and edit it to meet c# requirements. |
I am using an ASP.NET Core Web API with Entity Framework Core (pomelo). I have a MariaDB database. I use Swagger UI to explore my API, as per the template. When I try to use it to delete a row, I get the following error:
> Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details.
>
> MySqlConnector.MySqlException (0x80004005): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'RETURNING 1' at line 3
>
> at MySqlConnector.Core.ServerSession.ReceiveReplyAsync(IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/ServerSession.cs:line 894
at MySqlConnector.Core.ResultSet.ReadResultSetHeaderAsync(IOBehavior ioBehavior) in /_/src/MySqlConnector/Core/ResultSet.cs:line 37
at MySqlConnector.MySqlDataReader.ActivateResultSet(CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 130
at MySqlConnector.MySqlDataReader.InitAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, IDictionary`2 cachedProcedures, IMySqlCommand command, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlDataReader.cs:line 483
at MySqlConnector.Core.CommandExecutor.ExecuteReaderAsync(CommandListPosition commandListPosition, ICommandPayloadCreator payloadCreator, CommandBehavior behavior, Activity activity, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/CommandExecutor.cs:line 56
at MySqlConnector.MySqlCommand.ExecuteReaderAsync(CommandBehavior behavior, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 357
at MySqlConnector.MySqlCommand.ExecuteDbDataReaderAsync(CommandBehavior behavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlCommand.cs:line 350
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReaderAsync(RelationalCommandParameterObject parameterObject, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.ExecuteAsync(IRelationalConnection connection, CancellationToken cancellationToken)
The delete should be handled here in my controller and repository, like this:
    [HttpDelete("alertId")]
    public async Task<IActionResult> DeleteAlert(int alertId)
    {
        var alert = await _dataRepository.GetAlertAsync(alertId);

        if (alert is null)
        {
            return NotFound("Alert not found");
        }

        await _dataRepository.DeleteAlertAsync(alert);
        return NoContent();
    }
and this
    public class AlertRepository (IrtsContext context) : IDataRepositoryAlerts
    {
        readonly IrtsContext _alertContext = context;

        public async Task DeleteAlertAsync(Alert entity)
        {
            if (entity != null)
            {
                _alertContext.Remove(entity);
                await _alertContext.SaveChangesAsync();
            }
            else
            {
                throw new NotImplementedException();
            }
        }
    }
I do not understand this. I believe it is my `dbContext` that handles the "saving the entity changes". How can I have a SQL syntax error? I cannot find "Returning 1" anywhere in my code.
I have tried deleting the row manually in my database. That works.
All other operations (GET, POST and PUT) work just fine.
I have tried running this with breakpoints to see where the error occurs, but everything seems to execute without issue.
I am grateful for any hints. I am obviously very new to this ;)
Edit: MariaDB version 11.2.2
Edit2: This is my Alert class:
    public partial class Alert
    {
        public int AlertId { get; set; }
        public DateTime? Zeitpunkt { get; set; }
        public string? Quelle { get; set; }
        public string? AlertStatus { get; set; }
        public string? AlertTyp { get; set; }
        public string? BetroffeneSysteme { get; set; }

        public virtual ICollection<Vorfall> Vorfalls { get; set; } = new List<Vorfall>();
    }
and this is its entity configuration:
    modelBuilder.Entity<Alert>(entity =>
    {
        entity.HasKey(e => e.AlertId).HasName("PRIMARY");

        entity
            .ToTable("alert")
            .HasCharSet("utf8mb4")
            .UseCollation("utf8mb4");

        entity.Property(e => e.AlertId)
            .HasColumnType("int(11)")
            .HasColumnName("AlertID");
        entity.Property(e => e.AlertStatus).HasMaxLength(255);
        entity.Property(e => e.AlertTyp).HasMaxLength(255);
        entity.Property(e => e.BetroffeneSysteme).HasMaxLength(255);
        entity.Property(e => e.Quelle).HasMaxLength(255);
        entity.Property(e => e.Zeitpunkt).HasColumnType("datetime");
    });
edit3: The console log does not show the query. I can only find the request header. I don't know how to make EF show me the parameterized query. I will research this and come back when I find out. |
**1st solution** - automatic.
The PyCharm plugin **"PyCharm Help"** allows you to automatically download web help for offline use: when help is invoked, pages are delivered via a built-in Web server.
This solution has drawbacks - for me, it downloaded help only for one version of Python, but not for others. Also, in that version of Python help, the search doesn't work.
**2nd solution** - better, very flexible, but manual.
1. Download [Python's HTML help][1] and unpack it into a folder named after the corresponding version, e.g., on Windows to "C:\py_help_server\3.12".
*The folder "py_help_server" will become the root folder for our server, and the "3.12" name should correspond to the online help's URL format.*
2. Run cmd as admin and run such commands:
cd C:\py_help_server\3.12
python -m http.server 80 --bind 127.0.0.1
3. For Chrome/Brave, download the plugin "Requestly - Intercept, Modify & Mock HTTP Requests". In its settings, go to "HTTP Rules", then "My Rules", click "New Rule" with the type "Replace String".
And create a rule like this:
If the URL contains "https://docs.python.org/3.12/", replace "https://docs.python.org/" with "http://127.0.0.1/".
Now, all pages of Python help for the 3.12 version will be redirected to our local server, which we started in the step 2.
This works for me like a charm. I tried to edit the hosts file too, but that didn't work for me at all.
Also, this last method has an advantage over the "PyCharm Help" plugin - the local web help's search function works well!
[1]: https://docs.python.org/3/download.html
|
Python ChatGPT bot issue with openai.Completion |
|python|openai-api| |
null |
{"Voters":[{"Id":3744304,"DisplayName":"connexo"},{"Id":3889449,"DisplayName":"Marco Bonelli"},{"Id":26742,"DisplayName":"Sani Huttunen"}],"SiteSpecificCloseReasonIds":[11]} |
For Django 4+ with DRF, this is a by-the-book solution, but I will put it here because it took me about 2 hours to find out.
    from django.core.paginator import Page, Paginator
    from rest_framework.pagination import PageNumberPagination


    class DSLPaginator(Paginator):
        """
        Override Django's built-in Paginator class to take in a count/total number of items;
        Elasticsearch provides the total as a part of the query results, so we can minimize hits.
        """
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.count = self.object_list.hits.total["value"]

        def page(self, number):
            # this is overridden to prevent any slicing of the object_list - Elasticsearch has
            # returned the sliced data already.
            number = self.validate_number(number)
            return Page(self.object_list, number, self)


    class ESPageNumberPagination(PageNumberPagination):
        django_paginator_class = DSLPaginator
        page_size = 12
With DRF, use it like this:
    class VideoListESView(generics.ListAPIView):
        serializer_class = VideoListESSerializer
        model = serializer_class.Meta.model
        pagination_class = ESPageNumberPagination

        def get_queryset(self):
            page = int(self.request.GET.get("page", 1))
            page_size = self.pagination_class.page_size
            s = VideoDocument.search()[(page - 1) * page_size : page * page_size].sort({"id": {"order": "desc"}}).execute()
            return s |
I would like to create a function that takes three arguments (n,m,d) and it should output a matrix with n rows and m columns.
The matrix should be populated with values 0 and 1 at random, in order to ensure that you have a density d of ones.
This is what I have come up with so far, just can't seem to work out how to integrate the density variable.
```
def create_matrix(n, m):
    count = 1
    grid = []
    while count <= n:
        for i in range(m):
            x = [random.randrange(0, 2) for _ in range(m)]
        grid.append(x)
        count += 1
    return grid
```
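For what it's worth, one way the density parameter could be wired in is to decide up front how many cells should be ones (`round(n * m * d)`) and place exactly that many at random positions. A sketch, with the rounding choice being my own assumption:

```python
import random

def create_matrix(n, m, d):
    # Target number of ones: a fraction d of all n*m cells, rounded.
    ones = round(n * m * d)
    # Pick that many distinct flat positions at random (no repeats).
    chosen = set(random.sample(range(n * m), ones))
    # Rebuild as n rows of m cells, 1 where the flat index was chosen.
    return [[1 if r * m + c in chosen else 0 for c in range(m)]
            for r in range(n)]

grid = create_matrix(4, 5, 0.25)
print(sum(map(sum, grid)))  # 5 ones out of 20 cells
```

Using `random.sample` guarantees the density is hit exactly (up to rounding); drawing each cell independently with probability `d` would only hit it on average.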
Thanks in advance |
Function to create matrix of zeros and ones, with a certain density of ones |
|python|python-3.x|matrix| |
null |
|python|environment-variables|fastapi|pydantic|uvicorn| |
null |
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const { MyHero, Hero } = require("../models/index.js");

class MyHeroesController {
    static async addHeroToMyHeroes(req, res, next) {
        try {
            const { heroId } = req.params;
            const myHero = await MyHero.findByPk(heroId);
            if (!myHero) {
                throw { name: "NotFound" };
            }
            const { UserId, status, match } = req.body;
            const myHeroes = await MyHero.create({
                UserId,
                HeroId: +heroId,
                status,
                match
            });
            res.status(201).json({
                id: myHeroes.id,
                UserId: myHeroes.UserId,
                HeroId: heroId,
                status: myHeroes.status,
                match: myHeroes.match
            });
        } catch (error) {
            next(error);
        }
    }
}
<!-- end snippet -->
|
**Context:**
Our app recently switched to using Custom Chrome Tabs for authentication. WebViews wouldn't allow login from Google and Facebook due to privacy restrictions.
Everything worked well using `onResume` and `onNewIntent` lifecycle methods to detect when the user closes the Custom Chrome Tab and is redirected back to our app. However, a recent Chrome update introduced a minimize button (Picture-in-Picture mode) for Custom Chrome Tabs.
For our authentication flow, this minimize functionality is undesirable. We want the user to complete authentication before interacting with the app again. Unfortunately, modifying Chrome Tab behavior directly is limited due to privacy concerns.
Our current issue is that when the user minimizes the Custom Chrome Tab, our app using onResume interprets it as a complete closure. I've explored `onMinimized` and `onUnMinimized` APIs from CustomTabCallback, but they don't detect the scenario where the user minimizes and then closes the tab. This leaves our app's activity empty, resulting in a poor user experience.
**Question:**
- Are there ways to reliably detect when a minimized Custom Chrome Tab is finally closed by the user?
- Are there alternative solutions to effectively track the minimize functionality for our authentication process?
**Additional Information:**
I've explored `onMinimized` and `onUnMinimized` APIs from `CustomTabCallback`. To track the minimized state, I implemented a flag. However, this approach has limitations. Inside onResume, I use a timer (set to 750 milliseconds) to periodically check the flag. If the flag shows the tab is minimized, I avoid treating it as a closed tab. This introduces a slight delay in detecting the close event, and I'm curious if there are better solutions.
But as I said earlier this solution can resolve detecting the minimize state but when user closes the custom tab when it is in minimized state we dont have any way to detect.
Is there any way to detect the closing of custom tab in minimized state?
|
I am trying to make a voice changing device with **esp32** board (**PlatformIO + ArduinoIDE**).
I have:
1. external microphone on input
2. esp32, to process voice (internal ADC is used for incoming signal - **I2S protocol**)
3. external DAC (PCM5102) to output changed voice (**I2S protocol**)
My scheme is working. The *sound goes micro -> adc (I2S) -> processing on esp32 -> dac (I2S)*. But after some time of working (~1 min) I begin to hear a terrible loud fading **noise** (decay period ~3 sec). Then the noise appears once every 5 seconds constantly.
**What could be the problem? Maybe I'm setting up adc and dac incorrectly?**
Connection scheme (connection is made on a **breadboard**):
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/dk2mk.png
Code:
```
#include <driver/i2s.h>

#define I2S_SAMPLE_RATE 100000
#define I2S_DMA_BUF_LEN 1024

#define ADC_INPUT ADC1_CHANNEL_4 // pin 32

#define I2S_DATA_PIN GPIO_NUM_22 // DIN
#define I2S_LRCK_PIN GPIO_NUM_25 // LCK
#define I2S_BCLK_PIN GPIO_NUM_26 // BCK

/*
 * Configure ADC.
 */
void i2sInputInit() {
    i2s_config_t i2s_config = {
        .mode = (i2s_mode_t) (I2S_MODE_MASTER | I2S_MODE_RX | I2S_MODE_ADC_BUILT_IN),
        .sample_rate = I2S_SAMPLE_RATE,               // The format of the signal using ADC_BUILT_IN
        .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT, // is fixed at 12bit, stereo, MSB
        .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
        .communication_format = I2S_COMM_FORMAT_STAND_I2S,
        .intr_alloc_flags = ESP_INTR_FLAG_LEVEL1,
        .dma_buf_count = 2048,
        .dma_buf_len = I2S_DMA_BUF_LEN,
        .use_apll = true,
        .tx_desc_auto_clear = true,
        .fixed_mclk = 0
    };

    i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
    i2s_set_adc_mode(ADC_UNIT_1, ADC_INPUT);
    // i2s_adc_enable(I2S_NUM_0);
    // adc1_config_channel_atten(ADC_INPUT, ADC_ATTEN_DB_11);
}

/*
 * Configure DAC.
 */
void i2sOutputInit() {
    // Initialize I2S
    i2s_config_t i2s_config = {
        .mode = (i2s_mode_t) (I2S_MODE_MASTER | I2S_MODE_TX),
        .sample_rate = I2S_SAMPLE_RATE,
        .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
        .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
        .communication_format = I2S_COMM_FORMAT_STAND_I2S,
        .intr_alloc_flags = ESP_INTR_FLAG_LEVEL1,
        .dma_buf_count = 2048,
        .dma_buf_len = I2S_DMA_BUF_LEN,
        .use_apll = false,
        .tx_desc_auto_clear = true,
        .fixed_mclk = 0
    };

    i2s_driver_install(I2S_NUM_1, &i2s_config, 0, NULL);

    // Configure I2S pins
    i2s_pin_config_t pin_config = {
        .bck_io_num = I2S_BCLK_PIN,
        .ws_io_num = I2S_LRCK_PIN,
        .data_out_num = I2S_DATA_PIN,
        .data_in_num = I2S_PIN_NO_CHANGE,
    };
    i2s_set_pin(I2S_NUM_1, &pin_config);
}

void setup() {
    Serial.begin(115200);
    i2sInputInit();
    i2sOutputInit();
}

void loop() {
    size_t bytes_read;
    size_t bytes_written;

    int *int_buffer = (int *) malloc(I2S_DMA_BUF_LEN * sizeof(int));

    i2s_read(I2S_NUM_0, int_buffer, I2S_DMA_BUF_LEN * sizeof(int), &bytes_read, portMAX_DELAY);
    applyEffect(int_buffer);
    i2s_write(I2S_NUM_1, int_buffer, I2S_DMA_BUF_LEN * sizeof(int), &bytes_written, portMAX_DELAY);

    free(int_buffer);
}
``` |
I am totally new to bioinformatics, therefore apologies if the terminology is not correct or the question is dumb :)
I have been trying the following:
- installed miniconda
- create an environment called "rnaseq" where I installed kallisto
```
conda install -c bioconda kallisto (0.50.1)
```
- created another environment called "kb", where I installed Kallisto-bustools
```
conda create -y --name kb python=3.8
conda activate kb
pip install kb-python
```
- run the following line of code from terminal:
```
kb count \
pbmc_1k_v3_S1_mergedLanes_R1.fastq.gz pbmc_1k_v3_S1_mergedLanes_R2.fastq.gz \
-i Homo_sapiens.GRCh38.cdna.all.index \
-x 10XV3 \
-g t2g.txt \
-t 8 \
--cellranger
```
[I get this error message](https://i.stack.imgur.com/h9H5p.png)
I have a MacBook Pro 2020, Intel Core i5, OS Sonoma 14.4.
I used the same ".index" file and ran Kallisto on bulk RNA-seq data and it worked.
I tried now to redownload the fastq files (they belong to one of the test datasets from 10x), remove all the conda envs and reinstall everything but nothing worked.
Thanks for any help you could provide! |
problem with running Kallisto on single cell data |
|python|miniconda| |
null |
I'm new to Polars and need some advice from the experts. I have some working code, but I've got to believe there's a faster and/or more elegant way to do this. I've got a large dataframe with columns cik (int), form (string) and period (date) of relevance here. Form can have the value '10-Q' or '10-K'. Each cik will have many rows of the 2 form types with different periods represented.
What I want to end up with is, for each cik group, only the most recent 10-Q remains and only the most recent 10 10-Ks remain. Of course if there are less than 10 10-K forms, all should remain.
Here's what I'm doing now (it works):
    def filter_sub_for_11_rows_per_cik(df_):
        df = df_.sort('cik')

        # Keep only the last 10-Q
        q_filtered_df = df.group_by('cik').map_groups(
            lambda g:
                g.sort('period', descending=True).filter(pl.col('form').eq('10-Q')).head(1))

        # Keep the last up to 10 10-Ks
        k_filtered_df = df.group_by('cik').map_groups(
            lambda g:
                g.sort('period', descending=True)
                .filter(pl.col('form').eq('10-K'))
                .slice(0, min(10, g.filter(pl.col('form').eq('10-K')).shape[0]))
        )

        return pl.concat([q_filtered_df, k_filtered_df])
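A common way to avoid `map_groups` (which calls back into Python once per group) is to sort once, rank rows within each `(cik, form)` group, and keep only ranks below a per-form limit; in Polars that rank can be expressed with a window expression along the lines of `pl.int_range(pl.len()).over('cik', 'form')`. Here is the idea sketched in plain Python over hypothetical dicts, just to show the logic, not tested against the real data:

```python
from itertools import groupby
from operator import itemgetter

# How many of the newest filings to keep per (cik, form) group.
LIMITS = {"10-Q": 1, "10-K": 10}

def keep_latest(rows):
    """rows: list of dicts with 'cik', 'form' and 'period' keys."""
    # One descending sort makes (cik, form) groups contiguous,
    # with periods newest-first inside each group.
    ordered = sorted(rows, key=lambda r: (r["cik"], r["form"], r["period"]),
                     reverse=True)
    kept = []
    for _, group in groupby(ordered, key=itemgetter("cik", "form")):
        for rank, row in enumerate(group):
            if rank < LIMITS.get(row["form"], 0):
                kept.append(row)
    return kept
```

The point of the shape is that the whole filter becomes one sort plus one rank comparison, which a dataframe engine can run without per-group Python callbacks.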
|
Filtering inside groups in polars |
|python|python-polars| |
As noted in the comments, there were 2 issues. Cuda 9 changed the guarantees about threads in a warp remaining in lockstep, previously answered here: https://stackoverflow.com/questions/67907772/cuda-min-warp-reduction-produces-race-condition
The second issue after fixing this was due to imprecision in floating point representations: the exact sum of `32774 + 33550336`, i.e. `33583110`, cannot be represented in single precision; the nearest representable value is `33583112`, which matches the off by 2. |
As a user, how would I know how many arguments, and which arguments, I need to pass to a command-line program? There is no input function in the script giving some sort of indication such as "enter your name" or "enter your age", so how would a user know whether age comes first, or name, or address?
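This is exactly the gap that Python's `argparse` module fills: the script declares its expected arguments, and the user can run it with `-h`/`--help` to see the order and meaning of each one. A small sketch (the argument names here are made up):

```python
import argparse

# The script itself documents which arguments it expects and in what order.
parser = argparse.ArgumentParser(description="Demo: a self-documenting command line.")
parser.add_argument("name", help="your name (first positional argument)")
parser.add_argument("age", type=int, help="your age (second positional argument)")
parser.add_argument("--address", default="unknown", help="optional address")

# Normally this would be parser.parse_args(), reading sys.argv;
# here we pass a list explicitly so the sketch is runnable anywhere.
args = parser.parse_args(["Alice", "30", "--address", "Main St"])
print(args.name, args.age, args.address)  # Alice 30 Main St
```

Running the script with `-h` would print an auto-generated usage line such as `usage: script.py [-h] [--address ADDRESS] name age`, which is how the user learns the expected order.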
Just a general question for my understanding, as I'm going through a Python learning process. |
Here is a possible version of the correct code:
    import random

    print('Welcome to the Higher or Lower game!')

    while True:
        lowr = int(input('\nWhat would you like for your lower bound to be?: '))
        upr = int(input('And your highest?: '))

        if lowr >= upr:
            print('The lowest bound must not be higher than highest bound. Try again.')
            continue  # validate the bounds before drawing the random number

        x = random.randint(lowr, upr)
        g = int(input(f"""Great now guess a number between
    {lowr} and {upr}:"""))

        while True:
            if g < x:
                g = int(input('Nope. Too low, Guess another number: '))
            elif g > x:
                g = int(input('Nope. Too high, Guess another number: '))
            if g == x:
                print('You got it!')
                break
There were some errors with the indentation, the use of the input method, and the bounds validation (the random number must only be drawn after the bounds have been checked).
|
There is an error in my code: I can't create a hull with Turf because the input isn't in the expected format.
[enter image description here](https://i.stack.imgur.com/4eAWN.png)
```
var pointData = [
    [-74.5, 40],
    [-74.6, 40.1],
    [-74.4, 39.9],
    [-74.55, 39.95],
    [-74.65, 40.05],
    [-74.35, 40.15]
];

var pointGeoJSON = {
    type: 'FeatureCollection',
    features: pointData.map(function(coord) {
        return {
            type: 'Feature',
            geometry: {
                type: 'Point',
                coordinates: coord
            }
        };
    })
};

var hull = turf.convex(pointGeoJSON);
```
|
How can I show a polygon layer that covers all the markers in Mapbox? Please help me. Thanks |
|javascript|mapbox|polygon| |
null |
send email with react, node using mailgun
I am trying to send an email message using Mailgun. I use Node.js (NestJS) and this is my JS file. What should I change? I get this error: Unauthorized, forbidden.
```
const FormData = require("form-data");
const Mailgun = require("mailgun.js");

const mailgun = new Mailgun(FormData);
const mg = mailgun.client({
    username: "api",
    key: process.env.MAILGUN_API_KEY
});

app.post("/sendMail", (req, res) => {
    const { name, email, message } = req.body;

    // Check if required fields are present
    if (!name || !email || !message) {
        return res
            .status(400)
            .json({ error: "Name, email, and message are required fields" });
    }

    mg.messages
        .create("sandbox-123.mailgun.org", {
            from: "Excited User <mailgun@sandbox-123.mailgun.org>",
            to: [email],
            subject: "Hello",
            text: message,
            html: `<h1>${name} says:</h1><p>${message}</p>`,
        })
        .then((msg) => {
            console.log("Email sent successfully:", msg);
            res.send({ message: "Email sent" });
        })
        .catch((err) => {
            console.error("Error sending email:", err);
            res.status(500).json({ error: "Failed to send email" });
        });
});
```
|
Mailgun 401 forbidden |
|javascript|reactjs|node.js|unauthorized| |
null |
When you create a new project with angular 17, you won’t have any modules in your project, we have the [standalone configuration as the default](https://blog.angular.io/introducing-angular-v17-4d7033312e4b)
So Angular 17 generates an application that uses the standalone API; there is no module. |
null |
Finally, I've found a solution to this problem without utilizing the redirect and refreshListenable properties of GoRouter.
The concept involves creating a custom redirection component. Essentially, authStateChanges is listened to globally, and upon any change in AuthenticationState, we navigate to our redirection component. In my implementation, this redirection component is a splash page. From there, you can redirect to the desired page.
Here is my code
Router Class
    class AppRouter {
      factory AppRouter() => _instance;

      AppRouter._();

      static final AppRouter _instance = AppRouter._();

      final _config = GoRouter(
        initialLocation: RoutePaths.splashPath,
        routes: <RouteBase>[
          GoRoute(
            path: RoutePaths.splashPath,
            builder: (context, state) => const SplashPage(),
          ),
          GoRoute(
            path: RoutePaths.loginPath,
            builder: (context, state) => const LoginPage(),
          ),
          GoRoute(
            path: RoutePaths.registrationPath,
            builder: (context, state) => const RegistrationPage(),
          ),
          GoRoute(
            path: RoutePaths.homePath,
            builder: (context, state) => const HomePage(),
          ),
        ],
      );

      GoRouter get config => _config;
    }

    class RoutePaths {
      static const String loginPath = '/login';
      static const String registrationPath = '/register';
      static const String homePath = '/home';
      static const String splashPath = '/';
    }
Authentication Bloc
    class AuthenticationBloc
        extends Bloc<AuthenticationEvent, AuthenticationState> {
      AuthenticationBloc(
        this.registerUseCase,
        this.loginUseCase,
      ) : super(const AuthenticationInitial()) {
        on<AuthenticationStatusCheck>((event, emit) async {
          await emit.onEach(
            FirebaseAuth.instance.authStateChanges(),
            onData: (user) async {
              this.user = user;
              emit(const AuthenticationLoading());
              await Future.delayed(const Duration(seconds: 1), () {
                if (user == null) {
                  emit(const AuthenticationInitial());
                } else {
                  emit(AuthenticationSuccess(user));
                }
              });
            },
          );
        });
      }
    }
Main
    final router = AppRouter().config;

    return MultiBlocProvider(
      providers: [
        BlocProvider<AuthenticationBloc>(
          create: (context) =>
              sl<AuthenticationBloc>()..add(const AuthenticationStatusCheck()),
        ),
      ],
      child: BlocListener<AuthenticationBloc, AuthenticationState>(
        listener: (context, state) {
          if (state is AuthenticationLoading) {
            router.go(RoutePaths.splashPath);
          }
        },
        child: MaterialApp.router(
          title: 'Hero Games Case Study',
          theme: ThemeData.dark(
            useMaterial3: true,
          ),
          routerConfig: router,
        ),
      ),
    );
splash_page
    return BlocListener<AuthenticationBloc, AuthenticationState>(
      listener: (context, state) {
        if (state is AuthenticationSuccess) {
          context.go(RoutePaths.homePath);
        } else if (state is AuthenticationInitial) {
          context.go(RoutePaths.loginPath);
        }
      },
      child: Scaffold(
        appBar: AppBar(
          title: const Text('Splash Page'),
        ),
        body: const Center(
          child: CircularProgressIndicator(),
        ),
      ),
    );
NOTE:
If you use deep linking, you can add the code below to your GoRouter:
    redirect: (context, state) {
      if (context.read<AuthenticationBloc>().user == null &&
          (state.fullPath != RoutePaths.loginPath &&
              state.fullPath != RoutePaths.registrationPath &&
              state.fullPath != RoutePaths.splashPath)) {
        return RoutePaths.splashPath;
      }
      return null;
    },
|
Far from perfect. This one:
* wraps each word in a ``<span>``
* calculates by ``offsetHeight`` whether a SPAN spans multiple lines
* if so, it found a hyphenated word
* it then **removes** each _last_ character from the ``<span>``
to find where the word wrapped to a new line
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<style>
.hyphen { background: pink }
.remainder { background: lightblue }
</style>
<process-text lang="en" style="display:inline-block;
overflow-wrap: word-break; hyphens: auto; zoom:1.2; width: 7em">
By using words like
"incomprehensibilities",
we can demonstrate word breaks.
</process-text>
<script>
customElements.define('process-text', class extends HTMLElement {
    connectedCallback() {
        setTimeout(() => {
            let words = this.innerHTML.trim().split(/(\W+)/);
            let spanned = words.map(w => `<span>${w}</span>`).join('');
            this.innerHTML = spanned;
            let spans = [...this.querySelectorAll("span")];
            let defaultHeight = spans[0].offsetHeight;
            let hyphend = spans.map(span => {
                let hyphen = span.offsetHeight > defaultHeight;
                console.assert(span.offsetHeight == defaultHeight, span.innerText, span.offsetWidth);
                span.classList.toggle("hyphen", hyphen);
                if (hyphen) {
                    let saved = span.innerText;
                    while (span.innerText && span.offsetHeight > defaultHeight) {
                        span.innerText = span.innerText.slice(0, -1);
                    }
                    let remainder = document.createElement("span");
                    remainder.innerText = saved.replace(span.innerText, "");
                    remainder.classList.add("remainder");
                    span.after(remainder);
                }
            })
            console.log(this.querySelectorAll("span").length, "<SPAN> created");
        }) //setTimeout to read innerHTML
    } // connectedCallback
});
</script>
<!-- end snippet -->
The error is "demonstrate" _fits_ when shortened to "demonstr" **-** "ate"

Needs some more JS voodoo |
I'd like to display a rationale, where user can be navigated to the settings on accept, or asked for a city name when declined.
I've tried to create two functions: one, 'displayRationale', and a second, 'getCityNameFromUser'. When entering the second function, it should display a TextField and a button. When the user enters anything in the TextField and then clicks the button, the value of the LiveData property inside the ViewModel should be changed.
I've tried to make it on my own, but it got messy. Here's the code:
**ViewModel**
    @HiltViewModel
    class CurrentForecastViewModel @Inject constructor(
        private val repo: repo,
        private val lr: lr
    ) : ViewModel() {

        private val _cityName = MutableLiveData("")
        val cityName get() = _cityName

        fun onCityNameChange(newCityName: String) {
            _cityName.value = newCityName
        }
    }
**Activity**
    @Composable
    private fun displayRationale(
        currentForecastViewModel: CurrentForecastViewModel = viewModel()
    ) {
        val context = LocalContext.current
        val cityName by currentForecastViewModel.cityName.observeAsState(initial = "")

        AlertDialog(
            onDismissRequest = {},
            confirmButton = { context.startActivity(Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS)) },
            dismissButton = {
                Button(onClick = {
                    getCityNameFromUser(cityName = cityName) { newCityName ->
                        currentForecastViewModel.onCityNameChange(newCityName)
                    }
                }) {
                    Text(text = "Enter city instead")
                }
            }
        )
    }

    @Composable
    private fun getCityNameFromUser(cityName: String, onCityNameChange: (String) -> Unit) {
        OutlinedTextField(
            value = cityName,
            onValueChange = onCityNameChange,
            singleLine = true
        )
    }
|
Remix V2 useLoaderData issue while running in dev mode |
|javascript|reactjs|json|typescript|remix| |
Since 2017-03-13 `std::is_callable` is gone from [cppreference.com][1]. The last available description of it is from 2016-11-21 on [WaybackMachine][2].
The main difference between `std::is_callable` and `std::is_invocable`, that replaced it, is that
* the former used a single template parameter `template <class FnArgs>` specialized to `FnArgs` = `Fn(Args...)` for the callable type `Fn` and for the parameter types to test (`Args...`), while
* the latter accepts all of these in separate template parameters (`template <class Fn, class... Args>`).
What was the problem with `std::is_callable`'s `Fn(Args...)` approach?
I understand, that `Fn(Args...)` is a function type, where `Fn` is the return type. `std::is_callable` gave `Fn` another meaning, the function type to test, which I find misleading. This is only one problem. Could you name all the rest?
What I can think of, but cannot put it together (in keywords):
- Not every type can be return or parameter type of a function, e.g. `void`, abstract classes, C arrays, function types, [abominable function types][3], incomplete types come to mind.
- Parameters of functions discard the outermost `const`, and decay of function, array and reference types may play a role (see [LWG2895][4]).
- I don't know, maybe the `Fn(Args...)` approach may introduce ambiguity in some cases.
I have found _one_ compiler on Godbolt, on which `std::is_callable` works: [MVSC v19.10][5]. The others say there is no such type in namespace `std`. Seeing it work helps to understand this type trait more deeply, e.g. Alisdair Meredith's example from [P0604r0][6]. See on [Godbolt][7].
[1]: https://en.cppreference.com/w/cpp/types/is_callable
[2]: https://web.archive.org/web/20161121072930/https://en.cppreference.com/w/cpp/types/is_callable
[3]: https://wg21.link/P0172r0#2.1
[4]: https://wg21.link/LWG2895
[5]: https://godbolt.org/z/dKnz9avvE
[6]: https://wg21.link/P0604r0#Rationale
[7]: https://godbolt.org/z/156hGx7d5 |
You may want to call `brew services start mysql` command first to allow the connection. |
I'm deleting your post because this seems like a programming-specific question, rather than a conversation starter. With more detail added, this may be better as a Question rather than a Discussions post.
Please see [this page](https://stackoverflow.com/help/how-to-ask) for help on asking a question on Stack Overflow.
If you are interested in starting a more general conversation about how to approach an issue or concept related to the topic of this collective, feel free to make another Discussion post. You can check the discussions guidelines at https://stackoverflow.com/help/discussions-guidelines |
null |
I want to reset my product filter.
The filter works with the following code:
```
filteredData: function () {
    var vm = this
    var sf = vm.startFloor;
    var ef = vm.endFloor;
    var sa = vm.startSquare;
    var ea = vm.endSquare;
    var sp = vm.startPrice;
    var ep = vm.endPrice;

    return _.filter(this.avalFloors, (function (data) {
        if ((_.isNull(sf) && _.isNull(ef)) && (_.isNull(sa) && _.isNull(ea)) && (_.isNull(sp) && _.isNull(ep))) {
            return true
        } else {
            var areaf = data.areaf;
            var floor = data.floorf;
            var pricef = data.pricef;
            return (floor >= sf && floor <= ef) && (areaf >= sa && areaf <= ea) && (pricef >= sp && pricef <= ep);
        }
    }))
}
```
How can I reset this with an @click button? |
Some noise when attempting to produce sound via external DAC on ESP32 |
|esp32|arduino-ide|arduino-esp8266|arduino-esp32|platformio| |
**Context:**
Our app recently switched to using Custom Chrome Tabs for authentication. WebViews wouldn't allow login from Google and Facebook due to privacy restrictions.
Everything worked well using `onResume` and `onNewIntent` lifecycle methods to detect when the user closes the Custom Chrome Tab and is redirected back to our app. However, a recent Chrome update introduced a minimize button (Picture-in-Picture mode) for Custom Chrome Tabs.
For our authentication flow, this minimize functionality is undesirable. We want the user to complete authentication before interacting with the app again. Unfortunately, modifying Chrome Tab behavior directly is limited due to privacy concerns.
Our current issue is that when the user minimizes the Custom Chrome Tab, our app using onResume interprets it as a complete closure. I've explored `onMinimized` and `onUnMinimized` APIs from CustomTabCallback, but they don't detect the scenario where the user minimizes and then closes the tab. This leaves our app's activity empty, resulting in a poor user experience.
**Question:**
- Are there ways to reliably detect when a minimized Custom Chrome Tab is finally closed by the user?
- Are there alternative solutions to effectively track the minimize functionality for our authentication process?
**Additional Information:**
I've explored `onMinimized` and `onUnMinimized` APIs from `CustomTabCallback`. To track the minimized state, I implemented a flag. However, this approach has limitations. Inside onResume, I use a timer (set to 750 milliseconds) to periodically check the flag. If the flag shows the tab is minimized, I avoid treating it as a closed tab. This introduces a slight delay in detecting the close event, and I'm curious if there are better solutions.
As I said earlier, this approach can detect the minimized state, but when the user closes the Custom Tab while it is minimized, we still have no way to detect it.
Is there any way to detect the closing of a Custom Tab while it is in the minimized state?
|
How do I get a caller's HTML `<label>` surrounding my Custom Element to bubble its click events to my shadow root? Is there something in the shadowRoot I can use to listen for click events on my light-DOM label?
I believe this [clicking labels][1] answer comes close but I am inheriting from HTMLElement.
In the following code I want my ABC-TOGGLE custom element to hear events on label "aLabel".
```
export class ABCToggle extends HTMLElement
{
    static formAssociated = true;
```
No joy (`#toggle` is the top-level element):
```
this.#shadowRoot.host.addEventListener('click', (e) => {
    console.log("SR click");
});
this.#shadowRoot.addEventListener('click', (e) => {
    console.log("SR click");
});
this.#toggle.addEventListener('click', (e) => {
    console.log("toggle click");
});
```
Calling HTML:
```
<div id="disableIt">
    <label id="aLabel" for="disTick">Disable Toggle </label>
    <div id="tickContainer">
        <abc-toggle id="disTick" name="disTick" checked="false" tabindex=0 type="tick"></abc-toggle>
    </div>
</div>
```
[1]: https://stackoverflow.com/questions/38320270/clicking-label-not-focusing-custom-element-web-components |
Currently, there is no option to turn data-labels on/off series-wise; they are enabled for all series.
I am opening an [issue on GitHub][1] to implement it in the next release.
**EDIT**: A new option `enabledOnSeries` shipped in v3.5.0. You can use it like:
options = {
dataLabels: {
enabled: true,
enabledOnSeries: [1, 2]
}
}
The above will show data-labels only for the series with indices 1 and 2, and disable data-labels on the series with index 0 (in your example, the column series).
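For context, a fuller options sketch (series names and data are illustrative, not from the question) for a mixed chart where series index 0 is the column series:

```javascript
// Series 0 (column) gets no data-labels; series 1 and 2 (lines) do.
const options = {
  chart: { type: 'line', height: 350 },
  series: [
    { name: 'Revenue', type: 'column', data: [10, 20, 30] },
    { name: 'Target',  type: 'line',   data: [12, 18, 28] },
    { name: 'Trend',   type: 'line',   data: [11, 19, 29] },
  ],
  dataLabels: {
    enabled: true,
    enabledOnSeries: [1, 2], // zero-based series indices
  },
};
```

The chart is then rendered as usual with `new ApexCharts(document.querySelector('#chart'), options).render()`.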
[1]: https://github.com/apexcharts/apexcharts.js/issues/345 |
I have bulk data with ID and Name columns. I want the report in the format below:
```
ID Name   ID Name   ID Name   ID Name
 1   x     6   y    11   z    16   f
 .   .     .   .     .   .     .   .
 5   f    10   l    15   n    20   a
```
I want to populate the table from left to right; once Page 1 is filled, the report should continue on Page 2 following the same pattern.
How can I design this report?
Any help would be greatly appreciated. Thanks in advance.
How to make page columns in RDLC Report? |
|asp.net|ssrs-2008|rdlc| |
null |
The imports needed by the helper functions must exist in the same file: `libraries.py` imports those libraries into its own namespace, which the helper-functions file cannot access.
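A runnable sketch of the point (the file names `libraries.py`/`helpers.py` follow the answer; the modules are generated in a temp directory purely for demonstration): an `import` in `libraries.py` binds the name only in that module's namespace, so a helper module that uses `math` without importing it raises `NameError`.

```python
import os
import sys
import tempfile
import textwrap

# Generate the two modules in a throwaway directory.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "helpers.py"), "w") as f:
    f.write(textwrap.dedent("""
        def area(r):
            return math.pi * r * r  # 'math' is NOT imported in this file
    """))
with open(os.path.join(tmp, "libraries.py"), "w") as f:
    f.write("import math\nimport helpers\n")

sys.path.insert(0, tmp)
import libraries  # imports fine: 'math' is bound in libraries' own namespace

try:
    libraries.helpers.area(1.0)
except NameError as e:
    print("helpers.py cannot see libraries' import:", e)
```

The standard fix is simply to put `import math` at the top of `helpers.py` itself, rather than relying on imports made elsewhere.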
|
Android devices below API 25 no longer connecting to Let's Encrypt SSL servers? |