I am starting to learn Java Spring Boot 3.0, and I am using IntelliJ Community Edition with WSL2 as the OS. After I installed the most recent update (2023.3.6), this warning appeared, and it seems to be preventing me from loading the Maven project.
[enter image description here](https://i.stack.imgur.com/O5DZG.png)
Other documentation makes it look like the problem is with Ubuntu, but everything was working before the update. |
Ubuntu-22.04 File watcher failed repeatedly and has been disabled (External file changes sync might be slow) |
|java|intellij-idea|wsl-2|ubuntu-22.04| |
**1st solution** - automatic.
The PyCharm plugin **"PyCharm Help"** allows you to automatically download web help for offline use: when help is invoked, pages are delivered via a built-in Web server.
This solution has drawbacks - for me, it downloaded help only for one version of Python, but not for others. Also, in that version of Python help, the search doesn't work.
**2nd solution** - better, very flexible, but manual.
1. Download [Python's HTML help][1] and unpack it into a folder named for the corresponding version, e.g., on Windows, "C:\py_help_server\3.12".
*The folder "py_help_server" becomes the root folder of our server, and the "3.12" name must match the online help's URL format.*
2. Run cmd as admin and execute the following commands:
cd C:\py_help_server\3.12
python -m http.server 80 --bind 127.0.0.1
3. For Chrome/Brave, download the plugin "Requestly - Intercept, Modify & Mock HTTP Requests". In its settings, go to "HTTP Rules", then "My Rules", click "New Rule" with the type "Replace String".
And create a rule like this:
If the URL contains "https://docs.python.org/3.12/", replace "https://docs.python.org/" with "http://127.0.0.1/".
Now, all pages of the Python 3.12 help will be redirected to our local server, which we started in step 2.
This works for me like a charm. I tried to edit the hosts file too, but that didn't work for me at all.
Also, this last method has an advantage over the "PyCharm Help" plugin - the local web help's search function works well!
[1]: https://docs.python.org/3/download.html
|
GitHub Pages usage limits and rate limiting are documented here: https://docs.github.com/en/pages/getting-started-with-github-pages/about-github-pages#usage-limits
At the time of this answer
> GitHub Pages sites have a soft bandwidth limit of 100 GB per month. |
|loops|parallel-processing|fortran|sparse-matrix| |
I have a react app created with create-react-app.
In package.json dependencies I have:
...
"react": "18.2.0",
"react-app-rewired": "2.2.1",
"react-scripts": "5.0.1",
...
in scripts I have:
"scripts": {
"start": "npm run watch:css & react-app-rewired start",
...
In `config-overrides.js` file I have next:
const MonacoWebpackPlugin = require('monaco-editor-webpack-plugin');
module.exports = function override(config, env) {
config.plugins.push(
new MonacoWebpackPlugin({
languages: ['json', 'html']
})
);
return config;
};
The problem is, whenever I change something in the code, hot reloading is triggered, but in the browser (Chrome) it shows me an alert asking me to confirm that I want to reload the page.
Any idea why?
|
Why does my React app show an alert in Chrome to confirm reloading? |
|javascript|reactjs|google-chrome| |
I have this settings page with some Input components on it. I want them to be white like the rest of the page, but for some reason they are dark. Overriding the classes doesn't help. I use React, Tailwind CSS and DaisyUI.
I also have absolutely no idea why this is dark at all. If someone could explain this, I would be very thankful.
[![Settings Page][1]][1]
[1]: https://i.stack.imgur.com/B4Yqh.png
Settings.jsx
```
import React from 'react';
import 'tailwindcss/tailwind.css';
import 'daisyui/dist/full.css';
import FormInput from '../components/inputs/FormInput';
import DateInput from '../components/inputs/DateInput';
function Settings() {
return (
<div className="min-h-screen bg-gray-100 flex flex-col justify-center items-center">
<div className="max-w-2xl mx-auto bg-white shadow-xl rounded-lg p-8" style={{ width: '800px' }}> {/* Nichtmehr responsive */}
<h1 className="text-4xl font-bold mb-8 text-center">Einstellungen</h1>
<div>
<h2 className="text-2xl font-bold mb-4">Konfiguration</h2>
<div className="grid grid-cols-1 gap-4">
<FormInput
label="Zigaretten am Tag"
placeholder="15"
type="number"
name="cigsPerDay"
/>
<FormInput
label="Zigaretten pro Packung"
placeholder="80"
type="number"
name="cigsPerPack"
/>
<FormInput
label="Preis pro Packung (€)"
placeholder="7"
type="number"
name="pricePerPack"
/>
<DateInput
label="Zeitpunkt des Aufhörens"
name="dateOfReturn"
/>
</div>
</div>
</div>
</div>
);
}
export default Settings;
```
FormInput.jsx
```
import React, { useState, useEffect } from 'react';
function FormInput({ label, placeholder, type, name }) {
const [inputValue, setInputValue] = useState('');
// Load value from localStorage on component mount
useEffect(() => {
const storedValue = localStorage.getItem(name);
if (storedValue !== null) {
setInputValue(storedValue);
}
}, [name]);
const handleFocus = (e) => {
e.target.select();
};
const handleChange = (e) => {
const newValue = e.target.value;
setInputValue(newValue);
// Store value in localStorage
localStorage.setItem(name, newValue);
};
return (
<div className="form-control">
<label className="label">
<span className="label-text">{label}</span>
</label>
<input
type={type}
placeholder={placeholder}
className="input input-bordered"
value={inputValue}
onFocus={handleFocus}
onChange={handleChange}
/>
</div>
);
}
export default FormInput;
```
Also, here is my attempt at fixing it:
```
<div className="form-control">
<label className="label">
<span className="label-text">{label}</span>
</label>
<input
type={type}
placeholder={placeholder}
className="input input-bordered bg-white" // Add bg-white to override dark style
value={inputValue}
onFocus={handleFocus}
onChange={handleChange}
/>
</div>
``` |
Pytorch distribute process across nodes and gpu |
|python|pytorch|slurm|multi-gpu| |
I want to create a class BottleOfWater() with a function def __init__(self, contenance, couleur, matiere, liquid):
and then display the bottle's contents: bouteille = BottleOfWater(30, "green", "glass", "water")
```
class BottleOfWater:
    def __init__(self, contenance, couleur, matiere, liquid):
        self.contenance = contenance
        self.couleur = couleur
        self.matiere = matiere
        self.liquid = liquid

bouteille = BottleOfWater(30, "green", "glass", "water")
print(type(bouteille))
print(bouteille)
``` |
I have my production database with some specific master password and a specific user. My database is AWS RDS with PostgreSQL.
I'm automatically cloning it every day to some *dev* environment, where I need multiple developers to have access to it, but they should not have access to the *production* environment.
How can I give it a new, non-production password during cloning? I could obviously write some automation myself, but I'd prefer something simpler, or optionally to use the AWS API. |
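For illustration, a minimal sketch of what that automation could look like with boto3 (the instance identifier and the way the new password is supplied are placeholders; `modify_db_instance` with `MasterUserPassword` and `ApplyImmediately` is the standard RDS call for resetting the master password):

```python
# Sketch only: reset the master password of the freshly cloned dev instance.
# "dev-clone" below is a placeholder; use the identifier from your clone job.

def build_password_reset_kwargs(instance_id: str, new_password: str) -> dict:
    """Arguments for RDS ModifyDBInstance that set a new master password."""
    return {
        "DBInstanceIdentifier": instance_id,
        "MasterUserPassword": new_password,
        "ApplyImmediately": True,  # don't wait for the maintenance window
    }

def reset_dev_password(instance_id: str, new_password: str) -> None:
    import boto3  # imported lazily so the sketch reads without boto3 installed
    rds = boto3.client("rds")
    rds.modify_db_instance(**build_password_reset_kwargs(instance_id, new_password))
```

Once the clone's master password differs from production, developer logins can be created inside the clone with plain `CREATE ROLE` statements.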
I have imported the .less file into the Vue component, but it still gives me an error.
I define the variables in `base.less` under `assets/less`.
```
@font-face {
font-family: 'Poppins';
src: url(../fonts/Poppins-Regular.ttf);
font-weight: normal;
font-style: normal;
}
@button-bg-color: #809c2c;
@button-bg-color-hover: #99b056;
@button-border-radius: 7px;
@button-font: 'Poppins';
```
And import it to `App.vue`.
```
<style lang="less" scoped>
@import './assets/less/main.less';
header {
display: flex;
align-items: center;
justify-content: space-between;
margin: 2em;
.header-title-nav {
width: 60%;
display: flex;
align-items: center;
justify-content: space-between;
i {
width: 6.25em;
height: 6.25em;
}
.logo {
width: 100%;
height: 100%;
cursor: pointer;
}
}
.header-btn {
background-color: @button-bg-color;
border-radius: .5em;
width: 12.5em;
height: 7.8125em;
text-decoration: none;
color: white;
font-size: .8em;
font-weight: 600;
display: flex;
justify-content: center;
align-items: center;
&:hover {
background-color: #99b056;
}
}
}
</style>
```
In VS Code, the error "property value expected" appears on the line below `.header-btn`.
I installed less, and the variables can be used in `main.less`.
Others who encountered the same problem solved it by just adding an import statement, but that didn't work for me.
Here's another easy way to add truly empty columns to your query.
[example of adding empty columns to a query][1]
=query(A:E;"SELECT Col1, 1/0, Col2, 2/0 label 1/0 'blank', 2/0 'blank'";1)
where `1/0` and `2/0` (any number divided by zero) result in empty columns
[1]: https://i.stack.imgur.com/BBgaa.png |
The documentation appears to be for the development version not the CRAN version. [In an open issue on GitHub][1], the package developer suggests this:
apa.reg.table(blk1,filename="exRegTable.doc")
[1]: https://github.com/dstanley4/apaTables/issues/40 |
I have recently been through this upgrade.
Upgrading to Appium 2 itself is not mandatory, but it is recommended, since most of the tools and frameworks that depend on Appium will be moving to it as the way forward. If you rely on cloud testing platforms, they will likely already support it, since maintaining the older versions is difficult for them.
[Here][1] is the official guide on how to migrate, with details about what changed.
[1]: https://appium.io/docs/en/2.0/guides/migrating-1-to-2/ |
Yes; see the JBeret examples below. This is specific to JBeret, not a standard feature in the Java Batch spec.
https://github.com/jberet/jsr352/blob/main/test-apps/propertyInjection/src/main/java/org/jberet/testapps/propertyinjection/PropertyInjectionBatchlet.java#L215
https://github.com/jberet/jsr352/blob/main/test-apps/propertyInjection/src/main/resources/META-INF/batch-jobs/propertyInjection.xml#L38 |
I am using docker compose watch with a compose.yaml file, and everything is configured to rebuild on env-file changes. However, although the image rebuilds and the container is restarted, the environment variables are not changed. Is there a way to do this with the watch directive, or would I need to use watchtower (I just read about it)?
Here is part of my compose.yaml file.
develop:
watch:
- action: rebuild
path: dotenvs/.nginx.env
Thanks! |
How to use docker compose watch to sync environment variables change |
|docker|docker-compose| |
While you can't have a `box-shadow: linear-gradient(#9198E5, #E66465)`, you could take advantage of [Pseudo-elements](https://developer.mozilla.org/en-US/docs/Web/CSS/Pseudo-elements) to have the same gradient, mixing it with `filter: blur(10px)` to have the same effect of shadow. Note that `#back` must have `z-index: -2` to sit behind the pseudo-element.
Also, note that because of the different sizes between `html` height and `#front` height the gradient spread color will be different, so to make sure that it will have an effect like _dissolve_ I used Chrome DevTools to pick the hex colors. The first color was picked from the bottom of the green rectangle and the second color was picked from the top of the green rectangle.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
html {
height: 500px;
background: linear-gradient(#9198E5, #E66465);
}
#back {
position: absolute;
top: 100px;
left: 100px;
width: 200px;
height: 200px;
background-color: #FFBBBB;
z-index: -2;
}
#front {
position: absolute;
display: block;
top: 200px;
left: 200px;
width: 200px;
height: 200px;
background-color: #BBFFBB;
}
#front:before {
content: "";
position: absolute;
inset: 0;
background: linear-gradient(#ac87bc, #d17085);
filter: blur(10px);
transform: translate(-40px, -40px);
z-index: -1;
}
<!-- language: lang-html -->
<div id="back"></div>
<div id="front"></div>
<!-- end snippet -->
|
I have the following controller's action:
```C#
[HttpPost("api/v1/addProduct")]
[Consumes("multipart/form-data")]
public Task<ProductDto?> AddProduct([FromForm] ProductRequest request, CancellationToken cancellationToken)
{
return _productService.AddProduct(request, cancellationToken);
}
```
and the following model:
```C#
public class ProductRequest
{
[Required]
public string Title { get; set; }
public string? Description { get; set; }
[Required]
public string PreviewDescription { get; set; }
[Required]
public IFormFile PreviewImage { get; set; }
public IFormFile[]? Images { get; set; }
[Required]
public int[] CategoryIds { get; set; }
public bool? IsAvailable { get; set; }
}
```
I successfully sent data to the server (I use formData):
[![enter image description here][1]][1]
But for some reason, I see the following in the debugger:
[![enter image description here][2]][2]
Everything is OK with `previewImage`, but where is `images`? Why is `previewImage` here but not `images`? I sent them in exactly the same way as `previewImage`; we can see that in the request payload screenshot. Please help me figure it out.
**UPDATE**
I tried `IFormFileCollection`:
[![enter image description here][3]][3]
No result.
But:
[![enter image description here][4]][4]
As we can see here (using `HttpContext.Request.Form.Files`), all the files are present. What is wrong with the binding?
I can solve it in this way:
[![enter image description here][5]][5]
But it looks awful. Model binding should work here! Why doesn't it?
[1]: https://i.stack.imgur.com/N6lJx.png
[2]: https://i.stack.imgur.com/oruB8.png
[3]: https://i.stack.imgur.com/3pxB9.png
[4]: https://i.stack.imgur.com/Q39GI.png
[5]: https://i.stack.imgur.com/bhCNe.png |
I am writing a program that reads a BMP file and then outputs the same BMP file cropped in half. I do this by simply halving the BMP height in the header, which crops out the top half of the image. However, I can only crop out the top half this way. I am trying to find a way to crop out the bottom half instead, but the pixel rows are written starting from the bottom of the image, and I need to start reading halfway through the file (or from the top down).
```
// Update the width and height in the BMP header
header.width = newWidth;
header.height = newHeight;
printf("New header width: %d\n", header.width);
printf("New header height: %d\n", header.height);
// Write the modified BMP header to the output file
fwrite(&header, sizeof(BMPHeader), 1, outputFile);
// Calculate the padding
int padding = (4 - (header.width * (header.bpp / 8)) % 4) % 4;
// Copy the image data
unsigned char pixel[4];
for (int y = 0; y < newHeight; y++) {
for (int x = 0; x < header.width; x++) {
fread(pixel, sizeof(unsigned char), header.bpp / 8, inputFile);
fwrite(pixel, sizeof(unsigned char), header.bpp / 8, outputFile);
}
for (int p = 0; p < padding; p++) {
fputc(0, outputFile); // Write padding for the new image
}
}
// Close the files
fclose(inputFile);
fclose(outputFile);
printf("BMP image cropped successfully\n");
```
This is essentially all the code that does the image cropping. I'm only using stdio.h and stdlib.h libraries and would like to keep it that way. The outputted image is the bottom half of the original image, but I would like to also be able to find a way to keep the top half instead. The original BMP image I am using is 3200x1200, and I am setting the new height to be 600 instead of 1200 so the new image can be cut in half vertically.
EDIT: The header.height and header.width variables are both positive. header.height = 1200, and header.width = 3200 originally.
[Original BMP image (3200x1200)](https://i.stack.imgur.com/3Rml3.png)
[Cropped image (3200x600)](https://i.stack.imgur.com/zStih.png)
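Since rows in a positive-height BMP are stored bottom-to-top, keeping the *top* half of the picture means keeping the *last* `newHeight` rows of the pixel data; in the C code above, that would mean seeking past the first `height - newHeight` padded rows of the input before the copy loop. A small Python sketch of the row arithmetic (the fake 3-byte rows stand in for real padded BMP rows):

```python
def keep_top_half(pixel_data: bytes, row_size: int, height: int) -> bytes:
    """Rows are stored bottom-to-top, so the top half of the picture
    is the last height // 2 rows of the pixel buffer."""
    new_height = height // 2
    return pixel_data[(height - new_height) * row_size:]

# fake image: 4 rows of 3 bytes each; row 0 is the *bottom* of the picture
rows = [bytes([i, i, i]) for i in range(4)]
data = b"".join(rows)
top = keep_top_half(data, 3, 4)  # keeps rows 2 and 3, the visually upper half
```

The same slice offset, `(height - new_height) * row_size`, is what an `fseek` on the input file would use in the C version.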
|
How to reset vue product filter? |
|javascript|vue.js| |
So I have a physical library with around 20k books, and I am in the process of digitalising them (scanning them into PDFs).
I want to make an open library which can host all these 20k books with search, the search is not limited to book name but can also be author or publications.
I've thought of initially building a database with all the book entries, each pointing to its PDF. So when a book is searched for, the query goes through the database and the download is ready.
It would also be able to sort and display the dataset in alphabetical order making discovery of unknown books possible.
What would be the best way to do this, a database with books pointing to downloadable books in a file server?
Scope is to make something very similar to Anna’s archive
Help and suggestions to make this as simple as possible is very much appreciated.
Should this be a database pointing to downloadable books on a file server, only a web-based file server, or a website holding everything internally (which would be very tedious, as it would mean making 20k pages)? |
Open Web Library |
|php|html|sql|database|server| |
|javascript|reactjs|json|remix.run| |
In a basic Remix V2 app, I need help understanding whether the following is expected behavior, a bug in V2, or possibly a missing configuration option or setting.
I could not find anything related to this issue in the Remix documentation.
I created a demo Remix V2 app by running `npx create-remix@latest`
I wrote a simulated API backend method for basic testing which simply returns JSON data as retrieved from a JSON file:
export async function getStoredNotes() {
const rawFileContent = await fs.readFile('notes.json', { encoding: 'utf-8' });
const data = JSON.parse(rawFileContent);
const storedNotes = data.notes ?? [];
return storedNotes;
}
The client side consists of 2 simple pages that uses `NavLink` for navigation between the 2 pages:
import { NavLink } from '@remix-run/react';
...
<NavLink to="/">Home</NavLink>
<NavLink to="/notes">Notes</NavLink>
In the `notes` route, I have the following loader function defined, which makes a call to my simulated API method:
export const loader:LoaderFunction = async () => {
try {
const notes = await getStoredNotes();
return json(notes);
} catch (error) {
return json({ errorMessage: 'Failed to retrieve stored notes.', errorDetails: error }, { status: 400 });
}
}
And in the `notes` main component function, I receive that data using the `useLoaderData` hook and attempt to print the returned JSON data to the console:
export default function NotesView() {
const notes = useLoaderData<typeof loader>();
console.log("JSON Data retrieved:", notes);
return (
<main>
<NoteList notes={notes as Note[] || []} />
</main>
)
}
**When I run a `build` and subsequently `serve` the application, everything works correctly:**
Initial page load receives the data successfully and prints the value to the console as JSON.
{"notes":[{"title":"my title","content":"my note","id":"2024-03-28T05:22:52.875Z"}]}
Navigating between the index route and back to the notes route using the `NavLink` also works: I see the data printed to the console correctly on subsequent page visits.
**When I run in `dev` mode using `npx run dev`, the following problem occurs:**
The initial page load coming from the dev server also loads correctly and prints the JSON to the console.
Navigating client side between the index route and back to the notes route using the `NavLink` causes an issue where the data printed to the console is not JSON. Instead, I am seeing a strange output of an exported JavaScript array definition:
export const notes = [
{
title: "my title",
content: "my note",
id: "2024-03-28T05:22:52.875Z"
}
];
Again, to be clear, this behavior only occurs when navigating client side using `NavLink` or `Link` elements while running `npx run dev`.
Is this expected behavior when running in `dev` mode?
|
In the current screen, call:

    const CurrentScreen = ({navigation}) => {
        navigation.navigate('TargetScreen', {phone});
    }

On the other side, you can get it from the route params:

    const TargetScreen = ({navigation, route}) => {
        console.log(route.params.phone); // the phone number is right here
    } |
Kinda new to programming, but here's the problem:
As the title suggests, I'm correctly sending the data I'm writing to the database through the form.
```
<!-- VISUALIZZA TABELLA -->
<div>
<!-- MODALE AGGIUNGI -->
<div class="modal fade" id="addTreatment" tabindex="-1" role="dialog" aria-labelledby="addTreatmentTitle" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered" role="document">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="addTreatmentLongTitle">Aggiungi nuovo Trattamento</h5>
</div>
<form method="POST" action="#">
<div class="modal-body">
<!-- FORM -->
<div class="mb-3">
<label for="nome" class="form-label">Tipologia: </label>
<input type="text" class="form-control" id="nome" name="nome" required>
</div>
<div class="mb-3">
<label for="cognome" class="form-label">Durata: </label>
<input type="text" class="form-control" id="cognome" name="cognome" required>
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
<button type="submit" class="btn btn-primary">Invia</button>
</div>
</form>
</div>
</div>
</div>
</div>
<table class="table table-striped">
<thead>
<tr>
<th scope="col">Tipologia</th>
<th scope="col">Durata</th>
</tr>
</thead>
<tbody>
<?php if ($bResult->num_rows > 0) {
while ($row = $bResult->fetch_assoc()) {
?>
<tr>
<th scope="row"><?php echo $row['nome']; ?></th>
<td><?php echo $row['cognome']; ?></td>
<!-- <td><button type="submit" class="btn btn-danger" name="delete" value="<?php //$row['id'] ?>">Elimina</button></td> -->
</tr>
<?php
}
} ?>
</tbody>
</table>
```
Unfortunately, every time I manually refresh the page, it resubmits data that I sent hours ago.
Here is the PHP file:
```
<?php
include "../db_connection.php";
// INSERIMENTO TRATTAMENTO
if (isset($_POST['nome']) || isset($_POST['cognome'])) {
$sNome = $_POST['nome'];
$sCognome = $_POST['cognome'];
if ($sNome != NULL) {
$tInsert = "INSERT INTO test (nome, cognome) VALUES ('$sNome', '$sCognome')";
if ($sConn->query($tInsert) == FALSE) {
echo 'Oh';
}
}
}
// DISPLAY TRATTAMENTI
$tShow = "SELECT id, nome, cognome FROM test";
$bResult = $sConn->query($tShow);
$sConn->close();
```
Any suggestions on what I have to do?
(I'm also trying to print a button for every row that deletes that row.) |
I need an Azure product where I can run my ffmpeg process, but I don't want to deal with access overhead, etc. I do, however, need to be able to specify how strong the CPU and other resources should be.
So Consumption functions would be too weak, and Premium functions are billed at a fixed rate, so I can't use those either.
I first tried using a Web API, but those are quite costly, and because the load spikes very suddenly (within a couple of seconds CPU usage can go to 100%), scaling solutions would be quite hard even if I could afford them.
I'm basically looking for something I can write an HTTP-triggered function for: I call it, it comes to life, executes, and then dies.
PS: I'm a novice to Azure, so if I overlooked anything obvious, please let me know.
I need an Azure product that executes my intensive ffmpeg command then dies, and I only get charged for the delta. Any tips? |
|azure|ffmpeg|cloud|scale|cost-management| |
Use the following:
python setup.py install --prefix=<your path> |
Assuming no date comes before Jan 1, 1970 (standard epoch time). This should suffice. And if you do have pre-1970 dates, change the `int year = 1970` statement below to have an initial value of `1`.
The code below basically counts up the number of seconds from 1/1/1970 at midnight accounting for leap years as it goes.
Below assumes a basic understanding that:
* Leap years have 366 days. Non leap-years have 365 days
* Leap years usually occur in years divisible by 4, but not in years divisible by 100 unless also divisible by 400 (the year 2000 is a leap year, but the year 2100 will not be).
* The `DateTime` struct will never contain negative numbers or months outside 1-12. Hours are assumed to be between 0-23 (military time). Minutes are a value between 0-59.
* Months are assumed to be numbered between 1-12
* Same time zone for comparing differences.
```
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <inttypes.h>
struct DateTime {
int day, month, year, hour, minute;
};
const int SECONDS_IN_DAY = 86400;
int isLeapYear(int year) {
if (year % 4) return 0;
if (year % 400 == 0) return 1;
if (year % 100 == 0) return 0;
return 1;
}
int64_t seconds_since_epoch(const struct DateTime* dt) {
size_t days_in_month[13] = { 0, 31,28,31,30,31,30,31,31,30,31,30,31 };
int year = 1970;
int month = 1;
int day = 1;
int64_t total = 0;
while (year < dt->year) {
if (isLeapYear(year)) {
total += 366 * SECONDS_IN_DAY;
}
else {
total += 365 * SECONDS_IN_DAY;
}
year++;
}
if (isLeapYear(year)) {
days_in_month[2] = 29;
}
while (month < dt->month) {
total += days_in_month[month] * SECONDS_IN_DAY;
month++;
}
total += (dt->day - day) * SECONDS_IN_DAY;
total += 60 * 60 * dt->hour; // hours to seconds
total += 60 * dt->minute; // minutes to seconds
return total;
}
// Function to calculate time difference
int calculate_time_difference(char date_in[], char time_in[], char date_out[], char time_out[]) {
struct DateTime dt1, dt2;
sscanf(date_in, "%d-%d-%d", &dt1.day, &dt1.month, &dt1.year);
sscanf(time_in, "%d:%d", &dt1.hour, &dt1.minute);
sscanf(date_out, "%d-%d-%d", &dt2.day, &dt2.month, &dt2.year);
sscanf(time_out, "%d:%d", &dt2.hour, &dt2.minute);
// Convert date and time to seconds since epoch
int64_t seconds1 = seconds_since_epoch(&dt1);
int64_t seconds2 = seconds_since_epoch(&dt2);
int64_t difference_in_seconds = llabs(seconds1 - seconds2); // llabs handles int64_t; 'auto' is not valid here in C
int difference_in_minutes = difference_in_seconds / 60;
return difference_in_minutes;
}
int main() {
int time_diff = calculate_time_difference("01-01-2024", "19:05", "02-01-2024", "9:05");
printf("Time difference->%i\n", time_diff);
return 0;
}
``` |
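As a quick cross-check of the C routine above, the same difference can be computed with Python's standard `datetime` module (same dd-mm-yyyy and HH:MM formats as the `sscanf` calls):

```python
from datetime import datetime

def minutes_between(date_in: str, time_in: str, date_out: str, time_out: str) -> int:
    """Absolute difference in whole minutes, mirroring calculate_time_difference()."""
    fmt = "%d-%m-%Y %H:%M"
    t1 = datetime.strptime(f"{date_in} {time_in}", fmt)
    t2 = datetime.strptime(f"{date_out} {time_out}", fmt)
    return int(abs(t2 - t1).total_seconds()) // 60

print(minutes_between("01-01-2024", "19:05", "02-01-2024", "9:05"))  # 840
```

840 minutes (14 hours) matches what the hand-rolled epoch arithmetic should print for the example in `main()`.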
Every time I refresh the page, the form makes another DB call |
|php|mysql| |
If it is the last stage in your Jenkinsfile (as in my case), you can use `BUILD_ID=dontKillMe` to solve that problem, like this:
stage("Run"){
steps{
withEnv(['BUILD_ID=dontKillMe']) {
script{
sh '<YOUR COMMAND HERE>'
}
}
}
}
In my case, I have Jenkins on Ubuntu, and I want to run the last stage of my Jenkinsfile forever, so I used:
stage("Run app service forever"){
steps{
withEnv(['BUILD_ID=dontKillMe']) {
script{
kubeconfig(credentialsId: 'my_minikube_config_file', serverUrl: 'https://###.###.###.###:8443') {
sh 'kubectl port-forward --address 0.0.0.0 services/tecapp 5000:5000'
}
}
}
}
}
[![Jenkins last stage running forever][1]][1]
NB: DON'T add **&** at the end of your command, like this:
stage("Run"){
steps{
withEnv(['BUILD_ID=dontKillMe']) {
script{
sh '<YOUR COMMAND HERE> &'
}
}
}
}
When I used **&**, the stage didn't run forever.
[1]: https://i.stack.imgur.com/23TqK.png |
I need to train a model to add the SSML tags and punctuation to the input text.
For example, from the sentence "Hello world." I'd like to get the
`<speak> Hello! world. </speak>` output.
Another example:
Input: "In reverse bias, the electrons flow from anode to cathode (P -> N), while the holes (positive charges) flow from cathode to anode (N -> P). This happens because in reverse bias, a greater voltage is wired to N, attracting electrons to outside, while the least voltage does the same with holes."
Output:`<speak>In reverse bias, the electrons, flow from anode to cathode (P -> N), while the holes (positive charges), flow from cathode to anode (N -> P). <break time = "0.5s" /> This happens because in reverse bias, a greater voltage, is wired to N, attracting electrons to outside, while the least voltage, does the same with holes. </speak>`
I followed the standard Seq2Seq training using the Hugging Face tutorials, but had no luck: the output text is the same as the input. I used a Flan-T5-base model. My data is 1200 pairs.
Any suggestions on how to force the model to emit the SSML tags and the "incorrect" punctuation? |
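One thing worth checking (a sketch only, with a hypothetical prefix string, not a known fix): T5-family models are typically fine-tuned with an explicit task prefix on the input, so the model can tell this is a transformation task rather than a copy task. Preparing the 1200 pairs might look like:

```python
TASK_PREFIX = "add ssml: "  # hypothetical instruction prefix; any consistent string works

def make_pair(plain_text: str, ssml_text: str) -> dict:
    """One Seq2Seq training pair: prefixed plain text in, SSML-tagged text out."""
    return {"input": TASK_PREFIX + plain_text, "target": ssml_text}

pair = make_pair("Hello world.", "<speak> Hello! world. </speak>")
```

The key point is that every training input carries the same prefix, and inference inputs must carry it too.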
|huggingface-transformers|training-data|large-language-model|huggingface|seq2seq| |
I am using Next.js 14 (App Router) for my React project. I need to use GraphQL, and I also have Apollo Server set up. As a complete beginner, I'm quite confused about which file to create in which directory. I have installed Apollo Client and have
    app\util\apolloClient.ts
So, to wrap the app with the Apollo provider, does that go in layout.js or page.js? I don't have a 'src' folder.
Please advise.
Also, how do I load the configuration for GraphQL from JSON? |
Using Apollo client wrapper in Next.js 14 App router |
|typescript|next.js|graphql|apollo-client|apollo-server| |
I'm having trouble getting my Oracle connection to work with IIS. I've tried different methods, such as adjusting access permissions for authenticated users/IIS users on the Oracle client directory and adding ORACLE_HOME to my environment path. However, I'm still facing the error 'System.Data.OracleClient requires software version 8.1.7 or greater.' Just to note, my Oracle client installation includes the Basic Lite package + ODBC driver package from the link below.
https://www.oracle.com/database/technologies/instant-client/winx64-64-downloads.html
|
Oracle Managed Data Access Client can't work from IIS but work for local debug environment |
|c#|oracle|.net-4.8|oracle-manageddataaccess| |
The VCALENDAR object, as supplied, is not strictly valid: the EXDATE property must have one or more date-time or date values (see [3.8.5.1. Exception Date-Times](http://icalendar.org/iCalendar-RFC-5545/3-8-5-1-exception-date-times.html)).
Despite that, *vobject* parses the data successfully, ending up with an empty list of exceptions in the VEVENT object.
To get the list of occurrences, try:
>>> s = "BEGIN:VCALENDAR ... END:VCALENDAR"
>>> ical = vobject.readOne(s)
>>> rrule_set = ical.vevent.getrruleset()
>>> print(list(rrule_set))
[datetime.datetime(2011, 3, 25, 8, 0), datetime.datetime(2011, 3, 26, 8, 0), datetime.datetime(2011, 3, 27, 8, 0), datetime.datetime(2011, 3, 28, 8, 0), datetime.datetime(2011, 3, 29, 8, 0), datetime.datetime(2011, 3, 30, 8, 0)]
>>>
If we add a valid EXDATE property value, like
EXDATE:20110327T080000
re-parse the string, and examine the RRULE set again, we get:
>>> list(ical.vevent.getrruleset())
[datetime.datetime(2011, 3, 25, 8, 0), datetime.datetime(2011, 3, 26, 8, 0), datetime.datetime(2011, 3, 28, 8, 0), datetime.datetime(2011, 3, 29, 8, 0), datetime.datetime(2011, 3, 30, 8, 0)]
>>>
which is correctly missing the 27th March, as requested.
(Apologies for the 13 year delay in responding. I hope this might help someone, someday!) |
To install **cudatoolkit and cuDNN** via Anaconda, you can use the following command: `conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0`
Be aware that your **TensorFlow version** must be less than **2.11**.
To check that everything is installed properly:
1) In a command prompt, run the `nvidia-smi` command. If it reports "command not found", you must install the latest GPU driver.
2) Use this Python script to configure the GPU in your program:
[enter image description here][1]
[1]: https://i.stack.imgur.com/NonX0.png |
Unable to decode audio data and get sample rate |
|javascript|web-mediarecorder| |
Unfortunately, you cannot parametrize the name of a table (see [this post][1]). You will have to use Python string operations to do what you are attempting here.
[1]: https://stackoverflow.com/questions/474261/python-pysqlite-not-accepting-my-qmark-parameterization |
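A common workaround is to validate the table name against a whitelist and interpolate it with normal string formatting, while still using qmark parameters for the values. A sketch with the standard `sqlite3` module (the table and column names here are made up):

```python
import sqlite3

ALLOWED_TABLES = {"users", "orders"}  # whitelist of table names you control

def insert_row(conn: sqlite3.Connection, table: str, name: str) -> None:
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unexpected table name: {table!r}")
    # the table name is interpolated only after validation;
    # the value still goes through a ? placeholder
    conn.execute(f"INSERT INTO {table} (name) VALUES (?)", (name,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
insert_row(conn, "users", "alice")
```

The whitelist check is what keeps the f-string interpolation safe: only identifiers you wrote yourself can ever reach the SQL text.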
I have created a new disk using the code below:
```
disk_client = compute_v1.DisksClient()
request = compute_v1.InsertDiskRequest(project=PROJECT, zone=ZONE,disk_resource=disk)
operation = disk_client.insert(project=PROJECT, zone=ZONE, disk_resource=disk)
```
Using gcloud I can see my operation:
```
gcloud compute operations describe operation-1711690161380-614c5ec08dd85-8f424625-2dfb72a1 --zone europe-west3-b
endTime: '2024-03-28T22:29:43.198-07:00'
id: '2781620015981129566'
insertTime: '2024-03-28T22:29:21.946-07:00'
kind: compute#operation
name: operation-1711690161380-614c5ec08dd85-8f424625-2dfb72a1
operationType: insert
progress: 100
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west3-b/operations/operation-1711690161380-614c5ec08dd85-8f424625-2dfb72a1
startTime: '2024-03-28T22:29:21.957-07:00'
status: DONE
targetId: '5681157969428606814'
targetLink: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west3-b/disks/rst99-vtlnprd01-disk-1
user: my-service-account@my-project.iam.gserviceaccount.com
zone: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west3-b
```
In a different script, I want to get the result of the long-running operation using the `google.api_core.operations_v1.AbstractOperationsClient` class, but it always fails with a credentials issue!
Here is the code:
```
os.environ['GOOGLE_APPLICATION_CREDENTIALS']="..\\..\\my-service-account.json"
name = f"projects/{PROJECT}/zones/{ZONE}/operations/operation-1711690161380-614c5ec08dd85-8f424625-2dfb72a1"
credentials, projectId = google.auth.default()
print("credentials service account: ", credentials.service_account_email)
options = ClientOptions(api_endpoint="compute.googleapis.com/compute")
operation_client = AbstractOperationsClient(credentials=credentials, client_options=options)
operation = operation_client.get_operation(name)
```
But I have the following error:
```
credentials service account: my-service-account@my-project.iam.gserviceaccount.com
Traceback (most recent call last):
File "C:\Users\john.do\operation.py", line 22, in <module>
operation = operation_client.get_operation(name)
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\operations_v1\abstract_operations_client.py", line 495, in get_operation
response = rpc(
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\retry\retry_unary.py", line 293, in retry_wrapped_func
return retry_target(
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\retry\retry_unary.py", line 153, in retry_target
_retry_error_helper(
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\retry\retry_base.py", line 212,
in _retry_error_helper
raise final_exc from source_exc
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\retry\retry_unary.py", line 144, in retry_target
result = target()
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\grpc_helpers.py", line 79, in error_remapped_callable
return callable_(*args, **kwargs)
File "C:\Users\john.do\AppData\Local\Programs\Python\Python310\lib\site-packages\google\api_core\operations_v1\transports\rest.py", line 301, in _get_operation
raise core_exceptions.from_http_response(response)
google.api_core.exceptions.Unauthorized: 401 GET https://compute.googleapis.com/compute/v1/projects/my-project/zones/europe-west3-b/operations/operation-1711690161380-614c5ec08dd85-8f424625-2dfb72a1: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
```
I am using the same service account to create the disk and get the operation, can you help?
Thanks |
How can I get the long-running operation with google.api_core.operations_v1.AbstractOperationsClient
|python|google-cloud-platform|google-compute-engine|long-running-processes|google-compute-disk| |
null |
|r|classification|decision-tree|rpart| |
To make it work on newly enabled High Performance Orders Storage (and in legacy mode too), use the following:
```php
add_action('manage_shop_order_posts_custom_column', 'custom_orders_list_column_content', 20, 2);
add_action('manage_woocommerce_page_wc-orders_custom_column', 'custom_orders_list_column_content', 20, 2);
function custom_orders_list_column_content( $column, $order = null ){
if ( 'order_status' === $column ) {
if ( ! is_a($order, 'WC_Order') ) {
global $the_order; $order = $the_order;
}
echo '<ul class="orders-list-items-preview">';
// Loop through order items
foreach ( $order->get_items() as $item ) {
$product = $item->get_product(); // Get the WC_Product object
printf('<li>%s <label>%d</label> %s</li>',
$product->get_image( array(30, 30) ),
$item->get_quantity(), $item->get_name() );
}
echo '</ul>';
}
}
add_action('admin_head', 'orders_list_preview_css');
function orders_list_preview_css() {
echo "<style>
.orders-list-items-preview {
background-color: #eee;
padding: 8px 8px 0 5px;
border-radius: 4px;
}
.orders-list-items-preview li {
padding-left: 55px;
position: relative;
padding-bottom: 10px;
padding-right: 40px;
padding-top: 0;
font-size: 10px;
line-height: 11px;
min-height: 30px;
}
.orders-list-items-preview li label {
border: 1px solid gray;
width: 25px;
display: block;
text-align: center;
border-radius: 4px;
right: 5px;
top: 0px;
position: absolute;
font-size: 12px;
font-weight: bold;
padding: 5px 0;
}
.orders-list-items-preview img {
margin: 1px 2px;
position: absolute;
left: 0;
top: 0;
height: 30px;
max-height: 30px !important;
}
</style>";
}
```
Code goes in the functions.php file of your child theme (or in a plugin). Tested and works.
[![enter image description here][1]][1]
To display other order items details, see:
- https://stackoverflow.com/questions/45706007/get-order-items-and-wc-order-item-product-in-woocommerce-3/45706318#45706318
[1]: https://i.stack.imgur.com/jZVfv.png |
|javascript|php|clipboard| |
I created my token on BSC and I am currently trying to verify the code on BSCScan:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import "https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/token/ERC20/ERC20.sol";
contract Token is ERC20 {
constructor(uint256 initialSupply) ERC20 ("Token", "TKN"){
_mint(msg.sender,initialSupply);
}
}
But, when I try to verify, this error pops out:
Error! Unable to generate Contract Bytecode and ABI (General Exception, unable to get compiled [bytecode]) |
When your `m_data` object is constructed, its constructor first adds a *copy* of the `data1..data5` members to the `data` list *before* those members have been initialized. Those members are initialized *after* the `data` list is populated, and the pointers stored in the `data` list are never being updated.
So, when your loop later retrieves a stored pointer from `m_data->data[i]` and tries to dereference that pointer, you experience *Undefined Behavior*, resulting in a runtime crash.
Try this instead for your `Data` struct:
```
struct Data {
Data() {
data1 = new QBarDataRow;
data2 = new QBarDataRow;
data3 = new QBarDataRow;
data4 = new QBarDataRow;
data5 = new QBarDataRow;
//initialize the list here...
data = QList<QBarDataRow *>{ data1, data2, data3, data4, data5 };
}
~Data() {
delete data1;
delete data2;
delete data3;
delete data4;
delete data5;
}
public:
//don't initialize the list here...
//QList<QBarDataRow *> data{ data1, data2, data3, data4, data5 };
QList<QBarDataRow *> data;
QBarDataRow *data1;
QBarDataRow *data2;
QBarDataRow *data3;
QBarDataRow *data4;
QBarDataRow *data5;
};
```
Also, there's no real need to `new`/`delete` the `data1..data5` members manually at all, eg:
```
struct Data {
Data() {
data = QList<QBarDataRow *>{ &data1, &data2, &data3, &data4, &data5 };
}
// or:
// Data() : data{ &data1, &data2, &data3, &data4, &data5 } {}
public:
QList<QBarDataRow *> data;
QBarDataRow data1;
QBarDataRow data2;
QBarDataRow data3;
QBarDataRow data4;
QBarDataRow data5;
};
```
Or:
```
struct Data {
QList<QBarDataRow *> data{ &data1, &data2, &data3, &data4, &data5 };
QBarDataRow data1;
QBarDataRow data2;
QBarDataRow data3;
QBarDataRow data4;
QBarDataRow data5;
};
```
---
That being said, you also don't need the secondary loops at all, since you already know exactly which item to update:
```
connect(lineEdit, &QLineEdit::editingFinished, [&]() {
QPoint position = m_series->selectedBar();
int x = position.x(), y = position.y();
if (x >= 0 && x < m_data->data.size()) {
QBarDataRow *row = m_data->data[x];
if (y >= 0 && y < row->size()) {
(*row)[y].setValue(lineEdit->text().toFloat());
}
}
});
``` |
Simply put, nothing inside the onInitializeClient method runs, and the public class TutorialModClient appears grey in IntelliJ. I have all the necessary entries in my fabric.mod.json:
the json entries are
```
"entrypoints": {
"main": [
"net.example.vladimir.TutorialMod"
],
"client": [
"net.example.vladimir.TutorialModClient"
],
"fabric-datagen": [
"net.example.vladimir.TutorialModDataGenerator"
]
},
```
I tried looking at the Fabric API documentation, but it only covers the entry points, which I have already set up, and I could not find anything else that could help.
my code for MyModClient
```
package net.example.me;
import ...
public class TutorialModClient implements ClientModInitializer {
@Override
public void onInitializeClient() {
EntityRendererRegistry.register(ModEntities.PORCUPINE, PorcupineRenderer::new);
EntityModelLayerRegistry.registerModelLayer(ModModelLayers.PORCUPINE, PorcupineModel::getTexturedModelData);
}
}
```
I was following a tutorial by the YouTuber Kaupenjoe, just in case that is relevant.
Well, the current version of XSLT is 3.0; there you could use `xsl:where-populated` as follows:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="3.0"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
exclude-result-prefixes="#all"
expand-text="yes">
<xsl:strip-space elements="*"/>
<xsl:output indent="yes"/>
<xsl:mode on-no-match="shallow-copy"/>
<xsl:key name="lov" match="ValueGroup[@AttributeID='tec_att_ignore_basic_data_text']/Value" use="@QualifierID" />
<xsl:template match="ValueGroup[@AttributeID='prd_att_description']/Value[key('lov', @QualifierID)/@ID='Y']"/>
<xsl:template match="ValueGroup[@AttributeID='prd_att_description']">
<xsl:where-populated>
<xsl:next-match/>
</xsl:where-populated>
</xsl:template>
</xsl:stylesheet> |
I found the issue.
The culprit was this line
```dart
final int index = y * width + x;
```
where I wrongly assumed `width` to always be equal to `image.planes[0].bytesPerRow`,
but the values shown in my question already point out that they can differ (512 vs 320).
The additional length is padding that needs to be accounted for when converting the image. If this is not done, the stripes appear because the processing uses wrong y offsets.
Simply correcting the calculation to
```dart
final int bytesPerRowY = image.planes[0].bytesPerRow;
final int index = y * bytesPerRowY + x;
```
solved the issue! |
|xml|xslt|xslt-2.0| |
The Google Docs explain that the peak_qps Google Cloud Metric is the project's maximum per-second crypto request count but it always seems to read too high when I view it in the console using the Metrics Explorer. Am I mis-understanding what it represents?
If I do
- 3 decryptions every second the peak_qps is 40
- 1 decryption every second the peak_qps is 20
- 1 decryption every 5 seconds the peak_qps is 10
- 1 decryption every 10 seconds the peak_qps is 5
- 1 decryption every 20 seconds the peak_qps is 2.5
I was expecting 1 decryption every second to give me a qps of 1. And 1 decryption every 5 seconds would be a qps of 0.2. etc. |
I want to know why I get this JDBC connection error. I already added the SQL driver to the classpath.
When I try use init parameter
```
@WebServlet(urlPatterns="/addServlet",initParams= {@WebInitParam(name="dburl",value="com.mysql.cj.jdbc.Driver"),`
`@WebInitParam(name="dbUser",value="test"),@WebInitParam(name="dbPassword",value="test")})
```
And wrote this line
```java
public void init(ServletConfig config) {
try {
Class.forName("com.mysql.cj.jdbc.Driver");
System.out.println("dbURL"+ config.getInitParameter("dburl"));
System.out.println("dbUSER"+config.getInitParameter("dbUser"));
System.out.println(config.getInitParameter("dbPassword"));
con=DriverManager.getConnection(config.getInitParameter("dburl"),config.getInitParameter("dbUser"),config.getInitParameter("dbPassword"));
//con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb","test","test");
System.out.println(con);
con=DriverManager.getConnection("jdbc:mysql//localhost/mydb:3306","test","test");
Statement stmt=con.createStatement();
//con = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "test", "test");
con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb", "test", " test");
System.out.println(con);
} catch (SQLException e) {
e.printStackTrace();
} catch(ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
```
By normal connection with this `//con = DriverManager.getConnection("jdbc:mysql://localhost:3306/mydb","test","test");` I am able to connect to the database, but if I try using initParmas as below it gives me an error.
```
con=DriverManager.getConnection(config.getInitParameter("dburl"),config.getInitParameter("dbUser"),config.getInitParameter("dbPassword"));
```
Error as below
```none
java.sql.SQLException: No suitable driver found for com.mysql.cj.jdbc.Driver
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:706)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:229)
at com.test.user.servlet.CreateUserServlet.init(CreateUserServlet.java:44)
at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:944)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:901)
at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:649)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:115)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:482)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:673)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:391)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:896)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1744)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)
at java.base/java.lang.Thread.run(Thread.java:833)
Mar 21, 2024 4:44:36 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [com.test.user.servlet.CreateUserServlet] in context with path [/UserApp] threw exception
java.lang.NullPointerException: Cannot invoke "java.sql.Connection.createStatement()" because "this.con" is null
```
|
I am running into an issue where I am getting a CORS error when trying to fetch an Excel file from Firebase Storage.
This is the error I am getting:
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at <Firebase_Storage_File_URL>. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 200.
This is my 'cors.json' that I set in Google Cloud:
```
[
{
"origin": ["https://<Firebase_Project_ID>.web.app"],
"method": ["GET, POST"],
"responseHeader": ["Content-Type"],
"maxAgeSeconds": 3600
}
]
```
This is my CORS configuration in my 'index.js' file:
```
app.use(cors({
origin: 'https://<Firebase_Project_ID>.web.app',
credentials: true
}));
```
This is how I am trying to fetch the Excel file from Firebase Storage on the client-side:
```
let response;
try {
response = await fetch(filePath, {
method: 'GET',
mode: 'cors',
credentials: 'include'
});
console.log("Excel file fetch response: ", response);
} catch (error) {
console.error("Error fetching the Excel file:", error);
return "Error fetching the Excel file: " + error;
}
```
I have tried setting the 'cors.json' that I have shared above in Google Cloud. I have also tried changing the origin to just "*". However, I find that although it says my changes have been set in Google Cloud, I still get the same CORS error when trying to load the file.
In my 'index.js' CORS configuration (shared above), I have tried changing the origin between "https://<Firebase_Project_ID>.web.app" and "*".
On the client-side, initially I was just doing `response = await fetch(filePath)`. As I started experimenting with it to try and resolve the CORS issue, I added the 'method', 'mode', and 'credentials' attributes to the fetch call.
For the 'index.js' CORS configuration and client-side changes, I redeployed the app each time I made a change. However, each time, I still got the same CORS error.
Just to note: my web app is already hosted using Firebase and Google Cloud Run, if that has anything to do with this issue.
I have been struggling with this problem for a long time and have scoured the internet for help, but could not find anything that worked for me. I would really appreciate any help! |
From [\[bit.cast\]/2](https://timsong-cpp.github.io/cppwp/n4950/bit.cast#2):
> [...] A bit in the value representation of the result is indeterminate if it does not correspond to a bit in the value representation of _from_ [...].
> For each bit in the value representation of the result that is indeterminate, the smallest object containing that bit has an indeterminate value; the behavior is undefined unless that object is of unsigned ordinary character type or std::byte type. [...]
So, in your case, the behavior of the expression is undefined, but not because of the `std::bit_cast` itself.
Because the bits of `b` do not correspond to any bit of the value representation of `A{}`, they are indeterminate and the value of `b` itself is consequently indeterminate. Only because `b` has type `unsigned char`, this is not immediately undefined behavior.
However, for the comparison with `0 ==`, you then read the value of `b`, which causes undefined behavior because its value is indeterminate.
Of course, because you are using the comparison as the condition in a `static_assert`, the program will not have undefined behavior, but will be ill-formed instead, because the expression is not a constant expression. In a constant expression any lvalue-to-rvalue conversion on an indeterminate value is disallowed. The compiler needs to diagnose this and therefore only Clang is conforming. |
In OWASP ZAP what does the summary in the Token Generation and Analysis tool mean?
What do each of the tests mean? There is not very much info about this tool online.
[ZAP Token Generation and Analysis tool][1]
[1]: https://i.stack.imgur.com/pWU9q.png |
I have been dealing with WordPress for many years, but I have a delicate problem with an access rule that I want to store in .htaccess. I am currently using Apache 2.4.56.
- Initial situation 1: The web server should only be accessible from the outside for certain users.
- Initial situation 2: I have two given IP addresses.
The first client IP address should be able to fully access the web server, therefore:
```
require ip 192.168.124.1
```
Now I have a second client IP address that should also be allowed to access the web server, but only to a specific directory structure called /foo. This access rule should apply to /foo, but also to everything below it, e.g. /foo/bar. There should explicitly be no redirection within the shared folder structure, i.e. if someone wants to access /foo/bar, then they should also be allowed to access /foo/bar.
Since I use WordPress, the directory structure is only virtual, so I cannot store my own .htaccess files in the subdirectory structure.
Regards,
Besim |
Special access rule in an .htaccess file for IP addresses, authorized only for one directory structure |
|regex|apache|.htaccess| |
null |
1. I have a classic build pipeline
2. I have defined a non-secret (var1), and a secret var (var2) in variables tab
3. I have maven task that runs my java application
4. I have tried to access the value of the variables through getenv, getProperty and ProcessBuilder approach.
5. getenv and getProperty return null for both var1 and var2
6. ProcessBuilder approach returns the value of var1 but var2 is still null
Is there a way to access the value of the secret variables inside java app?
Thank you. |
I'm trying to connect my php web application to mysqli for a school project, but mysqli does not seem to be installed. When I try to run my localhost8080 I get:
[mysqli error message](https://i.stack.imgur.com/aYlsL.png)
I was under the impression from my instructor that mysqli is part of PHP and automatically installed with it. I have the extension uncommented in my .ini file. I have tried reinstalling php with:
```
brew tap shivammathur/php
brew reinstall shivammathur/php/php@8.0
```
This hasn't worked. I am a beginner to this sort of thing and would appreciate any info. I see a couple of solutions for Windows, but none for macOS (Sonoma).
After I plugged my iPhone into my mac, I started debugging my app. For the first few days, it all seemed smooth.
But after a few days, things changed.
The `Debug` -> `Attach to process` menu is empty, as shown below. I can't figure out why.
But the weirder thing is that when I debugged my app three months later, the processes on my iPhone were correctly listed on the `Attach to process` menu, but they all disappeared again after a few days.
My device configuration is: iPhone 6 - iOS 12 - macOS Sonoma 14.3.1 - Xcode 15.2
[image](https://i.stack.imgur.com/MfmJy.png)
I can't wait a few months before I have a chance to debug my app. So I had to ask for help here.
|
Consider something like [Google Input Tools](https://www.google.com/inputtools/) for instance, which can transliterate text in multiple languages using the Latin script. As you type a word, it interactively autocompletes it, showing you multiple possible results.
My question is, do the major OSs (i.e. Linux, macOS and Windows) enable developers to implement something like [Google Input Tools](https://www.google.com/inputtools/) system-wide? Such that a user can get the autocomplete service with a popup next to any text input field in any OS application?
The "autocomplete popup" needn't appear automatically, but could rather be enabled with a keyboard shortcut. Moreover, it needn't modify the target text field itself in real-time, but rather use a buffer of its own that can be easily copied to the original text field. Thus the ability to (1) detect the text field the cursor is in and (2) push text onto it could be paired with the UI of something like Spotlight on macOS to achieve decent results.
However, I lack any knowledge about OS user-interface programming and thus I'm unsure if what I'm trying to achieve here is _universally_ supported by all major OS. |
We're building an API and I'm working on the rewriterule of my htaccess to make our URL like this `example.com/student/user/parameter1/parameter2`.
Everything is working fine until I caught a problem:
The `parameter2` consists of random alphanumeric and special characters, and it also includes a forward slashes (`/`). So whenever we're running the endpoint and captures the parameters, the values are wrong because it's reading the slashes within the parameter
Example: `example.com/student/user/anna/a!p=/dfa`
- Expected result should be `parameter1` is *`anna`* and `parameter2` is *`a!p=/dfa`*
- What's happening is `parameter1` is *`a!p=`* and `parameter2` is *`dfa`*
I'm hoping somebody would give me an idea on how to fix this.
This is our current `.htaccess`:
```apache
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^student/user/(.*)/(.*)$ api/studentuser.php?name=$1&token=$2 [L]
``` |
I am working on a Fabric mod and I have issues with the onInitializeClient method
|java|json|intellij-idea|minecraft|fabric| |
null |
How can I get the screen reader to read controls that are not tab stops (e.g. static labels)? |
Saving a custom template with humanize function in it, fails with the following error: `Error: failed to save and apply Alertmanager configuration: template: mytemplate:2: function "humanize" not defined`.
Template contents are:
```
{{ humanize (0.123456) }}
```
Grafana version: 9.5+
|
Grafana error: function "humanize" not defined |
null |