C library and tools for interacting with the Linux GPIO character device (/dev/gpiochipN). |
I have a `JComboBox` which displays items which are *not* Strings, although they do have a `toString()` method and can be displayed as Strings.
This works fine when the JComboBox is not editable, but if I make it editable then the selected item becomes a `java.lang.String`, which is not what we want! In addition, the String might not represent a valid value at all (we have a parser that can detect this).
What is the best way to set up a JComboBox so that:
1. It manages items of a custom (POJO) type
2. It is editable
3. When the user edits the field, the user's input can be parsed with a custom parser to see if it is the correct type
4. If the input doesn't parse, it can either be corrected or a default value used
5. The selected item will therefore always be of the correct class, not a Java `String` |
Similar issue, my workaround is using absolute positioning on the child like
```
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
```
I use a custom tailwind plugin to target children:
```
// In tailwind.config.js
plugins: [
    // Add support for targeting direct children with child:
    ({ addVariant }) => {
        addVariant('child', '& > *');
        addVariant('child-hover', '& > *:hover');
    },
],
```
Then on the scrollArea:
`child:px-4 child:absolute child:top-0 child:right-0 child:bottom-0 child:left-0` |
It's immensely hacky and time consuming, and so suitable only for a chart that you really care about, but you can annotate a plot with rectangles to produce the desired effect. First, you have to work out (by trial and error) where ggplot is placing each dodged bar's centre point (a manual process, but for factor variables it's not too painful). When you have that and the relevant maximum and minimum values (derived from the underlying plot) creating a df with the relevant rectangle coordinates and placing it over the plot (with a suitable alpha) seems to work. In the image (https://i.stack.imgur.com/Wk7jm.png), the orange bars have been partly covered by white rectangles with alpha 0.6, to de-emphasise them. The effect is quite nice IMO.
|
|c++|opengl|glsl|glm-math|memory-alignment| |
Polars provides the user with [`pl.Expr.fill_null`](https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.fill_null.html) to fill missing values. It is used as follows.
```python
import polars as pl
df = pl.DataFrame({
"a": [0, 1, 2, 3, None, 5, 6, None, 8, None],
"b": range(10),
})
(
df
.with_columns(
pl.col("a").fill_null(pl.col("b")).alias("a_filled"),
pl.col("a").fill_null(pl.col("b").shift()).alias("a_filled_lag"),
pl.col("a").fill_null(pl.col("b").mean()).alias("a_filled_mean"),
)
)
```
```
shape: (10, 5)
┌──────┬─────┬──────────┬──────────────┬───────────────┐
│ a ┆ b ┆ a_filled ┆ a_filled_lag ┆ a_filled_mean │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 ┆ f64 │
╞══════╪═════╪══════════╪══════════════╪═══════════════╡
│ 0 ┆ 0 ┆ 0 ┆ 0 ┆ 0.0 │
│ 1 ┆ 1 ┆ 1 ┆ 1 ┆ 1.0 │
│ 2 ┆ 2 ┆ 2 ┆ 2 ┆ 2.0 │
│ 3 ┆ 3 ┆ 3 ┆ 3 ┆ 3.0 │
│ null ┆ 4 ┆ 4 ┆ 3 ┆ 4.5 │
│ 5 ┆ 5 ┆ 5 ┆ 5 ┆ 5.0 │
│ 6 ┆ 6 ┆ 6 ┆ 6 ┆ 6.0 │
│ null ┆ 7 ┆ 7 ┆ 6 ┆ 4.5 │
│ 8 ┆ 8 ┆ 8 ┆ 8 ┆ 8.0 │
│ null ┆ 9 ┆ 9 ┆ 8 ┆ 4.5 │
└──────┴─────┴──────────┴──────────────┴───────────────┘
``` |
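To make the `shift()`-based fill concrete, here is a plain-Python sketch of what `fill_null(pl.col("b").shift())` computes — pairing each null in `a` with the previous value of `b` (the helper name is made up for this illustration, it is not a Polars API):

```python
def fill_null_with_shifted(a, b):
    # b shifted down by one: the first element has no predecessor
    shifted = [None] + list(b)[:-1]
    # take a's value when present, else the shifted b value
    return [av if av is not None else sv for av, sv in zip(a, shifted)]

a = [0, 1, 2, 3, None, 5, 6, None, 8, None]
b = range(10)
print(fill_null_with_shifted(a, b))  # [0, 1, 2, 3, 3, 5, 6, 6, 8, 8]
```

This reproduces the `a_filled_lag` column of the output above.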
Your loop in `Play.move_invaders()` can tell each invader to move, except that any invader that moves beyond the limit left or right will cause *all* invaders to move down, *and* that switches the direction, which changes the test for the other invaders (`if self.direction == 'right'` etc).
Your loop should first move all the invaders and then, if the limit was exceeded, call `move_all_down()`:
```
import tkinter
import timeit

tk = tkinter.Tk()
WIDTH = 1000
HEIGHT = 650
canvas = tkinter.Canvas(tk, width=WIDTH, height=HEIGHT, bg="black")
canvas.pack()

class Invader():
    def __init__(self, canvas, play, x, y):
        self.canvas = canvas
        self.play = play
        self.x_coord = x
        self.y_coord = y
        self.shape = canvas.create_rectangle(self.x_coord, self.y_coord, self.x_coord + 50, self.y_coord + 50, fill='green')
        self.direction = 'left'

    def move(self):
        # Returns True when this invader has reached a screen edge
        ret = False
        if self.direction == 'right':
            self.x_coord += 10
            if self.x_coord + 40 >= WIDTH:
                ret = True
        elif self.direction == 'left':
            self.x_coord -= 10
            if self.x_coord - 10 <= 0:
                ret = True
        canvas.coords(self.shape, self.x_coord, self.y_coord, self.x_coord + 50, self.y_coord + 50)
        return ret

class Play():
    def __init__(self, canvas):
        self.canvas = canvas
        self.invaderlist = []
        self.last_move_time = timeit.default_timer()
        self.move_delay = 0.3443434

    def move_invaders(self):
        current_time = timeit.default_timer()
        if current_time - self.last_move_time > self.move_delay:
            move_down = False
            for invader in self.invaderlist:
                im = invader.move()
                move_down = move_down or im
            self.last_move_time = current_time
            if move_down:
                self.move_all_down()

    def move_all_down(self):
        for invader in self.invaderlist:
            invader.y_coord += 10
            #canvas.coords(invader.shape, invader.x_coord, invader.y_coord, invader.x_coord + 50, invader.y_coord + 50)
            if invader.direction == 'left':
                invader.direction = 'right'
            elif invader.direction == 'right':
                invader.direction = 'left'

    def run_all(self):
        x_coords = [50, 120, 200, 270, 350, 420, 500, 570, 650, 720]
        y = 200
        for i in range(10):
            x = x_coords[i]
            invader = Invader(self.canvas, self, x, y)
            self.invaderlist.append(invader)
        while True:
            self.move_invaders()
            canvas.after(5)
            self.canvas.update()

play = Play(canvas)
play.run_all()
``` |
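The key ordering — move every invader first, then drop and reverse exactly once — can also be sketched as a tkinter-free, testable function (dict-based invaders and simplified boundary constants are assumptions of this sketch, not the code above):

```python
def step(invaders, width=1000, speed=10, size=50, drop=10):
    # First pass: move every invader and note whether any reached an edge
    hit_edge = False
    for inv in invaders:
        inv["x"] += speed if inv["dir"] == "right" else -speed
        if inv["x"] <= 0 or inv["x"] + size >= width:
            hit_edge = True
    # Second pass, only after the whole row has moved:
    # drop down once and reverse every invader's direction
    if hit_edge:
        for inv in invaders:
            inv["y"] += drop
            inv["dir"] = "left" if inv["dir"] == "right" else "right"
    return hit_edge

row = [{"x": 5, "y": 0, "dir": "left"}, {"x": 500, "y": 0, "dir": "left"}]
step(row)  # the first invader hits the left edge, so the whole row drops
```

Because the descent happens in a separate pass, an edge hit by one invader never changes the direction test for the invaders that have not moved yet — which is exactly the bug in the original per-invader loop.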
How to convert a List or Array to C Array in Kotlin/Native |
|kotlin-native|cinterop| |
I created this extension function that allocates a new array in the given `NativePlacement`, copies each item from the source collection into the newly allocated array, and returns that array.
```kotlin
import kotlinx.cinterop.*
import platform.posix.memcpy

inline fun <reified T : CVariable> Collection<T>.toCArray(np: NativePlacement = nativeHeap): CArrayPointer<T> {
val typeSize = sizeOf<T>()
val array = np.allocArray<T>(this.size)
forEachIndexed { index, item ->
memcpy(
interpretPointed<T>(array.rawValue + index * typeSize).ptr,
item.ptr,
typeSize.toULong()
)
}
return array
}
```
It might not be the greatest solution, but it is the best I was able to come up with.
|
The AsyncCollector is a very nice way of simplifying sending messages.
However, this involves having the logic in an Azure Function, whereas I prefer to keep my logic in services.
How can I inject an AsyncCollector into a service?
Paul |
Inject AsyncCollector into a service |
|azure|azureservicebus| |
Use Animations with navigations in WearOS |
|conditional-operator| |
I am having trouble fixing this error while performing SDM:
```
Error in UseMethod("evaluate") :
no applicable method for 'evaluate' applied to an object of class "data.frame"
```
I am trying to evaluate my Maxent model with this command:
```
e <- evaluate(pres_test, backg_test, xm, predictors)
```
but receive the above error. How can I solve this? |
|r| |
> I wanted to ask how I can use this schema to create a database file and prepopulate it.
The simplest way, rather than trying to interpret the saved schema, is to let Room do the interpretation by:
1. Successfully compiling the project
1. From the Android View of Android Studio locate the java (generated) folder/directory
1. Within the java (generated) folder find the class that has the same name as the `@Database` annotated class (*LocationDatabase in your case*) BUT is suffixed with _Impl (*in your case **`LocationDatabase_Impl`***).
1. Find the `createAllTables` method, this will have a line of code that executes the SQL to create all the tables. This SQL is exactly what Room expects and can be copied to whatever tool you are using to build the pre-packaged database.
1. Once populated and saved as a single file you can copy the file into the assets folder (which you may have to create)
1. (if there is a -wal file then the database has not been closed and you need to properly close it; e.g. with Navicat you have to close the application)
1. You then include the [`createFromAsset`][1] method in the databaseBuilder call
[1]: https://developer.android.com/reference/kotlin/androidx/room/RoomDatabase.Builder#createfromasset |
Man page and many other sources clearly say:
```
• Real user ID and real group ID. These IDs determine who owns the
process. A process can obtain its real user (group) ID using ge‐
tuid(2) (getgid(2)).
• Effective user ID and effective group ID. These IDs are used by the
kernel to determine the permissions that the process will have when
accessing shared resources
```
So it should be quite clear that
eUID controls the actual permissions.
But the following test says otherwise:
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main()
{
uid_t uid;
uid_t euid;
uid = getuid();
euid = geteuid();
printf("uid1=%i euid1=%i\n", uid, euid);
system("ls /root");
setreuid(euid, uid);
uid = getuid();
euid = geteuid();
printf("uid2=%i euid2=%i\n", uid, euid);
system("ls /root");
return 0;
}
```
Building and running the test:
```
cc -Wall tst.c
sudo chown root:root a.out
sudo chmod u+s a.out
./a.out
uid1=500 euid1=0
ls: cannot open directory '/root': Permission denied
uid2=0 euid2=500
Desktop Templates
```
So we see that with eUID=0 the listing
failed, but it succeeded when UID became 0
(and eUID was intentionally changed to 500).
So from this test it is quite clear that
the UID does the actual checking.
Can someone please make sense of that? |
You can write a more general wall-collision check by listing the wall segments as endpoint pairs and testing whether the snake's head is close to any of them. Note that a plain range test only matches points lying exactly on a segment (a vertical or horizontal segment has zero width in one axis), so a tolerance is added here:
```
def wallCollision():
    wall_segments = [
        ((-410, 390), (-400, 390)),  # Top horizontal segment
        ((-400, 390), (-400, 90)),   # Right vertical segment
        ((-400, 90), (-410, 90)),    # Bottom horizontal segment
        ((-410, 90), (-410, 390)),   # Left vertical segment
    ]
    x, y = snek_head.xcor(), snek_head.ycor()
    tolerance = 10  # how close the head may get before it counts as a hit
    for (x1, y1), (x2, y2) in wall_segments:
        if (min(x1, x2) - tolerance <= x <= max(x1, x2) + tolerance and
                min(y1, y2) - tolerance <= y <= max(y1, y2) + tolerance):
            return True
    return False
```
In this function, `wall_segments` contains the coordinates of the endpoints of each wall segment. We iterate over the segments and check whether the head's position falls within a segment's bounding box, padded by the `tolerance` (here, 10). If it does, the head is considered to be colliding with that wall segment.
This way, you don't need to check each coordinate separately, and the code remains clean and easy to maintain. Adjust the tolerance value as needed for your game. |
Vite reloading full page on every change |
I am using a Bootstrap responsive table. The problem I am facing is that text in a table column overflows. As far as I know, a responsive table adjusts its column widths automatically. But here I have an input field inside a td, and that input field's placeholder is overflowing. How can I fix this? **Placeholder: Drag and Drop here**
**Note: This problem happens on mobile devices only.**
```
<div class="table-responsive border">
<table class="table" id="dataTable" width="100%" style="overflow-x:auto">
<thead>
<tr>
<th>Document Type</th>
<th class="text-center">Upload</th>
<th>Open</th>
<th>Resolved</th>
<th>Not Required</th>
</tr>
</thead>
<tbody id="missingBody">
<tr>
<td>Front and Back - Passport/Driver’s License/Photo Car</td>
<td>
<div class="file-field">
<div class="file-path-wrapper"><input type="file" name="file1" missval="dcn_56" multiple=""><input style="vertical-align:top;padding-bottom:8px;" class="file-path validate" type="text" id="dcn_56" doctype="56" multiple="" placeholder="Drag and Drop files here"></div>
</div>
</td>
<td><input type="radio" id="open56" missinginfoid="56" checked="" value="71" name="Missing0" class="custom-control-input"><label class="custom-control-label" for="open56"></label></td>
<td><input type="radio" id="resolve56" missinginfoid="56" value="72" name="Missing0" class="custom-control-input"><label class="custom-control-label" for="resolve56"></label></td>
<td><input type="radio" id="close56" missinginfoid="56" value="73" name="Missing0" class="custom-control-input"><label class="custom-control-label" for="close56"></label></td>
</tr>
</tbody>
</table>
</div>
```
**Output:** https://ibb.co/TrCSrk4
The output should show "Drag and Drop here" on a single line. |
Editing non-String values in JComboBox |
|java|swing|parsing|jcombobox| |
I'm new to React Native, and I'm creating a simple test app. I followed the Firebase documentation and inserted the dependencies and the google-services.json file. The problem is that now I would like to read data from my Realtime Database, and following the documentation I did something like this:
```
...
import { firebase } from '@react-native-firebase/app';

const Categories = () => {
    const reference = firebase
        .app()
        .database('https://testapp-app-default-rtdb.euorpe-west1.firebasedatabase.app/')
        .ref('/categories');
...
```
But when I run `npx expo start` from the terminal to test the application on my Android phone, the console gives me this error:
`ERROR Error: You attempted to use a firebase module that's not installed on your Android project by calling firebase.app().
Ensure you have:
1) imported the 'io.invertase.firebase.app.ReactNativeFirebaseAppPackage' module in your 'MainApplication.java' file.
2) Added the 'new ReactNativeFirebaseAppPackage()' line inside of the RN 'getPackages()' method list.`
How can I solve this so I can test my app with the database?
Testing with database
|
I'm trying to build a game with Vue 3, Phaser, and Spine. I've used the official Phaser + Vue template, installed the spine plugin, and imported it.
https://phaser.io/news/2024/02/official-phaser-3-and-vue-3-template
this is the main.js file :
```
import { Lootbox } from './scenes/Lootbox';
import Phaser from 'phaser';
import {SpinePlugin} from "@esotericsoftware/spine-phaser"
// Find out more information about the Game Config at:
// https://newdocs.phaser.io/docs/3.70.0/Phaser.Types.Core.GameConfig
const config = {
type: Phaser.AUTO,
width: 1024,
height: 768,
parent: 'game-container',
backgroundColor: '#028af8',
plugins: {
scene: [
{ key: 'spine.SpinePlugin', plugin: SpinePlugin, mapping: 'spine' }
]
},
scene: [game]
};
const StartGame = (parent) => {
return new Phaser.Game({ ...config, parent });
}
export default StartGame;
```
and this is the scene file :
```
import Phaser from 'phaser';
export class Lootbox extends Phaser.Scene {
constructor() {
super('Lootbox');
}
preload() {
this.load.spineBinary("man-data", "assets/man.skel");
this.load.spineAtlas("man-atlas", "assets/man.atlas");
console.log("scene loaded ")
}
create() {
const manAnimation = this.add.spine(333, 333, 'man-data', "idle", true);
manAnimation.animationState.setAnimation(0, "in", true)
}
}
```
Now I see my atlas and skel files (and the png) loaded in the network tab, but I keep getting this error:
Uncaught TypeError: Cannot read properties of undefined (reading 'data')
on this line:
```
const lootbox = this.add.spine(333, 333, 'lootbox-data', "idle", true);
```
I've tried using vanilla JS and it works there, but I want to build the project with Vue 3, so the assets themselves are fine.
From what I understand, I may be initializing spine too early? |
Vue 3, Phaser and spine.js - Uncaught TypeError: Cannot read properties of undefined (reading 'data') |
How do I achieve a combined line limit for two Text views in a VStack in SwiftUI? |
I have recently been using WDL to build a bioinformatics pipeline for our facility.
The pipeline runs successfully. However, when I re-run it, it always restarts, reprocesses everything, and creates new results in a separate folder.
Please check the screenshot below:

The phenomenon, my explanation, and what I expect can be summarized as follows:
1. Phenomenon: when we trigger the same WDL pipeline 5 times, 5 full sets of results are created (the random folders shown in the screenshot).
2. Explanation: the oldest random folder "e5e79fc1-e3ff-4b63-ba9d-ca1151b892fa" actually already contains all the results of a successful run.
3. What I expect: no matter how many times the WDL pipeline is triggered, once successful results exist in the root directory, the pipeline should simply stop and do nothing.
I used to build pipelines with Snakemake, and Snakemake simply stops if all results already exist. I hope WDL can do the same.
How can I make the WDL pipeline stop automatically when successful results already exist in the root directory? Please help.
Thank you so much |
|vuejs3|spine.js|phaser| |
To push to both remotes with one command, you can create an alias for it:
```
git config alias.pushall '!git push origin devel && git push github devel'
```
With this, when you use the command `git pushall`, it will update both repositories. |
```r
Rscript.exe -e "1"
# NULL
# [1] 1
```
`Rscript.exe` always prints a NULL on Windows.
Is there any way to suppress it?
### Environment
- R: 4.3.3 |
Rscript prints a NULL on Windows |
|r|rscript| |
I am encountering an issue where I consume the contents of an enum by moving values out of it in a match. After doing this, I want to return both the computed value and another value based on which variant the enum is. However, I am getting the error `use of partially moved value: <variable name>`. I know I could move everything into one match statement, but that just isn't as clean in my more complicated case.
This is a minimum reproducible example of my issue. I want to do something like this:
```rust
enum Hand {
TwoPair(String, String),
ThreeOfAKind(String),
}
fn print_hand(hand: Hand) {
let value = match hand {
Hand::TwoPair(a, b) => a.parse::<i32>().unwrap() + b.parse::<i32>().unwrap(),
Hand::ThreeOfAKind(a) => a.parse::<i32>().unwrap(),
};
match hand {
Hand::TwoPair(_, _) => println!("Two pair: {}", value),
Hand::ThreeOfAKind(_) => println!("Three of a kind: {}", value),
}
}
```
but in order to get it to actually work right now, I would have to do something like:
```rust
fn print_hand_ugly(hand: Hand) {
match hand {
Hand::TwoPair(a, b) => {
let value = a.parse::<i32>().unwrap() + b.parse::<i32>().unwrap();
println!("Two pair: {}", value);
}
Hand::ThreeOfAKind(a) => {
let value = a.parse::<i32>().unwrap();
println!("Three of a kind: {}", value);
}
}
}
```
any help or tips would be appreciated. Copying the value is not possible in my situation, moving is very much the preferred case. [MRE in Rust Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=470e803c3458aa2a2c8f8cdf2ee6d228) |
You can use a CTE to get all the values of `column1` you're looking for, then filter the table:
**Schema (PostgreSQL v15)**
```sql
CREATE TABLE t (
    column1 INTEGER,
    column2 INTEGER
);

INSERT INTO t
    (column1, column2)
VALUES
    (1, 1),
    (1, 2),
    (2, 1),
    (2, 1),
    (3, 3),
    (4, 5);
```
---
**Query #1**
```sql
WITH cte AS (
    SELECT column1
    FROM t
    GROUP BY 1
    HAVING COUNT(DISTINCT column2) > 1
)
SELECT column1, column2
FROM t
JOIN cte USING (column1);
```
| column1 | column2 |
| ------- | ------- |
| 1 | 1 |
| 1 | 2 |
---
Alternatively you can create a list of the `column2` values (per column1) and unnest that, which avoids scanning the table a second time.
**Query #2**
```sql
WITH cte AS (
    SELECT column1, array_agg(column2) AS arr
    FROM t
    GROUP BY 1
    HAVING COUNT(DISTINCT column2) > 1
)
SELECT column1, column2
FROM cte, unnest(arr) as f(column2);
```
| column1 | column2 |
| ------- | ------- |
| 1 | 1 |
| 1 | 2 |
---
[View on DB Fiddle](https://www.db-fiddle.com/f/bjcjmvXmApVXq7ndpFyxru/1) |
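As an illustration, the first query's CTE pattern can be reproduced end-to-end with Python's stdlib `sqlite3` (SQLite dialect here rather than PostgreSQL, with `GROUP BY 1` written out as the column name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (column1 INTEGER, column2 INTEGER);
    INSERT INTO t (column1, column2) VALUES
        (1, 1), (1, 2), (2, 1), (2, 1), (3, 3), (4, 5);
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT column1
        FROM t
        GROUP BY column1
        HAVING COUNT(DISTINCT column2) > 1
    )
    SELECT column1, column2
    FROM t
    JOIN cte USING (column1)
""").fetchall()
print(sorted(rows))  # [(1, 1), (1, 2)]
```

Only `column1 = 1` has more than one distinct `column2` value, so only its rows survive the join.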
I believe the question is independent from Flutter.
Anyway, I see two possible paths:
- If a user can be either a Buyer or a Seller, just go for a single login and infer the role from the API response.
- If a user can be both, you can have separate logins in the same app (I don't like it, personally).
- OR you can make 2 apps from the same code base, using flavors (better, IMHO). Many apps use this approach when they have apps for users and for businesses (delivery apps paired with driver apps, hotel booking apps paired with hotel business apps, etc.). |
Here's the solution: don't have it read/write to the same directory as the dynamic library.
Instead, you have two options:
1. Grab the application's data directory using Flutter and pass it to the library over ffi (easier in my experience, ironically)
2. Use the NDK to get the application's data directory instead (but I think this requires some sort of context object, making it much less ergonomic)
I can't remember exactly where dynamic libraries are put, but think of it this way: if an app is able to read/write to that folder, what is stopping arbitrary code execution from taking place? You could have some malicious code write to a new .so file, and load that into the app, and bingo, you just got around Google Play's restrictions and malware protections. This probably isn't going to be possible without a rooted phone and distributing outside of one of the stores. |
I want to see the GUI immediately and initialize the application on an asynchronous thread.
For this aim I am using `QFuture`.
My code:
```
// main.cpp
int main(int argc, char *argv[]) {
    // create config and credentials
    QApplication app(argc, argv);
    LoginForm loginForm(config, credentials);
    int result = app.exec();
    return result;
}
```
Then in login form I am using QFuture:
```
void initLauncher(const Aws::Client::ClientConfiguration &config, const Aws::Auth::AWSCredentials &credentials) {
    // slow function
}

LoginForm::LoginForm(const Aws::Client::ClientConfiguration &config, const Aws::Auth::AWSCredentials &credentials,
                     QWidget *parent) : QWidget(parent), awsConfig(config), awsCredentials(credentials) {
    ...
    QFuture<void> future = QtConcurrent::run([this, &config, &credentials]() {
        initLauncher(config, credentials);
    });
    QFutureWatcher<void> *watcher = new QFutureWatcher<void>(this);
    connect(watcher, SIGNAL(finished()), this, SLOT(initializationFinished()));
    // delete the watcher when finished too
    connect(watcher, SIGNAL(finished()), watcher, SLOT(deleteLater()));
    watcher->setFuture(future);
    // creating buttons
    show();
}
```
where LoginForm is a QWidget class.
**Problem** -- I see the GUI only after the initLauncher() function is done.
How can I show the GUI before initLauncher() finishes? |
Why is QFuture not working asynchronously |
|c++|qt| |
This sounds like [`rsample::manual_rset()`](https://rsample.tidymodels.org/reference/manual_rset.html) might do what you want? |
I have a problem with my redirection.
```php
<?php
// Include the initialization files
require_once("../inc/init.inc.php");
require_once("../inc/haut.inc.php");
ob_start();
// Make sure the intervention id is passed as a URL parameter
if (isset($_GET['id_intervention']) && is_numeric($_GET['id_intervention']))
{
    $intervention_id = intval($_GET['id_intervention']);
    // Fetch the existing data from the database
    $selectStmt = $mysqli->prepare("SELECT heure_debut, heure_fin FROM intervention_site WHERE intervention_id = ?");
    $selectStmt->bind_param("i", $intervention_id);
    $selectStmt->execute();
    $result = $selectStmt->get_result();
    $row = $result->fetch_assoc();
    $heure_debut = $row['heure_debut'] ?? null;
    $heure_fin = $row['heure_fin'] ?? null;
    $selectStmt->close();
    // Extract the first characters to get the hour
    $heure_debut_affichage = substr($heure_debut, 0, 5); // "HH:MM" format
    // Check whether the form was submitted with a start time
    if ($_SERVER["REQUEST_METHOD"] == "POST" && isset($_POST['heure_debut']))
    {
        // Check the CSRF token
        if (!isset($_POST['csrf_token']) || $_POST['csrf_token'] !== $_SESSION['csrf_token'])
        {
            // Invalid CSRF token: handle the error (e.g. redirect or show an error message)
            header("Location: erreur.php");
            exit;
        }
        // Get the start time from $_POST
        $heure_debut = $_POST['heure_debut'];
        // Check whether an entry already exists in the intervention_site table
        $selectStmt = $mysqli->prepare("SELECT heure_debut FROM intervention_site WHERE intervention_id = ?");
        $selectStmt->bind_param("i", $intervention_id);
        $selectStmt->execute();
        $selectStmt->store_result();
        if ($selectStmt->num_rows > 0)
        {
            // An entry already exists: perform an update
            $sql = "UPDATE intervention_site SET heure_debut = ? WHERE intervention_id = ?";
            $stmt = $mysqli->prepare($sql);
            $stmt->bind_param("si", $heure_debut, $intervention_id);
            if ($stmt->execute())
            {
                // The start time was updated successfully
                // Set the status in the interventions table to "En cours"
                $statut = "En cours";
                $updateStatusStmt = $mysqli->prepare("UPDATE interventions SET statut = ? WHERE id_intervention = ?");
                $updateStatusStmt->bind_param("si", $statut, $intervention_id);
                if ($updateStatusStmt->execute())
                {
                    echo "Mis à jour table interventions 1 ok ";
                    header("Location: intervention_journaliere.php");
                    exit();
                }
                else
                {
                    echo "Erreur lors de la mise à jour du statut: " . $updateStatusStmt->error;
                }
                echo "Insertion intervention site 2ok ";
                header("Location: intervention_journaliere.php");
                exit();
                $updateStatusStmt->close();
            }
            else
            {
                echo "Erreur lors de la mise à jour de l'heure de début : " . $stmt->error;
            }
            $stmt->close();
        }
        else
        {
            // No entry exists: insert a new one
            $sql = "INSERT INTO intervention_site (intervention_id, heure_debut) VALUES (?, ?)";
            $stmt = $mysqli->prepare($sql);
            $stmt->bind_param("is", $intervention_id, $heure_debut);
            if ($stmt->execute())
            {
                // The start time was inserted successfully
                // Set the status in the interventions table to "En cours"
                $statut = "En cours";
                $updateStatusStmt = $mysqli->prepare("UPDATE interventions SET statut = ? WHERE id_intervention = ?");
                $updateStatusStmt->bind_param("si", $statut, $intervention_id);
                if ($updateStatusStmt->execute())
                {
                    echo "Mis à jour table interventions 2 ok ";
                    header("Location: intervention_journaliere.php");
                    exit();
                }
                else
                {
                    echo "Erreur lors de la mise à jour du statut: " . $updateStatusStmt->error;
                }
                echo "Insertion intervention site 2 ok ";
                header("Location: intervention_journaliere.php");
                exit();
                $updateStatusStmt->close();
            }
            else
            {
                echo "Erreur lors de l'insertion de l'heure de début : " . $stmt->error;
            }
            $stmt->close();
        }
    }
    // Generate a CSRF token and store it in the session
    $csrf_token = bin2hex(random_bytes(32));
    $_SESSION['csrf_token'] = $csrf_token;
}
else
{
    // Add an error message or a redirect here if needed
    echo "Erreur : ID d'intervention non valide.";
    exit;
}
?>
<div class="container heure-arrivee-container"> <!-- Specific class for the container -->
    <h2>Validation de l'heure d'arrivée</h2>
    <form class="intervention-form heure-arrivee-form" method="POST" action="" enctype="multipart/form-data">
        <input type="hidden" name="intervention_id" value="<?php echo htmlspecialchars($intervention_id); ?>">
        <input type="hidden" name="csrf_token" value="<?php echo htmlspecialchars($csrf_token); ?>">
        <label for="heure_debut">Heure d'arrivée :</label>
        <input type="time" id="heure_debut" name="heure_debut" value="<?php echo htmlspecialchars($heure_debut_affichage); ?>" required>
        <input type="submit" value="Valider l'heure de début">
    </form>
    <a href="intervention_journaliere.php" class="return-link">Retour aux plannings journaliers</a>
</div>
<?php
ob_end_flush(); // Flush the buffered output and disable buffering
// Include the footer file
require_once("../inc/bas.inc.php");
?>
```
The messages "mise a jour table ok" and "insertion intervention" are displayed correctly.
On another page the redirection works, but here it doesn't. Do you know why?
I tried changing my redirection, but the result is the same. |
[enter image description here](https://i.stack.imgur.com/OUnLa.png)
I've been encountering some difficulties while trying to install React Native, and I'm seeking assistance on Stack Overflow. Whenever I attempt to set up React Native on my system, I get an error message that blocks the installation.
The error message I'm receiving is [Error: Cannot find module 'C:\Program Files\nodejs\node_modules\npm\node_modules\path-scurry\node_modules\lru-cache\dist\cjs\index.js']. Despite trying various troubleshooting methods, I haven't been able to resolve the issue.
Could someone please provide guidance on how to troubleshoot and resolve this installation problem? Any insights, tips, or suggestions would be greatly appreciated. Thank you in advance! |
I am facing a problem while installing React Native; can anyone give me a solution? |
|react-native|npm-install|npx| |
Wordpress version: 6.4.3
"timber/timber": "2.x-dev"
Starter theme: upstatement/timber-starter-theme
I understand you require Composer to install Timber and no longer need the plugin,
but how do I create a new custom page in Timber WordPress?
In the page attributes, I cannot see the template dropdown to select the template.

Did I miss something?
I did create contact.php
```
<?php
/**
* The main template file
* This is the most generic template file in a WordPress theme
* and one of the two required files for a theme (the other being style.css).
* It is used to display a page when nothing more specific matches a query.
* E.g., it puts together the home page when no home.php file exists
*
* Methods for TimberHelper can be found in the /lib sub-directory
*
* @package WordPress
* @subpackage Timber
* @since Timber 0.1
*/
$context = Timber::context();
$context['posts'] = Timber::get_posts();
$context['foo'] = 'bar';
$templates = array( 'contact.twig' );
Timber::render( $templates, $context );
```
also the contact.twig
```
{% extends "base.twig" %}
{% block content %}
{% include "partial/altHero.twig" %}
<h1>Contact</h1>
{% endblock %}
```
I am not sure what else I missed.
I also cannot install the Timber plugin; this is the error message when I try to install it:
**Plugin could not be activated because it triggered a fatal error.**
Your help is most appreciated
Cheers |
I have a table that displays the elements retrieved from an API in jQuery. I would like to retrieve the IDs of the rows so I can add a delete or an edit action.
```
$("#examplee").DataTable({
    "data": data,
    "columns": [
        { "data": 'numeroDossier' },
        { "data": 'nomOuRs' },
        { "data": 'tel' },
        { "data": 'province' },
        {
            data: null,
            // DataTables passes the row data as the first argument of render
            render: function (data) {
                return '<a><i class="btn fa-solid success fa-user-tag" onclick="get_assaj(\'' + data.id + '\')" title="selectionner assujetti"></i></a><a><i class="btn fa fa-solid fa-user-pen" style="color: #B197FC" onclick="edit_assaj(' + data + ')"></i></a>';
            }
        }
    ]
})
```
Using `pytest-mock` or `unittest.mock`, you can use the [mocker.patch.multiple][1] method to override the `__abstractmethods__` attribute. By doing that, you will be able to create an instance of the abstract class and test its non-abstract methods.
Example using pytest and pytest-mock:
```
# `mocker` is a fixture injected automatically by the pytest-mock plugin,
# so no import is needed in the test module.
def test_something(mocker):
    mocker.patch.multiple(ExampleClass, __abstractmethods__=set())
    instance = ExampleClass()
    ...
```
You could also use `unittest.mock.patch.multiple` as a decorator:
```
from unittest.mock import patch

@patch.multiple(ExampleClass, __abstractmethods__=set())
def test_something(self):
    instance = ExampleClass()
    ...
```
[1]: https://docs.python.org/3/library/unittest.mock.html#patch-multiple |
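For concreteness, here is a self-contained sketch of the decorator variant (the `Shape` class and its methods are made up for illustration):

```python
from abc import ABC, abstractmethod
from unittest.mock import patch

class Shape(ABC):  # hypothetical abstract class, for illustration only
    @abstractmethod
    def area(self) -> float: ...

    def describe(self) -> str:  # concrete method we want to test
        return f"A shape of type {type(self).__name__}"

@patch.multiple(Shape, __abstractmethods__=set())
def test_describe():
    instance = Shape()  # instantiation succeeds while the patch is active
    assert instance.describe() == "A shape of type Shape"

test_describe()
print("ok")
```

Outside the patched test, `Shape()` still raises `TypeError` as usual, so the abstract contract is only relaxed for the duration of the test.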
I'm new to React Native, and I'm creating a simple test app. I followed the Firebase documentation and added the dependencies and the google-services.json file. The problem is that now I would like to read data from my Realtime Database, and following the documentation I did something like this:
```
...
import { firebase } from '@react-native-firebase/app';

const Categories = () => {
  const reference = firebase
    .app()
    .database('https://testapp-app-default-rtdb.euorpe-west1.firebasedatabase.app/')
    .ref('/categories');
...
```
But when I run `npx expo start` from the terminal to test the application on my Android phone, the console gives me this error:

```
ERROR  Error: You attempted to use a firebase module that's not installed on your Android project by calling firebase.app().

Ensure you have:

1) imported the 'io.invertase.firebase.app.ReactNativeFirebaseAppPackage' module in your 'MainApplication.java' file.

2) Added the 'new ReactNativeFirebaseAppPackage()' line inside of the RN 'getPackages()' method list.
```

How can I solve this so I can test my app with the DB?
|
|npm|yarnpkg| |
{"OriginalQuestionIds":[9515704],"Voters":[{"Id":3959875,"DisplayName":"wOxxOm","BindingReason":{"GoldTagBadge":"google-chrome-extension"}}]} |
This applies for the rest of the current session:

```
SET work_mem = '256MB';
SHOW work_mem;
```

(Use `SET LOCAL work_mem = '256MB';` if you only want it for the current transaction.)

To make it the default for all future sessions of a user:

```
ALTER USER <username> SET work_mem TO '256MB';
```
This is considered a bug in TypeScript, as described in [microsoft/TypeScript#57816](https://github.com/microsoft/TypeScript/issues/57816). It doesn't look like a high priority to be fixed, but the good news is that it's listed as [Help Wanted](https://github.com/microsoft/TypeScript/labels/Help%20Wanted), meaning that they would consider accepting a pull request submitted by community members. So if you really want to see this fixed, you might want to try to fix it yourself and get the fix merged into the language.
On the other hand you might just want to work around it. In cases where [assignment narrowing](https://www.typescriptlang.org/docs/handbook/2/narrowing.html#assignments) of a [union type](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#union-types) gets it wrong (there's a whole huge issue at [microsoft/TypeScript#9998](https://github.com/microsoft/TypeScript/issues/9998) describing situations where TypeScript simply cannot properly anticipate whether a narrowing should or should not be reset), you can "opt out" of it by using a [type assertion](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-assertions) instead of a [type annotation](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-annotations-on-variables). That is, if you turn
```
let secondLastBlock: IBlock | null = null;
```
into
```
let secondLastBlock = null as IBlock | null;
```
then there's no assignment narrowing (since `null as IBlock | null` hides the fact that you're assigning `null`) and your code will start working as-is.
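As a minimal illustration (the `IBlock` shape below is just a stand-in for yours):

```typescript
interface IBlock { kind: string }

// Assertion instead of annotation: since the assigned `null` is hidden
// behind the assertion, no assignment narrowing to `null` takes place.
let secondLastBlock = null as IBlock | null;

for (const block of [{ kind: "a" }, { kind: "b" }]) {
  // With the annotated form, the narrowing machinery can get this wrong in
  // more complex control flow; the asserted form opts out of it entirely.
  secondLastBlock = block;
}

console.log(secondLastBlock?.kind); // "b" at runtime
```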
[Playground link to code](https://www.typescriptlang.org/play?ts=5.4.3&ssl=19&ssc=1&pln=20&pc=17#code/KYDwDg9gTgLgBASwHY2FAZgQwMbDgSQCEAbCbAazgG8AoOeubCUqALjgGcYpkBzAbhoBfGjSZIucVFwCCUTAE92RUhQDaAXTgBeOGroNaDY42bR2AIl5RgwJBYP0hAGkfU3xpi0s2AJg+MXNyMTei9zOAsFYGJSAHcAhiDjENDwtkiAI2IAV2BEpxoNUXFJdI4dPTcrGzsLV2MLP3rq6NiIBIaGC2y8h2KaYmB4DmBxXwAZTC4SMkpdJBzYuGmCWYo4AB84RdjBIfhiaZh18mVTrZ2l4krd4kF0aDgAClL4bLm4CHQpYFl5BQASncxjiAAsEEMXh4GB8KAA6dI6bS6UbjKYzVTkAD8iLMUDgADJCTD6HDyHiWMjdEdMXNcek3MDUiZyZSnrpyvpQqEALKYGBgxHASHPfmC+HyJC+CAAW2ewIAVHAACyA0lwYqBUTGNEQaUYk5Yyq0o1zQTGU0XXTkwTCIA) |
|android|android-studio| |
null |
I would like to use frida 16.2.1 to read a file from an Android 13 device. I tried the code below, but it reports the following error:
```
[*] [type]: error
[*] [description]: Error: expected an unsigned integer
```
```
function readFile(pathName) {
var String = Java.use("java.lang.String")
var Files = Java.use("java.nio.file.Files")
var Paths = Java.use("java.nio.file.Paths")
var URI = Java.use("java.net.URI")
var path = Paths.get(URI.create("file://" + pathName))
var fileBytes = Files.readAllBytes(path) // crash at this line
var content = String.$new(fileBytes)
return content
}
```
The crashing line is using a Path object; I'm not sure why Frida reports that it expects an unsigned integer.
It's much easier than you think.
The Python code performs an OpenSSL compliant encryption/decryption:
- The key derivation is `EVP_BytesToKey()` compatible. It uses MD5 as digest, an iteration count of 1 and an 8 bytes salt.
- For encryption/decryption, AES-256 in CBC mode with PKCS#7 padding is applied.
- The result is returned in the Base64 encoded OpenSSL format. The OpenSSL format consists of the concatenation of the ASCII encoding of `Salted__`, followed by the 8 bytes of salt, followed by the actual ciphertext.
CryptoJS is OpenSSL compatible (s. [here][1]) and supports the above encryption/decryption by default. All you need to do is pass the passphrase as string (s. [here][2]).
Note that when using `bytes_to_key()` instead of `bytes_to_key_md5()` in the Python code, SHA256 must be explicitly specified as digest on the CryptoJS side, as MD5 is the default.
CryptoJS sample code:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
// MD5 sample
var ciphertextFromPython = "U2FsdGVkX18lJwVCQIbRWqiIycIZg4LRZFHq+ORvygkE/umH1Il3m/yzgu3n9jVQhUikwXeURBW9yAjMawTk3A==";
var passphrase = "<secret passpharse value>";
var decrypted = CryptoJS.AES.decrypt(ciphertextFromPython, passphrase);
console.log(decrypted.toString(CryptoJS.enc.Utf8));
var plaintext = "secret message to be send over network";
var passphrase = "<secret passpharse value>";
var ciphertextForPython = CryptoJS.AES.encrypt(plaintext, passphrase);
console.log(ciphertextForPython.toString()); // e.g. U2FsdGVkX18/aYM99XaqbT/GjFDAuNlGBMd2Wd7Vuum120DkmeItS7tJndPLbxDyNzEUBF28AOG5pOwLGvpSSA==
// SHA-256 sample
CryptoJS.algo.EvpKDF.cfg.hasher = CryptoJS.algo.SHA256.create();
var ciphertextFromPython = "U2FsdGVkX189ft5ncnmOK/rJIB2fkdrfdWQCbf6DgbXkWMXw7yjX2oRXbDgZTIt4LibWBPamalnKCZl3l1VnWQ==";
var passphrase = "<secret passpharse value>";
var decrypted = CryptoJS.AES.decrypt(ciphertextFromPython, passphrase);
console.log(decrypted.toString(CryptoJS.enc.Utf8));
var plaintext = "secret message to be send over network";
var passphrase = "<secret passpharse value>";
var ciphertextForPython = CryptoJS.AES.encrypt(plaintext, passphrase);
console.log(ciphertextForPython.toString()); // e.g. U2FsdGVkX188W7G1Xis9KZogKpVCvCVbDQHc1AIul+CSTjS8m+zdc4pPQ9jlunIP4jbTD49q82GV9ic/4HVNNA==
<!-- language: lang-html -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/4.2.0/crypto-js.min.js"></script>
<!-- end snippet -->
In the CryptoJS code above, `ciphertextFromPython` was generated with the posted Python code and `ciphertextForPython` can be decrypted with the posted Python code.
The first example applies MD5 as digest (by default) and corresponds to the key derivation function `bytes_to_key_md5()`; the second applies SHA256 (explicitly specified) and corresponds to `bytes_to_key()`.
----------
Security:
The key derivation function `EVP_BytesToKey()`, especially in combination with the broken digest MD5 and an iteration count of 1, is considered insecure. Instead, a reliable key derivation function should be applied (at least PBKDF2, which is also supported by CryptoJS).
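For example, on the Python side a PBKDF2-based derivation needs only the standard library (the parameters below are illustrative, not a drop-in replacement for the posted `bytes_to_key()` functions):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2-HMAC-SHA256: salted, deliberately slow derivation,
    # unlike EVP_BytesToKey with MD5 and a single iteration
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations, dklen=32)

salt = os.urandom(8)
key = derive_key("my passphrase", salt)
print(len(key))  # 32 bytes, i.e. an AES-256 key
```

CryptoJS exposes an equivalent `CryptoJS.PBKDF2()`; both sides must of course agree on digest, salt, iteration count, and key length.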
Note that CryptoJS has been [discontinued][3]. The last version is 4.2.0.
[1]: https://cryptojs.gitbook.io/docs/#interoperability
[2]: https://cryptojs.gitbook.io/docs/#the-cipher-input
[3]: https://github.com/brix/crypto-js?tab=readme-ov-file#discontinued |
What really controls the permissions: the UID or the eUID?
|linux|credentials| |
I am working on a **Spring Boot** project. For the production environment, I am using **SQL Server** and **Azure App Service**. I followed these steps to configure my deployment.
My database url is-
```
jdbc:sqlserver://spring-sql-server.database.windows.net:1433;database=<database-name>;user=<username>@spring-sql-server;password=<password>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
```
1. Added all my secrets to the GitHub Repository Secrets.
[![enter image description here][1]][1]
2. Added environment variable configuration to my application.properties file.
[![enter image description here][2]][2]
When I build my project using GitHub Actions, I get this error:
> Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]: [PersistenceUnit: default] Unable to build Hibernate SessionFactory; nested exception is java.lang.RuntimeException: Driver com.microsoft.sqlserver.jdbc.SQLServerDriver claims to not accept jdbcUrl, ${SPRING_DATASOURCE_URL}
I have added the URL properly but am still getting this error. (I am not sure, but is it possible that the error is because of a '#' in my password?)
[1]: https://i.stack.imgur.com/zzFcn.png
[2]: https://i.stack.imgur.com/nI92D.png |
```
p1 = new Promise((res,rej)=>{
console.log("p1 setTimeout");
setTimeout(()=>{
res(17);
}, 10000);
});
p2 = new Promise((res,rej)=>{
console.log("p2 setTimeout");
setTimeout(()=>{
res(36);
}, 2000);
});
function checkIt() {
console.log("Started");
let val1 = this.p1;
console.log("P1");
val1.then((data)=>{console.log("P1 => "+data)});
let val2 = this.p2;
console.log("P2");
val2.then((data)=>{console.log("P2 => "+data)});
console.log("End");
}
checkIt();
```
My understanding of the above JavaScript code was:
1: In callback queue, p2's setTimeout will be first and then p1's setTimeout (and they will execute in FIFO manner)
2: Callback queue won't execute before microtask queue
3: In microtask queue, p1's callback function will first and then p2's callback function (and they will execute in FIFO manner)
4: Hence this should be a deadlock.
But instead I am getting this output:
1: p1 setTimeout
2: p2 setTimeout
3: Started
4: P1
5: P2
6: End
7: (After 2 seconds) P2 => 36
8: (After 10 seconds) P1 => 17
**Doubt:** How are the 7th and 8th lines of the output produced?
I have run the code and I am getting the output shown above.
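(For reference, a stripped-down sketch of the same pattern; it records the order in which the callbacks actually run:)

```javascript
const order = [];
const log = (v) => { order.push(v); console.log(v); };

// two timers registered in order, resolving out of order
const slow = new Promise((res) => setTimeout(() => res("slow"), 50));
const fast = new Promise((res) => setTimeout(() => res("fast"), 10));

slow.then(log); // registered first, fires second
fast.then(log); // registered second, fires first
log("End");     // synchronous code runs before any timer callback
```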
I would like to offload the following expensive call to a worker thread inside the loop:
```encoder.addFrame(ctx);```
Unfortunately, I have not managed to do this with Node.js worker threads so far; perhaps I have lacked the right approach. In my attempts, I was unable to pass either the encoder or the ctx object to the worker. Maybe my plan doesn't work at all with Node.js worker threads, I don't know. Does someone have a solution or know another way to run this process in the background?
Here is the complete code:
```
generate: function (cb) {
    const GIFEncoder = require('gif-encoder-2');
    const encoder = new GIFEncoder(Number(200), Number(200), 'neuquant', true);
    encoder.start();
    encoder.setRepeat(0);    // 0 for repeat, -1 for no-repeat
    encoder.setDelay(1000);  // frame delay in ms
    encoder.setQuality(10);  // image quality. 10 is default.

    const { createCanvas, registerFont } = require('canvas');
    const canvas = createCanvas(Number(200), Number(200));
    const ctx = canvas.getContext('2d');

    for (let i = 0; i < 10; i++) {
        ctx.beginPath();
        ctx.moveTo(0, 0);
        ctx.lineTo(200, 0);
        ctx.lineTo(200, 200);
        ctx.lineTo(0, 200);
        ctx.closePath();
        ctx.fillStyle = 'blue';
        ctx.fill();

        ctx.fillStyle = 'white';
        ctx.font = ['72px', 'Open Sans'].join(' ');
        ctx.textAlign = 'center';
        ctx.textBaseline = 'middle';
        ctx.fillText(i.toString(), 100, 84 + ctx.measureText(i.toString()).actualBoundingBoxAscent / 2);

        encoder.addFrame(ctx);
    }

    encoder.finish();
    return encoder.out.getData();
}
```
ImageView is null while inflating another activity's dialog in View MVC
|java|android|model-view-controller| |
I'm encountering an error when trying to run the **`dbt-dry-run`** command within a Databricks workflow. The error message I'm receiving is as follows:
```
CalledProcessError: Command 'b'\nif cd "/tmp/tmp-dbt-run-833842462680805" ; then\n set -x\n dbt-dry-run returned non-zero exit status 1.
+ dbt deps
10:32:57 Running with dbt=1.7.9
10:32:57 Warning: No packages were found in packages.yml
10:32:57 Warning: No packages were found in packages.yml
+ dbt seed
10:33:01 Running with dbt=1.7.9
10:33:03 Registered adapter: databricks=1.7.9
10:33:03 Unable to do partial parsing because saved manifest not found. Starting full parse.
10:33:06 Found 5 models, 3 seeds, 20 tests, 0 sources, 0 exposures, 0 metrics, 538 macros, 0 groups, 0 semantic models
10:33:06
10:33:06 Concurrency: 8 threads (target='databricks_cluster')
10:33:06
10:33:06 1 of 3 START seed file dbt.raw_customers ....................................... [RUN]
10:33:06 2 of 3 START seed file dbt.raw_orders .......................................... [RUN]
10:33:06 3 of 3 START seed file dbt.raw_payments ........................................ [RUN]
10:33:17 1 of 3 OK loaded seed file dbt.raw_customers ................................... [INSERT 100 in 10.91s]
10:33:17 2 of 3 OK loaded seed file dbt.raw_orders ...................................... [INSERT 99 in 11.16s]
10:33:17 3 of 3 OK loaded seed file dbt.raw_payments .................................... [INSERT 113 in 11.21s]
10:33:17
10:33:17 Finished running 3 seeds in 0 hours 0 minutes and 11.83 seconds (11.83s).
10:33:17
10:33:17 Completed successfully
10:33:17
10:33:17 Done. PASS=3 WARN=0 ERROR=0 SKIP=0 TOTAL=3
+ dbt-dry-run
Traceback (most recent call last):
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/bin/dbt-dry-run", line 8, in <module>
sys.exit(main())
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt_dry_run/__main__.py", line 5, in main
exit(app())
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt_dry_run/cli.py", line 117, in run
exit_code = dry_run(
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt_dry_run/cli.py", line 48, in dry_run
project = ProjectService(args)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt_dry_run/adapter/service.py", line 40, in __init__
dbt_project, dbt_profile = RuntimeConfig.collect_parts(self._args)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt/config/runtime.py", line 251, in collect_parts
profile = cls.get_profile(
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt/config/runtime.py", line 106, in get_profile
return load_profile(
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt/config/runtime.py", line 70, in load_profile
profile = Profile.render(
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt/config/profile.py", line 436, in render
return cls.from_raw_profiles(
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/dbt/config/profile.py", line 391, in from_raw_profiles
raise DbtProjectError("Could not find profile named '{}'".format(profile_name))
dbt.exceptions.DbtProjectError: Runtime Error
Could not find profile named 'jaffle_shop'
```
The command I ran:
1. [Databricks workflow dbt command](https://i.stack.imgur.com/EHuj6.png)
However, when I run the same project in the workflow command without the dry run, it works.
1. **Installation**: I've installed the **`dbt-dry-run`** Python library on the Databricks workflow cluster.
2. **Environment Setup**: The environment is configured to use Databricks, and I'm running the workflow within this environment.
3. **Project Configuration**: The project configuration seems to be correct, as other **`dbt`** commands execute without errors.
4. **Profile Configuration**: I suspect the issue might be related to the profile configuration. The error message indicates it can't find a profile named 'jaffle_shop'. *However, the profile is properly configured and the project runs successfully without the dry run option.*
5. **Cluster Libraries**: Ensure that all necessary libraries and dependencies are properly installed and accessible within the Databricks cluster environment.
Any insights or suggestions on how to resolve this issue would be greatly appreciated.
Thanks in advance |
Error when running dbt-dry-run command in Databricks workflow |
|python|sql|github|databricks|dbt| |
null |
The TextBlock's Margin can be bound to the ActualWidth of the TextBlock; setting the left margin to ActualWidth / -2 in a converter removes the need to hardcode the Width and Margin of the TextBlocks:
```
<Grid>
<Grid.Resources>
<converters:ActualWidthToNegativeHalfLeftMarginConverter x:Key="ActualWidthToNegativeHalfLeftMarginConverter" />
</Grid.Resources>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="2*" />
<ColumnDefinition Width="1*" />
<ColumnDefinition Width="3*" />
</Grid.ColumnDefinitions>
<TextBlock Grid.Column="1"
Text="textOfTextBlock1"
HorizontalAlignment="Left">
<TextBlock.Margin>
<Binding Path="ActualWidth" RelativeSource="{RelativeSource Self}" Converter="{StaticResource ActualWidthToNegativeHalfLeftMarginConverter}" />
</TextBlock.Margin>
</TextBlock>
<TextBlock Grid.Column="2"
Text="textOfTextBlock2"
HorizontalAlignment="Left">
<TextBlock.Margin>
<Binding Path="ActualWidth" RelativeSource="{RelativeSource Self}" Converter="{StaticResource ActualWidthToNegativeHalfLeftMarginConverter}" />
</TextBlock.Margin>
</TextBlock>
</Grid>
```
```
public class ActualWidthToNegativeHalfLeftMarginConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
return new Thickness((double)value / -2, 0, 0, 0);
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
        throw new NotImplementedException(); // one-way binding: ConvertBack is never called
}
}
``` |
|javascript|babeljs| |
The [handle class](https://www.mathworks.com/help/matlab/handle-classes.html) and its copy-by-reference behavior is the natural way to implement linkage in Matlab.
It is, however, possible to implement a linked list in Matlab without OOP. And an abstract list which does *not* splice an existing array in the middle to insert a new element -- as complained in [this comment](https://stackoverflow.com/questions/1413860/matlab-linked-list#comment23877880_1422443).
(Although I do have to use a Matlab data type somehow, and adding new element to an existing Matlab array requires memory allocation somewhere.)
The reason of this availability is that we can model linkage in ways other than pointer/reference. The reason is *not* [closure](https://en.wikipedia.org/wiki/Closure_(computer_programming)) with [nested functions](https://www.mathworks.com/help/matlab/matlab_prog/nested-functions.html).
I will nevertheless use closure to encapsulate a few *persistent* variables. At the end, I will include an example to show that closure alone confers no linkage. And so [this answer](https://stackoverflow.com/a/1421186/3181104) as written is incorrect.
At the end of the day, a linked list in Matlab is only an academic exercise. Matlab, aside from the aforementioned handle class and classes inheriting from it (called subclasses in Matlab), is purely copy-by-value. Matlab will optimize and automate how copying works under the hood. It will avoid deep copies whenever it can. That is probably the better take-away for the OP's question.
The absence of reference in its core functionality is also why linked list is not obvious to make in Matlab.
-------------
##### Example Matlab linked list:
```lang-matlab
function headNode = makeLinkedList(value)
% value is the value of the initial node
% for simplicity, we will require initial node; and won't implement insert before head node
% for the purpose of this example, we accommodate only double as value
% we will also limit max list size to 2^31-1 as opposed to the usual 2^48 in Matlab vectors
m_id2ind=containers.Map('KeyType','int32','ValueType','int32'); % pre R2022b, faster to split than to array value
m_idNext=containers.Map('KeyType','int32','ValueType','int32');
%if exist('value','var') && ~isempty(value)
m_data=value; % stores value for all nodes
m_id2ind(1)=1;
m_idNext(1)=0; % 0 denotes no next node
m_id=1; % id of head node
m_endId=1;
%else
% m_data=double.empty;
% % not implemented
%end
headNode = struct('value',value,...
'next',@next,...
'head',struct.empty,...
'push_back',@addEnd,...
'addAfter',@addAfter,...
'deleteAt',@deleteAt,...
'nodeById',@makeNode,...
'id',m_id);
function nextNode=next(node)
if m_idNext(node.id)==0
warning('There is no next node.')
nextNode=struct.empty;
else
nextNode=makeNode(m_idNext(node.id));
end
end
function node=makeNode(id)
if isKey(m_id2ind,id)
node=struct('value',id2val(id),...
'next',@next,...
'head',headNode,...
'push_back',@addEnd,...
'addAfter',@addAfter,...
'deleteAt',@deleteAt,...
'nodeById',@makeNode,...
'id',id);
else
warning('No such node!')
node=struct.empty;
end
end
function temp=id2val(id)
temp=m_data(m_id2ind(id));
end
function addEnd(value)
addAfter(value,m_endId);
end
function addAfter(value,id)
m_data(end+1)=value;
temp=numel(m_data);% new id will be new list length
if (id==m_endId)
m_idNext(temp)=0;
else
m_idNext(temp)=temp+1;
end
m_id2ind(temp)=temp;
m_idNext(id)=temp;
m_endId=temp;
end
function deleteAt(id)
end
end
```
With the above .m file, the following runs:
```lang-matlab
>> clear all % remember to clear all before making new lists
>> headNode = makeLinkedList(1);
>> node2=headNode.next(headNode);
Warning: There is no next node.
> In makeLinkedList/next (line 33)
>> headNode.push_back(2);
>> headNode.push_back(3);
>> node2=headNode.next(headNode);
>> node3=node2.next(node2);
>> node3=node3.next(node3);
Warning: There is no next node.
> In makeLinkedList/next (line 33)
>> node0=node2.head;
>> node2=node0.next(node0);
>> node2.value
ans =
2
>> node3=node2.next(node2);
>> node3.value
ans =
3
```
`.next()` in the above can take any valid node `struct`, not just the node it came from. Similarly, `push_back()` etc. can be called from any node. A node cannot reference itself implicitly and automatically because a non-OOP [`struct`](https://www.mathworks.com/help/matlab/ref/struct.html) in Matlab does not have a `this` pointer or `self` reference.
In the above example, nodes are given unique IDs, a dictionary is used to map ID to data (index) and to map ID to next ID. (With pre-R2022 `containers.Map()`, it's more efficient to have 2 dictionaries even though we have the same key and same value type across the two.) So when inserting new node, we simply need to update the relevant next ID. (Double) array was chosen to store the node values (which are doubles) and that is the data type Matlab is designed to work with and be efficient at. As long as no new allocation is required to append an element, insertion is constant time. Matlab automates the management of memory allocation. Since we are not doing array operations on the underlying array, Matlab is unlikely to take extra step to make copies of new contiguous arrays every time there is a resize. [Cell array](https://www.mathworks.com/help/matlab/ref/cell.html) may incur less re-allocation but with some trade-offs.
Since [dictionary](https://www.mathworks.com/help/matlab/ref/dictionary.html) is used, I am not sure if this solution qualifies as purely [functional](https://en.wikipedia.org/wiki/Functional_programming).
------------
##### re: closure vs linkage
In short, closure does not confer linkage. Matlab's nested functions have access to variables in parent functions directly -- as long as they are not shadowed by local variables of the same names. But there is no variable passing. And thus there is no pass-by-reference. And thus we can't model linkage with this non-existent referencing.
I did take advantage of closure above to make a few variables persistent and shared, since scope (called [workspace](https://www.mathworks.com/help/matlab/matlab_prog/base-and-function-workspaces.html) in Matlab) being referred to means all variables in the scope will persist. That said, Matlab also has a [persistent](https://www.mathworks.com/help/matlab/ref/persistent.html) specifier. Closure is not the only way.
To showcase this distinction, the example below will not work because every time there is passing of `previousNode`, `nextNode`, they are passed-by-value. There is no way to access the original `struct` across function boundaries. And thus, even with nested function and closure, there is no linkage!
```lang-matlab
function newNode = SOtest01(value,previousNode,nextNode)
if ~exist('previousNode','var') || isempty(previousNode)
i_prev=m_prev();
else
i_prev=previousNode;
end
if ~exist('nextNode','var') || isempty(nextNode)
i_next=m_next();
else
i_next=nextNode;
end
newNode=struct('value',m_value(),...
'prev',i_prev,...
'next',i_next);
function out=m_value
out=value;
end
function out=m_prev
out=previousNode;
end
function out=m_next
out=nextNode;
end
end
``` |
Here is my test class; while testing it I am getting an error:
```
describe('some default check', () => {
let serverObj: http.Server;
let jwtToken: string;
beforeAll(async()=> {
setGracefulCleanup();
serverObj = beforeEach();
await request(serverObj)
.post('/api/authUser')
.set('Accept', 'application/json')
.send({accessToken: 'accessToken'})
.expect('Content-Type', /json/)
.expect((response: any) => {
jwtToken = response.body.data.token;
});
});
afterAll(() => {
serverObj = afterEach();
setGracefulCleanup();
});
test('it should get the correct default base class json response for no response from org',async()=>{
let sfSeedObject: SfConnectionManagerSpec;
sfSeedObject = new SfConnectionManagerSpec();
const value = SfConnectionManager.getConnectionByParams(sfSeedObject.sfConnectionObject);
const spySfConnectionManager = jest.spyOn(SfConnectionManager, 'getConnectionByParams').mockImplementation(()=> value);
console.log('Number of times connection is called:', spySfConnectionManager.mock.calls.length);
const connection = jest.spyOn(value.tooling,'query').mockImplementationOnce(()=> res);
console.log('Number of times connection is called:', connection.mock.calls.length);
console.log("are we getting res",res);
await request(serverObj)
.post('/api/sf/getDefaultBaseClassJSON')
.set('Accept', 'application/json')
.set('token', jwtToken)
.expect('Content-Type', /json/)
.expect((response: any) => {
expect (connection).toHaveBeenCalled();
expect(response.body.success).toBeTruthy();
expect(response.body.data[0].defaultclassName).toEqual('someRecodForm');
expect(response.body.data).toHaveLength(1)
expect(response.body.data[0].objects).not.toBeNull();
expect(response.body.data[0].objects).not.toBeUndefined()
});
})
});
```
> Here is the `res`:
```
export const res: any = {
queryLocator: null,
entityTypeName: 'LightningComponentBundle',
records: [ ]
};
```
> And here is the `SfConnectionManagerSpec`:
```
export class SfConnectionManagerSpec {
public sfConnectionObject = {
accessToken: 'access token',
instanceUrl: 'instanceURL'
};
}
```
> So my issue here is that I am getting the following error while running this test:
```
Expected number of calls: >= 1
Received number of calls: 0
59 | .set('token', jwtToken)
60 | .expect('Content-Type', /json/)
> 61 | .expect((response: any) => {
| ^
62 | expect (connection).toHaveBeenCalled();
63 | expect(response.body.success).toBeTruthy();
64 | expect(response.body.data[0].defaultclassName).toEqual('someRecodForm');
```
|
I'm using Elasticsearch 8.7.0 with `function_score` and some matching rules. The matching part works well when doing a non-boosted search. When I want to boost based on the terms parameter in my function as shown below, and the terms parameter contains more than one item, it stops working, i.e. I get a non-boosted search result.

I'm passing in a query param "boostedStringArray", which should be a string array. I'm using `["{boostedStringArray}"]` so ES understands it should be a string array; if I only use `[{boostedStringArray}]`, ES won't allow me to have zero-prefixed numbers.

From what I understand, it seems to be something with how the "boostedStringArray" value is structured, because if I just send one item it works well and a document having this "myPropertyToBoostOn" value is boosted. When I add multiple values, the function evaluation does not match the terms expression.

My boostedStringArray query param looks like below when sending one item:

    boostedStringArray=123

and when sending multiple items:

    boostedStringArray=123%2C234

where 123 and 234 are the values.

Judging by this, it seems that ES does not URL-decode the given parameter before putting the value into the boostedStringArray property. What can I do to fix this? Can I use an expression in my query JSON, or is there a better solution to achieve the same "personalized" approach? I know there are query rules in newer ES versions, but I can't upgrade.
```
...
"functions": [
    {
        "filter": {
            "terms": {
                "myPropertyToBoostOn": ["{boostedStringArray}"]
            }
        },
        "weight": 3
    }
],
"score_mode": "sum",
"boost_mode": "multiply"
...
```
|
Elasticsearch functional_score with parameter of type string array as input not working |
|elasticsearch| |
According to the article [Demystifying Firebase Auth Tokens](https://medium.com/@jwngr/demystifying-firebase-auth-tokens-e0c533ed330c) by a former Firebase team member:
> The refresh token is a standard OAuth 2.0 refresh token, which you can learn more about [here](https://auth0.com/learn/refresh-tokens). |
null |
An interactive rebase is overkill here. You can simply reset, then recommit.
```
git reset --soft HEAD~2
git commit -m "message describing changes in commits 2 and 3"
git push --force-with-lease
``` |
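For instance, on a throwaway repo (the paths and messages below are made up), the effect looks like this:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "commit 1"
git commit -q --allow-empty -m "commit 2"
git commit -q --allow-empty -m "commit 3"

git reset --soft HEAD~2   # move HEAD back two commits; changes stay staged
git commit -q --allow-empty -m "message describing changes in commits 2 and 3"

git log --oneline         # two commits remain: the combined one and "commit 1"
```

The `--force-with-lease` push is only needed afterwards because the rewritten history diverges from the remote.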
I'm trying to update the WSO2 challenge questions given the user's email by calling the REST API endpoint using RestTemplate:
```
String url = baseUrl + email + "/challenge-questions";
HttpEntity<Payload> request = new HttpEntity<Payload>(payload, headers);

try {
    ResponseEntity response = restTemplate.exchange(url, HttpMethod.PUT, request, String.class);
} catch (Exception e) {
    LOG.error(e);
}
```
Does anyone else get the following error even after providing the authorization token in the headers:
`org.springframework.web.client.HttpClientErrorException$Unauthorized: 401 : [no body]`
If so, how can I resolve this issue? |
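For reference, a 401 with no body from WSO2 IS usually means the `Authorization` header is missing or malformed rather than the payload being wrong. As a sanity check, this sketch shows how a Basic auth header value is built (the `admin`/`admin` credentials are only placeholders; use an account authorized for the account-recovery API):

```python
import base64

# Hypothetical credentials -- replace with an account that is allowed
# to call the WSO2 IS challenge-questions endpoint.
username = "admin"
password = "admin"

# Base64-encode "username:password" and prefix it with "Basic ",
# which is the value the Authorization header must carry.
token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
auth_header = f"Basic {token}"
print(auth_header)
```

In the Java code above, the equivalent would be setting that value on `headers` before constructing the `HttpEntity` (Spring's `HttpHeaders.setBasicAuth(username, password)` does the same encoding).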
How to resolve unauthenticated error after calling REST API endpoint to update the Challenge Questions in WSO2 v5.11.0? |
I installed `appium` using `npm install -g appium`, but when I run `appium --version` to check, I get:
```
appium --version
node:internal/modules/cjs/loader:1145
throw err;
^
Error: Cannot find module 'C:\Users\p_cra\Usersp_craAppDataRoamingnpmnode_modulesappiumlibmain.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:1142:15)
at Module._load (node:internal/modules/cjs/loader:983:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:142:12)
at node:internal/main/run_main_module:28:49 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v21.7.1
```
I don't know why the path to the directory seems to be duplicated at the beginning. I've tried uninstalling `node` and `appium` and re-installing them countless times, but it keeps giving me the same problem.
When I run:
```
npm ls -g --depth=0
C:\Users\p_cra\AppData\Roaming\npm
├── appium@2.5.1
├── cypress@13.6.4
├── p_cra@1.0.0 -> .\..\..\..
└── zx@7.2.3
```
I see `appium` listed as installed. |
null |
|javascript|three.js|html5-canvas| |
I created the cards in this tab using a loop, and the data comes from a database.
When I click on a show more button, the entire visible row of cards expands with it.
The other cards don't show their content, they just expand; their content only becomes visible when I click their own show more button.
But if a card is alone on a line, only that card expands.
```
import React, { useState } from "react";
import {
Box,
Card,
CardActions,
CardContent,
Collapse,
Button,
Typography,
Rating,
useTheme,
useMediaQuery,
} from "@mui/material";
import Header from "../../components/Header";
import { useGetProductsQuery } from "../../state/api";
import LinearProgress from "@mui/material/LinearProgress";
const Product = ({
_id,
name,
description,
price,
rating,
category,
supply,
stat,
}) => {
const theme = useTheme();
const [isExpanded, setIsExpanded] = useState(false);
return (
<Card
sx={{
backgroundImage: "none",
backgroundColor: theme.palette.background.alt,
borderRadius: "0.55rem",
}}
>
<CardContent>
<Typography
sx={{ fontSize: 14 }}
color={theme.palette.secondary[700]}
gutterBottom
>
{category}
</Typography>
<Typography variant="h5" component="div">
{name}
</Typography>
<Typography sx={{ mb: "1.5rem" }} color={theme.palette.secondary[400]}>
${Number(price).toFixed(2)}
</Typography>
<Rating value={rating} readOnly />
<Typography variant="body2">{description}</Typography>
</CardContent>
<CardActions>
<Button
variant="primary"
size="small"
onClick={() => setIsExpanded(!isExpanded)}
sx={{ backgroundColor: theme.palette.secondary[600] }}
>
See More
</Button>
</CardActions>
<Collapse
in={isExpanded}
timeout="auto"
unmountOnExit
sx={{
color: theme.palette.neutral[300],
}}
>
<CardContent>
<Typography>id: {_id}</Typography>
<Typography>Supply Left: {supply}</Typography>
<Typography>
Yearly Sales This Year: {stat[0].yearlySalesTotal}
</Typography>
<Typography>
Yearly Units Sold This Year: {stat[0].yearlyTotalSoldUnits}
</Typography>
</CardContent>
</Collapse>
</Card>
);
};
const Products = () => {
const { data, isLoading } = useGetProductsQuery();
const isNonMobile = useMediaQuery("(min-width: 1000px)");
return (
<Box m="1.5rem 2.5rem">
<Header title="PRODUCTS" subtitle="See your list of products." />
{data || !isLoading ? (
<Box
mt="20px"
display="grid"
gridTemplateColumns="repeat(4, minmax(0, 1fr))"
justifyContent="space-between"
rowGap="20px"
columnGap="1.33%"
sx={{
"& > div": { gridColumn: isNonMobile ? undefined : "span 4" },
}}
>
{data.map(
({
_id,
name,
description,
price,
rating,
category,
supply,
stat,
}) => (
<Product
key={_id}
_id={_id}
name={name}
description={description}
price={price}
rating={rating}
category={category}
supply={supply}
stat={stat}
/>
)
)}
</Box>
) : (
<>
<LinearProgress
color="primary"
fourcolor="false"
variant="indeterminate"
/>
</>
)}
</Box>
);
};
export default Products;
```
[this is the image of cards](https://i.stack.imgur.com/Ur2m8.png)
I expect only the clicked card to expand at a time. |
The entire column of cards expands with one click |
|javascript|reactjs|react-redux| |
null |
I have a parent-child one-to-many relationship between the Alerts and Matches tables. Grouping across these tables works fine with aggregate functions (count, max, etc.), but when I try to select the matched records to display them as a comma-separated string, I get the following error:
```
System.InvalidOperationException: The LINQ expression could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See https://go.microsoft.com/fwlink/?linkid=2101038 for more information.
```
This is my basic query
```
var groupedQuery = from a in _clientContext.Alerts
join m in _clientContext.AlertsMatches
on a.AlertID equals m.alertID
group new { a, m } by new
{
a.AlertID,
a.AlertDate,
a.AlertScore,
} into gr
select new
{
AlertId = gr.Key.AlertID,
AlertDate = gr.Key.AlertDate,
AlertScore = gr.Key.AlertScore,
ScenarioNames = gr.Select(x => x.m.Scenario.ScenarioName),
};
var query = from item in groupedQuery
select new AlertsDTO
{
AlertId = item.AlertId,
AlertDate = item.AlertDate,
Scenario = string.Join(", ", item.ScenarioNames)
};
```
Linq GroupBy and Concat |
|c#|entity-framework|linq| |
I created a GitHub repository with an empty main branch.
Then I created a local repo with git and ran the basic operations to push:
```
git init
```
for new empty repo
```
git add .
```
for adding all files in the local repo
```
git commit -m "first update"
git remote add origin <github repo link>.git
```
I've tried merge and rebase too....
```
git merge origin/main
git rebase origin/main
```
It was still unable to merge.
Please help with a solution. |
Unable to merge branch into the main branch on GitHub |
|git|github|cmd|repository|github-for-windows| |