instruction stringlengths 0 30k ⌀ |
|---|
IIUC, you can build a directed graph with [`networkx`](https://networkx.org), then find the [`shortest_path`](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.shortest_paths.generic.shortest_path.html#networkx.algorithms.shortest_paths.generic.shortest_path) between each node and `'0'`, then use that to map the `path_variable`:
```
import networkx as nx
G = nx.from_pandas_edgelist(df, source='m_eid', target='eid',
create_using=nx.DiGraph)
s = df.set_index('eid')['path_variable']
mapper = {n: '|'.join(s.get(x, '') for x in
nx.shortest_path(G, source='0',
target=n)[1:])
for n in df['eid'].unique()
}
df['complete_path'] = df['eid'].map(mapper)
```
Output:
```
eid m_eid level path_variable complete_path
0 1 0 0 0 0
1 2 1 1 0 0|0
2 3 1 1 1 0|1
3 4 2 2 0 0|0|0
4 5 2 2 1 0|0|1
5 6 2 2 2 0|0|2
6 7 3 2 0 0|1|0
7 8 3 2 1 0|1|1
8 9 3 2 2 0|1|2
9 10 3 2 3 0|1|3
10 11 4 3 0 0|0|0|0
11 12 4 3 1 0|0|0|1
12 13 10 3 0 0|1|3|0
```
Graph:
[![organization graph, networkx graphviz][1]][1]
If you already have unique values in `eid` you could avoid the mapper and use:
```
df['complete_path'] = ['|'.join(s.get(x, '') for x in
nx.shortest_path(G, source=n,
target='0')[-2::-1])
for n in df['eid']]
```
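If you'd rather avoid building a graph at all, the same paths can be computed with a plain parent-pointer walk. A minimal sketch, using a small hypothetical frame with the question's columns (the real `df` has more rows):

```python
import pandas as pd

# Small hypothetical frame with the question's columns
df = pd.DataFrame({
    'eid':           ['1', '2', '3', '4'],
    'm_eid':         ['0', '1', '1', '2'],
    'path_variable': ['0', '0', '1', '0'],
})

parent = df.set_index('eid')['m_eid'].to_dict()
label = df.set_index('eid')['path_variable'].to_dict()

def full_path(node):
    # walk up the parent chain until we leave the table (the root '0')
    parts = []
    while node in label:
        parts.append(label[node])
        node = parent[node]
    return '|'.join(reversed(parts))

df['complete_path'] = df['eid'].map(full_path)
print(df['complete_path'].tolist())  # ['0', '0|0', '0|1', '0|0|0']
```

This is O(depth) per node without memoization, so for deep hierarchies the `networkx` approach (or caching the paths) may still be preferable.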
---
To make it easier to understand, here is a more classical path with the nodes ids (not the path_variables):
```
mapper = {n: '|'.join(nx.shortest_path(G, source='0',
target=n)[1:])
for n in df['eid'].unique()
}
df['complete_path'] = df['eid'].map(mapper)
```
Output:
```
eid m_eid level path_variable complete_path
0 1 0 0 0 1
1 2 1 1 0 1|2
2 3 1 1 1 1|3
3 4 2 2 0 1|2|4
4 5 2 2 1 1|2|5
5 6 2 2 2 1|2|6
6 7 3 2 0 1|3|7
7 8 3 2 1 1|3|8
8 9 3 2 2 1|3|9
9 10 3 2 3 1|3|10
10 11 4 3 0 1|2|4|11
11 12 4 3 1 1|2|4|12
12 13 10 3 0 1|3|10|13
```
[1]: https://i.stack.imgur.com/1PZMQ.png |
null |
I'm doing a mass deployment to many users and I've created this script. One of its functions is to verify the machine's username so it matches the current path of the Linphone installation. Additionally, there is a block of content we need to write into this file. I created a plain text file to verify whether the script is writing to the file, but it isn't.
```
# Get the current user's name
$usuario = $env:USERNAME
# Check if Chocolatey is installed
$chocoPath = Get-Item -Path 'C:\ProgramData\chocolatey\bin\choco.exe' -ErrorAction SilentlyContinue
if (-not $chocoPath) {
# If not installed, download and install it
Write-Host "Chocolatey is not installed. Downloading and installing..."
Set-ExecutionPolicy Bypass -Scope Process -Force; iwr https://community.chocolatey.org/install.ps1 -UseBasicParsing | iex
$chocoPath = Get-Item -Path 'C:\ProgramData\chocolatey\bin\choco.exe' -ErrorAction SilentlyContinue
if ($chocoPath) {
Write-Host "Chocolatey installed successfully."
$env:Path += ";C:\ProgramData\chocolatey\bin"
} else {
Write-Host "Error installing Chocolatey."
exit
}
} else {
# If already installed, show a message
Write-Host "Chocolatey is already installed."
}
# Check if Linphone is installed
$linphonePath = Get-Item -Path 'C:\Program Files\Linphone' -ErrorAction SilentlyContinue
if (-not $linphonePath) {
# If not installed, install it using Chocolatey
Write-Host "Linphone is not installed. Installing with Chocolatey..."
choco install linphone -y
$linphonePath = Get-Item -Path 'C:\Program Files\Linphone' -ErrorAction SilentlyContinue
if ($linphonePath) {
Write-Host "Linphone installed successfully."
} else {
Write-Host "Error installing Linphone."
exit
}
} else {
# If already installed, show a message
Write-Host "Linphone is already installed."
}
# Start Linphone
Start-Process "C:\Program Files\Linphone\bin\linphone.exe" -WindowStyle Hidden
Start-Sleep -Seconds 8
# Extract the current user's email from the obtained path
if ($linphonePath) {
$pathcurrentuseremail = (Get-ItemProperty -Path Registry::HKCU\Software\Microsoft\OneDrive\Accounts\Business1 -Name "UserEmail" -ErrorAction SilentlyContinue).UserEmail
if ($pathcurrentuseremail) {
Write-Host "Current user's email: $pathcurrentuseremail"
} else {
Write-Host "Unable to find current user's email in the registry."
exit
}
}
# Path of the Linphone configuration file
if ($usuario) {
$rutaConfiguracion = "C:\Users\pedro\AppData\Local\linphone\prueba.txt"
} else {
Write-Host "Unable to get the current user's name."
exit
}
# Load the list of emails, passwords, and usernames from a CSV file
$listaUsuarios = Import-Csv -Path "C:\Users\pedro\OneDrive\Escritorio\DESKTOP\juan\SCRIPTS\extensions.csv"
# Search for the email in the loaded list
$usuarioEncontrado = $listaUsuarios | Where-Object { $_.Email -eq $pathcurrentuseremail }
if ($usuarioEncontrado) {
# Get the corresponding data
$username = $usuarioEncontrado.Username
$passwd = $usuarioEncontrado.Password
$displayname = $usuarioEncontrado.DisplayName
$extension = $usuarioEncontrado.Extension
# Content to add to the configuration file
$contenido = @"
[auth_info_0]
username=$username
passwd=$passwd
domain=192.168.0.1
[proxy_0]
reg_proxy=<sip:192.168.0.1;transport=udp>
reg_identity="$displayname" <sip:$extension@192.168.0.1>
quality_reporting_enabled=0
quality_reporting_interval=0
reg_expires=3600
reg_sendregister=1
publish=0
avpf=0
avpf_rr_interval=1
dial_escape_plus=0
use_dial_prefix_for_calls_and_chats=1
privacy=32768
push_notification_allowed=0
remote_push_notification_allowed=0
force_register_on_push=0
cpim_in_basic_chat_rooms_enabled=0
idkey=proxy_config_U9B64IIcxmY~wfM
publish_expires=-1
rtp_bundle=0
rtp_bundle_assumption=0
"@
# Add the content to the configuration file
Add-Content -Path $rutaConfiguracion -Value $contenido
# Check if the writing was successful
if (-not (Test-Path $rutaConfiguracion)) {
Write-Host "Error: Unable to write to the configuration file."
exit
} else {
Write-Host "Content added to the configuration file successfully."
}
} else {
Write-Host "User not found in the loaded list."
}
```
I'm trying to build an automation following these rules:
1: Identify the current user on the PC.
2: Verify whether Chocolatey is installed; if not, install it, and afterwards install Linphone through Chocolatey.
3: Retrieve the email linked to OneDrive on the PC, then match that email (and the other user properties) against a CSV file to complete the authentication ID and password loaded from the CSV.
4: Add the content to the configuration file, using the variables assigned in the script.
|
PowerShell Linphone Configuration |
|powershell|batch-file|scripting|voip|linphone| |
null |
null |
In my case, I followed [this advice][1] and changed this Emmet setting in VS Code to `false`:
```json
"emmet.triggerExpansionOnTab": false
```
I was receiving the error `Cannot read property 'value' of null`. This solved the problem immediately.
[1]: https://github.com/Microsoft/vscode/issues/20209#issuecomment-278428711 |
null |
I'm having trouble with Git. I'm trying to push the master branch, but I apparently have a problem with my new SSH key:
```
willy@DESKTOP-MH62RJV MINGW64 ~/OneDrive/Desktop/booki (master)
$ git push --force ssh master
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
Do you have any idea?
I also tried to clone it, with the same result:
```
willy@DESKTOP-MH62RJV MINGW64 ~/OneDrive/Desktop/booki (master)
$ git clone git@github.com:RodolpheACHY/booki.git
Cloning into 'booki'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
willy@DESKTOP-MH62RJV MINGW64 ~/OneDrive/Desktop/booki (master)
```
|
problem to push files on a repository git |
|git|ssh|git-push| |
null |
I am using [this package](https://pub.dev/packages/flutter_pixelmatching) and creating two instances of its PixelMatching class, which uses Dart FFI and some C++ code to perform feature matching with OpenCV. I initialize the two instances with different source images, against which they compare my camera feed.
However, both instances end up using the first image in their comparisons.
Here is where I initialize the matching with two different images:
[initializing the instances](https://i.stack.imgur.com/rhLIa.png)
This is part of the function where I call the methods to compare the image with my camera stream. The two instances are called alternately: the first in one frame, the second in the next.
[calling the methods](https://i.stack.imgur.com/i5lz9.png)
Am I creating the classes in some wrong way? I don't understand why the two instances can't hold different states. |
My website is displaying a blank page. When I inspect element, it says "You need to enable JavaScript to run this app.", but I know that my JS is already enabled on my browser.
I followed these instructions to deploy the project: https://github.com/gitname/react-gh-pages
Here is the repository: https://github.com/eric-mxrtin/weather-app-react
Here is the link to the site: https://eric-mxrtin.github.io/weather-app-react/?lang=en
Note: My project runs fine on localhost. |
EDIT 1 → 2024-03-30:
====================
- Included parameters in the lambda functions so the table names and columns may be injected by the user
- Updated the lambda functions with those parameters
- Even though I found a solution, I tested the whole program and it's really resource/memory intensive.
> There must be a way to optimize/simplify it
---
---
Objective:
----------
Calculate products' costs for a range of dates based on a Bill of Materials and purchase price history of each material (component)
----------
**Process:**
--------
Get products' components and quantities up to the purchase level and then multiply them by their purchase price in each month.
**Source data**
-----------
**TableBOM** has the mix of components needed in each product (Q = qty)
| Componente 1 | Q1 | Componente 2 | Q2 | Resultado |
|-----------------------|----|--------------------|----|-----------------------|
| Papel 1 | 1 | Tarjeta personal | 20 | Tarjeta x20 |
| Papel 2 | 2 | Tarjetón | 40 | Tarjetón x40 |
| Caja pe | 4 | | | Empaque pequeño |
| Cinta pe | 5 | | | Empaque pequeño |
| Separador pe | 6 | | | Empaque pequeño |
| Tarjeta x20 | 1 | Empaque pequeño | 1 | Tarjeta empaque pe |
| Tarjetón x40 | 1 | Empaque grande | 1 | Tarjetón empaque gr |
| Caja gr | 7 | | | Empaque grande |
| Cinta gr | 8 | | | Empaque grande |
| Nota | 1 | Empaque pequeño | 1 | Nota empaque pe |
| Tarjeta x20 | 1 | Empaque grande | 1 | Tarjeta empaque gr |
| Tarjetón empaque gr | 1 | Nota empaque pe | 2 | Tarjetón + nota |
| Papel 3 | 3 | | | Tarjetón |
| Divi gr | 9 | | | Empaque grande |
| Sobre 4 | 1 | Nota sola | 1 | Nota |
<br>
**TablePurchaseComponent** has the historical prices of each material (component)
| Fecha | Componente | Precio unitario |
|-----------|------------------|-----------------|
| 1/01/2023 | Caja gr | 100 |
| 1/01/2023 | Caja pe | 110 |
| 1/01/2023 | Cinta gr | 120 |
| 1/01/2023 | Cinta pe | 130 |
| 1/01/2023 | Divi gr | 140 |
| 1/01/2023 | Nota sola | 150 |
| 1/01/2023 | Papel 1 | 10 |
| 1/01/2023 | Papel 2 | 20 |
| 1/01/2023 | Papel 3 | 30 |
| 1/01/2023 | Separador pe | 190 |
| 1/01/2023 | Sobre 4 | 200 |
| 1/01/2023 | Tarjeta personal | 2 |
| 1/01/2024 | Caja gr | 200 |
| 1/01/2024 | Caja pe | 220 |
| 1/01/2024 | Cinta gr | 240 |
| 1/01/2024 | Cinta pe | 260 |
| 1/01/2024 | Divi gr | 280 |
| 1/01/2024 | Nota sola | 300 |
| 1/01/2024 | Papel 1 | 20 |
| 1/01/2024 | Papel 2 | 40 |
| 1/01/2024 | Papel 3 | 60 |
| 1/01/2024 | Separador pe | 380 |
| 1/01/2024 | Sobre 4 | 400 |
| 1/01/2024 | Tarjeta personal | 4 |
Tables in the sheet:
[![enter image description here][1]][1]
----------
**Lambda functions (as defined names):**
----------------------------------------
**fxProcessVal:**
Parameters → `lookup_val;component1_cols;component2_cols;result_col`
Code:
=LET(
comp_res; VSTACK(
FILTER(component1_cols; result_col = lookup_val);
FILTER(component2_cols; result_col = lookup_val)
);
comp_res_fil; FILTER(comp_res; CHOOSECOLS(comp_res; 2) <> "");
lookup_col; IFNA(EXPAND(lookup_val; ROWS(comp_res_fil)); lookup_val);
IFERROR(HSTACK(lookup_col; comp_res_fil); "")
)
**fxProcessCompRow:**
Parameters → `data;lookup_row;component1_cols;component2_cols;result_col`
Code:
=LET(
sourceData; fxProcessVal(
INDEX(data; lookup_row; 2);
component1_cols;
component2_cols;
result_col
);
res_val; INDEX(data; 1; 1);
res_col; IFNA(EXPAND(res_val; ROWS(sourceData)); res_val);
sourceRes; HSTACK(res_col; DROP(sourceData; 0; 1));
IF(
sourceData <> "";
FILTER(sourceRes; CHOOSECOLS(sourceRes; 2) <> "");
CHOOSEROWS(data; lookup_row)
)
)
**fxProcessComp:**
Parameters → `source;component1_cols;component2_cols;result_col`
Code:
=LET(
seq; SEQUENCE(ROWS(source));
reducer; REDUCE(
"";
seq;
LAMBDA(acc; curr;
VSTACK(
acc;
IFNA(
fxProcessCompRow(source; curr; component1_cols; component2_cols; result_col);
HSTACK(""; ""; "")
)
)
)
);
temp_res; DROP(reducer; 1);
temp_res
)
**fxCompCost:**
Parameters → `comp;date`
Code:
=IFNA(
INDEX(
unitprice_col;
MATCH(
MAXIFS(purchasedate_col; purchasedate_col; "<=" & date; purchasecomponent_col; comp) &
comp;
purchasedate_col & purchasecomponent_col;
0
)
);
0
)
**prox:**
Parameters → `seed;component1_cols;component2_cols;result_col`
Code:
=LET(
res; IF(
COUNTA(seed) = 1;
fxProcessVal(seed; component1_cols; component2_cols; result_col);
fxProcessComp(seed; component1_cols; component2_cols; result_col)
);
comp; CONCAT(seed) = CONCAT(res);
IF(comp; seed; prox(res; component1_cols; component2_cols; result_col))
)
----------
Reference:
[![enter image description here][2]][2]
Formulas
--------
Formula to explode the BOM list (and disaggregate each product's materials and the quantities needed):
=UNIQUE(DROP(REDUCE("";TableBOM[Resultado];LAMBDA(acc;curr;VSTACK(acc;prox(curr;TableBOM[[Componente 1]:[Q1]];TableBOM[[Componente 2]:[Q2]];TableBOM[Resultado]))));1))
[![enter image description here][3]][3]
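For intuition, the recursion those lambda functions implement is an ordinary BOM explosion. A rough Python equivalent over a hypothetical subset of TableBOM (names and quantities taken from the sample tables above):

```python
# Hypothetical subset of TableBOM: result -> [(component, qty), ...]
bom = {
    "Tarjeta x20":        [("Papel 1", 1), ("Tarjeta personal", 20)],
    "Empaque pequeño":    [("Caja pe", 4), ("Cinta pe", 5), ("Separador pe", 6)],
    "Tarjeta empaque pe": [("Tarjeta x20", 1), ("Empaque pequeño", 1)],
}

def explode(item, qty=1):
    """Yield (purchase-level component, total quantity) pairs for `item`."""
    if item not in bom:          # leaf: a purchased material, stop recursing
        yield (item, qty)
        return
    for comp, q in bom[item]:
        yield from explode(comp, qty * q)

print(dict(explode("Tarjeta empaque pe")))
# {'Papel 1': 1, 'Tarjeta personal': 20, 'Caja pe': 4, 'Cinta pe': 5, 'Separador pe': 6}
```

The `prox` lambda plays the role of the recursion here, and `fxProcessComp` plays the role of looping over the rows of the partially expanded list.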
The dates to calculate the costs of each product are dynamically set in a range using this formula:
=DATE(YEAR(Q1);SEQUENCE(1;DATEDIF(Q1;Q2;"M")+1;MONTH(Q1);1);1)
[![enter image description here][4]][4]
# STUCK HERE #
I would like to use Q3 (the date range as a spilled range) and calculate the cost for each product in each month.
Currently the SUMPRODUCT calculates the total across all the months (dates in the range).
I've tried BYCOL and a combination of MAP and SEQUENCE but haven't figured it out.
Any help would be appreciated.
[Link][5] to read-only sample file (no macros)
=LET(
data;L4#;
date;Q3#;
res;INDEX(data;;1);
comp;INDEX(data;;2);
q;INDEX(data;;3);
lab;UNIQUE(res);
cost;q*fxCompCost(comp;date);
rt;MAP(lab;LAMBDA(a; SUMPRODUCT((res=a)*cost)));
temp;HSTACK(lab;rt);
temp
)
[![enter image description here][6]][6]
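The price lookup that `fxCompCost` performs (latest purchase price on or before each date) can be sketched like this; the data is a hypothetical slice of TablePurchaseComponent:

```python
import bisect

# Hypothetical slice of TablePurchaseComponent: component -> sorted (date, price)
prices = {
    "Papel 1": [("2023-01-01", 10), ("2024-01-01", 20)],
}

def unit_price(component, date):
    hist = prices.get(component, [])
    # index of the last purchase dated on or before `date`
    # (ISO date strings sort correctly as plain strings)
    i = bisect.bisect_right([d for d, _ in hist], date) - 1
    return hist[i][1] if i >= 0 else 0

print(unit_price("Papel 1", "2023-06-01"))  # 10
print(unit_price("Papel 1", "2024-02-01"))  # 20
```

This mirrors the `MAXIFS` + `MATCH` pair in `fxCompCost`: `MAXIFS` finds the latest qualifying date, and `MATCH` on the concatenated date & component retrieves the price row.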
Desired result:
[![enter image description here][7]][7]
[1]: https://i.stack.imgur.com/y8pbs.png
[2]: https://i.stack.imgur.com/Gfr2r.png
[3]: https://i.stack.imgur.com/89Ha0.png
[4]: https://i.stack.imgur.com/z4LmB.png
[5]: https://1drv.ms/x/s!ArAKssDW3T7w1sQerWmJeBduNhgrjQ?e=qXhQsL
[6]: https://i.stack.imgur.com/9CwaZ.png
[7]: https://i.stack.imgur.com/CWiqc.png |
How to build an AOSP 6.0 img and run the emulator with that img on a Mac M1 device?
**In a word, I want to compile my own Android 6.0 image and then use this image to run the emulator on a Mac M1.**
I want to learn AOSP. I have the aosp-6.0.0_r1 and aosp-6.0.0_r6 source code. I can build the code successfully and get the resulting img files on an Intel64 Linux device.
- On the Intel64 device, the built images can be run by the emulator.
My build environment is:
- Debian 12
- [Docker container ubuntu 14.04](https://android.googlesource.com/platform/build/+/refs/heads/main/tools/docker/ "https://android.googlesource.com/platform/build/+/refs/heads/main/tools/docker/")
The build result of aosp-6.0.0_r1:
- When the build succeeds inside Docker, I exit Docker.
- On Debian, my `aosp.zshrc` config is as follows. **attachments: `aosp.zshrc.sh`**
- ```
# AOSP source path env
export AOSP_ROOT=/Volumes/Beyourself/AOSP
export AOSP_5=$AOSP_ROOT/android-5.0.1_r1
export AOSP_5_OUT=$AOSP_ROOT/android-5.0.1_r1.tmp
export AOSP_6=$AOSP_ROOT/android-6.0.0_r1
export AOSP_6_OUT=$AOSP_ROOT/android-6.0.0_r1.tmp
export AOSP_6_R6=$AOSP_ROOT/android-6.0.0_r6
export AOSP_6_R6_OUT=$AOSP_ROOT/android-6.0.0_r6.tmp
# export AOSP_PATH=$AOSP_6
# export AOSP_OUT=$AOSP_6_OUT
#export AOSP_PATH=$AOSP_6_R6
#export AOSP_OUT=$AOSP_6_R6_OUT
# alias source-aosp-build-envsetup.sh='source $AOSP_PATH/build/envsetup.sh'
alias repoAlias='python3 $AOSP_ROOT/bin/repo'
function source-aosp-build-envsetup() {
# AOSP build cache setting : https://source.android.com/source/initializing?hl=zh-cn#optimizing-a-build-environment
curPath=$(pwd)
if [ "$curPath" = "$AOSP_PATH" ]; then
echo -e "\n You are on AOSP path: $AOSP_PATH "
echo "Config build cache. "
export USE_CCACHE=1
export CCACHE_DIR=$AOSP_OUT/build_ccache
prebuilts/misc/linux-x86/ccache/ccache -M 100G
fi
echo -e "\n>>>> source $AOSP_PATH/build/envsetup.sh"
source $AOSP_PATH/build/envsetup.sh
}
# After configuring the build result environment,
# just run emulatorAliasXxx to launch the emulator with the images we built.
export ANDROID_PRODUCT_OUT=$AOSP_OUT/build_out/target/product/generic
export OUT=$ANDROID_PRODUCT_OUT
export ANDROID_BUILD_TOP=$AOSP_PATH
alias emulatorAliasMac='$AOSP_PATH/prebuilts/android-emulator/darwin-x86_64/emulator'
alias emulatorAliasLinux='$AOSP_PATH/prebuilts/android-emulator/linux-x86_64/emulator'
```
- On the Intel Debian device, I can run the emulator with the built images successfully.
- On a Mac M1, I can't.
- My lunch cmd: `lunch aosp_arm-eng`
I know that in Android Studio we can download Android 6 emulator image files, and those images run successfully in the emulator.
- **I want to imitate that: build the 6.0 images myself, and run the emulator with my own images.**
This is the image download log: **attachments: `android-studio-download-android6-imgs.log`**
- ```
Preparing "Install ARM 64 v8a System Image API 23 (revision 7)".
Downloading https://dl.google.com/android/repository/sys-img/android/arm64-v8a-23_r07.zip
"Install ARM 64 v8a System Image API 23 (revision 7)" ready.
Preparing "Install Intel x86 Atom System Image API 23 (revision 10)".
Downloading https://dl.google.com/android/repository/sys-img/android/x86-23_r10.zip
"Install Intel x86 Atom System Image API 23 (revision 10)" ready.
Preparing "Install ARM EABI v7a System Image API 23 (revision 6)".
Downloading https://dl.google.com/android/repository/sys-img/android/armeabi-v7a-23_r06.zip
"Install ARM EABI v7a System Image API 23 (revision 6)" ready.
Preparing "Install Intel x86_64 Atom System Image API 23 (revision 10)".
Downloading https://dl.google.com/android/repository/sys-img/android/x86_64-23_r10.zip
"Install Intel x86_64 Atom System Image API 23 (revision 10)" ready.
Installing ARM 64 v8a System Image in /Volumes/Beyourself/dev_tools/dev_kit/android_sdk/system-images/android-23/default/arm64-v8a
"Install ARM 64 v8a System Image API 23 (revision 7)" complete.
"Install ARM 64 v8a System Image API 23 (revision 7)" finished.
Installing Intel x86 Atom System Image in /Volumes/Beyourself/dev_tools/dev_kit/android_sdk/system-images/android-23/default/x86
"Install Intel x86 Atom System Image API 23 (revision 10)" complete.
"Install Intel x86 Atom System Image API 23 (revision 10)" finished.
Installing ARM EABI v7a System Image in /Volumes/Beyourself/dev_tools/dev_kit/android_sdk/system-images/android-23/default/armeabi-v7a
"Install ARM EABI v7a System Image API 23 (revision 6)" complete.
"Install ARM EABI v7a System Image API 23 (revision 6)" finished.
Installing Intel x86_64 Atom System Image in /Volumes/Beyourself/dev_tools/dev_kit/android_sdk/system-images/android-23/default/x86_64
"Install Intel x86_64 Atom System Image API 23 (revision 10)" complete.
"Install Intel x86_64 Atom System Image API 23 (revision 10)" finished.
```
I ran the emulator with the [armeabi-v7a-23_r06](https://dl.google.com/android/repository/sys-img/android/armeabi-v7a-23_r06.zip "https://dl.google.com/android/repository/sys-img/android/armeabi-v7a-23_r06.zip") images successfully. In the Android Settings app, the build info is:
- *Android version*: 6.0
- *Build number*: sdk_phone_armv7-userdebug 6.0 MASTER 3079352 test-key
- **attachments: `android-download-emulator-imgs-run-result.jpeg`**
- [](https://i.stack.imgur.com/pYlIz.jpg)
I don't know how to find the code branch `MASTER 3079352`, because the AOSP source spans many repositories.
- From the image name `armeabi-v7a-23_r06`, I guess the AOSP branch is `android-6.0.0_r6`
- So I downloaded that code; the lunch cmd is: `lunch sdk_phone_armv7-userdebug`
- The build succeeds on the Docker Ubuntu OS.
- On Debian 12: `alias emulatorAliasLinux='$AOSP_PATH/prebuilts/android-emulator/linux-x86_64/emulator'`
- I can run the emulator successfully with the built imgs, using the cmd `emulatorAliasLinux`
- I copied the same AOSP tree and build results to the Mac M1 device
- env: `alias emulatorAliasMac='$AOSP_PATH/prebuilts/android-emulator/darwin-x86_64/emulator'`
- I execute `emulatorAliasMac`
- The emulator just shows the Android logo and never finishes starting. **attachments: `aosp6.0.0_r6_build_imgs_run_on_mac_m1_just_show_logo.png`**
- [](https://i.stack.imgur.com/eeJKr.png)
- This is the `emulatorAliasMac` log: **attachments: `4.emulator.just.aosp.content.log`**
- ```
emulator: found Android build root: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6
emulator: found Android build out: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic
emulator: Read property file at /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/system/build.prop
emulator: Cannot find boot properties file: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/boot.prop
emulator: Found target API sdkVersion: 23
emulator: virtual device has no config file - no problem
emulator: using core hw config path: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/hardware-qemu.ini
emulator: found skin-specific hardware.ini: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6/development/tools/emulator/skins/HVGA/hardware.ini
emulator: autoconfig: -skin HVGA
emulator: autoconfig: -skindir /Volumes/MacOs_disk/AOSP/android-6.0.0_r6/development/tools/emulator/skins
emulator: found skin-specific hardware.ini: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6/development/tools/emulator/skins/HVGA/hardware.ini
emulator: keyset loaded from: /Users/tom/.android/default.keyset
emulator: trying to load skin file '/Volumes/MacOs_disk/AOSP/android-6.0.0_r6/development/tools/emulator/skins/HVGA/layout'
emulator: skin network speed: 'full'
emulator: skin network delay: 'none'
emulator: autoconfig: -kernel /Volumes/MacOs_disk/AOSP/android-6.0.0_r6/prebuilts/qemu-kernel/arm/kernel-qemu-armv7
emulator: Auto-detect: Kernel image requires legacy device naming scheme.
emulator: Auto-detect: Kernel does not support YAFFS2 partitions.
emulator: autoconfig: -ramdisk /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/ramdisk.img
emulator: autoconfig: -sysdir /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic
emulator: Using initial system image: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/system.img
emulator: WARNING: system partition size adjusted to match image file (1536 MB > 200 MB)
emulator: autoconfig: -data /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/userdata-qemu.img
emulator: autoconfig: -initdata /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/userdata.img
emulator: WARNING: data partition size adjusted to match image file (550 MB > 200 MB)
emulator: autoconfig: -cache /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/cache.img
emulator: Physical RAM size: 512MB
emulator: GPU emulation is disabled
emulator: WARNING: CPU acceleration only works with x86/x86_64 system images.
emulator: Auto-config: -qemu -cpu cortex-a8
Content of hardware configuration file:
hw.cpu.arch = arm
hw.cpu.model = cortex-a8
hw.ramSize = 512
hw.screen = touch
hw.mainKeys = yes
hw.trackBall = yes
hw.keyboard = no
hw.keyboard.lid = no
hw.keyboard.charmap = qwerty2
hw.dPad = yes
hw.gsmModem = yes
hw.gps = yes
hw.battery = yes
hw.accelerometer = yes
hw.audioInput = yes
hw.audioOutput = yes
hw.sdCard = yes
disk.cachePartition = yes
disk.cachePartition.path = /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/cache.img
disk.cachePartition.size = 66m
hw.lcd.width = 320
hw.lcd.height = 480
hw.lcd.depth = 16
hw.lcd.density = 160
hw.lcd.backlight = yes
hw.gpu.enabled = no
hw.initialOrientation = portrait
hw.camera.back = emulated
hw.camera.front = none
vm.heapSize = 48
hw.sensors.proximity = yes
hw.sensors.magnetic_field = yes
hw.sensors.orientation = yes
hw.sensors.temperature = yes
hw.useext4 = yes
kernel.path = /Volumes/MacOs_disk/AOSP/android-6.0.0_r6/prebuilts/qemu-kernel/arm/kernel-qemu-armv7
kernel.parameters = androidboot.hardware=goldfish android.checkjni=1
kernel.newDeviceNaming = no
kernel.supportsYaffs2 = no
disk.ramdisk.path = /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/ramdisk.img
disk.systemPartition.initPath = /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/system.img
disk.systemPartition.size = 1536m
disk.dataPartition.path = /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/userdata-qemu.img
disk.dataPartition.size = 550m
avd.name = <build>
.
QEMU options list:
emulator: argv[00] = "/Volumes/MacOs_disk/AOSP/android-6.0.0_r6/prebuilts/android-emulator/darwin-x86_64/emulator64-arm"
emulator: argv[01] = "-android-hw"
emulator: argv[02] = "/Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/hardware-qemu.ini"
Concatenated QEMU options:
/Volumes/MacOs_disk/AOSP/android-6.0.0_r6/prebuilts/android-emulator/darwin-x86_64/emulator64-arm -android-hw /Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/hardware-qemu.ini
emulator: registered 'boot-properties' qemud service
emulator: Using kernel serial device prefix: ttyS
emulator: Ramdisk image contains fstab.goldfish file
emulator: Found format of system partition: 'ext4'
emulator: Found format of userdata partition: 'ext4'
emulator: Found format of cache partition: 'ext4'
emulator: system partition format: ext4
emulator: Mapping 'system' partition image to /tmp/android-tom/emulator-kBISGe
emulator: nand_add_dev: system,size=0x60000000,file=/tmp/android-tom/emulator-kBISGe,initfile=/Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/system.img,pagesize=512,extrasize=0
emulator: userdata partition format: ext4
emulator: nand_add_dev: userdata,size=0x22600000,file=/Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/userdata-qemu.img,pagesize=512,extrasize=0
emulator: cache partition format: ext4
emulator: nand_add_dev: cache,size=0x4200000,file=/Volumes/MacOs_disk/AOSP/android-6.0.0_r6.tmp/build_out/target/product/generic/cache.img,pagesize=512,extrasize=0
emulator: registered 'boot-properties' qemud service
emulator: Adding boot property: 'dalvik.vm.heapsize' = '48m'
emulator: Adding boot property: 'ro.config.low_ram' = 'true'
emulator: Adding boot property: 'qemu.sf.lcd_density' = '160'
emulator: Adding boot property: 'qemu.hw.mainkeys' = '1'
emulator: Adding boot property: 'qemu.sf.fake_camera' = 'back'
emulator: Kernel parameters: qemu.gles=0 qemu=1 console=ttyS0 android.qemud=ttyS1 androidboot.hardware=goldfish android.checkjni=1 ndns=1
emulator: autoconfig: -scale 1
emulator: Forcing ro.adb.qemud to "0".
emulator: control console listening on port 5554, ADB on port 5555
emulator: sent '0012host:emulator:5555' to ADB server
emulator: setting up http proxy: server=127.0.0.1 port=8889
emulator: android_hw_fingerprint_init: fingerprint qemud listen service initialized
emulator: ping program: /Volumes/MacOs_disk/AOSP/android-6.0.0_r6/prebuilts/android-emulator/darwin-x86_64/ddms
goldfish_fb_get_pixel_format:170: display surface,pixel format:
bits/pixel: 16
bytes/pixel: 2
depth: 16
red: bits=5 mask=0xf800 shift=11 max=0x1f
green: bits=6 mask=0x7e0 shift=5 max=0x3f
blue: bits=5 mask=0x1f shift=0 max=0x1f
alpha: bits=0 mask=0x0 shift=0 max=0x0
emulator: ### WARNING: /etc/localtime does not point to /usr/share/zoneinfo/, can't determine zoneinfo timezone name
emulator: _hwFingerprint_connect: connect finger print listen is called
emulator: got message from guest system fingerprint HAL
emulator: User configuration saved to /Users/tom/.android/emulator-user.ini
```
**In a word, I want to compile my own Android 6.0 image and then use this image to run the emulator on a Mac M1.** |
How to build an AOSP 6.0 img and run the emulator with it on a Mac M1 device? |
|emulation|android-source| |
null |
null |
null |
null |
null |
In addition to Paul's answer: if you are using an Android composable, you have to make the changes inside AndroidView, in the `update` lambda:
```
AndroidView(
modifier = Modifier.fillMaxSize(),
factory = { context ->
val previewView = PreviewView(context)
//no changes here, it only initialized once
previewView
},
update = {
bindCamera..()
}
)
```
|
### Updated
If you want to perform the additional action in response to the button's touchUpInside event, you can just provide a Button extension:
```swift
extension Button where Label == Text {
init(_ titleKey: any StringProtocol, action: @escaping () -> Void, additionalAction: @escaping () -> Void) {
self.init(titleKey) {
action()
additionalAction()
}
}
}
// Usage example:
Button("Tap me") {
print("tapped")
} additionalAction: {
print("tapped again")
}
```
If you insist on using view modifiers on the Button (not a custom Button), you may need to use the solution below with DragGesture to determine if the user released a touch inside the button.
For example: record the origin of the touch and compare the distance with button's bounds.
Things will get a little more complicated then. I would just stop here.
---
### Original
You can observe the button's `configuration.isPressed` like @lorem said:
```swift
extension ButtonStyle where Self == AdditionalActionButtonStyle {
static func printWhenTapped(_ message: String) -> Self {
return Self(message: message)
}
}
struct AdditionalActionButtonStyle: ButtonStyle {
private var message: String
init(message: String) {
self.message = message
}
public func makeBody(configuration: Configuration) -> some View {
configuration.label.onChange(of: configuration.isPressed) { _, newValue in
if newValue {
print(self.message)
}
}
}
}
// Usage example:
Button("Tap me") {
print("tapped")
}
.buttonStyle(.printWhenTapped("tapped again"))
```
If you want to use it as a view modifier, you can wrap it:
```swift
extension View {
public func printWhenTapped(_ message: String) -> some View {
modifier(AdditionalActionButtonStyleViewModifier(message: message))
}
}
struct AdditionalActionButtonStyleViewModifier: ViewModifier {
private var message: String
init(message: String) {
self.message = message
}
func body(content: Content) -> some View {
content.buttonStyle(.printWhenTapped(self.message))
}
}
// Usage example:
Button("Tap me") {
print("tapped")
}
.printWhenTapped("tapped again")
``` |
My friends, thank you very much for your help. I managed to solve it: the error was in the concatenation of the URL and the SQL. |
{"Voters":[{"Id":238704,"DisplayName":"President James K. Polk"},{"Id":3001761,"DisplayName":"jonrsharpe"},{"Id":3807365,"DisplayName":"IT goldman"}],"SiteSpecificCloseReasonIds":[13]} |
I'm working on a speed typing game using React and I have this function that removes the word from the array after it's either skipped or entered correctly. I think there's a better way to do it by creating a new array/without directly removing it from my words array but I'm a bit stuck on implementation. What would be the best approach to refactor the remove word function?
import { useState } from "react";
import wordsArray from "./components/wordsArray";
export default function App() {
const getRandomWord = () => {
return wordsArray[Math.floor(Math.random() * wordsArray.length)];
};
const [word, setWord] = useState(getRandomWord());
const [gameOver, setGameOver] = useState(false);
const removeWord = () => {
const removedWordIndex = wordsArray.indexOf(word);
wordsArray.splice(removedWordIndex, 1);
if (wordsArray.length === 0) {
setGameOver(true);
}
};
} |
**tl;dr**
Use the **[`$OutputEncoding` preference variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#outputencoding)**:
* In _Windows PowerShell_:
```
# Using the system's legacy ANSI code page, as Python does by default.
# NOTE: The & { ... } enclosure isn't strictly necessary, but
# ensures that the $OutputEncoding change is only temporary,
# by limiting it to the child scope that the enclosure creates.
& {
$OutputEncoding = [System.Text.Encoding]::Default
"‘I am well’ he said." | python -c 'import sys; print(sys.stdin.read())'
}
# Using UTF-8 instead, which is generally preferable.
# Note the `-X utf8` option (Python 3.7+)
& {
$OutputEncoding = [System.Text.UTF8Encoding]::new()
"‘I am well’ he said." | python -X utf8 -c 'import sys; print(sys.stdin.read())'
}
```
* In [PowerShell (Core) 7+](https://github.com/PowerShell/PowerShell/blob/master/README.md):
```
# Using the system's legacy ANSI code page, as Python does by default.
# Note: In PowerShell (Core) / .NET 5+,
# [System.Text.Encoding]::Default now reports UTF-8,
# not the active ANSI encoding.
& {
$OutputEncoding = [System.Text.Encoding]::GetEncoding([cultureinfo]::CurrentCulture.TextInfo.ANSICodePage)
"‘I am well’ he said." | python -c 'import sys; print(sys.stdin.read())'
}
# Using UTF-8 instead, which is generally preferable.
# Note the `-X utf8` option (Python 3.7+)
# No need to set $OutputEncoding, as it now *defaults* to UTF-8
"‘I am well’ he said." | python -X utf8 -c 'import sys; print(sys.stdin.read())'
```
Note:
* **[`$OutputEncoding`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#outputencoding)** controls what **encoding is used to send data TO external programs** via the pipeline (to stdin). It defaults to ASCII(!) in *Windows PowerShell*, and UTF-8 in _PowerShell (Core)_.
* **[`[Console]::OutputEncoding`](https://docs.microsoft.com/en-US/dotnet/api/System.Console.OutputEncoding)** controls **how data received FROM external programs (via stdout) is decoded**. It defaults to the console's active code page, which in turn defaults to the system's legacy OEM code page, such as `437` on US-English systems.
* Note that this implies that **if you want to *receive* data from a `python` call for _programmatic processing_**, you **must (temporarily) set `[Console]::OutputEncoding` too** - see the bottom section of [this answer](https://stackoverflow.com/a/78227364/45375).
That these two encodings are _not_ aligned by default is unfortunate; while _Windows PowerShell_ will see no more changes, there is hope for _PowerShell (Core)_: it would make sense to have it default _consistently_ to UTF-8:
* [GitHub issue #7233](https://github.com/PowerShell/PowerShell/issues/7233) suggests at least defaulting the shortcut files that launch PowerShell to UTF-8 (code page `65001`); [GitHub issue #14945](https://github.com/PowerShell/PowerShell/issues/14945) more generally discusses the problematic mismatch.
* **In *Windows 10 and above*, there is an option to switch to UTF-8 _system-wide_**, which then makes both the OEM and ANSI code pages default to UTF-8 (`65001`); however, **this has *far-reaching consequences*** and is still labeled as being _in beta_ as of Windows 11 - see [this answer](https://stackoverflow.com/a/57134096/45375).
---
#### Background information:
It is the **[`$OutputEncoding` preference variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#outputencoding)** that determines what character encoding PowerShell uses to send data (invariably _text_, as of PowerShell 7.3) to an _external program_ via the pipeline.
* Note that this even applies when data is read _from a file_: **PowerShell, as of v7.3, never sends _raw bytes_ through the pipeline: it reads the content into .NET strings _first_ and then _re-encodes_ them** based on `$OutputEncoding` on sending them through the pipeline to an external program.
* Therefore, **what encoding your `ansi.txt` input file uses is ultimately irrelevant, as long as PowerShell decodes it correctly** when reading it into .NET strings (which are internally composed of UTF-16 code units).
* See [this answer](https://stackoverflow.com/a/59118502/45375) for more information.
Thus, **the character encoding stored in `$OutputEncoding` must match the encoding that the target program expects**.
_By default_ the encoding in `$OutputEncoding` is _unrelated_ to the encoding implied by the console's active code page (which itself defaults to the system's legacy OEM code page, such as [`437`](https://en.wikipedia.org/wiki/Code_page_437) on US-English systems), which is what at least legacy console applications tend to use; however, Python does _not_, and uses the legacy _ANSI_ code page; other modern CLIs, notably Node.js' `node.exe`, always use UTF-8.
While `$OutputEncoding`'s default in [PowerShell (Core) 7+](https://github.com/PowerShell/PowerShell/blob/master/README.md) is now UTF-8, _Windows PowerShell_'s default is, regrettably, ASCII(!), which means that non-ASCII characters get "lossily" transliterated to _verbatim_ ASCII `?` characters, which is what you saw.
Therefore, you must (temporarily) set `$OutputEncoding` to the encoding that Python expects, and/or ask it to use UTF-8 instead.
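The practical effect of the ASCII default can be demonstrated from the Python side alone. A minimal sketch, using the same sample string as above; `errors="replace"` simulates the same lossy transliteration that Windows PowerShell performs before the text reaches the child process's stdin:

```python
# Windows PowerShell's default $OutputEncoding is ASCII, so any
# non-ASCII character (here, the curly quotes) is transliterated to a
# literal "?" before reaching the external program's stdin.
text = "‘I am well’ he said."

# Encoding to ASCII with errors="replace" substitutes "?" for each
# character that has no ASCII representation.
lossy = text.encode("ascii", errors="replace").decode("ascii")
print(lossy)  # ?I am well? he said.

# With UTF-8 (the PowerShell 7+ default), the text round-trips intact.
print(text.encode("utf-8").decode("utf-8") == text)  # True
```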
|
I found how to easily insert pngs as axes tick labels, following [this post.](https://stackoverflow.com/questions/54247880/image-as-axis-tick-ggplot)
However, this seems to work only for a single axis. I am not quite sure how to draw both my y axis and x axis grob to my plot.
To plot one grob for **either** y or x axis one would do something like this...
```
library(ggplot2)
library(cowplot)
library(png)
library(RCurl)
data <- data.frame(
pairType = c("assortative", "disassortative", "disassortative", "assortative", "disassortative", "disassortative", "assortative"),
Male_Phenotype = c("metallic", "rufipennis", "metallic", "militaris-a", "metallic", "militaris-a", "rufipennis"),
Female_Phenotype = c("metallic", "metallic", "militaris-a", "militaris-a", "rufipennis", "rufipennis", "rufipennis"),
exp = c(0.10204082, 0.11224490, 0.28571429, 0.10204082, 0.02040816, 0.08163265, 0.29591837),
obs = c(0.04081633, 0.02040816, 0.02040816, 0.03061224, 0.00000000, 0.00000000, 0.03061224)
)
rufiPNG<- "https://i.stack.imgur.com/Q8BqO.png"
rufipennis <- readPNG(getURLContent(rufiPNG))
militarisPNG<- "https://i.stack.imgur.com/EtdfR.png"
militaris_a <- readPNG(getURLContent(militarisPNG))
metallicPNG<- "https://i.stack.imgur.com/YIDoA.png"
metallic <- readPNG(getURLContent(metallicPNG))
#my crazy plot
p<- ggplot(data, aes(x = Male_Phenotype, y = Female_Phenotype)) +
geom_point(aes(color = pairType, size = 2.1*exp))+
geom_point(color = "black", aes(size = 1.6*exp)) +
geom_point(color = "white", aes(size = 1.3*exp)) +
geom_point(aes(size = 1.25*obs), color= "salmon") +
scale_color_manual("Pair type", values = c("assortative" = "cornflowerblue", "disassortative" = "aquamarine3")) +
scale_size_continuous(range = c(10, 45))+
scale_x_discrete(expand=c(.3,.3))+
scale_y_discrete(expand=c(0.3,0.3))+
geom_text(aes(label= round(obs, digits= 3)))+
theme_minimal() +
labs(x = "Male Phenotype", y = "Female Phenotype", size = "Mate Count")+
theme(legend.position = 'bottom', legend.box.background = element_rect(color='black'), axis.title = element_text(size= 14), axis.text = element_text(size= 10))+
guides(color = guide_legend(override.aes = list(size = 12)), size = "none")
#create canvas
ystrip <- axis_canvas(p, axis = 'y') +
draw_image(rufipennis, y = 2.35, scale = 4) +
draw_image(militaris_a, y = 1.5, scale = 4) +
draw_image(metallic, y = .65, scale = 4)
#'draw' grob onto ggplot, 'p'
ggdraw(insert_yaxis_grob(p, ystrip, position = "left",width = grid::unit(0.05, "null")))
```
I tried
```
ggdraw(p)+
insert_yaxis_grob(p, ystrip, position = "left",width = grid::unit(0.05, "null"))+
insert_xaxis_grob(p, xstrip, position = "bottom",height = grid::unit(0.1, "null"))
```
and
```
ggdraw(insert_yaxis_grob(p, ystrip, position = "left",width = grid::unit(0.05, "null")),
insert_xaxis_grob(p, xstrip, position = "bottom",height = grid::unit(0.1, "null")))
```
to no avail. Any ideas? |
The code below uses a **background callback** (plotly/dash) to retrieve a value.
[![enter image description here][1]][1]
Every 3 seconds, `update_event(event, shared_value)` sets an `event` and defines a `value`. In parallel, `listening_process(set_progress)` waits for the `event`.
This seems to work nicely when the waiting time is moderate (say a few seconds). When the waiting time drops below 1 second (say 500 ms), `listening_process(set_progress)` misses values.
Is there a way for the **background callback** to refresh faster?
It looks like the server updates with a delay of a few hundred ms, independently of the rate at which I set the `event`.
```
import time
from uuid import uuid4
import diskcache
import dash_bootstrap_components as dbc
from dash import Dash, html, DiskcacheManager, Input, Output
from multiprocessing import Event, Value, Process
from datetime import datetime
import random
# Background callbacks require a cache manager + a unique identifier
launch_uid = uuid4()
cache = diskcache.Cache("./cache")
background_callback_manager = DiskcacheManager(
cache, cache_by=[lambda: launch_uid], expire=60,
)
# Creating an event
event = Event()
# Creating a shared value
shared_value = Value('i', 0) # 'i' denotes an integer type
# Updating the event
# This will run in a different process using multiprocessing
def update_event(event, shared_value):
while True:
event.set()
with shared_value.get_lock(): # ensure safe access to shared value
shared_value.value = random.randint(1, 100) # generate a random integer between 1 and 100
print("Updating event...", datetime.now().time(), "Shared value:", shared_value.value)
time.sleep(3)
app = Dash(__name__, background_callback_manager=background_callback_manager)
app.layout = html.Div([
html.Button('Run Process', id='run-button'),
dbc.Row(children=[
dbc.Col(
children=[
# Component sets up the string with % progress
html.P(None, id="progress-component")
]
),
]
),
html.Div(id='output')
])
# Listening for the event and generating a random process
def listening_process(set_progress):
while True:
event.wait()
event.clear()
print("Receiving event...", datetime.now().time())
with shared_value.get_lock(): # ensure safe access to shared value
value = shared_value.value # read the shared value
set_progress(value)
@app.callback(
[
Output('run-button', 'style', {'display': 'none'}),
Output("progress-component", "children"),
],
Input('run-button', 'n_clicks'),
prevent_initial_call=True,
background=True,
running=[
(Output("run-button", "disabled"), True, False)],
progress=[
Output("progress-component", "children"),
],
)
def run_process(set_progress, n_clicks):
if n_clicks is None:
return False, None
elif n_clicks > 0:
p = Process(target=update_event, args=(event,shared_value))
p.start()
listening_process(set_progress)
return True, None
if __name__ == '__main__':
app.run(debug=True, port=8050)
```
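One likely cause of the missed values, independent of Dash: `Event` coalesces notifications. However many times the producer calls `event.set()` before the listener wakes up, the listener observes a single set state, so intermediate writes to `shared_value` are skipped. A queue preserves one entry per produced value instead. A minimal sketch with the analogous `threading`/`queue` primitives (with processes, `multiprocessing.Queue` plays the same role):

```python
import queue
import threading

# An Event stores a single bit of state: two set() calls made before
# the consumer reacts are indistinguishable from one.
event = threading.Event()
event.set()
event.set()            # coalesced with the first set()
event.wait()
event.clear()          # one wait/clear consumes both notifications
print(event.is_set())  # False -> the second "update" was never seen

# A queue keeps every produced value, so none are skipped.
q = queue.Queue()
q.put(41)
q.put(42)
print(q.get(), q.get())  # 41 42
```

In the listener, blocking on `q.get()` instead of `event.wait()` would deliver every value at the moment it is produced, without polling.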
EDIT : found a solution using `interval`.
```
@app.callback(
[
Output('run-button', 'style', {'display': 'none'}),
Output("progress-component", "children"),
],
Input('run-button', 'n_clicks'),
prevent_initial_call=True,
interval= 100, #### **THIS IS NEW**
background=True,
running=[
(Output("run-button", "disabled"), True, False)],
progress=[
Output("progress-component", "children"),
],
)
```
But then, what is the advantage of this approach over a `dcc.Interval`? In both cases, there is polling. The only advantage I can see is that the consumer receives the updated value at the *exact* time it is produced.
[1]: https://i.stack.imgur.com/jqJ5v.gif |
I was facing a similar issue. Just break the cache of the image. _I am using an MVC Razor View._
```javascript
var link = document.createElement('link');
link.type = 'image/x-icon';
link.rel = 'shortcut icon';
link.href = "/images/favicon.ico?t=@DateTime.Now.Ticks";
var head = document.head || document.getElementsByTagName('head')[0];
head.appendChild(link);
``` |
I'm using iverilog to compile the following matrix multiplication code and testbench:
```
`timescale 1ns / 1ps
module testbench();
// Testbench parameters
parameter m1 = 2;
parameter n1 = 2;
parameter m2 = 2;
parameter n2 = 2;
// Testbench variables
logic clk;
real matrix_a[m1*n1-1:0];
real matrix_b[m2*n2-1:0];
real result_matrix[m1*n2-1:0];
// Instantiate the module under test
matrix_dot_product #(.m1(m1), .n1(n1), .m2(m2), .n2(n2)) uut (
.clk(clk),
.matrix_a(matrix_a),
.matrix_b(matrix_b),
.result_matrix(result_matrix)
);
initial clk = 0;
always #5 clk = ~clk;
initial begin
$dumpfile("testbench.vcd");
$dumpvars(0, testbench);
#10;
matrix_a[0] = 1.0;
matrix_a[1] = 2.0;
matrix_a[2] = 3.0;
matrix_a[3] = 4.0;
matrix_b[0] = 1.0;
matrix_b[1] = 2.0;
matrix_b[2] = 3.0;
matrix_b[3] = 4.0;
#10;
// $display("Result matrix:");
for (int i = 0; i < m1*n2; i++) begin
$display("result_matrix[%0d] = %f", i, result_matrix[i]);
end
// Finish the simulation
$finish;
end
endmodule
module matrix_dot_product #(
parameter m1 = 2,
parameter n1 = 2,
parameter m2 = 2,
parameter n2 = 2
)(
input logic clk,
input real matrix_a[m1*n1-1:0],
input real matrix_b[m2*n2-1:0],
output real result_matrix[m1*n2-1:0]
);
integer i, j, k;
real sum;
real product;
always @(posedge clk) begin
for (i = 0; i < m1; i++) begin
for (j = 0; j < n2; j++) begin
sum = 0;
for (k = 0; k < n1; k++) begin
product = matrix_a[i*n1+k] * matrix_b[k*n2+j];
sum = sum + product;
end
result_matrix[i*n2+j] = sum;
end
end
for (int i = 0; i < m1*n2; i++) begin
$display("result_matrix[%0d] = %f", i, result_matrix[i]);
end
end
endmodule
```
I get the following output when running this matrix multiply test bench:
MODULE OUTPUT
result_matrix[0] = 0.000000
result_matrix[1] = 0.000000
result_matrix[2] = 0.000000
result_matrix[3] = 0.000000
MODULE OUTPUT
result_matrix[0] = 7.000000
result_matrix[1] = 10.000000
result_matrix[2] = 15.000000
result_matrix[3] = 22.000000
TEST BENCH OUTPUT
result_matrix[0] = 0.000000
result_matrix[1] = 0.000000
result_matrix[2] = 0.000000
result_matrix[3] = 0.000000
Based on my output, the code multiplies correctly when running inside the module; however, it doesn't seem to affect the output of the matrix in the testbench. I'd appreciate any help with this issue!
iverilog Matrix Multiplication Testbench Yields Inconsistent Results |
|verilog|system-verilog|iverilog| |
Recently, I have been using the causal_conv1d library for machine learning programming; causal_conv1d is a part of the mamba_ssm library. However, I can only run these libraries on NVIDIA GPUs. I am using a Mac machine with an M-series chip (M2 Pro). How can I use the causal_conv1d library on my Mac machine, or is there any alternative library available?
I've tried installing PyTorch with the MPS version, but it seems like the causal_conv1d library directly requires support for nvcc and CUDA. |
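(For context, the operation causal_conv1d implements is an ordinary 1-D convolution whose input is left-padded by `kernel_size - 1`, so each output position depends only on current and past samples. Below is a dependency-free sketch of those semantics, with an illustrative function name; in PyTorch the same computation can be expressed with `torch.nn.functional.pad` followed by `nn.Conv1d`, which runs on the MPS backend, albeit without the fused CUDA kernel's speed.)

```python
def causal_conv1d(x, w):
    """Causal cross-correlation of sequence x with kernel w.

    Left-padding with len(w) - 1 zeros means output position i only
    sees x[:i + 1] -- never future samples.
    """
    k = len(w)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(w[j] * padded[i + j] for j in range(k)) for i in range(len(x))]

print(causal_conv1d([1.0, 2.0, 3.0], [1.0, 1.0]))  # [1.0, 3.0, 5.0]
```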
Which library can replace causal_conv1d in machine learning programming? |
|machine-learning|deep-learning|pytorch| |
Since you only want the side menu to show when a left-to-right drag happens on the first tab view, I would suggest applying the drag gesture to the content of view 1 only. And since we are talking about a left-to-right drag, it is probably reasonable to expect it to start on the left-side of the screen. In fact, I would expect that many users will probably "pull" the menu from near the left edge, if they know it's there.
The example below uses an overlay that covers just the left-side of view 1. The width of the overlay is half the width of the menu. The overlay has an opacity of 0.001, which makes it effectively invisible, but this allows it to be used for capturing the drag gesture.
Here is how the (white) overlay would look if the opacity were 0.5 instead of 0.001:

When a drag gesture is detected that starts on the overlay and continues for a minimum of one-quarter of the menu width, the menu is brought into view. Any drag gesture from right-to-left that begins in the uncovered part of the view (on the right) will be handled by the `TabView` in the usual way.
When the menu is showing, the overlay expands to cover the full width of the menu, so that any drag that starts over the menu can be detected. Here is how it looks when the opacity is 0.5 instead of 0.001:

The menu can be hidden again with a drag gesture from right-to-left that begins over the menu. A small part of view 1 is still visible on the right and the menu can also be hidden by tapping in this area. However, if a right-to-left drag gesture begins in this area then it is handled by the `TabView`. This allows view 2 to be brought into view without having to close the menu first.
Views 2 and 3 are not impacted by the overlay on view 1, so drag gestures are handled by the `TabView` in the normal way with these views.
The updated example is shown below. Some more notes:
- A `GeometryReader` is used to measure the width of the screen, instead of using the first window scene as you were doing before. A `GeometryReader` also works on an iPad when split screen is used.
- `MainView` is now integrated into `FeedBaseView`, but view 1 has been factored out as a separate view. The side-menu view is unchanged, but I gave it a capitalized name (which is more conventional).
- You were applying a blur to view 1 when the menu is showing. This causes the background color of the screen to show through at the edges, so when the background is white, the edges get brighter. To mask this on the left edge where it is connected to the menu, a `zIndex` and also a shadow effect are applied to the menu.
- I found that `.animation` modifiers did not seem to work when applied to the content of view 1. I don't know why not, but suspect that it may be because the view is nested inside a `TabView`. However, animations work fine when changes are applied `withAnimation`.
- I didn't understand what the first responder calls were doing in your original code so I stripped these out. If you really need them then hopefully it is clear where to put them back in.
```swift
struct View1WithMenu: View {
let screenWidth: CGFloat
let sideBarWidth: CGFloat
@State private var showMenu: Bool = false
@GestureState private var dragOffset: CGFloat = 0
init(screenWidth: CGFloat) {
self.screenWidth = screenWidth
self.sideBarWidth = screenWidth - 90
}
private func revealFraction(sideBarWidth: CGFloat) -> CGFloat {
showMenu ? 1 : max(0, min(1, dragOffset / (sideBarWidth / 2)))
}
var body: some View {
HStack(spacing: 0) {
SideMenu()
.frame(width: sideBarWidth)
.shadow(
color: Color(white: 0.2),
radius: revealFraction(sideBarWidth: sideBarWidth) * 10
)
.zIndex(1) // to cover the blur at the edge
ZStack {
Color.blue
Text("tab 0")
}
.frame(width: screenWidth)
.blur(radius: revealFraction(sideBarWidth: sideBarWidth) * 4)
.overlay {
Color.black
.opacity(revealFraction(sideBarWidth: sideBarWidth) * 0.2)
.onTapGesture {
if showMenu {
UIImpactFeedbackGenerator(style: .light).impactOccurred()
withAnimation { showMenu = false }
}
}
}
}
.offset(x: showMenu ? sideBarWidth / 2 : -sideBarWidth / 2)
.offset(x: dragOffset)
.overlay(alignment: .leading) {
Color.white
.opacity(0.001) //Change to 0.1 to see it
.frame(width: sideBarWidth + (showMenu ? sideBarWidth / 2 : 0))
.gesture(
DragGesture(minimumDistance: 1)
.updating($dragOffset) { value, state, trans in
let minOffset = showMenu ? -sideBarWidth : 0
let maxOffset = showMenu ? 0 : sideBarWidth
state = max(minOffset, min(maxOffset, value.translation.width))
}
.onEnded { value in
withAnimation {
showMenu = value.translation.width >= sideBarWidth / 4
}
}
)
}
.onDisappear {
showMenu = false
}
}
}
struct FeedBaseView: View {
@State private var selection: Int = 0
var body: some View {
GeometryReader { proxy in
TabView(selection: $selection) {
View1WithMenu(screenWidth: proxy.size.width)
.tag(0)
ZStack {
Color.red
Text("tab 1")
}
.tag(1)
ZStack {
Color.green
Text("tab 2")
}
.tag(2)
}
.tabViewStyle(.page)
}
}
}
```
 |
I suggest using a UserControl that combines the original PasswordBox with an alternate TextBox, controlled by a ToggleButton.
The user control XAML:
<UserControl x:Class="Problem300324A.DualPasswordBox"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:local="clr-namespace:Problem300324A"
mc:Ignorable="d"
d:DesignHeight="450" d:DesignWidth="800" Name="Parent">
<UserControl.Resources>
<BooleanToVisibilityConverter x:Key="BolleanToVisibilityConverter" />
<local:InvertBooleanToVisibilityConverter x:Key="InvertBooleanToVisibilityConverter"/>
<local:ShowHideConverter x:Key="ShowHideConverter"/>
</UserControl.Resources>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition/>
<ColumnDefinition />
</Grid.ColumnDefinitions>
<PasswordBox Name="password"
PasswordChanged="PasswordBox_PasswordChanged"
Visibility="{Binding ElementName=Parent,Path=ShowPassword,Converter={StaticResource InvertBooleanToVisibilityConverter}}"
/>
<TextBox Name="text" Grid.Column="0"
TextChanged="text_TextChanged"
Text="{Binding ElementName=Parent,Path=Password}"
Visibility="{Binding ElementName=Parent,Path=ShowPassword,Converter={StaticResource BolleanToVisibilityConverter}}"
/>
<ToggleButton Name="toggle" Grid.Column="1"
IsChecked="{Binding ElementName=Parent,Path=ShowPassword,Mode=TwoWay}" Width="40"
Content="{Binding ElementName=Parent,Path=ShowPassword,Converter={StaticResource ShowHideConverter}}"
/>
</Grid>
</UserControl>
The Code Behind :
using System.Reflection;

public partial class DualPasswordBox : UserControl
{
public DualPasswordBox()
{
InitializeComponent();
}
public static readonly DependencyProperty ShowPasswordProperty =
DependencyProperty.Register("ShowPassword", typeof(bool), typeof(DualPasswordBox));
public bool ShowPassword
{
get { return (bool)GetValue(ShowPasswordProperty); }
set { SetValue(ShowPasswordProperty, value); }
}
private void PasswordBox_PasswordChanged(object sender, RoutedEventArgs e)
{
if (text.Text != password.Password)
{
text.Text = password.Password;
text.CaretIndex = text.Text.Length;
}
}
private void text_TextChanged(object sender, TextChangedEventArgs e)
{
if (text.Text != password.Password)
{
password.Password = text.Text;
password.GetType()
.GetMethod("Select", BindingFlags.Instance | BindingFlags.NonPublic)
.Invoke(password, new object[] { text.Text.Length, 0 });
}
}
}
Using example :
<local:DualPasswordBox ShowPassword="True" Height="40" />
Naturally, the code introduces a security concern (the password is exposed as plain text in the TextBox) that has to be evaluated.
Github Pages Deployment deploys a blank page |
|reactjs|github|deployment|github-pages| |
I have a C CLI program that I compiled to wasm with Emscripten. I'm trying to run that program each time a button is pressed, reading from a textbox into stdin and from stdout into another textbox. However, I can only call `Module.callMain()` successfully once. Each subsequent invocation does not execute. How do I properly set this up?
Here is the HTML code.
```
<!doctype html>
<html lang="en-us">
<head>
<meta charset="utf-8">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>title</title>
</head>
<body>
<textarea id="input" rows="8"></textarea>
<button onclick="exec()">run</button>
<textarea id="output" rows="8"></textarea>
<script>
var input_box = document.getElementById('input');
var input_text = "";
let i = 0;
var Module = {
noInitialRun: true,
noExitRuntime: true,
print: (function() {
var element = document.getElementById('output');
if (element) element.value = ''; // clear browser cache
return (...args) => {
var text = args.join(' ');
console.log(text);
if (element) {
element.value += text + "\n";
element.scrollTop = element.scrollHeight; // focus on bottom
}
};
})(),
stdin: (function() {
return () => {
if(i >= input_text.length) {
return null;
} else {
return input_text.charCodeAt(i++);
}
};
})(),
};
function exec() {
console.log(i);
i = 0;
input_text = input_box.value;
console.log(input_text);
Module.callMain([]);
}
</script>
<script async type="text/javascript" src="main.js"></script>
</body>
</html>
```
The .js file is the one output from emscripten without modifications.
Here are the flags used to compile: `-s 'EXIT_RUNTIME=0' -s EXPORTED_RUNTIME_METHODS='[\"callMain\",\"run\"]' -s ENVIRONMENT=web`
Run main several times of wasm in browser |
|javascript|webassembly|emscripten| |
I have a web app form with text fields and dropdowns. I am trying to pass certain selections over to a google sheet. I want to bring (in this order): landowner, comments, date, land agent, parcel ID to the corresponding columns in google sheet (columns A-E).
[![enter image description here][1]][1]
When I press submit, everything but the landowner dropdown sends to the sheet:
[![enter image description here][2]][2]
Here are the `doPost` and `addRecord` functions I have:
function doPost(e) {
// Check if the 'e' parameter is defined and contains the expected properties
if (e && e.parameter) {
var landowner = e.parameter.landowner ? e.parameter.landowner.toString() : '';
var substation = e.parameter.substation ? e.parameter.substation.toString() : '';
var comment = e.parameter.comment ? e.parameter.comment.toString() : '';
var status = e.parameter.status ? e.parameter.status.toString() : '';
var supervisor = e.parameter.supervisor ? e.parameter.supervisor.toString() : '';
var date = e.parameter.date ? e.parameter.date : '';
var parcelId = e.parameter.parcelId ? e.parameter.parcelId.toString() : ''; // Retrieve parcel ID parameter
addRecord(landowner, comment, date, supervisor, parcelId); // Pass parcelId to addRecord
addToStatuses(landowner, status, date, supervisor, parcelId); // Pass parcelId to addToStatuses
removeDuplicateStatuses();
var htmlOutput = HtmlService.createTemplateFromFile('DependentSelect');
var subs = getDistinctSubstations();
htmlOutput.message = 'Record Added';
htmlOutput.subs = subs;
return htmlOutput.evaluate();
} else {
// If 'e' parameter is not defined or doesn't contain expected properties, return an error message
return ContentService.createTextOutput("Error: Invalid request parameters");
}
}
__
// Function to add a record to the Comments sheet
function addRecord(landowner, comment, date, supervisor, parcelId) {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var dataSheet = ss.getSheetByName("Comments");
var formattedDate = Utilities.formatDate(new Date(date), Session.getScriptTimeZone(), "MM/dd/yy HH:mm");
dataSheet.appendRow([landowner, comment, formattedDate, supervisor, parcelId]); // Include landowner in column A
}
Can anyone see what I'm missing here?
[1]: https://i.stack.imgur.com/OYyi1.jpg
[2]: https://i.stack.imgur.com/hnJtd.png
|
How can I add a grob for both the x and y axis using cowplot? |
|r|ggplot2|cowplot| |
null |
Reflection on Kotlin/Native is very limited. See the documentation pages for the [`kotlin.reflect`][1] package with the "Native" filter. The only useful things you can get out of a `KClass` are `simpleName`, `qualifiedName`, `isInstance`, `cast`, and `safeCast`.
While on the JVM you are forced to put lots of this reflection metadata into class files, having all that information emitted into a native binary would increase the size of the binary considerably. From [here][2], this seems to go against the goals of Kotlin/Native (emphasis mine):
> The initial release of K/N will definitely not support Class.forName. The whole idea of K/N is to provide a “closed world” view of the code and **enable to build a highly-optimized and compact executable**. We believe that use-cases that require reflection in the way that you’ve described are perfectly covered by Kotlin/JVM already. We are not trying to build a better JVM.
As an idea, you can consider using some [annotation processing][3] to generate a `List<KProperty>` for you to use. Something similar is also suggested in [this discussion][4].
[1]: https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.reflect/
[2]: https://discuss.kotlinlang.org/t/reflection/4054/12
[3]: https://kotlinlang.org/docs/ksp-quickstart.html
[4]: https://youtrack.jetbrains.com/issue/KT-58974 |
I'm building a server-side rendering Blazor app.
In Program.cs I have
builder
.Services
.AddRazorComponents()
.AddInteractiveServerComponents();
and
app.MapRazorComponents<App>()
.AddInteractiveServerRenderMode();
In a component I have something like the following, which is not working. I would like to debug this code with breakpoints, but I can't find a way to do so.
@using System.Globalization
@inject IJSRuntime JSRuntime
@inject NavigationManager Nav
<select class="form-control" @onchange="ChangeLanguage">
@foreach (var language in supportedLanguages)
{
<option value="@language">@language.DisplayName</option>
}
</select>
@code
{
CultureInfo[] supportedLanguages = new[]
{
new CultureInfo("en-US"),
new CultureInfo("pt-PT"),
new CultureInfo("fr-FR"),
};
private async Task ChangeLanguage(ChangeEventArgs e)
{
var culture = e.Value?.ToString();
Console.WriteLine("culture is " + culture);
if (!string.IsNullOrEmpty(culture))
{
await JSRuntime.InvokeVoidAsync("BlazorCulture.setCulture", culture);
}
}
} |
How can I debug server side rendering blazor code in a component? |
|c#|.net|blazor-server-side| |
I'm switching my app from firebase web (V8) to react-native-firebase. (I figured it would be less work to refactor than going to web V9) But I have run into a problem.
I use Priority values to store a value that indicates ordering of objects in a realtime database container, and when I read the snapshot the priorities are not there. It seems like the top-level snapshot has a priority value, but when I access the child snapshots with snap.child('child1') or snap.forEach(), the child snapshots do not contain a `.priority` property.
This is the code has been working in my app for over 5 years (with added debugging printouts).
```
subscribeFB() {
if (!this.unlistenFB) {
this.unlistenFB = this.user.child('Counternames').on(
'value',
(snap) => {
// Need to also grab the current timestamps when
// Counternames changes.
this.getTimestamps(snap);
},
(error) => {
info('Counternames subscription cancelled because', error);
}
);
}
}
getTimestamps(snap) {
this.user.child('Timestamps').once(
'value',
(tsSnap) => {
setTimeout(
() =>
MultiActions.setCounters({
counterSnap: snap,
timestamps: tsSnap.val(),
}),
0
);
},
(error) => {
info('Could not get timestamps because', error);
}
);
}
// Then in setCounters I use both snaps
setCounters(snaps) {
[...]
if ( timestamps.sequence > this.state.timestamps.sequence) {
debug('Getting new sequence from FB');
// get sequence from FB
var CounternameSnap = snaps.counterSnap;
var cns = CounternameSnap.exportVal();
debug(`CounternameSnap exportVal ${JSON.stringify(cns, null, 2)}`);
var ch = CounternameSnap.child('-MH3NnGU5iM0eakrd7ZS').exportVal();
debug(`Counter 44 exportVal ${JSON.stringify(ch)}`);
CounternameSnap.forEach((ctr) => {
var p = ctr.getPriority();
debug(`Counter priority ${p} has key ${ctr.key}, val ${ctr.val()}`);
// Do something with priority...
});
```
When I change the order of the counters on another device, my `.on` listener triggers and I get the following printout:
```
LOG MultiStore:debug Getting new sequence from FB +2ms
LOG MultiStore:debug CounternameSnap exportVal {
".value": {
"-MH3MqLK32gCi0pqBUg3": "Counter 33",
"-MH4yucF8rmeewGbPAVI": "Counter 55",
"-MH3NnGU5iM0eakrd7ZS": "Counter 44",
"-NuBYkNVlAUIUeoQQCKA": "Counter 66",
"-MGQOBLgEvEekOg6geQI": "Counter 11",
"-MGzzZXdUwpBr8RlLLE-": "Counter 22"
},
".priority": null
} +1ms
LOG MultiStore:debug Counter 44 exportVal {".value":"Counter 44"} +1ms
LOG MultiStore:debug Counter priority undefined has key -MGzzZXdUwpBr8RlLLE-, val Counter 22 +2ms
LOG MultiStore:debug Counter priority undefined has key -MH4yucF8rmeewGbPAVI, val Counter 55 +1ms
LOG MultiStore:debug Counter priority undefined has key -MH3NnGU5iM0eakrd7ZS, val Counter 44 +1ms
LOG MultiStore:debug Counter priority undefined has key -MH3MqLK32gCi0pqBUg3, val Counter 33 +0ms
LOG MultiStore:debug Counter priority undefined has key -MGQOBLgEvEekOg6geQI, val Counter 11 +1ms
LOG MultiStore:debug Counter priority undefined has key -NuBYkNVlAUIUeoQQCKA, val Counter 66 +1ms
```
This shows the top-level snap contains a `.priority` value (null) but the "Counter 44" child does not have the property at all. I can verify that the other device successfully pushed the new priorities to firebase: although the firebase console does not show priorities, I can export and download the container, and it looks like this:
```
{
"-MGQOBLgEvEekOg6geQI": {
".value": "Counter 11",
".priority": 4
},
"-MGzzZXdUwpBr8RlLLE-": {
".value": "Counter 22",
".priority": 0
},
"-MH3MqLK32gCi0pqBUg3": {
".value": "Counter 33",
".priority": 1
},
"-MH3NnGU5iM0eakrd7ZS": {
".value": "Counter 44",
".priority": 2
},
"-MH4yucF8rmeewGbPAVI": {
".value": "Counter 55",
".priority": 3
},
"-NuBYkNVlAUIUeoQQCKA": {
".value": "Counter 66",
".priority": 5
}
}
```
Is this exposing a previously unknown bug in my program, is it a bug in RNFirebase, a change in underlying FB V9+ iOS/Android libraries (It has the same problem on both platforms), or ???
react-native: 0.70.15
react-native-firebase: 19.1.1
[UPDATE:] I wondered if offline persistence might have something to do with this, so I added `database().setPersistenceEnabled(true);` right after my firebase imports, but the problem persists.
I just deployed my Express app on Vercel. The app deployed without any error, but now when I go to the link provided by Vercel I see "Internal Server Error" in the browser.
I am using EJS template engine with my express app as well.
This is the folder structure of my project.
[](https://i.stack.imgur.com/e2v40.png)
this is the package.json of my project
```
{
"name": "templates",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "node api/index.js",
"start:dev": "PORT=3000 node server.js",
"dev": "nodemon api/index.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"body-parser": "^1.20.2",
"ejs": "^3.1.9",
"express": "^4.18.2",
"express-ejs-layouts": "^2.5.1",
"zod": "^3.22.4"
},
"devDependencies": {
"nodemon": "^3.1.0"
}
}
```
I'm using `npm run dev` to run the project locally, and it runs fine.
This is the index.js file code
```
const express = require("express");
const expressLayouts = require("express-ejs-layouts");
const { formValidationSchema } = require("../schemas/formvalidation");
const app = express();
const PORT = 8080;
app.use(express.static(__dirname + "../public"));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.set("views", "./views");
app.use(expressLayouts);
app.set("layout", "../views/layouts/layout");
app.set("view engine", "ejs");
app.get("/", async function (req, res) {
res.render("index", {
title: "Welcome to my portfolio!",
// Other data
});
});
```
and this is my vercel.json file code
```
{
"version": 2,
"rewrites": [{ "source": "/(.*)", "destination": "/api" }]
}
```
I tried the official Vercel-documented way of using an Express app on Vercel, but it still failed:
`https://vercel.com/guides/using-express-with-vercel`
|
*This question is meant to be used as reference for all frequently asked questions of this nature.*
Why do I get a mysterious crash or *'segmentation fault'* when I assign/copy/scan data to the address where an uninitialized pointer points to?
Examples:
```
int *ptr;
*ptr = 10; // Crash here!
```
```
char *ptr;
strcpy(ptr, "Hello, World!"); // Crash here!
```
```
char *ptr;
scanf("%s", ptr); // Crash here!
``` |
I'm programming a simple 'game' that is just a block that can jump around. I placed a wall, but now the character phases through it when moving to the right, while the collision works when going to the left.
Every other part of this is fine. The `willImpactMovingLeft` function works just fine and should be a mirror of `willImpactMovingRight`, but right doesn't work.
simple working code for this example:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<!DOCTYPE html>
<html>
<body>
<div id="cube" class="cube"></div>
<div id="ground1" class="ground1"></div>
<div id="ground2" class="ground2"></div>
<script>
var runSpeed = 5;var CUBE = document.getElementById("cube");var immovables = ["ground1", "ground2"];let Key = {pressed: {},left: "ArrowLeft",right: "ArrowRight",isDown: function (key){return this.pressed[key];},keydown: function (event){this.pressed[event.key] = true;},keyup: function (event){delete this.pressed[event.key];}}
window.addEventListener("keyup", function(event) {Key.keyup(event);});
window.addEventListener("keydown", function(event) {Key.keydown(event);});
setInterval(()=>{
if (Key.isDown(Key.left)){
if(willImpactMovingLeft("cube", immovables)!=false){
cube.style.left = willImpactMovingLeft("cube", immovables)+"px";
}else{
cube.style.left = CUBE.offsetLeft - runSpeed +"px";
}
}
if (Key.isDown(Key.right)){
if(willImpactMovingRight("cube", immovables)!=false){
cube.style.left = willImpactMovingRight("cube", immovables)+"px";
}else{
cube.style.left = CUBE.offsetLeft + runSpeed +"px";
}
}
}, 10);
function willImpactMovingLeft(a, b){
var docA = document.getElementById(a);
var docB = document.getElementById(b[0]);
for(var i=0;i<b.length;i++){
docB = document.getElementById(b[i]);
if((docA.offsetTop>docB.offsetTop&&docA.offsetTop<docB.offsetTop+docB.offsetHeight)||(docA.offsetTop+docA.offsetHeight>docB.offsetTop&&docA.offsetTop+docA.offsetHeight<docB.offsetTop+docB.offsetHeight)){//vertical check
if(docA.offsetLeft+docA.offsetWidth>docB.offsetLeft+runSpeed){
if(docA.offsetLeft-runSpeed<docB.offsetLeft+docB.offsetWidth){
return docB.offsetLeft+docB.offsetWidth;
}
}
}
}
return false;
}
function willImpactMovingRight(a, b){
var docA = document.getElementById(a);
var docB = document.getElementById(b[0]);
for(var i=0;i<b.length;i++){
docB = document.getElementById(b[i]);
if((docA.offsetTop>docB.offsetTop&&docA.offsetTop<docB.offsetTop+docB.offsetHeight)||(docA.offsetTop+docA.offsetHeight>docB.offsetTop&&docA.offsetTop+docA.offsetHeight<docB.offsetTop+docB.offsetHeight)){//vertical check
if(docA.offsetLeft>docB.offsetWidth+docB.offsetLeft-runSpeed){
if(docA.offsetLeft+docA.offsetWidth+runSpeed<=docB.offsetLeft){
CUBE.textContent = "WIMR";
return docB.offsetLeft-docA.offsetWidth;
}
}
}
}
return false;
}
</script><style>.cube{height:50px;width:50px;background-color:red;position:absolute;top:500px;left:500px;}.ground1{height:10px;width:100%;background-color:black;position:absolute;top:600px;left:0;}.ground2{height:150px;width:10px;background-color:black;position:absolute;top:450px;left:700px;}</style></body></html>
<!-- end snippet -->
it looks like a lot, but the only problem is the last if statement. Even when the condition is true, it still skips the return number and returns false. Any idea why?
P.S. **I CANNOT USE THE CONSOLE, I AM ON A MANAGED COMPUTER** |
Based on your question, it seems like you are really new to using Redis as a vector database. For that reason, the first thing I suggest is to install **redis-stack-server** on an instance outside of Kubernetes before you attempt this in a Kubernetes environment, and to verify connectivity using the ACL feature available since Redis 6. `ACLs` allow named users to be created and assigned fine-grained permissions. The advantage of ACLs is that they limit certain connections in terms of the commands that can be executed and the keys that can be accessed. A connecting client is required to provide a username and password, and if authentication succeeds, the connection is associated with the given user and the limits that user has. From within the server where you installed redis-stack-server, run the redis-cli utility on localhost and validate that the default user exists:
> ACL LIST
"user default on nopass sanitize-payload ~* &* +@all"
Here it indicates the default user is active, requires no password, has access to all keys and can access all commands. You can create your own user, and assign them permissions:
> ACL SETUSER langchain on >secret allcommands allkeys
Here we create a user called langchain, set them as active, define a password of secret, and give them access to all keys and commands.
Note, depending on whether you eventually want to validate this connection from outside of a Kubernetes cluster: protected mode is enabled by default, which means that from a different network you will still face issues connecting. In Rocky Linux, that can be addressed by editing `/etc/redis-stack.conf` and setting `protected-mode no`, which disables protected mode. Since version 3.2.0, Redis enters a protected mode when it is executed with the default configuration and without any password. This was designed as a preventative guardrail, thus only allowing replies to queries from the loopback interface.
Now with access set up correctly, you can verify connectivity both from the cli and from the Python script directly. From cli:
    redis-cli -h [host] -p 6379 --user [user] --pass [password]
From Python Script:
    import os
    import redis
    from dotenv import load_dotenv
    load_dotenv()
    redis_config = {
        'host': os.environ['REDIS_HOST'],
        'port': os.environ['REDIS_PORT'],
        'decode_responses': True,
        'health_check_interval': 30,
        'username': os.environ['REDIS_USERNAME'],
        'password': os.environ['REDIS_PASSWORD']
    }
    client = redis.Redis(**redis_config)
    res = client.ping()
    print(f'PING: {res}')
    # >>> PING: True
At this point, you should understand the authentication process of using redis-stack-server (which is what is layered in the Docker image redis/redis-stack:latest you referenced in your question). I want to draw your attention to which modules redis-stack-server loads. If you cat the /etc/redis-stack.conf file, you will see the loaded modules:
$ sudo cat /etc/redis-stack.conf
port 6379
daemonize no
protected-mode no
loadmodule /opt/redis-stack/lib/rediscompat.so
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/rejson.so
loadmodule /opt/redis-stack/lib/redisbloom.so
loadmodule /opt/redis-stack/lib/redisgears.so v8-plugin-path /opt/redis-stack/lib/libredisgears_v8_plugin.so
I want to draw your attention specifically to **redisearch**. It is a module that extends Redis with **vector similarity search** features. If you check that module's GitHub page, notice it is tagged "vector-database". What does that mean? You can use Redis's support for Hierarchical Navigable Small World (HNSW) ANN or KNN (K Nearest Neighbor) search for vector embeddings.
The bottom line is by installing Redis this way, at least first, you can get a deeper understanding of what it is installing, how what it is installing works, and how it functions as a vector store. Once you grasp those concepts, you can then decide to use your own or the one available in Docker Hub: https://hub.docker.com/r/redis/redis-stack. Either way, you end up with a Docker image that can be deployed in a Kubernetes cluster or preferably outside of the cluster (Kubernetes pods are stateless; **StatefulSet** is an alternative option). But I recommend **keeping the database outside of k8 cluster**.
Now, you would need to configure your application to be deployed in the Kubernetes cluster. In the context of Langchain, you typically use a `document loader` (e.g. from langchain.document_loaders.pdf import PyPDFLoader) to load the raw documents. Then you typically use a `text splitter` to break up the large documents into chunks with a specified chunk size, chunk overlap and separators (e.g. from langchain.text_splitter import RecursiveCharacterTextSplitter). And then you load the `embeddings model` (e.g. from langchain.embeddings import HuggingFaceEmbeddings). In this example, I am using HuggingFace embeddings. Then you would use the Redis vectorstore that langchain provides:
    import os
    from langchain.document_loaders.pdf import PyPDFLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings.huggingface import HuggingFaceEmbeddings
    from langchain_community.vectorstores.redis import Redis
    loader = PyPDFLoader('path-to-doc')
    raw_documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20, separators=['\n'])
    # split_documents returns Document objects, so use from_documents
    texts = text_splitter.split_documents(raw_documents)
    embeddings = HuggingFaceEmbeddings()
    rds = Redis.from_documents(
        texts,
        embeddings,
        redis_url=os.environ['REDIS_URL'],
        index_name="users",
    )
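The `redis_url` above is read from the environment. Here is a minimal, hedged sketch of composing that URL for the ACL user created earlier; the host name is a placeholder, and with ACLs the username goes before the password in the `redis://user:pass@host:port` scheme:

```python
import os
from urllib.parse import quote

# Hypothetical values matching the ACL user created above; replace with yours.
user, password = "langchain", "secret"
host, port = "redis.example.internal", 6379

# quote() protects special characters that may appear in passwords.
redis_url = f"redis://{quote(user)}:{quote(password)}@{host}:{port}"
os.environ["REDIS_URL"] = redis_url
print(redis_url)
```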
Hence, our application is using Redis as a vector store. Now build a Docker image out of this. This Python application should **be part of your k8 cluster**: it should be deployed in a Deployment with a ReplicaSet that defines the number of pods to run, and it should specify readiness and liveness probes as well to health-check the pods in your Deployment.
At this point, with the knowledge you now have, you can deploy this in a GKE cluster. The [Deploy an app in a container image to a GKE cluster][1] guide goes through the specifics of taking the application (in our case the Python app), containerizing the app with Cloud Build, creating a GKE cluster, and deploying to GKE.
[1]: https://cloud.google.com/kubernetes-engine/docs/quickstarts/deploy-app-container-image#python_1 |
I have a table called employees with the following columns:
    +----------------+---------------+-----------+---------+
    | Field          | Type          | NULL      | Key     |
    +----------------+---------------+-----------+---------+
    | employee_id    | int           | NO        | PRI     |
    | first_name     | varchar(20)   | YES       |         |
    | last_name      | varchar(25)   | NO        |         |
    | email          | varchar(100)  | NO        |         |
    | phone_number   | varchar(20)   | YES       |         |
    | hire_date      | date          | NO        |         |
    | job_id         | int           | NO        | MUL     |
    | salary         | decimal(8,2)  | NO        |         |
    | manager_id     | int           | YES       | MUL     |
    | department_id  | int           | YES       | MUL     |
    +----------------+---------------+-----------+---------+
I want to rank the department with most employees, here was my query:
```
SELECT department_id, COUNT(employee_id) no_employee,
DENSE_RANK()
OVER (PARTITION BY department_id
ORDER BY COUNT(employee_id) DESC ) Rank_
FROM employees
GROUP BY department_id;
```
However, the result ranked all the departments "1", no matter how many employees they had, like below:
+---------------+---------------+-------+
    |department_id | no_employee | Rank_ |
+---------------+---------------+-------+
| 1 | 1 | 1 |
| 2 | 2 | 1 |
| 3 | 6 | 1 |
At first, I did not add the GROUP BY clause to the query, but then the system showed a non-aggregated column error.
Can someone help me with this?
I expect the result will be like this (the data may not be correct)
+---------------+---------------+-------+
    |department_id | no_employee | Rank_ |
+---------------+---------------+-------+
| 1 | 10 | 1 |
| 2 | 9 | 2 |
| 3 | 8 | 3 | |
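For context on the symptom above: `PARTITION BY department_id` makes each grouped department row its own single-row partition, so every department is ranked 1 within its own partition; ranking departments against each other only needs `ORDER BY` inside the `OVER` clause. A hedged sketch of that form, reproduced here with Python's built-in `sqlite3` (which also supports `DENSE_RANK()` since SQLite 3.25) rather than MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (employee_id INTEGER, department_id INTEGER)")
# 10 employees in dept 1, 9 in dept 2, 8 in dept 3
depts = [1] * 10 + [2] * 9 + [3] * 8
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [(i, d) for i, d in enumerate(depts)])

# No PARTITION BY: rank all grouped rows against each other.
result = con.execute("""
    SELECT department_id, COUNT(employee_id) AS no_employee,
           DENSE_RANK() OVER (ORDER BY COUNT(employee_id) DESC) AS rank_
    FROM employees
    GROUP BY department_id
    ORDER BY rank_
""").fetchall()
print(result)
```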
DENSE-RANK to rank the department with most employees |
|mysql|rank| |
I have a numpy array and corresponding row and column indices:
```
matrix = np.array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
row_idx = np.array([0, 0, 0,
0, 0, 0,
0, 0, 0,])
col_idx = np.array([0, 1, 2,
0, 1, 2,
0, 1, 2])
```
I would like to unravel the matrix by the groups specified by row_idx and col_idx. In the case of this example, the row_idx are all zero, so the elements would unravel by columns:
```
result = np.array([0, 3, 6, 1, 4, 7, 2, 5, 8])
```
The problem with ravel is that it does not generalise to where the matrix has areas that are grouped by rows and other areas by columns based on the row_idx and col_idx. There can be a combination of both row and column grouping, as below:
In this example
```
matrix = np.array([[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10, 11]])
row_idx = np.array([0, 0, 0, 0,
0, 0, 0, 0,
1, 1, 1, 1])
col_idx = np.array([0, 0, 1, 2,
0, 0, 1, 2,
0, 0, 1, 2])
```
The result array is the top-left 2x2, [0, 1, 4, 5], followed by the top-right 1x2, [2, 3], and so on based on the row and column groupings.
```
result = np.array([0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 10, 11])
```
I have tried aggregate() from the numpy_groupies package, but it is both slow and returns an array combining np.arrays and ints (which makes further manipulation difficult and slow). |
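Not an answer to the performance question, but the first (column-unravel) example can be expressed without `numpy_groupies` as a single `np.lexsort` over the group labels plus the original coordinates; whether this generalises to the mixed row/column groupings depends on the exact ordering wanted inside each group, so treat it as a sketch:

```python
import numpy as np

matrix = np.array([[0, 1, 2],
                   [3, 4, 5],
                   [6, 7, 8]])
row_idx = np.zeros(9, dtype=int)
col_idx = np.tile(np.arange(3), 3)

flat = matrix.ravel()
rows, cols = np.divmod(np.arange(flat.size), matrix.shape[1])
# np.lexsort sorts by its LAST key first: row group, then column group,
# then original row, then original column.
order = np.lexsort((cols, rows, col_idx, row_idx))
print(flat[order])
```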
Suppose I have two entities : StudentDO and ParamDO. ParamDO is an entity that contains parameter code & values that are used throughout the application.
I have already cached this entity using EhCache as Second Level Cache.
@Entity
@Table(name="PARAM")
@jakarta.persistence.Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class ParamDO
{
@Column(name="TYPE")
private String type;
@Column(name="CODE")
private String code;
@Column(name="VALUE")
private String value;
}
Student Entity is associated to Param like this :
@Entity
@Table(name="STUDENT")
class StudentDO
{
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "SUBJECT_ID")
private ParamDO subject;
}
My problem is that despite caching ParamDO, when querying StudentDO the associated ParamDO entity is still being fetched from the DB. Is there a way this can be prevented?
I have already set these properties in application.yml -
spring.jpa.properties.hibernate.cache:
use_second_level_cache: true
use_query_cache: true
From the logs I can see that a cache is created for ParamDO in EhCache.
What am I missing? |
Convert C# DateTime.Ticks to Bigquery DateTime Format |
1. Validate that the connection string and the Subscription and Topic names used in the function code are correct. Get the connection string from the Service Bus Namespace to use in the function code.

2. Check whether the messages are moving to the dead-letter queue or are being processed by another function.
---
I have followed the steps below and was able to trigger the Service Bus Topic triggered Azure function.
- Created a Service Bus Topic trigger Azure Function.
- Created a Service Bus, Topic and a subscription inside the Topic.

**Code Snippet:**
```csharp
[FunctionName("Function1")]
public void Run([ServiceBusTrigger("mytopic", "mysubscription", Connection = "demo")]string mySbMsg, ILogger log)
{
log.LogInformation($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
```
**local.settings.json:**
```json
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"FUNCTIONS_WORKER_RUNTIME": "dotnet",
"demo": "Endpoint=sb://<ServiceBus_name>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=GRWoe2XXXXXXXXXXXXXXbEd8Ing="
}
}
```
- Sending the messages:

Console Output:
```csharp
Functions:
Function1: serviceBusTrigger
For detailed output, run func with --verbose flag.
[2024-03-31T10:41:54.990Z] Host lock lease acquired by instance ID '000000000000000000000000F72731CC'.
[2024-03-31T10:42:15.389Z] Executing 'Function1' (Reason='(null)', Id=b57524a6-4fb8-4586-9bfb-c88e23d137bb)
[2024-03-31T10:42:15.394Z] Trigger Details: MessageId: 129a51488e34478db8ff88c43308da8b, SequenceNumber: 1, DeliveryCount: 1, EnqueuedTimeUtc: 2024-03-31T10:42:12.1620000+00:00, LockedUntilUtc: 2024-03-31T10:43:12.2400000+00:00, SessionId: (null)
[2024-03-31T10:42:15.433Z] C# ServiceBus topic trigger function processed message: Hello
[2024-03-31T10:42:15.490Z] Executed 'Function1' (Succeeded, Id=b57524a6-4fb8-4586-9bfb-c88e23d137bb, Duration=205ms)
``` |
I don't know how to solve it.
Logcat:
java.lang.NullPointerException: Attempt to invoke virtual method 'int androidx.constraintlayout.widget.ConstraintLayout.getWidth()' on a null object reference
at com.example.cashapp.Fragments.HomeFragment.shineStart(HomeFragment.java:406)
at com.example.cashapp.Fragments.HomeFragment.access$000(HomeFragment.java:78)
at com.example.cashapp.Fragments.HomeFragment$1$1.run(HomeFragment.java:134)
at android.os.Handler.handleCallback(Handler.java:942)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:7872)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:936)
code:
Java:
```
layout = getActivity().findViewById(R.id.layout);
Shine = getActivity().findViewById(R.id.shine);
ScheduledExecutorService executorService =
Executors.newSingleThreadScheduledExecutor();
executorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
getActivity().runOnUiThread(new Runnable() {
@Override
public void run() {
shineStart();
}
});
}
}, 3,3, TimeUnit.SECONDS);
private void shineStart() {
Animation animation = new TranslateAnimation(
0,
layout.getWidth()+Shine.getWidth(),0,0);
animation.setDuration(550);
animation.setFillAfter(false);
animation.setInterpolator(new AccelerateDecelerateInterpolator());
Shine.startAnimation(animation);
}
```
XML:
```
<androidx.constraintlayout.widget.ConstraintLayout
android:id="@+id/layout"
android:layout_width="126dp"
android:layout_height="26dp"
android:layout_marginStart="14dp"
android:layout_marginEnd="14dp"
android:background="@drawable/header"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent">
<androidx.constraintlayout.widget.ConstraintLayout
android:id="@+id/shine"
android:layout_width="50dp"
android:layout_height="70dp"
android:layout_marginStart="-50dp"
android:src="@drawable/shine_drawable"
tools:ignore="MissingConstraints" />
```
What exactly is going on, and how do I solve it?
I really don't know what the error is; I tried some things, but the error shows again.
Any suggestions?
Attempt to invoke virtual method 'int androidx.constraintlayout.widget.ConstraintLayout.getWidth() what problem? |
|java|android|crash| |
So I was given this block of code where `call by reference` had to be used for every method call, and I had to give the output in the format
`y:[result]; y: [result]; y:[result]; x:[result]; a:[result]`
```
import java.util.Arrays;

public class Main {
static int x = 2;
public static void main(String[] args) {
int[] a = {17, 43, 12};
foo(a[x]);
foo(x);
foo(a[x]);
System.out.println("x:" + x);
System.out.println("a:" + Arrays.toString(a));
}
static void foo(int y) {
x = x - 1;
y = y + 2;
if (x < 0) {
x = 5;
} else if (x > 20) {
x = 7;
}
System.out.println("y:" + y);
}
}
```
I'm not 100% sure how call by reference works in some cases, and I'm not sure which result is the right one. Anyway, here is one:
1) `foo(a[x])` is called with `a[2]` (which is 12). y becomes 12 + 2 = 14. x is decremented to 1.
`foo(x)` is called with x (which is 1). Both x and y point to the value 1 of x. x is decremented to 0, and then x becomes 3 because y = y + 2 and y was pointing at the value 1 of x.
`foo(a[x])` is called with `a[3]` (which doesn't exist). x is decremented to 2.
The array a transforms into `17,43,14`.
So the results would be like:
`y : 14; y : 3; y : ?; x : 2; a : 17,43,14`
I think the thing that confuses me the most is: in the case of foo(x), does y point at the variable x, or at the value of x at the moment the method is called?
Any help is appreciated. Thanks a lot in advance.
|
I use this to get `$purchase->getExpiryTimeMillis();`.
Please note I'm using an older version (2.10.1) of [google-api-php-client][1]:
$packageName = 'your package name';
$productId = 'your product id';
$token = 'purchase token';
$client = new \Google_Client();
$path = 'googleApi/service.json';
$client->setAuthConfig($path);
$client->addScope('https://www.googleapis.com/auth/androidpublisher');
$service = new \Google_Service_AndroidPublisher($client);
$purchase = $service->purchases_subscriptions->get($packageName, $productId, $token);
$autoRenewing = $purchase->getAutoRenewing();
$acknowledgementState = $purchase->getAcknowledgementState();
$expiryTimeMillis = $purchase->getExpiryTimeMillis();
echo('$expiryTimeMillis: '. $expiryTimeMillis. ' | $autoRenewing:'. $autoRenewing. ' | $acknowledgementState: '. $acknowledgementState);
dd($purchase);
[1]: https://github.com/googleapis/google-api-php-client |
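`getExpiryTimeMillis()` returns the expiry as a string of epoch milliseconds. A small sketch of turning it into a timezone-aware datetime, shown in Python rather than PHP for brevity (the sample value is hypothetical):

```python
from datetime import datetime, timezone

expiry_time_millis = "1712345678000"  # example value from getExpiryTimeMillis()
# Divide by 1000 to go from milliseconds to seconds, and keep the result in UTC.
expiry = datetime.fromtimestamp(int(expiry_time_millis) / 1000, tz=timezone.utc)
print(expiry.isoformat())
```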
After installation, I changed only 4 files. Then I restarted the daemon (daemon-restart) and bind9 (bind9.service). A startup error pops up (see image).
[ ](https://i.stack.imgur.com/jKjxh.png)
I have edited the following documents:
>
```
=======================db.local=======================
;
; BIND data file for local loopback interface
;
$TTL 604800
prudnikov.ru. IN SOA ns1.prudnikov.ru.
root.prudnikov.ru. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
prudnikov.ru. IN NS ns1.prudnikov.ru.
prudnikov.ru. IN MX 10 mail.prudnikov.ru.
prudnikov.ru. IN A 10.0.10.1
ns1 IN A 10.0.10.1
www IN CNAME prudnikov.ru.
mail IN A 10.0.10.1
=======================db.255=======================
;
; BIND reverse data file for broadcast zone
;
$TTL 604800
10.0.10.in-addr.arpa. IN SOA ns1.prudnikov.ru.
root.prudnikov.ru. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
10.0.10.in-addr.arpa. IN NS ns1.prudnikov.ru.
1.10.0.10.in-addr.arpa. IN PTR prudnikov.ru.
=======================named.conf.options=======================
options {
directory "/var/cache/bind";
// If there is a firewall between you and nameservers you want
// to talk to, you may need to fix the firewall to allow multiple
// ports to talk. See http://www.kb.cert.org/vuls/id/800113
// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.
forwarders {
8.8.8.8;
};
listen-on port 53 { 10.0.10.0/24; };
allow-query { 10.0.10.0/24; };
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;
auth-nxdomain no;
listen-on-v6 { any; };
};
=======================named.conf.local=======================
//
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
zone "prudnikov.ru." {
type master;
file "/etc/bind/db.local";
};
zone ""10.0.10.in-addr.arpa" {
type master;
file "/etc/bind/db.255";
};
```
I reinstalled and restarted the services. In the service unit file, I added the line "ExecStart=/usr/sbin/named -4 -f $OPTIONS". The file is here:
```
===================/etc/systemd/system/bind9.service=================
[Unit]
Description=BIND Domain Name Server
Documentation=man:named(8)
After=network.target
Wants=nss-lookup.target
Before=nss-lookup.target
[Service]
Type=forking
EnvironmentFile=-/etc/default/named
ExecStart=/usr/sbin/named $OPTIONS
ExecReload=/usr/sbin/rndc reload
ExecStop=/usr/sbin/rndc stop
Restart=on-failure
[Install]
WantedBy=multi-user.target
Alias=bind9.service
``` |
parameter values only being sent to certain columns in google sheet? |
|javascript|google-apps-script| |
import numpy as np

terms = ['A.B3.C4', 'A.B3.C5', 'A.B4.C6', 'A.B5.C6', 'A1.B1.C1.D1', 'A1.B1.C1.D2', "D1"]
def rem_mid_words(txt1):
if "." in txt1:
l1 = txt1.split(".")
txt2 = f"{l1[0]}.{l1[-1]}" # pos 0 term & . & (-1 =) last term
return txt2
else:
pass # string does not contain "."
term_count = {rem_mid_words(txt1): 0 for txt1 in terms} # initial counting dictionary
for x in terms:
key = rem_mid_words(x)
term_count[key] += 1
new_terms = [rem_mid_words(txt1)
if term_count[rem_mid_words(txt1)] < 2
else txt1 for txt1 in terms]
print(new_terms)
# start of adjusted answer
def rem_one_word(txt1, len1, pos):
# Example "A1.B2.C3.D4.E5"
# pos 0. 1. 2. 3. 4
# length 5
# if len1 is 5 and pos = 1 then remove B2 and return A1.C3.D4.E5
# otherwise return original txt1
if txt1 != None and "." in txt1:
l1 = txt1.split(".")
if len(l1) == len1:
txt1 = ".".join(l1[:pos] + l1[(1+pos):])
return txt1
def word_len(txt1):
if "." in txt1:
return len(txt1.split("."))
else: return 0
# if the max length of all strings in terms is
def shorten_words(terms): # input list of strings
# get longest word in terms of "."
# iterate by reducing length by one and replace if count < 2
max_len = max([word_len(txt1) for txt1 in terms])
terms1 = terms
for len1 in np.arange(max_len, 2, -1):
print("string length", len1)
for pos in (1 + np.arange((len1)-2)):
print("pos", pos)
temp_words = [rem_one_word(txt1, len1, pos) for txt1 in terms1]
temp_count = {x: 0 for x in temp_words}
for x in temp_words: temp_count[x] += 1
terms2 = [temp_words[i] if temp_count[temp_words[i]] < 2
else terms1[i] for i in range(len(terms1))]
terms1 = terms2
return terms1
terms1 = shorten_words(terms)
for i in range(len(terms)):
print(terms[i], " ", terms1[i])
|
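The first half of the code above (collapse each dotted term to `first.last` unless that shortened form collides) can be restated more compactly with `collections.Counter`, handling the no-dot case without returning `None`; this is a sketch of the same idea, not a replacement for the full `shorten_words` pass:

```python
from collections import Counter

terms = ['A.B3.C4', 'A.B3.C5', 'A.B4.C6', 'A.B5.C6',
         'A1.B1.C1.D1', 'A1.B1.C1.D2', 'D1']

def first_last(t):
    # "A.B3.C4" -> "A.C4"; terms without "." are returned unchanged
    parts = t.split('.')
    return f"{parts[0]}.{parts[-1]}" if len(parts) > 1 else t

counts = Counter(first_last(t) for t in terms)
# Shorten only when the shortened form is unique across all terms.
short = [first_last(t) if counts[first_last(t)] < 2 else t for t in terms]
print(short)
```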
[![Screen shot of error message][1]][1]

The syntax error does not appear until I type the = for the variable assignment on the next line. The app is Watchmaker.
Complete code
```
-- Calculate rotation angle based on current time
function update_rotation()
local current_hour = os.date("%H")
local rotation_angle = (current_hour / 24) * 360
return rotation_angle
end
-- Update rotation every minute
function on_minute()
local rotation = update_rotation()
your_control.rotation = rotation
end
```
I am expecting this code to rotate a watch hand around the face once every 24 hours.
[1]: https://i.stack.imgur.com/i716p.jpg |
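As a sanity check on the formula itself, independent of the Lua syntax issue: `(hour / 24) * 360` maps the 24-hour day onto one full revolution. Note that Lua's `os.date("%H")` returns a string; Lua usually coerces it in arithmetic, but converting it with `tonumber` first is safer. The mapping, sketched in Python:

```python
def rotation_angle(hour: float) -> float:
    # one full 360-degree revolution per 24 hours
    return (hour / 24) * 360

print([rotation_angle(h) for h in (0, 6, 12, 18)])
```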
I'm struggling to get the correct keyboard to show and be able to insert numbers and the decimal symbol but I'm faced with a strange situation.
This is what i got with Android Studio using Java code:
[![enter image description here][1]][1]
Here the xml code to show this keypad:
<EditText
android:id="@+id/editTextNumberDecimal"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:ems="10"
android:inputType="numberDecimal"
tools:layout_editor_absoluteX="0dp"
tools:layout_editor_absoluteY="1dp" />
So in this case everything works fine: I can type numbers and the dot to separate decimals.
Using Visual Studio with Net Xamarin and Syncfusion the keypad change like this:
[![enter image description here][2]][2]
The keypad shows the negative sign and the dot.
I can insert the negative sign by double-tapping it as the first char, but not the dot.
The dot never gets shown in the textbox.
here is the xml code to call the numeric keypad:
<control:BorderlessEntry
    x:Name="StretchRatioLenEntry"
    Placeholder="(Es. 3,3)"
    Keyboard="Numeric"
    Style="{StaticResource BorderlessEntryStyle}"
    Text="{Binding StretchRatioLen.Value}"
    PlaceholderColor="#7B7B7B">
    <Entry.Behaviors>
        <behaviour:NumberEntryBehavior IsValid="{Binding StretchRatioLen.IsValid}" />
    </Entry.Behaviors>
</control:BorderlessEntry>
So do you have any suggestions?
Thanks for your help.
Here is a picture from the emulator:
[![enter image description here][3]][3]
As you can see, this is what I want, but on the real device the keypad changes.
[1]: https://i.stack.imgur.com/oxV0Y.png
[2]: https://i.stack.imgur.com/7wDQ1.png
[3]: https://i.stack.imgur.com/VSEvj.png |
Vercel showing Internal Server Error after deploying express app successfully |
|express|next.js|ejs|vercel|internal-server-error| |
null |
As stated in the top-rated answer, it is not possible to directly configure a fallback font via JavaFX's CSS.
It is possible, however, when leveraging the internal JavaFX API, by directly manipulating the underlying `FontResource` via the Reflection API.
Here is a [demo](https://github.com/duoduobingbing/javafx-custom-font-fallback-demo) that shows how this can be done with JavaFX 21.
In a basic Remix V2 app, I need help understanding if the following is expected behavior, a bug in V2, or possibly a missing configuration option or setting.
I could not find anything related to this issue in the Remix documentation.
I created a demo Remix V2 app by running `npx create-remix@latest`.
I wrote a simulated API backend method for basic testing, which simply returns JSON data as retrieved from a JSON file:
export async function getStoredNotes() {
const rawFileContent = await fs.readFile('notes.json', { encoding: 'utf-8' });
const data = JSON.parse(rawFileContent);
const storedNotes = data.notes ?? [];
return storedNotes;
}
The client side consists of 2 simple routes that use `NavLink` components for navigation between the 2 routes:
import { NavLink } from '@remix-run/react';
...
<NavLink to="/">Home</NavLink>
<NavLink to="/notes">Notes</NavLink>
In the `notes` route, I have the following `loader` function defined, which makes a call to my simulated API method:
export const loader:LoaderFunction = async () => {
try {
const notes = await getStoredNotes();
return json(notes);
} catch (error) {
return json({ errorMessage: 'Failed to retrieve stored notes.', errorDetails: error }, { status: 400 });
}
}
In the `notes` main component function, I receive that data using the `useLoaderData` hook and attempt to print the returned JSON data to the console:
export default function NotesView() {
const notes = useLoaderData<typeof loader>();
console.log(notes);
return (
<main>
<NoteList notes={notes as Note[] || []} />
</main>
)
}
**When I run `npm run build` and subsequently `npm run serve`, everything works correctly:**
The initial page load receives the data successfully and prints the value to the console as JSON.
{"notes":[{"title":"my title","content":"my note","id":"2024-03-28T05:22:52.875Z"}]}
Navigating between the index route and back to the notes route by clicking on the `NavLink` components also works, such that I see the same data printed to the console on subsequent visits to the `notes` route.
**When I run in `dev` mode, however, using `npm run dev`, the following problem occurs:**
The initial page load coming from the dev server also loads correctly and prints the JSON to the console.
*Navigating client side* between the index route and back to the notes route using the `NavLink` components *causes an issue where the data printed to the console is not JSON.* Instead, I am seeing a strange output of an exported JavaScript array definition, quite literally like this:
export const notes = [
{
title: "my title",
content: "my note",
id: "2024-03-28T05:22:52.875Z"
}
];
Again, to be clear, this behavior only occurs when navigating client side using `NavLink` or `Link` elements while running `npm run dev`.
Is this expected behavior when running in `dev` mode?
|
null |
I have a google sheet with values on it. Think it like this:
| A header | col1 | header 3 |
| -------- | -------------- | --------- |
| First | | row |
| Second | | row |
I will have another data that will come and go to the 2nd column.
So I want to use `append_row` with specific column names, because after each process (in my code) I want to immediately add the result to my Google Sheet.
I have 2 columns like this. Previously, after my code completed (when all data was ready), I was adding that data with `worksheet.update` like this:
```python
headers = worksheet.row_values(1)
col1_index = headers.index('col1') + 1
col2_index = headers.index('col2') + 1
col1_list, col2_list = [], []
for item in result:
    col1_list.append(item['col1'])
    col2_list.append(item['col2'])
col1_transposed = [[item] for item in col1_list]
col2_transposed = [[item] for item in col2_list]
# chr(65 + index - 1) maps a 1-based column index to a letter (works up to column Z)
col1_range = '{}2:{}{}'.format(chr(65 + col1_index - 1), chr(65 + col1_index - 1),
                               len(col1_list) + 1)
col2_range = '{}2:{}{}'.format(chr(65 + col2_index - 1), chr(65 + col2_index - 1),
                               len(col2_list) + 1)
worksheet.update(col1_range, col1_transposed)
worksheet.update(col2_range, col2_transposed)
```
But now I want to append my data row by row to specific columns. After each process I will have data like this:
{'col1': 'value1', 'col2': 'value2'}
and `value1` should end up in the `col1` column in the first row.
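To illustrate the shape I'm after, here is a rough sketch of aligning such a dict to the sheet's header row before appending. `build_row` is just a helper name I made up, and the commented-out `append_row` call is how I would expect to send the result:

```python
def build_row(headers, record):
    """Align a dict of {column name: value} to the sheet's header order."""
    return [record.get(h, "") for h in headers]

headers = ["A header", "col1", "col2"]   # in practice: worksheet.row_values(1)
row = build_row(headers, {"col1": "value1", "col2": "value2"})
print(row)   # ['', 'value1', 'value2']
# worksheet.append_row(row, value_input_option="USER_ENTERED")
```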
|
How to append row to specific columns with gspread? |
|python|google-sheets|gspread| |
Suppose I have two entities: `StudentDO` and `ParamDO`. `ParamDO` is an entity that contains parameter codes & values that are used throughout the application.
I have already cached this entity using EhCache as a second-level cache.
@Entity
@Table(name = "PARAM")
@jakarta.persistence.Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class ParamDO
{
    @Column(name = "TYPE")
    private String type;

    @Column(name = "CODE")
    private String code;

    @Column(name = "VALUE")
    private String value;
}
The `StudentDO` entity is associated with `ParamDO` like this:
@Entity
@Table(name = "STUDENT")
class StudentDO
{
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "SUBJECT_ID")
    private ParamDO subject;
}
My problem is that despite caching `ParamDO`, when querying `StudentDO` the associated `ParamDO` entity is still being fetched from the DB. Is there a way this can be prevented?
I have already set these properties in application.yml:

spring.jpa.properties.hibernate.cache:
  use_second_level_cache: true
  use_query_cache: true
From the logs I can see that a cache is created for ParamDO in EhCache.
What am I missing? |
Using `onclick` requires the function to be a property of the global (`window`) object, so you can add `window.mathfa = mathfa;` to the module.
Using `.addEventListener()` is generally considered better, however, as it's more flexible. |
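As a rough sketch of both options (the function body and the element id here are made-up stand-ins, not from the question):

```javascript
// Inside your module — mathfa's real body is in your code; this one is a stand-in.
function mathfa(x) {
  return x * 2;
}

// Option 1: expose the function on the global object so an inline
// onclick="mathfa(2)" attribute can resolve it.
// In a browser, globalThis is the window object.
globalThis.mathfa = mathfa;

// Option 2: attach the handler from the module itself, so nothing
// has to leak into the global scope (element id is hypothetical):
// document.getElementById("calcButton").addEventListener("click", () => mathfa(2));
```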
Problem starting the DNS server on Ubuntu 20.04 - bind9