|scala|reflection|macros|scala-macros|scala-3| |
I essentially want to write a program that takes F and decrements it until it's 0, displaying the results like: F E D C B A 9 8 7 6 5 4 3 2 1 0. Since I can't use system calls to print (write) hex digits, I abandoned that way of thinking. Wherever I go I'm told to use printf, since it doesn't require a conversion from number to character. However, printf seems to require a format parameter, usually something along the lines of `db '%x', 10, 0` within a label. That 10 usually being a LF makes it seemingly impossible to write a *horizontal* list; instead it writes a *vertical* list, which I really don't want.
So I tried to take this program:
```asm
extern printf
global main
section .data
format_specifier:
db '%x', 10, 0 ;format specifier for printf with LF
section .text
main:
mov rbx, qword 16
loop1: ;loop to decrement and print number in rsi
dec rbx
mov rdi, format_specifier
mov rsi, rbx
xor rax, rax
call printf
cmp rbx, qword 0
jne loop1
mov rax, 60
syscall
```
and replace the 10 in the format specifier with 0x20 (the ASCII space) to separate the hex digits as they are printed. This, of course, resulted in nothing being output whatsoever.
I have seen that LF flushes the output buffer, and I have tried using fflush to do the same, to no avail, as seen here:
```asm
extern printf
extern fflush
global main
section .data
format_specifier:
db '%x', 0x20, 0 ;format specifier for printf with a trailing space
section .text
main:
mov rbx, qword 16
loop1: ;loop to decrement and print number in rsi
dec rbx
mov rdi, format_specifier
mov rsi, rbx
xor rax, rax
xor rsi, rsi
call fflush
call printf
cmp rbx, qword 0
jne loop1
mov rax, 60
syscall
```
I'm unsure whether it is even the right idea to try and use fflush for my program or if I'm just going in the wrong direction. Please help. |
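For reference, the exact output I'm after can be sketched in plain Python (this is just an illustration of the target, not the NASM solution):

```python
# Build the target line: hex digits F down to 0, space-separated.
digits = " ".join(format(i, "X") for i in range(15, -1, -1))
print(digits)  # F E D C B A 9 8 7 6 5 4 3 2 1 0
```

Two C-library details seem relevant to the assembly attempts: stdout is usually line-buffered on a terminal, so a format string that never emits a newline only shows up once the buffer is flushed, and exiting through the raw `exit` syscall bypasses libc's exit-time flush. `fflush(NULL)` (a NULL stream pointer, i.e. a zeroed rdi before the call) flushes every open output stream. I also notice that in my second listing, rdi still holds the format string when `fflush` is called, so the argument isn't a valid `FILE*`.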
I want to add a worker, and then have his id and name go to another table to work with.
This doesn't work; I tried it:
```
router.post('/add_employee', (req, res) => {
    // Insert into puntoretuji table
    const sql = "INSERT INTO puntoretuji (name, salary, role) VALUES (?)";
    const values = [req.body.name, req.body.salary, req.body.role];
    con.query(sql, [values], (err, result) => {
        if (err) return res.json({ Status: false, Error: "Query Error" });
        return res.json({ Status: true });
    });
});

router.post('/worker_id', (req, res) => {
    const query = `SELECT MAX(id) AS max_id FROM puntoretuji`;
    const values = [req.body.id]
    const result = con.query(query, values)
    const maxId = result[0].max_id;
    // Insert into worker_data table
    const insertWorkerDataQuery = "INSERT INTO worker_data (worker_id, name) VALUES (?)";
    const workerDataValues = [maxId, req.body.name]; // Assuming current date for 'date'
    con.query(insertWorkerDataQuery, workerDataValues);
});
``` |
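For reference, the pattern I'm trying to reproduce, insert a row and then reuse its generated id, sketched with Python's sqlite3 (only as an illustration of the idea; the table names match mine but the "Ana" row is made up, and in the Node `mysql` driver the generated id is typically `result.insertId` in the insert callback rather than a separate `SELECT MAX(id)`):

```python
import sqlite3

# In-memory stand-in for the two tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE puntoretuji (id INTEGER PRIMARY KEY, name TEXT, salary REAL, role TEXT)")
con.execute("CREATE TABLE worker_data (worker_id INTEGER, name TEXT)")

# Insert the employee, then read the id generated by that very insert.
cur = con.execute("INSERT INTO puntoretuji (name, salary, role) VALUES (?, ?, ?)",
                  ("Ana", 100.0, "dev"))
worker_id = cur.lastrowid  # no SELECT MAX(id) needed, and no race condition
con.execute("INSERT INTO worker_data (worker_id, name) VALUES (?, ?)", (worker_id, "Ana"))
```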
|javascript|async-await|scheduled-tasks|generator|scheduler| |
I need to calculate a matrix exponential with a Taylor series using MPI. The matrix is small, 3 x 3 for example.
So far I have:
```
vector<vector<double>> matrixExp(const vector<vector<double>>& A) {
    int n = A.size();
    vector<vector<double>> E(n, vector<double>(n, 0));
    vector<vector<double>> T(n, vector<double>(n, 0));
    vector<vector<double>> localE(n, vector<double>(n, 0));
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (int i = 0; i < n; i++)
        E[i][i] = 1;
    for (int i = 0; i < n; i++)
        localE[i][i] = 0;
    T = E;
    for (int j = 1; j <= rank; j++)
    {
        T = matrixMult(T, A);
        T = matrixDiv(T, j);
    }
    localE = T;
    for (int i = rank + 1; i <= N; i += size) {
        for (int j = i; j < i + size; j++) {
            T = matrixMult(T, A);
            T = matrixDiv(T, j);
        }
        localE = matrixSum(localE, T);
    }
    MPI_Reduce(localE[0].data(), E[0].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(localE[1].data(), E[1].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(localE[2].data(), E[2].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    return E;
}
```
But I don't know how to optimize this:
```
for (int j = i; j < i + size; j++) {
    T = matrixMult(T, A);
    T = matrixDiv(T, j);
}
```
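For reference, here is the serial version of the Taylor accumulation I'm distributing, E = I + A/1! + A²/2! + ..., sketched in plain Python (illustration only, no MPI):

```python
def mat_mult(X, Y):
    # Plain O(n^3) multiply of two square matrices given as lists of lists.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matrix_exp(A, terms=20):
    # E accumulates the partial sum; T holds the running term A^k / k!.
    n = len(A)
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = mat_mult(T, A)
        T = [[x / k for x in row] for row in T]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

E = matrix_exp([[0.0, 1.0], [0.0, 0.0]])  # exp of a nilpotent matrix
print(E)  # [[1.0, 1.0], [0.0, 1.0]]
```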
Maybe it's impossible with this implementation.
![image](https://i.stack.imgur.com/fY1Zq.png) |
```
Bot is ready
/home/runner/BattleEx/.pythonlibs/lib/python3.10/site-packages/disnake/ext/commands/common_bot_base.py:464: RuntimeWarning: coroutine 'setup' was never awaited
setup(self)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Ignoring exception in on_ready
Traceback (most recent call last):
File "/home/runner/BattleEx/.pythonlibs/lib/python3.10/site-packages/disnake/client.py", line 703, in _run_event
await coro(*args, **kwargs)
File "/home/runner/BattleEx/main.py", line 13, in on_ready
await bot.load_extensions("cogz")
TypeError: object NoneType can't be used in 'await' expression
```
This is my redis.py
I don't understand why this is happening; there is nothing that can be NoneType here, I guess.
redis.py
```
import asyncio
import os

import aioredis
from disnake.ext import commands


class Redis(commands.Cog):
    """For Redis."""

    def __init__(self, bot: commands.Bot):
        self.bot = bot
        self.pool = None
        self._reconnect_task = self.bot.loop.create_task(self.connect())

    # all funcs here


async def setup(bot: commands.Bot):
    print("Here")
    redis_cog = Redis(bot)
    print("Setting up Redis cog...")
    await redis_cog.connect()
    await bot.add_cog(redis_cog)
```
I tried it with discord.py using `bot.load_extension("cogz,redis")`, but I am still getting the same error. |
You can't pass a function as a dictionary value here. This is actually quite logical: since the key is a column name, it assumes you already know this name and thus could craft the expected value yourself.
Thus you should use:
```
df.rename(columns={'Col 2': 'COL 2'})
```
If you know the value can be in a list/set of values you could also use a custom function:
```
def renamer(col):
    if col in {'Col 2'}:
        return col.upper()
    return col

df.rename(columns=renamer)
```
Output:
```
Col 0 Col 1 COL 2 Col 3
0 18 79 5 79
1 70 43 22 47
2 43 0 79 28
3 7 10 97 49
4 97 16 44 9
``` |
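A side note on the claim above: `rename` does accept a bare callable for `columns=`, applied to every label; it is only *inside* the dict that a function can't be used. A minimal sketch (the column names are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame([[1, 2]], columns=["Col 1", "Col 2"])
# A callable passed to `columns=` is applied to every column label.
renamed = df.rename(columns=str.upper)
print(list(renamed.columns))  # ['COL 1', 'COL 2']
```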
|vba|outlook| |
Small percentage of parameters cause very slow queries |
I wrote an embedded function inside my feature file, and on a conditional basis I want to call the function only if the first object of the data array doesn't match the dataNotFound definition.
Appreciate your help.
```
Scenario: xxxxxx
    #Getting data array from DB
    * def lenArray = data.length
    * def deleteTokens =
    """
    function(lenArray) {
      for (var i = 0; i < lenArray; i++) {
        karate.call('deleteTokensCreated.feature', {ownerId: data[i].owner_id, token: data[i].token});
      }
    }
    """
    * def dataNotFound = {"message": "Data not found!"}
    * def deletedTokens = call deleteTokens lenArray
```
`* eval if (data[0] != dataNotFound ) call deleteTokens lenArray` doesn't work.
Please consider that I am a business analyst who has been entrusted with a test automation task and has no prior experience with either Java or Karate.
|
Trying out the Morphia client to connect to MongoDB, I find that Morphia always makes more than one connection, even when we have strictly set both the min and max pool size to 1.
**What's the right way to use the connection pool and restrict it to 1 connection only?**
Code here:
```
public static void main(String[] args) {
ConnectionPoolSettings connectionPoolSettings = ConnectionPoolSettings.builder().minSize(1).maxSize(1).build();
MongoClient client = MongoClients.create( MongoClientSettings
.builder()
.applyConnectionString(new ConnectionString("mongodb://localhost:27017/test"))
.applyToConnectionPoolSettings(builder -> builder.applySettings(connectionPoolSettings))
.build());
MongoDatabase database = client.getDatabase("admin");
Document serverStatus = database.runCommand(new Document("serverStatus", 1));
new Thread(() -> {
while(true) {
Map connections = (Map) serverStatus.get("connections");
System.out.println("Number of connections " + connections.get("current"));
try {
TimeUnit.SECONDS.sleep(10);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
}).start();
try {
while(true)
TimeUnit.SECONDS.sleep(60);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
}
```
```
raaghu@FVFH8ET1Q05D ~ % netstat -anv | awk '$5 ~ "27017" {print}'| grep ESTABLISHED
tcp4 0 0 127.0.0.1.57851 127.0.0.1.27017 ESTABLISHED 337773 146988 27252 0 00002 00000008 000000000008ef3a 00000080 00000800 1 0 000001
tcp4 0 0 127.0.0.1.57850 127.0.0.1.27017 ESTABLISHED 407373 146988 27252 0 00002 00000008 000000000008ef21 00000080 00000800 1 0 000001
tcp4 0 0 127.0.0.1.57849 127.0.0.1.27017 ESTABLISHED 407373 146988 27252 0 00002 00000008 000000000008ef20 00000080 00000800 1 0 000001
```
|
Is there some way to use printf to print a horizontal list of decrementing hex digits in NASM assembly on Linux? |
|linux|assembly|nasm| |
I had a similar problem where I wasn't seeing a vertical scrollbar. I found that if I set the row height in the grid containing the CollectionView from "Auto" to "*", I would then see a vertical scrollbar.
I'm using DataTables.js version 1.9 with a server-side implementation. Pagination is working fine, but the start param resets to 0 if I sort on any column.
For example, if I go to the 3rd or 4th page, the value of start should be persistent, but DataTables sends 0 every time on subsequent HTTP requests. I checked the length (no. of records per page) param; it is correct when sorting, but the start param resets to 0. Please suggest how to handle this.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/yF2Cy.png |
In DataTables, the start value resets to 0 when sorting a column |
|javascript|jquery|datatables|pagination|jquery-datatables-editor| |
As of today you can only run JavaScript, TypeScript or Python code on Cloud Functions. Consequently, you will have to convert your code to one of those languages.
You can use the **[Dart compile-to-JS compiler][1]** to convert and generate JS code from your Dart code.
```console
dart compile js
```
Beware that this compiles your Dart code to deployable JavaScript, which may not always be compatible with Node.js.
[1]: https://dart.dev/tools/dart-compile#js |
I'm using Microsoft Message Analyzer to trace network traffic. However, I cannot find an option to disable the resolution of IP addresses to their domain names. This function was called AutoIP (Auto IP Address Resolution).
I'm aware of Wireshark, but what's crucial for me is to display the process name alongside other network information.
If anyone has any idea how this can be turned off, I would be really grateful.
|
To print each string from the `c` array with its corresponding double array from the `d` 2D array on the same line, iterate through both arrays simultaneously. Here's how you can modify your method:

```java
public static void printData(String[] c, double[][] d) {
    for (int i = 0; i < c.length; ++i) {
        // Print the string from the c array
        System.out.print(c[i] + " ");
        // Print the corresponding double array from the d 2D array
        for (int col = 0; col < d[i].length; ++col) {
            System.out.print(d[i][col] + " ");
        }
        // Move to the next line for the next string and its corresponding double array
        System.out.println();
    }
}
```

This modified method will print each string from the `c` array along with the values of the corresponding row of the `d` 2D array on the same line.
|
I am plotting some lines with Seaborn:
```python
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# dfs: dict[str, DataFrame]
fig, ax = plt.subplots()
for label, df in dfs.items():
    sns.lineplot(
        data=df,
        x="Time step",
        y="Loss",
        errorbar="sd",
        label=label,
        ax=ax,
    )
ax.set(xscale='log', yscale='log')
```
The result looks like [this](https://i.stack.imgur.com/FanHR.png).
Note the clipped negative values in the lower error band of the "effector_final_velocity" curve, since the standard deviation of the loss is larger than its mean, for those time steps.
However, if `ax.set(xscale='log', yscale='log')` is called *before* the looped calls to `sns.lineplot`, the result looks like [this](https://i.stack.imgur.com/JVGG4.png).
I'm not sure where the unclipped values are arising.
Looking at the source of `seaborn.relational`: at the end of `lineplot`, the `plot` method of a `_LinePlotter` instance is called. It plots the error bands by passing the already-computed standard deviation bounds to `ax.fill_between`.
Inspecting the values of these bounds right before they are passed to `ax.fill_between`, the negative values (which would be clipped) are still present. Thus I had assumed that the "unclipping" behaviour must be something matplotlib is doing during the call to `ax.fill_between`, since `_LinePlotter.plot` appears to do no other relevant post-transformations of any data before it returns, and `lineplot` returns immediately.
However, consider a small example that calls `fill_between` where some of the lower bounds are negative:
```python
import numpy as np
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
np.random.seed(5678)
ax.fill_between(
    np.arange(10),
    np.random.random((10,)) - 0.2,
    np.random.random((10,)) + 0.75,
)
ax.set_yscale('log')
```
Then it makes no difference if `ax.set_yscale('log')` is called before `ax.fill_between`; in both cases the result is [this](https://i.stack.imgur.com/ctRUi.png).
I've spent some time searching for answers about this in the Seaborn and matplotlib documentation, and looked for answers on SO and elsewhere, but I haven't found any information about what is going on here.
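One workaround I'm considering (my assumption, not something from the docs): clip the lower band to a small positive floor before it ever reaches `fill_between`, so the log-scaled axis has nothing negative to render. The band values below are made up for illustration:

```python
import numpy as np

# Hypothetical band bounds; in the loss plots these come from mean ± sd.
lower = np.array([0.5, -0.1, 0.3])   # negative where sd > mean
upper = np.array([1.0, 0.9, 0.8])

floor = 1e-12                        # any positive value below the plotted range
lower_clipped = np.clip(lower, floor, None)  # raise only the negative entries
```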
|
You can create a `RandomRange` class; you can create as many as you want and put them into a list:

```csharp
public class RandomRange
{
    public int Start { get; set; }
    public int End { get; set; }
    public int Exception { get; set; }

    public RandomRange(int start, int end, int exception)
    {
        Start = start;
        End = end;
        Exception = exception;
    }

    public int getRandomIndex()
    {
        Random random = new Random();
        int index = random.Next(Start - 1, End + 1);
        if (index < Start || index > End)
        {
            index = Exception;
        }
        return index;
    }
}
```
In your main program, simply call it:

```csharp
static void Main(string[] args)
{
    Random random = new Random();
    // Define your ranges
    List<RandomRange> ranges = new List<RandomRange>
    {
        new RandomRange(2, 10, 15),  // Represents indices from 2 to 9
        new RandomRange(15, 20, 25), // Represents indices from 15 to 19
        new RandomRange(25, 30, 35)  // Represents indices from 25 to 29
    };
    // I use random here, but you can simply change it to a user input
    RandomRange selectedRange = ranges[random.Next(ranges.Count)];
    Console.WriteLine($"Selected Index: {selectedRange.getRandomIndex()}");
}
```

You can then use `selectedRange.getRandomIndex()` to get your list item.
Note that there is no check for an index-out-of-bounds exception; I presume you have already done all the list-index exception handling.
|
Then, in app.module.ts, I advise you not to put the connection parameters, but only the code structure. And then in a component you can put all the logic. Tell me if you have already solved it; otherwise I can help you. |
I'm experimenting with GLFW (compiling with `g++ -o nxtBluePixel nxtBluePixel.cpp -lglfw -lGLEW -lGL`) to simply draw a blue box and move it up/down/left/right. I want to output a message "out of bounds" when the box touches the edges of the visible area, which should logically be -1 or 1 (in those so-called normalized OpenGL coordinates, so I read). But the box keeps moving into an "invisible" region outside (where it is not drawn), and no message is displayed until a while later (at least 10 or so key presses beyond the edge boundary). ChatGPT 4 can't help me; it says:
*"Correct Boundary Check: If you want the "out of bounds" message to appear as soon as the point is about to leave the visible area, your original check without the large epsilon is logically correct. However, if the message appears too late or too early, it might be due to how the point's position is updated or rendered, not the boundary check itself."*
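One thing I suspect (unconfirmed): my `printf("OUT OF BOUNDS !!!!")` contains no newline, and stdout is typically line-buffered on a terminal, so the text sits in the buffer until enough output accumulates. The same behaviour, sketched in Python, where an explicit flush (the C analogue would be `fflush(stdout)`) pushes the text out even without a newline:

```python
import sys

def report_out_of_bounds(stream=sys.stdout):
    # No newline in the message, so a line-buffered stream would hold it;
    # flushing explicitly forces it out immediately.
    stream.write("OUT OF BOUNDS !!!!")
    stream.flush()

report_out_of_bounds()
```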
Any ideas? I never use OpenGL, so I wanted to try... but this is typically the kind of very annoying problem I hate!
And here is my code:
```cpp
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>
#include <string>
#include <cctype>

GLint worldWidth = 400;
GLint worldHeight = 300;

// Starting position of the pixel
float currX = 0.0f;
float currY = 0.0f;
float stepSize = 1.0f / worldWidth;
float speed = 2.0f;

void updateWorld(char keyPressed, float speed) {
    switch (keyPressed) {
        case 'W': // up
            currY += stepSize * speed;
            break;
        case 'A': // left
            currX -= stepSize * speed;
            break;
        case 'S': // down
            currY -= stepSize * speed;
            break;
        case 'D': // right
            currX += stepSize * speed;
            break;
    }
    // using openGL 'normalized' coords i.e. between -1 and 1
    if (currX >= 1.0 || currX <= -1.0 || currY >= 1.0 || currY <= -1.0) {
        printf("OUT OF BOUNDS !!!!");
    }
}

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
    char key_str = '0';
    if (action == GLFW_PRESS || action == GLFW_REPEAT) {
        switch (key) {
            case GLFW_KEY_W:
                key_str = 'W';
                break;
            case GLFW_KEY_A:
                key_str = 'A';
                break;
            case GLFW_KEY_S:
                key_str = 'S';
                break;
            case GLFW_KEY_D:
                key_str = 'D';
                break;
            case GLFW_KEY_Q:
                printf("Bye ...");
                glfwSetWindowShouldClose(window, GL_TRUE);
                break;
            default:
                printf("unknown key pressed \n");
                break;
        }
        updateWorld(key_str, speed);
    }
}

int main(void) {
    GLFWwindow* window;

    // Initialize the library
    if (!glfwInit())
        return -1;

    //remove win frame
    //glfwWindowHint(GLFW_DECORATED, GLFW_FALSE);

    // Create a windowed mode window and its OpenGL context
    window = glfwCreateWindow(worldWidth, worldHeight, "Move Pixel with Keyboard", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }

    // Make the window's context current
    glfwMakeContextCurrent(window);

    // Set the keyboard input callback
    glfwSetKeyCallback(window, key_callback);

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) {
        std::cerr << "Failed to initialize GLEW" << std::endl;
        return -1;
    }

    //glfwGetFramebufferSize(window, &worldWidth, &worldHeight);
    // Define the viewport dimensions
    glViewport(0, 0, worldWidth, worldHeight);

    // Set point size
    glPointSize(10.0f); // Increase if you want the "pixel" to be bigger

    // Loop until the user closes the window
    while (!glfwWindowShouldClose(window)) {
        // Render here
        glClear(GL_COLOR_BUFFER_BIT);

        // Set the drawing color to blue
        glColor3f(0.0f, 0.0f, 1.0f); // RGB: Blue

        // Draw a point at the current position
        glBegin(GL_POINTS);
        glVertex2f(currX, currY); // Use the updated position
        glEnd();

        // Swap front and back buffers
        glfwSwapBuffers(window);

        // Poll for and process events
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
```
|
|c++|opengl|glfw| |
An improved version of the [@bytecode77](https://stackoverflow.com/a/38676215/7585517) and [@Bill Tarbell](https://stackoverflow.com/a/70636265/7585517) answers:
```csharp
using Microsoft.Win32.SafeHandles;
using System.Runtime.InteropServices;
using System.Security.Principal;

public static class WinApi
{
    [Flags]
    public enum ProcessAccessRights : long
    {
        PROCESS_QUERY_INFORMATION = 0x0400,
    }

    [Flags]
    public enum TokenAccessRights : ulong
    {
        TOKEN_QUERY = 0x0008
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern SafeProcessHandle OpenProcess(
        ProcessAccessRights dwDesiredAccess,
        [MarshalAs(UnmanagedType.Bool)] bool bInheritHandle,
        UIntPtr dwProcessId);

    [DllImport("advapi32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool OpenProcessToken(
        SafeProcessHandle ProcessHandle,
        TokenAccessRights DesiredAccess,
        out SafeFileHandle TokenHandle);
}

public static class ProcessUtils
{
    public static (string domain, string user) GetProcessOwnerDomainAndUserNames(int processId)
    {
        using var processHandle = WinApi.OpenProcess(WinApi.ProcessAccessRights.PROCESS_QUERY_INFORMATION, false, (UIntPtr)processId);
        if (processHandle.IsInvalid)
        {
            return ("", "");
        }
        if (!WinApi.OpenProcessToken(processHandle, WinApi.TokenAccessRights.TOKEN_QUERY, out var tokenHandle))
        {
            return ("", "");
        }
        using (tokenHandle)
        {
            using var wi = new WindowsIdentity(tokenHandle.DangerousGetHandle());
            var domainAndUser = wi.Name.Split(new char[] { '\\' });
            return (domainAndUser[0], domainAndUser[1]);
        }
    }
}

public class Program
{
    public static void Main()
    {
        var (domainName, userName) = ProcessUtils.GetProcessOwnerDomainAndUserNames(2680);
        Console.WriteLine($"Domain: {domainName}, User: {userName}");
    }
}
```
|
This is my ansible playbook:
```yaml
---
- name: Check timeout error
  hosts: all
  ignore_errors: yes
  gather_facts: false
  tasks:
    - name: test if server is connecting within n seconds
      become: True
      shell: id
      register: id_result
      async: 60
      poll: 20

    - name: display id result
      debug: var=id_result
```
Question:
I need to exit the shell task if it runs beyond n seconds on the remote host. In the above case it is 10 seconds.
Expected Outcome:
```json
{"msg": "Timeout (62s) waiting for privilege escalation prompt: "}
```
Actual Outcome:
I get the shell command output, but it takes more than 4 minutes to arrive, since ssh itself to the remote host is quite slow.
Doubt:
I know async works for localhost commands, but is there any similar way to enforce a timeout for the shell module when it runs a command on a remote host?
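For comparison, the effect I'm after, kill the command once it exceeds n seconds, looks like this with Python's `subprocess` (just an illustration of the timeout idea, not Ansible's async mechanism):

```python
import subprocess

# Run a command with a hard deadline; TimeoutExpired fires when it overruns.
try:
    subprocess.run(["sleep", "5"], timeout=1)
    outcome = "completed"
except subprocess.TimeoutExpired:
    outcome = "timed out"
print(outcome)  # timed out
```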
For me, I didn't know that you have to add `@EnableFeignClients` on the `@SpringBootApplication` class. |
I want to access `app.properties` in my Spring application from an external source other than the classpath, for example from a URL, or from an `app.properties` in a directory other than the base one. How can I access that? I have created my own `PropertySource` and checked, but it is still not working.

```java
package com.snapdeal.dataplatform.ThirdpartyDataTransmitter.propConfig;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class appConfig {
    private static final String PROPERTIES_URL = "https://drive.google.com/file/d/1vdB2hpm6DRVsCimFddoq9P5mOh-ibeod/view?usp=sharing";

    @Bean
    public RemotePropertySource remotePropertySource() {
        return new RemotePropertySource("remotePropertySource", PROPERTIES_URL);
    }
}
```

I want to access the `app.properties` from any URL where I have hosted it, rather than from the classpath.
Access app.properties from external path other than class path |
|java|spring|spring-boot|maven|model-view-controller| |
I have trained my model with
```
import tensorflow as tf
import os
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Conv2DTranspose
from keras.layers import concatenate, BatchNormalization, Dropout, Lambda
from tensorflow.keras import backend as K

os.environ["SM_FRAMEWORK"] = "tf.keras"
import segmentation_models as sm

def jaccard_coef(y_true, y_pred):
    y_true_flatten = K.flatten(y_true)
    y_pred_flatten = K.flatten(y_pred)
    intersection = K.sum(y_true_flatten * y_pred_flatten)
    final_coef_value = (intersection + 1.0) / (K.sum(y_true_flatten) + K.sum(y_pred_flatten) - intersection + 1.0)
    return final_coef_value

def multi_unet_model(n_classes=7, image_height=512, image_width=512, image_channels=3):
    inputs = Input((image_height, image_width, image_channels))
    source_input = inputs

    c1 = Conv2D(16, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(source_input)
    c1 = Dropout(0.2)(c1)
    c1 = Conv2D(16, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c1)
    p1 = MaxPooling2D((2,2))(c1)

    c2 = Conv2D(32, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(p1)
    c2 = Dropout(0.2)(c2)
    c2 = Conv2D(32, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c2)
    p2 = MaxPooling2D((2,2))(c2)

    c3 = Conv2D(64, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(p2)
    c3 = Dropout(0.2)(c3)
    c3 = Conv2D(64, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c3)
    p3 = MaxPooling2D((2,2))(c3)

    c4 = Conv2D(128, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(p3)
    c4 = Dropout(0.2)(c4)
    c4 = Conv2D(128, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c4)
    p4 = MaxPooling2D((2,2))(c4)

    c5 = Conv2D(256, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(p4)
    c5 = Dropout(0.2)(c5)
    c5 = Conv2D(256, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c5)

    u6 = Conv2DTranspose(128, (2,2), strides=(2,2), padding="same")(c5)
    u6 = concatenate([u6, c4], axis=3)
    c6 = Conv2D(128, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(u6)
    c6 = Dropout(0.2)(c6)
    c6 = Conv2D(128, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c6)

    u7 = Conv2DTranspose(64, (2,2), strides=(2,2), padding="same")(c6)
    u7 = concatenate([u7, c3], axis=3)
    c7 = Conv2D(64, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(u7)
    c7 = Dropout(0.2)(c7)
    c7 = Conv2D(64, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c7)

    u8 = Conv2DTranspose(32, (2,2), strides=(2,2), padding="same")(c7)
    u8 = concatenate([u8, c2], axis=3)
    c8 = Conv2D(32, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(u8)
    c8 = Dropout(0.2)(c8)
    c8 = Conv2D(32, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c8)

    u9 = Conv2DTranspose(16, (2,2), strides=(2,2), padding="same")(c8)
    u9 = concatenate([u9, c1], axis=3)
    c9 = Conv2D(16, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(u9)
    c9 = Dropout(0.2)(c9)
    c9 = Conv2D(16, (3,3), activation="relu", kernel_initializer="he_normal", padding="same")(c9)

    outputs = Conv2D(n_classes, (1,1), activation="softmax")(c9)

    model = Model(inputs=[inputs], outputs=[outputs])
    return model

# Percentages of data for each class
class_percentages = {
    'urban_land': 7.8383,
    'agriculture_land': 5.6154,
    'rangeland': 9.6087,
    'forest_land': 1.2616,
    'water': 2.9373,
    'barren_land': 1.0828,
    'unknown': 0.00017716
}

# Calculate the inverse of percentages
inverse_percentages = {cls: 1 / pct for cls, pct in class_percentages.items()}

# Normalize weights
sum_inverse_percentages = sum(inverse_percentages.values())
weights = [weight / sum_inverse_percentages for cls, weight in inverse_percentages.items()]
print("Class Weights:", weights)

# Loss function
dice_loss = sm.losses.DiceLoss(class_weights=weights)
focal_loss = sm.losses.CategoricalFocalLoss()
total_loss = dice_loss + (1 * focal_loss)

# Metrics
metrics = ["accuracy", jaccard_coef]

with strategy.scope():
    # Build the model
    model = multi_unet_model()
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    # Compile the model
    model.compile(optimizer=optimizer, loss=total_loss, metrics=metrics)
    # Print model summary
    model.summary()

train_size = int(len(dataset)*0.8)
train_dataset = dataset.take(train_size)
val_dataset = dataset.skip(train_size)

train_dataset = train_dataset.batch(batch_size=16)
train_dataset = strategy.experimental_distribute_dataset(train_dataset)
val_dataset = val_dataset.batch(batch_size=16)
val_dataset = strategy.experimental_distribute_dataset(val_dataset)

history = model.fit(train_dataset,
                    validation_data=val_dataset,
                    epochs=3,
                    )
```
Then I saved the model with
```
with strategy.scope():
model.save("new_3_epoch_model.h5")
```
and when I load the model with
```
load_model("/kaggle/working/data/imgs/new_3_epoch_model.h5", custom_objects={"dice_loss_plus_1focal_loss": total_loss, "jaccard_coef":jaccard_coef})
```
it shows error:
```
TypeError Traceback (most recent call last)
Cell In[46], line 1
----> 1 load_model("/kaggle/working/data/imgs/new_3_epoch_model.h5", custom_objects={"dice_loss_plus_1focal_loss": total_loss, "jaccard_coef":jaccard_coef})
File /usr/local/lib/python3.10/site-packages/keras/src/saving/saving_api.py:183, in load_model(filepath, custom_objects, compile, safe_mode)
176 return saving_lib.load_model(
177 filepath,
178 custom_objects=custom_objects,
179 compile=compile,
180 safe_mode=safe_mode,
181 )
182 if str(filepath).endswith((".h5", ".hdf5")):
--> 183 return legacy_h5_format.load_model_from_hdf5(filepath)
184 elif str(filepath).endswith(".keras"):
185 raise ValueError(
186 f"File not found: filepath={filepath}. "
187 "Please ensure the file is an accessible `.keras` "
188 "zip file."
189 )
File /usr/local/lib/python3.10/site-packages/keras/src/legacy/saving/legacy_h5_format.py:155, in load_model_from_hdf5(filepath, custom_objects, compile)
151 training_config = json_utils.decode(training_config)
153 # Compile model.
154 model.compile(
--> 155 **saving_utils.compile_args_from_training_config(
156 training_config, custom_objects
157 )
158 )
159 saving_utils.try_build_compiled_arguments(model)
161 # Set optimizer weights.
File /usr/local/lib/python3.10/site-packages/keras/src/legacy/saving/saving_utils.py:145, in compile_args_from_training_config(training_config, custom_objects)
143 loss = _deserialize_nested_config(losses.deserialize, loss_config)
144 # Ensure backwards compatibility for losses in legacy H5 files
--> 145 loss = _resolve_compile_arguments_compat(loss, loss_config, losses)
147 # Recover metrics.
148 metrics = None
File /usr/local/lib/python3.10/site-packages/keras/src/legacy/saving/saving_utils.py:245, in _resolve_compile_arguments_compat(obj, obj_config, module)
237 """Resolves backwards compatiblity issues with training config arguments.
238
239 This helper function accepts built-in Keras modules such as optimizers,
(...)
242 this does nothing.
243 """
244 if isinstance(obj, str) and obj not in module.ALL_OBJECTS_DICT:
--> 245 obj = module.get(obj_config["config"]["name"])
246 return obj
TypeError: string indices must be integers
``` |
Can't load keras model with Custom objects trained on kaggle TPU |
|tensorflow|keras|deep-learning|kaggle|tpu| |
```python
import glob
import os
import zipfile

def remove_parent_path(parent_path, file_path):
    # Get the relative path of the file_path with respect to the parent_path
    relative_path = os.path.relpath(file_path, parent_path)
    return relative_path

def compress_to(parent_path, out_file_path, structured_files=None, include_parent_dir=False):
    with zipfile.ZipFile(out_file_path, 'w') as zipf:
        for file in structured_files:
            if not include_parent_dir:
                zip_archive_name = remove_parent_path(parent_path, file)
            else:
                zip_archive_name = file
            zipf.write(file, zip_archive_name)

parent_dir = 'out'
files_with_sub_dir = os.path.join(parent_dir, '*', '*.*')
out_file_path = 'sample.zip'
files_list = glob.glob(files_with_sub_dir, recursive=True)
compress_to(parent_dir, out_file_path, files_list, include_parent_dir=True)
```
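A quick way to sanity-check what ends up in the archive (a self-contained sketch; the `out/sub/a.txt` path and `sample_check.zip` name are made up for the demo):

```python
import os
import zipfile

# Create a tiny directory tree, compress one file, then list the stored names.
os.makedirs("out/sub", exist_ok=True)
with open("out/sub/a.txt", "w") as f:
    f.write("hello")

with zipfile.ZipFile("sample_check.zip", "w") as zf:
    # Store the file relative to the parent dir, mirroring remove_parent_path.
    zf.write("out/sub/a.txt", os.path.relpath("out/sub/a.txt", "out"))

with zipfile.ZipFile("sample_check.zip") as zf:
    names = zf.namelist()
print(names)  # ['sub/a.txt']
```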
|
morphia client always makes more than 1 connection |
|mongodb|connection-pooling|morphia| |
I found this error when running a firebase transaction in my flutter app.
await _firestore.runTransaction<bool>(
(transaction) async {
... // things went here.
}
) |
Error: Dart exception from converted Future. Use the properties 'error' to fetch the boxed error and 'stack' to recover the stack trace |
|flutter|firebase|dart| |
Short answer: add a try-catch block inside the async function.
Journey: After some tinkering, I realised that the error was because the async function in the runTransaction() raised an error from it without being properly handled by using try-catch block.
After I added a try-catch block, the problem was solved.
await _firestore.runTransaction<bool>(
(transaction) async {
try {
... // things went here.
} catch (e) {
... // error-handling
}
}
) |
We are using Copado as part of our Salesforce Development / Deployment process, we are not using any Azure Pipelines. This issue I am having is that Copado requires that users MUST NOT Complete a Pull Request in Azure DevOps as that would merge the branches outside of Copado, and cause a whole world of pain.
Is there a way using branch policies / permissions to prevent reviewers from completing the Pull Request?
I have tried much Googling and searching on here, but have not found a solution to this particular issue, as Approvers have the ability to complete pull requests. |
Disable Azure DevOps Pull Request being completed when Using Copado |
|azure-devops|salesforce|devops| |
null |
I want to reproduce the [sip-call][1] project, but I just can't build it without errors. Apparently, the files in it are not enough to build the project. I have already tried many methods without any success. Here is what I am doing:
I clone the [sip-call][1] repository to my computer. I run ```idf.py build```. The message appears:
CMakeLists.txt not found in project directory D:\sip-call
Then I add the file _CMakeLists.txt_ to the _sip-call_ folder. The file contains the text:
cmake_minimum_required(VERSION 3.16)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(sip-call)
And I do ```idf.py build```. The project is compiled, a lot of text is output to the console, and eventually the compilation ends with an error:
FAILED: sip-call.elf
...
undefined reference to 'app_main'
I add the file _CMakeLists.txt_ to the _main_ folder. The file contains the text:
idf_component_register(SRCS "main.c"
INCLUDE_DIRS "")
And I do ```idf.py build```. I get the error:
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Program Files\Python39\Lib\site-packages\kconfgen\__main__.py", line 16, in <module>
main()
File "C:\Program Files\Python39\Lib\site-packages\kconfgen\core.py", line 402, in main
output_function(deprecated_options, config, temp_file)
File "C:\Program Files\Python39\Lib\site-packages\kconfgen\core.py", line 566, in write_cmake
write_node(n)
File "C:\Program Files\Python39\Lib\site-packages\kconfgen\core.py", line 557, in write_node
write('set({}{} "{}")\n'.format(prefix, sym.name, val))
File "encodings\cp1251.py", line 19, in encode
UnicodeEncodeError: 'charmap' codec can't encode character '\xfc' in position 36: character maps to <undefined>
If I add library dependencies in the _CMakeLists.txt_ file in the _main_ folder, as was done in the original project by **chrta** in his [original sip-call project][2]:
idf_component_register(SRCS ./main.cpp
REQUIRES
driver
esp_netif
esp_wifi
mbedtls
nvs_flash
sip_client
audio_client
display
#web_server
INCLUDE_DIRS "")
target_compile_options(${COMPONENT_LIB} PRIVATE -std=gnu++17)
and do ```idf.py build```, the compiler will display a message about unfamiliar libraries with a hint:
HINT: The component sip_client could not be found.
Then I add a _CMakeLists.txt_ file to each local library. The file contains the text:
idf_component_register(INCLUDE_DIRS include)
And again I get the same error:
File "encodings\cp1251.py", line 19, in encode
UnicodeEncodeError: 'charmap' codec can't encode character '\xfc' in position 36: character maps to <undefined>
I use Windows 10, ESP-IDF v5.2.1
Any other projects from the [esp-idf][3] examples are built without errors.
Please tell me how to build the [sip-call][1] project without errors? What files are needed for this and what content should they contain? Thank you
[1]: https://github.com/sikorapatryk/sip-call
[2]: https://github.com/chrta/sip_call
[3]: https://github.com/espressif/esp-idf |
**You can't use function definitions and logic code directly in the JSX section!**
Change your code to this:
```js
import { useState, useEffect } from "react";
import axios from "axios";
const Component = () => {
const [abilities, setAbilities] = useState([]);
async function fetchURL() {
const res = await axios.get("url to fetch");
const data = res.data;
setAbilities(data);
}
useEffect(() => {
fetchURL();
}, []);
return (
<ul className="">
{abilities.map((ability, index) => (
<li key={index} className="inline px-3">
{ability.name}
<p>{ability.effect}</p>
</li>
))}
</ul>
);
};
export default Component;
```
To learn more about this code, see the React documentation.
# Update
If you need to map over an array to fetch all URLs, you can use this code instead of the code above:
```js
import { useState, useEffect } from "react";
import axios from "axios";
const urls = [
{
id: 1,
url: "https://jsonplaceholder.typicode.com/posts",
},
{
id: 2,
url: "https://jsonplaceholder.typicode.com/posts",
},
{
id: 3,
url: "https://jsonplaceholder.typicode.com/posts",
},
];
const Component = () => {
const [abilities, setAbilities] = useState([]);
async function fetchURL(url) {
const res = await axios.get(url);
const data = res.data;
return data;
}
async function fetchAllUrls() {
const data = await Promise.all(
urls.map(async (url) => {
return await fetchURL(url.url);
}),
);
setAbilities(data);
}
useEffect(() => {
fetchAllUrls();
}, []);
return (
<div>
{abilities.map((aba, rootIndex) => (
<ul key={rootIndex} className="">
{aba.map((ability, index) => (
<li key={`${rootIndex}-${index}`} className="inline px-3">
{ability.title}
<p>{ability.effect}</p>
</li>
))}
</ul>
))}
</div>
);
};
export default Component;
```
Of course, this solution is not ideal for production apps, because it fires a request in a loop for each URL, but it works fine! |
I need to calculate the matrix exponential with a Taylor series using MPI. The matrix is small, 3 x 3 for example.
Meanwhile I have:
```
vector<vector<double>> matrixExp(const vector<vector<double>>& A) {
int n = A.size();
vector<vector<double>> E(n, vector<double>(n, 0));
vector<vector<double>> T(n, vector<double>(n, 0));
vector<vector<double>> localE(n, vector<double>(n, 0));
int rank, size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
for (int i = 0; i < n; i++)
E[i][i] = 1;
for (int i = 0; i < n; i++)
localE[i][i] = 0;
T = E;
for (int j = 1; j <= rank; j++)
{
T = matrixMult(T, A);
T = matrixDiv(T, j);
}
localE = T;
for (int i = rank + 1; i <= N; i += size) {
for (int j = i; j < i + size; j++) {
T = matrixMult(T, A);
T = matrixDiv(T, j);
}
localE = matrixSum(localE, T);
}
MPI_Reduce(localE[0].data(), E[0].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
MPI_Reduce(localE[1].data(), E[1].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
MPI_Reduce(localE[2].data(), E[2].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
return E;
}
```
But I don't know how to optimize this
```
for (int j = i; j < i + size; j++) {
T = matrixMult(T, A);
T = matrixDiv(T, j);
}
```
Maybe it's impossible with this implementation
[scheme][1]
[1]: https://i.stack.imgur.com/Lc65Z.png |
If one of the guys in the comments posts an answer within 72 hours, I will mark their answer as the solution and not mine since their comments helped me solve this.
What is going on is called a Merge Conflict.
It occurs when I make changes to one branch, switch to another branch and make changes, and then try to merge branches.
The way you solve this is by going into GitHub, editing the sections that contain "<<<<<<<<<<<<" and ">>>>>>>>>" and removing either the line above or the line below the "=======" line. You also obviously want to remove the "<<<<<<" and ">>>>>>>>" lines.
If you go to open up your file in <insert software program here> and it looks blank or get an error message, you made a mistake when editing the file to get rid of the Merge conflicts. |
I tried to follow a Scrimba tutorial on creating a GPT-4 (or 3.5 for me) chatbot. This was working great, but after I downloaded it and ran it locally it just stopped working. I reset the Scrimba and everything, but it just didn't work. Here is the link. I do have the right API key.
[text](https://scrimba.com/learn/buildaiapps/starter-code-coa28453fbd547f14691eb135) |
Scrimba tutorial was working, suddenly stopped even trying the default |
|javascript|html|openai-api| |
null |
Trying to put multiple arguments into a string in Bash is generally not a good idea. Sometimes the only way to make it work is to use `eval`, and that often just gives you more problems (some of which may be very unobvious). See [Why should eval be avoided in Bash, and what should I use instead?](https://stackoverflow.com/q/17529220/4154375).
In this case I'd avoid the problem by defining the function like this:
function myFunc
{
codesign "${@:2}" "$1"
}
and calling it like this:
myFunc "/MyFolder/MyFile.dylib" -f -vvvv --strict --deep --timestamp -s "Developer ID Application: My Company Inc. (123456)" --options runtime
I.e. put the path of the file to be signed as the first argument and the arguments to `codesign` as the following, all separate, arguments. Any arguments that work with `codesign` itself are guaranteed to work when provided as arguments (2 and following) to `myFunc`.
|
{"Voters":[{"Id":506413,"DisplayName":"Jonathan Potter"},{"Id":403671,"DisplayName":"Simon Mourier"},{"Id":6752050,"DisplayName":"273K"}]} |
I'm trying to set up a scheduled task with ECS Fargate. The task was dockerized and would be run through AWS ECS with Fargate.
Unfortunately, the service I want to run needs to access a partner's API where the IP address needs to be whitelisted. I see that for each execution of the task with Fargate, a new ENI with a different IP address is assigned.
How is it possible to assign a static IP address to a AWS ECS Fargate task?
|
I ran into the same issue; assuming you are injecting the Router instance using the inject method, try changing and injecting it using the constructor
constructor(private router: Router) {}
instead of ...
router = inject(Router);
Also be sure to use the correct import. After implementing the above code change I then realized a mistake I made; there is a difference between `Inject(Router)` and `inject(Router)`
import { Inject, inject } from '@angular/core';
You need to use the latter. |
|wpf|vb.net| |
I'm using Next.js with React Pro Sidebar, but this seems to be a problem related to JS, HTML, and CSS. Anyways, I've tried a lot of "solutions" on a lot of questions here on SO, but none of them worked, and I'm surprised this isn't covered inside of React Pro Sidebar itself.
Anytime I route to a page with bigger height, I get something like this:
[![Cut through Sidebar][1]][1]
That's because I haven't been able to update the height of the sidebar to the height of the whole document. The height of the document is another problem, because I haven't been able to find a way of tracking it through events, e.g. `resize`. The full height of the document seems to be tracked by `document.body.scrollHeight`.
I've tried to create a `useEffect` for `document.body.scrollHeight`, but it doesn't seem to be enough (in Next.js, you might need to create a `dynamic` component with no SSR, like [this](https://stackoverflow.com/a/68713422/4756173)):
```tsx
const [viewHeight, setViewHeight] = useState<string | number>("100vh")
useEffect(() => {
setViewHeight(document.body.scrollHeight)
}, [document.body.scrollHeight])
...
<ProSidebar style={{ height: viewHeight }}>
...
```
I've also tried to create a `ResizeObserver` but it didn't work either:
```typescript
useEffect(() => {
const observer = new ResizeObserver(() => {
setViewHeight(document.body.scrollHeight)
})
observer.observe(document.body)
}, [])
```
And, specifically for React Pro Sidebar, I've also tried this solution from [an issue created by one of its contributors](https://github.com/azouaoui-med/react-pro-sidebar/issues/158#issuecomment-1487817653):
```tsx
<Sidebar
defaultCollapsed={isCollapsed}
rootStyles={{
[`.${sidebarClasses.container}`]: {
backgroundColor: "#ffffff",
height: "100vh !important",
},
[`.${sidebarClasses.root}`]: {},
}}
>
```
Does anyone know of a specific, or, better yet, general solution to updating the sidebar height in JS (or React)?
[1]: https://i.stack.imgur.com/olBLqt.png |
Update Sidebar Height to Cover All Content |
|javascript|html|css|reactjs|next.js| |
This solved the problem for me. Add this CSS to `ion-toolbar`, it will automatically adjust.
&__toolbar {
margin-top: calc(env(safe-area-inset-top));
} |
I want to store my object into MongoDB in golang.
The object 'A' contains a field:
```go
Messages []*Message `bson:"messages,omitempty" json:"messages"`
```
This array is an array of interface pointers. This is my interface:
```go
type Message interface {
GetType() enums.MessageType
GetId() string
}
```
How can I save this field in mongoDB without creating the messages bson manually?
Today I am creating a bson by iterating on all the messages
This is my insert command:
```go
_, err := h.db.Collection(collectionName).InsertOne(ctx, A)
```
I tried to convert my array to a `map[string]*Message` and use the bson `inline` flag.
I tried to marshal the entire object and got an error that there is no encoder for the interface. |
null |
Actually, experimenting with a notification server I've implemented, I've just verified that `notify-send` indeed does return whatever the server is passing to it.
So it must be Dunst that, although correctly avoids replacing notifications for which an ID was explicitly requested with notifications for which the ID was not requested, it probably does not do so via IDs, but via some other internally defined unique identifier.
([Indeed](https://github.com/dunst-project/dunst/issues/1317).) |
For jPasskit for apple wallet , Is there an option to check if passes were already created? |
|apple-wallet| |
null |
The library is deprecated: see https://github.com/googleapis/oauth2client/tree/master.
Also see https://cloud.google.com/appengine/docs/standard/python3/services/access#web_frameworks |
I am new in react native and I want to achieve text chat functionality with agora in react native.
Right now I am facing a problem: if the user is not on the chat screen and they receive any messages from another user, then the same unread messages get appended at the end after the old messages load. Also, if the user sends a new message, the new message gets added along with the same old unread messages.
Please suggest what I did wrong, also if my explanation needs some data, please let me know.
Thanks in advance
Below is my code implementation:
```
import { Image, Platform, StyleSheet, Text, TextInput, View } from 'react-native'
import React, { useState, useCallback, useEffect, useContext, useRef } from 'react'
import { GiftedChat, Send, Bubble } from 'react-native-gifted-chat'
import { IconButton } from 'react-native-paper'
import { ChatMessage, ChatMessageChatType } from 'react-native-agora-chat'
import MCIcon from 'react-native-vector-icons/MaterialCommunityIcons'
import { useFocusEffect } from '@react-navigation/native'
import { AuthContext } from '../../../context/AuthContext'
function CustomInputToolbar(props) {
const { text, setText, sendmsg } = props
return (
<View
style={{
// flex: 1,
flexDirection: 'row',
alignItems: 'center',
gap: 10,
marginHorizontal: 8,
}}
>
<TextInput
style={[styles.input, { paddingTop: 10 }]}
placeholder="Message"
placeholderTextColor="gray"
value={text}
onChangeText={setText}
multiline
/>
<MCIcon
name="send"
color="#dd1077"
size={32}
onPress={() => {
if (text !== '') {
sendmsg()
setText('')
}
}}
/>
</View>
)
}
function ChatScreen({ route, navigation }) {
const { tokenData, profileData, chatClient, chatManager } = route.params
const logoutCalled = useRef(false)
const [messages, setMessages] = useState([])
const [text, setText] = React.useState('')
const { chatNotification } = useContext(AuthContext)
// console.log(JSON.stringify(tokenData))
const { markChatRead } = useContext(AuthContext)
// Custom bubble component to change background color to pink
const renderBubble = (props) => (
<Bubble
{...props}
wrapperStyle={{
right: {
backgroundColor: '#dd1077',
},
left: {
backgroundColor: 'white',
},
}}
/>
)
useEffect(() => {
markChatRead(tokenData.conversationId)
const login = async () => {
console.log('start login ...')
try {
await chatClient.isLoginBefore()
await chatClient.loginWithAgoraToken(tokenData?.username, tokenData?.token)
console.log('login operation success.')
loadOldMessages()
} catch (error) {
console.log(`login fail: ${JSON.stringify(error)}`)
}
}
const loadOldMessages = async () => {
try {
const res = await chatManager.fetchHistoryMessagesByOptions(tokenData.toUsername, 0, {
cursor: '',
pageSize: 50,
})
const newMessages = res.list.map((message) => ({
_id: message.msgId,
text: message.body.content,
createdAt: new Date(message.serverTime),
user: {
_id: message.from,
name: message.from,
avatar: '',
},
received: true,
}))
setMessages((previousMessages) => GiftedChat.append(previousMessages, newMessages))
setMessageListener() // Call setMessageListener after setting the messages
} catch (error) {
console.error('Error fetching old messages:', error)
}
}
const setMessageListener = () => {
const msgListener = {
onMessagesReceived(receivedMessages) {
const newMessages = receivedMessages
.filter((message) => message.from === tokenData.toUsername)
.map((message) => ({
_id: message.msgId,
text: message.body.content,
createdAt: new Date(message.serverTime),
user: {
_id: message.from,
name: message.from,
avatar: '',
},
received: false,
}))
setMessages((previousMessages) => GiftedChat.append(previousMessages, newMessages))
},
onCmdMessagesReceived: (messages) => {},
onMessagesRead: (messages) => {},
onGroupMessageRead: (groupMessageAcks) => {},
onMessagesDelivered: (messages) => {},
onMessagesRecalled: (messages) => {},
onConversationsUpdate: () => {},
onConversationRead: (from, to) => {},
}
chatManager.removeAllMessageListener()
chatManager.addMessageListener(msgListener)
}
login()
}, [])
useFocusEffect(
React.useCallback(
() => () => {
if (!logoutCalled.current) {
logoutCalled.current = true
logout()
if (navigation && navigation.getState().routes.slice(-1)[0].name !== 'ChatList') {
navigation.goBack()
}
// navigation.goBack()
}
},
[navigation]
)
)
const onSend = useCallback((messages = []) => {
setMessages((previousMessages) => GiftedChat.append(previousMessages, messages))
}, [])
const logout = () => {
console.log('start logout ...')
chatClient
.logout()
.then(() => {
console.log('logout success.')
})
.catch((reason) => {
console.log(`logout fail:${JSON.stringify(reason)}`)
})
}
// Sends a text message to somebody.
const sendmsg = () => {
let msg = ChatMessage.createTextMessage(
tokenData.toUsername,
text,
ChatMessageChatType.PeerChat
)
const callback = new (class {
onProgress(locaMsgId, progress) {
console.log(`send message process: ${locaMsgId}, ${progress}`)
}
onError(locaMsgId, error) {
console.log(`send message fail: ${locaMsgId}, ${JSON.stringify(error)}`)
}
onSuccess(message) {
console.log(`send message success: ${message.localMsgId}`)
const newMessage = {
_id: message.msgId,
text: message.body.content,
createdAt: new Date(message.serverTime),
user: {
_id: message.from,
name: message.from,
avatar: '',
},
sent: true,
}
console.log(JSON.stringify(newMessage))
chatNotification(tokenData.conversationId, message.body.content)
setMessages((previousMessages) => GiftedChat.append(previousMessages, newMessage))
}
})()
console.log('start send message ...')
chatClient.chatManager
.sendMessage(msg, callback)
.then(() => {
console.log(`send message: ${msg.localMsgId} ${JSON.stringify(msg)}`)
})
.catch((reason) => {
console.log(`send fail: ${JSON.stringify(reason)}`)
})
}
function CustomSendButton(props) {
return (
<Send {...props}>
<View style={{ marginRight: 10, marginBottom: 10 }}>
<Image
source={require('../../../../assets/icons/bookmark-white.png')}
resizeMode={'cover'}
/>
</View>
</Send>
)
}
return (
<>
<View
style={{
flexDirection: 'row',
alignItems: 'center',
paddingLeft: 16,
}}
>
{/* Arrow icon */}
<IconButton
icon={() => <MCIcon name="chevron-left" color="#fff" size={30} />}
onPress={() => {
logoutCalled.current = true
logout()
navigation.goBack()
}}
/>
{/* Profile image */}
<Image
source={{
uri: profileData.profileImageUrl || profileData.profileImage,
}}
style={{ width: 50, height: 50, borderRadius: 40 }}
/>
<View
style={{ flexDirection: 'column', alignItems: 'flex-start', marginLeft: 10, width: 200 }}
>
<Text style={{ fontSize: 12, color: 'white' }}>
{(profileData.role
? profileData.role.charAt(0).toUpperCase() + profileData.role.slice(1)
: '') ||
(profileData.type
? profileData.type.charAt(0).toUpperCase() + profileData.type.slice(1)
: '')}
</Text>
<Text style={{ fontSize: 16, fontWeight: 'bold', color: 'white', flexWrap: 'wrap' }}>
{profileData.firstName} {profileData.lastName}{' '}
<Text style={{ fontSize: 10 }}>{profileData.title}</Text>
</Text>
<Text style={{ fontSize: 12, color: 'white' }}>
{profileData.university || profileData.institution}
</Text>
</View>
</View>
<GiftedChat
messages={messages}
onSend={(messages) => onSend(messages)}
alwaysShowSend
renderAvatar={null}
renderSend={(props) => <CustomSendButton {...props} />}
renderInputToolbar={(props) => (
<CustomInputToolbar {...props} text={text} setText={setText} sendmsg={sendmsg} />
)}
renderBubble={renderBubble}
bottomOffset={Platform.OS === 'ios' ? 100 : 0}
// messagesContainerStyle={{ }}
user={{
_id: tokenData.username,
}}
/>
</>
)
}
export default ChatScreen
const styles = StyleSheet.create({
input: {
flex: 1,
backgroundColor: 'white',
color: 'black',
height: 40,
borderRadius: 8,
paddingHorizontal: 16,
fontSize: 16,
},
})
``` |
Agora Chat integration in React native |
|react-native| |
null |
Using GeometryReader will cause the available screen space to be reduced when the keyboard pops up, causing the GeometryReader and its internal views to be laid out again. This rearrangement may interrupt ongoing animations.
However, this situation will not occur if you use the .animation modifier to define animations.
Effect:
[![Effect picture][1]][1]
```
import SwiftUI
struct ContentView: View {
@State var t: String = ""
@State var appeared = 0
let text = "rdvz"
var horizontalSpacing: CGFloat {
let textSize: CGSize = text.size(withAttributes: [NSAttributedString.Key.font: UIFont(name: "Inter-Bold", size: 14.0)!])
return textSize.width + 4
}
var body: some View {
VStack {
GeometryReader { geometry in
generateMarquee(with: geometry.size.width, spacing: horizontalSpacing)
.frame(height: 20)
}
TextField("something to write", text: $t)
.padding()
}
.onAppear(perform: animateMarquee)
}
private func generateMarquee(with width: CGFloat, spacing: CGFloat) -> some View {
HStack(alignment: .center, spacing: 0.0) {
ForEach(0...Int(width / spacing) + 2, id: \.self) { index in
Text("rdvz")
.frame(width: spacing)
.font(.system(size: 14, weight: .bold))
.foregroundColor(index % 2 == 0 ? Color.gray : Color.secondary)
}
}
.offset(x: appeared > 0 ? -spacing * 2 : 0)
//Apply animation modifiers directly to this view
.animation(.linear(duration: 1.5).repeatForever(autoreverses: false), value: appeared)
}
private func animateMarquee() {
appeared = 1
}
}
#Preview {
ContentView()
}
```
[1]: https://i.stack.imgur.com/fP7f6.gif |
I have 2 Python classes that are in separate modules. One class maintains a state attribute that the other needs to observe and use when certain actions are handled.
Right now, I'm using an event-based approach to allow the second class to detect when the state changes in the first class and update its reference to the state accordingly. But I feel this approach is not the best way to do this as it makes maintaining and understanding the code difficult.
So here's an example of what I'm doing right now.
```python
# module 1
class Foo:
def __init__(self):
self.state = "INITIAL_STATE"
def change_state(self, new_state):
self.state = new_state
EventManager.fire_event("state_changed", self.state)
# module 2
class Bar:
def __init__(self):
self.state = "INITIAL_STATE"
def handle_state_change_event(self, new_state):
self.state = new_state
# how this state is used in this class
def take_action(self):
if self.state == "INITIAL_STATE":
print("action1")
elif self.state == "CONFIRMED":
print("action2")
# And so on
```
I want to see if anyone can suggest a better approach/design pattern to model this.
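One common alternative is the classic observer pattern, where `Bar` subscribes directly to `Foo` instead of going through a global event manager. This is only a hedged sketch under my own naming (the `subscribe` method and callback signature are invented, not from the original code):

```python
class Foo:
    def __init__(self):
        self.state = "INITIAL_STATE"
        self._observers = []  # callbacks invoked on every state change

    def subscribe(self, callback):
        self._observers.append(callback)

    def change_state(self, new_state):
        self.state = new_state
        for callback in self._observers:
            callback(new_state)


class Bar:
    def __init__(self, foo):
        self.state = foo.state
        foo.subscribe(self._handle_state_change)  # explicit dependency

    def _handle_state_change(self, new_state):
        self.state = new_state

    def take_action(self):
        if self.state == "INITIAL_STATE":
            return "action1"
        elif self.state == "CONFIRMED":
            return "action2"


foo = Foo()
bar = Bar(foo)
foo.change_state("CONFIRMED")
print(bar.take_action())  # action2
```

The wiring is visible at construction time, which makes the data flow easier to trace than a string-keyed global event bus.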
Note: The second class doesn't change the state, only use the value of the state to decide which action to take |
from django import forms
from django.contrib.auth.models import User
class SignupForm(forms.ModelForm):
class Meta:
model = User
fields = ['username', 'password', 'first_name', 'last_name', 'email']
widgets = {
'password': forms.PasswordInput()
}
|
It refused to work on Windows regardless of suggestions across multiple threads. I tried `$var`, `${var}`, `%var%`, `${env:var}`.
Finally, I just wrote a short script
const envFilenameWithoutExtension = "mockup";
// Note that the path does not include the filename
const envFilePath = require('path').join(__dirname, "..");
// This plugin looks for env files named <.env.mode_name> . Note the leading dot
// My env file uses a interpolated variables, like ... api_url = "${url}", and dot-env didn't translate these,
// but custom-env which I was already using, does
require("custom-env").env(envFilenameWithoutExtension, envFilePath);
// Grouped together mostly for convenient logging
const basicObject = {
modeFolder: process.env.BUILD_FOLDER,
mode: process.env.REACT_APP_MODE,
// I use this variable later, basicObject.url, to make sure the file was read.
// You'll need to change it to something you use.
url: process.env.REACT_APP_API,
command: ""
};
basicObject.command = `cross-env NODE_ENV=${envFilenameWithoutExtension} ...the rest of your command`;
if (!basicObject.url) {
console.log(process.env);
throw "Couldn't find the env file."
} else {
console.log("basic-config", basicObject)
}
require('child_process').exec(basicObject.command, (error, stdout, stderr) => {
if (error) {
console.error(`exec error: ${error}`);
} else if (stderr) {
console.error(`stderr: ${stderr}`);
} else {
console.log(`stdout: ${stdout}`);
}
});
|
If I understand your question correctly, you have "double-category" in the y-axis because you have specified the following
```
ticks: {
callback: function(value, index, values) {
// Menampilkan label kategori sesuai index
return getCategoryLabel(value);
}
}
```
but the getCategoryLabel function returns labels like this (returning the same label for multiple values):
```
function getCategoryLabel(value) {
// Menentukan label kategori sesuai nilai
if (value >= 0 && value <= 1) {
return 'Ringan';
} else if (value >= 2 && value <= 3) {
return 'Sedang';
} else if (value >= 4) {
return 'Berat';
} else {
return '';
}
}
```
If you wish the Y-axis to show the numeric value instead, simply remove (or remark) the ticks block as follows:
```
/*
ticks: {
callback: function(value, index, values) {
// Menampilkan label kategori sesuai index
return getCategoryLabel(value);
}
}
*/
```
So the following code
```
<div>
<canvas id="myChart"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
const canvas = document.getElementById('myChart');
const ctx = canvas.getContext('2d');
var chartData = {}; // Menyiapkan objek untuk menyimpan data chart
var myChart = new Chart(ctx, {
type: 'bar', // Menggunakan chart jenis bar
data: {
datasets: [{
label: 'Statistik Karyawan',
data: [],
backgroundColor: ['skyblue'], // Warna untuk masing-masing kategori
borderWidth: 1
}],
labels: ['Jan', 'Feb', 'Mar', 'Apr', 'Mei', 'Jun', 'Jul', 'Agu', 'Sep', 'Okt', 'Nov', 'Des']
},
options: {
scales: {
y: {
beginAtZero: false,
/*
ticks: {
callback: function(value, index, values) {
// Menampilkan label kategori sesuai index
return getCategoryLabel(value);
}
}
*/
}
},
responsive: true, // Membuat chart responsif
maintainAspectRatio: true // Mengatur rasio aspek chart
}
});
// Fungsi untuk mendapatkan label kategori
function getCategoryLabel(value) {
// Menentukan label kategori sesuai nilai
if (value >= 0 && value <= 1) {
return 'Ringan';
} else if (value >= 2 && value <= 3) {
return 'Sedang';
} else if (value >= 4) {
return 'Berat';
} else {
return '';
}
}
// Fungsi untuk mengupdate chart dengan data baru
function updateChart(newData) {
myChart.data.datasets[0].data = newData.allData;
myChart.update();
}
// Fungsi untuk mengubah data chart saat terjadi perubahan
function onDataChanged() {
var newData = {
allData: [1, 2, 3, 4, 5, 6, 7, 8]
};
// Menggunakan objek untuk melacak kategori yang sudah ditambahkan
var addedCategories = {};
// Memproses data baru dan menambahkannya ke chart
newData.allData.forEach(function(value) {
var categoryLabel = getCategoryLabel(value);
// Menambahkan kategori ke chart hanya jika belum ada
if (!addedCategories[categoryLabel]) {
addedCategories[categoryLabel] = true;
myChart.data.datasets[0].data.push(value);
}
});
updateChart(newData);
}
onDataChanged();
</script>
```
|
I have used recast for my code generation with Babel, as I have checked it is compatible with Babel, but recast is removing all my comments.
Here is my recast code:
```js
const generatedAst = recast.print(ast, {
  tabWidth: 2,
  quote: 'single',
});
```
I am passing the AST generated from Babel's parse.
And if I directly use Babel's generate to generate my code, then parentheses get removed.
Can anyone help me with this?
Thank you in advance.
I don't know any other package that is compatible with Babel.
I used recast because I have checked this [issue](https://github.com/babel/babel/issues/8974), in which it is mentioned that Babel removes parentheses. |
Recast removes comments |
|javascript| |
null |
# Docker Traefik Provider Example
### Case
For example, in case a WebDav service is behind a TLS reverse-proxy (e.g. Traefik, port `443`), and the service itself is running on non-TLS protocol (i.e. port `80`), and renaming a file (i.e. HTTP method `MOVE`) in Nginx may result in error:
```
..."MOVE /dav/test.txt HTTP/1.1" 400...
...[error] 39#39: *4 client sent invalid "Destination" header...
```
### Static (i.e. `traefik.yaml`)
```yaml
experimental:
plugins:
rewriteHeaders:
modulename: 'github.com/bitrvmpd/traefik-plugin-rewrite-headers'
version: 'v0.0.1'
```
### Dynamic
```yaml
services:
app:
image: php
volumes:
- '/var/docker/data/app/config/:/etc/nginx/conf.d/:ro'
- '/var/docker/data/app/data/:/var/www/'
networks:
reverse-proxy:
labels:
- 'traefik.enable=true'
- 'traefik.http.services.app.loadbalancer.server.port=80'
# Replace 'Destination' request header: 'https://' -> 'http://'
- 'traefik.http.middlewares.rewriteHeadersMW.plugin.traefik-plugin-rewrite-headers.rewrites.request[0].header=Destination'
- 'traefik.http.middlewares.rewriteHeadersMW.plugin.traefik-plugin-rewrite-headers.rewrites.request[0].regex=^https://(.+)$'
- 'traefik.http.middlewares.rewriteHeadersMW.plugin.traefik-plugin-rewrite-headers.rewrites.request[0].replacement=http://$1'
- 'traefik.http.routers.app.rule=Host(`sub.example.com`)'
- 'traefik.http.routers.app.entrypoints=https'
- 'traefik.http.routers.app.priority=2'
- 'traefik.http.routers.app.middlewares=rewriteHeadersMW'
networks:
reverse-proxy:
name: 'reverse-proxy'
external: true
```
---
Related:
- [Traefik Plugin Page](https://plugins.traefik.io/plugins/63718c14c672f04dd500d1a0/rewrite-headers);
- [GitHub Plugin Page](https://github.com/bitrvmpd/traefik-plugin-rewrite-headers). |
{"Voters":[{"Id":13212125,"DisplayName":"Jacek Kaczmarek"}],"DeleteType":1} |
While I was developing a `feat` branch based on the `develop` branch, `develop` was frozen, that is, merged into the `staging` branch. So now I would like to rebase my `feat` branch onto `staging`.
Initial version history:
```
D---E feat
/
A---B---C develop
\
F---G---H staging
```
Final version history (what I want to get):
```
A---B---C develop
\
F---G---H staging
\
D---E feat
```
Final version history (what I will get with the `git rebase develop feat` command):
```
D---E feat
/
A---B---C develop
\
F---G---H staging
```
Which `git` command will set the initial version history into the desired state? |
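For what it's worth, this transplant is exactly what `git rebase --onto <newbase> <upstream> <branch>` is designed for. A throwaway-repo sketch that reproduces the histories above and verifies the result (repo layout and file names are made up for illustration; assumes git ≥ 2.28 for `init -b`):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b develop
git config user.email dev@example.com
git config user.name dev
# develop: A---B---C
for c in A B C; do echo "$c" > "$c.txt"; git add "$c.txt"; git commit -q -m "$c"; done
git branch staging
# feat: D---E on top of C
git checkout -q -b feat
for c in D E; do echo "$c" > "$c.txt"; git add "$c.txt"; git commit -q -m "$c"; done
# staging: F---G---H on top of C
git checkout -q staging
for c in F G H; do echo "$c" > "$c.txt"; git add "$c.txt"; git commit -q -m "$c"; done
# Replay the commits reachable from feat but not from develop (D and E)
# on top of staging:
git rebase -q --onto staging develop feat
git log --format=%s feat | tr '\n' ' '
# prints: E D H G F C B A
```

Here `develop` is the `<upstream>` that delimits which commits get moved, and `staging` is the `<newbase>` they land on.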
> mov ss, ax
> mov sp, 0x7c00
That code comes from my answer at https://stackoverflow.com/questions/78205755/no-bios-output-from-sector-1
> org 0x7C00
> bits 16
>
> XOR AX, AX
> MOV DS, AX
> MOV ES, AX
> MOV SS, AX
> MOV SP, 0x7C00
You're asking about the purpose of the lines that set up the SS and SP registers.
Well, **every program at some point will be using the stack**.
- It could be because you want to execute a subroutine for which you would use a `call` instruction that places a return address on the stack and use a `ret` instruction that removes that very same return address from the stack.
- Or you could choose to preserve one or more registers and then the stack would be a convenient place to do so.
- Or you could desire to make use of the BIOS or DOS API, for which you would use the `int` instruction that places a copy of the FLAGS register, the CS segment register, and the IP register on the stack. Upon returning, the `iret` instruction restores those items.
And then there are the interrupt handlers like timer, keyboard, and others that operate autonomously and also need the stack properly setup.
So you see that the stack is very important.
But **why did I initialize SS:SP to 0x0000:0x7C00?**
When the BIOS gave your bootloader program control of the machine, a functional stack already existed. Sadly, you can't count on it residing at a place that does not interfere with whatever you plan to do on the machine. The obvious solution is to place the stack yourself at a safe location, and the safest spot for a bootloader that itself sits at the linear address 0x7C00 is just beneath it in memory. So the stack pointer SP (which represents the top of the stack memory) is set to 0x7C00, providing a stack memory of some 30000 bytes. That's more than enough for even the most demanding bootloader.
> MOV SS, AX
> MOV SP, 0x7C00
You should **always** change these registers in tandem and in this particular order! The x86 CPU has a special safety mechanism that inhibits interrupts until after the instruction that follows a write to the SS segment register. `pop ss` enjoys the same protection as a `mov ss, r/m16`. If the programmer then uses this window to assign a value to the companion SP register, there's no more risk of a malformed stack pointer SS:SP, which would be a disastrous event.
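Put together, a minimal NASM boot sector that applies all of the above (a sketch; assemble with `nasm -f bin boot.asm -o boot.img`, and the message text is just an illustration):

```asm
org 0x7C00
bits 16

start:
    xor ax, ax
    mov ds, ax
    mov es, ax
    mov ss, ax              ; interrupts are held off until after the next instruction
    mov sp, 0x7C00          ; stack grows downward from just below the bootloader

    mov si, msg
.print:
    lodsb                   ; AL = next character, SI advances
    test al, al
    jz .halt
    mov ah, 0x0E            ; BIOS teletype output
    int 0x10                ; uses the stack we just set up
    jmp .print
.halt:
    cli
    hlt
    jmp .halt

msg: db 'Stack at 0000:7C00', 13, 10, 0

times 510 - ($ - $$) db 0   ; pad to 510 bytes
dw 0xAA55                   ; boot signature
```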
|
This is a very old yet interesting question, but this link (from Brent Ozar) offers the best solution compared with all the others listed here:
https://www.brentozar.com/archive/2018/09/one-hundred-percent-cpu/
```sql
CREATE OR ALTER PROCEDURE dbo._keep_it_100
AS
BEGIN
    WITH e1(n) AS
    (
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
        SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL
    ),
    e2(n) AS (SELECT TOP 2147483647 NEWID() FROM e1 a, e1 b, e1 c, e1 d, e1 e, e1 f, e1 g, e1 h, e1 i, e1 j)
    SELECT MAX(ca.n)
    FROM e2
    CROSS APPLY
    (
        SELECT TOP 2147483647 *
        FROM (
            SELECT TOP 2147483647 *
            FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
        ) AS x
        WHERE x.n = e2.n
        ORDER BY x.n
    ) AS ca
    OPTION(MAXDOP 0, LOOP JOIN, QUERYTRACEON 8649);
END;
```
|
I'm encountering an issue during the compilation of my project using CMake. When attempting to build the project, I receive the following error message:
Error output:
```
FAILED: CMakeFiles/TestOpencv.dir/main.c.obj
C:\PROGRA~1\MIB055~1\2022\COMMUN~1\VC\Tools\MSVC\1439~1.335\bin\Hostx64\x64\cl.exe /nologo -external:ID:\lilia\Documents\libs\opencv\build\include -external:W0 /DWIN32 /D_WINDOWS /Zi /Ob0 /Od /RTC1 -MDd /showIncludes /FoCMakeFiles\TestOpencv.dir\main.c.obj /FdCMakeFiles\TestOpencv.dir\ /FS -c C:\Users\lilia\CLionProjects\TestOpencv\main.c
D:\lilia\Documents\libs\opencv\build\include\opencv2/core.hpp(49): fatal error C1189: #error: core.hpp header must be compiled as C++
ninja: build stopped: subcommand failed.
```
I believe this error is related to the inclusion of the core.hpp header file from OpenCV, but I'm uncertain about the exact cause or how to resolve it. I'm using CLion as my development environment and CMake for project configuration.
In attempting to resolve this compilation issue, I've taken the following steps:
- **Dependency check:** I verified that all necessary dependencies, including OpenCV, are correctly installed on my system.
- **CMake configuration:** I carefully reviewed my CMakeLists.txt file to ensure that OpenCV is properly included and configured.
- **Compilation options:** I checked the compilation options to ensure they're correctly set.

Despite these efforts, I haven't been able to resolve the error. I expected the project to compile successfully without the error about OpenCV's core.hpp header.
Could you please provide guidance on further troubleshooting steps or possible solutions?
Thank you for your assistance. |
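One detail that may matter: the error message says `core.hpp` must be compiled as C++, and the failing translation unit is `main.c`, which MSVC compiles as C. A hypothetical minimal CMake setup that avoids this (assuming the source file is renamed to `main.cpp`; the project name matches the log above, everything else is illustrative):

```cmake
cmake_minimum_required(VERSION 3.20)
project(TestOpencv CXX)                 # declare a C++ project, not C

find_package(OpenCV REQUIRED)

add_executable(TestOpencv main.cpp)     # renamed from main.c so MSVC compiles it as C++
target_include_directories(TestOpencv PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(TestOpencv PRIVATE ${OpenCV_LIBS})
```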
OpenCV2 on CLion |
|c|opencv| |