I have data with this structure (YR weather forecasts)
```
df1 <- read.table(text = "time temperature
00 0
01 0
02 1
03 1
04 2
05 2
06 2
07-13 3
13-19 4
19-01 1", header = TRUE)
```
I want each row to represent one hour, i.e. expand the multi-hour intervals into the corresponding number of rows:
```
> df1.full
time temperature
1 0 0
2 1 0
3 2 1
4 3 1
5 4 2
6 5 2
7 6 2
8 7 3
9 8 3
10 9 3
11 10 3
12 11 3
13 12 3
14 13 4
15 14 4
16 15 4
17 16 4
18 17 4
19 18 4
20 19 1
21 20 1
22 21 1
23 22 1
24 23 1
``` |
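The expansion logic itself is language-agnostic; below is a minimal Python sketch of it, in case it helps (the wrap-around handling of `19-01` and the first-value-wins rule for hour `00` are assumptions read off the sample output; in R the same idea maps onto interval parsing plus `rep()`):

```python
def expand_intervals(rows):
    """Expand rows like ('07-13', 3) into one (hour, value) pair per hour."""
    out = {}
    for time, value in rows:
        if '-' in time:
            start, end = (int(t) for t in time.split('-'))
            if end <= start:           # interval wraps past midnight, e.g. 19-01
                end += 24
            hours = [h % 24 for h in range(start, end)]
        else:
            hours = [int(time)]
        for h in hours:
            out.setdefault(h, value)   # first value wins, so hour 00 keeps temp 0
    return [(h, out[h]) for h in sorted(out)]

rows = [('00', 0), ('01', 0), ('02', 1), ('03', 1), ('04', 2), ('05', 2),
        ('06', 2), ('07-13', 3), ('13-19', 4), ('19-01', 1)]
full = expand_intervals(rows)   # 24 (hour, temperature) pairs
```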
Convert the time intervals to equal hours and fill in the value column |
In antd you can see the example code with a functional component:
https://ant.design/components/table#components-table-demo-resizable-column
```
import React, { useState } from 'react';
import { Table } from 'antd';
import { Resizable } from 'react-resizable';

const ResizableTitle = (props) => {
  const { onResize, width, ...restProps } = props;
  if (!width) {
    return <th {...restProps} />;
  }
  return (
    <Resizable
      width={width}
      height={0}
      handle={
        <span
          className="react-resizable-handle"
          onClick={(e) => {
            e.stopPropagation();
          }}
        />
      }
      onResize={onResize}
      draggableOpts={{
        enableUserSelectHack: false,
      }}
    >
      <th {...restProps} />
    </Resizable>
  );
};

const App = () => {
  const [columns, setColumns] = useState([
    {
      title: 'Date',
      dataIndex: 'date',
      width: 200,
    },
    {
      title: 'Amount',
      dataIndex: 'amount',
      width: 100,
      sorter: (a, b) => a.amount - b.amount,
    },
    {
      title: 'Type',
      dataIndex: 'type',
      width: 100,
    },
    {
      title: 'Note',
      dataIndex: 'note',
      width: 100,
    },
    {
      title: 'Action',
      key: 'action',
      render: () => <a>Delete</a>,
    },
  ]);
  const data = [
    {
      key: 0,
      date: '2018-02-11',
      amount: 120,
      type: 'income',
      note: 'transfer',
    },
    {
      key: 1,
      date: '2018-03-11',
      amount: 243,
      type: 'income',
      note: 'transfer',
    },
    {
      key: 2,
      date: '2018-04-11',
      amount: 98,
      type: 'income',
      note: 'transfer',
    },
  ];
  const handleResize =
    (index) =>
    (_, { size }) => {
      const newColumns = [...columns];
      newColumns[index] = {
        ...newColumns[index],
        width: size.width,
      };
      setColumns(newColumns);
    };
  const mergeColumns = columns.map((col, index) => ({
    ...col,
    onHeaderCell: (column) => ({
      width: column.width,
      onResize: handleResize(index),
    }),
  }));
  return (
    <Table
      bordered
      components={{
        header: {
          cell: ResizableTitle,
        },
      }}
      columns={mergeColumns}
      dataSource={data}
    />
  );
};

export default App;
```
css (it's important):
```
#components-table-demo-resizable-column .react-resizable {
  position: relative;
  background-clip: padding-box;
}

#components-table-demo-resizable-column .react-resizable-handle {
  position: absolute;
  right: -5px;
  bottom: 0;
  z-index: 1;
  width: 10px;
  height: 100%;
  cursor: col-resize;
}
``` |
My app opens one of the applications with some parameters on Windows.
I did this successfully in Flask, shipped it in .exe format, and it works properly.
Now, instead of distributing an .exe, I want to host the project on a server and share a URL.
The problem is that if I execute `subprocess.Popen`, it runs on the server, whereas I want it to run on the client side.
I checked ActiveXObject but learned that it works only in IE.
The question is: what is the best approach to do this?
I thought of Node.js, but that is also server-side, the same as Python.
|
|mongodb|geospatial|or-tools|cp-sat| |
I'm trying to use the [NVIDIA SDK][1] to encode HDR video with H265. Windows Media Foundation doesn't (yet) support 10-bit input with H265.
I can't seem to feed the colors correctly to the encoder. I'm trying to render a video which has 3 images, one green with a value of 1.0, one with value of 2.0 and one with value of 3.0 (maximum) in the RGB, that is, in D2D1_COLOR_F it's {0,1,0,1}, {0,2,0,1} and {0,3,0,1}.
Only the maximum one is rendered correctly (the left image is the generated video; the right is the correct color that should be shown in the video):
[![enter image description here][2]][2]
With green set to 2.0, this is the result:
[![enter image description here][3]][3]
And with green set to 1.0, even worse:
[![enter image description here][4]][4]
And this is the result of a real HDR image:
[![enter image description here][5]][5]
The Nvidia encoder accepts colors in AR30 format, that is, 10 bits each for R, G, B and 2 for alpha (which is ignored). My DirectX renderer has the colors in GUID_WICPixelFormat128bppPRGBAFloat, so I'm doing this:
```cpp
struct fourx
{
    float r, g, b, a;
};

float* f = pointer_to_floats;
for (int x = 0; x < wi; x++)
{
    for (int y = 0; y < he; y++)
    {
        char* dx = (char*)f;
        dx += y * wi * 16;
        dx += x * 16;
        fourx* col = (fourx*)dx;
        DirectX::XMVECTOR v;
        DirectX::XMVECTORF32 floatingVector = { col->r, col->g, col->b, col->a };
        v = floatingVector;
        // float is 0 to max_lim
        float max_number = 3.0f; // this is got from my monitor's white level as described [here][6].
        DirectX::PackedVector::XMUDECN4 u4 = {};
        col->r *= 1023.0f / max_number;
        col->g *= 1023.0f / max_number;
        col->b *= 1023.0f / max_number;
        u4.z = (int)col->r;
        u4.y = (int)col->g;
        u4.x = (int)col->b;
        u4.w = 0;
        DWORD* dy = output_pointer;
        dy += y * wi;
        dy += x;
        *dy = u4.operator unsigned int();
    }
}
```
I suspect something's wrong with the gamma handling, but I'm not sure how to proceed from here.
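If the gamma is indeed the problem, one likely cause is that HDR10/H.265 pipelines expect the 10-bit code values to be PQ-encoded (SMPTE ST 2084) rather than linearly scaled by `1023 / max_number`. A hedged Python sketch of the PQ inverse EOTF, for reference (the mapping from the float render-target values to absolute nits is an assumption that depends on your metadata):

```python
# PQ (SMPTE ST 2084) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """PQ inverse EOTF: absolute luminance in cd/m^2 -> code value in [0, 1]."""
    y = max(nits, 0.0) / 10000.0      # PQ is defined up to 10,000 nits
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def to_10bit(nits):
    """Quantize a PQ code value to the 0-1023 range used by each AR30 channel."""
    return round(pq_encode(nits) * 1023)
```

With this, a float value of 1.0 in the render target would first be mapped to an absolute luminance (e.g. some reference white level consistent with your mastering metadata) and then PQ-encoded, instead of being multiplied by `1023.0f / max_number`.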
[1]: https://developer.nvidia.com/nvidia-video-codec-sdk/download
[2]: https://i.stack.imgur.com/qdsoV.png
[3]: https://i.stack.imgur.com/Qr8YD.png
[4]: https://i.stack.imgur.com/5eTNV.png
[5]: https://i.stack.imgur.com/27IoG.jpg
[6]: https://learn.microsoft.com/en-us/windows/win32/direct3darticles/high-dynamic-range |
I am working on a Rust project using the libp2p library to create a peer-to-peer network. I have configured my swarm to listen on all interfaces using the following code:
```rust
let listen_address_udp = format!("/ip4/0.0.0.0/udp/{}/quic-v1", port);
swarm.listen_on(listen_address_udp.parse()?)?;
let listen_address_tcp = format!("/ip4/0.0.0.0/tcp/{}", port);
swarm.listen_on(listen_address_tcp.parse()?)?;
```
However, when reading swarm events, I am unable to retrieve the external public address that my node is listening on. The code for reading swarm events is as follows:
```rust
loop {
    select! {
        _ = sig_term_handler.recv() => {
            trigger_message = !trigger_message;
        },
        event = swarm.select_next_some() => match event {
            SwarmEvent::Behaviour(MyBehaviourEvent::Gossipsub(gossipsub::Event::Message {
                propagation_source: peer_id,
                message_id: id,
                message,
            })) => {
                println!(
                    "Received '{}' with id: {id} from peer: {peer_id}, Size: {}",
                    String::from_utf8_lossy(&message.data), message.data.len()
                )
            },
            SwarmEvent::NewListenAddr { address, .. } => {
                println!("Local node is listening on {address}");
            }
            _ => {}
        }
    }
}
```
The output I receive only shows the local addresses where my node is listening, such as:
```
Local node is listening on /ip4/127.0.0.1/tcp/8082
Local node is listening on /ip4/172.24.181.240/tcp/8082
```
I have tried pinging and opening port 8082 for TCP on all nodes, but I still cannot determine if my local node is publicly listening or not. What should be the external public address I expect to see in the `SwarmEvent::NewListenAddr` event? Any suggestions or insights into resolving this issue would be greatly appreciated.
|
In Rust, a slice is a portion of an array or vector.
When we create a slice, we know the number of elements it represents in the array/vector, which means we know the size of the slice.
So why do we call a slice a DST? |
Why is a slice a DST? |
|rust| |
|c|x86-64|cpu-architecture|micro-optimization|micro-architecture| |
The error message you're seeing indicates that the AWS CLI is not found in the expected location (/opt/bin/aws). This is because the AWS CLI is not included in the Bash layer you're using (arn:aws:lambda::744348701589:layer:bash:8).
To use the AWS CLI in your Lambda function, you have two options:
1. Create a custom AWS Lambda layer that includes the AWS CLI: You can create a zip file that includes the AWS CLI and any other dependencies your function needs, and then create a new Lambda layer using that zip file. Once the layer is created, you can add it to your Lambda function.
2. Use the AWS SDK instead of the AWS CLI: The AWS SDKs (Software Development Kits) are available in many programming languages and include the same functionality as the AWS CLI. You can rewrite your function to use the AWS SDK instead of the AWS CLI. For example, if you're comfortable with Python, you can use the Boto3 library (the AWS SDK for Python) to interact with AWS services.
More generally, it sounds like a linux machine with a cron job would make a better fit.
Using AWS Lambda for this task could be overkill, especially if the script takes a long time to run or if it doesn't fit well with the event-driven model of Lambda. AWS Lambda is best suited for short, event-driven tasks, while cron jobs are better for longer, time-based tasks.
If you are not interested in assigning a VM for that, you can containerize your application using a Docker container with a cron job inside it.
|
|r|dplyr| |
I am trying to make a little program with PyCharm, following some tutorials. But I get this error:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\Anton1n\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1967, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "C:\Users\Anton1n\PycharmProjects\Livechatvideo\Base.py", line 5, in open_file
file_path = filedialog.askopenfilename()
^^^^^^^^^^
NameError: name 'filedialog' is not defined
```
with this code :
```
import tkinter as tk
from tkinter import *
from tkinter import filedialog
def open_file():
    filename = filedialog.askopenfilename(initialdir="/",
                                          title="Select a File",
                                          filetypes=(("PNG", "*.png"),
                                                     ("JPEG", "*.jpg;*.jpeg"),
                                                     ("GIF", "*.gif")))
    send_button.configure(text="File Opened: " + filename)
# Créer une fenêtre
root = Tk()
# Personnaliser la fenêtre
root.title("LivechatVideo")
root.geometry("300x300")
root.minsize(350, 400)
root.iconbitmap("livechatico.ico")
root.config(bg="#4279B4")
# Créer la frame
frame = Frame(root, bg="#4279B4", bd=1, relief=SUNKEN)
# Ajouter un premier texte
label_title = Label(frame, text="Bienvenue sur LivechatVideo", font=("calibri", 15), fg="orange", bg="#4279B4")
label_title.pack()
# Ajouter second texte
label_subtitle = Label(frame, text="Envoyez vos photos ou vidéos directement à vos amis", font=("calibri", 10), fg="white", bg="#4279B4")
label_subtitle.pack()
# Ajouter
frame.pack(pady=5)
# Ajout de widgets (boutons, labels, etc.) à la fenêtre
send_button = Button(root, text="Sélectionnez votre fichier", font=("calibri", 10), bg="white", fg="black", command=open_file)
send_button.pack(pady=15)
# Création barre de menu
menu_bar = Menu(root)
# Création d'un menu
file_menu = Menu(menu_bar, tearoff=0)
file_menu.add_command(label="Option")
menu_bar.add_cascade(label="Fichier", menu=file_menu)
# Configuraiton de la fenêtre pour menu
root.config(menu=menu_bar)
# Afficher
root.mainloop()
```
I get the error when I launch the program and click on the button "Sélectionnez votre fichier". I tried almost everything I found on the internet, and every way to import the tk module. I use Python 3. I simply want to open a Windows file explorer to select an image on my computer. Why doesn't it work? |
If you were looking for a textfield-type file uploader like I was but didn't find one, this is what I managed to piece together.
[![enter image description here][1]][1]
I used the Material UI `Autocomplete` component and modified it.
**Reusable Component:**
```jsx
import { Close, FileUploadOutlined } from "@mui/icons-material";
import { Autocomplete, ButtonBase, TextField } from "@mui/material";
import React, { Fragment, useRef } from "react";

const FileField = ({
  textfieldProps,
  autoCompleteProps,
  multiple,
  files,
  setFiles,
}) => {
  const fileRef = useRef(null);

  const handleCarouselFiles = (e) => {
    const selectedFiles = e.target.files;
    if (multiple) {
      setFiles((prevFiles) => [...prevFiles, ...selectedFiles]);
    } else {
      setFiles(selectedFiles);
    }
  };

  const handleCarouselInput = () => {
    fileRef.current.click();
  };

  return (
    <Fragment>
      <Autocomplete
        multiple
        options={Array.from(files)}
        getOptionLabel={(option) => option.name}
        renderInput={(params) => (
          <TextField
            {...params}
            {...(textfieldProps ?? {})}
            disabled
            onClick={handleCarouselInput}
            InputProps={{
              ...params.InputProps,
              endAdornment: (
                <Fragment>
                  {files.length > 0 && (
                    <ButtonBase
                      onClick={(e) => {
                        e.preventDefault();
                        e.stopPropagation();
                        setFiles([]);
                      }}
                      sx={{
                        paddingRight: "0.5rem",
                      }}
                    >
                      <Close />
                    </ButtonBase>
                  )}
                  <ButtonBase>
                    <FileUploadOutlined onClick={handleCarouselInput} />
                  </ButtonBase>
                </Fragment>
              ),
            }}
            sx={{
              color: "inherit",
              "& .MuiInputBase-root , & .MuiInputBase-input": {
                paddingRight: "1rem !important",
                cursor: "pointer",
              },
            }}
          />
        )}
        value={Array.from(files)}
        onChange={(event, newValue) => {
          event.preventDefault();
          setFiles(newValue);
        }}
        open={false}
        sx={{
          caretColor: "transparent",
          cursor: "pointer",
          "& .Mui-disabled,& .MuiInputLabel-root": {
            color: "rgba(0,0,0,0.6)",
            backgroundColor: "transparent",
          },
        }}
        {...(autoCompleteProps ?? {})}
      />
      <input
        type="file"
        ref={fileRef}
        style={{ display: "none" }}
        onChange={handleCarouselFiles}
        multiple={multiple}
      />
    </Fragment>
  );
};

export default FileField;
```
**Usage:**
```jsx
import { Fragment, useState } from "react";
import "./App.css";
import FileField from "./FileField";

function App() {
  const [profile, setProfile] = useState([]);
  const [coverPhotos, setcoverPhotos] = useState([]);

  return (
    <Fragment>
      <div className="p-5">
        <FileField
          textfieldProps={{ label: "Single" }}
          autoCompleteProps={{ className: "my-5" }}
          files={profile}
          setFiles={setProfile}
        />
        <FileField
          textfieldProps={{ label: "Multiple" }}
          autoCompleteProps={{ className: "my-5" }}
          files={coverPhotos}
          setFiles={setcoverPhotos}
          multiple={true}
        />
      </div>
    </Fragment>
  );
}

export default App;
```
**Explanation:**
`textfieldProps` are any props that you would pass to a normal TextField, and similarly for `autoCompleteProps`. The files themselves are passed as an array prop, along with a setter function that updates the value onChange.
**Links:**
Github [here][2]
CodeSandbox [here][3]
Hopefully this helps
[1]: https://i.stack.imgur.com/qYLlZ.png
[2]: https://github.com/AbdullahAbid87/material-ui-file
[3]: https://codesandbox.io/p/sandbox/material-ui-file-xk6w56 |
Even after conversion to a Maven project, JDeveloper still seems to use the .jpr file for defining the context root. You can change the .jpr file manually:
```xml
<hash n="oracle.jdeveloper.deploy.dt.DeploymentProfiles">
  <hash n="profileDefinitions">
    <hash n="ViewController">
      ...
      <value n="contextRoot" v="context-root-you-want"/>
      ...
    </hash>
  </hash>
...
<hash n="oracle.jdeveloper.model.J2eeSettings">
  <value n="j2eeWebAppName" v="app-name-you-want"/>
  <value n="j2eeWebContextRoot" v="context-root-you-want"/>
</hash>
```
---
Another straightforward way is to override the JEE application descriptor `/META-INF/application.xml` **in your EAR file**. It looks like this:
```xml
<?xml version="1.0" encoding="UTF-8" ?>
<application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/application_5.xsd"
             xmlns="http://java.sun.com/xml/ns/javaee" version="5">
  <module>
    <web>
      <web-uri>ViewController.war</web-uri>
      <context-root>context-root-you-want</context-root>
    </web>
  </module>
</application>
```
It should work for every Java EE server and any packaging tool.
----
Alternatively, you can use the WebLogic application descriptor `WEB-INF\weblogic.xml` **in your WAR file** like this:
```xml
<wls:weblogic-web-app
    xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee https://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app https://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
  <wls:context-root>/context-root-you-want</wls:context-root>
</wls:weblogic-web-app>
```
The method with `application.xml` has higher priority than the method with `weblogic.xml`. See for more information:
https://docs.oracle.com/cd/E13222_01/wls/docs90/webapp/weblogic_xml.html#1073750 |
I connected my device over Bluetooth BLE using a Flutter application, and this is what I obtained: 'indication received from 00002a35-0000-1000-8000-00805f9b34fb, value: (0x) DE-F6-F4-16-F3-FF-07-E4-07-01-14-08-2B-00-B2-F2-01-80-00'. How do I interpret these results? When I used nRF Connect, the interpreted systolic pressure value is 127.0 mmHg. How was the systolic value obtained? The format of the systolic pressure is SFLOAT (2 bytes), and 'The value of 1 equals 1.0 mmHg. The valid range is 0-300.' However, I was unable to determine how the systolic and diastolic values were obtained in mmHg. How can I demonstrate that it is equal to 127.0 mmHg? |
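For reference, the 127.0 mmHg can be reproduced by hand, assuming the standard Blood Pressure Measurement layout (a flags byte, then systolic, diastolic, and mean arterial pressure as little-endian IEEE-11073 16-bit SFLOATs — a 12-bit signed mantissa with a 4-bit signed base-10 exponent). A Python sketch of the decoding:

```python
def decode_sfloat(raw):
    """Decode an IEEE-11073 16-bit SFLOAT: 4-bit base-10 exponent, 12-bit mantissa."""
    mantissa = raw & 0x0FFF
    if mantissa >= 0x0800:        # sign-extend the 12-bit mantissa
        mantissa -= 0x1000
    exponent = raw >> 12
    if exponent >= 0x8:           # sign-extend the 4-bit exponent
        exponent -= 0x10
    return mantissa * 10.0 ** exponent

# The indication from the question, with separators removed.
data = bytes.fromhex("DEF6F416F3FF07E4070114082B00B2F2018000")
flags = data[0]                                                  # 0xDE
systolic = decode_sfloat(int.from_bytes(data[1:3], "little"))    # 0xF4F6
diastolic = decode_sfloat(int.from_bytes(data[3:5], "little"))   # 0xF316
```

Here `0xF4F6` has mantissa `0x4F6` = 1270 and exponent `0xF` = -1, giving 1270 × 10^-1 = 127.0 mmHg (and `0xF316` gives 79.0 for the diastolic); flags bit 0 being 0 indicates the unit is mmHg, and the remaining bytes carry the MAP, timestamp, pulse rate, user ID, and measurement status fields signalled by the other flag bits.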
Interpreting Bluetooth BLE Data from Flutter App: Understanding Systolic Pressure Values |
|android|flutter|hex|bluetooth-lowenergy|device| |
This is my output from a data set.
I want to output only `state` and the total number of `property_type` per state.
My code for the output below:
```
homes_by_state = df_south.groupby(["state"])["property_type"].value_counts()
```
The output:
```
state property_type
Paraná apartment 1834
house 710
Rio Grande do Sul apartment 2059
house 584
Santa Catarina apartment 2192
house 442
Name: property_type, dtype: int64
``` |
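The aggregation being asked for — one total per state, summing over the property types — can be sketched in plain Python with the counts from the output above (in pandas it collapses to the one-liner shown in the comment):

```python
from collections import Counter

# Plain-Python sketch of the per-state total.  The pandas equivalent would be
#   df_south.groupby("state")["property_type"].count()
per_state_per_type = {
    ("Paraná", "apartment"): 1834,
    ("Paraná", "house"): 710,
    ("Rio Grande do Sul", "apartment"): 2059,
    ("Rio Grande do Sul", "house"): 584,
    ("Santa Catarina", "apartment"): 2192,
    ("Santa Catarina", "house"): 442,
}
homes_by_state = Counter()
for (state, _ptype), n in per_state_per_type.items():
    homes_by_state[state] += n   # collapse property types into one total per state
```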
I want to create a unit test for a Camel route with a dynamic endpoint. To show my problem, I will demonstrate on a simple example.
I started with the simple route as described in the [Apache Camel Routes Testing in Spring Boot][1] tutorial at Baeldung.
Route under test:
```java
@Component
public class GreetingsFileRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:start")
            .routeId("greetings-route")
            .setBody(constant("Hello Baeldung Readers!"))
            .to("file:output");
    }
}
```
and unit test:
```java
@SpringBootTest
@CamelSpringBootTest
@MockEndpoints("file:output")
class GreetingsFileRouterUnitTest {

    @Autowired
    private ProducerTemplate template;

    @EndpointInject("mock:file:output")
    private MockEndpoint mock;

    @Test
    void whenSendBody_thenGreetingReceivedSuccessfully() throws InterruptedException {
        mock.expectedBodiesReceived("Hello Baeldung Readers!");
        template.sendBody("direct:start", null);
        mock.assertIsSatisfied();
    }
}
```
This works as expected. I then extended the endpoint in the route as follows:

```java
.toD("file:output?filename=${header.fileName}");
```

I.e., the filename for the output file should be taken from the `filename` header.
I have also added the header to the unit test:

```java
template.sendBodyAndHeaders("direct:start",
    "Hello Baeldung Readers!",
    Map.of("filename", "myfilename.txt"));
```
When I run the test, the output file with filename `myfilename.txt` is created, but the test fails with the following message:

```none
java.lang.AssertionError: mock://file:output Received message count. Expected: <1> but was: <0>
Expected :<1>
Actual :<0>
```
I think it is because the file endpoint was not mocked.
I have tried many combinations for the String in `@MockEndpoints` and `@EndpointInject`, like the full URI with the placeholder `"file:output?filename=${header.fileName}"`, wildcards, etc., but without any success. Can anyone help?
[1]: https://www.baeldung.com/spring-boot-apache-camel-routes-testing
**EDIT:**
After some debugging I found out that I oversimplified the problem description. In this particular case, all that has to be done is to modify `@MockEndpoints("file:output")` to `@MockEndpoints("file:output*")`, or simply drop the parameter, which is `"*"` by default.
Unfortunately, this trick does not work for my real use case. My dynamic endpoint looks like the following:

```java
.toD("{{api-base-url}}/projects/${header.project_id}/devices/messages/up")
```

where `{{api-base-url}}` comes from application.yaml, something like `https://example.com/api/v1`. If I use `@MockEndpoints` without parameters, I see in the console:

```none
Adviced endpoint [https://example.com] with mock endpoint [mock:https:example.com]
```
If I modify `@EndpointInject` to `@EndpointInject("mock:https:example.com")`, the mock is injected as expected.
But I do not like hardcoding the generated mock name `"mock:https:example.com"` into the test. I can see the logic of how it was probably derived from `api-base-url`, but I would prefer to make the unit test independent of the values in application.yaml.
Is there any way I can modify the generated mock endpoint name?
|
Can you help me with writing Python code to create a Telegram bot that automatically searches Twitter every second for newly posted links ("https://...") and sends them to a Telegram chat? Don't judge harshly, I'm a newbie. |
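The Twitter-polling and Telegram-sending parts both need API credentials, but the middle step — picking out links that have not been forwarded yet — can be sketched standalone. The regex and function names below are illustrative, not from any particular library:

```python
import re

# Naive URL extractor for tweet text; `already_sent` tracks what was forwarded
# so the once-a-second polling loop only sends links it has not seen before.
URL_RE = re.compile(r"https?://\S+")

def new_links(tweet_texts, already_sent):
    found = []
    for text in tweet_texts:
        for url in URL_RE.findall(text):
            if url not in already_sent:
                already_sent.add(url)
                found.append(url)
    return found

sent = set()
batch = new_links(["check https://example.com/a out", "no link here"], sent)
```

Each URL in `batch` could then be forwarded with the Telegram Bot API's `sendMessage` method, inside a loop that sleeps one second between polls.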
Nvidia HDR Encoder |
|c++|winapi|directx|nvidia|hdr| |
You can turn your class into a `ConsumerStatefulWidget` and discard the hooks; then all your parameters can become class fields.
Furthermore, try not to pull logic into the widget. If it is business logic, place it in a notifier class (you can even use an `AsyncNotifier` that contains sealed state for data, loading, and error). If it's UI logic, placing it in presenter classes is not a bad option.
All of this will increase code reusability and allow each layer of your application to clearly fulfill its responsibilities. |
If this page is the homepage (first page) of your app, I may have an answer. Flutterflow support tells me that the navbar will not show on the homepage unless/until the user is authenticated.
I haven't found this in their docs anywhere and am still trying to find workarounds, but figured I'd post just in case it's helpful to you. The only options they suggested are 1) make user sign in first (makes no sense for my app) or 2) build a custom nav.
UPDATE: Option 3, and the one I went with: create a dummy screen as the opening screen of the app that immediately redirects the user to the real home screen. The real home screen will then behave normally and show the nav. Dumb, but it works, and it is much easier than building your own nav. |
```js
const usersList = document.getElementById("usersBox");

function handleBoxResize() {
  if (usersList.scrollHeight > usersList.clientHeight) {
    usersList.classList.add('your-class');
  } else {
    usersList.classList.remove('your-class');
  }
}

handleBoxResize();

const observer = new ResizeObserver(handleBoxResize);
observer.observe(usersList);
```
|
|ios|ruby|xcode|bundle| |
WordPress version: 6.4.3
"timber/timber": "2.x-dev"
Starter theme: upstatement/timber-starter-theme
I understand you require Composer to install Timber and no longer need the plugin,
but how do I create a new custom page in Timber WordPress?
In Page Attributes, I cannot see the template dropdown to select the template.
[Page attributes](https://i.stack.imgur.com/KE0Ec.png)
Did I miss something?
I did create contact.php
```
<?php
/**
 * The main template file
 * This is the most generic template file in a WordPress theme
 * and one of the two required files for a theme (the other being style.css).
 * It is used to display a page when nothing more specific matches a query.
 * E.g., it puts together the home page when no home.php file exists
 *
 * Methods for TimberHelper can be found in the /lib sub-directory
 *
 * @package    WordPress
 * @subpackage Timber
 * @since      Timber 0.1
 */

$context = Timber::context();
$context['posts'] = Timber::get_posts();
$context['foo'] = 'bar';
$templates = array( 'contact.twig' );
Timber::render( $templates, $context );
```
and also the contact.twig:
```
{% extends "base.twig" %}

{% block content %}
    {% include "partial/altHero.twig" %}
    <h1>Contact</h1>
{% endblock %}
```
I am not sure what else I am missing.
I cannot install the Timber plugin either;
this is the error message when I try to install the Timber plugin:
**Plugin could not be activated because it triggered a fatal error.**
Your help is most appreciated.
Cheers |
How to scrape links using Python? |
|twitter|python-telegram-bot| |
null |
What you ask for is not possible, as Laravel tries to execute the Gearman payload (see `\Illuminate\Bus\Dispatcher`).
I was in the same situation and just created a wrapper `command` around the Laravel job class. This is not the nicest solution, as it will re-queue events coming in on the JSON queue, but you don't have to touch existing job classes. Maybe someone with more experience knows how to dispatch a job without actually sending it over the wire again.
Let's assume we have one regular Laravel worker class called `GenerateIdentApplicationPdfJob`:
```php
class GenerateIdentApplicationPdfJob extends Job implements SelfHandling, ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    /** @var User */
    protected $user;
    protected $requestId;

    /**
     * Create a new job instance.
     *
     * QUEUE_NAME = 'ident-pdf';
     *
     * @param User $user
     * @param $requestId
     */
    public function __construct(User $user, $requestId)
    {
        $this->user = $user;
        $this->requestId = $requestId;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle(Client $client)
    {
        // ...
    }
}
```
To be able to handle this class, we need to provide the constructor arguments ourselves. Those are the required data from our JSON queue.
Below is a Laravel `command` class `GearmanPdfWorker`, which does all the boilerplate of the Gearman connection and `json_decode` to be able to handle the original job class.
```php
class GearmanPdfWorker extends Command
{
    /**
     * The console command name.
     *
     * @var string
     */
    protected $name = 'pdf:worker';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'listen to the queue for pdf generation jobs';

    /**
     * @var \GearmanClient
     */
    private $client;

    /**
     * @var \GearmanWorker
     */
    private $worker;

    public function __construct(\GearmanClient $client, \GearmanWorker $worker)
    {
        parent::__construct();
        $this->client = $client;
        $this->worker = $worker;
    }

    /**
     * Wrapper listener for gearman jobs with plain json payload
     *
     * @return mixed
     */
    public function handle()
    {
        $gearmanHost = env('CB_GEARMAN_HOST');
        $gearmanPort = env('CB_GEARMAN_PORT');
        if (!$this->worker->addServer($gearmanHost, $gearmanPort)) {
            $this->error('Error adding gearman server: ' . $gearmanHost . ':' . $gearmanPort);
            return 1;
        } else {
            $this->info("added server $gearmanHost:$gearmanPort");
        }

        // use a different queue name than the original laravel command, since the payload is incompatible
        $queueName = 'JSON.' . GenerateIdentApplicationPdfJob::QUEUE_NAME;
        $this->info('using queue: ' . $queueName);

        if (!$this->worker->addFunction($queueName,
            function (\GearmanJob $job, $args) {
                $queueName = $args[0];
                $decoded = json_decode($job->workload());
                $this->info("[$queueName] payload: " . print_r($decoded, 1));
                $job = new GenerateIdentApplicationPdfJob(User::whereUsrid($decoded->usrid)->first(), $decoded->rid);
                $job->onQueue(GenerateIdentApplicationPdfJob::QUEUE_NAME);
                $this->info("[$queueName] dispatch: " . print_r(dispatch($job)));
            },
            [$queueName])) {
            $msg = "Error registering gearman handler to: $queueName";
            $this->error($msg);
            return 1;
        }

        while (1) {
            $this->info("Waiting for job on `$queueName` ...");
            $ret = $this->worker->work();
            if ($this->worker->returnCode() != GEARMAN_SUCCESS) {
                $this->error("something went wrong on `$queueName`: $ret");
                break;
            }
            $this->info("... done `$queueName`");
        }
    }
}
```
The class `GearmanPdfWorker` needs to be registered in your `\Bundle\Console\Kernel` like this:
```php
class Kernel extends ConsoleKernel
{
    protected $commands = [
        // ...
        \Bundle\Console\Commands\GearmanPdfWorker::class
    ];

    // ...
}
```
Having all that in place, you can call `php artisan pdf:worker` to run the worker and put one job into Gearman via the command line: `gearman -v -f JSON.ident-pdf '{"usrid":9955,"rid":"ABC4711"}'`
You can then see the successful operation:
```
added server localhost:4730
using queue: JSON.ident-pdf
Waiting for job on `JSON.ident-pdf` ...
[JSON.ident-pdf] payload: stdClass Object
(
    [usrid] => 9955
    [rid] => ABC4711
)
0[JSON.ident-pdf] dispatch: 1
... done `JSON.ident-pdf`
Waiting for job on `JSON.ident-pdf` ...
```
Waiting for job on `JSON.ident-pdf` ... |
I am working on a Rust project using the libp2p library to create a peer-to-peer network. I have configured my swarm to listen on all interfaces using the following code:
```rust
let listen_address_udp = format!("/ip4/0.0.0.0/udp/{}/quic-v1", port);
swarm.listen_on(listen_address_udp.parse()?)?;
let listen_address_tcp = format!("/ip4/0.0.0.0/tcp/{}", port);
swarm.listen_on(listen_address_tcp.parse()?)?;
```
However, when reading swarm events, I am unable to retrieve the external public address that my node is listening on. The code for reading swarm events is as follows:
```rust
loop {
select! {
_ = sig_term_handler.recv() => {
trigger_message = !trigger_message;
},
event = swarm.select_next_some() => match event {
SwarmEvent::Behaviour(MyBehaviourEvent::Gossipsub(gossipsub::Event::Message {
propagation_source: peer_id,
message_id: id,
message,
})) => {
println!(
"Received'{}' with id: {id} from peer: {peer_id}, Size :{}",
String::from_utf8_lossy(&message.data), message.data.len()
)
},
SwarmEvent::NewListenAddr { address, .. } => {
println!("Local node is listening on {address}");
}
_ => {}
}
}
}
```
The output I receive only shows the local addresses where my node is listening, such as:
```
Local node is listening on /ip4/127.0.0.1/tcp/8082
Local node is listening on /ip4/172.24.181.240/tcp/8082
```
I have tried pinging and opening port 8082 for TCP on all nodes, but I still cannot determine whether my local node is publicly listening. What external public address should I expect to see in the `SwarmEvent::NewListenAddr` event? Any suggestions or insights into resolving this issue would be greatly appreciated. I have also attempted to fetch the public IP address for the node and then tried binding the node to that public IP address.
|
Is there any difference between using a map and using a function if everything is known at compile time? (I'm new to Kotlin/Java and I couldn't find an answer to this.)
Example:
```kt
val mappings = mapOf(
"PL" to "Poland",
"EN" to "England",
"DE" to "Germany",
"US" to "United States of America",
)
fun mappingsFunc(code: String): String {
return when (code) {
"PL" -> "Poland"
"EN" -> "England"
"DE" -> "Germany"
"US" -> "United States of America"
else -> "Unknown" // previous checks guarantee never returning this
}
}
fun main() {
println(mappings["PL"])
println(mappingsFunc("US"))
}
```
Both of them work and both syntaxes are fine for me, but I don't know which one is recommended.
Difference between map and function returning when in Kotlin |
|kotlin| |
One counter example to your solution is:
```
[8, 7, 5, 4, 4, 1]
```
Adding as you have done would give the subsets:
```
[8, 4, 4], [7, 5, 1]: difference of sums = 3
```
while the optimal solution is:
```
[8, 5, 4], [7, 4, 1]: difference of sums = 1
```
Thus, to solve this problem, you need to brute-force all (n choose floor(n/2)) combinations and find the one with the smallest difference. Here is some sample code:
```python
comb = []
def getcomb(l, ind, k):
if len(comb) == k:
return [comb[:]]
if ind == len(l):
return []
ret = getcomb(l, ind+1, k)
comb.append(l[ind])
ret += getcomb(l, ind+1, k)
comb.pop()
return ret
def get_best_split(l):
lsm = sum(l)
best = lsm
for i in getcomb(l, 0, len(l)//2):
sm = sum(i)
best = min(best, abs(sm - (lsm - sm)))
return best
print(get_best_split([8, 7, 5, 4, 4, 1])) # outputs 1
```
**EDIT:**
If you don't care about the subsets themselves, then you can just generate all possible sums:
```python
def getcomb(l, ind, k, val):
if k == 0:
return [val]
if ind == len(l):
return []
return getcomb(l, ind+1, k, val) + getcomb(l, ind+1, k-1, val + l[ind])
def get_best_split(l):
l = sorted(l, reverse=True)
best = 1000000
for i in getcomb(l, 0, len(l)//2, 0):
best = min(best, abs(i - (sum(l) - i)))
return best
```
**EDIT 2:**
Another interesting thing to try out might be a modified knapsack solution, where you keep track of the set of element counts that can make up each sum. The complexity would be N^2 * sum(L), which is arguably better than N choose (N/2), depending on how large the average element in your list is:
```python
def get_best_split_knapsack(l):
sm = sum(l)
dp = [[-1, set()] for _ in range(sm+1)]
dp[0][0] = 1
dp[0][1].add(0)
best = sm
for i in l:
for j in range(sm//2, i-1, -1):
if dp[j-i][0] == 1:
dp[j][0] = 1
dp[j][1].update(k+1 for k in dp[j-i][1])
for j in range(max(0,int(sm//2-best)), min(int(sm//2+best)+1, sm)):
if dp[j][0] == 1 and len(l)//2 in dp[j][1]:
best = min(best, abs(sm/2 - j))
return int(2*best)
``` |
|python| |
The usual MSVC compiler warnings are `C4xxx` and `C5xxx`. Warnings with other number format are Code Analysis warnings.
The [`/analyze`][1] option enables code analysis, which makes the compiler deliberately analyze code semantics for red flags. It has a lot more warnings. Some are implemented by the compiler itself, some are available via plugins (like the C++ Core Guidelines checkers).
There's an option `/analyze:only` to run code analysis without compilation. This makes sense for larger programs, as code analysis is much slower than the usual compilation, so you compile without `/analyze` at all, and have a scheduled run of `/analyze:only` on a build server.
To control the large number of various Code Analysis warnings in a more convenient way than pragmas or compiler switches, there are `.ruleset` files. They are in `%VSINSTALLDIR%\Team Tools\Static Analysis Tools\Rule Sets`. They can be edited via the IDE, or as XML files, so you can create your own `.ruleset` file based on an existing one and suppress any warnings.
For example, if you run the compiler with `/analyze:only` on the following program, which not only tries to format an integer as a string, but also tries to obtain that integer by dereferencing a null pointer:
```c++
#include <stdio.h>
int main()
{
int *i = nullptr;
printf("%s", *i);
}
```
You'll have the following output:
```
D:\temp2\test.cpp(6) : warning C6067: _Param_(2) in call to 'printf' must be the address of a string. Actual type: 'int'.
D:\temp2\test.cpp(6) : warning C6011: Dereferencing NULL pointer 'i'. : Lines: 5, 6
```
If you create the following `only_format_string.ruleset` file:
```xml
<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="Format Strings" Description="I'm only interested in format strings" ToolsVersion="17.0">
<Rules AnalyzerId="Microsoft.Analyzers.NativeCodeAnalysis" RuleNamespace="Microsoft.Rules.Native">
<Rule Id="C6067" Action="Warning" />
</Rules>
</RuleSet>
```
And run the compiler with `/analyze:only /analyze:ruleset only_format_string.ruleset`, you'll get only C6067, but not C6011.
[1]: https://learn.microsoft.com/en-us/cpp/build/reference/analyze-code-analysis |
I came across this C language problem on the internet. The question was whether this code will compile successfully or return an error.
```
#include <stdio.h>
int main(void)
{
int first = 10;
int second = 20;
int third = 30;
{
int third = second - first;
printf("%d\n", third);
}
printf("%d\n", third);
return 0;
}
```
I personally think that this code should give an error, as we are redeclaring the variable `third` in the main function, whereas the answer to this problem was that the code will run successfully with output 10 and 30. Then I compiled this code in VS Code and it gave an error, but on some online compilers it ran successfully with no errors. Can somebody please explain? I don't think there can be two variables with the same name inside the curly braces inside main. If `third` were declared after the curly braces instead of before, it would work completely fine, like this:
|
I have 68 3d points (I'm guessing it's called a sparse point cloud). I want to connect them all together to create a mesh. I first tried this using Delaunay Triangulation. However, it didn't work well because Delaunay Triangulation only gives a convex hull, so it is giving a mesh that basically ignores the eyes which are concave for example.
Here is a picture illustrating what I mean:
[Delaunay](https://i.stack.imgur.com/WXSX0.png)
So I tried using something else which is alphashape. I've been using this documentation: https://pypi.org/project/alphashape/
My problem is that it's simply just not working.
Here are some pictures:
[Alpha Shape](https://i.stack.imgur.com/QEzSQ.png)
The above picture shows the 3d points which I want to convert to a mesh
Below pictures show the result of me using alpha shape!
I want to get a 3D concave hull.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import alphashape
points_3d = np.array(sixtyEightLandmarks3D)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points_3d[:, 0], points_3d[:, 1], points_3d[:, 2])
plt.show()
points_3d = [
(0., 0., 0.), (0., 0., 1.), (0., 1., 0.),
(1., 0., 0.), (1., 1., 0.), (1., 0., 1.),
(0., 1., 1.), (1., 1., 1.), (.25, .5, .5),
(.5, .25, .5), (.5, .5, .25), (.75, .5, .5),
(.5, .75, .5), (.5, .5, .75)
]
points_3d = [
(7, 191, 325.05537989702617), (6, 217, 330.15148038438355), (8, 244, 334.2528982671654),
(11, 270, 340.24843864447047), (19, 296, 349.17330940379736), (34, 320, 361.04985805287333),
(56, 340, 373.001340480165), (80, 356, 383.03263568526376), (110, 361, 387.06330231630074),
(140, 356, 383.08354180256816), (165, 341, 373.1621631409058), (187, 321, 359.4022815731698),
(205, 298, 344.64039229318433), (214, 272, 334.72376670920755), (216, 244, 328.54984401152893),
(218, 217, 324.34703636691364), (217, 190, 319.189598828032), (22, 166, 353.0056656769123), (33, 152, 359.0055709874152),
(52, 145, 364.0), (72, 147, 368.00135869314397), (91, 153, 372.0013440835933), (125, 153, 370.0013513488836),
(145, 146, 366.001366117669), (167, 144, 361.0), (186, 151, 358.00558654859003), (197, 166, 351.0056979594491),
(108, 179, 376.02127599379264), (108, 197, 381.02099679676445), (109, 214, 387.03229839381623),
(109, 233, 393.04579885809744), (87, 252, 383.03263568526376), (98, 255, 386.0323820614017), (109, 257, 387.03229839381623),
(120, 254, 385.0324661635691), (131, 251, 383.0469945058961), (44, 183, 360.01249978299364), (55, 176, 363.00550960006103),
(69, 176, 363.0123964825444), (81, 186, 364.0219773585106), (69, 188, 364.0219773585106), (54, 188, 364.0219773585106),
(136, 185, 361.01246515875323), (147, 175, 362.0013812128346), (162, 175, 361.00554012369395),
(174, 183, 357.0014005574768), (163, 188, 360.01249978299364), (149, 188, 362.01243072579706),
(73, 289, 384.04687213932624), (86, 282, 389.0462697417879), (100, 278, 391.0319680026174),
(109, 281, 391.0460330958492), (120, 277, 391.0319680026174), (134, 281, 387.03229839381623),
(147, 289, 380.0210520484359), (135, 299, 388.03221515745315), (121, 305, 392.04591567825315),
(110, 307, 392.04591567825315), (100, 306, 392.04591567825315), (86, 300, 391.0626548265636),
(78, 290, 386.0207248322297), (100, 290, 391.0319680026174), (109, 291, 391.0319680026174),
(120, 289, 391.0319680026174), (142, 289, 381.03280698648507), (120, 290, 391.0319680026174),
(109, 292, 392.03188645823184), (100, 291, 392.04591567825315),
(0., 1., 1.), (1., 1., 1.), (.25, .5, .5),
(.5, .25, .5)
]
alpha_shape = alphashape.alphashape(points_3d, lambda ind, r: 0.3 + any(np.array(points_3d)[ind][:,0] == 0.0))
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_trisurf(*zip(*alpha_shape.vertices), triangles=alpha_shape.faces)
plt.show()
```
Note: I am getting the points of the face and then putting them in the `points_3d` variable manually. Also, for some reason, I found that if I don't add `(0., 1., 1.), (1., 1., 1.), (.25, .5, .5), (.5, .25, .5)` to the array, then alphashape won't work and will give me the following error: `too many indices for array: array is 1-dimensional, but 2 were indexed`.
Converting MP3/MP4 to WAV in the Frontend Using ffmpegwasm with Next.js Results in Module Not Found Error |
|ffmpeg| |
Coding a two-dimensional, dynamically allocated array of `int` is a non-trivial task. If you use a _double pointer_, there are significant memory allocation details. In particular, you need to check every allocation attempt, and, if any one fails, back out of the ones that didn't fail, without leaking memory. And, finally, to take a term from the C++ lexicon, you need to provide some equivalent of the _Rule of Three_ functions used by a C++ _RAII_ class.
A two-dimensional array of `int` with `n_rows` and `n_cols` should probably be allocated with a single call to `malloc`. For the program in the OP, you might have:
```lang-c
char *str = "1255555555555555";
const size_t n_rows = strlen(str);
const size_t n_cols = 2;
int (*data)[n_cols] = malloc(n_rows * sizeof (*data));
```
Variable `data` is a _pointer to an array of 2 `int`s_, i.e., it is a pointer to a row in the OP's two-dimensional array. `sizeof(*data)`, therefore, is the size of one row. Multiplying by `n_rows` gives you the size of the entire array, which is the size allocated by `malloc`. So, `malloc` allocates the entire array, and variable `data` picks up a pointer to the first row.
See this [Stack Overflow question](https://stackoverflow.com/q/36794202/22193627) for a fuller explanation of this C idiom. Also, see this [question](https://stackoverflow.com/questions/42094465/correctly-allocating-multi-dimensional-arrays).
The program in the OP, however, does not use this idiom. It takes the _double pointer_ route, and allocates an _array of pointers to int_. The elements in such an array are `int*`. Thus, the array is an _array of pointers to rows_, where each row is an _array of `int`_. The argument to the `sizeof` operator, therefore, must be `int*`.
```lang-c
int** data = malloc(n_rows * sizeof (int*)); // step 1.
```
Afterwards, you run a loop to allocate the columns of each row. Conceptually, each row is an array of `int`, with `malloc` returning a pointer to the first element of each row.
```lang-c
for (size_t r = 0; r < n_rows; ++r)
    data[r] = malloc(n_cols * sizeof(int)); // step 2.
```
The program in the OP errs in step 1.
```lang-c
// from the OP:
ret = (int **)malloc(sizeof(int) * len); // should be sizeof(int*)
```
Taking guidance from the [answer by @Eric Postpischil](https://stackoverflow.com/a/78248309/22193627), this can be coded as:
```lang-c
int** ret = malloc(len * sizeof *ret);
```
The program below, however, uses (the equivalent of):
```lang-c
int** ret = malloc(len * sizeof(int*));
```
#### Fixing memory leaks
The program in the OP is careful to check each call to `malloc`, and abort the allocation process if any one of them fails. After a failure, however, it does not call `free` to release the allocations that were successful. Potentially, therefore, it leaks memory. It should keep track of the row where a failure occurs, and call `free` on the preceding rows. It should also call `free` on the original array of pointers.
There is enough detail to warrant refactoring the code to create a separate header with functions to manage a two-dimensional array. This separates the business of array management from the application itself.
Header `tbx.int_array2D.h`, defined below, provides the following functions.
- _Struct_ – `struct int_array2D_t` holds a `data` pointer, along with variables for `n_rows` and `n_cols`.
- _Make_ – Function `make_int_array2D` handles allocations, and returns a `struct int_array2D_t` object.
- _Free_ – Function `free_int_array2D` handles deallocations.
- _Clone_ – Function `clone_int_array2D` returns a _deep copy_ of a `struct int_array2D_t` object. It can be used in initialization expressions, but, in general, should not be used for assignments.
- _Swap_ – Function `swap_int_array2D` swaps two `int_array2D_t` objects.
- _Copy assign_ – Function `copy_assign_int_array2D` replaces an existing `int_array2D_t` object with a _deep copy_ of another. It performs allocation and deallocation, as needed.
- _Move assign_ – Function `move_assign_int_array2D` deallocates an existing `int_array2D_t` object, and replaces it with a _shallow copy_ of another. After assignment, it zeros-out the source.
- _Equals_ – Function `equals_int_array2D` performs a _deep comparison_ of two `int_array2D_t` objects, returning `1` when they are equal, and `0`, otherwise.
```lang-c
#pragma once
// tbx.int_array2D.h
#include <stddef.h>
struct int_array2D_t
{
int** data;
size_t n_rows;
size_t n_cols;
};
void free_int_array2D(
struct int_array2D_t* a);
struct int_array2D_t make_int_array2D(
const size_t n_rows,
const size_t n_cols);
struct int_array2D_t clone_int_array2D(
const struct int_array2D_t* a);
void swap_int_array2D(
struct int_array2D_t* a,
struct int_array2D_t* b);
void copy_assign_int_array2D(
struct int_array2D_t* a,
const struct int_array2D_t* b);
void move_assign_int_array2D(
struct int_array2D_t* a,
struct int_array2D_t* b);
int equals_int_array2D(
const struct int_array2D_t* a,
const struct int_array2D_t* b);
// end file: tbx.int_array2D.h
```
#### A trivial application
With these functions, code for the application becomes almost trivial.
I am not sure why the OP wrote his own version of `strlen`, but I went with it, changing only the type of its return value.
```lang-c
// main.c
#include <stddef.h>
#include <stdio.h>
#include "tbx.int_array2D.h"
size_t ft_strlen(char* str)
{
size_t count = 0;
while (*str++)
count++;
return (count);
}
struct int_array2D_t parray(char* str)
{
size_t len = ft_strlen(str);
struct int_array2D_t ret = make_int_array2D(len, 2);
for (size_t r = ret.n_rows; r--;) {
ret.data[r][0] = str[r] - '0';
ret.data[r][1] = (int)(len - 1 - r);
}
return ret;
}
int main()
{
char* str = "1255555555555555";
struct int_array2D_t ret = parray(str);
for (size_t r = 0; r < ret.n_rows; ++r) {
printf("%d %d \n", ret.data[r][0], ret.data[r][1]);
}
free_int_array2D(&ret);
}
// end file: main.c
```
#### Source code for `tbx.int_array2D.c`
Function `free_int_array2D` has been designed so that it can be used for the normal deallocation of an array, such as happens in function `main`, and also so that it can be called from function `make_int_array2D`, when an allocation fails.
Either way, it sets the `data` pointer to `NULL`, and both `n_rows` and `n_cols` to zero. Applications that use header `tbx.int_array2D.h` can check the `data` pointer of objects returned by functions `make_int_array2D`, `clone_int_array2D`, and `copy_assign_int_array2D`. If it is `NULL`, then the allocation failed.
```lang-c
// tbx.int_array2D.c
#include <stddef.h>
#include <stdlib.h>
#include "tbx.int_array2D.h"
//======================================================================
// free_int_array2D
//======================================================================
void free_int_array2D(struct int_array2D_t* a)
{
for (size_t r = a->n_rows; r--;)
free(a->data[r]);
free(a->data);
a->data = NULL;
a->n_rows = 0;
a->n_cols = 0;
}
//======================================================================
// make_int_array2D
//======================================================================
struct int_array2D_t make_int_array2D(
const size_t n_rows,
const size_t n_cols)
{
struct int_array2D_t a = {
malloc(n_rows * sizeof(int*)),
n_rows,
n_cols
};
if (!n_rows || !n_cols)
{
// If size is zero, the behavior of malloc is implementation-
// defined. For example, a null pointer may be returned.
// Alternatively, a non-null pointer may be returned; but such
// a pointer should not be dereferenced, and should be passed
// to free to avoid memory leaks. – CppReference
// https://en.cppreference.com/w/c/memory/malloc
free(a.data);
a.data = NULL;
a.n_rows = 0;
a.n_cols = 0;
}
else if (a.data == NULL) {
a.n_rows = 0;
a.n_cols = 0;
}
else {
for (size_t r = 0; r < n_rows; ++r) {
a.data[r] = malloc(n_cols * sizeof(int));
if (a.data[r] == NULL) {
a.n_rows = r;
free_int_array2D(&a);
break;
}
}
}
return a;
}
//======================================================================
// clone_int_array2D
//======================================================================
struct int_array2D_t clone_int_array2D(const struct int_array2D_t* a)
{
struct int_array2D_t clone = make_int_array2D(a->n_rows, a->n_cols);
for (size_t r = clone.n_rows; r--;) {
for (size_t c = clone.n_cols; c--;) {
clone.data[r][c] = a->data[r][c];
}
}
return clone;
}
//======================================================================
// swap_int_array2D
//======================================================================
void swap_int_array2D(
struct int_array2D_t* a,
struct int_array2D_t* b)
{
struct int_array2D_t t = *a;
*a = *b;
*b = t;
}
//======================================================================
// copy_assign_int_array2D
//======================================================================
void copy_assign_int_array2D(
struct int_array2D_t* a,
const struct int_array2D_t* b)
{
if (a->data != b->data) {
if (a->n_rows != b->n_rows || a->n_cols != b->n_cols) {
free_int_array2D(a);
*a = make_int_array2D(b->n_rows, b->n_cols);
}
for (size_t r = a->n_rows; r--;) {
for (size_t c = a->n_cols; c--;) {
a->data[r][c] = b->data[r][c];
}
}
}
}
//======================================================================
// move_assign_int_array2D
//======================================================================
void move_assign_int_array2D(
struct int_array2D_t* a,
struct int_array2D_t* b)
{
if (a->data != b->data) {
free_int_array2D(a);
*a = *b;
b->data = NULL;
b->n_rows = 0;
b->n_cols = 0;
}
}
//======================================================================
// equals_int_array2D
//======================================================================
int equals_int_array2D(
const struct int_array2D_t* a,
const struct int_array2D_t* b)
{
if (a->n_rows != b->n_rows ||
a->n_cols != b->n_cols) {
return 0;
}
for (size_t r = a->n_rows; r--;) {
for (size_t c = a->n_cols; c--;) {
if (a->data[r][c] != b->data[r][c]) {
return 0;
}
}
}
return 1;
}
// end file: tbx.int_array2D.c
```
|
Can't open new instance of another window in my app, in WPF .net 8 |
I am making a form using Next.js 14. I made an actions.ts file in which I built a generic method that shows a console log. In the form component, loginForm.tsx, I use the `action` property on the form tag, and from there I call the generic method that I made in actions.ts.
What happened is that, when I clicked on the save button of the form, I observed in the devtools Network tab that a request appeared, and in its Payload section all the information from the form fields is displayed.
my doubts are:
Why does the user and password information appear in the devtools -> tab network -> in the payload?
Is that considered a security vulnerability?
Is there any idea on how to prevent it from displaying that information?
I was investigating the topic; apparently it has to do with the HTML tag `<form action={}></form>`, but there was no mention of how to solve that detail.
How can I prevent the password from appearing in the network tab payload? |
|forms|next.js|react-hooks|server|passwords| |
<!-- language-all: sh -->
_Update_:
* The next section still applies to _direct_ `msiexec` invocations from PowerShell.
* A **simpler solution** is to **call `msiexec` _via `cmd.exe /c`_**, as it gives you more direct control over the resulting process command line and its quoting, and the invocation is _synchronous_ (blocking) by default and even reports the exit code via the [automatic `$LASTEXITCODE` variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Automatic_Variables#lastexitcode):
```powershell
# Executes synchronously and reports the exit code via $LASTEXITCODE
cmd /c 'msiexec.exe /q /i "C:\Users\ADMINI~1\AppData\Local\Temp\mongo-server-3.4-latest.msi" INSTALLLOCATION="C:\Program Files\MongoDB\Server\3.4\" ADDLOCAL="all"'
```
* Inside the overall `'...'` ([verbatim](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Quoting_Rules#single-quoted-strings)) string passed to `cmd /c`, you must **only use `"..."` quoting**, which `cmd.exe` - unlike PowerShell - will pass through as-is to `msiexec`.
* To embed PowerShell variable values, use an overall `"..."` ([expandable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Quoting_Rules#double-quoted-strings)) string and escape the embedded `"` as `` `" `` (or `""`).
* This approach notably also works when you need to pass *property values with spaces* which require _partial quoting_, e.g. ``FOO="bar baz"`` (``FOO=`"bar baz`"`` inside `"..."` quoting); if you used direct invocation, PowerShell would reformat this to `"FOO=bar baz"` behind the scenes, which `msiexec.exe` doesn't recognize.
* To make the call _asynchronous_ (return control to PowerShell right after launching `msiexec`, before it completes, at the expense of not learning its exit code), use `cmd /c 'start "" msiexec ...'`.
---
It seems that **in order to pass paths with *embedded spaces* to `msiexec`, you must use explicit _embedded_ `"..."` quoting around them.**
In your case, this means that instead of passing
`INSTALLLOCATION='C:\Program Files\MongoDB\Server\3.4\'`, you must pass `INSTALLLOCATION='"C:\Program Files\MongoDB\Server\3.4\\"'`<sup>[1]</sup>
Note the embedded `"..."` and the extra `\` at the end of the path to ensure that `\"` alone isn't mistaken for an _escaped_ `"` by `msiexec` (though it may work without the extra `\` too).
To put it all together:
```powershell
# See v7.3+ caveat below.
msiexec.exe /q /i `
'C:\Users\ADMINI~1\AppData\Local\Temp\mongo-server-3.4-latest.msi' `
INSTALLLOCATION='"C:\Program Files\MongoDB\Server\3.4\\"' ADDLOCAL='all'
```
**Note**:
* To make this call _synchronous_, you can use a trick: append `... | Write-Output`, which also causes `msiexec`'s _exit code_ to be reflected in [`$LASTEXITCODE`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Automatic_Variables#lastexitcode).
**Caveat**:
* This embedded-quoting technique **relies on longstanding, but _broken_ PowerShell behavior** - see [this answer](https://stackoverflow.com/a/59036879/45375); this behavior was fixed in [_PowerShell (Core)_](https://github.com/PowerShell/PowerShell/blob/master/README.md) v7.3 (with selective exceptions on Windows), so **in _v7.3+_, you must first set [`$PSNativeCommandArgumentPassing`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables?view=powershell-7.5#psnativecommandargumentpassing) `= 'Legacy'`**; by contrast, the
`--%` approach shown below continues to work as-is.
* A workaround-free, future-proof method is to use the **PSv3+ `ie` helper function** from the **[`Native` module](https://github.com/mklement0/Native)** (in PSv5+, install with `Install-Module Native` from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Native)), which internally **compensates for all broken behavior** and allows passing arguments as expected; that is, simply prepending `ie` to your original command would be enough:
```powershell
# No workarounds needed with the 'ie' function from the 'Native' module.
ie msiexec.exe /q /i 'C:\Users\ADMINI~1\AppData\Local\Temp\mongo-server-3.4-latest.msi' INSTALLLOCATION='C:\Program Files\MongoDB\Server\3.4\' ADDLOCAL='all'
```
---
The **alternative** is to stick with the original quoting and use `--%`, the [stop-parsing symbol](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Parsing), but note that this means that you cannot use PowerShell variables in any subsequent arguments (however, you could define _environment_ variables - e.g. `$env:foo = ...` - and then reference them with `cmd.exe` syntax - e.g. `%foo%`):
```powershell
msiexec.exe /q /i `
'C:\Users\ADMINI~1\AppData\Local\Temp\mongo-server-3.4-latest.msi' `
--% INSTALLLOCATION="C:\Program Files\MongoDB\Server\3.4\\" ADDLOCAL='all'
```
---
Note that **`msiexec`**, despite having a CLI (command-line interface), is a _GUI_-subsystem application, so it **runs _asynchronously_ by default**; if you want **to run it _synchronously_, use
`Start-Process -Wait`**:
```powershell
$msiArgs = '/q /i "C:\Users\ADMINI~1\AppData\Local\Temp\mongo-server-3.4-latest.msi" INSTALLLOCATION="C:\Program Files\MongoDB\Server\3.4\\" ADDLOCAL=all'
$ps = Start-Process -PassThru -Wait msiexec -ArgumentList $msiArgs
# $ps.ExitCode contains msiexec's exit code.
```
Note that the argument-list string, `$msiArgs`, is used _as-is_ by `Start-Process` as part of the command line used to invoke the target program (`msiexec`), which means:
* only (embedded) _double-quoting_ must be used.
* use `"..."` with embedded `"` escaped as `` `" `` to embed PowerShell variables and expressions in the string.
* conversely, however, no workaround for partially quoted arguments is needed.
Even though `Start-Process` technically supports passing the arguments _individually_, as an _array_, this is best avoided due to a longstanding bug - see [GitHub issue #5576](https://github.com/PowerShell/PowerShell/issues/5576).
---
<sup>[1] The reason that `INSTALLLOCATION='C:\Program Files\MongoDB\Server\3.4\'` doesn't work is that PowerShell transforms the argument by `"..."`-quoting it _as a whole_, which `msiexec` doesn't recognize; specifically, what is passed to `msiexec` in this case is:
`"INSTALLLOCATION=C:\Program Files\MongoDB\Server\3.4\"`</sup>
|
```
function sendDataToServer(
firstName,
lastName,
phoneNumber,
address,
birthday,
age,
idNumber,
gender,
degree,
intake,
semester,
course
) {
console.log("sending date" + birthday);
console.log("sending nic" + idNumber);
console.log("sending phone" + phoneNumber);
console.log(typeof phoneNumber);
console.log(typeof idNumber);
console.log(typeof birthday);
fetch("http://localhost:8080/student/add", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
firstName: firstName,
lastName: lastName,
phoneNumber: phoneNumber,
address: address,
birthday: birthday,
age: age,
idNumber: idNumber,
gender: gender,
degree: degree,
intake: intake,
semester: semester,
course: course,
}),
})
.then((response) => {
if (response.ok) {
alert("Student added successfully");
} else {
alert("An error occurred");
console.log(response);
}
})
.catch((error) => {
console.log(error);
alert("An error occurred");
});
}
```
This function takes in various parameters representing user data, such as firstName, lastName, phoneNumber, address, birthday, age, idNumber, gender, degree, intake, semester, and course.
These are all strings, and they are also defined as strings in the database.
I read the date, NIC and phone number as strings and tried to pass them, but the JSON body is not passing these three strings, although it passes the other strings.
Why is that?
**Dummy data for post**
```
{
  "firstName": "John",
  "lastName": "Doe",
  "phoneNumber": "1234567890",
  "address": "123 Main Street",
  "birthday": "1990-01-01",
  "age": 31,
  "idNumber": "123456789012",
  "gender": "male",
  "degree": "Bachelor of Science",
  "intake": "2020",
  "semester": "Spring",
  "course": "Computer Science"
}
```
**Request Body**
```
{
  "regNo": 28,
  "firstName": "John",
  "lastName": "Doe",
  "phoneNo": null,
  "address": "123 Main Street",
  "nicNo": null,
  "gender": "male",
  "dob": null,
  "age": "31",
  "degree": "Bachelor of Science",
  "intake": "2020",
  "semester": "Spring",
  "course": "Computer Science",
  "enrollments": null
}
```
|
From what I can tell, the `flutterfire` CLI gets installed in the [pub.dev global system cache](https://dart.dev/tools/pub/glossary#system-cache). From there:
> When pub gets a remote package, it downloads it into a single system cache directory maintained by pub. On Mac and Linux, this directory defaults to `~/.pub-cache`. On Windows, the directory defaults to `%LOCALAPPDATA%\Pub\Cache`, though its exact location may vary depending on the Windows version. You can specify a different location using the `PUB_CACHE` environment variable.
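For example, a sketch of relocating the cache on macOS/Linux (the directory path below is just an example; pick any writable location):

```shell
# Export a custom cache location (example path); dart/flutter pub commands
# run afterwards, including `dart pub global activate flutterfire_cli`,
# will download into it instead of the default ~/.pub-cache.
export PUB_CACHE="$HOME/my-pub-cache"
mkdir -p "$PUB_CACHE"
echo "pub cache now at: $PUB_CACHE"
```

To make this permanent, add the `export` line to your shell profile (e.g. `~/.bashrc` or `~/.zshrc`).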
So you can set/change the `PUB_CACHE` environment variable to change the location of the pub cache. This will affect all packages that use it, not just the `flutterfire` CLI. |
|typescript|requirejs|karma-runner|riotts| |
I like using Document Sets in SharePoint (the online 365 version). A document placed in a document set picks up the columns on the document set as metadata, which helps with form production. I have been able to access this metadata in VBA (to assemble text blocks, etc.) using the first line below.
This just isn't working now, and I can't figure out why. I am getting the error: "Method 'Value' of object 'MetaProperty' failed".
I am trying to just Debug.Print it, and this generally works in other (older) documents.
I have also tried the second piece of code below, which also works in older documents, but now throws the error: element not found.
Does anyone have any ideas for me? Am I possibly missing a reference library in VBA?
1. ActiveDocument.ContentTypeProperties("*column name*").Value
or
2. Dim sp As MetaProperties
Set sp = ActiveDocument.ContentTypeProperties
Debug.Print sp("*column name*") |
Sharepoint-Word ContentTypeProperties |
|vba|ms-word|sharepoint-online| |
We have a large number of databases running on a SQL Database Server in Azure. We're trying to automate some basic functions, in this case, setting the LTR policy on all new databases, so there's no chance of one slipping through the cracks and not getting set.
Is there any way to automatically assign LTR policies?
I've found lots of ways to assign them (through the Portal, through the Azure CLI, PowerShell, etc.), but they all require doing it either en masse or to single databases, and all require manual execution.
Is there any way to tell Azure to apply a default to new databases automatically, so we don't have to do anything manually? |
Azure SQL Database Server: Automatically Assign Long Term Retention (LTR) Policy to New Databases |
|sql-server|azure| |
|webpack|ecmascript-6|babeljs|riot.js| |
Good morning,
For an NSI project, I have to create a minesweeper game, and all that remains is to build a graphical interface with `tkinter`.
I have already looked into creating a window and setting its size, color, and so on, but I cannot find out how to place my game's grid in the `tkinter` window.
Do you have any advice or solutions for me?
Thank you in advance for your answers. |
Make a seat object. The player will automatically sit if he touches it. |
If the variable `createdFromId` is in scope when this function is called, then your function should be:
```javascript
function wf_populateField() { nlapiSetFieldValue('tranid',createdFromId); }
```
Note that there are no quotes around the variable name. |
Some attributes in your config are no longer used in [Tomcat 10][1].
For example, `sslEnabledProtocols` was deprecated in [Tomcat 9][2] and removed from Tomcat 10; the `protocols` attribute should be used instead.
Try the config below and check once again.
~~~
<Connector
port="8443"
protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="150"
minSpareThreads="25"
SSLEnabled="true"
scheme="https"
secure="true"
enableLookups="false"
disableUploadTimeout="true"
acceptCount="400"
URIEncoding="UTF-8"
clientAuth="false"
defaultSSLHostConfigName="abx.io"
connectionTimeout="20000">
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"/>
<SSLHostConfig hostName="abx.io" protocols="TLSv1.2" ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA">
<Certificate
certificateFile="conf/cert_abx/cert.pem"
certificateKeyFile="conf/cert_abx/privkey.pem"
certificateChainFile="conf/cert_abx/chain.pem"/>
</SSLHostConfig>
</Connector>
~~~
[1]: https://tomcat.apache.org/tomcat-10.0-doc/config/http.html#SSL_Support_-_SSLHostConfig
[2]: https://tomcat.apache.org/tomcat-9.0-doc/config/http.html#SSL_Support_-_Connector_-_NIO_and_NIO2_(deprecated) |
I have 68 3d points (I'm guessing it's called a sparse point cloud). I want to connect them all together to create a mesh. I first tried this using Delaunay Triangulation. However, it didn't work well because Delaunay Triangulation only gives a convex hull, so it is giving a mesh that basically ignores the eyes which are concave for example.
Here is a picture illustrating what I mean:

So I tried using something else which is alphashape. I've been using this documentation: https://pypi.org/project/alphashape/
My problem is that it's simply just not working.
Here are some pictures:

The above picture shows the 3d points which I want to convert to a mesh
The pictures below show the result of using alphashape.
I want to get a 3D concave hull (an alpha shape), not a convex one.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import alphashape
points_3d = np.array(sixtyEightLandmarks3D)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points_3d[:, 0], points_3d[:, 1], points_3d[:, 2])
plt.show()
points_3d = [
(0., 0., 0.), (0., 0., 1.), (0., 1., 0.),
(1., 0., 0.), (1., 1., 0.), (1., 0., 1.),
(0., 1., 1.), (1., 1., 1.), (.25, .5, .5),
(.5, .25, .5), (.5, .5, .25), (.75, .5, .5),
(.5, .75, .5), (.5, .5, .75)
]
points_3d = [
(7, 191, 325.05537989702617), (6, 217, 330.15148038438355), (8, 244, 334.2528982671654),
(11, 270, 340.24843864447047), (19, 296, 349.17330940379736), (34, 320, 361.04985805287333),
(56, 340, 373.001340480165), (80, 356, 383.03263568526376), (110, 361, 387.06330231630074),
(140, 356, 383.08354180256816), (165, 341, 373.1621631409058), (187, 321, 359.4022815731698),
(205, 298, 344.64039229318433), (214, 272, 334.72376670920755), (216, 244, 328.54984401152893),
(218, 217, 324.34703636691364), (217, 190, 319.189598828032), (22, 166, 353.0056656769123), (33, 152, 359.0055709874152),
(52, 145, 364.0), (72, 147, 368.00135869314397), (91, 153, 372.0013440835933), (125, 153, 370.0013513488836),
(145, 146, 366.001366117669), (167, 144, 361.0), (186, 151, 358.00558654859003), (197, 166, 351.0056979594491),
(108, 179, 376.02127599379264), (108, 197, 381.02099679676445), (109, 214, 387.03229839381623),
(109, 233, 393.04579885809744), (87, 252, 383.03263568526376), (98, 255, 386.0323820614017), (109, 257, 387.03229839381623),
(120, 254, 385.0324661635691), (131, 251, 383.0469945058961), (44, 183, 360.01249978299364), (55, 176, 363.00550960006103),
(69, 176, 363.0123964825444), (81, 186, 364.0219773585106), (69, 188, 364.0219773585106), (54, 188, 364.0219773585106),
(136, 185, 361.01246515875323), (147, 175, 362.0013812128346), (162, 175, 361.00554012369395),
(174, 183, 357.0014005574768), (163, 188, 360.01249978299364), (149, 188, 362.01243072579706),
(73, 289, 384.04687213932624), (86, 282, 389.0462697417879), (100, 278, 391.0319680026174),
(109, 281, 391.0460330958492), (120, 277, 391.0319680026174), (134, 281, 387.03229839381623),
(147, 289, 380.0210520484359), (135, 299, 388.03221515745315), (121, 305, 392.04591567825315),
(110, 307, 392.04591567825315), (100, 306, 392.04591567825315), (86, 300, 391.0626548265636),
(78, 290, 386.0207248322297), (100, 290, 391.0319680026174), (109, 291, 391.0319680026174),
(120, 289, 391.0319680026174), (142, 289, 381.03280698648507), (120, 290, 391.0319680026174),
(109, 292, 392.03188645823184), (100, 291, 392.04591567825315),
(0., 1., 1.), (1., 1., 1.), (.25, .5, .5),
(.5, .25, .5)
]
alpha_shape = alphashape.alphashape(points_3d, lambda ind, r: 0.3 + any(np.array(points_3d)[ind][:,0] == 0.0))
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_trisurf(*zip(*alpha_shape.vertices), triangles=alpha_shape.faces)
plt.show()
```
Note: I am getting the points of the face and then putting them into the `points_3d` variable manually. Also, for some reason, I found that if I don't add `(0., 1., 1.), (1., 1., 1.), (.25, .5, .5), (.5, .25, .5)` to the array, then alphashape won't work and gives me the following error: too many indices for array: array is 1-dimensional, but 2 were indexed |
I'm trying to create a code for **perfectly optimal chess endgame**.
By that I mean that the loosing player tries to delay the checkmate as long as possible while the winning tries to checkmate the opponent as soon as possible.
This code for chess endgame is my currently best [one](https://pastebin.com/zkcbgANy)
```python
import chess

def simplify_fen_string(fen):
    parts = fen.split(' ')
    simplified_fen = ' '.join(parts[:4])  # Keep only the position information
    return simplified_fen

def evaluate_position(board):
    if board.is_checkmate():
        return -1000  # The player to move is checkmated
    elif board.is_stalemate() or board.is_insufficient_material() or board.can_claim_draw():
        return 0  # Draw
    else:
        return None  # The game continues

def create_AR_entry(result, children, last_move):
    return {"result": result, "children": children, "last_move": last_move, "best_child": None}

def update_best_case(best_case):
    if best_case == 0:
        return best_case
    if best_case > 0:
        return best_case - 1
    else:
        return best_case + 1

def update_AR_for_mate_in_k(board, AR, simplified_initial_fen, max_k=1000):
    evaluated_list = []
    for k in range(1, max_k + 1):
        print(f"K = {k}")
        changed = False
        for _t in range(2):  # Make sure the update runs twice for every k
            print(f"_t = {_t}")
            for fen in list(AR.keys()):
                board.set_fen(fen)
                if AR[fen]['result'] is not None:
                    if fen == simplified_initial_fen:
                        print(f"Finally we found a mate! {AR[fen]['result']}")
                        return
                    continue  # Skip positions that already have an evaluation
                # Default values for the best-case scenario
                best_case = float("-inf")
                nones_present = False
                best_child = None
                for move in board.legal_moves:
                    board.push(move)
                    next_fen = simplify_fen_string(board.fen())
                    if next_fen not in AR:
                        AR[next_fen] = create_AR_entry(evaluate_position(board), None, move)
                        evaluated_list.append(next_fen)
                        if len(evaluated_list) % 100000 == 0:
                            print(f"Evaluated: {len(evaluated_list)}")
                    board.pop()
                    next_eval = AR[next_fen]['result']
                    if next_eval is not None:
                        if -next_eval > best_case:
                            best_case = max(best_case, -next_eval)
                            best_child = next_fen
                    else:
                        nones_present = True
                if nones_present:
                    if best_case > 0:
                        AR[fen]['result'] = update_best_case(best_case)
                        AR[fen]['best_child'] = best_child
                        changed = True
                else:
                    # Update the evaluation from the best-case scenario
                    AR[fen]['result'] = update_best_case(best_case)
                    AR[fen]['best_child'] = best_child
                    changed = True
                if fen == "8/8/3R4/8/8/5K2/8/4k3 b - -" or fen == "8/8/3R4/8/8/5K2/8/5k2 w - -":
                    print("^^^^^^^^")

def print_draw_positions(AR):
    """Print all drawn positions (value 0) recorded in the AR dictionary."""
    print("Drawn positions:")
    for fen, value in AR.items():
        if True or (value > 990 and value < 1000):
            print(f"FEN>: {fen}, Value: {value}", "\n", chess.Board(fen), "<\n")

def find_path_to_end(AR, fen):
    if AR[fen]['result'] is None:
        print("Unfortunately, there is no path that is known to be the best")
    fen_i = fen
    print(chess.Board(fen_i), "\n<")
    path = fen
    while AR[fen_i]['best_child'] is not None:
        fen_i = AR[fen_i]['best_child']
        print(chess.Board(fen_i), "\n<")
        path = path + ", " + fen_i
    print(f"Path is: {path}")

def main():
    initial_fen = "1k6/5P2/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen_original = "8/8/8/8/3Q4/5K2/8/4k3 w - - 0 1"
    initial_fen_mate_in_one_aka_one_ply = "3r1k2/5r1p/5Q1K/2p3p1/1p4P1/8/8/8 w - - 2 56"
    initial_fen_mate_in_two_aka_three_plies = "r5k1/2r3p1/pb6/1p2P1N1/3PbB1P/3pP3/PP1K1P2/3R2R1 b - - 4 28"
    initial_fen_mated_in_two_plies = "r5k1/2r3p1/p7/bp2P1N1/3PbB1P/3pP3/PP1K1P2/3R2R1 w - - 5 29"
    mate_in_two_aka_three_plies_simple = "8/8/8/8/3R4/5K2/8/4k3 w - - 0 1"
    mated_in_one_aka_two_plies_simple = "8/8/3R4/8/8/5K2/8/4k3 b - - 1 1"
    mate_in_one_aka_one_ply_simple = "8/8/3R4/8/8/5K2/8/5k2 w - - 2 2"
    initial_fen = mate_in_two_aka_three_plies_simple
    initial_fen = "1k6/5P2/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen = "1k6/8/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen = "8/8/8/8/8/7N/1k5K/6B1 w - - 0 1"
    initial_fen = "7K/8/k1P5/7p/8/8/8/8 w - - 0 1"
    simplified_fen = simplify_fen_string(initial_fen)
    board = chess.Board(initial_fen)
    # Initialize AR with the starting position
    AR = {simplified_fen: {"result": None, "last_move": None, "children": None, "best_child": None}}
    update_AR_for_mate_in_k(board, AR, simplified_fen, max_k=58)  # Update AR
    # print_draw_positions(AR)
    print(f"AR for initial fen is = {AR[simplified_fen]}")
    find_path_to_end(AR, simplified_fen)

main()
```
However, for the initial fen `"8/8/8/4k3/2K4R/8/8/8 w - - 0 1"` it doesn't give the optimal result like this one: https://lichess.org/analysis/8/8/8/4k3/2K4R/8/8/8_w_-_-_0_1?color=white
Rather, it gives 27 plies [like this](https://pastebin.com/hZ6AaBZe), while the lichess link above gives 1000-977 == 23 plies. Help finding the bug will be highly appreciated. |
I am trying to convert some R code I've written into an R shiny app so others can use it more readily. The code utilizes a package called `IPDfromKM`. The main function of issue is `getpoints()`, which in R will generate a plot and the user will need to click the max X and max Y coordinates, followed by clicking through the entire KM curve, which extracts the coordinates into a data frame. However, I cannot get this to work in my R shiny app.
There is a [link](https://biostatistics.mdanderson.org/shinyapps/IPDfromKM/) to the working R Shiny app from the creator.
This is the getpoints() code:
```
getpoints <- function(f,x1,x2,y1,y2){
## if bitmap
if (typeof(f)=="character")
{ lfname <- tolower(f)
if ((strsplit(lfname,".jpeg")[[1]]==lfname) && (strsplit(lfname,".tiff")[[1]]==lfname) &&
(strsplit(lfname,".bmp")[[1]]==lfname) && (strsplit(lfname,".png")[[1]]==lfname) &&
(strsplit(lfname,".jpg")[[1]]==lfname))
{stop ("This function can only process bitmap images in JPEG, PNG,BMP, or TIFF formate.")}
img <- readbitmap::read.bitmap(f)
} else if (typeof(f)=="double")
{
img <- f
} else {
stop ("Please double check the format of the image file.")
}
## function to read the bitmap and points on x-axis and y-axis
axispoints <- function(img){
op <- par(mar = c(0, 0, 0, 0))
on.exit(par(op))
plot.new()
rasterImage(img, 0, 0, 1, 1)
message("You need to define the points on the x and y axis according to your input x1,x2,y1,y2. \n")
message("Click in the order of left x-axis point (x1), right x-axis point(x2),
lower y-axis point(y1), and upper y-axis point(y2). \n")
x1 <- as.data.frame(locator(n = 1,type = 'p',pch = 4,col = 'blue',lwd = 2))
x2 <- as.data.frame(locator(n = 1,type = 'p',pch = 4,col = 'blue',lwd = 2))
y1 <- as.data.frame(locator(n = 1,type = 'p',pch = 3,col = 'red',lwd = 2))
y2 <- as.data.frame(locator(n = 1,type = 'p',pch = 3,col = 'red',lwd = 2))
ap <- rbind(x1,x2,y1,y2)
return(ap)
}
## function to calibrate the points to the appropriate coordinates
calibrate <- function(ap,data,x1,x2,y1,y2){
x <- ap$x[c(1,2)]
y <- ap$y[c(3,4)]
cx <- lm(formula = c(x1,x2) ~ c(x))$coeff
cy <- lm(formula = c(y1,y2) ~ c(y))$coeff
data$x <- data$x*cx[2]+cx[1]
data$y <- data$y*cy[2]+cy[1]
return(as.data.frame(data))
}
## take the points
ap <- axispoints(img)
message("Mouse click on the K-M curve to take the points of interest. The points will only be labled when you finish all the clicks.")
takepoints <- locator(n=512,type='p',pch=1,col='orange4',lwd=1.2,cex=1.2)
df <- calibrate(ap,takepoints,x1,x2,y1,y2)
par()
return(df)
}
```
I'm a bit at a loss as to how to execute this in my main panel. I've tried using `plotOutput()` and `imageOutput()`, and calling variations of the functions below, but nothing pops up or seems to work like it does in RStudio. Will I need to split the components of the function out into individual steps?
```
createPoints<-reactive({
#Read File
file <- input$file1
ext <- tools::file_ext(file$datapath)
req(file)
validate(need(ext == "png", "Please upload a .png file"))
##should run the function that generates a plot for clicking coordinates and stores them in a data frame
points<-getpoints(file,x1=0, x2=input$Xmax,y1=0, y2=100)
return(points)
})
```
```
output$getPointsPlot<-renderPlot(
createPoints()
)
``` |
The function `sizeThatFits` needs to compute the width that it really needs to fit inside the proposal it receives. At the moment it is just returning the proposed width it is given, so it is always using the full available width.
For example, you could change the function `sizeThatFits` to something like this:
```swift
func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
let maxWidth = proposal.width ?? 0
var width: CGFloat = 0
var height: CGFloat = 0
let rows = generateRows(maxWidth, proposal, subviews)
for (index, row) in rows.enumerated() {
var rowWidth = CGFloat.zero
for (i, subview) in row.enumerated() {
if i > 0 {
rowWidth += spacing
}
rowWidth += subview.sizeThatFits(proposal).width
}
width = max(width, rowWidth)
if index == (rows.count - 1) {
height += row.maxHeight(proposal)
} else {
height += row.maxHeight(proposal) + spacing
}
}
return .init(width: width, height: height)
}
```
 |
I have a class, Word, that has a String representing a word and an Enum representing the difficulty of the word. Then I have another class, Vocab, that has an ArrayList of Word-objects. I want to sort my ArrayList by either difficulty or by the alphabetical order of the word.
I figured I should use a Comparator, and here is how I did it. All classes are in the same package:
Diff:
public enum Diff {
EASY, MEDIUM, HARD;
}
Word:
public class Word {
private String word;
private Diff diff;
public Word(String word, Diff diff) {
this.word = word;
this.diff = diff;
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public Diff getDiff() {
return diff;
}
public void setDiff(Diff diff) {
this.diff = diff;
}
@Override
public String toString() {
String string = getWord() + ", " + getDiff();
return string;
}
}
Vocab:
public class Vocab {
private ArrayList<Word> words;
public Vocab() {
words = new ArrayList<>();
}
public void sortByPrio() {
Collections.sort(words, prioCompare);
}
public void sortByName() {
Collections.sort(words, nameCompare);
}
private Comparator<Word> prioCompare = new Comparator<Word>() {
@Override
public int compare(Word w1, Word w2) {
return (w1.getDiff().ordinal() - w2.getDiff().ordinal());
}
};
private Comparator<Word> nameCompare = new Comparator<Word>() {
@Override
public int compare(Word w1, Word w2) {
return (w1.getWord().compareToIgnoreCase(w2.getWord()));
}
};
public void addWord(Word newWord) {
words.add(newWord);
}
@Override
public String toString() {
String string = words.toString();
return string;
}
}
Main:
public class Main {
public static void main(String[] args) {
Vocab myWords = new Vocab();
myWords.addWord(new Word("Apple", Diff.EASY));
myWords.addWord(new Word("Besmirch", Diff.MEDIUM));
myWords.addWord(new Word("Pulchritudinous", Diff.HARD));
myWords.addWord(new Word("Dog", Diff.EASY));
System.out.println(myWords);
myWords.sortByPrio();
System.out.println(myWords);
}
}
Is anything inherently wrong with my code above regarding my implementation of the Comparator? If so, what should I improve? Also, I haven't overridden `equals`, which I usually do when I implement `Comparable`. Should I override it now as well?
|
[CSS layers][1] are awesome, except one (in my opinion) unintuitive behavior: they have a lower priority than unlayered styles:
```css
/* site style */
div {
color: red;
}
/* user style */
@layer myflavor {
div {
color: blue;
}
}
/* The div is still red :-( */
```
So I can not, for example, use layers in user styles. Because then the user style may not have an effect if a rule for that selector is already defined.
It also makes it complex to gradually transform the styles of a site to a layer based approach, except the complete original CSS gets wrapped in a layer first, which may be difficult to achieve.
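For illustration, the wrap-everything-first workaround mentioned above would look roughly like this (the layer names are arbitrary):

```css
/* Declare the layer order up front: later layers win. */
@layer site, myflavor;

@layer site {
  /* the site's original, previously unlayered styles */
  div { color: red; }
}

@layer myflavor {
  div { color: blue; } /* now wins, because myflavor is declared later */
}
```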
Is there a feature which makes a layer have a higher priority than unlayered styles? Something like `@layer! foo {...}`?
[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/@layer |
Is there a way to have layered CSS styles have a higher priority than unlayered styles? |
|css-layer| |
The top-level API function `numba.generated_jit` is [deprecated and has been removed in numba versions >= 0.59.0][1].
I suggest installing the last version that still provides `numba.generated_jit`, which is `numba==0.58.1` (for example, `pip install numba==0.58.1`).
[1]: https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-generated-jit |
I have access to 2 nodes, each has 2 GPUs. I want to have 4 processes, each has a GPU. I use `nccl` (if this is relevant).
Here is the Slurm script I tried. I tried different combinations of setup.
It works occasionally as wanted. Most of time, it creates 4 processes in 1 node, and allocate 2 processes to 1 GPU. It slows down the program and cause out of memory, and makes `all_gather` fail.
How can I distribute processes correctly?
```
#!/bin/bash
#SBATCH -J jobname
#SBATCH -N 2
#SBATCH --cpus-per-task=10
# version 1
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --gpu-bind=none
# version 2
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
# version 3
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
# version 4
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --gpus-per-task=1
# # version 5
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --gpus-per-task=1
module load miniconda3
eval "$(conda shell.bash hook)"
conda activate gpu-env
nodes=( $( scontrol show hostnames $SLURM_JOB_NODELIST) )
nodes_array=($nodes)
head_node=${nodes_array[0]}
head_node_ip=$(srun --nodes=1 --ntasks=1 -w "$head_node" hostname --ip-address)
echo Node IP: $head_node_ip
export LOGLEVEL=INFO
export NCCL_P2P_LEVEL=NVL
srun torchrun --nnodes 2 --nproc_per_node 2 --rdzv_id $RANDOM --rdzv_backend c10d --rdzv_endpoint $head_node_ip:29678 mypythonscript.py
```
In python script:
```
dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
```
Log:
```
[W socket.cpp:464] [c10d] The server socket has failed to listen on [::]:29678 (errno: 98 - Address already in use).
[2024-03-31 15:46:06,691] torch.distributed.elastic.agent.server.local_elastic_agent: [INFO] log directory set to: /tmp/torchelastic_f6ldgsym/4556_xxbhwnb4
[2024-03-31 15:46:06,691] torch.distributed.elastic.agent.server.api: [INFO] [default] starting workers for entrypoint: python
[2024-03-31 15:46:06,691] torch.distributed.elastic.agent.server.api: [INFO] [default] Rendezvous'ing worker group
[W socket.cpp:464] [c10d] The server socket has failed to bind to 0.0.0.0:29678 (errno: 98 - Address already in use).
[E socket.cpp:500] [c10d] The server socket has failed to listen on any local network address.
```
I am not sure if this is relevant, because for the successful cases, I also see this info.
UPDATE: I used to follow the tutorial from pytorch using `torchrun`. following this [tutorial][1] makes it work.
[1]: https://github.com/PrincetonUniversity/multi_gpu_training/blob/main/02_pytorch_ddp/README.md |
Is this a correct implementation of Comparator? |
|java|comparator| |
Based on your last comment, I suspect the issue may lie with the fact that `Peak` and `Trough` are both objects of the class date-time (`dttm`), whereas internally `ggplot` is expecting them to be just `Date` objects. Since the `date` column of `df` appears to lack any `hh:mm:ss`, I've decided to drop `hh:mm:ss` from the `REC2` data.frame as well. Keep in mind as well that the recession bands start in the year 1969, whereas the sample data you've provided us only goes as far as 1964, so the final plot will seem strange, but should look correct on your real data. I'm also running R version 4.3.3 with `ggplot2` version 3.5.0.
``` r
library(ggplot2)
library(lubridate)
# Read fed funds rate table
df <- read.table(
textConnection(
"FEDFUNDS PCEPI UNRATE date inflationgap unemploymentgap taylor_rule
3.9333333333333331 1.6955100000000001 5.1333333333333329 1960-01-01 -0.30449 1.1333333 1.55442167
3.6966666666666668 1.81114 5.2333333333333334 1960-04-01 -0.18886 1.2333333 1.54223667
2.9366666666666665 1.5806800000000001 5.5333333333333332 1960-07-01 -0.41932 1.5333333 1.21700667
2.2966666666666669 1.47695 6.2666666666666666 1960-10-01 -0.52305 2.2666667 0.65180833
2.0033333333333334 1.5333399999999999 6.7999999999999998 1961-01-01 -0.46666 2.8000000 0.30667000
1.7333333333333334 0.99473999999999996 7 1961-04-01 -1.00526 3.0000000 -0.10263000
1.6833333333333333 0.97816999999999998 6.7666666666666666 1961-07-01 -1.02183 2.7666667 0.05241833
2.3999999999999999 0.64354999999999996 6.2000000000000002 1961-10-01 -1.35645 2.2000000 0.28177500
2.4566666666666666 0.89122000000000001 5.6333333333333329 1962-01-01 -1.10878 1.6333333 0.80227667
2.6066666666666669 1.26149 5.5333333333333332 1962-04-01 -0.73851 1.5333333 1.05741167
2.8466666666666667 1.16794 5.5666666666666664 1962-07-01 -0.83206 1.5666667 0.98730333
2.9233333333333333 1.3677999999999999 5.5333333333333332 1962-10-01 -0.63220 1.5333333 1.11056667
2.9666666666666668 1.2206699999999999 5.7666666666666666 1963-01-01 -0.77933 1.7666667 0.87366833
2.9633333333333334 1.0209900000000001 5.7333333333333334 1963-04-01 -0.97901 1.7333333 0.79716167
3.3300000000000001 1.2403599999999999 5.5 1963-07-01 -0.75964 1.5000000 1.07018000
3.4533333333333331 1.30339 5.5666666666666664 1963-10-01 -0.69661 1.5666667 1.05502833
3.4633333333333334 1.49129 5.4666666666666668 1964-01-01 -0.50871 1.4666667 1.21897833
3.4900000000000002 1.5534300000000001 5.2000000000000002 1964-04-01 -0.44657 1.2000000 1.43671500
3.4566666666666666 1.3948700000000001 5 1964-07-01 -0.60513 1.0000000 1.49743500
3.5766666666666667 1.35467 4.9666666666666668 1964-10-01 -0.64533 0.9666667 1.50066833"
),
header = TRUE,
colClasses = c(rep("numeric", 3), "Date", rep("numeric", 3))
)
# read recession timepoints, modified with commas as separators
REC2 <- read.table(
textConnection(
"Peak, Trough
1969-04-01 00:00:00, 1967-10-01 00:00:00
1969-07-01 00:00:00, 1968-01-01 00:00:00
1969-10-01 00:00:00, 1968-04-01 00:00:00
1970-01-01 00:00:00, 1968-07-01 00:00:00
1970-04-01 00:00:00, 1968-10-01 00:00:00
1970-07-01 00:00:00, 1969-01-01 00:00:00
1970-10-01 00:00:00, 1971-01-01 00:00:00
1973-10-01 00:00:00, 1971-04-01 00:00:00
1974-01-01 00:00:00, 1971-07-01 00:00:00
1974-04-01 00:00:00, 1971-10-01 00:00:00"
),
sep = ",",
header = TRUE,
# Drops h:m:s for now, but alternatively can be read as character class first
# and then manipulated using lubridate
colClasses = "Date"
)
# Construct original line graph
p <- ggplot(df, aes(date, FEDFUNDS, group = 1, color="red"))+
# `Size` aesthetic deprecated since version 3.4.0, use `linewidth` instead
geom_line(linewidth=1.2, alpha=1, linetype=2)+
# df2 not provided so we'll silence this for now
# geom_line(data = df2, aes(x = date, y = taylor_rule, color="blue"), linewidth=1.2, alpha=1, linetype=1) +
ggtitle("The Taylor (1993) rule for the US and the Fed Funds rate, 1960-2023")+
xlab("Year") + ylab("Interest Rate and Taylor Rate")+
ylim(-5,20)+
scale_color_hue(labels = c("Taylor Rule", "Federal Funds Rate"))+
theme(legend.position="bottom")
# Add recession bands
pp <- p + geom_rect(
data = REC2,
aes(
xmin = Peak,
xmax = Trough,
ymin = -Inf,
ymax = +Inf
),
fill = "pink",
alpha = 0.2,
inherit.aes = FALSE
)
# Call output
pp
```
<!-- -->
<sup>Created on 2024-03-31 with [reprex v2.1.0](https://reprex.tidyverse.org)</sup>
|
Would you mind if I used styles? [Updated](http://jsfiddle.net/J3uP7/1/)
Code:
<ul>
<li>item</li>
<li>item</li>
<li style="list-style:none">
<ul>
<li>item</li>
<li>item</li>
<li>item</li>
</ul>
</li>
</ul>
<ol>
<li>item</li>
<li>item</li>
<li style="list-style:none"><ol>
<li>item</li>
<li>item</li>
<li>item</li>
</ol>
</li>
</ol>
Another method might be running through the list of LI and adding the style by detecting child nodes. [Example](http://jsfiddle.net/J3uP7/12/)
Code (HTML is the same):
```js
var li = document.getElementsByTagName("li");
for (var i=0, max=li.length; i < max; i++)
if (li[i].childNodes.length > 1)
li[i].style.listStyle = "none";
``` |
Is there any difference between using a map and using a function if everything is known at compile time? (I'm new to Kotlin/Java and I couldn't find an answer to this.)
Example:
```kt
val mappings = mapOf(
"PL" to "Poland",
"EN" to "England",
"DE" to "Germany",
"US" to "United States of America",
)
fun mappingsFunc(code: String): String {
return when (code) {
"PL" -> "Poland"
"EN" -> "England"
"DE" -> "Germany"
"US" -> "United States of America"
else -> "Unknown" // previous checks guarantee never returning this
}
}
fun main() {
println(mappings["PL"])
println(mappingsFunc("US"))
}
```
Playground: https://pl.kotl.in/xo24ulKbo
Both of them work and both syntaxes are fine for me, but I don't know which one is recommended.
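One difference worth weighing (a hedged note, not from the original question): the map lookup is nullable while the `when` function is not, so the two behave differently for unknown codes. A minimal sketch:

```kt
// A map lookup returns String? (null for unknown keys),
// while the when-based function always returns a non-null String.
val mappings = mapOf(
    "PL" to "Poland",
    "EN" to "England",
)

fun mappingsFunc(code: String): String = when (code) {
    "PL" -> "Poland"
    "EN" -> "England"
    else -> "Unknown"
}

fun main() {
    println(mappings["XX"])                         // null
    println(mappings.getOrDefault("XX", "Unknown")) // Unknown
    println(mappingsFunc("XX"))                     // Unknown
}
```

If you want the map to behave like the function, `getOrDefault` gives it the same fallback; beyond that, prefer whichever reads better, since for a handful of constant entries any performance difference is negligible.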
You can try `page.goto("file://to-html", { waitUntil: 'domcontentloaded' })`
instead of `page.setContent()`. [This][1] resolved the ProtocolError I was receiving.
However, I still cannot generate a PDF from a ~19MB HTML file with either method.
[1]: https://github.com/puppeteer/puppeteer/issues/11720#issuecomment-1906278485
If I understand your question correctly, you want to check that your mock was called and assert various properties of that request.
Rather than using snapshot recordings, WireMock provides an API to assert the request matches some criteria:
```java
verify(postRequestedFor(urlEqualTo("/verify/this"))
.withHeader("Content-Type", equalTo("text/xml")));
```
You can find the docs for verifying in java here - https://wiremock.org/docs/verifying/#verifying-in-java
If you still want to go down the route of retrieving all the requests, that can be accomplished via the following:
```java
List<ServeEvent> allServeEvents = getAllServeEvents();
```
These can be filtered and queried - https://wiremock.org/docs/verifying/#querying-the-request-journal |
I've been trying to get my code working for a while, and the only bit that's not working is the close button. Can anyone spot where I'm going wrong?
The popup box is dynamic from WP using a custom post type. The data is working and the popup appears, but the close button doesn't work.
The active class should be removed once the close button has been clicked.
My code:
<?php
$panelhead = $args['panelhead'];
$panelintro = $args['panelintro'];
// NOT HERE!
// $teamid = get_the_ID();
?>
<a id="team"></a>
<div class="fullcol pb-team-panel pb-padding">
<div class="midcol clearfix">
<div class="fullcol copy-wrapper clearfix">
<div class="team-intro copycol anim-target bottomfade-fm-dm">
<h3><?php echo $panelhead; ?></h3>
<p><?php echo $panelintro; ?></p>
</div>
</div>
<div class="fullcol team-grid">
<?php
// HERE IS THE QUERY LOOP, YOU NEED TO SET THE ID AFTER THIS QUERY OTHERWISE YOU'LL ONLY BE RETURNING THE ID OF THE CURRENT PAGE AND NOT THE TEAM MEMBER.
$recent = new WP_Query("post_type=team&posts_per_page=-1"); while($recent->have_posts()) : $recent->the_post();
// HERE!
$teamid = get_the_ID();
$teamimg = get_field('team_image');
?>
<!-- The visible team item(s) -->
<div class="team-item anim-target bottomfade-fm-dm">
<a class="trigger" id="<?php echo "trigger-".$teamid; ?>">
<div class="team-image bg-image rounded-box" style="background-image: url(<?php echo $teamimg['url']; ?>);"></div>
<h4><?php the_title(); ?></h4>
<p><?php the_field('team_jobtitle'); ?></p>
</a>
</div>
<!-- The popup item(s) -->
<!-- popup start -->
<div class="team-popup target" id="<?php echo "target-".$teamid; ?>">
<div class="overlay"></div>
<div class="popup">
<div class="popupcontrols">
<span class="popupclose">X</span>
</div>
<div class="popupcontent">
<div class="g3">
<img src="<?php echo $teamimg['url']; ?>" />
<a class="nexbtn" href="<?php the_field('team_linkedin'); ?>" target="_blank" rel="noopener noreferrer">Follow <?php the_title(); ?> on LinkedIn</a>
</div>
<div class="g7">
<h4><?php the_title(); ?></h4>
<p><?php the_field('team_bio'); ?></p>
</div>
</div>
</div>
</div>
<!-- popup end -->
<?php endwhile; wp_reset_postdata(); ?>
</div>
</div>
</div>
<script type="text/javascript">
// SET POPUP(TARGET) BEHAVIOUR WHEN CLICKING A MATCHING TRIGGER ID
$("[id^='trigger-']").on("click", function() {
var id = parseInt(this.id.replace("trigger-", ""), 10);
var thetarget = $("#target-" + id);
if ($(this).hasClass("active")) {
// Do something here if the trigger is already active
} else {
// Do something here if the trigger is not active
// Remove active classes from triggers in general
$('.trigger').removeClass("active");
// Remove active classes from targets in general
$('.target').removeClass("active");
// Add active class to this trigger
$(this).addClass("active");
// Add active class to this target
$(thetarget).addClass("active");
}
});
// Close Popup Event
$('.popupclose').on("click", function() {
console.log('Button test!');
$('.trigger').removeClass("active");
$('.target').removeClass("active");
});
</script>
|
How do I get my close button to work on a popup? |
|wordpress|onclick| |
The queue name is extracted from the RabbitMQ connection properties, not from the message context.
The queue name is set when the connection is established and the consumer is initialized; it is not a property associated with each message. Therefore, it's not possible to extract the queue name from an individual message using a property expression, and the same applies to the priority.
If you need to use the queue name within your sequence, you can set it as a property when the sequence is initialized. Here's an example of how you could do this:
<sequence xmlns="http://ws.apache.org/ns/synapse" name="processMessage">
<property name="QueueName" value="your_queue_name"/>
<log level="custom">
<property name="QueueName" expression="get-property('QueueName')"/>
<property name="MessageID" expression="get-property('rabbitmq.message.id')"/>
</log>
<!-- Add your database update logic here -->
</sequence>
In the above example, replace "your_queue_name" with the name of your queue. This sets the queue name as a property of the sequence, and you can then use the get-property('QueueName') expression to retrieve it.
To update the message status in a database, configure a datasource in the Micro Integrator and use the DBReport mediator to run the update.
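As a rough sketch of what that could look like (the datasource details, table, and column names below are all hypothetical placeholders, not from your setup), a DBReport mediator inside the sequence might be configured like this:

```xml
<dbreport xmlns="http://ws.apache.org/ns/synapse">
    <connection>
        <pool>
            <!-- Hypothetical datasource details; replace with your own -->
            <driver>com.mysql.jdbc.Driver</driver>
            <url>jdbc:mysql://localhost:3306/messagedb</url>
            <user>db_user</user>
            <password>db_password</password>
        </pool>
    </connection>
    <statement>
        <!-- Hypothetical table/columns: messages(status, message_id) -->
        <sql>UPDATE messages SET status = ? WHERE message_id = ?</sql>
        <parameter expression="get-property('Status')" type="VARCHAR"/>
        <parameter expression="get-property('rabbitmq.message.id')" type="VARCHAR"/>
    </statement>
</dbreport>
```

The parameters are bound to the `?` placeholders in order, so each `<parameter>` element must line up with its position in the SQL statement.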
|
null |