I managed to create a Python + O365 app that logs into my email and prints the emails in the console. Here's the code:
```
from O365 import FileSystemTokenBackend
from O365.account import Account

tokenstorage = FileSystemTokenBackend(token_path='XXX', token_filename='XXX')
scopes_graph = ['User.Read', 'Mail.ReadWrite', 'Mail.Read', 'offline_access']
credentials = ('XXX', 'XXX')
account = Account(credentials, tenant_id='XXX', scopes=scopes_graph, token_backend=tokenstorage)
if not account.is_authenticated:
    account.authenticate()
mailbox = account.mailbox()
inbox = mailbox.inbox_folder()
query = mailbox.new_query()
query = query.on_attribute('Subject').contains('Test')
for message in inbox.get_messages(limit=10, query=query):
    print(message.created, message.sender, message)
```
The thing is, I would prefer the program to run continuously - perhaps through some kind of an infinite loop. It would be best if it could "wait and listen" for emails so that whenever I receive a new email, it prints a new line in the console 24/7. Do you guys think this is doable? How?
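The O365 library itself does not push new mail to you, so the usual approach is a polling loop (Microsoft Graph change-notification webhooks are the push alternative, but they need a publicly reachable endpoint). A minimal generic sketch of such a loop, with the fetch step injected so it stands alone; `poll_forever` and `fetch_new` are illustrative names, not part of the O365 API:

```python
import time

def poll_forever(fetch_new, handle, interval_seconds=60, max_cycles=None):
    """Generic polling loop: fetch messages, handle unseen ones, sleep, repeat.

    fetch_new: callable returning an iterable of (unique_key, item) pairs.
    handle:    callable invoked once per never-before-seen item.
    """
    seen = set()
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for key, item in fetch_new():
            if key not in seen:
                seen.add(key)
                handle(item)
        cycles += 1
        if interval_seconds:
            time.sleep(interval_seconds)
```

With the question's objects this might be wired up as `poll_forever(lambda: [(m.object_id, m) for m in inbox.get_messages(limit=10)], print, interval_seconds=60)`, assuming `object_id` is the message's unique id attribute in python-o365.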
|
XPath for an element with only child elements named X? |
Browse to the Eclipse workspace:
<workspace>/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.jst.ws.axis.consumption.ui.prefs
Sometimes this file does not exist; in that case, create it.
Add the following entry and save the file:
eclipse.preferences.version=1
disableAxisJarCopy=true
|
I'm currently trying to deploy a Next.js app on GitHub Pages using GitHub Actions, but I get a 404 error even after it successfully deploys. I've looked through a bunch of similarly named questions and am having trouble figuring this out. I'll add that this is my first Next.js project.
Here is my github repo: https://github.com/Mctripp10/mctripp10.github.io
Here is my website: mctripp10.github.io
I used the *deploy Next.js site to pages* workflow that GitHub provides. Here is the nextjs.yml file:
```yaml
# Sample workflow for building and deploying a Next.js site to GitHub Pages
#
# To get started with Next.js see: https://nextjs.org/docs/getting-started
#
name: Deploy Next.js site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["dev"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Detect package manager
        id: detect-package-manager
        run: |
          if [ -f "${{ github.workspace }}/yarn.lock" ]; then
            echo "manager=yarn" >> $GITHUB_OUTPUT
            echo "command=install" >> $GITHUB_OUTPUT
            echo "runner=yarn" >> $GITHUB_OUTPUT
            exit 0
          elif [ -f "${{ github.workspace }}/package.json" ]; then
            echo "manager=npm" >> $GITHUB_OUTPUT
            echo "command=ci" >> $GITHUB_OUTPUT
            echo "runner=npx --no-install" >> $GITHUB_OUTPUT
            exit 0
          else
            echo "Unable to determine package manager"
            exit 1
          fi
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: ${{ steps.detect-package-manager.outputs.manager }}
      - name: Setup Pages
        uses: actions/configure-pages@v4
        with:
          # Automatically inject basePath in your Next.js configuration file and disable
          # server side image optimization (https://nextjs.org/docs/api-reference/next/image#unoptimized).
          #
          # You may remove this line if you want to manage the configuration yourself.
          static_site_generator: next
      - name: Restore cache
        uses: actions/cache@v4
        with:
          path: |
            .next/cache
          # Generate a new cache whenever packages or source files change.
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }}
          # If source files changed but packages didn't, rebuild from a prior cache.
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-
      - name: Install dependencies
        run: ${{ steps.detect-package-manager.outputs.manager }} ${{ steps.detect-package-manager.outputs.command }}
      - name: Build with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next build
      - name: Static HTML export with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next export
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./out

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```
I got this on the build step:
```
Route (app) Size First Load JS
┌ ○ /_not-found 875 B 81.5 kB
├ ○ /pages/about 2.16 kB 90.2 kB
├ ○ /pages/contact 2.6 kB 92.5 kB
├ ○ /pages/experience 2.25 kB 90.3 kB
├ ○ /pages/home 2.02 kB 92 kB
└ ○ /pages/projects 2.16 kB 90.2 kB
+ First Load JS shared by all 80.6 kB
├ chunks/472-0de5c8744346f427.js 27.6 kB
├ chunks/fd9d1056-138526ba479eb04f.js 51.1 kB
├ chunks/main-app-4a98b3a5cbccbbdb.js 230 B
└ chunks/webpack-ea848c4dc35e9b86.js 1.73 kB
○ (Static) automatically rendered as static HTML (uses no initial props)
```
Full image: [Build with Next.js][1]
I read in this post https://stackoverflow.com/questions/58039214/next-js-pages-end-in-404-on-production-build that perhaps it has something to do with having sub-folders inside the pages folder, but I'm not sure how to fix that as I wasn't able to get it to work without sub-foldering page.js files for each page.
Any help would be greatly appreciated! Thanks!
[1]: https://i.stack.imgur.com/wSlPq.png |
404 error on deploying Nextjs app with GitHub actions |
|reactjs|next.js|build|github-pages| |
I am trying to implement login-with-Google functionality in an ASP.NET Core Web API using Identity external providers. Everything goes well until my URL redirects to this external-auth-callback function, where `info` comes back null. This is the external-auth-callback function:
```csharp
public async Task<MessageViewModel> ExternalLoginCallback([FromQuery] string returnUrl)
{
    var info = await _signInManager.GetExternalLoginInfoAsync();
    if (info == null)
    {
        return new MessageViewModel()
        {
            IsSuccess = false,
            Message = "External login info was null",
        };
    }
    var signInResult = await _signInManager.ExternalLoginSignInAsync(info.LoginProvider,
        info.ProviderKey, isPersistent: false, bypassTwoFactor: true);
    if (signInResult.Succeeded)
    {
        return new MessageViewModel()
        {
            IsSuccess = true,
            Message = "User with this account is already in table",
        };
    }
    else
    {
        var email = info.Principal.FindFirstValue(ClaimTypes.Email);
        var user = await _userManager.FindByEmailAsync(email);
        if (user == null)
        {
            user = new Users
            {
                UserName = info.Principal.FindFirstValue(ClaimTypes.Email),
                Email = info.Principal.FindFirstValue(ClaimTypes.Email),
                FirstName = info.Principal.FindFirstValue(ClaimTypes.GivenName),
                LastName = info.Principal.FindFirstValue(ClaimTypes.Surname),
            };
            await _userManager.CreateAsync(user);
        }
        await _userManager.AddLoginAsync(user, info);
        await _signInManager.SignInAsync(user, isPersistent: false);
        return new MessageViewModel()
        {
            IsSuccess = true,
            Message = "User with this account is created successfully"
        };
    }
}
```
Before that, I return the return URL through this method:
```csharp
public async Task<LoginProviderViewModel> ExternalLogin(string provider, string returnUrl)
{
    var redirectUrl = $"https://localhost:7008/api/account/external-auth-callback?returnUrl={returnUrl}";
    var properties = _signInManager.ConfigureExternalAuthenticationProperties(provider, redirectUrl);
    properties.AllowRefresh = true;
    return new LoginProviderViewModel()
    {
        Provider = provider,
        Properties = properties,
    };
}
```
From my frontend, when I click the login-with-Google button, this method is called: first the ExternalLogin method runs and returns the redirect URL; then the frontend shows the Google sign-in page, and after I select a user it redirects to the external-auth-callback function, which is used to save the user who tried to log in with Google.
```js
const handleExternalLogin = async () => {
    const url = `${externalLogin}?provider=Google&returnUrl=/admin-dashboard`;
    const response = await getData(url);
    if (response.isSuccessfull) {
        const redirectUrl = response.data.properties.items[".redirect"];
        const loginProvider = response.data.properties.items["LoginProvider"];
        const googleAuthorizationUrl =
            `https://accounts.google.com/o/oauth2/v2/auth` +
            `?client_id=xxxxx.apps.googleusercontent.com` +
            `&redirect_uri=${redirectUrl}` +
            `&response_type=code` +
            `&scope=openid%20profile%20email` +
            `&state=${loginProvider}`;
        window.location.href = googleAuthorizationUrl;
    }
};
```
Please help me out, as I have been stuck at this point for 2 days.
I have tried all the possible checks but don't understand why `info` is null in the external-auth-callback function. I was hoping someone could let me know why I am getting the null value. |
I suggest you start to improve your skills by developing on your local system.
Depending on what skills you want to improve, you could either:
1. Set up a virtual server with VirtualBox and install a LAMP server. There are lots of tutorials on the web about how to do that.
or
2. Install some preconfigured development tools like XAMPP (or MAMP, depending on your operating system) if you do not want to hassle with server configurations.
or
3. Install Docker and Docker Compose. There are some good boilerplates on GitHub to run LAMP development environments, including mail servers like MailHog.
Based on the description of your skills, I would suggest you start with XAMPP or VirtualBox. But maybe take some time to watch some YouTube videos about the three topics.
With a local virtual development environment you can start to implement your project and do testing with existing frameworks. It sounds like it would be a good idea to use a content management system (CMS). I prefer TYPO3, but you can also take a look at some other CMSes.
A CMS comes with a lot of plugins that can handle things like user login/administration, an e-mailing framework, page/content handling, and styling.
I suggest you start by setting up a normal website with some pages and styling. Then you can start to implement a custom plugin to do your "recipe" functions.
When you have a running project on your local system, you can start to think about deployment to a real server. Depending on the size of your project, you can then decide what kind of server you will need.
If you run a CMS, I would suggest you look for a provider that specializes in hosting that kind of CMS.
Of course you can also implement everything yourself with PHP. But you will soon find out that this isn't as easy as you may think at the moment.
Rule of thumb: learning by doing. You will only improve your skills if you start to develop. All good developers have lots of projects that never go online because they were only developed for learning purposes. |
A `LogoutView` mainly has a `template_name` because it subclasses `TemplateView`, yes, but that is only used if somehow the redirect points back to the `LogoutView` itself, which is probably a bad idea anyway.
You thus *don't* use the `LogoutView` to show a form to logout, you put the logout button on some other view, or on all pages, and then thus logout there with the form you provided in the question. You thus can put this form for example in the *navbar*, it will logout the user and redirect to the page specified by the `next_page` if that is specified in the `LogoutView`; or the [**`LOGOUT_REDIRECT_URL`** setting <sup>\[Django-doc\]</sup>](https://docs.djangoproject.com/en/stable/ref/settings/#std-setting-LOGOUT_REDIRECT_URL) if that one is not specified. |
FirebaseError: [code=permission-denied]: Missing or insufficient permissions. even if firestore rules read and write are permitted - Angular-Fire |
I have the following code block to create a gauge, which I will use for a custom widget in SAP Analytics Cloud. I want to use two pointers in this gauge. How can I do this within my code?
Thanks in advance.
```js
option = {
  series: [
    {
      type: 'gauge',
      startAngle: 180,
      endAngle: 0,
      min: 0,
      max: 240,
      splitNumber: 12,
      itemStyle: {
        color: '#009AA6',
        shadowColor: 'rgba(0,138,255,0.45)',
        shadowBlur: 10,
        shadowOffsetX: 2,
        shadowOffsetY: 2
      },
      progress: {
        show: true,
        roundCap: true,
        width: 18
      },
      pointer: {
        icon: 'path://M2090.36389,615.30999 L2090.36389,615.30999 C2091.48372,615.30999 2092.40383,616.194028 2092.44859,617.312956 L2096.90698,728.755929 C2097.05155,732.369577 2094.2393,735.416212 2090.62566,735.56078 C2090.53845,735.564269 2090.45117,735.566014 2090.36389,735.566014 L2090.36389,735.566014 C2086.74736,735.566014 2083.81557,732.63423 2083.81557,729.017692 C2083.81557,728.930412 2083.81732,728.84314 2083.82081,728.755929 L2088.2792,617.312956 C2088.32396,616.194028 2089.24407,615.30999 2090.36389,615.30999 Z',
        length: '75%',
        width: 16,
        offsetCenter: [0, '5%']
      },
      axisLine: {
        roundCap: true,
        lineStyle: {
          width: 18
        }
      },
      axisTick: {
        splitNumber: 2,
        lineStyle: {
          width: 2,
          color: '#999'
        }
      },
      splitLine: {
        length: 12,
        lineStyle: {
          width: 3,
          color: '#999'
        }
      },
      axisLabel: {
        distance: 30,
        color: '#999',
        fontSize: 20
      },
      title: {
        show: false
      },
      detail: {
        backgroundColor: '#fff',
        borderColor: '#999',
        borderWidth: 2,
        width: '60%',
        lineHeight: 70,
        height: 50,
        borderRadius: 8,
        offsetCenter: [0, '35%'],
        valueAnimation: true,
        formatter: function (value) {
          return '{value|' + value.toFixed(0) + '}{unit|%}';
        },
        rich: {
          value: {
            fontSize: 30,
            fontWeight: 'bolder',
            color: '#777'
          },
          unit: {
            fontSize: 30,
            color: '#999',
            padding: [0, 0, -10, 10]
          }
        }
      },
      data: [
        {
          value: 100
        }
      ]
    }
  ]
};
```
|
How can I show two pointers in a gauge code? |
|javascript|gauge|sap|sap-analytics-cloud| |
I have a problem with adding the `geolocator (v 11.0.0)` Flutter package to my existing project. As soon as I add it to the pubspec.yaml file I get an error `FAILURE: Build failed with an exception. Execution failed for task ':app:checkDebugDuplicateClasses'.`
To try and test if the problem is in my project specifically or the geolocator package, I created an empty new project (see below main.dart and pubspec.yaml code).
The new (empty) project runs (debugs) normally, but if I uncomment the `#geolocator: ^11.0.0` line in pubspec I get the aforementioned error.
**main.dart**
```dart
import 'package:flutter/material.dart';
void main() {
runApp(
const MaterialApp(
home: MyApp(),
),
);
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text("Geolocator test"),
),
body: const Center(
child: Text("Hello"),
),
);
}
}
```
**pubspec.yaml**
```yaml
name: geolocator_test
description: "A new Flutter project."
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
version: 1.0.0+1
environment:
sdk: '>=3.3.1 <4.0.0'
dependencies:
flutter:
sdk: flutter
cupertino_icons: ^1.0.6
#geolocator: ^11.0.0
dev_dependencies:
flutter_test:
sdk: flutter
flutter_lints: ^3.0.0
flutter:
uses-material-design: true
```
What am I doing wrong?
***environment**: Windows 11 Professional, Android Studio Hedgehog 2023.1.1, Flutter 3.19.3, Dart SDK 3.3.1, debugging on Pixel 7 API 31 virtual device.* |
How to Implement Continuous Email Monitoring in a Python + O365 |
|python|azure-active-directory|microsoft-graph-api|office365|python-o365| |
I am working on a React application using Redux for state management. In my layout component, I am trying to display the header and the footer only if there is a user, but I am facing an issue where they are both not displayed.
I understand that it's because the user is still null, but I can't find a way to solve it; I also tried subscribe, but nothing changes.
Here's a simplified version of my Layout component:
```tsx
import Routing from "../Routing/Routing";
import "./Layout.css";
import { NavLink, useLocation } from "react-router-dom";
import { authStore } from "../../redux/redux";
import { useEffect, useState } from "react";
import UserModel from "../../../models/UserModel";
function Layout(): JSX.Element {
const { pathname } = useLocation()
const hideHeaderPaths: string[] = ['/login']
const [user, setUser] = useState<UserModel>();
console.log(user)
useEffect(() => {
setUser(authStore.getState().user);
const unsubscribe = authStore.subscribe(() => {
const updatedUser = authStore.getState().user
setUser(updatedUser);
alert("Change")
});
setUser(authStore.getState().user);
return () => unsubscribe();
}, []);
return (
<div className="Layout" style={{ display: hideHeaderPaths.includes(pathname) ? "block" : "grid" }}>
{!hideHeaderPaths.includes(pathname) && user && <header><p>{`${user.firstName} ${user.lastName}`} |<NavLink to={"/logout"}>Logout</NavLink> </p></header>}
<main>
{/* <Main /> */}
<Routing />
</main>
{!hideHeaderPaths.includes(pathname) && <footer>this is the footer</footer>}
</div>
);
}
export default Layout;
```
Could anyone please help me understand why the user information is not displayed correctly on the initial render?
I have also tried removing the conditional rendering for the user in the header and footer, but I still face the same problem. The user's first name and last name appear as undefined on the initial render.
To better explain what's going on, I'm tossing in the code for my Redux:
```ts
import { createStore } from 'redux'
import UserModel from '../../models/UserModel';
import { jwtDecode } from "jwt-decode";

export class AuthState {
    public token: string = null;
    public user: UserModel = null;

    public constructor() {
        this.token = localStorage.getItem("token");
        if (this.token) {
            const container: { user: UserModel } = jwtDecode(this.token)
            this.user = container.user
            console.log(this.user)
        }
    }
}

export enum AuthActionType {
    register = "register",
    login = "login",
    logout = "logout"
}

export interface AuthAction {
    type: AuthActionType,
    payload?: any
}

export function authReducer(currentState = new AuthState(), action: AuthAction): AuthState {
    const newState = { ...currentState }
    switch (action.type) {
        case AuthActionType.login:
        case AuthActionType.register:
            newState.token = action.payload;
            localStorage.setItem("token", newState.token);
            break;
        case AuthActionType.logout:
            newState.token = null;
            newState.user = null;
            localStorage.removeItem("token");
            console.log(newState.user)
            break;
    }
    return newState
}

export const authStore = createStore(authReducer)
```
|
Flutter geolocator checkDebugDuplicateClasses issue |
|flutter|dart|geolocator| |
Hope the title wasn't too confusing.
this is my project:
[teams](https://i.stack.imgur.com/rBQ8L.png)
I'm trying to figure out how to get the names on the right (NICK-GL) to stack on top of each other at the bottom when one of the Mon-Fri boxes has one of the correlating names in it, i.e., when Nick is displayed in F3, NICK-GL appears in F21. The names on the right will change from week to week. I'd like them to automatically bump to the top of the list at the bottom, but stay in the order they are in on the Name list.
If you already couldn't tell, I'm a beginner.
I found this formula on stackoverflow, but I don't know how to implement it into my sheet.
```
=LET(datal,E74:E83,datar,T74:T82,dell,"/",delr,"-",
dl,FILTER(datal,LEN(datal),""),
IFNA(XLOOKUP(TEXTBEFORE(dl,dell),
TEXTBEFORE(datar,delr),datar),dl))
```
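Roughly, the formula names the two ranges and delimiters (`datal`, `datar`, `dell`, `delr`), drops blank cells from the left range with `FILTER`, takes the text before "/" on the left and before "-" on the right with `TEXTBEFORE`, matches them with `XLOOKUP`, and `IFNA` falls back to the original left value when there is no match. A rough Python analogue with made-up sample data (not the actual ranges):

```python
datal = ["NICK/Mon", "", "JANE/Tue"]   # E74:E83 (sample values only)
datar = ["NICK-GL", "JANE-GL"]         # T74:T82 (sample values only)
dell, delr = "/", "-"

# FILTER(datal, LEN(datal), "") drops blank cells
dl = [s for s in datal if len(s)]

def textbefore(s, delim):
    # TEXTBEFORE(text, delim): everything before the first delimiter
    return s.split(delim)[0]

# XLOOKUP(TEXTBEFORE(dl, dell), TEXTBEFORE(datar, delr), datar) with IFNA fallback
result = []
for s in dl:
    matches = [r for r in datar if textbefore(r, delr) == textbefore(s, dell)]
    result.append(matches[0] if matches else s)
```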
If someone could also explain to me what each part of that means, I would tremendously appreciate it. |
Trying to get names from a box on the right to automatically appear in another box when a different box has a value in it |
|excel|excel-formula|formula|let| |
I have a C# game emulator which uses TcpListener and originally had a TCP based client.
A new client was introduced which is HTML5 (web socket) based. I wanted to support this without modifying too much of the existing server code, still allowing `TcpListener` and `TcpClient` to work with web socket clients connecting.
Here is what I have done, but I feel like I'm missing something, as I am not getting the usual order of packets, therefore the handshake never completes.
1. Implement protocol upgrade mechanism
```csharp
public static byte[] GetHandshakeUpgradeData(string data)
{
    const string eol = "\r\n"; // HTTP/1.1 defines the sequence CR LF as the end-of-line marker
    var response = Encoding.UTF8.GetBytes("HTTP/1.1 101 Switching Protocols" + eol
        + "Connection: Upgrade" + eol
        + "Upgrade: websocket" + eol
        + "Sec-WebSocket-Accept: " + Convert.ToBase64String(
            System.Security.Cryptography.SHA1.Create().ComputeHash(
                Encoding.UTF8.GetBytes(
                    new Regex("Sec-WebSocket-Key: (.*)").Match(data).Groups[1].Value.Trim() + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
                )
            )
        ) + eol
        + eol);
    return response;
}
```
This is then used like so:
```csharp
private async Task OnReceivedAsync(int bytesReceived)
{
    var data = new byte[bytesReceived];
    Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived);
    var stringData = Encoding.UTF8.GetString(data);
    if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET"))
    {
        await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false);
        return;
    }
```
2. Encode all messages after switching protocol response
```csharp
public static byte[] EncodeMessage(byte[] message)
{
    byte[] response;
    var bytesRaw = message;
    var frame = new byte[10];
    var indexStartRawData = -1;
    var length = bytesRaw.Length;
    frame[0] = 129;
    if (length <= 125)
    {
        frame[1] = (byte)length;
        indexStartRawData = 2;
    }
    else if (length >= 126 && length <= 65535)
    {
        frame[1] = 126;
        frame[2] = (byte)((length >> 8) & 255);
        frame[3] = (byte)(length & 255);
        indexStartRawData = 4;
    }
    else
    {
        frame[1] = 127;
        frame[2] = (byte)((length >> 56) & 255);
        frame[3] = (byte)((length >> 48) & 255);
        frame[4] = (byte)((length >> 40) & 255);
        frame[5] = (byte)((length >> 32) & 255);
        frame[6] = (byte)((length >> 24) & 255);
        frame[7] = (byte)((length >> 16) & 255);
        frame[8] = (byte)((length >> 8) & 255);
        frame[9] = (byte)(length & 255);
        indexStartRawData = 10;
    }
    response = new byte[indexStartRawData + length];
    int i, reponseIdx = 0;
    // Add the frame bytes to the response
    for (i = 0; i < indexStartRawData; i++)
    {
        response[reponseIdx] = frame[i];
        reponseIdx++;
    }
    // Add the data bytes to the response
    for (i = 0; i < length; i++)
    {
        response[reponseIdx] = bytesRaw[i];
        reponseIdx++;
    }
    return response;
}
```
Used here:
```csharp
public async Task WriteToStreamAsync(byte[] data, bool encode = true)
{
    if (encode)
    {
        data = WebSocketHelpers.EncodeMessage(data);
    }
```
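For reference, the server-to-client framing used above can be sketched compactly in Python. This mirrors the RFC 6455 layout (FIN plus text opcode gives the 0x81 first byte, then a payload length in one of three encodings; server frames are unmasked, matching the C# code). It is an illustration, not part of the question's code:

```python
import struct

def encode_text_frame(payload: bytes) -> bytes:
    """Build an unmasked server-to-client WebSocket text frame."""
    header = b"\x81"             # FIN=1, opcode=1 (text)
    n = len(payload)
    if n <= 125:
        header += bytes([n])     # length fits in the 7-bit field
    elif n <= 0xFFFF:
        header += b"\x7e" + struct.pack(">H", n)   # 126 + 16-bit length
    else:
        header += b"\x7f" + struct.pack(">Q", n)   # 127 + 64-bit length
    return header + payload
```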
3. Decoding all messages
```csharp
public static byte[] DecodeMessage(byte[] bytes)
{
    var secondByte = bytes[1];
    var dataLength = secondByte & 127;
    var indexFirstMask = dataLength switch
    {
        126 => 4,
        127 => 10,
        _ => 2
    };
    var keys = bytes.Skip(indexFirstMask).Take(4);
    var indexFirstDataByte = indexFirstMask + 4;
    var decoded = new byte[bytes.Length - indexFirstDataByte];
    for (int i = indexFirstDataByte, j = 0; i < bytes.Length; i++, j++)
    {
        decoded[j] = (byte)(bytes[i] ^ keys.ElementAt(j % 4));
    }
    return decoded;
}
```
Which is used here
```csharp
private async Task OnReceivedAsync(int bytesReceived)
{
    var data = new byte[bytesReceived];
    Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived);
    var stringData = Encoding.UTF8.GetString(data);
    if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET"))
    {
        await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false);
        return;
    }
    var decodedData = WebSocketHelpers.DecodeMessage(data);
    if (decodedData[0] == 60)
    {
        await OnReceivedPolicyRequest();
    }
    else if (_networkClient != null)
    {
        foreach (var packet in DecodePacketsFromBytes(decodedData))
        {
            _packetHandler.HandleAsync(_networkClient, packet);
        }
    }
```
|
The purpose of the code below is to retrieve data from Firebase and, after a duplicate check and split, append the items to a buffer. Once the buffer reaches 4 items, it inserts them into the DDB table using `batch_writer`.
To confirm: the above code runs fine if I use a new/empty DynamoDB table, but after the inserted items go up to 40K, not all the expected items get inserted. I re-used an old table as well, but saw the same strange behavior.
I wish to understand: could this be a code logic problem, or is some modification required on the AWS table side?
Results post running the code:
-=-=-=-=-=-=-=-=-=-=
The below code successfully entered 32 items into the new table.
But when I just changed the table name to the old table (which already contains 60,485 items), not a single item got inserted out of the 32. The code didn't give any error, nor did CloudWatch show any alerts. However, if I keep executing the code for a longer period of time, where 150 rows should have been inserted, only one gets inserted.
The WCU and RCU are set to 25, and I also changed the table to On-Demand, but the problem persists.
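The buffer-and-flush pattern in the code can be isolated like this (a sketch; the `flush` callback and `buffered_put` name stand in for the question's `batch_writer` loop). One caveat of the pattern worth noting: items still sitting in a partially filled buffer when the process exits are never written, so a final flush on shutdown is needed:

```python
def buffered_put(item, buffer, flush, batch_size=4):
    """Append an item to the buffer; flush the whole batch once it is full."""
    buffer.append(item)
    if len(buffer) >= batch_size:
        flush(list(buffer))   # e.g. write each item via table.batch_writer()
        buffer.clear()
```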
Code:
-=-=-
```python
# Define the DynamoDB tables
dynamodb = boto3.resource('dynamodb', region_name='my-region')
table_bms = dynamodb.Table('MS_TR_MS')

# Buffer and counter initialization
buffers = {
    'ms_str_msg': [],
}
counter = 0
buffer_size = 4
received_values = {}

def save_to_db(tob_id, top_name, top_data):
    with open(log_file_path, 'a') as log_file:
        try:
            # Check for duplicate entry
            key = f'{tob_id}_{top_name}'
            value = top_data
            if key in received_values and received_values[key] == value:
                # print(f"Duplicate entry for {key}: {value}. Discarding.")
                return
            else:
                received_values[key] = value
                print(f"Adding entry: {key}: {value}")
            # Split & append data to the buffer
            if isinstance(top_data, str):
                topic_data = top_data.split(":")[0]
                string_array = top_data.split(",")
            else:
                return
        except Exception as e:
            print(f'Error: {top_data} \n{e}')
            return
        # Checking number of items inserted from buffer to DDB table
        if counter >= 30:
            exit()
        # Updating items to single table
        if top_name == "ms_str_msg":
            if len(string_array) == 5:
                # table = table_ms.name
                print('Updating MS table')
                buffers[top_name].append({
                    # 'UID': {'S': unique_constant_value},
                    "timestamp": str(current_date_time),
                    "date": str(current_date),
                    "time": str(current_timestamp),
                    "tob_id": str(tob_id),
                    "voltage": str(string_array[0]),
                    "current_amps": str(string_array[1]),
                    "total_capacity": str(string_array[2]),
                    "SOC": str(string_array[3]),
                    "Battery Level": str(string_array[4])})
        # Check buffer size and update tables
        if len(buffers[top_name]) >= buffer_size:
            update_tables(top_name)

# Using batch_writer to write items from buffer to table
def update_tables(top_name):
    global counter
    try:
        with table_bms.batch_writer() as writer:
            for item in buffers[top_name]:
                writer.put_item(Item=item)
                counter = counter + 1
        buffers[top_name] = []
        # Adding a counter to check the number of rows inserted
        print(f"Counter is {counter}")
        if counter >= 30:
            exit()
    except ClientError as err:
        print("Couldn't load data into table %s. Here's why: %s: %s")

if __name__ == "__main__":
    main()
```
|
You can try to achieve lower latency by using the `SoundPlayer` class from the `System.Media` namespace, which is part of the .NET Framework.
https://learn.microsoft.com/en-us/dotnet/api/system.media.soundplayer |
Can anybody please assist me? I am trying to upgrade PHP on my UwAmp to PHP 7.4.33 and/or PHP 8.3.3, but whenever I try to run the upgraded PHP 7.4.33 and/or PHP 8.3.3, Apache stops running. PHP 7.2.7 works fine on my UwAmp. I need to upgrade to a higher version because of a theme I want to use that does not run on PHP 7.2.7.
I have installed the PHP versions directly in the UwAmp C:\UwAmp\bin\php folder and started UwAmp. UwAmp picked up the new versions of PHP and installed them with no problem, but it does not want to run Apache. Also, under the PHP installation tab "UwAmp PHP repository", the new PHP versions do not show.
I have also tried to install the PHP version directly onto Windows via "edit system environment", with no joy.
I have also configured UwAmp's "httpd_uwamp.conf", with no joy.
There are not many UwAmp tutorials on YouTube. I found plenty of tutorials on WAMP and XAMPP but only a few on UwAmp.
|
I am currently making a WYSIWYG editor and need to support rotation by pressing the "r" key while dragging an element with the mouse.
I noticed that the keydown event is not fired when I press a key while dragging something with the mouse.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Document</title>
</head>
<body>
<div draggable="true">Drag this and then try to press some keys</div>
<script>
document.addEventListener('keydown', function(event) {
console.log(`Key pressed: ${event.key}`);
});
document.addEventListener('mousedown', function(event) {
console.log(`Mouse button pressed: ${event.button}`);
});
</script>
</body>
</html>
<!-- end snippet -->
This is the example code you can try on your own (make something draggable as well). So my question - is it even possible to do this in DOM or is it an unavoidable limitation? |
I'm looking for a way to declare an empty dictionary that will only allow specific data types to be used as keys and values. This may seem similar to [this other question][1], but the nuances are such that it doesn't answer my question. (Now that that's out of the way...)
What I'd like to do is this:
my_var: int = 4
my_str_var: str = "Hello"
But when initializing a dictionary. Something like:
my_dict: {key = str, value = int} ## totally wrong, but hopefully gets the idea across.
Is this possible?
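For what it's worth, a minimal sketch of how this is usually written; the annotation documents the key/value types for a static checker such as mypy, while Python itself does not enforce it at runtime:

```python
from typing import Dict

# On Python 3.9+ the builtin generic also works: my_dict: dict[str, int] = {}
my_dict: Dict[str, int] = {}

my_dict["hello"] = 4   # fine for a type checker
# my_dict[3] = "oops"  # a checker such as mypy reports this; the runtime would not
```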
[1]: https://stackoverflow.com/questions/60819684/python-initialize-empty-dictionary-with-default-values-type |
How can I initialize an empty Python dictionary and set types for the keys and values? |
|python|python-3.x|dictionary| |
`createtoken("")` is not working in my Laravel project, where I have deleted the vendor folder and run `composer install`. I am getting:
Problem 1
- lcobucci/jwt is locked to version 4.3.0 and an update of this package was not requested.
- lcobucci/jwt 4.3.0 requires ext-sodium * -> it is missing from your system. Install or enable PHP's sodium extension.
Problem 2
- lcobucci/jwt 4.3.0 requires ext-sodium * -> it is missing from your system. Install or enable PHP's sodium extension.
- kreait/firebase-tokens 4.3.0 requires lcobucci/jwt ^4.3.0|^5.0 -> satisfiable by lcobucci/jwt[4.3.0].
- kreait/firebase-tokens is locked to version 4.3.0 and an update of this package was not requested.
Then I ran `sudo pecl install libsodium`,
but I am getting:
> janammaharjan@Janams-MacBook-Air laravel-uat % sudo pecl install libsodium
WARNING: channel "pecl.php.net" has updated its protocols, use "pecl channel-update
pecl.php.net" to update
pecl/libsodium requires PHP (version >= 7.0.0, version <= 8.0.99), installed version
is 8.2.4
No valid packages found
install failed
I have PHP version 8.2.4.
I tried installing XAMPP with PHP 8.0.28, but it is not helping. Can anyone help me with this? |
|javascript|html|drag|webapi| |
I have a .NET 6 application that my client has asked me to deploy on AWS Fargate and expose via API Gateway.
I need to change the application context root to be /<stage name> instead of /
I have the following Dockerfile so far
```
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /WebAdmin
EXPOSE 80
EXPOSE 443
ARG API_GATEWAY_STAGE_NAME
RUN echo "API_GATEWAY_STAGE_NAME is $API_GATEWAY_STAGE_NAME"
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["MyApp.Web.Admin/MyApp.Web.Admin.csproj", "MyApp.Web.Admin/"]
RUN dotnet restore "./MyApp.Web.Admin/./MyApp.Web.Admin.csproj"
COPY . .
WORKDIR "/src/MyApp.Web.Admin"
RUN sed -i "s/~\//\/${API_GATEWAY_STAGE_NAME}\//g" ./Views/Shared/_Layout.cshtml
RUN more ./Views/Shared/_Layout.cshtml
RUN dotnet build "./MyApp.Web.Admin.csproj" -c $BUILD_CONFIGURATION -o /WebAdmin/build
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./MyApp.Web.Admin.csproj" -c $BUILD_CONFIGURATION -o /WebAdmin/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /WebAdmin
COPY --from=publish /WebAdmin/publish .
ENTRYPOINT ["dotnet", "MyApp.Web.Admin.dll"]
```
How can I go about changing the context root of the application via either the Dockerfile or in code?
I am running into issues where static assets such as images / css files are not coming back with the correct path and causing errors.
The client does not have a DNS I can use yet so a Custom Domain Name is out of the question for now.
Any assistance on how I can change the context path to have the application running for a demo would be much appreciated |
Change Context path of .NET 6 application |
|.net|aws-api-gateway|.net-6.0|aws-fargate| |
null |
I'm new to using IBM's Carbon Design System. We have some legacy applications - which don't use frameworks like React - so we've opted to use the `@carbon/styles` [package][1].
When reading the [docs][2] it says
> You can bring in all the styles for the Carbon Design System by including `@carbon/styles` in your Sass files. For example:
> `@use '@carbon/styles';`
I've created a Sass file, `app.scss`, which I'm compiling to `app.css` and linking that in my webpage.
If I place the following into `app.scss`
@use '@carbon/styles';
.test {
color: colors.$blue-50;
}
then compile it (using `sass app.scss app.css`) I get this error message:
> Error: There is no module with the namespace "colors".
> `color: colors.$blue-50;`
But if I change the Sass file so it has this - and recompile - there are no errors. I can use things like `<p class="test">Test</p>` in my webpage and get the word "Test" in blue, which is the expected result.
@use '@carbon/styles';
@use '@carbon/styles/scss/colors';
.test {
color: colors.$blue-50;
}
The difference between these 2 files is that the second one has `@use '@carbon/styles/scss/colors';`.
I don't understand this because it says in the docs that
`@use '@carbon/styles';` is so you can "bring in *all* the styles".
In `node_modules/@carbon/styles/index.scss` it has this
`@forward 'scss/colors'`
The Sass [documentation][3] for `@forward` says
> It loads the module at the given URL just like `@use`, but it makes the public members of the loaded module available to users of your module as though they were defined directly in your module. Those members aren’t available in your module, though—if you want that, you’ll need to write a @use rule as well. Don’t worry, it’ll only load the module once!
This is incredibly confusing. The first sentence makes it sounds like `scss/colors` can be referenced directly by `app.scss`, whereas the second sentence suggests the opposite.
In any case I don't understand what `@use '@carbon/styles';` is actually letting me do. It suggests that this provides a single entrypoint to [all of these files][4] which it refers to. But then you have to reference them manually afterwards to actually use them in your own Sass file.
What is the purpose of the entrypoint file given its scope is so limited?
I'm using `sass` 1.71.1 compiled with dart2js 3.3.0 for reference.
[1]: https://github.com/carbon-design-system/carbon/tree/main/packages/styles
[2]: https://github.com/carbon-design-system/carbon/tree/main/packages/styles#usage
[3]: https://sass-lang.com/documentation/at-rules/forward/
[4]: https://github.com/carbon-design-system/carbon/blob/main/packages/styles/index.scss |
I wrote the Bash shell script below to check whether the input value is a character string or a number (using a mathematical function):
    #!/bin/bash
    uniq_value=$1
    if `$(echo "$uniq_value / $uniq_value" | bc)` ; then
        echo "Given value is number"
    else
        echo "Given value is string"
    fi
The execution result is as follows:
    $ sh -x test.sh abc
    + uniq_value=abc
    +++ echo 'abc / abc'
    +++ bc
    Runtime error (func=(main), adr=5): Divide by zero
    + echo 'Given value is number'
    Given value is number
There is an error like this: "Runtime error (func=(main), adr=5): Divide by zero"
Can anyone please suggest how to rectify this error?
The expected result for the input "abc123xy" should be "Given value is string"
The expected result for the input "3.045" should be "Given value is number"
The expected result for the input "6725302" should be "Given value is number"
After this I will assign a series of values to "uniq_value" variable in a loop. Hence getting the output for this script is very important. |
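For clarity, the behaviour I'm after would look something like this sketch, which uses shell pattern matching instead of `bc` - I'm not sure whether this is the right direction:

```shell
#!/bin/sh
# is_number: exit 0 when $1 is an unsigned integer or decimal, 1 otherwise
is_number() {
    case $1 in
        ''|*[!0-9.]*) return 1 ;;  # empty, or contains a non-digit/non-dot
        *.*.*)        return 1 ;;  # more than one dot
        .)            return 1 ;;  # a lone dot
        *)            return 0 ;;
    esac
}

uniq_value=$1
if is_number "$uniq_value"; then
    echo "Given value is number"
else
    echo "Given value is string"
fi
```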
How to check whether the input is a character string or a number using mathematical function in UNIX Shell Script? |
|linux|bash|shell|unix|sh| |
null |
How to resolve the rsAccessDenied error from SQL Server Reporting Services |
I'm working on an ASP.NET project where I'm trying to add the "Hangfire" library for background jobs. I've installed all the required packages according to the [documentation][1] and also created the test database.
I've also added the required startup methods in Global.asax.vb (had to convert from C#, given in the example, to VB.NET), so my file looks like this:
Imports Hangfire
Imports Hangfire.SqlServer
Public Class Global_asax
Inherits HttpApplication
Sub Application_Start(sender As Object, e As EventArgs)
' Fires when the application is started
Try
HangfireAspNet.Use(GetHangfireServers)
Catch ex As Exception
Debug.Assert(False, "Not Yet Ready")
End Try
End Sub
Private Iterator Function GetHangfireServers() As IEnumerable(Of IDisposable)
GlobalConfiguration.Configuration.SetDataCompatibilityLevel(CompatibilityLevel.Version_170).UseSimpleAssemblyNameTypeSerializer().UseRecommendedSerializerSettings().UseSqlServerStorage("Data Source=xxx,00000;Initial Catalog=xxx;User ID=xxx;Password=xxx", New SqlServerStorageOptions With {
.CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
.SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
.QueuePollInterval = TimeSpan.Zero,
.UseRecommendedIsolationLevel = True,
.DisableGlobalLocks = True
})
Yield New BackgroundJobServer()
End Function
End Class
And the
HangfireAspNet.Use(GetHangfireServers)
line is throwing the next exception:
> Unable to cast object of type 'VB$StateMachine_6_GetHangfireServers' to type 'System.Func`1[System.Collections.Generic.IEnumerable`1[System.IDisposable]]
I've verified that the connection string is OK and it connects to the test database with no problems, but I'm stuck regarding the exception.
What can I try next?
[1]: https://docs.hangfire.io/en/latest/getting-started/aspnet-applications.html |
Using FastAPI I have set up a POST endpoint that takes a command, I want this command to be case insensitive, while still having suggested values (i.e. within the SwaggerUI docs)
For this, I have set up an endpoint with a `Command` class as a schema for the POST body parameters:
```python
@router.post("/command", status_code=HTTPStatus.ACCEPTED) # @router is a fully set up APIRouter()
async def control_battery(command: Command):
result = do_work(command.action)
return result
```
For `Command` I currently have 2 possible versions, which both do not have the full functionality I desire.
```python
from fastapi import HTTPException
from pydantic import BaseModel, field_validator
from typing import Literal
## VERSION 1
class Command(BaseModel):
action: Literal["jump", "walk", "sleep"]
## VERSION 2
class Command(BaseModel):
action: str
@field_validator('action')
@classmethod
def validate_command(cls, v: str) -> str:
"""
Checks if command is valid and converts it to lower.
"""
if v.lower() not in {'jump', 'walk', 'sleep'}:
raise HTTPException(status_code=422, detail="Action must be either 'jump', 'walk', or 'sleep'")
return v.lower()
```
Version 1 is obviously not case-insensitive, but it has the correct 'suggested value' behaviour, as below.
<img src="https://i.stack.imgur.com/ujg7N.png" width="300" />
Whereas Version 2 has the desired case-insensitive behaviour and allows for greater control over the validation, but it no longer shares suggested values with users of the schema - e.g., in the image above, "jump" would be replaced with "string".
How do I combine the functionality of both of these approaches?
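The closest I've managed is layering a `mode="before"` validator on top of the `Literal`, so the lower-casing runs before the literal check - this seems to keep the suggested values in the generated schema while accepting any casing, but I'm not sure it's the idiomatic approach:

```python
from typing import Literal
from pydantic import BaseModel, field_validator

class Command(BaseModel):
    # Literal keeps the suggested values in the generated OpenAPI schema
    action: Literal["jump", "walk", "sleep"]

    @field_validator("action", mode="before")
    @classmethod
    def normalize_action(cls, v):
        # lower-case before the Literal check runs, so "JUMP" is accepted
        return v.lower() if isinstance(v, str) else v
```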
|
Pydantic / FastAPI, how to set up a case-insensitive model with suggested values |
|python|fastapi|pydantic| |
My input string:
AWS-HMAC-SHA256 Credential=eyJhbGciOiJIUzI1NiIsIngtc3MiOjEy/20160911/cn/user-service/request,SignedHeaders=host;x-aws-date, Signature=d9ee2d43f2067e4b8857f15fa8fff27820051d95a4ec31e93be866f201e0797a
How can I get the values for `Credential`, `SignedHeaders`, and `Signature`? |
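To sanity-check the kind of pattern I have in mind, here is a quick prototype in Python - the same pattern should carry straight over to `preg_match_all`:

```python
import re

header = (
    "AWS-HMAC-SHA256 Credential=eyJhbGciOiJIUzI1NiIsIngtc3MiOjEy"
    "/20160911/cn/user-service/request,SignedHeaders=host;x-aws-date, "
    "Signature=d9ee2d43f2067e4b8857f15fa8fff27820051d95a4ec31e93be866f201e0797a"
)

# Capture each key and its value, where a value runs until the next
# comma or whitespace character.
parts = dict(re.findall(r"(Credential|SignedHeaders|Signature)=([^,\s]+)", header))
print(parts["SignedHeaders"])  # host;x-aws-date
```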
Extract significant parts of an AWS authorization header string |
|php|preg-match-all|text-extraction|authorization-header| |
null |
You can try using a TypeScript type assertion:
```
<app-validation [inputValidation]="(item as ICertificate).url"></app-validation>
```
|
What you need is to accumulate values for each individual stream and keep the last successful one as well as the current value. Then you can decide what to do downstream, if you want to use the current one based on the type (pending, success, error) or use the last valid one.
Here's how that can be achieved.
First of all, we define our data structure for a result:
```typescript
interface ResultPending {
type: 'pending';
loading: true;
}
interface ResultSuccess<Data> {
type: 'success';
loading: false;
data: Data;
}
interface ResultError {
type: 'error';
loading: false;
error: any;
}
type Result<Data> = ResultPending | ResultSuccess<Data> | ResultError;
```
Then we create a generic function to describe a stream as a result using the previously built interface:
```typescript
function keepLastValidResult<Data>(): (
  source$: Observable<Result<Data>>
) => Observable<{
  lastValidResult: ResultSuccess<Data> | null;
  lastResult: Result<Data>;
}> {
  return (source$) =>
    source$.pipe(
      scan(
        (acc, curr: Result<Data>) => {
          switch (curr.type) {
            case 'success':
              // a success becomes both the current and the last valid result
              return { lastValidResult: curr, lastResult: curr };
            case 'pending':
            case 'error':
              // keep the previous valid result; only update the current one
              return { ...acc, lastResult: curr };
          }
        },
        {
          lastValidResult: null as ResultSuccess<Data> | null,
          lastResult: { type: 'pending', loading: true } as Result<Data>,
        }
      )
    );
}
```
And now, we can apply this custom operator to individual streams:
```typescript
combineLatest({
data1: source1$.pipe(keepLastValidResult()),
data2: source2$.pipe(keepLastValidResult()),
});
```
Then in your combine latest, you can check the type of the current result to know if it's a success or not, and if not use instead the `lastValidResult` to keep displaying the last valid result as the name suggests, but also display the error or loading state based on `lastResult`. |
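For illustration, the accumulation rule behaves like the following reduction over a plain array of results (hypothetical data, independent of RxJS, just to show which value wins at each step):

```typescript
type Res =
  | { type: 'pending' }
  | { type: 'success'; data: number }
  | { type: 'error'; error: string };

// The same accumulation rule as the scan operator, applied to an array.
function accumulate(results: Res[]) {
  return results.reduce(
    (acc, curr) =>
      curr.type === 'success'
        ? { lastValidResult: curr, lastResult: curr }
        : { ...acc, lastResult: curr },
    {
      lastValidResult: null as Res | null,
      lastResult: { type: 'pending' } as Res,
    }
  );
}

const out = accumulate([
  { type: 'success', data: 1 },
  { type: 'pending' },
  { type: 'error', error: 'boom' },
]);
// out.lastValidResult is still the success carrying data 1,
// while out.lastResult is the trailing error.
```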
I am working on a virtualised data grid for my application.
I use `transform: translateY` for the table offset on scroll to make the table virtualised.
I developed all the functionality in a React 17 project, but when I migrated to React 18 I found that the data grid behaviour changed for the worse - the data grid started to bounce on scroll.
I prepared the minimal representing code extract, which shows my problem.
To assure that the code is the same for React 17 and React 18, I change only the import of ReactDOM from 'react-dom/client' to 'react-dom' (which is of course incorrect, since the latter is deprecated) in my index.tsx file.
This is the code:
index.html
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Virtualised table</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
</body>
</html>
```
index.js
```
// import ReactDOM from "react-dom";
import ReactDOM from "react-dom/client";
import { useState } from "react";
import "./styles.css";
let vendors = [];
for (let i = 0; i < 1000; i++ ){
vendors.push({
id: i,
edrpou: i,
fullName: i,
address: i
})
}
const scrollDefaults = {
scrollTop: 0,
firstNode: 0,
lastNode: 70,
};
function App() {
const [scroll, setScroll] = useState(scrollDefaults);
const rowHeight = 20;
const tableHeight = rowHeight * vendors.length + 40;
const handleScroll = (event) => {
const scrollTop = event.currentTarget.scrollTop;
const firstNode = Math.floor(scrollTop / rowHeight);
setScroll({
scrollTop: scrollTop,
firstNode: firstNode,
lastNode: firstNode + 70,
});
};
const vendorKeys = Object.keys(vendors[0]);
return (
<div
style={{ height: "1500px", overflow: "auto" }}
onScroll={handleScroll}
>
<div className="table-fixed-head" style={{ height: `${tableHeight}px` }}>
<table style={{transform: `translateY(${scroll.scrollTop}px)`}}>
<thead style={{ position: "relative" }}>
<tr>
{vendorKeys.map((key) => <td>{key}</td>)}
</tr>
</thead>
<tbody >
{vendors.slice(scroll.firstNode, scroll.lastNode).map((item) => (
<tr style={{ height: rowHeight }} key={item.id}>
<td><div className="data">{item.id}</div></td>
<td><div className="data">{item.edrpou}</div></td>
<td><div className="data">{item.fullName}</div></td>
<td><div className="data">{item.address}</div></td>
</tr>
))}
</tbody>
</table>
</div>
</div>
);
}
// const rootElement = document.getElementById("root");
// ReactDOM.render(<App />, rootElement);
const root = ReactDOM.createRoot(
document.getElementById('root')
);
root.render(
<App />
);
```
styles.css
```
* {
padding: 0;
margin: 0
}
.table-fixed-head thead th{
background-color: white;
}
.row {
line-height: 20px;
background: #dafff5;
max-width: 200px;
margin: 0 auto;
box-shadow: 0 0 1px 0 rgba(0, 0, 0, 0.5);
}
.data{
width: 150px;
white-space: nowrap;
overflow: hidden;
margin-right: 20px;
}
```
I have spent 1.5 days trying to find the reason why the table bounces on scroll in React 18, without result.
BTW, `overscroll-behavior: none` doesn't work. |
HTML Table bounces onScroll in React 18, whereas React 17 renders the table correct |
|javascript|css|reactjs| |
null |
Hello, I was facing the same issue, and I worked around it by creating a template named model.mustache to override the original one.
The main problem is that the `#isEnum` condition always returns false, while `^isEnum` works correctly; so I found a different way to detect an `enum`, using `#allowableValues`. The solution consists of creating two generation blocks: one for the `enum` case and a second one for the `model` case.
Here is what you need to do, step by step.
1. Create a new file in your project called model.mustache
2. Add the following content to the file:
package {{package}};
{{#imports}}import {{import}};
{{/imports}}
import io.swagger.annotations.*;
import com.google.gson.annotations.SerializedName;
{{#models}}
{{#model}}
{{#allowableValues}}
@ApiModel(description = "")
public enum {{classname}} {
{{#allowableValues}}{{#values}} {{.}}, {{/values}}{{/allowableValues}}
}
{{/allowableValues}}
{{/model}}
{{/models}}
{{#models}}
{{#model}}
{{^isEnum}}
{{#description}}
/**
{{.}}
**/{{/description}}
@ApiModel(description = "{{{description}}}")
public class {{classname}} {{#parent}}extends {{{.}}}{{/parent}} {
{{#vars}}{{#isEnum}}
public enum {{datatypeWithEnum}} {
{{#allowableValues}}{{#values}} {{.}}, {{/values}}{{/allowableValues}}
};
@SerializedName("{{baseName}}")
private {{{datatypeWithEnum}}} {{name}} = {{{defaultValue}}};{{/isEnum}}{{^isEnum}}
@SerializedName("{{baseName}}")
private {{{dataType}}} {{name}} = {{{defaultValue}}};{{/isEnum}}{{/vars}}
{{#vars}}
/**{{#description}}
{{{.}}}{{/description}}{{#minimum}}
minimum: {{.}}{{/minimum}}{{#maximum}}
maximum: {{.}}{{/maximum}}
**/
@ApiModelProperty({{#required}}required = {{required}}, {{/required}}value = "{{{description}}}")
public {{{datatypeWithEnum}}} {{getter}}() {
return {{name}};
}
public void {{setter}}({{{datatypeWithEnum}}} {{name}}) {
this.{{name}} = {{name}};
}
{{/vars}}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
{{classname}} {{classVarName}} = ({{classname}}) o;{{#hasVars}}
return {{#vars}}(this.{{name}} == null ? {{classVarName}}.{{name}} == null : this.{{name}}.equals({{classVarName}}.{{name}})){{^-last}} &&
{{/-last}}{{#-last}};{{/-last}}{{/vars}}{{/hasVars}}{{^hasVars}}
return true;{{/hasVars}}
}
@Override
public int hashCode() {
int result = 17;
{{#vars}}
result = 31 * result + (this.{{name}} == null ? 0: this.{{name}}.hashCode());
{{/vars}}
return result;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append("class {{classname}} {\n");
{{#parent}}sb.append(" " + super.toString()).append("\n");{{/parent}}
{{#vars}}sb.append(" {{name}}: ").append({{name}}).append("\n");
{{/vars}}sb.append("}\n");
return sb.toString();
}
}
{{/isEnum}}
{{/model}}
{{/models}}
3. Now, in your generator configuration, you must add the template folder path with `templateDir = "path/to/the/folder/where/you/created/model.mustache"`
This is an example of the config I have in my `build.gradle`:
var generatedSourcesPath = "$buildDir" // /generated/sources/openapi
var apiDescriptionFolder = "$rootDir/app/openapi"
var apiRootName = "com.deepdrimz.openapi"
ext {
sName = 'njangui'
}
openApiGenerate {
var serviceName = findProperty("sName")
generatorName = "android"
templateDir = "${apiDescriptionFolder}/templates"
inputSpec = "${apiDescriptionFolder}/${serviceName}.yaml"
outputDir = "${generatedSourcesPath}"
modelPackage = "${apiRootName}.${serviceName}"
configOptions = [
dateLibrary: "java8",
library: "httpclient",
serializationLibrary: "gson",
serializableModel : "true"
]
} |
|python|signal-processing|spectrum| |
I am using the following code to generate a chart in a Microsoft Access form using the web browser control. On most computers it works completely fine, including Windows 7 SP1 and Windows 10, but on two PCs with Windows 10 it does not show up correctly: the Urdu text renders incorrectly (characters mixed with each other, as shown in the image below).
I saved the following code in a text file with the .html extension and opened it with Internet Explorer on both kinds of PC (one on which it works perfectly and one on which the problem exists); it worked on the first and not on the second.
I verified the locale/regional settings on both PCs (Windows-Key + R, then the intl.cpl command) and both PCs have the same settings.
What could be causing this issue, and how can I fix it? Please help.
[](https://postimg.cc/c6gSRmJb)
Chart With Urdu Not Showing Correctly
```
<!DOCTYPE html>
<!-- saved from url=(0014)about:internet -->
<html>
<head>
<title>Chart</title>
<meta charset='utf-8'>
<meta http-equiv='X-UA-Compatible' content='IE=Edge'>
<script>
function isPowerOfTen (n) { while (n >= 10 && n % 10 == 0) { n /= 10; } return n == 1; }
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.9.4/Chart.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-colorschemes"></script>
<style>
body { margin: 0; padding: 0; }
#container { width: 100%; }
</style>
</head>
<body>
<div id='container'>
<canvas id='myChart'></canvas>
</div>
<script>
Chart.defaults.global.animation.duration = 0;
Chart.defaults.global.animation.easing = 'linear';
var ctx = document.getElementById('myChart').getContext('2d');
var myChart = new Chart(ctx, {
type: 'bar',
data: {
labels: ['جنوری','فروری','مارچ',],
datasets: [
{label: 'تعداد' ,
data: [ 2, 4, 12],
backgroundColor: Chart['colorschemes'].tableau.Tableau20,
borderWidth: 1}
]
},
options: {
aspectRatio: 3.65166865315852,
title: {
display: true,
position: 'top',
text: 'مہینہ کے حساب سے تعداد',
fontFamily: 'Jameel Noori Nastaleeq',
fontSize: 25,
fontStyle: 'normal'
},
legend: {
display: false
},
scales: {
yAxes: [{
id: 'first-y-Axis',
display: true,
ticks: {
TicksMaxLimit: 5 ,
autoSkip: true
}
}],
xAxes: [{
id: 'first-x-Axis',
display: true,
ticks: {
fontFamily: 'Jameel Noori Nastaleeq',
autoSkip: false
}
}]
},
plugins: {
colorschemes: {
scheme: 'tableau.Tableau20'
}
}
}
});
</script>
</body>
</html>
``` |
Urdu Text is Not Showing Up Correctly in Internet Explorer On Some Computers |
|html|internet-explorer-11|urdu| |
Identify lines in spectrogram using python |
I am trying to implement "login with Google" functionality in an ASP.NET Core Web API using an Identity external provider. Everything goes well, but when the URL redirects to the external-auth-callback function, `info` comes back null. This is the external-auth-callback function.
program.cs file authentication code
builder.Services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddCookie()
.AddJwtBearer(o =>
{
o.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuer = builder.Configuration["Jwt:ValidIssuer"],
ValidAudience = builder.Configuration["Jwt:ValidAudience"],
IssuerSigningKey = new SymmetricSecurityKey
(Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Secret"])),
ValidateIssuer = true,
ValidateAudience = true,
ValidateLifetime = false,
ValidateIssuerSigningKey = true
};
}).AddGoogle(opt =>
{
opt.ClientId = builder.Configuration["GoogleLoginProvider:ClientId"];
opt.ClientSecret = builder.Configuration["GoogleLoginProvider:ClientSecret"];
opt.SignInScheme = IdentityConstants.ExternalScheme;
});
public async Task<MessageViewModel> ExternalLoginCallback([FromQuery] string returnUrl)
{
var info = await _signInManager.GetExternalLoginInfoAsync();
if (info != null)
{
var signInResult = await _signInManager.ExternalLoginSignInAsync(info.LoginProvider,
info.ProviderKey, isPersistent: false, bypassTwoFactor: true);
return new MessageViewModel()
{
IsSuccess = signInResult.Succeeded,
Message = "User with this account is already in table",
};
}
else
{
var email = info.Principal.FindFirstValue(ClaimTypes.Email);
var user = await _userManager.FindByEmailAsync(email);
if (user == null)
{
user = new Users
{
UserName = info.Principal.FindFirstValue(ClaimTypes.Email),
Email = info.Principal.FindFirstValue(ClaimTypes.Email),
FirstName = info.Principal.FindFirstValue(ClaimTypes.GivenName),
LastName = info.Principal.FindFirstValue(ClaimTypes.Surname),
};
await _userManager.CreateAsync(user);
}
await _userManager.AddLoginAsync(user, info);
await _signInManager.SignInAsync(user, isPersistent: false);
return new MessageViewModel()
{
IsSuccess = true,
Message = "User with this account is created successfully"
};
}
}
Before that, I return the redirect URL through this method:
public async Task<LoginProviderViewModel> ExternalLogin(string provider, string returnUrl)
{
var redirectUrl = $"https://localhost:7008/api/account/external-auth-callback?returnUrl={returnUrl}";
var properties = _signInManager.ConfigureExternalAuthenticationProperties(provider, redirectUrl);
properties.AllowRefresh = true;
return new LoginProviderViewModel()
{
Provider = provider,
Properties = properties,
};
}
From my frontend, when I click the "login with Google" button, this method is called: first the externalLogin method runs and I return with the redirect URL; after that the frontend shows the Google sign-in page, and when I select a user it goes to the external-auth-callback function, which is used to save the user who tries to log in with Google.
const handleExternalLogin = async () => {
const url = `${externalLogin}?provider=Google&returnUrl=/admin-dashboard`;
const response = await getData(url);
if (response.isSuccessfull) {
const redirectUrl = response.data.properties.items[".redirect"];
const loginProvider = response.data.properties.items["LoginProvider"];
const googleAuthorizationUrl =
`https://accounts.google.com/o/oauth2/v2/auth` +
`?client_id=xxxxx.apps.googleusercontent.com` +
`&redirect_uri=${redirectUrl}` +
`&response_type=code` +
`&scope=openid%20profile%20email` +
`&state=${loginProvider}`;
window.location.href = googleAuthorizationUrl;
}
};
Please help me out, as I have been stuck on this point for 2 days.
I have tried all the possible checks but do not understand why `info` is null in the external-auth-callback function. Can anyone let me know why I am getting the null value? |
My job is to automatically display, one by one, the pages of a pdf file with tkinter. To do this, I create an image for each page and store all the images in a list. I can display all the pages from my list with manual scrolling. But I want it to be automatic (autoscroll). How can I proceed?<br/>
Here is my code:
import os,ironpdf,time
import shutil
from tkinter import *
from PIL import Image,ImageTk
def do_nothing():
print("No thing")
def convert_pdf_to_image(pdf_file):
pdf = ironpdf.PdfDocument.FromFile(pdf_file)
    #Extract all pages to a folder as image files
folder_path = "images"
pdf.RasterizeToImageFiles(os.path.join(folder_path,"*.png"))
#List to store the image paths
image_paths = []
#Get the list of image files in the folder
for filename in os.listdir(folder_path):
if filename.lower().endswith((".png",".jpg",".jpeg",".gif")):
image_paths.append(os.path.join(folder_path,filename))
return image_paths
def on_closing():
#Delete the images in the 'images' folder
shutil.rmtree("images")
window.destroy()
window = Tk()
window.title("PDF Viewer")
window.geometry("1900x800")
canvas = Canvas(window)
canvas.pack(side=LEFT,fill=BOTH,expand=True)
canvas.configure(background="black")
scrollbar = Scrollbar(window,command=canvas.yview)
scrollbar.pack(side=RIGHT,fill=Y)
canvas.configure(yscrollcommand=scrollbar.set)
canvas.bind("<Configure>",lambda e:canvas.configure(scrollregion=canvas.bbox("all")))
canvas.bind("<MouseWheel>",lambda e:canvas.yview_scroll(int(-1*(e.delta/120)),"units"))
frame = Frame(canvas)
canvas.create_window((0,0),window=frame,anchor="nw")
images = convert_pdf_to_image("input.pdf")
for image_path in images:
print(image_path)
image = Image.open(image_path)
photo = ImageTk.PhotoImage(image,size=1200)
label = Label(frame,image=photo)
label.image = photo
label.pack(fill=BOTH)
time.sleep(1)
window.mainloop()
|
How to automatically scroll (autoscroll) a list of images contained in a list with tkinter |
|python|tkinter|autoscroll| |
I recently encountered an issue with SwiftData where I unintentionally created multiple `ModelContainer` instances, leading to unexpected behavior. Specifically, after creating redundant `ModelContainer` instances, the constraints on Model data seemed to be lost, allowing the insertion of duplicate Item instances.
However, upon restarting the entire application, it appears that only a single Item instance remains, suggesting that the duplicated insertions were merely transient. Could this behavior be indicative of a bug within SwiftData?
I'm seeking clarification on whether this issue stems from a misconfiguration on my end or if it indeed reflects a potential bug within SwiftData. Any insights or guidance on resolving this matter would be greatly appreciated. Thank you.
```swift
import SwiftData
import SwiftUI
@Model
class Item {
@Attribute(.unique)
var name: String
init(name: String) {
self.name = name
}
}
@MainActor
struct ContentView: View {
@State var modelContainer: ModelContainer? = nil
@State var items: [Item] = []
var body: some View {
VStack {
List {
ForEach(items) { item in
Text(item.name)
}
}
Spacer()
Button("AddItemInCurrentContext") {
addItemInCurrentContext(modelContainer: modelContainer!)
items = fetchItems(modelContainer: modelContainer!)
}
Button("AddItemInNewContext") {
addItemInNewContext()
items = fetchItems(modelContainer: modelContainer!)
}
}
.onAppear {
modelContainer = createModelContainer() // create the 1st ModelContainer instance
}
}
}
func createModelContainer() -> ModelContainer {
do {
let config = ModelConfiguration()
return try ModelContainer(for: Item.self, configurations: config)
} catch {
fatalError("Could not initialize ModelContainer")
}
}
@MainActor
func fetchItems(modelContainer: ModelContainer) -> [Item] {
let descriptor = FetchDescriptor<Item>()
do {
return try modelContainer.mainContext.fetch(descriptor)
} catch {
return []
}
}
@MainActor
func addItemInCurrentContext(modelContainer: ModelContainer) {
let item = Item(name: "Item") // identical name
modelContainer.mainContext.insert(item)
do {
try modelContainer.mainContext.save()
} catch {
print("addItemInCurrentContext error: \(error)")
}
}
@MainActor
func addItemInNewContext() {
let item = Item(name: "Item") // identical name
let newModelContainer = createModelContainer()
newModelContainer.mainContext.insert(item)
do {
try newModelContainer.mainContext.save()
} catch {
        print("addItemInNewContext error: \(error)")
}
}
#Preview {
ContentView()
}
```
[Snapshot](https://www.youtube.com/watch?v=BizyQFZY27c)
|
I am trying to upgrade PHP on UwAmp, but whenever I run the upgraded PHP 7.3.3 or PHP 8.3.3, Apache stops. PHP 7.2.7 works fine. |
null |
I have a C# game emulator which uses TcpListener and originally had a TCP based client.
A new client was introduced which is HTML5 (web socket) based. I wanted to support this without modifying too much of the existing server code, still allowing `TcpListener` and `TcpClient` to work with web socket clients connecting.
Here is what I have done, but I feel like I'm missing something, as I am not getting the usual order of packets, and therefore the handshake never completes.
1. Implement protocol upgrade mechanism
public static byte[] GetHandshakeUpgradeData(string data)
{
const string eol = "\r\n"; // HTTP/1.1 defines the sequence CR LF as the end-of-line marker
var response = Encoding.UTF8.GetBytes("HTTP/1.1 101 Switching Protocols" + eol
+ "Connection: Upgrade" + eol
+ "Upgrade: websocket" + eol
+ "Sec-WebSocket-Accept: " + Convert.ToBase64String(
System.Security.Cryptography.SHA1.Create().ComputeHash(
Encoding.UTF8.GetBytes(
new Regex("Sec-WebSocket-Key: (.*)").Match(data).Groups[1].Value.Trim() + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
)
)
) + eol
+ eol);
return response;
}
This is then used like so:
private async Task OnReceivedAsync(int bytesReceived)
{
var data = new byte[bytesReceived];
Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived);
var stringData = Encoding.UTF8.GetString(data);
if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET"))
{
await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false);
return;
}
2. Encode all messages after switching protocol response
public static byte[] EncodeMessage(byte[] message)
{
byte[] response;
var bytesRaw = message;
var frame = new byte[10];
var indexStartRawData = -1;
var length = bytesRaw.Length;
frame[0] = 129;
if (length <= 125)
{
frame[1] = (byte)length;
indexStartRawData = 2;
}
else if (length >= 126 && length <= 65535)
{
frame[1] = 126;
frame[2] = (byte)((length >> 8) & 255);
frame[3] = (byte)(length & 255);
indexStartRawData = 4;
}
else
{
frame[1] = 127;
frame[2] = (byte)((length >> 56) & 255);
frame[3] = (byte)((length >> 48) & 255);
frame[4] = (byte)((length >> 40) & 255);
frame[5] = (byte)((length >> 32) & 255);
frame[6] = (byte)((length >> 24) & 255);
frame[7] = (byte)((length >> 16) & 255);
frame[8] = (byte)((length >> 8) & 255);
frame[9] = (byte)(length & 255);
indexStartRawData = 10;
}
response = new byte[indexStartRawData + length];
int i, responseIdx = 0;
// Add the frame bytes to the response
for (i = 0; i < indexStartRawData; i++)
{
response[responseIdx] = frame[i];
responseIdx++;
}
// Add the data bytes to the response
for (i = 0; i < length; i++)
{
response[responseIdx] = bytesRaw[i];
responseIdx++;
}
return response;
}
Used here:
public async Task WriteToStreamAsync(byte[] data, bool encode = true)
{
if (encode)
{
data = WebSocketHelpers.EncodeMessage(data);
}
3. Decode all messages
public static byte[] DecodeMessage(byte[] bytes)
{
var secondByte = bytes[1];
var dataLength = secondByte & 127;
var indexFirstMask = dataLength switch
{
126 => 4,
127 => 10,
_ => 2
};
var keys = bytes.Skip(indexFirstMask).Take(4);
var indexFirstDataByte = indexFirstMask + 4;
var decoded = new byte[bytes.Length - indexFirstDataByte];
for (int i = indexFirstDataByte, j = 0; i < bytes.Length; i++, j++)
{
decoded[j] = (byte)(bytes[i] ^ keys.ElementAt(j % 4));
}
return decoded;
}
Which is used here:
private async Task OnReceivedAsync(int bytesReceived)
{
var data = new byte[bytesReceived];
Buffer.BlockCopy(_buffer, 0, data, 0, bytesReceived);
var stringData = Encoding.UTF8.GetString(data);
if (stringData.Length >= 3 && Regex.IsMatch(stringData, "^GET"))
{
await _networkClient.WriteToStreamAsync(WebSocketHelpers.GetHandshakeUpgradeData(stringData), false);
return;
}
var decodedData = WebSocketHelpers.DecodeMessage(data);
if (decodedData[0] == 60)
{
await OnReceivedPolicyRequest();
}
else if (_networkClient != null)
{
foreach (var packet in DecodePacketsFromBytes(decodedData))
{
_packetHandler.HandleAsync(_networkClient, packet);
}
}
} |
If you use `bootstrapApplication()` to bootstrap your app (Angular version > 14), add `provideAnimations()` to the `providers[]` array in app.config.ts.
Otherwise, add `BrowserAnimationsModule` to the `imports[]` of the standalone component (or module, in case you don't use standalone components) that uses `matTooltip`.
|
I have the following code block to create a gauge, which I will use for a custom widget in SAP Analytics Cloud. I want to use two pointers in this gauge. How can I do this within my code?
Thanks in advance
```js
option = {
series: [
{
type: 'gauge',
startAngle: 180,
endAngle: 0,
min: 0,
max: 240,
splitNumber: 12,
itemStyle: {
color: '#009AA6',
shadowColor: 'rgba(0,138,255,0.45)',
shadowBlur: 10,
shadowOffsetX: 2,
shadowOffsetY: 2
},
progress: {
show: true,
roundCap: true,
width: 18
},
pointer: {
icon: 'path://M2090.36389,615.30999 L2090.36389,615.30999 C2091.48372,615.30999 2092.40383,616.194028 2092.44859,617.312956 L2096.90698,728.755929 C2097.05155,732.369577 2094.2393,735.416212 2090.62566,735.56078 C2090.53845,735.564269 2090.45117,735.566014 2090.36389,735.566014 L2090.36389,735.566014 C2086.74736,735.566014 2083.81557,732.63423 2083.81557,729.017692 C2083.81557,728.930412 2083.81732,728.84314 2083.82081,728.755929 L2088.2792,617.312956 C2088.32396,616.194028 2089.24407,615.30999 2090.36389,615.30999 Z',
length: '75%',
width: 16,
offsetCenter: [0, '5%']
},
axisLine: {
roundCap: true,
lineStyle: {
width: 18
}
},
axisTick: {
splitNumber: 2,
lineStyle: {
width: 2,
color: '#999'
}
},
splitLine: {
length: 12,
lineStyle: {
width: 3,
color: '#999'
}
},
axisLabel: {
distance: 30,
color: '#999',
fontSize: 20
},
title: {
show: false
},
detail: {
backgroundColor: '#fff',
borderColor: '#999',
borderWidth: 2,
width: '60%',
lineHeight: 70,
height: 50,
borderRadius: 8,
offsetCenter: [0, '35%'],
valueAnimation: true,
formatter: function (value) {
return '{value|' + value.toFixed(0) + '}{unit|%}';
},
rich: {
value: {
fontSize: 30,
fontWeight: 'bolder',
color: '#777'
},
unit: {
fontSize: 30,
color: '#999',
padding: [0, 0, -10, 10]
}
}
},
data: [
{
value: 100
}
]
}
]
};
``` |
OAS 3: https://swagger.io/docs/specification/authentication/
> ### Using Multiple Authentication Types
>
> Some REST APIs support several authentication types. The `security` section lets you combine the security requirements using logical OR and AND to achieve the desired result. `security` uses the following logic:
>
>     security:    # A OR B
>       - A: []
>       - B: []
>
>     security:    # A AND B
>       - A: []
>         B: []
>
>     security:    # (A AND B) OR (C AND D)
>       - A: []
>         B: []
>       - C: []
>         D: [] |
I have wsl2 Debian setup with yubikey and running ssh-agent:
```
export | grep SSH_AUTH_SOCK
declare -x SSH_AUTH_SOCK="/run/user/1000//yubikey-agent/yubikey-agent.sock"
```
In VS Code I try to make a Remote SSH connection to a different remote Linux machine. VS Code of course uses either the local Windows OpenSSH.exe or, with some tricks (https://github.com/microsoft/vscode-remote-release/issues/937#issuecomment-1654832590), the WSL ssh binary. Both are unaware of the ssh-agent running on the wsl2 instance and try to find an id_rsa file in the typical directories - so I cannot connect.
Is there any way to create a remote SSH connection in VS Code via wsl2 ssh client with proper ssh-agent? |
VS Code and Remote SSH with WSL2 debian ssh-agent |
|visual-studio-code|vscode-remote|vscode-remote-ssh| |
This answer is more along the lines of a trick, or even a hack. As the comment above by @ThomA states, all branches of a `CASE` expression need to have the same type. You can achieve this by casting the `UserId` to text and then left-padding it with some number (say 10) of zeroes. This means that, when used for sorting, the `UserId` column sorts as text; however, since it is now fixed-width and left-padded with zeroes, the lexicographic sort agrees with the numeric sort.
<!-- language: sql -->
ORDER BY
CASE WHEN @sortOrder = '1' THEN
CASE @sortField
WHEN 'FirstName' THEN FIRSTNAME
WHEN 'SkillNames' THEN SkillNames
WHEN 'UserId'
THEN CAST(RIGHT('0000000000' + ISNULL(UserId, ''), 10) AS varchar(10))
END
END ASC,
CASE WHEN @sortOrder = '-1' THEN
CASE @sortField
WHEN 'UserId'
THEN CAST(RIGHT('0000000000' + ISNULL(UserId, ''), 10) AS varchar(10))
WHEN 'FirstName' THEN FirstName
WHEN 'SkillNames' THEN SkillNames
END
END DESC
Sort order for the `UserId`:
0000000001
0000000004
0000000010
0000000011
0000000053
0000000101 |
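That the lexicographic order of the zero-padded strings matches the numeric order can be sanity-checked outside SQL; a quick Python sketch with made-up ids:

```python
ids = ["1", "4", "10", "11", "53", "101"]
# Same idea as RIGHT('0000000000' + UserId, 10): fixed width, left-padded.
padded = [i.rjust(10, "0") for i in ids]

# Sorting the padded strings as text reproduces the numeric order.
assert sorted(padded) == [i.rjust(10, "0") for i in sorted(ids, key=int)]
```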
I am currently making a WYSIWYG editor and need to support rotation by pressing the "r" key while dragging an element with the mouse.
I noticed that the keydown event is not fired when I press a key while dragging something with the mouse.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Document</title>
</head>
<body>
<div draggable="true">Drag this and then try to press some keys</div>
<script>
document.addEventListener('keydown', function(event) {
console.log(`Key pressed: ${event.key}`);
});
document.addEventListener('mousedown', function(event) {
console.log(`Mouse button pressed: ${event.button}`);
});
</script>
</body>
</html>
<!-- end snippet -->
This is the example code you can try on your own (make something draggable as well). So my question - is it even possible to do this in DOM or is it an unavoidable limitation? |
What is the purpose of entrypoint files in Sass? |
|css|sass| |
You can customize your `WebClient`:

    WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(
            HttpClient.create().secure(contextSpec -> contextSpec
                .sslContext(sslContext)
                .handlerConfigurator(handler -> {
                    SSLEngine engine = handler.engine();
                    SSLParameters params = engine.getSSLParameters();
                    params.setServerNames(List.of(new SNIHostName(<hostname-to-match-certificate-CN>)));
                    // params.setEndpointIdentificationAlgorithm(null); // would disable hostname verification, but I would not recommend that
                    engine.setSSLParameters(params);
                }))))
        .build();

Providing the hostname this way causes it to be verified against what we have in the certificate, so the URL we are hitting can contain, for example, a plain IP in the host section |
I'm trying to create a reusable table and came across NgTemplateOutlet, which is a nice option. The problem we are facing is that, to keep the wrapper for the table clean, we separate the template into a different "component" and then get the TemplateRef using `ViewContainerRef.createComponent().instance.templateRef`. Everything works fine except when we try to use p-table functionality. One case is pSelectableRow: when setting it up, the table with ngTemplateOutlet stops rendering. This could be worked around at the template level with CSS, but that defeats the purpose of using a CSS framework, and I'd also like to understand Angular at a deeper level - what is causing it to break, and whether there is an alternate or better solution. Thanks in advance.
**dynamic.table.component.html**
```html
<p-table
[columns]="columns"
selectionMode="single"
[(selection)]="selectedProduct"
dataKey="name"
[value]="data"
>
<ng-template pTemplate="header" let-columns>
<tr>
<th scope="col" class="text-center" *ngFor="let col of columns">
{{ col }}
</th>
</tr>
</ng-template>
<ng-template pTemplate="body" let-rowData let-columns="columns">
<ng-container
pTemplate="body"
[ngTemplateOutlet]="customBodyTemplate"
[ngTemplateOutletContext]="{
$implicit: rowData,
cols: columns
}"
>
</ng-container>
</ng-template>
</p-table>
```
**dynamic.table.component.ts**
```ts
@Component({
selector: "app-dynamic-table",
templateUrl: "./dynamic.table.component.html",
styleUrls: ["./dynamic.table.component.scss"]
})
export class DynamicTableComponent {
@Input()
customBodyTemplate!: TemplateRef<any>;
@Input()
data: any[];
columns: any[];
@Output() eventEmit: EventEmitter<any> = new EventEmitter();
selectedProduct: any;
constructor() {}
ngOnInit() {
this.columns = this.toColumns();
}
toColumns() {
const keys = Object.keys(this.data[0]);
return keys.map((column) => {
return column.toUpperCase();
});
}
}
```
**example.component.html**
```html
<ng-template #example let-rowData let-columns="cols">
<tr [pSelectableRow]="rowData">
<td class="text-right" *ngFor="let col of columns">
{{rowData[col]}}
</td>
</tr>
</ng-template>
```
**example.component.ts**
```ts
@Component({
selector: "app-example",
templateUrl: "./example.component.html",
styleUrls: ["./example.component.scss"],
})
export class ExampleComponent {
@ViewChild("example",{ static: true }) public readonly templateRef: TemplateRef<any>;
emit($event: any){
//TODO:Implement action
}
}
```
We are separating the templates to a different component, the service gets the component and returns the TemplateRef like this example
```ts
vcr = inject(ViewContainerRef);
return this.vcr.createComponent(ExampleComponent).instance.templateRef;
```
I'm assuming it's not working because the pSelectableRow directive is in a different context than the table. As I was writing this, I actually resolved the issue by moving the `<tr [pSelectableRow]="rowData">` into the dynamic table and templating only the `td` elements, and that did the trick. I'm still posting because I'd like to know what was causing this and whether my code can be improved.
**example.component.html**
```html
<ng-template #example let-rowData let-columns="cols">
<td class="text-right" *ngFor="let col of columns">
{{rowData[col]}}
</td>
</ng-template>
```
|
|matlab|http-live-streaming|hdl-coder| |
GoogleCloud: Caller does not have required permission to use project foo |
I have made a Random Forest Classifier model in Python, and now want to make a partial dependence plot (PDP). I used scaled data for training and testing the model, and make the PDP like this:
`PartialDependenceDisplay.from_estimator(best_clf, X_test_final, best_features)`. However, the x-axis values are scaled, which limits interpretability.
Unscaling the data `X_test_final` before calling `PartialDependenceDisplay` does not work. Any suggestions on how I can change the x-axis values from scaled to unscaled? I have scaled my data using `StandardScaler()`. |
|python|machine-learning|scikit-learn|random-forest| |
```r
header <- c("A", "B", "C", "D")
subset <- c("D", "B")
readr::read_csv("sample_data.csv",
col_names = header,
col_select = any_of(subset))
# A tibble: 3 × 2
D B
<dbl> <dbl>
1 4 2
2 8 6
3 12 10
``` |
I am trying to make a WiFi broadcast application in Python. The idea is to place two network interface cards (NICs) in monitor mode, and inject packets such that two devices can communicate. This has been done before, especially in the context of drone RC/telemetry/video links; some examples include [OpenHD][1], [ez-Wifibroadcast][2] and [WFB-NG][3]. I specifically tested my hardware on [OpenHD][1] and was able to achieve a bitrate of ~4 MBit/s (2 x Raspberry Pi 3B+, 2 x TL-WN722N V2, 1 USB camera).
I put the two NICs into monitor mode, confirmed with `iwconfig`. They are also at the same frequency.
In making my Python application, I noticed a very poor bitrate and packet loss. I created a test script to demonstrate:
import socket
from time import perf_counter, sleep
import binascii
import sys
import threading
class PacketLoss:
def __init__(self, transmitter_receiver, size, iface1, iface2):
self.transmitter_receiver = transmitter_receiver
self.iface1, self.iface2 = iface1, iface2
self.send = True
self.counting = False
self.packet_recv_counter = 0
self.packet_send_counter = 0
self.t_target = 1
self.t_true = None
payload = (0).to_bytes(size, byteorder='little', signed=False)
self.payload_length = len(payload)
h = bytes((self.payload_length).to_bytes(2, byteorder='little', signed=False))
radiotap_header = b'\x00\x00\x0c\x00\x04\x80\x00\x00' + bytes([6 * 2]) + b'\x00\x18\x00'
frame_type = b'\xb4\x00\x00\x00'
self.msg = radiotap_header + frame_type + transmitter_receiver + h + payload
def inject_packets(self):
rawSocket = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0004))
rawSocket.bind((self.iface1, 0))
t0 = perf_counter()
while (self.send):
rawSocket.send(self.msg)
if self.counting:
self.packet_send_counter += 1
self.t_send_true = perf_counter() - t0
rawSocket.close()
def sniff_packets(self):
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
s.bind((self.iface2, 0))
t0 = perf_counter()
self.counting = True
while perf_counter() - t0 < self.t_target:
packet = s.recv(200)
match = packet[22:34]
if match == self.transmitter_receiver:
self.packet_recv_counter += 1
self.t_true = perf_counter() - t0
self.send = False
self.counting = False
s.close()
def get_stats(self):
dr_send = self.packet_send_counter * self.payload_length / self.t_send_true
dr_recv = self.packet_recv_counter * self.payload_length / self.t_true
packet_loss = 1 - self.packet_recv_counter/self.packet_send_counter
return dr_send, dr_recv, packet_loss
def print_statistics(self):
print(f'In {self.t_true:.3f}s, sent {self.packet_send_counter} captured {self.packet_recv_counter} packets.')
dr_send, dr_recv, packet_loss = self.get_stats()
print(f'{dr_send=:.1f}B/s; {dr_recv=:.1f}B/s; Packet loss: {packet_loss*100:.1f}%')
I tested `PacketLoss` with the following:
if __name__ == '__main__':
# Get test parameters
iface1, iface2 = sys.argv[1], sys.argv[2] # eg 'wlan0', 'wlan1'
size = int(sys.argv[3]) # eg 60
# Recv/transmit mac addresses
MAC_r = '35:eb:9e:3b:75:33'
MAC_t = 'e5:26:be:89:65:27'
receiver_transmitter = binascii.unhexlify(MAC_r.replace(':', '')) + binascii.unhexlify(MAC_t.replace(':', ''))
# create testing object
pl = PacketLoss(receiver_transmitter, size, iface1, iface2)
# start injecting packets
t = threading.Thread(target=pl.inject_packets)
t.start()
# wait a bit
sleep(0.1)
# start sniffing
pl.sniff_packets()
# print statistics
pl.print_statistics()
I tested with packet sizes of 30, 60 and 120, getting the following results:
| Packet size (B) | Received data rate (kB/s) | Packet loss (%) |
| ----------- | ----------------------------- | --------------- |
| 30 | 32.3 | 47.3 |
| 60 | 51.3 | 57.2 |
| 120 | 63.9 | 73.0 |
Eventually I want to use my program to stream video (and RC/telemetry), and I would need at least 125 kB/s received data rate, and a much better packet loss. Am I overlooking something that would increase the data rate and reduce the packet loss?
Thank you
[1]: https://github.com/OpenHD/OpenHD
[2]: https://github.com/rodizio1/EZ-WifiBroadcast
[3]: https://github.com/svpcom/wfb-ng
|
Hi Stackoverflow community,
I'm trying to learn Flask-Admin. The documentation is a very good start; however, I've found a very specific use case that I'm not able to resolve. I have a table in PostgreSQL for variables. I've chosen this approach since I have quite a lot of variables and they don't share the same attributes (for example, a region doesn't have an email property).
| Variable | Value |
| -------- | -------- |
| User1 | `{"name": "Name", "email": "test@example.com"}` |
| Region1 | `{"region": "Europe", "name": "DHL", "color": "blue"}` |
I'd like to have the functionality that the end-user can edit the variables themselves. They don't have any knowledge about JSON of course so it would be nice to render out the fields. Currently it's one field where the JSON value is. It would be perfect if there's a field for every item in the edit view in Flask-Admin. For example this would mean that editing the `User1` variable shows 2 fields, one for name and one for email.
## Edit:
This code seems to work, but it doesn't render the actual edit form. The print statements return the correct variables.
```py
Variable = Base.classes.variables
class VariableForm(BaseForm):
def __init__(self, formdata=None, obj=None, prefix='', **kwargs):
super(VariableForm, self).__init__(formdata, obj, prefix, **kwargs)
if obj is not None:
value = json.loads(obj.label)
for key in value.keys():
print(key, value[key])
self.__setattr__(key, StringField(key, validators=[InputRequired()], default=value[key]))
print(self.__getattribute__(key))
def populate_obj(self, obj):
super(VariableForm, self).populate_obj(obj)
obj.label = json.dumps({field: getattr(self, field).data for field in self._fields})
class VariableView(ModelView):
form = VariableForm
admin.add_view(VariableView(Variable, db_user.session, name="Variables"))
``` |
Yes. See rules 9 and 10 for `std430` in the spec (https://registry.khronos.org/OpenGL/specs/gl/glspec46.core.pdf#page=166&zoom=100,168,308): if the member is an array of S structures, the base alignment of the structure is N, where N is the largest base alignment value of any of its members.
In the case of `std140`, this would be rounded up to the base alignment of a `vec4`. In the case of `std430` it is not rounded up. However, since the largest element in your case is a `vec4`, this makes no difference in your case. |
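The rule can be sketched numerically. This is a hedged toy model of just the base-alignment rule (the per-type alignments are the usual scalar/vector values from the spec; offsets, padding and array strides are ignored):

```python
# Assumed base alignments in bytes for a few GLSL types.
BASE_ALIGN = {"float": 4, "vec2": 8, "vec3": 16, "vec4": 16}

def struct_base_alignment(members, layout):
    """Base alignment of a struct: the largest member alignment (std430),
    rounded up to a vec4's alignment under std140."""
    n = max(BASE_ALIGN[m] for m in members)
    if layout == "std140":
        n = max(n, BASE_ALIGN["vec4"])
    return n

# Largest member is a vec4 -> both layouts agree (the situation in the question):
assert struct_base_alignment(["float", "vec4"], "std430") == 16
assert struct_base_alignment(["float", "vec4"], "std140") == 16
# With smaller members the rounding difference shows up:
assert struct_base_alignment(["float", "float"], "std430") == 4
assert struct_base_alignment(["float", "float"], "std140") == 16
```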
Suppose we have a gRPC DB-backed application with some resource `X` like this, for example in Go:
```
type X struct {
id string
name string
version string
}
```
And for some reason the application does not permit updates unless `version = "2"`.
If a client tries to make a request to update the name of an existing resource whose `version = "1"`, like this:
```
// This exists in the DB.
x := X{
id: "xyz",
version: "1",
name: "some name",
}
UpdateX(
X{
id: "xyz",
name: "some other name",
},
)
```
what gRPC error code should be returned?
Referring to the codes here: https://grpc.github.io/grpc/core/md_doc_statuscodes.html, I can see arguments for both INVALID_ARGUMENT and FAILED_PRECONDITION.
INVALID_ARGUMENT:
- The resource cannot be updated so the request will always fail, and is hence invalid
FAILED_PRECONDITION:
- The state of the system needs to be checked to see what the version of the resource is, you can't determine a priori whether the request is possible
- Even though the system currently does not allow such updates, in theory it could be updated so that the request becomes acceptable
|
I am currently working on a short URL service project that involves using AWS CloudFront to route requests to both S3 and API Gateway as origins. S3 is set up for static hosting, and there are two endpoints configured in API Gateway: /shorten_url and `/{key}`. The `/shorten_url` endpoint is designed to shorten long URLs and save them in the `/u` directory on S3, creating files like 1234abc, 12345ab, etc. The `/{key}` endpoint serves to redirect short URLs to their respective long URLs.
The behavior settings in CloudFront are configured such that the path pattern `/api/*` routes to the API Gateway, and the default path pattern (\*) routes to the index.html file in S3.
```
axios
.post("/shorten_url", postData)
.then((response) => {
console.log(response.data);
this.url_short = response.data["url_short"];
})
```
The `/shorten_url` endpoint works fine, successfully interacting with the API Gateway. However, the issue arises with the `/{key}` endpoint: instead of being routed to API Gateway, requests seem to bypass it entirely. I would like users to enter a short URL like www.example.com/1234abc and have it route through API Gateway and Lambda to access S3, but currently it connects directly to S3 (for example, it behaves as if the user had requested www.example.com/u/1234abc directly). How can I resolve this issue to achieve the desired routing? |
Troubleshooting CloudFront Routing for API Gateway and S3 in a Short URL Service |
|amazon-web-services|aws-api-gateway|amazon-cloudfront| |