Dataset schema (per-column type and observed value range):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,897,760
Twilio Challenge: Travel Planner via Twilio Functions, WhatsApp & Gemini
This is a submission for the Twilio Challenge: Travel Planner via Twilio...
0
2024-06-23T13:05:36
https://dev.to/chintanonweb/twilio-challenge-travel-planner-via-twilio-functions-whatsapp-gemini-1m4e
devchallenge, twiliochallenge, ai, twilio
*This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)*

### Twilio Challenge: Travel Planner via Twilio Functions, WhatsApp & Gemini

## What I Built

I created a travel planning solution using Twilio Functions, WhatsApp, and Gemini. This application lets users plan their trips effortlessly by chatting with a WhatsApp bot. Users can get real-time travel suggestions, receive travel tips, and manage it all without leaving their WhatsApp interface. Leveraging the robust capabilities of Twilio Functions, the bot integrates AI services and provides a seamless user experience.

## Demo

To try the Travel Planner demo, scan the code in the screenshot below and start planning.

![Code to start a chat with the Travel Planner bot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/anywzooer8hnbya9gapu.png)

Below are some screenshots demonstrating the features and functionality:

![WhatsApp chat with the travel planner bot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1plakaqilzrt02r2yedm.gif)
*WhatsApp chat with the travel planner bot.*

![Real-time travel suggestions and itineraries](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j49mc64ywu9jccso1nwk.jpg)
*Real-time travel suggestions and itineraries.*

![Travel planner bot screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qee5ehwi75qyf3306cuj.jpg)

## Twilio and AI

To make the travel planner both functional and intelligent, I integrated Twilio’s capabilities with AI. Using Twilio’s Programmable Messaging API, the bot communicates with users via WhatsApp. Twilio Functions enable serverless execution of the backend processes, ensuring efficient and scalable performance. On the AI side, Gemini’s natural language processing (NLP) capabilities help in understanding user queries, providing relevant travel information, and making intelligent suggestions based on user preferences and behavior.

- **Impactful Innovators:** Providing a convenient and efficient solution for travel planning, significantly improving the user experience.
- **Entertaining Endeavors:** Offering a fun and engaging way for users to plan their travels through interactive chat.
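The post doesn't include source code, so here is a minimal sketch of the kind of Twilio Function it describes: a serverless handler that receives an incoming WhatsApp message and replies with a Gemini-generated answer. The handler signature and the `Twilio` global are standard in Twilio Functions; the model name, the prompt, and the `GEMINI_API_KEY` environment variable are my assumptions, not details from the post.

```javascript
// Hypothetical Twilio Function (Node.js 18+ runtime, where fetch is global).
// The incoming WhatsApp message text arrives in event.Body; the reply is
// returned as TwiML via the callback.
exports.handler = async function (context, event, callback) {
  const twiml = new Twilio.twiml.MessagingResponse();

  // Gemini's public generateContent REST endpoint; the API key is read from
  // the Function's environment variables (an assumed variable name).
  const url =
    'https://generativelanguage.googleapis.com/v1beta/models/' +
    `gemini-1.5-flash:generateContent?key=${context.GEMINI_API_KEY}`;

  try {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        contents: [
          { parts: [{ text: `You are a travel planner. ${event.Body}` }] },
        ],
      }),
    });
    const data = await res.json();
    const reply =
      data.candidates?.[0]?.content?.parts?.[0]?.text ??
      'Sorry, I could not plan that trip. Please try again.';
    // WhatsApp message bodies are capped at 1600 characters.
    twiml.message(reply.slice(0, 1600));
  } catch (err) {
    twiml.message('Something went wrong. Please try again later.');
  }

  return callback(null, twiml);
};
```

In the Twilio Console, you would point the WhatsApp sender's incoming-message webhook at this Function's URL; Twilio then invokes the handler once per message, which is what makes the serverless pairing convenient here.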
chintanonweb
1,897,758
aryan's SCSS COMPLETE GUIDE Part-2
note: I will not always show the CSS version of my code. note: It is a three-part series. (if links are...
0
2024-06-23T13:02:30
https://dev.to/aryan015/aryans-scss-complete-guide-part-2-1p1i
css, scss, javascript, html
`note`: I will not always show the CSS version of my code.
`note`: This is a three-part series. (If a link takes you to the same blog, the links are not updated yet. waiting...)

[one](https://dev.to/aryan015/scss-complete-guide-part-one-4d03) [three](https://dev.to/aryan015/3-finale-of-complete-sass-longer-2gpe)

## SCSS `@mixin`

A mixin is reusable `css` code in your SCSS. The general syntax:

```scss
@mixin mixin-name {
  property: value;
  property2: value;
}

selector {
  @include mixin-name;
}
```

```scss
@mixin important-text {
  color: red;
  font-size: 25px;
  font-weight: bold;
  border: 1px solid blue;
}

.danger {
  @include important-text;
  background-color: red;
}
```

A mixin can also include other mixins:

```scss
@mixin special-text {
  @include link;
  @include important;
}
```

### Passing variables to a mixin

```scss
@mixin bordered($color, $width) {
  border: $width solid $color;
}

.myArticle {
  @include bordered(blue, 1px); /* border: 1px solid blue */
}

.para {
  @include bordered(orange, 2px); /* border: 2px solid orange */
}
```

### Mixin default values

```scss
@mixin bordered($width, $color: blue) {
  border: $width solid $color;
}

.para {
  @include bordered(2px); /* specify only $width; $color falls back to blue */
}
```

[🔗mylinkedin](https://www.linkedin.com/in/aryan-khandelwal-779b5723a/)

## learning resources

[🧡Scaler - India's Leading software E-learning](www.scaler.com)
[🧡w3schools - for web developers](www.w3schools.com)
aryan015
1,897,755
Guest Post Outreach: A Simple Guide
What is Guest Post Outreach? Guest post outreach is when you reach out to other bloggers...
0
2024-06-23T12:57:49
https://dev.to/taiwo17/guest-post-outreach-a-simple-guide-37b8
seo, outreach, guestpost, digitalmarketing
## What is Guest Post Outreach?

[Guest post](https://www.upwork.com/services/product/marketing-technical-seo-audit-technical-on-page-seo-fix-seo-issues-1803811118137311009?ref=project_share) outreach is when you reach out to other bloggers and websites to propose your content for their sites. The goal is to add value to their audience, [build your authority](https://www.upwork.com/services/product/marketing-technical-seo-audit-technical-on-page-seo-fix-seo-issues-1803811118137311009?ref=project_share), and reach new readers.

According to a Moz survey, 18% of SEOs believe guest posting is the best link-building strategy because it helps get backlinks and grow your audience.

> [Increase the awareness and visibility of your site through SEO](https://www.upwork.com/services/product/marketing-technical-seo-audit-technical-on-page-seo-fix-seo-issues-1803811118137311009?ref=project_share)

Guest blogging services like Outreach Monks can help make this process faster by ensuring you get organic backlinks from sites in your niche that have real traffic.

### Why is Guest Post Outreach Important?

**Guest post outreach has several key benefits:**

1. **Building Relationships**: Interacting with other blogs and websites can help you become known as a reliable source of information, leading to partnerships, backlinks, and other opportunities.
2. **Raising Brand Awareness**: Posting on other websites introduces you to a new audience, helping you build your brand and attract more visitors to your blog. For example, a small marketing business can expand its reach and increase brand recognition by guest posting on popular marketing blogs and promoting the posts on social media.
3. **Improving [SEO](https://www.upwork.com/services/product/marketing-technical-seo-audit-technical-on-page-seo-fix-seo-issues-1803811118137311009?ref=project_share)**: Getting backlinks from high-quality websites can boost your blog’s domain authority and search engine rankings. For example, a fashion blog can improve its SEO and organic traffic by guest blogging on other fashion websites.
4. **[Increasing Traffic](https://www.upwork.com/services/product/marketing-technical-seo-audit-technical-on-page-seo-fix-seo-issues-1803811118137311009?ref=project_share)**: If your guest posts interest readers, they might visit your website to learn more about your business.

### How to Do Guest Post Outreach Successfully

**Follow these steps to succeed in guest post outreach:**

1. **Find Potential Sites**: Use tools like Buzzsumo, SEMrush, or Ahrefs to find reputable websites in your niche that accept guest posts.
2. **Research the Site**: Understand the target site’s audience and content to ensure your proposal is relevant. Knowing their social media presence and popular posts helps create a strong proposal.
3. **Write Your Pitch**: Draft a pitch that is friendly and professional. Mention your qualifications, explain why you’re a good fit for the site, and suggest a few potential topics. Highlight the value you can offer their audience.

> **[Do you want your business to reach a wider audience? Create a website now to connect with your audience](https://www.upwork.com/services/product/development-it-elementor-expert-i-elementor-developer-elementor-designer-wordpress-1797776899411774051?ref=project_share)**

**Dos and Don’ts of Guest Post Outreach**

**Dos:**

- Research the Website: Ensure it’s a good fit for your content.
- Personalize Your Outreach: Use the website owner’s name and mention specific articles you liked.
- Provide Value: Create original, high-quality content that appeals to their audience.
- Follow Up: If you don’t hear back, send a polite reminder.
- Be Professional: Maintain a courteous and formal tone in your communication.

**Don’ts:**

- Send Generic Emails: Avoid impersonal, bulk emails.
- Focus Only on Yourself: Emphasize how your post will benefit their audience.
- Ignore Instructions: Follow the site’s submission guidelines.
- Be Pushy: Respect the website owner’s time and decision process.
- Forget to Say Thank You: Send a sincere thank-you to the website owner for considering your guest post.

**In Summary**

Guest post outreach is an excellent way to boost your online presence and traffic. By following these steps and maintaining a professional, value-focused approach, you can grow your traffic and build positive relationships with website owners in your niche.

> **[Get a technical audit for your website today. Contact me via Upwork](https://www.upwork.com/services/product/marketing-technical-seo-audit-technical-on-page-seo-fix-seo-issues-1803811118137311009?ref=project_share)**
taiwo17
1,897,246
The basics of Istio mirroring
Istio is the leading service mesh tool for Kubernetes; it provides many...
0
2024-06-23T12:57:03
https://dev.to/wandpsilva/o-basico-de-mirror-do-istio-4k5
istio, kubernetes, devops, sre
Istio is the leading service mesh tool for Kubernetes. It provides many features for managing your service mesh; among them is traffic management, where one of the possibilities is creating a 'mirror'.

But what is a mirror? A mirror lets you 'mirror' the request traffic of one service to another.

And why is that useful? This feature fits several situations; I will highlight one:

Suppose you have an application that uses a MySQL database and you need to migrate it to PostgreSQL. Before making the migrated version live, you want to test its behavior to confirm that data is saved correctly and that the application throws no errors. To do that, you can mirror the requests arriving at the MySQL version of the application to the PostgreSQL version. Traffic keeps being sent to the MySQL version, and the PostgreSQL version receives a copy of that traffic. The response of the PostgreSQL application is never sent to the client, guaranteeing that the client only receives responses from the MySQL version that already works, shielding it from any errors in the new PostgreSQL version.

---

## Interesting, right? 🔥

Want to see how it works in practice? We will need the following tools installed on our machine:

- [Docker](https://www.docker.com/products/docker-desktop/)
- [Minikube](https://storage.googleapis.com/minikube/releases/latest/minikube-installer.exe)
- [Istioctl](https://github.com/istio/istio/releases)
- [Git bash](https://git-scm.com/downloads)

> ⚠️ The commands below that create the components start with the instruction _istioctl kube-inject -f_, which injects the Istio proxy container into the pod being created so that Istio can control the pod. If you prefer not to use this instruction, see [here](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/) for other ways to inject the Istio proxy.

## Hands-on 🤝🎓

**1** - To begin, let's bring up our minikube cluster. First start Docker by opening Docker Desktop (on Windows), then open a Git bash terminal and run the following command to start minikube:

```
minikube start
```

**2** - Install Istio

```
istioctl install --set profile=demo -y
```

**3** - Copy the code below and run it in Git bash to create the main deployment

```
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istioapp-v1
  labels:
    app: istioapp
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: istioapp
      version: v1
  template:
    metadata:
      labels:
        app: istioapp
        version: v1
    spec:
      containers:
      - name: istioapp
        image: wandpsilva/istioapp:v1.0
        ports:
        - containerPort: 8080
EOF
```

**4** - Now do the same with the code below for version 2, which will receive the mirrored traffic

```
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istioapp-v2
  labels:
    app: istioapp
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: istioapp
      version: v2
  template:
    metadata:
      labels:
        app: istioapp
        version: v2
    spec:
      containers:
      - name: istioapp
        image: wandpsilva/istioapp:v1.0
        ports:
        - containerPort: 8080
EOF
```

**5** - Let's create the Kubernetes service

```
kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: istioapp-service
spec:
  selector:
    app: istioapp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: NodePort
EOF
```

**6** - Now we create our virtualservice and destinationrule. These are Istio components used for traffic management; for now they have a basic configuration that simply sends the incoming request to the v1 pod

```
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istioapp-virtualservice
spec:
  hosts:
    - istioapp-service.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: istioapp-service.default.svc.cluster.local
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istioapp-destinationrule
spec:
  host: istioapp-service.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: istioapp
      version: v1
EOF
```

**7** - With all the components created, let's create a client deployment (the `sleep` pod below, running `curlimages/curl`) so we can call our application and test it

```
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep","3650d"]
        imagePullPolicy: IfNotPresent
EOF
```

**8** - With the client created, we can call the endpoint of the application we deployed on minikube and check its response through the pod logs. Call the application with the following command

```
export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec "${SLEEP_POD}" -c sleep -- curl -sS http://istioapp-service:8080/v1/istioapp/hello-word/ping
```

Check the logs of both pods

pod v1

```
export V1_POD=$(kubectl get pod -l app=istioapp,version=v1 -o jsonpath={.items..metadata.name})
kubectl logs "$V1_POD" -c istioapp
```

pod v2

```
export V2_POD=$(kubectl get pod -l app=istioapp,version=v2 -o jsonpath={.items..metadata.name})
kubectl logs "$V2_POD" -c istioapp
```

You should see the message 'PONG' in the v1 pod's log and nothing in the v2 pod's log.

**9** - Now let's make the mirror work, so that the call made in step 8 reaches both the v1 pod and the v2 pod. To do that, we reconfigure our virtualservice and destinationrule; apply the configuration below

```
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istioapp-virtualservice
spec:
  hosts:
    - istioapp-service.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: istioapp-service.default.svc.cluster.local
        subset: v1
      weight: 100
    mirror:
      host: istioapp-service.default.svc.cluster.local
      subset: v2
    mirrorPercentage:
      value: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istioapp-destinationrule
spec:
  host: istioapp-service.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: istioapp
      version: v1
  - name: v2
    labels:
      app: istioapp
      version: v2
EOF
```

**10** - Call the application again

```
export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec "${SLEEP_POD}" -c sleep -- curl -sS http://istioapp-service:8080/v1/istioapp/hello-word/ping
```

Check the pods' logs again

pod v1

```
export V1_POD=$(kubectl get pod -l app=istioapp,version=v1 -o jsonpath={.items..metadata.name})
kubectl logs "$V1_POD" -c istioapp
```

pod v2

```
export V2_POD=$(kubectl get pod -l app=istioapp,version=v2 -o jsonpath={.items..metadata.name})
kubectl logs "$V2_POD" -c istioapp
```

## What happened? 🤔

As you can see, both pods logged the message _PONG_, which means both received the request. The v2 pod, however, receives only a copy of the traffic; its response, whether success or failure, never goes back to the client, which gives us the safety to test any change in any environment.

## How did it happen? 😃

All the mirroring logic lives in the _virtualservice_ and _destinationrule_ components. Looking at the files above, note that the virtualservice, through the **_mirror_** attribute, replicates the traffic to another **_subset (v2)_**; this _**subset**_ is configured in the destinationrule, where labels tell Istio which pod the request should be sent to. You can also set the percentage of traffic to be mirrored with the **_mirrorPercentage_** attribute; if it is omitted, 100% of the traffic is mirrored.

---

This article was based on the [official Istio documentation](https://istio.io/latest/docs/tasks/traffic-management/mirroring/). I recommend browsing the documentation and exploring more and more of Istio 🚀

I hope you enjoyed it, see you next time. 😉
wandpsilva
1,876,113
Lists and tuples in Elixir
We will keep learning two of the most used collection data types in Elixir: lists and tuples. They...
0
2024-06-23T12:54:49
https://dev.to/rhaenyraliang/lists-and-tuples-in-elixir-2lme
webdev, beginners, tutorial, elixir
We will keep learning about two of the most used collection data types in Elixir: lists and tuples. Like lists, tuples can hold any value. Data structures in Elixir are _immutable_, so every operation returns a new _**list**_ or a new _**tuple**_.

## (Linked) List

Elixir uses square brackets to specify a list of values. List operators never modify the existing list. You can freely pass the data around with the guarantee that no one will mutate it in memory—only transform it. Linked lists hold zero, one, or more elements in the chosen order.

```elixir
iex> [1, 2, true, 3]
[1, 2, true, 3]
iex> [1, "two", 3, :four]
[1, "two", 3, :four]
iex> [1, 3, "two", :four]
[1, 3, "two", :four]
iex> length([1, 2, 3])
3
```

### List concatenation

Two lists can be concatenated or subtracted using the `++/2` and `--/2` operators.

- `left ++ right`
> Concatenates a proper list and a term, returning a list. If the `right` operand is not a proper list, it returns an improper list. If the `left` operand is not a proper list, it raises an `ArgumentError`.

- `left -- right`
> List subtraction operator. Removes the first occurrence of each element in the left list for each element in the right list.

```elixir
iex> [1] ++ [2, 3]
[1, 2, 3]
iex> [1, 2, 3] ++ [4, 5, 6]
[1, 2, 3, 4, 5, 6]
iex> [1, 2] ++ 3
[1, 2 | 3]
iex> [1, 2] ++ {3, 4}
[1, 2 | {3, 4}]
iex> [1, 2, 3] -- [1, 2]
[3]
iex> [1, 2, 3, 2, 1] -- [1, 2, 2]
[3, 1]
iex> [1, 2, 3] -- [2] -- [3]  # `--` is right-associative
[1, 3]
iex> [1, 2, 3] -- ([2] -- [3])
[1, 3]
iex> [1, true, 2, false, 3, true] -- [true, false]
[1, 2, 3, true]
```

### Prepending to a list

An element can be prepended to a list using `|`. Due to their cons cell-based representation, _prepending an element to a list is always fast (constant time)_, while appending becomes slower as the list grows in size (linear time):

```elixir
iex(1)> new = 0
iex(2)> list = [1, 2, 3]
iex(3)> [new | list]
[0, 1, 2, 3]

iex(1)> list = [1, 2, 3]
iex(2)> [0 | list]  # fast
[0, 1, 2, 3]
iex(3)> list ++ [4]  # slow
[1, 2, 3, 4]
```

### Head and tail of a list

Lists in Elixir are effectively linked lists, which means they are internally represented as pairs containing the head and the tail of a list.

```elixir
iex> [head | tail] = [1, 2, 3]
iex> head
1
iex> tail
[2, 3]
```

The head is the first element of a list and the tail is the remainder of the list. They can be retrieved with the functions `hd/1` and `tl/1`.

- `hd(list)`
> Returns the head of a list. Raises `ArgumentError` if the list is empty.

- `tl(list)`
> Returns the tail of a list, which is the list without its first element. Raises `ArgumentError` if the list is empty.

```elixir
iex> hd([1, 2, 3, 4])
1
iex> hd([1 | 2])
1
iex> hd([])
** (ArgumentError) argument error

iex> tl([1, 2, 3, :go])
[2, 3, :go]
iex> tl([:a, :b | :improper_end])
[:b | :improper_end]
iex> tl([:a | %{b: 1}])
%{b: 1}
iex> tl([:one])
[]
iex> tl([])
** (ArgumentError) argument error
```

## Tuples

Elixir uses curly brackets to define tuples. Tuples store elements contiguously in memory, which means accessing a tuple element by index or getting the tuple size is a fast operation. Indexes start from zero.

```elixir
iex> {}
{}
iex> {:ok, "hello"}
{:ok, "hello"}
iex> {1, :two, "three"}
{1, :two, "three"}
iex> tuple_size({:ok, "hello"})
2
iex(1)> tuple = {:ok, 1, "hello"}
{:ok, 1, "hello"}
iex(2)> tuple_size(tuple)
3
```

### Functions for working with tuples

Access and update tuple elements with `elem/2`, `put_elem/3`, and `tuple_size/1`.

- `elem(tuple, index)`
> Gets the element at the zero-based `index` in `tuple`. Raises `ArgumentError` when the index is negative or out of range of the tuple elements.

- `put_elem(tuple, index, value)`
> Puts `value` at the given zero-based `index` in `tuple`.

- `tuple_size(tuple)`
> Returns the size of a tuple, which is the number of elements in the tuple.

```elixir
iex(1)> tuple = {:ok, 1, "hello"}
{:ok, 1, "hello"}
iex(2)> elem(tuple, 1)
1
iex> elem({:ok, "hello", 1}, 1)
"hello"
iex> elem({}, 0)
** (ArgumentError) argument error
iex> elem({:foo, :bar}, 2)
** (ArgumentError) argument error

iex> put_elem({:foo, :ok, 3}, 2, 4)
{:foo, :ok, 4}
iex(1)> tuple = {:foo, :bar, 3}
iex(2)> put_elem(tuple, 0, :baz)
{:baz, :bar, 3}

iex(1)> tuple = {:ok, 1, "hello"}
{:ok, 1, "hello"}
iex(2)> tuple_size(tuple)
3
```

### File.read/1

The File module provides an interface to the file system. The `read(path)` _**function returns a tuple with the atom**_ `:ok` as the first element and the file contents as the second. Elixir allows us to _pattern match_ on tagged tuples and effortlessly handle both success and failure cases.

```elixir
iex> File.read("path/to/existing/file")
{:ok, "... contents ..."}
iex> File.read("path/to/unknown/file")
{:error, :enoent}
```

## What is the difference between lists and tuples?

![Lists and Tuples?](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtuf1cen0tfo1pvp64cr.png)

```elixir
iex> list = [1, 2, 3]
[1, 2, 3]
# This is fast: we only need to traverse `[0]` to prepend to `list`
iex> [0] ++ list
[0, 1, 2, 3]
# This is slow: we need to traverse `list` to append 4
iex> list ++ [4]
[1, 2, 3, 4]
```

```elixir
# When you update a tuple, all entries are shared between the old and the
# new tuple, except for the entry that has been replaced
iex> tuple = {:a, :b, :c, :d}
{:a, :b, :c, :d}
iex> put_elem(tuple, 2, :e)
{:a, :b, :e, :d}
```

## Final Thoughts

This wraps up today's introduction to collection data types in _Elixir: lists and tuples_. If you haven't read the previous article, [Basic types in Elixir](https://dev.to/rhaenyraliang/basic-types-in-elixir-2f3l), be sure to check it out. Our next article will delve into _pattern matching_. Make sure not to miss it.

I hope this article helps you. See you next time, bye!
rhaenyraliang
1,897,687
Get started with Fullstack Development: React + SpringBoot + MySQL + Postman
Prerequisite: Basic knowledge of Javascript and React library Basic knowledge of Java,...
0
2024-06-23T12:54:21
https://dev.to/kajal_mapare24/get-started-with-fullstack-development-react-springboot-mysql-postman-5997
fullstack, react, springboot, beginners
## Prerequisites:

- Basic knowledge of JavaScript and the React library
- Basic knowledge of Java and Spring Boot
- Basic knowledge of MySQL

## Architecture:

![Architecture: React frontend, Spring Boot backend, MySQL database, Postman](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/85qr9fp4vuy8jtqi94up.png)

## Tools and installation links:

- [Java openjdk 17](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html)
- [MySQL](https://dev.mysql.com/downloads/mysql/) and [MySQL Workbench](https://dev.mysql.com/downloads/workbench/)
- [Postman](https://www.postman.com/downloads/)
- [NodeJs](https://nodejs.org/en/download/package-manager/current), [yarn](https://tecadmin.net/install-yarn-macos/)
- IDE: [Intellij](https://www.jetbrains.com/idea/download/?section=mac) (for backend), [VSCode](https://code.visualstudio.com/download) (for frontend)

## Step 1: Spring Initializr

Spring Initializr is a web-based tool provided by the Spring framework that simplifies the process of creating a new Spring Boot project. It allows developers to quickly bootstrap a new Spring Boot application with the necessary dependencies and configuration. To set up the Spring Boot project, go to https://start.spring.io/

![Spring Initializr project settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pkd58hun77b5ap12l43.png)

Now open the extracted zip file in Intellij

![Project opened in IntelliJ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ta99ma0swxe9sr1ym5a7.png)

## Step 2: Create packages in src/main/java - model, repository, controller, exception

![Package structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smm8kekossij9zv8mqxf.png)

- The `model` represents the data of your application. In a Spring Boot application, models are typically Java classes annotated with JPA annotations that map the class properties to database columns.
- `Repositories` are responsible for interacting with the database. In Spring Boot, repositories are interfaces that extend JpaRepository or CrudRepository. Spring Data JPA provides the implementation at runtime.
- `Controllers` handle HTTP requests and responses. They are responsible for processing user inputs, invoking business logic, and returning the results. Controllers are typically annotated with @RestController.
- The `exception` package handles exceptions globally and provides custom error responses.

## Step 3: Creating the model class and connecting with MySQL

`model/User`:

```java
package com.demo.fullstack_backend.model;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

@Entity
public class User {
    @Id
    @GeneratedValue
    private long id;
    private String username;
    private String name;
    private String email;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    // other getters and setters for username, name, and email
}
```

<u>Annotations used:</u>

@Entity: helps us map our domain objects (POJOs) to relational database tables

@Id: designates the member that uniquely identifies the entity in the database

@GeneratedValue: tells JPA to generate the primary key value automatically (tip: in IntelliJ, right-click -> Generate to create the getters and setters)

`repository/UserRepository`:

```java
package com.demo.fullstack_backend.repository;

import com.demo.fullstack_backend.model.User;
import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {
    // Long id will be used as primary key
}
```

The `JpaRepository<T, ID>` interface contains the APIs for basic CRUD operations, pagination, and sorting.

## Step 4: Set MySQL configurations

![application.properties with the MySQL datasource settings](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aksje9lkjgu7sii4288y.png)

**MySQL installation:**

- Download [MySQL server](https://dev.mysql.com/downloads/mysql/)
- Install the server

![MySQL installer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p80v605nvk66zux08duo.png)

- Add the bin path to your bash profile. Open the terminal:
  1. If the bash profile file is already present: `open ~/.bash_profile`; otherwise create and open the file
  2. Add the path: `export PATH=${PATH}:/usr/local/mysql-8.4.0-macos14-x86_64/bin`
  3. `source ~/.bash_profile`
  4. `mysql -u root -p`
  5. Check whether the installation succeeded: `show databases;`

## Step 5: MySQL Workbench

```sql
create database fullstack;
show databases;
use fullstack;
show tables;
desc user;
```

![MySQL Workbench](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8hj4ffqxqs9zjpu6zbg.png)

## Step 6: @PostMapping for sending data to the database and @GetMapping for reading it back

`controller/UserController`:

```java
package com.demo.fullstack_backend.controller;

import com.demo.fullstack_backend.model.User;
import com.demo.fullstack_backend.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@CrossOrigin("http://localhost:3000")
@RestController
public class UserController {

    @Autowired
    private UserRepository userRepository;

    @PostMapping("/user")
    User newUser(@RequestBody User newUser) {
        return userRepository.save(newUser);
    }

    @GetMapping("/users")
    List<User> getAllUsers() {
        return userRepository.findAll();
    }
}
```

![Postman POST request](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi79v8cc3y9kl6h0g21r.png)

![Postman GET request](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iy701lcsprv78ke6qwmn.png)

@RestController: used for developing RESTful web services

@Autowired: allows Spring to automatically inject dependencies into the class, eliminating the need for manual configuration

@CrossOrigin: provides a way to overcome the same-origin policy applied by web browsers

## Step 7: Develop the frontend using React and connect it to Spring Boot

- Install [Node](https://nodejs.org/en)
- Install [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable)
- In a terminal run: `yarn global add create-react-app`
- Create and open a project folder in VSCode (or any IDE) and run: `create-react-app app-name`
- `yarn add axios` (adds the axios library to the web application; axios is a popular JavaScript library for making HTTP requests from a web browser or Node.js)
- To use ready-made Material components:
  - `yarn add @mui/material @emotion/react @emotion/styled`
  - `yarn add @mui/icons-material`
- Add the [React Developer Tools Chrome extension](https://chromewebstore.google.com/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi) to inspect the React component hierarchies
- Run the application:

**Backend:**

![Running the Spring Boot application in IntelliJ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqxpkefph8912hgwevlc.png)

**Frontend:** `yarn start`

_**Good to goooo!!!**_

![App running in the browser](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nz8lp2k7867ecary1c5c.png)

After submitting new user data:

![User list after submitting data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9dkn5ghlsv4w70fx5tg.png)

**React files:** (Just 4 files ;))

`App.js`

```javascript
import React, { useState, useEffect } from "react";
import axios from "axios";
import UserForm from "./Form/UserForm";
import UserList from "./UserList/UserList";
import "./App.css";

const App = () => {
  const [users, setUsers] = useState([]);

  useEffect(() => {
    loadUsers();
  }, []);

  const loadUsers = async () => {
    const result = await axios.get("http://localhost:8080/users");
    setUsers(result.data);
  };

  const addUser = (user) => {
    setUsers([...users, user]);
  };

  return (
    <div className="App">
      <h1>Fullstack Frontend</h1>
      <UserForm addUser={addUser} />
      <UserList users={users} />
    </div>
  );
};

export default App;
```

`App.css`

```css
.App {
  font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
  background-color: #f4f4f9;
  padding: 20px;
  max-width: 800px;
  margin: 0 auto;
  border-radius: 8px;
  box-shadow: 0 0 20px rgba(0, 0, 0, 0.1);
}

.container {
  margin-top: 20px;
}

h1 {
  text-align: center;
  color: #333;
  margin-bottom: 20px;
}

form {
  background-color: #fff;
  padding: 20px;
  border-radius: 8px;
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
  margin-bottom: 20px;
}

form div {
  margin-bottom: 15px;
}

label {
  display: block;
  margin-bottom: 5px;
  color: #333;
  font-weight: bold;
}

input[type="text"],
input[type="email"] {
  width: 100%;
  padding: 10px;
  border: 1px solid #ccc;
  border-radius: 4px;
  box-sizing: border-box;
}

button {
  width: 100%;
  background-color: #007bff;
  color: #fff;
  border: none;
  padding: 10px;
  border-radius: 4px;
  font-size: 16px;
  cursor: pointer;
  transition: background-color 0.3s ease;
}

button:hover {
  background-color: #0056b3;
}

ul {
  list-style-type: none;
  padding: 0;
}

li {
  background-color: #fff;
  margin-bottom: 10px;
  padding: 15px;
  border-radius: 8px;
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}

li strong {
  display: block;
  color: #007bff;
}

li + li {
  margin-top: 10px;
}

li div {
  margin-bottom: 5px;
}
```

`UserForm.js`

```javascript
import React, { useState } from "react";
import axios from "axios";

const UserForm = ({ addUser }) => {
  const [user, setUser] = useState({ name: "", email: "" });

  const handleChange = (e) => {
    const { name, value } = e.target;
    setUser({ ...user, [name]: value });
  };

  const handleSubmit = async (e) => {
    e.preventDefault();
    if (user.name && user.email) {
      addUser(user);
      setUser({ name: "", email: "" });
      await axios.post("http://localhost:8080/user", user);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <div>
        <label>Name:</label>
        <input
          type="text"
          name="name"
          value={user.name}
          onChange={handleChange}
          placeholder="Enter your name"
        />
      </div>
      <div>
        <label>Email:</label>
        <input
          type="email"
          name="email"
          value={user.email}
          onChange={handleChange}
          placeholder="Enter your email"
        />
      </div>
      <button type="submit" disabled={!(user.name && user.email)}>
        Add User
      </button>
    </form>
  );
};

export default UserForm;
```

`UserList.js`

```javascript
import React from "react";
import {
  Table,
  TableBody,
  TableCell,
  TableContainer,
  TableHead,
  TableRow,
  Paper,
} from "@mui/material";

const UserList = ({ users }) => {
  return (
    <TableContainer component={Paper} sx={{ mt: 4 }}>
      <Table sx={{ minWidth: 650 }} aria-label="simple table">
        <TableHead>
          <TableRow>
            <TableCell>ID</TableCell>
            <TableCell>Name</TableCell>
            <TableCell>Email</TableCell>
          </TableRow>
        </TableHead>
        <TableBody>
          {users.map((user, index) => (
            <TableRow key={index}>
              <TableCell>{index + 1}</TableCell>
              <TableCell>{user.name}</TableCell>
              <TableCell>{user.email}</TableCell>
            </TableRow>
          ))}
        </TableBody>
      </Table>
    </TableContainer>
  );
};

export default UserList;
```

## Basic React concepts

React is a popular JavaScript library for building user interfaces, particularly single-page applications where you need a fast and interactive user experience. Here are some basic and important concepts in React:

- `Components`: Building blocks of a React application. They are self-contained and reusable pieces of UI. Types: functional components and class components. Functional components are widely used nowadays.
- `JSX`: JSX stands for JavaScript XML. It allows you to write HTML in React.
- `Props`: Props (short for properties) are read-only attributes used to pass data from parent components to child components.
- `State`: State is an object that represents the dynamic parts of a component and can change over time. State is managed within the component and can be updated using the setState method (class components) or the useState hook (functional components).

![React state diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e8w5lx1w11lenbo8aj6z.jpeg)

- `Lifecycle Methods`: Lifecycle methods are special methods in class components that allow you to hook into different phases of a component's life (mounting, updating, and unmounting). Examples: componentDidMount, componentDidUpdate, componentWillUnmount, etc.
- `Hooks`: Hooks are functions that let you use state and other React features in functional components. Examples: useState, useEffect, useContext, etc.

![React hooks overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bmm3ytm6cdhgzpogdf2f.png)

- `Context`: Context provides a way to pass data through the component tree without having to pass props down manually at every level. Useful for global state like themes or user data.
- `Handling Events`: Event handling in React involves defining event handlers directly in JSX, using camelCase syntax for event names (e.g., onClick), and passing functions that specify what should happen when an event occurs, ensuring a seamless interaction between user actions and the component's state or behavior. Handling events in React is similar to handling events on DOM elements.
- `Rendering`: Rendering refers to the process of displaying UI elements on the screen. In React, rendering typically involves the render method, which returns a description of what you want to see on the screen in the form of a React element.
- `Conditional Rendering`: The process of displaying elements and components based on certain conditions. Use JavaScript operators like if, &&, and ? : to create elements representing the current state. (A small combined example appears at the end of this post.)
- `Lists`: In React, working with lists often involves creating a dynamic collection of elements based on data arrays. Using JavaScript methods like map, reduce, and filter allows you to manipulate and render these lists efficiently.
- `Keys`: Keys help React optimize the rendering of lists by identifying each element uniquely.
- `Real DOM`: The Real DOM (Document Object Model) is a programming interface provided by web browsers that allows scripts to dynamically access and update the content, structure, and style of a document. It represents the entire structure of a web page as a tree of objects, where each object represents part of the document.
- `Virtual DOM`: The Virtual DOM is a lightweight representation of the real DOM. React uses it to optimize rendering by making updates in a more efficient way. When state or props change, React creates a new virtual DOM tree, compares it with the previous one (a process called "reconciliation"), and only updates the changed parts in the real DOM.
- `Fragment`: Fragment allows you to group multiple elements without adding extra nodes to the DOM.

```javascript
function FragmentExample() {
  return (
    <>
      <h1>Title</h1>
      <p>Description</p>
    </>
  );
}
```

## Bonus Info (Error and resolution):

1. If the MySQL server is automatically getting turned on/off: go to System Preferences/Settings -> MySQL. In IntelliJ, regardless of version, **invalidate the caches**: a. Click File 🡒 Invalidate Caches / Restart. b. Click Invalidate and Restart.
2. Error: Can't connect to local MySQL server through socket '**/tmp/mysql.sock**' (2): issue resolved by https://www.landfx.com/kb/installation-help/mysql/3437-mysql-stop-mac

**Thanks for reading!**
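As promised in the React recap above, here is a small hypothetical component (not from the article) that combines conditional rendering, lists, and keys in one place:

```javascript
import React from "react";

// Hypothetical component: early-return and ?: for conditional rendering,
// map for list rendering, and a stable key for reconciliation.
const TodoList = ({ todos, loading }) => {
  if (loading) return <p>Loading...</p>;

  return (
    <ul>
      {todos.length === 0 ? (
        <li>No todos yet</li>
      ) : (
        todos.map((todo) => (
          // A stable key (the id) lets React reconcile list updates efficiently
          <li key={todo.id}>{todo.text}</li>
        ))
      )}
    </ul>
  );
};

export default TodoList;
```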
kajal_mapare24
1,893,486
When You Need More Power Than a Lambda Provides
Navigating toward a cloud-native architecture can be both exciting and challenging. The expectation...
0
2024-06-23T12:49:59
https://dev.to/johnjvester/when-you-need-more-power-than-a-lambda-provides-3061
cloud, serverless, heroku, webdev
![Article Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95mxux2y3mhoa2sw4eb3.jpg)

Navigating toward a cloud-native architecture can be both exciting and challenging. The expectation of learning valuable lessons should always be top of mind as design becomes reality.

In this article, I wanted to focus on an example where my project seemed like a perfect serverless use case, one where I’d leverage AWS Lambda. Spoiler alert: it was not.

## **Rendering Fabric.js Data**

In a publishing project, we utilized [Fabric.js](http://fabricjs.com/)—a JavaScript HTML5 canvas library—to manage complex metadata and content layers. These complexities included spreads, pages, and templates, each embedded with fonts, text attributes, shapes, and images. As the content evolved, teams were tasked with updates, necessitating the creation of a publisher-quality PDF after each update.

We built a Node.js service to run Fabric.js, generating PDFs and storing resources in AWS S3 buckets with private cloud access. During a typical usage period, over 10,000 teams were using the service, with each individual contributor sending multiple requests to the service as a result of manual page saves or auto-saves driven by the [Angular](https://angular.io/) client.

The service was set up to run as a Lambda in AWS. The idea of paying at the request level seemed ideal.

## **Where Serverless Fell Short**

We quickly realized that our Lambda approach wasn’t going to cut it. The spin-up time turned out to be the first issue. Not only was there the time required to start the Node.js service, but preloading nearly 100 different fonts that could be used by those 10,000 teams caused delays too.

We were also concerned about Lambda’s 250 MB limit on unzipped deployment packages. The initial release of the code was already over 150 MB in size, and we still had a large backlog of feature requests that would only drive this number higher.

Finally, the complexity of the pages—especially as more elements were added—demanded increased CPU and memory to ensure quick PDF generation. After observing the usage for first-generation page designs completed by the teams, we forecasted the need for nearly 12 GB of RAM. Currently, AWS Lambdas are limited to 10 GB of RAM.

Ultimately, we opted for dedicated EC2 compute resources to handle the heavy lifting. Unfortunately, this decision significantly increased our DevOps management workload.

## **Looking For a Better Solution**

Although I am no longer involved with that project, I’ve always wondered if there was a better solution for this use case. While I appreciate AWS, Google, and Microsoft providing enterprise-scale options for cloud-native adoption, what kills me is the associated learning curve for every service.

The company behind the project was a smaller technology team. Oftentimes teams in that position struggle with adoption when it comes to using the big-three cloud providers. The biggest challenges I continue to see in this regard are:

* A heavy investment in DevOps or CloudOps to become cloud-native.
* Gaining a full understanding of what appears to be endless options.
* Tech debt related to cost analysis and optimization.

Since I have been working with the Heroku platform, I decided to see if they had an option for my use case. Turns out, they introduced [large dynos](https://blog.heroku.com/heroku-larger-dyno-types) earlier this year. For example, with their Performance-L RAM dyno, my underlying service would get 50x the compute power of a standard dyno and 30 GB of RAM. The capability to [write to AWS S3](https://devcenter.heroku.com/articles/s3) has been available from Heroku for a long time too.
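The article doesn't include the service's source, but a rough sketch helps frame what the dyno needs to run. This is a hypothetical skeleton only, using Express plus the AWS SDK v3, with `renderPdf` standing in for the actual Fabric.js rendering step; every route, bucket, and helper name here is invented for illustration:

```javascript
// Hypothetical sketch of a PDF-rendering service like the one described
// above. Assumes Express and @aws-sdk/client-s3; renderPdf is a stand-in
// for the Fabric.js work, not the project's real code.
const express = require('express');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const app = express();
const s3 = new S3Client({ region: process.env.AWS_REGION });

// Stand-in: turn a Fabric.js canvas JSON document into a PDF buffer.
// In the real service, this is where the ~100 fonts are preloaded and the
// spread/page/template layers are rendered.
async function renderPdf(canvasJson) {
  // Placeholder bytes so the sketch runs end to end; not a real render.
  return Buffer.from('%PDF-1.4\n%%EOF');
}

app.post('/render', express.json({ limit: '50mb' }), async (req, res) => {
  try {
    const pdf = await renderPdf(req.body);
    const key = `renders/${Date.now()}.pdf`;
    // Store the generated PDF in a private S3 bucket, as the article does.
    await s3.send(new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: key,
      Body: pdf,
      ContentType: 'application/pdf',
    }));
    res.json({ key });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// Heroku supplies PORT; fall back to 3000 locally.
app.listen(process.env.PORT || 3000);
```

The point for what follows is that this is one ordinary Node.js process: moving it from Lambda to a large dyno changes where it runs and how much CPU and RAM it gets, not how it is written.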
## **V2 Design In Action**

Using the Performance-L RAM dyno in Heroku would be no different (at least operationally) than using any other dyno in Heroku. To run my code, I just needed the following items:

* A Heroku account
* The [Heroku command-line interface (CLI)](https://devcenter.heroku.com/articles/heroku-cli) installed locally

After navigating to the source code folder, I would issue a series of commands to log in to Heroku, create my app, set up my AWS-related environment variables, deploy, and then scale out to five instances of the service on Performance-L-RAM dynos:

```bash
heroku login
heroku apps:create example-service
heroku config:set AWS_ACCESS_KEY_ID=MY-ACCESS-ID AWS_SECRET_ACCESS_KEY=MY-ACCESS-KEY
heroku config:set S3_BUCKET_NAME=example-service-assets
git push heroku main
heroku ps:scale web=5:Performance-L-RAM
```

Once deployed, my `example-service` application can be called via standard RESTful API calls. As needed, the auto-scaling technology in Heroku could launch up to five instances of the Performance-L dyno to meet consumer demand. I would have gotten all of this without having to spend a lot of time understanding a complicated cloud infrastructure or worrying about cost analysis and optimization.

## **Projected Gains**

As I thought more about the CPU and memory demands of our publishing project—during standard usage seasons and peak usage seasons—I saw how these performance dynos would have been exactly what we needed.

* Instead of crippling our CPU and memory when the requested payload included several Fabric.js layers, we would have had enough horsepower to generate the expected image, often before the user navigated to the page containing the preview images.
* We wouldn’t have had size constraints on our application source code, which we would inevitably have hit in AWS Lambda limitations within the next 3 to 4 sprints.
* The time required for our DevOps team to learn Lambdas first and then switch to EC2 hit our project’s budget pretty noticeably. And even then, those services weren't cheap, especially when spinning up several instances to keep up with demand. But with Heroku, the DevOps investment would be considerably reduced, placed into the hands of software engineers working on the use case.

Just like any other dyno, it’s easy to use and scale the performance dynos either with the CLI or the Heroku dashboard.

## **Conclusion**

My readers may recall my personal mission statement, which I feel can apply to any IT professional:

> **“Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.”**
>
> **\- J. Vester**

In this example, I had a use case that required a large amount of CPU and memory to process complicated requests made by over 10,000 consumer teams. I walked through what it would have looked like to fulfill this use case using Heroku's large dynos, and all I needed was a few CLI commands to get up and running.

Burning out your engineering and DevOps teams is not your only option. There are alternatives available to relieve the strain. By taking the Heroku approach, you avoid the steep learning curve that often comes with cloud adoption from the big three. Even better, the tech debt associated with cost analysis and optimization never sees the light of day.

In this case, Heroku adheres to my personal mission statement, allowing teams to focus on what is likely a mountain of feature requests to help product owners meet their objectives.

Have a really great day!
johnjvester
1,897,751
Node Exporter Service File
[Unit] Description= Node Exporter Documentation=...
0
2024-06-23T12:42:04
https://dev.to/tj_27/node-exporter-service-file-2oci
nodeexporter, prometheus
```
[Unit]
Description=Node Exporter
Documentation=https://prometheus.io/
Wants=network.target
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
# Point ExecStart at the actual location of the node_exporter binary,
# e.g. /usr/local/bin/node_exporter ($PATH here is a placeholder, not a
# variable systemd will expand usefully).
ExecStart=$PATH/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
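A brief usage note (not part of the original snippet): a unit file like this is typically saved as `/etc/systemd/system/node_exporter.service`, after which `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now node_exporter` registers and starts the service. This assumes the `node_exporter` user and group referenced above already exist on the host.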
tj_27
1,897,750
Day 2/40 CKA Dockerize an Application
Dockerizing Your Project: A Step-by-Step Guide Docker is an essential tool for modern software...
0
2024-06-23T12:36:01
https://dev.to/emmanuel_oghre_abe292c74f/day-240-cka-dockerize-an-application-k3p
40daysofkubernetes, cloudopscommunity, piyushsachdeva
**Dockerizing Your Project: A Step-by-Step Guide**

Docker is an essential tool for modern software development, providing a consistent and isolated environment for your applications. In this guide, we will walk through the process of dockerizing a sample Node.js application. By the end, you will have a Docker image that you can run anywhere, ensuring your application behaves the same in all environments.

**Prerequisites**

- Docker Desktop: Download and install the Docker Desktop client from Docker's official website.
- (Optional) `docker init`: by providing a starting point, `docker init` helps developers get up and running with Docker more quickly, reducing the initial setup time and effort.
- Sample application: We will use a sample application for this demo. You can clone it from GitHub or use your own project.

**Step 1: Clone the Sample Repository**

First, clone the sample repository:

```
git clone https://github.com/docker/getting-started-app.git
cd getting-started-app/
```

**Step 2: Create a Dockerfile**

A Dockerfile is a text file that contains instructions on how to build a Docker image. Create an empty Dockerfile in your project directory:

```
touch Dockerfile
```

Using your preferred text editor, open the Dockerfile and add the following content:

**Dockerfile**

```
# Use the official Node.js 18 image from Docker Hub
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy all files from the current directory to the working directory in the container
COPY . .

# Install dependencies using yarn
RUN yarn install --production

# Document that the application listens on port 3000
EXPOSE 3000

# Specify the command to run the application
CMD ["node", "src/index.js"]
```

**Step 3: Build the Docker Image**

With the Dockerfile in place, you can build the Docker image. Run the following command in your terminal:

```
docker build -t day02-todo .
```

This command tells Docker to build an image named day02-todo using the instructions in the Dockerfile.

**Step 4: Verify the Image**

After the build process completes, verify that the image has been created and stored locally:

```
docker images
```

You should see day02-todo listed among the available images.

**Step 5: Push the Image to Docker Hub**

To share your Docker image with others or deploy it to another environment, push it to Docker Hub. First, log in to Docker Hub:

```
docker login
```

Tag the image with your Docker Hub username and repository name:

```
docker tag day02-todo:latest username/new-reponame:tagname
```

Push the image to Docker Hub:

```
docker push username/new-reponame:tagname
```

**Step 6: Pull the Image to Another Environment**

To use the image in another environment, pull it from Docker Hub:

```
docker pull username/new-reponame:tagname
```

**Step 7: Run the Docker Container**

Start a container from your Docker image:

```
docker run -dp 3000:3000 username/new-reponame:tagname
```

This command maps port 3000 of your host machine to port 3000 of the container, allowing you to access the application locally.

**Step 8: Verify the Application**

If everything was set up correctly, your application should be running and accessible at http://localhost:3000.

**Step 9: Access the Container**

If you need to enter the container for debugging or other purposes, use the following command:

```
docker exec -it containername sh
```

Replace containername with the actual name or ID of your running container.

**Conclusion**

Congratulations! You have successfully dockerized a Node.js application. This Docker image can now be run anywhere, ensuring a consistent environment across development, testing, and production.

For further exploration, consider using sandbox environments like [Play with Docker](https://labs.play-with-docker.com/) or [Play with Kubernetes](https://labs.play-with-k8s.com/).

Kindly refer to the YouTube playlist [CKA Full Course 2024](https://youtu.be/nfRsPiRGx74) by Piyush Sachdeva for more hands-on practice.

GitHub link: https://github.com/Emmy-code-dev/CKA-2024

**Happy Dockerizing!**
emmanuel_oghre_abe292c74f
1,897,749
Dive into the Depths of Database Design with This Gem! 🔍
Database Design - 2nd Edition by Adrienne Watt and Nelson Eng @ BCcampus Open Pressbooks is a comprehensive guide to database design and management, covering topics such as data modeling, database architecture, and best practices for database implementation.
27,801
2024-06-23T12:32:38
https://getvm.io/tutorials/database-design-2nd-edition
getvm, programming, freetutorial, technicaltutorials
Hey there, fellow data enthusiasts! 👋 If you're looking to level up your database design skills, I've got the perfect resource for you: "Database Design - 2nd Edition" by Adrienne Watt and Nelson Eng, published by BCcampus Open Pressbooks.

This comprehensive guide is a true treasure trove of knowledge, covering everything you need to know about database design and management. 💎 From the history of databases to the intricacies of data modeling and database architecture, this textbook has got you covered.

## What's Inside? 🔍

The authors have done an incredible job of breaking down complex topics into easy-to-understand concepts. You'll learn about the different data models, the database development process, and best practices for database implementation. 🧠 And the best part? It's all presented in a way that's engaging and easy to follow.

## Highlights 🌟

- Dive into the fascinating history of databases and explore the characteristics of modern database systems
- Gain a deep understanding of data models, data modeling, and database management systems
- Discover the importance of integrity rules, constraints, functional dependencies, and normalization
- Follow the step-by-step guide to the database development process, from design to implementation

## Why You Need This Resource 🤩

Whether you're a student, a professional, or simply someone who works with databases in daily life, this textbook is an absolute must-have. 👨‍💻 It's a comprehensive and user-friendly resource that will help you navigate the world of database design with confidence.

So, what are you waiting for? 🚀 Head over to [https://opentextbc.ca/dbdesign01/](https://opentextbc.ca/dbdesign01/) and get your hands on this amazing resource. Trust me, your future self will thank you!

## Supercharge Your Learning with GetVM Playground 🚀

Eager to put your newfound database design knowledge into practice? Look no further than the GetVM Playground! 💻 This online coding environment integrates with the "Database Design - 2nd Edition" resource, allowing you to dive right in and start experimenting.

With the GetVM Playground, you can easily access the [tutorial](https://getvm.io/tutorials/database-design-2nd-edition) and put your skills to the test. No more tedious setup or configuration: the Playground provides a clean, distraction-free workspace where you can focus on learning and creating. 🧠

The beauty of the GetVM Playground lies in its simplicity and flexibility. You can quickly spin up virtual machines, experiment with different database management systems, and test your understanding of the concepts covered in the textbook. 🛠️ Plus, with real-time feedback and the ability to share your projects, the Playground makes collaborative learning a breeze.

So, why wait? Install the GetVM Chrome extension and unlock the full potential of the "Database Design - 2nd Edition" resource. With the Playground at your fingertips, you'll be well on your way to mastering database design and management. 🎉 Let's dive in and start coding!

---

## Practice Now!

- 🔗 Visit [Database Design - 2nd Edition](https://opentextbc.ca/dbdesign01/) original website
- 🚀 Practice [Database Design - 2nd Edition](https://getvm.io/tutorials/database-design-2nd-edition) on GetVM
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)

Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio)! 😄
getvm
1,897,748
Fitness trainer
This is a part-time remote role for a Fitness Coach at Vmc The Mission Nutrition. The Fitness Coach...
0
2024-06-23T12:28:52
https://dev.to/work_fromhome_13cfdc9e32/fitness-trainer-52o1
This is a part-time remote role for a Fitness Coach at Vmc The Mission Nutrition. The Fitness Coach will be responsible for providing personalized fitness training and guidance to clients. They will create customized workout plans, monitor progress, and provide ongoing support and motivation. The Fitness Coach will also educate clients on proper nutrition and wellness practices to help them achieve their fitness goals.
work_fromhome_13cfdc9e32
1,423,985
How to Show or Hide Zero Values in Excel?
Excel is a powerful tool that is widely used to organize, analyze, and display data. One important...
0
2023-04-04T05:22:05
https://aawexcel.com/how-to-show-or-hide-zero-values-in-excel/
kutools
---
title: How to Show or Hide Zero Values in Excel?
published: true
date: 2023-04-03 04:36:45 UTC
tags: Kutools
canonical_url: https://aawexcel.com/how-to-show-or-hide-zero-values-in-excel/
---

[Excel](https://www.microsoft.com/en-us/microsoft-365/excel) is a powerful tool that is widely used to **organize, analyze,** and **display data**. One important aspect of data presentation in Excel is the display of zero values. Zero values can be useful to **show the absence of data** or **to display calculations**, but they can also clutter the appearance of a worksheet and make it harder to read.

In this tutorial, we will show you how to show or hide zero values in Excel. We will cover three methods: **using the Excel Options settings**, **using a conditional formatting rule**, and **using Kutools**. These methods will allow you to **customize** your worksheet to display zero values only when necessary, making it easier to read and understand your data.

## Show/Hide Zero Values in the Spreadsheet

There are two built-in methods to **show or hide zero values** in the spreadsheet:

### Using the Excel Options settings

Step 1: Open your Excel spreadsheet.

![Data range](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-1.webp)
_Data range_

Step 2: Click on the “**File**” tab in the top-left corner of the Excel window.

Step 3: Click on “**Options**” in the left-hand menu.

![File > Options](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-2-1024x871.webp)
_File > Options_

Step 4: In the Excel Options dialog box, select the “**Advanced**” tab.

Step 5: Scroll down to the “**Display options for this worksheet**” section and uncheck the “**Show a zero in cells that have a zero value**” box to hide zero values, or check the box to show them.

Step 6: Click on “**OK**” to save the changes.

![Uncheck the Zero values](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-3-1-1024x789.webp)
_Uncheck the Zero values_

### Using Conditional Formatting

Step 1: Select the cells or range of cells where you want to show or hide zero values.

![Selecting the Data range](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-4.webp)
_Selecting the Data range_

Step 2: Click on the “**Home**” tab in the Excel ribbon.

Step 3: Click on “**Conditional Formatting**” in the “**Styles**” group.

Step 4: Click on “**New Rule**” in the Conditional Formatting dialog box.

![Home > Conditional Formatting > New Rule](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-9-1024x416.webp)
_Home > Conditional Formatting > New Rule_

Step 5: In the “**New Formatting Rule**” dialog box, select “**Format only cells that contain**” from the “**Select a Rule Type**” section.

Step 6: In the “**Format only cells with**” section, select “**Cell Value**” from the first drop-down menu, “**equal to**” from the second drop-down menu, and enter “**0**” (zero) in the third field.

![Formatting Cells](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-6-1024x583.webp)
_Formatting Cells_

Step 7: Click on the “**Format**” button and select the font color, background color, or any other formatting options you want to apply to the cells that contain zero values.

![Format > OK](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-7-1024x419.webp)
_Format > OK_

Step 8: Click on “**OK**” to save the formatting changes.

![Output](https://aawexcel.com/wp-content/uploads/2023/04/Show-zero-9-1.webp)
_Output_

Step 9: Now, the zero values in the selected cells will either be shown or hidden based on the settings you have chosen.

## Show/Hide Zero Values in the Spreadsheet with Kutools

To show or hide the zero values in the worksheet, do as follows:

**Step 1:** First, you need to enable the **Design tab**. To do this, on the **[Kutools Plus](https://aawexcel.com/kutools/)** tab, select the **[Worksheet](https://aawexcel.com/how-to-create-worksheets-from-a-list-of-worksheet-names-in-excel/) Design** option.

![Selecting the Worksheet Design option](https://aawexcel.com/wp-content/uploads/2020/10/EXCEL_cJkt0ALDpj.png)
_Selecting the Worksheet Design option_

**Step 2:** Now it will show the **Design** tab, where you can see the **Show Zero** option.

![Design Tab](https://aawexcel.com/wp-content/uploads/2020/10/Design-Tab-1.png)
_Design Tab_

**Step 3:** Consider the example data below, where the **range of cells with zero values** is highlighted in the image.

![Example data](https://aawexcel.com/wp-content/uploads/2020/10/Example-data-5.png)
_Example data_

**Step 4:** On the **Design** tab, check the **Show Zero** option to **display all the zeros** in the selected range.

![Enable Show Zero option](https://aawexcel.com/wp-content/uploads/2020/10/Enable-Show-Zero-option.png)
_Enable Show Zero option_

**Step 5:** If you want to **hide the zeros** in the range, uncheck the “**Show Zero**” option.

![Hide Zeros](https://aawexcel.com/wp-content/uploads/2020/10/Hide-Zeros.png)
_Output_

### Notes

- The **Show Zero** option works for the whole worksheet.
- If you want to **close** the **Design tab**, click the **Close Design** option in the **Options** tab under the **Design tab**.

![Close Design option](https://aawexcel.com/wp-content/uploads/2020/10/Close-Design-option.png)
_Close Design option_

## Advantages

There are several advantages to showing or hiding zero values in Excel:

- **Improved presentation**: Hiding zero values that are not needed can help create a more visually appealing and professional-looking worksheet.
- **Reduced errors**: By only showing zero values that are relevant, you can reduce the risk of errors in your analysis and improve the accuracy of your results.
- **Increased efficiency**: By using the appropriate settings to show or hide zero values, you can save time and effort in your data analysis process, allowing you to focus on more important tasks.
- **Improved readability**: By hiding zero values that are not necessary, you can reduce clutter and make your data easier to read and understand.
- **Customization**: Excel allows you to customize your worksheet to display zero values in a way that works best for your specific data and analysis needs.

Overall, showing or hiding zero values in Excel is a useful technique that can help to improve the quality and effectiveness of your data analysis.

## Verdict

In conclusion, **showing or hiding zero values in Excel** can greatly improve the readability of your data. By following the steps outlined in this tutorial, you can **customize** your worksheet to display zero values only when necessary and hide them when they are not relevant to the analysis. This will make it easier for you and your audience to understand the data and draw meaningful insights. Whether you choose the **Excel Options** settings, a conditional formatting rule, or **Kutools**, these methods are simple to implement and can save you time and effort in the long run. With these techniques, you can take your Excel skills to the next level and **create clear** and **concise** worksheets that effectively communicate your data.

For more articles, you can visit our [homepage](https://aawexcel.com/).

## Video Tutorial

Here is a video tutorial on **showing or hiding zero values in Excel** for better understanding.

<video controls poster="http://aawexcel.com/wp-content/uploads/2023/04/How-to-Show-or-Hide-Zero-Values-in-Excel.png.webp" src="http://aawexcel.com/wp-content/uploads/2023/04/Show-Zero-values.webm"><track src="http://aawexcel.com/wp-content/uploads/2023/03/no-captions.vtt" srclang="en" label="English" kind="subtitles"></track></video>

_Showing or hiding zero values in Excel_

## FAQ

**How can I show or hide zero values in Excel?**

There are three methods you can use to show or hide zero values in Excel: the Excel Options settings, a conditional formatting rule, or Kutools' Show Zero option.

**Why would I want to hide zero values in Excel?**

Hiding zero values can help reduce clutter and make your data easier to read and understand. It can also help to create a more visually appealing and professional-looking worksheet.

**Can I still use zero values in my calculations if I hide them?**

Yes, hiding zero values does not affect their use in calculations. You can still use zero values in your calculations even if they are hidden from view.

**Can I selectively hide zero values in specific cells or ranges?**

Yes, you can use a conditional formatting rule to selectively hide zero values in specific cells or ranges based on certain criteria.
hajira_official
1,897,747
Implementing Hash-Based Strict CSP on AEM
As always, the full solution is available on Github. Scroll to the bottom of the article for the...
0
2024-06-23T12:28:34
https://dev.to/theopendle/implementing-hash-based-strict-csp-on-aem-5621
aem, csp, adobe, java
> As always, the full solution is available on Github. Scroll to the bottom of the article for the link.

## Introduction to CSP

As a reminder, CSP stands for [Content Security Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP): a security standard that helps prevent cross-site scripting (XSS), clickjacking, and other code injection attacks by controlling what resources a user agent is allowed to load for a given page using the `Content-Security-Policy` header.

Perhaps the most critical directive of CSP is the `script-src` directive, which controls what JS code the browser can load and execute. The highest level of protection is achieved by using the so-called 'strict' CSP. If you want to know more about strict vs non-strict CSPs, including examples and rationales, please visit these excellent resources:

1. [CSP Is Dead, Long Live Strict CSP](https://deepsec.net/docs/Slides/2016/CSP_Is_Dead,_Long_Live_Strict_CSP!_Lukas_Weichselbaum.pdf)
2. [OWASP Content Security Policy Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Content_Security_Policy_Cheat_Sheet.html#strict-csp)

There are two methods for implementing a strict CSP: [nonces or hashes](https://web.dev/articles/strict-csp#nonce-vs-hash).

## Defining the requirements

Let's first list the features of the ideal CSP solution to help us pick the right approach:

1. It should provide the highest level of security (ie: strict CSP)
2. It should work for both author and publish instances (ie: it should not rely on a publish-only dispatcher)
3. It should be easy to maintain
4. It should be cacheable
5. It should work for `<script>` tags that:
    1. point to internal clientlibs
    2. contain inline JS
    3. point to external clientlibs (eg: external analytics script)

## Hashes vs Nonce-nse

While there are certainly cases where the nonce approach makes sense, in my opinion, hashes are a superior solution for most CMS use cases. That's because using nonces presents the following disadvantages:

1. Because it requires the HTML document to contain these nonces that are unique to each _request_, it makes the result impossible to cache and fails requirement 4. Since caching is a critical performance optimization, this is usually disqualifying.
2. It's not a useful mechanism for protection against untrusted external scripts, so it fails requirement 5.iii. This kind of protection would require an integrity check which is, you guessed it, a hash (which we can re-use for our CSP!)

Hashes, by comparison, can be cached because they are specific to the _script content_, which typically only changes with a software release, and they can be used to validate the integrity of scripts from untrusted sources. Therefore, we can conclude that a hash-based CSP is the best solution for most AEM use cases.

> If the above rationale doesn't apply to your use case, then [this Medium article](https://medium.com/@bsaravanaprakash/implementing-csp-with-nonce-for-inline-scripts-in-aem-a-step-by-step-guide-65b1fc4c8ba3) by [Saravana Prakash](https://medium.com/@bsaravanaprakash) can show you how to achieve nonce-based CSP in AEM.

## Solution design

Now that we know what approach to take, let's design the solution.

### Where do scripts come from anyway?

There are typically 3 ways in which `<script>` tags are added to a page's HTML:
1. Added directly via HTL files that make up the Page component (eg: [`customfooterlibs.html`](https://github.com/adobe/aem-core-wcm-components/blob/main/content/src/content/jcr_root/apps/core/wcm/components/page/v3/page/customfooterlibs.html) or [`customheaderlibs.html`](https://github.com/adobe/aem-core-wcm-components/blob/main/content/src/content/jcr_root/apps/core/wcm/components/page/v3/page/customheaderlibs.html))
2. Added indirectly via the Page Policy: ![Page policy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2kh1gfn5rmwh2g6rur70.png)
3. Added [as dependencies to clientlibs](https://experienceleague.adobe.com/en/docs/experience-manager-65/content/implementing/developing/introduction/clientlibs#linking-to-dependencies) defined in points 1 and 2.

So the sequence of events should be:

1. Let AEM add all the `<script>` tags in the page HTML
2. For each `<script>` tag, calculate the hash using one of the following:
    * The inline content of the script
    * The content of the referenced clientlib
    * The integrity attribute of the untrusted script
3. Add the hash to the tag as an `integrity` attribute
4. Add the hash to the CSP header

### Transformers to the rescue!

![transformer.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kfshf97w2r056l6gm54i.png)

> Photo by <a href="https://unsplash.com/@aditya1702?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Aditya Vyas</a> on <a href="https://unsplash.com/photos/man-in-red-and-black-suit-statue-B9MULm2UZIk?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>

Unfortunately I'm not talking about Optimus Prime, but rather a [SAX output pipeline](https://sling.apache.org/documentation/bundles/output-rewriting-pipelines-org-apache-sling-rewriter.html) that will use a [Transformer](https://developer.adobe.com/experience-manager/reference-materials/6-4/javadoc/org/apache/sling/rewriter/Transformer.html) to add the CSP hashes to the `<script>` tags on the HTML page.

## Implementation

In this section I will highlight the most important parts of the solution. For a complete solution, see the Github link at the bottom of the article.

### Creating the transformer

The transformer handles the following cases:

**Inline scripts**

Example:

```html
<script>
    console.log('Hello, World!');
</script>
```

If the `<script>` tag is inline, the hash can be calculated using the `innerText` of the element.

**Clientlib scripts**

Example:

```html
<script src="/etc.clientlibs/demo/clientlibs/clientlib-site.min.js">
```

If the `<script>` tag points to a clientlib served from AEM, the hash can be calculated using the content of the clientlib.

**External scripts**

Example:

```html
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js" integrity="sha384-YvpcrYf0tY3lHB60NNkmXc5s9fDVZLESaAA55NDzOxhy9GkcIdslK1eN7N6jIeHz" crossorigin="anonymous"></script>
```

If the `<script>` tag points to an external resource, there is no clientlib content to hash; instead, the value of its `integrity` attribute is re-used for the CSP, and external scripts without one are rejected.

Here is the code for the transformer. I've been very generous with the comments to explain the rationale behind each step. This snippet refers to some POJOs and configuration that you can see in the Github diff at the end of the article.
```java
@RequiredArgsConstructor
@Slf4j
public class CspHashTransformer extends DefaultTransformer {

    @Getter
    private final String hashingAlgorithm;
    private final HtmlLibraryManager htmlLibraryManager;

    private SlingHttpServletRequest request;
    private SlingHttpServletResponse response;
    private ContentSecurityPolicy csp;
    private TransformerElement currentElement;

    @Override
    public void init(final ProcessingContext context, final ProcessingComponentConfiguration config) throws IOException {
        super.init(context, config);
        request = context.getRequest();
        response = context.getResponse();
        csp = new ContentSecurityPolicy();

        // We initialize the CSP with a strict-dynamic directive to allow for our trusted scripts to
        // load other scripts without being blocked by the browser.
        csp.addScriptSrcElem("'strict-dynamic'");
    }

    /**
     * Process the start of an element. If the element has a src attribute pointing to a clientlib, calculate the hash
     * and add it to the element as an integrity attribute and to the CSP header.
     *
     * @param namespaceUri  the namespace URI of the element
     * @param localName     the local name of the element
     * @param qualifiedName the qualified name of the element
     * @param attributes    the attributes of the element
     * @throws SAXException if an error occurs during processing
     */
    @Override
    public void startElement(final String namespaceUri, final String localName, final String qualifiedName, final Attributes attributes) throws SAXException {
        currentElement = new TransformerElement(namespaceUri, localName, qualifiedName, attributes);
        log.debug("Start processing element {}", currentElement);

        addIntegrityAttributeAndCspForSrc();

        super.startElement(currentElement.namespaceUri(), currentElement.localName(), currentElement.qualifiedName(),
                currentElement.attributes());
    }

    /**
     * Called by the SAX parser when it encounters character data. Used to append the character data to the inner text
     * of the current element.
     *
     * @param ch     the character array being read
     * @param start  the start index of the character array
     * @param length the length of the character array
     * @throws SAXException if an error occurs during processing
     */
    @Override
    public void characters(final char[] ch, final int start, final int length) throws SAXException {
        if (currentElement != null) {
            currentElement.innerText().append(ch, start, length);
        }
        super.characters(ch, start, length);
    }

    /**
     * Process the end of an element. If the element has inner text, calculate the hash and add it to the CSP header.
     *
     * @param namespaceUri  the namespace URI of the element
     * @param localName     the local name of the element
     * @param qualifiedName the qualified name of the element
     * @throws SAXException if an error occurs during processing
     */
    @Override
    public void endElement(final String namespaceUri, final String localName, final String qualifiedName) throws SAXException {
        if (currentElement == null) {
            return;
        }
        log.debug("End processing element {}", currentElement);

        addCspForInnerText();

        super.endElement(namespaceUri, localName, qualifiedName);
    }

    private void addIntegrityAttributeAndCspForSrc() {
        // Get the source of the script
        final String src = currentElement.attributes().getValue("src");
        if (src == null) {
            log.debug("No src attribute found for element {}", currentElement);
            return;
        }

        // Attempt to find a clientlib associated with the src attribute
        final HtmlLibrary clientlib = getHtmlLibrary(src);

        // Attempt to read the integrity attribute
        final String integrity = currentElement.attributes().getValue("integrity");

        // If no clientlib can be found using the src, then assume the src is external
        final boolean isExternal = clientlib == null;

        // For security reasons, we consider that an external script without an integrity attribute is invalid. It will
        // not be added to the CSP and therefore will fail to load/execute in the browser.
        if (isExternal && integrity == null) {
            log.error("Integrity attribute missing from external src <{}>. Hash cannot be calculated.", src);
            return;
        }

        // Re-use the integrity hash if possible, else calculate the hash from the clientlib content
        final String hash = isExternal ? integrity : getHashFromClientlib(clientlib);

        // If no hash can be calculated, then the script will not be added to the CSP and therefore will fail to load
        if (hash == null) {
            log.debug("No clientlib or external hash found for src <{}>. Hash cannot be calculated.", src);
            return;
        }

        // For internal scripts, add the integrity attribute containing the hash. Security-wise this does not provide any
        // benefit as the CSP will already enforce the hash, but it is good practice to include it so that you can
        // easily identify which script corresponds to which hash for debugging purposes.
        if (!isExternal) {
            addIntegrityAttribute(hash);
        }

        // Finally, add the hash to the CSP
        addHashToCsp(hash);
    }

    private String getHashFromClientlib(final HtmlLibrary clientlib) {
        try (final InputStream inputStream = clientlib.getInputStream(true)) {
            final String hash = calculateHashAndEncodeBase64(inputStream);
            log.debug("Hash for <{}>: <{}>", clientlib.getPath(), hash);
            return hash;
        } catch (final IOException e) {
            log.error("Could not read clientlib <{}>", clientlib.getPath(), e);
            return null;
        }
    }

    private void addCspForInnerText() {
        final String innerText = currentElement.innerText().toString();
        if (innerText.isEmpty()) {
            log.debug("Element {} has no inner text", currentElement);
            return;
        }
        final String hash = calculateHashAndEncodeBase64(innerText);
        addHashToCsp(hash);
    }

    /**
     * Adds the hash as an integrity attribute to the current element.
     *
     * @param hash the hash to add
     */
    private void addIntegrityAttribute(final String hash) {
        final AttributesImpl attributes = new AttributesImpl(currentElement.attributes());
        attributes.addAttribute(currentElement.namespaceUri(), "integrity", "integrity", "0", hash);
        currentElement.attributes(attributes);
    }

    /**
     * Adds the hash to the Content-Security-Policy header.
     *
     * @param hash the hash to add
     */
    private void addHashToCsp(final String hash) {
        csp.addScriptSrcElem("'" + hash + "'");
        response.setHeader("Content-Security-Policy", csp.toString());
    }

    /**
     * Find the clientlib associated with the src attribute if such a clientlib exists.
     *
     * @param src the src attribute of the element
     * @return the clientlib associated with the src attribute, or null if no such clientlib exists
     */
    private HtmlLibrary getHtmlLibrary(final String src) {
        final String path;
        try {
            path = new URI(src).getPath();
        } catch (final URISyntaxException e) {
            log.error("src attribute element {} is not a valid URI", currentElement, e);
            return null;
        }

        // Find true path of clientlib in /apps (or /libs, via overlay)
        final String appsPath = path
                .replace("etc.clientlibs", "apps")
                .replace(".min.js", "");

        final Resource resource = request.getResourceResolver().resolve(appsPath);
        if (resource instanceof NonExistingResource) {
            log.error("Could not find resource using path <{}>", path);
            return null;
        }

        return htmlLibraryManager.getLibrary(LibraryType.JS, resource.getPath());
    }

    private String calculateHashAndEncodeBase64(final InputStream inputStream) {
        try {
            final MessageDigest digest = MessageDigest.getInstance(hashingAlgorithm);
            final byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                digest.update(buffer, 0, bytesRead);
            }
            final byte[] hash = digest.digest();
            final String hashString = Base64.getEncoder().encodeToString(hash);
            return hashAndAlgorithm(hashString);
        } catch (final IOException e) {
            log.error("Error reading input stream for hashing", e);
            return null;
        } catch (final NoSuchAlgorithmException e) {
            log.error("Encryption algorithm not found", e);
            return null;
        }
    }

    private String calculateHashAndEncodeBase64(final String string) {
        try {
            final MessageDigest digest = MessageDigest.getInstance(hashingAlgorithm);
            final byte[] hash = digest.digest(string.getBytes(StandardCharsets.UTF_8));
            final String hashString = Base64.getEncoder().encodeToString(hash);
            return hashAndAlgorithm(hashString);
        } catch (final NoSuchAlgorithmException e) {
            log.error("Encryption algorithm not found", e);
            return null;
        }
    }

    private String hashAndAlgorithm(final String hash) {
        return hashingAlgorithm.toLowerCase().replace("-", "") + "-" + hash;
    }
}
```

### Adding the transformer to the pipeline

To create a pipeline that includes our transformer, we need to create a [Rewriter](https://sling.apache.org/documentation/bundles/output-rewriting-pipelines-org-apache-sling-rewriter.html) by adding a node at `/apps/demo/config/rewriter/links-pipeline` with the following properties:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
          jcr:primaryType="nt:unstructured"
          contentTypes="text/html"
          generatorType="htmlparser"
          order="1"
          paths="[/content]"
          serializerType="htmlwriter"
          transformerTypes="[csp-hash-transformer]">
    <generator-htmlparser jcr:primaryType="nt:unstructured" includeTags="[SCRIPT]"/>
</jcr:root>
```

## The result

Now, if we load scripts onto our page using `customfooterlibs.html`:

```html
<!-- This HTL include demonstrates the loading of a clientlib -->
<sly data-sly-use.clientlib="core/wcm/components/commons/v1/templates/clientlib.html">
    <sly data-sly-call="${clientlib.js @ categories='demo.base', async=true}"/>
</sly>

<!-- This script element demonstrates the loading of an external script -->
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js"
integrity="sha384-YvpcrYf0tY3lHB60NNkmXc5s9fDVZLESaAA55NDzOxhy9GkcIdslK1eN7N6jIeHz" crossorigin="anonymous"></script> <!-- This script element demonstrates the inline script hashing --> <script> console.log('This was logged from an inline script!'); </script> <!-- This inline script loads an external script to demonstrate the 'dynamic' principle --> <script> const script = document.createElement('script'); script.src = 'https://cdn.jsdelivr.net/npm/jquery@3.7.1/dist/jquery.min.js'; document.body.appendChild(script); addEventListener("load", (event) => console.log('jQuery version:',$().jquery)); </script> ``` We should receive a CSP like this (yours will vary depending on your clientlibs/dependencies): ``` Content-Security-Policy: script-src-elem 'strict-dynamic' 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=' 'sha256-XjA+iLg5j0FvhFkZc7LcXfbQJ0b3gvw2c2jj9vv65q0=' 'sha256-Uv8dzRTPRZ2++L/ZWgfN9lPjdvzsDYVS4rEfvWCA0x0=' 'sha256-3dfW5u+XJXRPqcC3F8wewmnAr6oxejxP7ArjOE38P2Q=' 'sha256-5hrKOpQWBa1NuajxV3udxJCgNMQMD/lUApbmGxMmpuM=' 'sha256-wlCSQBL9yeqVFrMGUIlSAc0Wfb1JydFIkk8wiBq/o5M=' 'sha256-WJ3od+zqoblT5apcuXdUh4o1UWwVnb5AjQhmHWIu2OY=' 'sha384-YvpcrYf0tY3lHB60NNkmXc5s9fDVZLESaAA55NDzOxhy9GkcIdslK1eN7N6jIeHz' 'sha256-QFYsdZ/eGhCq89XHZ7IOsy5A9dTKeyvduAx2RnCqvAA=' 'sha256-Fb8sXaPGzkkQJOoKLIpn0I5s+VOOiBlZMYGgv8wHxZI=' 'sha256-g2cCN9gX44Hp5lFL/iomg3hI3LeG/LRkzeNJfQZjJGI='; ``` And you should see that the demonstration scripts have executed as expected in the browser console: ``` This was logged from an inline script! jQuery version: 3.7.1 ``` ### What about the other CSP directives? Good question! There are indeed dozens of other directives you can use to fine-tune your CSP. This article will not give you a comprehensive strategy for dealing with all your CSP requirements, it only shows you how to automate the configuration`script-src-elem` directive. Thankfully, using multiple CSP headers is a valid approach, so you can add the rest of the directives anywhere you like as additional CSP headers. Just make sure you understand [how multiple CSP headers interact with each other](https://content-security-policy.com/examples/multiple-csp-headers/) to avoid surprises. ## Conclusion As promised, all the code is available in one easy-to-read diff on Github. You can find it [here](https://github.com/theopendle/aem-demo/compare/cd123fd4f766399887171f20bad0284014dd9f09...tutorial/csp-hash). If you have any comments or ideas about this article, the topic matter or the format, don't hesitate to leave a comment or to reach out to me on [LinkedIn](https://www.linkedin.com/in/theo-pendle-1630a52a)!
theopendle
1,897,743
Why a .gitignore File is Essential for Your Projects
When working on a project using Git for version control, you’ll often encounter files and directories...
0
2024-06-23T12:20:10
https://dev.to/just_ritik/why-a-gitignore-file-is-essential-for-your-projects-4odm
webdev, frontend, javascript, programming
When working on a project using **Git** for version control, you'll often encounter files and directories that you don't want to track. This is where the `.gitignore` file becomes essential. In this post, we'll explore what a `.gitignore` file is, why it's necessary, and how to use it effectively in your projects.

**What is a .gitignore File?**

A `.gitignore` file is a simple text file where you list the files and directories that you want Git to ignore. Git will then exclude these files from being tracked and versioned. This helps keep your repository clean and free from unnecessary files. Note that `.gitignore` only applies to untracked files; if a file is already being tracked, run `git rm --cached <file>` first so that the ignore rule takes effect.

**Why is a .gitignore File Necessary?**

1. **Improving Performance**: By ignoring unnecessary files, you can improve the performance of Git operations. This is especially important for large projects where tracking every file can slow down version control operations.

2. **Reducing Repository Size**: Keeping your repository clean by ignoring files that are not required in version control helps reduce the overall size of the repository. This makes cloning and pulling from the repository faster and more efficient.

3. **Preventing Sensitive Data from Being Tracked**: Including sensitive information like API keys, passwords, and configuration files in your repository can be risky. A `.gitignore` file ensures that these sensitive files are not accidentally committed to the repository.

4. **Maintaining Clean and Professional Repositories**: Using a `.gitignore` file helps maintain a clean and professional repository. It ensures that only the relevant files are tracked, making the project easier to understand and manage for other developers.

**Example .gitignore entry for sensitive data**

```
.env
config/secrets.yml
```

**Example .gitignore entries for temporary files**

```
*.log
tmp/
build/
dist/
```

**How to Use a .gitignore File**

Creating and using a `.gitignore` file is straightforward. Here are some steps and examples to get you started:

**Step 1: Create a .gitignore File**

In the root directory of your project, create a file named `.gitignore`.

**Step 2: Add Files and Directories to Ignore**

Open the `.gitignore` file in a text editor and add the files and directories you want Git to ignore. Each entry should be on a new line.

```
# Ignore all .log files
*.log

# Ignore node_modules directory
node_modules/

# Ignore build directory
build/

# Ignore .env file
.env
```

**Step 3: Commit the .gitignore File**

After creating and updating your `.gitignore` file, commit it to your repository.

```
git add .gitignore
git commit -m "Add .gitignore file"
```

Here's a fuller example of a `.gitignore` for a typical Node.js project:

```
# Logs
logs
*.log
npm-debug.log*

# Dependency directories
node_modules/

# Build output
dist/
build/

# Environment variables
.env

# IDE files
.vscode/
.idea/
```

**Conclusion**

A `.gitignore` file is an essential part of any project using Git. It helps keep your repository clean, secure, and efficient by ignoring unnecessary files and directories. By understanding and using `.gitignore` files effectively, you can ensure that your version control remains manageable and professional.

Happy coding!

Follow For More
just_ritik
1,897,742
Decentralized Storage Networks with Redistribution?
What are some popular decentralized storage networks similar to Freenet, but where data is...
0
2024-06-23T12:18:50
https://dev.to/moogamouth/decentralized-storage-networks-with-redistribution-3pa7
What are some popular decentralized storage networks similar to Freenet, but where data is automatically transferred to a new node if the old one drops it?
moogamouth
1,897,741
Unlocking the Secrets of JavaScript: My Exhilarating Adventure in Web Development
As I dove into the world of web development in the Bano Qabili Program 3.0, I never expected to find...
0
2024-06-23T12:18:46
https://dev.to/m_affannazeer_8f74d021e_50/unlocking-the-secrets-of-javascript-my-exhilarating-adventure-in-web-development-34mc
As I dove into the world of web development in the Bano Qabili Program 3.0, I never expected to find myself on an exhilarating adventure. But that's exactly what happened in our third class, when we explored the fascinating realm of JavaScript.

**Discovering the Enchanting Universe of JavaScript Types**

JavaScript, a language once foreign to me, began to unveil its secrets. We delved into the enchanting universe of JavaScript types, discovering both primitive and non-primitive types. The differences between them were intriguing, and I found myself captivated by the simplicity of primitive types like strings, numbers, and booleans, as well as the dynamic nature of non-primitive types like objects and arrays.

**Realizing the Impact of Web Development**

As we journeyed through the world of JavaScript, I realized that web development wasn't just about coding; it was about creating something meaningful and impactful. The thrill of building something from scratch, the satisfaction of solving complex problems, and the joy of bringing ideas to life were all part of this exciting journey.

**Sharing the Excitement and Enthusiasm**

In this article, I hope to share the excitement and enthusiasm I experienced as I unlocked the secrets of JavaScript. Whether you're a seasoned developer or just starting out, I hope my story will inspire you to embrace the thrill of web development and explore the endless possibilities that JavaScript has to offer.

**Join the Adventure**

So, what are you waiting for? Join me on this exhilarating adventure into the world of web development and JavaScript. Let's explore, create, and bring ideas to life together!
m_affannazeer_8f74d021e_50
1,897,740
React as a Library and Create React App / Vite as Frameworks
React stands out as one of the most popular libraries for building user interfaces. However, when it...
27,828
2024-06-23T12:13:50
https://imabhinav.dev/blog/react-as-a-library-and-create-react-app-vite-as-frameworks-12-9-44
webdev, javascript, beginners, tutorial
React stands out as one of the most popular libraries for building user interfaces. However, when it comes to scaffolding a new project, developers often turn to tools like Create React App (CRA) or Vite. While React is a library focused on the view layer, CRA and Vite act as frameworks, providing the necessary tooling and conventions to streamline the development process. This article explores the distinction between React as a library and Create React App or Vite as frameworks, elucidating their roles and benefits with examples.

### React: The Library

React is a JavaScript library developed by Facebook for building user interfaces, particularly single-page applications. Its core philosophy is centered around components—reusable, independent pieces of UI. React's primary job is to render UI components and manage their state efficiently.

![Description](https://media4.giphy.com/media/6hzcLwqQ7AH4fPNR59/giphy.webp?cid=790b7611tdzd8hhys2c4mk07vrf9ttlyaawcs15cuhqrly0l&ep=v1_gifs_search&rid=giphy.webp&ct=g)

#### Key Features of React

1. **Component-Based Architecture**: React encourages developers to build applications using components. Each component represents a part of the user interface and can manage its own state and lifecycle.
2. **Virtual DOM**: React uses a virtual DOM to improve performance. It minimizes direct manipulation of the actual DOM by batching updates and applying the minimum number of changes.
3. **Declarative Syntax**: React allows developers to describe what the UI should look like at any given point in time. This declarative nature makes it easier to understand and debug.
4. **Unidirectional Data Flow**: React promotes a unidirectional data flow, which simplifies data management and makes the application state more predictable.

#### Example: Basic React Component

```jsx
import React, { useState } from 'react';

function App() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <h1>Counter: {count}</h1>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

export default App;
```

In the above example, we define a simple React component that maintains a counter state and provides a button to increment it. React's power lies in its simplicity and reusability.

### Create React App and Vite: The Frameworks

While React itself is just a library, starting a new project from scratch can be cumbersome without additional tools. This is where frameworks like Create React App and Vite come into play. These tools provide a structured environment to develop React applications, handling the complexities of build configuration, asset management, and development server setup.

#### Create React App (CRA)

Create React App is an officially supported way to create single-page React applications. It offers a modern build setup with no configuration. Under the hood, CRA uses Webpack and Babel to bundle and transpile code.

##### Key Features of CRA

1. **Zero Configuration**: CRA abstracts away the build configuration, allowing developers to focus on writing code without worrying about setup details.
2. **Development Server**: CRA provides a development server with hot module replacement, enabling a smooth development experience.
3. **Testing Setup**: CRA comes with a built-in testing setup using Jest, making it easier to write and run tests.
4. **Optimized Production Build**: CRA optimizes the production build for better performance and smaller bundle sizes.
##### Example: Creating a New React App with CRA

```bash
npx create-react-app my-app
cd my-app
npm start
```

After running these commands, CRA generates a new React application with a predefined structure and configuration. Developers can immediately start building their applications without dealing with the underlying build tools.

#### Vite

Vite is a next-generation front-end tooling framework that aims to provide a faster and leaner development experience. It is designed to leverage the native ES modules in the browser, allowing instant server start and fast hot module replacement.

##### Key Features of Vite

1. **Fast Server Start**: Vite starts the development server instantly, even for large projects.
2. **Hot Module Replacement (HMR)**: Vite provides lightning-fast HMR, ensuring changes reflect immediately in the browser.
3. **Optimized Build**: Vite uses Rollup for production builds, offering efficient code splitting and optimization.
4. **Flexible and Extensible**: Vite is highly configurable and can be extended with plugins to suit different project needs.

##### Example: Creating a New React App with Vite

```bash
npm create vite@latest my-app -- --template react
cd my-app
npm install
npm run dev
```

These commands create a new React application using Vite. The development server starts almost instantly, providing a seamless development experience. (Note: the `--` before `--template` is needed so that npm forwards the flag to Vite instead of consuming it.)

### Detailed Code Examples

#### React Component with Create React App

Let's explore a more detailed example of a React component created with Create React App. This example includes a form that captures user input and displays it dynamically.

1. **Initialize the Project**

```bash
npx create-react-app user-form-app
cd user-form-app
npm start
```

2. **Create a UserForm Component**

```jsx
// src/components/UserForm.js
import React, { useState } from 'react';

function UserForm() {
  const [name, setName] = useState('');
  const [email, setEmail] = useState('');

  const handleSubmit = (event) => {
    event.preventDefault();
    alert(`User Info: \nName: ${name}\nEmail: ${email}`);
  };

  return (
    <form onSubmit={handleSubmit}>
      <div>
        <label>Name:</label>
        <input type="text" value={name} onChange={(e) => setName(e.target.value)} />
      </div>
      <div>
        <label>Email:</label>
        <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} />
      </div>
      <button type="submit">Submit</button>
    </form>
  );
}

export default UserForm;
```

3. **Use the UserForm Component in App**

```jsx
// src/App.js
import React from 'react';
import UserForm from './components/UserForm';

function App() {
  return (
    <div>
      <h1>User Form</h1>
      <UserForm />
    </div>
  );
}

export default App;
```

In this example, Create React App handles the project setup, allowing us to focus on developing the components and functionality.

#### React Component with Vite

Now, let's create a similar example using Vite.

1. **Initialize the Project**

```bash
npm create vite@latest user-form-app -- --template react
cd user-form-app
npm install
npm run dev
```

2. **Create a UserForm Component**
```jsx
// src/components/UserForm.jsx
import React, { useState } from 'react';

function UserForm() {
  const [name, setName] = useState('');
  const [email, setEmail] = useState('');

  const handleSubmit = (event) => {
    event.preventDefault();
    alert(`User Info: \nName: ${name}\nEmail: ${email}`);
  };

  return (
    <form onSubmit={handleSubmit}>
      <div>
        <label>Name:</label>
        <input type="text" value={name} onChange={(e) => setName(e.target.value)} />
      </div>
      <div>
        <label>Email:</label>
        <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} />
      </div>
      <button type="submit">Submit</button>
    </form>
  );
}

export default UserForm;
```

3. **Use the UserForm Component in App**

```jsx
// src/App.jsx
import React from 'react';
import UserForm from './components/UserForm';

function App() {
  return (
    <div>
      <h1>User Form</h1>
      <UserForm />
    </div>
  );
}

export default App;
```

In this Vite example, the development server starts instantly, and changes reflect immediately due to Vite's fast HMR.

### Conclusion

React, as a library, provides the fundamental building blocks for creating user interfaces through its component-based architecture, virtual DOM, and declarative syntax. However, to streamline the development process, tools like Create React App and Vite act as frameworks that offer a structured environment with build configuration, development servers, and optimization capabilities.

Create React App simplifies project setup with zero configuration, making it a great choice for developers who want a ready-to-use environment. On the other hand, Vite offers a faster, more flexible development experience, leveraging native ES modules and providing instant server start and fast HMR.

By understanding the distinction between React as a library and CRA or Vite as frameworks, developers can make informed choices about their development tools, ultimately leading to more efficient and enjoyable development experiences.
imabhinavdev
1,897,731
Made typos in routes? Redirect routes with functions
Introduction In this blog post, I want to describe a new feature called redirect functions...
27,826
2024-06-23T12:09:23
https://www.blueskyconnie.com/redirect-routes-with-functions-to-fix-routing-typos/
angular, tutorial, frontend
## Introduction

In this blog post, I want to describe a new feature called redirect functions with routes. When defining routes in Angular, it is possible to catch and redirect a route to a different path using redirectTo. One example is to catch all unknown routes and redirect them to a 404 page. Another example is redirecting a default route to a home page. In Angular 18, redirectTo is enhanced to accept a function. Then, a route can perform logic in the function and route to different paths according to some criteria. I will demonstrate how I made typos in the routes and used the redirect-routes-with-functions technique to redirect them to the routes with the correct spelling.

### Update Application Config to add routing

```typescript
// app.config.ts
import { ApplicationConfig, provideExperimentalZonelessChangeDetection } from '@angular/core';
import { provideHttpClient } from '@angular/common/http';
import { provideRouter, withComponentInputBinding } from '@angular/router';

import { routes } from './app.routes';

export const appConfig: ApplicationConfig = {
  providers: [
    provideHttpClient(),
    provideRouter(routes, withComponentInputBinding()),
    provideExperimentalZonelessChangeDetection()
  ]
};
```

`provideRouter(routes, withComponentInputBinding())` adds routes to the demo to navigate to Pokemon List and Pokemon Details respectively.

```typescript
// main.ts
import { Component } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';
import { RouterOutlet } from '@angular/router';

import { appConfig } from './app.config';

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [RouterOutlet],
  template: `
    <h2>redirectTo Demo</h2>
    <router-outlet />
  `,
})
export class App {}

bootstrapApplication(App, appConfig);
```

Bootstrapping the root component with the application configuration starts the Angular application.

### Create routes for navigation to the Pokemon list and Pokemon Details

I wanted to add two new routes to load the components but made typos in the configuration. I typed `/pokermon-list` instead of `/pokemon-list`. Similarly, I typed `/pokermon-list/:name` instead of the correct spelling `/pokemon-list/:name`.

```typescript
// app.routes.ts
import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: 'pokermon-list',
    loadComponent: () => import('./pokemons/pokemon-list/pokemon-list.component'),
    title: 'Pokermon List',
  },
  {
    path: 'pokermon-list/:name',
    loadComponent: () => import('./pokemons/pokemon/pokemon.component'),
    title: 'Pokermon',
  },
  { path: '', pathMatch: 'full', redirectTo: 'pokermon-list' },
  { path: '**', redirectTo: 'pokermon-list' }
];
```

The careless mistakes went unnoticed, and I copied and pasted the erroneous routes to the `routerLink` in the inline templates.

### Add routing in the template

```typescript
// pokemon-list.component.ts
<div class="container">
  @for(p of pokemons(); track p.name) {
    <div class="card">
      <p>Name: <a [routerLink]="['/pokermon-list', p.name]">{{ p.name }}</a></p>
    </div>
  }
</div>
```

In the Pokemon List component, the first element of the `routerLink` is `/pokermon-list`.

```typescript
// pokemon.component.ts
<div>
  <a [routerLink]="'/pokermon-list'">Back</a>
</div>
```

In the Pokemon component, the value of the `routerLink` is `/pokermon-list` to go back to the previous page.

Someone spotted the typos in the URL and informed me. I easily fixed the typos in the `routerLink` and the routes array.
### Fix the errors in routes

```typescript
// app.routes.ts
import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: 'pokemon-list',
    loadComponent: () => import('./pokemons/pokemon-list/pokemon-list.component'),
    title: 'Pokermon List',
  },
  {
    path: 'pokemon-list/:name',
    loadComponent: () => import('./pokemons/pokemon/pokemon.component'),
    title: 'Pokermon',
  },
  { path: '', pathMatch: 'full', redirectTo: 'pokemon-list' },
  { path: '**', redirectTo: 'pokemon-list' }
];
```

All occurrences of `pokermon-list` were replaced with `pokemon-list` in the routes array.

```typescript
// pokemon-list.component.ts
<div class="container">
  @for(p of pokemons(); track p.name) {
    <div class="card">
      <p>Name: <a [routerLink]="['/pokemon-list', p.name]">{{ p.name }}</a></p>
    </div>
  }
</div>
```

```typescript
// pokemon.component.ts
<div>
  <a [routerLink]="'/pokemon-list'">Back</a>
</div>
```

Similarly, the templates replaced `/pokermon-list` with `/pokemon-list` in the `routerLink` inputs.

This should work, but I wanted to improve the solution further. If someone bookmarked `/pokermon-list` or `/pokermon-list/pikachu` before, these URLs do not work now. Instead, the URLs must redirect to `/pokemon-list` and `/pokemon-list/pikachu`. I will show you how it can be done by using the new redirectTo function in Angular 18.

### Redirect routes with a function

```typescript
// app.routes.ts
export const routes: Routes = [
  {
    path: 'pokemon-list',
    loadComponent: () => import('./pokemons/pokemon-list/pokemon-list.component'),
    title: 'Pokermon List',
  },
  {
    path: 'pokemon-list/:name',
    loadComponent: () => import('./pokemons/pokemon/pokemon.component'),
    title: 'Pokermon',
  },
  {
    path: 'pokermon-list',
    redirectTo: 'pokemon-list',
  },
  {
    path: 'pokermon-list/:name',
    redirectTo: ({ params }) => {
      const name = params?.['name'] || '';
      return name ? `/pokemon-list/${name}` : 'pokemon-list';
    }
  },
  { path: '', pathMatch: 'full', redirectTo: 'pokemon-list' },
  { path: '**', redirectTo: 'pokemon-list' }
];
```

`redirectTo` accepts a string or a function; therefore, I redirected `/pokermon-list` to `/pokemon-list`. This is the same behavior as before Angular 18.

The `/pokermon-list/:name` route was tricky because I wanted to redirect to `/pokemon-list/:name` when the name route parameter was present and to `/pokemon-list` when the name route parameter was missing. I satisfied the requirements by redirecting the route with a function.

```typescript
{
  path: 'pokermon-list/:name',
  redirectTo: ({ params }) => {
    const name = params?.['name'] || '';
    return name ? `/pokemon-list/${name}` : 'pokemon-list';
  }
},
```

I de-structured params from `ActivatedRouteSnapshot` and found the value of the name property in the record. When the string was not empty, users were directed to see the Pokemon details, and when it was empty, they were redirected to the Pokemon list.
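Because the redirect is now a real function, it can carry extra logic beyond fixing a spelling. As a rough sketch building on the same route (the `knownNames` list below is hypothetical and not part of the demo app), the function could also validate the parameter before deciding where to send the user:

```typescript
// Hypothetical: only redirect to the details page for recognized names.
const knownNames = ['pikachu', 'charmander', 'bulbasaur'];

{
  path: 'pokermon-list/:name',
  redirectTo: ({ params }) => {
    const name = params?.['name'] || '';
    // Unknown or missing names fall back to the list page.
    return knownNames.includes(name) ? `/pokemon-list/${name}` : '/pokemon-list';
  }
},
```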
### The full listing of the routes array

```typescript
// app.routes.ts
import { Routes } from '@angular/router';

export const routes: Routes = [
  {
    path: 'pokemon-list',
    loadComponent: () => import('./pokemons/pokemon-list/pokemon-list.component'),
    title: 'Pokermon List',
  },
  {
    path: 'pokemon-list/:name',
    loadComponent: () => import('./pokemons/pokemon/pokemon.component'),
    title: 'Pokermon',
  },
  {
    path: 'pokermon-list',
    redirectTo: 'pokemon-list',
  },
  {
    path: 'pokermon-list/:name',
    redirectTo: ({ params }) => {
      const name = params?.['name'] || '';
      return name ? `/pokemon-list/${name}` : 'pokemon-list';
    }
  },
  { path: '', pathMatch: 'full', redirectTo: 'pokemon-list' },
  { path: '**', redirectTo: 'pokemon-list' }
];
```

This is the full listing of the routes array where navigation and redirection work as expected. The following Stackblitz demo shows the final result:

{%embed https://stackblitz.com/edit/stackblitz-starters-h1eyk9?file=src%2Fapp.routes.ts %}

This is the end of the blog post that describes redirecting routes with redirect functions in Angular. I hope you like the content and continue to follow my learning experience in Angular, NestJS, GenerativeAI, and other technologies.

## Resources

- Stackblitz Demo: https://stackblitz.com/edit/stackblitz-starters-h1eyk9?file=src%2Fapp.routes.ts
- Github Repo: https://github.com/railsstudent/ng-redirectTo-demo
- Github Page: https://railsstudent.github.io/ng-redirectTo-demo
railsstudent
1,897,738
Twilio challenge entry: Mystery at Darkwood Manor
This is a submission for the Twilio Challenge What I Built I built a small web based...
0
2024-06-23T12:06:42
https://dev.to/dpppr/twilio-challenge-entry-mystery-at-darkwood-manor-744
devchallenge, twiliochallenge, ai, twilio
*This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)*

## What I Built

I built a small web-based Murder Mystery game; ask the suspects questions to try and figure out the culprit. I had a great time making it and hope you enjoy playing!

This is built in Vanilla JS and jQuery. The character images are by Freepik, and the background and banner image is by upklyak on Freepik.

## Demo

<!-- Share a link to your app and include some screenshots here. -->

Play it here! https://chimerical-marshmallow-8dac22.netlify.app/

![Screenshot of game 'Mystery at Darkwood Manor'](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktzavqjpj703q667ovt5.png)

The (slightly messy) source can be found at: https://github.com/dpppr/mystery_wp

## Twilio and AI

<!-- Tell us how you leveraged Twilio’s capabilities with AI -->

I used Twilio Functions to take the user's chat input and the character they are talking to; from there, the Function generates the appropriate character prompt to send on to Google Gemini. Once Gemini has a response, the Twilio Function returns the generated text back to the user. Gemini was used both for the character responses and for judging whether or not the user's guess at solving the mystery was correct. (A rough sketch of this wiring appears at the end of this post.)

## Additional Prize Categories

<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->

**Entertaining Endeavors**

A game where people solve a mystery by asking their own questions instead of picking preset ones, getting unique responses, and judging those responses to work out inconsistencies. They can also write up their theory as they see it.
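As promised above, here is a minimal sketch of what such a Twilio Function could look like. The field names, prompt wording, and model choice are my assumptions for illustration; the game's actual code lives in the GitHub repo linked in the Demo section.

```javascript
// A hypothetical Twilio Function: turn a player's message and chosen
// character into a Gemini prompt, then return the generated reply.
const { GoogleGenerativeAI } = require('@google/generative-ai');

exports.handler = async function (context, event, callback) {
  // `event` carries the fields posted by the front end (assumed names).
  const { message, character } = event;

  // The API key is stored as an environment variable on the Twilio service.
  const genAI = new GoogleGenerativeAI(context.GEMINI_API_KEY);
  const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });

  // Build a per-character prompt (hypothetical wording).
  const prompt =
    `You are ${character}, a suspect in a murder mystery at Darkwood Manor. ` +
    `Stay in character and answer the detective's question: "${message}"`;

  try {
    const result = await model.generateContent(prompt);
    callback(null, { reply: result.response.text() });
  } catch (err) {
    callback(err);
  }
};
```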
dpppr
1,897,737
What is Google OAuth 2.0?
OAuth 2.0 (Open Authorization) is an industry-standard protocol for authorization, allowing...
0
2024-06-23T12:02:56
https://dev.to/yaswanth_bonumaddi/understanding-google-oauth-20-57fn
OAuth 2.0 (Open Authorization) is an industry-standard protocol for authorization, allowing third-party applications to access a user's resources without exposing their credentials. Google OAuth 2.0 specifically is Google's implementation of this protocol, enabling secure and seamless authentication and authorization across various services and applications.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vm3lj1tfo1fbf5vbfgd5.png)

## Google OAuth 2.0 offers several key benefits that make it widely adopted and highly effective:

### Enhanced Security

Instead of sharing your Google account username and password with third-party apps, Google OAuth 2.0 provides a secure way to grant limited access to your account data. It reduces the risk of credential exposure and unauthorized access.

### User Control

Users have granular control over what data and services they authorize third-party applications to access. This is facilitated through Google's consent screen, where users can review and approve the requested permissions.

### Simplicity for Developers

Implementing OAuth 2.0 simplifies the development process for integrating with Google APIs. Developers can leverage Google's robust authentication infrastructure rather than building their own, saving time and ensuring security best practices are followed.

### Scalability

OAuth 2.0 supports scalable authorization scenarios, allowing applications to securely access a user's data across multiple devices and platforms without requiring the user to repeatedly enter their credentials.

## Mechanics of Google OAuth 2.0: An Example

Let's illustrate how Google OAuth 2.0 works with a practical example:

### Example Scenario: Accessing Google Drive

Imagine you're using a productivity app that integrates with Google Drive for file management. Here's how Google OAuth 2.0 facilitates secure authentication and access:

### Initiation

You log in to the productivity app and choose to link your Google account for seamless file storage and management.

### Authorization Request

The app redirects you to Google's OAuth 2.0 server for authorization. Here, you're presented with a consent screen detailing the specific permissions the app requests, such as accessing your Google Drive files.

### User Consent

You review the permissions requested. If you agree, you authorize the app to access your Google Drive data. This authorization process ensures you maintain control over what the app can and cannot access.

### Token Exchange

Upon authorization, Google issues an access token to the app. This access token serves as a credential that allows the app to make authorized API requests to Google Drive on your behalf.

### Interaction with Google APIs

With the access token, the app can interact with Google Drive APIs to upload, download, or modify files as per the permissions you granted during the consent process. For example, it can create new documents or retrieve existing files securely.

### Security and Refresh

Access tokens have a limited lifespan for security reasons. When the token expires, the app can use a refresh token (if provided) to obtain a new access token without requiring you to re-enter your credentials. This ensures seamless access to your Google Drive files over time.
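To make the flow concrete, here is a sketch of the two key HTTP exchanges behind the scenario above. The endpoints and parameter names are Google's documented ones; the client ID, client secret, redirect URI, and scope values are placeholders for illustration.

```
Step 2 (authorization request, via browser redirect):

https://accounts.google.com/o/oauth2/v2/auth
    ?client_id=YOUR_CLIENT_ID
    &redirect_uri=https://app.example.com/oauth/callback
    &response_type=code
    &scope=https://www.googleapis.com/auth/drive.file
    &access_type=offline

Step 4 (token exchange, server to server):

POST https://oauth2.googleapis.com/token
    code=AUTHORIZATION_CODE
    &client_id=YOUR_CLIENT_ID
    &client_secret=YOUR_CLIENT_SECRET
    &redirect_uri=https://app.example.com/oauth/callback
    &grant_type=authorization_code
```

The token response contains an `access_token` (and, because of `access_type=offline`, a `refresh_token`) that the app then sends with each Drive API call in an `Authorization: Bearer` header.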
## Conclusion

Google OAuth 2.0 revolutionizes how applications authenticate and access user data securely. By empowering users with control over their data and offering developers a robust framework for integration, OAuth 2.0 enhances both security and usability in the digital ecosystem.
yaswanth_bonumaddi
1,897,735
Top 10 String JavaScript Interview Coding Questions
1. Reverse a String function reverseString(str) { return...
0
2024-06-23T12:01:51
https://dev.to/vaibhav_shukla_newsletter/top-100-javascript-interview-coding-question-d63
javascript, string, webdev, beginners
**1. Reverse a String**

```javascript
function reverseString(str) {
  return str.split('').reverse().join('');
}

console.log(reverseString("hello")); // Output: "olleh"
```

Note: `str.split('')` breaks characters outside the Basic Multilingual Plane (such as many emoji); `[...str].reverse().join('')` is a safer variant for such input.

**2. How to convert a number to its name in English?**

```javascript
function numberToWords(number) {
  const singleDigits = ['', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'];
  const teens = ['ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen'];
  const tens = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'];

  if (number === 0) {
    return 'zero';
  }
  if (number < 10) {
    return singleDigits[number];
  }
  if (number < 20) {
    return teens[number - 10];
  }
  if (number < 100) {
    const rest = number % 10;
    // Avoid a trailing space for round tens such as 20 or 30
    return tens[Math.floor(number / 10)] + (rest ? ' ' + singleDigits[rest] : '');
  }
  if (number < 1000) {
    const rest = number % 100;
    // Only recurse when there is a remainder, so 100 stays "one hundred"
    return singleDigits[Math.floor(number / 100)] + ' hundred' + (rest ? ' ' + numberToWords(rest) : '');
  }
  if (number < 1000000) {
    const rest = number % 1000;
    return numberToWords(Math.floor(number / 1000)) + ' thousand' + (rest ? ' ' + numberToWords(rest) : '');
  }
  if (number < 1000000000) {
    const rest = number % 1000000;
    return numberToWords(Math.floor(number / 1000000)) + ' million' + (rest ? ' ' + numberToWords(rest) : '');
  }
  return 'Number too large to convert';
}

// Example usage:
console.log(numberToWords(123));       // Outputs: "one hundred twenty three"
console.log(numberToWords(12345));     // Outputs: "twelve thousand three hundred forty five"
console.log(numberToWords(123456789)); // Outputs: "one hundred twenty three million four hundred fifty six thousand seven hundred eighty nine"
```

**3. How to calculate the number of vowels and consonants in a string?**

```typescript
function countVowelsAndConsonants(str: string): { vowels: number, consonants: number } {
  // Define a regular expression to match vowels (case-insensitive)
  const vowelRegex = /[aeiou]/i;

  let vowels = 0;
  let consonants = 0;

  // Convert the string to lowercase for case-insensitive comparison
  const lowerCaseStr = str.toLowerCase();

  // Iterate through each character of the string
  for (let char of lowerCaseStr) {
    // Check if the character is a letter
    if (/[a-z]/i.test(char)) {
      // Increment the count of vowels if the character is a vowel
      if (vowelRegex.test(char)) {
        vowels++;
      } else {
        // Increment the count of consonants if the character is a consonant
        consonants++;
      }
    }
  }

  // Return the count of vowels and consonants
  return { vowels, consonants };
}

// Example usage:
const inputString = "Hello World";
const { vowels, consonants } = countVowelsAndConsonants(inputString);
console.log("Vowels:", vowels);         // Output: 3
console.log("Consonants:", consonants); // Output: 7
```

**4. How to find the two largest vowel counts in a string?**

```typescript
function calculateTwoLargestVowelCounts(str: string): number[] {
  // Define an array to store counts of each vowel (a, e, i, o, u)
  const vowelCounts: number[] = [0, 0, 0, 0, 0];

  // Convert the string to lowercase for case-insensitive comparison
  const lowerCaseStr = str.toLowerCase();

  // Iterate through each character of the string
  for (let char of lowerCaseStr) {
    // Check if the character is a vowel
    switch (char) {
      case 'a': vowelCounts[0]++; break;
      case 'e': vowelCounts[1]++; break;
      case 'i': vowelCounts[2]++; break;
      case 'o': vowelCounts[3]++; break;
      case 'u': vowelCounts[4]++; break;
    }
  }

  // Sort a copy of the array in descending order to find the two largest counts
  const sortedCounts = vowelCounts.slice().sort((a, b) => b - a);

  // Return the two largest counts
  return sortedCounts.slice(0, 2);
}

// Example usage:
const inputString = "Hello World";
const twoLargestCounts = calculateTwoLargestVowelCounts(inputString);
console.log("Two Largest Vowel Counts:", twoLargestCounts); // Output: [2, 1]
```
vaibhav_shukla_newsletter
1,897,733
Difference Between Branches
Git is a popular version control system that allows developers to manage their codebase efficiently. One of the essential features of Git is the ability to create and manage branches. Branches allow developers to work on different features or bug fixes simultaneously without interfering with each other's work. However, at some point, you may need to compare the changes between two branches. In this lab, you will learn how to view the difference between two branches using Git.
27,827
2024-06-23T11:54:44
https://labex.io/tutorials/git-difference-between-branches-12727
git, coding, programming, tutorial
## Introduction

Git is a popular version control system that allows developers to manage their codebase efficiently. One of the essential features of Git is the ability to create and manage branches. Branches allow developers to work on different features or bug fixes simultaneously without interfering with each other's work. However, at some point, you may need to compare the changes between two branches. In this lab, you will learn how to view the difference between two branches using Git.

## Difference Between Branches

You have been working on a project with your team, and you have created a branch named `feature-1` to work on a new feature. Your colleague has also created a branch named `feature-2` to work on a different feature. You want to compare the changes between the two branches to see what has been added, modified, or deleted. How can you view the difference between the two branches?

Suppose your GitHub account clones a repository called `git-playground` from `https://github.com/labex-labs/git-playground.git`. Follow the steps below:

1. Change to the repository's directory using the command `cd git-playground`.
2. Configure your GitHub account in this environment using the commands `git config --global user.name "Your Name"` and `git config --global user.email "your@email.com"`.
3. Create and switch to the `feature-1` branch using the command `git checkout -b feature-1`, then add "hello" to the `README.md` file, stage it, and commit with the message "Add new content to README.md", using the commands `echo "hello" >> README.md`, `git add .` and `git commit -am "Add new content to README.md"`.
4. Switch back to the `master` branch.
5. Create and switch to the `feature-2` branch using the command `git checkout -b feature-2`, then add "world" to the `index.html` file, stage it, and commit with the message "Update index.html file", using the commands `echo "world" > index.html`, `git add .` and `git commit -am "Update index.html file"`.
6. View the difference between the two branches using the command `git diff feature-1..feature-2`.

The output should display the difference between the `feature-1` and `feature-2` branches. This is how the final result will look:

```shell
diff --git a/README.md b/README.md
index b66215f..0164284 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,2 @@
 # git-playground
 Git Playground
-hello
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..cc628cc
--- /dev/null
+++ b/index.html
@@ -0,0 +1 @@
+world
```

## Summary

In this lab, you have learned how to view the difference between two branches using Git. By using the `git diff` command with the branch names separated by two dots, you can compare the changes between the two branches. (A related form, `git diff feature-1...feature-2` with three dots, instead compares `feature-2` against the common ancestor of the two branches.) This feature is useful when you want to merge changes from one branch to another or when you want to see what has been modified between two branches.

---

## Want to learn more?

- 🚀 Practice [Difference Between Branches](https://labex.io/tutorials/git-difference-between-branches-12727)
- 🌳 Learn the latest [Git Skill Trees](https://labex.io/skilltrees/git)
- 📖 Read More [Git Tutorials](https://labex.io/tutorials/category/git)

Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,897,732
How to get an FL Studio Mobile for free on Android
Obtaining FL Studio Mobile for free on Android through unofficial channels is illegal and can expose...
0
2024-06-23T11:53:26
https://dev.to/allenmaxde/how-to-get-an-fl-studio-mobile-for-free-on-android-3lhj
webdev, beginners, ai, productivity
Obtaining [FL Studio Mobile for free on Android](https://modversionapk.com/en/fl-studio-mobile/old-version) through unofficial channels is illegal and can expose your device to significant security risks. However, if you're looking for legitimate ways to get the software without paying full price, here are some options: 1. Look for Discounts and Promotions Image-Line, the developer of FL Studio Mobile, occasionally offers discounts and promotions. Keep an eye on their official website or sign up for their newsletter to stay informed about any sales or special offers. 2. Use Free Alternatives There are several free music production apps available on the Google Play Store that can offer similar functionality to FL Studio Mobile. Some of the popular free alternatives include: Caustic 3: A powerful music creation tool that offers many of the same features as FL Studio Mobile. n-Track Studio DAW: A digital audio workstation that provides a range of tools for music production. BandLab: A cloud-based music studio that allows you to create, share, and collaborate on music projects. 3. Trial Versions Sometimes, developers offer a trial version of their software. Check if FL Studio Mobile has a trial version available that you can use to evaluate the software before making a purchase. 4. Educational Discounts If you are a student or an educator, you might be eligible for educational discounts on software. Check if Image-Line offers any such discounts for FL Studio Mobile. 5. Bundled Offers FL Studio Mobile might be bundled with other products or services. For example, some hardware purchases might come with free software. Check if any music production hardware offers FL Studio Mobile as part of a package deal. 6. App Testing Programs Some developers offer free versions of their apps in exchange for feedback or as part of beta testing programs. Look out for any opportunities to participate in testing programs that might give you access to FL Studio Mobile for free. 7. Online Communities and Contests Join online music production communities and forums. Sometimes, members share information about giveaways, contests, or promotions where you can win free software licenses. Conclusion While it might be tempting to look for cracked or pirated versions of [FL Studio Mobile](https://modversionapk.com/en/fl-studio-mobile/), it's important to consider the legal and security implications. Sticking to legitimate methods not only ensures that you are supporting the developers but also protects your device from potential malware and other security threats. Explore the above options to get the best possible deal on FL Studio Mobile or find suitable free alternatives that meet your music production needs.
allenmaxde
1,786,118
Golang Web: POST Method
Introduction In this section of the series, we will be exploring how to send a POST HTTP...
17,548
2024-06-23T11:53:00
https://www.meetgor.com/golang-web-post-method
go, 100daysofgolang, webdev
## Introduction In this section of the series, we will be exploring how to send a `POST` HTTP request in golang. We will understand how to send a basic POST request, create an HTTP request, and parse JSON and structs into the request body, add headers, etc., in the following sections of this post. We will understand how to marshal the golang struct/types into JSON format, send files in the request, and handle form data with examples of each in this article. Let's answer a few questions first. ## What is a POST request? The POST method is a type of request that is used to send data to a server (a machine on the internet). Imagine you are placing an order at a restaurant. With a GET request, it would be like asking the waiter, "What kind of pizza do you have?" The waiter would respond by telling you the menu options (the information retrieved from the server). However, a POST request is more like giving your completed order to the waiter. You tell them the specific pizza you want, its size, and any additional toppings (the data you send). The waiter then takes this information (POST request) back to the kitchen (the server) to process it (fulfill your order). In the world of web development, POST requests are often used for things like submitting forms (e.g., contact forms, login forms), uploading files (e.g., photos, videos), creating new accounts, and sending data to be processed (e.g., online purchases). Here's an example of what the POST request might look like in this scenario: ```http POST /api/order HTTP/1.1 Host: example.com Content-Type: application/json Content-Length: 123 { "userID": 123, "orderID": 456, "items": [ { "itemID": 789, "name": "Pizza", "quantity": 2 }, { "itemID": 999, "name": "Burger", "quantity": 1 } ] } ``` In this example: * The `POST` method is used to send data to the server. * The `/api/order` is the endpoint of the server. * The `application/json` is the content type of the request. * The `123` is the content length of the request. * The `{"userID": 123, "orderID": 456, "items": [{"itemID": 789, "name": "Pizza", "quantity": 2}, {"itemID": 999, "name": "Burger", "quantity": 1}]}` is the body of the request. ## Why the need for a POST request? In the world of HTTP requests, we use the POST method to securely send data from a client (like a user's browser) to a server. This is crucial because the GET method, while convenient for retrieving data, has limitations: Imagine you are registering for an event via a Google Form; you type in your details on the webpage like name, email, address, phone number, and other personal details. If the website/app used the `GET` method to send the registration or any other authentication/privacy-related requests, it could expose the data in the URL itself. It would be something along the lines of `https://form.google.com/register/<form-name>-<id>/?name=John&phone_number=1234567890`; if someone maliciously sniffs your network and inspects the URL, your data will be exposed. That is the reason we need the `POST` method. ## How does a POST method work? A [POST](https://www.rfc-editor.org/rfc/rfc9110#POST) request is used to send data to a server to create or update (there is a separate method for updating) a resource. The client (browser/other APIs) sends a POST request to the server's API endpoint with the data in the request body. This data can be in formats like JSON, XML, or form data.
The server processes the POST request, validates and parses the data in the request body, makes any changes or creates resources based on that data, and returns a response. The response would contain a status code indicating the success or failure of the operation and may contain the newly created or updated resource in the response body. The client must check the response status code to verify the outcome and process the response accordingly. Unlike GET, POST can create new resources on the server. The body of a POST contains the data for creation while the URL identifies the resource to be created. Overall, POST transfers data to the server for processing, creation, or updating of resources. The status code is usually `201`, indicating the resource was successfully created, or `200`, simply indicating success. Some common steps for creating and sending a POST request as a developer include: * Defining the API endpoint * Clarifying the data format (JSON, language-native objects, XML, text, form-data, etc.) * Converting/marshalling the data * Attaching a header with `Content-Type` as the key and the format of the data as the value (e.g. `application/json` for JSON) * Sending the request The above steps are general for creating and sending a POST request; they are not specific to Golang. For Golang-specific steps, we need to dive a bit deeper, so let's get started. ## Basic POST method in Golang To send a POST request in golang, we need to use the `http` package. The `http` package has the `Post` method, which takes in 3 parameters, namely the URL, the Content-Type, and the Body. The body can be `nil` if the URL endpoint doesn't necessarily require a body. The `Content-Type` is a string; since we are just touching on how the POST request is constructed, we will see what the `Content-Type` string value should be in the later sections. > `http.Post(URL, Content-Type, Body)` ```go package main import ( "fmt" "net/http" ) func main() { apiURL := "https://reqres.in/api/users" // POST request resp, err := http.Post(apiURL, "", nil) // ideally the Content-Type header should be set to the relevant format // resp, err := http.Post(apiURL, "application/json", nil) if err != nil { panic(err) } fmt.Println(resp.StatusCode) fmt.Println(resp) defer resp.Body.Close() } ``` ```bash $ go run main.go 201 &{ 201 Created 201 HTTP/2.0 2 0 map[ Access-Control-Allow-Origin:[*] Cf-Cache-Status:[DYNAMIC] Cf-Ray:[861cd9aec8223e4b-BOM] Content-Length:[50] Content-Type:[application/json; charset=utf-8] Date:[Sat, 09 Mar 2024 17:40:28 GMT] Server:[cloudflare] ... ... ... X-Powered-By:[Express] ] {0xc00017c180} 50 [] false false map[] 0xc000156000 0xc00012a420 } ``` The above code sends the `POST` request to the [`https://reqres.in/api/users`](https://reqres.in/api/users) endpoint with an empty body and no specific `Content-Type` header. The response follows the [Response](https://pkg.go.dev/net/http#Response) structure. We can see we got a `201` status, which indicates the server received the POST request successfully. The API is a dummy API, so we don't care about the data being processed; we are just using the API as a placeholder for sending the POST request. We can use a `map[string]interface{}` to pass the data in the request body. The `json.Marshal` method is used to convert the map into JSON format. We will look into the details shortly in the next few examples.
```go package main import ( "bytes" "encoding/json" "fmt" "net/http" ) func main() { apiURL := "https://reqres.in/api/users" bodyMap := map[string]interface{}{ "name": "morpheus", "job": "leader", } requestBody, err := json.Marshal(bodyMap) if err != nil { panic(err) } body := bytes.NewBuffer(requestBody) resp, err := http.Post(apiURL, "application/json", body) if err != nil { panic(err) } fmt.Println(resp.StatusCode) defer resp.Body.Close() } ``` ```bash $ go run main.go 201 ``` The above code sends the `POST` request to the [`https://reqres.in/api/users`](https://reqres.in/api/users) endpoint with the data in the request body in JSON format. ## Creating a POST request in Golang We can construct the POST request with the [NewRequest](https://pkg.go.dev/net/http#NewRequest) method. The method takes in 3 parameters, namely the `method` (e.g. `POST`, `GET`), the `URL` and the `body` (if there is any). We can then add extra information to the headers or the Request object after constructing the basic HTTP [Request](https://pkg.go.dev/net/http#Request) object. ```go package main import ( "fmt" "net/http" ) func main() { apiURL := "https://reqres.in/api/users" req, err := http.NewRequest(http.MethodPost, apiURL, nil) if err != nil { panic(err) } resp, err := http.DefaultClient.Do(req) if err != nil { panic(err) } fmt.Println(resp.StatusCode) //fmt.Println(resp) defer resp.Body.Close() } ``` ```bash $ go run main.go 201 ``` In the above example, we have created an HTTP Request as the `POST` method, with [`https://reqres.in/api/users`](https://reqres.in/api/users) as the URL, and no body. This constructs an HTTP Request object, which can be sent as the parameter to the [http.DefaultClient.Do](http://http.DefaultClient.Do) method, which is the default client for the request we sent in the earlier examples as `http.Get` or [`http.Post`](http://http.Post) methods. We can implement a custom client as well, and then apply `Do` the method with the request parameters. The `Do` method returns the `Request` object or the `error` if any. More on the customizing Client will be explained in a separate post in the series. The response is also in the same format as the [Response](https://pkg.go.dev/net/http#Response) structure that we have seen earlier. This section of the series aims to construct a post request, and not to parse the response, we have already understood the parsing of the response in the [Get method](https://www.meetgor.com/golang-web-get-method/#?:~:text=Parsing%20the%20JSON%20body%20with%20structs) section of the series. ### Parsing objects to JSON for POST method request We might have a golang object that we want to send as a body to an API in the POST request, for that we need to convert the golang struct object to JSON. We can do this by using the [Marshal](https://pkg.go.dev/encoding/json#Marshal) or the [Encode](https://pkg.go.dev/encoding/json#Encoder.Encode) method for serialization of the golang struct object to JSON. #### Using Marshal method Marshaling is the process of converting data from a data structure into a format suitable for transmission over a network or for storage. It's commonly used to convert native objects in a programming language into a serialized format, typically a byte stream, that can be transmitted or stored efficiently. You might get a question here, what is the difference between `Marshalling` and `Serialization`? Well, Serialization, is a broader term that encompasses marshalling. 
It refers to the process of converting an object or data structure into a format that can be stored or transmitted and later reconstructed into the original object. Serialization may involve converting data into byte streams, XML, JSON, or other formats. So, in summary, marshaling specifically deals with converting native objects into a format suitable for transmission, while serialization encompasses the broader process of preparing data for storage or transmission. The `json` package has the [Marshal](https://pkg.go.dev/encoding/json#Marshal) method that converts the golang object into JSON. The `Marshal` method takes in a parameter as the struct object (of type `any`) and returns a byte slice `[]byte` and an error (if any). ```go package main import ( "bytes" "encoding/json" "fmt" "net/http" ) type User struct { Name string `json:"name"` Salary int `json:"salary"` Age int `json:"age"` } func main() { user := User{ Name: "Alice", Salary: 50000, Age: 25, } apiURL := "https://dummy.restapiexample.com/api/v1/create" // marshalling process // converting Go specific data structure/types to JSON bodyBytes, err := json.Marshal(user) if err != nil { panic(err) } fmt.Println(string(bodyBytes)) // reading json into a buffer/in-memory body := bytes.NewBuffer(bodyBytes) // post request resp, err := http.Post(apiURL, "application/json", body) if err != nil { panic(err) } fmt.Println(resp.StatusCode) defer resp.Body.Close() } ``` ```bash $ go run main.go {"name":"Alice","salary":50000,"age":25} 200 ``` In the above example, we have created a struct `User` with fields `Name`, `Salary`, and `Age`; the JSON tags label each key in the JSON output for the respective fields in the struct. We create an object `user` of type `User` with the values `Alice`, `50000`, and `25` respectively. We call the `json.Marshal` method with the parameter `user` that represents the struct object `User`; the method returns a slice of bytes and an error, either of which could be nil. If we look at the stringified representation of the byte slice, we can see something like `{"name":"Alice","salary":50000,"age":25}`, which is a JSON string for the user struct. We can't pass the byte slice directly as the body in the POST request; we need an `io.Reader` object, so we load the byte slice `bodyBytes` into a buffer and pass that as the body for the POST request. We then send a `POST` request to the endpoint [`https://dummy.restapiexample.com/api/v1/create`](https://dummy.restapiexample.com/api/v1/create) with the content type as `application/json` and with the body as `body`, which is an `io.Reader` object backed by an in-memory buffer. In brief, we can summarize the marshaling of the golang object into JSON with the `Marshal` function in the following steps: * Defining the structure as per the request body * Creating the struct object for passing the data as the body of the request * Calling the `json.Marshal` function to convert the object to JSON (parameter as the struct object, `any` type) * Loading the byte slice into a buffer with `bytes.NewBuffer()` * Sending the POST request to the endpoint with the body as the `io.Reader` object and content type as `application/json` #### Using Encode method We can even use the [Encoder.Encode](https://pkg.go.dev/encoding/json#Encoder.Encode) method to serialize the golang struct object to JSON. Firstly, we should have the struct defined as per the request body that the particular API takes; we can make use of the JSON tags and the `omitempty` and omit (`-`) options to make the marshaling process work accordingly.
We can then create the object of that particular struct with the data we require to be created as a resource with the POST request on that API service. Thereafter we can create an empty buffer object with [bytes.Buffer](https://pkg.go.dev/bytes#Buffer); this buffer object will be used to initialize the [Encoder](https://pkg.go.dev/encoding/json#Encoder) object with the [NewEncoder](https://pkg.go.dev/encoding/json#NewEncoder) method. This gives access to the [Encode](https://pkg.go.dev/encoding/json#Encoder.Encode) method, which takes in the struct object (`any` type) and populates the buffer we initialized with the `NewEncoder` method. Later we can access that buffer and pass it to the POST request as the body. Let's understand it better with an example. ```go package main import ( "bytes" "encoding/json" "fmt" "net/http" ) type User struct { Name string Salary int Age int } func main() { user := User{ Name: "Alice", Salary: 50000, Age: 25, } apiURL := "https://dummy.restapiexample.com/api/v1/create" var bodyBuffer bytes.Buffer var encoder = json.NewEncoder(&bodyBuffer) err := encoder.Encode(user) if err != nil { panic(err) } resp, err := http.Post(apiURL, "application/json", &bodyBuffer) if err != nil { panic(err) } fmt.Println(resp.StatusCode) fmt.Println(resp) defer resp.Body.Close() } ``` Here, we have created a struct `User` with fields `Name`, `Salary`, and `Age`, and we initialize `user` as an object of the `User` struct. Then we create a buffer `bodyBuffer` of type `bytes.Buffer`; this is the actual buffer that we will send as the body. Further, we initialize the `Encoder` object as `encoder` with the `json.NewEncoder` method by passing a reference to `bodyBuffer` as the parameter. Since `bytes.Buffer` implements the `io.Writer` interface, we can pass the `bodyBuffer` to the `NewEncoder` method. This will create the `Encoder` object, which in turn will give us access to the `Encode` method, where we will pass the struct instance so it can populate the buffer with which we initialized the `Encoder` object earlier. Now that we have the `encoder` object, which gives us access to the `Encode` method, we call the `Encode` method with the parameter `user`, which is a `User` struct instance/object. The `Encode` method will populate the `bodyBuffer` object, or it will result in an error if anything goes wrong (the data is incorrectly parsed or is not in the required format). We can call the `Post` method with the initialized URL, the `Content-Type` as `application/json` since we have converted the struct instance to a JSON object, and the body as the reference to the buffer, `&bodyBuffer`. So, the steps for parsing struct instances into JSON objects with the `Encoder.Encode` method are as follows: * Defining the structure as per the request body * Creating the struct object for passing the data as the body of the request * Creating an empty `bytes.Buffer` object as an in-memory buffer * Initializing the `Encoder` object with the `NewEncoder` method by passing a reference to `bodyBuffer` as the parameter * Calling the `Encode` method with the struct instance/object as the parameter * Sending the POST request to the endpoint with the content type as `application/json` and the body as the reference to the buffer The results are the same as in the above example; only the way we have serialized the struct instance to a JSON object is different.
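As a small addition (not part of the original examples), the natural mirror image of `Encode` for reading is `json.NewDecoder`, which decodes JSON straight from any `io.Reader`, such as a response body, without buffering it first. Here is a minimal sketch combining both, reusing the same dummy API as above; the generic `map[string]interface{}` result type is an assumption for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type User struct {
	Name   string `json:"name"`
	Salary int    `json:"salary"`
	Age    int    `json:"age"`
}

func main() {
	user := User{Name: "Alice", Salary: 50000, Age: 25}

	// Encode the struct straight into an in-memory buffer.
	var bodyBuffer bytes.Buffer
	if err := json.NewEncoder(&bodyBuffer).Encode(user); err != nil {
		panic(err)
	}

	resp, err := http.Post("https://dummy.restapiexample.com/api/v1/create", "application/json", &bodyBuffer)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode the JSON response straight from resp.Body (an io.Reader),
	// mirroring how Encode wrote straight into the buffer.
	var result map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println(result)
}
```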
### Parsing JSON to POST request We have seen how we can parse golang struct instances to JSON and then send the POST request, but what if we already had the JSON string with us and wanted to send the request? Well, that's much easier, right? We have already passed a JSON string to the POST request by loading a slice of bytes into a buffer, so we just need to convert the string to a slice of bytes, which is quite an easy task, and then load that byte slice into the buffer. ```go package main import ( "bytes" "fmt" "net/http" ) func main() { // dummy api apiURL := "https://dummy.restapiexample.com/api/v1/create" // json data data := `{ "name": "Alice", "job": "Teacher" }` body := bytes.NewBuffer([]byte(data)) // POST request resp, err := http.Post(apiURL, "application/json", body) if err != nil { panic(err) } fmt.Println(resp.StatusCode) fmt.Println(resp) defer resp.Body.Close() } ``` In the example above, we already have a JSON string `data` with the keys `name` and `job`; it is not a Go object but a stringified JSON. We can convert the stringified JSON to a slice of bytes using the `[]byte` conversion. Further, we have used the `bytes.NewBuffer` method to load the byte slice into an `io.Reader` object. This object returned by `bytes.NewBuffer` will serve as the body for the POST request. ### Parsing JSON to objects in Golang from POST method response ```go package main import ( "bytes" "encoding/json" "fmt" "io" "net/http" ) type User struct { Name string `json:"name"` Salary int `json:"salary"` Age string `json:"age"` ID int `json:"id,omitempty"` } type UserResponse struct { Status string `json:"status"` Data User `json:"data"` } func main() { user := User{ Name: "Alice", Salary: 50000, Age: "25", } apiURL := "https://dummy.restapiexample.com/api/v1/create" // marshalling process // converting Go specific data structure/types to JSON bodyBytes, err := json.Marshal(user) if err != nil { panic(err) } fmt.Println(string(bodyBytes)) // reading json into a buffer/in-memory body := bytes.NewBuffer(bodyBytes) // post request resp, err := http.Post(apiURL, "application/json", body) if err != nil { panic(err) } fmt.Println(resp.StatusCode) fmt.Println(resp) defer resp.Body.Close() // Read response body respBody, err := io.ReadAll(resp.Body) if err != nil { panic(err) } // unmarshalling process // converting JSON to Go specific data structure/types var userResponse UserResponse if err := json.Unmarshal(respBody, &userResponse); err != nil { panic(err) } fmt.Println(userResponse) fmt.Println(userResponse.Data) } ``` ```bash {success {Alice 50000 25 3239}} {Alice 50000 25 577} ``` The above example is a POST request with a struct instance being loaded as a JSON string and then sent as a buffer to the API endpoint; it also reads the response body into `respBody` with `io.ReadAll` and unmarshals it into the `userResponse` object using the `UserResponse` structure. This example shows the entire process of JSON data parsing for a POST request. ### Sending Form data in a POST request We can also send data to a POST request as form data, like the forms we use in HTML. Golang has the `net/url` package to handle the form data. The form data is sent in the `application/x-www-form-urlencoded` format.
```go package main import ( "encoding/json" "fmt" "io" "net/http" "net/url" "strings" ) type ResponseLogin struct { Token string `json:"token"` } func main() { // dummy api apiURL := "https://reqres.in/api/login" // Define form data formData := url.Values{} formData.Set("email", "eve.holt@reqres.in") formData.Set("password", "cityslicka") // Encode the form data fmt.Println(formData.Encode()) reqBody := strings.NewReader(formData.Encode()) fmt.Println(reqBody) // Make a POST request with form data resp, err := http.Post(apiURL, "application/x-www-form-urlencoded", reqBody) if err != nil { panic(err) } defer resp.Body.Close() // Print response status code fmt.Println("Status Code:", resp.StatusCode) // Read response body respBody, err := io.ReadAll(resp.Body) if err != nil { panic(err) } token := ResponseLogin{} json.Unmarshal(respBody, &token) fmt.Println(token) } ``` ```bash $ go run main.go email=eve.holt%40reqres.in&password=cityslicka &{email=eve.holt%40reqres.in&password=cityslicka 0 -1} Status Code: 200 {QpwL5tke4Pnpja7X4} ``` In the above example, we set a `formData` with the values of `email` and `password` which are `url.Values` object. The `url.Values` the object is used to store the key-value pairs of the form data. The `formData` is encoded with the `url.Encode` method, We load the encoded string to a `buffer` with `strings.NewReader` which implements the `io.Reader` interface, so that way we can pass that object as the body to the post request. We send the `POST` request to the endpoint [`https://reqres.in/api/login`](https://reqres.in/api/login) with the content type as `application/x-www-form-urlencoded` and with the body as `reqBody` which implements the `io.Reader` interface as an in-memory buffer. The response from the request is read into the buffer with `io.ReadAll` method and we can `Unmarshal` the stream of bytes as a buffer into the `ResponseLogin` struct object. The output shows the `formData` as encoded string `email=eve.holt%`[`40reqres.in`](http://40reqres.in)`&password=cityslicka` as `@` is encoded to `%40`, then we wrap the `formData` in a `strings.NewReader` object which is a buffer that implements `io.Reader` interface, hence we can see the result as the object. The status code for the request is `200` indicating the server received the `form-data` in the body and upon unmarshalling, we get the token as a response to the POST request which was a dummy login API. This way we have parsed the form-data to the body of a POST request. ### Sending File in a POST request We have covered, parsing text, JSON, and form data, and now we need to move into sending files in a POST request. We can use the `multipart` package to parse files into the request body and set appropriate headers for reading the file from the API services. We first read the file contents [`os.Open`](http://os.Open) which returns a reference to the `file` object or an error. We create an empty `bytes.Buffer` object as `body` which will be populated later. The [multipart.NewWriter](https://pkg.go.dev/mime/multipart#NewWriter) method takes in the `io.Writer` object which will be the `body` as it is an `bytes.Buffer` object that implements the `io.Writer` interface. This will initialize the [Writer](https://pkg.go.dev/mime/multipart#Writer) object in the `multipart` package. 
We create a form field in the `Writer` object with the [CreateFormFile](https://pkg.go.dev/mime/multipart#Writer.CreateFormFile) method, which takes in the `fieldName` as the name of the field and the `fileName` as the name of the file which will be read later in the multipart form. The method returns either the part or an error. The `part` is an object that implements the `io.Writer` interface. Since we have stored the file contents in the `file` object, we copy the contents into the form field with the [Copy](https://pkg.go.dev/io#Copy) method. Since the `part` returned from `CreateFormFile` implements the `io.Writer` interface, we can use it to copy the contents from source to destination. For the `Copy` method, the destination (an `io.Writer`) is the first parameter and the source (an `io.Reader`) is the second parameter. This `Copy` call will populate the buffer initialized earlier in the `NewWriter` method. This gives us a buffer that has the file contents in it. We can pass this buffer to the POST request as the `body` parameter. We also need to make sure we close the `Writer` object after copying the contents of the file. We can extract the content type of the form, which will serve as the `Content-Type` of the request. Let's make the explanation clear with an example. ```go package main import ( "bytes" "encoding/json" "fmt" "io" "mime/multipart" "net/http" "os" ) type ResponseFile struct { Files map[string]string `json:"files"` } func main() { apiURL := "http://postman-echo.com/post" fileName := "sample.csv" file, err := os.Open(fileName) if err != nil { panic(err) } defer file.Close() body := &bytes.Buffer{} writer := multipart.NewWriter(body) part, err := writer.CreateFormFile("csvFile", fileName) if err != nil { panic(err) } _, err = io.Copy(part, file) if err != nil { panic(err) } contentType := writer.FormDataContentType() fmt.Println(contentType) writer.Close() // print the multipart body for inspection (as discussed below) fmt.Println(body.String()) resp, err := http.Post(apiURL, contentType, body) if err != nil { panic(err) } defer resp.Body.Close() fmt.Println("Status Code:", resp.StatusCode) respBody, err := io.ReadAll(resp.Body) if err != nil { panic(err) } token := ResponseFile{} json.Unmarshal(respBody, &token) fmt.Println(token) fmt.Println(token.Files[fileName]) } ``` ```bash multipart/form-data; boundary=7e0eacfff890be395eba19c70415c908124b503a56f23ebeec0ab3c665ca --619671ea2c0aa47ca6664a7cda422169d73f3b8a089c659203f5413d03de Content-Disposition: form-data; name="csvFile"; filename="sample.csv" Content-Type: application/octet-stream User,City,Age,Country Alex Smith,Los Angeles,20,USA John Doe,New York,30,USA Jane Smith,Paris,25,France Bob Johnson,London,40,UK --619671ea2c0aa47ca6664a7cda422169d73f3b8a089c659203f5413d03de-- Status Code: 200 {map[sample.csv:data:application/octet-stream;base64,VXNlcixDaXR5LEFnZSxDb3VudHJ5CkFsZXggU21pdGgsTG9zIEFuZ2VsZXMsMjAsVVNBCkpvaG4gRG9lLE5ldyBZb3JrLDMwLFVTQQpKYW5lIFNtaXRoLFBhmlzLDI1LEZyYW5jZQpCb2IgSm9obnNvbixMb25kb24sNDAsVUsK]} data:application/octet-stream;base64,VXNlcixDaXR5LEFnZSxDb3VudHJ5CkFsZXggU21pdGgsTG9zIEFuZ2VsZXMsMjAsVVNBCkpvaG4gRG9lLE5ldyBZb3JrLDMwLFVTQQpKYW5lIFNtaXRoLFBhmlzLDI1LEZyYW5jZQpCb2IgSm9obnNvbixMb25kb24sNDAsVUsK ``` In the above example, we first read the file `sample.csv` into the `file` object with the `os.Open` method; this returns a reference to the file object or an error if any arises while opening the file.
Then we create an empty `bytes.Buffer` object, which will serve as the body of the POST request later, as it will get populated with the file contents in the form of `multipart/form-data`. We initialize the `Writer` object with the `multipart.NewWriter` method, passing `body`, the empty buffer, as the parameter. The method will return a reference to the `multipart.Writer` object. With the `Writer` object we access the `CreateFormFile` method, which takes in the `fieldName` as the name of the field and the `fileName` as the name of the file. The method will return either the part or an error. The `part` in this case is the reference to the `io.Writer` object that will be used to write the contents of the uploaded file. Then, we can use the `io.Copy` method to copy the contents from the `io.Reader` object to the `io.Writer` object; the first parameter is the destination and the second parameter is the source. In the example, we call `io.Copy(part, file)`, which will copy the contents of `file` to the `part` buffer. We get the `Content-Type` by calling the [Writer.FormDataContentType](https://pkg.go.dev/mime/multipart#Writer.FormDataContentType) method. This returns `multipart/form-data; boundary=7e0eacfff890be395eba19c70415c908124b503a56f23ebeec0ab3c665ca`, which will serve as the `Content-Type` for the POST request. We need to make sure we close the `Writer` object with the `Close` method. We just print `body.String()` to get a look at what the actual body looks like; we can see there is a form for the file as `form-data` with keys like `Content-Type`, `Content-Disposition`, etc. The file has the `Content-Type` as `application/octet-stream` and the actual content is rendered in the output. The dummy API responds with a 200 status code and also sends the JSON data with the name of the file as the key and the value as the `base64` encoded value of the file contents. This indicates that we were able to upload the file to the server API using a POST request. Well done! I have also included some more examples of POST requests with files [here](https://github.com/Mr-Destructive/100-days-of-golang/blob/main/web/methods/post/file_2.go), which extend the above example by taking the encoded values and decoding them to get the actual contents of the file back. ## Best Practices for POST method Here are some of the best practices for the POST method, followed to make sure you consume or create POST requests in the most secure, efficient, and graceful way. ### Always Close the Response Body Ensure that you close the response body after reading from it. Use `defer response.Body.Close()` to automatically close the body when the surrounding function returns. This is crucial for releasing associated resources like network connections or file descriptors. Failure to close the response body can lead to memory leaks, particularly with a large volume of requests. Properly closing the body prevents resource exhaustion and maintains efficient memory usage. ### Client Customization Utilize the [Client](https://pkg.go.dev/net/http#Client) struct to customize the HTTP client behavior. By using a custom client, you can set timeouts, headers, user agents, and other configurations without modifying the `DefaultClient` provided by the `http` package. This approach allows for flexibility and avoids repetitive adjustments to the client configuration for each request.
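To make this concrete, here is a minimal sketch of a customized client; the timeout value and the reuse of the dummy API from earlier are illustrative assumptions, not prescriptions:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	// A custom client with a timeout, used instead of http.DefaultClient.
	client := &http.Client{
		Timeout: 10 * time.Second, // illustrative value
	}

	body := strings.NewReader(`{"name": "morpheus", "job": "leader"}`)
	req, err := http.NewRequest(http.MethodPost, "https://reqres.in/api/users", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode)
}
```

The same client can be reused across many requests, so configuration like the timeout only needs to be set once.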
### Set Content-Type Appropriately Ensure that you set the `Content-Type` header according to the request payload. Correctly specifying the Content-Type is crucial for the server to interpret the request payload correctly. Failing to set the Content-Type header accurately may result in the server rejecting the request. Always verify and match the Content-Type header with the content being sent in the POST request to ensure smooth communication with the server. That's it for the 34th part of the series; all the source code for the examples is linked on GitHub in the [100 days of Golang](https://github.com/Mr-Destructive/100-days-of-golang/tree/main/web/methods/post/) repository. {% embed https://github.com/Mr-Destructive/100-days-of-golang %} ## Reference * [Postman POST API](https://www.postman.com/postman/workspace/postman-answers/documentation/13455110-00378d5c-5b08-4813-98da-bc47a2e6021d) (For POST request with file upload) * [Golang net/http Package](https://pkg.go.dev/net/http) ## Conclusion That's it from this post of the series, a post on the POST method in golang :) We have covered topics like creating basic POST requests, marshalling golang types into JSON format, parsing form data, sending a POST request with files, and best practices for the POST method. Hope you found this article helpful. If you have any queries, questions, or feedback, please let me know in the comments or on my social handles. Happy Coding :)
mr_destructive
1,897,730
useEffect, useRef and useCallback with 1 project
Password Generator Thoughts There is a method currently running that...
0
2024-06-23T11:47:11
https://dev.to/geetika_bajpai_a654bfd1e0/useeffect-useref-and-usecallback-with-1-project-f7e
## Password Generator ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyqpb6sa6nn84m68oqf1.png) ### Thoughts 1. There is a method currently running that generates random text by default. 2. This method needs to execute repeatedly because any changes in the input parameters, such as length, or toggling checkboxes for numbers and characters, result in new random text being generated. 3. Since this method runs frequently, we should consider optimization techniques. We can leverage memoization, which is inherently supported by React hooks, to optimize these methods effectively. 4. The copy button should be configured to specifically target and copy only the text within the designated text box. Don't panic; read the full article and you will understand. This article was written after watching Hitesh Choudhary's videos on YouTube and reading the documentation. 1. Create a Vite project: `npm create vite@latest` 2. Navigate to the project directory: `cd my-vite-project` 3. Install Tailwind and its peer dependencies: `npm install -D tailwindcss autoprefixer` 4. Initialize Tailwind: `npx tailwindcss init -p` 5. Install dependencies: npm install 6. Run the development server: npm run dev 7. Configure `tailwind.config.js` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eu98p13e5548ygqs1gxl.png) Now, we are going to start with the code. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujkdhwqvf2uuz6t56sp1.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zekuyybkllqrg6yumf12.png) ## Memoization with useCallback <h3>`passwordGenerator` Function</h3> 1. <u>Purpose:</u> This function generates a random password based on the specified length and character set. 2. <u>Memoization:</u> The useCallback hook memoizes the passwordGenerator function, ensuring that it only gets recreated if any of the dependencies (length, numberAllowed, charAllowed, setPassword) change. This helps in avoiding unnecessary function re-creations and optimizes performance. <h3>`copyPasswordToClipboard` Function</h3> 1. <u>Purpose:</u> This function copies the generated password to the clipboard. 2. <u>Memoization: </u>The useCallback hook memoizes the copyPasswordToClipboard function, ensuring it only gets recreated if the password dependency changes. This optimizes performance by preventing unnecessary re-creations of the function. <h3>useEffect Hook</h3> 1. <u>Purpose:</u> This effect runs the passwordGenerator function whenever the dependencies (length, numberAllowed, charAllowed, passwordGenerator) change. 2. <u>Dependency: </u>The passwordGenerator function is memoized, which means the effect will only rerun when the actual logic inside the generator needs to change, optimizing the component’s rendering. <h4>Explanation of Memoization</h4> 1. <u>Why Use Memoization?: </u>Memoization helps to avoid unnecessary recalculations or re-creations of functions, especially useful in functional components where functions might be redefined on each render. 2. <u>Performance Optimization: </u>By using useCallback, the component ensures that passwordGenerator and copyPasswordToClipboard are only recreated when their respective dependencies change. This can reduce the rendering overhead and improve the overall performance of the component. <h4>Component Functionality</h4> 1. <u>State Management:</u> The component uses useState to manage length, numberAllowed, charAllowed, and password. 2.
<u>Refs:</u> The passwordRef is used to reference the password input field for selecting and copying the password. 3. <u>Event Handling:</u> The component handles various user interactions like changing the password length, toggling the inclusion of numbers and special characters, and copying the password to the clipboard. Overall, the use of useCallback in this component ensures that functions are only recreated when necessary, optimizing the performance by avoiding unnecessary re-renders and re-creations.
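Since the component code above appears only as screenshots, here is a minimal sketch of what the described hooks could look like together. The state names, character sets, and markup are reconstructed from the description and are assumptions, not the exact original code:

```jsx
import { useState, useCallback, useEffect, useRef } from "react";

function PasswordGenerator() {
  const [length, setLength] = useState(8);
  const [numberAllowed, setNumberAllowed] = useState(false);
  const [charAllowed, setCharAllowed] = useState(false);
  const [password, setPassword] = useState("");
  const passwordRef = useRef(null);

  // Memoized generator: recreated only when its dependencies change.
  const passwordGenerator = useCallback(() => {
    let pass = "";
    let str = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    if (numberAllowed) str += "0123456789";
    if (charAllowed) str += "!@#$%^&*()_+";
    for (let i = 0; i < length; i++) {
      pass += str.charAt(Math.floor(Math.random() * str.length));
    }
    setPassword(pass);
  }, [length, numberAllowed, charAllowed, setPassword]);

  // Memoized copy handler: targets only the password input via the ref.
  const copyPasswordToClipboard = useCallback(() => {
    passwordRef.current?.select();
    window.navigator.clipboard.writeText(password);
  }, [password]);

  // Regenerate whenever the length or the toggles change.
  useEffect(() => {
    passwordGenerator();
  }, [length, numberAllowed, charAllowed, passwordGenerator]);

  return (
    <div>
      <input type="text" value={password} readOnly ref={passwordRef} />
      <button onClick={copyPasswordToClipboard}>Copy</button>
      <input
        type="range"
        min={6}
        max={32}
        value={length}
        onChange={(e) => setLength(Number(e.target.value))}
      />
      <input
        type="checkbox"
        checked={numberAllowed}
        onChange={() => setNumberAllowed((prev) => !prev)}
      />
      <input
        type="checkbox"
        checked={charAllowed}
        onChange={() => setCharAllowed((prev) => !prev)}
      />
    </div>
  );
}

export default PasswordGenerator;
```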
geetika_bajpai_a654bfd1e0
1,897,725
Beginner's Guide to Setting Up a Django Project
Django is a powerful web framework for Python that allows you to build web applications quickly and...
0
2024-06-23T11:46:19
https://dev.to/rupesh_mishra/beginners-guide-to-setting-up-a-django-project-ep
django, python, programming, backenddevelopment
Django is a powerful web framework for Python that allows you to build web applications quickly and efficiently. This guide will walk you through the process of setting up a Django project from scratch, perfect for beginners who want to get started with web development using Django. ## Table of Contents 1. [Installing Python](#1-installing-python) 2. [Setting Up a Virtual Environment](#2-setting-up-a-virtual-environment) 3. [Installing Django](#3-installing-django) 4. [Creating a Django Project](#4-creating-a-django-project) 5. [Creating a Django App](#5-creating-a-django-app) 6. [Configuring the Database](#6-configuring-the-database) 7. [Creating Models](#7-creating-models) 8. [Creating Views](#8-creating-views) 9. [Creating Templates](#9-creating-templates) 10. [Configuring URLs](#10-configuring-urls) 11. [Running the Development Server](#11-running-the-development-server) ## 1. Installing Python Before we start with Django, make sure you have Python installed on your system. Different Django versions require different Python versions. To check if Python is installed, open your terminal or command prompt and type: ``` python --version ``` If Python is not installed, download and install it from the official Python website (https://www.python.org/downloads/). ## 2. Setting Up a Virtual Environment It's a good practice to create a virtual environment for each Django project. This keeps your project dependencies isolated from other projects. To create a virtual environment: 1. Open your terminal or command prompt. 2. Navigate to the directory where you want to create your project. 3. Run the following command: ``` python -m venv myenv ``` This creates a new virtual environment named "myenv". To activate the virtual environment: - On Windows: ``` myenv\Scripts\activate ``` - On macOS and Linux: ``` source myenv/bin/activate ``` You should see your prompt change to indicate that the virtual environment is active. ## 3. Installing Django With your virtual environment activated, install Django using pip: ``` pip install django ``` ## 4. Creating a Django Project Now that Django is installed, let's create a new project: ``` django-admin startproject myproject ``` This creates a new directory called "myproject" with the basic Django project structure. Navigate into the project directory: ``` cd myproject ``` ## 5. Creating a Django App Django projects are made up of one or more apps. Let's create an app for our project: ``` python manage.py startapp myapp ``` This creates a new directory called "myapp" with the basic app structure. Open the `myproject/settings.py` file and add your new app to the `INSTALLED_APPS` list: ```python INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'myapp', # Add this line ] ``` ## 6. Configuring the Database Django uses SQLite as its default database, which is fine for development. The database configuration is already set up in `myproject/settings.py`: ```python DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', } } ``` To create the database tables, run: ``` python manage.py migrate ``` ## 7. Creating Models Models define the structure of your database.
Let's create a simple model in `myapp/models.py`: ```python from django.db import models class Item(models.Model): name = models.CharField(max_length=100) description = models.TextField() def __str__(self): return self.name ``` After creating or modifying models, run these commands to create and apply migrations: ``` python manage.py makemigrations python manage.py migrate ``` ## 8. Creating Views Views handle the logic of your application. Create a simple view in `myapp/views.py`: ```python from django.shortcuts import render from .models import Item def item_list(request): items = Item.objects.all() return render(request, 'myapp/item_list.html', {'items': items}) ``` ## 9. Creating Templates Templates are HTML files that define how your data is displayed. Create a new directory `myapp/templates/myapp/` and add a file `item_list.html`: ```html <!DOCTYPE html> <html> <head> <title>Item List</title> </head> <body> <h1>Items</h1> <ul> {% for item in items %} <li>{{ item.name }} - {{ item.description }}</li> {% endfor %} </ul> </body> </html> ``` ## 10. Configuring URLs To make your view accessible, you need to configure URLs. First, in `myproject/urls.py`: ```python from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), path('', include('myapp.urls')), ] ``` Then, create a new file `myapp/urls.py`: ```python from django.urls import path from . import views urlpatterns = [ path('', views.item_list, name='item_list'), ] ``` ## 11. Running the Development Server You're now ready to run your Django project! Use this command: ``` python manage.py runserver ``` Visit `http://127.0.0.1:8000/` in your web browser to see your Django application in action. Congratulations! You've set up a basic Django project. This foundation will allow you to expand your project by adding more models, views, and templates as needed. Follow me on my social media platforms for more updates and insights: - **Twitter**: [@rupeshmisra2002](https://twitter.com/rupeshmisra2002) - **LinkedIn**: [Rupesh Mishra](https://www.linkedin.com/in/rupeshmishra2002) - **GitHub**: [Rupesh Mishra](https://github.com/solvibrain) Remember to always activate your virtual environment before working on your project, and to run migrations whenever you make changes to your models.
rupesh_mishra
1,897,729
Introduction to Object-Oriented Programming (OOP) in Python
To Support My YouTube Channel: Click Here I need like 630 subs to reach 1000 and get...
0
2024-06-23T11:46:19
https://dev.to/vincod/introduction-to-object-oriented-programming-oop-in-python-4o4d
webdev, python, oop, javascript
_________________________________________________ To Support My YouTube Channel: [Click Here](https://youtube.com/@kwargdevs) I need like 630 subs to reach 1000 and get monetized. Thanks in Advance 🙏 _________________________________________________ Object-Oriented Programming (OOP) is a programming paradigm that uses "objects" to design software. These objects can be anything you want to model in your program, such as a person, car, or bank account. Each object can have properties (attributes) and actions (methods) it can perform. Let's break down the basic concepts of OOP: 1. **Class**: A blueprint for creating objects. It defines a set of attributes and methods that the created objects (instances) will have. 2. **Object**: An instance of a class. When a class is defined, no memory is allocated until an object of that class is created. 3. **Attribute**: A variable that belongs to an object or class. Attributes are used to store information about the object. 4. **Method**: A function that belongs to an object or class. Methods define the behaviors of an object. ### Example Let's create a simple class to understand these concepts better. #### Step 1: Define a Class ```python class Dog: # This is a class attribute species = "Canis familiaris" # The __init__ method initializes the object's attributes def __init__(self, name, age): self.name = name # instance attribute self.age = age # instance attribute # An instance method def bark(self): return f"{self.name} says woof!" ``` #### Step 2: Create an Object (Instance of the Class) ```python my_dog = Dog("Buddy", 5) ``` Here, `my_dog` is an instance of the `Dog` class. It has attributes `name` and `age`, and it can use the method `bark`. #### Step 3: Access Attributes and Methods ```python # Accessing attributes print(my_dog.name) # Output: Buddy print(my_dog.age) # Output: 5 print(my_dog.species) # Output: Canis familiaris # Calling a method print(my_dog.bark()) # Output: Buddy says woof! ``` ### Breaking Down the Code 1. **Class Definition (`class Dog`)**: - `class Dog:` defines a new class named `Dog`. - `species` is a class attribute shared by all instances of the `Dog` class. 2. **The `__init__` Method**: - `__init__` is a special method called a constructor. It is automatically called when a new instance of the class is created. - `self` refers to the instance of the class. It is used to access instance attributes and methods. - `self.name` and `self.age` are instance attributes unique to each instance. 3. **Instance Method (`def bark`)**: - `bark` is a method that belongs to the `Dog` class. It uses the `self` parameter to access the instance's attributes. ### More About Classes and Objects 1. **Class Attributes vs. Instance Attributes**: - Class attributes are shared across all instances of the class. - Instance attributes are unique to each instance. 2. **Encapsulation**: - Encapsulation is the concept of bundling data (attributes) and methods that operate on the data within one unit (class). 3. **Inheritance**: - Inheritance is a way to form new classes using classes that have already been defined. It helps in reusability. 4. **Polymorphism**: - Polymorphism allows methods to do different things based on the object it is acting upon, even if they share the same name. ### Example of Inheritance ```python class Puppy(Dog): def __init__(self, name, age, color): super().__init__(name, age) # Initialize attributes from the parent class self.color = color # New attribute unique to Puppy class def bark(self): return f"{self.name} says yap!"
my_puppy = Puppy("Bella", 1, "brown") print(my_puppy.bark()) # Output: Bella says yap! print(my_puppy.color) # Output: brown ``` In this example, `Puppy` is a subclass of `Dog`. It inherits attributes and methods from `Dog` but can also have its own additional attributes and methods.
vincod
1,897,726
Building production systems using Generative AI.
Originally published on Tying Shoelaces I set out several months ago to deeply understand and...
0
2024-06-23T11:43:02
https://dev.to/ejb503/building-production-systems-using-generative-ai-3plk
ai, webdev, programming, learning
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzf8t1r260hf1vdwadc6.jpg) Originally published on [Tying Shoelaces](https://tyingshoelaces.com/blog/artificial-intelligence-production-system) I set out several months ago to deeply understand and engage with the modern AI tooling that is in the process of revolutionizing (or at least sensationalizing!) the world of Web Development as we know it. I had a single purpose: to build a theoretically scalable system that could leverage this plethora of new technologies. And one that wouldn’t bankrupt me in the process. I picked a use case that was an area of interest to me and one that I felt was ripe for the use of Generative AI. It is the world of data privacy. I built a tool that can scan any public-facing web URL, navigate, and register all the network requests. Then we analyse these requests and process the information using Generative AI. Gen AI is a valid use case because data privacy is complicated in the sense that it is difficult to understand the meaning and consequences of compliance. I believe that Generative AI is mostly useful as a data synthesizer and a reducer. Much of the current frustration comes from the erroneous use of LLMs, which involves inputting a nugget of data and expecting Gen AI to mass-produce gold. It inevitably churns out rubbish. But if you input a high concentration of quality data and ask Gen AI to condense this into something useful, that’s when you get valuable output. So, what do I want Gen AI to do? Simple, really. As someone who has been doing Web Development for nearly 20 years, I still find myself unable to answer seemingly trivial questions: Do I need permission to track user data? Which types? What is “user” data anyway? An IP address? I’m not saving that PII, so it’s legal, right? Right? When can I set cookies? But I use a Cookie to track my Cookie consent. That’s allowed, I guess? But what is a “functional” Cookie anyway? What kind of user tracking is permitted with and without consent? How do I need to ask for consent? Can I save consent? How do I even persist negative consent? What is the difference between consent for tracking and cookies? Do I need both? Is that one button or two? What are the consequences of not respecting consent? How does this vary by geography? Is Google Analytics legal for use in Europe? Quite honestly, a lot of the confusion in the above arises because data privacy consent is a grey area with room for interpretation. This isn’t helped by a difference in legislation between different geographies. But the key is that GPT-4o/Llama 3 excels at interpreting vast amounts of data and explaining them with simple language. Perfect, thank you. So, I set out to gather as much hard evidence as I could about what is actually happening on a simple navigation on a public-facing document (i.e., website!). We map this evidence with our understanding of the legislation, and we arrive at a system that is capable of testing the data processing flow of any public website. Woohoo. But you aren’t here for the cookies, are you? You’re here for the AI… One little system, one bucket load of AI. OpenAI - GPT-4o / DALL-E 3 We use this to analyze the COMPANIES that are the ultimate processors of the ingested data. Groq - Llama3-70b-8192 We use this to analyze the REQUESTS where the data is transmitted from the public document. Grok - https://developers.x.ai/ We use this to analyze sentiment and trends to inform our content generation strategy.
Brave API - https://brave.com/search/api/ We use this to research public information on actors identified within our system. Algolia search - https://www.algolia.com/ We use this to intelligently map unstructured data into a lovely SQL database. Did you say 5 APIs? Five different AI products? How was my experience with this mesh of AI? Very, very hard… It nearly broke me. So, how did we end up with a platform that has five different AI integrations anyway? Experimentation, repetition, and a fair bit of lunacy. It’s a pattern. But when we untangle the system we have created, each component part makes sense. The first trade-off is Groq vs ChatGPT. ChatGPT is, of course, the flagship product of OpenAI, the first worm out of the proverbial can. And their first-mover advantage shows. Their API and models have been more refined, and this is clear from the quality difference of the output. So I use ChatGPT for the long-form content and the quality of the results is indisputable. BUT. It’s expensive. I woke up in sweats several times a week worrying what would happen if somebody, anybody, actually used this platform I’d built. A great experiment, but one I’m willing to bankrupt myself for? Not likely. Groq changed everything. Their API costs 100x less. It’s fair to say that had I not discovered Groq, I would probably have never released this blog, simply out of fear of the cost. The quality of GPT-4o over Llama 3 is noticeable. But the price of Llama 3 on Groq is quite literally 1% of the price. I use OpenAI if we need to leverage content of the absolute highest quality. I use Groq when I need to process lots of information. I have built a killswitch to turn everything to Groq at a second's notice. This switch is the difference between being able to launch or not. gpt-4o - $5.00 / $15.00 per 1 million tokens. Llama3-8b - $0.05/$0.08 (per 1M Tokens, input/output) So our AI count is at two out of the door… Where does Brave AI Search come in? You could easily interchange this with Perplexity AI or something similar. I was very impressed with the API offering. Brave found its way into the stack as I was building my own SERP crawler and researcher. Mine was rubbish and consumed a lot of time; Brave’s was excellent. Mine worked 50% of the time; Brave’s 95%. To be able to generate high-quality content for people, we need to solve several puzzles. We need thorough research, and we also need to know what is interesting to the user. Brave’s search API is excellent for doing research for AI content. It provides links and references and shows high-traffic suggestions for users to follow the content rabbit trail. Without the research from Brave, the results from ChatGPT and Groq would be spam. It is a wonderful AI that feeds research and data into our AI. That’s a 2024 phrase if I’ve ever heard one. Three down… Onto the most controversial selection. Grok by Twitter (x) is an LLM with a difference. It has built-in social media retrieval (I imagine it has some kind of proprietary RAG). How does this help? This helps us understand content and topics that are trending and new. So before we research and generate content, we need to understand the hot topics. I’m not yet convinced by the viability of Grok, but the potential to plug into real-time sentiment and use this as a search and content generation strategy is an exciting one for me. Put this one down as experimental. I’ll keep you posted. So we end up with Algolia. Why do we need an AI-powered search on top of our AI-powered research and generative AI?
This comes down to how I’ve structured my platform. We’ll go deeper into the how and the why later in this article, but to build a world-class platform, we need to fill in some of the basics. In my old-school paradigm, you can’t have world-class content without a world-class CMS. A world-class CMS requires clean, structured data. SQL. We use Algolia to weave and map together the content from our different systems. It’s hard to define and strongly limit output from text generation models (the same company might be recognized as Shopify, Shopify’s App, or Shop App). Getting JSON output is more or less stable these days. But converting JSON output to SQL with references between content types is tricky due to the unstructured nature of text generation. Algolia bridges this gap by condensing ‘similar’ content into unique SQL data that can be consumed by a website. It’s not perfect. But it works (95% of the time). So here we are, 5 AIs in the hype boom forged, with one simple platform to rule them. It was hard. It nearly broke me. We go from theoretical to engineering concerns. Chaining AI API calls to create a tolerable product. So why is using AI so hard? Fundamentally, there is one simple reason. The Internet is now fast. We expect things to be fast. Even AWS API Gateway HTTP requests time out after a maximum of 30 seconds. But generative AI? Just crafting an image with researched content and output can require up to 5 chained calls to various APIs (see the code sketch at the end of this section). Identify the content (Sentiment analysis w/Grok) Research the content (AI Search w/Brave) Generate the content (Gen AI w/Llama3/ GPT-4o) Generate the image (Gen AI w/DALL-E 3) Save the content and image into SQL (Algolia) Example content: HotJar data privacy analysis on privacytrek.com It’s very hard to build something fast when the underlying APIs are so slow. You won’t get quality output reliably generated in under a minute, especially as you need to knit together disparate APIs to build anything resembling quality content. The perfectionist in me refuses to wait so long to deliver results on a website. We’ve come too far. What’s the solution? Streaming? Websockets? Background processes? It’s complicated… I tried every single one of the above. I hated every single one for different reasons; we could write a blog about each… I spent almost a week building and tweaking a RabbitMQ broker so that my platform could subscribe to content from the backend responsible for negotiating with this mesh of AI APIs. I was so proud of myself; it was wonderful. It was also absolute rubbish. I deleted it. You know that saying, ‘Every person has a book in them. Most should keep it there.’ The same applies to Software Engineers and their AI ideas. It’s so easy to go off on a tangent and build around the problems that are inherent in artificial intelligence tools. I’ve done it many times until I reluctantly accepted that you can’t make an elephant run, and we needed a different approach. You should, too. Like horse and carriage congestion in the early 20th Century, eventually, it won’t be a problem. But until it isn’t, it is. Users expect fast web experiences; a sprinkle of AI will only sate patience for so long before web experiences become onerous and frustrating. So, to use AI at scale, we need to fetch our data before the user has even arrived. The number one rule for leveraging AI is to derive the value from our business logic long before the user has arrived. The key to the kingdom is to use every word and every image. 
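To make that five-step chain concrete, here is a minimal TypeScript sketch of the pipeline listed above. Every function in it is a hypothetical placeholder standing in for a real vendor SDK call, not the platform's actual implementation.

```typescript
// Hypothetical stand-ins for the real API clients (Grok, Brave, Groq/OpenAI, DALL-E, Algolia).
type Research = { query: string; sources: string[] };

const analyzeTrends = async (topic: string): Promise<string> => `trending angle on ${topic}`;
const braveSearch = async (query: string): Promise<Research> => ({ query, sources: [] });
const generateContent = async (r: Research): Promise<string> => `article citing ${r.sources.length} sources`;
const generateImage = async (summary: string): Promise<string> => `https://example.com/image-for/${encodeURIComponent(summary)}`;
const saveToIndex = async (article: string, image: string): Promise<void> => { /* condense into SQL via Algolia */ };

// The chain itself: five sequential awaits, each one a slow, billable API call.
async function produceArticle(topic: string): Promise<void> {
  const angle = await analyzeTrends(topic);         // 1. Identify the content (Grok)
  const research = await braveSearch(angle);        // 2. Research the content (Brave)
  const article = await generateContent(research);  // 3. Generate the content (Llama 3 / GPT-4o)
  const image = await generateImage(article);       // 4. Generate the image (DALL-E 3)
  await saveToIndex(article, image);                // 5. Persist content and image (Algolia -> SQL)
}
```

Run sequentially, each step adds seconds of latency, which is exactly why this work has to happen long before a user requests the page.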
Every scrap of expensive generated content should be treated like proverbial gold. This means vigilant control of both inputs and outputs. And so I save every API request, and I thoroughly research every API call I make. I test, and I tweak, I iterate, and I learn until I can bend the tool to my will. AI Costs $$$ Treat AI API calls with the respect they deserve. AI is prohibitively expensive. Imagine paying 5c for every API call you make to your CMS. I challenge you to do some matchstick math in your observability platform. Just look at the logs of any modern software system: requests to modern systems are typically measured in the hundreds of thousands, or millions… To make AI valuable at scale, we can’t treat its output as transient or ephemeral. The first thing I learned about working with AI APIs is to save every response, output, or image. It can be used later. And one of the ironic properties of AI output is that the more you refine and reuse it (condensing), the more valuable and realistic it becomes. Just make sure we conserve the building blocks, or you will literally be paying the price. It’s funny how paying and being on the hook for your own system really takes you back to the basics as an engineer. Nothing strikes fear into a developer more than being on the hook for a faulty API call that could accidentally cost thousands of dollars. Nothing will make me optimize my API fallback strategy like the fear that an accidental loop could bankrupt me. Frankly, we should treat normal APIs with the same respect, but caching, cheap processing and laziness have made this approach redundant. The founding principle of working with these APIs is to treat every output from an LLM with respect. Spend time considering the inputs and the outputs. Prompt engineering, RAG, and Vector DBs are the buzzwords. The principles are far simpler. Every question or input to a Gen AI system is costing you real dollars. Have you optimized that input to ensure that what is coming out is actually valuable? Or are you simply pounding away at a broken slot machine, throwing your money down the drain? I spent a long time crafting every user and system prompt, optimizing the inputs and the outputs to ensure that what comes out of the LLM is of value. I failed more than I succeeded. It took me a long time to get beautifully crafted artisan image representations of my companies. I spent days trying to use ChatGPT to create an icon library (bad idea). The more you work with these APIs, the easier it is to see the cracks. It’s so easy to get rubbish output; if you haven’t carefully automated and scaled the input, it is the most likely outcome. But when the robot gets it right, it becomes something very special indeed. Ask ChatGPT Just ask ChatGPT? In my experience, the inverse is true; these LLMs aren’t generalists at all but specialists. I don’t know why this is surprising. Machine learning algorithms have always been thus. We have object detection models to detect objects from images. We have structured data extraction algorithms to extract data from text. We wouldn’t expect our object detection algorithm to extract structured data from text, right? But that is exactly what we expect from our LLMs. One superhuman AGI robot to rule them all. Absolute popsicle… Nothing screams amateur to me more loudly than the companies building a wrapper around ChatGPT and assuming that “AI” will solve their problem. The LLMs have no AGI at all, not even intelligence. They are capable of processing extremely large datasets. 
Just the thought that they are a silver bullet to every problem shows me that not much thought has been given at all. What are they specialists at? Condensing large amounts of information into valuable, smaller, intelligible versions of the same. Ironically, this is the exact opposite of the majority of use cases. Welcome to the trough of disillusionment. LLMs are a tool in the armory that can solve problems in new and inventive ways. They have opened doors that we didn’t even know existed. So what next for this great experiment? We are at the beginning. I guess I’ve got proof of concept, and my goal is to convert this into a functional, modern platform. There are challenges to overcome. The Field of Dreams conundrum. I’ve built it. Will they come? Experience tells me that probably not. I need to turn this platform into a self-aware, SEO-optimized monster. I’m going to use AI and the tools I've woven together to craft and scale human-consumable content and bring Data Privacy analysis to the world… I’m not sure how far I’ll get, but it’s turning out to be a wonderful adventure… Do come along for the ride.
ejb503
1,897,711
The Identity Puzzle: the Crucial Difference Between Access Tokens and ID Tokens
In the real world Let's start with a real-world analogy. Imagine a flight ticket you...
0
2024-06-23T11:38:47
https://dev.to/zenithar/the-identity-puzzle-the-crucial-difference-between-access-tokens-and-id-tokens-j1f
identity, access, oauth, security
## In the real world Let's start with a real-world analogy. Imagine a flight ticket you previously bought as an access token authorising you to board the plane for your trip. On the other hand, an ID card functions as your ID token, a document that proves an authority has authenticated your identity. Another example could be a concert ticket, your access token to the event, and your driver's license, your ID token. Just as you can't board a plane with only your ID card without buying a flight ticket, you can't use the ticket to prove your identity. Like cards and tickets, both tokens are created to serve specific purposes. In the context of our analogy, the access token is your right to board the plane, while the ID token is your proof of identity. Matching the name on the flight ticket to the name on your ID card, and proving that the name identifies you, makes your intent to board the plane both legitimate and authenticated. ## ID vs Access In the digital world, ID tokens are your digital identity, a secure container that holds all the information you agree to share with the identity authority. They're a digital version of your ID card. Just as you present your ID card to authenticate your identity, the entity initiating the authentication process uses ID tokens to retrieve authenticated identity details. Access tokens are not just proof of authorisation but the key to your digital world. They contain identity references for the user and the software authorised to access a specific service. This proof of authorisation is forged by evaluating the identity intent, is valid for a certain period, and is closely linked to service usage only. ID tokens concern the user authentication process, demonstrating how the system knows you, while access tokens concern an authorisation decision, proving that an intent has been authorised. ## How are ID cards authenticated? When you present your ID card on request, it authenticates you as the identity claimed by the card, acting as proof of ownership. The verifier checks whether the ID card looks legitimate to forge a proof of authority and compares the associated picture with you in person, acting as proof of life. In essence, the ID card "distributed" authentication process is based on the following: * It looks legit (valid symbols, colours, format, etc.) * The picture looks like the bearer's face * The card has not expired > The term 'distributed' here refers to the fact that the authentication process is not solely dependent on one factor but rather a combination of factors, making it more secure and reliable. To make a forged ID card trusted, one must create a card that looks legit to bypass the proof of authority, change the picture so the holder looks like the associated picture, and ensure it has not expired. ## How are ID tokens authenticated? In the digital world, where ID cards are translated into ID tokens, the 'distributed' authentication process is reduced to 'it looks legit', meaning the token is cryptographically signed with a key from the trusted authority that delivered the ID token. In other words, the authentication process is 'distributed' because it relies on multiple factors, such as the token's legitimacy and the verifier's trust in the authority that issued it, to verify the user's identity. The authentication process uses: * A cryptographic signature verification * The token is usable (expiration, type, etc.) To make a forged ID token trusted, one must manage to sign it with one of the private keys associated with a trusted public key. 
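To make the 'it looks legit' check concrete, here is a minimal sketch of ID token verification in Node.js using the widely used `jsonwebtoken` library. The key, issuer, and audience values are hypothetical placeholders; a real deployment would fetch the authority's signing keys from its published JWKS endpoint.

```typescript
import jwt from "jsonwebtoken";

// Hypothetical trusted authority material; in practice this comes from the issuer's JWKS.
const AUTHORITY_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----";

function authenticateIdToken(idToken: string): jwt.JwtPayload {
  // "It looks legit": the signature must verify against a key we already trust,
  // and jwt.verify also rejects expired tokens ("the card has not expired").
  return jwt.verify(idToken, AUTHORITY_PUBLIC_KEY, {
    algorithms: ["RS256"],                // pin the expected algorithm
    issuer: "https://authority.example",  // the authority we trust
    audience: "my-client-id",             // the token was minted for our client
  }) as jwt.JwtPayload;
  // Note what is missing: nothing in this check ties the token to the person presenting it.
}
```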
By definition, an ID token is vulnerable to bearer spoofing, as it's not possible to provide an equivalent of the ID card's picture-matching check to authenticate the bearer. > The purpose of ID tokens is to authenticate an identity by trusting the authority who generated it. ## How are access tokens authenticated? Access tokens are opaque tokens, meaning they should be meaningless to the software that receives them as proof of authorisation. The access token represents a sealed authorisation decision valid for a given time and associated with a validated intent. The authentication process uses: * Proof of authority knowledge * A cryptographic signature verification * An identity cryptographic binding * The token is usable (expiration, type, etc.) * The token matches the expected intent To build trust around an opaque access token, you must ensure the token is known by the authority that forged it. Using digital signature verification to prove the token's provenance only proves that the token was signed with the authority's private key. This weakness is why many providers don't use rich tokens but simple pseudo-random strings as access tokens. To mirror the ID card's picture-based comparison check, an access token can have an identity binding to confirm that the access token owner and the client using it are legitimate. The token is also subject to acceptance time and intent validation to ensure that it is used in the appropriate context that represents the proof of authorisation. > The purpose of an access token is to authenticate an authorisation decision. ## Why is using ID tokens as proof of authorisation not a good idea? Let's consider that you are authorising people based on their ID card information only: * How would you prove to the boarding control that you paid for this flight? * What can happen if someone steals your ID card and looks like you? Building an authorisation model solely based on claims identified in the ID tokens immediately exposes you to the risk of identity spoofing and authorisation bypass. We saw that ID tokens act as proof of authentication and don't serve an authorisation purpose. When you use ID tokens as service access authorisation tokens, you open your service to identity impersonation. Secondly, it forces you to distribute your authorisation policy to each service that consumes your ID tokens, as you evaluate the access decision based on the presented identity just in time. This vulnerability occurs when you lack 'bearer authentication', similar to the picture on your ID card. Bearer authentication means that the token holder is considered the legitimate bearer of the token, which is why it's called bearer authorisation. This flaw is similar to API key authentication, which is, in fact, API key bearer authorisation. The identity is not proven but trusted via the data associated with the provided token. Merely presenting API keys or ID tokens is enough to authenticate as a legitimate bearer, posing significant security risks that you must be aware of. For example, if someone gets hold of your API key or ID token, they could pretend to be you or access your private data without permission. This emphasises the significance of comprehending and safeguarding your tokens and using the correct type for the intended purpose. ## Conclusion A lot of the confusion arises from the fact that we can utilise the same technical encoding for both tokens in the digital realm. 
However, even if we use JWT for both, with shared claims, each token is constructed for a specific purpose. Use ID tokens for authentication-related use cases only: * Transfer identity from the authority to the client * Transfer identity from the client to another authority for federation * Exchange your identity for another token Use access tokens for authorisation-related use cases only: * Access a service/resource * Represent an intent authorisation decision In conclusion, understanding the crucial difference between access and ID tokens is essential for securely navigating the digital world. Access tokens serve as proof of authorisation and are linked to specific services, while ID tokens authenticate a user's identity and are the equivalent of digital ID cards. Both tokens undergo authentication processes, with access tokens focused on sealed authorisation decisions and ID tokens utilising cryptographic signatures to verify legitimacy. By grasping the distinct roles of these tokens, individuals and organisations can enhance security and data protection in the digital space. ### More about this topic * [ID Token and Access Token: What's the Difference?](https://auth0.com/blog/id-token-access-token-what-is-the-difference/) * [ID Tokens vs Access Tokens](https://oauth.net/id-tokens-vs-access-tokens/) * [I don’t like Identity Tokens](https://leastprivilege.com/2020/06/17/i-dont-like-identity-tokens/) Photo by <a href="https://unsplash.com/@amirhanna?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Amir Hanna</a> on <a href="https://unsplash.com/photos/person-holding-white-card-dbuDVc96wsc?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
zenithar
1,897,724
aryan's SCSS Complete guide 🧡
note: I will not always show CSS version of my code. note: It is a three part series. (if link are...
0
2024-06-23T11:38:45
https://dev.to/aryan015/scss-complete-guide-part-one-4d03
css, javascript, react, scss
`note:` I will not always show the CSS version of my code. `note:` It is a three-part series. (if the links take you back to this same blog, the links are not updated yet. waiting...) [two](https://dev.to/aryan015/aryans-scss-complete-guide-part-2-1p1i) [three](https://dev.to/aryan015/3-finale-of-complete-sass-longer-2gpe) ## SCSS SCSS (Syntactically Awesome Style Sheets) is a CSS pre-processor. It uses the `*.scss` extension. Originally based on Ruby 💎. ## Installation [(install)](https://sass-lang.com/install/) ### using node package manager ```sh npm install -g sass ``` ### using an Extension in VSCODE (easier guide) You might find this on your preferred IDE too. `glenn2223.live-sass` ```sh # the above is the extension id of the Live Sass Compiler extension ``` ## compilation ```sh # after installation, run this command; it will compile index.scss to index.css sass source/stylesheets/index.scss build/stylesheets/index.css # the sass package is a prerequisite ``` ## variables With Sass, you can store information in variables, like: - strings - numbers - colors - booleans - lists - nulls ```scss $variableName:value; ``` `code` ```scss $myFont: Helvetica, sans-serif; $myColor: red; $myFontSize: 18px; $myWidth: 680px; ``` in `scss` you don't have to use the `var()` function to use variables. (that's why I prefer it.)🧡 ```css body{ font-family:$myFont; } ``` after compilation ```css body{ font-family:Helvetica, sans-serif; } ``` ### variable scoping Variables are available only inside the {} block where they are defined. ```css $myColor: red; h1 { $myColor: green; color: $myColor; } p { color: $myColor; } ``` compilation ```css h1 { color: green; } p { color: red; } ``` ### `!global` The default behavior for variable scope can be overridden by using the `!global` switch. ```css $myColor:red; h1{ $myColor:green !global; /*it will replace red with green. use with caution*/ color:$myColor; /*green*/ } p{ color:$myColor; /*green*/ } ``` `note`: avoid using `!global`; you might never know what is giving your variable a different color. 🧡 ## SCSS nesting Another reason I use the SCSS pre-processor is that it supports nesting. ```css nav{ li{ list-style:none; } p{ color:red; /*set color to red*/ } } ``` after compilation ```css nav li{ list-style:none; } nav p{ color:red; } ``` Because you can nest properties in Sass, it is cleaner and easier to read than standard CSS. At the end I will show you an SCSS hack. ## @Import Scss keeps the CSS code DRY (Don't Repeat Yourself). You can create small files with CSS snippets to include in other Sass files. Examples of such files can be: reset file, variables, colors, fonts, font-sizes, etc. (the extension is optional). You might require it in big apps. (Note: newer versions of Dart Sass recommend `@use` instead of `@import`, which is being deprecated.) ```css @import "variables"; @import "reset"; ``` ## SCSS NESTED HACK ```css p { font: { family: Helvetica, sans-serif; } text: { align: center; } } ``` ```css p { font-family:Helvetica, sans-serif; text-align:center; } ``` ## SCSS partials A partial lets the transpiler know that it should not compile that file to CSS on its own. The syntax is a leading underscore, e.g. `_colors.scss`. The import does not require the underscore in the name. _colors.scss ```scss $myGreen:green; ``` main.scss ```scss @import "colors"; body{ color:$myGreen; } ``` ## Part two [link](https://google.com) if the link goes to Google, it means part two has not been uploaded yet🧡. Might need at least 5 stars ⭐ on this post.🤣 (just kidding) [🔗linkedin](https://www.linkedin.com/in/aryan-khandelwal-779b5723a/) ## learning resources [🧡Scaler - India's Leading software E-learning](www.scaler.com) [🧡w3schools - for web developers](www.w3schools.com)
aryan015
1,897,723
Bowing to the inevitable
Data Munging with Perl was published in February 2001. That was over 23 years ago. It’s even 10 years...
0
2024-06-23T11:40:01
https://perlhacks.com/2024/06/bowing-to-the-inevitable/
books, datamungingwithperl, updates, writing
--- title: Bowing to the inevitable published: true date: 2024-06-23 11:37:07 UTC tags: Books,datamungingwithperl,updates,writing canonical_url: https://perlhacks.com/2024/06/bowing-to-the-inevitable/ cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2nzdo93fml6sl4k71xr.jpg --- [_Data Munging with Perl_](https://datamungingwithperl.com/) was published in February 2001. That was over 23 years ago. It’s even 10 years since [Manning took the book out of print and the rights to the content reverted to me](https://perlhacks.com/2014/04/data-munging-perl/). Over that time, I’ve been to a lot of Perl conferences and met a lot of people who have bought and read the book. Many of them have been kind enough to say nice things about how useful they have found it. And many of those readers have followed up by asking if there would ever be a second edition. My answer has always been the same. It’s a lot of effort to publish a book. The Perl book market (over the last ten years, at least) is pretty much dead. So I really didn’t think the amount of time I would need to invest in updating the book would be worth it for the number of sales I would get. But times change. You may have heard of [Perl School](https://perlschool.com/). It’s a small publishing brand that I’ve been using to publish Perl ebooks for a few years. You may have even read the interview that brian d foy did with me for perl.com a few years ago about [Perl School and the future of Perl publishing](https://www.perl.com/article/perl-hacks-perl-school-and-the-future-of-perl-publishing/). In it, I talk a lot about how much easier (and, therefore, cheaper) it is to publish books when you’re just publishing ebook versions. I end the interview by inviting anyone to come to me with proposals for Perl School books, but brian is one of [only two people](https://perlschool.com/authors/) who have ever taken me up on that invitation. In fact, I haven’t really written enough Perl School books myself. There are only two – [_Perl Taster_](https://perlschool.com/books/perl-taster/) and [_The Best of Perl Hacks_](https://perlschool.com/books/the-best-of-perl-hacks/). A month or so ago, brian was passing through London and we caught up over dinner. Of course, Perl books was one of the things we discussed and brian asked if I was ever going to write a second edition of _Data Munging with Perl_. I was about to launch into my standard denial when he reminded me that I had already extracted the text from the book into a series of Markdown files which would be an excellent place to start from. He also pointed out that most of the text was still relevant – it was just the Perl that would need to be updated. I thought about that conversation over the next week or so and I’ve come to the conclusion that he was right. It’s actually not going to be that difficult to get a new edition out. I think he was a little wrong though. I think there are a few more areas that need some work to bring the book up to date. - Perl itself has changed a lot since 2001. Version 5.6.0 was released while I was using the book – so I was mostly targeting 5.005 (that was the point at which the Perl version scheme was changed). I was using “-w” and bareword filehandles. It would be great to have a version that contains “use warnings” and uses lexical filehandles. There are dozens of other new Perl features that have been introduced in the last twenty years. - There are many new and better CPAN modules. 
I feel slightly embarrassed that the current edition contains examples that use Date::Manip and Date::Calc. I’d love to replace those with DateTime and Time::Piece. Similarly, I’d like to expand the section on DBI, so it also covers DBIx::Class. There’s a lot of room for improvement in this area. - And then there’s the way that the world of computing has changed. The current edition talks about HTTP “becoming ubiquitous” – which was an accurate prediction, but rather dates the book. There are discussions on things like FTP and NFS – stuff I haven’t used for years. And there are new things that the book doesn’t cover at all – file formats like YAML and JSON, for example. The more I thought about it, the more I realised that I’d really like to see this book. I think the current version is still useful and contains good advice. But I don’t want to share it with many people because I worry that they would pick up an out-of-date idea of what constitutes best practices in Perl programming. So that has now become my plan. Over the next couple of months, I’ll be digging through the existing book and changing it into something that I’m still proud to see people reading. I don’t want to predict when it will be ready, but I’d hope to have it released in the autumn. I’d be interested to hear what you think about this plan. Have you read the book? Are there parts of it that you would like to see updated? What new syntax should I use? What new CPAN modules are essential? Let me know what you think. --- The post [Bowing to the inevitable](https://perlhacks.com/2024/06/bowing-to-the-inevitable/) first appeared on [Perl Hacks](https://perlhacks.com).
davorg
1,895,494
Installing PostgreSQL with Docker
Introduction In this guide, I'm going to walk you through installing PostgreSQL database...
0
2024-06-23T11:37:01
https://howtodevez.blogspot.com/2024/03/installing-postgresql-with-docker.html
postgres, docker, data, newbie
Introduction ------------ In this guide, I'm going to walk you through installing the PostgreSQL database and **pgAdmin** using Docker. The big advantage here is that it's quick and straightforward. You won't need to go through a long manual installation process (and potentially spend time fixing errors if they arise). Installing Docker ----------------- If you don't already have Docker installed on your machine, you'll need to do that first. At this step, you'll need to search Google because the installation process depends on the operating system you're using. Typically, installing **Docker** on **Windows** is simpler compared to using **Ubuntu**. Installing PostgreSQL --------------------- Once Docker is set up, the next step is to install the **PostgreSQL** image. Here, I'm using **postgres:alpine**, which is a minimal version of PostgreSQL (it's lightweight and includes all the essential components needed to use PostgreSQL). ```sh docker run --name postgresql -e POSTGRES_USER={username} -e POSTGRES_PASSWORD={password} -p 5432:5432 -v {directory}:/var/lib/postgresql/data -d postgres:alpine ``` Next, replace the **username** and **password** with the credentials you want to use. For the **directory** part, choose a directory on your local machine where you want to store the PostgreSQL data. Now, let's proceed with installing **pgAdmin4**: ```sh docker run -p {port}:80 -e PGADMIN_DEFAULT_EMAIL={email} -e PGADMIN_DEFAULT_PASSWORD={password} -d dpage/pgadmin4 ``` Also, replace the **email**, **password**, and **port** with the ones you want to use. To access **pgAdmin**, open your browser and go to **_http://0.0.0.0:8900_**, where **8900** is the port set in the docker run command for **pgAdmin4** above. Here's what you'll see: ![Admin page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gug4eanv9swm8jw2638t.png) Then use the email and password you set up to log in. Using pgAdmin4 -------------- ### Basic Concepts In pgAdmin, you can create multiple **Server Groups**, each containing multiple **Servers**, and each **Server** can have multiple **Databases**. ### Creating Servers and Databases To create a new Server, follow these steps: right-click on **Server Group** > **Register** > **Server...** Next, in the **General** tab, enter the **Name** as the server name. ![Register - Server General](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4k6cxsqi2ofdicocbab.png) In the **Connection** tab, enter the following information: ![Connection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j55yxx4r2d84ggd2j2th.png) * **Host name/address**: You can find this information by inspecting the PostgreSQL container to get its IP address using the following command: "**_docker inspect {container id}_**" * **Maintenance database** is the database name. * **Port**, **Username**, and **Password** are the values you used when running the docker postgres command. After clicking Save, the new **Server** connection will be created, along with its **Database**. ### Basic Query In this step, I'll provide you with some simple queries to check whether **PostgreSQL** is working or not. PostgreSQL queries are quite similar to **SQL**, so if you have basic SQL knowledge, it won't be difficult to understand. 
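One quick tip before querying: if you had trouble extracting the container's IP address for the **Connection** tab above, the command below prints just the address instead of the full inspect output. It assumes the container is named `postgresql`, as set by `--name postgresql` in the earlier docker run command; adjust the name if yours differs.

```sh
# Print only the container's IP address using a Go template filter
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgresql
```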
Open the **Query Tool** and execute the following **SQL** code: ```sql -- create table CREATE TABLE person ( id int PRIMARY KEY, name text, age int, address text ); -- insert data INSERT INTO person values (1, 'name 1', 21, 'address 1'), (2, 'name 2', 22, 'address 2'); -- query data SELECT * FROM person; SELECT name, address FROM person; ``` ![Table created](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldijnqj9vglet8fr0q74.png) If you get query results like these, then you've successfully completed the setup: ![Query successful](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/584joqyye2euub7cvwn8.png) Conclusion ---------- In this article, I've guided you on using **Docker** to install and run **PostgreSQL**, using **pgAdmin** as the interface to perform visual operations on the database. Additionally, I provided some simple queries to get you started with **Postgres**. **_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/03/installing-postgresql-with-docker.html) to support the author and explore more interesting content._** <a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a>
chauhoangminhnguyen
1,897,448
Networking can't be easier than this
This is a submission for the Twilio Challenge What I Built An AI agent where you can pass...
0
2024-06-23T11:34:20
https://dev.to/sojinsamuel/networking-cant-be-easier-than-this-4fdm
devchallenge, twiliochallenge, ai, twilio
*This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)* ## What I Built <!-- Share an overview about your project. --> An AI agent where you can pass in an email or a LinkedIn profile URL to get candidate details and share them with your partner or cofounder via: [Twilio Call](https://www.twilio.com/docs/voice/make-calls), [SMS](https://www.twilio.com/en-us/messaging/channels/sms) or [Email](https://app.sendgrid.com/). The agent user can also pass in the phone number of a potential candidate and get details regarding the availability of that phone number, made possible by the [Twilio Phone Lookup service](https://help.twilio.com/articles/15515453000859). This agent also has the power of displaying real-time notifications; for example, when an email is sent and the recipient opens it, the AI agent dashboard notifies the user that the mail has been opened, e.g.: ```text Hey sojinsamue2001@gmail.com just opened their mail ``` And importantly, user data, like signup and login events, message events back and forth between the user and the AI, page visits, etc., is tracked with the help of [Twilio Segment](https://segment.com/). ## Demo <!-- Share a link to your app and include some screenshots here. --> [Project Link](https://reversecontact.vercel.app) Demo video: {% youtube O8hQnAyhUB4 %} ## Twilio and AI <!-- Tell us how you leveraged Twilio’s capabilities with AI --> I have used the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) with GPT-4o to render React components, which then interact with the Twilio SDK to send messages via SMS, email, and calls, combining GPT function calling with executing actions using natural language. [Source code on GitHub](https://github.com/sojinsamuel/reverse-contact) ## Additional Prize Categories <!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. --> Qualifies for: - Twilio Times Two: utilized Twilio Segment for studying customer data, the Twilio Phone Lookup endpoint to get candidate phone details, Twilio SMS and Programmable Voice for notifying colleagues or candidates, and Twilio SendGrid for sending candidate details in a dynamically generated email template using GPT. - Impactful Innovators: Best for networking: get details by just submitting an email address to the agent, and if that email is associated with a LinkedIn account, get the candidate's details and send them to your team for further verification or checks before hiring. The data that's returned from the Reverse Contact lookup can also be passed on to email or Programmable Voice by mentioning what you want to share in natural language, which helps you manage more than one channel. - Entertaining Endeavors: Getting real-time information from just an email ID and passing that data into other channels with the help of webhooks to notify the recipient without missing important notifications like hiring, interviews, etc.
sojinsamuel
1,897,170
Product Review System Using Twilio and Gemini
This is a submission for the Twilio Challenge What I Built I built a product review and...
0
2024-06-23T11:30:56
https://dev.to/oyedeletemitope/product-review-system-using-twilio-and-gemini-4bk0
devchallenge, twiliochallenge, ai, twilio
*This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)* ## What I Built I built a product review and rating app that allows product users to submit ratings along with descriptive feedback. The application analyzes the sentiment of the feedback and provides users with a personalized response based on their sentiment. The backend stores the ratings in a MySQL database and sends notifications using Twilio. This app aims to help product owners gain valuable insights into customer satisfaction and areas for improvement. ## Demo Users fill out a form with their name, description, and rating. Once submitted, Gemini performs sentiment analysis on the feedback and replies with a response based on it. During the process, Twilio sends an SMS notification informing the product owner about the new rating submission. ![product review app being testing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23ijgxvqg32vg36sz3jy.gif) ![product review app being tested](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ve66vlmwdcbftcn83dd.gif) ![message being sent to the product owner](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gu92jswzpndhzujtttwx.jpg) ### Source Code {% embed https://github.com/oyedeletemitope/customer-product-review-with-twilio-gemini %} ## Twilio and AI For this project, I used Twilio's communication capabilities coupled with Gemini for AI-driven sentiment analysis to create a product review system. ### Role they play #### Sentiment Analysis with AI The application leverages Gemini AI to perform sentiment analysis, which evaluates the emotional tone of user-submitted feedback and categorizes it as positive, negative, or neutral. #### Automated Notifications with Twilio Using Twilio's powerful messaging API, the application sends real-time SMS notifications to the product owner whenever a new rating and review are submitted. This ensures that product owners are immediately aware of customer feedback and can take prompt action if necessary. ## Additional Prize Categories **Impactful Innovators:** This project qualifies for the Impactful Innovators category as it provides real-time sentiment analysis and notifications. The app helps businesses quickly respond to customer feedback, leading to improved customer satisfaction and service quality. Also, the insights gained from sentiment analysis allow businesses to identify and address issues promptly, contributing to better product development and overall user experience. Lastly, the app is mostly beneficial for small business owners, giving them access to advanced tools for understanding and responding to their customers.
oyedeletemitope
1,897,713
Tutorial: How to Develop a Nostr Wallet Connect Mobile App Using Flutter and NWC
Hello everyone! My name is Aniket (aka Anipy) and in this tutorial, I'm going to show you how to...
0
2024-06-23T11:30:22
https://dev.to/anipy/tutorial-how-to-develop-a-nostr-wallet-connect-mobile-app-using-flutter-and-nwc-3kcb
--- title: Tutorial: How to Develop a Nostr Wallet Connect Mobile App Using Flutter and NWC published: true description: tags: cover_image: https://i.ibb.co/94BH67g/nostr-pay-article-cover.png # Use a ratio of 100:42 for best results. # published_at: 2024-06-23 11:00 +0000 --- Hello everyone! My name is Aniket (aka [Anipy](https://x.com/Anipy1)) and in this tutorial, I'm going to show you how to develop a Nostr Wallet Connect mobile app using Flutter and the [NWC Dart package](https://pub.dev/packages/nwc). Before we dive into the tutorial, it's important to familiarize yourself with the [NIP47](https://github.com/nostr-protocol/nips/blob/master/47.md) protocol to better understand Nostr Wallet Connect. In brief, Nostr Wallet Connect (NWC) is a protocol that allows applications to access a remote Lightning wallet. Imagine you’ve developed a simple multiplayer game like tic-tac-toe. Your game works great, but you want to add real-world rewards using satoshis (sats). One way to achieve this is by using Nostr Wallet Connect. With NWC, users can connect their remote Lightning wallets to your app. Now, your tic-tac-toe game can interact with these wallets. Once the connection is established, your game can request payments from the user’s Lightning wallet. Here’s a practical example: before starting a game, both players agree that the loser will pay an LN invoice to the winner. At the end of the game, your app will automatically create an LN invoice for the winner’s connected wallet and pay it from the loser’s connected wallet. This rewarding mechanism is made possible by NWC. NWC allows users to connect their remote Lightning wallets to your app easily, ensuring their wallets are available when needed. If a user ever wants to disable this connection, they can simply disconnect it. Let's get started on building this exciting functionality into your Flutter app! ## Getting Started Before we dive into coding, let's start by grabbing the "**starter**" project for this tutorial. Open your terminal and run the following command: ```shell git clone -b starter --single-branch https://github.com/aniketambore/nostr_pay ``` This command will clone the **starter** project, saving you time and allowing you to focus on the exciting parts of this tutorial. Feeling the excitement? Great! Now, fire up your favorite code editor, whether it’s VS Code or Android Studio. You may see some errors in the **starter** project initially. To fix this, head over to the `pubspec.yaml` file and locate the `# TODO: Add NWC package here` comment. Replace it with the following: ```yaml dependencies: nwc: ^1.0.1 ``` Next, run `flutter pub get` to set things up, and then launch the app. At this stage, it’s a straightforward UI project, but we’ll soon infuse it with some Nostr Wallet Connect magic. ![Nostr Pay - Initial Screen](https://i.ibb.co/HDyKXjy/ss1.png) ## Project Files The **starter** project includes several files to help you out. Let’s take a brief tour of these files to understand their purpose before we dive into developing the Nostr Wallet Connect functionality. ### Assets folder Inside the **assets** directory, you'll find icons, images, and lottie files that will be used to build this app. 
![Nostr Pay - Project assets folder](https://i.ibb.co/DGmpB5n/assets.png) ### Folder structure In the **lib** directory, you'll notice various folders, each serving a specific purpose: ![Nostr Pay - Project lib folder](https://i.ibb.co/6vQHkSR/lib.png) #### State Management (BLoC) In `lib/bloc`, you'll find files for state and credentials management. We’ll discuss these further in the article. #### Component Library Folder The `lib/component_library` contains all the reusable UI components that might be used across different screens. #### Handlers The `lib/handlers` directory currently contains the `PaymentResultHandler`, which listens to a stream and performs navigation actions based on the received payment results. #### Models In `lib/models`, you’ll find the model objects used in our app, defining how data is structured and managed. #### Routes The `lib/routes` directory contains all the screens and dialogs displayed in the app. #### Services The `lib/services` directory includes services like: - `Device`: Provides methods to copy text to the clipboard and share text using the device's functionality. - `Keychain`: Provides methods to securely store, retrieve, and delete key-value pairs in secure storage. ## App Libraries The **starter** project comes with a set of useful libraries listed in `pubspec.yaml`: ```yaml dependencies: nwc: ^1.0.1 flutter_bloc: ^8.1.5 flutter_secure_storage: ^4.2.1 flutter_svg: ^2.0.10+1 hydrated_bloc: ^9.1.5 connectivity_plus: ^6.0.3 path: ^1.9.0 path_provider: ^2.1.3 rxdart: ^0.27.7 flutter_fgbg: ^0.3.0 intl: ^0.19.0 qr_flutter: ^4.1.0 share_plus: ^9.0.0 toastification: ^2.0.0 bolt11_decoder: ^1.0.2 auto_size_text: ^3.0.0 lottie: ^3.1.2 flutter: sdk: flutter ``` Here's what they help you to do: - `nwc`: Simplifies the integration of Nostr Wallet Connect by providing methods for handling NWC-related functionalities. - `flutter_bloc`: Implements the BLoC (Business Logic Component) design pattern for state management, separating presentation and business logic. - `flutter_secure_storage`: Securely stores key-value pairs, useful for sensitive data. - `flutter_svg`: Renders SVG images. - `hydrated_bloc`: Enhances `flutter_bloc` by persisting state to disk and restoring it. - `connectivity_plus`: Checks network connectivity status. - `path`: Provides utilities for working with file and directory paths. - `path_provider`: Helps access the file system path on the device. - `rxdart`: Extends Dart's Streams with reactive programming. - `flutter_fgbg`: Detects when the app moves between the foreground and background. - `intl`: Supports internationalization and localization. - `qr_flutter`: Generates QR codes. - `share_plus`: Provides functionality for sharing content. - `toastification`: Shows customizable toast notifications. - `bolt11_decoder`: Decodes Bolt11 payment invoices used in the Lightning Network. - `auto_size_text`: Automatically resizes text to fit within its bounds. - `lottie`: Renders lottie animations. ## NWC Initialization Let's start with initializing the `NWC` class. Open the `main.dart` file and locate `// TODO: Import nwc package`. Replace it with: ```dart import 'package:nwc/nwc.dart'; ``` Next, find `// TODO: Initialize Nostr Wallet Connect class` and replace it along with the nwc variable with: ```dart final nwc = NWC(); ``` Now, open the `lib/bloc/nwc_account/nwc_account_cubit.dart` file, import the NWC package, and locate `// TODO: Instance of the Nostr Wallet Connect class`. 
Replace it and the `_nwc` variable with: ```dart final NWC _nwc; ``` Next, locate `// TODO: Change balance controller and stream type to Get_Balance_Result` and replace it and the `_walletBalanceController` and `walletBalanceStream` variables with: ```dart final BehaviorSubject<Get_Balance_Result?> _walletBalanceController = BehaviorSubject<Get_Balance_Result?>(); Stream<Get_Balance_Result?> get walletBalanceStream => _walletBalanceController.stream; ``` ## Monitoring Changes We initialized the wallet balance controller and stream types in the previous step. Now, let's use the `walletBalanceStream` to monitor changes in the wallet balance by working on the `_watchAccountChanges()` method. Locate `// TODO: Update the types to match the actual data type` and replace it with: ```dart return Rx.combineLatest<Get_Balance_Result?, NWCAccountState>([walletBalanceStream], (values) { ... ``` In the above code: - `Rx.combineLatest`: Emits the latest values from the `walletBalanceStream` whenever it emits a new value. Next, locate `// TODO: Check if wallet balance result is not null` and replace it with: ```dart if (values.first != null) { // TODO: Assemble and return a new NWCAccountState based on the latest balance result } ``` In the above code: - `values`: Holds the latest values from the combined streams, where `values.first` corresponds to the latest value from `walletBalanceStream`. - We're checking if the first value (representing the latest balance result) is not null. Now, update the state by locating `// TODO: Assemble and return a new NWCAccountState based on the latest balance result` and replacing it with: ```dart return assembleNWCAccountState( values.first!.balance, values.first!.maxAmount ?? 0, state, ) ?? state; ``` In the above code: - `assembleNWCAccountState`: A helper function (located in `lib/bloc/nwc_account/nwc_account_state.dart`) that constructs a new `NWCAccountState` object using the provided `balance`, `maxAmount`, and the current state. With this, we have completed the `_watchAccountChanges` method, which combines the latest values from the `walletBalanceStream` to monitor changes in the wallet balance. When the balance changes, it constructs a new state using `assembleNWCAccountState` and emits this new state. If the balance is null or no change occurs, it returns the current state, ensuring the app’s state is always up-to-date with the latest balance information. ## Inside the Constructor The `NWCAccountCubit` constructor initializes the state, sets up listeners, and performs initial actions. Let's work on that in this section. Locate `// TODO: Listen to account changes and emit updated state` and replace it with: ```dart // 1 _watchAccountChanges().listen((acc) { debugPrint('State changed: $acc'); // 2 emit(acc); }); ``` In the above code: 1. `_watchAccountChanges().listen((acc) {...})`: Sets up a listener for the stream returned by `_watchAccountChanges()`. Whenever a new state (`acc`) is emitted by the stream, the callback is executed. 2. `emit(acc);`: Emits the new state, updating the state of the Cubit. Next, locate `// TODO: Disable unnecessary logs from the logger utils` and replace it with: ```dart _nwc.loggerUtils.disableLogs(); ``` Here, we’re disabling logging from the NWC utility to reduce log noise in the production environment, although you can keep it if desired. 
Next, locate `// TODO: Connect if the current state type is not none` and replace it with: ```dart if (state.type != NWCConnectTypes.none) connect(); ``` Here, we check if an NWC wallet is already initialized. If it is, we connect to it; otherwise, we do not. ## Connect Until now, we haven't talked much about NWC, focusing instead on streams and listeners. Let's now discuss the `connect` method. First, look at the parameters: ```dart Future connect({ String? connectionURI, bool restored = false, NWCConnectTypes? type, }) async { .... } ``` - `connectionURI`: A string representing the URI used to establish the connection between the remote lightning wallet and our app. - `restored`: A boolean flag indicating whether the connection is being restored. - `type`: An optional parameter of type `NWCConnectTypes` (an enum) specifying the type of connection. It has three values: `none`, `nwc`, and `alby`. - `none`: No wallet is connected to the app. - `nwc`: A wallet is connected, where the user manually copies and pastes the connection URI from the remote lightning wallet into the app. - `alby`: The Alby wallet is connected. (This article does not cover the Alby wallet connection.) Now, locate `// TODO: Parse the Nostr Connect URI` and replace it with: ```dart final parsedUri = _nwc.nip47.parseNostrConnectUri(connectionURI); ``` This uses the `parseNostrConnectUri(connectionURI)` method to parse the connection URI entered by the user, extracting components such as `secret`, `pubkey`, `relay`, and `lud16`. We will discuss `connectionURI` more in the following sections. Next, locate `// TODO: Derive the public key from the parsed URI secret` and replace it with: ```dart final myPubkey = _nwc.keysService.derivePublicKey(privateKey: parsedUri.secret); ``` Here, we use the `derivePublicKey(privateKey: parsedUri.secret)` method to derive the public key (`myPubkey`) from the parsed secret (`parsedUri.secret`). Next, locate `// TODO: Store the secret using the credentials manager` and replace it with: ```dart await _credentialsManager.storeSecret(secret: parsedUri.secret); ``` This uses the `_credentialsManager.storeSecret(secret: parsedUri.secret)` method to securely store the parsed secret (private key) using the credentials manager (`_credentialsManager`). Next, locate `// TODO: Emit the new state with the updated properties` and replace it with: ```dart emit(state.copyWith( type: type, walletPubkey: parsedUri.pubkey, myPubkey: myPubkey, relay: parsedUri.relay, lud16: parsedUri.lud16, )); ``` This emits a new state using `emit(state.copyWith(...))`, updating the type, `walletPubkey`, `myPubkey`, `relay`, and `lud16` based on the parsed URI components. So the `connect` method parses the URI to extract necessary parameters, derives the public key from the secret, securely stores the secret, updates the state with connection details, and initializes the connection to enable ongoing communication and synchronization. ## Initializing Relay and Handling Events Let's move on to the `_initializeConnection()` method. First, locate `// TODO: Initialize the relays service with the relay URL` and replace it with: ```dart await _nwc.relaysService.init(relaysUrl: [state.relay]); ``` This initializes the `relaysService` provided by `_nwc` with the relay URL obtained from the current state, preparing the NWC to communicate with the specified relay. 
Next, locate `// TODO: Create a subscription filter for events` and replace it with: ```dart final subToFilter = Request( filters: [ Filter( kinds: const [23195], authors: [state.walletPubkey], since: DateTime.now(), ) ], ); ``` This creates a request object with a filter specifying the types of events (`kinds`) and authors (`authors`) to subscribe to. It filters events of kind **23195**, the response event kind in the protocol, authored by the current wallet public key (`state.walletPubkey`). Next, locate `// TODO: Start the events subscription using the relays service` and replace it with: ```dart final nostrStream = _nwc.relaysService.startEventsSubscription( request: subToFilter, onEose: (relay, eose) => debugPrint('subscriptionId: ${eose.subscriptionId}, relay: $relay'), ); ``` This initiates a subscription (`startEventsSubscription`) to the Nostr relay service (`_nwc.relaysService`) based on the defined filter request. It listens for specific events matching the filter criteria (`Filter`). Next, locate `// TODO: Restore the secret from the credentials manager` and replace it with: ```dart final secret = await _credentialsManager.restoreSecret(); ``` This restores the secret (private key) associated with the wallet from the credentials manager. This secret is necessary for decrypting incoming event content. Next, locate `// TODO: Listen to the nostr stream for events` and replace it with: ```dart nostrStream.stream.listen((Event event) { // TODO: Event handling logic }); ``` This sets up a listener on the `nostrStream.stream` to listen for incoming events (`Event` objects) from the Nostr relay service. It executes the provided callback whenever a new event is received. Next, replace `// TODO: Event handling logic` with: ```dart // 1 if (event.kind == 23195 && event.content != null) { // 2 final decryptedContent = _nwc.nip04.decrypt( secret, state.walletPubkey, event.content!, ); // 3 final content = _nwc.nip47.parseResponseResult(decryptedContent); // 4 if (content.resultType == NWCResultType.get_balance) { // TODO: Handle get_balance result } else if (content.resultType == NWCResultType.make_invoice) { // TODO: Handle make_invoice result } else if (content.resultType == NWCResultType.pay_invoice) { // TODO: Handle pay_invoice result } else if (content.resultType == NWCResultType.lookup_invoice) { // TODO: Handle lookup_invoice result } else if (content.resultType == NWCResultType.error) { // TODO: Handle error result } } ``` In this code: 1. We first check if the event that we received has the `kind` equal to **23195** ([the response event kind](https://github.com/nostr-protocol/nips/blob/master/47.md#events)) and ensure the content is not null. 2. As defined in **NIP47**, the content of requests and responses is encrypted with **NIP04**. Therefore, we decrypt the content of the incoming event with the retrieved secret and the wallet public key. 3. We parse the decrypted content to determine the type of result (`NWCResultType`) contained in the event. 4. Finally, we handle different types of results (`get_balance`, `make_invoice`, `pay_invoice`, `lookup_invoice`, `error`) and update the state accordingly. Up to this point, we have set up subscriptions to receive specific events, decrypt incoming event content, handle different types of results (such as balance updates, invoice creation or payment, lookup results, and errors), and update the app state accordingly. We will fill in the handling of each result type in the sections that follow. 
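Before moving on, it may help to see what the decrypted content of a kind-23195 event actually looks like. Per the NIP47 spec, it is a JSON object of roughly this shape (the values below are illustrative, not real wallet output):

```json
{
  "result_type": "get_balance",
  "error": null,
  "result": {
    "balance": 100000
  }
}
```

The `result_type` field is what `parseResponseResult` exposes as `content.resultType` in the code above, and amounts such as `balance` are expressed in millisatoshis, as is standard across NIP47.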
## Publishing Events to Relays We'll now work on the `_sentToRelay` method, which is responsible for encrypting a given message, creating an event, and sending it to the Nostr relay. Let's break down each part of this method. First, locate `// TODO: Restore the secret from the credentials manager` and replace it with: ```dart final secret = await _credentialsManager.restoreSecret(); ``` We've covered this in previous sections, so you already understand its purpose. Next, locate `// TODO: Encrypt the message using NIP04` and replace it with: ```dart final content = _nwc.nip04.encrypt( secret, state.walletPubkey, jsonEncode(message), ); ``` This encrypts the `message` content using the **NIP04** encryption method provided. It uses the retrieved secret (private key) and the wallet public key to encrypt the JSON-encoded message. Next, locate `// TODO: Create an event request with the encrypted content` and replace it with: ```dart final request = Event.fromPartialData( kind: 23194, content: content, tags: [ ['p', state.walletPubkey] ], createdAt: DateTime.now(), keyPairs: KeyPairs(private: secret), ); ``` This constructs an Event object (request) using the encrypted content. It assigns a kind (**23194** for a request, [as defined in the NIP47 protocol](https://github.com/nostr-protocol/nips/blob/master/47.md#events)), attaches tags (like `['p', state.walletPubkey]`), specifies the creation time (`DateTime.now()`), and includes the `KeyPairs` for encryption (with the private key). Finally, locate `// TODO: Send the event to relays with a timeout` and replace it with: ```dart await _nwc.relaysService.sendEventToRelays( request, timeout: const Duration(seconds: 3), ); ``` This sends the constructed request (`Event` object) to the Nostr relays. So the `_sentToRelay` method encapsulates the process of encrypting a message, constructing an Event request object with encrypted content, and sending it to the Nostr relays through the NWC. This functionality is crucial for interacting securely with the Nostr network, facilitating actions like making payments, creating invoices, and receiving updates, while maintaining the privacy and integrity of the transmitted data. ## Connecting a Lightning Wallet Let's run the project and connect a Lightning wallet. After the splash screen, you'll see a screen where the user needs to paste the connection URI of the Lightning wallet. This screen is located in `lib/routes/initial_walkthrough/initial_walkthrough_page.dart`. Open it and follow these steps: Locate `// TODO: Access NWCAccountCubit instance from context` and replace it with: ```dart final cubit = context.read<NWCAccountCubit>(); ``` This retrieves the `NWCAccountCubit` instance using `context.read`. Next, locate `// TODO: Call the connect method on NWCAccountCubit` and replace it with: ```dart await cubit.connect( connectionURI: connectionURI, type: NWCConnectTypes.nwc, ); ``` Here, we're calling `cubit.connect` to initiate the connection process to the user's remote Lightning wallet using the provided `connectionURI` and passing the type as `NWCConnectTypes.nwc`. Finally, locate `// TODO: Replace the current route with the home screen` and replace it with: ```dart navigator.pushReplacementNamed('/'); ``` If the connection is successful (no exceptions thrown), it navigates to the '/' route, typically replacing the current route with a new one, which is the home page. ## Obtaining the Connection URI To get the connection URI, use Lightning wallets that support NWC, such as Mutiny and Alby. 
For this article, we'll use Alby.

1. Go to [NWC by Alby](https://nwc.getalby.com/).
2. Log in with your Alby account.
3. Click on "Connect app" and enter "NWC Test" as the name (you can use any name).
4. Optionally, set a "Monthly budget"; we'll keep it at 100k sats.

![NWC Alby](https://i.ibb.co/3r9ffg6/ss2.png)

5. Click "Next" and then "Copy pairing secret".

![NWC Alby](https://i.ibb.co/nCDLsv2/ss3.png)

The connection URI will look something like this:

```text
nostr+walletconnect://69effe7b49a6dd5cf525bd0905917a5005ffe480b58eeb8e861418cf3ae760d9?relay=wss://relay.getalby.com/v1&secret=f488038d4e52a63e8e6f29a0be46e683c4e08b7550c2d76be9712b7da149a05a&lud16=aniketamborebitcoindev@getalby.com
```

### Understanding the Connection URI

The connection URI generated by the wallet service consists of the following components:

- **Protocol**: The URI begins with `nostr+walletconnect://`, indicating the use of the Nostr Wallet Connect protocol.
- **Base Path (Hex-encoded Pubkey)**: This uniquely identifies the user's wallet within the protocol. It's crucial for establishing a secure connection.
- **Query String Parameters**:
  - `relay`: This parameter is required and specifies the URL of the relay where the wallet service is connected and will listen for events. Multiple relays can be listed.
  - `secret`: Also required, this is a randomly generated 32-byte hex encoded string. It's used by the client to sign events and encrypt payloads when communicating with the wallet service.
  - `lud16`: While optional, it's recommended as a Lightning address that clients can use to automatically set up the lud16 field.

To proceed, let's hot restart your app. Paste the connection URI into the input field and click "Connect".

![Nostr Pay - Home screen](https://i.ibb.co/QQbWqKf/ss4.png)

After connecting, you may notice your balance displayed as 0 sats on the home page, as we haven't yet retrieved it from the wallet. Let's move on to retrieving the balance in the next step.

## Command: get_balance

To get the correct balance of the wallet, go to `lib/bloc/nwc_account/nwc_account_cubit.dart`.

Locate `// TODO: Define a message to request balance` and replace it with:

```dart
final balanceMessage = {"method": "get_balance"};
```

This is one of the wallet commands [defined in NIP47 for getting the current balance](https://github.com/nostr-protocol/nips/blob/master/47.md#get_balance) of the user's wallet.

Next, locate `// TODO: Call the _sentToRelay function to send the balance message` and replace it with:

```dart
await _sentToRelay(balanceMessage);
```

Here, we're calling the `_sentToRelay` method with `balanceMessage` as the parameter.

We are now publishing a request event to the relay with the command `get_balance`. When our wallet service receives this event, it will publish the result or response for us.

To handle this, locate `// TODO: Handle get_balance result` inside the `_initializeConnection()` method and replace it with:

```dart
if (content.resultType == NWCResultType.get_balance) {
  final balanceResult = content.result as Get_Balance_Result;
  _walletBalanceController.add(balanceResult);
  debugPrint('balance: ${balanceResult.balance}');
}
```

In the above:

- When the `resultType` is `get_balance`, it casts `content.result` to `Get_Balance_Result`.
- It then adds `balanceResult` to `_walletBalanceController`, which is used to stream wallet balance changes.

Now, simply hot restart the application, and you will see the current balance of your Lightning wallet on the home screen.
![Nostr Pay - Home screen](https://i.ibb.co/4SYBjwn/ss5.png) ### Formatting and Refreshing Let's format the balance shown on the home screen and add a method to refresh it. Go to `lib/routes/home/home_page.dart`. Locate `// TODO: Read NWCAccountCubit instance from context` and replace it with: ```dart final accountCubit = context.read<NWCAccountCubit>(); ``` Next, locate `// TODO: Call the refresh method on accountCubit to refresh data` and replace it with: ```dart await accountCubit.refresh(); ``` The `refresh()` method simply calls the sync method inside `NWCAccountCubit`. Next, locate `// TODO: Format the balance using the formatBalance method on accountCubit` and replace it with: ```dart '${accountCubit.formatBalance(state.balance)} sats', ``` This method formats the balance. Now, hot reload the app, and you will see the changes made to the balance on the home screen. You can also pull to refresh the balance of the wallet. ![Nostr Pay - Home screen](https://i.ibb.co/7nqpN4Y/ss6.png) Next, we'll implement the `make_invoice` functionality. ## Command: make_invoice Let's head over to `lib/bloc/nwc_account/nwc_account_cubit.dart` and take a look at the `makeInvoice` method which is responsible for creating a new invoice. It takes an amount in satoshis and an optional description as parameters. Locate `// TODO: Construct the request object for making an invoice` and replace it with: ```dart final req = { "method": "make_invoice", "params": { "amount": amountInSats * 1000, // value in msats "description": desc, // invoice's description, optional } }; ``` Here, `req` constructs the [`make_invoice` command defined in NIP47](https://github.com/nostr-protocol/nips/blob/master/47.md#make_invoice): 1. `amount`: The amount in millisatoshis (msats). It converts the provided amount in satoshis to msats by multiplying by 1000. 2. `description`: Optional description for the invoice. Next, locate `// TODO: Send the request to the relay using _sentToRelay` and replace it with: ```dart await _sentToRelay(req); ``` This sends the constructed request to the relays using the `_sentToRelay` method. Now, locate `// TODO: Handle make_invoice result` inside the `_initializeConnection()` method and replace it with: ```dart else if (content.resultType == NWCResultType.make_invoice) { final invoiceResult = content.result as Make_Invoice_Result; debugPrint('invoice: ${invoiceResult.invoice}'); emit( state.copyWith( resultType: NWCResultType.make_invoice, makeInvoiceResult: invoiceResult, ), ); } ``` Here: - When `resultType` is `make_invoice`, it casts `content.result` to `Make_Invoice_Result`. - It emits a new state with `makeInvoiceResult` to notify listeners about the newly created invoice. Now, head over to `lib/routes/create_invoice/create_invoice_page.dart` and look at the `_createInvoice()` method, responsible for initiating the invoice creation process in `NWCAccountCubit`. Locate `// TODO: Access NWCAccountCubit instance from context` and replace it with: ```dart final cubit = context.read<NWCAccountCubit>(); ``` This retrieves the `NWCAccountCubit` instance. Next, locate `// TODO: Call the makeInvoice method on NWCAccountCubit` and replace it with: ```dart await cubit.makeInvoice( amountInSats: int.parse(_amountController.text), description: _descriptionController.text, ); ``` Here, `makeInvoice` is called on the cubit with the parsed amount and description from user inputs. 
In the `CreateInvoicePage`, when the user clicks on "Create Invoice" and supplies the amount and description, the `makeInvoice` method is invoked. The response handling is done through a listener. Now, let's move on to handling the response and errors. ### Listener in `create_invoice_page.dart` The listener reacts to state changes in `NWCAccountCubit`, specifically handling the results of making an invoice or encountering an error. Locate `// TODO: Handle the case when the result type is 'make_invoice' and the makeInvoiceResult is not null` and replace it with: ```dart // 1 if (resultType != null && resultType == NWCResultType.make_invoice && state.makeInvoiceResult != null) { // 2 final navigator = Navigator.of(context); navigator.popUntil((route) => route.settings.name == "/"); // 3 // Navigate to InvoiceQrPage with the makeInvoiceResult. navigator.push( MaterialPageRoute( builder: (context) => InvoiceQrPage( makeInvoiceResult: state.makeInvoiceResult!, ), ), ); } ``` In the above code: 1. It checks if `resultType` is `make_invoice` and `makeInvoiceResult` is not null. 2. If conditions are met, it pops routes until the root ("/") route (home screen). 3. Then, it pushes a new route to display `InvoiceQrPage`, passing `makeInvoiceResult` to it. Next, locate `// TODO: Handle the case when the result type is 'error' and the nwcErrorResponse is not null` and replace it with: ```dart // 1 else if (resultType == NWCResultType.error && state.nwcErrorResponse != null) { // 2 final errorMessage = state.nwcErrorResponse!.errorMessage; // 3 showToast( context, title: errorMessage, type: ToastificationType.error, ); } ``` Here: 1. It checks if `resultType` is error and `nwcErrorResponse` is not null. 2. If conditions are met, it extracts the error message from `nwcErrorResponse`. 3. Displays the error message as a toast notification using `showToast`. ## Handling Errors in `NWCAccountCubit` We need to handle error results in `NWCAccountCubit`. Head over to `lib/bloc/nwc_account/nwc_account_cubit.dart` and locate `// TODO: Handle error result` and replace it with: ```dart else if (content.resultType == NWCResultType.error) { final error = content.result as NWC_Error_Result; debugPrint('error message: ${error.errorMessage}'); emit( state.copyWith( resultType: NWCResultType.error, nwcErrorResponse: error, ), ); } ``` Here: - When `resultType` is error, it casts `content.result` to `NWC_Error_Result`. - Emits a new state with `nwcErrorResponse` to notify listeners about the encountered error. Now, hot restart the app. On the home page, click on "Receive", enter the amount and description, and click "Create Invoice". ![Nostr Pay - Create invoice screen](https://i.ibb.co/Zh9HdsY/ss7.png) After the invoice is generated, you will see the `InvoiceQrPage`. ![Nostr Pay - Invoice qr screen](https://i.ibb.co/QjCXPVM/ss8.png) Currently, if someone pays this invoice, nothing happens. To handle this, we'll look into the `lookup_invoice` command next. ## Command: lookup_invoice Let's head back to `lib/bloc/nwc_account/nwc_account_cubit.dart` and look at the `lookupInvoice` method responsible for sending a request to check the status of a specific invoice. 
Locate `// TODO: Construct the message to lookup the invoice` and replace it with: ```dart final message = { "method": "lookup_invoice", "params": { "invoice": invoice, } }; ``` Here, `message` constructs the [`lookup_invoice` command as defined in NIP47](https://github.com/nostr-protocol/nips/blob/master/47.md#lookup_invoice): - The `params` key contains a map with the `invoice` to be looked up. Next, locate `// TODO: Send the lookup request to the relay using _sentToRelay` and replace it with: ```dart await _sentToRelay(message); ``` This sends the constructed request to the relays using the `_sentToRelay` method. Now, locate `// TODO: Handle lookup_invoice result` inside the `_initializeConnection()` method and replace it with: ```dart else if (content.resultType == NWCResultType.lookup_invoice) { // 1 final result = content.result as Lookup_Invoice_Result; // 2 emit( state.copyWith( resultType: NWCResultType.lookup_invoice, lookupInvoiceResult: result, ), ); } ``` Here: 1. When `resultType` is `lookup_invoice`, it casts `content.result` to `Lookup_Invoice_Result`. 2. Emits a new state with updated `lookupInvoiceResult` to notify listeners about the invoice status. Next, head over to `lib/routes/invoice_qr/invoice_qr_page.dart`. This page displays a QR code for the generated invoice and periodically checks if the invoice has been paid. ### Inside `_startPolling()` method Locate `// TODO: Access NWCAccountCubit instance from context` and replace it with: ```dart final cubit = context.read<NWCAccountCubit>(); ``` This retrieves the `NWCAccountCubit` instance. Next, locate `// TODO: Call the lookupInvoice method on NWCAccountCubit with the invoice ID` and replace it with: ```dart await cubit.lookupInvoice(widget.makeInvoiceResult.invoice); ``` Here, `lookupInvoice` is called on the cubit with the invoice ID from `widget.makeInvoiceResult`. ### Listener in `invoice_qr_page.dart` The listener listens for changes in `NWCAccountCubit` state and reacts based on the type of result received. Locate `// TODO: Handle the case when the result type is 'lookup_invoice' and the result is not null` and replace it with: ```dart // 1 if (resultType == NWCResultType.lookup_invoice && result != null) { // 2 final isPaid = result.settledAt != null ? true : false; if (isPaid) { // 3 final navigator = Navigator.of(context); navigator.popUntil((route) => route.settings.name == "/"); navigator.push( MaterialPageRoute( builder: (_) => const SuccessPage( title: 'Payment Received Successfully', description: 'Congratulations! You have successfully received sats from the sender.', ), ), ); } } ``` Here: 1. It checks if `resultType` is `lookup_invoice` and `result` is not null. 2. Checks if `settledAt` field in the result is not null, indicating that the invoice has been paid. 3. If paid, pops routes until root ("/") route and navigates to `SuccessPage` with a success message. Next, locate `// TODO: Handle the case when the result type is 'error' and the nwcErrorResponse is not null` and replace it with: ```dart // 1 else if (resultType == NWCResultType.error && state.nwcErrorResponse != null) { final errorMessage = state.nwcErrorResponse!.errorMessage; // 2 showToast( context, title: errorMessage, type: ToastificationType.error, ); } ``` Here: 1. It checks if `resultType` is error and `nwcErrorResponse` is not null. 2. If conditions are met, it shows a toast notification with the error message. That completes the `lookup_invoice` command implementation. Now, hot restart the app, create an invoice, and try paying it. 
You'll see the `lookup_invoice` command in action, navigating to the `SuccessPage` upon successful payment. ![Nostr Pay - Success Receive screen](https://i.ibb.co/zx09n7C/ss9.png) We've covered `get_balance`, `make_invoice`, and `lookup_invoice`. Let's proceed to the final command of this article: `pay_invoice`. ## Command: pay_invoice For the final command, head over to `lib/bloc/nwc_account/nwc_account_cubit.dart` and look at the `payInvoice` method which is responsible for paying a specific invoice provided by the user. Locate `// TODO: Construct the message to pay the invoice` and replace it with: ```dart final message = { "method": "pay_invoice", "params": { "invoice": invoice, } }; ``` Here, `message` constructs the [`pay_invoice` command as defined in NIP47](https://github.com/nostr-protocol/nips/blob/master/47.md#pay_invoice): - The `params` key contains a map with the `invoice` to be paid. Next, locate `// TODO: Send the payment request to the relay using _sentToRelay` and replace it with: ```dart await _sentToRelay(message); ``` This sends the constructed request to the relays using the `_sentToRelay` method. Now, locate `// TODO: Handle pay_invoice result` inside the `_initializeConnection()` method and replace it with: ```dart else if (content.resultType == NWCResultType.pay_invoice) { // 1 final invoiceResult = content.result as Pay_Invoice_Result; // 2 _paymentResultStreamController.add( NWCPaymentResult( resultType: NWCResultType.pay_invoice, payInvoiceResult: invoiceResult, ), ); // 3 emit( state.copyWith( resultType: NWCResultType.pay_invoice, payInvoiceResult: invoiceResult, ), ); } ``` Here: 1. When `resultType` is `pay_invoice`, it casts `content.result` to `Pay_Invoice_Result`. 2. Adds a new payment result to `_paymentResultStreamController` to stream to listeners. 3. Emits a new state with updated `payInvoiceResult` to reflect the payment operation. Now, let's integrate this into the UI flow for paying an invoice. ### `PaymentRequestDialog` in `payment_dialogs/payment_request_dialog.dart` This dialog handles the confirmation of paying an invoice after decoding it. Locate `// TODO: Call the payInvoice method on accountCubit with the invoice bolt11` and replace it with: ```dart paymentFunc: () => accountCubit.payInvoice(widget.invoice.bolt11), ``` Here: - `payInvoice` method is called on `accountCubit` with the `bolt11` invoice string from `widget.invoice`. ### Using `payInvoice` in UI Flow From the home screen, when the user clicks on "Send", they are taken to the `PayInvoicePage`. After pasting the invoice there and clicking on "Pay Invoice" button, the invoice is decoded, and a confirmation dialog (`PaymentRequestDialog`) is shown to the user. If the user clicks "Yes" in the `PaymentRequestDialog`, the `payInvoice` method we just implemented is called to initiate the payment process. ### Testing the Flow Hot restart the app. Click on "Send" on the home page, paste an invoice from a different wallet, and click "Pay Invoice". ![Nostr Pay - Enter payment info](https://i.ibb.co/TPS7MGs/ss10.png) In the `PaymentRequestDialog`, click "Yes". ![Nostr Pay - Payment request dialog](https://i.ibb.co/L8nMTBh/ss11.png) You'll see a `ProcessingPaymentDialog` and after that, a success screen if the payment is successful. ![Nostr Pay - Success send screen](https://i.ibb.co/QjdxqtZ/ss12.png) This completes the integration of the `pay_invoice` command. 
Now you have a complete flow for creating, receiving, and paying invoices using the `NWCAccountCubit` and handling the respective results and errors.

## Conclusion

As we come to the end of our journey exploring Nostr Wallet Connect, I want to express my heartfelt thanks to the developers and the vibrant community driving innovations in Nostr, NWC, and the Lightning Network space. Your dedication and expertise have made it straightforward for developers like me to integrate Lightning Network capabilities into our apps.

As we continue to explore Lightning Network features in our apps, remember that this technology marks just the beginning of a transformative era in Bitcoin's scalability. If you have any questions or wish to share your experiences, please feel free to reach out to me on [Twitter](https://twitter.com/Anipy1), [Nostr](https://snort.social/p/npub1clqc0wnk2vk42u35jzhc3emd64c0u4g6y3su4x44g26s8waj2pzskyrp9x), or [LinkedIn](https://www.linkedin.com/in/aniketambore/).

Thank you for joining me on this journey. ⚡🌊
anipy
1,897,710
Communication Patterns in Microservices
Exploring Communication Patterns in Microservices: Synchronous, Asynchronous, and...
0
2024-06-23T11:05:55
https://dev.to/ali_tariq_90f2c6a125b095c/communication-patterns-in-microservices-3l25
microservices, systemdesign, javascript
## Exploring Communication Patterns in Microservices: Synchronous, Asynchronous, and Publish/Subscribe In the realm of microservices architecture, effective communication between services is critical for building scalable and maintainable systems. Understanding the different communication patterns—synchronous, asynchronous one-to-one, and publish/subscribe—can help architects and developers choose the best approach for their specific needs. In this blog, we will explore these communication patterns, their use cases, and their advantages and disadvantages. ### Synchronous Communication **Synchronous communication** involves direct communication between services where the client waits for the response from the server. This pattern is often implemented using HTTP or gRPC. #### Use Cases 1. **Real-time Data Retrieval:** When immediate data is required, such as fetching user details for a profile page. 2. **Request/Response Scenarios:** Where the outcome depends on the immediate response, like form submission and validation. #### Advantages - **Simplicity:** Easy to implement and understand. - **Immediate Feedback:** Clients get instant responses, making it suitable for real-time applications. #### Disadvantages - **Tight Coupling:** Services are directly dependent on each other, which can affect system resilience. - **Latency:** Response times can be slower due to network delays and processing times. - **Scalability Issues:** High load on a service can lead to bottlenecks, affecting performance. ### Asynchronous Communication (One-to-One) **Asynchronous one-to-one communication** involves services communicating via message queues. The client sends a message to a queue, and the server processes the message independently. #### Use Cases 1. **Task Offloading:** Suitable for tasks that do not require an immediate response, like email sending or background processing. 2. **Decoupling Services:** When services need to operate independently, reducing direct dependencies. #### Advantages - **Decoupling:** Services are loosely coupled, improving resilience and flexibility. - **Scalability:** Easier to handle high loads by distributing tasks through queues. - **Fault Tolerance:** If a service fails, messages can be reprocessed from the queue. #### Disadvantages - **Complexity:** Requires managing message queues and ensuring message delivery. - **No Immediate Feedback:** Clients do not receive immediate responses, which might not be suitable for all scenarios. ### Publish/Subscribe Communication **Publish/subscribe communication** (pub/sub) involves broadcasting messages to multiple subscribers. A service publishes a message to a topic, and all subscribed services receive the message. #### Use Cases 1. **Event-Driven Architecture:** Suitable for systems where multiple services need to react to events, such as user registration or order processing. 2. **Broadcasting Updates:** For sending updates to multiple consumers, like real-time notifications or updates. #### Advantages - **Decoupling:** Publishers and subscribers are highly decoupled, promoting independence. - **Scalability:** Easily scalable as new subscribers can be added without affecting existing services. - **Flexibility:** Multiple services can react to the same event in different ways. #### Disadvantages - **Complexity:** Requires managing topics and subscriptions. - **Message Ordering:** Ensuring the correct order of message processing can be challenging. 
- **Eventual Consistency:** Systems need to handle eventual consistency, as updates may not be instant.

### Choosing the Right Pattern

Selecting the appropriate communication pattern depends on the specific requirements of your system. Here are some guidelines:

1. **Synchronous Communication:** Use when the caller needs an immediate answer and can afford to block while waiting. Ideal for simple request/response scenarios.
2. **Asynchronous One-to-One Communication:** Use for background processing, task offloading, and when decoupling services is essential.
3. **Publish/Subscribe Communication:** Use for event-driven architectures, broadcasting updates, and scenarios where multiple services need to react to the same event.

### Conclusion

Understanding and implementing the right communication pattern in microservices architecture is crucial for building scalable, resilient, and maintainable systems. Each pattern—synchronous, asynchronous one-to-one, and publish/subscribe—has its own strengths and weaknesses, and the choice depends on the specific needs of your application. By leveraging these patterns effectively, you can design systems that are both robust and flexible, capable of handling the demands of modern software applications.

By integrating these patterns into your microservices architecture, you can achieve a balanced and efficient communication strategy that enhances the overall performance and reliability of your system. Whether you're building real-time applications, decoupled services, or event-driven systems, understanding these communication patterns will equip you with the knowledge to make informed decisions and optimize your microservices architecture. A minimal code sketch below illustrates all three patterns side by side.
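### A Minimal Sketch of the Three Patterns

To make the contrast concrete, here is a small, illustrative sketch in Java. Everything in it is hypothetical: the user-service URL, the in-process queue standing in for a message broker, and the tiny topic registry standing in for a real pub/sub system (production systems would typically use an HTTP framework and a broker such as RabbitMQ or Kafka).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class CommunicationPatterns {

    // 1. Synchronous: the caller blocks until the (hypothetical) user service responds.
    static String fetchUserProfile(String userId) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://user-service/users/" + userId))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // 2. Asynchronous one-to-one: the producer enqueues a task and moves on;
    //    a single worker consumes it later, so the two sides stay decoupled.
    static final BlockingQueue<String> emailQueue = new LinkedBlockingQueue<>();

    static void requestWelcomeEmail(String address) {
        emailQueue.offer(address); // returns immediately, no waiting for the worker
    }

    static void emailWorker() {
        try {
            while (true) {
                String address = emailQueue.take(); // blocks until a message arrives
                System.out.println("Sending welcome email to " + address);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // 3. Publish/subscribe: every subscriber of a topic receives every event.
    static final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    static void subscribe(String topic, Consumer<String> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    static void publish(String topic, String event) {
        topics.getOrDefault(topic, List.of()).forEach(handler -> handler.accept(event));
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(CommunicationPatterns::emailWorker);
        worker.setDaemon(true);
        worker.start();

        // Two independent services react to the same event in different ways.
        subscribe("user.registered", event -> System.out.println("Analytics recorded: " + event));
        subscribe("user.registered", CommunicationPatterns::requestWelcomeEmail);

        publish("user.registered", "alice@example.com");
        Thread.sleep(200); // give the email worker a moment to drain the queue
    }
}
```

Even in this toy form, the trade-offs discussed above are visible: the synchronous call blocks its caller, the queue decouples the producer from the consumer at the cost of immediate feedback, and the topic fans a single event out to many independent handlers.

Happy coding!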
ali_tariq_90f2c6a125b095c
1,897,709
Buying a v2ray VPN
In today's world, where free access to the internet faces numerous restrictions for many Iranian users...
0
2024-06-23T11:05:13
https://dev.to/filterbreaker/khryd-fyltrshkhn-v2ray-58gb
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fje9qm2rpin8fol5xrk7.png)

In today's world, where free access to the internet faces numerous restrictions for many Iranian users, buying a v2ray VPN has become one of people's main concerns. With the increasing number of companies and vendors offering this service, choosing a reliable and trustworthy option has become very difficult for users. Among them, GalaVPN, as one of the oldest providers of V2Ray services in Iran, has earned a special position. The company started offering secure, high-speed VPNs when the national internet filtering plan began in the country, and it now has more than 50 sales representatives across the country.

One of GalaVPN's major advantages is its high-speed servers and specialist staff based in Germany and the United States. This allows the company to offer high-quality services to its customers. In addition, GalaVPN imposes no limits on the number of users or on internet speed, a feature that sets it apart from many of its competitors.

Another notable point about GalaVPN is its range of packages at reasonable prices. Understanding the different needs of users, the company offers various subscriptions with different durations and costs so that every user can choose the option that suits them.

Moreover, with a rating of 4.4 out of 5 stars on the reputable TrustPilot site, GalaVPN has managed to earn the satisfaction of many of its customers. This high score reflects the quality of the services provided and the company's commitment to delivering a good user experience.

[Buying a v2ray VPN](https://galavpn.com/%D8%AE%D8%B1%DB%8C%D8%AF-vpn/)

For those who intend to buy a v2ray VPN, GalaVPN is considered a safe and reliable option. The company gives new customers the chance to receive 10 gigabytes of free internet as a trial before purchasing a subscription. This opportunity helps users test the service, verify its quality and speed, and then decide to buy a subscription.

Given its strong track record, the quality of its services, the lack of limits on the number of users and on internet speed, its variety of reasonably priced packages, and the trust of its customers, GalaVPN can be an ideal choice for those looking for a powerful, secure, and high-speed v2ray VPN. Buying from this company can bring Iranian users an enjoyable experience of using the free internet. By offering 10 gigabytes of free internet at the start, GalaVPN gives users the opportunity to test its service so they can purchase their desired subscription with full confidence.
filterbreaker
1,897,852
Unlocking the Power of M365 Copilot: Access external data through a plugin
I am very fascinated by the new copilot plugin in M365. But when you purchase the licence, you get...
0
2024-06-23T17:27:32
https://blog.bajonczak.com/creating-a-compilot-plugin/
ai, m365, githubcopilot
---
title: Unlocking the Power of M365 Copilot: Access external data through a plugin
published: true
date: 2024-06-23 11:00:36 UTC
tags: AI,M365,Copilot
canonical_url: https://blog.bajonczak.com/creating-a-compilot-plugin/
---

![Unlocking the Power of M365 Copilot: Access external data through a plugin](https://blog.bajonczak.com/content/images/2024/06/blog-image-3.png)

I am very fascinated by the new copilot plugin in M365. But when you purchase the licence, you get only a limited number of possibilities. Yes, it is helpful to summarize a long email or tasks from a meeting. But using it with your internal systems and complex data would be even more helpful. You can create your own plugins that will be used by Microsoft Copilot.

# What is a plugin in Copilot?

A plugin for Copilot is a reusable (and maybe small) piece of code (a building block). Let's talk about pro-code plugins. These types of plugins are normally extensions within a bot. So hey, it's an AI that's calling a bot.

# The use-case

Let's assume you have a small database of absences and want to know who is on vacation on a specific date. Normally, you would write the sentence "Hey, tell me, who is on vacation in August?" and you would get some answer, maybe even a helpful one, but M365 Copilot doesn't know anything about absence data. This is where the plugin comes into play.

# The Plugin Manifest

How to create a Copilot plugin is described in my post [here](https://blog.bajonczak.com/how-to-integrate-you-sap-data-into-you-ai-model/). But you can also use the existing GitHub project (see the end of the post). The manifest is the descriptive entry point from which Copilot reads the plugin's information. I decided to use the following definition of a compose extension.

```
"composeExtensions": [
    {
      "botId": "${{BOT_ID}}",
      "commands": [
        {
          "id": "absence",
          "context": [
            "compose",
            "commandBox"
          ],
          "description": "Get the absences for the specific date or daterange.",
          "title": "Urlaubsabfrage",
          "type": "query",
          "initialRun": true,
          "fetchTask": false,
          "parameters": [
            {
              "name": "startDate",
              "title": "Start",
              "description": "Contains the requested start date. Output is a date",
              "inputType": "date"
            },
            {
              "name": "endDate",
              "title": "End",
              "description": "Contains the requested end date if it exists. Output is a date",
              "inputType": "date"
            }
          ]
        }
      ]
    }
]
```

This will tell M365 Copilot that this plugin can fetch absence data for a specific date or a date range.

> This is the most important thing to do! Copilot will look into the description of each plugin and handler to identify its purpose. So write the descriptions very precisely for this to work.

# The data structure

Next, let me describe the data structure. My table structure looks like this.

![Unlocking the Power of M365 Copilot: Access external data through a plugin](https://blog.bajonczak.com/content/images/2024/06/image-19.png)

So don't worry—I don't create an entry for every day in any range; I created a view for this ;). The view makes it easy to select the data with a simple query. You will see the start and end date of the requested absence, as well as the state and username. Now, it's time to query this kind of data within the plugin.

# The code

Now, it's time to analyze the code that I wrote. This will be done in the following steps:

- Activate the handler
- Parse the parameters
- Select the data
- Returning the data

## Activate the handler

The activation is a little no-brainer.
Every handler class must derive from the TeamsActivityHandler like this:

```
export class PromptApp extends TeamsActivityHandler
....
```

In this class, I will do the "magic" of getting the data and returning it to the caller (actually as a hero card, or multiple ones). To handle the incoming message, I must insert the handling into the message endpoint like this:

```
// Listen for incoming server requests.
server.post("/api/messages", async (req, res) => {
  // Route received a request to adapter for processing
  await adapter.process(req, res as any, async (context) => {
    await promtApp.run(context);
  });
});
```

Here, I created an instance called promtApp, called the run method, and handed over the context to the method.

## Parse the parameters

The input parameters are defined in the manifest.json. Remember this part of the manifest.json:

```
...
"parameters": [
    {
      "name": "startDate",
      "title": "Start",
      "description": "Contains the requested start date. Output is a date",
      "inputType": "date"
    },
    {
      "name": "endDate",
      "title": "End",
      "description": "Contains the requested end date if it exists. Output is a date",
      "inputType": "date"
    }
...
```

So, in the manifest, we defined two input variables:

- startDate
- endDate

These will contain the start date in a defined date format and, when it exists, the end date. The name property of the parameters is now important. I will use these names to map the parameter values to the internal class properties. My mapper looks like this:

```
private async parseParameters(inputParameters: MessagingExtensionParameter[]): Promise<InputParameters> {
    let output: InputParameters = {
        Start: new Date(),
        End: new Date()
    };
    inputParameters.forEach((parameter: MessagingExtensionParameter) => {
        switch (parameter.name) {
            case "startDate":
                output.Start = moment(parameter.value, 'MM/DD/YYYY').toDate();
                break;
            case "endDate":
                output.End = moment(parameter.value, 'MM/DD/YYYY').toDate();
                break;
        }
    });
    return output;
}
```

Now that the input parameters are parsed, we come to the main logic.

## Select the data

To keep it very simple, I will directly select the data from the table; this is not a best practice! Use an OR mapper or something similar to prevent SQL injection. My code looks like this:

```
public async GetAbsencesByDate(start: Date, end: Date): Promise<AbsenceItem[]> {
    let result: AbsenceItem[] = [];
    try {
        this.poolConnection = await sql.connect(this.config);
        let m: Moment = moment(start);
        let startDate: string = m.format("YYYY-MM-DD")
        let query: string = `SELECT * from AbsendeByDateView where [DateValue] = '${startDate}'`;
        if (end != null) {
            let mEnd: Moment = moment(end);
            let endDate: string = mEnd.format("YYYY-MM-DD")
            query = `select * from [dbo].[AbsendeByDateView] where [DateValue] between '${startDate}' and '${endDate}'`;
        }
        console.log(query);
        var resultSet = await this.poolConnection.request().query(query);
        // Map to object
        resultSet.recordset.forEach(row => {
            result.push({
                name: row.UserDisplayName,
                Start: moment(row.Begin).toDate(),
                End: moment(row.End).toDate(),
                Duration: moment(row.End).diff(moment(row.Begin), 'd'),
                State: row.State
            });
            console.log("%s\t%s\t%s", row.UserDisplayName, row.Begin, row.End);
        });
        this.poolConnection.close();
    } catch (err) {
        console.error(err.message);
    }
    return result;
}
```

This retrieves the data from the source table in SQL Server. Next, it transforms the result into a usable (typed) object.

## Returning the data

Now that we have the data we need, I will return it, but not just as a plain JSON representation.
I will return it as an Adaptive Card (hero card) so that I can add some activities (like approving or other things) later. Here is the code for generating the Adaptive Card:

```
public async handleAbsenceRequest(
    context: TurnContext,
    query: MessagingExtensionQuery,
    inputParameters: InputParameters
): Promise<MessagingExtensionResponse> {
    let rates: AbsenceItem[] = await this.absenceService.GetAbsencesByDate(inputParameters.Start, inputParameters.End);
    let attachments: Attachment[] = [];
    rates.forEach((item: AbsenceItem) => {
        // Load the result Hero card template
        attachments.push(this.GetAbsenceHerocard(item));
    });
    // Return the result
    return {
        composeExtension: {
            type: "result",
            attachmentLayout: "list",
            attachments: attachments,
        },
    };
}
```

You will see that I generate a hero card using a separate method.

```
public GetAbsenceHerocard(item: AbsenceItem): any {
    let template: ACData.Template = new ACData.Template(personaCard);
    let preview = CardFactory.heroCard(`${item.name} (Duration ${item.Duration} Day(s))`);
    const card = template.expand({
        $root: {
            Name: item.name,
            Start: item.Start,
            End: item.End,
            Duration: item.Duration,
            State: item.State
        },
    });
    // Adapt to the attachment
    const attachment = { ...CardFactory.adaptiveCard(card), preview };
    return attachment;
}
```

This loads a card template defined in a separate file, applies the data from the given absence information, and pushes it back to the caller. The result is then returned as an attachment, so Copilot gets a preview of the data. The fancy thing is that when you hover over the data reference, it will show you the Adaptive Card.

# Tryout

Now, it's time to test the plugin. First, I activated the plugin, and after that, it was ready to use. I asked M365 Copilot the following.

![Unlocking the Power of M365 Copilot: Access external data through a plugin](https://blog.bajonczak.com/content/images/2024/06/image-21.png)

Copilot then determines that the scope is absences. Next, it extracts the required parameters for "in August", sending over the first and the last day of August as parameters.

![Unlocking the Power of M365 Copilot: Access external data through a plugin](https://blog.bajonczak.com/content/images/2024/06/image-20.png)

With this, the selection can run, and the results are sent back afterward. M365 Copilot then answers the prompt like this:

![Unlocking the Power of M365 Copilot: Access external data through a plugin](https://blog.bajonczak.com/content/images/2024/06/image-23.png)

Nice work!

# Closing words

This is a small and very simple example of how to use the M365 Copilot plugin architecture to access custom data within your organization. Please be aware that I did not do any specific work on security or other concerns, so it's clear that you are responsible for implementing security (especially for HR data).

Take this example implementation as a starting point for your own data structure. It was a very simple case, but it has a big impact on usability because it integrates other systems into the user flow.

Now, enough words, here is the GitHub source:
[GitHub - SBajonczak/copilot-vacation (develop branch)](https://github.com/SBajonczak/copilot-vacation/tree/develop?ref=blog.bajonczak.com)
saschadev
1,897,708
Technologies Change
I still remember dabbling with my father's first computer in the mid-90s when I was around 7 or 8...
0
2024-06-23T11:00:05
https://primalskill.blog/technologies-change
programming, learning, softwaredevelopment, beginners
I still remember dabbling with my father's first computer in the mid-90s when I was around 7 or 8 years old. Back then, the computer "unit" took up the space of a small desk, and the big CRT monitor was placed on top of the unit.

My very first interaction with programming was when I opened a random exe file, saw a bunch of weird characters in the editor, and started editing it. To my surprise, the file didn't run anymore, so I edited more and more characters until, at one point, when I executed the file in the CLI, it turned the prompt green. That was the magic moment for me.

After that point, I wanted nothing more than to sit in front of that IBM 80286 all day long and figure out what made that prompt switch to green; and then I discovered DOS BASIC.

Fast forward to the early 2000s, I discovered web development with HTML4, CSS, JavaScript, and PHP v3. This journey continued until around 2010 when Node.js was released. In 2007, I founded my software development company, and then around 2015-ish, I had a client project requirement to be done in ReactJS on the front-end and Go on the back-end; and the rest is history.

Along the way, I used almost all major programming languages in some form or another, either professionally or just as a hobby, ranging from C and Pascal in high school to Ruby, ASP, C#, modern PHP, Java, and the list could go on.

My point is that in all these years, the technologies have changed radically, but more importantly, what remained constant were the [general programming principles I learned on my journey](https://primalskill.blog/10-books-every-programmer-should-read).

I wasn't using the same technologies in the 2010s as I was in the 2000s, and I'm not using the same tech now as I was a decade ago. If I had focused only on the technology, I would probably still be stuck in BASIC.

**I tell every developer I work with to learn the general programming principles, and they will be fine for the rest of their life.**

If you learn technologies instead of programming, you will become obsolete when (and not IF) that technology falls out of trend or is replaced by some AI automation.

A decade ago my tech stack looked totally different than it does today, and in the next ten years it will look radically different again; I'm 100% sure of it.

----
Cover photo by [seowoo_lee](https://pixabay.com/users/seowoo_lee-21601663/)
feketegy
1,897,685
Nike Website
This project is a responsive Nike website showcasing various product offerings, customer reviews,...
0
2024-06-23T10:46:10
https://dev.to/sudhanshuambastha/nike-website-54gn
webapp, trial, beginnerlearningpurpose, nikewebsite
This project is a responsive Nike website showcasing various product offerings, customer reviews, services, and special offers while also providing a subscription form for updates and newsletters. ### Project Overview The project is built using React and Tailwind CSS, with Vite as the build tool. It consists of different components such as Nav, Button, PopularProductCard, ShoeCard, ServiceCard, ReviewCard, and sections like Hero, PopularProducts, SuperQuality, Services, SpecialOffer, CustomerReviews, Subscribe, and Footer. - GitHub Repository link => [Nike Website](https://github.com/Sudhanshu-Ambastha/Nike-Website) ### Technologies Used The project leverages technologies for seamless development and styling. [![My Skills](https://skillicons.dev/icons?i=nodejs,npm,react,vite,tailwind)](https://skillicons.dev) ### Components Breakdown - **Nav**: Navigation bar for easy website navigation. - **Button**: Reusable component for buttons across the site. - **PopularProductCard**: Display card for popular products. - **ShoeCard**: Card component specifically for showcasing shoes. - **ServiceCard**: Display card for highlighting services offered. - **ReviewCard**: Card component to showcase customer reviews. ### Sections Overview - **Hero**: Main banner displaying the latest Nike arrivals. - **PopularProducts**: Grid layout showcasing popular products. - **SuperQuality**: Section emphasizing the quality of Nike shoes. - **Services**: List of services provided by Nike. - **SpecialOffer**: Highlighting any ongoing special offers. - **CustomerReviews**: Section dedicated to showcasing customer feedback. - **Subscribe**: Subscription form for users to receive updates and newsletters. - **Footer**: Includes essential links and social media icons for easy access. ### Additional Resources For further details on the tools and technologies utilized in this project, feel free to explore the documentation provided by Tailwind CSS, Vite, and React. - [Tailwind CSS Documentation](https://tailwindcss.com/docs/guides/vite) - [Vite Documentation](https://vitejs.dev/) - [React Documentation](https://react.dev/blog/2023/03/16/introducing-react-dev) By leveraging these technologies and structures, this Nike website project offers an engaging user experience while effectively showcasing Nike's product range and services. Create an impact with React, Tailwind CSS, and Vite for your next web development endeavor! I made this app with the help of a YouTube tutorial to learn about Vite.js and Tailwind CSS functionality with React.js. This repository has received 3 stars, 6 clones, and 185 views within 5 hours of being made public. Experience the seamless integration of Tailwind CSS into my web projects by exploring this innovative Nike website project! Dive into the rich functionalities crafted with React, Tailwind CSS, and Vite. While many have cloned my projects, only a few have shown interest by granting them a star. **Plagiarism is bad**, and even if you are copying it, just consider giving it a star. Share your feedback and questions in the comments section as you explore the endless possibilities awaiting you in this dynamic and popular repository.
sudhanshuambastha
1,897,707
Balance Sheet to Code: How a Commerce Student Becomes an IT Guy!
Forget debits and credits, think algorithms and applications! Many commerce students embark on their...
0
2024-06-23T10:44:46
https://dev.to/anshul_bhartiya_37e68ba7b/balance-sheet-to-code-how-a-commerce-student-becomes-an-it-guy-2ad0
beginners, programming, career, computerscience
Forget debits and credits, think algorithms and applications! Many commerce students embark on their academic journey with visions of becoming accounting wizards or financial gurus. However, for some, the allure of technology disrupts those plans, leading them down an unexpected path – the exciting world of IT. This article explores the reasons why commerce students might make this surprising career switch and equips them with the knowledge to navigate this exciting new direction.

## From Engineering Dreams to Spreadsheets and Code: A Personal Journey

Like many young minds, I harbored aspirations of becoming an engineer during my early school years. However, as I ventured deeper into my academic journey, my interests took an unexpected turn. By the time I reached high school, the intricacies of science subjects, though I achieved good grades, failed to ignite the same passion as other disciplines. This led me to explore the commerce stream, where I discovered a growing fascination with accounting and statistics. The world of numbers held a certain appeal, and I thrived in this newfound focus.

Despite my dive into commerce, a childhood fascination with technology remained firmly rooted. The inner workings of apps and websites continued to intrigue me. I'd spend hours dissecting how these digital marvels functioned, yearning to understand the magic behind them. While the commerce path offered a clear direction, a nagging curiosity lingered – could I somehow bridge the gap between my interest in business and this captivating world of technology?

The answer, I soon discovered, was a resounding yes! Commerce students, like myself, aren't confined to traditional career paths. The IT field offers a plethora of opportunities for individuals with strong analytical and problem-solving skills, both of which are honed extensively in commerce programs. This revelation ignited a spark of excitement, and I eagerly began exploring the various pathways that could lead me from the world of spreadsheets and debits to the dynamic realm of code and innovation.

This personal story not only traces my own journey but also highlights a common concern – can commerce students pursue IT careers?

## Finding My Passion: A Kickstart in Tech

Once I began exploring the world of tech more seriously, a whole new level of fascination unfolded. The sheer amount of innovation, the ability to constantly learn and create new things – it was unlike anything I had experienced before. This is where the advice to pursue a field that excites you truly resonates. From the moment I started delving into coding and tech concepts, I knew I was on the right path. It wasn't just about the challenge or the intellectual stimulation; it was the pure enjoyment of the process. Weekends didn't feel like a break from work, they felt like an opportunity to recharge and come back even more excited to explore the endless possibilities of the tech industry. This, for me, is the true definition of finding the right career path – a journey that energizes you, keeps you curious, and never feels like a chore.

## Reasons for the Shift: From Commerce to Coding (A Personal Story)

Forget stock options and spreadsheets – commerce students are ditching their suits and ties (or maybe just the ties) for a new adventure: the wild west of the tech industry! It might seem like a strange turn of events, but hold on to your calculators – there's a method to the madness.
The tech world isn't just about bean counting (although those analytical skills from accounting class will come in handy); it's about building the next big app, designing interfaces that don't look like they were created in the 90s, and solving problems so complex they'd make even the most seasoned accountant's head spin (in a good way, hopefully). So, while your classmates might be prepping for their CPA exams, you'll be learning how to code – a skill that's way cooler than memorizing arcane tax codes (and let's be honest, probably more useful in the long run). Don't worry, your commerce background isn't a dead end – it's more like a secret passage to a world of innovation, creativity, and potentially, self-cleaning robot vacuums (because who has time for chores when you can code them away?). ## Want to connect? Let's chat about code, or anything else that sparks your developer curiosity! Twitter: [Bhartiyaanshul](https://twitter.com/Bhartiyaanshul) LinkedIn: [anshulbhartiya](https://www.linkedin.com/in/anshulbhartiya/) Email: bhartiyaanshul@gmail.com
anshul_bhartiya_37e68ba7b
1,897,690
Devgym Retro 2024 Jan-Jun
1, 2, 3, testing... anyone there? It's been six months since our last post, but even though we didn't...
0
2024-06-23T10:41:45
https://dev.to/devgymbr/devgym-retro-2024-jan-jun-kp3
1, 2, 3, testing... anyone there? It's been six months since our last post, but even though we didn't document it, a lot has happened in that time.

## Comments on the site

Since the beginning of Devgym, the idea was to keep platform development to a minimum, to avoid coding features that would never be used. Along those lines, Devgym was born with an external comment system, using Disqus on a free plan.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnqz4p9cez56jrh7t3c1.png)

But it took me a long time to notice one thing: to comment on Disqus, people needed an account and had to log in to Disqus itself, and that was clearly a very poor experience. A customer reported this to me. The customer was already logged in to Devgym and had to log in again; this could explain why the number of comments on the platform was low.

Since the project was wrapping up its first year, it was a good moment to look at this debt, and so began the saga of implementing the comment system, which honestly is more work than it seems. This journey will come out as a video on the [YouTube channel](https://youtube.com/filhodanuvem), but in short, we implemented a series of React components coupled to the Golang application to reach the current result. The number of comments since then has already surpassed what we had on Disqus, so the investment was worth it.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1hgksmni83f6e9tvbgbp.png)

## Public forum

A pain point that emerged in Devgym's first year was that the platform had no space for people to exchange information, like a Discord server. I thought a lot about whether or not to have a server; Discord is a very complete platform, but there is a learning curve to managing it. Besides, in my view, a communication space for the Devgym community only made sense if it was integrated with the platform's own flow. Things like: "Someone uploaded a solution to a challenge? A new challenge is released? An online event is scheduled for next week? People get notified in the chat."

It's possible to build this integration between the two platforms using Discord's APIs and webhooks, but what if we wanted a private space for Devgym Pro members and a public one for the rest of the users? This and other questions made me believe the first step should be a kind of forum inside the platform, a step toward a more social platform, without depending on an external tool.

Since there was already a comments implementation built in a way that can be plugged into any page (I even wonder whether this isn't a separate micro-SaaS; if anyone wants to talk about that, ping me), we expanded those React components to also work as a forum or chat.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghz013qcl1owgl49476k.png)

I haven't yet stopped to measure the number of visits to this [forum page](https://app.devgym.com.br/rooms/general), but usage is quite low. I don't think the development has proven necessary yet, but in the coming months we'll run some initiatives where the page should get more use.

## Paid traffic

I probably mentioned in previous posts that part of Devgym's 2023 profit should be used to invest in marketing campaigns to try to bring in new customers.
Over the past months I tried at various times to make this work with Google Ads and Facebook Ads, but only last week did we get the first two sales that came from this channel. I won't lie, this was Devgym's biggest win of the year (so far) 🎉 because it unlocks a whole new world.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7xpradbnpvxcb9tcqrj.jpeg)

Things are starting to make more sense in this paid-traffic world, and I'm redoing a marketing course I took about three years ago. It's incredible how knowledge can be absorbed differently depending on your level of maturity with the subject. Want to know more about this marketing world? Say hi in the comments and I'll make a video about it.

## New content

We ended the last post of 2023 on a real high. We were talking with several people about producing new content for other programming languages on the platform, but the truth is we came out of that process with no one to produce it, for various reasons. :(

For personal reasons, I also didn't get around to launching new challenges on the platform. That's the beauty and the stress of being a solo entrepreneur. Do I need to stop? I stop. There is no team depending on me, no employee salaries to pay, and no one besides the customers themselves (who have been super understanding) to hold me accountable. On the other hand, am I the project? If I stop, does the project stop? That's not the definition of a company, and I've been thinking a lot about how to solve this, if there even is a solution for this kind of solo entrepreneurship.

Little by little, I've come to understand that Devgym is not a platform that will win on the quantity of challenges, but on quality. I could fill the platform with API-with-database challenges, but that's not what I believe programmers need. We need something truly challenging, something that takes us outside the domain of our day jobs, something we might never have the chance to build in that environment, and with Devgym it's possible to live those experiences and put them on your résumé.

That last paragraph doesn't mean we won't release new challenges. In fact, we're right in front of the goal, about to release a new one that's really cool and different from the others. This very post also proves that we're warming up, and I recommend you take that pre-workout, because a new training plan is on its way 💪🏾.
devgymbr
1,897,706
Firebase hosting issue on Flutter Web
Recently, I have a project of creating an LLM based on a book. It was running smoothly until my latest...
0
2024-06-23T10:39:15
https://dev.to/pagebook1/firebase-hosting-issue-on-flutter-web-2c8n
webdev, flutter, openai
Recently, I have a project of creating an LLM based on a book. It was running smoothly until my latest deployment failed to run on Firebase Hosting. There is an error log of "unexpected < in DOCTYPE", but it's working in Flutter debug mode, on the Firebase Hosting emulator, and also on Netlify. Only Firebase Hosting is the issue for me. I am stuck now haha.
pagebook1
1,897,025
Thread fundamentals in Java
What is a thread and why is it important in software development? A thread is the smallest...
0
2024-06-23T10:34:21
https://dev.to/rafaaraujoo/thread-fundamentals-in-java-43h1
java, thread, concurrency
### What is a thread and why is it important in software development?

A thread is the smallest sequence of instructions that a computer can execute; it's part of a process, and a process can have multiple threads. Every Java program runs in a thread even if you don't create it: the main thread.

Understanding threads is important because it lets us make full use of the computer's resources and write software with good performance and lower response times. But although threads have all these advantages, they also bring some complexity when dealing with multithreaded programs.

This concept of multithreading was confusing to me: I didn't understand why my program had multiple threads if I hadn't created them?! But then I learned that some frameworks kindly create them for you. If you are from the Spring world, it means that every request your application receives runs in a different thread, which means we must create thread-safe programs.

### And what does thread-safe mean?

I've read many definitions; in simple words, it means that your program must behave predictably when accessed from multiple threads. Let's say you have this simple class that has a method to increment a counter:

```
public class NoSynchronizedCounter {
    public int counter = 0;

    public void incrementCounter() {
        counter++;
    }

    public Integer getCounter() {
        return this.counter;
    }
}
```

The expected behavior here is that every time `incrementCounter` is called, the counter is incremented by one. In a single-threaded program that will be true, but if this code is called from multiple threads, we cannot guarantee that anymore. Let's execute the code:

```
NoSynchronizedCounter simpleCounter = new NoSynchronizedCounter();

Runnable runnable = () -> {
    for (int i=0; i<10_000; i++) {
        simpleCounter.incrementCounter();
    }
};

Thread myThread1 = new Thread(runnable);
Thread myThread2 = new Thread(runnable);

myThread1.start();
myThread2.start();

myThread1.join();
myThread2.join();

System.out.println("Final counter: " + simpleCounter.getCounter());
```

We are calling our `incrementCounter` method 10,000 times in each of two different threads, so we expect the final counter to be 20000, but when I run this code I get a different result: `14959`. Why does this happen?

In the `NoSynchronizedCounter` class, the counter variable is shared between threads, and we cannot guarantee the order in which the threads will execute the method. Two threads can enter `incrementCounter` at the same time; if both read the same value, for example 9, both will increment it to 10, not 11 as would be expected. We lose one increment, and the final result is wrong.

We need to make this class thread-safe, and we have different ways of doing that. One traditional way is to guard the critical section with the `synchronized` keyword, as sketched just below; another is to change the counter type to AtomicInteger, which guarantees that all operations on the counter variable are atomic.
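As a quick illustration of the first approach, here is a minimal sketch (my own variant of the counter above, not code from the project): marking both methods `synchronized` makes every call acquire the object's intrinsic lock, so only one thread at a time can execute the read-modify-write inside `incrementCounter`.

```
public class LockedCounter {
    private int counter = 0;

    // Only one thread at a time can hold this object's lock,
    // so the read-modify-write of counter++ behaves atomically.
    public synchronized void incrementCounter() {
        counter++;
    }

    // Reads take the same lock, guaranteeing the latest value is seen.
    public synchronized int getCounter() {
        return counter;
    }
}
```

Running the same two-thread test against `LockedCounter` always prints 20000. The downside is that every call contends for the same lock; for a single counter, AtomicInteger is usually lighter because it relies on lock-free compare-and-swap instructions instead.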
Let's change it:

```
import java.util.concurrent.atomic.AtomicInteger;

public class SynchronizedCounter {

    public AtomicInteger counter = new AtomicInteger(0);

    public void incrementCounter() {
        counter.incrementAndGet();
    }

    public Integer getCounter() {
        return counter.get();
    }
}
```

Now let's test it:

```
SynchronizedCounter simpleCounter = new SynchronizedCounter();

Runnable runnable = () -> {
    for (int i = 0; i < 10_000; i++) {
        simpleCounter.incrementCounter();
    }
};

Thread myThread = new Thread(runnable);
Thread myThread2 = new Thread(runnable);

myThread.start();
myThread2.start();

myThread.join();
myThread2.join();

System.out.println("Final counter: " + simpleCounter.getCounter());
```

The final result here will always be 20,000. The `AtomicInteger` class is part of the Java concurrent package, and it works well for this case as we have only one piece of shared state between threads; things get more complicated when there are multiple shared states, but that's a topic for another moment. I hope you find this helpful, thanks for reading!

#### References:
Java Concurrency in Practice - Brian Goetz
Effective Java - Joshua Bloch
rafaaraujoo
1,897,703
Make your own Arc at home
Arc Browser is gaining popularity with its cool features and easy-to-use interface. However, some...
0
2024-06-23T10:29:21
https://maxfoc.us/blog/make-your-own-arc-at-home/
productivity, browser, extensions, diy
Arc Browser is gaining popularity with its cool features and easy-to-use interface. However, some long-time Chrome or Firefox users may not want to switch over completely, and some Windows users have experienced performance issues with Arc. Fortunately, you don’t have to give up Chrome or Firefox to enjoy some of Arc’s most remarkable features. This article will show how to use extensions to bring Arc’s best features into your current browser. Whether you’re looking to preview links without leaving your page, organize your tabs more efficiently, or want a fresh look, we have solutions for you. ## Replace Arc’s Peek feature with MaxFocus Great for browsers: Chrome, Edge, Firefox, Opera, Vivaldi, and Brave. **Arc’s Peek feature**: Peek lets you quickly preview the content of a link without opening it in a new tab. It’s handy for checking the content without losing your current page. ![Preview with Little Arc](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zz0yicecektdqo79xcvc.png) **MaxFocus**: [This link preview extension](https://maxfoc.us/) provides a similar feature to other browsers. It lets you preview links in various ways, such as hovering, long-clicking, or using a keyboard shortcut. This keeps you focused on your current task by letting you see what’s behind a link without navigating away. ![Preview with MaxFocus](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xqn9ki12og0rrxtxpkz0.png) ### Why you’ll love MaxFocus - **Customizable previews**: hover over, long-click, drag a link, or use a keyboard shortcut [to preview links](https://maxfoc.us/blog/how-to-open/). - **Reader mode**: enjoy a clean, ad-free reading experience for news and articles - **Enhanced navigation**: you can preview webpages, articles, or [videos](https://www.youtube.com/watch?v=mkHMrwfqN6U) before fully committing to opening them in a new tab. - **Highly customizable**: adjust preview settings, color schemes, and power-saving options to suit your needs. - **AI features**: ask questions right in the preview, get prompt suggestions, and dive deeper into the content. ## Replace vertical tabs from Arc with the Side Space extension Great for browsers: Chrome, Opera You can also enable native vertical tabs in Edge, Vivaldi, and Brave. For Firefox, try Sidebery or Tree Style Tab. ![Vertical tab management in Arc](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z20dmdjovqvgvol4bers.png) Arc’s vertical tabs feature is incredibly convenient if you enjoy using vertical tabs. But if you’re seeking a similar experience in your browser, you should check out [the Side Space](https://www.sidespace.app/) extension. ### What’s Side Space? Side Space is a vertical tabs manager in a side panel of your browser. Imagine all your tabs neatly lined up in work, life, or school categories, making it easy to find exactly what you need. ![Vertical tab management with Side Space](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/572zk3b44hugzlfxgoa9.png) ### Why you’ll love Side Space: - **Organized spaces**: With Side Space, your tabs are neatly categorized into vertical spaces in a side panel. This means you can quickly find, manage, and focus on your tasks. - **Group tabs easily using AI**: Side Space uses AI to help organize and group your tabs efficiently, so you spend less time sorting and doing more. - **Auto sync across devices**: Your tabs automatically sync across devices. You can access your saved spaces from anywhere by logging in to your account. 
- **Dark mode & Custom themes**: Browse comfortably at night with dark mode, and customize the space color palette to make it feel more like you. - **Pin & Search tabs**: Pin key tabs for quick access and instantly find any tab or space with fuzzy search. ### Try it out If you like the vertical tabs in Arc, you’ll love Side Space. It makes tab management easier, giving you more options and control. Side Space helps you keep track of many tabs while working, shopping, or browsing online. ## Advanced: Make an Arc-like UI in your Firefox Love the Arc Browser design but prefer Firefox? Use [the ArcWTF theme](https://github.com/KiKaraage/ArcWTF) to customize your Firefox to look like Arc! ![ArcWTF theme. Screenshot from Reddit](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4l2ccowj1lg42zg30zvs.png) ### Why you’ll love ArcWTF - **Sleek design**: Vertical tabs, rounded corners, and a polished look. - **Enhanced browsing**: Combine with add-ons like Sidebery for better tab management. ## Final thoughts If you’re looking for a new browser, Arc is a great choice. It’s fast, secure, and has a clean interface. But if you’re happy with your current browser, you can still enjoy some of Arc’s best features with extensions and themes like **[MaxFocus](https://maxfoc.us)**, **Side Space**, and **ArcWTF**. Give them a try and see how they can improve your browsing experience.
qostya
1,897,700
VIP Call Girls Islamabad Rawalpindi Bahria town Dha Escorts girls contact.(03279066660)
Call Miss Alisha Malik 03279066660 Y SERVICES AND SPEND THE MEMORABLE TIME WITH HOTTIES *** -- SUPER...
27,825
2024-06-23T10:24:00
https://hashnode.com/@callgirlrawalpindi
writing, vipcallgirlsrawalpindi, callgirlsrawalpindi, escortsgirlsrawalpindi
Call Miss Alisha Malik 03279066660 Y SERVICES AND SPEND THE MEMORABLE TIME WITH HOTTIES *** -- SUPER HOT EROTIC TEENS AT YOUR DOORSTEP IN SINGLE CALL -- --- DEEP THROAT SUCKING, KISSING, LICKING, BODY MASSAGE INCLUDED WITH SEX --- ---- 100% SATISFACTION GUARANTEED ---- Make your nights memorable and sexy with hot, sexy, educated babes. Book our models for incalls and outcall services. You should make some time out of your busy schedule & then some sexy moments with our babes. We can guarantee that you will have the most memorable time of your life with our girls. You should contact us for having a bedroom experience that you simply cant forget in your life. Our hot models are housewives, school, university girls. Some of them are professionals i.e. Nurses, Doctors, Teachers, Anchors, Makeup Artists, Singers, IT professionals, Ramp Models, Fashion Designers and professional dancers. Drinks will be provided on demand. If you want to take our models to long drive or cinemas, you are most welcome. You can come to our place or can go to the hotels or your private place too. We have VVIP, air conditioned, neat and clean rooms. We are providing services in Islamabad / Rawalpindi / Bahria Town / Murree and surrounding areas. We have packages for short time aswell for night or long time duration. **FOR SHORT TIME PERIOD** Romance + Body 2 Body Massage + Blow Job + Kissing + Licking + 1 Shot in your favorite position. (For Prices, Please call or whatsapp on 03279066660) We have more discounted deals for you that will match your budget. We have girls for full night too. Our Girls options are: 1 hour with 1 girl 2 hours with 1 girl 1 hour with 2 girls 2 hours with 2 girls Full Night with 1 girl Full Night with 2 girls FOR PRICING AND LOCATION CALL OR WHATSAPP 03279066660 Our services are: Nude Dancing Romance French Kissing Softcore SEX Hardcore SEX Licking Sucking BDSM Fetish Lesbians Milf 69 Position Cum on Body Tit Fuck Gang Bang Oral SEX Foot Job Foreplay Hand Job Creampie Deep Throating Dominance Erotic SEX Erotic Dance ....and alot more services. Privacy Policy: Our client's privacy and safety is our first priority. We don't compromise on our client's privacy and safety. We have fully secured place in Bahria. For more location, pricing and more details, please call or whatsapp us right now. We are providing 24/7 services.(03279066660**_** -
callgirlsrawalpindi
1,897,696
The Power of Machine Learning Algorithms
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-23T10:12:45
https://dev.to/vidyarathna/the-power-of-machine-learning-algorithms-3cl5
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer Machine learning algorithms learn patterns from data to make predictions or decisions. They include supervised (predictive modeling), unsupervised (clustering), and reinforcement learning (reward-based decision-making). Algorithms like decision trees and neural networks are widely used. ## Additional Context Machine learning drives advancements in autonomous systems, recommendation engines, and natural language processing. Understanding algorithms aids in developing AI applications, optimizing processes, and making data-driven decisions.
vidyarathna
1,897,701
Advanced MongoDB Lookup: Complex MongoDB lookup Queries with Multiple Conditions
As a database management system, MongoDB stands tall as a powerful NoSQL solution. We developers...
0
2024-06-23T10:23:45
https://dev.to/codegirl0101/advanced-mongodb-lookup-complex-mongodb-lookup-queries-with-multiple-conditions-18jf
mongodb, tutorial, backenddevelopment, learning
As a database management system, MongoDB stands tall as a powerful NoSQL solution. We developers often face significant challenges when joining collections in MongoDB. Using the MongoDB `$lookup` operation across multiple fields is not easy and requires many additional steps to structure our data correctly. To shape the data to our needs, we sometimes have to combine operations such as `$match`, `$unwind`, `$group`, and `$map`. However, the complexity of combining these operations can be stressful and error-prone, and the syntax of a `$lookup` with conditions on top of these operations is extremely confusing. It's not uncommon to spend considerable time debugging, trying to get everything to work together. I always get heavy errors with [MongoDB complex lookups](https://www.codegirl0101.dev/2024/06/mongodb-advance-lookup-operations.html); I fix one part and another part of my lookup aggregation code breaks. Recognizing these difficulties, I have developed an ultimate guide to the MongoDB `$lookup` with advanced operations. With this guide, you will have access to detailed explanations and practical examples covering a wide range of scenarios you might face. Whether you are filtering nested documents, flattening arrays, grouping results by specific criteria, or transforming data into a desired format, this blog post aims to save you the time and effort of traveling from one page to another for solutions. With this thorough guide, you won't have to look elsewhere, and I can assure you ChatGPT barely helps with the complex scenarios; I tried many times and got enormous errors. Visit my blog post for a detailed explanation with standard examples: https://www.codegirl0101.dev/2024/06/mongodb-advance-lookup-operations.html
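To give a flavor of what such pipelines look like, here is a minimal sketch of a `$lookup` with join conditions inside a pipeline; the collection and field names (`orders`, `products`, `items.productId`, `stock`) are hypothetical, invented purely for illustration:

```js
// Join each order item to its product, keeping only in-stock products.
db.orders.aggregate([
  { $unwind: "$items" },
  {
    $lookup: {
      from: "products",
      let: { pid: "$items.productId" },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $eq: ["$_id", "$$pid"] },  // join condition
                { $gt: ["$stock", 0] }       // extra condition
              ]
            }
          }
        }
      ],
      as: "product"
    }
  },
  { $unwind: "$product" },
  // Re-group the unwound items back into one document per order.
  { $group: { _id: "$_id", productNames: { $push: "$product.name" } } }
]);
```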
codegirl0101
1,897,132
TWILIO AI CHAT
What I Built CHAT-TWILIO-API is an innovative voice-based application leveraging Twilio...
0
2024-06-23T10:23:39
https://dev.to/wmasivi54623/-twilio-ai-chat-286c
devchallenge, twiliochallenge, ai, twilio
## What I Built CHAT-TWILIO-API is an innovative voice-based application leveraging Twilio Voice and Gemini AI to deliver seamless, intelligent, and human-like voice interactions for customer service and automated assistance. ## Demo [TWILIO AI CHAT](https://chat-twilio-ai-git-main-williammasivis-projects.vercel.app/) [TWILIO-AI-CHAT-FRONTEND](https://github.com/williammasivi/ChatTwilioAI) [TWILIO-AI-CHAT-BACKEND](https://github.com/williammasivi/ChatTwilioAI-backend) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruj1t6rgi84a8hp3xmty.jpg) ## Twilio and AI CHAT-TWILIO-API harnesses the power of Twilio’s Voice API to handle voice calls, combined with the advanced capabilities of Gemini AI to process and respond to customer queries. This integration allows for dynamic voice interactions that can understand and respond to natural language, providing a more personalized and efficient customer service experience. ## Additional Prize Categories :white_check_mark: Twilio Times Two :white_check_mark: Impactful Innovators :white_check_mark: Entertaining Endeavors ## Team Submissions This project is a collaborative effort. Please credit my teammates: - @birusha - @emmanuelbinen Thank you for participating in the Twilio Challenge!
wmasivi54623
1,897,699
[DAY 60-62] I built a random quote machine using React
Hi everyone! Welcome back to another blog where I document the things I learned in web development. I...
27,380
2024-06-23T10:17:49
https://dev.to/thomascansino/day-60-62-i-built-a-random-quote-machine-3kf9
learning, react, webdev, javascript
Hi everyone! Welcome back to another blog where I document the things I learned in web development. I do this because it helps me retain information and concepts, as it is a sort of active recall. On days 60-62, after completing the libraries and frameworks course, I moved on to the projects required to earn the Front End Development Libraries certificate. These are 5 different projects, each with its own user stories to be fulfilled. The first project is a random quote machine. It is a program that displays quotes from popular figures when you click the next quote button. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jmxaxpa0m78wfawx9wej.PNG) I started the project by writing the bare HTML first, just to visualize where the quotes would go, as well as the buttons, author names, and anchor links. I made a div container for the text, a div container for the name of the author, and a div container for the next quote button and anchor links. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djkn619lvmmpicvsequq.PNG) Next, I went straight into JavaScript for functionality. I wanted to practice my React skills, so I used it to build this project. My workflow has always been bare HTML first, then adding the functionality to make sure it works, and lastly finalizing the app with CSS for the design. To add the functionality, I first found a fetch API for random quotes from popular figures. Next, I initialized the states and made functions within my class component to set state for the quotes and author names. After that, I added the function for the next quote button to randomly choose a quote from the fetched data to render. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rvgadv04lm5au904rxr4.PNG) I also added features like share buttons for Twitter and Facebook so that the quotes (along with their respective authors) can be shared to social media. Lastly, I finalized the design of the project using vanilla CSS and made it look visually appealing. With that, I successfully fulfilled the user stories and passed the first project. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwcf89vqzvfk6wr86iit.PNG) Anyways, that’s all for now, more updates in my next blog! See you there!
thomascansino
1,897,698
Understanding Quantum Computing Basics
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-23T10:16:17
https://dev.to/vidyarathna/understanding-quantum-computing-basics-3c8l
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer Quantum computing leverages quantum mechanics to perform computations using qubits. Unlike classical bits, qubits can exist in superposition and entanglement, enabling parallel processing. Potential applications include cryptography, drug discovery, and optimization. ## Additional Context Quantum computing promises exponential speedups for certain problems but faces challenges in error correction and scalability. It represents a frontier in computational research, with ongoing advancements in hardware and algorithms.
vidyarathna
1,897,697
Exploring the Internet of Things (IoT)
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-23T10:14:59
https://dev.to/vidyarathna/exploring-the-internet-of-things-iot-2i90
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer IoT connects everyday devices to the internet, enabling data collection and remote control. It uses sensors and actuators to interact with the physical world. Applications range from smart homes to industrial automation, improving efficiency and convenience. ## Additional Context IoT's growth transforms industries like healthcare, agriculture, and transportation, enhancing decision-making and optimizing resource usage. Security and interoperability challenges require robust solutions for widespread adoption.
vidyarathna
1,897,694
Demystifying the Blockchain
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-23T10:06:23
https://dev.to/vidyarathna/demystifying-the-blockchain-5gpd
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer Blockchain is a decentralized ledger technology that records transactions across multiple computers. It's secure, transparent, and immutable, making it ideal for cryptocurrencies like Bitcoin. Each block contains a cryptographic hash of the previous block. ## Additional Context Blockchain ensures data integrity and security without a central authority. Its applications extend beyond cryptocurrencies to supply chain management, voting systems, and more, offering potential for innovation in various industries.
vidyarathna
1,897,693
Understanding the P vs NP Problem
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-23T10:04:40
https://dev.to/vidyarathna/understanding-the-p-vs-np-problem-2i1d
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer The P vs NP Problem asks if every problem whose solution can be quickly verified (NP) can also be quickly solved (P). It's a major unsolved question in computer science, with implications for cryptography, algorithms, and beyond. ## Additional Context Resolving P vs NP would revolutionize computing, affecting fields from encryption to optimization. If P = NP, many currently intractable problems would become solvable, drastically changing our approach to complex computations.
vidyarathna
1,897,688
Hashing: The Key to Fast Data Retrieval
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-23T10:02:18
https://dev.to/vidyarathna/hashing-the-key-to-fast-data-retrieval-4gdn
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer Hashing converts input data into a fixed-size string of characters, which appears random. It's used in data structures like hash tables for fast data retrieval. Good hashing minimizes collisions, where different inputs produce the same output. ## Additional Context Hashing is essential for quick data access and security (e.g., in cryptographic functions). It balances speed and efficiency, enabling operations like lookup and insertion in constant time, O(1), when implemented well.
vidyarathna
1,897,692
Filter system
<div class="from"> <mat-label>Banner Type</mat-label> ...
0
2024-06-23T10:01:33
https://dev.to/webfaisalbd/filter-system-4ojo
```html
<div class="from">
  <mat-label>Banner Type</mat-label>
  <mat-form-field appearance="outline">
    <mat-select formControlName="type" required>
      <mat-option [value]="'home_banner'">Home Page Banner</mat-option>
      <mat-option [value]="'all'">All Banners</mat-option>
    </mat-select>
    <mat-error>This field is required.</mat-error>
  </mat-form-field>
</div>
```

```js
private getAllBanner() {
  // Select only the fields we need
  const mSelect = {
    image: 1,
    name: 1,
    status: 1,
    priority: 1,
    info: 1,
    createdAt: 1,
    description: 1,
  };

  const filter: FilterData = {
    filter: this.filter,
    // filter: { type: 'all', status: 'publish' },
    pagination: null,
    select: mSelect,
    sort: { createdAt: -1 },
  };

  // Fetch the banners matching the current filter
  this.subDataOne = this.bannerService.getAllBanners(filter, null).subscribe({
    next: (res) => {
      if (res.success) {
        this.banners = res.data;
        this.bannerCount = res.count;
        this.holdPrevData = this.banners;
        this.totalBannersStore = this.bannerCount;
      }
    },
    error: (err) => {
      console.log(err);
    },
  });
}
```
webfaisalbd
1,897,691
Bypassing captchas using an automatic captcha solver
Introduction In today's digital landscape, navigating through online security measures is crucial...
0
2024-06-23T10:01:20
https://dev.to/media_tech/bypassing-captchas-using-an-automatic-captcha-solver-k93
**Introduction** In today's digital landscape, navigating through online security measures is crucial for efficient and smooth operation. Captchas, designed to distinguish between human users and automated bots, serve as a fundamental tool in safeguarding online platforms from malicious activities. However, for legitimate users, these captchas can often become a significant barrier, slowing down processes and impeding user experience. This article delves into the world of automatic captcha solvers, exploring how they function, their ethical implications, and their practical applications in bypassing captchas. **Understanding Captchas** Captchas are automated tests designed to distinguish between human users and bots by presenting challenges that are easy for humans to solve but difficult for machines. These challenges typically involve identifying distorted text, selecting specific images, or solving simple puzzles. While captchas effectively prevent automated bots from accessing websites or performing actions, they can frustrate legitimate users who must spend time and effort completing them. **The Role of Automatic Captcha Solvers** Automatic captcha solvers are software tools specifically designed to automatically solve captchas on behalf of users. These tools employ sophisticated algorithms to analyze and decipher the challenges presented by captchas in real-time. By leveraging advanced image recognition and artificial intelligence techniques, these solvers can quickly and accurately decode captchas, effectively bypassing them without requiring human intervention. **How Automatic Captcha Solvers Work** **Image Recognition Technology** Automatic captcha solvers utilize advanced image recognition technology to interpret the visual cues embedded within captchas. This technology enables the software to identify and extract relevant features from images, such as characters or patterns, necessary to solve the captcha challenge. **Machine Learning Algorithms** Machine learning algorithms play a pivotal role in enhancing the accuracy and efficiency of automatic captcha solvers. These algorithms are trained on vast datasets containing various types of captchas, allowing the solver to continuously improve its ability to recognize and solve different captcha formats. **Real-Time Processing** Upon encountering a captcha, the automatic solver swiftly processes the image or challenge presented. The software analyzes the content, applies pre-trained models and algorithms, and generates a solution in real-time. This process occurs seamlessly and within fractions of a second, ensuring minimal disruption to the user experience. **Practical Applications** **Improving User Experience** By employing automatic captcha solvers, websites can significantly enhance user experience by reducing friction associated with completing captchas. This improvement is particularly beneficial for users who frequently encounter captchas during their online interactions. **Streamlining Operations** In commercial and industrial settings, automatic captcha solvers streamline operations by automating repetitive tasks that involve interacting with captcha-protected platforms. This automation saves time and resources, allowing businesses to focus on core activities and productivity. **Research and Development** Researchers and developers utilize automatic captcha solvers to conduct experiments and studies related to captcha technology. 
These tools facilitate comprehensive analysis and testing of new captcha designs, contributing to the advancement of online security solutions. **Conclusion** Automatic captcha solvers represent a significant advancement in overcoming the challenges posed by captchas in today's digital age. While they offer undeniable benefits in terms of efficiency and usability, it is essential to approach their use responsibly and ethically. By understanding their technical workings, ethical considerations, and practical applications, stakeholders can make informed decisions regarding their implementation. **Trying to bypass CAPTCHA using human methods is slow, expensive, and resource-intensive. It's a waste of both time and money.** **In contrast, using a CaptchaAI solver automates CAPTCHA solving efficiently. This service uses OCR technology to quickly solve CAPTCHAs, saving time. Plus, it offers unlimited CAPTCHA solving at a fixed price, which is more cost-effective than per CAPTCHA charges from other services.**
media_tech
1,897,689
Enhancing Next.js Builds with Webpack Custom Plugins
Learn how to customize your Next.js application's webpack configuration to include versioning using the build id.
0
2024-06-23T10:00:30
https://dev.to/itselftools/enhancing-nextjs-builds-with-webpack-custom-plugins-37aa
javascript, nextjs, webpack, webdev
At [itselftools.com](https://itselftools.com), our journey through developing over 30 innovative applications using Next.js and Firebase has been illuminating. Today, we're sharing a snippet from our development practices that enhances the build process of Next.js applications by utilizing custom Webpack plugins.

## Understanding the Code Snippet

Our focus is on a specific configuration in `next.config.js`, which significantly helps with version management during the build process of a Next.js application. Here's the code snippet in question:

```js
module.exports = {
  webpack: (config, { buildId, dev, isServer, defaultLoaders, webpack }) => {
    config.plugins.push(new webpack.DefinePlugin({
      'process.env.VERSION': JSON.stringify(buildId)
    }));
    return config;
  }
};
```

### Explanation

- **module.exports**: This exports the Next.js configuration object, whose `webpack` property lets us customize the build.
- **webpack**: A function provided by Next.js that allows overriding the default Webpack configuration. It receives the default configuration object and a context object whose properties include `buildId`, `dev`, `isServer`, `defaultLoaders`, and `webpack` itself.
- **config.plugins.push**: Here, we are adding a new plugin to the existing array of Webpack plugins.
- **new webpack.DefinePlugin**: This plugin lets you create global constants which are configured at compile time. Here it is used to set `process.env.VERSION` to the current build id, which can be very useful for version tracking and cache busting.

## Practical Usage

Incorporating this setup in your Next.js project can aid with debugging and ensure users receive the most updated version of your application. By defining the version as a build-time constant, you can append this version to URLs for API calls, static resources, or inside your service workers to avoid caching issues during deployments.

## Conclusion

Adopting such practices can significantly streamline your deployments and improve the reliability of your web applications. To see this customization in action, you are welcome to visit some of our applications like [Mic Test](https://online-mic-test.com), [Video Compression Tool](https://video-compressor-online.com), and [Pronunciation Guide](https://how-to-say.com), which leverage advanced Next.js configurations for optimal performance.
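As a quick illustration of the cache-busting idea, here is a sketch of how the constant might be consumed; the `/api/data` endpoint and the helper name are hypothetical, and since `DefinePlugin` performs compile-time substitution, `process.env.VERSION` is replaced by a literal string in the bundle:

```js
// After the build, webpack has replaced process.env.VERSION
// with the build id string, so every deployment changes the URL.
export function fetchWithVersion(path) {
  // Append the build id as a cache-busting query parameter.
  return fetch(`${path}?v=${process.env.VERSION}`);
}

fetchWithVersion('/api/data').then((res) => res.json());
```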
antoineit
1,897,681
API Platform: filter results only for the logged-in user
Imagine having several entities in your project with the author relation, for example these two...
0
2024-06-23T09:52:46
https://dev.to/aratinau/api-platform-filtrer-les-resultats-uniquement-sur-lutilisateur-connecte-1fp6
security, api, webdev
Imagine having several entities in your project with the `author` relation, for example the two following entities:

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\BookRepository;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: BookRepository::class)]
#[ApiResource()]
class Book
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\ManyToOne]
    #[ORM\JoinColumn(nullable: false)]
    private ?User $author = null;

    #[ORM\Column(length: 255)]
    private ?string $title = null;

    public function getId(): ?int
    {
        return $this->id;
    }

    public function getAuthor(): ?User
    {
        return $this->author;
    }

    public function setAuthor(?User $author): static
    {
        $this->author = $author;

        return $this;
    }

    public function getTitle(): ?string
    {
        return $this->title;
    }

    public function setTitle(string $title): self
    {
        $this->title = $title;

        return $this;
    }
}
```

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\TodoRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: TodoRepository::class)]
#[ApiResource()]
class Todo
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\ManyToOne]
    #[ORM\JoinColumn(nullable: false)]
    private ?User $author = null;

    #[ORM\Column(type: Types::TEXT)]
    private ?string $content = null;

    public function getId(): ?int
    {
        return $this->id;
    }

    public function getAuthor(): ?User
    {
        return $this->author;
    }

    public function setAuthor(?User $author): static
    {
        $this->author = $author;

        return $this;
    }

    public function getContent(): ?string
    {
        return $this->content;
    }

    public function setContent(string $content): static
    {
        $this->content = $content;

        return $this;
    }
}
```

We want to lock the results of the GET item and collection operations to the logged-in user only. We'll create an interface that we'll implement on our entities:

```php
<?php

namespace App\Entity;

interface CurrentUserIsAuthorInterface
{
    public function setAuthor(?User $author): static;
}
```

Now we'll build a Doctrine extension that adds a `where` constraint on the logged-in user to the `QueryBuilder`.
```php
<?php

namespace App\DoctrineExtension;

use ApiPlatform\Doctrine\Orm\Extension\QueryCollectionExtensionInterface;
use ApiPlatform\Doctrine\Orm\Extension\QueryItemExtensionInterface;
use ApiPlatform\Doctrine\Orm\Util\QueryNameGeneratorInterface;
use ApiPlatform\Metadata\Operation;
use App\Entity\CurrentUserIsAuthorInterface;
use Doctrine\ORM\QueryBuilder;
use Symfony\Bundle\SecurityBundle\Security;

class CurrentUserIsAuthorExtension implements QueryCollectionExtensionInterface, QueryItemExtensionInterface
{
    public function __construct(
        private Security $security,
    ) {
    }

    public function applyToCollection(QueryBuilder $queryBuilder, QueryNameGeneratorInterface $queryNameGenerator, string $resourceClass, ?Operation $operation = null, array $context = []): void
    {
        $this->currentUserIsAuthor($resourceClass, $queryBuilder);
    }

    public function applyToItem(QueryBuilder $queryBuilder, QueryNameGeneratorInterface $queryNameGenerator, string $resourceClass, array $identifiers, ?Operation $operation = null, array $context = []): void
    {
        $this->currentUserIsAuthor($resourceClass, $queryBuilder);
    }

    /**
     * @throws \ReflectionException
     */
    public function currentUserIsAuthor(string $resourceClass, QueryBuilder $queryBuilder): void
    {
        $reflectionClass = new \ReflectionClass($resourceClass);

        if ($reflectionClass->implementsInterface(CurrentUserIsAuthorInterface::class)) {
            $alias = $queryBuilder->getRootAliases()[0];
            $queryBuilder->andWhere("$alias.author = :current_author")
                ->setParameter('current_author', $this->security->getUser()->getId());
        }
    }
}
```

Now all that's left is to implement our interface on our entities:

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\BookRepository;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: BookRepository::class)]
#[ApiResource()]
class Book implements CurrentUserIsAuthorInterface
{
    // ...
```

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\TodoRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: TodoRepository::class)]
#[ApiResource()]
class Todo implements CurrentUserIsAuthorInterface
{
    // ...
```

And that's it. For every GET on an item or a collection you will only get the entries belonging to the logged-in user 🚀

Also read "let's automate saving the logged-in user on any entity": https://dev.to/aratinau/automatisons-lenregistrement-du-user-sur-nimporte-quelle-entite-4f68
aratinau
1,897,684
HTML input attributes with examples
HTML input attributes HTML <input> elements have various attributes that control...
0
2024-06-23T09:50:13
https://dev.to/wasifali/html-input-attributes-with-examples-48jn
webdev, css, html, learning
## **HTML input attributes**

HTML `<input>` elements have various attributes that control their behavior and appearance. Here are some of the most commonly used attributes:

## **Type**

Specifies the type of input control. Common types include:

## **Example**

```HTML
<input type="text">
<input type="password">
<input type="checkbox">
<input type="radio">
```

## **Text**

A single-line text input (default).

## **Example**

```HTML
<input type="text" value="Initial Value">
```

## **Password**

A text input where the entered text is masked.

## **Example**

```HTML
<label for="password">Password:</label>
<input type="password" id="password" name="password">
```

## **Checkbox**

A checkbox allowing multiple selections.

```HTML
<input type="checkbox" id="agree" checked>
<label for="agree">I agree to the terms</label>
```

## **Radio**

A radio button allowing a single selection from multiple options.

## **Example**

```HTML
<form>
  <input type="radio" id="male" name="gender" value="male">
  <label for="male">Male</label><br>
  <input type="radio" id="female" name="gender" value="female">
  <label for="female">Female</label><br>
  <input type="radio" id="other" name="gender" value="other">
  <label for="other">Other</label><br>
</form>
```

## **File**

A control to select a file for upload.

## **Example**

```HTML
<form action="/upload" method="post" enctype="multipart/form-data">
  <label for="myfile">Select a file:</label>
  <input type="file" id="myfile" name="myfile"><br><br>
  <input type="submit" value="Upload File">
</form>
```

## **Submit**

A button to submit a form.

## **Example**

```HTML
<form action="/submit-form" method="post">
  <label for="username">Username:</label>
  <input type="text" id="username" name="username"><br><br>
  <label for="password">Password:</label>
  <input type="password" id="password" name="password"><br><br>
  <input type="submit" value="Submit">
</form>
```

## **Reset**

A button to reset form inputs to their default values.

## **Example**

```HTML
<form>
  <label for="fname">First Name:</label>
  <input type="text" id="fname" name="fname"><br><br>
  <label for="lname">Last Name:</label>
  <input type="text" id="lname" name="lname"><br><br>
  <input type="reset" value="Reset">
  <input type="submit" value="Submit">
</form>
```

## **Button**

A generic button (useful with JavaScript).

## **Example**

```HTML
<button type="button">Click Me!</button>
```

**email, tel, url, etc.**: Input types for specific data formats (these help with mobile keyboards and validation).

## **Name**

The name of the input field, which is submitted along with the form data. Radio buttons in the same group share a name; other fields typically use unique names.

## **Example**

```HTML
<input type="text" name="username">
<input type="email" name="useremail">
```

## **Value**

The initial value of the input field. For checkboxes and radio buttons, this attribute determines the value that gets submitted when the input is checked/selected.

## **Example**

```HTML
<input type="text" value="Initial Value">
```

## **Placeholder**

A short hint or example text displayed in the input field before the user enters a value. It's typically used for providing a hint about the expected input format.

## **Example**

```HTML
<input type="text" placeholder="Enter your name">
```

## **Required**

Specifies that the input field must be filled out before submitting the form. It triggers browser validation and displays an error message if the field is empty.
## **Example**

```HTML
<input type="text" required>
```

## **Disabled**

Disables the input field so that its value is not submitted with the form and the user cannot interact with it.

## **Example**

```HTML
<input type="text" value="Disabled field" disabled>
```

## **Readonly**

Makes the input field read-only, meaning the user can see its value but cannot modify it.

## **Example**

```HTML
<input type="text" value="Read-only value" readonly>
```

## **Min and Max**

Specify the minimum and maximum values allowed for number, date, and similar inputs.

## **Example**

```HTML
<input type="number" min="1" max="100">
<input type="date" min="2020-01-01" max="2024-12-31">
```

## **Maxlength**

Sets the maximum number of characters allowed in the input field.

## **Example**

```HTML
<input type="text" maxlength="50">
```

## **Size**

Specifies the width of the input field in characters (applies only to text, password, and search types).

## **Example**

```HTML
<label for="username">Username:</label>
<input type="text" id="username" name="username" size="30">
```

## **Pattern**

Specifies a regular expression pattern that the input value must match to be considered valid.

## **Example**

```HTML
<input type="text" pattern="[A-Za-z]{3}">
```

## **Autocomplete**

Enables or disables autocomplete suggestions for the input field. Values can be `on` or `off`.

## **Example**

```HTML
<form action="/shipping-details" method="post" autocomplete="on">
  <label for="fullname">Full Name:</label>
  <input type="text" id="fullname" name="fullname"><br><br>
  <label for="address">Address:</label>
  <input type="text" id="address" name="address"><br><br>
  <input type="submit" value="Submit">
</form>
```

## **Autofocus**

Specifies that the input field should automatically get focus when the page loads.

## **Example**

```HTML
<input type="text" autofocus>
```

## **Multiple**

Specifies that multiple values can be selected in a file input (`<input type="file">`).

## **Example**

```HTML
<label for="files">Select multiple files:</label><br>
<input type="file" id="files" name="files" multiple>
```

These attributes can be combined to customize the behavior and appearance of `<input>` elements according to the specific needs of your HTML form. Each type of `<input>` element may support a different subset of these attributes depending on its intended use.
wasifali
1,897,682
Maiu Online - Browser MMORPG #indiegamedev #babylonjs Ep24 - SAT 2D collision detection
Hello, I added collision detection with environment obstacles; to make it happen I used the SAT 2D...
0
2024-06-23T09:46:15
https://dev.to/maiu/maiu-online-browser-mmorpg-indiegamedev-babylonjs-ep24-sat-2d-collision-detection-42ab
babylonjs, indiegamede, gamedev, mmorpg
Hello, I added collision detection with environment obstacles; to make it happen I used the SAT (Separating Axis Theorem) 2D collision detection algorithm. The player is a simple point, and obstacles can have collision shapes of circles or rectangles with arbitrary orientation. There are still some bugs, mostly in calculating the collision response at the edges of the rectangles, but it works quite well; for a first iteration it's enough. I still don't know how I will handle collisions/navigation in the future. I will need something simple that can integrate with pathfinding and line-of-sight algorithms, so I need to do some research and think it through. The next update will probably be about the monster system: I want to add walking, chasing, attacking, dying, and spawning monsters. {% youtube eNoSoiwyOTA %}
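For readers curious about the point-versus-oriented-rectangle case, here is a minimal sketch of my own (not the game's actual code): with SAT, a point lies inside an oriented rectangle exactly when its offset from the center, projected onto the rectangle's two local axes, stays within the half-extents.

```js
// SAT test for a point against a rectangle centered at (cx, cy),
// with half-extents hw/hh, rotated by `angle` radians.
function pointInOrientedRect(px, py, cx, cy, hw, hh, angle) {
  const dx = px - cx;
  const dy = py - cy;
  // The rectangle's local axes are the only candidate separating axes here.
  const ax = Math.cos(angle), ay = Math.sin(angle); // local x axis
  const bx = -ay, by = ax;                          // local y axis
  // Outside as soon as one projection exceeds the half-extent
  // (that axis separates the point from the rectangle).
  return Math.abs(dx * ax + dy * ay) <= hw &&
         Math.abs(dx * bx + dy * by) <= hh;
}
```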
maiu
1,897,680
Maiu Online - Browser MMORPG #indiegamedev #babylonjs Ep23 - Global Chat
Hi, this time a very short update: I made my chat work online. Previously it was working only locally in...
0
2024-06-23T09:44:09
https://dev.to/maiu/maiu-online-browser-mmorpg-indiegamedev-babylonjs-ep23-global-chat-1clk
babylonjs, indiegamedev, gamedev, mmorpg
Hi, this time a very short update: I made my chat work online. Previously it was working only locally in offline mode. {% youtube ahCvKWgK4WU %}
maiu
1,897,678
Types of Machine Learning: Supervised, Unsupervised, and Reinforcement
Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms...
0
2024-06-23T09:38:24
https://dev.to/gigarazkiarianda/types-of-machine-learning-supervised-unsupervised-and-reinforcement-4ipm
machinelearning, ai
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kd6hoo0z5ymondhj1byi.jpg)

Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that allow computers to learn from data and make decisions or predictions without being explicitly programmed. One of the fundamental concepts in machine learning is the categorization of learning approaches into three main types: supervised learning, unsupervised learning, and reinforcement learning.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mylg4wj3z43ljdq611b8.jpg)

**Supervised Learning**

Supervised learning is a type of machine learning where the model is trained on a labeled dataset. In a labeled dataset, each input is associated with the corresponding output. The goal of supervised learning is to learn a mapping from inputs to outputs, allowing the model to make predictions on unseen data. Examples of supervised learning tasks include classification and regression.

**Unsupervised Learning**

Unsupervised learning, on the other hand, involves training the model on an unlabeled dataset. In an unlabeled dataset, there are no predefined labels for the input data. The model learns to find hidden patterns or structures in the data without explicit guidance. Clustering and dimensionality reduction are common tasks in unsupervised learning.

**Reinforcement Learning**

Reinforcement learning is a type of machine learning where an agent learns to make decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties based on the actions it takes. The goal of reinforcement learning is to learn the optimal actions to take in different situations in order to maximize cumulative rewards over time.

**Key Differences**

The main difference between supervised, unsupervised, and reinforcement learning lies in the way they are trained and the type of feedback they receive. Supervised learning requires labeled data for training, unsupervised learning works with unlabeled data, and reinforcement learning learns from feedback in the form of rewards or penalties. Understanding the differences between these types of machine learning is essential for choosing the right approach for a given problem and designing effective machine learning systems.

Each type of machine learning has its own characteristics, advantages, and limitations. Supervised learning is suitable for tasks where labeled data is available and the goal is to make predictions or classify data into predefined categories. Unsupervised learning is useful for exploring and discovering hidden patterns in data when labeled data is not available. Reinforcement learning is suitable for problems where an agent needs to learn to make sequential decisions based on feedback from its environment. By leveraging these different types of machine learning, researchers and practitioners can develop intelligent systems that can solve a wide range of real-world problems and tasks.
gigarazkiarianda
1,897,677
Unlocking the Power of NoSQL in the Cloud: Breaking Free from Relational Constraints
Unlocking the Power of NoSQL in the Cloud: LinkedIn
0
2024-06-23T09:33:55
https://dev.to/queryhub/unlocking-the-power-of-nosql-in-the-cloud-breaking-free-from-relational-constraints-3fdj
database, nosql, webdev, learning
{% cta https://www.linkedin.com/posts/rajkishore_cloudcomputing-database-nosql-activity-7209524120819007488-5AUL?utm_source=share&utm_medium=member_desktop %} Unlocking the Power of NoSQL in the Cloud: LinkedIn {% endcta %}
queryhub
1,897,814
Configuring Keycloak with SAML
Hey everyone, In this blog we will see how you can configure Keycloak with SAML. Before we start,...
0
2024-07-05T16:23:48
https://blog.elest.io/configuring-keycloak-with-saml/
keycloak, softwares, elestio
---
title: Configuring Keycloak with SAML
published: true
date: 2024-06-23 09:33:36 UTC
tags: Keycloak,Softwares,Elestio
canonical_url: https://blog.elest.io/configuring-keycloak-with-saml/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-20.png
---

Hey everyone, in this blog we will see how you can configure [Keycloak](https://elest.io/open-source/keycloak?ref=blog.elest.io) with SAML. Before we start, ensure you have deployed Keycloak; we will be self-hosting it on [Elestio](https://elest.io/open-source/n8n?ref=blog.elest.io). Single Sign-On (SSO) improves user experience and security by allowing users to authenticate once and gain access to multiple systems. SAML (Security Assertion Markup Language) is a widely used standard for implementing SSO.

### Prerequisites

Before we start, make sure you have: 1. A deployed Keycloak service on Elestio. 2. A service provider (SP) ready for SAML integration.

## Create a Keycloak Client

First, we need to create a client in Keycloak that will act as the SAML resource server. **Gather SP Information:** To properly configure Keycloak as your Identity Provider (IdP), you need some critical information from your Service Provider (SP), such as the SP Entity ID and the Assertion Consumer Service (ACS) URL. These details identify your application within the SAML ecosystem, ensuring that authentication requests and responses are correctly routed. **Create the Keycloak Client:** 1. **Sign in to Keycloak:** Open your web browser and navigate to the Keycloak administration console URL. Log in with your admin credentials. 2. **Navigate to Clients:** In the left-hand sidebar, click on "Clients" to view and manage Keycloak clients. 3. **Create a New Client:** - Click the "Create" button to start setting up a new client. - **Client ID:** Enter the SP Entity ID. This is a unique identifier for your application within the Keycloak realm. - **Client Protocol:** Choose "SAML" from the dropdown menu. - **Client SAML Endpoint:** Enter the ACS URL provided by your SP. - Click Save. **Configure Client Settings:** 1. **Client Signature Required:** Set this to OFF. By disabling this, you are not requiring that SAML assertions be signed. This is useful for initial setups but can be changed later for added security. 2. **Name ID Format:** Choose a format that matches the expected format for usernames in your SP. Common formats include `email` or `username`. 3. **Valid Redirect URIs:** Enter your SP address with an asterisk appended (e.g., `https://your-sp-domain/*`). This wildcard allows for redirection to any sub-URL under the specified domain. 4. Click Save. 5. **Client Scopes:** Go to the "Client Scopes" tab, find `role_list` under Assigned Default Client Scopes, and click "Remove selected". This removes unnecessary role information from the SAML assertion, simplifying the setup.

## Obtain Keycloak Metadata

The next step is to obtain the metadata from Keycloak to configure the SP. Metadata provides necessary configuration details like public keys, endpoints, and protocols, facilitating automatic setup and reducing manual errors. **Copy Metadata URL:** 1. **Sign in to Keycloak:** Open your web browser and navigate to the Keycloak administration console URL. Log in with your admin credentials. 2. **Navigate to Realm Settings:** In the left-hand sidebar, click on "Realm Settings." 3. **Access Metadata:** Under the "General" tab, find and click on "SAML 2.0 Identity Provider Metadata" under Endpoints. 4. 
**Copy URL:** A new tab will open displaying the metadata. Copy the URL from this tab. **Download Metadata XML:** 1. **Sign in to Keycloak:** Open your web browser and navigate to the Keycloak administration console URL. Log in with your admin credentials. 2. **Select Your SAML Client:** Click on "Clients" in the left-hand sidebar and select the SAML client you created. 3. **Installation:** Click on the "Installation" tab. 4. **Select Format:** Choose "Mod Auth Mellon files" from the Format Option dropdown. This format is commonly used for configuring SAML in various SPs. 5. **Download XML:** Click the "Download" button to save the XML file to your local machine. ## Configure the Service Provider Now, configure your SP with the metadata obtained from Keycloak. Metadata contains the necessary configuration details like public keys, endpoints, and protocols, facilitating automatic setup and reducing manual errors. **Using Metadata URL:** 1. **Sign in to SP Admin Interface:** Open your web browser and navigate to your SP’s admin interface URL. Log in with your admin credentials. 2. **Navigate to SAML Configuration:** Go to the section where you configure SAML settings. 3. **Enter Metadata URL:** Look for an option to provide a metadata URL. Paste the Keycloak metadata URL into this field. 4. **Fetch Details:** Save the settings or click the appropriate button to fetch and populate the necessary SAML configuration details from Keycloak. **Using Metadata XML:** 1. **Sign in to SP Admin Interface:** Open your web browser and navigate to your SP’s admin interface URL. Log in with your admin credentials. 2. **Navigate to SAML Configuration:** Go to the section where you configure SAML settings. 3. **Upload XML File:** Look for an option to upload a metadata file. Click "Choose File" or the equivalent button to select the XML file you downloaded from Keycloak. 4. **Upload and Save:** After selecting the file, upload it and save the settings. The SP should automatically configure the necessary SAML settings from the XML. ## Enable SAML Authentication Once the IdP (Keycloak) is configured in your SP, enable SAML authentication to allow users to log in using SAML. **Enable SAML:** 1. **Sign in to SP Admin Interface:** Open your web browser and navigate to your SP’s admin interface URL. Log in with your admin credentials. 2. **Navigate to Authentication Settings:** Go to the section where you configure authentication methods (e.g., Authentication > SAML). 3. **Enable SAML Authentication:** Toggle on the setting for SAML authentication. 4. **Save and Update:** Save the settings and update the server or application to apply the changes. ### Optional: Set Up IdP-Initiated Flow To allow users to start their login process from Keycloak, you can set up an IdP-initiated flow. This allows users to begin the SSO process from the Keycloak login page. 1. **Sign in to Keycloak:** Open your web browser and navigate to the Keycloak administration console URL. Log in with your admin credentials. 2. **Select Your SAML Client:** Click on "Clients" in the left-hand sidebar and select the SAML client you created. 3. **Configure IDP Initiated SSO URL Name:** - Enter the SP Entity ID for the "IDP Initiated SSO URL Name." This allows Keycloak to initiate the login process for your SP. 4. **Set IDP Initiated SSO Relay State:** - For **IDP Initiated SSO Relay State** , you can direct users to specific pages after they sign in. Common options are: - `cws` to direct users to the Client Web UI. 
- `profile` to direct users to a profile download page. 5. **Save Settings:** Click Save to apply the changes. 6. **Provide URL to Users:** After setting up, Keycloak will display a **Target IDP Initiated SSO URL**. Copy this URL and provide it to your users for direct login access. ### Conclusion Setting up SAML with Keycloak involves configuring Keycloak as the Identity Provider and your application as the Service Provider. By following these steps, you can leverage Keycloak’s SAML capabilities for authentication across your applications, enhancing both security and user experience. This integration ensures that users have a seamless login experience while maintaining security standards. Additionally, by enabling optional features like the IdP-initiated flow, you provide greater flexibility and convenience for your users. ## **Thanks for reading ❤️** Thank you so much for reading and do check out the Elestio resources and the official [Keycloak documentation](https://www.keycloak.org/documentation?ref=blog.elest.io) to learn more about Keycloak. Click the button below to create your service on [Elestio](https://elest.io/open-source/keycloak?ref=blog.elest.io). See you in the next one👋 [![Configuring Keycloak with SAML](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/n8n?ref=blog.elest.io)
kaiwalyakoparkar
1,897,664
Building Zerocalc, part III - errors and grammar
The basic parsing method presented in part II works well for simple expressions that consist of...
27,824
2024-06-23T08:45:16
https://dev.to/michal1024/building-zerocalc-part-iii-errors-and-grammar-15pc
rust, programming
The basic parsing method presented in [part II](https://dev.to/michal1024/building-zerocalc-part-ii-evaluating-then-parsing-3fim) works well for simple expressions that consist of binary operators and literals. Our calculator must support more complex expressions, including unary operators, parentheses, and function calls. In addition, we'd like to be able to communicate parsing errors to the user, pointing them to the potential source of the problem.

## Errors

When a parser hits an issue, the user will usually get a message explaining the problem the parser hit and where in the code it happened:

![Sample parsing error](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0a0r3t54aohz9jyiowzq.png)

To return such errors, we need to store the message and the span of input code the error relates to:

```rust
pub struct Span {
    pos: usize,
    len: usize
}

impl Span {
    pub fn new(pos: usize, len: usize) -> Self {
        Span { pos, len }
    }
}

pub struct Error {
    message: String,
    span: Span
}

impl Error {
    pub fn new(message: &str, span: Span) -> Self {
        Error { message: String::from(message), span }
    }
}
```

Sometimes the error is detected by a function coming from a library we use, and the error type is different from our `Error`. In this case, it will be handy to wrap that error in our `Error` structure to have a consistent error flow through the call stack:

```rust
impl Error {
    pub fn wrap<T: Display>(t: T) -> Self {
        Error {
            message: format!("{t}"),
            span: Span::new(0, 0)
        }
    }
}
```

We can use the `wrap` method with Rust's handy `Result::map_err` method:

```rust
pub fn parse_int(input: &str) -> Result<i128, Error> {
    input.parse().map_err(Error::wrap)
}
```

## Grammar

Let's now look into improving our parser to handle more complex expressions. We are following recursive descent parsing (RDP) - but how exactly should such a parser be built? Wikipedia's definition of RDP states:

> *In computer science, a recursive descent parser is a kind of top-down parser built from a set of mutually recursive procedures (or a non-recursive equivalent) where each such procedure implements one of the nonterminals of the grammar. Thus the structure of the resulting program closely mirrors that of the grammar it recognizes.*

We need to build a grammar and then create a procedure for each non-terminal of the grammar. A non-terminal is an element of the grammar defined by other terminals and non-terminals. A terminal is the actual token we read. A grammar for basic addition can be defined as follows:

```
exp: fact + fact
fact: literal
```

The `exp` (expression) consists of two factors and a `+` operator. A factor is a simple literal. A grammar like this defines a single addition operation, `2+2`. To define a chain of additions, we need to add recursion to the grammar:

```
exp: fact + exp | fact
fact: literal
```

This can define anything from a single literal `1` to any number of additions `1+2+3...`. The fact that `exp` is on the right-hand side of the `fact + exp` term is not an accident - parsing left-recursive grammars is more complicated, and for RDP we need to use right-side recursion.

Next, we need to consider operator precedence. `1+2*3` should be computed as `1+(2*3)` and `1*2+3` should become `(1*2)+3`. It means that for lower-precedence operations like `+`, each side can be a literal or a higher-precedence operation. So for `+` and `*` we can write:

```
exp: exp1 + exp | exp1
exp1: fact * exp1 | fact
fact: literal
```

It's a bit tricky, but as the old joke says, "to understand recursion you first need to understand recursion".
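To make the precedence mechanics concrete, here is a worked derivation of `1+2*3` under this small grammar (my own example):

```
exp
→ exp1 + exp        (exp: exp1 + exp)
→ fact + exp        (exp1: fact)
→ 1 + exp           (fact: literal)
→ 1 + exp1          (exp: exp1)
→ 1 + fact * exp1   (exp1: fact * exp1)
→ 1 + 2 * exp1      (fact: literal)
→ 1 + 2 * fact      (exp1: fact)
→ 1 + 2 * 3         (fact: literal)
```

The `*` can only appear inside an `exp1`, below the `+`, so the derivation groups the expression as `1+(2*3)` without any explicit precedence table.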
Now we can write the full grammar for the expressions we want to handle in our calculator. In addition to basic mathematical operations, we want to handle parentheses, identifiers (which may be constants like `pi` or `e`), and functions like `sin`.

```
exp: exp1 | empty
exp1: exp2 op1 exp1 | exp2
op1: + | -
exp2: exp3 op2 exp2 | exp3
op2: * | / | %
exp3: fact op3 exp3 | fact
op3: ^
fact: +fact | -fact | (exp1) | func | id | literal
func: id(exp1)
```

## Parsing

The idea for the parser implementation follows the RDP definition: for each non-terminal, we write a function that will parse this non-terminal. The function will return true if parsing consumed a terminal, or false if parsing ended with an empty statement. This will help us check whether both sides of an expression actually produced a result. Example function for parsing a non-terminal:

![Parsing a non-terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iigfl1lvgozmlq9u88bp.png)

I added comments showing the piece of grammar each function is responsible for:

```rust
    pub fn parse(&mut self) -> Result<bool, Error> {
        self.init();
        self.parse_exp()
    }

    // exp: exp1 | empty
    pub fn parse_exp(&mut self) -> Result<bool, Error> {
        match self.next_token.kind {
            lexer::TokenKind::Eof => Ok(false),
            _ => self.parse_exp1()
        }
    }

    // exp1: exp2 op1 exp1 | exp2
    // op1: + | -
    fn parse_exp1(&mut self) -> Result<bool, Error> {
        let has = self.parse_exp2()?;
        match self.next_token.kind {
            kind @ (lexer::TokenKind::Add | lexer::TokenKind::Sub) => {
                self.bump();
                if self.parse_exp1()? {
                    self.parse_binary_op(kind)?;
                    Ok(true)
                } else {
                    self.error(ERR_EOF)
                }
            },
            _ => Ok(has),
        }
    }

    // exp2: exp3 op2 exp2 | exp3
    // op2: * | / | %
    fn parse_exp2(&mut self) -> Result<bool, Error> {
        let has = self.parse_exp3()?;
        match self.next_token.kind {
            kind @ (lexer::TokenKind::Mul | lexer::TokenKind::Div | lexer::TokenKind::Mod) => {
                self.bump();
                if self.parse_exp2()? {
                    self.parse_binary_op(kind)?;
                    Ok(true)
                } else {
                    self.error(ERR_EOF)
                }
            },
            _ => Ok(has)
        }
    }

    // exp3: fact op3 exp3 | fact
    // op3: ^
    fn parse_exp3(&mut self) -> Result<bool, Error> {
        let has = self.parse_fact()?;
        match self.next_token.kind {
            kind @ lexer::TokenKind::Pow => {
                self.bump();
                if self.parse_exp3()? {
                    self.parse_binary_op(kind)?;
                    Ok(true)
                } else {
                    self.error(ERR_EOF)
                }
            },
            _ => Ok(has)
        }
    }

    // fact: +fact | -fact | (exp1) | func | id | literal
    // func: id(exp1)
    fn parse_fact(&mut self) -> Result<bool, Error> {
        self.bump();
        match self.current_token.kind {
            kind @ (lexer::TokenKind::Add | lexer::TokenKind::Sub) => {
                if !self.parse_fact()? {
                    return self.error("Unary operator needs expression");
                }
                self.parse_unary_op(kind)
            }
            lexer::TokenKind::Lpar => {
                let has = self.parse_exp1()?;
                if self.next_token.kind != lexer::TokenKind::Rpar {
                    return self.error("Missing closing parenthesis");
                };
                self.bump();
                Ok(has)
            },
            lexer::TokenKind::Literal(kind) => {
                self.parse_literal(kind)
            },
            lexer::TokenKind::Eof => Ok(false),
            // etc for id, function...
            _ => self.error(ERR_UNEXP)
        }
    }
```

I shortened `parse_fact` a bit but you get the idea. When parsing a terminal, we just add it to the output program I described in the previous post:

```rust
    fn parse_literal(&mut self, l: lexer::LiteralKind) -> Result<bool, Error> {
        let val = match l {
            lexer::LiteralKind::Int => self.parse_int()?,
            lexer::LiteralKind::Float => self.parse_float()?
        };
        self.program.push(Expression::Val(val));
        Ok(true)
    }

    fn parse_binary_op(&mut self, kind: lexer::TokenKind) -> Result<bool, Error> {
        let op = match kind {
            lexer::TokenKind::Add => Op::Add,
            lexer::TokenKind::Sub => Op::Sub,
            lexer::TokenKind::Div => Op::Div,
            lexer::TokenKind::Mul => Op::Mul,
            lexer::TokenKind::Mod => Op::Mod,
            lexer::TokenKind::Pow => Op::Pow,
            _ => return self.error("Invalid binary operator")
        };
        self.program.push(Expression::BinaryOp(op));
        Ok(true)
    }
```

And that's it! We can now parse very complex mathematical expressions.

Sources:
1. https://dev.to/nathan20/how-to-handle-errors-in-rust-a-comprehensive-guide-1cco
2. https://en.wikipedia.org/wiki/Recursive_descent_parser
michal1024
1,897,667
React Project Outgrowing Expectations? Learn These Basic Principles to Manage Better
Consequences of Poor Code Maintenance Imagine building a product with great potential, but...
0
2024-06-23T09:32:51
https://dev.to/lovestaco/react-project-outgrowing-expectations-learn-these-basic-principles-to-manage-better-9bn
react, reactjsdevelopment, webdev, javascript
## Consequences of Poor Code Maintenance

Imagine building a product with great potential, but watching it progress slowly due to messy code and disorganized thinking.

We are a small team, building a product called [Hexmos Feedback](https://hexmos.com/feedback), which helps keep teams motivated and engaged through recognition and continuous feedback. We have attendance management as a part of Hexmos Feedback. Hexmos Feedback goes beyond simple attendance – it helps identify committed employees who are present but may not be fully invested in the organization's goals.

We opted for React, known for its ease of use and scalability, as the foundation for our product's front end. We were focused on getting features out the door, screens looking good, and functionality working.

In that initial sprint, coding standards weren't exactly a priority. Looking back, we just threw components wherever they fit, integrated APIs directly into component files (except for the base URLs!), and cobbled together structures that barely held things together. Again, it wasn't the worst approach for a small team focused on a bare-bones MVP.

**Growing Pains: The Need for Scalability**

But then, things changed. As our capabilities matured a little bit, we started training students from various colleges, and we brought on some committed interns to help us scale our team. That's when the cracks began to show. Our once "functional" codebase became difficult for these newbies to navigate. Simple bug fixes turned into hour-long hunts just to find the relevant code.

It was a wake-up call. Our codebase was hindering progress and killing developer morale. That's when I knew we had to make a change.

## Learning From Our Mistakes: Embracing Best Practices

It was time to learn some best practices, clean up the mess, and build a codebase that was scalable, maintainable, and wouldn't make our interns tear their hair out.

### Organizing for Scalability

It took me more than two hours to organize the components and fix the import errors, and things got slightly better. I understood the importance of a well-structured codebase for easy scalability and maintenance.

Most of the code should be organized within feature folders.

![best practice for react folder structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqcfkok1j2werzl88d6i.png)

Each feature folder should contain code specific to that feature, keeping things neatly separated.

![Best folder structure for React](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11lecrqhbpftqcccv3kq.png)

This approach helped prevent mixing feature-related code with shared components, making it simpler to manage and maintain the codebase compared to having many files in a flat folder structure.

### How Redux Simplified API Integration

Remember that initial focus on getting features out the door? While it worked for our simple MVP, things got hairy as we added more features. Our initial approach of making API calls directly from component files became a burden. As the project grew, managing loaders, errors, and unnecessary API calls became a major headache. Features were getting more complicated, and our codebase was becoming a tangled mess.
Component file lines increased, business logic (API calls, data transformation) and UI logic (displaying components, handling user interactions) got tangled, and maintaining the codebase, plus the complexity of handling loaders, errors, and so on, became a nightmare.

As our project matured and the challenges of managing state grew, I knew it was time to implement a better approach. I started using [Redux](https://www.npmjs.com/package/@reduxjs/toolkit) for API calls, a state management library for JavaScript applications, along with thunks (like `createSlice` and `createAsyncThunk` from `@reduxjs/toolkit`) to handle this problem. This is a strong approach for complex applications, as Redux acts as a central hub for your application's state, promoting separation of concerns and scalability.

#### Here's why Redux was a game-changer

##### 1. Centralized State Management
Redux keeps all your application's state in one central location. This makes it easier to track changes and manage loading and error states in any component. No more hunting through a maze of component files!

##### 2. Separation of Concerns
Thunks and slices (Redux's way of organizing reducers and actions) allow you to cleanly separate your business logic (API calls, data transformation) from your UI logic (displaying components, handling user interactions). This makes the code more readable, maintainable, and easier to test.

##### 3. Consistency
Redux enforces a consistent way to handle API calls and manage state. This leads to cleaner, more maintainable code across your entire project.

##### 4. Caching for Efficiency
Redux can help with caching and memoization of data, reducing unnecessary API calls and improving performance.

We'll dive into a concrete example in the next section to show you how Redux and thunks can be implemented to tame the API beast in your own React projects.

### How to reduce burden using Redux

Here is how I got started with the conversion. This assumes you have already installed [@reduxjs/toolkit](https://www.npmjs.com/package/@reduxjs/toolkit). The image below shows how I have organized the files for Redux in my streak feature.

#### Async Actions with Redux Thunk (`actionCreator.ts`)

First, create a file `actionCreator.ts` and add a thunk for the API call of your component. This file defines an asynchronous thunk action creator named `fetchIndividualStreak` using `createAsyncThunk` from `@reduxjs/toolkit`. Thunks are middleware functions that allow us to perform asynchronous operations (like API calls) within Redux actions.

It handles both successful and unsuccessful responses:
- On success, it returns the response data.
- On failure, it rejects the promise with an error message.

```typescript
// actionCreator.ts
import { createAsyncThunk } from "@reduxjs/toolkit";

// apiWrapper and urls are app-specific helpers, imported from elsewhere in the codebase
export const fetchIndividualStreak = createAsyncThunk(
  "streak/fetchIndividualStreak",
  async (memberId, { rejectWithValue }) => {
    try {
      const data = {
        member_id: memberId,
      };
      const response = await apiWrapper({
        url: `${urls.FB_BACKEND}/v3/member/streak`,
        method: "POST",
        body: JSON.stringify(data),
      });
      return response;
    } catch (error) {
      return rejectWithValue(error.message);
    }
  }
);
```

#### Centralized State (`reducer.ts`)

This file defines a slice using `createSlice` from `@reduxjs/toolkit`. A slice groups related reducers and an initial state for a specific feature (streak in this case).
The initial state includes properties for:
- `loading`: indicates if data is being fetched
- `data`: holds the fetched streak information
- `error`: stores any errors encountered during the API call

```typescript
// reducer.ts
import { createSlice } from "@reduxjs/toolkit";
import { fetchIndividualStreak } from "./actionCreator";

const streakSlice = createSlice({
  name: "streak",
  initialState: {
    loading: false,
    data: null,
    error: null,
  },
  reducers: ...,
  extraReducers: ...,
});
```

For a concrete picture of what could go behind those ellipses, see the hedged sketch at the end of this post.

Continue learning about [Redux State Flow During Execution](https://journal.hexmos.com/api-integration-in-react-best-practices/#centralized-state-reducerts) here.

I'd love to hear your thoughts or suggestions for improvements!
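**Appendix: a possible `extraReducers` implementation.** This is a minimal, illustrative sketch rather than our exact production code; it assumes the `fetchIndividualStreak` thunk from `actionCreator.ts` and wires up the standard pending/fulfilled/rejected lifecycle:

```typescript
// reducer.ts (illustrative sketch)
import { createSlice } from "@reduxjs/toolkit";
import { fetchIndividualStreak } from "./actionCreator";

const streakSlice = createSlice({
  name: "streak",
  initialState: {
    loading: false,
    data: null,
    error: null,
  },
  reducers: {}, // no synchronous reducers needed for this sketch
  extraReducers: (builder) => {
    builder
      .addCase(fetchIndividualStreak.pending, (state) => {
        state.loading = true; // drive the loader from one place
        state.error = null;
      })
      .addCase(fetchIndividualStreak.fulfilled, (state, action) => {
        state.loading = false;
        state.data = action.payload; // streak data returned by the thunk
      })
      .addCase(fetchIndividualStreak.rejected, (state, action) => {
        state.loading = false;
        state.error = action.payload; // message passed to rejectWithValue
      });
  },
});

export default streakSlice.reducer;
```

Components then read `loading`, `data`, and `error` with `useSelector` instead of managing them locally.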
lovestaco
1,897,676
Contribute to an Open-Source JavaScript Project: Join Now!
Hello JavaScript Enthusiasts! I'm excited to announce the launch of an ambitious and collaborative...
0
2024-06-23T09:32:21
https://raajaryan.tech/contribute-to-an-open-source-javascript-project-join-now
opensource, javascript, beginners, tutorial
[![BuyMeACoffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-ffdd00?style=for-the-badge&logo=buy-me-a-coffee&logoColor=black)](https://buymeacoffee.com/dk119819) Hello JavaScript Enthusiasts! I'm excited to announce the launch of an ambitious and collaborative open-source project, the **Ultimate JavaScript Project**, hosted on GitHub. This initiative aims to create a comprehensive repository of JavaScript projects, and I invite developers, learners, and enthusiasts from all levels to contribute and build together. As of now, I've completed 9 out of the targeted 500 projects, and there's plenty of room for you to leave your mark! **Repository Link: [ULTIMATE-JAVASCRIPT-PROJECT](https://github.com/deepakkumar55/ULTIMATE-JAVASCRIPT-PROJECT)** ### What is the Ultimate JavaScript Project? The Ultimate JavaScript Project is a curated collection of JavaScript projects that range from beginner-friendly tasks to more complex applications. The goal is to create a vast repository that serves as a learning tool and a source of inspiration for developers. By contributing to this project, you not only enhance your coding skills but also help others in the community grow and learn. ### Why Should You Contribute? 1. **Collaborative Learning**: Engage with a community of like-minded developers and share knowledge. 2. **Enhance Your Skills**: Tackle a variety of projects that challenge your JavaScript skills. 3. **Build Your Portfolio**: Add meaningful contributions to your GitHub profile. 4. **Open-Source Experience**: Gain valuable experience in contributing to open-source projects. 5. **Networking**: Connect with developers from around the world and expand your professional network. ### How Can You Contribute? Contributing to the Ultimate JavaScript Project is simple and straightforward. Here's a step-by-step guide to get you started: 1. **Fork the Repository**: Go to the [GitHub repository](https://github.com/deepakkumar55/ULTIMATE-JAVASCRIPT-PROJECT) and fork it to your account. 2. **Clone the Repository**: Clone the forked repository to your local machine using: ```bash git clone https://github.com/<your-username>/ULTIMATE-JAVASCRIPT-PROJECT.git ``` 3. **Choose a Project**: Browse through the list of existing projects or propose a new one. 4. **Create a Branch**: Create a new branch for your project or enhancement. ```bash git checkout -b <your-branch-name> ``` 5. **Code and Document**: Implement your project and document it thoroughly so others can understand and learn from it. 6. **Push and Create a Pull Request**: Push your changes to your forked repository and create a pull request to the main repository. ```bash git push origin <your-branch-name> ``` ### Types of Projects to Contribute - **Beginner Projects**: Simple games, DOM manipulations, form validations, etc. - **Intermediate Projects**: API integrations, CRUD applications, dynamic web pages. - **Advanced Projects**: Full-stack applications, complex algorithms, performance optimization tasks. ### Guidelines for Contribution - **Code Quality**: Ensure your code is clean, well-commented, and follows best practices. - **Documentation**: Provide clear and concise documentation for your project. - **Originality**: Make sure your contribution is original and not a duplicate of an existing project. ### Let's Build Together! The journey of building the Ultimate JavaScript Project is just beginning, and I am thrilled to have you join this collaborative effort. 
Whether you're a seasoned developer or just starting, your contribution is valuable and appreciated. Let's leverage our collective expertise to create a robust resource for the entire JavaScript community. Feel free to reach out with any questions or suggestions. Happy coding, and let's build something amazing together! **Connect with Me:** - **GitHub**: [deepakkumar55](https://github.com/deepakkumar55) - **Twitter**: [dk_raajaryan](https://twitter.com/dk_raajaryan) - **LinkedIn**: [Deepak Kumar](https://www.linkedin.com/in/raajaryan/) Looking forward to your contributions! ## 💰 You can help me by Donating [![BuyMeACoffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-ffdd00?style=for-the-badge&logo=buy-me-a-coffee&logoColor=black)](https://buymeacoffee.com/dk119819)
raajaryan
1,897,812
Configuring Keycloak using OIDC
Hey everyone, in this blog we will see how you can configure Keycloak using OIDC. Before we start,...
0
2024-07-05T16:23:12
https://blog.elest.io/configure-keycloak-with-oicd/
keycloak, softwares, elestio
---
title: Configuring Keycloak using OIDC
published: true
date: 2024-06-23 09:18:39 UTC
tags: Keycloak,Softwares,Elestio
canonical_url: https://blog.elest.io/configure-keycloak-with-oicd/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-17.png
---

Hey everyone, in this blog we will see how you can configure [Keycloak](https://elest.io/open-source/keycloak?ref=blog.elest.io) using OIDC. Before we start, make sure you have deployed Keycloak; we will be self-hosting it on [Elestio](https://elest.io/open-source/n8n?ref=blog.elest.io).

## What is Keycloak?

Keycloak is an open-source Identity and Access Management (IAM) solution that provides tools for managing authentication and authorization. It enables single sign-on (SSO) and identity federation, supporting various protocols like OpenID Connect, OAuth 2.0, and SAML 2.0, which allows integration with a wide range of applications and services. Keycloak's features include user management, role-based access control, multi-factor authentication, and customization options through themes and extensions. It operates as a central authentication server, issuing tokens to clients and validating them to ensure secure access to protected resources, making it ideal for enterprise SSO, cloud and microservices architectures, and secure API management.

## Introduction to OIDC

OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0, designed to verify user identities and provide user profile information in a standardized and secure manner. It facilitates single sign-on (SSO) by issuing ID tokens, which are JSON Web Tokens (JWT) containing user information such as name and email, after successful authentication by an authorization server. OIDC is widely used across web, mobile, and cloud applications, offering secure user authentication and enabling integration with various identity providers.

### Creating an OIDC Client in Keycloak

Integrating Keycloak with your Identity Provider (IDP) server using OpenID Connect (OIDC) involves creating and configuring an OIDC client. This client acts as an intermediary that handles authentication requests and responses between your application and the IDP server. Proper configuration of the OIDC client ensures secure communication and accurate handling of authentication tokens.

#### Steps to Create the OIDC Client

1. **Access the Clients Section**:
   - Begin by selecting the appropriate realm from the left pane in Keycloak. The realm represents your security domain, where all configurations and user data are stored.
   - Navigate to **Clients**. This section lists all the clients (applications) that can request authentication from Keycloak.
2. **Create a New Client**:
   - Click **Create** in the right pane. This action initiates the process of setting up a new client.
   - Enter a name in the **Client ID** field. This name will be used as the `client_id` in OIDC authentication requests. It uniquely identifies your client application within the realm.
   - Click **Save** to proceed. Saving the client ID takes you to the client configuration page where you can specify further details.
3. **Configure the Client**: Setting the correct redirect URIs is crucial, as it ensures the IDP server can safely redirect users back to your application after authentication, maintaining the flow's security.
   - In the client configuration page, set the **Client Protocol** field to `openid-connect`. This protocol facilitates secure authentication and authorization.
   - Set the **Access Type** to `confidential`.
Confidential clients are capable of keeping their credentials secure, making them suitable for server-side applications.
   - Add the **Valid Redirect URIs** for the IDP server. These URIs specify where the IDP server should redirect users after successful authentication. The URL should follow the structure `https://[CNAME]/*`, ensuring it aligns with your server's configuration. For example, `https://www.idpserver.com/*`.

### Adding Scope to the Client

Scopes in Keycloak define the permissions and data included in authentication tokens. Adding a client scope customizes the information shared during the authentication process, ensuring that only the necessary data is transmitted.

1. **Access Client Scopes**:
   - Navigate to the **Client Scopes** section by selecting the relevant realm in the left pane.
   - Click **Create** to initiate the creation of a new client scope.
2. **Create a New Scope**: Adding a scope helps control what information is shared and how it is presented in tokens, contributing to a more secure and efficient authentication process.
   - Enter `idpvscope` in the **Name** field. This name will help you identify the scope later.
   - Set the **Display on Consent screen** option to `OFF`. Disabling this option ensures that the scope will not be displayed on the user consent screen during authentication.
   - Click **Save** to finalize the creation of the scope.

### Mapping the Client Scope

Mapping client scopes involves adding claims to tokens. Claims are pieces of information about the user, such as username or group memberships, that the IDP server requires. Proper mapping ensures that the necessary data is included in the tokens issued during authentication.

1. **Add a Mapper**:
   - In the left pane, go to **Client Scopes** and select the newly created scope.
   - In the right pane, navigate to **Mappers** > **Create**.
2. **Create Protocol Mapper**: Adding mappers for specific attributes like `preferred_username` ensures that these details are included in the tokens, making them available for the IDP server to use during authentication.
   - For the **User Attribute Mapper Type**, configure the following:
     - **Name**: `preferred_username`
     - **Mapper Type**: `User Attribute`
     - **User Attribute**: `cn`
     - **Token Claim Name**: `preferred_username`
   - Click **Save** to save this mapper.
3. **Add Group Membership Mapper**: Including group membership information in tokens helps manage user roles and permissions, ensuring that only authorized users can access specific resources.
   - For the **Group Membership Mapper Type**, configure the following:
     - **Name**: `groups`
     - **Mapper Type**: `Group Membership`
     - **Token Claim Name**: `groups`
   - Click **Save** to save this mapper.
4. **Repeat for Additional Claims**: Similarly, add other required claims to ensure that all necessary information is included in the tokens. Each claim added through mappers enhances the granularity and control over user data shared during authentication.

### Adding Scope to the OpenID Client

After configuring the necessary claims, the next step is to apply the scope to the OpenID client. This ensures that tokens issued for this client will include the required claims, completing the setup for secure and customized authentication.

1. **Apply Scope to OpenID Client**: Applying the scope to the OpenID client ensures that the authentication tokens issued will contain the claims defined in the scope, providing the necessary information to the IDP server.
   - In the left pane, select **Clients**.
   - In the right pane, choose the client you created earlier and go to the **Client Scopes** tab.
   - Under **Default Client Scopes**, select the previously created scope and click **Add selected**.

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Keycloak documentation](https://www.keycloak.org/documentation?ref=blog.elest.io) to learn more about Keycloak. Click the button below to create your service on [Elestio](https://elest.io/open-source/keycloak?ref=blog.elest.io). See you in the next one👋

[![Configuring Keycloak using OIDC](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/n8n?ref=blog.elest.io)
kaiwalyakoparkar
1,897,811
Keycloak Session Configuration: Best Practices and Principles
Hey everyone, in this blog we will be configuring Keycloak sessions on Elestio. We will be...
0
2024-07-05T16:22:07
https://blog.elest.io/configure-keycloak-with-saml/
keycloak, softwares, elestio
---
title: "Keycloak Session Configuration: Best Practices and Principles"
published: true
date: 2024-06-23 09:10:44 UTC
tags: Keycloak,Softwares,Elestio
canonical_url: https://blog.elest.io/configure-keycloak-with-saml/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-15.png
---

Hey everyone, in this blog we will be configuring Keycloak sessions on Elestio. We will be using a self-hosted [Keycloak](https://elest.io/open-source/keycloak?ref=blog.elest.io) instance deployed on Elestio. To get started, head over to the [Elestio Dashboard](https://elest.io/open-source/keycloak?ref=blog.elest.io), deploy the Keycloak service, and log in.

In this tutorial, we explore the technicalities of Keycloak session and token configuration, emphasizing the importance of session timeouts and optimal settings for effective session management. By understanding and applying the recommended best practices, administrators can create a secure and efficient authentication environment within Keycloak.

### Prerequisites

Ensure that you have Keycloak properly set up in your system. Navigate to the realm settings, where you will find the Sessions and Tokens configurations. The session settings are divided into four categories, but our primary focus will be on "SSO session settings" and "Client session settings." For tokens, we will concentrate on "Refresh tokens" and "Access tokens."

### Client Session Settings

#### 1. Client Session Idle Timeout

This setting determines the duration a client session can remain inactive before it expires. Once expired, all tokens associated with the client session become invalid. This defaults to the "SSO Session Idle" value if not explicitly set.

#### 2. Client Session Maximum Lifespan

This defines the maximum duration a client session remains valid after a user logs in. After this period, the tokens associated with the session are invalidated. If this setting is left unset, it defaults to the "SSO Session Max" value.

### SSO Session Settings

#### 1. SSO Session Idle Timeout

This setting specifies the duration a session can remain idle before it expires. Upon expiration, all tokens and browser sessions are invalidated. A typical recommendation is to set this value to around 30 minutes.

#### 2. SSO Session Maximum Lifespan

This defines the maximum period a session can remain active. Similar to the idle timeout, tokens and browser sessions are invalidated once this period elapses. It is advisable to set this value between 10 and 24 hours.

### Token Configuration

#### 1. Refresh Tokens

Administrators can configure the revocation of refresh tokens. It is advisable to enable this option and set the "Refresh Token Max Reuse" to 0, ensuring that tokens cannot be reused beyond their intended lifespan.

#### 2. Access Tokens

Two critical settings here are "Access Token Lifespan" and "Access Token Lifespan For Implicit Flow." The "Access Token Lifespan" should be set to a value less than or equal to the session idle timeout, while the "Access Token Lifespan For Implicit Flow" dictates the timeframe within which a refresh token can generate new access tokens. It is recommended to keep these values consistent to maintain security.

### Example Configuration

To illustrate, consider the following settings:

- Leave "Client Session Idle" and "Client Session Max" unset.
- Set "SSO Session Idle" to 30 minutes.
- Set "SSO Session Max" to 1 day.
- Set "Access Token Lifespan" to 15 minutes or less.
- Set "Access Token Lifespan For Implicit Flow" to 30 minutes or less. - Set "Refresh Token Max Reuse" to 0. ![Keycloak Session Configuration: Best Practices and Principles](https://blog.elest.io/content/images/2024/06/Screenshot-2024-06-12-at-6.13.17-PM.jpg) ![Keycloak Session Configuration: Best Practices and Principles](https://blog.elest.io/content/images/2024/06/Screenshot-2024-06-12-at-6.13.55-PM.jpg) ### Best Practices for Session Management Effective session management in Keycloak relies on two core principles: 1. Access tokens should not outlast their corresponding refresh tokens, ensuring controlled access within the refresh token’s lifespan. 2. Refresh tokens must align with the duration of the Keycloak session, maintaining session integrity and security. By following these guidelines and configuring session durations appropriately, administrators can establish a secure authentication framework that balances usability with stringent access controls. These practices not only enhance the overall user experience but also safeguard sensitive resources across multiple applications. ## **Thanks for reading ❤️** Thank you so much for reading and do check out the Elestio resources and Official [Keycloak documentation](https://www.keycloak.org/documentation?ref=blog.elest.io) to learn more about Keycloak. You can click the button below to create your service on [Elestio](https://elest.io/open-source/keycloak?ref=blog.elest.io). See you in the next one👋 [![Keycloak Session Configuration: Best Practices and Principles](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/keycloak?ref=blog.elest.io)
kaiwalyakoparkar
1,897,802
Apache Superset ClickHouse integration
Hey everyone, in this blog we will learn more about the Superset ClickHouse integration. Discover...
0
2024-07-05T15:40:25
https://blog.elest.io/apache-superset-clickhouse-integration/
superset, softwares, elestio
---
title: Apache Superset ClickHouse integration
published: true
date: 2024-06-23 08:52:30 UTC
tags: Superset,Softwares,Elestio
canonical_url: https://blog.elest.io/apache-superset-clickhouse-integration/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-5.png
---

Hey everyone, in this blog we will learn more about the [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) ClickHouse integration. Discover how integrating Apache Superset with ClickHouse can elevate your data visualization and analytics capabilities. This guide walks you through setting up and optimizing this powerful combination.

#### Apache Superset and ClickHouse Integration

Combining Apache Superset with ClickHouse offers a robust solution for data analytics, enabling users to visualize and interact with their ClickHouse datasets effectively. Here's how to set up this integration:

#### Prerequisites

Make sure you have `clickhouse-connect>=0.6.8` installed. If you're using Docker Compose, add this requirement to `./docker/requirements-local.txt`.

#### Connection String

To connect Superset to ClickHouse, use the following connection string format from your Superset service:

```
clickhousedb://<user>:<password>@<host>:<port>/<database>[?options…]
```

For local ClickHouse instances, a simpler connection string can be used:

```
clickhousedb://localhost/default
```

#### Visualization and Dashboards

Once connected, take advantage of Superset's user-friendly interface to create engaging visualizations and dashboards. Utilize the SQL IDE for data preparation and define custom dimensions and metrics using the semantic layer.

#### Security and Scalability

Superset allows you to configure detailed access rules and integrate with various authentication backends. Its cloud-native architecture ensures scalability and high availability, making it ideal for large, distributed environments.

#### Unique Insights

For specific configurations and best practices to enhance your integration experience, refer to the official documentation. By integrating Apache Superset with ClickHouse, you can unlock deeper insights and create compelling visualizations that drive data-informed decisions.

#### Sample ClickHouse SQL Query in Superset

```
SELECT
    event_date,
    count() AS events
FROM events
WHERE event_date >= '2021-01-01'
GROUP BY event_date
ORDER BY event_date
```

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋

[![Apache Superset ClickHouse integration](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
kaiwalyakoparkar
1,897,673
Difference Between Number() and parseInt() in Converting Strings?
Number(): Searches the entire string for numbers. If it finds anything else, it will return NaN...
0
2024-06-23T09:10:17
https://dev.to/yns666/difference-between-parseint-and-number-in-converting-strings-1fh5
javascript, beginners, programming, webdev
`Number()`: converts the entire string. If any part of the string is not numeric, it returns `NaN` (short for Not a Number).

`parseInt()` / `parseFloat()`: return the first number in the string, ignoring the rest. If there is no number at the beginning of the string, they return `NaN`.

Let's take an example:

**Example 1:**

```javascript
let a = "1hola1";
console.log(Number(a));   // NaN
console.log(parseInt(a)); // 1
```

**Example 2:**

```javascript
let a = "1";
console.log(Number(a));   // 1
console.log(parseInt(a)); // 1
```

In conclusion, we use `Number()` for strings that contain only a number, and `parseInt()` when we want to pull a leading number out of a string that may contain other characters. A few more edge cases are shown below.
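For completeness, here are a few more cases that often trip people up:

```javascript
console.log(Number(""));            // 0   (an empty string converts to 0)
console.log(parseInt(""));          // NaN
console.log(Number("  42  "));      // 42  (surrounding whitespace is ignored)
console.log(parseInt("42px"));      // 42  (stops at the first non-digit)
console.log(parseInt("3.14"));      // 3   (stops at the decimal point)
console.log(parseFloat("3.14abc")); // 3.14
console.log(Number("0x1F"));        // 31  (hex notation is understood)
console.log(parseInt("0x1F"));      // 31
```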
yns666
1,897,810
Adding API Key Authentication In Keycloak
Hey everyone, in this blog we will be extending Keycloak by adding API key authentication with...
0
2024-07-05T16:21:27
https://blog.elest.io/adding-api-key-authentication-in-keycloak/
keycloak, softwares, elestio
---
title: Adding API Key Authentication In Keycloak
published: true
date: 2024-06-23 09:09:56 UTC
tags: Keycloak,Softwares,Elestio
canonical_url: https://blog.elest.io/adding-api-key-authentication-in-keycloak/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-14.png
---

Hey everyone, in this blog we will be extending Keycloak by adding API key authentication. We will be using a self-hosted [Keycloak](https://elest.io/open-source/keycloak?ref=blog.elest.io) instance deployed on Elestio. To get started, head over to the [Elestio Dashboard](https://elest.io/open-source/keycloak?ref=blog.elest.io), deploy the Keycloak service, and log in.

### Background

API key authentication is among the most straightforward methods for securing access to resources and APIs. This method involves providing a static key, which must be kept secure and used to access protected APIs, typically via a special header or the `Authorization` header. If you're using Keycloak and want to incorporate API key authentication, continue reading. This guide demonstrates how to extend Keycloak by adding a simple API key authentication mechanism, beneficial for those working within a microservices architecture where different services authenticate differently.

### Design

Consider a system composed of two services: a Spring Boot application serving dashboard pages and a Node.js stateless REST API providing important weather forecast data. Access to these services is available separately through different URIs, but sign-up via the dashboard is required. Users sign up to obtain an API key, which can be used to access the weather REST API anytime. This scenario is common in API-as-a-service applications.

Additionally, our system includes a Keycloak auth server for authentication and authorization. To secure the dashboard service, we use Keycloak's SSO mechanism. To secure the REST API service, we introduce API key authentication: a random key generated and stored with user data during registration. An endpoint is also needed to verify the existence of the API key.

To implement this, we extend Keycloak with a module featuring:

1. Generation of a random key string and storage with user attributes during registration.
2. An endpoint to verify the validity of the key.

### Implementation

#### Key Generation

Keycloak's extensibility allows for easy implementation of new features by utilizing its SPI interfaces or overriding providers. Here, we focus on the module implementation, starting with API key generation. This requires capturing the registration event to generate the key. Implementing EventListenerProvider helps capture various internal Keycloak events and take action.
``` public class RegisterEventListenerProvider implements EventListenerProvider { private KeycloakSession session; private RealmProvider model; private RandomString randomString; private EntityManager entityManager; public RegisterEventListenerProvider(KeycloakSession session) { this.session = session; this.model = session.realms(); this.entityManager = session.getProvider(JpaConnectionProvider.class).getEntityManager(); this.randomString = new RandomString(50); } public void onEvent(Event event) { if (event.getType().equals(EventType.REGISTER)) { RealmModel realm = model.getRealm(event.getRealmId()); String userId = event.getUserId(); addApiKeyAttribute(userId); } } public void onEvent(AdminEvent adminEvent, boolean includeRepresentation) { if (Objects.equals(adminEvent.getResourceType(), ResourceType.USER) && Objects.equals(adminEvent.getOperationType(), OperationType.CREATE)) { String userId = adminEvent.getResourcePath().split("/")[1]; if (Objects.nonNull(userId)) { addApiKeyAttribute(userId); } } } public void addApiKeyAttribute(String userId) { String apiKey = randomString.nextString(); UserEntity userEntity = entityManager.find(UserEntity.class, userId); UserAttributeEntity attributeEntity = new UserAttributeEntity(); attributeEntity.setName("api-key"); attributeEntity.setValue(apiKey); attributeEntity.setUser(userEntity); attributeEntity.setId(UUID.randomUUID().toString()); entityManager.persist(attributeEntity); } public void close() { // Used for any necessary cleanup before destroying instances. } } ``` In Keycloak, every provider has a corresponding factory responsible for creating instances. Thus, we need to implement the EventListenerProviderFactory: ``` public class RegisterEventListenerProviderFactory implements EventListenerProviderFactory { public EventListenerProvider create(KeycloakSession keycloakSession) { return new RegisterEventListenerProvider(keycloakSession); } public void init(Config.Scope scope) { } public void postInit(KeycloakSessionFactory keycloakSessionFactory) { } public void close() { } public String getId() { return "api-key-registration-generation"; } } ``` #### API Key Validation Endpoint Next, we create an endpoint to check if an API key is valid: ``` public class ApiKeyResource { private KeycloakSession session; public ApiKeyResource(KeycloakSession session) { this.session = session; } @GET @Produces("application/json") public Response checkApiKey(@QueryParam("apiKey") String apiKey) { List<UserModel> result = session.userStorageManager().searchForUserByUserAttribute("api-key", apiKey, session.realms().getRealm("example")); return result.isEmpty() ? Response.status(401).build() : Response.ok().build(); } } ``` Keycloak uses Java (Jakarta) EE, so JAX-RS annotations are used to create endpoints. 
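Once the provider is registered (next step), Keycloak exposes this resource under the realm's REST namespace, so any service can verify a key with a plain HTTP call. As a rough sketch, assuming a realm named `example` and the default `/auth` context path (the exact URL depends on your Keycloak version and configuration):

```
curl -i "https://[KEYCLOAK_CNAME]/auth/realms/example/check?apiKey=YPqIeqhbxUcOgDd6ld2jl9txfDrHxAPme89WLMuC8e0oaYXeA7"
# 200 OK           -> the key exists
# 401 Unauthorized -> unknown key
```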
To make Keycloak recognize our endpoint, we need to implement RealmResourceProvider and RealmResourceProviderFactory: ``` public class ApiKeyResourceProvider implements RealmResourceProvider { private KeycloakSession session; public ApiKeyResourceProvider(KeycloakSession session) { this.session = session; } public Object getResource() { return new ApiKeyResource(session); } public void close() {} } public class ApiKeyResourceProviderFactory implements RealmResourceProviderFactory { public RealmResourceProvider create(KeycloakSession session) { return new ApiKeyResourceProvider(session); } public void init(Config.Scope config) {} public void postInit(KeycloakSessionFactory factory) {} public void close() {} public String getId() { return "check"; } } ``` #### Provider Configuration To inform Keycloak about the new providers, we create mappings under META-INF/services: **Filename: org.keycloak.events.EventListenerProviderFactory** ``` com.gwidgets.providers.RegisterEventListenerProviderFactory ``` **Filename: org.keycloak.services.resource.RealmResourceProviderFactory** ``` com.gwidgets.providers.ApiKeyResourceProviderFactory ``` #### Module Packaging Keycloak, as a standalone web app running on Wildfly, allows modules to be installed as .ear or .jar files under standalone/deployments. More information is available in the official documentation. Our project structure is as follows: ``` api-key-ear api-key-module pom.xml ``` Full source code is available at [GitHub](https://github.com/zak905/keycloak-api-key-demo?ref=blog.elest.io). #### Testing your API key Use the API key to call the REST API service: ``` curl -H "X-API-KEY: YPqIeqhbxUcOgDd6ld2jl9txfDrHxAPme89WLMuC8e0oaYXeA7" https://[CNAME] {"forecast": "weather is cool today"} ``` Access with an incorrect key results in a 401 response: ``` curl -v -H "X-API-KEY: wrongkey" https://[CNAME] < HTTP/1.1 401 Unauthorized < X-Powered-By: Express < Date: Sun, 16 Jun 2019 18:41:34 GMT < Connection: keep-alive < Content-Length: 0 ``` ## **Thanks for reading ❤️** Thank you so much for reading and do check out the Elestio resources and Official [Keycloak documentation](https://www.keycloak.org/documentation?ref=blog.elest.io) to learn more about Keycloak. You can click the button below to create your service on [Elestio](https://elest.io/open-source/keycloak?ref=blog.elest.io). See you in the next one👋 [![Adding API Key Authentication In Keycloak](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/keycloak?ref=blog.elest.io)
kaiwalyakoparkar
1,897,671
I will be learning Linux in the next 180 days
RHCSA (Red Hat Certified System Administrator) is a globally recognized certification that focuses on the...
0
2024-06-23T09:07:05
https://dev.to/mahir_dasare_333/i-will-be-learning-linux-in-next-180-days-4j76
RHCSA (Red Hat Certified System Administrator) is a globally recognized certification that focuses on the core system administration skills required for Red Hat Enterprise Linux environments.

Linux:
- Linux is an open-source kernel.
- Linux-based operating systems are Unix-like.
- The Linux kernel was written by Linus Torvalds in 1991.

Linux Distributions: https://en.wikipedia.org/wiki/List_of_Linux_distributions
mahir_dasare_333
1,897,809
Creating Keycloak cluster with Elestio
Hey everyone, in this blog we will be creating a Keycloak cluster on Elestio. We...
0
2024-07-05T16:20:45
https://blog.elest.io/keycloack-cluster-with-elestio/
keycloak, softwares, elestio
---
title: Creating Keycloak cluster with Elestio
published: true
date: 2024-06-23 09:06:16 UTC
tags: Keycloak,Softwares,Elestio
canonical_url: https://blog.elest.io/keycloack-cluster-with-elestio/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-13.png
---

Hey everyone, in this blog we will be creating a Keycloak cluster on Elestio. We will be using a self-hosted [Keycloak](https://elest.io/open-source/keycloak?ref=blog.elest.io) instance deployed on Elestio. To get started, head over to the [Elestio Dashboard](https://elest.io/open-source/keycloak?ref=blog.elest.io), deploy the Keycloak service, and log in.

### Terraform Module for Keycloak Cluster

A comprehensive Terraform module developed by Elestio significantly simplifies the process of deploying and scaling Keycloak clusters. By leveraging this module, users can streamline their infrastructure setup, ensuring consistency and reliability in their Keycloak deployments. The module is designed to handle the complexities of scaling and maintaining a Keycloak cluster, making it an essential tool for developers and DevOps engineers looking to optimize their authentication and authorization workflows.

#### Why Choose Keycloak?

Keycloak stands out as a premier solution for managing user access across various applications. Its powerful features include single sign-on (SSO), identity brokering, and user federation, which collectively enhance security and user experience. By integrating Keycloak, organizations can reduce development time, as they no longer need to build authentication systems from scratch. Additionally, Keycloak supports a wide range of authentication protocols like OAuth2, OpenID Connect, and SAML, making it versatile for different use cases. The robust security features ensure that sensitive user data is protected, meeting compliance requirements and boosting user trust.

#### Keycloak Cluster Architecture

In a Keycloak cluster, multiple independent nodes work together using a distributed Infinispan cache to manage user sessions and data efficiently. This architecture allows the cluster to scale horizontally, meaning more nodes can be added to handle increased load without downtime. High availability is another critical feature, as the cluster can continue operating even if some nodes fail. The distributed cache ensures that user sessions are consistently available across all nodes, providing a seamless experience for users regardless of which node they connect to. This architecture is ideal for large-scale applications that require robust and reliable user management.

![Creating Keycloak cluster with Elestio](https://blog.elest.io/content/images/2024/06/cluster_architecture.png)

#### Terraform Module Design

This Terraform module focuses primarily on deploying Keycloak nodes. It is designed to be used in conjunction with other essential services, such as a load balancer and a database, to create a fully functional Keycloak cluster. While Elestio offers these services as part of their platform, the module is flexible enough to integrate with other compatible services you might already be using. This flexibility allows you to tailor the deployment to meet specific requirements and infrastructure setups. By using this module, you can automate the deployment process, ensuring that each Keycloak node is configured correctly and consistently.
![Creating Keycloak cluster with Elestio](https://blog.elest.io/content/images/2024/06/terraform_architecture.png)

#### About Elestio

Elestio is a comprehensive DevOps platform that simplifies the deployment and management of various services. Their fully managed approach means that you don't have to spend extensive time configuring and maintaining your infrastructure. Elestio handles crucial aspects like security, DNS management, SMTP configuration, SSL setup, monitoring and alerts, backups, and updates. This allows developers to focus on building and deploying their applications rather than dealing with operational overhead. An Elestio account is required to use the Terraform module, and the platform offers various services that can be seamlessly integrated into your deployment.

##### Getting Started with Elestio

1. **Create an account**: Sign up on Elestio's platform to get started.
2. **Request free credits**: Take advantage of the free credits offered by Elestio to explore their services.
3. **Explore services**: Browse the growing list of deployable services on Elestio. If a service you need is missing, you can request its inclusion.

Elestio's platform is user-friendly and designed to cater to both novice users and experienced developers. Their support team is available to assist with any questions or issues, ensuring that you can get your services up and running smoothly.

#### Usage

To use this module with your own database and load balancer, you need to configure it appropriately. Below is an example configuration:

```
module "cluster" {
  source = "elestio-examples/keycloak-cluster/elestio"

  project_id        = "12345"
  keycloak_version  = "latest"
  keycloak_password = "MyPassword1234"

  database          = "postgres"
  database_host     = "hostname.example.com"
  database_port     = "5432"
  database_name     = "keycloak"
  database_schema   = "public"
  database_user     = "admin"
  database_password = "password"

  nodes = [
    {
      server_name   = "keycloak-01"
      provider_name = "hetzner"
      datacenter    = "fsn1"
      server_type   = "SMALL-1C-2G"
    },
    {
      server_name   = "keycloak-02"
      provider_name = "hetzner"
      datacenter    = "fsn1"
      server_type   = "SMALL-1C-2G"
    },
  ]

  configuration_ssh_key = {
    username    = "terraform-user"
    public_key  = chomp(file("~/.ssh/id_rsa.pub"))
    private_key = file("~/.ssh/id_rsa")
  }
}
```

This configuration sets up a Keycloak cluster with specified database and node details. By adjusting these parameters, you can customize the deployment to fit your infrastructure needs.

#### Complete Deployment Example

To deploy a complete setup including the database, load balancer, and nodes, follow these steps:

1. **Install Terraform**: Download and install the Terraform client from the official [Terraform website](https://learn.hashicorp.com/tutorials/terraform/install-cli?ref=blog.elest.io). This tool will enable you to manage your infrastructure as code, simplifying deployment and scaling processes.
2. **Set up the configuration**:
   - Create a new directory for your project.
   - Inside this directory, create the following files: `main.tf`, `terraform.tfvars`, `terraform_rsa`, `terraform_rsa.pub`, and `.gitignore`.
   - Populate `main.tf` with your configuration details.
   - Add sensitive information like passwords and keys to `terraform.tfvars`.

   Your project should look like this:

   ```
   .
   ├── main.tf
   ├── terraform.tfvars
   ├── terraform_rsa
   ├── terraform_rsa.pub
   └── .gitignore
   ```
3. **Generate an SSH key**: This key is required by the module to configure the nodes. Generate a dedicated SSH key pair for secure communication with your servers.
4. **Initialize Terraform**:

   ```
   terraform init
   ```

   This command initializes the Terraform configuration, downloading necessary plugins and preparing your workspace.
5. **Apply the configuration**:

   ```
   terraform apply
   ```

   Confirm the deployment when prompted. Terraform will then create and configure the resources as defined in your configuration files. The process will take a few minutes, during which Terraform will set up the infrastructure and deploy the Keycloak nodes.

#### Output Cluster Information

Use the following command to display details about the created resources:

```
terraform show
```

For essential information, use custom outputs. This will give you a concise summary of critical details, such as database access information and node credentials:

```
terraform output database_admin
terraform output nodes_admins
terraform output load_balancer_cname
```

This information is crucial for managing and accessing your deployed services.

#### Verify Deployment

To ensure your deployment is successful, check the logs of the deployed services via the Elestio dashboard:

1. Navigate to the [Elestio Dashboard](https://dash.elest.io/?ref=blog.elest.io).
2. Select your cluster project.
3. View the logs for each Keycloak service under the "Overview" section.

You should see log entries indicating the proper initialization and joining of nodes in the cluster. Look for specific lines that show the nodes have successfully started and joined the cluster, confirming that the deployment is functioning as expected.

#### Adding a Third Node

To expand your cluster, add additional nodes in the `main.tf` file:

```
nodes = [
  {
    server_name   = "keycloak-01"
    provider_name = "hetzner"
    datacenter    = "fsn1"
    server_type   = "SMALL-1C-2G"
  },
  {
    server_name   = "keycloak-02"
    provider_name = "hetzner"
    datacenter    = "fsn1"
    server_type   = "SMALL-1C-2G"
  },
  {
    server_name   = "keycloak-03"
    provider_name = "hetzner"
    datacenter    = "fsn1"
    server_type   = "SMALL-1C-2G"
  },
]
```

Run `terraform apply` again and confirm the changes. The new node will join the cluster shortly. This process allows you to scale your Keycloak cluster easily, adding capacity as your application's needs grow.

#### Recommendations

- **Secrets**: Avoid committing sensitive information like API tokens, Keycloak passwords, and SSH keys to your version control system. Use environment variables or secret management tools to handle these credentials securely.
- **Configuration**: Refer to the Keycloak service documentation for all available attributes. For example, you can disable the service firewall with `firewall_enabled = false`. This flexibility allows you to customize the deployment to meet your specific security and operational requirements.
- **Hosting**: Review the Providers, Datacenters, and Server Types guide for available options. Understanding the available hosting options can help you choose the best infrastructure for your needs.
- **Resource Limits**: Adding more nodes may exceed your resource quota. Visit your account quota page to request additional resources if necessary. Monitoring your resource usage and planning for scalability will ensure that your deployment remains stable and responsive.

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Keycloak documentation](https://www.keycloak.org/documentation?ref=blog.elest.io) to learn more about Keycloak. You can click the button below to create your service on [Elestio](https://elest.io/open-source/keycloak?ref=blog.elest.io).
See you in the next one👋 [![Creating Keycloak cluster with Elestio](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/keycloak?ref=blog.elest.io)
kaiwalyakoparkar
1,897,670
JavaScript code real-time online obfuscation encryption
JavaScript obfuscation is a technique that transforms source code into a format that is difficult to...
0
2024-06-23T09:06:05
https://dev.to/fridaymeng/javascript-code-real-time-online-obfuscation-encryption-mb0
JavaScript obfuscation is a technique that transforms source code into a format that is difficult to understand and reverse engineer. The obfuscated code functions the same as the original code but is much less readable, increasing the security of the code and preventing it from being easily copied, tampered with, or reverse-engineered by others.

**Why JavaScript Obfuscation is Needed**

- Protect Intellectual Property: Prevent unauthorized copying or theft of code.
- Increase Security: Reduce the risk of code being exploited by hackers.
- Reduce Risk of Reverse Engineering: Make it difficult for hackers to understand the code logic, thus protecting business logic and algorithms.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mo4ohuz9v252mgsmwxgb.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9npbm9kr95zfo1wfocqj.png)

[Online Test](https://addgraph.com/jsobfuscator)

JavaScript obfuscation is an effective technique to protect your code from unauthorized access and modification. While it cannot completely prevent reverse engineering, it can significantly increase the difficulty of cracking the code. By using appropriate tools and techniques, you can ensure that your JavaScript code is more secure when deployed to production environments. A small before/after illustration follows below.
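To make the idea concrete, here is a hedged illustration: a readable function and the kind of output a typical obfuscator might produce for it (the exact output varies by tool and settings; the identifiers below are invented for the example):

```javascript
// Original, readable source
function add(a, b) {
  return a + b;
}
console.log(add(2, 3)); // 5

// One possible obfuscated equivalent
var _0x3f1a = ['log'];
function _0x21bc(_0x4d, _0x5e) { return _0x4d + _0x5e; }
console[_0x3f1a[0]](_0x21bc(2, 3)); // still prints 5
```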
fridaymeng
1,897,807
Embedding Superset dashboards in your React application
Hey everyone, in this blog we will learn more about embedding Superset dashboards in your React...
0
2024-07-05T16:20:10
https://blog.elest.io/embedding-superset-dashboards-in-your-react-application/
superset, softwares, elestio
---
title: Embedding Superset dashboards in your React application
published: true
date: 2024-06-23 09:05:18 UTC
tags: Superset,Softwares,Elestio
canonical_url: https://blog.elest.io/embedding-superset-dashboards-in-your-react-application/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-9.png
---

Hey everyone, in this blog we will learn more about embedding [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) dashboards in your React application.

### Maximizing Data Insights: Integrating Superset's Image Export with External Applications

Apache Superset stands out as a powerful tool for data visualization, boasting a user-friendly interface for rapid chart creation across diverse databases. Among its extensive feature set, Superset facilitates the seamless export of visualizations as images, streamlining the sharing of insights without direct platform access. Furthermore, Superset empowers users to embed dashboards into external applications through iframes, facilitating the integration of data analytics directly into web environments.

#### Empowering Data Analytics with Embedded Dashboards

Embedded dashboards serve as a conduit, delivering profound data analytics directly into web applications. Leveraging Superset's Embedded SDK, users effortlessly integrate Superset dashboards into their applications, utilizing the app's authentication system. This embedding process entails inserting an iframe housing a Superset page into the host application, enabling users to access integrated dashboards seamlessly, provided they're logged into the host app.

#### Objectives

- Seamlessly access Superset graphs within React applications.
- Implement multi-tenancy support for Embedded Dashboards.
- Employ Role-Level Security (RLS) for robust access control.

#### Prerequisites

- Docker or Docker Compose.
- Functional React-based app alongside its backend.

#### Superset Configuration

Ensure Superset is configured correctly, especially by enabling the EMBEDDED\_SUPERSET feature flag in `superset_config.py`.

#### Client App (React) Integration

Integrating Superset's embedded dashboard into React applications involves embedding the dashboard within an iframe. Here's a code snippet demonstrating the embedding process:

```
import { embedDashboard } from "@superset-ui/embedded-sdk";

// fetchGuestTokenFromBackend is your backend helper that returns a Superset guest token
const token = await fetchGuestTokenFromBackend(config);

embedDashboard({
  id: "abcede-ghifj-xyz", // Embedded Dashboard UUID
  supersetDomain: "https://[ELESTIO_CNAME]",
  mountPoint: document.getElementById("superset-container"), // HTML element to render iframe
  fetchGuestToken: () => token,
  dashboardUiConfig: { hideTitle: true }
});
```

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋

[![Embedding Superset dashboards in your React application](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
kaiwalyakoparkar
1,897,806
Superset SSO with Google integration
Hey everyone, In this blog we will be learning more about Superset SSO with Google integration....
0
2024-07-05T16:19:31
https://blog.elest.io/superset-sso-with-google-integration/
superset, softwares, elestio
---
title: Superset SSO with Google integration
published: true
date: 2024-06-23 09:03:08 UTC
tags: Superset,Softwares,Elestio
canonical_url: https://blog.elest.io/superset-sso-with-google-integration/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-10.png
---

Hey everyone, In this blog we will be learning more about [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) SSO with Google integration.

Seamless Integration of Superset with Google SSO for Enhanced Data Analysis

### Understanding Superset SSO with Google Integration

Incorporating Single Sign-On (SSO) with Google in Apache Superset not only streamlines the authentication process but also enhances the overall user experience. Here's a detailed guide on how to set it up:

### Prerequisites

1. Admin access to the Superset instance.
2. A Google Cloud Platform account.

### Configuration Steps

1. **Create OAuth 2.0 Client ID in Google Cloud Platform:**
   - Go to APIs & Services > Credentials.
   - Click on "Create credentials" and select "OAuth client ID."
   - Configure the consent screen and set the authorized redirect URI to your Superset callback URL.
2. **Configure Superset:** Navigate to superset\_config.py and set AUTH\_TYPE to AUTH\_OAUTH.
   - Add Google as an OAuth provider with the client ID and secret obtained in the previous step.
3. **Map User Information:**
   - Ensure accurate mapping of user details from Google's OAuth response to Superset's user model.

### Testing the Integration

1. Try logging in to Superset using the "Sign in with Google" option.
2. Confirm that user details are correctly populated, and permissions are appropriately assigned.

### Troubleshooting

- Verify the callback URL and client ID/secret if authentication issues arise.
- Double-check the user mapping configuration for any inconsistencies.

### **Configure Superset:**

Edit the superset\_config.py file to integrate Google OAuth settings properly.

```
from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH
OAUTH_PROVIDERS = [{
    'name':'google',
    'icon':'fa-google',
    'token_key':'access_token',
    'remote_app': {
        'client_id':'YOUR_GOOGLE_CLIENT_ID',
        'client_secret':'YOUR_GOOGLE_CLIENT_SECRET',
        'api_base_url':'https://www.googleapis.com/oauth2/v2/',
        'client_kwargs':{
            'scope': 'email profile'
        },
        'request_token_url':None,
        'access_token_url':'https://accounts.google.com/o/oauth2/token',
        'authorize_url':'https://accounts.google.com/o/oauth2/auth',
    }
}]
```

**Create OAuth Credentials:** Visit the Google Cloud Platform, create a new project, and generate OAuth 2.0 credentials. Plug the generated client\_id and client\_secret into the configuration above.

**Redirect URIs:** In the Google Cloud Platform credentials settings, add the authorized redirect URIs, typically http://YOUR\_SUPERSET\_URL/oauth-authorized/google.

**Finalize Setup:** Restart Superset to implement the changes effectively. Users can now effortlessly sign in using their Google accounts.

### Setting Up Google Sheets as a Data Source in Superset

Integrating Google Sheets with Apache Superset requires the utilization of the shillelagh connector library. Follow these steps to seamlessly connect Google Sheets to Superset:

**Install Shillelagh:** Ensure that the shillelagh library is installed in your Superset environment to facilitate the integration.
**Create API Credentials:** Set up the Google Sheets API within your Google Cloud Console and obtain the necessary credentials required for authentication.

**Configure Superset:** Add a new database connection in Superset, utilizing the SQLAlchemy URI format provided by shillelagh. This format typically begins with gsheets:// followed by the relevant parameters.

**Test Connection:** Validate the connection between Superset and your Google Sheets by utilizing the 'Test Connection' functionality to ensure successful communication.

For advanced configurations, such as implementing Single Sign-On (SSO) with Google, see the configuration section above. This enables users to authenticate seamlessly using their Google accounts, thereby enhancing the data access process.

### Troubleshooting SSO Challenges in Superset with Google Integration

While implementing Single Sign-On (SSO) between Apache Superset and Google, users may encounter various issues. Below are some common challenges along with their solutions to ensure a smooth SSO setup:

1. **Incorrect Redirect URI:**
   - Verify that the redirect URI in Google Cloud Console matches the one configured in Superset.
   - Ensure the URI follows the format https://YOUR\_SUPERSET\_URL/oauth-authorized/google.
2. **Missing or Invalid Client ID/Secret:**
   - Double-check the client ID and secret in the Superset configuration against the credentials provided by Google.
3. **User Email Domain Restrictions:**
   - If your organization restricts domains, add the allowed email domains in the Google Admin console.
4. **Insufficient Scopes:**
   - Ensure the scopes for the Google API include openid, email, and profile.
5. **Token Expiry Issues:**
   - Implement token refresh handling in Superset to manage access token expiration effectively.
6. **User Role Mapping:**
   - Define user role mapping in Superset to assign roles based on Google group membership.
7. **SSL/TLS Configuration:**
   - For https redirect URIs, ensure your Superset instance is properly configured with SSL/TLS.
8. **Firewall and Network Access:**
   - Confirm that your network allows traffic to and from accounts.google.com for OAuth authentication.

By addressing these common challenges, users can ensure the reliability and security of their Superset SSO integration with Google.

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋

[![Superset SSO with Google integration](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
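As a small shell-side companion to the Google Sheets section above, this is how the shillelagh install and connection string typically look — the `gsheetsapi` extra and the service-account details are assumptions to adapt to your deployment:

```
# Install shillelagh with its Google Sheets adapter into Superset's environment
pip install "shillelagh[gsheetsapi]"

# SQLAlchemy URI to paste into Superset's "Add database" form:
#   gsheets://
#
# For private sheets, supply a service account key via the connection's
# engine parameters (service_account_file) and share each sheet with the
# service account's email address.
```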
kaiwalyakoparkar
1,897,805
Superset Download as Image API Guide
Hey everyone, In this blog we will be learning more about the Superset Download as Image API. Apache...
0
2024-07-05T16:18:53
https://blog.elest.io/superset-download-as-image-api-guide/
superset, softwares, elestio
---
title: Superset Download as Image API Guide
published: true
date: 2024-06-23 09:00:42 UTC
tags: Superset,Softwares,Elestio
canonical_url: https://blog.elest.io/superset-download-as-image-api-guide/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-8.png
---

Hey everyone, In this blog we will be learning more about the [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) Download as Image API.

Apache Superset provides a powerful API for downloading dashboard images, making it easier to share visual insights without requiring direct access to the platform. This guide explains how to use the Superset API to export dashboard images seamlessly.

#### Understanding Superset's Image Export Feature

Superset’s API enables the programmatic export of dashboard images, which is beneficial for distributing visual insights. Here’s a step-by-step guide to leveraging this feature:

#### API Endpoint

- **Endpoint:** Use `/api/v1/chart/export/` to export images of your charts.
- **Chart ID:** You need to provide the chart ID in the request.

#### Request Method

- **Method:** Send a `POST` request with the necessary authentication headers.

#### Payload

- **Parameters:** Include specific parameters in the request body, such as the desired image format (e.g., PNG, JPEG).

#### Response

- **Format:** The API returns the image in binary format, which can be saved or embedded as needed.

#### Example cURL Command

Here’s an example of a cURL command to export a chart image:

```
curl -X POST 'http://superset.example.com/api/v1/chart/export/' \
  -H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
  --data-raw '{"chart_id": YOUR_CHART_ID}'
```

Replace `YOUR_ACCESS_TOKEN` and `YOUR_CHART_ID` with your actual access token and chart ID.

By using the Superset API to export dashboard images, you can streamline the sharing of visual data insights. For more details and customization options, refer to the official Superset documentation.

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋

[![Superset Download as Image API Guide](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
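Because the response comes back as binary image data, you will usually want to write it straight to a file. A small variation on the command above, keeping the same placeholder token and chart ID and adding a status-code check:

```
# Save the exported image to chart.png and print the HTTP status code
curl -s -X POST 'http://superset.example.com/api/v1/chart/export/' \
  -H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
  --data-raw '{"chart_id": YOUR_CHART_ID}' \
  --output chart.png \
  -w '%{http_code}\n'
```

A `200` with a non-empty `chart.png` means the export worked; anything else usually points at an expired token or a wrong chart ID.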
kaiwalyakoparkar
1,897,804
Superset Redis Caching Guide
Hey everyone, In this blog we will be learning more about Superset Redis caching. Optimize your...
0
2024-07-05T16:18:07
https://blog.elest.io/superset-redis-caching-guide/
superset, softwares, redis, elestio
---
title: Superset Redis Caching Guide
published: true
date: 2024-06-23 08:57:39 UTC
tags: Superset,Softwares,Redis,Elestio
canonical_url: https://blog.elest.io/superset-redis-caching-guide/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-7.png
---

Hey everyone, In this blog we will be learning more about [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) Redis caching.

Optimize your Superset dashboards for better performance and scalability by integrating Redis caching. This guide explains how to set up and leverage Redis as a cache backend in Superset.

#### Understanding Redis as a Cache Backend in Superset

Superset integrates with Redis to create a caching layer that improves performance by storing the results of expensive queries. Here’s how you can configure Redis as your cache backend:

#### Cache Configuration

To set up Redis caching, add the following configuration to your `superset_config.py` file:

```
CACHE_CONFIG = {
    'CACHE_TYPE': 'redis',
    'CACHE_DEFAULT_TIMEOUT': 300,
    'CACHE_KEY_PREFIX': 'superset_results',
    'CACHE_REDIS_URL': 'redis://localhost:6379/0'
}
```

#### Benefits of Using Redis

- **Speed:** Redis offers fast read and write operations as an in-memory data store.
- **Scalability:** Redis can scale horizontally, making it ideal for distributed environments like Superset.
- **Persistence:** Redis provides various persistence options, ensuring cached data isn’t lost during failures.

#### Advanced Caching Strategies

- **Time-Based Invalidation:** Use `CACHE_DEFAULT_TIMEOUT` to control cache duration.
- **Granular Cache Control:** Customize caching policies at the chart or dashboard level.

#### Monitoring and Maintenance

- **Monitoring:** Use Redis monitoring tools to track cache hit rates and optimize cache size.
- **Maintenance:** Regularly clear old or unused cache keys to maintain cache efficiency.

By leveraging Redis, Superset can deliver faster insights and an improved user experience.

### Integrating Redis into Superset

Integrating Redis into Superset's architecture not only improves performance but also contributes to a more robust and responsive BI tool. Proper configuration of Redis is crucial to match the scale of your Superset deployment. Ensure that your Redis instance is optimized for your specific use case to maximize the benefits.

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋

[![Superset Redis Caching Guide](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
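For the monitoring and maintenance points above, `redis-cli` usually suffices. A quick sketch, assuming Redis runs at the same URL as in the `CACHE_CONFIG` snippet:

```
# Confirm Superset's cache backend is reachable
redis-cli -u redis://localhost:6379/0 ping                                 # expect: PONG

# List keys written under the configured CACHE_KEY_PREFIX
redis-cli -u redis://localhost:6379/0 --scan --pattern 'superset_results*'

# Check the remaining lifetime (in seconds) of one cached result
redis-cli -u redis://localhost:6379/0 ttl 'superset_results<some-key>'
```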
kaiwalyakoparkar
1,897,803
Apache Superset SSO integration guide
Hey everyone, In this blog we will be learning more about Superset SSO Integrations. Integrating...
0
2024-07-05T15:42:39
https://blog.elest.io/apache-superset-sso-integration-guide/
superset, softwares, elestio
---
title: Apache Superset SSO integration guide
published: true
date: 2024-06-23 08:55:24 UTC
tags: Superset,Softwares,Elestio
canonical_url: https://blog.elest.io/apache-superset-sso-integration-guide/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-6.png
---

Hey everyone, In this blog we will be learning more about [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) SSO Integrations.

Integrating Single Sign-On (SSO) with Apache Superset enhances both security and user experience by enabling users to authenticate with their existing credentials. This guide provides a step-by-step process to set up SSO with Superset.

#### Prerequisites

- A Superset instance (v0.34.0 or later)
- An SSO provider (e.g., Okta, Auth0)

#### Configuration Steps

1. **Install Required Packages:** Ensure you have the necessary packages installed, such as `flask-appbuilder`.
2. **Configure Superset:** Open VS Code from your Tools tab and edit the `superset_config.py` file to include your SSO provider's details.
3. **Set Up OAuth:** Use the OAuth authentication backend to connect with your SSO provider.

#### Code Example

Here's an example configuration for integrating with Okta:

```
AUTH_TYPE = AUTH_OAUTH
OAUTH_PROVIDERS = [{
    'name':'okta',
    'token_key':'access_token',
    'icon':'fa-circle-o',
    'remote_app': {
        'client_id':'YOUR_CLIENT_ID',
        'client_secret':'YOUR_CLIENT_SECRET',
        'api_base_url':'https://<your-okta-domain>/oauth2/default',
        'client_kwargs':{
            'scope': 'openid profile email'
        },
    }
}]
```

#### Testing

After configuring, test the SSO integration to ensure users can log in seamlessly.

By following these steps and referring to the official documentation, you can successfully integrate SSO with Apache Superset. This integration provides a secure and efficient method for users to access the platform, streamlining the authentication process.

### Configuring SSO Authentication in Superset

To set up Single Sign-On (SSO) authentication in Apache Superset, follow these steps:

**Modify `superset_config.py`**: Configure the authentication type by setting `AUTH_TYPE` to `AUTH_OAUTH`.

```
AUTH_TYPE = AUTH_OAUTH
```

**Define OAuth Providers**: Specify the OAuth providers in the `OAUTH_PROVIDERS` configuration. Include necessary details such as `name`, `token_key`, `icon`, `remote_app`, and endpoints like `access_token_url` and `authorize_url`.

```
OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'token_key': 'access_token',
        'icon': 'fa-google',
        'remote_app': {
            'client_id': '<your-client-id>',
            'client_secret': '<your-client-secret>',
            'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
            'client_kwargs': {
                'scope': 'email profile'
            },
            'access_token_url': 'https://oauth2.googleapis.com/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
        }
    }
]
```

1. **Client Configuration**: Provide the `client_id`, `client_secret`, and other client-specific settings within the `remote_app` configuration.
2. **Additional Settings**: Adjust additional headers and parameters as needed for your OAuth provider.
3. **Test the Configuration**: Verify the SSO integration by logging into Superset through the configured OAuth provider.

### Managing SSO User Roles and Permissions in Apache Superset

Effectively managing Single Sign-On (SSO) user roles and permissions in Apache Superset is essential for ensuring secure and appropriate access to data and dashboards. Here's how to manage these roles and permissions:

#### Understanding Permissions

1.
**Model & Action** : Assign permissions such as `can_edit` and `can_delete` on entities like Dashboards or Users to control what actions users can perform. 2. **Views** : Grant view permissions to allow users access to specific web pages within Superset. 3. **Data Source** : Create permissions for each data source to restrict access to those explicitly granted to users. 4. **Database** : Permissions to access a database allow users to query within that database and all its data sources, as long as they have SQL Lab permissions. #### Role Management 1. **Admin Role** : Use sparingly as it has unrestricted access to everything in Superset. 2. **Alpha Role** : Users can access all data sources but cannot alter permissions. 3. **Gamma Role** : Users have limited access. Assign additional roles for specific data source access. 4. **Custom Roles** : Create custom roles tailored to different access needs. For example, a "Finance" role could be used for users needing access to finance-related data sources. #### SSO Integration Implement a `CustomSsoSecurityManager` to map SSO user details to Superset roles and permissions. This involves extending the `SupersetSecurityManager` and overriding the `oauth_user_info` method. #### Best Practices 1. **Base Roles** : Avoid altering the base roles provided by Superset. 2. **Custom Roles** : Create roles with specific permissions suited to different user groups. 3. **Regular Reviews** : Regularly review and update permissions to ensure they align with organizational changes. ### Troubleshooting Common SSO Issues in Superset When integrating Single Sign-On (SSO) with Apache Superset, you may encounter various challenges. Here are some tips for troubleshooting common problems: #### Incorrect Redirect URI - **Check Redirect URI** : Ensure that the redirect URI configured in your SSO provider matches the one set in Superset. Any mismatch can prevent successful authentication. #### SSO Configuration Errors - **Verify Configuration** : Double-check the settings in your `superset_config.py`, including `AUTH_TYPE`, `OAUTH_PROVIDERS`, and related configurations to ensure they are correct. #### User Role and Permissions - **Role Assignment** : After SSO authentication, verify that the user has the appropriate role and permissions assigned in Superset to access the necessary resources. #### SSL/TLS Certificate Issues - **Certificate Validation** : If using HTTPS, ensure that your SSL/TLS certificates are valid and properly installed. Incomplete or invalid certificates can disrupt secure connections. #### Debugging Logs **Enable Debug Mode** : Activate debug mode in Superset to generate detailed logs that can help identify the issue. Add the following line to your configuration: ``` DEBUG = True ``` #### SSO Provider Downtime - **Check Provider Status** : Ensure that your SSO provider is operational and not experiencing any outages or downtime. This can affect the authentication process. ## **Thanks for reading ❤️** Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋 [![Apache Superset SSO integration guide](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
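Two of the checks above — the redirect URI and the SSL/TLS certificate — can be verified from a shell before touching any configuration. A minimal sketch, with `YOUR_SUPERSET_URL` as a placeholder host:

```
# Inspect the certificate Superset presents (validity dates and subject)
openssl s_client -connect YOUR_SUPERSET_URL:443 -servername YOUR_SUPERSET_URL </dev/null 2>/dev/null \
  | openssl x509 -noout -dates -subject

# Confirm the OAuth callback route exists (any response other than 404
# means Flask-AppBuilder has registered the provider's callback)
curl -sI "https://YOUR_SUPERSET_URL/oauth-authorized/okta" | head -n 1
```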
kaiwalyakoparkar
1,897,379
4 Ideas to Create Organic Growth for a Web App
Growing a web app is hard, especially when you have limited capital. This is usually the case for us...
0
2024-06-23T08:51:47
https://dev.to/alvinscherdin/4-ideas-to-create-organic-growth-for-a-web-app-j9m
webdev, seo, tutorial, startup
Growing a web app is hard, especially when you have limited capital. This is usually the case for us small indie developers, and because of that, I have built an arsenal of growth ideas to launch that website with a bang!

## 1. Utilize social media

When starting out, the easiest win is social media. It can generate a lot of buzz right out of the gates, and people tend to share a website they like with their friends. My usual launch of a website goes something like this:

- Publishing 10 short form videos (Across YouTube shorts, TikTok and Instagram Reels)
- Publish 5 long form YouTube videos (Show your product and show your insight in the niche)
- Get the foundation going on all the socials (X, Instagram, Facebook) by posting a couple posts on each
- Plan out posts that automatically post every day for the next month ahead

These days most of the social media platforms use a "For You" structure in their feeds. This means you no longer really need to build an audience to get traction, you just need great content. Good content is generally attention-grabbing, emotional, or something people haven't seen before.

## 2. Start the SEO campaign

Step number two in the launch of your web app should definitely be launching the SEO campaign. It takes a couple of months to get traction with the organic traffic, which is why it's important to start it as soon as possible. You want to build all your "money pages" first, so that search engines understand what your website is about, then move on to the informational content afterwards.

The most important part when it comes to SEO from my experience is keeping up! Until you have reached around 200 pages of good content - you want to publish at least once a day. After that, you can slow down a bit and focus on improving what's already published.

## 3. Get a PR campaign going

Getting featured in local or national magazines is a great way to get a boatload of traffic in a matter of days. This gets eyeballs on your web app directly, and usually starts a snowball effect. Once one news publication mentions you or your brand - the rest will follow. This creates a serious chance both to get your first paying customers and to reach investors and future partners.

## 4. Take the feedback to heart

When launching a new web app, chances are high that you made some mistakes or something is broken. Make sure to therefore add some type of feedback option for your users. This lets you polish your application before it reaches the really big masses.

It's also a great idea to launch a beta version before your real launch. Invite some of the friends you picked up in the niche you're in!

That's all folks, good luck with your little (or big) web app!
alvinscherdin
1,897,669
Face Mask for Tan Removal
## Brighten Your Complexion with Pink Root Face Mask for Tan Removal Remove stubborn tan and...
0
2024-06-23T08:51:24
https://dev.to/pinkroot/face-mask-for-tan-removal-44ek
beauty, deta, productivity, skin
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkdcq5kaasnha8kpncz5.jpg)## Brighten Your Complexion with Pink Root Face Mask for Tan Removal Remove stubborn tan and brighten your complexion with [Pink Root Face Mask for Tan Removal](https://www.pinkroot.in/products/pink-root-detan-face-mask-). This powerful face mask is formulated to reduce the effects of sun exposure, removing tan and restoring your skin's natural glow. The natural ingredients in the mask help to lighten dark spots and pigmentation, giving you an even skin tone. The face mask works by exfoliating the skin and removing dead skin cells, revealing fresh and radiant skin underneath. The antioxidants in the mask help to protect the skin from further damage, while the nourishing ingredients keep the skin hydrated and healthy. Use Pink Root Face Mask for Tan Removal regularly to maintain a bright and even complexion.
pinkroot
1,897,668
Install local environment - CachyOS
Here is how to install the local environment for peviitor.ro Preconditions Make...
0
2024-06-23T08:49:06
https://dev.to/sebiboga/install-local-environment-cachyos-3953
peviitor
## Here is how to install the local environment for peviitor.ro

---

### Preconditions

Make sure you have installed:

- git
- github Desktop
- docker

## Steps to install

- Create the directory if it doesn't exist

```
mkdir -p ~/peviitor
```

- Clone the repositories

```
git clone https://github.com/peviitor-ro/solr.git ~/peviitor/solr
git clone https://github.com/peviitor-ro/api.git ~/peviitor/api
```

- replace < your-username > with your linux username

```
sudo chmod -R a+rwx /home/<your-username>/peviitor/solr
```

- stop apache-container

```
docker stop apache-container
docker rm apache-container
```

- stop solr-container

```
docker stop solr-container
docker rm solr-container
```

- stop data-migration container

```
docker stop data-migration
docker rm data-migration
```

- create subnetwork mynetwork

```
sudo docker network create --subnet=172.18.0.0/16 mynetwork
```

- replace < your-username > with your linux username

```
docker run --name apache-container --network mynetwork --ip 172.18.0.11 -d -p 8080:80 -v /home/<your-username>/peviitor:/var/www/html sebiboga/php-apache:1.0.0
```

- replace < your-username > with your linux username

```
docker run --name solr-container --network mynetwork --ip 172.18.0.10 -d -p 8983:8983 -v "/home/<your-username>/peviitor/solr/core/data:/var/solr/data" sebiboga/peviitor:1.0.0
```

- wait for solr-container to start
- run the data-migration container

```
docker run --name data-migration --network mynetwork --ip 172.18.0.12 --rm sebiboga/peviitor-data-migration-local:latest
```

- remove the data-migration image

```
docker rmi sebiboga/peviitor-data-migration-local:latest
```

## Test the environment in the browser:

`http://localhost:8983/`

`http://localhost:8080/api/v0/random/`
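You can also verify the setup from the terminal instead of the browser — a small sketch (the data-migration container was started with `--rm`, so only the two long-lived containers should be listed):

```
# Both long-lived containers should show an "Up ..." status
docker ps --format '{{.Names}}\t{{.Status}}' | grep -E 'apache-container|solr-container'

# Solr admin UI should answer with HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8983/

# The API should return data
curl -s http://localhost:8080/api/v0/random/ | head -c 300; echo
```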
sebiboga
1,897,666
Twilio Challenge: EcoAlert
This is a submission for Twilio Challenge v24.06.12 What I Built I have built a web...
0
2024-06-23T08:48:32
https://dev.to/reyans/twilio-challenge-ecoalert-1bk3
devchallenge, twiliochallenge, ai, twilio
*This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*

## What I Built

I have built a web application named EcoAlert. This application connects the general public with the authorities responsible for waste collection and management. Through my application, people can post images online depicting waste issues. These images are then labeled as high severity or low severity, making it easier for authorities to identify which tasks they should prioritize. When a high-severity issue is identified, an SMS alert is sent to the respective authorities, prompting them to take action as soon as possible.

## Demo

## Twilio and AI

I have leveraged the SMS capabilities of Twilio to send notifications to the relevant authorities when a high-severity case is identified. Additionally, I trained an AI model to recognize and classify cases as either high severity or low severity.

Demo

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fyzai4s4oqptcaw5it5s.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kchxzh9c7b4e9eojop56.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g63900xxox5m7ukzx5t6.png)

[EcoAlert Github repo](https://github.com/Reyan9450/EcoAlert)

## Additional Prize Categories

**Impactful Innovators**: EcoAlert addresses a significant environmental issue by improving the efficiency and responsiveness of waste management authorities through innovative use of Twilio's communication capabilities and AI integration.
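For reference, sending the SMS alert boils down to one authenticated POST against Twilio's Messages API. A minimal sketch of the kind of call EcoAlert's backend would make when a high-severity report comes in — the SID, auth token, and phone numbers are placeholders:

```bash
# Send an SMS alert via Twilio's Messages API
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Messages.json" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=+15005550006" \
  --data-urlencode "To=+15551234567" \
  --data-urlencode "Body=EcoAlert: high-severity waste report received. Please review."
```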
reyans
1,897,665
Azure DevOps Services and Exploring Alternatives
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to...
0
2024-06-23T08:46:16
https://dev.to/shreyash333/understanding-azure-devops-services-and-exploring-alternatives-1ln1
azure, devops, cicd, development
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops) to streamline the software development lifecycle. Azure DevOps is a suite of services offered by Microsoft to support DevOps practices. In this article, we will explore the services offered in Azure DevOps, their alternatives, and their pros and cons.

**Services in Azure DevOps:**

**1. Azure Boards:** A project management tool for tracking work items and projects.
**2. Azure Repos:** A version control system for managing code repositories.
**3. Azure Pipelines:** A continuous integration and delivery tool for automating build, test, and deployment processes.
**4. Azure Test Plans:** A testing tool for managing and executing tests.
**5. Azure Artifacts:** A package management tool for managing and sharing packages.
**6. GitHub Advanced Security for Azure DevOps:** A security tool for identifying vulnerabilities in code.

**Azure DevOps Services Cost:**

$30/user/month (basic plan) = $30/month
Annual cost: $360

**Pros:**

1. Integrated platform for development, delivery, and collaboration
2. Automated pipelines and continuous integration/continuous deployment (CI/CD)
3. Advanced project management and tracking capabilities
4. Scalable and secure
5. Integrates with other Azure services

**Cons:**

1. Steep learning curve
2. Can be expensive for large teams or enterprises
3. Limited customization options for some features
4. Some users find the UI cluttered

**Best Overall:**

- Offers a comprehensive, integrated platform for development, delivery, and collaboration
- Scalable, secure, and flexible

---

**Alternatives to Azure DevOps:**

**Set 1:**

**- Jira** (Azure Boards alternative)
**- GitHub** (Azure Repos alternative)
**- Jenkins** (Azure Pipelines alternative)
**- TestRail** (Azure Test Plans alternative)
**- Artifactory** (Azure Artifacts alternative)
**- SonarQube** (GitHub Advanced Security for Azure DevOps alternative)

**Price:**

- Jira: $7/user/month (standard plan) = $7/month
- GitHub: $21/user/month (team plan) = $21/month
- Jenkins: free (open-source) = $0/month
- TestRail: $25/user/month (premium plan) = $25/month
- Artifactory: $22/user/month (pro plan) = $22/month
- SonarQube: $15/month (individual plan)
- Annual cost: $1,080

**Pros:**

1. Jira offers robust project management capabilities
2. GitHub provides a popular version control system
3. Jenkins offers flexible automation options
4. TestRail provides comprehensive test management
5. Artifactory offers advanced artifact management
6. SonarQube provides detailed code analysis

**Cons:**

1. Multiple tools require multiple subscriptions and integrations
2. Can be costly for large teams or enterprises
3. Steep learning curve for some tools
4.
Limited integration between tools

**Best for Large Enterprises:**

- Offers advanced features, scalability, and security
- Supports large teams and complex projects
- Set 1 offers a range of robust tools, but requires multiple subscriptions and integrations

---

**Set 2:**

**- Asana** (Azure Boards alternative)
**- Bitbucket** (Azure Repos alternative)
**- Travis CI** (Azure Pipelines alternative)
**- PractiTest** (Azure Test Plans alternative)
**- Google Container Registry** (Azure Artifacts alternative)
**- Veracode** (GitHub Advanced Security for Azure DevOps alternative)

**Price:**

- Asana: $13.49/user/month (premium plan) = $13.49/month
- Bitbucket: $6/user/month (standard plan) = $6/month
- Travis CI: $6/month (pro plan)
- PractiTest: $29/user/month (pro plan) = $29/month
- Google Container Registry: $6/month (standard plan)
- Veracode: $10/month (pro plan)
- Annual cost: $845.88

**Pros:**

1. Asana offers user-friendly project management
2. Bitbucket provides a cloud-based version control system
3. Travis CI offers easy automation options
4. PractiTest provides comprehensive test management
5. Google Container Registry offers secure artifact management
6. Veracode provides advanced code analysis

**Cons:**

1. Multiple tools require multiple subscriptions and integrations
2. Can be costly for large teams or enterprises
3. Limited customization options for some tools
4. Some tools have limited features compared to Set 1

**Best for Budget-Constrained Teams:**

- Offers a range of tools at a lower cost than Set 1
- Ideal for teams with limited budget that still need robust features

---

**Set 3 (Free alternatives):**

**- Trello** (Azure Boards alternative)
**- GitLab** (Azure Repos alternative)
**- CircleCI** (Azure Pipelines alternative)
**- TestLink** (Azure Test Plans alternative)
**- Docker Hub** (Azure Artifacts alternative)
**- CodeCoverage** (GitHub Advanced Security for Azure DevOps alternative)

**Price:**

- Trello: free = $0/month
- GitLab: free = $0/month
- CircleCI: free = $0/month
- TestLink: free = $0/month
- Docker Hub: free = $0/month
- CodeCoverage: free = $0/month
- Annual cost: $0

**Pros:**

1. Completely free
2. Trello offers a user-friendly project management system
3. GitLab provides a comprehensive version control system
4. CircleCI offers easy automation options
5. TestLink provides comprehensive test management
6. Docker Hub offers secure artifact management
7. CodeCoverage provides detailed code analysis

**Cons:**

1. Limited features compared to paid tools
2. Limited support and documentation
3. May require additional setup and configuration
4. Some tools have limited scalability

**Best for Small Teams/Startups:**

- Completely free
- Offers a range of tools for project management, version control, automation, testing, and artifact management
- Ideal for small teams or startups with limited budget

---

**Based on this calculation, the annual cost for a single user is:**

- Azure DevOps: $360
- Set 1: $1,080
- Set 2: $845.88
- Set 3 (Free alternatives): $0

Azure DevOps is the best overall choice for its comprehensive and integrated platform, scalability, and security. However, for small teams or startups with limited budget, Set 3 (Free alternatives) is a cost-effective option. Set 1 is suitable for large enterprises requiring advanced features, while Set 2 is a budget-friendly option for teams needing robust tools.
Using individual services, like Set 1 or Set 2, can be worth the effort if your team has specific needs that aren't met by an integrated platform like Azure DevOps. For example, if your team requires advanced project management, Jira might be a better choice. However, integrating individual services can be time-consuming and costly, and may not provide the same level of seamless integration as an all-in-one platform like Azure DevOps. Ultimately, it's essential to weigh the pros and cons of each option and consider your team's specific needs before making a decision.
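Since annual totals like these are easy to get wrong, here is a quick shell sanity check of the figures from the listed monthly prices (using `bc` for the decimals):

```bash
# Set 1 monthly: Jira 7 + GitHub 21 + Jenkins 0 + TestRail 25 + Artifactory 22 + SonarQube 15
echo "Set 1 annual: $(echo '(7+21+0+25+22+15)*12' | bc)"        # 1080

# Set 2 monthly: Asana 13.49 + Bitbucket 6 + Travis CI 6 + PractiTest 29 + GCR 6 + Veracode 10
echo "Set 2 annual: $(echo '(13.49+6+6+29+6+10)*12' | bc)"      # 845.88
```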
shreyash333
1,897,663
Ways to declare variables in JavaScript!
Ways to declare variables in JavaScript! In JavaScript, variables can be declared with...
0
2024-06-23T08:43:54
https://dev.to/samandarhodiev/ways-to-declare-variables-in-javascript-5c0h
javascript, variables
Ways to declare variables in JavaScript!

In JavaScript, variables can be declared with 3 different keywords. These are: `var`, `let`, and `const`.

`var`

```
var name_ = 'JavaScript';
console.log(name_); // result - JavaScript
```

`let`

```
let name_ = 'JavaScript';
console.log(name_); // result - JavaScript
```

`const`

```
const name_ = 'JavaScript';
console.log(name_); // result - JavaScript
```

As the example shows, declaring the variable with any of the three keywords gives the same result, but there are some differences between them in how they are declared and how they work with block scope {}. Let's go through these differences one by one.

There is one general rule for variables declared with **_all three_** keywords: the right way is to use ("call") a variable only after it has been declared; if we use a variable before declaring it, it leads to an error!

## Differences between `var`, `let`, `const`

## **`var`**

> 1

```
var CssFramework = 'Bootstrap';
console.log(CssFramework); // result - Bootstrap

var CssFramework = 'TailwindCss';
console.log(CssFramework); // result - TailwindCss
```

As you can see, a variable can be re-declared with the `var` keyword.

> 2

```
var CssFramework = 'Bootstrap';
console.log(CssFramework); // result - Bootstrap

CssFramework = 'TailwindCss';
console.log(CssFramework); // result - TailwindCss
```

A variable declared with the `var` keyword can be assigned a new value.

> 3

```
{
  var CssFramework = 'Bootstrap';
}
console.log(CssFramework); // result - Bootstrap
```

Variables declared with `var` do not have block scope {}, i.e. the declared variable still works even if we call it outside the block scope {}.

## **let**

> 1

```
let CssFramework = 'Bootstrap';
console.log(CssFramework); // result - Bootstrap

let CssFramework = 'TailwindCss';
console.log(CssFramework); // result - SyntaxError: Identifier 'CssFramework' has already been declared
```

As you can see, re-declaring a variable that was declared with the `let` keyword leads to an error, i.e. it cannot be re-declared.

> 2

```
let CssFramework = 'Bootstrap';
console.log(CssFramework); // result - Bootstrap

CssFramework = 'TailwindCss';
console.log(CssFramework); // result - TailwindCss
```

A variable declared with the `let` keyword can be assigned a new value.

> 3

```
{
  let CssFramework = 'Bootstrap';
}
console.log(CssFramework); // result - ReferenceError: CssFramework is not defined
```

Variables declared with `let` have block scope {}, i.e. the declared variable does not work if we call it outside the block scope {}.

## **const**

> 1

```
const CssFramework = 'Bootstrap';
console.log(CssFramework); // result - Bootstrap

const CssFramework = 'TailwindCss';
console.log(CssFramework); // result - SyntaxError: Identifier 'CssFramework' has already been declared
```

As you can see, re-declaring a variable that was declared with the `const` keyword leads to an error, i.e. it cannot be re-declared.

> 2

```
const CssFramework = 'Bootstrap';
console.log(CssFramework); // result - Bootstrap

CssFramework = 'TailwindCss';
console.log(CssFramework); // result - TypeError: Assignment to constant variable.
```

A variable declared with the `const` keyword cannot be assigned a new value; doing so leads to an error.
> 3

```
{
  const CssFramework = 'Bootstrap';
}
console.log(CssFramework); // result - ReferenceError: CssFramework is not defined
```

Variables declared with `const` have block scope {}, i.e. the declared variable does not work if we call it outside the block scope {}.
samandarhodiev
1,897,662
contact me
t.me https://t.me/Ikhodieff
0
2024-06-23T08:43:54
https://dev.to/samandarhodiev/ways-to-declare-variables-in-javascript-3c84
{% embed https://t.me/samandarhodiev %} ``` https://t.me/Ikhodieff ```
samandarhodiev
1,897,800
Superset Nginx timeout guide
Hey everyone, In this blog we will be learning more about Superset Nginx timeout. Discover...
0
2024-07-05T15:39:36
https://blog.elest.io/superset-nginx-timeout-guide/
superset, softwares, elestio
---
title: Superset Nginx timeout guide
published: true
date: 2024-06-23 08:43:20 UTC
tags: Superset,Softwares,Elestio
canonical_url: https://blog.elest.io/superset-nginx-timeout-guide/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-3.png
---

Hey everyone, In this blog we will be learning more about [Superset](https://elest.io/open-source/superset?ref=blog.elest.io) Nginx timeout. Discover troubleshooting tips to resolve Superset Nginx timeout issues effectively.

## Understanding Superset and Nginx Timeout Issues

When running Apache Superset behind an Nginx reverse proxy, users might face `504 Gateway Timeout` errors. These errors usually occur when Nginx doesn't get a timely response from the Superset server, often due to the server handling long-running queries. To address this, Superset includes a client-side timeout limit that shows a warning message before the gateway timeout happens.

### Configuring Timeouts

Open your app in VS Code using the Tools tab, then locate the file `superset_config.py` and adjust the `SUPERSET_WEBSERVER_TIMEOUT` parameter to align with or exceed the timeout settings in Nginx.

```
SUPERSET_WEBSERVER_TIMEOUT = 60 # seconds
```

**SQL Lab Async Timeout:** Increase the `SQLLAB_ASYNC_TIME_LIMIT_SEC` parameter to accommodate longer-running queries in SQL Lab.

```
SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6 # 6 hours
```

**Celery Configuration:** Adjust Celery's task time limit to ensure queries have ample time to complete before being terminated.

```
CELERYD_TASK_TIME_LIMIT = 60 * 60 * 6 # 6 hours
```

Properly configuring these settings will help manage long-running queries effectively, preventing premature termination and ensuring smoother operation.

**Nginx Configuration:** Set the `proxy_read_timeout` directive in Nginx's configuration to a sufficiently high value to accommodate long-running queries.

```
proxy_read_timeout 120s;
```

By ensuring these configurations are properly set, you can prevent timeout issues and improve the performance of Apache Superset when running behind an Nginx reverse proxy.

### Configuring Nginx for Improved Superset Performance

Optimizing Nginx for Superset involves several crucial configurations to ensure efficient handling of client requests and seamless communication with the Superset application. Below are the steps and settings to consider for enhancing Superset's performance when using Nginx as a reverse proxy.

### Timeout Settings

Adjust the proxy\_read\_timeout and proxy\_connect\_timeout directives in your Nginx configuration to prevent timeouts during long-running Superset queries:

```
proxy_read_timeout 300;
proxy_connect_timeout 300;
```

### Client-Side Timeout

Align the client-side timeout in superset\_config.py with Nginx's timeout settings:

```
SUPERSET_WEBSERVER_TIMEOUT = 300
```

### Buffer Size

Increase buffer sizes to handle larger requests, especially beneficial for extensive dashboards or datasets:

```
proxy_buffers 16 32k;
proxy_buffer_size 64k;
```

### Web Server Workers

Optimize the number of Nginx worker processes to match the available CPU cores and adjust worker\_connections to handle multiple connections:

```
worker_processes auto;
worker_connections 1024;
```

### Static Asset Caching

Implement caching for static assets to reduce load times and enhance user experience:

```
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
  expires 365d;
  add_header Cache-Control "public, no-transform";
}
```

### SSL Configuration

Ensure SSL is correctly configured for secure connections.
Refer to the official Nginx documentation for setting up SSL certificates and configurations. ### Health Checks Set up health checks to monitor the Superset application's status: ``` location /health { proxy_pass http://superset_app_server/health; } ``` ## Addressing Superset Timeout Errors Behind Nginx When Apache Superset is utilized behind an Nginx reverse proxy, encountering 504 Gateway Timeout errors is not uncommon. This issue typically arises when Nginx fails to receive a prompt response from the Superset server, often due to processing long-running queries. To resolve this, consider the following steps: **Increase Nginx Timeout:** Adjust the proxy\_read\_timeout and proxy\_connect\_timeout directives in your Nginx configuration to extend the waiting period for responses. **Adjust Superset Configuration:** Modify the `SUPERSET_WEBSERVER_TIMEOUT` in superset\_config.py to match the extended timeout settings in Nginx. **Client-Side Timeout:** To prevent gateway timeout messages, adjust the `SQLLAB_ASYNC_TIME_LIMIT_SEC` in Superset to allow for longer query execution times. **Health Check Endpoint:** Employ the /health endpoint to ensure that your load balancer accurately identifies the Superset instance's status. **Proxy Headers:** If utilizing X-Forwarded-For/X-Forwarded-Proto headers, enable `ENABLE_PROXY_FIX = True` in superset\_config.py. **WSGI Server Configuration:** If Gunicorn is not used, disable flask-compress by setting `COMPRESS_REGISTER = False`. **Database Considerations:** Opt for performance and reliability by configuring the metadata database, favoring PostgreSQL or MySQL over SQLite for production environments. **Monitoring and Logging:** Implement comprehensive logging and monitoring mechanisms to pinpoint slow queries and performance bottlenecks effectively. **Caching:** Employ caching strategies to alleviate the database load and enhance response times. **Asynchronous Query Execution:** Utilize Celery workers for asynchronous query execution, mitigating timeouts during prolonged operations. ## Advanced Nginx Configuration for Large Superset Dashboards When setting up Nginx for extensive Superset dashboards, prioritizing timeouts and resource allocation is essential for a seamless user experience. Here are some advanced configurations to consider: ### Increasing Timeouts To prevent gateway timeouts (504 errors), adjust the proxy\_read\_timeout and proxy\_connect\_timeout in your Nginx configuration: ``` proxy_read_timeout 300; proxy_connect_timeout 300; ``` ### Optimizing Client Body Size Tailor the client\_max\_body\_size to accommodate substantial dashboard payloads: ``` client_max_body_size 50M; ``` ### Enabling WebSocket Support For real-time updates, verify that Nginx supports websockets: ``` map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { ... location /ws { proxy_pass http://superset_app; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } } ``` ### Configuring Caching Leverage Nginx's caching capabilities to enhance load times: ``` proxy_cache_path /path/to/cache levels=1:2 keys_zone=superset_cache:10m max_size=1g inactive=60m use_temp_path=off; server { ... location /static/ { proxy_cache superset_cache; ... } } ``` ### Implementing Load Balancing When managing multiple Superset workers, implement Nginx's load balancing: ``` upstream superset_app { server superset1.example.com; server superset2.example.com; ... 
} ``` ## **Thanks for reading ❤️** Thank you so much for reading and do check out the Elestio resources and Official [Superset documentation](https://superset.apache.org/docs/intro/?ref=blog.elest.io) to learn more about Superset. You can click the button below to create your service on [Elestio](https://elest.io/open-source/superset?ref=blog.elest.io). See you in the next one👋 [![Superset Nginx timeout guide](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/superset?ref=blog.elest.io)
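After editing any of the Nginx settings above, validate and reload rather than restarting, and time a known slow request to confirm the new limits hold — a short sketch with a placeholder URL:

```
# Validate the configuration, then reload workers without dropping connections
nginx -t && nginx -s reload    # or: systemctl reload nginx

# Measure how long a slow dashboard request takes end to end
curl -s -o /dev/null -w 'total: %{time_total}s  status: %{http_code}\n' \
  "https://YOUR_SUPERSET_URL/superset/dashboard/1/"
```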
kaiwalyakoparkar
1,897,661
helloo
hellloooo
0
2024-06-23T08:39:12
https://dev.to/keirinkami/helloo-1jk7
hellloooo
keirinkami
1,897,798
Repair N8N SQLite database
Hey everyone, In this blog we will see how you can repair the N8N SQLite database. During this...
0
2024-07-05T15:38:49
https://blog.elest.io/repair-n8n-sqlite-database/
n8n, softwares, elestio
---
title: Repair N8N SQLite database
published: true
date: 2024-06-23 08:37:42 UTC
tags: N8N,Softwares,Elestio
canonical_url: https://blog.elest.io/repair-n8n-sqlite-database/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-2.png
---

Hey everyone, In this blog we will see how you can repair the [N8N](https://elest.io/open-source/n8n?ref=blog.elest.io) SQLite database.

During this tutorial, we are going to use a self-hosted version of N8N deployed on Elestio. So before we start, ensure you have deployed N8N; we will be self-hosting it on [Elestio](https://elest.io/open-source/n8n?ref=blog.elest.io).

## Go to your N8N data folder

Open a terminal and go to your N8N data folder; on Elestio the database file is located at /opt/app/n8n/database.sqlite

## Repair using the ".recover" command in CLI

**Step 1:** Open the Command Line Interface and ensure you can access the SQLite CLI, the program named "sqlite3".

**Step 2:** Generate SQL text from the corrupt database using the following command:

```
sqlite3 database.sqlite .recover >recover.sql
```

This command will create a file named "recover.sql" containing the SQL text necessary to reconstruct the original database.

**Step 3:** Rebuild a clean database from that SQL text, then rename the corrupt database and use the cleaned one:

```
sqlite3 recover.sqlite <recover.sql
mv database.sqlite database-old.sqlite
mv recover.sqlite database.sqlite
```

By following these steps, you can effectively recover data from a corrupt SQLite database using the CLI.

## **Thanks for reading ❤️**

Thank you so much for reading and do check out the Elestio resources and Official [N8N documentation](https://docs.n8n.io/?ref=blog.elest.io) to learn more about N8N. You can click the button below to create your service on [Elestio](https://elest.io/open-source/n8n?ref=blog.elest.io) and start building a robust database for SQLite and recover the databases if corrupt. See you in the next one👋

[![Repair N8N SQLite database](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/n8n?ref=blog.elest.io)
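Before and after the repair it is worth confirming what SQLite itself thinks of the file — a quick check (stop n8n first so nothing is writing to the database; how you stop it depends on your deployment):

```
# Ask SQLite to verify the database's internal structure;
# it prints "ok" for a healthy file, otherwise it lists the damaged pages/indexes
sqlite3 database.sqlite "PRAGMA integrity_check;"
```

Run the same check again once `recover.sqlite` has been swapped in to confirm the rebuilt database is clean.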
kaiwalyakoparkar
1,897,660
Simple YAML Linter/Validator Workflow for GitHub Actions
Here's a quick tip if you work with YAML on GitHub! The GitHub Actions Ubuntu runners come with...
0
2024-06-23T08:34:31
https://tips.desilva.se/posts/simple-yaml-linter-validator-workflow-for-github-actions
github, githubactions, tutorial, yaml
Here's a quick tip if you work with YAML on GitHub! The GitHub Actions Ubuntu runners come with [yamllint](https://github.com/adrienverge/yamllint) installed, meaning it's super simple to create linting/validating workflows to ensure your YAML is valid!

```yaml
name: Validate YAML

on:
  push:
  pull_request:

jobs:
  validate-yaml:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Validate YAML file
        run: yamllint my-file.yml
```
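If you'd rather lint every YAML file in the repository instead of naming one, `yamllint` also accepts directories and inline configuration — a small variation on the `run` step above:

```bash
# Lint all YAML files under the repo root, relaxing the line-length rule
yamllint -d "{extends: default, rules: {line-length: {max: 120}}}" .
```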
codewithcaen
1,897,659
React App Inbox with 0 Notification Costs.
Building a robust notification system in React can be a complex task. Juggling multi-channel...
0
2024-06-23T08:32:01
https://dev.to/suprsend/react-app-inbox-with-0-notification-costs-jik
react, javascript, opensource, webdev
Building a robust notification system in React can be a complex task. Juggling multi-channel notifications, real-time updates, and user interaction requires a powerful solution. Enter the [SuprSend Inbox for React](https://docs.suprsend.com/docs/inbox-react), a feature-packed library designed to simplify and elevate your app's notification experience. Check out this video first to understand how will this notification infrastructure platform work? {% youtube oNhJjh5ZHkU %} --- > Want to see inapp inbox in action first? [Head here](https://inbox-playground.suprsend.com/) --- ## 2. Under the Hood ### Frontend: - Built with TypeScript for type safety and enhanced developer experience. - Provides various UI components like notifications list, bell icon, and notification details view. - Customization options allow you to tailor the look and feel to match your app's branding. - Integrates seamlessly with popular state management libraries like Redux and Context. ### Backend: - REST API for sending notifications, managing subscribers, and tracking user engagement. - Supports multiple notification channels like email, push notifications, and SMS via external integrations. - Scalable architecture ensures reliable performance even with high notification volumes. ### Security: - HMAC authentication secures communication between your app and SuprSend's API. - Data encryption protects sensitive user information. ## 3. Integration Steps ### Setup: 1. Install the `@suprsend/react-inbox` library using npm. 2. Configure the SuprSend SDK with your workspace key and subscriber ID. 3. Define optional settings like notification display format, custom icons, and initial visibility. ### Integration: 1. Render the `SuprSendInbox` component in your React app layout. 2. Pass the configuration details as props to customize the Inbox's behavior and appearance. ``` import SuprSendInbox from '@suprsend/react-inbox' // add to your react component <SuprSendInbox workspaceKey= "<workspace_key>" subscriberId= "<subscriber_id>" distinctId= "<distinct_id>" /> ``` 3. Use SuprSend's API from your backend server to send notifications with rich content and metadata. 4. Leverage the SDK's event system to handle user interactions like clicking on notifications or marking them as read. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j44ogzzxtyxnqdxbhzck.png) ## 4. Advanced Features - **Real-time Updates:** WebSockets enable instant notification delivery, keeping the Inbox updated dynamically. - **Custom Components:** Build custom UI elements for specific notification types or personalize the Inbox layout with render props. - **Offline Support:** Store notifications locally for offline access and ensure a seamless user experience. - **Deep Linking:** Integrate notifications with internal app pages for smooth navigation upon clicking. - **Data Analytics:** Track user engagement metrics like notification open rates and click-through rates for optimization. ## 5. Technical Considerations - Choose appropriate notification channels based on your app's purpose and target audience. - Implement proper error handling mechanisms for API requests and network failures. - Design a user-friendly notification experience with clear visuals and intuitive interactions. ## 6. Resources - [SuprSend Inbox for React Docs](https://docs.suprsend.com/docs/inbox-react) - [SuprSend API Documentation](https://docs.suprsend.com/docs) - [React Community Forums](https://legacy.reactjs.org/)
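On the HMAC authentication point above: a common server-side pattern — and, as far as I can tell from SuprSend's docs, the one behind `subscriberId` — is to sign the user's distinct ID with your workspace secret and base64-encode the digest. Treat this as a sketch and verify the exact scheme in the SuprSend documentation:

```
# Derive subscriberId as base64(HMAC-SHA256(distinct_id, workspace_secret)).
# Run this on the server only -- the workspace secret must never reach the browser.
DISTINCT_ID="user-123"
WORKSPACE_SECRET="your-workspace-secret"

printf '%s' "$DISTINCT_ID" \
  | openssl dgst -sha256 -hmac "$WORKSPACE_SECRET" -binary \
  | base64
```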
nikl
1,897,657
Bash Scripting Fundamentals
1. Introduction to Bash 1.1 What is Bash? Bash (Bourne Again Shell) is a...
0
2024-06-23T08:31:09
https://dev.to/zeshancodes/bash-scripting-fundamentals-5a0e
bash, cmd, coding, terminal
## 1. Introduction to Bash

### 1.1 What is Bash?

Bash (Bourne Again Shell) is a command processor that typically runs in a text window where the user types commands that cause actions.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5a2wuj2gfez67oe3q6n.jpeg)

Bash can also read and execute commands from a file, called a script.

### 1.2 Creating and Running a Script

- **Creating a script**: Use a text editor to create a file with the `.sh` extension.

```bash
nano script.sh
```

- **Making the script executable**:

```bash
chmod +x script.sh
```

- **Running the script**:

```bash
./script.sh
```

## 2. Basic Commands

### 2.1 Echo

- **Description**: Prints text to the terminal.

```bash
echo "Hello, World!"
```

### 2.2 Variables

- **Description**: Storing and using data.

```bash
NAME="John"
echo $NAME
```

### 2.3 Comments

- **Description**: Adding comments to a script for readability.

```bash
# This is a comment
echo "This is a script"
```

## 3. Input and Output

### 3.1 Reading User Input

- **Description**: Taking input from the user.

```bash
read -p "Enter your name: " NAME
echo "Hello, $NAME"
```

### 3.2 Redirecting Output

- **Description**: Redirecting command output to a file.

```bash
echo "This is a test" > file.txt
```

### 3.3 Appending Output

- **Description**: Appending command output to a file.

```bash
echo "This is a test" >> file.txt
```

### 3.4 Redirecting Input

- **Description**: Using a file as input to a command.

```bash
wc -l < file.txt
```

### 3.5 Piping

- **Description**: Using the output of one command as input to another.

```bash
cat file.txt | grep "test"
```

## 4. Control Structures

### 4.1 Conditional Statements

- **Description**: Making decisions in scripts.

```bash
if [ "$NAME" == "John" ]; then
  echo "Hello, John!"
else
  echo "You are not John."
fi
```

### 4.2 Loops

- **Description**: Repeating a set of commands.

#### 4.2.1 For Loop

```bash
for i in {1..5}; do
  echo "Welcome $i"
done
```

#### 4.2.2 While Loop

```bash
COUNTER=0
while [ $COUNTER -lt 5 ]; do
  echo "Counter: $COUNTER"
  COUNTER=$((COUNTER + 1))
done
```

#### 4.2.3 Until Loop

```bash
COUNTER=0
until [ $COUNTER -ge 5 ]; do
  echo "Counter: $COUNTER"
  COUNTER=$((COUNTER + 1))
done
```

## 5. Functions

### 5.1 Defining and Calling Functions

- **Description**: Grouping commands into a reusable block.

```bash
my_function() {
  echo "Hello from my_function"
}

my_function
```

## 6. Data Structures

### 6.1 Arrays

- **Description**: Using arrays to store multiple values.

```bash
NAMES=("John" "Jane" "Doe")
echo "First Name: ${NAMES[0]}"
```

### 6.2 Tuples (Simulated with Arrays)

- **Description**: Using arrays to simulate tuples.

```bash
TUPLE=("Alice" 25 "Developer")
echo "Name: ${TUPLE[0]}, Age: ${TUPLE[1]}, Job: ${TUPLE[2]}"
```

### 6.3 Sets (Simulated with Arrays and Associative Arrays)

- **Description**: Using arrays and associative arrays to simulate sets.

```bash
declare -A SET
SET["Apple"]=1
SET["Banana"]=1

if [[ ${SET["Apple"]} ]]; then
  echo "Apple is in the set."
fi
```

## 7. Operators

### 7.1 Mathematical Operators

- **Description**: Performing arithmetic operations.

```bash
A=5
B=3
SUM=$((A + B))
echo "Sum: $SUM"
```

### 7.2 Logical Operators

- **Description**: Using logical operators in conditions.

```bash
if [ $A -eq 5 ] && [ $B -eq 3 ]; then
  echo "Both conditions are true."
fi
```

### 7.3 Comparison Operators

- **Description**: Using comparison operators.

```bash
if [ "$A" -eq 5 ]; then
  echo "A is 5"
fi

if [ "$A" -ne 5 ]; then
  echo "A is not 5"
fi
```

## 8.
Type Casting - **Description**: Type casting in Bash is generally about ensuring variable content is treated correctly. ```bash VAR="123" NUM=$((VAR + 0)) # Cast string to number for arithmetic echo $NUM ``` ## 9. API Calls - **Description**: Making HTTP requests using tools like `curl`. ```bash RESPONSE=$(curl -s -X GET "https://api.example.com/data") echo "Response: $RESPONSE" ``` ## 10. Automating Tasks ### 10.1 Cron Jobs - **Description**: Scheduling scripts to run automatically. ```bash # Edit the crontab file crontab -e # Add a job to run every day at midnight 0 0 * * * /path/to/script.sh ``` ### 10.2 Using `at` - **Description**: Scheduling a one-time task. ```bash echo "/path/to/script.sh" | at now + 5 minutes ``` ### 10.3 Background Jobs - **Description**: Running scripts in the background. ```bash ./script.sh & ``` ## 11. Advanced Topics ### 11.1 Using `sed` and `awk` - **Description**: Text processing with `sed` and `awk`. #### 11.1.1 `sed` ```bash sed 's/old/new/g' file.txt ``` #### 11.1.2 `awk` ```bash awk '{print $1}' file.txt ``` ### 11.2 Traps - **Description**: Capturing signals. ```bash trap "echo 'Script interrupted'; exit" SIGINT while true; do echo "Running..." sleep 1 done ``` ## 12. Debugging ### 12.1 Enabling Debugging - **Description**: Running a script in debug mode. ```bash bash -x script.sh ``` ### 12.2 Using `set` - **Description**: Setting options for debugging. ```bash set -x # Enable debugging set +x # Disable debugging ``` ### 12.3 Error Handling - **Description**: Handling errors in scripts. ```bash command || { echo "Command failed"; exit 1; } ``` ### 12.4 Using `trap` for Cleanup - **Description**: Cleaning up resources on script exit. ```bash cleanup() { echo "Cleaning up..." # perform cleanup here } trap cleanup EXIT ``` ### 12.5 Logging - **Description**: Creating log files for your scripts. ```bash LOG_FILE="/var/log/script.log" echo "Script started" >> $LOG_FILE ``` ## 13. More on Arrays ### 13.1 Array Operations - **Description**: Array length and iteration. ```bash ARR=("one" "two" "three") echo "Array length: ${#ARR[@]}" for ITEM in "${ARR[@]}"; do echo $ITEM done ``` ### 13.2 Associative Arrays - **Description**: Using associative arrays. ```bash declare -A ASSOC_ARRAY ASSOC_ARRAY["name"]="John" ASSOC_ARRAY["age"]="30" echo "Name: ${ASSOC_ARRAY["name"]}" echo "Age: ${ASSOC_ARRAY["age"]}" ``` ### 13.3 Multidimensional Arrays - **Description**: Simulating multidimensional arrays. ```bash ARRAY_2D[0]="1 2 3" ARRAY_2D[1]="4 5 6" ARRAY_2D[2]=" 7 8 9" for ROW in "${ARRAY_2D[@]}"; do for ITEM in $ROW; do echo -n "$ITEM " done echo done ``` ## 14. String Operations ### 14.1 Substring Extraction - **Description**: Extracting a substring from a string. ```bash STRING="Hello World" echo ${STRING:6:5} # Outputs "World" ``` ### 14.2 String Length - **Description**: Getting the length of a string. ```bash echo ${#STRING} # Outputs "11" ``` ### 14.3 String Replacement - **Description**: Replacing part of a string. ```bash echo ${STRING/World/Bash} # Outputs "Hello Bash" ``` ## 15. File Operations ### 15.1 File Tests - **Description**: Testing properties of files. ```bash if [ -f "file.txt" ]; then echo "file.txt is a regular file." fi if [ -d "directory" ]; then echo "directory is a directory." fi ``` ### 15.2 Reading Files - **Description**: Reading a file line by line. ```bash while IFS= read -r line; do echo "$line" done < file.txt ``` ### 15.3 Writing to Files - **Description**: Writing to a file. ```bash echo "Hello, World!" 
> file.txt ``` ### 15.4 Appending to Files - **Description**: Appending to a file. ```bash echo "This is a new line" >> file.txt ``` ## 16. Network Operations ### 16.1 Downloading Files - **Description**: Using `wget` or `curl` to download files. ```bash wget http://example.com/file.zip curl -O http://example.com/file.zip ``` ### 16.2 Uploading Files - **Description**: Using `curl` to upload files. ```bash curl -F "file=@/path/to/file" http://example.com/upload ``` ### 16.3 Checking Internet Connection - **Description**: Pinging a server to check for internet connectivity. ```bash ping -c 4 google.com ``` ## 17. Handling Large Data ### 17.1 Sorting and Uniqueness - **Description**: Sorting data and removing duplicates. ```bash sort file.txt | uniq > sorted.txt ``` ### 17.2 Finding Patterns - **Description**: Using `grep` to find patterns. ```bash grep "pattern" file.txt ``` ### 17.3 Splitting Files - **Description**: Splitting large files into smaller parts. ```bash split -l 1000 largefile.txt smallfile_ ``` ## 18. Process Management ### 18.1 Checking Running Processes - **Description**: Using `ps` to list running processes. ```bash ps aux ``` ### 18.2 Killing Processes - **Description**: Using `kill` to terminate processes. ```bash kill -9 PID ``` ### 18.3 Monitoring System Resources - **Description**: Using `top` or `htop` to monitor system resources. ```bash top htop # Requires installation ``` ## 19. Packaging and Distribution ### 19.1 Creating Tar Archives - **Description**: Archiving files using `tar`. ```bash tar -cvf archive.tar file1 file2 directory/ ``` ### 19.2 Extracting Tar Archives - **Description**: Extracting files from a tar archive. ```bash tar -xvf archive.tar ``` ### 19.3 Compressing Files - **Description**: Compressing files using `gzip`. ```bash gzip file.txt ``` ### 19.4 Decompressing Files - **Description**: Decompressing files using `gunzip`. ```bash gunzip file.txt.gz ``` This extensive guide covers fundamental and advanced topics in Bash scripting
zeshancodes
1,897,656
Cypress Debugging: How to Get Started
Writing code quickly is a valuable skill, but the true mark of a proficient software developer is the...
0
2024-06-23T08:29:44
https://dev.to/kailashpathak7/cypress-debugging-how-to-get-started-4b5p
javascript, cypress, automation, testing
Writing code quickly is a valuable skill, but the true mark of a proficient software developer is the ability to effectively debug and resolve errors and bugs. Debugging is a critical aspect of the development process, ensuring that software functions as intended and meets user needs.

**Methods of debugging in Cypress**

Debugging in Cypress can be performed using various methods, including command logs, `.pause()`, `.debug()`, `cy.log()`, `console.log()`, and the native JavaScript debugger.

**Below are the various methods of debugging test cases in Cypress** ([click the link to learn about Cypress debugging in detail](https://testgrid.io/blog/debug-cypress/)):

- Cypress debugging using command logs
- Cypress debugging using the `.pause()` method
- Cypress debugging using the `.debug()` method
- Cypress debugging using `cy.log()`
- Cypress debugging using `console.log()`
- Cypress debugging using the native debugger
- Cypress debugging using screenshots and video
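As a minimal sketch of a few of these commands together in one spec (the URL and selector below are placeholders, not from the article):

```javascript
// cypress/e2e/debug-demo.cy.js — a minimal sketch of .pause(), .debug(), and cy.log()
describe('debugging demo', () => {
  it('pauses, logs, and inspects a subject', () => {
    cy.visit('https://example.cypress.io'); // placeholder URL
    cy.log('page loaded');                  // writes a message to the Cypress command log
    cy.pause();                             // halts the test until you resume it in the runner
    cy.contains('type')                     // placeholder selector
      .debug()                              // logs the current subject and pauses at a debugger statement
      .click();
  });
});
```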
kailashpathak7
1,897,655
Dynamically add nodes to a force-directed graph
https://youtu.be/qlLQj12daDo demo
0
2024-06-23T08:29:42
https://dev.to/fridaymeng/dynamically-add-nodes-to-a-force-directed-graph-4fj8
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk6zx6q3mrx71odbgmf6.gif) https://youtu.be/qlLQj12daDo [demo](https://addgraph.com/leftRight)
fridaymeng
1,897,654
What is a Slowloris attack?
A Slowloris attack is a type of denial-of-service (DoS) attack that targets web servers by exhausting...
0
2024-06-23T08:29:03
https://dev.to/sandeepseeram/what-is-a-slowloris-attack-50d
webdev, javascript, beginners, tutorial
A Slowloris attack is a type of denial-of-service (DoS) attack that targets web servers by exhausting their connection capacity. This attack, often referred to as a slow HTTP DoS attack, takes advantage of how web servers manage connections, making them unable to handle legitimate requests.

**Origins of the Slowloris Attack**

The name "Slowloris" comes from a tool created by Robert "RSnake" Hansen in 2009, named after the slow-moving primate, the slow loris. This tool demonstrated how an attacker could use slow HTTP requests to overwhelm a server. The technique has been used in significant real-world incidents, such as attacks on Iranian government websites following the 2009 presidential election.

**How Does a Slowloris Attack Work?**

Web servers can be either thread-based (e.g., Apache, Microsoft IIS) or event-based (e.g., Nginx, lighttpd). Thread-based servers handle fewer connections than event-based servers. For instance, Apache can handle 150 connections by default, whereas Nginx can manage 512.

A server keeps a connection open until it receives all HTTP headers and the complete body of a request, or until it times out. Apache, for example, has a default timeout of 300 seconds. An attacker exploiting this can send numerous incomplete HTTP requests, keeping the connections open and preventing the server from accepting new ones.

**Variations: Slow HTTP POST Attack**

A variation of the Slowloris attack is the slow HTTP POST attack. Instead of GET requests, it uses POST requests, sending data very slowly to keep the connection alive and evade timeout protections. This method is harder to detect and mitigate.

**Detecting Slowloris Attacks**

Detecting a Slowloris attack can be challenging as it uses legitimate-looking requests. Monitoring for patterns such as numerous long-duration connections, partial HTTP requests, and high server resource usage is essential (a quick shell-level check is sketched below). Intrusion detection systems (IDS) often miss these attacks, so continuous monitoring is crucial.

**Mitigating Slowloris Attacks on Apache Servers**

Apache servers are common targets for slow HTTP DoS attacks. Here are three effective mitigation techniques:

**Using the mod_reqtimeout module:** This module sets time limits for receiving HTTP request headers and bodies. If the client doesn't send data within the set time, the server responds with a 408 REQUEST TIMEOUT error.

```apache
<IfModule mod_reqtimeout.c>
  RequestReadTimeout header=20-40,MinRate=500 body=20-40,MinRate=500
</IfModule>
```

**Using the mod_qos module:** This module allows assigning different priorities to HTTP requests. It limits the number of connections per IP and enforces minimum data transfer rates.

```apache
<IfModule mod_qos.c>
   QS_ClientEntries 100000
   QS_SrvMaxConnPerIP 50
   MaxClients 256
   QS_SrvMaxConnClose 180
   QS_SrvMinDataRate 150 1200
</IfModule>
```

**Using the mod_security module:** This web application firewall (WAF) can block IPs generating multiple 408 responses, indicating potential slow HTTP DoS attacks.

```apache
SecRule RESPONSE_STATUS "@streq 408" "phase:5,t:none,nolog,pass,setvar:ip.slow_dos_counter=+1,expirevar:ip.slow_dos_counter=60,id:'1234123456'"
SecRule IP:SLOW_DOS_COUNTER "@gt 5" "phase:1,t:none,log,drop,msg:'Client Connection Dropped due to high number of slow DoS alerts',id:'1234123457'"
```

Combining these methods with additional protections like load balancers, reverse proxies, and rate limiting can significantly enhance server resilience against Slowloris attacks.
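As a rough illustration of the monitoring mentioned in the detection section, here is a hedged sketch that counts established connections per remote IP. Column positions and sensible thresholds vary across systems, so treat it as a starting point rather than a definitive detector:

```bash
# List remote IPs by number of open connections; a single address holding
# an unusually large share can hint at a slow HTTP DoS in progress.
netstat -ntu | awk 'NR > 2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head
```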
By understanding and implementing these strategies, you can better protect your web servers from the disruptive impact of Slowloris and similar slow HTTP attacks.
sandeepseeram
1,897,653
Understanding Authentication & Authorization with the help of Keycloak
Authentication and authorization are two fundamental concepts in the realm of security, especially in...
0
2024-06-23T08:27:15
https://dev.to/yashkashyap/understanding-authentication-authorization-with-help-of-keycloak-cdd
Authentication and authorization are two fundamental concepts in the realm of security, especially in computer applications.

**Authentication** — Authentication is the process of verifying the identity of a user, device, or entity in a computer system.

- Purpose: to confirm the identity of the user or entity.

**Authorization** — Authorization is the process of determining what an authenticated user is allowed to do. It specifies the permissions for resources in the system.

- Purpose: to control access to resources and actions based on user privileges.

There are many ways to implement this in an application, such as OAuth 2.0 and OIDC, but most hosted identity platforms charge for large user counts or advanced services. To get these services without any cost, **Keycloak** comes into the picture.

**Keycloak** — Keycloak is an **open-source identity and access management** solution developed by Red Hat. It provides authentication and authorization capabilities for modern applications and services.

Some key features of Keycloak:

- **Authorization Services:** Fine-grained authorization policies and support for OAuth 2.0, OpenID Connect, and SAML.
- **Identity and Access Management (IAM):** Comprehensive IAM capabilities including role-based access control (RBAC) and multi-factor authentication (MFA).

**Installation and setup for Keycloak:** The official setup guides are at https://www.keycloak.org/guides

**Integration:**

- OAuth 2.0 and OpenID Connect: Keycloak supports the OAuth 2.0 and OpenID Connect protocols for securing applications.
- SAML: Keycloak can act as a SAML Identity Provider (IdP) and Service Provider (SP).
- Identity Providers: Integrate with external identity providers like Google, Facebook, and others for authentication.

I hope you learnt something new today.

> End Note: If you check out my profile, this is my first-ever post. So please let me know how I did, and how I can improve in future. Thanks!
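If you want to try Keycloak locally, here is a minimal sketch (assuming Docker is installed; the `start-dev` mode and `KEYCLOAK_ADMIN*` variables match recent Keycloak releases, and the credentials are throwaway examples — see the guides linked above for your version):

```bash
# Run a local Keycloak instance in development mode on http://localhost:8080
docker run -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  quay.io/keycloak/keycloak:latest start-dev
```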
yashkashyap
1,897,796
How to clean up the N8N database
Hey everyone, In this blog we will see how you can clean up the N8N database. During this tutorial,...
0
2024-07-05T15:37:59
https://blog.elest.io/how-to-clean-up-n8n-database/
n8n, softwares, elestio
---
title: How to clean up the N8N database
published: true
date: 2024-06-23 08:26:21 UTC
tags: N8N,Softwares,Elestio
canonical_url: https://blog.elest.io/how-to-clean-up-n8n-database/
cover_image: https://blog.elest.io/content/images/2024/06/Frame-8-1.png
---

Hey everyone, in this blog we will see how you can clean up the [N8N](https://elest.io/open-source/n8n?ref=blog.elest.io) database. During this tutorial, we are going to use a self-hosted version of N8N deployed on [Elestio](https://elest.io/open-source/n8n?ref=blog.elest.io). So before we start, ensure you have deployed the N8N service and have a connected database that is used to store the logs and execution records.

## What is N8N?

N8N is an open-source workflow automation tool that allows you to automate tasks and workflows by connecting various applications, services, and APIs together. It provides a visual interface where users can create workflows using a node-based system, similar to flowcharts, without needing to write any code. You can integrate n8n with a wide range of applications and services, including popular ones like Google Drive, Slack, GitHub, and more. This flexibility enables users to automate various tasks, such as data synchronization, notifications, data processing, and more.

## Executions Environment Variables

Enabling the `EXECUTIONS_DATA_PRUNE` setting in N8N is a way to manage database storage by automatically deleting past execution data on a rolling basis. To enable this feature, set `EXECUTIONS_DATA_PRUNE` to `true` in your n8n configuration and restart your instance. This automated cleanup helps maintain a lean database, improving performance and reducing the need for manual maintenance. However, consider your data retention needs to ensure you retain the historical data necessary for auditing or debugging purposes.

Check out the following documentation to learn about more such variables and how to use them: [Environment Variables Overview | n8n Docs](https://docs.n8n.io/hosting/environment-variables/?ref=blog.elest.io#executions)

## Setting the Variable in the Configuration File

One way to clean the database in N8N is to set the `EXECUTIONS_DATA_PRUNE` variable mentioned above. Now that you know about the variables and have decided what you want to use, we will set them in this section.

Head over to your N8N service deployed on Elestio and click on **Update config** under the **Software** section in the **Overview** window.

![How to clean up the N8N database](https://blog.elest.io/content/images/2024/06/Screenshot-2024-06-05-at-4.28.54-PM.jpg)

Now you will be able to see all the variables configured under **Docker Compose**. Change or add the required variables and click on **Update & Restart** to apply them to the service. You can monitor and keep track of all the used variables here.

![How to clean up the N8N database](https://blog.elest.io/content/images/2024/06/Screenshot-2024-06-05-at-4.29.38-PM.jpg)
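For illustration, here is a hedged sketch of what the relevant environment entries could look like in the Docker Compose file (the `n8n` service name and the retention value are assumptions, not Elestio defaults; `EXECUTIONS_DATA_MAX_AGE` is the companion variable from the n8n docs, measured in hours):

```yaml
services:
  n8n:
    environment:
      - EXECUTIONS_DATA_PRUNE=true    # prune old execution data on a rolling basis
      - EXECUTIONS_DATA_MAX_AGE=336   # keep executions for up to 336 hours (14 days)
```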
For more information on the different ways of setting this variable, head over to the official N8N documentation: [Configuration methods | n8n Docs](https://docs.n8n.io/hosting/configuration/configuration-methods/?ref=blog.elest.io)

## Creating a Workflow for Pruning

Another method is to create a workflow that prunes unnecessary logs from the database. To do so, head over to the workflow section in N8N and create a workflow like the one below. Configure the database information in the **MySQL** component and set up the **Cron** component with the schedule information.

![How to clean up the N8N database](https://blog.elest.io/content/images/2024/06/Screenshot-2024-06-05-at-3.26.30-PM.jpg)

And done! You have successfully cleaned the database in N8N. Just note that it is always preferable to clean up the data periodically, and if the log data is needed frequently, you can make backups as required.

## **Thanks for reading ❤️**

Thank you so much for reading, and do check out the Elestio resources and the official [N8N documentation](https://docs.n8n.io/?ref=blog.elest.io) to learn more about N8N. You can click the button below to create your service on [Elestio](https://elest.io/open-source/n8n?ref=blog.elest.io), start cleaning your N8N database, and have a smoothly running service again. See you in the next one 👋

[![How to clean up the N8N database](https://pub-da36157c854648669813f3f76c526c2b.r2.dev/deploy-on-elestio-black.png)](https://elest.io/open-source/n8n?ref=blog.elest.io)
kaiwalyakoparkar